#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright: (c) 2018, Bojan Vitnik <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: xenserver_guest
short_description: Manages virtual machines running on Citrix XenServer host or pool
description: >
This module can be used to create new virtual machines from templates or other virtual machines,
modify various virtual machine components like network and disk, rename a virtual machine and
remove a virtual machine with associated components.
version_added: '2.8'
author:
- Bojan Vitnik (@bvitnik) <[email protected]>
notes:
- Minimal supported version of XenServer is 5.6.
- Module was tested with XenServer 6.5, 7.1 and 7.2.
- 'XenAPI Python library can be acquired from XenServer SDK (downloadable from Citrix website) or by running C(pip install XenAPI) (possibly very old
version, not compatible with Python 3.x). Latest version can also be acquired from GitHub:
https://raw.githubusercontent.com/xapi-project/xen-api/master/scripts/examples/python/XenAPI.py'
- 'If no scheme is specified in C(hostname), module defaults to C(http://) because C(https://) is problematic in most setups. Make sure you are
accessing XenServer host in trusted environment or use C(https://) scheme explicitly.'
- 'To use C(https://) scheme for C(hostname) you have to either import host certificate to your OS certificate store or use C(validate_certs: no)
which requires XenAPI library from XenServer 7.2 SDK or newer and Python 2.7.9 or newer.'
- 'Network configuration inside a guest OS, by using C(networks.type), C(networks.ip), C(networks.gateway) etc. parameters, is supported on
XenServer 7.0 or newer for Windows guests by using official XenServer Guest agent support for network configuration. The module will try to
detect if such support is available and utilize it, else it will use a custom method of configuration via xenstore. Since XenServer Guest
agent only supports None and Static types of network configuration, where None means a DHCP configured interface, C(networks.type) and C(networks.type6)
values C(none) and C(dhcp) have the same effect.
More info here: https://xenserver.org/blog/entry/set-windows-guest-vm-static-ip-address-in-xenserver.html'
- 'On platforms without official support for network configuration inside a guest OS, network parameters will be written to xenstore
C(vm-data/networks/<vif_device>) key. Parameters can be inspected by using C(xenstore ls) and C(xenstore read) tools on \*nix guests or through
the WMI interface on Windows guests. They can also be found in VM facts C(instance.xenstore_data) key as returned by the module. It is up to the user
to implement boot-time scripts or a custom agent that will read the parameters from xenstore and configure the network with the given parameters.
Take note that for xenstore data to become available inside a guest, a VM restart is needed, hence the module will require a VM restart if any
parameter is changed. This is a limitation of XenAPI and xenstore. Considering these limitations, network configuration through xenstore is most
useful for bootstrapping newly deployed VMs, much less for reconfiguring existing ones.
More info here: https://support.citrix.com/article/CTX226713'
requirements:
- python >= 2.6
- XenAPI
options:
state:
description:
- Specify the state VM should be in.
- If C(state) is set to C(present) and VM exists, ensure the VM configuration conforms to given parameters.
- If C(state) is set to C(present) and VM does not exist, then VM is deployed with given parameters.
- If C(state) is set to C(absent) and VM exists, then VM is removed with its associated components.
- If C(state) is set to C(poweredon) and VM does not exist, then VM is deployed with given parameters and powered on automatically.
type: str
default: present
choices: [ present, absent, poweredon ]
name:
description:
- Name of the VM to work with.
- VMs running on XenServer do not necessarily have unique names. The module will fail if multiple VMs with the same name are found.
- In case of multiple VMs with the same name, use C(uuid) to uniquely specify the VM to manage.
- This parameter is case sensitive.
type: str
required: yes
aliases: [ name_label ]
name_desc:
description:
- VM description.
type: str
uuid:
description:
- UUID of the VM to manage if known. This is XenServer's unique identifier.
- It is required if name is not unique.
- Please note that a supplied UUID will be ignored on VM creation, as XenServer creates the UUID internally.
type: str
template:
description:
- Name of a template, an existing VM (must be shut down) or a snapshot that should be used to create VM.
- Templates/VMs/snapshots on XenServer do not necessarily have unique names. The module will fail if multiple templates with the same name are found.
- In case of multiple templates/VMs/snapshots with the same name, use C(template_uuid) to uniquely specify the source template.
- If VM already exists, this setting will be ignored.
- This parameter is case sensitive.
type: str
aliases: [ template_src ]
template_uuid:
description:
- UUID of a template, an existing VM or a snapshot that should be used to create VM.
- It is required if template name is not unique.
type: str
is_template:
description:
- Convert VM to template.
type: bool
default: no
folder:
description:
- Destination folder for VM.
- This parameter is case sensitive.
- 'Example:'
- ' folder: /folder1/folder2'
type: str
hardware:
description:
- Manage VM's hardware parameters. VM needs to be shut down to reconfigure these parameters.
- 'Valid parameters are:'
- ' - C(num_cpus) (integer): Number of CPUs.'
- ' - C(num_cpu_cores_per_socket) (integer): Number of Cores Per Socket. C(num_cpus) has to be a multiple of C(num_cpu_cores_per_socket).'
- ' - C(memory_mb) (integer): Amount of memory in MB.'
type: dict
disks:
description:
- A list of disks to add to VM.
- All parameters are case sensitive.
- Removing or detaching existing disks of VM is not supported.
- 'Required parameters per entry:'
- ' - C(size_[tb,gb,mb,kb,b]) (integer): Disk storage size in specified unit. VM needs to be shut down to reconfigure this parameter.'
- 'Optional parameters per entry:'
- ' - C(name) (string): Disk name. You can also use C(name_label) as an alias.'
- ' - C(name_desc) (string): Disk description.'
- ' - C(sr) (string): Storage Repository to create disk on. If not specified, will use default SR. Cannot be used for moving disk to other SR.'
- ' - C(sr_uuid) (string): UUID of a SR to create disk on. Use if SR name is not unique.'
type: list
aliases: [ disk ]
cdrom:
description:
- A CD-ROM configuration for the VM.
- All parameters are case sensitive.
- 'Valid parameters are:'
- ' - C(type) (string): The type of CD-ROM, valid options are C(none) or C(iso). With C(none) the CD-ROM device will be present but empty.'
- ' - C(iso_name) (string): The file name of an ISO image from one of the XenServer ISO Libraries (implies C(type: iso)).
Required if C(type) is set to C(iso).'
type: dict
networks:
description:
- A list of networks (in the order of the NICs).
- All parameters are case sensitive.
- 'Required parameters per entry:'
- ' - C(name) (string): Name of a XenServer network to attach the network interface to. You can also use C(name_label) as an alias.'
- 'Optional parameters per entry (used for VM hardware):'
- ' - C(mac) (string): Customize MAC address of the interface.'
- 'Optional parameters per entry (used for OS customization):'
- ' - C(type) (string): Type of IPv4 assignment, valid options are C(none), C(dhcp) or C(static). Value C(none) means whatever is the default for the OS.
On some operating systems it could be a DHCP configured (e.g. Windows) or an unconfigured interface (e.g. Linux).'
- ' - C(ip) (string): Static IPv4 address (implies C(type: static)). Can include prefix in format <IPv4 address>/<prefix> instead of using C(netmask).'
- ' - C(netmask) (string): Static IPv4 netmask required for C(ip) if prefix is not specified.'
- ' - C(gateway) (string): Static IPv4 gateway.'
- ' - C(type6) (string): Type of IPv6 assignment, valid options are C(none), C(dhcp) or C(static). Value C(none) means whatever is the default for the OS.
On some operating systems it could be a DHCP configured (e.g. Windows) or an unconfigured interface (e.g. Linux).'
- ' - C(ip6) (string): Static IPv6 address (implies C(type6: static)) with prefix in format <IPv6 address>/<prefix>.'
- ' - C(gateway6) (string): Static IPv6 gateway.'
type: list
aliases: [ network ]
home_server:
description:
- Name of a XenServer host that will be a Home Server for the VM.
- This parameter is case sensitive.
type: str
custom_params:
description:
- Define a list of custom VM params to set on VM.
- Useful for advanced users familiar with managing VM params through the xe CLI.
- A custom value object takes two fields C(key) and C(value) (see example below).
type: list
wait_for_ip_address:
description:
- Wait until XenServer detects an IP address for the VM. If C(state) is set to C(absent), this parameter is ignored.
- This requires XenServer Tools to be preinstalled on the VM to work properly.
type: bool
default: no
state_change_timeout:
description:
- 'By default, the module will wait indefinitely for the VM to acquire an IP address if C(wait_for_ip_address: yes).'
- If this parameter is set to a positive value, the module will instead wait the specified number of seconds for the state change.
- In case of timeout, module will generate an error message.
type: int
default: 0
linked_clone:
description:
- Whether to create a Linked Clone from the template, existing VM or snapshot. If no, will create a full copy.
- This is equivalent to C(Use storage-level fast disk clone) option in XenCenter.
type: bool
default: no
force:
description:
- Ignore warnings and complete the actions.
- This parameter is useful for removing VM in running state or reconfiguring VM params that require VM to be shut down.
type: bool
default: no
extends_documentation_fragment: xenserver.documentation
'''
EXAMPLES = r'''
- name: Create a VM from a template
xenserver_guest:
hostname: "{{ xenserver_hostname }}"
username: "{{ xenserver_username }}"
password: "{{ xenserver_password }}"
validate_certs: no
folder: /testvms
name: testvm_2
state: poweredon
template: CentOS 7
disks:
- size_gb: 10
sr: my_sr
hardware:
num_cpus: 6
num_cpu_cores_per_socket: 3
memory_mb: 512
cdrom:
type: iso
iso_name: guest-tools.iso
networks:
- name: VM Network
mac: aa:bb:dd:aa:00:14
wait_for_ip_address: yes
delegate_to: localhost
register: deploy
- name: Create a VM template
xenserver_guest:
hostname: "{{ xenserver_hostname }}"
username: "{{ xenserver_username }}"
password: "{{ xenserver_password }}"
validate_certs: no
folder: /testvms
name: testvm_6
is_template: yes
disk:
- size_gb: 10
sr: my_sr
hardware:
memory_mb: 512
num_cpus: 1
delegate_to: localhost
register: deploy
- name: Rename a VM (requires the VM's UUID)
xenserver_guest:
hostname: "{{ xenserver_hostname }}"
username: "{{ xenserver_username }}"
password: "{{ xenserver_password }}"
uuid: 421e4592-c069-924d-ce20-7e7533fab926
name: new_name
state: present
delegate_to: localhost
- name: Remove a VM by UUID
xenserver_guest:
hostname: "{{ xenserver_hostname }}"
username: "{{ xenserver_username }}"
password: "{{ xenserver_password }}"
uuid: 421e4592-c069-924d-ce20-7e7533fab926
state: absent
delegate_to: localhost
- name: Modify custom params (boot order)
xenserver_guest:
hostname: "{{ xenserver_hostname }}"
username: "{{ xenserver_username }}"
password: "{{ xenserver_password }}"
name: testvm_8
state: present
custom_params:
- key: HVM_boot_params
value: { "order": "ndc" }
delegate_to: localhost
- name: Customize network parameters
xenserver_guest:
hostname: "{{ xenserver_hostname }}"
username: "{{ xenserver_username }}"
password: "{{ xenserver_password }}"
name: testvm_10
networks:
- name: VM Network
ip: 192.168.1.100/24
gateway: 192.168.1.1
- type: dhcp
delegate_to: localhost
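# A minimal additional example, assuming XenServer Tools are installed in the guest
# (required for IP address detection); it uses only options documented above.
- name: Power on a VM and wait up to 5 minutes for an IP address
  xenserver_guest:
    hostname: "{{ xenserver_hostname }}"
    username: "{{ xenserver_username }}"
    password: "{{ xenserver_password }}"
    name: testvm_2
    state: poweredon
    wait_for_ip_address: yes
    state_change_timeout: 300
  delegate_to: localhost
  register: deploy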
'''
RETURN = r'''
instance:
description: Metadata about the VM
returned: always
type: dict
sample: {
"cdrom": {
"type": "none"
},
"customization_agent": "native",
"disks": [
{
"name": "testvm_11-0",
"name_desc": "",
"os_device": "xvda",
"size": 42949672960,
"sr": "Local storage",
"sr_uuid": "0af1245e-bdb0-ba33-1446-57a962ec4075",
"vbd_userdevice": "0"
},
{
"name": "testvm_11-1",
"name_desc": "",
"os_device": "xvdb",
"size": 42949672960,
"sr": "Local storage",
"sr_uuid": "0af1245e-bdb0-ba33-1446-57a962ec4075",
"vbd_userdevice": "1"
}
],
"domid": "56",
"folder": "",
"hardware": {
"memory_mb": 8192,
"num_cpu_cores_per_socket": 2,
"num_cpus": 4
},
"home_server": "",
"is_template": false,
"name": "testvm_11",
"name_desc": "",
"networks": [
{
"gateway": "192.168.0.254",
"gateway6": "fc00::fffe",
"ip": "192.168.0.200",
"ip6": [
"fe80:0000:0000:0000:e9cb:625a:32c5:c291",
"fc00:0000:0000:0000:0000:0000:0000:0001"
],
"mac": "ba:91:3a:48:20:76",
"mtu": "1500",
"name": "Pool-wide network associated with eth1",
"netmask": "255.255.255.128",
"prefix": "25",
"prefix6": "64",
"vif_device": "0"
}
],
"other_config": {
"base_template_name": "Windows Server 2016 (64-bit)",
"import_task": "OpaqueRef:e43eb71c-45d6-5351-09ff-96e4fb7d0fa5",
"install-methods": "cdrom",
"instant": "true",
"mac_seed": "f83e8d8a-cfdc-b105-b054-ef5cb416b77e"
},
"platform": {
"acpi": "1",
"apic": "true",
"cores-per-socket": "2",
"device_id": "0002",
"hpet": "true",
"nx": "true",
"pae": "true",
"timeoffset": "-25200",
"vga": "std",
"videoram": "8",
"viridian": "true",
"viridian_reference_tsc": "true",
"viridian_time_ref_count": "true"
},
"state": "poweredon",
"uuid": "e3c0b2d5-5f05-424e-479c-d3df8b3e7cda",
"xenstore_data": {
"vm-data": ""
}
}
changes:
description: Detected or made changes to VM
returned: always
type: list
sample: [
{
"hardware": [
"num_cpus"
]
},
{
"disks_changed": [
[],
[
"size"
]
]
},
{
"disks_new": [
{
"name": "new-disk",
"name_desc": "",
"position": 2,
"size_gb": "4",
"vbd_userdevice": "2"
}
]
},
{
"cdrom": [
"type",
"iso_name"
]
},
{
"networks_changed": [
[
"mac"
],
]
},
{
"networks_new": [
{
"name": "Pool-wide network associated with eth2",
"position": 1,
"vif_device": "1"
}
]
},
"need_poweredoff"
]
'''
import re
HAS_XENAPI = False
try:
import XenAPI
HAS_XENAPI = True
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils import six
from ansible.module_utils.xenserver import (xenserver_common_argument_spec, XAPI, XenServerObject, get_object_ref,
gather_vm_params, gather_vm_facts, set_vm_power_state, wait_for_vm_ip_address,
is_valid_mac_addr, is_valid_ip_addr, is_valid_ip_netmask, is_valid_ip_prefix,
ip_prefix_to_netmask, ip_netmask_to_prefix,
is_valid_ip6_addr, is_valid_ip6_prefix)
class XenServerVM(XenServerObject):
"""Class for managing XenServer VM.
Attributes:
vm_ref (str): XAPI reference to VM.
vm_params (dict): A dictionary with VM parameters as returned
by gather_vm_params() function.
"""
def __init__(self, module):
"""Inits XenServerVM using module parameters.
Args:
module: Reference to Ansible module object.
"""
super(XenServerVM, self).__init__(module)
self.vm_ref = get_object_ref(self.module, self.module.params['name'], self.module.params['uuid'], obj_type="VM", fail=False, msg_prefix="VM search: ")
self.gather_params()
def exists(self):
"""Returns True if VM exists, else False."""
return self.vm_ref is not None
def gather_params(self):
"""Gathers all VM parameters available in XAPI database."""
self.vm_params = gather_vm_params(self.module, self.vm_ref)
def gather_facts(self):
"""Gathers and returns VM facts."""
return gather_vm_facts(self.module, self.vm_params)
def set_power_state(self, power_state):
"""Controls VM power state."""
state_changed, current_state = set_vm_power_state(self.module, self.vm_ref, power_state, self.module.params['state_change_timeout'])
# If state has changed, update vm_params.
if state_changed:
self.vm_params['power_state'] = current_state.capitalize()
return state_changed
def wait_for_ip_address(self):
"""Waits for VM to acquire an IP address."""
self.vm_params['guest_metrics'] = wait_for_vm_ip_address(self.module, self.vm_ref, self.module.params['state_change_timeout'])
def deploy(self):
"""Deploys new VM from template."""
# Safety check.
if self.exists():
self.module.fail_json(msg="Called deploy on existing VM!")
try:
templ_ref = get_object_ref(self.module, self.module.params['template'], self.module.params['template_uuid'], obj_type="template", fail=True,
msg_prefix="VM deploy: ")
# Is this an existing running VM?
if self.xapi_session.xenapi.VM.get_power_state(templ_ref).lower() != 'halted':
self.module.fail_json(msg="VM deploy: running VM cannot be used as a template!")
# Find a SR we can use for VM.copy(). We use SR of the first disk
# if specified or default SR if not specified.
disk_params_list = self.module.params['disks']
sr_ref = None
if disk_params_list:
disk_params = disk_params_list[0]
disk_sr_uuid = disk_params.get('sr_uuid')
disk_sr = disk_params.get('sr')
if disk_sr_uuid is not None or disk_sr is not None:
sr_ref = get_object_ref(self.module, disk_sr, disk_sr_uuid, obj_type="SR", fail=True,
msg_prefix="VM deploy disks[0]: ")
if not sr_ref:
if self.default_sr_ref != "OpaqueRef:NULL":
sr_ref = self.default_sr_ref
else:
self.module.fail_json(msg="VM deploy disks[0]: no default SR found! You must specify SR explicitly.")
# VM name could be an empty string which is bad.
if self.module.params['name'] is not None and not self.module.params['name']:
self.module.fail_json(msg="VM deploy: VM name must not be an empty string!")
# Support for Ansible check mode.
if self.module.check_mode:
return
# Now we can instantiate VM. We use VM.clone for linked_clone and
# VM.copy for non linked_clone.
if self.module.params['linked_clone']:
self.vm_ref = self.xapi_session.xenapi.VM.clone(templ_ref, self.module.params['name'])
else:
self.vm_ref = self.xapi_session.xenapi.VM.copy(templ_ref, self.module.params['name'], sr_ref)
# Description is copied over from template so we reset it.
self.xapi_session.xenapi.VM.set_name_description(self.vm_ref, "")
# If template is one of built-in XenServer templates, we have to
# do some additional steps.
# Note: VM.get_is_default_template() is supported from XenServer 7.2
# onward so we use an alternative way.
templ_other_config = self.xapi_session.xenapi.VM.get_other_config(templ_ref)
if "default_template" in templ_other_config and templ_other_config['default_template']:
# The other_config of built-in XenServer templates has a key called
# 'disks' with the following content:
# disks: <provision><disk bootable="true" device="0" size="10737418240" sr="" type="system"/></provision>
# This value of other_config is copied to the cloned or copied VM and
# it prevents provisioning of the VM because sr is not specified and
# XAPI returns an error. To get around this, we remove the
# 'disks' key and add disks to VM later ourselves.
vm_other_config = self.xapi_session.xenapi.VM.get_other_config(self.vm_ref)
if "disks" in vm_other_config:
del vm_other_config['disks']
self.xapi_session.xenapi.VM.set_other_config(self.vm_ref, vm_other_config)
# At this point we have VM ready for provisioning.
self.xapi_session.xenapi.VM.provision(self.vm_ref)
# After provisioning we can prepare vm_params for reconfigure().
self.gather_params()
# VM is almost ready. We just need to reconfigure it...
self.reconfigure()
# Power on VM if needed.
if self.module.params['state'] == "poweredon":
self.set_power_state("poweredon")
except XenAPI.Failure as f:
self.module.fail_json(msg="XAPI ERROR: %s" % f.details)
def reconfigure(self):
"""Reconfigures an existing VM.
Returns:
list: parameters that were reconfigured.
"""
# Safety check.
if not self.exists():
self.module.fail_json(msg="Called reconfigure on non existing VM!")
config_changes = self.get_changes()
vm_power_state_save = self.vm_params['power_state'].lower()
if "need_poweredoff" in config_changes and vm_power_state_save != 'halted' and not self.module.params['force']:
self.module.fail_json(msg="VM reconfigure: VM has to be in powered off state to reconfigure but force was not specified!")
# Support for Ansible check mode.
if self.module.check_mode:
return config_changes
if "need_poweredoff" in config_changes and vm_power_state_save != 'halted' and self.module.params['force']:
self.set_power_state("shutdownguest")
try:
for change in config_changes:
if isinstance(change, six.string_types):
if change == "name":
self.xapi_session.xenapi.VM.set_name_label(self.vm_ref, self.module.params['name'])
elif change == "name_desc":
self.xapi_session.xenapi.VM.set_name_description(self.vm_ref, self.module.params['name_desc'])
elif change == "folder":
self.xapi_session.xenapi.VM.remove_from_other_config(self.vm_ref, 'folder')
if self.module.params['folder']:
self.xapi_session.xenapi.VM.add_to_other_config(self.vm_ref, 'folder', self.module.params['folder'])
elif change == "home_server":
if self.module.params['home_server']:
host_ref = self.xapi_session.xenapi.host.get_by_name_label(self.module.params['home_server'])[0]
else:
host_ref = "OpaqueRef:NULL"
self.xapi_session.xenapi.VM.set_affinity(self.vm_ref, host_ref)
elif isinstance(change, dict):
if change.get('hardware'):
for hardware_change in change['hardware']:
if hardware_change == "num_cpus":
num_cpus = int(self.module.params['hardware']['num_cpus'])
if num_cpus < int(self.vm_params['VCPUs_at_startup']):
self.xapi_session.xenapi.VM.set_VCPUs_at_startup(self.vm_ref, str(num_cpus))
self.xapi_session.xenapi.VM.set_VCPUs_max(self.vm_ref, str(num_cpus))
else:
self.xapi_session.xenapi.VM.set_VCPUs_max(self.vm_ref, str(num_cpus))
self.xapi_session.xenapi.VM.set_VCPUs_at_startup(self.vm_ref, str(num_cpus))
elif hardware_change == "num_cpu_cores_per_socket":
self.xapi_session.xenapi.VM.remove_from_platform(self.vm_ref, 'cores-per-socket')
num_cpu_cores_per_socket = int(self.module.params['hardware']['num_cpu_cores_per_socket'])
if num_cpu_cores_per_socket > 1:
self.xapi_session.xenapi.VM.add_to_platform(self.vm_ref, 'cores-per-socket', str(num_cpu_cores_per_socket))
elif hardware_change == "memory_mb":
memory_b = str(int(self.module.params['hardware']['memory_mb']) * 1048576)
vm_memory_static_min_b = str(min(int(memory_b), int(self.vm_params['memory_static_min'])))
self.xapi_session.xenapi.VM.set_memory_limits(self.vm_ref, vm_memory_static_min_b, memory_b, memory_b, memory_b)
elif change.get('disks_changed'):
vm_disk_params_list = [disk_params for disk_params in self.vm_params['VBDs'] if disk_params['type'] == "Disk"]
position = 0
for disk_change_list in change['disks_changed']:
for disk_change in disk_change_list:
vdi_ref = self.xapi_session.xenapi.VDI.get_by_uuid(vm_disk_params_list[position]['VDI']['uuid'])
if disk_change == "name":
self.xapi_session.xenapi.VDI.set_name_label(vdi_ref, self.module.params['disks'][position]['name'])
elif disk_change == "name_desc":
self.xapi_session.xenapi.VDI.set_name_description(vdi_ref, self.module.params['disks'][position]['name_desc'])
elif disk_change == "size":
self.xapi_session.xenapi.VDI.resize(vdi_ref, str(self.get_normalized_disk_size(self.module.params['disks'][position],
"VM reconfigure disks[%s]: " % position)))
position += 1
elif change.get('disks_new'):
for position, disk_userdevice in change['disks_new']:
disk_params = self.module.params['disks'][position]
disk_name = disk_params['name'] if disk_params.get('name') else "%s-%s" % (self.vm_params['name_label'], position)
disk_name_desc = disk_params['name_desc'] if disk_params.get('name_desc') else ""
if disk_params.get('sr_uuid'):
sr_ref = self.xapi_session.xenapi.SR.get_by_uuid(disk_params['sr_uuid'])
elif disk_params.get('sr'):
sr_ref = self.xapi_session.xenapi.SR.get_by_name_label(disk_params['sr'])[0]
else:
sr_ref = self.default_sr_ref
disk_size = str(self.get_normalized_disk_size(self.module.params['disks'][position], "VM reconfigure disks[%s]: " % position))
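# Create a new VDI on the chosen SR and attach it to the VM with a new
# VBD at the userdevice position determined by get_changes().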
new_disk_vdi = {
"name_label": disk_name,
"name_description": disk_name_desc,
"SR": sr_ref,
"virtual_size": disk_size,
"type": "user",
"sharable": False,
"read_only": False,
"other_config": {},
}
new_disk_vbd = {
"VM": self.vm_ref,
"VDI": None,
"userdevice": disk_userdevice,
"bootable": False,
"mode": "RW",
"type": "Disk",
"empty": False,
"other_config": {},
"qos_algorithm_type": "",
"qos_algorithm_params": {},
}
new_disk_vbd['VDI'] = self.xapi_session.xenapi.VDI.create(new_disk_vdi)
self.xapi_session.xenapi.VBD.create(new_disk_vbd)
elif change.get('cdrom'):
vm_cdrom_params_list = [cdrom_params for cdrom_params in self.vm_params['VBDs'] if cdrom_params['type'] == "CD"]
# If there is no CD present, we have to create one.
if not vm_cdrom_params_list:
# We will try to place cdrom at userdevice position
# 3 (which is default) if it is not already occupied
# else we will place it at first allowed position.
cdrom_userdevices_allowed = self.xapi_session.xenapi.VM.get_allowed_VBD_devices(self.vm_ref)
if "3" in cdrom_userdevices_allowed:
cdrom_userdevice = "3"
else:
cdrom_userdevice = cdrom_userdevices_allowed[0]
cdrom_vbd = {
"VM": self.vm_ref,
"VDI": "OpaqueRef:NULL",
"userdevice": cdrom_userdevice,
"bootable": False,
"mode": "RO",
"type": "CD",
"empty": True,
"other_config": {},
"qos_algorithm_type": "",
"qos_algorithm_params": {},
}
cdrom_vbd_ref = self.xapi_session.xenapi.VBD.create(cdrom_vbd)
else:
cdrom_vbd_ref = self.xapi_session.xenapi.VBD.get_by_uuid(vm_cdrom_params_list[0]['uuid'])
cdrom_is_empty = self.xapi_session.xenapi.VBD.get_empty(cdrom_vbd_ref)
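# Apply requested CD-ROM changes: eject the media for type 'none' or
# insert the ISO found by name when 'iso_name' changed.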
for cdrom_change in change['cdrom']:
if cdrom_change == "type":
cdrom_type = self.module.params['cdrom']['type']
if cdrom_type == "none" and not cdrom_is_empty:
self.xapi_session.xenapi.VBD.eject(cdrom_vbd_ref)
elif cdrom_type == "host":
# Unimplemented!
pass
elif cdrom_change == "iso_name":
if not cdrom_is_empty:
self.xapi_session.xenapi.VBD.eject(cdrom_vbd_ref)
cdrom_vdi_ref = self.xapi_session.xenapi.VDI.get_by_name_label(self.module.params['cdrom']['iso_name'])[0]
self.xapi_session.xenapi.VBD.insert(cdrom_vbd_ref, cdrom_vdi_ref)
elif change.get('networks_changed'):
position = 0
for network_change_list in change['networks_changed']:
if network_change_list:
vm_vif_params = self.vm_params['VIFs'][position]
network_params = self.module.params['networks'][position]
vif_ref = self.xapi_session.xenapi.VIF.get_by_uuid(vm_vif_params['uuid'])
network_ref = self.xapi_session.xenapi.network.get_by_uuid(vm_vif_params['network']['uuid'])
vif_recreated = False
if "name" in network_change_list or "mac" in network_change_list:
# To change network or MAC, we destroy old
# VIF and then create a new one with changed
# parameters. That's how XenCenter does it.
# Copy all old parameters to new VIF record.
vif = {
"device": vm_vif_params['device'],
"network": network_ref,
"VM": vm_vif_params['VM'],
"MAC": vm_vif_params['MAC'],
"MTU": vm_vif_params['MTU'],
"other_config": vm_vif_params['other_config'],
"qos_algorithm_type": vm_vif_params['qos_algorithm_type'],
"qos_algorithm_params": vm_vif_params['qos_algorithm_params'],
"locking_mode": vm_vif_params['locking_mode'],
"ipv4_allowed": vm_vif_params['ipv4_allowed'],
"ipv6_allowed": vm_vif_params['ipv6_allowed'],
}
if "name" in network_change_list:
network_ref_new = self.xapi_session.xenapi.network.get_by_name_label(network_params['name'])[0]
vif['network'] = network_ref_new
vif['MTU'] = self.xapi_session.xenapi.network.get_MTU(network_ref_new)
if "mac" in network_change_list:
vif['MAC'] = network_params['mac'].lower()
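# If the VM is running, unplug the old VIF before destroying it and
# plug the new one in after it is created.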
if self.vm_params['power_state'].lower() == "running":
self.xapi_session.xenapi.VIF.unplug(vif_ref)
self.xapi_session.xenapi.VIF.destroy(vif_ref)
vif_ref_new = self.xapi_session.xenapi.VIF.create(vif)
if self.vm_params['power_state'].lower() == "running":
self.xapi_session.xenapi.VIF.plug(vif_ref_new)
vif_ref = vif_ref_new
vif_recreated = True
if self.vm_params['customization_agent'] == "native":
vif_reconfigure_needed = False
if "type" in network_change_list:
network_type = network_params['type'].capitalize()
vif_reconfigure_needed = True
else:
network_type = vm_vif_params['ipv4_configuration_mode']
if "ip" in network_change_list:
network_ip = network_params['ip']
vif_reconfigure_needed = True
elif vm_vif_params['ipv4_addresses']:
network_ip = vm_vif_params['ipv4_addresses'][0].split('/')[0]
else:
network_ip = ""
if "prefix" in network_change_list:
network_prefix = "/%s" % network_params['prefix']
vif_reconfigure_needed = True
elif vm_vif_params['ipv4_addresses'] and vm_vif_params['ipv4_addresses'][0]:
network_prefix = "/%s" % vm_vif_params['ipv4_addresses'][0].split('/')[1]
else:
network_prefix = ""
if "gateway" in network_change_list:
network_gateway = network_params['gateway']
vif_reconfigure_needed = True
else:
network_gateway = vm_vif_params['ipv4_gateway']
if vif_recreated or vif_reconfigure_needed:
self.xapi_session.xenapi.VIF.configure_ipv4(vif_ref, network_type,
"%s%s" % (network_ip, network_prefix), network_gateway)
vif_reconfigure_needed = False
if "type6" in network_change_list:
network_type6 = network_params['type6'].capitalize()
vif_reconfigure_needed = True
else:
network_type6 = vm_vif_params['ipv6_configuration_mode']
if "ip6" in network_change_list:
network_ip6 = network_params['ip6']
vif_reconfigure_needed = True
elif vm_vif_params['ipv6_addresses']:
network_ip6 = vm_vif_params['ipv6_addresses'][0].split('/')[0]
else:
network_ip6 = ""
if "prefix6" in network_change_list:
network_prefix6 = "/%s" % network_params['prefix6']
vif_reconfigure_needed = True
elif vm_vif_params['ipv6_addresses'] and vm_vif_params['ipv6_addresses'][0]:
network_prefix6 = "/%s" % vm_vif_params['ipv6_addresses'][0].split('/')[1]
else:
network_prefix6 = ""
if "gateway6" in network_change_list:
network_gateway6 = network_params['gateway6']
vif_reconfigure_needed = True
else:
network_gateway6 = vm_vif_params['ipv6_gateway']
if vif_recreated or vif_reconfigure_needed:
self.xapi_session.xenapi.VIF.configure_ipv6(vif_ref, network_type6,
"%s%s" % (network_ip6, network_prefix6), network_gateway6)
elif self.vm_params['customization_agent'] == "custom":
vif_device = vm_vif_params['device']
# A user could have manually changed network
# or mac, e.g. through XenCenter, and then also
# make those changes in playbook manually.
# In that case, module will not detect any
# changes and info in xenstore_data will
# become stale. For that reason we always
# update name and mac in xenstore_data.
# Since we handle name and mac differently,
# we have to remove them from
# network_change_list.
network_change_list_tmp = [net_chg for net_chg in network_change_list if net_chg not in ['name', 'mac']]
for network_change in network_change_list_tmp + ['name', 'mac']:
self.xapi_session.xenapi.VM.remove_from_xenstore_data(self.vm_ref,
"vm-data/networks/%s/%s" % (vif_device, network_change))
if network_params.get('name'):
network_name = network_params['name']
else:
network_name = vm_vif_params['network']['name_label']
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref,
"vm-data/networks/%s/%s" % (vif_device, 'name'), network_name)
if network_params.get('mac'):
network_mac = network_params['mac'].lower()
else:
network_mac = vm_vif_params['MAC'].lower()
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref,
"vm-data/networks/%s/%s" % (vif_device, 'mac'), network_mac)
for network_change in network_change_list_tmp:
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref,
"vm-data/networks/%s/%s" % (vif_device, network_change),
network_params[network_change])
position += 1
elif change.get('networks_new'):
for position, vif_device in change['networks_new']:
network_params = self.module.params['networks'][position]
network_ref = self.xapi_session.xenapi.network.get_by_name_label(network_params['name'])[0]
network_name = network_params['name']
network_mac = network_params['mac'] if network_params.get('mac') else ""
network_type = network_params.get('type')
network_ip = network_params['ip'] if network_params.get('ip') else ""
network_prefix = network_params['prefix'] if network_params.get('prefix') else ""
network_netmask = network_params['netmask'] if network_params.get('netmask') else ""
network_gateway = network_params['gateway'] if network_params.get('gateway') else ""
network_type6 = network_params.get('type6')
network_ip6 = network_params['ip6'] if network_params.get('ip6') else ""
network_prefix6 = network_params['prefix6'] if network_params.get('prefix6') else ""
network_gateway6 = network_params['gateway6'] if network_params.get('gateway6') else ""
vif = {
"device": vif_device,
"network": network_ref,
"VM": self.vm_ref,
"MAC": network_mac,
"MTU": self.xapi_session.xenapi.network.get_MTU(network_ref),
"other_config": {},
"qos_algorithm_type": "",
"qos_algorithm_params": {},
}
vif_ref_new = self.xapi_session.xenapi.VIF.create(vif)
if self.vm_params['power_state'].lower() == "running":
self.xapi_session.xenapi.VIF.plug(vif_ref_new)
if self.vm_params['customization_agent'] == "native":
if network_type and network_type == "static":
self.xapi_session.xenapi.VIF.configure_ipv4(vif_ref_new, "Static",
"%s/%s" % (network_ip, network_prefix), network_gateway)
if network_type6 and network_type6 == "static":
self.xapi_session.xenapi.VIF.configure_ipv6(vif_ref_new, "Static",
"%s/%s" % (network_ip6, network_prefix6), network_gateway6)
elif self.vm_params['customization_agent'] == "custom":
# We first have to remove any existing data
# from xenstore_data because there could be
# some old leftover data from some interface
# that once occupied same device location as
# our new interface.
for network_param in ['name', 'mac', 'type', 'ip', 'prefix', 'netmask', 'gateway', 'type6', 'ip6', 'prefix6', 'gateway6']:
self.xapi_session.xenapi.VM.remove_from_xenstore_data(self.vm_ref, "vm-data/networks/%s/%s" % (vif_device, network_param))
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref, "vm-data/networks/%s/name" % vif_device, network_name)
# We get MAC from VIF itself instead of
# networks.mac because it could be
# autogenerated.
vm_vif_mac = self.xapi_session.xenapi.VIF.get_MAC(vif_ref_new)
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref, "vm-data/networks/%s/mac" % vif_device, vm_vif_mac)
if network_type:
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref, "vm-data/networks/%s/type" % vif_device, network_type)
if network_type == "static":
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref,
"vm-data/networks/%s/ip" % vif_device, network_ip)
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref,
"vm-data/networks/%s/prefix" % vif_device, network_prefix)
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref,
"vm-data/networks/%s/netmask" % vif_device, network_netmask)
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref,
"vm-data/networks/%s/gateway" % vif_device, network_gateway)
if network_type6:
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref, "vm-data/networks/%s/type6" % vif_device, network_type6)
if network_type6 == "static":
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref,
"vm-data/networks/%s/ip6" % vif_device, network_ip6)
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref,
"vm-data/networks/%s/prefix6" % vif_device, network_prefix6)
self.xapi_session.xenapi.VM.add_to_xenstore_data(self.vm_ref,
"vm-data/networks/%s/gateway6" % vif_device, network_gateway6)
elif change.get('custom_params'):
for position in change['custom_params']:
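# Each custom param maps directly to a XAPI VM setter, e.g. key
# 'HVM_boot_params' results in a VM.set_HVM_boot_params() call.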
custom_param_key = self.module.params['custom_params'][position]['key']
custom_param_value = self.module.params['custom_params'][position]['value']
self.xapi_session.xenapi_request("VM.set_%s" % custom_param_key, (self.vm_ref, custom_param_value))
if self.module.params['is_template']:
self.xapi_session.xenapi.VM.set_is_a_template(self.vm_ref, True)
elif "need_poweredoff" in config_changes and self.module.params['force'] and vm_power_state_save != 'halted':
self.set_power_state("poweredon")
# Gather new params after reconfiguration.
self.gather_params()
except XenAPI.Failure as f:
self.module.fail_json(msg="XAPI ERROR: %s" % f.details)
return config_changes
def destroy(self):
"""Removes an existing VM with associated disks"""
# Safety check.
if not self.exists():
self.module.fail_json(msg="Called destroy on non existing VM!")
if self.vm_params['power_state'].lower() != 'halted' and not self.module.params['force']:
self.module.fail_json(msg="VM destroy: VM has to be in powered off state to destroy but force was not specified!")
# Support for Ansible check mode.
if self.module.check_mode:
return
# Make sure that VM is poweredoff before we can destroy it.
self.set_power_state("poweredoff")
try:
# Destroy VM!
self.xapi_session.xenapi.VM.destroy(self.vm_ref)
vm_disk_params_list = [disk_params for disk_params in self.vm_params['VBDs'] if disk_params['type'] == "Disk"]
# Destroy all VDIs associated with VM!
for vm_disk_params in vm_disk_params_list:
vdi_ref = self.xapi_session.xenapi.VDI.get_by_uuid(vm_disk_params['VDI']['uuid'])
self.xapi_session.xenapi.VDI.destroy(vdi_ref)
except XenAPI.Failure as f:
self.module.fail_json(msg="XAPI ERROR: %s" % f.details)
def get_changes(self):
"""Finds VM parameters that differ from specified ones.
This method builds a dictionary with hierarchy of VM parameters
that differ from those specified in module parameters.
Returns:
list: VM parameters that differ from those specified in
module parameters.
"""
# Safety check.
if not self.exists():
self.module.fail_json(msg="Called get_changes on non existing VM!")
need_poweredoff = False
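# Converting a VM to a template requires the VM to be powered off first.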
if self.module.params['is_template']:
need_poweredoff = True
try:
# This VM could be a template or a snapshot. In that case we fail
# because we can't reconfigure them or it would just be too
# dangerous.
if self.vm_params['is_a_template'] and not self.vm_params['is_a_snapshot']:
self.module.fail_json(msg="VM check: targeted VM is a template! Template reconfiguration is not supported.")
if self.vm_params['is_a_snapshot']:
self.module.fail_json(msg="VM check: targeted VM is a snapshot! Snapshot reconfiguration is not supported.")
# Let's build a list of parameters that changed.
config_changes = []
# Name could only differ if we found an existing VM by uuid.
if self.module.params['name'] is not None and self.module.params['name'] != self.vm_params['name_label']:
if self.module.params['name']:
config_changes.append('name')
else:
self.module.fail_json(msg="VM check name: VM name cannot be an empty string!")
if self.module.params['name_desc'] is not None and self.module.params['name_desc'] != self.vm_params['name_description']:
config_changes.append('name_desc')
# Folder parameter is found in other_config.
vm_other_config = self.vm_params['other_config']
vm_folder = vm_other_config.get('folder', '')
if self.module.params['folder'] is not None and self.module.params['folder'] != vm_folder:
config_changes.append('folder')
if self.module.params['home_server'] is not None:
if (self.module.params['home_server'] and
(not self.vm_params['affinity'] or self.module.params['home_server'] != self.vm_params['affinity']['name_label'])):
# Check existence only. Ignore return value.
get_object_ref(self.module, self.module.params['home_server'], uuid=None, obj_type="home server", fail=True,
msg_prefix="VM check home_server: ")
config_changes.append('home_server')
elif not self.module.params['home_server'] and self.vm_params['affinity']:
config_changes.append('home_server')
config_changes_hardware = []
if self.module.params['hardware']:
num_cpus = self.module.params['hardware'].get('num_cpus')
if num_cpus is not None:
# Kept for compatibility with older Ansible versions that
# do not support subargument specs.
try:
num_cpus = int(num_cpus)
except ValueError as e:
self.module.fail_json(msg="VM check hardware.num_cpus: parameter should be an integer value!")
if num_cpus < 1:
self.module.fail_json(msg="VM check hardware.num_cpus: parameter should be greater than zero!")
# We can use VCPUs_at_startup or VCPUs_max parameter. I'd
# say the former is the way to go but this needs
# confirmation and testing.
if num_cpus != int(self.vm_params['VCPUs_at_startup']):
config_changes_hardware.append('num_cpus')
# For now, we don't support hotplugging so VM has to be in
# poweredoff state to reconfigure.
need_poweredoff = True
num_cpu_cores_per_socket = self.module.params['hardware'].get('num_cpu_cores_per_socket')
if num_cpu_cores_per_socket is not None:
# Kept for compatibility with older Ansible versions that
# do not support subargument specs.
try:
num_cpu_cores_per_socket = int(num_cpu_cores_per_socket)
except ValueError as e:
self.module.fail_json(msg="VM check hardware.num_cpu_cores_per_socket: parameter should be an integer value!")
if num_cpu_cores_per_socket < 1:
self.module.fail_json(msg="VM check hardware.num_cpu_cores_per_socket: parameter should be greater than zero!")
if num_cpus and num_cpus % num_cpu_cores_per_socket != 0:
self.module.fail_json(msg="VM check hardware.num_cpus: parameter should be a multiple of hardware.num_cpu_cores_per_socket!")
vm_platform = self.vm_params['platform']
vm_cores_per_socket = int(vm_platform.get('cores-per-socket', 1))
if num_cpu_cores_per_socket != vm_cores_per_socket:
config_changes_hardware.append('num_cpu_cores_per_socket')
# For now, we don't support hotplugging so VM has to be
# in poweredoff state to reconfigure.
need_poweredoff = True
memory_mb = self.module.params['hardware'].get('memory_mb')
if memory_mb is not None:
# Kept for compatibility with older Ansible versions that
# do not support subargument specs.
try:
memory_mb = int(memory_mb)
except ValueError as e:
self.module.fail_json(msg="VM check hardware.memory_mb: parameter should be an integer value!")
if memory_mb < 1:
self.module.fail_json(msg="VM check hardware.memory_mb: parameter should be greater than zero!")
# There are multiple memory parameters:
# - memory_dynamic_max
# - memory_dynamic_min
# - memory_static_max
# - memory_static_min
# - memory_target
#
# memory_target seems like a good candidate but it returns 0 for
# halted VMs so we can't use it.
#
# I decided to use memory_dynamic_max and memory_static_max
# and use whichever is larger. This strategy needs validation
# and testing.
#
# XenServer stores memory size in bytes so we need to divide
# it by 1024*1024 = 1048576.
if memory_mb != int(max(int(self.vm_params['memory_dynamic_max']), int(self.vm_params['memory_static_max'])) / 1048576):
config_changes_hardware.append('memory_mb')
# For now, we don't support hotplugging so VM has to be in
# poweredoff state to reconfigure.
need_poweredoff = True
if config_changes_hardware:
config_changes.append({"hardware": config_changes_hardware})
config_changes_disks = []
config_new_disks = []
# Find allowed userdevices.
vbd_userdevices_allowed = self.xapi_session.xenapi.VM.get_allowed_VBD_devices(self.vm_ref)
if self.module.params['disks']:
# Get the list of all disk. Filter out any CDs found.
vm_disk_params_list = [disk_params for disk_params in self.vm_params['VBDs'] if disk_params['type'] == "Disk"]
# The number of disks defined in module params has to be the same as or
# higher than the number of existing disks attached to the VM.
# We don't support removal or detachment of disks.
if len(self.module.params['disks']) < len(vm_disk_params_list):
self.module.fail_json(msg="VM check disks: provided disks configuration has less disks than the target VM (%d < %d)!" %
(len(self.module.params['disks']), len(vm_disk_params_list)))
# Find the highest disk occupied userdevice.
if not vm_disk_params_list:
vm_disk_userdevice_highest = "-1"
else:
vm_disk_userdevice_highest = vm_disk_params_list[-1]['userdevice']
for position in range(len(self.module.params['disks'])):
if position < len(vm_disk_params_list):
vm_disk_params = vm_disk_params_list[position]
else:
vm_disk_params = None
disk_params = self.module.params['disks'][position]
disk_size = self.get_normalized_disk_size(self.module.params['disks'][position], "VM check disks[%s]: " % position)
disk_name = disk_params.get('name')
if disk_name is not None and not disk_name:
self.module.fail_json(msg="VM check disks[%s]: disk name cannot be an empty string!" % position)
# If this is an existing disk.
if vm_disk_params and vm_disk_params['VDI']:
disk_changes = []
if disk_name and disk_name != vm_disk_params['VDI']['name_label']:
disk_changes.append('name')
disk_name_desc = disk_params.get('name_desc')
if disk_name_desc is not None and disk_name_desc != vm_disk_params['VDI']['name_description']:
disk_changes.append('name_desc')
if disk_size:
if disk_size > int(vm_disk_params['VDI']['virtual_size']):
disk_changes.append('size')
need_poweredoff = True
elif disk_size < int(vm_disk_params['VDI']['virtual_size']):
self.module.fail_json(msg="VM check disks[%s]: disk size is smaller than existing (%d bytes < %s bytes). "
"Reducing disk size is not allowed!" % (position, disk_size, vm_disk_params['VDI']['virtual_size']))
config_changes_disks.append(disk_changes)
# If this is a new disk.
else:
if not disk_size:
self.module.fail_json(msg="VM check disks[%s]: no valid disk size specification found!" % position)
disk_sr_uuid = disk_params.get('sr_uuid')
disk_sr = disk_params.get('sr')
if disk_sr_uuid is not None or disk_sr is not None:
# Check existence only. Ignore return value.
get_object_ref(self.module, disk_sr, disk_sr_uuid, obj_type="SR", fail=True,
msg_prefix="VM check disks[%s]: " % position)
elif self.default_sr_ref == 'OpaqueRef:NULL':
self.module.fail_json(msg="VM check disks[%s]: no default SR found! You must specify SR explicitly." % position)
if not vbd_userdevices_allowed:
self.module.fail_json(msg="VM check disks[%s]: maximum number of devices reached!" % position)
disk_userdevice = None
# We need to place a new disk right above the highest
# placed existing disk to maintain relative disk
# positions pairable with disk specifications in
# module params. That place must not be occupied by
# some other device like CD-ROM.
for userdevice in vbd_userdevices_allowed:
if int(userdevice) > int(vm_disk_userdevice_highest):
disk_userdevice = userdevice
vbd_userdevices_allowed.remove(userdevice)
vm_disk_userdevice_highest = userdevice
break
# If no place was found.
if disk_userdevice is None:
# Highest occupied place could be a CD-ROM device
# so we have to include all devices regardless of
# type when calculating out-of-bound position.
disk_userdevice = str(int(self.vm_params['VBDs'][-1]['userdevice']) + 1)
self.module.fail_json(msg="VM check disks[%s]: new disk position %s is out of bounds!" % (position, disk_userdevice))
# For new disks we only track their position.
config_new_disks.append((position, disk_userdevice))
# We should append config_changes_disks to config_changes only
# if there is at least one changed disk, else skip.
for disk_change in config_changes_disks:
if disk_change:
config_changes.append({"disks_changed": config_changes_disks})
break
if config_new_disks:
config_changes.append({"disks_new": config_new_disks})
config_changes_cdrom = []
if self.module.params['cdrom']:
# Get the list of all CD-ROMs. Filter out any regular disks
# found. If we found no existing CD-ROM, we will create it
# later else take the first one found.
vm_cdrom_params_list = [cdrom_params for cdrom_params in self.vm_params['VBDs'] if cdrom_params['type'] == "CD"]
# If no existing CD-ROM is found, we will need to add one.
# We need to check if there is any userdevice allowed.
if not vm_cdrom_params_list and not vbd_userdevices_allowed:
self.module.fail_json(msg="VM check cdrom: maximum number of devices reached!")
cdrom_type = self.module.params['cdrom'].get('type')
cdrom_iso_name = self.module.params['cdrom'].get('iso_name')
# If cdrom.iso_name is specified but cdrom.type is not,
# then set cdrom.type to 'iso', unless cdrom.iso_name is
# an empty string, in that case set cdrom.type to 'none'.
if not cdrom_type:
if cdrom_iso_name:
cdrom_type = "iso"
elif cdrom_iso_name is not None:
cdrom_type = "none"
self.module.params['cdrom']['type'] = cdrom_type
# If type changed.
if cdrom_type and (not vm_cdrom_params_list or cdrom_type != self.get_cdrom_type(vm_cdrom_params_list[0])):
config_changes_cdrom.append('type')
if cdrom_type == "iso":
# Check if ISO exists.
# Check existence only. Ignore return value.
get_object_ref(self.module, cdrom_iso_name, uuid=None, obj_type="ISO image", fail=True,
msg_prefix="VM check cdrom.iso_name: ")
# Is ISO image changed?
if (cdrom_iso_name and
(not vm_cdrom_params_list or
not vm_cdrom_params_list[0]['VDI'] or
cdrom_iso_name != vm_cdrom_params_list[0]['VDI']['name_label'])):
config_changes_cdrom.append('iso_name')
if config_changes_cdrom:
config_changes.append({"cdrom": config_changes_cdrom})
config_changes_networks = []
config_new_networks = []
# Find allowed devices.
vif_devices_allowed = self.xapi_session.xenapi.VM.get_allowed_VIF_devices(self.vm_ref)
if self.module.params['networks']:
# The number of VIFs defined in module params has to be the same as or
# higher than the number of existing VIFs attached to the VM.
# We don't support removal of VIFs.
if len(self.module.params['networks']) < len(self.vm_params['VIFs']):
self.module.fail_json(msg="VM check networks: provided networks configuration has less interfaces than the target VM (%d < %d)!" %
(len(self.module.params['networks']), len(self.vm_params['VIFs'])))
# Find the highest occupied device.
if not self.vm_params['VIFs']:
vif_device_highest = "-1"
else:
vif_device_highest = self.vm_params['VIFs'][-1]['device']
for position in range(len(self.module.params['networks'])):
if position < len(self.vm_params['VIFs']):
vm_vif_params = self.vm_params['VIFs'][position]
else:
vm_vif_params = None
network_params = self.module.params['networks'][position]
network_name = network_params.get('name')
if network_name is not None and not network_name:
self.module.fail_json(msg="VM check networks[%s]: network name cannot be an empty string!" % position)
if network_name:
# Check existence only. Ignore return value.
get_object_ref(self.module, network_name, uuid=None, obj_type="network", fail=True,
msg_prefix="VM check networks[%s]: " % position)
network_mac = network_params.get('mac')
if network_mac is not None:
network_mac = network_mac.lower()
if not is_valid_mac_addr(network_mac):
self.module.fail_json(msg="VM check networks[%s]: specified MAC address '%s' is not valid!" % (position, network_mac))
# IPv4 reconfiguration.
network_type = network_params.get('type')
network_ip = network_params.get('ip')
network_netmask = network_params.get('netmask')
network_prefix = None
# If networks.ip is specified and networks.type is not,
# then set networks.type to 'static'.
if not network_type and network_ip:
network_type = "static"
# XenServer natively supports only 'none' and 'static'
# type with 'none' being the same as 'dhcp'.
if self.vm_params['customization_agent'] == "native" and network_type and network_type == "dhcp":
network_type = "none"
if network_type and network_type == "static":
if network_ip is not None:
network_ip_split = network_ip.split('/')
network_ip = network_ip_split[0]
if network_ip and not is_valid_ip_addr(network_ip):
self.module.fail_json(msg="VM check networks[%s]: specified IPv4 address '%s' is not valid!" % (position, network_ip))
if len(network_ip_split) > 1:
network_prefix = network_ip_split[1]
if not is_valid_ip_prefix(network_prefix):
self.module.fail_json(msg="VM check networks[%s]: specified IPv4 prefix '%s' is not valid!" % (position, network_prefix))
if network_netmask is not None:
if not is_valid_ip_netmask(network_netmask):
self.module.fail_json(msg="VM check networks[%s]: specified IPv4 netmask '%s' is not valid!" % (position, network_netmask))
network_prefix = ip_netmask_to_prefix(network_netmask, skip_check=True)
elif network_prefix is not None:
network_netmask = ip_prefix_to_netmask(network_prefix, skip_check=True)
# If any parameter is overridden at this point, update it.
if network_type:
network_params['type'] = network_type
if network_ip:
network_params['ip'] = network_ip
if network_netmask:
network_params['netmask'] = network_netmask
if network_prefix:
network_params['prefix'] = network_prefix
network_gateway = network_params.get('gateway')
# Gateway can be an empty string (when removing gateway
# configuration) but if it is not, it should be validated.
if network_gateway and not is_valid_ip_addr(network_gateway):
self.module.fail_json(msg="VM check networks[%s]: specified IPv4 gateway '%s' is not valid!" % (position, network_gateway))
# IPv6 reconfiguration.
network_type6 = network_params.get('type6')
network_ip6 = network_params.get('ip6')
network_prefix6 = None
# If networks.ip6 is specified and networks.type6 is not,
# then set networks.type6 to 'static'.
if not network_type6 and network_ip6:
network_type6 = "static"
# XenServer natively supports only 'none' and 'static'
# type with 'none' being the same as 'dhcp'.
if self.vm_params['customization_agent'] == "native" and network_type6 and network_type6 == "dhcp":
network_type6 = "none"
if network_type6 and network_type6 == "static":
if network_ip6 is not None:
network_ip6_split = network_ip6.split('/')
network_ip6 = network_ip6_split[0]
if network_ip6 and not is_valid_ip6_addr(network_ip6):
self.module.fail_json(msg="VM check networks[%s]: specified IPv6 address '%s' is not valid!" % (position, network_ip6))
if len(network_ip6_split) > 1:
network_prefix6 = network_ip6_split[1]
if not is_valid_ip6_prefix(network_prefix6):
self.module.fail_json(msg="VM check networks[%s]: specified IPv6 prefix '%s' is not valid!" % (position, network_prefix6))
# If any parameter is overridden at this point, update it.
if network_type6:
network_params['type6'] = network_type6
if network_ip6:
network_params['ip6'] = network_ip6
if network_prefix6:
network_params['prefix6'] = network_prefix6
network_gateway6 = network_params.get('gateway6')
# Gateway can be an empty string (when removing gateway
# configuration) but if it is not, it should be validated.
if network_gateway6 and not is_valid_ip6_addr(network_gateway6):
self.module.fail_json(msg="VM check networks[%s]: specified IPv6 gateway '%s' is not valid!" % (position, network_gateway6))
# If this is an existing VIF.
if vm_vif_params and vm_vif_params['network']:
network_changes = []
if network_name and network_name != vm_vif_params['network']['name_label']:
network_changes.append('name')
if network_mac and network_mac != vm_vif_params['MAC'].lower():
network_changes.append('mac')
if self.vm_params['customization_agent'] == "native":
if network_type and network_type != vm_vif_params['ipv4_configuration_mode'].lower():
network_changes.append('type')
if network_type and network_type == "static":
if network_ip and (not vm_vif_params['ipv4_addresses'] or
not vm_vif_params['ipv4_addresses'][0] or
network_ip != vm_vif_params['ipv4_addresses'][0].split('/')[0]):
network_changes.append('ip')
if network_prefix and (not vm_vif_params['ipv4_addresses'] or
not vm_vif_params['ipv4_addresses'][0] or
network_prefix != vm_vif_params['ipv4_addresses'][0].split('/')[1]):
network_changes.append('prefix')
network_changes.append('netmask')
if network_gateway is not None and network_gateway != vm_vif_params['ipv4_gateway']:
network_changes.append('gateway')
if network_type6 and network_type6 != vm_vif_params['ipv6_configuration_mode'].lower():
network_changes.append('type6')
if network_type6 and network_type6 == "static":
if network_ip6 and (not vm_vif_params['ipv6_addresses'] or
not vm_vif_params['ipv6_addresses'][0] or
network_ip6 != vm_vif_params['ipv6_addresses'][0].split('/')[0]):
network_changes.append('ip6')
if network_prefix6 and (not vm_vif_params['ipv6_addresses'] or
not vm_vif_params['ipv6_addresses'][0] or
network_prefix6 != vm_vif_params['ipv6_addresses'][0].split('/')[1]):
network_changes.append('prefix6')
if network_gateway6 is not None and network_gateway6 != vm_vif_params['ipv6_gateway']:
network_changes.append('gateway6')
elif self.vm_params['customization_agent'] == "custom":
vm_xenstore_data = self.vm_params['xenstore_data']
if network_type and network_type != vm_xenstore_data.get('vm-data/networks/%s/type' % vm_vif_params['device'], "none"):
network_changes.append('type')
need_poweredoff = True
if network_type and network_type == "static":
if network_ip and network_ip != vm_xenstore_data.get('vm-data/networks/%s/ip' % vm_vif_params['device'], ""):
network_changes.append('ip')
need_poweredoff = True
if network_prefix and network_prefix != vm_xenstore_data.get('vm-data/networks/%s/prefix' % vm_vif_params['device'], ""):
network_changes.append('prefix')
network_changes.append('netmask')
need_poweredoff = True
if network_gateway is not None and network_gateway != vm_xenstore_data.get('vm-data/networks/%s/gateway' %
vm_vif_params['device'], ""):
network_changes.append('gateway')
need_poweredoff = True
if network_type6 and network_type6 != vm_xenstore_data.get('vm-data/networks/%s/type6' % vm_vif_params['device'], "none"):
network_changes.append('type6')
need_poweredoff = True
if network_type6 and network_type6 == "static":
if network_ip6 and network_ip6 != vm_xenstore_data.get('vm-data/networks/%s/ip6' % vm_vif_params['device'], ""):
network_changes.append('ip6')
need_poweredoff = True
if network_prefix6 and network_prefix6 != vm_xenstore_data.get('vm-data/networks/%s/prefix6' % vm_vif_params['device'], ""):
network_changes.append('prefix6')
need_poweredoff = True
if network_gateway6 is not None and network_gateway6 != vm_xenstore_data.get('vm-data/networks/%s/gateway6' %
vm_vif_params['device'], ""):
network_changes.append('gateway6')
need_poweredoff = True
config_changes_networks.append(network_changes)
# If this is a new VIF.
else:
if not network_name:
self.module.fail_json(msg="VM check networks[%s]: network name is required for new network interface!" % position)
if network_type and network_type == "static" and network_ip and not network_netmask:
self.module.fail_json(msg="VM check networks[%s]: IPv4 netmask or prefix is required for new network interface!" % position)
if network_type6 and network_type6 == "static" and network_ip6 and not network_prefix6:
self.module.fail_json(msg="VM check networks[%s]: IPv6 prefix is required for new network interface!" % position)
# Restart is needed if we are adding new network
# interface with IP/gateway parameters specified
# and custom agent is used.
if self.vm_params['customization_agent'] == "custom":
for parameter in ['type', 'ip', 'prefix', 'gateway', 'type6', 'ip6', 'prefix6', 'gateway6']:
if network_params.get(parameter):
need_poweredoff = True
break
if not vif_devices_allowed:
self.module.fail_json(msg="VM check networks[%s]: maximum number of network interfaces reached!" % position)
# We need to place a new network interface right above the
# highest placed existing interface to maintain relative
# positions pairable with network interface specifications
# in module params.
vif_device = str(int(vif_device_highest) + 1)
if vif_device not in vif_devices_allowed:
self.module.fail_json(msg="VM check networks[%s]: new network interface position %s is out of bounds!" % (position, vif_device))
vif_devices_allowed.remove(vif_device)
vif_device_highest = vif_device
# For new VIFs we only track their position.
config_new_networks.append((position, vif_device))
# We should append config_changes_networks to config_changes only
# if there is at least one changed network, else skip.
for network_change in config_changes_networks:
if network_change:
config_changes.append({"networks_changed": config_changes_networks})
break
if config_new_networks:
config_changes.append({"networks_new": config_new_networks})
config_changes_custom_params = []
if self.module.params['custom_params']:
for position in range(len(self.module.params['custom_params'])):
custom_param = self.module.params['custom_params'][position]
custom_param_key = custom_param['key']
custom_param_value = custom_param['value']
if custom_param_key not in self.vm_params:
self.module.fail_json(msg="VM check custom_params[%s]: unknown VM param '%s'!" % (position, custom_param_key))
if custom_param_value != self.vm_params[custom_param_key]:
# We only need to track custom param position.
config_changes_custom_params.append(position)
if config_changes_custom_params:
config_changes.append({"custom_params": config_changes_custom_params})
if need_poweredoff:
config_changes.append('need_poweredoff')
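            # Illustrative shape of the returned list (entries are made up):
            #   [{'networks_changed': [['ip', 'gateway'], []]},
            #    {'networks_new': [(2, '2')]},
            #    {'custom_params': [0]},
            #    'need_poweredoff']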
return config_changes
except XenAPI.Failure as f:
self.module.fail_json(msg="XAPI ERROR: %s" % f.details)
def get_normalized_disk_size(self, disk_params, msg_prefix=""):
"""Parses disk size parameters and returns disk size in bytes.
This method tries to parse disk size module parameters. It fails
with an error message if size cannot be parsed.
Args:
            disk_params (dict): A dictionary with disk parameters.
msg_prefix (str): A string error messages should be prefixed
with (default: "").
Returns:
int: disk size in bytes if disk size is successfully parsed or
None if no disk size parameters were found.
"""
        # There should be only a single size spec, but we make a list of all
        # size specs just in case. Priority is given to 'size'; if it is not
        # found, we check for 'size_tb', 'size_gb', 'size_mb' etc. and use the
        # first one found.
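        # Illustrative conversions (inputs are made-up module parameters):
        #   {'size': '10gb'}   -> 10 * 1024**3
        #   {'size_mb': '512'} -> 512 * 1024**2
        #   {'size': '1.5 GB'} -> int(1.5 * 1024**3)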
disk_size_spec = [x for x in disk_params.keys() if disk_params[x] is not None and (x.startswith('size_') or x == 'size')]
if disk_size_spec:
try:
# size
if "size" in disk_size_spec:
size_regex = re.compile(r'(\d+(?:\.\d+)?)\s*(.*)')
disk_size_m = size_regex.match(disk_params['size'])
if disk_size_m:
size = disk_size_m.group(1)
unit = disk_size_m.group(2)
else:
raise ValueError
# size_tb, size_gb, size_mb, size_kb, size_b
else:
size = disk_params[disk_size_spec[0]]
unit = disk_size_spec[0].split('_')[-1]
if not unit:
unit = "b"
else:
unit = unit.lower()
if re.match(r'\d+\.\d+', size):
# We found float value in string, let's typecast it.
if unit == "b":
# If we found float but unit is bytes, we get the integer part only.
size = int(float(size))
else:
size = float(size)
else:
# We found int value in string, let's typecast it.
size = int(size)
if not size or size < 0:
raise ValueError
except (TypeError, ValueError, NameError):
# Common failure
self.module.fail_json(msg="%sfailed to parse disk size! Please review value provided using documentation." % msg_prefix)
disk_units = dict(tb=4, gb=3, mb=2, kb=1, b=0)
if unit in disk_units:
return int(size * (1024 ** disk_units[unit]))
else:
self.module.fail_json(msg="%s'%s' is not a supported unit for disk size! Supported units are ['%s']." %
(msg_prefix, unit, "', '".join(sorted(disk_units.keys(), key=lambda key: disk_units[key]))))
else:
return None
@staticmethod
def get_cdrom_type(vm_cdrom_params):
"""Returns VM CD-ROM type."""
        # TODO: implement support for detecting the 'host' CD-ROM type. No
        # server to test this on at the moment.
if vm_cdrom_params['empty']:
return "none"
else:
return "iso"
def main():
argument_spec = xenserver_common_argument_spec()
argument_spec.update(
state=dict(type='str', default='present',
choices=['present', 'absent', 'poweredon']),
name=dict(type='str', aliases=['name_label']),
name_desc=dict(type='str'),
uuid=dict(type='str'),
template=dict(type='str', aliases=['template_src']),
template_uuid=dict(type='str'),
is_template=dict(type='bool', default=False),
folder=dict(type='str'),
hardware=dict(
type='dict',
options=dict(
num_cpus=dict(type='int'),
num_cpu_cores_per_socket=dict(type='int'),
memory_mb=dict(type='int'),
),
),
disks=dict(
type='list',
elements='dict',
options=dict(
size=dict(type='str'),
size_tb=dict(type='str'),
size_gb=dict(type='str'),
size_mb=dict(type='str'),
size_kb=dict(type='str'),
size_b=dict(type='str'),
name=dict(type='str', aliases=['name_label']),
name_desc=dict(type='str'),
sr=dict(type='str'),
sr_uuid=dict(type='str'),
),
aliases=['disk'],
mutually_exclusive=[
['size', 'size_tb', 'size_gb', 'size_mb', 'size_kb', 'size_b'],
['sr', 'sr_uuid'],
],
),
cdrom=dict(
type='dict',
options=dict(
type=dict(type='str', choices=['none', 'iso']),
iso_name=dict(type='str'),
),
required_if=[
['type', 'iso', ['iso_name']],
],
),
networks=dict(
type='list',
elements='dict',
options=dict(
name=dict(type='str', aliases=['name_label']),
mac=dict(type='str'),
type=dict(type='str', choices=['none', 'dhcp', 'static']),
ip=dict(type='str'),
netmask=dict(type='str'),
gateway=dict(type='str'),
type6=dict(type='str', choices=['none', 'dhcp', 'static']),
ip6=dict(type='str'),
gateway6=dict(type='str'),
),
aliases=['network'],
required_if=[
['type', 'static', ['ip']],
['type6', 'static', ['ip6']],
],
),
home_server=dict(type='str'),
custom_params=dict(
type='list',
elements='dict',
options=dict(
key=dict(type='str', required=True),
value=dict(type='raw', required=True),
),
),
wait_for_ip_address=dict(type='bool', default=False),
state_change_timeout=dict(type='int', default=0),
linked_clone=dict(type='bool', default=False),
force=dict(type='bool', default=False),
)
module = AnsibleModule(argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=[
['name', 'uuid'],
],
mutually_exclusive=[
['template', 'template_uuid'],
],
)
result = {'failed': False, 'changed': False}
vm = XenServerVM(module)
# Find existing VM
if vm.exists():
if module.params['state'] == "absent":
vm.destroy()
result['changed'] = True
elif module.params['state'] == "present":
config_changes = vm.reconfigure()
if config_changes:
result['changed'] = True
# Make new disk and network changes more user friendly
# and informative.
for change in config_changes:
if isinstance(change, dict):
if change.get('disks_new'):
disks_new = []
for position, userdevice in change['disks_new']:
disk_new_params = {"position": position, "vbd_userdevice": userdevice}
disk_params = module.params['disks'][position]
for k in disk_params.keys():
if disk_params[k] is not None:
disk_new_params[k] = disk_params[k]
disks_new.append(disk_new_params)
if disks_new:
change['disks_new'] = disks_new
elif change.get('networks_new'):
networks_new = []
for position, device in change['networks_new']:
network_new_params = {"position": position, "vif_device": device}
network_params = module.params['networks'][position]
for k in network_params.keys():
if network_params[k] is not None:
network_new_params[k] = network_params[k]
networks_new.append(network_new_params)
if networks_new:
change['networks_new'] = networks_new
result['changes'] = config_changes
elif module.params['state'] in ["poweredon", "poweredoff", "restarted", "shutdownguest", "rebootguest", "suspended"]:
result['changed'] = vm.set_power_state(module.params['state'])
elif module.params['state'] != "absent":
vm.deploy()
result['changed'] = True
if module.params['wait_for_ip_address'] and module.params['state'] != "absent":
vm.wait_for_ip_address()
result['instance'] = vm.gather_facts()
if result['failed']:
module.fail_json(**result)
else:
module.exit_json(**result)
if __name__ == '__main__':
main()
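# Illustrative playbook task for this module (a sketch only; the connection
# options come from xenserver_common_argument_spec and every name/address
# below is made up):
#
#   - name: Deploy a VM from a template
#     xenserver_guest:
#       hostname: xenserver.example.com   # assumed connection parameter
#       username: root                    # assumed connection parameter
#       password: secret                  # assumed connection parameter
#       state: present
#       name: web-01
#       template: centos-7-template
#       hardware:
#         num_cpus: 2
#         memory_mb: 2048
#       disks:
#         - size_gb: "10"
#           sr: Local storage
#       networks:
#         - name: Pool-wide network associated with eth0
#           type: static
#           ip: 192.168.1.10
#           netmask: 255.255.255.0
#           gateway: 192.168.1.1
#       wait_for_ip_address: yes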
| gpl-3.0 | 7,389,772,228,995,172,000 | 49.680851 | 158 | 0.50169 | false |
clarkperkins/stackdio | stackdio/api/cloud/utils.py | 2 | 2418 | # -*- coding: utf-8 -*-
# Copyright 2017, Digital Reasoning
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import unicode_literals
import importlib
import logging
import re
from django.conf import settings
from stackdio.core.config import StackdioConfigException
logger = logging.getLogger(__name__)
def get_provider_driver_class(provider):
provider_classes = get_cloud_providers()
for provider_class in provider_classes:
if provider_class.SHORT_NAME == provider.name:
return provider_class
return None
def check_cloud_provider_settings():
if not hasattr(settings, 'CLOUD_PROVIDERS'):
raise StackdioConfigException(
'settings.CLOUD_PROVIDERS must set with a list of supported cloud providers.'
)
def get_cloud_provider_choices():
check_cloud_provider_settings()
choices = []
for provider in get_cloud_providers():
choices.append(provider.get_provider_choice())
return choices
def get_cloud_providers():
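    """Import and return the provider classes listed in settings.CLOUD_PROVIDERS.
    Illustrative setting (the class path below is an assumption, not shipped
    configuration):
        CLOUD_PROVIDERS = ['stackdio.api.cloud.providers.aws.AWSCloudProvider']
    """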
check_cloud_provider_settings()
providers = []
for class_path in settings.CLOUD_PROVIDERS:
try:
module_path, class_name = class_path.rsplit('.', 1)
module = importlib.import_module(module_path)
providers.append(getattr(module, class_name))
except ImportError as e:
msg = 'Could not import {0} from settings.CLOUD_PROVIDERS'.format(class_path)
logger.error(e)
raise StackdioConfigException(msg)
return providers
def find_roles(filename, pattern):
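    r"""Yield matching lines from *filename*, plus the lines that follow them.
    Every line matching *pattern* is yielded and toggles a recording flag, so
    the lines between the first and second match (third and fourth, and so on)
    are yielded as well.
    Illustrative call (the pattern is an example, not a project default):
        find_roles('top.sls', r'^\s+-\s(?!match:)')
    """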
with open(filename) as f:
recording = False
for line in f:
# if line.startswith(pattern):
# re.match('^(\s)+-\s(?!match\:)', line)
if re.match(pattern, line):
yield line
recording = not recording
elif recording:
yield line
| apache-2.0 | 3,608,501,492,056,877,600 | 28.13253 | 89 | 0.664599 | false |
CiscoSystems/nova | nova/tests/api/openstack/compute/contrib/test_extended_server_attributes.py | 31 | 4912 | # Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from lxml import etree
import webob
from nova.api.openstack.compute.contrib import extended_server_attributes
from nova import compute
from nova import db
from nova import exception
from nova.objects import instance as instance_obj
from nova.openstack.common import jsonutils
from nova import test
from nova.tests.api.openstack import fakes
from oslo.config import cfg
NAME_FMT = cfg.CONF.instance_name_template
UUID1 = '00000000-0000-0000-0000-000000000001'
UUID2 = '00000000-0000-0000-0000-000000000002'
UUID3 = '00000000-0000-0000-0000-000000000003'
def fake_compute_get(*args, **kwargs):
fields = instance_obj.INSTANCE_DEFAULT_FIELDS
return instance_obj.Instance._from_db_object(
args[1], instance_obj.Instance(),
fakes.stub_instance(1, uuid=UUID3, host="host-fake",
node="node-fake"), fields)
def fake_compute_get_all(*args, **kwargs):
db_list = [
fakes.stub_instance(1, uuid=UUID1, host="host-1", node="node-1"),
fakes.stub_instance(2, uuid=UUID2, host="host-2", node="node-2")
]
fields = instance_obj.INSTANCE_DEFAULT_FIELDS
return instance_obj._make_instance_list(args[1],
instance_obj.InstanceList(),
db_list, fields)
class ExtendedServerAttributesTest(test.TestCase):
content_type = 'application/json'
prefix = 'OS-EXT-SRV-ATTR:'
def setUp(self):
super(ExtendedServerAttributesTest, self).setUp()
fakes.stub_out_nw_api(self.stubs)
self.stubs.Set(compute.api.API, 'get', fake_compute_get)
self.stubs.Set(compute.api.API, 'get_all', fake_compute_get_all)
self.stubs.Set(db, 'instance_get_by_uuid', fake_compute_get)
self.flags(
osapi_compute_extension=[
'nova.api.openstack.compute.contrib.select_extensions'],
osapi_compute_ext_list=['Extended_server_attributes'])
def _make_request(self, url):
req = webob.Request.blank(url)
req.headers['Accept'] = self.content_type
res = req.get_response(fakes.wsgi_app(init_only=('servers',)))
return res
def _get_server(self, body):
return jsonutils.loads(body).get('server')
def _get_servers(self, body):
return jsonutils.loads(body).get('servers')
def assertServerAttributes(self, server, host, node, instance_name):
self.assertEqual(server.get('%shost' % self.prefix), host)
self.assertEqual(server.get('%sinstance_name' % self.prefix),
instance_name)
self.assertEqual(server.get('%shypervisor_hostname' % self.prefix),
node)
def test_show(self):
url = '/v2/fake/servers/%s' % UUID3
res = self._make_request(url)
self.assertEqual(res.status_int, 200)
self.assertServerAttributes(self._get_server(res.body),
host='host-fake',
node='node-fake',
instance_name=NAME_FMT % 1)
def test_detail(self):
url = '/v2/fake/servers/detail'
res = self._make_request(url)
self.assertEqual(res.status_int, 200)
for i, server in enumerate(self._get_servers(res.body)):
self.assertServerAttributes(server,
host='host-%s' % (i + 1),
node='node-%s' % (i + 1),
instance_name=NAME_FMT % (i + 1))
def test_no_instance_passthrough_404(self):
def fake_compute_get(*args, **kwargs):
raise exception.InstanceNotFound(instance_id='fake')
self.stubs.Set(compute.api.API, 'get', fake_compute_get)
url = '/v2/fake/servers/70f6db34-de8d-4fbd-aafb-4065bdfa6115'
res = self._make_request(url)
self.assertEqual(res.status_int, 404)
class ExtendedServerAttributesXmlTest(ExtendedServerAttributesTest):
content_type = 'application/xml'
ext = extended_server_attributes
prefix = '{%s}' % ext.Extended_server_attributes.namespace
def _get_server(self, body):
return etree.XML(body)
def _get_servers(self, body):
return etree.XML(body).getchildren()
| apache-2.0 | -1,250,139,382,363,879,700 | 36.212121 | 78 | 0.629682 | false |
ngageoint/geoq | geoq/proxy/tests.py | 1 | 6015 | from django.test import TestCase,Client
from httmock import urlmatch, response, HTTMock
import os
from django.contrib.auth.models import User
from django.template.defaultfilters import slugify
from .models import *
def register_valid_proxy(name,url,refresh=100):
p = SourceDocument.objects.create(Name=name,SourceURL=url,Refresh=refresh)
p.save()
p.refresh(force=True)
class MyMock:
""" uses HTTMock but adds a state variable so I can check which calls got made and how many """
def __init__(self):
self.state = []
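    # Illustrative use: inside "with HTTMock(self.myMock.validkmz_mock):" any
    # HTTP GET to *.validkmz.com issued through the requests library (which
    # HTTMock patches) is answered with the bundled testdata/mykmz.kmz fixture
    # instead of a real download, and self.state records the call.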
@urlmatch(netloc=r'(.*\.)?validkmz\.com$')
def validkmz_mock(self,url, request):
self.state.append("downloaded"+str(url))
return open(os.path.join("proxy","testdata","mykmz.kmz")).read()
@urlmatch(netloc=r'(.*\.)?boguskmz\.com$')
def boguskmz_mock(self,url, request):
self.state.append("failed to download"+str(url))
return response(404)
class Duplicates(TestCase):
    """Placeholder: test registering a KMZ with a duplicate name, two KMZs with
    the same child names, or a KMZ with duplicate children."""
pass
class RegisterTests(TestCase):
    """As a user, I want to be able to access proxies, but I can't configure or
    edit them without the appropriate permissions."""
def setUp(self):
""" create a test user and log them in, setup new mock object"""
self.user = User.objects.create_user("bob",password="bob")
self.user.save()
self.c = Client()
self.c.login(username="bob",password="bob")
self.myMock = MyMock()
    def test_permissions(self):
        """ check that an anonymous user can access proxies but can't register new ones"""
with HTTMock(self.myMock.validkmz_mock):
self.c.logout()
r = self.c.get("/proxy/")
self.assertEqual(200, r.status_code)
register_valid_proxy("bob",url="http://validkmz.com/data/some.kmz",refresh=100) #this should be long enough that we don't refresh from registration
r = self.c.get("/proxy/")
self.assertEqual(200, r.status_code)
self.assertContains(r,"bob")
r = self.c.get("/proxy/kmz/bob/")
self.assertEqual(200, r.status_code)
r = self.c.get("/proxy/kmz/notbob/")
self.assertEqual(404, r.status_code)
r = self.c.post("/proxy/register/",{"Name":"bob2","SourceURL":"http://validkmz.com/data/someother.kmz","Type":"kmz"})
self.assertEqual(302, r.status_code) #redirects to login (or would try to ... )
newloc = r._headers.get('location',("","fail"))[1]
self.assertNotEqual(-1,newloc.find("login"),"Should have redirected user to login")
self.assertEqual(1,len(self.myMock.state),"should have only had one call go out")
self.assertTrue(self.myMock.state[0].find("downloaded") != -1)
def test_valid_registration(self):
""" test that a valid user can register a new kmz file"""
with HTTMock(self.myMock.validkmz_mock):
r = self.c.post("/proxy/register/",{"Name":"bob2","SourceURL":"http://validkmz.com/data/someother.kmz","Type":"kmz"})
self.assertEqual(302, r.status_code)
r = self.c.get("/proxy/")
self.assertEqual(200, r.status_code)
self.assertContains(r,"bob2")
r = self.c.get("/proxy/kmz/bob2/")
self.assertEqual(200, r.status_code)
    def test_invalid_registration(self):
        """ allow the user to register a non-working KMZ file but warn them (and return dummy KML) """
with HTTMock(self.myMock.boguskmz_mock):
r = self.c.post("/proxy/register/",{"Name":"badbob","SourceURL":"http://boguskmz.com/data/someother.kmz","Type":"kmz"})
self.assertEqual(302, r.status_code)
r = self.c.get("/proxy/kmz/badbob/")
self.assertContains(r,"Warning")
r = self.c.get("/proxy/")
self.assertEqual(200, r.status_code)
self.assertContains(r,"badbob")
r = self.c.get("/proxy/kmz/badbob/")
self.assertEqual(200, r.status_code)
self.assertContains(r,"Warning: KMZ file is currently unavailable")
class CacheTests(TestCase):
def setUp(self):
""" create a kmz file registration """
self.myMock = MyMock()
self.user = User.objects.create_user("bob",password="bob")
self.user.save()
self.c = Client()
self.c.login(username="bob",password="bob")
with HTTMock(self.myMock.validkmz_mock):
register_valid_proxy("proxytest",url="http://validkmz.com/data/some.kmz",refresh=3)
def makeRequest(self,n="proxytest"):
with HTTMock(self.myMock.validkmz_mock):
r = self.c.get("/proxy/kmz/"+n+"/")
self.assertEqual(200, r.status_code)
#todo: introspection
for img in [slugify("files/neko.png"),slugify("files/icon56.png")]:
r = self.c.get("/proxy/image/%s/%s/"%(n,img))
self.assertEqual(200, r.status_code)
r = self.c.get("/proxy/image/%s/boguspng/"%n)
self.assertEqual(404, r.status_code)
    def stestFirstRequest(self):
        """ test that the first request after registration works (assumed to run right after registration) """
self.makeRequest("proxytest")
self.assertEqual(1,len(self.myMock.state),"should have only had one call go out")
def testLaterRequest(self):
""" test that a subsequent request triggers a refresh """
import time
time.sleep(5) # ugh...
self.makeRequest("proxytest")
        self.assertEqual(2,len(self.myMock.state),"should have had two calls go out (registration plus refresh)")
class ConncurrentTests(TestCase):
def setUp(self):
pass
def testDualUpdates(self):
print("Do concurrent tests once we figure out how to do so")
#self.assertEqual("do I know how to test this","yes")
| mit | -1,567,299,680,008,759,300 | 44.568182 | 159 | 0.61862 | false |
bokeh/bokeh | tests/integration/widgets/test_toggle.py | 1 | 4531 | #-----------------------------------------------------------------------------
# Copyright (c) 2012 - 2021, Anaconda, Inc. All rights reserved.
#
# Powered by the Bokeh Development Team.
#
# The full license is in the file LICENSE.txt, distributed with this software.
#-----------------------------------------------------------------------------
#-----------------------------------------------------------------------------
# Boilerplate
#-----------------------------------------------------------------------------
from __future__ import annotations # isort:skip
import pytest ; pytest
#-----------------------------------------------------------------------------
# Imports
#-----------------------------------------------------------------------------
# External imports
from flaky import flaky
# Bokeh imports
from bokeh._testing.util.selenium import RECORD
from bokeh.core.enums import ButtonType
from bokeh.layouts import column
from bokeh.models import (
Circle,
ColumnDataSource,
CustomAction,
CustomJS,
Plot,
Range1d,
Toggle,
)
#-----------------------------------------------------------------------------
# Tests
#-----------------------------------------------------------------------------
pytest_plugins = (
"bokeh._testing.plugins.project",
)
@pytest.mark.selenium
class Test_Toggle:
def test_displays_label(self, bokeh_model_page) -> None:
button = Toggle(label="label", css_classes=["foo"])
page = bokeh_model_page(button)
button = page.driver.find_element_by_css_selector('.foo .bk-btn')
assert button.text == "label"
@pytest.mark.parametrize('typ', list(ButtonType))
def test_displays_button_type(self, typ, bokeh_model_page) -> None:
button = Toggle(button_type=typ, css_classes=["foo"])
page = bokeh_model_page(button)
button = page.driver.find_element_by_css_selector('.foo .bk-btn')
assert typ in button.get_attribute('class')
@flaky(max_runs=10)
def test_server_on_click_round_trip(self, bokeh_server_page) -> None:
def modify_doc(doc):
source = ColumnDataSource(dict(x=[1, 2], y=[1, 1]))
plot = Plot(height=400, width=400, x_range=Range1d(0, 1), y_range=Range1d(0, 1), min_border=0)
plot.add_glyph(source, Circle(x='x', y='y', size=20))
plot.add_tools(CustomAction(callback=CustomJS(args=dict(s=source), code=RECORD("data", "s.data"))))
button = Toggle(css_classes=['foo'])
def cb(value):
if value:
source.data=dict(x=[10, 20], y=[10, 10])
else:
source.data=dict(x=[100, 200], y=[100, 100])
button.on_click(cb)
doc.add_root(column(button, plot))
page = bokeh_server_page(modify_doc)
button = page.driver.find_element_by_css_selector('.foo .bk-btn')
button.click()
page.click_custom_action()
results = page.results
assert results == {'data': {'x': [10, 20], 'y': [10, 10]}}
button = page.driver.find_element_by_css_selector('.foo .bk-btn')
button.click()
page.click_custom_action()
results = page.results
assert results == {'data': {'x': [100, 200], 'y': [100, 100]}}
button = page.driver.find_element_by_css_selector('.foo .bk-btn')
button.click()
page.click_custom_action()
results = page.results
assert results == {'data': {'x': [10, 20], 'y': [10, 10]}}
# XXX (bev) disabled until https://github.com/bokeh/bokeh/issues/7970 is resolved
#assert page.has_no_console_errors()
# XXX (bev) Toggle does not register to process ButtonClick events
def test_js_on_click_executes(self, bokeh_model_page) -> None:
button = Toggle(css_classes=['foo'])
button.js_on_click(CustomJS(code=RECORD("value", "cb_obj.active")))
page = bokeh_model_page(button)
button = page.driver.find_element_by_css_selector('.foo .bk-btn')
button.click()
results = page.results
assert results == {'value': True}
button = page.driver.find_element_by_css_selector('.foo .bk-btn')
button.click()
results = page.results
assert results == {'value': False}
button = page.driver.find_element_by_css_selector('.foo .bk-btn')
button.click()
results = page.results
assert results == {'value': True}
assert page.has_no_console_errors()
| bsd-3-clause | -6,577,584,198,115,750,000 | 32.072993 | 111 | 0.522181 | false |
CapOM/ChromiumGStreamerBackend | tools/telemetry/third_party/gsutilz/third_party/boto/tests/integration/s3/test_bucket.py | 88 | 12516 | # -*- coding: utf-8 -*-
# Copyright (c) 2011 Mitch Garnaat http://garnaat.org/
# All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish, dis-
# tribute, sublicense, and/or sell copies of the Software, and to permit
# persons to whom the Software is furnished to do so, subject to the fol-
# lowing conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
"""
Some unit tests for the S3 Bucket
"""
from mock import patch, Mock
import unittest
import time
from boto.exception import S3ResponseError
from boto.s3.connection import S3Connection
from boto.s3.bucketlogging import BucketLogging
from boto.s3.lifecycle import Lifecycle
from boto.s3.lifecycle import Transition
from boto.s3.lifecycle import Expiration
from boto.s3.lifecycle import Rule
from boto.s3.acl import Grant
from boto.s3.tagging import Tags, TagSet
from boto.s3.website import RedirectLocation
from boto.compat import urllib
class S3BucketTest (unittest.TestCase):
s3 = True
def setUp(self):
self.conn = S3Connection()
self.bucket_name = 'bucket-%d' % int(time.time())
self.bucket = self.conn.create_bucket(self.bucket_name)
def tearDown(self):
for key in self.bucket:
key.delete()
self.bucket.delete()
def test_next_marker(self):
expected = ["a/", "b", "c"]
for key_name in expected:
key = self.bucket.new_key(key_name)
key.set_contents_from_string(key_name)
# Normal list of first 2 keys will have
# no NextMarker set, so we use last key to iterate
# last element will be "b" so no issue.
rs = self.bucket.get_all_keys(max_keys=2)
for element in rs:
pass
self.assertEqual(element.name, "b")
self.assertEqual(rs.next_marker, None)
# list using delimiter of first 2 keys will have
# a NextMarker set (when truncated). As prefixes
# are grouped together at the end, we get "a/" as
# last element, but luckily we have next_marker.
rs = self.bucket.get_all_keys(max_keys=2, delimiter="/")
for element in rs:
pass
self.assertEqual(element.name, "a/")
self.assertEqual(rs.next_marker, "b")
# ensure bucket.list() still works by just
# popping elements off the front of expected.
rs = self.bucket.list()
for element in rs:
self.assertEqual(element.name, expected.pop(0))
self.assertEqual(expected, [])
def test_list_with_url_encoding(self):
expected = ["α", "β", "γ"]
for key_name in expected:
key = self.bucket.new_key(key_name)
key.set_contents_from_string(key_name)
# ensure bucket.list() still works by just
# popping elements off the front of expected.
orig_getall = self.bucket._get_all
getall = lambda *a, **k: orig_getall(*a, max_keys=2, **k)
with patch.object(self.bucket, '_get_all', getall):
rs = self.bucket.list(encoding_type="url")
for element in rs:
name = urllib.parse.unquote(element.name.encode('utf-8'))
self.assertEqual(name, expected.pop(0))
self.assertEqual(expected, [])
def test_logging(self):
# use self.bucket as the target bucket so that teardown
# will delete any log files that make it into the bucket
# automatically and all we have to do is delete the
# source bucket.
sb_name = "src-" + self.bucket_name
sb = self.conn.create_bucket(sb_name)
# grant log write perms to target bucket using canned-acl
self.bucket.set_acl("log-delivery-write")
target_bucket = self.bucket_name
target_prefix = u"jp/ログ/"
# Check existing status is disabled
bls = sb.get_logging_status()
self.assertEqual(bls.target, None)
# Create a logging status and grant auth users READ PERM
authuri = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"
authr = Grant(permission="READ", type="Group", uri=authuri)
sb.enable_logging(target_bucket, target_prefix=target_prefix, grants=[authr])
# Check the status and confirm its set.
bls = sb.get_logging_status()
self.assertEqual(bls.target, target_bucket)
self.assertEqual(bls.prefix, target_prefix)
self.assertEqual(len(bls.grants), 1)
self.assertEqual(bls.grants[0].type, "Group")
self.assertEqual(bls.grants[0].uri, authuri)
# finally delete the src bucket
sb.delete()
def test_tagging(self):
tagging = """
<Tagging>
<TagSet>
<Tag>
<Key>tagkey</Key>
<Value>tagvalue</Value>
</Tag>
</TagSet>
</Tagging>
"""
self.bucket.set_xml_tags(tagging)
response = self.bucket.get_tags()
self.assertEqual(response[0][0].key, 'tagkey')
self.assertEqual(response[0][0].value, 'tagvalue')
self.bucket.delete_tags()
try:
self.bucket.get_tags()
except S3ResponseError as e:
self.assertEqual(e.code, 'NoSuchTagSet')
except Exception as e:
self.fail("Wrong exception raised (expected S3ResponseError): %s"
% e)
else:
self.fail("Expected S3ResponseError, but no exception raised.")
def test_tagging_from_objects(self):
"""Create tags from python objects rather than raw xml."""
t = Tags()
tag_set = TagSet()
tag_set.add_tag('akey', 'avalue')
tag_set.add_tag('anotherkey', 'anothervalue')
t.add_tag_set(tag_set)
self.bucket.set_tags(t)
response = self.bucket.get_tags()
self.assertEqual(response[0][0].key, 'akey')
self.assertEqual(response[0][0].value, 'avalue')
self.assertEqual(response[0][1].key, 'anotherkey')
self.assertEqual(response[0][1].value, 'anothervalue')
def test_website_configuration(self):
response = self.bucket.configure_website('index.html')
self.assertTrue(response)
config = self.bucket.get_website_configuration()
self.assertEqual(config, {'WebsiteConfiguration':
{'IndexDocument': {'Suffix': 'index.html'}}})
config2, xml = self.bucket.get_website_configuration_with_xml()
self.assertEqual(config, config2)
self.assertTrue('<Suffix>index.html</Suffix>' in xml, xml)
def test_website_redirect_all_requests(self):
response = self.bucket.configure_website(
redirect_all_requests_to=RedirectLocation('example.com'))
config = self.bucket.get_website_configuration()
self.assertEqual(config, {
'WebsiteConfiguration': {
'RedirectAllRequestsTo': {
'HostName': 'example.com'}}})
# Can configure the protocol as well.
response = self.bucket.configure_website(
redirect_all_requests_to=RedirectLocation('example.com', 'https'))
config = self.bucket.get_website_configuration()
self.assertEqual(config, {
'WebsiteConfiguration': {'RedirectAllRequestsTo': {
'HostName': 'example.com',
'Protocol': 'https',
}}}
)
def test_lifecycle(self):
lifecycle = Lifecycle()
lifecycle.add_rule('myid', '', 'Enabled', 30)
self.assertTrue(self.bucket.configure_lifecycle(lifecycle))
response = self.bucket.get_lifecycle_config()
self.assertEqual(len(response), 1)
actual_lifecycle = response[0]
self.assertEqual(actual_lifecycle.id, 'myid')
self.assertEqual(actual_lifecycle.prefix, '')
self.assertEqual(actual_lifecycle.status, 'Enabled')
self.assertEqual(actual_lifecycle.transition, None)
def test_lifecycle_with_glacier_transition(self):
lifecycle = Lifecycle()
transition = Transition(days=30, storage_class='GLACIER')
rule = Rule('myid', prefix='', status='Enabled', expiration=None,
transition=transition)
lifecycle.append(rule)
self.assertTrue(self.bucket.configure_lifecycle(lifecycle))
response = self.bucket.get_lifecycle_config()
transition = response[0].transition
self.assertEqual(transition.days, 30)
self.assertEqual(transition.storage_class, 'GLACIER')
self.assertEqual(transition.date, None)
def test_lifecycle_multi(self):
date = '2022-10-12T00:00:00.000Z'
sc = 'GLACIER'
lifecycle = Lifecycle()
lifecycle.add_rule("1", "1/", "Enabled", 1)
lifecycle.add_rule("2", "2/", "Enabled", Expiration(days=2))
lifecycle.add_rule("3", "3/", "Enabled", Expiration(date=date))
lifecycle.add_rule("4", "4/", "Enabled", None,
Transition(days=4, storage_class=sc))
lifecycle.add_rule("5", "5/", "Enabled", None,
Transition(date=date, storage_class=sc))
# set the lifecycle
self.bucket.configure_lifecycle(lifecycle)
# read the lifecycle back
readlifecycle = self.bucket.get_lifecycle_config();
for rule in readlifecycle:
if rule.id == "1":
self.assertEqual(rule.prefix, "1/")
self.assertEqual(rule.expiration.days, 1)
elif rule.id == "2":
self.assertEqual(rule.prefix, "2/")
self.assertEqual(rule.expiration.days, 2)
elif rule.id == "3":
self.assertEqual(rule.prefix, "3/")
self.assertEqual(rule.expiration.date, date)
elif rule.id == "4":
self.assertEqual(rule.prefix, "4/")
self.assertEqual(rule.transition.days, 4)
self.assertEqual(rule.transition.storage_class, sc)
elif rule.id == "5":
self.assertEqual(rule.prefix, "5/")
self.assertEqual(rule.transition.date, date)
self.assertEqual(rule.transition.storage_class, sc)
else:
self.fail("unexpected id %s" % rule.id)
def test_lifecycle_jp(self):
# test lifecycle with Japanese prefix
name = "Japanese files"
prefix = "日本語/"
days = 30
lifecycle = Lifecycle()
lifecycle.add_rule(name, prefix, "Enabled", days)
# set the lifecycle
self.bucket.configure_lifecycle(lifecycle)
# read the lifecycle back
readlifecycle = self.bucket.get_lifecycle_config();
for rule in readlifecycle:
self.assertEqual(rule.id, name)
self.assertEqual(rule.expiration.days, days)
#Note: Boto seems correct? AWS seems broken?
#self.assertEqual(rule.prefix, prefix)
def test_lifecycle_with_defaults(self):
lifecycle = Lifecycle()
lifecycle.add_rule(expiration=30)
self.assertTrue(self.bucket.configure_lifecycle(lifecycle))
response = self.bucket.get_lifecycle_config()
self.assertEqual(len(response), 1)
actual_lifecycle = response[0]
self.assertNotEqual(len(actual_lifecycle.id), 0)
self.assertEqual(actual_lifecycle.prefix, '')
def test_lifecycle_rule_xml(self):
# create a rule directly with id, prefix defaults
rule = Rule(status='Enabled', expiration=30)
s = rule.to_xml()
# Confirm no ID is set in the rule.
self.assertEqual(s.find("<ID>"), -1)
# Confirm Prefix is '' and not set to 'None'
self.assertNotEqual(s.find("<Prefix></Prefix>"), -1)
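        # For reference (illustrative; exact element order can vary by boto
        # version), the serialized rule looks roughly like:
        #   <Rule><Prefix></Prefix><Status>Enabled</Status>
        #   <Expiration><Days>30</Days></Expiration></Rule>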
| bsd-3-clause | 6,793,581,756,970,989,000 | 40.538206 | 85 | 0.619451 | false |
Ballz0fSteel/Umeko | lib/youtube_dl/extractor/ted.py | 16 | 11976 | from __future__ import unicode_literals
import json
import re
from .common import InfoExtractor
from ..compat import compat_str
from ..utils import (
int_or_none,
try_get,
)
class TEDIE(InfoExtractor):
IE_NAME = 'ted'
_VALID_URL = r'''(?x)
(?P<proto>https?://)
(?P<type>www|embed(?:-ssl)?)(?P<urlmain>\.ted\.com/
(
(?P<type_playlist>playlists(?:/\d+)?) # We have a playlist
|
((?P<type_talk>talks)) # We have a simple talk
|
(?P<type_watch>watch)/[^/]+/[^/]+
)
(/lang/(.*?))? # The url may contain the language
/(?P<name>[\w-]+) # Here goes the name and then ".html"
.*)$
'''
_TESTS = [{
'url': 'http://www.ted.com/talks/dan_dennett_on_our_consciousness.html',
'md5': '0de43ac406aa3e4ea74b66c9c7789b13',
'info_dict': {
'id': '102',
'ext': 'mp4',
'title': 'The illusion of consciousness',
'description': ('Philosopher Dan Dennett makes a compelling '
'argument that not only don\'t we understand our own '
'consciousness, but that half the time our brains are '
'actively fooling us.'),
'uploader': 'Dan Dennett',
'width': 853,
'duration': 1308,
}
}, {
'url': 'http://www.ted.com/watch/ted-institute/ted-bcg/vishal-sikka-the-beauty-and-power-of-algorithms',
'md5': 'b899ac15e345fb39534d913f7606082b',
'info_dict': {
'id': 'tSVI8ta_P4w',
'ext': 'mp4',
'title': 'Vishal Sikka: The beauty and power of algorithms',
'thumbnail': r're:^https?://.+\.jpg',
'description': 'md5:6261fdfe3e02f4f579cbbfc00aff73f4',
'upload_date': '20140122',
'uploader_id': 'TEDInstitute',
'uploader': 'TED Institute',
},
'add_ie': ['Youtube'],
}, {
'url': 'http://www.ted.com/talks/gabby_giffords_and_mark_kelly_be_passionate_be_courageous_be_your_best',
'md5': '71b3ab2f4233012dce09d515c9c39ce2',
'info_dict': {
'id': '1972',
'ext': 'mp4',
'title': 'Be passionate. Be courageous. Be your best.',
'uploader': 'Gabby Giffords and Mark Kelly',
'description': 'md5:5174aed4d0f16021b704120360f72b92',
'duration': 1128,
},
}, {
'url': 'http://www.ted.com/playlists/who_are_the_hackers',
'info_dict': {
'id': '10',
'title': 'Who are the hackers?',
},
'playlist_mincount': 6,
}, {
# contains a youtube video
'url': 'https://www.ted.com/talks/douglas_adams_parrots_the_universe_and_everything',
'add_ie': ['Youtube'],
'info_dict': {
'id': '_ZG8HBuDjgc',
'ext': 'webm',
'title': 'Douglas Adams: Parrots the Universe and Everything',
'description': 'md5:01ad1e199c49ac640cb1196c0e9016af',
'uploader': 'University of California Television (UCTV)',
'uploader_id': 'UCtelevision',
'upload_date': '20080522',
},
'params': {
'skip_download': True,
},
}, {
# YouTube video
'url': 'http://www.ted.com/talks/jeffrey_kluger_the_sibling_bond',
'add_ie': ['Youtube'],
'info_dict': {
'id': 'aFBIPO-P7LM',
'ext': 'mp4',
'title': 'The hidden power of siblings: Jeff Kluger at TEDxAsheville',
'description': 'md5:3d7a4f50d95ca5dd67104e2a20f43fe1',
'uploader': 'TEDx Talks',
'uploader_id': 'TEDxTalks',
'upload_date': '20111216',
},
'params': {
'skip_download': True,
},
}]
_NATIVE_FORMATS = {
'low': {'width': 320, 'height': 180},
'medium': {'width': 512, 'height': 288},
'high': {'width': 854, 'height': 480},
}
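    # Illustrative: a 'high' native download ends up as a format dict roughly
    # like {'format_id': 'high', 'url': ..., 'width': 854, 'height': 480} once
    # _talk_info() merges these dimensions into the download entries.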
def _extract_info(self, webpage):
info_json = self._search_regex(
r'(?s)q\(\s*"\w+.init"\s*,\s*({.+})\)\s*</script>',
webpage, 'info json')
return json.loads(info_json)
def _real_extract(self, url):
m = re.match(self._VALID_URL, url, re.VERBOSE)
if m.group('type').startswith('embed'):
desktop_url = m.group('proto') + 'www' + m.group('urlmain')
return self.url_result(desktop_url, 'TED')
name = m.group('name')
if m.group('type_talk'):
return self._talk_info(url, name)
elif m.group('type_watch'):
return self._watch_info(url, name)
else:
return self._playlist_videos_info(url, name)
def _playlist_videos_info(self, url, name):
'''Returns the videos of the playlist'''
webpage = self._download_webpage(url, name,
'Downloading playlist webpage')
info = self._extract_info(webpage)
playlist_info = try_get(
info, lambda x: x['__INITIAL_DATA__']['playlist'],
dict) or info['playlist']
playlist_entries = [
self.url_result('http://www.ted.com/talks/' + talk['slug'], self.ie_key())
for talk in try_get(
info, lambda x: x['__INITIAL_DATA__']['talks'],
dict) or info['talks']
]
return self.playlist_result(
playlist_entries,
playlist_id=compat_str(playlist_info['id']),
playlist_title=playlist_info['title'])
def _talk_info(self, url, video_name):
webpage = self._download_webpage(url, video_name)
info = self._extract_info(webpage)
talk_info = try_get(
info, lambda x: x['__INITIAL_DATA__']['talks'][0],
dict) or info['talks'][0]
title = talk_info['title'].strip()
external = talk_info.get('external')
if external:
service = external['service']
self.to_screen('Found video from %s' % service)
ext_url = None
if service.lower() == 'youtube':
ext_url = external.get('code')
return {
'_type': 'url',
'url': ext_url or external['uri'],
}
native_downloads = try_get(
talk_info, lambda x: x['downloads']['nativeDownloads'],
dict) or talk_info['nativeDownloads']
formats = [{
'url': format_url,
'format_id': format_id,
'format': format_id,
} for (format_id, format_url) in native_downloads.items() if format_url is not None]
if formats:
for f in formats:
finfo = self._NATIVE_FORMATS.get(f['format_id'])
if finfo:
f.update(finfo)
player_talk = talk_info['player_talks'][0]
resources_ = player_talk.get('resources') or talk_info.get('resources')
http_url = None
for format_id, resources in resources_.items():
if format_id == 'h264':
for resource in resources:
h264_url = resource.get('file')
if not h264_url:
continue
bitrate = int_or_none(resource.get('bitrate'))
formats.append({
'url': h264_url,
'format_id': '%s-%sk' % (format_id, bitrate),
'tbr': bitrate,
})
if re.search(r'\d+k', h264_url):
http_url = h264_url
elif format_id == 'rtmp':
streamer = talk_info.get('streamer')
if not streamer:
continue
for resource in resources:
formats.append({
'format_id': '%s-%s' % (format_id, resource.get('name')),
'url': streamer,
'play_path': resource['file'],
'ext': 'flv',
'width': int_or_none(resource.get('width')),
'height': int_or_none(resource.get('height')),
'tbr': int_or_none(resource.get('bitrate')),
})
elif format_id == 'hls':
formats.extend(self._extract_m3u8_formats(
resources.get('stream'), video_name, 'mp4', m3u8_id=format_id, fatal=False))
m3u8_formats = list(filter(
lambda f: f.get('protocol') == 'm3u8' and f.get('vcodec') != 'none',
formats))
if http_url:
for m3u8_format in m3u8_formats:
bitrate = self._search_regex(r'(\d+k)', m3u8_format['url'], 'bitrate', default=None)
if not bitrate:
continue
f = m3u8_format.copy()
f.update({
'url': re.sub(r'\d+k', bitrate, http_url),
'format_id': m3u8_format['format_id'].replace('hls', 'http'),
'protocol': 'http',
})
formats.append(f)
audio_download = talk_info.get('audioDownload')
if audio_download:
formats.append({
'url': audio_download,
'format_id': 'audio',
'vcodec': 'none',
})
self._sort_formats(formats)
video_id = compat_str(talk_info['id'])
return {
'id': video_id,
'title': title,
'uploader': player_talk.get('speaker') or talk_info.get('speaker'),
'thumbnail': player_talk.get('thumb') or talk_info.get('thumb'),
'description': self._og_search_description(webpage),
'subtitles': self._get_subtitles(video_id, talk_info),
'formats': formats,
'duration': talk_info.get('duration'),
}
def _get_subtitles(self, video_id, talk_info):
sub_lang_list = {}
for language in try_get(
talk_info,
(lambda x: x['downloads']['languages'],
lambda x: x['languages']), list):
lang_code = language.get('languageCode') or language.get('ianaCode')
if not lang_code:
continue
sub_lang_list[lang_code] = [
{
'url': 'http://www.ted.com/talks/subtitles/id/%s/lang/%s/format/%s' % (video_id, lang_code, ext),
'ext': ext,
}
for ext in ['ted', 'srt']
]
return sub_lang_list
def _watch_info(self, url, name):
webpage = self._download_webpage(url, name)
config_json = self._html_search_regex(
r'"pages\.jwplayer"\s*,\s*({.+?})\s*\)\s*</script>',
webpage, 'config', default=None)
if not config_json:
embed_url = self._search_regex(
r"<iframe[^>]+class='pages-video-embed__video__object'[^>]+src='([^']+)'", webpage, 'embed url')
return self.url_result(self._proto_relative_url(embed_url))
config = json.loads(config_json)['config']
video_url = config['video']['url']
thumbnail = config.get('image', {}).get('url')
title = self._html_search_regex(
r"(?s)<h1(?:\s+class='[^']+')?>(.+?)</h1>", webpage, 'title')
description = self._html_search_regex(
[
r'(?s)<h4 class="[^"]+" id="h3--about-this-talk">.*?</h4>(.*?)</div>',
r'(?s)<p><strong>About this talk:</strong>\s+(.*?)</p>',
],
webpage, 'description', fatal=False)
return {
'id': name,
'url': video_url,
'title': title,
'thumbnail': thumbnail,
'description': description,
}
| gpl-3.0 | 4,865,771,499,831,663,000 | 36.425 | 117 | 0.483968 | false |
cbrewster/servo | tests/wpt/web-platform-tests/tools/pywebsocket/mod_pywebsocket/util.py | 23 | 14116 | # Copyright 2011, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""WebSocket utilities."""
import array
import errno
# Import hash classes from a module available and recommended for each Python
# version and re-export those symbol. Use sha and md5 module in Python 2.4, and
# hashlib module in Python 2.6.
try:
import hashlib
md5_hash = hashlib.md5
sha1_hash = hashlib.sha1
except ImportError:
import md5
import sha
md5_hash = md5.md5
sha1_hash = sha.sha
from six.moves import StringIO
import logging
import os
import re
import socket
import traceback
import zlib
try:
from mod_pywebsocket import fast_masking
except ImportError:
pass
def get_stack_trace():
"""Get the current stack trace as string.
This is needed to support Python 2.3.
TODO: Remove this when we only support Python 2.4 and above.
Use traceback.format_exc instead.
"""
out = StringIO()
traceback.print_exc(file=out)
return out.getvalue()
def prepend_message_to_exception(message, exc):
"""Prepend message to the exception."""
exc.args = (message + str(exc),)
return
def __translate_interp(interp, cygwin_path):
    """Translate the interpreter program path for Win32 Python so it can run a
    cygwin program (e.g. perl). Note that it doesn't support paths that contain
    spaces, which is normally not an issue on Unix, where #!-scripts are written.
For Win32 python, cygwin_path is a directory of cygwin binaries.
Args:
interp: interp command line
cygwin_path: directory name of cygwin binary, or None
Returns:
translated interp command line.
"""
if not cygwin_path:
return interp
m = re.match('^[^ ]*/([^ ]+)( .*)?', interp)
if m:
cmd = os.path.join(cygwin_path, m.group(1))
return cmd + m.group(2)
return interp
def get_script_interp(script_path, cygwin_path=None):
r"""Get #!-interpreter command line from the script.
    It also fixes the command path. When Cygwin Python is used, e.g. in WebKit,
    it can run "/usr/bin/perl -wT hello.pl" directly. When Win32 Python is used,
    e.g. in Chromium, it cannot, so "/usr/bin/perl" is rewritten to
    "<cygwin_path>\perl.exe".
Args:
script_path: pathname of the script
cygwin_path: directory name of cygwin binary, or None
Returns:
#!-interpreter command line, or None if it is not #!-script.
"""
fp = open(script_path)
line = fp.readline()
fp.close()
m = re.match('^#!(.*)', line)
if m:
return __translate_interp(m.group(1), cygwin_path)
return None
def wrap_popen3_for_win(cygwin_path):
"""Wrap popen3 to support #!-script on Windows.
Args:
cygwin_path: path for cygwin binary if command path is needed to be
translated. None if no translation required.
"""
__orig_popen3 = os.popen3
def __wrap_popen3(cmd, mode='t', bufsize=-1):
cmdline = cmd.split(' ')
interp = get_script_interp(cmdline[0], cygwin_path)
if interp:
cmd = interp + ' ' + cmd
return __orig_popen3(cmd, mode, bufsize)
os.popen3 = __wrap_popen3
def hexify(s):
return ' '.join(map(lambda x: '%02x' % ord(x), s))
def get_class_logger(o):
"""Return the logging class information."""
return logging.getLogger(
'%s.%s' % (o.__class__.__module__, o.__class__.__name__))
class NoopMasker(object):
"""A NoOp masking object.
This has the same interface as RepeatedXorMasker but just returns
the string passed in without making any change.
"""
def __init__(self):
"""NoOp."""
pass
def mask(self, s):
"""NoOp."""
return s
class RepeatedXorMasker(object):
"""A masking object that applies XOR on the string.
Applies XOR on the string given to mask method with the masking bytes
given to the constructor repeatedly. This object remembers the position
in the masking bytes the last mask method call ended and resumes from
that point on the next mask method call.
"""
def __init__(self, masking_key):
self._masking_key = masking_key
self._masking_key_index = 0
def _mask_using_swig(self, s):
"""Perform the mask via SWIG."""
masked_data = fast_masking.mask(
s, self._masking_key, self._masking_key_index)
self._masking_key_index = (
(self._masking_key_index + len(s)) % len(self._masking_key))
return masked_data
def _mask_using_array(self, s):
"""Perform the mask via python."""
result = array.array('B')
result.fromstring(s)
# Use temporary local variables to eliminate the cost to access
# attributes
masking_key = map(ord, self._masking_key)
masking_key_size = len(masking_key)
masking_key_index = self._masking_key_index
for i in xrange(len(result)):
result[i] ^= masking_key[masking_key_index]
masking_key_index = (masking_key_index + 1) % masking_key_size
self._masking_key_index = masking_key_index
return result.tostring()
if 'fast_masking' in globals():
mask = _mask_using_swig
else:
mask = _mask_using_array
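    # Illustrative behaviour (key bytes are made up): the masking offset is
    # kept between calls, so
    #   m = RepeatedXorMasker('\x01\x02\x03\x04')
    #   m.mask('ab') + m.mask('cd') == RepeatedXorMasker('\x01\x02\x03\x04').mask('abcd')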
# By making wbits option negative, we can suppress CMF/FLG (2 octet) and
# ADLER32 (4 octet) fields of zlib so that we can use zlib module just as
# deflate library. DICTID won't be added as far as we don't set dictionary.
# LZ77 window of 32K will be used for both compression and decompression.
# For decompression, we can just use 32K to cover any windows size. For
# compression, we use 32K so receivers must use 32K.
#
# Compression level is Z_DEFAULT_COMPRESSION. We don't have to match level
# to decode.
#
# See zconf.h, deflate.cc, inflate.cc of zlib library, and zlibmodule.c of
# Python. See also RFC1950 (ZLIB 3.3).
class _Deflater(object):
def __init__(self, window_bits):
self._logger = get_class_logger(self)
self._compress = zlib.compressobj(
zlib.Z_DEFAULT_COMPRESSION, zlib.DEFLATED, -window_bits)
def compress(self, bytes):
compressed_bytes = self._compress.compress(bytes)
self._logger.debug('Compress input %r', bytes)
self._logger.debug('Compress result %r', compressed_bytes)
return compressed_bytes
def compress_and_flush(self, bytes):
compressed_bytes = self._compress.compress(bytes)
compressed_bytes += self._compress.flush(zlib.Z_SYNC_FLUSH)
self._logger.debug('Compress input %r', bytes)
self._logger.debug('Compress result %r', compressed_bytes)
return compressed_bytes
def compress_and_finish(self, bytes):
compressed_bytes = self._compress.compress(bytes)
compressed_bytes += self._compress.flush(zlib.Z_FINISH)
self._logger.debug('Compress input %r', bytes)
self._logger.debug('Compress result %r', compressed_bytes)
return compressed_bytes
class _Inflater(object):
def __init__(self, window_bits):
self._logger = get_class_logger(self)
self._window_bits = window_bits
self._unconsumed = ''
self.reset()
def decompress(self, size):
if not (size == -1 or size > 0):
raise Exception('size must be -1 or positive')
data = ''
while True:
if size == -1:
data += self._decompress.decompress(self._unconsumed)
# See Python bug http://bugs.python.org/issue12050 to
# understand why the same code cannot be used for updating
# self._unconsumed for here and else block.
self._unconsumed = ''
else:
data += self._decompress.decompress(
self._unconsumed, size - len(data))
self._unconsumed = self._decompress.unconsumed_tail
if self._decompress.unused_data:
# Encountered a last block (i.e. a block with BFINAL = 1) and
# found a new stream (unused_data). We cannot use the same
# zlib.Decompress object for the new stream. Create a new
# Decompress object to decompress the new one.
#
# It's fine to ignore unconsumed_tail if unused_data is not
# empty.
self._unconsumed = self._decompress.unused_data
self.reset()
if size >= 0 and len(data) == size:
# data is filled. Don't call decompress again.
break
else:
# Re-invoke Decompress.decompress to try to decompress all
# available bytes before invoking read which blocks until
# any new byte is available.
continue
else:
# Here, since unused_data is empty, even if unconsumed_tail is
# not empty, bytes of requested length are already in data. We
# don't have to "continue" here.
break
if data:
self._logger.debug('Decompressed %r', data)
return data
def append(self, data):
self._logger.debug('Appended %r', data)
self._unconsumed += data
def reset(self):
self._logger.debug('Reset')
self._decompress = zlib.decompressobj(-self._window_bits)
# Compresses/decompresses given octets using the method introduced in RFC1979.
class _RFC1979Deflater(object):
"""A compressor class that applies DEFLATE to given byte sequence and
flushes using the algorithm described in the RFC1979 section 2.1.
"""
def __init__(self, window_bits, no_context_takeover):
self._deflater = None
if window_bits is None:
window_bits = zlib.MAX_WBITS
self._window_bits = window_bits
self._no_context_takeover = no_context_takeover
def filter(self, bytes, end=True, bfinal=False):
if self._deflater is None:
self._deflater = _Deflater(self._window_bits)
if bfinal:
result = self._deflater.compress_and_finish(bytes)
# Add a padding block with BFINAL = 0 and BTYPE = 0.
result = result + chr(0)
self._deflater = None
return result
result = self._deflater.compress_and_flush(bytes)
if end:
# Strip last 4 octets which is LEN and NLEN field of a
# non-compressed block added for Z_SYNC_FLUSH.
result = result[:-4]
if self._no_context_takeover and end:
self._deflater = None
return result
class _RFC1979Inflater(object):
    """A decompressor class a la RFC1979.
    A decompressor for byte sequences compressed and flushed following the
    algorithm described in RFC1979 section 2.1.
"""
def __init__(self, window_bits=zlib.MAX_WBITS):
self._inflater = _Inflater(window_bits)
def filter(self, bytes):
# Restore stripped LEN and NLEN field of a non-compressed block added
# for Z_SYNC_FLUSH.
self._inflater.append(bytes + '\x00\x00\xff\xff')
return self._inflater.decompress(-1)
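        # Round-trip sketch (illustrative, default window size):
        #   deflater = _RFC1979Deflater(None, False)
        #   inflater = _RFC1979Inflater()
        #   inflater.filter(deflater.filter('Hello')) == 'Hello'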
class DeflateSocket(object):
"""A wrapper class for socket object to intercept send and recv to perform
deflate compression and decompression transparently.
"""
# Size of the buffer passed to recv to receive compressed data.
_RECV_SIZE = 4096
def __init__(self, socket):
self._socket = socket
self._logger = get_class_logger(self)
self._deflater = _Deflater(zlib.MAX_WBITS)
self._inflater = _Inflater(zlib.MAX_WBITS)
def recv(self, size):
"""Receives data from the socket specified on the construction up
to the specified size. Once any data is available, returns it even
if it's smaller than the specified size.
"""
# TODO(tyoshino): Allow call with size=0. It should block until any
# decompressed data is available.
if size <= 0:
raise Exception('Non-positive size passed')
while True:
data = self._inflater.decompress(size)
if len(data) != 0:
return data
read_data = self._socket.recv(DeflateSocket._RECV_SIZE)
if not read_data:
return ''
self._inflater.append(read_data)
def sendall(self, bytes):
self.send(bytes)
def send(self, bytes):
self._socket.sendall(self._deflater.compress_and_flush(bytes))
return len(bytes)
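# Illustrative wiring only, not in the original source; the host and port are
# placeholders. DeflateSocket wraps an already-connected socket so that
# send()/recv() compress and decompress transparently:
#
#   raw = socket.create_connection(('example.org', 8000))
#   sock = DeflateSocket(raw)
#   sock.send('hello')       # bytes go out deflated
#   data = sock.recv(4096)   # bytes come back inflated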
# vi:sts=4 sw=4 et
| mpl-2.0 | 5,943,935,005,135,710,000 | 32.292453 | 79 | 0.632545 | false |
sachdevs/rmc | models/rating.py | 8 | 3818 | import json
import logging
import mongoengine as me
import rmc.shared.util as util
class AggregateRating(me.EmbeddedDocument):
rating = me.FloatField(min_value=0.0, max_value=1.0, default=0.0)
count = me.IntField(min_value=0, default=0)
sorting_score_positive = me.FloatField(
min_value=0.0, max_value=1.0, default=0.0)
sorting_score_negative = me.FloatField(
min_value=0.0, max_value=1.0, default=0.0)
def debug_logging(self, func_name):
# TODO(Sandy): Temporary debugging for over 100% average rating bug
if self.rating > 1:
logging.warn(
"%s: update_sorting_score will fail" % (func_name) +
" self.count=%s self.rating=%s" % (self.count, self.rating)
)
@property
def num_approves(self):
"""Returns the number of users who selected "yes" for this rating."""
return int(round(self.rating * self.count))
def update_sorting_score(self):
self.sorting_score_positive = util.get_sorting_score(
self.rating, self.count)
self.sorting_score_negative = util.get_sorting_score(
1 - self.rating, self.count)
def add_rating(self, rating):
self.rating = float(self.num_approves + rating) / (self.count + 1)
self.count += 1
# TODO(Sandy): Temporary debugging
self.debug_logging("add_rating(%s)" % (rating))
self.update_sorting_score()
def remove_rating(self, rating):
if self.count == 0:
logging.warn(
"AggregateRating: called remove_rating with count = 0")
return
if self.count == 1:
self.rating = 0.0
else:
self.rating = float(self.num_approves - rating) / (self.count - 1)
self.count -= 1
# TODO(Sandy): Temporary debugging
self.debug_logging("remove_rating(%s)" % (rating))
self.update_sorting_score()
def add_aggregate_rating(self, ar):
if ar.count == 0:
return
total = ar.rating * ar.count
self.rating = (float(self.num_approves + total) /
(self.count + ar.count))
self.count += ar.count
# TODO(Sandy): Temporary debugging
self.debug_logging("add_aggregate_rating(%s)" % (ar))
self.update_sorting_score()
def to_dict(self):
return {
'rating': self.rating,
'count': self.count,
}
def to_json(self):
return json.dumps(self.to_dict())
def update_aggregate_after_replacement(self, old_value, new_value):
if old_value is None and new_value is None:
# Rating not changed
pass
elif old_value is None:
# New rating, add new_value to the aggregate
self.add_rating(new_value)
elif new_value is None:
# Removed a rating, remove old_value from the aggregate
self.remove_rating(old_value)
elif old_value != new_value:
# Modified a rating, removing old_value and add new_value to the
# aggregate
self.remove_rating(old_value)
self.add_rating(new_value)
@classmethod
def from_json(cls, json_str):
obj = json.loads(json_str)
return cls(**obj)
# TODO(david): Does not make sense to make aggregate rating from one rating
@classmethod
def from_single_rating(cls, value):
return cls(rating=value, count=1)
def get_overall_rating(ar_ratings):
sum_ratings = sum(r['rating'] * r['count'] for r in ar_ratings)
num_ratings = sum(r['count'] for r in ar_ratings)
return AggregateRating(
count=max(r['count'] for r in ar_ratings) if ar_ratings else 0,
rating=sum_ratings / max(num_ratings, 1),
)
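# Illustrative sketch, not part of the original module; the numbers are made
# up. It shows the running average kept by AggregateRating and the pooled
# value produced by get_overall_rating.
def _example_aggregate_rating():
    ar = AggregateRating()
    ar.add_rating(1.0)
    ar.add_rating(0.0)
    assert (ar.rating, ar.count) == (0.5, 2)
    overall = get_overall_rating([{'rating': 1.0, 'count': 2},
                                  {'rating': 0.5, 'count': 2}])
    assert overall.rating == 0.75
    return overall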
| mit | 8,650,051,165,622,778,000 | 31.355932 | 79 | 0.58879 | false |
stevenmizuno/QGIS | python/plugins/processing/tests/GdalAlgorithmsTest.py | 9 | 5256 | # -*- coding: utf-8 -*-
"""
***************************************************************************
GdalAlgorithmTests.py
---------------------
Date : January 2016
Copyright : (C) 2016 by Matthias Kuhn
Email : [email protected]
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Matthias Kuhn'
__date__ = 'January 2016'
__copyright__ = '(C) 2016, Matthias Kuhn'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = ':%H$'
import AlgorithmsTestBase
from processing.algs.gdal.OgrToPostGis import OgrToPostGis
from processing.algs.gdal.GdalUtils import GdalUtils
from qgis.core import QgsProcessingContext
import nose2
import os
import shutil
import tempfile
from qgis.testing import (
start_app,
unittest
)
testDataPath = os.path.join(os.path.dirname(__file__), 'testdata')
class TestGdalAlgorithms(unittest.TestCase, AlgorithmsTestBase.AlgorithmsTest):
@classmethod
def setUpClass(cls):
start_app()
from processing.core.Processing import Processing
Processing.initialize()
cls.cleanup_paths = []
@classmethod
def tearDownClass(cls):
for path in cls.cleanup_paths:
shutil.rmtree(path)
def test_definition_file(self):
return 'gdal_algorithm_tests.yaml'
def testOgrLayerNameExtraction(self):
outdir = tempfile.mkdtemp()
self.cleanup_paths.append(outdir)
def _copyFile(dst):
shutil.copyfile(os.path.join(testDataPath, 'custom', 'grass7', 'weighted.csv'), dst)
# OGR provider - single layer
_copyFile(os.path.join(outdir, 'a.csv'))
name = GdalUtils.ogrLayerName(outdir)
self.assertEqual(name, 'a')
# OGR provider - multiple layers
_copyFile(os.path.join(outdir, 'b.csv'))
name1 = GdalUtils.ogrLayerName(outdir + '|layerid=0')
name2 = GdalUtils.ogrLayerName(outdir + '|layerid=1')
self.assertEqual(sorted([name1, name2]), ['a', 'b'])
name = GdalUtils.ogrLayerName(outdir + '|layerid=2')
self.assertIsNone(name)
# OGR provider - layername takes precedence
name = GdalUtils.ogrLayerName(outdir + '|layername=f')
self.assertEqual(name, 'f')
name = GdalUtils.ogrLayerName(outdir + '|layerid=0|layername=f')
self.assertEqual(name, 'f')
name = GdalUtils.ogrLayerName(outdir + '|layername=f|layerid=0')
self.assertEqual(name, 'f')
        # SQLite provider
name = GdalUtils.ogrLayerName('dbname=\'/tmp/x.sqlite\' table="t" (geometry) sql=')
self.assertEqual(name, 't')
# PostgreSQL provider
name = GdalUtils.ogrLayerName('port=5493 sslmode=disable key=\'edge_id\' srid=0 type=LineString table="city_data"."edge" (geom) sql=')
self.assertEqual(name, 'city_data.edge')
class TestGdalOgrToPostGis(unittest.TestCase):
@classmethod
def setUpClass(cls):
# start_app()
from processing.core.Processing import Processing
Processing.initialize()
@classmethod
def tearDownClass(cls):
pass
# See https://issues.qgis.org/issues/15706
def test_getConnectionString(self):
obj = OgrToPostGis()
obj.initAlgorithm({})
parameters = {}
context = QgsProcessingContext()
# NOTE: defaults are debatable, see
# https://github.com/qgis/QGIS/pull/3607#issuecomment-253971020
self.assertEqual(obj.getConnectionString(parameters, context),
"host=localhost port=5432 active_schema=public")
parameters['HOST'] = 'remote'
self.assertEqual(obj.getConnectionString(parameters, context),
"host=remote port=5432 active_schema=public")
parameters['HOST'] = ''
self.assertEqual(obj.getConnectionString(parameters, context),
"port=5432 active_schema=public")
parameters['PORT'] = '5555'
self.assertEqual(obj.getConnectionString(parameters, context),
"port=5555 active_schema=public")
parameters['PORT'] = ''
self.assertEqual(obj.getConnectionString(parameters, context),
"active_schema=public")
parameters['USER'] = 'usr'
self.assertEqual(obj.getConnectionString(parameters, context),
"active_schema=public user=usr")
parameters['PASSWORD'] = 'pwd'
self.assertEqual(obj.getConnectionString(parameters, context),
"password=pwd active_schema=public user=usr")
if __name__ == '__main__':
nose2.main()
| gpl-2.0 | -9,066,875,606,486,929,000 | 33.12987 | 142 | 0.580479 | false |
axilleas/ansible | lib/ansible/plugins/callback/mail.py | 114 | 4572 | # -*- coding: utf-8 -*-
# Copyright 2012 Dag Wieers <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import os
import smtplib
import json
from ansible.plugins.callback import CallbackBase
def mail(subject='Ansible error mail', sender=None, to=None, cc=None, bcc=None, body=None, smtphost=None):
if sender is None:
sender='<root>'
if to is None:
to='root'
if smtphost is None:
smtphost=os.getenv('SMTPHOST', 'localhost')
if body is None:
body = subject
smtp = smtplib.SMTP(smtphost)
content = 'From: %s\n' % sender
content += 'To: %s\n' % to
if cc:
content += 'Cc: %s\n' % cc
content += 'Subject: %s\n\n' % subject
content += body
addresses = to.split(',')
if cc:
addresses += cc.split(',')
if bcc:
addresses += bcc.split(',')
for address in addresses:
smtp.sendmail(sender, address, content)
smtp.quit()
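# Illustrative call, not part of the original plugin; the addresses and host
# below are made up. The callbacks in CallbackModule build sender/subject/body
# from task results and hand them to this helper in the same way:
#
#   mail(sender='"Ansible: web01" <root>',
#        to='[email protected]',
#        subject='Failed: setup',
#        body='A complete dump of the error: ...',
#        smtphost='smtp.example.org')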
class CallbackModule(CallbackBase):
"""
This Ansible callback plugin mails errors to interested parties.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'mail'
def v2_runner_on_failed(self, res, ignore_errors=False):
host = res._host.get_name()
if ignore_errors:
return
sender = '"Ansible: %s" <root>' % host
attach = res._task.action
if 'invocation' in res._result:
attach = "%s: %s" % (res._result['invocation']['module_name'], json.dumps(res._result['invocation']['module_args']))
subject = 'Failed: %s' % attach
body = 'The following task failed for host ' + host + ':\n\n%s\n\n' % attach
if 'stdout' in res._result.keys() and res._result['stdout']:
subject = res._result['stdout'].strip('\r\n').split('\n')[-1]
body += 'with the following output in standard output:\n\n' + res._result['stdout'] + '\n\n'
if 'stderr' in res._result.keys() and res._result['stderr']:
            subject = res._result['stderr'].strip('\r\n').split('\n')[-1]
body += 'with the following output in standard error:\n\n' + res._result['stderr'] + '\n\n'
if 'msg' in res._result.keys() and res._result['msg']:
subject = res._result['msg'].strip('\r\n').split('\n')[0]
body += 'with the following message:\n\n' + res._result['msg'] + '\n\n'
body += 'A complete dump of the error:\n\n' + self._dump_results(res._result)
mail(sender=sender, subject=subject, body=body)
def v2_runner_on_unreachable(self, result):
host = result._host.get_name()
res = result._result
sender = '"Ansible: %s" <root>' % host
if isinstance(res, basestring):
subject = 'Unreachable: %s' % res.strip('\r\n').split('\n')[-1]
body = 'An error occurred for host ' + host + ' with the following message:\n\n' + res
else:
subject = 'Unreachable: %s' % res['msg'].strip('\r\n').split('\n')[0]
body = 'An error occurred for host ' + host + ' with the following message:\n\n' + \
res['msg'] + '\n\nA complete dump of the error:\n\n' + str(res)
mail(sender=sender, subject=subject, body=body)
def v2_runner_on_async_failed(self, result):
host = result._host.get_name()
res = result._result
sender = '"Ansible: %s" <root>' % host
if isinstance(res, basestring):
subject = 'Async failure: %s' % res.strip('\r\n').split('\n')[-1]
body = 'An error occurred for host ' + host + ' with the following message:\n\n' + res
else:
subject = 'Async failure: %s' % res['msg'].strip('\r\n').split('\n')[0]
body = 'An error occurred for host ' + host + ' with the following message:\n\n' + \
res['msg'] + '\n\nA complete dump of the error:\n\n' + str(res)
mail(sender=sender, subject=subject, body=body)
| gpl-3.0 | 7,918,595,190,212,266,000 | 37.420168 | 129 | 0.59252 | false |
racitup/django-currencies | example/settings.py | 2 | 2709 | """
Django settings for example project.
For more information on this file, see
https://docs.djangoproject.com/en/1.7/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.7/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.7/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'YOUR_SECRET_KEY'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
TEMPLATE_DIRS = (
os.path.join(os.path.dirname(__file__), 'templates'),
)
TEMPLATE_CONTEXT_PROCESSORS = (
"django.contrib.auth.context_processors.auth",
"django.core.context_processors.tz",
"django.core.context_processors.debug",
"django.core.context_processors.i18n",
"django.core.context_processors.media",
"django.core.context_processors.static",
"django.core.context_processors.request",
"django.contrib.messages.context_processors.messages",
"currencies.context_processors.currencies",
)
# Application definition
PROJECT_APPS = [
'currencies',
]
INSTALLED_APPS = [
'django.contrib.auth',
'django.contrib.admin',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.contenttypes',
] + PROJECT_APPS
import django
if django.VERSION < (1, 7):
INSTALLED_APPS += [
'south',
]
MIDDLEWARE_CLASSES = (
# 'django.middleware.cache.UpdateCacheMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
# 'django.middleware.cache.FetchFromCacheMiddleware',
)
ROOT_URLCONF = 'example.urls'
SITE_ID = 1
# Database
# https://docs.djangoproject.com/en/1.7/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.7/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_URL = '/static/'
OPENEXCHANGERATES_APP_ID = "38aceb88e3154a649cf9b0f6e4214598"
| bsd-3-clause | 5,694,972,170,673,474,000 | 23.627273 | 71 | 0.711333 | false |
gameduell/duell | bin/win/python2.7.9/Lib/site-packages/pip/_vendor/requests/models.py | 277 | 26436 | # -*- coding: utf-8 -*-
"""
requests.models
~~~~~~~~~~~~~~~
This module contains the primary objects that power Requests.
"""
import collections
import datetime
from io import BytesIO, UnsupportedOperation
from .hooks import default_hooks
from .structures import CaseInsensitiveDict
from .auth import HTTPBasicAuth
from .cookies import cookiejar_from_dict, get_cookie_header
from .packages.urllib3.fields import RequestField
from .packages.urllib3.filepost import encode_multipart_formdata
from .packages.urllib3.util import parse_url
from .packages.urllib3.exceptions import DecodeError
from .exceptions import (
HTTPError, RequestException, MissingSchema, InvalidURL,
ChunkedEncodingError, ContentDecodingError)
from .utils import (
guess_filename, get_auth_from_url, requote_uri,
stream_decode_response_unicode, to_key_val_list, parse_header_links,
iter_slices, guess_json_utf, super_len, to_native_string)
from .compat import (
cookielib, urlunparse, urlsplit, urlencode, str, bytes, StringIO,
is_py2, chardet, json, builtin_str, basestring, IncompleteRead)
from .status_codes import codes
#: The set of HTTP status codes that indicate an automatically
#: processable redirect.
REDIRECT_STATI = (
codes.moved, # 301
codes.found, # 302
codes.other, # 303
codes.temporary_moved, # 307
)
DEFAULT_REDIRECT_LIMIT = 30
CONTENT_CHUNK_SIZE = 10 * 1024
ITER_CHUNK_SIZE = 512
class RequestEncodingMixin(object):
@property
def path_url(self):
"""Build the path URL to use."""
url = []
p = urlsplit(self.url)
path = p.path
if not path:
path = '/'
url.append(path)
query = p.query
if query:
url.append('?')
url.append(query)
return ''.join(url)
@staticmethod
def _encode_params(data):
"""Encode parameters in a piece of data.
Will successfully encode parameters when passed as a dict or a list of
2-tuples. Order is retained if data is a list of 2-tuples but arbitrary
if parameters are supplied as a dict.
"""
if isinstance(data, (str, bytes)):
return data
elif hasattr(data, 'read'):
return data
elif hasattr(data, '__iter__'):
result = []
for k, vs in to_key_val_list(data):
if isinstance(vs, basestring) or not hasattr(vs, '__iter__'):
vs = [vs]
for v in vs:
if v is not None:
result.append(
(k.encode('utf-8') if isinstance(k, str) else k,
v.encode('utf-8') if isinstance(v, str) else v))
return urlencode(result, doseq=True)
else:
return data
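    # Illustrative behaviour, not part of the original source: for a mapping
    # or a list of 2-tuples this helper yields a standard query string, e.g.
    #
    #   RequestEncodingMixin._encode_params([('tag', 'a'), ('tag', 'b')])
    #   -> 'tag=a&tag=b'
    #
    #   RequestEncodingMixin._encode_params({'q': 'requests', 'page': 2})
    #   -> 'page=2&q=requests'   (dict ordering is arbitrary)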
@staticmethod
def _encode_files(files, data):
"""Build the body for a multipart/form-data request.
Will successfully encode files when passed as a dict or a list of
2-tuples. Order is retained if data is a list of 2-tuples but arbitrary
if parameters are supplied as a dict.
"""
if (not files):
raise ValueError("Files must be provided.")
elif isinstance(data, basestring):
raise ValueError("Data must not be a string.")
new_fields = []
fields = to_key_val_list(data or {})
files = to_key_val_list(files or {})
for field, val in fields:
if isinstance(val, basestring) or not hasattr(val, '__iter__'):
val = [val]
for v in val:
if v is not None:
# Don't call str() on bytestrings: in Py3 it all goes wrong.
if not isinstance(v, bytes):
v = str(v)
new_fields.append(
(field.decode('utf-8') if isinstance(field, bytes) else field,
v.encode('utf-8') if isinstance(v, str) else v))
for (k, v) in files:
# support for explicit filename
ft = None
fh = None
if isinstance(v, (tuple, list)):
if len(v) == 2:
fn, fp = v
elif len(v) == 3:
fn, fp, ft = v
else:
fn, fp, ft, fh = v
else:
fn = guess_filename(v) or k
fp = v
if isinstance(fp, str):
fp = StringIO(fp)
if isinstance(fp, bytes):
fp = BytesIO(fp)
rf = RequestField(name=k, data=fp.read(),
filename=fn, headers=fh)
rf.make_multipart(content_type=ft)
new_fields.append(rf)
body, content_type = encode_multipart_formdata(new_fields)
return body, content_type
class RequestHooksMixin(object):
def register_hook(self, event, hook):
"""Properly register a hook."""
if event not in self.hooks:
raise ValueError('Unsupported event specified, with event name "%s"' % (event))
if isinstance(hook, collections.Callable):
self.hooks[event].append(hook)
elif hasattr(hook, '__iter__'):
self.hooks[event].extend(h for h in hook if isinstance(h, collections.Callable))
def deregister_hook(self, event, hook):
"""Deregister a previously registered hook.
Returns True if the hook existed, False if not.
"""
try:
self.hooks[event].remove(hook)
return True
except ValueError:
return False
class Request(RequestHooksMixin):
"""A user-created :class:`Request <Request>` object.
Used to prepare a :class:`PreparedRequest <PreparedRequest>`, which is sent to the server.
:param method: HTTP method to use.
:param url: URL to send.
:param headers: dictionary of headers to send.
:param files: dictionary of {filename: fileobject} files to multipart upload.
    :param data: the body to attach to the request. If a dictionary is provided, form-encoding will take place.
:param params: dictionary of URL parameters to append to the URL.
:param auth: Auth handler or (user, pass) tuple.
:param cookies: dictionary or CookieJar of cookies to attach to this request.
:param hooks: dictionary of callback hooks, for internal usage.
Usage::
>>> import requests
>>> req = requests.Request('GET', 'http://httpbin.org/get')
>>> req.prepare()
<PreparedRequest [GET]>
"""
def __init__(self,
method=None,
url=None,
headers=None,
files=None,
data=None,
params=None,
auth=None,
cookies=None,
hooks=None):
# Default empty dicts for dict params.
data = [] if data is None else data
files = [] if files is None else files
headers = {} if headers is None else headers
params = {} if params is None else params
hooks = {} if hooks is None else hooks
self.hooks = default_hooks()
for (k, v) in list(hooks.items()):
self.register_hook(event=k, hook=v)
self.method = method
self.url = url
self.headers = headers
self.files = files
self.data = data
self.params = params
self.auth = auth
self.cookies = cookies
def __repr__(self):
return '<Request [%s]>' % (self.method)
def prepare(self):
"""Constructs a :class:`PreparedRequest <PreparedRequest>` for transmission and returns it."""
p = PreparedRequest()
p.prepare(
method=self.method,
url=self.url,
headers=self.headers,
files=self.files,
data=self.data,
params=self.params,
auth=self.auth,
cookies=self.cookies,
hooks=self.hooks,
)
return p
class PreparedRequest(RequestEncodingMixin, RequestHooksMixin):
"""The fully mutable :class:`PreparedRequest <PreparedRequest>` object,
containing the exact bytes that will be sent to the server.
Generated from either a :class:`Request <Request>` object or manually.
Usage::
>>> import requests
>>> req = requests.Request('GET', 'http://httpbin.org/get')
>>> r = req.prepare()
<PreparedRequest [GET]>
>>> s = requests.Session()
>>> s.send(r)
<Response [200]>
"""
def __init__(self):
#: HTTP verb to send to the server.
self.method = None
#: HTTP URL to send the request to.
self.url = None
#: dictionary of HTTP headers.
self.headers = None
# The `CookieJar` used to create the Cookie header will be stored here
# after prepare_cookies is called
self._cookies = None
#: request body to send to the server.
self.body = None
#: dictionary of callback hooks, for internal usage.
self.hooks = default_hooks()
def prepare(self, method=None, url=None, headers=None, files=None,
data=None, params=None, auth=None, cookies=None, hooks=None):
"""Prepares the entire request with the given parameters."""
self.prepare_method(method)
self.prepare_url(url, params)
self.prepare_headers(headers)
self.prepare_cookies(cookies)
self.prepare_body(data, files)
self.prepare_auth(auth, url)
# Note that prepare_auth must be last to enable authentication schemes
# such as OAuth to work on a fully prepared request.
# This MUST go after prepare_auth. Authenticators could add a hook
self.prepare_hooks(hooks)
def __repr__(self):
return '<PreparedRequest [%s]>' % (self.method)
def copy(self):
p = PreparedRequest()
p.method = self.method
p.url = self.url
p.headers = self.headers.copy()
p._cookies = self._cookies.copy()
p.body = self.body
p.hooks = self.hooks
return p
def prepare_method(self, method):
"""Prepares the given HTTP method."""
self.method = method
if self.method is not None:
self.method = self.method.upper()
def prepare_url(self, url, params):
"""Prepares the given HTTP URL."""
#: Accept objects that have string representations.
try:
url = unicode(url)
except NameError:
# We're on Python 3.
url = str(url)
except UnicodeDecodeError:
pass
# Don't do any URL preparation for oddball schemes
if ':' in url and not url.lower().startswith('http'):
self.url = url
return
# Support for unicode domain names and paths.
scheme, auth, host, port, path, query, fragment = parse_url(url)
if not scheme:
raise MissingSchema("Invalid URL {0!r}: No schema supplied. "
"Perhaps you meant http://{0}?".format(url))
if not host:
raise InvalidURL("Invalid URL %r: No host supplied" % url)
# Only want to apply IDNA to the hostname
try:
host = host.encode('idna').decode('utf-8')
except UnicodeError:
raise InvalidURL('URL has an invalid label.')
# Carefully reconstruct the network location
netloc = auth or ''
if netloc:
netloc += '@'
netloc += host
if port:
netloc += ':' + str(port)
# Bare domains aren't valid URLs.
if not path:
path = '/'
if is_py2:
if isinstance(scheme, str):
scheme = scheme.encode('utf-8')
if isinstance(netloc, str):
netloc = netloc.encode('utf-8')
if isinstance(path, str):
path = path.encode('utf-8')
if isinstance(query, str):
query = query.encode('utf-8')
if isinstance(fragment, str):
fragment = fragment.encode('utf-8')
enc_params = self._encode_params(params)
if enc_params:
if query:
query = '%s&%s' % (query, enc_params)
else:
query = enc_params
url = requote_uri(urlunparse([scheme, netloc, path, None, query, fragment]))
self.url = url
def prepare_headers(self, headers):
"""Prepares the given HTTP headers."""
if headers:
self.headers = CaseInsensitiveDict((to_native_string(name), value) for name, value in headers.items())
else:
self.headers = CaseInsensitiveDict()
def prepare_body(self, data, files):
"""Prepares the given HTTP body data."""
# Check if file, fo, generator, iterator.
# If not, run through normal process.
# Nottin' on you.
body = None
content_type = None
length = None
is_stream = all([
hasattr(data, '__iter__'),
not isinstance(data, (basestring, list, tuple, dict))
])
try:
length = super_len(data)
except (TypeError, AttributeError, UnsupportedOperation):
length = None
if is_stream:
body = data
if files:
raise NotImplementedError('Streamed bodies and files are mutually exclusive.')
if length is not None:
self.headers['Content-Length'] = builtin_str(length)
else:
self.headers['Transfer-Encoding'] = 'chunked'
else:
# Multi-part file uploads.
if files:
(body, content_type) = self._encode_files(files, data)
else:
if data:
body = self._encode_params(data)
if isinstance(data, str) or isinstance(data, builtin_str) or hasattr(data, 'read'):
content_type = None
else:
content_type = 'application/x-www-form-urlencoded'
self.prepare_content_length(body)
# Add content-type if it wasn't explicitly provided.
if (content_type) and (not 'content-type' in self.headers):
self.headers['Content-Type'] = content_type
self.body = body
def prepare_content_length(self, body):
if hasattr(body, 'seek') and hasattr(body, 'tell'):
body.seek(0, 2)
self.headers['Content-Length'] = builtin_str(body.tell())
body.seek(0, 0)
elif body is not None:
l = super_len(body)
if l:
self.headers['Content-Length'] = builtin_str(l)
elif self.method not in ('GET', 'HEAD'):
self.headers['Content-Length'] = '0'
def prepare_auth(self, auth, url=''):
"""Prepares the given HTTP auth data."""
# If no Auth is explicitly provided, extract it from the URL first.
if auth is None:
url_auth = get_auth_from_url(self.url)
auth = url_auth if any(url_auth) else None
if auth:
if isinstance(auth, tuple) and len(auth) == 2:
# special-case basic HTTP auth
auth = HTTPBasicAuth(*auth)
# Allow auth to make its changes.
r = auth(self)
# Update self to reflect the auth changes.
self.__dict__.update(r.__dict__)
# Recompute Content-Length
self.prepare_content_length(self.body)
def prepare_cookies(self, cookies):
"""Prepares the given HTTP cookie data."""
if isinstance(cookies, cookielib.CookieJar):
self._cookies = cookies
else:
self._cookies = cookiejar_from_dict(cookies)
cookie_header = get_cookie_header(self._cookies, self)
if cookie_header is not None:
self.headers['Cookie'] = cookie_header
def prepare_hooks(self, hooks):
"""Prepares the given hooks."""
for event in hooks:
self.register_hook(event, hooks[event])
class Response(object):
"""The :class:`Response <Response>` object, which contains a
server's response to an HTTP request.
"""
__attrs__ = [
'_content',
'status_code',
'headers',
'url',
'history',
'encoding',
'reason',
'cookies',
'elapsed',
'request',
]
def __init__(self):
super(Response, self).__init__()
self._content = False
self._content_consumed = False
#: Integer Code of responded HTTP Status, e.g. 404 or 200.
self.status_code = None
#: Case-insensitive Dictionary of Response Headers.
#: For example, ``headers['content-encoding']`` will return the
#: value of a ``'Content-Encoding'`` response header.
self.headers = CaseInsensitiveDict()
#: File-like object representation of response (for advanced usage).
#: Use of ``raw`` requires that ``stream=True`` be set on the request.
# This requirement does not apply for use internally to Requests.
self.raw = None
#: Final URL location of Response.
self.url = None
#: Encoding to decode with when accessing r.text.
self.encoding = None
#: A list of :class:`Response <Response>` objects from
#: the history of the Request. Any redirect responses will end
#: up here. The list is sorted from the oldest to the most recent request.
self.history = []
#: Textual reason of responded HTTP Status, e.g. "Not Found" or "OK".
self.reason = None
#: A CookieJar of Cookies the server sent back.
self.cookies = cookiejar_from_dict({})
#: The amount of time elapsed between sending the request
#: and the arrival of the response (as a timedelta)
self.elapsed = datetime.timedelta(0)
def __getstate__(self):
# Consume everything; accessing the content attribute makes
# sure the content has been fully read.
if not self._content_consumed:
self.content
return dict(
(attr, getattr(self, attr, None))
for attr in self.__attrs__
)
def __setstate__(self, state):
for name, value in state.items():
setattr(self, name, value)
# pickled objects do not have .raw
setattr(self, '_content_consumed', True)
setattr(self, 'raw', None)
def __repr__(self):
return '<Response [%s]>' % (self.status_code)
def __bool__(self):
"""Returns true if :attr:`status_code` is 'OK'."""
return self.ok
def __nonzero__(self):
"""Returns true if :attr:`status_code` is 'OK'."""
return self.ok
def __iter__(self):
"""Allows you to use a response as an iterator."""
return self.iter_content(128)
@property
def ok(self):
try:
self.raise_for_status()
except RequestException:
return False
return True
@property
def is_redirect(self):
"""True if this Response is a well-formed HTTP redirect that could have
been processed automatically (by :meth:`Session.resolve_redirects`).
"""
return ('location' in self.headers and self.status_code in REDIRECT_STATI)
@property
def apparent_encoding(self):
"""The apparent encoding, provided by the chardet library"""
return chardet.detect(self.content)['encoding']
def iter_content(self, chunk_size=1, decode_unicode=False):
"""Iterates over the response data. When stream=True is set on the
request, this avoids reading the content at once into memory for
large responses. The chunk size is the number of bytes it should
read into memory. This is not necessarily the length of each item
returned as decoding can take place.
If decode_unicode is True, content will be decoded using the best
available encoding based on the response.
"""
def generate():
try:
# Special case for urllib3.
try:
for chunk in self.raw.stream(chunk_size, decode_content=True):
yield chunk
except IncompleteRead as e:
raise ChunkedEncodingError(e)
except DecodeError as e:
raise ContentDecodingError(e)
except AttributeError:
# Standard file-like object.
while True:
chunk = self.raw.read(chunk_size)
if not chunk:
break
yield chunk
self._content_consumed = True
# simulate reading small chunks of the content
reused_chunks = iter_slices(self._content, chunk_size)
stream_chunks = generate()
chunks = reused_chunks if self._content_consumed else stream_chunks
if decode_unicode:
chunks = stream_decode_response_unicode(chunks, self)
return chunks
def iter_lines(self, chunk_size=ITER_CHUNK_SIZE, decode_unicode=None):
"""Iterates over the response data, one line at a time. When
stream=True is set on the request, this avoids reading the
content at once into memory for large responses.
"""
pending = None
for chunk in self.iter_content(chunk_size=chunk_size, decode_unicode=decode_unicode):
if pending is not None:
chunk = pending + chunk
lines = chunk.splitlines()
if lines and lines[-1] and chunk and lines[-1][-1] == chunk[-1]:
pending = lines.pop()
else:
pending = None
for line in lines:
yield line
if pending is not None:
yield pending
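    # Illustrative usage, not from the original source; the URL is a
    # placeholder and process() is a hypothetical callback. With stream=True
    # the body is read lazily and re-assembled into lines:
    #
    #   r = requests.get('http://example.org/stream', stream=True)
    #   for line in r.iter_lines(chunk_size=1024):
    #       if line:
    #           process(line)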
@property
def content(self):
"""Content of the response, in bytes."""
if self._content is False:
# Read the contents.
try:
if self._content_consumed:
raise RuntimeError(
'The content for this response was already consumed')
if self.status_code == 0:
self._content = None
else:
self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
except AttributeError:
self._content = None
self._content_consumed = True
# don't need to release the connection; that's been handled by urllib3
# since we exhausted the data.
return self._content
@property
def text(self):
"""Content of the response, in unicode.
If Response.encoding is None, encoding will be guessed using
``chardet``.
The encoding of the response content is determined based solely on HTTP
headers, following RFC 2616 to the letter. If you can take advantage of
non-HTTP knowledge to make a better guess at the encoding, you should
set ``r.encoding`` appropriately before accessing this property.
"""
# Try charset from content-type
content = None
encoding = self.encoding
if not self.content:
return str('')
# Fallback to auto-detected encoding.
if self.encoding is None:
encoding = self.apparent_encoding
# Decode unicode from given encoding.
try:
content = str(self.content, encoding, errors='replace')
except (LookupError, TypeError):
# A LookupError is raised if the encoding was not found which could
# indicate a misspelling or similar mistake.
#
# A TypeError can be raised if encoding is None
#
# So we try blindly encoding.
content = str(self.content, errors='replace')
return content
def json(self, **kwargs):
"""Returns the json-encoded content of a response, if any.
:param \*\*kwargs: Optional arguments that ``json.loads`` takes.
"""
if not self.encoding and len(self.content) > 3:
# No encoding set. JSON RFC 4627 section 3 states we should expect
# UTF-8, -16 or -32. Detect which one to use; If the detection or
# decoding fails, fall back to `self.text` (using chardet to make
# a best guess).
encoding = guess_json_utf(self.content)
if encoding is not None:
try:
return json.loads(self.content.decode(encoding), **kwargs)
except UnicodeDecodeError:
# Wrong UTF codec detected; usually because it's not UTF-8
# but some other 8-bit codec. This is an RFC violation,
# and the server didn't bother to tell us what codec *was*
# used.
pass
return json.loads(self.text, **kwargs)
@property
def links(self):
"""Returns the parsed header links of the response, if any."""
header = self.headers.get('link')
# l = MultiDict()
l = {}
if header:
links = parse_header_links(header)
for link in links:
key = link.get('rel') or link.get('url')
l[key] = link
return l
def raise_for_status(self):
"""Raises stored :class:`HTTPError`, if one occurred."""
http_error_msg = ''
if 400 <= self.status_code < 500:
http_error_msg = '%s Client Error: %s' % (self.status_code, self.reason)
elif 500 <= self.status_code < 600:
http_error_msg = '%s Server Error: %s' % (self.status_code, self.reason)
if http_error_msg:
raise HTTPError(http_error_msg, response=self)
def close(self):
"""Releases the connection back to the pool. Once this method has been
called the underlying ``raw`` object must not be accessed again.
*Note: Should not normally need to be called explicitly.*
"""
return self.raw.release_conn()
| bsd-2-clause | 5,847,116,123,971,238,000 | 31.921544 | 114 | 0.565857 | false |
rwl/PyCIM | CIM15/IEC61970/Core/BasePower.py | 1 | 1832 | # Copyright (C) 2010-2011 Richard Lincoln
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
# IN THE SOFTWARE.
from CIM15.IEC61970.Core.IdentifiedObject import IdentifiedObject
class BasePower(IdentifiedObject):
"""The BasePower class defines the base power used in the per unit calculations.The BasePower class defines the base power used in the per unit calculations.
"""
def __init__(self, basePower=0.0, *args, **kw_args):
"""Initialises a new 'BasePower' instance.
@param basePower: Definition of base power.
"""
#: Definition of base power.
self.basePower = basePower
super(BasePower, self).__init__(*args, **kw_args)
_attrs = ["basePower"]
_attr_types = {"basePower": float}
_defaults = {"basePower": 0.0}
_enums = {}
_refs = []
_many_refs = []
| mit | -7,089,260,728,300,321,000 | 41.604651 | 161 | 0.721616 | false |
Sofcom/treeio | treeio/finance/api/urls.py | 3 | 3231 | # encoding: utf-8
# Copyright 2011 Tree.io Limited
# This file is part of Treeio.
# License www.tree.io/license
# -*- coding: utf-8 -*-
import handlers
from treeio.core.api.auth import auth_engine
from treeio.core.api.doc import documentation_view
from treeio.core.api.resource import CsrfExemptResource
from django.conf.urls import *
ad = {'authentication': auth_engine}
# finance resources
currencyResource = CsrfExemptResource(handler=handlers.CurrencyHandler, **ad)
taxResource = CsrfExemptResource(handler=handlers.TaxHandler, **ad)
categoryResource = CsrfExemptResource(handler=handlers.CategoryHandler, **ad)
assetResource = CsrfExemptResource(handler=handlers.AssetHandler, **ad)
accountResource = CsrfExemptResource(handler=handlers.AccountHandler, **ad)
equityResource = CsrfExemptResource(handler=handlers.EquityHandler, **ad)
liabilityResource = CsrfExemptResource(handler=handlers.LiabilityHandler, **ad)
transactionResource = CsrfExemptResource(
handler=handlers.TransactionHandler, **ad)
urlpatterns = patterns('',
# Finance
url(r'^doc$', documentation_view, kwargs={
'module': handlers}, name="api_finance_doc"),
url(r'^currencies$', currencyResource,
name="api_finance_currencies"),
url(r'^currency/(?P<object_ptr>\d+)',
currencyResource, name="api_finance_currencies"),
url(r'^taxes$', taxResource, name='api_finance_taxes'),
url(r'^tax/(?P<object_ptr>\d+)',
taxResource, name='api_finance_taxes'),
url(r'^categories$', categoryResource,
name='api_finance_categories'),
url(r'^category/(?P<object_ptr>\d+)',
categoryResource, name='api_finance_categories'),
url(r'^assets$', assetResource,
name='api_finance_assets'),
url(r'^asset/(?P<object_ptr>\d+)',
assetResource, name='api_finance_assets'),
url(r'^accounts$', accountResource,
name='api_finance_accounts'),
url(r'^account/(?P<object_ptr>\d+)',
accountResource, name='api_finance_accounts'),
url(r'^equities$', equityResource,
name='api_finance_equities'),
url(r'^equity/(?P<object_ptr>\d+)',
equityResource, name='api_finance_equities'),
url(r'^liabilities$', liabilityResource,
name='api_finance_liabilities'),
url(r'^liability/(?P<object_ptr>\d+)',
liabilityResource, name='api_finance_liabilities'),
url(r'^transactions$', transactionResource,
name='api_finance_transactions'),
url(r'^transaction/(?P<object_ptr>\d+)',
transactionResource, name='api_finance_transactions'),
)
| mit | 7,804,919,125,094,325,000 | 50.285714 | 81 | 0.553389 | false |
CongBaoBao/bcloud | bcloud/PropertiesDialog.py | 10 | 4825 |
# Copyright (C) 2014-2015 LiuLang <[email protected]>
# Use of this source code is governed by GPLv3 license that can be found
# in http://www.gnu.org/licenses/gpl-3.0.html
import os
import time
from gi.repository import Gtk
from bcloud import Config
_ = Config._
from bcloud import util
from bcloud.Widgets import LeftLabel
from bcloud.Widgets import SelectableLeftLabel
(PIXBUF_COL, NAME_COL, PATH_COL, TOOLTIP_COL, SIZE_COL, HUMAN_SIZE_COL,
ISDIR_COL, MTIME_COL, HUMAN_MTIME_COL, TYPE_COL, PCS_FILE_COL) = list(
range(11))
class PropertiesDialog(Gtk.Dialog):
def __init__(self, parent, app, pcs_file):
file_path, file_name = os.path.split(pcs_file['path'])
super().__init__(file_name + _(' Properties'), app.window,
Gtk.DialogFlags.MODAL,
(Gtk.STOCK_CLOSE, Gtk.ResponseType.CLOSE))
self.set_default_response(Gtk.ResponseType.CLOSE)
self.set_border_width(15)
#self.set_default_size(640, 480)
box = self.get_content_area()
grid = Gtk.Grid()
grid.props.row_spacing = 8
if Config.GTK_GE_312:
grid.props.margin_start = 15
else:
grid.props.margin_left = 15
grid.props.column_spacing = 15
box.pack_start(grid, True, True, 10)
name_label = LeftLabel(_('Name:'))
grid.attach(name_label, 0, 0, 1, 1)
name_label2 = SelectableLeftLabel(file_name)
grid.attach(name_label2, 1, 0, 1, 1)
location_label = LeftLabel(_('Location:'))
grid.attach(location_label, 0, 2, 1, 1)
location_label2 = SelectableLeftLabel(file_path)
grid.attach(location_label2, 1, 2, 1, 1)
if pcs_file['isdir']:
pass
else:
            size_label = LeftLabel(_('Size:'))
grid.attach(size_label, 0, 1, 1, 1)
size_human, size_comma = util.get_human_size(pcs_file['size'])
if size_human:
size_text = ''.join([str(size_human), ' (', size_comma,
_(' bytes'), ')'])
else:
size_text = size_comma + _(' bytes')
size_label2 = SelectableLeftLabel(size_text)
grid.attach(size_label2, 1, 1, 1, 1)
md5_label = LeftLabel('MD5:')
grid.attach(md5_label, 0, 3, 1, 1)
md5_label2 = SelectableLeftLabel(pcs_file['md5'])
grid.attach(md5_label2, 1, 3, 1, 1)
id_label = LeftLabel('FS ID:')
grid.attach(id_label, 0, 4, 1, 1)
id_label2 = SelectableLeftLabel(pcs_file['fs_id'])
grid.attach(id_label2, 1, 4, 1, 1)
ctime_label = LeftLabel(_('Created:'))
grid.attach(ctime_label, 0, 5, 1, 1)
ctime_label2 = SelectableLeftLabel(time.ctime(pcs_file['server_ctime']))
grid.attach(ctime_label2, 1, 5, 1, 1)
mtime_label = LeftLabel(_('Modified:'))
grid.attach(mtime_label, 0, 6, 1, 1)
mtime_label2 = SelectableLeftLabel(time.ctime(pcs_file['server_mtime']))
grid.attach(mtime_label2, 1, 6, 1, 1)
box.show_all()
class FolderPropertyDialog(Gtk.Dialog):
def __init__(self, icon_window, app, path):
file_path, file_name = os.path.split(path)
# modify file_name if path is '/'
if not file_name:
file_name = '/'
super().__init__(file_name + _(' Properties'), app.window,
Gtk.DialogFlags.MODAL,
(Gtk.STOCK_CLOSE, Gtk.ResponseType.CLOSE))
self.set_border_width(15)
box = self.get_content_area()
grid = Gtk.Grid()
grid.props.row_spacing = 8
if Config.GTK_GE_312:
grid.props.margin_start = 15
else:
grid.props.margin_left = 15
grid.props.column_spacing = 15
box.pack_start(grid, True, True, 10)
name_label = LeftLabel(_('Name:'))
grid.attach(name_label, 0, 0, 1, 1)
name_label2 = SelectableLeftLabel(file_name)
grid.attach(name_label2, 1, 0, 1, 1)
location_label = LeftLabel(_('Location:'))
grid.attach(location_label, 0, 1, 1, 1)
location_label2 = SelectableLeftLabel(file_path)
grid.attach(location_label2, 1, 1, 1, 1)
file_count = 0
folder_count = 0
for row in icon_window.liststore:
if row[ISDIR_COL]:
folder_count = folder_count + 1
else:
file_count = file_count + 1
contents = _('{0} folders, {1} files').format(folder_count, file_count)
content_label = LeftLabel(_('Contents:'))
grid.attach(content_label, 0, 2, 1, 1)
content_label2 = SelectableLeftLabel(contents)
grid.attach(content_label2, 1, 2, 1, 1)
box.show_all()
| gpl-3.0 | -6,341,817,197,452,482,000 | 34.218978 | 80 | 0.56601 | false |
nicobustillos/odoo | addons/payment_ogone/controllers/main.py | 389 | 1179 | # -*- coding: utf-8 -*-
import logging
import pprint
import werkzeug
from openerp import http, SUPERUSER_ID
from openerp.http import request
_logger = logging.getLogger(__name__)
class OgoneController(http.Controller):
_accept_url = '/payment/ogone/test/accept'
_decline_url = '/payment/ogone/test/decline'
_exception_url = '/payment/ogone/test/exception'
_cancel_url = '/payment/ogone/test/cancel'
@http.route([
'/payment/ogone/accept', '/payment/ogone/test/accept',
'/payment/ogone/decline', '/payment/ogone/test/decline',
'/payment/ogone/exception', '/payment/ogone/test/exception',
'/payment/ogone/cancel', '/payment/ogone/test/cancel',
], type='http', auth='none')
def ogone_form_feedback(self, **post):
""" Ogone contacts using GET, at least for accept """
_logger.info('Ogone: entering form_feedback with post data %s', pprint.pformat(post)) # debug
cr, uid, context = request.cr, SUPERUSER_ID, request.context
request.registry['payment.transaction'].form_feedback(cr, uid, post, 'ogone', context=context)
return werkzeug.utils.redirect(post.pop('return_url', '/'))
| agpl-3.0 | -3,440,930,928,491,942,000 | 39.655172 | 102 | 0.670908 | false |
manojhirway/ExistingImagesOnNFS | cinder/volume/drivers/eqlx.py | 15 | 23543 | # Copyright (c) 2013 Dell Inc.
# Copyright 2013 OpenStack LLC
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Volume driver for Dell EqualLogic Storage."""
import functools
import random
import eventlet
from eventlet import greenthread
import greenlet
from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_log import versionutils
from oslo_utils import excutils
from six.moves import range
from cinder import exception
from cinder.i18n import _, _LE, _LW, _LI
from cinder import ssh_utils
from cinder import utils
from cinder.volume.drivers import san
LOG = logging.getLogger(__name__)
eqlx_opts = [
cfg.StrOpt('eqlx_group_name',
default='group-0',
help='Group name to use for creating volumes. Defaults to '
'"group-0".'),
cfg.IntOpt('eqlx_cli_timeout',
default=30,
help='Timeout for the Group Manager cli command execution. '
'Default is 30. Note that this option is deprecated '
'in favour of "ssh_conn_timeout" as '
'specified in cinder/volume/drivers/san/san.py '
'and will be removed in M release.'),
cfg.IntOpt('eqlx_cli_max_retries',
default=5,
help='Maximum retry count for reconnection. Default is 5.'),
cfg.BoolOpt('eqlx_use_chap',
default=False,
help='Use CHAP authentication for targets. Note that this '
'option is deprecated in favour of "use_chap_auth" as '
'specified in cinder/volume/driver.py and will be '
'removed in next release.'),
cfg.StrOpt('eqlx_chap_login',
default='admin',
help='Existing CHAP account name. Note that this '
'option is deprecated in favour of "chap_username" as '
'specified in cinder/volume/driver.py and will be '
'removed in next release.'),
cfg.StrOpt('eqlx_chap_password',
default='password',
help='Password for specified CHAP account name. Note that this '
'option is deprecated in favour of "chap_password" as '
'specified in cinder/volume/driver.py and will be '
'removed in the next release',
secret=True),
cfg.StrOpt('eqlx_pool',
default='default',
help='Pool in which volumes will be created. Defaults '
'to "default".')
]
CONF = cfg.CONF
CONF.register_opts(eqlx_opts)
def with_timeout(f):
@functools.wraps(f)
def __inner(self, *args, **kwargs):
timeout = kwargs.pop('timeout', None)
gt = eventlet.spawn(f, self, *args, **kwargs)
if timeout is None:
return gt.wait()
else:
kill_thread = eventlet.spawn_after(timeout, gt.kill)
try:
res = gt.wait()
except greenlet.GreenletExit:
raise exception.VolumeBackendAPIException(
data="Command timed out")
else:
kill_thread.cancel()
return res
return __inner
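# Illustrative sketch, not part of the original driver; the class and numbers
# are made up. with_timeout runs the wrapped call in a greenthread and accepts
# an extra 'timeout' keyword; if the timer fires first the greenthread is
# killed and a VolumeBackendAPIException is raised:
#
#   class _Example(object):
#       @with_timeout
#       def slow_call(self):
#           greenthread.sleep(10)
#
#   _Example().slow_call(timeout=2)   # raises VolumeBackendAPIException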
class DellEQLSanISCSIDriver(san.SanISCSIDriver):
"""Implements commands for Dell EqualLogic SAN ISCSI management.
To enable the driver add the following line to the cinder configuration:
volume_driver=cinder.volume.drivers.eqlx.DellEQLSanISCSIDriver
Driver's prerequisites are:
- a separate volume group set up and running on the SAN
- SSH access to the SAN
- a special user must be created which must be able to
- create/delete volumes and snapshots;
- clone snapshots into volumes;
- modify volume access records;
The access credentials to the SAN are provided by means of the following
flags
san_ip=<ip_address>
san_login=<user name>
san_password=<user password>
san_private_key=<file containing SSH private key>
Thin provision of volumes is enabled by default, to disable it use:
san_thin_provision=false
In order to use target CHAP authentication (which is disabled by default)
SAN administrator must create a local CHAP user and specify the following
flags for the driver:
use_chap_auth=True
chap_login=<chap_login>
chap_password=<chap_password>
eqlx_group_name parameter actually represents the CLI prompt message
without '>' ending. E.g. if prompt looks like 'group-0>', then the
parameter must be set to 'group-0'
Version history:
1.0 - Initial driver
1.1.0 - Misc fixes
        1.2.0 - Deprecated eqlx_cli_timeout in favour of ssh_conn_timeout
"""
VERSION = "1.2.0"
def __init__(self, *args, **kwargs):
super(DellEQLSanISCSIDriver, self).__init__(*args, **kwargs)
self.configuration.append_config_values(eqlx_opts)
self._group_ip = None
self.sshpool = None
if self.configuration.eqlx_use_chap is True:
LOG.warning(_LW(
'Configuration options eqlx_use_chap, '
'eqlx_chap_login and eqlx_chap_password are deprecated. Use '
'use_chap_auth, chap_username and chap_password '
'respectively for the same.'))
self.configuration.use_chap_auth = (
self.configuration.eqlx_use_chap)
self.configuration.chap_username = (
self.configuration.eqlx_chap_login)
self.configuration.chap_password = (
self.configuration.eqlx_chap_password)
if self.configuration.eqlx_cli_timeout:
msg = _LW('Configuration option eqlx_cli_timeout '
'is deprecated and will be removed in M release. '
'Use ssh_conn_timeout instead.')
self.configuration.ssh_conn_timeout = (
self.configuration.eqlx_cli_timeout)
versionutils.report_deprecated_feature(LOG, msg)
def _get_output(self, chan):
out = ''
ending = '%s> ' % self.configuration.eqlx_group_name
while out.find(ending) == -1:
ret = chan.recv(102400)
if len(ret) == 0:
# According to paramiko.channel.Channel documentation, which
# says "If a string of length zero is returned, the channel
# stream has closed". So we can confirm that the EQL server
# has closed the connection.
msg = _("The EQL array has closed the connection.")
LOG.error(msg)
raise exception.VolumeBackendAPIException(data=msg)
out += ret
LOG.debug("CLI output\n%s", out)
return out.splitlines()
def _get_prefixed_value(self, lines, prefix):
for line in lines:
if line.startswith(prefix):
return line[len(prefix):]
return
@with_timeout
def _ssh_execute(self, ssh, command, *arg, **kwargs):
transport = ssh.get_transport()
chan = transport.open_session()
completed = False
try:
chan.invoke_shell()
LOG.debug("Reading CLI MOTD")
self._get_output(chan)
cmd = 'stty columns 255'
LOG.debug("Setting CLI terminal width: '%s'", cmd)
chan.send(cmd + '\r')
out = self._get_output(chan)
LOG.debug("Sending CLI command: '%s'", command)
chan.send(command + '\r')
out = self._get_output(chan)
completed = True
if any(ln.startswith(('% Error', 'Error:')) for ln in out):
desc = _("Error executing EQL command")
cmdout = '\n'.join(out)
LOG.error(_LE("%s"), cmdout)
raise processutils.ProcessExecutionError(
stdout=cmdout, cmd=command, description=desc)
return out
finally:
if not completed:
LOG.debug("Timed out executing command: '%s'", command)
chan.close()
def _run_ssh(self, cmd_list, attempts=1):
utils.check_ssh_injection(cmd_list)
        command = ' '.join(cmd_list)
if not self.sshpool:
password = self.configuration.san_password
privatekey = self.configuration.san_private_key
min_size = self.configuration.ssh_min_pool_conn
max_size = self.configuration.ssh_max_pool_conn
self.sshpool = ssh_utils.SSHPool(
self.configuration.san_ip,
self.configuration.san_ssh_port,
self.configuration.ssh_conn_timeout,
self.configuration.san_login,
password=password,
privatekey=privatekey,
min_size=min_size,
max_size=max_size)
try:
total_attempts = attempts
with self.sshpool.item() as ssh:
while attempts > 0:
attempts -= 1
try:
LOG.info(_LI('EQL-driver: executing "%s".'), command)
return self._ssh_execute(
ssh, command,
timeout=self.configuration.ssh_conn_timeout)
except Exception:
LOG.exception(_LE('Error running command.'))
greenthread.sleep(random.randint(20, 500) / 100.0)
msg = (_("SSH Command failed after '%(total_attempts)r' "
"attempts : '%(command)s'") %
{'total_attempts': total_attempts - attempts,
'command': command})
raise exception.VolumeBackendAPIException(data=msg)
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Error running SSH command: "%s".'), command)
def check_for_setup_error(self):
super(DellEQLSanISCSIDriver, self).check_for_setup_error()
if self.configuration.eqlx_cli_max_retries < 0:
raise exception.InvalidInput(
reason=_("eqlx_cli_max_retries must be greater than or "
"equal to 0"))
def _eql_execute(self, *args, **kwargs):
return self._run_ssh(
args, attempts=self.configuration.eqlx_cli_max_retries + 1)
def _get_volume_data(self, lines):
prefix = 'iSCSI target name is '
target_name = self._get_prefixed_value(lines, prefix)[:-1]
lun_id = "%s:%s,1 %s 0" % (self._group_ip, '3260', target_name)
model_update = {}
model_update['provider_location'] = lun_id
if self.configuration.use_chap_auth:
model_update['provider_auth'] = 'CHAP %s %s' % \
(self.configuration.chap_username,
self.configuration.chap_password)
return model_update
def _get_space_in_gb(self, val):
scale = 1.0
part = 'GB'
if val.endswith('MB'):
scale = 1.0 / 1024
part = 'MB'
elif val.endswith('TB'):
scale = 1.0 * 1024
part = 'TB'
return scale * float(val.partition(part)[0])
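    # Illustrative values, not from the original source: the CLI reports sizes
    # with a unit suffix and this helper normalises them to GB, e.g.
    # '512MB' -> 0.5, '120.5GB' -> 120.5, '2TB' -> 2048.0.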
def _update_volume_stats(self):
"""Retrieve stats info from eqlx group."""
LOG.debug('Updating volume stats.')
data = {}
backend_name = "eqlx"
if self.configuration:
backend_name = self.configuration.safe_get('volume_backend_name')
data["volume_backend_name"] = backend_name or 'eqlx'
data["vendor_name"] = 'Dell'
data["driver_version"] = self.VERSION
data["storage_protocol"] = 'iSCSI'
data['reserved_percentage'] = 0
data['QoS_support'] = False
data['total_capacity_gb'] = 0
data['free_capacity_gb'] = 0
for line in self._eql_execute('pool', 'select',
self.configuration.eqlx_pool, 'show'):
if line.startswith('TotalCapacity:'):
out_tup = line.rstrip().partition(' ')
data['total_capacity_gb'] = self._get_space_in_gb(out_tup[-1])
if line.startswith('FreeSpace:'):
out_tup = line.rstrip().partition(' ')
data['free_capacity_gb'] = self._get_space_in_gb(out_tup[-1])
self._stats = data
def _check_volume(self, volume):
"""Check if the volume exists on the Array."""
command = ['volume', 'select', volume['name'], 'show']
try:
self._eql_execute(*command)
except processutils.ProcessExecutionError as err:
with excutils.save_and_reraise_exception():
if err.stdout.find('does not exist.\n') > -1:
LOG.debug('Volume %s does not exist, '
'it may have already been deleted',
volume['name'])
raise exception.VolumeNotFound(volume_id=volume['id'])
def _parse_connection(self, connector, out):
"""Returns the correct connection id for the initiator.
This parses the cli output from the command
'volume select <volumename> access show'
and returns the correct connection id.
"""
lines = [line for line in out if line != '']
# Every record has 2 lines
for i in range(0, len(lines), 2):
try:
int(lines[i][0])
# sanity check
if len(lines[i + 1].split()) == 1:
check = lines[i].split()[1] + lines[i + 1].strip()
if connector['initiator'] == check:
return lines[i].split()[0]
except (IndexError, ValueError):
pass # skip the line that is not a valid access record
return None
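    # Illustrative CLI output, made up for the example: access records are
    # printed two lines per entry, with the initiator IQN wrapped onto the
    # second line. For out equal to
    #
    #   ['1   iqn.1993-08.org.debian:01:2227dab7',
    #    '                                   6c5d',
    #    '']
    #
    # and a connector whose initiator is
    # 'iqn.1993-08.org.debian:01:2227dab76c5d', the method returns '1'.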
def do_setup(self, context):
"""Disable cli confirmation and tune output format."""
try:
disabled_cli_features = ('confirmation', 'paging', 'events',
'formatoutput')
for feature in disabled_cli_features:
self._eql_execute('cli-settings', feature, 'off')
for line in self._eql_execute('grpparams', 'show'):
if line.startswith('Group-Ipaddress:'):
out_tup = line.rstrip().partition(' ')
self._group_ip = out_tup[-1]
LOG.info(_LI('EQL-driver: Setup is complete, group IP is "%s".'),
self._group_ip)
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Failed to setup the Dell EqualLogic driver.'))
def create_volume(self, volume):
"""Create a volume."""
try:
cmd = ['volume', 'create',
volume['name'], "%sG" % (volume['size'])]
if self.configuration.eqlx_pool != 'default':
cmd.append('pool')
cmd.append(self.configuration.eqlx_pool)
if self.configuration.san_thin_provision:
cmd.append('thin-provision')
out = self._eql_execute(*cmd)
self.add_multihost_access(volume)
return self._get_volume_data(out)
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Failed to create volume "%s".'), volume['name'])
def add_multihost_access(self, volume):
"""Add multihost-access to a volume. Needed for live migration."""
try:
cmd = ['volume', 'select',
volume['name'], 'multihost-access', 'enable']
self._eql_execute(*cmd)
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Failed to add multihost-access '
'for volume "%s".'),
volume['name'])
def delete_volume(self, volume):
"""Delete a volume."""
try:
self._check_volume(volume)
self._eql_execute('volume', 'select', volume['name'], 'offline')
self._eql_execute('volume', 'delete', volume['name'])
except exception.VolumeNotFound:
LOG.warning(_LW('Volume %s was not found while trying to delete '
'it.'), volume['name'])
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Failed to delete '
'volume "%s".'), volume['name'])
def create_snapshot(self, snapshot):
"""Create snapshot of existing volume on appliance."""
try:
out = self._eql_execute('volume', 'select',
snapshot['volume_name'],
'snapshot', 'create-now')
prefix = 'Snapshot name is '
snap_name = self._get_prefixed_value(out, prefix)
self._eql_execute('volume', 'select', snapshot['volume_name'],
'snapshot', 'rename', snap_name,
snapshot['name'])
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Failed to create snapshot of volume "%s".'),
snapshot['volume_name'])
def create_volume_from_snapshot(self, volume, snapshot):
"""Create new volume from other volume's snapshot on appliance."""
try:
out = self._eql_execute('volume', 'select',
snapshot['volume_name'], 'snapshot',
'select', snapshot['name'],
'clone', volume['name'])
self.add_multihost_access(volume)
return self._get_volume_data(out)
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Failed to create volume from snapshot "%s".'),
snapshot['name'])
def create_cloned_volume(self, volume, src_vref):
"""Creates a clone of the specified volume."""
try:
src_volume_name = src_vref['name']
out = self._eql_execute('volume', 'select', src_volume_name,
'clone', volume['name'])
self.add_multihost_access(volume)
return self._get_volume_data(out)
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Failed to create clone of volume "%s".'),
volume['name'])
def delete_snapshot(self, snapshot):
"""Delete volume's snapshot."""
try:
self._eql_execute('volume', 'select', snapshot['volume_name'],
'snapshot', 'delete', snapshot['name'])
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Failed to delete snapshot %(snap)s of '
'volume %(vol)s.'),
{'snap': snapshot['name'],
'vol': snapshot['volume_name']})
def initialize_connection(self, volume, connector):
"""Restrict access to a volume."""
try:
cmd = ['volume', 'select', volume['name'], 'access', 'create',
'initiator', connector['initiator']]
if self.configuration.use_chap_auth:
cmd.extend(['authmethod', 'chap', 'username',
self.configuration.chap_username])
self._eql_execute(*cmd)
iscsi_properties = self._get_iscsi_properties(volume)
return {
'driver_volume_type': 'iscsi',
'data': iscsi_properties
}
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Failed to initialize connection '
'to volume "%s".'),
volume['name'])
def terminate_connection(self, volume, connector, force=False, **kwargs):
"""Remove access restrictions from a volume."""
try:
out = self._eql_execute('volume', 'select', volume['name'],
'access', 'show')
connection_id = self._parse_connection(connector, out)
if connection_id is not None:
self._eql_execute('volume', 'select', volume['name'],
'access', 'delete', connection_id)
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Failed to terminate connection '
'to volume "%s".'),
volume['name'])
def create_export(self, context, volume, connector):
"""Create an export of a volume.
Driver has nothing to do here for the volume has been exported
already by the SAN, right after it's creation.
"""
pass
def ensure_export(self, context, volume):
"""Ensure an export of a volume.
Driver has nothing to do here for the volume has been exported
already by the SAN, right after it's creation. We will just make
sure that the volume exists on the array and issue a warning.
"""
try:
self._check_volume(volume)
except exception.VolumeNotFound:
            LOG.warning(_LW('Volume %s was not found; it may have been '
                            'deleted.'), volume['name'])
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Failed to ensure export of volume "%s".'),
volume['name'])
def remove_export(self, context, volume):
"""Remove an export of a volume.
Driver has nothing to do here for the volume has been exported
already by the SAN, right after it's creation.
Nothing to remove since there's nothing exported.
"""
pass
def extend_volume(self, volume, new_size):
"""Extend the size of the volume."""
try:
self._eql_execute('volume', 'select', volume['name'],
'size', "%sG" % new_size)
except Exception:
with excutils.save_and_reraise_exception():
LOG.error(_LE('Failed to extend_volume %(name)s from '
'%(current_size)sGB to %(new_size)sGB.'),
{'name': volume['name'],
'current_size': volume['size'],
'new_size': new_size})
def local_path(self, volume):
raise NotImplementedError()
| apache-2.0 | 1,405,562,940,059,441,700 | 39.873264 | 79 | 0.543898 | false |
shikhardb/scikit-learn | sklearn/covariance/__init__.py | 389 | 1157 | """
The :mod:`sklearn.covariance` module includes methods and algorithms to
robustly estimate the covariance of features given a set of points. The
precision matrix defined as the inverse of the covariance is also estimated.
Covariance estimation is closely related to the theory of Gaussian Graphical
Models.
"""
from .empirical_covariance_ import empirical_covariance, EmpiricalCovariance, \
log_likelihood
from .shrunk_covariance_ import shrunk_covariance, ShrunkCovariance, \
ledoit_wolf, ledoit_wolf_shrinkage, \
LedoitWolf, oas, OAS
from .robust_covariance import fast_mcd, MinCovDet
from .graph_lasso_ import graph_lasso, GraphLasso, GraphLassoCV
from .outlier_detection import EllipticEnvelope
__all__ = ['EllipticEnvelope',
'EmpiricalCovariance',
'GraphLasso',
'GraphLassoCV',
'LedoitWolf',
'MinCovDet',
'OAS',
'ShrunkCovariance',
'empirical_covariance',
'fast_mcd',
'graph_lasso',
'ledoit_wolf',
'ledoit_wolf_shrinkage',
'log_likelihood',
'oas',
'shrunk_covariance']
| bsd-3-clause | 3,775,881,959,496,661,000 | 33.029412 | 79 | 0.663786 | false |
MaT1g3R/YasenBaka | core/weeb_core.py | 1 | 12442 | import re
from datetime import datetime
from html import unescape
from pathlib import Path
from random import choice, random
from typing import List, Tuple, Union
from aiohttp_wrapper import SessionManager
from discord import Embed, File
from minoshiro import Medium, Site
from bot.anime_searcher import AnimeSearcher
from core.nsfw_core import get_lewd
from data_manager import DataManager
colours = {
Medium.ANIME: 0x1660A5,
Medium.MANGA: 0x2fbf56,
Medium.LN: 0x994647
}
site_names = {
Site.MAL: 'MAL',
Site.ANILIST: 'AL',
Site.ANIMEPLANET: 'A-P',
Site.ANIDB: 'A-DB',
Site.KITSU: 'KIT',
Site.MANGAUPDATES: 'MU',
Site.LNDB: 'LNDB',
Site.NOVELUPDATES: 'NU'
}
async def random_picture(files: List[Path], tags: Tuple[str],
session_manager: SessionManager,
data_manager: DataManager) -> Union[File, str]:
"""
    Return either a random local file or a random safebooru image for the given tags.
:param files: list of local files.
:param tags: a tuple of safebooru tags.
:param session_manager: the SessionManager.
:param data_manager: the data manager.
:return: a random image
"""
file = File(str(choice(files)))
if random() < 0.5:
return file
_, url, __ = await get_lewd(
session_manager, 'safebooru', tags, data_manager)
return url or file
async def search_weeb(ctx, search, medium):
"""
Search for an anime/manga/light novel.
:param ctx: Discord context
:param search: the search term.
:param medium: the medium type.
"""
if not search:
await ctx.send('Please enter a search term.')
return
async with ctx.typing():
res, to_be_cached, names, medium = await get_weeb_res(
ctx.bot.anime_search, search, medium
)
if isinstance(res, str):
await ctx.send(res)
else:
await ctx.send(embed=res)
if to_be_cached and names:
await ctx.bot.anime_search.cache(to_be_cached, names, medium)
async def get_weeb_res(ani_search: AnimeSearcher, search, medium):
"""
Get result for weeb search.
:param ani_search: the `AnimeSearcher`
:param search: the search term.
:param medium: the medium type.
:return: A tuple of (Search result, to_be_cached, names, medium)
"""
data, to_be_cached, names, medium = await ani_search.get(search, medium)
if not data:
return 'Sorry, nothing found.', to_be_cached, names, medium
return make_embed(data, medium), to_be_cached, names, medium
def make_embed(data, medium):
"""
Make the embed for the weeb search.
:param data: All of the data for the search.
:param medium: the medium type.
:return: An embed if the search was found, else an error message.
"""
mal = data.get(Site.MAL, {})
anilist = data.get(Site.ANILIST, {})
kitsu = data.get(Site.KITSU, {})
kitsu_attr = kitsu.get('attributes', {})
mu = data.get(Site.MANGAUPDATES, {})
anidb = data.get(Site.ANIDB, {})
nu = data.get(Site.NOVELUPDATES, {})
name = get_name(mal, anilist, kitsu_attr, anidb, mu, nu)
if not name:
return 'Sorry, nothing found.'
colour = colours[medium]
des = []
type_ = get_type(mal, kitsu_attr)
status = get_status(mal, anilist, kitsu_attr)
length = get_len(medium, mal, anilist, kitsu_attr)
genres = get_genres(anilist)
if type_:
des.append(type_.title())
if status:
des.append(f'Status: {status.title()}')
if length:
des.append(length)
if genres:
des.append(f'Genres: {", ".join(genres)}')
embed = Embed(colour=colour, title=name)
if des:
embed.description = ' | '.join(des)
pic = get_pic(mal, anilist, kitsu_attr)
if pic:
embed.set_image(url=pic)
airing_info = get_airing(mal, anilist, kitsu_attr)
if airing_info:
embed.add_field(name=':calendar: Publishing info', value=airing_info)
rating = get_rating(anilist, kitsu_attr, mu)
if rating:
embed.add_field(name=':star: Ratings', value=rating)
synopsis = get_synopsis(mal, anilist, kitsu_attr)
if synopsis:
embed.add_field(name=':blue_book: Synopsis', value=synopsis,
inline=False)
links = get_links(data)
if links:
embed.add_field(name=':link: Links', value=links, inline=False)
return embed
def get_synopsis(mal, anilist, kitsu_attr):
"""
Get the synopsis for the weeb search.
:param mal: The MAL search result.
:param anilist: The anilist search result.
:param kitsu_attr: The attributes of kitsu search result.
    :return: the synopsis text (truncated when very long), or None if unavailable.
"""
synopsis = mal.get('synopsis')
anilist_des = anilist.get('description')
if anilist_des and (not synopsis or len(anilist_des) < len(synopsis)):
synopsis = anilist_des
kitsu_des = kitsu_attr.get('synopsis')
if kitsu_des and (not synopsis or len(kitsu_des) < len(synopsis)):
synopsis = kitsu_des
if not synopsis:
return
synopsis = cleanup_des(synopsis.rstrip())
if len(synopsis) > 1000:
return f'{synopsis[:1000]}\n......'
else:
return synopsis
def cleanup_des(desc):
"""
Clean up the synopsis.
:param desc: the raw synopsis.
:return: cleaned up synopsis.
"""
    cleanr = re.compile(r'[<\[].*?[>\]]')
cleantext = re.sub(cleanr, '', desc)
return unescape(cleantext)
def get_name(mal, anilist, kitsu_attr, anidb, mu, nu):
"""
Get the name of the search.
:param mal: The MAL search result.
:param anilist: The anilist search result.
:param kitsu_attr: The attributes of kitsu search result.
:param anidb: The anidb search result.
:param mu: The manga updates search result.
:param nu: The novel updates search result.
:return: The name of the search.
"""
mal_name = mal.get('title')
if mal_name:
return mal_name
anilist = anilist.get('title', {})
anilist_name = extract(anilist, ('english', 'romaji', 'native'))
if anilist_name:
return anilist_name
kitsu = kitsu_attr.get('titles', {})
kitsu_name = extract(kitsu, ('en', 'en_jp', 'ja_jp'))
if kitsu_name:
return kitsu_name
anidb = anidb.get('titles')
if anidb:
return anidb[0]
manga_updates = mu.get('title')
if manga_updates:
return manga_updates
novel_updates = nu.get('title')
if novel_updates:
return novel_updates
def get_type(mal, kitsu_attr):
"""
Get the type of the weeb media.
:param mal: The MAL search result.
:param kitsu_attr: The attributes of kitsu search result.
:return: the type of the weeb media
"""
mal_type = mal.get('type')
if mal_type:
return mal_type
show_type = kitsu_attr.get('showType')
subtype = kitsu_attr.get('subtype')
if show_type or subtype:
return show_type or subtype
def get_status(mal, anilist, kitsu_attr):
"""
Get the airing status of the search.
:param mal: The MAL search result.
:param anilist: The anilist search result.
:param kitsu_attr: The attributes of kitsu search result.
:return: the airing status of the search.
"""
mal_status = mal.get('status')
if mal_status:
return mal_status
anilist_status = anilist.get('status')
if anilist_status:
return anilist_status
kitsu_status = kitsu_attr.get('status')
if kitsu_status:
return kitsu_status
def get_airing(mal, anilist, kitsu_attr):
"""
Get the airing dates for the search.
:param mal: The MAL search result.
:param anilist: The anilist search result.
:param kitsu_attr: The attributes of kitsu search result.
:return: the airing dates for the search.
"""
def anilist_date(d):
if not d:
return
year = d.get('year')
month = d.get('month')
day = d.get('day')
if not (year and month and day):
return
return f'{year}-{month}-{day}'
start = None
end = None
next_ = None
mal_start = mal.get('start_date')
if mal_start and not mal_start.startswith('0000'):
start = mal_start
mal_end = mal.get('end_date')
if mal_end and not mal_end.startswith('0000'):
end = mal_end
anilist_start = anilist.get('startDate')
if not start:
start = anilist_date(anilist_start)
anilist_end = anilist.get('endDate')
if not end:
end = anilist_date(anilist_end)
anilist_next = anilist.get('nextAiringEpisode', {})
if anilist_next:
anilist_next = anilist_next.get('airingAt')
try:
next_ = datetime.fromtimestamp(anilist_next).strftime('%Y-%m-%d')
except TypeError:
next_ = None
kitsu_start = kitsu_attr.get('startDate')
if not start:
start = kitsu_start
kitsu_end = kitsu_attr.get('endDate')
if not end:
end = kitsu_end
if start and end:
return f'Start date: {start} | End date: {end}'
elif start and next_:
return f'Start date: {start} | Next: {next_}'
elif start:
return f'Start date: {start}'
elif next_:
return f'Next: {next_}'
def get_len(medium, mal, anilist, kitsu_attr):
"""
Get the length of the search.
:param medium: the medium type.
:param mal: The MAL search result.
:param anilist: The anilist search result.
:param kitsu_attr: The attributes of kitsu search result.
:return: the length of the search.
"""
if medium == Medium.ANIME:
noun = 'Episodes'
anilist_mal_key = 'episodes'
kitsu_key = 'episodeCount'
else:
noun = 'Volumes'
anilist_mal_key = 'volumes'
kitsu_key = 'volumeCount'
mal_len = mal.get(anilist_mal_key)
if mal_len and mal_len != '0':
return f'{noun}: {mal_len}'
anilist_len = anilist.get(anilist_mal_key)
if anilist_len:
        return f'{noun}: {anilist_len}'
kitsu_len = kitsu_attr.get(kitsu_key)
if kitsu_len:
        return f'{noun}: {kitsu_len}'
def get_genres(anilist):
"""
Get the genres for the search.
:param anilist: The anilist search result.
:return: the genres for the search.
"""
lst = anilist.get('genres', [])
if not lst:
return
return [s for s in lst if s]
def get_links(data):
"""
Get all links for the search.
:param data: all of the search data.
:return: all links for the search.
"""
res = []
for site in Site:
site_data = data.get(site, {})
url = site_data.get('url')
if url:
res.append(f'[{site_names[site]}]({url})')
return ' | '.join(res) if res else None
def get_rating(anilist, kitsu_attr, mu):
"""
Get the rating for the search.
:param anilist: The anilist search result.
:param kitsu_attr: The attributes of kitsu search result.
:param mu: The manga updates search result.
:return: the rating for the search.
"""
res = []
anilist_rating = anilist.get('meanScore')
if anilist_rating:
res.append(f'Anilist - {anilist_rating}/100')
kitsu_rating = kitsu_attr.get('averageRating')
if kitsu_rating:
res.append(f'Kitsu - {kitsu_rating}/100')
mu_rating = mu.get('rating')
if mu_rating:
res.append(f'Manga Updates - {mu_rating}/10')
return '\n'.join(res) if res else None
def get_pic(mal, anilist, kitsu_attr):
"""
Get the image url for the search.
:param mal: The MAL search result.
:param anilist: The anilist search result.
:param kitsu_attr: The attributes of kitsu search result.
:return: the image url for the search.
"""
kitsu_img = kitsu_attr.get('coverImage', {})
kitsu_img = extract(kitsu_img, ('original', 'large'))
if kitsu_img:
return kitsu_img
anilist_img = anilist.get('coverImage', {})
anilist_img = extract(anilist_img, ('large', 'medium'))
if anilist_img:
return anilist_img
mal_img = mal.get('image')
if mal_img:
return mal_img
def extract(d, keys):
"""
Extract a key from a dict.
:param d: The dict.
:param keys: A list of keys, in order of priority.
    :return: The value of the first key, in priority order, that has one.
"""
if not d:
return
for key in keys:
tmp = d.get(key)
if tmp:
return tmp
| apache-2.0 | 8,146,474,353,584,592,000 | 27.800926 | 77 | 0.613888 | false |
lmr/autotest | tko/parsers/test/new_scenario.py | 4 | 3716 | #!/usr/bin/python
"""Create new scenario test instance from an existing results directory.
This automates creation of regression tests for the results parsers.
There are 2 primary use cases for this.
1) Bug fixing: Parser broke on some input in the field and we want
to start with a test that operates on that input and fails. We
then apply fixes to the parser implementation until it passes.
2) Regression alarms: We take input from various real scenarios that
work as expected with the parser. These will be used to ensure
we do not break the expected functionality of the parser while
refactoring it.
While much is done automatically, a scenario harness is meant to
be easily extended and configured once generated.
"""
import optparse
import os
import shutil
import sys
from os import path
try:
import autotest.common as common
except ImportError:
import common
from autotest.tko.parsers.test import scenario_base
from autotest.client.shared import autotemp
usage = 'usage: %prog [options] results_dirpath scenarios_dirpath'
parser = optparse.OptionParser(usage=usage)
parser.add_option(
'-n', '--name',
help='Name for new scenario instance. Will use dirname if not specified')
parser.add_option(
'-p', '--parser_result_tag',
default='v1',
help='Storage tag to use for initial parser result.')
parser.add_option(
'-t', '--template_type',
default='base',
help='Type of unittest module to copy into new scenario.')
def main():
(options, args) = parser.parse_args()
if len(args) < 2:
parser.print_help()
sys.exit(1)
results_dirpath = path.normpath(args[0])
if not path.exists(results_dirpath) or not path.isdir(results_dirpath):
print 'Invalid results_dirpath:', results_dirpath
parser.print_help()
sys.exit(1)
scenarios_dirpath = path.normpath(args[1])
if not path.exists(scenarios_dirpath) or not path.isdir(scenarios_dirpath):
print 'Invalid scenarios_dirpath:', scenarios_dirpath
parser.print_help()
sys.exit(1)
results_dirname = path.basename(results_dirpath)
# Not everything is a valid python package name, fix if necessary
package_dirname = scenario_base.fix_package_dirname(
options.name or results_dirname)
scenario_package_dirpath = path.join(
scenarios_dirpath, package_dirname)
if path.exists(scenario_package_dirpath):
print (
'Scenario package already exists at path: %s' %
scenario_package_dirpath)
parser.print_help()
sys.exit(1)
# Create new scenario package
os.mkdir(scenario_package_dirpath)
# Create tmp_dir
tmp_dirpath = autotemp.tempdir(unique_id='new_scenario')
copied_dirpath = path.join(tmp_dirpath.name, results_dirname)
# Copy results_dir
shutil.copytree(results_dirpath, copied_dirpath)
# scenario_base.sanitize_results_data(copied_dirpath)
# Launch parser on copied_dirpath, collect emitted test objects.
harness = scenario_base.new_parser_harness(copied_dirpath)
try:
parser_result = harness.execute()
except Exception, e:
parser_result = e
scenario_base.store_parser_result(
scenario_package_dirpath, parser_result,
options.parser_result_tag)
scenario_base.store_results_dir(
scenario_package_dirpath, copied_dirpath)
scenario_base.write_config(
scenario_package_dirpath,
status_version=harness.status_version,
parser_result_tag=options.parser_result_tag,
)
scenario_base.install_unittest_module(
scenario_package_dirpath, options.template_type)
tmp_dirpath.clean()
if __name__ == '__main__':
main()
| gpl-2.0 | -5,166,504,483,850,857,000 | 30.760684 | 79 | 0.70183 | false |
mtagle/airflow | tests/providers/amazon/aws/operators/test_sagemaker_transform.py | 4 | 4422 | #
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
import unittest
import mock
from airflow.exceptions import AirflowException
from airflow.providers.amazon.aws.hooks.sagemaker import SageMakerHook
from airflow.providers.amazon.aws.operators.sagemaker_transform import SageMakerTransformOperator
role = 'arn:aws:iam:role/test-role'
bucket = 'test-bucket'
key = 'test/data'
data_url = 's3://{}/{}'.format(bucket, key)
job_name = 'test-job-name'
model_name = 'test-model-name'
image = 'test-image'
output_url = 's3://{}/test/output'.format(bucket)
create_transform_params = {
'TransformJobName': job_name,
'ModelName': model_name,
'MaxConcurrentTransforms': '12',
'MaxPayloadInMB': '6',
'BatchStrategy': 'MultiRecord',
'TransformInput': {
'DataSource': {
'S3DataSource': {
'S3DataType': 'S3Prefix',
'S3Uri': data_url
}
}
},
'TransformOutput': {
'S3OutputPath': output_url,
},
'TransformResources': {
'InstanceType': 'ml.m4.xlarge',
'InstanceCount': '3'
}
}
create_model_params = {
'ModelName': model_name,
'PrimaryContainer': {
'Image': image,
'ModelDataUrl': output_url,
},
'ExecutionRoleArn': role
}
config = {
'Model': create_model_params,
'Transform': create_transform_params
}
class TestSageMakerTransformOperator(unittest.TestCase):
def setUp(self):
self.sagemaker = SageMakerTransformOperator(
task_id='test_sagemaker_operator',
aws_conn_id='sagemaker_test_id',
config=config,
wait_for_completion=False,
check_interval=5
)
def test_parse_config_integers(self):
self.sagemaker.parse_config_integers()
test_config = self.sagemaker.config['Transform']
self.assertEqual(test_config['TransformResources']['InstanceCount'],
int(test_config['TransformResources']['InstanceCount']))
self.assertEqual(test_config['MaxConcurrentTransforms'],
int(test_config['MaxConcurrentTransforms']))
self.assertEqual(test_config['MaxPayloadInMB'],
int(test_config['MaxPayloadInMB']))
@mock.patch.object(SageMakerHook, 'get_conn')
@mock.patch.object(SageMakerHook, 'create_model')
@mock.patch.object(SageMakerHook, 'create_transform_job')
def test_execute(self, mock_transform, mock_model, mock_client):
mock_transform.return_value = {'TransformJobArn': 'testarn',
'ResponseMetadata':
{'HTTPStatusCode': 200}}
self.sagemaker.execute(None)
mock_model.assert_called_once_with(create_model_params)
mock_transform.assert_called_once_with(create_transform_params,
wait_for_completion=False,
check_interval=5,
max_ingestion_time=None
)
@mock.patch.object(SageMakerHook, 'get_conn')
@mock.patch.object(SageMakerHook, 'create_model')
@mock.patch.object(SageMakerHook, 'create_transform_job')
def test_execute_with_failure(self, mock_transform, mock_model, mock_client):
mock_transform.return_value = {'TransformJobArn': 'testarn',
'ResponseMetadata':
{'HTTPStatusCode': 404}}
self.assertRaises(AirflowException, self.sagemaker.execute, None)
if __name__ == '__main__':
unittest.main()
| apache-2.0 | 2,331,741,496,044,163,600 | 33.818898 | 97 | 0.621212 | false |
aidanlister/django | django/contrib/gis/db/backends/spatialite/schema.py | 518 | 6882 | from django.db.backends.sqlite3.schema import DatabaseSchemaEditor
from django.db.utils import DatabaseError
class SpatialiteSchemaEditor(DatabaseSchemaEditor):
sql_add_geometry_column = (
"SELECT AddGeometryColumn(%(table)s, %(column)s, %(srid)s, "
"%(geom_type)s, %(dim)s, %(null)s)"
)
sql_add_spatial_index = "SELECT CreateSpatialIndex(%(table)s, %(column)s)"
sql_drop_spatial_index = "DROP TABLE idx_%(table)s_%(column)s"
sql_remove_geometry_metadata = "SELECT DiscardGeometryColumn(%(table)s, %(column)s)"
sql_discard_geometry_columns = "DELETE FROM %(geom_table)s WHERE f_table_name = %(table)s"
sql_update_geometry_columns = (
"UPDATE %(geom_table)s SET f_table_name = %(new_table)s "
"WHERE f_table_name = %(old_table)s"
)
geometry_tables = [
"geometry_columns",
"geometry_columns_auth",
"geometry_columns_time",
"geometry_columns_statistics",
]
def __init__(self, *args, **kwargs):
super(SpatialiteSchemaEditor, self).__init__(*args, **kwargs)
self.geometry_sql = []
def geo_quote_name(self, name):
return self.connection.ops.geo_quote_name(name)
def column_sql(self, model, field, include_default=False):
from django.contrib.gis.db.models.fields import GeometryField
if not isinstance(field, GeometryField):
return super(SpatialiteSchemaEditor, self).column_sql(model, field, include_default)
# Geometry columns are created by the `AddGeometryColumn` function
self.geometry_sql.append(
self.sql_add_geometry_column % {
"table": self.geo_quote_name(model._meta.db_table),
"column": self.geo_quote_name(field.column),
"srid": field.srid,
"geom_type": self.geo_quote_name(field.geom_type),
"dim": field.dim,
"null": int(not field.null),
}
)
if field.spatial_index:
self.geometry_sql.append(
self.sql_add_spatial_index % {
"table": self.quote_name(model._meta.db_table),
"column": self.quote_name(field.column),
}
)
return None, None
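    # Editorial note: returning (None, None) makes the base SQLite schema
    # editor skip the column definition entirely; the geometry column and its
    # optional spatial index are added afterwards from self.geometry_sql (see
    # create_model() and add_field()).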
def remove_geometry_metadata(self, model, field):
self.execute(
self.sql_remove_geometry_metadata % {
"table": self.quote_name(model._meta.db_table),
"column": self.quote_name(field.column),
}
)
self.execute(
self.sql_drop_spatial_index % {
"table": model._meta.db_table,
"column": field.column,
}
)
def create_model(self, model):
super(SpatialiteSchemaEditor, self).create_model(model)
# Create geometry columns
for sql in self.geometry_sql:
self.execute(sql)
self.geometry_sql = []
def delete_model(self, model, **kwargs):
from django.contrib.gis.db.models.fields import GeometryField
# Drop spatial metadata (dropping the table does not automatically remove them)
for field in model._meta.local_fields:
if isinstance(field, GeometryField):
self.remove_geometry_metadata(model, field)
# Make sure all geom stuff is gone
for geom_table in self.geometry_tables:
try:
self.execute(
self.sql_discard_geometry_columns % {
"geom_table": geom_table,
"table": self.quote_name(model._meta.db_table),
}
)
except DatabaseError:
pass
super(SpatialiteSchemaEditor, self).delete_model(model, **kwargs)
def add_field(self, model, field):
from django.contrib.gis.db.models.fields import GeometryField
if isinstance(field, GeometryField):
# Populate self.geometry_sql
self.column_sql(model, field)
for sql in self.geometry_sql:
self.execute(sql)
self.geometry_sql = []
else:
super(SpatialiteSchemaEditor, self).add_field(model, field)
def remove_field(self, model, field):
from django.contrib.gis.db.models.fields import GeometryField
# NOTE: If the field is a geometry field, the table is just recreated,
# the parent's remove_field can't be used cause it will skip the
# recreation if the field does not have a database type. Geometry fields
# do not have a db type cause they are added and removed via stored
# procedures.
if isinstance(field, GeometryField):
self._remake_table(model, delete_fields=[field])
else:
super(SpatialiteSchemaEditor, self).remove_field(model, field)
def alter_db_table(self, model, old_db_table, new_db_table):
from django.contrib.gis.db.models.fields import GeometryField
# Remove geometry-ness from temp table
for field in model._meta.local_fields:
if isinstance(field, GeometryField):
self.execute(
self.sql_remove_geometry_metadata % {
"table": self.quote_name(old_db_table),
"column": self.quote_name(field.column),
}
)
# Alter table
super(SpatialiteSchemaEditor, self).alter_db_table(model, old_db_table, new_db_table)
# Repoint any straggler names
for geom_table in self.geometry_tables:
try:
self.execute(
self.sql_update_geometry_columns % {
"geom_table": geom_table,
"old_table": self.quote_name(old_db_table),
"new_table": self.quote_name(new_db_table),
}
)
except DatabaseError:
pass
# Re-add geometry-ness and rename spatial index tables
for field in model._meta.local_fields:
if isinstance(field, GeometryField):
self.execute(self.sql_add_geometry_column % {
"table": self.geo_quote_name(new_db_table),
"column": self.geo_quote_name(field.column),
"srid": field.srid,
"geom_type": self.geo_quote_name(field.geom_type),
"dim": field.dim,
"null": int(not field.null),
})
if getattr(field, 'spatial_index', False):
self.execute(self.sql_rename_table % {
"old_table": self.quote_name("idx_%s_%s" % (old_db_table, field.column)),
"new_table": self.quote_name("idx_%s_%s" % (new_db_table, field.column)),
})
| bsd-3-clause | 1,256,238,944,224,622,000 | 41.481481 | 96 | 0.56379 | false |
ArabellaTech/django-page-cms | pages/widgets.py | 1 | 13014 | # -*- coding: utf-8 -*-
"""Django CMS come with a set of ready to use widgets that you can enable
in the admin via a placeholder tag in your template."""
from pages.settings import PAGES_MEDIA_URL, PAGE_TAGGING
from pages.settings import PAGE_TINYMCE, PAGE_LANGUAGES
from pages.models import Page
from pages.widgets_registry import register_widget
from django.conf import settings
from django.forms import TextInput, Textarea, HiddenInput
from django.forms import MultiWidget, FileInput
from django.contrib.admin.widgets import AdminTextInputWidget
from django.contrib.admin.widgets import AdminTextareaWidget
from django.utils.safestring import mark_safe
from django.template.loader import render_to_string
from django.core.exceptions import ObjectDoesNotExist
from django.utils.translation import ugettext as _
from os.path import join
register_widget(TextInput)
register_widget(Textarea)
register_widget(AdminTextInputWidget)
register_widget(AdminTextareaWidget)
if "filebrowser" in getattr(settings, 'INSTALLED_APPS', []):
from filebrowser.fields import FileBrowseWidget
class FileBrowseInput(FileBrowseWidget):
"""FileBrowseInput widget."""
def __init__(self, attrs={}):
super(FileBrowseInput, self).__init__(attrs)
register_widget(FileBrowseInput)
class RichTextarea(Textarea):
"""A RichTextarea widget."""
class Media:
js = [join(PAGES_MEDIA_URL, path) for path in (
'javascript/jquery.js',
)]
css = {
'all': [join(PAGES_MEDIA_URL, path) for path in (
'css/rte.css',
)]
}
def __init__(self, language=None, attrs=None, **kwargs):
attrs = {'class': 'rte'}
self.language = language
super(RichTextarea, self).__init__(attrs)
def render(self, name, value, attrs=None, **kwargs):
rendered = super(RichTextarea, self).render(name, value, attrs)
context = {
'name': name,
'PAGES_MEDIA_URL': PAGES_MEDIA_URL,
}
return rendered + mark_safe(render_to_string(
'pages/widgets/richtextarea.html', context))
register_widget(RichTextarea)
if PAGE_TINYMCE:
from tinymce import widgets as tinymce_widgets
class TinyMCE(tinymce_widgets.TinyMCE):
"""TinyMCE widget."""
def __init__(self, language=None, attrs=None, mce_attrs=None,
**kwargs):
self.language = language
if mce_attrs is None:
mce_attrs = {}
self.mce_attrs = mce_attrs
self.mce_attrs.update({
'mode': "exact",
'theme': "advanced",
'width': 640,
'height': 400,
'theme_advanced_toolbar_location': "top",
'theme_advanced_toolbar_align': "left"
})
# take into account the default settings, don't allow
# the above hard coded ones overriding them
self.mce_attrs.update(
getattr(settings, 'TINYMCE_DEFAULT_CONFIG', {}))
super(TinyMCE, self).__init__(language, attrs, mce_attrs)
register_widget(TinyMCE)
class CKEditor(Textarea):
"""CKEditor widget."""
class Media:
js = [join(PAGES_MEDIA_URL, 'ckeditor/ckeditor.js'),
join(settings.MEDIA_URL, 'filebrowser/js/FB_CKEditor.js'),
]
def __init__(self, language=None, attrs=None, **kwargs):
self.language = language
self.filebrowser = "filebrowser" in getattr(settings,
'INSTALLED_APPS', [])
self.attrs = {'class': 'ckeditor'}
if attrs:
self.attrs.update(attrs)
super(CKEditor, self).__init__(attrs)
def render(self, name, value, attrs=None, **kwargs):
rendered = super(CKEditor, self).render(name, value, attrs)
context = {
'name': name,
'filebrowser': self.filebrowser,
}
return rendered + mark_safe(render_to_string(
'pages/widgets/ckeditor.html', context))
register_widget(CKEditor)
class WYMEditor(Textarea):
"""WYMEditor widget."""
class Media:
js = [join(PAGES_MEDIA_URL, path) for path in (
'javascript/jquery.js',
'javascript/jquery.ui.js',
'javascript/jquery.ui.resizable.js',
'wymeditor/jquery.wymeditor.js',
'wymeditor/plugins/resizable/jquery.wymeditor.resizable.js',
)]
if "filebrowser" in getattr(settings, 'INSTALLED_APPS', []):
js.append(join(PAGES_MEDIA_URL,
'wymeditor/plugins/filebrowser/jquery.wymeditor.filebrowser.js'))
def __init__(self, language=None, attrs=None, **kwargs):
self.language = language or getattr(settings, 'LANGUAGE_CODE')
self.attrs = {'class': 'wymeditor'}
if attrs:
self.attrs.update(attrs)
super(WYMEditor, self).__init__(attrs)
def render(self, name, value, attrs=None, **kwargs):
rendered = super(WYMEditor, self).render(name, value, attrs)
context = {
'name': name,
'lang': self.language[:2],
'language': self.language,
'PAGES_MEDIA_URL': PAGES_MEDIA_URL,
}
context['page_link_wymeditor'] = 1
context['page_list'] = Page.objects.all().order_by('tree_id', 'lft')
context['filebrowser'] = 0
if "filebrowser" in getattr(settings, 'INSTALLED_APPS', []):
context['filebrowser'] = 1
return rendered + mark_safe(render_to_string(
'pages/widgets/wymeditor.html', context))
register_widget(WYMEditor)
class markItUpMarkdown(Textarea):
"""markItUpMarkdown widget."""
class Media:
js = [join(PAGES_MEDIA_URL, path) for path in (
'javascript/jquery.js',
'markitup/jquery.markitup.js',
'markitup/sets/markdown/set.js',
)]
css = {
'all': [join(PAGES_MEDIA_URL, path) for path in (
'markitup/skins/simple/style.css',
'markitup/sets/markdown/style.css',
)]
}
def render(self, name, value, attrs=None, **kwargs):
rendered = super(markItUpMarkdown, self).render(name, value, attrs)
context = {
'name': name,
}
return rendered + mark_safe(render_to_string(
'pages/widgets/markitupmarkdown.html', context))
register_widget(markItUpMarkdown)
class markItUpRest(Textarea):
"""markItUpRest widget."""
class Media:
js = [join(PAGES_MEDIA_URL, path) for path in (
'javascript/jquery.js',
'markitup/jquery.markitup.js',
'markitup/sets/rst/set.js',
)]
css = {
'all': [join(PAGES_MEDIA_URL, path) for path in (
'markitup/skins/simple/style.css',
'markitup/sets/rst/style.css',
)]
}
def render(self, name, value, attrs=None, **kwargs):
rendered = super(markItUpRest, self).render(name, value, attrs)
context = {
'name': name,
}
return rendered + mark_safe(render_to_string(
'pages/widgets/markituprest.html', context))
register_widget(markItUpRest)
class markItUpHTML(Textarea):
"""markItUpHTML widget."""
class Media:
js = [join(PAGES_MEDIA_URL, path) for path in (
'javascript/jquery.js',
'markitup/jquery.markitup.js',
'markitup/sets/default/set.js',
)]
css = {
'all': [join(PAGES_MEDIA_URL, path) for path in (
'markitup/skins/simple/style.css',
'markitup/sets/default/style.css',
)]
}
def render(self, name, value, attrs=None, **kwargs):
rendered = super(markItUpHTML, self).render(name, value, attrs)
context = {
'name': name,
}
return rendered + mark_safe(render_to_string(
'pages/widgets/markituphtml.html', context))
register_widget(markItUpHTML)
class EditArea(Textarea):
"""EditArea is a html syntax coloured widget."""
class Media:
js = [join(PAGES_MEDIA_URL, path) for path in (
'edit_area/edit_area_full.js',
)]
def __init__(self, language=None, attrs=None, **kwargs):
self.language = language
self.attrs = {'class': 'editarea'}
if attrs:
self.attrs.update(attrs)
super(EditArea, self).__init__(attrs)
def render(self, name, value, attrs=None, **kwargs):
rendered = super(EditArea, self).render(name, value, attrs)
context = {
'name': name,
'language': self.language,
'PAGES_MEDIA_URL': PAGES_MEDIA_URL,
}
return rendered + mark_safe(render_to_string(
'pages/widgets/editarea.html', context))
register_widget(EditArea)
class ImageInput(FileInput):
def __init__(self, page=None, language=None, attrs=None, **kwargs):
self.language = language
self.page = page
super(ImageInput, self).__init__(attrs)
def render(self, name, value, attrs=None, **kwargs):
if not self.page:
field_content = _('Please save the page to show the image field')
else:
field_content = ''
if value:
field_content += _('Current file: %s<br/>') % value
field_content += super(ImageInput, self).render(name, attrs)
if value:
field_content += '''<br><label for="%s-delete">%s</label>
<input name="%s-delete" id="%s-delete"
type="checkbox" value="true">
''' % (name, _('Delete image'), name, name)
return mark_safe(field_content)
register_widget(ImageInput)
class FileInput(FileInput):
def __init__(self, page=None, language=None, attrs=None, **kwargs):
self.language = language
self.page = page
super(FileInput, self).__init__(attrs)
def render(self, name, value, attrs=None, **kwargs):
if not self.page:
field_content = _('Please save the page to show the file field')
else:
field_content = ''
if value:
field_content += _('Current file: %s<br/>') % value
field_content += super(FileInput, self).render(name, attrs)
if value:
field_content += '''<br><label for="%s-delete">%s</label>
<input name="%s-delete" id="%s-delete"
type="checkbox" value="true">
''' % (name, _('Delete file'), name, name)
return mark_safe(field_content)
register_widget(FileInput)
class VideoWidget(MultiWidget):
'''A youtube `Widget` for the admin.'''
def __init__(self, attrs=None, page=None, language=None,
video_url=None, weight=None, height=None):
widgets = [
TextInput(attrs=attrs),
TextInput(attrs=attrs),
TextInput(attrs=attrs)
]
super(VideoWidget, self).__init__(widgets, attrs)
def decompress(self, value):
# backslashes are forbidden in URLs
if value:
return value.split('\\')
return (None, None, None)
def value_from_datadict(self, data, files, name):
value = [u'', u'', u'']
for da in filter(lambda x: x.startswith(name), data):
index = int(da[len(name) + 1:])
value[index] = data[da]
if value[0] == value[1] == value[2] == u'':
return None
return u'%s\\%s\\%s' % tuple(value)
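    # Sketch of the assumed POST data (MultiWidget suffixes the field name with
    # _0.._2): {'video_0': 'http://example.com/v', 'video_1': '640',
    # 'video_2': '480'} is stored as the single backslash-joined string
    # 'http://example.com/v\640\480' (url\width\weight).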
def _has_changed(self, initial, data):
"""Need to be reimplemented to be correct."""
if data == initial:
return False
return bool(initial) != bool(data)
def format_output(self, rendered_widgets):
"""
Given a list of rendered widgets (as strings), it inserts an HTML
linebreak between them.
Returns a Unicode string representing the HTML for the whole lot.
"""
return u"""<table>
<tr><td>url</td><td>%s</td></tr>
<tr><td>width</td><td>%s</td></tr>
<tr><td>weight</td><td>%s</td></tr>
</table>""" % tuple(rendered_widgets)
register_widget(VideoWidget)
class LanguageChoiceWidget(TextInput):
def __init__(self, language=None, attrs=None, **kwargs):
self.language = language
self.page = kwargs.get('page')
# page is None
super(LanguageChoiceWidget, self).__init__(attrs)
def render(self, name, value, attrs=None, **kwargs):
context = {
'name': name,
'value': value,
'page': self.page,
'language': value,
'page_languages': PAGE_LANGUAGES
}
return mark_safe(render_to_string(
'pages/widgets/languages.html', context))
| bsd-3-clause | 704,359,040,994,003,200 | 32.890625 | 77 | 0.572537 | false |
vathpela/anaconda | pyanaconda/ui/tui/spokes/warnings_spoke.py | 1 | 2587 | # Ask vnc text spoke
#
# Copyright (C) 2013 Red Hat, Inc.
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions of
# the GNU General Public License v.2, or (at your option) any later version.
# This program is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY expressed or implied, including the implied warranties of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details. You should have received a copy of the
# GNU General Public License along with this program; if not, write to the
# Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA. Any Red Hat trademarks that are incorporated in the
# source code or documentation are not subject to the GNU General Public
# License and may only be used or replicated with the express permission of
# Red Hat, Inc.
#
from pyanaconda.ui.tui.spokes import StandaloneTUISpoke
from pyanaconda.ui.tui.hubs.summary import SummaryHub
from pyanaconda.core.i18n import N_, _
from pyanaconda.core.util import is_unsupported_hw
from pyanaconda.product import productName
from simpleline.render.widgets import TextWidget
from pyanaconda.anaconda_loggers import get_module_logger
log = get_module_logger(__name__)
__all__ = ["WarningsSpoke"]
class WarningsSpoke(StandaloneTUISpoke):
"""
.. inheritance-diagram:: WarningsSpoke
:parts: 3
"""
preForHub = SummaryHub
priority = 0
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.title = N_("Warnings")
self.initialize_start()
self._message = _("This hardware (or a combination thereof) is not "
"supported by Red Hat. For more information on "
"supported hardware, please refer to "
"http://www.redhat.com/hardware.")
# Does anything need to be displayed?
# pylint: disable=no-member
self._unsupported = productName.startswith("Red Hat ") and \
is_unsupported_hw() and \
not self.data.unsupportedhardware.unsupported_hardware
self.initialize_done()
@property
def completed(self):
return not self._unsupported
def refresh(self, args=None):
super().refresh(args)
self.window.add_with_separator(TextWidget(self._message))
# Override Spoke.apply
def apply(self):
pass
| gpl-2.0 | 7,156,291,706,919,628,000 | 35.43662 | 82 | 0.677619 | false |
jmighion/ansible | lib/ansible/modules/cloud/amazon/redshift_subnet_group.py | 24 | 5940 | #!/usr/bin/python
# Copyright 2014 Jens Carl, Hothead Games Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
author:
- "Jens Carl (@j-carl), Hothead Games Inc."
module: redshift_subnet_group
version_added: "2.2"
short_description: manage Redshift cluster subnet groups
description:
  - Creates, modifies, and deletes Redshift cluster subnet groups.
options:
state:
description:
- Specifies whether the subnet should be present or absent.
default: 'present'
choices: ['present', 'absent' ]
group_name:
description:
- Cluster subnet group name.
required: true
aliases: ['name']
group_description:
description:
- Database subnet group description.
required: false
default: null
aliases: ['description']
group_subnets:
description:
- List of subnet IDs that make up the cluster subnet group.
required: false
default: null
aliases: ['subnets']
requirements: [ 'boto' ]
extends_documentation_fragment: aws
'''
EXAMPLES = '''
# Create a Redshift subnet group
- local_action:
module: redshift_subnet_group
state: present
group_name: redshift-subnet
group_description: Redshift subnet
group_subnets:
- 'subnet-aaaaa'
- 'subnet-bbbbb'
# Remove subnet group
- redshift_subnet_group:
state: absent
group_name: redshift-subnet
'''
RETURN = '''
group:
description: dictionary containing all Redshift subnet group information
returned: success
type: complex
contains:
name:
description: name of the Redshift subnet group
returned: success
type: string
sample: "redshift_subnet_group_name"
vpc_id:
description: Id of the VPC where the subnet is located
returned: success
type: string
sample: "vpc-aabb1122"
'''
try:
import boto
import boto.redshift
HAS_BOTO = True
except ImportError:
HAS_BOTO = False
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ec2 import HAS_BOTO, connect_to_aws, ec2_argument_spec, get_aws_connection_info
def main():
argument_spec = ec2_argument_spec()
argument_spec.update(dict(
state=dict(required=True, choices=['present', 'absent']),
group_name=dict(required=True, aliases=['name']),
group_description=dict(required=False, aliases=['description']),
group_subnets=dict(required=False, aliases=['subnets'], type='list'),
))
module = AnsibleModule(argument_spec=argument_spec)
if not HAS_BOTO:
module.fail_json(msg='boto v2.9.0+ required for this module')
state = module.params.get('state')
group_name = module.params.get('group_name')
group_description = module.params.get('group_description')
group_subnets = module.params.get('group_subnets')
if state == 'present':
for required in ('group_name', 'group_description', 'group_subnets'):
if not module.params.get(required):
module.fail_json(msg=str("parameter %s required for state='present'" % required))
else:
for not_allowed in ('group_description', 'group_subnets'):
if module.params.get(not_allowed):
module.fail_json(msg=str("parameter %s not allowed for state='absent'" % not_allowed))
region, ec2_url, aws_connect_params = get_aws_connection_info(module)
if not region:
module.fail_json(msg=str("region not specified and unable to determine region from EC2_REGION."))
# Connect to the Redshift endpoint.
try:
conn = connect_to_aws(boto.redshift, region, **aws_connect_params)
except boto.exception.JSONResponseError as e:
module.fail_json(msg=str(e))
try:
changed = False
exists = False
group = None
try:
matching_groups = conn.describe_cluster_subnet_groups(group_name, max_records=100)
exists = len(matching_groups) > 0
except boto.exception.JSONResponseError as e:
if e.body['Error']['Code'] != 'ClusterSubnetGroupNotFoundFault':
# if e.code != 'ClusterSubnetGroupNotFoundFault':
module.fail_json(msg=str(e))
if state == 'absent':
if exists:
conn.delete_cluster_subnet_group(group_name)
changed = True
else:
if not exists:
new_group = conn.create_cluster_subnet_group(group_name, group_description, group_subnets)
group = {
'name': new_group['CreateClusterSubnetGroupResponse']['CreateClusterSubnetGroupResult']
['ClusterSubnetGroup']['ClusterSubnetGroupName'],
'vpc_id': new_group['CreateClusterSubnetGroupResponse']['CreateClusterSubnetGroupResult']
['ClusterSubnetGroup']['VpcId'],
}
else:
changed_group = conn.modify_cluster_subnet_group(group_name, group_subnets, description=group_description)
group = {
'name': changed_group['ModifyClusterSubnetGroupResponse']['ModifyClusterSubnetGroupResult']
['ClusterSubnetGroup']['ClusterSubnetGroupName'],
'vpc_id': changed_group['ModifyClusterSubnetGroupResponse']['ModifyClusterSubnetGroupResult']
['ClusterSubnetGroup']['VpcId'],
}
changed = True
except boto.exception.JSONResponseError as e:
module.fail_json(msg=str(e))
module.exit_json(changed=changed, group=group)
if __name__ == '__main__':
main()
| gpl-3.0 | 7,691,942,296,058,210,000 | 32.184358 | 122 | 0.630303 | false |
ahaldane/IvoGPU | utils/getSeqEnergies.py | 1 | 2110 | #!/usr/bin/env python2
#
#Copyright 2016 Allan Haldane.
#This file is part of IvoGPU.
#IvoGPU is free software: you can redistribute it and/or modify
#it under the terms of the GNU General Public License as published by
#the Free Software Foundation, version 3 of the License.
#IvoGPU is distributed in the hope that it will be useful,
#but WITHOUT ANY WARRANTY; without even the implied warranty of
#MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
#GNU General Public License for more details.
#You should have received a copy of the GNU General Public License
#along with IvoGPU. If not, see <http://www.gnu.org/licenses/>.
#Contact: allan.haldane _AT_ gmail.com
import scipy
from scipy import *
import seqload
import sys, argparse
from Bio.Alphabet import IUPAC
def getLq(J):
L = int(((1+sqrt(1+8*J.shape[0]))/2) + 0.5)
q = int(sqrt(J.shape[1]) + 0.5)
return L, q
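# Editorial note (assumption based on the indexing below): J is expected to
# hold one row per unordered position pair, i.e. J.shape[0] == L*(L-1)/2, so L
# is recovered by inverting that quadratic; J.shape[1] == q*q gives the
# alphabet size q.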
def energies(s, J):
L, q = getLq(J)
pairenergy = zeros(s.shape[0])
for n,(i,j) in enumerate([(i,j) for i in range(L-1) for j in range(i+1,L)]):
        pairenergy += J[n, s[:,j] + q*s[:,i]]
return pairenergy
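# Minimal usage sketch added for illustration (assumed shapes only; this helper
# is never executed on import):
def _example_energies():
    J = zeros((3, 4))             # L = 3 positions -> 3 pairs, q = 2 letters
    s = zeros((2, 3), dtype=int)  # two sequences of length 3, all letter 0
    return energies(s, J)         # -> array of two zero energies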
def main():
parser = argparse.ArgumentParser(description='Compute Sequence Energies')
parser.add_argument('seqs')
parser.add_argument('couplings')
parser.add_argument('-o', '--out')
parser.add_argument('--alpha', default='protgap')
args = parser.parse_args(sys.argv[1:])
alphabets = {'protein': IUPAC.protein.letters,
'protgap': '-' + IUPAC.protein.letters,
'charge': '0+-',
'nuc': "ACGT"}
try:
letters = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"[:int(args.alpha)]
except ValueError:
letters = alphabets.get(args.alpha, args.alpha)
couplings = scipy.load(args.couplings)
def chunkE(seqs, param):
return energies(seqs, couplings)
# process the file in chunks for speed
e = seqload.mapSeqs(args.seqs, letters, chunkE)[0]
if args.out:
save(args.out, e)
else:
savetxt(sys.stdout, e)
if __name__ == '__main__':
main()
| gpl-3.0 | 1,367,004,495,877,434,000 | 29.142857 | 80 | 0.649289 | false |
matuu/simpleai | tests/machine_learning/test_metrics.py | 4 | 2290 | #!/usr/bin/env python
# coding: utf-8
"""
Tests for metrics module in machine learning.
"""
import unittest
from simpleai.machine_learning.metrics import Counter, OnlineEntropy, \
OnlineLogProbability, \
OnlineInformationGain
class TestCounter(unittest.TestCase):
def test_total_starts_in_zero(self):
counter = Counter(lambda x: None)
self.assertEqual(counter.total, 0)
def test_add_elements(self):
counter = Counter(lambda x: None)
for i in xrange(20):
counter.add("something")
self.assertEqual(counter.total, 20)
def test_target_values(self):
counter = Counter(lambda x: x % 2 == 0)
for i in xrange(25):
counter.add(i)
self.assertEqual(counter[0], 12)
self.assertEqual(counter[1], 13)
counter = Counter(lambda x: None)
for i in xrange(50):
counter.add(i)
self.assertEqual(counter[None], 50)
class TestOnlineEntropy(unittest.TestCase):
def test_starts_in_zero(self):
entropy = OnlineEntropy(lambda x: None)
self.assertEqual(entropy.get_entropy(), 0)
def test_valid_values(self):
entropy = OnlineEntropy(lambda x: x % 10)
for i in xrange(150):
entropy.add(i)
self.assertGreaterEqual(entropy.get_entropy(), 0.0)
class TestOnlineInformationGain(unittest.TestCase):
def test_starts_in_zero(self):
gain = OnlineInformationGain(lambda x: None, lambda x: None)
self.assertEqual(gain.get_gain(), 0)
self.assertEqual(gain.get_target_class_counts().items(), [])
self.assertEqual(gain.get_branches(), [])
def test_no_gain(self):
f = lambda x: None
gain = OnlineInformationGain(f, f)
for i in xrange(30):
gain.add(i)
self.assertEqual(gain.get_gain(), 0)
def test_full_gain(self):
target = lambda x: x % 7
gain = OnlineInformationGain(target, target)
entropy = OnlineEntropy(target)
for i in xrange(50):
gain.add(i)
entropy.add(i)
self.assertEqual(gain.get_gain(), entropy.get_entropy())
self.assertGreaterEqual(gain.get_gain(), 0)
| mit | -7,100,092,211,988,470,000 | 30.805556 | 71 | 0.599563 | false |
eerwitt/tensorflow | tensorflow/python/platform/app.py | 35 | 1691 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Generic entry point script."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys as _sys
from tensorflow.python.platform import flags
from tensorflow.python.util.all_util import remove_undocumented
def run(main=None, argv=None):
"""Runs the program with an optional 'main' function and 'argv' list."""
f = flags.FLAGS
# Extract the args from the optional `argv` list.
args = argv[1:] if argv else None
# Parse the known flags from that list, or from the command
# line otherwise.
# pylint: disable=protected-access
flags_passthrough = f._parse_flags(args=args)
# pylint: enable=protected-access
main = main or _sys.modules['__main__'].main
# Call the main function, passing through any arguments
# to the final program.
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
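# Typical call site, shown here only as an illustrative sketch (the flag name
# is hypothetical):
#
#   def main(unused_argv):
#     print(flags.FLAGS.my_flag)
#
#   if __name__ == '__main__':
#     app.run()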
_allowed_symbols = [
'run',
# Allowed submodule.
'flags',
]
remove_undocumented(__name__, _allowed_symbols)
| apache-2.0 | 6,297,203,371,925,682,000 | 30.90566 | 80 | 0.693081 | false |
b-cuts/airflow | airflow/hooks/S3_hook.py | 28 | 12831 | from future import standard_library
standard_library.install_aliases()
import logging
import json
import re
import fnmatch
import configparser
from urllib.parse import urlparse
import boto
from boto.s3.connection import S3Connection
from boto.sts import STSConnection
boto.set_stream_logger('boto')
logging.getLogger("boto").setLevel(logging.INFO)
from airflow.utils import AirflowException
from airflow.hooks.base_hook import BaseHook
def _parse_s3_config(config_file_name, config_format='boto', profile=None):
"""
Parses a config file for s3 credentials. Can currently
parse boto, s3cmd.conf and AWS SDK config formats
:param config_file_name: path to the config file
:type config_file_name: str
:param config_format: config type. One of "boto", "s3cmd" or "aws".
Defaults to "boto"
:type config_format: str
:param profile: profile name in AWS type config file
:type profile: str
"""
Config = configparser.ConfigParser()
if Config.read(config_file_name):
sections = Config.sections()
else:
raise AirflowException("Couldn't read {0}".format(config_file_name))
# Setting option names depending on file format
conf_format = config_format.lower()
if conf_format == 'boto':
if profile is not None and 'profile ' + profile in sections:
cred_section = 'profile ' + profile
else:
cred_section = 'Credentials'
elif conf_format == 'aws' and profile is not None:
cred_section = profile
else:
cred_section = 'default'
# Option names
if conf_format in ('boto', 'aws'):
key_id_option = 'aws_access_key_id'
secret_key_option = 'aws_secret_access_key'
# security_token_option = 'aws_security_token'
else:
key_id_option = 'access_key'
secret_key_option = 'secret_key'
# Actual Parsing
if cred_section not in sections:
raise AirflowException("This config file format is not recognized")
else:
try:
access_key = Config.get(cred_section, key_id_option)
secret_key = Config.get(cred_section, secret_key_option)
except:
logging.warning("Option Error in parsing s3 config file")
raise
return (access_key, secret_key)
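# For reference, a hypothetical boto-format credentials file that the parser
# above accepts (section and option names differ for the 's3cmd' and 'aws'
# formats):
#
#   [Credentials]
#   aws_access_key_id = AKIAEXAMPLEKEY
#   aws_secret_access_key = exampleSecretKey123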
class S3Hook(BaseHook):
"""
Interact with S3. This class is a wrapper around the boto library.
"""
def __init__(
self,
s3_conn_id='s3_default'):
self.s3_conn_id = s3_conn_id
self.s3_conn = self.get_connection(s3_conn_id)
self.profile = None
self._sts_conn_required = False
self._creds_in_config_file = False
try:
self.extra_params = json.loads(self.s3_conn.extra)
if 'aws_secret_access_key' in self.extra_params:
self._a_key = self.extra_params['aws_access_key_id']
self._s_key = self.extra_params['aws_secret_access_key']
else:
self._creds_in_config_file = True
self.s3_config_format = self.extra_params['s3_config_format']
self.s3_config_file = self.extra_params['s3_config_file']
if 'profile' in self.extra_params:
self.profile = self.extra_params['profile']
self._sts_conn_required = 'aws_account_id' in self.extra_params
if self._sts_conn_required:
self.aws_account_id = self.extra_params['aws_account_id']
self.aws_iam_role = self.extra_params['aws_iam_role']
self.role_arn = "arn:aws:iam::" + self.aws_account_id
self.role_arn += ":role/" + self.aws_iam_role
except TypeError as e:
raise AirflowException("S3 connection needs to set configuration"
" parameters in extra")
except KeyError as e:
raise AirflowException("S3 connection definition needs to include"
" {p} in extra".format(p=e.message))
self.connection = self.get_conn()
def __getstate__(self):
pickled_dict = dict(self.__dict__)
del pickled_dict['connection']
return pickled_dict
def __setstate__(self, d):
self.__dict__.update(d)
self.__dict__['connection'] = self.get_conn()
def _parse_s3_url(self, s3url):
parsed_url = urlparse(s3url)
if not parsed_url.netloc:
raise AirflowException('Please provide a bucket_name')
else:
bucket_name = parsed_url.netloc
if parsed_url.path[0] == '/':
key = parsed_url.path[1:]
else:
key = parsed_url.path
return (bucket_name, key)
def get_conn(self):
"""
Returns the boto S3Connection object.
"""
if self._creds_in_config_file:
a_key, s_key = _parse_s3_config(self.s3_config_file,
self.s3_config_format,
self.profile)
else:
a_key = self._a_key
s_key = self._s_key
if self._sts_conn_required:
sts_connection = STSConnection(aws_access_key_id=a_key,
aws_secret_access_key=s_key,
profile_name=self.profile)
assumed_role_object = sts_connection.assume_role(
role_arn=self.role_arn,
role_session_name="Airflow_" + self.s3_conn_id
)
creds = assumed_role_object.credentials
connection = S3Connection(
aws_access_key_id=creds.access_key,
aws_secret_access_key=creds.secret_key,
security_token=creds.session_token
)
else:
connection = S3Connection(aws_access_key_id=a_key,
aws_secret_access_key=s_key)
return connection
def check_for_bucket(self, bucket_name):
"""
Check if bucket_name exists.
:param bucket_name: the name of the bucket
:type bucket_name: str
"""
return self.connection.lookup(bucket_name) is not None
def get_bucket(self, bucket_name):
"""
Returns a boto.s3.bucket.Bucket object
:param bucket_name: the name of the bucket
:type bucket_name: str
"""
return self.connection.get_bucket(bucket_name)
def list_keys(self, bucket_name, prefix='', delimiter=''):
"""
Lists keys in a bucket under prefix and not containing delimiter
:param bucket_name: the name of the bucket
:type bucket_name: str
:param prefix: a key prefix
:type prefix: str
:param delimiter: the delimiter marks key hierarchy.
:type delimiter: str
"""
b = self.get_bucket(bucket_name)
keylist = list(b.list(prefix=prefix, delimiter=delimiter))
return [k.name for k in keylist] if keylist != [] else None
def list_prefixes(self, bucket_name, prefix='', delimiter=''):
"""
Lists prefixes in a bucket under prefix
:param bucket_name: the name of the bucket
:type bucket_name: str
:param prefix: a key prefix
:type prefix: str
:param delimiter: the delimiter marks key hierarchy.
:type delimiter: str
"""
b = self.get_bucket(bucket_name)
plist = b.list(prefix=prefix, delimiter=delimiter)
prefix_names = [p.name for p in plist
if isinstance(p, boto.s3.prefix.Prefix)]
return prefix_names if prefix_names != [] else None
def check_for_key(self, key, bucket_name=None):
"""
Checks that a key exists in a bucket
"""
if not bucket_name:
(bucket_name, key) = self._parse_s3_url(key)
bucket = self.get_bucket(bucket_name)
return bucket.get_key(key) is not None
def get_key(self, key, bucket_name=None):
"""
Returns a boto.s3.key.Key object
:param key: the path to the key
:type key: str
:param bucket_name: the name of the bucket
:type bucket_name: str
"""
if not bucket_name:
(bucket_name, key) = self._parse_s3_url(key)
bucket = self.get_bucket(bucket_name)
return bucket.get_key(key)
def check_for_wildcard_key(self,
wildcard_key, bucket_name=None, delimiter=''):
"""
Checks that a key matching a wildcard expression exists in a bucket
"""
return self.get_wildcard_key(wildcard_key=wildcard_key,
bucket_name=bucket_name,
delimiter=delimiter) is not None
def get_wildcard_key(self, wildcard_key, bucket_name=None, delimiter=''):
"""
        Returns a boto.s3.key.Key object matching the wildcard expression
        :param wildcard_key: the path to the key
        :type wildcard_key: str
:param bucket_name: the name of the bucket
:type bucket_name: str
"""
if not bucket_name:
(bucket_name, wildcard_key) = self._parse_s3_url(wildcard_key)
bucket = self.get_bucket(bucket_name)
prefix = re.split(r'[*]', wildcard_key, 1)[0]
klist = self.list_keys(bucket_name, prefix=prefix, delimiter=delimiter)
if not klist:
return None
key_matches = [k for k in klist if fnmatch.fnmatch(k, wildcard_key)]
return bucket.get_key(key_matches[0]) if key_matches else None
def check_for_prefix(self, bucket_name, prefix, delimiter):
"""
Checks that a prefix exists in a bucket
"""
prefix = prefix + delimiter if prefix[-1] != delimiter else prefix
prefix_split = re.split(r'(\w+[{d}])$'.format(d=delimiter), prefix, 1)
previous_level = prefix_split[0]
plist = self.list_prefixes(bucket_name, previous_level, delimiter)
return False if plist is None else prefix in plist
def load_file(self, filename,
key, bucket_name=None,
replace=False):
"""
Loads a local file to S3
This is provided as a convenience to drop a file in S3. It uses the
        boto infrastructure to ship a file to S3. It currently uses only
        a single-part upload, and should not be used to move large files.
:param filename: name of the file to load.
:type filename: str
:param key: S3 key that will point to the file
:type key: str
:param bucket_name: Name of the bucket in which to store the file
:type bucket_name: str
:param replace: A flag to decide whether or not to overwrite the key
if it already exists
:type replace: bool
"""
if not bucket_name:
(bucket_name, key) = self._parse_s3_url(key)
bucket = self.get_bucket(bucket_name)
if not self.check_for_key(key, bucket_name):
key_obj = bucket.new_key(key_name=key)
else:
key_obj = bucket.get_key(key)
key_size = key_obj.set_contents_from_filename(filename,
replace=replace)
logging.info("The key {key} now contains"
" {key_size} bytes".format(**locals()))
def load_string(self, string_data,
key, bucket_name=None,
replace=False):
"""
        Loads a string to S3
        This is provided as a convenience to drop a string in S3. It uses the
        boto infrastructure to ship the data to S3. It currently uses only
        a single-part upload, and should not be used to move large amounts of data.
:param string_data: string to set as content for the key.
:type string_data: str
:param key: S3 key that will point to the file
:type key: str
:param bucket_name: Name of the bucket in which to store the file
:type bucket_name: str
:param replace: A flag to decide whether or not to overwrite the key
if it already exists
:type replace: bool
"""
if not bucket_name:
(bucket_name, key) = self._parse_s3_url(key)
bucket = self.get_bucket(bucket_name)
if not self.check_for_key(key, bucket_name):
key_obj = bucket.new_key(key_name=key)
else:
key_obj = bucket.get_key(key)
key_size = key_obj.set_contents_from_string(string_data,
replace=replace)
logging.info("The key {key} now contains"
" {key_size} bytes".format(**locals()))
| apache-2.0 | -7,858,040,844,244,873,000 | 37.64759 | 79 | 0.576261 | false |
derDavidT/sympy | sympy/sets/sets.py | 17 | 57589 | from __future__ import print_function, division
from itertools import product
from sympy.core.sympify import _sympify, sympify
from sympy.core.basic import Basic
from sympy.core.singleton import Singleton, S
from sympy.core.evalf import EvalfMixin
from sympy.core.numbers import Float
from sympy.core.compatibility import iterable, with_metaclass, ordered, range
from sympy.core.evaluate import global_evaluate
from sympy.core.decorators import deprecated
from sympy.core.mul import Mul
from sympy.core.relational import Eq
from sympy.sets.contains import Contains
from mpmath import mpi, mpf
from sympy.logic.boolalg import And, Or, Not, true, false
from sympy.utilities import subsets
class Set(Basic):
"""
The base class for any kind of set.
This is not meant to be used directly as a container of items. It does not
behave like the builtin ``set``; see :class:`FiniteSet` for that.
Real intervals are represented by the :class:`Interval` class and unions of
sets by the :class:`Union` class. The empty set is represented by the
:class:`EmptySet` class and available as a singleton as ``S.EmptySet``.
"""
is_number = False
is_iterable = False
is_interval = False
is_FiniteSet = False
is_Interval = False
is_ProductSet = False
is_Union = False
is_Intersection = None
is_EmptySet = None
is_UniversalSet = None
is_Complement = None
is_ComplexRegion = False
@staticmethod
def _infimum_key(expr):
"""
Return infimum (if possible) else S.Infinity.
"""
try:
infimum = expr.inf
assert infimum.is_comparable
except (NotImplementedError,
AttributeError, AssertionError, ValueError):
infimum = S.Infinity
return infimum
def union(self, other):
"""
Returns the union of 'self' and 'other'.
Examples
========
As a shortcut it is possible to use the '+' operator:
>>> from sympy import Interval, FiniteSet
>>> Interval(0, 1).union(Interval(2, 3))
[0, 1] U [2, 3]
>>> Interval(0, 1) + Interval(2, 3)
[0, 1] U [2, 3]
>>> Interval(1, 2, True, True) + FiniteSet(2, 3)
(1, 2] U {3}
Similarly it is possible to use the '-' operator for set differences:
>>> Interval(0, 2) - Interval(0, 1)
(1, 2]
>>> Interval(1, 3) - FiniteSet(2)
[1, 2) U (2, 3]
"""
return Union(self, other)
def intersect(self, other):
"""
Returns the intersection of 'self' and 'other'.
>>> from sympy import Interval
>>> Interval(1, 3).intersect(Interval(1, 2))
[1, 2]
>>> from sympy import imageset, Lambda, symbols, S
>>> n, m = symbols('n m')
>>> a = imageset(Lambda(n, 2*n), S.Integers)
>>> a.intersect(imageset(Lambda(m, 2*m + 1), S.Integers))
EmptySet()
"""
return Intersection(self, other)
def intersection(self, other):
"""
Alias for :meth:`intersect()`
"""
return self.intersect(other)
def _intersect(self, other):
"""
This function should only be used internally
self._intersect(other) returns a new, intersected set if self knows how
to intersect itself with other, otherwise it returns ``None``
When making a new set class you can be assured that other will not
be a :class:`Union`, :class:`FiniteSet`, or :class:`EmptySet`
Used within the :class:`Intersection` class
"""
return None
def is_disjoint(self, other):
"""
Returns True if 'self' and 'other' are disjoint
Examples
========
>>> from sympy import Interval
>>> Interval(0, 2).is_disjoint(Interval(1, 2))
False
>>> Interval(0, 2).is_disjoint(Interval(3, 4))
True
References
==========
.. [1] http://en.wikipedia.org/wiki/Disjoint_sets
"""
return self.intersect(other) == S.EmptySet
def isdisjoint(self, other):
"""
Alias for :meth:`is_disjoint()`
"""
return self.is_disjoint(other)
def _union(self, other):
"""
This function should only be used internally
self._union(other) returns a new, joined set if self knows how
to join itself with other, otherwise it returns ``None``.
It may also return a python set of SymPy Sets if they are somehow
simpler. If it does this it must be idempotent i.e. the sets returned
        must return ``None`` when _union'ed with each other
Used within the :class:`Union` class
"""
return None
def complement(self, universe):
"""
        The complement of 'self' w.r.t. the given universe.
Examples
========
>>> from sympy import Interval, S
>>> Interval(0, 1).complement(S.Reals)
(-oo, 0) U (1, oo)
>>> Interval(0, 1).complement(S.UniversalSet)
UniversalSet() \ [0, 1]
"""
return Complement(universe, self)
def _complement(self, other):
# this behaves as other - self
if isinstance(other, ProductSet):
# For each set consider it or it's complement
# We need at least one of the sets to be complemented
# Consider all 2^n combinations.
            # We can conveniently represent these options using a
            # ProductSet
            # XXX: this doesn't work if the dimensions of the sets aren't the same.
            # A - B is essentially the same as A if B has a different
            # dimensionality than A
switch_sets = ProductSet(FiniteSet(o, o - s) for s, o in
zip(self.sets, other.sets))
product_sets = (ProductSet(*set) for set in switch_sets)
# Union of all combinations but this one
return Union(p for p in product_sets if p != other)
elif isinstance(other, Interval):
if isinstance(self, Interval) or isinstance(self, FiniteSet):
return Intersection(other, self.complement(S.Reals))
elif isinstance(other, Union):
return Union(o - self for o in other.args)
elif isinstance(other, Complement):
return Complement(other.args[0], Union(other.args[1], self))
elif isinstance(other, EmptySet):
return S.EmptySet
elif isinstance(other, FiniteSet):
return FiniteSet(*[el for el in other if self.contains(el) != True])
def symmetric_difference(self, other):
return SymmetricDifference(self, other)
def _symmetric_difference(self, other):
return Union(Complement(self, other), Complement(other, self))
@property
def inf(self):
"""
The infimum of 'self'
Examples
========
>>> from sympy import Interval, Union
>>> Interval(0, 1).inf
0
>>> Union(Interval(0, 1), Interval(2, 3)).inf
0
"""
return self._inf
@property
def _inf(self):
raise NotImplementedError("(%s)._inf" % self)
@property
def sup(self):
"""
The supremum of 'self'
Examples
========
>>> from sympy import Interval, Union
>>> Interval(0, 1).sup
1
>>> Union(Interval(0, 1), Interval(2, 3)).sup
3
"""
return self._sup
@property
def _sup(self):
raise NotImplementedError("(%s)._sup" % self)
def contains(self, other):
"""
Returns True if 'other' is contained in 'self' as an element.
As a shortcut it is possible to use the 'in' operator:
Examples
========
>>> from sympy import Interval
>>> Interval(0, 1).contains(0.5)
True
>>> 0.5 in Interval(0, 1)
True
"""
other = sympify(other, strict=True)
ret = self._contains(other)
if ret is None:
if all(Eq(i, other) == False for i in self):
return False
ret = Contains(other, self, evaluate=False)
return ret
def _contains(self, other):
raise NotImplementedError("(%s)._contains(%s)" % (self, other))
@deprecated(useinstead="is_subset", issue=7460, deprecated_since_version="0.7.6")
def subset(self, other):
"""
Returns True if 'other' is a subset of 'self'.
"""
return other.is_subset(self)
def is_subset(self, other):
"""
Returns True if 'self' is a subset of 'other'.
Examples
========
>>> from sympy import Interval
>>> Interval(0, 0.5).is_subset(Interval(0, 1))
True
>>> Interval(0, 1).is_subset(Interval(0, 1, left_open=True))
False
"""
if isinstance(other, Set):
return self.intersect(other) == self
else:
raise ValueError("Unknown argument '%s'" % other)
def issubset(self, other):
"""
Alias for :meth:`is_subset()`
"""
return self.is_subset(other)
def is_proper_subset(self, other):
"""
Returns True if 'self' is a proper subset of 'other'.
Examples
========
>>> from sympy import Interval
>>> Interval(0, 0.5).is_proper_subset(Interval(0, 1))
True
>>> Interval(0, 1).is_proper_subset(Interval(0, 1))
False
"""
if isinstance(other, Set):
return self != other and self.is_subset(other)
else:
raise ValueError("Unknown argument '%s'" % other)
def is_superset(self, other):
"""
Returns True if 'self' is a superset of 'other'.
Examples
========
>>> from sympy import Interval
>>> Interval(0, 0.5).is_superset(Interval(0, 1))
False
>>> Interval(0, 1).is_superset(Interval(0, 1, left_open=True))
True
"""
if isinstance(other, Set):
return other.is_subset(self)
else:
raise ValueError("Unknown argument '%s'" % other)
def issuperset(self, other):
"""
Alias for :meth:`is_superset()`
"""
return self.is_superset(other)
def is_proper_superset(self, other):
"""
Returns True if 'self' is a proper superset of 'other'.
Examples
========
>>> from sympy import Interval
>>> Interval(0, 1).is_proper_superset(Interval(0, 0.5))
True
>>> Interval(0, 1).is_proper_superset(Interval(0, 1))
False
"""
if isinstance(other, Set):
return self != other and self.is_superset(other)
else:
raise ValueError("Unknown argument '%s'" % other)
def _eval_powerset(self):
raise NotImplementedError('Power set not defined for: %s' % self.func)
def powerset(self):
"""
Find the Power set of 'self'.
Examples
========
>>> from sympy import FiniteSet, EmptySet
>>> A = EmptySet()
>>> A.powerset()
{EmptySet()}
>>> A = FiniteSet(1, 2)
>>> a, b, c = FiniteSet(1), FiniteSet(2), FiniteSet(1, 2)
>>> A.powerset() == FiniteSet(a, b, c, EmptySet())
True
References
==========
.. [1] http://en.wikipedia.org/wiki/Power_set
"""
return self._eval_powerset()
@property
def measure(self):
"""
The (Lebesgue) measure of 'self'
Examples
========
>>> from sympy import Interval, Union
>>> Interval(0, 1).measure
1
>>> Union(Interval(0, 1), Interval(2, 3)).measure
2
"""
return self._measure
@property
def boundary(self):
"""
The boundary or frontier of a set
A point x is on the boundary of a set S if
1. x is in the closure of S.
I.e. Every neighborhood of x contains a point in S.
2. x is not in the interior of S.
I.e. There does not exist an open set centered on x contained
entirely within S.
        These are the points on the outer rim of S. If S is open then these
points need not actually be contained within S.
For example, the boundary of an interval is its start and end points.
This is true regardless of whether or not the interval is open.
Examples
========
>>> from sympy import Interval
>>> Interval(0, 1).boundary
{0, 1}
>>> Interval(0, 1, True, False).boundary
{0, 1}
"""
return self._boundary
@property
def is_open(self):
if not Intersection(self, self.boundary):
return True
# We can't confidently claim that an intersection exists
return None
@property
def is_closed(self):
return self.boundary.is_subset(self)
@property
def closure(self):
return self + self.boundary
@property
def interior(self):
return self - self.boundary
@property
def _boundary(self):
raise NotImplementedError()
def _eval_imageset(self, f):
from sympy.sets.fancysets import ImageSet
return ImageSet(f, self)
@property
def _measure(self):
raise NotImplementedError("(%s)._measure" % self)
def __add__(self, other):
return self.union(other)
def __or__(self, other):
return self.union(other)
def __and__(self, other):
return self.intersect(other)
def __mul__(self, other):
return ProductSet(self, other)
def __xor__(self, other):
return SymmetricDifference(self, other)
def __pow__(self, exp):
        if not (sympify(exp).is_Integer and exp >= 0):
raise ValueError("%s: Exponent must be a positive Integer" % exp)
return ProductSet([self]*exp)
def __sub__(self, other):
return Complement(self, other)
def __contains__(self, other):
symb = self.contains(other)
if symb not in (true, false):
raise TypeError('contains did not evaluate to a bool: %r' % symb)
return bool(symb)
@property
@deprecated(useinstead="is_subset(S.Reals)", issue=6212, deprecated_since_version="0.7.6")
def is_real(self):
return None
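# Editorial illustration (not part of the original SymPy source): a hedged sketch
# of the operator shortcuts defined on Set above. Interval, FiniteSet, Complement
# and SymmetricDifference are defined further down in this module, so the names
# resolve when the function is called, not at import time.
def _example_set_operators():
    a = Interval(0, 2)
    b = FiniteSet(1, 2, 3)
    union = a + b      # same as a.union(b)
    common = a & b     # same as a.intersect(b) -> {1, 2}
    diff = a - b       # same as Complement(a, b)
    sym = a ^ b        # same as SymmetricDifference(a, b)
    return union, common, diff, sym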
class ProductSet(Set):
"""
Represents a Cartesian Product of Sets.
Returns a Cartesian product given several sets as either an iterable
or individual arguments.
Can use '*' operator on any sets for convenient shorthand.
Examples
========
>>> from sympy import Interval, FiniteSet, ProductSet
>>> I = Interval(0, 5); S = FiniteSet(1, 2, 3)
>>> ProductSet(I, S)
[0, 5] x {1, 2, 3}
>>> (2, 2) in ProductSet(I, S)
True
>>> Interval(0, 1) * Interval(0, 1) # The unit square
[0, 1] x [0, 1]
>>> coin = FiniteSet('H', 'T')
>>> set(coin**2)
set([(H, H), (H, T), (T, H), (T, T)])
Notes
=====
- Passes most operations down to the argument sets
- Flattens Products of ProductSets
References
==========
.. [1] http://en.wikipedia.org/wiki/Cartesian_product
"""
is_ProductSet = True
def __new__(cls, *sets, **assumptions):
def flatten(arg):
if isinstance(arg, Set):
if arg.is_ProductSet:
return sum(map(flatten, arg.args), [])
else:
return [arg]
elif iterable(arg):
return sum(map(flatten, arg), [])
raise TypeError("Input must be Sets or iterables of Sets")
sets = flatten(list(sets))
if EmptySet() in sets or len(sets) == 0:
return EmptySet()
if len(sets) == 1:
return sets[0]
return Basic.__new__(cls, *sets, **assumptions)
def _eval_Eq(self, other):
if not other.is_ProductSet:
return
if len(self.args) != len(other.args):
return false
return And(*(Eq(x, y) for x, y in zip(self.args, other.args)))
def _contains(self, element):
"""
'in' operator for ProductSets
Examples
========
>>> from sympy import Interval
>>> (2, 3) in Interval(0, 5) * Interval(0, 5)
True
>>> (10, 10) in Interval(0, 5) * Interval(0, 5)
False
Passes operation on to constituent sets
"""
try:
if len(element) != len(self.args):
return false
except TypeError: # maybe element isn't an iterable
return false
return And(*
[set.contains(item) for set, item in zip(self.sets, element)])
def _intersect(self, other):
"""
This function should only be used internally
See Set._intersect for docstring
"""
if not other.is_ProductSet:
return None
if len(other.args) != len(self.args):
return S.EmptySet
return ProductSet(a.intersect(b)
for a, b in zip(self.sets, other.sets))
def _union(self, other):
if not other.is_ProductSet:
return None
if len(other.args) != len(self.args):
return None
if self.args[0] == other.args[0]:
return self.args[0] * Union(ProductSet(self.args[1:]),
ProductSet(other.args[1:]))
if self.args[-1] == other.args[-1]:
return Union(ProductSet(self.args[:-1]),
ProductSet(other.args[:-1])) * self.args[-1]
return None
@property
def sets(self):
return self.args
@property
def _boundary(self):
return Union(ProductSet(b + b.boundary if i != j else b.boundary
for j, b in enumerate(self.sets))
for i, a in enumerate(self.sets))
@property
@deprecated(useinstead="is_subset(S.Reals)", issue=6212, deprecated_since_version="0.7.6")
def is_real(self):
return all(set.is_real for set in self.sets)
@property
def is_iterable(self):
return all(set.is_iterable for set in self.sets)
def __iter__(self):
if self.is_iterable:
return product(*self.sets)
else:
raise TypeError("Not all constituent sets are iterable")
@property
def _measure(self):
measure = 1
for set in self.sets:
measure *= set.measure
return measure
def __len__(self):
return Mul(*[len(s) for s in self.args])
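# Editorial illustration (not part of the original SymPy source): a hedged sketch
# of the ProductSet behaviour described above -- measures multiply and membership
# is checked component-wise. Interval is defined just below in this module.
def _example_product_set():
    square = Interval(0, 2) * Interval(0, 3)   # a 2 x 3 rectangle
    assert square.measure == 6                 # measures of the factors multiply
    assert (1, 1) in square                    # component-wise containment
    assert (5, 1) not in square
    return square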
class Interval(Set, EvalfMixin):
"""
Represents a real interval as a Set.
Usage:
Returns an interval with end points "start" and "end".
For left_open=True (default left_open is False) the interval
will be open on the left. Similarly, for right_open=True the interval
will be open on the right.
Examples
========
>>> from sympy import Symbol, Interval
>>> Interval(0, 1)
[0, 1]
>>> Interval(0, 1, False, True)
[0, 1)
>>> Interval.Ropen(0, 1)
[0, 1)
>>> Interval.Lopen(0, 1)
(0, 1]
>>> Interval.open(0, 1)
(0, 1)
>>> a = Symbol('a', real=True)
>>> Interval(0, a)
[0, a]
Notes
=====
- Only real end points are supported
- Interval(a, b) with a > b will return the empty set
- Use the evalf() method to turn an Interval into an mpmath
'mpi' interval instance
References
==========
.. [1] http://en.wikipedia.org/wiki/Interval_%28mathematics%29
"""
is_Interval = True
@property
@deprecated(useinstead="is_subset(S.Reals)", issue=6212, deprecated_since_version="0.7.6")
def is_real(self):
return True
def __new__(cls, start, end, left_open=False, right_open=False):
start = _sympify(start)
end = _sympify(end)
left_open = _sympify(left_open)
right_open = _sympify(right_open)
if not all(isinstance(a, (type(true), type(false)))
for a in [left_open, right_open]):
raise NotImplementedError(
"left_open and right_open can have only true/false values, "
"got %s and %s" % (left_open, right_open))
inftys = [S.Infinity, S.NegativeInfinity]
# Only allow real intervals (use symbols with 'is_real=True').
if not all(i.is_real is not False or i in inftys for i in (start, end)):
raise ValueError("Non-real intervals are not supported")
# evaluate if possible
if (end < start) == True:
return S.EmptySet
elif (end - start).is_negative:
return S.EmptySet
if end == start and (left_open or right_open):
return S.EmptySet
if end == start and not (left_open or right_open):
return FiniteSet(end)
# Make sure infinite interval end points are open.
if start == S.NegativeInfinity:
left_open = true
if end == S.Infinity:
right_open = true
return Basic.__new__(cls, start, end, left_open, right_open)
@property
def start(self):
"""
The left end point of 'self'.
This property takes the same value as the 'inf' property.
Examples
========
>>> from sympy import Interval
>>> Interval(0, 1).start
0
"""
return self._args[0]
_inf = left = start
@classmethod
def open(cls, a, b):
"""Return an interval including neither boundary."""
return cls(a, b, True, True)
@classmethod
def Lopen(cls, a, b):
"""Return an interval not including the left boundary."""
return cls(a, b, True, False)
@classmethod
def Ropen(cls, a, b):
"""Return an interval not including the right boundary."""
return cls(a, b, False, True)
@property
def end(self):
"""
The right end point of 'self'.
This property takes the same value as the 'sup' property.
Examples
========
>>> from sympy import Interval
>>> Interval(0, 1).end
1
"""
return self._args[1]
_sup = right = end
@property
def left_open(self):
"""
True if 'self' is left-open.
Examples
========
>>> from sympy import Interval
>>> Interval(0, 1, left_open=True).left_open
True
>>> Interval(0, 1, left_open=False).left_open
False
"""
return self._args[2]
@property
def right_open(self):
"""
True if 'self' is right-open.
Examples
========
>>> from sympy import Interval
>>> Interval(0, 1, right_open=True).right_open
True
>>> Interval(0, 1, right_open=False).right_open
False
"""
return self._args[3]
def _intersect(self, other):
"""
This function should only be used internally
See Set._intersect for docstring
"""
# We only know how to intersect with other intervals
if not other.is_Interval:
return None
# handle (-oo, oo)
infty = S.NegativeInfinity, S.Infinity
if self == Interval(*infty):
l, r = self.left, self.right
if l.is_real or l in infty or r.is_real or r in infty:
return other
# We can't intersect [0,3] with [x,6] -- we don't know if x>0 or x<0
if not self._is_comparable(other):
return None
empty = False
if self.start <= other.end and other.start <= self.end:
# Get topology right.
if self.start < other.start:
start = other.start
left_open = other.left_open
elif self.start > other.start:
start = self.start
left_open = self.left_open
else:
start = self.start
left_open = self.left_open or other.left_open
if self.end < other.end:
end = self.end
right_open = self.right_open
elif self.end > other.end:
end = other.end
right_open = other.right_open
else:
end = self.end
right_open = self.right_open or other.right_open
if end - start == 0 and (left_open or right_open):
empty = True
else:
empty = True
if empty:
return S.EmptySet
return Interval(start, end, left_open, right_open)
def _complement(self, other):
if other == S.Reals:
a = Interval(S.NegativeInfinity, self.start,
True, not self.left_open)
b = Interval(self.end, S.Infinity, not self.right_open, True)
return Union(a, b)
if isinstance(other, FiniteSet):
nums = [m for m in other.args if m.is_number]
if nums == []:
return None
return Set._complement(self, other)
def _union(self, other):
"""
This function should only be used internally
See Set._union for docstring
"""
if other.is_Interval and self._is_comparable(other):
from sympy.functions.elementary.miscellaneous import Min, Max
# Non-overlapping intervals
end = Min(self.end, other.end)
start = Max(self.start, other.start)
if (end < start or
(end == start and (end not in self and end not in other))):
return None
else:
start = Min(self.start, other.start)
end = Max(self.end, other.end)
left_open = ((self.start != start or self.left_open) and
(other.start != start or other.left_open))
right_open = ((self.end != end or self.right_open) and
(other.end != end or other.right_open))
return Interval(start, end, left_open, right_open)
# If I have open end points and these endpoints are contained in other
if ((self.left_open and other.contains(self.start) is true) or
(self.right_open and other.contains(self.end) is true)):
# Fill in my end points and return
open_left = self.left_open and self.start not in other
open_right = self.right_open and self.end not in other
new_self = Interval(self.start, self.end, open_left, open_right)
return set((new_self, other))
return None
@property
def _boundary(self):
return FiniteSet(self.start, self.end)
def _contains(self, other):
if other.is_real is False:
return false
if self.start is S.NegativeInfinity and self.end is S.Infinity:
            if other.is_real is not None:
return other.is_real
if self.left_open:
expr = other > self.start
else:
expr = other >= self.start
if self.right_open:
expr = And(expr, other < self.end)
else:
expr = And(expr, other <= self.end)
return _sympify(expr)
def _eval_imageset(self, f):
from sympy.functions.elementary.miscellaneous import Min, Max
from sympy.solvers.solveset import solveset
from sympy.core.function import diff, Lambda
from sympy.series import limit
from sympy.calculus.singularities import singularities
# TODO: handle functions with infinitely many solutions (eg, sin, tan)
# TODO: handle multivariate functions
expr = f.expr
if len(expr.free_symbols) > 1 or len(f.variables) != 1:
return
var = f.variables[0]
if expr.is_Piecewise:
result = S.EmptySet
domain_set = self
for (p_expr, p_cond) in expr.args:
if p_cond is S.true:
intrvl = domain_set
else:
intrvl = p_cond.as_set()
intrvl = Intersection(domain_set, intrvl)
if p_expr.is_Number:
image = FiniteSet(p_expr)
else:
image = imageset(Lambda(var, p_expr), intrvl)
result = Union(result, image)
# remove the part which has been `imaged`
domain_set = Complement(domain_set, intrvl)
if domain_set.is_EmptySet:
break
return result
if not self.start.is_comparable or not self.end.is_comparable:
return
try:
sing = [x for x in singularities(expr, var)
if x.is_real and x in self]
except NotImplementedError:
return
if self.left_open:
_start = limit(expr, var, self.start, dir="+")
elif self.start not in sing:
_start = f(self.start)
if self.right_open:
_end = limit(expr, var, self.end, dir="-")
elif self.end not in sing:
_end = f(self.end)
if len(sing) == 0:
solns = list(solveset(diff(expr, var), var))
extr = [_start, _end] + [f(x) for x in solns
if x.is_real and x in self]
start, end = Min(*extr), Max(*extr)
left_open, right_open = False, False
if _start <= _end:
# the minimum or maximum value can occur simultaneously
# on both the edge of the interval and in some interior
# point
if start == _start and start not in solns:
left_open = self.left_open
if end == _end and end not in solns:
right_open = self.right_open
else:
if start == _end and start not in solns:
left_open = self.right_open
if end == _start and end not in solns:
right_open = self.left_open
return Interval(start, end, left_open, right_open)
else:
return imageset(f, Interval(self.start, sing[0],
self.left_open, True)) + \
                Union(*[imageset(f, Interval(sing[i], sing[i + 1], True, True))
                        for i in range(0, len(sing) - 1)]) + \
imageset(f, Interval(sing[-1], self.end, True, self.right_open))
@property
def _measure(self):
return self.end - self.start
def to_mpi(self, prec=53):
return mpi(mpf(self.start._eval_evalf(prec)),
mpf(self.end._eval_evalf(prec)))
def _eval_evalf(self, prec):
return Interval(self.left._eval_evalf(prec),
self.right._eval_evalf(prec),
left_open=self.left_open, right_open=self.right_open)
def _is_comparable(self, other):
is_comparable = self.start.is_comparable
is_comparable &= self.end.is_comparable
is_comparable &= other.start.is_comparable
is_comparable &= other.end.is_comparable
return is_comparable
@property
def is_left_unbounded(self):
"""Return ``True`` if the left endpoint is negative infinity. """
return self.left is S.NegativeInfinity or self.left == Float("-inf")
@property
def is_right_unbounded(self):
"""Return ``True`` if the right endpoint is positive infinity. """
return self.right is S.Infinity or self.right == Float("+inf")
def as_relational(self, x):
"""Rewrite an interval in terms of inequalities and logic operators."""
x = sympify(x)
if self.right_open:
right = x < self.end
else:
right = x <= self.end
if self.left_open:
left = self.start < x
else:
left = self.start <= x
return And(left, right)
def _eval_Eq(self, other):
if not other.is_Interval:
if (other.is_Union or other.is_Complement or
other.is_Intersection or other.is_ProductSet):
return
return false
return And(Eq(self.left, other.left),
Eq(self.right, other.right),
self.left_open == other.left_open,
self.right_open == other.right_open)
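# Editorial illustration (not part of the original SymPy source): a hedged sketch
# of the endpoint bookkeeping in Interval._union and Interval._intersect above.
def _example_interval_topology():
    closed = Interval(0, 1)                    # [0, 1]
    open_right = Interval(1, 2, True, False)   # (1, 2]
    # The shared endpoint 1 belongs to [0, 1], so the union merges seamlessly.
    assert closed.union(open_right) == Interval(0, 2)
    # The only candidate point of the intersection is 1, which (1, 2] excludes.
    assert closed.intersect(open_right) == S.EmptySet
    return closed.union(open_right)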
class Union(Set, EvalfMixin):
"""
Represents a union of sets as a :class:`Set`.
Examples
========
>>> from sympy import Union, Interval
>>> Union(Interval(1, 2), Interval(3, 4))
[1, 2] U [3, 4]
The Union constructor will always try to merge overlapping intervals,
if possible. For example:
>>> Union(Interval(1, 2), Interval(2, 3))
[1, 3]
See Also
========
Intersection
References
==========
.. [1] http://en.wikipedia.org/wiki/Union_%28set_theory%29
"""
is_Union = True
def __new__(cls, *args, **kwargs):
evaluate = kwargs.get('evaluate', global_evaluate[0])
# flatten inputs to merge intersections and iterables
args = list(args)
def flatten(arg):
if isinstance(arg, Set):
if arg.is_Union:
return sum(map(flatten, arg.args), [])
else:
return [arg]
if iterable(arg): # and not isinstance(arg, Set) (implicit)
return sum(map(flatten, arg), [])
raise TypeError("Input must be Sets or iterables of Sets")
args = flatten(args)
# Union of no sets is EmptySet
if len(args) == 0:
return S.EmptySet
# Reduce sets using known rules
if evaluate:
return Union.reduce(args)
args = list(ordered(args, Set._infimum_key))
return Basic.__new__(cls, *args)
@staticmethod
def reduce(args):
"""
Simplify a :class:`Union` using known rules
We first start with global rules like
'Merge all FiniteSets'
Then we iterate through all pairs and ask the constituent sets if they
can simplify themselves with any other constituent
"""
# ===== Global Rules =====
# Merge all finite sets
finite_sets = [x for x in args if x.is_FiniteSet]
if len(finite_sets) > 1:
a = (x for set in finite_sets for x in set)
finite_set = FiniteSet(*a)
args = [finite_set] + [x for x in args if not x.is_FiniteSet]
# ===== Pair-wise Rules =====
# Here we depend on rules built into the constituent sets
args = set(args)
new_args = True
while(new_args):
for s in args:
new_args = False
for t in args - set((s,)):
new_set = s._union(t)
                    # This returns None if s does not know how to union
                    # with t. Returns the newly merged set otherwise
if new_set is not None:
if not isinstance(new_set, set):
new_set = set((new_set, ))
new_args = (args - set((s, t))).union(new_set)
break
if new_args:
args = new_args
break
if len(args) == 1:
return args.pop()
else:
return Union(args, evaluate=False)
def _complement(self, universe):
# DeMorgan's Law
return Intersection(s.complement(universe) for s in self.args)
@property
def _inf(self):
        # We use Min so that inf is meaningful in combination with symbolic
# interval end points.
from sympy.functions.elementary.miscellaneous import Min
return Min(*[set.inf for set in self.args])
@property
def _sup(self):
# We use Max so that sup is meaningful in combination with symbolic
# end points.
from sympy.functions.elementary.miscellaneous import Max
return Max(*[set.sup for set in self.args])
def _contains(self, other):
or_args = [the_set.contains(other) for the_set in self.args]
return Or(*or_args)
@property
def _measure(self):
# Measure of a union is the sum of the measures of the sets minus
# the sum of their pairwise intersections plus the sum of their
# triple-wise intersections minus ... etc...
# Sets is a collection of intersections and a set of elementary
# sets which made up those intersections (called "sos" for set of sets)
        # An example element of this list might be:
# ( {A,B,C}, A.intersect(B).intersect(C) )
# Start with just elementary sets ( ({A}, A), ({B}, B), ... )
# Then get and subtract ( ({A,B}, (A int B), ... ) while non-zero
sets = [(FiniteSet(s), s) for s in self.args]
measure = 0
parity = 1
while sets:
# Add up the measure of these sets and add or subtract it to total
measure += parity * sum(inter.measure for sos, inter in sets)
# For each intersection in sets, compute the intersection with every
# other set not already part of the intersection.
sets = ((sos + FiniteSet(newset), newset.intersect(intersection))
for sos, intersection in sets for newset in self.args
if newset not in sos)
# Clear out sets with no measure
sets = [(sos, inter) for sos, inter in sets if inter.measure != 0]
# Clear out duplicates
sos_list = []
sets_list = []
for set in sets:
if set[0] in sos_list:
continue
else:
sos_list.append(set[0])
sets_list.append(set)
sets = sets_list
# Flip Parity - next time subtract/add if we added/subtracted here
parity *= -1
return measure
@property
def _boundary(self):
def boundary_of_set(i):
""" The boundary of set i minus interior of all other sets """
b = self.args[i].boundary
for j, a in enumerate(self.args):
if j != i:
b = b - a.interior
return b
return Union(map(boundary_of_set, range(len(self.args))))
def _eval_imageset(self, f):
return Union(imageset(f, arg) for arg in self.args)
def as_relational(self, symbol):
"""Rewrite a Union in terms of equalities and logic operators. """
return Or(*[set.as_relational(symbol) for set in self.args])
@property
def is_iterable(self):
return all(arg.is_iterable for arg in self.args)
def _eval_evalf(self, prec):
try:
return Union(set._eval_evalf(prec) for set in self.args)
except Exception:
raise TypeError("Not all sets are evalf-able")
def __iter__(self):
        import functools
        import itertools
# roundrobin recipe taken from itertools documentation:
# https://docs.python.org/2/library/itertools.html#recipes
def roundrobin(*iterables):
"roundrobin('ABC', 'D', 'EF') --> A D E B F C"
# Recipe credited to George Sakkis
pending = len(iterables)
            nexts = itertools.cycle(functools.partial(next, iter(it)) for it in iterables)
while pending:
try:
for next in nexts:
yield next()
except StopIteration:
pending -= 1
nexts = itertools.cycle(itertools.islice(nexts, pending))
if all(set.is_iterable for set in self.args):
return roundrobin(*(iter(arg) for arg in self.args))
else:
raise TypeError("Not all constituent sets are iterable")
@property
@deprecated(useinstead="is_subset(S.Reals)", issue=6212, deprecated_since_version="0.7.6")
def is_real(self):
return all(set.is_real for set in self.args)
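# Editorial illustration (not part of the original SymPy source): a hedged sketch
# of the inclusion-exclusion computation in Union._measure above. The two
# intervals overlap on [1, 2], so that overlap is counted only once.
def _example_union_measure():
    u = Union(Interval(0, 2), Interval(1, 4), evaluate=False)
    # measure([0, 2]) + measure([1, 4]) - measure([1, 2]) = 2 + 3 - 1 = 4
    assert u.measure == 4
    return u.measure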
class Intersection(Set):
"""
Represents an intersection of sets as a :class:`Set`.
Examples
========
>>> from sympy import Intersection, Interval
>>> Intersection(Interval(1, 3), Interval(2, 4))
[2, 3]
We often use the .intersect method
>>> Interval(1,3).intersect(Interval(2,4))
[2, 3]
See Also
========
Union
References
==========
.. [1] http://en.wikipedia.org/wiki/Intersection_%28set_theory%29
"""
is_Intersection = True
def __new__(cls, *args, **kwargs):
evaluate = kwargs.get('evaluate', global_evaluate[0])
# flatten inputs to merge intersections and iterables
args = list(args)
def flatten(arg):
if isinstance(arg, Set):
if arg.is_Intersection:
return sum(map(flatten, arg.args), [])
else:
return [arg]
if iterable(arg): # and not isinstance(arg, Set) (implicit)
return sum(map(flatten, arg), [])
raise TypeError("Input must be Sets or iterables of Sets")
args = flatten(args)
if len(args) == 0:
raise TypeError("Intersection expected at least one argument")
# args can't be ordered for Partition see issue #9608
if 'Partition' not in [type(a).__name__ for a in args]:
args = list(ordered(args, Set._infimum_key))
# Reduce sets using known rules
if evaluate:
return Intersection.reduce(args)
return Basic.__new__(cls, *args)
@property
def is_iterable(self):
return any(arg.is_iterable for arg in self.args)
@property
def _inf(self):
raise NotImplementedError()
@property
def _sup(self):
raise NotImplementedError()
def _eval_imageset(self, f):
return Intersection(imageset(f, arg) for arg in self.args)
def _contains(self, other):
from sympy.logic.boolalg import And
return And(*[set.contains(other) for set in self.args])
def __iter__(self):
for s in self.args:
if s.is_iterable:
other_sets = set(self.args) - set((s,))
other = Intersection(other_sets, evaluate=False)
return (x for x in s if x in other)
raise ValueError("None of the constituent sets are iterable")
@staticmethod
def reduce(args):
"""
Simplify an intersection using known rules
We first start with global rules like
'if any empty sets return empty set' and 'distribute any unions'
Then we iterate through all pairs and ask the constituent sets if they
can simplify themselves with any other constituent
"""
# ===== Global Rules =====
# If any EmptySets return EmptySet
if any(s.is_EmptySet for s in args):
return S.EmptySet
# If any FiniteSets see which elements of that finite set occur within
# all other sets in the intersection
for s in args:
if s.is_FiniteSet:
other_args = [a for a in args if a != s]
res = FiniteSet(*[x for x in s
if all(other.contains(x) == True for other in other_args)])
unk = [x for x in s
if any(other.contains(x) not in (True, False) for other in other_args)]
if unk:
other_sets = Intersection(*other_args)
if other_sets.is_EmptySet:
return EmptySet()
res += Intersection(s.func(*unk), other_sets, evaluate=False)
return res
# If any of the sets are unions, return a Union of Intersections
for s in args:
if s.is_Union:
other_sets = set(args) - set((s,))
if len(other_sets) > 0:
other = Intersection(other_sets)
return Union(Intersection(arg, other) for arg in s.args)
else:
return Union(arg for arg in s.args)
for s in args:
if s.is_Complement:
other_sets = args + [s.args[0]]
other_sets.remove(s)
return Complement(Intersection(*other_sets), s.args[1])
# At this stage we are guaranteed not to have any
# EmptySets, FiniteSets, or Unions in the intersection
# ===== Pair-wise Rules =====
# Here we depend on rules built into the constituent sets
args = set(args)
new_args = True
while(new_args):
for s in args:
new_args = False
for t in args - set((s,)):
new_set = s._intersect(t)
# This returns None if s does not know how to intersect
# with t. Returns the newly intersected set otherwise
if new_set is not None:
new_args = (args - set((s, t))).union(set((new_set, )))
break
if new_args:
args = new_args
break
if len(args) == 1:
return args.pop()
else:
return Intersection(args, evaluate=False)
def as_relational(self, symbol):
"""Rewrite an Intersection in terms of equalities and logic operators"""
return And(*[set.as_relational(symbol) for set in self.args])
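# Editorial illustration (not part of the original SymPy source): a hedged sketch
# of the FiniteSet fast path in Intersection.reduce above -- elements of the
# finite set are simply filtered through the remaining sets.
def _example_intersection_reduce():
    result = Intersection(FiniteSet(1, 2, 3, 4), Interval(2, 10))
    assert result == FiniteSet(2, 3, 4)
    return result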
class Complement(Set, EvalfMixin):
"""Represents the set difference or relative complement of a set with
another set.
`A - B = \{x \in A| x \\notin B\}`
Examples
========
>>> from sympy import Complement, FiniteSet
>>> Complement(FiniteSet(0, 1, 2), FiniteSet(1))
{0, 2}
See Also
=========
Intersection, Union
References
==========
.. [1] http://mathworld.wolfram.com/ComplementSet.html
"""
is_Complement = True
def __new__(cls, a, b, evaluate=True):
if evaluate:
return Complement.reduce(a, b)
return Basic.__new__(cls, a, b)
@staticmethod
def reduce(A, B):
"""
Simplify a :class:`Complement`.
"""
if B == S.UniversalSet:
return EmptySet()
if isinstance(B, Union):
return Intersection(s.complement(A) for s in B.args)
result = B._complement(A)
        if result is not None:
return result
else:
return Complement(A, B, evaluate=False)
def _contains(self, other):
A = self.args[0]
B = self.args[1]
return And(A.contains(other), Not(B.contains(other)))
class EmptySet(with_metaclass(Singleton, Set)):
"""
Represents the empty set. The empty set is available as a singleton
as S.EmptySet.
Examples
========
>>> from sympy import S, Interval
>>> S.EmptySet
EmptySet()
>>> Interval(1, 2).intersect(S.EmptySet)
EmptySet()
See Also
========
UniversalSet
References
==========
.. [1] http://en.wikipedia.org/wiki/Empty_set
"""
is_EmptySet = True
is_FiniteSet = True
def _intersect(self, other):
return S.EmptySet
@property
def _measure(self):
return 0
def _contains(self, other):
return false
def as_relational(self, symbol):
return False
def __len__(self):
return 0
def _union(self, other):
return other
def __iter__(self):
return iter([])
def _eval_imageset(self, f):
return self
def _eval_powerset(self):
return FiniteSet(self)
@property
def _boundary(self):
return self
def _complement(self, other):
return other
def _symmetric_difference(self, other):
return other
class UniversalSet(with_metaclass(Singleton, Set)):
"""
Represents the set of all things.
The universal set is available as a singleton as S.UniversalSet
Examples
========
>>> from sympy import S, Interval
>>> S.UniversalSet
UniversalSet()
>>> Interval(1, 2).intersect(S.UniversalSet)
[1, 2]
See Also
========
EmptySet
References
==========
.. [1] http://en.wikipedia.org/wiki/Universal_set
"""
is_UniversalSet = True
def _intersect(self, other):
return other
def _complement(self, other):
return S.EmptySet
def _symmetric_difference(self, other):
return other
@property
def _measure(self):
return S.Infinity
def _contains(self, other):
return true
def as_relational(self, symbol):
return True
def _union(self, other):
return self
@property
def _boundary(self):
return EmptySet()
class FiniteSet(Set, EvalfMixin):
"""
Represents a finite set of discrete numbers
Examples
========
>>> from sympy import FiniteSet
>>> FiniteSet(1, 2, 3, 4)
{1, 2, 3, 4}
>>> 3 in FiniteSet(1, 2, 3, 4)
True
>>> members = [1, 2, 3, 4]
>>> FiniteSet(*members)
{1, 2, 3, 4}
References
==========
.. [1] http://en.wikipedia.org/wiki/Finite_set
"""
is_FiniteSet = True
is_iterable = True
def __new__(cls, *args, **kwargs):
evaluate = kwargs.get('evaluate', global_evaluate[0])
if evaluate:
args = list(map(sympify, args))
if len(args) == 0:
return EmptySet()
else:
args = list(map(sympify, args))
args = list(ordered(frozenset(tuple(args)), Set._infimum_key))
obj = Basic.__new__(cls, *args)
obj._elements = frozenset(args)
return obj
def _eval_Eq(self, other):
if not other.is_FiniteSet:
if (other.is_Union or other.is_Complement or
other.is_Intersection or other.is_ProductSet):
return
return false
if len(self) != len(other):
return false
return And(*(Eq(x, y) for x, y in zip(self.args, other.args)))
def __iter__(self):
return iter(self.args)
def _intersect(self, other):
"""
This function should only be used internally
See Set._intersect for docstring
"""
if isinstance(other, self.__class__):
return self.__class__(*(self._elements & other._elements))
return self.__class__(el for el in self if el in other)
def _complement(self, other):
if isinstance(other, Interval):
nums = sorted(m for m in self.args if m.is_number)
if other == S.Reals and nums != []:
syms = [m for m in self.args if m.is_Symbol]
# Reals cannot contain elements other than numbers and symbols.
intervals = [] # Build up a list of intervals between the elements
intervals += [Interval(S.NegativeInfinity, nums[0], True, True)]
for a, b in zip(nums[:-1], nums[1:]):
intervals.append(Interval(a, b, True, True)) # both open
intervals.append(Interval(nums[-1], S.Infinity, True, True))
if syms != []:
return Complement(Union(intervals, evaluate=False),
FiniteSet(*syms), evaluate=False)
else:
return Union(intervals, evaluate=False)
elif nums == []:
return None
elif isinstance(other, FiniteSet):
elms_unknown = FiniteSet(*[el for el in self if other.contains(el) not in (True, False)])
if elms_unknown == self:
return
return Complement(FiniteSet(*[el for el in other if self.contains(el) != True]), elms_unknown)
return Set._complement(self, other)
def _union(self, other):
"""
This function should only be used internally
See Set._union for docstring
"""
if other.is_FiniteSet:
return FiniteSet(*(self._elements | other._elements))
# If other set contains one of my elements, remove it from myself
if any(other.contains(x) is true for x in self):
return set((
FiniteSet(*[x for x in self if other.contains(x) is not true]),
other))
return None
def _contains(self, other):
"""
Tests whether an element, other, is in the set.
Relies on Python's set class. This tests for object equality
All inputs are sympified
Examples
========
>>> from sympy import FiniteSet
>>> 1 in FiniteSet(1, 2)
True
>>> 5 in FiniteSet(1, 2)
False
"""
r = false
for e in self._elements:
t = Eq(e, other, evaluate=True)
if isinstance(t, Eq):
t = t.simplify()
if t == true:
return t
elif t != false:
r = None
return r
def _eval_imageset(self, f):
return FiniteSet(*map(f, self))
@property
def _boundary(self):
return self
@property
def _inf(self):
from sympy.functions.elementary.miscellaneous import Min
return Min(*self)
@property
def _sup(self):
from sympy.functions.elementary.miscellaneous import Max
return Max(*self)
@property
def measure(self):
return 0
def __len__(self):
return len(self.args)
def as_relational(self, symbol):
"""Rewrite a FiniteSet in terms of equalities and logic operators. """
from sympy.core.relational import Eq
return Or(*[Eq(symbol, elem) for elem in self])
@property
@deprecated(useinstead="is_subset(S.Reals)", issue=6212, deprecated_since_version="0.7.6")
def is_real(self):
return all(el.is_real for el in self)
def compare(self, other):
return (hash(self) - hash(other))
def _eval_evalf(self, prec):
return FiniteSet(*[elem._eval_evalf(prec) for elem in self])
def _hashable_content(self):
return (self._elements,)
@property
def _sorted_args(self):
return tuple(ordered(self.args, Set._infimum_key))
def _eval_powerset(self):
return self.func(*[self.func(*s) for s in subsets(self.args)])
def __ge__(self, other):
return other.is_subset(self)
def __gt__(self, other):
return self.is_proper_superset(other)
def __le__(self, other):
return self.is_subset(other)
def __lt__(self, other):
return self.is_proper_subset(other)
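# Editorial illustration (not part of the original SymPy source): a hedged sketch
# of FiniteSet._complement against the real line, which punches the numeric
# elements out of (-oo, oo) as described above.
def _example_finiteset_complement():
    punctured = FiniteSet(0, 5).complement(S.Reals)
    # (-oo, 0) U (0, 5) U (5, oo): open intervals around the removed points.
    assert punctured == Union(Interval.open(S.NegativeInfinity, 0),
                              Interval.open(0, 5),
                              Interval.open(5, S.Infinity))
    return punctured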
class SymmetricDifference(Set):
"""Represents the set of elements which are in either of the
sets and not in their intersection.
Examples
========
>>> from sympy import SymmetricDifference, FiniteSet
>>> SymmetricDifference(FiniteSet(1, 2, 3), FiniteSet(3, 4, 5))
{1, 2, 4, 5}
See Also
========
Complement, Union
References
==========
.. [1] http://en.wikipedia.org/wiki/Symmetric_difference
"""
is_SymmetricDifference = True
def __new__(cls, a, b, evaluate=True):
if evaluate:
return SymmetricDifference.reduce(a, b)
return Basic.__new__(cls, a, b)
@staticmethod
def reduce(A, B):
result = B._symmetric_difference(A)
if result is not None:
return result
else:
return SymmetricDifference(A, B, evaluate=False)
def imageset(*args):
r"""
Image of set under transformation ``f``.
If this function can't compute the image, it returns an
unevaluated ImageSet object.
.. math::
{ f(x) | x \in self }
Examples
========
>>> from sympy import Interval, Symbol, imageset, sin, Lambda
>>> x = Symbol('x')
>>> imageset(x, 2*x, Interval(0, 2))
[0, 4]
>>> imageset(lambda x: 2*x, Interval(0, 2))
[0, 4]
>>> imageset(Lambda(x, sin(x)), Interval(-2, 1))
ImageSet(Lambda(x, sin(x)), [-2, 1])
See Also
========
sympy.sets.fancysets.ImageSet
"""
from sympy.core import Dummy, Lambda
from sympy.sets.fancysets import ImageSet
if len(args) == 3:
f = Lambda(*args[:2])
else:
# var and expr are being defined this way to
# support Python lambda and not just sympy Lambda
f = args[0]
if not isinstance(f, Lambda):
var = Dummy()
expr = args[0](var)
f = Lambda(var, expr)
set = args[-1]
r = set._eval_imageset(f)
if isinstance(r, ImageSet):
f, set = r.args
if f.variables[0] == f.expr:
return set
if isinstance(set, ImageSet):
if len(set.lamda.variables) == 1 and len(f.variables) == 1:
return imageset(Lambda(set.lamda.variables[0],
f.expr.subs(f.variables[0], set.lamda.expr)),
set.base_set)
if r is not None:
return r
return ImageSet(f, set)
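# Editorial illustration (not part of the original SymPy source): a hedged sketch
# of the composition rule near the end of imageset() -- mapping a Lambda over an
# existing ImageSet composes the two Lambdas instead of nesting ImageSets.
def _example_imageset_composition():
    from sympy import Symbol, Lambda
    from sympy.sets.fancysets import ImageSet
    n = Symbol('n')
    evens = imageset(Lambda(n, 2*n), S.Integers)
    shifted = imageset(Lambda(n, n + 1), evens)
    assert shifted == ImageSet(Lambda(n, 2*n + 1), S.Integers)
    return shifted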
| bsd-3-clause | 3,218,167,639,633,148,000 | 27.722693 | 106 | 0.544357 | false |
tum-pbs/PhiFlow | phi/math/backend/_backend.py | 1 | 44839 | from collections import namedtuple
from contextlib import contextmanager
from threading import Barrier
from typing import List, Callable
import numpy
from ._dtype import DType, combine_types
SolveResult = namedtuple('SolveResult', [
'method', 'x', 'residual', 'iterations', 'function_evaluations', 'converged', 'diverged', 'message',
])
class ComputeDevice:
"""
A physical device that can be selected to perform backend computations.
"""
def __init__(self, backend: 'Backend', name: str, device_type: str, memory: int, processor_count: int, description: str, ref=None):
self.name: str = name
""" Name of the compute device. CPUs are typically called `'CPU'`. """
self.device_type: str = device_type
""" Type of device such as `'CPU'`, `'GPU'` or `'TPU'`. """
self.memory: int = memory
""" Maximum memory of the device that can be allocated (in bytes). -1 for n/a. """
self.processor_count: int = processor_count
""" Number of CPU cores or GPU multiprocessors. -1 for n/a. """
self.description: str = description
""" Further information about the device such as driver version. """
self.ref = ref
""" (Optional) Reference to the internal device representation. """
self.backend: 'Backend' = backend
""" Backend that this device belongs to. Different backends represent the same device with different objects. """
def __repr__(self):
mem = f"{(self.memory / 1024 ** 2)} MB" if self.memory > 0 else "memory: n/a"
pro = f"{self.processor_count} processors" if self.processor_count > 0 else "processors: n/a"
descr = self.description.replace('\n', ' ')
if len(descr) > 30:
descr = descr[:28] + "..."
return f"'{self.name}' ({self.device_type}) | {mem} | {pro} | {descr}"
class Backend:
def __init__(self, name: str, default_device: ComputeDevice):
"""
Backends delegate low-level operations to a compute library or emulate them.
The methods of `Backend` form a comprehensive list of available operations.
To support a compute library, subclass `Backend` and register it by adding it to `BACKENDS`.
Args:
name: Human-readable string
default_device: `ComputeDevice` being used by default
"""
self._name = name
self._default_device = default_device
def __enter__(self):
_DEFAULT.append(self)
def __exit__(self, exc_type, exc_val, exc_tb):
_DEFAULT.pop(-1)
@property
def name(self) -> str:
return self._name
def supports(self, feature: str or Callable) -> bool:
"""
Tests if this backend supports the given feature.
Features correspond to a method of this backend that must be implemented if the feature is supported.
Possible features:
* `sparse_tensor`
            * `gradients`
Args:
feature: `str` or unbound Backend method, e.g. `Backend.sparse_tensor`
Returns:
Whether the feature is supported.
"""
feature = feature if isinstance(feature, str) else feature.__name__
if not hasattr(Backend, feature):
raise ValueError(f"Not a valid feature: '{feature}'")
backend_fun = getattr(Backend, feature)
impl_fun = getattr(self.__class__, feature)
return impl_fun is not backend_fun
def prefers_channels_last(self) -> bool:
raise NotImplementedError()
@property
def precision(self) -> int:
""" Short for math.backend.get_precision() """
return get_precision()
@property
def float_type(self) -> DType:
return DType(float, self.precision)
@property
def as_registered(self) -> 'Backend':
from phi.math.backend import BACKENDS
for backend in BACKENDS:
if self.name in backend.name:
return backend
raise RuntimeError(f"Backend '{self}' is not visible.")
@property
def complex_type(self) -> DType:
return DType(complex, max(64, self.precision))
def combine_types(self, *dtypes: DType) -> DType:
return combine_types(*dtypes, fp_precision=self.precision)
def auto_cast(self, *tensors) -> list:
"""
        Determines the appropriate value type resulting from operations involving the tensors as input.
This method is called by the default implementations of basic operators.
Backends can override this method to prevent unnecessary casting.
Args:
*tensors: tensors to cast and to consider when determining the common data type
Returns:
tensors cast to a common data type
"""
dtypes = [self.dtype(t) for t in tensors]
result_type = self.combine_types(*dtypes)
if result_type.kind in (int, float, complex, bool):
tensors = [self.cast(t, result_type) for t in tensors]
return tensors
def __str__(self):
return self.name
def __repr__(self):
return self.name
def list_devices(self, device_type: str or None = None) -> List[ComputeDevice]:
"""
Fetches information about all available compute devices this backend can use.
Implementations:
* NumPy: [`os.cpu_count`](https://docs.python.org/3/library/os.html#os.cpu_count)
* PyTorch: [`torch.cuda.get_device_properties`](https://pytorch.org/docs/stable/cuda.html#torch.cuda.get_device_properties)
* TensorFlow: `tensorflow.python.client.device_lib.list_local_devices`
* Jax: [`jax.devices`](https://jax.readthedocs.io/en/latest/jax.html#jax.devices)
Args:
device_type: (optional) Return only devices of this type, e.g. `'GPU'` or `'CPU'`. See `ComputeDevice.device_type`.
Returns:
`list` of all currently available devices.
"""
raise NotImplementedError()
def get_default_device(self) -> ComputeDevice:
return self._default_device
def set_default_device(self, device: ComputeDevice or str):
if isinstance(device, str):
devices = self.list_devices(device)
            assert len(devices) >= 1, f"{self.name}: Cannot select '{device}' because no device of this type is available."
device = devices[0]
self._default_device = device
def seed(self, seed: int):
raise NotImplementedError()
def is_tensor(self, x, only_native=False):
"""
An object is considered a native tensor by a backend if no internal conversion is required by backend methods.
        An object is considered a tensor (native or otherwise) by a backend if it is not a struct (e.g. tuple, list) and all methods of the backend accept it as a tensor argument.
Args:
x: object to check
only_native: If True, only accepts true native tensor representations, not Python numbers or others that are also supported as tensors (Default value = False)
Returns:
bool: whether `x` is considered a tensor by this backend
"""
raise NotImplementedError()
def as_tensor(self, x, convert_external=True):
"""
Converts a tensor-like object to the native tensor representation of this backend.
If x is a native tensor of this backend, it is returned without modification.
If `x` is a Python number (numbers.Number instance), `convert_external` decides whether to convert it, unless the backend cannot handle Python numbers directly.
*Note:* There may be objects that are considered tensors by this backend but are not native and will thus be converted by this method.
Args:
x: tensor-like, e.g. list, tuple, Python number, tensor
convert_external: if False and `x` is a Python number that is understood by this backend, this method returns the number as-is. This can help prevent type clashes like int32 vs int64. (Default value = True)
Returns:
tensor representation of `x`
"""
raise NotImplementedError()
def is_available(self, tensor) -> bool:
"""
Tests if the value of the tensor is known and can be read at this point.
If true, `numpy(tensor)` must return a valid NumPy representation of the value.
Tensors are typically available when the backend operates in eager mode.
Args:
tensor: backend-compatible tensor
Returns:
bool
"""
raise NotImplementedError()
def numpy(self, tensor) -> numpy.ndarray:
"""
Returns a NumPy representation of the given tensor.
If `tensor` is already a NumPy array, it is returned without modification.
This method raises an error if the value of the tensor is not known at this point, e.g. because it represents a node in a graph.
Use `is_available(tensor)` to check if the value can be represented as a NumPy array.
Args:
tensor: backend-compatible tensor
Returns:
NumPy representation of the values stored in the tensor
"""
raise NotImplementedError()
def to_dlpack(self, tensor):
raise NotImplementedError()
def from_dlpack(self, capsule):
raise NotImplementedError()
def copy(self, tensor, only_mutable=False):
raise NotImplementedError()
def call(self, f: Callable, *args, name=None):
"""
Calls `f(*args)` and returns the result.
This method may be used to register internal calls with the profiler.
Usage:
choose_backend(key).call(custom_function, *args)
"""
return f(*args)
def block_until_ready(self, values):
pass
def jit_compile(self, f: Callable) -> Callable:
return NotImplemented
def functional_gradient(self, f, wrt: tuple or list, get_output: bool):
raise NotImplementedError(self)
def custom_gradient(self, f: Callable, gradient: Callable) -> Callable:
"""
Creates a function based on `f` that uses a custom gradient for backprop.
Args:
f: Forward function.
gradient: Function for backprop. Will be called as `gradient(*d_out)` to compute the gradient of `f`.
Returns:
Function with similar signature and return values as `f`. However, the returned function does not support keyword arguments.
"""
return NotImplemented
def jit_compile_grad(self, f, wrt: tuple or list, get_output: bool):
raise NotImplementedError()
def transpose(self, tensor, axes):
raise NotImplementedError()
def random_uniform(self, shape):
""" Float tensor of selected precision containing random values in the range [0, 1) """
raise NotImplementedError(self)
def random_normal(self, shape):
""" Float tensor of selected precision containing random values sampled from a normal distribution with mean 0 and std 1. """
raise NotImplementedError(self)
def stack(self, values, axis=0):
raise NotImplementedError(self)
def concat(self, values, axis):
raise NotImplementedError(self)
def pad(self, value, pad_width, mode: str = 'constant', constant_values=0):
"""
Pad a tensor with values as specified by `mode` and `constant_values`.
If the mode is not supported, returns NotImplemented.
Args:
value: tensor
pad_width: 2D tensor specifying the number of values padded to the edges of each axis in the form [[axis 0 lower, axis 0 upper], ...] including batch and component axes.
mode: one of 'constant', 'boundary', 'periodic', 'symmetric', 'reflect' (Default value = 'constant')
constant_values: used for out-of-bounds points if mode='constant' (Default value = 0)
Returns:
padded tensor or NotImplemented
"""
raise NotImplementedError(self)
def reshape(self, value, shape):
raise NotImplementedError(self)
def flip(self, value, axes: tuple or list):
slices = tuple(slice(None, None, -1 if i in axes else None) for i in range(self.ndims(value)))
return value[slices]
def sum(self, value, axis=None, keepdims=False):
raise NotImplementedError(self)
def prod(self, value, axis=None):
raise NotImplementedError(self)
def divide_no_nan(self, x, y):
"""
Computes x/y but returns 0 if y=0.
Args:
x: numerator
y: denominator
Returns:
`x / y` with 0 where `y == 0`
"""
raise NotImplementedError(self)
def where(self, condition, x=None, y=None):
raise NotImplementedError(self)
def nonzero(self, values):
"""
Args:
values: Tensor with only spatial dimensions
Returns:
non-zero multi-indices as tensor of shape (nnz, vector)
"""
raise NotImplementedError(self)
def mean(self, value, axis=None, keepdims=False):
raise NotImplementedError(self)
def range(self, start, limit=None, delta=1, dtype: DType = DType(int, 32)):
raise NotImplementedError(self)
def zeros(self, shape, dtype: DType = None):
raise NotImplementedError(self)
def zeros_like(self, tensor):
raise NotImplementedError(self)
def ones(self, shape, dtype: DType = None):
raise NotImplementedError(self)
def ones_like(self, tensor):
raise NotImplementedError(self)
def meshgrid(self, *coordinates):
raise NotImplementedError(self)
def linspace(self, start, stop, number):
raise NotImplementedError(self)
def tensordot(self, a, a_axes: tuple or list, b, b_axes: tuple or list):
""" Multiply-sum-reduce a_axes of a with b_axes of b. """
raise NotImplementedError(self)
def matmul(self, A, b):
raise NotImplementedError(self)
def einsum(self, equation, *tensors):
raise NotImplementedError(self)
def while_loop(self, loop: Callable, values: tuple):
"""
```python
while any(values[0]):
values = loop(*values)
return values
```
This operation does not support backpropagation.
Args:
loop: Loop function, must return a `tuple` with entries equal to `values` in shape and data type.
values: Initial values of loop variables.
Returns:
Loop variables upon loop completion.
"""
raise NotImplementedError(self)
def abs(self, x):
raise NotImplementedError(self)
def sign(self, x):
raise NotImplementedError(self)
def round(self, x):
raise NotImplementedError(self)
def ceil(self, x):
raise NotImplementedError(self)
def floor(self, x):
raise NotImplementedError(self)
def max(self, x, axis=None, keepdims=False):
raise NotImplementedError(self)
def min(self, x, axis=None, keepdims=False):
raise NotImplementedError(self)
def maximum(self, a, b):
raise NotImplementedError(self)
def minimum(self, a, b):
raise NotImplementedError(self)
def clip(self, x, minimum, maximum):
raise NotImplementedError(self)
def sqrt(self, x):
raise NotImplementedError(self)
def exp(self, x):
raise NotImplementedError(self)
def conv(self, value, kernel, zero_padding=True):
"""
Convolve value with kernel.
Depending on the tensor rank, the convolution is either 1D (rank=3), 2D (rank=4) or 3D (rank=5).
Higher dimensions may not be supported.
Args:
value: tensor of shape (batch_size, in_channel, spatial...)
kernel: tensor of shape (batch_size or 1, out_channel, in_channel, spatial...)
zero_padding: If True, pads the edges of `value` with zeros so that the result has the same shape as `value`.
Returns:
Convolution result as tensor of shape (batch_size, out_channel, spatial...)
"""
raise NotImplementedError(self)
def expand_dims(self, a, axis=0, number=1):
raise NotImplementedError(self)
def shape(self, tensor):
raise NotImplementedError(self)
def staticshape(self, tensor):
raise NotImplementedError(self)
def cast(self, x, dtype: DType):
raise NotImplementedError(self)
def to_float(self, x):
"""
Converts a tensor to floating point values with precision equal to the currently set default precision.
See Also:
`Backend.precision()`.
If `x` is mutable and of the correct floating type, returns a copy of `x`.
To convert float tensors to the backend precision but leave non-float tensors untouched, use `Backend.as_tensor()`.
Args:
x: tensor of bool, int or float
Returns:
Values of `x` as float tensor
"""
return self.cast(x, self.float_type)
def to_int32(self, x):
return self.cast(x, DType(int, 32))
def to_int64(self, x):
return self.cast(x, DType(int, 64))
def to_complex(self, x):
return self.cast(x, DType(complex, max(64, min(self.precision * 2, 128))))
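# Hedged note: the conversion helpers above all route through `cast`, so the target dtype follows
# the current precision setting, e.g.
#
#     b.to_float(x)    # float32 under the default precision, float64 inside `with precision(64):`
#     b.to_complex(x)  # complex64 by default, complex128 at 64-bit precision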
def batched_gather_nd(self, values, indices):
"""
Gathers values from the tensor `values` at locations `indices`.
The first dimension of `values` and `indices` is the batch dimension which must be either equal for both or one for either.
Args:
values: tensor of shape (batch, spatial..., channel)
indices: int tensor of shape (batch, any..., multi_index) where the size of multi_index is values.rank - 2.
Returns:
Gathered values as tensor of shape (batch, any..., channel)
"""
raise NotImplementedError(self)
def flatten(self, x):
return self.reshape(x, (-1,))
def std(self, x, axis=None, keepdims=False):
raise NotImplementedError(self)
def boolean_mask(self, x, mask, axis=0):
"""
Args:
x: tensor with any number of dimensions
mask: 1D mask tensor
axis: Axis index >= 0
"""
raise NotImplementedError(self)
def isfinite(self, x):
raise NotImplementedError(self)
def scatter(self, base_grid, indices, values, mode: str):
"""
Depending on `mode`, performs scatter_update or scatter_add.
Args:
base_grid: Tensor into which scatter values are inserted at indices. Tensor of shape (batch_size, spatial..., channels)
indices: Tensor of shape (batch_size or 1, update_count, index_vector)
values: Values to scatter at indices. Tensor of shape (batch_size or 1, update_count or 1, channels or 1)
mode: One of ('update', 'add')
Returns:
Copy of base_grid with values at `indices` updated by `values`.
"""
raise NotImplementedError(self)
def any(self, boolean_tensor, axis=None, keepdims=False):
raise NotImplementedError(self)
def all(self, boolean_tensor, axis=None, keepdims=False):
raise NotImplementedError(self)
def fft(self, x):
"""
Computes the n-dimensional FFT along all but the first and last dimensions.
Args:
x: tensor of dimension 3 or higher
Returns:
Complex tensor holding the Fourier transform of `x`, same shape as `x`
"""
raise NotImplementedError(self)
def ifft(self, k):
"""
Computes the n-dimensional inverse FFT along all but the first and last dimensions.
Args:
k: tensor of dimension 3 or higher
Returns:
Tensor holding the inverse Fourier transform of `k`, same shape as `k`
"""
raise NotImplementedError(self)
def imag(self, x):
raise NotImplementedError(self)
def real(self, x):
raise NotImplementedError(self)
def sin(self, x):
raise NotImplementedError(self)
def cos(self, x):
raise NotImplementedError(self)
def tan(self, x):
raise NotImplementedError(self)
def log(self, x):
""" Natural logarithm """
raise NotImplementedError(self)
def log2(self, x):
raise NotImplementedError(self)
def log10(self, x):
raise NotImplementedError(self)
def dtype(self, array) -> DType:
raise NotImplementedError(self)
def tile(self, value, multiples):
"""
Repeats the tensor along each axis the number of times given by multiples.
If `multiples` has more dimensions than `value`, these dimensions are added to `value` as outer dimensions.
Args:
value: tensor
multiples: tuple or list of integers
Returns:
tile tensor
"""
raise NotImplementedError(self)
def sparse_tensor(self, indices, values, shape):
"""
Optional feature: constructs a sparse tensor from the given indices and values.
Args:
indices: tuple/list matching the dimensions (pair for matrix)
values: non-zero values corresponding to `indices`
shape: dense shape of the sparse tensor
Returns:
"""
raise NotImplementedError(self)
def coordinates(self, tensor):
"""
Returns the coordinates and values of a tensor.
Args:
tensor: Sparse tensor
Returns:
coordinates: `tuple` of tensors holding the coordinate vectors, i.e. (row, col) for matrices.
values: tensor holding the corresponding values
"""
raise NotImplementedError(self)
def minimize(self, method: str, f, x0, atol, max_iter, trj: bool):
from scipy.optimize import OptimizeResult, minimize
from threading import Thread, Barrier
assert self.supports(Backend.functional_gradient)
assert len(self.staticshape(x0)) == 2 # (batch, parameters)
batch_size = self.staticshape(x0)[0]
fg = self.functional_gradient(f, [0], get_output=True)
method_description = f"SciPy {method} with {self.name}"
iterations = [0] * batch_size
function_evaluations = [0] * batch_size
xs = [None] * batch_size
final_losses = [None] * batch_size
converged = [False] * batch_size
diverged = [False] * batch_size
messages = [""] * batch_size
f_inputs = [None] * batch_size
f_b_losses = None
f_b_losses_np = None
f_grad_np = None
f_input_available = Barrier(batch_size + 1)
f_output_available = Barrier(batch_size + 1)
finished = [False] * batch_size
all_finished = False
trajectories = [[] for _ in range(batch_size)] if trj else None
threads = []
for b in range(batch_size):
def b_thread(b=b):
recent_b_losses = []
def b_fun(x: numpy.ndarray):
function_evaluations[b] += 1
f_inputs[b] = self.as_tensor(x, convert_external=True)
f_input_available.wait()
f_output_available.wait()
recent_b_losses.append(f_b_losses[b])
if final_losses[b] is None: # first evaluation
final_losses[b] = f_b_losses[b]
if trajectories is not None:
trajectories[b].append(SolveResult(method_description, x0[b], f_b_losses[b], 0, 1, False, False, ""))
return f_b_losses_np[b], f_grad_np[b]
def callback(x, *args): # L-BFGS-B only passes x but the documentation says (x, state)
iterations[b] += 1
loss = min(recent_b_losses)
recent_b_losses.clear()
final_losses[b] = loss
if trajectories is not None:
trajectories[b].append(SolveResult(method_description, x, loss, iterations[b], function_evaluations[b], False, False, ""))
res = minimize(fun=b_fun, x0=x0[b], jac=True, method=method, tol=atol[b], options={'maxiter': max_iter[b]}, callback=callback)
assert isinstance(res, OptimizeResult)
# res.nit, res.nfev
xs[b] = res.x
converged[b] = res.success
diverged[b] = res.status not in (0, 1) # 0=success
messages[b] = res.message
finished[b] = True
while not all_finished:
f_input_available.wait()
f_output_available.wait()
b_thread = Thread(target=b_thread)
threads.append(b_thread)
b_thread.start()
while True:
f_input_available.wait()
if all(finished):
all_finished = True
f_output_available.wait()
break
_, f_b_losses, f_grad = fg(self.stack(f_inputs))
f_b_losses_np = self.numpy(f_b_losses).astype(numpy.float64)
f_grad_np = self.numpy(f_grad).astype(numpy.float64)
f_output_available.wait()
for b_thread in threads:
b_thread.join() # make sure threads exit correctly
if trj:
max_trajectory_length = max([len(t) for t in trajectories])
last_points = [SolveResult(method_description, xs[b], final_losses[b], iterations[b], function_evaluations[b], converged[b], diverged[b], "") for b in range(batch_size)]
trajectories = [t[:-1] + [last_point] * (max_trajectory_length - len(t) + 1) for t, last_point in zip(trajectories, last_points)]
trajectory = []
for states in zip(*trajectories):
x = self.stack([self.to_float(state.x) for state in states])
residual = self.stack([state.residual for state in states])
iterations = [state.iterations for state in states]
function_evaluations = [state.function_evaluations for state in states]
converged = [state.converged for state in states]
diverged = [state.diverged for state in states]
trajectory.append(SolveResult(method_description, x, residual, iterations, function_evaluations, converged, diverged, messages))
return trajectory
else:
x = self.stack(xs)
residual = self.stack(final_losses)
return SolveResult(method_description, x, residual, iterations, function_evaluations, converged, diverged, messages)
def linear_solve(self, method: str, lin, y, x0, rtol, atol, max_iter, trj: bool) -> SolveResult or List[SolveResult]:
"""
Solve the system of linear equations A · x = y.
This method need not provide a gradient for the operation.
Args:
method: Which algorithm to use. One of `('auto', 'CG', 'CG-adaptive')`.
lin: Linear operation. One of
* sparse/dense matrix valid for all instances
* tuple/list of sparse/dense matrices for varying matrices along batch, must have the same nonzero locations.
* linear function A(x), must be called on all instances in parallel
y: target result of A * x. 2nd order tensor (batch, vector) or list of vectors.
x0: Initial guess of size (batch, parameters)
rtol: Relative tolerance of size (batch,)
atol: Absolute tolerance of size (batch,)
max_iter: Maximum number of iterations of size (batch,)
trj: Whether to record and return the optimization trajectory as a `List[SolveResult]`.
Returns:
result: `SolveResult` or `List[SolveResult]`, depending on `trj`.
"""
if method == 'auto':
return self.conjugate_gradient_adaptive(lin, y, x0, rtol, atol, max_iter, trj)
elif method == 'CG':
return self.conjugate_gradient(lin, y, x0, rtol, atol, max_iter, trj)
elif method == 'CG-adaptive':
return self.conjugate_gradient_adaptive(lin, y, x0, rtol, atol, max_iter, trj)
else:
raise NotImplementedError(f"Method '{method}' not supported for linear solve.")
def conjugate_gradient(self, lin, y, x0, rtol, atol, max_iter, trj: bool) -> SolveResult or List[SolveResult]:
""" Standard conjugate gradient algorithm. Signature matches to `Backend.linear_solve()`. """
# Based on "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain" by Jonathan Richard Shewchuk
# symbols: dx=d, dy=q, step_size=alpha, residual_squared=delta, residual=r, y=b
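# In those symbols, each pass of the loop below performs the standard CG update
# (hedged summary written to match the code rather than quoted from the reference):
#     alpha = delta / (d . q)           with q = A d
#     x   <- x + alpha d
#     r   <- r - alpha q                (recomputed exactly as b - A x every 50 iterations)
#     delta_new = r . r
#     d   <- r + (delta_new / delta) d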
method = f"Φ-Flow CG ({self.name})"
y = self.to_float(y)
x0 = self.copy(self.to_float(x0), only_mutable=True)
batch_size = self.staticshape(y)[0]
tolerance_sq = self.maximum(rtol ** 2 * self.sum(y ** 2, -1), atol ** 2)
x = x0
dx = residual = y - self.linear(lin, x)
it_counter = 0
iterations = self.zeros([batch_size], DType(int, 32))
function_evaluations = self.ones([batch_size], DType(int, 32))
residual_squared = rsq0 = self.sum(residual ** 2, -1, keepdims=True)
diverged = self.any(~self.isfinite(x), axis=(1,))
converged = self.all(residual_squared <= tolerance_sq, axis=(1,))
trajectory = [SolveResult(method, x, residual, iterations, function_evaluations, converged, diverged, "")] if trj else None
finished = converged | diverged | (iterations >= max_iter); not_finished_1 = self.to_int32(~finished) # ; active = self.to_float(self.expand_dims(not_finished_1, -1))
while ~self.all(finished):
it_counter += 1; iterations += not_finished_1
dy = self.linear(lin, dx); function_evaluations += not_finished_1
dx_dy = self.sum(dx * dy, axis=-1, keepdims=True)
step_size = self.divide_no_nan(residual_squared, dx_dy)
step_size *= self.expand_dims(self.to_float(not_finished_1), -1) # this is not really necessary but ensures batch-independence
x += step_size * dx
if it_counter % 50 == 0:
residual = y - self.linear(lin, x); function_evaluations += 1
else:
residual = residual - step_size * dy # in-place subtraction affects convergence
residual_squared_old = residual_squared
residual_squared = self.sum(residual ** 2, -1, keepdims=True)
dx = residual + self.divide_no_nan(residual_squared, residual_squared_old) * dx
diverged = self.any(residual_squared / rsq0 > 100, axis=(1,)) & (iterations >= 8)
converged = self.all(residual_squared <= tolerance_sq, axis=(1,))
if trajectory is not None:
trajectory.append(SolveResult(method, x, residual, iterations, function_evaluations, converged, diverged, ""))
x = self.copy(x)
iterations = self.copy(iterations)
finished = converged | diverged | (iterations >= max_iter); not_finished_1 = self.to_int32(~finished) # ; active = self.to_float(self.expand_dims(not_finished_1, -1))
return trajectory if trj else SolveResult(method, x, residual, iterations, function_evaluations, converged, diverged, "")
def conjugate_gradient_adaptive(self, lin, y, x0, rtol, atol, max_iter, trj: bool) -> SolveResult or List[SolveResult]:
""" Conjugate gradient algorithm with adaptive step size. Signature matches to `Backend.linear_solve()`. """
# Based on the variant described in "Methods of Conjugate Gradients for Solving Linear Systems" by Magnus R. Hestenes and Eduard Stiefel
# https://nvlpubs.nist.gov/nistpubs/jres/049/jresv49n6p409_A1b.pdf
method = f"Φ-Flow CG-adaptive ({self.name})"
y = self.to_float(y)
x0 = self.copy(self.to_float(x0), only_mutable=True)
batch_size = self.staticshape(y)[0]
tolerance_sq = self.maximum(rtol ** 2 * self.sum(y ** 2, -1), atol ** 2)
x = x0
dx = residual = y - self.linear(lin, x)
dy = self.linear(lin, dx)
iterations = self.zeros([batch_size], DType(int, 32))
function_evaluations = self.ones([batch_size], DType(int, 32))
residual_squared = rsq0 = self.sum(residual ** 2, -1, keepdims=True)
diverged = self.any(~self.isfinite(x), axis=(1,))
converged = self.all(residual_squared <= tolerance_sq, axis=(1,))
trajectory = [SolveResult(method, x, residual, iterations, function_evaluations, converged, diverged, "")] if trj else None
continue_ = ~converged & ~diverged & (iterations < max_iter)
def loop(continue_, it_counter, x, dx, dy, residual, iterations, function_evaluations, _converged, _diverged):
continue_1 = self.to_int32(continue_)
it_counter += 1
iterations += continue_1
dx_dy = self.sum(dx * dy, axis=-1, keepdims=True)
step_size = self.divide_no_nan(self.sum(dx * residual, axis=-1, keepdims=True), dx_dy)
step_size *= self.expand_dims(self.to_float(continue_1), -1) # this is not really necessary but ensures batch-independence
x += step_size * dx
# if it_counter % 50 == 0: # Not traceable since Python bool
# residual = y - self.linear(lin, x); function_evaluations += 1
# else:
residual = residual - step_size * dy # in-place subtraction affects convergence
residual_squared = self.sum(residual ** 2, -1, keepdims=True)
dx = residual - self.divide_no_nan(self.sum(residual * dy, axis=-1, keepdims=True) * dx, dx_dy)
dy = self.linear(lin, dx); function_evaluations += continue_1
diverged = self.any(residual_squared / rsq0 > 100, axis=(1,)) & (iterations >= 8)
converged = self.all(residual_squared <= tolerance_sq, axis=(1,))
if trajectory is not None:
trajectory.append(SolveResult(method, x, residual, iterations, function_evaluations, converged, diverged, ""))
x = self.copy(x)
iterations = self.copy(iterations)
continue_ = ~converged & ~diverged & (iterations < max_iter)
return continue_, it_counter, x, dx, dy, residual, iterations, function_evaluations, converged, diverged
_, _, x, _, _, residual, iterations, function_evaluations, converged, diverged =\
self.while_loop(loop, (continue_, 0, x, dx, dy, residual, iterations, function_evaluations, converged, diverged))
return trajectory if trj else SolveResult(method, x, residual, iterations, function_evaluations, converged, diverged, "")
def linear(self, lin, vector):
if callable(lin):
return lin(vector)
elif isinstance(lin, (tuple, list)):
for lin_i in lin:
lin_shape = self.staticshape(lin_i)
assert len(lin_shape) == 2
return self.stack([self.matmul(m, v) for m, v in zip(lin, self.unstack(vector))])
else:
lin_shape = self.staticshape(lin)
assert len(lin_shape) == 2, f"A must be a matrix but got shape {lin_shape}"
return self.matmul(lin, vector)
def gradients(self, y, xs: tuple or list, grad_y) -> tuple:
raise NotImplementedError(self)
def record_gradients(self, xs: tuple or list, persistent=False):
raise NotImplementedError(self)
def stop_gradient(self, value):
raise NotImplementedError(self)
def grid_sample(self, grid, spatial_dims: tuple, coordinates, extrapolation='constant'):
"""
Interpolates a regular grid at the specified coordinates.
Args:
grid: Tensor
spatial_dims: Dimension indices that correspond to coordinate vectors
coordinates: Tensor of floating grid indices.
The last dimension must match `spatial_dims`.
The first grid point of dimension i lies at position 0, the last at values.shape[i]-1.
extrapolation: Values to use for coordinates outside the grid.
One of `('undefined', 'zeros', 'boundary', 'periodic', 'symmetric', 'reflect')`.
Returns:
sampled values with linear interpolation
"""
return NotImplemented
def variable(self, value):
return NotImplemented
def ndims(self, tensor):
return len(self.staticshape(tensor))
def size(self, array):
return self.prod(self.shape(array))
def batch_gather(self, tensor, batches):
if isinstance(batches, int):
batches = [batches]
return tensor[batches, ...]
def unstack(self, tensor, axis=0, keepdims=False) -> tuple:
if axis < 0:
axis += len(tensor.shape)
if axis >= len(tensor.shape) or axis < 0:
raise ValueError("Illegal axis value")
result = []
for slice_idx in range(tensor.shape[axis]):
if keepdims:
component = tensor[tuple([slice(slice_idx, slice_idx + 1) if d == axis else slice(None) for d in range(len(tensor.shape))])]
else:
component = tensor[tuple([slice_idx if d == axis else slice(None) for d in range(len(tensor.shape))])]
result.append(component)
return tuple(result)
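# Illustrative usage (hedged; assumes `x` is a backend tensor of shape (2, 3) and `b` a backend):
#
#     parts = b.unstack(x, axis=0)                 # tuple of 2 tensors, each of shape (3,)
#     parts = b.unstack(x, axis=0, keepdims=True)  # tuple of 2 tensors, each of shape (1, 3)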
def equal(self, x, y):
""" Element-wise equality check """
raise NotImplementedError(self)
def not_equal(self, x, y):
return ~self.equal(x, y)
def greater_than(self, x, y):
x, y = self.auto_cast(x, y)
return x > y
def greater_or_equal(self, x, y):
x, y = self.auto_cast(x, y)
return x >= y
def add(self, a, b):
a, b = self.auto_cast(a, b)
return a + b
def sub(self, a, b):
a, b = self.auto_cast(a, b)
return a - b
def mul(self, a, b):
a, b = self.auto_cast(a, b)
return a * b
def div(self, numerator, denominator):
numerator, denominator = self.auto_cast(numerator, denominator)
return numerator / denominator
def pow(self, base, exp):
base, exp = self.auto_cast(base, exp)
return base ** exp
def mod(self, dividend, divisor):
dividend, divisor = self.auto_cast(dividend, divisor)
return dividend % divisor
def and_(self, a, b):
a, b = self.auto_cast(a, b)
return a & b
def or_(self, a, b):
a, b = self.auto_cast(a, b)
return a | b
def xor(self, a, b):
a, b = self.auto_cast(a, b)
return a ^ b
def floordiv(self, a, b):
a, b = self.auto_cast(a, b)
return a // b
BACKENDS = []
""" Global list of all registered backends. Register a `Backend` by adding it to the list. """
_DEFAULT = [] # [0] = global default, [1:] from 'with' blocks
_PRECISION = [32] # [0] = global precision in bits, [1:] from 'with' blocks
def choose_backend(*values, prefer_default=False) -> Backend:
"""
Selects a suitable backend to handle the given values.
This function is used by most math functions operating on `Tensor` objects to delegate the actual computations.
Args:
*values: values that the selected backend must be able to handle.
prefer_default: if True, selects the default backend assuming it can handle the values, see `default_backend()`.
Returns:
the selected `Backend`
Raises:
NoBackendFound: if no backend can handle the given values.
"""
# --- Default Backend has priority ---
if _is_applicable(_DEFAULT[-1], values) and (prefer_default or _is_specific(_DEFAULT[-1], values)):
return _DEFAULT[-1]
# --- Filter out non-applicable ---
backends = [backend for backend in BACKENDS if _is_applicable(backend, values)]
if len(backends) == 0:
raise NoBackendFound(f"No backend found for types {[type(v).__name__ for v in values]}; registered backends are {BACKENDS}")
# --- Native tensors? ---
for backend in backends:
if _is_specific(backend, values):
return backend
return backends[0]
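# Minimal usage sketch (hedged; assumes the NumPy backend is registered and set as the global default):
#
#     import numpy as np
#     choose_backend(np.zeros(4))               # backend that treats ndarrays as native tensors
#     choose_backend(1.0, prefer_default=True)  # plain Python numbers resolve to the default backend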
class NoBackendFound(Exception):
"""
Thrown by `choose_backend` if no backend can handle the given values.
"""
def __init__(self, msg):
Exception.__init__(self, msg)
def default_backend() -> Backend:
"""
The default backend is preferred by `choose_backend()`.
The default backend can be set globally using `set_global_default_backend()` and locally using `with backend:`.
Returns:
current default `Backend`
"""
return _DEFAULT[-1]
def context_backend() -> Backend or None:
"""
Returns the backend set by the inner-most surrounding `with backend:` block.
If called outside a backend context, returns `None`.
Returns:
`Backend` or `None`
"""
return _DEFAULT[-1] if len(_DEFAULT) > 1 else None
def set_global_default_backend(backend: Backend):
"""
Sets the given backend as default.
This setting can be overridden using `with backend:`.
See `default_backend()`, `choose_backend()`.
Args:
backend: `Backend` to set as default
"""
assert isinstance(backend, Backend)
_DEFAULT[0] = backend
def set_global_precision(floating_point_bits: int):
"""
Sets the global floating point precision, which affects all registered backends.
If `floating_point_bits` is an integer, all floating point tensors created henceforth will be of the corresponding data type, float16, float32 or float64.
Operations may also convert floating point values to this precision, even if the input had a different precision.
If `floating_point_bits` is None, new tensors will default to float32 unless specified otherwise.
The output of math operations has the same precision as its inputs.
Args:
floating_point_bits: one of (16, 32, 64, None)
"""
_PRECISION[0] = floating_point_bits
def get_precision() -> int:
"""
Gets the current target floating point precision in bits.
The precision can be set globally using `set_global_precision()` or locally using `with precision(p):`.
Any Backend method may convert floating point values to this precision, even if the input had a different precision.
Returns:
16 for half, 32 for single, 64 for double
"""
return _PRECISION[-1]
@contextmanager
def precision(floating_point_bits: int):
"""
Sets the floating point precision for the local context.
Usage: `with precision(p):`
This overrides the global setting, see `set_global_precision()`.
Args:
floating_point_bits: 16 for half, 32 for single, 64 for double
"""
_PRECISION.append(floating_point_bits)
try:
yield None
finally:
_PRECISION.pop(-1)
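# Usage sketch:
#
#     with precision(64):
#         x = backend.ones((3,))  # float tensors created here use 64-bit floats
#     # leaving the block restores the previous precision (32 bits unless changed globally)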
def convert(tensor, backend: Backend = None, use_dlpack=True):
"""
Convert a Tensor to the native format of `backend`.
If the target backend can operate natively on `tensor`, returns `tensor`.
If both backends support *DLPack* and `use_dlpack=True`, uses zero-copy conversion using the DLPack library.
Else, intermediately converts `tensor` to a NumPy array.
*Warning*: This operation breaks the automatic differentiation chain.
Args:
tensor: Native tensor belonging to any registered backend.
backend: Target backend. If `None`, uses the current default backend, see `default_backend()`.
use_dlpack: Whether to attempt zero-copy conversion via DLPack when both backends support it.
Returns:
Tensor belonging to `backend`.
"""
backend = backend or default_backend()
current_backend = choose_backend(tensor, prefer_default=False)
if backend.is_tensor(tensor, True) or backend is current_backend:
return tensor
if use_dlpack and current_backend.supports(Backend.to_dlpack) and backend.supports(Backend.from_dlpack):
capsule = current_backend.to_dlpack(tensor)
return backend.from_dlpack(capsule)
else:
nparray = current_backend.numpy(tensor)
return backend.as_tensor(nparray)
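# Hedged example (the backend instances named here are placeholders, not symbols from this module):
#
#     torch_native = convert(numpy_array, backend=torch_backend)  # zero-copy via DLPack if both sides support it
#     numpy_again = convert(torch_native, backend=numpy_backend)  # otherwise goes through an intermediate ndarray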
# Backend choice utility functions
def _is_applicable(backend, values):
for value in values:
if not backend.is_tensor(value, only_native=False):
return False
return True
def _is_specific(backend, values):
for value in values:
if backend.is_tensor(value, only_native=True):
return True
return False
# Other low-level helper functions
def combined_dim(dim1, dim2, type_str: str = 'batch'):
if dim1 is None and dim2 is None:
return None
if dim1 is None or dim1 == 1:
return dim2
if dim2 is None or dim2 == 1:
return dim1
assert dim1 == dim2, f"Incompatible {type_str} dimensions: x0 {dim1}, y {dim2}"
return dim1
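# Behaviour sketch of the broadcasting rule above:
#     combined_dim(1, 8)    -> 8   (singleton dimensions defer to the other operand)
#     combined_dim(None, 8) -> 8   (unknown dimensions defer to the known one)
#     combined_dim(4, 8)    -> AssertionError("Incompatible batch dimensions: x0 4, y 8")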
| mit | -2,784,862,675,511,536,000 | 36.708999 | 216 | 0.616179 | false |
lnls-fac/sirius | pymodels/TB_V03_02/lattice.py | 1 | 9745 | """Lattice module.
In this module the lattice of the corresponding accelerator is defined.
"""
import math as _math
from pyaccel import lattice as _pyacc_lat, elements as _pyacc_ele, \
accelerator as _pyacc_acc, optics as _pyacc_opt
from . import segmented_models as _segmented_models
energy = 0.150e9 # [eV]
default_optics_mode = 'M1'
class LatticeError(Exception):
"""LatticeError class."""
def create_lattice(optics_mode=default_optics_mode):
"""Create lattice function."""
strengths, twiss_at_start = get_optics_mode(optics_mode)
# -- shortcut symbols --
marker = _pyacc_ele.marker
drift = _pyacc_ele.drift
quadrupole = _pyacc_ele.quadrupole
rbend_sirius = _pyacc_ele.rbend
sextupole = _pyacc_ele.sextupole
deg_2_rad = _math.pi / 180.0
corr_length = 0.082
# --- drift spaces ---
lp2 = drift('lp2', 0.0002)
lp3 = drift('lp3', 0.0003)
lp4 = drift('lp4', 0.0004)
lp5 = drift('lp5', 0.0005)
lp6 = drift('lp6', 0.0006)
lp7 = drift('lp7', 0.0007)
l1 = drift('l1', 0.001)
l2 = drift('l2', 0.002)
l3 = drift('l3', 0.003)
l4 = drift('l4', 0.004)
l5 = drift('l5', 0.005)
l6 = drift('l6', 0.006)
l7 = drift('l7', 0.007)
l8 = drift('l8', 0.008)
l9 = drift('l9', 0.009)
l10 = drift('l10', 0.010)
l30 = drift('l30', 0.030)
l40 = drift('l40', 0.040)
l60 = drift('l60', 0.060)
l70 = drift('l70', 0.070)
l80 = drift('l80', 0.080)
l90 = drift('l90', 0.090)
l100 = drift('l100', 0.100)
l200 = drift('l200', 0.200)
# --- markers ---
inicio = marker('start')
fim = marker('end')
# --- slits ---
slith = marker('SlitH')
slitv = marker('SlitV')
# --- beam screens ---
scrn = marker('Scrn')
# --- beam current monitors ---
ict = marker('ICT')
fct = marker('FCT')
# --- beam position monitors ---
bpm = marker('BPM')
# --- correctors ---
chv = sextupole('CHV', corr_length, 0.0)
# cv = sextupole('CV', corr_length, 0.0)
# --- quadrupoles ---
qf2L = quadrupole('QF2L', 0.112, strengths['qf2l']) # LINAC TRIPLET
qd2L = quadrupole('QD2L', 0.162, strengths['qd2l']) # LINAC TRIPLET
qf3L = quadrupole('QF3L', 0.112, strengths['qf3l']) # LINAC QUADRUPOLE
# -- spec --
ang = 15.0 # injection mode
dip_nam = 'Spect'
dip_len = 0.45003
dip_ang = -ang * deg_2_rad
dip_K = 0.0
dip_S = 0.00
spech = rbend_sirius(dip_nam, dip_len/2, dip_ang/2,
0, 0,
0, 0, 0, [0, 0, 0], [0, dip_K, dip_S])
spec = [spech, spech]
qd1 = quadrupole('QD1', 0.100, strengths['qd1'])
qf1 = quadrupole('QF1', 0.100, strengths['qf1'])
qd2a = quadrupole('QD2A', 0.100, strengths['qd2a'])
qf2a = quadrupole('QF2A', 0.100, strengths['qf2a'])
qf2b = quadrupole('QF2B', 0.100, strengths['qf2b'])
qd2b = quadrupole('QD2B', 0.100, strengths['qd2b'])
qf3 = quadrupole('QF3', 0.100, strengths['qf3'])
qd3 = quadrupole('QD3', 0.100, strengths['qd3'])
qf4 = quadrupole('QF4', 0.100, strengths['qf4'])
qd4 = quadrupole('QD4', 0.100, strengths['qd4'])
# --- bending magnets ---
bp = _segmented_models.dipole(sign=+1)
bn = _segmented_models.dipole(sign=-1)
# -- bo injection septum --
dip_nam = 'InjSept'
dip_len = 0.50
dip_ang = 21.75 * deg_2_rad
dip_K = 0.0
dip_S = 0.00
septine = rbend_sirius(dip_nam, dip_len/2, dip_ang/2,
1*dip_ang/2, 0*dip_ang,
0, 0, 0, [0, 0, 0], [0, dip_K, dip_S])
septins = rbend_sirius(dip_nam, dip_len/2, dip_ang/2,
0*dip_ang, 1*dip_ang/2,
0, 0, 0, [0, 0, 0], [0, dip_K, dip_S])
bseptin = marker('bInjS')
eseptin = marker('eInjS')
# Excluded ch to make it consistent with other codes.
# The corrector can be implemented in the polynomB:
septin = [bseptin, septine, septins, eseptin]
# --- lines ---
s00_1 = [l80, l4, qf2L, l30, l8, qd2L, l30, l8, qf2L, l30, l8, qf3L]
s00_2 = [l80, l7, bpm, l200, l40, l6, ict, l200, l100, l90, l5]
s01_1 = [
l200, l200, l200, l80, l4, lp2, scrn, l100, l40, lp2, bpm,
l100, l2, lp4]
s01_2 = [l80, l8, lp4, chv, l200, l90, l1, lp2]
s01_3 = [
l200, l200, l200, l200, l200, l40, l4, slith, l100, l80, scrn,
l100, l40, bpm, l100, l90, l9, chv, l100, l90, l3, lp3, slitv,
l200, l10, lp4]
s02_1 = [l100, l90, l4, lp4, ict, l200, l200, l200, l10, l6]
s02_2 = [l200, l70]
s02_3 = [
l200, scrn, l100, l40, bpm, l60, l9, chv] + [l200]*26 + \
[l100, l70, l3]
s02_4 = [l200, l70]
s02_5 = [
l200, scrn, l100, l40, bpm, l60, l8, lp5, chv, l200, l100,
l10, l9, lp7]
s03_1 = [l200] * 10 + [l100, l90, l9, lp6]
s03_2 = [l200, l6]
s03_3 = [l100, bpm, l100, l40, l4, scrn, l200, l10, lp4]
s04_1 = [
l200, l70, l2, lp4, chv, l200, l200, l100, l80, lp5, fct,
l100, l40, ict, l200, l100, l5, lp7, bpm, l100, l10, l5, lp6]
s04_2 = [l200, l10, l6]
s04_3 = [l100, l70, scrn, l60, l1, lp2, chv, l80, l6, lp6]
sector00 = [s00_1, s00_2, spec]
sector01 = [s01_1, qd1, s01_2, qf1, s01_3, bn]
sector02 = [s02_1, qd2a, s02_2, qf2a, s02_3, qf2b, s02_4, qd2b, s02_5, bp]
sector03 = [s03_1, qf3, s03_2, qd3, s03_3, bp]
sector04 = [s04_1, qf4, s04_2, qd4, s04_3, septin]
# TB beamline
ltlb = [inicio, sector00, sector01, sector02, sector03, sector04, fim]
elist = ltlb
the_line = _pyacc_lat.build(elist)
# --- shifts model to marker 'start' ---
idx = _pyacc_lat.find_indices(the_line, 'fam_name', 'start')
the_line = _pyacc_lat.shift(the_line, idx[0])
lengths = _pyacc_lat.get_attribute(the_line, 'length')
for length in lengths:
if length < 0:
raise LatticeError('Model with negative drift!')
# sets number of integration steps
set_num_integ_steps(the_line)
# -- define vacuum chamber for all elements
the_line = set_vacuum_chamber(the_line)
return the_line, twiss_at_start
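# Usage sketch (hedged; only names defined in this module are used, further analysis is up to the caller):
#
# the_line, twiss_at_start = create_lattice(optics_mode='M1')
# print(len(the_line), twiss_at_start)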
def get_optics_mode(optics_mode):
"""Return magnet strengths of a given opics mode."""
# -- selection of optics mode --
if optics_mode == 'M1':
# Initial Conditions from Linac measured parameters on 16/07/2019
# Linac second quadrupole triplet set to same values used during
# measurements (Sem tripleto)
twiss_at_start = _pyacc_opt.Twiss.make_new(
beta=[2.71462, 4.69925], alpha=[-2.34174, 1.04009],
etax=[0.0, 0.0])
strengths = {
'qf2l': 12.37,
'qd2l': -14.85,
'qf3l': 5.713160289024,
'qd1': -8.821809143987,
'qf1': 13.335946597802,
'qd2a': -11.859318300947,
'qf2a': 14.532892396682,
'qf2b': 8.647545577362,
'qd2b': -8.836916532517,
'qf3': 10.020651462368,
'qd3': -4.974049498621,
'qf4': 11.168208453391,
'qd4': -6.191738912262,
}
elif optics_mode == 'M2':
# Initial Conditions from Linac measured parameters on 16/07/2019
# Linac second quadrupole triplet is used to match the LBT optics
# (Sem tripleto)
twiss_at_start = _pyacc_opt.Twiss.make_new(
beta=[2.71462, 4.69925], alpha=[-2.34174, 1.04009],
etax=[0.0, 0.0])
strengths = {
'qf2l': 11.78860,
'qd2l': -14.298290,
'qf3l': 4.801910,
'qd1': -8.822256368219,
'qf1': 13.336060990905,
'qd2a': -9.382785447106,
'qf2a': 12.670391768958,
'qf2b': 7.994238513566,
'qd2b': -7.118805773505,
'qf3': 10.328752039153,
'qd3': -5.519539215470,
'qf4': 11.635406805193,
'qd4': -6.936225524796,
}
else:
raise _pyacc_acc.AcceleratorException(
'Invalid TB optics mode: ' + optics_mode)
return strengths, twiss_at_start
def set_num_integ_steps(the_line):
"""Set number of integration steps in each lattice element."""
dl = 0.035
for i, _ in enumerate(the_line):
if the_line[i].angle:
length = the_line[i].length
the_line[i].nr_steps = max(10, int(_math.ceil(length/dl)))
elif the_line[i].polynom_b[1]:
the_line[i].nr_steps = 10
elif the_line[i].polynom_b[2]:
the_line[i].nr_steps = 10
else:
the_line[i].nr_steps = 1
corr_indices = _pyacc_lat.find_indices(the_line, 'fam_name', 'CHV')
for idx in corr_indices:
the_line[idx].nr_steps = 5
def set_vacuum_chamber(the_line):
"""Set vacuum chamber for all elements."""
# -- default physical apertures --
for i, _ in enumerate(the_line):
the_line[i].hmin = -0.018
the_line[i].hmax = +0.018
the_line[i].vmin = -0.018
the_line[i].vmax = +0.018
# -- bo injection septum --
beg = _pyacc_lat.find_indices(the_line, 'fam_name', 'bInjS')[0]
end = _pyacc_lat.find_indices(the_line, 'fam_name', 'eInjS')[0]
for i in range(beg, end+1):
the_line[i].hmin = -0.0075
the_line[i].hmax = +0.0075
the_line[i].vmin = -0.0080
the_line[i].vmax = +0.0080
# -- dipoles --
bnd = _pyacc_lat.find_indices(the_line, 'fam_name', 'B')
for i in bnd:
the_line[i].hmin = -0.0117
the_line[i].hmax = +0.0117
the_line[i].vmin = -0.0117
the_line[i].vmax = +0.0117
return the_line
| mit | 7,018,905,494,914,259,000 | 32.146259 | 78 | 0.548179 | false |
mrquim/repository.mrquim | plugin.video.mrpiracy/resources/lib/js2py/es6/__init__.py | 27 | 1270 | INITIALISED = False
babel = None
babelPresetEs2015 = None
def js6_to_js5(code):
global INITIALISED, babel, babelPresetEs2015
if not INITIALISED:
import signal, warnings, time
warnings.warn('\nImporting babel.py for the first time - this can take some time. \nPlease note that currently Javascript 6 in Js2Py is unstable and slow. Use only for tiny scripts!')
from .babel import babel as _babel
babel = _babel.Object.babel
babelPresetEs2015 = _babel.Object.babelPresetEs2015
# very weird hack. Somehow this helps babel to initialise properly!
try:
babel.transform('warmup', {'presets': {}})
signal.alarm(2)
def kill_it(a,b): raise KeyboardInterrupt('Better work next time!')
signal.signal(signal.SIGALRM, kill_it)
babel.transform('stuckInALoop', {'presets': babelPresetEs2015}).code
for n in range(3):
time.sleep(1)
except:
print("Initialised babel!")
INITIALISED = True
return babel.transform(code, {'presets': babelPresetEs2015}).code
if __name__=='__main__':
print(js6_to_js5('obj={}; obj.x = function() {return () => this}'))
print()
print(js6_to_js5('const a = 1;')) | gpl-2.0 | -387,959,809,374,441,300 | 38.71875 | 191 | 0.627559 | false |
natanovia/Anki-Android | tools/manage-crowdin.py | 20 | 5581 | #!/usr/bin/python
# Copyright (c) 2010 [email protected]
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation; either version 3 of the License, or (at your option) any later
# version.
#
# This program is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE. See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# this program. If not, see <http://www.gnu.org/licenses/>.
#
# This script updates the master file(s) for crowdin.net
import pycurl
import StringIO
import sys
import string
import os
from os.path import expanduser
CROWDIN_KEY = ''
PROJECT_IDENTIFIER = 'ankidroid'
path = './AnkiDroid/src/main/res/values/'
files = ['01-core', '02-strings', '03-dialogs', '04-network', '05-feedback', '06-statistics', '07-cardbrowser', '08-widget', '09-backup', '10-preferences', '11-arrays', '14-marketdescription', '16-multimedia-editor']
alllang = ['ar', 'ca', 'cs', 'de', 'el', 'es-AR', 'es-ES', 'fa', 'fi', 'fr', 'hu', 'id', 'it', 'ja', 'ko', 'nl', 'pl', 'pt-PT', 'pt-BR', 'ro', 'ru', 'sr', 'sv-SE', 'th', 'tr', 'vi', 'zh-CN', 'zh-TW']
'''def uploadtranslation(language, filename, sourcefile):
if len(language) > 2:
pathlan = string.replace(language, '-', '-r')
if not os.path.exists('./res/values-' + pathlan):
pathlan = pathlan[:2]
else:
pathlan = language
path = './res/values-' + pathlan + '/'
filename = filename + '.xml'
# if selu == 's':
# filename = 'strings.xml'
# elif selu == 'a':
# filename = 'arrays.xml'
# else:
# filename = ''
# print "nothing to do"
print 'Update of Translation '+language+' for '+filename
if filename:
if language:
c = pycurl.Curl()
fields = [('files['+filename+']', (c.FORM_FILE, path + sourcefile + '.xml')), ('language', language), ('auto_approve_imported','0'), ('import_eq_suggestions','0')]
c.setopt(pycurl.URL, 'https://api.crowdin.com/api/project/' + PROJECT_IDENTIFIER + '/upload-translation?key=' + CROWDIN_KEY)
c.setopt(pycurl.HTTPPOST, fields)
b = StringIO.StringIO()
c.setopt(pycurl.WRITEFUNCTION, b.write)
c.perform()
c.close()
print b.getvalue()
else:
print 'no language code entered'
'''
def updateMasterFile(fn):
if fn == '14-marketdescription':
targetName = '14-marketdescription.txt'
sourceName = './docs/marketing/localized_description/marketdescription.txt'
else:
targetName = fn + '.xml'
sourceName = path + targetName
if targetName:
print 'Update of Master File ' + targetName
c = pycurl.Curl()
fields = [('files['+targetName+']', (c.FORM_FILE, sourceName))]
c.setopt(pycurl.URL, 'https://api.crowdin.com/api/project/' + PROJECT_IDENTIFIER + '/update-file?key=' + CROWDIN_KEY)
c.setopt(pycurl.HTTPPOST, fields)
b = StringIO.StringIO()
c.setopt(pycurl.WRITEFUNCTION, b.write)
c.perform()
c.close()
print b.getvalue()
try:
try:
p = os.path.join(expanduser("~"), "src", "crowdin_key.txt")
print(p)
c = open(p,"r+")
except IOError as e0:
c = open("tools/crowdin_key.txt","r+")
CROWDIN_KEY = c.readline().strip()
c.close()
except IOError as e:
CROWDIN_KEY = raw_input("please enter your crowdin key or create \'crowdin_key.txt\': ")
#sel = raw_input("update (m)aster file, update (t)ranslation or (r)efresh builds? ")
sel='m'
if sel == 'm':
# Update Master Files:
fn = raw_input("update " + ', '.join([str(x) for x in files]) + ", (all)?")
if fn == 'all':
for n in range(0, len(files)):
updateMasterFile(files[n])
else:
updateMasterFile(fn)
else:
print "nothing to do"
'''
elif sel == 't':
# Update Translations:
print 'still problems with crowding here'
language = raw_input("enter language code: ")
selu = raw_input("update 0(1)-core, 0(2)-strings, 0(3)-dialogs, 0(4)-network, 0(5)-feedback, 0(6)-statistics, 0(7)-cardbrowser, 0(8)-widget, 0(9)-backup, (10)-preferences, (11)-arrays, (13)-newfeatures? ")
if selu == '12' or selu == '14':
print "translations of this file cannot be uploaded"
elif selu != 'all':
defaultSource = files[int(selu)-1]
sourcefile = raw_input("enter source file (default: " + defaultSource + "): ")
if sourcefile == "":
sourcefile = defaultSource
if language == 'all':
for language in alllang:
if selu == 'all':
for s in files:
uploadtranslation(language, s, s)
else:
uploadtranslation(language, files[int(selu)-1], sourcefile)
elif selu == 'all':
for s in files:
uploadtranslation(language, s, s)
else:
uploadtranslation(language, files[int(selu)-1], sourcefile)
elif sel == 'r':
# Update Translations:
print "Force translation export"
c = pycurl.Curl()
c.setopt(pycurl.URL, 'https://api.crowdin.com/api/project/' + PROJECT_IDENTIFIER + '/export?&key=' + CROWDIN_KEY)
b = StringIO.StringIO()
c.setopt(pycurl.WRITEFUNCTION, b.write)
c.perform()
c.close()
print b.getvalue()
'''
| gpl-3.0 | -7,926,630,329,264,523,000 | 36.206667 | 216 | 0.604014 | false |
spaghetti-/rosdep | src/rosdep2/sources_list.py | 1 | 24825 | # Copyright (c) 2012, Willow Garage, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the Willow Garage, Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
# Author Ken Conley/[email protected]
from __future__ import print_function
import os
import sys
import tempfile
import yaml
import hashlib
try:
from urllib.request import urlopen
from urllib.error import URLError
except ImportError:
from urllib2 import urlopen
from urllib2 import URLError
try:
import cPickle as pickle
except ImportError:
import pickle
from .core import InvalidData, DownloadFailure, CachePermissionError
from .gbpdistro_support import get_gbprepo_as_rosdep_data, download_gbpdistro_as_rosdep_data
try:
import urlparse
except ImportError:
import urllib.parse as urlparse #py3k
try:
import httplib
except ImportError:
import http.client as httplib # py3k
import rospkg
import rospkg.distro
from .loader import RosdepLoader
from .rosdistrohelper import get_index, get_index_url
# default file to download with 'init' command in order to bootstrap
# rosdep
DEFAULT_SOURCES_LIST_URL = 'https://raw.githubusercontent.com/ros/rosdistro/master/rosdep/sources.list.d/20-default.list'
#seconds to wait before aborting download of rosdep data
DOWNLOAD_TIMEOUT = 15.0
SOURCES_LIST_DIR = 'sources.list.d'
SOURCES_CACHE_DIR = 'sources.cache'
# name of index file for sources cache
CACHE_INDEX = 'index'
# extension for binary cache
PICKLE_CACHE_EXT = '.pickle'
SOURCE_PATH_ENV = 'ROSDEP_SOURCE_PATH'
def get_sources_list_dirs(source_list_dir):
if SOURCE_PATH_ENV in os.environ:
sdirs = os.environ[SOURCE_PATH_ENV].split(os.pathsep)
else:
sdirs = [source_list_dir]
for p in list(sdirs):
if not os.path.exists(p):
sdirs.remove(p)
return sdirs
def get_sources_list_dir():
# base of where we read config files from
# TODO: windows
if 0:
# we can't use etc/ros because environment config does not carry over under sudo
etc_ros = rospkg.get_etc_ros_dir()
else:
etc_ros = '/etc/ros'
# compute default system wide sources directory
sys_sources_list_dir = os.path.join(etc_ros, 'rosdep', SOURCES_LIST_DIR)
sources_list_dirs = get_sources_list_dirs(sys_sources_list_dir)
if sources_list_dirs:
return sources_list_dirs[0]
else:
return sys_sources_list_dir
def get_default_sources_list_file():
return os.path.join(get_sources_list_dir(), '20-default.list')
def get_sources_cache_dir():
ros_home = rospkg.get_ros_home()
return os.path.join(ros_home, 'rosdep', SOURCES_CACHE_DIR)
# Default rosdep.yaml format. For now this is the only valid type and
# is specified for future compatibility.
TYPE_YAML = 'yaml'
# git-buildpackage repo list
TYPE_GBPDISTRO = 'gbpdistro'
VALID_TYPES = [TYPE_YAML, TYPE_GBPDISTRO]
class DataSource(object):
def __init__(self, type_, url, tags, origin=None):
"""
:param type_: data source type, e.g. TYPE_YAML, TYPE_GBPDISTRO
:param url: URL of data location. For file resources, must
start with the file:// scheme. For remote resources, URL
must include a path.
:param tags: tags for matching data source to configurations
:param origin: filename or other indicator of where data came from for debugging.
:raises: :exc:`ValueError` if parameters do not validate
"""
# validate inputs
if not type_ in VALID_TYPES:
raise ValueError("type must be one of [%s]"%(','.join(VALID_TYPES)))
parsed = urlparse.urlparse(url)
if not parsed.scheme or (parsed.scheme != 'file' and not parsed.netloc) or parsed.path in ('', '/'):
raise ValueError("url must be a fully-specified URL with scheme, hostname, and path: %s"%(str(url)))
if not type(tags) == list:
raise ValueError("tags must be a list: %s"%(str(tags)))
self.type = type_
self.tags = tags
self.url = url
self.origin = origin
def __eq__(self, other):
return isinstance(other, DataSource) and \
self.type == other.type and \
self.tags == other.tags and \
self.url == other.url and \
self.origin == other.origin
def __str__(self):
if self.origin:
return "[%s]:\n%s %s %s"%(self.origin, self.type, self.url, ' '.join(self.tags))
else:
return "%s %s %s"%(self.type, self.url, ' '.join(self.tags))
def __repr__(self):
return repr((self.type, self.url, self.tags, self.origin))
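# Illustrative construction (hedged; the values are made up, not taken from a real sources list):
#
# source = DataSource(TYPE_YAML, 'https://example.com/rosdep/base.yaml',
# ['ubuntu', 'trusty'], origin='20-default.list')
# print(source) # prints the origin header followed by: yaml https://example.com/rosdep/base.yaml ubuntu trusty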
class RosDistroSource(DataSource):
def __init__(self, distro):
self.type = TYPE_GBPDISTRO
self.tags = [distro]
# In this case self.url is a list if REP-143 is being used
self.url = get_index().distributions[distro]['distribution']
self.origin = None
# create function we can pass in as model to parse_source_data. The
# function emulates the CachedDataSource constructor but does the
# necessary full filepath calculation and loading of data.
def cache_data_source_loader(sources_cache_dir, verbose=False):
def create_model(type_, uri, tags, origin=None):
# compute the filename has from the URL
filename = compute_filename_hash(uri)
filepath = os.path.join(sources_cache_dir, filename)
pickle_filepath = filepath + PICKLE_CACHE_EXT
if os.path.exists(pickle_filepath):
if verbose:
print("loading cached data source:\n\t%s\n\t%s"%(uri, pickle_filepath), file=sys.stderr)
with open(pickle_filepath, 'rb') as f:
rosdep_data = pickle.loads(f.read())
elif os.path.exists(filepath):
if verbose:
print("loading cached data source:\n\t%s\n\t%s"%(uri, filepath), file=sys.stderr)
with open(filepath) as f:
rosdep_data = yaml.load(f.read())
else:
rosdep_data = {}
return CachedDataSource(type_, uri, tags, rosdep_data, origin=filepath)
return create_model
class CachedDataSource(object):
def __init__(self, type_, url, tags, rosdep_data, origin=None):
"""
Stores data source and loaded rosdep data for that source.
NOTE: this is not a subclass of DataSource, though its API is
duck-type compatible with the DataSource API.
"""
self.source = DataSource(type_, url, tags, origin=origin)
self.rosdep_data = rosdep_data
def __eq__(self, other):
try:
return self.source == other.source and \
self.rosdep_data == other.rosdep_data
except AttributeError:
return False
def __str__(self):
return "%s\n%s"%(self.source, self.rosdep_data)
def __repr__(self):
return repr((self.type, self.url, self.tags, self.rosdep_data, self.origin))
@property
def type(self):
"""
:returns: data source type
"""
return self.source.type
@property
def url(self):
"""
:returns: data source URL
"""
return self.source.url
@property
def tags(self):
"""
:returns: data source tags
"""
return self.source.tags
@property
def origin(self):
"""
:returns: data source origin, if set, or ``None``
"""
return self.source.origin
class DataSourceMatcher(object):
def __init__(self, tags):
self.tags = tags
def matches(self, rosdep_data_source):
"""
Check if the datasource matches this configuration.
:param rosdep_data_source: :class:`DataSource`
"""
# all of the rosdep_data_source tags must be in our matcher tags
return not any(set(rosdep_data_source.tags)-set(self.tags))
@staticmethod
def create_default(os_override=None):
"""
Create a :class:`DataSourceMatcher` to match the current
configuration.
:param os_override: (os_name, os_codename) tuple to override
OS detection
:returns: :class:`DataSourceMatcher`
"""
distro_name = rospkg.distro.current_distro_codename()
if os_override is None:
os_detect = rospkg.os_detect.OsDetect()
os_name, os_version, os_codename = os_detect.detect_os()
else:
os_name, os_codename = os_override
tags = [t for t in (distro_name, os_name, os_codename) if t]
return DataSourceMatcher(tags)
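# Sketch (hedged, made-up values): a matcher accepts a source only when every tag on the source is
# among the matcher's tags, so an untagged source matches everything.
#
# matcher = DataSourceMatcher(['indigo', 'ubuntu', 'trusty'])
# matcher.matches(DataSource(TYPE_YAML, 'http://example.com/r.yaml', ['ubuntu'])) # -> True
# matcher.matches(DataSource(TYPE_YAML, 'http://example.com/r.yaml', ['fedora'])) # -> False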
def download_rosdep_data(url):
"""
:raises: :exc:`DownloadFailure` If data cannot be
retrieved (e.g. 404, bad YAML format, server down).
"""
try:
f = urlopen(url, timeout=DOWNLOAD_TIMEOUT)
text = f.read()
f.close()
data = yaml.safe_load(text)
if type(data) != dict:
raise DownloadFailure('rosdep data from [%s] is not a YAML dictionary'%(url))
return data
except (URLError, httplib.HTTPException) as e:
raise DownloadFailure(str(e) + ' (%s)' % url)
except yaml.YAMLError as e:
raise DownloadFailure(str(e))
def download_default_sources_list(url=DEFAULT_SOURCES_LIST_URL):
"""
Download (and validate) contents of default sources list.
:param url: override URL of default sources list file
:return: raw sources list data, ``str``
:raises: :exc:`InvalidData`
:raises: :exc:`urllib2.URLError` If data cannot be
retrieved (e.g. 404, server down).
"""
try:
f = urlopen(url, timeout=DOWNLOAD_TIMEOUT)
except (URLError, httplib.HTTPException) as e:
raise URLError(str(e) + ' (%s)' % url)
data = f.read().decode()
f.close()
if not data:
raise RuntimeError("cannot download defaults file: empty contents")
# parse just for validation
parse_sources_data(data)
return data
def parse_sources_data(data, origin='<string>', model=None):
"""
Parse sources file format (tags optional)::
# comments and empty lines allowed
<type> <uri> [tags]
e.g.::
yaml http://foo/rosdep.yaml fuerte lucid ubuntu
If tags are specified, *all* tags must match the current
configuration for the sources data to be used.
:param data: data in sources file format
:param model: model to load data into. Defaults to :class:`DataSource`
:returns: List of data sources, [:class:`DataSource`]
:raises: :exc:`InvalidData`
"""
if model is None:
model = DataSource
sources = []
for line in data.split('\n'):
line = line.strip()
# ignore empty lines or comments
if not line or line.startswith('#'):
continue
splits = line.split(' ')
if len(splits) < 2:
raise InvalidData("invalid line:\n%s"%(line), origin=origin)
type_ = splits[0]
url = splits[1]
tags = splits[2:]
try:
sources.append(model(type_, url, tags, origin=origin))
except ValueError as e:
raise InvalidData("line:\n\t%s\n%s"%(line, e), origin=origin)
return sources
def parse_sources_file(filepath):
"""
Parse file on disk
:returns: List of data sources, [:class:`DataSource`]
    :raises: :exc:`InvalidData` If any error occurs reading the
        file, such as an I/O error, a non-existent file, or an invalid format.
"""
try:
with open(filepath, 'r') as f:
return parse_sources_data(f.read(), origin=filepath)
except IOError as e:
raise InvalidData("I/O error reading sources file: %s"%(str(e)), origin=filepath)
def parse_sources_list(sources_list_dir=None):
"""
Parse data stored in on-disk sources list directory into a list of
:class:`DataSource` for processing.
:returns: List of data sources, [:class:`DataSource`]. If there is
no sources list dir, this returns an empty list.
:raises: :exc:`InvalidData`
:raises: :exc:`OSError` if *sources_list_dir* cannot be read.
:raises: :exc:`IOError` if *sources_list_dir* cannot be read.
"""
if sources_list_dir is None:
sources_list_dir = get_sources_list_dir()
sources_list_dirs = get_sources_list_dirs(sources_list_dir)
filelist = []
for sdir in sources_list_dirs:
filelist += sorted([os.path.join(sdir, f) for f in os.listdir(sdir) if f.endswith('.list')])
sources_list = []
for f in filelist:
sources_list.extend(parse_sources_file(f))
return sources_list
def _generate_key_from_urls(urls):
# urls may be a list of urls or a single string
try:
assert isinstance(urls, (list, basestring))
except NameError:
assert isinstance(urls, (list, str))
# We join the urls by the '^' character because it is not allowed in urls
return '^'.join(urls if isinstance(urls, list) else [urls])
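# For example (illustrative URLs):
#   _generate_key_from_urls(['http://a/rosdep.yaml', 'http://b/rosdep.yaml'])
# returns 'http://a/rosdep.yaml^http://b/rosdep.yaml', while a plain string is
# used as the key unchanged.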
def update_sources_list(sources_list_dir=None, sources_cache_dir=None,
success_handler=None, error_handler=None):
"""
    Re-download data from remote sources and store in cache. Also
update the cache index based on current sources.
:param sources_list_dir: override source list directory
:param sources_cache_dir: override sources cache directory
:param success_handler: fn(DataSource) to call if a particular
source loads successfully. This hook is mainly for printing
errors to console.
:param error_handler: fn(DataSource, DownloadFailure) to call
if a particular source fails. This hook is mainly for
printing errors to console.
:returns: list of (`DataSource`, cache_file_path) pairs for cache
files that were updated, ``[str]``
:raises: :exc:`InvalidData` If any of the sources list files is invalid
:raises: :exc:`OSError` if *sources_list_dir* cannot be read.
:raises: :exc:`IOError` If *sources_list_dir* cannot be read or cache data cannot be written
"""
if sources_cache_dir is None:
sources_cache_dir = get_sources_cache_dir()
sources = parse_sources_list(sources_list_dir=sources_list_dir)
retval = []
for source in list(sources):
try:
if source.type == TYPE_YAML:
rosdep_data = download_rosdep_data(source.url)
elif source.type == TYPE_GBPDISTRO: # DEPRECATED, do not use this file. See REP137
                if source.tags[0] not in ['electric', 'fuerte']:
print('Ignore legacy gbpdistro "%s"' % source.tags[0])
sources.remove(source)
continue # do not store this entry in the cache
rosdep_data = download_gbpdistro_as_rosdep_data(source.url)
retval.append((source, write_cache_file(sources_cache_dir, source.url, rosdep_data)))
if success_handler is not None:
success_handler(source)
except DownloadFailure as e:
if error_handler is not None:
error_handler(source, e)
# Additional sources for ros distros
# In compliance with REP137 and REP143
print('Query rosdistro index %s' % get_index_url())
for dist_name in sorted(get_index().distributions.keys()):
print('Add distro "%s"' % dist_name)
rds = RosDistroSource(dist_name)
rosdep_data = get_gbprepo_as_rosdep_data(dist_name)
# dist_files can either be a string (single filename) or a list (list of filenames)
dist_files = get_index().distributions[dist_name]['distribution']
key = _generate_key_from_urls(dist_files)
retval.append((rds, write_cache_file(sources_cache_dir, key, rosdep_data)))
sources.append(rds)
# Create a combined index of *all* the sources. We do all the
# sources regardless of failures because a cache from a previous
# attempt may still exist. We have to do this cache index so that
    # loads() sees consistent data.
if not os.path.exists(sources_cache_dir):
os.makedirs(sources_cache_dir)
cache_index = os.path.join(sources_cache_dir, CACHE_INDEX)
data = "#autogenerated by rosdep, do not edit. use 'rosdep update' instead\n"
for source in sources:
url = _generate_key_from_urls(source.url)
data += "yaml %s %s\n" % (url, ' '.join(source.tags))
write_atomic(cache_index, data)
# mainly for debugging and testing
return retval
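# Minimal update-cycle sketch (the handler names are assumptions, not part of
# the API):
#
#   def on_success(source):
#       print('updated %s' % source.url)
#
#   def on_error(source, error):
#       print('failed %s: %s' % (source.url, error))
#
#   updated = update_sources_list(success_handler=on_success,
#                                 error_handler=on_error)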
def load_cached_sources_list(sources_cache_dir=None, verbose=False):
"""
Load cached data based on the sources list.
:returns: list of :class:`CachedDataSource` instance with raw
rosdep data loaded.
:raises: :exc:`OSError` if cache cannot be read
:raises: :exc:`IOError` if cache cannot be read
"""
if sources_cache_dir is None:
sources_cache_dir = get_sources_cache_dir()
cache_index = os.path.join(sources_cache_dir, 'index')
if not os.path.exists(cache_index):
if verbose:
print("no cache index present, not loading cached sources", file=sys.stderr)
return []
with open(cache_index, 'r') as f:
cache_data = f.read()
# the loader does all the work
model = cache_data_source_loader(sources_cache_dir, verbose=verbose)
return parse_sources_data(cache_data, origin=cache_index, model=model)
def compute_filename_hash(key_filenames):
sha_hash = hashlib.sha1()
if isinstance(key_filenames, list):
for key in key_filenames:
sha_hash.update(key.encode())
else:
sha_hash.update(key_filenames.encode())
return sha_hash.hexdigest()
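# Hashing a single filename and a one-element list of the same filename gives
# the same digest, e.g. (illustrative URL):
#   compute_filename_hash('http://a/rosdep.yaml') == \
#       compute_filename_hash(['http://a/rosdep.yaml'])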
def write_cache_file(source_cache_d, key_filenames, rosdep_data):
"""
:param source_cache_d: directory to write cache file to
:param key_filenames: filename (or list of filenames) to be used in hashing
:param rosdep_data: dictionary of data to serialize as YAML
:returns: name of file where cache is stored
:raises: :exc:`OSError` if cannot write to cache file/directory
:raises: :exc:`IOError` if cannot write to cache file/directory
"""
if not os.path.exists(source_cache_d):
os.makedirs(source_cache_d)
key_hash = compute_filename_hash(key_filenames)
filepath = os.path.join(source_cache_d, key_hash)
try:
write_atomic(filepath + PICKLE_CACHE_EXT, pickle.dumps(rosdep_data, -1), True)
except OSError as e:
raise CachePermissionError("Failed to write cache file: " + str(e))
try:
os.unlink(filepath)
except OSError:
pass
return filepath
def write_atomic(filepath, data, binary=False):
# write data to new file
fd, filepath_tmp = tempfile.mkstemp(prefix=os.path.basename(filepath) + '.tmp.', dir=os.path.dirname(filepath))
    fmode = 'wb' if binary else 'w'
    with os.fdopen(fd, fmode) as f:
        f.write(data)
try:
# switch file atomically (if supported)
os.rename(filepath_tmp, filepath)
except OSError:
# fall back to non-atomic operation
try:
os.unlink(filepath)
except OSError:
pass
try:
os.rename(filepath_tmp, filepath)
except OSError:
os.unlink(filepath_tmp)
class SourcesListLoader(RosdepLoader):
"""
SourcesList loader implements the general RosdepLoader API. This
implementation is fairly simple as there is only one view the
source list loader can create. It is also a bit degenerate as it
is not capable of mapping resource names to views, thus any
resource-name-based API fails or returns nothing interesting.
This loader should not be used directly; instead, it is more
useful composed with other higher-level implementations, like the
:class:`rosdep2.rospkg_loader.RospkgLoader`. The general intent
is to compose it with another loader by making all of the other
    loader's views depend on all the views in this loader.
"""
ALL_VIEW_KEY = 'sources.list'
def __init__(self, sources):
"""
:param sources: cached sources list entries, [:class:`CachedDataSource`]
"""
self.sources = sources
@staticmethod
def create_default(matcher=None, sources_cache_dir=None, os_override=None, verbose=False):
"""
:param matcher: override DataSourceMatcher. Defaults to
DataSourceMatcher.create_default().
:param sources_cache_dir: override location of sources cache
"""
if matcher is None:
matcher = DataSourceMatcher.create_default(os_override=os_override)
if verbose:
print("using matcher with tags [%s]"%(', '.join(matcher.tags)), file=sys.stderr)
sources = load_cached_sources_list(sources_cache_dir=sources_cache_dir, verbose=verbose)
if verbose:
print("loaded %s sources"%(len(sources)), file=sys.stderr)
sources = [x for x in sources if matcher.matches(x)]
if verbose:
print("%s sources match current tags"%(len(sources)), file=sys.stderr)
return SourcesListLoader(sources)
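    # Typical construction sketch; both calls below are methods defined in this
    # class, and the verbose flag is shown only for illustration:
    #
    #   loader = SourcesListLoader.create_default(verbose=True)
    #   views = loader.get_loadable_views()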
def load_view(self, view_name, rosdep_db, verbose=False):
"""
Load view data into rosdep_db. If the view has already been
loaded into rosdep_db, this method does nothing.
:param view_name: name of ROS stack to load, ``str``
:param rosdep_db: database to load stack data into, :class:`RosdepDatabase`
:raises: :exc:`InvalidData`
"""
if rosdep_db.is_loaded(view_name):
return
source = self.get_source(view_name)
if verbose:
print("loading view [%s] with sources.list loader"%(view_name), file=sys.stderr)
view_dependencies = self.get_view_dependencies(view_name)
rosdep_db.set_view_data(view_name, source.rosdep_data, view_dependencies, view_name)
def get_loadable_resources(self):
return []
def get_loadable_views(self):
return [x.url for x in self.sources]
def get_view_dependencies(self, view_name):
# use dependencies to implement precedence
if view_name != SourcesListLoader.ALL_VIEW_KEY:
# if the view_name matches one of our sources, return
# empty list as none of our sources has deps.
if any([x for x in self.sources if view_name == x.url]):
return []
# not one of our views, so it depends on everything we provide
return [x.url for x in self.sources]
def get_source(self, view_name):
matches = [x for x in self.sources if x.url == view_name]
if matches:
return matches[0]
else:
raise rospkg.ResourceNotFound(view_name)
def get_rosdeps(self, resource_name, implicit=True):
"""
        Always raises as SourcesListLoader defines no concrete resources with rosdeps.
:raises: :exc:`rospkg.ResourceNotFound`
"""
raise rospkg.ResourceNotFound(resource_name)
def get_view_key(self, resource_name):
"""
        Always raises as SourcesListLoader defines no concrete resources with rosdeps.
:returns: Name of view that *resource_name* is in, ``None`` if no associated view.
:raises: :exc:`rospkg.ResourceNotFound` if *resource_name* cannot be found.
"""
raise rospkg.ResourceNotFound(resource_name)
| bsd-3-clause | 706,193,362,580,082,700 | 35.941964 | 121 | 0.644632 | false |
badock/nova | nova/objects/flavor.py | 6 | 10781 | # Copyright 2013 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova import db
from nova import exception
from nova import objects
from nova.objects import base
from nova.objects import fields
OPTIONAL_FIELDS = ['extra_specs', 'projects']
class Flavor(base.NovaPersistentObject, base.NovaObject):
# Version 1.0: Initial version
# Version 1.1: Added save_projects(), save_extra_specs(), removed
# remoteable from save()
VERSION = '1.1'
fields = {
'id': fields.IntegerField(),
'name': fields.StringField(nullable=True),
'memory_mb': fields.IntegerField(),
'vcpus': fields.IntegerField(),
'root_gb': fields.IntegerField(),
'ephemeral_gb': fields.IntegerField(),
'flavorid': fields.StringField(),
'swap': fields.IntegerField(),
'rxtx_factor': fields.FloatField(nullable=True, default=1.0),
'vcpu_weight': fields.IntegerField(nullable=True),
'disabled': fields.BooleanField(),
'is_public': fields.BooleanField(),
'extra_specs': fields.DictOfStringsField(),
'projects': fields.ListOfStringsField(),
}
def __init__(self, *args, **kwargs):
super(Flavor, self).__init__(*args, **kwargs)
self._orig_extra_specs = {}
self._orig_projects = {}
@staticmethod
def _from_db_object(context, flavor, db_flavor, expected_attrs=None):
if expected_attrs is None:
expected_attrs = []
for name, field in flavor.fields.items():
if name in OPTIONAL_FIELDS:
continue
value = db_flavor[name]
if isinstance(field, fields.IntegerField):
value = value if value is not None else 0
flavor[name] = value
if 'extra_specs' in expected_attrs:
flavor.extra_specs = db_flavor['extra_specs']
if 'projects' in expected_attrs:
flavor._load_projects(context)
flavor._context = context
flavor.obj_reset_changes()
return flavor
@base.remotable
def _load_projects(self, context):
self.projects = [x['project_id'] for x in
db.flavor_access_get_by_flavor_id(context,
self.flavorid)]
self.obj_reset_changes(['projects'])
def obj_load_attr(self, attrname):
# NOTE(danms): Only projects could be lazy-loaded right now
if attrname != 'projects':
raise exception.ObjectActionError(
action='obj_load_attr', reason='unable to load %s' % attrname)
self._load_projects()
def obj_reset_changes(self, fields=None):
super(Flavor, self).obj_reset_changes(fields=fields)
if fields is None or 'extra_specs' in fields:
self._orig_extra_specs = (dict(self.extra_specs)
if self.obj_attr_is_set('extra_specs')
else {})
if fields is None or 'projects' in fields:
self._orig_projects = (list(self.projects)
if self.obj_attr_is_set('projects')
else [])
def obj_what_changed(self):
changes = super(Flavor, self).obj_what_changed()
if ('extra_specs' in self and
self.extra_specs != self._orig_extra_specs):
changes.add('extra_specs')
if 'projects' in self and self.projects != self._orig_projects:
changes.add('projects')
return changes
@classmethod
def _obj_from_primitive(cls, context, objver, primitive):
self = super(Flavor, cls)._obj_from_primitive(context, objver,
primitive)
changes = self.obj_what_changed()
if 'extra_specs' not in changes:
# This call left extra_specs "clean" so update our tracker
self._orig_extra_specs = (dict(self.extra_specs)
if self.obj_attr_is_set('extra_specs')
else {})
if 'projects' not in changes:
# This call left projects "clean" so update our tracker
self._orig_projects = (list(self.projects)
if self.obj_attr_is_set('projects')
else [])
return self
@base.remotable_classmethod
def get_by_id(cls, context, id):
db_flavor = db.flavor_get(context, id)
return cls._from_db_object(context, cls(context), db_flavor,
expected_attrs=['extra_specs'])
@base.remotable_classmethod
def get_by_name(cls, context, name):
db_flavor = db.flavor_get_by_name(context, name)
return cls._from_db_object(context, cls(context), db_flavor,
expected_attrs=['extra_specs'])
@base.remotable_classmethod
def get_by_flavor_id(cls, context, flavor_id, read_deleted=None):
db_flavor = db.flavor_get_by_flavor_id(context, flavor_id,
read_deleted)
return cls._from_db_object(context, cls(context), db_flavor,
expected_attrs=['extra_specs'])
@base.remotable
def add_access(self, context, project_id):
if 'projects' in self.obj_what_changed():
raise exception.ObjectActionError(action='add_access',
reason='projects modified')
db.flavor_access_add(context, self.flavorid, project_id)
self._load_projects(context)
@base.remotable
def remove_access(self, context, project_id):
if 'projects' in self.obj_what_changed():
raise exception.ObjectActionError(action='remove_access',
reason='projects modified')
db.flavor_access_remove(context, self.flavorid, project_id)
self._load_projects(context)
@base.remotable
def create(self, context):
if self.obj_attr_is_set('id'):
raise exception.ObjectActionError(action='create',
reason='already created')
updates = self.obj_get_changes()
expected_attrs = []
for attr in OPTIONAL_FIELDS:
if attr in updates:
expected_attrs.append(attr)
projects = updates.pop('projects', [])
db_flavor = db.flavor_create(context, updates, projects=projects)
self._from_db_object(context, self, db_flavor,
expected_attrs=expected_attrs)
@base.remotable
def save_projects(self, context, to_add=None, to_delete=None):
"""Add or delete projects.
:param:to_add: A list of projects to add
:param:to_delete: A list of projects to remove
"""
to_add = to_add if to_add is not None else []
to_delete = to_delete if to_delete is not None else []
for project_id in to_add:
db.flavor_access_add(context, self.flavorid, project_id)
for project_id in to_delete:
db.flavor_access_remove(context, self.flavorid, project_id)
self.obj_reset_changes(['projects'])
@base.remotable
def save_extra_specs(self, context, to_add=None, to_delete=None):
"""Add or delete extra_specs.
:param:to_add: A dict of new keys to add/update
:param:to_delete: A list of keys to remove
"""
to_add = to_add if to_add is not None else []
to_delete = to_delete if to_delete is not None else []
if to_add:
db.flavor_extra_specs_update_or_create(context, self.flavorid,
to_add)
for key in to_delete:
db.flavor_extra_specs_delete(context, self.flavorid, key)
self.obj_reset_changes(['extra_specs'])
def save(self):
context = self._context
updates = self.obj_get_changes()
projects = updates.pop('projects', None)
extra_specs = updates.pop('extra_specs', None)
if updates:
raise exception.ObjectActionError(
action='save', reason='read-only fields were changed')
if extra_specs is not None:
deleted_keys = (set(self._orig_extra_specs.keys()) -
set(extra_specs.keys()))
added_keys = self.extra_specs
else:
added_keys = deleted_keys = None
if projects is not None:
deleted_projects = set(self._orig_projects) - set(projects)
added_projects = set(projects) - set(self._orig_projects)
else:
added_projects = deleted_projects = None
# NOTE(danms): The first remotable method we call will reset
        # our original values for projects and extra_specs. Thus,
# we collect the added/deleted lists for both above and /then/
# call these methods to update them.
if added_keys or deleted_keys:
self.save_extra_specs(context, self.extra_specs, deleted_keys)
if added_projects or deleted_projects:
self.save_projects(context, added_projects, deleted_projects)
@base.remotable
def destroy(self, context):
db.flavor_destroy(context, self.name)
class FlavorList(base.ObjectListBase, base.NovaObject):
VERSION = '1.1'
fields = {
'objects': fields.ListOfObjectsField('Flavor'),
}
child_versions = {
'1.0': '1.0',
'1.1': '1.1',
}
@base.remotable_classmethod
def get_all(cls, context, inactive=False, filters=None,
sort_key='flavorid', sort_dir='asc', limit=None, marker=None):
db_flavors = db.flavor_get_all(context, inactive=inactive,
filters=filters, sort_key=sort_key,
sort_dir=sort_dir, limit=limit,
marker=marker)
return base.obj_make_list(context, cls(context), objects.Flavor,
db_flavors, expected_attrs=['extra_specs'])
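# Illustrative usage sketch only -- the flavorid and extra-spec key below are
# assumptions, not values defined by this module:
#
#   flavor = objects.Flavor.get_by_flavor_id(context, 'm1.small')
#   flavor.extra_specs['hw:cpu_policy'] = 'dedicated'
#   flavor.save()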
| apache-2.0 | 1,388,317,981,248,626,400 | 38.92963 | 78 | 0.572025 | false |
zjuchenyuan/BioWeb | Lib/Bio/SwissProt/__init__.py | 1 | 22555 | # Copyright 2007 by Michiel de Hoon. All rights reserved.
# This code is part of the Biopython distribution and governed by its
# license. Please see the LICENSE file that should have been included
# as part of this package.
"""Code to work with the sprotXX.dat file from SwissProt.
http://www.expasy.ch/sprot/sprot-top.html
Tested with:
Release 56.9, 03-March-2009.
Classes:
- Record Holds SwissProt data.
- Reference Holds reference data from a SwissProt record.
Functions:
- read Read one SwissProt record
- parse Read multiple SwissProt records
"""
from __future__ import print_function
from Bio._py3k import _as_string
class Record(object):
"""Holds information from a SwissProt record.
Members:
- entry_name Name of this entry, e.g. RL1_ECOLI.
- data_class Either 'STANDARD' or 'PRELIMINARY'.
- molecule_type Type of molecule, 'PRT',
- sequence_length Number of residues.
- accessions List of the accession numbers, e.g. ['P00321']
- created A tuple of (date, release).
- sequence_update A tuple of (date, release).
- annotation_update A tuple of (date, release).
- description Free-format description.
- gene_name Gene name. See userman.txt for description.
- organism The source of the sequence.
- organelle The origin of the sequence.
- organism_classification The taxonomy classification. List of strings.
(http://www.ncbi.nlm.nih.gov/Taxonomy/)
- taxonomy_id A list of NCBI taxonomy id's.
- host_organism A list of names of the hosts of a virus, if any.
- host_taxonomy_id A list of NCBI taxonomy id's of the hosts, if any.
- references List of Reference objects.
- comments List of strings.
- cross_references List of tuples (db, id1[, id2][, id3]). See the docs.
- keywords List of the keywords.
- features List of tuples (key name, from, to, description).
from and to can be either integers for the residue
numbers, '<', '>', or '?'
- seqinfo tuple of (length, molecular weight, CRC32 value)
- sequence The sequence.
"""
def __init__(self):
self.entry_name = None
self.data_class = None
self.molecule_type = None
self.sequence_length = None
self.accessions = []
self.created = None
self.sequence_update = None
self.annotation_update = None
self.description = []
self.gene_name = ''
self.organism = []
self.organelle = ''
self.organism_classification = []
self.taxonomy_id = []
self.host_organism = []
self.host_taxonomy_id = []
self.references = []
self.comments = []
self.cross_references = []
self.keywords = []
self.features = []
self.seqinfo = None
self.sequence = ''
class Reference(object):
"""Holds information from one reference in a SwissProt entry.
Members:
number Number of reference in an entry.
evidence Evidence code. List of strings.
positions Describes extent of work. List of strings.
comments Comments. List of (token, text).
references References. List of (dbname, identifier).
authors The authors of the work.
title Title of the work.
location A citation for the work.
"""
def __init__(self):
        self.number = None
        # Not every reference carries an evidence code; default to an empty
        # list so the attribute documented above always exists.
        self.evidence = []
        self.positions = []
self.comments = []
self.references = []
self.authors = []
self.title = []
self.location = []
def parse(handle):
while True:
record = _read(handle)
if not record:
return
yield record
def read(handle):
record = _read(handle)
if not record:
raise ValueError("No SwissProt record found")
# We should have reached the end of the record by now
# Used to check with handle.read() but that breaks on Python 3.5
# due to http://bugs.python.org/issue26499 and could download
# lot of data needlessly if there were more records.
remainder = handle.readline()
if remainder:
raise ValueError("More than one SwissProt record found")
return record
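# A short usage sketch (the flat-file name below is an assumption):
#
#   from Bio import SwissProt
#   with open("uniprot_sprot.dat") as handle:
#       for record in SwissProt.parse(handle):
#           print(record.entry_name, len(record.sequence))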
# Everything below is considered private
def _read(handle):
record = None
unread = ""
for line in handle:
# This is for Python 3 to cope with a binary handle (byte strings),
# or a text handle (unicode strings):
line = _as_string(line)
key, value = line[:2], line[5:].rstrip()
if unread:
value = unread + " " + value
unread = ""
if key == '**':
# See Bug 2353, some files from the EBI have extra lines
# starting "**" (two asterisks/stars). They appear
# to be unofficial automated annotations. e.g.
# **
# ** ################# INTERNAL SECTION ##################
# **HA SAM; Annotated by PicoHamap 1.88; MF_01138.1; 09-NOV-2003.
pass
elif key == 'ID':
record = Record()
_read_id(record, line)
_sequence_lines = []
elif key == 'AC':
accessions = [word for word in value.rstrip(";").split("; ")]
record.accessions.extend(accessions)
elif key == 'DT':
_read_dt(record, line)
elif key == 'DE':
record.description.append(value.strip())
elif key == 'GN':
if record.gene_name:
record.gene_name += " "
record.gene_name += value
elif key == 'OS':
record.organism.append(value)
elif key == 'OG':
record.organelle += line[5:]
elif key == 'OC':
cols = [col for col in value.rstrip(";.").split("; ")]
record.organism_classification.extend(cols)
elif key == 'OX':
_read_ox(record, line)
elif key == 'OH':
_read_oh(record, line)
elif key == 'RN':
reference = Reference()
_read_rn(reference, value)
record.references.append(reference)
elif key == 'RP':
assert record.references, "RP: missing RN"
record.references[-1].positions.append(value)
elif key == 'RC':
assert record.references, "RC: missing RN"
reference = record.references[-1]
unread = _read_rc(reference, value)
elif key == 'RX':
assert record.references, "RX: missing RN"
reference = record.references[-1]
_read_rx(reference, value)
elif key == 'RL':
assert record.references, "RL: missing RN"
reference = record.references[-1]
reference.location.append(value)
# In UniProt release 1.12 of 6/21/04, there is a new RG
# (Reference Group) line, which references a group instead of
# an author. Each block must have at least 1 RA or RG line.
elif key == 'RA':
assert record.references, "RA: missing RN"
reference = record.references[-1]
reference.authors.append(value)
elif key == 'RG':
assert record.references, "RG: missing RN"
reference = record.references[-1]
reference.authors.append(value)
elif key == "RT":
assert record.references, "RT: missing RN"
reference = record.references[-1]
reference.title.append(value)
elif key == 'CC':
_read_cc(record, line)
elif key == 'DR':
_read_dr(record, value)
elif key == 'PE':
# TODO - Record this information?
pass
elif key == 'KW':
_read_kw(record, value)
elif key == 'FT':
_read_ft(record, line)
elif key == 'SQ':
cols = value.split()
assert len(cols) == 7, "I don't understand SQ line %s" % line
# Do more checking here?
record.seqinfo = int(cols[1]), int(cols[3]), cols[5]
elif key == ' ':
_sequence_lines.append(value.replace(" ", "").rstrip())
elif key == '//':
# Join multiline data into one string
record.description = " ".join(record.description)
record.organism = " ".join(record.organism)
record.organelle = record.organelle.rstrip()
for reference in record.references:
reference.authors = " ".join(reference.authors).rstrip(";")
reference.title = " ".join(reference.title).rstrip(";")
if reference.title.startswith('"') and reference.title.endswith('"'):
reference.title = reference.title[1:-1] # remove quotes
reference.location = " ".join(reference.location)
record.sequence = "".join(_sequence_lines)
return record
else:
raise ValueError("Unknown keyword '%s' found" % key)
if record:
raise ValueError("Unexpected end of stream.")
def _read_id(record, line):
cols = line[5:].split()
# Prior to release 51, included with MoleculeType:
# ID EntryName DataClass; MoleculeType; SequenceLength AA.
#
# Newer files lack the MoleculeType:
# ID EntryName DataClass; SequenceLength AA.
if len(cols) == 5:
record.entry_name = cols[0]
record.data_class = cols[1].rstrip(";")
record.molecule_type = cols[2].rstrip(";")
record.sequence_length = int(cols[3])
elif len(cols) == 4:
record.entry_name = cols[0]
record.data_class = cols[1].rstrip(";")
record.molecule_type = None
record.sequence_length = int(cols[2])
else:
raise ValueError("ID line has unrecognised format:\n" + line)
# check if the data class is one of the allowed values
allowed = ('STANDARD', 'PRELIMINARY', 'IPI', 'Reviewed', 'Unreviewed')
if record.data_class not in allowed:
raise ValueError("Unrecognized data class %s in line\n%s" %
(record.data_class, line))
# molecule_type should be 'PRT' for PRoTein
# Note that has been removed in recent releases (set to None)
if record.molecule_type not in (None, 'PRT'):
raise ValueError("Unrecognized molecule type %s in line\n%s" %
(record.molecule_type, line))
def _read_dt(record, line):
value = line[5:]
uprline = value.upper()
cols = value.rstrip().split()
if 'CREATED' in uprline \
or 'LAST SEQUENCE UPDATE' in uprline \
or 'LAST ANNOTATION UPDATE' in uprline:
# Old style DT line
# =================
# e.g.
# DT 01-FEB-1995 (Rel. 31, Created)
# DT 01-FEB-1995 (Rel. 31, Last sequence update)
# DT 01-OCT-2000 (Rel. 40, Last annotation update)
#
# or:
# DT 08-JAN-2002 (IPI Human rel. 2.3, Created)
# ...
# find where the version information will be located
# This is needed for when you have cases like IPI where
# the release version is in a different spot:
# DT 08-JAN-2002 (IPI Human rel. 2.3, Created)
uprcols = uprline.split()
rel_index = -1
for index in range(len(uprcols)):
if 'REL.' in uprcols[index]:
rel_index = index
assert rel_index >= 0, \
"Could not find Rel. in DT line: %s" % line
version_index = rel_index + 1
# get the version information
str_version = cols[version_index].rstrip(",")
# no version number
if str_version == '':
version = 0
# dot versioned
elif '.' in str_version:
version = str_version
# integer versioned
else:
version = int(str_version)
date = cols[0]
if 'CREATED' in uprline:
record.created = date, version
elif 'LAST SEQUENCE UPDATE' in uprline:
record.sequence_update = date, version
elif 'LAST ANNOTATION UPDATE' in uprline:
record.annotation_update = date, version
else:
assert False, "Shouldn't reach this line!"
elif 'INTEGRATED INTO' in uprline \
or 'SEQUENCE VERSION' in uprline \
or 'ENTRY VERSION' in uprline:
# New style DT line
# =================
# As of UniProt Knowledgebase release 7.0 (including
# Swiss-Prot release 49.0 and TrEMBL release 32.0) the
# format of the DT lines and the version information
# in them was changed - the release number was dropped.
#
# For more information see bug 1948 and
# http://ca.expasy.org/sprot/relnotes/sp_news.html#rel7.0
#
# e.g.
# DT 01-JAN-1998, integrated into UniProtKB/Swiss-Prot.
# DT 15-OCT-2001, sequence version 3.
# DT 01-APR-2004, entry version 14.
#
# This is a new style DT line...
# The date should be in string cols[1]
# Get the version number if there is one.
# For the three DT lines above: 0, 3, 14
try:
version = int(cols[-1])
except ValueError:
version = 0
date = cols[0].rstrip(",")
# Re-use the historical property names, even though
        # the meaning has changed slightly:
if "INTEGRATED" in uprline:
record.created = date, version
elif 'SEQUENCE VERSION' in uprline:
record.sequence_update = date, version
elif 'ENTRY VERSION' in uprline:
record.annotation_update = date, version
else:
assert False, "Shouldn't reach this line!"
else:
raise ValueError("I don't understand the date line %s" % line)
def _read_ox(record, line):
# The OX line used to be in the simple format:
# OX DESCRIPTION=ID[, ID]...;
# If there are too many id's to fit onto a line, then the ID's
# continue directly onto the next line, e.g.
# OX DESCRIPTION=ID[, ID]...
# OX ID[, ID]...;
# Currently, the description is always "NCBI_TaxID".
# To parse this, I need to check to see whether I'm at the
# first line. If I am, grab the description and make sure
# it's an NCBI ID. Then, grab all the id's.
#
# As of the 2014-10-01 release, there may be an evidence code, e.g.
# OX NCBI_TaxID=418404 {ECO:0000313|EMBL:AEX14553.1};
# In the short term, we will ignore any evidence codes:
line = line.split('{')[0]
if record.taxonomy_id:
ids = line[5:].rstrip().rstrip(";")
else:
descr, ids = line[5:].rstrip().rstrip(";").split("=")
assert descr == "NCBI_TaxID", "Unexpected taxonomy type %s" % descr
record.taxonomy_id.extend(ids.split(', '))
def _read_oh(record, line):
# Line type OH (Organism Host) for viral hosts
assert line[5:].startswith("NCBI_TaxID="), "Unexpected %s" % line
line = line[16:].rstrip()
assert line[-1] == "." and line.count(";") == 1, line
taxid, name = line[:-1].split(";")
record.host_taxonomy_id.append(taxid.strip())
record.host_organism.append(name.strip())
def _read_rn(reference, rn):
# This used to be a very simple line with a reference number, e.g.
# RN [1]
# As of the 2014-10-01 release, there may be an evidence code, e.g.
# RN [1] {ECO:0000313|EMBL:AEX14553.1}
words = rn.split(None, 1)
number = words[0]
assert number.startswith('[') and number.endswith(']'), "Missing brackets %s" % number
reference.number = int(number[1:-1])
if len(words) > 1:
evidence = words[1]
assert evidence.startswith('{') and evidence.endswith('}'), "Missing braces %s" % evidence
reference.evidence = evidence[1:-1].split('|')
def _read_rc(reference, value):
cols = value.split(';')
if value[-1] == ';':
unread = ""
else:
cols, unread = cols[:-1], cols[-1]
for col in cols:
if not col: # last column will be the empty string
return
# The token is everything before the first '=' character.
i = col.find("=")
if i >= 0:
token, text = col[:i], col[i + 1:]
comment = token.lstrip(), text
reference.comments.append(comment)
else:
comment = reference.comments[-1]
comment = "%s %s" % (comment, col)
reference.comments[-1] = comment
return unread
def _read_rx(reference, value):
# The basic (older?) RX line is of the form:
# RX MEDLINE; 85132727.
# but there are variants of this that need to be dealt with (see below)
# CLD1_HUMAN in Release 39 and DADR_DIDMA in Release 33
# have extraneous information in the RX line. Check for
# this and chop it out of the line.
# (noticed by [email protected])
value = value.replace(' [NCBI, ExPASy, Israel, Japan]', '')
# RX lines can also be used of the form
# RX PubMed=9603189;
# reported by [email protected]
# and these can be more complicated like:
# RX MEDLINE=95385798; PubMed=7656980;
# RX PubMed=15060122; DOI=10.1136/jmg 2003.012781;
# We look for these cases first and deal with them
warn = False
if "=" in value:
cols = value.split("; ")
cols = [x.strip() for x in cols]
cols = [x for x in cols if x]
for col in cols:
x = col.split("=")
            if len(x) != 2 or x == ["DOI", "DOI"]:  # col.split() returns a list
warn = True
break
assert len(x) == 2, "I don't understand RX line %s" % value
reference.references.append((x[0], x[1].rstrip(";")))
# otherwise we assume we have the type 'RX MEDLINE; 85132727.'
else:
cols = value.split("; ")
# normally we split into the three parts
if len(cols) != 2:
warn = True
else:
reference.references.append((cols[0].rstrip(";"), cols[1].rstrip(".")))
if warn:
import warnings
from Bio import BiopythonParserWarning
warnings.warn("Possibly corrupt RX line %r" % value,
BiopythonParserWarning)
def _read_cc(record, line):
key, value = line[5:8], line[9:].rstrip()
if key == '-!-': # Make a new comment
record.comments.append(value)
elif key == ' ': # add to the previous comment
if not record.comments:
# TCMO_STRGA in Release 37 has comment with no topic
record.comments.append(value)
else:
record.comments[-1] += " " + value
def _read_dr(record, value):
cols = value.rstrip(".").split('; ')
record.cross_references.append(tuple(cols))
def _read_kw(record, value):
# Old style - semi-colon separated, multi-line. e.g. Q13639.txt
# KW Alternative splicing; Cell membrane; Complete proteome;
# KW Disulfide bond; Endosome; G-protein coupled receptor; Glycoprotein;
# KW Lipoprotein; Membrane; Palmitate; Polymorphism; Receptor; Transducer;
# KW Transmembrane.
#
# New style as of 2014-10-01 release with evidence codes, e.g. H2CNN8.txt
# KW Monooxygenase {ECO:0000313|EMBL:AEX14553.1};
# KW Oxidoreductase {ECO:0000313|EMBL:AEX14553.1}.
# For now to match the XML parser, drop the evidence codes.
for value in value.rstrip(";.").split('; '):
if value.endswith("}"):
# Discard the evidence code
value = value.rsplit("{", 1)[0]
record.keywords.append(value.strip())
def _read_ft(record, line):
line = line[5:] # get rid of junk in front
name = line[0:8].rstrip()
try:
from_res = int(line[9:15])
except ValueError:
from_res = line[9:15].lstrip()
try:
to_res = int(line[16:22])
except ValueError:
to_res = line[16:22].lstrip()
# if there is a feature_id (FTId), store it away
if line[29:35] == r"/FTId=":
ft_id = line[35:70].rstrip()[:-1]
description = ""
else:
ft_id = ""
description = line[29:70].rstrip()
if not name: # is continuation of last one
assert not from_res and not to_res
name, from_res, to_res, old_description, old_ft_id = record.features[-1]
del record.features[-1]
description = ("%s %s" % (old_description, description)).strip()
# special case -- VARSPLIC, reported by [email protected]
if name == "VARSPLIC":
# Remove unwanted spaces in sequences.
# During line carryover, the sequences in VARSPLIC can get mangled
# with unwanted spaces like:
# 'DISSTKLQALPSHGLESIQT -> PCRATGWSPFRRSSPC LPTH'
# We want to check for this case and correct it as it happens.
descr_cols = description.split(" -> ")
if len(descr_cols) == 2:
first_seq, second_seq = descr_cols
extra_info = ''
# we might have more information at the end of the
# second sequence, which should be in parenthesis
extra_info_pos = second_seq.find(" (")
if extra_info_pos != -1:
extra_info = second_seq[extra_info_pos:]
second_seq = second_seq[:extra_info_pos]
# now clean spaces out of the first and second string
first_seq = first_seq.replace(" ", "")
second_seq = second_seq.replace(" ", "")
# reassemble the description
description = first_seq + " -> " + second_seq + extra_info
record.features.append((name, from_res, to_res, description, ft_id))
if __name__ == "__main__":
print("Quick self test...")
example_filename = "../../Tests/SwissProt/sp008"
import os
if not os.path.isfile(example_filename):
print("Missing test file %s" % example_filename)
else:
# Try parsing it!
with open(example_filename) as handle:
records = parse(handle)
for record in records:
print(record.entry_name)
print(",".join(record.accessions))
print(record.keywords)
print(repr(record.organism))
print(record.sequence[:20] + "...")
| mit | 1,913,186,127,371,190,500 | 36.404643 | 98 | 0.56697 | false |
programmer10110/instavpn | util.py | 2 | 5006 | import platform, os, logging_subprocess, random, string, logging, sys, json, urllib2, fileinput
logger = logging.getLogger()
string_pool = string.ascii_letters + string.digits
gen_random_text = lambda s: ''.join(map(lambda _: random.choice(string_pool), range(s)))
def run_command(cmd):
return not (logging_subprocess.call(cmd,
stdout_log_level=logging.DEBUG,
stderr_log_level=logging.DEBUG,
shell=True))
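# run_command() is True when the shell command exits with status 0, so callers
# can branch on it directly, e.g. (illustrative):
#
#   if not run_command("apt-get update"):
#       logger.error("package list update failed")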
def check_os():
if platform.linux_distribution() != ('Ubuntu', '14.04', 'trusty'):
logger.debug('OS: ' + ' '.join(platform.linux_distribution()))
return False
return True
def not_sudo():
return os.getuid() != 0
def install_packages():
logger.debug('Update package lists')
if not run_command("apt-get update"):
return False
logger.debug('Update packages')
if not run_command("apt-get -y upgrade"):
return False
logger.debug('Install node.js')
if not run_command("apt-get install -y nodejs-legacy npm build-essential libssl-dev"):
return False
logger.debug('Install vnstat')
if not run_command("apt-get install -y vnstat vnstati"):
return False
logger.debug('Install VPN server packages')
if not run_command("DEBIAN_FRONTEND=noninteractive apt-get install -q -y openswan xl2tpd ppp lsof"):
return False
return True
def setup_sysctl():
if not run_command("sh files/sysctl.sh"):
return False
return True
def setup_passwords():
try:
char_set = string.ascii_lowercase + string.ascii_uppercase + string.digits
f = open('/etc/ppp/chap-secrets', 'w')
pw1 = gen_random_text(12)
pw2 = gen_random_text(12)
f.write("username1 l2tpd {} *\n".format(pw1))
f.write("username2 l2tpd {} *".format(pw2))
f.close()
f = open('/etc/ipsec.secrets', 'w')
f.write('1.2.3.4 %any: PSK "{}"'.format(gen_random_text(16)))
f.close()
except:
logger.exception("Exception creating passwords:")
return False
return True
def cp_configs():
logger.debug('xl2tpd.conf')
if not run_command("cp files/xl2tpd.conf /etc/xl2tpd/xl2tpd.conf"):
return False
logger.debug('options.xl2tpd')
if not run_command("cp files/options.xl2tpd /etc/ppp/options.xl2tpd"):
return False
logger.debug('ipsec.conf.template')
if not run_command("cp files/ipsec.conf.template /etc/ipsec.conf.template"):
return False
return True
def setup_vpn():
logger.debug('Write setup-vpn.sh to /etc')
if not run_command("cp files/setup-vpn.sh /etc/setup-vpn.sh"):
return False
logger.debug('Add to rc.local')
try:
open("/etc/rc.local", "w").write("bash /etc/setup-vpn.sh\n" + open("/etc/rc.local").read())
except:
logger.exception("Exception setting up vpn:")
return False
logger.debug('Execute setup-vpn.sh')
if not run_command("bash /etc/setup-vpn.sh"):
return False
logger.debug('Ufw default forward policy')
try:
for line in fileinput.input("/etc/default/ufw", inplace=True):
print line.replace('DEFAULT_FORWARD_POLICY="DROP"', 'DEFAULT_FORWARD_POLICY="ACCEPT"'),
run_command("service ufw restart")
except OSError as e:
logger.warn('ufw not found')
logger.debug('Copy CLI')
if not run_command("chmod +x files/instavpn && cp files/instavpn /usr/bin/instavpn"):
return False
return True
CRONTAB = 'crontab -l | { cat; echo "* * * * * vnstati -s -i eth0 -o /opt/instavpn/public/images/vnstat.png"; } | crontab -'
def webui():
logger.debug('Generate random password')
char_set = string.ascii_lowercase + string.ascii_uppercase + string.digits
with open('web/server/credentials.json', 'w') as f:
json.dump({
"admin": {
"login": "admin",
"password": gen_random_text(16)
}
}, f)
logger.debug('Copy web UI directory')
if not run_command("cp -rf web/ /opt/instavpn"):
return False
logger.debug('Install node_modules')
if not run_command("cd /opt/instavpn && npm install"):
return False
logger.debug('Copy upstart script')
if not run_command("cp files/instavpn.conf /etc/init"):
return False
logger.debug('Add vnstati to cron')
if not run_command(CRONTAB):
return False
logger.debug('Start service')
if not run_command("start instavpn"):
return False
return True
def info():
logger.info('')
with open('/opt/instavpn/server/credentials.json') as f:
json_data = json.load(f)
logger.info('Browse web UI at http://' + urllib2.urlopen("http://myip.dnsdynamic.org/").read() + ':8080/')
logger.info(" Username: {}".format(json_data["admin"]["login"]))
logger.info(" Password: {}".format(json_data["admin"]["password"]))
logger.info("Completed. Run 'instavpn -h' for help")
| apache-2.0 | 7,130,548,158,920,767,000 | 29.901235 | 124 | 0.622853 | false |
sertac/django | tests/template_tests/filter_tests/test_truncatewords_html.py | 386 | 1607 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.template.defaultfilters import truncatewords_html
from django.test import SimpleTestCase
class FunctionTests(SimpleTestCase):
def test_truncate_zero(self):
self.assertEqual(truncatewords_html('<p>one <a href="#">two - three <br>four</a> five</p>', 0), '')
def test_truncate(self):
self.assertEqual(
truncatewords_html('<p>one <a href="#">two - three <br>four</a> five</p>', 2),
'<p>one <a href="#">two ...</a></p>',
)
def test_truncate2(self):
self.assertEqual(
truncatewords_html('<p>one <a href="#">two - three <br>four</a> five</p>', 4),
'<p>one <a href="#">two - three <br>four ...</a></p>',
)
def test_truncate3(self):
self.assertEqual(
truncatewords_html('<p>one <a href="#">two - three <br>four</a> five</p>', 5),
'<p>one <a href="#">two - three <br>four</a> five</p>',
)
def test_truncate4(self):
self.assertEqual(
truncatewords_html('<p>one <a href="#">two - three <br>four</a> five</p>', 100),
'<p>one <a href="#">two - three <br>four</a> five</p>',
)
def test_truncate_unicode(self):
self.assertEqual(truncatewords_html('\xc5ngstr\xf6m was here', 1), '\xc5ngstr\xf6m ...')
def test_truncate_complex(self):
self.assertEqual(
truncatewords_html('<i>Buenos días! ¿Cómo está?</i>', 3),
'<i>Buenos días! ¿Cómo ...</i>',
)
| bsd-3-clause | 6,794,931,790,073,381,000 | 35.522727 | 107 | 0.555694 | false |
psychobaka/PatchCorral | src/engine/file.py | 3 | 2758 | ####################################################################################################
# Copyright 2013 John Crawford
#
# This file is part of PatchCorral.
#
# PatchCorral is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# PatchCorral is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with PatchCorral. If not, see <http://www.gnu.org/licenses/>.
####################################################################################################
## @file
# Represents a file in the file system. Simplifies operations such as loading, saving, and
# tracking modifications outside of the process.
import os
##
# Represents a file in the file system. Simplifies operations such as loading, saving, and
# tracking modifications outside of the process.
class File():
##
# Class initializer.
# @param filename Name of the file to load. If "None", will not be associated with a file until
# "load" or "save" is called.
# @return "None".
def __init__(self, filename=None):
self.filename = filename
if filename is not None:
if not os.path.exists(filename):
open(filename, 'w').close()
self.load()
##
# Loads the given file.
# @param filename Path of the file to load.
# @return "None".
def load(self, filename=None):
if filename is None:
if self.filename is None:
raise ValueError('No associated filename. One must be provided.')
filename = self.filename
self._load(filename)
self.filename = filename
##
# Helper function for "load". Intended to be overridden by subclasses.
# @param filename Path of the file to load.
# @return "None".
def _load(self, filename):
pass
##
# Saves the current contents to file.
# @param filename Path to save to.
# @return "None".
def save(self, filename=None):
if filename is None:
if self.filename is None:
raise ValueError('No associated filename. One must be provided.')
filename = self.filename
self._save(filename)
self.filename = filename
##
# Helper function for "save". Intended to be overridden by subclasses.
# @param filename Path to save to.
# @return "None".
def _save(self, filename):
pass
| gpl-3.0 | -8,352,095,456,644,056,000 | 32.049383 | 100 | 0.616751 | false |
kemalakyol48/python-for-android | python3-alpha/extra_modules/gdata/urlfetch.py | 47 | 9308 | #!/usr/bin/python
#
# Copyright (C) 2008 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Provides HTTP functions for gdata.service to use on Google App Engine
AppEngineHttpClient: Provides an HTTP request method which uses App Engine's
urlfetch API. Set the http_client member of a GDataService object to an
instance of an AppEngineHttpClient to allow the gdata library to run on
Google App Engine.
run_on_appengine: Function which will modify an existing GDataService object
to allow it to run on App Engine. It works by creating a new instance of
the AppEngineHttpClient and replacing the GDataService object's
http_client.
HttpRequest: Function that wraps google.appengine.api.urlfetch.Fetch in a
common interface which is used by gdata.service.GDataService. In other
words, this module can be used as the gdata service request handler so
that all HTTP requests will be performed by the hosting Google App Engine
server.
"""
__author__ = 'api.jscudder (Jeff Scudder)'
import io
import atom.service
import atom.http_interface
from google.appengine.api import urlfetch
def run_on_appengine(gdata_service):
"""Modifies a GDataService object to allow it to run on App Engine.
Args:
gdata_service: An instance of AtomService, GDataService, or any
of their subclasses which has an http_client member.
"""
gdata_service.http_client = AppEngineHttpClient()
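# Example sketch (assumes an existing gdata.service.GDataService instance, as
# described in the module docstring):
#
#   import gdata.service
#   client = gdata.service.GDataService()
#   run_on_appengine(client)
#   # subsequent requests made by client now go through urlfetch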
class AppEngineHttpClient(atom.http_interface.GenericHttpClient):
def __init__(self, headers=None):
self.debug = False
self.headers = headers or {}
def request(self, operation, url, data=None, headers=None):
"""Performs an HTTP call to the server, supports GET, POST, PUT, and
DELETE.
    Usage example: perform an HTTP GET on http://www.google.com/:
import atom.http
client = atom.http.HttpClient()
http_response = client.request('GET', 'http://www.google.com/')
Args:
operation: str The HTTP operation to be performed. This is usually one
of 'GET', 'POST', 'PUT', or 'DELETE'
data: filestream, list of parts, or other object which can be converted
to a string. Should be set to None when performing a GET or DELETE.
If data is a file-like object which can be read, this method will
read a chunk of 100K bytes at a time and send them.
If the data is a list of parts to be sent, each part will be
evaluated and sent.
url: The full URL to which the request should be sent. Can be a string
or atom.url.Url.
headers: dict of strings. HTTP headers which should be sent
in the request.
"""
all_headers = self.headers.copy()
if headers:
all_headers.update(headers)
# Construct the full payload.
# Assume that data is None or a string.
data_str = data
if data:
if isinstance(data, list):
# If data is a list of different objects, convert them all to strings
# and join them together.
converted_parts = [__ConvertDataPart(x) for x in data]
data_str = ''.join(converted_parts)
else:
data_str = __ConvertDataPart(data)
# If the list of headers does not include a Content-Length, attempt to
# calculate it based on the data object.
if data and 'Content-Length' not in all_headers:
all_headers['Content-Length'] = len(data_str)
# Set the content type to the default value if none was set.
if 'Content-Type' not in all_headers:
all_headers['Content-Type'] = 'application/atom+xml'
# Lookup the urlfetch operation which corresponds to the desired HTTP verb.
if operation == 'GET':
method = urlfetch.GET
elif operation == 'POST':
method = urlfetch.POST
elif operation == 'PUT':
method = urlfetch.PUT
elif operation == 'DELETE':
method = urlfetch.DELETE
else:
method = None
return HttpResponse(urlfetch.Fetch(url=str(url), payload=data_str,
method=method, headers=all_headers))
def HttpRequest(service, operation, data, uri, extra_headers=None,
url_params=None, escape_params=True, content_type='application/atom+xml'):
"""Performs an HTTP call to the server, supports GET, POST, PUT, and DELETE.
This function is deprecated, use AppEngineHttpClient.request instead.
To use this module with gdata.service, you can set this module to be the
http_request_handler so that HTTP requests use Google App Engine's urlfetch.
import gdata.service
import gdata.urlfetch
gdata.service.http_request_handler = gdata.urlfetch
Args:
service: atom.AtomService object which contains some of the parameters
needed to make the request. The following members are used to
construct the HTTP call: server (str), additional_headers (dict),
port (int), and ssl (bool).
operation: str The HTTP operation to be performed. This is usually one of
'GET', 'POST', 'PUT', or 'DELETE'
data: filestream, list of parts, or other object which can be
converted to a string.
      Should be set to None when performing a GET or DELETE.
If data is a file-like object which can be read, this method will read
a chunk of 100K bytes at a time and send them.
If the data is a list of parts to be sent, each part will be evaluated
and sent.
uri: The beginning of the URL to which the request should be sent.
Examples: '/', '/base/feeds/snippets',
'/m8/feeds/contacts/default/base'
extra_headers: dict of strings. HTTP headers which should be sent
in the request. These headers are in addition to those stored in
service.additional_headers.
url_params: dict of strings. Key value pairs to be added to the URL as
URL parameters. For example {'foo':'bar', 'test':'param'} will
become ?foo=bar&test=param.
escape_params: bool default True. If true, the keys and values in
url_params will be URL escaped when the form is constructed
(Special characters converted to %XX form.)
content_type: str The MIME type for the data being sent. Defaults to
'application/atom+xml', this is only used if data is set.
"""
full_uri = atom.service.BuildUri(uri, url_params, escape_params)
(server, port, ssl, partial_uri) = atom.service.ProcessUrl(service, full_uri)
# Construct the full URL for the request.
if ssl:
full_url = 'https://%s%s' % (server, partial_uri)
else:
full_url = 'http://%s%s' % (server, partial_uri)
# Construct the full payload.
# Assume that data is None or a string.
data_str = data
if data:
if isinstance(data, list):
# If data is a list of different objects, convert them all to strings
# and join them together.
converted_parts = [__ConvertDataPart(x) for x in data]
data_str = ''.join(converted_parts)
else:
data_str = __ConvertDataPart(data)
# Construct the dictionary of HTTP headers.
headers = {}
if isinstance(service.additional_headers, dict):
headers = service.additional_headers.copy()
if isinstance(extra_headers, dict):
for header, value in extra_headers.items():
headers[header] = value
# Add the content type header (we don't need to calculate content length,
# since urlfetch.Fetch will calculate for us).
if content_type:
headers['Content-Type'] = content_type
# Lookup the urlfetch operation which corresponds to the desired HTTP verb.
if operation == 'GET':
method = urlfetch.GET
elif operation == 'POST':
method = urlfetch.POST
elif operation == 'PUT':
method = urlfetch.PUT
elif operation == 'DELETE':
method = urlfetch.DELETE
else:
method = None
return HttpResponse(urlfetch.Fetch(url=full_url, payload=data_str,
method=method, headers=headers))
def __ConvertDataPart(data):
if not data or isinstance(data, str):
return data
elif hasattr(data, 'read'):
# data is a file like object, so read it completely.
return data.read()
# The data object was not a file.
# Try to convert to a string and send the data.
return str(data)
class HttpResponse(object):
"""Translates a urlfetch resoinse to look like an hhtplib resoinse.
Used to allow the resoinse from HttpRequest to be usable by gdata.service
methods.
"""
def __init__(self, urlfetch_response):
self.body = io.StringIO(urlfetch_response.content)
self.headers = urlfetch_response.headers
self.status = urlfetch_response.status_code
self.reason = ''
def read(self, length=None):
if not length:
return self.body.read()
else:
return self.body.read(length)
def getheader(self, name):
if name not in self.headers:
return self.headers[name.lower()]
return self.headers[name]
| apache-2.0 | -5,397,352,535,760,641,000 | 36.684211 | 79 | 0.691878 | false |
ddrown/irssiconnectbot-protobuf | gtest/test/gtest_xml_outfiles_test.py | 718 | 5312 | #!/usr/bin/env python
#
# Copyright 2008, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Unit test for the gtest_xml_output module."""
__author__ = "[email protected] (Keith Ray)"
import os
from xml.dom import minidom, Node
import gtest_test_utils
import gtest_xml_test_utils
GTEST_OUTPUT_SUBDIR = "xml_outfiles"
GTEST_OUTPUT_1_TEST = "gtest_xml_outfile1_test_"
GTEST_OUTPUT_2_TEST = "gtest_xml_outfile2_test_"
EXPECTED_XML_1 = """<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="1" failures="0" disabled="0" errors="0" time="*" name="AllTests">
<testsuite name="PropertyOne" tests="1" failures="0" disabled="0" errors="0" time="*">
<testcase name="TestSomeProperties" status="run" time="*" classname="PropertyOne" SetUpProp="1" TestSomeProperty="1" TearDownProp="1" />
</testsuite>
</testsuites>
"""
EXPECTED_XML_2 = """<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="1" failures="0" disabled="0" errors="0" time="*" name="AllTests">
<testsuite name="PropertyTwo" tests="1" failures="0" disabled="0" errors="0" time="*">
<testcase name="TestSomeProperties" status="run" time="*" classname="PropertyTwo" SetUpProp="2" TestSomeProperty="2" TearDownProp="2" />
</testsuite>
</testsuites>
"""
class GTestXMLOutFilesTest(gtest_xml_test_utils.GTestXMLTestCase):
"""Unit test for Google Test's XML output functionality."""
def setUp(self):
# We want the trailing '/' that the last "" provides in os.path.join, for
# telling Google Test to create an output directory instead of a single file
# for xml output.
self.output_dir_ = os.path.join(gtest_test_utils.GetTempDir(),
GTEST_OUTPUT_SUBDIR, "")
self.DeleteFilesAndDir()
def tearDown(self):
self.DeleteFilesAndDir()
def DeleteFilesAndDir(self):
try:
os.remove(os.path.join(self.output_dir_, GTEST_OUTPUT_1_TEST + ".xml"))
except os.error:
pass
try:
os.remove(os.path.join(self.output_dir_, GTEST_OUTPUT_2_TEST + ".xml"))
except os.error:
pass
try:
os.rmdir(self.output_dir_)
except os.error:
pass
def testOutfile1(self):
self._TestOutFile(GTEST_OUTPUT_1_TEST, EXPECTED_XML_1)
def testOutfile2(self):
self._TestOutFile(GTEST_OUTPUT_2_TEST, EXPECTED_XML_2)
def _TestOutFile(self, test_name, expected_xml):
gtest_prog_path = gtest_test_utils.GetTestExecutablePath(test_name)
command = [gtest_prog_path, "--gtest_output=xml:%s" % self.output_dir_]
p = gtest_test_utils.Subprocess(command,
working_dir=gtest_test_utils.GetTempDir())
self.assert_(p.exited)
self.assertEquals(0, p.exit_code)
# TODO([email protected]): libtool causes the built test binary to be
# named lt-gtest_xml_outfiles_test_ instead of
    # gtest_xml_outfiles_test_. To account for this possibility, we
# allow both names in the following code. We should remove this
# hack when Chandler Carruth's libtool replacement tool is ready.
output_file_name1 = test_name + ".xml"
output_file1 = os.path.join(self.output_dir_, output_file_name1)
output_file_name2 = 'lt-' + output_file_name1
output_file2 = os.path.join(self.output_dir_, output_file_name2)
self.assert_(os.path.isfile(output_file1) or os.path.isfile(output_file2),
output_file1)
expected = minidom.parseString(expected_xml)
if os.path.isfile(output_file1):
actual = minidom.parse(output_file1)
else:
actual = minidom.parse(output_file2)
self.NormalizeXml(actual.documentElement)
self.AssertEquivalentNodes(expected.documentElement,
actual.documentElement)
expected.unlink()
actual.unlink()
if __name__ == "__main__":
os.environ["GTEST_STACK_TRACE_DEPTH"] = "0"
gtest_test_utils.Main()
| bsd-3-clause | -5,281,076,184,684,413,000 | 39.242424 | 140 | 0.699925 | false |
vmfarms/django-rest-swagger | tests/cigar_example/cigar_example/wsgi.py | 18 | 1148 | """
WSGI config for cigar_example project.
This module contains the WSGI application used by Django's development server
and any production WSGI deployments. It should expose a module-level variable
named ``application``. Django's ``runserver`` and ``runfcgi`` commands discover
this application via the ``WSGI_APPLICATION`` setting.
Usually you will have the standard Django WSGI application here, but it also
might make sense to replace the whole Django WSGI application with a custom one
that later delegates to the Django one. For example, you could introduce WSGI
middleware here, or combine a Django application with an application of another
framework.
"""
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "cigar_example.settings")
# This application object is used by any WSGI server configured to use this
# file. This includes Django's development server, if the WSGI_APPLICATION
# setting points here.
application = get_wsgi_application()
# Apply WSGI middleware here.
# from helloworld.wsgi import HelloWorldApplication
# application = HelloWorldApplication(application)
| bsd-2-clause | 223,203,459,558,364,380 | 40 | 79 | 0.800523 | false |
glwu/python-for-android | python3-alpha/python3-src/Lib/ctypes/test/test_pointers.py | 53 | 6273 | import unittest, sys
from ctypes import *
import _ctypes_test
ctype_types = [c_byte, c_ubyte, c_short, c_ushort, c_int, c_uint,
c_long, c_ulong, c_longlong, c_ulonglong, c_double, c_float]
python_types = [int, int, int, int, int, int,
int, int, int, int, float, float]
class PointersTestCase(unittest.TestCase):
def test_pointer_crash(self):
class A(POINTER(c_ulong)):
pass
POINTER(c_ulong)(c_ulong(22))
# Pointer can't set contents: has no _type_
self.assertRaises(TypeError, A, c_ulong(33))
def test_pass_pointers(self):
dll = CDLL(_ctypes_test.__file__)
func = dll._testfunc_p_p
func.restype = c_long
i = c_int(12345678)
## func.argtypes = (POINTER(c_int),)
address = func(byref(i))
self.assertEqual(c_int.from_address(address).value, 12345678)
func.restype = POINTER(c_int)
res = func(pointer(i))
self.assertEqual(res.contents.value, 12345678)
self.assertEqual(res[0], 12345678)
def test_change_pointers(self):
dll = CDLL(_ctypes_test.__file__)
func = dll._testfunc_p_p
i = c_int(87654)
func.restype = POINTER(c_int)
func.argtypes = (POINTER(c_int),)
res = func(pointer(i))
self.assertEqual(res[0], 87654)
self.assertEqual(res.contents.value, 87654)
# C code: *res = 54345
res[0] = 54345
self.assertEqual(i.value, 54345)
# C code:
# int x = 12321;
# res = &x
res.contents = c_int(12321)
self.assertEqual(i.value, 54345)
def test_callbacks_with_pointers(self):
# a function type receiving a pointer
PROTOTYPE = CFUNCTYPE(c_int, POINTER(c_int))
self.result = []
def func(arg):
for i in range(10):
## print arg[i],
self.result.append(arg[i])
## print
return 0
callback = PROTOTYPE(func)
dll = CDLL(_ctypes_test.__file__)
# This function expects a function pointer,
# and calls this with an integer pointer as parameter.
# The int pointer points to a table containing the numbers 1..10
doit = dll._testfunc_callback_with_pointer
## i = c_int(42)
## callback(byref(i))
## self.assertTrue(i.value == 84)
doit(callback)
## print self.result
doit(callback)
## print self.result
def test_basics(self):
from operator import delitem
for ct, pt in zip(ctype_types, python_types):
i = ct(42)
p = pointer(i)
## print type(p.contents), ct
self.assertTrue(type(p.contents) is ct)
# p.contents is the same as p[0]
## print p.contents
## self.assertTrue(p.contents == 42)
## self.assertTrue(p[0] == 42)
self.assertRaises(TypeError, delitem, p, 0)
def test_from_address(self):
from array import array
a = array('i', [100, 200, 300, 400, 500])
addr = a.buffer_info()[0]
p = POINTER(POINTER(c_int))
## print dir(p)
## print p.from_address
## print p.from_address(addr)[0][0]
def test_other(self):
class Table(Structure):
_fields_ = [("a", c_int),
("b", c_int),
("c", c_int)]
pt = pointer(Table(1, 2, 3))
self.assertEqual(pt.contents.a, 1)
self.assertEqual(pt.contents.b, 2)
self.assertEqual(pt.contents.c, 3)
pt.contents.c = 33
from ctypes import _pointer_type_cache
del _pointer_type_cache[Table]
def test_basic(self):
p = pointer(c_int(42))
        # Although a pointer can be indexed, it has no length
self.assertRaises(TypeError, len, p)
self.assertEqual(p[0], 42)
self.assertEqual(p.contents.value, 42)
def test_charpp(self):
"""Test that a character pointer-to-pointer is correctly passed"""
dll = CDLL(_ctypes_test.__file__)
func = dll._testfunc_c_p_p
func.restype = c_char_p
argv = (c_char_p * 2)()
argc = c_int( 2 )
argv[0] = b'hello'
argv[1] = b'world'
result = func( byref(argc), argv )
self.assertEqual(result, b'world')
def test_bug_1467852(self):
# http://sourceforge.net/tracker/?func=detail&atid=532154&aid=1467852&group_id=71702
x = c_int(5)
dummy = []
for i in range(32000):
dummy.append(c_int(i))
y = c_int(6)
p = pointer(x)
pp = pointer(p)
q = pointer(y)
pp[0] = q # <==
self.assertEqual(p[0], 6)
def test_c_void_p(self):
# http://sourceforge.net/tracker/?func=detail&aid=1518190&group_id=5470&atid=105470
if sizeof(c_void_p) == 4:
self.assertEqual(c_void_p(0xFFFFFFFF).value,
c_void_p(-1).value)
self.assertEqual(c_void_p(0xFFFFFFFFFFFFFFFF).value,
c_void_p(-1).value)
elif sizeof(c_void_p) == 8:
self.assertEqual(c_void_p(0xFFFFFFFF).value,
0xFFFFFFFF)
self.assertEqual(c_void_p(0xFFFFFFFFFFFFFFFF).value,
c_void_p(-1).value)
self.assertEqual(c_void_p(0xFFFFFFFFFFFFFFFFFFFFFFFF).value,
c_void_p(-1).value)
self.assertRaises(TypeError, c_void_p, 3.14) # make sure floats are NOT accepted
self.assertRaises(TypeError, c_void_p, object()) # nor other objects
def test_pointers_bool(self):
# NULL pointers have a boolean False value, non-NULL pointers True.
self.assertEqual(bool(POINTER(c_int)()), False)
self.assertEqual(bool(pointer(c_int())), True)
self.assertEqual(bool(CFUNCTYPE(None)(0)), False)
self.assertEqual(bool(CFUNCTYPE(None)(42)), True)
# COM methods are boolean True:
if sys.platform == "win32":
mth = WINFUNCTYPE(None)(42, "name", (), None)
self.assertEqual(bool(mth), True)
if __name__ == '__main__':
unittest.main()
| apache-2.0 | -8,955,999,783,600,438,000 | 31.671875 | 92 | 0.54631 | false |
chrisdunelm/google-api-dotnet-client | ClientGenerator/src/googleapis/codegen/targets.py | 6 | 10444 | #!/usr/bin/python2.7
# Copyright 2011 Google Inc. All Rights Reserved.
"""Targets class describes which languages/platforms we support."""
__author__ = '[email protected] (Will Clarkson)'
import logging
import os
from googleapis.codegen.filesys import files
from googleapis.codegen.utilities import json_expander
from googleapis.codegen.utilities import json_with_comments
class Targets(object):
"""Targets maintains the list of possible target options.
Reads targets.json file in local directory. This file is formatted
as:
{
'languages': {
'languageA': {
'surface_option1': {
'path': 'stable',
'description': 'something about language A',
'displayName': 'SurfaceOption1',
},
'surface_option2': {
'path': 'experimental',
'description': 'something about language A',
'displayName': 'SurfaceOption2',
'platforms': ['cmd-line'],
}
},
'languageB': {
...
}, ...
},
'platforms': {
'cmd-line': {
'displayName': 'Pretty Platform Name'
}
}
}
"""
def __init__(self, targets_path=None, template_root=None, targets_dict=None):
"""Constructor.
Loads targets file.
Args:
targets_path: (str) Path to targets file. Defaults to './targets.json'
template_root: (str) Path to template root. Defaults to '.'
targets_dict: (dict) Initial data, if not supplied from a file.
Raises:
ValueError: if the targets file does not contain the required sections.
"""
self.template_root = template_root or Targets._default_template_root
self.targets_path = targets_path or os.path.join(self.template_root,
'targets.json')
if targets_dict:
self._targets_dict = targets_dict
else:
self._targets_dict = json_with_comments.Loads(
files.GetFileContents(self.targets_path))
# Do some basic validation that this has the required fields
if 'languages' not in self._targets_dict:
raise ValueError('languages not in targets.json')
def Dict(self):
"""The targets.json file as a dictionary."""
return self._targets_dict
def VariationsForLanguage(self, language):
language_def = self._targets_dict['languages'].get(language)
if not language_def:
return None
return Variations(self, language, language_def['variations'])
def GetLanguage(self, language):
return self._targets_dict['languages'][language]
def Languages(self):
return self._targets_dict['languages']
def Platforms(self):
return self._targets_dict.get('platforms', {})
@staticmethod
def SetDefaultTemplateRoot(path):
"""Sets a new default full path to the templates directory.
Args:
path: (str) full path to templates directory.
"""
# This is not a classmethod because we don't want subclasses
# to shadow this value.
logging.info('setting default template root to %s', path)
Targets._default_template_root = path
@staticmethod
def GetDefaultTemplateRoot():
return Targets._default_template_root
# Set the initial template root.
_default_template_root = os.path.join(os.path.dirname(__file__),
'languages')
# Whether to use variation release versions when calculating template paths.
use_versioned_paths = False
@staticmethod
def SetUseVersionedPaths(use_versioned_paths):
"""Sets whether versions are used in the template path."""
# This is not a classmethod because we don't want subclasses
# to shadow this value.
Targets.use_versioned_paths = use_versioned_paths
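# --- Illustrative example (added for clarity; not part of the original module).
# A minimal sketch of driving Targets from an in-memory dict shaped like the
# targets.json structure documented above. The language and variation names
# used here are assumptions for demonstration only.
def _example_targets_usage():
  sample = {
      'languages': {
          'java': {
              'variations': {
                  'stable': {'path': 'stable', 'description': 'Stable surface'},
              },
          },
      },
      'platforms': {
          'cmd-line': {'displayName': 'Command line'},
      },
  }
  targets = Targets(targets_dict=sample)
  variations = targets.VariationsForLanguage('java')
  return variations.IsValid('stable')  # True for this sample dict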
class Variations(dict):
"""A set of variations available for a particular language."""
def __init__(self, targets, language, variations_dict):
super(Variations, self).__init__(variations_dict)
self._targets = targets
self._language = language
def IsValid(self, variation):
"""Test is a variation exists."""
return variation in self
def _RelativeTemplateDir(self, variation):
"""Returns the path to template dir for the selected variation.
By default, the path is the same as the variation name. It can be
overridden in two ways, of descending precedence:
1. by the 'releaseVersion' element, if use_versioned_paths is set.
2. with an explicit 'path' statement.
Args:
variation: (str) A target variation name.
Returns:
(str) Relative path to template directory.
"""
if self._targets.use_versioned_paths:
path = self[variation].get('releaseVersion') or variation
else:
path = None
if not path:
path = self.get(variation, {}).get('path') or variation
return os.path.join(self._language, path)
def AbsoluteTemplateDir(self, variation):
"""Returns the path to template dir for the selected variation.
Args:
variation: (str) A target variation name.
Returns:
(str) Absolute path to template directory.
"""
return os.path.join(self._targets.template_root,
self._RelativeTemplateDir(variation))
def GetFeaturesForReleaseVersion(self, release_version):
for name in self:
features = self.GetFeatures(name)
if release_version == features.get('releaseVersion'):
return features
return None
def GetFeatures(self, variation):
"""Returns the features dictionary for a specific variation.
    This is the basic dictionary information plus any specific overrides in
the per-template-tree features.json file.
Args:
variation: (str) A target variation name.
Returns:
(Features) features dictionary
"""
if not variation:
return None
template_dir = self.AbsoluteTemplateDir(variation)
features = Features(template_dir, self.get(variation), variation)
json_path = os.path.join(template_dir, 'features.json')
try:
features_json = files.GetFileContents(json_path)
except files.FileDoesNotExist:
# for backwards compatibility, we forgive this.
# TODO(user): be stricter about this and
# fix/remove any tests that fail as a result.
return features
features.update(json_expander.ExpandJsonTemplate(
json_with_comments.Loads(features_json)))
# If not specified, the releaseVersion matches the variation
if not features.get('releaseVersion'):
features['releaseVersion'] = variation
return features
class Features(dict):
"""A dictionary describing the features of a particular API variation."""
# TODO(user): Do we need initial_content? The only thing we see in it is
# path, which should be set explicitly to the dirname of the real file path.
def __init__(self, template_dir, initial_content=None, name=None):
super(Features, self).__init__(initial_content or {})
self.name = name
self.template_dir = template_dir
if 'path' not in self:
self['path'] = os.path.basename(template_dir)
def DependenciesForEnvironment(self, environment=None):
"""Returns the list of dependencies for an environment.
Given an environment:
build the list of dependencies required for that environment. This
includes elements marked as all (platform='*') and ones specifically
mentioning that environment.
build the list of optional packages which might be useful to your app.
That is, everything marked generic, but not in the first list.
build the list of everything excluded from the first two sets
Args:
environment: (str) An environment (as per platforms.PLATFORMS). If None,
the optional packages list will include everything that is not
mandatory (i.e. marked with platform='*').
Returns:
list(dict), list(dict), list(dict): required_packages, optional_packages,
packages we do not want.
"""
required = []
optional = []
excluded = []
for r in self.get('requires', []):
environments = r['environments']
if '*' in environments or environment in environments:
required.append(r)
elif 'generic' in environments or not environment:
optional.append(r)
else:
excluded.append(r)
return required, optional, excluded
def ExtractPathsFromDependencies(self, dependencies, file_type=None):
"""Extract the file paths from a list of dependencies.
Args:
dependencies: (list(str)) list of dependencies from a Features object
file_type: (str) If specified, only extract paths for that file type.
Returns:
set(str): The set of file paths required for this dependency set.
"""
ret = set()
for d in dependencies or []:
for f in d.get('files') or []:
p = f.get('path')
if p and (file_type is None or file_type == f.get('type')):
ret.add(p)
return ret
def AllDependencyPaths(self):
"""Returns the set of all file paths mentioned as dependencies.
Returns:
set(str)
"""
ret = set()
for dependency in self.get('requires', []):
for f in dependency.get('files') or []:
p = f.get('path')
if p:
ret.add(p)
return ret
def FilePathsWeDoNotDependOn(self, environment=None, file_type=None):
"""Returns the list of file paths which are NOT required for an environment.
Figure out the files we need for an environment and reduce that by the
kind of files (if we only want source or binary), then invert that list
w.r.t. all the files mentioned in the features requirements list.
The rationale for this function is to make it easy to find the set of
files which should be stripped from a download, while leaving all files
not explicitly mentioned in the features.
Args:
environment: (str) An environment (as per platforms.PLATFORMS). If None,
        the optional packages list will include everything that is
        not required.
file_type: (str) If specified, only extract paths for that file type.
Returns:
list(str): The paths which are NOT required for that platform
"""
if not environment and not file_type: # quick exit for common case
return []
req, _, _ = self.DependenciesForEnvironment(environment=environment)
req_paths = self.ExtractPathsFromDependencies(req, file_type=file_type)
all_paths = self.AllDependencyPaths()
return all_paths - req_paths
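# --- Illustrative example (added for clarity; not part of the original module).
# Sketch of how DependenciesForEnvironment splits a 'requires' list into
# required / optional / excluded groups, and how file paths are extracted.
# The template directory and package entries are assumptions for demonstration.
def _example_features_dependencies():
  features = Features('/tmp/templates/java/stable', initial_content={
      'requires': [
          {'name': 'core', 'environments': ['*'],
           'files': [{'path': 'libs/core.jar', 'type': 'binary'}]},
          {'name': 'extras', 'environments': ['generic'],
           'files': [{'path': 'libs/extras.jar', 'type': 'binary'}]},
          {'name': 'android-only', 'environments': ['android'],
           'files': [{'path': 'libs/android.jar', 'type': 'binary'}]},
      ],
  })
  required, optional, excluded = features.DependenciesForEnvironment('cmd-line')
  # required -> 'core' (matches '*'), optional -> 'extras' (generic),
  # excluded -> 'android-only'.
  return features.ExtractPathsFromDependencies(required, file_type='binary')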
| apache-2.0 | 2,495,858,284,310,484,000 | 32.909091 | 80 | 0.669763 | false |
Maikflow/django_test | lib/python2.7/site-packages/setuptools/__init__.py | 158 | 5195 | """Extensions to the 'distutils' for large or complex distributions"""
import os
import sys
import distutils.core
import distutils.filelist
from distutils.core import Command as _Command
from distutils.util import convert_path
from fnmatch import fnmatchcase
import setuptools.version
from setuptools.extension import Extension
from setuptools.dist import Distribution, Feature, _get_unpatched
from setuptools.depends import Require
from setuptools.compat import filterfalse
__all__ = [
'setup', 'Distribution', 'Feature', 'Command', 'Extension', 'Require',
'find_packages'
]
__version__ = setuptools.version.__version__
bootstrap_install_from = None
# If we run 2to3 on .py files, should we also convert docstrings?
# Default: yes; assume that we can detect doctests reliably
run_2to3_on_doctests = True
# Standard package names for fixer packages
lib2to3_fixer_packages = ['lib2to3.fixes']
class PackageFinder(object):
@classmethod
def find(cls, where='.', exclude=(), include=('*',)):
"""Return a list all Python packages found within directory 'where'
'where' should be supplied as a "cross-platform" (i.e. URL-style)
path; it will be converted to the appropriate local path syntax.
'exclude' is a sequence of package names to exclude; '*' can be used
as a wildcard in the names, such that 'foo.*' will exclude all
subpackages of 'foo' (but not 'foo' itself).
'include' is a sequence of package names to include. If it's
specified, only the named packages will be included. If it's not
specified, all found packages will be included. 'include' can contain
shell style wildcard patterns just like 'exclude'.
The list of included packages is built up first and then any
explicitly excluded packages are removed from it.
"""
out = cls._find_packages_iter(convert_path(where))
out = cls.require_parents(out)
includes = cls._build_filter(*include)
excludes = cls._build_filter('ez_setup', '*__pycache__', *exclude)
out = filter(includes, out)
out = filterfalse(excludes, out)
return list(out)
@staticmethod
def require_parents(packages):
"""
Exclude any apparent package that apparently doesn't include its
parent.
For example, exclude 'foo.bar' if 'foo' is not present.
"""
found = []
for pkg in packages:
base, sep, child = pkg.rpartition('.')
if base and base not in found:
continue
found.append(pkg)
yield pkg
@staticmethod
def _all_dirs(base_path):
"""
Return all dirs in base_path, relative to base_path
"""
for root, dirs, files in os.walk(base_path, followlinks=True):
for dir in dirs:
yield os.path.relpath(os.path.join(root, dir), base_path)
@classmethod
def _find_packages_iter(cls, base_path):
dirs = cls._all_dirs(base_path)
suitable = filterfalse(lambda n: '.' in n, dirs)
return (
path.replace(os.path.sep, '.')
for path in suitable
if cls._looks_like_package(os.path.join(base_path, path))
)
@staticmethod
def _looks_like_package(path):
return os.path.isfile(os.path.join(path, '__init__.py'))
@staticmethod
def _build_filter(*patterns):
"""
Given a list of patterns, return a callable that will be true only if
the input matches one of the patterns.
"""
return lambda name: any(fnmatchcase(name, pat=pat) for pat in patterns)
class PEP420PackageFinder(PackageFinder):
@staticmethod
def _looks_like_package(path):
return True
find_packages = PackageFinder.find
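# --- Illustrative example (added for clarity; not part of the original module).
# A runnable sketch of the include/exclude semantics documented in
# PackageFinder.find: it builds a throwaway package tree in a temp directory
# and scans it. The package names used here are assumptions for demonstration.
def _example_find_packages():
    import shutil
    import tempfile
    root = tempfile.mkdtemp()
    try:
        for pkg in ('foo', os.path.join('foo', 'bar'), 'tests'):
            pkg_dir = os.path.join(root, pkg)
            os.makedirs(pkg_dir)
            open(os.path.join(pkg_dir, '__init__.py'), 'w').close()
        # 'tests' and its subpackages are excluded; wildcards work as documented.
        return sorted(find_packages(root, exclude=('tests', 'tests.*')))
        # -> ['foo', 'foo.bar']
    finally:
        shutil.rmtree(root)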
setup = distutils.core.setup
_Command = _get_unpatched(_Command)
class Command(_Command):
__doc__ = _Command.__doc__
command_consumes_arguments = False
def __init__(self, dist, **kw):
# Add support for keyword arguments
_Command.__init__(self,dist)
for k,v in kw.items():
setattr(self,k,v)
def reinitialize_command(self, command, reinit_subcommands=0, **kw):
cmd = _Command.reinitialize_command(self, command, reinit_subcommands)
for k,v in kw.items():
setattr(cmd,k,v) # update command with keywords
return cmd
distutils.core.Command = Command # we can't patch distutils.cmd, alas
def findall(dir = os.curdir):
"""Find all files under 'dir' and return the list of full filenames
(relative to 'dir').
"""
all_files = []
for base, dirs, files in os.walk(dir):
if base==os.curdir or base.startswith(os.curdir+os.sep):
base = base[2:]
if base:
files = [os.path.join(base, f) for f in files]
all_files.extend(filter(os.path.isfile, files))
return all_files
distutils.filelist.findall = findall # fix findall bug in distutils.
# sys.dont_write_bytecode was introduced in Python 2.6.
_dont_write_bytecode = getattr(sys, 'dont_write_bytecode',
bool(os.environ.get("PYTHONDONTWRITEBYTECODE")))
| gpl-2.0 | 9,113,684,457,920,407,000 | 32.733766 | 79 | 0.643118 | false |
feer56/Kitsune1 | kitsune/questions/tasks.py | 1 | 6470 | import logging
import traceback
from datetime import date
from django.conf import settings
from django.contrib.sites.models import Site
from django.db import connection, transaction
# NOTE: This import is just so _fire_task gets registered with celery.
import tidings.events # noqa
from celery import task
from multidb.pinning import pin_this_thread, unpin_this_thread
from statsd import statsd
from zendesk import ZendeskError
from kitsune.kbadge.utils import get_or_create_badge
from kitsune.questions.config import ANSWERS_PER_PAGE
from kitsune.questions.karma_actions import AnswerAction, FirstAnswerAction
from kitsune.questions.marketplace import submit_ticket
from kitsune.search.es_utils import ES_EXCEPTIONS
from kitsune.search.tasks import index_task
from kitsune.sumo.decorators import timeit
log = logging.getLogger('k.task')
@task(rate_limit='1/s')
@timeit
def update_question_votes(question_id):
from kitsune.questions.models import Question
log.debug('Got a new QuestionVote for question_id=%s.' % question_id)
statsd.incr('questions.tasks.update')
# Pin to master db to avoid lag delay issues.
pin_this_thread()
try:
q = Question.uncached.get(id=question_id)
q.sync_num_votes_past_week()
q.save(force_update=True)
except Question.DoesNotExist:
log.info('Question id=%s deleted before task.' % question_id)
unpin_this_thread()
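# Illustrative note (added): as a Celery task this is normally queued rather
# than called synchronously, e.g. update_question_votes.delay(question.id).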
@task(rate_limit='4/s')
@timeit
def update_question_vote_chunk(data):
"""Update num_votes_past_week for a number of questions."""
# First we recalculate num_votes_past_week in the db.
log.info('Calculating past week votes for %s questions.' % len(data))
ids = ','.join(map(str, data))
sql = """
UPDATE questions_question q
SET num_votes_past_week = (
SELECT COUNT(created)
FROM questions_questionvote qv
WHERE qv.question_id = q.id
AND qv.created >= DATE(SUBDATE(NOW(), 7))
)
WHERE q.id IN (%s);
""" % ids
cursor = connection.cursor()
cursor.execute(sql)
transaction.commit_unless_managed()
# Next we update our index with the changes we made directly in
# the db.
if data and settings.ES_LIVE_INDEXING:
# Get the data we just updated from the database.
sql = """
SELECT id, num_votes_past_week
FROM questions_question
WHERE id in (%s);
""" % ids
cursor = connection.cursor()
cursor.execute(sql)
# Since this returns (id, num_votes_past_week) tuples, we can
# convert that directly to a dict.
id_to_num = dict(cursor.fetchall())
try:
# Fetch all the documents we need to update.
from kitsune.questions.models import QuestionMappingType
from kitsune.search import es_utils
es_docs = es_utils.get_documents(QuestionMappingType, data)
# For each document, update the data and stick it back in the
# index.
for doc in es_docs:
# Note: Need to keep this in sync with
# Question.extract_document.
num = id_to_num[int(doc[u'id'])]
doc[u'question_num_votes_past_week'] = num
QuestionMappingType.index(doc, id_=doc['id'])
except ES_EXCEPTIONS:
# Something happened with ES, so let's push index updating
# into an index_task which retries when it fails because
# of ES issues.
index_task.delay(QuestionMappingType, id_to_num.keys())
@task(rate_limit='4/m')
@timeit
def update_answer_pages(question):
log.debug('Recalculating answer page numbers for question %s: %s' %
(question.pk, question.title))
i = 0
answers = question.answers.using('default').order_by('created')
for answer in answers.filter(is_spam=False):
answer.page = i / ANSWERS_PER_PAGE + 1
answer.save(no_notify=True)
i += 1
@task()
@timeit
def log_answer(answer):
pin_this_thread()
# Record karma actions
AnswerAction(answer.creator, answer.created.date()).save()
try:
from kitsune.questions.models import Answer
answers = Answer.uncached.filter(question=answer.question_id)
if answer == answers.order_by('created')[0]:
FirstAnswerAction(answer.creator, answer.created.date()).save()
except IndexError:
# If we hit an IndexError, we assume this is the first answer.
FirstAnswerAction(answer.creator, answer.created.date()).save()
unpin_this_thread()
@task()
@timeit
def maybe_award_badge(badge_template, year, user):
"""Award the specific badge to the user if they've earned it."""
badge = get_or_create_badge(badge_template, year)
# If the user already has the badge, there is nothing else to do.
if badge.is_awarded_to(user):
return
# Count the number of replies tweeted in the current year.
from kitsune.questions.models import Answer
qs = Answer.objects.filter(
creator=user,
created__gte=date(year, 1, 1),
created__lt=date(year + 1, 1, 1))
# If the count is 30 or higher, award the badge.
if qs.count() >= 30:
badge.award_to(user)
return True
class PickleableZendeskError(Exception):
"""Zendesk error that captures information and can be pickled
This is like kitsune/search/tasks.py:IndexingTaskError and is
totally goofy.
"""
def __init__(self):
super(PickleableZendeskError, self).__init__(traceback.format_exc())
@task()
@timeit
def escalate_question(question_id):
"""Escalate a question to zendesk by submitting a ticket."""
from kitsune.questions.models import Question
question = Question.objects.get(id=question_id)
url = 'https://{domain}{url}'.format(
domain=Site.objects.get_current().domain,
url=question.get_absolute_url())
try:
submit_ticket(
email='[email protected]',
category='Escalated',
subject=u'[Escalated] {title}'.format(title=question.title),
body=u'{url}\n\n{content}'.format(url=url,
content=question.content),
tags=[t.slug for t in question.tags.all()])
except ZendeskError:
# This is unpickleable, so we need to unwrap it a bit
raise PickleableZendeskError()
| bsd-3-clause | -5,057,971,360,786,031,000 | 31.676768 | 76 | 0.646368 | false |
richard-willowit/odoo | addons/payment_stripe/models/payment.py | 3 | 10420 | # coding: utf-8
import logging
import requests
from odoo import api, fields, models, _
from odoo.addons.payment.models.payment_acquirer import ValidationError
from odoo.exceptions import UserError
from odoo.tools.safe_eval import safe_eval
_logger = logging.getLogger(__name__)
# Force the API version to avoid breaking in case of update on Stripe side
# cf https://stripe.com/docs/api#versioning
# changelog https://stripe.com/docs/upgrades#api-changelog
STRIPE_HEADERS = {'Stripe-Version': '2016-03-07'}
# The following currencies are integer only, see https://stripe.com/docs/currencies#zero-decimal
INT_CURRENCIES = [
u'BIF', u'XAF', u'XPF', u'CLP', u'KMF', u'DJF', u'GNF', u'JPY', u'MGA', u'PYG', u'RWF', u'KRW',
u'VUV', u'VND', u'XOF'
]
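# Illustrative note (added): Stripe expects amounts in the smallest currency
# unit, so the charge call below sends e.g. 10.00 EUR as 1000 (cents), while a
# zero-decimal currency such as JPY is sent as-is (1000 JPY stays 1000).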
class PaymentAcquirerStripe(models.Model):
_inherit = 'payment.acquirer'
provider = fields.Selection(selection_add=[('stripe', 'Stripe')])
stripe_secret_key = fields.Char(required_if_provider='stripe', groups='base.group_user')
stripe_publishable_key = fields.Char(required_if_provider='stripe', groups='base.group_user')
stripe_image_url = fields.Char(
"Checkout Image URL", groups='base.group_user',
help="A relative or absolute URL pointing to a square image of your "
"brand or product. As defined in your Stripe profile. See: "
"https://stripe.com/docs/checkout")
@api.multi
def stripe_form_generate_values(self, tx_values):
self.ensure_one()
stripe_tx_values = dict(tx_values)
temp_stripe_tx_values = {
'company': self.company_id.name,
'amount': tx_values.get('amount'),
'currency': tx_values.get('currency') and tx_values.get('currency').name or '',
'currency_id': tx_values.get('currency') and tx_values.get('currency').id or '',
'address_line1': tx_values['partner_address'],
'address_city': tx_values['partner_city'],
'address_country': tx_values['partner_country'] and tx_values['partner_country'].name or '',
'email': tx_values['partner_email'],
'address_zip': tx_values['partner_zip'],
'name': tx_values['partner_name'],
'phone': tx_values['partner_phone'],
}
temp_stripe_tx_values['returndata'] = stripe_tx_values.pop('return_url', '')
stripe_tx_values.update(temp_stripe_tx_values)
return stripe_tx_values
@api.model
def _get_stripe_api_url(self):
return 'api.stripe.com/v1'
@api.model
def stripe_s2s_form_process(self, data):
payment_token = self.env['payment.token'].sudo().create({
'cc_number': data['cc_number'],
'cc_holder_name': data['cc_holder_name'],
'cc_expiry': data['cc_expiry'],
'cc_brand': data['cc_brand'],
'cvc': data['cvc'],
'acquirer_id': int(data['acquirer_id']),
'partner_id': int(data['partner_id'])
})
return payment_token
@api.multi
def stripe_s2s_form_validate(self, data):
self.ensure_one()
# mandatory fields
for field_name in ["cc_number", "cvc", "cc_holder_name", "cc_expiry", "cc_brand"]:
if not data.get(field_name):
return False
return True
def _get_feature_support(self):
"""Get advanced feature support by provider.
        Each provider should add its technical name in the corresponding
key for the following features:
* fees: support payment fees computations
* authorize: support authorizing payment (separates
authorization and capture)
* tokenize: support saving payment data in a payment.tokenize
object
"""
res = super(PaymentAcquirerStripe, self)._get_feature_support()
res['tokenize'].append('stripe')
return res
class PaymentTransactionStripe(models.Model):
_inherit = 'payment.transaction'
def _create_stripe_charge(self, acquirer_ref=None, tokenid=None, email=None):
api_url_charge = 'https://%s/charges' % (self.acquirer_id._get_stripe_api_url())
charge_params = {
'amount': int(self.amount if self.currency_id.name in INT_CURRENCIES else self.amount*100),
'currency': self.currency_id.name,
'metadata[reference]': self.reference
}
if acquirer_ref:
charge_params['customer'] = acquirer_ref
if tokenid:
charge_params['card'] = str(tokenid)
if email:
charge_params['receipt_email'] = email
r = requests.post(api_url_charge,
auth=(self.acquirer_id.stripe_secret_key, ''),
params=charge_params,
headers=STRIPE_HEADERS)
return r.json()
@api.multi
def stripe_s2s_do_transaction(self, **kwargs):
self.ensure_one()
result = self._create_stripe_charge(acquirer_ref=self.payment_token_id.acquirer_ref)
return self._stripe_s2s_validate_tree(result)
def _create_stripe_refund(self):
api_url_refund = 'https://%s/refunds' % (self.acquirer_id._get_stripe_api_url())
refund_params = {
'charge': self.acquirer_reference,
            'amount': int(self.amount*100), # by default, Stripe refunds the full amount (we don't really need to specify the value)
'metadata[reference]': self.reference,
}
r = requests.post(api_url_refund,
auth=(self.acquirer_id.stripe_secret_key, ''),
params=refund_params,
headers=STRIPE_HEADERS)
return r.json()
@api.multi
def stripe_s2s_do_refund(self, **kwargs):
self.ensure_one()
self.state = 'refunding'
result = self._create_stripe_refund()
return self._stripe_s2s_validate_tree(result)
@api.model
def _stripe_form_get_tx_from_data(self, data):
""" Given a data dict coming from stripe, verify it and find the related
transaction record. """
reference = data.get('metadata', {}).get('reference')
if not reference:
error_msg = _(
'Stripe: invalid reply received from provider, missing reference. Additional message: %s'
% data.get('error', {}).get('message', '')
)
_logger.error(error_msg)
raise ValidationError(error_msg)
tx = self.search([('reference', '=', reference)])
if not tx:
error_msg = (_('Stripe: no order found for reference %s') % reference)
_logger.error(error_msg)
raise ValidationError(error_msg)
elif len(tx) > 1:
error_msg = (_('Stripe: %s orders found for reference %s') % (len(tx), reference))
_logger.error(error_msg)
raise ValidationError(error_msg)
return tx[0]
@api.multi
def _stripe_s2s_validate_tree(self, tree):
self.ensure_one()
if self.state not in ('draft', 'pending', 'refunding'):
_logger.info('Stripe: trying to validate an already validated tx (ref %s)', self.reference)
return True
status = tree.get('status')
if status == 'succeeded':
new_state = 'refunded' if self.state == 'refunding' else 'done'
self.write({
'state': new_state,
'date_validate': fields.datetime.now(),
'acquirer_reference': tree.get('id'),
})
self.execute_callback()
if self.payment_token_id:
self.payment_token_id.verified = True
return True
else:
error = tree['error']['message']
_logger.warn(error)
self.sudo().write({
'state': 'error',
'state_message': error,
'acquirer_reference': tree.get('id'),
'date_validate': fields.datetime.now(),
})
return False
@api.multi
def _stripe_form_get_invalid_parameters(self, data):
invalid_parameters = []
reference = data['metadata']['reference']
if reference != self.reference:
invalid_parameters.append(('Reference', reference, self.reference))
return invalid_parameters
@api.multi
def _stripe_form_validate(self, data):
return self._stripe_s2s_validate_tree(data)
class PaymentTokenStripe(models.Model):
_inherit = 'payment.token'
@api.model
def stripe_create(self, values):
res = {}
payment_acquirer = self.env['payment.acquirer'].browse(values.get('acquirer_id'))
url_token = 'https://%s/tokens' % payment_acquirer._get_stripe_api_url()
url_customer = 'https://%s/customers' % payment_acquirer._get_stripe_api_url()
if values.get('cc_number'):
payment_params = {
'card[number]': values['cc_number'].replace(' ', ''),
'card[exp_month]': str(values['cc_expiry'][:2]),
'card[exp_year]': str(values['cc_expiry'][-2:]),
'card[cvc]': values['cvc'],
}
r = requests.post(url_token,
auth=(payment_acquirer.stripe_secret_key, ''),
params=payment_params,
headers=STRIPE_HEADERS)
token = r.json()
if token.get('id'):
customer_params = {
'source': token['id']
}
r = requests.post(url_customer,
auth=(payment_acquirer.stripe_secret_key, ''),
params=customer_params,
headers=STRIPE_HEADERS)
customer = r.json()
res = {
'acquirer_ref': customer['id'],
'name': 'XXXXXXXXXXXX%s - %s' % (values['cc_number'][-4:], values['cc_holder_name'])
}
elif token.get('error'):
raise UserError(token['error']['message'])
# pop credit card info to info sent to create
for field_name in ["cc_number", "cvc", "cc_holder_name", "cc_expiry", "cc_brand"]:
values.pop(field_name, None)
return res
| gpl-3.0 | 1,138,100,766,478,482,700 | 39.23166 | 131 | 0.563148 | false |
szeged/servo | tests/wpt/web-platform-tests/tools/third_party/hyper/hyper/packages/hyperframe/flags.py | 41 | 1028 | # -*- coding: utf-8 -*-
"""
hyperframe/flags
~~~~~~~~~~~~~~~~
Defines basic Flag and Flags data structures.
"""
import collections
Flag = collections.namedtuple("Flag", ["name", "bit"])
class Flags(collections.MutableSet):
"""
A simple MutableSet implementation that will only accept known flags as elements.
Will behave like a regular set(), except that a ValueError will be thrown when .add()ing
unexpected flags.
"""
def __init__(self, defined_flags):
self._valid_flags = set(flag.name for flag in defined_flags)
self._flags = set()
def __contains__(self, x):
return self._flags.__contains__(x)
def __iter__(self):
return self._flags.__iter__()
def __len__(self):
return self._flags.__len__()
def discard(self, value):
return self._flags.discard(value)
def add(self, value):
if value not in self._valid_flags:
raise ValueError("Unexpected flag: {}".format(value))
return self._flags.add(value)
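# --- Illustrative example (added for clarity; not part of the original module).
# Minimal sketch of the guarded-set behaviour described in the Flags docstring.
# The flag names used here are assumptions for demonstration only.
def _example_flags_usage():
    defined = [Flag("END_STREAM", 0x01), Flag("END_HEADERS", 0x04)]
    flags = Flags(defined)
    flags.add("END_STREAM")        # accepted: declared in `defined`
    try:
        flags.add("NOT_A_FLAG")    # rejected: raises ValueError
    except ValueError:
        pass
    return "END_STREAM" in flags and len(flags) == 1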
| mpl-2.0 | -3,295,095,865,065,525,000 | 24.7 | 92 | 0.610895 | false |
derrod/livestreamer | src/livestreamer/plugins/azubutv.py | 15 | 6049 | import re
from io import BytesIO
from time import sleep
from livestreamer.exceptions import PluginError
from livestreamer.packages.flashmedia import AMFPacket, AMFMessage
from livestreamer.packages.flashmedia.types import AMF3ObjectBase
from livestreamer.plugin import Plugin
from livestreamer.plugin.api import http, validate
from livestreamer.stream import AkamaiHDStream
AMF_GATEWAY = "http://c.brightcove.com/services/messagebroker/amf"
AMF_MESSAGE_PREFIX = "af6b88c640c8d7b4cc75d22f7082ad95603bc627"
STREAM_NAMES = ["360p", "480p", "720p", "source"]
HTTP_HEADERS = {
"User-Agent": ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
"(KHTML, like Gecko) Chrome/36.0.1944.9 Safari/537.36")
}
_url_re = re.compile("http(s)?://(\w+\.)?azubu.tv/(?P<domain>\w+)")
CHANNEL_INFO_URL = "http://api.azubu.tv/public/channel/%s/player"
_viewerexp_schema = validate.Schema(
validate.attr({
"programmedContent": {
"videoPlayer": validate.attr({
"mediaDTO": validate.attr({
"renditions": {
int: validate.attr({
"encodingRate": int,
"defaultURL": validate.text
})
}
})
})
}
})
)
@AMF3ObjectBase.register("com.brightcove.experience.ViewerExperienceRequest")
class ViewerExperienceRequest(AMF3ObjectBase):
__members__ = ["contentOverrides",
"experienceId",
"URL",
"playerKey",
"deliveryType",
"TTLToken"]
def __init__(self, URL, contentOverrides, experienceId, playerKey, TTLToken=""):
self.URL = URL
self.deliveryType = float("nan")
self.contentOverrides = contentOverrides
self.experienceId = experienceId
self.playerKey = playerKey
self.TTLToken = TTLToken
@AMF3ObjectBase.register("com.brightcove.experience.ContentOverride")
class ContentOverride(AMF3ObjectBase):
__members__ = ["featuredRefId",
"contentRefIds",
"contentId",
"contentType",
"contentIds",
"featuredId",
"contentRefId",
"target"]
def __init__(self, contentId=float("nan"), contentRefId=None, contentType=0,
target="videoPlayer"):
self.contentType = contentType
self.contentId = contentId
self.target = target
self.contentIds = None
self.contentRefId = contentRefId
self.contentRefIds = None
self.contentType = 0
self.featuredId = float("nan")
self.featuredRefId = None
class AzubuTV(Plugin):
@classmethod
def can_handle_url(cls, url):
return _url_re.match(url)
@classmethod
def stream_weight(cls, stream):
if stream == "source":
weight = 1080
else:
weight, group = Plugin.stream_weight(stream)
return weight, "azubutv"
def _create_amf_request(self, key, video_player, player_id):
if video_player.startswith("ref:"):
content_override = ContentOverride(contentRefId=video_player[4:])
else:
content_override = ContentOverride(contentId=int(video_player))
viewer_exp_req = ViewerExperienceRequest(self.url,
[content_override],
int(player_id), key)
req = AMFPacket(version=3)
req.messages.append(AMFMessage(
"com.brightcove.experience.ExperienceRuntimeFacade.getDataForExperience",
"/1",
[AMF_MESSAGE_PREFIX, viewer_exp_req]
))
return req
def _send_amf_request(self, req, key):
headers = {
"content-type": "application/x-amf"
}
res = http.post(AMF_GATEWAY, data=bytes(req.serialize()),
headers=headers, params=dict(playerKey=key))
return AMFPacket.deserialize(BytesIO(res.content))
def _get_player_params(self, retries=5):
        match = _url_re.match(self.url)
        domain = match.group('domain')
try:
res = http.get(CHANNEL_INFO_URL % str(domain))
except PluginError as err:
# The server sometimes gives us 404 for no reason
if "404" in str(err) and retries:
sleep(1)
return self._get_player_params(retries - 1)
else:
raise
channel_info = http.json(res)
channel_info = channel_info['data']
        key = channel_info['player_key']
        is_live = channel_info['is_live']
stream_video = channel_info['stream_video']
if stream_video:
video_player = "ref:" + stream_video['reference_id']
else:
is_live = False
player_id = channel_info['player_id']
return key, video_player, player_id, is_live
def _parse_result(self, res):
res = _viewerexp_schema.validate(res)
player = res.programmedContent["videoPlayer"]
renditions = sorted(player.mediaDTO.renditions.values(),
key=lambda r: r.encodingRate or 100000000)
streams = {}
for stream_name, rendition in zip(STREAM_NAMES, renditions):
stream = AkamaiHDStream(self.session, rendition.defaultURL)
streams[stream_name] = stream
return streams
def _get_streams(self):
key, video_player, player_id, is_live = self._get_player_params()
if not is_live:
return
req = self._create_amf_request(key, video_player, player_id)
res = self._send_amf_request(req, key)
streams = {}
for message in res.messages:
if message.target_uri == "/1/onResult":
streams = self._parse_result(message.value)
return streams
__plugin__ = AzubuTV
| bsd-2-clause | 1,600,710,230,266,024,200 | 32.054645 | 85 | 0.574144 | false |
loseblue/vim-ycm-windows-64 | third_party/ycmd/third_party/waitress/waitress/tests/test_regression.py | 40 | 4059 | ##############################################################################
#
# Copyright (c) 2005 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Tests for waitress.channel maintenance logic
"""
import doctest
class FakeSocket: # pragma: no cover
data = ''
setblocking = lambda *_: None
close = lambda *_: None
def __init__(self, no):
self.no = no
def fileno(self):
return self.no
def getpeername(self):
return ('localhost', self.no)
def send(self, data):
self.data += data
return len(data)
def recv(self, data):
return 'data'
def zombies_test():
"""Regression test for HTTPChannel.maintenance method
Bug: This method checks for channels that have been "inactive" for a
configured time. The bug was that last_activity is set at creation time
but never updated during async channel activity (reads and writes), so
any channel older than the configured timeout will be closed when a new
channel is created, regardless of activity.
>>> import time
>>> import waitress.adjustments
>>> config = waitress.adjustments.Adjustments()
>>> from waitress.server import HTTPServer
>>> class TestServer(HTTPServer):
... def bind(self, (ip, port)):
... print "Listening on %s:%d" % (ip or '*', port)
>>> sb = TestServer('127.0.0.1', 80, start=False, verbose=True)
Listening on 127.0.0.1:80
First we confirm the correct behavior, where a channel with no activity
for the timeout duration gets closed.
>>> from waitress.channel import HTTPChannel
>>> socket = FakeSocket(42)
>>> channel = HTTPChannel(sb, socket, ('localhost', 42))
>>> channel.connected
True
>>> channel.last_activity -= int(config.channel_timeout) + 1
>>> channel.next_channel_cleanup[0] = channel.creation_time - int(
... config.cleanup_interval) - 1
>>> socket2 = FakeSocket(7)
>>> channel2 = HTTPChannel(sb, socket2, ('localhost', 7))
>>> channel.connected
False
Write Activity
--------------
Now we make sure that if there is activity the channel doesn't get closed
incorrectly.
>>> channel2.connected
True
>>> channel2.last_activity -= int(config.channel_timeout) + 1
>>> channel2.handle_write()
>>> channel2.next_channel_cleanup[0] = channel2.creation_time - int(
... config.cleanup_interval) - 1
>>> socket3 = FakeSocket(3)
>>> channel3 = HTTPChannel(sb, socket3, ('localhost', 3))
>>> channel2.connected
True
Read Activity
--------------
We should test to see that read activity will update a channel as well.
>>> channel3.connected
True
>>> channel3.last_activity -= int(config.channel_timeout) + 1
>>> import waitress.parser
>>> channel3.parser_class = (
... waitress.parser.HTTPRequestParser)
>>> channel3.handle_read()
>>> channel3.next_channel_cleanup[0] = channel3.creation_time - int(
... config.cleanup_interval) - 1
>>> socket4 = FakeSocket(4)
>>> channel4 = HTTPChannel(sb, socket4, ('localhost', 4))
>>> channel3.connected
True
Main loop window
----------------
There is also a corner case we'll do a shallow test for where a
channel can be closed waiting for the main loop.
>>> channel4.last_activity -= 1
>>> last_active = channel4.last_activity
>>> channel4.set_async()
>>> channel4.last_activity != last_active
True
"""
def test_suite():
return doctest.DocTestSuite()
| gpl-3.0 | 2,775,847,563,361,845,000 | 27.1875 | 78 | 0.618625 | false |
yitian134/chromium | tools/code_coverage/coverage_posix_unittest.py | 54 | 4782 | #!/usr/bin/env python
# Copyright (c) 2010 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Unit tests for coverage_posix.py.
Run a single test with a command such as:
./coverage_posix_unittest.py CoveragePosixTest.testFindTestsAsArgs
Waring that running a single test like that may interfere with the arg
parsing tests, since coverage_posix.py uses optparse.OptionParser()
which references globals.
"""
import coverage_posix as coverage
import os
import sys
import tempfile
import unittest
class CoveragePosixTest(unittest.TestCase):
def setUp(self):
self.parseArgs()
self.sample_test_names = ['zippy_tests', '../base/base.gyp:base_unittests']
def confirmSampleTestsArePresent(self, tests):
"""Confirm the tests in self.sample_test_names are in some form in 'tests'.
The Coverage object can munge them (e.g. add .exe to the end as needed.
Helper function for arg parsing, bundle file tests.
Args:
tests: the parsed tests from a Coverage object.
"""
for simple_test_name in ('zippy_tests', 'base_unittests'):
found = False
for item in tests:
if simple_test_name in item:
found = True
break
self.assertTrue(found)
for not_test_name in ('kablammo', 'not_a_unittest'):
found = False
for item in tests:
if not_test_name in item:
found = True
break
self.assertFalse(found)
def parseArgs(self):
"""Setup and process arg parsing."""
self.parser = coverage.CoverageOptionParser()
(self.options, self.args) = self.parser.parse_args()
self.options.directory = '.'
def testSanity(self):
"""Sanity check we're able to actually run the tests.
Simply creating a Coverage instance checks a few things (e.g. on
Windows that the coverage tools can be found)."""
c = coverage.Coverage(self.options, self.args)
def testRunBasicProcess(self):
"""Test a simple run of a subprocess."""
c = coverage.Coverage(self.options, self.args)
for code in range(2):
retcode = c.Run([sys.executable, '-u', '-c',
'import sys; sys.exit(%d)' % code],
ignore_error=True)
self.assertEqual(code, retcode)
def testRunSlowProcess(self):
"""Test program which prints slowly but doesn't hit our timeout.
Overall runtime is longer than the timeout but output lines
trickle in keeping things alive.
"""
self.options.timeout = 2.5
c = coverage.Coverage(self.options, self.args)
slowscript = ('import sys, time\n'
'for x in range(10):\n'
' time.sleep(0.5)\n'
' print "hi mom"\n'
'sys.exit(0)\n')
retcode = c.Run([sys.executable, '-u', '-c', slowscript])
self.assertEqual(0, retcode)
def testRunExcessivelySlowProcess(self):
"""Test program which DOES hit our timeout.
Initial lines should print but quickly it takes too long and
should be killed.
"""
self.options.timeout = 2.5
c = coverage.Coverage(self.options, self.args)
slowscript = ('import time\n'
'for x in range(1,10):\n'
' print "sleeping for %d" % x\n'
' time.sleep(x)\n')
self.assertRaises(Exception,
c.Run,
[sys.executable, '-u', '-c', slowscript])
def testFindTestsAsArgs(self):
"""Test finding of tests passed as args."""
self.args += '--'
self.args += self.sample_test_names
c = coverage.Coverage(self.options, self.args)
c.FindTests()
self.confirmSampleTestsArePresent(c.tests)
def testFindTestsFromBundleFile(self):
"""Test finding of tests from a bundlefile."""
(fd, filename) = tempfile.mkstemp()
f = os.fdopen(fd, 'w')
f.write(str(self.sample_test_names))
f.close()
self.options.bundles = filename
c = coverage.Coverage(self.options, self.args)
c.FindTests()
self.confirmSampleTestsArePresent(c.tests)
os.unlink(filename)
def testExclusionList(self):
"""Test the gtest_filter exclusion list."""
c = coverage.Coverage(self.options, self.args)
self.assertFalse(c.GtestFilter('doesnotexist_test'))
fake_exclusions = { sys.platform: { 'foobar':
('a','b'),
'doesnotexist_test':
('Evil.Crash','Naughty.Test') } }
self.assertFalse(c.GtestFilter('barfoo'))
filter = c.GtestFilter('doesnotexist_test', fake_exclusions)
self.assertEquals('--gtest_filter=-Evil.Crash:-Naughty.Test', filter)
if __name__ == '__main__':
unittest.main()
| bsd-3-clause | -862,730,220,981,577,100 | 32.676056 | 79 | 0.628607 | false |
lyw07/kolibri | kolibri/core/exams/models.py | 1 | 6100 | from django.db import models
from jsonfield import JSONField
from .permissions import UserCanReadExamAssignmentData
from .permissions import UserCanReadExamData
from kolibri.core.auth.constants import role_kinds
from kolibri.core.auth.models import AbstractFacilityDataModel
from kolibri.core.auth.models import Collection
from kolibri.core.auth.models import FacilityUser
from kolibri.core.auth.permissions.base import RoleBasedPermissions
from kolibri.core.notifications.models import LearnerProgressNotification
class Exam(AbstractFacilityDataModel):
"""
This class stores metadata about teacher-created quizzes to test current student knowledge.
"""
morango_model_name = "exam"
permissions = (
RoleBasedPermissions(
target_field="collection",
can_be_created_by=(role_kinds.ADMIN, role_kinds.COACH),
can_be_read_by=(role_kinds.ADMIN, role_kinds.COACH),
can_be_updated_by=(role_kinds.ADMIN, role_kinds.COACH),
can_be_deleted_by=(role_kinds.ADMIN, role_kinds.COACH),
)
| UserCanReadExamData()
)
title = models.CharField(max_length=200)
# Total number of questions in the exam. Equal to the length of the question_sources array.
question_count = models.IntegerField()
"""
The `question_sources` field contains different values depending on the 'data_model_version' field.
V2:
Similar to V1, but with a `counter_in_exercise` field
[
{
"exercise_id": <exercise_pk>,
"question_id": <item_id_within_exercise>,
"title": <exercise_title>,
"counter_in_exercise": <unique_count_for_question>
},
...
]
V1:
JSON array describing the questions in this exam and the exercises they come from:
[
{
"exercise_id": <exercise_pk>,
"question_id": <item_id_within_exercise>,
"title": <exercise_title>,
},
...
]
V0:
JSON array describing exercise nodes this exam draws questions from,
how many from each, and the node titles at the time of exam creation:
[
{
"exercise_id": <exercise_pk>,
"number_of_questions": 6,
"title": <exercise_title>
},
...
]
"""
question_sources = JSONField(default=[], blank=True)
"""
This field is interpretted differently depending on the 'data_model_version' field.
V1:
Used to help select new questions from exercises at quiz creation time
V0:
Used to decide which questions are in an exam at runtime.
See convertExamQuestionSourcesV0V2 in exams/utils.js for details.
"""
seed = models.IntegerField(default=1)
# When True, learners see questions in the order they appear in 'question_sources'.
# When False, each learner sees questions in a random (but consistent) order seeded
# by their user's UUID.
learners_see_fixed_order = models.BooleanField(default=False)
# Is this exam currently active and visible to students to whom it is assigned?
active = models.BooleanField(default=False)
# Exams are scoped to a particular class (usually) as they are associated with a Coach
# who creates them in the context of their class, this stores that relationship but does
# not assign exam itself to the class - for that see the ExamAssignment model.
collection = models.ForeignKey(
Collection, related_name="exams", blank=False, null=False
)
creator = models.ForeignKey(
FacilityUser, related_name="exams", blank=False, null=False
)
archive = models.BooleanField(default=False)
def delete(self, using=None, keep_parents=False):
"""
        We delete all notification objects whose quiz is this exam's id.
"""
LearnerProgressNotification.objects.filter(quiz_id=self.id).delete()
super(Exam, self).delete(using, keep_parents)
"""
As we evolve this model in ways that migrations can't handle, certain fields may
become deprecated, and other fields may need to be interpretted differently. This
may happen when multiple versions of the model need to coexist in the same database.
The 'data_model_version' field is used to keep track of the version of the model.
Certain fields that are only relevant for older model versions get prefixed
with their version numbers.
"""
data_model_version = models.SmallIntegerField(default=2)
def infer_dataset(self, *args, **kwargs):
return self.creator.dataset_id
def calculate_partition(self):
return self.dataset_id
def __str__(self):
return self.title
class ExamAssignment(AbstractFacilityDataModel):
"""
This class acts as an intermediary to handle assignment of an exam to particular collections
classes, groups, etc.
"""
morango_model_name = "examassignment"
permissions = (
RoleBasedPermissions(
target_field="collection",
can_be_created_by=(role_kinds.ADMIN, role_kinds.COACH),
can_be_read_by=(role_kinds.ADMIN, role_kinds.COACH),
can_be_updated_by=(),
can_be_deleted_by=(role_kinds.ADMIN, role_kinds.COACH),
)
| UserCanReadExamAssignmentData()
)
exam = models.ForeignKey(Exam, related_name="assignments", blank=False, null=False)
collection = models.ForeignKey(
Collection, related_name="assigned_exams", blank=False, null=False
)
assigned_by = models.ForeignKey(
FacilityUser, related_name="assigned_exams", blank=False, null=False
)
def infer_dataset(self, *args, **kwargs):
return self.assigned_by.dataset_id
def calculate_source_id(self):
return "{exam_id}:{collection_id}".format(
exam_id=self.exam_id, collection_id=self.collection_id
)
def calculate_partition(self):
return self.dataset_id
| mit | -7,012,715,160,879,537,000 | 34.672515 | 103 | 0.654262 | false |
andip71/boeffla-kernel-samsung-n8000 | tools/perf/scripts/python/Perf-Trace-Util/lib/Perf/Trace/Util.py | 12527 | 1935 | # Util.py - Python extension for perf script, miscellaneous utility code
#
# Copyright (C) 2010 by Tom Zanussi <[email protected]>
#
# This software may be distributed under the terms of the GNU General
# Public License ("GPL") version 2 as published by the Free Software
# Foundation.
import errno, os
FUTEX_WAIT = 0
FUTEX_WAKE = 1
FUTEX_PRIVATE_FLAG = 128
FUTEX_CLOCK_REALTIME = 256
FUTEX_CMD_MASK = ~(FUTEX_PRIVATE_FLAG | FUTEX_CLOCK_REALTIME)
NSECS_PER_SEC = 1000000000
def avg(total, n):
return total / n
def nsecs(secs, nsecs):
return secs * NSECS_PER_SEC + nsecs
def nsecs_secs(nsecs):
return nsecs / NSECS_PER_SEC
def nsecs_nsecs(nsecs):
return nsecs % NSECS_PER_SEC
def nsecs_str(nsecs):
    str = "%5u.%09u" % (nsecs_secs(nsecs), nsecs_nsecs(nsecs))
    return str
def add_stats(dict, key, value):
if not dict.has_key(key):
dict[key] = (value, value, value, 1)
else:
min, max, avg, count = dict[key]
if value < min:
min = value
if value > max:
max = value
avg = (avg + value) / 2
dict[key] = (min, max, avg, count + 1)
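# Illustrative example (added): add_stats() keeps a (min, max, running avg,
# count) tuple per key; feeding 10 then 30 for a key yields (10, 30, 20, 2).
def _example_add_stats():
    stats = {}
    add_stats(stats, "sys_read", 10)
    add_stats(stats, "sys_read", 30)
    return stats["sys_read"]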
def clear_term():
print("\x1b[H\x1b[2J")
audit_package_warned = False
try:
import audit
machine_to_id = {
'x86_64': audit.MACH_86_64,
'alpha' : audit.MACH_ALPHA,
'ia64' : audit.MACH_IA64,
'ppc' : audit.MACH_PPC,
'ppc64' : audit.MACH_PPC64,
's390' : audit.MACH_S390,
's390x' : audit.MACH_S390X,
'i386' : audit.MACH_X86,
'i586' : audit.MACH_X86,
'i686' : audit.MACH_X86,
}
try:
machine_to_id['armeb'] = audit.MACH_ARMEB
except:
pass
machine_id = machine_to_id[os.uname()[4]]
except:
if not audit_package_warned:
audit_package_warned = True
print "Install the audit-libs-python package to get syscall names"
def syscall_name(id):
try:
return audit.audit_syscall_to_name(id, machine_id)
except:
return str(id)
def strerror(nr):
try:
return errno.errorcode[abs(nr)]
except:
return "Unknown %d errno" % nr
| gpl-2.0 | 6,205,065,191,760,019,000 | 21.5 | 72 | 0.667183 | false |
Serag8/Bachelor | google_appengine/lib/django-1.4/django/utils/timezone.py | 81 | 8011 | """Timezone helper functions.
This module uses pytz when it's available and uses fallbacks when it isn't.
"""
from datetime import datetime, timedelta, tzinfo
from threading import local
import time as _time
try:
import pytz
except ImportError:
pytz = None
from django.conf import settings
__all__ = [
'utc', 'get_default_timezone', 'get_current_timezone',
'activate', 'deactivate', 'override',
'is_naive', 'is_aware', 'make_aware', 'make_naive',
]
# UTC and local time zones
ZERO = timedelta(0)
class UTC(tzinfo):
"""
UTC implementation taken from Python's docs.
Used only when pytz isn't available.
"""
def __repr__(self):
return "<UTC>"
def utcoffset(self, dt):
return ZERO
def tzname(self, dt):
return "UTC"
def dst(self, dt):
return ZERO
class LocalTimezone(tzinfo):
"""
Local time implementation taken from Python's docs.
Used only when pytz isn't available, and most likely inaccurate. If you're
having trouble with this class, don't waste your time, just install pytz.
"""
def __init__(self):
# This code is moved in __init__ to execute it as late as possible
# See get_default_timezone().
self.STDOFFSET = timedelta(seconds=-_time.timezone)
if _time.daylight:
self.DSTOFFSET = timedelta(seconds=-_time.altzone)
else:
self.DSTOFFSET = self.STDOFFSET
self.DSTDIFF = self.DSTOFFSET - self.STDOFFSET
tzinfo.__init__(self)
def __repr__(self):
return "<LocalTimezone>"
def utcoffset(self, dt):
if self._isdst(dt):
return self.DSTOFFSET
else:
return self.STDOFFSET
def dst(self, dt):
if self._isdst(dt):
return self.DSTDIFF
else:
return ZERO
def tzname(self, dt):
return _time.tzname[self._isdst(dt)]
def _isdst(self, dt):
tt = (dt.year, dt.month, dt.day,
dt.hour, dt.minute, dt.second,
dt.weekday(), 0, 0)
stamp = _time.mktime(tt)
tt = _time.localtime(stamp)
return tt.tm_isdst > 0
utc = pytz.utc if pytz else UTC()
"""UTC time zone as a tzinfo instance."""
# In order to avoid accessing the settings at compile time,
# wrap the expression in a function and cache the result.
# If you change settings.TIME_ZONE in tests, reset _localtime to None.
_localtime = None
def get_default_timezone():
"""
Returns the default time zone as a tzinfo instance.
This is the time zone defined by settings.TIME_ZONE.
See also :func:`get_current_timezone`.
"""
global _localtime
if _localtime is None:
if isinstance(settings.TIME_ZONE, basestring) and pytz is not None:
_localtime = pytz.timezone(settings.TIME_ZONE)
else:
_localtime = LocalTimezone()
return _localtime
# This function exists for consistency with get_current_timezone_name
def get_default_timezone_name():
"""
Returns the name of the default time zone.
"""
return _get_timezone_name(get_default_timezone())
_active = local()
def get_current_timezone():
"""
Returns the currently active time zone as a tzinfo instance.
"""
return getattr(_active, "value", get_default_timezone())
def get_current_timezone_name():
"""
Returns the name of the currently active time zone.
"""
return _get_timezone_name(get_current_timezone())
def _get_timezone_name(timezone):
"""
Returns the name of ``timezone``.
"""
try:
# for pytz timezones
return timezone.zone
except AttributeError:
# for regular tzinfo objects
local_now = datetime.now(timezone)
return timezone.tzname(local_now)
# Timezone selection functions.
# These functions don't change os.environ['TZ'] and call time.tzset()
# because it isn't thread safe.
def activate(timezone):
"""
Sets the time zone for the current thread.
The ``timezone`` argument must be an instance of a tzinfo subclass or a
time zone name. If it is a time zone name, pytz is required.
"""
if isinstance(timezone, tzinfo):
_active.value = timezone
elif isinstance(timezone, basestring) and pytz is not None:
_active.value = pytz.timezone(timezone)
else:
raise ValueError("Invalid timezone: %r" % timezone)
def deactivate():
"""
Unsets the time zone for the current thread.
Django will then use the time zone defined by settings.TIME_ZONE.
"""
if hasattr(_active, "value"):
del _active.value
class override(object):
"""
Temporarily set the time zone for the current thread.
This is a context manager that uses ``~django.utils.timezone.activate()``
to set the timezone on entry, and restores the previously active timezone
on exit.
The ``timezone`` argument must be an instance of a ``tzinfo`` subclass, a
time zone name, or ``None``. If is it a time zone name, pytz is required.
If it is ``None``, Django enables the default time zone.
"""
def __init__(self, timezone):
self.timezone = timezone
self.old_timezone = getattr(_active, 'value', None)
def __enter__(self):
if self.timezone is None:
deactivate()
else:
activate(self.timezone)
def __exit__(self, exc_type, exc_value, traceback):
if self.old_timezone is not None:
_active.value = self.old_timezone
else:
del _active.value
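# Illustrative usage of override() from calling code (not part of the original
# module); passing a time zone name requires pytz:
#
#   from django.utils import timezone
#
#   with timezone.override("Europe/Paris"):
#       tz = timezone.get_current_timezone()  # Europe/Paris inside the block
#   # the previously active time zone is restored here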
# Templates
def localtime(value, use_tz=None):
"""
Checks if value is a datetime and converts it to local time if necessary.
If use_tz is provided and is not None, that will force the value to
be converted (or not), overriding the value of settings.USE_TZ.
This function is designed for use by the template engine.
"""
if (isinstance(value, datetime)
and (settings.USE_TZ if use_tz is None else use_tz)
and not is_naive(value)
and getattr(value, 'convert_to_local_time', True)):
timezone = get_current_timezone()
value = value.astimezone(timezone)
if hasattr(timezone, 'normalize'):
# available for pytz time zones
value = timezone.normalize(value)
return value
# Utilities
def now():
"""
Returns an aware or naive datetime.datetime, depending on settings.USE_TZ.
"""
if settings.USE_TZ:
# timeit shows that datetime.now(tz=utc) is 24% slower
return datetime.utcnow().replace(tzinfo=utc)
else:
return datetime.now()
# By design, these four functions don't perform any checks on their arguments.
# The caller should ensure that they don't receive an invalid value like None.
def is_aware(value):
"""
Determines if a given datetime.datetime is aware.
The logic is described in Python's docs:
http://docs.python.org/library/datetime.html#datetime.tzinfo
"""
return value.tzinfo is not None and value.tzinfo.utcoffset(value) is not None
def is_naive(value):
"""
Determines if a given datetime.datetime is naive.
The logic is described in Python's docs:
http://docs.python.org/library/datetime.html#datetime.tzinfo
"""
return value.tzinfo is None or value.tzinfo.utcoffset(value) is None
def make_aware(value, timezone):
"""
Makes a naive datetime.datetime in a given time zone aware.
"""
if hasattr(timezone, 'localize'):
# available for pytz time zones
return timezone.localize(value, is_dst=None)
else:
# may be wrong around DST changes
return value.replace(tzinfo=timezone)
def make_naive(value, timezone):
"""
Makes an aware datetime.datetime naive in a given time zone.
"""
value = value.astimezone(timezone)
if hasattr(timezone, 'normalize'):
# available for pytz time zones
value = timezone.normalize(value)
return value.replace(tzinfo=None)
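# Illustrative round trip with the helpers above (not part of the original
# module); the named time zone assumes pytz is installed:
#
#   tz = pytz.timezone("Europe/Paris") if pytz else get_default_timezone()
#   aware = make_aware(datetime(2012, 3, 4, 10, 30), tz)
#   naive = make_naive(aware, utc)     # same instant, naive, expressed in UTC
#   is_aware(aware), is_naive(naive)   # -> (True, True)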
| mit | 9,090,184,471,638,654,000 | 27.407801 | 81 | 0.642616 | false |
zhuyongyong/crosswalk-test-suite | cordova/cordova-sampleapp-android-tests/sampleapp/privateNotes_close.py | 15 | 2239 | #!/usr/bin/env python
#
# Copyright (c) 2015 Intel Corporation.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of works must retain the original copyright notice, this
# list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the original copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of Intel Corporation nor the names of its contributors
# may be used to endorse or promote products derived from this work without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY INTEL CORPORATION "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL INTEL CORPORATION BE LIABLE FOR ANY DIRECT,
# INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
# EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# Authors:
# Zhu, Yongyong <[email protected]>
import unittest
import os
import commands
import comm
import time
class TestPrivateNotesAppBuild(unittest.TestCase):
def test_close(self):
comm.setUp()
app_name = "privateNotes"
pkg_name = "com.example." + app_name
if not comm.check_app_installed(pkg_name, self):
comm.app_install(app_name, pkg_name, self)
if not comm.check_app_launched(pkg_name, self):
print "Close app ---------------->%s App haven't launched, need to launch it!" % app_name
comm.app_launch(app_name, pkg_name, self)
time.sleep(1)
comm.app_stop(pkg_name, self)
if __name__ == '__main__':
unittest.main()
| bsd-3-clause | -2,951,104,607,400,155,600 | 42.057692 | 101 | 0.724431 | false |
prarthitm/edxplatform | common/lib/capa/capa/tests/test_correctmap.py | 107 | 7116 | """
Tests to verify that CorrectMap behaves correctly
"""
import unittest
from capa.correctmap import CorrectMap
import datetime
class CorrectMapTest(unittest.TestCase):
"""
Tests to verify that CorrectMap behaves correctly
"""
def setUp(self):
super(CorrectMapTest, self).setUp()
self.cmap = CorrectMap()
def test_set_input_properties(self):
# Set the correctmap properties for three inputs
self.cmap.set(
answer_id='1_2_1',
correctness='correct',
npoints=5,
msg='Test message',
hint='Test hint',
hintmode='always',
queuestate={
'key': 'secretstring',
'time': '20130228100026'
}
)
self.cmap.set(
answer_id='2_2_1',
correctness='incorrect',
npoints=None,
msg=None,
hint=None,
hintmode=None,
queuestate=None
)
self.cmap.set(
answer_id='3_2_1',
correctness='partially-correct',
npoints=3,
msg=None,
hint=None,
hintmode=None,
queuestate=None
)
# Assert that each input has the expected properties
self.assertTrue(self.cmap.is_correct('1_2_1'))
self.assertFalse(self.cmap.is_correct('2_2_1'))
self.assertTrue(self.cmap.is_correct('3_2_1'))
self.assertTrue(self.cmap.is_partially_correct('3_2_1'))
self.assertFalse(self.cmap.is_partially_correct('2_2_1'))
# Intentionally testing an item that's not in cmap.
self.assertFalse(self.cmap.is_partially_correct('9_2_1'))
self.assertEqual(self.cmap.get_correctness('1_2_1'), 'correct')
self.assertEqual(self.cmap.get_correctness('2_2_1'), 'incorrect')
self.assertEqual(self.cmap.get_correctness('3_2_1'), 'partially-correct')
self.assertEqual(self.cmap.get_npoints('1_2_1'), 5)
self.assertEqual(self.cmap.get_npoints('2_2_1'), 0)
self.assertEqual(self.cmap.get_npoints('3_2_1'), 3)
self.assertEqual(self.cmap.get_msg('1_2_1'), 'Test message')
self.assertEqual(self.cmap.get_msg('2_2_1'), None)
self.assertEqual(self.cmap.get_hint('1_2_1'), 'Test hint')
self.assertEqual(self.cmap.get_hint('2_2_1'), None)
self.assertEqual(self.cmap.get_hintmode('1_2_1'), 'always')
self.assertEqual(self.cmap.get_hintmode('2_2_1'), None)
self.assertTrue(self.cmap.is_queued('1_2_1'))
self.assertFalse(self.cmap.is_queued('2_2_1'))
self.assertEqual(self.cmap.get_queuetime_str('1_2_1'), '20130228100026')
self.assertEqual(self.cmap.get_queuetime_str('2_2_1'), None)
self.assertTrue(self.cmap.is_right_queuekey('1_2_1', 'secretstring'))
self.assertFalse(self.cmap.is_right_queuekey('1_2_1', 'invalidstr'))
self.assertFalse(self.cmap.is_right_queuekey('1_2_1', ''))
self.assertFalse(self.cmap.is_right_queuekey('1_2_1', None))
self.assertFalse(self.cmap.is_right_queuekey('2_2_1', 'secretstring'))
self.assertFalse(self.cmap.is_right_queuekey('2_2_1', 'invalidstr'))
self.assertFalse(self.cmap.is_right_queuekey('2_2_1', ''))
self.assertFalse(self.cmap.is_right_queuekey('2_2_1', None))
def test_get_npoints(self):
        # Set the correctmap properties for 7 inputs
        # 1) correct, 5.3 points
        # 2) correct, None points
        # 3) incorrect, 5 points
        # 4) incorrect, None points
        # 5) correct, 0 points
        # 6) partially correct, 2.5 points
        # 7) partially correct, None points
self.cmap.set(
answer_id='1_2_1',
correctness='correct',
npoints=5.3
)
self.cmap.set(
answer_id='2_2_1',
correctness='correct',
npoints=None
)
self.cmap.set(
answer_id='3_2_1',
correctness='incorrect',
npoints=5
)
self.cmap.set(
answer_id='4_2_1',
correctness='incorrect',
npoints=None
)
self.cmap.set(
answer_id='5_2_1',
correctness='correct',
npoints=0
)
self.cmap.set(
answer_id='6_2_1',
correctness='partially-correct',
npoints=2.5
)
self.cmap.set(
answer_id='7_2_1',
correctness='partially-correct',
npoints=None
)
# Assert that we get the expected points
# If points assigned --> npoints
# If no points assigned and correct --> 1 point
# If no points assigned and partially correct --> 1 point
# If no points assigned and incorrect --> 0 points
self.assertEqual(self.cmap.get_npoints('1_2_1'), 5.3)
self.assertEqual(self.cmap.get_npoints('2_2_1'), 1)
self.assertEqual(self.cmap.get_npoints('3_2_1'), 5)
self.assertEqual(self.cmap.get_npoints('4_2_1'), 0)
self.assertEqual(self.cmap.get_npoints('5_2_1'), 0)
self.assertEqual(self.cmap.get_npoints('6_2_1'), 2.5)
self.assertEqual(self.cmap.get_npoints('7_2_1'), 1)
def test_set_overall_message(self):
# Default is an empty string string
self.assertEqual(self.cmap.get_overall_message(), "")
# Set a message that applies to the whole question
self.cmap.set_overall_message("Test message")
# Retrieve the message
self.assertEqual(self.cmap.get_overall_message(), "Test message")
# Setting the message to None --> empty string
self.cmap.set_overall_message(None)
self.assertEqual(self.cmap.get_overall_message(), "")
def test_update_from_correctmap(self):
# Initialize a CorrectMap with some properties
self.cmap.set(
answer_id='1_2_1',
correctness='correct',
npoints=5,
msg='Test message',
hint='Test hint',
hintmode='always',
queuestate={
'key': 'secretstring',
'time': '20130228100026'
}
)
self.cmap.set_overall_message("Test message")
# Create a second cmap, then update it to have the same properties
# as the first cmap
other_cmap = CorrectMap()
other_cmap.update(self.cmap)
# Assert that it has all the same properties
self.assertEqual(
other_cmap.get_overall_message(),
self.cmap.get_overall_message()
)
self.assertEqual(
other_cmap.get_dict(),
self.cmap.get_dict()
)
def test_update_from_invalid(self):
# Should get an exception if we try to update() a CorrectMap
# with a non-CorrectMap value
invalid_list = [None, "string", 5, datetime.datetime.today()]
for invalid in invalid_list:
with self.assertRaises(Exception):
self.cmap.update(invalid)
| agpl-3.0 | -2,497,503,349,873,010,700 | 31.792627 | 81 | 0.568016 | false |
eul-721/The-Perfect-Pokemon-Team-Balancer | libs/env/Lib/site-packages/whoosh/lang/snowball/dutch.py | 96 | 6194 | from .bases import _StandardStemmer
from whoosh.compat import u
class DutchStemmer(_StandardStemmer):
"""
The Dutch Snowball stemmer.
:cvar __vowels: The Dutch vowels.
:type __vowels: unicode
:cvar __step1_suffixes: Suffixes to be deleted in step 1 of the algorithm.
:type __step1_suffixes: tuple
:cvar __step3b_suffixes: Suffixes to be deleted in step 3b of the algorithm.
:type __step3b_suffixes: tuple
:note: A detailed description of the Dutch
stemming algorithm can be found under
http://snowball.tartarus.org/algorithms/dutch/stemmer.html
"""
__vowels = u("aeiouy\xE8")
__step1_suffixes = ("heden", "ene", "en", "se", "s")
__step3b_suffixes = ("baar", "lijk", "bar", "end", "ing", "ig")
def stem(self, word):
"""
Stem a Dutch word and return the stemmed form.
:param word: The word that is stemmed.
:type word: str or unicode
:return: The stemmed form.
:rtype: unicode
"""
word = word.lower()
step2_success = False
# Vowel accents are removed.
word = (word.replace(u("\xE4"), "a").replace(u("\xE1"), "a")
.replace(u("\xEB"), "e").replace(u("\xE9"), "e")
.replace(u("\xED"), "i").replace(u("\xEF"), "i")
.replace(u("\xF6"), "o").replace(u("\xF3"), "o")
.replace(u("\xFC"), "u").replace(u("\xFA"), "u"))
# An initial 'y', a 'y' after a vowel,
# and an 'i' between self.__vowels is put into upper case.
# As from now these are treated as consonants.
if word.startswith("y"):
word = "".join(("Y", word[1:]))
for i in range(1, len(word)):
if word[i - 1] in self.__vowels and word[i] == "y":
word = "".join((word[:i], "Y", word[i + 1:]))
for i in range(1, len(word) - 1):
if (word[i - 1] in self.__vowels and word[i] == "i" and
word[i + 1] in self.__vowels):
word = "".join((word[:i], "I", word[i + 1:]))
r1, r2 = self._r1r2_standard(word, self.__vowels)
# R1 is adjusted so that the region before it
# contains at least 3 letters.
for i in range(1, len(word)):
if word[i] not in self.__vowels and word[i - 1] in self.__vowels:
if len(word[:i + 1]) < 3 and len(word[:i + 1]) > 0:
r1 = word[3:]
elif len(word[:i + 1]) == 0:
return word
break
# STEP 1
for suffix in self.__step1_suffixes:
if r1.endswith(suffix):
if suffix == "heden":
word = "".join((word[:-5], "heid"))
r1 = "".join((r1[:-5], "heid"))
if r2.endswith("heden"):
r2 = "".join((r2[:-5], "heid"))
elif (suffix in ("ene", "en") and
not word.endswith("heden") and
word[-len(suffix) - 1] not in self.__vowels and
word[-len(suffix) - 3:-len(suffix)] != "gem"):
word = word[:-len(suffix)]
r1 = r1[:-len(suffix)]
r2 = r2[:-len(suffix)]
if word.endswith(("kk", "dd", "tt")):
word = word[:-1]
r1 = r1[:-1]
r2 = r2[:-1]
elif (suffix in ("se", "s") and
word[-len(suffix) - 1] not in self.__vowels and
word[-len(suffix) - 1] != "j"):
word = word[:-len(suffix)]
r1 = r1[:-len(suffix)]
r2 = r2[:-len(suffix)]
break
# STEP 2
if r1.endswith("e") and word[-2] not in self.__vowels:
step2_success = True
word = word[:-1]
r1 = r1[:-1]
r2 = r2[:-1]
if word.endswith(("kk", "dd", "tt")):
word = word[:-1]
r1 = r1[:-1]
r2 = r2[:-1]
# STEP 3a
if r2.endswith("heid") and word[-5] != "c":
word = word[:-4]
r1 = r1[:-4]
r2 = r2[:-4]
if (r1.endswith("en") and word[-3] not in self.__vowels and
word[-5:-2] != "gem"):
word = word[:-2]
r1 = r1[:-2]
r2 = r2[:-2]
if word.endswith(("kk", "dd", "tt")):
word = word[:-1]
r1 = r1[:-1]
r2 = r2[:-1]
# STEP 3b: Derivational suffixes
for suffix in self.__step3b_suffixes:
if r2.endswith(suffix):
if suffix in ("end", "ing"):
word = word[:-3]
r2 = r2[:-3]
if r2.endswith("ig") and word[-3] != "e":
word = word[:-2]
else:
if word.endswith(("kk", "dd", "tt")):
word = word[:-1]
elif suffix == "ig" and word[-3] != "e":
word = word[:-2]
elif suffix == "lijk":
word = word[:-4]
r1 = r1[:-4]
if r1.endswith("e") and word[-2] not in self.__vowels:
word = word[:-1]
if word.endswith(("kk", "dd", "tt")):
word = word[:-1]
elif suffix == "baar":
word = word[:-4]
elif suffix == "bar" and step2_success:
word = word[:-3]
break
# STEP 4: Undouble vowel
if len(word) >= 4:
if word[-1] not in self.__vowels and word[-1] != "I":
if word[-3:-1] in ("aa", "ee", "oo", "uu"):
if word[-4] not in self.__vowels:
word = "".join((word[:-3], word[-3], word[-1]))
# All occurrences of 'I' and 'Y' are put back into lower case.
word = word.replace("I", "i").replace("Y", "y")
return word
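# Illustrative usage (not part of the original module):
#
#   stemmer = DutchStemmer()
#   stem = stemmer.stem(u"lichamelijkheden")  # returns the stemmed form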
| gpl-2.0 | 3,293,219,560,937,721,300 | 34.803468 | 80 | 0.415886 | false |
tejonbiker/Adafruit_Python_MPR121 | Adafruit_MPR121/MPR121.py | 11 | 8262 | # Copyright (c) 2014 Adafruit Industries
# Author: Tony DiCola
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import time
# Register addresses.
MPR121_I2CADDR_DEFAULT = 0x5A
MPR121_TOUCHSTATUS_L = 0x00
MPR121_TOUCHSTATUS_H = 0x01
MPR121_FILTDATA_0L = 0x04
MPR121_FILTDATA_0H = 0x05
MPR121_BASELINE_0 = 0x1E
MPR121_MHDR = 0x2B
MPR121_NHDR = 0x2C
MPR121_NCLR = 0x2D
MPR121_FDLR = 0x2E
MPR121_MHDF = 0x2F
MPR121_NHDF = 0x30
MPR121_NCLF = 0x31
MPR121_FDLF = 0x32
MPR121_NHDT = 0x33
MPR121_NCLT = 0x34
MPR121_FDLT = 0x35
MPR121_TOUCHTH_0 = 0x41
MPR121_RELEASETH_0 = 0x42
MPR121_DEBOUNCE = 0x5B
MPR121_CONFIG1 = 0x5C
MPR121_CONFIG2 = 0x5D
MPR121_CHARGECURR_0 = 0x5F
MPR121_CHARGETIME_1 = 0x6C
MPR121_ECR = 0x5E
MPR121_AUTOCONFIG0 = 0x7B
MPR121_AUTOCONFIG1 = 0x7C
MPR121_UPLIMIT = 0x7D
MPR121_LOWLIMIT = 0x7E
MPR121_TARGETLIMIT = 0x7F
MPR121_GPIODIR = 0x76
MPR121_GPIOEN = 0x77
MPR121_GPIOSET = 0x78
MPR121_GPIOCLR = 0x79
MPR121_GPIOTOGGLE = 0x7A
MPR121_SOFTRESET = 0x80
MAX_I2C_RETRIES = 5
class MPR121(object):
"""Representation of a MPR121 capacitive touch sensor."""
def __init__(self):
"""Create an instance of the MPR121 device."""
# Nothing to do here since there is very little state in the class.
pass
def begin(self, address=MPR121_I2CADDR_DEFAULT, i2c=None, **kwargs):
"""Initialize communication with the MPR121.
Can specify a custom I2C address for the device using the address
parameter (defaults to 0x5A). Optional i2c parameter allows specifying a
custom I2C bus source (defaults to platform's I2C bus).
Returns True if communication with the MPR121 was established, otherwise
returns False.
"""
# Assume we're using platform's default I2C bus if none is specified.
if i2c is None:
import Adafruit_GPIO.I2C as I2C
i2c = I2C
# Require repeated start conditions for I2C register reads. Unfortunately
# the MPR121 is very sensitive and requires repeated starts to read all
# the registers.
I2C.require_repeated_start()
# Save a reference to the I2C device instance for later communication.
self._device = i2c.get_i2c_device(address, **kwargs)
return self._reset()
def _reset(self):
# Soft reset of device.
self._i2c_retry(self._device.write8, MPR121_SOFTRESET, 0x63)
time.sleep(0.001) # This 1ms delay here probably isn't necessary but can't hurt.
# Set electrode configuration to default values.
self._i2c_retry(self._device.write8, MPR121_ECR, 0x00)
# Check CDT, SFI, ESI configuration is at default values.
c = self._i2c_retry(self._device.readU8, MPR121_CONFIG2)
if c != 0x24:
return False
# Set threshold for touch and release to default values.
self.set_thresholds(12, 6)
# Configure baseline filtering control registers.
self._i2c_retry(self._device.write8, MPR121_MHDR, 0x01)
self._i2c_retry(self._device.write8, MPR121_NHDR, 0x01)
self._i2c_retry(self._device.write8, MPR121_NCLR, 0x0E)
self._i2c_retry(self._device.write8, MPR121_FDLR, 0x00)
self._i2c_retry(self._device.write8, MPR121_MHDF, 0x01)
self._i2c_retry(self._device.write8, MPR121_NHDF, 0x05)
self._i2c_retry(self._device.write8, MPR121_NCLF, 0x01)
self._i2c_retry(self._device.write8, MPR121_FDLF, 0x00)
self._i2c_retry(self._device.write8, MPR121_NHDT, 0x00)
self._i2c_retry(self._device.write8, MPR121_NCLT, 0x00)
self._i2c_retry(self._device.write8, MPR121_FDLT, 0x00)
# Set other configuration registers.
self._i2c_retry(self._device.write8, MPR121_DEBOUNCE, 0)
self._i2c_retry(self._device.write8, MPR121_CONFIG1, 0x10) # default, 16uA charge current
self._i2c_retry(self._device.write8, MPR121_CONFIG2, 0x20) # 0.5uS encoding, 1ms period
# Enable all electrodes.
self._i2c_retry(self._device.write8, MPR121_ECR, 0x8F) # start with first 5 bits of baseline tracking
# All done, everything succeeded!
return True
def _i2c_retry(self, func, *params):
# Run specified I2C request and ignore IOError 110 (timeout) up to
# retries times. For some reason the Pi 2 hardware I2C appears to be
# flakey and randomly return timeout errors on I2C reads. This will
# catch those errors, reset the MPR121, and retry.
count = 0
while True:
try:
return func(*params)
except IOError as ex:
# Re-throw anything that isn't a timeout (110) error.
if ex.errno != 110:
raise ex
# Else there was a timeout, so reset the device and retry.
self._reset()
# Increase count and fail after maximum number of retries.
count += 1
if count >= MAX_I2C_RETRIES:
raise RuntimeError('Exceeded maximum number or retries attempting I2C communication!')
def set_thresholds(self, touch, release):
"""Set the touch and release threshold for all inputs to the provided
values. Both touch and release should be a value between 0 to 255
(inclusive).
"""
assert touch >= 0 and touch <= 255, 'touch must be between 0-255 (inclusive)'
assert release >= 0 and release <= 255, 'release must be between 0-255 (inclusive)'
# Set the touch and release register value for all the inputs.
for i in range(12):
self._i2c_retry(self._device.write8, MPR121_TOUCHTH_0 + 2*i, touch)
self._i2c_retry(self._device.write8, MPR121_RELEASETH_0 + 2*i, release)
def filtered_data(self, pin):
"""Return filtered data register value for the provided pin (0-11).
Useful for debugging.
"""
assert pin >= 0 and pin < 12, 'pin must be between 0-11 (inclusive)'
return self._i2c_retry(self._device.readU16LE, MPR121_FILTDATA_0L + pin*2)
def baseline_data(self, pin):
"""Return baseline data register value for the provided pin (0-11).
Useful for debugging.
"""
assert pin >= 0 and pin < 12, 'pin must be between 0-11 (inclusive)'
bl = self._i2c_retry(self._device.readU8, MPR121_BASELINE_0 + pin)
return bl << 2
def touched(self):
"""Return touch state of all pins as a 12-bit value where each bit
represents a pin, with a value of 1 being touched and 0 not being touched.
"""
t = self._i2c_retry(self._device.readU16LE, MPR121_TOUCHSTATUS_L)
return t & 0x0FFF
def is_touched(self, pin):
"""Return True if the specified pin is being touched, otherwise returns
False.
"""
assert pin >= 0 and pin < 12, 'pin must be between 0-11 (inclusive)'
t = self.touched()
return (t & (1 << pin)) > 0
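# Illustrative polling loop (not part of the original driver); assumes the
# platform's default I2C bus and the default 0x5A address:
#
#   cap = MPR121()
#   if not cap.begin():
#       raise RuntimeError('Failed to initialize MPR121, check wiring!')
#   while True:
#       touched = cap.touched()
#       for pin in range(12):
#           if touched & (1 << pin):
#               print('Input {0} is being touched.'.format(pin))
#       time.sleep(0.1)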
| mit | -6,006,464,784,987,713,000 | 42.946809 | 109 | 0.64137 | false |
linebp/pandas | doc/make.py | 8 | 12640 | #!/usr/bin/env python
"""
Python script for building documentation.
To build the docs you must have all optional dependencies for pandas
installed. See the installation instructions for a list of these.
Note (outdated): latex builds previously did not work because of table formats that
are not supported in the latex generation.
2014-01-30: Latex has some issues but 'latex_forced' works ok for 0.13.0-400 or so
Usage
-----
python make.py clean
python make.py html
"""
from __future__ import print_function
import io
import glob # noqa
import os
import shutil
import sys
from contextlib import contextmanager
import sphinx # noqa
import argparse
import jinja2 # noqa
os.environ['PYTHONPATH'] = '..'
SPHINX_BUILD = 'sphinxbuild'
def _process_user(user):
if user is None or user is False:
user = ''
else:
user = user + '@'
return user
def upload_dev(user=None):
'push a copy to the pydata dev directory'
user = _process_user(user)
if os.system('cd build/html; rsync -avz . {0}pandas.pydata.org'
':/usr/share/nginx/pandas/pandas-docs/dev/ -essh'.format(user)):
raise SystemExit('Upload to Pydata Dev failed')
def upload_dev_pdf(user=None):
'push a copy to the pydata dev directory'
user = _process_user(user)
if os.system('cd build/latex; scp pandas.pdf {0}pandas.pydata.org'
':/usr/share/nginx/pandas/pandas-docs/dev/'.format(user)):
raise SystemExit('PDF upload to Pydata Dev failed')
def upload_stable(user=None):
'push a copy to the pydata stable directory'
user = _process_user(user)
if os.system('cd build/html; rsync -avz . {0}pandas.pydata.org'
':/usr/share/nginx/pandas/pandas-docs/stable/ -essh'.format(user)):
raise SystemExit('Upload to stable failed')
def upload_stable_pdf(user=None):
    'push a copy to the pydata stable directory'
user = _process_user(user)
if os.system('cd build/latex; scp pandas.pdf {0}pandas.pydata.org'
':/usr/share/nginx/pandas/pandas-docs/stable/'.format(user)):
raise SystemExit('PDF upload to stable failed')
def upload_prev(ver, doc_root='./', user=None):
'push a copy of older release to appropriate version directory'
user = _process_user(user)
local_dir = doc_root + 'build/html'
remote_dir = '/usr/share/nginx/pandas/pandas-docs/version/%s/' % ver
cmd = 'cd %s; rsync -avz . %spandas.pydata.org:%s -essh'
cmd = cmd % (local_dir, user, remote_dir)
print(cmd)
if os.system(cmd):
raise SystemExit(
'Upload to %s from %s failed' % (remote_dir, local_dir))
local_dir = doc_root + 'build/latex'
pdf_cmd = 'cd %s; scp pandas.pdf %spandas.pydata.org:%s'
pdf_cmd = pdf_cmd % (local_dir, user, remote_dir)
if os.system(pdf_cmd):
raise SystemExit('Upload PDF to %s from %s failed' % (ver, doc_root))
def build_pandas():
os.chdir('..')
os.system('python setup.py clean')
os.system('python setup.py build_ext --inplace')
os.chdir('doc')
def build_prev(ver):
if os.system('git checkout v%s' % ver) != 1:
os.chdir('..')
os.system('python setup.py clean')
os.system('python setup.py build_ext --inplace')
os.chdir('doc')
os.system('python make.py clean')
os.system('python make.py html')
os.system('python make.py latex')
os.system('git checkout master')
def clean():
if os.path.exists('build'):
shutil.rmtree('build')
if os.path.exists('source/generated'):
shutil.rmtree('source/generated')
@contextmanager
def maybe_exclude_notebooks():
"""
Skip building the notebooks if pandoc is not installed.
This assumes that nbsphinx is installed.
"""
base = os.path.dirname(__file__)
notebooks = [os.path.join(base, 'source', nb)
for nb in ['style.ipynb']]
contents = {}
def _remove_notebooks():
for nb in notebooks:
with open(nb, 'rt') as f:
contents[nb] = f.read()
os.remove(nb)
# Skip notebook conversion if
# 1. nbconvert isn't installed, or
# 2. nbconvert is installed, but pandoc isn't
try:
import nbconvert
except ImportError:
print("Warning: nbconvert not installed. Skipping notebooks.")
_remove_notebooks()
else:
try:
nbconvert.utils.pandoc.get_pandoc_version()
except nbconvert.utils.pandoc.PandocMissing:
print("Warning: Pandoc is not installed. Skipping notebooks.")
_remove_notebooks()
yield
for nb, content in contents.items():
with open(nb, 'wt') as f:
f.write(content)
def html():
check_build()
with maybe_exclude_notebooks():
if os.system('sphinx-build -P -b html -d build/doctrees '
'source build/html'):
raise SystemExit("Building HTML failed.")
try:
# remove stale file
os.remove('build/html/pandas.zip')
except:
pass
def zip_html():
try:
print("\nZipping up HTML docs...")
# just in case the wonky build box doesn't have zip
# don't fail this.
os.system('cd build; rm -f html/pandas.zip; zip html/pandas.zip -r -q html/* ')
print("\n")
except:
pass
def latex():
check_build()
if sys.platform != 'win32':
# LaTeX format.
if os.system('sphinx-build -j 2 -b latex -d build/doctrees '
'source build/latex'):
raise SystemExit("Building LaTeX failed.")
# Produce pdf.
os.chdir('build/latex')
# Call the makefile produced by sphinx...
if os.system('make'):
print("Rendering LaTeX failed.")
print("You may still be able to get a usable PDF file by going into 'build/latex'")
print("and executing 'pdflatex pandas.tex' for the requisite number of passes.")
print("Or using the 'latex_forced' target")
raise SystemExit
os.chdir('../..')
else:
print('latex build has not been tested on windows')
def latex_forced():
check_build()
if sys.platform != 'win32':
# LaTeX format.
if os.system('sphinx-build -j 2 -b latex -d build/doctrees '
'source build/latex'):
raise SystemExit("Building LaTeX failed.")
# Produce pdf.
os.chdir('build/latex')
# Manually call pdflatex, 3 passes should ensure latex fixes up
# all the required cross-references and such.
os.system('pdflatex -interaction=nonstopmode pandas.tex')
os.system('pdflatex -interaction=nonstopmode pandas.tex')
os.system('pdflatex -interaction=nonstopmode pandas.tex')
raise SystemExit("You should check the file 'build/latex/pandas.pdf' for problems.")
os.chdir('../..')
else:
print('latex build has not been tested on windows')
def check_build():
build_dirs = [
'build', 'build/doctrees', 'build/html',
'build/latex', 'build/plots', 'build/_static',
'build/_templates']
for d in build_dirs:
try:
os.mkdir(d)
except OSError:
pass
def all():
# clean()
html()
def auto_dev_build(debug=False):
msg = ''
try:
step = 'clean'
clean()
step = 'html'
html()
step = 'upload dev'
upload_dev()
if not debug:
sendmail(step)
step = 'latex'
latex()
step = 'upload pdf'
upload_dev_pdf()
if not debug:
sendmail(step)
except (Exception, SystemExit) as inst:
msg = str(inst) + '\n'
sendmail(step, '[ERROR] ' + msg)
def sendmail(step=None, err_msg=None):
from_name, to_name = _get_config()
if step is None:
step = ''
if err_msg is None or '[ERROR]' not in err_msg:
msgstr = 'Daily docs %s completed successfully' % step
subject = "DOC: %s successful" % step
else:
msgstr = err_msg
subject = "DOC: %s failed" % step
import smtplib
from email.MIMEText import MIMEText
msg = MIMEText(msgstr)
msg['Subject'] = subject
msg['From'] = from_name
msg['To'] = to_name
server_str, port, login, pwd = _get_credentials()
server = smtplib.SMTP(server_str, port)
server.ehlo()
server.starttls()
server.ehlo()
server.login(login, pwd)
try:
server.sendmail(from_name, to_name, msg.as_string())
finally:
server.close()
def _get_dir(subdir=None):
import getpass
USERNAME = getpass.getuser()
if sys.platform == 'darwin':
HOME = '/Users/%s' % USERNAME
else:
HOME = '/home/%s' % USERNAME
if subdir is None:
subdir = '/code/scripts/config'
conf_dir = '%s/%s' % (HOME, subdir)
return conf_dir
def _get_credentials():
tmp_dir = _get_dir()
cred = '%s/credentials' % tmp_dir
with open(cred, 'r') as fh:
server, port, un, domain = fh.read().split(',')
port = int(port)
login = un + '@' + domain + '.com'
import base64
with open('%s/cron_email_pwd' % tmp_dir, 'r') as fh:
pwd = base64.b64decode(fh.read())
return server, port, login, pwd
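# Illustrative layout of the files read above (not part of this script): the
# 'credentials' file is a single comma-separated line, e.g.
#     smtp.example.com,587,builder,example
# which yields login '[email protected]'; 'cron_email_pwd' holds the
# base64-encoded password.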
def _get_config():
tmp_dir = _get_dir()
with open('%s/addresses' % tmp_dir, 'r') as fh:
from_name, to_name = fh.read().split(',')
return from_name, to_name
funcd = {
'html': html,
'zip_html': zip_html,
'upload_dev': upload_dev,
'upload_stable': upload_stable,
'upload_dev_pdf': upload_dev_pdf,
'upload_stable_pdf': upload_stable_pdf,
'latex': latex,
'latex_forced': latex_forced,
'clean': clean,
'auto_dev': auto_dev_build,
'auto_debug': lambda: auto_dev_build(True),
'build_pandas': build_pandas,
'all': all,
}
small_docs = False
# current_dir = os.getcwd()
# os.chdir(os.path.dirname(os.path.join(current_dir, __file__)))
import argparse
argparser = argparse.ArgumentParser(description="""
pandas documentation builder
""".strip())
# argparser.add_argument('-arg_name', '--arg_name',
# metavar='label for arg help',
# type=str|etc,
# nargs='N|*|?|+|argparse.REMAINDER',
# required=False,
# #choices='abc',
# help='help string',
# action='store|store_true')
# args = argparser.parse_args()
#print args.accumulate(args.integers)
def generate_index(api=True, single=False, **kwds):
from jinja2 import Template
with open("source/index.rst.template") as f:
t = Template(f.read())
with open("source/index.rst","w") as f:
f.write(t.render(api=api,single=single,**kwds))
import argparse
argparser = argparse.ArgumentParser(description="pandas documentation builder",
epilog="Targets : %s" % funcd.keys())
argparser.add_argument('--no-api',
default=False,
help='Ommit api and autosummary',
action='store_true')
argparser.add_argument('--single',
metavar='FILENAME',
type=str,
default=False,
help='filename of section to compile, e.g. "indexing"')
argparser.add_argument('--user',
type=str,
default=False,
help='Username to connect to the pydata server')
def main():
args, unknown = argparser.parse_known_args()
sys.argv = [sys.argv[0]] + unknown
if args.single:
args.single = os.path.basename(args.single).split(".rst")[0]
if 'clean' in unknown:
args.single=False
generate_index(api=not args.no_api and not args.single, single=args.single)
if len(sys.argv) > 2:
ftype = sys.argv[1]
ver = sys.argv[2]
if ftype == 'build_previous':
            build_prev(ver)
if ftype == 'upload_previous':
upload_prev(ver, user=args.user)
elif len(sys.argv) == 2:
for arg in sys.argv[1:]:
func = funcd.get(arg)
if func is None:
raise SystemExit('Do not know how to handle %s; valid args are %s' % (
arg, list(funcd.keys())))
if args.user:
func(user=args.user)
else:
func()
else:
small_docs = False
all()
# os.chdir(current_dir)
if __name__ == '__main__':
import sys
sys.exit(main())
| bsd-3-clause | -8,735,844,811,641,713,000 | 27.858447 | 95 | 0.582991 | false |
urandu/rethinkdb | external/v8_3.30.33.16/build/gyp/tools/pretty_sln.py | 806 | 5092 | #!/usr/bin/env python
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Prints the information in a sln file in a diffable way.
It first outputs each project in alphabetical order with its
dependencies.
Then it outputs a possible build order.
"""
__author__ = 'nsylvain (Nicolas Sylvain)'
import os
import re
import sys
import pretty_vcproj
def BuildProject(project, built, projects, deps):
# if all dependencies are done, we can build it, otherwise we try to build the
# dependency.
# This is not infinite-recursion proof.
for dep in deps[project]:
if dep not in built:
BuildProject(dep, built, projects, deps)
print project
built.append(project)
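# Illustrative example (not part of the original script): with
# deps = {'app': ['lib'], 'lib': []}, BuildProject('app', [], projects, deps)
# prints 'lib' before 'app', i.e. dependencies are emitted before the
# projects that depend on them.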
def ParseSolution(solution_file):
# All projects, their clsid and paths.
projects = dict()
# A list of dependencies associated with a project.
dependencies = dict()
# Regular expressions that matches the SLN format.
# The first line of a project definition.
begin_project = re.compile(('^Project\("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942'
'}"\) = "(.*)", "(.*)", "(.*)"$'))
# The last line of a project definition.
end_project = re.compile('^EndProject$')
# The first line of a dependency list.
begin_dep = re.compile('ProjectSection\(ProjectDependencies\) = postProject$')
# The last line of a dependency list.
end_dep = re.compile('EndProjectSection$')
# A line describing a dependency.
dep_line = re.compile(' *({.*}) = ({.*})$')
in_deps = False
solution = open(solution_file)
for line in solution:
results = begin_project.search(line)
if results:
# Hack to remove icu because the diff is too different.
if results.group(1).find('icu') != -1:
continue
# We remove "_gyp" from the names because it helps to diff them.
current_project = results.group(1).replace('_gyp', '')
projects[current_project] = [results.group(2).replace('_gyp', ''),
results.group(3),
results.group(2)]
dependencies[current_project] = []
continue
results = end_project.search(line)
if results:
current_project = None
continue
results = begin_dep.search(line)
if results:
in_deps = True
continue
results = end_dep.search(line)
if results:
in_deps = False
continue
results = dep_line.search(line)
if results and in_deps and current_project:
dependencies[current_project].append(results.group(1))
continue
# Change all dependencies clsid to name instead.
for project in dependencies:
# For each dependencies in this project
new_dep_array = []
for dep in dependencies[project]:
      # Look for the project name matching this clsid
for project_info in projects:
if projects[project_info][1] == dep:
new_dep_array.append(project_info)
dependencies[project] = sorted(new_dep_array)
return (projects, dependencies)
def PrintDependencies(projects, deps):
print "---------------------------------------"
print "Dependencies for all projects"
print "---------------------------------------"
print "-- --"
for (project, dep_list) in sorted(deps.items()):
print "Project : %s" % project
print "Path : %s" % projects[project][0]
if dep_list:
for dep in dep_list:
print " - %s" % dep
print ""
print "-- --"
def PrintBuildOrder(projects, deps):
print "---------------------------------------"
print "Build order "
print "---------------------------------------"
print "-- --"
built = []
for (project, _) in sorted(deps.items()):
if project not in built:
BuildProject(project, built, projects, deps)
print "-- --"
def PrintVCProj(projects):
for project in projects:
print "-------------------------------------"
print "-------------------------------------"
print project
print project
print project
print "-------------------------------------"
print "-------------------------------------"
project_path = os.path.abspath(os.path.join(os.path.dirname(sys.argv[1]),
projects[project][2]))
pretty = pretty_vcproj
argv = [ '',
project_path,
'$(SolutionDir)=%s\\' % os.path.dirname(sys.argv[1]),
]
argv.extend(sys.argv[3:])
pretty.main(argv)
def main():
# check if we have exactly 1 parameter.
if len(sys.argv) < 2:
print 'Usage: %s "c:\\path\\to\\project.sln"' % sys.argv[0]
return 1
(projects, deps) = ParseSolution(sys.argv[1])
PrintDependencies(projects, deps)
PrintBuildOrder(projects, deps)
if '--recursive' in sys.argv:
PrintVCProj(projects)
return 0
if __name__ == '__main__':
sys.exit(main())
| agpl-3.0 | -117,906,438,856,939,490 | 29.309524 | 80 | 0.569717 | false |
jolevq/odoopub | addons/website_mail/models/mail_thread.py | 338 | 1454 | # -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2013-Today OpenERP SA (<http://www.openerp.com>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import osv, fields
# TODO for trunk, remove me
class MailThread(osv.AbstractModel):
_inherit = 'mail.thread'
_columns = {
'website_message_ids': fields.one2many(
'mail.message', 'res_id',
domain=lambda self: [
'&', ('model', '=', self._name), ('type', '=', 'comment')
],
string='Website Messages',
help="Website communication history",
),
}
| agpl-3.0 | 7,625,428,154,479,396,000 | 37.263158 | 78 | 0.575653 | false |
ericlink/adms-server | playframework-dist/play-1.1/python/Lib/subprocess.py | 2 | 45310 | # subprocess - Subprocesses with accessible I/O streams
#
# For more information about this module, see PEP 324.
#
# This module should remain compatible with Python 2.2, see PEP 291.
#
# Copyright (c) 2003-2005 by Peter Astrand <[email protected]>
#
# Licensed to PSF under a Contributor Agreement.
# See http://www.python.org/2.4/license for licensing details.
r"""subprocess - Subprocesses with accessible I/O streams
This module allows you to spawn processes, connect to their
input/output/error pipes, and obtain their return codes. This module
intends to replace several other, older modules and functions, like:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
Information about how the subprocess module can be used to replace these
modules and functions can be found below.
Using the subprocess module
===========================
This module defines one class called Popen:
class Popen(args, bufsize=0, executable=None,
stdin=None, stdout=None, stderr=None,
preexec_fn=None, close_fds=False, shell=False,
cwd=None, env=None, universal_newlines=False,
startupinfo=None, creationflags=0):
Arguments are:
args should be a string, or a sequence of program arguments. The
program to execute is normally the first item in the args sequence or
string, but can be explicitly set by using the executable argument.
On UNIX, with shell=False (default): In this case, the Popen class
uses os.execvp() to execute the child program. args should normally
be a sequence. A string will be treated as a sequence with the string
as the only item (the program to execute).
On UNIX, with shell=True: If args is a string, it specifies the
command string to execute through the shell. If args is a sequence,
the first item specifies the command string, and any additional items
will be treated as additional shell arguments.
On Windows: the Popen class uses CreateProcess() to execute the child
program, which operates on strings. If args is a sequence, it will be
converted to a string using the list2cmdline method. Please note that
not all MS Windows applications interpret the command line the same
way: The list2cmdline is designed for applications using the same
rules as the MS C runtime.
bufsize, if given, has the same meaning as the corresponding argument
to the built-in open() function: 0 means unbuffered, 1 means line
buffered, any other positive value means use a buffer of
(approximately) that size. A negative bufsize means to use the system
default, which usually means fully buffered. The default value for
bufsize is 0 (unbuffered).
stdin, stdout and stderr specify the executed programs' standard
input, standard output and standard error file handles, respectively.
Valid values are PIPE, an existing file descriptor (a positive
integer), an existing file object, and None. PIPE indicates that a
new pipe to the child should be created. With None, no redirection
will occur; the child's file handles will be inherited from the
parent. Additionally, stderr can be STDOUT, which indicates that the
stderr data from the applications should be captured into the same
file handle as for stdout.
If preexec_fn is set to a callable object, this object will be called
in the child process just before the child is executed.
If close_fds is true, all file descriptors except 0, 1 and 2 will be
closed before the child process is executed.
if shell is true, the specified command will be executed through the
shell.
If cwd is not None, the current directory will be changed to cwd
before the child is executed.
If env is not None, it defines the environment variables for the new
process.
If universal_newlines is true, the file objects stdout and stderr are
opened as a text files, but lines may be terminated by any of '\n',
the Unix end-of-line convention, '\r', the Macintosh convention or
'\r\n', the Windows convention. All of these external representations
are seen as '\n' by the Python program. Note: This feature is only
available if Python is built with universal newline support (the
default). Also, the newlines attribute of the file objects stdout,
stdin and stderr are not updated by the communicate() method.
The startupinfo and creationflags, if given, will be passed to the
underlying CreateProcess() function. They can specify things such as
appearance of the main window and priority for the new process.
(Windows only)
This module also defines two shortcut functions:
call(*popenargs, **kwargs):
Run command with arguments. Wait for command to complete, then
return the returncode attribute.
The arguments are the same as for the Popen constructor. Example:
retcode = call(["ls", "-l"])
check_call(*popenargs, **kwargs):
Run command with arguments. Wait for command to complete. If the
exit code was zero then return, otherwise raise
CalledProcessError. The CalledProcessError object will have the
return code in the returncode attribute.
The arguments are the same as for the Popen constructor. Example:
check_call(["ls", "-l"])
Exceptions
----------
Exceptions raised in the child process, before the new program has
started to execute, will be re-raised in the parent. Additionally,
the exception object will have one extra attribute called
'child_traceback', which is a string containing traceback information
from the childs point of view.
The most common exception raised is OSError. This occurs, for
example, when trying to execute a non-existent file. Applications
should prepare for OSErrors.
A ValueError will be raised if Popen is called with invalid arguments.
check_call() will raise CalledProcessError, if the called process
returns a non-zero return code.
Security
--------
Unlike some other popen functions, this implementation will never call
/bin/sh implicitly. This means that all characters, including shell
metacharacters, can safely be passed to child processes.
Popen objects
=============
Instances of the Popen class have the following methods:
poll()
Check if child process has terminated. Returns returncode
attribute.
wait()
Wait for child process to terminate. Returns returncode attribute.
communicate(input=None)
Interact with process: Send data to stdin. Read data from stdout
and stderr, until end-of-file is reached. Wait for process to
terminate. The optional input argument should be a string to be
sent to the child process, or None, if no data should be sent to
the child.
communicate() returns a tuple (stdout, stderr).
Note: The data read is buffered in memory, so do not use this
method if the data size is large or unlimited.
The following attributes are also available:
stdin
If the stdin argument is PIPE, this attribute is a file object
that provides input to the child process. Otherwise, it is None.
stdout
If the stdout argument is PIPE, this attribute is a file object
that provides output from the child process. Otherwise, it is
None.
stderr
If the stderr argument is PIPE, this attribute is file object that
provides error output from the child process. Otherwise, it is
None.
pid
The process ID of the child process.
returncode
The child return code. A None value indicates that the process
hasn't terminated yet. A negative value -N indicates that the
child was terminated by signal N (UNIX only).
Replacing older functions with the subprocess module
====================================================
In this section, "a ==> b" means that b can be used as a replacement
for a.
Note: All functions in this section fail (more or less) silently if
the executed program cannot be found; this module raises an OSError
exception.
In the following examples, we assume that the subprocess module is
imported with "from subprocess import *".
Replacing /bin/sh shell backquote
---------------------------------
output=`mycmd myarg`
==>
output = Popen(["mycmd", "myarg"], stdout=PIPE).communicate()[0]
Replacing shell pipe line
-------------------------
output=`dmesg | grep hda`
==>
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
output = p2.communicate()[0]
Replacing os.system()
---------------------
sts = os.system("mycmd" + " myarg")
==>
p = Popen("mycmd" + " myarg", shell=True)
pid, sts = os.waitpid(p.pid, 0)
Note:
* Calling the program through the shell is usually not required.
* It's easier to look at the returncode attribute than the
exitstatus.
A more real-world example would look like this:
try:
retcode = call("mycmd" + " myarg", shell=True)
if retcode < 0:
print >>sys.stderr, "Child was terminated by signal", -retcode
else:
print >>sys.stderr, "Child returned", retcode
except OSError, e:
print >>sys.stderr, "Execution failed:", e
Replacing os.spawn*
-------------------
P_NOWAIT example:
pid = os.spawnlp(os.P_NOWAIT, "/bin/mycmd", "mycmd", "myarg")
==>
pid = Popen(["/bin/mycmd", "myarg"]).pid
P_WAIT example:
retcode = os.spawnlp(os.P_WAIT, "/bin/mycmd", "mycmd", "myarg")
==>
retcode = call(["/bin/mycmd", "myarg"])
Vector example:
os.spawnvp(os.P_NOWAIT, path, args)
==>
Popen([path] + args[1:])
Environment example:
os.spawnlpe(os.P_NOWAIT, "/bin/mycmd", "mycmd", "myarg", env)
==>
Popen(["/bin/mycmd", "myarg"], env={"PATH": "/usr/bin"})
Replacing os.popen*
-------------------
pipe = os.popen(cmd, mode='r', bufsize)
==>
pipe = Popen(cmd, shell=True, bufsize=bufsize, stdout=PIPE).stdout
pipe = os.popen(cmd, mode='w', bufsize)
==>
pipe = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE).stdin
(child_stdin, child_stdout) = os.popen2(cmd, mode, bufsize)
==>
p = Popen(cmd, shell=True, bufsize=bufsize,
stdin=PIPE, stdout=PIPE, close_fds=True)
(child_stdin, child_stdout) = (p.stdin, p.stdout)
(child_stdin,
child_stdout,
child_stderr) = os.popen3(cmd, mode, bufsize)
==>
p = Popen(cmd, shell=True, bufsize=bufsize,
stdin=PIPE, stdout=PIPE, stderr=PIPE, close_fds=True)
(child_stdin,
child_stdout,
child_stderr) = (p.stdin, p.stdout, p.stderr)
(child_stdin, child_stdout_and_stderr) = os.popen4(cmd, mode, bufsize)
==>
p = Popen(cmd, shell=True, bufsize=bufsize,
stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
(child_stdin, child_stdout_and_stderr) = (p.stdin, p.stdout)
Replacing popen2.*
------------------
Note: If the cmd argument to popen2 functions is a string, the command
is executed through /bin/sh. If it is a list, the command is directly
executed.
(child_stdout, child_stdin) = popen2.popen2("somestring", bufsize, mode)
==>
p = Popen(["somestring"], shell=True, bufsize=bufsize
stdin=PIPE, stdout=PIPE, close_fds=True)
(child_stdout, child_stdin) = (p.stdout, p.stdin)
(child_stdout, child_stdin) = popen2.popen2(["mycmd", "myarg"], bufsize, mode)
==>
p = Popen(["mycmd", "myarg"], bufsize=bufsize,
stdin=PIPE, stdout=PIPE, close_fds=True)
(child_stdout, child_stdin) = (p.stdout, p.stdin)
The popen2.Popen3 and popen2.Popen4 basically works as subprocess.Popen,
except that:
* subprocess.Popen raises an exception if the execution fails
* the capturestderr argument is replaced with the stderr argument.
* stdin=PIPE and stdout=PIPE must be specified.
* popen2 closes all filedescriptors by default, but you have to specify
close_fds=True with subprocess.Popen.
"""
import sys
mswindows = (sys.platform == "win32")
import os
import types
import traceback
import gc
# Exception classes used by this module.
class CalledProcessError(Exception):
"""This exception is raised when a process run by check_call() returns
a non-zero exit status. The exit status will be stored in the
returncode attribute."""
def __init__(self, returncode, cmd):
self.returncode = returncode
self.cmd = cmd
def __str__(self):
return "Command '%s' returned non-zero exit status %d" % (self.cmd, self.returncode)
if mswindows:
import threading
import msvcrt
if 0: # <-- change this to use pywin32 instead of the _subprocess driver
import pywintypes
from win32api import GetStdHandle, STD_INPUT_HANDLE, \
STD_OUTPUT_HANDLE, STD_ERROR_HANDLE
from win32api import GetCurrentProcess, DuplicateHandle, \
GetModuleFileName, GetVersion
from win32con import DUPLICATE_SAME_ACCESS, SW_HIDE
from win32pipe import CreatePipe
from win32process import CreateProcess, STARTUPINFO, \
GetExitCodeProcess, STARTF_USESTDHANDLES, \
STARTF_USESHOWWINDOW, CREATE_NEW_CONSOLE
from win32event import WaitForSingleObject, INFINITE, WAIT_OBJECT_0
else:
from _subprocess import *
class STARTUPINFO:
dwFlags = 0
hStdInput = None
hStdOutput = None
hStdError = None
wShowWindow = 0
class pywintypes:
error = IOError
else:
import select
import errno
import fcntl
import pickle
__all__ = ["Popen", "PIPE", "STDOUT", "call", "check_call", "CalledProcessError"]
try:
MAXFD = os.sysconf("SC_OPEN_MAX")
except:
MAXFD = 256
# True/False does not exist on 2.2.0
try:
False
except NameError:
False = 0
True = 1
_active = []
def _cleanup():
for inst in _active[:]:
if inst.poll(_deadstate=sys.maxint) >= 0:
try:
_active.remove(inst)
except ValueError:
# This can happen if two threads create a new Popen instance.
# It's harmless that it was already removed, so ignore.
pass
PIPE = -1
STDOUT = -2
def call(*popenargs, **kwargs):
"""Run command with arguments. Wait for command to complete, then
return the returncode attribute.
The arguments are the same as for the Popen constructor. Example:
retcode = call(["ls", "-l"])
"""
return Popen(*popenargs, **kwargs).wait()
def check_call(*popenargs, **kwargs):
"""Run command with arguments. Wait for command to complete. If
the exit code was zero then return, otherwise raise
CalledProcessError. The CalledProcessError object will have the
return code in the returncode attribute.
The arguments are the same as for the Popen constructor. Example:
check_call(["ls", "-l"])
"""
retcode = call(*popenargs, **kwargs)
cmd = kwargs.get("args")
if cmd is None:
cmd = popenargs[0]
if retcode:
raise CalledProcessError(retcode, cmd)
return retcode
def list2cmdline(seq):
"""
Translate a sequence of arguments into a command line
string, using the same rules as the MS C runtime:
1) Arguments are delimited by white space, which is either a
space or a tab.
2) A string surrounded by double quotation marks is
interpreted as a single argument, regardless of white space
contained within. A quoted string can be embedded in an
argument.
3) A double quotation mark preceded by a backslash is
interpreted as a literal double quotation mark.
4) Backslashes are interpreted literally, unless they
immediately precede a double quotation mark.
5) If backslashes immediately precede a double quotation mark,
every pair of backslashes is interpreted as a literal
backslash. If the number of backslashes is odd, the last
backslash escapes the next double quotation mark as
described in rule 3.
"""
# See
# http://msdn.microsoft.com/library/en-us/vccelng/htm/progs_12.asp
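    # Worked example of the rules above (illustrative only): the input
    # ['a b', 'c"d'] becomes the command line  "a b" c\"d  -- the first
    # argument is quoted because it contains a space, and the embedded
    # double quotation mark in the second argument is escaped with a
    # backslash.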
result = []
needquote = False
for arg in seq:
bs_buf = []
# Add a space to separate this argument from the others
if result:
result.append(' ')
needquote = (" " in arg) or ("\t" in arg) or arg == ""
if needquote:
result.append('"')
for c in arg:
if c == '\\':
# Don't know if we need to double yet.
bs_buf.append(c)
elif c == '"':
                # Double backslashes.
result.append('\\' * len(bs_buf)*2)
bs_buf = []
result.append('\\"')
else:
# Normal char
if bs_buf:
result.extend(bs_buf)
bs_buf = []
result.append(c)
        # Add remaining backslashes, if any.
if bs_buf:
result.extend(bs_buf)
if needquote:
result.extend(bs_buf)
result.append('"')
return ''.join(result)
class Popen(object):
def __init__(self, args, bufsize=0, executable=None,
stdin=None, stdout=None, stderr=None,
preexec_fn=None, close_fds=False, shell=False,
cwd=None, env=None, universal_newlines=False,
startupinfo=None, creationflags=0):
"""Create new Popen instance."""
_cleanup()
self._child_created = False
if not isinstance(bufsize, (int, long)):
raise TypeError("bufsize must be an integer")
if mswindows:
if preexec_fn is not None:
raise ValueError("preexec_fn is not supported on Windows "
"platforms")
if close_fds:
raise ValueError("close_fds is not supported on Windows "
"platforms")
else:
# POSIX
if startupinfo is not None:
raise ValueError("startupinfo is only supported on Windows "
"platforms")
if creationflags != 0:
raise ValueError("creationflags is only supported on Windows "
"platforms")
self.stdin = None
self.stdout = None
self.stderr = None
self.pid = None
self.returncode = None
self.universal_newlines = universal_newlines
# Input and output objects. The general principle is like
# this:
#
# Parent Child
# ------ -----
# p2cwrite ---stdin---> p2cread
# c2pread <--stdout--- c2pwrite
# errread <--stderr--- errwrite
#
# On POSIX, the child objects are file descriptors. On
# Windows, these are Windows file handles. The parent objects
# are file descriptors on both platforms. The parent objects
# are None when not using PIPEs. The child objects are None
# when not redirecting.
(p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite) = self._get_handles(stdin, stdout, stderr)
self._execute_child(args, executable, preexec_fn, close_fds,
cwd, env, universal_newlines,
startupinfo, creationflags, shell,
p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite)
# On Windows, you cannot just redirect one or two handles: You
# either have to redirect all three or none. If the subprocess
# user has only redirected one or two handles, we are
# automatically creating PIPEs for the rest. We should close
# these after the process is started. See bug #1124861.
if mswindows:
if stdin is None and p2cwrite is not None:
os.close(p2cwrite)
p2cwrite = None
if stdout is None and c2pread is not None:
os.close(c2pread)
c2pread = None
if stderr is None and errread is not None:
os.close(errread)
errread = None
if p2cwrite:
self.stdin = os.fdopen(p2cwrite, 'wb', bufsize)
if c2pread:
if universal_newlines:
self.stdout = os.fdopen(c2pread, 'rU', bufsize)
else:
self.stdout = os.fdopen(c2pread, 'rb', bufsize)
if errread:
if universal_newlines:
self.stderr = os.fdopen(errread, 'rU', bufsize)
else:
self.stderr = os.fdopen(errread, 'rb', bufsize)
def _translate_newlines(self, data):
data = data.replace("\r\n", "\n")
data = data.replace("\r", "\n")
return data
def __del__(self, sys=sys):
if not self._child_created:
# We didn't get to successfully create a child process.
return
# In case the child hasn't been waited on, check if it's done.
self.poll(_deadstate=sys.maxint)
if self.returncode is None and _active is not None:
# Child is still running, keep us alive until we can wait on it.
_active.append(self)
def communicate(self, input=None):
"""Interact with process: Send data to stdin. Read data from
stdout and stderr, until end-of-file is reached. Wait for
process to terminate. The optional input argument should be a
string to be sent to the child process, or None, if no data
should be sent to the child.
communicate() returns a tuple (stdout, stderr)."""
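        # Typical usage (sketch):
        #   out, err = Popen(["ls", "-l"], stdout=PIPE, stderr=PIPE).communicate()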
# Optimization: If we are only using one pipe, or no pipe at
# all, using select() or threads is unnecessary.
if [self.stdin, self.stdout, self.stderr].count(None) >= 2:
stdout = None
stderr = None
if self.stdin:
if input:
self.stdin.write(input)
self.stdin.close()
elif self.stdout:
stdout = self.stdout.read()
elif self.stderr:
stderr = self.stderr.read()
self.wait()
return (stdout, stderr)
return self._communicate(input)
if mswindows:
#
# Windows methods
#
def _get_handles(self, stdin, stdout, stderr):
"""Construct and return tupel with IO objects:
p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite
"""
if stdin is None and stdout is None and stderr is None:
return (None, None, None, None, None, None)
p2cread, p2cwrite = None, None
c2pread, c2pwrite = None, None
errread, errwrite = None, None
if stdin is None:
p2cread = GetStdHandle(STD_INPUT_HANDLE)
if p2cread is not None:
pass
elif stdin is None or stdin == PIPE:
p2cread, p2cwrite = CreatePipe(None, 0)
# Detach and turn into fd
p2cwrite = p2cwrite.Detach()
p2cwrite = msvcrt.open_osfhandle(p2cwrite, 0)
elif isinstance(stdin, int):
p2cread = msvcrt.get_osfhandle(stdin)
else:
# Assuming file-like object
p2cread = msvcrt.get_osfhandle(stdin.fileno())
p2cread = self._make_inheritable(p2cread)
if stdout is None:
c2pwrite = GetStdHandle(STD_OUTPUT_HANDLE)
if c2pwrite is not None:
pass
elif stdout is None or stdout == PIPE:
c2pread, c2pwrite = CreatePipe(None, 0)
# Detach and turn into fd
c2pread = c2pread.Detach()
c2pread = msvcrt.open_osfhandle(c2pread, 0)
elif isinstance(stdout, int):
c2pwrite = msvcrt.get_osfhandle(stdout)
else:
# Assuming file-like object
c2pwrite = msvcrt.get_osfhandle(stdout.fileno())
c2pwrite = self._make_inheritable(c2pwrite)
if stderr is None:
errwrite = GetStdHandle(STD_ERROR_HANDLE)
if errwrite is not None:
pass
elif stderr is None or stderr == PIPE:
errread, errwrite = CreatePipe(None, 0)
# Detach and turn into fd
errread = errread.Detach()
errread = msvcrt.open_osfhandle(errread, 0)
elif stderr == STDOUT:
errwrite = c2pwrite
elif isinstance(stderr, int):
errwrite = msvcrt.get_osfhandle(stderr)
else:
# Assuming file-like object
errwrite = msvcrt.get_osfhandle(stderr.fileno())
errwrite = self._make_inheritable(errwrite)
return (p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite)
def _make_inheritable(self, handle):
"""Return a duplicate of handle, which is inheritable"""
return DuplicateHandle(GetCurrentProcess(), handle,
GetCurrentProcess(), 0, 1,
DUPLICATE_SAME_ACCESS)
def _find_w9xpopen(self):
"""Find and return absolut path to w9xpopen.exe"""
w9xpopen = os.path.join(os.path.dirname(GetModuleFileName(0)),
"w9xpopen.exe")
if not os.path.exists(w9xpopen):
# Eeek - file-not-found - possibly an embedding
# situation - see if we can locate it in sys.exec_prefix
w9xpopen = os.path.join(os.path.dirname(sys.exec_prefix),
"w9xpopen.exe")
if not os.path.exists(w9xpopen):
raise RuntimeError("Cannot locate w9xpopen.exe, which is "
"needed for Popen to work with your "
"shell or platform.")
return w9xpopen
def _execute_child(self, args, executable, preexec_fn, close_fds,
cwd, env, universal_newlines,
startupinfo, creationflags, shell,
p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite):
"""Execute program (MS Windows version)"""
if not isinstance(args, types.StringTypes):
args = list2cmdline(args)
# Process startup details
if startupinfo is None:
startupinfo = STARTUPINFO()
if None not in (p2cread, c2pwrite, errwrite):
startupinfo.dwFlags |= STARTF_USESTDHANDLES
startupinfo.hStdInput = p2cread
startupinfo.hStdOutput = c2pwrite
startupinfo.hStdError = errwrite
if shell:
startupinfo.dwFlags |= STARTF_USESHOWWINDOW
startupinfo.wShowWindow = SW_HIDE
comspec = os.environ.get("COMSPEC", "cmd.exe")
args = comspec + " /c " + args
if (GetVersion() >= 0x80000000L or
os.path.basename(comspec).lower() == "command.com"):
# Win9x, or using command.com on NT. We need to
# use the w9xpopen intermediate program. For more
# information, see KB Q150956
# (http://web.archive.org/web/20011105084002/http://support.microsoft.com/support/kb/articles/Q150/9/56.asp)
w9xpopen = self._find_w9xpopen()
args = '"%s" %s' % (w9xpopen, args)
# Not passing CREATE_NEW_CONSOLE has been known to
# cause random failures on win9x. Specifically a
# dialog: "Your program accessed mem currently in
# use at xxx" and a hopeful warning about the
                    # stability of your system.  The cost is that Ctrl+C won't
                    # kill children.
creationflags |= CREATE_NEW_CONSOLE
# Start the process
try:
hp, ht, pid, tid = CreateProcess(executable, args,
# no special security
None, None,
# must inherit handles to pass std
# handles
1,
creationflags,
env,
cwd,
startupinfo)
except pywintypes.error, e:
# Translate pywintypes.error to WindowsError, which is
# a subclass of OSError. FIXME: We should really
                # translate errno using _sys_errlist (or similar), but
# how can this be done from Python?
raise WindowsError(*e.args)
# Retain the process handle, but close the thread handle
self._child_created = True
self._handle = hp
self.pid = pid
ht.Close()
# Child is launched. Close the parent's copy of those pipe
# handles that only the child should have open. You need
# to make sure that no handles to the write end of the
# output pipe are maintained in this process or else the
# pipe will not close when the child process exits and the
# ReadFile will hang.
if p2cread is not None:
p2cread.Close()
if c2pwrite is not None:
c2pwrite.Close()
if errwrite is not None:
errwrite.Close()
def poll(self, _deadstate=None):
"""Check if child process has terminated. Returns returncode
attribute."""
if self.returncode is None:
if WaitForSingleObject(self._handle, 0) == WAIT_OBJECT_0:
self.returncode = GetExitCodeProcess(self._handle)
return self.returncode
def wait(self):
"""Wait for child process to terminate. Returns returncode
attribute."""
if self.returncode is None:
obj = WaitForSingleObject(self._handle, INFINITE)
self.returncode = GetExitCodeProcess(self._handle)
return self.returncode
def _readerthread(self, fh, buffer):
buffer.append(fh.read())
def _communicate(self, input):
stdout = None # Return
stderr = None # Return
if self.stdout:
stdout = []
stdout_thread = threading.Thread(target=self._readerthread,
args=(self.stdout, stdout))
stdout_thread.setDaemon(True)
stdout_thread.start()
if self.stderr:
stderr = []
stderr_thread = threading.Thread(target=self._readerthread,
args=(self.stderr, stderr))
stderr_thread.setDaemon(True)
stderr_thread.start()
if self.stdin:
if input is not None:
self.stdin.write(input)
self.stdin.close()
if self.stdout:
stdout_thread.join()
if self.stderr:
stderr_thread.join()
# All data exchanged. Translate lists into strings.
if stdout is not None:
stdout = stdout[0]
if stderr is not None:
stderr = stderr[0]
# Translate newlines, if requested. We cannot let the file
# object do the translation: It is based on stdio, which is
# impossible to combine with select (unless forcing no
# buffering).
if self.universal_newlines and hasattr(file, 'newlines'):
if stdout:
stdout = self._translate_newlines(stdout)
if stderr:
stderr = self._translate_newlines(stderr)
self.wait()
return (stdout, stderr)
else:
#
# POSIX methods
#
def _get_handles(self, stdin, stdout, stderr):
"""Construct and return tupel with IO objects:
p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite
"""
p2cread, p2cwrite = None, None
c2pread, c2pwrite = None, None
errread, errwrite = None, None
if stdin is None:
pass
elif stdin == PIPE:
p2cread, p2cwrite = os.pipe()
elif isinstance(stdin, int):
p2cread = stdin
else:
# Assuming file-like object
p2cread = stdin.fileno()
if stdout is None:
pass
elif stdout == PIPE:
c2pread, c2pwrite = os.pipe()
elif isinstance(stdout, int):
c2pwrite = stdout
else:
# Assuming file-like object
c2pwrite = stdout.fileno()
if stderr is None:
pass
elif stderr == PIPE:
errread, errwrite = os.pipe()
elif stderr == STDOUT:
errwrite = c2pwrite
elif isinstance(stderr, int):
errwrite = stderr
else:
# Assuming file-like object
errwrite = stderr.fileno()
return (p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite)
def _set_cloexec_flag(self, fd):
try:
cloexec_flag = fcntl.FD_CLOEXEC
except AttributeError:
cloexec_flag = 1
old = fcntl.fcntl(fd, fcntl.F_GETFD)
fcntl.fcntl(fd, fcntl.F_SETFD, old | cloexec_flag)
def _close_fds(self, but):
for i in xrange(3, MAXFD):
if i == but:
continue
try:
os.close(i)
except:
pass
def _execute_child(self, args, executable, preexec_fn, close_fds,
cwd, env, universal_newlines,
startupinfo, creationflags, shell,
p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite):
"""Execute program (POSIX version)"""
if isinstance(args, types.StringTypes):
args = [args]
else:
args = list(args)
if shell:
args = ["/bin/sh", "-c"] + args
if executable is None:
executable = args[0]
# For transferring possible exec failure from child to parent
# The first char specifies the exception type: 0 means
# OSError, 1 means some other error.
errpipe_read, errpipe_write = os.pipe()
self._set_cloexec_flag(errpipe_write)
gc_was_enabled = gc.isenabled()
# Disable gc to avoid bug where gc -> file_dealloc ->
# write to stderr -> hang. http://bugs.python.org/issue1336
gc.disable()
try:
self.pid = os.fork()
except:
if gc_was_enabled:
gc.enable()
raise
self._child_created = True
if self.pid == 0:
# Child
try:
# Close parent's pipe ends
if p2cwrite:
os.close(p2cwrite)
if c2pread:
os.close(c2pread)
if errread:
os.close(errread)
os.close(errpipe_read)
# Dup fds for child
if p2cread:
os.dup2(p2cread, 0)
if c2pwrite:
os.dup2(c2pwrite, 1)
if errwrite:
os.dup2(errwrite, 2)
# Close pipe fds. Make sure we don't close the same
# fd more than once, or standard fds.
if p2cread and p2cread not in (0,):
os.close(p2cread)
if c2pwrite and c2pwrite not in (p2cread, 1):
os.close(c2pwrite)
if errwrite and errwrite not in (p2cread, c2pwrite, 2):
os.close(errwrite)
# Close all other fds, if asked for
if close_fds:
self._close_fds(but=errpipe_write)
if cwd is not None:
os.chdir(cwd)
if preexec_fn:
apply(preexec_fn)
if env is None:
os.execvp(executable, args)
else:
os.execvpe(executable, args, env)
except:
exc_type, exc_value, tb = sys.exc_info()
# Save the traceback and attach it to the exception object
exc_lines = traceback.format_exception(exc_type,
exc_value,
tb)
exc_value.child_traceback = ''.join(exc_lines)
os.write(errpipe_write, pickle.dumps(exc_value))
# This exitcode won't be reported to applications, so it
# really doesn't matter what we return.
os._exit(255)
# Parent
if gc_was_enabled:
gc.enable()
os.close(errpipe_write)
if p2cread and p2cwrite:
os.close(p2cread)
if c2pwrite and c2pread:
os.close(c2pwrite)
if errwrite and errread:
os.close(errwrite)
# Wait for exec to fail or succeed; possibly raising exception
data = os.read(errpipe_read, 1048576) # Exceptions limited to 1 MB
os.close(errpipe_read)
if data != "":
os.waitpid(self.pid, 0)
child_exception = pickle.loads(data)
raise child_exception
def _handle_exitstatus(self, sts):
if os.WIFSIGNALED(sts):
self.returncode = -os.WTERMSIG(sts)
elif os.WIFEXITED(sts):
self.returncode = os.WEXITSTATUS(sts)
else:
# Should never happen
raise RuntimeError("Unknown child exit status!")
def poll(self, _deadstate=None):
"""Check if child process has terminated. Returns returncode
attribute."""
if self.returncode is None:
try:
pid, sts = os.waitpid(self.pid, os.WNOHANG)
if pid == self.pid:
self._handle_exitstatus(sts)
except os.error:
if _deadstate is not None:
self.returncode = _deadstate
return self.returncode
def wait(self):
"""Wait for child process to terminate. Returns returncode
attribute."""
if self.returncode is None:
pid, sts = os.waitpid(self.pid, 0)
self._handle_exitstatus(sts)
return self.returncode
def _communicate(self, input):
read_set = []
write_set = []
stdout = None # Return
stderr = None # Return
if self.stdin:
# Flush stdio buffer. This might block, if the user has
# been writing to .stdin in an uncontrolled fashion.
self.stdin.flush()
if input:
write_set.append(self.stdin)
else:
self.stdin.close()
if self.stdout:
read_set.append(self.stdout)
stdout = []
if self.stderr:
read_set.append(self.stderr)
stderr = []
input_offset = 0
while read_set or write_set:
rlist, wlist, xlist = select.select(read_set, write_set, [])
if self.stdin in wlist:
# When select has indicated that the file is writable,
                    # we can write up to PIPE_BUF bytes without risk of
                    # blocking.  POSIX defines PIPE_BUF >= 512.
bytes_written = os.write(self.stdin.fileno(), buffer(input, input_offset, 512))
input_offset += bytes_written
if input_offset >= len(input):
self.stdin.close()
write_set.remove(self.stdin)
if self.stdout in rlist:
data = os.read(self.stdout.fileno(), 1024)
if data == "":
self.stdout.close()
read_set.remove(self.stdout)
stdout.append(data)
if self.stderr in rlist:
data = os.read(self.stderr.fileno(), 1024)
if data == "":
self.stderr.close()
read_set.remove(self.stderr)
stderr.append(data)
# All data exchanged. Translate lists into strings.
if stdout is not None:
stdout = ''.join(stdout)
if stderr is not None:
stderr = ''.join(stderr)
# Translate newlines, if requested. We cannot let the file
# object do the translation: It is based on stdio, which is
# impossible to combine with select (unless forcing no
# buffering).
if self.universal_newlines and hasattr(file, 'newlines'):
if stdout:
stdout = self._translate_newlines(stdout)
if stderr:
stderr = self._translate_newlines(stderr)
self.wait()
return (stdout, stderr)
def _demo_posix():
#
# Example 1: Simple redirection: Get process list
#
plist = Popen(["ps"], stdout=PIPE).communicate()[0]
print "Process list:"
print plist
#
# Example 2: Change uid before executing child
#
if os.getuid() == 0:
p = Popen(["id"], preexec_fn=lambda: os.setuid(100))
p.wait()
#
# Example 3: Connecting several subprocesses
#
print "Looking for 'hda'..."
p1 = Popen(["dmesg"], stdout=PIPE)
p2 = Popen(["grep", "hda"], stdin=p1.stdout, stdout=PIPE)
print repr(p2.communicate()[0])
#
# Example 4: Catch execution error
#
print
print "Trying a weird file..."
try:
print Popen(["/this/path/does/not/exist"]).communicate()
except OSError, e:
if e.errno == errno.ENOENT:
print "The file didn't exist. I thought so..."
print "Child traceback:"
print e.child_traceback
else:
print "Error", e.errno
else:
print >>sys.stderr, "Gosh. No error."
def _demo_windows():
#
# Example 1: Connecting several subprocesses
#
print "Looking for 'PROMPT' in set output..."
p1 = Popen("set", stdout=PIPE, shell=True)
p2 = Popen('find "PROMPT"', stdin=p1.stdout, stdout=PIPE)
print repr(p2.communicate()[0])
#
# Example 2: Simple execution of program
#
print "Executing calc..."
p = Popen("calc")
p.wait()
if __name__ == "__main__":
if mswindows:
_demo_windows()
else:
_demo_posix()
| mit | -8,167,670,807,610,235,000 | 34.017488 | 128 | 0.549305 | false |
mongmong/python-sweety | src/sweety/loader.py | 1 | 1184 | #!/usr/bin/env python
'''
sweety.loader
This module contains the functions for loading modules.
@author: Chris Chou <m2chrischou AT gmail.com>
@description:
'''
import os
import sys
import traceback
from sweety.log import get_logger
def load_file(filename):
'''
load_file(filename) -> module
Loads python module with specified filename.
'''
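    # Usage sketch (the path below is purely illustrative):
    #   mod = load_file('/path/to/plugin.py')
    # Returns the imported module, or None if the import fails (the error
    # is logged).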
dirname = os.path.dirname(filename)
dirname = os.path.abspath(dirname)
modulename = os.path.basename(filename)
modulename = modulename.rsplit('.', 1)[0]
if dirname:
sys.path.insert(0, dirname)
mod = None
try:
#print sys.path
mod = __import__(modulename, {}, {}, [''])
reload(mod)
except:
errinfo = traceback.format_exc()
_log = get_logger('smartcube.util.load_file')
_log.error(errinfo)
if dirname:
del sys.path[0]
return mod
def load_module(modulename):
mod = None
try:
mod = __import__(modulename, {}, {}, [''])
reload(mod)
except:
errinfo = traceback.format_exc()
_log = get_logger('smartcube.util.load_module')
_log.error(errinfo)
return mod
| bsd-2-clause | 5,655,514,452,603,742,000 | 18.733333 | 55 | 0.603041 | false |
blueboxgroup/ansible | lib/ansible/module_utils/vca.py | 5 | 10930 | #
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
import os
try:
from pyvcloud.vcloudair import VCA
HAS_PYVCLOUD = True
except ImportError:
HAS_PYVCLOUD = False
from ansible.module_utils.basic import AnsibleModule
SERVICE_MAP = {'vca': 'ondemand', 'vchs': 'subscription', 'vcd': 'vcd'}
LOGIN_HOST = {'vca': 'vca.vmware.com', 'vchs': 'vchs.vmware.com'}
DEFAULT_SERVICE_TYPE = 'vca'
DEFAULT_VERSION = '5.7'
class VcaError(Exception):
def __init__(self, msg, **kwargs):
self.kwargs = kwargs
super(VcaError, self).__init__(msg)
def vca_argument_spec():
return dict(
username=dict(type='str', aliases=['user'], required=True),
password=dict(type='str', aliases=['pass','passwd'], required=True, no_log=True),
org=dict(),
service_id=dict(),
instance_id=dict(),
host=dict(),
api_version=dict(default=DEFAULT_VERSION),
service_type=dict(default=DEFAULT_SERVICE_TYPE, choices=SERVICE_MAP.keys()),
vdc_name=dict(),
gateway_name=dict(default='gateway')
)
class VcaAnsibleModule(AnsibleModule):
def __init__(self, *args, **kwargs):
argument_spec = vca_argument_spec()
argument_spec.update(kwargs.get('argument_spec', dict()))
kwargs['argument_spec'] = argument_spec
super(VcaAnsibleModule, self).__init__(*args, **kwargs)
if not HAS_PYVCLOUD:
self.fail("python module pyvcloud is required for this module")
self._vca = self.create_instance()
self.login()
self._gateway = None
self._vdc = None
@property
def vca(self):
return self._vca
@property
def gateway(self):
if self._gateway is not None:
return self._gateway
vdc_name = self.params['vdc_name']
gateway_name = self.params['gateway_name']
_gateway = self.vca.get_gateway(vdc_name, gateway_name)
if not _gateway:
raise VcaError('vca instance has no gateway named %s' % gateway_name)
self._gateway = _gateway
return _gateway
@property
def vdc(self):
if self._vdc is not None:
return self._vdc
vdc_name = self.params['vdc_name']
_vdc = self.vca.get_vdc(vdc_name)
if not _vdc:
raise VcaError('vca instance has no vdc named %s' % vdc_name)
self._vdc = _vdc
return _vdc
def get_vapp(self, vapp_name):
vapp = self.vca.get_vapp(self.vdc, vapp_name)
if not vapp:
raise VcaError('vca instance has no vapp named %s' % vapp_name)
return vapp
def get_vm(self, vapp_name, vm_name):
vapp = self.get_vapp(vapp_name)
        # the vApp children are exposed via the underlying pyvcloud entity
        children = vapp.me.get_Children()
        vms = [vm for vm in children.get_Vm() if vm.name == vm_name]
try:
return vms[0]
except IndexError:
raise VcaError('vapp has no vm named %s' % vm_name)
def create_instance(self):
service_type = self.params.get('service_type', DEFAULT_SERVICE_TYPE)
if service_type == 'vcd':
host = self.params['host']
else:
host = LOGIN_HOST[service_type]
username = self.params['username']
version = self.params.get('api_version')
if service_type == 'vchs':
version = '5.6'
verify = self.params.get('verify_certs')
return VCA(host=host, username=username,
service_type=SERVICE_MAP[service_type],
version=version, verify=verify)
def login(self):
service_type = self.params['service_type']
password = self.params['password']
if not self.vca.login(password=password):
self.fail('Login to VCA failed', response=self.vca.response.content)
try:
method_name = 'login_%s' % service_type
meth = getattr(self, method_name)
meth()
except AttributeError:
self.fail('no login method exists for service_type %s' % service_type)
except VcaError, e:
self.fail(e.message, response=self.vca.response.content, **e.kwargs)
def login_vca(self):
instance_id = self.params['instance_id']
if not instance_id:
raise VcaError('missing required instance_id for service_type vca')
self.vca.login_to_instance_sso(instance=instance_id)
def login_vchs(self):
service_id = self.params['service_id']
if not service_id:
raise VcaError('missing required service_id for service_type vchs')
org = self.params['org']
if not org:
            raise VcaError('missing required org for service_type vchs')
self.vca.login_to_org(service_id, org)
def login_vcd(self):
org = self.params['org']
if not org:
            raise VcaError('missing required org for service_type vcd')
if not self.vca.token:
raise VcaError('unable to get token for service_type vcd')
if not self.vca.vcloud_session.org_url:
raise VcaError('unable to get org_url for service_type vcd')
self.vca.login(token=self.vca.token, org=org,
org_url=self.vca.vcloud_session.org_url)
def save_services_config(self, blocking=True):
task = self.gateway.save_services_configuration()
if not task:
self.fail(msg='unable to save gateway services configuration')
if blocking:
self.vca.block_until_completed(task)
def fail(self, msg, **kwargs):
self.fail_json(msg=msg, **kwargs)
def exit(self, **kwargs):
self.exit_json(**kwargs)
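# Usage sketch (the 'vapp_name' parameter below is illustrative, not part of
# this module):
#
#   module = VcaAnsibleModule(argument_spec=dict(vapp_name=dict(required=True)),
#                             supports_check_mode=True)
#   vapp = module.get_vapp(module.params['vapp_name'])
#   module.exit(changed=False, vapp_name=module.params['vapp_name'])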
# -------------------------------------------------------------
# 9/18/2015 @privateip
# All of the functions below here were migrated from the original
# vca_* modules. All functions below should be considered deprecated
# and will be removed once all of the vca_* modules have been updated
# to use the new instance module above
# -------------------------------------------------------------
VCA_REQ_ARGS = ['instance_id', 'vdc_name']
VCHS_REQ_ARGS = ['service_id']
VCD_REQ_ARGS = []  # referenced by _validate_module below; minimal placeholder definition
def _validate_module(module):
if not HAS_PYVCLOUD:
module.fail_json(msg="python module pyvcloud is needed for this module")
service_type = module.params.get('service_type', DEFAULT_SERVICE_TYPE)
if service_type == 'vca':
for arg in VCA_REQ_ARGS:
if module.params.get(arg) is None:
module.fail_json(msg="argument %s is mandatory when service type "
"is vca" % arg)
if service_type == 'vchs':
for arg in VCHS_REQ_ARGS:
if module.params.get(arg) is None:
module.fail_json(msg="argument %s is mandatory when service type "
"is vchs" % arg)
if service_type == 'vcd':
for arg in VCD_REQ_ARGS:
if module.params.get(arg) is None:
module.fail_json(msg="argument %s is mandatory when service type "
"is vcd" % arg)
def serialize_instances(instance_list):
instances = []
for i in instance_list:
instances.append(dict(apiUrl=i['apiUrl'], instance_id=i['id']))
return instances
def _vca_login(vca, password, instance):
if not vca.login(password=password):
raise VcaError("Login Failed: Please check username or password",
error=vca.response.content)
if not vca.login_to_instance_sso(instance=instance):
s_json = serialize_instances(vca.instances)
raise VcaError("Login to Instance failed: Seems like instance_id provided "
"is wrong .. Please check", valid_instances=s_json)
return vca
def _vchs_login(vca, password, service, org):
if not vca.login(password=password):
raise VcaError("Login Failed: Please check username or password",
error=vca.response.content)
if not vca.login_to_org(service, org):
raise VcaError("Failed to login to org, Please check the orgname",
error=vca.response.content)
def _vcd_login(vca, password, org):
# TODO: this function needs to be refactored
if not vca.login(password=password, org=org):
raise VcaError("Login Failed: Please check username or password "
"or host parameters")
if not vca.login(password=password, org=org):
raise VcaError("Failed to get the token",
error=vca.response.content)
if not vca.login(token=vca.token, org=org, org_url=vca.vcloud_session.org_url):
raise VcaError("Failed to login to org", error=vca.response.content)
def vca_login(module):
service_type = module.params.get('service_type')
username = module.params.get('username')
password = module.params.get('password')
instance = module.params.get('instance_id')
org = module.params.get('org')
vdc_name = module.params.get('vdc_name')
service = module.params.get('service_id')
version = module.params.get('api_version')
verify = module.params.get('verify_certs')
_validate_module(module)
if not vdc_name and service_type == 'vchs':
vdc_name = module.params.get('service_id')
if not org and service_type == 'vchs':
org = vdc_name or service
if service_type == 'vcd':
host = module.params.get('host')
else:
host = LOGIN_HOST[service_type]
username = os.environ.get('VCA_USER', username)
password = os.environ.get('VCA_PASS', password)
if not username or not password:
msg = "Either the username or password is not set, please check args"
module.fail_json(msg=msg)
if service_type == 'vchs':
version = '5.6'
elif service_type == 'vcd' and not version:
        version = '5.6'
vca = VCA(host=host, username=username,
service_type=SERVICE_MAP[service_type],
version=version, verify=verify)
try:
if service_type == 'vca':
_vca_login(vca, password, instance)
elif service_type == 'vchs':
_vchs_login(vca, password, service, org)
elif service_type == 'vcd':
_vcd_login(vca, password, org)
except VcaError, e:
module.fail_json(msg=e.message, **e.kwargs)
return vca
| gpl-3.0 | 1,363,206,694,171,736,600 | 32.734568 | 89 | 0.606221 | false |
hhsprings/cython | Cython/Compiler/TypeInference.py | 13 | 21051 | from __future__ import absolute_import
from .Errors import error, message
from . import ExprNodes
from . import Nodes
from . import Builtin
from . import PyrexTypes
from .. import Utils
from .PyrexTypes import py_object_type, unspecified_type
from .Visitor import CythonTransform, EnvTransform
try:
reduce
except NameError:
from functools import reduce
class TypedExprNode(ExprNodes.ExprNode):
# Used for declaring assignments of a specified type without a known entry.
subexprs = []
def __init__(self, type, pos=None):
super(TypedExprNode, self).__init__(pos, type=type)
object_expr = TypedExprNode(py_object_type)
class MarkParallelAssignments(EnvTransform):
    # Collects assignments inside parallel blocks (prange, with parallel).
# Perhaps it's better to move it to ControlFlowAnalysis.
# tells us whether we're in a normal loop
in_loop = False
parallel_errors = False
def __init__(self, context):
# Track the parallel block scopes (with parallel, for i in prange())
self.parallel_block_stack = []
super(MarkParallelAssignments, self).__init__(context)
def mark_assignment(self, lhs, rhs, inplace_op=None):
if isinstance(lhs, (ExprNodes.NameNode, Nodes.PyArgDeclNode)):
if lhs.entry is None:
# TODO: This shouldn't happen...
return
if self.parallel_block_stack:
parallel_node = self.parallel_block_stack[-1]
previous_assignment = parallel_node.assignments.get(lhs.entry)
# If there was a previous assignment to the variable, keep the
# previous assignment position
if previous_assignment:
pos, previous_inplace_op = previous_assignment
if (inplace_op and previous_inplace_op and
inplace_op != previous_inplace_op):
# x += y; x *= y
t = (inplace_op, previous_inplace_op)
error(lhs.pos,
"Reduction operator '%s' is inconsistent "
"with previous reduction operator '%s'" % t)
else:
pos = lhs.pos
parallel_node.assignments[lhs.entry] = (pos, inplace_op)
parallel_node.assigned_nodes.append(lhs)
elif isinstance(lhs, ExprNodes.SequenceNode):
for i, arg in enumerate(lhs.args):
if not rhs or arg.is_starred:
item_node = None
else:
item_node = rhs.inferable_item_node(i)
self.mark_assignment(arg, item_node)
else:
# Could use this info to infer cdef class attributes...
pass
def visit_WithTargetAssignmentStatNode(self, node):
self.mark_assignment(node.lhs, node.with_node.enter_call)
self.visitchildren(node)
return node
def visit_SingleAssignmentNode(self, node):
self.mark_assignment(node.lhs, node.rhs)
self.visitchildren(node)
return node
def visit_CascadedAssignmentNode(self, node):
for lhs in node.lhs_list:
self.mark_assignment(lhs, node.rhs)
self.visitchildren(node)
return node
def visit_InPlaceAssignmentNode(self, node):
self.mark_assignment(node.lhs, node.create_binop_node(), node.operator)
self.visitchildren(node)
return node
def visit_ForInStatNode(self, node):
# TODO: Remove redundancy with range optimization...
is_special = False
sequence = node.iterator.sequence
target = node.target
if isinstance(sequence, ExprNodes.SimpleCallNode):
function = sequence.function
if sequence.self is None and function.is_name:
entry = self.current_env().lookup(function.name)
if not entry or entry.is_builtin:
if function.name == 'reversed' and len(sequence.args) == 1:
sequence = sequence.args[0]
elif function.name == 'enumerate' and len(sequence.args) == 1:
if target.is_sequence_constructor and len(target.args) == 2:
iterator = sequence.args[0]
if iterator.is_name:
iterator_type = iterator.infer_type(self.current_env())
if iterator_type.is_builtin_type:
# assume that builtin types have a length within Py_ssize_t
self.mark_assignment(
target.args[0],
ExprNodes.IntNode(target.pos, value='PY_SSIZE_T_MAX',
type=PyrexTypes.c_py_ssize_t_type))
target = target.args[1]
sequence = sequence.args[0]
if isinstance(sequence, ExprNodes.SimpleCallNode):
function = sequence.function
if sequence.self is None and function.is_name:
entry = self.current_env().lookup(function.name)
if not entry or entry.is_builtin:
if function.name in ('range', 'xrange'):
is_special = True
for arg in sequence.args[:2]:
self.mark_assignment(target, arg)
if len(sequence.args) > 2:
self.mark_assignment(
target,
ExprNodes.binop_node(node.pos,
'+',
sequence.args[0],
sequence.args[2]))
if not is_special:
# A for-loop basically translates to subsequent calls to
# __getitem__(), so using an IndexNode here allows us to
# naturally infer the base type of pointers, C arrays,
# Python strings, etc., while correctly falling back to an
# object type when the base type cannot be handled.
self.mark_assignment(target, ExprNodes.IndexNode(
node.pos,
base=sequence,
index=ExprNodes.IntNode(target.pos, value='PY_SSIZE_T_MAX',
type=PyrexTypes.c_py_ssize_t_type)))
self.visitchildren(node)
return node
def visit_ForFromStatNode(self, node):
self.mark_assignment(node.target, node.bound1)
if node.step is not None:
self.mark_assignment(node.target,
ExprNodes.binop_node(node.pos,
'+',
node.bound1,
node.step))
self.visitchildren(node)
return node
def visit_WhileStatNode(self, node):
self.visitchildren(node)
return node
def visit_ExceptClauseNode(self, node):
if node.target is not None:
self.mark_assignment(node.target, object_expr)
self.visitchildren(node)
return node
def visit_FromCImportStatNode(self, node):
pass # Can't be assigned to...
def visit_FromImportStatNode(self, node):
for name, target in node.items:
if name != "*":
self.mark_assignment(target, object_expr)
self.visitchildren(node)
return node
def visit_DefNode(self, node):
# use fake expressions with the right result type
if node.star_arg:
self.mark_assignment(
node.star_arg, TypedExprNode(Builtin.tuple_type, node.pos))
if node.starstar_arg:
self.mark_assignment(
node.starstar_arg, TypedExprNode(Builtin.dict_type, node.pos))
EnvTransform.visit_FuncDefNode(self, node)
return node
def visit_DelStatNode(self, node):
for arg in node.args:
self.mark_assignment(arg, arg)
self.visitchildren(node)
return node
def visit_ParallelStatNode(self, node):
if self.parallel_block_stack:
node.parent = self.parallel_block_stack[-1]
else:
node.parent = None
nested = False
if node.is_prange:
if not node.parent:
node.is_parallel = True
else:
node.is_parallel = (node.parent.is_prange or not
node.parent.is_parallel)
nested = node.parent.is_prange
else:
node.is_parallel = True
# Note: nested with parallel() blocks are handled by
# ParallelRangeTransform!
# nested = node.parent
nested = node.parent and node.parent.is_prange
self.parallel_block_stack.append(node)
nested = nested or len(self.parallel_block_stack) > 2
if not self.parallel_errors and nested and not node.is_prange:
error(node.pos, "Only prange() may be nested")
self.parallel_errors = True
if node.is_prange:
child_attrs = node.child_attrs
node.child_attrs = ['body', 'target', 'args']
self.visitchildren(node)
node.child_attrs = child_attrs
self.parallel_block_stack.pop()
if node.else_clause:
node.else_clause = self.visit(node.else_clause)
else:
self.visitchildren(node)
self.parallel_block_stack.pop()
self.parallel_errors = False
return node
def visit_YieldExprNode(self, node):
if self.parallel_block_stack:
error(node.pos, "Yield not allowed in parallel sections")
return node
def visit_ReturnStatNode(self, node):
node.in_parallel = bool(self.parallel_block_stack)
return node
class MarkOverflowingArithmetic(CythonTransform):
# It may be possible to integrate this with the above for
# performance improvements (though likely not worth it).
might_overflow = False
def __call__(self, root):
self.env_stack = []
self.env = root.scope
return super(MarkOverflowingArithmetic, self).__call__(root)
def visit_safe_node(self, node):
self.might_overflow, saved = False, self.might_overflow
self.visitchildren(node)
self.might_overflow = saved
return node
def visit_neutral_node(self, node):
self.visitchildren(node)
return node
def visit_dangerous_node(self, node):
self.might_overflow, saved = True, self.might_overflow
self.visitchildren(node)
self.might_overflow = saved
return node
def visit_FuncDefNode(self, node):
self.env_stack.append(self.env)
self.env = node.local_scope
self.visit_safe_node(node)
self.env = self.env_stack.pop()
return node
def visit_NameNode(self, node):
if self.might_overflow:
entry = node.entry or self.env.lookup(node.name)
if entry:
entry.might_overflow = True
return node
def visit_BinopNode(self, node):
if node.operator in '&|^':
return self.visit_neutral_node(node)
else:
return self.visit_dangerous_node(node)
visit_UnopNode = visit_neutral_node
visit_UnaryMinusNode = visit_dangerous_node
visit_InPlaceAssignmentNode = visit_dangerous_node
visit_Node = visit_safe_node
def visit_assignment(self, lhs, rhs):
if (isinstance(rhs, ExprNodes.IntNode)
and isinstance(lhs, ExprNodes.NameNode)
and Utils.long_literal(rhs.value)):
entry = lhs.entry or self.env.lookup(lhs.name)
if entry:
entry.might_overflow = True
def visit_SingleAssignmentNode(self, node):
self.visit_assignment(node.lhs, node.rhs)
self.visitchildren(node)
return node
def visit_CascadedAssignmentNode(self, node):
for lhs in node.lhs_list:
self.visit_assignment(lhs, node.rhs)
self.visitchildren(node)
return node
class PyObjectTypeInferer(object):
"""
If it's not declared, it's a PyObject.
"""
def infer_types(self, scope):
"""
Given a dict of entries, map all unspecified types to a specified type.
"""
for name, entry in scope.entries.items():
if entry.type is unspecified_type:
entry.type = py_object_type
class SimpleAssignmentTypeInferer(object):
"""
Very basic type inference.
    Note: in order to support cross-closure type inference, this must be
    applied to nested scopes in top-down order.
"""
def set_entry_type(self, entry, entry_type):
entry.type = entry_type
for e in entry.all_entries():
e.type = entry_type
def infer_types(self, scope):
enabled = scope.directives['infer_types']
verbose = scope.directives['infer_types.verbose']
if enabled == True:
spanning_type = aggressive_spanning_type
elif enabled is None: # safe mode
spanning_type = safe_spanning_type
else:
for entry in scope.entries.values():
if entry.type is unspecified_type:
self.set_entry_type(entry, py_object_type)
return
        # Set of assignments
assignments = set()
assmts_resolved = set()
dependencies = {}
assmt_to_names = {}
for name, entry in scope.entries.items():
for assmt in entry.cf_assignments:
names = assmt.type_dependencies()
assmt_to_names[assmt] = names
assmts = set()
for node in names:
assmts.update(node.cf_state)
dependencies[assmt] = assmts
if entry.type is unspecified_type:
assignments.update(entry.cf_assignments)
else:
assmts_resolved.update(entry.cf_assignments)
def infer_name_node_type(node):
types = [assmt.inferred_type for assmt in node.cf_state]
if not types:
node_type = py_object_type
else:
entry = node.entry
node_type = spanning_type(
types, entry.might_overflow, entry.pos)
node.inferred_type = node_type
def infer_name_node_type_partial(node):
types = [assmt.inferred_type for assmt in node.cf_state
if assmt.inferred_type is not None]
if not types:
return
entry = node.entry
return spanning_type(types, entry.might_overflow, entry.pos)
def resolve_assignments(assignments):
resolved = set()
for assmt in assignments:
deps = dependencies[assmt]
# All assignments are resolved
if assmts_resolved.issuperset(deps):
for node in assmt_to_names[assmt]:
infer_name_node_type(node)
# Resolve assmt
inferred_type = assmt.infer_type()
assmts_resolved.add(assmt)
resolved.add(assmt)
assignments.difference_update(resolved)
return resolved
def partial_infer(assmt):
partial_types = []
for node in assmt_to_names[assmt]:
partial_type = infer_name_node_type_partial(node)
if partial_type is None:
return False
partial_types.append((node, partial_type))
for node, partial_type in partial_types:
node.inferred_type = partial_type
assmt.infer_type()
return True
partial_assmts = set()
def resolve_partial(assignments):
# try to handle circular references
partials = set()
for assmt in assignments:
if assmt in partial_assmts:
continue
if partial_infer(assmt):
partials.add(assmt)
assmts_resolved.add(assmt)
partial_assmts.update(partials)
return partials
# Infer assignments
while True:
if not resolve_assignments(assignments):
if not resolve_partial(assignments):
break
inferred = set()
# First pass
for entry in scope.entries.values():
if entry.type is not unspecified_type:
continue
entry_type = py_object_type
if assmts_resolved.issuperset(entry.cf_assignments):
types = [assmt.inferred_type for assmt in entry.cf_assignments]
if types and all(types):
entry_type = spanning_type(
types, entry.might_overflow, entry.pos)
inferred.add(entry)
self.set_entry_type(entry, entry_type)
def reinfer():
dirty = False
for entry in inferred:
types = [assmt.infer_type()
for assmt in entry.cf_assignments]
new_type = spanning_type(types, entry.might_overflow, entry.pos)
if new_type != entry.type:
self.set_entry_type(entry, new_type)
dirty = True
return dirty
# types propagation
while reinfer():
pass
if verbose:
for entry in inferred:
message(entry.pos, "inferred '%s' to be of type '%s'" % (
entry.name, entry.type))
def find_spanning_type(type1, type2):
if type1 is type2:
result_type = type1
elif type1 is PyrexTypes.c_bint_type or type2 is PyrexTypes.c_bint_type:
# type inference can break the coercion back to a Python bool
# if it returns an arbitrary int type here
return py_object_type
else:
result_type = PyrexTypes.spanning_type(type1, type2)
if result_type in (PyrexTypes.c_double_type, PyrexTypes.c_float_type,
Builtin.float_type):
# Python's float type is just a C double, so it's safe to
# use the C type instead
return PyrexTypes.c_double_type
return result_type
def simply_type(result_type, pos):
if result_type.is_reference:
result_type = result_type.ref_base_type
if result_type.is_const:
result_type = result_type.const_base_type
if result_type.is_cpp_class:
result_type.check_nullary_constructor(pos)
if result_type.is_array:
result_type = PyrexTypes.c_ptr_type(result_type.base_type)
return result_type
def aggressive_spanning_type(types, might_overflow, pos):
return simply_type(reduce(find_spanning_type, types), pos)
def safe_spanning_type(types, might_overflow, pos):
result_type = simply_type(reduce(find_spanning_type, types), pos)
if result_type.is_pyobject:
# In theory, any specific Python type is always safe to
# infer. However, inferring str can cause some existing code
# to break, since we are also now much more strict about
# coercion from str to char *. See trac #553.
if result_type.name == 'str':
return py_object_type
else:
return result_type
elif result_type is PyrexTypes.c_double_type:
# Python's float type is just a C double, so it's safe to use
# the C type instead
return result_type
elif result_type is PyrexTypes.c_bint_type:
# find_spanning_type() only returns 'bint' for clean boolean
# operations without other int types, so this is safe, too
return result_type
elif result_type.is_ptr:
# Any pointer except (signed|unsigned|) char* can't implicitly
# become a PyObject, and inferring char* is now accepted, too.
return result_type
elif result_type.is_cpp_class:
# These can't implicitly become Python objects either.
return result_type
elif result_type.is_struct:
# Though we have struct -> object for some structs, this is uncommonly
# used, won't arise in pure Python, and there shouldn't be side
# effects, so I'm declaring this safe.
return result_type
# TODO: double complex should be OK as well, but we need
# to make sure everything is supported.
elif (result_type.is_int or result_type.is_enum) and not might_overflow:
return result_type
return py_object_type
def get_type_inferer():
return SimpleAssignmentTypeInferer()
| apache-2.0 | 2,212,766,573,687,447,000 | 36.524064 | 95 | 0.567146 | false |
gangadharkadam/vlinkerp | erpnext/config/crm.py | 16 | 2937 | from frappe import _
def get_data():
return [
{
"label": _("Documents"),
"icon": "icon-star",
"items": [
{
"type": "doctype",
"name": "Lead",
"description": _("Database of potential customers."),
},
{
"type": "doctype",
"name": "Customer",
"description": _("Customer database."),
},
{
"type": "doctype",
"name": "Opportunity",
"description": _("Potential opportunities for selling."),
},
{
"type": "doctype",
"name": "Contact",
"description": _("All Contacts."),
},
{
"type": "doctype",
"name": "Newsletter",
"description": _("Newsletters to contacts, leads."),
},
]
},
{
"label": _("Tools"),
"icon": "icon-wrench",
"items": [
{
"type": "doctype",
"name": "SMS Center",
"description":_("Send mass SMS to your contacts"),
},
]
},
{
"label": _("Setup"),
"icon": "icon-cog",
"items": [
{
"type": "doctype",
"name": "Campaign",
"description": _("Sales campaigns."),
},
{
"type": "page",
"label": _("Customer Group"),
"name": "Sales Browser",
"icon": "icon-sitemap",
"link": "Sales Browser/Customer Group",
"description": _("Manage Customer Group Tree."),
"doctype": "Customer Group",
},
{
"type": "page",
"label": _("Territory"),
"name": "Sales Browser",
"icon": "icon-sitemap",
"link": "Sales Browser/Territory",
"description": _("Manage Territory Tree."),
"doctype": "Territory",
},
{
"type": "page",
"label": _("Sales Person"),
"name": "Sales Browser",
"icon": "icon-sitemap",
"link": "Sales Browser/Sales Person",
"description": _("Manage Sales Person Tree."),
"doctype": "Sales Person",
},
{
"type": "doctype",
"name": "Newsletter List",
"description": _("Newsletter Mailing List"),
},
{
"type": "doctype",
"name": "SMS Settings",
"description": _("Setup SMS gateway settings")
},
]
},
{
"label": _("Main Reports"),
"icon": "icon-table",
"items": [
{
"type": "page",
"name": "sales-funnel",
"label": _("Sales Funnel"),
"icon": "icon-bar-chart",
},
]
},
{
"label": _("Standard Reports"),
"icon": "icon-list",
"items": [
{
"type": "report",
"is_query_report": True,
"name": "Lead Details",
"doctype": "Lead"
},
{
"type": "report",
"is_query_report": True,
"name": "Customer Addresses and Contacts",
"doctype": "Contact"
},
{
"type": "report",
"is_query_report": True,
"name": "Customers Not Buying Since Long Time",
"doctype": "Sales Order"
},
]
},
{
"label": _("Help"),
"items": [
{
"type": "help",
"label": _("Lead to Quotation"),
"youtube_id": "TxYX4r4JAKA"
},
]
},
]
| agpl-3.0 | -1,340,473,570,891,689,200 | 19.829787 | 62 | 0.483146 | false |
brianyu2010/Mini-Metagenomic_Analyses | Snakefile_combined_analysis.py | 1 | 7049 | ###############################################
# Snakemake rules associated with combined
# analysis of each biosample from subsample results
# this file must be included into another
# Snakemake file
###############################################
# rename each contig with subsample information
# this rule is no longer used 2015.07.15
rule combine_contigs:
input: expand("{subsample}/contigs.{subsample}.fasta", subsample=subsampleIDs)
output: "Combined_Analysis/subsample_contigs.{id}.fasta", "Combined_Analysis/subsample_contigs_name.{id}.txt"
params:
name="combine_contigs",
partition="general",
mem="3000"
threads: 1
version: "1.0"
run:
assert(file_empty(input)),"One of the input contigs is empty."
shell("source /local10G/brianyu/anaconda/bin/activate /local10G/brianyu/anaconda/ &&\
python {code_dir}/snakehelper_combine_subsample_contigs.py {input} -o {output[0]} -l {output[1]}")
assert(file_empty(output)),"Either the combined contigs or names is empty."
# this rule is no longer used 2015.07.15
rule superContig_distribution:
# it is important that the order for input is query and then contig
input:
"{folder}/super_contigs.{id}.fasta",
"{folder}/subsample_contigs.{id}.fasta",
"{folder}/super_contigs_name.{id}.txt",
"{folder}/subsample_contigs_name.{id}.txt"
output:
"{folder}/super_contigs_blast_report.{id}.txt",
"{folder}/super_contigs_distribution.{id}.txt"
params:
name="superContig_distribution",
partition="general",
mem="20000", # don't change this
contig_thresh=parameters.ix['biosample_contig_thresh','entry']
threads: 4
version: "1.0"
run:
# Managing files and obtain scratch location
scratch = get_scratch(False)
input_on_scratch = names_on_scratch(input, scratch)
output_on_scratch = names_on_scratch(output, scratch)
cp_to_scratch(input, scratch)
# Performing Blast and Data Summary
shell("bash {code_dir}/snakehelper_localblast.sh {scratch} {input_on_scratch[0]} {input_on_scratch[1]} {output_on_scratch[0]} {tool_dir} {code_dir} {threads} {params.contig_thresh}")
# path on scratch contains absolute path information
shell("source /local10G/brianyu/anaconda/bin/activate /local10G/brianyu/anaconda/ &&\
python {code_dir}/snakehelper_contig_similarity.py {input_on_scratch[2]} {input_on_scratch[3]} {output_on_scratch[0]} {output_on_scratch[1]}")
cp_from_scratch(output, scratch)
# This rule is no longer used 2015.07.15
rule contig_similarity:
input:
"{folder}/{filename}_contigs.{id}.fasta",
"{folder}/{filename}_contigs_name.{id}.txt"
output:
"{folder}/{filename}_contigs.{id}.similarity_blast_report.txt",
"{folder}/{filename}_contigs.{id}.similarity_matrix.txt"
params:
name="contig_similarity",
partition="long",
mem="40000",
contig_thresh=parameters.ix['biosample_contig_thresh','entry']
threads: 11
version: "1.0"
run:
# Managing files and obtain scratch location
scratch = get_scratch(False)
input_on_scratch = names_on_scratch(input, scratch)
output_on_scratch = names_on_scratch(output, scratch)
cp_to_scratch(input, scratch)
# Performing Blast and Data Summary
shell("bash {code_dir}/snakehelper_localblast.sh {scratch} {input_on_scratch[0]} {input_on_scratch[0]} {output_on_scratch[0]} {tool_dir} {code_dir} {threads} {params.contig_thresh}") # typically use {contig_thresh}
shell("source /local10G/brianyu/anaconda/bin/activate /local10G/brianyu/anaconda/ &&\
python {code_dir}/snakehelper_contig_similarity.py {input_on_scratch[1]} {input_on_scratch[1]} {output_on_scratch[0]} {output_on_scratch[1]}")
assert(file_empty(output_on_scratch)),"One of the output files are empty."
cp_from_scratch(output, scratch)
rule organize_subsample_blast:
input: expand("{subsample}/BlastResults.{subsample}.txt", subsample=subsampleIDs)
output: "Combined_Analysis/subsample_species_abundance.{id}.txt"
params:
name="organize_subsample_blast",
partition="general",
mem="10000",
species_thresh=parameters.ix['species_number_thresh','entry']
threads: 1
version: "1.0"
run:
# Managing files and obtain scratch location
scratch = get_scratch(False)
input_on_scratch = names_on_scratch(input, scratch)
output_on_scratch = names_on_scratch(output, scratch)
# print(output_on_scratch)
cp_to_scratch(input, scratch)
# first pass to get top n species
shell("touch {output_on_scratch}")
for i in range(len(input)):
# print(input_on_scratch[i])
if file_empty([input_on_scratch[i]]):
# assert(file_empty([input_on_scratch[i]])),"Input file is empty."
sample_name = input_on_scratch[i].split('.')[-2]
filename = input_on_scratch[i]
shell("sort {filename} > {scratch}/sorted_blast_results.txt")
with open(scratch+"/sorted_blast_results.txt",'r') as finput:
last_qseqid = ''
last_sseqid = ''
species = []
for line in finput:
# must split line with tab because there are spaces in fields
if line.split('\t')[9] == "Bacteria":
if line.split('\t')[0] == last_qseqid:
if line.split('\t')[1] != last_sseqid:
print('Warning: '+last_qseqid+' got matched to '+last_sseqid+' and '+line.split()[1]+'\n')
else:
# print(line.split('\t')[7])
last_qseqid = line.split('\t')[0]
last_sseqid = line.split('\t')[1]
species.append(line.split('\t')[7]) # 8th column is the scientific name
species_cnt = dict([(i,species.count(i)) for i in set(species)])
# sort species_cnt dictionary by the values
sorted_keys = sorted(species_cnt, key=species_cnt.get, reverse=True)
# Append to output file the new subsample
with open(output_on_scratch[0],'a') as f:
# The '>' is added in front for later parsing with python script
t = f.write('>'+sample_name+'\n')
t = f.write('total\t'+str(len(species))+'\n')
for k in sorted_keys:
t = f.write(str(species_cnt[k])+'\t'+k+'\n')
# shell("echo 'start'")
# shell("head -n 50 {output_on_scratch[0]}")
# shell("echo 'end'")
else:
sample_name = input_on_scratch[i].split('.')[-2]
filename = input_on_scratch[i]
print('Warning: Input file '+filename+' is empty.\n')
with open(output_on_scratch[0],'a') as f:
# The '>' is added in front for later parsing with python script
t = f.write('>'+sample_name+'\n')
t = f.write('total\t0\n')
    # second pass to organize data for plotting
shell("source /local10G/brianyu/anaconda/bin/activate /local10G/brianyu/anaconda/ &&\
python {code_dir}/snakehelper_subsample_topspecies.py {output_on_scratch}")
assert(file_empty([output_on_scratch[0]])),"Output is empty"
cp_from_scratch(output, scratch)
| gpl-3.0 | -835,447,864,730,813,000 | 44.477419 | 218 | 0.640375 | false |
yangjiandong/djangosnippets | cab/views/popular.py | 1 | 1362 | import datetime
from django.contrib.auth.models import User
from django.shortcuts import render_to_response
from django.template.context import RequestContext
from django.views.generic.list_detail import object_list
from taggit.models import Tag
from cab.models import Snippet, Language, Bookmark
from cab.utils import month_object_list
def top_authors(request):
return object_list(
request,
queryset=Snippet.objects.top_authors(),
template_name='cab/top_authors.html',
paginate_by=20)
def top_languages(request):
return object_list(
request,
queryset=Language.objects.top_languages(),
template_name='cab/language_list.html',
paginate_by=20)
def top_tags(request):
return object_list(
request,
queryset=Snippet.objects.top_tags(),
template_name='cab/tag_list.html',
paginate_by=20,
)
def top_bookmarked(request):
queryset = Snippet.objects.most_bookmarked()
return month_object_list(
request,
queryset=queryset,
template_name='cab/most_bookmarked.html',
paginate_by=20,
)
def top_rated(request):
queryset = Snippet.objects.top_rated()
return month_object_list(
request,
queryset=queryset,
template_name='cab/top_rated.html',
paginate_by=20,
)
| bsd-3-clause | -5,742,447,194,425,680,000 | 24.222222 | 56 | 0.668135 | false |
50wu/gpdb | gpMgmt/bin/gppylib/gpMgmttest/__init__.py | 29 | 2244 | import unittest
import time
class GpMgmtTestRunner(unittest.TextTestRunner):
def _makeResult(self):
return GpMgmtTextTestResult(self.stream, self.descriptions, self.verbosity)
class GpMgmtTextTestResult(unittest.TextTestResult):
def __init__(self, stream, descriptions, verbosity):
super(GpMgmtTextTestResult, self).__init__(stream, descriptions, verbosity)
self.verbosity = verbosity
self.startTime = 0
def getDescription(self, test):
case_name, full_name = test.__str__().split()
suite_name, class_name = full_name.strip('()').rsplit('.',1)
if self.verbosity > 1:
if test.shortDescription():
return 'Test Suite Name|%s|Test Case Name|%s|Test Details|%s' % (suite_name, case_name, test.shortDescription())
else:
return 'Test Suite Name|%s|Test Case Name|%s|Test Details|' % (suite_name, case_name)
def startTest(self, test):
super(GpMgmtTextTestResult, self).startTest(test)
self.startTime = test.start_time = time.time()
def addSuccess(self, test):
test.end_time = time.time()
self._show_run_time()
self.stream.write('|Test Status|')
super(GpMgmtTextTestResult, self).addSuccess(test)
def addError(self, test, err):
test.end_time = time.time()
self._show_run_time()
self.stream.write('|Test Status|')
super(GpMgmtTextTestResult, self).addError(test, err)
def addFailure(self, test, err):
test.end_time = time.time()
self._show_run_time()
self.stream.write('|Test Status|')
super(GpMgmtTextTestResult, self).addFailure(test, err)
def addSkip(self, test, err):
self._show_run_time()
self.stream.write('|Test Status|')
super(GpMgmtTextTestResult, self).addSkip(test, err)
def addExpectedFailure(self, test, err):
test.end_time = time.time()
self._show_run_time()
self.stream.write('|Test Status|')
super(GpMgmtTextTestResult, self).addExpectedFailure(test, err)
def _show_run_time(self):
etime = time.time()
elapsed = etime - self.startTime
self.stream.write('(%4.2f ms)' % (elapsed*1000))
| apache-2.0 | -5,969,435,742,392,830,000 | 36.4 | 128 | 0.63057 | false |
procangroup/edx-platform | lms/djangoapps/instructor_task/migrations/0002_gradereportsetting.py | 25 | 1167 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
from django.conf import settings
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('instructor_task', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='GradeReportSetting',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('change_date', models.DateTimeField(auto_now_add=True, verbose_name='Change date')),
('enabled', models.BooleanField(default=False, verbose_name='Enabled')),
('batch_size', models.IntegerField(default=100)),
('changed_by', models.ForeignKey(on_delete=django.db.models.deletion.PROTECT, editable=False, to=settings.AUTH_USER_MODEL, null=True, verbose_name='Changed by')),
],
options={
'ordering': ('-change_date',),
'abstract': False,
},
),
]
| agpl-3.0 | -3,656,651,798,719,279,000 | 36.645161 | 178 | 0.602399 | false |
dcramer/django-compositepks | django/contrib/gis/admin/widgets.py | 6 | 3727 | from django.contrib.gis.gdal import OGRException
from django.contrib.gis.geos import GEOSGeometry, GEOSException
from django.forms.widgets import Textarea
from django.template.loader import render_to_string
class OpenLayersWidget(Textarea):
"""
Renders an OpenLayers map using the WKT of the geometry.
"""
def render(self, name, value, attrs=None):
# Update the template parameters with any attributes passed in.
if attrs: self.params.update(attrs)
# Defaulting the WKT value to a blank string -- this
# will be tested in the JavaScript and the appropriate
# interface will be constructed.
self.params['wkt'] = ''
# If a string reaches here (via a validation error on another
# field) then just reconstruct the Geometry.
if isinstance(value, basestring):
try:
value = GEOSGeometry(value)
except (GEOSException, ValueError):
value = None
if value and value.geom_type.upper() != self.geom_type:
value = None
# Constructing the dictionary of the map options.
self.params['map_options'] = self.map_options()
# Constructing the JavaScript module name using the ID of
# the GeometryField (passed in via the `attrs` keyword).
self.params['module'] = 'geodjango_%s' % self.params['field_name']
if value:
# Transforming the geometry to the projection used on the
# OpenLayers map.
srid = self.params['srid']
if value.srid != srid:
try:
value.transform(srid)
wkt = value.wkt
except OGRException:
wkt = ''
else:
wkt = value.wkt
# Setting the parameter WKT with that of the transformed
# geometry.
self.params['wkt'] = wkt
return render_to_string(self.template, self.params)
def map_options(self):
"Builds the map options hash for the OpenLayers template."
# JavaScript construction utilities for the Bounds and Projection.
def ol_bounds(extent):
return 'new OpenLayers.Bounds(%s)' % str(extent)
def ol_projection(srid):
return 'new OpenLayers.Projection("EPSG:%s")' % srid
# An array of the parameter name, the name of their OpenLayers
# counterpart, and the type of variable they are.
map_types = [('srid', 'projection', 'srid'),
('display_srid', 'displayProjection', 'srid'),
('units', 'units', str),
('max_resolution', 'maxResolution', float),
('max_extent', 'maxExtent', 'bounds'),
('num_zoom', 'numZoomLevels', int),
('max_zoom', 'maxZoomLevels', int),
('min_zoom', 'minZoomLevel', int),
]
# Building the map options hash.
map_options = {}
for param_name, js_name, option_type in map_types:
if self.params.get(param_name, False):
if option_type == 'srid':
value = ol_projection(self.params[param_name])
elif option_type == 'bounds':
value = ol_bounds(self.params[param_name])
elif option_type in (float, int):
value = self.params[param_name]
elif option_type in (str,):
value = '"%s"' % self.params[param_name]
else:
raise TypeError
map_options[js_name] = value
return map_options
| bsd-3-clause | 5,866,231,798,301,732,000 | 39.51087 | 74 | 0.552455 | false |
janpascal/denyhosts | DenyHosts/purgecounter.py | 1 | 1894 | import logging
import os
from . import constants
from .counter import Counter, CounterRecord
error = logging.getLogger("purgecounter").error
info = logging.getLogger("purgecounter").info
class PurgeCounter(object):
def __init__(self, prefs):
self.filename = os.path.join(prefs['WORK_DIR'],
constants.PURGE_HISTORY)
self.purge_threshold = prefs['PURGE_THRESHOLD']
def get_banned_for_life(self):
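# A host is banned for life once its recorded purge count exceeds
# PURGE_THRESHOLD; a threshold of 0 disables lifetime bans entirely.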
banned = set()
if self.purge_threshold == 0:
return banned
try:
fp = open(self.filename, "r")
except IOError:
return banned
for line in fp:
try:
host, count, timestamp = line.strip().split(':', 2)
except Exception:
continue
if int(count) > self.purge_threshold:
banned.add(host)
fp.close()
return banned
def get_data(self):
counter = Counter()
try:
fp = open(self.filename, "r")
except IOError:
return counter
for line in fp:
try:
host, count, timestamp = line.strip().split(':', 2)
except Exception:
continue
counter[host] = CounterRecord(int(count), timestamp)
fp.close()
return counter
def write_data(self, data):
try:
fp = open(self.filename, "w")
keys = list(data.keys())
keys.sort()
for key in keys:
fp.write("%s:%s\n" % (key, data[key]))
fp.close()
except Exception as e:
error("error saving %s: %s", self.filename, str(e))
def increment(self, purged_hosts):
data = self.get_data()
for host in purged_hosts:
data[host] += 1
self.write_data(data)
| gpl-2.0 | -77,918,251,938,563,620 | 24.253333 | 67 | 0.515312 | false |
jvarho/rencfs | aes.py | 1 | 2540 | #!/usr/bin/python3
# Copyright (c) 2017-2020, Jan Varho
#
# Permission to use, copy, modify, and/or distribute this software for any
# purpose with or without fee is hereby granted, provided that the above
# copyright notice and this permission notice appear in all copies.
#
# THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
# OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
'''aes wrapper for rencfs
Uses either PyCrypto or pyca/cryptography'''
def cryptography_aes_ecb(key):
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher
from cryptography.hazmat.primitives.ciphers.algorithms import AES
from cryptography.hazmat.primitives.ciphers.modes import ECB
cipher = Cipher(AES(key), ECB(), default_backend())
e = cipher.encryptor()
d = cipher.decryptor()
e.encrypt = e.update
e.decrypt = d.update
return e
def cryptography_aes_ctr(key, index):
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher
from cryptography.hazmat.primitives.ciphers.algorithms import AES
from cryptography.hazmat.primitives.ciphers.modes import CTR
from struct import pack
ctr = b'\0'*8 + pack('>Q', index)
cipher = Cipher(AES(key), CTR(ctr), default_backend())
e = cipher.encryptor()
e.encrypt = e.update
return e
def pycrypto_aes_ecb(key):
from Crypto.Cipher import AES
return AES.new(key, AES.MODE_ECB)
def pycrypto_aes_ctr(key, index):
from Crypto.Cipher import AES
from Crypto.Util import Counter
ctr = Counter.new(128, initial_value=index)
return AES.new(key, AES.MODE_CTR, counter=ctr)
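# Pick a backend at import time: probe the cryptography implementation first
# and fall back to PyCrypto if it is unavailable or fails.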
try:
cryptography_aes_ecb(b'\1'*16)
aes_ecb = cryptography_aes_ecb
aes_ctr = cryptography_aes_ctr
except Exception:
pycrypto_aes_ecb(b'\1'*16)
aes_ecb = pycrypto_aes_ecb
aes_ctr = pycrypto_aes_ctr
if __name__ == '__main__':  # pragma: no cover
a = aes_ecb(b'\1'*16).encrypt(b'\0'*16)
b = aes_ctr(b'\1'*16, 0).encrypt(b'\0'*16)
c = aes_ctr(b'\1'*16, 1).encrypt(b'\0'*16)
assert a == b
assert a != c
| isc | 999,948,682,936,665,900 | 33.324324 | 74 | 0.714173 | false |
ageron/tensorflow | tensorflow/contrib/training/python/training/evaluation_test.py | 25 | 18065 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for tf.contrib.training.evaluation."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import glob
import os
import time
import numpy as np
from tensorflow.contrib.framework.python.ops import variables
from tensorflow.contrib.layers.python.layers import layers
from tensorflow.contrib.losses.python.losses import loss_ops
from tensorflow.contrib.training.python.training import evaluation
from tensorflow.contrib.training.python.training import training
from tensorflow.core.protobuf import config_pb2
from tensorflow.python.client import session as session_lib
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import random_seed
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import metrics
from tensorflow.python.ops import state_ops
from tensorflow.python.ops import variables as variables_lib
from tensorflow.python.platform import gfile
from tensorflow.python.platform import test
from tensorflow.python.summary import summary as summary_lib
from tensorflow.python.summary import summary_iterator
from tensorflow.python.training import basic_session_run_hooks
from tensorflow.python.training import gradient_descent
from tensorflow.python.training import saver as saver_lib
class CheckpointIteratorTest(test.TestCase):
def testReturnsEmptyIfNoCheckpointsFound(self):
checkpoint_dir = os.path.join(self.get_temp_dir(), 'no_checkpoints_found')
num_found = 0
for _ in evaluation.checkpoints_iterator(checkpoint_dir, timeout=0):
num_found += 1
self.assertEqual(num_found, 0)
def testReturnsSingleCheckpointIfOneCheckpointFound(self):
checkpoint_dir = os.path.join(self.get_temp_dir(), 'one_checkpoint_found')
if not gfile.Exists(checkpoint_dir):
gfile.MakeDirs(checkpoint_dir)
global_step = variables.get_or_create_global_step()
saver = saver_lib.Saver() # Saves the global step.
with self.cached_session() as session:
session.run(variables_lib.global_variables_initializer())
save_path = os.path.join(checkpoint_dir, 'model.ckpt')
saver.save(session, save_path, global_step=global_step)
num_found = 0
for _ in evaluation.checkpoints_iterator(checkpoint_dir, timeout=0):
num_found += 1
self.assertEqual(num_found, 1)
def testReturnsSingleCheckpointIfOneShardedCheckpoint(self):
checkpoint_dir = os.path.join(self.get_temp_dir(),
'one_checkpoint_found_sharded')
if not gfile.Exists(checkpoint_dir):
gfile.MakeDirs(checkpoint_dir)
global_step = variables.get_or_create_global_step()
# This will result in 3 different checkpoint shard files.
with ops.device('/cpu:0'):
variables_lib.Variable(10, name='v0')
with ops.device('/cpu:1'):
variables_lib.Variable(20, name='v1')
saver = saver_lib.Saver(sharded=True)
with session_lib.Session(
target='',
config=config_pb2.ConfigProto(device_count={'CPU': 2})) as session:
session.run(variables_lib.global_variables_initializer())
save_path = os.path.join(checkpoint_dir, 'model.ckpt')
saver.save(session, save_path, global_step=global_step)
num_found = 0
for _ in evaluation.checkpoints_iterator(checkpoint_dir, timeout=0):
num_found += 1
self.assertEqual(num_found, 1)
def testTimeoutFn(self):
timeout_fn_calls = [0]
def timeout_fn():
timeout_fn_calls[0] += 1
return timeout_fn_calls[0] > 3
results = list(
evaluation.checkpoints_iterator(
'/non-existent-dir', timeout=0.1, timeout_fn=timeout_fn))
self.assertEqual([], results)
self.assertEqual(4, timeout_fn_calls[0])
class WaitForNewCheckpointTest(test.TestCase):
def testReturnsNoneAfterTimeout(self):
start = time.time()
ret = evaluation.wait_for_new_checkpoint(
'/non-existent-dir', 'foo', timeout=1.0, seconds_to_sleep=0.5)
end = time.time()
self.assertIsNone(ret)
# We've waited one second.
self.assertGreater(end, start + 0.5)
# The timeout kicked in.
self.assertLess(end, start + 1.1)
def logistic_classifier(inputs):
return layers.fully_connected(inputs, 1, activation_fn=math_ops.sigmoid)
class EvaluateOnceTest(test.TestCase):
def setUp(self):
super(EvaluateOnceTest, self).setUp()
# Create an easy training set:
np.random.seed(0)
self._inputs = np.zeros((16, 4))
self._labels = np.random.randint(0, 2, size=(16, 1)).astype(np.float32)
for i in range(16):
j = int(2 * self._labels[i] + np.random.randint(0, 2))
self._inputs[i, j] = 1
def _train_model(self, checkpoint_dir, num_steps):
"""Trains a simple classification model.
Note that the data has been configured such that after around 300 steps,
the model has memorized the dataset (e.g. we can expect 100% accuracy).
Args:
checkpoint_dir: The directory where the checkpoint is written to.
num_steps: The number of steps to train for.
"""
with ops.Graph().as_default():
random_seed.set_random_seed(0)
tf_inputs = constant_op.constant(self._inputs, dtype=dtypes.float32)
tf_labels = constant_op.constant(self._labels, dtype=dtypes.float32)
tf_predictions = logistic_classifier(tf_inputs)
loss = loss_ops.log_loss(tf_predictions, tf_labels)
optimizer = gradient_descent.GradientDescentOptimizer(learning_rate=1.0)
train_op = training.create_train_op(loss, optimizer)
loss = training.train(
train_op,
checkpoint_dir,
hooks=[basic_session_run_hooks.StopAtStepHook(num_steps)])
if num_steps >= 300:
assert loss < .015
def testEvaluatePerfectModel(self):
checkpoint_dir = os.path.join(self.get_temp_dir(),
'evaluate_perfect_model_once')
# Train a Model to completion:
self._train_model(checkpoint_dir, num_steps=300)
# Run
inputs = constant_op.constant(self._inputs, dtype=dtypes.float32)
labels = constant_op.constant(self._labels, dtype=dtypes.float32)
logits = logistic_classifier(inputs)
predictions = math_ops.round(logits)
accuracy, update_op = metrics.accuracy(
predictions=predictions, labels=labels)
checkpoint_path = evaluation.wait_for_new_checkpoint(checkpoint_dir)
final_ops_values = evaluation.evaluate_once(
checkpoint_path=checkpoint_path,
eval_ops=update_op,
final_ops={'accuracy': accuracy},
hooks=[
evaluation.StopAfterNEvalsHook(1),
])
self.assertTrue(final_ops_values['accuracy'] > .99)
def testEvalOpAndFinalOp(self):
checkpoint_dir = os.path.join(self.get_temp_dir(), 'eval_ops_and_final_ops')
# Train a model for a single step to get a checkpoint.
self._train_model(checkpoint_dir, num_steps=1)
checkpoint_path = evaluation.wait_for_new_checkpoint(checkpoint_dir)
# Create the model so we have something to restore.
inputs = constant_op.constant(self._inputs, dtype=dtypes.float32)
logistic_classifier(inputs)
num_evals = 5
final_increment = 9.0
my_var = variables.local_variable(0.0, name='MyVar')
eval_ops = state_ops.assign_add(my_var, 1.0)
final_ops = array_ops.identity(my_var) + final_increment
final_ops_values = evaluation.evaluate_once(
checkpoint_path=checkpoint_path,
eval_ops=eval_ops,
final_ops={'value': final_ops},
hooks=[
evaluation.StopAfterNEvalsHook(num_evals),
])
self.assertEqual(final_ops_values['value'], num_evals + final_increment)
def testOnlyFinalOp(self):
checkpoint_dir = os.path.join(self.get_temp_dir(), 'only_final_ops')
# Train a model for a single step to get a checkpoint.
self._train_model(checkpoint_dir, num_steps=1)
checkpoint_path = evaluation.wait_for_new_checkpoint(checkpoint_dir)
# Create the model so we have something to restore.
inputs = constant_op.constant(self._inputs, dtype=dtypes.float32)
logistic_classifier(inputs)
final_increment = 9.0
my_var = variables.local_variable(0.0, name='MyVar')
final_ops = array_ops.identity(my_var) + final_increment
final_ops_values = evaluation.evaluate_once(
checkpoint_path=checkpoint_path, final_ops={'value': final_ops})
self.assertEqual(final_ops_values['value'], final_increment)
class EvaluateRepeatedlyTest(test.TestCase):
def setUp(self):
super(EvaluateRepeatedlyTest, self).setUp()
# Create an easy training set:
np.random.seed(0)
self._inputs = np.zeros((16, 4))
self._labels = np.random.randint(0, 2, size=(16, 1)).astype(np.float32)
for i in range(16):
j = int(2 * self._labels[i] + np.random.randint(0, 2))
self._inputs[i, j] = 1
def _train_model(self, checkpoint_dir, num_steps):
"""Trains a simple classification model.
Note that the data has been configured such that after around 300 steps,
the model has memorized the dataset (e.g. we can expect 100% accuracy).
Args:
checkpoint_dir: The directory where the checkpoint is written to.
num_steps: The number of steps to train for.
"""
with ops.Graph().as_default():
random_seed.set_random_seed(0)
tf_inputs = constant_op.constant(self._inputs, dtype=dtypes.float32)
tf_labels = constant_op.constant(self._labels, dtype=dtypes.float32)
tf_predictions = logistic_classifier(tf_inputs)
loss = loss_ops.log_loss(tf_predictions, tf_labels)
optimizer = gradient_descent.GradientDescentOptimizer(learning_rate=1.0)
train_op = training.create_train_op(loss, optimizer)
loss = training.train(
train_op,
checkpoint_dir,
hooks=[basic_session_run_hooks.StopAtStepHook(num_steps)])
def testEvaluatePerfectModel(self):
checkpoint_dir = os.path.join(self.get_temp_dir(),
'evaluate_perfect_model_repeated')
# Train a Model to completion:
self._train_model(checkpoint_dir, num_steps=300)
# Run
inputs = constant_op.constant(self._inputs, dtype=dtypes.float32)
labels = constant_op.constant(self._labels, dtype=dtypes.float32)
logits = logistic_classifier(inputs)
predictions = math_ops.round(logits)
accuracy, update_op = metrics.accuracy(
predictions=predictions, labels=labels)
final_values = evaluation.evaluate_repeatedly(
checkpoint_dir=checkpoint_dir,
eval_ops=update_op,
final_ops={'accuracy': accuracy},
hooks=[
evaluation.StopAfterNEvalsHook(1),
],
max_number_of_evaluations=1)
self.assertTrue(final_values['accuracy'] > .99)
def testEvaluationLoopTimeout(self):
checkpoint_dir = os.path.join(self.get_temp_dir(),
'evaluation_loop_timeout')
if not gfile.Exists(checkpoint_dir):
gfile.MakeDirs(checkpoint_dir)
# We need a variable that the saver will try to restore.
variables.get_or_create_global_step()
# Run with placeholders. If we actually try to evaluate this, we'd fail
# since we're not using a feed_dict.
cant_run_op = array_ops.placeholder(dtype=dtypes.float32)
start = time.time()
final_values = evaluation.evaluate_repeatedly(
checkpoint_dir=checkpoint_dir,
eval_ops=cant_run_op,
hooks=[evaluation.StopAfterNEvalsHook(10)],
timeout=6)
end = time.time()
self.assertFalse(final_values)
# Assert that we've waited for the duration of the timeout (minus the sleep
# time).
self.assertGreater(end - start, 5.0)
# Then the timeout kicked in and stops the loop.
self.assertLess(end - start, 7)
def testEvaluationLoopTimeoutWithTimeoutFn(self):
checkpoint_dir = os.path.join(self.get_temp_dir(),
'evaluation_loop_timeout_with_timeout_fn')
# Train a Model to completion:
self._train_model(checkpoint_dir, num_steps=300)
# Run
inputs = constant_op.constant(self._inputs, dtype=dtypes.float32)
labels = constant_op.constant(self._labels, dtype=dtypes.float32)
logits = logistic_classifier(inputs)
predictions = math_ops.round(logits)
accuracy, update_op = metrics.accuracy(
predictions=predictions, labels=labels)
timeout_fn_calls = [0]
def timeout_fn():
timeout_fn_calls[0] += 1
return timeout_fn_calls[0] > 3
final_values = evaluation.evaluate_repeatedly(
checkpoint_dir=checkpoint_dir,
eval_ops=update_op,
final_ops={'accuracy': accuracy},
hooks=[
evaluation.StopAfterNEvalsHook(1),
],
eval_interval_secs=1,
max_number_of_evaluations=2,
timeout=0.1,
timeout_fn=timeout_fn)
# We should have evaluated once.
self.assertTrue(final_values['accuracy'] > .99)
# And called 4 times the timeout fn
self.assertEqual(4, timeout_fn_calls[0])
def testEvaluateWithEvalFeedDict(self):
# Create a checkpoint.
checkpoint_dir = os.path.join(self.get_temp_dir(),
'evaluate_with_eval_feed_dict')
self._train_model(checkpoint_dir, num_steps=1)
# We need a variable that the saver will try to restore.
variables.get_or_create_global_step()
# Create a variable and an eval op that increments it with a placeholder.
my_var = variables.local_variable(0.0, name='my_var')
increment = array_ops.placeholder(dtype=dtypes.float32)
eval_ops = state_ops.assign_add(my_var, increment)
increment_value = 3
num_evals = 5
expected_value = increment_value * num_evals
final_values = evaluation.evaluate_repeatedly(
checkpoint_dir=checkpoint_dir,
eval_ops=eval_ops,
feed_dict={increment: 3},
final_ops={'my_var': array_ops.identity(my_var)},
hooks=[
evaluation.StopAfterNEvalsHook(num_evals),
],
max_number_of_evaluations=1)
self.assertEqual(final_values['my_var'], expected_value)
def _create_names_to_metrics(self, predictions, labels):
accuracy0, update_op0 = metrics.accuracy(labels, predictions)
accuracy1, update_op1 = metrics.accuracy(labels, predictions + 1)
names_to_values = {'Accuracy': accuracy0, 'Another_accuracy': accuracy1}
names_to_updates = {'Accuracy': update_op0, 'Another_accuracy': update_op1}
return names_to_values, names_to_updates
def _verify_events(self, output_dir, names_to_values):
"""Verifies that the given `names_to_values` are found in the summaries.
Also checks that a GraphDef was written out to the events file.
Args:
output_dir: An existing directory where summaries are found.
names_to_values: A dictionary of strings to values.
"""
# Check that the results were saved. The events file may have additional
# entries, e.g. the event version stamp, so have to parse things a bit.
output_filepath = glob.glob(os.path.join(output_dir, '*'))
self.assertEqual(len(output_filepath), 1)
events = summary_iterator.summary_iterator(output_filepath[0])
summaries = []
graph_def = None
for event in events:
if event.summary.value:
summaries.append(event.summary)
elif event.graph_def:
graph_def = event.graph_def
values = []
for summary in summaries:
for value in summary.value:
values.append(value)
saved_results = {v.tag: v.simple_value for v in values}
for name in names_to_values:
self.assertAlmostEqual(names_to_values[name], saved_results[name], 5)
self.assertIsNotNone(graph_def)
def testSummariesAreFlushedToDisk(self):
checkpoint_dir = os.path.join(self.get_temp_dir(), 'summaries_are_flushed')
logdir = os.path.join(self.get_temp_dir(), 'summaries_are_flushed_eval')
if gfile.Exists(logdir):
gfile.DeleteRecursively(logdir)
# Train a Model to completion:
self._train_model(checkpoint_dir, num_steps=300)
# Create the model (which can be restored).
inputs = constant_op.constant(self._inputs, dtype=dtypes.float32)
logistic_classifier(inputs)
names_to_values = {'bread': 3.4, 'cheese': 4.5, 'tomato': 2.0}
for k in names_to_values:
v = names_to_values[k]
summary_lib.scalar(k, v)
evaluation.evaluate_repeatedly(
checkpoint_dir=checkpoint_dir,
hooks=[
evaluation.SummaryAtEndHook(log_dir=logdir),
],
max_number_of_evaluations=1)
self._verify_events(logdir, names_to_values)
def testSummaryAtEndHookWithoutSummaries(self):
logdir = os.path.join(self.get_temp_dir(),
'summary_at_end_hook_without_summaires')
if gfile.Exists(logdir):
gfile.DeleteRecursively(logdir)
with ops.Graph().as_default():
# Purposefully don't add any summaries. The hook will just dump the
# GraphDef event.
hook = evaluation.SummaryAtEndHook(log_dir=logdir)
hook.begin()
with self.cached_session() as session:
hook.after_create_session(session, None)
hook.end(session)
self._verify_events(logdir, {})
if __name__ == '__main__':
test.main()
| apache-2.0 | 3,265,417,175,553,067,500 | 34.631164 | 80 | 0.6801 | false |
Weasyl/weasyl | weasyl/report.py | 1 | 9411 | import arrow
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.orm import aliased, contains_eager, joinedload
import sqlalchemy as sa
import web
from libweasyl.models.content import Report, ReportComment
from libweasyl.models.users import Login
from libweasyl import constants, staff
from weasyl.error import WeasylError
from weasyl import macro as m, define as d, media, note
_CONTENT = 2000
def _convert_violation(target):
violation = [i[2] for i in m.MACRO_REPORT_VIOLATION if i[0] == target]
return violation[0] if violation else 'Unknown'
def _dict_of_targetid(submitid, charid, journalid):
"""
Given a target of some type, return a dictionary indicating what the 'some
type' is. The dictionary's key will be the appropriate column on the Report
model.
"""
if submitid:
return {'target_sub': submitid}
elif charid:
return {'target_char': charid}
elif journalid:
return {'target_journal': journalid}
else:
raise ValueError('no ID given')
# form
# submitid violation
# charid content
# journalid
def create(userid, form):
form.submitid = d.get_int(form.submitid)
form.charid = d.get_int(form.charid)
form.journalid = d.get_int(form.journalid)
form.violation = d.get_int(form.violation)
form.content = form.content.strip()[:_CONTENT]
# get the violation type from allowed types
try:
vtype = next(x for x in m.MACRO_REPORT_VIOLATION if x[0] == form.violation)
except StopIteration:
raise WeasylError("Unexpected")
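# Validate the target/violation combination: violation 0 is reserved for
# moderators, submissions and characters use the 2000-range codes, journals
# use the 3000-range codes, and some violation types require a comment.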
if not form.submitid and not form.charid and not form.journalid:
raise WeasylError("Unexpected")
elif form.violation == 0:
if userid not in staff.MODS:
raise WeasylError("Unexpected")
elif (form.submitid or form.charid) and not 2000 <= form.violation < 3000:
raise WeasylError("Unexpected")
elif form.journalid and not 3000 <= form.violation < 4000:
raise WeasylError("Unexpected")
elif vtype[3] and not form.content:
raise WeasylError("ReportCommentRequired")
is_hidden = d.engine.scalar(
"SELECT settings ~ 'h' FROM %s WHERE %s = %i" % (
("submission", "submitid", form.submitid) if form.submitid else
("character", "charid", form.charid) if form.charid else
("journal", "journalid", form.journalid)
)
)
if is_hidden is None or (form.violation != 0 and is_hidden):
raise WeasylError("TargetRecordMissing")
now = arrow.get()
target_dict = _dict_of_targetid(form.submitid, form.charid, form.journalid)
report = Report.query.filter_by(is_closed=False, **target_dict).first()
if report is None:
if form.violation == 0:
raise WeasylError("Unexpected")
urgency = vtype[1]
report = Report(urgency=urgency, opened_at=now, **target_dict)
Report.dbsession.add(report)
Report.dbsession.add(ReportComment(
report=report, violation=form.violation, userid=userid, unixtime=now, content=form.content))
Report.dbsession.flush()
_report_types = [
'_target_sub',
'_target_char',
'_target_journal',
]
def select_list(userid, form):
# Find the unique violation types and the number of reporters. This will be
# joined against the Report model to get the violations/reporters for each
# selected report.
subq = (
ReportComment.dbsession.query(
ReportComment.reportid,
sa.func.count(),
sa.type_coerce(
sa.func.array_agg(ReportComment.violation.distinct()),
ARRAY(sa.Integer, as_tuple=True)).label('violations'))
.filter(ReportComment.violation != 0)
.group_by(ReportComment.reportid)
.subquery())
# Find reports, joining against the aforementioned subquery, and eager-load
# the reports' owners.
q = (
Report.dbsession.query(Report, subq)
.options(joinedload(Report.owner))
.join(subq, Report.reportid == subq.c.reportid)
.reset_joinpoint())
# For each type of report, eagerly load the content reported and the
# content's owner. Also, keep track of the Login model aliases used for each
# report type so they can be filtered against later.
login_aliases = []
for column_name in _report_types:
login_alias = aliased(Login)
login_aliases.append(login_alias)
q = (
q
.outerjoin(getattr(Report, column_name))
.outerjoin(login_alias)
.options(contains_eager(column_name + '.owner', alias=login_alias))
.reset_joinpoint())
# Filter by report status. form.status can also be 'all', in which case no
# filter is applied.
if form.status == 'closed':
q = q.filter_by(is_closed=True)
elif form.status == 'open':
q = q.filter_by(is_closed=False)
# If filtering by the report's content's owner, iterate over the previously
# collected Login model aliases to compare against Login.login_name.
if form.submitter:
submitter = d.get_sysname(form.submitter)
q = q.filter(sa.or_(l.login_name == submitter for l in login_aliases))
# If filtering by violation type, see if the violation is in the array
# aggregate of unique violations for this report.
if form.violation and form.violation != '-1':
q = q.filter(sa.literal(int(form.violation)) == sa.func.any(subq.c.violations))
q = q.order_by(Report.opened_at.desc())
return [(report, report_count, list(map(_convert_violation, violations)))
for report, _, report_count, violations in q.all()]
def select_view(userid, form):
report = (
Report.query
.options(joinedload('comments', innerjoin=True).joinedload('poster', innerjoin=True))
.get_or_404(int(form.reportid)))
report.old_style_comments = [
{
'userid': c.userid,
'username': c.poster.profile.username,
'unixtime': c.unixtime,
'content': c.content,
'violation': _convert_violation(c.violation),
} for c in report.comments]
media.populate_with_user_media(report.old_style_comments)
report.old_style_comments.sort(key=lambda c: c['unixtime'])
return report
_closure_actions = {
'no_action_taken': constants.ReportClosureReason.no_action_taken,
'action_taken': constants.ReportClosureReason.action_taken,
'invalid': constants.ReportClosureReason.invalid,
}
def close(userid, form):
if userid not in staff.MODS:
raise WeasylError("InsufficientPermissions")
root_report = Report.query.get(int(form.reportid))
if root_report is None or root_report.is_closed:
return
if 'close_all_user_reports' in form:
# If we're closing all of the reports opened against a particular content
# owner, do the same thing as in the select_list function and collect Login
# aliases so that filtering can be done by Login.login_name.
q = Report.query
login_aliases = []
for column_name in _report_types:
login_alias = aliased(Login)
login_aliases.append(login_alias)
q = (
q
.outerjoin(getattr(Report, column_name))
.outerjoin(login_alias)
.reset_joinpoint())
q = (
q
.filter_by(is_closed=False)
.filter(sa.or_(l.login_name == root_report.target.owner.login_name for l in login_aliases)))
reports = q.all()
else:
reports = [root_report]
for report in reports:
if report.is_closed:
raise RuntimeError("a closed report shouldn't have gotten this far")
report.closerid = userid
report.settings.mutable_settings.clear()
if 'assign' in form:
report.is_under_review = True
elif 'unassign' in form:
report.closerid = None
else:
report.closed_at = arrow.get()
report.closure_explanation = form.explanation
report.closure_reason = _closure_actions[form.action]
Report.dbsession.flush()
if form.action == 'action_taken':
# TODO(hyena): Remove this dependency on web.py's Storage objects.
note_form = web.Storage()
note_form.title = form.note_title
note_form.content = form.user_note
note_form.recipient = root_report.target.owner.login_name
note_form.mod_copy = True
note_form.staff_note = form.explanation
note.send(userid, note_form)
def check(submitid=None, charid=None, journalid=None):
return bool(
Report.query
.filter_by(is_closed=False, **_dict_of_targetid(submitid, charid, journalid))
.count())
def select_reported_list(userid):
q = (
Report.query
.join(ReportComment)
.options(contains_eager(Report.comments))
.options(joinedload('_target_sub'))
.options(joinedload('_target_char'))
.options(joinedload('_target_journal'))
.filter(ReportComment.violation != 0)
.filter_by(userid=userid))
reports = q.all()
for report in reports:
report.latest_report = max(c.unixtime for c in report.comments)
reports.sort(key=lambda r: r.latest_report, reverse=True)
return reports
| apache-2.0 | -3,824,364,035,865,131,500 | 34.115672 | 104 | 0.639889 | false |
duhzecca/cinder | cinder/tests/unit/scheduler/test_host_manager.py | 7 | 27242 | # Copyright (c) 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Tests For HostManager
"""
from datetime import datetime
import mock
from oslo_config import cfg
from oslo_utils import timeutils
from cinder import exception
from cinder import objects
from cinder.openstack.common.scheduler import filters
from cinder.scheduler import host_manager
from cinder import test
from cinder.tests.unit.objects import test_service
CONF = cfg.CONF
class FakeFilterClass1(filters.BaseHostFilter):
def host_passes(self, host_state, filter_properties):
pass
class FakeFilterClass2(filters.BaseHostFilter):
def host_passes(self, host_state, filter_properties):
pass
class HostManagerTestCase(test.TestCase):
"""Test case for HostManager class."""
def setUp(self):
super(HostManagerTestCase, self).setUp()
self.host_manager = host_manager.HostManager()
self.fake_hosts = [host_manager.HostState('fake_host%s' % x)
for x in range(1, 5)]
def test_choose_host_filters_not_found(self):
self.flags(scheduler_default_filters='FakeFilterClass3')
self.host_manager.filter_classes = [FakeFilterClass1,
FakeFilterClass2]
self.assertRaises(exception.SchedulerHostFilterNotFound,
self.host_manager._choose_host_filters, None)
def test_choose_host_filters(self):
self.flags(scheduler_default_filters=['FakeFilterClass2'])
self.host_manager.filter_classes = [FakeFilterClass1,
FakeFilterClass2]
# Test 'volume' returns 1 correct function
filter_classes = self.host_manager._choose_host_filters(None)
self.assertEqual(1, len(filter_classes))
self.assertEqual('FakeFilterClass2', filter_classes[0].__name__)
@mock.patch('cinder.scheduler.host_manager.HostManager.'
'_choose_host_filters')
def test_get_filtered_hosts(self, _mock_choose_host_filters):
filter_class = FakeFilterClass1
mock_func = mock.Mock()
mock_func.return_value = True
filter_class._filter_one = mock_func
_mock_choose_host_filters.return_value = [filter_class]
fake_properties = {'moo': 1, 'cow': 2}
expected = []
for fake_host in self.fake_hosts:
expected.append(mock.call(fake_host, fake_properties))
result = self.host_manager.get_filtered_hosts(self.fake_hosts,
fake_properties)
self.assertEqual(expected, mock_func.call_args_list)
self.assertEqual(set(self.fake_hosts), set(result))
@mock.patch('oslo_utils.timeutils.utcnow')
def test_update_service_capabilities(self, _mock_utcnow):
service_states = self.host_manager.service_states
self.assertDictMatch(service_states, {})
_mock_utcnow.side_effect = [31337, 31338, 31339]
host1_volume_capabs = dict(free_capacity_gb=4321, timestamp=1)
host2_volume_capabs = dict(free_capacity_gb=5432, timestamp=1)
host3_volume_capabs = dict(free_capacity_gb=6543, timestamp=1)
service_name = 'volume'
self.host_manager.update_service_capabilities(service_name, 'host1',
host1_volume_capabs)
self.host_manager.update_service_capabilities(service_name, 'host2',
host2_volume_capabs)
self.host_manager.update_service_capabilities(service_name, 'host3',
host3_volume_capabs)
# Make sure dictionary isn't re-assigned
self.assertEqual(service_states, self.host_manager.service_states)
# Make sure original dictionary wasn't copied
self.assertEqual(1, host1_volume_capabs['timestamp'])
host1_volume_capabs['timestamp'] = 31337
host2_volume_capabs['timestamp'] = 31338
host3_volume_capabs['timestamp'] = 31339
expected = {'host1': host1_volume_capabs,
'host2': host2_volume_capabs,
'host3': host3_volume_capabs}
self.assertDictMatch(expected, service_states)
@mock.patch('cinder.utils.service_is_up')
@mock.patch('cinder.db.service_get_all_by_topic')
def test_has_all_capabilities(self, _mock_service_get_all_by_topic,
_mock_service_is_up):
_mock_service_is_up.return_value = True
services = [
dict(id=1, host='host1', topic='volume', disabled=False,
availability_zone='zone1', updated_at=timeutils.utcnow()),
dict(id=2, host='host2', topic='volume', disabled=False,
availability_zone='zone1', updated_at=timeutils.utcnow()),
dict(id=3, host='host3', topic='volume', disabled=False,
availability_zone='zone1', updated_at=timeutils.utcnow()),
]
_mock_service_get_all_by_topic.return_value = services
# Create host_manager again to let db.service_get_all_by_topic mock run
self.host_manager = host_manager.HostManager()
self.assertFalse(self.host_manager.has_all_capabilities())
host1_volume_capabs = dict(free_capacity_gb=4321, timestamp=1)
host2_volume_capabs = dict(free_capacity_gb=5432, timestamp=1)
host3_volume_capabs = dict(free_capacity_gb=6543, timestamp=1)
service_name = 'volume'
self.host_manager.update_service_capabilities(service_name, 'host1',
host1_volume_capabs)
self.assertFalse(self.host_manager.has_all_capabilities())
self.host_manager.update_service_capabilities(service_name, 'host2',
host2_volume_capabs)
self.assertFalse(self.host_manager.has_all_capabilities())
self.host_manager.update_service_capabilities(service_name, 'host3',
host3_volume_capabs)
self.assertTrue(self.host_manager.has_all_capabilities())
@mock.patch('cinder.db.service_get_all_by_topic')
@mock.patch('cinder.utils.service_is_up')
@mock.patch('oslo_utils.timeutils.utcnow')
def test_update_and_get_pools(self, _mock_utcnow,
_mock_service_is_up,
_mock_service_get_all_by_topic):
"""Test interaction between update and get_pools
This test verifies that each time that get_pools is called it gets the
latest copy of service_capabilities, which is timestamped with the
current date/time.
"""
context = 'fake_context'
dates = [datetime.fromtimestamp(400), datetime.fromtimestamp(401),
datetime.fromtimestamp(402)]
_mock_utcnow.side_effect = dates
services = [
# This is the first call to utcnow()
dict(id=1, host='host1', topic='volume', disabled=False,
availability_zone='zone1', updated_at=timeutils.utcnow()),
]
mocked_service_states = {
'host1': dict(volume_backend_name='AAA',
total_capacity_gb=512, free_capacity_gb=200,
timestamp=None, reserved_percentage=0),
}
_mock_service_get_all_by_topic.return_value = services
_mock_service_is_up.return_value = True
_mock_warning = mock.Mock()
host_manager.LOG.warn = _mock_warning
host_volume_capabs = dict(free_capacity_gb=4321)
service_name = 'volume'
with mock.patch.dict(self.host_manager.service_states,
mocked_service_states):
self.host_manager.update_service_capabilities(service_name,
'host1',
host_volume_capabs)
res = self.host_manager.get_pools(context)
self.assertEqual(1, len(res))
self.assertEqual(dates[1], res[0]['capabilities']['timestamp'])
self.host_manager.update_service_capabilities(service_name,
'host1',
host_volume_capabs)
res = self.host_manager.get_pools(context)
self.assertEqual(1, len(res))
self.assertEqual(dates[2], res[0]['capabilities']['timestamp'])
@mock.patch('cinder.db.service_get_all_by_topic')
@mock.patch('cinder.utils.service_is_up')
def test_get_all_host_states(self, _mock_service_is_up,
_mock_service_get_all_by_topic):
context = 'fake_context'
topic = CONF.volume_topic
services = [
dict(id=1, host='host1', topic='volume', disabled=False,
availability_zone='zone1', updated_at=timeutils.utcnow(),
binary=None, deleted=False, created_at=None, modified_at=None,
report_count=0, deleted_at=None, disabled_reason=None),
dict(id=2, host='host2', topic='volume', disabled=False,
availability_zone='zone1', updated_at=timeutils.utcnow(),
binary=None, deleted=False, created_at=None, modified_at=None,
report_count=0, deleted_at=None, disabled_reason=None),
dict(id=3, host='host3', topic='volume', disabled=False,
availability_zone='zone2', updated_at=timeutils.utcnow(),
binary=None, deleted=False, created_at=None, modified_at=None,
report_count=0, deleted_at=None, disabled_reason=None),
dict(id=4, host='host4', topic='volume', disabled=False,
availability_zone='zone3', updated_at=timeutils.utcnow(),
binary=None, deleted=False, created_at=None, modified_at=None,
report_count=0, deleted_at=None, disabled_reason=None),
]
service_objs = []
for db_service in services:
service_obj = objects.Service()
service_objs.append(objects.Service._from_db_object(context,
service_obj,
db_service))
service_states = {
'host1': dict(volume_backend_name='AAA',
total_capacity_gb=512, free_capacity_gb=200,
timestamp=None, reserved_percentage=0,
provisioned_capacity_gb=312),
'host2': dict(volume_backend_name='BBB',
total_capacity_gb=256, free_capacity_gb=100,
timestamp=None, reserved_percentage=0,
provisioned_capacity_gb=156),
'host3': dict(volume_backend_name='CCC',
total_capacity_gb=10000, free_capacity_gb=700,
timestamp=None, reserved_percentage=0,
provisioned_capacity_gb=9300),
}
# First test: service_is_up is always True, all services are enabled,
# host4 has no capabilities
self.host_manager.service_states = service_states
_mock_service_get_all_by_topic.return_value = services
_mock_service_is_up.return_value = True
_mock_warning = mock.Mock()
host_manager.LOG.warning = _mock_warning
# Get all states
self.host_manager.get_all_host_states(context)
_mock_service_get_all_by_topic.assert_called_with(context,
topic,
disabled=False)
expected = []
for service in service_objs:
expected.append(mock.call(service))
self.assertEqual(expected, _mock_service_is_up.call_args_list)
# Get host_state_map and make sure we have the first 3 hosts
host_state_map = self.host_manager.host_state_map
self.assertEqual(3, len(host_state_map))
for i in range(3):
volume_node = services[i]
host = volume_node['host']
test_service.TestService._compare(self, volume_node,
host_state_map[host].service)
# Second test: Now service_is_up returns False for host3
_mock_service_is_up.reset_mock()
_mock_service_is_up.side_effect = [True, True, False, True]
_mock_service_get_all_by_topic.reset_mock()
_mock_warning.reset_mock()
# Get all states, make sure host 3 is reported as down
self.host_manager.get_all_host_states(context)
_mock_service_get_all_by_topic.assert_called_with(context,
topic,
disabled=False)
self.assertEqual(expected, _mock_service_is_up.call_args_list)
self.assertTrue(_mock_warning.call_count > 0)
# Get host_state_map and make sure we have the first 2 hosts (host3 is
# down, host4 is missing capabilities)
host_state_map = self.host_manager.host_state_map
self.assertEqual(2, len(host_state_map))
for i in range(2):
volume_node = services[i]
host = volume_node['host']
test_service.TestService._compare(self, volume_node,
host_state_map[host].service)
@mock.patch('cinder.db.service_get_all_by_topic')
@mock.patch('cinder.utils.service_is_up')
def test_get_pools(self, _mock_service_is_up,
_mock_service_get_all_by_topic):
context = 'fake_context'
services = [
dict(id=1, host='host1', topic='volume', disabled=False,
availability_zone='zone1', updated_at=timeutils.utcnow()),
dict(id=2, host='host2@back1', topic='volume', disabled=False,
availability_zone='zone1', updated_at=timeutils.utcnow()),
dict(id=3, host='host2@back2', topic='volume', disabled=False,
availability_zone='zone2', updated_at=timeutils.utcnow()),
]
mocked_service_states = {
'host1': dict(volume_backend_name='AAA',
total_capacity_gb=512, free_capacity_gb=200,
timestamp=None, reserved_percentage=0,
provisioned_capacity_gb=312),
'host2@back1': dict(volume_backend_name='BBB',
total_capacity_gb=256, free_capacity_gb=100,
timestamp=None, reserved_percentage=0,
provisioned_capacity_gb=156),
'host2@back2': dict(volume_backend_name='CCC',
total_capacity_gb=10000, free_capacity_gb=700,
timestamp=None, reserved_percentage=0,
provisioned_capacity_gb=9300),
}
_mock_service_get_all_by_topic.return_value = services
_mock_service_is_up.return_value = True
_mock_warning = mock.Mock()
host_manager.LOG.warn = _mock_warning
with mock.patch.dict(self.host_manager.service_states,
mocked_service_states):
res = self.host_manager.get_pools(context)
# check if get_pools returns all 3 pools
self.assertEqual(3, len(res))
expected = [
{
'name': 'host1#AAA',
'capabilities': {
'timestamp': None,
'volume_backend_name': 'AAA',
'free_capacity_gb': 200,
'driver_version': None,
'total_capacity_gb': 512,
'reserved_percentage': 0,
'vendor_name': None,
'storage_protocol': None,
'provisioned_capacity_gb': 312},
},
{
'name': 'host2@back1#BBB',
'capabilities': {
'timestamp': None,
'volume_backend_name': 'BBB',
'free_capacity_gb': 100,
'driver_version': None,
'total_capacity_gb': 256,
'reserved_percentage': 0,
'vendor_name': None,
'storage_protocol': None,
'provisioned_capacity_gb': 156},
},
{
'name': 'host2@back2#CCC',
'capabilities': {
'timestamp': None,
'volume_backend_name': 'CCC',
'free_capacity_gb': 700,
'driver_version': None,
'total_capacity_gb': 10000,
'reserved_percentage': 0,
'vendor_name': None,
'storage_protocol': None,
'provisioned_capacity_gb': 9300},
}
]
self.assertEqual(len(expected), len(res))
self.assertEqual(sorted(expected), sorted(res))
class HostStateTestCase(test.TestCase):
"""Test case for HostState class."""
def test_update_from_volume_capability_nopool(self):
fake_host = host_manager.HostState('host1')
self.assertIsNone(fake_host.free_capacity_gb)
volume_capability = {'total_capacity_gb': 1024,
'free_capacity_gb': 512,
'provisioned_capacity_gb': 512,
'reserved_percentage': 0,
'timestamp': None}
fake_host.update_from_volume_capability(volume_capability)
# Backend level stats remain uninitialized
self.assertEqual(0, fake_host.total_capacity_gb)
self.assertEqual(None, fake_host.free_capacity_gb)
# Pool stats has been updated
self.assertEqual(1024, fake_host.pools['_pool0'].total_capacity_gb)
self.assertEqual(512, fake_host.pools['_pool0'].free_capacity_gb)
self.assertEqual(512,
fake_host.pools['_pool0'].provisioned_capacity_gb)
# Test update for existing host state
volume_capability.update(dict(total_capacity_gb=1000))
fake_host.update_from_volume_capability(volume_capability)
self.assertEqual(1000, fake_host.pools['_pool0'].total_capacity_gb)
# Test update for existing host state with different backend name
volume_capability.update(dict(volume_backend_name='magic'))
fake_host.update_from_volume_capability(volume_capability)
self.assertEqual(1000, fake_host.pools['magic'].total_capacity_gb)
self.assertEqual(512, fake_host.pools['magic'].free_capacity_gb)
self.assertEqual(512,
fake_host.pools['magic'].provisioned_capacity_gb)
# 'pool0' becomes nonactive pool, and is deleted
self.assertRaises(KeyError, lambda: fake_host.pools['pool0'])
def test_update_from_volume_capability_with_pools(self):
fake_host = host_manager.HostState('host1')
self.assertIsNone(fake_host.free_capacity_gb)
capability = {
'volume_backend_name': 'Local iSCSI',
'vendor_name': 'OpenStack',
'driver_version': '1.0.1',
'storage_protocol': 'iSCSI',
'pools': [
{'pool_name': '1st pool',
'total_capacity_gb': 500,
'free_capacity_gb': 230,
'allocated_capacity_gb': 270,
'provisioned_capacity_gb': 270,
'QoS_support': 'False',
'reserved_percentage': 0,
'dying_disks': 100,
'super_hero_1': 'spider-man',
'super_hero_2': 'flash',
'super_hero_3': 'neoncat',
},
{'pool_name': '2nd pool',
'total_capacity_gb': 1024,
'free_capacity_gb': 1024,
'allocated_capacity_gb': 0,
'provisioned_capacity_gb': 0,
'QoS_support': 'False',
'reserved_percentage': 0,
'dying_disks': 200,
'super_hero_1': 'superman',
'super_hero_2': 'Hulk',
}
],
'timestamp': None,
}
fake_host.update_from_volume_capability(capability)
self.assertEqual('Local iSCSI', fake_host.volume_backend_name)
self.assertEqual('iSCSI', fake_host.storage_protocol)
self.assertEqual('OpenStack', fake_host.vendor_name)
self.assertEqual('1.0.1', fake_host.driver_version)
# Backend level stats remain uninitialized
self.assertEqual(0, fake_host.total_capacity_gb)
self.assertEqual(None, fake_host.free_capacity_gb)
# Pool stats has been updated
self.assertEqual(2, len(fake_host.pools))
self.assertEqual(500, fake_host.pools['1st pool'].total_capacity_gb)
self.assertEqual(230, fake_host.pools['1st pool'].free_capacity_gb)
self.assertEqual(270,
fake_host.pools['1st pool'].provisioned_capacity_gb)
self.assertEqual(1024, fake_host.pools['2nd pool'].total_capacity_gb)
self.assertEqual(1024, fake_host.pools['2nd pool'].free_capacity_gb)
self.assertEqual(0,
fake_host.pools['2nd pool'].provisioned_capacity_gb)
capability = {
'volume_backend_name': 'Local iSCSI',
'vendor_name': 'OpenStack',
'driver_version': '1.0.2',
'storage_protocol': 'iSCSI',
'pools': [
{'pool_name': '3rd pool',
'total_capacity_gb': 10000,
'free_capacity_gb': 10000,
'allocated_capacity_gb': 0,
'provisioned_capacity_gb': 0,
'QoS_support': 'False',
'reserved_percentage': 0,
},
],
'timestamp': None,
}
# test update HostState Record
fake_host.update_from_volume_capability(capability)
self.assertEqual('1.0.2', fake_host.driver_version)
# Non-active pool stats has been removed
self.assertEqual(1, len(fake_host.pools))
self.assertRaises(KeyError, lambda: fake_host.pools['1st pool'])
self.assertRaises(KeyError, lambda: fake_host.pools['2nd pool'])
self.assertEqual(10000, fake_host.pools['3rd pool'].total_capacity_gb)
self.assertEqual(10000, fake_host.pools['3rd pool'].free_capacity_gb)
self.assertEqual(0,
fake_host.pools['3rd pool'].provisioned_capacity_gb)
def test_update_from_volume_infinite_capability(self):
fake_host = host_manager.HostState('host1')
self.assertIsNone(fake_host.free_capacity_gb)
volume_capability = {'total_capacity_gb': 'infinite',
'free_capacity_gb': 'infinite',
'reserved_percentage': 0,
'timestamp': None}
fake_host.update_from_volume_capability(volume_capability)
# Backend level stats remain uninitialized
self.assertEqual(0, fake_host.total_capacity_gb)
self.assertEqual(None, fake_host.free_capacity_gb)
# Pool stats has been updated
self.assertEqual(
'infinite',
fake_host.pools['_pool0'].total_capacity_gb)
self.assertEqual(
'infinite',
fake_host.pools['_pool0'].free_capacity_gb)
def test_update_from_volume_unknown_capability(self):
fake_host = host_manager.HostState('host1')
self.assertIsNone(fake_host.free_capacity_gb)
volume_capability = {'total_capacity_gb': 'infinite',
'free_capacity_gb': 'unknown',
'reserved_percentage': 0,
'timestamp': None}
fake_host.update_from_volume_capability(volume_capability)
# Backend level stats remain uninitialized
self.assertEqual(0, fake_host.total_capacity_gb)
self.assertEqual(None, fake_host.free_capacity_gb)
# Pool stats has been updated
self.assertEqual(
'infinite',
fake_host.pools['_pool0'].total_capacity_gb)
self.assertEqual(
'unknown',
fake_host.pools['_pool0'].free_capacity_gb)
def test_update_from_empty_volume_capability(self):
fake_host = host_manager.HostState('host1')
vol_cap = {'timestamp': None}
fake_host.update_from_volume_capability(vol_cap)
self.assertEqual(0, fake_host.total_capacity_gb)
self.assertEqual(None, fake_host.free_capacity_gb)
# Pool stats has been updated
self.assertEqual(0,
fake_host.pools['_pool0'].total_capacity_gb)
self.assertEqual(0,
fake_host.pools['_pool0'].free_capacity_gb)
self.assertEqual(0,
fake_host.pools['_pool0'].provisioned_capacity_gb)
class PoolStateTestCase(test.TestCase):
"""Test case for HostState class."""
def test_update_from_volume_capability(self):
fake_pool = host_manager.PoolState('host1', None, 'pool0')
self.assertIsNone(fake_pool.free_capacity_gb)
volume_capability = {'total_capacity_gb': 1024,
'free_capacity_gb': 512,
'reserved_percentage': 0,
'provisioned_capacity_gb': 512,
'timestamp': None,
'cap1': 'val1',
'cap2': 'val2'}
fake_pool.update_from_volume_capability(volume_capability)
self.assertEqual('host1#pool0', fake_pool.host)
self.assertEqual('pool0', fake_pool.pool_name)
self.assertEqual(1024, fake_pool.total_capacity_gb)
self.assertEqual(512, fake_pool.free_capacity_gb)
self.assertEqual(512,
fake_pool.provisioned_capacity_gb)
self.assertDictMatch(fake_pool.capabilities, volume_capability)
| apache-2.0 | -7,739,582,416,772,611,000 | 43.805921 | 79 | 0.558549 | false |
heenbo/mosquitto-heenbo | test/mosq_test.py | 7 | 13413 | import errno
import os
import socket
import subprocess
import struct
import time
def start_broker(filename, cmd=None, port=1888):
delay = 0.1
if cmd is None:
cmd = ['../../src/mosquitto', '-v', '-c', filename.replace('.py', '.conf')]
if os.environ.get('MOSQ_USE_VALGRIND') is not None:
cmd = ['valgrind', '-q', '--log-file='+filename+'.vglog'] + cmd
delay = 1
broker = subprocess.Popen(cmd, stderr=subprocess.PIPE)
for i in range(0, 20):
time.sleep(delay)
c = None
try:
c = socket.create_connection(("localhost", port))
except socket.error as err:
if err.errno != errno.ECONNREFUSED:
raise
if c is not None:
c.close()
time.sleep(delay)
return broker
raise IOError
def start_client(filename, cmd, env):
if cmd is None:
raise ValueError
if os.environ.get('MOSQ_USE_VALGRIND') is not None:
cmd = ['valgrind', '-q', '--log-file='+filename+'.vglog'] + cmd
return subprocess.Popen(cmd, env=env)
def expect_packet(sock, name, expected):
if len(expected) > 0:
rlen = len(expected)
else:
rlen = 1
packet_recvd = sock.recv(rlen)
return packet_matches(name, packet_recvd, expected)
def packet_matches(name, recvd, expected):
if recvd != expected:
print("FAIL: Received incorrect "+name+".")
try:
print("Received: "+to_string(recvd))
except struct.error:
print("Received (not decoded): "+recvd)
try:
print("Expected: "+to_string(expected))
except struct.error:
print("Expected (not decoded): "+expected)
return 0
else:
return 1
def do_client_connect(connect_packet, connack_packet, hostname="localhost", port=1888, timeout=60, connack_error="connack"):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(timeout)
sock.connect((hostname, port))
sock.send(connect_packet)
if expect_packet(sock, connack_error, connack_packet):
return sock
else:
sock.close()
raise ValueError
def remaining_length(packet):
l = min(5, len(packet))
all_bytes = struct.unpack("!"+"B"*l, packet[:l])
mult = 1
rl = 0
for i in range(1,l-1):
byte = all_bytes[i]
rl += (byte & 127) * mult
mult *= 128
if byte & 128 == 0:
packet = packet[i+1:]
break
return (packet, rl)
def to_string(packet):
if len(packet) == 0:
return ""
packet0 = struct.unpack("!B", packet[0])
packet0 = packet0[0]
cmd = packet0 & 0xF0
if cmd == 0x00:
# Reserved
return "0x00"
elif cmd == 0x10:
# CONNECT
(packet, rl) = remaining_length(packet)
pack_format = "!H" + str(len(packet)-2) + 's'
(slen, packet) = struct.unpack(pack_format, packet)
pack_format = "!" + str(slen)+'sBBH' + str(len(packet)-slen-4) + 's'
(protocol, proto_ver, flags, keepalive, packet) = struct.unpack(pack_format, packet)
s = "CONNECT, proto="+protocol+str(proto_ver)+", keepalive="+str(keepalive)
if flags&2:
s = s+", clean-session"
else:
s = s+", durable"
pack_format = "!H" + str(len(packet)-2) + 's'
(slen, packet) = struct.unpack(pack_format, packet)
pack_format = "!" + str(slen)+'s' + str(len(packet)-slen) + 's'
(client_id, packet) = struct.unpack(pack_format, packet)
s = s+", id="+client_id
if flags&4:
pack_format = "!H" + str(len(packet)-2) + 's'
(slen, packet) = struct.unpack(pack_format, packet)
pack_format = "!" + str(slen)+'s' + str(len(packet)-slen) + 's'
(will_topic, packet) = struct.unpack(pack_format, packet)
s = s+", will-topic="+will_topic
pack_format = "!H" + str(len(packet)-2) + 's'
(slen, packet) = struct.unpack(pack_format, packet)
pack_format = "!" + str(slen)+'s' + str(len(packet)-slen) + 's'
(will_message, packet) = struct.unpack(pack_format, packet)
s = s+", will-message="+will_message
s = s+", will-qos="+str((flags&24)>>3)
s = s+", will-retain="+str((flags&32)>>5)
if flags&128:
pack_format = "!H" + str(len(packet)-2) + 's'
(slen, packet) = struct.unpack(pack_format, packet)
pack_format = "!" + str(slen)+'s' + str(len(packet)-slen) + 's'
(username, packet) = struct.unpack(pack_format, packet)
s = s+", username="+username
if flags&64:
pack_format = "!H" + str(len(packet)-2) + 's'
(slen, packet) = struct.unpack(pack_format, packet)
pack_format = "!" + str(slen)+'s' + str(len(packet)-slen) + 's'
(password, packet) = struct.unpack(pack_format, packet)
s = s+", password="+password
return s
elif cmd == 0x20:
# CONNACK
(cmd, rl, resv, rc) = struct.unpack('!BBBB', packet)
return "CONNACK, rl="+str(rl)+", res="+str(resv)+", rc="+str(rc)
elif cmd == 0x30:
# PUBLISH
dup = (packet0 & 0x08)>>3
qos = (packet0 & 0x06)>>1
retain = (packet0 & 0x01)
(packet, rl) = remaining_length(packet)
pack_format = "!H" + str(len(packet)-2) + 's'
(tlen, packet) = struct.unpack(pack_format, packet)
pack_format = "!" + str(tlen)+'s' + str(len(packet)-tlen) + 's'
(topic, packet) = struct.unpack(pack_format, packet)
s = "PUBLISH, rl="+str(rl)+", topic="+topic+", qos="+str(qos)+", retain="+str(retain)+", dup="+str(dup)
if qos > 0:
pack_format = "!H" + str(len(packet)-2) + 's'
(mid, packet) = struct.unpack(pack_format, packet)
s = s + ", mid="+str(mid)
s = s + ", payload="+packet
return s
elif cmd == 0x40:
# PUBACK
(cmd, rl, mid) = struct.unpack('!BBH', packet)
return "PUBACK, rl="+str(rl)+", mid="+str(mid)
elif cmd == 0x50:
# PUBREC
(cmd, rl, mid) = struct.unpack('!BBH', packet)
return "PUBREC, rl="+str(rl)+", mid="+str(mid)
elif cmd == 0x60:
# PUBREL
dup = (packet0 & 0x08)>>3
(cmd, rl, mid) = struct.unpack('!BBH', packet)
return "PUBREL, rl="+str(rl)+", mid="+str(mid)+", dup="+str(dup)
elif cmd == 0x70:
# PUBCOMP
(cmd, rl, mid) = struct.unpack('!BBH', packet)
return "PUBCOMP, rl="+str(rl)+", mid="+str(mid)
elif cmd == 0x80:
# SUBSCRIBE
(packet, rl) = remaining_length(packet)
pack_format = "!H" + str(len(packet)-2) + 's'
(mid, packet) = struct.unpack(pack_format, packet)
s = "SUBSCRIBE, rl="+str(rl)+", mid="+str(mid)
topic_index = 0
while len(packet) > 0:
pack_format = "!H" + str(len(packet)-2) + 's'
(tlen, packet) = struct.unpack(pack_format, packet)
pack_format = "!" + str(tlen)+'sB' + str(len(packet)-tlen-1) + 's'
(topic, qos, packet) = struct.unpack(pack_format, packet)
s = s + ", topic"+str(topic_index)+"="+topic+","+str(qos)
return s
elif cmd == 0x90:
# SUBACK
(packet, rl) = remaining_length(packet)
pack_format = "!H" + str(len(packet)-2) + 's'
(mid, packet) = struct.unpack(pack_format, packet)
pack_format = "!" + "B"*len(packet)
granted_qos = struct.unpack(pack_format, packet)
s = "SUBACK, rl="+str(rl)+", mid="+str(mid)+", granted_qos="+str(granted_qos[0])
for i in range(1, len(granted_qos)-1):
s = s+", "+str(granted_qos[i])
return s
elif cmd == 0xA0:
# UNSUBSCRIBE
(packet, rl) = remaining_length(packet)
pack_format = "!H" + str(len(packet)-2) + 's'
(mid, packet) = struct.unpack(pack_format, packet)
s = "UNSUBSCRIBE, rl="+str(rl)+", mid="+str(mid)
topic_index = 0
while len(packet) > 0:
pack_format = "!H" + str(len(packet)-2) + 's'
(tlen, packet) = struct.unpack(pack_format, packet)
pack_format = "!" + str(tlen)+'s' + str(len(packet)-tlen) + 's'
(topic, packet) = struct.unpack(pack_format, packet)
s = s + ", topic"+str(topic_index)+"="+topic
return s
elif cmd == 0xB0:
# UNSUBACK
(cmd, rl, mid) = struct.unpack('!BBH', packet)
return "UNSUBACK, rl="+str(rl)+", mid="+str(mid)
elif cmd == 0xC0:
# PINGREQ
(cmd, rl) = struct.unpack('!BB', packet)
return "PINGREQ, rl="+str(rl)
elif cmd == 0xD0:
# PINGRESP
(cmd, rl) = struct.unpack('!BB', packet)
return "PINGRESP, rl="+str(rl)
elif cmd == 0xE0:
# DISCONNECT
(cmd, rl) = struct.unpack('!BB', packet)
return "DISCONNECT, rl="+str(rl)
elif cmd == 0xF0:
# Reserved
return "0xF0"
def gen_connect(client_id, clean_session=True, keepalive=60, username=None, password=None, will_topic=None, will_qos=0, will_retain=False, will_payload="", proto_ver=3):
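    # Only the low 7 bits of proto_ver select the variable-header layout:
    # MQIsdp (v3.1) uses a 12-byte header, MQTT (v3.1.1) a 10-byte one; the
    # full proto_ver value is written into the packet unchanged.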
if (proto_ver&0x7F) == 3 or proto_ver == 0:
remaining_length = 12
elif (proto_ver&0x7F) == 4:
remaining_length = 10
else:
raise ValueError
if client_id != None:
remaining_length = remaining_length + 2+len(client_id)
connect_flags = 0
if clean_session:
connect_flags = connect_flags | 0x02
if will_topic != None:
remaining_length = remaining_length + 2+len(will_topic) + 2+len(will_payload)
connect_flags = connect_flags | 0x04 | ((will_qos&0x03) << 3)
if will_retain:
connect_flags = connect_flags | 32
if username != None:
remaining_length = remaining_length + 2+len(username)
connect_flags = connect_flags | 0x80
if password != None:
connect_flags = connect_flags | 0x40
remaining_length = remaining_length + 2+len(password)
rl = pack_remaining_length(remaining_length)
packet = struct.pack("!B"+str(len(rl))+"s", 0x10, rl)
if (proto_ver&0x7F) == 3 or proto_ver == 0:
packet = packet + struct.pack("!H6sBBH", len("MQIsdp"), "MQIsdp", proto_ver, connect_flags, keepalive)
elif (proto_ver&0x7F) == 4:
packet = packet + struct.pack("!H4sBBH", len("MQTT"), "MQTT", proto_ver, connect_flags, keepalive)
if client_id != None:
packet = packet + struct.pack("!H"+str(len(client_id))+"s", len(client_id), client_id)
if will_topic != None:
packet = packet + struct.pack("!H"+str(len(will_topic))+"s", len(will_topic), will_topic)
if len(will_payload) > 0:
packet = packet + struct.pack("!H"+str(len(will_payload))+"s", len(will_payload), will_payload)
else:
packet = packet + struct.pack("!H", 0)
if username != None:
packet = packet + struct.pack("!H"+str(len(username))+"s", len(username), username)
if password != None:
packet = packet + struct.pack("!H"+str(len(password))+"s", len(password), password)
return packet
def gen_connack(resv=0, rc=0):
return struct.pack('!BBBB', 32, 2, resv, rc);
def gen_publish(topic, qos, payload=None, retain=False, dup=False, mid=0):
rl = 2+len(topic)
pack_format = "!BBH"+str(len(topic))+"s"
if qos > 0:
rl = rl + 2
pack_format = pack_format + "H"
if payload != None:
rl = rl + len(payload)
pack_format = pack_format + str(len(payload))+"s"
else:
payload = ""
pack_format = pack_format + "0s"
cmd = 48 | (qos<<1)
if retain:
cmd = cmd + 1
if dup:
cmd = cmd + 8
if qos > 0:
return struct.pack(pack_format, cmd, rl, len(topic), topic, mid, payload)
else:
return struct.pack(pack_format, cmd, rl, len(topic), topic, payload)
def gen_puback(mid):
return struct.pack('!BBH', 64, 2, mid)
def gen_pubrec(mid):
return struct.pack('!BBH', 80, 2, mid)
def gen_pubrel(mid, dup=False):
if dup:
cmd = 96+8+2
else:
cmd = 96+2
return struct.pack('!BBH', cmd, 2, mid)
def gen_pubcomp(mid):
return struct.pack('!BBH', 112, 2, mid)
def gen_subscribe(mid, topic, qos):
pack_format = "!BBHH"+str(len(topic))+"sB"
return struct.pack(pack_format, 130, 2+2+len(topic)+1, mid, len(topic), topic, qos)
def gen_suback(mid, qos):
return struct.pack('!BBHB', 144, 2+1, mid, qos)
def gen_unsubscribe(mid, topic):
pack_format = "!BBHH"+str(len(topic))+"s"
return struct.pack(pack_format, 162, 2+2+len(topic), mid, len(topic), topic)
def gen_unsuback(mid):
return struct.pack('!BBH', 176, 2, mid)
def gen_pingreq():
return struct.pack('!BB', 192, 0)
def gen_pingresp():
return struct.pack('!BB', 208, 0)
def gen_disconnect():
return struct.pack('!BB', 224, 0)
def pack_remaining_length(remaining_length):
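    # Inverse of remaining_length(): emit 7 bits of the value per byte,
    # setting the continuation bit (0x80) while more bytes remain.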
s = ""
while True:
byte = remaining_length % 128
remaining_length = remaining_length // 128
# If there are more digits to encode, set the top bit of this digit
if remaining_length > 0:
byte = byte | 0x80
s = s + struct.pack("!B", byte)
if remaining_length == 0:
return s
| gpl-3.0 | -5,645,407,650,333,233,000 | 34.204724 | 169 | 0.549318 | false |
justinslee/Wai-Not-Makahiki | makahiki/apps/widgets/notifications/tests.py | 7 | 5066 | """Notification testing."""
from django.test import TransactionTestCase
from django.core.urlresolvers import reverse
from django.contrib.auth.models import User
from apps.managers.challenge_mgr import challenge_mgr
from apps.utils import test_utils
from apps.widgets.notifications import get_unread_notifications
from apps.widgets.notifications.models import UserNotification
class NotificationUnitTests(TransactionTestCase):
"""Notification Test."""
def testGetUnread(self):
"""Test that we can get the user's unread notifications."""
user = User.objects.create_user("test", "[email protected]")
for i in range(0, 3):
notification = UserNotification(recipient=user, contents="Test notification %i" % i)
notification.save()
notifications = get_unread_notifications(user)
self.assertEqual(notifications["alerts"].count(), 0,
"There should not be any alert notifications.")
unread = notifications["unread"]
self.assertEqual(unread.count(), 3, "There should be three unread notifications.")
alert = UserNotification(recipient=user, contents="Alert notification", display_alert=True)
alert.save()
notifications = get_unread_notifications(user)
self.assertEqual(notifications["alerts"][0], alert,
"Alert notification should have been returned.")
unread = notifications["unread"]
self.assertEqual(unread.count(), 4, "There should be four unread notifications.")
class NotificationFunctionalTests(TransactionTestCase):
"""View Test."""
def setUp(self):
self.user = test_utils.setup_user(username="user", password="test")
self.team = self.user.get_profile().team
challenge_mgr.register_page_widget("help", "help.faq")
challenge_mgr.register_page_widget("home", "home")
from apps.managers.cache_mgr import cache_mgr
cache_mgr.clear()
self.client.login(username="user", password="test")
def testShowNotifications(self):
"""
Test that we can show notifications to the user.
"""
for i in range(0, 3):
notification = UserNotification(recipient=self.user,
contents="Test notification %i" % i)
notification.save()
response = self.client.get(reverse("home_index"))
self.assertNotContains(response, "The following item(s) need your attention",
msg_prefix="Alert should not be shown"
)
for i in range(0, 3):
self.assertContains(response, "Test notification %i" % i,
msg_prefix="Notification %i is not shown" % i
)
def testAlertNotifications(self):
"""Test alert."""
alert = UserNotification(recipient=self.user, contents="Alert notification",
display_alert=True)
alert.save()
response = self.client.get(reverse("home_index"))
self.assertContains(response, "notification-dialog", msg_prefix="Alert should be shown")
response = self.client.get(reverse("help_index"))
self.assertNotContains(response, "notification-dialog",
msg_prefix="Dialog should not be displayed")
def testAjaxReadNotifications(self):
"""Test that notifications can be marked as read via AJAX."""
notification = UserNotification(recipient=self.user, contents="Test notification")
notification.save()
response = self.client.post(reverse("notifications_read", args=(notification.pk,)), {},
HTTP_X_REQUESTED_WITH='XMLHttpRequest')
self.failUnlessEqual(response.status_code, 200)
response = self.client.get(reverse("home_index"))
self.assertNotContains(response, "Test notification",
msg_prefix="Notification should be read")
def testReadNotifications(self):
"""Test that notifications can be marked as read without AJAX."""
notification = UserNotification(recipient=self.user, contents="Test notification")
notification.save()
response = self.client.post(reverse("notifications_read", args=(notification.pk,)), {})
self.assertRedirects(response, reverse("home_index"),
msg_prefix="Marking as read should redirect.")
response = self.client.get(reverse("home_index"))
self.assertNotContains(response, "Test notification",
msg_prefix="Notification should be read")
# Test with a referring page.
notification = UserNotification(recipient=self.user, contents="Test notification 2")
notification.save()
response = self.client.post(reverse("notifications_read", args=(notification.pk,)), {},
HTTP_REFERER=reverse("help_index"))
self.assertRedirects(response, reverse("help_index"),
msg_prefix="Marking as read should redirect.")
response = self.client.get(reverse("home_index"))
self.assertNotContains(response, "Test notification 2",
msg_prefix="Notification should be read")
| mit | 3,791,493,151,515,724,300 | 41.571429 | 99 | 0.66364 | false |
neuroidss/htmengine-traffic-tutorial | python-engine/consume_realtime_results.py | 11 | 5688 | #!/usr/bin/env python
# ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2015, Numenta, Inc. Unless you have purchased from
# Numenta, Inc. a separate commercial license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------
"""Consume anomaly results in near realtime"""
import os
from nta.utils import amqp
from nta.utils.config import Config
from htmengine import htmengineerrno
from htmengine.runtime.anomaly_service import AnomalyService
appConfig = Config("application.conf", os.environ["APPLICATION_CONFIG_PATH"])
modelResultsExchange = appConfig.get("metric_streamer",
"results_exchange_name")
queueName = "skeleton_results"
def declareExchanges(amqpClient):
""" Declares model results and non-metric data exchanges
"""
amqpClient.declareExchange(exchange=modelResultsExchange,
exchangeType="fanout",
durable=True)
def declareQueueAndBindToExchanges(amqpClient):
""" Declares skeleton queue and binds to model results.
"""
result = amqpClient.declareQueue(queueName, durable=True)
amqpClient.bindQueue(exchange=modelResultsExchange,
queue=result.queue, routingKey="")
def configChannel(amqpClient):
amqpClient.requestQoS(prefetchCount=1)
def handleModelInferenceResults(body):
""" Model results batch handler.
:param body: Serialized message payload; the message is compliant with
htmengine/runtime/json_schema/model_inference_results_msg_schema.json.
:type body: str
"""
try:
batch = AnomalyService.deserializeModelResult(body)
except Exception:
print "Error deserializing model result"
raise
metricId = batch["metric"]["uid"]
metricName = batch["metric"]["name"]
print "Handling %d model result(s) for %s - %s" % (len(batch["results"]),
metricId,
metricName)
if not batch["results"]:
print "Empty results in model inference results batch; model=%s" % metricId
return
print metricId, batch["results"]
def handleModelCommandResult(body):
""" ModelCommandResult handler. Handles model creation/deletion events
:param body: Incoming message payload
:type body: str
"""
try:
modelCommandResult = AnomalyService.deserializeModelResult(body)
except Exception:
print "Error deserializing model command result"
raise
if modelCommandResult["status"] != htmengineerrno.SUCCESS:
return # Ignore...
if modelCommandResult["method"] == "defineModel":
print "Handling `defineModel` for %s" % modelCommandResult.get("modelId")
print modelCommandResult
elif modelCommandResult["method"] == "deleteModel":
print "Handling `deleteModel` for %s" % modelCommandResult.get("modelId")
print modelCommandResult
def messageHandler(message):
""" Inspect all inbound model results
We will key off of routing key to determine specific handler for inbound
message. If routing key is `None`, attempt to decode message using
`AnomalyService.deserializeModelResult()`.
:param amqp.messages.ConsumerMessage message: ``message.body`` is one of:
Serialized batch of model inference results generated in
``AnomalyService`` and must be deserialized using
``AnomalyService.deserializeModelResult()``. Per
htmengine/runtime/json_schema/model_inference_results_msg_schema.json
Serialized ``ModelCommandResult`` generated in ``AnomalyService``
per model_command_result_amqp_message.json and must be deserialized
using ``AnomalyService.deserializeModelResult()``
"""
if message.methodInfo.routingKey is None:
print "Unrecognized routing key."
else:
dataType = (message.properties.headers.get("dataType")
if message.properties.headers else None)
if not dataType:
handleModelInferenceResults(message.body)
elif dataType == "model-cmd-result":
handleModelCommandResult(message.body)
else:
print "Unexpected message header dataType=%s" % dataType
message.ack()
if __name__ == "__main__":
with amqp.synchronous_amqp_client.SynchronousAmqpClient(
amqp.connection.getRabbitmqConnectionParameters(),
channelConfigCb=configChannel) as amqpClient:
declareExchanges(amqpClient)
declareQueueAndBindToExchanges(amqpClient)
consumer = amqpClient.createConsumer(queueName)
# Start consuming messages
for evt in amqpClient.readEvents():
if isinstance(evt, amqp.messages.ConsumerMessage):
messageHandler(evt)
elif isinstance(evt, amqp.consumer.ConsumerCancellation):
# Bad news: this likely means that our queue was deleted externally
msg = "Consumer cancelled by broker: %r (%r)" % (evt, consumer)
raise Exception(msg)
else:
print "Unexpected amqp event=%r" % evt
| gpl-3.0 | -4,774,042,185,256,086,000 | 33.26506 | 79 | 0.69128 | false |
lscheinkman/nupic | tests/unit/nupic/encoders/logenc_test.py | 10 | 10536 | # ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2013, Numenta, Inc. Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU Affero Public License for more details.
#
# You should have received a copy of the GNU Affero Public License
# along with this program. If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------
"""Unit tests for logarithmic encoder"""
import numpy
import math
from nupic.data import SENTINEL_VALUE_FOR_MISSING_DATA
from nupic.data.field_meta import FieldMetaType
import tempfile
import unittest
from nupic.encoders.logarithm import LogEncoder
from nupic.encoders.scalar import ScalarEncoder
try:
import capnp
except ImportError:
capnp = None
if capnp:
from nupic.encoders.logarithm_capnp import LogEncoderProto
class LogEncoderTest(unittest.TestCase):
"""Unit tests for LogEncoder class"""
def testLogEncoder(self):
# Create the encoder
# use of forced=True is not recommended, but is used in the example for
    # readability, see scalar.py
le = LogEncoder(w=5,
resolution=0.1,
minval=1,
maxval=10000,
name="amount",
forced=True)
# Verify we're setting the description properly
self.assertEqual(le.getDescription(), [("amount", 0)])
# Verify we're getting the correct field types
types = le.getDecoderOutputFieldTypes()
self.assertEqual(types[0], FieldMetaType.float)
# Verify the encoder ends up with the correct width
#
    # 10^0 -> 10^4 => 0 -> 4; with a resolution of 0.1 that gives
    # 41 possible values, plus padding of 4 = width 45
self.assertEqual(le.getWidth(), 45)
# Verify we have the correct number of possible values
self.assertEqual(len(le.getBucketValues()), 41)
# Verify closeness calculations
testTuples = [([1], [10000], 0.0),
([1], [1000], 0.25),
([1], [1], 1.0),
([1], [-200], 1.0)]
for tm in testTuples:
expected = tm[0]
actual = tm[1]
expectedResult = tm[2]
self.assertEqual(le.closenessScores(expected, actual),
expectedResult,
"exp: %s act: %s expR: %s" % (str(expected),
str(actual),
str(expectedResult)))
# Verify a value of 1.0 is encoded as expected
value = 1.0
output = le.encode(value)
# Our expected encoded representation of the value 1 is the first
# w bits on in an array of len width.
expected = [1, 1, 1, 1, 1] + 40 * [0]
# Convert to numpy array
expected = numpy.array(expected, dtype="uint8")
self.assertTrue(numpy.array_equal(output, expected))
# Test reverse lookup
decoded = le.decode(output)
(fieldsDict, _) = decoded
self.assertEqual(len(fieldsDict), 1)
(ranges, _) = fieldsDict.values()[0]
self.assertEqual(len(ranges), 1)
self.assertTrue(numpy.array_equal(ranges[0], [1, 1]))
# Verify an input representing a missing value is handled properly
mvOutput = le.encode(SENTINEL_VALUE_FOR_MISSING_DATA)
self.assertEqual(sum(mvOutput), 0)
# Test top-down for all values
value = le.minval
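    # Step through the encoder's range in quarter-resolution increments in log
    # space and check that topDownCompute() reconstructs a value within one
    # resolution step of the input.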
while value <= le.maxval:
output = le.encode(value)
topDown = le.topDownCompute(output)
# Do the scaling by hand here.
scaledVal = math.log10(value)
# Find the range of values that would also produce this top down
# output.
minTopDown = math.pow(10, (scaledVal - le.encoder.resolution))
maxTopDown = math.pow(10, (scaledVal + le.encoder.resolution))
# Verify the range surrounds this scaled val
self.assertGreaterEqual(topDown.value, minTopDown)
self.assertLessEqual(topDown.value, maxTopDown)
# Test bucket support
bucketIndices = le.getBucketIndices(value)
topDown = le.getBucketInfo(bucketIndices)[0]
# Verify our reconstructed value is in the valid range
self.assertGreaterEqual(topDown.value, minTopDown)
self.assertLessEqual(topDown.value, maxTopDown)
# Same for the scalar value
self.assertGreaterEqual(topDown.scalar, minTopDown)
self.assertLessEqual(topDown.scalar, maxTopDown)
# That the encoding portion of our EncoderResult matched the result of
# encode()
self.assertTrue(numpy.array_equal(topDown.encoding, output))
# Verify our reconstructed value is the same as the bucket value
bucketValues = le.getBucketValues()
self.assertEqual(topDown.value,
bucketValues[bucketIndices[0]])
# Next value
scaledVal += le.encoder.resolution / 4.0
value = math.pow(10, scaledVal)
# Verify next power of 10 encoding
output = le.encode(100)
    # An increase of 2 decades at resolution 0.1 moves the encoding up by 20
    # buckets: the five active bits shift from positions 0-4 to 20-24
expected = 20 * [0] + [1, 1, 1, 1, 1] + 20 * [0]
expected = numpy.array(expected, dtype="uint8")
self.assertTrue(numpy.array_equal(output, expected))
# Test reverse lookup
decoded = le.decode(output)
(fieldsDict, _) = decoded
self.assertEqual(len(fieldsDict), 1)
(ranges, _) = fieldsDict.values()[0]
self.assertEqual(len(ranges), 1)
self.assertTrue(numpy.array_equal(ranges[0], [100, 100]))
# Verify next power of 10 encoding
output = le.encode(10000)
expected = 40 * [0] + [1, 1, 1, 1, 1]
expected = numpy.array(expected, dtype="uint8")
self.assertTrue(numpy.array_equal(output, expected))
# Test reverse lookup
decoded = le.decode(output)
(fieldsDict, _) = decoded
self.assertEqual(len(fieldsDict), 1)
(ranges, _) = fieldsDict.values()[0]
self.assertEqual(len(ranges), 1)
self.assertTrue(numpy.array_equal(ranges[0], [10000, 10000]))
def testGetBucketValues(self):
"""
Verify that the values of buckets are as expected for given
init params
"""
# Create the encoder
le = LogEncoder(w=5,
resolution=0.1,
minval=1,
maxval=10000,
name="amount",
forced=True)
# Build our expected values
inc = 0.1
exp = 0
expected = []
# Incrementing to exactly 4.0 runs into fp issues
while exp <= 4.0001:
val = 10 ** exp
expected.append(val)
exp += inc
expected = numpy.array(expected)
actual = numpy.array(le.getBucketValues())
numpy.testing.assert_almost_equal(expected, actual, 7)
def testInitWithRadius(self):
"""
Verifies you can use radius to specify a log encoder
"""
# Create the encoder
le = LogEncoder(w=1,
radius=1,
minval=1,
maxval=10000,
name="amount",
forced=True)
self.assertEqual(le.encoder.n, 5)
# Verify a a couple powers of 10 are encoded as expected
value = 1.0
output = le.encode(value)
expected = [1, 0, 0, 0, 0]
# Convert to numpy array
expected = numpy.array(expected, dtype="uint8")
self.assertTrue(numpy.array_equal(output, expected))
value = 100.0
output = le.encode(value)
expected = [0, 0, 1, 0, 0]
# Convert to numpy array
expected = numpy.array(expected, dtype="uint8")
self.assertTrue(numpy.array_equal(output, expected))
def testInitWithN(self):
"""
Verifies you can use N to specify a log encoder
"""
# Create the encoder
n = 100
le = LogEncoder(n=n, forced=True)
self.assertEqual(le.encoder.n, n)
def testMinvalMaxVal(self):
"""
Verifies unusual instances of minval and maxval are handled properly
"""
self.assertRaises(ValueError, LogEncoder, n=100, minval=0, maxval=-100,
forced=True)
self.assertRaises(ValueError, LogEncoder, n=100, minval=0, maxval=1e-07,
forced=True)
le = LogEncoder(n=100, minval=42, maxval=1.3e12, forced=True)
expectedRadius = 0.552141792732
expectedResolution = 0.110428358546
self.assertAlmostEqual(le.encoder.radius, expectedRadius)
self.assertAlmostEqual(le.encoder.resolution, expectedResolution)
@unittest.skipUnless(
capnp, "pycapnp is not installed, skipping serialization test.")
def testReadWrite(self):
le = LogEncoder(w=5,
resolution=0.1,
minval=1,
maxval=10000,
name="amount",
forced=True)
originalValue = le.encode(1.0)
proto1 = LogEncoderProto.new_message()
le.write(proto1)
# Write the proto to a temp file and read it back into a new proto
with tempfile.TemporaryFile() as f:
proto1.write(f)
f.seek(0)
proto2 = LogEncoderProto.read(f)
encoder = LogEncoder.read(proto2)
self.assertIsInstance(encoder, LogEncoder)
self.assertEqual(encoder.minScaledValue, le.minScaledValue)
self.assertEqual(encoder.maxScaledValue, le.maxScaledValue)
self.assertEqual(encoder.minval, le.minval)
self.assertEqual(encoder.maxval, le.maxval)
self.assertEqual(encoder.name, le.name)
self.assertEqual(encoder.verbosity, le.verbosity)
self.assertEqual(encoder.clipInput, le.clipInput)
self.assertEqual(encoder.width, le.width)
self.assertEqual(encoder.description, le.description)
self.assertIsInstance(encoder.encoder, ScalarEncoder)
self.assertTrue(numpy.array_equal(encoder.encode(1), originalValue))
self.assertEqual(le.decode(encoder.encode(1)),
encoder.decode(le.encode(1)))
# Feed in a new value and ensure the encodings match
result1 = le.encode(10)
result2 = encoder.encode(10)
self.assertTrue(numpy.array_equal(result1, result2))
if __name__ == "__main__":
unittest.main()
| agpl-3.0 | 5,842,225,114,962,640,000 | 31.518519 | 76 | 0.633257 | false |
davidbz/trafficserver | tests/gold_tests/pluginTest/multiplexer/multiplexer.test.py | 2 | 2128 | '''
'''
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
Test.Summary = '''
Test experimental/multiplexer.
'''
# need Curl
Test.SkipUnless(
Condition.PluginExists('multiplexer.so')
)
Test.ContinueOnFail = False
# Define default ATS
ts = Test.MakeATSProcess("ts")
server = Test.MakeOriginServer("server")
request_header = {"headers": "GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n", "timestamp": "1469733493.993", "body": ""}
response_header = {"headers": "HTTP/1.1 200 OK\r\nConnection: close\r\n\r\n", "timestamp": "1469733493.993", "body": ""}
server.addResponse("sessionfile.log", request_header, response_header)
ts.Disk.records_config.update({
'proxy.config.diags.debug.enabled': 1,
'proxy.config.diags.debug.tags': 'multiplexer',
})
ts.Disk.remap_config.AddLine(
'map http://www.example.com http://127.0.0.1:{0} @plugin=multiplexer.so'.format(server.Variables.Port)
)
# For now, just make sure the plugin loads without error.
tr = Test.AddTestRun()
tr.Processes.Default.Command = 'curl --silent --proxy 127.0.0.1:{0} "http://www.example.com" -H "Proxy-Connection: close"'.format(ts.Variables.port)
tr.Processes.Default.ReturnCode = 0
tr.Processes.Default.StartBefore(server, ready=When.PortOpen(server.Variables.Port))
tr.Processes.Default.StartBefore(Test.Processes.ts)
ts.Streams.stderr = "gold/multiplexer.gold"
tr.StillRunningAfter = ts
| apache-2.0 | 1,482,773,943,674,797,600 | 39.923077 | 148 | 0.738252 | false |
wehkamp/ansible | lib/ansible/playbook/role/__init__.py | 15 | 14784 | # (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from six import iteritems, string_types
import inspect
import os
from hashlib import sha1
from types import NoneType
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.parsing import DataLoader
from ansible.playbook.attribute import FieldAttribute
from ansible.playbook.base import Base
from ansible.playbook.become import Become
from ansible.playbook.conditional import Conditional
from ansible.playbook.helpers import load_list_of_blocks
from ansible.playbook.role.include import RoleInclude
from ansible.playbook.role.metadata import RoleMetadata
from ansible.playbook.taggable import Taggable
from ansible.plugins import get_all_plugin_loaders
from ansible.utils.vars import combine_vars
__all__ = ['Role', 'ROLE_CACHE', 'hash_params']
# FIXME: this should be a utility function, but can't be a member of
# the role due to the fact that it would require the use of self
# in a static method. This is also used in the base class for
# strategies (ansible/plugins/strategies/__init__.py)
def hash_params(params):
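    # Dicts and lists are converted (recursively) into frozensets/tuples so a
    # role's parameters can be used as a hashable key into ROLE_CACHE.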
if not isinstance(params, dict):
return params
else:
s = set()
for k,v in params.iteritems():
if isinstance(v, dict):
s.update((k, hash_params(v)))
elif isinstance(v, list):
things = []
for item in v:
things.append(hash_params(item))
s.update((k, tuple(things)))
else:
s.update((k, v))
return frozenset(s)
# The role cache is used to prevent re-loading roles, which
# may already exist. Keys into this cache are the role name and
# a hashable (frozenset) representation of the role parameters,
# as produced by hash_params() above.
ROLE_CACHE = dict()
class Role(Base, Become, Conditional, Taggable):
def __init__(self):
self._role_name = None
self._role_path = None
self._role_params = dict()
self._loader = None
self._metadata = None
self._play = None
self._parents = []
self._dependencies = []
self._task_blocks = []
self._handler_blocks = []
self._default_vars = dict()
self._role_vars = dict()
self._had_task_run = False
self._completed = False
super(Role, self).__init__()
def __repr__(self):
return self.get_name()
def get_name(self):
return self._role_name
@staticmethod
def load(role_include, parent_role=None):
# FIXME: add back in the role caching support
try:
# The ROLE_CACHE is a dictionary of role names, with each entry
# containing another dictionary corresponding to a set of parameters
# specified for a role as the key and the Role() object itself.
# We use frozenset to make the dictionary hashable.
#hashed_params = frozenset(role_include.get_role_params().iteritems())
hashed_params = hash_params(role_include.get_role_params())
if role_include.role in ROLE_CACHE:
for (entry, role_obj) in ROLE_CACHE[role_include.role].iteritems():
if hashed_params == entry:
if parent_role:
role_obj.add_parent(parent_role)
return role_obj
r = Role()
r._load_role_data(role_include, parent_role=parent_role)
if role_include.role not in ROLE_CACHE:
ROLE_CACHE[role_include.role] = dict()
ROLE_CACHE[role_include.role][hashed_params] = r
return r
except RuntimeError:
# FIXME: needs a better way to access the ds in the role include
raise AnsibleError("A recursion loop was detected with the roles specified. Make sure child roles do not have dependencies on parent roles", obj=role_include._ds)
def _load_role_data(self, role_include, parent_role=None):
self._role_name = role_include.role
self._role_path = role_include.get_role_path()
self._role_params = role_include.get_role_params()
self._variable_manager = role_include.get_variable_manager()
self._loader = role_include.get_loader()
if parent_role:
self.add_parent(parent_role)
# copy over all field attributes, except for when and tags, which
# are special cases and need to preserve pre-existing values
for (attr_name, _) in iteritems(self._get_base_attributes()):
if attr_name not in ('when', 'tags'):
setattr(self, attr_name, getattr(role_include, attr_name))
current_when = getattr(self, 'when')[:]
current_when.extend(role_include.when)
setattr(self, 'when', current_when)
current_tags = getattr(self, 'tags')[:]
current_tags.extend(role_include.tags)
setattr(self, 'tags', current_tags)
# dynamically load any plugins from the role directory
for name, obj in get_all_plugin_loaders():
if obj.subdir:
plugin_path = os.path.join(self._role_path, obj.subdir)
if os.path.isdir(plugin_path):
obj.add_directory(plugin_path)
# load the role's other files, if they exist
metadata = self._load_role_yaml('meta')
if metadata:
self._metadata = RoleMetadata.load(metadata, owner=self, loader=self._loader)
self._dependencies = self._load_dependencies()
task_data = self._load_role_yaml('tasks')
if task_data:
self._task_blocks = load_list_of_blocks(task_data, play=None, role=self, loader=self._loader)
handler_data = self._load_role_yaml('handlers')
if handler_data:
self._handler_blocks = load_list_of_blocks(handler_data, play=None, role=self, loader=self._loader)
# vars and default vars are regular dictionaries
self._role_vars = self._load_role_yaml('vars')
if not isinstance(self._role_vars, (dict, NoneType)):
raise AnsibleParserError("The vars/main.yml file for role '%s' must contain a dictionary of variables" % self._role_name)
elif self._role_vars is None:
self._role_vars = dict()
self._default_vars = self._load_role_yaml('defaults')
if not isinstance(self._default_vars, (dict, NoneType)):
raise AnsibleParserError("The default/main.yml file for role '%s' must contain a dictionary of variables" % self._role_name)
elif self._default_vars is None:
self._default_vars = dict()
def _load_role_yaml(self, subdir):
file_path = os.path.join(self._role_path, subdir)
if self._loader.path_exists(file_path) and self._loader.is_directory(file_path):
main_file = self._resolve_main(file_path)
if self._loader.path_exists(main_file):
return self._loader.load_from_file(main_file)
return None
def _resolve_main(self, basepath):
''' flexibly handle variations in main filenames '''
possible_mains = (
os.path.join(basepath, 'main.yml'),
os.path.join(basepath, 'main.yaml'),
os.path.join(basepath, 'main.json'),
os.path.join(basepath, 'main'),
)
if sum([self._loader.is_file(x) for x in possible_mains]) > 1:
raise AnsibleError("found multiple main files at %s, only one allowed" % (basepath))
else:
for m in possible_mains:
if self._loader.is_file(m):
return m # exactly one main file
return possible_mains[0] # zero mains (we still need to return something)
def _load_dependencies(self):
'''
Recursively loads role dependencies from the metadata list of
dependencies, if it exists
'''
deps = []
if self._metadata:
for role_include in self._metadata.dependencies:
r = Role.load(role_include, parent_role=self)
deps.append(r)
return deps
#------------------------------------------------------------------------------
# other functions
def add_parent(self, parent_role):
''' adds a role to the list of this roles parents '''
assert isinstance(parent_role, Role)
if parent_role not in self._parents:
self._parents.append(parent_role)
def get_parents(self):
return self._parents
def get_default_vars(self):
# FIXME: get these from dependent roles too
default_vars = dict()
for dep in self.get_all_dependencies():
default_vars = combine_vars(default_vars, dep.get_default_vars())
default_vars = combine_vars(default_vars, self._default_vars)
return default_vars
def get_inherited_vars(self):
inherited_vars = dict()
for parent in self._parents:
inherited_vars = combine_vars(inherited_vars, parent.get_inherited_vars())
inherited_vars = combine_vars(inherited_vars, parent._role_vars)
inherited_vars = combine_vars(inherited_vars, parent._role_params)
return inherited_vars
def get_vars(self):
all_vars = self.get_inherited_vars()
for dep in self.get_all_dependencies():
all_vars = combine_vars(all_vars, dep.get_vars())
all_vars = combine_vars(all_vars, self._role_vars)
all_vars = combine_vars(all_vars, self._role_params)
return all_vars
def get_direct_dependencies(self):
return self._dependencies[:]
def get_all_dependencies(self):
'''
Returns a list of all deps, built recursively from all child dependencies,
in the proper order in which they should be executed or evaluated.
'''
child_deps = []
for dep in self.get_direct_dependencies():
for child_dep in dep.get_all_dependencies():
child_deps.append(child_dep)
child_deps.append(dep)
return child_deps
def get_task_blocks(self):
return self._task_blocks[:]
def get_handler_blocks(self):
return self._handler_blocks[:]
def has_run(self):
'''
Returns true if this role has been iterated over completely and
at least one task was run
'''
return self._had_task_run and self._completed
def compile(self, play, dep_chain=[]):
'''
Returns the task list for this role, which is created by first
recursively compiling the tasks for all direct dependencies, and
then adding on the tasks for this role.
The role compile() also remembers and saves the dependency chain
with each task, so tasks know by which route they were found, and
can correctly take their parent's tags/conditionals into account.
'''
block_list = []
# update the dependency chain here
new_dep_chain = dep_chain + [self]
deps = self.get_direct_dependencies()
for dep in deps:
dep_blocks = dep.compile(play=play, dep_chain=new_dep_chain)
for dep_block in dep_blocks:
new_dep_block = dep_block.copy()
new_dep_block._dep_chain = new_dep_chain
new_dep_block._play = play
block_list.append(new_dep_block)
block_list.extend(self._task_blocks)
return block_list
def serialize(self, include_deps=True):
res = super(Role, self).serialize()
res['_role_name'] = self._role_name
res['_role_path'] = self._role_path
res['_role_vars'] = self._role_vars
res['_role_params'] = self._role_params
res['_default_vars'] = self._default_vars
res['_had_task_run'] = self._had_task_run
res['_completed'] = self._completed
if self._metadata:
res['_metadata'] = self._metadata.serialize()
if include_deps:
deps = []
for role in self.get_direct_dependencies():
deps.append(role.serialize())
res['_dependencies'] = deps
parents = []
for parent in self._parents:
parents.append(parent.serialize(include_deps=False))
res['_parents'] = parents
return res
def deserialize(self, data, include_deps=True):
self._role_name = data.get('_role_name', '')
self._role_path = data.get('_role_path', '')
self._role_vars = data.get('_role_vars', dict())
self._role_params = data.get('_role_params', dict())
self._default_vars = data.get('_default_vars', dict())
self._had_task_run = data.get('_had_task_run', False)
self._completed = data.get('_completed', False)
if include_deps:
deps = []
for dep in data.get('_dependencies', []):
r = Role()
r.deserialize(dep)
deps.append(r)
setattr(self, '_dependencies', deps)
parent_data = data.get('_parents', [])
parents = []
for parent in parent_data:
r = Role()
r.deserialize(parent, include_deps=False)
parents.append(r)
setattr(self, '_parents', parents)
metadata_data = data.get('_metadata')
if metadata_data:
m = RoleMetadata()
m.deserialize(metadata_data)
self._metadata = m
super(Role, self).deserialize(data)
def set_loader(self, loader):
self._loader = loader
for parent in self._parents:
parent.set_loader(loader)
for dep in self.get_direct_dependencies():
dep.set_loader(loader)
| gpl-3.0 | 7,133,974,722,278,067,000 | 36.333333 | 174 | 0.599026 | false |
saiwing-yeung/scikit-learn | sklearn/neural_network/_stochastic_optimizers.py | 93 | 8873 | """Stochastic optimization methods for MLP
"""
# Authors: Jiyuan Qian <[email protected]>
# License: BSD 3 clause
import numpy as np
class BaseOptimizer(object):
"""Base (Stochastic) gradient descent optimizer
Parameters
----------
params : list, length = len(coefs_) + len(intercepts_)
The concatenated list containing coefs_ and intercepts_ in MLP model.
Used for initializing velocities and updating params
learning_rate_init : float, optional, default 0.1
The initial learning rate used. It controls the step-size in updating
the weights
Attributes
----------
learning_rate : float
the current learning rate
"""
def __init__(self, params, learning_rate_init=0.1):
self.params = [param for param in params]
self.learning_rate_init = learning_rate_init
self.learning_rate = float(learning_rate_init)
def update_params(self, grads):
"""Update parameters with given gradients
Parameters
----------
grads : list, length = len(params)
Containing gradients with respect to coefs_ and intercepts_ in MLP
model. So length should be aligned with params
"""
updates = self._get_updates(grads)
for param, update in zip(self.params, updates):
param += update
def iteration_ends(self, time_step):
"""Perform update to learning rate and potentially other states at the
end of an iteration
"""
pass
def trigger_stopping(self, msg, verbose):
"""Decides whether it is time to stop training
Parameters
----------
msg : str
Message passed in for verbose output
verbose : bool
Print message to stdin if True
Returns
-------
is_stopping : bool
True if training needs to stop
"""
if verbose:
print(msg + " Stopping.")
return True
class SGDOptimizer(BaseOptimizer):
"""Stochastic gradient descent optimizer with momentum
Parameters
----------
params : list, length = len(coefs_) + len(intercepts_)
The concatenated list containing coefs_ and intercepts_ in MLP model.
Used for initializing velocities and updating params
learning_rate_init : float, optional, default 0.1
The initial learning rate used. It controls the step-size in updating
the weights
lr_schedule : {'constant', 'adaptive', 'invscaling'}, default 'constant'
Learning rate schedule for weight updates.
-'constant', is a constant learning rate given by
'learning_rate_init'.
-'invscaling' gradually decreases the learning rate 'learning_rate_' at
each time step 't' using an inverse scaling exponent of 'power_t'.
learning_rate_ = learning_rate_init / pow(t, power_t)
-'adaptive', keeps the learning rate constant to
'learning_rate_init' as long as the training keeps decreasing.
Each time 2 consecutive epochs fail to decrease the training loss by
tol, or fail to increase validation score by tol if 'early_stopping'
is on, the current learning rate is divided by 5.
momentum : float, optional, default 0.9
Value of momentum used, must be larger than or equal to 0
nesterov : bool, optional, default True
Whether to use nesterov's momentum or not. Use nesterov's if True
Attributes
----------
learning_rate : float
the current learning rate
velocities : list, length = len(params)
velocities that are used to update params
"""
def __init__(self, params, learning_rate_init=0.1, lr_schedule='constant',
momentum=0.9, nesterov=True, power_t=0.5):
super(SGDOptimizer, self).__init__(params, learning_rate_init)
self.lr_schedule = lr_schedule
self.momentum = momentum
self.nesterov = nesterov
self.power_t = power_t
self.velocities = [np.zeros_like(param) for param in params]
def iteration_ends(self, time_step):
"""Perform updates to learning rate and potential other states at the
end of an iteration
Parameters
----------
time_step : int
number of training samples trained on so far, used to update
learning rate for 'invscaling'
"""
if self.lr_schedule == 'invscaling':
self.learning_rate = (float(self.learning_rate_init) /
(time_step + 1) ** self.power_t)
def trigger_stopping(self, msg, verbose):
if self.lr_schedule == 'adaptive':
if self.learning_rate > 1e-6:
self.learning_rate /= 5.
if verbose:
print(msg + " Setting learning rate to %f" %
self.learning_rate)
return False
else:
if verbose:
print(msg + " Learning rate too small. Stopping.")
return True
else:
if verbose:
print(msg + " Stopping.")
return True
def _get_updates(self, grads):
"""Get the values used to update params with given gradients
Parameters
----------
grads : list, length = len(coefs_) + len(intercepts_)
Containing gradients with respect to coefs_ and intercepts_ in MLP
model. So length should be aligned with params
Returns
-------
updates : list, length = len(grads)
The values to add to params
"""
updates = [self.momentum * velocity - self.learning_rate * grad
for velocity, grad in zip(self.velocities, grads)]
self.velocities = updates
if self.nesterov:
updates = [self.momentum * velocity - self.learning_rate * grad
for velocity, grad in zip(self.velocities, grads)]
return updates
class AdamOptimizer(BaseOptimizer):
"""Stochastic gradient descent optimizer with Adam
Note: All default values are from the original Adam paper
Parameters
----------
params : list, length = len(coefs_) + len(intercepts_)
The concatenated list containing coefs_ and intercepts_ in MLP model.
Used for initializing velocities and updating params
    learning_rate_init : float, optional, default 0.001
The initial learning rate used. It controls the step-size in updating
the weights
beta_1 : float, optional, default 0.9
Exponential decay rate for estimates of first moment vector, should be
in [0, 1)
beta_2 : float, optional, default 0.999
Exponential decay rate for estimates of second moment vector, should be
in [0, 1)
epsilon : float, optional, default 1e-8
Value for numerical stability
Attributes
----------
learning_rate : float
The current learning rate
t : int
Timestep
ms : list, length = len(params)
First moment vectors
vs : list, length = len(params)
Second moment vectors
References
----------
Kingma, Diederik, and Jimmy Ba.
"Adam: A method for stochastic optimization."
arXiv preprint arXiv:1412.6980 (2014).
"""
def __init__(self, params, learning_rate_init=0.001, beta_1=0.9,
beta_2=0.999, epsilon=1e-8):
super(AdamOptimizer, self).__init__(params, learning_rate_init)
self.beta_1 = beta_1
self.beta_2 = beta_2
self.epsilon = epsilon
self.t = 0
self.ms = [np.zeros_like(param) for param in params]
self.vs = [np.zeros_like(param) for param in params]
def _get_updates(self, grads):
"""Get the values used to update params with given gradients
Parameters
----------
grads : list, length = len(coefs_) + len(intercepts_)
Containing gradients with respect to coefs_ and intercepts_ in MLP
model. So length should be aligned with params
Returns
-------
updates : list, length = len(grads)
The values to add to params
"""
self.t += 1
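        # Update the biased first (ms) and second (vs) moment estimates, then
        # fold the bias correction terms into an effective learning rate as in
        # Algorithm 1 of the Adam paper.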
self.ms = [self.beta_1 * m + (1 - self.beta_1) * grad
for m, grad in zip(self.ms, grads)]
self.vs = [self.beta_2 * v + (1 - self.beta_2) * (grad ** 2)
for v, grad in zip(self.vs, grads)]
self.learning_rate = (self.learning_rate_init *
np.sqrt(1 - self.beta_2 ** self.t) /
(1 - self.beta_1 ** self.t))
updates = [-self.learning_rate * m / (np.sqrt(v) + self.epsilon)
for m, v in zip(self.ms, self.vs)]
return updates
| bsd-3-clause | -3,335,869,347,061,843,500 | 32.357143 | 79 | 0.588076 | false |
HopeFOAM/HopeFOAM | ThirdParty-0.1/ParaView-5.0.1/VTK/ThirdParty/Twisted/twisted/conch/test/test_tap.py | 32 | 6164 | # Copyright (c) Twisted Matrix Laboratories.
# See LICENSE for details.
"""
Tests for L{twisted.conch.tap}.
"""
try:
import Crypto.Cipher.DES3
except:
Crypto = None
try:
import pyasn1
except ImportError:
pyasn1 = None
try:
from twisted.conch import unix
except ImportError:
unix = None
if Crypto and pyasn1 and unix:
from twisted.conch import tap
from twisted.conch.openssh_compat.factory import OpenSSHFactory
from twisted.application.internet import StreamServerEndpointService
from twisted.cred import error
from twisted.cred.credentials import IPluggableAuthenticationModules
from twisted.cred.credentials import ISSHPrivateKey
from twisted.cred.credentials import IUsernamePassword, UsernamePassword
from twisted.trial.unittest import TestCase
class MakeServiceTest(TestCase):
"""
Tests for L{tap.makeService}.
"""
if not Crypto:
skip = "can't run w/o PyCrypto"
if not pyasn1:
skip = "Cannot run without PyASN1"
if not unix:
skip = "can't run on non-posix computers"
usernamePassword = ('iamuser', 'thisispassword')
def setUp(self):
"""
Create a file with two users.
"""
self.filename = self.mktemp()
f = open(self.filename, 'wb+')
f.write(':'.join(self.usernamePassword))
f.close()
self.options = tap.Options()
def test_basic(self):
"""
L{tap.makeService} returns a L{StreamServerEndpointService} instance
running on TCP port 22, and the linked protocol factory is an instance
of L{OpenSSHFactory}.
"""
config = tap.Options()
service = tap.makeService(config)
self.assertIsInstance(service, StreamServerEndpointService)
self.assertEqual(service.endpoint._port, 22)
self.assertIsInstance(service.factory, OpenSSHFactory)
def test_defaultAuths(self):
"""
Make sure that if the C{--auth} command-line option is not passed,
the default checkers are (for backwards compatibility): SSH, UNIX, and
PAM if available
"""
numCheckers = 2
try:
from twisted.cred import pamauth
self.assertIn(IPluggableAuthenticationModules,
self.options['credInterfaces'],
"PAM should be one of the modules")
numCheckers += 1
except ImportError:
pass
self.assertIn(ISSHPrivateKey, self.options['credInterfaces'],
"SSH should be one of the default checkers")
self.assertIn(IUsernamePassword, self.options['credInterfaces'],
"UNIX should be one of the default checkers")
self.assertEqual(numCheckers, len(self.options['credCheckers']),
"There should be %d checkers by default" % (numCheckers,))
def test_authAdded(self):
"""
The C{--auth} command-line option will add a checker to the list of
checkers, and it should be the only auth checker
"""
self.options.parseOptions(['--auth', 'file:' + self.filename])
self.assertEqual(len(self.options['credCheckers']), 1)
def test_multipleAuthAdded(self):
"""
Multiple C{--auth} command-line options will add all checkers specified
        to the list of checkers, and there should only be the specified auth
checkers (no default checkers).
"""
self.options.parseOptions(['--auth', 'file:' + self.filename,
'--auth', 'memory:testuser:testpassword'])
self.assertEqual(len(self.options['credCheckers']), 2)
def test_authFailure(self):
"""
The checker created by the C{--auth} command-line option returns a
L{Deferred} that fails with L{UnauthorizedLogin} when
presented with credentials that are unknown to that checker.
"""
self.options.parseOptions(['--auth', 'file:' + self.filename])
checker = self.options['credCheckers'][-1]
invalid = UsernamePassword(self.usernamePassword[0], 'fake')
# Wrong password should raise error
return self.assertFailure(
checker.requestAvatarId(invalid), error.UnauthorizedLogin)
def test_authSuccess(self):
"""
The checker created by the C{--auth} command-line option returns a
L{Deferred} that returns the avatar id when presented with credentials
that are known to that checker.
"""
self.options.parseOptions(['--auth', 'file:' + self.filename])
checker = self.options['credCheckers'][-1]
correct = UsernamePassword(*self.usernamePassword)
d = checker.requestAvatarId(correct)
def checkSuccess(username):
self.assertEqual(username, correct.username)
return d.addCallback(checkSuccess)
def test_checkersPamAuth(self):
"""
The L{OpenSSHFactory} built by L{tap.makeService} has a portal with
L{IPluggableAuthenticationModules}, L{ISSHPrivateKey} and
L{IUsernamePassword} interfaces registered as checkers if C{pamauth} is
available.
"""
# Fake the presence of pamauth, even if PyPAM is not installed
self.patch(tap, "pamauth", object())
config = tap.Options()
service = tap.makeService(config)
portal = service.factory.portal
self.assertEqual(
set(portal.checkers.keys()),
set([IPluggableAuthenticationModules, ISSHPrivateKey,
IUsernamePassword]))
def test_checkersWithoutPamAuth(self):
"""
The L{OpenSSHFactory} built by L{tap.makeService} has a portal with
L{ISSHPrivateKey} and L{IUsernamePassword} interfaces registered as
checkers if C{pamauth} is not available.
"""
# Fake the absence of pamauth, even if PyPAM is installed
self.patch(tap, "pamauth", None)
config = tap.Options()
service = tap.makeService(config)
portal = service.factory.portal
self.assertEqual(
set(portal.checkers.keys()),
set([ISSHPrivateKey, IUsernamePassword]))
| gpl-3.0 | -540,310,456,152,638,200 | 32.68306 | 79 | 0.641629 | false |
quinot/ansible | lib/ansible/modules/network/illumos/ipadm_addrprop.py | 61 | 7148 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2016, Adam Števko <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ipadm_addrprop
short_description: Manage IP address properties on Solaris/illumos systems.
description:
- Modify IP address properties on Solaris/illumos systems.
version_added: "2.3"
author: Adam Števko (@xen0l)
options:
addrobj:
description:
- Specifies the address object we want to manage.
required: true
aliases: [nic, interface]
property:
description:
- Specifies the name of the address property we want to manage.
required: true
aliases: [name]
value:
description:
- Specifies the value we want to set for the address property.
required: false
temporary:
description:
- Specifies that the address property value is temporary.
Temporary values do not persist across reboots.
required: false
default: false
state:
description:
- Set or reset the property value.
required: false
default: present
choices: [ "present", "absent", "reset" ]
'''
EXAMPLES = '''
- name: Mark address on addrobj as deprecated
ipadm_addrprop: property=deprecated value=on addrobj=e1000g0/v6
- name: Set network prefix length for addrobj
ipadm_addrprop: addrobj=bge0/v4 name=prefixlen value=26
'''
RETURN = '''
property:
description: property name
returned: always
type: string
sample: deprecated
addrobj:
description: address object name
returned: always
type: string
sample: bge0/v4
state:
description: state of the target
returned: always
type: string
sample: present
temporary:
description: specifies if operation will persist across reboots
returned: always
type: boolean
sample: True
value:
description: property value
returned: when value is provided
type: string
sample: 26
'''
from ansible.module_utils.basic import AnsibleModule
class AddrProp(object):
def __init__(self, module):
self.module = module
self.addrobj = module.params['addrobj']
self.property = module.params['property']
self.value = module.params['value']
self.temporary = module.params['temporary']
self.state = module.params['state']
def property_exists(self):
cmd = [self.module.get_bin_path('ipadm')]
cmd.append('show-addrprop')
cmd.append('-p')
cmd.append(self.property)
cmd.append(self.addrobj)
(rc, _, _) = self.module.run_command(cmd)
if rc == 0:
return True
else:
self.module.fail_json(msg='Unknown property "%s" on addrobj %s' %
(self.property, self.addrobj),
property=self.property,
addrobj=self.addrobj)
def property_is_modified(self):
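        # "Modified" is checked against the default: this returns True when the
        # current value equals the default value (i.e. nothing to reset).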
cmd = [self.module.get_bin_path('ipadm')]
cmd.append('show-addrprop')
cmd.append('-c')
cmd.append('-o')
cmd.append('current,default')
cmd.append('-p')
cmd.append(self.property)
cmd.append(self.addrobj)
(rc, out, _) = self.module.run_command(cmd)
out = out.rstrip()
(value, default) = out.split(':')
if rc == 0 and value == default:
return True
else:
return False
def property_is_set(self):
cmd = [self.module.get_bin_path('ipadm')]
cmd.append('show-addrprop')
cmd.append('-c')
cmd.append('-o')
cmd.append('current')
cmd.append('-p')
cmd.append(self.property)
cmd.append(self.addrobj)
(rc, out, _) = self.module.run_command(cmd)
out = out.rstrip()
if rc == 0 and self.value == out:
return True
else:
return False
def set_property(self):
cmd = [self.module.get_bin_path('ipadm')]
cmd.append('set-addrprop')
if self.temporary:
cmd.append('-t')
cmd.append('-p')
cmd.append(self.property + '=' + self.value)
cmd.append(self.addrobj)
return self.module.run_command(cmd)
def reset_property(self):
cmd = [self.module.get_bin_path('ipadm')]
cmd.append('reset-addrprop')
if self.temporary:
cmd.append('-t')
cmd.append('-p')
cmd.append(self.property)
cmd.append(self.addrobj)
return self.module.run_command(cmd)
def main():
module = AnsibleModule(
argument_spec=dict(
            addrobj=dict(required=True, aliases=['nic', 'interface']),
property=dict(required=True, aliases=['name']),
value=dict(required=False),
temporary=dict(default=False, type='bool'),
state=dict(
default='present', choices=['absent', 'present', 'reset']),
),
supports_check_mode=True
)
addrprop = AddrProp(module)
rc = None
out = ''
err = ''
result = {}
result['property'] = addrprop.property
result['addrobj'] = addrprop.addrobj
result['state'] = addrprop.state
result['temporary'] = addrprop.temporary
if addrprop.value:
result['value'] = addrprop.value
if addrprop.state == 'absent' or addrprop.state == 'reset':
if addrprop.property_exists():
if not addrprop.property_is_modified():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = addrprop.reset_property()
if rc != 0:
module.fail_json(property=addrprop.property,
addrobj=addrprop.addrobj,
msg=err,
rc=rc)
elif addrprop.state == 'present':
if addrprop.value is None:
module.fail_json(msg='Value is mandatory with state "present"')
if addrprop.property_exists():
if not addrprop.property_is_set():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = addrprop.set_property()
if rc != 0:
module.fail_json(property=addrprop.property,
addrobj=addrprop.addrobj,
msg=err,
rc=rc)
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
module.exit_json(**result)
if __name__ == '__main__':
main()
| gpl-3.0 | 5,081,859,442,747,857,000 | 26.590734 | 92 | 0.558494 | false |
leiferikb/bitpop | src/third_party/scons-2.0.1/engine/SCons/Tool/tlib.py | 61 | 1884 | """SCons.Tool.tlib
XXX
"""
#
# Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010 The SCons Foundation
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
__revision__ = "src/engine/SCons/Tool/tlib.py 5134 2010/08/16 23:02:40 bdeegan"
import SCons.Tool
import SCons.Tool.bcc32
import SCons.Util
def generate(env):
SCons.Tool.bcc32.findIt('tlib', env)
"""Add Builders and construction variables for ar to an Environment."""
SCons.Tool.createStaticLibBuilder(env)
env['AR'] = 'tlib'
env['ARFLAGS'] = SCons.Util.CLVar('')
env['ARCOM'] = '$AR $TARGET $ARFLAGS /a $SOURCES'
env['LIBPREFIX'] = ''
env['LIBSUFFIX'] = '.lib'
def exists(env):
return SCons.Tool.bcc32.findIt('tlib', env)
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4:
| gpl-3.0 | 352,941,697,377,537,150 | 34.54717 | 95 | 0.721868 | false |
blazek/QGIS | python/plugins/processing/algs/qgis/SelectByExpression.py | 15 | 3786 | # -*- coding: utf-8 -*-
"""
***************************************************************************
SelectByExpression.py
---------------------
Date : July 2014
Copyright : (C) 2014 by Michael Douchin
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Michael Douchin'
__date__ = 'July 2014'
__copyright__ = '(C) 2014, Michael Douchin'
from qgis.core import (QgsExpression,
QgsProcessing,
QgsVectorLayer,
QgsProcessingAlgorithm,
QgsProcessingException,
QgsProcessingParameterVectorLayer,
QgsProcessingParameterExpression,
QgsProcessingParameterEnum,
QgsProcessingOutputVectorLayer)
from processing.algs.qgis.QgisAlgorithm import QgisAlgorithm
class SelectByExpression(QgisAlgorithm):
INPUT = 'INPUT'
EXPRESSION = 'EXPRESSION'
OUTPUT = 'OUTPUT'
METHOD = 'METHOD'
def group(self):
return self.tr('Vector selection')
def groupId(self):
return 'vectorselection'
def __init__(self):
super().__init__()
def flags(self):
return super().flags() | QgsProcessingAlgorithm.FlagNoThreading
def initAlgorithm(self, config=None):
self.methods = [self.tr('creating new selection'),
self.tr('adding to current selection'),
self.tr('removing from current selection'),
self.tr('selecting within current selection')]
self.addParameter(QgsProcessingParameterVectorLayer(self.INPUT, self.tr('Input layer'), types=[QgsProcessing.TypeVector]))
self.addParameter(QgsProcessingParameterExpression(self.EXPRESSION,
self.tr('Expression'), parentLayerParameterName=self.INPUT))
self.addParameter(QgsProcessingParameterEnum(self.METHOD,
self.tr('Modify current selection by'), self.methods, defaultValue=0))
self.addOutput(QgsProcessingOutputVectorLayer(self.OUTPUT, self.tr('Selected (attribute)')))
def name(self):
return 'selectbyexpression'
def displayName(self):
return self.tr('Select by expression')
def processAlgorithm(self, parameters, context, feedback):
layer = self.parameterAsVectorLayer(parameters, self.INPUT, context)
method = self.parameterAsEnum(parameters, self.METHOD, context)
if method == 0:
behavior = QgsVectorLayer.SetSelection
elif method == 1:
behavior = QgsVectorLayer.AddToSelection
elif method == 2:
behavior = QgsVectorLayer.RemoveFromSelection
elif method == 3:
behavior = QgsVectorLayer.IntersectSelection
expression = self.parameterAsString(parameters, self.EXPRESSION, context)
qExp = QgsExpression(expression)
if qExp.hasParserError():
raise QgsProcessingException(qExp.parserErrorString())
layer.selectByExpression(expression, behavior)
return {self.OUTPUT: parameters[self.INPUT]}
| gpl-2.0 | -997,381,572,583,215,400 | 38.852632 | 130 | 0.549128 | false |
GoogleCloudPlatform/cloud-foundation-toolkit | dm/templates/interconnect/interconnect.py | 1 | 2261 | # Copyright 2018 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" This template creates an Interconnect resource. """
def generate_config(context):
""" Entry point for the deployment resources. """
properties = context.properties
name = properties.get('name', context.env['name'])
project_id = properties.get('project', context.env['project'])
resources = []
intercon = {
'name': context.env['name'],
# https://cloud.google.com/compute/docs/reference/rest/v1/interconnects
'type': 'gcp-types/compute-v1:interconnects',
'properties':
{
'project': project_id,
'name': name,
'customerName':
context.properties['customerName'],
'interconnectType':
context.properties['interconnectType'],
'location':
context.properties['location'],
'requestedLinkCount':
context.properties['requestedLinkCount']
}
}
optional_props = [
'adminEnabled',
'description',
'linkType',
'nocContactEmail'
]
for prop in optional_props:
if prop in context.properties:
intercon['properties'][prop] = context.properties[prop]
resources.append(intercon)
return {
'resources':
resources,
'outputs':
[
{
'name': 'name',
'value': name
},
{
'name': 'selfLink',
'value': '$(ref.{}.selfLink)'.format(context.env['name'])
}
]
}
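# Illustrative sketch (added for clarity; not part of the original template):
# the shape of ``context.properties`` this template expects. All values below
# are hypothetical placeholders; consult the Compute API documentation for the
# valid ``interconnectType``/``linkType`` enums and location URLs.
_EXAMPLE_PROPERTIES = {
    'name': 'example-interconnect',
    'customerName': 'example-customer',
    'interconnectType': 'DEDICATED',
    'location': 'https://www.googleapis.com/compute/v1/projects/example-project/global/interconnectLocations/example-location',
    'requestedLinkCount': 1,
    # Optional keys copied through only when present:
    'adminEnabled': True,
    'description': 'Example interconnect',
    'linkType': 'LINK_TYPE_ETHERNET_10G_LR',
    'nocContactEmail': 'noc@example.com',
}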
| apache-2.0 | 3,848,690,436,779,972,600 | 30.402778 | 79 | 0.560814 | false |
Define-break-continue/raspoznavayka | src/lib/rtaudio-4.1.2/contrib/python/pyrtaudio/setup.py | 8 | 1601 | #!/bin/env python
import os
from distutils.core import setup, Extension
if hasattr(os, 'uname'):
OSNAME = os.uname()[0]
else:
OSNAME = 'Windows'
define_macros = []
libraries = []
extra_link_args = []
extra_compile_args = ['-I../../../']
sources = ['rtaudiomodule.cpp', '../../../RtAudio.cpp']
if OSNAME == 'Linux':
define_macros=[("__LINUX_ALSA__", ''),
('__LINUX_JACK__', ''),
('__LINUX_OSS__', '')]
libraries = ['asound', 'jack', 'pthread']
elif OSNAME == 'Darwin':
define_macros = [('__MACOSX_CORE__', '')]
libraries = ['pthread', 'stdc++']
extra_link_args = ['-framework', 'CoreAudio']
elif OSNAME == 'Windows':
define_macros = [('__WINDOWS_DS__', None),
('__WINDOWS_ASIO__', None),
('__LITTLE_ENDIAN__',None),
('WIN32',None)]
libraries = ['winmm', 'dsound', 'Advapi32','Ole32','User32']
sources += ['../../../include/asio.cpp',
'../../../include/asiodrivers.cpp',
'../../../include/asiolist.cpp',
'../../../include/iasiothiscallresolver.cpp']
extra_compile_args.append('-I../../../include/')
extra_compile_args.append('-EHsc')
audio = Extension('rtaudio',
sources=sources,
libraries=libraries,
define_macros=define_macros,
extra_compile_args = extra_compile_args,
extra_link_args = extra_link_args,
)
setup(name = 'rtaudio',
version = '0.1',
description = 'Python RtAudio interface',
ext_modules = [audio])
| gpl-3.0 | 8,364,879,456,257,611,000 | 26.603448 | 64 | 0.518426 | false |
TheBraveWarrior/pyload | module/plugins/accounts/QuickshareCz.py | 8 | 1274 | # -*- coding: utf-8 -*-
import re
from ..internal.Account import Account
class QuickshareCz(Account):
__name__ = "QuickshareCz"
__type__ = "account"
__version__ = "0.11"
__status__ = "testing"
__description__ = """Quickshare.cz account plugin"""
__license__ = "GPLv3"
__authors__ = [("zoidberg", "[email protected]")]
TRAFFIC_LEFT_PATTERN = r'Stav kreditu: <strong>(.+?)</strong>'
def grab_info(self, user, password, data):
html = self.load("http://www.quickshare.cz/premium")
m = re.search(self.TRAFFIC_LEFT_PATTERN, html)
if m is not None:
trafficleft = self.parse_traffic(m.group(1))
premium = True if trafficleft else False
else:
trafficleft = None
premium = False
return {'validuntil': -1, 'trafficleft': trafficleft, 'premium': premium}
def signin(self, user, password, data):
html = self.load('http://www.quickshare.cz/html/prihlaseni_process.php',
post={'akce': u'Přihlásit',
'heslo': password,
'jmeno': user})
if u'>Takový uživatel neexistuje.<' in html or u'>Špatné heslo.<' in html:
self.fail_login()
| gpl-3.0 | 4,411,305,939,339,684,000 | 30.7 | 82 | 0.554416 | false |
glynjackson/ec2-deploy | ec2_deploy/ec2/api.py | 1 | 1965 | import time
import boto
import boto.ec2
from fabric.api import task, settings, sudo, execute, env, run, cd, local, put, abort, get, hosts
from fabric.operations import prompt
from ec2_deploy.connections import AWS
from ec2_deploy.notifications import Notification
def create_instance(instance_type='web', address=None):
"""
Creates a new EC2 Instance using Boto.
"""
# Open connection to AWS
try:
connection = AWS(access_key=env.aws_key, secret_key=env.aws_secret_key).connect()
except Exception:
Notification("Could not connect to AWS").error()
abort("Exited!")
aws_ami = prompt("Hit enter to use the default Ubuntu AMI or enter one:", default="ami-234ecc54")
aws_security_groups = prompt("Enter the security group (must already exist)?", default="web")
aws_instance_type = prompt("What instance type do you want to create? ", default="m3.medium")
aws_instance_key_name = prompt("Enter your key pair name (don't include .pem extension)", default=env.key_filename.rsplit('/', 1)[1][:-4])
BUILD_SERVER = {
'image_id': aws_ami,
'instance_type': aws_instance_type,
'security_groups': [aws_security_groups],
'key_name': aws_instance_key_name
}
Notification('Spinning up the instance...').info()
# Create new instance using boto.
reservation = connection.run_instances(**BUILD_SERVER)
instance = reservation.instances[0]
time.sleep(5)
while instance.state != 'running':
time.sleep(5)
instance.update()
Notification('-Instance state: %s' % instance.state).info()
Notification('Instance %s was created successfully' % instance.id).success()
# A new instance take a little while to allow connections so sleep for x seconds.
sleep_for = 30
Notification('Sleeping for %s seconds before attempting to connect...' % sleep_for).info()
time.sleep(sleep_for)
return instance.public_dns_name
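# Illustrative usage sketch (added for clarity; not part of the original module).
# It assumes the Fabric ``env`` has already been populated with ``aws_key``,
# ``aws_secret_key`` and ``key_filename`` as ``create_instance`` expects; the
# helper below is hypothetical and is only defined, never called automatically.
def example_create_web_instance():
    public_dns = create_instance(instance_type='web')
    Notification('New instance reachable at %s' % public_dns).info()
    return public_dns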
| mit | -3,967,451,818,571,063,000 | 34.727273 | 142 | 0.678372 | false |
grant-olson/pyasm | makeStructs.py | 1 | 7808 | # Copyright 2004-2010 Grant T. Olson.
# See license.txt for terms.
import re, sys, glob, os, time
#These files are in the 'include' directory, but are modules.
#They won't get built properly
skipFiles = ('datetime.h','py_curses.h','structseq.h','symtable.h')
def emitFileHeader():
print """/* Copyright 2004-2006 Grant T. Olson. See license.txt for terms.*/
#include <Python.h>
/* file autogenerated by pyasm's makeStructs script on """ + time.ctime() + """ */
/* Preprocessor abuse at its finest.
We could probably do all of this in a straight python file, but then
   it wouldn't be in sync with a particular build. This ensures we have
the right offsets for our structs in a way we can't in pure python
*/
#define OFFSET_STRING(f) #f
#define OFFSET(m,s,f) \
offset = PyInt_FromLong((long)&(((s*)0)->f)); \
Py_INCREF(offset); \
PyModule_AddObject(m, OFFSET_STRING(f), offset);
/* Py_DEBUG implies Py_TRACE_REFS. */
#if defined(Py_DEBUG) && !defined(Py_TRACE_REFS)
#define Py_TRACE_REFS
#endif
/* Py_TRACE_REFS implies Py_REF_DEBUG. */
#if defined(Py_TRACE_REFS) && !defined(Py_REF_DEBUG)
#define Py_REF_DEBUG
#endif
static PyObject *StructsError;
static PyObject *offset;
static PyMethodDef StructsMethods[] = {
{NULL, NULL, 0, NULL} /* Sentinel */
};
"""
def emitFileBody():
print """
static void
load_PyObject(PyObject* module)
{
#ifdef Py_TRACE_REFS
OFFSET(module,PyObject,_ob_next);
OFFSET(module,PyObject,_ob_prev);
#endif
OFFSET(module,PyObject,ob_refcnt);
OFFSET(module,PyObject,ob_type);
}
static void
load_PyVarObject(PyObject* module)
{
load_PyObject(module);
OFFSET(module,PyVarObject,ob_size);
}
"""
def emitFileFooter(modules):
print """
PyMODINIT_FUNC
initstructs(void)
{
PyObject *m, *n, *o;
/*PyObject *offset;*/
m = Py_InitModule("structs", StructsMethods);
n = Py_InitModule("PyObject", StructsMethods);
o = Py_InitModule("PyVarObject", StructsMethods);
StructsError = PyErr_NewException("structs.StructsError", NULL, NULL);
Py_INCREF(StructsError);
PyModule_AddObject(m, "StructsError", StructsError);
load_PyObject(n);
Py_INCREF(n);
PyModule_AddObject(m, "PyObject", n);
load_PyVarObject(o);
Py_INCREF(o);
PyModule_AddObject(m, "PyVarObject", o);
%s
}""" % ''.join([" load_%s(m);\n" % x for x in modules])
def emitModuleHeader(moduleName):
print """static void
load_%(funcname)s(PyObject *structs)
{
PyObject *sm = Py_InitModule("%(funcname)s",StructsMethods);
""" % {'funcname':moduleName}
def emitModuleFooter(moduleName):
print """
Py_INCREF(sm);
PyModule_AddObject(structs,"%(funcname)s",sm);
}""" % {'funcname':moduleName}
structsRe = re.compile("typedef\s+struct\s*\w*\s*{(.*?)}\s*(\w+)",re.DOTALL)
typeofRe = re.compile(r"(?P<type>\w+)\s*(?P<rest>[^;]+);")
variablesRe = re.compile(r"(\(|\)|\*\*|\*|\[|\]|\w+)[,\s]*")
names = []
def emitComment(commentText):
print "/* %s */" % commentText
def emitRaw(rawText):
print rawText
def emitOffset(name,val):
print " OFFSET(sm,%s,%s);" % (name,val)
def parse_filetext(filetext):
global names
for struct in structsRe.findall(filetext):
body,name = struct
if name in ('PyObject','PyVarObject', 'PyFrameObject'):
emitComment("Skipping object %s" % name)
continue
print >> sys.stderr, "NAME", name
startComment = body.find("/*")
while startComment >= 0: #strip multiline comments
endComment = body.find("*/",startComment) + 2
body = body[:startComment] + body[endComment:]
startComment = body.find("/*")
lines = body.split("\n")
isPyObject = False
for line in lines:
line = line.strip()
if not line:
continue
print >> sys.stderr, "LINE:" , line
if line.startswith("#"):
print >> sys.stderr, "PREPROCESSOR DIRECTIVE"
emitRaw(line)
elif line == 'PyObject_HEAD':
print >> sys.stderr, "HEADER" , line
isPyObject = True
emitModuleHeader(name)
names.append(name)
emitRaw(" load_PyObject(sm);")
elif line == 'PyObject_VAR_HEAD':
print >> sys.stderr, "HEADER" , line
isPyObject = True
emitModuleHeader(name)
names.append(name)
emitRaw(" load_PyVarObject(sm);")
elif line:
if isPyObject == False:
print >> sys.stderr, "NOT A PyObject: SKIPPING" , name
emitComment("Skipping struct %s, not a PyObject based struct" % name)
break
typeof,rest = typeofRe.match(line).groups()
print >> sys.stderr, "TYPE", typeof
vars = variablesRe.findall(rest)
vars.reverse()
if typeof == "struct": # skip struct def
print >> sys.stderr, "STRUCT", vars
vars.pop()
while vars:
var = vars.pop()
if var in ('*', '**'):
var = vars.pop()
if var == "(":
#function pointer
print >> sys.stderr, "FUNCTION POINTER", vars
var = vars.pop()
if var != "*":
print >> sys.stderr, var, vars
raise RuntimeError("Invalid Function Pointer "
"format: %s. Expected '*' got %s from %s" % (line,var,vars))
var = vars.pop()
emitOffset(name, var)
vars = None
else:
print >> sys.stderr, "POINTER", var
emitOffset(name, var)
elif var == '(':
print >> sys.stderr, "FUNCTION POINTER", vars
var = vars.pop()
print >> sys.stderr, "NAME VAR" , name, var
if var != "*":
print >> sys.stderr, var, vars
raise RuntimeError("Invalid Function Pointer "
"format: %s. Expected '*' got %s from %s" % (line,var,vars))
var = vars.pop()
emitOffset(name, var)
vars = None
elif var == "[":
print >> sys.stderr, "SKIPPING ARRAY STUB" , vars
var = vars.pop()
var = vars.pop()
else:
print >> sys.stderr, "normal", var
emitOffset(name,var)
if isPyObject == True:
emitModuleFooter(name)
def parse_headers():
headerDir = os.path.join(sys.exec_prefix, "include")
headerFiles = glob.glob(os.path.join(headerDir,"*.h"))
headerFiles = [x for x in headerFiles if os.path.split(x)[1] not in skipFiles]
for filename in headerFiles:
print >> sys.stderr, "PROCESSING FILE", filename
print "\n\n/* Generated from file %s */\n\n" % filename
f = file(filename)
filetext = f.read()
f.close()
parse_filetext(filetext)
def make_struct_c():
emitFileHeader()
emitFileBody()
parse_headers()
emitFileFooter(names)
make_struct_c()
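# Illustrative sketch (added for clarity; not part of the original script). Once
# the C source printed above is compiled into the ``structs`` extension, the
# recorded offsets are expected to be reachable as plain integers, e.g.:
#
#     import structs
#     structs.PyObject.ob_refcnt      # byte offset of ob_refcnt in PyObject
#     structs.PyVarObject.ob_size     # byte offset of ob_size in PyVarObject
#
# Kept as a comment so the script's stdout (the generated C file) is unchanged.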
| bsd-3-clause | 1,840,819,514,430,691,300 | 30.483871 | 108 | 0.519083 | false |
aptrishu/coala | coalib/misc/Caching.py | 10 | 5435 | import logging
import time
import os
from coala_utils.decorators import enforce_signature
from coalib.misc.CachingUtilities import (
pickle_load, pickle_dump, delete_files)
class FileCache:
"""
This object is a file cache that helps in collecting only the changed
and new files since the last run. Example/Tutorial:
>>> import logging
>>> import copy, time
>>> logging.getLogger().setLevel(logging.CRITICAL)
To initialize the cache create an instance for the project:
>>> cache = FileCache(None, "test", flush_cache=True)
Now we can track new files by running:
>>> cache.track_files(["a.c", "b.c"])
Since all cache operations are lazy (for performance), we need to
explicitly write the cache to disk for persistence in future uses:
(Note: The cache will automatically figure out the write location)
>>> cache.write()
Let's go into the future:
>>> time.sleep(1)
Let's create a new instance to simulate a separate run:
>>> cache = FileCache(None, "test", flush_cache=False)
>>> old_data = copy.deepcopy(cache.data)
We can mark a file as changed by doing:
>>> cache.untrack_files({"a.c"})
Again write to disk after calculating the new cache times for each file:
>>> cache.write()
>>> new_data = cache.data
Since we marked 'a.c' as a changed file:
>>> "a.c" not in cache.data
True
>>> "a.c" in old_data
True
Since 'b.c' was untouched after the second run, its time was updated
to the latest value:
>>> old_data["b.c"] < new_data["b.c"]
True
"""
@enforce_signature
def __init__(
self,
log_printer,
project_dir: str,
flush_cache: bool=False):
"""
Initialize FileCache.
:param log_printer: An object to use for logging.
:param project_dir: The root directory of the project to be used
as a key identifier.
:param flush_cache: Flush the cache and rebuild it.
"""
self.project_dir = project_dir
self.current_time = int(time.time())
cache_data = pickle_load(None, project_dir, {})
last_time = -1
if 'time' in cache_data:
last_time = cache_data['time']
if not flush_cache and last_time > self.current_time:
logging.warning('It seems like you went back in time - your system '
'time is behind the last recorded run time on this '
'project. The cache will be force flushed.')
flush_cache = True
self.data = cache_data.get('files', {})
if flush_cache:
self.flush_cache()
# store the files to be untracked and then untrack them in the end
# so that an untracked file is not tracked again by mistake in a
# later section (which will happen if that file doesn't yield a
# result in that section).
self.to_untrack = set()
def flush_cache(self):
"""
Flushes the cache and deletes the relevant file.
"""
self.data = {}
delete_files(None, [self.project_dir])
logging.debug('The file cache was successfully flushed.')
def __enter__(self):
return self
def write(self):
"""
Update the last run time on the project for each file
to the current time. Using this object as a contextmanager is
preferred (that will automatically call this method on exit).
"""
for file in self.to_untrack:
if file in self.data:
del self.data[file]
for file_name in self.data:
self.data[file_name] = self.current_time
pickle_dump(
None,
self.project_dir,
{'time': self.current_time, 'files': self.data})
def __exit__(self, type, value, traceback):
"""
Update the last run time on the project for each file
to the current time.
"""
self.write()
def untrack_files(self, files):
"""
Removes the given files from the cache so that they are no longer
considered cached for this and the next run.
:param files: A set of files to remove from cache.
"""
self.to_untrack.update(files)
def track_files(self, files):
"""
Start tracking files given in ``files`` by adding them to the
database.
:param files: A set of files that need to be tracked.
These files are initialized with their last
modified tag as -1.
"""
for file in files:
if file not in self.data:
self.data[file] = -1
def get_uncached_files(self, files):
"""
Returns the set of files that are not in the cache yet or have been
untracked.
:param files: The list of collected files.
:return: A set of files that are uncached.
"""
if self.data == {}:
# The first run on this project. So all files are new
# and must be returned irrespective of whether caching is turned on.
return files
else:
return {file
for file in files
if (file not in self.data or
int(os.path.getmtime(file)) > self.data[file])}
| agpl-3.0 | -4,251,664,043,201,579,500 | 30.057143 | 80 | 0.580497 | false |
semkiv/heppy_fcc | background_Bs2DsDsK_with_Ds2PiPiPiPi_analysis_cfg.py | 1 | 3669 | #!/usr/bin/env python
"""
Configuration script for the analyzer of B0s -> K*0 Ds+ Ds- background events
| | |-> pi- pi- pi+ pi0
| |-> pi+ pi+ pi- pi0
|-> K+ pi-
Note: it is supposed to be used within heppy_fcc framework
"""
import os
import heppy.framework.config as cfg
import logging
from ROOT import gSystem
from EventStore import EventStore as Events
from heppy_fcc.analyzers.BackgroundBs2DsDsKWithDs2PiPiPiPiAnalyzer import BackgroundBs2DsDsKWithDs2PiPiPiPiAnalyzer
logging.basicConfig(level=logging.WARNING)
# input component
# several input components can be declared and added to the list of selected components
input_component = cfg.Component('ILD-like', files = ['/afs/cern.ch/work/a/ansemkiv/private/FCC/analysis/background_Bs2DsDsK_with_Ds2PiPiPiPi_100k.root'])
selected_components = [input_component]
# analyzers
# analyzer for Bs -> Ds Ds K* events
bgana = cfg.Analyzer(BackgroundBs2DsDsKWithDs2PiPiPiPiAnalyzer,
smear_momentum = True,
momentum_x_resolution = 0.01,
momentum_y_resolution = 0.01,
momentum_z_resolution = 0.01,
smear_pv = True,
# IDL-like res
pv_x_resolution = 0.0025,
pv_y_resolution = 0.0025,
pv_z_resolution = 0.0025,
# progressive res
# pv_x_resolution = 0.001,
# pv_y_resolution = 0.001,
# pv_z_resolution = 0.001,
# outstanding res
# pv_x_resolution = 0.0005,
# pv_y_resolution = 0.0005,
# pv_z_resolution = 0.0005,
smear_sv = True,
# IDL-like res
sv_x_resolution = 0.007,
sv_y_resolution = 0.007,
sv_z_resolution = 0.007,
# progressive res
# sv_x_resolution = 0.003,
# sv_y_resolution = 0.003,
# sv_z_resolution = 0.003,
# outstanding res
# sv_x_resolution = 0.0015,
# sv_y_resolution = 0.0015,
# sv_z_resolution = 0.0015,
smear_tv = True,
# IDL-like res
tv_x_resolution = 0.005,
tv_y_resolution = 0.005,
tv_z_resolution = 0.005,
# progressive res
# tv_x_resolution = 0.002,
# tv_y_resolution = 0.002,
# tv_z_resolution = 0.002,
# outstanding res
# tv_x_resolution = 0.001,
# tv_y_resolution = 0.001,
# tv_z_resolution = 0.001,
stylepath = os.environ.get('FCC') + 'lhcbstyle.C',
tree_name = 'Events',
tree_title = 'Events',
mc_truth_tree_name = 'MCTruth',
mc_truth_tree_title = 'MC Truth',
verbose = False)
# definition of a sequence of analyzers, the analyzers will process each event in this order
sequence = cfg.Sequence([bgana])
# finalization of the configuration object.
gSystem.Load('libdatamodel')
config = cfg.Config(components = selected_components, sequence = sequence, services = [],events_class = Events)
| gpl-3.0 | 571,574,815,957,426,200 | 40.693182 | 153 | 0.487054 | false |
zr4x/pythonTests | fixture/group.py | 1 | 3582 | from model.group import Group
class GroupHelper:
def __init__(self, app):
self.app = app
def open_groups_page(self):
wd = self.app.wd
if not (wd.current_url.endswith("/group.php") and len(wd.find_elements_by_name("new")) > 0):
wd.find_element_by_link_text("groups").click()
def create(self, group):
wd = self.app.wd
self.open_groups_page()
wd.find_element_by_name("new").click()
self.fill_group_form(group)
wd.find_element_by_name("submit").click()
self.return_to_group_page()
self.group_cache = None
def fill_group_form(self, group):
self.change_field_value("group_name", group.name)
self.change_field_value("group_header", group.header)
self.change_field_value("group_footer", group.footer)
def change_field_value(self, field_name, text):
wd = self.app.wd
if text is not None:
wd.find_element_by_name(field_name).click()
wd.find_element_by_name(field_name).clear()
wd.find_element_by_name(field_name).send_keys(text)
def return_to_group_page(self):
wd = self.app.wd
wd.find_element_by_link_text("group page").click()
def delete_group_by_index(self, index):
wd = self.app.wd
self.open_groups_page()
self.select_group_by_index(index)
wd.find_element_by_name("delete").click()
self.group_cache = None
def delete_first_group(self):
self.delete_group_by_index(0)
def modify_by_index(self, index, group):
wd = self.app.wd
self.open_groups_page()
self.select_group_by_index(index)
wd.find_element_by_name("edit").click()
self.fill_group_form(group)
wd.find_element_by_name("update").click()
self.return_to_group_page()
self.group_cache = None
def modify_group_by_id(self, group_id, new_group_data):
wd = self.app.wd
self.open_groups_page()
self.select_group_by_id(group_id)
wd.find_element_by_name("edit").click()
self.fill_group_form(new_group_data)
wd.find_element_by_name("update").click()
self.return_to_group_page()
self.group_cache = None
    def modify_first_group(self, group):
        self.modify_by_index(0, group)
def select_first_group(self):
wd = self.app.wd
wd.find_element_by_name("selected[]").click()
def select_group_by_index(self, index):
wd = self.app.wd
wd.find_elements_by_name("selected[]")[index].click()
def count(self):
wd = self.app.wd
self.open_groups_page()
return len(wd.find_elements_by_name("selected[]"))
def delete_group_by_id(self, id):
wd = self.app.wd
self.open_groups_page()
self.select_group_by_id(id)
wd.find_element_by_name("delete").click()
self.return_to_group_page()
self.group_cache = None
def select_group_by_id(self, id):
wd = self.app.wd
wd.find_element_by_css_selector("input[value='%s']" % id).click()
group_cache = None
def get_group_list(self):
if self.group_cache is None:
wd = self.app.wd
self.open_groups_page()
self.group_cache = []
for element in wd.find_elements_by_css_selector("span.group"):
text = element.text
id = element.find_element_by_name("selected[]").get_attribute("value")
self.group_cache.append(Group(name=text, id=id))
return list(self.group_cache)
| apache-2.0 | 2,976,077,634,855,927,300 | 32.476636 | 100 | 0.589615 | false |
SerialShadow/SickRage | lib/sqlalchemy/util/langhelpers.py | 75 | 37513 | # util/langhelpers.py
# Copyright (C) 2005-2014 the SQLAlchemy authors and contributors <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""Routines to help with the creation, loading and introspection of
modules, classes, hierarchies, attributes, functions, and methods.
"""
import itertools
import inspect
import operator
import re
import sys
import types
import warnings
from functools import update_wrapper
from .. import exc
import hashlib
from . import compat
from . import _collections
def md5_hex(x):
if compat.py3k:
x = x.encode('utf-8')
m = hashlib.md5()
m.update(x)
return m.hexdigest()
class safe_reraise(object):
"""Reraise an exception after invoking some
handler code.
Stores the existing exception info before
invoking so that it is maintained across a potential
coroutine context switch.
e.g.::
try:
sess.commit()
except:
with safe_reraise():
sess.rollback()
"""
def __enter__(self):
self._exc_info = sys.exc_info()
def __exit__(self, type_, value, traceback):
# see #2703 for notes
if type_ is None:
exc_type, exc_value, exc_tb = self._exc_info
self._exc_info = None # remove potential circular references
compat.reraise(exc_type, exc_value, exc_tb)
else:
self._exc_info = None # remove potential circular references
compat.reraise(type_, value, traceback)
def decode_slice(slc):
"""decode a slice object as sent to __getitem__.
takes into account the 2.5 __index__() method, basically.
"""
ret = []
for x in slc.start, slc.stop, slc.step:
if hasattr(x, '__index__'):
x = x.__index__()
ret.append(x)
return tuple(ret)
def _unique_symbols(used, *bases):
used = set(used)
for base in bases:
pool = itertools.chain((base,),
compat.itertools_imap(lambda i: base + str(i),
range(1000)))
for sym in pool:
if sym not in used:
used.add(sym)
yield sym
break
else:
raise NameError("exhausted namespace for symbol base %s" % base)
def decorator(target):
"""A signature-matching decorator factory."""
def decorate(fn):
if not inspect.isfunction(fn):
raise Exception("not a decoratable function")
spec = compat.inspect_getfullargspec(fn)
names = tuple(spec[0]) + spec[1:3] + (fn.__name__,)
targ_name, fn_name = _unique_symbols(names, 'target', 'fn')
metadata = dict(target=targ_name, fn=fn_name)
metadata.update(format_argspec_plus(spec, grouped=False))
metadata['name'] = fn.__name__
code = """\
def %(name)s(%(args)s):
return %(target)s(%(fn)s, %(apply_kw)s)
""" % metadata
decorated = _exec_code_in_env(code,
{targ_name: target, fn_name: fn},
fn.__name__)
decorated.__defaults__ = getattr(fn, 'im_func', fn).__defaults__
decorated.__wrapped__ = fn
return update_wrapper(decorated, fn)
return update_wrapper(decorate, target)
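# Illustrative usage sketch (added for clarity; not part of the original module).
# The names below are hypothetical; the helper is only defined, never called here.
def _decorator_usage_example():
    @decorator
    def log_calls(fn, *args, **kw):
        # ``fn`` is the wrapped function; the generated wrapper forwards to it
        # while keeping the original signature visible to introspection.
        return fn(*args, **kw)
    @log_calls
    def add(x, y=0):
        return x + y
    return add(1, 2)  # routed through log_calls, returns 3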
def _exec_code_in_env(code, env, fn_name):
exec(code, env)
return env[fn_name]
def public_factory(target, location):
"""Produce a wrapping function for the given cls or classmethod.
Rationale here is so that the __init__ method of the
class can serve as documentation for the function.
"""
if isinstance(target, type):
fn = target.__init__
callable_ = target
doc = "Construct a new :class:`.%s` object. \n\n"\
"This constructor is mirrored as a public API function; see :func:`~%s` "\
"for a full usage and argument description." % (
target.__name__, location, )
else:
fn = callable_ = target
doc = "This function is mirrored; see :func:`~%s` "\
"for a description of arguments." % location
location_name = location.split(".")[-1]
spec = compat.inspect_getfullargspec(fn)
del spec[0][0]
metadata = format_argspec_plus(spec, grouped=False)
metadata['name'] = location_name
code = """\
def %(name)s(%(args)s):
return cls(%(apply_kw)s)
""" % metadata
env = {'cls': callable_, 'symbol': symbol}
exec(code, env)
decorated = env[location_name]
decorated.__doc__ = fn.__doc__
if compat.py2k or hasattr(fn, '__func__'):
fn.__func__.__doc__ = doc
else:
fn.__doc__ = doc
return decorated
class PluginLoader(object):
def __init__(self, group, auto_fn=None):
self.group = group
self.impls = {}
self.auto_fn = auto_fn
def load(self, name):
if name in self.impls:
return self.impls[name]()
if self.auto_fn:
loader = self.auto_fn(name)
if loader:
self.impls[name] = loader
return loader()
try:
import pkg_resources
except ImportError:
pass
else:
for impl in pkg_resources.iter_entry_points(
self.group, name):
self.impls[name] = impl.load
return impl.load()
raise exc.NoSuchModuleError(
"Can't load plugin: %s:%s" %
(self.group, name))
def register(self, name, modulepath, objname):
def load():
mod = compat.import_(modulepath)
for token in modulepath.split(".")[1:]:
mod = getattr(mod, token)
return getattr(mod, objname)
self.impls[name] = load
def get_cls_kwargs(cls, _set=None):
"""Return the full set of inherited kwargs for the given `cls`.
Probes a class's __init__ method, collecting all named arguments. If the
__init__ defines a \**kwargs catch-all, then the constructor is presumed to
    pass along unrecognized keywords to its base classes, and the collection
process is repeated recursively on each of the bases.
Uses a subset of inspect.getargspec() to cut down on method overhead.
No anonymous tuple arguments please !
"""
    toplevel = _set is None
if toplevel:
_set = set()
ctr = cls.__dict__.get('__init__', False)
has_init = ctr and isinstance(ctr, types.FunctionType) and \
isinstance(ctr.__code__, types.CodeType)
if has_init:
names, has_kw = inspect_func_args(ctr)
_set.update(names)
if not has_kw and not toplevel:
return None
if not has_init or has_kw:
for c in cls.__bases__:
if get_cls_kwargs(c, _set) is None:
break
_set.discard('self')
return _set
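# Illustrative sketch (added for clarity; not part of the original module), using
# hypothetical classes to show how a **kwargs catch-all is followed into bases.
def _get_cls_kwargs_example():
    class Base(object):
        def __init__(self, a, **kw):
            pass
    class Sub(Base):
        def __init__(self, b, **kw):
            pass
    # Sub's **kw is presumed to forward to Base, so both names are collected.
    return get_cls_kwargs(Sub) == set(['a', 'b'])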
try:
# TODO: who doesn't have this constant?
from inspect import CO_VARKEYWORDS
def inspect_func_args(fn):
co = fn.__code__
nargs = co.co_argcount
names = co.co_varnames
args = list(names[:nargs])
has_kw = bool(co.co_flags & CO_VARKEYWORDS)
return args, has_kw
except ImportError:
def inspect_func_args(fn):
names, _, has_kw, _ = inspect.getargspec(fn)
return names, bool(has_kw)
def get_func_kwargs(func):
"""Return the set of legal kwargs for the given `func`.
Uses getargspec so is safe to call for methods, functions,
etc.
"""
return compat.inspect_getargspec(func)[0]
def get_callable_argspec(fn, no_self=False, _is_init=False):
"""Return the argument signature for any callable.
All pure-Python callables are accepted, including
functions, methods, classes, objects with __call__;
builtins and other edge cases like functools.partial() objects
raise a TypeError.
"""
if inspect.isbuiltin(fn):
raise TypeError("Can't inspect builtin: %s" % fn)
elif inspect.isfunction(fn):
if _is_init and no_self:
spec = compat.inspect_getargspec(fn)
return compat.ArgSpec(spec.args[1:], spec.varargs,
spec.keywords, spec.defaults)
else:
return compat.inspect_getargspec(fn)
elif inspect.ismethod(fn):
if no_self and (_is_init or fn.__self__):
spec = compat.inspect_getargspec(fn.__func__)
return compat.ArgSpec(spec.args[1:], spec.varargs,
spec.keywords, spec.defaults)
else:
return compat.inspect_getargspec(fn.__func__)
elif inspect.isclass(fn):
return get_callable_argspec(fn.__init__, no_self=no_self, _is_init=True)
elif hasattr(fn, '__func__'):
return compat.inspect_getargspec(fn.__func__)
elif hasattr(fn, '__call__'):
if inspect.ismethod(fn.__call__):
return get_callable_argspec(fn.__call__, no_self=no_self)
else:
raise TypeError("Can't inspect callable: %s" % fn)
else:
raise TypeError("Can't inspect callable: %s" % fn)
def format_argspec_plus(fn, grouped=True):
"""Returns a dictionary of formatted, introspected function arguments.
    An enhanced variant of inspect.formatargspec to support code generation.
fn
An inspectable callable or tuple of inspect getargspec() results.
grouped
Defaults to True; include (parens, around, argument) lists
Returns:
args
Full inspect.formatargspec for fn
self_arg
The name of the first positional argument, varargs[0], or None
if the function defines no positional arguments.
apply_pos
args, re-written in calling rather than receiving syntax. Arguments are
passed positionally.
apply_kw
Like apply_pos, except keyword-ish args are passed as keywords.
Example::
>>> format_argspec_plus(lambda self, a, b, c=3, **d: 123)
{'args': '(self, a, b, c=3, **d)',
'self_arg': 'self',
'apply_kw': '(self, a, b, c=c, **d)',
'apply_pos': '(self, a, b, c, **d)'}
"""
if compat.callable(fn):
spec = compat.inspect_getfullargspec(fn)
else:
# we accept an existing argspec...
spec = fn
args = inspect.formatargspec(*spec)
if spec[0]:
self_arg = spec[0][0]
elif spec[1]:
self_arg = '%s[0]' % spec[1]
else:
self_arg = None
if compat.py3k:
apply_pos = inspect.formatargspec(spec[0], spec[1],
spec[2], None, spec[4])
num_defaults = 0
if spec[3]:
num_defaults += len(spec[3])
if spec[4]:
num_defaults += len(spec[4])
name_args = spec[0] + spec[4]
else:
apply_pos = inspect.formatargspec(spec[0], spec[1], spec[2])
num_defaults = 0
if spec[3]:
num_defaults += len(spec[3])
name_args = spec[0]
if num_defaults:
defaulted_vals = name_args[0 - num_defaults:]
else:
defaulted_vals = ()
apply_kw = inspect.formatargspec(name_args, spec[1], spec[2],
defaulted_vals,
formatvalue=lambda x: '=' + x)
if grouped:
return dict(args=args, self_arg=self_arg,
apply_pos=apply_pos, apply_kw=apply_kw)
else:
return dict(args=args[1:-1], self_arg=self_arg,
apply_pos=apply_pos[1:-1], apply_kw=apply_kw[1:-1])
def format_argspec_init(method, grouped=True):
"""format_argspec_plus with considerations for typical __init__ methods
Wraps format_argspec_plus with error handling strategies for typical
__init__ cases::
object.__init__ -> (self)
other unreflectable (usually C) -> (self, *args, **kwargs)
"""
if method is object.__init__:
args = grouped and '(self)' or 'self'
else:
try:
return format_argspec_plus(method, grouped=grouped)
except TypeError:
args = (grouped and '(self, *args, **kwargs)'
or 'self, *args, **kwargs')
return dict(self_arg='self', args=args, apply_pos=args, apply_kw=args)
def getargspec_init(method):
"""inspect.getargspec with considerations for typical __init__ methods
Wraps inspect.getargspec with error handling for typical __init__ cases::
object.__init__ -> (self)
other unreflectable (usually C) -> (self, *args, **kwargs)
"""
try:
return inspect.getargspec(method)
except TypeError:
if method is object.__init__:
return (['self'], None, None, None)
else:
return (['self'], 'args', 'kwargs', None)
def unbound_method_to_callable(func_or_cls):
"""Adjust the incoming callable such that a 'self' argument is not
required.
"""
if isinstance(func_or_cls, types.MethodType) and not func_or_cls.__self__:
return func_or_cls.__func__
else:
return func_or_cls
def generic_repr(obj, additional_kw=(), to_inspect=None):
"""Produce a __repr__() based on direct association of the __init__()
specification vs. same-named attributes present.
"""
if to_inspect is None:
to_inspect = [obj]
else:
to_inspect = _collections.to_list(to_inspect)
missing = object()
pos_args = []
kw_args = _collections.OrderedDict()
vargs = None
for i, insp in enumerate(to_inspect):
try:
(_args, _vargs, vkw, defaults) = \
inspect.getargspec(insp.__init__)
except TypeError:
continue
else:
default_len = defaults and len(defaults) or 0
if i == 0:
if _vargs:
vargs = _vargs
if default_len:
pos_args.extend(_args[1:-default_len])
else:
pos_args.extend(_args[1:])
else:
kw_args.update([
(arg, missing) for arg in _args[1:-default_len]
])
if default_len:
kw_args.update([
(arg, default)
for arg, default
in zip(_args[-default_len:], defaults)
])
output = []
output.extend(repr(getattr(obj, arg, None)) for arg in pos_args)
if vargs is not None and hasattr(obj, vargs):
output.extend([repr(val) for val in getattr(obj, vargs)])
for arg, defval in kw_args.items():
try:
val = getattr(obj, arg, missing)
if val is not missing and val != defval:
output.append('%s=%r' % (arg, val))
except:
pass
if additional_kw:
for arg, defval in additional_kw:
try:
val = getattr(obj, arg, missing)
if val is not missing and val != defval:
output.append('%s=%r' % (arg, val))
except:
pass
return "%s(%s)" % (obj.__class__.__name__, ", ".join(output))
class portable_instancemethod(object):
"""Turn an instancemethod into a (parent, name) pair
to produce a serializable callable.
"""
def __init__(self, meth):
self.target = meth.__self__
self.name = meth.__name__
def __call__(self, *arg, **kw):
return getattr(self.target, self.name)(*arg, **kw)
def class_hierarchy(cls):
"""Return an unordered sequence of all classes related to cls.
Traverses diamond hierarchies.
Fibs slightly: subclasses of builtin types are not returned. Thus
class_hierarchy(class A(object)) returns (A, object), not A plus every
class systemwide that derives from object.
Old-style classes are discarded and hierarchies rooted on them
will not be descended.
"""
if compat.py2k:
if isinstance(cls, types.ClassType):
return list()
hier = set([cls])
process = list(cls.__mro__)
while process:
c = process.pop()
if compat.py2k:
if isinstance(c, types.ClassType):
continue
bases = (_ for _ in c.__bases__
if _ not in hier and not isinstance(_, types.ClassType))
else:
bases = (_ for _ in c.__bases__ if _ not in hier)
for b in bases:
process.append(b)
hier.add(b)
if compat.py3k:
if c.__module__ == 'builtins' or not hasattr(c, '__subclasses__'):
continue
else:
if c.__module__ == '__builtin__' or not hasattr(c, '__subclasses__'):
continue
for s in [_ for _ in c.__subclasses__() if _ not in hier]:
process.append(s)
hier.add(s)
return list(hier)
def iterate_attributes(cls):
"""iterate all the keys and attributes associated
with a class, without using getattr().
Does not use getattr() so that class-sensitive
descriptors (i.e. property.__get__()) are not called.
"""
keys = dir(cls)
for key in keys:
for c in cls.__mro__:
if key in c.__dict__:
yield (key, c.__dict__[key])
break
def monkeypatch_proxied_specials(into_cls, from_cls, skip=None, only=None,
name='self.proxy', from_instance=None):
"""Automates delegation of __specials__ for a proxying type."""
if only:
dunders = only
else:
if skip is None:
skip = ('__slots__', '__del__', '__getattribute__',
'__metaclass__', '__getstate__', '__setstate__')
dunders = [m for m in dir(from_cls)
if (m.startswith('__') and m.endswith('__') and
not hasattr(into_cls, m) and m not in skip)]
for method in dunders:
try:
fn = getattr(from_cls, method)
if not hasattr(fn, '__call__'):
continue
fn = getattr(fn, 'im_func', fn)
except AttributeError:
continue
try:
spec = inspect.getargspec(fn)
fn_args = inspect.formatargspec(spec[0])
d_args = inspect.formatargspec(spec[0][1:])
except TypeError:
fn_args = '(self, *args, **kw)'
d_args = '(*args, **kw)'
py = ("def %(method)s%(fn_args)s: "
"return %(name)s.%(method)s%(d_args)s" % locals())
env = from_instance is not None and {name: from_instance} or {}
compat.exec_(py, env)
try:
env[method].__defaults__ = fn.__defaults__
except AttributeError:
pass
setattr(into_cls, method, env[method])
def methods_equivalent(meth1, meth2):
"""Return True if the two methods are the same implementation."""
return getattr(meth1, '__func__', meth1) is getattr(meth2, '__func__', meth2)
def as_interface(obj, cls=None, methods=None, required=None):
"""Ensure basic interface compliance for an instance or dict of callables.
Checks that ``obj`` implements public methods of ``cls`` or has members
listed in ``methods``. If ``required`` is not supplied, implementing at
least one interface method is sufficient. Methods present on ``obj`` that
are not in the interface are ignored.
If ``obj`` is a dict and ``dict`` does not meet the interface
requirements, the keys of the dictionary are inspected. Keys present in
``obj`` that are not in the interface will raise TypeErrors.
Raises TypeError if ``obj`` does not meet the interface criteria.
In all passing cases, an object with callable members is returned. In the
simple case, ``obj`` is returned as-is; if dict processing kicks in then
an anonymous class is returned.
obj
A type, instance, or dictionary of callables.
cls
Optional, a type. All public methods of cls are considered the
interface. An ``obj`` instance of cls will always pass, ignoring
``required``..
methods
Optional, a sequence of method names to consider as the interface.
required
Optional, a sequence of mandatory implementations. If omitted, an
``obj`` that provides at least one interface method is considered
sufficient. As a convenience, required may be a type, in which case
all public methods of the type are required.
"""
if not cls and not methods:
raise TypeError('a class or collection of method names are required')
if isinstance(cls, type) and isinstance(obj, cls):
return obj
interface = set(methods or [m for m in dir(cls) if not m.startswith('_')])
implemented = set(dir(obj))
complies = operator.ge
if isinstance(required, type):
required = interface
elif not required:
required = set()
complies = operator.gt
else:
required = set(required)
if complies(implemented.intersection(interface), required):
return obj
# No dict duck typing here.
if not type(obj) is dict:
qualifier = complies is operator.gt and 'any of' or 'all of'
raise TypeError("%r does not implement %s: %s" % (
obj, qualifier, ', '.join(interface)))
class AnonymousInterface(object):
"""A callable-holding shell."""
if cls:
AnonymousInterface.__name__ = 'Anonymous' + cls.__name__
found = set()
for method, impl in dictlike_iteritems(obj):
if method not in interface:
raise TypeError("%r: unknown in this interface" % method)
if not compat.callable(impl):
raise TypeError("%r=%r is not callable" % (method, impl))
setattr(AnonymousInterface, method, staticmethod(impl))
found.add(method)
if complies(found, required):
return AnonymousInterface
raise TypeError("dictionary does not contain required keys %s" %
', '.join(required - found))
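# Illustrative usage sketch (added for clarity; not part of the original module),
# with a hypothetical interface class showing both accepted input forms.
def _as_interface_example():
    class Visitor(object):
        def visit(self, node):
            pass
    # An instance of the interface class passes through unchanged.
    as_interface(Visitor(), cls=Visitor)
    # A dict of callables is wrapped in an anonymous shell whose members are
    # staticmethods built from the dict values.
    shell = as_interface({'visit': lambda node: None}, cls=Visitor)
    return shell.visit(None)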
class memoized_property(object):
"""A read-only @property that is only evaluated once."""
def __init__(self, fget, doc=None):
self.fget = fget
self.__doc__ = doc or fget.__doc__
self.__name__ = fget.__name__
def __get__(self, obj, cls):
if obj is None:
return self
obj.__dict__[self.__name__] = result = self.fget(obj)
return result
def _reset(self, obj):
memoized_property.reset(obj, self.__name__)
@classmethod
def reset(cls, obj, name):
obj.__dict__.pop(name, None)
class memoized_instancemethod(object):
"""Decorate a method memoize its return value.
Best applied to no-arg methods: memoization is not sensitive to
argument values, and will always return the same value even when
called with different arguments.
"""
def __init__(self, fget, doc=None):
self.fget = fget
self.__doc__ = doc or fget.__doc__
self.__name__ = fget.__name__
def __get__(self, obj, cls):
if obj is None:
return self
def oneshot(*args, **kw):
result = self.fget(obj, *args, **kw)
memo = lambda *a, **kw: result
memo.__name__ = self.__name__
memo.__doc__ = self.__doc__
obj.__dict__[self.__name__] = memo
return result
oneshot.__name__ = self.__name__
oneshot.__doc__ = self.__doc__
return oneshot
class group_expirable_memoized_property(object):
"""A family of @memoized_properties that can be expired in tandem."""
def __init__(self, attributes=()):
self.attributes = []
if attributes:
self.attributes.extend(attributes)
def expire_instance(self, instance):
"""Expire all memoized properties for *instance*."""
stash = instance.__dict__
for attribute in self.attributes:
stash.pop(attribute, None)
def __call__(self, fn):
self.attributes.append(fn.__name__)
return memoized_property(fn)
def method(self, fn):
self.attributes.append(fn.__name__)
return memoized_instancemethod(fn)
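# Illustrative usage sketch (added for clarity; not part of the original module).
# A hypothetical class shows the intended pattern: the group object lives on the
# class and decorates every property it should be able to expire together.
def _group_expirable_example():
    class Thing(object):
        _memoized = group_expirable_memoized_property()
        @_memoized
        def expensive_value(self):
            return object()  # stands in for a costly computation
    t = Thing()
    first = t.expensive_value            # computed once, then cached on ``t``
    Thing._memoized.expire_instance(t)   # clears every registered property on ``t``
    return first is t.expensive_value    # False: the value was recomputed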
def dependency_for(modulename):
def decorate(obj):
# TODO: would be nice to improve on this import silliness,
# unfortunately importlib doesn't work that great either
tokens = modulename.split(".")
mod = compat.import_(".".join(tokens[0:-1]), globals(), locals(), tokens[-1])
mod = getattr(mod, tokens[-1])
setattr(mod, obj.__name__, obj)
return obj
return decorate
class dependencies(object):
"""Apply imported dependencies as arguments to a function.
E.g.::
@util.dependencies(
"sqlalchemy.sql.widget",
"sqlalchemy.engine.default"
        )
def some_func(self, widget, default, arg1, arg2, **kw):
# ...
Rationale is so that the impact of a dependency cycle can be
associated directly with the few functions that cause the cycle,
and not pollute the module-level namespace.
"""
def __init__(self, *deps):
self.import_deps = []
for dep in deps:
tokens = dep.split(".")
self.import_deps.append(
dependencies._importlater(
".".join(tokens[0:-1]),
tokens[-1]
)
)
def __call__(self, fn):
import_deps = self.import_deps
spec = compat.inspect_getfullargspec(fn)
spec_zero = list(spec[0])
hasself = spec_zero[0] in ('self', 'cls')
for i in range(len(import_deps)):
spec[0][i + (1 if hasself else 0)] = "import_deps[%r]" % i
inner_spec = format_argspec_plus(spec, grouped=False)
for impname in import_deps:
del spec_zero[1 if hasself else 0]
spec[0][:] = spec_zero
outer_spec = format_argspec_plus(spec, grouped=False)
code = 'lambda %(args)s: fn(%(apply_kw)s)' % {
"args": outer_spec['args'],
"apply_kw": inner_spec['apply_kw']
}
decorated = eval(code, locals())
decorated.__defaults__ = getattr(fn, 'im_func', fn).__defaults__
return update_wrapper(decorated, fn)
@classmethod
def resolve_all(cls, path):
for m in list(dependencies._unresolved):
if m._full_path.startswith(path):
m._resolve()
_unresolved = set()
_by_key = {}
class _importlater(object):
_unresolved = set()
_by_key = {}
def __new__(cls, path, addtl):
key = path + "." + addtl
if key in dependencies._by_key:
return dependencies._by_key[key]
else:
dependencies._by_key[key] = imp = object.__new__(cls)
return imp
def __init__(self, path, addtl):
self._il_path = path
self._il_addtl = addtl
dependencies._unresolved.add(self)
@property
def _full_path(self):
return self._il_path + "." + self._il_addtl
@memoized_property
def module(self):
if self in dependencies._unresolved:
raise ImportError(
"importlater.resolve_all() hasn't "
"been called (this is %s %s)"
% (self._il_path, self._il_addtl))
return getattr(self._initial_import, self._il_addtl)
def _resolve(self):
dependencies._unresolved.discard(self)
self._initial_import = compat.import_(
self._il_path, globals(), locals(),
[self._il_addtl])
def __getattr__(self, key):
if key == 'module':
raise ImportError("Could not resolve module %s"
% self._full_path)
try:
attr = getattr(self.module, key)
except AttributeError:
raise AttributeError(
"Module %s has no attribute '%s'" %
(self._full_path, key)
)
self.__dict__[key] = attr
return attr
# from paste.deploy.converters
def asbool(obj):
if isinstance(obj, compat.string_types):
obj = obj.strip().lower()
if obj in ['true', 'yes', 'on', 'y', 't', '1']:
return True
elif obj in ['false', 'no', 'off', 'n', 'f', '0']:
return False
else:
raise ValueError("String is not true/false: %r" % obj)
return bool(obj)
def bool_or_str(*text):
"""Return a callable that will evaulate a string as
boolean, or one of a set of "alternate" string values.
"""
def bool_or_value(obj):
if obj in text:
return obj
else:
return asbool(obj)
return bool_or_value
def asint(value):
"""Coerce to integer."""
if value is None:
return value
return int(value)
def coerce_kw_type(kw, key, type_, flexi_bool=True):
"""If 'key' is present in dict 'kw', coerce its value to type 'type\_' if
necessary. If 'flexi_bool' is True, the string '0' is considered false
when coercing to boolean.
"""
if key in kw and type(kw[key]) is not type_ and kw[key] is not None:
if type_ is bool and flexi_bool:
kw[key] = asbool(kw[key])
else:
kw[key] = type_(kw[key])
def constructor_copy(obj, cls, **kw):
"""Instantiate cls using the __dict__ of obj as constructor arguments.
Uses inspect to match the named arguments of ``cls``.
"""
names = get_cls_kwargs(cls)
kw.update((k, obj.__dict__[k]) for k in names if k in obj.__dict__)
return cls(**kw)
def counter():
"""Return a threadsafe counter function."""
lock = compat.threading.Lock()
counter = itertools.count(1)
# avoid the 2to3 "next" transformation...
def _next():
lock.acquire()
try:
return next(counter)
finally:
lock.release()
return _next
def duck_type_collection(specimen, default=None):
"""Given an instance or class, guess if it is or is acting as one of
the basic collection types: list, set and dict. If the __emulates__
property is present, return that preferentially.
"""
if hasattr(specimen, '__emulates__'):
# canonicalize set vs sets.Set to a standard: the builtin set
if (specimen.__emulates__ is not None and
issubclass(specimen.__emulates__, set)):
return set
else:
return specimen.__emulates__
isa = isinstance(specimen, type) and issubclass or isinstance
if isa(specimen, list):
return list
elif isa(specimen, set):
return set
elif isa(specimen, dict):
return dict
if hasattr(specimen, 'append'):
return list
elif hasattr(specimen, 'add'):
return set
elif hasattr(specimen, 'set'):
return dict
else:
return default
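# Illustrative sketch (added for clarity; not part of the original module) showing
# how the collection flavor is guessed; the specimen class is hypothetical.
def _duck_type_collection_example():
    class Appender(object):
        def append(self, item):
            pass
    return (
        duck_type_collection([]) is list,          # real list instance
        duck_type_collection(set) is set,          # classes work as well
        duck_type_collection({}) is dict,
        duck_type_collection(Appender()) is list,  # guessed from .append()
    )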
def assert_arg_type(arg, argtype, name):
if isinstance(arg, argtype):
return arg
else:
if isinstance(argtype, tuple):
raise exc.ArgumentError(
"Argument '%s' is expected to be one of type %s, got '%s'" %
(name, ' or '.join("'%s'" % a for a in argtype), type(arg)))
else:
raise exc.ArgumentError(
"Argument '%s' is expected to be of type '%s', got '%s'" %
(name, argtype, type(arg)))
def dictlike_iteritems(dictlike):
"""Return a (key, value) iterator for almost any dict-like object."""
if compat.py3k:
if hasattr(dictlike, 'items'):
return list(dictlike.items())
else:
if hasattr(dictlike, 'iteritems'):
return dictlike.iteritems()
elif hasattr(dictlike, 'items'):
return iter(dictlike.items())
getter = getattr(dictlike, '__getitem__', getattr(dictlike, 'get', None))
if getter is None:
raise TypeError(
"Object '%r' is not dict-like" % dictlike)
if hasattr(dictlike, 'iterkeys'):
def iterator():
for key in dictlike.iterkeys():
yield key, getter(key)
return iterator()
elif hasattr(dictlike, 'keys'):
return iter((key, getter(key)) for key in dictlike.keys())
else:
raise TypeError(
"Object '%r' is not dict-like" % dictlike)
class classproperty(property):
"""A decorator that behaves like @property except that operates
on classes rather than instances.
The decorator is currently special when using the declarative
module, but note that the
:class:`~.sqlalchemy.ext.declarative.declared_attr`
decorator should be used for this purpose with declarative.
"""
def __init__(self, fget, *arg, **kw):
super(classproperty, self).__init__(fget, *arg, **kw)
self.__doc__ = fget.__doc__
def __get__(desc, self, cls):
return desc.fget(cls)
class hybridmethod(object):
"""Decorate a function as cls- or instance- level."""
def __init__(self, func, expr=None):
self.func = func
def __get__(self, instance, owner):
if instance is None:
return self.func.__get__(owner, owner.__class__)
else:
return self.func.__get__(instance, owner)
class _symbol(int):
def __new__(self, name, doc=None, canonical=None):
"""Construct a new named symbol."""
assert isinstance(name, compat.string_types)
if canonical is None:
canonical = hash(name)
v = int.__new__(_symbol, canonical)
v.name = name
if doc:
v.__doc__ = doc
return v
def __reduce__(self):
return symbol, (self.name, "x", int(self))
def __str__(self):
return repr(self)
def __repr__(self):
return "symbol(%r)" % self.name
_symbol.__name__ = 'symbol'
class symbol(object):
"""A constant symbol.
>>> symbol('foo') is symbol('foo')
True
>>> symbol('foo')
symbol('foo')
A slight refinement of the MAGICCOOKIE=object() pattern. The primary
advantage of symbol() is its repr(). They are also singletons.
Repeated calls of symbol('name') will all return the same instance.
The optional ``doc`` argument assigns to ``__doc__``. This
is strictly so that Sphinx autoattr picks up the docstring we want
(it doesn't appear to pick up the in-module docstring if the datamember
is in a different module - autoattribute also blows up completely).
If Sphinx fixes/improves this then we would no longer need
``doc`` here.
"""
symbols = {}
_lock = compat.threading.Lock()
def __new__(cls, name, doc=None, canonical=None):
cls._lock.acquire()
try:
sym = cls.symbols.get(name)
if sym is None:
cls.symbols[name] = sym = _symbol(name, doc, canonical)
return sym
finally:
symbol._lock.release()
_creation_order = 1
def set_creation_order(instance):
"""Assign a '_creation_order' sequence to the given instance.
This allows multiple instances to be sorted in order of creation
(typically within a single thread; the counter is not particularly
threadsafe).
"""
global _creation_order
instance._creation_order = _creation_order
_creation_order += 1
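# Illustrative usage of set_creation_order (added example, not part of the
# original SQLAlchemy source; ``Widget`` is hypothetical): calling it from a
# constructor lets instances later be sorted by creation order.
#
#     class Widget(object):
#         def __init__(self):
#             set_creation_order(self)
#
#     widgets = [Widget(), Widget(), Widget()]
#     ordered = sorted(widgets, key=lambda w: w._creation_order)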
def warn_exception(func, *args, **kwargs):
"""executes the given function, catches all exceptions and converts to
a warning.
"""
try:
return func(*args, **kwargs)
except:
warn("%s('%s') ignored" % sys.exc_info()[0:2])
def warn(msg, stacklevel=3):
"""Issue a warning.
If msg is a string, :class:`.exc.SAWarning` is used as
the category.
.. note::
This function is swapped out when the test suite
runs, with a compatible version that uses
warnings.warn_explicit, so that the warnings registry can
be controlled.
"""
if isinstance(msg, compat.string_types):
warnings.warn(msg, exc.SAWarning, stacklevel=stacklevel)
else:
warnings.warn(msg, stacklevel=stacklevel)
def only_once(fn):
"""Decorate the given function to be a no-op after it is called exactly
once."""
once = [fn]
def go(*arg, **kw):
if once:
once_fn = once.pop()
return once_fn(*arg, **kw)
return go
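# Illustrative usage of only_once (added example, not part of the original
# SQLAlchemy source):
#
#     @only_once
#     def initialize():
#         print("initializing")
#
#     initialize()   # runs the wrapped function
#     initialize()   # no-op; returns None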
_SQLA_RE = re.compile(r'sqlalchemy/([a-z_]+/){0,2}[a-z_]+\.py')
_UNITTEST_RE = re.compile(r'unit(?:2|test2?/)')
def chop_traceback(tb, exclude_prefix=_UNITTEST_RE, exclude_suffix=_SQLA_RE):
"""Chop extraneous lines off beginning and end of a traceback.
:param tb:
a list of traceback lines as returned by ``traceback.format_stack()``
:param exclude_prefix:
a regular expression object matching lines to skip at beginning of ``tb``
:param exclude_suffix:
a regular expression object matching lines to skip at end of ``tb``
"""
start = 0
end = len(tb) - 1
while start <= end and exclude_prefix.search(tb[start]):
start += 1
while start <= end and exclude_suffix.search(tb[end]):
end -= 1
return tb[start:end + 1]
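# Illustrative usage of chop_traceback (added example, not part of the
# original SQLAlchemy source): trims unittest frames from the top and
# SQLAlchemy-internal frames from the bottom of a formatted stack.
#
#     import traceback
#     frames = chop_traceback(traceback.format_stack())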
NoneType = type(None)
| gpl-3.0 | -8,910,968,798,898,428,000 | 29.473599 | 85 | 0.569909 | false |
bzero/networkx | networkx/algorithms/bipartite/spectral.py | 76 | 2538 | # -*- coding: utf-8 -*-
"""
Spectral bipartivity measure.
"""
import networkx as nx
__author__ = """Aric Hagberg ([email protected])"""
# Copyright (C) 2011 by
# Aric Hagberg <[email protected]>
# Dan Schult <[email protected]>
# Pieter Swart <[email protected]>
# All rights reserved.
# BSD license.
__all__ = ['spectral_bipartivity']
def spectral_bipartivity(G, nodes=None, weight='weight'):
"""Returns the spectral bipartivity.
Parameters
----------
G : NetworkX graph
nodes : list or container, optional (default is all nodes)
Nodes for which to return the spectral bipartivity contribution.
weight : string or None, optional (default = 'weight')
Edge data key to use for edge weights. If None, weights set to 1.
Returns
-------
sb : float or dict
A single number if the keyword nodes is not specified, or
a dictionary keyed by node with the spectral bipartivity contribution
of that node as the value.
Examples
--------
>>> from networkx.algorithms import bipartite
>>> G = nx.path_graph(4)
>>> bipartite.spectral_bipartivity(G)
1.0
Notes
-----
This implementation uses Numpy (dense) matrices which are not efficient
for storing large sparse graphs.
See Also
--------
color
References
----------
.. [1] E. Estrada and J. A. Rodríguez-Velázquez, "Spectral measures of
bipartivity in complex networks", PhysRev E 72, 046105 (2005)
"""
try:
import scipy.linalg
except ImportError:
raise ImportError('spectral_bipartivity() requires SciPy: ',
'http://scipy.org/')
nodelist = G.nodes() # ordering of nodes in matrix
A = nx.to_numpy_matrix(G, nodelist, weight=weight)
expA = scipy.linalg.expm(A)
expmA = scipy.linalg.expm(-A)
coshA = 0.5 * (expA + expmA)
if nodes is None:
# return single number for entire graph
return coshA.diagonal().sum() / expA.diagonal().sum()
else:
# contribution for individual nodes
index = dict(zip(nodelist, range(len(nodelist))))
sb = {}
for n in nodes:
i = index[n]
sb[n] = coshA[i, i] / expA[i, i]
return sb
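# Illustrative usage of the per-node form (added example, not part of the
# original networkx source):
#
#     >>> G = nx.path_graph(4)
#     >>> contributions = bipartite.spectral_bipartivity(G, nodes=[0, 1])
#     >>> sorted(contributions)
#     [0, 1]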
def setup_module(module):
"""Fixture for nose tests."""
from nose import SkipTest
try:
import numpy
except:
raise SkipTest("NumPy not available")
try:
import scipy
except:
raise SkipTest("SciPy not available")
| bsd-3-clause | 1,779,719,764,401,748,000 | 27.818182 | 76 | 0.606073 | false |
chrippa/xmms2 | waftools/gittools.py | 5 | 1944 | import os
try: from hashlib import sha1 as sha
except ImportError: from sha import sha
def gitsha(path):
h = sha()
data = file(path, 'rb').read()
h.update("blob %d\0" % len(data))
h.update(data)
return h.hexdigest()
def git_info():
commithash = os.popen('git rev-parse --verify HEAD 2>/dev/null').read().strip()
if not commithash:
raise ValueError("Couldn't get hash")
if os.getuid() == os.stat(".git/index").st_uid:
os.system('git update-index --refresh >/dev/null')
else:
print("NOT updating git cache, local changes might not be detected")
changed = bool(os.popen('git diff-index -r HEAD').read())
return commithash[:8], changed
def snapshot_info():
info = file('commithash').read().split('\n')
commithash = info[0]
changed = False
for line in [a for a in info[2:] if a]:
[mode, tag, sha, path] = line.split(None, 3)
if tag != 'blob':
continue
if gitsha(path) != sha:
changed = True
break
return commithash, changed
def get_info():
try:
return git_info()
except:
try:
return snapshot_info()
except:
return 'Unknown', False
def get_info_str():
commithash, changed = get_info()
if changed:
changed = " + local changes"
else:
changed = ""
return "%s%s" % (commithash, changed)
submodule_status = {'-':'missing', '+':'outdated', ' ':'uptodate'}
def git_submodules():
submodules = {}
for l in os.popen('git submodule').read().split('\n'):
status = submodule_status.get(l and l[0] or "", 'uptodate')
l = l[1:].strip()
if not l:
continue
commithash, folder = l.strip().split()[:2]
submodules[folder] = (status, commithash[:8])
return submodules
def get_submodules():
try:
return git_submodules()
except:
return {}
| lgpl-2.1 | -5,659,197,145,269,755,000 | 26 | 83 | 0.56893 | false |