repo_name (string, 6-100 chars) | path (string, 4-294 chars) | copies (string, 1-5 chars) | size (string, 4-6 chars) | content (string, 606-896k chars) | license (string, 15 classes)
---|---|---|---|---|---
Lujeni/ansible
|
lib/ansible/modules/web_infrastructure/deploy_helper.py
|
149
|
19571
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2014, Jasper N. Brouwer <[email protected]>
# (c) 2014, Ramon de la Fuente <[email protected]>
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: deploy_helper
version_added: "2.0"
author: "Ramon de la Fuente (@ramondelafuente)"
short_description: Manages some of the steps common in deploying projects.
description:
- The Deploy Helper manages some of the steps common in deploying software.
It creates a folder structure, manages a symlink for the current release
and cleans up old releases.
- "Running it with the C(state=query) or C(state=present) will return the C(deploy_helper) fact.
C(project_path), whatever you set in the path parameter,
C(current_path), the path to the symlink that points to the active release,
C(releases_path), the path to the folder to keep releases in,
C(shared_path), the path to the folder to keep shared resources in,
C(unfinished_filename), the file to check for to recognize unfinished builds,
C(previous_release), the release the 'current' symlink is pointing to,
C(previous_release_path), the full path to the 'current' symlink target,
C(new_release), either the 'release' parameter or a generated timestamp,
C(new_release_path), the path to the new release folder (not created by the module)."
options:
path:
required: True
aliases: ['dest']
description:
- the root path of the project. Alias I(dest).
Returned in the C(deploy_helper.project_path) fact.
state:
description:
- the state of the project.
C(query) will only gather facts,
C(present) will create the project I(root) folder, and in it the I(releases) and I(shared) folders,
C(finalize) will remove the unfinished_filename file, create a symlink to the newly
deployed release and optionally clean old releases,
C(clean) will remove failed & old releases,
C(absent) will remove the project folder (synonymous to the M(file) module with C(state=absent))
choices: [ present, finalize, absent, clean, query ]
default: present
release:
description:
- the release version that is being deployed. Defaults to a timestamp format %Y%m%d%H%M%S (e.g. '20141119223359').
This parameter is optional during C(state=present), but needs to be set explicitly for C(state=finalize).
You can use the generated fact C(release={{ deploy_helper.new_release }}).
releases_path:
description:
- the name of the folder that will hold the releases. This can be relative to C(path) or absolute.
Returned in the C(deploy_helper.releases_path) fact.
default: releases
shared_path:
description:
- the name of the folder that will hold the shared resources. This can be relative to C(path) or absolute.
If this is set to an empty string, no shared folder will be created.
Returned in the C(deploy_helper.shared_path) fact.
default: shared
current_path:
description:
- the name of the symlink that is created when the deploy is finalized. Used in C(finalize) and C(clean).
Returned in the C(deploy_helper.current_path) fact.
default: current
unfinished_filename:
description:
- the name of the file that indicates a deploy has not finished. All folders in the releases_path that
contain this file will be deleted on C(state=finalize) with clean=True, or C(state=clean). This file is
automatically deleted from the I(new_release_path) during C(state=finalize).
default: DEPLOY_UNFINISHED
clean:
description:
- Whether to run the clean procedure in case of C(state=finalize).
type: bool
default: 'yes'
keep_releases:
description:
- the number of old releases to keep when cleaning. Used in C(finalize) and C(clean). Any unfinished builds
will be deleted first, so only correct releases will count. The current version will not count.
default: 5
notes:
- Facts are only returned for C(state=query) and C(state=present). If you use both, you should pass any overridden
parameters to both calls, otherwise the second call will overwrite the facts of the first one.
- When using C(state=clean), the releases are ordered by I(creation date). You should be able to switch to a
new naming strategy without problems.
- Because of the default behaviour of generating the I(new_release) fact, this module will not be idempotent
unless you pass your own release name with C(release). Due to the nature of deploying software, this should not
be much of a problem.
'''
EXAMPLES = '''
# General explanation, starting with an example folder structure for a project:
# root:
# releases:
# - 20140415234508
# - 20140415235146
# - 20140416082818
#
# shared:
# - sessions
# - uploads
#
# current: releases/20140416082818
# The 'releases' folder holds all the available releases. A release is a complete build of the application being
# deployed. This can be a clone of a repository for example, or a sync of a local folder on your filesystem.
# Having timestamped folders is one way of having distinct releases, but you could choose your own strategy like
# git tags or commit hashes.
#
# During a deploy, a new folder should be created in the releases folder and any build steps required should be
# performed. Once the new build is ready, the deploy procedure is 'finalized' by replacing the 'current' symlink
# with a link to this build.
#
# The 'shared' folder holds any resource that is shared between releases. Examples of this are web-server
# session files, or files uploaded by users of your application. It's quite common to have symlinks from a release
# folder pointing to a shared/subfolder, and creating these links would be automated as part of the build steps.
#
# The 'current' symlink points to one of the releases. Probably the latest one, unless a deploy is in progress.
# The web-server's root for the project will go through this symlink, so the 'downtime' when switching to a new
# release is reduced to the time it takes to switch the link.
#
# To distinguish between successful builds and unfinished ones, a file can be placed in the folder of the release
# that is currently in progress. The existence of this file will mark it as unfinished, and allow an automated
# procedure to remove it during cleanup.
# Typical usage
- name: Initialize the deploy root and gather facts
deploy_helper:
path: /path/to/root
- name: Clone the project to the new release folder
git:
repo: git://foosball.example.org/path/to/repo.git
dest: '{{ deploy_helper.new_release_path }}'
version: v1.1.1
- name: Add an unfinished file, to allow cleanup on successful finalize
file:
path: '{{ deploy_helper.new_release_path }}/{{ deploy_helper.unfinished_filename }}'
state: touch
- name: Perform some build steps, like running your dependency manager for example
composer:
command: install
working_dir: '{{ deploy_helper.new_release_path }}'
- name: Create some folders in the shared folder
file:
path: '{{ deploy_helper.shared_path }}/{{ item }}'
state: directory
with_items:
- sessions
- uploads
- name: Add symlinks from the new release to the shared folder
file:
path: '{{ deploy_helper.new_release_path }}/{{ item.path }}'
src: '{{ deploy_helper.shared_path }}/{{ item.src }}'
state: link
with_items:
- path: app/sessions
src: sessions
- path: web/uploads
src: uploads
- name: Finalize the deploy, removing the unfinished file and switching the symlink
deploy_helper:
path: /path/to/root
release: '{{ deploy_helper.new_release }}'
state: finalize
# Retrieving facts before running a deploy
- name: Run 'state=query' to gather facts without changing anything
deploy_helper:
path: /path/to/root
state: query
# Remember to set the 'release' parameter when you actually call 'state=present' later
- name: Initialize the deploy root
deploy_helper:
path: /path/to/root
release: '{{ deploy_helper.new_release }}'
state: present
# all paths can be absolute or relative (to the 'path' parameter)
- deploy_helper:
path: /path/to/root
releases_path: /var/www/project/releases
shared_path: /var/www/shared
current_path: /var/www/active
# Using your own naming strategy for releases (a version tag in this case):
- deploy_helper:
path: /path/to/root
release: v1.1.1
state: present
- deploy_helper:
path: /path/to/root
release: '{{ deploy_helper.new_release }}'
state: finalize
# Using a different unfinished_filename:
- deploy_helper:
path: /path/to/root
unfinished_filename: README.md
release: '{{ deploy_helper.new_release }}'
state: finalize
# Postponing the cleanup of older builds:
- deploy_helper:
path: /path/to/root
release: '{{ deploy_helper.new_release }}'
state: finalize
clean: False
- deploy_helper:
path: /path/to/root
state: clean
# Or running the cleanup ahead of the new deploy
- deploy_helper:
path: /path/to/root
state: clean
- deploy_helper:
path: /path/to/root
state: present
# Keeping more old releases:
- deploy_helper:
path: /path/to/root
release: '{{ deploy_helper.new_release }}'
state: finalize
keep_releases: 10
# Or, if you use 'clean=false' on finalize:
- deploy_helper:
path: /path/to/root
state: clean
keep_releases: 10
# Removing the entire project root folder
- deploy_helper:
path: /path/to/root
state: absent
# Debugging the facts returned by the module
- deploy_helper:
path: /path/to/root
- debug:
var: deploy_helper
'''
import os
import shutil
import time
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
class DeployHelper(object):
def __init__(self, module):
self.module = module
self.file_args = module.load_file_common_arguments(module.params)
self.clean = module.params['clean']
self.current_path = module.params['current_path']
self.keep_releases = module.params['keep_releases']
self.path = module.params['path']
self.release = module.params['release']
self.releases_path = module.params['releases_path']
self.shared_path = module.params['shared_path']
self.state = module.params['state']
self.unfinished_filename = module.params['unfinished_filename']
def gather_facts(self):
current_path = os.path.join(self.path, self.current_path)
releases_path = os.path.join(self.path, self.releases_path)
if self.shared_path:
shared_path = os.path.join(self.path, self.shared_path)
else:
shared_path = None
previous_release, previous_release_path = self._get_last_release(current_path)
if not self.release and (self.state == 'query' or self.state == 'present'):
self.release = time.strftime("%Y%m%d%H%M%S")
if self.release:
new_release_path = os.path.join(releases_path, self.release)
else:
new_release_path = None
return {
'project_path': self.path,
'current_path': current_path,
'releases_path': releases_path,
'shared_path': shared_path,
'previous_release': previous_release,
'previous_release_path': previous_release_path,
'new_release': self.release,
'new_release_path': new_release_path,
'unfinished_filename': self.unfinished_filename
}
def delete_path(self, path):
if not os.path.lexists(path):
return False
if not os.path.isdir(path):
self.module.fail_json(msg="%s exists but is not a directory" % path)
if not self.module.check_mode:
try:
shutil.rmtree(path, ignore_errors=False)
except Exception as e:
self.module.fail_json(msg="rmtree failed: %s" % to_native(e), exception=traceback.format_exc())
return True
def create_path(self, path):
changed = False
if not os.path.lexists(path):
changed = True
if not self.module.check_mode:
os.makedirs(path)
elif not os.path.isdir(path):
self.module.fail_json(msg="%s exists but is not a directory" % path)
changed += self.module.set_directory_attributes_if_different(self._get_file_args(path), changed)
return changed
def check_link(self, path):
if os.path.lexists(path):
if not os.path.islink(path):
self.module.fail_json(msg="%s exists but is not a symbolic link" % path)
def create_link(self, source, link_name):
changed = False
if os.path.islink(link_name):
norm_link = os.path.normpath(os.path.realpath(link_name))
norm_source = os.path.normpath(os.path.realpath(source))
if norm_link == norm_source:
changed = False
else:
changed = True
if not self.module.check_mode:
if not os.path.lexists(source):
self.module.fail_json(msg="the symlink target %s doesn't exists" % source)
tmp_link_name = link_name + '.' + self.unfinished_filename
if os.path.islink(tmp_link_name):
os.unlink(tmp_link_name)
os.symlink(source, tmp_link_name)
os.rename(tmp_link_name, link_name)
else:
changed = True
if not self.module.check_mode:
os.symlink(source, link_name)
return changed
def remove_unfinished_file(self, new_release_path):
changed = False
unfinished_file_path = os.path.join(new_release_path, self.unfinished_filename)
if os.path.lexists(unfinished_file_path):
changed = True
if not self.module.check_mode:
os.remove(unfinished_file_path)
return changed
def remove_unfinished_builds(self, releases_path):
changes = 0
for release in os.listdir(releases_path):
if os.path.isfile(os.path.join(releases_path, release, self.unfinished_filename)):
if self.module.check_mode:
changes += 1
else:
changes += self.delete_path(os.path.join(releases_path, release))
return changes
def remove_unfinished_link(self, path):
changed = False
tmp_link_name = os.path.join(path, self.release + '.' + self.unfinished_filename)
if not self.module.check_mode and os.path.exists(tmp_link_name):
changed = True
os.remove(tmp_link_name)
return changed
def cleanup(self, releases_path, reserve_version):
changes = 0
if os.path.lexists(releases_path):
releases = [f for f in os.listdir(releases_path) if os.path.isdir(os.path.join(releases_path, f))]
try:
releases.remove(reserve_version)
except ValueError:
pass
if not self.module.check_mode:
releases.sort(key=lambda x: os.path.getctime(os.path.join(releases_path, x)), reverse=True)
for release in releases[self.keep_releases:]:
changes += self.delete_path(os.path.join(releases_path, release))
elif len(releases) > self.keep_releases:
changes += (len(releases) - self.keep_releases)
return changes
def _get_file_args(self, path):
file_args = self.file_args.copy()
file_args['path'] = path
return file_args
def _get_last_release(self, current_path):
previous_release = None
previous_release_path = None
if os.path.lexists(current_path):
previous_release_path = os.path.realpath(current_path)
previous_release = os.path.basename(previous_release_path)
return previous_release, previous_release_path
def main():
module = AnsibleModule(
argument_spec=dict(
path=dict(aliases=['dest'], required=True, type='path'),
release=dict(required=False, type='str', default=None),
releases_path=dict(required=False, type='str', default='releases'),
shared_path=dict(required=False, type='path', default='shared'),
current_path=dict(required=False, type='path', default='current'),
keep_releases=dict(required=False, type='int', default=5),
clean=dict(required=False, type='bool', default=True),
unfinished_filename=dict(required=False, type='str', default='DEPLOY_UNFINISHED'),
state=dict(required=False, choices=['present', 'absent', 'clean', 'finalize', 'query'], default='present')
),
add_file_common_args=True,
supports_check_mode=True
)
deploy_helper = DeployHelper(module)
facts = deploy_helper.gather_facts()
result = {
'state': deploy_helper.state
}
changes = 0
if deploy_helper.state == 'query':
result['ansible_facts'] = {'deploy_helper': facts}
elif deploy_helper.state == 'present':
deploy_helper.check_link(facts['current_path'])
changes += deploy_helper.create_path(facts['project_path'])
changes += deploy_helper.create_path(facts['releases_path'])
if deploy_helper.shared_path:
changes += deploy_helper.create_path(facts['shared_path'])
result['ansible_facts'] = {'deploy_helper': facts}
elif deploy_helper.state == 'finalize':
if not deploy_helper.release:
module.fail_json(msg="'release' is a required parameter for state=finalize (try the 'deploy_helper.new_release' fact)")
if deploy_helper.keep_releases <= 0:
module.fail_json(msg="'keep_releases' should be at least 1")
changes += deploy_helper.remove_unfinished_file(facts['new_release_path'])
changes += deploy_helper.create_link(facts['new_release_path'], facts['current_path'])
if deploy_helper.clean:
changes += deploy_helper.remove_unfinished_link(facts['project_path'])
changes += deploy_helper.remove_unfinished_builds(facts['releases_path'])
changes += deploy_helper.cleanup(facts['releases_path'], facts['new_release'])
elif deploy_helper.state == 'clean':
changes += deploy_helper.remove_unfinished_link(facts['project_path'])
changes += deploy_helper.remove_unfinished_builds(facts['releases_path'])
changes += deploy_helper.cleanup(facts['releases_path'], facts['new_release'])
elif deploy_helper.state == 'absent':
# destroy the facts
result['ansible_facts'] = {'deploy_helper': []}
changes += deploy_helper.delete_path(facts['project_path'])
if changes > 0:
result['changed'] = True
else:
result['changed'] = False
module.exit_json(**result)
if __name__ == '__main__':
main()
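# Ad-hoc invocation sketch (illustrative, not part of the original file):
#   ansible localhost -m deploy_helper -a "path=/path/to/root state=present"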
|
gpl-3.0
|
jaeilepp/mne-python
|
mne/io/egi/tests/test_egi.py
|
1
|
4257
|
# -*- coding: utf-8 -*-
# Authors: Denis A. Engemann <[email protected]>
# simplified BSD-3 license
import os.path as op
import warnings
import inspect
import numpy as np
from numpy.testing import assert_array_equal, assert_allclose
from nose.tools import assert_true, assert_raises, assert_equal
from mne import find_events, pick_types
from mne.io import read_raw_egi
from mne.io.tests.test_raw import _test_raw_reader
from mne.io.egi.egi import _combine_triggers
from mne.utils import run_tests_if_main
from mne.datasets.testing import data_path, requires_testing_data
warnings.simplefilter('always') # enable b/c these tests throw warnings
FILE = inspect.getfile(inspect.currentframe())
base_dir = op.join(op.dirname(op.abspath(FILE)), 'data')
egi_fname = op.join(base_dir, 'test_egi.raw')
egi_txt_fname = op.join(base_dir, 'test_egi.txt')
@requires_testing_data
def test_io_egi_mff():
"""Test importing EGI MFF simple binary files"""
egi_fname_mff = op.join(data_path(), 'EGI', 'test_egi.mff')
raw = read_raw_egi(egi_fname_mff, include=None)
assert_true('RawMff' in repr(raw))
include = ['DIN1', 'DIN2', 'DIN3', 'DIN4', 'DIN5', 'DIN7']
raw = _test_raw_reader(read_raw_egi, input_fname=egi_fname_mff,
include=include, channel_naming='EEG %03d')
assert_equal('eeg' in raw, True)
eeg_chan = [c for c in raw.ch_names if 'EEG' in c]
assert_equal(len(eeg_chan), 129)
picks = pick_types(raw.info, eeg=True)
assert_equal(len(picks), 129)
assert_equal('STI 014' in raw.ch_names, True)
events = find_events(raw, stim_channel='STI 014')
assert_equal(len(events), 8)
assert_equal(np.unique(events[:, 1])[0], 0)
assert_true(np.unique(events[:, 0])[0] != 0)
assert_true(np.unique(events[:, 2])[0] != 0)
assert_raises(ValueError, read_raw_egi, egi_fname_mff, include=['Foo'],
preload=False)
assert_raises(ValueError, read_raw_egi, egi_fname_mff, exclude=['Bar'],
preload=False)
for ii, k in enumerate(include, 1):
assert_true(k in raw.event_id)
assert_true(raw.event_id[k] == ii)
def test_io_egi():
"""Test importing EGI simple binary files."""
# test default
with open(egi_txt_fname) as fid:
data = np.loadtxt(fid)
t = data[0]
data = data[1:]
data *= 1e-6 # μV
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter('always')
raw = read_raw_egi(egi_fname, include=None)
assert_true('RawEGI' in repr(raw))
assert_equal(len(w), 1)
assert_true(w[0].category == RuntimeWarning)
msg = 'Did not find any event code with more than one event.'
assert_true(msg in '%s' % w[0].message)
data_read, t_read = raw[:256]
assert_allclose(t_read, t)
assert_allclose(data_read, data, atol=1e-10)
include = ['TRSP', 'XXX1']
with warnings.catch_warnings(record=True): # preload=None
raw = _test_raw_reader(read_raw_egi, input_fname=egi_fname,
include=include)
assert_equal('eeg' in raw, True)
eeg_chan = [c for c in raw.ch_names if c.startswith('E')]
assert_equal(len(eeg_chan), 256)
picks = pick_types(raw.info, eeg=True)
assert_equal(len(picks), 256)
assert_equal('STI 014' in raw.ch_names, True)
events = find_events(raw, stim_channel='STI 014')
assert_equal(len(events), 2) # ground truth
assert_equal(np.unique(events[:, 1])[0], 0)
assert_true(np.unique(events[:, 0])[0] != 0)
assert_true(np.unique(events[:, 2])[0] != 0)
triggers = np.array([[0, 1, 1, 0], [0, 0, 1, 0]])
# test trigger functionality
triggers = np.array([[0, 1, 0, 0], [0, 0, 1, 0]])
events_ids = [12, 24]
new_trigger = _combine_triggers(triggers, events_ids)
assert_array_equal(np.unique(new_trigger), np.unique([0, 12, 24]))
assert_raises(ValueError, read_raw_egi, egi_fname, include=['Foo'],
preload=False)
assert_raises(ValueError, read_raw_egi, egi_fname, exclude=['Bar'],
preload=False)
for ii, k in enumerate(include, 1):
assert_true(k in raw.event_id)
assert_true(raw.event_id[k] == ii)
run_tests_if_main()
|
bsd-3-clause
|
hectord/lettuce
|
tests/integration/lib/Django-1.3/tests/regressiontests/sites_framework/tests.py
|
92
|
1784
|
from django.conf import settings
from django.contrib.sites.models import Site
from django.test import TestCase
from models import SyndicatedArticle, ExclusiveArticle, CustomArticle, InvalidArticle, ConfusedArticle
class SitesFrameworkTestCase(TestCase):
def setUp(self):
Site.objects.get_or_create(id=settings.SITE_ID, domain="example.com", name="example.com")
Site.objects.create(id=settings.SITE_ID+1, domain="example2.com", name="example2.com")
def test_site_fk(self):
article = ExclusiveArticle.objects.create(title="Breaking News!", site_id=settings.SITE_ID)
self.assertEqual(ExclusiveArticle.on_site.all().get(), article)
def test_sites_m2m(self):
article = SyndicatedArticle.objects.create(title="Fresh News!")
article.sites.add(Site.objects.get(id=settings.SITE_ID))
article.sites.add(Site.objects.get(id=settings.SITE_ID+1))
article2 = SyndicatedArticle.objects.create(title="More News!")
article2.sites.add(Site.objects.get(id=settings.SITE_ID+1))
self.assertEqual(SyndicatedArticle.on_site.all().get(), article)
def test_custom_named_field(self):
article = CustomArticle.objects.create(title="Tantalizing News!", places_this_article_should_appear_id=settings.SITE_ID)
self.assertEqual(CustomArticle.on_site.all().get(), article)
def test_invalid_name(self):
article = InvalidArticle.objects.create(title="Bad News!", site_id=settings.SITE_ID)
self.assertRaises(ValueError, InvalidArticle.on_site.all)
def test_invalid_field_type(self):
article = ConfusedArticle.objects.create(title="More Bad News!", site=settings.SITE_ID)
self.assertRaises(TypeError, ConfusedArticle.on_site.all)
|
gpl-3.0
|
cardoe/virt-manager
|
virtinst/CPU.py
|
3
|
9479
|
#
# Copyright 2010 Red Hat, Inc.
# Cole Robinson <[email protected]>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
# MA 02110-1301 USA.
from virtinst import XMLBuilderDomain
from virtinst.XMLBuilderDomain import _xml_property
import libxml2
def _int_or_none(val):
return val and int(val) or val
class CPUFeature(XMLBuilderDomain.XMLBuilderDomain):
"""
Class for generating <cpu> child <feature> XML
"""
POLICIES = ["force", "require", "optional", "disable", "forbid"]
def __init__(self, conn, parsexml=None, parsexmlnode=None, caps=None):
XMLBuilderDomain.XMLBuilderDomain.__init__(self, conn, parsexml,
parsexmlnode, caps)
self._name = None
self._policy = None
if self._is_parse():
return
def _get_name(self):
return self._name
def _set_name(self, val):
self._name = val
name = _xml_property(_get_name, _set_name,
xpath="./@name")
def _get_policy(self):
return self._policy
def _set_policy(self, val):
self._policy = val
policy = _xml_property(_get_policy, _set_policy,
xpath="./@policy")
def _get_xml_config(self):
if not self.name:
return ""
xml = " <feature"
if self.policy:
xml += " policy='%s'" % self.policy
xml += " name='%s'/>" % self.name
return xml
class CPU(XMLBuilderDomain.XMLBuilderDomain):
"""
Class for generating <cpu> XML
"""
_dumpxml_xpath = "/domain/cpu"
MATCHS = ["minimum", "exact", "strict"]
def __init__(self, conn, parsexml=None, parsexmlnode=None, caps=None):
self._model = None
self._match = None
self._vendor = None
self._mode = None
self._features = []
self._sockets = None
self._cores = None
self._threads = None
XMLBuilderDomain.XMLBuilderDomain.__init__(self, conn, parsexml,
parsexmlnode, caps)
if self._is_parse():
return
def _parsexml(self, xml, node):
XMLBuilderDomain.XMLBuilderDomain._parsexml(self, xml, node)
for node in self._xml_node.children:
if node.name != "feature":
continue
feature = CPUFeature(self.conn, parsexmlnode=node)
self._features.append(feature)
def _get_features(self):
return self._features[:]
features = _xml_property(_get_features)
def add_feature(self, name, policy="require"):
feature = CPUFeature(self.conn)
feature.name = name
feature.policy = policy
if self._is_parse():
xml = feature.get_xml_config()
node = libxml2.parseDoc(xml).children
feature.set_xml_node(node)
self._add_child_node("./cpu", node)
self._features.append(feature)
def remove_feature(self, feature):
if self._is_parse() and feature in self._features:
xpath = feature.get_xml_node_path()
if xpath:
self._remove_child_xpath(xpath)
self._features.remove(feature)
def _get_model(self):
return self._model
def _set_model(self, val):
if val:
self.mode = "custom"
if val and not self.match:
self.match = "exact"
self._model = val
model = _xml_property(_get_model, _set_model,
xpath="./cpu/model")
def _get_match(self):
return self._match
def _set_match(self, val):
self._match = val
match = _xml_property(_get_match, _set_match,
xpath="./cpu/@match")
def _get_vendor(self):
return self._vendor
def _set_vendor(self, val):
self._vendor = val
vendor = _xml_property(_get_vendor, _set_vendor,
xpath="./cpu/vendor")
def _get_mode(self):
return self._mode
def _set_mode(self, val):
self._mode = val
mode = _xml_property(_get_mode, _set_mode,
xpath="./cpu/@mode")
# Topology properties
def _get_sockets(self):
return self._sockets
def _set_sockets(self, val):
self._sockets = _int_or_none(val)
sockets = _xml_property(_get_sockets, _set_sockets,
get_converter=lambda s, x: _int_or_none(x),
xpath="./cpu/topology/@sockets")
def _get_cores(self):
return self._cores
def _set_cores(self, val):
self._cores = _int_or_none(val)
cores = _xml_property(_get_cores, _set_cores,
get_converter=lambda s, x: _int_or_none(x),
xpath="./cpu/topology/@cores")
def _get_threads(self):
return self._threads
def _set_threads(self, val):
self._threads = _int_or_none(val)
threads = _xml_property(_get_threads, _set_threads,
get_converter=lambda s, x: _int_or_none(x),
xpath="./cpu/topology/@threads")
def clear_attrs(self):
self.match = None
self.mode = None
self.vendor = None
self.model = None
for feature in self.features:
self.remove_feature(feature)
def copy_host_cpu(self):
"""
Enact the equivalent of qemu -cpu host, pulling all info
from capabilities about the host CPU
"""
cpu = self._get_caps().host.cpu
if not cpu.model:
raise ValueError(_("No host CPU reported in capabilities"))
self.mode = "custom"
self.match = "exact"
self.model = cpu.model
self.vendor = cpu.vendor
for feature in self.features:
self.remove_feature(feature)
for name in cpu.features.names():
self.add_feature(name)
def vcpus_from_topology(self):
"""
Determine the CPU count represented by topology, or 1 if
no topology is set
"""
self.set_topology_defaults()
if self.sockets:
return self.sockets * self.cores * self.threads
return 1
def set_topology_defaults(self, vcpus=None):
"""
Fill in unset topology values, using the passed vcpus count if
required
"""
if (self.sockets is None and
self.cores is None and
self.threads is None):
return
if vcpus is None:
if self.sockets is None:
self.sockets = 1
if self.threads is None:
self.threads = 1
if self.cores is None:
self.cores = 1
vcpus = int(vcpus or 0)
if not self.sockets:
if not self.cores:
self.sockets = vcpus / self.threads
else:
self.sockets = vcpus / self.cores
if not self.cores:
if not self.threads:
self.cores = vcpus / self.sockets
else:
self.cores = vcpus / (self.sockets * self.threads)
if not self.threads:
self.threads = vcpus / (self.sockets * self.cores)
return
def _get_topology_xml(self):
xml = ""
if self.sockets:
xml += " sockets='%s'" % self.sockets
if self.cores:
xml += " cores='%s'" % self.cores
if self.threads:
xml += " threads='%s'" % self.threads
if not xml:
return ""
return " <topology%s/>\n" % xml
def _get_feature_xml(self):
xml = ""
for feature in self._features:
xml += feature.get_xml_config() + "\n"
return xml
def _get_xml_config(self):
top_xml = self._get_topology_xml()
feature_xml = self._get_feature_xml()
mode_xml = ""
match_xml = ""
if self.match:
match_xml = " match='%s'" % self.match
xml = ""
if self.model == "host-passthrough":
self.mode = "host-passthrough"
mode_xml = " mode='%s'" % self.mode
xml += " <cpu%s/>" % mode_xml
return xml
else:
self.mode = "custom"
mode_xml = " mode='%s'" % self.mode
if not (self.model or top_xml or feature_xml):
return ""
# Simple topology XML mode
xml += " <cpu%s%s>\n" % (mode_xml, match_xml)
if self.model:
xml += " <model>%s</model>\n" % self.model
if self.vendor:
xml += " <vendor>%s</vendor>\n" % self.vendor
if top_xml:
xml += top_xml
if feature_xml:
xml += feature_xml
xml += " </cpu>"
return xml
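# A minimal usage sketch (not part of the original file; `conn` is assumed to be
# an open libvirt connection obtained elsewhere):
#
#   cpu = CPU(conn)
#   cpu.sockets, cpu.cores, cpu.threads = 2, 4, 1
#   cpu.add_feature("vmx", policy="require")
#   xml = cpu.get_xml_config()   # emits the <cpu> fragment with topology and features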
|
gpl-2.0
|
tedelhourani/ansible
|
lib/ansible/modules/network/nxos/nxos_ospf.py
|
26
|
4221
|
#!/usr/bin/python
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'network'}
DOCUMENTATION = '''
---
module: nxos_ospf
extends_documentation_fragment: nxos
version_added: "2.2"
short_description: Manages configuration of an ospf instance.
description:
- Manages configuration of an ospf instance.
author: Gabriele Gerbino (@GGabriele)
options:
ospf:
description:
- Name of the ospf instance.
required: true
state:
description:
- Determines whether the config should be present or not
on the device.
required: false
default: present
choices: ['present','absent']
'''
EXAMPLES = '''
- nxos_ospf:
ospf: 1
state: present
'''
RETURN = '''
commands:
description: commands sent to the device
returned: always
type: list
sample: ["router ospf 1"]
'''
import re
from ansible.module_utils.nxos import get_config, load_config
from ansible.module_utils.nxos import nxos_argument_spec, check_args
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.netcfg import CustomNetworkConfig
PARAM_TO_COMMAND_KEYMAP = {
'ospf': 'router ospf'
}
def get_value(config, module):
splitted_config = config.splitlines()
value_list = []
REGEX = r'^router ospf\s(?P<ospf>\S+).*'
for line in splitted_config:
value = ''
if 'router ospf' in line:
try:
match_ospf = re.match(REGEX, line, re.DOTALL)
ospf_group = match_ospf.groupdict()
value = ospf_group['ospf']
except AttributeError:
value = ''
if value:
value_list.append(value)
return value_list
def get_existing(module):
existing = {}
config = str(get_config(module))
value = get_value(config, module)
if value:
existing['ospf'] = value
return existing
def state_present(module, proposed, candidate):
commands = ['router ospf {0}'.format(proposed['ospf'])]
candidate.add(commands, parents=[])
def state_absent(module, proposed, candidate):
commands = ['no router ospf {0}'.format(proposed['ospf'])]
candidate.add(commands, parents=[])
def main():
argument_spec = dict(
ospf=dict(required=True, type='str'),
state=dict(choices=['present', 'absent'], default='present', required=False),
include_defaults=dict(default=True),
config=dict(),
save=dict(type='bool', default=False)
)
argument_spec.update(nxos_argument_spec)
module = AnsibleModule(argument_spec=argument_spec, supports_check_mode=True)
warnings = list()
check_args(module, warnings)
result = dict(changed=False, warnings=warnings)
state = module.params['state']
ospf = str(module.params['ospf'])
existing = get_existing(module)
proposed = dict(ospf=ospf)
if not existing:
existing_list = []
else:
existing_list = existing['ospf']
candidate = CustomNetworkConfig(indent=3)
if state == 'present' and ospf not in existing_list:
state_present(module, proposed, candidate)
if state == 'absent' and ospf in existing_list:
state_absent(module, proposed, candidate)
if candidate:
candidate = candidate.items_text()
load_config(module, candidate)
result['changed'] = True
result['commands'] = candidate
else:
result['commands'] = []
module.exit_json(**result)
if __name__ == '__main__':
main()
|
gpl-3.0
|
TileStache/TileStache
|
TileStache/PixelEffects.py
|
9
|
4207
|
""" Different effects that can be applied to tiles.
Options are:
- blackwhite:
"effect":
{
"name": "blackwhite"
}
- greyscale:
"effect":
{
"name": "greyscale"
}
- desaturate:
Has an optional parameter "factor" that defines the saturation of the image.
Defaults to 0.85.
"effect":
{
"name": "desaturate",
"factor": 0.85
}
- pixelate:
Has an optional parameter "reduction" that defines how pixelated the image
will be (size of pixel). Defaults to 5.
"effect":
{
"name": "pixelate",
"factor": 5
}
- halftone:
"effect":
{
"name": "halftone"
}
- blur:
Has an optional parameter "radius" that defines the blurriness of an image.
Larger radius means more blurry. Defaults to 5.
"effect":
{
"name": "blur",
"radius": 5
}
"""
from PIL import Image, ImageFilter
def put_original_alpha(original_image, new_image):
""" Put alpha channel of original image (if any) in the new image.
"""
try:
alpha_idx = original_image.mode.index('A')
alpha_channel = original_image.split()[alpha_idx]
new_image.putalpha(alpha_channel)
except ValueError:
pass
return new_image
class PixelEffect:
""" Base class for all pixel effects.
Subclasses must implement method `apply_effect`.
"""
def __init__(self):
pass
def apply(self, image):
try:
image = image.image() # Handle Providers.Verbatim tiles
except (AttributeError, TypeError):
pass
return self.apply_effect(image)
def apply_effect(self, image):
raise NotImplementedError(
'PixelEffect subclasses must implement method `apply_effect`.'
)
class Blackwhite(PixelEffect):
""" Returns a black and white version of the original image.
"""
def apply_effect(self, image):
new_image = image.convert('1').convert(image.mode)
return put_original_alpha(image, new_image)
class Greyscale(PixelEffect):
""" Returns a grescale version of the original image.
"""
def apply_effect(self, image):
return image.convert('LA').convert(image.mode)
class Desaturate(PixelEffect):
""" Returns a desaturated version of the original image.
`factor` is a number between 0 and 1, where 1 results in a
greyscale image (no color), and 0 results in the original image.
"""
def __init__(self, factor=0.85):
self.factor = min(max(factor, 0.0), 1.0) # 0.0 <= factor <= 1.0
def apply_effect(self, image):
avg = image.convert('LA').convert(image.mode)
return Image.blend(image, avg, self.factor)
class Pixelate(PixelEffect):
""" Returns a pixelated version of the original image.
`reduction` defines how pixelated the image will be (size of pixels).
"""
def __init__(self, reduction=5):
self.reduction = max(reduction, 1) # 1 <= reduction
def apply_effect(self, image):
tmp_size = (int(image.size[0] / self.reduction),
int(image.size[1] / self.reduction))
pixelated = image.resize(tmp_size, Image.NEAREST)
return pixelated.resize(image.size, Image.NEAREST)
class Halftone(PixelEffect):
""" Returns a halftone version of the original image.
"""
def apply_effect(self, image):
cmyk = []
for band in image.convert('CMYK').split():
cmyk.append(band.convert('1').convert('L'))
new_image = Image.merge('CMYK', cmyk).convert(image.mode)
return put_original_alpha(image, new_image)
class Blur(PixelEffect):
""" Returns a blurred version of the original image.
`radius` defines the blurriness of an image. Larger radius means more
blurry.
"""
def __init__(self, radius=5):
self.radius = max(radius, 0) # 0 <= radius
def apply_effect(self, image):
return image.filter(ImageFilter.GaussianBlur(self.radius))
all = {
'blackwhite': Blackwhite,
'greyscale': Greyscale,
'desaturate': Desaturate,
'pixelate': Pixelate,
'halftone': Halftone,
'blur': Blur,
}
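# A minimal usage sketch (not part of the original module): look an effect up by
# its configuration name and apply it to a PIL image. The file names are only
# illustrative.
if __name__ == '__main__':
    source = Image.open('tile.png')
    effect = all['desaturate'](factor=0.5)
    result = effect.apply(source)
    result.save('tile-desaturated.png')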
|
bsd-3-clause
|
dakcarto/QGIS
|
python/plugins/processing/algs/qgis/Explode.py
|
10
|
3527
|
# -*- coding: utf-8 -*-
"""
***************************************************************************
Explode.py
---------------------
Date : August 2012
Copyright : (C) 2012 by Victor Olaya
Email : volayaf at gmail dot com
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Victor Olaya'
__date__ = 'August 2012'
__copyright__ = '(C) 2012, Victor Olaya'
# This will get replaced with a git SHA1 when you do a git archive
__revision__ = '$Format:%H$'
from qgis.core import QGis, QgsFeature, QgsGeometry
from processing.core.GeoAlgorithm import GeoAlgorithm
from processing.core.parameters import ParameterVector
from processing.core.outputs import OutputVector
from processing.tools import dataobjects, vector
class Explode(GeoAlgorithm):
INPUT = 'INPUT'
OUTPUT = 'OUTPUT'
def processAlgorithm(self, progress):
vlayer = dataobjects.getObjectFromUri(
self.getParameterValue(self.INPUT))
output = self.getOutputFromName(self.OUTPUT)
vprovider = vlayer.dataProvider()
fields = vprovider.fields()
writer = output.getVectorWriter(fields, QGis.WKBLineString,
vlayer.crs())
outFeat = QgsFeature()
inGeom = QgsGeometry()
nElement = 0
features = vector.features(vlayer)
nFeat = len(features)
for feature in features:
nElement += 1
progress.setPercentage(nElement * 100 / nFeat)
inGeom = feature.geometry()
atMap = feature.attributes()
segments = self.extractAsSingleSegments(inGeom)
outFeat.setAttributes(atMap)
for segment in segments:
outFeat.setGeometry(segment)
writer.addFeature(outFeat)
del writer
def extractAsSingleSegments(self, geom):
segments = []
if geom.isMultipart():
multi = geom.asMultiPolyline()
for polyline in multi:
segments.extend(self.getPolylineAsSingleSegments(polyline))
else:
segments.extend(self.getPolylineAsSingleSegments(
geom.asPolyline()))
return segments
def getPolylineAsSingleSegments(self, polyline):
segments = []
for i in range(len(polyline) - 1):
ptA = polyline[i]
ptB = polyline[i + 1]
segment = QgsGeometry.fromPolyline([ptA, ptB])
segments.append(segment)
return segments
def defineCharacteristics(self):
self.name, self.i18n_name = self.trAlgorithm('Explode lines')
self.group, self.i18n_group = self.trAlgorithm('Vector geometry tools')
self.addParameter(ParameterVector(self.INPUT,
self.tr('Input layer'), [ParameterVector.VECTOR_TYPE_LINE]))
self.addOutput(OutputVector(self.OUTPUT, self.tr('Exploded')))
|
gpl-2.0
|
waynenilsen/statsmodels
|
statsmodels/sandbox/examples/try_quantile_regression1.py
|
33
|
1188
|
'''Example to illustrate Quantile Regression
Author: Josef Perktold
polynomial regression with systematic deviations above
'''
import numpy as np
from statsmodels.compat.python import zip
from scipy import stats
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg
sige = 0.1
nobs, k_vars = 500, 3
x = np.random.uniform(-1, 1, size=nobs)
x.sort()
exog = np.vander(x, k_vars+1)[:,::-1]
mix = 0.1 * stats.norm.pdf(x[:,None], loc=np.linspace(-0.5, 0.75, 4), scale=0.01).sum(1)
y = exog.sum(1) + mix + sige * (np.random.randn(nobs)/2 + 1)**3
p = 0.5
res_qr = QuantReg(y, exog).fit(p)
res_qr2 = QuantReg(y, exog).fit(0.1)
res_qr3 = QuantReg(y, exog).fit(0.75)
res_ols = sm.OLS(y, exog).fit()
params = [res_ols.params, res_qr2.params, res_qr.params, res_qr3.params]
labels = ['ols', 'qr 0.1', 'qr 0.5', 'qr 0.75']
import matplotlib.pyplot as plt
plt.figure()
plt.plot(x, y, '.', alpha=0.5)
for lab, beta in zip(labels, params):
print('%-8s'%lab, np.round(beta, 4))
fitted = np.dot(exog, beta)
lw = 2
plt.plot(x, fitted, lw=lw, label=lab)
plt.legend()
plt.title('Quantile Regression')
plt.show()
|
bsd-3-clause
|
elevien/mlmc-rdme
|
crn_mc/simulation/rhs.py
|
3
|
4024
|
from ..mesh import *
from ..model import *
from .timer import *
import copy,json
import numpy as np
from scipy.integrate import ode
def res(x,y):
return x - min(x,y)
# Right hand sides --------------------------------------------------------
# currently spending too much time inside this function; perhaps don't
# use filter?
def chvrhs_hybrid(t,y,model,sample_rate):
for i in range(model.dimension):
model.systemState[i].value[0] = y[i]
for e in model.events:
e.updaterate()
#MIXED = filter(lambda e: e.hybridType == MIXED, model.events)
agg_rate = 0.
for i in range(model.dimension):
if model.events[i].hybridType == SLOW or model.events[i].hybridType == MIXED:
agg_rate = agg_rate + model.events[i].rate
#for s in MIXED:
# agg_rate = agg_rate + s.rate
rhs = np.zeros(model.dimension+1)
fast = filter(lambda e: e.hybridType == FAST, model.events)
for e in fast:
for i in range(model.dimension):
name = model.systemState[i].name
r = list(filter(lambda e: e[0].name == name, e.reactants))
p = list(filter(lambda e: e[0].name == name, e.products))
direction = 0.
if r:
direction = direction - float(r[0][1])
if p:
direction = direction + float(p[0][1])
rhs[i] = rhs[i]+ direction*e.rate
rhs[len(model.systemState)] = 1.
rhs = rhs/(agg_rate+sample_rate)
return rhs
def chvrhs_coupled(t,y,model_hybrid,model_exact,sample_rate):
for i in range(model_exact.dimension):
model_hybrid.systemState[i].value[0] = y[i]
for i in range(model_hybrid.dimension):
model_exact.systemState[i].value[0] = y[i+model_exact.dimension]
for e in model_exact.events:
e.updaterate()
for e in model_hybrid.events:
e.updaterate()
agg_rate = 0.
for i in range(len(model_hybrid.events)):
if model_hybrid.events[i].hybridType == SLOW or model_hybrid.events[i].hybridType == MIXED:
hybrid_rate = model_hybrid.events[i].rate
exact_rate = model_exact.events[i].rate
agg_rate = agg_rate + res(hybrid_rate,exact_rate )
agg_rate = agg_rate + res(exact_rate,hybrid_rate )
agg_rate = agg_rate + min(hybrid_rate,exact_rate )
elif model_hybrid.events[i].hybridType == FAST or model_hybrid.events[i].hybridType == VITL:
agg_rate = agg_rate + model_exact.events[i].rate
rhs = np.zeros(2*model_exact.dimension+1)
fast = filter(lambda e: e.hybridType == FAST, model_hybrid.events)
for e in fast:
for i in range(model_exact.dimension):
name = model_exact.systemState[i].name
r = list(filter(lambda e: e[0].name == name, e.reactants))
p = list(filter(lambda e: e[0].name == name, e.products))
direction = 0.
if r:
direction = direction - float(r[0][1])
if p:
direction = direction + float(p[0][1])
rhs[i] = rhs[i] + direction*e.rate
rhs[2*model_exact.dimension] = 1.
rhs = rhs/(agg_rate+sample_rate)
return rhs
def rrerhs(t,y,model,sample_rate):
"""rhs of determistic part of equations, i.e. the rhs of reaction rate equations"""
for i in range(model.dimension):
model.systemState[i].value[0] = y[i]
for e in model.events:
e.updaterate()
rhs = np.zeros(model.dimension)
fast = filter(lambda e: e.hybridType == FAST, model.events)
for e in fast:
for i in range(model.dimension):
name = model.systemState[i].name
r = list(filter(lambda e: e[0].name == name, e.reactants))
p = list(filter(lambda e: e[0].name == name, e.products))
direction = 0.
if r:
direction = direction - float(r[0][1])
if p:
direction = direction + float(p[0][1])
rhs[i] = rhs[i]+ direction*e.rate
return rhs
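# Usage note (an assumption, not stated in this file): these right-hand sides take
# extra arguments (model, sample_rate), which matches how scipy.integrate.ode
# (imported above) forwards parameters, e.g.:
#
#   solver = ode(chvrhs_hybrid).set_integrator('lsoda')
#   solver.set_initial_value(y0, t0).set_f_params(model, sample_rate)
#   y1 = solver.integrate(t1)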
|
mit
|
meabsence/python-for-android
|
python-modules/twisted/twisted/persisted/journal/rowjournal.py
|
78
|
3077
|
# Copyright (c) 2001-2004 Twisted Matrix Laboratories.
# See LICENSE for details.
#
"""Journal using twisted.enterprise.row RDBMS support.
You're going to need the following table in your database::
| CREATE TABLE journalinfo
| (
| commandIndex int
| );
| INSERT INTO journalinfo VALUES (0);
"""
from __future__ import nested_scopes
# twisted imports
from twisted.internet import defer
# sibling imports
import base
# constants for command list
INSERT, DELETE, UPDATE = range(3)
class RowJournal(base.Journal):
"""Journal that stores data 'snapshot' in using twisted.enterprise.row.
Use this as the reflector instead of the original reflector.
It may block on creation, if it has to run recovery.
"""
def __init__(self, log, journaledService, reflector):
self.reflector = reflector
self.commands = []
self.syncing = 0
base.Journal.__init__(self, log, journaledService)
def updateRow(self, obj):
"""Mark on object for updating when sync()ing."""
self.commands.append((UPDATE, obj))
def insertRow(self, obj):
"""Mark on object for inserting when sync()ing."""
self.commands.append((INSERT, obj))
def deleteRow(self, obj):
"""Mark on object for deleting when sync()ing."""
self.commands.append((DELETE, obj))
def loadObjectsFrom(self, tableName, parentRow=None, data=None, whereClause=None, forceChildren=0):
"""Flush all objects to the database and then load objects."""
d = self.sync()
d.addCallback(lambda result: self.reflector.loadObjectsFrom(
tableName, parentRow=parentRow, data=data, whereClause=whereClause,
forceChildren=forceChildren))
return d
def sync(self):
"""Commit changes to database."""
if self.syncing:
raise ValueError, "sync already in progress"
comandMap = {INSERT : self.reflector.insertRowSQL,
UPDATE : self.reflector.updateRowSQL,
DELETE : self.reflector.deleteRowSQL}
sqlCommands = []
for kind, obj in self.commands:
sqlCommands.append(comandMap[kind](obj))
self.commands = []
if sqlCommands:
self.syncing = 1
d = self.reflector.dbpool.runInteraction(self._sync, self.latestIndex, sqlCommands)
d.addCallback(self._syncDone)
return d
else:
return defer.succeed(1)
def _sync(self, txn, index, commands):
"""Do the actual database synchronization."""
for c in commands:
txn.execute(c)
txn.execute("UPDATE journalinfo SET commandIndex = %d" % index)
def _syncDone(self, result):
self.syncing = 0
return result
def getLastSnapshot(self):
"""Return command index of last snapshot."""
conn = self.reflector.dbpool.connect()
cursor = conn.cursor()
cursor.execute("SELECT commandIndex FROM journalinfo")
return cursor.fetchall()[0][0]
|
apache-2.0
|
nmfisher/poincare-embeddings
|
scripts/create_mammal_subtree.py
|
2
|
1450
|
from __future__ import print_function, division, unicode_literals, absolute_import
import random
from nltk.corpus import wordnet as wn
import click
def transitive_closure(synsets):
hypernyms = set([])
for s in synsets:
paths = s.hypernym_paths()
for path in paths:
hypernyms.update((s,h) for h in path[1:] if h.pos() == 'n')
return hypernyms
@click.command()
@click.argument('result_file')
@click.option('--shuffle', is_flag=True)
@click.option('--sep', default='\t')
@click.option('--target', default='mammal.n.01')
def main(result_file, shuffle, sep, target):
target = wn.synset(target)
print('target:', target)
words = wn.words()
nouns = set([])
for word in words:
nouns.update(wn.synsets(word, pos='n'))
print( len(nouns), 'nouns')
hypernyms = []
for noun in nouns:
paths = noun.hypernym_paths()
for path in paths:
try:
pos = path.index(target)
for i in range(pos, len(path)-1):
hypernyms.append((noun, path[i]))
except Exception:
continue
hypernyms = list(set(hypernyms))
print( len(hypernyms), 'hypernyms' )
if not shuffle:
random.shuffle(hypernyms)
with open(result_file, 'w') as fout:
for n1, n2 in hypernyms:
print(n1.name(), n2.name(), sep=sep, file=fout)
if __name__ == '__main__':
main()
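# Example invocation (a sketch; the output file name is illustrative):
#   python create_mammal_subtree.py mammal_closure.tsv --target mammal.n.01
# Each output line is "<noun synset><sep><hypernym synset>", e.g.
# "dog.n.01<TAB>canine.n.02".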
|
mit
|
pjdelport/django
|
django/contrib/messages/tests/fallback.py
|
199
|
6978
|
from django.contrib.messages import constants
from django.contrib.messages.storage.fallback import (FallbackStorage,
CookieStorage)
from django.contrib.messages.tests.base import BaseTest
from django.contrib.messages.tests.cookie import (set_cookie_data,
stored_cookie_messages_count)
from django.contrib.messages.tests.session import (set_session_data,
stored_session_messages_count)
class FallbackTest(BaseTest):
storage_class = FallbackStorage
def get_request(self):
self.session = {}
request = super(FallbackTest, self).get_request()
request.session = self.session
return request
def get_cookie_storage(self, storage):
return storage.storages[-2]
def get_session_storage(self, storage):
return storage.storages[-1]
def stored_cookie_messages_count(self, storage, response):
return stored_cookie_messages_count(self.get_cookie_storage(storage),
response)
def stored_session_messages_count(self, storage, response):
return stored_session_messages_count(self.get_session_storage(storage))
def stored_messages_count(self, storage, response):
"""
Return the storage totals from both cookie and session backends.
"""
total = (self.stored_cookie_messages_count(storage, response) +
self.stored_session_messages_count(storage, response))
return total
def test_get(self):
request = self.get_request()
storage = self.storage_class(request)
cookie_storage = self.get_cookie_storage(storage)
# Set initial cookie data.
example_messages = [str(i) for i in range(5)]
set_cookie_data(cookie_storage, example_messages)
# Overwrite the _get method of the fallback storage to prove it is not
# used (it would cause a TypeError: 'NoneType' object is not callable).
self.get_session_storage(storage)._get = None
# Test that the message actually contains what we expect.
self.assertEqual(list(storage), example_messages)
def test_get_empty(self):
request = self.get_request()
storage = self.storage_class(request)
# Overwrite the _get method of the fallback storage to prove it is not
# used (it would cause a TypeError: 'NoneType' object is not callable).
self.get_session_storage(storage)._get = None
# Test that the message actually contains what we expect.
self.assertEqual(list(storage), [])
def test_get_fallback(self):
request = self.get_request()
storage = self.storage_class(request)
cookie_storage = self.get_cookie_storage(storage)
session_storage = self.get_session_storage(storage)
# Set initial cookie and session data.
example_messages = [str(i) for i in range(5)]
set_cookie_data(cookie_storage, example_messages[:4] +
[CookieStorage.not_finished])
set_session_data(session_storage, example_messages[4:])
# Test that the message actually contains what we expect.
self.assertEqual(list(storage), example_messages)
def test_get_fallback_only(self):
request = self.get_request()
storage = self.storage_class(request)
cookie_storage = self.get_cookie_storage(storage)
session_storage = self.get_session_storage(storage)
# Set initial cookie and session data.
example_messages = [str(i) for i in range(5)]
set_cookie_data(cookie_storage, [CookieStorage.not_finished],
encode_empty=True)
set_session_data(session_storage, example_messages)
# Test that the message actually contains what we expect.
self.assertEqual(list(storage), example_messages)
def test_flush_used_backends(self):
request = self.get_request()
storage = self.storage_class(request)
cookie_storage = self.get_cookie_storage(storage)
session_storage = self.get_session_storage(storage)
# Set initial cookie and session data.
set_cookie_data(cookie_storage, ['cookie', CookieStorage.not_finished])
set_session_data(session_storage, ['session'])
# When updating, previously used but no longer needed backends are
# flushed.
response = self.get_response()
list(storage)
storage.update(response)
session_storing = self.stored_session_messages_count(storage, response)
self.assertEqual(session_storing, 0)
def test_no_fallback(self):
"""
Confirms that:
(1) A short number of messages whose data size doesn't exceed what is
allowed in a cookie will all be stored in the CookieBackend.
(2) If the CookieBackend can store all messages, the SessionBackend
won't be written to at all.
"""
storage = self.get_storage()
response = self.get_response()
# Overwrite the _store method of the fallback storage to prove it isn't
# used (it would cause a TypeError: 'NoneType' object is not callable).
self.get_session_storage(storage)._store = None
for i in range(5):
storage.add(constants.INFO, str(i) * 100)
storage.update(response)
cookie_storing = self.stored_cookie_messages_count(storage, response)
self.assertEqual(cookie_storing, 5)
session_storing = self.stored_session_messages_count(storage, response)
self.assertEqual(session_storing, 0)
def test_session_fallback(self):
"""
Confirms that, if the data exceeds what is allowed in a cookie,
messages which did not fit are stored in the SessionBackend.
"""
storage = self.get_storage()
response = self.get_response()
# see comment in CookieTest.test_cookie_max_length
msg_size = int((CookieStorage.max_cookie_size - 54) / 4.5 - 37)
for i in range(5):
storage.add(constants.INFO, str(i) * msg_size)
storage.update(response)
cookie_storing = self.stored_cookie_messages_count(storage, response)
self.assertEqual(cookie_storing, 4)
session_storing = self.stored_session_messages_count(storage, response)
self.assertEqual(session_storing, 1)
def test_session_fallback_only(self):
"""
Confirms that large messages, none of which fit in a cookie, are stored
in the SessionBackend (and nothing is stored in the CookieBackend).
"""
storage = self.get_storage()
response = self.get_response()
storage.add(constants.INFO, 'x' * 5000)
storage.update(response)
cookie_storing = self.stored_cookie_messages_count(storage, response)
self.assertEqual(cookie_storing, 0)
session_storing = self.stored_session_messages_count(storage, response)
self.assertEqual(session_storing, 1)
|
bsd-3-clause
|
bbossola/robotframework-selenium2library
|
src/Selenium2Library/keywords/_javascript.py
|
61
|
6560
|
import os
from selenium.common.exceptions import WebDriverException
from keywordgroup import KeywordGroup
class _JavaScriptKeywords(KeywordGroup):
def __init__(self):
self._cancel_on_next_confirmation = False
# Public
def alert_should_be_present(self, text=''):
"""Verifies an alert is present and dismisses it.
If `text` is a non-empty string, then it is also verified that the
message of the alert equals to `text`.
Will fail if no alert is present. Note that following keywords
will fail unless the alert is dismissed by this
keyword or another like `Get Alert Message`.
"""
alert_text = self.get_alert_message()
if text and alert_text != text:
raise AssertionError("Alert text should have been '%s' but was '%s'"
% (text, alert_text))
def choose_cancel_on_next_confirmation(self):
"""Cancel will be selected the next time `Confirm Action` is used."""
self._cancel_on_next_confirmation = True
def choose_ok_on_next_confirmation(self):
"""Undo the effect of using keywords `Choose Cancel On Next Confirmation`. Note
that Selenium's overridden window.confirm() function will normally automatically
return true, as if the user had manually clicked OK, so you shouldn't
need to use this command unless for some reason you need to change
your mind prior to the next confirmation. After any confirmation, Selenium will resume using the
default behavior for future confirmations, automatically returning
true (OK) unless/until you explicitly use `Choose Cancel On Next Confirmation` for each
confirmation.
Note that every time a confirmation comes up, you must
consume it by using a keyword such as `Get Alert Message`, or else
the following selenium operations will fail.
"""
self._cancel_on_next_confirmation = False
def confirm_action(self):
"""Dismisses currently shown confirmation dialog and returns it's message.
By default, this keyword chooses 'OK' option from the dialog. If
'Cancel' needs to be chosen, keyword `Choose Cancel On Next
Confirmation` must be called before the action that causes the
confirmation dialog to be shown.
Examples:
| Click Button | Send | # Shows a confirmation dialog |
| ${message}= | Confirm Action | # Chooses Ok |
        | Should Be Equal | ${message} | Are you sure? |
| | | |
| Choose Cancel On Next Confirmation | | |
| Click Button | Send | # Shows a confirmation dialog |
| Confirm Action | | # Chooses Cancel |
"""
text = self._close_alert(not self._cancel_on_next_confirmation)
self._cancel_on_next_confirmation = False
return text
def execute_javascript(self, *code):
"""Executes the given JavaScript code.
`code` may contain multiple lines of code but must contain a
return statement (with the value to be returned) at the end.
`code` may be divided into multiple cells in the test data. In that
        case, the parts are concatenated together without adding spaces.
If `code` is an absolute path to an existing file, the JavaScript
to execute will be read from that file. Forward slashes work as
a path separator on all operating systems.
Note that, by default, the code will be executed in the context of the
Selenium object itself, so `this` will refer to the Selenium object.
Use `window` to refer to the window of your application, e.g.
`window.document.getElementById('foo')`.
Example:
| Execute JavaScript | window.my_js_function('arg1', 'arg2') |
| Execute JavaScript | ${CURDIR}/js_to_execute.js |
"""
js = self._get_javascript_to_execute(''.join(code))
self._info("Executing JavaScript:\n%s" % js)
return self._current_browser().execute_script(js)
def execute_async_javascript(self, *code):
"""Executes asynchronous JavaScript code.
`code` may contain multiple lines of code but must contain a
return statement (with the value to be returned) at the end.
`code` may be divided into multiple cells in the test data. In that
        case, the parts are concatenated together without adding spaces.
If `code` is an absolute path to an existing file, the JavaScript
to execute will be read from that file. Forward slashes work as
a path separator on all operating systems.
Note that, by default, the code will be executed in the context of the
Selenium object itself, so `this` will refer to the Selenium object.
Use `window` to refer to the window of your application, e.g.
`window.document.getElementById('foo')`.
Example:
| Execute Async JavaScript | window.my_js_function('arg1', 'arg2') |
| Execute Async JavaScript | ${CURDIR}/js_to_execute.js |
"""
js = self._get_javascript_to_execute(''.join(code))
self._info("Executing Asynchronous JavaScript:\n%s" % js)
return self._current_browser().execute_async_script(js)
def get_alert_message(self):
"""Returns the text of current JavaScript alert.
This keyword will fail if no alert is present. Note that
following keywords will fail unless the alert is
dismissed by this keyword or another like `Get Alert Message`.
"""
return self._close_alert()
# Private
def _close_alert(self, confirm=False):
alert = None
try:
alert = self._current_browser().switch_to_alert()
            text = ' '.join(alert.text.splitlines())  # collapse newline chars
            if not confirm:
                alert.dismiss()
            else:
                alert.accept()
return text
except WebDriverException:
raise RuntimeError('There were no alerts')
def _get_javascript_to_execute(self, code):
codepath = code.replace('/', os.sep)
if not (os.path.isabs(codepath) and os.path.isfile(codepath)):
return code
self._html('Reading JavaScript from file <a href="file://%s">%s</a>.'
% (codepath.replace(os.sep, '/'), codepath))
codefile = open(codepath)
try:
return codefile.read().strip()
finally:
codefile.close()
|
apache-2.0
|
jacquesqiao/Paddle
|
python/paddle/batch.py
|
6
|
1574
|
# Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
__all__ = ['batch']
def batch(reader, batch_size, drop_last=False):
"""
Create a batched reader.
:param reader: the data reader to read from.
:type reader: callable
:param batch_size: size of each mini-batch
:type batch_size: int
    :param drop_last: whether to drop the last batch if its size is not equal to batch_size.
:type drop_last: bool
:return: the batched reader.
:rtype: callable
"""
def batch_reader():
r = reader()
b = []
for instance in r:
b.append(instance)
if len(b) == batch_size:
yield b
b = []
        if not drop_last and len(b) != 0:
yield b
# Batch size check
batch_size = int(batch_size)
if batch_size <= 0:
raise ValueError("batch_size should be a positive integeral value, "
"but got batch_size={}".format(batch_size))
return batch_reader
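# Minimal usage sketch (added for illustration; not part of the original
# module). `batch` wraps a zero-argument reader creator and groups the items
# it yields into lists of `batch_size`; with drop_last=False a smaller
# trailing batch is kept. `example_reader` below is a hypothetical reader.
if __name__ == '__main__':
    def example_reader():
        # a toy reader creator yielding the integers 0..9
        for i in range(10):
            yield i
    batched_reader = batch(example_reader, batch_size=4)
    for mini_batch in batched_reader():
        print(mini_batch)  # [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]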
|
apache-2.0
|
IONISx/edx-platform
|
openedx/core/djangoapps/content/course_overviews/migrations/0004_default_lowest_passing_grade_to_None.py
|
62
|
3157
|
# -*- coding: utf-8 -*-
from south.utils import datetime_utils as datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Changing field 'CourseOverview.lowest_passing_grade'
db.alter_column('course_overviews_courseoverview', 'lowest_passing_grade', self.gf('django.db.models.fields.DecimalField')(null=True, max_digits=5, decimal_places=2))
def backwards(self, orm):
# Changing field 'CourseOverview.lowest_passing_grade'
db.alter_column('course_overviews_courseoverview', 'lowest_passing_grade', self.gf('django.db.models.fields.DecimalField')(default=0.5, max_digits=5, decimal_places=2))
models = {
'course_overviews.courseoverview': {
'Meta': {'object_name': 'CourseOverview'},
'_location': ('xmodule_django.models.UsageKeyField', [], {'max_length': '255'}),
'_pre_requisite_courses_json': ('django.db.models.fields.TextField', [], {}),
'advertised_start': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'cert_html_view_enabled': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'cert_name_long': ('django.db.models.fields.TextField', [], {}),
'cert_name_short': ('django.db.models.fields.TextField', [], {}),
'certificates_display_behavior': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'certificates_show_before_end': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'course_image_url': ('django.db.models.fields.TextField', [], {}),
'days_early_for_beta': ('django.db.models.fields.FloatField', [], {'null': 'True'}),
'display_name': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'display_number_with_default': ('django.db.models.fields.TextField', [], {}),
'display_org_with_default': ('django.db.models.fields.TextField', [], {}),
'end': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'end_of_course_survey_url': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'facebook_url': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'has_any_active_web_certificate': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('xmodule_django.models.CourseKeyField', [], {'max_length': '255', 'primary_key': 'True', 'db_index': 'True'}),
'lowest_passing_grade': ('django.db.models.fields.DecimalField', [], {'null': 'True', 'max_digits': '5', 'decimal_places': '2'}),
'mobile_available': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'social_sharing_url': ('django.db.models.fields.TextField', [], {'null': 'True'}),
'start': ('django.db.models.fields.DateTimeField', [], {'null': 'True'}),
'visible_to_staff_only': ('django.db.models.fields.BooleanField', [], {'default': 'False'})
}
}
complete_apps = ['course_overviews']
|
agpl-3.0
|
nwjs/chromium.src
|
components/feed/tools/mockserver_textpb_to_binary.py
|
3
|
2151
|
#!/usr/bin/python3
# Copyright 2020 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Lint as: python3
"""The tool converts a textpb into a binary proto using chromium protoc binary.
After converting a feed response textpb file into a mockserver textpb file using
the proto_convertor script, an engineer runs this script to encode the
mockserver textpb file into the binary proto file used by the feed card render
test (refer to go/create-a-feed-card-render-test for more).
Make sure you have absl-py installed via 'python3 -m pip install absl-py'.
Usage example:
python3 ./mockserver_textpb_to_binary.py
--chromium_path ~/chromium/src
--output_file /tmp/binary.pb
--source_file /tmp/original.textpb
--alsologtostderr
"""
import glob
import os
import protoc_util
import subprocess
from absl import app
from absl import flags
FLAGS = flags.FLAGS
flags.DEFINE_string('chromium_path', '', 'The path of your chromium depot.')
flags.DEFINE_string('output_file', '', 'The target output binary file path.')
flags.DEFINE_string('source_file', '',
'The source proto file, in textpb format, path.')
ENCODE_NAMESPACE = 'components.feed.core.proto.wire.mockserver.MockServer'
COMPONENT_FEED_PROTO_PATH = 'components/feed/core/proto'
def main(argv):
if len(argv) > 1:
raise app.UsageError('Too many command-line arguments.')
if not FLAGS.chromium_path:
raise app.UsageError('chromium_path flag must be set.')
if not FLAGS.source_file:
raise app.UsageError('source_file flag must be set.')
if not FLAGS.output_file:
raise app.UsageError('output_file flag must be set.')
with open(FLAGS.source_file) as file:
value_text_proto = file.read()
encoded = protoc_util.encode_proto(value_text_proto, ENCODE_NAMESPACE,
FLAGS.chromium_path,
COMPONENT_FEED_PROTO_PATH)
with open(FLAGS.output_file, 'wb') as file:
file.write(encoded)
if __name__ == '__main__':
app.run(main)
|
bsd-3-clause
|
xorpaul/shinken
|
libexec/discovery/cluster_discovery_runner.py
|
20
|
4633
|
#!/usr/bin/env python
# Copyright (C) 2009-2012:
# Camille, VACQUIE
# Romain, FORLOT, [email protected]
#
# This file is part of Shinken.
#
# Shinken is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Shinken is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with Shinken. If not, see <http://www.gnu.org/licenses/>.
#
###############################################################
#
# The cluster_discovery_runner.py script simply tries to get information
# from the HACMP MIB and falls back to the Safekit MIB. SNMP must be
# enabled for both products. For Safekit, add a proxy to the snmpd
# configuration so that its MIB is included in the net-snmp master agent.
#
# For SNMPv3 we created a default user using the command :
# net-snmp-config --create-snmpv3-user -a "mypassword" myuser
# Here the user name is myuser and his password is mypassword
#
###############################################################
### modules import
import netsnmp
import optparse
import re
##########
# menu #
##########
parser = optparse.OptionParser('%prog [options] -H HOSTADRESS -C SNMPCOMMUNITYREAD -O ARG1 -V SNMPVERSION -l SNMPSECNAME -L SNMPSECLEVEL -p SNMPAUTHPROTO -x SNMPAUTHPASS')
# user name and password are defined in /var/lib/net-snmp/snmpd.conf
parser.add_option("-H", "--hostname", dest="hostname", help="Hostname to scan")
parser.add_option("-C", "--community", dest="community", help="Community to scan (default:public)")
parser.add_option("-O", "--os", dest="os", help="OS from scanned host")
parser.add_option("-V", "--version", dest="version", type=int, help="Version number for SNMP (1, 2 or 3; default:1)")
parser.add_option("-l", "--login", dest="snmpv3_user", help="User name for snmpv3(default:admin)")
parser.add_option("-L", "--level", dest="snmpv3_level", help="Security level for snmpv3(default:authNoPriv)")
parser.add_option("-p", "--authproto", dest="snmpv3_auth", help="Authentication protocol for snmpv3(default:MD5)")
parser.add_option("-x", "--authpass", dest="snmpv3_auth_pass", help="Authentication password for snmpv3(default:monpassword)")
opts, args = parser.parse_args()
hostname = opts.hostname
os = opts.os
clSolution_by_os = { 'aix' : 'hacmp',
'linux': 'safekit',
}
if not opts.hostname:
parser.error("Requires one host and its os to scan (option -H)")
if not opts.os:
parser.error("Requires the os host(option -O)")
if opts.community:
community = opts.community
else:
community = 'public'
if opts.version:
version = opts.version
else:
version = 1
if opts.snmpv3_user:
snmpv3_user = opts.snmpv3_user
else:
snmpv3_user = 'myuser'
if opts.snmpv3_level:
snmpv3_level = opts.snmpv3_level
else:
snmpv3_level = 'authNoPriv'
if opts.snmpv3_auth:
snmpv3_auth = opts.snmpv3_auth
else:
snmpv3_auth = 'MD5'
if opts.snmpv3_auth_pass:
snmpv3_auth_pass = opts.snmpv3_auth_pass
else:
snmpv3_auth_pass = 'mypassword'
oid_safekit_moduleName = ".1.3.6.1.4.1.107.175.10.1.1.2"
oid_hacmp_clusterName = ".1.3.6.1.4.1.2.3.1.2.1.5.1.2"
##############
# functions #
##############
### Search for cluster solution, between safekit or hacmp, presents on the target
def get_cluster_discovery(oid):
name= netsnmp.Varbind(oid)
result = netsnmp.snmpwalk(name, Version=version, DestHost=hostname, Community=community, SecName=snmpv3_user, SecLevel=snmpv3_level, AuthProto=snmpv3_auth, AuthPass=snmpv3_auth_pass)
nameList = list(result)
return nameList
### format the modules list and display them on the standard output
def get_cluster_discovery_output(module_names):
    names = []
    if module_names:
        for elt in module_names:
            names.append(elt)
        print "%s::%s=1" % (hostname, clSolution)  # To add tag
        print "%s::_%s_modules=%s" % (hostname, clSolution, ','.join(names))  # Host macros by Safekit modules
    else:
        print "%s::%s=0" % (hostname, clSolution)  # No cluster detected
###############
# execution #
###############
scan = []
clSolution = clSolution_by_os[os]
scan = get_cluster_discovery(oid_hacmp_clusterName)
if not scan:
scan = get_cluster_discovery(oid_safekit_moduleName)
clSolution = 'safekit'
get_cluster_discovery_output(scan)
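# Illustrative invocation (added for clarity; the host name and credentials
# below are placeholders, not values from the original script):
#
#   ./cluster_discovery_runner.py -H aixhost.example.com -O aix -V 3 \
#       -l myuser -L authNoPriv -p MD5 -x mypassword
#
# Expected output is either "hostname::<solution>=1" plus a
# "hostname::_<solution>_modules=..." macro line when a cluster solution
# answers, or "hostname::<solution>=0" when neither MIB responds.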
|
agpl-3.0
|
toshywoshy/ansible
|
lib/ansible/modules/cloud/ovirt/ovirt_storage_domain_info.py
|
20
|
4336
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016 Red Hat, Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: ovirt_storage_domain_info
short_description: Retrieve information about one or more oVirt/RHV storage domains
author: "Ondra Machacek (@machacekondra)"
version_added: "2.3"
description:
- "Retrieve information about one or more oVirt/RHV storage domains."
- This module was called C(ovirt_storage_domain_facts) before Ansible 2.9, returning C(ansible_facts).
Note that the M(ovirt_storage_domain_info) module no longer returns C(ansible_facts)!
notes:
- "This module returns a variable C(ovirt_storage_domains), which
contains a list of storage domains. You need to register the result with
the I(register) keyword to use it."
options:
pattern:
description:
- "Search term which is accepted by oVirt/RHV search backend."
- "For example to search storage domain X from datacenter Y use following pattern:
name=X and datacenter=Y"
extends_documentation_fragment: ovirt_info
'''
EXAMPLES = '''
# Examples don't contain auth parameter for simplicity,
# look at ovirt_auth module to see how to reuse authentication:
# Gather information about all storage domains which names start with C(data) and
# belong to data center C(west):
- ovirt_storage_domain_info:
pattern: name=data* and datacenter=west
register: result
- debug:
msg: "{{ result.ovirt_storage_domains }}"
'''
RETURN = '''
ovirt_storage_domains:
description: "List of dictionaries describing the storage domains. Storage_domain attributes are mapped to dictionary keys,
all storage domains attributes can be found at following url: http://ovirt.github.io/ovirt-engine-api-model/master/#types/storage_domain."
returned: On success.
type: list
'''
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.ovirt import (
check_sdk,
create_connection,
get_dict_of_struct,
ovirt_info_full_argument_spec,
)
def main():
argument_spec = ovirt_info_full_argument_spec(
pattern=dict(default='', required=False),
)
module = AnsibleModule(argument_spec)
is_old_facts = module._name == 'ovirt_storage_domain_facts'
if is_old_facts:
module.deprecate("The 'ovirt_storage_domain_facts' module has been renamed to 'ovirt_storage_domain_info', "
"and the renamed one no longer returns ansible_facts", version='2.13')
check_sdk(module)
try:
auth = module.params.pop('auth')
connection = create_connection(auth)
storage_domains_service = connection.system_service().storage_domains_service()
storage_domains = storage_domains_service.list(search=module.params['pattern'])
result = dict(
ovirt_storage_domains=[
get_dict_of_struct(
struct=c,
connection=connection,
fetch_nested=module.params.get('fetch_nested'),
attributes=module.params.get('nested_attributes'),
) for c in storage_domains
],
)
if is_old_facts:
module.exit_json(changed=False, ansible_facts=result)
else:
module.exit_json(changed=False, **result)
except Exception as e:
module.fail_json(msg=str(e), exception=traceback.format_exc())
finally:
connection.close(logout=auth.get('token') is None)
if __name__ == '__main__':
main()
|
gpl-3.0
|
pklimai/py-junos-eznc
|
tests/unit/test_rpcmeta.py
|
2
|
10679
|
import unittest
import os
import re
from nose.plugins.attrib import attr
from jnpr.junos.device import Device
from jnpr.junos.rpcmeta import _RpcMetaExec
from jnpr.junos.facts.swver import version_info
from ncclient.manager import Manager, make_device_handler
from ncclient.transport import SSHSession
from jnpr.junos.exception import JSONLoadError
from mock import patch, MagicMock, call
from lxml import etree
__author__ = "Nitin Kumar, Rick Sherman"
__credits__ = "Jeremy Schulman"
@attr('unit')
class Test_RpcMetaExec(unittest.TestCase):
@patch('ncclient.manager.connect')
def setUp(self, mock_connect):
mock_connect.side_effect = self._mock_manager
self.dev = Device(host='1.1.1.1', user='rick', password='password123',
gather_facts=False)
self.dev.open()
self.rpc = _RpcMetaExec(self.dev)
def test_rpcmeta_constructor(self):
self.assertTrue(isinstance(self.rpc._junos, Device))
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_load_config(self, mock_execute_fn):
root = etree.XML('<root><a>test</a></root>')
self.rpc.load_config(root)
self.assertEqual(mock_execute_fn.call_args[0][0].tag,
'load-configuration')
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_load_config_with_configuration_tag(self, mock_execute_fn):
root = etree.XML(
'<configuration><root><a>test</a></root></configuration>')
self.rpc.load_config(root)
self.assertEqual(mock_execute_fn.call_args[0][0].tag,
'load-configuration')
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_load_config_option_action(self, mock_execute_fn):
set_commands = """
set system host-name test_rpc
set system domain-name test.juniper.net
"""
self.rpc.load_config(set_commands, action='set')
self.assertEqual(mock_execute_fn.call_args[0][0].get('action'),
'set')
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_option_format(self, mock_execute_fn):
set_commands = """
set system host-name test_rpc
set system domain-name test.juniper.net
"""
self.rpc.load_config(set_commands, format='text')
self.assertEqual(mock_execute_fn.call_args[0][0].get('format'),
'text')
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_option_format_json(self, mock_execute_fn):
json_commands = """
{
"configuration" : {
"system" : {
"services" : {
"telnet" : [null]
}
}
}
}
"""
self.rpc.load_config(json_commands, format='json')
self.assertEqual(mock_execute_fn.call_args[0][0].get('format'),
'json')
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_exec_rpc_vargs(self, mock_execute_fn):
self.rpc.system_users_information(dict(format='text'))
self.assertEqual(mock_execute_fn.call_args[0][0].get('format'),
'text')
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_exec_rpc_kvargs_bool_true(self, mock_execute_fn):
self.rpc.system_users_information(test=True)
self.assertEqual(mock_execute_fn.call_args[0][0][0].tag,
'test')
self.assertEqual(mock_execute_fn.call_args[0][0][0].text,
None)
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_exec_rpc_kvargs_bool_False(self, mock_execute_fn):
self.rpc.system_users_information(test=False)
self.assertEqual(mock_execute_fn.call_args[0][0].find('test'),
None)
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_exec_rpc_kvargs_tuple(self, mock_execute_fn):
self.rpc.system_users_information(set_data=('test', 'foo'))
self.assertEqual(mock_execute_fn.call_args[0][0][0].text,
'test')
self.assertEqual(mock_execute_fn.call_args[0][0][1].text,
'foo')
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_exec_rpc_kvargs_dict(self, mock_execute_fn):
self.assertRaises(TypeError,
self.rpc.system_users_information,
dict_data={'test': 'foo'})
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_exec_rpc_kvargs_list_with_dict(self, mock_execute_fn):
self.assertRaises(TypeError,
self.rpc.system_users_information,
list_with_dict_data=[True, {'test': 'foo'}])
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_exec_rpc_normalize(self, mock_execute_fn):
self.rpc.any_ole_rpc(normalize=True)
self.assertEqual(mock_execute_fn.call_args[1], {'normalize': True})
@patch('jnpr.junos.device.Device.execute')
def test_rpcmeta_get_config(self, mock_execute_fn):
root = etree.XML('<root><a>test</a></root>')
self.rpc.get_config(root)
self.assertEqual(mock_execute_fn.call_args[0][0].tag,
'get-configuration')
def test_rpcmeta_exec_rpc_format_json_14_2(self):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
self.dev.facts._cache['version_info'] = version_info('14.2X46-D15.3')
op = self.rpc.get_system_users_information(dict(format='json'))
self.assertEqual(op['system-users-information'][0]
['uptime-information'][0]['date-time'][0]['data'],
u'4:43AM')
def test_rpcmeta_exec_rpc_format_json_gt_14_2(self):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
self.dev.facts._cache['version_info'] = version_info('15.1X46-D15.3')
op = self.rpc.get_system_users_information(dict(format='json'))
self.assertEqual(op['system-users-information'][0]
['uptime-information'][0]['date-time'][0]['data'],
u'4:43AM')
@patch('jnpr.junos.device.warnings')
def test_rpcmeta_exec_rpc_format_json_lt_14_2(self, mock_warn):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
self.dev.facts._cache['version_info'] = version_info('13.1X46-D15.3')
self.rpc.get_system_users_information(dict(format='json'))
mock_warn.assert_has_calls([call.warn(
'Native JSON support is only from 14.2 onwards', RuntimeWarning)])
def test_get_rpc(self):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
resp = self.dev.rpc.get(filter_select='bgp')
self.assertEqual(resp.tag, 'data')
def test_get_config_filter_xml_string_xml(self):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
resp = self.dev.rpc.get_config(
filter_xml='<system><services/></system>')
self.assertEqual(resp.tag, 'configuration')
def test_get_config_filter_xml_string(self):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
resp = self.dev.rpc.get_config(filter_xml='system/services')
self.assertEqual(resp.tag, 'configuration')
def test_get_config_filter_xml_model(self):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
resp = self.dev.rpc.get_config('bgp/neighbors', model='openconfig')
self.assertEqual(resp.tag, 'bgp')
def test_get_rpc_ignore_warning_bool(self):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
resp = self.dev.rpc.get(ignore_warning=True)
self.assertEqual(resp.tag, 'data')
def test_get_rpc_ignore_warning_str(self):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
resp = self.dev.rpc.get(ignore_warning='vrrp subsystem not running')
self.assertEqual(resp.tag, 'data')
def test_get_rpc_ignore_warning_list(self):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
resp = self.dev.rpc.get(ignore_warning=['vrrp subsystem not running',
'statement not found'])
self.assertEqual(resp.tag, 'data')
    # the test below needs to be fixed for Python 3.x
"""
def test_get_config_remove_ns(self):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
resp = self.dev.rpc.get_config('bgp/neighbors', model='openconfig',
remove_ns=False)
self.assertEqual(resp.tag, '{http://openconfig.net/yang/bgp}bgp')
"""
#
def test_model_true(self):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
data = self.dev.rpc.get_config(model=True)
self.assertEqual(data.tag, 'data')
def test_get_config_format_json_JSONLoadError_with_line(self):
self.dev._conn.rpc = MagicMock(side_effect=self._mock_manager)
self.dev.facts._cache['version_info'] = version_info('15.1X46-D15.3')
try:
self.dev.rpc.get_config(options={'format': 'json'})
except JSONLoadError as ex:
self.assertTrue(re.search(
"Expecting \'?,\'? delimiter: line 17 column 39 \(char 516\)",
ex.ex_msg) is not None)
def _mock_manager(self, *args, **kwargs):
if kwargs:
if 'normalize' in kwargs and args:
return self._read_file(args[0].tag + '.xml')
device_params = kwargs['device_params']
device_handler = make_device_handler(device_params)
session = SSHSession(device_handler)
return Manager(session, device_handler)
if args:
if len(args[0]) > 0 and args[0][0].tag == 'bgp':
return self._read_file(args[0].tag + '_bgp_openconfig.xml')
elif (args[0].attrib.get('format') ==
'json' and args[0].tag == 'get-configuration'):
return self._read_file(args[0].tag + '.json')
return self._read_file(args[0].tag + '.xml')
def _read_file(self, fname):
from ncclient.xml_ import NCElement
fpath = os.path.join(os.path.dirname(__file__),
'rpc-reply', fname)
with open(fpath) as fp:
foo = fp.read()
return NCElement(foo,
self.dev._conn._device_handler.transform_reply())
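# Illustrative sketch (added for clarity; not one of the original tests): in
# application code the RPC metaprogramming exercised here is reached through a
# Device instance, e.g. (host and credentials are placeholders):
#
#   from jnpr.junos import Device
#   dev = Device(host='router.example.com', user='rick', password='secret')
#   dev.open()
#   users = dev.rpc.get_system_users_information({'format': 'text'})
#   dev.close()
#
# Attribute access on dev.rpc is resolved by _RpcMetaExec into the matching
# <get-system-users-information/> RPC, which is what the mocks above stub out.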
|
apache-2.0
|
yapengsong/ovirt-engine
|
packaging/setup/plugins/ovirt-engine-common/eayunos-version/version.py
|
4
|
2554
|
"""EayunOS Version plugin."""
import os
from otopi import plugin, util
from ovirt_engine_setup import constants as osetupcons
from ovirt_engine_setup.engine import constants as oenginecons
from ovirt_engine_setup.engine_common import constants as oengcommcons
@util.export
class Plugin(plugin.PluginBase):
"""EayunOS Version plugin."""
def __init__(self, context):
super(Plugin, self).__init__(context=context)
@plugin.event(
stage=plugin.Stages.STAGE_MISC,
after=(
oengcommcons.Stages.DB_CONNECTION_AVAILABLE,
),
)
def _customization(self):
version = self.environment.get(
oenginecons.ConfigEnv.EAYUNOS_VERSION
)
if version == 'Enterprise':
self.enterprise_version_setup()
self.dialog.note(text="EayunOS version: Enterprise")
def enterprise_version_setup(self):
# update ovirt-engine files
os.system("sed -i 's/4\.2 Basic/4\.2 Enterprise/' /usr/share/ovirt-engine/branding/ovirt.brand/messages.properties")
os.system("sed -i 's/\\\u57FA\\\u7840\\\u7248/\\\u4f01\\\u4e1a\\\u7248/g' /usr/share/ovirt-engine/branding/ovirt.brand/messages_zh_CN.properties")
os.system("sed -i 's/EayunOS_top_logo_basic\.png/EayunOS_top_logo_enterprise\.png/' /usr/share/ovirt-engine/branding/ovirt.brand/common.css")
os.system("sudo -u postgres psql -d engine -c \"select fn_db_add_config_value('EayunOSVersion','Enterprise','general');\"")
os.system("sudo -u postgres psql -d engine -c \"select fn_db_update_config_value('EayunOSVersion','Enterprise','general');\"")
# make product uuid readable
os.system("echo \"#! /bin/bash\" > /etc/init.d/systemuuid")
os.system("echo \"# chkconfig: 2345 10 90\" >> /etc/init.d/systemuuid")
os.system("echo \"chmod a+r /sys/class/dmi/id/product_uuid\" >> /etc/init.d/systemuuid")
os.system("chmod a+x /etc/init.d/systemuuid")
os.system("chkconfig systemuuid on")
os.system("chmod a+r /sys/class/dmi/id/product_uuid")
@plugin.event(
stage=plugin.Stages.STAGE_MISC,
after=(
oengcommcons.Stages.DB_CONNECTION_AVAILABLE,
),
condition=lambda self: (
not self.environment[oenginecons.EngineDBEnv.NEW_DATABASE]
),
)
def _setup_installation_time(self):
os.system("sudo -u postgres psql -d engine -c \"select fn_db_add_config_value('InstallationTime',to_char(current_timestamp,'yyyy-MM-dd HH24:mm:ss'),'general');\"")
|
apache-2.0
|
Widiot/simpleblog
|
venv/lib/python3.5/site-packages/pygments/lexers/shell.py
|
25
|
31426
|
# -*- coding: utf-8 -*-
"""
pygments.lexers.shell
~~~~~~~~~~~~~~~~~~~~~
Lexers for various shells.
:copyright: Copyright 2006-2017 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
import re
from pygments.lexer import Lexer, RegexLexer, do_insertions, bygroups, \
include, default, this, using, words
from pygments.token import Punctuation, \
Text, Comment, Operator, Keyword, Name, String, Number, Generic
from pygments.util import shebang_matches
__all__ = ['BashLexer', 'BashSessionLexer', 'TcshLexer', 'BatchLexer',
'MSDOSSessionLexer', 'PowerShellLexer',
'PowerShellSessionLexer', 'TcshSessionLexer', 'FishShellLexer']
line_re = re.compile('.*?\n')
class BashLexer(RegexLexer):
"""
Lexer for (ba|k|z|)sh shell scripts.
.. versionadded:: 0.6
"""
name = 'Bash'
aliases = ['bash', 'sh', 'ksh', 'zsh', 'shell']
filenames = ['*.sh', '*.ksh', '*.bash', '*.ebuild', '*.eclass',
'*.exheres-0', '*.exlib', '*.zsh',
'.bashrc', 'bashrc', '.bash_*', 'bash_*', 'zshrc', '.zshrc',
'PKGBUILD']
mimetypes = ['application/x-sh', 'application/x-shellscript']
tokens = {
'root': [
include('basic'),
(r'`', String.Backtick, 'backticks'),
include('data'),
include('interp'),
],
'interp': [
(r'\$\(\(', Keyword, 'math'),
(r'\$\(', Keyword, 'paren'),
(r'\$\{#?', String.Interpol, 'curly'),
(r'\$[a-zA-Z_]\w*', Name.Variable), # user variable
(r'\$(?:\d+|[#$?!_*@-])', Name.Variable), # builtin
(r'\$', Text),
],
'basic': [
(r'\b(if|fi|else|while|do|done|for|then|return|function|case|'
r'select|continue|until|esac|elif)(\s*)\b',
bygroups(Keyword, Text)),
(r'\b(alias|bg|bind|break|builtin|caller|cd|command|compgen|'
r'complete|declare|dirs|disown|echo|enable|eval|exec|exit|'
r'export|false|fc|fg|getopts|hash|help|history|jobs|kill|let|'
r'local|logout|popd|printf|pushd|pwd|read|readonly|set|shift|'
r'shopt|source|suspend|test|time|times|trap|true|type|typeset|'
r'ulimit|umask|unalias|unset|wait)(?=[\s)`])',
Name.Builtin),
(r'\A#!.+\n', Comment.Hashbang),
(r'#.*\n', Comment.Single),
(r'\\[\w\W]', String.Escape),
(r'(\b\w+)(\s*)(\+?=)', bygroups(Name.Variable, Text, Operator)),
(r'[\[\]{}()=]', Operator),
(r'<<<', Operator), # here-string
(r'<<-?\s*(\'?)\\?(\w+)[\w\W]+?\2', String),
(r'&&|\|\|', Operator),
],
'data': [
(r'(?s)\$?"(\\\\|\\[0-7]+|\\.|[^"\\$])*"', String.Double),
(r'"', String.Double, 'string'),
(r"(?s)\$'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single),
(r"(?s)'.*?'", String.Single),
(r';', Punctuation),
(r'&', Punctuation),
(r'\|', Punctuation),
(r'\s+', Text),
(r'\d+\b', Number),
(r'[^=\s\[\]{}()$"\'`\\<&|;]+', Text),
(r'<', Text),
],
'string': [
(r'"', String.Double, '#pop'),
(r'(?s)(\\\\|\\[0-7]+|\\.|[^"\\$])+', String.Double),
include('interp'),
],
'curly': [
(r'\}', String.Interpol, '#pop'),
(r':-', Keyword),
(r'\w+', Name.Variable),
(r'[^}:"\'`$\\]+', Punctuation),
(r':', Punctuation),
include('root'),
],
'paren': [
(r'\)', Keyword, '#pop'),
include('root'),
],
'math': [
(r'\)\)', Keyword, '#pop'),
(r'[-+*/%^|&]|\*\*|\|\|', Operator),
(r'\d+#\d+', Number),
(r'\d+#(?! )', Number),
(r'\d+', Number),
include('root'),
],
'backticks': [
(r'`', String.Backtick, '#pop'),
include('root'),
],
}
def analyse_text(text):
if shebang_matches(text, r'(ba|z|)sh'):
return 1
if text.startswith('$ '):
return 0.2
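# Usage sketch (added for illustration; not part of the original module): the
# lexer above is normally consumed through pygments' high-level API, e.g.
#
#   from pygments import highlight
#   from pygments.formatters import TerminalFormatter
#   print(highlight('for f in *.sh; do echo "$f"; done',
#                   BashLexer(), TerminalFormatter()))
#
# or token by token via BashLexer().get_tokens(code) when only the token
# stream is needed.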
class ShellSessionBaseLexer(Lexer):
"""
Base lexer for simplistic shell sessions.
.. versionadded:: 2.1
"""
def get_tokens_unprocessed(self, text):
innerlexer = self._innerLexerCls(**self.options)
pos = 0
curcode = ''
insertions = []
backslash_continuation = False
for match in line_re.finditer(text):
line = match.group()
m = re.match(self._ps1rgx, line)
if backslash_continuation:
curcode += line
backslash_continuation = curcode.endswith('\\\n')
elif m:
# To support output lexers (say diff output), the output
# needs to be broken by prompts whenever the output lexer
# changes.
if not insertions:
pos = match.start()
insertions.append((len(curcode),
[(0, Generic.Prompt, m.group(1))]))
curcode += m.group(2)
backslash_continuation = curcode.endswith('\\\n')
elif line.startswith(self._ps2):
insertions.append((len(curcode),
[(0, Generic.Prompt, line[:len(self._ps2)])]))
curcode += line[len(self._ps2):]
backslash_continuation = curcode.endswith('\\\n')
else:
if insertions:
toks = innerlexer.get_tokens_unprocessed(curcode)
for i, t, v in do_insertions(insertions, toks):
yield pos+i, t, v
yield match.start(), Generic.Output, line
insertions = []
curcode = ''
if insertions:
for i, t, v in do_insertions(insertions,
innerlexer.get_tokens_unprocessed(curcode)):
yield pos+i, t, v
class BashSessionLexer(ShellSessionBaseLexer):
"""
Lexer for simplistic shell sessions.
.. versionadded:: 1.1
"""
name = 'Bash Session'
aliases = ['console', 'shell-session']
filenames = ['*.sh-session', '*.shell-session']
mimetypes = ['application/x-shell-session', 'application/x-sh-session']
_innerLexerCls = BashLexer
_ps1rgx = \
r'^((?:(?:\[.*?\])|(?:\(\S+\))?(?:| |sh\S*?|\w+\S+[@:]\S+(?:\s+\S+)' \
r'?|\[\S+[@:][^\n]+\].+))\s*[$#%])(.*\n?)'
_ps2 = '>'
class BatchLexer(RegexLexer):
"""
Lexer for the DOS/Windows Batch file format.
.. versionadded:: 0.7
"""
name = 'Batchfile'
aliases = ['bat', 'batch', 'dosbatch', 'winbatch']
filenames = ['*.bat', '*.cmd']
mimetypes = ['application/x-dos-batch']
flags = re.MULTILINE | re.IGNORECASE
_nl = r'\n\x1a'
_punct = r'&<>|'
_ws = r'\t\v\f\r ,;=\xa0'
_space = r'(?:(?:(?:\^[%s])?[%s])+)' % (_nl, _ws)
_keyword_terminator = (r'(?=(?:\^[%s]?)?[%s+./:[\\\]]|[%s%s(])' %
(_nl, _ws, _nl, _punct))
_token_terminator = r'(?=\^?[%s]|[%s%s])' % (_ws, _punct, _nl)
_start_label = r'((?:(?<=^[^:])|^[^:]?)[%s]*)(:)' % _ws
_label = r'(?:(?:[^%s%s%s+:^]|\^[%s]?[\w\W])*)' % (_nl, _punct, _ws, _nl)
_label_compound = (r'(?:(?:[^%s%s%s+:^)]|\^[%s]?[^)])*)' %
(_nl, _punct, _ws, _nl))
_number = r'(?:-?(?:0[0-7]+|0x[\da-f]+|\d+)%s)' % _token_terminator
_opword = r'(?:equ|geq|gtr|leq|lss|neq)'
_string = r'(?:"[^%s"]*(?:"|(?=[%s])))' % (_nl, _nl)
_variable = (r'(?:(?:%%(?:\*|(?:~[a-z]*(?:\$[^:]+:)?)?\d|'
r'[^%%:%s]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:[^%%%s^]|'
r'\^[^%%%s])[^=%s]*=(?:[^%%%s^]|\^[^%%%s])*)?)?%%))|'
r'(?:\^?![^!:%s]+(?::(?:~(?:-?\d+)?(?:,(?:-?\d+)?)?|(?:'
r'[^!%s^]|\^[^!%s])[^=%s]*=(?:[^!%s^]|\^[^!%s])*)?)?\^?!))' %
(_nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl, _nl))
_core_token = r'(?:(?:(?:\^[%s]?)?[^"%s%s%s])+)' % (_nl, _nl, _punct, _ws)
_core_token_compound = r'(?:(?:(?:\^[%s]?)?[^"%s%s%s)])+)' % (_nl, _nl,
_punct, _ws)
_token = r'(?:[%s]+|%s)' % (_punct, _core_token)
_token_compound = r'(?:[%s]+|%s)' % (_punct, _core_token_compound)
_stoken = (r'(?:[%s]+|(?:%s|%s|%s)+)' %
(_punct, _string, _variable, _core_token))
def _make_begin_state(compound, _core_token=_core_token,
_core_token_compound=_core_token_compound,
_keyword_terminator=_keyword_terminator,
_nl=_nl, _punct=_punct, _string=_string,
_space=_space, _start_label=_start_label,
_stoken=_stoken, _token_terminator=_token_terminator,
_variable=_variable, _ws=_ws):
rest = '(?:%s|%s|[^"%%%s%s%s])*' % (_string, _variable, _nl, _punct,
')' if compound else '')
rest_of_line = r'(?:(?:[^%s^]|\^[%s]?[\w\W])*)' % (_nl, _nl)
rest_of_line_compound = r'(?:(?:[^%s^)]|\^[%s]?[^)])*)' % (_nl, _nl)
set_space = r'((?:(?:\^[%s]?)?[^\S\n])*)' % _nl
suffix = ''
if compound:
_keyword_terminator = r'(?:(?=\))|%s)' % _keyword_terminator
_token_terminator = r'(?:(?=\))|%s)' % _token_terminator
suffix = '/compound'
return [
((r'\)', Punctuation, '#pop') if compound else
(r'\)((?=\()|%s)%s' % (_token_terminator, rest_of_line),
Comment.Single)),
(r'(?=%s)' % _start_label, Text, 'follow%s' % suffix),
(_space, using(this, state='text')),
include('redirect%s' % suffix),
(r'[%s]+' % _nl, Text),
(r'\(', Punctuation, 'root/compound'),
(r'@+', Punctuation),
(r'((?:for|if|rem)(?:(?=(?:\^[%s]?)?/)|(?:(?!\^)|'
r'(?<=m))(?:(?=\()|%s)))(%s?%s?(?:\^[%s]?)?/(?:\^[%s]?)?\?)' %
(_nl, _token_terminator, _space,
_core_token_compound if compound else _core_token, _nl, _nl),
bygroups(Keyword, using(this, state='text')),
'follow%s' % suffix),
(r'(goto%s)(%s(?:\^[%s]?)?/(?:\^[%s]?)?\?%s)' %
(_keyword_terminator, rest, _nl, _nl, rest),
bygroups(Keyword, using(this, state='text')),
'follow%s' % suffix),
(words(('assoc', 'break', 'cd', 'chdir', 'cls', 'color', 'copy',
'date', 'del', 'dir', 'dpath', 'echo', 'endlocal', 'erase',
'exit', 'ftype', 'keys', 'md', 'mkdir', 'mklink', 'move',
'path', 'pause', 'popd', 'prompt', 'pushd', 'rd', 'ren',
'rename', 'rmdir', 'setlocal', 'shift', 'start', 'time',
'title', 'type', 'ver', 'verify', 'vol'),
suffix=_keyword_terminator), Keyword, 'follow%s' % suffix),
(r'(call)(%s?)(:)' % _space,
bygroups(Keyword, using(this, state='text'), Punctuation),
'call%s' % suffix),
(r'call%s' % _keyword_terminator, Keyword),
(r'(for%s(?!\^))(%s)(/f%s)' %
(_token_terminator, _space, _token_terminator),
bygroups(Keyword, using(this, state='text'), Keyword),
('for/f', 'for')),
(r'(for%s(?!\^))(%s)(/l%s)' %
(_token_terminator, _space, _token_terminator),
bygroups(Keyword, using(this, state='text'), Keyword),
('for/l', 'for')),
(r'for%s(?!\^)' % _token_terminator, Keyword, ('for2', 'for')),
(r'(goto%s)(%s?)(:?)' % (_keyword_terminator, _space),
bygroups(Keyword, using(this, state='text'), Punctuation),
'label%s' % suffix),
(r'(if(?:(?=\()|%s)(?!\^))(%s?)((?:/i%s)?)(%s?)((?:not%s)?)(%s?)' %
(_token_terminator, _space, _token_terminator, _space,
_token_terminator, _space),
bygroups(Keyword, using(this, state='text'), Keyword,
using(this, state='text'), Keyword,
using(this, state='text')), ('(?', 'if')),
(r'rem(((?=\()|%s)%s?%s?.*|%s%s)' %
(_token_terminator, _space, _stoken, _keyword_terminator,
rest_of_line_compound if compound else rest_of_line),
Comment.Single, 'follow%s' % suffix),
(r'(set%s)%s(/a)' % (_keyword_terminator, set_space),
bygroups(Keyword, using(this, state='text'), Keyword),
'arithmetic%s' % suffix),
(r'(set%s)%s((?:/p)?)%s((?:(?:(?:\^[%s]?)?[^"%s%s^=%s]|'
r'\^[%s]?[^"=])+)?)((?:(?:\^[%s]?)?=)?)' %
(_keyword_terminator, set_space, set_space, _nl, _nl, _punct,
')' if compound else '', _nl, _nl),
bygroups(Keyword, using(this, state='text'), Keyword,
using(this, state='text'), using(this, state='variable'),
Punctuation),
'follow%s' % suffix),
default('follow%s' % suffix)
]
def _make_follow_state(compound, _label=_label,
_label_compound=_label_compound, _nl=_nl,
_space=_space, _start_label=_start_label,
_token=_token, _token_compound=_token_compound,
_ws=_ws):
suffix = '/compound' if compound else ''
state = []
if compound:
state.append((r'(?=\))', Text, '#pop'))
state += [
(r'%s([%s]*)(%s)(.*)' %
(_start_label, _ws, _label_compound if compound else _label),
bygroups(Text, Punctuation, Text, Name.Label, Comment.Single)),
include('redirect%s' % suffix),
(r'(?=[%s])' % _nl, Text, '#pop'),
(r'\|\|?|&&?', Punctuation, '#pop'),
include('text')
]
return state
def _make_arithmetic_state(compound, _nl=_nl, _punct=_punct,
_string=_string, _variable=_variable, _ws=_ws):
op = r'=+\-*/!~'
state = []
if compound:
state.append((r'(?=\))', Text, '#pop'))
state += [
(r'0[0-7]+', Number.Oct),
(r'0x[\da-f]+', Number.Hex),
(r'\d+', Number.Integer),
(r'[(),]+', Punctuation),
(r'([%s]|%%|\^\^)+' % op, Operator),
(r'(%s|%s|(\^[%s]?)?[^()%s%%^"%s%s%s]|\^[%s%s]?%s)+' %
(_string, _variable, _nl, op, _nl, _punct, _ws, _nl, _ws,
r'[^)]' if compound else r'[\w\W]'),
using(this, state='variable')),
(r'(?=[\x00|&])', Text, '#pop'),
include('follow')
]
return state
def _make_call_state(compound, _label=_label,
_label_compound=_label_compound):
state = []
if compound:
state.append((r'(?=\))', Text, '#pop'))
state.append((r'(:?)(%s)' % (_label_compound if compound else _label),
bygroups(Punctuation, Name.Label), '#pop'))
return state
def _make_label_state(compound, _label=_label,
_label_compound=_label_compound, _nl=_nl,
_punct=_punct, _string=_string, _variable=_variable):
state = []
if compound:
state.append((r'(?=\))', Text, '#pop'))
state.append((r'(%s?)((?:%s|%s|\^[%s]?%s|[^"%%^%s%s%s])*)' %
(_label_compound if compound else _label, _string,
_variable, _nl, r'[^)]' if compound else r'[\w\W]', _nl,
_punct, r')' if compound else ''),
bygroups(Name.Label, Comment.Single), '#pop'))
return state
def _make_redirect_state(compound,
_core_token_compound=_core_token_compound,
_nl=_nl, _punct=_punct, _stoken=_stoken,
_string=_string, _space=_space,
_variable=_variable, _ws=_ws):
stoken_compound = (r'(?:[%s]+|(?:%s|%s|%s)+)' %
(_punct, _string, _variable, _core_token_compound))
return [
(r'((?:(?<=[%s%s])\d)?)(>>?&|<&)([%s%s]*)(\d)' %
(_nl, _ws, _nl, _ws),
bygroups(Number.Integer, Punctuation, Text, Number.Integer)),
(r'((?:(?<=[%s%s])(?<!\^[%s])\d)?)(>>?|<)(%s?%s)' %
(_nl, _ws, _nl, _space, stoken_compound if compound else _stoken),
bygroups(Number.Integer, Punctuation, using(this, state='text')))
]
tokens = {
'root': _make_begin_state(False),
'follow': _make_follow_state(False),
'arithmetic': _make_arithmetic_state(False),
'call': _make_call_state(False),
'label': _make_label_state(False),
'redirect': _make_redirect_state(False),
'root/compound': _make_begin_state(True),
'follow/compound': _make_follow_state(True),
'arithmetic/compound': _make_arithmetic_state(True),
'call/compound': _make_call_state(True),
'label/compound': _make_label_state(True),
'redirect/compound': _make_redirect_state(True),
'variable-or-escape': [
(_variable, Name.Variable),
(r'%%%%|\^[%s]?(\^!|[\w\W])' % _nl, String.Escape)
],
'string': [
(r'"', String.Double, '#pop'),
(_variable, Name.Variable),
(r'\^!|%%', String.Escape),
(r'[^"%%^%s]+|[%%^]' % _nl, String.Double),
default('#pop')
],
'sqstring': [
include('variable-or-escape'),
(r'[^%]+|%', String.Single)
],
'bqstring': [
include('variable-or-escape'),
(r'[^%]+|%', String.Backtick)
],
'text': [
(r'"', String.Double, 'string'),
include('variable-or-escape'),
(r'[^"%%^%s%s%s\d)]+|.' % (_nl, _punct, _ws), Text)
],
'variable': [
(r'"', String.Double, 'string'),
include('variable-or-escape'),
(r'[^"%%^%s]+|.' % _nl, Name.Variable)
],
'for': [
(r'(%s)(in)(%s)(\()' % (_space, _space),
bygroups(using(this, state='text'), Keyword,
using(this, state='text'), Punctuation), '#pop'),
include('follow')
],
'for2': [
(r'\)', Punctuation),
(r'(%s)(do%s)' % (_space, _token_terminator),
bygroups(using(this, state='text'), Keyword), '#pop'),
(r'[%s]+' % _nl, Text),
include('follow')
],
'for/f': [
(r'(")((?:%s|[^"])*?")([%s%s]*)(\))' % (_variable, _nl, _ws),
bygroups(String.Double, using(this, state='string'), Text,
Punctuation)),
(r'"', String.Double, ('#pop', 'for2', 'string')),
(r"('(?:%%%%|%s|[\w\W])*?')([%s%s]*)(\))" % (_variable, _nl, _ws),
bygroups(using(this, state='sqstring'), Text, Punctuation)),
(r'(`(?:%%%%|%s|[\w\W])*?`)([%s%s]*)(\))' % (_variable, _nl, _ws),
bygroups(using(this, state='bqstring'), Text, Punctuation)),
include('for2')
],
'for/l': [
(r'-?\d+', Number.Integer),
include('for2')
],
'if': [
(r'((?:cmdextversion|errorlevel)%s)(%s)(\d+)' %
(_token_terminator, _space),
bygroups(Keyword, using(this, state='text'),
Number.Integer), '#pop'),
(r'(defined%s)(%s)(%s)' % (_token_terminator, _space, _stoken),
bygroups(Keyword, using(this, state='text'),
using(this, state='variable')), '#pop'),
(r'(exist%s)(%s%s)' % (_token_terminator, _space, _stoken),
bygroups(Keyword, using(this, state='text')), '#pop'),
(r'(%s%s)(%s)(%s%s)' % (_number, _space, _opword, _space, _number),
bygroups(using(this, state='arithmetic'), Operator.Word,
using(this, state='arithmetic')), '#pop'),
(_stoken, using(this, state='text'), ('#pop', 'if2')),
],
'if2': [
(r'(%s?)(==)(%s?%s)' % (_space, _space, _stoken),
bygroups(using(this, state='text'), Operator,
using(this, state='text')), '#pop'),
(r'(%s)(%s)(%s%s)' % (_space, _opword, _space, _stoken),
bygroups(using(this, state='text'), Operator.Word,
using(this, state='text')), '#pop')
],
'(?': [
(_space, using(this, state='text')),
(r'\(', Punctuation, ('#pop', 'else?', 'root/compound')),
default('#pop')
],
'else?': [
(_space, using(this, state='text')),
(r'else%s' % _token_terminator, Keyword, '#pop'),
default('#pop')
]
}
class MSDOSSessionLexer(ShellSessionBaseLexer):
"""
Lexer for simplistic MSDOS sessions.
.. versionadded:: 2.1
"""
name = 'MSDOS Session'
aliases = ['doscon']
filenames = []
mimetypes = []
_innerLexerCls = BatchLexer
_ps1rgx = r'^([^>]+>)(.*\n?)'
_ps2 = 'More? '
class TcshLexer(RegexLexer):
"""
Lexer for tcsh scripts.
.. versionadded:: 0.10
"""
name = 'Tcsh'
aliases = ['tcsh', 'csh']
filenames = ['*.tcsh', '*.csh']
mimetypes = ['application/x-csh']
tokens = {
'root': [
include('basic'),
(r'\$\(', Keyword, 'paren'),
(r'\$\{#?', Keyword, 'curly'),
(r'`', String.Backtick, 'backticks'),
include('data'),
],
'basic': [
(r'\b(if|endif|else|while|then|foreach|case|default|'
r'continue|goto|breaksw|end|switch|endsw)\s*\b',
Keyword),
(r'\b(alias|alloc|bg|bindkey|break|builtins|bye|caller|cd|chdir|'
r'complete|dirs|echo|echotc|eval|exec|exit|fg|filetest|getxvers|'
r'glob|getspath|hashstat|history|hup|inlib|jobs|kill|'
r'limit|log|login|logout|ls-F|migrate|newgrp|nice|nohup|notify|'
r'onintr|popd|printenv|pushd|rehash|repeat|rootnode|popd|pushd|'
r'set|shift|sched|setenv|setpath|settc|setty|setxvers|shift|'
r'source|stop|suspend|source|suspend|telltc|time|'
r'umask|unalias|uncomplete|unhash|universe|unlimit|unset|unsetenv|'
r'ver|wait|warp|watchlog|where|which)\s*\b',
Name.Builtin),
(r'#.*', Comment),
(r'\\[\w\W]', String.Escape),
(r'(\b\w+)(\s*)(=)', bygroups(Name.Variable, Text, Operator)),
(r'[\[\]{}()=]+', Operator),
(r'<<\s*(\'?)\\?(\w+)[\w\W]+?\2', String),
(r';', Punctuation),
],
'data': [
(r'(?s)"(\\\\|\\[0-7]+|\\.|[^"\\])*"', String.Double),
(r"(?s)'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single),
(r'\s+', Text),
(r'[^=\s\[\]{}()$"\'`\\;#]+', Text),
(r'\d+(?= |\Z)', Number),
(r'\$#?(\w+|.)', Name.Variable),
],
'curly': [
(r'\}', Keyword, '#pop'),
(r':-', Keyword),
(r'\w+', Name.Variable),
(r'[^}:"\'`$]+', Punctuation),
(r':', Punctuation),
include('root'),
],
'paren': [
(r'\)', Keyword, '#pop'),
include('root'),
],
'backticks': [
(r'`', String.Backtick, '#pop'),
include('root'),
],
}
class TcshSessionLexer(ShellSessionBaseLexer):
"""
Lexer for Tcsh sessions.
.. versionadded:: 2.1
"""
name = 'Tcsh Session'
aliases = ['tcshcon']
filenames = []
mimetypes = []
_innerLexerCls = TcshLexer
_ps1rgx = r'^([^>]+>)(.*\n?)'
_ps2 = '? '
class PowerShellLexer(RegexLexer):
"""
For Windows PowerShell code.
.. versionadded:: 1.5
"""
name = 'PowerShell'
aliases = ['powershell', 'posh', 'ps1', 'psm1']
filenames = ['*.ps1', '*.psm1']
mimetypes = ['text/x-powershell']
flags = re.DOTALL | re.IGNORECASE | re.MULTILINE
keywords = (
'while validateset validaterange validatepattern validatelength '
'validatecount until trap switch return ref process param parameter in '
'if global: function foreach for finally filter end elseif else '
'dynamicparam do default continue cmdletbinding break begin alias \\? '
'% #script #private #local #global mandatory parametersetname position '
'valuefrompipeline valuefrompipelinebypropertyname '
'valuefromremainingarguments helpmessage try catch throw').split()
operators = (
'and as band bnot bor bxor casesensitive ccontains ceq cge cgt cle '
'clike clt cmatch cne cnotcontains cnotlike cnotmatch contains '
'creplace eq exact f file ge gt icontains ieq ige igt ile ilike ilt '
'imatch ine inotcontains inotlike inotmatch ireplace is isnot le like '
'lt match ne not notcontains notlike notmatch or regex replace '
'wildcard').split()
verbs = (
'write where wait use update unregister undo trace test tee take '
'suspend stop start split sort skip show set send select scroll resume '
'restore restart resolve resize reset rename remove register receive '
'read push pop ping out new move measure limit join invoke import '
'group get format foreach export expand exit enter enable disconnect '
'disable debug cxnew copy convertto convertfrom convert connect '
'complete compare clear checkpoint aggregate add').split()
commenthelp = (
'component description example externalhelp forwardhelpcategory '
'forwardhelptargetname functionality inputs link '
'notes outputs parameter remotehelprunspace role synopsis').split()
tokens = {
'root': [
# we need to count pairs of parentheses for correct highlight
# of '$(...)' blocks in strings
(r'\(', Punctuation, 'child'),
(r'\s+', Text),
(r'^(\s*#[#\s]*)(\.(?:%s))([^\n]*$)' % '|'.join(commenthelp),
bygroups(Comment, String.Doc, Comment)),
(r'#[^\n]*?$', Comment),
            (r'(&lt;|<)#', Comment.Multiline, 'multline'),
(r'@"\n', String.Heredoc, 'heredoc-double'),
(r"@'\n.*?\n'@", String.Heredoc),
# escaped syntax
(r'`[\'"$@-]', Punctuation),
(r'"', String.Double, 'string'),
(r"'([^']|'')*'", String.Single),
(r'(\$|@@|@)((global|script|private|env):)?\w+',
Name.Variable),
(r'(%s)\b' % '|'.join(keywords), Keyword),
(r'-(%s)\b' % '|'.join(operators), Operator),
(r'(%s)-[a-z_]\w*\b' % '|'.join(verbs), Name.Builtin),
(r'\[[a-z_\[][\w. `,\[\]]*\]', Name.Constant), # .net [type]s
(r'-[a-z_]\w*', Name),
(r'\w+', Name),
(r'[.,;@{}\[\]$()=+*/\\&%!~?^`|<>-]|::', Punctuation),
],
'child': [
(r'\)', Punctuation, '#pop'),
include('root'),
],
'multline': [
(r'[^#&.]+', Comment.Multiline),
            (r'#(&gt;|>)', Comment.Multiline, '#pop'),
(r'\.(%s)' % '|'.join(commenthelp), String.Doc),
(r'[#&.]', Comment.Multiline),
],
'string': [
(r"`[0abfnrtv'\"$`]", String.Escape),
(r'[^$`"]+', String.Double),
(r'\$\(', Punctuation, 'child'),
(r'""', String.Double),
(r'[`$]', String.Double),
(r'"', String.Double, '#pop'),
],
'heredoc-double': [
(r'\n"@', String.Heredoc, '#pop'),
(r'\$\(', Punctuation, 'child'),
(r'[^@\n]+"]', String.Heredoc),
(r".", String.Heredoc),
]
}
class PowerShellSessionLexer(ShellSessionBaseLexer):
"""
Lexer for simplistic Windows PowerShell sessions.
.. versionadded:: 2.1
"""
name = 'PowerShell Session'
aliases = ['ps1con']
filenames = []
mimetypes = []
_innerLexerCls = PowerShellLexer
_ps1rgx = r'^(PS [^>]+> )(.*\n?)'
_ps2 = '>> '
class FishShellLexer(RegexLexer):
"""
Lexer for Fish shell scripts.
.. versionadded:: 2.1
"""
name = 'Fish'
aliases = ['fish', 'fishshell']
filenames = ['*.fish', '*.load']
mimetypes = ['application/x-fish']
tokens = {
'root': [
include('basic'),
include('data'),
include('interp'),
],
'interp': [
(r'\$\(\(', Keyword, 'math'),
(r'\(', Keyword, 'paren'),
(r'\$#?(\w+|.)', Name.Variable),
],
'basic': [
(r'\b(begin|end|if|else|while|break|for|in|return|function|block|'
r'case|continue|switch|not|and|or|set|echo|exit|pwd|true|false|'
r'cd|count|test)(\s*)\b',
bygroups(Keyword, Text)),
(r'\b(alias|bg|bind|breakpoint|builtin|command|commandline|'
r'complete|contains|dirh|dirs|emit|eval|exec|fg|fish|fish_config|'
r'fish_indent|fish_pager|fish_prompt|fish_right_prompt|'
r'fish_update_completions|fishd|funced|funcsave|functions|help|'
r'history|isatty|jobs|math|mimedb|nextd|open|popd|prevd|psub|'
r'pushd|random|read|set_color|source|status|trap|type|ulimit|'
r'umask|vared|fc|getopts|hash|kill|printf|time|wait)\s*\b(?!\.)',
Name.Builtin),
(r'#.*\n', Comment),
(r'\\[\w\W]', String.Escape),
(r'(\b\w+)(\s*)(=)', bygroups(Name.Variable, Text, Operator)),
(r'[\[\]()=]', Operator),
(r'<<-?\s*(\'?)\\?(\w+)[\w\W]+?\2', String),
],
'data': [
(r'(?s)\$?"(\\\\|\\[0-7]+|\\.|[^"\\$])*"', String.Double),
(r'"', String.Double, 'string'),
(r"(?s)\$'(\\\\|\\[0-7]+|\\.|[^'\\])*'", String.Single),
(r"(?s)'.*?'", String.Single),
(r';', Punctuation),
(r'&|\||\^|<|>', Operator),
(r'\s+', Text),
(r'\d+(?= |\Z)', Number),
(r'[^=\s\[\]{}()$"\'`\\<&|;]+', Text),
],
'string': [
(r'"', String.Double, '#pop'),
(r'(?s)(\\\\|\\[0-7]+|\\.|[^"\\$])+', String.Double),
include('interp'),
],
'paren': [
(r'\)', Keyword, '#pop'),
include('root'),
],
'math': [
(r'\)\)', Keyword, '#pop'),
(r'[-+*/%^|&]|\*\*|\|\|', Operator),
(r'\d+#\d+', Number),
(r'\d+#(?! )', Number),
(r'\d+', Number),
include('root'),
],
}
|
mit
|
vibhorag/scikit-learn
|
sklearn/metrics/tests/test_score_objects.py
|
138
|
14048
|
import pickle
import numpy as np
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_raises_regexp
from sklearn.utils.testing import assert_true
from sklearn.utils.testing import ignore_warnings
from sklearn.utils.testing import assert_not_equal
from sklearn.base import BaseEstimator
from sklearn.metrics import (f1_score, r2_score, roc_auc_score, fbeta_score,
log_loss, precision_score, recall_score)
from sklearn.metrics.cluster import adjusted_rand_score
from sklearn.metrics.scorer import (check_scoring, _PredictScorer,
_passthrough_scorer)
from sklearn.metrics import make_scorer, get_scorer, SCORERS
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.cluster import KMeans
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import Ridge, LogisticRegression
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.datasets import make_blobs
from sklearn.datasets import make_classification
from sklearn.datasets import make_multilabel_classification
from sklearn.datasets import load_diabetes
from sklearn.cross_validation import train_test_split, cross_val_score
from sklearn.grid_search import GridSearchCV
from sklearn.multiclass import OneVsRestClassifier
REGRESSION_SCORERS = ['r2', 'mean_absolute_error', 'mean_squared_error',
'median_absolute_error']
CLF_SCORERS = ['accuracy', 'f1', 'f1_weighted', 'f1_macro', 'f1_micro',
'roc_auc', 'average_precision', 'precision',
'precision_weighted', 'precision_macro', 'precision_micro',
'recall', 'recall_weighted', 'recall_macro', 'recall_micro',
'log_loss',
'adjusted_rand_score' # not really, but works
]
MULTILABEL_ONLY_SCORERS = ['precision_samples', 'recall_samples', 'f1_samples']
class EstimatorWithoutFit(object):
"""Dummy estimator to test check_scoring"""
pass
class EstimatorWithFit(BaseEstimator):
"""Dummy estimator to test check_scoring"""
def fit(self, X, y):
return self
class EstimatorWithFitAndScore(object):
"""Dummy estimator to test check_scoring"""
def fit(self, X, y):
return self
def score(self, X, y):
return 1.0
class EstimatorWithFitAndPredict(object):
"""Dummy estimator to test check_scoring"""
def fit(self, X, y):
self.y = y
return self
def predict(self, X):
return self.y
class DummyScorer(object):
"""Dummy scorer that always returns 1."""
def __call__(self, est, X, y):
return 1
def test_check_scoring():
# Test all branches of check_scoring
estimator = EstimatorWithoutFit()
pattern = (r"estimator should a be an estimator implementing 'fit' method,"
r" .* was passed")
assert_raises_regexp(TypeError, pattern, check_scoring, estimator)
estimator = EstimatorWithFitAndScore()
estimator.fit([[1]], [1])
scorer = check_scoring(estimator)
assert_true(scorer is _passthrough_scorer)
assert_almost_equal(scorer(estimator, [[1]], [1]), 1.0)
estimator = EstimatorWithFitAndPredict()
estimator.fit([[1]], [1])
pattern = (r"If no scoring is specified, the estimator passed should have"
r" a 'score' method\. The estimator .* does not\.")
assert_raises_regexp(TypeError, pattern, check_scoring, estimator)
scorer = check_scoring(estimator, "accuracy")
assert_almost_equal(scorer(estimator, [[1]], [1]), 1.0)
estimator = EstimatorWithFit()
scorer = check_scoring(estimator, "accuracy")
assert_true(isinstance(scorer, _PredictScorer))
estimator = EstimatorWithFit()
scorer = check_scoring(estimator, allow_none=True)
assert_true(scorer is None)
def test_check_scoring_gridsearchcv():
# test that check_scoring works on GridSearchCV and pipeline.
# slightly redundant non-regression test.
grid = GridSearchCV(LinearSVC(), param_grid={'C': [.1, 1]})
scorer = check_scoring(grid, "f1")
assert_true(isinstance(scorer, _PredictScorer))
pipe = make_pipeline(LinearSVC())
scorer = check_scoring(pipe, "f1")
assert_true(isinstance(scorer, _PredictScorer))
# check that cross_val_score definitely calls the scorer
# and doesn't make any assumptions about the estimator apart from having a
# fit.
scores = cross_val_score(EstimatorWithFit(), [[1], [2], [3]], [1, 0, 1],
scoring=DummyScorer())
assert_array_equal(scores, 1)
def test_make_scorer():
# Sanity check on the make_scorer factory function.
f = lambda *args: 0
assert_raises(ValueError, make_scorer, f, needs_threshold=True,
needs_proba=True)
def test_classification_scores():
# Test classification scorers.
X, y = make_blobs(random_state=0, centers=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LinearSVC(random_state=0)
clf.fit(X_train, y_train)
for prefix, metric in [('f1', f1_score), ('precision', precision_score),
('recall', recall_score)]:
score1 = get_scorer('%s_weighted' % prefix)(clf, X_test, y_test)
score2 = metric(y_test, clf.predict(X_test), pos_label=None,
average='weighted')
assert_almost_equal(score1, score2)
score1 = get_scorer('%s_macro' % prefix)(clf, X_test, y_test)
score2 = metric(y_test, clf.predict(X_test), pos_label=None,
average='macro')
assert_almost_equal(score1, score2)
score1 = get_scorer('%s_micro' % prefix)(clf, X_test, y_test)
score2 = metric(y_test, clf.predict(X_test), pos_label=None,
average='micro')
assert_almost_equal(score1, score2)
score1 = get_scorer('%s' % prefix)(clf, X_test, y_test)
score2 = metric(y_test, clf.predict(X_test), pos_label=1)
assert_almost_equal(score1, score2)
# test fbeta score that takes an argument
scorer = make_scorer(fbeta_score, beta=2)
score1 = scorer(clf, X_test, y_test)
score2 = fbeta_score(y_test, clf.predict(X_test), beta=2)
assert_almost_equal(score1, score2)
# test that custom scorer can be pickled
unpickled_scorer = pickle.loads(pickle.dumps(scorer))
score3 = unpickled_scorer(clf, X_test, y_test)
assert_almost_equal(score1, score3)
# smoke test the repr:
repr(fbeta_score)
def test_regression_scorers():
# Test regression scorers.
diabetes = load_diabetes()
X, y = diabetes.data, diabetes.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = Ridge()
clf.fit(X_train, y_train)
score1 = get_scorer('r2')(clf, X_test, y_test)
score2 = r2_score(y_test, clf.predict(X_test))
assert_almost_equal(score1, score2)
def test_thresholded_scorers():
# Test scorers that take thresholds.
X, y = make_blobs(random_state=0, centers=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(random_state=0)
clf.fit(X_train, y_train)
score1 = get_scorer('roc_auc')(clf, X_test, y_test)
score2 = roc_auc_score(y_test, clf.decision_function(X_test))
score3 = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
assert_almost_equal(score1, score2)
assert_almost_equal(score1, score3)
logscore = get_scorer('log_loss')(clf, X_test, y_test)
logloss = log_loss(y_test, clf.predict_proba(X_test))
assert_almost_equal(-logscore, logloss)
# same for an estimator without decision_function
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
score1 = get_scorer('roc_auc')(clf, X_test, y_test)
score2 = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
assert_almost_equal(score1, score2)
# test with a regressor (no decision_function)
reg = DecisionTreeRegressor()
reg.fit(X_train, y_train)
score1 = get_scorer('roc_auc')(reg, X_test, y_test)
score2 = roc_auc_score(y_test, reg.predict(X_test))
assert_almost_equal(score1, score2)
# Test that an exception is raised on more than two classes
X, y = make_blobs(random_state=0, centers=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf.fit(X_train, y_train)
assert_raises(ValueError, get_scorer('roc_auc'), clf, X_test, y_test)
def test_thresholded_scorers_multilabel_indicator_data():
    # Test that the scorers work with multilabel-indicator format
    # for multilabel and multi-output multi-class classifiers
X, y = make_multilabel_classification(allow_unlabeled=False,
random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# Multi-output multi-class predict_proba
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
y_proba = clf.predict_proba(X_test)
score1 = get_scorer('roc_auc')(clf, X_test, y_test)
score2 = roc_auc_score(y_test, np.vstack(p[:, -1] for p in y_proba).T)
assert_almost_equal(score1, score2)
# Multi-output multi-class decision_function
# TODO Is there any yet?
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)
clf._predict_proba = clf.predict_proba
clf.predict_proba = None
clf.decision_function = lambda X: [p[:, 1] for p in clf._predict_proba(X)]
y_proba = clf.decision_function(X_test)
score1 = get_scorer('roc_auc')(clf, X_test, y_test)
score2 = roc_auc_score(y_test, np.vstack(p for p in y_proba).T)
assert_almost_equal(score1, score2)
# Multilabel predict_proba
clf = OneVsRestClassifier(DecisionTreeClassifier())
clf.fit(X_train, y_train)
score1 = get_scorer('roc_auc')(clf, X_test, y_test)
score2 = roc_auc_score(y_test, clf.predict_proba(X_test))
assert_almost_equal(score1, score2)
# Multilabel decision function
clf = OneVsRestClassifier(LinearSVC(random_state=0))
clf.fit(X_train, y_train)
score1 = get_scorer('roc_auc')(clf, X_test, y_test)
score2 = roc_auc_score(y_test, clf.decision_function(X_test))
assert_almost_equal(score1, score2)
def test_unsupervised_scorers():
# Test clustering scorers against gold standard labeling.
# We don't have any real unsupervised Scorers yet.
X, y = make_blobs(random_state=0, centers=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
km = KMeans(n_clusters=3)
km.fit(X_train)
score1 = get_scorer('adjusted_rand_score')(km, X_test, y_test)
score2 = adjusted_rand_score(y_test, km.predict(X_test))
assert_almost_equal(score1, score2)
@ignore_warnings
def test_raises_on_score_list():
# Test that when a list of scores is returned, we raise proper errors.
X, y = make_blobs(random_state=0)
f1_scorer_no_average = make_scorer(f1_score, average=None)
clf = DecisionTreeClassifier()
assert_raises(ValueError, cross_val_score, clf, X, y,
scoring=f1_scorer_no_average)
grid_search = GridSearchCV(clf, scoring=f1_scorer_no_average,
param_grid={'max_depth': [1, 2]})
assert_raises(ValueError, grid_search.fit, X, y)
@ignore_warnings
def test_scorer_sample_weight():
# Test that scorers support sample_weight or raise sensible errors
# Unlike the metrics invariance test, in the scorer case it's harder
# to ensure that, on the classifier output, weighted and unweighted
# scores really should be unequal.
X, y = make_classification(random_state=0)
_, y_ml = make_multilabel_classification(n_samples=X.shape[0],
random_state=0)
split = train_test_split(X, y, y_ml, random_state=0)
X_train, X_test, y_train, y_test, y_ml_train, y_ml_test = split
sample_weight = np.ones_like(y_test)
sample_weight[:10] = 0
# get sensible estimators for each metric
sensible_regr = DummyRegressor(strategy='median')
sensible_regr.fit(X_train, y_train)
sensible_clf = DecisionTreeClassifier(random_state=0)
sensible_clf.fit(X_train, y_train)
sensible_ml_clf = DecisionTreeClassifier(random_state=0)
sensible_ml_clf.fit(X_train, y_ml_train)
estimator = dict([(name, sensible_regr)
for name in REGRESSION_SCORERS] +
[(name, sensible_clf)
for name in CLF_SCORERS] +
[(name, sensible_ml_clf)
for name in MULTILABEL_ONLY_SCORERS])
for name, scorer in SCORERS.items():
if name in MULTILABEL_ONLY_SCORERS:
target = y_ml_test
else:
target = y_test
try:
weighted = scorer(estimator[name], X_test, target,
sample_weight=sample_weight)
ignored = scorer(estimator[name], X_test[10:], target[10:])
unweighted = scorer(estimator[name], X_test, target)
assert_not_equal(weighted, unweighted,
msg="scorer {0} behaves identically when "
"called with sample weights: {1} vs "
"{2}".format(name, weighted, unweighted))
assert_almost_equal(weighted, ignored,
err_msg="scorer {0} behaves differently when "
"ignoring samples and setting sample_weight to"
" 0: {1} vs {2}".format(name, weighted,
ignored))
except TypeError as e:
assert_true("sample_weight" in str(e),
"scorer {0} raises unhelpful exception when called "
"with sample weights: {1}".format(name, str(e)))
|
bsd-3-clause
|
albertrdixon/CouchPotatoServer
|
couchpotato/core/plugins/profile/main.py
|
47
|
6878
|
import traceback
from couchpotato import get_db, tryInt
from couchpotato.api import addApiView
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from .index import ProfileIndex
log = CPLog(__name__)
class ProfilePlugin(Plugin):
_database = {
'profile': ProfileIndex
}
def __init__(self):
addEvent('profile.all', self.all)
addEvent('profile.default', self.default)
addApiView('profile.save', self.save)
addApiView('profile.save_order', self.saveOrder)
addApiView('profile.delete', self.delete)
addApiView('profile.list', self.allView, docs = {
'desc': 'List all available profiles',
'return': {'type': 'object', 'example': """{
'success': True,
'list': array, profiles
}"""}
})
addEvent('app.initialize', self.fill, priority = 90)
addEvent('app.load', self.forceDefaults, priority = 110)
def forceDefaults(self):
db = get_db()
# Fill qualities and profiles if they are empty somehow..
if db.count(db.all, 'profile') == 0:
if db.count(db.all, 'quality') == 0:
fireEvent('quality.fill', single = True)
self.fill()
# Get all active movies without profile
try:
medias = fireEvent('media.with_status', 'active', single = True)
profile_ids = [x.get('_id') for x in self.all()]
default_id = profile_ids[0]
for media in medias:
if media.get('profile_id') not in profile_ids:
media['profile_id'] = default_id
db.update(media)
except:
log.error('Failed: %s', traceback.format_exc())
def allView(self, **kwargs):
return {
'success': True,
'list': self.all()
}
def all(self):
db = get_db()
profiles = db.all('profile', with_doc = True)
return [x['doc'] for x in profiles]
def save(self, **kwargs):
try:
db = get_db()
profile = {
'_t': 'profile',
'label': toUnicode(kwargs.get('label')),
'order': tryInt(kwargs.get('order', 999)),
'core': kwargs.get('core', False),
'minimum_score': tryInt(kwargs.get('minimum_score', 1)),
'qualities': [],
'wait_for': [],
'stop_after': [],
'finish': [],
'3d': []
}
# Update types
order = 0
for type in kwargs.get('types', []):
profile['qualities'].append(type.get('quality'))
profile['wait_for'].append(tryInt(kwargs.get('wait_for', 0)))
profile['stop_after'].append(tryInt(kwargs.get('stop_after', 0)))
profile['finish'].append((tryInt(type.get('finish')) == 1) if order > 0 else True)
profile['3d'].append(tryInt(type.get('3d')))
order += 1
id = kwargs.get('id')
try:
p = db.get('id', id)
profile['order'] = tryInt(kwargs.get('order', p.get('order', 999)))
except:
p = db.insert(profile)
p.update(profile)
db.update(p)
return {
'success': True,
'profile': p
}
except:
log.error('Failed: %s', traceback.format_exc())
return {
'success': False
}
def default(self):
db = get_db()
return list(db.all('profile', limit = 1, with_doc = True))[0]['doc']
def saveOrder(self, **kwargs):
try:
db = get_db()
order = 0
for profile_id in kwargs.get('ids', []):
p = db.get('id', profile_id)
p['hide'] = tryInt(kwargs.get('hidden')[order]) == 1
p['order'] = order
db.update(p)
order += 1
return {
'success': True
}
except:
log.error('Failed: %s', traceback.format_exc())
return {
'success': False
}
def delete(self, id = None, **kwargs):
try:
db = get_db()
success = False
message = ''
try:
p = db.get('id', id)
db.delete(p)
# Force defaults on all empty profile movies
self.forceDefaults()
success = True
except Exception as e:
message = log.error('Failed deleting Profile: %s', e)
return {
'success': success,
'message': message
}
except:
log.error('Failed: %s', traceback.format_exc())
return {
'success': False
}
def fill(self):
try:
db = get_db()
profiles = [{
'label': 'Best',
'qualities': ['720p', '1080p', 'brrip', 'dvdrip']
}, {
'label': 'HD',
'qualities': ['720p', '1080p']
}, {
'label': 'SD',
'qualities': ['dvdrip', 'dvdr']
}, {
'label': 'Prefer 3D HD',
'qualities': ['1080p', '720p', '720p', '1080p'],
'3d': [True, True]
}, {
'label': '3D HD',
'qualities': ['1080p', '720p'],
'3d': [True, True]
}]
# Create default quality profile
order = 0
for profile in profiles:
log.info('Creating default profile: %s', profile.get('label'))
pro = {
'_t': 'profile',
'label': toUnicode(profile.get('label')),
'order': order,
'qualities': profile.get('qualities'),
'minimum_score': 1,
'finish': [],
'wait_for': [],
'stop_after': [],
'3d': []
}
threed = profile.get('3d', [])
for q in profile.get('qualities'):
pro['finish'].append(True)
pro['wait_for'].append(0)
pro['stop_after'].append(0)
pro['3d'].append(threed.pop() if threed else False)
db.insert(pro)
order += 1
return True
except:
log.error('Failed: %s', traceback.format_exc())
return False
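# Illustrative sketch, not part of the original plugin: the shape of a single
# profile document as stored by save() and fill() above. The values are
# hypothetical; the point is that 'qualities', 'finish', 'wait_for',
# 'stop_after' and '3d' are parallel lists with one entry per quality, in
# priority order.
_example_profile_document = {
    '_t': 'profile',
    'label': u'HD',
    'order': 1,
    'minimum_score': 1,
    'qualities': ['720p', '1080p'],
    'finish': [True, True],
    'wait_for': [0, 0],
    'stop_after': [0, 0],
    '3d': [False, False],
}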
|
gpl-3.0
|
endlessm/chromium-browser
|
third_party/protobuf/python/google/protobuf/service.py
|
243
|
9144
|
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""DEPRECATED: Declares the RPC service interfaces.
This module declares the abstract interfaces underlying proto2 RPC
services. These are intended to be independent of any particular RPC
implementation, so that proto2 services can be used on top of a variety
of implementations. Starting with version 2.3.0, RPC implementations should
not try to build on these, but should instead provide code generator plugins
which generate code specific to the particular RPC implementation. This way
the generated code can be more appropriate for the implementation in use
and can avoid unnecessary layers of indirection.
"""
__author__ = '[email protected] (Petar Petrov)'
class RpcException(Exception):
"""Exception raised on failed blocking RPC method call."""
pass
class Service(object):
"""Abstract base interface for protocol-buffer-based RPC services.
Services themselves are abstract classes (implemented either by servers or as
stubs), but they subclass this base interface. The methods of this
interface can be used to call the methods of the service without knowing
its exact type at compile time (analogous to the Message interface).
"""
def GetDescriptor():
"""Retrieves this service's descriptor."""
raise NotImplementedError
def CallMethod(self, method_descriptor, rpc_controller,
request, done):
"""Calls a method of the service specified by method_descriptor.
If "done" is None then the call is blocking and the response
message will be returned directly. Otherwise the call is asynchronous
and "done" will later be called with the response value.
In the blocking case, RpcException will be raised on error.
Preconditions:
* method_descriptor.service == GetDescriptor
    * request is of the exact same class as returned by
GetRequestClass(method).
* After the call has started, the request must not be modified.
* "rpc_controller" is of the correct type for the RPC implementation being
used by this Service. For stubs, the "correct type" depends on the
RpcChannel which the stub is using.
Postconditions:
* "done" will be called when the method is complete. This may be
before CallMethod() returns or it may be at some point in the future.
* If the RPC failed, the response value passed to "done" will be None.
Further details about the failure can be found by querying the
RpcController.
"""
raise NotImplementedError
def GetRequestClass(self, method_descriptor):
"""Returns the class of the request message for the specified method.
CallMethod() requires that the request is of a particular subclass of
Message. GetRequestClass() gets the default instance of this required
type.
Example:
method = service.GetDescriptor().FindMethodByName("Foo")
request = stub.GetRequestClass(method)()
request.ParseFromString(input)
service.CallMethod(method, request, callback)
"""
raise NotImplementedError
def GetResponseClass(self, method_descriptor):
"""Returns the class of the response message for the specified method.
This method isn't really needed, as the RpcChannel's CallMethod constructs
the response protocol message. It's provided anyway in case it is useful
for the caller to know the response type in advance.
"""
raise NotImplementedError
class RpcController(object):
"""An RpcController mediates a single method call.
The primary purpose of the controller is to provide a way to manipulate
settings specific to the RPC implementation and to find out about RPC-level
errors. The methods provided by the RpcController interface are intended
to be a "least common denominator" set of features which we expect all
implementations to support. Specific implementations may provide more
advanced features (e.g. deadline propagation).
"""
# Client-side methods below
def Reset(self):
"""Resets the RpcController to its initial state.
After the RpcController has been reset, it may be reused in
a new call. Must not be called while an RPC is in progress.
"""
raise NotImplementedError
def Failed(self):
"""Returns true if the call failed.
After a call has finished, returns true if the call failed. The possible
reasons for failure depend on the RPC implementation. Failed() must not
be called before a call has finished. If Failed() returns true, the
contents of the response message are undefined.
"""
raise NotImplementedError
def ErrorText(self):
"""If Failed is true, returns a human-readable description of the error."""
raise NotImplementedError
def StartCancel(self):
"""Initiate cancellation.
Advises the RPC system that the caller desires that the RPC call be
canceled. The RPC system may cancel it immediately, may wait awhile and
then cancel it, or may not even cancel the call at all. If the call is
canceled, the "done" callback will still be called and the RpcController
will indicate that the call failed at that time.
"""
raise NotImplementedError
# Server-side methods below
def SetFailed(self, reason):
"""Sets a failure reason.
Causes Failed() to return true on the client side. "reason" will be
incorporated into the message returned by ErrorText(). If you find
you need to return machine-readable information about failures, you
should incorporate it into your response protocol buffer and should
NOT call SetFailed().
"""
raise NotImplementedError
def IsCanceled(self):
"""Checks if the client cancelled the RPC.
If true, indicates that the client canceled the RPC, so the server may
as well give up on replying to it. The server should still call the
final "done" callback.
"""
raise NotImplementedError
def NotifyOnCancel(self, callback):
"""Sets a callback to invoke on cancel.
Asks that the given callback be called when the RPC is canceled. The
callback will always be called exactly once. If the RPC completes without
being canceled, the callback will be called after completion. If the RPC
has already been canceled when NotifyOnCancel() is called, the callback
will be called immediately.
NotifyOnCancel() must be called no more than once per request.
"""
raise NotImplementedError
class RpcChannel(object):
"""Abstract interface for an RPC channel.
An RpcChannel represents a communication line to a service which can be used
to call that service's methods. The service may be running on another
machine. Normally, you should not use an RpcChannel directly, but instead
  construct a stub Service wrapping it. Example:
RpcChannel channel = rpcImpl.Channel("remotehost.example.com:1234")
RpcController controller = rpcImpl.Controller()
MyService service = MyService_Stub(channel)
service.MyMethod(controller, request, callback)
"""
def CallMethod(self, method_descriptor, rpc_controller,
request, response_class, done):
"""Calls the method identified by the descriptor.
Call the given method of the remote service. The signature of this
procedure looks the same as Service.CallMethod(), but the requirements
are less strict in one important way: the request object doesn't have to
be of any specific class as long as its descriptor is method.input_type.
"""
raise NotImplementedError
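# Illustrative sketch, not part of the original module: a minimal in-process
# controller/channel pair built on the abstract interfaces above. A real RPC
# implementation would marshal the request over a transport; this one simply
# forwards the call to a local Service instance, which is enough to show how
# the pieces fit together. Cancellation-related methods are left unimplemented.
class _LocalController(RpcController):
  """Tracks failure state for a single in-process call."""
  def __init__(self):
    self._failed = False
    self._error_text = ''
  def Reset(self):
    self._failed = False
    self._error_text = ''
  def Failed(self):
    return self._failed
  def ErrorText(self):
    return self._error_text
  def SetFailed(self, reason):
    self._failed = True
    self._error_text = reason
class _LocalChannel(RpcChannel):
  """Forwards CallMethod straight to a Service in the same process."""
  def __init__(self, service):
    self._service = service
  def CallMethod(self, method_descriptor, rpc_controller,
                 request, response_class, done):
    # The wrapped service builds the response and reports it through "done",
    # mirroring the contract documented on Service.CallMethod above.
    self._service.CallMethod(method_descriptor, rpc_controller, request, done)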
|
bsd-3-clause
|
kurli/blink-crosswalk
|
Source/build/scripts/make_runtime_features.py
|
51
|
4136
|
#!/usr/bin/env python
# Copyright (C) 2013 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import sys
import in_generator
import name_utilities
from name_utilities import lower_first
import template_expander
class RuntimeFeatureWriter(in_generator.Writer):
class_name = 'RuntimeEnabledFeatures'
filters = {
'enable_conditional': name_utilities.enable_conditional_if_endif,
}
# FIXME: valid_values and defaults should probably roll into one object.
valid_values = {
'status': ['stable', 'experimental', 'test', 'deprecated'],
}
defaults = {
'condition' : None,
'depends_on' : [],
'custom': False,
'status': None,
}
_status_aliases = {
'deprecated': 'test',
}
def __init__(self, in_file_path):
super(RuntimeFeatureWriter, self).__init__(in_file_path)
self._outputs = {(self.class_name + '.h'): self.generate_header,
(self.class_name + '.cpp'): self.generate_implementation,
}
self._features = self.in_file.name_dictionaries
# Make sure the resulting dictionaries have all the keys we expect.
for feature in self._features:
feature['first_lowered_name'] = lower_first(feature['name'])
feature['status'] = self._status_aliases.get(feature['status'], feature['status'])
# Most features just check their isFooEnabled bool
# but some depend on more than one bool.
enabled_condition = 'is%sEnabled' % feature['name']
for dependant_name in feature['depends_on']:
enabled_condition += ' && is%sEnabled' % dependant_name
feature['enabled_condition'] = enabled_condition
self._non_custom_features = filter(lambda feature: not feature['custom'], self._features)
def _feature_sets(self):
# Another way to think of the status levels is as "sets of features"
# which is how we're referring to them in this generator.
return [status for status in self.valid_values['status'] if status not in self._status_aliases]
@template_expander.use_jinja(class_name + '.h.tmpl', filters=filters)
def generate_header(self):
return {
'features': self._features,
'feature_sets': self._feature_sets(),
}
@template_expander.use_jinja(class_name + '.cpp.tmpl', filters=filters)
def generate_implementation(self):
return {
'features': self._features,
'feature_sets': self._feature_sets(),
}
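# Illustrative sketch, not part of the original script: the condition-building
# step from RuntimeFeatureWriter.__init__ above, extracted as a standalone
# helper so the resulting string is easy to see. The feature name and
# dependency in the comment below are hypothetical.
def _example_enabled_condition(feature):
    enabled_condition = 'is%sEnabled' % feature['name']
    for dependant_name in feature['depends_on']:
        enabled_condition += ' && is%sEnabled' % dependant_name
    return enabled_condition
# _example_enabled_condition({'name': 'SharedWorker', 'depends_on': ['Worker']})
# returns 'isSharedWorkerEnabled && isWorkerEnabled'.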
if __name__ == '__main__':
in_generator.Maker(RuntimeFeatureWriter).main(sys.argv)
|
bsd-3-clause
|
mdrumond/tensorflow
|
tensorflow/python/estimator/inputs/pandas_io.py
|
86
|
4503
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Methods to allow pandas.DataFrame."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.python.estimator.inputs.queues import feeding_functions
try:
# pylint: disable=g-import-not-at-top
# pylint: disable=unused-import
import pandas as pd
HAS_PANDAS = True
except IOError:
# Pandas writes a temporary file during import. If it fails, don't use pandas.
HAS_PANDAS = False
except ImportError:
HAS_PANDAS = False
def pandas_input_fn(x,
y=None,
batch_size=128,
num_epochs=1,
shuffle=None,
queue_capacity=1000,
num_threads=1,
target_column='target'):
"""Returns input function that would feed Pandas DataFrame into the model.
Note: `y`'s index must match `x`'s index.
Args:
x: pandas `DataFrame` object.
y: pandas `Series` object. `None` if absent.
batch_size: int, size of batches to return.
num_epochs: int, number of epochs to iterate over data. If not `None`,
read attempts that would exceed this value will raise `OutOfRangeError`.
shuffle: bool, whether to read the records in random order.
queue_capacity: int, size of the read queue. If `None`, it will be set
roughly to the size of `x`.
num_threads: Integer, number of threads used for reading and enqueueing. In
      order to have a predictable and repeatable order of reading and enqueueing,
such as in prediction and evaluation mode, `num_threads` should be 1.
target_column: str, name to give the target column `y`.
Returns:
    Function that has signature of ()->(dict of `features`, `target`)
Raises:
ValueError: if `x` already contains a column with the same name as `y`, or
if the indexes of `x` and `y` don't match.
TypeError: `shuffle` is not bool.
"""
if not HAS_PANDAS:
raise TypeError(
'pandas_input_fn should not be called without pandas installed')
if not isinstance(shuffle, bool):
raise TypeError('shuffle must be explicitly set as boolean; '
'got {}'.format(shuffle))
x = x.copy()
if y is not None:
if target_column in x:
raise ValueError(
'Cannot use name %s for target column: DataFrame already has a '
'column with that name: %s' % (target_column, x.columns))
if not np.array_equal(x.index, y.index):
raise ValueError('Index for x and y are mismatched.\nIndex for x: %s\n'
'Index for y: %s\n' % (x.index, y.index))
x[target_column] = y
# TODO(mdan): These are memory copies. We probably don't need 4x slack space.
# The sizes below are consistent with what I've seen elsewhere.
if queue_capacity is None:
if shuffle:
queue_capacity = 4 * len(x)
else:
queue_capacity = len(x)
min_after_dequeue = max(queue_capacity / 4, 1)
def input_fn():
"""Pandas input function."""
queue = feeding_functions._enqueue_data( # pylint: disable=protected-access
x,
queue_capacity,
shuffle=shuffle,
min_after_dequeue=min_after_dequeue,
num_threads=num_threads,
enqueue_size=batch_size,
num_epochs=num_epochs)
if num_epochs is None:
features = queue.dequeue_many(batch_size)
else:
features = queue.dequeue_up_to(batch_size)
assert len(features) == len(x.columns) + 1, ('Features should have one '
'extra element for the index.')
features = features[1:]
features = dict(zip(list(x.columns), features))
if y is not None:
target = features.pop(target_column)
return features, target
return features
return input_fn
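# Illustrative sketch, not part of the original module: typical usage of
# pandas_input_fn with a small in-memory DataFrame. The column names, values
# and the 'label' target name are hypothetical; the returned callable is what
# would be handed to an Estimator's train or evaluate call.
def _example_pandas_input_fn():
  if not HAS_PANDAS:
    return None
  x = pd.DataFrame({'age': [21, 35, 48], 'income': [30.0, 52.5, 71.0]})
  y = pd.Series([0, 1, 1], name='label')
  return pandas_input_fn(
      x, y, batch_size=2, num_epochs=1, shuffle=False, target_column='label')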
|
apache-2.0
|
AOSP-S4-KK/platform_external_chromium_org
|
tools/cr/cr/commands/command.py
|
25
|
3270
|
# Copyright 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Module to hold the Command plugin."""
import argparse
import cr
class Command(cr.Plugin, cr.Plugin.Type):
"""Base class for implementing cr commands.
These are the sub-commands on the command line, and modify the
accepted remaining arguments.
Commands in general do not implement the functionality directly, instead they
run a sequence of actions.
"""
@classmethod
def Select(cls, context):
"""Called to select which command is active.
    This picks a command based on the first non-flag argument on the command
line.
Args:
context: The context to select the command for.
Returns:
the selected command, or None if not specified on the command line.
"""
if context.args:
return getattr(context.args, '_command', None)
return None
def __init__(self):
super(Command, self).__init__()
self.help = 'Missing help: {0}'.format(self.__class__.__name__)
self.description = None
self.epilog = None
self.parser = None
self.requires_build_dir = True
def AddArguments(self, subparsers):
"""Add arguments to the command line parser.
Called by the main function to add the command to the command line parser.
Commands that override this function to add more arguments must invoke
this method.
Args:
subparsers: The argparse subparser manager to add this command to.
Returns:
the parser that was built for the command.
"""
self.parser = subparsers.add_parser(
self.name,
add_help=False,
help=self.help,
description=self.description or self.help,
epilog=self.epilog,
)
self.parser.set_defaults(_command=self)
cr.Context.AddCommonArguments(self.parser)
cr.base.client.AddArguments(self.parser)
return self.parser
def ConsumeArgs(self, parser, reason):
"""Adds a remaining argument consumer to the parser.
A helper method that commands can use to consume all remaining arguments.
Use for things like lists of targets.
Args:
parser: The parser to consume remains for.
reason: The reason to give the user in the help text.
"""
parser.add_argument(
'_remains', metavar='arguments',
nargs=argparse.REMAINDER,
help='The additional arguments to {0}.'.format(reason)
)
def EarlyArgProcessing(self, context):
"""Called to make decisions based on speculative argument parsing.
When this method is called, enough of the command line parsing has been
done that the command is selected. This allows the command to make any
modifications needed before the final argument parsing is done.
Args:
context: The context that is parsing the arguments.
"""
cr.base.client.ApplyOutArgument(context)
@cr.Plugin.activemethod
def Run(self, context):
"""The main method of the command.
This is the only thing that a command has to implement, and it should not
call this base version.
Args:
context: The context to run the command in.
"""
_ = context
raise NotImplementedError('Must be overridden.')
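# Illustrative sketch, not part of the original module: a minimal concrete
# command. Plugin registration and the exact context/argument plumbing live
# outside this file, so the body only shows what a subclass is expected to
# provide: help text, any extra arguments, and a Run override.
class _ExampleEchoCommand(Command):
  """Hypothetical command that echoes its remaining arguments."""
  def __init__(self):
    super(_ExampleEchoCommand, self).__init__()
    self.help = 'Echo the remaining command line arguments'
    self.requires_build_dir = False
  def AddArguments(self, subparsers):
    parser = super(_ExampleEchoCommand, self).AddArguments(subparsers)
    self.ConsumeArgs(parser, 'echo')
    return parser
  def Run(self, context):
    print(' '.join(getattr(context.args, '_remains', []) or []))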
|
bsd-3-clause
|
kamyu104/django
|
tests/template_tests/syntax_tests/test_comment.py
|
521
|
3667
|
from django.test import SimpleTestCase
from ..utils import setup
class CommentSyntaxTests(SimpleTestCase):
@setup({'comment-syntax01': '{# this is hidden #}hello'})
def test_comment_syntax01(self):
output = self.engine.render_to_string('comment-syntax01')
self.assertEqual(output, 'hello')
@setup({'comment-syntax02': '{# this is hidden #}hello{# foo #}'})
def test_comment_syntax02(self):
output = self.engine.render_to_string('comment-syntax02')
self.assertEqual(output, 'hello')
@setup({'comment-syntax03': 'foo{# {% if %} #}'})
def test_comment_syntax03(self):
output = self.engine.render_to_string('comment-syntax03')
self.assertEqual(output, 'foo')
@setup({'comment-syntax04': 'foo{# {% endblock %} #}'})
def test_comment_syntax04(self):
output = self.engine.render_to_string('comment-syntax04')
self.assertEqual(output, 'foo')
@setup({'comment-syntax05': 'foo{# {% somerandomtag %} #}'})
def test_comment_syntax05(self):
output = self.engine.render_to_string('comment-syntax05')
self.assertEqual(output, 'foo')
@setup({'comment-syntax06': 'foo{# {% #}'})
def test_comment_syntax06(self):
output = self.engine.render_to_string('comment-syntax06')
self.assertEqual(output, 'foo')
@setup({'comment-syntax07': 'foo{# %} #}'})
def test_comment_syntax07(self):
output = self.engine.render_to_string('comment-syntax07')
self.assertEqual(output, 'foo')
@setup({'comment-syntax08': 'foo{# %} #}bar'})
def test_comment_syntax08(self):
output = self.engine.render_to_string('comment-syntax08')
self.assertEqual(output, 'foobar')
@setup({'comment-syntax09': 'foo{# {{ #}'})
def test_comment_syntax09(self):
output = self.engine.render_to_string('comment-syntax09')
self.assertEqual(output, 'foo')
@setup({'comment-syntax10': 'foo{# }} #}'})
def test_comment_syntax10(self):
output = self.engine.render_to_string('comment-syntax10')
self.assertEqual(output, 'foo')
@setup({'comment-syntax11': 'foo{# { #}'})
def test_comment_syntax11(self):
output = self.engine.render_to_string('comment-syntax11')
self.assertEqual(output, 'foo')
@setup({'comment-syntax12': 'foo{# } #}'})
def test_comment_syntax12(self):
output = self.engine.render_to_string('comment-syntax12')
self.assertEqual(output, 'foo')
@setup({'comment-tag01': '{% comment %}this is hidden{% endcomment %}hello'})
def test_comment_tag01(self):
output = self.engine.render_to_string('comment-tag01')
self.assertEqual(output, 'hello')
@setup({'comment-tag02': '{% comment %}this is hidden{% endcomment %}'
'hello{% comment %}foo{% endcomment %}'})
def test_comment_tag02(self):
output = self.engine.render_to_string('comment-tag02')
self.assertEqual(output, 'hello')
@setup({'comment-tag03': 'foo{% comment %} {% if %} {% endcomment %}'})
def test_comment_tag03(self):
output = self.engine.render_to_string('comment-tag03')
self.assertEqual(output, 'foo')
@setup({'comment-tag04': 'foo{% comment %} {% endblock %} {% endcomment %}'})
def test_comment_tag04(self):
output = self.engine.render_to_string('comment-tag04')
self.assertEqual(output, 'foo')
@setup({'comment-tag05': 'foo{% comment %} {% somerandomtag %} {% endcomment %}'})
def test_comment_tag05(self):
output = self.engine.render_to_string('comment-tag05')
self.assertEqual(output, 'foo')
|
bsd-3-clause
|
dufferzafar/mitmproxy
|
test/pathod/test_language_http2.py
|
3
|
6779
|
from six import BytesIO
from netlib import tcp
from netlib.http import user_agents
from pathod import language
from pathod.language import http2
from pathod.protocols.http2 import HTTP2StateProtocol
from . import tutils
def parse_request(s):
return next(language.parse_pathoc(s, True))
def parse_response(s):
return next(language.parse_pathod(s, True))
def default_settings():
return language.Settings(
request_host="foo.com",
protocol=HTTP2StateProtocol(tcp.TCPClient(('localhost', 1234)))
)
def test_make_error_response():
d = BytesIO()
s = http2.make_error_response("foo", "bar")
language.serve(s, d, default_settings())
class TestRequest:
def test_cached_values(self):
req = parse_request("get:/")
req_id = id(req)
assert req_id == id(req.resolve(default_settings()))
assert req.values(default_settings()) == req.values(default_settings())
def test_nonascii(self):
tutils.raises("ascii", parse_request, "get:\xf0")
def test_err(self):
tutils.raises(language.ParseException, parse_request, 'GET')
def test_simple(self):
r = parse_request('GET:"/foo"')
assert r.method.string() == b"GET"
assert r.path.string() == b"/foo"
r = parse_request('GET:/foo')
assert r.path.string() == b"/foo"
def test_multiple(self):
r = list(language.parse_pathoc("GET:/ PUT:/"))
assert r[0].method.string() == b"GET"
assert r[1].method.string() == b"PUT"
assert len(r) == 2
l = """
GET
"/foo"
PUT
"/foo
bar"
"""
r = list(language.parse_pathoc(l, True))
assert len(r) == 2
assert r[0].method.string() == b"GET"
assert r[1].method.string() == b"PUT"
l = """
get:"http://localhost:9999/p/200"
get:"http://localhost:9999/p/200"
"""
r = list(language.parse_pathoc(l, True))
assert len(r) == 2
assert r[0].method.string() == b"GET"
assert r[1].method.string() == b"GET"
def test_render_simple(self):
s = BytesIO()
r = parse_request("GET:'/foo'")
assert language.serve(
r,
s,
default_settings(),
)
def test_raw_content_length(self):
r = parse_request('GET:/:r')
assert len(r.headers) == 0
r = parse_request('GET:/:r:b"foobar"')
assert len(r.headers) == 0
r = parse_request('GET:/')
assert len(r.headers) == 1
assert r.headers[0].values(default_settings()) == (b"content-length", b"0")
r = parse_request('GET:/:b"foobar"')
assert len(r.headers) == 1
assert r.headers[0].values(default_settings()) == (b"content-length", b"6")
r = parse_request('GET:/:b"foobar":h"content-length"="42"')
assert len(r.headers) == 1
assert r.headers[0].values(default_settings()) == (b"content-length", b"42")
r = parse_request('GET:/:r:b"foobar":h"content-length"="42"')
assert len(r.headers) == 1
assert r.headers[0].values(default_settings()) == (b"content-length", b"42")
def test_content_type(self):
r = parse_request('GET:/:r:c"foobar"')
assert len(r.headers) == 1
assert r.headers[0].values(default_settings()) == (b"content-type", b"foobar")
def test_user_agent(self):
r = parse_request('GET:/:r:ua')
assert len(r.headers) == 1
assert r.headers[0].values(default_settings()) == (b"user-agent", user_agents.get_by_shortcut('a')[2].encode())
def test_render_with_headers(self):
s = BytesIO()
r = parse_request('GET:/foo:h"foo"="bar"')
assert language.serve(
r,
s,
default_settings(),
)
def test_nested_response(self):
l = "get:/p/:s'200'"
r = parse_request(l)
assert len(r.tokens) == 3
assert isinstance(r.tokens[2], http2.NestedResponse)
assert r.values(default_settings())
def test_render_with_body(self):
s = BytesIO()
r = parse_request("GET:'/foo':bfoobar")
assert language.serve(
r,
s,
default_settings(),
)
def test_spec(self):
def rt(s):
s = parse_request(s).spec()
assert parse_request(s).spec() == s
rt("get:/foo")
class TestResponse:
def test_cached_values(self):
res = parse_response("200")
res_id = id(res)
assert res_id == id(res.resolve(default_settings()))
assert res.values(default_settings()) == res.values(default_settings())
def test_nonascii(self):
tutils.raises("ascii", parse_response, "200:\xf0")
def test_err(self):
tutils.raises(language.ParseException, parse_response, 'GET:/')
def test_raw_content_length(self):
r = parse_response('200:r')
assert len(r.headers) == 0
r = parse_response('200')
assert len(r.headers) == 1
assert r.headers[0].values(default_settings()) == (b"content-length", b"0")
def test_content_type(self):
r = parse_response('200:r:c"foobar"')
assert len(r.headers) == 1
assert r.headers[0].values(default_settings()) == (b"content-type", b"foobar")
def test_simple(self):
r = parse_response('200:r:h"foo"="bar"')
assert r.status_code.string() == b"200"
assert len(r.headers) == 1
assert r.headers[0].values(default_settings()) == (b"foo", b"bar")
assert r.body is None
r = parse_response('200:r:h"foo"="bar":bfoobar:h"bla"="fasel"')
assert r.status_code.string() == b"200"
assert len(r.headers) == 2
assert r.headers[0].values(default_settings()) == (b"foo", b"bar")
assert r.headers[1].values(default_settings()) == (b"bla", b"fasel")
assert r.body.string() == b"foobar"
def test_render_simple(self):
s = BytesIO()
r = parse_response('200')
assert language.serve(
r,
s,
default_settings(),
)
def test_render_with_headers(self):
s = BytesIO()
r = parse_response('200:h"foo"="bar"')
assert language.serve(
r,
s,
default_settings(),
)
def test_render_with_body(self):
s = BytesIO()
r = parse_response('200:bfoobar')
assert language.serve(
r,
s,
default_settings(),
)
def test_spec(self):
def rt(s):
s = parse_response(s).spec()
assert parse_response(s).spec() == s
rt("200:bfoobar")
|
mit
|
havrlant/fca-search
|
src/tests/common_tests.py
|
1
|
2118
|
import unittest
import string
from common.list import splitlist
from common.string import sjoin, isletter, strip_accents, remove_nonletters,\
normalize_text, replace_white_spaces, replace_single, replace_dict
class TestString(unittest.TestCase):
def test_sjoin(self):
self.assertEqual('', sjoin([]))
self.assertEqual('123', sjoin([1,2,3]))
self.assertEqual('12[3, 4]', sjoin([1,2,[3,4]]))
def test_isletter(self):
        for char in 'dgpnéíáýžřčšěÉÍÁÝŽŘČŠČŠŮÚůú':  # map() is lazy on Python 3, so iterate explicitly
            self.assertTrue(isletter(char))
        for char in string.punctuation + string.digits:
            self.assertFalse(isletter(char))
def test_strip_accents(self):
self.assertEqual('escrzyaieuuESCRZYAIEUU', strip_accents('ěščřžýáíéúůĚŠČŘŽÝÁÍÉÚŮ'))
def test_remove_nonletters(self):
self.assertEqual('helloworld', remove_nonletters('hello__world!!! :-)) <3 :-|'))
self.assertEqual('hello world ', remove_nonletters('hello__world!?!', ' '))
def test_normalize_text(self):
self.assertEqual('háčky čárky to je věda dva tři',
normalize_text('Háčky čárky, to je věda! Dva + Tři = __?'))
def test_replace_white_spaces(self):
self.assertEqual('ssdasasasdasdaasdasd',
replace_white_spaces(' ssd as as as d asd a as dasd '))
self.assertEqual('text with white spaces',
replace_white_spaces('text with white spaces', ' '))
def test_replace_single(self):
self.assertEqual('??efghijklmnopqrstuvwxyz',
replace_single(string.ascii_lowercase, ['a', 'bcd', 'xx'], '?'))
def test_replace_dict(self):
self.assertEqual('?bcdefghijklmnopqrstuvwend',
replace_dict(string.ascii_lowercase, {'a':'?', 'xyz':'end'}))
class TestList(unittest.TestCase):
def test_splitlist(self):
self.assertEqual([[1, 2, 3], [4, 5, 6], [7, 8, 9]], splitlist([1,2,3,'x',4,5,6,'x',7,8,9], 'x'))
self.assertEqual([[1, 2, 3, 'x', 4, 5, 6, 'x', 7, 8, 9]], splitlist([1,2,3,'x',4,5,6,'x',7,8,9], 'xy'))
self.assertEqual([[], [], []], splitlist(['a','a'], 'a'))
if __name__ == "__main__":
#import sys;sys.argv = ['', 'Test.testName']
unittest.main()
|
bsd-2-clause
|
resmo/ansible
|
lib/ansible/plugins/inventory/yaml.py
|
54
|
7175
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
inventory: yaml
version_added: "2.4"
short_description: Uses a specific YAML file as an inventory source.
description:
- "YAML-based inventory, should start with the C(all) group and contain hosts/vars/children entries."
- Host entries can have sub-entries defined, which will be treated as variables.
- Vars entries are normal group vars.
- "Children are 'child groups', which can also have their own vars/hosts/children and so on."
- File MUST have a valid extension, defined in configuration.
notes:
- If you want to set vars for the C(all) group inside the inventory file, the C(all) group must be the first entry in the file.
- Whitelisted in configuration by default.
options:
yaml_extensions:
description: list of 'valid' extensions for files containing YAML
type: list
default: ['.yaml', '.yml', '.json']
env:
- name: ANSIBLE_YAML_FILENAME_EXT
- name: ANSIBLE_INVENTORY_PLUGIN_EXTS
ini:
- key: yaml_valid_extensions
section: defaults
- section: inventory_plugin_yaml
key: yaml_valid_extensions
'''
EXAMPLES = '''
all: # keys must be unique, i.e. only one 'hosts' per group
hosts:
test1:
test2:
host_var: value
vars:
group_all_var: value
children: # key order does not matter, indentation does
other_group:
children:
group_x:
hosts:
test5 # Note that one machine will work without a colon
#group_x:
# hosts:
# test5 # But this won't
# test7 #
group_y:
hosts:
test6: # So always use a colon
vars:
g2_var2: value3
hosts:
test4:
ansible_host: 127.0.0.1
last_group:
hosts:
test1 # same host as above, additional group membership
vars:
group_last_var: value
'''
import os
from ansible.errors import AnsibleError, AnsibleParserError
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.common._collections_compat import MutableMapping
from ansible.plugins.inventory import BaseFileInventoryPlugin
NoneType = type(None)
class InventoryModule(BaseFileInventoryPlugin):
NAME = 'yaml'
def __init__(self):
super(InventoryModule, self).__init__()
def verify_file(self, path):
valid = False
if super(InventoryModule, self).verify_file(path):
file_name, ext = os.path.splitext(path)
if not ext or ext in self.get_option('yaml_extensions'):
valid = True
return valid
def parse(self, inventory, loader, path, cache=True):
''' parses the inventory file '''
super(InventoryModule, self).parse(inventory, loader, path)
self.set_options()
try:
data = self.loader.load_from_file(path, cache=False)
except Exception as e:
raise AnsibleParserError(e)
if not data:
raise AnsibleParserError('Parsed empty YAML file')
elif not isinstance(data, MutableMapping):
raise AnsibleParserError('YAML inventory has invalid structure, it should be a dictionary, got: %s' % type(data))
elif data.get('plugin'):
raise AnsibleParserError('Plugin configuration YAML file, not YAML inventory')
# We expect top level keys to correspond to groups, iterate over them
        # to get hosts, vars and subgroups (which we iterate over recursively)
if isinstance(data, MutableMapping):
for group_name in data:
self._parse_group(group_name, data[group_name])
else:
raise AnsibleParserError("Invalid data from file, expected dictionary and got:\n\n%s" % to_native(data))
def _parse_group(self, group, group_data):
if isinstance(group_data, (MutableMapping, NoneType)):
try:
group = self.inventory.add_group(group)
except AnsibleError as e:
raise AnsibleParserError("Unable to add group %s: %s" % (group, to_text(e)))
if group_data is not None:
# make sure they are dicts
for section in ['vars', 'children', 'hosts']:
if section in group_data:
# convert strings to dicts as these are allowed
if isinstance(group_data[section], string_types):
group_data[section] = {group_data[section]: None}
if not isinstance(group_data[section], (MutableMapping, NoneType)):
raise AnsibleParserError('Invalid "%s" entry for "%s" group, requires a dictionary, found "%s" instead.' %
(section, group, type(group_data[section])))
for key in group_data:
if not isinstance(group_data[key], (MutableMapping, NoneType)):
self.display.warning('Skipping key (%s) in group (%s) as it is not a mapping, it is a %s' % (key, group, type(group_data[key])))
continue
if isinstance(group_data[key], NoneType):
self.display.vvv('Skipping empty key (%s) in group (%s)' % (key, group))
elif key == 'vars':
for var in group_data[key]:
self.inventory.set_variable(group, var, group_data[key][var])
elif key == 'children':
for subgroup in group_data[key]:
subgroup = self._parse_group(subgroup, group_data[key][subgroup])
self.inventory.add_child(group, subgroup)
elif key == 'hosts':
for host_pattern in group_data[key]:
hosts, port = self._parse_host(host_pattern)
self._populate_host_vars(hosts, group_data[key][host_pattern] or {}, group, port)
else:
self.display.warning('Skipping unexpected key (%s) in group (%s), only "vars", "children" and "hosts" are valid' % (key, group))
else:
self.display.warning("Skipping '%s' as this is not a valid group definition" % group)
return group
def _parse_host(self, host_pattern):
'''
Each host key can be a pattern, try to process it and add variables as needed
'''
(hostnames, port) = self._expand_hostpattern(host_pattern)
return hostnames, port
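# Illustrative note, not part of the original plugin: _parse_host delegates to
# the base class' _expand_hostpattern, so a 'hosts' key written as a range
# pattern, e.g.
#   hosts:
#     web[01:03].example.com:2222:
# would (assuming the usual Ansible range syntax) make _parse_host return
#   (['web01.example.com', 'web02.example.com', 'web03.example.com'], 2222)
# and _parse_group above would then pass each hostname to _populate_host_vars.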
|
gpl-3.0
|
molobrakos/home-assistant
|
tests/components/mqtt/test_vacuum.py
|
1
|
26645
|
"""The tests for the Mqtt vacuum platform."""
import copy
import json
import pytest
from homeassistant.components import mqtt, vacuum
from homeassistant.components.mqtt import (
CONF_COMMAND_TOPIC, vacuum as mqttvacuum)
from homeassistant.components.mqtt.discovery import async_start
from homeassistant.components.vacuum import (
ATTR_BATTERY_ICON, ATTR_BATTERY_LEVEL, ATTR_FAN_SPEED, ATTR_STATUS)
from homeassistant.const import (
CONF_NAME, CONF_PLATFORM, STATE_OFF, STATE_ON, STATE_UNAVAILABLE)
from homeassistant.setup import async_setup_component
from tests.common import (
MockConfigEntry, async_fire_mqtt_message, async_mock_mqtt_component)
from tests.components.vacuum import common
default_config = {
CONF_PLATFORM: 'mqtt',
CONF_NAME: 'mqtttest',
CONF_COMMAND_TOPIC: 'vacuum/command',
mqttvacuum.CONF_SEND_COMMAND_TOPIC: 'vacuum/send_command',
mqttvacuum.CONF_BATTERY_LEVEL_TOPIC: 'vacuum/state',
mqttvacuum.CONF_BATTERY_LEVEL_TEMPLATE:
'{{ value_json.battery_level }}',
mqttvacuum.CONF_CHARGING_TOPIC: 'vacuum/state',
mqttvacuum.CONF_CHARGING_TEMPLATE: '{{ value_json.charging }}',
mqttvacuum.CONF_CLEANING_TOPIC: 'vacuum/state',
mqttvacuum.CONF_CLEANING_TEMPLATE: '{{ value_json.cleaning }}',
mqttvacuum.CONF_DOCKED_TOPIC: 'vacuum/state',
mqttvacuum.CONF_DOCKED_TEMPLATE: '{{ value_json.docked }}',
mqttvacuum.CONF_ERROR_TOPIC: 'vacuum/state',
mqttvacuum.CONF_ERROR_TEMPLATE: '{{ value_json.error }}',
mqttvacuum.CONF_FAN_SPEED_TOPIC: 'vacuum/state',
mqttvacuum.CONF_FAN_SPEED_TEMPLATE: '{{ value_json.fan_speed }}',
mqttvacuum.CONF_SET_FAN_SPEED_TOPIC: 'vacuum/set_fan_speed',
mqttvacuum.CONF_FAN_SPEED_LIST: ['min', 'medium', 'high', 'max'],
}
@pytest.fixture
def mock_publish(hass):
"""Initialize components."""
yield hass.loop.run_until_complete(async_mock_mqtt_component(hass))
async def test_default_supported_features(hass, mock_publish):
"""Test that the correct supported features."""
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
entity = hass.states.get('vacuum.mqtttest')
entity_features = \
entity.attributes.get(mqttvacuum.CONF_SUPPORTED_FEATURES, 0)
assert sorted(mqttvacuum.services_to_strings(entity_features)) == \
sorted(['turn_on', 'turn_off', 'stop',
'return_home', 'battery', 'status',
'clean_spot'])
async def test_all_commands(hass, mock_publish):
"""Test simple commands to the vacuum."""
default_config[mqttvacuum.CONF_SUPPORTED_FEATURES] = \
mqttvacuum.services_to_strings(mqttvacuum.ALL_SERVICES)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
common.turn_on(hass, 'vacuum.mqtttest')
await hass.async_block_till_done()
await hass.async_block_till_done()
mock_publish.async_publish.assert_called_once_with(
'vacuum/command', 'turn_on', 0, False)
mock_publish.async_publish.reset_mock()
common.turn_off(hass, 'vacuum.mqtttest')
await hass.async_block_till_done()
await hass.async_block_till_done()
mock_publish.async_publish.assert_called_once_with(
'vacuum/command', 'turn_off', 0, False)
mock_publish.async_publish.reset_mock()
common.stop(hass, 'vacuum.mqtttest')
await hass.async_block_till_done()
await hass.async_block_till_done()
mock_publish.async_publish.assert_called_once_with(
'vacuum/command', 'stop', 0, False)
mock_publish.async_publish.reset_mock()
common.clean_spot(hass, 'vacuum.mqtttest')
await hass.async_block_till_done()
await hass.async_block_till_done()
mock_publish.async_publish.assert_called_once_with(
'vacuum/command', 'clean_spot', 0, False)
mock_publish.async_publish.reset_mock()
common.locate(hass, 'vacuum.mqtttest')
await hass.async_block_till_done()
await hass.async_block_till_done()
mock_publish.async_publish.assert_called_once_with(
'vacuum/command', 'locate', 0, False)
mock_publish.async_publish.reset_mock()
common.start_pause(hass, 'vacuum.mqtttest')
await hass.async_block_till_done()
await hass.async_block_till_done()
mock_publish.async_publish.assert_called_once_with(
'vacuum/command', 'start_pause', 0, False)
mock_publish.async_publish.reset_mock()
common.return_to_base(hass, 'vacuum.mqtttest')
await hass.async_block_till_done()
await hass.async_block_till_done()
mock_publish.async_publish.assert_called_once_with(
'vacuum/command', 'return_to_base', 0, False)
mock_publish.async_publish.reset_mock()
common.set_fan_speed(hass, 'high', 'vacuum.mqtttest')
await hass.async_block_till_done()
await hass.async_block_till_done()
mock_publish.async_publish.assert_called_once_with(
'vacuum/set_fan_speed', 'high', 0, False)
mock_publish.async_publish.reset_mock()
common.send_command(hass, '44 FE 93', entity_id='vacuum.mqtttest')
await hass.async_block_till_done()
await hass.async_block_till_done()
mock_publish.async_publish.assert_called_once_with(
'vacuum/send_command', '44 FE 93', 0, False)
mock_publish.async_publish.reset_mock()
common.send_command(hass, '44 FE 93', {"key": "value"},
entity_id='vacuum.mqtttest')
await hass.async_block_till_done()
await hass.async_block_till_done()
assert json.loads(mock_publish.async_publish.mock_calls[-1][1][1]) == {
"command": "44 FE 93",
"key": "value"
}
async def test_status(hass, mock_publish):
"""Test status updates from the vacuum."""
default_config[mqttvacuum.CONF_SUPPORTED_FEATURES] = \
mqttvacuum.services_to_strings(mqttvacuum.ALL_SERVICES)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
message = """{
"battery_level": 54,
"cleaning": true,
"docked": false,
"charging": false,
"fan_speed": "max"
}"""
async_fire_mqtt_message(hass, 'vacuum/state', message)
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert STATE_ON == state.state
assert 'mdi:battery-50' == \
state.attributes.get(ATTR_BATTERY_ICON)
assert 54 == state.attributes.get(ATTR_BATTERY_LEVEL)
assert 'max' == state.attributes.get(ATTR_FAN_SPEED)
message = """{
"battery_level": 61,
"docked": true,
"cleaning": false,
"charging": true,
"fan_speed": "min"
}"""
async_fire_mqtt_message(hass, 'vacuum/state', message)
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert STATE_OFF == state.state
assert 'mdi:battery-charging-60' == \
state.attributes.get(ATTR_BATTERY_ICON)
assert 61 == state.attributes.get(ATTR_BATTERY_LEVEL)
assert 'min' == state.attributes.get(ATTR_FAN_SPEED)
async def test_status_battery(hass, mock_publish):
"""Test status updates from the vacuum."""
default_config[mqttvacuum.CONF_SUPPORTED_FEATURES] = \
mqttvacuum.services_to_strings(mqttvacuum.ALL_SERVICES)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
message = """{
"battery_level": 54
}"""
async_fire_mqtt_message(hass, 'vacuum/state', message)
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert 'mdi:battery-50' == \
state.attributes.get(ATTR_BATTERY_ICON)
async def test_status_cleaning(hass, mock_publish):
"""Test status updates from the vacuum."""
default_config[mqttvacuum.CONF_SUPPORTED_FEATURES] = \
mqttvacuum.services_to_strings(mqttvacuum.ALL_SERVICES)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
message = """{
"cleaning": true
}"""
async_fire_mqtt_message(hass, 'vacuum/state', message)
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert STATE_ON == state.state
async def test_status_docked(hass, mock_publish):
"""Test status updates from the vacuum."""
default_config[mqttvacuum.CONF_SUPPORTED_FEATURES] = \
mqttvacuum.services_to_strings(mqttvacuum.ALL_SERVICES)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
message = """{
"docked": true
}"""
async_fire_mqtt_message(hass, 'vacuum/state', message)
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert STATE_OFF == state.state
async def test_status_charging(hass, mock_publish):
"""Test status updates from the vacuum."""
default_config[mqttvacuum.CONF_SUPPORTED_FEATURES] = \
mqttvacuum.services_to_strings(mqttvacuum.ALL_SERVICES)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
message = """{
"charging": true
}"""
async_fire_mqtt_message(hass, 'vacuum/state', message)
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert 'mdi:battery-outline' == \
state.attributes.get(ATTR_BATTERY_ICON)
async def test_status_fan_speed(hass, mock_publish):
"""Test status updates from the vacuum."""
default_config[mqttvacuum.CONF_SUPPORTED_FEATURES] = \
mqttvacuum.services_to_strings(mqttvacuum.ALL_SERVICES)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
message = """{
"fan_speed": "max"
}"""
async_fire_mqtt_message(hass, 'vacuum/state', message)
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert 'max' == state.attributes.get(ATTR_FAN_SPEED)
async def test_status_error(hass, mock_publish):
"""Test status updates from the vacuum."""
default_config[mqttvacuum.CONF_SUPPORTED_FEATURES] = \
mqttvacuum.services_to_strings(mqttvacuum.ALL_SERVICES)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
message = """{
"error": "Error1"
}"""
async_fire_mqtt_message(hass, 'vacuum/state', message)
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert 'Error: Error1' == state.attributes.get(ATTR_STATUS)
async def test_battery_template(hass, mock_publish):
"""Test that you can use non-default templates for battery_level."""
default_config.update({
mqttvacuum.CONF_SUPPORTED_FEATURES:
mqttvacuum.services_to_strings(mqttvacuum.ALL_SERVICES),
mqttvacuum.CONF_BATTERY_LEVEL_TOPIC: "retroroomba/battery_level",
mqttvacuum.CONF_BATTERY_LEVEL_TEMPLATE: "{{ value }}"
})
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
async_fire_mqtt_message(hass, 'retroroomba/battery_level', '54')
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert 54 == state.attributes.get(ATTR_BATTERY_LEVEL)
assert state.attributes.get(ATTR_BATTERY_ICON) == \
'mdi:battery-50'
async def test_status_invalid_json(hass, mock_publish):
"""Test to make sure nothing breaks if the vacuum sends bad JSON."""
default_config[mqttvacuum.CONF_SUPPORTED_FEATURES] = \
mqttvacuum.services_to_strings(mqttvacuum.ALL_SERVICES)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
async_fire_mqtt_message(hass, 'vacuum/state', '{"asdfasas false}')
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert STATE_OFF == state.state
assert "Stopped" == state.attributes.get(ATTR_STATUS)
async def test_missing_battery_template(hass, mock_publish):
"""Test to make sure missing template is not allowed."""
config = copy.deepcopy(default_config)
config.pop(mqttvacuum.CONF_BATTERY_LEVEL_TEMPLATE)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: config,
})
state = hass.states.get('vacuum.mqtttest')
assert state is None
async def test_missing_charging_template(hass, mock_publish):
"""Test to make sure missing template is not allowed."""
config = copy.deepcopy(default_config)
config.pop(mqttvacuum.CONF_CHARGING_TEMPLATE)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: config,
})
state = hass.states.get('vacuum.mqtttest')
assert state is None
async def test_missing_cleaning_template(hass, mock_publish):
"""Test to make sure missing template is not allowed."""
config = copy.deepcopy(default_config)
config.pop(mqttvacuum.CONF_CLEANING_TEMPLATE)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: config,
})
state = hass.states.get('vacuum.mqtttest')
assert state is None
async def test_missing_docked_template(hass, mock_publish):
"""Test to make sure missing template is not allowed."""
config = copy.deepcopy(default_config)
config.pop(mqttvacuum.CONF_DOCKED_TEMPLATE)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: config,
})
state = hass.states.get('vacuum.mqtttest')
assert state is None
async def test_missing_error_template(hass, mock_publish):
"""Test to make sure missing template is not allowed."""
config = copy.deepcopy(default_config)
config.pop(mqttvacuum.CONF_ERROR_TEMPLATE)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: config,
})
state = hass.states.get('vacuum.mqtttest')
assert state is None
async def test_missing_fan_speed_template(hass, mock_publish):
"""Test to make sure missing template is not allowed."""
config = copy.deepcopy(default_config)
config.pop(mqttvacuum.CONF_FAN_SPEED_TEMPLATE)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: config,
})
state = hass.states.get('vacuum.mqtttest')
assert state is None
async def test_default_availability_payload(hass, mock_publish):
"""Test availability by default payload with defined topic."""
default_config.update({
'availability_topic': 'availability-topic'
})
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
state = hass.states.get('vacuum.mqtttest')
assert STATE_UNAVAILABLE == state.state
async_fire_mqtt_message(hass, 'availability-topic', 'online')
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert STATE_UNAVAILABLE != state.state
async_fire_mqtt_message(hass, 'availability-topic', 'offline')
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert STATE_UNAVAILABLE == state.state
async def test_custom_availability_payload(hass, mock_publish):
"""Test availability by custom payload with defined topic."""
default_config.update({
'availability_topic': 'availability-topic',
'payload_available': 'good',
'payload_not_available': 'nogood'
})
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: default_config,
})
state = hass.states.get('vacuum.mqtttest')
assert STATE_UNAVAILABLE == state.state
async_fire_mqtt_message(hass, 'availability-topic', 'good')
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert STATE_UNAVAILABLE != state.state
async_fire_mqtt_message(hass, 'availability-topic', 'nogood')
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.mqtttest')
assert STATE_UNAVAILABLE == state.state
async def test_discovery_removal_vacuum(hass, mock_publish):
"""Test removal of discovered vacuum."""
entry = MockConfigEntry(domain=mqtt.DOMAIN)
await async_start(hass, 'homeassistant', {}, entry)
data = (
'{ "name": "Beer",'
' "command_topic": "test_topic" }'
)
async_fire_mqtt_message(hass, 'homeassistant/vacuum/bla/config',
data)
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.beer')
assert state is not None
assert state.name == 'Beer'
async_fire_mqtt_message(hass, 'homeassistant/vacuum/bla/config', '')
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.beer')
assert state is None
async def test_discovery_broken(hass, mqtt_mock, caplog):
"""Test handling of bad discovery message."""
entry = MockConfigEntry(domain=mqtt.DOMAIN)
await async_start(hass, 'homeassistant', {}, entry)
data1 = (
'{ "name": "Beer",'
' "command_topic": "test_topic#" }'
)
data2 = (
'{ "name": "Milk",'
' "command_topic": "test_topic" }'
)
async_fire_mqtt_message(hass, 'homeassistant/vacuum/bla/config',
data1)
await hass.async_block_till_done()
state = hass.states.get('vacuum.beer')
assert state is None
async_fire_mqtt_message(hass, 'homeassistant/vacuum/bla/config',
data2)
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.milk')
assert state is not None
assert state.name == 'Milk'
state = hass.states.get('vacuum.beer')
assert state is None
async def test_discovery_update_vacuum(hass, mock_publish):
"""Test update of discovered vacuum."""
entry = MockConfigEntry(domain=mqtt.DOMAIN)
await async_start(hass, 'homeassistant', {}, entry)
data1 = (
'{ "name": "Beer",'
' "command_topic": "test_topic" }'
)
data2 = (
'{ "name": "Milk",'
' "command_topic": "test_topic" }'
)
async_fire_mqtt_message(hass, 'homeassistant/vacuum/bla/config',
data1)
await hass.async_block_till_done()
state = hass.states.get('vacuum.beer')
assert state is not None
assert state.name == 'Beer'
async_fire_mqtt_message(hass, 'homeassistant/vacuum/bla/config',
data2)
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.beer')
assert state is not None
assert state.name == 'Milk'
state = hass.states.get('vacuum.milk')
assert state is None
async def test_setting_attribute_via_mqtt_json_message(hass, mqtt_mock):
"""Test the setting of attribute via MQTT with JSON payload."""
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: {
'platform': 'mqtt',
'name': 'test',
'state_topic': 'test-topic',
'json_attributes_topic': 'attr-topic'
}
})
async_fire_mqtt_message(hass, 'attr-topic', '{ "val": "100" }')
await hass.async_block_till_done()
state = hass.states.get('vacuum.test')
assert '100' == state.attributes.get('val')
async def test_update_with_json_attrs_not_dict(hass, mqtt_mock, caplog):
"""Test attributes get extracted from a JSON result."""
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: {
'platform': 'mqtt',
'name': 'test',
'state_topic': 'test-topic',
'json_attributes_topic': 'attr-topic'
}
})
async_fire_mqtt_message(hass, 'attr-topic', '[ "list", "of", "things"]')
await hass.async_block_till_done()
state = hass.states.get('vacuum.test')
assert state.attributes.get('val') is None
assert 'JSON result was not a dictionary' in caplog.text
async def test_update_with_json_attrs_bad_JSON(hass, mqtt_mock, caplog):
"""Test attributes get extracted from a JSON result."""
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: {
'platform': 'mqtt',
'name': 'test',
'state_topic': 'test-topic',
'json_attributes_topic': 'attr-topic'
}
})
async_fire_mqtt_message(hass, 'attr-topic', 'This is not JSON')
await hass.async_block_till_done()
state = hass.states.get('vacuum.test')
assert state.attributes.get('val') is None
assert 'Erroneous JSON: This is not JSON' in caplog.text
async def test_discovery_update_attr(hass, mqtt_mock, caplog):
"""Test update of discovered MQTTAttributes."""
entry = MockConfigEntry(domain=mqtt.DOMAIN)
await async_start(hass, 'homeassistant', {}, entry)
data1 = (
'{ "name": "Beer",'
' "command_topic": "test_topic",'
' "json_attributes_topic": "attr-topic1" }'
)
data2 = (
'{ "name": "Beer",'
' "command_topic": "test_topic",'
' "json_attributes_topic": "attr-topic2" }'
)
async_fire_mqtt_message(hass, 'homeassistant/vacuum/bla/config',
data1)
await hass.async_block_till_done()
async_fire_mqtt_message(hass, 'attr-topic1', '{ "val": "100" }')
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.beer')
assert '100' == state.attributes.get('val')
# Change json_attributes_topic
async_fire_mqtt_message(hass, 'homeassistant/vacuum/bla/config',
data2)
await hass.async_block_till_done()
await hass.async_block_till_done()
# Verify we are no longer subscribing to the old topic
async_fire_mqtt_message(hass, 'attr-topic1', '{ "val": "50" }')
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.beer')
assert '100' == state.attributes.get('val')
# Verify we are subscribing to the new topic
async_fire_mqtt_message(hass, 'attr-topic2', '{ "val": "75" }')
await hass.async_block_till_done()
await hass.async_block_till_done()
state = hass.states.get('vacuum.beer')
assert '75' == state.attributes.get('val')
async def test_unique_id(hass, mock_publish):
"""Test unique id option only creates one vacuum per unique_id."""
await async_mock_mqtt_component(hass)
assert await async_setup_component(hass, vacuum.DOMAIN, {
vacuum.DOMAIN: [{
'platform': 'mqtt',
'name': 'Test 1',
'command_topic': 'command-topic',
'unique_id': 'TOTALLY_UNIQUE'
}, {
'platform': 'mqtt',
'name': 'Test 2',
'command_topic': 'command-topic',
'unique_id': 'TOTALLY_UNIQUE'
}]
})
async_fire_mqtt_message(hass, 'test-topic', 'payload')
await hass.async_block_till_done()
await hass.async_block_till_done()
assert len(hass.states.async_entity_ids()) == 2
# all vacuums group is 1, unique id created is 1
async def test_entity_device_info_with_identifier(hass, mock_publish):
"""Test MQTT vacuum device registry integration."""
entry = MockConfigEntry(domain=mqtt.DOMAIN)
entry.add_to_hass(hass)
await async_start(hass, 'homeassistant', {}, entry)
registry = await hass.helpers.device_registry.async_get_registry()
data = json.dumps({
'platform': 'mqtt',
'name': 'Test 1',
'command_topic': 'test-command-topic',
'device': {
'identifiers': ['helloworld'],
'connections': [
["mac", "02:5b:26:a8:dc:12"],
],
'manufacturer': 'Whatever',
'name': 'Beer',
'model': 'Glass',
'sw_version': '0.1-beta',
},
'unique_id': 'veryunique'
})
async_fire_mqtt_message(hass, 'homeassistant/vacuum/bla/config',
data)
await hass.async_block_till_done()
await hass.async_block_till_done()
device = registry.async_get_device({('mqtt', 'helloworld')}, set())
assert device is not None
assert device.identifiers == {('mqtt', 'helloworld')}
assert device.connections == {('mac', "02:5b:26:a8:dc:12")}
assert device.manufacturer == 'Whatever'
assert device.name == 'Beer'
assert device.model == 'Glass'
assert device.sw_version == '0.1-beta'
async def test_entity_device_info_update(hass, mqtt_mock):
"""Test device registry update."""
entry = MockConfigEntry(domain=mqtt.DOMAIN)
entry.add_to_hass(hass)
await async_start(hass, 'homeassistant', {}, entry)
registry = await hass.helpers.device_registry.async_get_registry()
config = {
'platform': 'mqtt',
'name': 'Test 1',
'state_topic': 'test-topic',
'command_topic': 'test-command-topic',
'device': {
'identifiers': ['helloworld'],
'connections': [
["mac", "02:5b:26:a8:dc:12"],
],
'manufacturer': 'Whatever',
'name': 'Beer',
'model': 'Glass',
'sw_version': '0.1-beta',
},
'unique_id': 'veryunique'
}
data = json.dumps(config)
async_fire_mqtt_message(hass, 'homeassistant/vacuum/bla/config',
data)
await hass.async_block_till_done()
await hass.async_block_till_done()
device = registry.async_get_device({('mqtt', 'helloworld')}, set())
assert device is not None
assert device.name == 'Beer'
config['device']['name'] = 'Milk'
data = json.dumps(config)
async_fire_mqtt_message(hass, 'homeassistant/vacuum/bla/config',
data)
await hass.async_block_till_done()
await hass.async_block_till_done()
device = registry.async_get_device({('mqtt', 'helloworld')}, set())
assert device is not None
assert device.name == 'Milk'
|
apache-2.0
|
orgito/ansible
|
lib/ansible/modules/storage/purestorage/purefb_dsrole.py
|
11
|
6100
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2018, Simon Dodsley ([email protected])
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: purefb_dsrole
version_added: '2.8'
short_description: Configure FlashBlade Management Directory Service Roles
description:
- Set or erase directory services role configurations.
author:
- Simon Dodsley (@sdodsley)
options:
state:
description:
- Create or delete directory service role
default: present
choices: [ absent, present ]
role:
description:
- The directory service role to work on
choices: [ array_admin, ops_admin, readonly, storage_admin ]
group_base:
description:
- Specifies where the configured group is located in the directory
tree. This field consists of Organizational Units (OUs) that combine
with the base DN attribute and the configured group CNs to complete
the full Distinguished Name of the groups. The group base should
specify OU= for each OU and multiple OUs should be separated by commas.
The order of OUs is important and should get larger in scope from left
to right.
- Each OU should not exceed 64 characters in length.
group:
description:
- Sets the common Name (CN) of the configured directory service group
containing users for the FlashBlade. This name should be just the
Common Name of the group without the CN= specifier.
- Common Names should not exceed 64 characters in length.
extends_documentation_fragment:
- purestorage.fb
'''
EXAMPLES = r'''
- name: Delete existing array_admin directory service role
purefb_dsrole:
role: array_admin
state: absent
fb_url: 10.10.10.2
api_token: e31060a7-21fc-e277-6240-25983c6c4592
- name: Create array_admin directory service role
purefb_dsrole:
role: array_admin
group_base: "OU=PureGroups,OU=SANManagers"
group: pureadmins
fb_url: 10.10.10.2
api_token: e31060a7-21fc-e277-6240-25983c6c4592
- name: Update ops_admin directory service role
purefb_dsrole:
role: ops_admin
group_base: "OU=PureGroups"
group: opsgroup
fb_url: 10.10.10.2
api_token: e31060a7-21fc-e277-6240-25983c6c4592
'''
RETURN = r'''
'''
HAS_PURITY_FB = True
try:
from purity_fb import DirectoryServiceRole
except ImportError:
HAS_PURITY_FB = False
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.pure import get_blade, purefb_argument_spec
def update_role(module, blade):
"""Update Directory Service Role"""
changed = False
role = blade.directory_services.list_directory_services_roles(names=[module.params['role']])
if role.items[0].group_base != module.params['group_base'] or role.items[0].group != module.params['group']:
try:
role = DirectoryServiceRole(group_base=module.params['group_base'],
group=module.params['group'])
blade.directory_services.update_directory_services_roles(names=[module.params['role']],
directory_service_role=role)
changed = True
except Exception:
module.fail_json(msg='Update Directory Service Role {0} failed'.format(module.params['role']))
module.exit_json(changed=changed)
def delete_role(module, blade):
"""Delete Directory Service Role"""
changed = False
try:
role = DirectoryServiceRole(group_base='',
group='')
blade.directory_services.update_directory_services_roles(names=[module.params['role']],
directory_service_role=role)
changed = True
except Exception:
module.fail_json(msg='Delete Directory Service Role {0} failed'.format(module.params['role']))
module.exit_json(changed=changed)
def create_role(module, blade):
"""Create Directory Service Role"""
changed = False
try:
role = DirectoryServiceRole(group_base=module.params['group_base'],
group=module.params['group'])
blade.directory_services.update_directory_services_roles(names=[module.params['role']],
directory_service_role=role)
changed = True
except Exception:
module.fail_json(msg='Create Directory Service Role {0} failed: Check configuration'.format(module.params['role']))
module.exit_json(changed=changed)
def main():
argument_spec = purefb_argument_spec()
argument_spec.update(dict(
role=dict(required=True, type='str', choices=['array_admin', 'ops_admin', 'readonly', 'storage_admin']),
state=dict(type='str', default='present', choices=['absent', 'present']),
group_base=dict(type='str'),
group=dict(type='str'),
))
required_together = [['group', 'group_base']]
module = AnsibleModule(argument_spec,
required_together=required_together,
supports_check_mode=False)
if not HAS_PURITY_FB:
module.fail_json(msg='purity_fb sdk is required for this module')
state = module.params['state']
blade = get_blade(module)
role_configured = False
role = blade.directory_services.list_directory_services_roles(names=[module.params['role']])
if role.items[0].group is not None:
role_configured = True
if state == 'absent' and role_configured:
delete_role(module, blade)
elif role_configured and state == 'present':
update_role(module, blade)
elif not role_configured and state == 'present':
create_role(module, blade)
else:
module.exit_json(changed=False)
if __name__ == '__main__':
main()
|
gpl-3.0
|
terbolous/CouchPotatoServer
|
libs/tmdb3/cache.py
|
32
|
4661
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#-----------------------
# Name: cache.py
# Python Library
# Author: Raymond Wagner
# Purpose: Caching framework to store TMDb API results
#-----------------------
import time
import os
from tmdb_exceptions import *
from cache_engine import Engines
import cache_null
import cache_file
class Cache(object):
"""
This class implements a cache framework, allowing selection of a
pluggable engine. The framework stores data in a key/value manner,
along with a lifetime, after which data will be expired and
pulled fresh next time it is requested from the cache.
This class defines a wrapper to be used with query functions. The
wrapper will automatically cache the inputs and outputs of the
wrapped function, pulling the output from local storage for
subsequent calls with those inputs.
"""
def __init__(self, engine=None, *args, **kwargs):
self._engine = None
self._data = {}
self._age = 0
self.configure(engine, *args, **kwargs)
def _import(self, data=None):
if data is None:
data = self._engine.get(self._age)
for obj in sorted(data, key=lambda x: x.creation):
if not obj.expired:
self._data[obj.key] = obj
self._age = max(self._age, obj.creation)
def _expire(self):
for k, v in self._data.items():
if v.expired:
del self._data[k]
def configure(self, engine, *args, **kwargs):
if engine is None:
engine = 'file'
elif engine not in Engines:
raise TMDBCacheError("Invalid cache engine specified: "+engine)
self._engine = Engines[engine](self)
self._engine.configure(*args, **kwargs)
def put(self, key, data, lifetime=60*60*12):
# pull existing data, so cache will be fresh when written back out
if self._engine is None:
raise TMDBCacheError("No cache engine configured")
self._expire()
self._import(self._engine.put(key, data, lifetime))
def get(self, key):
if self._engine is None:
raise TMDBCacheError("No cache engine configured")
self._expire()
if key not in self._data:
self._import()
try:
return self._data[key].data
except:
return None
def cached(self, callback):
"""
Returns a decorator that uses a callback to specify the key to use
for caching the responses from the decorated function.
"""
return self.Cached(self, callback)
class Cached(object):
def __init__(self, cache, callback, func=None, inst=None):
self.cache = cache
self.callback = callback
self.func = func
self.inst = inst
if func:
self.__module__ = func.__module__
self.__name__ = func.__name__
self.__doc__ = func.__doc__
def __call__(self, *args, **kwargs):
if self.func is None:
# decorator is waiting to be given a function
if len(kwargs) or (len(args) != 1):
raise TMDBCacheError(
'Cache.Cached decorator must be called with a single ' +
'callable argument before it can be used.')
elif args[0] is None:
raise TMDBCacheError(
'Cache.Cached decorator called before being given ' +
'a function to wrap.')
elif not callable(args[0]):
raise TMDBCacheError(
'Cache.Cached must be provided a callable object.')
return self.__class__(self.cache, self.callback, args[0])
elif self.inst.lifetime == 0:
# lifetime of zero means never cache
return self.func(*args, **kwargs)
else:
key = self.callback()
data = self.cache.get(key)
if data is None:
data = self.func(*args, **kwargs)
if hasattr(self.inst, 'lifetime'):
self.cache.put(key, data, self.inst.lifetime)
else:
self.cache.put(key, data)
return data
def __get__(self, inst, owner):
if inst is None:
return self
func = self.func.__get__(inst, owner)
callback = self.callback.__get__(inst, owner)
return self.__class__(self.cache, callback, func, inst)
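# --- Editor's hedged usage sketch; not part of the original module. ---
# A minimal illustration of the put()/get() API defined above. The engine
# name 'file' and its 'filename' keyword are assumptions based on how tmdb3
# is normally configured; substitute whatever engine is registered in Engines.
#
#   cache = Cache(engine='file', filename='/tmp/pytmdb3.cache')  # assumed kwargs
#   cache.put('person/287', {'name': 'Example'}, lifetime=3600)  # keep for one hour
#   cache.get('person/287')     # returns the stored dict until the lifetime expires
#   cache.get('missing-key')    # returns None for unknown or expired keys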
|
gpl-3.0
|
iModels/ffci
|
github/tests/Issue54.py
|
1
|
2363
|
# -*- coding: utf-8 -*-
# ########################## Copyrights and license ############################
# #
# Copyright 2012 Vincent Jacques <[email protected]> #
# Copyright 2012 Zearin <[email protected]> #
# Copyright 2013 Vincent Jacques <[email protected]> #
# #
# This file is part of PyGithub. http://jacquev6.github.com/PyGithub/ #
# #
# PyGithub is free software: you can redistribute it and/or modify it under #
# the terms of the GNU Lesser General Public License as published by the Free #
# Software Foundation, either version 3 of the License, or (at your option) #
# any later version. #
# #
# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #
# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #
# details. #
# #
# You should have received a copy of the GNU Lesser General Public License #
# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. #
# #
# ##############################################################################
import datetime
from . import Framework
class Issue54(Framework.TestCase):
def setUp(self):
Framework.TestCase.setUp(self)
self.repo = self.g.get_user().get_repo("TestRepo")
def testConversion(self):
commit = self.repo.get_git_commit("73f320ae06cd565cf38faca34b6a482addfc721b")
self.assertEqual(commit.message, "Test commit created around Fri, 13 Jul 2012 18:43:21 GMT, that is vendredi 13 juillet 2012 20:43:21 GMT+2\n")
self.assertEqual(commit.author.date, datetime.datetime(2012, 7, 13, 18, 47, 10))
|
mit
|
HackerEarth/django-allauth
|
allauth/socialaccount/providers/openid/provider.py
|
6
|
2030
|
from urlparse import urlparse
from django.core.urlresolvers import reverse
from django.utils.http import urlencode
from allauth.socialaccount import providers
from allauth.socialaccount.providers.base import Provider, ProviderAccount
class OpenIDAccount(ProviderAccount):
def get_brand(self):
ret = super(OpenIDAccount, self).get_brand()
domain = urlparse(self.account.uid).netloc
# FIXME: Instead of hardcoding, derive this from the domains
# listed in the openid endpoints setting.
provider_map = {'yahoo': dict(id='yahoo',
name='Yahoo'),
'hyves': dict(id='hyves',
name='Hyves'),
'google': dict(id='google',
name='Google')}
for d, p in provider_map.iteritems():
if domain.lower().find(d) >= 0:
ret = p
break
return ret
def __unicode__(self):
return self.account.uid
class OpenIDProvider(Provider):
id = 'openid'
name = 'OpenID'
package = 'allauth.socialaccount.providers.openid'
account_class = OpenIDAccount
def get_login_url(self, request, next=None, openid=None):
url = reverse('openid_login')
query = {}
if openid:
query['openid'] = openid
if next:
query['next'] = next
if query:
url += '?' + urlencode(query)
return url
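# Editor's hedged sketch (not part of the original module): for example,
# get_login_url(request, openid='http://me.yahoo.com', next='/accounts/')
# returns something like '/openid/login/?openid=...&next=...', where the
# leading path is whatever reverse('openid_login') resolves to in the
# project's URL configuration.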
def get_brands(self):
# These defaults are a bit too arbitrary...
default_servers = [dict(id='yahoo',
name='Yahoo',
openid_url='http://me.yahoo.com'),
dict(id='hyves',
name='Hyves',
openid_url='http://hyves.nl')]
return self.get_settings().get('SERVERS', default_servers)
providers.registry.register(OpenIDProvider)
|
mit
|
foreni-administrator/pyew
|
vtrace/envitools.py
|
17
|
4179
|
"""
Some tools that require the envi framework to be installed
"""
import sys
import traceback
import envi
import envi.archs.i386 as e_i386 # FIXME This should NOT have to be here
class RegisterException(Exception):
pass
def cmpRegs(emu, trace):
for idx,name in reg_map:
er = emu.getRegister(idx)
tr = trace.getRegisterByName(name)
if er != tr:
raise RegisterException("REGISTER MISMATCH: %s 0x%.8x 0x%.8x" % (name, tr, er))
return True
reg_map = [
(e_i386.REG_EAX, "eax"),
(e_i386.REG_ECX, "ecx"),
(e_i386.REG_EDX, "edx"),
(e_i386.REG_EBX, "ebx"),
(e_i386.REG_ESP, "esp"),
(e_i386.REG_EBP, "ebp"),
(e_i386.REG_ESI, "esi"),
(e_i386.REG_EDI, "edi"),
(e_i386.REG_EIP, "eip"),
(e_i386.REG_EFLAGS, "eflags")
]
#FIXME intel specific
def setRegs(emu, trace):
for idx,name in reg_map:
tr = trace.getRegisterByName(name)
emu.setRegister(idx, tr)
def emulatorFromTrace(trace):
"""
Produce an envi emulator for this tracer object. Use the trace's arch
info to get the emulator so this can be done on the client side of a remote
vtrace session.
"""
arch = trace.getMeta("Architecture")
amod = envi.getArchModule(arch)
emu = amod.getEmulator()
if trace.getMeta("Platform") == "Windows":
emu.setSegmentInfo(e_i386.SEG_FS, trace.getThreads()[trace.getMeta("ThreadId")], 0xffffffff)
emu.setMemoryObject(trace)
setRegs(emu, trace)
return emu
def lockStepEmulator(emu, trace):
while True:
print "Lockstep: 0x%.8x" % emu.getProgramCounter()
try:
pc = emu.getProgramCounter()
op = emu.makeOpcode(pc)
trace.stepi()
emu.stepi()
cmpRegs(emu, trace)
except RegisterException, msg:
print "Lockstep Error: %s: %s" % (repr(op),msg)
setRegs(emu, trace)
sys.stdin.readline()
except Exception, msg:
traceback.print_exc()
print "Lockstep Error: %s" % msg
return
import vtrace
import vtrace.platforms.base as v_base
class TraceEmulator(vtrace.Trace, v_base.TracerBase):
"""
Wrap an arbitrary emulator in a Tracer compatible API.
"""
def __init__(self, emu):
self.emu = emu
vtrace.Trace.__init__(self)
v_base.TracerBase.__init__(self)
# Fake out being attached
self.attached = True
self.pid = 0x56
self.setRegisterInfo(emu.getRegisterInfo())
def getPointerSize(self):
return self.emu.getPointerSize()
def platformStepi(self):
self.emu.stepi()
def platformWait(self):
# We only support single step events now
return True
def archGetRegCtx(self):
return self.emu
def platformGetRegCtx(self, threadid):
return self.emu
def platformSetRegCtx(self, threadid, ctx):
self.setRegisterSnap(ctx.getRegisterSnap())
def platformProcessEvent(self, event):
self.fireNotifiers(vtrace.NOTIFY_STEP)
def platformReadMemory(self, va, size):
return self.emu.readMemory(va, size)
def platformWriteMemory(self, va, bytes):
return self.emu.writeMemory(va, bytes)
def platformGetMaps(self):
return self.emu.getMemoryMaps()
def platformGetThreads(self):
return {1:0xffff0000,}
def platformGetFds(self):
return [] #FIXME perhaps tie this into magic?
def getStackTrace(self):
# FIXME i386...
return [(self.emu.getProgramCounter(), 0), (0,0)]
def platformDetach(self):
pass
def main():
import vtrace
sym = sys.argv[1]
pid = int(sys.argv[2])
t = vtrace.getTrace()
t.attach(pid)
symaddr = t.parseExpression(sym)
t.addBreakpoint(vtrace.Breakpoint(symaddr))
while t.getProgramCounter() != symaddr:
t.run()
snap = t.takeSnapshot()
#snap.saveToFile("woot.snap") # You may open in vdb to follow along
emu = emulatorFromTrace(snap)
lockStepEmulator(emu, t)
if __name__ == "__main__":
# Copy this file out to the vtrace dir for testing and run as main
main()
|
gpl-2.0
|
romankagan/DDBWorkbench
|
python/lib/Lib/site-packages/django/db/models/sql/where.py
|
289
|
13163
|
"""
Code to manage the creation and SQL rendering of 'where' constraints.
"""
import datetime
from itertools import repeat
from django.utils import tree
from django.db.models.fields import Field
from django.db.models.query_utils import QueryWrapper
from datastructures import EmptyResultSet, FullResultSet
# Connection types
AND = 'AND'
OR = 'OR'
class EmptyShortCircuit(Exception):
"""
Internal exception used to indicate that a "matches nothing" node should be
added to the where-clause.
"""
pass
class WhereNode(tree.Node):
"""
Used to represent the SQL where-clause.
The class is tied to the Query class that created it (in order to create
the correct SQL).
The children in this tree are usually either Q-like objects or lists of
[table_alias, field_name, db_type, lookup_type, value_annotation,
params]. However, a child could also be any class with as_sql() and
relabel_aliases() methods.
"""
default = AND
def add(self, data, connector):
"""
Add a node to the where-tree. If the data is a list or tuple, it is
expected to be of the form (obj, lookup_type, value), where obj is
a Constraint object, and is then slightly munged before being stored
(to avoid storing any reference to field objects). Otherwise, the 'data'
is stored unchanged and can be any class with an 'as_sql()' method.
"""
if not isinstance(data, (list, tuple)):
super(WhereNode, self).add(data, connector)
return
obj, lookup_type, value = data
if hasattr(value, '__iter__') and hasattr(value, 'next'):
# Consume any generators immediately, so that we can determine
# emptiness and transform any non-empty values correctly.
value = list(value)
# The "annotation" parameter is used to pass auxilliary information
# about the value(s) to the query construction. Specifically, datetime
# and empty values need special handling. Other types could be used
# here in the future (using Python types is suggested for consistency).
if isinstance(value, datetime.datetime):
annotation = datetime.datetime
elif hasattr(value, 'value_annotation'):
annotation = value.value_annotation
else:
annotation = bool(value)
if hasattr(obj, "prepare"):
value = obj.prepare(lookup_type, value)
super(WhereNode, self).add((obj, lookup_type, annotation, value),
connector)
return
super(WhereNode, self).add((obj, lookup_type, annotation, value),
connector)
def as_sql(self, qn, connection):
"""
Returns the SQL version of the where clause and the parameters to be
substituted in. Returns (None, []) if this node is empty.
"""
if not self.children:
return None, []
result = []
result_params = []
empty = True
for child in self.children:
try:
if hasattr(child, 'as_sql'):
sql, params = child.as_sql(qn=qn, connection=connection)
else:
# A leaf node in the tree.
sql, params = self.make_atom(child, qn, connection)
except EmptyResultSet:
if self.connector == AND and not self.negated:
# We can bail out early in this particular case (only).
raise
elif self.negated:
empty = False
continue
except FullResultSet:
if self.connector == OR:
if self.negated:
empty = True
break
# We match everything. No need for any constraints.
return '', []
if self.negated:
empty = True
continue
empty = False
if sql:
result.append(sql)
result_params.extend(params)
if empty:
raise EmptyResultSet
conn = ' %s ' % self.connector
sql_string = conn.join(result)
if sql_string:
if self.negated:
sql_string = 'NOT (%s)' % sql_string
elif len(self.children) != 1:
sql_string = '(%s)' % sql_string
return sql_string, result_params
def make_atom(self, child, qn, connection):
"""
Turn a tuple (table_alias, column_name, db_type, lookup_type,
value_annot, params) into valid SQL.
Returns the string for the SQL fragment and the parameters to use for
it.
"""
lvalue, lookup_type, value_annot, params_or_value = child
if hasattr(lvalue, 'process'):
try:
lvalue, params = lvalue.process(lookup_type, params_or_value, connection)
except EmptyShortCircuit:
raise EmptyResultSet
else:
params = Field().get_db_prep_lookup(lookup_type, params_or_value,
connection=connection, prepared=True)
if isinstance(lvalue, tuple):
# A direct database column lookup.
field_sql = self.sql_for_columns(lvalue, qn, connection)
else:
# A smart object with an as_sql() method.
field_sql = lvalue.as_sql(qn, connection)
if value_annot is datetime.datetime:
cast_sql = connection.ops.datetime_cast_sql()
else:
cast_sql = '%s'
if hasattr(params, 'as_sql'):
extra, params = params.as_sql(qn, connection)
cast_sql = ''
else:
extra = ''
if (len(params) == 1 and params[0] == '' and lookup_type == 'exact'
and connection.features.interprets_empty_strings_as_nulls):
lookup_type = 'isnull'
value_annot = True
if lookup_type in connection.operators:
format = "%s %%s %%s" % (connection.ops.lookup_cast(lookup_type),)
return (format % (field_sql,
connection.operators[lookup_type] % cast_sql,
extra), params)
if lookup_type == 'in':
if not value_annot:
raise EmptyResultSet
if extra:
return ('%s IN %s' % (field_sql, extra), params)
max_in_list_size = connection.ops.max_in_list_size()
if max_in_list_size and len(params) > max_in_list_size:
# Break up the params list into an OR of manageable chunks.
in_clause_elements = ['(']
for offset in xrange(0, len(params), max_in_list_size):
if offset > 0:
in_clause_elements.append(' OR ')
in_clause_elements.append('%s IN (' % field_sql)
group_size = min(len(params) - offset, max_in_list_size)
param_group = ', '.join(repeat('%s', group_size))
in_clause_elements.append(param_group)
in_clause_elements.append(')')
in_clause_elements.append(')')
return ''.join(in_clause_elements), params
else:
return ('%s IN (%s)' % (field_sql,
', '.join(repeat('%s', len(params)))),
params)
elif lookup_type in ('range', 'year'):
return ('%s BETWEEN %%s and %%s' % field_sql, params)
elif lookup_type in ('month', 'day', 'week_day'):
return ('%s = %%s' % connection.ops.date_extract_sql(lookup_type, field_sql),
params)
elif lookup_type == 'isnull':
return ('%s IS %sNULL' % (field_sql,
(not value_annot and 'NOT ' or '')), ())
elif lookup_type == 'search':
return (connection.ops.fulltext_search_sql(field_sql), params)
elif lookup_type in ('regex', 'iregex'):
return connection.ops.regex_lookup(lookup_type) % (field_sql, cast_sql), params
raise TypeError('Invalid lookup_type: %r' % lookup_type)
def sql_for_columns(self, data, qn, connection):
"""
Returns the SQL fragment used for the left-hand side of a column
constraint (for example, the "T1.foo" portion in the clause
"WHERE ... T1.foo = 6").
"""
table_alias, name, db_type = data
if table_alias:
lhs = '%s.%s' % (qn(table_alias), qn(name))
else:
lhs = qn(name)
return connection.ops.field_cast_sql(db_type) % lhs
def relabel_aliases(self, change_map, node=None):
"""
Relabels the alias values of any children. 'change_map' is a dictionary
mapping old (current) alias values to the new values.
"""
if not node:
node = self
for pos, child in enumerate(node.children):
if hasattr(child, 'relabel_aliases'):
child.relabel_aliases(change_map)
elif isinstance(child, tree.Node):
self.relabel_aliases(change_map, child)
elif isinstance(child, (list, tuple)):
if isinstance(child[0], (list, tuple)):
elt = list(child[0])
if elt[0] in change_map:
elt[0] = change_map[elt[0]]
node.children[pos] = (tuple(elt),) + child[1:]
else:
child[0].relabel_aliases(change_map)
# Check if the query value also requires relabelling
if hasattr(child[3], 'relabel_aliases'):
child[3].relabel_aliases(change_map)
class EverythingNode(object):
"""
A node that matches everything.
"""
def as_sql(self, qn=None, connection=None):
raise FullResultSet
def relabel_aliases(self, change_map, node=None):
return
class NothingNode(object):
"""
A node that matches nothing.
"""
def as_sql(self, qn=None, connection=None):
raise EmptyResultSet
def relabel_aliases(self, change_map, node=None):
return
class ExtraWhere(object):
def __init__(self, sqls, params):
self.sqls = sqls
self.params = params
def as_sql(self, qn=None, connection=None):
return " AND ".join(self.sqls), tuple(self.params or ())
class Constraint(object):
"""
An object that can be passed to WhereNode.add() and knows how to
pre-process itself prior to including in the WhereNode.
"""
def __init__(self, alias, col, field):
self.alias, self.col, self.field = alias, col, field
def __getstate__(self):
"""Save the state of the Constraint for pickling.
Fields aren't necessarily pickleable, because they can have
callable default values. So, instead of pickling the field,
store a reference so we can restore it manually.
"""
obj_dict = self.__dict__.copy()
if self.field:
obj_dict['model'] = self.field.model
obj_dict['field_name'] = self.field.name
del obj_dict['field']
return obj_dict
def __setstate__(self, data):
"""Restore the constraint """
model = data.pop('model', None)
field_name = data.pop('field_name', None)
self.__dict__.update(data)
if model is not None:
self.field = model._meta.get_field(field_name)
else:
self.field = None
def prepare(self, lookup_type, value):
if self.field:
return self.field.get_prep_lookup(lookup_type, value)
return value
def process(self, lookup_type, value, connection):
"""
Returns a tuple of data suitable for inclusion in a WhereNode
instance.
"""
# Because of circular imports, we need to import this here.
from django.db.models.base import ObjectDoesNotExist
try:
if self.field:
params = self.field.get_db_prep_lookup(lookup_type, value,
connection=connection, prepared=True)
db_type = self.field.db_type(connection=connection)
else:
# This branch is used at times when we add a comparison to NULL
# (we don't really want to waste time looking up the associated
# field object at the calling location).
params = Field().get_db_prep_lookup(lookup_type, value,
connection=connection, prepared=True)
db_type = None
except ObjectDoesNotExist:
raise EmptyShortCircuit
return (self.alias, self.col, db_type), params
def relabel_aliases(self, change_map):
if self.alias in change_map:
self.alias = change_map[self.alias]
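# Editor's hedged sketch (not part of the original module): a leaf is handed to
# WhereNode.add() in the (obj, lookup_type, value) form described in its
# docstring; the alias, column and field below are illustrative placeholders.
#
#   where = WhereNode()
#   where.add((Constraint('T1', 'name', name_field), 'exact', 'foo'), AND)
#   sql, params = where.as_sql(qn, connection)  # SQL fragment plus parameter list
#
# qn is the backend's quote_name callable and connection the active database
# wrapper; both are supplied by the Query compiler when the clause is rendered.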
|
apache-2.0
|
4eek/edx-platform
|
common/djangoapps/track/tests/test_logs.py
|
163
|
2712
|
"""Tests that tracking data are successfully logged"""
import mock
import unittest
from django.test import TestCase
from django.core.urlresolvers import reverse
from django.conf import settings
from track.models import TrackingLog
from track.views import user_track
@unittest.skip("TODO: these tests were not being run before, and now that they are they're failing")
@unittest.skipUnless(settings.ROOT_URLCONF == 'lms.urls', 'Test only valid in lms')
class TrackingTest(TestCase):
"""
Tests that tracking logs correctly handle events
"""
def test_post_answers_to_log(self):
"""
Checks that student answer requests submitted to track.views via POST
are correctly logged in the TrackingLog db table
"""
requests = [
{"event": "my_event", "event_type": "my_event_type", "page": "my_page"},
{"event": "{'json': 'object'}", "event_type": unichr(512), "page": "my_page"}
]
with mock.patch.dict('django.conf.settings.FEATURES', {'ENABLE_SQL_TRACKING_LOGS': True}):
for request_params in requests:
response = self.client.post(reverse(user_track), request_params)
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, 'success')
tracking_logs = TrackingLog.objects.order_by('-dtcreated')
log = tracking_logs[0]
self.assertEqual(log.event, request_params["event"])
self.assertEqual(log.event_type, request_params["event_type"])
self.assertEqual(log.page, request_params["page"])
def test_get_answers_to_log(self):
"""
Checks that student answer requests submitted to track.views via GET
are correctly logged in the TrackingLog db table
"""
requests = [
{"event": "my_event", "event_type": "my_event_type", "page": "my_page"},
{"event": "{'json': 'object'}", "event_type": unichr(512), "page": "my_page"}
]
with mock.patch.dict('django.conf.settings.FEATURES', {'ENABLE_SQL_TRACKING_LOGS': True}):
for request_params in requests:
response = self.client.get(reverse(user_track), request_params)
self.assertEqual(response.status_code, 200)
self.assertEqual(response.content, 'success')
tracking_logs = TrackingLog.objects.order_by('-dtcreated')
log = tracking_logs[0]
self.assertEqual(log.event, request_params["event"])
self.assertEqual(log.event_type, request_params["event_type"])
self.assertEqual(log.page, request_params["page"])
|
agpl-3.0
|
einstein95/crunchy-xml-decoder
|
crunchy-xml-decoder/unidecode/x0d7.py
|
252
|
4559
|
data = (
'hwen', # 0x00
'hwenj', # 0x01
'hwenh', # 0x02
'hwed', # 0x03
'hwel', # 0x04
'hwelg', # 0x05
'hwelm', # 0x06
'hwelb', # 0x07
'hwels', # 0x08
'hwelt', # 0x09
'hwelp', # 0x0a
'hwelh', # 0x0b
'hwem', # 0x0c
'hweb', # 0x0d
'hwebs', # 0x0e
'hwes', # 0x0f
'hwess', # 0x10
'hweng', # 0x11
'hwej', # 0x12
'hwec', # 0x13
'hwek', # 0x14
'hwet', # 0x15
'hwep', # 0x16
'hweh', # 0x17
'hwi', # 0x18
'hwig', # 0x19
'hwigg', # 0x1a
'hwigs', # 0x1b
'hwin', # 0x1c
'hwinj', # 0x1d
'hwinh', # 0x1e
'hwid', # 0x1f
'hwil', # 0x20
'hwilg', # 0x21
'hwilm', # 0x22
'hwilb', # 0x23
'hwils', # 0x24
'hwilt', # 0x25
'hwilp', # 0x26
'hwilh', # 0x27
'hwim', # 0x28
'hwib', # 0x29
'hwibs', # 0x2a
'hwis', # 0x2b
'hwiss', # 0x2c
'hwing', # 0x2d
'hwij', # 0x2e
'hwic', # 0x2f
'hwik', # 0x30
'hwit', # 0x31
'hwip', # 0x32
'hwih', # 0x33
'hyu', # 0x34
'hyug', # 0x35
'hyugg', # 0x36
'hyugs', # 0x37
'hyun', # 0x38
'hyunj', # 0x39
'hyunh', # 0x3a
'hyud', # 0x3b
'hyul', # 0x3c
'hyulg', # 0x3d
'hyulm', # 0x3e
'hyulb', # 0x3f
'hyuls', # 0x40
'hyult', # 0x41
'hyulp', # 0x42
'hyulh', # 0x43
'hyum', # 0x44
'hyub', # 0x45
'hyubs', # 0x46
'hyus', # 0x47
'hyuss', # 0x48
'hyung', # 0x49
'hyuj', # 0x4a
'hyuc', # 0x4b
'hyuk', # 0x4c
'hyut', # 0x4d
'hyup', # 0x4e
'hyuh', # 0x4f
'heu', # 0x50
'heug', # 0x51
'heugg', # 0x52
'heugs', # 0x53
'heun', # 0x54
'heunj', # 0x55
'heunh', # 0x56
'heud', # 0x57
'heul', # 0x58
'heulg', # 0x59
'heulm', # 0x5a
'heulb', # 0x5b
'heuls', # 0x5c
'heult', # 0x5d
'heulp', # 0x5e
'heulh', # 0x5f
'heum', # 0x60
'heub', # 0x61
'heubs', # 0x62
'heus', # 0x63
'heuss', # 0x64
'heung', # 0x65
'heuj', # 0x66
'heuc', # 0x67
'heuk', # 0x68
'heut', # 0x69
'heup', # 0x6a
'heuh', # 0x6b
'hyi', # 0x6c
'hyig', # 0x6d
'hyigg', # 0x6e
'hyigs', # 0x6f
'hyin', # 0x70
'hyinj', # 0x71
'hyinh', # 0x72
'hyid', # 0x73
'hyil', # 0x74
'hyilg', # 0x75
'hyilm', # 0x76
'hyilb', # 0x77
'hyils', # 0x78
'hyilt', # 0x79
'hyilp', # 0x7a
'hyilh', # 0x7b
'hyim', # 0x7c
'hyib', # 0x7d
'hyibs', # 0x7e
'hyis', # 0x7f
'hyiss', # 0x80
'hying', # 0x81
'hyij', # 0x82
'hyic', # 0x83
'hyik', # 0x84
'hyit', # 0x85
'hyip', # 0x86
'hyih', # 0x87
'hi', # 0x88
'hig', # 0x89
'higg', # 0x8a
'higs', # 0x8b
'hin', # 0x8c
'hinj', # 0x8d
'hinh', # 0x8e
'hid', # 0x8f
'hil', # 0x90
'hilg', # 0x91
'hilm', # 0x92
'hilb', # 0x93
'hils', # 0x94
'hilt', # 0x95
'hilp', # 0x96
'hilh', # 0x97
'him', # 0x98
'hib', # 0x99
'hibs', # 0x9a
'his', # 0x9b
'hiss', # 0x9c
'hing', # 0x9d
'hij', # 0x9e
'hic', # 0x9f
'hik', # 0xa0
'hit', # 0xa1
'hip', # 0xa2
'hih', # 0xa3
'[?]', # 0xa4
'[?]', # 0xa5
'[?]', # 0xa6
'[?]', # 0xa7
'[?]', # 0xa8
'[?]', # 0xa9
'[?]', # 0xaa
'[?]', # 0xab
'[?]', # 0xac
'[?]', # 0xad
'[?]', # 0xae
'[?]', # 0xaf
'[?]', # 0xb0
'[?]', # 0xb1
'[?]', # 0xb2
'[?]', # 0xb3
'[?]', # 0xb4
'[?]', # 0xb5
'[?]', # 0xb6
'[?]', # 0xb7
'[?]', # 0xb8
'[?]', # 0xb9
'[?]', # 0xba
'[?]', # 0xbb
'[?]', # 0xbc
'[?]', # 0xbd
'[?]', # 0xbe
'[?]', # 0xbf
'[?]', # 0xc0
'[?]', # 0xc1
'[?]', # 0xc2
'[?]', # 0xc3
'[?]', # 0xc4
'[?]', # 0xc5
'[?]', # 0xc6
'[?]', # 0xc7
'[?]', # 0xc8
'[?]', # 0xc9
'[?]', # 0xca
'[?]', # 0xcb
'[?]', # 0xcc
'[?]', # 0xcd
'[?]', # 0xce
'[?]', # 0xcf
'[?]', # 0xd0
'[?]', # 0xd1
'[?]', # 0xd2
'[?]', # 0xd3
'[?]', # 0xd4
'[?]', # 0xd5
'[?]', # 0xd6
'[?]', # 0xd7
'[?]', # 0xd8
'[?]', # 0xd9
'[?]', # 0xda
'[?]', # 0xdb
'[?]', # 0xdc
'[?]', # 0xdd
'[?]', # 0xde
'[?]', # 0xdf
'[?]', # 0xe0
'[?]', # 0xe1
'[?]', # 0xe2
'[?]', # 0xe3
'[?]', # 0xe4
'[?]', # 0xe5
'[?]', # 0xe6
'[?]', # 0xe7
'[?]', # 0xe8
'[?]', # 0xe9
'[?]', # 0xea
'[?]', # 0xeb
'[?]', # 0xec
'[?]', # 0xed
'[?]', # 0xee
'[?]', # 0xef
'[?]', # 0xf0
'[?]', # 0xf1
'[?]', # 0xf2
'[?]', # 0xf3
'[?]', # 0xf4
'[?]', # 0xf5
'[?]', # 0xf6
'[?]', # 0xf7
'[?]', # 0xf8
'[?]', # 0xf9
'[?]', # 0xfa
'[?]', # 0xfb
'[?]', # 0xfc
'[?]', # 0xfd
'[?]', # 0xfe
)
|
gpl-2.0
|
rickardo10/distributed-db-app
|
auth.py
|
1
|
5815
|
# -*- encoding: UTF-8 -*-
#
# Form based authentication for CherryPy. Requires the
# Session tool to be loaded.
#
import cherrypy
SESSION_KEY = '_cp_username'
def check_credentials(username, password):
"""Verifies credentials for username and password.
Returns None on success or a string describing the error on failure"""
# Adapt to your needs
if username in ('cap', 'investigador', 'asistente') and password == 'cap':
return None
else:
return u"Nombre de usuario o contraseña incorrecto."
# An example implementation which uses an ORM could be:
# u = User.get(username)
# if u is None:
# return u"Username %s is unknown to me." % username
# if u.password != md5.new(password).hexdigest():
# return u"Incorrect password"
def check_auth(*args, **kwargs):
"""A tool that looks in config for 'auth.require'. If found and it
is not None, a login is required and the entry is evaluated as a list of
conditions that the user must fulfill"""
conditions = cherrypy.request.config.get('auth.require', None)
if conditions is not None:
username = cherrypy.session.get(SESSION_KEY)
if username:
cherrypy.request.login = username
for condition in conditions:
# A condition is just a callable that returns true or false
if not condition():
raise cherrypy.HTTPRedirect("/auth/login")
else:
raise cherrypy.HTTPRedirect("/auth/login")
cherrypy.tools.auth = cherrypy.Tool('before_handler', check_auth)
def require(*conditions):
"""A decorator that appends conditions to the auth.require config
variable."""
def decorate(f):
if not hasattr(f, '_cp_config'):
f._cp_config = dict()
if 'auth.require' not in f._cp_config:
f._cp_config['auth.require'] = []
f._cp_config['auth.require'].extend(conditions)
return f
return decorate
# Conditions are callables that return True
# if the user fulfills the conditions they define, False otherwise
#
# They can access the current username as cherrypy.request.login
#
# Define those at will however suits the application.
def member_of(groupname):
def check():
# replace with actual check if <username> is in <groupname>
return cherrypy.request.login == 'cap' and groupname == 'admin'
return check
def name_is(reqd_username):
return lambda: reqd_username == cherrypy.request.login
# These might be handy
def any_of(*conditions):
"""Returns True if any of the conditions match"""
def check():
for c in conditions:
if c():
return True
return False
return check
# By default all conditions are required, but this might still be
# needed if you want to use it inside of an any_of(...) condition
def all_of(*conditions):
"""Returns True if all of the conditions match"""
def check():
for c in conditions:
if not c():
return False
return True
return check
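# Editor's hedged sketch (not part of the original module): a minimal handler
# protected by the require()/member_of() helpers defined above. The class name
# is illustrative; the 'auth' tool must also be enabled in the CherryPy
# configuration so that check_auth runs before the handler.
class RestrictedExample(object):
    @cherrypy.expose
    @require(member_of('admin'))
    def index(self):
        # Reached only after check_auth has found a logged-in session user and
        # every condition attached via 'auth.require' has returned True.
        return "Only members of the 'admin' group can see this page."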
# Controller to provide login and logout actions
class AuthController(object):
def on_login(self, username):
"""Called on successful login"""
def on_logout(self, username):
"""Called on logout"""
def get_loginform(self, username, msg="", from_page="/"):
_header = """
<html>
<head>
<title>Login</title>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<style type="text/css">
%s
</style>
</head>
"""
_header = _header % open("css/auth.css").read()
_body = """
<body>
<div id="signup-form">
<div id="signup-form-inner">
<form method="post" action="/auth/login">
<p>
<h3>Inicio de sesión</h3>
</p>
<p>
<label for="username">Usuario</label>
<input type="text" name="username" value="%(username)s" /><br />
</p>
<p>
<label for="password">Contraseña</label>
<input type="password" name="password" /><br />
</p>
<p>
<button type="submit" class="btn btn-primary">Enviar</button>
</p>
<p id="error">
<input type="hidden" name="from_page" value="%(from_page)s" />
%(msg)s
</p>
</div>
</div>
</body>
</html>""" % locals()
return [ _header, _body]
@cherrypy.expose
def login(self, username=None, password=None, from_page="/"):
if username is None or password is None:
return self.get_loginform("", from_page=from_page)
error_msg = check_credentials(username, password)
if error_msg:
return self.get_loginform(username, error_msg, from_page)
else:
cherrypy.session[SESSION_KEY] = cherrypy.request.login = username
self.on_login(username)
raise cherrypy.HTTPRedirect(from_page or "/")
@cherrypy.expose
def logout(self, from_page="/"):
sess = cherrypy.session
username = sess.get(SESSION_KEY, None)
sess[SESSION_KEY] = None
if username:
cherrypy.request.login = None
self.on_logout(username)
raise cherrypy.HTTPRedirect("/auth/login")
|
mit
|
appsembler/edx-platform
|
openedx/core/djangoapps/bookmarks/tasks.py
|
20
|
5912
|
"""
Tasks for bookmarks.
"""
import logging
from celery.task import task # pylint: disable=import-error,no-name-in-module
from django.db import transaction
from opaque_keys.edx.keys import CourseKey
from xmodule.modulestore.django import modulestore
from . import PathItem
log = logging.getLogger('edx.celery.task')
def _calculate_course_xblocks_data(course_key):
"""
Fetch data for all the blocks in the course.
This data consists of the display_name and path of the block.
"""
with modulestore().bulk_operations(course_key):
course = modulestore().get_course(course_key, depth=None)
blocks_info_dict = {}
# Collect display_name and children usage keys.
blocks_stack = [course]
while blocks_stack:
current_block = blocks_stack.pop()
children = current_block.get_children() if current_block.has_children else []
usage_id = unicode(current_block.scope_ids.usage_id)
block_info = {
'usage_key': current_block.scope_ids.usage_id,
'display_name': current_block.display_name_with_default,
'children_ids': [unicode(child.scope_ids.usage_id) for child in children]
}
blocks_info_dict[usage_id] = block_info
# Add this block's children to the stack so that we can traverse them as well.
blocks_stack.extend(children)
# Set children
for block in blocks_info_dict.values():
block.setdefault('children', [])
for child_id in block['children_ids']:
block['children'].append(blocks_info_dict[child_id])
block.pop('children_ids', None)
# Calculate paths
def add_path_info(block_info, current_path):
"""Do a DFS and add paths info to each block_info."""
block_info.setdefault('paths', [])
block_info['paths'].append(current_path)
for child_block_info in block_info['children']:
add_path_info(child_block_info, current_path + [block_info])
add_path_info(blocks_info_dict[unicode(course.scope_ids.usage_id)], [])
return blocks_info_dict
def _paths_from_data(paths_data):
"""
Construct a list of paths from path data.
"""
paths = []
for path_data in paths_data:
paths.append([
PathItem(item['usage_key'], item['display_name']) for item in path_data
if item['usage_key'].block_type != 'course'
])
return [path for path in paths if path]
def paths_equal(paths_1, paths_2):
"""
Check if two paths are equivalent.
"""
if len(paths_1) != len(paths_2):
return False
for path_1, path_2 in zip(paths_1, paths_2):
if len(path_1) != len(path_2):
return False
for path_item_1, path_item_2 in zip(path_1, path_2):
if path_item_1.display_name != path_item_2.display_name:
return False
usage_key_1 = path_item_1.usage_key.replace(
course_key=modulestore().fill_in_run(path_item_1.usage_key.course_key)
)
usage_key_2 = path_item_2.usage_key.replace(
course_key=modulestore().fill_in_run(path_item_2.usage_key.course_key)
)
if usage_key_1 != usage_key_2:
return False
return True
def _update_xblocks_cache(course_key):
"""
Calculate the XBlock cache data for a course and update the XBlockCache table.
"""
from .models import XBlockCache
blocks_data = _calculate_course_xblocks_data(course_key)
def update_block_cache_if_needed(block_cache, block_data):
""" Compare block_cache object with data and update if there are differences. """
paths = _paths_from_data(block_data['paths'])
if block_cache.display_name != block_data['display_name'] or not paths_equal(block_cache.paths, paths):
log.info(u'Updating XBlockCache with usage_key: %s', unicode(block_cache.usage_key))
block_cache.display_name = block_data['display_name']
block_cache.paths = paths
block_cache.save()
with transaction.atomic():
block_caches = XBlockCache.objects.filter(course_key=course_key)
for block_cache in block_caches:
block_data = blocks_data.pop(unicode(block_cache.usage_key), None)
if block_data:
update_block_cache_if_needed(block_cache, block_data)
for block_data in blocks_data.values():
with transaction.atomic():
paths = _paths_from_data(block_data['paths'])
log.info(u'Creating XBlockCache with usage_key: %s', unicode(block_data['usage_key']))
block_cache, created = XBlockCache.objects.get_or_create(usage_key=block_data['usage_key'], defaults={
'course_key': course_key,
'display_name': block_data['display_name'],
'paths': paths,
})
if not created:
update_block_cache_if_needed(block_cache, block_data)
@task(name=u'openedx.core.djangoapps.bookmarks.tasks.update_xblock_cache')
def update_xblocks_cache(course_id):
"""
Update the XBlocks cache for a course.
Arguments:
course_id (String): The course_id of a course.
"""
# Ideally we'd like to accept a CourseLocator; however, CourseLocator is not JSON-serializable (by default) so
# Celery's delayed tasks fail to start. For this reason, callers should pass the course key as a Unicode string.
if not isinstance(course_id, basestring):
raise ValueError('course_id must be a string. {} is not acceptable.'.format(type(course_id)))
course_key = CourseKey.from_string(course_id)
log.info(u'Starting XBlockCaches update for course_key: %s', course_id)
_update_xblocks_cache(course_key)
log.info(u'Ending XBlockCaches update for course_key: %s', course_id)
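# Editor's hedged sketch (not part of the original module): because the task
# rejects CourseLocator objects, callers stringify the course key before
# queueing it, for example:
#
#   update_xblocks_cache.delay(unicode(course_key))
#
# The string form is JSON-serializable, so Celery can enqueue the task; the
# task then rebuilds the key with CourseKey.from_string() as shown above.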
|
agpl-3.0
|
Tepira/binwalk
|
src/binwalk/modules/disasm.py
|
1
|
7773
|
import capstone
import binwalk.core.common
import binwalk.core.compat
from binwalk.core.module import Module, Option, Kwarg
class ArchResult(object):
def __init__(self, **kwargs):
for (k,v) in kwargs.iteritems():
setattr(self, k, v)
class Architecture(object):
def __init__(self, **kwargs):
for (k, v) in kwargs.iteritems():
setattr(self, k, v)
class Disasm(Module):
THRESHOLD = 10
DEFAULT_MIN_INSN_COUNT = 500
TITLE = "Disassembly Scan"
ORDER = 10
CLI = [
Option(short='Y',
long='disasm',
kwargs={'enabled' : True},
description='Identify the CPU architecture of a file using the capstone disassembler'),
Option(short='T',
long='minsn',
type=int,
kwargs={'min_insn_count' : 0},
description='Minimum number of consecutive instructions to be considered valid (default: %d)' % DEFAULT_MIN_INSN_COUNT),
]
KWARGS = [
Kwarg(name='enabled', default=False),
Kwarg(name='min_insn_count', default=DEFAULT_MIN_INSN_COUNT),
]
ARCHITECTURES = [
Architecture(type=capstone.CS_ARCH_ARM,
mode=capstone.CS_MODE_ARM,
endianess=capstone.CS_MODE_BIG_ENDIAN,
description="ARM executable code, 32-bit, big endian"),
Architecture(type=capstone.CS_ARCH_ARM,
mode=capstone.CS_MODE_ARM,
endianess=capstone.CS_MODE_LITTLE_ENDIAN,
description="ARM executable code, 32-bit, little endian"),
Architecture(type=capstone.CS_ARCH_ARM64,
mode=capstone.CS_MODE_ARM,
endianess=capstone.CS_MODE_BIG_ENDIAN,
description="ARM executable code, 64-bit, big endian"),
Architecture(type=capstone.CS_ARCH_ARM64,
mode=capstone.CS_MODE_ARM,
endianess=capstone.CS_MODE_LITTLE_ENDIAN,
description="ARM executable code, 64-bit, little endian"),
Architecture(type=capstone.CS_ARCH_PPC,
mode=capstone.CS_MODE_BIG_ENDIAN,
endianess=capstone.CS_MODE_BIG_ENDIAN,
description="PPC executable code, 32/64-bit, big endian"),
Architecture(type=capstone.CS_ARCH_MIPS,
mode=capstone.CS_MODE_64,
endianess=capstone.CS_MODE_BIG_ENDIAN,
description="MIPS executable code, 32/64-bit, big endian"),
Architecture(type=capstone.CS_ARCH_MIPS,
mode=capstone.CS_MODE_64,
endianess=capstone.CS_MODE_LITTLE_ENDIAN,
description="MIPS executable code, 32/64-bit, little endian"),
Architecture(type=capstone.CS_ARCH_ARM,
mode=capstone.CS_MODE_THUMB,
endianess=capstone.CS_MODE_LITTLE_ENDIAN,
description="ARM executable code, 16-bit (Thumb), little endian"),
Architecture(type=capstone.CS_ARCH_ARM,
mode=capstone.CS_MODE_THUMB,
endianess=capstone.CS_MODE_BIG_ENDIAN,
description="ARM executable code, 16-bit (Thumb), big endian"),
]
def init(self):
self.disassemblers = []
if not self.min_insn_count:
self.min_insn_count = self.DEFAULT_MIN_INSN_COUNT
self.disasm_data_size = self.min_insn_count * 10
for arch in self.ARCHITECTURES:
self.disassemblers.append((capstone.Cs(arch.type, (arch.mode + arch.endianess)), arch.description))
def scan_file(self, fp):
total_read = 0
while True:
result = None
(data, dlen) = fp.read_block()
if not data:
break
# If this data block doesn't contain at least two different bytes, skip it
# to prevent false positives (e.g., "\x00\x00\x00\x00" is a nop in MIPS).
if len(set(data)) >= 2:
block_offset = 0
# Loop through the entire block, or until we're pretty sure we've found some valid code in this block
while (block_offset < dlen) and (result is None or result.count < self.THRESHOLD):
# Don't pass the entire data block into disasm_lite, it's horribly inefficient
# to pass large strings around in Python. Break it up into smaller code blocks instead.
code_block = binwalk.core.compat.str2bytes(data[block_offset:block_offset+self.disasm_data_size])
# If this code block doesn't contain at least two different bytes, skip it
# to prevent false positives (e.g., "\x00\x00\x00\x00" is a nop in MIPS).
if len(set(code_block)) >= 2:
for (md, description) in self.disassemblers:
insns = [insn for insn in md.disasm_lite(code_block, (total_read+block_offset))]
binwalk.core.common.debug("0x%.8X %s, at least %d valid instructions" % ((total_read+block_offset),
description,
len(insns)))
# Did we disassemble at least self.min_insn_count instructions?
if len(insns) >= self.min_insn_count:
# If we've already found the same type of code in this block, simply update the result counter
if result and result.description == description:
result.count += 1
if result.count >= self.THRESHOLD:
break
else:
result = ArchResult(offset=total_read+block_offset+fp.offset,
description=description,
insns=insns,
count=1)
block_offset += 1
if result is not None:
r = self.result(offset=result.offset,
file=fp,
description=(result.description + ", at least %d valid instructions" % len(result.insns)))
if r.valid and r.display:
if self.config.verbose:
for (position, size, mnem, opnds) in result.insns:
self.result(offset=position, file=fp, description="\t\t%s %s" % (mnem, opnds))
if not self.config.keep_going:
return
total_read += dlen
self.status.completed = total_read
def run(self):
for fp in iter(self.next_file, None):
self.header()
self.scan_file(fp)
self.footer()
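# Minimal standalone sketch of the probing idea used in scan_file() above,
# assuming only that the capstone bindings imported at the top are available.
# The byte string is an illustrative 32-bit ARM little-endian instruction word.
if __name__ == "__main__":
    _md = capstone.Cs(capstone.CS_ARCH_ARM,
                      capstone.CS_MODE_ARM + capstone.CS_MODE_LITTLE_ENDIAN)
    _code = b"\x01\x10\x8f\xe2"
    # disasm_lite() yields (address, size, mnemonic, op_str) tuples; a long run
    # of such tuples is what Disasm treats as evidence of valid code.
    for _addr, _size, _mnemonic, _op_str in _md.disasm_lite(_code, 0):
        print("0x%x:\t%s\t%s" % (_addr, _mnemonic, _op_str))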
|
mit
|
drepetto/chiplotle
|
chiplotle/tools/mathtools/rotate_2d.py
|
1
|
1793
|
from chiplotle.geometry.core.coordinate import Coordinate
from chiplotle.geometry.core.coordinatearray import CoordinateArray
import math
## TODO: refactor, this is nasty. Take one type only!
def rotate_2d(xy, angle, pivot=(0, 0)):
'''2D rotation.
- `xy` is an (x, y) coordinate pair or a list of coordinate pairs.
- `angle` is the angle of rotation in radians.
- `pivot` the point around which to rotate `xy`.
Returns a Coordinate or a CoordinateArray.
'''
try:
xy = Coordinate(*xy)
pivot = Coordinate(*pivot)
result = rotate_coordinate_2d(xy, angle, pivot)
except:
xy = CoordinateArray(xy)
pivot = Coordinate(*pivot)
result = rotate_coordinatearray_2d(xy, angle, pivot)
return result
def rotate_coordinate_2d(xy, angle, pivot):
'''Coordinate 2D rotation.
- `xy` is an (x, y) coordinate pair.
- `angle` is the angle of rotation in radians.
- `pivot` the point around which to rotate `xy`.
Returns a Coordinate.
'''
pivot = Coordinate(*list(pivot))
## rotate counter-clockwise...
angle = -angle
#cp = Coordinate(xy)
xy -= pivot
x = xy.x * math.cos(angle) + xy.y * math.sin(angle)
y = -xy.x * math.sin(angle) + xy.y * math.cos(angle)
result = Coordinate(x, y) + pivot
return result
def rotate_coordinatearray_2d(xylst, angle, pivot):
'''2D rotation of list of coordinate pairs (CoordinateArray).
- `xylst` list of (x, y) coordinate pairs.
- `angle` is the angle of rotation in radians.
- `pivot` the point around which to rotate `xy`.
Returns a CoordinateArray.
'''
result = CoordinateArray( )
for xy in xylst:
r = rotate_coordinate_2d(xy, angle, pivot)
result.append(r)
return result
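# Worked example (illustrative, not part of the original module): because of the
# sign flip in rotate_coordinate_2d() the rotation is counter-clockwise, so
# rotating the point (1, 0) by pi/2 radians about the default pivot (0, 0)
# should land at (0, 1), up to floating point rounding:
#
#     rotate_2d((1, 0), math.pi / 2)
#     # -> a Coordinate at (0, 1)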
|
gpl-3.0
|
mrmans0n/sublime-text-3-config
|
Packages/Androguard/androguard/decompiler/dad/decompile.py
|
6
|
12959
|
# This file is part of Androguard.
#
# Copyright (c) 2012 Geoffroy Gueguen <[email protected]>
# All Rights Reserved.
#
# Androguard is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Androguard is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with Androguard. If not, see <http://www.gnu.org/licenses/>.
import sys
sys.path.append('./')
import logging
import androguard.core.androconf as androconf
import androguard.decompiler.dad.util as util
from androguard.core.analysis import analysis
from androguard.core.bytecodes import apk, dvm
from androguard.decompiler.dad.control_flow import identify_structures
from androguard.decompiler.dad.dataflow import (build_def_use,
dead_code_elimination,
register_propagation)
from androguard.decompiler.dad.graph import construct
from androguard.decompiler.dad.instruction import Param, ThisParam
from androguard.decompiler.dad.writer import Writer
def auto_vm(filename):
ret = androconf.is_android(filename)
if ret == 'APK':
return dvm.DalvikVMFormat(apk.APK(filename).get_dex())
elif ret == 'DEX':
return dvm.DalvikVMFormat(open(filename, 'rb').read())
elif ret == 'ODEX':
return dvm.DalvikOdexVMFormat(open(filename, 'rb').read())
return None
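# Usage sketch for the helper above (the file name is purely illustrative):
#
#     vm = auto_vm('classes.dex')   # DalvikVMFormat, DalvikOdexVMFormat or None
#     if vm is None:
#         raise ValueError('unsupported input file')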
class DvMethod():
def __init__(self, methanalysis):
method = methanalysis.get_method()
self.start_block = next(methanalysis.get_basic_blocks().get(), None)
self.cls_name = method.get_class_name()
self.name = method.get_name()
self.lparams = []
self.var_to_name = {}
self.writer = None
self.graph = None
access = method.get_access_flags()
self.access = [flag for flag in util.ACCESS_FLAGS_METHODS
if flag & access]
desc = method.get_descriptor()
self.type = util.get_type(desc.split(')')[-1])
self.params_type = util.get_params_type(desc)
self.exceptions = methanalysis.exceptions.exceptions
code = method.get_code()
if code is None:
logger.debug('No code : %s %s', self.name, self.cls_name)
else:
start = code.registers_size - code.ins_size
if 0x8 not in self.access:
self.var_to_name[start] = ThisParam(start, self.name)
self.lparams.append(start)
start += 1
num_param = 0
for ptype in self.params_type:
param = start + num_param
self.lparams.append(param)
self.var_to_name.setdefault(param, Param(param, ptype))
num_param += util.get_type_size(ptype)
if 0:
from androguard.core import bytecode
bytecode.method2png('/tmp/dad/graphs/%s#%s.png' % \
(self.cls_name.split('/')[-1][:-1], self.name), methanalysis)
def process(self):
logger.debug('METHOD : %s', self.name)
# Native methods... no blocks.
if self.start_block is None:
return logger.debug('Native Method.')
graph = construct(self.start_block, self.var_to_name, self.exceptions)
self.graph = graph
if 0:
util.create_png(self.cls_name, self.name, graph, '/tmp/dad/blocks')
defs, uses = build_def_use(graph, self.lparams)
dead_code_elimination(graph, uses, defs)
register_propagation(graph, uses, defs)
del uses, defs
# After the DCE pass, some nodes may be empty, so we can simplify the
# graph to delete these nodes.
# We start by restructuring the graph by spliting the conditional nodes
# into a pre-header and a header part.
graph.split_if_nodes()
# We then simplify the graph by merging multiple statement nodes into
# a single statement node when possible. This also delete empty nodes.
graph.simplify()
graph.reset_rpo()
idoms = graph.immediate_dominators()
identify_structures(graph, idoms)
if 0:
util.create_png(self.cls_name, self.name, graph,
'/tmp/dad/structured')
self.writer = Writer(graph, self)
self.writer.write_method()
del graph
def show_source(self):
if self.writer:
print self.writer
def get_source(self):
if self.writer:
return '%s' % self.writer
return ''
def __repr__(self):
return 'Method %s' % self.name
class DvClass():
def __init__(self, dvclass, vma):
name = dvclass.get_name()
if name.find('/') > 0:
pckg, name = name.rsplit('/', 1)
else:
pckg, name = '', name
self.package = pckg[1:].replace('/', '.')
self.name = name[:-1]
self.vma = vma
self.methods = dict((meth.get_method_idx(), meth)
for meth in dvclass.get_methods())
self.fields = dict((field.get_name(), field)
for field in dvclass.get_fields())
self.subclasses = {}
self.code = []
self.inner = False
access = dvclass.get_access_flags()
self.access = [util.ACCESS_FLAGS_CLASSES.get(flag) for flag in
util.ACCESS_FLAGS_CLASSES if flag & access]
self.prototype = '%s class %s' % (' '.join(self.access), self.name)
self.interfaces = dvclass.interfaces
self.superclass = dvclass.get_superclassname()
logger.info('Class : %s', self.name)
logger.info('Methods added :')
for index, meth in self.methods.iteritems():
logger.info('%s (%s, %s)', index, self.name, meth.name)
logger.info('')
def add_subclass(self, innername, dvclass):
self.subclasses[innername] = dvclass
dvclass.inner = True
def get_methods(self):
return self.methods
def process_method(self, num):
methods = self.methods
if num in methods:
method = methods[num]
if not isinstance(method, DvMethod):
method.set_instructions([i for i in method.get_instructions()])
meth = methods[num] = DvMethod(self.vma.get_method(method))
meth.process()
method.set_instructions([])
else:
method.process()
else:
logger.error('Method %s not found.', num)
def process(self):
for klass in self.subclasses.values():
klass.process()
for meth in self.methods:
self.process_method(meth)
def get_source(self):
source = []
if not self.inner and self.package:
source.append('package %s;\n' % self.package)
if self.superclass is not None:
self.superclass = self.superclass[1:-1].replace('/', '.')
if self.superclass.split('.')[-1] == 'Object':
self.superclass = None
if self.superclass is not None:
self.prototype += ' extends %s' % self.superclass
if self.interfaces is not None:
interfaces = self.interfaces[1:-1].split(' ')
self.prototype += ' implements %s' % ', '.join(
[n[1:-1].replace('/', '.') for n in interfaces])
source.append('%s {\n' % self.prototype)
for field in self.fields.values():
access = [util.ACCESS_FLAGS_FIELDS.get(flag) for flag in
util.ACCESS_FLAGS_FIELDS if flag & field.get_access_flags()]
f_type = util.get_type(field.get_descriptor())
name = field.get_name()
source.append(' %s %s %s;\n' % (' '.join(access), f_type, name))
for klass in self.subclasses.values():
source.append(klass.get_source())
for _, method in self.methods.iteritems():
if isinstance(method, DvMethod):
source.append(method.get_source())
source.append('}\n')
return ''.join(source)
def show_source(self):
if not self.inner and self.package:
print 'package %s;\n' % self.package
if self.superclass is not None:
self.superclass = self.superclass[1:-1].replace('/', '.')
if self.superclass.split('.')[-1] == 'Object':
self.superclass = None
if self.superclass is not None:
self.prototype += ' extends %s' % self.superclass
if self.interfaces is not None:
interfaces = self.interfaces[1:-1].split(' ')
self.prototype += ' implements %s' % ', '.join(
[n[1:-1].replace('/', '.') for n in interfaces])
print '%s {\n' % self.prototype
for field in self.fields.values():
access = [util.ACCESS_FLAGS_FIELDS.get(flag) for flag in
util.ACCESS_FLAGS_FIELDS if flag & field.get_access_flags()]
f_type = util.get_type(field.get_descriptor())
name = field.get_name()
print ' %s %s %s;\n' % (' '.join(access), f_type, name)
for klass in self.subclasses.values():
klass.show_source()
for _, method in self.methods.iteritems():
if isinstance(method, DvMethod):
method.show_source()
print '}\n'
def __repr__(self):
if not self.subclasses:
return 'Class(%s)' % self.name
return 'Class(%s) -- Subclasses(%s)' % (self.name, self.subclasses)
class DvMachine():
def __init__(self, name):
vm = auto_vm(name)
self.vma = analysis.uVMAnalysis(vm)
self.classes = dict((dvclass.get_name(), dvclass)
for dvclass in vm.get_classes())
#util.merge_inner(self.classes)
def get_classes(self):
return self.classes.keys()
def get_class(self, class_name):
for name, klass in self.classes.iteritems():
if class_name in name:
if isinstance(klass, DvClass):
return klass
dvclass = self.classes[name] = DvClass(klass, self.vma)
return dvclass
def process(self):
for name, klass in self.classes.iteritems():
logger.info('Processing class: %s', name)
if isinstance(klass, DvClass):
klass.process()
else:
dvclass = self.classes[name] = DvClass(klass, self.vma)
dvclass.process()
def show_source(self):
for klass in self.classes.values():
klass.show_source()
def process_and_show(self):
for name, klass in self.classes.iteritems():
logger.info('Processing class: %s', name)
if not isinstance(klass, DvClass):
klass = DvClass(klass, self.vma)
klass.process()
klass.show_source()
logger = logging.getLogger('dad')
sys.setrecursionlimit(5000)
def main():
logger.setLevel(logging.INFO)
console_hdlr = logging.StreamHandler(sys.stdout)
console_hdlr.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(console_hdlr)
default_file = 'examples/android/TestsAndroguard/bin/classes.dex'
if len(sys.argv) > 1:
machine = DvMachine(sys.argv[1])
else:
machine = DvMachine(default_file)
logger.info('========================')
logger.info('Classes:')
for class_name in machine.get_classes():
logger.info(' %s', class_name)
logger.info('========================')
cls_name = raw_input('Choose a class: ')
if cls_name == '*':
machine.process_and_show()
else:
cls = machine.get_class(cls_name)
if cls is None:
logger.error('%s not found.', cls_name)
else:
logger.info('======================')
for method_id, method in cls.get_methods().items():
logger.info('%d: %s', method_id, method.name)
logger.info('======================')
meth = raw_input('Method: ')
if meth == '*':
logger.info('CLASS = %s', cls)
cls.process()
else:
cls.process_method(int(meth))
logger.info('Source:')
logger.info('===========================')
cls.show_source()
if __name__ == '__main__':
main()
|
mit
|
yencarnacion/jaikuengine
|
.google_appengine/lib/django-1.5/django/contrib/localflavor/mx/models.py
|
197
|
2266
|
from django.utils.translation import ugettext_lazy as _
from django.db.models.fields import CharField
from django.contrib.localflavor.mx.mx_states import STATE_CHOICES
from django.contrib.localflavor.mx.forms import (MXRFCField as MXRFCFormField,
MXZipCodeField as MXZipCodeFormField, MXCURPField as MXCURPFormField)
class MXStateField(CharField):
"""
A model field that stores the three-letter Mexican state abbreviation in the
database.
"""
description = _("Mexico state (three uppercase letters)")
def __init__(self, *args, **kwargs):
kwargs['choices'] = STATE_CHOICES
kwargs['max_length'] = 3
super(MXStateField, self).__init__(*args, **kwargs)
class MXZipCodeField(CharField):
"""
A model field that forms represent as a forms.MXZipCodeField field and
stores the five-digit Mexican zip code.
"""
description = _("Mexico zip code")
def __init__(self, *args, **kwargs):
kwargs['max_length'] = 5
super(MXZipCodeField, self).__init__(*args, **kwargs)
def formfield(self, **kwargs):
defaults = {'form_class': MXZipCodeFormField}
defaults.update(kwargs)
return super(MXZipCodeField, self).formfield(**defaults)
class MXRFCField(CharField):
"""
A model field that forms represent as a forms.MXRFCField field and
stores the value of a valid Mexican RFC.
"""
description = _("Mexican RFC")
def __init__(self, *args, **kwargs):
kwargs['max_length'] = 13
super(MXRFCField, self).__init__(*args, **kwargs)
def formfield(self, **kwargs):
defaults = {'form_class': MXRFCFormField}
defaults.update(kwargs)
return super(MXRFCField, self).formfield(**defaults)
class MXCURPField(CharField):
"""
A model field that forms represent as a forms.MXCURPField field and
stores the value of a valid Mexican CURP.
"""
description = _("Mexican CURP")
def __init__(self, *args, **kwargs):
kwargs['max_length'] = 18
super(MXCURPField, self).__init__(*args, **kwargs)
def formfield(self, **kwargs):
defaults = {'form_class': MXCURPFormField}
defaults.update(kwargs)
return super(MXCURPField, self).formfield(**defaults)
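# Minimal usage sketch (hypothetical model, not shipped with this module): the
# fields above are used like any other CharField subclass, e.g.
#
#     from django.db import models
#
#     class MXAddress(models.Model):
#         state = MXStateField()
#         zip_code = MXZipCodeField()
#         rfc = MXRFCField(blank=True)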
|
apache-2.0
|
noironetworks/group-based-policy
|
gbpservice/neutron/services/grouppolicy/config.py
|
1
|
1327
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_config import cfg
from gbpservice._i18n import _
group_policy_opts = [
cfg.ListOpt('policy_drivers',
default=['dummy'],
help=_("An ordered list of group policy driver "
"entrypoints to be loaded from the "
"gbpservice.neutron.group_policy.policy_drivers "
"namespace.")),
cfg.ListOpt('extension_drivers',
default=[],
help=_("An ordered list of extension driver "
"entrypoints to be loaded from the "
"gbpservice.neutron.group_policy.extension_drivers "
"namespace.")),
]
cfg.CONF.register_opts(group_policy_opts, "group_policy")
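# Read-side sketch (assumes the [group_policy] section of the service config has
# been loaded): registering the options under the "group_policy" group makes
# them available as attributes of that group, e.g.
#
#     cfg.CONF.group_policy.policy_drivers     # default: ['dummy']
#     cfg.CONF.group_policy.extension_drivers  # default: []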
|
apache-2.0
|
Flight/django-filer
|
filer/south_migrations/0013_remove_null_file_name.py
|
22
|
11402
|
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
class Migration(DataMigration):
def forwards(self, orm):
for file_name_null in orm.File.objects.filter(name__isnull=True):
file_name_null.name = ""
file_name_null.save()
print('Setting empty string in null name for File object %s. See Release notes for further info' % file_name_null.pk)
def backwards(self, orm):
pass
#raise RuntimeError("Cannot reverse this migration.")
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'filer.clipboard': {
'Meta': {'object_name': 'Clipboard'},
'files': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'in_clipboards'", 'symmetrical': 'False', 'through': "orm['filer.ClipboardItem']", 'to': "orm['filer.File']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'filer_clipboards'", 'to': "orm['auth.User']"})
},
'filer.clipboarditem': {
'Meta': {'object_name': 'ClipboardItem'},
'clipboard': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['filer.Clipboard']"}),
'file': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['filer.File']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'})
},
'filer.file': {
'Meta': {'object_name': 'File'},
'_file_size': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'file': ('django.db.models.fields.files.FileField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'folder': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'all_files'", 'null': 'True', 'to': "orm['filer.Folder']"}),
'has_all_mandatory_data': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_public': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'modified_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '255', 'blank': 'True'}),
'original_filename': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'owned_files'", 'null': 'True', 'to': "orm['auth.User']"}),
'polymorphic_ctype': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'polymorphic_filer.file_set'", 'null': 'True', 'to': "orm['contenttypes.ContentType']"}),
'sha1': ('django.db.models.fields.CharField', [], {'default': "''", 'max_length': '40', 'blank': 'True'}),
'uploaded_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'})
},
'filer.folder': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('parent', 'name'),)", 'object_name': 'Folder'},
'created_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'level': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'lft': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'modified_at': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'filer_owned_folders'", 'null': 'True', 'to': "orm['auth.User']"}),
'parent': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'children'", 'null': 'True', 'to': "orm['filer.Folder']"}),
'rght': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'tree_id': ('django.db.models.fields.PositiveIntegerField', [], {'db_index': 'True'}),
'uploaded_at': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'})
},
'filer.folderpermission': {
'Meta': {'object_name': 'FolderPermission'},
'can_add_children': ('django.db.models.fields.SmallIntegerField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'can_edit': ('django.db.models.fields.SmallIntegerField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'can_read': ('django.db.models.fields.SmallIntegerField', [], {'default': 'None', 'null': 'True', 'blank': 'True'}),
'everybody': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'folder': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['filer.Folder']", 'null': 'True', 'blank': 'True'}),
'group': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'filer_folder_permissions'", 'null': 'True', 'to': "orm['auth.Group']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'type': ('django.db.models.fields.SmallIntegerField', [], {'default': '0'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'blank': 'True', 'related_name': "'filer_folder_permissions'", 'null': 'True', 'to': "orm['auth.User']"})
},
'filer.image': {
'Meta': {'object_name': 'Image', '_ormbases': ['filer.File']},
'_height': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'_width': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'author': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'date_taken': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'default_alt_text': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'default_caption': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'file_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['filer.File']", 'unique': 'True', 'primary_key': 'True'}),
'must_always_publish_author_credit': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'must_always_publish_copyright': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'subject_location': ('django.db.models.fields.CharField', [], {'default': 'None', 'max_length': '64', 'null': 'True', 'blank': 'True'})
},
'taggit.tag': {
'Meta': {'object_name': 'Tag'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'slug': ('django.db.models.fields.SlugField', [], {'unique': 'True', 'max_length': '100'})
},
'taggit.taggeditem': {
'Meta': {'object_name': 'TaggedItem'},
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'taggit_taggeditem_tagged_items'", 'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'object_id': ('django.db.models.fields.IntegerField', [], {'db_index': 'True'}),
'tag': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'taggit_taggeditem_items'", 'to': "orm['taggit.Tag']"})
}
}
complete_apps = ['filer']
|
bsd-3-clause
|
40123248/2015cd_midterm2
|
static/Brython3.1.0-20150301-090019/Lib/textwrap.py
|
745
|
16488
|
"""Text wrapping and filling.
"""
# Copyright (C) 1999-2001 Gregory P. Ward.
# Copyright (C) 2002, 2003 Python Software Foundation.
# Written by Greg Ward <[email protected]>
import re
__all__ = ['TextWrapper', 'wrap', 'fill', 'dedent', 'indent']
# Hardcode the recognized whitespace characters to the US-ASCII
# whitespace characters. The main reason for doing this is that in
# ISO-8859-1, 0xa0 is non-breaking whitespace, so in certain locales
# that character winds up in string.whitespace. Respecting
# string.whitespace in those cases would 1) make textwrap treat 0xa0 the
# same as any other whitespace char, which is clearly wrong (it's a
# *non-breaking* space), 2) possibly cause problems with Unicode,
# since 0xa0 is not in range(128).
_whitespace = '\t\n\x0b\x0c\r '
class TextWrapper:
"""
Object for wrapping/filling text. The public interface consists of
the wrap() and fill() methods; the other methods are just there for
subclasses to override in order to tweak the default behaviour.
If you want to completely replace the main wrapping algorithm,
you'll probably have to override _wrap_chunks().
Several instance attributes control various aspects of wrapping:
width (default: 70)
the maximum width of wrapped lines (unless break_long_words
is false)
initial_indent (default: "")
string that will be prepended to the first line of wrapped
output. Counts towards the line's width.
subsequent_indent (default: "")
string that will be prepended to all lines save the first
of wrapped output; also counts towards each line's width.
expand_tabs (default: true)
Expand tabs in input text to spaces before further processing.
Each tab will become 0 .. 'tabsize' spaces, depending on its position
in its line. If false, each tab is treated as a single character.
tabsize (default: 8)
Expand tabs in input text to 0 .. 'tabsize' spaces, unless
'expand_tabs' is false.
replace_whitespace (default: true)
Replace all whitespace characters in the input text by spaces
after tab expansion. Note that if expand_tabs is false and
replace_whitespace is true, every tab will be converted to a
single space!
fix_sentence_endings (default: false)
Ensure that sentence-ending punctuation is always followed
by two spaces. Off by default because the algorithm is
(unavoidably) imperfect.
break_long_words (default: true)
Break words longer than 'width'. If false, those words will not
be broken, and some lines might be longer than 'width'.
break_on_hyphens (default: true)
Allow breaking hyphenated words. If true, wrapping will occur
preferably on whitespaces and right after hyphens part of
compound words.
drop_whitespace (default: true)
Drop leading and trailing whitespace from lines.
"""
unicode_whitespace_trans = {}
uspace = ord(' ')
for x in _whitespace:
unicode_whitespace_trans[ord(x)] = uspace
# This funky little regex is just the trick for splitting
# text up into word-wrappable chunks. E.g.
# "Hello there -- you goof-ball, use the -b option!"
# splits into
# Hello/ /there/ /--/ /you/ /goof-/ball,/ /use/ /the/ /-b/ /option!
# (after stripping out empty strings).
wordsep_re = re.compile(
r'(\s+|' # any whitespace
r'[^\s\w]*\w+[^0-9\W]-(?=\w+[^0-9\W])|' # hyphenated words
r'(?<=[\w\!\"\'\&\.\,\?])-{2,}(?=\w))') # em-dash
    # This less funky little regex just splits on recognized spaces. E.g.
# "Hello there -- you goof-ball, use the -b option!"
# splits into
# Hello/ /there/ /--/ /you/ /goof-ball,/ /use/ /the/ /-b/ /option!/
wordsep_simple_re = re.compile(r'(\s+)')
# XXX this is not locale- or charset-aware -- string.lowercase
# is US-ASCII only (and therefore English-only)
sentence_end_re = re.compile(r'[a-z]' # lowercase letter
r'[\.\!\?]' # sentence-ending punct.
r'[\"\']?' # optional end-of-quote
r'\Z') # end of chunk
def __init__(self,
width=70,
initial_indent="",
subsequent_indent="",
expand_tabs=True,
replace_whitespace=True,
fix_sentence_endings=False,
break_long_words=True,
drop_whitespace=True,
break_on_hyphens=True,
tabsize=8):
self.width = width
self.initial_indent = initial_indent
self.subsequent_indent = subsequent_indent
self.expand_tabs = expand_tabs
self.replace_whitespace = replace_whitespace
self.fix_sentence_endings = fix_sentence_endings
self.break_long_words = break_long_words
self.drop_whitespace = drop_whitespace
self.break_on_hyphens = break_on_hyphens
self.tabsize = tabsize
# -- Private methods -----------------------------------------------
# (possibly useful for subclasses to override)
def _munge_whitespace(self, text):
"""_munge_whitespace(text : string) -> string
Munge whitespace in text: expand tabs and convert all other
whitespace characters to spaces. Eg. " foo\tbar\n\nbaz"
        becomes " foo    bar  baz".
"""
if self.expand_tabs:
text = text.expandtabs(self.tabsize)
if self.replace_whitespace:
text = text.translate(self.unicode_whitespace_trans)
return text
def _split(self, text):
"""_split(text : string) -> [string]
Split the text to wrap into indivisible chunks. Chunks are
not quite the same as words; see _wrap_chunks() for full
details. As an example, the text
Look, goof-ball -- use the -b option!
breaks into the following chunks:
'Look,', ' ', 'goof-', 'ball', ' ', '--', ' ',
'use', ' ', 'the', ' ', '-b', ' ', 'option!'
if break_on_hyphens is True, or in:
'Look,', ' ', 'goof-ball', ' ', '--', ' ',
          'use', ' ', 'the', ' ', '-b', ' ', 'option!'
otherwise.
"""
if self.break_on_hyphens is True:
chunks = self.wordsep_re.split(text)
else:
chunks = self.wordsep_simple_re.split(text)
chunks = [c for c in chunks if c]
return chunks
def _fix_sentence_endings(self, chunks):
"""_fix_sentence_endings(chunks : [string])
Correct for sentence endings buried in 'chunks'. Eg. when the
original text contains "... foo.\nBar ...", munge_whitespace()
and split() will convert that to [..., "foo.", " ", "Bar", ...]
which has one too few spaces; this method simply changes the one
space to two.
"""
i = 0
patsearch = self.sentence_end_re.search
while i < len(chunks)-1:
if chunks[i+1] == " " and patsearch(chunks[i]):
chunks[i+1] = " "
i += 2
else:
i += 1
def _handle_long_word(self, reversed_chunks, cur_line, cur_len, width):
"""_handle_long_word(chunks : [string],
cur_line : [string],
cur_len : int, width : int)
Handle a chunk of text (most likely a word, not whitespace) that
is too long to fit in any line.
"""
# Figure out when indent is larger than the specified width, and make
# sure at least one character is stripped off on every pass
if width < 1:
space_left = 1
else:
space_left = width - cur_len
# If we're allowed to break long words, then do so: put as much
# of the next chunk onto the current line as will fit.
if self.break_long_words:
cur_line.append(reversed_chunks[-1][:space_left])
reversed_chunks[-1] = reversed_chunks[-1][space_left:]
# Otherwise, we have to preserve the long word intact. Only add
# it to the current line if there's nothing already there --
# that minimizes how much we violate the width constraint.
elif not cur_line:
cur_line.append(reversed_chunks.pop())
# If we're not allowed to break long words, and there's already
# text on the current line, do nothing. Next time through the
# main loop of _wrap_chunks(), we'll wind up here again, but
# cur_len will be zero, so the next line will be entirely
# devoted to the long word that we can't handle right now.
def _wrap_chunks(self, chunks):
"""_wrap_chunks(chunks : [string]) -> [string]
Wrap a sequence of text chunks and return a list of lines of
length 'self.width' or less. (If 'break_long_words' is false,
some lines may be longer than this.) Chunks correspond roughly
to words and the whitespace between them: each chunk is
indivisible (modulo 'break_long_words'), but a line break can
come between any two chunks. Chunks should not have internal
whitespace; ie. a chunk is either all whitespace or a "word".
Whitespace chunks will be removed from the beginning and end of
lines, but apart from that whitespace is preserved.
"""
lines = []
if self.width <= 0:
raise ValueError("invalid width %r (must be > 0)" % self.width)
# Arrange in reverse order so items can be efficiently popped
        # from a stack of chunks.
chunks.reverse()
while chunks:
# Start the list of chunks that will make up the current line.
# cur_len is just the length of all the chunks in cur_line.
cur_line = []
cur_len = 0
# Figure out which static string will prefix this line.
if lines:
indent = self.subsequent_indent
else:
indent = self.initial_indent
# Maximum width for this line.
width = self.width - len(indent)
# First chunk on line is whitespace -- drop it, unless this
# is the very beginning of the text (ie. no lines started yet).
if self.drop_whitespace and chunks[-1].strip() == '' and lines:
del chunks[-1]
while chunks:
l = len(chunks[-1])
# Can at least squeeze this chunk onto the current line.
if cur_len + l <= width:
cur_line.append(chunks.pop())
cur_len += l
# Nope, this line is full.
else:
break
# The current line is full, and the next chunk is too big to
# fit on *any* line (not just this one).
if chunks and len(chunks[-1]) > width:
self._handle_long_word(chunks, cur_line, cur_len, width)
# If the last chunk on this line is all whitespace, drop it.
if self.drop_whitespace and cur_line and cur_line[-1].strip() == '':
del cur_line[-1]
# Convert current line back to a string and store it in list
# of all lines (return value).
if cur_line:
lines.append(indent + ''.join(cur_line))
return lines
# -- Public interface ----------------------------------------------
def wrap(self, text):
"""wrap(text : string) -> [string]
Reformat the single paragraph in 'text' so it fits in lines of
no more than 'self.width' columns, and return a list of wrapped
lines. Tabs in 'text' are expanded with string.expandtabs(),
and all other whitespace characters (including newline) are
converted to space.
"""
text = self._munge_whitespace(text)
chunks = self._split(text)
if self.fix_sentence_endings:
self._fix_sentence_endings(chunks)
return self._wrap_chunks(chunks)
def fill(self, text):
"""fill(text : string) -> string
Reformat the single paragraph in 'text' to fit in lines of no
more than 'self.width' columns, and return a new string
containing the entire wrapped paragraph.
"""
return "\n".join(self.wrap(text))
# -- Convenience interface ---------------------------------------------
def wrap(text, width=70, **kwargs):
"""Wrap a single paragraph of text, returning a list of wrapped lines.
Reformat the single paragraph in 'text' so it fits in lines of no
more than 'width' columns, and return a list of wrapped lines. By
default, tabs in 'text' are expanded with string.expandtabs(), and
all other whitespace characters (including newline) are converted to
space. See TextWrapper class for available keyword args to customize
wrapping behaviour.
"""
w = TextWrapper(width=width, **kwargs)
return w.wrap(text)
def fill(text, width=70, **kwargs):
"""Fill a single paragraph of text, returning a new string.
Reformat the single paragraph in 'text' to fit in lines of no more
than 'width' columns, and return a new string containing the entire
wrapped paragraph. As with wrap(), tabs are expanded and other
whitespace characters converted to space. See TextWrapper class for
available keyword args to customize wrapping behaviour.
"""
w = TextWrapper(width=width, **kwargs)
return w.fill(text)
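# Quick illustration of the two convenience helpers above (expected output is
# shown as comments and assumes the default TextWrapper settings):
#
#     wrap("The quick brown fox jumped over the lazy dog", width=12)
#     # -> ['The quick', 'brown fox', 'jumped over', 'the lazy dog']
#
#     fill("The quick brown fox jumped over the lazy dog", width=12)
#     # -> 'The quick\nbrown fox\njumped over\nthe lazy dog'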
# -- Loosely related functionality -------------------------------------
_whitespace_only_re = re.compile('^[ \t]+$', re.MULTILINE)
_leading_whitespace_re = re.compile('(^[ \t]*)(?:[^ \t\n])', re.MULTILINE)
def dedent(text):
"""Remove any common leading whitespace from every line in `text`.
This can be used to make triple-quoted strings line up with the left
edge of the display, while still presenting them in the source code
in indented form.
Note that tabs and spaces are both treated as whitespace, but they
are not equal: the lines " hello" and "\thello" are
considered to have no common leading whitespace. (This behaviour is
new in Python 2.5; older versions of this module incorrectly
expanded tabs before searching for common leading whitespace.)
"""
# Look for the longest leading string of spaces and tabs common to
# all lines.
margin = None
text = _whitespace_only_re.sub('', text)
indents = _leading_whitespace_re.findall(text)
for indent in indents:
if margin is None:
margin = indent
# Current line more deeply indented than previous winner:
# no change (previous winner is still on top).
elif indent.startswith(margin):
pass
# Current line consistent with and no deeper than previous winner:
# it's the new winner.
elif margin.startswith(indent):
margin = indent
# Current line and previous winner have no common whitespace:
# there is no margin.
else:
margin = ""
break
# sanity check (testing/debugging only)
if 0 and margin:
for line in text.split("\n"):
assert not line or line.startswith(margin), \
"line = %r, margin = %r" % (line, margin)
if margin:
text = re.sub(r'(?m)^' + margin, '', text)
return text
def indent(text, prefix, predicate=None):
"""Adds 'prefix' to the beginning of selected lines in 'text'.
If 'predicate' is provided, 'prefix' will only be added to the lines
where 'predicate(line)' is True. If 'predicate' is not provided,
it will default to adding 'prefix' to all non-empty lines that do not
consist solely of whitespace characters.
"""
if predicate is None:
def predicate(line):
return line.strip()
def prefixed_lines():
for line in text.splitlines(True):
yield (prefix + line if predicate(line) else line)
return ''.join(prefixed_lines())
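# Small illustration of indent() with the default predicate (only lines with
# non-whitespace content are prefixed):
#
#     indent("hello\n\nworld\n", "  ")
#     # -> '  hello\n\n  world\n'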
if __name__ == "__main__":
#print dedent("\tfoo\n\tbar")
#print dedent(" \thello there\n \t how are you?")
print(dedent("Hello there.\n This is indented."))
|
gpl-3.0
|
matteo88/gasistafelice
|
gasistafelice/supplier/management/commands/export_SBW_suppliers.py
|
6
|
10164
|
from django.core.management.base import BaseCommand, CommandError
from django.conf import settings
from django.db import IntegrityError, transaction
from django.core.files import File
from django.contrib.auth.models import User
from gasistafelice.lib.csvmanager import CSVManager
from gasistafelice.lib import get_params_from_template
from gasistafelice.supplier.models import Supplier, Product, SupplierStock, Certification, ProductCategory, ProductPU, ProductMU
from gasistafelice.gas.models import GAS, GASMember
from gasistafelice.base.models import Place, Person, Contact
import decimal
from pprint import pprint
import logging
log = logging.getLogger(__name__)
ENCODING = "iso-8859-1"
PRODUCT_MU = ProductMU.objects.get(pk=7) #Kg
PRODUCT_CAT = ProductCategory.objects.get(pk=81) #
PRODUCT_PU = ProductPU.objects.get(pk=5) #cf
CERTIFICATION = [Certification.objects.get(pk=4)] #Bad
STEP = 1.0
class Command(BaseCommand):
#TODO: pass argument <gas_pk> for automatic associate a pact for the supplier list?
args = "<supplier_csv_file> <products_csv_file> [pk][delimiter] [python_template] [python_template2] [simulate]"
allowed_keys_1 = ['ID','Active (0/1)','Name *','Description','Short description','Meta title','Meta keywords','Meta description','ImageURL']
allowed_keys_2 = ['ID','Active (0/1)','Name *','Categories (x,y,z...)','Price tax excluded or Price tax included','Tax rules ID','Wholesale price','On sale (0/1)','Discount amount','Discount percent','Discount from (yyyy-mm-dd)','Discount to (yyyy-mm-dd)','Reference #','Supplier reference #','Supplier','Manufacturer','EAN13','UPC','Ecotax','Width','Height','Depth','Weight','Quantity','Minimal quantity','Visibility','Additional shipping cost','Unity','Unit price','Short description','Description','Tags (x,y,z...)','Meta title','Meta keywords','Meta description','URL rewritten','Text when in stock','Text when backorder allowed','Available for order (0 = No, 1 = Yes)','Product available date','Product creation date','Show price (0 = No, 1 = Yes)','Image URLs (x,y,z...)','Delete existing images (0 = No, 1 = Yes)','Feature(Name:Value:Position)','Available online only (0 = No, 1 = Yes)','Condition','Customizable (0 = No, 1 = Yes)','Uploadable files (0 = No, 1 = Yes)','Text fields (0 = No, 1 = Yes)','Out of stock','ID / Name of shop','Advanced stock management','Depends On Stock','Warehouse']
help = """Import supplier and products from SBW csv file. Attributes allowed in python template are:
* supplier: """ + ",".join(allowed_keys_1) + """;
* products: """ + ",".join(allowed_keys_2) + """;
They are both connected by `fake_id_supplier` which must match between both the provided documents.
"""
def handle(self, *args, **options):
self.simulate = False
delimiter = ';'
pk = 0
tmpl_1 = "%(ID)s %(Active (0/1))s %(Name *)s %(Description)s %(Short description)s %(Meta title)s %(Meta keywords)s %(Meta description)s %(ImageURL)s"
        tmpl_2 = "%(ID)s %(Active (0/1))s %(Name *)s %(Categories (x,y,z...))s %(Price tax excluded or Price tax included)s %(Tax rules ID)s %(Wholesale price)s %(On sale (0/1))s %(Discount amount)s %(Discount percent)s %(Discount from (yyyy-mm-dd))s %(Discount to (yyyy-mm-dd))s %(Reference #)s %(Supplier reference #)s %(Supplier)s %(Manufacturer)s %(EAN13)s %(UPC)s %(Ecotax)s %(Width)s %(Height)s %(Depth)s %(Weight)s %(Quantity)s %(Minimal quantity)s %(Visibility)s %(Additional shipping cost)s %(Unity)s %(Unit price)s %(Short description)s %(Description)s %(Tags (x,y,z...))s %(Meta title)s %(Meta keywords)s %(Meta description)s %(URL rewritten)s %(Text when in stock)s %(Text when backorder allowed)s %(Available for order (0 = No, 1 = Yes))s %(Product available date)s %(Product creation date)s %(Show price (0 = No, 1 = Yes))s %(Image URLs (x,y,z...))s %(Delete existing images (0 = No, 1 = Yes))s %(Feature(Name:Value:Position))s %(Available online only (0 = No, 1 = Yes))s %(Condition)s %(Customizable (0 = No, 1 = Yes))s %(Uploadable files (0 = No, 1 = Yes))s %(Text fields (0 = No, 1 = Yes))s %(Out of stock)s %(ID / Name of shop)s %(Advanced stock management)s %(Depends On Stock)s %(Warehouse)s "
try:
csv_filename_suppliers = args[0]
csv_filename_products = args[1]
except:
raise CommandError("Usage import_suppliers: %s" % (self.args))
try:
i = 2
while(i < 6):
arg = args[i].split('=')
if arg[0] == 'delimiter':
delimiter = arg[1]
elif arg[0] == 'simulate':
self.simulate = self._bool(arg[1], False)
elif arg[0] == 'python_template':
tmpl_1 = arg[1]
elif arg[0] == 'python_template2':
tmpl_2 = arg[1]
if arg[0] == 'pk':
pk = arg[1]
i += 1
except IndexError as e:
pass
if pk:
suppliers = [Supplier.objects.get(pk=pk)]
else:
suppliers = Supplier.objects.all()
# [ {'':'','';''}, ]
stocks_data = []
suppliers_data = []
for supplier in suppliers:
log.info(pprint("#### ---- start new supplier export (%s)... ----####" % (supplier.pk)))
suppliers_data.append(
{'ID' : supplier.pk,
'Active (0/1)' : '1',
'Name *' : supplier.name,
'Description' : supplier.description,
'Short description' : '',
'Meta title' : '',
'Meta keywords' : '',
'Meta description' : '',
'ImageURL' : supplier.logo
}
)
for stock in supplier.stocks:
log.info(pprint(" %s=%s product [%s]" % (supplier ,stock , stock.pk)))
stocks_data.append(
{'ID': stock.pk,
'Active (0/1)' : '1',
'Name *' : stock.name.encode('utf8'),
'Categories (x,y,z...)' : str(stock.supplier_category).encode('utf8') if stock.supplier_category else str(stock.product.category).encode('utf8'),
'Price tax excluded or Price tax included' : stock.price,
'Tax rules ID' : '',
'Wholesale price' : stock.price,
'On sale (0/1)' : '1',
'Discount amount' : '',
'Discount percent' : '',
'Discount from (yyyy-mm-dd)' : '',
'Discount to (yyyy-mm-dd)' : '',
'Reference #' : supplier.pk,
'Supplier reference #' : supplier.pk,
'Supplier' : supplier.name,
'Manufacturer' : supplier.name,
'EAN13' : '',
'UPC' : '',
'Ecotax' : '',
'Width' : '',
'Height' : '',
'Depth' : '',
'Weight' : '',
'Quantity' : stock.amount_available,
'Minimal quantity' : stock.units_minimum_amount,
'Visibility' : '',
'Additional shipping cost' : '',
'Unity' : '',
'Unit price' : stock.product.muppu ,
'Short description' : '',
#'Description' : stock.product.description.decode('utf-8').encode('ascii','replace'),
'Description' : '',
'Tags (x,y,z...)' : '',
'Meta title' : '',
'Meta keywords' : '',
'Meta description' : '',
'URL rewritten' : '',
'Text when in stock' : '',
'Text when backorder allowed' : '',
'Available for order (0 = No, 1 = Yes)' : int(not stock.deleted),
'Product available date' : '',
'Product creation date' : '',
'Show price (0 = No, 1 = Yes)' : '',
'Image URLs (x,y,z...)' : stock.image,
'Delete existing images (0 = No, 1 = Yes)' : '',
'Feature(Name:Value:Position)' : '',
'Available online only (0 = No, 1 = Yes)' : '',
'Condition' : '',
'Customizable (0 = No, 1 = Yes)' : '',
'Uploadable files (0 = No, 1 = Yes)' : '',
'Text fields (0 = No, 1 = Yes)' : '',
'Out of stock' : int(stock.deleted),
'ID / Name of shop' : supplier.name,
'Advanced stock management' : '',
'Depends On Stock' : '',
'Warehouse' : ''
}
)
# STEP 1: write data in files
self._write_data(csv_filename_suppliers, delimiter,suppliers_data, tmpl_1, )
self._write_data(csv_filename_products, delimiter, stocks_data, tmpl_2, )
return 0
def _write_data(self, csv_filename, delimiter, csvdata, tmpl):
print "self.simulate", self.simulate
if(self.simulate):
log.debug(pprint("SIMULATING write. Content is: %s" % csvdata))
else:
log.debug(pprint("WRITING on file %s. Content is: %s" % (csv_filename,csvdata)))
f = file(csv_filename, "wb")
fieldnames = get_params_from_template(tmpl)
m = CSVManager(fieldnames=fieldnames, delimiter=delimiter, encoding=ENCODING)
data = m.write(csvdata)
#log.debug(pprint(m.read(csvdata)))
f.write(data)
f.close()
return
def _bool(self, val_d, default):
if not val_d or val_d =='' :
return default
else:
try:
x=bool(val_d)
except:
return default
else:
return x
|
agpl-3.0
|
geerlingguy/ansible
|
lib/ansible/parsing/plugin_docs.py
|
38
|
4260
|
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
from ansible.module_utils._text import to_text
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.utils.display import Display
display = Display()
# NOTE: should move to just reading the variable as we do in plugin_loader since we already load as a 'module'
# which is much faster than ast parsing ourselves.
def read_docstring(filename, verbose=True, ignore_errors=True):
"""
Search for assignment of the DOCUMENTATION and EXAMPLES variables in the given file.
Parse DOCUMENTATION from YAML and return the YAML doc or None together with EXAMPLES, as plain text.
"""
data = {
'doc': None,
'plainexamples': None,
'returndocs': None,
'metadata': None, # NOTE: not used anymore, kept for compat
'seealso': None,
}
string_to_vars = {
'DOCUMENTATION': 'doc',
'EXAMPLES': 'plainexamples',
'RETURN': 'returndocs',
'ANSIBLE_METADATA': 'metadata', # NOTE: now unused, but kept for backwards compat
}
try:
with open(filename, 'rb') as b_module_data:
M = ast.parse(b_module_data.read())
for child in M.body:
if isinstance(child, ast.Assign):
for t in child.targets:
try:
theid = t.id
except AttributeError:
# skip errors can happen when trying to use the normal code
display.warning("Failed to assign id for %s on %s, skipping" % (t, filename))
continue
if theid in string_to_vars:
varkey = string_to_vars[theid]
if isinstance(child.value, ast.Dict):
data[varkey] = ast.literal_eval(child.value)
else:
if theid == 'EXAMPLES':
# examples 'can' be yaml, but even if so, we dont want to parse as such here
# as it can create undesired 'objects' that don't display well as docs.
data[varkey] = to_text(child.value.s)
else:
# string should be yaml if already not a dict
data[varkey] = AnsibleLoader(child.value.s, file_name=filename).get_single_data()
display.debug('assigned: %s' % varkey)
except Exception:
if verbose:
display.error("unable to parse %s" % filename)
if not ignore_errors:
raise
return data
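# Hedged usage sketch (the path below is purely illustrative): for a plugin file
# containing top-level assignments such as DOCUMENTATION = '''...''' and
# EXAMPLES = '''...''', the helper above returns the parsed YAML and the raw
# examples text respectively:
#
#     info = read_docstring('library/ping.py')
#     # info['doc']           -> dict parsed from DOCUMENTATION, or None
#     # info['plainexamples'] -> EXAMPLES as plain text, or None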
def read_docstub(filename):
"""
Quickly find short_description using string methods instead of node parsing.
This does not return a full set of documentation strings and is intended for
operations like ansible-doc -l.
"""
in_documentation = False
capturing = False
indent_detection = ''
doc_stub = []
with open(filename, 'r') as t_module_data:
for line in t_module_data:
if in_documentation:
# start capturing the stub until indentation returns
if capturing and line.startswith(indent_detection):
doc_stub.append(line)
elif capturing and not line.startswith(indent_detection):
break
elif line.lstrip().startswith('short_description:'):
capturing = True
# Detect that the short_description continues on the next line if it's indented more
# than short_description itself.
indent_detection = ' ' * (len(line) - len(line.lstrip()) + 1)
doc_stub.append(line)
elif line.startswith('DOCUMENTATION') and '=' in line:
in_documentation = True
short_description = r''.join(doc_stub).strip().rstrip('.')
data = AnsibleLoader(short_description, file_name=filename).get_single_data()
return data
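# Rough sketch of the expected result (illustrative): for a file whose
# DOCUMENTATION block contains the line
#
#     short_description: Ping a host
#
# read_docstub() returns {'short_description': 'Ping a host'}, which is all that
# lightweight listings such as `ansible-doc -l` need.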
|
gpl-3.0
|
pombredanne/pyfpm
|
docs/conf.py
|
2
|
7784
|
# -*- coding: utf-8 -*-
#
# pyfpm documentation build configuration file, created by
# sphinx-quickstart on Sat Aug 4 20:39:29 2012.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('..'))
import pyfpm
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'pyfpm'
copyright = pyfpm.__copyright__
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = pyfpm.__version__
# The full version, including alpha/beta/rc tags.
release = pyfpm.__version__
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'pyfpmdoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'pyfpm.tex', u'pyfpm Documentation',
pyfpm.__author__, 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'pyfpm', u'pyfpm Documentation',
[pyfpm.__author__], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'pyfpm', u'pyfpm Documentation',
pyfpm.__author__, 'pyfpm', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
|
mit
|
miragshin/pupy
|
pupy/packages/all/interactive_shell.py
|
25
|
1683
|
# -*- coding: UTF8 -*-
import sys
from subprocess import PIPE, Popen
from threading import Thread
from Queue import Queue, Empty
import time
import traceback
ON_POSIX = 'posix' in sys.builtin_module_names
def write_output(out, queue):
try:
for c in iter(lambda: out.read(1), b""):
queue.put(c)
out.close()
except Exception as e:
print(traceback.format_exc())
def flush_loop(queue, encoding):
try:
while True:
buf=b""
while True:
try:
buf+=queue.get_nowait()
except Empty:
break
if buf:
if encoding:
try:
buf=buf.decode(encoding)
except Exception:
pass
sys.stdout.write(buf)
sys.stdout.flush()
time.sleep(0.5)
except Exception as e:
print(traceback.format_exc())
def interactive_open(program=None, encoding=None):
try:
if program is None:
if "win" in sys.platform.lower():
program="cmd.exe"
encoding="cp437"
else:
program="/bin/sh"
encoding=None
print("Opening interactive %s ... (encoding : %s)" % (program, encoding))
p = Popen([program], stdout=PIPE, stderr=PIPE, stdin=PIPE, bufsize=0, shell=True, close_fds=ON_POSIX, universal_newlines=True)
q = Queue()
q2 = Queue()
t = Thread(target=write_output, args=(p.stdout, q))
t.daemon = True
t.start()
t = Thread(target=write_output, args=(p.stderr, q2))
t.daemon = True
t.start()
t = Thread(target=flush_loop, args=(q, encoding))
t.daemon = True
t.start()
t = Thread(target=flush_loop, args=(q2, encoding))
t.daemon = True
t.start()
while True:
line = raw_input()
p.stdin.write(line+"\n")
if line.strip()=="exit":
break
except Exception as e:
print(traceback.format_exc())
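# A minimal usage sketch (assumes a POSIX host; interactive_open() above picks
# cmd.exe/cp437 on Windows and /bin/sh otherwise when called with no arguments):
def _example_shell():
    interactive_open('/bin/sh', encoding=None)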
|
bsd-3-clause
|
VanirAOSP/kernel_oppo_n1
|
tools/perf/util/setup.py
|
4998
|
1330
|
#!/usr/bin/python2
from distutils.core import setup, Extension
from os import getenv
from distutils.command.build_ext import build_ext as _build_ext
from distutils.command.install_lib import install_lib as _install_lib
class build_ext(_build_ext):
def finalize_options(self):
_build_ext.finalize_options(self)
self.build_lib = build_lib
self.build_temp = build_tmp
class install_lib(_install_lib):
def finalize_options(self):
_install_lib.finalize_options(self)
self.build_dir = build_lib
cflags = ['-fno-strict-aliasing', '-Wno-write-strings']
cflags += getenv('CFLAGS', '').split()
build_lib = getenv('PYTHON_EXTBUILD_LIB')
build_tmp = getenv('PYTHON_EXTBUILD_TMP')
ext_sources = [f.strip() for f in file('util/python-ext-sources')
if len(f.strip()) > 0 and f[0] != '#']
perf = Extension('perf',
sources = ext_sources,
include_dirs = ['util/include'],
extra_compile_args = cflags,
)
setup(name='perf',
version='0.1',
description='Interface with the Linux profiling infrastructure',
author='Arnaldo Carvalho de Melo',
author_email='[email protected]',
license='GPLv2',
url='http://perf.wiki.kernel.org',
ext_modules=[perf],
cmdclass={'build_ext': build_ext, 'install_lib': install_lib})
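# Illustrative standalone invocation (hypothetical paths; in practice the perf
# Makefile exports these variables and drives this script):
#   PYTHON_EXTBUILD_LIB=/tmp/python_ext/lib PYTHON_EXTBUILD_TMP=/tmp/python_ext/tmp \
#       python2 util/setup.py --quiet build_ext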
|
gpl-2.0
|
markYoungH/chromium.src
|
tools/telemetry/telemetry/core/platform/power_monitor/power_monitor_controller_unittest.py
|
25
|
1135
|
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
import unittest
import telemetry.core.platform.power_monitor as power_monitor
from telemetry.core.platform.power_monitor import power_monitor_controller
class PowerMonitorControllerTest(unittest.TestCase):
def testComposition(self):
class P1(power_monitor.PowerMonitor):
def StartMonitoringPower(self, browser):
raise NotImplementedError()
def StopMonitoringPower(self):
raise NotImplementedError()
class P2(power_monitor.PowerMonitor):
def __init__(self, value):
self._value = value
def CanMonitorPower(self):
return True
def StartMonitoringPower(self, browser):
pass
def StopMonitoringPower(self):
return self._value
controller = power_monitor_controller.PowerMonitorController(
[P1(), P2(1), P2(2)])
self.assertEqual(controller.CanMonitorPower(), True)
controller.StartMonitoringPower(None)
self.assertEqual(controller.StopMonitoringPower(), 1)
|
bsd-3-clause
|
quickresolve/accel.ai
|
flask-aws/lib/python2.7/site-packages/pip/_vendor/requests/packages/chardet/mbcsgroupprober.py
|
2769
|
1967
|
######################## BEGIN LICENSE BLOCK ########################
# The Original Code is Mozilla Universal charset detector code.
#
# The Initial Developer of the Original Code is
# Netscape Communications Corporation.
# Portions created by the Initial Developer are Copyright (C) 2001
# the Initial Developer. All Rights Reserved.
#
# Contributor(s):
# Mark Pilgrim - port to Python
# Shy Shalom - original C code
# Proofpoint, Inc.
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
######################### END LICENSE BLOCK #########################
from .charsetgroupprober import CharSetGroupProber
from .utf8prober import UTF8Prober
from .sjisprober import SJISProber
from .eucjpprober import EUCJPProber
from .gb2312prober import GB2312Prober
from .euckrprober import EUCKRProber
from .cp949prober import CP949Prober
from .big5prober import Big5Prober
from .euctwprober import EUCTWProber
class MBCSGroupProber(CharSetGroupProber):
def __init__(self):
CharSetGroupProber.__init__(self)
self._mProbers = [
UTF8Prober(),
SJISProber(),
EUCJPProber(),
GB2312Prober(),
EUCKRProber(),
CP949Prober(),
Big5Prober(),
EUCTWProber()
]
self.reset()
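# A minimal usage sketch (assumes the feed()/get_charset_name()/get_confidence()
# API inherited from CharSetGroupProber; byte_data is a hypothetical bytes blob):
def _example_probe(byte_data):
    prober = MBCSGroupProber()
    prober.feed(byte_data)
    return prober.get_charset_name(), prober.get_confidence()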
|
mit
|
fintech-circle/edx-platform
|
lms/djangoapps/course_wiki/settings.py
|
260
|
1247
|
"""
These callables are used by django-wiki to check various permissions
a user has on an article.
"""
from course_wiki.utils import user_is_article_course_staff
def CAN_DELETE(article, user): # pylint: disable=invalid-name
"""Is user allowed to soft-delete article?"""
return _is_staff_for_article(article, user)
def CAN_MODERATE(article, user): # pylint: disable=invalid-name
"""Is user allowed to restore or purge article?"""
return _is_staff_for_article(article, user)
def CAN_CHANGE_PERMISSIONS(article, user): # pylint: disable=invalid-name
"""Is user allowed to change permissions on article?"""
return _is_staff_for_article(article, user)
def CAN_ASSIGN(article, user): # pylint: disable=invalid-name
"""Is user allowed to change owner or group of article?"""
return _is_staff_for_article(article, user)
def CAN_ASSIGN_OWNER(article, user): # pylint: disable=invalid-name
"""Is user allowed to change group of article to one of its own groups?"""
return _is_staff_for_article(article, user)
def _is_staff_for_article(article, user):
"""Is the user staff for article's course wiki?"""
return user.is_staff or user.is_superuser or user_is_article_course_staff(user, article)
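# A minimal sketch of how these callables are typically consumed (assumes
# django-wiki's WIKI_CAN_* settings, which take dotted-path strings to
# permission callables; the exact setting names are an assumption here):
#   WIKI_CAN_DELETE = 'course_wiki.settings.CAN_DELETE'
#   WIKI_CAN_MODERATE = 'course_wiki.settings.CAN_MODERATE'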
|
agpl-3.0
|
Endika/c2c-rd-addons
|
mrp_no_gap/mrp.py
|
4
|
1542
|
# -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>).
# Copyright (C) 2012-2012 ChriCar Beteiligungs- und Beratungs- GmbH (<http://www.camptocamp.at>)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
from openerp.osv import fields, osv
import logging
class mrp_production(osv.osv):
_inherit = "mrp.production"
_defaults = {
'name' : '/',
}
def create(self, cr, uid, vals, context=None):
if vals.get('name', '/') == '/':
vals.update({'name': self.pool.get('ir.sequence').get(cr, uid, 'mrp.production')})
return super(mrp_production, self).create(cr, uid, vals, context=context)
mrp_production()
|
agpl-3.0
|
ashang/calibre
|
src/calibre/utils/recycle_bin.py
|
14
|
5429
|
#!/usr/bin/env python2
# vim:fileencoding=UTF-8:ts=4:sw=4:sta:et:sts=4:ai
from __future__ import print_function
__license__ = 'GPL v3'
__copyright__ = '2010, Kovid Goyal <[email protected]>'
__docformat__ = 'restructuredtext en'
import os, shutil, time, sys
from calibre import isbytestring
from calibre.constants import (iswindows, isosx, plugins, filesystem_encoding,
islinux)
recycle = None
if iswindows:
from calibre.utils.ipc import eintr_retry_call
from threading import Lock
recycler = None
rlock = Lock()
def start_recycler():
global recycler
if recycler is None:
from calibre.utils.ipc.simple_worker import start_pipe_worker
recycler = start_pipe_worker('from calibre.utils.recycle_bin import recycler_main; recycler_main()')
def recycle_path(path):
from win32com.shell import shell, shellcon
flags = (shellcon.FOF_ALLOWUNDO | shellcon.FOF_NOCONFIRMATION | shellcon.FOF_NOCONFIRMMKDIR | shellcon.FOF_NOERRORUI |
shellcon.FOF_SILENT | shellcon.FOF_RENAMEONCOLLISION)
retcode, aborted = shell.SHFileOperation((0, shellcon.FO_DELETE, path, None, flags, None, None))
if retcode != 0 or aborted:
raise RuntimeError('Failed to delete: %r with error code: %d' % (path, retcode))
def recycler_main():
while True:
path = eintr_retry_call(sys.stdin.readline)
if not path:
break
try:
path = path.decode('utf-8').rstrip()
except (ValueError, TypeError):
break
try:
recycle_path(path)
except:
eintr_retry_call(print, b'KO', file=sys.stdout)
sys.stdout.flush()
try:
import traceback
traceback.print_exc() # goes to stderr, which is the same as for parent process
except Exception:
pass # Ignore failures to write the traceback, since GUI processes on windows have no stderr
else:
eintr_retry_call(print, b'OK', file=sys.stdout)
sys.stdout.flush()
def delegate_recycle(path):
if '\n' in path:
raise ValueError('Cannot recycle paths that have newlines in them (%r)' % path)
with rlock:
start_recycler()
eintr_retry_call(print, path.encode('utf-8'), file=recycler.stdin)
recycler.stdin.flush()
# Theoretically this could be made non-blocking using a
# thread+queue, however the original implementation was blocking,
# so I am leaving it as blocking.
result = eintr_retry_call(recycler.stdout.readline)
if result.rstrip() != b'OK':
raise RuntimeError('recycler failed to recycle: %r' % path)
def recycle(path):
# We have to run the delete to recycle bin in a separate process as the
# morons who wrote SHFileOperation designed it to spin the event loop
# even when no UI is created. And there is no other way to send files
# to the recycle bin on windows. Le Sigh. So we do it in a worker
# process. Unfortunately, if the worker process exits immediately after
# deleting to recycle bin, winblows does not update the recycle bin
# icon. Le Double Sigh. So we use a long lived worker process, that is
# started on first recycle, and sticks around to handle subsequent
# recycles.
if isinstance(path, bytes):
path = path.decode(filesystem_encoding)
path = os.path.abspath(path) # Windows does not like recycling relative paths
return delegate_recycle(path)
elif isosx:
u = plugins['usbobserver'][0]
if hasattr(u, 'send2trash'):
def osx_recycle(path):
if isbytestring(path):
path = path.decode(filesystem_encoding)
u.send2trash(path)
recycle = osx_recycle
elif islinux:
from calibre.utils.linux_trash import send2trash
def fdo_recycle(path):
if isbytestring(path):
path = path.decode(filesystem_encoding)
path = os.path.abspath(path)
send2trash(path)
recycle = fdo_recycle
can_recycle = callable(recycle)
def nuke_recycle():
global can_recycle
can_recycle = False
def restore_recyle():
global can_recycle
can_recycle = callable(recycle)
def delete_file(path, permanent=False):
if not permanent and can_recycle:
try:
recycle(path)
return
except:
import traceback
traceback.print_exc()
os.remove(path)
def delete_tree(path, permanent=False):
if permanent:
try:
# For completely mysterious reasons, sometimes a file is left open
# leading to access errors. If we get an exception, wait and hope
# that whatever has the file (Antivirus, DropBox?) lets go of it.
shutil.rmtree(path)
except:
import traceback
traceback.print_exc()
time.sleep(1)
shutil.rmtree(path)
else:
if can_recycle:
try:
recycle(path)
return
except:
import traceback
traceback.print_exc()
delete_tree(path, permanent=True)
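# A minimal usage sketch (hypothetical paths; delete_file()/delete_tree() above
# try the recycle bin first and fall back to permanent removal):
def _example_cleanup():
    delete_file('/tmp/book.epub')                   # recycled when possible
    delete_tree('/tmp/extracted', permanent=True)   # removed outright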
|
gpl-3.0
|
kcorring/ds4100-music-analytics
|
muslytics/ITunesUtils.py
|
1
|
12418
|
#!/usr/bin/python
'''itunes utility classes'''
from __future__ import absolute_import, print_function
import logging
import re
from muslytics.Utils import strip_featured_artists, AbstractTrack, MULT_ARTIST_PATTERN, UNKNOWN_GENRE
logger = logging.getLogger(__name__)
FEAT_GROUP_PATTERN = re.compile(r'.*\(feat\.(?P<artist>.*)\)\s*')
RATING_MAPPING = {0: 0, None: 0, 20: 1, 40: 2, 60: 3, 80: 4, 100: 5}
class ITunesLibrary(object):
"""A representation of an ITunes Library"""
def __init__(self):
"""Initializes an empty ITunes Library."""
self.albums = {}
self.tracks = {}
self.artists = {}
self.genres = set()
def add_artist(self, artist):
"""Add an artist to this library.
Args:
artist (Artist): an ITunes artist
"""
self.artists[artist.name] = artist
def add_album(self, album_key, album):
"""Add an album to this library.
Args:
album_key (tuple): album identifier of name, year
album (Album): an ITunes album
"""
self.albums[album_key] = album
def add_track(self, track):
"""Add a track to this library.
Args:
track (ITunesTrack): an ITunes track
"""
self.tracks[track.id] = track
self._add_genre(track.genre)
def _add_genre(self, genre):
"""Add the genre to the library
Args:
genre (str): genre to be added
"""
self.genres.add(genre)
def remove_duplicates(self):
"""Merge duplicate tracks into one and remove extraneous.
Preference will be given to merging the duplicate track info onto the album
with the most recent year, then the one with the most tracks.
The updated track will have the sum of play counts and the highest rating.
If any of the duplicates are tagged loved, the merged will retain that.
"""
# { track_identifier : [track_id] }
identifier_to_index = {}
# { track_identifier }
duplicate_identifiers = set()
# { track_identifier : (track_id, plays, rating, loved) }
# the track we'll merge onto, and the merged plays/rating/loved
merged_tracks = {}
for track_id, track in self.tracks.iteritems():
track_ident = track.get_track_identifier()
if track_ident in identifier_to_index:
duplicate_identifiers.add(track_ident)
identifier_to_index[track_ident].append(track_id)
else:
identifier_to_index[track_ident] = [track_id]
for duplicate_identifier in duplicate_identifiers:
logger.info('Identified duplicate track {dup}.'.format(dup=duplicate_identifier))
duplicate_indexes = identifier_to_index[duplicate_identifier]
duplicate_tracks = [self.tracks[track_id] for track_id in duplicate_indexes]
plays = 0
rating = 0
loved = False
album_preference = []
for track in duplicate_tracks:
# if this is the first one, we'll start with a preference for this album
if not album_preference:
album_preference = [track.id, track.album_id,
len(self.albums[track.album_id].tracks)]
# else, first let's make sure the dup track is from a different album
elif not track.album_id == album_preference[1]:
# preference is given to the greater year, so check the diff
year_diff = track.album_id[1] - album_preference[1][1]
# years are the same, so fallback to the number of tracks in the album
tracks_in_album = len(self.albums[track.album_id].tracks)
if year_diff == 0:
if tracks_in_album > album_preference[2]:
album_preference = [track.id, track.album_id, tracks_in_album]
# this track's year is more recent, so prefer this album
elif year_diff > 0:
album_preference = [track.id, track.album_id, tracks_in_album]
loved = loved or track.loved
plays += track.plays
rating = track.rating if track.rating > rating else rating
merged_tracks[duplicate_identifier] = (album_preference[0], plays, rating, loved)
removed_track_count = 0
removed_album_count = 0
removed_artist_count = 0
# remove the tracks whose info we merged
for duplicate_identifier, merged_info in merged_tracks.iteritems():
duplicates = identifier_to_index[duplicate_identifier]
duplicates.remove(merged_info[0])
# merge the dup info onto the desired track
merged = self.tracks[merged_info[0]]
merged.set_plays(merged_info[1])
merged.set_rating(merged_info[2])
merged.set_loved(merged_info[3])
for duplicate_id in duplicates:
# remove the duplicate tracks from their albums
album_id = self.tracks[duplicate_id].album_id
del self.tracks[duplicate_id]
removed_track_count += 1
album = self.albums[album_id]
album.tracks.remove(duplicate_id)
# if removing a track from an album leaves it empty, delete the album
if not album.tracks:
for artist_name in album.artists:
if artist_name in self.artists:
albums = self.artists[artist_name].albums
if album_id in albums:
albums.remove(album_id)
# if deleting an album leaves an artist empty, delete the artist
if not albums:
del self.artists[artist_name]
removed_artist_count += 1
del self.albums[album_id]
removed_album_count += 1
if removed_track_count > 0:
logger.info(('Removed {lost_track} duplicate tracks, which resulted in removing ' +
'{lost_album} albums and {lost_artist} artists. {kept_track} tracks, ' +
'{kept_album} albums, and {kept_artist} artists remain.')
.format(lost_track=removed_track_count,
lost_album=removed_album_count,
lost_artist=removed_artist_count,
kept_track=len(self.tracks),
kept_album=len(self.albums),
kept_artist=len(self.artists)))
def __len__(self):
return len(self.tracks)
class ITunesArtist(object):
"""A representation of an artist."""
def __init__(self, name):
"""Initialize an artist by name.
Args:
name (str): artist name
"""
self.name = name
self.genres = set()
self.albums = set()
def add_album(self, album_id):
"""Associate an album with this artist.
Args:
album_id (tuple): album id
"""
self.albums.add(album_id)
def add_genre(self, genre):
"""Associate a genre with this artist.
Args:
genre (int): genre key
"""
self.genres.add(genre)
def __repr__(self):
return ('({name},{genres},{albums})'
.format(name=self.name, genres=self.genres, albums=self.albums))
class ITunesAlbum(object):
"""A collection of tracks in an album."""
def __init__(self, name, year):
"""Create a music album with no tracks.
Args:
name (str): album name
year (int): album year
"""
self.name = name
self.year = year
self.tracks = set()
self.artists = set()
def add_track(self, track):
"""Add a track to the music album, updating album artists as necessary.
Args:
track (ITunesTrack): iTunes track parsed from library XML
"""
self.tracks.add(track.id)
self.artists.update(track.artists)
def __repr__(self):
return ('({name},{year},{track_count})'
.format(name=self.name, year=self.year, track_count=len(self.tracks)))
class ITunesTrack(AbstractTrack):
"""Representation of an iTunes library music track."""
def __init__(self, id, name, artists, rating):
"""Create a base music track.
Sets the id, name, artists, rating as given.
If there are multiple or featured artists they will be combined in a set.
Defaults plays to 0 and genre to UNKNOWN_GENRE.
Args:
id (str): unique track id
name (str): track name
artists (str): track artists
rating (str): track rating
"""
self.rating = RATING_MAPPING[int(rating)]
self.plays = 0
feat_artists = FEAT_GROUP_PATTERN.match(name)
artists = re.split(MULT_ARTIST_PATTERN, artists)
main_artist = artists[0]
artists = set(artists)
if feat_artists:
name = strip_featured_artists(name)
feat_artists = re.split(MULT_ARTIST_PATTERN, feat_artists.group('artist').strip())
artists.update(feat_artists)
if len(artists) > 1:
artists.remove(main_artist)
self.artists = list(artists)
self.artists.insert(0, main_artist)
else:
self.artists = [main_artist]
self.genre = UNKNOWN_GENRE
self.loved = False
self.album_id = None
self.year = None
super(ITunesTrack, self).__init__(int(id), name)
def set_year(self, year):
"""Sets the track year.
Args:
year (int): year the track was released
"""
self.year = int(year) if year else None
def set_loved(self, is_loved):
"""Sets whether the track is 'loved' on iTunes.
Args:
is_loved (bool): whether the track is loved
"""
self.loved = is_loved
def set_genre(self, genre=UNKNOWN_GENRE):
"""Set the track genre.
Args:
genre (int): track genre, defaults to UNKNOWN_GENRE
"""
self.genre = genre
def set_rating(self, rating=0):
"""Set the track rating.
Args:
rating (int): track rating, defaults to 0
"""
self.rating = rating
def set_plays(self, plays=0):
"""Set the track play count.
Args:
plays (str): track play count, defaults to 0
"""
self.plays = int(plays)
def set_album_id(self, album_id):
"""Set the album id.
Args:
album_id (tuple): unique album identifier
"""
self.album_id = album_id
def get_track_identifier(self):
"""Retrieves a track identifier in the form of its name and artists.
Intended to be used for identifying duplicate tracks within the same album.
Returns:
tuple of track name, artists
"""
return (self.name, ','.join(self.artists))
def print_verbose(self):
"""Creates a verbose string representation.
Returns:
a verbose string representation of the track attributes
"""
rstr = 'Track ID:\t{id}\n'.format(id=self.id)
rstr += 'Name:\t\t{name}\n'.format(name=self.name)
rstr += 'Artists:\t\t{artist}\n'.format(artist=','.join(self.artists))
rstr += 'Genre:\t\t{genre}\n'.format(genre=self.genre)
rstr += 'Rating:\t\t{rating}\n'.format(rating=self.rating)
rstr += 'Loved:\t\t{loved}\n'.format(loved=self.loved)
rstr += 'Play Count:\t{plays}\n'.format(plays=self.plays)
rstr += 'Year:\t{year}\n'.format(year=self.year)
return rstr
def __repr__(self):
rstr = ('({id},{name},({artists}),{genre},{rating},{loved},{plays})'
.format(id=self.id, name=self.name, artists=','.join(self.artists),
genre=self.genre, rating=self.rating, loved=self.loved, plays=self.plays))
return rstr
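# A minimal usage sketch (hypothetical track data; uses only the classes defined above):
def _example_library():
    library = ITunesLibrary()
    album_key = ('Some Album', 2016)
    album = ITunesAlbum(*album_key)
    track = ITunesTrack('1', 'Some Song', 'Some Artist', '80')
    track.set_album_id(album_key)
    album.add_track(track)
    library.add_album(album_key, album)
    library.add_track(track)
    library.remove_duplicates()  # no-op here, since there is only one track
    return library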
|
mit
|
jay-z007/dxr
|
tests/test_basic/test_basic.py
|
7
|
2874
|
from dxr.testing import DxrInstanceTestCase
from nose.tools import eq_, ok_
class BasicTests(DxrInstanceTestCase):
"""Tests for functionality that isn't specific to particular filters"""
def test_text(self):
"""Assert that a plain text search works."""
self.found_files_eq('main', ['main.c', 'makefile'])
def test_and(self):
"""Finding 2 words should find only the lines that contain both."""
self.found_line_eq(
'main int',
'<b>int</b> <b>main</b>(<b>int</b> argc, char* argv[]){',
4)
def test_structural_and(self):
"""Try ANDing a structural with a text filter."""
self.found_line_eq(
'function:main int',
'<b>int</b> <b>main</b>(<b>int</b> argc, char* argv[]){',
4)
def test_case_sensitive(self):
"""Make sure case-sensitive searching is case-sensitive.
This tests trilite's substr-extents query type.
"""
self.found_files_eq('really', ['README.mkd'])
self.found_nothing('REALLY')
def test_case_insensitive(self):
"""Test case-insensitive free-text searching without extents.
Also test negation of text queries.
This tests trilite's isubstr query type.
"""
results = self.search_results('path:makefile -code')
eq_(results,
[{"path": "makefile",
"lines": [
{"line_number": 3,
"line": "$(CXX) -o $@ $^"},
{"line_number": 4,
"line": "clean:"}],
"icon": "unknown"}])
def test_case_insensitive_extents(self):
"""Test case-insensitive free-text searching with extents.
This tests trilite's isubstr-extents query type.
"""
self.found_files_eq('main', ['main.c', 'makefile'])
def test_index(self):
"""Make sure the index controller redirects."""
response = self.client().get('/')
eq_(response.status_code, 302)
ok_(response.headers['Location'].endswith('/code/source/'))
def test_file_based_search(self):
"""Make sure searches that return files and not lines work.
Specifically, test behavior when SearchFilter.has_lines is False.
"""
eq_(self.search_results('path:makefile'),
[{"path": "makefile",
"lines": [],
"icon": "unknown"}])
def test_filter_punting(self):
"""Make sure filters can opt out of filtration--in this case, due to
terms shorter than trigrams. Make sure even opted-out filters get a
chance to highlight.
"""
# Test a bigram that should be highlighted and one that shouldn't.
self.found_line_eq(
'argc gv qq',
'int main(int <b>argc</b>, char* ar<b>gv</b>[]){',
4)
|
mit
|
wfxiang08/django178
|
django/utils/translation/__init__.py
|
49
|
6780
|
"""
Internationalization support.
"""
from __future__ import unicode_literals
import re
from django.utils.encoding import force_text
from django.utils.functional import lazy
from django.utils import six
__all__ = [
'activate', 'deactivate', 'override', 'deactivate_all',
'get_language', 'get_language_from_request',
'get_language_info', 'get_language_bidi',
'check_for_language', 'to_locale', 'templatize', 'string_concat',
'gettext', 'gettext_lazy', 'gettext_noop',
'ugettext', 'ugettext_lazy', 'ugettext_noop',
'ngettext', 'ngettext_lazy',
'ungettext', 'ungettext_lazy',
'pgettext', 'pgettext_lazy',
'npgettext', 'npgettext_lazy',
'LANGUAGE_SESSION_KEY',
]
LANGUAGE_SESSION_KEY = '_language'
class TranslatorCommentWarning(SyntaxWarning):
pass
# Here be dragons, so a short explanation of the logic won't hurt:
# We are trying to solve two problems: (1) access settings, in particular
# settings.USE_I18N, as late as possible, so that modules can be imported
# without having to first configure Django, and (2) if some other code creates
# a reference to one of these functions, don't break that reference when we
# replace the functions with their real counterparts (once we do access the
# settings).
class Trans(object):
"""
The purpose of this class is to store the actual translation function upon
receiving the first call to that function. After this is done, changes to
USE_I18N will have no effect on which function is served upon request. If
your tests rely on changing USE_I18N, you can delete all the functions
from _trans.__dict__.
Note that storing the function with setattr will have a noticeable
performance effect, as access to the function goes the normal path,
instead of using __getattr__.
"""
def __getattr__(self, real_name):
from django.conf import settings
if settings.USE_I18N:
from django.utils.translation import trans_real as trans
else:
from django.utils.translation import trans_null as trans
setattr(self, real_name, getattr(trans, real_name))
return getattr(trans, real_name)
_trans = Trans()
# The Trans class is no more needed, so remove it from the namespace.
del Trans
def gettext_noop(message):
return _trans.gettext_noop(message)
ugettext_noop = gettext_noop
def gettext(message):
return _trans.gettext(message)
def ngettext(singular, plural, number):
return _trans.ngettext(singular, plural, number)
def ugettext(message):
return _trans.ugettext(message)
def ungettext(singular, plural, number):
return _trans.ungettext(singular, plural, number)
def pgettext(context, message):
return _trans.pgettext(context, message)
def npgettext(context, singular, plural, number):
return _trans.npgettext(context, singular, plural, number)
gettext_lazy = lazy(gettext, str)
ugettext_lazy = lazy(ugettext, six.text_type)
pgettext_lazy = lazy(pgettext, six.text_type)
def lazy_number(func, resultclass, number=None, **kwargs):
if isinstance(number, six.integer_types):
kwargs['number'] = number
proxy = lazy(func, resultclass)(**kwargs)
else:
class NumberAwareString(resultclass):
def __mod__(self, rhs):
if isinstance(rhs, dict) and number:
try:
number_value = rhs[number]
except KeyError:
raise KeyError('Your dictionary lacks key \'%s\'. '
'Please provide it, because it is required to '
'determine whether string is singular or plural.'
% number)
else:
number_value = rhs
kwargs['number'] = number_value
translated = func(**kwargs)
try:
translated = translated % rhs
except TypeError:
# String doesn't contain a placeholder for the number
pass
return translated
proxy = lazy(lambda **kwargs: NumberAwareString(), NumberAwareString)(**kwargs)
return proxy
def ngettext_lazy(singular, plural, number=None):
return lazy_number(ngettext, str, singular=singular, plural=plural, number=number)
def ungettext_lazy(singular, plural, number=None):
return lazy_number(ungettext, six.text_type, singular=singular, plural=plural, number=number)
def npgettext_lazy(context, singular, plural, number=None):
return lazy_number(npgettext, six.text_type, context=context, singular=singular, plural=plural, number=number)
def activate(language):
return _trans.activate(language)
def deactivate():
return _trans.deactivate()
class override(object):
def __init__(self, language, deactivate=False):
self.language = language
self.deactivate = deactivate
self.old_language = get_language()
def __enter__(self):
if self.language is not None:
activate(self.language)
else:
deactivate_all()
def __exit__(self, exc_type, exc_value, traceback):
if self.deactivate:
deactivate()
else:
activate(self.old_language)
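# A minimal usage sketch (hypothetical language code; override() and ugettext()
# are defined in this module):
def _example_override():
    with override('de'):
        return ugettext('Hello')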
def get_language():
return _trans.get_language()
def get_language_bidi():
return _trans.get_language_bidi()
def check_for_language(lang_code):
return _trans.check_for_language(lang_code)
def to_locale(language):
return _trans.to_locale(language)
def get_language_from_request(request, check_path=False):
return _trans.get_language_from_request(request, check_path)
def get_language_from_path(path):
return _trans.get_language_from_path(path)
def templatize(src, origin=None):
return _trans.templatize(src, origin)
def deactivate_all():
return _trans.deactivate_all()
def _string_concat(*strings):
"""
Lazy variant of string concatenation, needed for translations that are
constructed from multiple parts.
"""
return ''.join(force_text(s) for s in strings)
string_concat = lazy(_string_concat, six.text_type)
def get_language_info(lang_code):
from django.conf.locale import LANG_INFO
try:
return LANG_INFO[lang_code]
except KeyError:
if '-' not in lang_code:
raise KeyError("Unknown language code %s." % lang_code)
generic_lang_code = lang_code.split('-')[0]
try:
return LANG_INFO[generic_lang_code]
except KeyError:
raise KeyError("Unknown language code %s and %s." % (lang_code, generic_lang_code))
trim_whitespace_re = re.compile(r'\s*\n\s*')
def trim_whitespace(s):
return trim_whitespace_re.sub(' ', s.strip())
|
bsd-3-clause
|
romain-dartigues/ansible
|
lib/ansible/modules/cloud/smartos/vmadm.py
|
48
|
24315
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
# (c) 2017, Jasper Lievisse Adriaanse <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = '''
---
module: vmadm
short_description: Manage SmartOS virtual machines and zones.
description:
- Manage SmartOS virtual machines through vmadm(1M).
version_added: "2.3"
author: Jasper Lievisse Adriaanse (@jasperla)
options:
archive_on_delete:
required: false
description:
- When enabled, the zone dataset will be mounted on C(/zones/archive)
upon removal.
autoboot:
required: false
description:
- Whether or not a VM is booted when the system is rebooted.
brand:
required: true
choices: [ joyent, joyent-minimal, kvm, lx ]
default: joyent
description:
- Type of virtual machine.
boot:
required: false
description:
- Set the boot order for KVM VMs.
cpu_cap:
required: false
description:
- Sets a limit on the amount of CPU time that can be used by a VM.
Use C(0) for no cap.
cpu_shares:
required: false
description:
- Sets a limit on the number of fair share scheduler (FSS) CPU shares for
a VM. This limit is relative to all other VMs on the system.
cpu_type:
required: false
choices: [ qemu64, host ]
default: qemu64
description:
- Control the type of virtual CPU exposed to KVM VMs.
customer_metadata:
required: false
description:
- Metadata to be set and associated with this VM, this contain customer
modifiable keys.
delegate_dataset:
required: false
description:
- Whether to delegate a ZFS dataset to an OS VM.
disk_driver:
required: false
description:
- Default value for a virtual disk model for KVM guests.
disks:
required: false
description:
- A list of disks to add, valid properties are documented in vmadm(1M).
dns_domain:
required: false
description:
- Domain value for C(/etc/hosts).
docker:
required: false
description:
- Docker images need this flag enabled along with the I(brand) set to C(lx).
version_added: "2.5"
filesystems:
required: false
description:
- Mount additional filesystems into an OS VM.
firewall_enabled:
required: false
description:
- Enables the firewall, allowing fwadm(1M) rules to be applied.
force:
required: false
description:
- Force a particular action (i.e. stop or delete a VM).
fs_allowed:
required: false
description:
- Comma separated list of filesystem types this zone is allowed to mount.
hostname:
required: false
description:
- Zone/VM hostname.
image_uuid:
required: false
description:
- Image UUID.
indestructible_delegated:
required: false
description:
- Adds an C(@indestructible) snapshot to delegated datasets.
indestructible_zoneroot:
required: false
description:
- Adds an C(@indestructible) snapshot to zoneroot.
internal_metadata:
required: false
description:
- Metadata to be set and associated with this VM, this contains operator
generated keys.
internal_metadata_namespace:
required: false
description:
- List of namespaces to be set as I(internal_metadata-only); these namespaces
will come from I(internal_metadata) rather than I(customer_metadata).
kernel_version:
required: false
description:
- Kernel version to emulate for LX VMs.
limit_priv:
required: false
description:
- Set (comma separated) list of privileges the zone is allowed to use.
maintain_resolvers:
required: false
description:
- Resolvers in C(/etc/resolv.conf) will be updated when updating
the I(resolvers) property.
max_locked_memory:
required: false
description:
- Total amount of memory (in MiBs) on the host that can be locked by this VM.
max_lwps:
required: false
description:
- Maximum number of lightweight processes this VM is allowed to have running.
max_physical_memory:
required: false
description:
- Maximum amount of memory (in MiBs) on the host that the VM is allowed to use.
max_swap:
required: false
description:
- Maximum amount of virtual memory (in MiBs) the VM is allowed to use.
mdata_exec_timeout:
required: false
description:
- Timeout in seconds (or 0 to disable) for the C(svc:/smartdc/mdata:execute) service
that runs user-scripts in the zone.
name:
required: false
aliases: [ alias ]
description:
- Name of the VM. vmadm(1M) uses this as an optional name.
nic_driver:
required: false
description:
- Default value for a virtual NIC model for KVM guests.
nics:
required: false
description:
- A list of nics to add, valid properties are documented in vmadm(1M).
nowait:
required: false
description:
- Consider the provisioning complete when the VM first starts, rather than
when the VM has rebooted.
qemu_opts:
required: false
description:
- Additional qemu arguments for KVM guests. This overwrites the default arguments
provided by vmadm(1M) and should only be used for debugging.
qemu_extra_opts:
required: false
description:
- Additional qemu cmdline arguments for KVM guests.
quota:
required: false
description:
- Quota on zone filesystems (in MiBs).
ram:
required: false
description:
- Amount of virtual RAM for a KVM guest (in MiBs).
resolvers:
required: false
description:
- List of resolvers to be put into C(/etc/resolv.conf).
routes:
required: false
description:
- Dictionary that maps destinations to gateways, these will be set as static
routes in the VM.
spice_opts:
required: false
description:
- Addition options for SPICE-enabled KVM VMs.
spice_password:
required: false
description:
- Password required to connect to SPICE. By default no password is set.
Please note this can be read from the Global Zone.
state:
required: true
choices: [ present, absent, stopped, restarted ]
description:
- States for the VM to be in. Please note that C(present), C(stopped) and C(restarted)
operate on a VM that is currently provisioned. C(present) means that the VM will be
created if it was absent, and that it will be in a running state. C(absent) will
shut down the zone before removing it.
C(stopped) means the zone will be created if it doesn't exist already, before shutting
it down.
tmpfs:
required: false
description:
- Amount of memory (in MiBs) that will be available in the VM for the C(/tmp) filesystem.
uuid:
required: false
description:
- UUID of the VM. Can either be a full UUID or C(*) for all VMs.
vcpus:
required: false
description:
- Number of virtual CPUs for a KVM guest.
vga:
required: false
description:
- Specify VGA emulation used by KVM VMs.
virtio_txburst:
required: false
description:
- Number of packets that can be sent in a single flush of the tx queue of virtio NICs.
virtio_txtimer:
required: false
description:
- Timeout (in nanoseconds) for the TX timer of virtio NICs.
vnc_password:
required: false
description:
- Password required to connect to VNC. By default no password is set.
Please note this can be read from the Global Zone.
vnc_port:
required: false
description:
- TCP port to listen of the VNC server. Or set C(0) for random,
or C(-1) to disable.
zfs_data_compression:
required: false
description:
- Specifies compression algorithm used for this VMs data dataset. This option
only has effect on delegated datasets.
zfs_data_recsize:
required: false
description:
- Suggested block size (power of 2) for files in the delegated dataset's filesystem.
zfs_filesystem_limit:
required: false
description:
- Maximum number of filesystems the VM can have.
zfs_io_priority:
required: false
description:
- IO throttle priority value relative to other VMs.
zfs_root_compression:
required: false
description:
- Specifies compression algorithm used for this VMs root dataset. This option
only has effect on the zoneroot dataset.
zfs_root_recsize:
required: false
description:
- Suggested block size (power of 2) for files in the zoneroot dataset's filesystem.
zfs_snapshot_limit:
required: false
description:
- Number of snapshots the VM can have.
zpool:
required: false
description:
- ZFS pool the VM's zone dataset will be created in.
requirements:
- python >= 2.6
'''
EXAMPLES = '''
- name: create SmartOS zone
vmadm:
brand: joyent
state: present
alias: fw_zone
image_uuid: 95f265b8-96b2-11e6-9597-972f3af4b6d5
firewall_enabled: yes
indestructible_zoneroot: yes
nics:
- nic_tag: admin
ip: dhcp
primary: true
internal_metadata:
root_pw: 'secret'
quota: 1
- name: Delete a zone
vmadm:
alias: test_zone
state: deleted
- name: Stop all zones
vmadm:
uuid: '*'
state: stopped
'''
RETURN = '''
uuid:
description: UUID of the managed VM.
returned: always
type: string
sample: 'b217ab0b-cf57-efd8-cd85-958d0b80be33'
alias:
description: Alias of the managed VM.
returned: When addressing a VM by alias.
type: string
sample: 'dns-zone'
state:
description: State of the target, after execution.
returned: success
type: string
sample: 'running'
'''
import json
import os
import re
import tempfile
import traceback
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
# While vmadm(1M) supports a -E option to return any errors in JSON, the
# generated JSON does not play well with the JSON parsers of Python.
# The returned message contains '\n' as part of the stacktrace,
# which breaks the parsers.
def get_vm_prop(module, uuid, prop):
# Lookup a property for the given VM.
# Returns the property, or None if not found.
cmd = '{0} lookup -j -o {1} uuid={2}'.format(module.vmadm, prop, uuid)
(rc, stdout, stderr) = module.run_command(cmd)
if rc != 0:
module.fail_json(
msg='Could not perform lookup of {0} on {1}'.format(prop, uuid), exception=stderr)
try:
stdout_json = json.loads(stdout)
except Exception as e:
module.fail_json(
msg='Invalid JSON returned by vmadm for uuid lookup of {0}'.format(prop),
details=to_native(e), exception=traceback.format_exc())
if len(stdout_json) > 0 and prop in stdout_json[0]:
return stdout_json[0][prop]
else:
return None
def get_vm_uuid(module, alias):
# Lookup the uuid that goes with the given alias.
# Returns the uuid or '' if not found.
cmd = '{0} lookup -j -o uuid alias={1}'.format(module.vmadm, alias)
(rc, stdout, stderr) = module.run_command(cmd)
if rc != 0:
module.fail_json(
msg='Could not retrieve UUID of {0}'.format(alias), exception=stderr)
# If no VM was found matching the given alias, we get back an empty array.
# That is not an error condition, as we might be explicitly checking for its
# absence.
if stdout.strip() == '[]':
return None
else:
try:
stdout_json = json.loads(stdout)
except Exception as e:
module.fail_json(
msg='Invalid JSON returned by vmadm for uuid lookup of {0}'.format(alias),
details=to_native(e), exception=traceback.format_exc())
if len(stdout_json) > 0 and 'uuid' in stdout_json[0]:
return stdout_json[0]['uuid']
def get_all_vm_uuids(module):
# Retrieve the UUIDs for all VMs.
cmd = '{0} lookup -j -o uuid'.format(module.vmadm)
(rc, stdout, stderr) = module.run_command(cmd)
if rc != 0:
module.fail_json(msg='Failed to get VMs list', exception=stderr)
try:
stdout_json = json.loads(stdout)
return [v['uuid'] for v in stdout_json]
except Exception as e:
module.fail_json(msg='Could not retrieve VM UUIDs', details=to_native(e),
exception=traceback.format_exc())
def new_vm(module, uuid, vm_state):
payload_file = create_payload(module, uuid)
(rc, stdout, stderr) = vmadm_create_vm(module, payload_file)
if rc != 0:
changed = False
module.fail_json(msg='Could not create VM', exception=stderr)
else:
changed = True
# 'vmadm create' returns all output to stderr...
match = re.match('Successfully created VM (.*)', stderr)
if match:
vm_uuid = match.groups()[0]
if not is_valid_uuid(vm_uuid):
module.fail_json(msg='Invalid UUID for VM {0}?'.format(vm_uuid))
else:
module.fail_json(msg='Could not retrieve UUID of newly created(?) VM')
# Now that the VM is created, ensure it is in the desired state (if not 'running')
if vm_state != 'running':
ret = set_vm_state(module, vm_uuid, vm_state)
if not ret:
module.fail_json(msg='Could not set VM {0} to state {1}'.format(vm_uuid, vm_state))
try:
os.unlink(payload_file)
except Exception as e:
# Since the payload may contain sensitive information, fail hard
# if we cannot remove the file so the operator knows about it.
module.fail_json(msg='Could not remove temporary JSON payload file {0}: {1}'.format(payload_file, to_native(e)),
exception=traceback.format_exc())
return changed, vm_uuid
def vmadm_create_vm(module, payload_file):
# Create a new VM using the provided payload.
cmd = '{0} create -f {1}'.format(module.vmadm, payload_file)
return module.run_command(cmd)
def set_vm_state(module, vm_uuid, vm_state):
p = module.params
# Check if the VM is already in the desired state.
state = get_vm_prop(module, vm_uuid, 'state')
if state and (state == vm_state):
return None
# Lookup table for the state to be in, and which command to use for that.
# vm_state: [vmadm command, forceable?]
cmds = {
'stopped': ['stop', True],
'running': ['start', False],
'deleted': ['delete', True],
'rebooted': ['reboot', False]
}
if p['force'] and cmds[vm_state][1]:
force = '-F'
else:
force = ''
cmd = 'vmadm {0} {1} {2}'.format(cmds[vm_state][0], force, vm_uuid)
(rc, stdout, stderr) = module.run_command(cmd)
match = re.match('^Successfully.*', stderr)
if match:
return True
else:
return False
def create_payload(module, uuid):
# Create the JSON payload (vmdef) and return the filename.
p = module.params
# Filter out the few options that are not valid VM properties.
module_options = ['debug', 'force', 'state']
vmattrs = filter(lambda prop: prop not in module_options, p)
vmdef = {}
for attr in vmattrs:
if p[attr]:
vmdef[attr] = p[attr]
try:
vmdef_json = json.dumps(vmdef)
except Exception as e:
module.fail_json(
msg='Could not create valid JSON payload', exception=traceback.format_exc())
# Create the temporary file that contains our payload, and set tight
# permissions on it, as it may contain sensitive information.
try:
# XXX: When there's a way to get the current ansible temporary directory
# drop the mkstemp call and rely on ANSIBLE_KEEP_REMOTE_FILES to retain
# the payload (thus removing the `save_payload` option).
fname = tempfile.mkstemp()[1]
os.chmod(fname, 0o400)
with open(fname, 'w') as fh:
fh.write(vmdef_json)
except Exception as e:
module.fail_json(msg='Could not save JSON payload: %s' % to_native(e), exception=traceback.format_exc())
return fname
def vm_state_transition(module, uuid, vm_state):
ret = set_vm_state(module, uuid, vm_state)
# Whether the VM changed state.
if ret is None:
return False
elif ret:
return True
else:
module.fail_json(msg='Failed to set VM {0} to state {1}'.format(uuid, vm_state))
def is_valid_uuid(uuid):
if re.match('^[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$', uuid, re.IGNORECASE):
return True
else:
return False
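# For illustration, using the sample UUID from RETURN above:
#   is_valid_uuid('b217ab0b-cf57-efd8-cd85-958d0b80be33')  # -> True
#   is_valid_uuid('not-a-uuid')                            # -> False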
def validate_uuids(module):
# Perform basic UUID validation.
failed = []
for u in [['uuid', module.params['uuid']],
['image_uuid', module.params['image_uuid']]]:
if u[1] and u[1] != '*':
if not is_valid_uuid(u[1]):
failed.append(u[0])
if len(failed) > 0:
module.fail_json(msg='No valid UUID(s) found for: {0}'.format(", ".join(failed)))
def manage_all_vms(module, vm_state):
# Handle operations for all VMs, which can by definition only
# be state transitions.
state = module.params['state']
if state == 'created':
module.fail_json(msg='State "created" is only valid for tasks with a single VM')
# If any of the VMs has a change, the task as a whole has a change.
any_changed = False
# First get all VM uuids and for each check their state, and adjust it if needed.
for uuid in get_all_vm_uuids(module):
current_vm_state = get_vm_prop(module, uuid, 'state')
if not current_vm_state and vm_state == 'deleted':
any_changed = False
else:
if module.check_mode:
if (not current_vm_state) or (get_vm_prop(module, uuid, 'state') != state):
any_changed = True
else:
any_changed = (vm_state_transition(module, uuid, vm_state) | any_changed)
return any_changed
def main():
# In order to reduce the clutter and boilerplate for trivial options,
# abstract the vmadm properties and build the dict of arguments later.
# Dict of all options that are simple to define based on their type.
# They're not required and have a default of None.
properties = {
'str': [
'boot', 'disk_driver', 'dns_domain', 'fs_allowed', 'hostname',
'image_uuid', 'internal_metadata_namespace', 'kernel_version',
'limit_priv', 'nic_driver', 'qemu_opts', 'qemu_extra_opts',
'spice_opts', 'uuid', 'vga', 'zfs_data_compression',
'zfs_root_compression', 'zpool'
],
'bool': [
'archive_on_delete', 'autoboot', 'debug', 'delegate_dataset',
'docker', 'firewall_enabled', 'force', 'indestructible_delegated',
'indestructible_zoneroot', 'maintain_resolvers', 'nowait'
],
'int': [
'cpu_cap', 'cpu_shares', 'max_locked_memory', 'max_lwps',
'max_physical_memory', 'max_swap', 'mdata_exec_timeout',
'quota', 'ram', 'tmpfs', 'vcpus', 'virtio_txburst',
'virtio_txtimer', 'vnc_port', 'zfs_data_recsize',
'zfs_filesystem_limit', 'zfs_io_priority', 'zfs_root_recsize',
'zfs_snapshot_limit'
],
'dict': ['customer_metadata', 'internal_metadata', 'routes'],
'list': ['disks', 'nics', 'resolvers', 'filesystems']
}
# Start with the options that are not as trivial as those above.
options = dict(
state=dict(
default='running',
type='str',
choices=['present', 'running', 'absent', 'deleted', 'stopped', 'created', 'restarted', 'rebooted']
),
name=dict(
default=None, type='str',
aliases=['alias']
),
brand=dict(
default='joyent',
type='str',
choices=['joyent', 'joyent-minimal', 'kvm', 'lx']
),
cpu_type=dict(
default='qemu64',
type='str',
choices=['host', 'qemu64']
),
# Regular strings, however these require additional options.
spice_password=dict(type='str', no_log=True),
vnc_password=dict(type='str', no_log=True),
)
# Add our 'simple' options to options dict.
for type in properties:
for p in properties[type]:
option = dict(default=None, type=type)
options[p] = option
module = AnsibleModule(
argument_spec=options,
supports_check_mode=True,
required_one_of=[['name', 'uuid']]
)
module.vmadm = module.get_bin_path('vmadm', required=True)
p = module.params
uuid = p['uuid']
state = p['state']
# Translate the state parameter into something we can use later on.
if state in ['present', 'running']:
vm_state = 'running'
elif state in ['stopped', 'created']:
vm_state = 'stopped'
elif state in ['absent', 'deleted']:
vm_state = 'deleted'
elif state in ['restarted', 'rebooted']:
vm_state = 'rebooted'
result = {'state': state}
# While it's possible to refer to a given VM by its `alias`, it's easier
# to operate on VMs by their UUID. So if we're not given a `uuid`, look
# it up.
if not uuid:
uuid = get_vm_uuid(module, p['name'])
# Bit of a chicken and egg problem here for VMs with state == deleted.
# If they're going to be removed in this play, we have to lookup the
# uuid. If they're already deleted there's nothing to lookup.
# So if state == deleted and get_vm_uuid() returned '', the VM is already
# deleted and there's nothing else to do.
if uuid is None and vm_state == 'deleted':
result['name'] = p['name']
module.exit_json(**result)
validate_uuids(module)
if p['name']:
result['name'] = p['name']
result['uuid'] = uuid
if uuid == '*':
result['changed'] = manage_all_vms(module, vm_state)
module.exit_json(**result)
# The general flow is as follows:
# - first the current state of the VM is obtained by its UUID.
# - If the state was not found and the desired state is 'deleted', return.
# - If the state was not found, it means the VM has to be created.
# Subsequently the VM will be set to the desired state (i.e. stopped)
# - Otherwise, it means the VM exists already and we operate on its
# state (i.e. reboot it.)
#
# In the future it should be possible to query the VM for a particular
# property as a valid state (i.e. queried) so the result can be
# registered.
# Also, VMs should be able to get their properties updated.
# Managing VM snapshots should be part of a standalone module.
# First obtain the VM state to determine what needs to be done with it.
current_vm_state = get_vm_prop(module, uuid, 'state')
# First handle the case where the VM should be deleted and is not present.
if not current_vm_state and vm_state == 'deleted':
result['changed'] = False
elif module.check_mode:
        # Shortcut for check mode: if there is no VM yet, it will need to be created.
        # Or, if the VM is not in the desired state yet, it needs to transition.
        # Compare against the normalized vm_state (the raw state parameter may be
        # e.g. 'present') and reuse the state already fetched above.
        if (not current_vm_state) or (current_vm_state != vm_state):
result['changed'] = True
else:
result['changed'] = False
module.exit_json(**result)
# No VM was found that matched the given ID (alias or uuid), so we create it.
elif not current_vm_state:
result['changed'], result['uuid'] = new_vm(module, uuid, vm_state)
else:
# VM was found, operate on its state directly.
result['changed'] = vm_state_transition(module, uuid, vm_state)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
gpl-3.0
|
camradal/ansible
|
lib/ansible/modules/cloud/profitbricks/profitbricks_volume_attachments.py
|
6
|
8222
|
#!/usr/bin/python
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'version': '1.0'}
DOCUMENTATION = '''
---
module: profitbricks_volume_attachments
short_description: Attach or detach a volume.
description:
- Allows you to attach or detach a volume from a ProfitBricks server. This module has a dependency on profitbricks >= 1.0.0
version_added: "2.0"
options:
datacenter:
description:
- The datacenter in which to operate.
required: true
server:
description:
      - The name of the server to which you wish to attach, or from which to detach, the volume.
required: true
volume:
description:
- The volume name or ID.
required: true
subscription_user:
description:
- The ProfitBricks username. Overrides the PB_SUBSCRIPTION_ID environment variable.
required: false
subscription_password:
description:
      - The ProfitBricks password. Overrides the PB_PASSWORD environment variable.
required: false
wait:
description:
- wait for the operation to complete before returning
required: false
default: "yes"
choices: [ "yes", "no" ]
wait_timeout:
description:
- how long before wait gives up, in seconds
default: 600
state:
description:
- Indicate desired state of the resource
required: false
default: 'present'
choices: ["present", "absent"]
requirements: [ "profitbricks" ]
author: Matt Baldwin ([email protected])
'''
EXAMPLES = '''
# Attach a Volume
- profitbricks_volume_attachments:
datacenter: Tardis One
server: node002
volume: vol01
wait_timeout: 500
state: present
# Detach a Volume
- profitbricks_volume_attachments:
datacenter: Tardis One
server: node002
volume: vol01
wait_timeout: 500
state: absent
'''
import re
import uuid
import time
HAS_PB_SDK = True
try:
from profitbricks.client import ProfitBricksService, Volume
except ImportError:
HAS_PB_SDK = False
uuid_match = re.compile(
    r'[\w]{8}-[\w]{4}-[\w]{4}-[\w]{4}-[\w]{12}', re.I)
def _wait_for_completion(profitbricks, promise, wait_timeout, msg):
if not promise:
return
wait_timeout = time.time() + wait_timeout
while wait_timeout > time.time():
time.sleep(5)
operation_result = profitbricks.get_request(
request_id=promise['requestId'],
status=True)
if operation_result['metadata']['status'] == "DONE":
return
elif operation_result['metadata']['status'] == "FAILED":
            raise Exception(
                'Request ' + msg + ' "' + str(
                    promise['requestId']) + '" failed to complete.')
raise Exception(
'Timed out waiting for async operation ' + msg + ' "' + str(
promise['requestId']
) + '" to complete.')
def attach_volume(module, profitbricks):
"""
Attaches a volume.
This will attach a volume to the server.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
        the response of the attach volume API request
"""
datacenter = module.params.get('datacenter')
server = module.params.get('server')
volume = module.params.get('volume')
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
break
# Locate UUID for Server
if not (uuid_match.match(server)):
server_list = profitbricks.list_servers(datacenter)
for s in server_list['items']:
if server == s['properties']['name']:
                server = s['id']
break
# Locate UUID for Volume
if not (uuid_match.match(volume)):
volume_list = profitbricks.list_volumes(datacenter)
for v in volume_list['items']:
if volume == v['properties']['name']:
volume = v['id']
break
return profitbricks.attach_volume(datacenter, server, volume)
def detach_volume(module, profitbricks):
"""
Detaches a volume.
This will remove a volume from the server.
module : AnsibleModule object
profitbricks: authenticated profitbricks object.
Returns:
        the response of the detach volume API request
"""
datacenter = module.params.get('datacenter')
server = module.params.get('server')
volume = module.params.get('volume')
# Locate UUID for Datacenter
if not (uuid_match.match(datacenter)):
datacenter_list = profitbricks.list_datacenters()
for d in datacenter_list['items']:
dc = profitbricks.get_datacenter(d['id'])
if datacenter == dc['properties']['name']:
datacenter = d['id']
break
# Locate UUID for Server
if not (uuid_match.match(server)):
server_list = profitbricks.list_servers(datacenter)
for s in server_list['items']:
if server == s['properties']['name']:
                server = s['id']
break
# Locate UUID for Volume
if not (uuid_match.match(volume)):
volume_list = profitbricks.list_volumes(datacenter)
for v in volume_list['items']:
if volume == v['properties']['name']:
volume = v['id']
break
return profitbricks.detach_volume(datacenter, server, volume)
def main():
module = AnsibleModule(
argument_spec=dict(
datacenter=dict(),
server=dict(),
volume=dict(),
subscription_user=dict(),
subscription_password=dict(),
wait=dict(type='bool', default=True),
wait_timeout=dict(type='int', default=600),
state=dict(default='present'),
)
)
if not HAS_PB_SDK:
module.fail_json(msg='profitbricks required for this module')
if not module.params.get('subscription_user'):
module.fail_json(msg='subscription_user parameter is required')
if not module.params.get('subscription_password'):
module.fail_json(msg='subscription_password parameter is required')
if not module.params.get('datacenter'):
module.fail_json(msg='datacenter parameter is required')
if not module.params.get('server'):
module.fail_json(msg='server parameter is required')
if not module.params.get('volume'):
module.fail_json(msg='volume parameter is required')
subscription_user = module.params.get('subscription_user')
subscription_password = module.params.get('subscription_password')
profitbricks = ProfitBricksService(
username=subscription_user,
password=subscription_password)
state = module.params.get('state')
if state == 'absent':
try:
            changed = detach_volume(module, profitbricks)
module.exit_json(changed=changed)
except Exception as e:
module.fail_json(msg='failed to set volume_attach state: %s' % str(e))
elif state == 'present':
try:
attach_volume(module, profitbricks)
module.exit_json()
except Exception as e:
module.fail_json(msg='failed to set volume_attach state: %s' % str(e))
from ansible.module_utils.basic import *
if __name__ == '__main__':
main()
|
gpl-3.0
|
adamwwt/chvac
|
venv/lib/python2.7/site-packages/pygments/formatters/other.py
|
363
|
3811
|
# -*- coding: utf-8 -*-
"""
pygments.formatters.other
~~~~~~~~~~~~~~~~~~~~~~~~~
Other formatters: NullFormatter, RawTokenFormatter.
:copyright: Copyright 2006-2013 by the Pygments team, see AUTHORS.
:license: BSD, see LICENSE for details.
"""
from pygments.formatter import Formatter
from pygments.util import OptionError, get_choice_opt, b
from pygments.token import Token
from pygments.console import colorize
__all__ = ['NullFormatter', 'RawTokenFormatter']
class NullFormatter(Formatter):
"""
Output the text unchanged without any formatting.
"""
name = 'Text only'
aliases = ['text', 'null']
filenames = ['*.txt']
def format(self, tokensource, outfile):
enc = self.encoding
for ttype, value in tokensource:
if enc:
outfile.write(value.encode(enc))
else:
outfile.write(value)
class RawTokenFormatter(Formatter):
r"""
Format tokens as a raw representation for storing token streams.
The format is ``tokentype<TAB>repr(tokenstring)\n``. The output can later
be converted to a token stream with the `RawTokenLexer`, described in the
`lexer list <lexers.txt>`_.
Only two options are accepted:
`compress`
If set to ``'gz'`` or ``'bz2'``, compress the output with the given
compression algorithm after encoding (default: ``''``).
`error_color`
If set to a color name, highlight error tokens using that color. If
set but with no value, defaults to ``'red'``.
*New in Pygments 0.11.*
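    A minimal usage sketch (illustrative only; ``source_code`` is assumed to
    hold the text being tokenized):

    .. sourcecode:: python

        from pygments import highlight
        from pygments.lexers import PythonLexer

        raw = highlight(source_code, PythonLexer(), RawTokenFormatter())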
"""
name = 'Raw tokens'
aliases = ['raw', 'tokens']
filenames = ['*.raw']
unicodeoutput = False
def __init__(self, **options):
Formatter.__init__(self, **options)
if self.encoding:
raise OptionError('the raw formatter does not support the '
'encoding option')
self.encoding = 'ascii' # let pygments.format() do the right thing
self.compress = get_choice_opt(options, 'compress',
['', 'none', 'gz', 'bz2'], '')
self.error_color = options.get('error_color', None)
if self.error_color is True:
self.error_color = 'red'
if self.error_color is not None:
try:
colorize(self.error_color, '')
except KeyError:
raise ValueError("Invalid color %r specified" %
self.error_color)
def format(self, tokensource, outfile):
try:
outfile.write(b(''))
except TypeError:
raise TypeError('The raw tokens formatter needs a binary '
'output file')
if self.compress == 'gz':
import gzip
outfile = gzip.GzipFile('', 'wb', 9, outfile)
def write(text):
outfile.write(text.encode())
flush = outfile.flush
elif self.compress == 'bz2':
import bz2
compressor = bz2.BZ2Compressor(9)
def write(text):
outfile.write(compressor.compress(text.encode()))
def flush():
outfile.write(compressor.flush())
outfile.flush()
else:
def write(text):
outfile.write(text.encode())
flush = outfile.flush
if self.error_color:
for ttype, value in tokensource:
line = "%s\t%r\n" % (ttype, value)
if ttype is Token.Error:
write(colorize(self.error_color, line))
else:
write(line)
else:
for ttype, value in tokensource:
write("%s\t%r\n" % (ttype, value))
flush()
|
mit
|
fredericmohr/mitro
|
browser-ext/third_party/firefox-addon-sdk/python-lib/cuddlefish/rdf.py
|
29
|
7542
|
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import os
import xml.dom.minidom
import StringIO
RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
EM_NS = "http://www.mozilla.org/2004/em-rdf#"
class RDF(object):
def __str__(self):
# real files have an .encoding attribute and use it when you
# write() unicode into them: they read()/write() unicode and
# put encoded bytes in the backend file. StringIO objects
# read()/write() unicode and put unicode in the backing store,
# so we must encode the output of getvalue() to get a
# bytestring. (cStringIO objects are weirder: they effectively
# have .encoding hardwired to "ascii" and put only bytes in
# the backing store, so we can't use them here).
#
# The encoding= argument to dom.writexml() merely sets the XML header's
# encoding= attribute. It still writes unencoded unicode to the output file,
# so we have to encode it for real afterwards.
#
# Also see: https://bugzilla.mozilla.org/show_bug.cgi?id=567660
buf = StringIO.StringIO()
self.dom.writexml(buf, encoding="utf-8")
return buf.getvalue().encode('utf-8')
class RDFUpdate(RDF):
def __init__(self):
impl = xml.dom.minidom.getDOMImplementation()
self.dom = impl.createDocument(RDF_NS, "RDF", None)
self.dom.documentElement.setAttribute("xmlns", RDF_NS)
self.dom.documentElement.setAttribute("xmlns:em", EM_NS)
def _make_node(self, name, value, parent):
elem = self.dom.createElement(name)
elem.appendChild(self.dom.createTextNode(value))
parent.appendChild(elem)
return elem
def add(self, manifest, update_link):
desc = self.dom.createElement("Description")
desc.setAttribute(
"about",
"urn:mozilla:extension:%s" % manifest.get("em:id")
)
self.dom.documentElement.appendChild(desc)
updates = self.dom.createElement("em:updates")
desc.appendChild(updates)
seq = self.dom.createElement("Seq")
updates.appendChild(seq)
li = self.dom.createElement("li")
seq.appendChild(li)
li_desc = self.dom.createElement("Description")
li.appendChild(li_desc)
self._make_node("em:version", manifest.get("em:version"),
li_desc)
apps = manifest.dom.documentElement.getElementsByTagName(
"em:targetApplication"
)
for app in apps:
target_app = self.dom.createElement("em:targetApplication")
li_desc.appendChild(target_app)
ta_desc = self.dom.createElement("Description")
target_app.appendChild(ta_desc)
for name in ["em:id", "em:minVersion", "em:maxVersion"]:
elem = app.getElementsByTagName(name)[0]
self._make_node(name, elem.firstChild.nodeValue, ta_desc)
self._make_node("em:updateLink", update_link, ta_desc)
class RDFManifest(RDF):
def __init__(self, path):
self.dom = xml.dom.minidom.parse(path)
def set(self, property, value):
elements = self.dom.documentElement.getElementsByTagName(property)
if not elements:
raise ValueError("Element with value not found: %s" % property)
if not elements[0].firstChild:
elements[0].appendChild(self.dom.createTextNode(value))
else:
elements[0].firstChild.nodeValue = value
def get(self, property, default=None):
elements = self.dom.documentElement.getElementsByTagName(property)
if not elements:
return default
return elements[0].firstChild.nodeValue
def remove(self, property):
elements = self.dom.documentElement.getElementsByTagName(property)
if not elements:
return True
else:
for i in elements:
                i.parentNode.removeChild(i)
            return True
def gen_manifest(template_root_dir, target_cfg, jid,
update_url=None, bootstrap=True, enable_mobile=False):
install_rdf = os.path.join(template_root_dir, "install.rdf")
manifest = RDFManifest(install_rdf)
dom = manifest.dom
manifest.set("em:id", jid)
manifest.set("em:version",
target_cfg.get('version', '1.0'))
manifest.set("em:name",
target_cfg.get('title', target_cfg.get('fullName', target_cfg['name'])))
manifest.set("em:description",
target_cfg.get("description", ""))
manifest.set("em:creator",
target_cfg.get("author", ""))
manifest.set("em:bootstrap", str(bootstrap).lower())
# XPIs remain packed by default, but package.json can override that. The
# RDF format accepts "true" as True, anything else as False. We expect
# booleans in the .json file, not strings.
manifest.set("em:unpack", "true" if target_cfg.get("unpack") else "false")
for translator in target_cfg.get("translators", [ ]):
elem = dom.createElement("em:translator");
elem.appendChild(dom.createTextNode(translator))
dom.documentElement.getElementsByTagName("Description")[0].appendChild(elem)
for contributor in target_cfg.get("contributors", [ ]):
elem = dom.createElement("em:contributor");
elem.appendChild(dom.createTextNode(contributor))
dom.documentElement.getElementsByTagName("Description")[0].appendChild(elem)
if update_url:
manifest.set("em:updateURL", update_url)
else:
manifest.remove("em:updateURL")
if target_cfg.get("preferences"):
manifest.set("em:optionsType", "2")
else:
manifest.remove("em:optionsType")
if enable_mobile:
target_app = dom.createElement("em:targetApplication")
dom.documentElement.getElementsByTagName("Description")[0].appendChild(target_app)
ta_desc = dom.createElement("Description")
target_app.appendChild(ta_desc)
elem = dom.createElement("em:id")
elem.appendChild(dom.createTextNode("{aa3c5121-dab2-40e2-81ca-7ea25febc110}"))
ta_desc.appendChild(elem)
elem = dom.createElement("em:minVersion")
elem.appendChild(dom.createTextNode("19.0"))
ta_desc.appendChild(elem)
elem = dom.createElement("em:maxVersion")
elem.appendChild(dom.createTextNode("22.0a1"))
ta_desc.appendChild(elem)
if target_cfg.get("homepage"):
manifest.set("em:homepageURL", target_cfg.get("homepage"))
else:
manifest.remove("em:homepageURL")
return manifest
if __name__ == "__main__":
print "Running smoke test."
root = os.path.join(os.path.dirname(__file__), '../../app-extension')
manifest = gen_manifest(root, {'name': 'test extension'},
'fakeid', 'http://foo.com/update.rdf')
update = RDFUpdate()
update.add(manifest, "https://foo.com/foo.xpi")
exercise_str = str(manifest) + str(update)
for tagname in ["em:targetApplication", "em:version", "em:id"]:
if not len(update.dom.getElementsByTagName(tagname)):
raise Exception("tag does not exist: %s" % tagname)
if not update.dom.getElementsByTagName(tagname)[0].firstChild:
raise Exception("tag has no children: %s" % tagname)
print "Success!"
|
gpl-3.0
|
abutcher/origin
|
vendor/github.com/google/certificate-transparency/python/ct/crypto/asn1/oid_test.py
|
35
|
2146
|
#!/usr/bin/env python
import unittest
from ct.crypto import error
from ct.crypto.asn1 import oid
from ct.crypto.asn1 import type_test_base
class ObjectIdentifierTest(type_test_base.TypeTestBase):
asn1_type = oid.ObjectIdentifier
hashable = True
initializers = (
((0, 0), "0.0"),
((1, 2), "1.2"),
((2, 5), "2.5"),
((1, 2, 3, 4), "1.2.3.4"),
((1, 2, 840, 113549), "1.2.840.113549"),
((1, 2, 840, 113549, 1), "1.2.840.113549.1"),
)
bad_initializers = (
# Too short.
("0", ValueError),
((0,), ValueError),
(("1"), ValueError),
((1,), ValueError),
# Negative components.
("-1", ValueError),
((-1,), ValueError),
("1.2.3.-4", ValueError),
((1, 2, 3, -4), ValueError),
# Invalid components.
("3.2.3.4", ValueError),
((3, 2, 3, 4), ValueError),
("0.40.3.4", ValueError),
((0, 40, 3, 4), ValueError),
)
encode_test_vectors = (
# Example from ASN.1 spec.
("2.100.3", "0603813403"),
# More examples.
("0.0", "060100"),
("1.2", "06012a"),
("2.5", "060155"),
("1.2.3.4", "06032a0304"),
("1.2.840", "06032a8648"),
("1.2.840.113549", "06062a864886f70d"),
("1.2.840.113549.1", "06072a864886f70d01")
)
bad_encodings = (
# Empty OID.
("0600"),
# Last byte has high bit set.
("06020080"),
("06032a86c8"),
# Leading '80'-octets in component.
("06042a8086c8"),
# Indefinite length.
("06808134030000")
)
bad_strict_encodings = ()
def test_dictionary(self):
rsa = oid.ObjectIdentifier(value=oid.RSA_ENCRYPTION)
self.assertEqual("rsaEncryption", rsa.long_name)
self.assertEqual("RSA", rsa.short_name)
def test_unknown_oids(self):
unknown = oid.ObjectIdentifier(value="1.2.3.4")
self.assertEqual("1.2.3.4", unknown.long_name)
self.assertEqual("1.2.3.4", unknown.short_name)
if __name__ == '__main__':
unittest.main()
|
apache-2.0
|
GlobalBoost/GlobalBoost
|
test/functional/feature_cltv.py
|
5
|
5840
|
#!/usr/bin/env python3
# Copyright (c) 2015-2018 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test BIP65 (CHECKLOCKTIMEVERIFY).
Test that the CHECKLOCKTIMEVERIFY soft-fork activates at (regtest) block height
1351.
"""
from test_framework.blocktools import create_coinbase, create_block, create_transaction
from test_framework.messages import CTransaction, msg_block, ToHex
from test_framework.mininode import P2PInterface
from test_framework.script import CScript, OP_1NEGATE, OP_CHECKLOCKTIMEVERIFY, OP_DROP, CScriptNum
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import (
assert_equal,
bytes_to_hex_str,
hex_str_to_bytes,
)
from io import BytesIO
CLTV_HEIGHT = 1351
# Reject codes that we might receive in this test
REJECT_INVALID = 16
REJECT_OBSOLETE = 17
REJECT_NONSTANDARD = 64
def cltv_invalidate(tx):
'''Modify the signature in vin 0 of the tx to fail CLTV
Prepends -1 CLTV DROP in the scriptSig itself.
TODO: test more ways that transactions using CLTV could be invalid (eg
locktime requirements fail, sequence time requirements fail, etc).
'''
tx.vin[0].scriptSig = CScript([OP_1NEGATE, OP_CHECKLOCKTIMEVERIFY, OP_DROP] +
list(CScript(tx.vin[0].scriptSig)))
def cltv_validate(node, tx, height):
'''Modify the signature in vin 0 of the tx to pass CLTV
Prepends <height> CLTV DROP in the scriptSig, and sets
the locktime to height'''
tx.vin[0].nSequence = 0
tx.nLockTime = height
# Need to re-sign, since nSequence and nLockTime changed
signed_result = node.signrawtransactionwithwallet(ToHex(tx))
new_tx = CTransaction()
new_tx.deserialize(BytesIO(hex_str_to_bytes(signed_result['hex'])))
new_tx.vin[0].scriptSig = CScript([CScriptNum(height), OP_CHECKLOCKTIMEVERIFY, OP_DROP] +
list(CScript(new_tx.vin[0].scriptSig)))
return new_tx
class BIP65Test(BitcoinTestFramework):
def set_test_params(self):
self.num_nodes = 1
self.extra_args = [['-whitelist=127.0.0.1', '-par=1']] # Use only one script thread to get the exact reject reason for testing
self.setup_clean_chain = True
def skip_test_if_missing_module(self):
self.skip_if_no_wallet()
def run_test(self):
self.nodes[0].add_p2p_connection(P2PInterface())
self.log.info("Mining %d blocks", CLTV_HEIGHT - 2)
self.coinbase_txids = [self.nodes[0].getblock(b)['tx'][0] for b in self.nodes[0].generate(CLTV_HEIGHT - 2)]
self.nodeaddress = self.nodes[0].getnewaddress()
self.log.info("Test that an invalid-according-to-CLTV transaction can still appear in a block")
spendtx = create_transaction(self.nodes[0], self.coinbase_txids[0],
self.nodeaddress, amount=1.0)
cltv_invalidate(spendtx)
spendtx.rehash()
tip = self.nodes[0].getbestblockhash()
block_time = self.nodes[0].getblockheader(tip)['mediantime'] + 1
block = create_block(int(tip, 16), create_coinbase(CLTV_HEIGHT - 1), block_time)
block.nVersion = 3
block.vtx.append(spendtx)
block.hashMerkleRoot = block.calc_merkle_root()
block.solve()
self.nodes[0].p2p.send_and_ping(msg_block(block))
assert_equal(self.nodes[0].getbestblockhash(), block.hash)
self.log.info("Test that blocks must now be at least version 4")
tip = block.sha256
block_time += 1
block = create_block(tip, create_coinbase(CLTV_HEIGHT), block_time)
block.nVersion = 3
block.solve()
with self.nodes[0].assert_debug_log(expected_msgs=['{}, bad-version(0x00000003)'.format(block.hash)]):
self.nodes[0].p2p.send_and_ping(msg_block(block))
assert_equal(int(self.nodes[0].getbestblockhash(), 16), tip)
self.nodes[0].p2p.sync_with_ping()
self.log.info("Test that invalid-according-to-cltv transactions cannot appear in a block")
block.nVersion = 4
spendtx = create_transaction(self.nodes[0], self.coinbase_txids[1],
self.nodeaddress, amount=1.0)
cltv_invalidate(spendtx)
spendtx.rehash()
# First we show that this tx is valid except for CLTV by getting it
# rejected from the mempool for exactly that reason.
assert_equal(
[{'txid': spendtx.hash, 'allowed': False, 'reject-reason': '64: non-mandatory-script-verify-flag (Negative locktime)'}],
self.nodes[0].testmempoolaccept(rawtxs=[bytes_to_hex_str(spendtx.serialize())], allowhighfees=True)
)
# Now we verify that a block with this transaction is also invalid.
block.vtx.append(spendtx)
block.hashMerkleRoot = block.calc_merkle_root()
block.solve()
with self.nodes[0].assert_debug_log(expected_msgs=['CheckInputs on {} failed with non-mandatory-script-verify-flag (Negative locktime)'.format(block.vtx[-1].hash)]):
self.nodes[0].p2p.send_and_ping(msg_block(block))
assert_equal(int(self.nodes[0].getbestblockhash(), 16), tip)
self.nodes[0].p2p.sync_with_ping()
self.log.info("Test that a version 4 block with a valid-according-to-CLTV transaction is accepted")
spendtx = cltv_validate(self.nodes[0], spendtx, CLTV_HEIGHT - 1)
spendtx.rehash()
block.vtx.pop(1)
block.vtx.append(spendtx)
block.hashMerkleRoot = block.calc_merkle_root()
block.solve()
self.nodes[0].p2p.send_and_ping(msg_block(block))
assert_equal(int(self.nodes[0].getbestblockhash(), 16), block.sha256)
if __name__ == '__main__':
BIP65Test().main()
|
mit
|
bcl/pykickstart
|
tests/commands/rhsm.py
|
2
|
5046
|
#
# Copyright 2019 Red Hat, Inc.
#
# This copyrighted material is made available to anyone wishing to use, modify,
# copy, or redistribute it subject to the terms and conditions of the GNU
# General Public License v.2. This program is distributed in the hope that it
# will be useful, but WITHOUT ANY WARRANTY expressed or implied, including the
# implied warranties of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# See the GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along with
# this program; if not, write to the Free Software Foundation, Inc., 51
# Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. Any Red Hat
# trademarks that are incorporated in the source code or documentation are not
# subject to the GNU General Public License and may only be used or replicated
# with the express permission of Red Hat, Inc.
#
import unittest
from tests.baseclass import CommandTest
class RHEL8_TestCase(CommandTest):
def runTest(self):
# basic parsing
self.assert_parse('rhsm --organization="12345" --activation-key="abcd"')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --connect-to-insights')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --server-hostname="https://rhsm.example.com"')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --rhsm-baseurl="https://content.example.com"')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --server-hostname="https://rhsm.example.com" --connect-to-insights')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --proxy="http://proxy.com"')
# just the rhsm command without any options is not valid
self.assert_parse_error('rhsm')
# multiple activation keys can be passed
self.assert_parse('rhsm --organization="12345" --activation-key="a" --activation-key="b" --activation-key="c"')
# at least one activation key needs to be present
self.assert_parse_error('rhsm --organization="12345"')
# empty string is not a valid activation key
self.assert_parse_error('rhsm --organization="12345" --activation-key=""')
self.assert_parse_error('rhsm --organization="12345" --activation-key="a" --activation-key="b" --activation-key=""')
# organization id needs to be always specified
self.assert_parse_error('rhsm --activation-key="a"')
self.assert_parse_error('rhsm --activation-key="a" --activation-key="b" --activation-key="c"')
# check proxy parsing
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --proxy="http://proxy.com"')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --proxy="http://proxy.com:9001"')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --proxy="http://[email protected]:9001"')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --proxy="http://username:[email protected]:9001"')
# unknown options are an error
self.assert_parse_error('rhsm --organization="12345" --activation-key="abcd" --unknown=stuff')
# test output kickstart generation
# TODO: check if it is OK to have the organization name & activation key in output kickstart
self.assert_parse('rhsm --organization="12345" --activation-key="abcd"',
'rhsm --organization="12345" --activation-key="abcd"\n')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --activation-key="efgh"',
'rhsm --organization="12345" --activation-key="abcd" --activation-key="efgh"\n')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --server-hostname="https://rhsm.example.com" --connect-to-insights',
'rhsm --organization="12345" --activation-key="abcd" --connect-to-insights --server-hostname="https://rhsm.example.com"\n')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --rhsm-baseurl="https://content.example.com" --connect-to-insights',
'rhsm --organization="12345" --activation-key="abcd" --connect-to-insights --rhsm-baseurl="https://content.example.com"\n')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --rhsm-baseurl="https://content.example.com" --server-hostname="https://rhsm.example.com"',
'rhsm --organization="12345" --activation-key="abcd" --server-hostname="https://rhsm.example.com" --rhsm-baseurl="https://content.example.com"\n')
self.assert_parse('rhsm --organization="12345" --activation-key="abcd" --proxy="http://username:[email protected]:9001"',
'rhsm --organization="12345" --activation-key="abcd" --proxy="http://username:[email protected]:9001"\n')
if __name__ == "__main__":
unittest.main()
|
gpl-2.0
|
TieWei/nova
|
nova/cells/filters/__init__.py
|
58
|
2117
|
# Copyright (c) 2012-2013 Rackspace Hosting
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Cell scheduler filters
"""
from nova import filters
from nova.openstack.common import log as logging
from nova import policy
LOG = logging.getLogger(__name__)
class BaseCellFilter(filters.BaseFilter):
"""Base class for cell filters."""
def authorized(self, ctxt):
"""Return whether or not the context is authorized for this filter
based on policy.
The policy action is "cells_scheduler_filter:<name>" where <name>
is the name of the filter class.
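        For example, a filter subclass named TargetCellFilter (a hypothetical
        name used only for illustration) would be checked against the policy
        action "cells_scheduler_filter:TargetCellFilter".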
"""
name = 'cells_scheduler_filter:' + self.__class__.__name__
target = {'project_id': ctxt.project_id,
'user_id': ctxt.user_id}
return policy.enforce(ctxt, name, target, do_raise=False)
def _filter_one(self, cell, filter_properties):
return self.cell_passes(cell, filter_properties)
def cell_passes(self, cell, filter_properties):
"""Return True if the CellState passes the filter, otherwise False.
Override this in a subclass.
"""
raise NotImplementedError()
class CellFilterHandler(filters.BaseFilterHandler):
def __init__(self):
super(CellFilterHandler, self).__init__(BaseCellFilter)
def all_filters():
"""Return a list of filter classes found in this directory.
This method is used as the default for available scheduler filters
and should return a list of all filter classes available.
"""
return CellFilterHandler().get_all_classes()
|
apache-2.0
|
Didacti/tornadozencoder
|
test/test_reports.py
|
2
|
2671
|
import unittest
from mock import patch
from test_util import TEST_API_KEY, load_response
from zencoder import Zencoder
import datetime
class TestReports(unittest.TestCase):
def setUp(self):
self.zen = Zencoder(api_key=TEST_API_KEY)
@patch("requests.Session.get")
def test_reports_vod(self, get):
get.return_value = load_response(200, 'fixtures/report_vod.json')
resp = self.zen.report.vod()
self.assertEquals(resp.code, 200)
self.assertEquals(resp.body['total']['encoded_minutes'], 6)
self.assertEquals(resp.body['total']['billable_minutes'], 8)
@patch("requests.Session.get")
def test_reports_live(self, get):
get.return_value = load_response(200, 'fixtures/report_live.json')
resp = self.zen.report.live()
self.assertEquals(resp.code, 200)
self.assertEquals(resp.body['total']['stream_hours'], 5)
self.assertEquals(resp.body['total']['encoded_hours'], 5)
self.assertEquals(resp.body['statistics']['length'], 5)
@patch("requests.Session.get")
def test_reports_all(self, get):
get.return_value = load_response(200, 'fixtures/report_all.json')
resp = self.zen.report.all()
self.assertEquals(resp.code, 200)
self.assertEquals(resp.body['total']['live']['stream_hours'], 5)
self.assertEquals(resp.body['total']['live']['encoded_hours'], 5)
self.assertEquals(resp.body['total']['vod']['encoded_minutes'], 6)
self.assertEquals(resp.body['total']['vod']['billable_minutes'], 8)
self.assertEquals(resp.body['statistics']['live']['length'], 2)
@patch("requests.Session.get")
def test_reports_all_date_filter(self, get):
get.return_value = load_response(200, 'fixtures/report_all_date.json')
start = datetime.date(2013, 5, 13)
end = datetime.date(2013, 5, 13)
resp = self.zen.report.all(start_date=start, end_date=end)
self.assertEquals(resp.code, 200)
self.assertEquals(resp.body['statistics']['vod'][0]['encoded_minutes'], 5)
self.assertEquals(resp.body['statistics']['vod'][0]['billable_minutes'], 0)
self.assertEquals(resp.body['statistics']['live'][0]['stream_hours'], 1)
self.assertEquals(resp.body['statistics']['live'][0]['total_hours'], 2)
self.assertEquals(resp.body['total']['vod']['encoded_minutes'], 5)
self.assertEquals(resp.body['total']['vod']['billable_minutes'], 0)
self.assertEquals(resp.body['total']['live']['stream_hours'], 1)
self.assertEquals(resp.body['total']['live']['total_hours'], 2)
if __name__ == "__main__":
unittest.main()
|
mit
|
yvxiang/tera
|
src/sdk/python/sample.py
|
2
|
1915
|
#!/usr/bin/env python
"""
sample of using Tera Python SDK
"""
from TeraSdk import Client, RowMutation, MUTATION_CALLBACK, TeraSdkException
import time
def main():
"""
    REQUIRES: tera.flag in the current working directory; the table `oops2' (opened below) has been created
"""
try:
client = Client("./tera.flag", "pysdk")
except TeraSdkException as e:
print(e.reason)
return
try:
table = client.OpenTable("oops2")
except TeraSdkException as e:
print(e.reason)
return
# sync put
try:
table.Put("row_sync", "cf0", "qu_sync", "value_sync")
except TeraSdkException as e:
print(e.reason)
return
# sync get
try:
print(table.Get("row_sync", "cf0", "qu_sync", 0))
except TeraSdkException as e:
print(e.reason)
if "not found" in e.reason:
pass
else:
return
# scan (stream)
scan(table)
# async put
mu = table.NewRowMutation("row_async")
mu.Put("cf0", "qu_async", "value_async")
mycallback = MUTATION_CALLBACK(my_mu_callback)
mu.SetCallback(mycallback)
table.ApplyMutation(mu) # async
while not table.IsPutFinished():
time.sleep(0.01)
print("main() done\n")
def my_mu_callback(raw_mu):
mu = RowMutation(raw_mu)
print "callback of rowkey:", mu.RowKey()
def scan(table):
from TeraSdk import ScanDescriptor
scan_desc = ScanDescriptor("")
scan_desc.SetBufferSize(1024 * 1024) # 1MB
try:
stream = table.Scan(scan_desc)
except TeraSdkException as e:
print(e.reason)
return
while not stream.Done():
row = stream.RowName()
column = stream.ColumnName()
timestamp = str(stream.Timestamp())
val = stream.Value()
        print(row + ":" + column + ":" + timestamp + ":" + val)
stream.Next()
if __name__ == '__main__':
main()
|
bsd-3-clause
|
mjgrav2001/scikit-learn
|
sklearn/metrics/tests/test_regression.py
|
272
|
6066
|
from __future__ import division, print_function
import numpy as np
from itertools import product
from sklearn.utils.testing import assert_raises
from sklearn.utils.testing import assert_equal
from sklearn.utils.testing import assert_almost_equal
from sklearn.utils.testing import assert_array_equal
from sklearn.utils.testing import assert_array_almost_equal
from sklearn.metrics import explained_variance_score
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
from sklearn.metrics import median_absolute_error
from sklearn.metrics import r2_score
from sklearn.metrics.regression import _check_reg_targets
def test_regression_metrics(n_samples=50):
y_true = np.arange(n_samples)
y_pred = y_true + 1
assert_almost_equal(mean_squared_error(y_true, y_pred), 1.)
assert_almost_equal(mean_absolute_error(y_true, y_pred), 1.)
assert_almost_equal(median_absolute_error(y_true, y_pred), 1.)
assert_almost_equal(r2_score(y_true, y_pred), 0.995, 2)
assert_almost_equal(explained_variance_score(y_true, y_pred), 1.)
def test_multioutput_regression():
y_true = np.array([[1, 0, 0, 1], [0, 1, 1, 1], [1, 1, 0, 1]])
y_pred = np.array([[0, 0, 0, 1], [1, 0, 1, 1], [0, 0, 0, 1]])
error = mean_squared_error(y_true, y_pred)
assert_almost_equal(error, (1. / 3 + 2. / 3 + 2. / 3) / 4.)
    # mean_absolute_error and mean_squared_error are equal here because
    # every per-element difference is 0 or 1, so |diff| == diff**2.
error = mean_absolute_error(y_true, y_pred)
assert_almost_equal(error, (1. / 3 + 2. / 3 + 2. / 3) / 4.)
error = r2_score(y_true, y_pred, multioutput='variance_weighted')
assert_almost_equal(error, 1. - 5. / 2)
error = r2_score(y_true, y_pred, multioutput='uniform_average')
assert_almost_equal(error, -.875)
def test_regression_metrics_at_limits():
assert_almost_equal(mean_squared_error([0.], [0.]), 0.00, 2)
assert_almost_equal(mean_absolute_error([0.], [0.]), 0.00, 2)
assert_almost_equal(median_absolute_error([0.], [0.]), 0.00, 2)
assert_almost_equal(explained_variance_score([0.], [0.]), 1.00, 2)
assert_almost_equal(r2_score([0., 1], [0., 1]), 1.00, 2)
def test__check_reg_targets():
# All of length 3
EXAMPLES = [
("continuous", [1, 2, 3], 1),
("continuous", [[1], [2], [3]], 1),
("continuous-multioutput", [[1, 1], [2, 2], [3, 1]], 2),
("continuous-multioutput", [[5, 1], [4, 2], [3, 1]], 2),
("continuous-multioutput", [[1, 3, 4], [2, 2, 2], [3, 1, 1]], 3),
]
for (type1, y1, n_out1), (type2, y2, n_out2) in product(EXAMPLES,
repeat=2):
if type1 == type2 and n_out1 == n_out2:
y_type, y_check1, y_check2, multioutput = _check_reg_targets(
y1, y2, None)
assert_equal(type1, y_type)
if type1 == 'continuous':
assert_array_equal(y_check1, np.reshape(y1, (-1, 1)))
assert_array_equal(y_check2, np.reshape(y2, (-1, 1)))
else:
assert_array_equal(y_check1, y1)
assert_array_equal(y_check2, y2)
else:
assert_raises(ValueError, _check_reg_targets, y1, y2, None)
def test_regression_multioutput_array():
y_true = [[1, 2], [2.5, -1], [4.5, 3], [5, 7]]
y_pred = [[1, 1], [2, -1], [5, 4], [5, 6.5]]
mse = mean_squared_error(y_true, y_pred, multioutput='raw_values')
mae = mean_absolute_error(y_true, y_pred, multioutput='raw_values')
r = r2_score(y_true, y_pred, multioutput='raw_values')
evs = explained_variance_score(y_true, y_pred, multioutput='raw_values')
assert_array_almost_equal(mse, [0.125, 0.5625], decimal=2)
assert_array_almost_equal(mae, [0.25, 0.625], decimal=2)
assert_array_almost_equal(r, [0.95, 0.93], decimal=2)
assert_array_almost_equal(evs, [0.95, 0.93], decimal=2)
    # mean_absolute_error and mean_squared_error are equal here because
    # every per-element difference is 0 or 1, so |diff| == diff**2.
y_true = [[0, 0]]*4
y_pred = [[1, 1]]*4
mse = mean_squared_error(y_true, y_pred, multioutput='raw_values')
mae = mean_absolute_error(y_true, y_pred, multioutput='raw_values')
r = r2_score(y_true, y_pred, multioutput='raw_values')
assert_array_almost_equal(mse, [1., 1.], decimal=2)
assert_array_almost_equal(mae, [1., 1.], decimal=2)
assert_array_almost_equal(r, [0., 0.], decimal=2)
r = r2_score([[0, -1], [0, 1]], [[2, 2], [1, 1]], multioutput='raw_values')
assert_array_almost_equal(r, [0, -3.5], decimal=2)
assert_equal(np.mean(r), r2_score([[0, -1], [0, 1]], [[2, 2], [1, 1]],
multioutput='uniform_average'))
evs = explained_variance_score([[0, -1], [0, 1]], [[2, 2], [1, 1]],
multioutput='raw_values')
assert_array_almost_equal(evs, [0, -1.25], decimal=2)
    # Checking for the condition in which both the numerator and the
    # denominator are zero.
y_true = [[1, 3], [-1, 2]]
y_pred = [[1, 4], [-1, 1]]
r2 = r2_score(y_true, y_pred, multioutput='raw_values')
assert_array_almost_equal(r2, [1., -3.], decimal=2)
assert_equal(np.mean(r2), r2_score(y_true, y_pred,
multioutput='uniform_average'))
evs = explained_variance_score(y_true, y_pred, multioutput='raw_values')
assert_array_almost_equal(evs, [1., -3.], decimal=2)
assert_equal(np.mean(evs), explained_variance_score(y_true, y_pred))
def test_regression_custom_weights():
y_true = [[1, 2], [2.5, -1], [4.5, 3], [5, 7]]
y_pred = [[1, 1], [2, -1], [5, 4], [5, 6.5]]
msew = mean_squared_error(y_true, y_pred, multioutput=[0.4, 0.6])
maew = mean_absolute_error(y_true, y_pred, multioutput=[0.4, 0.6])
rw = r2_score(y_true, y_pred, multioutput=[0.4, 0.6])
evsw = explained_variance_score(y_true, y_pred, multioutput=[0.4, 0.6])
assert_almost_equal(msew, 0.39, decimal=2)
assert_almost_equal(maew, 0.475, decimal=3)
assert_almost_equal(rw, 0.94, decimal=2)
assert_almost_equal(evsw, 0.94, decimal=2)
|
bsd-3-clause
|
broferek/ansible
|
lib/ansible/module_utils/network/edgeswitch/edgeswitch_interface.py
|
52
|
3521
|
# This code is part of Ansible, but is an independent component.
# This particular file snippet, and this file snippet only, is BSD licensed.
# Modules you write using this snippet, which is embedded dynamically by Ansible
# still belong to the author of the module, and may assign their own license
# to the complete work.
#
# (c) 2018 Red Hat Inc.
#
# Redistribution and use in source and binary forms, with or without modification,
# are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
# USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
import re
class InterfaceConfiguration:
def __init__(self):
self.commands = []
self.merged = False
def has_same_commands(self, interface):
len1 = len(self.commands)
len2 = len(interface.commands)
return len1 == len2 and len1 == len(frozenset(self.commands).intersection(interface.commands))
def merge_interfaces(interfaces):
""" to reduce commands generated by an edgeswitch module
we take interfaces one by one and we try to merge them with neighbors if everyone has same commands to run
"""
merged = {}
for i, interface in interfaces.items():
if interface.merged:
continue
interface.merged = True
match = re.match(r'(\d+)\/(\d+)', i)
group = int(match.group(1))
start = int(match.group(2))
end = start
while True:
try:
start = start - 1
key = '{0}/{1}'.format(group, start)
neighbor = interfaces[key]
if not neighbor.merged and interface.has_same_commands(neighbor):
neighbor.merged = True
else:
break
except KeyError:
break
start = start + 1
while True:
try:
end = end + 1
key = '{0}/{1}'.format(group, end)
neighbor = interfaces[key]
if not neighbor.merged and interface.has_same_commands(neighbor):
neighbor.merged = True
else:
break
except KeyError:
break
end = end - 1
if end == start:
key = '{0}/{1}'.format(group, start)
else:
key = '{0}/{1}-{2}/{3}'.format(group, start, group, end)
merged[key] = interface
return merged
|
gpl-3.0
|
cryvate/project-euler
|
project_euler/solutions/problem_51.py
|
1
|
1244
|
from collections import Counter
from itertools import combinations
from ..library.number_theory.primes import is_prime, prime_sieve
from ..library.base import list_to_number, number_to_list
def solve() -> int:
primes = prime_sieve(1_000_000)
for prime in primes:
if prime < 100_000:
continue
representation = number_to_list(prime)
counter = Counter(representation)
if max(counter.values()) < 3:
continue
masks = []
for digit in counter:
            if digit > 2:  # the smallest member of an 8-prime family must repeat 0, 1 or 2, since at most 2 of the 10 replacements may fail
continue
if counter[digit] >= 3:
digit_at = [i for i, d in enumerate(representation)
if d == digit]
masks += list(combinations(digit_at, 3))
for mask in masks:
masked_representation = list(representation)
            family_primes = 0
            for digit in range(10):
                for index in mask:
                    masked_representation[-index] = digit
                number = list_to_number(masked_representation)
                if is_prime(number, primes):
                    family_primes += 1
            if family_primes == 8:
return prime
|
mit
|
bgxavier/nova
|
nova/api/openstack/compute/plugins/v3/personality.py
|
43
|
2485
|
# Copyright 2014 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from nova.api.openstack.compute.schemas.v3 import personality
from nova.api.openstack import extensions
ALIAS = "os-personality"
class Personality(extensions.V3APIExtensionBase):
"""Personality support."""
name = "Personality"
alias = ALIAS
version = 1
def get_controller_extensions(self):
return []
def get_resources(self):
return []
def _get_injected_files(self, personality):
"""Create a list of injected files from the personality attribute.
At this time, injected_files must be formatted as a list of
(file_path, file_content) pairs for compatibility with the
underlying compute service.
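        For example, a personality value of
        [{'path': '/etc/motd', 'contents': 'welcome'}] (hypothetical data)
        is translated into [('/etc/motd', 'welcome')].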
"""
injected_files = []
for item in personality:
injected_files.append((item['path'], item['contents']))
return injected_files
    # NOTE(gmann): This function is not supposed to use the 'body_deprecated_param'
    # parameter; it is only there to handle the scheduler_hint extension for V2.1.
    # 'body_deprecated_param' is made optional to avoid changes to
    # server_update & server_rebuild.
def server_create(self, server_dict, create_kwargs,
body_deprecated_param=None):
if 'personality' in server_dict:
create_kwargs['injected_files'] = self._get_injected_files(
server_dict['personality'])
def server_rebuild(self, server_dict, create_kwargs,
body_deprecated_param=None):
if 'personality' in server_dict:
create_kwargs['files_to_inject'] = self._get_injected_files(
server_dict['personality'])
def get_server_create_schema(self):
return personality.server_create
get_server_rebuild_schema = get_server_create_schema
|
apache-2.0
|
rue89-tech/edx-analytics-pipeline
|
edx/analytics/tasks/enrollments.py
|
1
|
21343
|
"""Compute metrics related to user enrollments in courses"""
import logging
import datetime
import luigi
import luigi.task
from edx.analytics.tasks.database_imports import ImportAuthUserProfileTask
from edx.analytics.tasks.mapreduce import MapReduceJobTaskMixin, MapReduceJobTask
from edx.analytics.tasks.pathutil import EventLogSelectionDownstreamMixin, EventLogSelectionMixin
from edx.analytics.tasks.url import get_target_from_url, url_path_join
from edx.analytics.tasks.util import eventlog, opaque_key_util
from edx.analytics.tasks.util.hive import WarehouseMixin, HiveTableTask, HivePartition, HiveQueryToMysqlTask
log = logging.getLogger(__name__)
DEACTIVATED = 'edx.course.enrollment.deactivated'
ACTIVATED = 'edx.course.enrollment.activated'
MODE_CHANGED = 'edx.course.enrollment.mode_changed'
class CourseEnrollmentTask(EventLogSelectionMixin, MapReduceJobTask):
"""Produce a data set that shows which days each user was enrolled in each course."""
output_root = luigi.Parameter()
def mapper(self, line):
value = self.get_event_and_date_string(line)
if value is None:
return
event, _date_string = value
event_type = event.get('event_type')
if event_type is None:
log.error("encountered event with no event_type: %s", event)
return
if event_type not in (DEACTIVATED, ACTIVATED, MODE_CHANGED):
return
timestamp = eventlog.get_event_time_string(event)
if timestamp is None:
log.error("encountered event with bad timestamp: %s", event)
return
event_data = eventlog.get_event_data(event)
if event_data is None:
return
course_id = event_data.get('course_id')
if course_id is None or not opaque_key_util.is_valid_course_id(course_id):
log.error("encountered explicit enrollment event with invalid course_id: %s", event)
return
user_id = event_data.get('user_id')
if user_id is None:
log.error("encountered explicit enrollment event with no user_id: %s", event)
return
mode = event_data.get('mode')
if mode is None:
log.error("encountered explicit enrollment event with no mode: %s", event)
return
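        # Illustrative shape of the emitted record (all values hypothetical):
        #   key:   (course_id, user_id), e.g. ('course-v1:edX+DemoX+2014', 42)
        #   value: (timestamp, event_type, mode), e.g. ('2014-01-01T00:00:00.000000', ACTIVATED, 'honor')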
yield (course_id, user_id), (timestamp, event_type, mode)
def reducer(self, key, values):
"""Emit records for each day the user was enrolled in the course."""
course_id, user_id = key
event_stream_processor = DaysEnrolledForEvents(course_id, user_id, self.interval, values)
for day_enrolled_record in event_stream_processor.days_enrolled():
yield day_enrolled_record
def output(self):
return get_target_from_url(self.output_root)
class EnrollmentEvent(object):
"""The critical information necessary to process the event in the event stream."""
def __init__(self, timestamp, event_type, mode):
self.timestamp = timestamp
self.datestamp = eventlog.timestamp_to_datestamp(timestamp)
self.event_type = event_type
self.mode = mode
class DaysEnrolledForEvents(object):
"""
Determine which days a user was enrolled in a course given a stream of enrollment events.
Produces a record for each date from the date the user enrolled in the course for the first time to the end of the
interval. Note that the user need not have been enrolled in the course for the entire day. These records will have
the following format:
datestamp (str): The date the user was enrolled in the course during.
course_id (str): Identifies the course the user was enrolled in.
user_id (int): Identifies the user that was enrolled in the course.
enrolled_at_end (int): 1 if the user was still enrolled in the course at the end of the day.
change_since_last_day (int): 1 if the user has changed to the enrolled state, -1 if the user has changed
to the unenrolled state and 0 if the user's enrollment state hasn't changed.
If the first event in the stream for a user in a course is an unenrollment event, that would indicate that the user
was enrolled in the course before that moment in time. It is unknown, however, when the user enrolled in the course,
so we conservatively omit records for the time before that unenrollment event even though it is likely they were
enrolled in the course for some unknown amount of time before then. Enrollment counts for dates before the
unenrollment event will be less than the actual value.
If the last event for a user is an enrollment event, that would indicate that the user was still enrolled in the
course at the end of the interval, so records are produced from that last enrollment event all the way to the end of
the interval. If we miss an unenrollment event after this point, it will result in enrollment counts that are
actually higher than the actual value.
Both of the above paragraphs describe edge cases that account for the majority of the error that can be observed in
the results of this analysis.
Ranges of dates where the user is continuously enrolled will be represented as contiguous records with the first
record indicating the change (new enrollment), and the last record indicating the unenrollment. It will look
something like this::
datestamp,enrolled_at_end,change_since_last_day
2014-01-01,1,1
2014-01-02,1,0
2014-01-03,1,0
2014-01-04,0,-1
2014-01-05,0,0
The above activity indicates that the user enrolled in the course on 2014-01-01 and unenrolled from the course on
2014-01-04. Records are created for every date after the date when they first enrolled.
If a user enrolls and unenrolls from a course on the same day, a record will appear that looks like this::
datestamp,enrolled_at_end,change_since_last_day
2014-01-01,0,0
Args:
course_id (str): Identifies the course the user was enrolled in.
user_id (int): Identifies the user that was enrolled in the course.
interval (luigi.date_interval.DateInterval): The interval of time in which these enrollment events took place.
events (iterable): The enrollment events as produced by the map tasks. This is expected to be an iterable
structure whose elements are tuples consisting of a timestamp and an event type.
"""
ENROLLED = 1
UNENROLLED = 0
MODE_UNKNOWN = 'unknown'
def __init__(self, course_id, user_id, interval, events):
self.course_id = course_id
self.user_id = user_id
self.interval = interval
self.sorted_events = sorted(events)
# After sorting, we can discard time information since we only care about date transitions.
self.sorted_events = [
EnrollmentEvent(timestamp, event_type, mode) for timestamp, event_type, mode in self.sorted_events
]
        # Since each event looks ahead to see the time of the next event, insert a dummy event at the end that
# indicates the end of the requested interval. If the user's last event is an enrollment activation event then
# they are assumed to be enrolled up until the end of the requested interval. Note that the mapper ensures that
# no events on or after date_b are included in the analyzed data set.
self.sorted_events.append(EnrollmentEvent(self.interval.date_b.isoformat(), None, None)) # pylint: disable=no-member
self.first_event = self.sorted_events[0]
# track the previous state in order to easily detect state changes between days.
if self.first_event.event_type == DEACTIVATED:
# First event was an unenrollment event, assume the user was enrolled before that moment in time.
log.warning('First event is an unenrollment for user %d in course %s on %s',
self.user_id, self.course_id, self.first_event.datestamp)
elif self.first_event.event_type == MODE_CHANGED:
log.warning('First event is a mode change for user %d in course %s on %s',
self.user_id, self.course_id, self.first_event.datestamp)
# Before we start processing events, we can assume that their current state is the same as it has been for all
# time before the first event.
self.state = self.previous_state = self.UNENROLLED
self.mode = self.MODE_UNKNOWN
def days_enrolled(self):
"""
A record is yielded for each day during which the user was enrolled in the course.
Yields:
tuple: An enrollment record for each day during which the user was enrolled in the course.
"""
# The last element of the list is a placeholder indicating the end of the interval. Don't process it.
for index in range(len(self.sorted_events) - 1):
self.event = self.sorted_events[index]
self.next_event = self.sorted_events[index + 1]
self.change_state()
if self.event.datestamp != self.next_event.datestamp:
change_since_last_day = self.state - self.previous_state
# There may be a very wide gap between this event and the next event. If the user is currently
# enrolled, we can assume they continue to be enrolled at least until the next day we see an event.
# Emit records for each of those intermediary days. Since the end of the interval is represented by
# a dummy event at the end of the list of events, it will be represented by self.next_event when
# processing the last real event in the stream. This allows the records to be produced up to the end
# of the interval if the last known state was "ENROLLED".
for datestamp in self.all_dates_between(self.event.datestamp, self.next_event.datestamp):
yield self.enrollment_record(
datestamp,
self.state,
change_since_last_day if datestamp == self.event.datestamp else 0,
self.mode
)
self.previous_state = self.state
def all_dates_between(self, start_date_str, end_date_str):
"""
All dates from the start date up to the end date.
Yields:
str: ISO 8601 datestamp for each date from the first date (inclusive) up to the end date (exclusive).
"""
current_date = self.parse_date_string(start_date_str)
end_date = self.parse_date_string(end_date_str)
while current_date < end_date:
yield current_date.isoformat()
current_date += datetime.timedelta(days=1)
def parse_date_string(self, date_str):
"""Efficiently parse an ISO 8601 date stamp into a datetime.date() object."""
date_parts = [int(p) for p in date_str.split('-')[:3]]
return datetime.date(*date_parts)
def enrollment_record(self, datestamp, enrolled_at_end, change_since_last_day, mode_at_end):
"""A complete enrollment record."""
return (datestamp, self.course_id, self.user_id, enrolled_at_end, change_since_last_day, mode_at_end)
def change_state(self):
"""Change state when appropriate.
Note that in spite of our best efforts some events might be lost, causing invalid state transitions.
"""
self.mode = self.event.mode
if self.state == self.ENROLLED and self.event.event_type == DEACTIVATED:
self.state = self.UNENROLLED
elif self.state == self.UNENROLLED and self.event.event_type == ACTIVATED:
self.state = self.ENROLLED
elif self.event.event_type == MODE_CHANGED:
pass
else:
log.warning(
'No state change for %s event. User %d is already in the requested state for course %s on %s.',
self.event.event_type, self.user_id, self.course_id, self.event.datestamp
)
class CourseEnrollmentTableDownstreamMixin(WarehouseMixin, EventLogSelectionDownstreamMixin, MapReduceJobTaskMixin):
"""All parameters needed to run the CourseEnrollmentTableTask task."""
    # Make the interval optional:
interval = luigi.DateIntervalParameter(default=None)
# Define optional parameters, to be used if 'interval' is not defined.
interval_start = luigi.DateParameter(
default_from_config={'section': 'enrollments', 'name': 'interval_start'},
significant=False,
)
interval_end = luigi.DateParameter(default=datetime.datetime.utcnow().date(), significant=False)
def __init__(self, *args, **kwargs):
super(CourseEnrollmentTableDownstreamMixin, self).__init__(*args, **kwargs)
if not self.interval:
self.interval = luigi.date_interval.Custom(self.interval_start, self.interval_end)
class CourseEnrollmentTableTask(CourseEnrollmentTableDownstreamMixin, HiveTableTask):
"""Hive table that stores the set of users enrolled in each course over time."""
@property
def table(self):
return 'course_enrollment'
@property
def columns(self):
return [
('date', 'STRING'),
('course_id', 'STRING'),
('user_id', 'INT'),
('at_end', 'TINYINT'),
('change', 'TINYINT'),
('mode', 'STRING'),
]
@property
def partition(self):
return HivePartition('dt', self.interval.date_b.isoformat()) # pylint: disable=no-member
def requires(self):
return CourseEnrollmentTask(
mapreduce_engine=self.mapreduce_engine,
n_reduce_tasks=self.n_reduce_tasks,
source=self.source,
interval=self.interval,
pattern=self.pattern,
output_root=self.partition_location,
)
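# Illustrative rows (hypothetical, added for clarity): the course_enrollment table defined above holds
# one row per (date, course, user), shaped like
#   2014-11-19  edX/DemoX/Demo_Course  12345  1   1  honor    <- user enrolled on this day
#   2014-11-20  edX/DemoX/Demo_Course  12345  1   0  honor    <- still enrolled, no change
#   2014-11-21  edX/DemoX/Demo_Course  12345  0  -1  honor    <- user unenrolled on this day
# where the columns are (date, course_id, user_id, at_end, change, mode).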
class EnrollmentTask(CourseEnrollmentTableDownstreamMixin, HiveQueryToMysqlTask):
"""Base class for breakdowns of enrollments"""
@property
def indexes(self):
return [
('course_id',),
# Note that the order here is extremely important. The API query pattern needs to filter first by course and
# then by date.
('course_id', 'date'),
]
@property
def partition(self):
return HivePartition('dt', self.interval.date_b.isoformat()) # pylint: disable=no-member
@property
def required_table_tasks(self):
yield (
CourseEnrollmentTableTask(
mapreduce_engine=self.mapreduce_engine,
n_reduce_tasks=self.n_reduce_tasks,
source=self.source,
interval=self.interval,
pattern=self.pattern,
warehouse_path=self.warehouse_path,
),
ImportAuthUserProfileTask()
)
class EnrollmentByGenderTask(EnrollmentTask):
"""Breakdown of enrollments by gender as reported by the user"""
@property
def query(self):
return """
SELECT
ce.date,
ce.course_id,
IF(p.gender != '', p.gender, NULL),
SUM(ce.at_end),
COUNT(ce.user_id)
FROM course_enrollment ce
LEFT OUTER JOIN auth_userprofile p ON p.user_id = ce.user_id
GROUP BY
ce.date,
ce.course_id,
IF(p.gender != '', p.gender, NULL)
"""
@property
def table(self):
return 'course_enrollment_gender_daily'
@property
def columns(self):
return [
('date', 'DATE NOT NULL'),
('course_id', 'VARCHAR(255) NOT NULL'),
('gender', 'VARCHAR(6)'),
('count', 'INTEGER'),
('cumulative_count', 'INTEGER')
]
class EnrollmentByBirthYearTask(EnrollmentTask):
"""Breakdown of enrollments by age as reported by the user"""
@property
def query(self):
return """
SELECT
ce.date,
ce.course_id,
p.year_of_birth,
SUM(ce.at_end),
COUNT(ce.user_id)
FROM course_enrollment ce
LEFT OUTER JOIN auth_userprofile p ON p.user_id = ce.user_id
GROUP BY
ce.date,
ce.course_id,
p.year_of_birth
"""
@property
def table(self):
return 'course_enrollment_birth_year_daily'
@property
def columns(self):
return [
('date', 'DATE NOT NULL'),
('course_id', 'VARCHAR(255) NOT NULL'),
('birth_year', 'INTEGER'),
('count', 'INTEGER'),
('cumulative_count', 'INTEGER')
]
class EnrollmentByEducationLevelTask(EnrollmentTask):
"""Breakdown of enrollments by education level as reported by the user"""
@property
def query(self):
return """
SELECT
ce.date,
ce.course_id,
CASE p.level_of_education
WHEN 'el' THEN 'primary'
WHEN 'jhs' THEN 'junior_secondary'
WHEN 'hs' THEN 'secondary'
WHEN 'a' THEN 'associates'
WHEN 'b' THEN 'bachelors'
WHEN 'm' THEN 'masters'
WHEN 'p' THEN 'doctorate'
WHEN 'p_se' THEN 'doctorate'
WHEN 'p_oth' THEN 'doctorate'
WHEN 'none' THEN 'none'
WHEN 'other' THEN 'other'
ELSE NULL
END,
SUM(ce.at_end),
COUNT(ce.user_id)
FROM course_enrollment ce
LEFT OUTER JOIN auth_userprofile p ON p.user_id = ce.user_id
GROUP BY
ce.date,
ce.course_id,
CASE p.level_of_education
WHEN 'el' THEN 'primary'
WHEN 'jhs' THEN 'junior_secondary'
WHEN 'hs' THEN 'secondary'
WHEN 'a' THEN 'associates'
WHEN 'b' THEN 'bachelors'
WHEN 'm' THEN 'masters'
WHEN 'p' THEN 'doctorate'
WHEN 'p_se' THEN 'doctorate'
WHEN 'p_oth' THEN 'doctorate'
WHEN 'none' THEN 'none'
WHEN 'other' THEN 'other'
ELSE NULL
END
"""
@property
def table(self):
return 'course_enrollment_education_level_daily'
@property
def columns(self):
return [
('date', 'DATE NOT NULL'),
('course_id', 'VARCHAR(255) NOT NULL'),
('education_level', 'VARCHAR(16)'),
('count', 'INTEGER'),
('cumulative_count', 'INTEGER')
]
class EnrollmentByModeTask(EnrollmentTask):
"""Breakdown of enrollments by mode"""
@property
def query(self):
return """
SELECT
ce.date,
ce.course_id,
ce.mode,
SUM(ce.at_end),
COUNT(ce.user_id)
FROM course_enrollment ce
GROUP BY
ce.date,
ce.course_id,
ce.mode
"""
@property
def table(self):
return 'course_enrollment_mode_daily'
@property
def columns(self):
return [
('date', 'DATE NOT NULL'),
('course_id', 'VARCHAR(255) NOT NULL'),
('mode', 'VARCHAR(255) NOT NULL'),
('count', 'INTEGER'),
('cumulative_count', 'INTEGER')
]
class EnrollmentDailyTask(EnrollmentTask):
"""A history of the number of students enrolled in each course at the end of each day"""
@property
def query(self):
return """
SELECT
ce.course_id,
ce.date,
SUM(ce.at_end),
COUNT(ce.user_id)
FROM course_enrollment ce
GROUP BY
ce.course_id,
ce.date
"""
@property
def table(self):
return 'course_enrollment_daily'
@property
def columns(self):
return [
('course_id', 'VARCHAR(255) NOT NULL'),
('date', 'DATE NOT NULL'),
('count', 'INTEGER'),
('cumulative_count', 'INTEGER')
]
class ImportEnrollmentsIntoMysql(CourseEnrollmentTableDownstreamMixin, luigi.WrapperTask):
"""Import all breakdowns of enrollment into MySQL"""
def requires(self):
kwargs = {
'n_reduce_tasks': self.n_reduce_tasks,
'source': self.source,
'interval': self.interval,
'pattern': self.pattern,
'warehouse_path': self.warehouse_path,
}
yield (
EnrollmentByGenderTask(**kwargs),
EnrollmentByBirthYearTask(**kwargs),
EnrollmentByEducationLevelTask(**kwargs),
EnrollmentByModeTask(**kwargs),
EnrollmentDailyTask(**kwargs),
)
|
agpl-3.0
|
donaloconnor/bitcoin
|
test/functional/rpc_named_arguments.py
|
22
|
1211
|
#!/usr/bin/env python3
# Copyright (c) 2016-2017 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.
"""Test using named arguments for RPCs."""
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import (
assert_equal,
assert_raises_rpc_error,
)
class NamedArgumentTest(BitcoinTestFramework):
def set_test_params(self):
self.num_nodes = 1
def run_test(self):
node = self.nodes[0]
h = node.help(command='getblockchaininfo')
assert(h.startswith('getblockchaininfo\n'))
assert_raises_rpc_error(-8, 'Unknown named parameter', node.help, random='getblockchaininfo')
h = node.getblockhash(height=0)
node.getblock(blockhash=h)
assert_equal(node.echo(), [])
assert_equal(node.echo(arg0=0,arg9=9), [0] + [None]*8 + [9])
assert_equal(node.echo(arg1=1), [None, 1])
assert_equal(node.echo(arg9=None), [None]*10)
assert_equal(node.echo(arg0=0,arg3=3,arg9=9), [0] + [None]*2 + [3] + [None]*5 + [9])
if __name__ == '__main__':
NamedArgumentTest().main()
|
mit
|
mozilla/inventory
|
user_systems/migrations/0001_initial.py
|
2
|
14540
|
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding model 'UserOperatingSystem'
db.create_table('user_systems_useroperatingsystem', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=128)),
))
db.send_create_signal('user_systems', ['UserOperatingSystem'])
# Adding model 'UnmanagedSystemType'
db.create_table('unmanaged_system_types', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(max_length=128)),
))
db.send_create_signal('user_systems', ['UnmanagedSystemType'])
# Adding model 'CostCenter'
db.create_table('cost_centers', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('cost_center_number', self.gf('django.db.models.fields.IntegerField')()),
('name', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
))
db.send_create_signal('user_systems', ['CostCenter'])
# Adding model 'UnmanagedSystem'
db.create_table(u'unmanaged_systems', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('serial', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
('asset_tag', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
('operating_system', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['systems.OperatingSystem'], null=True, blank=True)),
('owner', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['user_systems.Owner'], null=True, blank=True)),
('system_type', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['user_systems.UnmanagedSystemType'], null=True)),
('server_model', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['systems.ServerModel'], null=True, blank=True)),
('created_on', self.gf('django.db.models.fields.DateTimeField')(null=True, blank=True)),
('updated_on', self.gf('django.db.models.fields.DateTimeField')(null=True, blank=True)),
('date_purchased', self.gf('django.db.models.fields.DateField')(null=True, blank=True)),
('cost', self.gf('django.db.models.fields.CharField')(max_length=50, blank=True)),
('cost_center', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['user_systems.CostCenter'], null=True, blank=True)),
('bug_number', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
('notes', self.gf('django.db.models.fields.TextField')(blank=True)),
('is_loaned', self.gf('django.db.models.fields.IntegerField')(null=True, blank=True)),
('is_loaner', self.gf('django.db.models.fields.IntegerField')(null=True, blank=True)),
('loaner_return_date', self.gf('django.db.models.fields.DateTimeField')(null=True, blank=True)),
))
db.send_create_signal('user_systems', ['UnmanagedSystem'])
# Adding model 'History'
db.create_table('user_systems_history', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('change', self.gf('django.db.models.fields.CharField')(max_length=1000)),
('changed_by', self.gf('django.db.models.fields.CharField')(max_length=128, null=True, blank=True)),
('system', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['user_systems.UnmanagedSystem'])),
('created', self.gf('django.db.models.fields.DateTimeField')(auto_now_add=True, blank=True)),
))
db.send_create_signal('user_systems', ['History'])
# Adding model 'Owner'
db.create_table(u'owners', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('name', self.gf('django.db.models.fields.CharField')(unique=True, max_length=255, blank=True)),
('address', self.gf('django.db.models.fields.TextField')(blank=True)),
('note', self.gf('django.db.models.fields.TextField')(blank=True)),
('user_location', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['user_systems.UserLocation'], null=True, blank=True)),
('email', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
))
db.send_create_signal('user_systems', ['Owner'])
# Adding model 'UserLicense'
db.create_table(u'user_licenses', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('username', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
('version', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
('license_type', self.gf('django.db.models.fields.CharField')(max_length=255, blank=True)),
('license_key', self.gf('django.db.models.fields.CharField')(max_length=255)),
('owner', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['user_systems.Owner'], null=True, blank=True)),
('user_operating_system', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['user_systems.UserOperatingSystem'], null=True, blank=True)),
))
db.send_create_signal('user_systems', ['UserLicense'])
# Adding model 'UserLocation'
db.create_table(u'user_locations', (
('id', self.gf('django.db.models.fields.AutoField')(primary_key=True)),
('city', self.gf('django.db.models.fields.CharField')(unique=True, max_length=255, blank=True)),
('country', self.gf('django.db.models.fields.CharField')(unique=True, max_length=255, blank=True)),
('created_at', self.gf('django.db.models.fields.DateTimeField')(null=True, blank=True)),
('updated_at', self.gf('django.db.models.fields.DateTimeField')(null=True, blank=True)),
))
db.send_create_signal('user_systems', ['UserLocation'])
def backwards(self, orm):
# Deleting model 'UserOperatingSystem'
db.delete_table('user_systems_useroperatingsystem')
# Deleting model 'UnmanagedSystemType'
db.delete_table('unmanaged_system_types')
# Deleting model 'CostCenter'
db.delete_table('cost_centers')
# Deleting model 'UnmanagedSystem'
db.delete_table(u'unmanaged_systems')
# Deleting model 'History'
db.delete_table('user_systems_history')
# Deleting model 'Owner'
db.delete_table(u'owners')
# Deleting model 'UserLicense'
db.delete_table(u'user_licenses')
# Deleting model 'UserLocation'
db.delete_table(u'user_locations')
models = {
'systems.operatingsystem': {
'Meta': {'ordering': "['name', 'version']", 'object_name': 'OperatingSystem', 'db_table': "u'operating_systems'"},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'version': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'})
},
'systems.servermodel': {
'Meta': {'ordering': "['vendor', 'model']", 'object_name': 'ServerModel', 'db_table': "u'server_models'"},
'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'part_number': ('django.db.models.fields.CharField', [], {'max_length': '255', 'null': 'True', 'blank': 'True'}),
'vendor': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'})
},
'user_systems.costcenter': {
'Meta': {'object_name': 'CostCenter', 'db_table': "'cost_centers'"},
'cost_center_number': ('django.db.models.fields.IntegerField', [], {}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'})
},
'user_systems.history': {
'Meta': {'ordering': "['-created']", 'object_name': 'History'},
'change': ('django.db.models.fields.CharField', [], {'max_length': '1000'}),
'changed_by': ('django.db.models.fields.CharField', [], {'max_length': '128', 'null': 'True', 'blank': 'True'}),
'created': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'system': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['user_systems.UnmanagedSystem']"})
},
'user_systems.owner': {
'Meta': {'ordering': "['name']", 'object_name': 'Owner', 'db_table': "u'owners'"},
'address': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'email': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255', 'blank': 'True'}),
'note': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'user_location': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['user_systems.UserLocation']", 'null': 'True', 'blank': 'True'})
},
'user_systems.unmanagedsystem': {
'Meta': {'object_name': 'UnmanagedSystem', 'db_table': "u'unmanaged_systems'"},
'asset_tag': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'bug_number': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'cost': ('django.db.models.fields.CharField', [], {'max_length': '50', 'blank': 'True'}),
'cost_center': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['user_systems.CostCenter']", 'null': 'True', 'blank': 'True'}),
'created_on': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'date_purchased': ('django.db.models.fields.DateField', [], {'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_loaned': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'is_loaner': ('django.db.models.fields.IntegerField', [], {'null': 'True', 'blank': 'True'}),
'loaner_return_date': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'notes': ('django.db.models.fields.TextField', [], {'blank': 'True'}),
'operating_system': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['systems.OperatingSystem']", 'null': 'True', 'blank': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['user_systems.Owner']", 'null': 'True', 'blank': 'True'}),
'serial': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'server_model': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['systems.ServerModel']", 'null': 'True', 'blank': 'True'}),
'system_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['user_systems.UnmanagedSystemType']", 'null': 'True'}),
'updated_on': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'})
},
'user_systems.unmanagedsystemtype': {
'Meta': {'object_name': 'UnmanagedSystemType', 'db_table': "'unmanaged_system_types'"},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '128'})
},
'user_systems.userlicense': {
'Meta': {'ordering': "['license_type']", 'object_name': 'UserLicense', 'db_table': "u'user_licenses'"},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'license_key': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'license_type': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'owner': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['user_systems.Owner']", 'null': 'True', 'blank': 'True'}),
'user_operating_system': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['user_systems.UserOperatingSystem']", 'null': 'True', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'}),
'version': ('django.db.models.fields.CharField', [], {'max_length': '255', 'blank': 'True'})
},
'user_systems.userlocation': {
'Meta': {'object_name': 'UserLocation', 'db_table': "u'user_locations'"},
'city': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255', 'blank': 'True'}),
'country': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '255', 'blank': 'True'}),
'created_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'updated_at': ('django.db.models.fields.DateTimeField', [], {'null': 'True', 'blank': 'True'})
},
'user_systems.useroperatingsystem': {
'Meta': {'object_name': 'UserOperatingSystem'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '128'})
}
}
complete_apps = ['user_systems']
|
bsd-3-clause
|
rchav/vinerack
|
saleor/order/migrations/0011_auto_20160207_0534.py
|
14
|
1053
|
# -*- coding: utf-8 -*-
# Generated by Django 1.9.1 on 2016-02-07 11:34
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
import django_prices.models
class Migration(migrations.Migration):
dependencies = [
('discount', '0003_auto_20160207_0534'),
('order', '0010_auto_20160119_0541'),
]
operations = [
migrations.AddField(
model_name='order',
name='discount_amount',
field=django_prices.models.PriceField(blank=True, currency='USD', decimal_places=2, max_digits=12, null=True),
),
migrations.AddField(
model_name='order',
name='discount_name',
field=models.CharField(blank=True, default='', max_length=255),
),
migrations.AddField(
model_name='order',
name='voucher',
field=models.ForeignKey(null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='discount.Voucher'),
),
]
|
bsd-3-clause
|
pricingassistant/mrq
|
mrq/scheduler.py
|
1
|
4982
|
from future.builtins import str, object
from .context import log, queue_job
import datetime
import ujson as json
import time
def _hash_task(task):
""" Returns a unique hash for identify a task and its params """
params = task.get("params")
if params:
params = json.dumps(sorted(list(task["params"].items()), key=lambda x: x[0])) # pylint: disable=no-member
full = [str(task.get(x)) for x in ["path", "interval", "dailytime", "weekday", "monthday", "queue"]]
full.extend([str(params)])
return " ".join(full)
class Scheduler(object):
def __init__(self, collection, config_tasks):
self.collection = collection
self.config_tasks = config_tasks
self.config_synced = False
self.all_tasks = []
def check_config_integrity(self):
""" Make sure the scheduler config is valid """
tasks_by_hash = {_hash_task(t): t for t in self.config_tasks}
if len(tasks_by_hash) != len(self.config_tasks):
raise Exception("Fatal error: there was a hash duplicate in the scheduled tasks config.")
for h, task in tasks_by_hash.items():
if task.get("monthday") and not task.get("dailytime"):
raise Exception("Fatal error: you can't schedule a task with 'monthday' and without 'dailytime' (%s)" % h)
if task.get("weekday") and not task.get("dailytime"):
raise Exception("Fatal error: you can't schedule a task with 'weekday' and without 'dailytime' (%s)" % h)
if not task.get("monthday") and not task.get("weekday") and not task.get("dailytime") and not task.get("interval"):
raise Exception("Fatal error: scheduler must be specified one of monthday,weekday,dailytime,interval. (%s)" % h)
def sync_config_tasks(self):
""" Performs the first sync of a list of tasks, often defined in the config file. """
tasks_by_hash = {_hash_task(t): t for t in self.config_tasks}
for task in self.all_tasks:
if tasks_by_hash.get(task["hash"]):
del tasks_by_hash[task["hash"]]
else:
self.collection.remove({"_id": task["_id"]})
log.debug("Scheduler: deleted %s" % task["hash"])
# What remains are the new ones to be inserted
for h, task in tasks_by_hash.items():
task["hash"] = h
task["datelastqueued"] = datetime.datetime.fromtimestamp(0)
if task.get("dailytime"):
# Because MongoDB can store datetimes but not times,
# we add today's date to the dailytime.
# The date part will be discarded in check()
task["dailytime"] = datetime.datetime.combine(
datetime.datetime.utcnow(), task["dailytime"])
task["interval"] = 3600 * 24
                # Avoid queueing the task in check() if today's dailytime has already passed
if datetime.datetime.utcnow().time() > task["dailytime"].time():
task["datelastqueued"] = datetime.datetime.utcnow()
self.collection.find_one_and_update({"hash": task["hash"]}, {"$set": task}, upsert=True)
log.debug("Scheduler: added %s" % task["hash"])
def check(self):
self.all_tasks = list(self.collection.find())
if not self.config_synced:
self.sync_config_tasks()
self.all_tasks = list(self.collection.find())
self.config_synced = True
# log.debug(
# "Scheduler checking for out-of-date scheduled tasks (%s scheduled)..." %
# len(self.all_tasks)
# )
now = datetime.datetime.utcnow()
current_weekday = now.weekday()
current_monthday = now.day
for task in self.all_tasks:
interval = datetime.timedelta(seconds=task["interval"])
if task["datelastqueued"] >= now:
continue
if task.get("monthday", current_monthday) != current_monthday:
continue
if task.get("weekday", current_weekday) != current_weekday:
continue
if task.get("dailytime"):
if task["datelastqueued"].date() == now.date() or now.time() < task["dailytime"].time():
continue
            # If we only have the "interval" key
if all(k not in task for k in ["monthday", "weekday", "dailytime"]):
if now - task["datelastqueued"] < interval:
continue
queue_job(
task["path"],
task.get("params") or {},
queue=task.get("queue")
)
self.collection.update({"_id": task["_id"]}, {"$set": {
"datelastqueued": now
}})
log.debug("Scheduler: queued %s" % _hash_task(task))
        # Make sure we never execute the scheduler twice within the same second.
time.sleep(1)
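# Illustrative config sketch (added; not part of this module, and the task paths/params are hypothetical).
# check_config_integrity() accepts entries with either an 'interval' on its own, or a 'dailytime'
# (which here looks like a datetime.time), optionally combined with 'weekday' or 'monthday':
#   config_tasks = [
#       {"path": "tasks.io.CleanTempFiles", "interval": 3600},
#       {"path": "tasks.reports.DailyReport", "dailytime": datetime.time(2, 0)},
#       {"path": "tasks.billing.Invoice", "monthday": 1, "dailytime": datetime.time(6, 30)},
#   ]
#   scheduler = Scheduler(mongo_collection, config_tasks)   # mongo_collection is hypothetical
#   scheduler.check_config_integrity()
#   scheduler.check()   # presumably called in a loop elsewhere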
|
mit
|
peterlauri/django
|
django/core/serializers/python.py
|
109
|
7851
|
"""
A Python "serializer". Doesn't do much serializing per se -- just converts to
and from basic Python data types (lists, dicts, strings, etc.). Useful as a basis for
other serializers.
"""
from __future__ import unicode_literals
from collections import OrderedDict
from django.apps import apps
from django.conf import settings
from django.core.serializers import base
from django.db import DEFAULT_DB_ALIAS, models
from django.utils import six
from django.utils.encoding import force_text, is_protected_type
class Serializer(base.Serializer):
"""
Serializes a QuerySet to basic Python objects.
"""
internal_use_only = True
def start_serialization(self):
self._current = None
self.objects = []
def end_serialization(self):
pass
def start_object(self, obj):
self._current = OrderedDict()
def end_object(self, obj):
self.objects.append(self.get_dump_object(obj))
self._current = None
def get_dump_object(self, obj):
data = OrderedDict([('model', force_text(obj._meta))])
if not self.use_natural_primary_keys or not hasattr(obj, 'natural_key'):
data["pk"] = force_text(obj._get_pk_val(), strings_only=True)
data['fields'] = self._current
return data
def handle_field(self, obj, field):
value = field.value_from_object(obj)
# Protected types (i.e., primitives like None, numbers, dates,
# and Decimals) are passed through as is. All other values are
# converted to string first.
if is_protected_type(value):
self._current[field.name] = value
else:
self._current[field.name] = field.value_to_string(obj)
def handle_fk_field(self, obj, field):
if self.use_natural_foreign_keys and hasattr(field.remote_field.model, 'natural_key'):
related = getattr(obj, field.name)
if related:
value = related.natural_key()
else:
value = None
else:
value = getattr(obj, field.get_attname())
if not is_protected_type(value):
value = field.value_to_string(obj)
self._current[field.name] = value
def handle_m2m_field(self, obj, field):
if field.remote_field.through._meta.auto_created:
if self.use_natural_foreign_keys and hasattr(field.remote_field.model, 'natural_key'):
def m2m_value(value):
return value.natural_key()
else:
def m2m_value(value):
return force_text(value._get_pk_val(), strings_only=True)
self._current[field.name] = [
m2m_value(related) for related in getattr(obj, field.name).iterator()
]
def getvalue(self):
return self.objects
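# Illustrative output shape (added for clarity; values are hypothetical): serializing a queryset through
# this serializer yields a list of dicts (OrderedDicts) such as
#   [{'model': 'auth.user', 'pk': 1, 'fields': {'username': 'alice', 'is_staff': False}}]
# which is the structure Deserializer() below expects as its object_list argument.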
def Deserializer(object_list, **options):
"""
Deserialize simple Python objects back into Django ORM instances.
It's expected that you pass the Python objects themselves (instead of a
stream or a string) to the constructor
"""
db = options.pop('using', DEFAULT_DB_ALIAS)
ignore = options.pop('ignorenonexistent', False)
field_names_cache = {} # Model: <list of field_names>
for d in object_list:
        # Look up the model and start building a dict of data for it.
try:
Model = _get_model(d["model"])
except base.DeserializationError:
if ignore:
continue
else:
raise
data = {}
if 'pk' in d:
try:
data[Model._meta.pk.attname] = Model._meta.pk.to_python(d.get('pk'))
except Exception as e:
raise base.DeserializationError.WithData(e, d['model'], d.get('pk'), None)
m2m_data = {}
if Model not in field_names_cache:
field_names_cache[Model] = {f.name for f in Model._meta.get_fields()}
field_names = field_names_cache[Model]
# Handle each field
for (field_name, field_value) in six.iteritems(d["fields"]):
if ignore and field_name not in field_names:
# skip fields no longer on model
continue
if isinstance(field_value, str):
field_value = force_text(
field_value, options.get("encoding", settings.DEFAULT_CHARSET), strings_only=True
)
field = Model._meta.get_field(field_name)
# Handle M2M relations
if field.remote_field and isinstance(field.remote_field, models.ManyToManyRel):
model = field.remote_field.model
if hasattr(model._default_manager, 'get_by_natural_key'):
def m2m_convert(value):
if hasattr(value, '__iter__') and not isinstance(value, six.text_type):
return model._default_manager.db_manager(db).get_by_natural_key(*value).pk
else:
return force_text(model._meta.pk.to_python(value), strings_only=True)
else:
def m2m_convert(v):
return force_text(model._meta.pk.to_python(v), strings_only=True)
try:
m2m_data[field.name] = []
for pk in field_value:
m2m_data[field.name].append(m2m_convert(pk))
except Exception as e:
raise base.DeserializationError.WithData(e, d['model'], d.get('pk'), pk)
# Handle FK fields
elif field.remote_field and isinstance(field.remote_field, models.ManyToOneRel):
model = field.remote_field.model
if field_value is not None:
try:
default_manager = model._default_manager
field_name = field.remote_field.field_name
if hasattr(default_manager, 'get_by_natural_key'):
if hasattr(field_value, '__iter__') and not isinstance(field_value, six.text_type):
obj = default_manager.db_manager(db).get_by_natural_key(*field_value)
value = getattr(obj, field.remote_field.field_name)
# If this is a natural foreign key to an object that
# has a FK/O2O as the foreign key, use the FK value
if model._meta.pk.remote_field:
value = value.pk
else:
value = model._meta.get_field(field_name).to_python(field_value)
data[field.attname] = value
else:
data[field.attname] = model._meta.get_field(field_name).to_python(field_value)
except Exception as e:
raise base.DeserializationError.WithData(e, d['model'], d.get('pk'), field_value)
else:
data[field.attname] = None
# Handle all other fields
else:
try:
data[field.name] = field.to_python(field_value)
except Exception as e:
raise base.DeserializationError.WithData(e, d['model'], d.get('pk'), field_value)
obj = base.build_instance(Model, data, db)
yield base.DeserializedObject(obj, m2m_data)
def _get_model(model_identifier):
"""
Helper to look up a model from an "app_label.model_name" string.
"""
try:
return apps.get_model(model_identifier)
except (LookupError, TypeError):
raise base.DeserializationError("Invalid model identifier: '%s'" % model_identifier)
|
bsd-3-clause
|
caterinaurban/Typpete
|
typpete/src/stubs/str_methods.py
|
1
|
3955
|
"""Stub file for methods invoked on strings
TODO make some arguments optional
"""
from typing import List
def capitalize(s: str) -> str:
"""Return a new string with the first letter capitalized"""
pass
def center(s: str, width: int, fillchar: str) -> str:
"""Returns a space-padded string with the original string centered to a total of width columns."""
pass
def count(s: str, str: str) -> int:
"""Counts how many times str occurs in string"""
pass
def format(self: str, arg1: object = '', arg2: object = '', arg3: object = '') -> str:
"""
Return a formatted version of S, using substitutions from args and kwargs.
The substitutions are identified by braces ('{' and '}').
"""
pass
def format(s: str, arg1: object = '', arg2: object = '', arg3: object = '', arg4: object = '') -> str:
"""
Return a formatted version of S, using substitutions from args and kwargs.
The substitutions are identified by braces ('{' and '}').
"""
pass
def isalnum(s: str) -> bool:
"""Returns true if string has at least 1 character and all characters are alphanumeric and false otherwise."""
pass
def isalpha(s: str) -> bool:
"""Returns true if string has at least 1 character and all characters are alphabetic and false otherwise."""
pass
def isdecimal(s: str) -> bool:
"""Returns true if a unicode string contains only decimal characters and false otherwise."""
pass
def isdigit(s: str) -> bool:
"""Returns true if string contains only digits and false otherwise."""
pass
def islower(s: str) -> bool:
"""Returns true if string has at least 1 character and all cased characters are in lowercase and false otherwise."""
pass
def isnumeric(s: str) -> bool:
"""Returns true if a unicode string contains only numeric characters and false otherwise."""
pass
def isspace(s: str) -> bool:
"""Returns true if string contains only whitespace characters and false otherwise."""
pass
def istitle(s: str) -> bool:
"""Returns true if string is properly "titlecased" and false otherwise."""
pass
def isupper(s: str) -> bool:
"""Returns true if string has at least one character and all characters are in uppercase and false otherwise."""
pass
def join(s: str, seq: List[str]) -> str:
"""Concatenates the string representations of elements in sequence seq into a string, with separator string."""
pass
def lower(s: str) -> str:
"""Converts all uppercase letters in string to lowercase."""
pass
def ljust(s: str, w: int, fill: str = ' ') -> str:
"""Return the string `s` left justified in a string of length `w`.
Padding is done using the specified fillchar `fill` (default is a space)."""
pass
def lstrip(s: str) -> str:
"""Removes all leading whitespace in string."""
pass
def replace(s: str, old: str, new: str) -> str:
"""Replaces all occurrences of old in string with new"""
pass
def rjust(s: str, w: int, fill: str = ' ') -> str:
"""Return the string `s` right justified in a string of length `w`.
Padding is done using the specified fillchar `fill` (default is a space)."""
pass
def rstrip(s: str) -> str:
"""Removes all trailing whitespace of string."""
pass
def split(s: str, sep: str = '', maxsplit: int = -1) -> List[str]:
"""Splits string according to delimiter str and returns list of substrings"""
pass
def startswith(s: str, c: str) -> bool:
""""""
pass
def strip(s: str, dl: str = None) -> str:
"""Performs both lstrip() and rstrip() on string"""
pass
def swapcase(s: str) -> str:
"""Inverts case for all letters in string."""
pass
def title(s: str) -> str:
"""Returns "titlecased" version of string, that is, all words begin with uppercase and the rest are lowercase."""
pass
def upper(s: str) -> str:
"""Converts lowercase letters in string to uppercase."""
pass
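# Illustrative behaviour of the real str methods these stubs describe (examples added; not part of
# the original stub file). Using the stub calling convention where the string is the first argument:
#   capitalize('hello world')      -> 'Hello world'
#   center('hi', 8, '*')           -> '***hi***'
#   count('banana', 'an')          -> 2
#   join('-', ['a', 'b', 'c'])     -> 'a-b-c'
#   split('a,b,c', ',')            -> ['a', 'b', 'c']
#   strip('  padded  ')            -> 'padded'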
|
mpl-2.0
|
mainulhossain/phenoproc
|
app/biowl/libraries/shippi/img_pipe2.py
|
2
|
58923
|
#@author: Amit Kumar Mondal
#@address: SR LAB, Computer Science Department, USASK
#Email: [email protected]
import paramiko
import re
import sys
import os
import io
#import errno
import cv2
import numpy as np
#import functools
from skimage import *
from skimage import color
from skimage.feature import blob_doh
from io import BytesIO
import csv
from time import time
pipeline_obj = object()
sc= object()
spark =object()
npartitions = 8
from scipy.misc import imread, imsave
class ImgPipeline:
IMG_SERVER = ''
U_NAME = ''
PASSWORD = ''
LOADING_PATH = ''
SAVING_PATH = ''
CSV_FILE_PATH = '/home/amit/segment_data/imglist.csv'
def __init__(self, server, uname, password):
self.IMG_SERVER = server
self.U_NAME = uname
self.PASSWORD = password
def setLoadAndSavePath(self,loadpath, savepath):
self.LOADING_PATH = loadpath
self.SAVING_PATH = savepath
def setCSVAndSavePath(self,csvpath, savepath):
self.CSV_FILE_PATH = csvpath
self.SAVING_PATH = savepath
def collectDirs(self,apattern = '"*.jpg"'):
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(self.IMG_SERVER, username=self.U_NAME, password=self.PASSWORD)
ftp = ssh.open_sftp()
apath = self.LOADING_PATH
if(apattern=='*' ):
apattern = '"*"'
rawcommand = 'find {path} -name {pattern}'
command = rawcommand.format(path=apath, pattern=apattern)
stdin, stdout, stderr = ssh.exec_command(command)
filelist = stdout.read().splitlines()
print(len(filelist))
return filelist
def collectFiles(self, ext):
files = self.collectDirs(ext)
filenames = set()
for file in files:
if (len(file.split('.')) > 1):
filenames.add(file)
filenames = list(filenames)
return filenames
def collectImgFromCSV(self, column):
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(self.IMG_SERVER, username=self.U_NAME, password=self.PASSWORD)
csvfile = None
try:
ftp = ssh.open_sftp()
file = ftp.file(self.CSV_FILE_PATH, "r", -1)
buf = file.read()
csvfile = BytesIO(buf)
ftp.close()
except IOError as e:
print(e)
# ftp.close()
ssh.close()
contnts = csv.DictReader(csvfile)
filenames = {}
for row in contnts:
filenames[row[column]] = row[column]
# filenames.add(row[column])
return list(filenames)
def ImgandParamFromCSV(self, column1, column2):
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(self.IMG_SERVER, username=self.U_NAME, password=self.PASSWORD)
csvfile = None
try:
ftp = ssh.open_sftp()
file = ftp.file(self.CSV_FILE_PATH, "r", -1)
buf = file.read()
csvfile = BytesIO(buf)
ftp.close()
except IOError as e:
print(e)
# ftp.close()
ssh.close()
contnts = csv.DictReader(csvfile)
filenames = {}
for row in contnts:
filenames[row[column1]] = row[column2]
# filenames.add(row[column])
return filenames
def collectImgsAsGroup(self, file_abspaths):
rededge_channel_pattern = re.compile('(.+)_[0-9]+\.tif$')
# TODO merge this with the RededgeImage object created by Javier.
image_sets = {}
for path in file_abspaths:
match = rededge_channel_pattern.search(path)
if match:
common_path = match.group(1)
if common_path not in image_sets:
image_sets[common_path] = []
image_sets[common_path].append(path)
grouping_as_dic = dict()
for grp in image_sets:
grouping_as_dic.update({grp: image_sets[grp]})
return grouping_as_dic.items()
def collectImagesSet(self,ext):
#Collect sub-directories and files of a given directory
filelist = self.collectDirs(ext)
print(len(filelist))
dirs = set()
dirs_list = []
#Create a dictionary that contains: sub-directory --> [list of images of that directory]
dirs_dict = dict()
for afile in filelist:
(head, filename) = os.path.split(afile)
if (head in dirs):
if (head != afile):
dirs_list = dirs_dict[head]
dirs_list.append(afile)
dirs_dict.update({head: dirs_list})
else:
dirs_list = []
if (len(filename.split('.')) > 1 and head != afile):
dirs_list.append(afile)
dirs.add(head)
dirs_dict.update({head: dirs_list})
return dirs_dict.items()
def loadIntoCluster(self, path, offset=None, size=-1):
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(self.IMG_SERVER, username=self.U_NAME, password=self.PASSWORD)
imagbuf = ''
try:
ftp = ssh.open_sftp()
file = ftp.file(path, 'r', (-1))
buf = file.read()
imagbuf = imread(BytesIO(buf))
ftp.close()
except IOError as e:
print(e)
# ftp.close()
ssh.close()
return (path, imagbuf)
def loadBundleIntoCluster(self, path, offset=None, size=(-1)):
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(self.IMG_SERVER, username=self.U_NAME, password=self.PASSWORD)
images = []
sortedpath= path[1]
sortedpath.sort()
print(sortedpath)
try:
for img_name in sortedpath:
ftp = ssh.open_sftp()
file = ftp.file(img_name, 'r', (-1))
buf = file.read()
imagbuf = imread(BytesIO(buf))
images.append(imagbuf)
ftp.close()
except IOError as e:
print(e)
ssh.close()
return (path[0], images)
def loadBundleIntoCluster_Skip_conversion(self, path, offset=None, size=(-1)):
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(self.IMG_SERVER, username=self.U_NAME, password=self.PASSWORD)
images = []
sortedpath= path[1]
sortedpath.sort()
print(sortedpath)
try:
for img_name in sortedpath:
ftp = ssh.open_sftp()
file = ftp.file(img_name, 'r', (-1))
buf = file.read()
imagbuf = imread(BytesIO(buf))
images.append(imagbuf)
ftp.close()
except IOError as e:
print(e)
ssh.close()
return (path[0], images,images)
def convert(self, img_object, params):
# convert
gray = cv2.cvtColor(img_object,cv2.COLOR_BGR2GRAY)
return gray
def estimate(self,img_object, params):
knl_size, itns = params
ret, thresh = cv2.threshold(img_object, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# noise removal
kernel = np.ones((knl_size, knl_size), np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=itns)
return opening
def model(self, opening, params):
knl_size, itns, dstnc, fg_ratio = params
print(params)
# sure background area
kernel = np.ones((knl_size, knl_size), np.uint8)
sure_bg = cv2.dilate(opening, kernel, iterations=itns)
# Finding sure foreground area
dist_transform = cv2.distanceTransform(opening, cv2.DIST_L2, dstnc)
ret, sure_fg = cv2.threshold(dist_transform, fg_ratio * dist_transform.max(), 255, 0)
# Finding unknown region
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)
# Marker labelling
ret, markers = cv2.connectedComponents(sure_fg)
print("Number of objects")
print(ret)
# Add one to all labels so that sure background is not 0, but 1
markers = markers + 1
# Analysis
# Now, mark the region of unknown with zero
markers[unknown == 255] = 0
return markers
def analysis(self, img_object, markers, params):
markers = cv2.watershed(img_object, markers)
img_object[markers == -1] = [255, 0, 0]
return (img_object, markers)
def commonTransform(self, datapack, params):
fname, imgaes = datapack
procsd_obj=''
try:
procsd_obj = self.convert(imgaes, params)
except Exception as e:
print(e)
return (fname, imgaes, procsd_obj)
def commonEstimate(self, datapack, params):
fname, img, procsd_obj = datapack
processed_obj = self.estimate(procsd_obj, params)
return (fname, img, processed_obj)
def commonModel(self,datapack, params):
fname,img, processed_obj = datapack
model =self.model(processed_obj,params)
return (fname, img, model)
def commonAnalysisTransform(self, datapack, params):
fname, img, model = datapack
processedimg, stats = self.analysis(img, model, params)
return (fname, processedimg, stats)
def extarct_feature_locally(self, feature_name, img):
if feature_name in ["surf", "SURF"]:
extractor = cv2.xfeatures2d.SURF_create()
elif feature_name in ["sift", "SIFT"]:
extractor = cv2.xfeatures2d.SIFT_create()
elif feature_name in ["orb", "ORB"]:
extractor = cv2.ORB_create()
kp, descriptors = extractor.detectAndCompute(img_as_ubyte(img), None)
return descriptors
def estimate_feature(self, img, params):
feature_name = params
if feature_name in ["surf", "SURF"]:
extractor = cv2.xfeatures2d.SURF_create()
elif feature_name in ["sift", "SIFT"]:
extractor = cv2.xfeatures2d.SIFT_create()
elif feature_name in ["orb", "ORB"]:
extractor = cv2.ORB_create()
return extractor.detectAndCompute(img_as_ubyte(img), None)
def saveResult(self, result):
transport = paramiko.Transport((self.IMG_SERVER, 22))
transport.connect(username=self.U_NAME, password=self.PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)
output = io.StringIO()
try:
sftp.stat(self.SAVING_PATH)
except IOError as e:
sftp.mkdir(self.SAVING_PATH)
for lstr in result:
output.write(str(lstr[0] + "\n", "utf-8"))
f = sftp.open(self.SAVING_PATH + str("result") + ".txt", 'wb')
f.write(output.getvalue())
sftp.close()
def saveClusterResult(self,result):
transport = paramiko.Transport((self.IMG_SERVER, 22))
transport.connect(username=self.U_NAME, password=self.PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)
# sftp.mkdir("/home/amit/A1/regResult/")
dirs = set()
dirs_list = []
dirs_dict = dict()
clusters = result
try:
sftp.stat(self.SAVING_PATH)
except IOError as e:
sftp.mkdir(self.SAVING_PATH)
for lstr in clusters:
group = lstr[0][1]
img_name = lstr[0][0]
if (group in dirs):
exists_list = dirs_dict[group]
exists_list.append(img_name)
dirs_dict.update({group: exists_list})
else:
dirs_list = []
dirs_list.append(img_name)
dirs.add(group)
dirs_dict.update({group: dirs_list})
for itms in dirs_dict.items():
output = io.StringIO()
for itm in itms[1]:
output.write(str(itm + "\n", "utf-8"))
f = sftp.open(self.SAVING_PATH + str(itms[0]) + ".txt", 'wb')
f.write(output.getvalue())
sftp.close()
def common_write(self, result_path, sftp, fname, img, stat):
try:
sftp.stat(result_path)
except IOError as e:
sftp.mkdir(result_path)
buffer = BytesIO()
imsave(buffer, img, format='PNG')
buffer.seek(0)
dirs = fname.split('/')
print(fname)
img_name = dirs[len(dirs) - 1]
only_name = img_name.split('.')
f = sftp.open(result_path + "/IMG_" + only_name[len(only_name)-2]+".png", 'wb')
f.write(buffer.read())
sftp.close()
def commonSave(self, datapack):
fname, procsd_img, stats = datapack
transport = paramiko.Transport((self.IMG_SERVER, 22))
transport.connect(username=self.U_NAME, password=self.PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)
self.common_write(self.SAVING_PATH, sftp, fname, procsd_img, stats)
def save_img_bundle(self, data_pack):
(fname, procsdimg, stats) = data_pack
transport = paramiko.Transport((self.IMG_SERVER, 22))
transport.connect(username=self.U_NAME, password=self.PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)
last_part = fname.split('/')[(len(fname.split('/')) - 1)]
RESULT_PATH = (self.SAVING_PATH +"/"+ last_part)
print("Resutl writing into :" + RESULT_PATH)
try:
sftp.stat(RESULT_PATH)
except IOError as e:
sftp.mkdir(RESULT_PATH)
for (i, wrapped) in enumerate(procsdimg):
buffer = BytesIO()
imsave(buffer, wrapped, format='png')
buffer.seek(0)
f = sftp.open(((((RESULT_PATH + '/regi_') + last_part) + str(i)) + '.png'), 'wb')
f.write(buffer.read())
sftp.close()
class ImageRegistration(ImgPipeline):
def convert(self, img, params):
imgaes = color.rgb2gray(img)
return imgaes
def convert_bundle(self, images, params):
grey_imgs = []
for img in images:
try:
grey_imgs.append(self.convert(img, params))
except Exception as e:
print(e)
return grey_imgs
def commonTransform(self, datapack, params):
fname, imgaes = datapack
procsd_obj=[]
try:
procsd_obj = self.convert_bundle(imgaes, params)
except Exception as e:
print(e)
return (fname, imgaes, procsd_obj)
def bundle_estimate(self, img_obj, params):
extractor = cv2.xfeatures2d.SIFT_create(nfeatures=100000)
return extractor.detectAndCompute(img_as_ubyte(img_obj), None)
def commonEstimate(self, datapack, params):
fname, imgs, procsd_obj = datapack
img_key_points = []
img_descriptors = []
print("estimatinng for:" + fname + " " + str(len(imgs)))
for img in procsd_obj:
try:
(key_points, descriptors) = self.bundle_estimate(img,params)
key_points = np.float32([key_point.pt for key_point in key_points])
except Exception as e:
descriptors = None
key_points = None
img_key_points.append(key_points)
img_descriptors.append(descriptors)
procssd_entity = []
print(str(len(img_descriptors)))
procssd_entity.append(img_key_points)
procssd_entity.append(img_descriptors)
return (fname, imgs, procssd_entity)
def match_and_tranform(self, keypoints_to_be_reg, features_to_be_reg, ref_keypoints, ref_features, no_of_match,ratio, reproj_thresh):
#def match_and_tranform(self, features_to_be_reg, keypoints_to_be_reg, ref_features, ref_keypoints, no_of_match, ratio, reproj_thresh):
matcher = cv2.DescriptorMatcher_create('BruteForce')
raw_matches = matcher.knnMatch(features_to_be_reg, ref_features, 2)
matches = [(m[0].trainIdx, m[0].queryIdx) for m in raw_matches if ((len(m) == 2) and (m[0].distance < (m[1].distance * ratio)))]
back_proj_error = 0
inlier_count = 0
H =0
if (len(matches) > no_of_match):
src_pts = np.float32([keypoints_to_be_reg[i] for (_, i) in matches])
dst_pts = np.float32([ref_keypoints[i] for (i, _) in matches])
(H, status) = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, reproj_thresh)
src_t = np.transpose(src_pts)
dst_t = np.transpose(dst_pts)
for i in range(0, src_t.shape[1]):
x_i = src_t[0][i]
y_i = src_t[1][i]
x_p = dst_t[0][i]
y_p = dst_t[1][i]
num1 = (((H[0][0] * x_i) + (H[0][1] * y_i)) + H[0][2])
num2 = (((H[1][0] * x_i) + (H[1][1] * y_i)) + H[1][2])
dnm = (((H[2][0] * x_i) + (H[2][1] * y_i)) + H[2][2])
tmp = (((x_p - (num1 / (dnm ** 2))) + y_p) - (num2 / (dnm ** 2)))
if (status[i] == 1):
back_proj_error += tmp
inlier_count += 1
return (back_proj_error, inlier_count, H)
def wrap_and_sample(self, transformed_img, ref_img):
wrapped = cv2.warpPerspective(ref_img, transformed_img, (ref_img.shape[1], ref_img.shape[0]))
return wrapped
def commonModel(self,datapack, params):
fname,imgs, procssd_entity = datapack
right_key_points = procssd_entity[0]
img_features = procssd_entity[1]
no_of_match, ratio, reproj_thresh, base_img_idx = params
indx = 0
Hs = []
back_proj_errors = []
inlier_counts = []
#imgs = []
print("Modeling for:"+str(len(imgs)))
for imgind, right_features in enumerate(img_features):
if (right_features != None):
#imgs.append(img[imgind])
(back_proj_error, inlier_count, H) = self.match_and_tranform(right_key_points[imgind], right_features, right_key_points[base_img_idx], img_features[base_img_idx], no_of_match, ratio, reproj_thresh)
if ((H != None)):
Hs.append(H)
back_proj_errors.append(back_proj_error)
inlier_counts.append(inlier_count)
else:
print("Algorithm is not working properly")
print("it is working:" + str(imgind))
indx = (indx + 1)
Hs.insert(base_img_idx, np.identity(3))
model = []
model.append(Hs)
model.append(back_proj_errors)
model.append(inlier_counts)
return (fname, imgs, model)
def commonAnalysisTransform(self, datapack, params):
(fname, imgs, model) = datapack
H = model[0]
wrappeds = []
if(len(H) <1):
print("H is empty algorithm is not working properly")
for i, img in enumerate(imgs):
wrapped = self.wrap_and_sample(H[i], img)
wrappeds.append(wrapped)
stats = []
stats.append(H)
stats.append(model[1])
stats.append(model[2])
return (fname, wrappeds, stats)
def write_register_images(self, data_pack):
(fname, procsdimg, stats) = data_pack
transport = paramiko.Transport((self.IMG_SERVER, 22))
transport.connect(username=self.U_NAME, password=self.PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)
RESULT_PATH = (self.SAVING_PATH +"/"+ fname.split('/')[(len(fname.split('/')) - 1)])
print("Resutl writing into :" + RESULT_PATH)
try:
sftp.stat(RESULT_PATH)
except IOError as e:
sftp.mkdir(RESULT_PATH)
for (i, wrapped) in enumerate(procsdimg):
buffer = BytesIO()
imsave(buffer, wrapped, format='PNG')
buffer.seek(0)
f = sftp.open((((((RESULT_PATH + '/IMG_') + '0') + '_') + str(i)) + '.png'), 'wb')
f.write(buffer.read())
sftp.close()
class ImageStitching(ImgPipeline):
def convert(self, img, params):
if(len(img.shape) == 3):
imgaes = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
else:
imgaes = img
return imgaes
def convert_bundle(self, images, params):
grey_imgs = []
for img in images:
try:
grey_imgs.append(self.convert(img, params))
except Exception as e:
print(e)
return grey_imgs
def commonTransform(self, datapack, params):
fname, images = datapack
print("size: %", params)
resize_imgs = []
for img in images:
resize_imgs.append(cv2.resize(img,params))
procsd_obj=[]
try:
procsd_obj = self.convert_bundle(resize_imgs, params)
except Exception as e:
print(e)
return (fname, resize_imgs, procsd_obj)
def bundle_estimate(self, img_obj, params):
extractor = cv2.xfeatures2d.SURF_create()
return extractor.detectAndCompute(img_as_ubyte(img_obj), None)
def commonEstimate(self, datapack, params):
fname, imgs, procsd_obj = datapack
img_key_points = []
img_descriptors = []
print("estimatinng for:" + fname + " " + str(len(imgs)))
for img in procsd_obj:
try:
(key_points, descriptors) = self.bundle_estimate(img,params)
except Exception as e:
descriptors = None
key_points = None
img_key_points.append(key_points)
img_descriptors.append(descriptors)
procssd_entity = []
print(str(len(img_descriptors)))
procssd_entity.append(img_key_points)
procssd_entity.append(img_descriptors)
return (fname, imgs, procssd_entity)
def match(self, img_to_merge, feature, kp):
index_params = dict(algorithm=0, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
kp_to_merge, feature_to_merge = self.getFeatureOfMergedImg(img_to_merge)
matches = flann.knnMatch(feature,feature_to_merge,k=2)
good = []
for i, (m, n) in enumerate(matches):
if m.distance < 0.7 * n.distance:
good.append((m.trainIdx, m.queryIdx))
if len(good) > 4:
pointsCurrent = kp
pointsPrevious = kp_to_merge
matchedPointsCurrent = np.float32([pointsCurrent[i].pt for (__, i) in good])
matchedPointsPrev = np.float32([pointsPrevious[i].pt for (i, __) in good])
H, s = cv2.findHomography(matchedPointsCurrent, matchedPointsPrev, cv2.RANSAC, 4)
return H
def match2(self, img_to_merge, b):
index_params = dict(algorithm=0, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
kp_to_merge, feature_to_merge = self.getFeatureOfMergedImg(img_to_merge)
bkp, bf = self.getFeatureOfMergedImg(b)
matches = flann.knnMatch(bf, feature_to_merge, k=2)
good = []
for i, (m, n) in enumerate(matches):
if m.distance < 0.7 * n.distance:
good.append((m.trainIdx, m.queryIdx))
if len(good) > 4:
pointsCurrent = bkp
pointsPrevious = kp_to_merge
matchedPointsCurrent = np.float32([pointsCurrent[i].pt for (__, i) in good])
matchedPointsPrev = np.float32([pointsPrevious[i].pt for (i, __) in good])
H, s = cv2.findHomography(matchedPointsCurrent, matchedPointsPrev, cv2.RANSAC, 4)
return H
def getFeatureOfMergedImg(self, img):
grey = self.convert(img,(0))
return self.bundle_estimate(grey, (0))
def commonModel(self,datapack, params):
fname,imgs, procssd_entity = datapack
right_key_points = procssd_entity[0]
img_features = procssd_entity[1]
mid_indx = int(len(imgs)/2)
start_indx = 1
frst_img = imgs[0]
print("shape of first img: % ", frst_img.shape)
print("first phase {} ".format(mid_indx))
for idx in range(mid_indx):
H = self.match(frst_img, img_features[start_indx], right_key_points[start_indx])
#H = self.match2(frst_img, imgs[start_indx])
xh = np.linalg.inv(H)
if(len(frst_img.shape) == 3):
ds = np.dot(xh, np.array([frst_img.shape[1], frst_img.shape[0], 1]))
ds = ds / ds[-1]
print("final ds=>", ds)
f1 = np.dot(xh, np.array([0, 0, 1]))
f1 = f1 / f1[-1]
xh[0][-1] += abs(f1[0])
xh[1][-1] += abs(f1[1])
ds = np.dot(xh, np.array([frst_img.shape[1], frst_img.shape[0], 1]))
else:
ds = np.dot(xh, np.array([frst_img.shape[1], frst_img.shape[0],1]))
print(ds[-1])
ds = ds / ds[-1]
print("final ds=>", ds)
f1 = np.dot(xh, np.array([0, 0,1]))
f1 = f1 / f1[-1]
xh[0][-1] += abs(f1[0])
xh[1][-1] += abs(f1[1])
ds = np.dot(xh, np.array([frst_img.shape[1], frst_img.shape[0],1]))
offsety = abs(int(f1[1]))
offsetx = abs(int(f1[0]))
dsize = (int(ds[0]) + offsetx, int(ds[1]) + offsety)
print("image dsize =>", dsize) #(697, 373)
tmp = cv2.warpPerspective(frst_img, xh, (frst_img.shape[1] * 2, frst_img.shape[0] * 2))
# cv2.imshow("warped", tmp)
# cv2.waitKey()
print("shape of img: %", imgs[start_indx].shape)
print("shape of new {}".format(tmp.shape))
tmp[offsety:imgs[start_indx].shape[0] + offsety, offsetx:imgs[start_indx].shape[1] + offsetx] = imgs[start_indx]
frst_img = tmp
start_indx = start_indx + 1
model = []
model.append(frst_img)
model.append(procssd_entity)
return (fname, imgs, model)
    # Sample debug output from a run of the homography computation above:
    #   Homography:
    #     [[ 8.86033773e-01  6.59154846e-02  1.73593010e+02]
    #      [-8.13825392e-02  9.77171622e-01 -1.25890876e+01]
    #      [-2.61821451e-04  4.91986599e-05  1.00000000e+00]]
    #   Inverse Homography:
    #     [[ 1.06785933e+00 -6.26599825e-02 -1.86161748e+02]
    #      [ 9.24787290e-02  1.01728698e+00 -3.24694609e+00]
    #      [ 2.75038650e-04 -6.64548835e-05  9.51418606e-01]]
    #   final ds => [288.42753648 345.21227814 1.]
    #   image dsize => (697, 373)
    #   shape of new (373, 697, 3)
def commonAnalysisTransform(self, datapack, params):
fname, imgs, model = datapack
procssd_entity = model[1]
right_key_points = procssd_entity[0]
img_features = procssd_entity[1]
mid_indx = int(len(imgs) / 2)
length = len(imgs)
start_indx = mid_indx
frst_img = model[0]
print("second phase: %", start_indx)
for idx in range(length-mid_indx):
H = self.match(frst_img, img_features[start_indx], right_key_points[start_indx])
txyz = np.dot(H, np.array([imgs[start_indx].shape[1], imgs[start_indx].shape[0], 1]))
txyz = txyz / txyz[-1]
dsize = (int(txyz[0]) + frst_img.shape[1], int(txyz[1]) + frst_img.shape[0])
tmp = cv2.warpPerspective(imgs[start_indx], H, dsize)
# tmp[:self.leftImage.shape[0], :self.leftImage.shape[1]]=self.leftImage
tmp = self.mix_and_match(frst_img, tmp)
frst_img = tmp
start_indx = start_indx + 1
return (fname, frst_img, '')
def mix_and_match(self, leftImage, warpedImage):
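# Pixel-wise blend of the reference (left) image into the warped canvas: pixels that
# are black in both images stay black, pixels black only in the warped image are filled
# from the left image, and wherever both images have content the left image's pixel wins.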
i1y, i1x = leftImage.shape[:2]
i2y, i2x = warpedImage.shape[:2]
print(leftImage[-1, -1])
black_l = np.where(leftImage == np.array([0, 0, 0]))
black_wi = np.where(warpedImage == np.array([0, 0, 0]))
for i in range(0, i1x):
for j in range(0, i1y):
try:
if (np.array_equal(leftImage[j, i], np.array([0, 0, 0])) and np.array_equal(warpedImage[j, i],
np.array([0, 0, 0]))):
# print "BLACK"
# instead of just putting it with black,
# take average of all nearby values and avg it.
warpedImage[j, i] = [0, 0, 0]
else:
if (np.array_equal(warpedImage[j, i], [0, 0, 0])):
# print "PIXEL"
warpedImage[j, i] = leftImage[j, i]
else:
if not np.array_equal(leftImage[j, i], [0, 0, 0]):
bw, gw, rw = warpedImage[j, i]
bl, gl, rl = leftImage[j, i]
# b = (bl+bw)/2
# g = (gl+gw)/2
# r = (rl+rw)/2
warpedImage[j, i] = [bl, gl, rl]
except:
pass
# cv2.imshow("waRPED mix", warpedImage)
# cv2.waitKey()
return warpedImage
def write_stitch_images(self, data_pack):
(fname, procsdimg, stats) = data_pack
transport = paramiko.Transport((self.IMG_SERVER, 22))
transport.connect(username=self.U_NAME, password=self.PASSWORD)
sftp = paramiko.SFTPClient.from_transport(transport)
RESULT_PATH = (self.SAVING_PATH +"/"+ fname.split('/')[(len(fname.split('/')) - 1)])
print("Resutl writing into :" + RESULT_PATH)
try:
sftp.stat(RESULT_PATH)
except IOError as e:
sftp.mkdir(RESULT_PATH)
buffer = BytesIO()
imsave(buffer, procsdimg, format='PNG')
buffer.seek(0)
f = sftp.open(RESULT_PATH + '/IMG_0_stitched.png', 'wb')
f.write(buffer.read())
sftp.close()
class FlowerCounter(ImgPipeline):
common_size =(534, 800)
region_matrix = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
template_img = []
avrg_histo_b = []
def setTemplateandSize(self, img, size):
self.common_size = size
self.template_img=img
def setRegionMatrix(self, mat):
self.region_matrix = mat
def setAvgHist(self, hist):
self.avrg_histo_b = hist
def convert(self, img_obj, params):
img_asarray = np.array(img_obj)
return img_asarray
def getFlowerArea(self, img, hist_b_shifts, segm_B_lower, segm_out_value=0.99, segm_dist_from_zerocross=5):
"""
Take the given image and highlight flowers by applying a logistic function
to the B channel. The formula applied is f(x) = 1/(1 + exp(K * (x - T))), where K and T are constants
calculated from the given parameters.
:param img: Image array
:param hist_b_shifts: Shift of the B-channel histogram for this image
:param segm_B_lower: Lower B segmentation value
:param segm_out_value: Value of the logistic function output when the input is the lower B segmentation value, i.e. f(S), where S = segm_B_lower + hist_b_shifts
:param segm_dist_from_zerocross: Value that, when subtracted from the lower B segmentation value, yields an output of 0.5, i.e. the value P where f(segm_B_lower + hist_b_shifts - P) = 0.5
:return: Grayscale image highlighting flower pixels (pixel values between 0 and 1)
"""
# Convert to LAB
img_lab = cv2.cvtColor(img, cv2.COLOR_BGR2Lab)
# Get the B channel and convert to float
img_B = np.array(img_lab[:, :, 2], dtype=np.float32)
# Get the parameter T for the formula
print(segm_dist_from_zerocross)
t_exp = float(segm_B_lower) + float(hist_b_shifts) - float(segm_dist_from_zerocross)
# Get the parameter K for the formula
k_exp = np.log(1 / segm_out_value - 1) / segm_dist_from_zerocross
# Apply logistic transformation
img_B = 1 / (1 + np.exp(k_exp * (img_B - t_exp)))
return img_B
def estimate(self, img_object, params):
plot_mask = params
array_image = np.asarray(img_object)
im_bgr = np.array(array_image)
# Shift to grayscale
im_gray = cv2.cvtColor(im_bgr, cv2.COLOR_BGR2GRAY)
# Shift to LAB
im_lab_plot = cv2.cvtColor(im_bgr, cv2.COLOR_BGR2Lab)
# Keep only plot pixels
im_gray = im_gray[plot_mask > 0]
im_lab_plot = im_lab_plot[plot_mask > 0]
# Get histogram of grayscale image
hist_G, _ = np.histogram(im_gray, 256, [0, 256])
# Get histogram of B component
hist_b, _ = np.histogram(im_lab_plot[:, 2], 256, [0, 256])
histograms = []
histograms.append(hist_b)
histograms.append(hist_G)
return histograms
def model(self,processed_obj, params):
hist_b = processed_obj[0]
avg_hist_b = params
# Calculate correlation
correlation_b = np.correlate(hist_b, avg_hist_b, "full")
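# np.correlate in "full" mode returns one value per relative displacement (lag) of the
# two 256-bin histograms; the position of the maximum is taken below as the B-channel
# shift of this image relative to the average histogram.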
# Get the shift on the X axis
x_shift_b = correlation_b.argmax().astype(np.int8)
return x_shift_b
def analysis(self, img, model, params):
flower_area_mask, segm_B_lower = params
hist_b_shift = model
# Get flower mask for this image
# pil_image = PIL.Image.open(value).convert('RGB')
open_cv_image = np.array(img)
# print open_cv_image
# img = open_cv_image[::-1].copy()
img = open_cv_image[:, :, ::-1].copy()
# Highlight flowers
img_flowers = self.getFlowerArea(img, hist_b_shift, segm_B_lower, segm_dist_from_zerocross=8)
# Apply flower area mask
# print(img_flowers)
# print(flower_area_mask)
img_flowers[flower_area_mask == 0] = 0
# Get number of flowers using blob counter on the B channel
blobs = blob_doh(img_flowers, max_sigma=5, min_sigma=1)
for bld in blobs:
x, y, r = bld
cv2.circle(img, (int(x), int(y)), int(r + 1), (0, 0, 0), 1)
return (img, blobs)
def crossProduct(self, p1, p2, p3):
"""
Cross product implementation: (P2 - P1) X (P3 - P2)
:param p1: Point #1
:param p2: Point #2
:param p3: Point #3
:return: Cross product
"""
v1 = [p2[0] - p1[0], p2[1] - p1[1]]
v2 = [p3[0] - p2[0], p3[1] - p2[1]]
return v1[0] * v2[1] - v1[1] * v2[0]
def userDefinePlot(self, img, bounds=None):
"""
:param img: The image array that contains the crop
:param bounds: Optional list of four bounds provided in advance, bypassing the GUI
:return: The four bound points and the mask to apply to the image
"""
# Initial assert
if not isinstance(img, np.ndarray):
print("Image is not a numpy array")
return
# Get image shape
shape = img.shape[::-1]
# Eliminate 3rd dimension if image is colored
if len(shape) == 3:
shape = shape[1:]
# Function definitions
def getMask(boundM):
"""
Get mask from bounds
:return: Mask in a numpy array
"""
# Initialize mask
# shapeM = img.shape[1::-1]
mask = np.zeros(shape[::-1])
# Get boundaries of the square containing our ROI
minX = max([min([x[0] for x in boundM]), 0])
minY = max([min([y[1] for y in boundM]), 0])
maxX = min(max([x[0] for x in boundM]), shape[0])
maxY = min(max([y[1] for y in boundM]), shape[1])
# Reshape bounds
# boundM = [(minX, minY), (maxX, minY), (minX, maxY), (maxX, maxY)]
# Iterate through the containing-square and eliminate points
# that are out of the ROI
for x in range(minX, maxX):
for y in range(minY, maxY):
h1 = self.crossProduct(boundM[2], boundM[0], (x, y))
h2 = self.crossProduct(boundM[3], boundM[1], (x, y))
v1 = self.crossProduct(boundM[0], boundM[1], (x, y))
v2 = self.crossProduct(boundM[2], boundM[3], (x, y))
if h1 > 0 and h2 < 0 and v1 > 0 and v2 < 0:
mask[y, x] = 255
return mask
# Check if bounds have been provided
if isinstance(bounds, list):
if len(bounds) != 4:
print("Bounds length must be 4. Setting up GUI...")
else:
mask = getMask(bounds)
return bounds, mask
# Get image shape
# shape = img.shape[1::-1]
# Initialize boundaries
height,width = self.common_size
bounds = [(0, 0), ((height-1), 0), (0, (width-1)), ((height-1), (width-1))]
# if plot == False:
# #for flower area
# bounds = [(308, 247), (923, 247), (308, 612), (923, 612)]
# Get binary mask for the user-selected ROI
mask = getMask(bounds)
return bounds, mask
# filenames = list_files("/data/mounted_hdfs_path/user/hduser/plot_images/2016-07-05_1207")
def setPlotMask(self, bounds, imsize, mask=None):
"""
Set mask of the plot under analysis
:param mask: Mask of the plot
:param bounds: Bounds of the plot
"""
plot_bounds = None
plot_mask = None
# Initial assert
if mask is not None:
print(mask.shape)
print(imsize)
assert isinstance(mask, np.ndarray), "Parameter 'corners' must be Numpy array"
assert mask.shape == imsize, "Mask has a different size"
assert isinstance(bounds, list) and len(bounds) == 4, "Bounds must be a 4-element list"
# Store bounds
plot_bounds = bounds
# Store mask
if mask is None:
_, plot_mask = self.userDefinePlot(np.zeros(imsize), bounds)
else:
plot_mask = mask
return plot_bounds, plot_mask
def setFlowerAreaMask(self, region_matrix, mask, imsize):
"""
Set mask of the flower area within the plot
:param region_matrix: Region matrix representing the flower area
:param mask: Mask of the flower area
"""
# Initial assert
if mask is not None:
assert isinstance(mask, np.ndarray), "Parameter 'mask' must be Numpy array"
assert mask.shape == imsize, "Mask has a different size"
# assert isinstance(bounds, list) and len(bounds) == 4, "Bounds must be a 4-element list"
# Store bounds
flower_region_matrix = region_matrix
# Store mask
flower_area_mask = mask
return flower_area_mask
def calculatePlotMask(self, images_bytes, imsize):
"""
Compute plot mask
"""
# Trace
print("Computing plot mask...")
# Read an image
open_cv_image = np.array(images_bytes)
# print open_cv_image
# print (open_cv_image.shape)
# Convert RGB to BGR
# open_cv_image = open_cv_image[:, :, ::-1].copy()
p_bounds, p_mask = self.userDefinePlot(open_cv_image, None)
# Store mask and bounds
return self.setPlotMask(p_bounds, imsize, p_mask)
def calculateFlowerAreaMask(self, region_matrix, plot_bounds, imsize):
"""
Compute the flower area mask based on a matrix that indicates which regions of the plot are part of the
flower counting.
:param region_matrix: Matrix reflecting which zones are within the flower area mask (e.g. in order to
sample the center region, the matrix should be [[0,0,0],[0,1,0],[0,0,0]])
"""
# Trace
print("Computing flower area mask...")
# Check for plot bounds
assert len(plot_bounds) > 0, "Plot bounds not set. Please set plot bounds before setting flower area mask"
# Convert to NumPy array if needed
if not isinstance(region_matrix, np.ndarray):
region_matrix = np.array(region_matrix)
# Assert
assert region_matrix.ndim == 2, 'region_matrix must be a 2D matrix'
# Get the number of rows and columns in the region matrix
rows, cols = region_matrix.shape
# Get transformation matrix
M = cv2.getPerspectiveTransform(np.float32([[0, 0], [cols, 0], [0, rows], [cols, rows]]),
np.float32(plot_bounds))
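# M maps coordinates of the (cols x rows) region-matrix grid onto the plot quadrilateral
# given by plot_bounds, so each grid cell can be projected into image space below.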
# Initialize flower area mask
fw_mask = np.zeros(imsize)
# Go over the flower area mask and turn to 1 the marked areas in the region_matrix
for x in range(cols):
for y in range(rows):
# Write a 1 if the corresponding element in the region matrix is 1
if region_matrix[y, x] == 1:
# Get boundaries of this zone as a float32 NumPy array
bounds = np.float32([[x, y], [x + 1, y], [x, y + 1], [x + 1, y + 1]])
bounds = np.array([bounds])
# Transform points
bounds_T = cv2.perspectiveTransform(bounds, M)[0].astype(np.int)
# Get mask for this area
_, mask = self.userDefinePlot(fw_mask, list(bounds_T))
# Apply mask
fw_mask[mask > 0] = 255
# Save flower area mask & bounds
return self.setFlowerAreaMask(region_matrix, fw_mask, imsize)
def computeAverageHistograms(self, hist_b_all):
"""
Compute average B histogram
"""
# Vertically stack all the B histograms
avg_hist_B = np.vstack(tuple([h for h in hist_b_all]))
# Sum all columns
avg_hist_B = np.sum(avg_hist_B, axis=0)
# Divide by the number of images and store
avg_hist_b = np.divide(avg_hist_B, len(hist_b_all))
return avg_hist_b
def common_write(self, result_path, sftp, fname, img, stat):
try:
sftp.stat(result_path)
except IOError as e:
sftp.mkdir(result_path)
buffer = BytesIO()
imsave(buffer, img, format='PNG')
buffer.seek(0)
dirs = fname.split('/')
print(fname)
img_name = dirs[len(dirs) - 1]
only_name = img_name.split('.')
f = sftp.open(result_path + "/IMG_" + only_name[len(only_name)-2]+".png", 'wb')
f.write(buffer.read())
sftp.close()
class PlotSegment(ImgPipeline):
def normalize_gaps(self, gaps, num_items):
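# Broadcast a single gap value to the (num_items - 1) gaps between items, or validate an
# explicit per-gap list. For example, normalize_gaps([50], 5) returns
# array([50., 50., 50., 50.]).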
gaps = list(gaps)
gaps_arr = np.array(gaps, dtype=np.float64)
if gaps_arr.shape == (1,):
gap_size = gaps_arr[0]
gaps_arr = np.empty(num_items - 1)
gaps_arr.fill(gap_size)
elif gaps_arr.shape != (num_items - 1,):
raise ValueError('gaps should have shape {}, but has shape {}.'
.format((num_items - 1,), gaps_arr.shape))
return gaps_arr
def get_repeated_seqs_2d_array(self, buffer_size, item_size, gaps, num_repeats_of_seq):
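# Builds the 1-D start coordinates of the items in one sequence: the first item starts at
# buffer_size and each following start adds item_size plus the corresponding gap; that row
# is then tiled num_repeats_of_seq times into a 2-D grid.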
start = buffer_size
steps = gaps + item_size
items = np.insert(np.cumsum(steps), 0, np.array(0)) + start
return np.tile(items, (num_repeats_of_seq, 1))
def set_plot_layout_relative_meters(self, buffer_blocwise_m, plot_width_m, gaps_blocs_m, num_plots_per_bloc, buffer_plotwise_m, plot_height_m, gaps_plots_m, num_blocs):
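# Returns one row per plot with eight values: the (x, y) coordinates of the top-left,
# top-right, bottom-right and bottom-left corners, in meters relative to the field origin,
# with each plot shrunk by the block-wise and plot-wise buffers.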
# this one already has the correct grid shape.
plot_top_left_corners_x = self.get_repeated_seqs_2d_array(buffer_blocwise_m, plot_width_m, gaps_blocs_m, num_plots_per_bloc)
# this one needs to be transposed to assume the correct grid shape.
plot_top_left_corners_y = self.get_repeated_seqs_2d_array(buffer_plotwise_m, plot_height_m, gaps_plots_m, num_blocs).T
num_plots = num_blocs * num_plots_per_bloc
plot_top_left_corners = np.stack((plot_top_left_corners_x, plot_top_left_corners_y)).T.reshape((num_plots, 2))
plot_height_m_buffered = plot_height_m - 2 * buffer_plotwise_m
plot_width_m_buffered = plot_width_m - 2 * buffer_blocwise_m
plot_top_right_corners = np.copy(plot_top_left_corners)
plot_top_right_corners[:, 0] = plot_top_right_corners[:, 0] + plot_width_m_buffered
plot_bottom_left_corners = np.copy(plot_top_left_corners)
plot_bottom_left_corners[:, 1] = plot_bottom_left_corners[:, 1] + plot_height_m_buffered
plot_bottom_right_corners = np.copy(plot_top_left_corners)
plot_bottom_right_corners[:, 0] = plot_bottom_right_corners[:, 0] + plot_width_m_buffered
plot_bottom_right_corners[:, 1] = plot_bottom_right_corners[:, 1] + plot_height_m_buffered
plots_all_box_coords = np.concatenate((plot_top_left_corners, plot_top_right_corners,
plot_bottom_right_corners, plot_bottom_left_corners), axis=1)
print(plots_all_box_coords)
plots_corners_relative_m = plots_all_box_coords
return plots_corners_relative_m
def plot_segmentation(self, num_blocs, num_plots_per_bloc, plot_width, plot_height):
# num_blocs = 5
# num_plots_per_bloc = 17
gaps_blocs = np.array([50])
gaps_plots = np.array([5])
buffer_blocwise = 1
buffer_plotwise = 1
# plot_width = 95
# plot_height = 30
num_blocs = int(num_blocs)
num_plots_per_bloc = int(num_plots_per_bloc)
buffer_blocwise_m = float(buffer_blocwise)
buffer_plotwise_m = float(buffer_plotwise)
plot_width_m = float(plot_width)
plot_height_m = float(plot_height)
if not all((num_blocs >= 1,
num_plots_per_bloc >= 1,
buffer_blocwise_m >= 0,
buffer_plotwise_m >= 0,
plot_width_m >= 0,
plot_height_m >= 0)):
raise ValueError("invalid field layout parameters.")
gaps_blocs_m = self.normalize_gaps(gaps_blocs, num_blocs)
print(gaps_blocs_m)
gaps_plots_m = self.normalize_gaps(gaps_plots, num_plots_per_bloc)
print(gaps_plots_m)
plots_corners_relative_m = None
return self.set_plot_layout_relative_meters(buffer_blocwise_m, plot_width_m, gaps_blocs_m, num_plots_per_bloc, buffer_plotwise_m, plot_height_m, gaps_plots_m, num_blocs)
def estimate(self,img_object, params):
num_blocs, num_plots_per_bloc, p_width, p_height = params
coord = self.plot_segmentation(num_blocs, num_plots_per_bloc, p_width, p_height)
return coord
def analysis(self, img, coord, params):
xOffset, yOffset = params
for i in range(coord.shape[0]):
cv2.line(img, (int(coord[i, 0] + xOffset), int(coord[i, 1] + yOffset)),
(int(coord[i, 2] + xOffset), int(coord[i, 3] + yOffset)), (255, 255, 255), 2)
cv2.line(img, (int(coord[i, 2] + xOffset), int(coord[i, 3] + yOffset)),
(int(coord[i, 4] + xOffset), int(coord[i, 5] + yOffset)), (255, 255, 255), 2)
cv2.line(img, (int(coord[i, 4] + xOffset), int(coord[i, 5] + yOffset)),
(int(coord[i, 6] + xOffset), int(coord[i, 7] + yOffset)), (255, 255, 255), 2)
cv2.line(img, (int(coord[i, 6] + xOffset), int(coord[i, 7] + yOffset)),
(int(coord[i, 0] + xOffset), int(coord[i, 1] + yOffset)), (255, 255, 255), 2)
return (img, params)
def commonEstimate(self, datapack, params):
fname, img = datapack
get_params = params[fname]
extract_param = get_params.split()
processed_obj = self.estimate(None, (int(extract_param[0]),int(extract_param[1]), int(extract_param[2]), int(extract_param[3]) ))
return (fname, img, processed_obj)
def commonAnalysisTransform(self, datapack, params):
fname, img, model = datapack
get_params = params[fname]
extract_param = get_params.split()
processedimg, stats = self.analysis(img, model,(int(extract_param[4]), int(extract_param[5])))
return (fname, processedimg, stats)
#print(filenames)
#ftp = sc.broadcast(ftp)
def collectFiles(pipes, pattern):
fil_list = pipes.collectFiles(pattern)
return fil_list
def collectfromCSV(pipes, column):
fil_list = pipes.collectImgFromCSV(column)
return fil_list
def loadFiles( pipes, fil_list):
images = []
for file_path in fil_list:
images.append(pipes.loadIntoCluster(file_path))
return images
def collectResultAsName(pipes, rdd):
pipes.saveResult(rdd.collect())
def collectBundle(pipes, pattern):
image_sets_dirs = pipes.collectImagesSet(pattern)
return image_sets_dirs
def loadBundle( pipes, fil_list):
bundles = []
for sub_path in fil_list:
print(sub_path[0])
print(sub_path[1])
bundles.append( pipes.loadBundleIntoCluster(sub_path))
return bundles
def loadBundleSkipConvert( pipes, fil_list):
bundles = []
for sub_path in fil_list:
print(sub_path[0])
print(sub_path[1])
bundles.append( pipes.loadBundleIntoCluster_Skip_conversion(sub_path))
return bundles
def img_registration(sc, server, uname, password, data_path, save_path, img_type, no_of_match, ratio, reproj_thresh, base_img_idx):
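# Registration pipeline: image bundles are loaded from the remote server, converted,
# SIFT features are extracted, a registration model is fitted with the matching parameters
# (no_of_match, ratio, reproj_thresh, base_img_idx), the bundle is transformed against that
# model, and the registered images are written back.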
print('Executing from web................')
pipes = ImageRegistration(server, uname, password)
pipes.setLoadAndSavePath(data_path, save_path)
file_bundles = collectBundle(pipes, img_type)
rdd = loadBundle(pipes, file_bundles)
processing_start_time = time()
for bundle in rdd:
pack = pipes.commonTransform(bundle, (0))
pack = pipes.commonEstimate( pack, ('sift'))
pack = pipes.commonModel(pack, (no_of_match, ratio, reproj_thresh, base_img_idx))
pack = pipes.commonAnalysisTransform(pack, (0))
pipes.write_register_images(pack)
processing_end_time = time() - processing_start_time
print( "SUCCESS: Images procesed in {} seconds".format(round(processing_end_time, 3)))
def img_registration2(sc, server, uname, password, data_path, save_path, img_type, no_of_match, ratio, reproj_thresh, base_img_idx):
print('Executing from web................')
pipes = ImageRegistration(server, uname, password)
pipes.setLoadAndSavePath(data_path, save_path)
file_bundles = pipes.collectImgsAsGroup(pipes.collectDirs(img_type))
rdd = loadBundleSkipConvert(pipes, file_bundles)
processing_start_time = time()
for bundle in rdd:
pack = pipes.commonEstimate( bundle, ('sift'))
pack = pipes.commonModel(pack, (no_of_match, ratio, reproj_thresh, base_img_idx))
pack = pipes.commonAnalysisTransform(pack, (0))
pipes.save_img_bundle(pack)
processing_end_time = time() - processing_start_time
print( "SUCCESS: Images procesed in {} seconds".format(round(processing_end_time, 3)))
def img_segmentation(sc,server,uname,upass, data_path, save_path, img_type,kernel_size, iterations, distance, forg_ratio):
pipes = ImgPipeline(server, uname, upass)
pipes.setLoadAndSavePath(data_path, save_path)
files = collectFiles(pipes, img_type)
rdd = loadFiles(pipes, files)
processing_start_time = time()
for bundle in rdd:
pack = pipes.commonTransform(bundle, (0))
pack = pipes.commonEstimate( pack, (kernel_size,iterations))
pack = pipes.commonModel(pack, (kernel_size, iterations, distance, forg_ratio))
pack = pipes.commonAnalysisTransform(pack, (0))
pipes.commonSave(pack)
processing_end_time = time() - processing_start_time
print("SUCCESS: Images procesed in {} seconds".format(round(processing_end_time, 3)))
#127.0.0.1 akm523 523@mitm /hadoopdata/segment_data /hadoopdata/segment_result '*' 3 2 0 .70
def callImgSeg():
try:
print(sys.argv[1:8])
server = sys.argv[1]
uname = sys.argv[2]
upass = sys.argv[3]
print('From web .............')
print(uname)
data_path = sys.argv[4]
save_path = sys.argv[5]
img_type = sys.argv[6]
kernel_size = int(sys.argv[7])
iterations = int(sys.argv[8])
distance = int(sys.argv[9])
fg_ratio = float(sys.argv[10])
img_segmentation(sc, server, uname, upass, data_path, save_path, img_type, kernel_size, iterations, distance,
fg_ratio)
except Exception as e:
print(e)
# img_matching(sc, server, uname, upass, data_path, save_path, img_type, img_to_seacrh, ratio = 0.55)
# img_clustering(sc, server, uname, upass, csv_path, save_path, "'*'", K=3, iterations=20)
#127.0.0.1 akm523 523@mitm /hadoopdata/reg_test_images /result '*' 4 .75 0 0
def callImgReg():
print(sys.argv[2:12])
server = sys.argv[2]
uname = sys.argv[3]
upass = sys.argv[4]
print('From web .............')
print(uname)
data_path = sys.argv[5]
save_path = sys.argv[6]
img_type = sys.argv[7]
no_of_match = int(sys.argv[8])
ratio = float(sys.argv[9])
reproj_thresh = float(sys.argv[10])
base_img_idx = int(sys.argv[11])
#img_registration(sc, server, uname, upass, data_path, save_path, img_type, no_of_match, ratio, reproj_thresh, base_img_idx)
img_registration2(sc, server, uname, upass, data_path, save_path, img_type, no_of_match, ratio, reproj_thresh, base_img_idx)
#127.0.0.1 uname pass /hadoopdata/flower /hadoopdata/flower_result '*' 155
def callFlowerCount():
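# Flower-counting pipeline: the first image defines the template size and the plot /
# flower-area masks; each image's B-channel histogram is compared against the average
# histogram to get a shift, which feeds the logistic flower highlighting; blobs are then
# detected and circled in each image before saving.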
print(sys.argv[1:8])
server = sys.argv[1]
uname = sys.argv[2]
upass = sys.argv[3]
print('From web .............')
print(uname)
data_path = sys.argv[4]
save_path = sys.argv[5]
img_type = sys.argv[6]
segm_B_lower = int(sys.argv[7])
pipes = FlowerCounter(server, uname, upass)
pipes.setLoadAndSavePath(data_path, save_path)
files = collectFiles(pipes, img_type)
print(files[0:20])
rdd = loadFiles(pipes, files[0:20])
packs = []
processing_start_time = time()
for bundle in rdd:
packs.append(pipes.commonTransform(bundle, (0)))
template = packs[0]
print(len(template))
tem_img = template[2]
height, width,channel = tem_img.shape
pipes.setTemplateandSize(tem_img, (height, width))
region_matrix = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
pipes.setRegionMatrix(region_matrix)
print(len(tem_img), tem_img.shape)
plot_bound, plot_mask = pipes.calculatePlotMask(pipes.template_img, pipes.common_size)
flower_mask = pipes.calculateFlowerAreaMask(pipes.region_matrix, plot_bound, pipes.common_size)
est_packs = []
ii = 0
for pack in packs:
print(ii)
if(len(pack[2].shape) !=0):
est_packs.append( pipes.commonEstimate(pack, (plot_mask)))
ii = ii+1
hist_b_all = []
all_array = est_packs
for i, element in enumerate(all_array):
#print(element)
histogrm = element[2]
hist_b_all.append(histogrm[0])
avg_hist_b = pipes.computeAverageHistograms(hist_b_all) # Need to convert it in array
for pack in est_packs:
imgs = pipes.commonModel(pack, (avg_hist_b))
imgs = pipes.commonAnalysisTransform(imgs, (flower_mask, segm_B_lower))
pipes.commonSave(imgs)
processing_end_time = time() - processing_start_time
print("SUCCESS: Images procesed in {} seconds".format(round(processing_end_time, 3)))
#127.0.0.1 akm523 523@mitm /hadoopdata/stitching /hadoopdata/stitch_result '*'
def imageStitching():
print(sys.argv[1:8])
server = sys.argv[1]
uname = sys.argv[2]
upass = sys.argv[3]
print('From web .............')
print(uname)
data_path = sys.argv[4]
save_path = sys.argv[5]
img_type = sys.argv[6]
pipes = ImageStitching(server, uname, upass)
pipes.setLoadAndSavePath(data_path, save_path)
file_bundles = collectBundle(pipes, img_type)
rdd = loadBundle(pipes, file_bundles)
processing_start_time = time()
for bundle in rdd:
pack = pipes.commonTransform(bundle, ((1280, 960)))
pack = pipes.commonEstimate(pack, ('sift'))
fs,img, entt = pack
print("chechking entity")
for ent in entt[1]:
print(str(len(ent)))
print(ent)
pack = pipes.commonModel(pack, (0))
pack = pipes.commonAnalysisTransform(pack, (0))
pipes.write_stitch_images(pack)
processing_end_time = time() - processing_start_time
print("SUCCESS: Images procesed in {} seconds".format(round(processing_end_time, 3)))
def collectImgSets():
print(sys.argv[1:8])
server = sys.argv[1]
uname = sys.argv[2]
upass = sys.argv[3]
print('From web .............')
print(uname)
data_path = sys.argv[4]
save_path = sys.argv[5]
img_type = sys.argv[6]
pipes = ImageStitching(server, uname, upass)
pipes.setLoadAndSavePath(data_path, save_path)
file_lists = pipes.collectDirs(img_type)
sets = pipes.collectImgsAsGroup(file_lists)
print(sets)
# 127.0.0.1 akm523 523@mitm /hadoopdata/csvfile/plotsegment.csv /hadoopdata/plot_result '*' 155
def plotSegment():
print(sys.argv[1:8])
server = sys.argv[1]
uname = sys.argv[2]
upass = sys.argv[3]
print('From web .............')
print(uname)
data_path = sys.argv[4]
save_path = sys.argv[5]
img_type = sys.argv[6]
pipes = PlotSegment(server, uname, upass)
pipes.setCSVAndSavePath(data_path, save_path)
imgfile_withparams = pipes.ImgandParamFromCSV("path","param")
for file in imgfile_withparams:
print(imgfile_withparams[file])
data_packs = loadFiles(pipes, list(imgfile_withparams))
for img in data_packs:
processed_pack = pipes.commonEstimate(img,(imgfile_withparams))
ploted_img = pipes.commonAnalysisTransform(processed_pack,(imgfile_withparams))
pipes.commonSave(ploted_img)
if __name__ == "__main__":
print(sys.argv[0:11])
if sys.argv[1] == "registerimage":
callImgReg()
#callImgReg()
#callImgSeg()
#callFlowerCount()
#imageStitching()
#collectImgSets()
# plotSegment()
|
mit
|
Hakuba/youtube-dl
|
youtube_dl/extractor/blinkx.py
|
199
|
3217
|
from __future__ import unicode_literals
import json
from .common import InfoExtractor
from ..utils import (
remove_start,
int_or_none,
)
class BlinkxIE(InfoExtractor):
_VALID_URL = r'(?:https?://(?:www\.)blinkx\.com/#?ce/|blinkx:)(?P<id>[^?]+)'
IE_NAME = 'blinkx'
_TEST = {
'url': 'http://www.blinkx.com/ce/Da0Gw3xc5ucpNduzLuDDlv4WC9PuI4fDi1-t6Y3LyfdY2SZS5Urbvn-UPJvrvbo8LTKTc67Wu2rPKSQDJyZeeORCR8bYkhs8lI7eqddznH2ofh5WEEdjYXnoRtj7ByQwt7atMErmXIeYKPsSDuMAAqJDlQZ-3Ff4HJVeH_s3Gh8oQ',
'md5': '337cf7a344663ec79bf93a526a2e06c7',
'info_dict': {
'id': 'Da0Gw3xc',
'ext': 'mp4',
'title': 'No Daily Show for John Oliver; HBO Show Renewed - IGN News',
'uploader': 'IGN News',
'upload_date': '20150217',
'timestamp': 1424215740,
'description': 'HBO has renewed Last Week Tonight With John Oliver for two more seasons.',
'duration': 47.743333,
},
}
def _real_extract(self, url):
video_id = self._match_id(url)
display_id = video_id[:8]
api_url = ('https://apib4.blinkx.com/api.php?action=play_video&' +
'video=%s' % video_id)
data_json = self._download_webpage(api_url, display_id)
data = json.loads(data_json)['api']['results'][0]
duration = None
thumbnails = []
formats = []
for m in data['media']:
if m['type'] == 'jpg':
thumbnails.append({
'url': m['link'],
'width': int(m['w']),
'height': int(m['h']),
})
elif m['type'] == 'original':
duration = float(m['d'])
elif m['type'] == 'youtube':
yt_id = m['link']
self.to_screen('Youtube video detected: %s' % yt_id)
return self.url_result(yt_id, 'Youtube', video_id=yt_id)
elif m['type'] in ('flv', 'mp4'):
vcodec = remove_start(m['vcodec'], 'ff')
acodec = remove_start(m['acodec'], 'ff')
vbr = int_or_none(m.get('vbr') or m.get('vbitrate'), 1000)
abr = int_or_none(m.get('abr') or m.get('abitrate'), 1000)
tbr = vbr + abr if vbr and abr else None
format_id = '%s-%sk-%s' % (vcodec, tbr, m['w'])
formats.append({
'format_id': format_id,
'url': m['link'],
'vcodec': vcodec,
'acodec': acodec,
'abr': abr,
'vbr': vbr,
'tbr': tbr,
'width': int_or_none(m.get('w')),
'height': int_or_none(m.get('h')),
})
self._sort_formats(formats)
return {
'id': display_id,
'fullid': video_id,
'title': data['title'],
'formats': formats,
'uploader': data['channel_name'],
'timestamp': data['pubdate_epoch'],
'description': data.get('description'),
'thumbnails': thumbnails,
'duration': duration,
}
|
unlicense
|
Voluntarynet/BitmessageKit
|
BitmessageKit/Vendor/static-python/Lib/encodings/cp858.py
|
416
|
34271
|
""" Python Character Mapping Codec for CP858, modified from cp850.
"""
import codecs
### Codec APIs
class Codec(codecs.Codec):
def encode(self,input,errors='strict'):
return codecs.charmap_encode(input,errors,encoding_map)
def decode(self,input,errors='strict'):
return codecs.charmap_decode(input,errors,decoding_table)
class IncrementalEncoder(codecs.IncrementalEncoder):
def encode(self, input, final=False):
return codecs.charmap_encode(input,self.errors,encoding_map)[0]
class IncrementalDecoder(codecs.IncrementalDecoder):
def decode(self, input, final=False):
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
class StreamWriter(Codec,codecs.StreamWriter):
pass
class StreamReader(Codec,codecs.StreamReader):
pass
### encodings module API
def getregentry():
return codecs.CodecInfo(
name='cp858',
encode=Codec().encode,
decode=Codec().decode,
incrementalencoder=IncrementalEncoder,
incrementaldecoder=IncrementalDecoder,
streamreader=StreamReader,
streamwriter=StreamWriter,
)
### Decoding Map
decoding_map = codecs.make_identity_dict(range(256))
decoding_map.update({
0x0080: 0x00c7, # LATIN CAPITAL LETTER C WITH CEDILLA
0x0081: 0x00fc, # LATIN SMALL LETTER U WITH DIAERESIS
0x0082: 0x00e9, # LATIN SMALL LETTER E WITH ACUTE
0x0083: 0x00e2, # LATIN SMALL LETTER A WITH CIRCUMFLEX
0x0084: 0x00e4, # LATIN SMALL LETTER A WITH DIAERESIS
0x0085: 0x00e0, # LATIN SMALL LETTER A WITH GRAVE
0x0086: 0x00e5, # LATIN SMALL LETTER A WITH RING ABOVE
0x0087: 0x00e7, # LATIN SMALL LETTER C WITH CEDILLA
0x0088: 0x00ea, # LATIN SMALL LETTER E WITH CIRCUMFLEX
0x0089: 0x00eb, # LATIN SMALL LETTER E WITH DIAERESIS
0x008a: 0x00e8, # LATIN SMALL LETTER E WITH GRAVE
0x008b: 0x00ef, # LATIN SMALL LETTER I WITH DIAERESIS
0x008c: 0x00ee, # LATIN SMALL LETTER I WITH CIRCUMFLEX
0x008d: 0x00ec, # LATIN SMALL LETTER I WITH GRAVE
0x008e: 0x00c4, # LATIN CAPITAL LETTER A WITH DIAERESIS
0x008f: 0x00c5, # LATIN CAPITAL LETTER A WITH RING ABOVE
0x0090: 0x00c9, # LATIN CAPITAL LETTER E WITH ACUTE
0x0091: 0x00e6, # LATIN SMALL LIGATURE AE
0x0092: 0x00c6, # LATIN CAPITAL LIGATURE AE
0x0093: 0x00f4, # LATIN SMALL LETTER O WITH CIRCUMFLEX
0x0094: 0x00f6, # LATIN SMALL LETTER O WITH DIAERESIS
0x0095: 0x00f2, # LATIN SMALL LETTER O WITH GRAVE
0x0096: 0x00fb, # LATIN SMALL LETTER U WITH CIRCUMFLEX
0x0097: 0x00f9, # LATIN SMALL LETTER U WITH GRAVE
0x0098: 0x00ff, # LATIN SMALL LETTER Y WITH DIAERESIS
0x0099: 0x00d6, # LATIN CAPITAL LETTER O WITH DIAERESIS
0x009a: 0x00dc, # LATIN CAPITAL LETTER U WITH DIAERESIS
0x009b: 0x00f8, # LATIN SMALL LETTER O WITH STROKE
0x009c: 0x00a3, # POUND SIGN
0x009d: 0x00d8, # LATIN CAPITAL LETTER O WITH STROKE
0x009e: 0x00d7, # MULTIPLICATION SIGN
0x009f: 0x0192, # LATIN SMALL LETTER F WITH HOOK
0x00a0: 0x00e1, # LATIN SMALL LETTER A WITH ACUTE
0x00a1: 0x00ed, # LATIN SMALL LETTER I WITH ACUTE
0x00a2: 0x00f3, # LATIN SMALL LETTER O WITH ACUTE
0x00a3: 0x00fa, # LATIN SMALL LETTER U WITH ACUTE
0x00a4: 0x00f1, # LATIN SMALL LETTER N WITH TILDE
0x00a5: 0x00d1, # LATIN CAPITAL LETTER N WITH TILDE
0x00a6: 0x00aa, # FEMININE ORDINAL INDICATOR
0x00a7: 0x00ba, # MASCULINE ORDINAL INDICATOR
0x00a8: 0x00bf, # INVERTED QUESTION MARK
0x00a9: 0x00ae, # REGISTERED SIGN
0x00aa: 0x00ac, # NOT SIGN
0x00ab: 0x00bd, # VULGAR FRACTION ONE HALF
0x00ac: 0x00bc, # VULGAR FRACTION ONE QUARTER
0x00ad: 0x00a1, # INVERTED EXCLAMATION MARK
0x00ae: 0x00ab, # LEFT-POINTING DOUBLE ANGLE QUOTATION MARK
0x00af: 0x00bb, # RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
0x00b0: 0x2591, # LIGHT SHADE
0x00b1: 0x2592, # MEDIUM SHADE
0x00b2: 0x2593, # DARK SHADE
0x00b3: 0x2502, # BOX DRAWINGS LIGHT VERTICAL
0x00b4: 0x2524, # BOX DRAWINGS LIGHT VERTICAL AND LEFT
0x00b5: 0x00c1, # LATIN CAPITAL LETTER A WITH ACUTE
0x00b6: 0x00c2, # LATIN CAPITAL LETTER A WITH CIRCUMFLEX
0x00b7: 0x00c0, # LATIN CAPITAL LETTER A WITH GRAVE
0x00b8: 0x00a9, # COPYRIGHT SIGN
0x00b9: 0x2563, # BOX DRAWINGS DOUBLE VERTICAL AND LEFT
0x00ba: 0x2551, # BOX DRAWINGS DOUBLE VERTICAL
0x00bb: 0x2557, # BOX DRAWINGS DOUBLE DOWN AND LEFT
0x00bc: 0x255d, # BOX DRAWINGS DOUBLE UP AND LEFT
0x00bd: 0x00a2, # CENT SIGN
0x00be: 0x00a5, # YEN SIGN
0x00bf: 0x2510, # BOX DRAWINGS LIGHT DOWN AND LEFT
0x00c0: 0x2514, # BOX DRAWINGS LIGHT UP AND RIGHT
0x00c1: 0x2534, # BOX DRAWINGS LIGHT UP AND HORIZONTAL
0x00c2: 0x252c, # BOX DRAWINGS LIGHT DOWN AND HORIZONTAL
0x00c3: 0x251c, # BOX DRAWINGS LIGHT VERTICAL AND RIGHT
0x00c4: 0x2500, # BOX DRAWINGS LIGHT HORIZONTAL
0x00c5: 0x253c, # BOX DRAWINGS LIGHT VERTICAL AND HORIZONTAL
0x00c6: 0x00e3, # LATIN SMALL LETTER A WITH TILDE
0x00c7: 0x00c3, # LATIN CAPITAL LETTER A WITH TILDE
0x00c8: 0x255a, # BOX DRAWINGS DOUBLE UP AND RIGHT
0x00c9: 0x2554, # BOX DRAWINGS DOUBLE DOWN AND RIGHT
0x00ca: 0x2569, # BOX DRAWINGS DOUBLE UP AND HORIZONTAL
0x00cb: 0x2566, # BOX DRAWINGS DOUBLE DOWN AND HORIZONTAL
0x00cc: 0x2560, # BOX DRAWINGS DOUBLE VERTICAL AND RIGHT
0x00cd: 0x2550, # BOX DRAWINGS DOUBLE HORIZONTAL
0x00ce: 0x256c, # BOX DRAWINGS DOUBLE VERTICAL AND HORIZONTAL
0x00cf: 0x00a4, # CURRENCY SIGN
0x00d0: 0x00f0, # LATIN SMALL LETTER ETH
0x00d1: 0x00d0, # LATIN CAPITAL LETTER ETH
0x00d2: 0x00ca, # LATIN CAPITAL LETTER E WITH CIRCUMFLEX
0x00d3: 0x00cb, # LATIN CAPITAL LETTER E WITH DIAERESIS
0x00d4: 0x00c8, # LATIN CAPITAL LETTER E WITH GRAVE
0x00d5: 0x20ac, # EURO SIGN
0x00d6: 0x00cd, # LATIN CAPITAL LETTER I WITH ACUTE
0x00d7: 0x00ce, # LATIN CAPITAL LETTER I WITH CIRCUMFLEX
0x00d8: 0x00cf, # LATIN CAPITAL LETTER I WITH DIAERESIS
0x00d9: 0x2518, # BOX DRAWINGS LIGHT UP AND LEFT
0x00da: 0x250c, # BOX DRAWINGS LIGHT DOWN AND RIGHT
0x00db: 0x2588, # FULL BLOCK
0x00dc: 0x2584, # LOWER HALF BLOCK
0x00dd: 0x00a6, # BROKEN BAR
0x00de: 0x00cc, # LATIN CAPITAL LETTER I WITH GRAVE
0x00df: 0x2580, # UPPER HALF BLOCK
0x00e0: 0x00d3, # LATIN CAPITAL LETTER O WITH ACUTE
0x00e1: 0x00df, # LATIN SMALL LETTER SHARP S
0x00e2: 0x00d4, # LATIN CAPITAL LETTER O WITH CIRCUMFLEX
0x00e3: 0x00d2, # LATIN CAPITAL LETTER O WITH GRAVE
0x00e4: 0x00f5, # LATIN SMALL LETTER O WITH TILDE
0x00e5: 0x00d5, # LATIN CAPITAL LETTER O WITH TILDE
0x00e6: 0x00b5, # MICRO SIGN
0x00e7: 0x00fe, # LATIN SMALL LETTER THORN
0x00e8: 0x00de, # LATIN CAPITAL LETTER THORN
0x00e9: 0x00da, # LATIN CAPITAL LETTER U WITH ACUTE
0x00ea: 0x00db, # LATIN CAPITAL LETTER U WITH CIRCUMFLEX
0x00eb: 0x00d9, # LATIN CAPITAL LETTER U WITH GRAVE
0x00ec: 0x00fd, # LATIN SMALL LETTER Y WITH ACUTE
0x00ed: 0x00dd, # LATIN CAPITAL LETTER Y WITH ACUTE
0x00ee: 0x00af, # MACRON
0x00ef: 0x00b4, # ACUTE ACCENT
0x00f0: 0x00ad, # SOFT HYPHEN
0x00f1: 0x00b1, # PLUS-MINUS SIGN
0x00f2: 0x2017, # DOUBLE LOW LINE
0x00f3: 0x00be, # VULGAR FRACTION THREE QUARTERS
0x00f4: 0x00b6, # PILCROW SIGN
0x00f5: 0x00a7, # SECTION SIGN
0x00f6: 0x00f7, # DIVISION SIGN
0x00f7: 0x00b8, # CEDILLA
0x00f8: 0x00b0, # DEGREE SIGN
0x00f9: 0x00a8, # DIAERESIS
0x00fa: 0x00b7, # MIDDLE DOT
0x00fb: 0x00b9, # SUPERSCRIPT ONE
0x00fc: 0x00b3, # SUPERSCRIPT THREE
0x00fd: 0x00b2, # SUPERSCRIPT TWO
0x00fe: 0x25a0, # BLACK SQUARE
0x00ff: 0x00a0, # NO-BREAK SPACE
})
### Decoding Table
decoding_table = (
u'\x00' # 0x0000 -> NULL
u'\x01' # 0x0001 -> START OF HEADING
u'\x02' # 0x0002 -> START OF TEXT
u'\x03' # 0x0003 -> END OF TEXT
u'\x04' # 0x0004 -> END OF TRANSMISSION
u'\x05' # 0x0005 -> ENQUIRY
u'\x06' # 0x0006 -> ACKNOWLEDGE
u'\x07' # 0x0007 -> BELL
u'\x08' # 0x0008 -> BACKSPACE
u'\t' # 0x0009 -> HORIZONTAL TABULATION
u'\n' # 0x000a -> LINE FEED
u'\x0b' # 0x000b -> VERTICAL TABULATION
u'\x0c' # 0x000c -> FORM FEED
u'\r' # 0x000d -> CARRIAGE RETURN
u'\x0e' # 0x000e -> SHIFT OUT
u'\x0f' # 0x000f -> SHIFT IN
u'\x10' # 0x0010 -> DATA LINK ESCAPE
u'\x11' # 0x0011 -> DEVICE CONTROL ONE
u'\x12' # 0x0012 -> DEVICE CONTROL TWO
u'\x13' # 0x0013 -> DEVICE CONTROL THREE
u'\x14' # 0x0014 -> DEVICE CONTROL FOUR
u'\x15' # 0x0015 -> NEGATIVE ACKNOWLEDGE
u'\x16' # 0x0016 -> SYNCHRONOUS IDLE
u'\x17' # 0x0017 -> END OF TRANSMISSION BLOCK
u'\x18' # 0x0018 -> CANCEL
u'\x19' # 0x0019 -> END OF MEDIUM
u'\x1a' # 0x001a -> SUBSTITUTE
u'\x1b' # 0x001b -> ESCAPE
u'\x1c' # 0x001c -> FILE SEPARATOR
u'\x1d' # 0x001d -> GROUP SEPARATOR
u'\x1e' # 0x001e -> RECORD SEPARATOR
u'\x1f' # 0x001f -> UNIT SEPARATOR
u' ' # 0x0020 -> SPACE
u'!' # 0x0021 -> EXCLAMATION MARK
u'"' # 0x0022 -> QUOTATION MARK
u'#' # 0x0023 -> NUMBER SIGN
u'$' # 0x0024 -> DOLLAR SIGN
u'%' # 0x0025 -> PERCENT SIGN
u'&' # 0x0026 -> AMPERSAND
u"'" # 0x0027 -> APOSTROPHE
u'(' # 0x0028 -> LEFT PARENTHESIS
u')' # 0x0029 -> RIGHT PARENTHESIS
u'*' # 0x002a -> ASTERISK
u'+' # 0x002b -> PLUS SIGN
u',' # 0x002c -> COMMA
u'-' # 0x002d -> HYPHEN-MINUS
u'.' # 0x002e -> FULL STOP
u'/' # 0x002f -> SOLIDUS
u'0' # 0x0030 -> DIGIT ZERO
u'1' # 0x0031 -> DIGIT ONE
u'2' # 0x0032 -> DIGIT TWO
u'3' # 0x0033 -> DIGIT THREE
u'4' # 0x0034 -> DIGIT FOUR
u'5' # 0x0035 -> DIGIT FIVE
u'6' # 0x0036 -> DIGIT SIX
u'7' # 0x0037 -> DIGIT SEVEN
u'8' # 0x0038 -> DIGIT EIGHT
u'9' # 0x0039 -> DIGIT NINE
u':' # 0x003a -> COLON
u';' # 0x003b -> SEMICOLON
u'<' # 0x003c -> LESS-THAN SIGN
u'=' # 0x003d -> EQUALS SIGN
u'>' # 0x003e -> GREATER-THAN SIGN
u'?' # 0x003f -> QUESTION MARK
u'@' # 0x0040 -> COMMERCIAL AT
u'A' # 0x0041 -> LATIN CAPITAL LETTER A
u'B' # 0x0042 -> LATIN CAPITAL LETTER B
u'C' # 0x0043 -> LATIN CAPITAL LETTER C
u'D' # 0x0044 -> LATIN CAPITAL LETTER D
u'E' # 0x0045 -> LATIN CAPITAL LETTER E
u'F' # 0x0046 -> LATIN CAPITAL LETTER F
u'G' # 0x0047 -> LATIN CAPITAL LETTER G
u'H' # 0x0048 -> LATIN CAPITAL LETTER H
u'I' # 0x0049 -> LATIN CAPITAL LETTER I
u'J' # 0x004a -> LATIN CAPITAL LETTER J
u'K' # 0x004b -> LATIN CAPITAL LETTER K
u'L' # 0x004c -> LATIN CAPITAL LETTER L
u'M' # 0x004d -> LATIN CAPITAL LETTER M
u'N' # 0x004e -> LATIN CAPITAL LETTER N
u'O' # 0x004f -> LATIN CAPITAL LETTER O
u'P' # 0x0050 -> LATIN CAPITAL LETTER P
u'Q' # 0x0051 -> LATIN CAPITAL LETTER Q
u'R' # 0x0052 -> LATIN CAPITAL LETTER R
u'S' # 0x0053 -> LATIN CAPITAL LETTER S
u'T' # 0x0054 -> LATIN CAPITAL LETTER T
u'U' # 0x0055 -> LATIN CAPITAL LETTER U
u'V' # 0x0056 -> LATIN CAPITAL LETTER V
u'W' # 0x0057 -> LATIN CAPITAL LETTER W
u'X' # 0x0058 -> LATIN CAPITAL LETTER X
u'Y' # 0x0059 -> LATIN CAPITAL LETTER Y
u'Z' # 0x005a -> LATIN CAPITAL LETTER Z
u'[' # 0x005b -> LEFT SQUARE BRACKET
u'\\' # 0x005c -> REVERSE SOLIDUS
u']' # 0x005d -> RIGHT SQUARE BRACKET
u'^' # 0x005e -> CIRCUMFLEX ACCENT
u'_' # 0x005f -> LOW LINE
u'`' # 0x0060 -> GRAVE ACCENT
u'a' # 0x0061 -> LATIN SMALL LETTER A
u'b' # 0x0062 -> LATIN SMALL LETTER B
u'c' # 0x0063 -> LATIN SMALL LETTER C
u'd' # 0x0064 -> LATIN SMALL LETTER D
u'e' # 0x0065 -> LATIN SMALL LETTER E
u'f' # 0x0066 -> LATIN SMALL LETTER F
u'g' # 0x0067 -> LATIN SMALL LETTER G
u'h' # 0x0068 -> LATIN SMALL LETTER H
u'i' # 0x0069 -> LATIN SMALL LETTER I
u'j' # 0x006a -> LATIN SMALL LETTER J
u'k' # 0x006b -> LATIN SMALL LETTER K
u'l' # 0x006c -> LATIN SMALL LETTER L
u'm' # 0x006d -> LATIN SMALL LETTER M
u'n' # 0x006e -> LATIN SMALL LETTER N
u'o' # 0x006f -> LATIN SMALL LETTER O
u'p' # 0x0070 -> LATIN SMALL LETTER P
u'q' # 0x0071 -> LATIN SMALL LETTER Q
u'r' # 0x0072 -> LATIN SMALL LETTER R
u's' # 0x0073 -> LATIN SMALL LETTER S
u't' # 0x0074 -> LATIN SMALL LETTER T
u'u' # 0x0075 -> LATIN SMALL LETTER U
u'v' # 0x0076 -> LATIN SMALL LETTER V
u'w' # 0x0077 -> LATIN SMALL LETTER W
u'x' # 0x0078 -> LATIN SMALL LETTER X
u'y' # 0x0079 -> LATIN SMALL LETTER Y
u'z' # 0x007a -> LATIN SMALL LETTER Z
u'{' # 0x007b -> LEFT CURLY BRACKET
u'|' # 0x007c -> VERTICAL LINE
u'}' # 0x007d -> RIGHT CURLY BRACKET
u'~' # 0x007e -> TILDE
u'\x7f' # 0x007f -> DELETE
u'\xc7' # 0x0080 -> LATIN CAPITAL LETTER C WITH CEDILLA
u'\xfc' # 0x0081 -> LATIN SMALL LETTER U WITH DIAERESIS
u'\xe9' # 0x0082 -> LATIN SMALL LETTER E WITH ACUTE
u'\xe2' # 0x0083 -> LATIN SMALL LETTER A WITH CIRCUMFLEX
u'\xe4' # 0x0084 -> LATIN SMALL LETTER A WITH DIAERESIS
u'\xe0' # 0x0085 -> LATIN SMALL LETTER A WITH GRAVE
u'\xe5' # 0x0086 -> LATIN SMALL LETTER A WITH RING ABOVE
u'\xe7' # 0x0087 -> LATIN SMALL LETTER C WITH CEDILLA
u'\xea' # 0x0088 -> LATIN SMALL LETTER E WITH CIRCUMFLEX
u'\xeb' # 0x0089 -> LATIN SMALL LETTER E WITH DIAERESIS
u'\xe8' # 0x008a -> LATIN SMALL LETTER E WITH GRAVE
u'\xef' # 0x008b -> LATIN SMALL LETTER I WITH DIAERESIS
u'\xee' # 0x008c -> LATIN SMALL LETTER I WITH CIRCUMFLEX
u'\xec' # 0x008d -> LATIN SMALL LETTER I WITH GRAVE
u'\xc4' # 0x008e -> LATIN CAPITAL LETTER A WITH DIAERESIS
u'\xc5' # 0x008f -> LATIN CAPITAL LETTER A WITH RING ABOVE
u'\xc9' # 0x0090 -> LATIN CAPITAL LETTER E WITH ACUTE
u'\xe6' # 0x0091 -> LATIN SMALL LIGATURE AE
u'\xc6' # 0x0092 -> LATIN CAPITAL LIGATURE AE
u'\xf4' # 0x0093 -> LATIN SMALL LETTER O WITH CIRCUMFLEX
u'\xf6' # 0x0094 -> LATIN SMALL LETTER O WITH DIAERESIS
u'\xf2' # 0x0095 -> LATIN SMALL LETTER O WITH GRAVE
u'\xfb' # 0x0096 -> LATIN SMALL LETTER U WITH CIRCUMFLEX
u'\xf9' # 0x0097 -> LATIN SMALL LETTER U WITH GRAVE
u'\xff' # 0x0098 -> LATIN SMALL LETTER Y WITH DIAERESIS
u'\xd6' # 0x0099 -> LATIN CAPITAL LETTER O WITH DIAERESIS
u'\xdc' # 0x009a -> LATIN CAPITAL LETTER U WITH DIAERESIS
u'\xf8' # 0x009b -> LATIN SMALL LETTER O WITH STROKE
u'\xa3' # 0x009c -> POUND SIGN
u'\xd8' # 0x009d -> LATIN CAPITAL LETTER O WITH STROKE
u'\xd7' # 0x009e -> MULTIPLICATION SIGN
u'\u0192' # 0x009f -> LATIN SMALL LETTER F WITH HOOK
u'\xe1' # 0x00a0 -> LATIN SMALL LETTER A WITH ACUTE
u'\xed' # 0x00a1 -> LATIN SMALL LETTER I WITH ACUTE
u'\xf3' # 0x00a2 -> LATIN SMALL LETTER O WITH ACUTE
u'\xfa' # 0x00a3 -> LATIN SMALL LETTER U WITH ACUTE
u'\xf1' # 0x00a4 -> LATIN SMALL LETTER N WITH TILDE
u'\xd1' # 0x00a5 -> LATIN CAPITAL LETTER N WITH TILDE
u'\xaa' # 0x00a6 -> FEMININE ORDINAL INDICATOR
u'\xba' # 0x00a7 -> MASCULINE ORDINAL INDICATOR
u'\xbf' # 0x00a8 -> INVERTED QUESTION MARK
u'\xae' # 0x00a9 -> REGISTERED SIGN
u'\xac' # 0x00aa -> NOT SIGN
u'\xbd' # 0x00ab -> VULGAR FRACTION ONE HALF
u'\xbc' # 0x00ac -> VULGAR FRACTION ONE QUARTER
u'\xa1' # 0x00ad -> INVERTED EXCLAMATION MARK
u'\xab' # 0x00ae -> LEFT-POINTING DOUBLE ANGLE QUOTATION MARK
u'\xbb' # 0x00af -> RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
u'\u2591' # 0x00b0 -> LIGHT SHADE
u'\u2592' # 0x00b1 -> MEDIUM SHADE
u'\u2593' # 0x00b2 -> DARK SHADE
u'\u2502' # 0x00b3 -> BOX DRAWINGS LIGHT VERTICAL
u'\u2524' # 0x00b4 -> BOX DRAWINGS LIGHT VERTICAL AND LEFT
u'\xc1' # 0x00b5 -> LATIN CAPITAL LETTER A WITH ACUTE
u'\xc2' # 0x00b6 -> LATIN CAPITAL LETTER A WITH CIRCUMFLEX
u'\xc0' # 0x00b7 -> LATIN CAPITAL LETTER A WITH GRAVE
u'\xa9' # 0x00b8 -> COPYRIGHT SIGN
u'\u2563' # 0x00b9 -> BOX DRAWINGS DOUBLE VERTICAL AND LEFT
u'\u2551' # 0x00ba -> BOX DRAWINGS DOUBLE VERTICAL
u'\u2557' # 0x00bb -> BOX DRAWINGS DOUBLE DOWN AND LEFT
u'\u255d' # 0x00bc -> BOX DRAWINGS DOUBLE UP AND LEFT
u'\xa2' # 0x00bd -> CENT SIGN
u'\xa5' # 0x00be -> YEN SIGN
u'\u2510' # 0x00bf -> BOX DRAWINGS LIGHT DOWN AND LEFT
u'\u2514' # 0x00c0 -> BOX DRAWINGS LIGHT UP AND RIGHT
u'\u2534' # 0x00c1 -> BOX DRAWINGS LIGHT UP AND HORIZONTAL
u'\u252c' # 0x00c2 -> BOX DRAWINGS LIGHT DOWN AND HORIZONTAL
u'\u251c' # 0x00c3 -> BOX DRAWINGS LIGHT VERTICAL AND RIGHT
u'\u2500' # 0x00c4 -> BOX DRAWINGS LIGHT HORIZONTAL
u'\u253c' # 0x00c5 -> BOX DRAWINGS LIGHT VERTICAL AND HORIZONTAL
u'\xe3' # 0x00c6 -> LATIN SMALL LETTER A WITH TILDE
u'\xc3' # 0x00c7 -> LATIN CAPITAL LETTER A WITH TILDE
u'\u255a' # 0x00c8 -> BOX DRAWINGS DOUBLE UP AND RIGHT
u'\u2554' # 0x00c9 -> BOX DRAWINGS DOUBLE DOWN AND RIGHT
u'\u2569' # 0x00ca -> BOX DRAWINGS DOUBLE UP AND HORIZONTAL
u'\u2566' # 0x00cb -> BOX DRAWINGS DOUBLE DOWN AND HORIZONTAL
u'\u2560' # 0x00cc -> BOX DRAWINGS DOUBLE VERTICAL AND RIGHT
u'\u2550' # 0x00cd -> BOX DRAWINGS DOUBLE HORIZONTAL
u'\u256c' # 0x00ce -> BOX DRAWINGS DOUBLE VERTICAL AND HORIZONTAL
u'\xa4' # 0x00cf -> CURRENCY SIGN
u'\xf0' # 0x00d0 -> LATIN SMALL LETTER ETH
u'\xd0' # 0x00d1 -> LATIN CAPITAL LETTER ETH
u'\xca' # 0x00d2 -> LATIN CAPITAL LETTER E WITH CIRCUMFLEX
u'\xcb' # 0x00d3 -> LATIN CAPITAL LETTER E WITH DIAERESIS
u'\xc8' # 0x00d4 -> LATIN CAPITAL LETTER E WITH GRAVE
u'\u20ac' # 0x00d5 -> EURO SIGN
u'\xcd' # 0x00d6 -> LATIN CAPITAL LETTER I WITH ACUTE
u'\xce' # 0x00d7 -> LATIN CAPITAL LETTER I WITH CIRCUMFLEX
u'\xcf' # 0x00d8 -> LATIN CAPITAL LETTER I WITH DIAERESIS
u'\u2518' # 0x00d9 -> BOX DRAWINGS LIGHT UP AND LEFT
u'\u250c' # 0x00da -> BOX DRAWINGS LIGHT DOWN AND RIGHT
u'\u2588' # 0x00db -> FULL BLOCK
u'\u2584' # 0x00dc -> LOWER HALF BLOCK
u'\xa6' # 0x00dd -> BROKEN BAR
u'\xcc' # 0x00de -> LATIN CAPITAL LETTER I WITH GRAVE
u'\u2580' # 0x00df -> UPPER HALF BLOCK
u'\xd3' # 0x00e0 -> LATIN CAPITAL LETTER O WITH ACUTE
u'\xdf' # 0x00e1 -> LATIN SMALL LETTER SHARP S
u'\xd4' # 0x00e2 -> LATIN CAPITAL LETTER O WITH CIRCUMFLEX
u'\xd2' # 0x00e3 -> LATIN CAPITAL LETTER O WITH GRAVE
u'\xf5' # 0x00e4 -> LATIN SMALL LETTER O WITH TILDE
u'\xd5' # 0x00e5 -> LATIN CAPITAL LETTER O WITH TILDE
u'\xb5' # 0x00e6 -> MICRO SIGN
u'\xfe' # 0x00e7 -> LATIN SMALL LETTER THORN
u'\xde' # 0x00e8 -> LATIN CAPITAL LETTER THORN
u'\xda' # 0x00e9 -> LATIN CAPITAL LETTER U WITH ACUTE
u'\xdb' # 0x00ea -> LATIN CAPITAL LETTER U WITH CIRCUMFLEX
u'\xd9' # 0x00eb -> LATIN CAPITAL LETTER U WITH GRAVE
u'\xfd' # 0x00ec -> LATIN SMALL LETTER Y WITH ACUTE
u'\xdd' # 0x00ed -> LATIN CAPITAL LETTER Y WITH ACUTE
u'\xaf' # 0x00ee -> MACRON
u'\xb4' # 0x00ef -> ACUTE ACCENT
u'\xad' # 0x00f0 -> SOFT HYPHEN
u'\xb1' # 0x00f1 -> PLUS-MINUS SIGN
u'\u2017' # 0x00f2 -> DOUBLE LOW LINE
u'\xbe' # 0x00f3 -> VULGAR FRACTION THREE QUARTERS
u'\xb6' # 0x00f4 -> PILCROW SIGN
u'\xa7' # 0x00f5 -> SECTION SIGN
u'\xf7' # 0x00f6 -> DIVISION SIGN
u'\xb8' # 0x00f7 -> CEDILLA
u'\xb0' # 0x00f8 -> DEGREE SIGN
u'\xa8' # 0x00f9 -> DIAERESIS
u'\xb7' # 0x00fa -> MIDDLE DOT
u'\xb9' # 0x00fb -> SUPERSCRIPT ONE
u'\xb3' # 0x00fc -> SUPERSCRIPT THREE
u'\xb2' # 0x00fd -> SUPERSCRIPT TWO
u'\u25a0' # 0x00fe -> BLACK SQUARE
u'\xa0' # 0x00ff -> NO-BREAK SPACE
)
### Encoding Map
encoding_map = {
0x0000: 0x0000, # NULL
0x0001: 0x0001, # START OF HEADING
0x0002: 0x0002, # START OF TEXT
0x0003: 0x0003, # END OF TEXT
0x0004: 0x0004, # END OF TRANSMISSION
0x0005: 0x0005, # ENQUIRY
0x0006: 0x0006, # ACKNOWLEDGE
0x0007: 0x0007, # BELL
0x0008: 0x0008, # BACKSPACE
0x0009: 0x0009, # HORIZONTAL TABULATION
0x000a: 0x000a, # LINE FEED
0x000b: 0x000b, # VERTICAL TABULATION
0x000c: 0x000c, # FORM FEED
0x000d: 0x000d, # CARRIAGE RETURN
0x000e: 0x000e, # SHIFT OUT
0x000f: 0x000f, # SHIFT IN
0x0010: 0x0010, # DATA LINK ESCAPE
0x0011: 0x0011, # DEVICE CONTROL ONE
0x0012: 0x0012, # DEVICE CONTROL TWO
0x0013: 0x0013, # DEVICE CONTROL THREE
0x0014: 0x0014, # DEVICE CONTROL FOUR
0x0015: 0x0015, # NEGATIVE ACKNOWLEDGE
0x0016: 0x0016, # SYNCHRONOUS IDLE
0x0017: 0x0017, # END OF TRANSMISSION BLOCK
0x0018: 0x0018, # CANCEL
0x0019: 0x0019, # END OF MEDIUM
0x001a: 0x001a, # SUBSTITUTE
0x001b: 0x001b, # ESCAPE
0x001c: 0x001c, # FILE SEPARATOR
0x001d: 0x001d, # GROUP SEPARATOR
0x001e: 0x001e, # RECORD SEPARATOR
0x001f: 0x001f, # UNIT SEPARATOR
0x0020: 0x0020, # SPACE
0x0021: 0x0021, # EXCLAMATION MARK
0x0022: 0x0022, # QUOTATION MARK
0x0023: 0x0023, # NUMBER SIGN
0x0024: 0x0024, # DOLLAR SIGN
0x0025: 0x0025, # PERCENT SIGN
0x0026: 0x0026, # AMPERSAND
0x0027: 0x0027, # APOSTROPHE
0x0028: 0x0028, # LEFT PARENTHESIS
0x0029: 0x0029, # RIGHT PARENTHESIS
0x002a: 0x002a, # ASTERISK
0x002b: 0x002b, # PLUS SIGN
0x002c: 0x002c, # COMMA
0x002d: 0x002d, # HYPHEN-MINUS
0x002e: 0x002e, # FULL STOP
0x002f: 0x002f, # SOLIDUS
0x0030: 0x0030, # DIGIT ZERO
0x0031: 0x0031, # DIGIT ONE
0x0032: 0x0032, # DIGIT TWO
0x0033: 0x0033, # DIGIT THREE
0x0034: 0x0034, # DIGIT FOUR
0x0035: 0x0035, # DIGIT FIVE
0x0036: 0x0036, # DIGIT SIX
0x0037: 0x0037, # DIGIT SEVEN
0x0038: 0x0038, # DIGIT EIGHT
0x0039: 0x0039, # DIGIT NINE
0x003a: 0x003a, # COLON
0x003b: 0x003b, # SEMICOLON
0x003c: 0x003c, # LESS-THAN SIGN
0x003d: 0x003d, # EQUALS SIGN
0x003e: 0x003e, # GREATER-THAN SIGN
0x003f: 0x003f, # QUESTION MARK
0x0040: 0x0040, # COMMERCIAL AT
0x0041: 0x0041, # LATIN CAPITAL LETTER A
0x0042: 0x0042, # LATIN CAPITAL LETTER B
0x0043: 0x0043, # LATIN CAPITAL LETTER C
0x0044: 0x0044, # LATIN CAPITAL LETTER D
0x0045: 0x0045, # LATIN CAPITAL LETTER E
0x0046: 0x0046, # LATIN CAPITAL LETTER F
0x0047: 0x0047, # LATIN CAPITAL LETTER G
0x0048: 0x0048, # LATIN CAPITAL LETTER H
0x0049: 0x0049, # LATIN CAPITAL LETTER I
0x004a: 0x004a, # LATIN CAPITAL LETTER J
0x004b: 0x004b, # LATIN CAPITAL LETTER K
0x004c: 0x004c, # LATIN CAPITAL LETTER L
0x004d: 0x004d, # LATIN CAPITAL LETTER M
0x004e: 0x004e, # LATIN CAPITAL LETTER N
0x004f: 0x004f, # LATIN CAPITAL LETTER O
0x0050: 0x0050, # LATIN CAPITAL LETTER P
0x0051: 0x0051, # LATIN CAPITAL LETTER Q
0x0052: 0x0052, # LATIN CAPITAL LETTER R
0x0053: 0x0053, # LATIN CAPITAL LETTER S
0x0054: 0x0054, # LATIN CAPITAL LETTER T
0x0055: 0x0055, # LATIN CAPITAL LETTER U
0x0056: 0x0056, # LATIN CAPITAL LETTER V
0x0057: 0x0057, # LATIN CAPITAL LETTER W
0x0058: 0x0058, # LATIN CAPITAL LETTER X
0x0059: 0x0059, # LATIN CAPITAL LETTER Y
0x005a: 0x005a, # LATIN CAPITAL LETTER Z
0x005b: 0x005b, # LEFT SQUARE BRACKET
0x005c: 0x005c, # REVERSE SOLIDUS
0x005d: 0x005d, # RIGHT SQUARE BRACKET
0x005e: 0x005e, # CIRCUMFLEX ACCENT
0x005f: 0x005f, # LOW LINE
0x0060: 0x0060, # GRAVE ACCENT
0x0061: 0x0061, # LATIN SMALL LETTER A
0x0062: 0x0062, # LATIN SMALL LETTER B
0x0063: 0x0063, # LATIN SMALL LETTER C
0x0064: 0x0064, # LATIN SMALL LETTER D
0x0065: 0x0065, # LATIN SMALL LETTER E
0x0066: 0x0066, # LATIN SMALL LETTER F
0x0067: 0x0067, # LATIN SMALL LETTER G
0x0068: 0x0068, # LATIN SMALL LETTER H
0x0069: 0x0069, # LATIN SMALL LETTER I
0x006a: 0x006a, # LATIN SMALL LETTER J
0x006b: 0x006b, # LATIN SMALL LETTER K
0x006c: 0x006c, # LATIN SMALL LETTER L
0x006d: 0x006d, # LATIN SMALL LETTER M
0x006e: 0x006e, # LATIN SMALL LETTER N
0x006f: 0x006f, # LATIN SMALL LETTER O
0x0070: 0x0070, # LATIN SMALL LETTER P
0x0071: 0x0071, # LATIN SMALL LETTER Q
0x0072: 0x0072, # LATIN SMALL LETTER R
0x0073: 0x0073, # LATIN SMALL LETTER S
0x0074: 0x0074, # LATIN SMALL LETTER T
0x0075: 0x0075, # LATIN SMALL LETTER U
0x0076: 0x0076, # LATIN SMALL LETTER V
0x0077: 0x0077, # LATIN SMALL LETTER W
0x0078: 0x0078, # LATIN SMALL LETTER X
0x0079: 0x0079, # LATIN SMALL LETTER Y
0x007a: 0x007a, # LATIN SMALL LETTER Z
0x007b: 0x007b, # LEFT CURLY BRACKET
0x007c: 0x007c, # VERTICAL LINE
0x007d: 0x007d, # RIGHT CURLY BRACKET
0x007e: 0x007e, # TILDE
0x007f: 0x007f, # DELETE
0x00a0: 0x00ff, # NO-BREAK SPACE
0x00a1: 0x00ad, # INVERTED EXCLAMATION MARK
0x00a2: 0x00bd, # CENT SIGN
0x00a3: 0x009c, # POUND SIGN
0x00a4: 0x00cf, # CURRENCY SIGN
0x00a5: 0x00be, # YEN SIGN
0x00a6: 0x00dd, # BROKEN BAR
0x00a7: 0x00f5, # SECTION SIGN
0x00a8: 0x00f9, # DIAERESIS
0x00a9: 0x00b8, # COPYRIGHT SIGN
0x00aa: 0x00a6, # FEMININE ORDINAL INDICATOR
0x00ab: 0x00ae, # LEFT-POINTING DOUBLE ANGLE QUOTATION MARK
0x00ac: 0x00aa, # NOT SIGN
0x00ad: 0x00f0, # SOFT HYPHEN
0x00ae: 0x00a9, # REGISTERED SIGN
0x00af: 0x00ee, # MACRON
0x00b0: 0x00f8, # DEGREE SIGN
0x00b1: 0x00f1, # PLUS-MINUS SIGN
0x00b2: 0x00fd, # SUPERSCRIPT TWO
0x00b3: 0x00fc, # SUPERSCRIPT THREE
0x00b4: 0x00ef, # ACUTE ACCENT
0x00b5: 0x00e6, # MICRO SIGN
0x00b6: 0x00f4, # PILCROW SIGN
0x00b7: 0x00fa, # MIDDLE DOT
0x00b8: 0x00f7, # CEDILLA
0x00b9: 0x00fb, # SUPERSCRIPT ONE
0x00ba: 0x00a7, # MASCULINE ORDINAL INDICATOR
0x00bb: 0x00af, # RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
0x00bc: 0x00ac, # VULGAR FRACTION ONE QUARTER
0x00bd: 0x00ab, # VULGAR FRACTION ONE HALF
0x00be: 0x00f3, # VULGAR FRACTION THREE QUARTERS
0x00bf: 0x00a8, # INVERTED QUESTION MARK
0x00c0: 0x00b7, # LATIN CAPITAL LETTER A WITH GRAVE
0x00c1: 0x00b5, # LATIN CAPITAL LETTER A WITH ACUTE
0x00c2: 0x00b6, # LATIN CAPITAL LETTER A WITH CIRCUMFLEX
0x00c3: 0x00c7, # LATIN CAPITAL LETTER A WITH TILDE
0x00c4: 0x008e, # LATIN CAPITAL LETTER A WITH DIAERESIS
0x00c5: 0x008f, # LATIN CAPITAL LETTER A WITH RING ABOVE
0x00c6: 0x0092, # LATIN CAPITAL LIGATURE AE
0x00c7: 0x0080, # LATIN CAPITAL LETTER C WITH CEDILLA
0x00c8: 0x00d4, # LATIN CAPITAL LETTER E WITH GRAVE
0x00c9: 0x0090, # LATIN CAPITAL LETTER E WITH ACUTE
0x00ca: 0x00d2, # LATIN CAPITAL LETTER E WITH CIRCUMFLEX
0x00cb: 0x00d3, # LATIN CAPITAL LETTER E WITH DIAERESIS
0x00cc: 0x00de, # LATIN CAPITAL LETTER I WITH GRAVE
0x00cd: 0x00d6, # LATIN CAPITAL LETTER I WITH ACUTE
0x00ce: 0x00d7, # LATIN CAPITAL LETTER I WITH CIRCUMFLEX
0x00cf: 0x00d8, # LATIN CAPITAL LETTER I WITH DIAERESIS
0x00d0: 0x00d1, # LATIN CAPITAL LETTER ETH
0x00d1: 0x00a5, # LATIN CAPITAL LETTER N WITH TILDE
0x00d2: 0x00e3, # LATIN CAPITAL LETTER O WITH GRAVE
0x00d3: 0x00e0, # LATIN CAPITAL LETTER O WITH ACUTE
0x00d4: 0x00e2, # LATIN CAPITAL LETTER O WITH CIRCUMFLEX
0x00d5: 0x00e5, # LATIN CAPITAL LETTER O WITH TILDE
0x00d6: 0x0099, # LATIN CAPITAL LETTER O WITH DIAERESIS
0x00d7: 0x009e, # MULTIPLICATION SIGN
0x00d8: 0x009d, # LATIN CAPITAL LETTER O WITH STROKE
0x00d9: 0x00eb, # LATIN CAPITAL LETTER U WITH GRAVE
0x00da: 0x00e9, # LATIN CAPITAL LETTER U WITH ACUTE
0x00db: 0x00ea, # LATIN CAPITAL LETTER U WITH CIRCUMFLEX
0x00dc: 0x009a, # LATIN CAPITAL LETTER U WITH DIAERESIS
0x00dd: 0x00ed, # LATIN CAPITAL LETTER Y WITH ACUTE
0x00de: 0x00e8, # LATIN CAPITAL LETTER THORN
0x00df: 0x00e1, # LATIN SMALL LETTER SHARP S
0x00e0: 0x0085, # LATIN SMALL LETTER A WITH GRAVE
0x00e1: 0x00a0, # LATIN SMALL LETTER A WITH ACUTE
0x00e2: 0x0083, # LATIN SMALL LETTER A WITH CIRCUMFLEX
0x00e3: 0x00c6, # LATIN SMALL LETTER A WITH TILDE
0x00e4: 0x0084, # LATIN SMALL LETTER A WITH DIAERESIS
0x00e5: 0x0086, # LATIN SMALL LETTER A WITH RING ABOVE
0x00e6: 0x0091, # LATIN SMALL LIGATURE AE
0x00e7: 0x0087, # LATIN SMALL LETTER C WITH CEDILLA
0x00e8: 0x008a, # LATIN SMALL LETTER E WITH GRAVE
0x00e9: 0x0082, # LATIN SMALL LETTER E WITH ACUTE
0x00ea: 0x0088, # LATIN SMALL LETTER E WITH CIRCUMFLEX
0x00eb: 0x0089, # LATIN SMALL LETTER E WITH DIAERESIS
0x00ec: 0x008d, # LATIN SMALL LETTER I WITH GRAVE
0x00ed: 0x00a1, # LATIN SMALL LETTER I WITH ACUTE
0x00ee: 0x008c, # LATIN SMALL LETTER I WITH CIRCUMFLEX
0x00ef: 0x008b, # LATIN SMALL LETTER I WITH DIAERESIS
0x00f0: 0x00d0, # LATIN SMALL LETTER ETH
0x00f1: 0x00a4, # LATIN SMALL LETTER N WITH TILDE
0x00f2: 0x0095, # LATIN SMALL LETTER O WITH GRAVE
0x00f3: 0x00a2, # LATIN SMALL LETTER O WITH ACUTE
0x00f4: 0x0093, # LATIN SMALL LETTER O WITH CIRCUMFLEX
0x00f5: 0x00e4, # LATIN SMALL LETTER O WITH TILDE
0x00f6: 0x0094, # LATIN SMALL LETTER O WITH DIAERESIS
0x00f7: 0x00f6, # DIVISION SIGN
0x00f8: 0x009b, # LATIN SMALL LETTER O WITH STROKE
0x00f9: 0x0097, # LATIN SMALL LETTER U WITH GRAVE
0x00fa: 0x00a3, # LATIN SMALL LETTER U WITH ACUTE
0x00fb: 0x0096, # LATIN SMALL LETTER U WITH CIRCUMFLEX
0x00fc: 0x0081, # LATIN SMALL LETTER U WITH DIAERESIS
0x00fd: 0x00ec, # LATIN SMALL LETTER Y WITH ACUTE
0x00fe: 0x00e7, # LATIN SMALL LETTER THORN
0x00ff: 0x0098, # LATIN SMALL LETTER Y WITH DIAERESIS
0x20ac: 0x00d5, # EURO SIGN
0x0192: 0x009f, # LATIN SMALL LETTER F WITH HOOK
0x2017: 0x00f2, # DOUBLE LOW LINE
0x2500: 0x00c4, # BOX DRAWINGS LIGHT HORIZONTAL
0x2502: 0x00b3, # BOX DRAWINGS LIGHT VERTICAL
0x250c: 0x00da, # BOX DRAWINGS LIGHT DOWN AND RIGHT
0x2510: 0x00bf, # BOX DRAWINGS LIGHT DOWN AND LEFT
0x2514: 0x00c0, # BOX DRAWINGS LIGHT UP AND RIGHT
0x2518: 0x00d9, # BOX DRAWINGS LIGHT UP AND LEFT
0x251c: 0x00c3, # BOX DRAWINGS LIGHT VERTICAL AND RIGHT
0x2524: 0x00b4, # BOX DRAWINGS LIGHT VERTICAL AND LEFT
0x252c: 0x00c2, # BOX DRAWINGS LIGHT DOWN AND HORIZONTAL
0x2534: 0x00c1, # BOX DRAWINGS LIGHT UP AND HORIZONTAL
0x253c: 0x00c5, # BOX DRAWINGS LIGHT VERTICAL AND HORIZONTAL
0x2550: 0x00cd, # BOX DRAWINGS DOUBLE HORIZONTAL
0x2551: 0x00ba, # BOX DRAWINGS DOUBLE VERTICAL
0x2554: 0x00c9, # BOX DRAWINGS DOUBLE DOWN AND RIGHT
0x2557: 0x00bb, # BOX DRAWINGS DOUBLE DOWN AND LEFT
0x255a: 0x00c8, # BOX DRAWINGS DOUBLE UP AND RIGHT
0x255d: 0x00bc, # BOX DRAWINGS DOUBLE UP AND LEFT
0x2560: 0x00cc, # BOX DRAWINGS DOUBLE VERTICAL AND RIGHT
0x2563: 0x00b9, # BOX DRAWINGS DOUBLE VERTICAL AND LEFT
0x2566: 0x00cb, # BOX DRAWINGS DOUBLE DOWN AND HORIZONTAL
0x2569: 0x00ca, # BOX DRAWINGS DOUBLE UP AND HORIZONTAL
0x256c: 0x00ce, # BOX DRAWINGS DOUBLE VERTICAL AND HORIZONTAL
0x2580: 0x00df, # UPPER HALF BLOCK
0x2584: 0x00dc, # LOWER HALF BLOCK
0x2588: 0x00db, # FULL BLOCK
0x2591: 0x00b0, # LIGHT SHADE
0x2592: 0x00b1, # MEDIUM SHADE
0x2593: 0x00b2, # DARK SHADE
0x25a0: 0x00fe, # BLACK SQUARE
}
|
mit
|
pilou-/ansible
|
lib/ansible/modules/cloud/vmware/vcenter_license.py
|
9
|
9447
|
#!/usr/bin/python
# Copyright: (c) 2017, Dag Wieers (@dagwieers) <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
module: vcenter_license
short_description: Manage VMware vCenter license keys
description:
- Add and delete vCenter, ESXi server license keys.
version_added: '2.4'
author:
- Dag Wieers (@dagwieers)
requirements:
- pyVmomi
options:
labels:
description:
- The optional labels of the license key to manage in vSphere vCenter.
- This is a dictionary of key/value pairs.
default: {
'source': 'ansible'
}
license:
description:
- The license key to manage in vSphere vCenter.
required: yes
state:
description:
- Whether to add (C(present)) or remove (C(absent)) the license key.
choices: [absent, present]
default: present
esxi_hostname:
description:
- The hostname of the ESXi server to which the specified license will be assigned.
- This parameter is optional.
version_added: '2.8'
datacenter:
description:
- The datacenter name to use for the operation.
type: str
version_added: '2.9'
cluster_name:
description:
- Name of the cluster to apply vSAN license.
type: str
version_added: '2.9'
notes:
- This module will also auto-assign the current vCenter to the license key
if the product matches the license key, and vCenter is currently assigned
an evaluation license only.
- The evaluation license (00000-00000-00000-00000-00000) is not listed
when unused.
- If C(esxi_hostname) is specified, the C(license) key will be assigned to
the ESXi host.
extends_documentation_fragment: vmware.vcenter_documentation
'''
EXAMPLES = r'''
- name: Add a new vCenter license
vcenter_license:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
license: f600d-21ae3-5592b-249e0-cc341
state: present
delegate_to: localhost
- name: Remove an (unused) vCenter license
vcenter_license:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
license: f600d-21ae3-5592b-249e0-cc341
state: absent
delegate_to: localhost
- name: Add ESXi license and assign to the ESXi host
vcenter_license:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
esxi_hostname: '{{ esxi_hostname }}'
license: f600d-21ae3-5592b-249e0-dd502
state: present
delegate_to: localhost
- name: Add vSAN license and assign to the given cluster
vcenter_license:
hostname: '{{ vcenter_hostname }}'
username: '{{ vcenter_username }}'
password: '{{ vcenter_password }}'
datacenter: '{{ datacenter_name }}'
cluster_name: '{{ cluster_name }}'
license: f600d-21ae3-5592b-249e0-dd502
state: present
delegate_to: localhost
'''
RETURN = r'''
licenses:
description: list of license keys after the module has executed
returned: always
type: list
sample:
- f600d-21ae3-5592b-249e0-cc341
- 143cc-0e942-b2955-3ea12-d006f
'''
try:
from pyVmomi import vim
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_native
from ansible.module_utils.vmware import PyVmomi, vmware_argument_spec, find_hostsystem_by_name
class VcenterLicenseMgr(PyVmomi):
def __init__(self, module):
super(VcenterLicenseMgr, self).__init__(module)
def find_key(self, licenses, license):
for item in licenses:
if item.licenseKey == license:
return item
return None
def list_keys(self, licenses):
keys = []
for item in licenses:
# Filter out evaluation license key
if item.used is None:
continue
keys.append(item.licenseKey)
return keys
def main():
argument_spec = vmware_argument_spec()
argument_spec.update(dict(
labels=dict(type='dict', default=dict(source='ansible')),
license=dict(type='str', required=True),
state=dict(type='str', default='present', choices=['absent', 'present']),
esxi_hostname=dict(type='str'),
datacenter=dict(type='str'),
cluster_name=dict(type='str'),
))
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
)
license = module.params['license']
state = module.params['state']
# FIXME: This does not seem to work on vCenter v6.0
labels = []
for k in module.params['labels']:
kv = vim.KeyValue()
kv.key = k
kv.value = module.params['labels'][k]
labels.append(kv)
result = dict(
changed=False,
diff=dict(),
)
pyv = VcenterLicenseMgr(module)
if not pyv.is_vcenter():
module.fail_json(msg="vcenter_license is meant for vCenter, hostname %s "
"is not vCenter server." % module.params.get('hostname'))
lm = pyv.content.licenseManager
result['licenses'] = pyv.list_keys(lm.licenses)
if module._diff:
result['diff']['before'] = '\n'.join(result['licenses']) + '\n'
if state == 'present':
if license not in result['licenses']:
result['changed'] = True
if module.check_mode:
result['licenses'].append(license)
else:
lm.AddLicense(license, labels)
key = pyv.find_key(lm.licenses, license)
if key is not None:
lam = lm.licenseAssignmentManager
assigned_license = None
datacenter = module.params['datacenter']
datacenter_obj = None
if datacenter:
datacenter_obj = pyv.find_datacenter_by_name(datacenter)
if not datacenter_obj:
module.fail_json(msg="Unable to find the datacenter %(datacenter)s" % module.params)
cluster = module.params['cluster_name']
if cluster:
cluster_obj = pyv.find_cluster_by_name(cluster_name=cluster, datacenter_name=datacenter_obj)
if not cluster_obj:
msg = "Unable to find the cluster %(cluster_name)s"
if datacenter:
msg += " in datacenter %(datacenter)s"
module.fail_json(msg=msg % module.params)
entityId = cluster_obj._moId
# assign to current vCenter, if esxi_hostname is not specified
elif module.params['esxi_hostname'] is None:
entityId = pyv.content.about.instanceUuid
# warn if the key name does not contain the vCenter product name (e.g. "VMware vCenter Server")
if pyv.content.about.name not in key.name:
module.warn('License key "%s" (%s) is not suitable for "%s"' % (license, key.name, pyv.content.about.name))
# assign to ESXi server
else:
esxi_host = find_hostsystem_by_name(pyv.content, module.params['esxi_hostname'])
if esxi_host is None:
module.fail_json(msg='Cannot find the specified ESXi host "%s".' % module.params['esxi_hostname'])
entityId = esxi_host._moId
# e.g., key.editionKey is "esx.enterprisePlus.cpuPackage", not sure all keys are in this format
if 'esx' not in key.editionKey:
module.warn('License key "%s" edition "%s" is not suitable for ESXi server' % (license, key.editionKey))
try:
assigned_license = lam.QueryAssignedLicenses(entityId=entityId)
except Exception as e:
module.fail_json(msg='Could not query vCenter "%s" assigned license info due to %s.' % (entityId, to_native(e)))
if not assigned_license or (len(assigned_license) != 0 and assigned_license[0].assignedLicense.licenseKey != license):
try:
lam.UpdateAssignedLicense(entity=entityId, licenseKey=license)
except Exception:
module.fail_json(msg='Could not assign "%s" (%s) to vCenter.' % (license, key.name))
result['changed'] = True
result['licenses'] = pyv.list_keys(lm.licenses)
else:
module.fail_json(msg='License "%s" does not exist or cannot be added' % license)
if module._diff:
result['diff']['after'] = '\n'.join(result['licenses']) + '\n'
elif state == 'absent' and license in result['licenses']:
# Check if key is in use
key = pyv.find_key(lm.licenses, license)
if key.used > 0:
module.fail_json(msg='Cannot remove key "%s", still in use %s time(s).' % (license, key.used))
result['changed'] = True
if module.check_mode:
result['licenses'].remove(license)
else:
lm.RemoveLicense(license)
result['licenses'] = pyv.list_keys(lm.licenses)
if module._diff:
result['diff']['after'] = '\n'.join(result['licenses']) + '\n'
module.exit_json(**result)
if __name__ == '__main__':
main()
|
gpl-3.0
|
memsharded/conan
|
conans/test/unittests/search/disk_search_test.py
|
1
|
3782
|
import os
import unittest
from conans.client.cache.cache import ClientCache
from conans.client.tools import chdir
from conans.model.info import ConanInfo
from conans.model.ref import ConanFileReference
from conans.paths import (BUILD_FOLDER, CONANINFO, EXPORT_FOLDER, PACKAGES_FOLDER)
from conans.search.search import search_packages, search_recipes
from conans.test.utils.test_files import temp_folder
from conans.test.utils.tools import TestBufferConanOutput
from conans.util.files import save, mkdir
class SearchTest(unittest.TestCase):
def setUp(self):
folder = temp_folder()
self.cache = ClientCache(folder, output=TestBufferConanOutput())
mkdir(self.cache.store)
def basic_test2(self):
with chdir(self.cache.store):
ref1 = ConanFileReference.loads("opencv/2.4.10@lasote/testing")
root_folder = str(ref1).replace("@", "/")
artifacts = ["a", "b", "c"]
reg1 = "%s/%s" % (root_folder, EXPORT_FOLDER)
os.makedirs(reg1)
for artif_id in artifacts:
build1 = "%s/%s/%s" % (root_folder, BUILD_FOLDER, artif_id)
artif1 = "%s/%s/%s" % (root_folder, PACKAGES_FOLDER, artif_id)
os.makedirs(build1)
info = ConanInfo().loads("[settings]\n[options]")
save(os.path.join(artif1, CONANINFO), info.dumps())
packages = search_packages(self.cache.package_layout(ref1), "")
all_artif = [_artif for _artif in sorted(packages)]
self.assertEqual(all_artif, artifacts)
def pattern_test(self):
with chdir(self.cache.store):
references = ["opencv/2.4.%s@lasote/testing" % ref for ref in ("1", "2", "3")]
refs = [ConanFileReference.loads(reference) for reference in references]
for ref in refs:
root_folder = str(ref).replace("@", "/")
reg1 = "%s/%s" % (root_folder, EXPORT_FOLDER)
os.makedirs(reg1)
recipes = search_recipes(self.cache, "opencv/*@lasote/testing")
self.assertEqual(recipes, refs)
def case_insensitive_test(self):
with chdir(self.cache.store):
root_folder2 = "sdl/1.5/lasote/stable"
ref2 = ConanFileReference.loads("sdl/1.5@lasote/stable")
os.makedirs("%s/%s" % (root_folder2, EXPORT_FOLDER))
root_folder3 = "assimp/0.14/phil/testing"
ref3 = ConanFileReference.loads("assimp/0.14@phil/testing")
os.makedirs("%s/%s" % (root_folder3, EXPORT_FOLDER))
root_folder4 = "sdl/2.10/lasote/stable"
ref4 = ConanFileReference.loads("sdl/2.10@lasote/stable")
os.makedirs("%s/%s" % (root_folder4, EXPORT_FOLDER))
root_folder5 = "SDL_fake/1.10/lasote/testing"
ref5 = ConanFileReference.loads("SDL_fake/1.10@lasote/testing")
os.makedirs("%s/%s" % (root_folder5, EXPORT_FOLDER))
# Case insensitive searches
reg_conans = sorted([str(_reg) for _reg in search_recipes(self.cache, "*")])
self.assertEqual(reg_conans, [str(ref5),
str(ref3),
str(ref2),
str(ref4)])
reg_conans = sorted([str(_reg) for _reg in search_recipes(self.cache,
pattern="sdl*")])
self.assertEqual(reg_conans, [str(ref5), str(ref2), str(ref4)])
# Case sensitive search
self.assertEqual(str(search_recipes(self.cache, pattern="SDL*",
ignorecase=False)[0]),
str(ref5))
|
mit
|
360youlun/django-nose
|
django_nose/tools.py
|
52
|
1647
|
# vim: tabstop=4 expandtab autoindent shiftwidth=4 fileencoding=utf-8
"""
Provides Nose and Django test case assert functions
"""
from django.test.testcases import TransactionTestCase
from django.core import mail
import re
## Python
from nose import tools
for t in dir(tools):
if t.startswith('assert_'):
vars()[t] = getattr(tools, t)
## Django
caps = re.compile('([A-Z])')
def pep8(name):
return caps.sub(lambda m: '_' + m.groups()[0].lower(), name)
class Dummy(TransactionTestCase):
def nop():
pass
_t = Dummy('nop')
for at in [ at for at in dir(_t)
if at.startswith('assert') and not '_' in at ]:
pepd = pep8(at)
vars()[pepd] = getattr(_t, at)
del Dummy
del _t
del pep8
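# (Added note) the loop above re-exports the TransactionTestCase assertions under
# pep8-style names, e.g. assertEqual -> assert_equal and assertRaises ->
# assert_raises, so tests can call them as plain functions:
#   from django_nose.tools import assert_equal, assert_ok
#   assert_equal(1 + 1, 2)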
## New
def assert_code(response, status_code, msg_prefix=''):
"""Asserts the response was returned with the given status code
"""
if msg_prefix:
msg_prefix = '%s: ' % msg_prefix
assert response.status_code == status_code, \
'Response code was %d (expected %d)' % \
(response.status_code, status_code)
def assert_ok(response, msg_prefix=''):
"""Asserts the response was returned with status 200 (OK)
"""
return assert_code(response, 200, msg_prefix=msg_prefix)
def assert_mail_count(count, msg=None):
"""Assert the number of emails sent.
The message here tends to be long, so allow for replacing the whole
thing instead of prefixing.
"""
if msg is None:
msg = ', '.join([e.subject for e in mail.outbox])
msg = '%d != %d %s' % (len(mail.outbox), count, msg)
assert_equals(len(mail.outbox), count, msg)
# EOF
|
bsd-3-clause
|
ssundaresan/sculpte-simulator
|
flow.py
|
1
|
5360
|
import sys
import os
import math
import random as rnd
import numpy as np
from numpy import random as rv
import route
import gen
import heapq
class flowgen(object):
'''
Flow generator class. Instantiates internal objects
for flow arrival, flow size and throughput assignment.
'''
def __init__(self,fptr,tm):
self.time = 0
self.flowid = 0
fsize = gen.get_param(fptr,"Flow Size")
if fsize.lower() == 'exponential':
self.sizegen = size_exp(fptr) # object for flow size generation
if fsize.lower() == 'pareto':
self.sizegen = size_pareto(fptr) # object for flow size generation
arrprocess = gen.get_param(fptr,"Flow Arrival")
if arrprocess.lower() == 'poisson': # object for flow arrival
self.arrival = arr_poisson(fptr,tm,self.sizegen.avgflowsize())
tput = gen.get_param(fptr,"Flow Tput")
if tput.lower() == 'standard':
self.tput = tput_std() # object for throughput assignment
self.numslices = int(gen.get_param(fptr,"numslices"))
self.routing = gen.get_param(fptr,"routing")
def get_sbits(self,num):
'''
assign splicing bits to flow
'''
sbits = ''
if self.routing.lower() == 'normal':
sbits += str(int(rnd.random()*self.numslices))
if self.routing.lower() == 'splicing':
for i in range(0,num):
#sbits += str(rnd.randrange(0,self.numslices,1)) + "-"
sbits += str(int(rnd.random()*self.numslices)) + "-"
sbits = sbits[0:len(sbits)-1]
return sbits
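# (Added note) e.g. with numslices=4, 'normal' routing yields a single slice
# digit such as '2', while 'splicing' with num=3 yields a dash-separated
# string such as '1-0-3'.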
def next(self,tm):
'''
Fn to generate next flow
'''
arrevent = self.arrival.next(tm)
s = arrevent["s"]
d = arrevent["d"]
arrtime = arrevent["time"]
flowsize = self.sizegen.next()
tput = self.tput.next()
sbits = self.get_sbits(len(tm))
#sprt = rnd.randrange(0,20000)
#dprt = rnd.randrange(0,20000)
sprt = int(rnd.random()*20000)
dprt = int(rnd.random()*20000)
nflow = {"id":self.flowid,"type":"flow-arr","s":s,"d":d,"time":arrtime,\
"size":flowsize,"tput":tput,"sbits":sbits,"sprt":sprt,"dprt":dprt}
self.flowid += 1
return nflow
class size_exp(object):
'''
Class for exponentially distributed flow generator.
Should make it inherit from parent generic class.
'''
def __init__(self,fptr):
self.param = float(gen.get_param(fptr,"Lambda"))
def avgflowsize(self):
return self.param
def next(self):
size = rv.exponential(self.param,1)
return size[0]
class size_pareto(object):
def __init__(self,fptr):
self.alpha = 1.3
self.param = float(gen.get_param(fptr,"Lambda"))
self.U = 8000
#self.L = self.param*(self.alpha-1)/self.alpha
self.L = float(gen.get_param(fptr,"Pareto L"))
def avgflowsize(self):
return self.param
def next(self):
alpha = self.alpha
U = self.U
L = self.L
exp = 1.0/alpha
if U is None:
val = L/(math.pow(rnd.random(),exp) )
else:
r = rnd.random()
val = pow((-(r*pow(U,alpha) - r*pow(L,alpha) - pow(U,alpha))/(pow(U,alpha)*pow(L,alpha))),-exp)
return val
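# (Added note) the expression above is the inverse CDF of a bounded
# Pareto(L, U, alpha) distribution,
#   F_inv(r) = (-(r*U**a - r*L**a - U**a) / (U**a * L**a)) ** (-1.0/a),
# so each call draws one flow size in [L, U] by inverse-transform sampling.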
class tput_std(object):
def __init__(self):
#self.tputarr = [0.5,1.0,5]
self.tputarr = [0.5,2,10]
#self.tputarr = [0.0005]
self.num = len(self.tputarr)
def next(self):
#rndch = rnd.randrange(0,self.num)
rndch = rnd.random()
l = len(self.tputarr) - 1
if rndch < 0.3:
return self.tputarr[0]
if rndch < 0.9:
return self.tputarr[min(1,l)]
return self.tputarr[min(2,l)]
#return self.tputarr[rndch]
class arr_poisson(object):
'''
Poisson arrival generator
'''
def __init__(self,fptr,tm,flowsize):
self.simdur = float(gen.get_param(fptr,"SIM_DUR"))
self.flowsize = flowsize
self.allnextarr = []
self.initarrivals(tm)
def initarrivals(self,tm):
'''
Initialize array with arrivals for all s-d pairs.
'''
for s in tm:
for d in tm:
if s == d:
continue
arrtime = self.interarr(s,d,tm[s][d],0.0)
arrevent = {"s":s,"d":d,"time":arrtime}
self.allnextarr.append((arrevent["time"],arrevent))
#self.allnextarr.sort(lambda x,y:cmp(x["time"],y["time"]))
heapq.heapify(self.allnextarr)
def insert(self,event):
'''
Fn to insert a new event in the arrival queue.
'''
#inspos = gen.get_pos(self.allnextarr,'time',event)
#self.allnextarr.insert(inspos,event)
eventtuple = (event['time'],event)
heapq.heappush(self.allnextarr,eventtuple)
def pop(self):
eventtuple = heapq.heappop(self.allnextarr)
#print "flow eventtuple " + str(eventtuple)
event = eventtuple[1]
return event
def next(self,tm):
'''
choose earliest arrival from allnextarr and replace.
'''
#arrevent = self.allnextarr.pop(0)
arrevent = self.pop()
s = arrevent["s"]
d = arrevent["d"]
arrtime = arrevent["time"]
arrtime = self.interarr(s,d,tm[s][d],arrtime)
nextevent = {"s":s,"d":d,"time":arrtime}
#self.allnextarr.append(nextevent)
#self.allnextarr.sort(lambda x,y:cmp(x["time"],y["time"]))
self.insert(nextevent)
return arrevent
def interarr(self,s,d,avgtraff,time):
if float(avgtraff) == 0:
return self.simdur + 1
rate = float(avgtraff)/self.flowsize
scale = 1.0/rate
nextarr = rv.exponential(scale,1)
return nextarr[0] + time
|
gpl-2.0
|
vgupta6/Project-2
|
modules/tests/core/core_dataTable.py
|
2
|
12017
|
__all__ = ["dt_filter",
"dt_row_cnt",
"dt_data",
"dt_data_item",
"dt_find",
"dt_links",
"dt_action",
]
# @ToDo: There are performance issues
# - need to profile and find out which functions are the bottlenecks
import time
from gluon import current
# -----------------------------------------------------------------------------
def convert_repr_number (number):
"""
Helper function to convert a string representation back to a number.
Assumptions:
* It may have a thousand separator
* It may have a decimal point
* If it has a thousand separator then it will have a decimal point
It will return False if the number doesn't look valid
"""
sep = ""
dec = ""
part_one = "0"
part_two = ""
for digit in number:
if digit.isdigit():
if sep == "":
part_one += digit
else:
part_two += digit
else:
if digit == "-" and part_one == "0":
part_one = "-0"
elif sep == "" and sep != digit:
sep = digit
elif dec == "":
dec = digit
part_two += "."
else:
# Doesn't look like a valid number repr so return
return False
if dec == "":
return float("%s.%s" % (part_one, part_two))
else:
return float("%s%s" % (part_one, part_two))
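# (Added note) illustrative conversions:
#   convert_repr_number("1,234.56") -> 1234.56
#   convert_repr_number("-42") -> -42.0
#   convert_repr_number("1,2.3,4") -> False (third separator; not a valid number)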
# -----------------------------------------------------------------------------
def dt_filter(reporter,
search_string=" ",
forceClear = True,
quiet = True):
"""
Filter the dataTable
"""
if forceClear:
if not dt_filter(reporter,
forceClear = False,
quiet = quiet):
return False
config = current.test_config
browser = config.browser
sleep_limit = 10
elem = browser.find_element_by_css_selector('label > input[type="text"]')
elem.clear()
elem.send_keys(search_string)
time.sleep(1) # give time for the list_processing element to appear
waiting_elem = browser.find_element_by_id("list_processing")
sleep_time = 0
while (waiting_elem.value_of_css_property("visibility") == "visible"):
time.sleep(1)
sleep_time += 1
if sleep_time > sleep_limit:
if not quiet:
reporter("DataTable filter didn't respond within %d seconds" % sleep_limit)
return False
return True
# -----------------------------------------------------------------------------
def dt_row_cnt(reporter,
check = (),
quiet = True,
utObj = None):
"""
return the rows that are being displayed and the total rows in the dataTable
"""
config = current.test_config
browser = config.browser
elem = browser.find_element_by_id("list_info")
details = elem.text
if not quiet:
reporter(details)
words = details.split()
start = int(words[1])
end = int(words[3])
length = int(words[5])
filtered = None
if len(words) > 10:
filtered = int(words[9])
if check != ():
if len(check) == 3:
expected = "Showing %d to %d of %d entries" % check
actual = "Showing %d to %d of %d entries" % (start, end, length)
msg = "Expected result of '%s' doesn't equal '%s'" % (expected, actual)
if utObj != None:
utObj.assertEqual((start, end, length) == check, msg)
else:
assert (start, end, length) == check, msg
elif len(check) == 4:
expected = "Showing %d to %d of %d entries (filtered from %d total entries)" % check
if filtered:
actual = "Showing %d to %d of %d entries (filtered from %d total entries)" % (start, end, length, filtered)
else:
actual = "Showing %d to %d of %d entries" % (start, end, length)
msg = "Expected result of '%s' doesn't equal '%s'" % (expected, actual)
if utObj != None:
utObj.assertEqual((start, end, length) == check, msg)
else:
assert (start, end, length, filtered) == check, msg
if len(words) > 10:
return (start, end, length, filtered)
else:
return (start, end, length)
# -----------------------------------------------------------------------------
def dt_data(row_list = None,
add_header = False):
""" return the data in the displayed dataTable """
config = current.test_config
browser = config.browser
cell = browser.find_element_by_id("table-container")
text = cell.text
parts = text.splitlines()
records = []
cnt = 0
lastrow = ""
header = ""
for row in parts:
if row.startswith("Detail"):
header = lastrow
row = row[8:]
if row_list == None or cnt in row_list:
records.append(row)
cnt += 1
else:
lastrow = row
if add_header:
return [header] + records
return records
# -----------------------------------------------------------------------------
def dt_data_item(row = 1,
column = 1,
tableID = "list",
):
""" Returns the data found in the cell of the dataTable """
config = current.test_config
browser = config.browser
td = ".//*[@id='%s']/tbody/tr[%s]/td[%s]" % (tableID, row, column)
try:
elem = browser.find_element_by_xpath(td)
return elem.text
except:
return False
# -----------------------------------------------------------------------------
def dt_find(search = "",
row = None,
column = None,
cellList = None,
tableID = "list",
first = False,
):
"""
Find the cells where search is found in the dataTable
search: the string to search for. If you pass in a number (int, float)
then the function will attempt to convert all text values to
a float for comparison by using the convert_repr_number helper
function
row: The row or list of rows to search along
column: The column or list of columns to search down
cellList: This is a list of cells which may be returned from a previous
call, these cells will be searched again for the search string.
However if a row or column value is also provided then for
each cell in cellList the column or row will be offset.
For example cellList = [(3,1)] and column = 5, means rather
than looking in cell (3,1) the function will look in cell (3,5)
tableID: The HTML id of the table
first: Stop on the first match, or find all matches
Example of use (test url: /inv/warehouse/n/inv_item
{where n is the warehouse id}
):
match = dt_find("Plastic Sheets")
if match:
if not dt_find(4200, cellList=match, column=5, first=True):
assert 0, "Unable to find 4200 Plastic Sheets"
else:
assert 0, "Unable to find any Plastic Sheets"
"""
config = current.test_config
browser = config.browser
def find_match(search, tableID, r, c):
td = ".//*[@id='%s']/tbody/tr[%s]/td[%s]" % (tableID, r, c)
try:
elem = browser.find_element_by_xpath(td)
text = elem.text
if isinstance(search,(int, float)):
text = convert_repr_number(text)
if text == search:
return (r, c)
except:
return False
result = []
if cellList:
for cell in cellList:
if row:
r = row
else:
r = cell[0]
if column:
c = column
else:
c = cell[1]
found = find_match(search, tableID, r, c)
if found:
result.append(found)
if first:
return result
else:
# Calculate the rows that need to be navigated along to find the search string
colList = []
rowList = []
if row == None:
r = 1
while True:
tr = ".//*[@id='%s']/tbody/tr[%s]" % (tableID, r)
try:
elem = browser.find_element_by_xpath(tr)
rowList.append(r)
r += 1
except:
break
elif isinstance(row, int):
rowList = [row]
else:
rowList = row
# Calculate the columns that need to be navigated down to find the search string
if column == None:
c = 1
while True:
td = ".//*[@id='%s']/tbody/tr[1]/td[%s]" % (tableID, c)
try:
elem = browser.find_element_by_xpath(td)
colList.append(c)
c += 1
except:
break
elif isinstance(column, int):
colList = [column]
else:
colList = column
# Now try and find a match
for r in rowList:
for c in colList:
found = find_match(search, tableID, r, c)
if found:
result.append(found)
if first:
return result
return result
# -----------------------------------------------------------------------------
def dt_links(reporter,
row = 1,
tableID = "list",
quiet = True
):
""" Returns a list of links in the given row of the dataTable """
config = current.test_config
browser = config.browser
links = []
# loop through each column
column = 1
while True:
td = ".//*[@id='%s']/tbody/tr[%s]/td[%s]" % (tableID, row, column)
try:
elem = browser.find_element_by_xpath(td)
except:
break
# loop through looking for links in the cell
cnt = 1
while True:
link = ".//*[@id='%s']/tbody/tr[%s]/td[%s]/a[%s]" % (tableID, row, column, cnt)
try:
elem = browser.find_element_by_xpath(link)
except:
break
cnt += 1
if not quiet:
reporter("%2d) %s" % (column, elem.text))
links.append([column,elem.text])
column += 1
return links
# -----------------------------------------------------------------------------
def dt_action(row = 1,
action = None,
column = 1,
tableID = "list",
):
""" click the action button in the dataTable """
config = current.test_config
browser = config.browser
# What looks like a fairly fragile xpath, but it should work unless DataTable changes
if action:
button = ".//*[@id='%s']/tbody/tr[%s]/td[%s]/a[contains(text(),'%s')]" % (tableID, row, column, action)
else:
button = ".//*[@id='%s']/tbody/tr[%s]/td[%s]/a" % (tableID, row, column)
giveup = 0.0
sleeptime = 0.2
while giveup < 10.0:
try:
element = browser.find_element_by_xpath(button)
url = element.get_attribute("href")
if url:
browser.get(url)
return True
except Exception as inst:
print "%s with %s" % (type(inst), button)
time.sleep(sleeptime)
giveup += sleeptime
return False
# END =========================================================================
|
mit
|
jittat/cafe-grader-web
|
lib/assets/Lib/timeit.py
|
9
|
11830
|
#! /usr/bin/python3.4
"""Tool for measuring execution time of small code snippets.
This module avoids a number of common traps for measuring execution
times. See also Tim Peters' introduction to the Algorithms chapter in
the Python Cookbook, published by O'Reilly.
Library usage: see the Timer class.
Command line usage:
python timeit.py [-n N] [-r N] [-s S] [-t] [-c] [-p] [-h] [--] [statement]
Options:
-n/--number N: how many times to execute 'statement' (default: see below)
-r/--repeat N: how many times to repeat the timer (default 3)
-s/--setup S: statement to be executed once initially (default 'pass')
-p/--process: use time.process_time() (default is time.perf_counter())
-t/--time: use time.time() (deprecated)
-c/--clock: use time.clock() (deprecated)
-v/--verbose: print raw timing results; repeat for more digits precision
-h/--help: print this usage message and exit
--: separate options from statement, use when statement starts with -
statement: statement to be timed (default 'pass')
A multi-line statement may be given by specifying each line as a
separate argument; indented lines are possible by enclosing an
argument in quotes and using leading spaces. Multiple -s options are
treated similarly.
If -n is not given, a suitable number of loops is calculated by trying
successive powers of 10 until the total time is at least 0.2 seconds.
Note: there is a certain baseline overhead associated with executing a
pass statement. It differs between versions. The code here doesn't try
to hide it, but you should be aware of it. The baseline overhead can be
measured by invoking the program without arguments.
Classes:
Timer
Functions:
timeit(string, string) -> float
repeat(string, string) -> list
default_timer() -> float
"""
import gc
import sys
import time
import itertools
__all__ = ["Timer", "timeit", "repeat", "default_timer"]
dummy_src_name = "<timeit-src>"
default_number = 1000000
default_repeat = 3
default_timer = time.perf_counter
# Don't change the indentation of the template; the reindent() calls
# in Timer.__init__() depend on setup being indented 4 spaces and stmt
# being indented 8 spaces.
template = """
def inner(_it, _timer):
{setup}
_t0 = _timer()
for _i in _it:
{stmt}
_t1 = _timer()
return _t1 - _t0
"""
def reindent(src, indent):
"""Helper to reindent a multi-line statement."""
return src.replace("\n", "\n" + " "*indent)
def _template_func(setup, func):
"""Create a timer function. Used if the "statement" is a callable."""
def inner(_it, _timer, _func=func):
setup()
_t0 = _timer()
for _i in _it:
_func()
_t1 = _timer()
return _t1 - _t0
return inner
class Timer:
"""Class for timing execution speed of small code snippets.
The constructor takes a statement to be timed, an additional
statement used for setup, and a timer function. Both statements
default to 'pass'; the timer function is platform-dependent (see
module doc string).
To measure the execution time of the first statement, use the
timeit() method. The repeat() method is a convenience to call
timeit() multiple times and return a list of results.
The statements may contain newlines, as long as they don't contain
multi-line string literals.
"""
def __init__(self, stmt="pass", setup="pass", timer=default_timer):
"""Constructor. See class doc string."""
self.timer = timer
ns = {}
if isinstance(stmt, str):
stmt = reindent(stmt, 8)
if isinstance(setup, str):
setup = reindent(setup, 4)
src = template.format(stmt=stmt, setup=setup)
elif callable(setup):
src = template.format(stmt=stmt, setup='_setup()')
ns['_setup'] = setup
else:
raise ValueError("setup is neither a string nor callable")
self.src = src # Save for traceback display
code = compile(src, dummy_src_name, "exec")
exec(code, globals(), ns)
self.inner = ns["inner"]
elif callable(stmt):
self.src = None
if isinstance(setup, str):
_setup = setup
def setup():
exec(_setup, globals(), ns)
elif not callable(setup):
raise ValueError("setup is neither a string nor callable")
self.inner = _template_func(setup, stmt)
else:
raise ValueError("stmt is neither a string nor callable")
def print_exc(self, file=None):
"""Helper to print a traceback from the timed code.
Typical use:
t = Timer(...) # outside the try/except
try:
t.timeit(...) # or t.repeat(...)
except:
t.print_exc()
The advantage over the standard traceback is that source lines
in the compiled template will be displayed.
The optional file argument directs where the traceback is
sent; it defaults to sys.stderr.
"""
import linecache, traceback
if self.src is not None:
linecache.cache[dummy_src_name] = (len(self.src),
None,
self.src.split("\n"),
dummy_src_name)
# else the source is already stored somewhere else
traceback.print_exc(file=file)
def timeit(self, number=default_number):
"""Time 'number' executions of the main statement.
To be precise, this executes the setup statement once, and
then returns the time it takes to execute the main statement
a number of times, as a float measured in seconds. The
argument is the number of times through the loop, defaulting
to one million. The main statement, the setup statement and
the timer function to be used are passed to the constructor.
"""
it = itertools.repeat(None, number)
gcold = gc.isenabled()
gc.disable()
try:
timing = self.inner(it, self.timer)
finally:
if gcold:
gc.enable()
return timing
def repeat(self, repeat=default_repeat, number=default_number):
"""Call timeit() a few times.
This is a convenience function that calls the timeit()
repeatedly, returning a list of results. The first argument
specifies how many times to call timeit(), defaulting to 3;
the second argument specifies the number argument, defaulting
to one million.
Note: it's tempting to calculate mean and standard deviation
from the result vector and report these. However, this is not
very useful. In a typical case, the lowest value gives a
lower bound for how fast your machine can run the given code
snippet; higher values in the result vector are typically not
caused by variability in Python's speed, but by other
processes interfering with your timing accuracy. So the min()
of the result is probably the only number you should be
interested in. After that, you should look at the entire
vector and apply common sense rather than statistics.
"""
r = []
for i in range(repeat):
t = self.timeit(number)
r.append(t)
return r
def timeit(stmt="pass", setup="pass", timer=default_timer,
number=default_number):
"""Convenience function to create Timer object and call timeit method."""
return Timer(stmt, setup, timer).timeit(number)
def repeat(stmt="pass", setup="pass", timer=default_timer,
repeat=default_repeat, number=default_number):
"""Convenience function to create Timer object and call repeat method."""
return Timer(stmt, setup, timer).repeat(repeat, number)
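# (Added usage sketch, not part of the module)
#   t = Timer(stmt="'-'.join(map(str, range(100)))")
#   best = min(t.repeat(repeat=3, number=10000))   # best total seconds for 10000 loops
#   usec_per_loop = best * 1e6 / 10000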
def main(args=None, *, _wrap_timer=None):
"""Main program, used when run as a script.
The optional 'args' argument specifies the command line to be parsed,
defaulting to sys.argv[1:].
The return value is an exit code to be passed to sys.exit(); it
may be None to indicate success.
When an exception happens during timing, a traceback is printed to
stderr and the return value is 1. Exceptions at other times
(including the template compilation) are not caught.
'_wrap_timer' is an internal interface used for unit testing. If it
is not None, it must be a callable that accepts a timer function
and returns another timer function (used for unit testing).
"""
if args is None:
args = sys.argv[1:]
import getopt
try:
opts, args = getopt.getopt(args, "n:s:r:tcpvh",
["number=", "setup=", "repeat=",
"time", "clock", "process",
"verbose", "help"])
except getopt.error as err:
print(err)
print("use -h/--help for command line help")
return 2
timer = default_timer
stmt = "\n".join(args) or "pass"
number = 0 # auto-determine
setup = []
repeat = default_repeat
verbose = 0
precision = 3
for o, a in opts:
if o in ("-n", "--number"):
number = int(a)
if o in ("-s", "--setup"):
setup.append(a)
if o in ("-r", "--repeat"):
repeat = int(a)
if repeat <= 0:
repeat = 1
if o in ("-t", "--time"):
timer = time.time
if o in ("-c", "--clock"):
timer = time.clock
if o in ("-p", "--process"):
timer = time.process_time
if o in ("-v", "--verbose"):
if verbose:
precision += 1
verbose += 1
if o in ("-h", "--help"):
print(__doc__, end=' ')
return 0
setup = "\n".join(setup) or "pass"
# Include the current directory, so that local imports work (sys.path
# contains the directory of this script, rather than the current
# directory)
import os
sys.path.insert(0, os.curdir)
if _wrap_timer is not None:
timer = _wrap_timer(timer)
t = Timer(stmt, setup, timer)
if number == 0:
# determine number so that 0.2 <= total time < 2.0
for i in range(1, 10):
number = 10**i
try:
x = t.timeit(number)
except:
t.print_exc()
return 1
if verbose:
print("%d loops -> %.*g secs" % (number, precision, x))
if x >= 0.2:
break
try:
r = t.repeat(repeat, number)
except:
t.print_exc()
return 1
best = min(r)
if verbose:
print("raw times:", " ".join(["%.*g" % (precision, x) for x in r]))
print("%d loops," % number, end=' ')
usec = best * 1e6 / number
if usec < 1000:
print("best of %d: %.*g usec per loop" % (repeat, precision, usec))
else:
msec = usec / 1000
if msec < 1000:
print("best of %d: %.*g msec per loop" % (repeat, precision, msec))
else:
sec = msec / 1000
print("best of %d: %.*g sec per loop" % (repeat, precision, sec))
return None
if __name__ == "__main__":
sys.exit(main())
|
mit
|
amirrpp/django-oscar
|
sites/sandbox/urls.py
|
41
|
1577
|
from django.conf import settings
from django.conf.urls import include, url
from django.conf.urls.i18n import i18n_patterns
from django.conf.urls.static import static
from django.contrib import admin
from oscar.app import application
from oscar.views import handler500, handler404, handler403
from apps.sitemaps import base_sitemaps
admin.autodiscover()
urlpatterns = [
# Include admin as convenience. It's unsupported and only included
# for developers.
url(r'^admin/', include(admin.site.urls)),
# i18n URLS need to live outside of i18n_patterns scope of Oscar
url(r'^i18n/', include('django.conf.urls.i18n')),
# include a basic sitemap
url(r'^sitemap\.xml$', 'django.contrib.sitemaps.views.index', {
'sitemaps': base_sitemaps}),
url(r'^sitemap-(?P<section>.+)\.xml$',
'django.contrib.sitemaps.views.sitemap', {'sitemaps': base_sitemaps}),
]
# Prefix Oscar URLs with language codes
urlpatterns += i18n_patterns('',
# Custom functionality to allow dashboard users to be created
url(r'gateway/', include('apps.gateway.urls')),
# Oscar's normal URLs
url(r'', include(application.urls)),
)
if settings.DEBUG:
import debug_toolbar
# Serve statics and uploaded media
urlpatterns += static(settings.MEDIA_URL,
document_root=settings.MEDIA_ROOT)
# Allow error pages to be tested
urlpatterns += [
url(r'^403$', handler403),
url(r'^404$', handler404),
url(r'^500$', handler500),
url(r'^__debug__/', include(debug_toolbar.urls)),
]
|
bsd-3-clause
|
jcoady9/python-for-android
|
python3-alpha/extra_modules/gdata/apps/groups/service.py
|
48
|
12986
|
#!/usr/bin/python
#
# Copyright (C) 2008 Google, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Allow Google Apps domain administrators to manage groups, group members and group owners.
GroupsService: Provides methods to manage groups, members and owners.
"""
__author__ = '[email protected]'
import urllib.request, urllib.parse, urllib.error
import gdata.apps
import gdata.apps.service
import gdata.service
API_VER = '2.0'
BASE_URL = '/a/feeds/group/' + API_VER + '/%s'
GROUP_MEMBER_URL = BASE_URL + '?member=%s'
GROUP_MEMBER_DIRECT_URL = GROUP_MEMBER_URL + '&directOnly=%s'
GROUP_ID_URL = BASE_URL + '/%s'
MEMBER_URL = BASE_URL + '/%s/member'
MEMBER_WITH_SUSPENDED_URL = MEMBER_URL + '?includeSuspendedUsers=%s'
MEMBER_ID_URL = MEMBER_URL + '/%s'
OWNER_URL = BASE_URL + '/%s/owner'
OWNER_WITH_SUSPENDED_URL = OWNER_URL + '?includeSuspendedUsers=%s'
OWNER_ID_URL = OWNER_URL + '/%s'
PERMISSION_OWNER = 'Owner'
PERMISSION_MEMBER = 'Member'
PERMISSION_DOMAIN = 'Domain'
PERMISSION_ANYONE = 'Anyone'
class GroupsService(gdata.apps.service.PropertyService):
"""Client for the Google Apps Groups service."""
def _ServiceUrl(self, service_type, is_existed, group_id, member_id, owner_email,
direct_only=False, domain=None, suspended_users=False):
if domain is None:
domain = self.domain
if service_type == 'group':
if group_id != '' and is_existed:
return GROUP_ID_URL % (domain, group_id)
elif member_id != '':
if direct_only:
return GROUP_MEMBER_DIRECT_URL % (domain, urllib.parse.quote_plus(member_id),
self._Bool2Str(direct_only))
else:
return GROUP_MEMBER_URL % (domain, urllib.parse.quote_plus(member_id))
else:
return BASE_URL % (domain)
if service_type == 'member':
if member_id != '' and is_existed:
return MEMBER_ID_URL % (domain, group_id, urllib.parse.quote_plus(member_id))
elif suspended_users:
return MEMBER_WITH_SUSPENDED_URL % (domain, group_id,
self._Bool2Str(suspended_users))
else:
return MEMBER_URL % (domain, group_id)
if service_type == 'owner':
if owner_email != '' and is_existed:
return OWNER_ID_URL % (domain, group_id, urllib.parse.quote_plus(owner_email))
elif suspended_users:
return OWNER_WITH_SUSPENDED_URL % (domain, group_id,
self._Bool2Str(suspended_users))
else:
return OWNER_URL % (domain, group_id)
def _Bool2Str(self, b):
if b is None:
return None
return str(b is True).lower()
def _IsExisted(self, uri):
try:
self._GetProperties(uri)
return True
except gdata.apps.service.AppsForYourDomainException as e:
if e.error_code == gdata.apps.service.ENTITY_DOES_NOT_EXIST:
return False
else:
raise e
def CreateGroup(self, group_id, group_name, description, email_permission):
"""Create a group.
Args:
group_id: The ID of the group (e.g. us-sales).
group_name: The name of the group.
description: A description of the group
email_permission: The subscription permission of the group.
Returns:
A dict containing the result of the create operation.
"""
uri = self._ServiceUrl('group', False, group_id, '', '')
properties = {}
properties['groupId'] = group_id
properties['groupName'] = group_name
properties['description'] = description
properties['emailPermission'] = email_permission
return self._PostProperties(uri, properties)
def UpdateGroup(self, group_id, group_name, description, email_permission):
"""Update a group's name, description and/or permission.
Args:
group_id: The ID of the group (e.g. us-sales).
group_name: The name of the group.
description: A description of the group
email_permission: The subscription permission of the group.
Returns:
A dict containing the result of the update operation.
"""
uri = self._ServiceUrl('group', True, group_id, '', '')
properties = {}
properties['groupId'] = group_id
properties['groupName'] = group_name
properties['description'] = description
properties['emailPermission'] = email_permission
return self._PutProperties(uri, properties)
def RetrieveGroup(self, group_id):
"""Retrieve a group based on its ID.
Args:
group_id: The ID of the group (e.g. us-sales).
Returns:
A dict containing the result of the retrieve operation.
"""
uri = self._ServiceUrl('group', True, group_id, '', '')
return self._GetProperties(uri)
def RetrieveAllGroups(self):
"""Retrieve all groups in the domain.
Args:
None
Returns:
A list containing the result of the retrieve operation.
"""
uri = self._ServiceUrl('group', True, '', '', '')
return self._GetPropertiesList(uri)
def RetrievePageOfGroups(self, start_group=None):
"""Retrieve one page of groups in the domain.
Args:
start_group: The key to continue for pagination through all groups.
Returns:
A feed object containing the result of the retrieve operation.
"""
uri = self._ServiceUrl('group', True, '', '', '')
if start_group is not None:
uri += "?start="+start_group
property_feed = self._GetPropertyFeed(uri)
return property_feed
def RetrieveGroups(self, member_id, direct_only=False):
"""Retrieve all groups that belong to the given member_id.
Args:
member_id: The member's email address (e.g. [email protected]).
direct_only: Boolean; whether to return only groups that this member directly belongs to.
Returns:
A list containing the result of the retrieve operation.
"""
uri = self._ServiceUrl('group', True, '', member_id, '', direct_only=direct_only)
return self._GetPropertiesList(uri)
def DeleteGroup(self, group_id):
"""Delete a group based on its ID.
Args:
group_id: The ID of the group (e.g. us-sales).
Returns:
A dict containing the result of the delete operation.
"""
uri = self._ServiceUrl('group', True, group_id, '', '')
return self._DeleteProperties(uri)
def AddMemberToGroup(self, member_id, group_id):
"""Add a member to a group.
Args:
member_id: The member's email address (e.g. [email protected]).
group_id: The ID of the group (e.g. us-sales).
Returns:
A dict containing the result of the add operation.
"""
uri = self._ServiceUrl('member', False, group_id, member_id, '')
properties = {}
properties['memberId'] = member_id
return self._PostProperties(uri, properties)
def IsMember(self, member_id, group_id):
"""Check whether the given member already exists in the given group.
Args:
member_id: The member's email address (e.g. [email protected]).
group_id: The ID of the group (e.g. us-sales).
Returns:
True if the member exists in the group. False otherwise.
"""
uri = self._ServiceUrl('member', True, group_id, member_id, '')
return self._IsExisted(uri)
def RetrieveMember(self, member_id, group_id):
"""Retrieve the given member in the given group.
Args:
member_id: The member's email address (e.g. [email protected]).
group_id: The ID of the group (e.g. us-sales).
Returns:
A dict containing the result of the retrieve operation.
"""
uri = self._ServiceUrl('member', True, group_id, member_id, '')
return self._GetProperties(uri)
def RetrieveAllMembers(self, group_id, suspended_users=False):
"""Retrieve all members in the given group.
Args:
group_id: The ID of the group (e.g. us-sales).
suspended_users: A boolean; should we include any suspended users in
the membership list returned?
Returns:
A list containing the result of the retrieve operation.
"""
uri = self._ServiceUrl('member', True, group_id, '', '',
suspended_users=suspended_users)
return self._GetPropertiesList(uri)
def RetrievePageOfMembers(self, group_id, suspended_users=False, start=None):
"""Retrieve one page of members of a given group.
Args:
group_id: The ID of the group (e.g. us-sales).
suspended_users: A boolean; should we include any suspended users in
the membership list returned?
start: The key to continue for pagination through all members.
Returns:
A feed object containing the result of the retrieve operation.
"""
uri = self._ServiceUrl('member', True, group_id, '', '',
suspended_users=suspended_users)
if start is not None:
if suspended_users:
uri += "&start="+start
else:
uri += "?start="+start
property_feed = self._GetPropertyFeed(uri)
return property_feed
def RemoveMemberFromGroup(self, member_id, group_id):
"""Remove the given member from the given group.
Args:
member_id: The member's email address (e.g. [email protected]).
group_id: The ID of the group (e.g. us-sales).
Returns:
A dict containing the result of the remove operation.
"""
uri = self._ServiceUrl('member', True, group_id, member_id, '')
return self._DeleteProperties(uri)
def AddOwnerToGroup(self, owner_email, group_id):
"""Add an owner to a group.
Args:
owner_email: The email address of a group owner.
group_id: The ID of the group (e.g. us-sales).
Returns:
A dict containing the result of the add operation.
"""
uri = self._ServiceUrl('owner', False, group_id, '', owner_email)
properties = {}
properties['email'] = owner_email
return self._PostProperties(uri, properties)
def IsOwner(self, owner_email, group_id):
"""Check whether the given member an owner of the given group.
Args:
owner_email: The email address of a group owner.
group_id: The ID of the group (e.g. us-sales).
Returns:
True if the member is an owner of the given group. False otherwise.
"""
uri = self._ServiceUrl('owner', True, group_id, '', owner_email)
return self._IsExisted(uri)
def RetrieveOwner(self, owner_email, group_id):
"""Retrieve the given owner in the given group.
Args:
owner_email: The email address of a group owner.
group_id: The ID of the group (e.g. us-sales).
Returns:
A dict containing the result of the retrieve operation.
"""
uri = self._ServiceUrl('owner', True, group_id, '', owner_email)
return self._GetProperties(uri)
def RetrieveAllOwners(self, group_id, suspended_users=False):
"""Retrieve all owners of the given group.
Args:
group_id: The ID of the group (e.g. us-sales).
suspended_users: A boolean; should we include any suspended users in
the ownership list returned?
Returns:
A list containing the result of the retrieve operation.
"""
uri = self._ServiceUrl('owner', True, group_id, '', '',
suspended_users=suspended_users)
return self._GetPropertiesList(uri)
def RetrievePageOfOwners(self, group_id, suspended_users=False, start=None):
"""Retrieve one page of owners of the given group.
Args:
group_id: The ID of the group (e.g. us-sales).
suspended_users: A boolean; should we include any suspended users in
the ownership list returned?
start: The key to continue for pagination through all owners.
Returns:
A feed object containing the result of the retrieve operation.
"""
uri = self._ServiceUrl('owner', True, group_id, '', '',
suspended_users=suspended_users)
if start is not None:
if suspended_users:
uri += "&start="+start
else:
uri += "?start="+start
property_feed = self._GetPropertyFeed(uri)
return property_feed
def RemoveOwnerFromGroup(self, owner_email, group_id):
"""Remove the given owner from the given group.
Args:
owner_email: The email address of a group owner.
group_id: The ID of the group (e.g. us-sales).
Returns:
A dict containing the result of the remove operation.
"""
uri = self._ServiceUrl('owner', True, group_id, '', owner_email)
return self._DeleteProperties(uri)
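# (Added usage sketch; the domain, group id and addresses are hypothetical, and
# authentication uses the usual gdata ClientLogin flow inherited from
# gdata.apps.service)
#   service = GroupsService(email='admin@example.com', domain='example.com',
#                           password='secret')
#   service.ProgrammaticLogin()
#   service.CreateGroup('us-sales', 'US Sales', 'Sales team', PERMISSION_MEMBER)
#   service.AddMemberToGroup('liz@example.com', 'us-sales')
#   members = service.RetrieveAllMembers('us-sales')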
|
apache-2.0
|
Andrey-Tkachev/Creto
|
node_modules/npm/node_modules/node-gyp/gyp/PRESUBMIT.py
|
1369
|
3662
|
# Copyright (c) 2012 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Top-level presubmit script for GYP.
See http://dev.chromium.org/developers/how-tos/depottools/presubmit-scripts
for more details about the presubmit API built into gcl.
"""
PYLINT_BLACKLIST = [
# TODO: fix me.
# From SCons, not done in google style.
'test/lib/TestCmd.py',
'test/lib/TestCommon.py',
'test/lib/TestGyp.py',
]
PYLINT_DISABLED_WARNINGS = [
# TODO: fix me.
# Many tests include modules they don't use.
'W0611',
# Possible unbalanced tuple unpacking with sequence.
'W0632',
# Attempting to unpack a non-sequence.
'W0633',
# Include order doesn't properly include local files?
'F0401',
# Some use of built-in names.
'W0622',
# Some unused variables.
'W0612',
# Operator not preceded/followed by space.
'C0323',
'C0322',
# Unnecessary semicolon.
'W0301',
# Unused argument.
'W0613',
# String has no effect (docstring in wrong place).
'W0105',
# map/filter on lambda could be replaced by comprehension.
'W0110',
# Use of eval.
'W0123',
# Comma not followed by space.
'C0324',
# Access to a protected member.
'W0212',
# Bad indent.
'W0311',
# Line too long.
'C0301',
# Undefined variable.
'E0602',
# No exception type specified.
'W0702',
# No member of that name.
'E1101',
# Dangerous default {}.
'W0102',
# Cyclic import.
'R0401',
# Others, too many to sort.
'W0201', 'W0232', 'E1103', 'W0621', 'W0108', 'W0223', 'W0231',
'R0201', 'E0101', 'C0321',
# ************* Module copy
# W0104:427,12:_test.odict.__setitem__: Statement seems to have no effect
'W0104',
]
def CheckChangeOnUpload(input_api, output_api):
report = []
report.extend(input_api.canned_checks.PanProjectChecks(
input_api, output_api))
return report
def CheckChangeOnCommit(input_api, output_api):
report = []
# Accept any year number from 2009 to the current year.
current_year = int(input_api.time.strftime('%Y'))
allowed_years = (str(s) for s in reversed(xrange(2009, current_year + 1)))
years_re = '(' + '|'.join(allowed_years) + ')'
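# (Added note) e.g. when run in 2016 the regexp fragment above is
# '(2016|2015|2014|2013|2012|2011|2010|2009)'.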
# The (c) is deprecated, but tolerate it until it's removed from all files.
license = (
r'.*? Copyright (\(c\) )?%(year)s Google Inc\. All rights reserved\.\n'
r'.*? Use of this source code is governed by a BSD-style license that '
r'can be\n'
r'.*? found in the LICENSE file\.\n'
) % {
'year': years_re,
}
report.extend(input_api.canned_checks.PanProjectChecks(
input_api, output_api, license_header=license))
report.extend(input_api.canned_checks.CheckTreeIsOpen(
input_api, output_api,
'http://gyp-status.appspot.com/status',
'http://gyp-status.appspot.com/current'))
import os
import sys
old_sys_path = sys.path
try:
sys.path = ['pylib', 'test/lib'] + sys.path
blacklist = PYLINT_BLACKLIST
if sys.platform == 'win32':
blacklist = [os.path.normpath(x).replace('\\', '\\\\')
for x in PYLINT_BLACKLIST]
report.extend(input_api.canned_checks.RunPylint(
input_api,
output_api,
black_list=blacklist,
disabled_warnings=PYLINT_DISABLED_WARNINGS))
finally:
sys.path = old_sys_path
return report
TRYBOTS = [
'linux_try',
'mac_try',
'win_try',
]
def GetPreferredTryMasters(_, change):
return {
'client.gyp': { t: set(['defaulttests']) for t in TRYBOTS },
}
|
mit
|
garnaat/boto
|
boto/roboto/awsqueryservice.py
|
153
|
4453
|
from __future__ import print_function
import os
import urlparse
import boto
import boto.connection
import boto.jsonresponse
import boto.exception
from boto.roboto import awsqueryrequest
class NoCredentialsError(boto.exception.BotoClientError):
def __init__(self):
s = 'Unable to find credentials'
super(NoCredentialsError, self).__init__(s)
class AWSQueryService(boto.connection.AWSQueryConnection):
Name = ''
Description = ''
APIVersion = ''
Authentication = 'sign-v2'
Path = '/'
Port = 443
Provider = 'aws'
EnvURL = 'AWS_URL'
Regions = []
def __init__(self, **args):
self.args = args
self.check_for_credential_file()
self.check_for_env_url()
if 'host' not in self.args:
if self.Regions:
region_name = self.args.get('region_name',
self.Regions[0]['name'])
for region in self.Regions:
if region['name'] == region_name:
self.args['host'] = region['endpoint']
if 'path' not in self.args:
self.args['path'] = self.Path
if 'port' not in self.args:
self.args['port'] = self.Port
try:
super(AWSQueryService, self).__init__(**self.args)
self.aws_response = None
except boto.exception.NoAuthHandlerFound:
raise NoCredentialsError()
def check_for_credential_file(self):
"""
Checks for the existence of an AWS credential file.
If the environment variable AWS_CREDENTIAL_FILE is
set and points to a file, that file will be read and
will be searched for credentials.
Note that if credentials have been explicitly passed
into the class constructor, those values always take
precedence.
"""
if 'AWS_CREDENTIAL_FILE' in os.environ:
path = os.environ['AWS_CREDENTIAL_FILE']
path = os.path.expanduser(path)
path = os.path.expandvars(path)
if os.path.isfile(path):
fp = open(path)
lines = fp.readlines()
fp.close()
for line in lines:
if line[0] != '#':
if '=' in line:
name, value = line.split('=', 1)
if name.strip() == 'AWSAccessKeyId':
if 'aws_access_key_id' not in self.args:
value = value.strip()
self.args['aws_access_key_id'] = value
elif name.strip() == 'AWSSecretKey':
if 'aws_secret_access_key' not in self.args:
value = value.strip()
self.args['aws_secret_access_key'] = value
else:
print('Warning: unable to read AWS_CREDENTIAL_FILE')
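# (Added note) example credential file contents parsed above, with placeholder
# values:
#   AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE
#   AWSSecretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY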
def check_for_env_url(self):
"""
First checks to see if a url argument was explicitly passed
in. If so, that will be used. If not, it checks for the
existence of the environment variable specified in ENV_URL.
If this is set, it should contain a fully qualified URL to the
service you want to use.
Note that any values passed explicitly to the class constructor
will take precedence.
"""
url = self.args.get('url', None)
if url:
del self.args['url']
if not url and self.EnvURL in os.environ:
url = os.environ[self.EnvURL]
if url:
rslt = urlparse.urlparse(url)
if 'is_secure' not in self.args:
if rslt.scheme == 'https':
self.args['is_secure'] = True
else:
self.args['is_secure'] = False
host = rslt.netloc
port = None
l = host.split(':')
if len(l) > 1:
host = l[0]
port = int(l[1])
if 'host' not in self.args:
self.args['host'] = host
if port and 'port' not in self.args:
self.args['port'] = port
if rslt.path and 'path' not in self.args:
self.args['path'] = rslt.path
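# (Added note, hypothetical URL) AWS_URL=https://ec2.example.internal:8773/services/Eucalyptus
# would yield is_secure=True, host='ec2.example.internal', port=8773 and
# path='/services/Eucalyptus' unless those values were passed explicitly.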
def _required_auth_capability(self):
return [self.Authentication]
|
mit
|
blindpenguin/blackboard
|
node_modules/gulp-sass/node_modules/node-sass/node_modules/node-gyp/gyp/tools/graphviz.py
|
2679
|
2878
|
#!/usr/bin/env python
# Copyright (c) 2011 Google Inc. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Using the JSON dumped by the dump-dependency-json generator,
generate input suitable for graphviz to render a dependency graph of
targets."""
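# (Added note; exact gyp flags may vary) typical workflow: run gyp with the
# dump-dependency-json generator so that dump.json exists in the current
# directory, then render the filtered graph, e.g.
#   python graphviz.py 'path/to/foo.gyp:my_target' | dot -Tpng -o deps.png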
import collections
import json
import sys
def ParseTarget(target):
target, _, suffix = target.partition('#')
filename, _, target = target.partition(':')
return filename, target, suffix
def LoadEdges(filename, targets):
"""Load the edges map from the dump file, and filter it to only
show targets in |targets| and their dependents."""
file = open(filename)
edges = json.load(file)
file.close()
# Copy out only the edges we're interested in from the full edge list.
target_edges = {}
to_visit = targets[:]
while to_visit:
src = to_visit.pop()
if src in target_edges:
continue
target_edges[src] = edges[src]
to_visit.extend(edges[src])
return target_edges
def WriteGraph(edges):
"""Print a graphviz graph to stdout.
|edges| is a map of target to a list of other targets it depends on."""
# Bucket targets by file.
files = collections.defaultdict(list)
for src, dst in edges.items():
build_file, target_name, toolset = ParseTarget(src)
files[build_file].append(src)
print 'digraph D {'
print ' fontsize=8' # Used by subgraphs.
print ' node [fontsize=8]'
# Output nodes by file. We must first write out each node within
# its file grouping before writing out any edges that may refer
# to those nodes.
for filename, targets in files.items():
if len(targets) == 1:
# If there's only one node for this file, simplify
# the display by making it a box without an internal node.
target = targets[0]
build_file, target_name, toolset = ParseTarget(target)
print ' "%s" [shape=box, label="%s\\n%s"]' % (target, filename,
target_name)
else:
# Group multiple nodes together in a subgraph.
print ' subgraph "cluster_%s" {' % filename
print ' label = "%s"' % filename
for target in targets:
build_file, target_name, toolset = ParseTarget(target)
print ' "%s" [label="%s"]' % (target, target_name)
print ' }'
# Now that we've placed all the nodes within subgraphs, output all
# the edges between nodes.
for src, dsts in edges.items():
for dst in dsts:
print ' "%s" -> "%s"' % (src, dst)
print '}'
def main():
if len(sys.argv) < 2:
print >>sys.stderr, __doc__
print >>sys.stderr
print >>sys.stderr, 'usage: %s target1 target2...' % (sys.argv[0])
return 1
edges = LoadEdges('dump.json', sys.argv[1:])
WriteGraph(edges)
return 0
if __name__ == '__main__':
sys.exit(main())
|
gpl-3.0
|
zakdoek/django-simple-resizer
|
simple_resizer/_version.py
|
1
|
7478
|
# This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by githubs download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.13 (https://github.com/warner/python-versioneer)
# these strings will be replaced by git during git-archive
git_refnames = "$Format:%d$"
git_full = "$Format:%H$"
# these strings are filled in when 'setup.py versioneer' creates _version.py
tag_prefix = ""
parentdir_prefix = "simple-resizer"
versionfile_source = "simple_resizer/_version.py"
import errno
import os
import re
import subprocess
import sys
def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False):
assert isinstance(commands, list)
p = None
for c in commands:
try:
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen([c] + args, cwd=cwd, stdout=subprocess.PIPE,
stderr=(subprocess.PIPE if hide_stderr
else None))
break
except EnvironmentError:
e = sys.exc_info()[1]
if e.errno == errno.ENOENT:
continue
if verbose:
print("unable to run %s" % args[0])
print(e)
return None
else:
if verbose:
print("unable to find command, tried %s" % (commands,))
return None
stdout = p.communicate()[0].strip()
if sys.version >= '3':
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % args[0])
return None
return stdout
def versions_from_parentdir(parentdir_prefix, root, verbose=False):
# Source tarballs conventionally unpack into a directory that includes
# both the project name and a version string.
dirname = os.path.basename(root)
if not dirname.startswith(parentdir_prefix):
if verbose:
print("guessing rootdir is '%s', but '%s' doesn't start with "
"prefix '%s'" % (root, dirname, parentdir_prefix))
return None
return {"version": dirname[len(parentdir_prefix):], "full": ""}
def git_get_keywords(versionfile_abs):
# the code embedded in _version.py can just fetch the value of these
# keywords. When used from setup.py, we don't want to import _version.py,
# so we do it with a regexp instead. This function is not used from
# _version.py.
keywords = {}
try:
f = open(versionfile_abs, "r")
for line in f.readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
keywords["full"] = mo.group(1)
f.close()
except EnvironmentError:
pass
return keywords
def git_versions_from_keywords(keywords, tag_prefix, verbose=False):
if not keywords:
return {} # keyword-finding function failed to find keywords
refnames = keywords["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("keywords are unexpanded, not using")
return {} # unexpanded, so not in an unpacked git-archive tarball
refs = set([r.strip() for r in refnames.strip("()").split(",")])
# starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of
# just "foo-1.0". If we see a "tag: " prefix, prefer those.
TAG = "tag: "
tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)])
if not tags:
# Either we're using git < 1.8.3, or there really are no tags. We use
# a heuristic: assume all version tags have a digit. The old git %d
# expansion behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
tags = set([r for r in refs if re.search(r'\d', r)])
if verbose:
print("discarding '%s', no digits" % ",".join(refs-tags))
if verbose:
print("likely tags: %s" % ",".join(sorted(tags)))
for ref in sorted(tags):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %s" % r)
return {"version": r,
"full": keywords["full"].strip()}
# no suitable tags, so we use the full revision id
if verbose:
print("no suitable tags, using full revision id")
return {"version": keywords["full"].strip(),
"full": keywords["full"].strip()}
def git_versions_from_vcs(tag_prefix, root, verbose=False):
# this runs 'git' from the root of the source tree. This only gets called
# if the git-archive 'subst' keywords were *not* expanded, and
# _version.py hasn't already been rewritten with a short version string,
# meaning we're inside a checked out source tree.
if not os.path.exists(os.path.join(root, ".git")):
if verbose:
print("no .git in %s" % root)
return {}
GITS = ["git"]
if sys.platform == "win32":
GITS = ["git.cmd", "git.exe"]
stdout = run_command(GITS, ["describe", "--tags", "--dirty", "--always"],
cwd=root)
if stdout is None:
return {}
if not stdout.startswith(tag_prefix):
if verbose:
fmt = "tag '%s' doesn't start with prefix '%s'"
print(fmt % (stdout, tag_prefix))
return {}
tag = stdout[len(tag_prefix):]
stdout = run_command(GITS, ["rev-parse", "HEAD"], cwd=root)
if stdout is None:
return {}
full = stdout.strip()
if tag.endswith("-dirty"):
full += "-dirty"
return {"version": tag, "full": full}
def get_versions(default={"version": "unknown", "full": ""}, verbose=False):
# I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have
# __file__, we can work backwards from there to the root. Some
# py2exe/bbfreeze/non-CPython implementations don't do __file__, in which
# case we can only use expanded keywords.
keywords = {"refnames": git_refnames, "full": git_full}
ver = git_versions_from_keywords(keywords, tag_prefix, verbose)
if ver:
return ver
try:
root = os.path.realpath(__file__)
# versionfile_source is the relative path from the top of the source
# tree (where the .git directory might live) to this file. Invert
# this to find the root from __file__.
for i in range(len(versionfile_source.split('/'))):
root = os.path.dirname(root)
except NameError:
return default
return (git_versions_from_vcs(tag_prefix, root, verbose)
or versions_from_parentdir(parentdir_prefix, root, verbose)
or default)
|
mit
|
tlksio/tlksio
|
env/lib/python3.4/site-packages/django/contrib/redirects/models.py
|
115
|
1077
|
from django.contrib.sites.models import Site
from django.db import models
from django.utils.encoding import python_2_unicode_compatible
from django.utils.translation import ugettext_lazy as _
@python_2_unicode_compatible
class Redirect(models.Model):
site = models.ForeignKey(Site, models.CASCADE, verbose_name=_('site'))
old_path = models.CharField(
_('redirect from'),
max_length=200,
db_index=True,
help_text=_("This should be an absolute path, excluding the domain name. Example: '/events/search/'."),
)
new_path = models.CharField(
_('redirect to'),
max_length=200,
blank=True,
help_text=_("This can be either an absolute path (as above) or a full URL starting with 'http://'."),
)
class Meta:
verbose_name = _('redirect')
verbose_name_plural = _('redirects')
db_table = 'django_redirect'
unique_together = (('site', 'old_path'),)
ordering = ('old_path',)
def __str__(self):
return "%s ---> %s" % (self.old_path, self.new_path)
|
mit
|
hamiltont/CouchPotatoServer
|
couchpotato/core/providers/torrent/torrentshack/main.py
|
4
|
2881
|
from bs4 import BeautifulSoup
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.torrent.base import TorrentProvider
import traceback
log = CPLog(__name__)
class TorrentShack(TorrentProvider):
urls = {
'test' : 'https://torrentshack.net/',
'login' : 'https://torrentshack.net/login.php',
'login_check': 'https://torrentshack.net/inbox.php',
'detail' : 'https://torrentshack.net/torrent/%s',
'search' : 'https://torrentshack.net/torrents.php?action=advanced&searchstr=%s&scene=%s&filter_cat[%d]=1',
'download' : 'https://torrentshack.net/%s',
}
cat_ids = [
([970], ['bd50']),
([300], ['720p', '1080p']),
([350], ['dvdr']),
([400], ['brrip', 'dvdrip']),
]
http_time_between_calls = 1 #seconds
cat_backup_id = 400
def _searchOnTitle(self, title, movie, quality, results):
scene_only = '1' if self.conf('scene_only') else ''
url = self.urls['search'] % (tryUrlencode('%s %s' % (title.replace(':', ''), movie['library']['year'])), scene_only, self.getCatId(quality['identifier'])[0])
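# Illustrative example (added, hypothetical values): for title "The Matrix",
# year 1999, quality "720p" and scene_only disabled, the constructed URL
# would look roughly like
# https://torrentshack.net/torrents.php?action=advanced&searchstr=The+Matrix+1999&scene=&filter_cat[300]=1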
data = self.getHTMLData(url)
if data:
html = BeautifulSoup(data)
try:
result_table = html.find('table', attrs = {'id' : 'torrent_table'})
if not result_table:
return
entries = result_table.find_all('tr', attrs = {'class' : 'torrent'})
for result in entries:
link = result.find('span', attrs = {'class' : 'torrent_name_link'}).parent
url = result.find('td', attrs = {'class' : 'torrent_td'}).find('a')
results.append({
'id': link['href'].replace('torrents.php?torrentid=', ''),
'name': unicode(link.span.string).translate({ord(u'\xad'): None}),
'url': self.urls['download'] % url['href'],
'detail_url': self.urls['download'] % link['href'],
'size': self.parseSize(result.find_all('td')[4].string),
'seeders': tryInt(result.find_all('td')[6].string),
'leechers': tryInt(result.find_all('td')[7].string),
})
except:
log.error('Failed to parse %s: %s', (self.getName(), traceback.format_exc()))
def getLoginParams(self):
return {
'username': self.conf('username'),
'password': self.conf('password'),
'keeplogged': '1',
'login': 'Login',
}
def loginSuccess(self, output):
return 'logout.php' in output.lower()
loginCheckSuccess = loginSuccess
|
gpl-3.0
|