"""
Retriever tools
---------------
Tools to use retrievers
"""
import numpy as np
from pySpatialTools.Discretization import _discretization_parsing_creation
from retrievers import BaseRetriever
###############################################################################
############################# Create aggretriever #############################
###############################################################################
def create_aggretriever(aggregation_info):
    """Aggregate a retriever following the instructions given in the
    ``aggregation_info`` variable. It returns an instance of a retriever
    object to be appended to the retriever collection manager's list.
Parameters
----------
aggregation_info: tuple
the information to create a retriever aggregation.
Returns
-------
ret_out: pst.BaseRetriever
the retriever instance.
"""
## 0. Preparing inputs
assert(type(aggregation_info) == tuple)
disc_info, _, retriever_out, agg_info = aggregation_info
assert(type(agg_info) == tuple)
assert(len(agg_info) == 2)
aggregating_ret, _ = agg_info
## 1. Computing retriever_out
locs, regs, disc = _discretization_parsing_creation(disc_info)
ret_out = aggregating_ret(retriever_out, locs, regs, disc)
assert(isinstance(ret_out, BaseRetriever))
return ret_out
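# A minimal usage sketch of ``create_aggretriever`` (assumptions: the exact
# form of ``disc_info`` accepted by ``_discretization_parsing_creation`` and
# the availability of a ``KRetriever`` class are not shown in this module;
# every name and value below is illustrative only):
#
#   locs = np.random.random((100, 2))
#   regs = np.random.randint(0, 5, 100)
#   aggregation_info = (
#       (locs, regs),                        # disc_info
#       None,
#       (KRetriever, {'info_ret': 3}),       # retriever_out
#       (avgregionlocs_outretriever, None),  # agg_info
#   )
#   ret = create_aggretriever(aggregation_info)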
###############################################################################
################### Candidates to aggregating_out functions ###################
###############################################################################
def dummy_implicit_outretriver(retriever_out, locs, regs, disc):
"""Dummy implicit outretriever creation. It only maps the common output
to a regs discretized space.
Parameters
----------
retriever_out: class (pst.BaseRetriever)
the retriever object.
locs: list, np.ndarray or other
the spatial information of the retrievable elements.
regs: np.ndarray
the assigned region for each of the retrievable spatial elements.
disc: pst.BaseDiscretizor
a discretizor.
Returns
-------
ret_out: pst.BaseRetriever
the retriever instance.
"""
## Assert inputs
assert(type(retriever_out) == tuple)
assert(isinstance(retriever_out[0], object))
## Preparing function output and pars_ret
def m_out(self, i_locs, neighs_info):
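        # Map each retrieved neighbour index to its region label in ``regs``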
neighs, dists = neighs_info
for i in range(len(neighs)):
for nei in range(len(neighs[i])):
neighs[i][nei] = regs[neighs[i][nei]]
return neighs, dists
pars_ret = {}
if len(retriever_out) == 2:
pars_ret = retriever_out[1]
pars_ret['output_map'] = m_out
## Instantiation
ret_out = retriever_out[0](locs, **pars_ret)
assert(isinstance(ret_out, BaseRetriever))
return ret_out
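# Illustrative note (values are examples only): with regs = np.array([0, 0, 1, 1]),
# a retrieved neighbour index list [[0, 2]] is mapped by the ``output_map`` above
# to the region labels [[0, 1]].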
def dummy_explicit_outretriver(retriever_out, locs, regs, disc):
    """Dummy explicit outretriever creation. It computes region distances
    between each pair of regions.
Parameters
----------
retriever_out: tuple (class (pst.BaseRetriever), dict, function)
the retriever information.
locs: list, np.ndarray or other
the spatial information of the retrievable elements.
regs: np.ndarray
the assigned region for each of the retrievable spatial elements.
disc: pst.BaseDiscretizor
a discretizor.
Returns
-------
ret_out: pst.BaseRetriever
the retriever instance.
"""
## Assert inputs
assert(type(retriever_out) == tuple)
assert(isinstance(retriever_out[0], object))
pars_ret = {}
    if len(retriever_out) >= 2:
pars_ret = retriever_out[1]
main_mapper = retriever_out[2](retriever_out[3])
ret_out = retriever_out[0](main_mapper, **pars_ret)
assert(isinstance(ret_out, BaseRetriever))
return ret_out
def avgregionlocs_outretriever(retriever_out, locs, regs, disc):
    """Retriever creation for average region locations. It retrieves the
    prototype of each region: the average location over the elements
    that belong to that region.
Parameters
----------
retriever_out: class (pst.BaseRetriever)
the retriever object.
locs: list, np.ndarray or other
the spatial information of the retrievable elements.
regs: np.ndarray
the assigned region for each of the retrievable spatial elements.
disc: pst.BaseDiscretizor
a discretizor.
Returns
-------
ret_out: pst.BaseRetriever
the retriever instance.
"""
    u_regs = np.unique(regs)
    avg_locs = np.zeros((len(u_regs), locs.shape[1]))
    for i in range(len(u_regs)):
        # Average the locations of the elements assigned to region u_regs[i]
        avg_locs[i] = np.mean(locs[regs == u_regs[i]], axis=0)
    ret_out = retriever_out[0](avg_locs, **retriever_out[1])
    return ret_out
#!/usr/bin/python
#coding=utf-8
"""****************************************************************************
Copyright (c) 2013 cocos2d-x.org
http://www.cocos2d-x.org
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
****************************************************************************"""
import sys
def commandCreate():
from module.core import CocosProject
project = CocosProject()
name, package, language, path = project.checkParams()
project.createPlatformProjects(name, package, language, path)
# ------------ main --------------
if __name__ == '__main__':
"""
    There are two ways to create a cocos project:
    1. UI
    2. console
#create_project.py --help
#create_project.py -n MyGame -k com.MyCompany.AwesomeGame -l javascript -p c:/mycompany
"""
if len(sys.argv)==1:
try:
from module.ui import createTkCocosDialog
createTkCocosDialog()
except ImportError:
commandCreate()
else:
commandCreate()
HOW TO CLEAR CACHE ON EBox MC (EBMC) OR KODI GUIDE.
Are you running out of space because the KODI or EBox MC (EBMC) app uses the most storage on your device?
Do you have issues with your KODI or EBox MC (EBMC) setup and want to start fresh?
In this guide, we provide the steps to clear the cache or clear the data (which removes all your add-ons and setup) on KODI or SPMC, so the app behaves as on its first run.
Go to Settings on the main screen of the device, or via the Apps menu on your EBox device.
Now go to Other and then select More Settings.
Now locate KODI or EBMC.
Select EBMC or KODI and you will see options such as Force Stop.
First click Force Stop. Then click Clear Cache, followed by Clear Data (this removes all add-ons and data from KODI or SPMC). Once the storage figure drops to 0, open KODI or SPMC again; it will run as if for the first time and you can redo your setup.
This is a video on how to clear the cache in XBMC, SPMC, or Kodi. It frees up data that Kodi no longer needs, so it runs a lot smoother and faster.
Please watch the YouTube tutorial above on how to clear the cache in XBMC, SPMC, or Kodi. At EntertainmentBox.com we offer a varied selection of YouTube tutorials, whether videos about devices or how to do certain things in Kodi. We also provide product unboxings and links to reviews from some of our great customers.
If you found this guide useful, note that we offer many more videos beyond it; please subscribe to the EntertainmentBox YouTube channel HERE.
Step-by-step Android guide in easy steps. Remember: Clearing Data will remove everything you had installed in KODI.
# -*- coding: utf-8 -*-
#
# Copyright (c) 2016 NORDUnet A/S
# Copyright (c) 2018-2019 SUNET
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or
# without modification, are permitted provided that the following
# conditions are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following
# disclaimer in the documentation and/or other materials provided
# with the distribution.
# 3. Neither the name of the NORDUnet nor the names of its
# contributors may be used to endorse or promote products derived
# from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
import json
from contextlib import contextmanager
from typing import Any, Dict, Mapping, Optional
from mock import patch
from eduid_common.api.testing import EduidAPITestCase
from eduid_userdb.exceptions import UserOutOfSync
from eduid_userdb.signup import SignupUser
from eduid_webapp.signup.app import SignupApp, signup_init_app
from eduid_webapp.signup.verifications import send_verification_mail
class SignupTests(EduidAPITestCase):
app: SignupApp
def setUp(self):
super(SignupTests, self).setUp(copy_user_to_private=True)
def load_app(self, config: Mapping[str, Any]) -> SignupApp:
"""
Called from the parent class, so we can provide the appropriate flask
app for this test case.
"""
return signup_init_app(name='signup', test_config=config)
def update_config(self, config: Dict[str, Any]) -> Dict[str, Any]:
config.update(
{
'available_languages': {'en': 'English', 'sv': 'Svenska'},
'signup_authn_url': '/services/authn/signup-authn',
'signup_url': 'https://localhost/',
'dashboard_url': 'https://localhost/',
'development': 'DEBUG',
'application_root': '/',
'log_level': 'DEBUG',
'password_length': 10,
'vccs_url': 'http://turq:13085/',
'tou_version': '2018-v1',
'tou_url': 'https://localhost/get-tous',
'default_finish_url': 'https://www.eduid.se/',
'recaptcha_public_key': 'XXXX',
'recaptcha_private_key': 'XXXX',
'students_link': 'https://www.eduid.se/index.html',
'technicians_link': 'https://www.eduid.se/tekniker.html',
'staff_link': 'https://www.eduid.se/personal.html',
'faq_link': 'https://www.eduid.se/faq.html',
'celery_config': {
'result_backend': 'amqp',
'task_serializer': 'json',
'mongo_uri': config['mongo_uri'],
},
'environment': 'dev',
}
)
return config
# parameterized test methods
@patch('eduid_webapp.signup.views.verify_recaptcha')
@patch('eduid_common.api.mail_relay.MailRelay.sendmail')
def _captcha_new(
self,
mock_sendmail: Any,
mock_recaptcha: Any,
data1: Optional[dict] = None,
email: str = '[email protected]',
recaptcha_return_value: bool = True,
add_magic_cookie: bool = False,
):
"""
:param data1: to control the data POSTed to the /trycaptcha endpoint
:param email: the email to use for registration
:param recaptcha_return_value: to mock captcha verification failure
:param add_magic_cookie: add magic cookie to the trycaptcha request
"""
mock_sendmail.return_value = True
mock_recaptcha.return_value = recaptcha_return_value
with self.session_cookie_anon(self.browser) as client:
with client.session_transaction() as sess:
with self.app.test_request_context():
data = {
'email': email,
'recaptcha_response': 'dummy',
'tou_accepted': True,
'csrf_token': sess.get_csrf_token(),
}
if data1 is not None:
data.update(data1)
if add_magic_cookie:
client.set_cookie(
'localhost', key=self.app.conf.magic_cookie_name, value=self.app.conf.magic_cookie
)
return client.post('/trycaptcha', data=json.dumps(data), content_type=self.content_type_json)
@patch('eduid_webapp.signup.views.verify_recaptcha')
@patch('eduid_common.api.mail_relay.MailRelay.sendmail')
def _resend_email(
self, mock_sendmail: Any, mock_recaptcha: Any, data1: Optional[dict] = None, email: str = '[email protected]'
):
"""
Trigger re-sending an email with a verification code.
:param data1: to control the data POSTed to the resend-verification endpoint
:param email: what email address to use
"""
mock_sendmail.return_value = True
mock_recaptcha.return_value = True
with self.session_cookie_anon(self.browser) as client:
with self.app.test_request_context():
with client.session_transaction() as sess:
data = {'email': email, 'csrf_token': sess.get_csrf_token()}
if data1 is not None:
data.update(data1)
return client.post('/resend-verification', data=json.dumps(data), content_type=self.content_type_json)
@patch('eduid_webapp.signup.views.verify_recaptcha')
@patch('eduid_common.api.mail_relay.MailRelay.sendmail')
@patch('eduid_common.api.am.AmRelay.request_user_sync')
@patch('vccs_client.VCCSClient.add_credentials')
def _verify_code(
self,
mock_add_credentials: Any,
mock_request_user_sync: Any,
mock_sendmail: Any,
mock_recaptcha: Any,
code: str = '',
email: str = '[email protected]',
):
"""
Test the verification link sent by email
:param code: the code to use
:param email: the email address to use
"""
mock_add_credentials.return_value = True
mock_request_user_sync.return_value = True
mock_sendmail.return_value = True
mock_recaptcha.return_value = True
with self.session_cookie_anon(self.browser) as client:
with client.session_transaction():
with self.app.test_request_context():
# lower because we are purposefully calling it with a mixed case mail address in tests
send_verification_mail(email.lower())
signup_user = self.app.private_userdb.get_user_by_pending_mail_address(email)
code = code or signup_user.pending_mail_address.verification_code
return client.get('/verify-link/' + code)
@patch('eduid_webapp.signup.views.verify_recaptcha')
@patch('eduid_common.api.mail_relay.MailRelay.sendmail')
@patch('eduid_common.api.am.AmRelay.request_user_sync')
@patch('vccs_client.VCCSClient.add_credentials')
def _verify_code_after_captcha(
self,
mock_add_credentials: Any,
mock_request_user_sync: Any,
mock_sendmail: Any,
mock_recaptcha: Any,
data1: Optional[dict] = None,
email: str = '[email protected]',
):
"""
Verify the pending account with an emailed verification code after creating the account by verifying the captcha.
:param data1: to control the data sent to the trycaptcha endpoint
:param email: what email address to use
"""
mock_add_credentials.return_value = True
mock_request_user_sync.return_value = True
mock_sendmail.return_value = True
mock_recaptcha.return_value = True
with self.session_cookie_anon(self.browser) as client:
with self.app.test_request_context():
with client.session_transaction() as sess:
data = {
'email': email,
'recaptcha_response': 'dummy',
'tou_accepted': True,
'csrf_token': sess.get_csrf_token(),
}
if data1 is not None:
data.update(data1)
client.post('/trycaptcha', data=json.dumps(data), content_type=self.content_type_json)
if data1 is None:
# lower because we are purposefully calling it with a mixed case mail address in tests
send_verification_mail(email.lower())
signup_user = self.app.private_userdb.get_user_by_pending_mail_address(email)
response = client.get('/verify-link/' + signup_user.pending_mail_address.verification_code)
return json.loads(response.data)
@patch('eduid_webapp.signup.views.verify_recaptcha')
@patch('eduid_common.api.mail_relay.MailRelay.sendmail')
@patch('eduid_common.api.am.AmRelay.request_user_sync')
@patch('vccs_client.VCCSClient.add_credentials')
def _get_code_backdoor(
self,
mock_add_credentials: Any,
mock_request_user_sync: Any,
mock_sendmail: Any,
mock_recaptcha: Any,
email: str,
):
"""
Test getting the generated verification code through the backdoor
"""
mock_add_credentials.return_value = True
mock_request_user_sync.return_value = True
mock_sendmail.return_value = True
mock_recaptcha.return_value = True
with self.session_cookie_anon(self.browser) as client:
with client.session_transaction():
with self.app.test_request_context():
send_verification_mail(email)
client.set_cookie(
'localhost', key=self.app.conf.magic_cookie_name, value=self.app.conf.magic_cookie
)
return client.get(f'/get-code?email={email}')
def test_get_code_backdoor(self):
self.app.conf.magic_cookie = 'magic-cookie'
self.app.conf.magic_cookie_name = 'magic'
self.app.conf.environment = 'dev'
email = '[email protected]'
resp = self._get_code_backdoor(email=email)
signup_user = self.app.private_userdb.get_user_by_pending_mail_address(email)
self.assertEqual(signup_user.pending_mail_address.verification_code, resp.data.decode('ascii'))
def test_get_code_no_backdoor_in_pro(self):
self.app.conf.magic_cookie = 'magic-cookie'
self.app.conf.magic_cookie_name = 'magic'
self.app.conf.environment = 'pro'
email = '[email protected]'
resp = self._get_code_backdoor(email=email)
self.assertEqual(resp.status_code, 400)
def test_get_code_no_backdoor_misconfigured1(self):
self.app.conf.magic_cookie = 'magic-cookie'
self.app.conf.magic_cookie_name = ''
self.app.conf.environment = 'dev'
email = '[email protected]'
resp = self._get_code_backdoor(email=email)
self.assertEqual(resp.status_code, 400)
def test_get_code_no_backdoor_misconfigured2(self):
self.app.conf.magic_cookie = ''
self.app.conf.magic_cookie_name = 'magic'
self.app.conf.environment = 'dev'
email = '[email protected]'
resp = self._get_code_backdoor(email=email)
self.assertEqual(resp.status_code, 400)
# actual tests
def test_captcha_new_user(self):
response = self._captcha_new()
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_SUCCESS')
self.assertEqual(data['payload']['next'], 'new')
def test_captcha_new_user_mixed_case(self):
response = self._captcha_new(email='[email protected]')
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_SUCCESS')
self.assertEqual(data['payload']['next'], 'new')
mixed_user: SignupUser = self.app.private_userdb.get_user_by_pending_mail_address('[email protected]')
lower_user: SignupUser = self.app.private_userdb.get_user_by_pending_mail_address('[email protected]')
assert mixed_user.eppn == lower_user.eppn
assert mixed_user.pending_mail_address.email == lower_user.pending_mail_address.email
def test_captcha_new_no_key(self):
self.app.conf.recaptcha_public_key = None
response = self._captcha_new()
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_FAIL')
self.assertEqual(data['payload']['message'], 'signup.recaptcha-not-verified')
def test_captcha_new_wrong_csrf(self):
data1 = {'csrf_token': 'wrong-token'}
response = self._captcha_new(data1=data1)
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_FAIL')
self.assertEqual(data['payload']['error']['csrf_token'], ['CSRF failed to validate'])
def test_captcha_existing_user(self):
response = self._captcha_new(email='[email protected]')
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_FAIL')
self.assertEqual(data['payload']['message'], 'signup.registering-address-used')
def test_captcha_existing_user_mixed_case(self):
response = self._captcha_new(email='[email protected]')
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_FAIL')
self.assertEqual(data['payload']['message'], 'signup.registering-address-used')
def test_captcha_remove_existing_signup_user(self):
response = self._captcha_new(email='[email protected]')
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_SUCCESS')
self.assertEqual(data['payload']['next'], 'new')
def test_captcha_remove_existing_signup_user_mixed_case(self):
response = self._captcha_new(email='[email protected]')
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_SUCCESS')
self.assertEqual(data['payload']['next'], 'new')
mixed_user: SignupUser = self.app.private_userdb.get_user_by_pending_mail_address('[email protected]')
lower_user: SignupUser = self.app.private_userdb.get_user_by_pending_mail_address('[email protected]')
assert mixed_user.eppn == lower_user.eppn
assert mixed_user.pending_mail_address.email == lower_user.pending_mail_address.email
def test_captcha_fail(self):
response = self._captcha_new(recaptcha_return_value=False)
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_FAIL')
def test_captcha_backdoor(self):
self.app.conf.magic_cookie = 'magic-cookie'
self.app.conf.magic_cookie_name = 'magic'
self.app.conf.environment = 'dev'
response = self._captcha_new(recaptcha_return_value=False, add_magic_cookie=True)
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_SUCCESS')
def test_captcha_no_backdoor_in_pro(self):
self.app.conf.magic_cookie = 'magic-cookie'
self.app.conf.magic_cookie_name = 'magic'
self.app.conf.environment = 'pro'
response = self._captcha_new(recaptcha_return_value=False, add_magic_cookie=True)
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_FAIL')
def test_captcha_no_backdoor_misconfigured1(self):
self.app.conf.magic_cookie = 'magic-cookie'
self.app.conf.magic_cookie_name = ''
self.app.conf.environment = 'dev'
response = self._captcha_new(recaptcha_return_value=False, add_magic_cookie=True)
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_FAIL')
def test_captcha_no_backdoor_misconfigured2(self):
self.app.conf.magic_cookie = ''
self.app.conf.magic_cookie_name = 'magic'
self.app.conf.environment = 'dev'
response = self._captcha_new(recaptcha_return_value=False, add_magic_cookie=True)
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_FAIL')
def test_captcha_unsynced(self):
with patch('eduid_webapp.signup.helpers.save_and_sync_user') as mock_save:
mock_save.side_effect = UserOutOfSync('unsync')
response = self._captcha_new()
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_SUCCESS')
self.assertEqual(data['payload']['next'], 'new')
def test_captcha_no_data_fail(self):
with self.session_cookie_anon(self.browser) as client:
response = client.post('/trycaptcha')
self.assertEqual(response.status_code, 200)
data = json.loads(response.data)
self.assertEqual(data['error'], True)
self.assertEqual(data['type'], 'POST_SIGNUP_TRYCAPTCHA_FAIL')
self.assertIn('email', data['payload']['error'])
self.assertIn('csrf_token', data['payload']['error'])
self.assertIn('recaptcha_response', data['payload']['error'])
def test_resend_email(self):
response = self._resend_email()
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_RESEND_VERIFICATION_SUCCESS')
def test_resend_email_mixed_case(self):
response = self._resend_email(email='[email protected]')
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_RESEND_VERIFICATION_SUCCESS')
mixed_user: SignupUser = self.app.private_userdb.get_user_by_pending_mail_address('[email protected]')
lower_user: SignupUser = self.app.private_userdb.get_user_by_pending_mail_address('[email protected]')
assert mixed_user.eppn == lower_user.eppn
assert mixed_user.pending_mail_address.email == lower_user.pending_mail_address.email
def test_resend_email_wrong_csrf(self):
data1 = {'csrf_token': 'wrong-token'}
response = self._resend_email(data1=data1)
data = json.loads(response.data)
self.assertEqual(data['type'], 'POST_SIGNUP_RESEND_VERIFICATION_FAIL')
self.assertEqual(data['payload']['error']['csrf_token'], ['CSRF failed to validate'])
def test_verify_code(self):
response = self._verify_code()
data = json.loads(response.data)
self.assertEqual(data['type'], 'GET_SIGNUP_VERIFY_LINK_SUCCESS')
self.assertEqual(data['payload']['status'], 'verified')
def test_verify_code_mixed_case(self):
response = self._verify_code(email='[email protected]')
data = json.loads(response.data)
self.assertEqual(data['type'], 'GET_SIGNUP_VERIFY_LINK_SUCCESS')
self.assertEqual(data['payload']['status'], 'verified')
mixed_user: SignupUser = self.app.private_userdb.get_user_by_mail('[email protected]')
lower_user: SignupUser = self.app.private_userdb.get_user_by_mail('[email protected]')
assert mixed_user.eppn == lower_user.eppn
assert mixed_user.mail_addresses.primary.email == lower_user.mail_addresses.primary.email
def test_verify_code_unsynced(self):
with patch('eduid_webapp.signup.helpers.save_and_sync_user') as mock_save:
mock_save.side_effect = UserOutOfSync('unsync')
response = self._verify_code()
data = json.loads(response.data)
self.assertEqual(data['type'], 'GET_SIGNUP_VERIFY_LINK_FAIL')
self.assertEqual(data['payload']['message'], 'user-out-of-sync')
def test_verify_existing_email(self):
response = self._verify_code(email='[email protected]')
data = json.loads(response.data)
self.assertEqual(data['type'], 'GET_SIGNUP_VERIFY_LINK_FAIL')
self.assertEqual(data['payload']['status'], 'already-verified')
def test_verify_existing_email_mixed_case(self):
response = self._verify_code(email='[email protected]')
data = json.loads(response.data)
self.assertEqual(data['type'], 'GET_SIGNUP_VERIFY_LINK_FAIL')
self.assertEqual(data['payload']['status'], 'already-verified')
def test_verify_code_after_captcha(self):
data = self._verify_code_after_captcha()
self.assertEqual(data['type'], 'GET_SIGNUP_VERIFY_LINK_SUCCESS')
def test_verify_code_after_captcha_mixed_case(self):
data = self._verify_code_after_captcha(email='[email protected]')
self.assertEqual(data['type'], 'GET_SIGNUP_VERIFY_LINK_SUCCESS')
def test_verify_code_after_captcha_proofing_log_error(self):
from eduid_webapp.signup.verifications import ProofingLogFailure
with patch('eduid_webapp.signup.views.verify_email_code') as mock_verify:
mock_verify.side_effect = ProofingLogFailure('fail')
data = self._verify_code_after_captcha()
self.assertEqual(data['type'], 'GET_SIGNUP_VERIFY_LINK_FAIL')
self.assertEqual(data['payload']['message'], 'Temporary technical problems')
def test_verify_code_after_captcha_wrong_csrf(self):
with self.assertRaises(AttributeError):
data1 = {'csrf_token': 'wrong-token'}
self._verify_code_after_captcha(data1=data1)
def test_verify_code_after_captcha_dont_accept_tou(self):
with self.assertRaises(AttributeError):
data1 = {'tou_accepted': False}
self._verify_code_after_captcha(data1=data1)
No Strings Attached is a quirky little chick flick that sometimes falls a little flat but, like its lead actor Ashton Kutcher, has a mostly good heart that ultimately wins out.
The movie's plot seems to be drawn somewhat from Kutcher's own life. When he was a student at the University of Iowa, he woke up many times not knowing what he had done the night before.
It's a little raunchy and sometimes tries too hard to be quirky, but Kutcher and Natalie Portman have nice chemistry. So while supporting players like Kevin Kline see their talent a little wasted, the fact that we always get to come back to the leads is comforting. Good soundtrack too, including Bishop Allen, Elvis, and Color Me Badd.
# -*- coding: utf-8 -*-
"""
Github class for making needed API calls to github
"""
import base64
from itertools import chain
import shutil
import tempfile
import requests
import sh
CLONE_DIR = 'cloned_repo'
class GitHubException(Exception):
"""Base exception class others inherit."""
pass
class GitHubRepoExists(GitHubException):
"""Repo exists, and thus cannot be created."""
pass
class GitHubRepoDoesNotExist(GitHubException):
"""Repo does not exist, and therefore actions can't be taken on it."""
pass
class GitHubUnknownError(GitHubException):
"""Unexpected status code exception"""
pass
class GitHubNoTeamFound(GitHubException):
"""Name team not found in list"""
pass
class GitHub(object):
"""
API class for handling calls to github
"""
def __init__(self, api_url, oauth2_token):
"""Initialize a requests session for use with this class by
specifying the base API endpoint and key.
Args:
api_url (str): Github API URL such as https://api.github.com/
oauth2_token (str): Github OAUTH2 token for v3
"""
self.api_url = api_url
if not api_url.endswith('/'):
self.api_url += '/'
self.session = requests.Session()
# Add OAUTH2 token to session headers and set Agent
self.session.headers = {
'Authorization': 'token {0}'.format(oauth2_token),
'User-Agent': 'Orcoursetrion',
}
def _get_all(self, url):
"""Return all results from URL given (i.e. page through them)
Args:
url(str): Full github URL with results.
Returns:
list: List of items returned.
"""
results = None
response = self.session.get(url)
if response.status_code == 200:
results = response.json()
while (
response.links.get('next', False) and
response.status_code == 200
):
response = self.session.get(response.links['next']['url'])
results += response.json()
if response.status_code not in [200, 404]:
raise GitHubUnknownError(response.text)
return results
    def _get_repo(self, org, repo):
        """Either return the repo dictionary, or None if it doesn't exist.
Args:
org (str): Organization the repo lives in.
repo (str): The name of the repo.
Raises:
requests.exceptions.RequestException
GitHubUnknownError
Returns:
dict or None: Repo dictionary from github
(https://developer.github.com/v3/repos/#get) or None if it
doesn't exist.
"""
repo_url = '{url}repos/{org}/{repo}'.format(
url=self.api_url,
org=org,
repo=repo
)
# Try and get the URL, if it 404's we are good, otherwise raise
repo_response = self.session.get(repo_url)
if repo_response.status_code == 200:
return repo_response.json()
if repo_response.status_code != 404:
raise GitHubUnknownError(repo_response.text)
def _find_team(self, org, team):
"""Find a team in an org by name, or raise.
Args:
org (str): Organization to create the repo in.
team (str): Team to find by name.
Raises:
GitHubUnknownError
GitHubNoTeamFound
Returns:
dict: Team dictionary
(https://developer.github.com/v3/orgs/teams/#response)
"""
list_teams_url = '{url}orgs/{org}/teams'.format(
url=self.api_url,
org=org
)
teams = self._get_all(list_teams_url)
if not teams:
raise GitHubUnknownError(
"No teams found in org. This shouldn't happen"
)
found_team = [
x for x in teams
if x['name'].strip().lower() == team.strip().lower()
]
if len(found_team) != 1:
raise GitHubNoTeamFound(
'{0} not in list of teams for {1}'.format(team, org)
)
found_team = found_team[0]
return found_team
def create_repo(self, org, repo, description):
"""Creates a new github repository or raises exceptions
Args:
org (str): Organization to create the repo in.
repo (str): Name of the repo to create.
description (str): Description of repo to use.
Raises:
GitHubRepoExists
GitHubUnknownError
requests.exceptions.RequestException
Returns:
dict: Github dictionary of a repo
(https://developer.github.com/v3/repos/#create)
"""
repo_dict = self._get_repo(org, repo)
if repo_dict is not None:
raise GitHubRepoExists('This repository already exists')
# Everything looks clean, create the repo.
create_url = '{url}orgs/{org}/repos'.format(
url=self.api_url,
org=org
)
payload = {
'name': repo,
'description': description,
'private': True,
}
repo_create_response = self.session.post(create_url, json=payload)
if repo_create_response.status_code != 201:
raise GitHubUnknownError(repo_create_response.text)
return repo_create_response.json()
def _create_team(self, org, team_name, read_only):
"""Internal function to create a team.
Args:
org (str): Organization to create the repo in.
team_name (str): Name of team to create.
read_only (bool): If false, read/write, if true read_only.
Raises:
GitHubUnknownError
requests.RequestException
Returns:
dict: Team dictionary
(https://developer.github.com/v3/orgs/teams/#response)
"""
if read_only:
permission = 'pull'
else:
permission = 'push'
create_url = '{url}orgs/{org}/teams'.format(
url=self.api_url,
org=org
)
response = self.session.post(create_url, json={
'name': team_name,
'permission': permission
})
if response.status_code != 201:
raise GitHubUnknownError(response.text)
return response.json()
def put_team(self, org, team_name, read_only, members):
"""Create a team in a github organization.
Utilize
https://developer.github.com/v3/orgs/teams/#list-teams,
https://developer.github.com/v3/orgs/teams/#create-team,
https://developer.github.com/v3/orgs/teams/#list-team-members,
https://developer.github.com/v3/orgs/teams/#add-team-membership,
and
https://developer.github.com/v3/orgs/teams/#remove-team-membership.
to create a team and/or replace an existing team's membership
with the ``members`` list.
Args:
org (str): Organization to create the repo in.
team_name (str): Name of team to create.
read_only (bool): If false, read/write, if true read_only.
members (list): List of github usernames to add to the
team. If none, membership changes won't occur
Raises:
GitHubUnknownError
requests.RequestException
Returns:
dict: The team dictionary
(https://developer.github.com/v3/orgs/teams/#response-1)
"""
# Disabling too-many-locals because I need them as a human to
# keep track of the sets going on here.
# pylint: disable=too-many-locals
try:
team_dict = self._find_team(org, team_name)
except GitHubNoTeamFound:
team_dict = self._create_team(org, team_name, read_only)
# Just get the team and exit if no members are given
if members is None:
return team_dict
# Have the team, now replace member list with the one we have
members_url = '{url}teams/{id}/members'.format(
url=self.api_url,
id=team_dict['id']
)
existing_members = self._get_all(members_url)
# Filter list of dicts down to just username list
existing_members = [x['login'] for x in existing_members]
# Grab everyone that should no longer be members
remove_members = dict(
[(x, False) for x in existing_members if x not in members]
)
# Grab everyone that should be added
add_members = dict(
[(x, True) for x in members if x not in existing_members]
)
# merge the dictionary of usernames dict with True to add,
# False to remove.
membership_dict = dict(
chain(remove_members.items(), add_members.items())
)
# Now do the adds and removes of membership to sync them
for member, add in membership_dict.items():
url = '{url}teams/{id}/memberships/{member}'.format(
url=self.api_url,
id=team_dict['id'],
member=member
)
if add:
response = self.session.put(url)
else:
response = self.session.delete(url)
if response.status_code not in [200, 204]:
raise GitHubUnknownError(
'Failed to add or remove {0}. Got: {1}'.format(
member, response.text
)
)
return team_dict
def add_team_repo(self, org, repo, team):
"""Add a repo to an existing team (by name) in the specified org.
We first look up the team to get its ID
(https://developer.github.com/v3/orgs/teams/#list-teams), and
then add the repo to that team
(https://developer.github.com/v3/orgs/teams/#add-team-repo).
Args:
org (str): Organization to create the repo in.
repo (str): Name of the repo to create.
team (str): Name of team to add.
Raises:
GitHubNoTeamFound
GitHubUnknownError
requests.exceptions.RequestException
"""
found_team = self._find_team(org, team)
team_repo_url = '{url}teams/{id}/repos/{org}/{repo}'.format(
url=self.api_url,
id=found_team['id'],
org=org,
repo=repo
)
response = self.session.put(team_repo_url)
if response.status_code != 204:
raise GitHubUnknownError(response.text)
def add_web_hook(self, org, repo, url):
"""Adds an active hook to a github repository.
This utilizes
https://developer.github.com/v3/repos/hooks/#create-a-hook to
create a form type Web hook that responds to push events
(basically all the defaults).
Args:
org (str): Organization to create the repo in.
repo (str): Name of the repo the hook will live in.
url (str): URL of the hook to add.
Raises:
GitHubUnknownError
requests.exceptions.RequestException
Returns:
dict: Github dictionary of a hook
(https://developer.github.com/v3/repos/hooks/#response-2)
"""
hook_url = '{url}repos/{org}/{repo}/hooks'.format(
url=self.api_url,
org=org,
repo=repo
)
payload = {
'name': 'web',
'active': True,
'config': {
'url': url,
}
}
response = self.session.post(hook_url, json=payload)
if response.status_code != 201:
raise GitHubUnknownError(response.text)
return response.json()
def delete_web_hooks(self, org, repo):
"""Delete all the Web hooks for a repository
Uses https://developer.github.com/v3/repos/hooks/#list-hooks
to get a list of all hooks, and then runs
https://developer.github.com/v3/repos/hooks/#delete-a-hook
to remove each of them.
Args:
org (str): Organization to create the repo in.
repo (str): Name of the repo to remove hooks from.
Raises:
GitHubUnknownError
GitHubRepoDoesNotExist
requests.exceptions.RequestException
Returns:
int: Number of hooks removed
"""
# Verify the repo exists first
repo_dict = self._get_repo(org, repo)
if repo_dict is None:
raise GitHubRepoDoesNotExist(
'Repo does not exist. Cannot remove hooks'
)
url = '{url}repos/{org}/{repo}/hooks'.format(
url=self.api_url,
org=org,
repo=repo
)
hooks = self._get_all(url)
num_hooks_removed = 0
for hook in hooks or []:
response = self.session.delete(hook['url'])
if response.status_code != 204:
raise GitHubUnknownError(response.text)
num_hooks_removed += 1
return num_hooks_removed
@staticmethod
    def shallow_copy_repo(src_repo, dst_repo, committer, branch=None):
        """Copies the contents of one branch of a repo to a new repo in the
        same organization, without history.
.. DANGER::
This will overwrite the destination repo's default branch and
rewrite its history.
The basic workflow is:
- Clone source repo
- Remove source repo ``.git`` folder
- Initialize as new git repo
- Set identity
- Add everything and commit
- Force push to destination repo
Args:
src_repo (str): Full git url to source repo.
dst_repo (str): Full git url to destination repo.
committer (dict): {'name': ..., 'email': ...} for the name
and e-mail to use in the initial commit of the
destination repo.
            branch (str): Optional branch; if not specified, the default branch is used.
Raises:
sh.ErrorReturnCode
Returns:
None
"""
# Disable member use because pylint doesn't get dynamic members
# pylint: disable=no-member
# Grab current working directory so we return after we are done
cwd = unicode(sh.pwd().rstrip('\n'))
tmp_dir = tempfile.mkdtemp(prefix='orc_git')
try:
sh.cd(tmp_dir)
if branch is None:
sh.git.clone(src_repo, CLONE_DIR, depth=1)
else:
sh.git.clone(src_repo, CLONE_DIR, depth=1, branch=branch)
sh.cd(CLONE_DIR)
shutil.rmtree('.git')
sh.git.init()
sh.git.config('user.email', committer['email'])
sh.git.config('user.name', committer['name'])
sh.git.remote.add.origin(dst_repo)
sh.git.add('.')
sh.git.commit(
m='Initial rerun copy by Orcoursetrion from {0}'.format(
src_repo
)
)
sh.git.push.origin.master(f=True)
finally:
shutil.rmtree(tmp_dir, ignore_errors=True)
sh.cd(cwd)
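    # A minimal usage sketch of the method above (the URLs and committer are
    # illustrative placeholders, not real repositories or identities):
    #
    #   GitHub.shallow_copy_repo(
    #       'git@github.com:example-org/template-repo.git',
    #       'git@github.com:example-org/new-course-repo.git',
    #       committer={'name': 'Course Bot', 'email': 'bot@example.com'},
    #       branch='live',
    #   )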
    def add_repo_file(self, org, repo, committer, message, path, contents):
        """Adds the ``contents`` provided at the given ``path`` in the
        specified repo, committed with the ``committer`` information
        provided.
https://developer.github.com/v3/repos/contents/#create-a-file
.. NOTE::
This commits directly to the default branch of the repo.
Args:
org (str): Organization the repo lives in.
repo (str): The name of the repo.
committer (dict): {'name': ..., 'email': ...} for the name
and e-mail to use in the initial commit of the
destination repo.
message (str): Commit message to use for the addition.
path (str): The content path, i.e. ``docs/.gitignore``
contents (str): The actual string Contents of the file.
Raises:
requests.exceptions.RequestException
GitHubRepoDoesNotExist
GitHubUnknownError
Returns:
None
"""
repo_dict = self._get_repo(org, repo)
if repo_dict is None:
raise GitHubRepoDoesNotExist(
'Repo does not exist. Cannot add file'
)
url = '{url}repos/{org}/{repo}/contents/{path}'.format(
url=self.api_url,
org=org,
repo=repo,
path=path
)
payload = {
'message': message,
'committer': committer,
'content': base64.b64encode(contents).decode('ascii'),
}
response = self.session.put(url, json=payload)
if response.status_code != 201:
raise GitHubUnknownError(
'Failed to add contents to {org}/{repo}/{path}. '
'Got: {response}'.format(
org=org, repo=repo, path=path, response=response.text
)
)
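# A minimal usage sketch of the GitHub class above (the token, organization,
# repo, team and hook URL are illustrative placeholders only):
#
#   gh = GitHub('https://api.github.com/', 'my-oauth2-token')
#   gh.create_repo('example-org', 'example-repo', 'An example repository')
#   gh.put_team('example-org', 'example-team', read_only=False,
#               members=['alice', 'bob'])
#   gh.add_team_repo('example-org', 'example-repo', 'example-team')
#   gh.add_web_hook('example-org', 'example-repo', 'https://ci.example.com/hook')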
Published February 7, 2019. How to Remotely Watch RootsTech 2019 Salt Lake City!
RootsCrew: The Angels of RootsTech!
Published January 5, 2019.
Believe it or not, RootsTech 2019 is just around the corner, and we, like you, are busy preparing and getting ready for the best RootsTech yet! We are always thinking as a team about ways that we can make your experience better. That being said, we are excited to introduce a new aspect of our RootsTech team: Roots Crew.
As if you needed another reason to download the app! During conference time, an icon will be added to the home screen titled Roots Crew, where you can directly message the Roots Crew team questions, comments, and concerns. We take attendee feedback very seriously!
Have you ever tweeted at a company or brand when you’ve loved them or had problems with them? We recognize that social media is an important part of your experience at RootsTech and we love seeing what you have to share. In fact, we have quite a few classes dedicated to the use of social media (link to social media class blog).
Roots Crew will be listening to what you have to say online and looking for opportunities to solve customer problems and spread joy.
Have you ever been walking around and your shoelace broke? Have you developed blisters after a long day on your feet? Have you ever gotten lost while looking for a class? Have you ever spilled on your shirt during lunch? Do you worry that a screw or lens will pop out of your glasses while in a class? Any of those situations can be solved by Roots Crew!
If your shoelace breaks, message us in the RootsTech App and we’ll bring you a new one. Got blisters? We can bring you anti-chafe cream and moleskin. If you get lost searching for that perfect class or exhibitor booth, send us a tweet and we’ll help you find your way. Did you spill on your shirt at lunch? Roots Crew will swing by with stain remover. Did your glasses break? We’d be happy to show up with a repair kit.
So, whether you reach out to us on social media or through the app, Roots Crew will be there to help in any way they can. You will see them wandering around in special T-shirts, prepared to help with whatever situation you find yourself in, and giving away some free swag.
Download our app today on the App Store or Google Play Store.
A unique opportunity: win an entry pass to RootsTech!
Published November 13, 2018.
As part of my mission as a RootsTech ambassador, I have the immense honor of offering a RootsTech pass to one lucky reader among you!
Admit it, you are a bunch of lucky devils to get spoiled by RootsTech like this!
Send me your notes, your letters, your finest prose or your verses, explaining how you imagine RootsTech and the heartfelt reasons you would like to be part of this crazy adventure, by email only and before November 30, 2019, and perhaps you will be the big winner of my RootsTech contest*?
The winner will be drawn at random from the entries received!
Remember that this is a unique opportunity to see what the largest genealogy show in the world looks like!
Of course, a trip to Salt Lake City has a cost! But perhaps this is the chance to skip renting a vacation house by the sea and finally discover THE PLACE TO BE in genealogy! Don't go to the sea or the mountains this year... treat yourself instead to a unique leap several centuries back in time!
How I love this « Roads to RootsTech series » giving us insight behind the curtains!
Good Job Guys and Girls!
What an enticing program!
Published October 19, 2018.
None other than that of RootsTech 2019!
You will tell me this is something I talk about every year, but it is so true! There is something for every taste: from newcomers to seasoned specialists, there are talks on a huge variety of subjects and, above all, something sorely lacking in Europe, or at least not present enough: reflection on the practice itself, on ethics, on ways of working, the do's and don'ts, and so on.
For sure, the program of the largest genealogy show in the world is so rich and varied that no one can remain indifferent. There is plenty to satisfy beginners, even people who have no idea what genealogy and its many practices are.
I am especially drawn to everything touching on research methods, particularly the digital side: making use of Google, YouTube, and video. In short, the genealogy of the future, which may well turn out completely different from how we imagine it. Two or three hundred years ago, our ancestors would not have pictured us conversing online with speakers from around the world, or even traveling for the sheer love of the craft!
As usual, I will also try to take advantage of my stay there to learn something entirely new: this year, I am going to take a lesson in... Japanese genealogy!
And above all, I cannot wait to show you this great event, to be your special correspondent on site, and to bring you the freshest news from the genealogy world!
5 More Reasons to Attend RootsTech 2019!
Published September 1, 2018.
RootsTech 2019 is right around the corner, February 27 to March 2, 2019, and there are so many exciting reasons to attend! With more than 30,000 people expected to make the trek to Salt Lake City and the Salt Palace Convention Center, RootsTech has quickly become one of the largest and most recognizable genealogy conferences in the world.
We’re excited to announce that the world-famous a cappella group the Edge Effect will be performing at RootsTech 2019 during the opening event. The event will take place on the main stage on Wednesday, February 27, directly following a keynote address by Steve Rockwood, CEO of FamilySearch International. The Edge Effect, which travels the world performing covers and original pieces, is sure to excite and entertain RootsTech attendees. The group’s music covers a wide variety of styles. They’ll also be sharing their family heritage stories during their performance.
You can hear some of their music on their website or on their YouTube channel.
We’re thrilled to announce that Jason Hewlett will be returning to the RootsTech stage as host and emcee for the third time. Hewlett, the multitalented speaker, entertainer, comedian, and impressionist, will bring his family-friendly antics to the stage to entertain RootsTech attendees during the daily general sessions.
Located a block north of the Salt Palace Convention Center, the Family History Library offers a host of records, genealogical publications, documents, and research guides from more than 100 countries. Bring your unsolvable genealogy dilemmas, and spend some time with experienced genealogists who are willing to help you find that missing ancestor.
For the third year at RootsTech, we’ll have certified genealogists on hand to help you find answers to your toughest questions! Located in the RootsTech Expo Hall, the Coaches’ Corner will be a place where you can spend some time researching and collaborating with professionals trained to help you break through brick walls. This type of one-on-one expert help is something you won’t want to miss.
On Wednesday, free box lunches on large food carts throughout the main concourse will be offered during the lunch hour while the Expo Hall and food areas are under construction. Please feel free to help yourself to a lunch and enjoy some networking time with friends and colleagues. Seating will be available in the main stage area. If you choose not to take a boxed lunch, you can enjoy a number of restaurants within walking distance of the Salt Palace. For a complete list of restaurants available in downtown Salt Lake City, click here.
Whatever your reasons for attending, make sure to register for RootsTech 2019. It will be an experience you won’t want to miss!
RootsTech: A Magnificent Life Experience!
Published July 19, 2018.
RootsTech is much more than a genealogy show, classes, lectures, and workshops!
It is a true life experience!
A place to make contacts and, who knows, to find cousins as well as lifelong friends who share your passion, who understand what you are talking about when you mention that ancestor who keeps eluding you or that archive collection you wish were digitized!
A place for exchange, reflection, and sharing around good values. A place to learn about new or less familiar subjects. A place to be reminded of basic principles and of ethics. How good it is that this place exists and gathers the great community of genealogists once a year!
Of course, the commercial companies are there presenting their products, but it is also the best time to look through their catalogs, understand how they work, and ask every question that crosses your mind!
It is also the ideal time to promote European companies such as Geneanet, Famicity, Filae, and Heredis!
Geneanet at RootsTech 2018!
An enrichment of my knowledge, my passion, and my profession. A better awareness of what is being done around the world, but also a real personal plus!
It is an interlude that makes you think, grow, and move forward.
And a little paradise I would not miss for anything in the world!
Hoping to see many of you at this extraordinary event!
"""Transformers for missing value imputation"""
# Authors: Nicolas Tresegnie <[email protected]>
# Sergey Feldman <[email protected]>
# License: BSD 3 clause
import warnings
import numbers
import numpy as np
import numpy.ma as ma
from scipy import sparse
from scipy import stats
from .base import BaseEstimator, TransformerMixin
from .utils import check_array
from .utils.sparsefuncs import _get_median
from .utils.validation import check_is_fitted
from .utils.validation import FLOAT_DTYPES
from .utils.fixes import _object_dtype_isnan
from .utils import is_scalar_nan
from .externals import six
zip = six.moves.zip
map = six.moves.map
__all__ = [
'MissingIndicator',
'SimpleImputer',
]
def _check_inputs_dtype(X, missing_values):
if (X.dtype.kind in ("f", "i", "u") and
not isinstance(missing_values, numbers.Real)):
raise ValueError("'X' and 'missing_values' types are expected to be"
" both numerical. Got X.dtype={} and "
" type(missing_values)={}."
.format(X.dtype, type(missing_values)))
def _get_mask(X, value_to_mask):
"""Compute the boolean mask X == missing_values."""
if is_scalar_nan(value_to_mask):
if X.dtype.kind == "f":
return np.isnan(X)
elif X.dtype.kind in ("i", "u"):
# can't have NaNs in integer array.
return np.zeros(X.shape, dtype=bool)
else:
# np.isnan does not work on object dtypes.
return _object_dtype_isnan(X)
else:
        # X == value_to_mask with object dtypes does not always perform
        # element-wise comparison for old versions of numpy
return np.equal(X, value_to_mask)
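# Illustrative behaviour of _get_mask (values are examples only):
#   _get_mask(np.array([1.0, np.nan]), np.nan)  ->  array([False,  True])
#   _get_mask(np.array([1, 0]), 0)              ->  array([False,  True])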
def _most_frequent(array, extra_value, n_repeat):
"""Compute the most frequent value in a 1d array extended with
[extra_value] * n_repeat, where extra_value is assumed to be not part
of the array."""
# Compute the most frequent value in array only
if array.size > 0:
with warnings.catch_warnings():
            # stats.mode raises a warning when the input array contains
            # objects due to its inability to detect NaNs. Irrelevant here
            # since the input array has already been NaN-masked.
warnings.simplefilter("ignore", RuntimeWarning)
mode = stats.mode(array)
most_frequent_value = mode[0][0]
most_frequent_count = mode[1][0]
else:
most_frequent_value = 0
most_frequent_count = 0
# Compare to array + [extra_value] * n_repeat
if most_frequent_count == 0 and n_repeat == 0:
return np.nan
elif most_frequent_count < n_repeat:
return extra_value
elif most_frequent_count > n_repeat:
return most_frequent_value
elif most_frequent_count == n_repeat:
        # Tie break: copy the behaviour of scipy.stats.mode
if most_frequent_value < extra_value:
return most_frequent_value
else:
return extra_value
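# Example of the tie-breaking above (values are examples only): for
# array=np.array([1, 1, 2]) with extra_value=2 and n_repeat=2, both candidates
# occur twice, so the smaller value (1) is returned, matching scipy.stats.mode.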
class SimpleImputer(BaseEstimator, TransformerMixin):
"""Imputation transformer for completing missing values.
Read more in the :ref:`User Guide <impute>`.
Parameters
----------
missing_values : number, string, np.nan (default) or None
The placeholder for the missing values. All occurrences of
`missing_values` will be imputed.
strategy : string, optional (default="mean")
The imputation strategy.
- If "mean", then replace missing values using the mean along
each column. Can only be used with numeric data.
- If "median", then replace missing values using the median along
each column. Can only be used with numeric data.
- If "most_frequent", then replace missing using the most frequent
value along each column. Can be used with strings or numeric data.
- If "constant", then replace missing values with fill_value. Can be
used with strings or numeric data.
.. versionadded:: 0.20
strategy="constant" for fixed value imputation.
fill_value : string or numerical value, optional (default=None)
When strategy == "constant", fill_value is used to replace all
occurrences of missing_values.
If left to the default, fill_value will be 0 when imputing numerical
data and "missing_value" for strings or object data types.
verbose : integer, optional (default=0)
Controls the verbosity of the imputer.
copy : boolean, optional (default=True)
If True, a copy of X will be created. If False, imputation will
be done in-place whenever possible. Note that, in the following cases,
a new copy will always be made, even if `copy=False`:
- If X is not an array of floating values;
- If X is encoded as a CSR matrix.
Attributes
----------
statistics_ : array of shape (n_features,)
The imputation fill value for each feature.
Examples
--------
>>> import numpy as np
>>> from sklearn.impute import SimpleImputer
>>> imp_mean = SimpleImputer(missing_values=np.nan, strategy='mean')
>>> imp_mean.fit([[7, 2, 3], [4, np.nan, 6], [10, 5, 9]])
... # doctest: +NORMALIZE_WHITESPACE
SimpleImputer(copy=True, fill_value=None, missing_values=nan,
strategy='mean', verbose=0)
>>> X = [[np.nan, 2, 3], [4, np.nan, 6], [10, np.nan, 9]]
>>> print(imp_mean.transform(X))
... # doctest: +NORMALIZE_WHITESPACE
[[ 7. 2. 3. ]
[ 4. 3.5 6. ]
[10. 3.5 9. ]]
Notes
-----
Columns which only contained missing values at `fit` are discarded upon
`transform` if strategy is not "constant".
"""
def __init__(self, missing_values=np.nan, strategy="mean",
fill_value=None, verbose=0, copy=True):
self.missing_values = missing_values
self.strategy = strategy
self.fill_value = fill_value
self.verbose = verbose
self.copy = copy
def _validate_input(self, X):
allowed_strategies = ["mean", "median", "most_frequent", "constant"]
if self.strategy not in allowed_strategies:
raise ValueError("Can only use these strategies: {0} "
" got strategy={1}".format(allowed_strategies,
self.strategy))
if self.strategy in ("most_frequent", "constant"):
dtype = None
else:
dtype = FLOAT_DTYPES
if not is_scalar_nan(self.missing_values):
force_all_finite = True
else:
force_all_finite = "allow-nan"
try:
X = check_array(X, accept_sparse='csc', dtype=dtype,
force_all_finite=force_all_finite, copy=self.copy)
except ValueError as ve:
if "could not convert" in str(ve):
raise ValueError("Cannot use {0} strategy with non-numeric "
"data. Received datatype :{1}."
"".format(self.strategy, X.dtype.kind))
else:
raise ve
_check_inputs_dtype(X, self.missing_values)
if X.dtype.kind not in ("i", "u", "f", "O"):
raise ValueError("SimpleImputer does not support data with dtype "
"{0}. Please provide either a numeric array (with"
" a floating point or integer dtype) or "
"categorical data represented either as an array "
"with integer dtype or an array of string values "
"with an object dtype.".format(X.dtype))
return X
def fit(self, X, y=None):
"""Fit the imputer on X.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
Input data, where ``n_samples`` is the number of samples and
``n_features`` is the number of features.
Returns
-------
self : SimpleImputer
"""
X = self._validate_input(X)
# default fill_value is 0 for numerical input and "missing_value"
# otherwise
if self.fill_value is None:
if X.dtype.kind in ("i", "u", "f"):
fill_value = 0
else:
fill_value = "missing_value"
else:
fill_value = self.fill_value
# fill_value should be numerical in case of numerical input
if (self.strategy == "constant" and
X.dtype.kind in ("i", "u", "f") and
not isinstance(fill_value, numbers.Real)):
raise ValueError("'fill_value'={0} is invalid. Expected a "
"numerical value when imputing numerical "
"data".format(fill_value))
if sparse.issparse(X):
# missing_values = 0 not allowed with sparse data as it would
# force densification
if self.missing_values == 0:
raise ValueError("Imputation not possible when missing_values "
"== 0 and input is sparse. Provide a dense "
"array instead.")
else:
self.statistics_ = self._sparse_fit(X,
self.strategy,
self.missing_values,
fill_value)
else:
self.statistics_ = self._dense_fit(X,
self.strategy,
self.missing_values,
fill_value)
return self
def _sparse_fit(self, X, strategy, missing_values, fill_value):
"""Fit the transformer on sparse data."""
mask_data = _get_mask(X.data, missing_values)
n_implicit_zeros = X.shape[0] - np.diff(X.indptr)
statistics = np.empty(X.shape[1])
if strategy == "constant":
            # for constant strategy, self.statistics_ is used to store
# fill_value in each column
statistics.fill(fill_value)
else:
for i in range(X.shape[1]):
column = X.data[X.indptr[i]:X.indptr[i + 1]]
mask_column = mask_data[X.indptr[i]:X.indptr[i + 1]]
column = column[~mask_column]
# combine explicit and implicit zeros
mask_zeros = _get_mask(column, 0)
column = column[~mask_zeros]
n_explicit_zeros = mask_zeros.sum()
n_zeros = n_implicit_zeros[i] + n_explicit_zeros
if strategy == "mean":
s = column.size + n_zeros
statistics[i] = np.nan if s == 0 else column.sum() / s
elif strategy == "median":
statistics[i] = _get_median(column,
n_zeros)
elif strategy == "most_frequent":
statistics[i] = _most_frequent(column,
0,
n_zeros)
return statistics
def _dense_fit(self, X, strategy, missing_values, fill_value):
"""Fit the transformer on dense data."""
mask = _get_mask(X, missing_values)
masked_X = ma.masked_array(X, mask=mask)
# Mean
if strategy == "mean":
mean_masked = np.ma.mean(masked_X, axis=0)
# Avoid the warning "Warning: converting a masked element to nan."
mean = np.ma.getdata(mean_masked)
mean[np.ma.getmask(mean_masked)] = np.nan
return mean
# Median
elif strategy == "median":
median_masked = np.ma.median(masked_X, axis=0)
# Avoid the warning "Warning: converting a masked element to nan."
median = np.ma.getdata(median_masked)
median[np.ma.getmaskarray(median_masked)] = np.nan
return median
# Most frequent
elif strategy == "most_frequent":
            # scipy.stats.mstats.mode cannot be used because it will not work
# properly if the first element is masked and if its frequency
# is equal to the frequency of the most frequent valid element
# See https://github.com/scipy/scipy/issues/2636
# To be able access the elements by columns
X = X.transpose()
mask = mask.transpose()
if X.dtype.kind == "O":
most_frequent = np.empty(X.shape[0], dtype=object)
else:
most_frequent = np.empty(X.shape[0])
for i, (row, row_mask) in enumerate(zip(X[:], mask[:])):
                row_mask = np.logical_not(row_mask).astype(bool)
row = row[row_mask]
most_frequent[i] = _most_frequent(row, np.nan, 0)
return most_frequent
# Constant
elif strategy == "constant":
            # for constant strategy, self.statistics_ is used to store
# fill_value in each column
return np.full(X.shape[1], fill_value, dtype=X.dtype)
def transform(self, X):
"""Impute all missing values in X.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
The input data to complete.
"""
check_is_fitted(self, 'statistics_')
X = self._validate_input(X)
statistics = self.statistics_
if X.shape[1] != statistics.shape[0]:
raise ValueError("X has %d features per sample, expected %d"
% (X.shape[1], self.statistics_.shape[0]))
# Delete the invalid columns if strategy is not constant
if self.strategy == "constant":
valid_statistics = statistics
else:
# same as np.isnan but also works for object dtypes
invalid_mask = _get_mask(statistics, np.nan)
valid_mask = np.logical_not(invalid_mask)
valid_statistics = statistics[valid_mask]
valid_statistics_indexes = np.flatnonzero(valid_mask)
if invalid_mask.any():
missing = np.arange(X.shape[1])[invalid_mask]
if self.verbose:
warnings.warn("Deleting features without "
"observed values: %s" % missing)
X = X[:, valid_statistics_indexes]
# Do actual imputation
if sparse.issparse(X):
if self.missing_values == 0:
raise ValueError("Imputation not possible when missing_values "
"== 0 and input is sparse. Provide a dense "
"array instead.")
else:
mask = _get_mask(X.data, self.missing_values)
                indexes = np.repeat(np.arange(len(X.indptr) - 1, dtype=int),
np.diff(X.indptr))[mask]
X.data[mask] = valid_statistics[indexes].astype(X.dtype,
copy=False)
else:
mask = _get_mask(X, self.missing_values)
n_missing = np.sum(mask, axis=0)
values = np.repeat(valid_statistics, n_missing)
coordinates = np.where(mask.transpose())[::-1]
X[coordinates] = values
return X
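# Hedged illustration (hypothetical data) of the dense imputation step in
# ``transform`` above: with X = [[nan, 2], [4, nan]] and statistics_ = [4., 2.],
# the mask is [[True, False], [False, True]], ``values`` repeats each statistic
# once per missing entry in its column ([4., 2.]), and the transposed
# coordinates write them back column by column, giving [[4., 2.], [4., 2.]].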
class MissingIndicator(BaseEstimator, TransformerMixin):
"""Binary indicators for missing values.
Parameters
----------
missing_values : number, string, np.nan (default) or None
The placeholder for the missing values. All occurrences of
`missing_values` will be imputed.
features : str, optional
Whether the imputer mask should represent all or a subset of
features.
- If "missing-only" (default), the imputer mask will only represent
features containing missing values during fit time.
- If "all", the imputer mask will represent all features.
sparse : boolean or "auto", optional
Whether the imputer mask format should be sparse or dense.
- If "auto" (default), the imputer mask will be of same type as
input.
- If True, the imputer mask will be a sparse matrix.
- If False, the imputer mask will be a numpy array.
error_on_new : boolean, optional
If True (default), transform will raise an error when there are
features with missing values in transform that have no missing values
        in fit. This is applicable only when ``features="missing-only"``.
Attributes
----------
features_ : ndarray, shape (n_missing_features,) or (n_features,)
The features indices which will be returned when calling ``transform``.
        They are computed during ``fit``. For ``features='all'``, it is equal
to ``range(n_features)``.
Examples
--------
>>> import numpy as np
>>> from sklearn.impute import MissingIndicator
>>> X1 = np.array([[np.nan, 1, 3],
... [4, 0, np.nan],
... [8, 1, 0]])
>>> X2 = np.array([[5, 1, np.nan],
... [np.nan, 2, 3],
... [2, 4, 0]])
>>> indicator = MissingIndicator()
>>> indicator.fit(X1)
MissingIndicator(error_on_new=True, features='missing-only',
missing_values=nan, sparse='auto')
>>> X2_tr = indicator.transform(X2)
>>> X2_tr
array([[False, True],
[ True, False],
[False, False]])
"""
def __init__(self, missing_values=np.nan, features="missing-only",
sparse="auto", error_on_new=True):
self.missing_values = missing_values
self.features = features
self.sparse = sparse
self.error_on_new = error_on_new
def _get_missing_features_info(self, X):
"""Compute the imputer mask and the indices of the features
containing missing values.
Parameters
----------
X : {ndarray or sparse matrix}, shape (n_samples, n_features)
The input data with missing values. Note that ``X`` has been
checked in ``fit`` and ``transform`` before to call this function.
Returns
-------
imputer_mask : {ndarray or sparse matrix}, shape \
(n_samples, n_features) or (n_samples, n_features_with_missing)
The imputer mask of the original data.
features_with_missing : ndarray, shape (n_features_with_missing)
The features containing missing values.
"""
if sparse.issparse(X) and self.missing_values != 0:
mask = _get_mask(X.data, self.missing_values)
# The imputer mask will be constructed with the same sparse format
# as X.
sparse_constructor = (sparse.csr_matrix if X.format == 'csr'
else sparse.csc_matrix)
imputer_mask = sparse_constructor(
(mask, X.indices.copy(), X.indptr.copy()),
shape=X.shape, dtype=bool)
missing_values_mask = imputer_mask.copy()
missing_values_mask.eliminate_zeros()
features_with_missing = (
np.flatnonzero(np.diff(missing_values_mask.indptr))
if missing_values_mask.format == 'csc'
else np.unique(missing_values_mask.indices))
if self.sparse is False:
imputer_mask = imputer_mask.toarray()
elif imputer_mask.format == 'csr':
imputer_mask = imputer_mask.tocsc()
else:
if sparse.issparse(X):
# case of sparse matrix with 0 as missing values. Implicit and
# explicit zeros are considered as missing values.
X = X.toarray()
imputer_mask = _get_mask(X, self.missing_values)
features_with_missing = np.flatnonzero(imputer_mask.sum(axis=0))
if self.sparse is True:
imputer_mask = sparse.csc_matrix(imputer_mask)
return imputer_mask, features_with_missing
def fit(self, X, y=None):
"""Fit the transformer on X.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
Input data, where ``n_samples`` is the number of samples and
``n_features`` is the number of features.
Returns
-------
self : object
Returns self.
"""
if not is_scalar_nan(self.missing_values):
force_all_finite = True
else:
force_all_finite = "allow-nan"
X = check_array(X, accept_sparse=('csc', 'csr'),
force_all_finite=force_all_finite)
_check_inputs_dtype(X, self.missing_values)
self._n_features = X.shape[1]
if self.features not in ('missing-only', 'all'):
raise ValueError("'features' has to be either 'missing-only' or "
"'all'. Got {} instead.".format(self.features))
if not ((isinstance(self.sparse, six.string_types) and
self.sparse == "auto") or isinstance(self.sparse, bool)):
raise ValueError("'sparse' has to be a boolean or 'auto'. "
"Got {!r} instead.".format(self.sparse))
self.features_ = (self._get_missing_features_info(X)[1]
if self.features == 'missing-only'
else np.arange(self._n_features))
return self
def transform(self, X):
"""Generate missing values indicator for X.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
The input data to complete.
Returns
-------
Xt : {ndarray or sparse matrix}, shape (n_samples, n_features)
The missing indicator for input data. The data type of ``Xt``
will be boolean.
"""
check_is_fitted(self, "features_")
if not is_scalar_nan(self.missing_values):
force_all_finite = True
else:
force_all_finite = "allow-nan"
X = check_array(X, accept_sparse=('csc', 'csr'),
force_all_finite=force_all_finite)
_check_inputs_dtype(X, self.missing_values)
if X.shape[1] != self._n_features:
raise ValueError("X has a different number of features "
"than during fitting.")
imputer_mask, features = self._get_missing_features_info(X)
if self.features == "missing-only":
features_diff_fit_trans = np.setdiff1d(features, self.features_)
if (self.error_on_new and features_diff_fit_trans.size > 0):
raise ValueError("The features {} have missing values "
"in transform but have no missing values "
"in fit.".format(features_diff_fit_trans))
if (self.features_.size > 0 and
self.features_.size < self._n_features):
imputer_mask = imputer_mask[:, self.features_]
return imputer_mask
def fit_transform(self, X, y=None):
"""Generate missing values indicator for X.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
The input data to complete.
Returns
-------
Xt : {ndarray or sparse matrix}, shape (n_samples, n_features)
The missing indicator for input data. The data type of ``Xt``
will be boolean.
"""
return self.fit(X, y).transform(X)
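# Hedged usage sketch (hypothetical data): with ``features='all'`` the mask
# keeps every column, not just the columns that had missing values at fit time.
#
#   indicator = MissingIndicator(features='all')
#   indicator.fit_transform(np.array([[np.nan, 1, 3],
#                                     [4, 0, np.nan],
#                                     [8, 1, 0]]))
#   # array([[ True, False, False],
#   #        [False, False,  True],
#   #        [False, False, False]])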
|
What does the word T3 Phage mean?
A bacteriophage in the genus of T7-like phages, of the family PODOVIRIDAE, which is very closely related to BACTERIOPHAGE T7.
|
import requests
import os
shows=[]
page=str(requests.get("http://dramaonline.com").text.encode('ascii','ignore'))
for item in page.split("\n"):
if "title=" in item:
print ("Title tag in item {0}".format(item))
if "<a href=" in item:
print ("<a href= found")
episodepage=item.split('<a href="')[1].split('"')[0]
            print("Episode page is {0}".format(episodepage))
            episodename=item.split('title="')[1].split('"')[0]
            print("Searching for vidrail link in {0}".format(episodename))
for line in str(requests.get(episodepage).text.encode('ascii','ignore')).split("\n"):
if "vidrail" in line:
                    print(line)
                    if 'src="http://www.vidrail' in line:
                        print("Vidrail link found in line {0}".format(line))
                        vidraillink=line.split('src="')[1].split('"')[0]
                        print("Vidrail link is {0}".format(vidraillink))
                        for line in str(requests.get(vidraillink).text.encode('ascii','ignore')).split("\n"):
                            if ".mp4" in line:
                                print(".mp4 found in line {0}".format(line))
                                episodelink=line.split('src="')[1].split('"')[0]
                                print("Episode link is {0}".format(episodelink))
                                # store an ordered (name, link) pair instead of a set
                                shows.append((episodename, episodelink))
f=open("shows.txt",'w')
f.write(str(shows))
f.close() |
‘Dogs Today Magazine’ in the United Kingdom proclaimed our photographer Alex Cearns to be “one of the greatest dog photographers in the world”. That she is, and so much more. Of all the ways we humans connect, some of the strongest bonds for many of us are the bonds we create with our animal friends.
Alex intuitively understands what makes her animal subjects tick. With the magic of her camera, she has the uncanny ability to see into their souls and capture their visual language. By telling her eloquent photographic stories, Alex lovingly shows that the separation between us and our animal friends is very slim. She reminds us to enjoy, rejoice and empathise with every beloved pet and extraordinary sentient being that shares our precious planet.
What is it about Alex that gives her this unique ability? Perhaps the answer lies in her childhood. Alex grew up in the country surrounded by dogs, orphaned bottle-fed lambs, rescued joeys, rabbits and reptiles. Any creature in need of rehabilitation was welcome in Alex’s world, and offered a safe haven in her family’s kitchen. Animals became her best friends and she developed a magical way to communicate with them, something she carries through today.
Alex’s point of view is set squarely on the animal kingdom. She creates remarkable portraits of around 1300 animals each year – from dogs, cats, reptiles, rats, rabbits, ferrets, birds, horses, goats, sheep … to bilbies, penguins, possums, monkeys, bears, tigers and elephants.
Her goal is to provide excellence in everything she does as a professional photographer.
With ten years’ dedication to professional animal photography, she is an undisputed leader in her niche.
Alex has won more than 250 awards for photography, business and philanthropy. Her deep, long term commitment to philanthropy and her advocacy for animal rescue and wildlife conservation were distinguished with a Medal of the Order of Australia (OAM) in the Australia Day Honours List 2019 from the Council for the Order of Australia for her service to the community through charitable organisations. Her images have appeared in countless Australian and international media publications, on book covers, magazines and in advertising campaigns. They’ve been published in an Australia Post stamp collection, featured on sleepwear ranges by Peter Alexander Pajamas as part of an ongoing collaboration, and included in the ‘Best Dog Photographs in the World’ series by UK magazine Dog’s Today for more than 36 consecutive months.
Alex is deeply committed to the well being of all creatures great and small, and she is considered one of Australia’s most passionate champions and voices for animal rescue and wildlife conservation. Inspiring others with her joy of working with animals, Alex’s commitment includes partnering with, or providing sponsorship to, around 40 animal charity and conservation organisations.
Alex’s exquisite photography, active philanthropy and passionate advocacy for rescue and conservation have earned her high regard among animal lovers. Her social media influence includes a strong following on Facebook, Twitter and Instagram.
She is a Pro Team Ambassador for Tamron’s Super Performance Series Lenses in Australia and the United States, and Brand Ambassador for Profoto, BenQ, Spider Camera Holster and Seagate Technology.
A published author, Alex is currently signed to leading publisher ABC Books/Harper Collins Australia. Her hotly anticipated ‘Perfect Imperfection’ was released in March 2018. Please click here for information on how to order. Her book ‘Zen Dogs’, was published in October 2016 by Harper Collins New York. Her books ‘Mother Knows Best – Life Lessons from the Animal World’ and ‘Joy, A Celebration of the Animal Kingdom’ were both published in 2014 by Penguin Books Australia. In 2015 she collaborated as the photographer of ‘Things Your Dog Wants You to Know’ released by Penguin Books Australia.
Alex is an accomplished and inspirational public speaker. She uses warmth, humour and uplifting visual storytelling to engage her audience with her passions for photography, animal rescue and wildlife conservation. She is a popular guest speaker at photography events, conferences, expos, camera clubs, women’s groups and corporate functions. Not afraid to step out from behind the lens, she’s appeared on Channel 7’s Weekend Sunrise and Today Tonight programs, on Channel 10’s Studio 10 and on news programs in WA, South Australia and Tasmania. Alex is a regular guest on the WA produced TV show ‘The Couch’ and has been interviewed on Mix 94.5’s Drive Show, 6PR, 96fm, Nova 93.7 and ABC Radio National. She also appeared in an episode of 60 Second Docs, which was viewed by over 6 million people.
Alex is the first female professional photography tour leader for global travel company World Expeditions. Her sell-out annual wildlife tours have taken her to some of the world’s most exciting animal destinations and she has ticked off all 7 continents on her travels.
When you come to Houndstooth Studio, you and your pet will receive the full benefit of Alex’s exceptional experience, her intuitive animal handling expertise and Houndstooth Studio’s full commitment to client satisfaction.
Please join our Facebook community here to see more of what Alex does and how she does it. We look forward to welcoming you and your furred, feathered, finned or fanged friends into our studio soon. |
from django.db import models as m
from django.core.exceptions import ValidationError
"""
No changes to the models are needed to use flexselect.
"""
class Company(m.Model):
name = m.CharField(max_length=80)
def __str__(self):
return self.name
class CompanyContactPerson(m.Model):
    company = m.ForeignKey(Company, on_delete=m.CASCADE)
name = m.CharField(max_length=80)
email = m.EmailField()
def __str__(self):
return self.name
class Client(m.Model):
    company = m.ForeignKey(Company, on_delete=m.CASCADE)
name = m.CharField(max_length=80)
def __str__(self):
return self.name
class Case(m.Model):
    client = m.ForeignKey(Client, on_delete=m.CASCADE)
    company_contact_person = m.ForeignKey(CompanyContactPerson, on_delete=m.CASCADE)
def clean(self):
"""
Make sure that the company for client is the same as the company for
the company contact person.
"""
if not self.client.company == self.company_contact_person.company:
            raise ValidationError("The client's and the contact's companies "
                                  "do not match.")
def __str__(self):
return 'Case: %d' % self.id
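# Hedged usage sketch (assumes a configured Django project; the instance names
# below are hypothetical): calling full_clean() on a Case whose client and
# company contact belong to different companies raises the ValidationError
# defined in Case.clean().
#
#   case = Case(client=client_at_acme, company_contact_person=contact_at_globex)
#   case.full_clean()  # -> ValidationError when the two companies differ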
|
Having a mentor is important. Plato had Socrates; Daniel (the karate kid) had Mr. Miyagi; We all have Oprah. When you're looking for advice in life and in your career, sometimes you need guidance from someone who knows exactly what you're going through because they have been in your shoes.
This is particularly true for nurses, most of whom have to balance long hours at work with family time and achieving their professional goals. Our RN to BSN online program teaching staff, all practicing nurses, shared their advice on standing out in the workplace, how to "have it all," and what it takes to make it in the nursing profession.
"Giving back through community volunteering looks good on a resume, but you also gain experience and make a difference. To be impactful as nurses, we need to go out into the community. It also gives you exposure to people that you don't normally see at the bedside."
"Nurses need to make themselves marketable through their education, becoming a member of organizations and volunteering, as well as having an up-to-date professional portfolio and references from professors and past employers."
"People have to realize that you have to set goals, short term and long term, and you have to make choices. You have to budget your time. It takes commitment and it takes real determination. But you also have to pace yourself."
"Develop a support team that includes family members, friends, or neighbors. Have family time that is planned; For example, block out Friday from 8-10 for family movie nights. Remind your family that you need time [to handle other obligations] in order to participate."
"Not everyone can be a caring nurse. There are some innate qualities you need to be effective."
"If you don't have it in your heart and you don't have compassion, you need to work somewhere else. In our profession it makes a big difference." |
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# This file is part of PyBOSSA.
#
# PyBOSSA is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# PyBOSSA is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with PyBOSSA. If not, see <http://www.gnu.org/licenses/>.
import json
from optparse import OptionParser
import pbclient
import requests
def get_categories(url):
"""Gets Ushahidi categories from the server"""
url = url + "/api?task=categories"
r = requests.get(url)
data = r.json()
categories = data['payload']['categories']
return categories
def task_formatter(app_config, row, n_answers, categories):
"""
    Formats one Ushahidi incident row as a PyBossa task payload.
    :arg dict app_config: Application configuration (provides the question).
    :arg list row: CSV row describing one incident report.
    :arg n_answers: Number of answers required per task.
    :arg list categories: Ushahidi categories to show with the task.
    :returns: Task info for pbclient.create_task.
    :rtype: dict
"""
# Each row has the following format
# row[0] = INCIDENT ID,
# row[1] = INCIDENT TITLE,
# row[2] = INCIDENT DATE
# row[3] = LOCATION
# row[4] = DESCRIPTION
# row[5] = CATEGORY
# row[6] = LATITUDE
# row[7] = LONGITUDE
# row[8] = APPROVED
# row[9] = VERIFIED
incident = dict(id=row[0],
title=row[1],
date=row[2],
location=row[3],
description=row[4],
category=row[5],
latitude=row[6],
longitude=row[7],
approved=row[8],
verified=row[9])
categories = categories
return dict(question=app_config['question'],
n_answers=int(n_answers),
incident=incident,
categories=categories)
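# Hedged example (hypothetical values) of the payload built above for one CSV
# row: the Ushahidi columns become the ``incident`` dict, and the question and
# categories ride along for the task presenter.
#
#   task_formatter({'question': 'Categorize this report'},
#                  ['1', 'Flood', '2010-01-13', 'Port-au-Prince', 'Roads cut',
#                   'Emergency', '18.53', '-72.33', 'YES', 'NO'],
#                  2, categories)
#   # -> {'question': 'Categorize this report', 'n_answers': 2,
#   #     'incident': {...}, 'categories': categories}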
if __name__ == "__main__":
# Arguments for the application
usage = "usage: %prog [options]"
parser = OptionParser(usage)
# URL where PyBossa listens
parser.add_option("-s", "--server", dest="api_url",
help="PyBossa URL http://domain.com/", metavar="URL")
# API-KEY
parser.add_option("-k", "--api-key", dest="api_key",
help="PyBossa User API-KEY to interact with PyBossa",
metavar="API-KEY")
# Create App
parser.add_option("-c", "--create-app", action="store_true",
dest="create_app",
help="Create the application",
metavar="CREATE-APP")
# Update template for tasks and long_description for app
parser.add_option("-t", "--update-template", action="store_true",
dest="update_template",
help="Update Tasks template",
metavar="UPDATE-TEMPLATE")
# Update tasks question
parser.add_option("-q", "--update-tasks",
dest="update_tasks",
help="Update Tasks question",
metavar="UPDATE-TASKS")
parser.add_option("-x", "--extra-task", action="store_true",
dest="add_more_tasks",
help="Add more tasks",
metavar="ADD-MORE-TASKS")
# Modify the number of TaskRuns per Task
# (default 30)
parser.add_option("-n", "--number-answers",
dest="n_answers",
help="Number of answers per task",
metavar="N-ANSWERS")
parser.add_option("-u", "--ushahidi-server",
dest="ushahidi_server",
help="Ushahidi server",
metavar="Ushahidi server")
parser.add_option("-d", "--data",
dest="csv_file",
help="CSV file with incident reports to import",
metavar="CSV file")
parser.add_option("-v", "--verbose", action="store_true", dest="verbose")
(options, args) = parser.parse_args()
# Load app details
try:
app_json = open('app.json')
app_config = json.load(app_json)
app_json.close()
except IOError as e:
        print("app.json is missing! Please create a new one")
exit(0)
if not options.api_url:
options.api_url = 'http://localhost:5000/'
pbclient.set('endpoint', options.api_url)
if not options.api_key:
        parser.error("You must supply an API-KEY to create an "
                     "application and tasks in PyBossa")
else:
pbclient.set('api_key', options.api_key)
if (options.create_app or options.add_more_tasks) and not options.ushahidi_server:
parser.error("You must supply the Ushahidi server from where you want \
to categorize the reports")
if (options.verbose):
        print('Running against PyBossa instance at: %s' % options.api_url)
print('Using API-KEY: %s' % options.api_key)
if not options.n_answers:
options.n_answers = 2
if options.create_app:
import csv
pbclient.create_app(app_config['name'],
app_config['short_name'],
app_config['description'])
app = pbclient.find_app(short_name=app_config['short_name'])[0]
app.long_description = open('long_description.html').read()
app.info['task_presenter'] = open('template.html').read()
app.info['thumbnail'] = app_config['thumbnail']
app.info['tutorial'] = open('tutorial.html').read()
categories = get_categories(options.ushahidi_server)
pbclient.update_app(app)
if not options.csv_file:
            options.csv_file = 'ushahidi.csv'
with open(options.csv_file, 'rb') as csvfile:
csvreader = csv.reader(csvfile, delimiter=',')
# Each row has the following format
# # <- ID
# INCIDENT TITLE
# INCIDENT DATE
# LOCATION
# DESCRIPTION
# CATEGORY
# LATITUDE
# LONGITUDE
# APPROVED
# VERIFIED
for row in csvreader:
if row[0] != '#':
task_info = task_formatter(app_config, row,
options.n_answers,
categories)
pbclient.create_task(app.id, task_info)
else:
app = pbclient.find_app(short_name=app_config['short_name'])[0]
if options.add_more_tasks:
categories = get_categories(options.ushahidi_server)
import csv
if not options.csv_file:
options.csv_file = 'ushahidi.csv'
with open(options.csv_file, 'rb') as csvfile:
csvreader = csv.reader(csvfile, delimiter=',')
# Each row has the following format
# # <- ID
# INCIDENT TITLE
# INCIDENT DATE
# LOCATION
# DESCRIPTION
# CATEGORY
# LATITUDE
# LONGITUDE
# APPROVED
# VERIFIED
for row in csvreader:
if row[0] != 'tweetid':
task_info = task_formatter(app_config, row,
options.n_answers,
categories)
pbclient.create_task(app.id, task_info)
if options.update_template:
        print("Updating app tutorial, description and task presenter...")
app = pbclient.find_app(short_name=app_config['short_name'])[0]
app.long_description = open('long_description.html').read()
app.info['task_presenter'] = open('template.html').read()
app.info['tutorial'] = open('tutorial.html').read()
app.info['thumbnail'] = app_config['thumbnail']
pbclient.update_app(app)
        print("Done!")
if options.update_tasks:
        print("Updating task n_answers")
app = pbclient.find_app(short_name=app_config['short_name'])[0]
n_tasks = 0
offset = 0
limit = 100
tasks = pbclient.get_tasks(app.id, offset=offset, limit=limit)
while tasks:
for task in tasks:
                print("Updating task: %s" % task.id)
if ('n_answers' in task.info.keys()):
del(task.info['n_answers'])
task.n_answers = int(options.update_tasks)
pbclient.update_task(task)
n_tasks += 1
offset = (offset + limit)
tasks = pbclient.get_tasks(app.id, offset=offset, limit=limit)
        print("%s Tasks have been updated!" % n_tasks)
if not options.create_app and not options.update_template\
and not options.add_more_tasks and not options.update_tasks:
parser.error("Please check --help or -h for the available options")
|
Even the most open-minded, well-intentioned people can fall victim to making snap judgments that aren’t aligned with their core values and beliefs. And those quick decisions, although not intended, could prevent others from getting a fair shake. That’s the main message behind the Check Your Blind Spots mobile tour. |
import numpy as np
import matplotlib.pyplot as plt
# Matrix containing the data
Matriz = np.genfromtxt("Star_1.csv", delimiter=",")
# Vectors with each of the bands and their errors
U_band = Matriz[:, 0]
G_band = Matriz[:, 1]
R_band = Matriz[:, 2]
I_band = Matriz[:, 3]
Z_band = Matriz[:, 4]
EU_band = Matriz[:, 5]
EG_band = Matriz[:, 6]
ER_band = Matriz[:, 7]
EI_band = Matriz[:, 8]
EZ_band = Matriz[:, 9]
U_mean = np.mean(U_band)
G_mean = np.mean(G_band)
R_mean = np.mean(R_band)
I_mean = np.mean(I_band)
Z_mean = np.mean(Z_band)
U_mean1 = 18.52
G_mean1 = 17.26
R_mean1 = 17.24
I_mean1 = 17.34
Z_mean1 = 17.39
x = np.arange(0, 43, 1)
# Plots: one figure per band with the observed magnitudes and reference lines
bands = [("U", U_band, U_mean, U_mean1, EU_band),
         ("G", G_band, G_mean, G_mean1, EG_band),
         ("R", R_band, R_mean, R_mean1, ER_band),
         ("I", I_band, I_mean, I_mean1, EI_band),
         ("Z", Z_band, Z_mean, Z_mean1, EZ_band)]
for name, band, mean, mean1, err in bands:
    plt.figure()
    plt.plot(x, band, 'bo', label='Magnitude')
    plt.hlines(y=mean, xmin=0, xmax=44, color='r', label='Found value')
    plt.hlines(y=mean1, xmin=0, xmax=44, color='g', label='Mean value')
    # plt.errorbar(x, band, yerr=err, fmt='o', ecolor='b')
    plt.ylabel("Magnitude")
    plt.xlabel("Observation's night")
    plt.title("%s Band" % name)
    plt.grid()
    plt.legend(loc='upper right')
    plt.savefig("%s.png" % name)
    plt.show(True)
|
YK Thong is a premier clinic located at the prime area of Setia Alam. We are a one-stop medical clinic, offering you a wide range of medical, health, aesthetics and wellness services.
As one of the international cancer research bases, Modern Cancer Hospital Guangzhou is famous for good service and advanced technology, providing you the most suitable treatment and the best service. |
import time
import logging
from django.core.cache import cache
from django.conf import settings
from cacheback import tasks
logging.basicConfig()
logger = logging.getLogger('cacheback')
MEMCACHE_MAX_EXPIRATION = 2592000
class Job(object):
"""
A cached read job.
This is the core class for the package which is intended to be subclassed
to allow the caching behaviour to be customised.
"""
# All items are stored in memcache as a tuple (expiry, data). We don't use
    # the TTL functionality within memcache but implement our own. If the
# expiry value is None, this indicates that there is already a job created
# for refreshing this item.
#: Default cache lifetime is 5 minutes. After this time, the result will
#: be considered stale and requests will trigger a job to refresh it.
lifetime = 600
#: Timeout period during which no new Celery tasks will be created for a
#: single cache item. This time should cover the normal time required to
#: refresh the cache.
refresh_timeout = 60
#: Time to store items in the cache. After this time, we will get a cache
#: miss which can lead to synchronous refreshes if you have
#: fetch_on_miss=True.
cache_ttl = MEMCACHE_MAX_EXPIRATION
#: Whether to perform a synchronous refresh when a result is missing from
#: the cache. Default behaviour is to do a synchronous fetch when the cache is empty.
    #: Stale results are generally OK, but having no results is not.
fetch_on_miss = True
#: Whether to perform a synchronous refresh when a result is in the cache
    #: but stale. Default behaviour is never to do a synchronous fetch but
#: there will be times when an item is _too_ stale to be returned.
fetch_on_stale_threshold = None
#: Overrides options for `refresh_cache.apply_async` (e.g. `queue`).
task_options = {}
# --------
# MAIN API
# --------
def get(self, *raw_args, **raw_kwargs):
"""
Return the data for this function (using the cache if possible).
        This method is not intended to be overridden
"""
# We pass args and kwargs through a filter to allow them to be
# converted into values that can be pickled.
args = self.prepare_args(*raw_args)
kwargs = self.prepare_kwargs(**raw_kwargs)
# Build the cache key and attempt to fetch the cached item
key = self.key(*args, **kwargs)
item = cache.get(key)
if item is None:
# Cache MISS - we can either:
# a) fetch the data immediately, blocking execution until
# the fetch has finished, or
# b) trigger an async refresh and return an empty result
if self.should_missing_item_be_fetched_synchronously(*args, **kwargs):
logger.debug(("Job %s with key '%s' - cache MISS - running "
"synchronous refresh"),
self.class_path, key)
return self.refresh(*args, **kwargs)
else:
logger.debug(("Job %s with key '%s' - cache MISS - triggering "
"async refresh and returning empty result"),
self.class_path, key)
# To avoid cache hammering (ie lots of identical Celery tasks
# to refresh the same cache item), we reset the cache with an
# empty result which will be returned until the cache is
# refreshed.
empty = self.empty()
self.cache_set(key, self.timeout(*args, **kwargs), empty)
self.async_refresh(*args, **kwargs)
return empty
expiry, data = item
delta = time.time() - expiry
if delta > 0:
# Cache HIT but STALE expiry - we can either:
# a) fetch the data immediately, blocking execution until
# the fetch has finished, or
# b) trigger a refresh but allow the stale result to be
# returned this time. This is normally acceptable.
if self.should_stale_item_be_fetched_synchronously(
delta, *args, **kwargs):
logger.debug(
("Job %s with key '%s' - STALE cache hit - running "
"synchronous refresh"),
self.class_path, key)
return self.refresh(*args, **kwargs)
else:
logger.debug(
("Job %s with key '%s' - STALE cache hit - triggering "
"async refresh and returning stale result"),
self.class_path, key)
# We replace the item in the cache with a 'timeout' expiry - this
# prevents cache hammering but guards against a 'limbo' situation
# where the refresh task fails for some reason.
timeout = self.timeout(*args, **kwargs)
self.cache_set(key, timeout, data)
self.async_refresh(*args, **kwargs)
else:
logger.debug("Job %s with key '%s' - cache HIT", self.class_path, key)
return data
def invalidate(self, *raw_args, **raw_kwargs):
"""
Mark a cached item invalid and trigger an asynchronous
job to refresh the cache
"""
args = self.prepare_args(*raw_args)
kwargs = self.prepare_kwargs(**raw_kwargs)
key = self.key(*args, **kwargs)
item = cache.get(key)
if item is not None:
expiry, data = item
self.cache_set(key, self.timeout(*args, **kwargs), data)
self.async_refresh(*args, **kwargs)
def delete(self, *raw_args, **raw_kwargs):
"""
Remove an item from the cache
"""
args = self.prepare_args(*raw_args)
kwargs = self.prepare_kwargs(**raw_kwargs)
key = self.key(*args, **kwargs)
item = cache.get(key)
if item is not None:
cache.delete(key)
# --------------
# HELPER METHODS
# --------------
def prepare_args(self, *args):
return args
def prepare_kwargs(self, **kwargs):
return kwargs
def cache_set(self, key, expiry, data):
"""
Add a result to the cache
:key: Cache key to use
:expiry: The expiry timestamp after which the result is stale
:data: The data to cache
"""
cache.set(key, (expiry, data), self.cache_ttl)
if getattr(settings, 'CACHEBACK_VERIFY_CACHE_WRITE', True):
# We verify that the item was cached correctly. This is to avoid a
# Memcache problem where some values aren't cached correctly
# without warning.
__, cached_data = cache.get(key, (None, None))
if data is not None and cached_data is None:
raise RuntimeError(
"Unable to save data of type %s to cache" % (
type(data)))
def refresh(self, *args, **kwargs):
"""
Fetch the result SYNCHRONOUSLY and populate the cache
"""
result = self.fetch(*args, **kwargs)
self.cache_set(self.key(*args, **kwargs),
self.expiry(*args, **kwargs),
result)
return result
def async_refresh(self, *args, **kwargs):
"""
Trigger an asynchronous job to refresh the cache
"""
# We trigger the task with the class path to import as well as the
# (a) args and kwargs for instantiating the class
# (b) args and kwargs for calling the 'refresh' method
try:
tasks.refresh_cache.apply_async(
kwargs=dict(
klass_str=self.class_path,
obj_args=self.get_constructor_args(),
obj_kwargs=self.get_constructor_kwargs(),
call_args=args,
call_kwargs=kwargs
),
**self.task_options
)
        except Exception as e:
# Handle exceptions from talking to RabbitMQ - eg connection
# refused. When this happens, we try to run the task
# synchronously.
logger.error("Unable to trigger task asynchronously - failing "
"over to synchronous refresh")
logger.exception(e)
try:
return self.refresh(*args, **kwargs)
            except Exception as e:
# Something went wrong while running the task
logger.error("Unable to refresh data synchronously: %s", e)
logger.exception(e)
else:
logger.debug("Failover synchronous refresh completed successfully")
def get_constructor_args(self):
return ()
def get_constructor_kwargs(self):
"""
Return the kwargs that need to be passed to __init__ when
reconstructing this class.
"""
return {}
@property
def class_path(self):
return '%s.%s' % (self.__module__, self.__class__.__name__)
# Override these methods
def empty(self):
"""
Return the appropriate value for a cache MISS (and when we defer the
repopulation of the cache)
"""
return None
def expiry(self, *args, **kwargs):
"""
Return the expiry timestamp for this item.
"""
return time.time() + self.lifetime
def timeout(self, *args, **kwargs):
"""
Return the refresh timeout for this item
"""
return time.time() + self.refresh_timeout
def should_missing_item_be_fetched_synchronously(self, *args, **kwargs):
"""
Return whether to refresh an item synchronously when it is missing from
the cache
"""
return self.fetch_on_miss
def should_item_be_fetched_synchronously(self, *args, **kwargs):
import warnings
warnings.warn(
"The method 'should_item_be_fetched_synchronously' is deprecated "
"and will be removed in 0.5. Use "
"'should_missing_item_be_fetched_synchronously' instead.",
DeprecationWarning)
return self.should_missing_item_be_fetched_synchronously(
*args, **kwargs)
def should_stale_item_be_fetched_synchronously(self, delta, *args, **kwargs):
"""
Return whether to refresh an item synchronously when it is found in the
cache but stale
"""
if self.fetch_on_stale_threshold is None:
return False
return delta > (self.fetch_on_stale_threshold - self.lifetime)
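    # Hedged example of the arithmetic above (hypothetical numbers): with
    # lifetime = 600 and fetch_on_stale_threshold = 900, a stale item keeps
    # being served (with an async refresh) until delta exceeds 300 seconds
    # past expiry, i.e. roughly 900 seconds after it was originally computed;
    # beyond that, the refresh happens synchronously.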
def key(self, *args, **kwargs):
"""
Return the cache key to use.
If you're passing anything but primitive types to the ``get`` method,
it's likely that you'll need to override this method.
"""
if not args and not kwargs:
return self.class_path
try:
if args and not kwargs:
return "%s:%s" % (self.class_path, hash(args))
# The line might break if your passed values are un-hashable. If
# it does, you need to override this method and implement your own
# key algorithm.
return "%s:%s:%s:%s" % (self.class_path,
hash(args),
hash(tuple(kwargs.keys())),
hash(tuple(kwargs.values())))
except TypeError:
            raise RuntimeError(
                "Unable to generate cache key due to unhashable "
                "args or kwargs - you need to implement your own "
                "key generation method to avoid this problem")
def fetch(self, *args, **kwargs):
"""
Return the data for this job - this is where the expensive work should
be done.
"""
raise NotImplementedError()
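# Hedged usage sketch (hypothetical subclass, not part of this module):
# subclasses implement fetch() and optionally tune lifetime or fetch_on_miss;
# callers then use get()/invalidate() with the same arguments fetch() expects.
#
#   class UserProfileJob(Job):
#       lifetime = 300  # results considered fresh for five minutes
#
#       def fetch(self, user_id):
#           return expensive_profile_lookup(user_id)  # hypothetical helper
#
#   profile = UserProfileJob().get(42)  # cache miss -> synchronous fetch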
|
Is your kids diet starving them to death?
MANY kids are literally starved for good nutrition!
I do not understand what parents are thinking. Today I spent the day with my nieces and nephews. Four kids ages 5 to 16 years old.
When I picked the kids up at 11 AM the Mom of 2 of them was getting ready for work and asking, telling, insisting that the kids have breakfast before we go.
First of all, breakfast happens when you get up, not 3 hours later.
Second, they were having Captain Crunch cereal for breakfast. That is it! No protein, no veggie, fruit, or whole grain. Just plain old sugar coated cereal, with milk.
This woman is a smart lady. She has a good job keeping the books, payroll and all of the finances of a thriving business.
I cannot understand why she thinks that Captain Crunch is a healthy, strong-body-building breakfast for young developing children.
No offense to the Captain or anything, but the stuff is nothing but sugar, enhanced by some man-made vitamins. Just give the kids a spoonful of sugar and a vitamin pill and they would probably be getting more nutrition.
OK, so we are off with Aunt Deb for a day of fun.
The weather is not perfect for the planned picnic at the park with a day of swimming and fun in the water, so we switched to plan B.
York County, PA, is the home of factory tours, so we headed to Harley-Davidson for fun in the motorcycle plant.
On the drive, one of the girls mentioned she had a headache and asked for an aspirin. She doesn't usually drink water, and an aspirin always works.
After further discussion I found out that she gets a headache almost every day.
I explained to her that we need to drink 6-8 glasses of water a day and that the first sign of vitamin deficiency is a headache. Long story short, after 2 glasses of water, the headache was gone!
I had a jug of water in the back of the car and we all enjoyed water the rest of the day and munched on celery sticks (they were in the back too) off and on for the rest of the ride.
I know that at first she thought here goes Aunt Deb again on that healthy stuff.
I also know now that she is sold on the water taking away the headache and will consider it an option in the future.
Of course, the headache could be caused by certain chemically laden foods, but I am glad the water worked!
Water is essential for all of our bodily functions, one of which is removing toxins from our body, so a chemically induced headache may respond favorably to water. Next time one of your kids gets a headache, give them a glass of water.
After Harley Davidson we wanted to eat.
I gave the kids a choice. Red Lobster, Chili’s (I had gift cards) or any fast food (cheap) restaurant. I really did not want to go to a FF restaurant. I know that I can not change the way they eat in one day, so I always give choices.
Thank goodness, they picked Red Lobster.
I do not know why kids love Red Lobster. It would not be my first choice. It is certainly better than the FF joints! I always give them the liberty of picking anything they want on the menu, and all four kids picked a somewhat healthy, real-food meal.
The 16-year-old had a salad and fettuccine with shrimp. The other three all had crab legs, two with baked potatoes. The youngest ordered coleslaw and fries with hers. All the kids ordered a frozen strawberry-daiquiri-type drink which came with whipped cream on it (more sugar and chemicals).
The biscuits which were all white (and only a teensy bit better nutritionally than the Captain) were inhaled immediately.
They need real food! I see this every time I spend time with them. It is because they are eating chemically laden sugar-coated meals that fill them up and do not provide nutrients.
They all ate every speck of food on their plates. The five year old even wanted a salad when she was done with her dinner.
We had agreed to get dessert later at Rita’s. I am not surprised that they all ate well. Frequently, when I take kids out, I find that they are more than happy to eat right. As a matter of fact, most often they are eager to eat good-for-you foods, provided I do not insist that they do, or make an issue out of it.
Later, we had our Rita’s and enjoyed the rest of the afternoon, before dropping 2 of the kids off at Mom’s work. While waiting for Mom, they were encouraged to buy a Reese’s Peanut Butter Cup to hold them over to dinner. I have no clue what dinner was, but you can be sure it was very convenient and most likely out of a box, similar to breakfast.
Had we had a beautiful day and gone to Pinchot Park as planned, I would have thrown some baked potatoes in the charcoal, grilled a turkey breast, and served watermelon and carrot sticks.
In the past, my experiences have been that these kids say that they want yellow macaroni and cheese or McDonalds, or they say that they do not want to eat healthy foods. (they hear their parents say “Aunt Deb is a health nut” or something to that effect).
Yet when I make a non-issue about it, pull celery sticks out of the cooler or just fix a nice meal and put it on the table, they eat it.
It is a non-issue. They actually eat it as if they are famished. They eat as if they have not had a meal in months.
This is a huge concern for the future of American kids!
Many kids’ diets are nutritionally void.
Convenience foods are sugar and chemical-laden.
Chemicals have no nutritional value and are addictive. Americans are the most over-fed undernourished population in the world. Our society is overweight and starving to death. I know that many of you reading this are thinking that I am overreacting (my family does).
The fact is that the generation of kids growing up now is the first generation of Americans to have a shorter life expectancy than their parents. That is scary. You may ask why. The answer is that these kids have been fed sugary, chemical-laden foods from the day they were born.
Their parents and older generations, on the other hand, only started to consume huge quantities of these toxins as adults or young adults.
Any parent should be extremely concerned bringing children into the world today. I plan to make a difference in the eating habits of future generations.
The WOW! Diet Is Really Just Healthy Living!
When you join the WOW! Healthy Living Diet you will learn how to increase your metabolism and stay healthy without dieting. Join the program now: Wow! You Are Really Lucky… You Have A High Metabolism!
2 Responses to "Are Your Kids Starving?"
It’s really sad to know that so many kids are actually starving despite having enough money for food. By now I would think that there’s enough out there on main media for parents to know about the health hazards of sugar-laden cereals. I’m blessed that my daughter, who has a 1 yr old, knows about good nutrition and feeds the little one whole and healthy food.
Great article! I wish all parents of young children could read it. We all have made so many nutritional mistakes, it’s a wonder our kids grew up healthy at all! |
# -*- coding: utf-8 -*-
import numpy as np
import numpy.random as np_random
print("""
## zip(*iterables)
Make an iterator that aggregates elements from each of the iterables""")
zipped = zip([1, 2, 3], [4, 5, 6], [7, 8, 9])
print(zipped)
print(list(zipped))
print()
print("## Select elements from boolean array:")
x_arr = np.array([1.1, 1.2, 1.3, 1.4, 1.5])
y_arr = np.array([2.1, 2.2, 2.3, 2.4, 2.5])
cond = np.array([True, False, True, True, False])
result = [(x if c else y) for x, y, c in zip(x_arr, y_arr, cond)]
print(result)
print("### np.where(cond, x_arr, y_arr):")
print(np.where(cond, x_arr, y_arr))
print()
print("### examples using 'where':")
arr = np_random.randn(4, 4)
print(arr)
print(np.where(arr > 0, 2, -2))
print()
print("## nested where:")
cond_1 = np.array([True, False, True, True, False])
cond_2 = np.array([False, True, False, True, False])
print("### Using ordinary code:")
result = []
for i in range(len(cond_1)):
if cond_1[i] and cond_2[i]:
result.append(0)
elif cond_1[i]:
result.append(1)
elif cond_2[i]:
result.append(2)
else:
result.append(3)
print(result)
print("### Using NumPy code:")
result = np.where(cond_1 & cond_2, 0,
np.where(cond_1, 1, np.where(cond_2, 2, 3)))
print(result)
|
What you get with your limited company registration!
Setting up a Limited Company with eFaze provides unrestricted access to our database to complete, amend or update submission to our Online Company Formation.
No hidden charges or registration fees.
Normally a company can be incorporated the same day*.
Business registration with eFaze is UK-based and paperless, with no signatures required.
All Company Name Registration information for Company Registration UK is entered and submitted by you online following our simple instructions. Payment can be made online using our Secure Credit Card Facility. Don't worry if you haven't got all the information to hand, each step can be carried out separately and finished at your convenience.
*If ordered before 2.30pm and Companies House permitting.
Subscribe now to the Free "Company Formation Tips" Newsletter to get pro company formation tips, improve your security and efficiency, learn which company format the experts prefer, plus so much more! We will never sell, rent or share your private information with anyone else.
Search online at eFaze for all your company registration products, including setting up a limited company and business registration.
Browse UK company formation and company registration requirements, including limited company registration methods and company name registration, using the eFaze online company formation service.
|
import server
from atlas import Operation, Entity
# Nourishable entities that receive nourishment and increase their '_nutrients' value.
class Nourishable(server.Thing):
def nourish_operation(self, op):
# Get the mass of the contained arg, convert it to nutrient through _modifier_eat* properties,
# and increase the "_nutrients" property.
# Check any limits on the amount of nutrient we can contain in our stomach/reserves
# through the _nutrients_max prop too.
if len(op) > 0:
arg = op[0]
if hasattr(arg, 'mass'):
# print('mass {}'.format(arg.mass))
# Check if we can convert to nutrient through the _modifier_eat property.
                # We also check if there are specific values for different
                # consume types (_modifier_consume_type_meat and
                # _modifier_consume_type_plant)
consume_factor = 0
if self.props._modifier_eat:
consume_factor = self.props._modifier_eat
if hasattr(arg, 'consume_type'):
if self.props["_modifier_consume_type_" + arg.consume_type]:
consume_factor = self.props["_modifier_consume_type_" + arg.consume_type]
# print("consume factor {}".format(consume_factor))
if consume_factor != 0:
nutrient = 0
if self.props._nutrients:
nutrient = self.props._nutrients
nutrient_new = nutrient + (arg.mass * consume_factor)
# Check if there's a limit to the nutrient we can contain in our stomach
if self.props._nutrients_max_factor and self.props.mass:
nutrient_new = min(self.props._nutrients_max_factor * self.props.mass, nutrient_new)
if nutrient_new != nutrient:
return server.OPERATION_BLOCKED, \
Operation("set", Entity(self.id, _nutrients=nutrient_new), to=self)
return server.OPERATION_BLOCKED
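# Hedged worked example (hypothetical values): a nourish op carrying an arg
# with mass = 2 and consume_type = 'plant', received by an entity whose
# _modifier_consume_type_plant is 0.5, adds 2 * 0.5 = 1 to _nutrients; if
# _nutrients_max_factor = 0.1 and the entity's mass = 5, the stored value is
# capped at 0.5.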
|
A Simple and Efficient Method for In Vivo Cardiac-specific Gene Manipulation by Intramyocardial Injection in Mice
Yanan Fu*1,2, Wenlong Jiang*2, Yichao Zhao*2, Yuli Huang1, Heng Zhang1, Hongju Wang1, Jun Pu1,2
1Department of Cardiology, The First Affiliated Hospital of Bengbu Medical College; 2Department of Cardiology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University
Here we present a protocol for cardiac-specific gene manipulation in mice. Under anesthesia, the mouse hearts were externalized through the fourth intercostal space. Subsequently, adenoviruses encoding specific genes were injected with a syringe into the myocardium, followed by protein expression measurement via in vivo imaging and Western blot analysis. |
# -*- coding: utf-8 -*-
import sys
from functools import partial
import collections
from collections import namedtuple, deque
import logging
import weakref
import datetime
import time as mod_time
from tornado.ioloop import IOLoop
from tornado import gen
from tornado import stack_context
from tornado.escape import to_unicode, to_basestring
from .exceptions import RequestError, ConnectionError, ResponseError
from .connection import Connection
log = logging.getLogger('tornadoredis.client')
Message = namedtuple('Message', ('kind', 'channel', 'body', 'pattern'))
PY3 = sys.version > '3'
class CmdLine(object):
def __init__(self, cmd, *args, **kwargs):
self.cmd = cmd
self.args = args
self.kwargs = kwargs
def __repr__(self):
return self.cmd + '(' + str(self.args) + ',' + str(self.kwargs) + ')'
def string_keys_to_dict(key_string, callback):
return dict([(key, callback) for key in key_string.split()])
def dict_merge(*dicts):
merged = {}
for d in dicts:
merged.update(d)
return merged
def reply_to_bool(r, *args, **kwargs):
return bool(r)
def make_reply_assert_msg(msg):
def reply_assert_msg(r, *args, **kwargs):
return r == msg
return reply_assert_msg
def reply_set(r, *args, **kwargs):
return set(r)
def reply_dict_from_pairs(r, *args, **kwargs):
return dict(zip(r[::2], r[1::2]))
def reply_str(r, *args, **kwargs):
return r or ''
def reply_int(r, *args, **kwargs):
return int(r) if r is not None else None
def reply_number(r, *args, **kwargs):
if r is not None:
num = float(r)
if not num.is_integer():
return num
else:
return int(num)
return None
def reply_datetime(r, *args, **kwargs):
return datetime.datetime.fromtimestamp(int(r))
def reply_pubsub_message(r, *args, **kwargs):
"""
Handles a Pub/Sub message and packs its data into a Message object.
"""
if len(r) == 3:
(kind, channel, body) = r
pattern = channel
elif len(r) == 4:
(kind, pattern, channel, body) = r
elif len(r) == 2:
(kind, channel) = r
body = pattern = None
else:
raise ValueError('Invalid number of arguments')
return Message(kind, channel, body, pattern)
def reply_zset(r, *args, **kwargs):
if r and 'WITHSCORES' in args:
return reply_zset_withscores(r, *args, **kwargs)
else:
return r
def reply_zset_withscores(r, *args, **kwargs):
return list(zip(r[::2], list(map(reply_number, r[1::2]))))
def reply_hmget(r, key, *fields, **kwargs):
return dict(list(zip(fields, r)))
def reply_info(response, *args):
info = {}
def get_value(value):
# Does this string contain subvalues?
if (',' not in value) or ('=' not in value):
return value
sub_dict = {}
for item in value.split(','):
k, v = item.split('=')
try:
sub_dict[k] = int(v)
except ValueError:
sub_dict[k] = v
return sub_dict
for line in response.splitlines():
line = line.strip()
if line and not line.startswith('#'):
key, value = line.split(':')
try:
info[key] = int(value)
except ValueError:
info[key] = get_value(value)
return info
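# Example (illustrative): an INFO line such as 'db0:keys=10,expires=0' becomes
# info['db0'] = {'keys': 10, 'expires': 0}, while a simple line such as
# 'redis_version:2.8.4' becomes info['redis_version'] = '2.8.4'.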
def reply_ttl(r, *args, **kwargs):
return r != -1 and r or None
def reply_map(*funcs):
def reply_fn(r, *args, **kwargs):
        if len(funcs) != len(r):
            raise ValueError('result count does not match the number of mapping functions')
return [f(part) for f, part in zip(funcs, r)]
return reply_fn
def to_list(source):
if isinstance(source, str):
return [source]
else:
return list(source)
PUB_SUB_COMMANDS = (
'SUBSCRIBE',
'PSUBSCRIBE',
'UNSUBSCRIBE',
'PUNSUBSCRIBE',
# Not a command at all
'LISTEN',
)
REPLY_MAP = dict_merge(
string_keys_to_dict('AUTH BGREWRITEAOF BGSAVE DEL EXISTS '
'EXPIRE HDEL HEXISTS '
'HMSET MOVE PERSIST RENAMENX SISMEMBER SMOVE '
'SETEX SAVE SETNX MSET',
reply_to_bool),
string_keys_to_dict('BITCOUNT DECRBY GETBIT HLEN INCRBY LINSERT '
'LPUSHX RPUSHX SADD SCARD SDIFFSTORE SETBIT SETRANGE '
                        'SINTERSTORE STRLEN SUNIONSTORE',
reply_int),
string_keys_to_dict('FLUSHALL FLUSHDB SELECT SET SETEX '
'SHUTDOWN RENAME RENAMENX WATCH UNWATCH',
make_reply_assert_msg('OK')),
string_keys_to_dict('SMEMBERS SINTER SUNION SDIFF',
reply_set),
string_keys_to_dict('HGETALL BRPOP BLPOP',
reply_dict_from_pairs),
string_keys_to_dict('HGET',
reply_str),
string_keys_to_dict('SUBSCRIBE UNSUBSCRIBE LISTEN '
                        'PSUBSCRIBE PUNSUBSCRIBE',
reply_pubsub_message),
string_keys_to_dict('ZRANK ZREVRANK',
reply_int),
string_keys_to_dict('ZCOUNT ZCARD',
reply_int),
string_keys_to_dict('ZRANGE ZRANGEBYSCORE ZREVRANGE '
'ZREVRANGEBYSCORE',
reply_zset),
string_keys_to_dict('ZSCORE ZINCRBY',
reply_number),
string_keys_to_dict('SCAN HSCAN SSCAN',
reply_map(reply_int, reply_set)),
{'HMGET': reply_hmget,
'PING': make_reply_assert_msg('PONG'),
'LASTSAVE': reply_datetime,
'TTL': reply_ttl,
'INFO': reply_info,
'MULTI_PART': make_reply_assert_msg('QUEUED'),
'TIME': lambda x: (int(x[0]), int(x[1])),
'ZSCAN': reply_map(reply_int, reply_zset_withscores)}
)
class Client(object):
# __slots__ = ('_io_loop', '_connection_pool', 'connection', 'subscribed',
# 'password', 'selected_db', '_pipeline', '_weak')
def __init__(self, host='localhost', port=6379, unix_socket_path=None,
password=None, selected_db=None, io_loop=None,
connection_pool=None):
self._io_loop = io_loop or IOLoop.current()
self._connection_pool = connection_pool
self._weak = weakref.proxy(self)
if connection_pool:
connection = (connection_pool
.get_connection(event_handler_ref=self._weak))
else:
connection = Connection(host=host, port=port,
unix_socket_path=unix_socket_path,
event_handler_proxy=self._weak,
io_loop=self._io_loop)
self.connection = connection
self.subscribed = set()
self.subscribe_callbacks = deque()
self.unsubscribe_callbacks = []
self.password = password
self.selected_db = selected_db or 0
self._pipeline = None
def __del__(self):
try:
connection = self.connection
pool = self._connection_pool
except AttributeError:
connection = None
pool = None
if connection:
if pool:
pool.release(connection)
connection.wait_until_ready()
else:
connection.disconnect()
def __repr__(self):
return 'tornadoredis.Client (db=%s)' % (self.selected_db)
def __enter__(self):
return self
def __exit__(self, *args, **kwargs):
pass
def __getattribute__(self, item):
"""
Bind methods to the weak proxy to avoid memory leaks
when bound method is passed as argument to the gen.Task
constructor.
"""
a = super(Client, self).__getattribute__(item)
try:
if isinstance(a, collections.Callable) and a.__self__:
try:
a = self.__class__.__dict__[item]
except KeyError:
a = Client.__dict__[item]
a = partial(a, self._weak)
except AttributeError:
pass
return a
def pipeline(self, transactional=False):
"""
Creates the 'Pipeline' to send multiple redis commands
in a single request.
Usage:
pipe = self.client.pipeline()
pipe.hset('foo', 'bar', 1)
pipe.expire('foo', 60)
yield gen.Task(pipe.execute)
or:
with self.client.pipeline() as pipe:
pipe.hset('foo', 'bar', 1)
pipe.expire('foo', 60)
yield gen.Task(pipe.execute)
"""
if not self._pipeline:
self._pipeline = Pipeline(
transactional=transactional,
selected_db=self.selected_db,
password=self.password,
io_loop=self._io_loop,
)
self._pipeline.connection = self.connection
return self._pipeline
def on_disconnect(self):
if self.subscribed:
self.subscribed = set()
raise ConnectionError("Socket closed on remote end")
#### connection
def connect(self):
if not self.connection.connected():
pool = self._connection_pool
if pool:
old_conn = self.connection
self.connection = pool.get_connection(event_handler_ref=self)
self.connection.ready_callbacks = old_conn.ready_callbacks
else:
self.connection.connect()
@gen.engine
def disconnect(self, callback=None):
"""
Disconnects from the Redis server.
"""
connection = self.connection
if connection:
pool = self._connection_pool
if pool:
pool.release(connection)
yield gen.Task(connection.wait_until_ready)
proxy = pool.make_proxy(client_proxy=self._weak,
connected=False)
self.connection = proxy
else:
self.connection.disconnect()
if callback:
callback(False)
#### formatting
def encode(self, value):
if not isinstance(value, str):
if not PY3 and isinstance(value, unicode):
value = value.encode('utf-8')
else:
value = str(value)
if PY3:
value = value.encode('utf-8')
return value
def format_command(self, *tokens, **kwargs):
cmds = []
for t in tokens:
e_t = self.encode(t)
e_t_s = to_basestring(e_t)
cmds.append('$%s\r\n%s\r\n' % (len(e_t), e_t_s))
return '*%s\r\n%s' % (len(tokens), ''.join(cmds))
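    # Example (illustrative): commands are serialized in the Redis unified
    # request protocol, e.g.
    #   format_command('SET', 'foo', 'bar')
    #     -> '*3\r\n$3\r\nSET\r\n$3\r\nfoo\r\n$3\r\nbar\r\n'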
def format_reply(self, cmd_line, data):
if cmd_line.cmd not in REPLY_MAP:
return data
try:
res = REPLY_MAP[cmd_line.cmd](data,
*cmd_line.args,
**cmd_line.kwargs)
except Exception as e:
raise ResponseError(
'failed to format reply to %s, raw data: %s; err message: %s'
% (cmd_line, data, e), cmd_line
)
return res
####
@gen.engine
def execute_command(self, cmd, *args, **kwargs):
result = None
execute_pending = cmd not in ('AUTH', 'SELECT')
callback = kwargs.get('callback', None)
if 'callback' in kwargs:
del kwargs['callback']
cmd_line = CmdLine(cmd, *args, **kwargs)
if callback and self.subscribed and cmd not in PUB_SUB_COMMANDS:
callback(RequestError(
'Executing non-Pub/Sub command while in subscribed state',
cmd_line))
return
n_tries = 2
while n_tries > 0:
n_tries -= 1
if not self.connection.connected():
self.connection.connect()
if not self.subscribed and not self.connection.ready():
yield gen.Task(self.connection.wait_until_ready)
if not self.subscribed and cmd not in ('AUTH', 'SELECT'):
if self.password and self.connection.info.get('pass', None) != self.password:
yield gen.Task(self.auth, self.password)
if self.selected_db and self.connection.info.get('db', 0) != self.selected_db:
yield gen.Task(self.select, self.selected_db)
command = self.format_command(cmd, *args, **kwargs)
try:
yield gen.Task(self.connection.write, command)
except Exception as e:
self.connection.disconnect()
if not n_tries:
raise e
else:
continue
listening = ((cmd in PUB_SUB_COMMANDS) or
(self.subscribed and cmd == 'PUBLISH'))
if listening:
result = True
execute_pending = False
break
else:
result = None
data = yield gen.Task(self.connection.readline)
if not data:
if not n_tries:
raise ConnectionError('no data received')
else:
resp = self.process_data(data, cmd_line)
if isinstance(resp, partial):
resp = yield gen.Task(resp)
result = self.format_reply(cmd_line, resp)
break
if execute_pending:
self.connection.execute_pending_command()
if callback:
callback(result)
@gen.engine
def _consume_bulk(self, tail, callback=None):
response = yield gen.Task(self.connection.read, int(tail) + 2)
if isinstance(response, Exception):
raise response
if not response:
raise ResponseError('EmptyResponse')
else:
response = to_unicode(response)
response = response[:-2]
callback(response)
def process_data(self, data, cmd_line):
data = to_basestring(data)
data = data[:-2] # strip \r\n
if data == '$-1':
response = None
elif data == '*0' or data == '*-1':
response = []
else:
head, tail = data[0], data[1:]
if head == '*':
return partial(self.consume_multibulk, int(tail), cmd_line)
elif head == '$':
return partial(self._consume_bulk, tail)
elif head == '+':
response = tail
elif head == ':':
response = int(tail)
elif head == '-':
if tail.startswith('ERR'):
tail = tail[4:]
response = ResponseError(tail, cmd_line)
else:
raise ResponseError('Unknown response type %s' % head,
cmd_line)
return response
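    # Reply parsing cheat sheet (illustrative, derived from the branches above):
    #   '+OK'      -> 'OK'                  (status reply)
    #   ':5'       -> 5                     (integer reply)
    #   '$-1'      -> None                  (empty bulk reply)
    #   '-ERR x'   -> ResponseError('x')    (error reply)
    #   '$3', '*2' -> partial objects that read the rest of the bulk or
    #                 multi-bulk reply from the connection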
@gen.engine
def consume_multibulk(self, length, cmd_line, callback=None):
tokens = []
while len(tokens) < length:
data = yield gen.Task(self.connection.readline)
if not data:
raise ResponseError(
'Not enough data in response to %s, accumulated tokens: %s'
% (cmd_line, tokens),
cmd_line)
token = self.process_data(data, cmd_line)
if isinstance(token, partial):
token = yield gen.Task(token)
tokens.append(token)
callback(tokens)
### MAINTENANCE
def bgrewriteaof(self, callback=None):
self.execute_command('BGREWRITEAOF', callback=callback)
def dbsize(self, callback=None):
self.execute_command('DBSIZE', callback=callback)
def flushall(self, callback=None):
self.execute_command('FLUSHALL', callback=callback)
def flushdb(self, callback=None):
self.execute_command('FLUSHDB', callback=callback)
def ping(self, callback=None):
self.execute_command('PING', callback=callback)
def object(self, infotype, key, callback=None):
self.execute_command('OBJECT', infotype, key, callback=callback)
def info(self, section_name=None, callback=None):
args = ('INFO', )
if section_name:
args += (section_name, )
self.execute_command(*args, callback=callback)
def echo(self, value, callback=None):
self.execute_command('ECHO', value, callback=callback)
def time(self, callback=None):
"""
Returns the server time as a 2-item tuple of ints:
(seconds since epoch, microseconds into this second).
"""
self.execute_command('TIME', callback=callback)
def select(self, db, callback=None):
self.selected_db = db
if self.connection.info.get('db', None) != db:
self.connection.info['db'] = db
self.execute_command('SELECT', '%s' % db, callback=callback)
elif callback:
callback(True)
def shutdown(self, callback=None):
self.execute_command('SHUTDOWN', callback=callback)
def save(self, callback=None):
self.execute_command('SAVE', callback=callback)
def bgsave(self, callback=None):
self.execute_command('BGSAVE', callback=callback)
def lastsave(self, callback=None):
self.execute_command('LASTSAVE', callback=callback)
def keys(self, pattern='*', callback=None):
self.execute_command('KEYS', pattern, callback=callback)
def auth(self, password, callback=None):
self.password = password
if self.connection.info.get('pass', None) != password:
self.connection.info['pass'] = password
self.execute_command('AUTH', password, callback=callback)
elif callback:
callback(True)
### BASIC KEY COMMANDS
def append(self, key, value, callback=None):
self.execute_command('APPEND', key, value, callback=callback)
def getrange(self, key, start, end, callback=None):
"""
Returns the substring of the string value stored at ``key``,
determined by the offsets ``start`` and ``end`` (both are inclusive)
"""
self.execute_command('GETRANGE', key, start, end, callback=callback)
def expire(self, key, ttl, callback=None):
self.execute_command('EXPIRE', key, ttl, callback=callback)
def expireat(self, key, when, callback=None):
"""
Sets an expire flag on ``key``. ``when`` can be represented
as an integer indicating unix time or a Python datetime.datetime object.
"""
if isinstance(when, datetime.datetime):
when = int(mod_time.mktime(when.timetuple()))
self.execute_command('EXPIREAT', key, when, callback=callback)
def ttl(self, key, callback=None):
self.execute_command('TTL', key, callback=callback)
def type(self, key, callback=None):
self.execute_command('TYPE', key, callback=callback)
def randomkey(self, callback=None):
self.execute_command('RANDOMKEY', callback=callback)
def rename(self, src, dst, callback=None):
self.execute_command('RENAME', src, dst, callback=callback)
def renamenx(self, src, dst, callback=None):
self.execute_command('RENAMENX', src, dst, callback=callback)
def move(self, key, db, callback=None):
self.execute_command('MOVE', key, db, callback=callback)
def persist(self, key, callback=None):
self.execute_command('PERSIST', key, callback=callback)
def pexpire(self, key, time, callback=None):
"""
Set an expire flag on key ``key`` for ``time`` milliseconds.
``time`` can be represented by an integer or a Python timedelta
object.
"""
if isinstance(time, datetime.timedelta):
ms = int(time.microseconds / 1000)
            time = (time.seconds + time.days * 24 * 3600) * 1000 + ms
self.execute_command('PEXPIRE', key, time, callback=callback)
def pexpireat(self, key, when, callback=None):
"""
Set an expire flag on key ``key``. ``when`` can be represented
as an integer representing unix time in milliseconds (unix time * 1000)
or a Python datetime.datetime object.
"""
if isinstance(when, datetime.datetime):
ms = int(when.microsecond / 1000)
when = int(mod_time.mktime(when.timetuple())) * 1000 + ms
self.execute_command('PEXPIREAT', key, when, callback=callback)
def pttl(self, key, callback=None):
"Returns the number of milliseconds until the key will expire"
self.execute_command('PTTL', key, callback=callback)
def substr(self, key, start, end, callback=None):
self.execute_command('SUBSTR', key, start, end, callback=callback)
def delete(self, *keys, **kwargs):
self.execute_command('DEL', *keys, callback=kwargs.get('callback'))
def set(self, key, value, expire=None, pexpire=None,
only_if_not_exists=False, only_if_exists=False, callback=None):
args = []
if expire is not None:
args.extend(("EX", expire))
if pexpire is not None:
args.extend(("PX", pexpire))
if only_if_not_exists and only_if_exists:
raise ValueError("only_if_not_exists and only_if_exists "
"cannot be true simultaneously")
if only_if_not_exists:
args.append("NX")
if only_if_exists:
args.append("XX")
self.execute_command('SET', key, value, *args, callback=callback)
def setex(self, key, ttl, value, callback=None):
self.execute_command('SETEX', key, ttl, value, callback=callback)
def setnx(self, key, value, callback=None):
self.execute_command('SETNX', key, value, callback=callback)
def setrange(self, key, offset, value, callback=None):
self.execute_command('SETRANGE', key, offset, value, callback=callback)
def strlen(self, key, callback=None):
self.execute_command('STRLEN', key, callback=callback)
def mset(self, mapping, callback=None):
items = [i for k, v in mapping.items() for i in (k, v)]
self.execute_command('MSET', *items, callback=callback)
def msetnx(self, mapping, callback=None):
items = [i for k, v in mapping.items() for i in (k, v)]
self.execute_command('MSETNX', *items, callback=callback)
def get(self, key, callback=None):
self.execute_command('GET', key, callback=callback)
def mget(self, keys, callback=None):
self.execute_command('MGET', *keys, callback=callback)
def getset(self, key, value, callback=None):
self.execute_command('GETSET', key, value, callback=callback)
def exists(self, key, callback=None):
self.execute_command('EXISTS', key, callback=callback)
def sort(self, key, start=None, num=None, by=None, get=None, desc=False,
alpha=False, store=None, callback=None):
if ((start is not None and num is None) or
(num is not None and start is None)):
raise ValueError("``start`` and ``num`` must both be specified")
tokens = [key]
if by is not None:
tokens.append('BY')
tokens.append(by)
if start is not None and num is not None:
tokens.append('LIMIT')
tokens.append(start)
tokens.append(num)
if get is not None:
tokens.append('GET')
tokens.append(get)
if desc:
tokens.append('DESC')
if alpha:
tokens.append('ALPHA')
if store is not None:
tokens.append('STORE')
tokens.append(store)
self.execute_command('SORT', *tokens, callback=callback)
def getbit(self, key, offset, callback=None):
self.execute_command('GETBIT', key, offset, callback=callback)
def setbit(self, key, offset, value, callback=None):
self.execute_command('SETBIT', key, offset, value, callback=callback)
def bitcount(self, key, start=None, end=None, callback=None):
args = [a for a in (key, start, end) if a is not None]
kwargs = {'callback': callback}
self.execute_command('BITCOUNT', *args, **kwargs)
def bitop(self, operation, dest, *keys, **kwargs):
"""
Perform a bitwise operation using ``operation`` between ``keys`` and
store the result in ``dest``.
"""
kwargs = {'callback': kwargs.get('callback', None)}
self.execute_command('BITOP', operation, dest, *keys, **kwargs)
### COUNTERS COMMANDS
def incr(self, key, callback=None):
self.execute_command('INCR', key, callback=callback)
def decr(self, key, callback=None):
self.execute_command('DECR', key, callback=callback)
def incrby(self, key, amount, callback=None):
self.execute_command('INCRBY', key, amount, callback=callback)
def incrbyfloat(self, key, amount=1.0, callback=None):
self.execute_command('INCRBYFLOAT', key, amount, callback=callback)
def decrby(self, key, amount, callback=None):
self.execute_command('DECRBY', key, amount, callback=callback)
### LIST COMMANDS
def blpop(self, keys, timeout=0, callback=None):
tokens = to_list(keys)
tokens.append(timeout)
self.execute_command('BLPOP', *tokens, callback=callback)
def brpop(self, keys, timeout=0, callback=None):
tokens = to_list(keys)
tokens.append(timeout)
self.execute_command('BRPOP', *tokens, callback=callback)
def brpoplpush(self, src, dst, timeout=1, callback=None):
tokens = [src, dst, timeout]
self.execute_command('BRPOPLPUSH', *tokens, callback=callback)
def lindex(self, key, index, callback=None):
self.execute_command('LINDEX', key, index, callback=callback)
def llen(self, key, callback=None):
self.execute_command('LLEN', key, callback=callback)
def lrange(self, key, start, end, callback=None):
self.execute_command('LRANGE', key, start, end, callback=callback)
def lrem(self, key, value, num=0, callback=None):
self.execute_command('LREM', key, num, value, callback=callback)
def lset(self, key, index, value, callback=None):
self.execute_command('LSET', key, index, value, callback=callback)
def ltrim(self, key, start, end, callback=None):
self.execute_command('LTRIM', key, start, end, callback=callback)
def lpush(self, key, *values, **kwargs):
callback = kwargs.get('callback', None)
self.execute_command('LPUSH', key, *values, callback=callback)
def lpushx(self, key, value, callback=None):
self.execute_command('LPUSHX', key, value, callback=callback)
def linsert(self, key, where, refvalue, value, callback=None):
self.execute_command('LINSERT', key, where, refvalue, value,
callback=callback)
def rpush(self, key, *values, **kwargs):
callback = kwargs.get('callback', None)
self.execute_command('RPUSH', key, *values, callback=callback)
def rpushx(self, key, value, **kwargs):
"Push ``value`` onto the tail of the list ``name`` if ``name`` exists"
callback = kwargs.get('callback', None)
self.execute_command('RPUSHX', key, value, callback=callback)
def lpop(self, key, callback=None):
self.execute_command('LPOP', key, callback=callback)
def rpop(self, key, callback=None):
self.execute_command('RPOP', key, callback=callback)
def rpoplpush(self, src, dst, callback=None):
self.execute_command('RPOPLPUSH', src, dst, callback=callback)
### SET COMMANDS
def sadd(self, key, *values, **kwargs):
callback = kwargs.get('callback', None)
self.execute_command('SADD', key, *values, callback=callback)
def srem(self, key, *values, **kwargs):
callback = kwargs.get('callback', None)
self.execute_command('SREM', key, *values, callback=callback)
def scard(self, key, callback=None):
self.execute_command('SCARD', key, callback=callback)
def spop(self, key, callback=None):
self.execute_command('SPOP', key, callback=callback)
def smove(self, src, dst, value, callback=None):
self.execute_command('SMOVE', src, dst, value, callback=callback)
def sismember(self, key, value, callback=None):
self.execute_command('SISMEMBER', key, value, callback=callback)
def smembers(self, key, callback=None):
self.execute_command('SMEMBERS', key, callback=callback)
def srandmember(self, key, number=None, callback=None):
if number:
self.execute_command('SRANDMEMBER', key, number, callback=callback)
else:
self.execute_command('SRANDMEMBER', key, callback=callback)
def sinter(self, keys, callback=None):
self.execute_command('SINTER', *keys, callback=callback)
def sdiff(self, keys, callback=None):
self.execute_command('SDIFF', *keys, callback=callback)
def sunion(self, keys, callback=None):
self.execute_command('SUNION', *keys, callback=callback)
def sinterstore(self, keys, dst, callback=None):
self.execute_command('SINTERSTORE', dst, *keys, callback=callback)
def sunionstore(self, keys, dst, callback=None):
self.execute_command('SUNIONSTORE', dst, *keys, callback=callback)
def sdiffstore(self, keys, dst, callback=None):
self.execute_command('SDIFFSTORE', dst, *keys, callback=callback)
### SORTED SET COMMANDS
def zadd(self, key, *score_value, **kwargs):
callback = kwargs.get('callback', None)
self.execute_command('ZADD', key, *score_value, callback=callback)
def zcard(self, key, callback=None):
self.execute_command('ZCARD', key, callback=callback)
def zincrby(self, key, value, amount, callback=None):
self.execute_command('ZINCRBY', key, amount, value, callback=callback)
def zrank(self, key, value, callback=None):
self.execute_command('ZRANK', key, value, callback=callback)
def zrevrank(self, key, value, callback=None):
self.execute_command('ZREVRANK', key, value, callback=callback)
def zrem(self, key, *values, **kwargs):
callback = kwargs.get('callback', None)
self.execute_command('ZREM', key, *values, callback=callback)
def zcount(self, key, start, end, callback=None):
self.execute_command('ZCOUNT', key, start, end, callback=callback)
def zscore(self, key, value, callback=None):
self.execute_command('ZSCORE', key, value, callback=callback)
def zrange(self, key, start, num, with_scores=True, callback=None):
tokens = [key, start, num]
if with_scores:
tokens.append('WITHSCORES')
self.execute_command('ZRANGE', *tokens, callback=callback)
    def zrevrange(self, key, start, num, with_scores=True, callback=None):
tokens = [key, start, num]
if with_scores:
tokens.append('WITHSCORES')
self.execute_command('ZREVRANGE', *tokens, callback=callback)
def zrangebyscore(self, key, start, end, offset=None, limit=None,
with_scores=False, callback=None):
tokens = [key, start, end]
if offset is not None:
tokens.append('LIMIT')
tokens.append(offset)
tokens.append(limit)
if with_scores:
tokens.append('WITHSCORES')
self.execute_command('ZRANGEBYSCORE', *tokens, callback=callback)
def zrevrangebyscore(self, key, end, start, offset=None, limit=None,
with_scores=False, callback=None):
tokens = [key, end, start]
if offset is not None:
tokens.append('LIMIT')
tokens.append(offset)
tokens.append(limit)
if with_scores:
tokens.append('WITHSCORES')
self.execute_command('ZREVRANGEBYSCORE', *tokens, callback=callback)
def zremrangebyrank(self, key, start, end, callback=None):
self.execute_command('ZREMRANGEBYRANK', key, start, end,
callback=callback)
def zremrangebyscore(self, key, start, end, callback=None):
self.execute_command('ZREMRANGEBYSCORE', key, start, end,
callback=callback)
def zinterstore(self, dest, keys, aggregate=None, callback=None):
return self._zaggregate('ZINTERSTORE', dest, keys, aggregate, callback)
def zunionstore(self, dest, keys, aggregate=None, callback=None):
return self._zaggregate('ZUNIONSTORE', dest, keys, aggregate, callback)
def _zaggregate(self, command, dest, keys, aggregate, callback):
tokens = [dest, len(keys)]
if isinstance(keys, dict):
items = list(keys.items())
keys = [i[0] for i in items]
weights = [i[1] for i in items]
else:
weights = None
tokens.extend(keys)
if weights:
tokens.append('WEIGHTS')
tokens.extend(weights)
if aggregate:
tokens.append('AGGREGATE')
tokens.append(aggregate)
self.execute_command(command, *tokens, callback=callback)
### HASH COMMANDS
def hgetall(self, key, callback=None):
self.execute_command('HGETALL', key, callback=callback)
def hmset(self, key, mapping, callback=None):
items = [i for k, v in mapping.items() for i in (k, v)]
self.execute_command('HMSET', key, *items, callback=callback)
def hset(self, key, field, value, callback=None):
self.execute_command('HSET', key, field, value, callback=callback)
def hsetnx(self, key, field, value, callback=None):
self.execute_command('HSETNX', key, field, value, callback=callback)
def hget(self, key, field, callback=None):
self.execute_command('HGET', key, field, callback=callback)
def hdel(self, key, *fields, **kwargs):
callback = kwargs.get('callback')
self.execute_command('HDEL', key, *fields, callback=callback)
def hlen(self, key, callback=None):
self.execute_command('HLEN', key, callback=callback)
def hexists(self, key, field, callback=None):
self.execute_command('HEXISTS', key, field, callback=callback)
def hincrby(self, key, field, amount=1, callback=None):
self.execute_command('HINCRBY', key, field, amount, callback=callback)
def hincrbyfloat(self, key, field, amount=1.0, callback=None):
self.execute_command('HINCRBYFLOAT', key, field, amount,
callback=callback)
def hkeys(self, key, callback=None):
self.execute_command('HKEYS', key, callback=callback)
def hmget(self, key, fields, callback=None):
self.execute_command('HMGET', key, *fields, callback=callback)
def hvals(self, key, callback=None):
self.execute_command('HVALS', key, callback=callback)
### SCAN COMMANDS
def scan(self, cursor, count=None, match=None, callback=None):
self._scan('SCAN', cursor, count, match, callback)
def hscan(self, key, cursor, count=None, match=None, callback=None):
self._scan('HSCAN', cursor, count, match, callback, key=key)
def sscan(self, key, cursor, count=None, match=None, callback=None):
self._scan('SSCAN', cursor, count, match, callback, key=key)
def zscan(self, key, cursor, count=None, match=None, callback=None):
self._scan('ZSCAN', cursor, count, match, callback, key=key)
def _scan(self, cmd, cursor, count, match, callback, key=None):
tokens = [cmd]
key and tokens.append(key)
tokens.append(cursor)
match and tokens.extend(['MATCH', match])
count and tokens.extend(['COUNT', count])
self.execute_command(*tokens, callback=callback)
### PUBSUB
def subscribe(self, channels, callback=None):
self._subscribe('SUBSCRIBE', channels, callback=callback)
def psubscribe(self, channels, callback=None):
self._subscribe('PSUBSCRIBE', channels, callback=callback)
def _subscribe(self, cmd, channels, callback=None):
if isinstance(channels, str) or (not PY3 and isinstance(channels, unicode)):
channels = [channels]
if not self.subscribed:
listen_callback = None
original_cb = stack_context.wrap(callback) if callback else None
def _cb(*args, **kwargs):
self.on_subscribed(Message(kind='subscribe',
channel=channels[0],
body=None,
pattern=None))
if original_cb:
original_cb(True)
callback = _cb
else:
listen_callback = callback
callback = None
# Use the listen loop to execute subscribe callbacks
for channel in channels:
self.subscribe_callbacks.append((channel, listen_callback))
# Do not execute the same callback multiple times
listen_callback = None
self.execute_command(cmd, *channels, callback=callback)
def on_subscribed(self, result):
self.subscribed.add(result.channel)
def on_unsubscribed(self, channels, *args, **kwargs):
channels = set(channels)
self.subscribed -= channels
for cb_channels, cb in self.unsubscribe_callbacks:
cb_channels.difference_update(channels)
if not cb_channels:
self._io_loop.add_callback(cb)
def unsubscribe(self, channels, callback=None):
self._unsubscribe('UNSUBSCRIBE', channels, callback=callback)
def punsubscribe(self, channels, callback=None):
self._unsubscribe('PUNSUBSCRIBE', channels, callback=callback)
def _unsubscribe(self, cmd, channels, callback=None):
if isinstance(channels, str) or (not PY3 and isinstance(channels, unicode)):
channels = [channels]
if callback:
cb = stack_context.wrap(callback)
# TODO: Do we need to back this up with self._io_loop.add_timeout(time() + 1, cb)?
# FIXME: What about PUNSUBSCRIBEs?
self.unsubscribe_callbacks.append((set(channels), cb))
self.execute_command(cmd, *channels)
def publish(self, channel, message, callback=None):
self.execute_command('PUBLISH', channel, message, callback=callback)
@gen.engine
def listen(self, callback=None, exit_callback=None):
"""
Starts a Pub/Sub channel listening loop.
Use the unsubscribe or punsubscribe methods to exit it.
Each received message triggers the callback function.
Callback function receives a Message object instance as argument.
Here is an example of handling a channel subscription::
def handle_message(msg):
if msg.kind == 'message':
print msg.body
elif msg.kind == 'disconnect':
# Disconnected from the redis server
pass
yield gen.Task(client.subscribe, 'channel_name')
client.listen(handle_message)
...
yield gen.Task(client.subscribe, 'another_channel_name')
...
yield gen.Task(client.unsubscribe, 'another_channel_name')
yield gen.Task(client.unsubscribe, 'channel_name')
Unsubscribe from a channel to exit the 'listen' loop.
"""
if callback:
def error_wrapper(e):
if isinstance(e, GeneratorExit):
return ConnectionError('Connection lost')
else:
return e
cmd_listen = CmdLine('LISTEN')
while self.subscribed:
data = yield gen.Task(self.connection.readline)
if isinstance(data, Exception):
raise data
if data is None:
# If disconnected from the redis server clear the list
# of subscriber this client has subscribed to
channels = self.subscribed
self.subscribed = set()
# send a message to caller:
# Message(kind='disconnect', channel=set(channel1, ...))
callback(reply_pubsub_message(('disconnect', channels)))
return
response = self.process_data(data, cmd_listen)
if isinstance(response, partial):
response = yield gen.Task(response)
if isinstance(response, Exception):
raise response
result = self.format_reply(cmd_listen, response)
if result and result.kind in ('subscribe', 'psubscribe'):
self.on_subscribed(result)
try:
__, cb = self.subscribe_callbacks.popleft()
except IndexError:
__, cb = result.channel, None
if cb:
cb(True)
if result and result.kind in ('unsubscribe', 'punsubscribe'):
self.on_unsubscribed([result.channel])
callback(result)
if exit_callback:
exit_callback(bool(callback))
### CAS
def watch(self, *key_names, **kwargs):
callback = kwargs.get('callback', None)
self.execute_command('WATCH', *key_names, callback=callback)
def unwatch(self, callback=None):
self.execute_command('UNWATCH', callback=callback)
### LOCKS
def lock(self, lock_name, lock_ttl=None, polling_interval=0.1):
"""
Create a new Lock object using the Redis key ``lock_name`` for
state, that behaves like a threading.Lock.
This method is synchronous, and returns immediately with the Lock object.
This method doesn't acquire the Lock or in fact trigger any sort of
communications with the Redis server. This must be done using the Lock
object itself.
If specified, ``lock_ttl`` indicates the maximum life time for the lock.
If none is specified, it will remain locked until release() is called.
``polling_interval`` indicates the time between acquire attempts (polling)
when the lock is in blocking mode and another client is currently
holding the lock.
Note: If using ``lock_ttl``, you should make sure all the hosts
that are running clients have their time synchronized with a network
time service like ntp.
"""
return Lock(self, lock_name, lock_ttl=lock_ttl, polling_interval=polling_interval)
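    # Minimal usage sketch (assumes a reachable Redis server; the key name and
    # the surrounding coroutine are illustrative):
    #
    #   @gen.engine
    #   def update_counter(client):
    #       lock = client.lock('counter_lock', lock_ttl=10)
    #       acquired = yield gen.Task(lock.acquire, blocking=True)
    #       if acquired:
    #           try:
    #               yield gen.Task(client.incr, 'counter')
    #           finally:
    #               yield gen.Task(lock.release)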
### SCRIPTING COMMANDS
def eval(self, script, keys=None, args=None, callback=None):
if keys is None:
keys = []
if args is None:
args = []
num_keys = len(keys)
_args = keys + args
self.execute_command('EVAL', script, num_keys,
*_args, callback=callback)
def evalsha(self, shahash, keys=None, args=None, callback=None):
if keys is None:
keys = []
if args is None:
args = []
num_keys = len(keys)
keys.extend(args)
self.execute_command('EVALSHA', shahash, num_keys,
*keys, callback=callback)
def script_exists(self, shahashes, callback=None):
# not yet implemented in the redis protocol
self.execute_command('SCRIPT EXISTS', *shahashes, callback=callback)
def script_flush(self, callback=None):
# not yet implemented in the redis protocol
self.execute_command('SCRIPT FLUSH', callback=callback, verbose=True)
def script_kill(self, callback=None):
# not yet implemented in the redis protocol
self.execute_command('SCRIPT KILL', callback=callback)
def script_load(self, script, callback=None):
# not yet implemented in the redis protocol
self.execute_command('SCRIPT LOAD', script, callback=callback)
class Pipeline(Client):
def __init__(self, transactional, *args, **kwargs):
super(Pipeline, self).__init__(*args, **kwargs)
self.transactional = transactional
self.command_stack = []
self.executing = False
def __del__(self):
"""
Do not disconnect on releasing the PipeLine object.
Thanks to Tomek (https://github.com/thlawiczka)
"""
pass
def execute_command(self, cmd, *args, **kwargs):
if self.executing and cmd in ('AUTH', 'SELECT'):
super(Pipeline, self).execute_command(cmd, *args, **kwargs)
elif cmd in PUB_SUB_COMMANDS:
raise RequestError(
'Client is not supposed to issue '
'the %s command in a pipeline' % cmd)
else:
self.command_stack.append(CmdLine(cmd, *args, **kwargs))
def discard(self):
# actually do nothing with redis-server, just flush the command_stack
self.command_stack = []
def format_replies(self, cmd_lines, responses):
results = []
for cmd_line, response in zip(cmd_lines, responses):
try:
results.append(self.format_reply(cmd_line, response))
except Exception as e:
results.append(e)
return results
def format_pipeline_request(self, command_stack):
return ''.join(self.format_command(c.cmd, *c.args, **c.kwargs)
for c in command_stack)
@gen.engine
def execute(self, callback=None):
command_stack = self.command_stack
self.command_stack = []
self.executing = True
try:
if self.transactional:
command_stack = ([CmdLine('MULTI')] +
command_stack +
[CmdLine('EXEC')])
request = self.format_pipeline_request(command_stack)
password_should_be_sent = (
self.password and
self.connection.info.get('pass', None) != self.password)
if password_should_be_sent:
yield gen.Task(self.auth, self.password)
db_should_be_selected = (
self.selected_db and
self.connection.info.get('db', None) != self.selected_db)
if db_should_be_selected:
yield gen.Task(self.select, self.selected_db)
if not self.connection.connected():
self.connection.connect()
if not self.connection.ready():
yield gen.Task(self.connection.wait_until_ready)
try:
self.connection.write(request)
except IOError:
self.command_stack = []
self.connection.disconnect()
raise ConnectionError("Socket closed on remote end")
except Exception as e:
self.command_stack = []
self.connection.disconnect()
raise e
responses = []
total = len(command_stack)
cmds = iter(command_stack)
while len(responses) < total:
data = yield gen.Task(self.connection.readline)
if not data:
raise ResponseError('Not enough data after EXEC')
try:
cmd_line = next(cmds)
if self.transactional and cmd_line.cmd != 'EXEC':
response = self.process_data(data,
CmdLine('MULTI_PART'))
else:
response = self.process_data(data, cmd_line)
if isinstance(response, partial):
response = yield gen.Task(response)
responses.append(response)
except Exception as e:
responses.append(e)
if self.transactional:
command_stack = command_stack[:-1]
responses = responses[-1]
results = self.format_replies(command_stack[1:], responses)
else:
results = self.format_replies(command_stack, responses)
self.connection.execute_pending_command()
finally:
self.executing = False
callback(results)
class Lock(object):
"""
A shared, distributed Lock that uses a Redis server to hold its state.
This Lock can be shared across processes and/or machines. It works
asynchronously and plays nice with the Tornado IOLoop.
"""
LOCK_FOREVER = float(2 ** 31 + 1) # 1 past max unix time
def __init__(self, redis_client, lock_name, lock_ttl=None, polling_interval=0.1):
"""
Create a new Lock object using the Redis key ``lock_name`` for
state, that behaves like a threading.Lock.
This method is synchronous, and returns immediately. It doesn't acquire the
Lock or in fact trigger any sort of communications with the Redis server.
This must be done using the Lock object itself.
If specified, ``lock_ttl`` indicates the maximum life time for the lock.
If none is specified, it will remain locked until release() is called.
``polling_interval`` indicates the time between acquire attempts (polling)
when the lock is in blocking mode and another client is currently
holding the lock.
Note: If using ``lock_ttl``, you should make sure all the hosts
that are running clients have their time synchronized with a network
time service like ntp.
"""
self.redis_client = redis_client
self.lock_name = lock_name
self.acquired_until = None
self.lock_ttl = lock_ttl
self.polling_interval = polling_interval
if self.lock_ttl and self.polling_interval > self.lock_ttl:
raise LockError("'polling_interval' must be less than 'lock_ttl'")
@gen.engine
def acquire(self, blocking=True, callback=None):
"""
Acquire the lock.
Returns True once the lock is acquired.
If ``blocking`` is False, always return immediately. If the lock
was acquired, return True, otherwise return False.
Otherwise, block until the lock is acquired (or an error occurs).
If ``callback`` is supplied, it is called with the result.
"""
# Loop until we have a conclusive result
while 1:
# Get the current time
unixtime = int(mod_time.time())
# If the lock has a limited lifetime, create a timeout value
if self.lock_ttl:
timeout_at = unixtime + self.lock_ttl
# Otherwise, set the timeout value at forever (dangerous)
else:
timeout_at = Lock.LOCK_FOREVER
timeout_at = float(timeout_at)
# Try and get the lock, setting the timeout value in the appropriate key,
# but only if a previous value does not exist in Redis
result = yield gen.Task(self.redis_client.setnx, self.lock_name, timeout_at)
# If we managed to get the lock
if result:
# We successfully acquired the lock!
self.acquired_until = timeout_at
if callback:
callback(True)
return
# We didn't get the lock, another value is already there
# Check to see if the current lock timeout value has already expired
result = yield gen.Task(self.redis_client.get, self.lock_name)
existing = float(result or 1)
# Has it expired?
if existing < unixtime:
# The previous lock is expired. We attempt to overwrite it, getting the current value
# in the server, just in case someone tried to get the lock at the same time
result = yield gen.Task(self.redis_client.getset,
self.lock_name,
timeout_at)
existing = float(result or 1)
# If the value we read is older than our own current timestamp, we managed to get the
# lock with no issues - the timeout has indeed expired
if existing < unixtime:
# We successfully acquired the lock!
self.acquired_until = timeout_at
if callback:
callback(True)
return
# However, if we got here, then the value read from the Redis server is newer than
# our own current timestamp - meaning someone already got the lock before us.
# We failed getting the lock.
# If we are not signalled to block
if not blocking:
# We failed acquiring the lock...
if callback:
callback(False)
return
# Otherwise, we "sleep" for an amount of time equal to the polling interval, after which
# we will try getting the lock again.
yield gen.Task(self.redis_client._io_loop.add_timeout,
self.redis_client._io_loop.time() + self.polling_interval)
@gen.engine
def release(self, callback=None):
"""
Releases the already acquired lock.
If ``callback`` is supplied, it is called with True when finished.
"""
if self.acquired_until is None:
raise ValueError("Cannot release an unlocked lock")
# Get the current lock value
result = yield gen.Task(self.redis_client.get, self.lock_name)
existing = float(result or 1)
# If the lock time is in the future, delete the lock
if existing >= self.acquired_until:
yield gen.Task(self.redis_client.delete, self.lock_name)
self.acquired_until = None
# That is it.
if callback:
callback(True)
|
Aged Bark, 10" Army Green Shaft.
Features: Innovative new light weight boot with extra level of stability. Features ATS technology with a Goodyear welt. |
#!/usr/bin/env python
# Copyright (C) 2010 Douglas Eck
#
# This file is part of Gordon.
#
# Gordon is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Gordon is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Gordon. If not, see <http://www.gnu.org/licenses/>.
'''
Functions for importing music to the Gordon database
script usage:
python audio_intake_from_tracklist.py <source> <csvfile> [options]
<source> is a name for the collection
<csvfile> is the path to the CSV file listing the tracks to import
pass --test (or -t) to do a dry run: no actual import takes
place, only verbose would-dos
'''
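# Example invocations (illustrative paths; see process_arguments() below for
# the full option list):
#   python audio_intake_from_tracklist.py MyCollection /path/to/tracklist.csv --test
# performs a dry run, while the same command without --test commits the import;
# adding -m/--metadata also imports the tags embedded in each audio file.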
import os, collections, datetime, logging, stat, sys
import argparse
from csv import reader
from gordon.io import AudioFile
from gordon.db.model import add, commit, Album, Artist, Track, Collection, Annotation
from gordon.db.config import DEF_GORDON_DIR
from gordon.db.gordon_db import get_tidfilename, make_subdirs_and_copy, is_binary
from gordon.io.mp3_eyeD3 import isValidMP3, getAllTags
log = logging.getLogger('gordon.audio_intake_from_tracklist')
def add_track(trackpath, source=str(datetime.date.today()),
gordonDir=DEF_GORDON_DIR, tag_dict=dict(), artist=None,
album=None, fast_import=False, import_md=False):
"""Add track with given filename <trackpath> to database
@param source: audio files data source (string)
@param gordonDir: main Gordon directory
@param tag_dict: dictionary of key,val tag pairs - See add_album(...).
@param artist: The artist for this track. An instance of Artist. None if not present
@param album: The album for this track. An instance of Album. None if not present
@param fast_import: If true, do not calculate strip_zero length. Defaults to False
    @param import_md: use True to try to extract all metadata tags embedded in the audio file. Defaults to False
"""
(path, filename) = os.path.split(trackpath)
(fname, ext) = os.path.splitext(filename)
log.debug('Adding file "%s" of "%s" album by %s', filename, album, artist)
# validations
if 'album' not in tag_dict:
#todo: currently cannot add singleton files. Need an album which is defined in tag_dict
log.error('Cannot add "%s" because it is not part of an album',
filename)
return -1 # didn't add
if not os.path.isfile(trackpath):
log.info('Skipping %s because it is not a file', filename)
return -1 # not a file
try:
AudioFile(trackpath).read(tlen_sec=0.01)
except:
log.error('Skipping "%s" because it is not a valid audio file', filename)
return -1 # not an audio file
# required data
bytes = os.stat(trackpath)[stat.ST_SIZE]
    # decode the filename to unicode (try utf-8 first, then fall back to latin1)
try:
fn_recoded = filename.decode('utf-8')
except:
try: fn_recoded = filename.decode('latin1')
except: fn_recoded = 'unknown'
# prepare data
if tag_dict[u'compilation'] not in [True, False, 'True', 'False'] :
tag_dict[u'compilation'] = False
track = Track(title = tag_dict[u'title'],
artist = tag_dict[u'artist'],
album = tag_dict[u'album'],
tracknum = tag_dict[u'tracknum'],
compilation = tag_dict[u'compilation'],
otitle = tag_dict[u'title'],
oartist = tag_dict[u'artist'],
oalbum = tag_dict[u'album'],
otracknum = tag_dict[u'tracknum'],
ofilename = fn_recoded,
source = unicode(source),
bytes = bytes)
# add data
add(track) # needed to get a track id
commit() #to get our track id we need to write this record
log.debug('Wrote track record %s to database', track.id)
if fast_import :
track.secs = -1
track.zsecs = -1
else :
a = AudioFile(trackpath)
[track.secs, track.zsecs] = a.get_secs_zsecs()
track.path = u'%s' % get_tidfilename(track.id, ext[1:])
# links track to artist & album in DB
if artist:
log.debug('Linking %s to artist %s', track, artist)
track.artist = artist.name
track.artists.append(artist)
if album:
log.debug('Linking %s to album %s', track, album)
track.album = album.name
track.albums.append(album)
log.debug('Wrote album and artist additions to track into database')
# copy the file to the Gordon audio/feature data directory
tgt = os.path.join(gordonDir, 'audio', 'main', track.path)
make_subdirs_and_copy(trackpath, tgt)
log.debug('Copied "%s" to %s', trackpath, tgt)
# add annotations
del(tag_dict[u'title'])
del(tag_dict[u'artist'])
del(tag_dict[u'album'])
del(tag_dict[u'tracknum'])
del(tag_dict[u'compilation'])
for tagkey, tagval in tag_dict.iteritems(): # create remaining annotations
track.annotations.append(Annotation(type='text', name=tagkey, value=tagval))
if import_md:
#check if file is mp3. if so:
if isValidMP3(trackpath):
#extract all ID3 tags, store each tag value as an annotation type id3.[tagname]
for tag in getAllTags(trackpath):
track.annotations.append(Annotation(type='id3', name=tag[0], value=tag[1]))
#todo: work with more metadata formats (use tagpy?)
# Link the track to the collection object
track.collections.append(get_or_create_collection(source))
commit() # store the annotations
def _read_csv_tags(cwd, csv=None):
'''Reads a csv file containing track metadata and annotations (v 2.0)
# may use py comments in metadata .csv file. Must include a header:
    filepath, title, artist, album, tracknum, compilation, [optional1], [optional2]...
and then corresponding values per line (see example metadata.csv file in project root)
@return: a 2D dict in the form dict[<file-name>][<tag>] or False if an error occurs
@param cwd: complete csv file-path (if no <csv> sent) or path to directory to work in
@param csv: csv file-name (in <cwd> dir). Defaults to None
@todo: csv values (after header) may include "\embedded" to try getting it from the audio file.
Currently only ID3 tags names understood by gordon.io.mp3_eyeD3.id3v2_getval_sub are usable in this manner.
@todo: include other metadata formats (use tagpy?)'''
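    # Illustrative CSV layout accepted by this function (values are made up):
    #   filepath,title,artist,album,tracknum,compilation
    #   /music/song1.mp3,First Song,Some Artist,Some Album,1,False
    #   /music/song2.mp3,Second Song,Some Artist,Some Album,2,False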
# open csv file
if csv is None:
filename = cwd
else:
filename = os.path.join(cwd, csv)
try:
csvfile = reader(open(filename))
except IOError:
log.error(" Couldn't open '%s'", csv)
raise
tags = dict()
headers = False
for line in csvfile: # each record (file rows)
if len(line) < 6 : continue # skip bad lines (blank or too short)
line[0] = line[0].strip()
if not line[0] or line[0][0] == '#' : continue # skip if filepath empty or comment line
# read and validate header
if not headers: # first valid line is the header
line=[l.strip() for l in line]
if not line[:6]==['filepath','title','artist','album','tracknum','compilation']:
log.error('CSV headers are incorrect at line %d.',
csvfile.line_num)
return False
headers = [unicode(x) for x in line]
continue
# save title, artist, album, tracknum, compilation in tags[<file-name>]
filepath=line[0]
tags[filepath] = dict() # this deletes previous lines if filepath is repeated ...
col = 1 # col 0 is 'filepath' so skip it
while col < len(headers):
if col >= len(line):
break
value = line[col].strip()
if headers[col] == u'tracknum': # prepare for smallint in the DB
try: tags[filepath][u'tracknum'] = int(value)
except: tags[filepath][u'tracknum'] = 0
elif headers[col] == u'compilation': # prepare for bool in the DB
if value.lower()=='true' or value=='1':
value = True
else:
value = False
tags[filepath][u'compilation'] = value
elif os.path.isfile(value):
if not is_binary(value):
try:
txt=open(value)
tags[filepath][headers[col]] = unicode(txt.read())
txt.close()
except:
log.error('Error opening %s file %s at line %d',
headers[col], value, csvfile.line_num)
tags[filepath][headers[col]] = unicode(value)
else:
log.debug('%s file %s at line %d appears to be binary, '
'not importing', headers[col], value,
csvfile.line_num)
tags[filepath][headers[col]] = unicode(value)
else:
try:
tags[filepath][headers[col]] = u'%s' % value
except UnicodeDecodeError:
tags[filepath][headers[col]] = value.decode("utf-8")
col+=1
return tags
def add_album(album_name, tags_dicts, source=str(datetime.date.today()),
gordonDir=DEF_GORDON_DIR, prompt_aname=False, import_md=False, fast_import=False):
"""Add an album from a list of metadata in <tags_dicts> v "1.0 CSV"
"""
log.debug('Adding album "%s"', album_name)
# create set of artists from tag_dicts
artists = set()
for track in tags_dicts.itervalues():
artists.add(track['artist'])
if len(artists) == 0:
log.debug('Nothing to add')
return # no songs
else:
log.debug('Found %d artists in directory: %s', len(artists), artists)
#add album to Album table
log.debug('Album has %d tracks', len(tags_dicts))
albumrec = Album(name = album_name, trackcount = len(tags_dicts))
#if we have an *exact* string match we will use the existing artist
artist_dict = dict()
for artist in artists:
match = Artist.query.filter_by(name=artist)
if match.count() == 1 :
log.debug('Matched %s to %s in database', artist, match[0])
artist_dict[artist] = match[0]
#todo: (eckdoug) what happens if match.count()>1? This means we have multiple artists in db with same
# name. Do we try harder to resolve which one? Or just add a new one. I added a new one (existing code)
# but it seems risky.. like we will generate lots of new artists.
# Anyway, we resolve this in the musicbrainz resolver....
else :
# add our Artist to artist table
newartist = Artist(name = artist)
artist_dict[artist] = newartist
#add artist to album (album_artist table)
albumrec.artists.append(artist_dict[artist])
# Commit these changes in order to have access to this album
# record when adding tracks.
commit()
# Now add our tracks to album.
for filename in sorted(tags_dicts.keys()):
add_track(filename, source=source, gordonDir=gordonDir, tag_dict=tags_dicts[filename],
artist=artist_dict[tags_dicts[filename][u'artist']], album=albumrec,
fast_import=fast_import, import_md=import_md)
log.debug('Added "%s"', filename)
#now update our track counts
for aname, artist in artist_dict.iteritems() :
artist.update_trackcount()
log.debug('Updated trackcount for artist %s', artist)
albumrec.update_trackcount()
log.debug('Updated trackcount for album %s', albumrec)
commit()
def get_or_create_collection(source):
match = Collection.query.filter_by(name = unicode(source))
if match.count() == 1:
log.debug(' Matched source %s in database', match[0])
return match[0]
else:
return Collection(name=unicode(source))
def add_collection_from_csv_file(csvfile, source=str(datetime.date.today()),
prompt_incompletes=False, doit=False,
gordonDir=DEF_GORDON_DIR, fast_import=False,
import_md=False):
"""Adds tracks from a CSV (file) list of file-paths.
    Tracks are grouped by album name and each album is imported in turn.
With flag prompt_incompletes will prompt for incomplete albums as well
Use doit=True to actually commit the addition of songs
"""
metadata = _read_csv_tags(csvfile)
# Turn metadata into a list of albums:
albums = collections.defaultdict(dict)
for filename, x in metadata.iteritems():
albums[x['album']][filename] = x
ntracks = 1
for albumname in sorted(albums):
tracks = albums[albumname]
# tracks is a 2D dict[<file-name>][<tag>] for a single album.
if not doit:
print 'Would import album "%s"' % albumname
for track in sorted(tracks):
print ' Would import file %d: "%s"' % (ntracks, track)
for key, value in tracks[track].iteritems():
strvalue = '%s' % value
if '\n' in strvalue:
strvalue = '%s ...' % strvalue.split('\n')[0]
print ' %s: %s' % (key, strvalue)
ntracks += 1
else:
            add_album(albumname, tracks, source=source, gordonDir=gordonDir,
                      import_md=import_md, fast_import=fast_import)
print 'Finished'
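# Illustrative call from Python code (the CSV path is hypothetical):
#   add_collection_from_csv_file('/data/tracklist.csv', source='MyCollection', doit=False)
# only lists what would be imported; pass doit=True to write to the database.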
def _die_with_usage() :
    print 'This program imports a set of tracks (and their corresponding metadata) listed in a csv file into the database'
print 'Usage: '
print 'audio_intake [flags] <source> <csvfile> [doit] [metadata]'
print 'Flags:'
print ' -fast: imports without calculating zero-stripped track times.'
print ' -noprompt: will not prompt for incomplete albums. See log for what we skipped'
print 'Arguments: '
print ' <source> is the string stored to the database for source (to identify the collection) e.g. DougDec22'
print ' <csvfile> is the csv file listing tracks to be imported'
print ' <doit> (default 1) use 0 to test the intake harmlessly'
print ' <metadata> (default 0) use 1 to import all metadata tags from the file'
print 'More options are available by using the function add_collection()'
sys.exit(0)
def process_arguments():
parser = argparse.ArgumentParser(description='Gordon audio intake from track list')
parser.add_argument('source',
action = 'store',
help = 'name for the collection')
parser.add_argument('csvfile',
action = 'store',
help = 'path to the track list CSV file')
parser.add_argument('-f',
'--fast',
action = 'store_true',
dest = 'fast',
default = False,
help = 'imports without calculating zero-stripped track times')
parser.add_argument('--no-prompt',
action = 'store_false',
dest = 'prompt_incompletes',
default = True,
help = 'Do not prompt for incomplete albums. See log for what we skipped.')
parser.add_argument('-t',
'--test',
action = 'store_false',
dest = 'doit',
default = True,
help = 'test the intake without modifying the database')
parser.add_argument('-m',
'--metadata',
action = 'store_true',
dest = 'import_md',
default = False,
help = 'import all metadata flags from the audio file')
return vars(parser.parse_args(sys.argv[1:]))
if __name__ == '__main__':
args = process_arguments()
log.info('Importing audio from tracklist %s (source=%s)', args['csvfile'], args['source'])
add_collection_from_csv_file( args['csvfile'],
source=args['source'],
prompt_incompletes=args['prompt_incompletes'],
doit=args['doit'],
fast_import=args['fast'],
import_md=args['import_md'])
|
Over four decades ago, Alfred G. Knudson proposed a ground-breaking model for tumorigenesis, suggesting that cancer is a consequence of genetic mutations that inactivate specific genes which suppress the growth of cancer cells. This visionary working model has greatly advanced our understanding of cancer, and has directly led to the discovery of numerous tumor suppressor genes, including PTEN (phosphatase and tensin homolog). PTEN was found to be frequently disrupted in multiple sporadic tumor types, and mutated in the germlines of patients with cancer predisposition syndromes such as Cowden disease. PTEN protein is known to govern a plethora of cellular processes, including cell survival, proliferation, and metabolism, by suppressing the highly oncogenic PI3K-AKT-mTOR signaling pathway through its lipid phosphatase activity. Of interest, tumorigenesis is exquisitely sensitive to even subtle variations in PTEN dosage, and consequently, mechanisms regulating PTEN protein expression play a critical role in cancer susceptibility and tumorigenesis. Thus restoring PTEN functions in cancer directly or indirectly holds great therapeutic promise. In this proposal, inspired by our recent discovery of a novel PTEN feedback mechanism in metastatic prostate cancer and triple-negative breast cancer, we aim to verify our initial findings by a direct genetic approach in the mouse, explore their therapeutic potential, and maximize the response of tumor cells to targeted therapy while reducing harmful side effects to healthy cells. In addition, we expect that the innovative mouse models we develop for these studies will become invaluable tools in future cancer research, given their pleiotropic tumor phenotypes. We anticipate the outcomes of this project will open new avenues for genetic analysis, pathway studies and clinical applications, and should have a profound impact on our approach to cancer both in the clinic and the lab. |
#!/usr/bin/env python3
import base64
import distutils.file_util
import io
import itertools
import os
import shlex
import shutil
import signal
import stat
import subprocess
import sys
import threading
from typing import List, Union
import urllib.request
class LF:
'''
LineFeed (AKA newline).
Singleton class. Can be used in print_cmd to print out nicer command lines
with --key on the same line as "--key value".
'''
pass
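# Illustrative example: LF markers control where cmd_to_string breaks lines, so
#   ['gcc', LF, '-o', 'out', LF, 'main.c', LF]
# is rendered with '-o out' kept together on one line rather than one word per line.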
class ShellHelpers:
'''
Helpers to do things which are easy from the shell,
usually filesystem, process or pipe operations.
Attempt to print shell equivalents of all commands to make things
easy to debug and understand what is going on.
'''
_print_lock = threading.Lock()
def __init__(self, dry_run=False, quiet=False, force_oneline=False):
'''
:param dry_run: don't run the commands, just potentially print them. Debug aid.
:type dry_run: bool
:param quiet: don't print the commands
:type quiet: bool
'''
self.dry_run = dry_run
self.force_oneline_default = force_oneline
self.quiet = quiet
@classmethod
def _print_thread_safe(cls, string):
'''
Python sucks: a naive print adds a bunch of random spaces to stdout,
and then copy pasting the command fails.
https://stackoverflow.com/questions/3029816/how-do-i-get-a-thread-safe-print-in-python-2-6
The initial use case was test-gdb which must create a thread for GDB to run the program in parallel.
'''
with cls._print_lock:
try:
print(string, flush=True)
except BrokenPipeError:
# https://stackoverflow.com/questions/26692284/how-to-prevent-brokenpipeerror-when-doing-a-flush-in-python
# https://stackoverflow.com/questions/16314321/suppressing-printout-of-exception-ignored-message-in-python-3
pass
def add_newlines(self, cmd):
out = []
for arg in cmd:
out.extend([arg, LF])
return out
def base64_encode(self, string):
'''
TODO deal with redirection and print nicely.
'''
return base64.b64encode(string.encode()).decode()
def base64_decode(self, string):
return base64.b64decode(string.encode()).decode()
def check_output(self, *args, **kwargs):
'''
Analogous to subprocess.check_output: get the stdout / stderr
of a program back as a byte array.
'''
out_str = []
actual_kwargs = {
'show_stdout': False,
'show_cmd': False
}
actual_kwargs.update(kwargs)
self.run_cmd(
*args,
out_str=out_str,
**actual_kwargs
)
return out_str[0]
def chmod(self, path, add_rm_abs='+', mode_delta=stat.S_IXUSR):
'''
TODO extend further, shell print equivalent.
'''
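# add_rm_abs semantics (as implemented below): '+' ORs mode_delta into the
# current mode, '-' clears those bits, and '' sets the mode to mode_delta
# absolutely.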
old_mode = os.stat(path).st_mode
if add_rm_abs == '+':
new_mode = old_mode | mode_delta
elif add_rm_abs == '':
new_mode = mode_delta
elif add_rm_abs == '-':
new_mode = old_mode & ~mode_delta
os.chmod(path, new_mode)
def force_oneline(self, force_oneline):
if force_oneline is not None:
return force_oneline
else:
return self.force_oneline_default
def cmd_to_string(
self,
cmd: List[Union[str, LF]],
cwd=None,
extra_env=None,
extra_paths=None,
force_oneline: Union[bool,None] =None,
*,
stdin_path: Union[str,None] =None
):
'''
Format a command given as a list of strings so that it can
be viewed nicely and executed by bash directly and print it to stdout.
If cmd contains:
* no LF, then newlines are added after every word
* exactly one LF at the end, then no newlines are added
* otherwise: newlines are added exactly at each LF
'''
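# Examples (these mirror the self-tests at the bottom of this file):
#   cmd_to_string(['cmd', 'arg1'])         -> 'cmd \\\n    arg1 \\\n;'
#   cmd_to_string(['cmd', 'arg1', LF])     -> 'cmd arg1'
#   cmd_to_string(['cmd', LF, 'arg1', LF]) -> 'cmd \\\n    arg1 \\\n;'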
last_newline = ' \\\n'
newline_separator = last_newline + ' '
out = []
if extra_env is None:
extra_env = {}
preffix_arr = []
if cwd is not None:
preffix_arr.append('cd {} &&'.format(shlex.quote(cwd)))
extra_env2 = extra_env.copy()
if extra_paths is not None:
extra_env2['PATH'] = '{}:"${{PATH}}"'.format(shlex.quote(':'.join(extra_paths)))
for key in extra_env2:
preffix_arr.append('{}={}'.format(shlex.quote(key), shlex.quote(extra_env2[key])))
cmd_quote = []
newline_count = 0
for arg in cmd:
if arg == LF:
if not self.force_oneline(force_oneline):
cmd_quote.append(arg)
newline_count += 1
else:
cmd_quote.append(shlex.quote(arg))
if self.force_oneline(force_oneline) or newline_count > 0:
cmd_quote = [
' '.join(list(y))
for x, y in itertools.groupby(
cmd_quote,
lambda z: z == LF
)
if not x
]
if self.force_oneline(force_oneline):
cmd_quote = [' '.join(preffix_arr + cmd_quote)]
else:
cmd_quote = preffix_arr + cmd_quote
out.extend(cmd_quote)
if stdin_path is not None:
out.append('< {}'.format(shlex.quote(stdin_path)))
if self.force_oneline(force_oneline) or newline_count == 1 and cmd[-1] == LF:
ending = ''
else:
ending = last_newline + ';'
return newline_separator.join(out) + ending
def copy_file_if_update(self, src, destfile):
if os.path.isdir(destfile):
destfile = os.path.join(destfile, os.path.basename(src))
self.mkdir_p(os.path.dirname(destfile))
if (
not os.path.exists(destfile) or \
os.path.getmtime(src) > os.path.getmtime(destfile)
):
self.cp(src, destfile)
def copy_dir_if_update_non_recursive(
self,
srcdir,
destdir,
filter_ext=None
):
# TODO print rsync equivalent.
os.makedirs(destdir, exist_ok=True)
if not os.path.exists(srcdir) and self.dry_run:
basenames = []
else:
basenames = os.listdir(srcdir)
for basename in sorted(basenames):
src = os.path.join(srcdir, basename)
if os.path.isfile(src) or os.path.islink(src):
noext, ext = os.path.splitext(basename)
if (filter_ext is None or ext == filter_ext):
dest = os.path.join(destdir, basename)
self.copy_file_if_update(src, dest)
def copy_dir_if_update(
self,
srcdir,
destdir,
filter_ext=None
):
self.copy_dir_if_update_non_recursive(srcdir, destdir, filter_ext)
srcdir_abs = os.path.abspath(srcdir)
srcdir_abs_len = len(srcdir_abs)
for path, dirnames, filenames in self.walk(srcdir_abs):
for dirname in dirnames:
dirpath = os.path.join(path, dirname)
dirpath_relative_root = dirpath[srcdir_abs_len + 1:]
self.copy_dir_if_update_non_recursive(
dirpath,
os.path.join(destdir, dirpath_relative_root),
filter_ext
)
def cp(self, src, dest, **kwargs):
if not kwargs.get('quiet', False):
self.print_cmd(['cp', src, dest])
if not self.dry_run:
if os.path.islink(src):
if os.path.lexists(dest):
os.unlink(dest)
linkto = os.readlink(src)
os.symlink(linkto, dest)
else:
shutil.copy2(src, dest)
def mkdir_p(self, d):
if not os.path.exists(d):
self.print_cmd(['mkdir', d, LF])
if not self.dry_run:
os.makedirs(d)
def mv(self, src, dest, **kwargs):
self.print_cmd(['mv', src, dest])
if not self.dry_run:
shutil.move(src, dest)
def print_cmd(
self,
cmd,
cwd=None,
cmd_file=None,
cmd_files=None,
extra_env=None,
extra_paths=None,
force_oneline: Union[bool,None] =None,
*,
stdin_path: Union[str,None] =None
):
'''
Print cmd_to_string to stdout.
Optionally save the command to cmd_file file, and add extra_env
environment variables to the command generated.
'''
if type(cmd) is str:
cmd_string = cmd
else:
cmd_string = self.cmd_to_string(
cmd,
cwd=cwd,
extra_env=extra_env,
extra_paths=extra_paths,
force_oneline=force_oneline,
stdin_path=stdin_path
)
if not self.quiet:
self._print_thread_safe('+ ' + cmd_string)
if cmd_files is None:
cmd_files = []
if cmd_file is not None:
cmd_files.append(cmd_file)
for cmd_file in cmd_files:
os.makedirs(os.path.dirname(cmd_file), exist_ok=True)
with open(cmd_file, 'w') as f:
f.write('#!/usr/bin/env bash\n')
f.write(cmd_string)
self.chmod(cmd_file)
def rmrf(self, path):
self.print_cmd(['rm', '-r', '-f', path, LF])
if not self.dry_run and os.path.exists(path):
if os.path.isdir(path):
shutil.rmtree(path)
else:
os.unlink(path)
def run_cmd(
self,
cmd,
cmd_file=None,
cmd_files=None,
out_file=None,
show_stdout=True,
show_cmd=True,
extra_env=None,
extra_paths=None,
delete_env=None,
raise_on_failure=True,
*,
out_str=None,
stdin_path: Union[str,None] =None,
**kwargs
):
'''
Run a command. Write the command to stdout before running it.
Wait until the command finishes execution.
:param cmd: command to run. LF entries are magic and get skipped.
:type cmd: List[str]
:param cmd_file: if not None, write the command to be run to that file
:type cmd_file: str
:param cmd_files: if not None, write the command to be run to all files in this list
cmd_file gets appended to that list if given.
:type cmd_files: List[str]
:param out_file: if not None, write the stdout and stderr of the command to that file
:type out_file: str
:param out_str: if not None, append the stdout and stderr string to this list
:type out_str: Union[List, None]
:param show_stdout: whether to show stdout and stderr on the terminal or not
:type show_stdout: bool
:param extra_env: extra environment variables to add when running the command
:type extra_env: Dict[str,str]
:return: exit status of the command
:rtype: int
'''
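# Hedged usage sketch (hypothetical command; mirrors how check_output above
# drives this method):
#   out = []
#   self.run_cmd(['ls', LF], out_str=out, show_stdout=False, show_cmd=False)
#   listing = out[0]  # stdout + stderr as bytes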
if out_file is None and out_str is None:
if show_stdout:
stdout = None
stderr = None
else:
stdout = subprocess.DEVNULL
stderr = subprocess.DEVNULL
else:
stdout = subprocess.PIPE
stderr = subprocess.STDOUT
if extra_env is None:
extra_env = {}
if delete_env is None:
delete_env = []
if 'cwd' in kwargs:
cwd = kwargs['cwd']
else:
cwd = None
env = os.environ.copy()
env.update(extra_env)
if extra_paths is not None:
path = ':'.join(extra_paths)
if 'PATH' in os.environ:
path += ':' + os.environ['PATH']
env['PATH'] = path
for key in delete_env:
if key in env:
del env[key]
if show_cmd:
self.print_cmd(
cmd,
cwd=cwd,
cmd_file=cmd_file,
cmd_files=cmd_files,
extra_env=extra_env,
extra_paths=extra_paths,
stdin_path=stdin_path
)
# Otherwise, if called from a non-main thread:
# ValueError: signal only works in main thread
if threading.current_thread() == threading.main_thread():
# Otherwise Ctrl + C gives:
# - ugly Python stack trace for gem5 (QEMU takes over terminal and is fine).
# - kills Python, and that then kills GDB:
# https://stackoverflow.com/questions/19807134/does-python-always-raise-an-exception-if-you-do-ctrlc-when-a-subprocess-is-exec
sigint_old = signal.getsignal(signal.SIGINT)
signal.signal(signal.SIGINT, signal.SIG_IGN)
# Otherwise BrokenPipeError when piping through | grep
# But if I do this_module, my terminal gets broken at the end. Why, why, why.
# https://stackoverflow.com/questions/14207708/ioerror-errno-32-broken-pipe-python
# Ignoring the exception is not enough as it prints a warning anyways.
#sigpipe_old = signal.getsignal(signal.SIGPIPE)
#signal.signal(signal.SIGPIPE, signal.SIG_DFL)
cmd = self.strip_newlines(cmd)
if not self.dry_run:
if stdin_path is None:
stdin = None
else:
stdin = open(stdin_path, 'r')
# https://stackoverflow.com/questions/15535240/python-popen-write-to-stdout-and-log-file-simultaneously/52090802#52090802
with subprocess.Popen(
cmd,
stdin=stdin,
stdout=stdout,
stderr=stderr,
env=env,
**kwargs
) as proc:
if out_file is not None or out_str is not None:
if out_file is not None:
os.makedirs(os.path.split(os.path.abspath(out_file))[0], exist_ok=True)
if out_file is not None:
logfile = open(out_file, 'bw')
logfile_str = []
while True:
byte = proc.stdout.read(1)
if byte:
if show_stdout:
sys.stdout.buffer.write(byte)
try:
sys.stdout.flush()
except BlockingIOError:
# TODO understand. Why, Python, why.
pass
if out_file is not None:
logfile.write(byte)
if out_str is not None:
logfile_str.append(byte)
else:
break
if out_file is not None:
logfile.close()
if out_str is not None:
out_str.append((b''.join(logfile_str)))
if threading.current_thread() == threading.main_thread():
signal.signal(signal.SIGINT, sigint_old)
#signal.signal(signal.SIGPIPE, sigpipe_old)
if stdin_path is not None:
stdin.close()
returncode = proc.returncode
if returncode != 0 and raise_on_failure:
e = Exception('Command exited with status: {}'.format(returncode))
e.returncode = returncode
raise e
return returncode
else:
if out_str is not None:
out_str.append(b'')
return 0
def shlex_split(self, string):
'''
Like shlex.split, but also adds an LF after every word.
Not perfect since it does not group arguments, but I don't see a solution.
'''
return self.add_newlines(shlex.split(string))
def strip_newlines(self, cmd):
if type(cmd) is str:
return cmd
else:
return [x for x in cmd if x != LF]
def walk(self, root):
'''
Extended walk that can take files or directories.
'''
if not os.path.exists(root):
raise Exception('Path does not exist: ' + root)
if os.path.isfile(root):
dirname, basename = os.path.split(root)
yield dirname, [], [basename]
else:
for path, dirnames, filenames in os.walk(root):
dirnames.sort()
filenames.sort()
yield path, dirnames, filenames
def wget(self, url, download_path):
'''
Download the file at url to download_path.
I wish we could have a progress indicator, but impossible:
https://stackoverflow.com/questions/51212/how-to-write-a-download-progress-indicator-in-python
'''
self.print_cmd([
'wget', LF,
'-O', download_path, LF,
url, LF,
])
urllib.request.urlretrieve(url, download_path)
def write_configs(self, config_path, configs, config_fragments=None, mode='a'):
'''
Append extra KEY=val configs into the given config file.
'''
if config_fragments is None:
config_fragments = []
for config_fragment in config_fragments:
self.print_cmd(['cat', config_fragment, '>>', config_path])
if not self.dry_run:
with open(config_path, 'a') as config_file:
for config_fragment in config_fragments:
with open(config_fragment, 'r') as config_fragment_file:
for line in config_fragment_file:
config_file.write(line)
self.write_string_to_file(config_path, '\n'.join(configs), mode=mode)
def write_string_to_file(self, path, string, mode='w'):
if mode == 'a':
redirect = '>>'
else:
redirect = '>'
self.print_cmd("cat << 'EOF' {} {}\n{}\nEOF".format(redirect, path, string))
if not self.dry_run:
with open(path, mode) as f:
f.write(string)
if __name__ == '__main__':
shell_helpers = ShellHelpers()
if 'cmd_to_string':
# Default.
assert shell_helpers.cmd_to_string(['cmd']) == 'cmd \\\n;'
assert shell_helpers.cmd_to_string(['cmd', 'arg1']) == 'cmd \\\n arg1 \\\n;'
assert shell_helpers.cmd_to_string(['cmd', 'arg1', 'arg2']) == 'cmd \\\n arg1 \\\n arg2 \\\n;'
# Argument with a space gets escaped.
assert shell_helpers.cmd_to_string(['cmd', 'arg1 arg2']) == "cmd \\\n 'arg1 arg2' \\\n;"
# Ending in LF with no other LFs get separated only by spaces.
assert shell_helpers.cmd_to_string(['cmd', LF]) == 'cmd'
assert shell_helpers.cmd_to_string(['cmd', 'arg1', LF]) == 'cmd arg1'
assert shell_helpers.cmd_to_string(['cmd', 'arg1', 'arg2', LF]) == 'cmd arg1 arg2'
# More than one LF adds newline separators at each LF.
assert shell_helpers.cmd_to_string(['cmd', LF, 'arg1', LF]) == 'cmd \\\n arg1 \\\n;'
assert shell_helpers.cmd_to_string(['cmd', LF, 'arg1', LF, 'arg2', LF]) == 'cmd \\\n arg1 \\\n arg2 \\\n;'
assert shell_helpers.cmd_to_string(['cmd', LF, 'arg1', 'arg2', LF]) == 'cmd \\\n arg1 arg2 \\\n;'
# force_oneline separates everything simply by spaces.
assert \
shell_helpers.cmd_to_string(['cmd', LF, 'arg1', LF, 'arg2', LF], force_oneline=True) \
== 'cmd arg1 arg2'
# stdin_path
assert shell_helpers.cmd_to_string(['cmd'], stdin_path='ab') == "cmd \\\n < ab \\\n;"
assert shell_helpers.cmd_to_string(['cmd'], stdin_path='a b') == "cmd \\\n < 'a b' \\\n;"
|
WASTE-FREE | PLASTIC FREE | 100% COTTON | RE-USABLE | 20 x 20 CM (APPROX).
These colourful, reusable food wraps will keep your food fresh and free of plastic.
Size: 20 x 20 cm (approx).
Each wrap may last for one year with the right care and use.
Wrap around food or on top of bowls by moulding wraps with warmth from your hands. The wrap will stick to itself, keeping your food fresh.
You can wash off residues with cold water and gentle sanitiser if needed. |
# Copyright (c) 2015, MapR Technologies
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import sahara.plugins.mapr.base.base_version_handler as bvh
from sahara.plugins.mapr.services.drill import drill
from sahara.plugins.mapr.services.flume import flume
from sahara.plugins.mapr.services.hbase import hbase
from sahara.plugins.mapr.services.hive import hive
from sahara.plugins.mapr.services.httpfs import httpfs
from sahara.plugins.mapr.services.hue import hue
from sahara.plugins.mapr.services.impala import impala
from sahara.plugins.mapr.services.mahout import mahout
from sahara.plugins.mapr.services.management import management as mng
from sahara.plugins.mapr.services.maprfs import maprfs
from sahara.plugins.mapr.services.oozie import oozie
from sahara.plugins.mapr.services.pig import pig
from sahara.plugins.mapr.services.spark import spark
from sahara.plugins.mapr.services.sqoop import sqoop2
from sahara.plugins.mapr.services.swift import swift
from sahara.plugins.mapr.services.yarn import yarn
import sahara.plugins.mapr.versions.v5_0_0_mrv2.context as c
version = "5.0.0.mrv2"
class VersionHandler(bvh.BaseVersionHandler):
def __init__(self):
super(VersionHandler, self).__init__()
self._version = version
self._required_services = [
yarn.YARNv270(),
maprfs.MapRFS(),
mng.Management(),
oozie.Oozie(),
]
self._services = [
hive.HiveV013(),
hive.HiveV10(),
hive.HiveV12(),
impala.ImpalaV141(),
pig.PigV014(),
pig.PigV015(),
flume.FlumeV15(),
flume.FlumeV16(),
spark.SparkOnYarn(),
sqoop2.Sqoop2(),
mahout.MahoutV010(),
oozie.OozieV410(),
oozie.OozieV420(),
hue.HueV370(),
hue.HueV381(),
hue.HueV390(),
hbase.HBaseV0989(),
hbase.HBaseV09812(),
drill.DrillV11(),
drill.DrillV14(),
yarn.YARNv270(),
maprfs.MapRFS(),
mng.Management(),
httpfs.HttpFS(),
swift.Swift(),
]
def get_context(self, cluster, added=None, removed=None):
return c.Context(cluster, self, added, removed)
|
A play for women's rights at the Flight Deck.
One in three women will have an abortion in her lifetime. That statistic is at the core of the message of the 1 in 3 Campaign, a group that fights for women's right to abortion. But those leading the grassroots initiative also know that the experiences women have with abortion are far more complex than a data sound bite. Through a traveling play, the national campaign is disseminating many of those stories in an attempt to break the silence and stigma of shame surrounding the topic. Called Remarkably Normal, the production is a “documentary play” that dramatizes actual interviews with and stories submitted by people who have received abortions and who provide abortion care. The play’s cast is currently traveling the country in order to raise awareness as the Supreme Court prepares to make a major ruling on the legality of abortion, and will be performing at the Flight Deck (1540 Broadway, Oakland) this Friday and Saturday at 7 p.m. |
import datetime
from django.utils.translation import activate
from io import BytesIO
from reportlab.lib.pagesizes import A4
from reportlab.lib.units import cm
from reportlab.pdfgen import canvas
from velo.results.tasks import create_result_sms
from velo.core.models import Log
from velo.core.pdf import fill_page_with_image, _baseFontNameB
from velo.registration.competition_classes import RM2016
from velo.registration.models import UCICategory, Participant, PreNumberAssign
from velo.results.models import ChipScan, DistanceAdmin, Result, LapResult
from velo.results.tables import ResultRMGroupTable, ResultRMDistanceTable, ResultRMTautaDistanceTable
class RM2017(RM2016):
SPORTA_DISTANCE_ID = 65
TAUTAS_DISTANCE_ID = 66
TAUTAS1_DISTANCE_ID = 77
GIMENU_DISTANCE_ID = 68
BERNU_DISTANCE_ID = 67
def _update_year(self, year):
return year + 3
@property
def groups(self):
"""
Returns defined groups for each competition type.
"""
return {
self.SPORTA_DISTANCE_ID: ('M-18', 'M', 'Masters', 'M 19-34 CFA', 'W'),
self.TAUTAS_DISTANCE_ID: ('T M-16', 'T W-16', 'T M', 'T W', 'T M-35', 'T M-45', 'T M-55', 'T M-65'),
self.TAUTAS1_DISTANCE_ID: ('T1 M', 'T1 W',)
}
def number_ranges(self):
"""
Returns number ranges for each distance.
"""
return {
self.SPORTA_DISTANCE_ID: [{'start': 1, 'end': 500, 'group': ''}, ],
self.TAUTAS_DISTANCE_ID: [{'start': 2001, 'end': 3400, 'group': ''}, ],
self.TAUTAS1_DISTANCE_ID: [{'start': 3401, 'end': 4000, 'group': ''}, ],
}
def assign_group(self, distance_id, gender, birthday, participant=None):
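# Illustrative example (hypothetical rider, derived from the checks below): a
# male born in 1985 on the sport distance fails the M-18 and Masters checks and
# is assigned 'M'; the same rider on the Tautas distance falls into 'T M',
# since _update_year shifts the 1980..1997 bounds to 1983..2000.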
year = birthday.year
if distance_id not in (self.SPORTA_DISTANCE_ID, self.TAUTAS_DISTANCE_ID, self.TAUTAS1_DISTANCE_ID):
return ''
elif distance_id == self.SPORTA_DISTANCE_ID:
if gender == 'M':
if participant and (self._update_year(1995) >= year >= self._update_year(1980)) and UCICategory.objects.filter(category="CYCLING FOR ALL", slug=participant.slug):
return 'M 19-34 CFA'
if self._update_year(1997) >= year >= self._update_year(1996):
return 'M-18'
elif year <= self._update_year(1979):
return 'Masters'
else:
return 'M'
else:
return 'W'
elif distance_id == self.TAUTAS_DISTANCE_ID:
if gender == 'M':
if self._update_year(1999) >= year >= self._update_year(1998):
return 'T M-16'
elif self._update_year(1997) >= year >= self._update_year(1980):
return 'T M'
elif self._update_year(1979) >= year >= self._update_year(1970):
return 'T M-35'
elif self._update_year(1969) >= year >= self._update_year(1960):
return 'T M-45'
elif self._update_year(1959) >= year >= self._update_year(1950):
return 'T M-55'
elif year <= self._update_year(1949):
return 'T M-65'
else:
if self._update_year(1999) >= year >= self._update_year(1996):
return 'T W-16'
elif year <= self._update_year(1997):
return 'T W'
elif distance_id == self.TAUTAS1_DISTANCE_ID:
if gender == 'M':
return 'T1 M'
else:
return 'T1 W'
print("here I shouldn't be...")
raise Exception('Invalid group assigning. {0} {1} {2}'.format(gender, distance_id, birthday))
def passages(self):
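# Assumption, not confirmed by this file: each tuple below appears to be
# (passage_index, first_number, last_number, reserved_spots).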
return {
self.SPORTA_DISTANCE_ID: [(1, 1, 200, 0), (2, 201, 500, 0)],
self.TAUTAS_DISTANCE_ID: [
(1, 2001, 2200, 20),
(2, 2201, 2400, 20),
(3, 2401, 2600, 15),
(4, 2601, 2800, 10),
(5, 2801, 3000, 10),
(6, 3001, 3200, 5),
(7, 3201, 3400, 5),
],
self.TAUTAS1_DISTANCE_ID: [
(1, 3401, 3600, 5),
(2, 3601, 3800, 5),
(3, 3801, 4000, 5),
],
}
def number_pdf(self, participant_id):
activate('lv')
participant = Participant.objects.get(id=participant_id)
output = BytesIO()
c = canvas.Canvas(output, pagesize=A4)
fill_page_with_image("velo/media/competition/vestule/RVm_2017_vestule_ar_tekstu.jpg", c)
c.setFont(_baseFontNameB, 18)
c.drawString(6*cm, 20.6*cm, "%s %s" % (participant.full_name.upper(), participant.birthday.year))
c.drawString(5*cm, 18.6*cm, str(participant.distance))
if participant.primary_number:
c.setFont(_baseFontNameB, 35)
c.drawString(16*cm, 19.6*cm, str(participant.primary_number))
elif participant.distance_id == self.GIMENU_DISTANCE_ID:
c.setFont(_baseFontNameB, 25)
c.drawString(15*cm, 19.6*cm, "Ģimeņu br.")
else:
c.setFont(_baseFontNameB, 25)
c.drawString(16.5*cm, 19.6*cm, "-")
c.showPage()
c.save()
output.seek(0)
return output
def assign_numbers(self, reassign=False, assign_special=False):
# TODO: Need to find all participants that have started in sport distance and now are in other distances.
prev_participants = [p.slug for p in Participant.objects.filter(is_participating=True, competition=self.competition, distance_id=53)]
now_participants = Participant.objects.filter(distance_id=self.TAUTAS_DISTANCE_ID, is_participating=True, slug__in=prev_participants)
for now in now_participants:
try:
PreNumberAssign.objects.get(competition=self.competition, participant_slug=now.slug)
except:
PreNumberAssign.objects.create(competition=self.competition, distance=now.distance, participant_slug=now.slug, segment=1)
super().assign_numbers(reassign, assign_special)
def result_select_extra(self, distance_id):
return {
'l1': 'SELECT time FROM results_lapresult l1 WHERE l1.result_id = results_result.id and l1.index=1',
}
def get_result_table_class(self, distance, group=None):
if group:
return ResultRMGroupTable
else:
if distance.id in (self.SPORTA_DISTANCE_ID, self.TAUTAS1_DISTANCE_ID):
return ResultRMDistanceTable
else:
return ResultRMTautaDistanceTable
def process_chip_result(self, chip_id, sendsms=True, recalc=False):
"""
Function processes chip result and recalculates all standings
"""
chip = ChipScan.objects.get(id=chip_id)
distance_admin = DistanceAdmin.objects.get(competition=chip.competition, distance=chip.nr.distance)
Log.objects.create(content_object=chip, action="Chip process", message="Started")
delta = datetime.datetime.combine(datetime.date.today(), distance_admin.zero) - datetime.datetime.combine(datetime.date.today(), datetime.time(0,0,0,0))
result_time = (datetime.datetime.combine(datetime.date.today(), chip.time) - delta).time()
result_time_5back = (datetime.datetime.combine(datetime.date.today(), chip.time) - delta - datetime.timedelta(minutes=5)).time()
if result_time_5back > result_time:
result_time_5back = datetime.time(0,0,0)
result_time_5forw = (datetime.datetime.combine(datetime.date.today(), chip.time) - delta + datetime.timedelta(minutes=5)).time()
seconds = result_time.hour * 60 * 60 + result_time.minute * 60 + result_time.second
# Do not process scans recorded less than 10 minutes after the start (or before the zero time).
if seconds < 10 * 60 or chip.time < distance_admin.zero: # 10 minutes
Log.objects.create(content_object=chip, action="Chip process", message="Chip result less than 10 minutes. Ignoring.")
return None
if chip.is_processed:
Log.objects.create(content_object=chip, action="Chip process", message="Chip already processed")
return None
participants = Participant.objects.filter(competition_id__in=self.competition.get_ids(), is_participating=True, slug=chip.nr.participant_slug, distance=chip.nr.distance)
if not participants:
Log.objects.create(content_object=chip, action="Chip process", message="Number not assigned to anybody. Ignoring.")
return None
else:
participant = participants[0]
if participant.is_competing:
result, created = Result.objects.get_or_create(competition=chip.competition, participant=participant, number=chip.nr)
already_exists_result = LapResult.objects.filter(result=result, time__gte=result_time_5back, time__lte=result_time_5forw)
if already_exists_result:
Log.objects.create(content_object=chip, action="Chip process", message="Chip double scanned.")
else:
laps_done = result.lapresult_set.count()
result.lapresult_set.create(index=(laps_done+1), time=result_time)
# Fix lap index
for index, lap in enumerate(result.lapresult_set.order_by('time'), start=1):
lap.index = index
lap.save()
if (chip.nr.distance_id == self.SPORTA_DISTANCE_ID and laps_done == 0) or (chip.nr.distance_id == self.TAUTAS_DISTANCE_ID and laps_done == 1) or (chip.nr.distance_id == self.TAUTAS1_DISTANCE_ID and laps_done == 0):
Log.objects.create(content_object=chip, action="Chip process", message="DONE. Lets assign avg speed.")
last_laptime = result.lapresult_set.order_by('-time')[0]
result.time = last_laptime.time
result.set_avg_speed()
result.save()
self.assign_standing_places()
if self.competition.competition_date == datetime.date.today() and sendsms:
create_result_sms.apply_async(args=[result.id, ], countdown=120)
chip.is_processed = True
chip.save()
print(chip)
|
I see from pics you guys cook it with husk on. I've also done it that way but now if I'm cooking for a few I cut a corn into 3 pieces, rub some butter over it and sprinkle salt over it. Then I double wrap in Ali foil. Everyone loves it. We are lucky my wife grows her own corn. I've just planted 16 new seeds for this spring. Thoughts.?
Russ wrote: I see from pics you guys cook it with husk on. I've also done it that way but now if I'm cooking for a few I cut a corn into 3 pieces, rub some butter over it and sprinkle salt over it. Then I double wrap in Ali foil. Everyone loves it. We are lucky my wife grows her own corn. I've just planted 16 new seeds for this spring. Thoughts.?
Wanna try something really good ?
Nuke 2 ears in the microwave in the entire husk.
I'd shoot for 5 minutes total. Then cut the one pointy end. Grab the husk end and squeeze the Cobb out. It's gonna be extremely hot but should not have any Silk on the corn cobb at all.
Hit it with salted butter, then grated parmesan cheese and then with some Old Bay or Slap Yo Mama. I've even used Sucklebusters SPG and it was absolutely Delicious !!!
Do that and I promise you will love it.
Wv, I promise you I will try your way this summer. I've seen this somewhere so I know it will work. My family all love corn. I planted some for our garden just yesterday. |
###############################################################################
#cyn.in is an open source Collaborative Knowledge Management Appliance that
#enables teams to seamlessly work together on files, documents and content in
#a secure central environment.
#
#cyn.in v2 an open source appliance is distributed under the GPL v3 license
#along with commercial support options.
#
#cyn.in is a Cynapse Invention.
#
#Copyright (C) 2008 Cynapse India Pvt. Ltd.
#
#This program is free software: you can redistribute it and/or modify it under
#the terms of the GNU General Public License as published by the Free Software
#Foundation, either version 3 of the License, or any later version and observe
#the Additional Terms applicable to this program and must display appropriate
#legal notices. In accordance with Section 7(b) of the GNU General Public
#License version 3, these Appropriate Legal Notices must retain the display of
#the "Powered by cyn.in" AND "A Cynapse Invention" logos. You should have
#received a copy of the detailed Additional Terms License with this program.
#
#This program is distributed in the hope that it will be useful,
#but WITHOUT ANY WARRANTY; without even the implied warranty of
#MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
#Public License for more details.
#
#You should have received a copy of the GNU General Public License along with
#this program. If not, see <http://www.gnu.org/licenses/>.
#
#You can contact Cynapse at [email protected] with any problems with cyn.in.
#For any queries regarding the licensing, please send your mails to
# [email protected]
#
#You can also contact Cynapse at:
#802, Building No. 1,
#Dheeraj Sagar, Malad(W)
#Mumbai-400064, India
###############################################################################
from setuptools import setup, find_packages
import os
version = '0.1'
setup(name='ubify.smartview',
version=version,
description="intelligent views",
long_description=open("README.txt").read() + "\n" +
open(os.path.join("docs", "HISTORY.txt")).read(),
# Get more strings from http://www.python.org/pypi?%3Aaction=list_classifiers
classifiers=[
"Framework :: Plone",
"Programming Language :: Python",
"Topic :: Software Development :: Libraries :: Python Modules",
],
keywords='web zope plone theme',
author='Cynapse',
author_email='[email protected]',
url='http://www.cynapse.com',
license='GPL',
packages=find_packages(exclude=['ez_setup']),
namespace_packages=['ubify'],
include_package_data=True,
zip_safe=False,
install_requires=[
'setuptools',
# -*- Extra requirements: -*-
],
entry_points="""
# -*- Entry points: -*-
""",
)
|
The increasing volume of data in modern business and science calls for more complex and sophisticated tools. Although advances in data mining technology have made extensive data collection much easier, the field is still evolving, and there is a constant need for new techniques and tools that can help us transform this data into useful information and knowledge.
Since the previous edition’s publication, great advances have been made in the field of data mining. Not only does the third edition of Data Mining: Concepts and Techniques continue the tradition of equipping you with an understanding and application of the theory and practice of discovering patterns hidden in large data sets, it also focuses on new, important topics in the field: data warehouses and data cube technology, mining data streams, mining social networks, and mining spatial, multimedia and other complex data. Each chapter is a stand-alone guide to a critical topic, presenting proven algorithms and sound implementations ready to be used directly or with strategic modification against live data. This is the resource you need if you want to apply today’s most powerful data mining techniques to meet real business challenges.
Presents dozens of algorithms and implementation examples, all in pseudo-code and suitable for use in real-world, large-scale data mining projects.
Thanks to Flipkart for amazing service... This is for people who want to learn the technical skills of data mining techniques. The chapters on clustering are very good.
Book quality is very bad; many pages are printed badly.
Book arrived in perfect condition. |
# Copyright (c) 2015, Euan Thoms
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
#
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import sys, os, subprocess
from PyQt4 import QtCore, QtGui, uic
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from LoginDialog import LoginDialog
from RootPasswordDialog import RootPasswordDialog
try:
_fromUtf8 = QString.fromUtf8
except AttributeError:
_fromUtf8 = lambda s: s
( Ui_NetdriveConnector, QWidget ) = uic.loadUiType( os.path.join(os.path.dirname( __file__ ), 'NetdriveConnector.ui' ))
TOOLTIP_PREFIX = "Full fstab entry: "
SSHFS_INVALID_OPTIONS = ['users','noauto']
class NetdriveConnector ( QWidget ):
def __init__ ( self, parent = None ):
QWidget.__init__( self, parent )
self.ui = Ui_NetdriveConnector()
self.ui.setupUi( self )
self.getHomeFolder()
self.dependencyCheck()
self.loadConnectionsTable()
def __del__ ( self ):
self.ui = None
def dependencyCheck(self):
shellCommand = str("groups | egrep 'davfs2 | davfs2'")
if subprocess.call(shellCommand,shell=True) != 0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Warning")
message =\
"""
WARNING: The currently logged in user is not a member of the davfs2 group.
This will likely cause the mounting of WebDAV connections to fail.
Consider adding this user account to the davfs2 group. Consult your OS/distributions guide for how to add a user to a group.
"""
warningMessage.setText(message)
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
def loadConnectionsTable(self):
self.ui.connectionsTableWidget.clear()
allConnections = []
if self.ui.currentUserCheckBox.isChecked():
grepForCurrentUser = " | grep " + self.homeFolder
else:
grepForCurrentUser = ""
shellCommand = str("cat /etc/fstab | grep -v '^#' | grep ' davfs '" + grepForCurrentUser)
if subprocess.call(shellCommand,shell=True) == 0:
davfsConnections = str (subprocess.check_output(shellCommand,shell=True)).splitlines()
allConnections = allConnections + davfsConnections
else:
davfsConnections = None
shellCommand = str("cat /etc/fstab | grep -v '^#' | grep ' fuse.sshfs '" + grepForCurrentUser)
if subprocess.call(shellCommand,shell=True) == 0:
sftpConnections = str (subprocess.check_output(shellCommand,shell=True)).splitlines()
allConnections = allConnections + sftpConnections
else:
sftpConnections = None
self.ui.connectionsTableWidget.setColumnCount(2)
self.ui.connectionsTableWidget.setHorizontalHeaderLabels(('URL','Mount Point'))
self.ui.connectionsTableWidget.setRowCount(len(allConnections))
row = 0
for rowData in allConnections:
url = rowData.split(' ')[0]
mountPoint = rowData.split(' ')[1]
shellCommand = str("mount | grep ' " + str(mountPoint) + " '")
if subprocess.call(shellCommand,shell=True) == 0:
bgColour = QColor(100,200,100,80)
else:
bgColour = QColor(250,120,10,80)
tableItem = QtGui.QTableWidgetItem(url)
self.ui.connectionsTableWidget.setItem(row, 0, tableItem)
tableItem.setBackgroundColor(bgColour)
tableItem.setToolTip(TOOLTIP_PREFIX + rowData)
tableItem = QtGui.QTableWidgetItem(mountPoint)
self.ui.connectionsTableWidget.setItem(row, 1, tableItem)
tableItem.setBackgroundColor(bgColour)
tableItem.setToolTip(TOOLTIP_PREFIX + rowData)
row += 1
self.ui.connectionsTableWidget.resizeColumnsToContents()
self.ui.connectionsTableWidget.resizeRowsToContents()
header = self.ui.connectionsTableWidget.horizontalHeader()
header.setStretchLastSection(True)
def clearSftpFields(self):
self.ui.sftpUsernameLineEdit.clear()
self.ui.sftpHostnameLineEdit.clear()
self.ui.sftpPortSpinBox.setValue(22)
self.ui.sftpPathLineEdit.clear()
self.ui.sftpMountpointLineEdit.clear()
self.ui.sftpPasswordlessCheckBox.setChecked(True)
self.ui.sftpPasswordLineEdit.clear()
self.ui.sftpAutoMountCheckBox.setCheckable(True)
self.ui.sftpAutoMountCheckBox.setChecked(False)
def clearWebdavFields(self):
self.ui.webdavServerUrlLineEdit.clear()
self.ui.webdavUriLineEdit.clear()
self.ui.webdavMountpointLineEdit.clear()
self.ui.httpRadioButton.setChecked(True)
self.ui.webdavProtocolLbl.setText("http://")
self.ui.webdavPortSpinBox.setValue(80)
self.ui.webdavUsernameLineEdit.clear()
self.ui.webdavPasswordLineEdit.clear()
self.ui.webdavAutoMountCheckBox.setCheckable(True)
self.ui.webdavAutoMountCheckBox.setChecked(False)
def currentUserCheckBoxClicked(self):
self.loadConnectionsTable()
def sftpPasswordlessCheckBoxClicked(self):
if self.ui.sftpPasswordlessCheckBox.isChecked():
self.ui.sftpAutoMountCheckBox.setCheckable(True)
else:
self.ui.sftpAutoMountCheckBox.setChecked(False)
self.ui.sftpAutoMountCheckBox.setCheckable(False)
def webdavSavePasswordCheckBoxClicked(self):
if self.ui.webdavSavePasswordCheckBox.isChecked():
self.ui.webdavAutoMountCheckBox.setCheckable(True)
else:
self.ui.webdavAutoMountCheckBox.setChecked(False)
self.ui.webdavAutoMountCheckBox.setCheckable(False)
def connectBtnClicked(self):
if len(self.ui.connectionsTableWidget.selectedItems()) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No connection selected. Please select a filesystem to connect.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
toolTipText = str ( self.ui.connectionsTableWidget.selectedItems()[0].toolTip() )
toConnect = toolTipText[toolTipText.find(TOOLTIP_PREFIX)+len(TOOLTIP_PREFIX):]
filesystem = toConnect.split(' ')[0]
mountpoint = toConnect.split(' ')[1]
fsType = toConnect.split(' ')[2]
fstabMountOptions = toConnect.split(' ')[3].split(',')
mountOptions = ""
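# Rebuild the mount options string, dropping the fstab-side options listed in
# SSHFS_INVALID_OPTIONS ('users', 'noauto') that sshfs itself does not accept.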
for option in fstabMountOptions:
if option not in SSHFS_INVALID_OPTIONS:
mountOptions = mountOptions + option + ","
if mountOptions != "":
mountOptions = mountOptions[:-1]
shellCommand = str("mount | grep ' " + mountpoint + " '")
if subprocess.call(shellCommand,shell=True) == 0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("The selected filesystem is already mounted.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
if fsType == "davfs":
shellCommand = str("cat '" + self.homeFolder + "/.davfs2/secrets' | grep '^" + filesystem +" '")
if subprocess.call(shellCommand,shell=True) != 0:
isWebdavPasswordSaved = False
loginDialog = LoginDialog("")
loginDialog.exec_()
if not loginDialog.isOK:
return False
else:
username,password = loginDialog.getLoginCredentials()
shellCommand = str("echo '" + filesystem + " " + username + " " + password + "' >> '" + self.homeFolder + "/.davfs2/secrets'")
if subprocess.call(shellCommand,shell=True) != 0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("ERROR: Failed to add username/password to secrets file.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
else:
isWebdavPasswordSaved = True
shellCommand = str("mount " + mountpoint)
if subprocess.call(shellCommand,shell=True) != 0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("Failed to connect filesystem: " + filesystem)
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
else:
successMessage = QtGui.QMessageBox(self)
successMessage.setWindowTitle("Netdrive Connector - Success")
successMessage.setText("Successfully connected the remote filesystem: " + filesystem )
successMessage.setIcon(QtGui.QMessageBox.Information)
successMessage.show()
if not isWebdavPasswordSaved:
# TODO: check for GNU/Linux or *BSD and use specific sed in-place command
shellCommand = str('sed -i "\|^' + filesystem + ' .*|d" "' + self.homeFolder + '/.davfs2/secrets"')
if subprocess.call(shellCommand,shell=True) != 0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("ERROR: Failed to remove username/password from secrets file.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
if fsType == "fuse.sshfs":
# NOTE: since we rely on a ssh-askpass to graphically prompt for password (no tty),
# we need to use sshfs instead of mount. At least on Slackware, mount does not initiate the ssh-askpass.
shellCommand = str("sshfs " + filesystem + " " + mountpoint + " -o " + mountOptions)
print shellCommand
if subprocess.call(shellCommand, shell=True) != 0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("Failed to connect filesystem: " + filesystem)
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
else:
successMessage = QtGui.QMessageBox(self)
successMessage.setWindowTitle("Netdrive Connector - Success")
successMessage.setText("Successfully connected the remote filesystem: " + filesystem )
successMessage.setIcon(QtGui.QMessageBox.Information)
successMessage.show()
self.loadConnectionsTable()
def disconnectBtnClicked(self):
if len(self.ui.connectionsTableWidget.selectedItems()) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No connection selected. Please select a filesystem to disconnect.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
toolTipText = str ( self.ui.connectionsTableWidget.selectedItems()[0].toolTip() )
toDisconnect = toolTipText[toolTipText.find(TOOLTIP_PREFIX)+len(TOOLTIP_PREFIX):]
mountpoint = toDisconnect.split(' ')[1]
fs_type = toDisconnect.split(' ')[2]
shellCommand = str("mount | grep ' " + mountpoint + " '")
if subprocess.call(shellCommand,shell=True) != 0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("The selected filesystem is not currently mounted.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
if fs_type == "fuse.sshfs":
shellCommand = str("fusermount -u " + mountpoint)
else:
shellCommand = str("umount " + mountpoint)
if subprocess.call(shellCommand,shell=True) != 0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("Failed to disconnect mount point: " + mountpoint + " . Try to save and close all open files, exit the folder and try again." )
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
else:
successMessage = QtGui.QMessageBox(self)
successMessage.setWindowTitle("Netdrive Connector - Success")
successMessage.setText("Successfully disconnected the remote filesystem mounted at: " + mountpoint)
successMessage.setIcon(QtGui.QMessageBox.Information)
successMessage.show()
self.loadConnectionsTable()
def removeBtnClicked(self):
if len(self.ui.connectionsTableWidget.selectedItems()) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No connection selected. Please select a filesystem to remove.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
toolTipText = str ( self.ui.connectionsTableWidget.selectedItems()[0].toolTip() )
connection = toolTipText[toolTipText.find(TOOLTIP_PREFIX)+len(TOOLTIP_PREFIX):]
filesystem = connection.split(' ')[0]
mountpoint = connection.split(' ')[1]
fsType = connection.split(' ')[2]
shellCommand = str("mount | grep ' " + mountpoint + " '")
if subprocess.call(shellCommand,shell=True) == 0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("The selected filesystem is currently mounted. Disconnect before trying to remove the connection.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
reply = QtGui.QMessageBox.question(self, 'Netdrive Connector',"Are you sure that you want to remove this connection?", \
QtGui.QMessageBox.Yes | QtGui.QMessageBox.No, QtGui.QMessageBox.No)
if reply == QtGui.QMessageBox.No:
return False
if fsType == "davfs":
removeCmd = "remove-webdav-connector"
elif fsType == "fuse.sshfs":
removeCmd = "remove-sftp-connector"
rootPasswordDialog = RootPasswordDialog()
rootPasswordDialog.exec_()
if not rootPasswordDialog.isOK:
return False
password = rootPasswordDialog.getRootPassword()
removeConnectorParms = filesystem + " " + mountpoint
if subprocess.call(['unbuffer','netdrive-connector_run-as-root', str(password), removeCmd, removeConnectorParms]) !=0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("Failed to remove the connection to : " + filesystem )
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
mountpointNoSlashes = str(mountpoint).replace("/","_")
shellCommand = str("rm " + self.homeFolder + "/.config/autostart/netdrive_connector" + mountpointNoSlashes + ".desktop" )
if subprocess.call(shellCommand,shell=True) != 0:
print "WARNING: problem whilst removing autostart file."
self.loadConnectionsTable()
def refreshBtnClicked(self):
self.loadConnectionsTable()
def addSftpBtnClicked(self):
sftpUsername= self.ui.sftpUsernameLineEdit.text()
sftpHostname= self.ui.sftpHostnameLineEdit.text()
sftpPort = str(self.ui.sftpPortSpinBox.value())
sftpMountpoint = self.ui.sftpMountpointLineEdit.text()
sftpPath = self.ui.sftpPathLineEdit.text()
sftpPassword = self.ui.sftpPasswordLineEdit.text()
if len(str(sftpUsername).replace(" ","")) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No valid username. Please enter a valid username.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
if len(str(sftpHostname).replace(" ","")) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No valid hostname. Please enter a valid hostname.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
if len(str(sftpPath).replace(" ","")) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No valid path. Please enter a valid path.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
if len(str(sftpMountpoint).replace(" ","")) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No mount point (folder) selected. Please select a folder to use as a mount point.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
if self.ui.sftpPasswordlessCheckBox.isChecked() and len(str(sftpPassword).replace(" ","")) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No SFTP password supplied. Please enter the password for the user on the server.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
rootPasswordDialog = RootPasswordDialog()
rootPasswordDialog.exec_()
if not rootPasswordDialog.isOK:
return False
password = rootPasswordDialog.getRootPassword()
if self.ui.sftpPasswordlessCheckBox.isChecked():
connectorParms = sftpUsername + "@" + sftpHostname + ":" + sftpPort + "/" + sftpPath + " " + sftpMountpoint + " key " + sftpPassword
else:
connectorParms = sftpUsername + "@" + sftpHostname + ":" + sftpPort + "/" + sftpPath + " " + sftpMountpoint
if subprocess.call(['unbuffer','netdrive-connector_run-as-root', str(password), 'add-sftp-connector', connectorParms]) !=0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("Failed to add the connection. ")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
else:
if self.ui.sftpAutoMountCheckBox.isChecked():
self.addAutoMount(sftpMountpoint, "fuse.sshfs")
self.clearSftpFields()
self.loadConnectionsTable()
def addWebdavBtnClicked(self):
webdavProtocol = self.ui.webdavProtocolLbl.text()
webdavURL = self.ui.webdavServerUrlLineEdit.text()
webdavPort = str(self.ui.webdavPortSpinBox.value())
webdavMountpoint = self.ui.webdavMountpointLineEdit.text()
webdavURI = self.ui.webdavUriLineEdit.text()
webdavUsername = self.ui.webdavUsernameLineEdit.text()
webdavPassword = self.ui.webdavPasswordLineEdit.text()
if len(str(webdavURL).replace(" ","")) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No valid server URL. Please enter a valid server URL.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
if len(str(webdavURI).replace(" ","")) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No valid WebDAV URI. Please enter a valid WebDAV URI.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
if len(str(webdavMountpoint).replace(" ","")) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No mount point (folder) selected. Please select a folder to use as a mount point.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
if self.ui.webdavSavePasswordCheckBox.isChecked() and len(str(webdavUsername).replace(" ","")) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No valid WebDAV username supplied. Please enter a valid WebDAV username.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
if self.ui.webdavSavePasswordCheckBox.isChecked() and len(str(webdavPassword).replace(" ","")) < 1:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("No WebDAV password supplied. Please enter the WebDAV password.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
rootPasswordDialog = RootPasswordDialog()
rootPasswordDialog.exec_()
if not rootPasswordDialog.isOK:
return False
password = rootPasswordDialog.getRootPassword()
if self.ui.webdavSavePasswordCheckBox.isChecked():
connectorParms = webdavProtocol + webdavURL + ":" + webdavPort + "/" + webdavURI + " " + webdavMountpoint + " " + webdavUsername + " " + webdavPassword
else:
connectorParms = webdavProtocol + webdavURL + ":" + webdavPort + "/" + webdavURI + " " + webdavMountpoint
if subprocess.call(['unbuffer','netdrive-connector_run-as-root', str(password), 'add-webdav-connector', connectorParms]) !=0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("Failed to add the connection. ")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
else:
if self.ui.webdavAutoMountCheckBox.isChecked():
self.addAutoMount(webdavMountpoint, "davfs")
self.clearWebdavFields()
self.loadConnectionsTable()
def sftpMountpointBtnClicked(self):
mountpoint = QtGui.QFileDialog.getExistingDirectory(self, 'Select mount point',self.homeFolder)
if mountpoint == self.homeFolder:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Warning")
warningMessage.setText("WARNING: The selected folder is your home folder. Mounting a remote filesystem to your home folder is not recommended.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
if self.isMountpointOwnedByCurrentUser(mountpoint):
self.ui.sftpMountpointLineEdit.setText(mountpoint)
else:
errorMessage = QtGui.QErrorMessage(self)
errorMessage.setWindowTitle("Netdrive Connector - Error")
errorMessage.showMessage("ERROR: you are not the owner of the selected folder. Please change ownership of the folder or select a different mount point.")
def webdavMountpointBtnClicked(self):
mountpoint = QtGui.QFileDialog.getExistingDirectory(self, 'Select mount point',self.homeFolder)
if mountpoint == self.homeFolder:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Warning")
warningMessage.setText("WARNING: The selected folder is your home folder. Mounting a remote filesystem to your home folder is not recommended.")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
if self.isMountpointOwnedByCurrentUser(mountpoint):
self.ui.webdavMountpointLineEdit.setText(mountpoint)
else:
errorMessage = QtGui.QErrorMessage(self)
errorMessage.setWindowTitle("Netdrive Connector - Error")
errorMessage.showMessage("ERROR: you are not the owner of the selected folder. Please change ownership of the folder or select a different mount point.")
def httpRadioBtnClicked(self):
self.ui.webdavProtocolLbl.setText("http://")
if self.ui.webdavPortSpinBox.value() == 443:
self.ui.webdavPortSpinBox.setValue(80)
def httpsRadioBtnClicked(self):
self.ui.webdavProtocolLbl.setText("https://")
if self.ui.webdavPortSpinBox.value() == 80:
self.ui.webdavPortSpinBox.setValue(443)
def getHomeFolder(self):
self.homeFolder = str (subprocess.check_output("echo $HOME",shell=True)).splitlines()[0]
def isMountpointOwnedByCurrentUser(self, mountpoint):
currentUser = str (subprocess.check_output("whoami",shell=True)).splitlines()[0]
shellCommand = str ("ls -ld " + mountpoint + " | awk '{print $3}'")
folderOwner = str (subprocess.check_output(shellCommand,shell=True)).splitlines()[0]
if folderOwner != currentUser:
return False
else:
return True
def addAutoMount(self, mountpoint, fs_type):
mountpointNoSlashes = str(mountpoint).replace("/","_")
fileContents =\
"""
[Desktop Entry]
Name=Netdrive AutoMounter
Hidden=false
StartupNotify=false
Terminal=false
TerminalOptions=
Type=Application
"""
fileContents = str(fileContents + "Exec=netdrive-connector_automountd " + mountpoint + " " + fs_type)
shellCommand = str("if [ ! -d " + self.homeFolder + "/.config/autostart ]; then mkdir " + self.homeFolder + "/.config/autostart ; fi ; echo '" + fileContents + "' > " + self.homeFolder + "/.config/autostart/netdrive_connector" + mountpointNoSlashes + ".desktop" )
if subprocess.call(shellCommand,shell=True) != 0:
warningMessage = QtGui.QMessageBox(self)
warningMessage.setWindowTitle("Netdrive Connector - Error")
warningMessage.setText("An error occured whilst creating the autostart file in " + self.homeFolder + "/.config/autostart .")
warningMessage.setIcon(QtGui.QMessageBox.Warning)
warningMessage.show()
return False
|
Hoeth + Shadow is by far the most favoured combo for Archmages when you peruse the Ulthuan forums. That and book of Ashur for players who want to offer their mage some protection.
Where it worked particularly well against me was in teaming it with what I saw as a nice big swathe of free points. I took it hook, line and sinker.
So are we going to see this type of combo in your HEs?
Most likely I will be trying it out this week at 2400pts during a club game - play testing for Skitterleap. Although I think Lore of Life also has a lot to offer it. The problem is that if you take the Book of Hoeth it leaves the mage completely unprotected, as it is very much a one-shot kind of deal. |
# -*- coding: utf-8 -*-
"""
===========================================
Commands :core:`getwebfilesinator.commands`
===========================================
Commands for the GetWebFilesInator
"""
from argparseinator import ArgParseInated
from argparseinator import arg
from argparseinator import class_args
from getwebfilesinator.client import GwfiClient
from getwebfilesinator.utils import update_paths, getLogger
log = getLogger(__name__)
# Tell to ArgParseInator the class must be parsed for GetWebFilesInator
# commands and it is a ArgParseInated subclass.
@class_args
class Commands(ArgParseInated):
"""Commands for getwebfilesinator"""
# We will check for the configuration file. It is mandatory.
def __preinator__(self):
if not self.args.config:
# if we don't have a configuration file we will exit using
# the builtin __argpi__ (ArgParseInator Instance)
__argpi__.exit(1, u'The configuration file is mandatory\n')
# this will be the only command.
@arg()
def download(self):
"""Downloads files according with configuration"""
# lets instantiate the client passing the configuration
cli = GwfiClient(self.cfg)
# now the client should process all the files
# (should we change the name ?)
cli.process(self.cfg.files or [])
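# A hypothetical command-line invocation, given only as a sketch: the actual
# entry-point name and the spelling of the configuration flag are defined
# elsewhere by ArgParseInator and may differ.
#
#   getwebfilesinator --config config.yml download
#
# __preinator__ runs before any command and exits if no configuration file
# was supplied; the `download` command then lets GwfiClient fetch every
# file listed in the configuration.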
|
WTI prompt futures contracts were pummeled last week, gaining a small amount back on Friday to close $0.70 higher and settle at $46.22, though not enough to counteract the week's losses. So here's a recap of the events that took place: Baker Hughes reported an increase in the US rig count, with 7 of the new rigs in the Permian Basin; the EIA reported a smaller-than-anticipated crude draw (930k barrels actual versus a 3-million-barrel forecast); OPEC producers signalled that they did not wish to deepen the production cuts should an extension be agreed past June; all while the Permian Basin posted a tremendous quarter and showed no signs of slowing down.
What’s in store for this week? We hope you had an excellent weekend and came back refreshed and ready to face the market. |
#!/usr/bin/env python
import unittest
import pentai.base.human_player as h_m
import pentai.base.rules as r_m
import pentai.base.game as g_m
import pentai.ai.priority_filter as pf_m
import pentai.ai.utility_calculator as uc_m
from pentai.ai.ab_state import *
def get_black_line_counts(ab_game_state):
return ab_game_state.get_utility_stats().lines[P1]
def get_white_line_counts(ab_game_state):
return ab_game_state.get_utility_stats().lines[P2]
class AlphaBetaBridgeTest(unittest.TestCase):
def setUp(self):
player1 = h_m.HumanPlayer("Blomp")
player2 = h_m.HumanPlayer("Kubba")
r = r_m.Rules(13, "standard")
my_game = g_m.Game(r, player1, player2)
self.gs = my_game.current_state
self.search_filter = pf_m.PriorityFilter()
self.util_calc = uc_m.UtilityCalculator()
self.s = ABState(search_filter=self.search_filter,
utility_calculator=self.util_calc)
self.bl = self.s.utility_stats.lines[P1]
self.wl = self.s.utility_stats.lines[P2]
self.s.set_state(self.gs)
def test_update_substrips_middle_of_board(self):
self.gs.set_occ((7,7), P1)
"""
self.assertEquals(self.bl, [20, 0, 0, 0, 0])
self.assertEquals(self.wl, [0, 0, 0, 0, 0])
def test_empty_board(self):
self.assertEquals(self.bl, [0, 0, 0, 0, 0])
self.assertEquals(self.wl, [0, 0, 0, 0, 0])
def test_update_substrips_SW_corner(self):
self.gs.set_occ((0,0), P1)
self.assertEquals(self.bl, [3, 0, 0, 0, 0])
self.assertEquals(self.wl, [0, 0, 0, 0, 0])
def test_update_substrips_near_SW_corner(self):
self.gs.set_occ((1,0), P1)
self.assertEquals(self.bl, [4, 0, 0, 0, 0])
self.assertEquals(self.wl, [0, 0, 0, 0, 0])
def test_update_substrips_NE_corner(self):
self.gs.set_occ((12,12), P1)
self.assertEquals(self.bl, [3, 0, 0, 0, 0])
self.assertEquals(self.wl, [0, 0, 0, 0, 0])
def test_update_substrips_remove_single_stone(self):
self.gs.set_occ((0,0), P1)
self.gs.set_occ((0,0), EMPTY)
self.assertEquals(self.bl, [0, 0, 0, 0, 0])
self.assertEquals(self.wl, [0, 0, 0, 0, 0])
def test_update_substrips_two_blacks_SW(self):
self.gs.set_occ((0,0), P1)
self.gs.set_occ((1,1), P1)
self.assertEquals(self.bl, [7, 1, 0, 0, 0])
self.assertEquals(self.wl, [0, 0, 0, 0, 0])
def test_update_substrips_2_opp_colour_pieces(self):
self.gs.set_occ((0,0), P1)
self.gs.set_occ((0,1), P2)
self.assertEquals(self.bl, [2, 0, 0, 0, 0])
self.assertEquals(self.wl, [3, 0, 0, 0, 0])
def test_update_substrips_2_pieces(self):
self.gs.set_occ((0,0), P1)
self.gs.set_occ((0,1), P1)
self.assertEquals(self.bl, [5, 1, 0, 0, 0])
self.assertEquals(self.wl, [0, 0, 0, 0, 0])
def test_update_substrips_5_in_a_row(self):
self.gs.set_occ((0,0), P1)
self.gs.set_occ((0,1), P1)
self.gs.set_occ((0,2), P1)
self.gs.set_occ((0,3), P1)
self.gs.set_occ((0,4), P1)
self.assertEquals(self.bl, [12, 1, 1, 1, 1])
self.assertEquals(self.wl, [0, 0, 0, 0, 0])
class LengthCountingTest(unittest.TestCase):
def setUp(self):
player1 = h_m.HumanPlayer("Blomp")
player2 = h_m.HumanPlayer("Kubba")
r = r_m.Rules(9, "standard")
my_game = g_m.Game(r, player1, player2)
self.gs = my_game.current_state
self.search_filter = pf_m.PriorityFilter()
self.util_calc = uc_m.UtilityCalculator()
self.s = ABState(search_filter=self.search_filter,
utility_calculator=self.util_calc)
self.bl = self.s.utility_stats.lines[P1]
self.wl = self.s.utility_stats.lines[P2]
self.s.set_state(self.gs)
def test_middle_for_black_diag_2_for_white(self):
self.gs.set_occ((4,4), P1)
self.gs.set_occ((2,2), P2)
self.assertEquals(self.bl, [17, 0, 0, 0, 0])
self.assertEquals(self.wl, [7, 0, 0, 0, 0])
def test_middle_for_black_left_1_for_white(self):
self.gs.set_occ((4,4), P1)
self.gs.set_occ((3,4), P2)
self.assertEquals(self.bl, [16, 0, 0, 0, 0])
self.assertEquals(self.wl, [5+4+4, 0, 0, 0, 0])
def test_middle_for_black_right_1_for_white(self):
self.gs.set_occ((4,4), P1)
self.gs.set_occ((5,4), P2)
self.assertEquals(self.bl, [16, 0, 0, 0, 0])
self.assertEquals(self.wl, [5+4+4, 0, 0, 0, 0])
def test_middle_for_black_up_1_for_white(self):
self.gs.set_occ((4,4), P1)
self.gs.set_occ((4,5), P2)
self.assertEquals(self.bl, [16, 0, 0, 0, 0])
self.assertEquals(self.wl, [5+4+4, 0, 0, 0, 0])
def test_middle_for_black_down_1_for_white(self):
self.gs.set_occ((4,4), P1)
self.gs.set_occ((4,3), P2)
self.assertEquals(self.bl, [16, 0, 0, 0, 0])
self.assertEquals(self.wl, [5+4+4, 0, 0, 0, 0])
###############
class MoreAlphaBetaBridgeTests(unittest.TestCase):
def setUp(self):
player1 = h_m.HumanPlayer("Blomp")
player2 = h_m.HumanPlayer("Kubba")
r = r_m.Rules(5, "standard")
my_game = g_m.Game(r, player1, player2)
self.gs = my_game.current_state
self.search_filter = pf_m.PriorityFilter()
self.util_calc = uc_m.UtilityCalculator()
self.s = ABState(search_filter=self.search_filter,
utility_calculator=self.util_calc)
self.bl = self.s.utility_stats.lines[P1]
self.wl = self.s.utility_stats.lines[P2]
self.s.set_state(self.gs)
def test_initial_state_black_to_move(self):
self.assertEquals(self.s.to_move_colour(), P1)
def test_create_state(self):
child = self.s.create_state((2,2))
self.assertEquals(child.to_move_colour(), P2)
self.assertEquals(child.terminal(), False)
board = child.board()
self.assertEquals(board.get_occ((2,2)), P1)
self.assertEquals(board.get_occ((3,3)), EMPTY)
self.assertEquals(board.get_occ((1,1)), EMPTY)
def test_length_counters_after_sw_corner(self):
g1 = self.s.create_state((0,0)) # B
self.assertEquals(get_black_line_counts(g1), [3, 0, 0, 0, 0])
def test_length_counters_after_nw_corner(self):
g1 = self.s.create_state((0,4)) # B
self.assertEquals(get_black_line_counts(g1), [3, 0, 0, 0, 0])
def test_length_counters_after_ne_corner(self):
g1 = self.s.create_state((4,4)) # B
self.assertEquals(get_black_line_counts(g1), [3, 0, 0, 0, 0])
def test_length_counters_after_se_corner(self):
g1 = self.s.create_state((4,0)) # B
self.assertEquals(get_black_line_counts(g1), [3, 0, 0, 0, 0])
def test_cannot_place_off_e_edge(self):
try:
g1 = self.s.create_state((-1,2)) # B
except IllegalMoveException:
return
self.fail()
def test_length_counters_after_two_moves(self):
g1 = self.s.create_state((0,0)) # B
g2 = g1.create_state((1,1)) # W
self.assertEquals(get_black_line_counts(g2), [2, 0, 0, 0, 0])
self.assertEquals(get_white_line_counts(g2), [2, 0, 0, 0, 0])
def test_length_counters_after_two_moves_b(self):
g1 = self.s.create_state((1,1)) # B
g2 = g1.create_state((2,2)) # W
self.assertEquals(get_black_line_counts(g2), [2, 0, 0, 0, 0])
# One across the other diagonal
self.assertEquals(get_white_line_counts(g2), [3, 0, 0, 0, 0])
def test_length_counters_after_five_moves(self):
# along the NE diagonal
g1 = self.s.create_state((1,1)) # B
g2 = g1.create_state((2,2)) # W
g3 = g2.create_state((3,3)) # B
g4 = g3.create_state((4,4)) # W
g5 = g4.create_state((0,0)) # B
self.assertEquals(get_black_line_counts(g5), [6, 0, 0, 0, 0])
self.assertEquals(get_white_line_counts(g5), [5, 0, 0, 0, 0])
def test_length_counters_after_five_moves_in_cnrs_and_middle(self):
# four in the corners and one in the middle
g1 = self.s.create_state((0,0)) # B
g2 = g1.create_state((0,4)) # W
g3 = g2.create_state((4,4)) # B
g4 = g3.create_state((4,0)) # W
g5 = g4.create_state((2,2)) # B
self.assertEquals(get_black_line_counts(g5), [2, 0, 1, 0, 0])
self.assertEquals(get_white_line_counts(g5), [0, 0, 0, 0, 0])
def test_make_a_capture(self):
g1 = self.s.create_state((0,1)) # B
g2 = g1.create_state((1,2)) # W
g3 = g2.create_state((1,3)) # B
g4 = g3.create_state((2,3)) # W
g5 = g4.create_state((3,4)) # B
self.assertEquals(g5.to_move_colour(), P2)
self.assertEquals(g5.terminal(), False)
board = g5.board()
self.assertEquals(board.get_occ((0,1)), P1)
self.assertEquals(board.get_occ((1,3)), P1)
self.assertEquals(board.get_occ((3,4)), P1)
self.assertEquals(board.get_occ((1,2)), EMPTY)
self.assertEquals(board.get_occ((2,3)), EMPTY)
class ThreatTest(unittest.TestCase):
def setUp(self):
player1 = h_m.HumanPlayer("Blomp")
player2 = h_m.HumanPlayer("Kubba")
r = r_m.Rules(5, "standard")
my_game = g_m.Game(r, player1, player2)
self.gs = my_game.current_state
self.search_filter = pf_m.PriorityFilter()
self.util_calc = uc_m.UtilityCalculator()
self.s = ABState(search_filter=self.search_filter,
utility_calculator=self.util_calc)
self.bl = self.s.utility_stats.lines[P1]
self.wl = self.s.utility_stats.lines[P2]
self.s.set_state(self.gs)
def test_add_one_take_for_white(self):
g1 = self.s.create_state((2,4)) # B
g2 = g1.create_state((1,4)) # W
g3 = g2.create_state((3,4)) # B
self.assertEquals(g3.get_takes(), [0, 0, 1])
def test_SW_valid(self):
g1 = self.s.create_state((1,1)) # B
g2 = g1.create_state((3,3)) # W
g3 = g2.create_state((2,2)) # B
self.assertEquals(g3.get_takes(), [0, 0, 1])
def test_NW_valid(self):
g1 = self.s.create_state((1,3)) # B
g2 = g1.create_state((3,1)) # W
g3 = g2.create_state((2,2)) # B
self.assertEquals(g3.get_takes(), [0, 0, 1])
def test_NE_valid(self):
g1 = self.s.create_state((3,3)) # B
g2 = g1.create_state((1,1)) # W
g3 = g2.create_state((2,2)) # B
self.assertEquals(g3.get_takes(), [0, 0, 1])
def test_SE_valid(self):
g1 = self.s.create_state((2,2)) # B
g2 = g1.create_state((1,3)) # W
g3 = g2.create_state((3,1)) # B
self.assertEquals(g3.get_takes(), [0, 0, 1])
##########################################
def test_SW_invalid(self):
g1 = self.s.create_state((0,0)) # B
g2 = g1.create_state((2,2)) # W
g3 = g2.create_state((1,1)) # B
self.assertEquals(g3.get_takes(), [0, 0, 0])
def test_NW_invalid(self):
g1 = self.s.create_state((0,4)) # B
g2 = g1.create_state((2,2)) # W
g3 = g2.create_state((1,3)) # B
self.assertEquals(g3.get_takes(), [0, 0, 0])
def test_NE_invalid(self):
g1 = self.s.create_state((4,4)) # B
g2 = g1.create_state((2,2)) # W
g3 = g2.create_state((3,3)) # B
self.assertEquals(g3.get_takes(), [0, 0, 0])
def test_SE_invalid(self):
g1 = self.s.create_state((4,0)) # B
g2 = g1.create_state((2,2)) # W
g3 = g2.create_state((3,1)) # B
self.assertEquals(g3.get_takes(), [0, 0, 0])
##########################################
def test_W_invalid(self):
g1 = self.s.create_state((0,2)) # B
g2 = g1.create_state((2,2)) # W
g3 = g2.create_state((1,2)) # B
self.assertEquals(g3.get_takes(), [0, 0, 0])
def test_E_invalid(self):
g1 = self.s.create_state((4,2)) # B
g2 = g1.create_state((2,2)) # W
g3 = g2.create_state((3,2)) # B
self.assertEquals(g3.get_takes(), [0, 0, 0])
def test_N_invalid(self):
g1 = self.s.create_state((2,4)) # B
g2 = g1.create_state((2,2)) # W
g3 = g2.create_state((2,3)) # B
self.assertEquals(g3.get_takes(), [0, 0, 0])
def test_S_invalid(self):
g1 = self.s.create_state((2,0)) # B
g2 = g1.create_state((2,2)) # W
g3 = g2.create_state((2,1)) # B
self.assertEquals(g3.get_takes(), [0, 0, 0])
##########################################
def test_SW_invalid_take2(self):
g1 = self.s.create_state((1,0)) # B
g2 = g1.create_state((3,2)) # W
g3 = g2.create_state((2,1)) # B
self.assertEquals(g3.get_takes(), [0, 0, 0])
def test_SW_invalid_threat2(self):
g1 = self.s.create_state((1,0)) # B
g2 = g1.create_state((3,4)) # W (irrel.)
g3 = g2.create_state((2,1)) # B
self.assertEquals(g3.get_threats(), [0, 0, 0])
##########################################
'''
def test_seen(self):
self.s.set_seen(set([(1,2)]))
moves = list(self.s.successors())
'''
"""
# TODO: lots of threat cases, or unify stuff
if __name__ == "__main__":
unittest.main()
|
"""
python-mochi
------------
python-mochi is a lib for working with the `mochiads api <https://www.mochimedia.com/support/pub_docs>`_
Links
`````
* `website <http://codeboje.de/python-mochi/>`_
* `development version
<http://github.com/azarai/python-mochi>`_
"""
from distutils.core import setup
setup(name="python-mochi",
version="0.0.1",
description="A Python lib for the mochiads api",
long_description=__doc__,
author="Jens Boje",
author_email="[email protected]",
url="http://codeboje.de/python-mochi/",
packages=['mochi'],
platforms='any',
license = 'BSD',
classifiers = [
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
'Topic :: Software Development :: Libraries :: Python Modules'
],
)
|
Fund records can store fund relationships. How can I import fund relationships onto fund records or constituent records?
4. Open appropriate Fund record.
5. Select the Relationships tab.
6. Click New Individual Relationship or New Organization Relationship.
7. Click binoculars and search for Constituent (Individual or Organization) record. |
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import gettext
import os
import sys
PY3 = sys.version_info > (3,)
LOCALES_DIR = os.path.join(
os.path.dirname(os.path.abspath(__file__)),
'locales'
)
class Translator(object):
def configure(self, locale):
if not os.path.exists(os.path.join(LOCALES_DIR, locale)):
locale = 'en'
self.lang = gettext.translation(
'messages', localedir=LOCALES_DIR, languages=[locale])
self.lang.install()
def translate(self, string, arguments=None):
if PY3:
gettext = self.lang.gettext
else:
gettext = self.lang.ugettext
translated = gettext(string)
if arguments is not None:
translated = translated % arguments
return translated
class __proxy__(object):
def __init__(self, string, translator, arguments):
self.translator = translator
self.string = string
self.arguments = arguments
def __repr__(self):
return self.translator.translate(self.string, self.arguments)
__str__ = __repr__
class LazyTranslator(object):
def __init__(self):
self.translator = Translator()
def __call__(self, string, arguments=None):
self.proxy = __proxy__(string, self.translator, arguments)
return self.proxy
translate_lazy = LazyTranslator()
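# A minimal usage sketch, assuming compiled message catalogs exist under
# locales/<lang>/LC_MESSAGES/messages.mo; without them gettext.translation()
# raises an error, hence the guard below.
if __name__ == '__main__':
    try:
        translator = Translator()
        translator.configure('en')
        print(translator.translate('Hello, %s!', 'world'))
        # translate_lazy returns a proxy; the lookup happens when it is rendered
        translate_lazy.translator.configure('en')
        print(translate_lazy('Hello, %s!', 'world'))
    except (IOError, OSError) as exc:
        print('translation catalogs not available: %s' % exc)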
|
#
# Copyright 2012 - 2013 David Sommerseth <[email protected]>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# For the avoidance of doubt the "preferred form" of this code is one which
# is in an open unpatent encumbered format. Where cryptographic key signing
# forms part of the process of creating an executable the information
# including keys needed to generate an equivalently functional executable
# are deemed to be part of the source code.
#
import libxml2
from rteval.modules import RtEvalModules, ModuleContainer
class MeasurementProfile(RtEvalModules):
"""Keeps and controls all the measurement modules with the same measurement profile"""
def __init__(self, config, with_load, run_parallel, modules_root, logger):
self.__with_load = with_load
self.__run_parallel = run_parallel
# Only used when running modules serialised
self.__serialised_mods = None
self._module_type = "measurement"
self._module_config = "measurement"
self._report_tag = "Profile"
RtEvalModules.__init__(self, config, modules_root, logger)
def GetProfile(self):
"Returns the profile characteristic as (with_load, run_parallel)"
return (self.__with_load, self.__run_parallel)
def ImportModule(self, module):
"Imports an exported module from a ModuleContainer() class"
return self._ImportModule(module)
def Setup(self, modname):
"Instantiates and prepares a measurement module"
modobj = self._InstantiateModule(modname, self._cfg.GetSection(modname))
self._RegisterModuleObject(modname, modobj)
def Unleash(self):
"""Unleashes all the measurement modules"""
if self.__run_parallel:
# Use the inherited method if running
# measurements in parallel
return RtEvalModules.Unleash(self)
# Get a list of all registered modules,
# and start the first one
self.__serialised_mods = self.GetModulesList()
mod = self.GetNamedModuleObject(self.__serialised_mods[0])
mod.setStart()
return 1
def MakeReport(self):
"Generates an XML report for all run measurement modules in this profile"
rep_n = RtEvalModules.MakeReport(self)
rep_n.newProp("loads", self.__with_load and "1" or "0")
rep_n.newProp("parallel", self.__run_parallel and "1" or "0")
return rep_n
def isAlive(self):
"""Returns True if all modules which are supposed to run runs"""
if self.__run_parallel:
return self._isAlive()
if len(self.__serialised_mods) > 0:
# If running serialised, first check if measurement is still running,
# if so - return True.
mod = self.GetNamedModuleObject(self.__serialised_mods[0])
if mod.WorkloadAlive():
return True
# If not, go to next on the list and kick it off
self.__serialised_mods.remove(self.__serialised_mods[0])
if len(self.__serialised_mods) > 0:
mod = self.GetNamedModuleObject(self.__serialised_mods[0])
mod.setStart()
return True
# If we've been through everything, nothing is running
return False
class MeasurementModules(object):
"""Class which takes care of all measurement modules and groups them into
measurement profiles, based on their characteristics"""
def __init__(self, config, logger):
self.__cfg = config
self.__logger = logger
self.__measureprofiles = []
self.__modules_root = "modules.measurement"
self.__iter_item = None
# Temporary module container, which is used to evaluate measurement modules.
# This container will be destroyed after Setup() has been called
self.__container = ModuleContainer(self.__modules_root, self.__logger)
self.__LoadModules(self.__cfg.GetSection("measurement"))
def __LoadModules(self, modcfg):
"Loads and imports all the configured modules"
for m in modcfg:
# hope to eventually have different kinds, but 'module' is the only one
# supported for now (jcw)
if m[1].lower() == 'module':
self.__container.LoadModule(m[0])
def GetProfile(self, with_load, run_parallel):
"Returns the appropriate MeasurementProfile object, based on the profile type"
for p in self.__measureprofiles:
mp = p.GetProfile()
if mp == (with_load, run_parallel):
return p
return None
def SetupModuleOptions(self, parser):
"Sets up all the measurement modules' parameters for the option parser"
self.__container.SetupModuleOptions(parser, self.__cfg)
def Setup(self, modparams):
"Loads all measurement modules and group them into different measurement profiles"
if not isinstance(modparams, dict):
raise TypeError("modparams attribute is not of a dictionary type")
modcfg = self.__cfg.GetSection("measurement")
for (modname, modtype) in modcfg:
if modtype.lower() == 'module': # Only 'module' will be supported (ds)
# Extract the measurement modules info
modinfo = self.__container.ModuleInfo(modname)
# Get the correct measurement profile container for this module
mp = self.GetProfile(modinfo["loads"], modinfo["parallel"])
if mp is None:
# If not found, create a new measurement profile
mp = MeasurementProfile(self.__cfg,
modinfo["loads"], modinfo["parallel"],
self.__modules_root, self.__logger)
self.__measureprofiles.append(mp)
# Export the module imported here and transfer it to the
# measurement profile
mp.ImportModule(self.__container.ExportModule(modname))
# Setup this imported module inside the appropriate measurement profile
self.__cfg.AppendConfig(modname, modparams)
mp.Setup(modname)
del self.__container
def MakeReport(self):
"Generates an XML report for all measurement profiles"
# Get the reports from all measurement modules in all measurement profiles
rep_n = libxml2.newNode("Measurements")
for mp in self.__measureprofiles:
mprep_n = mp.MakeReport()
if mprep_n:
rep_n.addChild(mprep_n)
return rep_n
def __iter__(self):
"Initiates an iteration loop for MeasurementProfile objects"
self.__iter_item = len(self.__measureprofiles)
return self
def next(self):
"""Internal Python iterating method, returns the next
MeasurementProfile object to be processed"""
if self.__iter_item == 0:
self.__iter_item = None
raise StopIteration
else:
self.__iter_item -= 1
return self.__measureprofiles[self.__iter_item]
|
The freewheel remover is designed for removal of single speed freewheels, which are broader than multi speed freewheels. The chain fits precisely to the freewheel teeth and thus enables effective work without slipping or damaging the sprockets. An added retaining spring is a special feature distinguishing this tool from similar products. |
#-*- coding: utf-8 -*-
'''
Created on 24 Aug 2010
@author: ivan
'''
from gi.repository import Gtk
import logging
from foobnix.fc.fc import FC
from foobnix.helpers.image import ImageBase
from foobnix.util.const import SITE_LOCALE, ICON_FOOBNIX
from foobnix.util.localization import foobnix_localization
from foobnix.gui.service.path_service import get_foobnix_resourse_path_by_name
foobnix_localization()
def responseToDialog(entry, dialog, response):
dialog.response(response)
def file_selection_dialog(title, current_folder=None):
chooser = Gtk.FileSelection(title)
chooser.set_icon_from_file(get_foobnix_resourse_path_by_name(ICON_FOOBNIX))
chooser.set_default_response(Gtk.ResponseType.OK)
chooser.set_select_multiple(True)
paths = None
if current_folder:
chooser.set_current_folder(current_folder)
response = chooser.run()
if response == Gtk.ResponseType.OK:
paths = chooser.get_selections()
elif response == Gtk.ResponseType.CANCEL:
logging.info('Closed, no files selected')
chooser.destroy()
return paths
def file_chooser_dialog(title, current_folder=None):
chooser = Gtk.FileChooserDialog(title, action=Gtk.FileChooserAction.OPEN, buttons=(_("Open"), Gtk.ResponseType.OK))
chooser.set_icon_from_file(get_foobnix_resourse_path_by_name(ICON_FOOBNIX))
chooser.set_default_response(Gtk.ResponseType.OK)
chooser.set_select_multiple(True)
paths = None
if current_folder:
chooser.set_current_folder(current_folder)
response = chooser.run()
if response == Gtk.ResponseType.OK:
paths = chooser.get_filenames()
elif response == Gtk.ResponseType.CANCEL:
logging.info('Closed, no files selected')
chooser.destroy()
return paths
def directory_chooser_dialog(title, current_folder=None):
chooser = Gtk.FileChooserDialog(title, action=Gtk.FileChooserAction.SELECT_FOLDER, buttons=(_("Choose"), Gtk.ResponseType.OK))
chooser.set_default_response(Gtk.ResponseType.OK)
chooser.set_select_multiple(True)
paths = None
if current_folder:
chooser.set_current_folder(current_folder)
response = chooser.run()
if response == Gtk.ResponseType.OK:
paths = chooser.get_filenames()
elif response == Gtk.ResponseType.CANCEL:
logging.info('Closed, no directory selected')
chooser.destroy()
return paths
def one_line_dialog(dialog_title, parent=None, entry_text=None, message_text1=None, message_text2=None):
dialog = Gtk.MessageDialog(
parent,
Gtk.DialogFlags.MODAL | Gtk.DialogFlags.DESTROY_WITH_PARENT,
Gtk.MessageType.INFO,
Gtk.ButtonsType.OK,
None)
dialog.set_icon_from_file(get_foobnix_resourse_path_by_name(ICON_FOOBNIX))
dialog.set_title(dialog_title)
if message_text1:
dialog.set_markup(message_text1)
if message_text2:
dialog.format_secondary_markup(message_text2)
entry = Gtk.Entry()
'''set last widget in action area as default widget (button OK)'''
dialog.set_default_response(Gtk.ResponseType.OK)
'''activate default widget after Enter pressed in entry'''
entry.set_activates_default(True)
if entry_text:
entry.set_text(entry_text)
dialog.vbox.pack_start(entry, True, True, 0)
dialog.show_all()
dialog.run()
text = entry.get_text()
dialog.destroy()
return text if text else None
def two_line_dialog(dialog_title, parent=None, message_text1=None,
message_text2=None, entry_text1="", entry_text2=""):
dialog = Gtk.MessageDialog(
parent,
Gtk.DialogFlags.MODAL | Gtk.DialogFlags.DESTROY_WITH_PARENT,
Gtk.MessageType.QUESTION,
Gtk.ButtonsType.OK,
None)
dialog.set_icon_from_file(get_foobnix_resourse_path_by_name(ICON_FOOBNIX))
dialog.set_title(dialog_title)
if message_text1:
dialog.set_markup(message_text1)
if message_text2:
dialog.format_secondary_markup(message_text2)
login_entry = Gtk.Entry()
if entry_text1:
login_entry.set_text(entry_text1)
login_entry.show()
password_entry = Gtk.Entry()
if entry_text2:
password_entry.set_text(entry_text2)
password_entry.show()
hbox = Gtk.Box.new(Gtk.Orientation.HORIZONTAL, 0)
hbox.pack_start(login_entry, False, False, 0)
hbox.pack_start(password_entry, False, False, 0)
dialog.vbox.pack_start(hbox, True, True, 0)
dialog.show_all()
'''set last widget in action area as default widget (button OK)'''
dialog.set_default_response(Gtk.ResponseType.OK)
'''activate default widget after Enter pressed in entry'''
login_entry.set_activates_default(True)
password_entry.set_activates_default(True)
dialog.run()
login_text = login_entry.get_text()
password_text = password_entry.get_text()
dialog.destroy()
return [login_text, password_text] if (login_text and password_text) else [None,None]
def info_dialog(title, message, parent=None):
dialog = Gtk.MessageDialog(
parent,
Gtk.DialogFlags.MODAL | Gtk.DialogFlags.DESTROY_WITH_PARENT,
Gtk.MessageType.INFO,
Gtk.ButtonsType.OK,
None)
dialog.set_icon_from_file(get_foobnix_resourse_path_by_name(ICON_FOOBNIX))
dialog.set_title(title)
dialog.set_markup(title)
dialog.format_secondary_markup(message)
dialog.show_all()
dialog.run()
dialog.destroy()
def info_dialog_with_link(title, version, link):
dialog = Gtk.MessageDialog(
None,
Gtk.DialogFlags.MODAL | Gtk.DialogFlags.DESTROY_WITH_PARENT,
Gtk.MessageType.INFO,
Gtk.ButtonsType.OK,
None)
dialog.set_icon_from_file(get_foobnix_resourse_path_by_name(ICON_FOOBNIX))
dialog.set_title(title)
dialog.set_markup(title)
dialog.format_secondary_markup("<b>" + version + "</b>")
link = Gtk.LinkButton.new_with_label(link, link)
link.show()
dialog.vbox.pack_end(link, True, True, 0)
dialog.show_all()
dialog.run()
dialog.destroy()
def info_dialog_with_link_and_donate(version):
dialog = Gtk.MessageDialog(
None,
Gtk.DialogFlags.MODAL | Gtk.DialogFlags.DESTROY_WITH_PARENT,
Gtk.MessageType.INFO,
Gtk.ButtonsType.OK,
None)
dialog.set_icon_from_file(get_foobnix_resourse_path_by_name(ICON_FOOBNIX))
dialog.set_title(_("New foobnix release avaliable"))
dialog.set_markup(_("New foobnix release avaliable"))
dialog.format_secondary_markup("<b>" + version + "</b>")
card = Gtk.LinkButton.new_with_label("http://foobnix.com/%s/download.html"%SITE_LOCALE, _("Download and Donate"))
#terminal = Gtk.LinkButton("http://www.foobnix.com/donate/eng#terminal", _("Download and Donate by Webmoney or Payment Terminal"))
# link = Gtk.LinkButton("http://www.foobnix.com/support?lang=%s"%SITE_LOCALE, _("Download"))
frame = Gtk.Frame(label="Please donate and download")
vbox = Gtk.Box.new(Gtk.Orientation.VERTICAL, 0)
vbox.set_homogeneous(True)
vbox.pack_start(card, True, True, 0)
#vbox.pack_start(terminal, True, True, 0)
#vbox.pack_start(link, True, True, 0)
frame.add(vbox)
image = ImageBase("images/foobnix-slogan.jpg")
dialog.vbox.pack_start(image, True, True, 0)
dialog.vbox.pack_start(frame, True, True, 0)
dialog.vbox.pack_start(Gtk.Label.new(_("We hope you like the player. We will make it even better.")), True, True, 0)
version_check = Gtk.CheckButton.new_with_label(_("Check for new foobnix release on start"))
version_check.set_active(FC().check_new_version)
dialog.vbox.pack_start(version_check, True, True, 0)
dialog.show_all()
dialog.run()
FC().check_new_version = version_check.get_active()
FC().save()
dialog.destroy()
def show_entry_dialog(title, description):
dialog = Gtk.MessageDialog(
None,
Gtk.DialogFlags.MODAL | Gtk.DialogFlags.DESTROY_WITH_PARENT,
Gtk.MessageType.QUESTION,
Gtk.ButtonsType.OK,
None)
dialog.set_icon_from_file(get_foobnix_resourse_path_by_name(ICON_FOOBNIX))
dialog.set_markup(title)
entry = Gtk.Entry()
entry.connect("activate", responseToDialog, dialog, Gtk.ResponseType.OK)
hbox = Gtk.Box.new(Gtk.Orientation.HORIZONTAL, 0)
hbox.pack_start(Gtk.Label.new("Value:"), False, 5, 5)
hbox.pack_end(entry, False, False, 0)
dialog.format_secondary_markup(description)
dialog.vbox.pack_end(hbox, True, True, 0)
dialog.show_all()
dialog.run()
text = entry.get_text()
dialog.destroy()
return text
def show_login_password_error_dialog(title, description, login, password):
dialog = Gtk.MessageDialog(
None,
Gtk.DialogFlags.MODAL | Gtk.DialogFlags.DESTROY_WITH_PARENT,
Gtk.MessageType.ERROR,
Gtk.ButtonsType.OK,
title)
dialog.set_icon_from_file(get_foobnix_resourse_path_by_name(ICON_FOOBNIX))
dialog.set_markup(str(title))
dialog.format_secondary_markup(description)
login_entry = Gtk.Entry()
login_entry.set_text(login)
login_entry.show()
password_entry = Gtk.Entry()
password_entry.set_text(password)
password_entry.set_visibility(False)
password_entry.set_invisible_char("*")
password_entry.show()
vbox = Gtk.Box.new(Gtk.Orientation.VERTICAL, 0)
vbox.pack_start(login_entry, False, False, 0)
vbox.pack_start(password_entry, False, False, 0)
dialog.vbox.pack_start(vbox, True, True, 0)
dialog.show_all()
dialog.run()
login_text = login_entry.get_text()
password_text = password_entry.get_text()
dialog.destroy()
return [login_text, password_text]
def file_saving_dialog(title, current_folder=None):
chooser = Gtk.FileChooserDialog(title, action=Gtk.FileChooserAction.SAVE, buttons=("document-save", Gtk.ResponseType.OK))
chooser.set_icon_from_file(get_foobnix_resourse_path_by_name(ICON_FOOBNIX))
chooser.set_default_response(Gtk.ResponseType.OK)
chooser.set_select_multiple(False)
if current_folder:
chooser.set_current_folder(current_folder)
response = chooser.run()
if response == Gtk.ResponseType.OK:
paths = chooser.get_filenames()
elif response == Gtk.ResponseType.CANCEL:
logging.info('Closed, no files selected')
chooser.destroy()
class FileSavingDialog(Gtk.FileChooserDialog):
def __init__(self, title, func, args = None, current_folder=None, current_name=None):
Gtk.FileChooserDialog.__init__(self, title, action=Gtk.FileChooserAction.SAVE, buttons=("document-save", Gtk.ResponseType.OK))
self.set_default_response(Gtk.ResponseType.OK)
self.set_select_multiple(False)
self.set_do_overwrite_confirmation(True)
self.set_icon_from_file(get_foobnix_resourse_path_by_name(ICON_FOOBNIX))
if current_folder:
self.set_current_folder(current_folder)
if current_name:
self.set_current_name(current_name)
response = self.run()
if response == Gtk.ResponseType.OK:
filename = self.get_filename()
folder = self.get_current_folder()
if func:
try:
if args: func(filename, folder, args)
else: func(filename, folder)
except IOError as e:
logging.error(e)
elif response == Gtk.ResponseType.CANCEL:
logging.info('Closed, no files selected')
self.destroy()
if __name__ == '__main__':
info_dialog_with_link_and_donate("foobnix 0.2.1-8")
Gtk.main()
|
"""
Author: Kevin Lin, [email protected]
Modified version of Logger used by Skynet Senior Design team at Rice University.
"""
import time
class LogLevel:
"""
Mapping of log level enums to names.
"""
DEBUGV = {'name': 'DEBUGV', 'value' : 4}
DEBUG = {'name': 'DEBUG', 'value': 3}
INFO = {'name': 'INFO', 'value': 2}
WARN = {'name': 'WARN', 'value': 1}
ERROR = {'name': 'ERROR', 'value': 0}
class Logger:
def __init__(self, name, level=LogLevel.DEBUG, outfile=None):
"""
Initializes a logger.
:param name: Name to attach to every log entry generated with this logger.
:param level: Log level threshold; messages more verbose than this are suppressed.
:param outfile: Optional path to a file where log entries are also appended.
"""
self.name = name
self.level = level
if outfile is not None:
self.fout = open(outfile, mode='a')
else:
self.fout = None
def debugv(self, message):
"""
Log a debug message.
:param message: Message to log.
"""
return self._print_log(LogLevel.DEBUGV, message)
def debug(self, message):
"""
Log a debug message.
:param message: Message to log.
"""
return self._print_log(LogLevel.DEBUG, message)
def info(self, message):
"""
Log an info message.
:param message: Message to log.
"""
return self._print_log(LogLevel.INFO, message)
def warn(self, message):
"""
Log a warning message.
:param message: Message to log.
"""
return self._print_log(LogLevel.WARN, message)
def error(self, message):
"""
Log an error message.
:param message: Message to log.
"""
return self._print_log(LogLevel.ERROR, message)
def _print_log(self, level, message):
"""
Print a log entry to standard output, with the timestamp, log level, and context name
automatically prefixed.
:param level: Target log level.
:param message: Message to log.
"""
# Don't print if we are suppressing the message:
if self.level['value'] < level['value']:
return
hms = time.strftime('%H:%M:%S')
self._print_stdout(
'[{hms}] [{name}] [{level}] {message}'.format(
hms=hms,
name=self.name,
level=level['name'],
message=message,
)
)
def _print_stdout(self, line):
"""
Print a line to standard output.
:param line: Line to print.
"""
print(line)
if self.fout is not None:
self.fout.write(line + '\n')
self.fout.flush()
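# A minimal usage sketch: create a console-only Logger and emit messages at
# several levels; entries more verbose than the configured level are dropped.
if __name__ == '__main__':
    log = Logger("example", level=LogLevel.DEBUG)
    log.debug("starting up")            # printed (DEBUG <= DEBUG)
    log.info("processing item 42")      # printed
    log.debugv("verbose trace output")  # suppressed (DEBUGV > DEBUG)
    log.error("something went wrong")   # printed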
|
Nowak, Marta; Gram, A; Boos, Alois; Aslan, Selim; Ay, Serhan S; Önyay, Firdevs; Kowalewski, Mariusz P (2017). Functional implications of the utero-placental relaxin (RLN) system in the dog throughout pregnancy and at term. Reproduction, 154(4):415-431.
Graubner, Felix R; Gram, Aykut; Kautz, Ewa; Bauersachs, Stefan; Aslan, Selim; Agaoglu, Ali R; Boos, Alois; Kowalewski, Mariusz P (2017). Uterine responses to early pre-attachment embryos in the domestic dog and comparisons with other domestic animal species. Biology of Reproduction, 97(2):197-216.
Kaya, Semra; Kaçar, Cihan; Polat, Bülent; Çolak, Armağan; Kaya, Duygu; Gürcan, I S; Bollwein, Heiner; Aslan, Selim (2016). Association of luteal blood flow with follicular size, serum estrogen and progesterone concentrations, and the inducibility of luteolysis by PGF2α in dairy cows. Theriogenology, 87:167-172.
Küçükaslan, İbrahim; Kaya, Duygu; Wollgarten, Bernhard; Aslan, Selim; Ay, Serhan Serhat; Findik, Murat; Kaçar, Cihan; Bollwein, Heiner (2015). Investigation of the effects of acupuncture stimulation on the size and blood flow of corpus luteum and progesterone levels in dairy cows. Kafkas Universitesi Veteriner Fakultesi Dergisi, 21(6):877-883.
Kautz, Ewa; Gram, Aykut; Aslan, Selim; Ay, Serhan Serhat; Selcuk, Murat; Kanca, Halit; Koldas, Ece; Akal, Eser; Karakas, Kubra; Findik, Murat; Boos, Alois; Kowalewski, Mariusz Pawel (2014). Expression of genes involved in the embryo-maternal interaction in the early pregnant canine uterus. Reproduction, 147(5):703-717.
Aslan, Selim; Arslanbas, D; Beindorff, N; Bollwein, Heiner (2011). Effects of induction of ovulation with GnRH or hCG on follicular and luteal blood flow in Holstein-Friesian heifers. Reproduction in Domestic Animals, 46(5):781-786.
|
#!/usr/bin/env python
import os,sys
import numpy as np
import subprocess
import multiprocessing
from functools import partial
from astropy.io import fits
from astropy.table import Table,vstack,hstack,join
from astropy.stats import sigma_clip
from astropy.wcs import InconsistentAxisTypesError
from bokpipe import bokphot,bokpl,bokproc,bokutil,bokastrom
from bokpipe.bokdm import SimpleFileNameMap
import bokrmpipe,bokrmphot
import cfhtrm
import idmrmphot
nom_pixscl = 0.18555
cfhtrm_aperRad = np.array([0.75,1.5,2.275,3.4,4.55,6.67,10.]) / nom_pixscl
def get_phot_file(photCat,inFile):
if inFile is None:
return '{0}_{1}.fits'.format('cfhtrmphot',photCat.name)
else:
return inFile
class CfhtConfig(object):
name = 'cfht'
nCCD = 40
nAper = 7
nAmp = 80
ccd0 = 0
zpAperNum = -2
zpMinSnr = 10.
zpMinNobs = 10
zpMaxSeeing = 1.7/nom_pixscl
zpMaxChiVal = 5.
zpMagRange = {'g':(17.0,20.5),'i':(17.0,21.0)}
zpFitKwargs = {'minContig':1}
apCorrMaxRmsFrac = 0.5
apCorrMinSnr = 20.
apCorrMinNstar = 20
# XXX need to understand why cfht data has so many outliers
maxFrameOutlierFrac = 0.99
maxFrameChiSqrNu = 10.
#colorXform = idmrmphot.ColorTransform('cfht','sdss')
# although the color terms appear consistent between <2009 and 2014-15,
# combining them into a single calibration results in ~10 mmag offsets
# in the absolute calibration with SDSS. Splitting them into separate
# calibrations improves this.
def __init__(self):
_cfgdir = os.path.join(os.environ['BOKRMDIR'],'..') # XXX
ctab = Table.read(os.path.join(_cfgdir,'colorterms.fits'))
ii = np.where( (ctab['photsys']=='cfht') &
(ctab['refsys']=='sdss') &
(ctab['filter']=='g') )[0]
dec1_2013 = 56627
i = np.searchsorted(ctab['mjdmin'][ii],dec1_2013)
ctab['mjdmax'][ii[i-1]] = dec1_2013
ctab['epoch'][ii[i:]] += 1
ctab.insert_row(ii[i],('cfht','sdss','g',1,
dec1_2013,ctab['mjdmin'][ii[i]],
ctab['cterms'][ii[i-1]]))
self.colorXform = idmrmphot.ColorTransform('cfht','sdss',
inTab=ctab)
def _cat_worker(dataMap,imFile,**kwargs):
clobber = kwargs.pop('redo',False)
verbose = kwargs.pop('verbose',0)
bokutil.mplog('extracting catalogs for '+imFile)
imgFile = dataMap('img')(imFile)
psfFile = dataMap('psf')(imFile)
aheadFile = imgFile.replace('.fits.fz','.ahead')
tmpFile = imgFile.replace('.fz','')
catFile = dataMap('wcscat')(imFile)
print '-->',imgFile
kwargs.setdefault('SEEING_FWHM','1.0')
kwargs.setdefault('PIXEL_SCALE','0.18555')
kwargs.setdefault('SATUR_KEY','SATURATE')
kwargs.setdefault('GAIN_KEY','GAIN')
if not os.path.exists(aheadFile):
print aheadFile,' not found!'
return
if not os.path.exists(imgFile):
print imgFile,' not found!'
return
if True:
# a few widely spaced ccds
pix = np.array([ fits.getdata(imgFile,ccdNum)[::8]
for ccdNum in [10,16,21,33] ])
sky = sigma_clip(pix).mean()
if verbose > 0:
print 'sky level is %.2f' % sky
kwargs.setdefault('BACK_TYPE','MANUAL')
kwargs.setdefault('BACK_VALUE','%.1f'%sky)
if not os.path.exists(catFile):
if not os.path.exists(tmpFile):
subprocess.call(['funpack',imgFile])
bokphot.sextract(tmpFile,catFile,full=False,
clobber=clobber,verbose=verbose,**kwargs)
if not os.path.exists(psfFile):
if not os.path.exists(tmpFile):
subprocess.call(['funpack',imgFile])
bokphot.run_psfex(catFile,psfFile,instrument='cfhtmegacam',
clobber=clobber,verbose=verbose,**kwargs)
if not os.path.exists(aheadFile):
bokastrom.scamp_solve(tmpFile,catFile,filt='r',
clobber=clobber,verbose=verbose)
if not os.path.exists(aheadFile):
print imgFile,' WCS failed!'
return
if False:
os.remove(catFile)
catFile = dataMap('cat')(imFile)
# XXX while using these as primary
apers = ','.join(['%.2f'%a for a in cfhtrm_aperRad])
kwargs.setdefault('DETECT_MINAREA','10.0')
kwargs.setdefault('DETECT_THRESH','2.0')
kwargs.setdefault('ANALYSIS_THRESH','2.0')
kwargs.setdefault('PHOT_APERTURES',apers)
kwargs.setdefault('PARAMETERS_NAME',
os.path.join(bokphot.configDir,'cfht_catalog_tmp.par'))
#kwargs.setdefault('BACK_SIZE','64,128')
#kwargs.setdefault('BACK_FILTERSIZE','1')
kwargs.setdefault('BACKPHOTO_TYPE','LOCAL')
#kwargs.setdefault('CHECKIMAGE_TYPE','BACKGROUND')
#kwargs.setdefault('CHECKIMAGE_NAME',imgFile.replace('.fits.fz','.back.fits'))
if not os.path.exists(catFile):
if not os.path.exists(tmpFile):
subprocess.call(['funpack',imgFile])
bokphot.sextract(tmpFile,catFile,psfFile,full=True,
clobber=clobber,verbose=verbose,**kwargs)
if os.path.exists(tmpFile):
os.remove(tmpFile)
def _exc_cat_worker(*args,**kwargs):
try:
_cat_worker(*args,**kwargs)
except:
pass
def make_sextractor_catalogs(dataMap,procMap,**kwargs):
files = dataMap.getFiles()
p_cat_worker = partial(_exc_cat_worker,dataMap,**kwargs)
status = procMap(p_cat_worker,files)
def get_phot_fn(dataMap,imFile,catPfx):
fmap = SimpleFileNameMap(None,cfhtrm.cfhtCatDir,
'.'.join(['',catPfx,'phot']))
catFile = dataMap('cat')(imFile)
return fmap(imFile)
def _phot_worker(dataMap,photCat,inp,matchRad=2.0,redo=False,verbose=0):
imFile,frame = inp
refCat = photCat.refCat
catFile = dataMap('cat')(imFile)
aperFile = get_phot_fn(dataMap,imFile,photCat.name)
if verbose:
print '--> ',imFile
if os.path.exists(aperFile) and not redo:
return
tabs = []
try:
f = fits.open(catFile)
except IOError:
print catFile,' not found!'
return
for ccdNum,hdu in enumerate(f[1:]):
c = hdu.data
m1,m2,sep = idmrmphot.srcor(refCat['ra'],refCat['dec'],
c['ALPHA_J2000'],c['DELTA_J2000'],matchRad)
if len(m1)==0:
continue
expTime = dataMap.obsDb['expTime'][frame]
t = Table()
t['x'] = c['X_IMAGE'][m2]
t['y'] = c['Y_IMAGE'][m2]
t['objId'] = refCat['objId'][m1]
t['counts'] = c['FLUX_APER'][m2] / expTime
t['countsErr'] = c['FLUXERR_APER'][m2] / expTime
t['flags'] = np.tile(c['FLAGS'][m2],(len(cfhtrm_aperRad),1)).T
t['psfCounts'] = c['FLUX_PSF'][m2] / expTime
t['psfCountsErr'] = c['FLUXERR_PSF'][m2] / expTime
t['ccdNum'] = ccdNum
t['frameIndex'] = dataMap.obsDb['frameIndex'][frame]
t['__number'] = c['NUMBER'][m2]
t['__nmatch'] = len(m1)
t['__sep'] = sep
tabs.append(t)
f.close()
if len(tabs)==0:
if verbose:
print 'no objects!'
return
vstack(tabs).write(aperFile,overwrite=True)
def make_phot_catalogs(dataMap,procMap,photCat,**kwargs):
files = zip(*dataMap.getFiles(with_frames=True))
p_phot_worker = partial(_phot_worker,dataMap,photCat,**kwargs)
status = procMap(p_phot_worker,files)
def load_raw_cfht_aperphot(dataMap,photCat):
photTabs = []
for imFile in dataMap.getFiles():
aperFile = get_phot_fn(dataMap,imFile,photCat.name)
try:
photTabs.append(Table.read(aperFile))
print "loaded catalog {}".format(aperFile)
except IOError:
print "WARNING: catalog {} missing, skipped!".format(aperFile)
return vstack(photTabs)
def calc_zeropoints(dataMap,refCat,cfhtCfg,debug=False):
#
fields = ['frameIndex','utDate','filter','mjdStart','mjdMid','airmass']
good = dataMap.obsDb['good']
frameList = dataMap.obsDb[fields][good]
frameList.sort('frameIndex')
# zero point trends are fit over a season
if 'season' not in frameList.colnames:
frameList['season'] = idmrmphot.get_season(frameList['mjdStart'])
# select the zeropoint aperture
cfhtPhot = load_raw_cfht_aperphot(dataMap,refCat)
# XXX temporary hack
cfhtPhot['nMasked'] = np.int32(0)
cfhtPhot['peakCounts'] = np.float32(1)
phot = idmrmphot.extract_aperture(cfhtPhot,cfhtCfg.zpAperNum)
# calculate zeropoints and aperture corrections
# XXX I guess would have to split i band out eventually? it won't have
# same epochs
epochs = cfhtCfg.colorXform.get_epoch('g',frameList['mjdStart'])
outputs = []
for epoch in np.unique(epochs):
ii = np.where(epochs==epoch)[0]
jj = np.where(np.in1d(phot['frameIndex'],
frameList['frameIndex'][ii]))[0]
zpdat = idmrmphot.iter_selfcal(phot[jj],frameList[ii],refCat,cfhtCfg,
mode='focalplane')
outputs.append(zpdat)
frameList = vstack([ zpdat.zpts for zpdat in outputs ])
frameList.sort('frameIndex')
frameList = idmrmphot.calc_apercorrs(cfhtPhot,frameList,cfhtCfg,
mode='focalplane')
#
if True:
zptrend = vstack([ zpdat.zptrend for zpdat in outputs ])
zptrend.write('cfht_zptrend.dat',overwrite=True,format='ascii')
if debug:
zpdat.sePhot.write('zp_sephot.fits',overwrite=True)
zpdat.coaddPhot.write('zp_coaddphot.fits',overwrite=True)
return frameList
def calibrate_lightcurves(dataMap,photCat,zpFile,cfhtCfg):
zpTab = Table.read(zpFile)
if False:
# these are hacks to fill the zeropoints table for CCDs with no
# measurements... this may be necessary as sometimes too few reference
# stars will land on a given CCD. but need to understand it better.
for row in zpTab:
iszero = row['aperZp'] == 0
if np.sum(~iszero) > 10:
row['aperZp'][iszero] = np.median(row['aperZp'][~iszero])
row['aperNstar'][iszero] = 999
for j in range(7):
iszero = row['aperCorr'][:,j] == 0
if np.sum(~iszero) > 5:
row['aperCorr'][iszero,j] = np.median(row['aperCorr'][~iszero,j])
phot = load_raw_cfht_aperphot(dataMap,photCat)
phot = idmrmphot.calibrate_lightcurves(phot,zpTab,cfhtCfg,
zpmode='focalplane',
apcmode='focalplane')
return phot
def check_status(dataMap):
from collections import defaultdict
from bokpipe.bokastrom import read_headers
missing = defaultdict(list)
incomplete = defaultdict(list)
files = dataMap.getFiles()
for i,f in enumerate(files):
imgFile = dataMap('img')(f)
if not os.path.exists(imgFile):
missing['img'].append(f)
continue
nCCD = fits.getheader(imgFile,0)['NEXTEND']
aheadFile = imgFile.replace('.fits.fz','.ahead')
if not os.path.exists(aheadFile):
missing['ahead'].append(f)
else:
hdrs = read_headers(aheadFile)
if len(hdrs) < nCCD:
incomplete['ahead'].append(f)
for k in ['wcscat','psf','cat']:
outFile = dataMap(k)(f)
if not os.path.exists(outFile):
missing[k].append(f)
else:
try:
ff = fits.open(outFile)
except IOError:
incomplete[k].append(f)
continue
n = len(ff)-1
if k == 'wcscat':
n //= 2 # ldac
if n < nCCD:
incomplete[k].append(f)
sys.stdout.write("\r%d/%d" % (i+1,len(files)))
sys.stdout.flush()
print
print 'total images: ',len(files)
for k in ['img','ahead','wcscat','psf','cat']:
n = len(files) - len(missing[k]) - len(incomplete[k])
print '%10s %5d %5d %5d' % (k,n,len(missing[k]),len(incomplete[k]))
d = { f for l in missing.values() for f in l }
if len(d)>0:
logfile = open('missing.log','w')
for f in d:
logfile.write(f+'\n')
logfile.close()
d = { f for l in incomplete.values() for f in l }
if len(d)>0:
logfile = open('incomplete.log','w')
for f in d:
logfile.write(f+'\n')
logfile.close()
def load_phot(phot,photCat,frameList,lctable,aper,season=None,photo=False):
if phot is None:
photFile = get_phot_file(photCat,args.lctable)
print 'loaded lightcurve catalog {}'.format(photFile)
phot = Table.read(photFile)
apPhot = idmrmphot.extract_aperture(phot,args.aper,lightcurve=True)
if args.photo:
if frameList is None:
print 'loading zeropoints table {0}'.format(args.zptable)
frameList = Table.read(args.zptable)
photoFrames = frameList['frameIndex'][frameList['isPhoto']]
nbefore = len(apPhot)
apPhot = apPhot[np.in1d(apPhot['frameIndex'],photoFrames)]
print 'restricting to {0} photo frames yields {1}/{2}'.format(
len(photoFrames),nbefore,len(apPhot))
apPhot['season'] = idmrmphot.get_season(apPhot['mjd'])
if season is None:
# there's too little 2009 data for useful statistics
apPhot = apPhot[apPhot['season']!='2009']
else:
apPhot = apPhot[apPhot['season']==season]
return apPhot
if __name__=='__main__':
import sys
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--catalogs',action='store_true',
help='make source extractor catalogs and PSF models')
parser.add_argument('--dophot',action='store_true',
help='do photometry on images')
parser.add_argument('--zeropoint',action='store_true',
help='do zero point calculation')
parser.add_argument('--lightcurves',action='store_true',
help='construct lightcurves')
parser.add_argument('--aggregate',action='store_true',
help='construct aggregate photometry')
parser.add_argument('--binnedstats',action='store_true',
help='compute phot stats in mag bins')
parser.add_argument('--status',action='store_true',
help='check processing status')
parser.add_argument('--catalog',type=str,default='sdssrm',
help='reference catalog ([sdssrm]|sdss|cfht)')
parser.add_argument('-p','--processes',type=int,default=1,
help='number of processes to use [default=single]')
parser.add_argument('-R','--redo',action='store_true',
help='redo (overwrite existing files)')
parser.add_argument('-u','--utdate',type=str,default=None,
help='UT date(s) to process [default=all]')
parser.add_argument('--lctable',type=str,
help='lightcurve table')
parser.add_argument('--season',type=str,
help='observing season')
parser.add_argument('--aper',type=int,default=-2,
help='index of aperture to select [-2]')
parser.add_argument('--zptable',type=str,
default='config/CFHTRMFrameList.fits.gz',
help='zeropoints table')
parser.add_argument('--outfile',type=str,default='',
help='output file')
parser.add_argument('--photo',action='store_true',
help='use only photometric frames')
parser.add_argument('--catdir',type=str,
help='directory containing photometry catalogs')
parser.add_argument('-v','--verbose',action='count',
help='increase output verbosity')
args = parser.parse_args()
#
if args.processes > 1:
pool = multiprocessing.Pool(args.processes)
procMap = pool.map
else:
procMap = map
dataMap = cfhtrm.CfhtDataMap()
photCat = idmrmphot.load_target_catalog(args.catalog)
timerLog = bokutil.TimerLog()
kwargs = dict(redo=args.redo,verbose=args.verbose)
cfhtCfg = CfhtConfig()
phot = None
if args.utdate:
utDate = args.utdate.split(',')
dataMap.setUtDate(utDate)
if args.catalogs:
make_sextractor_catalogs(dataMap,procMap,**kwargs)
timerLog('sextractor catalogs')
if args.dophot:
make_phot_catalogs(dataMap,procMap,photCat,**kwargs)
timerLog('photometry catalogs')
if args.zeropoint:
zps = calc_zeropoints(dataMap,photCat,cfhtCfg,debug=True)
zps.write(args.zptable,overwrite=True)
timerLog('zeropoints')
if args.lightcurves:
phot = calibrate_lightcurves(dataMap,photCat,args.zptable,cfhtCfg)
photFile = get_phot_file(photCat,args.lctable)
phot.write(photFile,overwrite=True)
timerLog('lightcurves')
if args.aggregate:
# which = 'nightly' if args.nightly else 'all'
frameList = Table.read(args.zptable)
apPhot = load_phot(phot,photCat,frameList,
args.lctable,args.aper,args.season)
apPhot = apPhot.group_by(['season','filter','objId'])
objPhot = idmrmphot.clipped_group_mean_rms(apPhot['aperMag',])
aggPhot = hstack([apPhot.groups.keys,objPhot])
outfile = args.outfile if args.outfile \
else 'meanphot_cfht_{}.fits'.format(args.season)
aggPhot.write(outfile)
timerLog('aggregate phot')
if args.binnedstats:
frameList = Table.read(args.zptable)
apPhot = load_phot(phot,photCat,frameList,args.lctable,args.aper)
bs = idmrmphot.get_binned_stats(apPhot,photCat.refCat,cfhtCfg,
binEdges=np.arange(17.5,20.11,0.2))
outfile = args.outfile if args.outfile else 'phot_stats_cfht.fits'
bs.write(outfile,overwrite=True)
timerLog('binned stats')
if args.status:
check_status(dataMap)
timerLog.dump()
if args.processes > 1:
pool.close()
|
Many years ago, 12-year-old Ryo played the role of dutiful son, learning to ply his future trade as a pottery maker like his father. However, when his small village is raided by brigands and a single stranger steps forward and ends the attack, Ryo changes his mind and wants to become a warrior like the stranger.
After obtaining his father’s blessing and waiting for his 13th birthday, Ryo heads out to find the stranger and train as he did so he can protect those who cannot protect themselves. What he encounters are challenges that he never expected, but is more than willing to face.
In Apocalypse 5, author Stacey Rourke introduces readers to a team of five early teenaged warriors who train mercilessly to protect the people of the starship AT-1-NS from any and all threats. They run virtual reality simulations nearly daily, but, despite being virtual, these trainings can result in death to the team members if they fail, at which time a new member will be promoted to the A5’s elite ranks.
Do you remember the pirate Hondo Ohnaka from either the Star Wars: Clone Wars or Star Wars: Rebels animated series? You don’t? That’s a little shocking. Sure, this Weequay was a background character, but he managed to insinuate himself into some pretty unforgettable sequences, both for better and for worse. I was never much of a fan myself. So imagine my dismay when I tuned into Star Wars: Pirate’s Price only to discover he was the front-and-center character driving the plot?
Yeah, I wasn’t happy. But then I started listening and, I’m going to be honest with you, hearing Jim Cummings bring Hondo to life through a mostly first-person accounting of his multiple times aboard the Millennium Falcon made me come around a bit. I actually kinda like the guy now.
Lando Calrissian has always been one of my favorite characters in the Star Wars universe. He’s a gambler, a swindler, a pilot, a warrior, a general, and he’s probably the only person that can make Han Solo blush. From that first moment on Bespin in The Empire Strikes Back when Billy Dee Williams faced off with Harrison Ford, that was it for me. And I have not been disappointed with his portrayals in any subsequent movie or book… and, yes, Donald Glover nailed it.
So, given the opportunity to review Star Wars: Lando’s Luck, a book dedicated to Lando, I jumped at the chance.
A few weeks ago, I reviewed the audiobook for the junior novel for Solo: A Star Wars Story, the first of two novelizations of the film released in conjunction with the Blu-ray Edition. I apologize that it took nearly three weeks to get around to this Expanded Edition, but, yanno, life and other Geeks of Doom reviews.
Much like I explained in the review of the Joe Schreiber junior novelization, Mur Lafferty‘s novelization is just that, a scene-by-scene retelling of the movie. The Junior Novel promised and delivered on a handful of deleted scenes that didn’t make the theatrical cut; the Expanded Edition promises and delivers on that and a whole ton more.
Resistant is set in a not-so-distant future in which a large chunk of the world’s population has been decimated by disease, the spread of which was fueled by our long history of over-dependence on antibiotics, the dodgy business dealings of the pharmaceutical industry, and climate change. In the ensuing years, the remainder of the scientific community has been working on discovering a cure, to no avail. However, this effort is tainted. While a government-driven initiative, employing some of the world’s top scientists working against their will, will go to any length to find a cure so it can be privatized and sold to the highest bidder, a counteractive resistance is doing the same but with the aim of saving all of humanity.
It’s been about 125 years so I’m sincerely hoping that, by now, everyone is at least tangentially familiar with the legend of Count Dracula as published by Bram Stoker. Yes? Good. No? For shame!!
Now consider if you were told that the story of the world’s most infamous and influential vampire was based on true events in the life of its author.
Well, I’m not about to tell you that this is the case. However, Bram’s great-grand-nephew Dacre Stoker, along with co-author J.D. Barker, is here to do just that with Dracul, a prequel to the classic Dracula.
I’d like to preface this by saying that the next week or so is going to be interesting. I just finished the audiobook of Solo: A Star Wars Story Junior Novel (which I will review here in short order) and then I’m embarking on Solo: A Star Wars Story Expanded Edition. Both are novelizations of the same movie, but with different authors and different voice talents on each. I want to liken it to watching the theatrical and director’s cuts of a film, but I don’t think it’s going to be like that at all, to be honest, because the interpretations by the authors should be slightly different and the lead voice will have a different sound entirely. Bear with me, okay?
I began with the Junior Novel audiobook because I knew I could tear through it pretty quickly and because I’m already familiar with author Joe Schreiber‘s work in the Star Wars Universe having read both Death Troopers and Maul: Lockdown. (I’m pretty sure Death Troopers is the final book I purchased before Borders Books & Music shuttered forever).
There's a risk inherent in reading the first published novel by an otherwise new author. There's additional risk when that first published novel is announced as part of a planned trilogy before the author has even had the opportunity to test the waters of publishing success. I admit I initially felt this way about reading Rebecca Schaeffer's Not Even Bones. But the official synopsis won me over and made me overlook my trepidations and dive in.
Ever since the 1991 release of Star Wars: Heir to the Empire, I have maintained that Grand Admiral Thrawn could be the best bad guy to ever grace the Star Wars universe. Even badder than Darth Vader. After author Timothy Zahn closed out the Thrawn trilogy of books – which also includes Dark Force Rising and The Last Command – I have been chomping at the bit for more Thrawn love.
Not only did I get no additional love, but I had what I'd grown to love practically ripped out from under me as Thrawn and all the other characters introduced in the Expanded Universe of books and comics were declared non-canon by Lucasfilm. I was heartbroken.
# coding=utf-8
# Copyright 2014 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
unicode_literals, with_statement)
import functools
import os
from hashlib import sha1
from six import string_types
from pants.base.address import Addresses, SyntheticAddress
from pants.base.build_environment import get_buildroot
from pants.base.build_manual import manual
from pants.base.deprecated import deprecated
from pants.base.exceptions import TargetDefinitionException
from pants.base.fingerprint_strategy import DefaultFingerprintStrategy
from pants.base.hash_utils import hash_all
from pants.base.payload import Payload
from pants.base.payload_field import DeferredSourcesField, SourcesField
from pants.base.source_root import SourceRoot
from pants.base.target_addressable import TargetAddressable
from pants.base.validation import assert_list
class AbstractTarget(object):
_deprecated_predicate = functools.partial(deprecated, '0.0.30')
@property
def has_resources(self):
"""Returns True if the target has an associated set of Resources."""
return hasattr(self, 'resources') and self.resources
@property
def is_exported(self):
"""Returns True if the target provides an artifact exportable from the repo."""
# TODO(John Sirois): fixup predicate dipping down into details here.
return self.has_label('exportable') and self.provides
@property
@_deprecated_predicate('Do not use this method, use an isinstance check on JarDependency.')
def is_jar(self):
"""Returns True if the target is a jar."""
return False
@property
@_deprecated_predicate('Do not use this method, use an isinstance check on JavaAgent.')
def is_java_agent(self):
"""Returns `True` if the target is a java agent."""
return self.has_label('java_agent')
@property
@_deprecated_predicate('Do not use this method, use an isinstance check on JvmApp.')
def is_jvm_app(self):
"""Returns True if the target produces a java application with bundled auxiliary files."""
return False
# DEPRECATED to be removed after 0.0.29
# do not use this method, use isinstance(..., JavaThriftLibrary) or a yet-to-be-defined mixin
@property
def is_thrift(self):
"""Returns True if the target has thrift IDL sources."""
return False
# DEPRECATED to be removed after 0.0.29
# do not use this method, use an isinstance check on a yet-to-be-defined mixin
@property
def is_jvm(self):
"""Returns True if the target produces jvm bytecode."""
return self.has_label('jvm')
# DEPRECATED to be removed after 0.0.29
# do not use this method, use an isinstance check on a yet-to-be-defined mixin
@property
def is_codegen(self):
"""Returns True if the target is a codegen target."""
return self.has_label('codegen')
@property
@_deprecated_predicate('Do not use this method, use an isinstance check on JarLibrary.')
def is_jar_library(self):
"""Returns True if the target is an external jar library."""
return self.has_label('jars')
# DEPRECATED to be removed after 0.0.29
# do not use this method, use an isinstance check on a yet-to-be-defined mixin
@property
def is_java(self):
"""Returns True if the target has or generates java sources."""
return self.has_label('java')
@property
@_deprecated_predicate('Do not use this method, use an isinstance check on AnnotationProcessor.')
def is_apt(self):
"""Returns True if the target exports an annotation processor."""
return self.has_label('apt')
# DEPRECATED to be removed after 0.0.29
# do not use this method, use an isinstance check on a yet-to-be-defined mixin
@property
def is_python(self):
"""Returns True if the target has python sources."""
return self.has_label('python')
# DEPRECATED to be removed after 0.0.29
# do not use this method, use an isinstance check on a yet-to-be-defined mixin
@property
def is_scala(self):
"""Returns True if the target has scala sources."""
return self.has_label('scala')
# DEPRECATED to be removed after 0.0.29
# do not use this method, use an isinstance check on a yet-to-be-defined mixin
@property
def is_scalac_plugin(self):
"""Returns True if the target builds a scalac plugin."""
return self.has_label('scalac_plugin')
# DEPRECATED to be removed after 0.0.29
# do not use this method, use an isinstance check on a yet-to-be-defined mixin
@property
def is_test(self):
"""Returns True if the target is comprised of tests."""
return self.has_label('tests')
# DEPRECATED to be removed after 0.0.29
# do not use this method, use an isinstance check on a yet-to-be-defined mixin
@property
def is_android(self):
"""Returns True if the target is an android target."""
return self.has_label('android')
class Target(AbstractTarget):
"""The baseclass for all pants targets.
Handles registration of a target amongst all parsed targets as well as location of the target
parse context.
"""
class WrongNumberOfAddresses(Exception):
"""Internal error, too many elements in Addresses"""
pass
LANG_DISCRIMINATORS = {
'java': lambda t: t.is_jvm,
'python': lambda t: t.is_python,
}
@classmethod
def lang_discriminator(cls, lang):
"""Returns a tuple of target predicates that select the given lang vs all other supported langs.
The left hand side accepts targets for the given language; the right hand side accepts
targets for all other supported languages.
"""
def is_other_lang(target):
for name, discriminator in cls.LANG_DISCRIMINATORS.items():
if name != lang and discriminator(target):
return True
return False
return (cls.LANG_DISCRIMINATORS[lang], is_other_lang)
@classmethod
def get_addressable_type(target_cls):
class ConcreteTargetAddressable(TargetAddressable):
@classmethod
def get_target_type(cls):
return target_cls
return ConcreteTargetAddressable
@property
def target_base(self):
""":returns: the source root path for this target."""
return SourceRoot.find(self)
@classmethod
def identify(cls, targets):
"""Generates an id for a set of targets."""
return cls.combine_ids(target.id for target in targets)
@classmethod
def maybe_readable_identify(cls, targets):
"""Generates an id for a set of targets.
If the set is a single target, just use that target's id."""
return cls.maybe_readable_combine_ids([target.id for target in targets])
@staticmethod
def combine_ids(ids):
"""Generates a combined id for a set of ids."""
return hash_all(sorted(ids)) # We sort so that the id isn't sensitive to order.
@classmethod
def maybe_readable_combine_ids(cls, ids):
"""Generates combined id for a set of ids, but if the set is a single id, just use that."""
ids = list(ids) # We can't len a generator.
return ids[0] if len(ids) == 1 else cls.combine_ids(ids)
def __init__(self, name, address, build_graph, payload=None, tags=None, description=None):
"""
:param string name: The name of this target, which combined with this
build file defines the target address.
:param dependencies: Other targets that this target depends on.
:type dependencies: list of target specs
:param Address address: The Address that maps to this Target in the BuildGraph
:param BuildGraph build_graph: The BuildGraph that this Target lives within
:param Payload payload: The configuration encapsulated by this target. Also in charge of
most fingerprinting details.
:param iterable<string> tags: Arbitrary string tags that describe this target. Usable
by downstream/custom tasks for reasoning about build graph. NOT included in payloads
and thus not used in fingerprinting, thus not suitable for anything that affects how
a particular target is built.
:param string description: Human-readable description of this target.
"""
# dependencies is listed above; implementation hides in TargetAddressable
self.payload = payload or Payload()
self.payload.freeze()
self.name = name
self.address = address
self._tags = set(tags or [])
self._build_graph = build_graph
self.description = description
self.labels = set()
self._cached_fingerprint_map = {}
self._cached_transitive_fingerprint_map = {}
@property
def tags(self):
return self._tags
@property
def num_chunking_units(self):
return max(1, len(self.sources_relative_to_buildroot()))
def assert_list(self, maybe_list, expected_type=string_types):
return assert_list(maybe_list, expected_type,
raise_type=lambda msg: TargetDefinitionException(self, msg))
def compute_invalidation_hash(self, fingerprint_strategy=None):
"""
:param FingerprintStrategy fingerprint_strategy: optional fingerprint strategy to use to compute
the fingerprint of a target
:return: a fingerprint representing this target (no dependencies)
:rtype: string
"""
fingerprint_strategy = fingerprint_strategy or DefaultFingerprintStrategy()
return fingerprint_strategy.fingerprint_target(self)
def invalidation_hash(self, fingerprint_strategy=None):
fingerprint_strategy = fingerprint_strategy or DefaultFingerprintStrategy()
if fingerprint_strategy not in self._cached_fingerprint_map:
self._cached_fingerprint_map[fingerprint_strategy] = self.compute_invalidation_hash(fingerprint_strategy)
return self._cached_fingerprint_map[fingerprint_strategy]
def mark_extra_invalidation_hash_dirty(self):
pass
def mark_invalidation_hash_dirty(self):
self._cached_fingerprint_map = {}
self._cached_transitive_fingerprint_map = {}
self.mark_extra_invalidation_hash_dirty()
def transitive_invalidation_hash(self, fingerprint_strategy=None):
"""
:param FingerprintStrategy fingerprint_strategy: optional fingerprint strategy to use to compute
the fingerprint of a target
:return: A fingerprint representing this target and all of its dependencies.
The return value can be `None`, indicating that this target and all of its transitive dependencies
did not contribute to the fingerprint, according to the provided FingerprintStrategy.
:rtype: string
"""
fingerprint_strategy = fingerprint_strategy or DefaultFingerprintStrategy()
if fingerprint_strategy not in self._cached_transitive_fingerprint_map:
hasher = sha1()
def dep_hash_iter():
for dep in self.dependencies:
dep_hash = dep.transitive_invalidation_hash(fingerprint_strategy)
if dep_hash is not None:
yield dep_hash
dep_hashes = sorted(list(dep_hash_iter()))
for dep_hash in dep_hashes:
hasher.update(dep_hash)
target_hash = self.invalidation_hash(fingerprint_strategy)
if target_hash is None and not dep_hashes:
return None
dependencies_hash = hasher.hexdigest()[:12]
combined_hash = '{target_hash}.{deps_hash}'.format(target_hash=target_hash,
deps_hash=dependencies_hash)
self._cached_transitive_fingerprint_map[fingerprint_strategy] = combined_hash
return self._cached_transitive_fingerprint_map[fingerprint_strategy]
def mark_transitive_invalidation_hash_dirty(self):
self._cached_transitive_fingerprint_map = {}
self.mark_extra_transitive_invalidation_hash_dirty()
def mark_extra_transitive_invalidation_hash_dirty(self):
pass
def inject_dependency(self, dependency_address):
self._build_graph.inject_dependency(dependent=self.address, dependency=dependency_address)
def invalidate_dependee(dependee):
dependee.mark_transitive_invalidation_hash_dirty()
self._build_graph.walk_transitive_dependee_graph([self.address], work=invalidate_dependee)
def has_sources(self, extension=''):
"""
:param string extension: suffix of filenames to test for
:return: True if the target contains sources that match the optional extension suffix
:rtype: bool
"""
sources_field = self.payload.get_field('sources')
if sources_field:
return sources_field.has_sources(extension)
else:
return False
def sources_relative_to_buildroot(self):
if self.has_sources():
return self.payload.sources.relative_to_buildroot()
else:
return []
def sources_relative_to_source_root(self):
if self.has_sources():
abs_source_root = os.path.join(get_buildroot(), self.target_base)
for source in self.sources_relative_to_buildroot():
abs_source = os.path.join(get_buildroot(), source)
yield os.path.relpath(abs_source, abs_source_root)
@property
def derived_from(self):
"""Returns the target this target was derived from.
If this target was not derived from another, returns itself.
"""
return self._build_graph.get_derived_from(self.address)
@property
def derived_from_chain(self):
"""Returns all targets that this target was derived from.
If this target was not derived from another, returns an empty sequence.
"""
cur = self
while cur.derived_from is not cur:
cur = cur.derived_from
yield cur
@property
def concrete_derived_from(self):
"""Returns the concrete target this target was (directly or indirectly) derived from.
The returned target is guaranteed to not have been derived from any other target, and is thus
guaranteed to be a 'real' target from a BUILD file, not a programmatically injected target.
"""
return self._build_graph.get_concrete_derived_from(self.address)
@property
def traversable_specs(self):
"""
:return: specs referenced by this target to be injected into the build graph
:rtype: list of strings
"""
return []
@property
def traversable_dependency_specs(self):
"""
:return: specs representing dependencies of this target that will be injected to the build
graph and linked in the graph as dependencies of this target
:rtype: list of strings
"""
# To support DeferredSourcesField
for name, payload_field in self.payload.fields:
if isinstance(payload_field, DeferredSourcesField) and payload_field.address:
yield payload_field.address.spec
@property
def dependencies(self):
"""
:return: targets that this target depends on
:rtype: list of Target
"""
return [self._build_graph.get_target(dep_address)
for dep_address in self._build_graph.dependencies_of(self.address)]
@property
def dependents(self):
"""
:return: targets that depend on this target
:rtype: list of Target
"""
return [self._build_graph.get_target(dep_address)
for dep_address in self._build_graph.dependents_of(self.address)]
@property
def is_synthetic(self):
"""
:return: True if this target did not originate from a BUILD file.
"""
return self.concrete_derived_from.address != self.address
@property
def is_original(self):
"""Returns ``True`` if this target is derived from no other."""
return self.derived_from == self
@property
def id(self):
"""A unique identifier for the Target.
The generated id is safe for use as a path name on unix systems.
"""
return self.address.path_safe_spec
@property
def identifier(self):
"""A unique identifier for the Target.
The generated id is safe for use as a path name on unix systems.
"""
return self.id
def walk(self, work, predicate=None):
"""Walk of this target's dependency graph, DFS preorder traversal, visiting each node exactly
once.
If a predicate is supplied it will be used to test each target before handing the target to
work and descending. Work can return targets in which case these will be added to the walk
candidate set if not already walked.
:param work: Callable that takes a :py:class:`pants.base.target.Target`
as its single argument.
:param predicate: Callable that takes a :py:class:`pants.base.target.Target`
as its single argument and returns True if the target should be passed to ``work``.
"""
if not callable(work):
raise ValueError('work must be callable but was %s' % work)
if predicate and not callable(predicate):
raise ValueError('predicate must be callable but was %s' % predicate)
self._build_graph.walk_transitive_dependency_graph([self.address], work, predicate)
def closure(self):
"""Returns this target's transitive dependencies, in DFS preorder traversal."""
return self._build_graph.transitive_subgraph_of_addresses([self.address])
@manual.builddict()
@deprecated('0.0.30', hint_message='Use the description parameter of target() instead')
def with_description(self, description):
"""Set a human-readable description of this target.
:param description: Descriptive string"""
self.description = description
return self
# TODO(Eric Ayers) As of 2/5/2015 this call is DEPRECATED and should be removed soon
def add_labels(self, *label):
self.labels.update(label)
# TODO(Eric Ayers) As of 2/5/2015 this call is DEPRECATED and should be removed soon
def remove_label(self, label):
self.labels.remove(label)
# TODO(Eric Ayers) As of 2/5/2015 this call is DEPRECATED and should be removed soon
def has_label(self, label):
return label in self.labels
def __lt__(self, other):
return self.address < other.address
def __eq__(self, other):
return isinstance(other, Target) and self.address == other.address
def __hash__(self):
return hash(self.address)
def __ne__(self, other):
return not self.__eq__(other)
def __repr__(self):
addr = self.address if hasattr(self, 'address') else 'address not yet set'
return "%s(%s)" % (type(self).__name__, addr)
def create_sources_field(self, sources, sources_rel_path, address=None, build_graph=None):
"""Factory method to create a SourcesField appropriate for the type of the sources object.
Note that this method is called before the call to Target.__init__ so don't expect fields to
be populated!
:return: a payload field object representing the sources parameter
:rtype: SourcesField
"""
if isinstance(sources, Addresses):
# Currently, this is only created by the result of from_target() which takes a single argument
if len(sources.addresses) != 1:
raise self.WrongNumberOfAddresses(
"Expected a single address to from_target() as argument to {spec}"
.format(spec=address.spec))
referenced_address = SyntheticAddress.parse(sources.addresses[0],
relative_to=sources.rel_path)
return DeferredSourcesField(ref_address=referenced_address)
return SourcesField(sources=sources, sources_rel_path=sources_rel_path)
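

# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the pants module above): how a transitive
# invalidation hash is combined, mirroring Target.transitive_invalidation_hash.
# Dependency hashes are sorted so ordering cannot affect the result, folded
# into a sha1 digest truncated to 12 hex characters, and joined to the
# target's own hash as '<target_hash>.<deps_hash>'. The hash values in the
# example are made up purely for demonstration.
# ---------------------------------------------------------------------------
from hashlib import sha1 as _sha1_example


def _combine_transitive_hash_example(target_hash, dep_hashes):
  hasher = _sha1_example()
  for dep_hash in sorted(dep_hashes):
    # .encode() added so the sketch also runs on Python 3; the module above
    # targets Python 2, where the bare hex string is accepted directly.
    hasher.update(dep_hash.encode('utf-8'))
  return '{target_hash}.{deps_hash}'.format(target_hash=target_hash,
                                            deps_hash=hasher.hexdigest()[:12])


# Example: _combine_transitive_hash_example('abc123', ['deadbeef', 'cafef00d'])
# yields the same value no matter how the dependency hashes are ordered.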
Growth opportunities for Bristol-Myers Squibb make this drug stock a bargain.
What will Carl Icahn do? Will Bristol-Myers Squibb (NYSE:BMY) be acquired? How will Opdivo fare against Merck's (NYSE:MRK) Keytruda?
There are plenty of unanswered questions swirling around about Bristol-Myers Squibb (BMS) these days. But as I listened to the company's CEO speak at the Cowen and Company global healthcare conference, I was reminded of something investors shouldn't miss: This stock is a bargain, despite all the current distractions. Here's why.
It's true that Merck achieved a solid success with its late-stage study evaluating Keytruda as a first-line treatment of lung cancer while BMS' Opdivo study disappointed. It's also true that Bristol-Myers Squibb will be playing defense against Merck for a while in the U.S. lung cancer market. However, the first-line lung cancer indication isn't a lost cause for BMS by any means.
BMS CEO Giovanni Caforio stated at the Cowen conference that his company has the most comprehensive strategy for first-line non-small-cell lung cancer. He's right.
There are currently five pharmaceutical companies with potential first-line lung cancer treatments either approved or in registrational studies. Merck, of course, already obtained approval for Keytruda as a monotherapy for patients whose tumors have high PD-L1 expression. The company is also awaiting regulatory approval for Keytruda in combination with chemotherapy in the first-line indication.
Roche (NASDAQOTH:RHHBY) expects to submit Tecentriq for approval as a monotherapy and in combination with chemotherapy as a first-line treatment for lung cancer next year. Pfizer (NYSE:PFE) has its anti-PD-L1 inhibitor, avelumab, in late-stage testing in the front-line indication. Results from this study should be announced in the first half of 2018.
AstraZeneca (NYSE:AZN) hopes to submit durvalumab for approval as a monotherapy later this year. The drugmaker also has a durvalumab/chemotherapy combo in late-stage testing, along with a combo with another immuno-oncology (I-O) candidate, tremelimumab.
Only Bristol-Myers Squibb, however, can check off all the boxes. There's still potential for Opdivo as a monotherapy in first-line lung cancer, since one arm of the CheckMate-227 study continues. The company also has an I-O/I-O combo with Opdivo and Yervoy, an Opdivo/chemotherapy combo, and an Opdivo/Yervoy/chemotherapy combo in testing.
Bristol-Myers Squibb has plenty of news on the way outside of the lung cancer indications. The company expects 10 data readouts within the next two years from phase 2 and phase 3 studies evaluating Opdivo/Yervoy in other types of cancer.
Anticoagulant drug Eliquis and autoimmune disease drug Orencia are experiencing solid sales growth. BMS hopes to augment its success in the two therapeutic areas with several mid-stage pipeline assets. In addition, the company is looking to expand its presence in the fibrotic-diseases space.
With Carl Icahn reportedly buying a stake in Bristol-Myers Squibb, there has been a lot of talk about the company being a buyout target. It's important to remember, though, that acquisitions can go both ways. I suspect the chances of BMS making its own acquisition are much higher than it being bought by another company.
Bristol-Myers Squibb already partners with several smaller biotechs. The company claims a strong balance sheet with roughly $9 billion in cash and cash equivalents. An acquisition of a biotech with a product that fits well with Opdivo could make sense.
Among the major players in the lung cancer market, Pfizer appears to be the most attractively valued based on forward earnings multiple. Bristol-Myers Squibb is the most expensive using that metric. However, when you factor in Bristol-Myers Squibb's growth prospects, the stock trades at a much more attractive valuation than its peer group.
There's also another nice plus: BMS has steadily raised its dividend for the last eight years. The company spends less than 58% of earnings to fund the dividend program, so more dividend increases could be on the way.
Forget the distractions. Bristol-Myers Squibb is a bargain. I doubt the stock will remain this good of a deal for too much longer.
"""
Module to control a virtual create
"""
from ..vrep import vrep as vrep
from enum import Enum
class VirtualCreate:
    """
    Class to control a virtual create in V-REP.
    """

    def __init__(self, client_id):
        """Constructor.

        Args:
            client_id (integer): V-REP client id.
        """
        self._clientID = client_id
        # query objects
        rc, self._obj = vrep.simxGetObjectHandle(self._clientID, "create_estimate", vrep.simx_opmode_oneshot_wait)

        # Use custom GUI
        _, self._uiHandle = vrep.simxGetUIHandle(self._clientID, "UI", vrep.simx_opmode_oneshot_wait)
        vrep.simxGetUIEventButton(self._clientID, self._uiHandle, vrep.simx_opmode_streaming)

    def set_pose(self, position, yaw):
        vrep.simxSetObjectPosition(self._clientID, self._obj, -1, position,
                                   vrep.simx_opmode_oneshot_wait)
        vrep.simxSetObjectOrientation(self._clientID, self._obj, -1, (0, 0, yaw),
                                      vrep.simx_opmode_oneshot_wait)

    def set_point_cloud(self, data):
        signal = vrep.simxPackFloats(data)
        vrep.simxWriteStringStream(self._clientID, "pointCloud", signal, vrep.simx_opmode_oneshot)

    class Button(Enum):
        MoveForward = 3
        TurnLeft = 4
        TurnRight = 5
        Sense = 6

    def get_last_button(self):
        self.enable_buttons()
        err, button_id, aux = vrep.simxGetUIEventButton(self._clientID, self._uiHandle, vrep.simx_opmode_buffer)
        if err == vrep.simx_return_ok and button_id != -1:
            self.disable_buttons()
            vrep.simxGetUIEventButton(self._clientID, self._uiHandle, vrep.simx_opmode_streaming)
            return self.Button(button_id)
        return None

    def disable_buttons(self):
        for i in range(3, 7):
            _, prop = vrep.simxGetUIButtonProperty(self._clientID, self._uiHandle, i, vrep.simx_opmode_oneshot)
            prop &= ~vrep.sim_buttonproperty_enabled
            vrep.simxSetUIButtonProperty(self._clientID, self._uiHandle, i, prop, vrep.simx_opmode_oneshot)

    def enable_buttons(self):
        for i in range(3, 7):
            _, prop = vrep.simxGetUIButtonProperty(self._clientID, self._uiHandle, i, vrep.simx_opmode_oneshot)
            # print(prop)
            prop |= vrep.sim_buttonproperty_enabled
            vrep.simxSetUIButtonProperty(self._clientID, self._uiHandle, i, prop, vrep.simx_opmode_oneshot)
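

# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of the module above). Because this file
# uses a relative import it only runs as part of its package, so the snippet
# is left as a comment. It assumes a V-REP simulation with the remote API
# listening on 127.0.0.1:19997; the pose values are arbitrary examples.
#
#     client_id = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
#     if client_id != -1:
#         create = VirtualCreate(client_id)
#         create.set_pose((0.0, 0.0, 0.1), yaw=0.0)  # move the pose estimate marker
#         button = create.get_last_button()          # poll the custom UI once
#         if button is not None:
#             print(button)                          # e.g. Button.Sense
#         vrep.simxFinish(client_id)
# ---------------------------------------------------------------------------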
I have gone shooting trips with several of them in the course of my life, and they have always proved themselves the best and bravest and nicest fellows I ever met, though sadly given, some of them, to the use of profane language.
it hinders profane language, and attaches a man to the society of refined females.
More and more people are coming out of their silence over extra-judicial killings (EJKs), President Duterte's pugnacious ways, profane language, abusive attitude toward people, and confusing foreign policy.
The two Lebanese sisters overheard the Lebanese man cursing them and using profane language during the brawl.
At the same time, the data also elucidate how carnival was an ambivalent phenomenon wherein certain students strategically used carnivalesque, profane language to critique their experiences in school and give voice to their sense of dissatisfaction of being in a newcomer program and labeled as an EL.
He raised his voice, used profane language to complain about the government, said that he had nothing to live for, and threatened one MSHA inspector that he would 'find where he lived' and that Andersen would not 'go out [die] alone.
com/products/1080711-bible-2-trade-paperback "Editor's note: Site contains profane language.
Game of Thrones star Jack Gleeson said he struggled to play evil King Joffrey Baratheon because he's a polite person who does not swear and avoids profane language.
Muhammet Feyzi Aygün, who works as an investigatory judge at the Justice Ministry's European Union General Directorate, was revealed to have been targeting critics of the Justice and Development Party (AK Party) and even insulting them by using profane language.
The board made its decision after conducting a 40-minute closed-door hearing with parent Saul Patu, who claimed that Wagner directed demeaning and profane language toward his daughter, at practices and in games, that constituted harassment.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from __future__ import absolute_import, unicode_literals
from os import path
import json
from flask import Flask, g, render_template
from peewee import Model, SqliteDatabase, CharField, FloatField
app = Flask(__name__)
# TODO: override on_result(self, result) method to manage the result yourself.
database_path = path.join(path.abspath(path.dirname(__file__)), 'result.db')
database = SqliteDatabase(database_path)
class BaseModel(Model):
    class Meta:
        database = database


class Resultdb_top100_version_4(BaseModel):
    taskid = CharField(primary_key=True)
    result = CharField()
    updatetime = FloatField()
    url = CharField()


@app.before_request
def before_request():
    g.db = database
    g.db.connect()


@app.after_request
def after_request(response):
    g.db.close()
    return response


@app.route('/')
@app.route('/sortby/<sorted_key>')
def index(sorted_key='played'):
    top100 = []
    for record in Resultdb_top100_version_4.select():
        top100.append(json.loads(record.result))
    top100 = sorted(top100, key=lambda t: t[sorted_key], reverse=True)[:100]
    return render_template('index.html', top100=top100)


if __name__ == '__main__':
    app.run(debug=True, port=5001)
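
# Usage sketch (assumptions, not part of the original script): with result.db
# already populated (the Resultdb_top100_version_4 table is presumably written
# by a pyspider result worker), running this file serves
# http://localhost:5001/ (sorted by the 'played' field) and
# http://localhost:5001/sortby/<key>; <key> must be present in every stored
# result JSON blob, otherwise index() raises a KeyError.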
What is NATIONAL PAYER ID?
NATIONAL PAYER ID means a system for uniquely identifying all organizations that pay for health care services. It is also known as the Health Plan ID, or Plan ID.
# Copyright 2012 VMware, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import eventlet
import netaddr
from oslo_config import cfg
from oslo_log import log as logging
import oslo_messaging
from oslo_service import loopingcall
from oslo_service import periodic_task
from oslo_utils import excutils
from oslo_utils import timeutils
from neutron._i18n import _, _LE, _LI, _LW
from neutron.agent.common import utils as common_utils
from neutron.agent.l3 import dvr
from neutron.agent.l3 import dvr_edge_ha_router
from neutron.agent.l3 import dvr_edge_router as dvr_router
from neutron.agent.l3 import dvr_local_router as dvr_local_router
from neutron.agent.l3 import ha
from neutron.agent.l3 import ha_router
from neutron.agent.l3 import legacy_router
from neutron.agent.l3 import namespace_manager
from neutron.agent.l3 import namespaces
from neutron.agent.l3 import router_processing_queue as queue
from neutron.agent.linux import external_process
from neutron.agent.linux import ip_lib
from neutron.agent.linux import pd
from neutron.agent.metadata import driver as metadata_driver
from neutron.agent import rpc as agent_rpc
from neutron.callbacks import events
from neutron.callbacks import registry
from neutron.callbacks import resources
from neutron.common import constants as l3_constants
from neutron.common import exceptions as n_exc
from neutron.common import ipv6_utils
from neutron.common import rpc as n_rpc
from neutron.common import topics
from neutron import context as n_context
from neutron import manager
try:
from neutron_fwaas.services.firewall.agents.l3reference \
import firewall_l3_agent
except Exception:
# TODO(dougw) - REMOVE THIS FROM NEUTRON; during l3_agent refactor only
from neutron.services.firewall.agents.l3reference import firewall_l3_agent
LOG = logging.getLogger(__name__)
# TODO(Carl) Following constants retained to increase SNR during refactoring
NS_PREFIX = namespaces.NS_PREFIX
INTERNAL_DEV_PREFIX = namespaces.INTERNAL_DEV_PREFIX
EXTERNAL_DEV_PREFIX = namespaces.EXTERNAL_DEV_PREFIX
# Number of routers to fetch from server at a time on resync.
# Needed to reduce load on server side and to speed up resync on agent side.
SYNC_ROUTERS_MAX_CHUNK_SIZE = 256
SYNC_ROUTERS_MIN_CHUNK_SIZE = 32
class L3PluginApi(object):
"""Agent side of the l3 agent RPC API.
API version history:
1.0 - Initial version.
1.1 - Floating IP operational status updates
1.2 - DVR support: new L3 plugin methods added.
- get_ports_by_subnet
- get_agent_gateway_port
Needed by the agent when operating in DVR/DVR_SNAT mode
1.3 - Get the list of activated services
1.4 - Added L3 HA update_router_state. This method was reworked in
1.5 to update_ha_routers_states
1.5 - Added update_ha_routers_states
1.6 - Added process_prefix_update
1.7 - DVR support: new L3 plugin methods added.
- delete_agent_gateway_port
1.8 - Added address scope information
1.9 - Added get_router_ids
"""
def __init__(self, topic, host):
self.host = host
target = oslo_messaging.Target(topic=topic, version='1.0')
self.client = n_rpc.get_client(target)
def get_routers(self, context, router_ids=None):
"""Make a remote process call to retrieve the sync data for routers."""
cctxt = self.client.prepare()
return cctxt.call(context, 'sync_routers', host=self.host,
router_ids=router_ids)
def get_router_ids(self, context):
"""Make a remote process call to retrieve scheduled routers ids."""
cctxt = self.client.prepare(version='1.9')
return cctxt.call(context, 'get_router_ids', host=self.host)
def get_external_network_id(self, context):
"""Make a remote process call to retrieve the external network id.
@raise oslo_messaging.RemoteError: with TooManyExternalNetworks as
exc_type if there are more than one
external network
"""
cctxt = self.client.prepare()
return cctxt.call(context, 'get_external_network_id', host=self.host)
def update_floatingip_statuses(self, context, router_id, fip_statuses):
"""Call the plugin update floating IPs's operational status."""
cctxt = self.client.prepare(version='1.1')
return cctxt.call(context, 'update_floatingip_statuses',
router_id=router_id, fip_statuses=fip_statuses)
def get_ports_by_subnet(self, context, subnet_id):
"""Retrieve ports by subnet id."""
cctxt = self.client.prepare(version='1.2')
return cctxt.call(context, 'get_ports_by_subnet', host=self.host,
subnet_id=subnet_id)
def get_agent_gateway_port(self, context, fip_net):
"""Get or create an agent_gateway_port."""
cctxt = self.client.prepare(version='1.2')
return cctxt.call(context, 'get_agent_gateway_port',
network_id=fip_net, host=self.host)
def get_service_plugin_list(self, context):
"""Make a call to get the list of activated services."""
cctxt = self.client.prepare(version='1.3')
return cctxt.call(context, 'get_service_plugin_list')
def update_ha_routers_states(self, context, states):
"""Update HA routers states."""
cctxt = self.client.prepare(version='1.5')
return cctxt.call(context, 'update_ha_routers_states',
host=self.host, states=states)
def process_prefix_update(self, context, prefix_update):
"""Process prefix update whenever prefixes get changed."""
cctxt = self.client.prepare(version='1.6')
return cctxt.call(context, 'process_prefix_update',
subnets=prefix_update)
def delete_agent_gateway_port(self, context, fip_net):
"""Delete Floatingip_agent_gateway_port."""
cctxt = self.client.prepare(version='1.7')
return cctxt.call(context, 'delete_agent_gateway_port',
host=self.host, network_id=fip_net)
class L3NATAgent(firewall_l3_agent.FWaaSL3AgentRpcCallback,
ha.AgentMixin,
dvr.AgentMixin,
manager.Manager):
"""Manager for L3NatAgent
API version history:
1.0 initial Version
1.1 changed the type of the routers parameter
to the routers_updated method.
It was previously a list of routers in dict format.
It is now a list of router IDs only.
Per rpc versioning rules, it is backwards compatible.
1.2 - DVR support: new L3 agent methods added.
- add_arp_entry
- del_arp_entry
1.3 - fipnamespace_delete_on_ext_net - to delete fipnamespace
after the external network is removed
Needed by the L3 service when dealing with DVR
"""
target = oslo_messaging.Target(version='1.3')
def __init__(self, host, conf=None):
if conf:
self.conf = conf
else:
self.conf = cfg.CONF
self.router_info = {}
self._check_config_params()
self.process_monitor = external_process.ProcessMonitor(
config=self.conf,
resource_type='router')
self.driver = common_utils.load_interface_driver(self.conf)
self.context = n_context.get_admin_context_without_session()
self.plugin_rpc = L3PluginApi(topics.L3PLUGIN, host)
self.fullsync = True
self.sync_routers_chunk_size = SYNC_ROUTERS_MAX_CHUNK_SIZE
# Get the list of service plugins from Neutron Server
# This is the first place where we contact neutron-server on startup
# so retry in case its not ready to respond.
retry_count = 5
while True:
retry_count = retry_count - 1
try:
self.neutron_service_plugins = (
self.plugin_rpc.get_service_plugin_list(self.context))
except oslo_messaging.RemoteError as e:
with excutils.save_and_reraise_exception() as ctx:
ctx.reraise = False
LOG.warning(_LW('l3-agent cannot check service plugins '
'enabled at the neutron server when '
'startup due to RPC error. It happens '
'when the server does not support this '
'RPC API. If the error is '
'UnsupportedVersion you can ignore this '
'warning. Detail message: %s'), e)
self.neutron_service_plugins = None
except oslo_messaging.MessagingTimeout as e:
with excutils.save_and_reraise_exception() as ctx:
if retry_count > 0:
ctx.reraise = False
LOG.warning(_LW('l3-agent cannot check service '
'plugins enabled on the neutron '
'server. Retrying. '
'Detail message: %s'), e)
continue
break
self.metadata_driver = None
if self.conf.enable_metadata_proxy:
self.metadata_driver = metadata_driver.MetadataDriver(self)
self.namespaces_manager = namespace_manager.NamespaceManager(
self.conf,
self.driver,
self.metadata_driver)
self._queue = queue.RouterProcessingQueue()
super(L3NATAgent, self).__init__(conf=self.conf)
self.target_ex_net_id = None
self.use_ipv6 = ipv6_utils.is_enabled()
self.pd = pd.PrefixDelegation(self.context, self.process_monitor,
self.driver,
self.plugin_rpc.process_prefix_update,
self.create_pd_router_update,
self.conf)
def _check_config_params(self):
"""Check items in configuration files.
Check for required and invalid configuration items.
The actual values are not verified for correctness.
"""
if not self.conf.interface_driver:
msg = _LE('An interface driver must be specified')
LOG.error(msg)
raise SystemExit(1)
if self.conf.ipv6_gateway:
# ipv6_gateway configured. Check for valid v6 link-local address.
try:
msg = _LE("%s used in config as ipv6_gateway is not a valid "
"IPv6 link-local address."),
ip_addr = netaddr.IPAddress(self.conf.ipv6_gateway)
if ip_addr.version != 6 or not ip_addr.is_link_local():
LOG.error(msg, self.conf.ipv6_gateway)
raise SystemExit(1)
except netaddr.AddrFormatError:
LOG.error(msg, self.conf.ipv6_gateway)
raise SystemExit(1)
def _fetch_external_net_id(self, force=False):
"""Find UUID of single external network for this agent."""
if self.conf.gateway_external_network_id:
return self.conf.gateway_external_network_id
# L3 agent doesn't use external_network_bridge to handle external
# networks, so bridge_mappings with provider networks will be used
# and the L3 agent is able to handle any external networks.
if not self.conf.external_network_bridge:
return
if not force and self.target_ex_net_id:
return self.target_ex_net_id
try:
self.target_ex_net_id = self.plugin_rpc.get_external_network_id(
self.context)
return self.target_ex_net_id
except oslo_messaging.RemoteError as e:
with excutils.save_and_reraise_exception() as ctx:
if e.exc_type == 'TooManyExternalNetworks':
ctx.reraise = False
msg = _(
"The 'gateway_external_network_id' option must be "
"configured for this agent as Neutron has more than "
"one external network.")
raise Exception(msg)
def _create_router(self, router_id, router):
args = []
kwargs = {
'router_id': router_id,
'router': router,
'use_ipv6': self.use_ipv6,
'agent_conf': self.conf,
'interface_driver': self.driver,
}
if router.get('distributed'):
kwargs['agent'] = self
kwargs['host'] = self.host
if router.get('distributed') and router.get('ha'):
if self.conf.agent_mode == l3_constants.L3_AGENT_MODE_DVR_SNAT:
kwargs['state_change_callback'] = self.enqueue_state_change
return dvr_edge_ha_router.DvrEdgeHaRouter(*args, **kwargs)
if router.get('distributed'):
if self.conf.agent_mode == l3_constants.L3_AGENT_MODE_DVR_SNAT:
return dvr_router.DvrEdgeRouter(*args, **kwargs)
else:
return dvr_local_router.DvrLocalRouter(*args, **kwargs)
if router.get('ha'):
kwargs['state_change_callback'] = self.enqueue_state_change
return ha_router.HaRouter(*args, **kwargs)
return legacy_router.LegacyRouter(*args, **kwargs)
def _router_added(self, router_id, router):
ri = self._create_router(router_id, router)
registry.notify(resources.ROUTER, events.BEFORE_CREATE,
self, router=ri)
self.router_info[router_id] = ri
ri.initialize(self.process_monitor)
# TODO(Carl) This is a hook in to fwaas. It should be cleaned up.
self.process_router_add(ri)
def _safe_router_removed(self, router_id):
"""Try to delete a router and return True if successful."""
try:
self._router_removed(router_id)
except Exception:
LOG.exception(_LE('Error while deleting router %s'), router_id)
return False
else:
return True
def _router_removed(self, router_id):
ri = self.router_info.get(router_id)
if ri is None:
LOG.warning(_LW("Info for router %s was not found. "
"Performing router cleanup"), router_id)
self.namespaces_manager.ensure_router_cleanup(router_id)
return
registry.notify(resources.ROUTER, events.BEFORE_DELETE,
self, router=ri)
ri.delete(self)
del self.router_info[router_id]
registry.notify(resources.ROUTER, events.AFTER_DELETE, self, router=ri)
def router_deleted(self, context, router_id):
"""Deal with router deletion RPC message."""
LOG.debug('Got router deleted notification for %s', router_id)
update = queue.RouterUpdate(router_id,
queue.PRIORITY_RPC,
action=queue.DELETE_ROUTER)
self._queue.add(update)
def routers_updated(self, context, routers):
"""Deal with routers modification and creation RPC message."""
LOG.debug('Got routers updated notification :%s', routers)
if routers:
# This is needed for backward compatibility
if isinstance(routers[0], dict):
routers = [router['id'] for router in routers]
for id in routers:
update = queue.RouterUpdate(id, queue.PRIORITY_RPC)
self._queue.add(update)
def router_removed_from_agent(self, context, payload):
LOG.debug('Got router removed from agent :%r', payload)
router_id = payload['router_id']
update = queue.RouterUpdate(router_id,
queue.PRIORITY_RPC,
action=queue.DELETE_ROUTER)
self._queue.add(update)
def router_added_to_agent(self, context, payload):
LOG.debug('Got router added to agent :%r', payload)
self.routers_updated(context, payload)
def _process_router_if_compatible(self, router):
if (self.conf.external_network_bridge and
not ip_lib.device_exists(self.conf.external_network_bridge)):
LOG.error(_LE("The external network bridge '%s' does not exist"),
self.conf.external_network_bridge)
return
if self.conf.router_id and router['id'] != self.conf.router_id:
raise n_exc.RouterNotCompatibleWithAgent(router_id=router['id'])
# Either ex_net_id or handle_internal_only_routers must be set
ex_net_id = (router['external_gateway_info'] or {}).get('network_id')
if not ex_net_id and not self.conf.handle_internal_only_routers:
raise n_exc.RouterNotCompatibleWithAgent(router_id=router['id'])
# If target_ex_net_id and ex_net_id are set they must be equal
target_ex_net_id = self._fetch_external_net_id()
if (target_ex_net_id and ex_net_id and ex_net_id != target_ex_net_id):
# Double check that our single external_net_id has not changed
# by forcing a check by RPC.
if ex_net_id != self._fetch_external_net_id(force=True):
raise n_exc.RouterNotCompatibleWithAgent(
router_id=router['id'])
if router['id'] not in self.router_info:
self._process_added_router(router)
else:
self._process_updated_router(router)
def _process_added_router(self, router):
self._router_added(router['id'], router)
ri = self.router_info[router['id']]
ri.router = router
ri.process(self)
registry.notify(resources.ROUTER, events.AFTER_CREATE, self, router=ri)
def _process_updated_router(self, router):
ri = self.router_info[router['id']]
ri.router = router
registry.notify(resources.ROUTER, events.BEFORE_UPDATE,
self, router=ri)
ri.process(self)
registry.notify(resources.ROUTER, events.AFTER_UPDATE, self, router=ri)
def _resync_router(self, router_update,
priority=queue.PRIORITY_SYNC_ROUTERS_TASK):
router_update.timestamp = timeutils.utcnow()
router_update.priority = priority
router_update.router = None # Force the agent to resync the router
self._queue.add(router_update)
def _process_router_update(self):
for rp, update in self._queue.each_update_to_next_router():
LOG.debug("Starting router update for %s, action %s, priority %s",
update.id, update.action, update.priority)
if update.action == queue.PD_UPDATE:
self.pd.process_prefix_update()
LOG.debug("Finished a router update for %s", update.id)
continue
router = update.router
if update.action != queue.DELETE_ROUTER and not router:
try:
update.timestamp = timeutils.utcnow()
routers = self.plugin_rpc.get_routers(self.context,
[update.id])
except Exception:
msg = _LE("Failed to fetch router information for '%s'")
LOG.exception(msg, update.id)
self._resync_router(update)
continue
if routers:
router = routers[0]
if not router:
removed = self._safe_router_removed(update.id)
if not removed:
self._resync_router(update)
else:
# need to update timestamp of removed router in case
# there are older events for the same router in the
# processing queue (like events from fullsync) in order to
# prevent deleted router re-creation
rp.fetched_and_processed(update.timestamp)
LOG.debug("Finished a router update for %s", update.id)
continue
try:
self._process_router_if_compatible(router)
except n_exc.RouterNotCompatibleWithAgent as e:
LOG.exception(e.msg)
# Was the router previously handled by this agent?
if router['id'] in self.router_info:
LOG.error(_LE("Removing incompatible router '%s'"),
router['id'])
self._safe_router_removed(router['id'])
except Exception:
msg = _LE("Failed to process compatible router '%s'")
LOG.exception(msg, update.id)
self._resync_router(update)
continue
LOG.debug("Finished a router update for %s", update.id)
rp.fetched_and_processed(update.timestamp)
def _process_routers_loop(self):
LOG.debug("Starting _process_routers_loop")
pool = eventlet.GreenPool(size=8)
while True:
pool.spawn_n(self._process_router_update)
# NOTE(kevinbenton): this is set to 1 second because the actual interval
# is controlled by a FixedIntervalLoopingCall in neutron/service.py that
# is responsible for task execution.
@periodic_task.periodic_task(spacing=1, run_immediately=True)
def periodic_sync_routers_task(self, context):
self.process_services_sync(context)
if not self.fullsync:
return
LOG.debug("Starting fullsync periodic_sync_routers_task")
# self.fullsync is True at this point. If an exception -- caught or
# uncaught -- prevents setting it to False below then the next call
# to periodic_sync_routers_task will re-enter this code and try again.
# Context manager self.namespaces_manager captures a picture of
# namespaces *before* fetch_and_sync_all_routers fetches the full list
# of routers from the database. This is important to correctly
# identify stale ones.
try:
with self.namespaces_manager as ns_manager:
self.fetch_and_sync_all_routers(context, ns_manager)
except n_exc.AbortSyncRouters:
self.fullsync = True
def fetch_and_sync_all_routers(self, context, ns_manager):
prev_router_ids = set(self.router_info)
curr_router_ids = set()
timestamp = timeutils.utcnow()
try:
router_ids = ([self.conf.router_id] if self.conf.router_id else
self.plugin_rpc.get_router_ids(context))
# fetch routers by chunks to reduce the load on server and to
# start router processing earlier
for i in range(0, len(router_ids), self.sync_routers_chunk_size):
routers = self.plugin_rpc.get_routers(
context, router_ids[i:i + self.sync_routers_chunk_size])
LOG.debug('Processing :%r', routers)
for r in routers:
curr_router_ids.add(r['id'])
ns_manager.keep_router(r['id'])
if r.get('distributed'):
# need to keep fip namespaces as well
ext_net_id = (r['external_gateway_info'] or {}).get(
'network_id')
if ext_net_id:
ns_manager.keep_ext_net(ext_net_id)
update = queue.RouterUpdate(
r['id'],
queue.PRIORITY_SYNC_ROUTERS_TASK,
router=r,
timestamp=timestamp)
self._queue.add(update)
except oslo_messaging.MessagingTimeout:
if self.sync_routers_chunk_size > SYNC_ROUTERS_MIN_CHUNK_SIZE:
self.sync_routers_chunk_size = max(
self.sync_routers_chunk_size / 2,
SYNC_ROUTERS_MIN_CHUNK_SIZE)
LOG.error(_LE('Server failed to return info for routers in '
'required time, decreasing chunk size to: %s'),
self.sync_routers_chunk_size)
else:
LOG.error(_LE('Server failed to return info for routers in '
'required time even with min chunk size: %s. '
'It might be under very high load or '
'just inoperable'),
self.sync_routers_chunk_size)
raise
except oslo_messaging.MessagingException:
LOG.exception(_LE("Failed synchronizing routers due to RPC error"))
raise n_exc.AbortSyncRouters()
self.fullsync = False
LOG.debug("periodic_sync_routers_task successfully completed")
# adjust chunk size after successful sync
if self.sync_routers_chunk_size < SYNC_ROUTERS_MAX_CHUNK_SIZE:
self.sync_routers_chunk_size = min(
self.sync_routers_chunk_size + SYNC_ROUTERS_MIN_CHUNK_SIZE,
SYNC_ROUTERS_MAX_CHUNK_SIZE)
# Delete routers that have disappeared since the last sync
for router_id in prev_router_ids - curr_router_ids:
ns_manager.keep_router(router_id)
update = queue.RouterUpdate(router_id,
queue.PRIORITY_SYNC_ROUTERS_TASK,
timestamp=timestamp,
action=queue.DELETE_ROUTER)
self._queue.add(update)
def after_start(self):
# Note: the FWaaS' vArmourL3NATAgent is a subclass of L3NATAgent. It
# calls this method here. So Removing this after_start() would break
# vArmourL3NATAgent. We need to find out whether vArmourL3NATAgent
# can have L3NATAgentWithStateReport as its base class instead of
# L3NATAgent.
eventlet.spawn_n(self._process_routers_loop)
LOG.info(_LI("L3 agent started"))
def create_pd_router_update(self):
router_id = None
update = queue.RouterUpdate(router_id,
queue.PRIORITY_PD_UPDATE,
timestamp=timeutils.utcnow(),
action=queue.PD_UPDATE)
self._queue.add(update)
class L3NATAgentWithStateReport(L3NATAgent):
def __init__(self, host, conf=None):
super(L3NATAgentWithStateReport, self).__init__(host=host, conf=conf)
self.state_rpc = agent_rpc.PluginReportStateAPI(topics.REPORTS)
self.agent_state = {
'binary': 'neutron-l3-agent',
'host': host,
'availability_zone': self.conf.AGENT.availability_zone,
'topic': topics.L3_AGENT,
'configurations': {
'agent_mode': self.conf.agent_mode,
'router_id': self.conf.router_id,
'handle_internal_only_routers':
self.conf.handle_internal_only_routers,
'external_network_bridge': self.conf.external_network_bridge,
'gateway_external_network_id':
self.conf.gateway_external_network_id,
'interface_driver': self.conf.interface_driver,
'log_agent_heartbeats': self.conf.AGENT.log_agent_heartbeats},
'start_flag': True,
'agent_type': l3_constants.AGENT_TYPE_L3}
report_interval = self.conf.AGENT.report_interval
if report_interval:
self.heartbeat = loopingcall.FixedIntervalLoopingCall(
self._report_state)
self.heartbeat.start(interval=report_interval)
def _report_state(self):
num_ex_gw_ports = 0
num_interfaces = 0
num_floating_ips = 0
router_infos = self.router_info.values()
num_routers = len(router_infos)
for ri in router_infos:
ex_gw_port = ri.get_ex_gw_port()
if ex_gw_port:
num_ex_gw_ports += 1
num_interfaces += len(ri.router.get(l3_constants.INTERFACE_KEY,
[]))
num_floating_ips += len(ri.router.get(l3_constants.FLOATINGIP_KEY,
[]))
configurations = self.agent_state['configurations']
configurations['routers'] = num_routers
configurations['ex_gw_ports'] = num_ex_gw_ports
configurations['interfaces'] = num_interfaces
configurations['floating_ips'] = num_floating_ips
try:
agent_status = self.state_rpc.report_state(self.context,
self.agent_state,
True)
if agent_status == l3_constants.AGENT_REVIVED:
LOG.info(_LI('Agent has just been revived. '
'Doing a full sync.'))
self.fullsync = True
self.agent_state.pop('start_flag', None)
except AttributeError:
# This means the server does not support report_state
LOG.warning(_LW("Neutron server does not support state report. "
"State report for this agent will be disabled."))
self.heartbeat.stop()
return
except Exception:
LOG.exception(_LE("Failed reporting state!"))
def after_start(self):
eventlet.spawn_n(self._process_routers_loop)
LOG.info(_LI("L3 agent started"))
# Do the report state before we do the first full sync.
self._report_state()
self.pd.after_start()
def agent_updated(self, context, payload):
"""Handle the agent_updated notification event."""
self.fullsync = True
LOG.info(_LI("agent_updated by server side %s!"), payload)
Nominated several times for a Latin Grammy.
“You have arrived at a private party. A party where not everyone is welcome; you have to be a friend of the host to get in. You have to be ready. The party is cheerful, but full of nostalgia, and you’re not sure why. To some extent, an insular party. It is decorated with the most elegant drapery. The well-dressed waiters, light on their feet, walk on carpet to avoid making a noise, in the French style. This is not a boisterous wedding party. It is more like a liturgy. It is the debut of a woman who only yesterday was a little girl, but who, without you noticing, has grown up to be a beautiful woman.
#!/usr/bin/python
'''
watchlist.py
Copyright (C) 2011 Pradeep Balan Pillai
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>
'''
import gtk
import pickle
class Watchlist:
    def __init__(self):
        self.tickers = []

    # Load the tickers from pickled list of stocks
    def load_tickers(self):
        ticker_file = open('ticker_list', 'rb')
        if ticker_file.readlines() == []:   # Check whether the file contains data
            ticker_file.close()             # else add data before initiating
            self.add_stocks()
            ticker_file = open('ticker_list', 'rb')
            self.tickers = pickle.load(ticker_file)
        else:
            ticker_file = open('ticker_list', 'rb')
            self.tickers = pickle.load(ticker_file)
        ticker_file.close()

    # Add stocks to watchlist. Argument should be pair of display_name:stock_code dictionary
    def add_stocks(self, stocks={}):
        pickled_stocks = {}
        f = open('watch_list', 'rb')  # Load existing watchlist, add new stocks and write
        try:
            pickled_stocks = pickle.load(f)
            f.close()
            for key in stocks.keys():
                pickled_stocks[key] = stocks[key]
            f = open('watch_list', 'wb')
            pickle.dump(pickled_stocks, f)
            f.close()
        except EOFError:
            f.close()
            return 0

    def delete_stocks(self, stocks={}):
        pass
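

# ---------------------------------------------------------------------------
# Illustrative usage sketch (not part of the original file): add_stocks()
# expects a {display_name: stock_code} dict and persists it into the pickled
# 'watch_list' file, which must already contain a pickled dict (an empty one
# is seeded below). The ticker symbols are made-up examples.
# ---------------------------------------------------------------------------
if __name__ == '__main__':
    import os
    if not os.path.exists('watch_list'):
        with open('watch_list', 'wb') as seed:
            pickle.dump({}, seed)
    wl = Watchlist()
    wl.add_stocks({'Infosys': 'INFY.NS', 'State Bank of India': 'SBIN.NS'})
    with open('watch_list', 'rb') as f:
        print(pickle.load(f))   # the stored {display_name: stock_code} dict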
Refreshments – Tasty Bacon & egg barms.
A Christian Aid Week social and planning evening, January 25th in the Broadhead Room at St Anne’s church, Turton.
Come and join interested people in sharing ideas, plans and to hear about projects and resources from our local area representative.
6.30pm: CAROL SERVICE at Christ Church.
We had an absolutely fantastic weekend, filled with fun, friendship and entertainment, when twelve of us spent the weekend in Llandudno.
A comprehensive 11-day pilgrimage from Manchester to Galilee led by Rev John McGrath.
Dawn & Jane wish to thank all those who help in any way to keep things running smoothly in our Parish. Your help, support, knowledge and wisdom are always appreciated.
# Copyright (C) 2013-2015 Codethink Limited
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; version 2 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program. If not, see <http://www.gnu.org/licenses/>.
#
# =*= License: GPL-2 =*=
import cliapp
import morphlib
class MorphologyFinder(object):
'''Abstract away finding morphologies in a git repository.
This class provides an abstraction layer between a git repository
and the morphologies contained in it.
'''
def __init__(self, gitdir, ref=None):
self.gitdir = gitdir
self.ref = ref
def read_morphology(self, filename):
'''Return the un-parsed text of a morphology.
For the given morphology name, locate and return the contents
of the morphology as a string.
Parsing of this morphology into a form useful for manipulating
is handled by the MorphologyLoader class.
'''
return self.gitdir.read_file(filename, self.ref)
def list_morphologies(self):
'''Return the filenames of all morphologies in the (repo, ref).
Finds all morphologies in the git directory at the specified
ref.
'''
def is_morphology_path(path):
return path.endswith('.morph')
return (path
for path in self.gitdir.list_files(self.ref)
if is_morphology_path(path))
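A small, self-contained sketch of how this class might be exercised; the FakeGitDir stand-in below is not part of morphlib, it only mimics the two gitdir methods the class relies on (list_files and read_file).
class FakeGitDir(object):
    '''Stand-in exposing the two methods MorphologyFinder uses.'''
    def __init__(self, files):
        self.files = files
    def list_files(self, ref):
        return list(self.files)
    def read_file(self, filename, ref):
        return self.files[filename]

gitdir = FakeGitDir({'strata/core.morph': 'name: core\nkind: stratum\n',
                     'README': 'not a morphology'})
finder = MorphologyFinder(gitdir, ref='master')
for path in finder.list_morphologies():
    print(path, len(finder.read_morphology(path)))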
|
Alice can you hear me!
Alice can you see me!
Alice can you touch me!
Alice can you feel me!
Only the White Rabbit knows…..coming soon to a theatre near you……….. |
# -*- coding: utf-8 -*-
from operator import attrgetter
from pyangbind.lib.yangtypes import RestrictedPrecisionDecimalType
from pyangbind.lib.yangtypes import RestrictedClassType
from pyangbind.lib.yangtypes import TypedListType
from pyangbind.lib.yangtypes import YANGBool
from pyangbind.lib.yangtypes import YANGListType
from pyangbind.lib.yangtypes import YANGDynClass
from pyangbind.lib.yangtypes import ReferenceType
from pyangbind.lib.base import PybindBase
from collections import OrderedDict
from decimal import Decimal
from bitarray import bitarray
import six
# PY3 support of some PY2 keywords (needs improvement)
if six.PY3:
import builtins as __builtin__
long = int
elif six.PY2:
import __builtin__
from . import state
class unknown_tlv(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-network-instance - based on the path /network-instances/network-instance/protocols/protocol/ospfv2/areas/area/lsdb/lsa-types/lsa-type/lsas/lsa/opaque-lsa/router-information/tlvs/tlv/unknown-tlv. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: An unknown TLV within the context. Unknown TLVs are
defined to be the set of TLVs that are not modelled
within the OpenConfig model, or are unknown to the
local system such that it cannot decode their value.
"""
__slots__ = ("_path_helper", "_extmethods", "__state")
_yang_name = "unknown-tlv"
_pybind_generated_by = "container"
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__state = YANGDynClass(
base=state.state,
is_container="container",
yang_name="state",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
extensions=None,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="container",
is_config=False,
)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path() + [self._yang_name]
else:
return [
"network-instances",
"network-instance",
"protocols",
"protocol",
"ospfv2",
"areas",
"area",
"lsdb",
"lsa-types",
"lsa-type",
"lsas",
"lsa",
"opaque-lsa",
"router-information",
"tlvs",
"tlv",
"unknown-tlv",
]
def _get_state(self):
"""
Getter method for state, mapped from YANG variable /network_instances/network_instance/protocols/protocol/ospfv2/areas/area/lsdb/lsa_types/lsa_type/lsas/lsa/opaque_lsa/router_information/tlvs/tlv/unknown_tlv/state (container)
YANG Description: Contents of an unknown TLV within the LSA
"""
return self.__state
def _set_state(self, v, load=False):
"""
Setter method for state, mapped from YANG variable /network_instances/network_instance/protocols/protocol/ospfv2/areas/area/lsdb/lsa_types/lsa_type/lsas/lsa/opaque_lsa/router_information/tlvs/tlv/unknown_tlv/state (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_state is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_state() directly.
YANG Description: Contents of an unknown TLV within the LSA
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=state.state,
is_container="container",
yang_name="state",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
extensions=None,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="container",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """state must be of a type compatible with container""",
"defined-type": "container",
"generated-type": """YANGDynClass(base=state.state, is_container='container', yang_name="state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=False)""",
}
)
self.__state = t
if hasattr(self, "_set"):
self._set()
def _unset_state(self):
self.__state = YANGDynClass(
base=state.state,
is_container="container",
yang_name="state",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
extensions=None,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="container",
is_config=False,
)
state = __builtin__.property(_get_state)
_pyangbind_elements = OrderedDict([("state", state)])
from . import state
class unknown_tlv(PybindBase):
"""
This class was auto-generated by the PythonClass plugin for PYANG
from YANG module openconfig-network-instance-l2 - based on the path /network-instances/network-instance/protocols/protocol/ospfv2/areas/area/lsdb/lsa-types/lsa-type/lsas/lsa/opaque-lsa/router-information/tlvs/tlv/unknown-tlv. Each member element of
the container is represented as a class variable - with a specific
YANG type.
YANG Description: An unknown TLV within the context. Unknown TLVs are
defined to be the set of TLVs that are not modelled
within the OpenConfig model, or are unknown to the
local system such that it cannot decode their value.
"""
__slots__ = ("_path_helper", "_extmethods", "__state")
_yang_name = "unknown-tlv"
_pybind_generated_by = "container"
def __init__(self, *args, **kwargs):
self._path_helper = False
self._extmethods = False
self.__state = YANGDynClass(
base=state.state,
is_container="container",
yang_name="state",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
extensions=None,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="container",
is_config=False,
)
load = kwargs.pop("load", None)
if args:
if len(args) > 1:
raise TypeError("cannot create a YANG container with >1 argument")
all_attr = True
for e in self._pyangbind_elements:
if not hasattr(args[0], e):
all_attr = False
break
if not all_attr:
raise ValueError("Supplied object did not have the correct attributes")
for e in self._pyangbind_elements:
nobj = getattr(args[0], e)
if nobj._changed() is False:
continue
setmethod = getattr(self, "_set_%s" % e)
if load is None:
setmethod(getattr(args[0], e))
else:
setmethod(getattr(args[0], e), load=load)
def _path(self):
if hasattr(self, "_parent"):
return self._parent._path() + [self._yang_name]
else:
return [
"network-instances",
"network-instance",
"protocols",
"protocol",
"ospfv2",
"areas",
"area",
"lsdb",
"lsa-types",
"lsa-type",
"lsas",
"lsa",
"opaque-lsa",
"router-information",
"tlvs",
"tlv",
"unknown-tlv",
]
def _get_state(self):
"""
Getter method for state, mapped from YANG variable /network_instances/network_instance/protocols/protocol/ospfv2/areas/area/lsdb/lsa_types/lsa_type/lsas/lsa/opaque_lsa/router_information/tlvs/tlv/unknown_tlv/state (container)
YANG Description: Contents of an unknown TLV within the LSA
"""
return self.__state
def _set_state(self, v, load=False):
"""
Setter method for state, mapped from YANG variable /network_instances/network_instance/protocols/protocol/ospfv2/areas/area/lsdb/lsa_types/lsa_type/lsas/lsa/opaque_lsa/router_information/tlvs/tlv/unknown_tlv/state (container)
If this variable is read-only (config: false) in the
source YANG file, then _set_state is considered as a private
method. Backends looking to populate this variable should
do so via calling thisObj._set_state() directly.
YANG Description: Contents of an unknown TLV within the LSA
"""
if hasattr(v, "_utype"):
v = v._utype(v)
try:
t = YANGDynClass(
v,
base=state.state,
is_container="container",
yang_name="state",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
extensions=None,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="container",
is_config=False,
)
except (TypeError, ValueError):
raise ValueError(
{
"error-string": """state must be of a type compatible with container""",
"defined-type": "container",
"generated-type": """YANGDynClass(base=state.state, is_container='container', yang_name="state", parent=self, path_helper=self._path_helper, extmethods=self._extmethods, register_paths=True, extensions=None, namespace='http://openconfig.net/yang/network-instance', defining_module='openconfig-network-instance', yang_type='container', is_config=False)""",
}
)
self.__state = t
if hasattr(self, "_set"):
self._set()
def _unset_state(self):
self.__state = YANGDynClass(
base=state.state,
is_container="container",
yang_name="state",
parent=self,
path_helper=self._path_helper,
extmethods=self._extmethods,
register_paths=True,
extensions=None,
namespace="http://openconfig.net/yang/network-instance",
defining_module="openconfig-network-instance",
yang_type="container",
is_config=False,
)
state = __builtin__.property(_get_state)
_pyangbind_elements = OrderedDict([("state", state)])
|
French beans are smaller than common green beans and have a soft, velvety pod. Quite fleshy for their size, these delicate pods hold only tiny seeds. French beans are sweet, tender and wonderfully crispy; picked while still small and cooked at that size, they taste out of this world. French beans generally grow as a bush twelve to twenty inches tall, depending upon the variety you decide to grow. However, some varieties climb like runner beans and can reach up to seven feet tall.
The mild, sweet, almost nutty flavor of cauliflower is at its best from December through March, when it is in season and most plentiful in your local markets. Cauliflower has a compact head (called a “curd”), usually about six inches in diameter, that is composed of undeveloped flower buds attached to a central stalk. Cauliflower, a cruciferous vegetable, is in the same plant family as broccoli, kale and cabbage. When broken apart into separate buds, cauliflower looks like a little tree, something many kids are fascinated by. Raw cauliflower is firm yet a bit spongy in texture, with a slightly sulfurous and faintly bitter flavor.
Red pumpkin, or Lal bhopla, is the poor man’s source of carotene in India. Other sources (like apples and carrots) are normally rather expensive when not at the peak of their season. Red pumpkin, however, is ‘in season’ throughout the year, unlike in Western countries, where it is typically an autumn appearance. |
# vim:fileencoding=utf-8
import requests
import json
from .oauth import BaseOAuth2
from ..exceptions import AuthFailed
from ..utils import handle_http_errors
class LineOAuth2(BaseOAuth2):
name = 'line'
AUTHORIZATION_URL = 'https://access.line.me/dialog/oauth/weblogin'
ACCESS_TOKEN_URL = 'https://api.line.me/v1/oauth/accessToken'
BASE_API_URL = 'https://api.line.me'
USER_INFO_URL = BASE_API_URL + '/v1/profile'
ACCESS_TOKEN_METHOD = 'POST'
STATE_PARAMETER = True
REDIRECT_STATE = True
ID_KEY = 'mid'
EXTRA_DATA = [
('mid', 'id'),
('expire', 'expire'),
('refreshToken', 'refresh_token')
]
def auth_params(self, state=None):
client_id, client_secret = self.get_key_and_secret()
return {
'client_id': client_id,
'redirect_uri': self.get_redirect_uri(),
'response_type': self.RESPONSE_TYPE
}
def process_error(self, data):
error_code = data.get('errorCode') or \
data.get('statusCode') or \
data.get('error')
error_message = data.get('errorMessage') or \
data.get('statusMessage') or \
                        data.get('error_description')
if error_code is not None or error_message is not None:
raise AuthFailed(self, error_message or error_code)
@handle_http_errors
def auth_complete(self, *args, **kwargs):
"""Completes login process, must return user instance"""
client_id, client_secret = self.get_key_and_secret()
code = self.data.get('code')
self.process_error(self.data)
try:
response = self.request_access_token(
self.access_token_url(),
method=self.ACCESS_TOKEN_METHOD,
params={
'requestToken': code,
'channelSecret': client_secret
}
)
self.process_error(response)
return self.do_auth(response['accessToken'], response=response,
*args, **kwargs)
except requests.HTTPError as err:
self.process_error(json.loads(err.response.content))
def get_user_details(self, response):
response.update({
'fullname': response.get('displayName'),
'picture_url': response.get('pictureUrl')
})
return response
def get_user_id(self, details, response):
"""
Return a unique ID for the current user, by default from
server response.
"""
return response.get(self.ID_KEY)
def user_data(self, access_token, *args, **kwargs):
"""Loads user data from service"""
try:
response = self.get_json(
self.USER_INFO_URL,
headers={
"Authorization": "Bearer {}".format(access_token)
}
)
self.process_error(response)
return response
except requests.HTTPError as err:
self.process_error(err.response.json())
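For context, a hedged sketch of how a backend like this is usually wired into a python-social-auth based project; the import path and setting names below follow the library's conventions (settings derived from the backend's name attribute) but are assumptions, not taken from this file.
AUTHENTICATION_BACKENDS = (
    'social_core.backends.line.LineOAuth2',   # assumed module path for the class above
    'django.contrib.auth.backends.ModelBackend',
)
SOCIAL_AUTH_LINE_KEY = '<channel id>'          # returned by get_key_and_secret() as client_id
SOCIAL_AUTH_LINE_SECRET = '<channel secret>'   # sent as channelSecret by auth_complete()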
|
000036118 - Saving RSA Live credentials in NetWitness Endpoint fails with "Could not store into lockbox"
Issue: Saving the RSA Live credentials in the ECATUI (Configure > Monitoring and External Components) fails with the error "Could not store into lockbox".
Cause: The RSA ECAT API Server service doesn't have permission to write to the lockbox files.
Alternatively, the lockbox files have become corrupt and are no longer writable.
Resolution: 1. Check that the RSA ECAT services have permission to write to files in the Server directory.
For example, in this screenshot, the Administrators group has Full control in the Server directory which gives Write permission.
2. If the Server directory permissions check above wasn't the cause of the issue, then perhaps the lockbox files are not writable.
a. On the ECAT Server in the ECATUI, Configure > Monitoring and External Components.
Make a copy of all the settings you have previously made, so these values can be re-entered later.
b. Exit the ECATUI, and stop all the ECAT services on the ECAT Server.
c. On the ECAT Server, in the Server directory (default path C:\Program Files\RSA\ECAT\Server), make a copy of all 4 km* files (km, km.bak, km.bak.FCD, km.FCD) to another location. These are the lockbox files and their backups.
d. Delete the 4 km* files.
e. Start all the ECAT services; the km* files will automatically be re-created in the Server directory.
f. In the ECATUI, Configure > Monitoring and External Components re-enter all the settings, starting with RSA Live. Ensure the RSA Live credentials can now be saved. |
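A hedged helper script for steps 2c and 2d above: it backs up the four km* lockbox files and then removes the originals so the services can re-create them. The Server path is the default quoted in the article; the backup location is arbitrary, and the ECAT services must already be stopped.
import glob
import os
import shutil

SERVER_DIR = r"C:\Program Files\RSA\ECAT\Server"   # default path from step 2c
BACKUP_DIR = r"C:\Temp\ecat_lockbox_backup"        # any location outside the Server directory

os.makedirs(BACKUP_DIR, exist_ok=True)
for path in glob.glob(os.path.join(SERVER_DIR, "km*")):   # km, km.bak, km.bak.FCD, km.FCD
    shutil.copy2(path, BACKUP_DIR)   # step 2c: keep a copy of each lockbox file
    os.remove(path)                  # step 2d: delete the original so it gets re-created
print("Lockbox files backed up to", BACKUP_DIR)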
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# Copyright (c) 2017 F5 Networks Inc.
# Copyright (c) 2013 Matt Hite <[email protected]>
# GNU General Public License v3.0 (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
ANSIBLE_METADATA = {'metadata_version': '1.1',
'status': ['preview'],
'supported_by': 'community'}
DOCUMENTATION = r'''
---
module: _bigip_node
short_description: Manages F5 BIG-IP LTM nodes
deprecated: Deprecated in 2.5. Use the C(bigip_node) module instead.
description:
- "Manages F5 BIG-IP LTM nodes via iControl SOAP API"
version_added: "1.4"
author:
- Matt Hite (@mhite)
- Tim Rupp (@caphrim007)
notes:
- "Requires BIG-IP software version >= 11"
- "F5 developed module 'bigsuds' required (see http://devcentral.f5.com)"
- "Best run as a local_action in your playbook"
requirements:
- bigsuds
options:
state:
description:
- Pool member state.
required: True
default: present
choices: ['present', 'absent']
session_state:
description:
- Set new session availability status for node.
version_added: "1.9"
choices: ['enabled', 'disabled']
monitor_state:
description:
- Set monitor availability status for node.
version_added: "1.9"
choices: ['enabled', 'disabled']
partition:
description:
- Partition.
default: Common
name:
description:
- Node name.
monitor_type:
description:
- Monitor rule type when monitors > 1.
version_added: "2.2"
choices: ['and_list', 'm_of_n']
quorum:
description:
- Monitor quorum value when monitor_type is m_of_n.
version_added: "2.2"
monitors:
description:
- Monitor template name list. Always use the full path to the monitor.
version_added: "2.2"
host:
description:
- Node IP. Required when state=present and node does not exist. Error when
C(state) is C(absent).
required: True
aliases: ['address', 'ip']
description:
description:
- Node description.
extends_documentation_fragment: f5
'''
EXAMPLES = r'''
- name: Add node
bigip_node:
server: lb.mydomain.com
user: admin
password: secret
state: present
partition: Common
host: 10.20.30.40
name: 10.20.30.40
delegate_to: localhost
# Note that the BIG-IP automatically names the node using the
# IP address specified in previous play's host parameter.
# Future plays referencing this node no longer use the host
# parameter but instead use the name parameter.
# Alternatively, you could have specified a name with the
# name parameter when state=present.
- name: Add node with a single 'ping' monitor
bigip_node:
server: lb.mydomain.com
user: admin
password: secret
state: present
partition: Common
host: 10.20.30.40
name: mytestserver
monitors:
- /Common/icmp
delegate_to: localhost
- name: Modify node description
bigip_node:
server: lb.mydomain.com
user: admin
password: secret
state: present
partition: Common
name: 10.20.30.40
description: Our best server yet
delegate_to: localhost
- name: Delete node
bigip_node:
server: lb.mydomain.com
user: admin
password: secret
state: absent
partition: Common
name: 10.20.30.40
delegate_to: localhost
# The BIG-IP GUI doesn't map directly to the API calls for "Node ->
# General Properties -> State". The following states map to API monitor
# and session states.
#
# Enabled (all traffic allowed):
# monitor_state=enabled, session_state=enabled
# Disabled (only persistent or active connections allowed):
# monitor_state=enabled, session_state=disabled
# Forced offline (only active connections allowed):
# monitor_state=disabled, session_state=disabled
#
# See https://devcentral.f5.com/questions/icontrol-equivalent-call-for-b-node-down
- name: Force node offline
bigip_node:
server: lb.mydomain.com
user: admin
password: mysecret
state: present
session_state: disabled
monitor_state: disabled
partition: Common
name: 10.20.30.40
delegate_to: localhost
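# Illustrative addition (not from the original module docs): combines the
# monitor-related options documented above; the monitor names are placeholders.
- name: Add node with an m_of_n monitor rule (any 2 of 3 monitors must pass)
  bigip_node:
    server: lb.mydomain.com
    user: admin
    password: secret
    state: present
    partition: Common
    host: 10.20.30.41
    name: 10.20.30.41
    monitor_type: m_of_n
    quorum: 2
    monitors:
      - /Common/icmp
      - /Common/tcp
      - /Common/http
  delegate_to: localhost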
'''
def node_exists(api, address):
# hack to determine if node exists
result = False
try:
api.LocalLB.NodeAddressV2.get_object_status(nodes=[address])
result = True
except bigsuds.OperationFailed as e:
if "was not found" in str(e):
result = False
else:
# genuine exception
raise
return result
def create_node_address(api, address, name):
try:
api.LocalLB.NodeAddressV2.create(
nodes=[name],
addresses=[address],
limits=[0]
)
result = True
desc = ""
except bigsuds.OperationFailed as e:
if "already exists" in str(e):
result = False
desc = "referenced name or IP already in use"
else:
# genuine exception
raise
return (result, desc)
def get_node_address(api, name):
return api.LocalLB.NodeAddressV2.get_address(nodes=[name])[0]
def delete_node_address(api, address):
try:
api.LocalLB.NodeAddressV2.delete_node_address(nodes=[address])
result = True
desc = ""
except bigsuds.OperationFailed as e:
if "is referenced by a member of pool" in str(e):
result = False
desc = "node referenced by pool"
else:
# genuine exception
raise
return (result, desc)
def set_node_description(api, name, description):
api.LocalLB.NodeAddressV2.set_description(nodes=[name],
descriptions=[description])
def get_node_description(api, name):
return api.LocalLB.NodeAddressV2.get_description(nodes=[name])[0]
def set_node_session_enabled_state(api, name, session_state):
session_state = "STATE_%s" % session_state.strip().upper()
api.LocalLB.NodeAddressV2.set_session_enabled_state(nodes=[name],
states=[session_state])
def get_node_session_status(api, name):
result = api.LocalLB.NodeAddressV2.get_session_status(nodes=[name])[0]
result = result.split("SESSION_STATUS_")[-1].lower()
return result
def set_node_monitor_state(api, name, monitor_state):
monitor_state = "STATE_%s" % monitor_state.strip().upper()
api.LocalLB.NodeAddressV2.set_monitor_state(nodes=[name],
states=[monitor_state])
def get_node_monitor_status(api, name):
result = api.LocalLB.NodeAddressV2.get_monitor_status(nodes=[name])[0]
result = result.split("MONITOR_STATUS_")[-1].lower()
return result
def get_monitors(api, name):
result = api.LocalLB.NodeAddressV2.get_monitor_rule(nodes=[name])[0]
monitor_type = result['type'].split("MONITOR_RULE_TYPE_")[-1].lower()
quorum = result['quorum']
monitor_templates = result['monitor_templates']
return (monitor_type, quorum, monitor_templates)
def set_monitors(api, name, monitor_type, quorum, monitor_templates):
monitor_type = "MONITOR_RULE_TYPE_%s" % monitor_type.strip().upper()
monitor_rule = {'type': monitor_type, 'quorum': quorum, 'monitor_templates': monitor_templates}
api.LocalLB.NodeAddressV2.set_monitor_rule(nodes=[name],
monitor_rules=[monitor_rule])
def main():
monitor_type_choices = ['and_list', 'm_of_n']
argument_spec = f5_argument_spec()
meta_args = dict(
session_state=dict(type='str', choices=['enabled', 'disabled']),
monitor_state=dict(type='str', choices=['enabled', 'disabled']),
name=dict(type='str', required=True),
host=dict(type='str', aliases=['address', 'ip']),
description=dict(type='str'),
monitor_type=dict(type='str', choices=monitor_type_choices),
quorum=dict(type='int'),
monitors=dict(type='list')
)
argument_spec.update(meta_args)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True
)
if module.params['validate_certs']:
import ssl
if not hasattr(ssl, 'SSLContext'):
module.fail_json(msg='bigsuds does not support verifying certificates with python < 2.7.9. Either update python or set validate_certs=False on the task')
server = module.params['server']
server_port = module.params['server_port']
user = module.params['user']
password = module.params['password']
state = module.params['state']
partition = module.params['partition']
validate_certs = module.params['validate_certs']
session_state = module.params['session_state']
monitor_state = module.params['monitor_state']
host = module.params['host']
name = module.params['name']
address = fq_name(partition, name)
description = module.params['description']
monitor_type = module.params['monitor_type']
if monitor_type:
monitor_type = monitor_type.lower()
quorum = module.params['quorum']
monitors = module.params['monitors']
if monitors:
monitors = []
for monitor in module.params['monitors']:
monitors.append(fq_name(partition, monitor))
# sanity check user supplied values
if state == 'absent' and host is not None:
module.fail_json(msg="host parameter invalid when state=absent")
if monitors:
if len(monitors) == 1:
# set default required values for single monitor
quorum = 0
monitor_type = 'single'
elif len(monitors) > 1:
if not monitor_type:
module.fail_json(msg="monitor_type required for monitors > 1")
if monitor_type == 'm_of_n' and not quorum:
module.fail_json(msg="quorum value required for monitor_type m_of_n")
if monitor_type != 'm_of_n':
quorum = 0
elif monitor_type:
# no monitors specified but monitor_type exists
module.fail_json(msg="monitor_type require monitors parameter")
elif quorum is not None:
# no monitors specified but quorum exists
module.fail_json(msg="quorum requires monitors parameter")
try:
api = bigip_api(server, user, password, validate_certs, port=server_port)
result = {'changed': False} # default
if state == 'absent':
if node_exists(api, address):
if not module.check_mode:
deleted, desc = delete_node_address(api, address)
if not deleted:
module.fail_json(msg="unable to delete: %s" % desc)
else:
result = {'changed': True}
else:
# check-mode return value
result = {'changed': True}
elif state == 'present':
if not node_exists(api, address):
if host is None:
module.fail_json(msg="host parameter required when "
"state=present and node does not exist")
if not module.check_mode:
created, desc = create_node_address(api, address=host, name=address)
if not created:
module.fail_json(msg="unable to create: %s" % desc)
else:
result = {'changed': True}
if session_state is not None:
set_node_session_enabled_state(api, address,
session_state)
result = {'changed': True}
if monitor_state is not None:
set_node_monitor_state(api, address, monitor_state)
result = {'changed': True}
if description is not None:
set_node_description(api, address, description)
result = {'changed': True}
if monitors:
set_monitors(api, address, monitor_type, quorum, monitors)
else:
# check-mode return value
result = {'changed': True}
else:
# node exists -- potentially modify attributes
if host is not None:
if get_node_address(api, address) != host:
module.fail_json(msg="Changing the node address is "
"not supported by the API; "
"delete and recreate the node.")
if session_state is not None:
session_status = get_node_session_status(api, address)
if session_state == 'enabled' and \
session_status == 'forced_disabled':
if not module.check_mode:
set_node_session_enabled_state(api, address,
session_state)
result = {'changed': True}
                    elif session_state == 'disabled' and \
                            session_status != 'forced_disabled':
if not module.check_mode:
set_node_session_enabled_state(api, address,
session_state)
result = {'changed': True}
if monitor_state is not None:
monitor_status = get_node_monitor_status(api, address)
if monitor_state == 'enabled' and \
monitor_status == 'forced_down':
if not module.check_mode:
set_node_monitor_state(api, address,
monitor_state)
result = {'changed': True}
elif monitor_state == 'disabled' and \
monitor_status != 'forced_down':
if not module.check_mode:
set_node_monitor_state(api, address,
monitor_state)
result = {'changed': True}
if description is not None:
if get_node_description(api, address) != description:
if not module.check_mode:
set_node_description(api, address, description)
result = {'changed': True}
if monitors:
t_monitor_type, t_quorum, t_monitor_templates = get_monitors(api, address)
if (t_monitor_type != monitor_type) or (t_quorum != quorum) or (set(t_monitor_templates) != set(monitors)):
if not module.check_mode:
set_monitors(api, address, monitor_type, quorum, monitors)
result = {'changed': True}
except Exception as e:
module.fail_json(msg="received exception: %s" % e)
module.exit_json(**result)
from ansible.module_utils.basic import *
from ansible.module_utils.f5_utils import *
if __name__ == '__main__':
main()
|
Zero Waste Skincare and Haircare: meet GiseleByLAMissApple!
Today we're kicking off a series of interviews to present you the brands we work with and the passionate people behind the products we bring in your homes.
In this 1st episode we are thrilled to introduce Anne, founder and maker of the Los Angeles based brand GiseleByLAMissApple. We fell in love with her shampoo bars, hair mask, bath bombs and make-up remover. We are very excited to share the story of a true artist with many talents!
Why did you start Gisele by LAMissApple?
When I was 23 years old I had serious health issues. It was a pivotal moment in my life when I discovered there was a good chance my disease was linked to the "regular" skincare products I was using every day, in particular my deodorant. These products are available everywhere, and I trusted them. How could I know they were so dangerous for my health?
From this moment on, I eliminated all nasty chemicals from my life. I searched for organic ways to improve my health and the health of my little boy who was 1 year old at that time.
I spent 700 hours studying aromatherapy in France. Through this experience, I learned all the benefits that essential oils can bring us and how to use them to make perfectly safe skincare products. I'm grateful today that I've built a business around my passion and that I'm able to share my products with so many people around the world.
How did you choose the name GiseleByLAMissApple?
Gisele was my son's grandmother. I spent 6 months with her when she was going through chemo. I shared my first lotions with her, and the bond between us became very strong. This name is a way to pay tribute to her.
And La Miss Apple was my French nickname ("la petite pomme") when I was a little girl.
Where does your inspiration to create your products come from?
My inspiration comes from Nature: I'm deeply curious about flowers, herbs, wax, seaweeds... all these natural substances have so many healing virtues that we don't need synthetic ingredients.
I choose the best plants and oils and incorporate them in all my products to treat skin and hair. It's safe for us and for our planet. For me it's obvious that it's the best thing to do.
What are your favorite products?
It's a difficult question! I would say the coconut hair mask, because it reminds me of my son. I created this product for his huge curly hair! Now I love that his hair has never looked so shiny or felt so soft.
My second favorites are my solid shampoos: I'm very proud I came up with a product that is so eco-friendly (no plastic bottle, safe ingredients). It's truly zero waste, it has health benefits and, as my customers would say, "it's so simple and it works!" I created 9 different types, because we all have different hair. Pick one and you will see!
If I could choose one more product, it would of course be my deodorant. Although I've found a safe recipe, it's hard to find the perfect zero waste and plastic-free container, so it's still a work in progress.
What was your plastic pollution eye opening moment?
I got a wake up call in 2002. I was line producer for an advertising agency, we were making lots of commercials to promote luxury cosmetics brands. Between their formulas full of chemicals and their plastic packaging, you can imagine it was a little hard for me.
One day I was working on a short film I was producing. During the editing I went through images that were so shocking I still remember them: groups of children looking for food, climbing mountains of garbage, plastic was everywhere. I couldn't handle it but this convinced me I had to do something to raise awareness about this worldwide issue. And that's what happened years later when I started LAMissApple, this is my contribution.
What’s the easiest zero waste change you've made?
All the containers I use are recycled and I don't buy plastic anymore: I make all my beauty and household products myself; I even made some shopping bags and sewed pads to clean my face (no more disposable cotton balls). And when my friends order from me, I sanitize and reuse their own containers.
It's also important for me to contribute to keeping our beautiful canyons clean. I go on a hike every week with my kids and we pick up all the plastic bottles we find. We bring what's recyclable to the recycling center and they keep the money they receive. This is how I teach them about caring for our environment.
My big project for 2018 is to spread the word about responsible consumption: I want to offer more ways to respect our planet, take care of our health, and protect our kids. I want to become a reference brand for better living.
Why not build an app, a platform or a concept store based on a zero waste and toxin-free lifestyle? Wish me luck! |
import sys
from io import TextIOWrapper
def get_categorized_prices_from_file(filepath: str) -> dict:
prices = {}
try:
with open(filepath, 'r') as f:
assert isinstance(f, TextIOWrapper)
for line_number, line_content in enumerate(f):
try:
if line_content and len(line_content.rstrip('\n')) > 0:
splitted_line = line_content.rstrip('\n').split(',')
category = splitted_line[-2]
price_float = float(splitted_line[-1])
if category in prices:
prices[category].append(price_float)
else:
prices[category] = [price_float]
except ValueError:
                    print('Price on row {row} is not convertible to float; it is excluded from the result.'.format(row=line_number + 1))
except IOError:
print("Failed to open data file.")
return prices
def average_prices_from_file(prices: dict) -> dict:
average_prices = {}
for key in prices:
average_prices[key] = sum(prices[key]) / float(len(prices[key]))
return average_prices
def print_categorized_average_prices(average_prices: dict):
for key in average_prices:
print('{} - average price: {:.2f}'.format(key, average_prices[key]))
def main():
prices = get_categorized_prices_from_file('./data/catalog_sample.csv')
average_prices = average_prices_from_file(prices)
print_categorized_average_prices(average_prices)
print('\n')
prices = get_categorized_prices_from_file('./data/catalog_full.csv')
average_price = average_prices_from_file(prices)
print_categorized_average_prices(average_price)
if __name__ == "__main__":
sys.exit(int(main() or 0)) |
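For reference, a sketch of the CSV layout the parser above expects: the category is read from the second-to-last comma-separated field and the price from the last one, so any leading columns are ignored. The file name and rows below are invented for illustration.
# hypothetical contents of ./data/catalog_sample.csv
1001,Wireless Mouse,Electronics,19.99
1002,Desk Lamp,Home,24.50
1003,USB Cable,Electronics,5.25
# print_categorized_average_prices() would then report, in some order:
# Electronics - average price: 12.62
# Home - average price: 24.50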
From his experience growing up as a Palestinian in Israel to his extensive research and practice in the field, Nadim Rouhana has examined protracted social conflict from various angles. His research highlights the centrality of identity, history, and justice in such conflicts and underscores the need to develop both theories and practical tools for addressing them. He is the former director of Point of View, an international research and retreat center at George Mason University’s Institute for Conflict Analysis and Resolution. He comes to the Fletcher School with an illustrious career under his belt, and he sees his role as an opportunity to critically examine conflict studies and develop a new paradigm of conflict resolution that will include perspectives beyond those developed in the West.
Rouhana’s recent publications and projects underscore his commitment to highlighting the importance of history in conflict studies. He is currently working with a colleague from UCLA to plan a conference titled “Looking Past, Looking Forward: The History and Future of the Israeli-Palestinian Conflict.” The conference kicks off a project to discuss future relations between Palestinians and Israelis by examining the nature of their past encounters.
“Reconciling History and Equal Citizenship in Israel: Democracy and the Politics of Historical Denial.” The Politics of Reconciliation in Multicultural Societies, edited by Will Kymlicka and Bashir Bashir. Oxford University Press, 2008.
Co-organizer and co-chair of “The Future of Dialogue and Problem Solving Workshops,” a one-day workshop held at Point of View, George Mason University, April 2008.
“Exile and Return in Israeli and Palestinian Discourse: Between Division and Coexistence.” Paper presented at conference on “Di/Visions: Culture and Politics of the Middle East,” House of World Cultures, Berlin, Germany, January 2008.
Participant in the Yale Law School Middle East Legal Studies Seminar, “The Moral Imperative in the Middle East,” Yale Law School, Istanbul, Turkey, January 2008. |
"""This module contains methods for density ratio estimation."""
import logging
from functools import partial
import numpy as np
logger = logging.getLogger(__name__)
def calculate_densratio_basis_sigma(sigma_1, sigma_2):
"""Heuristic way to choose a basis sigma for density ratio estimation.
Parameters
----------
sigma_1 : float
Standard deviation related to population 1
sigma_2 : float
Standard deviation related to population 2
Returns
-------
float
Basis function scale parameter that works often well in practice.
"""
sigma = sigma_1 * sigma_2 / np.sqrt(np.abs(sigma_1 ** 2 - sigma_2 ** 2))
return sigma
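# Worked example (illustrative): with sigma_1 = 2.0 and sigma_2 = 1.0 the heuristic
# gives 2.0 * 1.0 / np.sqrt(abs(2.0 ** 2 - 1.0 ** 2)) = 2 / sqrt(3) ≈ 1.155.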
class DensityRatioEstimation:
"""A density ratio estimation class."""
def __init__(self,
n=100,
epsilon=0.1,
max_iter=500,
abs_tol=0.01,
conv_check_interval=20,
fold=5,
optimize=False):
"""Construct the density ratio estimation algorithm object.
Parameters
----------
n : int
Number of RBF basis functions.
epsilon : float
Parameter determining speed of gradient descent.
max_iter : int
Maximum number of iterations used in gradient descent optimization of the weights.
abs_tol : float
Absolute tolerance value for determining convergence of optimization of the weights.
conv_check_interval : int
Integer defining the interval of convergence checks in gradient descent.
fold : int
Number of folds in likelihood cross validation used to optimize basis scale-params.
optimize : boolean
Boolean indicating whether or not to optimize RBF scale.
"""
self.n = n
self.epsilon = epsilon
self.max_iter = max_iter
self.abs_tol = abs_tol
self.fold = fold
self.sigma = None
self.conv_check_interval = conv_check_interval
self.optimize = optimize
def fit(self,
x,
y,
weights_x=None,
weights_y=None,
sigma=None):
"""Fit the density ratio estimation object.
Parameters
----------
x : array
Sample from the nominator distribution.
        y : array
Sample from the denominator distribution.
weights_x : array
Vector of non-negative nominator sample weights, must be able to normalize.
weights_y : array
Vector of non-negative denominator sample weights, must be able to normalize.
sigma : float or list
            Either a single RBF kernel scale or a list of candidate scales; the scale used is selected at the initial call to fit.
"""
self.x_len = x.shape[0]
self.y_len = y.shape[0]
x = x.reshape(self.x_len, -1)
y = y.reshape(self.y_len, -1)
self.x = x
if self.x_len < self.n:
raise ValueError("Number of RBFs ({}) can't be larger "
"than number of samples ({}).".format(self.n, self.x_len))
self.theta = x[:self.n, :]
if weights_x is None:
weights_x = np.ones(self.x_len)
if weights_y is None:
weights_y = np.ones(self.y_len)
self.weights_x = weights_x / np.sum(weights_x)
self.weights_y = weights_y / np.sum(weights_y)
self.x0 = np.average(x, axis=0, weights=weights_x)
if isinstance(sigma, float):
self.sigma = sigma
self.optimize = False
if self.optimize:
if isinstance(sigma, list):
                scores = [self._KLIEP_lcv(x, y, sigma_i)[0]
                          for sigma_i in sigma]
                self.sigma = sigma[int(np.argmax(scores))]
else:
raise ValueError("To optimize RBF scale, "
"you need to provide a list of candidate scales.")
if self.sigma is None:
raise ValueError("RBF width (sigma) has to provided in first call.")
A = self._compute_A(x, self.sigma)
b, b_normalized = self._compute_b(y, self.sigma)
alpha = self._KLIEP(A, b, b_normalized, weights_x, self.sigma)
self.w = partial(self._weighted_basis_sum, sigma=self.sigma, alpha=alpha)
def _gaussian_basis(self, x, x0, sigma):
"""N-D RBF basis-function with equal scale-parameter for every dim."""
return np.exp(-0.5 * np.sum((x - x0) ** 2) / sigma / sigma)
def _weighted_basis_sum(self, x, sigma, alpha):
"""Weighted sum of gaussian basis functions evaluated at x."""
return np.dot(np.array([[self._gaussian_basis(j, i, sigma) for j in self.theta]
for i in np.atleast_2d(x)]), alpha)
def _compute_A(self, x, sigma):
A = np.array([[self._gaussian_basis(i, j, sigma) for j in self.theta] for i in x])
return A
def _compute_b(self, y, sigma):
b = np.sum(np.array(
[[self._gaussian_basis(i, y[j, :], sigma) * self.weights_y[j]
for j in np.arange(self.y_len)]
for i in self.theta]), axis=1)
b_normalized = b / np.dot(b.T, b)
return b, b_normalized
def _KLIEP_lcv(self, x, y, sigma):
"""Compute KLIEP scores for fold-folds."""
A = self._compute_A(x, sigma)
b, b_normalized = self._compute_b(y, sigma)
non_null = np.any(A > 1e-64, axis=1)
non_null_length = sum(non_null)
if non_null_length == 0:
            # No basis function is active for this sigma; score it as -inf so it is never selected.
            return [-np.inf]
A_full = A[non_null, :]
x_full = x[non_null, :]
weights_x_full = self.weights_x[non_null]
fold_indices = np.array_split(np.arange(non_null_length), self.fold)
score = np.zeros(self.fold)
for i_fold, fold_index in enumerate(fold_indices):
fold_index_minus = np.setdiff1d(np.arange(non_null_length), fold_index)
alpha = self._KLIEP(A=A_full[fold_index_minus, :], b=b, b_normalized=b_normalized,
weights_x=weights_x_full[fold_index_minus], sigma=sigma)
score[i_fold] = np.average(
np.log(self._weighted_basis_sum(x_full[fold_index, :], sigma, alpha)),
weights=weights_x_full[fold_index])
return [np.mean(score)]
def _KLIEP(self, A, b, b_normalized, weights_x, sigma):
"""Kullback-Leibler Importance Estimation Procedure using gradient descent."""
alpha = 1 / self.n * np.ones(self.n)
target_fun_prev = self._weighted_basis_sum(x=self.x, sigma=sigma, alpha=alpha)
abs_diff = 0.0
non_null = np.any(A > 1e-64, axis=1)
A_full = A[non_null, :]
weights_x_full = weights_x[non_null]
for i in np.arange(self.max_iter):
dAdalpha = np.matmul(A_full.T, (weights_x_full / (np.matmul(A_full, alpha))))
alpha += self.epsilon * dAdalpha
alpha = np.maximum(0, alpha + (1 - np.dot(b.T, alpha)) * b_normalized)
alpha = alpha / np.dot(b.T, alpha)
if np.remainder(i, self.conv_check_interval) == 0:
target_fun = self._weighted_basis_sum(x=self.x, sigma=sigma, alpha=alpha)
abs_diff = np.linalg.norm(target_fun - target_fun_prev)
if abs_diff < self.abs_tol:
break
target_fun_prev = target_fun
return alpha
def max_ratio(self):
"""Find the maximum of the density ratio at numerator sample."""
max_value = np.max(self.w(self.x))
return max_value
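A minimal usage sketch for the estimator above, assuming only NumPy and the definitions in this module; the sample sizes, distribution parameters and optimizer settings are arbitrary illustrative choices.
rng = np.random.RandomState(0)
x = rng.normal(0.0, 1.0, size=200)                        # nominator sample
y = rng.normal(0.5, 1.5, size=200)                        # denominator sample
basis_sigma = calculate_densratio_basis_sigma(1.0, 1.5)   # heuristic defined above
dre = DensityRatioEstimation(n=50, epsilon=0.001, max_iter=200)
dre.fit(x, y, sigma=float(basis_sigma))
print(dre.max_ratio())                                    # largest ratio over the nominator sample
print(dre.w(np.array([[0.0], [1.0]])))                    # ratio estimates at two query points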
|
Here we have found several different types of double doors with glass design ideas, and if you are serious about searching for the best home design ideas, you can come to us. The Stylish Double Doors With Glass Interior Glazed French Doors Interior French Doors is one of the pictures related to the previous picture in the collection gallery. The exact dimensions of Stylish Double Doors With Glass Interior Glazed French Doors Interior French Doors are 360×360 pixels, posted by Mila. You can also look for related pictures of Home Furniture Ideas by scrolling down to the collection below this picture. If you want to find other pictures or articles about Double Doors With Glass, just open the gallery, or if you are interested in similar pictures of Stylish Double Doors With Glass Interior Glazed French Doors Interior French Doors, you are free to use the search feature located at the top of this page or the random post section below this post. We hope this helps you find the information you need about this picture.
This particular image of Stylish Double Doors With Glass Interior Glazed French Doors Interior French Doors is part of double door glass designs, double door glass entry, double doors without glass, double front doors with glass images, double pane door glass replacement cost, and just one of our Picture Collection of home furniture ideas on this site. If you are inspired, amazed and charmed by this Stylish Double Doors With Glass Interior Glazed French Doors Interior French Doors, you can download it by right-clicking it and clicking save image as. We hope that, by posting this Stylish Double Doors With Glass Interior Glazed French Doors Interior French Doors, we can fulfill your needs for Door furniture ideas. If you need more Door furniture designs, you can check our collection right below this post. Also, don't forget to visit best Door furniture ideas to find a new and fresh post every day.
Great Double Doors With Glass Elegant Double Glass Interior Doors Interior French Doors Double. Attractive Double Doors With Glass Alluring Front Double Doors With Glass And Wrought Iron Between. Nice Double Doors With Glass Inspiration Of Glass Double Door With Interior Double Doors With. Brilliant Double Doors With Glass Plastpro French Doors French Door Fiberglass Front Doors. Unique Double Doors With Glass Contemporary African Wenge Interior Double Door Lined Frosted. Elegant Double Doors With Glass Modern Interior Double Door Italian Black Apricot With Frosted. Attractive Double Doors With Glass 24 Best Home Decor Images On Pinterest French Patio Black Doors. |
import numpy as np
"""
This class serves as a quick reference for all methods and attributes associated with a python raster function.
Feel free to use this template a starting point for your implementation or as a cheat-sheet.
"""
class Reference():
"""Class name defaults to module name unless specified in the Python Adapter function's property page.
"""
def __init__(self):
"""Initialize your class attributes here.
"""
self.name = "Reference Function" # a short name for the function. Usually named "<something> Function".
self.description = "Story of the function..." # a detailed description of what this function does.
def getParameterInfo(self):
"""This method returns information on each parameter to your function as a list of dictionaries.
This method must be defined.
Args:
None
Returns:
A list of dictionaries where each entry in the list corresponds to an input parameter--and describes the parameter.
These are the recognized attributes of a parameter:
. name: The keyword associated with this parameter that enables dictionary lookup in other methods
. dataType: The data type of the value held by this parameter.
Allowed values: {'numeric', 'string', 'raster', 'rasters', 'boolean'}
. value: The default value associated with this parameter.
. required: Indicates whether this parameter is required or optional. Allowed values: {True, False}.
. displayName: A friendly name that represents this parameter in Python Adapter function's property page and other UI components
. domain: Indicates the set of allowed values for this parameter.
If specified, the property page shows a drop-down list pre-populated with these values.
This attribute is applicable only to string parameters (dataType='string').
. description: Details on this parameter that's displayed as tooltip in Python Adapter function's property page.
"""
return [
{
'name': 'raster',
'dataType': 'raster',
'value': None,
'required': True,
'displayName': "Input Raster",
'description': "The story of this raster...",
},
{
'name': 'processing_parameter',
'dataType': 'numeric',
'value': "<default value>",
'required': False,
'displayName': "Friendly Name",
'description': "The story of this parameter...",
},
# ... add dictionaries here for additional parameters
]
def getConfiguration(self, **scalars):
"""This method can manage how the output raster is pre-constructed gets.
This method, if defined, controls aspects of parent dataset based on all scalar (non-raster) user inputs.
It's invoked after .getParameterInfo() but before .updateRasterInfo().
Args:
            Use scalars['x'] to obtain the user-specified value of the scalar whose 'name' attribute is
'x' in the .getParameterInfo().
Returns:
A dictionary describing the configuration. These are the recognized configuration attributes:
. extractBands: Tuple(ints) containing indexes of bands of the input raster that need to be extracted.
The first band has index 0.
If unspecified, all bands of the input raster are available in .updatePixels()
. compositeRasters: Boolean indicating whether all input rasters are composited as a single multi-band raster.
Defaults to False. If set to True, a raster by the name 'compositeraster' is available
in .updateRasterInfo() and .updatePixels().
. inheritProperties: Bitwise-OR'd integer that indicates the set of input raster properties that are inherited
by the output raster. If unspecified, all properties are inherited.
These are the recognized values:
. 1: Pixel type
. 2: NoData
. 4: Dimensions (spatial reference, extent, and cell-size)
. 8: Resampling type
. invalidateProperties: Bitwise-OR'd integer that indicates the set of properties of the parent dataset that needs
to be invalidated. If unspecified, no property gets invalidated.
These are the recognized values:
. 1: XForm stored by the function raster dataset.
. 2: Statistics stored by the function raster dataset.
. 4: Histogram stored by the function raster dataset.
. 8: The key properties stored by the function raster dataset.
. padding: The number of extra pixels needed on each side of input pixel blocks.
. inputMask: Boolean indicating whether NoData mask arrays associated with all input rasters are needed
by this function for proper construction of output pixels and mask.
If set to True, the input masks are made available in the pixelBlocks keyword
argument in .updatePixels(). For improved performance, input masks are not made available if
attribute is unspecified.
"""
return {
'extractBands': (0, 2), # we only need the first (red) and third (blue) band.
'compositeRasters': False,
'inheritProperties': 2 | 4 | 8, # inherit everything but the pixel type (1)
'invalidateProperties': 2 | 4 | 8, # invalidate these aspects because we are modifying pixel values and updating key properties.
'padding': 0, # No padding needed. Return input pixel block as is.
'inputMask': False # Don't need mask in .updatePixels. Simply use inherited NoData.
}
def updateRasterInfo(self, **kwargs):
"""This method can update the output raster's information.
This method, if defined, gets called after .getConfiguration().
It's invoked each time a function raster dataset containing this python function is initialized.
Args:
kwargs contains all user-specified scalar values and information associated with all input rasters.
Use kwargs['x'] to obtain the user-specified value of the scalar whose 'name' attribute is 'x' in the .getParameterInfo().
            If 'x' represents a raster, kwargs['x_info'] will be a dictionary representing the information associated with the raster.
Access aspects of a particular raster's information like this: kwargs['<rasterName>_info']['<propertyName>']
where <rasterName> corresponds to a raster parameter where 'rasterName' is the value of the 'name' attribute of the parameter.
and <propertyName> is an aspect of the raster information.
If <rasterName> represents a parameter of type rasters (dataType='rasters'), then
kwargs['<rasterName>_info'] is a tuple of raster info dictionaries.
kwargs['output_info'] is always available and populated with values based on the first raster parameter and .getConfiguration().
These are the properties associated with a raster information:
. bandCount: Integer representing the number of bands in the raster.
. pixelType: String representation of pixel type of the raster. These are the allowed values:
{'t1', 't2', 't4', 'i1', 'i2', 'i4', 'u1', 'u2', 'u4', 'f4', 'f8'}
cf: http://docs.scipy.org/doc/numpy/reference/arrays.interface.html
. noData: ndarray(<bandCount> x <dtype>): An array of one value per raster band representing NoData.
. cellSize: Tuple(2 x floats) representing cell-size in the x- and y-direction.
. nativeExtent: Tuple(4 x floats) representing XMin, YMin, XMax, YMax values of the native image coordinates.
. nativeSpatialReference: Int representing the EPSG code of the native image coordinate system.
. geodataXform: XML-string representation of the associated XForm between native image and map coordinate systems.
. extent: Tuple(4 x floats) representing XMin, YMin, XMax, YMax values of the map coordinates.
. spatialReference: Int representing the EPSG code of the raster's map coordinate system.
. colormap: Tuple(ndarray(int32), 3 x ndarray(uint8)) A tuple of four arrays where the first array contains 32-bit integers
corresponding to pixel values in the indexed raster. The subsequent three arrays contain unsigned 8-bit integers
corresponding to the Red, Green, and Blue components of the mapped color. The sizes of all arrays
must match and correspond to the number of colors in the RGB image.
. rasterAttributeTable: Tuple(String, Tuple(Strings)): A tuple of a string representing the path of the attribute table,
and another tuple representing field names.
Use the information in this tuple with arcpy.da.TableToNumPyArray() to access the values.
. levelOfDetails: Int: The number of level of details in the input raster.
. origin: Tuple(Floats): Tuple of (x,y) coordinate corresponding to the origin.
. bandSelection: Boolean
. histogram: Tuple(numpy.ndarrays): Tuple where each entry is an array of histogram values of a band.
. statistics: Tuple(dicts): Tuple of statistics values.
Each entry in the tuple is a dictionary containing the following attributes of band statistics:
. minimum: Float. Approximate lowest value.
. maximum: Float. Approximate highest value.
. mean: Float. Approximate average value.
. standardDeviation: Float. Approximate measure of spread of values about the mean.
. skipFactorX: Int. Number of horizontal pixels between samples when calculating statistics.
. skipFactorY: Int. Number of vertical pixels between samples when calculating statistics.
Returns:
A dictionary containing output raster info.
This method can update the values of the dictionary in kwargs['output_info'] depending on the kind of
operation in .updatePixels()
Note:
. The tuple in cellSize and maximumCellSize attributes can be used to construct an arcpy.Point object.
. The tuple in extent, nativeExtent and origin attributes can be used to construct an arcpy.Extent object.
. The epsg code in nativeSpatialReference and spatialReference attributes can be used to construct an
arcpy.SpatialReference() object.
"""
kwargs['output_info']['bandCount'] = 1 # output is a single band raster
kwargs['output_info']['pixelType'] = 'f4' # ... with floating-point pixel values.
kwargs['output_info']['statistics'] = () # invalidate any statistics
kwargs['output_info']['histogram'] = () # invalidate any histogram
return kwargs
def updatePixels(self, tlc, shape, props, **pixelBlocks):
"""This method can provide output pixels based on pixel blocks associated with all input rasters.
A python raster function that doesn't actively modify output pixel values doesn't need to define this method.
Args:
. tlc: Tuple(2 x floats) representing the coordinates of the top-left corner of the pixel request.
. shape: Tuple(ints) representing the shape of ndarray that defines the output pixel block.
For a single-band pixel block, the tuple contains two ints (rows, columns).
For multi-band output raster, the tuple defines a three-dimensional array (bands, rows, columns).
The shape associated with the outgoing pixel block and mask must match this argument's value.
. props: A dictionary containing properties that define the output raster from which
a pixel block--of dimension and location is defined by the 'shape' and 'tlc' arguments--is being requested.
These are the available attributes in this dictionary:
. extent: Tuple(4 x floats) representing XMin, YMin, XMax, YMax values of the output
raster's map coordinates.
. pixelType: String representation of pixel type of the raster. These are the allowed values:
{'t1', 't2', 't4', 'i1', 'i2', 'i4', 'u1', 'u2', 'u4', 'f4', 'f8'}
cf: http://docs.scipy.org/doc/numpy/reference/arrays.interface.html
. spatialReference: Int representing the EPSG code of the output raster's map coordinate system.
. cellSize: Tuple(2 x floats) representing cell-size in the x- and y-direction.
. width: Number of columns of pixels in the output raster.
. height: Number of rows of pixels in the output raster.
. noData: TODO.
. pixelBlocks: Keyword argument containing pixels and mask associated with each input raster.
For a raster parameter with dataType='raster' and name='x', pixelBlocks['x_pixels'] and
pixelBlocks['x_mask'] are numpy.ndarrays of pixel and mask values for that input raster.
For a parameter of type rasters (dataType='rasters'), these are tuples of ndarrays--one entry per raster.
The arrays are three-dimensional for multiband rasters.
Note:
            . The pixelBlocks dictionary does not contain any scalar parameters.
Returns:
A dictionary with a numpy array containing pixel values in the 'output_pixels' key and,
optionally, an array representing the mask in the 'output_mask' key. The shape of both arrays
must match the 'shape' argument.
"""
        if 'raster_pixels' not in pixelBlocks:
            raise Exception("No input raster was provided.")
        inputBlock = pixelBlocks['raster_pixels']       # get pixels of the input raster
        if len(inputBlock.shape) != 3 or inputBlock.shape[0] < 2:
            raise Exception("Input raster must have at least two bands.")
        red = np.array(inputBlock[0], 'f4')             # assuming red's the first band
        blue = np.array(inputBlock[1], 'f4')            # assuming blue's the second band... per extractBands in .getConfiguration()
outBlock = (red + blue) / 2.0 # this is just an example. nothing complicated here.
pixelBlocks['output_pixels'] = outBlock.astype(props['pixelType'])
return pixelBlocks
def updateKeyMetadata(self, names, bandIndex, **keyMetadata):
"""This method can update dataset-level or band-level key metadata.
When a request for a dataset's key metadata is made, this method (if present) allows the python raster function
to invalidate or overwrite specific requests.
Args:
. names: A tuple containing names of the properties being requested. An empty tuple
indicates that all properties are being requested.
. bandIndex: A zero-based integer representing the raster band for which key metadata is being requested.
bandIndex == -1 indicates that the request is for dataset-level key properties.
. keyMetadata: Keyword argument containing all currently known metadata (or a subset as defined by the names tuple).
Returns:
The updated keyMetadata dictionary.
"""
if bandIndex == -1: # dataset-level properties
keyMetadata['datatype'] = 'Processed' # outgoing dataset is now 'Processed'
elif bandIndex == 0: # properties for the first band
keyMetadata['wavelengthmin'] = None # reset inapplicable band-specific key metadata
keyMetadata['wavelengthmax'] = None
keyMetadata['bandname'] = 'Red_and_Blue' # ... or something meaningful
return keyMetadata
def isLicensed(self, **productInfo):
"""This method, if defined, indicates whether this python raster function is licensed to execute.
This method is invoked soon after the function object is constructed. It enables the python
raster function to halt execution--given information about the parent product and the context of execution.
It also allows the function to, optionally, indicate the expected product-level and the extension that
must be available before execution can proceed.
Args:
The productInfo keyword argument describes the current execution environment.
It contains the following attributes:
. productName: String representing the name of the product {'Desktop', 'Server', 'Engine', ...}
. version: The version string associated with the product
            . path: String containing the installation path of the product.
. major: An integer representing the major version number of the product.
. minor: A floating-point number representing the minor version number of the product.
            . build: An integer representing the build number associated with the product.
. spNumber: An integer representing the service pack number, if applicable.
. spBuild: An integer representing the service pack build, if applicable.
Returns:
            A dictionary containing an attribute that indicates whether licensing checks specific to this
            python raster function have passed--and, optionally, attributes that control additional licensing checks
            enforced by the Python Adapter:
. okToRun: [Required] Boolean indicating whether it's OK to proceed with the use of this
raster function object. This attribute must be present and, specifically,
set to False for execution to halt. Otherwise, it's assumed to be True (and, that it's OK to proceed).
. message: [Optional] String representing the message to be displayed to the user or logged
when okToRun is False.
. productLevel: [Optional] String representing the product license-level expected from the parent application.
Allowed values include {'Basic', 'Standard', 'Advanced'}.
. extension: [Optional] String representing the name of the extension that must be available before
the Python Adapter is allowed to use this raster function. The set of recognized extension names
are enumerated here: http://resources.arcgis.com/en/help/main/10.2/index.html#//002z0000000z000000.
"""
major = productInfo.get('major', 0)
minor = productInfo.get('minor', 0.0)
build = productInfo.get('build', 0)
return {
'okToRun': major >= 10 and minor >= 3.0 and build >= 4276,
'message': "The python raster function is only compatible with ArcGIS 10.3 build 4276",
'productLevel': 'Standard',
'extension': 'Image'
}
|
So I have my 617.952 pulled out of the car and on a stand. I recently also pulled the FM146 from my target truck.
My 4x4 Labs 61x-to-SBC adapter plate is 3/4" aluminum plate. |
'''
Crawler challenge, level 1: http://www.heibanke.com/lesson/crawler_ex00/
Skills hinted at: simulated login, csrf-token
Based on heibanke's answer at http://www.zhihu.com/question/20899988
The simulated login is implemented successfully here; the approach is:
1. Submit the login form and inspect the POST request's headers, cookies and parameters.
   The request headers hold nothing of value; the request URL is the same as the login URL.
   The cookie is worth noting: after clearing cookies and opening the login page directly,
   the server responds with a cookie of the same name.
   The form parameters contain these three fields:
   "csrfmiddlewaretoken" / "username" / "password"
   Where does "csrfmiddlewaretoken" come from? Inspecting the HTML source of the login page,
   the following line appears after <form>:
   <input type='hidden' name='csrfmiddlewaretoken' value='3TwYYML662nMWaafvVDWg8pp6RVCAS1d' />
2. Simulated login steps:
   a. opener.open(auth_url) to obtain the cookie and the csrfmiddlewaretoken
   b. build the request req, with the cookie in the headers and csrfmiddlewaretoken/username/password in the data
   c. opener.open(req) to obtain the authenticated cookie
'''
from urllib import request
from urllib import parse
from urllib import error
from http import cookiejar
import re
class third:
def __init__(self):
self.username = "1234567"
self.password = "1234567890"
self.auth_url = "http://www.heibanke.com/accounts/login"
self.url = "http://www.heibanke.com/lesson/crawler_ex02/"
self.csrfmiddlewaretoken = ""
def __get_cookies(self, req):
cookies = cookiejar.CookieJar()
handler = request.HTTPCookieProcessor(cookies)
opener = request.build_opener(handler)
try:
with opener.open(req) as f:
if f.code == 200:
pattern = re.compile(r"<input.*?type='hidden'.*?name='csrfmiddlewaretoken'.*?value='(.*?)'.*>")
try:
self.csrfmiddlewaretoken = pattern.search(f.read().decode("utf-8")).group(1)
print("Achieved cookies and csrfmiddlewaretoken sucessfully")
except:
print("Achieved cookies sucessfully")
return cookies
else:
print("Lost cookies")
except error.URLError as e:
if hasattr(e, "reason"):
print ("We failed to reach a server. Please check your url and read the Reason")
print ("Reason: {}".format(e.reason))
elif hasattr(e, "code"):
print("The server couldn't fulfill the request.")
print("Error code: {}".format(e.code))
exit()
def __request(self, url, cookies=None):
form = {
"csrfmiddlewaretoken": self.csrfmiddlewaretoken,
"username": self.username,
"password": self.password
}
data = parse.urlencode(form).encode("utf-8")
headers = {}
header_cookie = ""
for cookie in cookies:
header_cookie = "{} {}={};".format(header_cookie, cookie.name, cookie.value)
headers["Cookie"] = header_cookie.strip(' ;')
req = request.Request(url, data, headers=headers)
return req
def __auth_cookies(self, pre_auth_cookies):
req = self.__request(self.auth_url, pre_auth_cookies)
cookies = self.__get_cookies(req)
return cookies
def guess_passwd(self, auth_cookies):
for i in range(31):
self.password = i
req = self.__request(self.url, auth_cookies)
print("正在猜测密码为{}".format(self.password))
try:
with request.urlopen(req) as f:
body = f.read().decode("utf-8")
if not "您输入的密码错误" in body:
print(body)
print("密码为{}".format(i))
break
except error.URLError as e:
if hasattr(e, "reason"):
print ("We failed to reach a server. Please check your url and read the Reason")
print ("Reason: {}".format(e.reason))
elif hasattr(e, "code"):
print("The server couldn't fulfill the request.")
print("Error code: {}".format(e.code))
return
def start(self):
pre_auth_cookies = self.__get_cookies(self.auth_url)
auth_cookies = self.__auth_cookies(pre_auth_cookies)
self.guess_passwd(auth_cookies)
spider = third()
spider.start()
|
What a fantastic day we had! On Saturday 29th June 2013, around 1,500 people came together to Bollywood dance for Khushi Feet. A huge thank you to all those who took part – everyone was brilliant.
Take a look at some of our photos and re-live that magical Bollywood afternoon in the glorious sunshine! Special thanks to Samantha Jones Photography for the stunning images. |
# coding: utf-8
"""
Convert DICOM Data into a Python Dictionary
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
"""
# Also see: https://github.com/darcymason/pydicom/issues/319.
import os
from biovida.support_tools.support_tools import cln, dicom
def _extract_numeric(value):
"""
    Extract the numeric component of a string.
    :param value: a string expected to contain digits (and possibly a decimal point).
    :return: the extracted number as a float.
"""
return float("".join((i for i in value if i.isdigit() or i == '.')))
def parse_age(value):
"""
    Convert a DICOM age string (e.g., '061Y' or '006M') into a number of years.
    :param value: a DICOM PatientAge-style string.
    :return: the age in years as a float, or the original value if it cannot be parsed.
"""
if not isinstance(value, str):
raise TypeError('`value` must be a string.')
elif len(value) > 4:
return value
if 'y' in value.lower():
return _extract_numeric(value)
elif 'm' in value.lower():
return _extract_numeric(value) / 12.0
else:
return value
def parse_string_to_tuple(value):
"""
:param value:
:type value: ``str``
:return:
"""
braces = [['[', ']'], ['(', ')']]
for (left, right) in braces:
if left in value and right in value:
value_split = value.replace(left, "").replace(right, "").split(",")
value_split_cln = list(filter(None, map(cln, value_split)))
if len(value_split_cln) == 0:
return None
try:
to_return = tuple(map(_extract_numeric, value_split_cln))
except:
to_return = tuple(value_split_cln)
return to_return[0] if len(to_return) == 1 else to_return
else:
raise ValueError("Cannot convert `value` to a tuple.")
def dicom_value_parse(key, value):
"""
Try to convert ``value`` to a numeric or tuple of numerics.
:param key:
:param value:
:return:
"""
value = cln(str(value).replace("\'", "").replace("\"", ""))
if not len(value) or value.lower() == 'none':
return None
if key.lower().endswith(' age') or key == 'PatientAge':
try:
return parse_age(value)
except:
return value
else:
try:
return int(value)
except:
try:
return float(value)
except:
try:
return parse_string_to_tuple(value)
except:
return value
def dicom_object_dict_gen(dicom_object):
"""
    Walk the data elements of ``dicom_object`` (except PixelData) and parse their values.
    :param dicom_object: an in-memory DICOM object.
    :type dicom_object: ``dicom.FileDataset``
    :return: a dictionary mapping element keywords to parsed values.
"""
d = dict()
for k in dicom_object.__dir__():
if not k.startswith("__") and k != 'PixelData':
try:
value = dicom_object.data_element(k).value
if type(value).__name__ != 'Sequence':
d[k] = dicom_value_parse(key=k, value=value)
except:
pass
return d
def dicom_to_dict(dicom_file):
"""
Convert the metadata associated with ``dicom_file`` into a python dictionary
:param dicom_file: a path to a dicom file or the yield of ``dicom.read_file(FILE_PATH)``.
:type dicom_file: ``FileDataset`` or ``str``
:return: a dictionary with the dicom meta data.
:rtype: ``dict``
"""
if isinstance(dicom_file, str):
if not os.path.isfile(dicom_file):
raise FileNotFoundError("Could not locate '{0}'.".format(dicom_file))
dicom_object = dicom.read_file(dicom_file)
elif type(dicom_file).__name__ == 'FileDataset':
dicom_object = dicom_file
else:
raise TypeError("`dicom_file` must be of type `dicom.FileDataset` or a string.")
return dicom_object_dict_gen(dicom_object)
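# Usage sketch (not part of the original module): 'example.dcm' is a hypothetical
# local file path and the printed keys are only examples of standard DICOM keywords.
if __name__ == '__main__':
    metadata = dicom_to_dict('example.dcm')
    print(metadata.get('PatientAge'), metadata.get('PixelSpacing'))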
|
Retail and storage space c. 456 sq. ft.
The property is of traditional construction. The building is arranged internally to provide an open plan retail space along with separate staff toilets and kitchenette. The rear of the property can be used as a small storage area. The property benefits from an excellent 3.7 metre frontage onto Castle Street and a large volume of passing pedestrian footfall.
There is a mixture of plaster, painted and bare walls along with laminate and tiled floors in the shop area and concrete floor in the storage area.
Private parking is available to the rear of the unit, accessed from Castle Street; these spaces are available on separate licence agreements.
Nearby occupiers include Shannon’s Jewellers, The Cycle Zone, The Hair Lounge, Little Wing Pizzeria, and Eastwood Estate Agents.
We are advised by Land and Property Services that the property has an NAV of £6,600. The rate in the £ for 2018/19 is £0.555698, resulting in rates payable of approx. £3,668.
The property also benefits from a staff toilet and kitchenette area.
Lisburn is the third largest city in Northern Ireland and is located 8 miles from Belfast. It has a population c. 120,000 people (2011 census) and good transport links to the rest of Northern Ireland via the M1 motorway.
The subject property is located on the corner of Castle Street and Railway Street within the one way system around Lisburn City Centre and offers excellent return frontage and visibility. It is located a short distance away from the pedestrianised retail centre of Bow Street. Also within walking distance are the Lisburn Museum and Linen Centre, Lisburn Cathedral and the South Eastern Regional College. |
import os.path
from unittest import TestCase
from tempfile import NamedTemporaryFile
from itertools import izip, imap
import numpy as np
from canecycle.reader import from_shad_lsml
from canecycle.cache import CacheWriter, CacheReader
class TestReader(TestCase):
test_file = os.path.join(os.path.dirname(os.path.realpath(__file__)),
"train_except.txt")
def test_cache_write_and_read(self):
cache_file = './testing.cache'
hash_size = 2**20
reader = from_shad_lsml(self.test_file, hash_size)
reader.restart(0)
cache_writer = CacheWriter(60, hash_size)
cache_writer.open(cache_file)
for item in reader:
cache_writer.write_item(item)
cache_writer.close()
reader.restart(3)
cache_reader = CacheReader(cache_file)
cache_reader.restart(3)
self.assertEqual(hash_size, cache_reader.get_features_count())
for read_item, cached_item in izip(reader, cache_reader):
self.assertEqual(read_item.label, cached_item.label)
self.assertEqual(read_item.weight, cached_item.weight)
np.testing.assert_array_equal(
read_item.data, cached_item.data)
np.testing.assert_array_equal(
read_item.indices, cached_item.indices)
reader.restart(-3)
cache_reader.restart(-3)
for read_item, cached_item in izip(reader, cache_reader):
self.assertEqual(read_item.label, cached_item.label)
self.assertEqual(read_item.weight, cached_item.weight)
np.testing.assert_array_equal(
read_item.data, cached_item.data)
np.testing.assert_array_equal(
read_item.indices, cached_item.indices)
reader.close()
cache_reader.restart(-4)
self.assertEqual(sum(imap(lambda item: 1, cache_reader)), 250)
cache_reader.restart(-2)
self.assertEqual(sum(imap(lambda item: 1, cache_reader)), 500)
cache_reader.restart(-100)
self.assertEqual(sum(imap(lambda item: 1, cache_reader)), 10)
self.assertEqual(sum(imap(lambda item: 1, cache_reader)), 0)
cache_reader.restart(4)
self.assertEqual(sum(imap(lambda item: 1, cache_reader)), 750)
cache_reader.restart(2)
self.assertEqual(sum(imap(lambda item: 1, cache_reader)), 500)
cache_reader.restart(100)
self.assertEqual(sum(imap(lambda item: 1, cache_reader)), 990)
self.assertEqual(sum(imap(lambda item: 1, cache_reader)), 0)
|
Credit cards embedded with RFID chips bear four curved lines that look like wifi symbols.
Thumbing through a holiday catalog chock full of modern gadgets, we were struck by the presence of no fewer than three wallets purporting to block RFID signals.
First, we wondered, what are RFID signals? Second, why do we need to block them?
Each of the barriers was designed to keep identity thieves from gaining access to the RFID signals or, more specifically, the personal information they represent.
This same catalog sells Golf Ball Finder Glasses, so one is left to ponder whether RFID signals really represent a rich target for would-be ID thieves and a genuine risk for consumers.
Some licenses, credit cards and passports come with radio frequency chips embedded in them.
“When activated by an RFID reader, these chips transmit certain types of information wirelessly, so that you can verify your identity or even make a purchase without swiping your card,” according to a 2015 Slate magazine article.
The article cited demonstrations in which RFID skimmers obtained entire credit card numbers from a person’s pocket simply by using a handheld RFID reader.
However, Slate suggested that it’s of greater interest to security researchers than to thieves because of the existence of easier, more lucrative ways to steal money and data.
“By contrast, skimmers installed on ATM or point-of-sale machines allow thieves to pick up much more usable information from a far greater number of cards,” Slate noted.
NPR picked up on that theme in a story in July 2017.
“In the last few years, a whole RFID-blocking industry has sprung up, and it survives partly on confusion,” NPR reported.
Part of that confusion has resulted from the proliferation of RFID-blocking products even though a relatively small percentage of payment cards in the United States have RFID chips.
"There's probably hundreds of millions of financial crimes being done every year and so far zero real-life RFID crime," one computer security expert said.
Some experts said they are more concerned with other ways thieves steal personal information, such as with telephone scams. Password management and checking credit reports are important safeguards against identity theft, too.
RFID protection probably won’t hurt, other than the pain from buying that whiz-bang wallet with the stainless-steel layer.
If you’re still worried about RFID scams, you might settle for a home-made remedy. Citing Consumer Reports magazine, NPR said you can get as good of results by wrapping credit cards or passports in a thick piece of aluminum foil. |
import json
from flask import Blueprint
from flask import Response
from flask.ext.cors import cross_origin
# from pgeo.config.settings import read_config_file_json
from pgeo.error.custom_exceptions import PGeoException
from pgeo.dataproviders import trmm2 as t
browse_trmm2 = Blueprint('browse_trmm2', __name__)
# conf = read_config_file_json('trmm2', 'data_providers')
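# Registration sketch (an assumption, not shown in this module): the blueprint is
# expected to be attached to a Flask application elsewhere, e.g.
#   app = Flask(__name__)
#   app.register_blueprint(browse_trmm2, url_prefix='/browse/trmm2')
# where the '/browse/trmm2' prefix is hypothetical.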
@browse_trmm2.route('/')
@cross_origin(origins='*')
def list_years_service():
try:
out = t.list_years()
return Response(json.dumps(out), content_type='application/json; charset=utf-8')
except PGeoException, e:
raise PGeoException(e.get_message(), e.get_status_code())
@browse_trmm2.route('/<year>')
@browse_trmm2.route('/<year>/')
@cross_origin(origins='*')
def list_months_service(year):
try:
out = t.list_months(year)
return Response(json.dumps(out), content_type='application/json; charset=utf-8')
except PGeoException, e:
raise PGeoException(e.get_message(), e.get_status_code())
@browse_trmm2.route('/<year>/<month>')
@browse_trmm2.route('/<year>/<month>/')
@cross_origin(origins='*')
def list_days_service(year, month):
try:
out = t.list_days(year, month)
return Response(json.dumps(out), content_type='application/json; charset=utf-8')
except PGeoException, e:
raise PGeoException(e.get_message(), e.get_status_code())
@browse_trmm2.route('/<year>/<month>/<day>')
@browse_trmm2.route('/<year>/<month>/<day>/')
@cross_origin(origins='*')
def list_layers_service(year, month, day):
try:
out = t.list_layers(year, month, day)
return Response(json.dumps(out), content_type='application/json; charset=utf-8')
except PGeoException, e:
raise PGeoException(e.get_message(), e.get_status_code())
@browse_trmm2.route('/<year>/<month>/<from_day>/<to_day>')
@browse_trmm2.route('/<year>/<month>/<from_day>/<to_day>/')
@cross_origin(origins='*')
def list_layers_subset_service(year, month, from_day, to_day):
try:
out = t.list_layers_subset(year, month, from_day, to_day)
return Response(json.dumps(out), content_type='application/json; charset=utf-8')
except PGeoException, e:
raise PGeoException(e.get_message(), e.get_status_code())
@browse_trmm2.route('/layers/<year>/<month>')
@browse_trmm2.route('/layers/<year>/<month>/')
@cross_origin(origins='*')
def list_layers_month_subset_service(year, month):
try:
out = t.list_layers_month_subset(year, month)
return Response(json.dumps(out), content_type='application/json; charset=utf-8')
except PGeoException, e:
raise PGeoException(e.get_message(), e.get_status_code()) |
System administration command. Mail warning messages to users that have exceeded their soft limit.
Read group administrator information from file instead of /etc/quotagrpadmins.
Read configuration information from file instead of /etc/warnquota.conf.
Send messages without attaching quota reports.
Send messages for group quotas. Send the message to the user specified in /etc/quotagrpadmins.
Read device description strings from file instead of /etc/quotatab.
Report sizes in more human-readable units. |
# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
#
# Copyright (C) 2015-2018 Canonical Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3 as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import os
import shutil
from unittest import mock
from testtools.matchers import Equals
from snapcraft.internal import sources
from tests import unit
from tests.subprocess_utils import call, call_with_output
# LP: #1733584
class TestMercurial(unit.sources.SourceTestCase): # type: ignore
def setUp(self):
super().setUp()
patcher = mock.patch("snapcraft.sources.Mercurial._get_source_details")
self.mock_get_source_details = patcher.start()
self.mock_get_source_details.return_value = ""
self.addCleanup(patcher.stop)
def test_pull(self):
hg = sources.Mercurial("hg://my-source", "source_dir")
hg.pull()
self.mock_run.assert_called_once_with(
["hg", "clone", "hg://my-source", "source_dir"]
)
def test_pull_branch(self):
hg = sources.Mercurial(
"hg://my-source", "source_dir", source_branch="my-branch"
)
hg.pull()
self.mock_run.assert_called_once_with(
["hg", "clone", "-u", "my-branch", "hg://my-source", "source_dir"]
)
def test_pull_tag(self):
hg = sources.Mercurial("hg://my-source", "source_dir", source_tag="tag")
hg.pull()
self.mock_run.assert_called_once_with(
["hg", "clone", "-u", "tag", "hg://my-source", "source_dir"]
)
def test_pull_commit(self):
hg = sources.Mercurial("hg://my-source", "source_dir", source_commit="2")
hg.pull()
self.mock_run.assert_called_once_with(
["hg", "clone", "-u", "2", "hg://my-source", "source_dir"]
)
def test_pull_existing(self):
self.mock_path_exists.return_value = True
hg = sources.Mercurial("hg://my-source", "source_dir")
hg.pull()
self.mock_run.assert_called_once_with(["hg", "pull", "hg://my-source"])
def test_pull_existing_with_tag(self):
self.mock_path_exists.return_value = True
hg = sources.Mercurial("hg://my-source", "source_dir", source_tag="tag")
hg.pull()
self.mock_run.assert_called_once_with(
["hg", "pull", "-r", "tag", "hg://my-source"]
)
def test_pull_existing_with_commit(self):
self.mock_path_exists.return_value = True
hg = sources.Mercurial("hg://my-source", "source_dir", source_commit="2")
hg.pull()
self.mock_run.assert_called_once_with(
["hg", "pull", "-r", "2", "hg://my-source"]
)
def test_pull_existing_with_branch(self):
self.mock_path_exists.return_value = True
hg = sources.Mercurial(
"hg://my-source", "source_dir", source_branch="my-branch"
)
hg.pull()
self.mock_run.assert_called_once_with(
["hg", "pull", "-b", "my-branch", "hg://my-source"]
)
def test_init_with_source_branch_and_tag_raises_exception(self):
raised = self.assertRaises(
sources.errors.SnapcraftSourceIncompatibleOptionsError,
sources.Mercurial,
"hg://mysource",
"source_dir",
source_tag="tag",
source_branch="branch",
)
self.assertThat(raised.source_type, Equals("mercurial"))
self.assertThat(raised.options, Equals(["source-tag", "source-branch"]))
def test_init_with_source_commit_and_tag_raises_exception(self):
raised = self.assertRaises(
sources.errors.SnapcraftSourceIncompatibleOptionsError,
sources.Mercurial,
"hg://mysource",
"source_dir",
source_commit="2",
source_tag="tag",
)
self.assertThat(raised.source_type, Equals("mercurial"))
self.assertThat(raised.options, Equals(["source-tag", "source-commit"]))
def test_init_with_source_commit_and_branch_raises_exception(self):
raised = self.assertRaises(
sources.errors.SnapcraftSourceIncompatibleOptionsError,
sources.Mercurial,
"hg://mysource",
"source_dir",
source_commit="2",
source_branch="branch",
)
self.assertThat(raised.source_type, Equals("mercurial"))
self.assertThat(raised.options, Equals(["source-branch", "source-commit"]))
def test_init_with_source_depth_raises_exception(self):
raised = self.assertRaises(
sources.errors.SnapcraftSourceInvalidOptionError,
sources.Mercurial,
"hg://mysource",
"source_dir",
source_depth=2,
)
self.assertThat(raised.source_type, Equals("mercurial"))
self.assertThat(raised.option, Equals("source-depth"))
def test_source_checksum_raises_exception(self):
raised = self.assertRaises(
sources.errors.SnapcraftSourceInvalidOptionError,
sources.Mercurial,
"hg://mysource",
"source_dir",
source_checksum="md5/d9210476aac5f367b14e513bdefdee08",
)
self.assertThat(raised.source_type, Equals("mercurial"))
self.assertThat(raised.option, Equals("source-checksum"))
def test_has_source_handler_entry(self):
self.assertTrue(sources._source_handler["mercurial"] is sources.Mercurial)
class MercurialBaseTestCase(unit.TestCase):
def rm_dir(self, dir):
if os.path.exists(dir):
shutil.rmtree(dir)
def clean_dir(self, dir):
self.rm_dir(dir)
os.mkdir(dir)
self.addCleanup(self.rm_dir, dir)
def clone_repo(self, repo, tree):
self.clean_dir(tree)
call(["hg", "clone", repo, tree])
os.chdir(tree)
def add_file(self, filename, body, message):
with open(filename, "w") as fp:
fp.write(body)
call(["hg", "add", filename])
call(["hg", "commit", "-am", message])
def check_file_contents(self, path, expected):
body = None
with open(path) as fp:
body = fp.read()
self.assertThat(body, Equals(expected))
class MercurialDetailsTestCase(MercurialBaseTestCase):
def setUp(self):
super().setUp()
self.working_tree = "hg-test"
self.source_dir = "hg-checkout"
self.clean_dir(self.working_tree)
self.clean_dir(self.source_dir)
os.chdir(self.working_tree)
call(["hg", "init"])
with open("testing", "w") as fp:
fp.write("testing")
call(["hg", "add", "testing"])
call(["hg", "commit", "-m", "testing", "-u", "Test User <[email protected]>"])
call(["hg", "tag", "-u", "test", "test-tag"])
self.expected_commit = call_with_output(["hg", "id"]).split()[0]
self.expected_branch = call_with_output(["hg", "branch"])
self.expected_tag = "test-tag"
os.chdir("..")
self.hg = sources.Mercurial(self.working_tree, self.source_dir, silent=True)
self.hg.pull()
self.source_details = self.hg._get_source_details()
def test_hg_details_commit(self):
self.assertThat(
self.source_details["source-commit"], Equals(self.expected_commit)
)
def test_hg_details_branch(self):
self.clean_dir(self.source_dir)
self.hg = sources.Mercurial(
self.working_tree, self.source_dir, silent=True, source_branch="default"
)
self.hg.pull()
self.source_details = self.hg._get_source_details()
self.assertThat(
self.source_details["source-branch"], Equals(self.expected_branch)
)
def test_hg_details_tag(self):
self.clean_dir(self.source_dir)
self.hg = sources.Mercurial(
self.working_tree, self.source_dir, silent=True, source_tag="test-tag"
)
self.hg.pull()
self.source_details = self.hg._get_source_details()
self.assertThat(self.source_details["source-tag"], Equals(self.expected_tag))
|
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; If not, see <http://www.gnu.org/licenses/>.
#
# ##### END GPL LICENSE BLOCK #####
__author__ = "Lothar Krause and Sergi Blanch-Torne"
__maintainer__ = "Sergi Blanch-Torne"
__copyright__ = "Copyright 2015, CELLS / ALBA Synchrotron"
__license__ = "GPLv3+"
"""Device Server to control the Alba's Linac manufactured by Thales."""
__all__ = ["LinacData", "LinacDataClass", "main"]
__docformat__ = 'restructuredtext'
import PyTango
from PyTango import AttrQuality
import sys
# Add additional import
# PROTECTED REGION ID(LinacData.additionnal_import) ---
from copy import copy
from ctypes import c_uint16, c_uint8, c_float, c_int16
import fcntl
import json # FIXME: temporary, to dump the relations collection as a dictionary
from numpy import uint16, uint8, float32, int16
import pprint
import psutil
import Queue
import socket
import struct
import time
import tcpblock
import threading
import traceback
from types import StringType
from constants import *
from LinacAttrs import (LinacException, CommandExc, AttrExc,
binaryByte, hex_dump)
from LinacAttrs import (EnumerationAttr, PLCAttr, InternalAttr, MeaningAttr,
AutoStopAttr, AutoStopParameter, HistoryAttr,
GroupAttr, LogicAttr)
from LinacAttrs.LinacFeatures import CircularBuffer, HistoryBuffer, EventCtr
class release:
author = 'Lothar Krause &'\
' Sergi Blanch-Torne <[email protected]>'
hexversion = (((MAJOR_VERSION << 8) | MINOR_VERSION) << 8) | BUILD_VERSION
    __str__ = lambda self: hex(self.hexversion)
if False:
TYPE_MAP = {PyTango.DevUChar: c_uint8,
PyTango.DevShort: c_int16,
PyTango.DevFloat: c_float,
PyTango.DevDouble: c_float,
}
else:
TYPE_MAP = {PyTango.DevUChar: ('B', 1),
PyTango.DevShort: ('h', 2),
PyTango.DevFloat: ('f', 4),
PyTango.DevDouble: ('f', 4),
# the PLCs only use floats of 4 bytes
}
def john(sls):
'''used to encode the messages shown for each state code
'''
if type(sls) == dict:
return '\n'+''.join('%d:%s\n' % (t, sls[t]) for t in sls.keys())
else:
return '\n'+''.join('%d:%s\n' % t for t in enumerate(sls))
def latin1(x):
return x.decode('utf-8').replace(u'\u2070', u'\u00b0').\
replace(u'\u03bc', u'\u00b5').encode('latin1')
class AttrList(object):
'''Manages dynamic attributes and contains methods for conveniently adding
attributes to a running TANGO device.
'''
def __init__(self, device):
super(AttrList, self).__init__()
self.impl = device
self._db20_size = self.impl.ReadSize-self.impl.WriteSize
self._db22_size = self.impl.WriteSize
self.alist = list()
self.locals_ = {}
self._relations = {}
self._buider = None
self._fileParsed = threading.Event()
self._fileParsed.clear()
self.globals_ = globals()
self.globals_.update({
'DEVICE': self.impl,
'LIST': self,
'Attr': self.add_AttrAddr,
'AttrAddr': self.add_AttrAddr,
'AttrBit': self.add_AttrAddrBit,
'GrpBit': self.add_AttrGrpBit,
'AttrLogic': self.add_AttrLogic,
'AttrRampeable': self.add_AttrRampeable,
#'AttrLock_ST': self.add_AttrLock_ST,
#'AttrLocking': self.add_AttrLocking,
#'AttrHeartBeat': self.add_AttrHeartBeat,
'AttrPLC': self.add_AttrPLC,
'AttrEnumeration': self.add_AttrEnumeration,
# 'john' : john,
})
def add_Attr(self, name, T, rfun=None, wfun=None, label=None, desc=None,
minValue=None, maxValue=None, unit=None, format=None,
memorized=False, logLevel=None, xdim=0):
if wfun:
if xdim == 0:
attr = PyTango.Attr(name, T, PyTango.READ_WRITE)
else:
self.impl.error_stream("Not supported write attribute in "
"SPECTRUMs. %s will be readonly."
% (name))
attr = PyTango.SpectrumAttr(name, T, PyTango.READ_WRITE, xdim)
else:
if xdim == 0:
attr = PyTango.Attr(name, T, PyTango.READ)
else:
attr = PyTango.SpectrumAttr(name, T, PyTango.READ, xdim)
if logLevel is not None:
self.impl._getAttrStruct(name).logLevel = logLevel
aprop = PyTango.UserDefaultAttrProp()
if unit is not None:
aprop.set_unit(latin1(unit))
if minValue is not None:
aprop.set_min_value(str(minValue))
if maxValue is not None:
aprop.set_max_value(str(maxValue))
if format is not None:
attrStruct = self.impl._getAttrStruct(name)
attrStruct['format'] = str(format)
aprop.set_format(latin1(format))
if desc is not None:
aprop.set_description(latin1(desc))
if label is not None:
aprop.set_label(latin1(label))
if memorized:
attr.set_memorized()
attr.set_memorized_init(True)
self.impl.info_stream("Making %s memorized (%s,%s)"
% (name, attr.get_memorized(),
attr.get_memorized_init()))
attr.set_default_properties(aprop)
rfun = AttrExc(rfun)
try:
if wfun:
wfun = AttrExc(wfun)
except Exception as e:
self.impl.error_stream("Attribute %s build exception: %s"
% (name, e))
self.impl.add_attribute(attr, r_meth=rfun, w_meth=wfun)
if name in self.impl._plcAttrs and \
EVENTS in self.impl._plcAttrs[name]:
self.impl.set_change_event(name, True, False)
elif name in self.impl._internalAttrs and \
EVENTS in self.impl._internalAttrs[name]:
self.impl.set_change_event(name, True, False)
self.alist.append(attr)
return attr
def __mapTypes(self, attrType):
# ugly hack needed for SOLEILs archiving system
if attrType == PyTango.DevFloat:
return PyTango.DevDouble
elif attrType == PyTango.DevUChar:
return PyTango.DevShort
else:
return attrType
def add_AttrAddr(self, name, T, read_addr=None, write_addr=None,
meanings=None, qualities=None, events=None,
formula=None, label=None, desc=None,
readback=None, setpoint=None, switch=None,
IamChecker=None, minValue=None, maxValue=None,
*args, **kwargs):
'''This method is a most general builder of dynamic attributes, for RO
as well as for RW depending on if it's provided a write address.
There are other optional parameters to configure some special
characteristics.
           With the meanings parameter, a secondary attribute is created next
           to the given one (suffixed *_Status; they share other parameters
           like qualities and events). The numerical attribute can be used in
           formulas, alarms and any other machine-oriented system, while this
           secondary attribute is a DevString that concatenates the read value
           with a string specified in the dictionary passed as the meanings
           parameter, in order to provide a human-readable message for that
           value.
           All Tango attributes have characteristics known as qualities (like
           format, units, and so on) used to provide 5-level, state-like
           information. They can be 'invalid', 'valid', 'changing', 'warning'
           or 'alarm'. With the dictionary provided to the qualities
           parameter, ranges or discrete values can be defined. The structure
           splits between these two situations:
- Continuous ranges: that is mainly used for DevDoubles but also
integers. As an example of the dictionary:
- WARNING:{ABSOLUTE:{BELOW:15,ABOVE:80}}
                 This will show VALID quality between 15 and 80, but WARNING
                 if, in absolute terms, the read value goes outside these thresholds.
- Discrete values: that is used mainly in the state-like attributes
and it will establish the quality by an equality. As example:
- ALARM:[0],
WARNING:[1,2,3,5,6,7],
CHANGING:[4]
                 Suppose a discrete attribute with values between 0 and 8. Then
                 a value of 8 will be VALID, 0 will be ALARM, 1-3 and 5-7 will
                 be WARNING, and 4 will show CHANGING.
Next of the parameters is events and with this is configured the
behaviour of the attribute to emit events. Simply by passing a
dictionary (even void like {}) the attribute will be configured to
emit events. In this simplest case events will be emitted if the
value has changed from last reading. But for DevDouble is used a
key THRESHOLD to indicate that read changes below it will not
produce events (like below the format representation). For such
thing is used a circular buffer that collects reads and its mean
is used to compare with a new reading to decide if an event has to
be emitted or not.
Another parameter is the formula. This is mainly used with the
DevBooleans but it's possible for any other. It's again a dictionary
with two possible keys 'read' and/or 'write' and their items shall
be assessed strings in running time that would 'transform' a
reading.
           The readback, setpoint and switch arguments store defined relations
           between attributes. That is, they allow the setpoint (which has
           read and write addresses) to know whether there is a read-only
           attribute that measures what the setpoint sets. Likewise, the
           readback may want to know about the setpoint and whether the
           element is switched on or off.
In the attribute description, one key argument (default None) is:
'IamChecker'. It is made to, if it contains a list of valid read
values, add to the tcpblock reader to decide if a received block
has a valid structure or not.
'''
self.__traceAttrAddr(name, T, readAddr=read_addr, writeAddr=write_addr)
tango_T = self.__mapTypes(T)
try:
read_addr = self.__check_addresses_and_block_sizes(
name, read_addr, write_addr)
except IndexError:
return
self._prepareAttribute(name, T, readAddr=read_addr,
writeAddr=write_addr, formula=formula,
readback=readback, setpoint=setpoint,
switch=switch, label=label, description=desc,
minValue=minValue, maxValue=maxValue,
*args, **kwargs)
rfun = self.__getAttrMethod('read', name)
if write_addr is not None:
wfun = self.__getAttrMethod('write', name)
else:
wfun = None
# TODO: they are not necessary right now
#if readback is not None:
# self.append2relations(name, READBACK, readback)
#if setpoint is not None:
# self.append2relations(name, SETPOINT, setpoint)
#if switch is not None:
# self.append2relations(name, SWITCH, switch)
self._prepareEvents(name, events)
if IamChecker is not None:
try:
self.impl.setChecker(read_addr, IamChecker)
except Exception as e:
self.impl.error_stream("%s cannot be added in the checker set "
"due to:\n%s" % (name, e))
if meanings is not None:
return self._prepareAttrWithMeaning(name, tango_T, meanings,
qualities, rfun, wfun,
**kwargs)
elif qualities is not None:
return self._prepareAttrWithQualities(name, tango_T, qualities,
rfun, wfun, label=label,
**kwargs)
else:
return self.add_Attr(name, tango_T, rfun, wfun, minValue=minValue,
maxValue=maxValue, **kwargs)
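    # Illustrative sketch (hypothetical attribute and addresses, not taken from
    # any real attribute-description file): in the parsed file this builder is
    # exposed as 'Attr'/'AttrAddr', so a line like
    #   Attr('HVPS_V', PyTango.DevFloat, read_addr=10, write_addr=2,
    #        label='HVPS voltage', unit='kV', events={THRESHOLD: 0.01},
    #        qualities={WARNING: {ABSOLUTE: {BELOW: 15, ABOVE: 80}}})
    # would build a read/write scalar together with its change events and qualities.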
def add_AttrAddrBit(self, name, read_addr=None, read_bit=0,
write_addr=None, write_bit=None, meanings=None,
qualities=None, events=None, isRst=False,
activeRst_t=None, formula=None, switchDescriptor=None,
readback=None, setpoint=None, logLevel=None,
label=None, desc=None, minValue=None, maxValue=None,
*args, **kwargs):
        '''This method is a builder of a boolean dynamic attribute, for RO as
           well as for RW. There are many optional parameters.
           With the meanings argument, a DevString attribute is generated in
           addition to the DevBoolean (suffixed *_Status), sharing the same
           event and qualities configuration if present, and carrying a
           human-readable message built from the concatenation of the value
           and its meaning.
There are also boolean attributes with a reset feature, those are
attributes that can be triggered and after some short period of time
they are automatically set back. The time with this reset active
can be generic (and uses ACTIVE_RESET_T from the constants) or can
be specified for a particular attribute using the activeRst_t.
Another feature implemented for this type of attributes is the
formula. That requires a dictionary with keys:
+ 'read' | 'write': they contain an string to be evaluated when
value changes like a filter or to avoid an action based on some
condition.
For example, this is used to avoid to power up klystrons if there
is an interlock, or to switch of the led when an interlock occurs.
{'read':'VALUE and '\
'self._plcAttrs[\'HVPS_ST\'][\'read_value\'] == 9 and '\
'self._plcAttrs[\'Pulse_ST\'][\'read_value\'] == 8',
'write':'VALUE and '\
'self._plcAttrs[\'HVPS_ST\'][\'read_value\'] == 8 and '\
'self._plcAttrs[\'Pulse_ST\'][\'read_value\'] == 7'
},
           The latest feature implemented relates to the rampeable
           attributes: it is a secondary configuration for the AttrRampeable
           DevDouble attributes, but in this case the goal is to manage
           ramping from the booleans that power those elements on and off.
The ramp itself shall be defined in the DevDouble attribute, the
switch attribute only needs to know where to send this when state
changes.
The switchDescriptor is a dictionary with keys:
+ ATTR2RAMP: the name of the numerical attribute involved with the
state transition.
+ WHENON | WHENOFF: keys to differentiate action interval between
the two possible state changes.
- FROM: initial value of the state change ramp
- TO: final value of the state change ramp
About those two last keys, they can be both or only one.
+ AUTOSTOP: in case it has also the autostop feature, this is used
to identify the buffer to clean when transition from off to on.
The set of arguments about readback, setpoint and switch are there
to store defined relations between attributes. That is, to allow the
setpoint (that has a read and write addresses) to know if there is
another read only attribute that does the measure of what the
setpoint sets. Also this readback may like to know about the
setpoint and if the element is switch on or off.
'''
self.__traceAttrAddr(name, PyTango.DevBoolean, readAddr=read_addr,
readBit=read_bit, writeAddr=write_addr,
writeBit=write_bit)
try:
read_addr = self.__check_addresses_and_block_sizes(
name, read_addr, write_addr)
except IndexError:
return
self._prepareAttribute(name, PyTango.DevBoolean, readAddr=read_addr,
readBit=read_bit, writeAddr=write_addr,
writeBit=write_bit, formula=formula,
readback=readback, setpoint=setpoint,
label=label, description=desc,
minValue=minValue, maxValue=maxValue,
*args, **kwargs)
rfun = self.__getAttrMethod('read', name, isBit=True)
if write_addr is not None:
wfun = self.__getAttrMethod('write', name, isBit=True)
if write_bit is None:
write_bit = read_bit
else:
wfun = None
if isRst:
self.impl._plcAttrs[name][ISRESET] = True
self.impl._plcAttrs[name][RESETTIME] = None
if activeRst_t is not None:
self.impl._plcAttrs[name][RESETACTIVE] = activeRst_t
if type(switchDescriptor) == dict:
self.impl._plcAttrs[name][SWITCHDESCRIPTOR] = switchDescriptor
self.impl._plcAttrs[name][SWITCHDEST] = None
# in the construction of the AutoStopAttr() the current switch
# may not be build yet. Then now they must be linked together.
if AUTOSTOP in switchDescriptor:
autostopAttrName = switchDescriptor[AUTOSTOP]
if autostopAttrName in self.impl._internalAttrs:
autostopper = self.impl._internalAttrs[autostopAttrName]
if autostopper.switch == name:
autostopper.setSwitchAttr(self.impl._plcAttrs[name])
self._prepareEvents(name, events)
if logLevel is not None:
self.impl._getAttrStruct(name).logLevel = logLevel
if meanings is not None:
return self._prepareAttrWithMeaning(name, PyTango.DevBoolean,
meanings, qualities, rfun,
wfun, historyBuffer=None,
**kwargs)
else:
return self.add_Attr(name, PyTango.DevBoolean, rfun, wfun,
minValue=minValue, maxValue=maxValue,
**kwargs)
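    # Illustrative sketch (hypothetical addresses and bits): in the parsed
    # attribute-description file this builder is exposed as 'AttrBit', e.g.
    #   AttrBit('HVPS_ONC', read_addr=30, read_bit=0, write_addr=5,
    #           write_bit=0, events={}, desc='HVPS on command')
    # where the addresses and bit positions are made up for the example.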
def add_AttrGrpBit(self, name, attrGroup=None, meanings=None, qualities=None,
events=None, **kwargs):
'''An special type of attribute where, given a set of bits by the pair
[reg,bit] this attribute can operate all of them as one.
           That is, the read value is True if _all_ of them are true, and
           the write value is applied to _all_ of them
           (almost) at the same time.
'''
self.__traceAttrAddr(name, PyTango.DevBoolean, internal=True)
attrObj = GroupAttr(name=name, device=self.impl, group=attrGroup)
self.impl._internalAttrs[name] = attrObj
rfun = attrObj.read_attr
wfun = attrObj.write_attr
toReturn = [self.add_Attr(name, PyTango.DevBoolean, rfun, wfun,
**kwargs)]
if qualities is not None:
attrObj.qualities = qualities
if meanings is not None:
meaningAttr = self._buildMeaningAttr(attrObj, meanings, rfun,
**kwargs)
toReturn.append(meaningAttr)
self._prepareEvents(name, events)
return tuple(toReturn)
def add_AttrLogic(self, name, logic, label, desc, events=None,
operator='and', inverted=False, **kwargs):
'''Internal type of attribute made to evaluate a logical formula with
other attributes owned by the device with a boolean result.
'''
self.__traceAttrAddr(name, PyTango.DevBoolean, internalRO=True)
self.impl.debug_stream("%s logic: %s" % (name, logic))
# self._prepareInternalAttribute(name, PyTango.DevBoolean, logic=logic,
# operator=operator, inverted=inverted)
attrObj = LogicAttr(name=name, device=self.impl,
valueType=PyTango.DevBoolean,logic=logic,
operator=operator, inverted=inverted)
self.impl._internalAttrs[name] = attrObj
rfun = self.__getAttrMethod('read', name, isLogical=True)
wfun = None # this kind can only be ReadOnly
for key in logic:
self.append2relations(name, LOGIC, key)
self._prepareEvents(name, events)
return self.add_Attr(name, PyTango.DevBoolean, rfun, wfun, label, **kwargs)
def add_AttrRampeable(self, name, T, read_addr, write_addr, label, unit,
rampsDescriptor, events=None, qualities=None,
readback=None, switch=None, desc=None, minValue=None,
maxValue=None, *args, **kwargs):
'''Given 2 plc memory positions (for read and write), with this method
build a RW attribute that looks like the other RWs but it includes
ramping features.
- rampsDescriptor is a dictionary with two main keys:
+ ASCENDING | DESCENDING: Each of these keys contain a
dictionary in side describing the behaviour of the ramp
('+' mandatory keys, '-' optional keys):
+ STEP: value added/subtracted on each step.
+ STEPTIME: seconds until next step.
- THRESHOLD: initial value from where start ramping.
- SWITCH: attribute to monitor if it has switched off
Those keys will generate attributes called '$name_$key' as memorised
to allow the user to adapt the behaviour depending on configuration.
           About the threshold: the user requested (for the klystron HV) not
           to apply the ramp between 0 and N, and only ramp towards the
           setpoint once the value is above that threshold. The user also
           requested this ramp only in the increasing direction; decreasing
           goes directly to the setpoint.
Example:
- rampsDescriptor = {ASCENDING:
{STEP:0.5,#kV
STEPTIME:1,#s
THRESHOLD:20,#kV
SWITCH:'HVPS_ONC'
}}
           Another request, for the filament voltage, is a descending ramp
           with characteristics similar to the klystrons', but also: once a
           power off is commanded, delay it by ramping down to 0. This second
           request is managed from the boolean that does the on/off
           transition, using the AttrAddrBit() builder together with a
           switchDescriptor dictionary.
Example:
- rampsDescriptor = {DESCENDING:
{STEP:1,#kV
STEPTIME:1,#s
THRESHOLD:-50,#kV
SWITCH:'GUN_HV_ONC'
},
ASCENDING:
{STEP:5,#kV
STEPTIME:0.5,#s
THRESHOLD:-90,#kV
SWITCH:'GUN_HV_ONC'
}}
The set of arguments about readback, setpoint and switch are there
to store defined relations between attributes. That is, to allow the
setpoint (that has a read and write addresses) to know if there is
another read only attribute that does the measure of what the
setpoint sets. Also this readback may like to know about the
setpoint and if the element is switch on or off.
'''
self.__traceAttrAddr(name, T, readAddr=read_addr, writeAddr=write_addr)
tango_T = self.__mapTypes(T)
self._prepareAttribute(name, T, readAddr=read_addr,
writeAddr=write_addr, readback=readback,
switch=switch, label=label, description=desc,
minValue=minValue, maxValue=maxValue,
*args, **kwargs)
rfun = self.__getAttrMethod('read', name)
wfun = self.__getAttrMethod('write', name, rampeable=True)
self._prepareEvents(name, events)
if qualities is not None:
rampeableAttr = self._prepareAttrWithQualities(name, tango_T,
qualities, rfun,
wfun, label=label,
**kwargs)
else:
rampeableAttr = self.add_Attr(name, tango_T, rfun, wfun, label,
minValue=minValue, maxValue=maxValue,
**kwargs)
# until here, it's not different than another attribute
# Next is specific for rampeable attributes
rampAttributes = []
# FIXME: temporally disabled all the ramps
# TODO: review if the callback functionality can be usefull here
# self.impl._plcAttrs[name][RAMP] = rampsDescriptor
# self.impl._plcAttrs[name][RAMPDEST] = None
# for rampDirection in rampsDescriptor.keys():
# if not rampDirection in [ASCENDING,DESCENDING]:
# self.impl.error_stream("In attribute %s, the ramp direction "
# "%s has been not recognised."
# %(name,rampDirection))
# else:
# rampAttributes = []
# newAttr = self._buildInternalAttr4RampEnable(name,name)
# if newAttr != None:
# rampAttributes.append(newAttr)
# for subAttrName in rampsDescriptor[rampDirection].keys():
# if subAttrName in [STEP,STEPTIME,THRESHOLD]:
# if subAttrName == STEPTIME:
# subAttrUnit = 'seconds'
# else:
# subAttrUnit = unit
# defaultValue = rampsDescriptor[rampDirection]\
# [subAttrName]
# newAttr = self._buildInternalAttr4Ramping(\
# name+'_'+rampDirection, subAttrName,
# name+" "+rampDirection, subAttrUnit,
# defaultValue)
# if newAttr is not None:
# rampAttributes.append(newAttr)
rampAttributes.insert(0, rampeableAttr)
return tuple(rampAttributes)
def add_AttrPLC(self, heart, lockst, read_lockingAddr, read_lockingBit,
write_lockingAddr, write_lockingBit):
heartbeat = self.add_AttrHeartBeat(heart)
lockState, lockStatus = self.add_AttrLock_ST(lockst)
locking = self.add_AttrLocking(read_lockingAddr, read_lockingBit,
write_lockingAddr, write_lockingBit)
return (heartbeat, lockState, lockStatus, locking)
def add_AttrLock_ST(self, read_addr):
COMM_STATUS = {0: 'unlocked', 1: 'local', 2: 'remote'}
COMM_QUALITIES = {ALARM: [0], WARNING: [2]}
plc_name = self.impl.get_name().split('/')[-1]
desc = 'lock status %s' % plc_name
        # This attr is a number, but what is shown to the user as the
        # information is a string
self.impl.lock_ST = read_addr
self.impl.setChecker(self.impl.lock_ST, ['\x00', '\x01', '\x02'])
LockAttrs = self.add_AttrAddr('Lock_ST', PyTango.DevUChar, read_addr,
label=desc, desc=desc+john(COMM_STATUS),
meanings=COMM_STATUS,
qualities=COMM_QUALITIES, events={})
# This UChar is to know what to read from the plc, the AttrAddr,
# because it has an enumerate, will set this attr as string
self.impl.set_change_event('Lock_ST', True, False)
self.impl.set_change_event('Lock_Status', True, False)
return LockAttrs
def add_AttrLocking(self, read_addr, read_bit, write_addr, write_bit):
desc = 'True when attempting to obtain write lock'
new_attr = self.add_AttrAddrBit('Locking', read_addr, read_bit,
write_addr, write_bit, desc=desc,
events={})
locking_attr = self.impl.get_device_attr().get_attr_by_name('Locking')
self.impl.Locking = locking_attr
locking_attr.set_write_value(False)
self.impl.locking_raddr = read_addr
self.impl.locking_rbit = read_bit
# TODO: adding this checker, it works worst
        # if hasattr(self.impl,'read_db') and self.impl.read_db is not None:
# self.impl.setChecker(self.impl.locking_raddr, ['\x00', '\x01'])
self.impl.locking_waddr = write_addr
self.impl.locking_wbit = write_bit
# TODO: adding this checker, it works worst
# if hasattr(self.impl,'read_db') and self.impl.read_db is not None:
# self.impl.setChecker(self.impl.locking_waddr, ['\x00','\x01'])
self.impl.set_change_event('Locking', True, False)
return new_attr
def add_AttrHeartBeat(self, read_addr, read_bit=0):
self.impl.heartbeat_addr = read_addr
desc = 'cadence bit going from True to False when PLC is okay'
attr = self.add_AttrAddrBit('HeartBeat', read_addr, read_bit,
desc=desc, events={})
self.impl.set_change_event('HeartBeat', True, False)
return attr
def add_AttrEnumeration(self, name, prefix=None, suffixes=None,
*args, **kwargs):
self.impl.info_stream("Building a Enumeration attribute set for %s"
% name)
if prefix is not None:
            # With the klystrons the user likes to see the number in the
            # label, but we don't want it in the attribute name because it
            # would make those two otherwise equal devices differ.
try:
plcN = int(self.impl.get_name().split('plc')[-1])
except:
plcN = 0
if plcN in [4, 5]:
label = "%s%d_%s" % (prefix, plcN-3, name)
name = "%s_%s" % (prefix, name)
else:
label = "%s_%s" % (prefix, name)
name = "%s_%s" % (prefix, name)
# FIXME: but this is something "ad hoc"
else:
label = "%s" % (name)
if suffixes is None:
suffixes = {'options': [PyTango.DevString, 'read_write'],
'active': [PyTango.DevString, 'read_write'],
'numeric': [PyTango.DevUShort, 'read_only'],
'meaning': [PyTango.DevString, 'read_only']}
attrs = []
try:
enumObj = EnumerationAttr(name, valueType=None)
for suffix in suffixes.keys():
try:
attrType = suffixes[suffix][0]
rfun = enumObj.read_attr
if suffixes[suffix][1] == 'read_write':
wfun = enumObj.write_attr
else:
wfun = None
attr = self.add_Attr(name+'_'+suffix, attrType,
label="%s %s" % (label, suffix),
rfun=rfun, wfun=wfun, **kwargs)
# FIXME: setup events in the self.add_Attr(...)
self.impl.set_change_event(name+'_'+suffix, True, False)
attrs.append(attr)
except Exception as e:
self.impl.debug_stream("In %s enumeration, exception "
"with %s: %s" % (name, suffix, e))
self.impl._internalAttrs[name] = enumObj
enumObj.device = self.impl
except Exception as e:
self.impl.error_stream("Fatal exception building %s: %s"
% (name, e))
traceback.print_exc()
# No need to configure device memorised attributes because the
# _LinacAttr subclasses already have this feature nested in the
# implementation.
return tuple(attrs)
def remove_all(self):
for attr in self.alist:
try:
self.impl.remove_attribute(attr.get_name())
except ValueError as exc:
self.impl.debug_stream(attr.get_name()+': '+str(exc))
def build(self, fname):
if self._buider is not None:
if not isinstance(self._buider, threading.Thread):
msg = "AttrList builder is not a Thread! (%s)" \
% (type(self._buider))
self.impl.error_stream("Ups! This should never happen: %s"
% (msg))
raise AssertionError(msg)
elif self._buider.isAlive():
msg = "AttrList build while it is building"
self.impl.error_stream("Ups! This should never happen: %s"
% (msg))
return
else:
self._buider = None
self._buider = threading.Thread(name="FileParser",
target=self.parse_file, args=(fname,))
self.impl.info_stream("Launch a thread to build the dynamic attrs")
self._buider.start()
def parse_file(self, fname):
t0 = time.time()
self._fileParsed.clear()
msg = "%30s\t%10s\t%5s\t%6s\t%6s"\
% ("'attrName'", "'Type'", "'RO/RW'", "'read'", "'write'")
self.impl.info_stream(msg)
try:
execfile(fname, self.globals_, self.locals_)
except IOError as io:
self.impl.error_stream("AttrList.parse_file IOError: %s\n%s"
% (e, traceback.format_exc()))
raise LinacException(io)
except Exception as e:
self.impl.error_stream("AttrList.parse_file Exception: %s\n%s"
% (e, traceback.format_exc()))
self.impl.debug_stream('Parse attrFile done.')
# Here, I can be sure that all the objects are build,
# then any none existing object reports a configuration
# mistake in the parsed file.
for origName in self._relations:
try:
origObj = self.impl._getAttrStruct(origName)
for tag in self._relations[origName]:
for destName in self._relations[origName][tag]:
try:
destObj = self.impl._getAttrStruct(destName)
origObj.addReportTo(destObj)
self.impl.debug_stream("Linking %s with %s (%s)"
% (origName, destName, tag))
origObj.reporter.report()
except Exception as e:
self.impl.error_stream("Exception managing the "
"relation between %s and "
"%s: %s" % (origName,
destName, e))
except Exception as e:
self.impl.error_stream("Exception managing %s relations: %s"
% (origName, e))
traceback.print_exc()
self.impl.applyCheckers()
self._fileParsed.set()
tf = time.time()
self.impl.info_stream("file parsed in %6.3f seconds" % (tf-t0))
def parse(self, text):
exec text in self.globals_, self.locals_
# # internal auxiliar methods ---
def __getAttrMethod(self, operation, attrName, isBit=False,
rampeable=False, internal=False, isGroup=False,
isLogical=False):
# if exist an specific method
attrStruct = self.impl._getAttrStruct(attrName)
return getattr(attrStruct, "%s_attr" % (operation))
# if hasattr(self.impl, "%s_%s" % (operation, attrName)):
# return getattr(self.impl, "%s_%s" % (operation, attrName))
# # or use the generic method for its type
# elif isBit:
# return getattr(self.impl, "%s_attr_bit" % (operation))
# elif operation == 'write' and rampeable:
# # no sense with read operation
# # FIXME: temporally disabled all the ramps
# # return getattr(self.impl,"write_attr_with_ramp")
# return getattr(self.impl, "write_attr")
# elif isGroup:
# return getattr(self.impl, '%s_attrGrpBit' % (operation))
# elif internal:
# return getattr(self.impl, "%s_internal_attr" % (operation))
# elif isLogical:
# return getattr(self.impl, "%s_logical_attr" % (operation))
# else:
# return getattr(self.impl, "%s_attr" % (operation))
def __traceAttrAddr(self, attrName, attrType, readAddr=None, readBit=None,
writeAddr=None, writeBit=None, internal=False,
internalRO=False):
# first column is the attrName
msg = "%30s\t" % ("'%s'" % attrName)
# second, its type
msg += "%10s\t" % ("'%s'" % attrType)
# Then, if it's read only or read/write
if writeAddr is not None or internal:
msg += " 'RW'\t"
else:
msg += "'RO' \t"
if readAddr is not None:
if readBit is not None:
read = "'%s.%s'" % (readAddr, readBit)
else:
read = "'%s'" % (readAddr)
msg += "%6s\t" % (read)
if writeAddr is not None:
if writeBit is not None:
write = "'%s.%s'" % (writeAddr, writeBit)
else:
write = "'%s'" % (writeAddr)
msg += "%6s\t" % (write)
self.impl.info_stream(msg)
# # prepare attribute structures ---
def _prepareAttribute(self, attrName, attrType, readAddr, readBit=None,
writeAddr=None, writeBit=None, formula=None,
readback=None, setpoint=None, switch=None,
label=None, description=None,
minValue=None, maxValue=None, **kwargs):
        '''Constructor of an item in the dictionary of attributes related
        with PLC memory locations. Each attribute has at least a read
        address and a type; booleans also need a read bit. Writable
        attributes have the equivalent write address and, for booleans,
        a write bit (current PLCs use the same bit for read and write,
        but a different one is supported).
        A formula can also be introduced, distinguishing between a read
        and a write expression; both do not need to coexist.
        The readback, setpoint and switch arguments store defined
        relations between attributes: they allow a setpoint (which has
        read and write addresses) to know whether there is a read-only
        attribute that measures what the setpoint sets. Likewise, the
        readback may want to know about its setpoint and whether the
        element is switched on or off.
        '''
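        # Illustrative sketch (not taken from any real attribute file) of
        # how a parsed description might end up calling this helper; the
        # attribute name, addresses and limits below are hypothetical:
        #     self._prepareAttribute('HV_V_setpoint', PyTango.DevFloat,
        #                            readAddr=46, writeAddr=2,
        #                            label='HV voltage setpoint',
        #                            minValue=0.0, maxValue=90.0)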
if readAddr is None and writeAddr is not None:
readAddr = self._db20_size + writeAddr
attrObj = PLCAttr(name=attrName, device=self.impl, valueType=attrType,
readAddr=readAddr, readBit=readBit,
writeAddr=writeAddr, writeBit=writeBit,
formula=formula,
readback=readback, setpoint=setpoint, switch=switch,
label=label, description=description,
minValue=minValue, maxValue=maxValue)
self.impl._plcAttrs[attrName] = attrObj
self._insert2reverseDictionary(attrName, attrType, readAddr, readBit,
writeAddr, writeBit)
# if readBit is not None:
# if readAddr not in self.impl._addrDct:
# self.impl._addrDct[readAddr] = {}
# self.impl._addrDct[readAddr][readBit] = []
# self.impl._addrDct[readAddr][readBit].append(attrName)
# if writeAddr is not None:
# self.impl._addrDct[readAddr][readBit].append(writeAddr)
# if writeBit is not None:
# self.impl._addrDct[readAddr][readBit].append(writeBit)
# else:
# self.impl._addrDct[readAddr][readBit].append(readBit)
# else:
# if readAddr not in self.impl._addrDct:
# self.impl._addrDct[readAddr] = []
# self.impl._addrDct[readAddr].append(attrName)
# self.impl._addrDct[readAddr].append(attrType)
# if writeAddr is not None:
# self.impl._addrDct[readAddr].append(writeAddr)
# else:
# self.impl.error_stream("The address %s has been found in the "
# "reverse dictionary" % (readAddr))
def _insert2reverseDictionary(self, name, valueType, readAddr, readBit,
writeAddr, writeBit):
'''
        Hackish helper to allow the dump of a valid write block when the
        PLC provides invalid values in the write datablock
:param name:
:param valueType:
:param readAddr:
:param readBit:
:param writeAddr:
:param writeBit:
:return:
'''
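        # Hedged sketch of the reverse dictionary shape this method fills
        # in (addresses and names below are hypothetical; only the nesting
        # reflects what the code builds for a boolean attribute):
        #     self.impl._addrDct = {
        #         'readBlock': {10: {0: {'name': 'GUN_HV_ONC',
        #                                'type': PyTango.DevBoolean,
        #                                'writeAddr': 2, 'writeBit': 0}}},
        #         'writeBlock': {2: {0: {'name': 'GUN_HV_ONC',
        #                                'type': PyTango.DevBoolean,
        #                                'readAddr': 10, 'readBit': 0}}}}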
dct = self.impl._addrDct
if 'readBlock' not in dct:
dct['readBlock'] = {}
rDct = dct['readBlock']
if 'writeBlock' not in dct:
dct['writeBlock'] = {}
wDct = dct['writeBlock']
if readBit is not None: # boolean
if readAddr not in rDct:
rDct[readAddr] = {}
if readBit in rDct[readAddr]:
self.impl.warn_stream(
"{0} override readAddr {1} readBit {2}: {3}"
"".format(name, readAddr, readBit,
rDct[readAddr][readBit]['name']))
rDct[readAddr][readBit] = {}
rDct[readAddr][readBit]['name'] = name
rDct[readAddr][readBit]['type'] = valueType
if writeAddr is not None:
rDct[readAddr][readBit]['writeAddr'] = writeAddr
if writeBit is None:
writeBit = readBit
rDct[readAddr][readBit]['writeBit'] = writeBit
if writeAddr not in wDct:
wDct[writeAddr] = {}
wDct[writeAddr][writeBit] = {}
wDct[writeAddr][writeBit]['name'] = name
wDct[writeAddr][writeBit]['type'] = valueType
wDct[writeAddr][writeBit]['readAddr'] = readAddr
wDct[writeAddr][writeBit]['readBit'] = readBit
else: # Byte, Word or Float
if readAddr in rDct:
self.impl.warn_stream(
"{0} override readAddr {1}: {2}"
"".format(name, readAddr, rDct[readAddr]['name']))
rDct[readAddr] = {}
rDct[readAddr]['name'] = name
rDct[readAddr]['type'] = valueType
if writeAddr is not None:
rDct[readAddr]['writeAddr'] = writeAddr
if writeAddr in wDct:
self.impl.warn_stream(
"{0} override writeAddr {1}:{2}"
"".format(name, writeAddr, wDct[writeAddr]['name']))
wDct[writeAddr] = {}
wDct[writeAddr]['name'] = name
wDct[writeAddr]['type'] = valueType
wDct[writeAddr]['readAddr'] = readAddr
def _prepareInternalAttribute(self, attrName, attrType, memorized=False,
isWritable=False, defaultValue=None,
logic=None, operator=None, inverted=None):
attrObj = InternalAttr(name=attrName, device=self.impl,
valueType=attrType, memorized=memorized,
isWritable=isWritable,
defaultValue=defaultValue, logic=logic,
operator=operator, inverted=inverted)
self.impl._internalAttrs[attrName] = attrObj
def _prepareEvents(self, attrName, eventConfig):
if eventConfig is not None:
attrStruct = self.impl._getAttrStruct(attrName)
attrStruct[EVENTS] = eventConfig
attrStruct[LASTEVENTQUALITY] = PyTango.AttrQuality.ATTR_VALID
attrStruct[EVENTTIME] = None
def _prepareAttrWithMeaning(self, attrName, attrType, meanings, qualities,
rfun, wfun, historyBuffer=None, **kwargs):
        '''There are some short integers whose number doesn't mean anything
        by itself; the PLC register description relates the possible
        numbers with their meaning.
        These attributes are split in two:
        - one with only the number (machine readable: archiver, plots)
        - another string with the number and its meaning (human readable)
        The historyBuffer parameter has been developed to introduce
        interlock tracking (using a secondary attribute called *_History).
        That is, starting from a set of non-interlock values, when the
        attribute reads something different from them, it starts collecting
        those new values in order to provide a log line of the interlock
        activity. When the interlock is cleared, the read value is back in
        the list of non-interlock values and this buffer is cleaned.
        '''
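        # Hedged example of the 'meanings' mapping this method expects
        # (the numbers and labels are hypothetical, not from a real PLC
        # register description):
        #     meanings = {0: 'off', 1: 'ramping', 2: 'on', 3: 'interlock'}
        # so that a read of 2 would be published as the string "2:on" by
        # the companion *_Status attribute.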
# first, build the same than has been archived
attrState = self.add_Attr(attrName, attrType, rfun, wfun, **kwargs)
# then prepare the human readable attribute as a feature
attrStruct = self.impl._plcAttrs[attrName]
attrStruct.qualities = qualities
attrTuple = self._buildMeaningAttr(attrStruct, meanings, rfun,
**kwargs)
toReturn = (attrState,)
toReturn += (attrTuple,)
return toReturn
def _buildMeaningAttr(self, attrObj, meanings, rfun, historyBuffer=None,
**kwargs):
if attrObj.name.endswith('_ST'):
name = attrObj.name.replace('_ST', '_Status')
else:
name = "%s_Status" % (attrObj.name)
attrObj.meanings = meanings
self.impl._plcAttrs[name] = attrObj._meaningsObj
self.impl._plcAttrs[name].alias = name
self.impl._plcAttrs[name].meanings = meanings
self.impl._plcAttrs[name].qualities = attrObj.qualities
meaningAttr = self.add_Attr(name, PyTango.DevString, rfun, wfun=None,
**kwargs)
        toReturn = (meaningAttr, )
if historyBuffer is not None:
attrHistoryName = "%s_History" % (attrStruct._meaningsalias)
attrStruct.history = historyBuffer
historyStruct = attrStruct._historyObj
historyStruct.history = historyBuffer
historyStruct.alias = attrHistoryName
attrStruct.read_value = HistoryBuffer(
cleaners=historyBuffer[BASESET], maxlen=HISTORYLENGTH,
owner=attrStruct)
xdim = attrStruct.read_value.maxSize()
self.impl._plcAttrs[attrHistoryName] = historyStruct
attrHistory = self.add_Attr(attrHistoryName, PyTango.DevString,
rfun=historyStruct.read_attr,
xdim=xdim, **kwargs)
toReturn += (attrHistory,)
return toReturn
def _prepareAttrWithQualities(self, attrName, attrType, qualities,
rfun, wfun, label=None, unit=None,
autoStop=None, **kwargs):
        '''Attributes with a qualities definition, but without meanings for
        their possible values, are specifically built with a
        CircularBuffer as the read element. This keeps a small record of
        the previous values, needed for the RELATIVE condition (mainly
        used with the CHANGING quality); without a bit of memory of the
        past it is not possible to know what has happened.
        This kind of attribute has another possible keyword named
        'autoStop'. It was initially made for the eGun HV leakage
        current, to stop it when the leak is too persistent in time
        (adjustable using an extra attribute). Apart from that, the user
        has a feature attribute to disable it.
        TODO: feature 'too far' from a setpoint value.
        '''
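        # Hedged example of what a qualities/autoStop descriptor could look
        # like (keys follow the constants used in this module; the numbers
        # and the switch name are hypothetical):
        #     qualities = {WARNING: {ABSOLUTE: {ABOVE: 0.1}},
        #                  CHANGING: {RELATIVE: 0.01}}
        #     autoStop = {SWITCHDESCRIPTOR: 'GUN_HV_ONC',
        #                 BELOW: 0.01, INTEGRATIONTIME: 1.0}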
self.impl._plcAttrs[attrName][READVALUE] = \
CircularBuffer([], owner=self.impl._plcAttrs[attrName])
self.impl._plcAttrs[attrName][QUALITIES] = qualities
toReturn = (self.add_Attr(attrName, attrType, rfun, wfun, label=label,
unit=unit, **kwargs),)
if autoStop is not None:
# FIXME: shall it be in the AttrWithQualities? Or more generic?
toReturn += self._buildAutoStopAttributes(attrName, label,
attrType, autoStop,
**kwargs)
return toReturn
# # Builders for subattributes ---
def _buildAutoStopAttributes(self, baseName, baseLabel, attrType,
                                 autoStopDesc, logLevel=None, **kwargs):
# TODO: review if the callback between attributes can be usefull here
attrs = []
autostopperName = "%s_%s" % (baseName, AUTOSTOP)
autostopperLabel = "%s %s" % (baseLabel, AUTOSTOP)
autostopSwitch = autoStopDesc.get(SWITCHDESCRIPTOR, None)
if autostopSwitch in self.impl._plcAttrs:
autostopSwitch = self.impl._plcAttrs[autostopSwitch]
            # depending on the build process, the switch object may not be
            # built yet. That's why the name (as string) is stored.
            # Later, when the switch (AttrAddrBit) is built, this assignment
            # will be completed.
autostopper = AutoStopAttr(name=autostopperName,
valueType=attrType,
device=self.impl,
plcAttr=self.impl._plcAttrs[baseName],
below=autoStopDesc.get(BELOW, None),
above=autoStopDesc.get(ABOVE, None),
switchAttr=autostopSwitch,
integr_t=autoStopDesc.get(INTEGRATIONTIME,
None),
events={})
self.impl._internalAttrs[autostopperName] = autostopper
spectrumAttr = self.add_Attr(autostopperName, PyTango.DevDouble,
rfun=autostopper.read_attr, xdim=1000,
label=autostopperLabel)
attrs.append(spectrumAttr)
enableAttr = self._buildAutoStopperAttr(autostopperName,
autostopperLabel, ENABLE,
autostopper._enable,
PyTango.DevBoolean,
memorised=True, writable=True)
attrs.append(enableAttr)
for condition in [BELOW, ABOVE]:
if condition in autoStopDesc:
condAttr = self._buildAutoStopConditionAttr(condition,
autostopperName,
autostopperLabel,
autostopper)
attrs.append(condAttr)
integrAttr = self._buildAutoStopperAttr(autostopperName,
autostopperLabel,
INTEGRATIONTIME,
autostopper._integr_t,
PyTango.DevDouble,
memorised=True, writable=True)
meanAttr = self._buildAutoStopperAttr(autostopperName,
autostopperLabel, MEAN,
autostopper._mean,
PyTango.DevDouble)
attrs.append(meanAttr)
stdAttr = self._buildAutoStopperAttr(autostopperName,
autostopperLabel, STD,
autostopper._std,
PyTango.DevDouble)
attrs.append(stdAttr)
triggeredAttr = self._buildAutoStopperAttr(autostopperName,
autostopperLabel, TRIGGERED,
autostopper._triggered,
PyTango.DevBoolean)
attrs.append(triggeredAttr)
if logLevel is not None:
autostopper.logLevel = logLevel
            # it is only necessary to set it in one of them (here the main
            # one), but it can be any of them because they share the logLevel.
return tuple(attrs)
def _buildAutoStopperAttr(self, baseName, baseLabel, suffix,
autostopperComponent, dataType, memorised=False,
writable=False):
attrName = "%s_%s" % (baseName, suffix)
attrLabel = "%s %s" % (baseLabel, suffix)
autostopperComponent.alias = attrName
if memorised:
autostopperComponent.setMemorised()
rfun = autostopperComponent.read_attr
if writable:
wfun = autostopperComponent.write_attr
else:
wfun = None
self.impl._internalAttrs[attrName] = autostopperComponent
return self.add_Attr(attrName, dataType,
rfun=rfun, wfun=wfun,
label=attrLabel)
def _buildAutoStopConditionAttr(self, condition, baseName, baseLabel,
autostopper):
conditionName = "%s_%s_Threshold" % (baseName, condition)
conditionLabel = "%s %s Threshold" % (baseName, condition)
conditioner = getattr(autostopper, '_%s' % (condition.lower()))
conditioner.alias = conditionName
conditioner.setMemorised()
self.impl._internalAttrs[conditionName] = conditioner
return self.add_Attr(conditionName, PyTango.DevDouble,
rfun=conditioner.read_attr,
wfun=conditioner.write_attr,
label=conditionLabel)
def append2relations(self, origin, tag, dependency):
self.impl.debug_stream("%s depends on %s (%s)"
% (origin, dependency, tag))
if dependency not in self._relations:
self._relations[dependency] = {}
if tag not in self._relations[dependency]:
self._relations[dependency][tag] = []
self._relations[dependency][tag].append(origin)
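    # Hedged sketch of the resulting self._relations structure (names and
    # tag below are hypothetical): after
    #     self.append2relations('GUN_HV_V', 'setpoint', 'GUN_HV_V_setpoint')
    # it would contain
    #     {'GUN_HV_V_setpoint': {'setpoint': ['GUN_HV_V']}}
    # i.e. the dependency is the key and the origins to be notified hang
    # from each tag.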
def __check_addresses_and_block_sizes(self, name, read_addr, write_addr):
if read_addr is None and write_addr is not None:
read_addr = self._db20_size+write_addr
self.impl.debug_stream(
"{0} define the read_addr {1} relative to the db20 size {2} "
"and the write_addr{3}".format(name, read_addr,
self._db20_size, write_addr))
if read_addr > self.impl.ReadSize:
self.impl.warn_stream(
"{0} defines a read_addr {1} out of the size of the "
"db20+db22 {2}: it will not be build"
"".format(name, read_addr, self.impl.ReadSize))
raise IndexError("Out of the DB20")
if write_addr is not None and write_addr > self._db22_size:
self.impl.warn_stream(
"{0} defines a write_addr {1} out of the size of the db22 {2}: "
"it will not be build".format(name, write_addr,
self._db22_size))
raise IndexError("Out of the DB22")
return read_addr
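    # Hedged numeric example of the address translation above (the sizes
    # are hypothetical): with a db20 of 100 bytes, an attribute declared
    # only with write_addr=4 gets read_addr = 100 + 4 = 104, which must
    # still fall inside ReadSize (db20+db22), while the write_addr must
    # fall inside the db22 size.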
def get_ip(iface='eth0'):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sockfd = sock.fileno()
SIOCGIFADDR = 0x8915
ifreq = struct.pack('16sH14s', iface, socket.AF_INET, '\x00'*14)
try:
res = fcntl.ioctl(sockfd, SIOCGIFADDR, ifreq)
except:
return None
ip = struct.unpack('16sH2x4s8x', res)[2]
return socket.inet_ntoa(ip)
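# Hedged usage note for get_ip(): on a host with a configured 'eth0' it is
# expected to return the dotted IPv4 address (e.g. '10.0.0.5') and None if
# the ioctl fails; the interface name below is an assumption of the caller.
#     ip = get_ip('eth0')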
# PROTECTED REGION END --- LinacData.additionnal_import
# # Device States Description
# # INIT : The device is being initialised.
# # ON : PLC communication normal
# # ALARM : Transient issue
# # FAULT : Unrecoverable issue
# # UNKNOWN : No connection with the PLC, no state information
class LinacData(PyTango.Device_4Impl):
# --------- Add you global variables here --------------------------
# PROTECTED REGION ID(LinacData.global_variables) ---
ReadSize = None
WriteSize = None
BindAddress = None # deprecated
LocalAddress = None
RemoteAddress = None
IpAddress = None # deprecated
PlcAddress = None
Port = None
LocalPort = None
RemotePort = None
# assigned by addAttrLocking
locking_raddr = None
locking_rbit = None
locking_waddr = None
locking_wbit = None
lock_ST = None
Locking = None
is_lockedByTango = None
heartbeat_addr = None
AttrFile = None
_plcAttrs = {}
_internalAttrs = {}
_addrDct = {}
disconnect_t = 0
read_db = None
dataBlockSemaphore = threading.Semaphore()
_important_logs = []
# #ramping auxiliars
# _rampThreads = {}
# _switchThreads = {}
# #hackish to reemit events
# _sayAgainThread = None
# _sayAgainQueue = None
# FIXME: remove the expert attributes! ---
# special event emition trace
#_traceAttrs = []
#_tracedAttrsHistory = {}
#_historySize = 100
_traceTooClose = []
_prevMemDump = None
_prevLockSt = None
def debug_stream(self, msg):
super(LinacData, self).debug_stream(
"[%s] %s" % (threading.current_thread().getName(), msg))
def info_stream(self, msg):
super(LinacData, self).info_stream(
"[%s] %s" % (threading.current_thread().getName(), msg))
def warn_stream(self, msg):
super(LinacData, self).warn_stream(
"[%s] %s" % (threading.current_thread().getName(), msg))
def error_stream(self, msg):
super(LinacData, self).error_stream(
"[%s] %s" % (threading.current_thread().getName(), msg))
####
# PLC connectivity area ---
def connect(self):
        '''This method is used to build the object that maintains the
        communication with the assigned PLC.
        '''
if self.read_db is not None:
return
self.info_stream('connecting...')
self.set_status('connecting...')
try:
self.read_db = tcpblock.open_datablock(self.PlcAddress,
self.Port,
self.ReadSize,
self.WriteSize,
self.BindAddress,
self.info_stream,
self.debug_stream,
self.warn_stream,
self.error_stream,
self.lock_ST)
self.info_stream("build the tcpblock, socket %d"
% (self.read_db.sock.fileno()))
self.write_db = self.read_db
self.info_stream('connected')
self.set_state(PyTango.DevState.ON)
self.set_status('connected')
self.applyCheckers()
return True
except Exception as e:
self.error_stream('connection failed exception: %s'
% (traceback.format_exc()))
self.set_state(PyTango.DevState.FAULT)
self.set_status(traceback.format_exc())
return False
def disconnect(self):
'''This method closes the connection to the assigned PLC.
'''
self.info_stream('disconnecting...')
self.set_status('disconnecting...')
# self._plcUpdatePeriod = PLC_MAX_UPDATE_PERIOD
self._setPlcUpdatePeriod(PLC_MAX_UPDATE_PERIOD)
try:
if self.is_connected():
tcpblock.close_datablock(self.read_db, self.warn_stream)
self.read_db = None
if self.get_state() == PyTango.DevState.ON:
self.set_state(PyTango.DevState.OFF)
self.set_status('not connected')
return True
except:
return False
def reconnect(self):
        '''Reconnect to the PLC when enough time has passed since the
        last update.
        '''
if time.time() - self.last_update_time > self.ReconnectWait:
self.connect()
def is_connected(self):
        '''Check whether the object that interfaces the communication
        with the PLC is built and available.
        '''
return self.read_db is not None and self.read_db.sock is not None
def has_data_available(self):
        '''Check if there is some usable data given by the PLC.
'''
return self.is_connected() and \
len(self.read_db.buf) == self.ReadSize
def setChecker(self, addr, values):
if not hasattr(self, '_checks'):
self.debug_stream("Initialise checks dict")
self._checks = {}
if isinstance(addr, int) and isinstance(values, list):
self.debug_stream("Adding a checker for address %d "
"with values %s" % (addr, values))
self._checks[addr] = values
return True
return False
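    # Hedged usage sketch (the address and values are hypothetical):
    # register that address 70 may only hold 0 or 1 in the read datablock,
    # so any other value is flagged by the underlying tcpblock checkers.
    #     self.setChecker(70, [0, 1])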
def applyCheckers(self):
if hasattr(self, '_checks') and isinstance(self._checks, dict) and \
hasattr(self, 'read_db') and isinstance(self.read_db,
tcpblock.Datablock):
try:
had = len(self.read_db._checks.keys())
for addr in self._checks:
                    if addr not in self.read_db._checks:
self.debug_stream(
"\tfor addr %d insert %s"
% (addr, self._checks[addr]))
self.read_db.setChecker(addr, self._checks[addr])
else:
lst = self.read_db.getChecker(addr)
for value in self._checks[addr]:
if value not in lst:
self.debug_stream(
"\tin addr %d append %s"
% (addr, self._checks[addr]))
self.read_db._checks.append(value)
now = len(self.read_db._checks.keys())
if had != now:
self.debug_stream("From %d to %d checkers" % (had, now))
except Exception as e:
self.error_stream("Incomplete applyCheckers: %s" % (e))
def forceWriteAttrs(self):
        '''There are certain situations, like a PLC shutdown, that
        result in bad DB20 values being received. Then the writable values
        cannot be written, because the datablock only changes one
        register while many have bad values, and it is rejected by the PLC.
        Due to this we force the construction of a complete write
        datablock to be sent all at once.
        '''
if not hasattr(self, 'write_db') or self.write_db is None:
return
wDct = self._addrDct['writeBlock']
self.info_stream("Force to reconstruct the write data block")
self.dataBlockSemaphore.acquire()
self.attr_forceWriteDB_read = "%s\n" % (time.strftime(
"%Y/%m/%d %H:%M:%S", time.localtime()))
wblock_was = self.write_db.buf[self.write_db.write_start:]
try:
for wAddr in wDct:
if 'readAddr' in wDct[wAddr]: # Uchars, Shorts, floats
name = wDct[wAddr]['name']
rAddr = wDct[wAddr]['readAddr']
T, size = TYPE_MAP[wDct[wAddr]['type']]
rValue = self.read_db.get(rAddr, T, size)
wValue = self.write_db.get(
wAddr+self.write_db.write_start, T, size)
msg = "%s = (%s, %s) [%s -> %s, %s, %s]" \
% (name, rValue, wValue, rAddr, wAddr, T, size)
self.attr_forceWriteDB_read += "%s\n" % msg
self.info_stream(msg)
self.write_db.write(wAddr, rValue, (T, size),
dry=True)
else: # booleans
was = byte = self.write_db.b(
wAddr+self.write_db.write_start)
msg = "booleans %d" % (wAddr)
self.attr_forceWriteDB_read += "%s\n" % msg
self.info_stream(msg)
for wBit in wDct[wAddr]:
name = wDct[wAddr][wBit]['name']
rAddr = wDct[wAddr][wBit]['readAddr']
rBit = wDct[wAddr][wBit]['readBit']
rValue = self.read_db.bit(rAddr, rBit)
wValue = self.write_db.bit(
wAddr+self.write_db.write_start, rBit)
msg = "\t%s = (%s, %s) [%s.%s -> %s.%s]" \
% (name, rValue, wValue, rAddr, rBit,
wAddr, wBit)
self.attr_forceWriteDB_read += "%s\n" % msg
self.info_stream(msg)
if rValue is True:
byte = byte | (int(1) << wBit)
else:
byte = byte & ((0xFF) ^ (1 << wBit))
msg = "%d = %s -> %s" \
% (wAddr, binaryByte(was), binaryByte(byte))
self.attr_forceWriteDB_read += "%s\n" % msg
self.info_stream(msg)
self.write_db.rewrite()
wblock_is = self.write_db.buf[self.write_db.write_start:]
i = 0
msg = "writeblock:\n%-11s\t%-11s\n" % ("was:","now:")
while i < len(wblock_was):
line = "%-11s\t%-11s\n" % (
' '.join("%02x" % x for x in wblock_was[i:i+4]),
' '.join("%02x" % x for x in wblock_is[i:i + 4]))
msg += line
i += 4
self.attr_forceWriteDB_read += "%s\n" % msg
self.info_stream(msg)
except Exception as e:
msg = "Could not complete the force Write\n%s" % (e)
self.attr_forceWriteDB_read += "%s\n" % msg
self.error_stream(msg)
self.dataBlockSemaphore.release()
# def _getWattrList(self):
# wAttrNames = []
# for attrName in self._plcAttrs.keys():
# attrStruct = self._getAttrStruct(attrName)
# if WRITEVALUE in attrStruct:
# wAttrNames.append(attrName)
# return wAttrNames
# def _forceWriteDB(self, attr2write):
# for attrName in attr2write:
# attrStruct = self._getAttrStruct(attrName)
# write_addr = attrStruct[WRITEADDR]
# write_value = attrStruct[READVALUE]
# if type(attrStruct[READVALUE]) in [CircularBuffer,
# HistoryBuffer]:
# write_value = attrStruct[READVALUE].value
# else:
# write_value = attrStruct[READVALUE]
# self.info_stream("Dry write of %s value %s"
# % (attrName, write_value))
# if WRITEBIT in attrStruct:
# read_addr = attrStruct[READADDR]
# write_bit = attrStruct[WRITEBIT]
# self.__writeBit(attrName, read_addr, write_addr, write_bit,
# write_value, dry=True)
# else:
# self.write_db.write(write_addr, write_value,
# attrStruct[TYPE], dry=True)
# self.write_db.rewrite()
# Done PLC connectivity area ---
####
# state/status manager methods ---
def set_state(self, newState, log=True):
'''Overload of the superclass method to add event
emission functionality.
'''
if self.get_state() != newState:
if log:
self.warn_stream("Change state from %s to %s"
% (self.get_state(), newState))
PyTango.Device_4Impl.set_state(self, newState)
self.push_change_event('State', newState)
self.set_status("")
# as this changes the state, clean non important
# messages in status
def set_status(self, newLine2status, important=False):
'''Overload of the superclass method to add the extra feature of
the persistent messages added to the status string.
'''
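        # Hedged usage sketch: a transient message appears only until the
        # next state change, while an important one persists until
        # clean_status() is called (the messages below are hypothetical):
        #     self.set_status('reconnecting...')            # transient
        #     self.set_status('PLC memory mismatch', True)  # persistent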
# self.debug_stream("In set_status()")
newStatus = "" # The device is in %s state.\n"%(self.get_state())
for importantMsg in self._important_logs:
if len(importantMsg) > 0:
newStatus = "%s%s\n" % (newStatus, importantMsg)
if len(newLine2status) > 0 and \
newLine2status not in self._important_logs:
newStatus = "%s%s\n" % (newStatus, newLine2status)
if important:
self._important_logs.append(newLine2status)
if len(newStatus) == 0:
newStatus = "The device is in %s state.\n" % (self.get_state())
oldStatus = self.get_status()
if newStatus != oldStatus:
PyTango.Device_4Impl.set_status(self, newStatus)
self.warn_stream("New status message: %s"
% (repr(self.get_status())))
self.push_change_event('Status', newStatus)
def clean_status(self):
        '''With the extra feature of the important logs, this method allows
        cleaning all of those logs, as a clean-interlocks command does.
        '''
self.debug_stream("In clean_status()")
self._important_logs = []
self.set_status("")
# done state/status manager methods ---
# def __doTraceAttr(self, attrName, tag):
# if attrName in self._traceAttrs:
# attrStruct = self._getAttrStruct(attrName)
# readValue = attrStruct[READVALUE]
# if WRITEVALUE in attrStruct:
# writeValue = attrStruct[WRITEVALUE]
# else:
# writeValue = float('NaN')
# quality = "%s" % attrStruct[LASTEVENTQUALITY]
# timestamp = time.ctime(attrStruct[READTIME])
# if attrName not in self._tracedAttrsHistory:
# self._tracedAttrsHistory[attrName] = []
# self._tracedAttrsHistory[attrName].append(
# [tag, readValue, writeValue, quality, timestamp])
# self.debug_stream("Traceing %s with %s tag: "
# "read = %s, write = %s (%s,%s)"
# % (attrName, tag, readValue, writeValue,
# quality, timestamp))
# while len(self._tracedAttrsHistory[attrName]) > \
# self._historySize:
# self._tracedAttrsHistory[attrName].pop(0)
####
# event methods ---
def fireEvent(self, attrEventStruct, timestamp=None):
        '''Method with the procedure to emit an event from one existing
        attribute. The minimal needs are the attribute name and the value
        to emit, but the quality and the timestamp can also be specified.
        '''
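        # Hedged example of the expected attrEventStruct (the values and
        # quality below are hypothetical; the names are the two that are
        # not flagged as deprecated by this method):
        #     self.fireEvent(['lastUpdate', 0.25])
        #     self.fireEvent(['lastUpdateStatus', 'ok',
        #                     PyTango.AttrQuality.ATTR_VALID])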
attrName = attrEventStruct[0]
if attrName not in ['lastUpdate', 'lastUpdateStatus']:
self.warn_stream("DEPRECATED: fireEvent(%s)" % attrName)
attrValue = attrEventStruct[1]
if timestamp is None:
timestamp = time.time()
if len(attrEventStruct) == 3: # the quality is specified
quality = attrEventStruct[2]
else:
quality = PyTango.AttrQuality.ATTR_VALID
# self.__doTraceAttr(attrName, "fireEvent(%s)" % attrValue)
if self.__isHistoryBuffer(attrName):
attrValue = self.__buildHistoryBufferString(attrName)
self.push_change_event(attrName, attrValue, timestamp, quality)
else:
self.push_change_event(attrName, attrValue, timestamp, quality)
attrStruct = self._getAttrStruct(attrName)
if attrStruct is not None and \
LASTEVENTQUALITY in attrStruct and \
not quality == attrStruct[LASTEVENTQUALITY]:
attrStruct[LASTEVENTQUALITY] = quality
if attrStruct is not None and EVENTTIME in attrStruct:
now = time.time()
attrStruct[EVENTTIME] = now
attrStruct[EVENTTIMESTR] = time.ctime(now)
def fireEventsList(self, eventsAttrList, timestamp=None, log=False):
        '''Given a set of pairs [attr, value] (with an optional third
        element for the quality), emit events for all of them with the
        same timestamp.
        '''
if log:
self.debug_stream("In fireEventsList(): %d events:\n%s"
% (len(eventsAttrList),
''.join("\t%s\n" % line
for line in eventsAttrList)))
if timestamp is None:
timestamp = time.time()
attrNames = []
for attrEvent in eventsAttrList:
try:
self.fireEvent(attrEvent, timestamp)
attrNames.append(attrEvent[0])
except Exception as e:
self.error_stream("In fireEventsList() Exception with "
"attribute %s: %s" % (attrEvent, e))
traceback.print_exc()
# done event methods ---
####
# Read Attr method for dynattrs ---
# def __applyReadValue(self, attrName, attrValue, timestamp=None):
# '''Hide the internal differences of the stored attribute struct
# and return the last value read from the PLC for a certain attr.
# '''
# self.warn_stream("DEPRECATED: __applyReadValue(%s)" % (attrName))
# attrStruct = self._getAttrStruct(attrName)
# if timestamp is None:
# timestamp = time.time()
# if not self.__filterAutoStopCollection(attrName):
# return
# if type(attrStruct[READVALUE]) in [CircularBuffer, HistoryBuffer]:
# attrStruct[READVALUE].append(attrValue)
# else:
# attrStruct[READVALUE] = attrValue
# attrStruct[READTIME] = timestamp
# # attrStruct[READTIMESTR] = time.ctime(timestamp)
# def __filterAutoStopCollection(self, attrName):
# '''This method is made to manage the collection of data on the
# integration buffer for attributes with the autostop feature.
# No data shall be collected when it is already off (and the
# autostop will not stop anything).
# '''
# self.warn_stream("DEPRECATED: __filterAutoStopCollection(%s)"
# % (attrName))
# attrStruct = self._getAttrStruct(attrName)
# if AUTOSTOP in attrStruct and \
# SWITCHDESCRIPTOR in attrStruct[AUTOSTOP]:
# switchName = attrStruct[AUTOSTOP][SWITCHDESCRIPTOR]
# switchStruct = self._getAttrStruct(switchName)
# if READVALUE in switchStruct and not switchStruct[READVALUE]:
# # do not collect data when the switch to stop
# # is already off
# self.debug_stream("The switch for %s the autostopper is "
# "off, no needed to collect values"
# % (attrName))
# # if there is data collected, do not clean it until a new
# # transition from off to on.
# return False
# return True
# def __applyWriteValue(self, attrName, attrValue):
# '''Hide the internal attribute struct representation and give an
# interface to set a value to be written.
# '''
# self.warn_stream("DEPRECATED: __applyWriteValue(%s)" % (attrName))
# attrStruct = self._getAttrStruct(attrName)
# if WRITEVALUE in attrStruct:
# attrStruct[WRITEVALUE] = attrValue
# def __buildAttrMeaning(self, attrName, attrValue):
# '''As some (state-like) attributes have a meaning, there is a
# status-like attribute that reports what the documentation
# assign to the enumeration.
# '''
# self.warn_stream("DEPRECATED: __buildAttrMeaning(%s)" % (attrName))
# attrStruct = self._getAttrStruct(attrName)
# meanings = attrStruct[MEANINGS]
# if attrValue in meanings:
# return "%d:%s" % (attrValue, meanings[attrValue])
# else:
# return "%d:unknown" % (attrValue)
# def __buildAttrQuality(self, attrName, attrValue):
# '''Resolve the quality the an specific value has for an attribute.
# '''
# self.warn_stream("DEPRECATED: __buildAttrQuality(%s)" % (attrName))
# attrStruct = self._getAttrStruct(attrName)
# if QUALITIES in attrStruct:
# qualities = attrStruct[QUALITIES]
# if self.__checkQuality(attrName, attrValue, ALARM):
# return PyTango.AttrQuality.ATTR_ALARM
# elif self.__checkQuality(attrName, attrValue, WARNING):
# return PyTango.AttrQuality.ATTR_WARNING
# elif self.__checkQuality(attrName, attrValue, CHANGING):
# return PyTango.AttrQuality.ATTR_CHANGING
# if self.attr_IsTooFarEnable_read and \
# SETPOINT in attrStruct:
# try:
# # This is to review if, not having the value changing
# # (previous if) the readback value is or not too far away
# # from the given setpoint.
# setpointAttrName = attrStruct[SETPOINT]
# try:
# readback = attrStruct[READVALUE].value
# except:
# return PyTango.AttrQuality.ATTR_INVALID
# setpoint = \
# self._getAttrStruct(setpointAttrName)[READVALUE].value
# if setpoint is not None:
# if self.__tooFar(attrName, setpoint, readback):
# if attrName in self._traceTooClose:
# self.warn_stream("Found %s readback (%6.3f) "
# "too far from setpoint "
# "(%6.3f)" % (attrName,
# readback,
# setpoint))
# return PyTango.AttrQuality.ATTR_WARNING
# if attrName in self._traceTooClose:
# self.info_stream("Found %s readback (%6.3f) "
# "close enought to the setpoint "
# "(%6.3f)" % (attrName, readback,
# setpoint))
# except Exception as e:
# self.warn_stream("Error comparing readback with "
# "setpoint: %s" % (e))
# traceback.print_exc()
# return PyTango.AttrQuality.ATTR_INVALID
# return PyTango.AttrQuality.ATTR_VALID
# def __tooFar(self, attrName, setpoint, readback):
# '''
# Definition of 'too far': when the readback and the setpoint
# differ more than a certain percentage, the quality of the
# readback attribute is warning.
# But this doesn't apply when the setpoint is too close to 0.
#
# Definition of 'too far': there are two different definitions
# - When the setpoint is "close to 0" the warning quality alert
# will be raised if the readback has a difference bigger than
# 0.1 (plus minus).
# - If the setpoint is not that close to 0, the warning alert
# will be raised when their difference is above the 10%.
# It has been used a multiplicative notation but it can be
# made also with additive notation using a multiplication
# factor.
# '''
# self.warn_stream("DEPRECATED: __tooFar(%s)" % (attrName))
# if (-CLOSE_ZERO < setpoint < CLOSE_ZERO) or readback == 0:
# diff = abs(setpoint - readback)
# if (diff > CLOSE_ZERO):
# return True
# else:
# diff = abs(setpoint / readback)
# # 10%
# if (1-REL_PERCENTAGE > diff or diff > 1+REL_PERCENTAGE):
# return True
# return False
# def __checkQuality(self, attrName, attrValue, qualityInQuery):
# '''Check if this attrName with the give attrValue is with in the
# threshold of the give quality
# '''
# self.warn_stream("DEPRECATED: __checkQuality(%s)" % (attrName))
# attrStruct = self._getAttrStruct(attrName)
# qualities = attrStruct[QUALITIES]
# if qualityInQuery in qualities:
# if type(qualities[qualityInQuery]) == dict:
# if self.__checkAbsoluteRange(qualities[qualityInQuery],
# attrValue):
# return True
# buffer = attrStruct[READVALUE]
# if self.__checkRelativeRange(qualities[qualityInQuery],
# buffer,
# attrValue):
# return True
# return False
# elif type(qualities[qualityInQuery]) == list:
# if attrValue in qualities[qualityInQuery]:
# return True
# return False
# def __checkAbsoluteRange(self, qualityDict, referenceValue):
# '''Check if the a value is with in any of the configured absolute
# ranges for the specific configuration with in an attribute.
# '''
# # self.warn_stream("DEPRECATED: __checkAbsoluteRango()")
# if ABSOLUTE in qualityDict:
# if ABOVE in qualityDict[ABSOLUTE]:
# above = qualityDict[ABSOLUTE][ABOVE]
# else:
# above = float('inf')
# if BELOW in qualityDict[ABSOLUTE]:
# below = qualityDict[ABSOLUTE][BELOW]
# else:
# below = float('-inf')
# if UNDER in qualityDict[ABSOLUTE] and \
# qualityDict[ABSOLUTE][UNDER]:
# if above < referenceValue < below:
# return True
# else:
# if not below <= referenceValue <= above:
# return True
# return False
# def __checkRelativeRange(self, qualityDict, buffer, referenceValue):
# '''Check if the a value is with in any of the configured relative
# ranges for the specific configuration with in an attribute.
# '''
# # self.warn_stream("DEPRECATED: __checkRelativeRange()")
# if RELATIVE in qualityDict and isintance(buffer, CircularBuffer):
# if buffer.std >= qualityDict[RELATIVE]:
# return True
# return False
def _getAttrStruct(self, attrName):
'''Given an attribute name, return the internal structure that
defines its behaviour.
'''
try:
return self._plcAttrs[
self.__getDctCaselessKey(attrName, self._plcAttrs)]
except ValueError as e:
pass # simply was not in the plcAttrs
try:
return self._internalAttrs[
self.__getDctCaselessKey(attrName, self._internalAttrs)]
except ValueError as e:
pass # simply was not in the internalAttrs
if attrName.count('_'):
mainName, suffix = attrName.rsplit('_', 1)
try:
return self._internalAttrs[
self.__getDctCaselessKey(mainName,
self._internalAttrs)]
except ValueError as e:
pass # simply was not in the internalAttrs
return None
def __getDctCaselessKey(self, key, dct):
position = [e.lower() for e in dct].index(key.lower())
return dct.keys()[position]
# def __solveFormula(self, attrName, VALUE, formula):
# '''Some attributes can have a formula to interpret or modify the
# value given from the PLC to the value reported by the device.
# '''
# self.warn_stream("DEPRECATED: __solveFormula(%s)" % (attrName))
# result = eval(formula)
# # self.debug_stream("%s formula eval(\"%s\") = %s" % (attrName,
# # formula,
# # result))
# return result
# def __setAttrValue(self, attr, attrName, attrType, attrValue,
# timestamp):
# '''
# '''
# self.warn_stream("DEPRECATED: __setAttrValue(%s)" % (attrName))
# attrStruct = self._getAttrStruct(attrName)
# self.__applyReadValue(attrName, attrValue, timestamp)
# if attrValue is None:
# attr.set_value_date_quality(0, timestamp,
# PyTango.AttrQuality.ATTR_INVALID)
# # if MEANINGS in attrStruct:
# # attrMeaning = self.__buildAttrMeaning(attrName, attrValue)
# # attrQuality = self.__buildAttrQuality(attrName, attrValue)
# # attr.set_value_date_quality(attrMeaning, timestamp,
# # attrQuality)
# elif QUALITIES in attrStruct:
# attrQuality = self.__buildAttrQuality(attrName, attrValue)
# attr.set_value_date_quality(attrValue, timestamp,
# attrQuality)
# else:
# attrQuality = PyTango.AttrQuality.ATTR_VALID
# attr.set_value_date_quality(attrValue, timestamp, attrQuality)
# if WRITEADDR in attrStruct:
# writeAddr = attrStruct[WRITEADDR]
# sp_addr = self.offset_sp + writeAddr
# if WRITEBIT in attrStruct:
# writeBit = attrStruct[WRITEBIT]
# writeValue = self.read_db.bit(sp_addr, writeBit)
# else:
# writeValue = self.read_db.get(sp_addr, *attrType)
# if FORMULA in attrStruct and \
# 'write' in attrStruct[FORMULA]:
# try:
# writeValue = self.\
# __solveFormula(attrName, writeValue,
# attrStruct[FORMULA]['write'])
# except Exception as e:
# self.error_stream("Cannot solve formula for the "
# "attribute %s: %s" % (attrName,
# e))
# # if attrStruct.formula is not None:
# # try:
# # writeValue = attrStruct.formula.writeHook(
# # writeValue)
# # except Exception as e:
# # self.error_stream("Cannot solve formula for the "
# # "attribute %s: %s" % (attrName,
# # e))
# if 'format' in attrStruct:
# try:
# format = attrStruct['format']
# if format.endswith("d"):
# writeValue = int(format % writeValue)
# else:
# writeValue = float(format % writeValue)
# except Exception as e:
# self.error_stream("Cannot format value for the "
# "attribute %s: %s"
# % (attrName, e))
# self.__applyWriteValue(attrName, writeValue)
# try:
# attr.set_write_value(writeValue)
# except PyTango.DevFailed as e:
# self.tainted = "%s/%s: failed to set point %s (%s)"\
# % (self.get_name(), attrName, writeValue, e)
# self.error_stream(self.tainted)
# elif WRITEVALUE in attrStruct:
# try:
# writeValue = attrStruct[WRITEVALUE]
# attr.set_write_value(writeValue)
# except PyTango.DevFailed:
# self.tainted = self.get_name() + '/'+attrName + \
# ': failed to set point '+str(writeValue)
# self.error_stream("On setAttrValue(%s,%s) tainted: %s"
# % (attrName, str(attrValue),
# self.tainted))
# except Exception as e:
# self.warn_stream("On setAttrValue(%s,%s) Exception: %s"
# % (attrName, str(attrValue), e))
# # self.__doTraceAttr(attrName, "__setAttrvalue")
# # Don't need to trace each time the attribute is read.
@AttrExc
def read_attr(self, attr):
'''
'''
if self.get_state() == PyTango.DevState.FAULT or \
not self.has_data_available():
return # raise AttributeError("Not available in fault state!")
name = attr.get_name()
attrStruct = self._getAttrStruct(name)
if any([isinstance(attrStruct, kls) for kls in [PLCAttr,
InternalAttr,
EnumerationAttr,
MeaningAttr,
HistoryAttr,
AutoStopAttr,
AutoStopParameter,
GroupAttr
]]):
attrStruct.read_attr(attr)
return
# self.warn_stream("DEPRECATED read_attr for %s" % (name))
# attrType = attrStruct[TYPE]
# read_addr = attrStruct[READADDR]
# if READBIT in attrStruct:
# read_bit = attrStruct[READBIT]
# else:
# read_bit = None
# try:
# if read_bit:
# read_value = self.read_db.bit(read_addr, read_bit)
# else:
# read_value = self.read_db.get(read_addr, *attrType)
# if FORMULA in attrStruct and \
# 'read' in attrStruct[FORMULA]:
# read_value = self.\
# __solveFormula(name, read_value,
# attrStruct[FORMULA]['read'])
# read_t = time.time()
# except Exception as e:
# self.error_stream('Trying to read %s/%s and looks to be not '
# 'well connected to the plc.'
# % (self.get_name(), attr.get_name()))
# self.debug_stream('Exception (%s/%s): %s'
# % (self.get_name(), attr.get_name(), e))
# traceback.print_exc()
# else:
# self.__setAttrValue(attr, name, attrType, read_value, read_t)
# @AttrExc
# def read_spectrumAttr(self, attr):
# '''This method is a generic read for dynamic spectrum attributes in
# this device. But right now only supports the historic buffers.
#
# The other spectrum attributes, related with the events
# generation are not using this because they have they own method.
# '''
# if self.get_state() == PyTango.DevState.FAULT or \
# not self.has_data_available():
# return # raise AttributeError("Not available in fault state!")
# name = attr.get_name()
# attrStruct = self._getAttrStruct(name)
# if any([isinstance(attrStruct, kls) for kls in [PLCAttr,
# InternalAttr,
# EnumerationAttr,
# MeaningAttr,
# AutoStopAttr,
# AutoStopParameter,
# GroupAttr
# ]]):
# attrStruct.read_spectrumAttr(attr)
# return
# self.warn_stream("DEPRECATED read_spectrumAttr for %s" % (name))
# if BASESET in attrStruct:
# attrValue = self.__buildHistoryBufferString(name)
# elif AUTOSTOP in attrStruct:
# attrValue = attrStruct[READVALUE].array
# attrTimeStamp = attrStruct[READTIME] or time.time()
# attrQuality = attrStruct[LASTEVENTQUALITY] or \
# PyTango.AttrQuality.ATTR_VALID
# self.debug_stream("Attribute %s: value=%s timestamp=%g quality=%s "
# "len=%d" % (name, attrValue, attrTimeStamp,
# attrQuality, len(attrValue)))
# attr.set_value_date_quality(attrValue, attrTimeStamp, attrQuality)
# def read_logical_attr(self, attr):
# '''
# '''
# self.warn_stream("DEPRECATED: read_logical_attr(%s)"
# % attr.get_name())
# if self.get_state() == PyTango.DevState.FAULT or \
# not self.has_data_available():
# return # raise AttributeError("Not available in fault state!")
# attrName = attr.get_name()
# if attrName in self._internalAttrs:
# ret = self._evalLogical(attrName)
# read_t = self._internalAttrs[attrName][READTIME]
# self.__setAttrValue(attr, attrName, PyTango.DevBoolean, ret,
# read_t)
# def _evalLogical(self, attrName):
# '''
# '''
# self.warn_stream("DEPRECATED: _evalLogical(%s)" % (attrName))
# if attrName not in self._internalAttrs:
# return
# attrStruct = self._internalAttrs[attrName]
# if attrStruct.logicObj is None:
# return
# logic = attrStruct.logicObj.logic
# values = []
# self.info_stream("Evaluate %s LogicAttr" % attrName)
# for key in logic.keys():
# try:
# if type(logic[key]) == dict:
# values.append(self.__evaluateDict(key, logic[key]))
# elif type(logic[key]) == list:
# values.append(self.__evaluateList(key, logic[key]))
# else:
# self.warn_stream("step less to evaluate %s for "
# "key %s unmanaged content type"
# % (attrName, key))
# except Exception as e:
# self.error_stream("cannot eval logic attr %s for key %s: "
# "%s" % (attrName, key, e))
# traceback.print_exc()
# if attrStruct.logicObj.operator == 'or':
# ret = any(values)
# elif attrStruct.logicObj.operator == 'and':
# ret = all(values)
# attrStruct.read_t = time.time()
# if attrStruct.logicObj.inverted:
# ret = not ret
# self.info_stream("For %s: values %s (%s) (inverted) answer %s"
# % (attrName, values, attrStruct.operator, ret))
# else:
# self.info_stream("For %s: values %s (%s) answer %s"
# % (attrName, values, attrStruct.operator, ret))
# attrStruct.read_value = ret
# return ret
# def __evaluateDict(self, attrName, dict2eval):
# """
# """
# self.warn_stream("DEPRECATED: __evaluateDict(%s)" % (attrName))
# self.info_stream("%s dict2eval: %s" % (attrName, dict2eval))
# for key in dict2eval.keys():
# if key == QUALITIES:
# return self.__evaluateQuality(attrName, dict2eval[key])
# def __evaluateList(self, attrName, list2eval):
# """
# """
# self.warn_stream("DEPRECATED: __evaluateList(%s)" % (attrName))
# self.info_stream("%s list2eval: %r" % (attrName, list2eval))
# value = self.__getAttrReadValue(attrName)
# self.info_stream("%s value: %r" % (attrName, value))
# return value in list2eval
# def __evaluateQuality(self, attrName, searchList):
# """
# """
# self.warn_stream("DEPRECATED: __evaluateQuality(%s)" % (attrName))
# attrStruct = self._getAttrStruct(attrName)
# if LASTEVENTQUALITY in attrStruct:
# quality = attrStruct[LASTEVENTQUALITY]
# return quality in searchList
    #         return False
    # FIXME: this method is merged with read_attr(), and once the write
    #        versions are also merged, they will no longer be necessary.
# @AttrExc
# def read_attr_bit(self, attr):
# '''
# '''
# if self.get_state() == PyTango.DevState.FAULT or \
# not self.has_data_available():
# return # raise AttributeError("Not available in fault state!")
# name = attr.get_name()
# attrType = PyTango.DevBoolean
# attrStruct = self._getAttrStruct(name)
# if any([isinstance(attrStruct, kls) for kls in [PLCAttr,
# InternalAttr,
# EnumerationAttr,
# MeaningAttr,
# AutoStopAttr,
# AutoStopParameter,
# GroupAttr
# ]]):
# attrStruct.read_attr(attr)
# return
# self.warn_stream("DEPRECATED read_attr_bit for %s" % (name))
# read_addr = attrStruct[READADDR]
# read_bit = attrStruct[READBIT]
# # if WRITEADDR in attrStruct:
# # write_addr = attrStruct[WRITEADDR]
# # write_bit = attrStruct[WRITEBIT]
# # else:
# # write_addr = None
# # write_bit = None
# try:
# if read_addr and read_bit:
# read_value = self.read_db.bit(read_addr, read_bit)
# if FORMULA in attrStruct and \
# 'read' in attrStruct[FORMULA]:
# read_value = self.\
# __solveFormula(name, read_value,
# attrStruct[FORMULA]['read'])
# read_t = time.time()
# else:
# read_value, read_t, _ = attrStruct.vtq
# attrType = attrStruct.type
# except Exception as e:
# self.error_stream('Trying to read %s/%s and looks to be not '
# 'well connected to the plc.'
# % (self.get_name(), attr.get_name()))
# self.debug_stream('Exception (%s/%s): %s'
# % (self.get_name(), attr.get_name(), e))
# else:
# self.__setAttrValue(attr, name, attrType, read_value, read_t)
# def read_attrGrpBit(self, attr):
# '''
# '''
# self.warn_stream("DEPRECATED: read_attrGrpBit(%s)"
# % (attr.get_name()))
# if self.get_state() == PyTango.DevState.FAULT or \
# not self.has_data_available():
# return # raise AttributeError("Not available in fault state!")
# attrName = attr.get_name()
# if attrName in self._internalAttrs:
# attrStruct = self._getAttrStruct(attrName)
# if 'read_set' in attrStruct:
# read_value = self.__getGrpBitValue(attrName,
# attrStruct['read_set'],
# self.read_db)
# read_t = time.time()
# if 'write_set' in attrStruct:
# write_set = attrStruct['write_set']
# write_value = self.__getGrpBitValue(attrName,
# write_set,
# self.write_db)
# self.__applyWriteValue(attrName,
# attrStruct[WRITEVALUE])
# self.__setAttrValue(attr, attrName, PyTango.DevBoolean,
# read_value, read_t)
# def __getGrpBitValue(self, attrName, addrSet, memSegment):
# '''
# '''
# self.warn_stream("DEPRECATED: __getGrpBitValue(%s)" % (attrName))
# try:
# bitSet = []
# for addr, bit in addrSet:
# bitSet.append(memSegment.bit(addr, bit))
# if all(bitSet):
# return True
# except Exception as e:
# self.error_stream("Cannot get the bit group for %s [%s]: %s\n"
# % (attrName, str(addrSet), e,
# str(self._internalAttrs[attrName])))
# return False
def read_lock(self):
'''
'''
if self.get_state() == PyTango.DevState.FAULT or \
not self.has_data_available():
return # raise AttributeError("Not available in fault state!")
rbyte = self.read_db.b(self.locking_raddr)
locker = bool(rbyte & (1 << self.locking_rbit))
return locker
@AttrExc
def read_Locking(self, attr):
        '''The read of this attribute is a boolean representing whether
        the control of the PLC has been taken by Tango. This does not seem
        to correspond exactly to the meaning of the "Local Lock" boolean
        in the memory map of the PLC.'''
if self.get_state() == PyTango.DevState.FAULT or \
not self.has_data_available():
return # raise AttributeError("Not available in fault state!")
self._checkLocking()
attrName = attr.get_name()
value, timestamp, quality = self._plcAttrs[attrName].vtq
attr.set_value_date_quality(value, timestamp, quality)
@AttrExc
def read_Lock_ST(self, attr):
'''
'''
if self.get_state() == PyTango.DevState.FAULT or \
not self.has_data_available():
return # raise AttributeError("Not available in fault state!")
attrName = attr.get_name()
self.info_stream('DEPRECATED: reading %s' % (attrName))
value, timestamp, quality = self._plcAttrs[attrName].vtq
attr.set_value_date_quality(value, timestamp, quality)
def _checkLocking(self):
if self._isLocalLocked() or self._isRemoteLocked():
self._lockingChange(True)
else:
self._lockingChange(False)
def _isLocalLocked(self):
return self._deviceIsInLocal and \
self._plcAttrs['Lock_ST'].rvalue == 1
def _isRemoteLocked(self):
return self._deviceIsInRemote and \
self._plcAttrs['Lock_ST'].rvalue == 2
def _lockingChange(self, newLockValue):
if self.is_lockedByTango != newLockValue:
if 'Locking' in self._plcAttrs:
self._plcAttrs['Locking'].read_value = newLockValue
self.is_lockedByTango = newLockValue
# @AttrExc
# def read_internal_attr(self, attr):
# '''this is referencing to a device attribute that doesn't
# have plc representation
# '''
# self.warn_stream("DEPRECATED: read_internal_attr(%s)"
# % (attr.get_name()))
# if self.get_state() == PyTango.DevState.FAULT or \
# not self.has_data_available():
# return # raise AttributeError("Not available in fault state!")
# try:
# attrName = attr.get_name()
# if attrName in self._internalAttrs:
# attrStruct = self._getAttrStruct(attrName)
# if READVALUE in attrStruct:
# read_value = attrStruct[READVALUE]
# if read_value is None:
# attr.set_value_date_quality(0, time.time(),
# PyTango.AttrQuality.
# ATTR_INVALID)
# else:
# attr.set_value(read_value)
# else:
# attr.set_value_date_quality(0, time.time(),
# PyTango.AttrQuality.
# ATTR_INVALID)
# if WRITEVALUE in attrStruct:
# write_value = attrStruct[WRITEVALUE]
# attr.set_write_value(write_value)
# except Exception as e:
# self.error_stream("read_internal_attr(%s) Exception %s"
# % (attr.get_name(), e))
# # Read Attr method for dynattrs ---
####
# Write Attr method for dynattrs ---
def prepare_write(self, attr):
        '''Common checks before a write: ensure the device holds the lock
        (unless the Locking attribute itself is being written) and return
        the value to be written.
        '''
self.warn_stream(": prepare_write(%s)"
% (attr.get_name()))
data = []
self.Locking.get_write_value(data)
val = data[0]
if attr.get_name().lower() in ['locking']:
self.debug_stream("Do not do the write checks, when what is "
"wanted is to write the locker")
# FIXME: perhaps check if it is already lock by another program
elif not self.read_lock():
try:
exceptionMsg = 'first required to set Locking flag on '\
'%s device' % self.get_name()
except Exception as e:
self.error_stream("Exception in prepare_write(): %s" % (e))
else:
raise LinacException(exceptionMsg)
if self.tainted:
raise LinacException('mismatch with '
'specification:\n'+self.tainted)
data = []
attr.get_write_value(data)
return data[0]
@AttrExc
def write_attr(self, attr):
'''
'''
if self.get_state() == PyTango.DevState.FAULT or \
not self.has_data_available():
return # raise AttributeError("Not available in fault state!")
name = attr.get_name()
attrStruct = self._getAttrStruct(name)
if any([isinstance(attrStruct, kls) for kls in [PLCAttr,
InternalAttr,
EnumerationAttr,
MeaningAttr,
AutoStopAttr,
AutoStopParameter,
GroupAttr
]]):
attrStruct.write_attr(attr)
return
# self.warn_stream("DEPRECATED write_attr for %s" % (name))
# attrType = attrStruct[TYPE]
# write_addr = attrStruct[WRITEADDR]
# write_value = self.prepare_write(attr)
# if FORMULA in attrStruct and 'write' in attrStruct[FORMULA]:
# write_value = self.__solveFormula(name, write_value,
# attrStruct[FORMULA]['write'])
# attrStruct[WRITEVALUE] = write_value
# # self.__doTraceAttr(name, "write_attr")
# self.write_db.write(write_addr, write_value, attrType)
# @AttrExc
# def write_attr_bit(self, attr):
# '''
# '''
# if self.get_state() == PyTango.DevState.FAULT or \
# not self.has_data_available():
# return # raise AttributeError("Not available in fault state!")
# name = attr.get_name()
# write_value = self.prepare_write(attr)
# self.doWriteAttrBit(attr, name, write_value)
# # self.__doTraceAttr(name, "write_attr_bit")
# def doWriteAttrBit(self, attr, name, write_value):
# attrStruct = self._getAttrStruct(name)
# if any([isinstance(attrStruct, kls) for kls in [PLCAttr,
# InternalAttr,
# EnumerationAttr,
# MeaningAttr,
# AutoStopAttr,
# AutoStopParameter,
# GroupAttr
# ]]):
# attrStruct.write_attr(attr)
# return
# self.warn_stream("DEPRECATED write_attr_bit for %s" % (name))
# read_addr = attrStruct[READADDR]
# write_addr = attrStruct[WRITEADDR]
# write_bit = attrStruct[WRITEBIT]
# if FORMULA in attrStruct and 'write' in attrStruct[FORMULA]:
# formula_value = self.\
# __solveFormula(name, write_value,
# attrStruct[FORMULA]['write'])
# self.info_stream("%s received %s formula eval(\"%s\") = %s"
# % (name, write_value,
# attrStruct[FORMULA]['write'],
# formula_value))
# if formula_value != write_value and \
# 'write_not_allowed' in attrStruct[FORMULA]:
# reason = "Write %s not allowed" % write_value
# description = attrStruct[FORMULA]['write_not_allowed']
# PyTango.Except.throw_exception(reason,
# description,
# name,
# PyTango.ErrSeverity.WARN)
# else:
# write_value = formula_value
# if SWITCHDESCRIPTOR in attrStruct:
# # For the switch with autostop, when transition to power on, is
# # necessary to clean the old collected information or it will
# # produce an influence on the conditions.
# descriptor = attrStruct[SWITCHDESCRIPTOR]
# if AUTOSTOP in descriptor:
# # if self.__stateTransitionToOn(write_value,descriptor) \
# # and descriptor.has_key(AUTOSTOP):
# self.__cleanAutoStopCollection(
# attrStruct[SWITCHDESCRIPTOR][AUTOSTOP])
# # #Depending to the on or off transition keys, this will launch
# # #a thread who will modify the ATTR2RAMP, and when that
# # #finishes the write will be set.
# # self.info_stream("attribute %s has receive a write %s"
# # %(name,write_value))
# # if self.__stateTransitionNeeded(write_value,name):
# # #attrStruct[SWITCHDESCRIPTOR]):
# # self.info_stream("doing state transition for %s"%(name))
# # attrStruct[SWITCHDEST] = write_value
# # self.createSwitchStateThread(name)
# # return
# # The returns are necessary to avoid the write that is set
# # later on this method. But in the final else case it has to
# # continue.
# self.__writeBit(name, read_addr, write_addr, write_bit,
# write_value)
# attrStruct[WRITEVALUE] = write_value
# self.info_stream("Received write %s (%s)" % (name,
# write_value))
# if self.__isRstAttr(name) and write_value:
# attrStruct[RESETTIME] = time.time()
# def __cleanAutoStopCollection(self, attrName):
# '''This will clean the buffer with autostop condition collected
# data and also the triggered boolean if it was raised.
# '''
# self.warn_stream("DEPRECATED: __cleanAutoStopCollection(%s)"
# % (attrName))
# attrStruct = self._getAttrStruct(attrName)
# if READVALUE in attrStruct and len(attrStruct[READVALUE]) != 0:
# self.info_stream("Clean up the buffer because collected data "
# "doesn't have sense having the swithc off.")
# attrStruct[READVALUE].resetBuffer()
# self._cleanTriggeredFlag(attrName)
# def __writeBit(self, name, read_addr, write_addr, write_bit,
# write_value, dry=False):
# '''
# '''
# rbyte = self.read_db.b(read_addr)
# attrStruct = self._getAttrStruct(name)
# if write_value:
# # sets bit 'bitno' of b
# toWrite = rbyte | (int(1) << write_bit)
# # a byte of 0s with a unique 1 in the place to set this 1
# else:
# # clears bit 'bitno' of b
# toWrite = rbyte & ((0xFF) ^ (1 << write_bit))
# # a byte of 1s with a unique 0 in the place to set this 0
# if not dry:
# self.write_db.write(write_addr, toWrite,
# TYPE_MAP[PyTango.DevUChar])
# reRead = self.read_db.b(read_addr)
# self.debug_stream("Writing %s boolean to %6s (%d.%d) byte was "
# "%s; write %s; now %s"
# % (name, write_value, write_addr, write_bit,
# bin(rbyte), bin(toWrite), bin(reRead)))
# def write_attrGrpBit(self, attr):
# '''
# '''
# self.warn_stream("DEPRECATED: write_attrGrpBit(%s)"
# % (attr.get_name()))
# if self.get_state() == PyTango.DevState.FAULT or \
# not self.has_data_available():
# return # raise AttributeError("Not available in fault state!")
# attrName = attr.get_name()
# if attrName in self._internalAttrs:
# attrDescr = self._internalAttrs[attrName]
# if 'write_set' in attrDescr:
# writeValue = self.prepare_write(attr)
# self.__setGrpBitValue(attrDescr['write_set'],
# self.write_db, writeValue)
# def __setGrpBitValue(self, addrSet, memSegment, value):
# '''
# '''
# # self.warn_stream("DEPRECATED: __setGrpBitValue()")
# try:
# for addr, bit in addrSet:
# rbyte = self.read_db.b(self.offset_sp+addr)
# if value:
# toWrite = rbyte | (int(value) << bit)
# else:
# toWrite = rbyte & (0xFF) ^ (1 << bit)
# memSegment.write(addr, toWrite, TYPE_MAP[PyTango.DevUChar])
# reRead = self.read_db.b(self.offset_sp+addr)
# self.debug_stream("Writing boolean to %6s (%d.%d) byte "
# "was %s; write %s; now %s"
# % (value, addr, bit, bin(rbyte),
# bin(toWrite), bin(reRead)))
# except Exception as e:
# self.error_stream("Cannot set the bit group: %s" % (e))
# @AttrExc
# def write_Locking(self, attr):
# '''
# '''
# if self.get_state() == PyTango.DevState.FAULT or \
# not self.has_data_available():
# return # raise AttributeError("Not available in fault state!")
# try:
# self.write_lock(attr.get_write_value())
# except:
# self.error_stream('Trying to write %s/%s and looks to be not '
# 'well connected to the plc.'
# % (self.get_name(), attr.get_name()))
# def check_lock(self):
# '''Drops lock if write_value is True, but did not receive
# lock_state if re
# '''
# pass
# # autostop area ---
# def _refreshInternalAutostopParams(self, attrName):
# '''There are auxiliar attibutes with the autostop conditions and
# when their values change them have to be introduced in the
# structure of the main attribute with the buffer, who will use it
# to take the decission.
# This includes the resizing task of the CircularBuffer.
# '''
# # FIXME: use the spectrum attribute and left the circular buffer
# # as it was to avoid side effects on relative events.
# if attrName not in self._internalAttrs:
# return
# attrStruct = self._internalAttrs[attrName]
# if AUTOSTOP not in attrStruct:
# return
# stopperDict = attrStruct[AUTOSTOP]
# if 'is'+ENABLE in stopperDict:
# refAttr = self._getAttrStruct(stopperDict['is'+ENABLE])
# refAttr[AUTOSTOP][ENABLE] = attrStruct[READVALUE]
# if 'is'+INTEGRATIONTIME in stopperDict:
# refAttr = self._getAttrStruct(stopperDict['is' +
# INTEGRATIONTIME])
# refAttr[AUTOSTOP][INTEGRATIONTIME] = attrStruct[READVALUE]
# # resize the CircularBuffer
# # time per sample int(INTEGRATIONTIME/self._plcUpdatePeriod)
# newBufferSize = \
# int(attrStruct[READVALUE]/self._getPlcUpdatePeriod())
# if refAttr[READVALUE].maxSize() != newBufferSize:
# self.info_stream("%s buffer to be resized from %d to %d "
# "(integration time %f seconds with a "
# "plc reading period of %f seconds)"
# % (attrName, refAttr[READVALUE].maxSize(),
# newBufferSize, attrStruct[READVALUE],
# self._plcUpdatePeriod))
# refAttr[READVALUE].resize(newBufferSize)
# else:
# for condition in [BELOW, ABOVE]:
# if 'is'+condition+THRESHOLD in stopperDict:
# key = 'is'+condition+THRESHOLD
# refAttr = self._getAttrStruct(stopperDict[key])
# refAttr[AUTOSTOP][condition] = attrStruct[READVALUE]
def _getPlcUpdatePeriod(self):
return self._plcUpdatePeriod
def _setPlcUpdatePeriod(self, value):
self.info_stream("modifying PLC Update period: was %.3f and now "
"becomes %.3f." % (self._plcUpdatePeriod, value))
self._plcUpdatePeriod = value
# FIXME: this is hardcoding!!
# self._refreshInternalAutostopParams('GUN_HV_I_AutoStop')
# def _updateStatistic(self, attrName):
# if attrName not in self._internalAttrs:
# return
# attrStruct = self._internalAttrs[attrName]
# if MEAN in attrStruct:
# refAttr = attrStruct[MEAN]
# if refAttr not in self._plcAttrs:
# return
# attrStruct[READVALUE] = self._plcAttrs[refAttr][READVALUE].mean
# elif STD in attrStruct:
# refAttr = attrStruct[STD]
# if refAttr not in self._plcAttrs:
# return
# attrStruct[READVALUE] = self._plcAttrs[refAttr][READVALUE].std
# def _cleanTriggeredFlag(self, attrName):
# triggerName = "%s_%s" % (attrName, TRIGGERED)
# if triggerName not in self._internalAttrs:
# return
# if self._internalAttrs[triggerName][TRIGGERED]:
# # if it's powered off and it was triggered, then this
# # power off would be because autostop has acted.
# # Is needed to clean the flag.
# self.info_stream("Clean the autostop triggered flag "
# "for %s" % (attrName))
# self._internalAttrs[triggerName][TRIGGERED] = False
# def _checkAutoStopConditions(self, attrName):
# '''The attribute with the Circular buffer has to do some checks
# to decide if it's necessary to proceed with the autostop
# procedure.
# '''
# if attrName not in self._plcAttrs:
# return
# attrStruct = self._plcAttrs[attrName]
# if AUTOSTOP not in attrStruct:
# return
# if ENABLE not in attrStruct[AUTOSTOP] or \
# not attrStruct[AUTOSTOP][ENABLE]:
# return
# if SWITCHDESCRIPTOR in attrStruct[AUTOSTOP]:
# switchStruct = \
# self._getAttrStruct(attrStruct[AUTOSTOP][SWITCHDESCRIPTOR])
# if READVALUE in switchStruct and \
# not switchStruct[READVALUE]:
# return
# if len(attrStruct[READVALUE]) < attrStruct[READVALUE].maxSize():
# return
# if SWITCHDESCRIPTOR in attrStruct[AUTOSTOP]:
# switchStruct = \
# self._getAttrStruct(attrStruct[AUTOSTOP][SWITCHDESCRIPTOR])
# if switchStruct is None or READVALUE not in switchStruct:
# return
# if SWITCHDEST in switchStruct:
# if switchStruct[SWITCHDEST]:
# return
# elif not switchStruct[READVALUE]:
# return
# for condition in [BELOW, ABOVE]:
# if condition in attrStruct[AUTOSTOP]:
# refValue = attrStruct[AUTOSTOP][condition]
# meanValue = attrStruct[READVALUE].mean
# # BELOW and ABOVE is compared with mean
# if condition == BELOW and refValue > meanValue:
# self.info_stream("Attribute %s stop condition "
# "%s is met ref=%g > mean=%g"
# % (attrName, condition,
# refValue, meanValue))
# self._doAutostop(attrName, condition)
# elif condition == ABOVE and refValue < meanValue:
# self.info_stream("Attribute %s stop condition "
# "%s is met ref=%g < mean=%g"
# % (attrName, condition,
# refValue, meanValue))
# self._doAutostop(attrName, condition)
# def _doAutostop(self, attrName, condition):
# attrStruct = self._plcAttrs[attrName]
# refValue = attrStruct[AUTOSTOP][condition]
# meanValue, stdValue = attrStruct[READVALUE].meanAndStd
# self.doWriteAttrBit(attrStruct[AUTOSTOP][SWITCHDESCRIPTOR], False)
# triggerStruct = self._internalAttrs["%s_%s"
# % (attrName, TRIGGERED)]
# self.warn_stream("Flag the autostop trigger for attribute %s"
# % (attrName))
# triggerStruct[TRIGGERED] = True
# done autostop area ---
def __isHistoryBuffer(self, attrName):
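        '''Tell whether the given attribute stores its readings in a
        HistoryBuffer (i.e. its structure has a BASESET entry).
        '''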
attrStruct = self._getAttrStruct(attrName)
if attrStruct is not None and BASESET in attrStruct and \
type(attrStruct[READVALUE]) == HistoryBuffer:
return True
return False
# def __buildHistoryBufferString(self, attrName):
# if self.__isHistoryBuffer(attrName):
# valuesList = self._getAttrStruct(attrName)[READVALUE].array
# self.debug_stream("For %s, building string list from %s"
# % (attrName, valuesList))
# strList = []
# for value in valuesList:
# strList.append(self.__buildAttrMeaning(attrName, value))
# return strList
# return None
# @AttrExc
# def write_internal_attr(self, attr):
# '''this is referencing to a device attribute that doesn't
# have plc representation'''
# self.warn_stream("DEPRECATED: write_internal_attr(%s)"
# % (attr.get_name()))
# if self.get_state() == PyTango.DevState.FAULT or \
# not self.has_data_available():
# return # raise AttributeError("Not available in fault state!")
# attrName = attr.get_name()
# self.info_stream('write_internal_attr(%s)' % (attrName))
#
# data = []
# attr.get_write_value(data)
# # FIXME: some cases must not allow values <= 0 ---
# if attrName in self._internalAttrs:
# attrDescr = self._internalAttrs[attrName]
# if WRITEVALUE in attrDescr:
# attrDescr[WRITEVALUE] = data[0]
# if attrDescr[TYPE] in [PyTango.DevDouble,
# PyTango.DevFloat]:
# attrValue = float(data[0])
# elif attrDescr[TYPE] in [PyTango.DevBoolean]:
# attrValue = bool(data[0])
# attrDescr[READVALUE] = attrValue
# attrQuality = self.\
# __buildAttrQuality(attrName, attrDescr[READVALUE])
# attrDescr.store(attrDescr[WRITEVALUE])
# if EVENTS in attrDescr:
# self.fireEventsList([[attrName, attrValue,
# attrQuality]], log=True)
def loadAttrFile(self):
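        '''Build the dynamic attributes from the AttrFile property; when the
        property is empty, fall back to '<last member of the device name>.py'.
        If the parsing fails the device is set to FAULT.
        '''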
self.attr_loaded = True
if self.AttrFile:
attr_fname = self.AttrFile
else:
attr_fname = self.get_name().split('/')[-1]+'.py'
try:
self.attr_list.build(attr_fname.lower())
except Exception as e:
if self.get_state() != PyTango.DevState.FAULT:
self.set_state(PyTango.DevState.FAULT)
self.set_status("ReloadAttrFile() failed (%s)" % (e),
important=True)
@AttrExc
def read_lastUpdateStatus(self, attr):
        '''Read the human-readable string describing when the last PLC
        update happened and how long it took.
        '''
if self.get_state() == PyTango.DevState.FAULT or \
not self.has_data_available():
return # raise AttributeError("Not available in fault state!")
attr.set_value(self.read_lastUpdateStatus_attr)
@AttrExc
def read_lastUpdate(self, attr):
        '''Read the time (in seconds) spent by the last PLC update.
        '''
if self.get_state() == PyTango.DevState.FAULT or \
not self.has_data_available():
return # raise AttributeError("Not available in fault state!")
attr.set_value(self.read_lastUpdate_attr)
# Done Write Attr method for dynattrs ---
# PROTECTED REGION END --- LinacData.global_variables
def __init__(self, cl, name):
PyTango.Device_4Impl.__init__(self, cl, name)
self.log = self.get_logger()
LinacData.init_device(self)
def delete_device(self):
self.info_stream('deleting device '+self.get_name())
self._plcUpdateJoiner.set()
self._tangoEventsJoiner.set()
self._newDataAvailable.set()
self.attr_list.remove_all()
def init_device(self):
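        '''Initialise the device: create the lastUpdate* attributes, resolve
        the PLC network parameters (local/remote mode and port), validate the
        read/write block sizes, connect to the PLC and start the worker
        threads.
        '''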
try:
self.debug_stream("In "+self.get_name()+"::init_device()")
self.set_change_event('State', True, False)
self.set_change_event('Status', True, False)
self.attr_IsSayAgainEnable_read = False
self.attr_IsTooFarEnable_read = True
self.attr_forceWriteDB_read = ""
self.attr_cpu_percent_read = 0.0
self.attr_mem_percent_read = 0.0
self.attr_mem_rss_read = 0
self.attr_mem_swap_read = 0
            # The attributes Locking, Lock_ST and HeartBeat also have
            # events, but that call is made in each of the AttrList methods
            # that dynamically build them.
self.set_state(PyTango.DevState.INIT)
            self.set_status('initializing...')
self.get_device_properties(self.get_device_class())
self.debug_stream('AttrFile='+str(self.AttrFile))
self._locals = {'self': self}
self._globals = globals()
            # String with human-readable information about the last update
self.read_lastUpdateStatus_attr = ""
attr = PyTango.Attr('lastUpdateStatus',
PyTango.DevString, PyTango.READ)
attrProp = PyTango.UserDefaultAttrProp()
attrProp.set_label('Last Update Status')
attr.set_default_properties(attrProp)
self.add_attribute(attr, r_meth=self.read_lastUpdateStatus)
self.set_change_event('lastUpdateStatus', True, False)
# numeric attr about the lapsed time of the last update
self.read_lastUpdate_attr = None
attr = PyTango.Attr('lastUpdate',
PyTango.DevDouble, PyTango.READ)
attrProp = PyTango.UserDefaultAttrProp()
attrProp.set_format(latin1('%f'))
attrProp.set_label('Last Update')
attrProp.set_unit('s')
attr.set_default_properties(attrProp)
self.add_attribute(attr, r_meth=self.read_lastUpdate)
self.set_change_event('lastUpdate', True, False)
self._process = psutil.Process()
self.attr_list = AttrList(self)
########
# region to setup the network communication parameters
# restrictions and rename of PLC's ip address
if self.IpAddress == '' and self.PlcAddress == '':
self.error_stream("The PLC ip address must be set")
self.set_state(PyTango.DevState.FAULT)
self.set_status("Please set the PlcAddress property",
important=True)
return
elif not self.IpAddress == '' and self.PlcAddress == '':
self.warn_stream("Deprecated property IpAddress, "
"please use PlcAddress")
self.PlcAddress = self.IpAddress
elif not self.IpAddress == '' and not self.PlcAddress == '' \
and not self.IpAddress == self.PlcAddress:
self.warn_stream("Both PlcAddress and IpAddress "
"properties are defined and with "
"different values, prevail PlcAddress")
# get the ip address of the host where the device is running
# this to know if the device is running in local or remote
thisHostIp = get_ip()
if not thisHostIp == self.BindAddress:
if not self.BindAddress == '':
self.warn_stream("BindAddress property defined but "
"deprecated and it doesn't match "
"with the host where device runs. "
"Overwrite BindAddress with '%s'"
% thisHostIp)
else:
self.debug_stream("BindAddress of this host '%s'"
% (thisHostIp))
self.BindAddress = thisHostIp
# check if the port corresponds to local and remote modes
if thisHostIp == self.LocalAddress:
self.info_stream('Connection to the PLC will be '
'local mode')
self.set_status('Connection in local mode', important=True)
self._deviceIsInLocal = True
self._deviceIsInRemote = False
try:
if self.LocalPort is not None:
self.info_stream('Using specified local port %s'
% (self.LocalPort))
self.Port = self.LocalPort
else:
self.warn_stream('Local port not specified, '
'trying to use deprecated '
'definition')
if self.Port > 2010:
self.Port -= 10
self.warn_stream('converted the port to local'
' %s' % self.Port)
except:
self.error_stream('Error in the port setting')
elif thisHostIp == self.RemoteAddress:
                self.info_stream('Connection to the PLC will be '
'remote mode')
self.set_status('Connection in remote mode',
important=True)
self._deviceIsInLocal = False
self._deviceIsInRemote = True
try:
if self.RemotePort is not None:
self.info_stream('Using specified remote port %s'
% (self.RemotePort))
self.Port = self.RemotePort
else:
self.warn_stream('Remote port not specified, '
'trying to use deprecated '
'definition')
if self.Port < 2010:
self.Port += 10
self.warn_stream('converted the port to '
'remote %s'
                                             % (self.Port))
except:
self.error_stream('Error in the port setting')
else:
self.warn_stream('Unrecognized IP for local/remote '
'modes (%s)' % thisHostIp)
self.set_status('Unrecognized connection for local/remote'
' mode', important=True)
self._deviceIsInLocal = False
self._deviceIsInRemote = False
# restrictions and renames of the Port's properties
if self.Port is None:
self.debug_stream("The PLC ip port must be set")
self.set_state(PyTango.DevState.FAULT)
self.set_status("Please set the plc ip port",
important=True)
return
# end the region to setup the network communication parameters
########
if self.ReadSize <= 0 or self.WriteSize <= 0:
self.set_state(PyTango.DevState.FAULT)
self.set_status("Block Read/Write sizes not well "
"set (r=%d,w=%d)" % (self.ReadSize,
self.WriteSize),
important=True)
return
# true when reading some attribute failed....
self.tainted = ''
# where the readback of the set points begins
self.offset_sp = self.ReadSize-self.WriteSize
self.attr_loaded = False
self.last_update_time = time.time()
try:
self.connect()
except Exception:
traceback.print_exc()
self.disconnect()
self.set_state(PyTango.DevState.UNKNOWN)
self.info_stream('initialized')
# self.set_state(PyTango.DevState.UNKNOWN)
self._threadingBuilder()
except Exception:
self.error_stream('initialization failed')
self.debug_stream(traceback.format_exc())
self.set_state(PyTango.DevState.FAULT)
self.set_status(traceback.format_exc())
# --------------------------------------------------------------------
# LinacData read/write attribute methods
# --------------------------------------------------------------------
# PROTECTED REGION ID(LinacData.initialize_dynamic_attributes) ---
def initialize_dynamic_attributes(self):
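        '''Load the attribute description file and block until it has been
        completely parsed before the device continues.
        '''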
self.loadAttrFile()
self.attr_list._fileParsed.wait()
self.info_stream("with all the attributes build, proceed...")
# PROTECTED REGION END --- LinacData.initialize_dynamic_attributes
# ------------------------------------------------------------------
# Read EventsTime attribute
# ------------------------------------------------------------------
def read_EventsTime(self, attr):
# self.debug_stream("In " + self.get_name() + ".read_EventsTime()")
# PROTECTED REGION ID(LinacData.EventsTime_read) --
self.attr_EventsTime_read = self._tangoEventsTime.array
# PROTECTED REGION END --- LinacData.EventsTime_read
attr.set_value(self.attr_EventsTime_read)
# ------------------------------------------------------------------
# Read EventsTimeMix attribute
# ------------------------------------------------------------------
def read_EventsTimeMin(self, attr):
# self.debug_stream("In " + self.get_name() +
# ".read_EventsTimeMin()")
# PROTECTED REGION ID(LinacData.EventsTimeMin_read) --
self.attr_EventsTimeMin_read = self._tangoEventsTime.array.min()
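        # until the statistics buffer is full the value is reported with
        # CHANGING quality (the same pattern is used by the readers below)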
if self._tangoEventsTime.array.size < HISTORY_EVENT_BUFFER:
attr.set_value_date_quality(self.attr_EventsTimeMin_read,
time.time(),
PyTango.AttrQuality.ATTR_CHANGING)
return
# PROTECTED REGION END --- LinacData.EventsTimeMin_read
attr.set_value(self.attr_EventsTimeMin_read)
# ------------------------------------------------------------------
# Read EventsTimeMax attribute
# ------------------------------------------------------------------
def read_EventsTimeMax(self, attr):
# self.debug_stream("In " + self.get_name() +
# ".read_EventsTimeMax()")
# PROTECTED REGION ID(LinacData.EventsTimeMax_read) --
self.attr_EventsTimeMax_read = self._tangoEventsTime.array.max()
if self._tangoEventsTime.array.size < HISTORY_EVENT_BUFFER:
attr.set_value_date_quality(self.attr_EventsTimeMax_read,
time.time(),
PyTango.AttrQuality.ATTR_CHANGING)
return
elif self.attr_EventsTimeMax_read >= self._getPlcUpdatePeriod()*3:
attr.set_value_date_quality(self.attr_EventsTimeMax_read,
time.time(),
PyTango.AttrQuality.ATTR_WARNING)
return
# PROTECTED REGION END --- LinacData.EventsTimeMax_read
attr.set_value(self.attr_EventsTimeMax_read)
# ------------------------------------------------------------------
# Read EventsTimeMean attribute
# ------------------------------------------------------------------
def read_EventsTimeMean(self, attr):
# self.debug_stream("In " + self.get_name() +
# ".read_EventsTimeMean()")
# PROTECTED REGION ID(LinacData.EventsTimeMean_read) --
self.attr_EventsTimeMean_read = self._tangoEventsTime.array.mean()
if self._tangoEventsTime.array.size < HISTORY_EVENT_BUFFER:
attr.set_value_date_quality(self.attr_EventsTimeMean_read,
time.time(),
PyTango.AttrQuality.ATTR_CHANGING)
return
elif self.attr_EventsTimeMean_read >= self._getPlcUpdatePeriod():
attr.set_value_date_quality(self.attr_EventsTimeMean_read,
time.time(),
PyTango.AttrQuality.ATTR_WARNING)
return
# PROTECTED REGION END --- LinacData.EventsTimeMean_read
attr.set_value(self.attr_EventsTimeMean_read)
# ------------------------------------------------------------------
# Read EventsTimeStd attribute
# ------------------------------------------------------------------
def read_EventsTimeStd(self, attr):
# self.debug_stream("In " + self.get_name() +
# ".read_EventsTimeStd()")
# PROTECTED REGION ID(LinacData.EventsTimeStd_read) --
self.attr_EventsTimeStd_read = self._tangoEventsTime.array.std()
if self._tangoEventsTime.array.size < HISTORY_EVENT_BUFFER:
attr.set_value_date_quality(self.attr_EventsTimeStd_read,
time.time(),
PyTango.AttrQuality.ATTR_CHANGING)
return
# PROTECTED REGION END --- LinacData.EventsTimeStd_read
attr.set_value(self.attr_EventsTimeStd_read)
# ------------------------------------------------------------------
# Read EventsNumber attribute
# ------------------------------------------------------------------
def read_EventsNumber(self, attr):
# self.debug_stream("In " + self.get_name() +
# ".read_EventsNumber()")
# PROTECTED REGION ID(LinacData.EventsNumber_read) ---
self.attr_EventsNumber_read = self._tangoEventsNumber.array
# PROTECTED REGION END --- LinacData.EventsNumber_read
attr.set_value(self.attr_EventsNumber_read)
# ------------------------------------------------------------------
# Read EventsNumberMin attribute
# ------------------------------------------------------------------
def read_EventsNumberMin(self, attr):
# self.debug_stream("In " + self.get_name() +
# ".read_EventsNumberMin()")
# PROTECTED REGION ID(LinacData.EventsNumberMin_read) ---
self.attr_EventsNumberMin_read = \
int(self._tangoEventsNumber.array.min())
if self._tangoEventsNumber.array.size < HISTORY_EVENT_BUFFER:
attr.set_value_date_quality(self.attr_EventsNumberMin_read,
time.time(),
PyTango.AttrQuality.ATTR_CHANGING)
return
# PROTECTED REGION END --- LinacData.EventsNumberMin_read
attr.set_value(self.attr_EventsNumberMin_read)
# ------------------------------------------------------------------
# Read EventsNumberMax attribute
# ------------------------------------------------------------------
def read_EventsNumberMax(self, attr):
# self.debug_stream("In " + self.get_name() +
# ".read_EventsNumberMax()")
# PROTECTED REGION ID(LinacData.EventsNumberMax_read) ---
self.attr_EventsNumberMax_read = \
int(self._tangoEventsNumber.array.max())
if self._tangoEventsNumber.array.size < HISTORY_EVENT_BUFFER:
attr.set_value_date_quality(self.attr_EventsNumberMax_read,
time.time(),
PyTango.AttrQuality.ATTR_CHANGING)
return
# PROTECTED REGION END --- LinacData.EventsNumberMax_read
attr.set_value(self.attr_EventsNumberMax_read)
# ------------------------------------------------------------------
# Read EventsNumberMean attribute
# ------------------------------------------------------------------
def read_EventsNumberMean(self, attr):
# self.debug_stream("In " + self.get_name() +
# ".read_EventsNumberMean()")
# PROTECTED REGION ID(LinacData.EventsNumberMean_read) ---
self.attr_EventsNumberMean_read = \
self._tangoEventsNumber.array.mean()
if self._tangoEventsNumber.array.size < HISTORY_EVENT_BUFFER:
attr.set_value_date_quality(self.attr_EventsNumberMean_read,
time.time(),
PyTango.AttrQuality.ATTR_CHANGING)
return
# PROTECTED REGION END --- LinacData.EventsNumberMean_read
attr.set_value(self.attr_EventsNumberMean_read)
# ------------------------------------------------------------------
# Read EventsNumberStd attribute
# ------------------------------------------------------------------
def read_EventsNumberStd(self, attr):
# self.debug_stream("In " + self.get_name() +
# ".read_EventsNumberStd()")
# PROTECTED REGION ID(LinacData.EventsNumberStd_read) ---
self.attr_EventsNumberStd_read = \
self._tangoEventsNumber.array.std()
if self._tangoEventsNumber.array.size < HISTORY_EVENT_BUFFER:
attr.set_value_date_quality(self.attr_EventsNumberStd_read,
time.time(),
PyTango.AttrQuality.ATTR_CHANGING)
return
# PROTECTED REGION END --- LinacData.EventsNumberStd_read
attr.set_value(self.attr_EventsNumberStd_read)
# ------------------------------------------------------------------
# Read IsTooFarEnable attribute
# ------------------------------------------------------------------
def read_IsTooFarEnable(self, attr):
self.debug_stream("In " + self.get_name() +
".read_IsTooFarEnable()")
# PROTECTED REGION ID(LinacData.IsTooFarEnable_read) ---
# PROTECTED REGION END --- LinacData.IsTooFarEnable_read
attr.set_value(self.attr_IsTooFarEnable_read)
# ------------------------------------------------------------------
# Write IsTooFarEnable attribute
# ------------------------------------------------------------------
def write_IsTooFarEnable(self, attr):
self.debug_stream("In " + self.get_name() +
".write_IsTooFarEnable()")
data = attr.get_write_value()
# PROTECTED REGION ID(LinacData.IsTooFarEnable_write) ---
self.attr_IsTooFarEnable_read = bool(data)
# PROTECTED REGION END -- LinacData.IsTooFarEnable_write
# ------------------------------------------------------------------
# Read forceWriteDB attribute
# ------------------------------------------------------------------
def read_forceWriteDB(self, attr):
self.debug_stream("In " + self.get_name() +
".read_forceWriteDB()")
# PROTECTED REGION ID(LinacData.forceWriteDB_read) ---
# PROTECTED REGION END --- LinacData.forceWriteDB_read
attr.set_value(self.attr_forceWriteDB_read)
# ------------------------------------------------------------------
# Read cpu_percent attribute
# ------------------------------------------------------------------
def read_cpu_percent(self, attr):
self.debug_stream("In " + self.get_name() +
".read_cpu_percent()")
# PROTECTED REGION ID(LinacData.cpu_percent_read) ---
self.attr_cpu_percent_read = self._process.cpu_percent()
# PROTECTED REGION END --- LinacData.cpu_percent_read
attr.set_value(self.attr_cpu_percent_read)
# ------------------------------------------------------------------
# Read mem_percent attribute
# ------------------------------------------------------------------
def read_mem_percent(self, attr):
self.debug_stream("In " + self.get_name() +
".read_mem_percent()")
# PROTECTED REGION ID(LinacData.mem_percent_read) ---
self.attr_mem_percent_read = self._process.memory_percent()
# PROTECTED REGION END --- LinacData.mem_percent_read
attr.set_value(self.attr_mem_percent_read)
# ------------------------------------------------------------------
# Read mem_rss attribute
# ------------------------------------------------------------------
def read_mem_rss(self, attr):
self.debug_stream("In " + self.get_name() +
".read_mem_rss()")
# PROTECTED REGION ID(LinacData.mem_rss_read) ---
self.attr_mem_rss_read = self._process.memory_info().rss
# PROTECTED REGION END --- LinacData.mem_rss_read
attr.set_value(self.attr_mem_rss_read)
# ------------------------------------------------------------------
# Read mem_swap attribute
# ------------------------------------------------------------------
def read_mem_swap(self, attr):
self.debug_stream("In " + self.get_name() +
".read_mem_swap()")
# PROTECTED REGION ID(LinacData.mem_swap_read) ---
self.attr_mem_swap_read = self._process.memory_full_info().swap
# PROTECTED REGION END --- LinacData.mem_swap_read
attr.set_value(self.attr_mem_swap_read)
# ---------------------------------------------------------------------
# LinacData command methods
# ---------------------------------------------------------------------
@CommandExc
def ReloadAttrFile(self):
"""Reload the file containing the attr description for a
particular plc
:param argin:
:type: PyTango.DevVoid
:return:
:rtype: PyTango.DevVoid """
self.debug_stream('In ReloadAttrFile()')
# PROTECTED REGION ID(LinacData.ReloadAttrFile) ---
self.loadAttrFile()
# PROTECTED REGION END --- LinacData.ReloadAttrFile
@CommandExc
def Exec(self, cmd):
""" Direct command to execute python with in the device, use it
very carefully it's good for debuging but it's a security
thread.
:param argin:
:type: PyTango.DevString
:return:
:rtype: PyTango.DevString """
self.debug_stream('In Exec()')
# PROTECTED REGION ID(LinacData.Exec) ---
L = self._locals
G = self._globals
try:
try:
# interpretation as expression
result = eval(cmd, G, L)
except SyntaxError:
# interpretation as statement
exec cmd in G, L
result = L.get("y")
except Exception as exc:
# handles errors on both eval and exec level
result = exc
if type(result) == StringType:
return result
elif isinstance(result, BaseException):
return "%s!\n%s" % (result.__class__.__name__, str(result))
else:
return pprint.pformat(result)
# PROTECTED REGION END --- LinacData.Exec
@CommandExc
def GetBit(self, args):
""" Command to direct Read a bit position from the PLC memory
:param argin:
:type: PyTango.DevVarShortArray
:return:
:rtype: PyTango.DevBoolean """
self.debug_stream('In GetBit()')
# PROTECTED REGION ID(LinacData.GetBit) ---
idx, bitno = args
if self.read_db is not None and hasattr(self.read_db, 'bit'):
return self.read_db.bit(idx, bitno)
raise IOError("No access to the hardware")
# PROTECTED REGION END --- LinacData.GetBit
@CommandExc
def GetByte(self, idx):
"""Command to direct Read a byte position from the PLC memory
:param argin:
:type: PyTango.DevShort
:return:
:rtype: PyTango.DevShort """
self.debug_stream('In GetByte()')
# PROTECTED REGION ID(LinacData.GetByte) ---
if self.read_db is not None and hasattr(self.read_db, 'b'):
return self.read_db.b(idx)
raise IOError("No access to the hardware")
# PROTECTED REGION END --- LinacData.GetByte
@CommandExc
def GetShort(self, idx):
"""Command to direct Read two consecutive byte positions from the
PLC memory and understand it as an integer
:param argin:
:type: PyTango.DevShort
:return:
:rtype: PyTango.DevShort """
self.debug_stream('In GetShort()')
# PROTECTED REGION ID(LinacData.GetShort) ---
if self.read_db is not None and hasattr(self.read_db, 'i16'):
return self.read_db.i16(idx)
raise IOError("No access to the hardware")
        # PROTECTED REGION END --- LinacData.GetShort
@CommandExc
def GetFloat(self, idx):
""" Command to direct Read four consecutive byte positions from the
PLC memory and understand it as an float
:param argin:
:type: PyTango.DevShort
:return:
:rtype: PyTango.DevFloat """
self.debug_stream('In GetFloat()')
# PROTECTED REGION ID(LinacData.GetFloat) ---
if self.read_db is not None and hasattr(self.read_db, 'f'):
return self.read_db.f(idx)
raise IOError("No access to the hardware")
# PROTECTED REGION END --- LinacData.GetFloat
@CommandExc
def HexDump(self):
""" Hexadecimal dump of all the registers in the plc
:param argin:
:type: PyTango.DevVoid
:return:
:rtype: PyTango.DevString """
self.debug_stream('In HexDump()')
# PROTECTED REGION ID(LinacData.HexDump) ---
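        # dump the full read block together with the tail of the write block
        # (from its write_start offset onwards)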
rblock = self.read_db.buf[:]
wblock = self.write_db.buf[self.write_db.write_start:]
return hex_dump([rblock, wblock])
# PROTECTED REGION END --- LinacData.HexDump
@CommandExc
def Hex(self, idx):
""" Hexadecimal dump the given register of the plc
:param argin:
:type: PyTango.DevShort
:return:
:rtype: PyTango.DevString """
self.debug_stream('In Hex()')
# PROTECTED REGION ID(LinacData.Hex) ---
return hex(self.read_db.b(idx))
# PROTECTED REGION END --- LinacData.Hex
@CommandExc
def DumpTo(self, arg):
""" Hexadecimal dump of all the registers in the plc to a file
:param argin:
:type: PyTango.DevString
:return:
:rtype: PyTango.DevVoid """
self.debug_stream('In DumpTo()')
# PROTECTED REGION ID(LinacData.DumpTo) ---
fout = open(arg, 'w')
fout.write(self.read_db.buf.tostring())
# PROTECTED REGION END --- LinacData.DumpTo
@CommandExc
def WriteBit(self, args):
""" Write a single bit in the memory of the plc [reg,bit,value]
:param argin:
:type: PyTango.DevVarShortArray
:return:
:rtype: PyTango.DevVoid """
self.debug_stream('In WriteBit()')
# PROTECTED REGION ID(LinacData.WriteBit) ---
idx, bitno, v = args
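        # bit indexes above 7 advance to the following byte(s); only the
        # in-byte offset is kept for the mask below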
idx += bitno / 8
bitno %= 8
v = bool(v)
b = self.write_db.b(idx) # Get the byte where the bit is
b = b & ~(1 << bitno) | (v << bitno)
# change only the expected bit
# The write operation of a bit, writes the Byte where it is
self.write_db.write(idx, b, TYPE_MAP[PyTango.DevUChar])
# PROTECTED REGION END --- LinacData.WriteBit
@CommandExc
def WriteByte(self, args):
""" Write a byte in the memory of the plc [reg,value]
:param argin:
:type: PyTango.DevVarShortArray
:return:
:rtype: PyTango.DevVoid """
self.debug_stream('In WriteByte()')
# PROTECTED REGION ID(LinacData.WriteByte) ---
# args[1] = c_uint8(args[1])
register = args[0]
value = uint8(args[1])
# self.write_db.write( *args )
self.write_db.write(register, value, TYPE_MAP[PyTango.DevUChar])
# PROTECTED REGION END --- LinacData.WriteByte
@CommandExc
def WriteShort(self, args):
""" Write two consecutive bytes in the memory of the plc
[reg,value]
:param argin:
:type: PyTango.DevVarShortArray
:return:
:rtype: PyTango.DevVoid """
self.debug_stream('In WriteShort()')
# PROTECTED REGION ID(LinacData.WriteShort) ---
# args[1] = c_int16(args[1])
register = args[0]
value = int16(args[1])
# self.write_db.write( *args )
self.write_db.write(register, value, TYPE_MAP[PyTango.DevShort])
# PROTECTED REGION END --- LinacData.WriteShort
@CommandExc
def WriteFloat(self, args):
""" Write the representation of a float in four consecutive bytes
in the memory of the plc [reg,value]
:param argin:
:type: PyTango.DevVarShortArray
:return:
:rtype: PyTango.DevVoid """
self.debug_stream('In WriteFloat()')
# PROTECTED REGION ID(LinacData.WriteFloat) ---
idx = int(args[0])
f = float32(args[1])
self.write_db.write(idx, f, TYPE_MAP[PyTango.DevFloat])
# PROTECTED REGION END --- LinacData.WriteFloat
@CommandExc
def ResetState(self):
""" Clean the information set in the Status message and restore
the state
:param argin:
:type: PyTango.DevVoid
:return:
:rtype: PyTango.DevVoid """
self.debug_stream('In ResetState()')
# PROTECTED REGION ID(LinacData.ResetState) ---
self.info_stream('resetting state %s...' % str(self.get_state()))
if self.get_state() == PyTango.DevState.FAULT:
if self.disconnect():
self.set_state(PyTango.DevState.OFF) # self.connect()
elif self.is_connected():
self.set_state(PyTango.DevState.ON)
self.clean_status()
else:
self.set_state(PyTango.DevState.UNKNOWN)
self.set_status("")
# PROTECTED REGION END --- LinacData.ResetState
@CommandExc
def RestoreReadDB(self):
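        '''Command that delegates to forceWriteAttrs() to restore the data
        block (see that method for the details).
        '''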
self.forceWriteAttrs()
# To be moved ---
def _threadingBuilder(self):
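        '''Create the PLC updater and event manager threads (both daemons),
        the circular buffers backing the Events* statistics attributes and
        the synchronisation events, and then start both threads.
        '''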
# Threading joiners ---
self._plcUpdateJoiner = threading.Event()
self._plcUpdateJoiner.clear()
self._tangoEventsJoiner = threading.Event()
self._tangoEventsJoiner.clear()
# Threads declaration ---
self._plcUpdateThread = \
threading.Thread(name="PlcUpdater",
target=self.plcUpdaterThread)
self._tangoEventsThread = \
threading.Thread(name="EventManager",
target=self.newValuesThread)
self._tangoEventsTime = \
CircularBuffer([], maxlen=HISTORY_EVENT_BUFFER, owner=None)
self._tangoEventsNumber = \
CircularBuffer([], maxlen=HISTORY_EVENT_BUFFER, owner=None)
# Threads configuration ---
self._plcUpdateThread.setDaemon(True)
self._tangoEventsThread.setDaemon(True)
self._plcUpdatePeriod = PLC_MAX_UPDATE_PERIOD
self._newDataAvailable = threading.Event()
self._newDataAvailable.clear()
# Launch those threads ---
self._plcUpdateThread.start()
self._tangoEventsThread.start()
def plcUpdaterThread(self):
        '''Worker thread that periodically reads the whole PLC data block
        and adapts the polling period to how long each read takes.
        '''
while not self._plcUpdateJoiner.isSet():
try:
start_t = time.time()
if self.is_connected():
self._readPlcRegisters()
self._addaptPeriod(time.time()-start_t)
else:
if self._plcUpdateJoiner.isSet():
return
self.info_stream('plc not connected')
self.reconnect()
time.sleep(self.ReconnectWait)
except Exception as e:
self.error_stream("In plcUpdaterThread() "
"exception: %s" % (e))
traceback.print_exc()
def _readPlcRegisters(self):
""" Do a read of all the registers in the plc and update the
mirrored memory
:param argin:
:type: PyTango.DevVoid
:return:
:rtype: PyTango.DevVoid """
        # faults are critical and cannot be recovered by restarting things
        # INIT states mean something is going on that interferes with
        # updating, such as connecting
start_update_time = time.time()
if (self.get_state() == PyTango.DevState.FAULT) or \
not self.is_connected():
if start_update_time - self.last_update_time \
< self.ReconnectWait:
return
else:
if self.connect():
self.set_state(PyTango.DevState.UNKNOWN)
return
# relock if auto-recover from fault ---
try:
self.auto_local_lock()
self.dataBlockSemaphore.acquire()
try:
e = None
# The real reading to the hardware:
up = self.read_db.readall()
except Exception as e:
self.error_stream(
"Could not complete the readall()\n%s" % (e))
finally:
self.dataBlockSemaphore.release()
if e is not None:
raise e
if up:
self.last_update_time = time.time()
if self.get_state() == PyTango.DevState.ALARM:
                    # The ALARM state would be due to attributes with that
                    # quality; don't log because it happens too often.
self.set_state(PyTango.DevState.ON, log=False)
if not self.get_state() in [PyTango.DevState.ON]:
# Recover a ON state when it is responding and the
# state was showing something different.
self.set_state(PyTango.DevState.ON)
else:
self.set_state(PyTango.DevState.FAULT)
self.set_status("No data received from the PLC")
self.disconnect()
end_update_t = time.time()
diff_t = (end_update_t - start_update_time)
if end_update_t-self.last_update_time > self.TimeoutAlarm:
self.set_state(PyTango.DevState.ALARM)
self.set_status("Timeout alarm!")
return
            # disconnect if no new information is sent after a long time
if end_update_t-self.last_update_time > self.TimeoutConnection:
self.disconnect()
self.set_state(PyTango.DevState.FAULT)
self.set_status("Timeout connection!")
return
self.read_lastUpdate_attr = diff_t
timeFormated = time.strftime('%F %T')
self.read_lastUpdateStatus_attr = "last updated at %s in %f s"\
% (timeFormated, diff_t)
attr2Event = [['lastUpdate', self.read_lastUpdate_attr],
['lastUpdateStatus',
self.read_lastUpdateStatus_attr]]
self.fireEventsList(attr2Event,
timestamp=self.last_update_time)
self._newDataAvailable.set()
            # when an update goes fine, the period is reduced one step
            # until the minimum
if self._getPlcUpdatePeriod() > PLC_MIN_UPDATE_PERIOD:
self._setPlcUpdatePeriod(self._plcUpdatePeriod -
PLC_STEP_UPDATE_PERIOD)
except tcpblock.Shutdown as exc:
self.set_state(PyTango.DevState.FAULT)
msg = 'communication shutdown requested '\
'at '+time.strftime('%F %T')
self.set_status(msg)
self.error_stream(msg)
self.disconnect()
except socket.error as exc:
self.set_state(PyTango.DevState.FAULT)
msg = 'broken socket at %s\n%s' % (time.strftime('%F %T'),
str(exc))
self.set_status(msg)
self.error_stream(msg)
self.disconnect()
except Exception as exc:
self.set_state(PyTango.DevState.FAULT)
msg = 'update failed at %s\n%s: %s' % (time.strftime('%F %T'),
str(type(exc)),
str(exc))
self.set_status(msg)
self.error_stream(msg)
self.disconnect()
self.last_update_time = time.time()
traceback.print_exc()
def _addaptPeriod(self, diff_t):
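        '''Adapt the PLC polling period to how long the last read took:
        if it exceeded the maximum period just complain, if it exceeded the
        current period increase it in proportional steps (capped at the
        maximum), otherwise sleep for the remainder of the period.
        '''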
current_p = self._getPlcUpdatePeriod()
max_t = PLC_MAX_UPDATE_PERIOD
step_t = PLC_STEP_UPDATE_PERIOD
if diff_t > max_t:
if current_p < max_t:
self.warn_stream(
"plcUpdaterThread() has take too much time "
"(%3.3f seconds)" % (diff_t))
self._setPlcUpdatePeriod(current_p+step_t)
else:
self.error_stream(
"plcUpdaterThread() has take too much time "
"(%3.3f seconds), but period cannot be increased more "
"than %3.3f seconds" % (diff_t, current_p))
elif diff_t > current_p:
exceed_t = diff_t-current_p
factor = int(exceed_t/step_t)
increment_t = step_t+(step_t*factor)
if current_p+increment_t >= max_t:
self.error_stream(
"plcUpdaterThread() it has take %3.6f seconds "
"(%3.6f more than expected) and period will be "
"increased to the maximum (%3.6f)"
% (diff_t, exceed_t, max_t))
self._setPlcUpdatePeriod(max_t)
else:
self.warn_stream(
"plcUpdaterThread() it has take %3.6f seconds, "
"%f over the expected, increase period "
"(%3.3f + %3.3f seconds)" % (diff_t, exceed_t,
current_p, increment_t))
self._setPlcUpdatePeriod(current_p+increment_t)
else:
# self.debug_stream(
# "plcUpdaterThread() it has take %3.6f seconds, going to "
# "sleep %3.3f seconds (update period %3.3f seconds)"
# % (diff_t, current_p-diff_t, current_p))
time.sleep(current_p-diff_t)
def newValuesThread(self):
        '''Worker thread that, whenever new data is available, propagates
        the new values and records how long the event emission took and
        how many events were fired.
        '''
if not self.attr_list._fileParsed.isSet():
self.info_stream("Event generator thread will wait until "
"file is parsed")
self.attr_list._fileParsed.wait()
while not self.has_data_available():
time.sleep(self._getPlcUpdatePeriod()*2)
self.debug_stream("Event generator thread wait for connection")
event_ctr = EventCtr()
while not self._tangoEventsJoiner.isSet():
try:
if self._newDataAvailable.isSet():
start_t = time.time()
self.propagateNewValues()
diff_t = time.time() - start_t
n_events = event_ctr.ctr
event_ctr.clear()
self._tangoEventsTime.append(diff_t)
self._tangoEventsNumber.append(n_events)
if n_events > 0:
self.debug_stream(
"newValuesThread() it has take %3.6f seconds "
"for %d events" % (diff_t, n_events))
self._newDataAvailable.clear()
else:
self._newDataAvailable.wait()
except Exception as exc:
self.error_stream(
"In newValuesThread() exception: %s" % (exc))
traceback.print_exc()
def propagateNewValues(self):
"""
Check the attributes that comes directly from the PLC registers, to check if
the information stored in the device needs to be refresh, events emitted, as
well as for each of them, inter-attribute dependencies are required to be
triggered.
"""
attrs = self._plcAttrs.keys()[:]
for attrName in attrs:
attrStruct = self._plcAttrs[attrName]
if hasattr(attrStruct, 'hardwareRead'):
attrStruct.hardwareRead(self.read_db)
# def plcBasicAttrEvents(self):
# '''This method is used, after all reading from the PLC to update
# the most basic attributes to indicate everything is fine.
# Those attributes are:
# - lastUpdate{,Status}
# - HeartBeat
# - Lock_{ST,Status}
# - Locking
# '''
# # Heartbit
# if self.heartbeat_addr:
# self.read_heartbeat_attr =\
# self.read_db.bit(self.heartbeat_addr, 0)
# HeartBeatStruct = self._plcAttrs['HeartBeat']
# if not self.read_heartbeat_attr == HeartBeatStruct[READVALUE]:
# HeartBeatStruct[READTIME] = time.time()
# HeartBeatStruct[READVALUE] = self.read_heartbeat_attr
# # Locks
# if self.lock_ST:
# self.read_lock_ST_attr = self.read_db.get(self.lock_ST, 'B', 1)
# # lock_str, lock_quality = self.convert_Lock_ST()
# if self.read_lock_ST_attr not in [0, 1, 2]:
# self.warn_stream("<<<Invalid locker code %d>>>"
# % (self.read_lock_ST_attr))
# Lock_STStruct = self._getAttrStruct('Lock_ST')
# if not self.read_lock_ST_attr == Lock_STStruct[READVALUE]:
# # or (now - Lock_STStruct[READTIME]) > PERIODIC_EVENT:
# Lock_STStruct[READTIME] = time.time()
# Lock_STStruct[READVALUE] = self.read_lock_ST_attr
# # Lock_StatusStruct = self._getAttrStruct('Lock_Status')
# # if not lock_str == Lock_StatusStruct[READVALUE]:
# # or (now - Lock_StatusStruct[READTIME]) > PERIODIC_EVENT:
# # Lock_StatusStruct[READTIME] = time.time()
# # Lock_StatusStruct[READVALUE] = lock_str
# # locking = self.read_lock()
# LockingStruct = self._getAttrStruct('Locking')
# self._checkLocking()
# # if not self.is_lockedByTango == LockingStruct[READVALUE]:
# # # or (now - LockingStruct[READTIME]) > PERIODIC_EVENT:
# # LockingStruct[READTIME] = time.time()
# # LockingStruct[READVALUE] = self.is_lockedByTango
# def __attrHasEvents(self, attrName):
# '''
# '''
# attrStruct = self._getAttrStruct(attrName)
# if attrStruct._eventsObj:
# return True
# return False
# # if attrName in self._plcAttrs and \
# # EVENTS in self._plcAttrs[attrName]:
# # return True
# # elif attrName in self._internalAttrs and \
# # EVENTS in self._internalAttrs[attrName].keys():
# # return True
# # return False
# def __getAttrReadValue(self, attrName):
# '''
# '''
# attrStruct = self._getAttrStruct(attrName)
# if READVALUE in attrStruct:
# if type(attrStruct[READVALUE]) == CircularBuffer:
# return attrStruct[READVALUE].value
# elif type(attrStruct[READVALUE]) == HistoryBuffer:
# return attrStruct[READVALUE].array
# return attrStruct[READVALUE]
# return None
# def __lastEventHasChangingQuality(self, attrName):
# attrStruct = self._getAttrStruct(attrName)
# if MEANINGS in attrStruct or ISRESET in attrStruct:
# # To these attributes this doesn't apply
# return False
# if LASTEVENTQUALITY in attrStruct:
# if attrStruct[LASTEVENTQUALITY] == \
# PyTango.AttrQuality.ATTR_CHANGING:
# return True
# else:
# return False
# else:
# return False
# def __attrValueHasThreshold(self, attrName):
# if EVENTS in self._getAttrStruct(attrName) and \
# THRESHOLD in self._getAttrStruct(attrName)[EVENTS]:
# return True
# else:
# return False
# def __isRstAttr(self, attrName):
# self.warn_stream("DEPRECATED: __isRstAttr(%s)" % (attrName))
# if attrName.startswith('lastUpdate'):
# return False
# if ISRESET in self._getAttrStruct(attrName):
# return self._getAttrStruct(attrName)[ISRESET]
# else:
# return False
# def __checkAttrEmissionParams(self, attrName, newValue):
# if not self.__attrHasEvents(attrName):
# self.warn_stream("No events for the attribute %s" % (attrName))
# return False
# lastValue = self.__getAttrReadValue(attrName)
# if lastValue is None:
# # If there is no previous read, it has to be emitted
# return True
# # after that we know the values are different
# if self.__isRstAttr(attrName):
# writeValue = self._getAttrStruct(attrName)[WRITEVALUE]
# rst_t = self._getAttrStruct(attrName)[RESETTIME]
# if newValue and not lastValue and writeValue and \
# rst_t is not None:
# return True
# elif not newValue and lastValue and not writeValue \
# and rst_t is None:
# return True
# else:
# return False
# if self.__attrValueHasThreshold(attrName):
# diff = abs(lastValue - newValue)
# threshold = self._getAttrStruct(attrName)[EVENTS][THRESHOLD]
# if diff > threshold:
# return True
# elif self.__lastEventHasChangingQuality(attrName):
# # below the threshold and last quality changing is an
# # indicative that a movement has finish, then it's time
# # to emit an event with a quality valid.
# return True
# else:
# return False
# if self.__isHistoryBuffer(attrName):
# if len(lastValue) == 0 or \
# newValue != lastValue[len(lastValue)-1]:
# return True
# else:
# return False
# # At this point any special case has been treated, only avoid
# # to emit if value doesn't change
# if newValue != lastValue:
# return True
# # when non case before, no event
# return False
# def plcGeneralAttrEvents(self):
# '''This method is used to periodically loop to review the list of
# attribute (above the basics) and check if they need event
# emission.
# '''
# now = time.time()
# # attributeList = []
# # for attrName in self._plcAttrs.keys():
# # if attrName not in ['HeartBeat', 'Lock_ST', 'Lock_Status',
# # 'Locking']:
# # attributeList.append(attrName)
# attributeList = self._plcAttrs.keys()
# for exclude in ['HeartBeat', 'Lock_ST', 'Lock_Status', 'Locking']:
# if attributeList.count(exclude):
# attributeList.pop(attributeList.index(exclude))
# # Iterate the remaining to know if they need something to be done
# for attrName in attributeList:
# self.checkResetAttr(attrName)
# attrStruct = self._plcAttrs[attrName]
# if hasattr(attrStruct, 'hardwareRead'):
# attrStruct.hardwareRead(self.read_db)
#
#
# # First check if for this element, it's prepared for events
# # if self.__attrHasEvents(attrName):
# # try:
# # attrStruct = self._plcAttrs[attrName]
# # attrType = attrStruct[TYPE]
# # # lastValue = self.__getAttrReadValue(attrName)
# # last_read_t = attrStruct[READTIME]
# # if READADDR in attrStruct:
# # # read_addr = attrStruct[READADDR]
# # # if READBIT in attrStruct:
# # # read_bit = attrStruct[READBIT]
# # # newValue = self.read_db.bit(read_addr,
# # # read_bit)
# # # else:
# # # newValue = self.read_db.get(read_addr,
# # # *attrType)
# # newValue = attrStruct.hardwareRead(self.read_db)
# # if FORMULA in attrStruct and \
# # 'read' in attrStruct[FORMULA]:
# # newValue = \
# # self.__solveFormula(attrName, newValue,
# # attrStruct[FORMULA]
# # ['read'])
# # if self.__checkAttrEmissionParams(attrName, newValue):
# # self.__applyReadValue(attrName, newValue,
# # self.last_update_time)
# # if MEANINGS in attrStruct:
# # if BASESET in attrStruct:
# # attrValue = attrStruct[READVALUE].array
# # else:
# # attrValue = \
# # self.__buildAttrMeaning(attrName,
# # newValue)
# # attrQuality = \
# # self.__buildAttrQuality(attrName, newValue)
# # elif QUALITIES in attrStruct:
# # attrValue = newValue
# # attrQuality = \
# # self.__buildAttrQuality(attrName,
# # attrValue)
# # elif AUTOSTOP in attrStruct:
# # attrValue = attrStruct[READVALUE].array
# # attrQuality = PyTango.AttrQuality.ATTR_VALID
# # self._checkAutoStopConditions(attrName)
# # else:
# # attrValue = newValue
# # attrQuality = PyTango.AttrQuality.ATTR_VALID
# # # store the current quality to know an end of
# # # a movement: quality from changing to valid
# # attrStruct[LASTEVENTQUALITY] = attrQuality
# # # collect to launch fire event
# # self.__doTraceAttr(attrName,
# # "plcGeneralAttrEvents(%s)"
# # % (attrValue))
# # # elif self.__checkEventReEmission(attrName):
# # # Even there is no condition to emit an event
# # # Check the RE_EVENTS_PERIOD to know if a refresh
# # # would be nice
# # # self.__eventReEmission(attrName)
# # # attr2Reemit += 1
# # except Exception as e:
# # self.warn_stream("In plcGeneralAttrEvents(), "
# # "exception in attribute %s: %s"
# # % (attrName, e))
# # traceback.print_exc()
# # if len(attr2Event) > 0:
# # self.fireEventsList(attr2Event, timestamp=now, log=True)
# # if attr2Reemit > 0:
# # self.debug_stream("%d events due to periodic reemission"
# # % attr2Reemit)
# # self.debug_stream("plcGeneralAttrEvents(): %d events from %d "
# # "attributes" % (len(attr2Event),
# # len(attributeList)))
# def internalAttrEvents(self):
# '''
# '''
# now = time.time()
# attributeList = self._internalAttrs.keys()
# attr2Event = []
# for attrName in attributeList:
# if self.__attrHasEvents(attrName):
# try:
# # evaluate if emit is needed
# # internal attr types:
# # - logical
# # - sets
# attrStruct = self._getAttrStruct(attrName)
# attrType = attrStruct[TYPE]
# lastValue = self.__getAttrReadValue(attrName)
# last_read_t = attrStruct[READTIME]
# if LOGIC in attrStruct:
# # self.info_stream("Attribute %s is from logical "
# # "type"%(attrName))
# newValue = self._evalLogical(attrName)
# elif 'read_set' in attrStruct:
# # self.info_stream("Attribute %s is from group "
# # "type" % (attrName))
# newValue = \
# self.__getGrpBitValue(attrName,
# attrStruct['read_set'],
# self.read_db)
# elif AUTOSTOP in attrStruct:
# newValue = lastValue
# # FIXME: do it better.
# # Don't emit events on the loop, because they shall
# # be only emitted when they are written.
# self._refreshInternalAutostopParams(attrName)
# # FIXME: this is task for a internalUpdaterThread
# elif MEAN in attrStruct or STD in attrStruct:
# # self._updateStatistic(attrName)
# newValue = attrStruct[READVALUE]
# elif TRIGGERED in attrStruct:
# newValue = attrStruct[TRIGGERED]
# elif isinstance(attrStruct, EnumerationAttr):
# newValue = lastValue # avoid emit
# else:
# # self.warn_stream("In internalAttrEvents(): "
# # "unknown how to emit events "
# # "for %s attribute" % (attrName))
# newValue = lastValue
# emit = False
# if newValue != lastValue:
# # self.info_stream("Emit because %s!=%s"
# # % (str(newValue),
# # str(lastValue)))
# emit = True
# elif (last_read_t is None):
# # self.info_stream("Emit new value because it "
# # "wasn't read before")
# emit = True
# else:
# pass
# # self.info_stream("No event to emit "
# # "(lastValue %s (%s), "
# # "newValue %s)"
# # %(str(lastValue),
# # str(last_read_t),
# # str(newValue)))
# except Exception as e:
# self.error_stream("In internalAttrEvents(), "
# "exception reading attribute %s: %s"
# % (attrName, e))
# traceback.print_exc()
# else:
# # prepare to emit
# try:
# if emit:
# self.__applyReadValue(attrName,
# newValue,
# self.last_update_time)
# if MEANINGS in attrStruct:
# attrValue = \
# self.__buildAttrMeaning(attrName,
# newValue)
# attrQuality = \
# self.__buildAttrQuality(attrName,
# newValue)
# elif QUALITIES in attrStruct:
# attrValue = newValue
# attrQuality = \
# self.__buildAttrQuality(attrName,
# attrValue)
# else:
# attrValue = newValue
# attrQuality =\
# PyTango.AttrQuality.ATTR_VALID
# attr2Event.append([attrName, attrValue])
# self.__doTraceAttr(attrName,
# "internalAttrEvents(%s)"
# % (attrValue))
# except Exception as e:
# self.error_stream("In internalAttrEvents(), "
# "exception on emit attribute "
# "%s: %s" % (attrName, e))
# # if len(attr2Event) > 0:
# # self.fireEventsList(attr2Event, timestamp=now, log=True)
# # self.debug_stream("internalAttrEvents(): %d events from %d "
# # "attributes" % (len(attr2Event),
# # len(attributeList)))
# return len(attr2Event)
# def checkResetAttr(self, attrName):
# '''
# '''
# self.warn_stream("DEPRECATED: checkResetAttr(%s)" % (attrName))
# if not self.__isRstAttr(attrName):
# return
# # FIXME: ---
# # if this is moved to a new thread separated to the event
# # emit, the system must be changed to be passive waiting
# # (that it Threading.Event())
# if self.__isCleanResetNeed(attrName):
# self._plcAttrs[attrName][RESETTIME] = None
# readAddr = self._plcAttrs[attrName][READADDR]
# writeAddr = self._plcAttrs[attrName][WRITEADDR]
# writeBit = self._plcAttrs[attrName][WRITEBIT]
# writeValue = False
# self.__writeBit(attrName, readAddr,
# writeAddr, writeBit, writeValue)
# self._plcAttrs[attrName][WRITEVALUE] = writeValue
# self.info_stream("Set back to 0 a RST attr %s" % (attrName))
# # self._plcAttrs[attrName][READVALUE] = False
# # self.fireEvent([attrName, False], time.time())
# def __isCleanResetNeed(self, attrName):
# '''
# '''
# now = time.time()
# if self.__isResetAttr(attrName):
# read_value = self._plcAttrs[attrName][READVALUE]
# rst_t = self._plcAttrs[attrName][RESETTIME]
# if read_value and rst_t is not None:
# diff_t = now-rst_t
# if RESETACTIVE in self._plcAttrs[attrName]:
# activeRst_t = self._plcAttrs[attrName][RESETACTIVE]
# else:
# activeRst_t = ACTIVE_RESET_T
# if activeRst_t-diff_t < 0:
# self.info_stream("Attribute %s needs clean reset"
# % (attrName))
# return True
# self.info_stream("Do not clean reset flag yet for %s "
# "(%6.3f seconds left)"
# % (attrName, activeRst_t-diff_t))
# return False
# def __isResetAttr(self, attrName):
# '''
# '''
# if attrName in self._plcAttrs and \
# ISRESET in self._plcAttrs[attrName] and \
# self._plcAttrs[attrName][ISRESET]:
# return True
# return False
def auto_local_lock(self):
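        '''When the device runs on the local host and the PLC is not locked,
        take the lock and wait one update period for the PLC to reflect it.
        '''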
if self._deviceIsInLocal:
if 'Locking' in self._plcAttrs:
if not self._plcAttrs['Locking'].rvalue:
self.info_stream("Device is in Local mode and "
"not locked. Proceed to lock it")
with self.dataBlockSemaphore:
self.relock()
time.sleep(self._getPlcUpdatePeriod())
# else:
# self.info_stream("Device is in Local mode and locked")
else:
self.warn_stream("Device in Local mode but 'Locking' "
"attribute not yet present")
# else:
# self.info_stream("Device is not in Local mode")
def relock(self):
        '''Request the PLC lock if it has not already been requested
        (the write value of 'Locking' is False).
        '''
if 'Locking' in self._plcAttrs and \
not self._plcAttrs['Locking'].wvalue:
self.write_lock(True)
# end "To be moved" section
def write_lock(self, value):
        '''Set or clear the 'Locking' bit in the PLC write data block and
        check the readback after one update period.
        '''
if self.get_state() == PyTango.DevState.FAULT or \
not self.has_data_available():
return # raise AttributeError("Not available in fault state!")
if not isinstance(value, bool):
raise ValueError("write_lock argument must be a boolean")
if 'Locking' in self._plcAttrs:
raddr = self._plcAttrs['Locking'].read_addr
rbit = self._plcAttrs['Locking'].read_bit
rbyte = self.read_db.b(raddr)
            waddr = self._plcAttrs['Locking'].write_addr
if value:
# sets bit 'bitno' of b
toWrite = rbyte | (int(value) << rbit)
# a byte of 0s with a unique 1 in the place to set this 1
else:
# clears bit 'bitno' of b
                toWrite = rbyte & ((0xFF) ^ (1 << rbit))
# a byte of 1s with a unique 0 in the place to set this 0
self.write_db.write(waddr, toWrite, TYPE_MAP[PyTango.DevUChar])
time.sleep(self._getPlcUpdatePeriod())
reRead = self.read_db.b(raddr)
self.info_stream("Writing Locking boolean to %s (%d.%d) byte "
"was %s; write %s; now %s"
% (" lock" if value else "unlock",
raddr, rbit, bin(rbyte), bin(toWrite),
bin(reRead)))
self._plcAttrs['Locking'].write_value = value
@CommandExc
def Update(self):
'''Deprecated
'''
pass
# PROTECTED REGION END --- LinacData.Update
# ==================================================================
#
# LinacDataClass class definition
#
# ==================================================================
class LinacDataClass(PyTango.DeviceClass):
# -------- Add you global class variables here ------------------------
# PROTECTED REGION ID(LinacData.global_class_variables) ---
# PROTECTED REGION END --- LinacData.global_class_variables
def dyn_attr(self, dev_list):
"""Invoked to create dynamic attributes for the given devices.
Default implementation calls
:meth:`LinacData.initialize_dynamic_attributes` for each device
:param dev_list: list of devices
:type dev_list: :class:`PyTango.DeviceImpl`"""
for dev in dev_list:
try:
dev.initialize_dynamic_attributes()
except:
dev.warn_stream("Failed to initialize dynamic attributes")
dev.debug_stream("Details: " + traceback.format_exc())
# PROTECTED REGION ID(LinacData.dyn_attr) ENABLED START ---
# PROTECTED REGION END --- LinacData.dyn_attr
# Class Properties ---
class_property_list = {}
# Device Properties ---
device_property_list = {'ReadSize': [PyTango.DevShort,
"how many bytes to read (should "
"be a multiple of 2)", 0
],
'WriteSize': [PyTango.DevShort,
"size of write data block", 0],
'IpAddress': [PyTango.DevString,
"ipaddress of linac PLC "
"(deprecated)", ''],
'PlcAddress': [PyTango.DevString,
"ipaddress of linac PLC", ''],
'Port': [PyTango.DevShort,
"port of linac PLC (deprecated)",
None],
                            'LocalPort': [PyTango.DevShort,
                                          "port of linac PLC when "
                                          "connecting in local mode",
                                          None],
                            'RemotePort': [PyTango.DevShort,
                                           "port of linac PLC when "
                                           "connecting in remote mode",
                                           None],
'AttrFile': [PyTango.DevString,
"file that contains description "
"of attributes of this "
"Linac data block", ''],
'BindAddress': [PyTango.DevString,
'ip of the interface used to '
'communicate with plc '
'(deprecated)', ''],
'LocalAddress': [PyTango.DevString,
'ip of the interface used '
'to communicate with plc as '
'the local', '10.0.7.100'],
'RemoteAddress': [PyTango.DevString,
'ip of the interface used '
'to communicate with plc as '
'the remote', '10.0.7.1'],
'TimeoutAlarm': [PyTango.DevDouble,
"after how many seconds of "
"silence the state is set "
"to alarm, this should be "
"less than TimeoutConnection",
1.0],
'TimeoutConnection': [PyTango.DevDouble,
"after how many seconds "
"of silence the "
"connection is assumed "
"to be interrupted",
1.5],
'ReconnectWait': [PyTango.DevDouble,
"after how many seconds "
"since the last update the "
"next connection attempt is "
"made", 6.0],
}
class_property_list['TimeoutAlarm'] = \
device_property_list['TimeoutAlarm']
class_property_list['TimeoutConnection'] = \
device_property_list['TimeoutConnection']
class_property_list['ReconnectWait'] = \
device_property_list['ReconnectWait']
# Command definitions ---
cmd_list = {'ReloadAttrFile': [[PyTango.DevVoid, ""],
[PyTango.DevVoid, ""]],
'Exec': [[PyTango.DevString, "statement to executed"],
[PyTango.DevString, "result"],
{'Display level': PyTango.DispLevel.EXPERT, }],
'GetBit': [[PyTango.DevVarShortArray, "idx"],
[PyTango.DevBoolean, ""],
{'Display level': PyTango.DispLevel.EXPERT, }],
'GetByte': [[PyTango.DevShort, "idx"],
[PyTango.DevShort, ""],
{'Display level': PyTango.DispLevel.EXPERT, }],
'GetShort': [[PyTango.DevShort, "idx"],
[PyTango.DevShort, ""],
{'Display level':
PyTango.DispLevel.EXPERT, }],
'GetFloat': [[PyTango.DevShort, "idx"],
[PyTango.DevFloat, ""],
{'Display level':
PyTango.DispLevel.EXPERT, }],
'HexDump': [[PyTango.DevVoid, "idx"],
[PyTango.DevString, "hexdump of all data"]],
'Hex': [[PyTango.DevShort, "idx"],
[PyTango.DevString, ""]],
'DumpTo': [[PyTango.DevString, "target file"],
[PyTango.DevVoid, ""], {}],
'WriteBit': [[PyTango.DevVarShortArray,
"idx, bitno, value"],
[PyTango.DevVoid, ""],
{'Display level':
PyTango.DispLevel.EXPERT, }],
'WriteByte': [[PyTango.DevVarShortArray, "idx, value"],
[PyTango.DevVoid, ""],
{'Display level':
PyTango.DispLevel.EXPERT, }],
'WriteShort': [[PyTango.DevVarShortArray, "idx, value"],
[PyTango.DevVoid, ""],
{'Display level':
PyTango.DispLevel.EXPERT, }],
'WriteFloat': [[PyTango.DevVarFloatArray, "idx, value"],
[PyTango.DevVoid, ""],
{'Display level':
PyTango.DispLevel.EXPERT}],
'ResetState': [[PyTango.DevVoid, ""],
[PyTango.DevVoid, ""]],
'Update': [[PyTango.DevVoid, ""],
[PyTango.DevVoid, ""],
# { 'polling period' : 50 }
],
'RestoreReadDB': [[PyTango.DevVoid, ""],
[PyTango.DevVoid, ""],
{'Display level':
PyTango.DispLevel.EXPERT}],
}
# Attribute definitions ---
attr_list = {'EventsTime': [[PyTango.DevDouble,
PyTango.SPECTRUM,
PyTango.READ, 1800],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'EventsTimeMin': [[PyTango.DevDouble,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'EventsTimeMax': [[PyTango.DevDouble,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'EventsTimeMean': [[PyTango.DevDouble,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'EventsTimeStd': [[PyTango.DevDouble,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'EventsNumber': [[PyTango.DevShort,
PyTango.SPECTRUM,
PyTango.READ, 1800],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'EventsNumberMin': [[PyTango.DevUShort,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'EventsNumberMax': [[PyTango.DevUShort,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'EventsNumberMean': [[PyTango.DevDouble,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'EventsNumberStd': [[PyTango.DevDouble,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'IsTooFarEnable': [[PyTango.DevBoolean,
PyTango.SCALAR,
PyTango.READ_WRITE],
{'label':
"Is Too Far readback Feature "
"Enabled?",
'Display level':
PyTango.DispLevel.EXPERT,
'description':
"This boolean is to enable or "
"disable the feature to use the "
"quality warning for readback "
"attributes with setpoint too far",
'Memorized': "true"
}
],
'forceWriteDB': [[PyTango.DevString,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'cpu_percent': [[PyTango.DevDouble,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'mem_percent': [[PyTango.DevDouble,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT}
],
'mem_rss': [[PyTango.DevULong,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT,
'unit': 'Bytes'}
],
'mem_swap': [[PyTango.DevULong,
PyTango.SCALAR,
PyTango.READ],
{'Display level':
PyTango.DispLevel.EXPERT,
'unit': 'Bytes'}
],
}
if __name__ == '__main__':
try:
py = PyTango.Util(sys.argv)
py.add_TgClass(LinacDataClass, LinacData, 'LinacData')
U = PyTango.Util.instance()
U.server_init()
U.server_run()
except PyTango.DevFailed as e:
PyTango.Except.print_exception(e)
except Exception as e:
traceback.print_exc()
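# --- Hedged client-side usage sketch (not part of the device server above). ---
# Assuming a LinacData instance is registered under a hypothetical Tango name such
# as 'li/ct/plc1', the expert commands declared in cmd_list could be invoked
# through a DeviceProxy roughly as follows:
#
#     import PyTango
#     plc = PyTango.DeviceProxy('li/ct/plc1')
#     value = plc.GetFloat(10)        # read a float at datablock index 10
#     plc.WriteBit([42, 3, 1])        # set bit 3 of the byte at index 42 to 1
#     plc.ReloadAttrFile()            # re-read the attribute description file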
|
I have, for too long a period of time accepted the opinion of others (even though they were directly affecting my life) as if they were objective events totally out of my control. Because I separated such opinions from the persons who were making them, I accepted them the way I accepted natural disasters; and I endured them as inevitable.
Mitsuye Yamada was of the World War II generation and had grown up in a considerably different time and context in the U.S. when she wrote those words. Yet as I read them for the first time, I felt as if I had thought/written them. Her awakening into an understanding of how she was racialized and gendered as an Asian American woman, both in her own family and in the outside world, mirrored and validated my experience as well.
It took me 28 years to truly acknowledge that the act of debating was not a bad thing. Walking outside on a cloudy, wintry day in Ithaca, New York, where I was attending graduate school at Cornell University, I had just finished a heated debate with two colleagues at dinner about the use of a specific academic term in our studies; my friend and I were against it and the other person was for it. I could finally see that debate was good for developing intellectual capacity, for truly refining why you believe or think a certain way after hearing opposing arguments.
I had always equated debates, especially heated debates, with arguments, with conflict and fighting. Growing up as the youngest child in a mostly traditional, patriarchal Korean household had trained me by default to not question my elders. Anytime I spoke, it seemed to cause more discord within my family, because people started to fight! I started to think, wrongly, that I was the cause of this conflict, because I had spoken. So I learned to stay silent, to disassociate to a different place in my mind whenever fights happened. I went through school for a long time fearing interactions with teachers and other authority figures, and fearing the repercussions of using my voice. My learned silence and passive acceptance were good for avoiding and preventing familial conflict, even as those behaviors were reinforced by the outside world’s expectations.
Thus, it should not have been a surprise when I finally started to use my voice at home and I encountered resistance to it.
“Why are you acting like this? Why are you so argumentative, like you’re a teenager? You were so quiet and good when you were an actual teenager; you used to accept our word. And now that you’ve lived away from home, you’ve changed!” – my mother exclaimed in frustration to me this summer, when I heatedly contested something my parents had said. Essentially, she wanted me to stay quiet, even when they were wrong.
When my mother uttered those words, I had lived away from home for nearly ten years since starting college when I was 18 years old, with transitional stints staying with my parents during school breaks and unemployment periods. I had lived in El Salvador for nearly two years serving in the Peace Corps in my mid-twenties; then in rural, upstate New York to attend graduate school at Cornell, where I stayed for another two years; and I had lived and worked in India during the most recent summer. In between, I had traveled extensively. To say I had changed was accurate, if not an understatement. I had been in diverse environments with different histories and cultures; I had learned to find comfort in discomfort; and I had been immersed in places that were marred by global inequality, poverty, and violence. My worldview had broadened; my understanding of injustice and inequality had deepened; and my intellectual capacity for critical thinking had been heightened. But my parents, living in suburban California, worn down by years of hard immigrant life, and from a rigidly socially conservative era, had not kept up with my metamorphosis. They continued to treat me as their youngest daughter, a girl whose opinion was inconsequential, one who didn’t own her voice.
Going to college and taking ethnic studies courses at UC San Diego was where I first started to find that voice, one that was able to articulate a growing critical consciousness. This (re)education made me feel truly validated and gave expression to how I had felt but had not had the vocabulary or knowledge to express. My parents were reticent about their past in post-war Korea and about their experiences as immigrants in the U.S. Silence permeated and filled the crevices of our small apartment with all the unspoken han*** of our family’s and people’s history and experiences.
The “squeaky wheel gets the grease” was a saying that I heard much later in life, because it was never part of my childhood. The opposite was more commonly repeated in my house: Keep your head down, don’t cause trouble, don’t complain, just deal with it, and keep moving forward. This mantra of survival suited my parents’ upbringing in post-war Korea, raised by my grandparents who had endured decades of brutal, oppressive Japanese colonization, a world war, American and Soviet occupation, and a civil war partitioning their country for the foreseeable future. They had grown up in a strictly hierarchical, patriarchal family environment. When my parents were children, they were not allowed to sit at the same dining table as their parents, and could only speak when spoken to. While they were considerably more relaxed with me, there were still inevitable traces of their traditional upbringing. Debate and discussion were not encouraged. Silence at the dinner table was valued. The youngest was there to listen to others, not to be heard. It’s really only now, in my late-twenties, that I’m realizing the extent to which my silence “ultimately rendered me invisible” to others who were stereotyping me as a quiet, submissive Asian American woman.
A friend of mine pointed out once that I had an “offended silence” whenever someone said something offensive in my presence, where I raised my eyebrows and looked away as if thinking to myself, Don’t punch them in the face. I usually think, They’re so ignorant, they don’t even deserve my energy or attention to correct them. But as Yamada articulates in the above quote, I was not teaching anyone anything besides the reinforcement of my own “expected role” in a society still laced with white supremacist, heteronormative attitudes.
It wasn’t until I lived with my then-boyfriend last year at Cornell that I realized for the first time how much my silence had rendered me invisible. I was well-educated and worldly, and had gotten into an Ivy League university for graduate school. I thought I was beyond the stereotype of the submissive, silent Asian woman. But my “offended silence” reinforced my invisibility as he, a white man, made a fair share of ignorant, micro-aggressive remarks about Asians or myself as an Asian. He’s a white guy from New Jersey who hasn’t traveled abroad much and hasn’t been around Asian Americans. I just need to give him some time to emerge from his ignorance after being around me, I rationalized. Sometimes, I did call him out, and he would take it well, but I did not feel that it was my responsibility to teach him out of all his ignorance.
I had finished my classes and was taking one last semester to finish my thesis, so I had more free time than before when I had had a full course load and teaching responsibilities. In the cold, long winter of Ithaca, I stayed in our home and worked on my thesis, avoiding the long uphill walk to campus in the snow. Surprising myself, I relished in domesticity. I cleaned, I cooked, I listened to him talk about his day, his worries, and his complaints about colleagues and professors. I listened and I listened. I had endless patience for this self-centered, selfish boy in a man’s body. Until he made it clear over time that he wanted his words to hurt me. He wanted me to listen to how I was unworthy and ungrateful, how I was stupid, lazy, boring, and everything that the white patriarchy tells women of color about how they are not good enough in one way or another. I wish I could say I spoke up for myself then, but all I could do was stay silent and blame myself when he had angry outbursts, and when he made mean, condescending remarks. Sometimes, I cried. I endured his verbal abuse as “inevitable natural disasters.” They were, in fact, totally unnatural. When I eventually recovered my voice to speak up for myself and to break up with him, he sneered and refused to listen to anything I had to say.
Why had I let things get so far? Why had I let him disrespect me so much? It took this pain and the development of my own personal han to acknowledge how much I had “separated opinions from the persons who were making them” in all aspects of my life. Parents, friends, strangers, colleagues, lovers. I had not applied my knowledge to my daily experiences, and I had become aware of how much I was still stereotyped, even by those close to me. A colleague had described me as “apolitical” to my then-boyfriend, when, in fact, he was the apolitical one. I had actively discussed and cared about immigration and civil rights, while he avidly avoided any political talk. But this colleague only saw us in our stereotypes of Asian woman and white man; she did not listen to what was actually being said and by whom.
It was time to move on and build a home elsewhere, a home where I would be visible. I had tried and failed to build a home with him, both metaphorically in our relationship and literally in the apartment we shared. Upon my return to my home town, I also realized how different I had become after my various experiences, and how difficult it was to be the daughter my parents once knew. Now, I’m settling down somewhere else for a while for work. It is only a matter of time before I become restless again. But now I know, home is in myself first and foremost, and it will withstand the “man-made disasters” as I demand accountability and authenticity in those around me and in myself.
*Referring to title of book by Thomas Wolfe (1940). You Can’t Go Home Again.
***Korean term for which there is no direct translation in English, but means a collective (people’s or nation’s) feeling of anger/hurt/grievances that have not been addressed with justice.
Yuri Lee was born and raised in southern California. She studied Urban Planning and Ethnic Studies at UC San Diego, served in the Peace Corps in El Salvador, and earned a masters degree in International Agriculture from Cornell University, where she researched coffee plant pathology and soil health in Colombia, and food and nutrition security policies in India. She is currently working as a consultant. |
# vim: ts=4:sw=4:expandtab
# -*- coding: UTF-8 -*-
# BleachBit
# Copyright (C) 2008-2019 Andrew Ziem
# https://www.bleachbit.org
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
Test case for module Memory
"""
from __future__ import absolute_import, print_function
from tests import common
from bleachbit.Memory import *
import unittest
import sys
running_linux = sys.platform.startswith('linux')
class MemoryTestCase(common.BleachbitTestCase):
"""Test case for module Memory"""
@unittest.skipUnless(running_linux, 'not running linux')
def test_get_proc_swaps(self):
"""Test for method get_proc_swaps"""
ret = get_proc_swaps()
self.assertGreater(len(ret), 10)
        if not re.search(r'Filename\s+Type\s+Size', ret):
raise RuntimeError("Unexpected first line in swap summary '%s'" % ret)
@unittest.skipUnless(running_linux, 'not running linux')
def test_make_self_oom_target_linux(self):
"""Test for method make_self_oom_target_linux"""
# preserve
euid = os.geteuid()
# Minimally test there is no traceback
make_self_oom_target_linux()
# restore
os.seteuid(euid)
@unittest.skipUnless(running_linux, 'not running linux')
def test_count_linux_swap(self):
"""Test for method count_linux_swap"""
n_swaps = count_swap_linux()
self.assertIsInteger(n_swaps)
self.assertTrue(0 <= n_swaps < 10)
def test_physical_free_darwin(self):
# TODO: use mock
self.assertEqual(physical_free_darwin(lambda:
"""Mach Virtual Memory Statistics: (page size of 4096 bytes)
Pages free: 836891.
Pages active: 588004.
Pages inactive: 16985.
Pages speculative: 89776.
Pages throttled: 0.
Pages wired down: 468097.
Pages purgeable: 58313.
"Translation faults": 3109985921.
Pages copy-on-write: 25209334.
Pages zero filled: 537180873.
Pages reactivated: 132264973.
Pages purged: 11567935.
File-backed pages: 184609.
Anonymous pages: 510156.
Pages stored in compressor: 784977.
Pages occupied by compressor: 96724.
Decompressions: 66048421.
Compressions: 90076786.
Pageins: 758631430.
Pageouts: 30477017.
Swapins: 19424481.
Swapouts: 20258188.
"""), 3427905536)
self.assertRaises(RuntimeError, physical_free_darwin, lambda: "Invalid header")
def test_physical_free(self):
"""Test for method physical_free"""
ret = physical_free()
self.assertIsInteger(ret, 'physical_free() returns variable type %s' % type(ret))
self.assertGreater(physical_free(), 0)
report_free()
@unittest.skipUnless(running_linux, 'not running linux')
def test_get_swap_size_linux(self):
"""Test for get_swap_size_linux()"""
with open('/proc/swaps') as f:
swapdev = f.read().split('\n')[1].split(' ')[0]
if 0 == len(swapdev):
self.skipTest('no active swap device detected')
size = get_swap_size_linux(swapdev)
self.assertIsInteger(size)
self.assertGreater(size, 1024 ** 2)
logger.debug("size of swap '%s': %d B (%d MB)", swapdev, size, size / (1024 ** 2))
with open('/proc/swaps') as f:
proc_swaps = f.read()
size2 = get_swap_size_linux(swapdev, proc_swaps)
self.assertEqual(size, size2)
@unittest.skipUnless(running_linux, 'not running linux')
def test_get_swap_uuid(self):
"""Test for method get_swap_uuid"""
self.assertEqual(get_swap_uuid('/dev/doesnotexist'), None)
def test_parse_swapoff(self):
"""Test for method parse_swapoff"""
tests = (
# Ubuntu 15.10 has format "swapoff /dev/sda3"
('swapoff /dev/sda3', '/dev/sda3'),
('swapoff for /dev/sda6', '/dev/sda6'),
('swapoff on /dev/mapper/lubuntu-swap_1', '/dev/mapper/lubuntu-swap_1'))
for test in tests:
self.assertEqual(parse_swapoff(test[0]), test[1])
@unittest.skipUnless(running_linux, 'skipping test on non-linux')
def test_swap_off_swap_on(self):
"""Test for disabling and enabling swap"""
if not General.sudo_mode() or os.getuid() > 0:
self.skipTest('not enough privileges')
disable_swap_linux()
enable_swap_linux()
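# --- Hedged usage sketch, based only on the signatures exercised in the tests above. ---
# The functions under test come from bleachbit.Memory; outside the test suite they
# might be used roughly like this (Linux-only calls noted):
#
#     from bleachbit.Memory import physical_free, parse_swapoff, count_swap_linux
#     print('free bytes: %d' % physical_free())
#     print(parse_swapoff('swapoff on /dev/mapper/lubuntu-swap_1'))  # -> '/dev/mapper/lubuntu-swap_1'
#     print('active swap devices: %d' % count_swap_linux())          # Linux only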
|
After breakfast with the children, the first job of the lady of the house would be to talk to the housekeeper. It would be important for them to communicate about the other servants, making sure they were doing their jobs properly and behaving correctly above and below stairs.
They would also discuss the evening meal. If visitors were expected, the lady would choose meals that were lavish and unusual. (They loved showing off.) When these matters were dealt with, the wife would then check through the household accounts. Bills for meat, candles and flour would usually be paid weekly. When the early morning activities were finished, the social whirl would begin! High society ladies would either receive calls or visit others. Tea would be drunk and snacks eaten.
A very agreeable pastime for a young Regency lady (especially a Brunswick town resident) was to show off their latest fans and fashions along the sea front. To stroll along the promenade at Brighton was a popular way to spend a few hours.
In Brunswick Square the lady of the house would have aspired to be the best dressed in town. Different outfits would be laid out by the maid for each section of the day. Politically, the French Revolution was a major talking point, thus introducing popular French fashions. At this time Napoleon had declared himself ruler of the Empire, and we see the emergence of the 'Empire Line' dress. This style raised the waist-line to sit under the breast.
This design did away with the need for uncomfortable boned corsets. However, if corsets were worn, the lady would be helped to dress by a maid. The maid would pull the strings tight at the back of the garment until the lady was laced in. On top of this would be placed a range of petticoats, then finally a 'Morning Dress'.
A Regency woman would change her clothes up to 6 times a day and would have had a number of different outfits for every conceivable occasion. |
from __future__ import division, print_function, absolute_import
import warnings
import numpy as np
import numpy.testing as npt
from scipy import integrate
from scipy import stats
from scipy.special import betainc
from common_tests import (check_normalization, check_moment, check_mean_expect,
check_var_expect, check_skew_expect, check_kurt_expect,
check_entropy, check_private_entropy, NUMPY_BELOW_1_7,
check_edge_support, check_named_args, check_random_state_property)
from scipy.stats._distr_params import distcont
"""
Test all continuous distributions.
Parameters were chosen for those distributions that pass the
Kolmogorov-Smirnov test. This provides safe parameters for each
distributions so that we can perform further testing of class methods.
These tests currently check only/mostly for serious errors and exceptions,
not for numerically exact results.
"""
## Note that you need to add new distributions you want tested
## to _distr_params
DECIMAL = 5 # specify the precision of the tests # increased from 0 to 5
## Last four of these fail all around. Need to be checked
distcont_extra = [
['betaprime', (100, 86)],
['fatiguelife', (5,)],
['mielke', (4.6420495492121487, 0.59707419545516938)],
['invweibull', (0.58847112119264788,)],
# burr: sample mean test fails still for c<1
['burr', (0.94839838075366045, 4.3820284068855795)],
# genextreme: sample mean test, sf-logsf test fail
['genextreme', (3.3184017469423535,)],
]
# for testing only specific functions
# distcont = [
## ['fatiguelife', (29,)], #correction numargs = 1
## ['loggamma', (0.41411931826052117,)]]
# for testing ticket:767
# distcont = [
## ['genextreme', (3.3184017469423535,)],
## ['genextreme', (0.01,)],
## ['genextreme', (0.00001,)],
## ['genextreme', (0.0,)],
## ['genextreme', (-0.01,)]
## ]
# distcont = [['gumbel_l', ()],
## ['gumbel_r', ()],
## ['norm', ()]
## ]
# distcont = [['norm', ()]]
distmissing = ['wald', 'gausshyper', 'genexpon', 'rv_continuous',
'loglaplace', 'rdist', 'semicircular', 'invweibull', 'ksone',
'cosine', 'kstwobign', 'truncnorm', 'mielke', 'recipinvgauss', 'levy',
'johnsonsu', 'levy_l', 'powernorm', 'wrapcauchy',
'johnsonsb', 'truncexpon', 'invgauss', 'invgamma',
'powerlognorm']
distmiss = [[dist,args] for dist,args in distcont if dist in distmissing]
distslow = ['rdist', 'gausshyper', 'recipinvgauss', 'ksone', 'genexpon',
'vonmises', 'vonmises_line', 'mielke', 'semicircular',
'cosine', 'invweibull', 'powerlognorm', 'johnsonsu', 'kstwobign']
# distslow are sorted by speed (very slow to slow)
# NB: not needed anymore?
def _silence_fp_errors(func):
# warning: don't apply to test_ functions as is, then those will be skipped
def wrap(*a, **kw):
olderr = np.seterr(all='ignore')
try:
return func(*a, **kw)
finally:
np.seterr(**olderr)
wrap.__name__ = func.__name__
return wrap
def test_cont_basic():
# this test skips slow distributions
with warnings.catch_warnings():
warnings.filterwarnings('ignore', category=integrate.IntegrationWarning)
for distname, arg in distcont[:]:
if distname in distslow:
continue
            if distname == 'levy_stable':
continue
distfn = getattr(stats, distname)
np.random.seed(765456)
sn = 500
rvs = distfn.rvs(size=sn, *arg)
sm = rvs.mean()
sv = rvs.var()
m, v = distfn.stats(*arg)
yield check_sample_meanvar_, distfn, arg, m, v, sm, sv, sn, \
distname + 'sample mean test'
yield check_cdf_ppf, distfn, arg, distname
yield check_sf_isf, distfn, arg, distname
yield check_pdf, distfn, arg, distname
yield check_pdf_logpdf, distfn, arg, distname
yield check_cdf_logcdf, distfn, arg, distname
yield check_sf_logsf, distfn, arg, distname
if distname in distmissing:
alpha = 0.01
yield check_distribution_rvs, distname, arg, alpha, rvs
locscale_defaults = (0, 1)
meths = [distfn.pdf, distfn.logpdf, distfn.cdf, distfn.logcdf,
distfn.logsf]
# make sure arguments are within support
spec_x = {'frechet_l': -0.5, 'weibull_max': -0.5, 'levy_l': -0.5,
'pareto': 1.5, 'tukeylambda': 0.3}
x = spec_x.get(distname, 0.5)
yield check_named_args, distfn, x, arg, locscale_defaults, meths
yield check_random_state_property, distfn, arg
# Entropy
skp = npt.dec.skipif
yield check_entropy, distfn, arg, distname
if distfn.numargs == 0:
yield skp(NUMPY_BELOW_1_7)(check_vecentropy), distfn, arg
if distfn.__class__._entropy != stats.rv_continuous._entropy:
yield check_private_entropy, distfn, arg, stats.rv_continuous
yield check_edge_support, distfn, arg
knf = npt.dec.knownfailureif
yield knf(distname == 'truncnorm')(check_ppf_private), distfn, \
arg, distname
@npt.dec.slow
def test_cont_basic_slow():
# same as above for slow distributions
with warnings.catch_warnings():
warnings.filterwarnings('ignore', category=integrate.IntegrationWarning)
for distname, arg in distcont[:]:
if distname not in distslow:
continue
            if distname == 'levy_stable':
continue
distfn = getattr(stats, distname)
np.random.seed(765456)
sn = 500
rvs = distfn.rvs(size=sn,*arg)
sm = rvs.mean()
sv = rvs.var()
m, v = distfn.stats(*arg)
yield check_sample_meanvar_, distfn, arg, m, v, sm, sv, sn, \
distname + 'sample mean test'
yield check_cdf_ppf, distfn, arg, distname
yield check_sf_isf, distfn, arg, distname
yield check_pdf, distfn, arg, distname
yield check_pdf_logpdf, distfn, arg, distname
yield check_cdf_logcdf, distfn, arg, distname
yield check_sf_logsf, distfn, arg, distname
# yield check_oth, distfn, arg # is still missing
if distname in distmissing:
alpha = 0.01
yield check_distribution_rvs, distname, arg, alpha, rvs
locscale_defaults = (0, 1)
meths = [distfn.pdf, distfn.logpdf, distfn.cdf, distfn.logcdf,
distfn.logsf]
# make sure arguments are within support
x = 0.5
if distname == 'invweibull':
arg = (1,)
elif distname == 'ksone':
arg = (3,)
yield check_named_args, distfn, x, arg, locscale_defaults, meths
yield check_random_state_property, distfn, arg
# Entropy
skp = npt.dec.skipif
ks_cond = distname in ['ksone', 'kstwobign']
yield skp(ks_cond)(check_entropy), distfn, arg, distname
if distfn.numargs == 0:
yield skp(NUMPY_BELOW_1_7)(check_vecentropy), distfn, arg
if distfn.__class__._entropy != stats.rv_continuous._entropy:
yield check_private_entropy, distfn, arg, stats.rv_continuous
yield check_edge_support, distfn, arg
@npt.dec.slow
def test_moments():
with warnings.catch_warnings():
warnings.filterwarnings('ignore', category=integrate.IntegrationWarning)
knf = npt.dec.knownfailureif
fail_normalization = set(['vonmises', 'ksone'])
fail_higher = set(['vonmises', 'ksone', 'ncf'])
for distname, arg in distcont[:]:
            if distname == 'levy_stable':
continue
distfn = getattr(stats, distname)
m, v, s, k = distfn.stats(*arg, moments='mvsk')
cond1, cond2 = distname in fail_normalization, distname in fail_higher
msg = distname + ' fails moments'
yield knf(cond1, msg)(check_normalization), distfn, arg, distname
yield knf(cond2, msg)(check_mean_expect), distfn, arg, m, distname
yield knf(cond2, msg)(check_var_expect), distfn, arg, m, v, distname
yield knf(cond2, msg)(check_skew_expect), distfn, arg, m, v, s, \
distname
yield knf(cond2, msg)(check_kurt_expect), distfn, arg, m, v, k, \
distname
yield check_loc_scale, distfn, arg, m, v, distname
yield check_moment, distfn, arg, m, v, distname
def check_sample_meanvar_(distfn, arg, m, v, sm, sv, sn, msg):
# this did not work, skipped silently by nose
if not np.isinf(m):
check_sample_mean(sm, sv, sn, m)
if not np.isinf(v):
check_sample_var(sv, sn, v)
def check_sample_mean(sm,v,n, popmean):
# from stats.stats.ttest_1samp(a, popmean):
# Calculates the t-obtained for the independent samples T-test on ONE group
# of scores a, given a population mean.
#
# Returns: t-value, two-tailed prob
df = n-1
svar = ((n-1)*v) / float(df) # looks redundant
t = (sm-popmean) / np.sqrt(svar*(1.0/n))
prob = betainc(0.5*df, 0.5, df/(df + t*t))
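    # betainc(df/2, 1/2, df/(df + t**2)) is the regularized incomplete beta form of
    # the two-sided Student-t p-value, i.e. P(|T| >= |t|) with df degrees of freedom.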
# return t,prob
npt.assert_(prob > 0.01, 'mean fail, t,prob = %f, %f, m, sm=%f,%f' %
(t, prob, popmean, sm))
def check_sample_var(sv,n, popvar):
# two-sided chisquare test for sample variance equal to hypothesized variance
df = n-1
    chi2 = (n-1)*sv/float(popvar)
pval = stats.distributions.chi2.sf(chi2, df) * 2
npt.assert_(pval > 0.01, 'var fail, t, pval = %f, %f, v, sv=%f, %f' %
(chi2, pval, popvar, sv))
def check_cdf_ppf(distfn,arg,msg):
values = [0.001, 0.5, 0.999]
npt.assert_almost_equal(distfn.cdf(distfn.ppf(values, *arg), *arg),
values, decimal=DECIMAL, err_msg=msg +
' - cdf-ppf roundtrip')
def check_sf_isf(distfn,arg,msg):
npt.assert_almost_equal(distfn.sf(distfn.isf([0.1,0.5,0.9], *arg), *arg),
[0.1,0.5,0.9], decimal=DECIMAL, err_msg=msg +
' - sf-isf roundtrip')
npt.assert_almost_equal(distfn.cdf([0.1,0.9], *arg),
1.0-distfn.sf([0.1,0.9], *arg),
decimal=DECIMAL, err_msg=msg +
' - cdf-sf relationship')
def check_pdf(distfn, arg, msg):
# compares pdf at median with numerical derivative of cdf
median = distfn.ppf(0.5, *arg)
eps = 1e-6
pdfv = distfn.pdf(median, *arg)
if (pdfv < 1e-4) or (pdfv > 1e4):
# avoid checking a case where pdf is close to zero or huge (singularity)
median = median + 0.1
pdfv = distfn.pdf(median, *arg)
cdfdiff = (distfn.cdf(median + eps, *arg) -
distfn.cdf(median - eps, *arg))/eps/2.0
# replace with better diff and better test (more points),
# actually, this works pretty well
npt.assert_almost_equal(pdfv, cdfdiff,
decimal=DECIMAL, err_msg=msg + ' - cdf-pdf relationship')
def check_pdf_logpdf(distfn, args, msg):
# compares pdf at several points with the log of the pdf
points = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
vals = distfn.ppf(points, *args)
pdf = distfn.pdf(vals, *args)
logpdf = distfn.logpdf(vals, *args)
pdf = pdf[pdf != 0]
logpdf = logpdf[np.isfinite(logpdf)]
npt.assert_almost_equal(np.log(pdf), logpdf, decimal=7, err_msg=msg + " - logpdf-log(pdf) relationship")
def check_sf_logsf(distfn, args, msg):
# compares sf at several points with the log of the sf
points = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
vals = distfn.ppf(points, *args)
sf = distfn.sf(vals, *args)
logsf = distfn.logsf(vals, *args)
sf = sf[sf != 0]
logsf = logsf[np.isfinite(logsf)]
npt.assert_almost_equal(np.log(sf), logsf, decimal=7, err_msg=msg + " - logsf-log(sf) relationship")
def check_cdf_logcdf(distfn, args, msg):
# compares cdf at several points with the log of the cdf
points = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
vals = distfn.ppf(points, *args)
cdf = distfn.cdf(vals, *args)
logcdf = distfn.logcdf(vals, *args)
cdf = cdf[cdf != 0]
logcdf = logcdf[np.isfinite(logcdf)]
npt.assert_almost_equal(np.log(cdf), logcdf, decimal=7, err_msg=msg + " - logcdf-log(cdf) relationship")
def check_distribution_rvs(dist, args, alpha, rvs):
# test from scipy.stats.tests
# this version reuses existing random variables
D,pval = stats.kstest(rvs, dist, args=args, N=1000)
if (pval < alpha):
D,pval = stats.kstest(dist,'',args=args, N=1000)
npt.assert_(pval > alpha, "D = " + str(D) + "; pval = " + str(pval) +
"; alpha = " + str(alpha) + "\nargs = " + str(args))
def check_vecentropy(distfn, args):
npt.assert_equal(distfn.vecentropy(*args), distfn._entropy(*args))
@npt.dec.skipif(NUMPY_BELOW_1_7)
def check_loc_scale(distfn, arg, m, v, msg):
loc, scale = 10.0, 10.0
mt, vt = distfn.stats(loc=loc, scale=scale, *arg)
npt.assert_allclose(m*scale + loc, mt)
npt.assert_allclose(v*scale*scale, vt)
def check_ppf_private(distfn, arg, msg):
#fails by design for truncnorm self.nb not defined
ppfs = distfn._ppf(np.array([0.1, 0.5, 0.9]), *arg)
npt.assert_(not np.any(np.isnan(ppfs)), msg + 'ppf private is nan')
if __name__ == "__main__":
npt.run_module_suite()
|
Modern facilities, a fully equipped commercial kitchen, business support, and a neighborhood café for foodie folk.
Running a successful food business is no walk in the park. So we wanted to find a way to help. By offering the basics: a licenced, fully equipped and insured co-working kitchen – and a few special extras: ongoing business support and software for managing orders – we’re here to help you get started and to keep you going.
We encourage positive food choices. From free-range to ethical, cruelty-free and sustainable foods, we believe in being responsible for the planet and its resources.
We welcome opportunities to collaborate with those who share our values.
I’m Patta – mom of two and amateur chef. I’m from Thailand, but have called Hong Kong home for the past 10 years.
With a background in trends research and strategy, I wanted to bring innovation to something I really love: fresh, authentic food.
In Hong Kong, it’s hard to get started as a food entrepreneur. Rents are high and space is limited, so it’s difficult to get a break, let alone run a business. We set out to change all that. Fast forward 4 years and our Hong Kong kitchen is the heart of our community. It’s a hive of activity and creativity, with chefs, caterers, and bakers doing what they do best and growing their businesses every day.
So, why San Francisco? Some of my best friends live here and I come back time and again to sample some of the most incredible local food I’ve ever tried. I love this city and I can’t wait to see what we cook up together.
Pick up your food or book a culinary experience with the best chefs in town.
We love our chefs – and not just because they’ve cooked their way into our hearts. They come from a variety of culinary backgrounds: professionally-trained, self-taught, generational cooking families – you name it.
Fill in the form, tell us a little about you and let's fix a time to get together.
Building a food community in San Francisco.
I loved that the event was more interactive. It was much easier to connect with other Elites while we bonded over chopping, mixing, folding, and frying dumplings.
Subscribe to our newsletter and be the first to hear about our latest community events, cookings classes, and menus. |
#
# Copyright (c) SAS Institute Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Command line chroot manipulation command tests.
"""
import errno
import re
import os
import select
import sys
import time
from conary_test import recipes
from rmake_test import rmakehelp
from conary.lib import coveragehook
def _readIfReady(fd):
if select.select([fd], [], [], 1.0)[0]:
return os.read(fd, 8096)
return ''
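# The interactive chroot tests below pair os.forkpty() with _readIfReady(): the child
# attaches the chroot shell session to a pseudo-terminal, while the parent polls the
# pty master (one-second select timeout per call) until the expected output appears.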
class ChrootTest(rmakehelp.RmakeHelper):
def testChrootManagement(self):
self.openRmakeRepository()
client = self.startRmakeServer()
helper = self.getRmakeHelper(client.uri)
self.buildCfg.cleanAfterCook = False
trv = self.addComponent('simple:source', '1-1', '',
[('simple.recipe', recipes.simpleRecipe)])
jobId = self.discardOutput(helper.buildTroves, ['simple'])
helper.waitForJob(jobId)
chroot = helper.listChroots()[0]
assert(chroot.path == 'simple')
assert(chroot.jobId == jobId)
assert(helper.client.getJob(jobId).getTrove(*chroot.troveTuple))
path = self.rmakeCfg.getChrootDir() + '/' + chroot.path
assert(os.path.exists(path))
self.stopRmakeServer()
client = self.startRmakeServer()
helper = self.getRmakeHelper(client.uri)
chroot = helper.listChroots()[0]
assert(chroot.path == 'simple')
assert(chroot.jobId == jobId)
assert(helper.client.getJob(jobId).getTrove(*chroot.troveTuple))
self.captureOutput(helper.archiveChroot,'_local_', 'simple', 'foo')
archivedPath = self.rmakeCfg.getChrootArchiveDir() + '/foo'
assert(os.path.exists(archivedPath))
archivedChroot = helper.listChroots()[0]
assert(archivedChroot.path == 'archive/foo')
self.stopRmakeServer()
client = self.startRmakeServer()
helper = self.getRmakeHelper(client.uri)
archivedChroot = helper.listChroots()[0]
assert(archivedChroot.path == 'archive/foo')
        self.captureOutput(helper.deleteChroot, '_local_', 'archive/foo')
assert(not helper.listChroots())
assert(not os.path.exists(archivedPath))
def testChrootManagementMultinode(self):
def _getChroot(helper):
data = helper.listChroots()
started = time.time()
while not data:
if time.time() - started > 60:
raise RuntimeError("timeout waiting for chroot to appear")
time.sleep(.2)
data = helper.listChroots()
chroot, = data
return chroot
self.openRmakeRepository()
client = self.startRmakeServer(multinode=True)
helper = self.getRmakeHelper(client.uri)
self.startNode()
self.buildCfg.cleanAfterCook = False
trv = self.addComponent('simple:source', '1-1', '',
[('simple.recipe', recipes.simpleRecipe)])
jobId = self.discardOutput(helper.buildTroves, ['simple'])
helper.waitForJob(jobId)
chroot = helper.listChroots()[0]
assert(chroot.path == 'simple')
assert(chroot.jobId == jobId)
self.stopNodes()
self.startNode()
chroot = _getChroot(helper)
assert(chroot.path == 'simple')
assert(chroot.jobId == jobId)
self.stopNodes()
self.stopRmakeServer()
client = self.startRmakeServer(multinode=True)
self.startNode()
helper = self.getRmakeHelper(client.uri)
chroot = _getChroot(helper)
assert(chroot.path == 'simple')
assert(chroot.jobId == jobId)
self.captureOutput(helper.archiveChroot, self.nodeCfg.name, 'simple', 'foo')
archivedPath = self.nodeCfg.getChrootArchiveDir() + '/foo'
assert(os.path.exists(archivedPath))
archivedChroot = helper.listChroots()[0]
assert(archivedChroot.path == 'archive/foo')
self.stopNodes()
self.stopRmakeServer()
client = self.startRmakeServer(multinode=True)
helper = self.getRmakeHelper(client.uri)
self.startNode()
archivedChroot = _getChroot(helper)
assert(archivedChroot.path == 'archive/foo')
pid, master_fd = os.forkpty()
if not pid:
try:
coveragehook.install()
helper.startChrootSession(jobId, 'simple', ['/bin/sh'])
sys.stdout.flush()
coveragehook.save()
finally:
os._exit(0)
try:
count = 0
data = ''
while not data and count < 60:
data = _readIfReady(master_fd)
count += 1
assert(data)
os.write(master_fd, 'exit\n')
data = _readIfReady(master_fd)
while True:
try:
data += _readIfReady(master_fd)
except OSError:
os.waitpid(pid, 0)
break
finally:
os.close(master_fd)
def testDeleteAllChrootsMultinode(self):
self.openRmakeRepository()
client = self.startRmakeServer(multinode=True)
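        # NOTE: the early return below leaves the remainder of this test disabled.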
return
self.startNode()
helper = self.getRmakeHelper(client.uri)
self.buildCfg.cleanAfterCook = False
try:
trv = self.addComponent('simple:source', '1-1', '',
[('simple.recipe', recipes.simpleRecipe)])
jobId = self.discardOutput(helper.buildTroves, ['simple'])
finally:
self.buildCfg.cleanAfterCook = True
helper.waitForJob(jobId)
chroot = helper.listChroots()[0]
assert(chroot.path == 'simple')
assert(chroot.jobId == jobId)
assert(helper.client.getJob(jobId).getTrove(*chroot.troveTuple))
self.captureOutput(helper.deleteAllChroots)
assert(not helper.listChroots())
def testChrootSession(self):
# NOTE: This test is prone to race conditions. The chroot
# process will occasionally quit right away, probably due to
# a (hidden) error.
self.openRmakeRepository()
client = self.startRmakeServer()
helper = self.getRmakeHelper(client.uri)
oldStdin = sys.stdin
self.buildCfg.cleanAfterCook = False
self.buildCfg.configLine('[context1]')
try:
trv = self.addComponent('simple:source', '1-1', '',
[('simple.recipe', recipes.simpleRecipe)])
jobId = self.discardOutput(helper.buildTroves, ['simple{context1}'])
helper.waitForJob(jobId)
finally:
self.buildCfg.cleanAfterCook = True
pid, master_fd = os.forkpty()
if not pid:
try:
coveragehook.install()
helper.startChrootSession(jobId, 'simple', ['/bin/sh'])
sys.stdout.flush()
coveragehook.save()
finally:
os._exit(0)
try:
count = 0
data = ''
while not data and count < 30:
try:
data = _readIfReady(master_fd)
                except OSError as err:
if err.errno == errno.EIO:
os.waitpid(pid, 0)
raise testsuite.SkipTestException(
"testChrootSession failed yet again")
raise
count += 1
assert(data)
os.write(master_fd, 'echo "this is a test"\n')
data = ''
# White out bash version
r = re.compile(r"sh-[^$]*\$")
expected = 'echo "this is a test"\r\r\nthis is a test\r\r\nsh-X.XX$ '
count = 0
while not data == expected and count < 60:
data += r.sub("sh-X.XX$", str(_readIfReady(master_fd)), 1)
count += 1
self.assertEquals(data, expected)
os.write(master_fd, 'exit\n')
data = _readIfReady(master_fd)
while True:
try:
data += _readIfReady(master_fd)
except OSError:
os.waitpid(pid, 0)
break
expected = 'exit\r\r\nexit\r\r\n*** Connection closed by remote host ***\r\n'
count = 0
while not data == expected and count < 60:
try:
data += _readIfReady(master_fd)
except OSError:
break
count += 1
self.assertEquals(data, expected)
finally:
os.close(master_fd)
|
A full line of #METOO and #TIMESUP special interest buttons. All buttons measure 2.25” and are new. These beautiful campaign buttons will become collectors’ items in the years to come and are 100% Made in the USA.
Interested in purchasing any of these 2020 campaign buttons for a group, rally or special event? If so, choose from the price options below and receive the associated discount on your order. |
# This is a convergence simulation for gossip based consensus.
import json
import time
import logging
from os import urandom
from random import sample, shuffle
from binascii import hexlify
from collections import defaultdict, Counter
from hashlib import sha256
from struct import pack
def make_shard_map(num = 100):
""" Makes a map for 'num' shards (defaults to 100). """
limits = []
MAX = 2**16
for l in range(0, MAX - 1, MAX / num):
l_lower = hexlify(pack(">H", l)) + ("00" * 20)
limits.append(l_lower)
limits = limits + ["f" * 64]
shard_map = []
for i, (b0, b1) in enumerate(zip(limits[:-1],limits[1:])):
shard_map.append((i, (b0, b1)))
shard_map = dict(shard_map)
return shard_map
def within_ID(idx, b0, b1):
""" Tests whether an object identifer is within the
remit of the shard bounds. """
return b0 <= idx < b1
def within_TX(Tx, b0, b1):
""" Test whether the transaction and its dependencies are
within the shard bounds. """
idx, deps, outs, txdata = Tx
if within_ID(idx, b0, b1):
return True
if any(within_ID(d, b0, b1) for d in deps):
return True
if any(within_ID(d, b0, b1) for d in outs):
return True
return False
def h(data):
""" Define the hash function used in the system. This is used to
derive transaction and object identifiers. """
return hexlify(sha256(data).digest()[:20])
def packageTx(data, deps, num_out):
""" Package some transaction data into an appropriate identifier,
and resulting new object identifiers. """
hx = sha256(data)
for d in sorted(deps):
hx.update(d)
actualID = hx.digest()
actualID = actualID[:-2] + pack("H", 0)
out = []
for i in range(num_out):
out.append(actualID[:-2] + pack("H", i+1))
return (hexlify(actualID), sorted(deps), map(hexlify,out), data)
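# packageTx derives a 32-byte identifier from the data and its sorted dependencies,
# then overwrites the last two bytes with a packed counter: 0 for the transaction
# identifier itself, and i+1 for its i-th newly created output object.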
class Node:
""" A class representing an authority participating in the consensus. """
def __init__(self, start = [], quorum=1, name = None, shard=None):
self.transactions = {}
self.quorum = quorum
self.name = name if name is not None else urandom(16)
self.pending_vote = defaultdict(set)
if shard is None:
self.shard = ["0"*64, "f"*64]
else:
self.shard = shard
self.pending_available = set(o for o in start if self._within_ID(o))
self.pending_used = set()
self.commit_yes = set()
self.commit_no = set()
# self.commit_available = set(start)
self.commit_used = set()
self.quiet = False
if __debug__:
self.start = set(o for o in start if self._within_ID(o))
self.cache = { }
def _within_ID(self, idx):
""" Tests whether an object identifer is within the
remit of this Node. """
return within_ID(idx, self.shard[0], self.shard[1])
def _within_TX(self, Tx):
""" Test whether the transaction and its dependencies are
within the remit of this Node. """
## Tests whether a transaction is related to this node in
## any way. If not there is no case for processing it.
return within_TX(Tx, self.shard[0], self.shard[1])
def gossip_towards(self, other_node):
""" A primitive way to probagate information. """
for k, v in self.pending_vote.iteritems():
other_node.pending_vote[k] |= v
# Should we process votes again here?
other_node.commit_yes |= self.commit_yes
other_node.commit_no |= self.commit_no
assert other_node.commit_yes & other_node.commit_no == set()
# other_node.commit_available |= self.commit_available
other_node.commit_used |= self.commit_used
def on_vote(self, full_tx, vote):
""" What the Node does when a transaction vote is cast. """
pass
def on_commit(self, full_tx, yesno):
""" What to do when a transaction commit is cast. """
pass
def process(self, Tx):
""" Process a transaction to vote or commit it. """
if not self._within_TX(Tx):
return
# Cache the transaction
self.transactions[Tx[0]] = Tx
# Process the transaction
logging.info("Process %s (%s)" % (Tx[0][:8], self.name))
x = True
while(x):
x = self._process(Tx)
def do_commit_yes(self, Tx):
""" What to do when commiting a transaction to the positive log. """
if __debug__:
self.cache[Tx[0]] = Tx
idx, deps, new_obj, txdata = Tx
self.commit_yes.add(idx)
self.pending_available |= set(o for o in new_obj if self._within_ID(o)) ## Add new transactions here
self.commit_used |= set(o for o in deps if self._within_ID(o))
def _check_invariant(self):
""" An internal debugging function to ensure all invariants hold. """
all_objects = set(self.start)
used_objects = set()
for txa in self.commit_yes:
assert txa in self.cache
idx, deps, new_obj, data = self.cache[txa]
all_objects |= set(o for o in new_obj if self._within_ID(o))
used_objects |= set(o for o in deps if self._within_ID(o))
assert all_objects == self.pending_available
assert used_objects == self.commit_used
for o in self.commit_used:
assert self._within_ID(o)
assert used_objects <= all_objects
potentially_used = { xd for xd, xtx in self.pending_used if xtx not in self.commit_no}
actually_available = self.pending_available - potentially_used
assert (all_objects - used_objects) - potentially_used == actually_available
return True
def _process(self, Tx):
if __debug__:
self.cache[Tx[0]] = Tx
self._check_invariant()
if not self._within_TX(Tx):
return False
idx, deps, new_obj, txdata = Tx
all_deps = set(deps)
deps = {d for d in deps if self._within_ID(d)}
new_obj = set(new_obj) # By construction no repeats & fresh names
if (idx in self.commit_yes or idx in self.commit_no):
# Do not process twice
logging.info("Do nothing for %s (%s)" % (idx[:6], self.name))
return False # No further progress can be made
else:
if deps & self.commit_used != set():
# Some dependencies are used already!
# So there is no way we will ever accept this
# and neither will anyone else
self.commit_no.add(idx)
self.on_commit( Tx, False )
logging.info("Commit no for %s (%s)" % (idx[:6], self.name))
return False # there is no further work on this.
# If we cannot exclude it out of hand then we kick in
# the consensus protocol by considering it a candidate.
xdeps = tuple(sorted(list(deps)))
if not ( (self.name, xdeps, True) in self.pending_vote[idx] or (self.name, xdeps, False) in self.pending_vote[idx]):
# We have not considered this as a pending candidate before
# So now we have to vote on it.
if deps.issubset(self.pending_available):
# We have enough information on the transactions this
# depends on, so we can vote.
# Make a list of used transactions:
used = { xd for xd, xtx in self.pending_used if xtx not in self.commit_no}
# and xd not in self.commit_used }
## CHECK CORRECTNESS: Do we update on things that are eventually used?
if set(deps) & used == set() and set(deps) & self.commit_used == set():
# We cast a 'yes' vote -- since it seems that there
# are no conflicts for this transaction in our pending list.
self.pending_vote[idx].add( (self.name, xdeps, True) )
self.pending_used |= set((d, idx) for d in deps)
self.on_vote( Tx, (self.name, xdeps, True) )
# TODO: add new transactions to available here
# Hm, actually we should not until it is confirmed.
# self.pending_available |= new_obj ## Add new transactions here
logging.info("Pending yes for %s (%s)" % (idx[:6], self.name))
return True
else:
# We cast a 'no' vote since there is a conflict in our
# history of transactions.
self.pending_vote[idx].add( (self.name, xdeps, False) )
self.on_vote( Tx, (self.name, xdeps, False) )
logging.info("Pending no for %s (%s)" % (idx[:6], self.name))
return True
else:
logging.info("Unknown prerequisites for %s (%s)" % (idx[:6], self.name))
# We continue in case voting helps move things. This
# happens in case others know about this transaction.
if self.shard[0] <= idx < self.shard[1] or deps != set():
# Only process the final votes if we are in charde of this
# shard for the transaction or any dependencies.
Votes = Counter()
for oname, odeps, ovote in self.pending_vote[idx]:
for d in odeps:
Votes.update( [(d, ovote)] )
yes_vote = all( Votes[(d, True)] >= self.quorum for d in all_deps )
no_vote = any( Votes[(d, False)] >= self.quorum for d in all_deps )
## Time to count votes for this transaction
if yes_vote: # Counter(x for _,x in self.pending_vote[idx])[True] >= self.quorum:
# We have a Quorum for including the transaction. So we update
# all the committed state monotonically.
self.do_commit_yes(Tx)
self.on_commit( Tx, True )
## CHECK CORRECT: Should I add the used transactions to self.pending_used?
logging.info("Commit yes for %s (%s)" % (idx[:6], self.name))
return False
if no_vote: #Counter(x for _,x in self.pending_vote[idx])[False] >= self.quorum:
# So sad: there is a quorum for rejecting this transaction
# so we will now add it to the 'no' bucket.
# Optional TODO: invalidate in the pending lists
self.commit_no.add(idx)
self.on_commit( Tx, False )
logging.info("Commit no for %s (%s)" % (idx[:6], self.name))
return False
return False # No further work
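# --- Hedged end-to-end sketch (illustrative names and data only): a single node on
# the default shard, with a quorum of one, votes on and commits a single transaction.
if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)

    genesis = h("genesis-object")
    node = Node(start=[genesis], quorum=1, name="node0")

    # One transaction consuming the genesis object and creating two new objects.
    tx = packageTx("some transaction data", [genesis], 2)
    node.process(tx)

    idx, _deps, outs, _data = tx
    assert idx in node.commit_yes  # a single 'yes' vote meets the quorum of 1
    assert all(o in node.pending_available for o in outs)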
|
Details The City of Mannheim currently employs roughly 8,000 people, both in its departments and in its owner-operated municipal enterprises. In 2010, the city designed and first published a staff magazine featuring ambitious editorial content as well as an up-to-date editorial design. „magma“ is published every two months and distributed to all 8,000 employees to support inter-departmental communication. From the first issue on, we have been responsible for the editorial design, layout and production of „magma“. |
"""
Runs a code review tool pylint for each script in the 'streams' package.
Author: O.Z.
"""
# imports
import os
import sys
from pathlib import Path
from subprocess import Popen
from subprocess import PIPE
from oztoolz.ioutils import select_all_scripts
from oztoolz.ioutils import safe_write_log as write_log
from oztoolz.ioutils import get_current_package_path as find_package_sources
# utility methods
def run_pylint(script_name, package_path):
"""Executes a pylint code analyzer for the given script in a separate
subprocess and writes the stdout of the subprocess into the .txt file.
The created .txt file is named as the script name + '_report' suffix.
Args:
script_name: the name of the script to check.
package_path: the path string to the package, which scripts are
analyzed.
"""
script_path = os.path.join(package_path, script_name)
with Popen(['pylint.exe', script_path], stdout=PIPE) as proc:
script_file_name = str(Path(script_path).relative_to(package_path))
write_log(script_file_name[:-3] + '_report.txt',
os.path.abspath(os.path.join(package_path, 'reports')),
proc.stdout.read(),
sys.stdout)
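# Example (hypothetical paths): run_pylint('readers.py', 'C:/src/oztoolz/streams')
# runs "pylint.exe C:/src/oztoolz/streams/readers.py" and stores the analyzer's
# stdout in "C:/src/oztoolz/streams/reports/readers_report.txt".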
def review_code():
"""Runs an automatic code review tool 'pylint' for each script of the
'streams' package.
Returns:
list of strings, each entry is a name of the checked script.
"""
package_path = find_package_sources(sys.stdout)
sys.stdout.write("# 'streams' package was found in [" +
package_path + "];\n")
scripts = select_all_scripts(package_path, sys.stdout)
for script_name in scripts:
run_pylint(script_name, package_path)
return scripts
# the main method
def main():
"""Runs code review for all scripts of the 'streams' package and logs out
which scripts were checked.
This method is executed when the whole package is executed.
"""
sys.stdout.write("\nPyLint code review of the 'streams' package:\n")
reviewed_scripts = review_code()
sys.stdout.write("\treviewed scripts:\n")
for script_name in reviewed_scripts:
sys.stdout.write("\t\t" + script_name + "\n")
sys.stdout.write("\ttotal number of the reviewed scripts: " +
str(len(reviewed_scripts)) + ";\n")
sys.stdout.write("OK.\n")
if __name__ == '__main__':
main()
|
The Galvin Green Banks WINDSTOPPER Half Zip Jacket is a great way to look good and stay warm this autumn and winter.
Galvin Green have designed the Banks WINDSTOPPER Jacket from a soft lightweight shell fabric, that is extremely comfortable and quiet to wear and swing a club in. The half-zip design allows for the jacket to be taken on or off in seconds so you will not get caught out by the weather on the golf course.
The sporty three-colour Banks jacket features WINDSTOPPER technology. WINDSTOPPER technology makes the Banks Jacket totally windproof and hugely effective at keeping you warm when the windchill is low on the golf course. Also, any excess heat generated by the WINDSTOPPER technology will escape leaving you dry and comfortable for the entire time you are wearing the jacket without allowing the cold to penetrate through the jacket.
So, in our opinion the Galvin Green Banks WINDSTOPPER Jacket will keep you warm, feel unrestrictive to swing a club in, has great styling and is real easy to look after. For us this is a must for any serious autumn / winter golfer.
Totally windproof. Effectively protects against windchill.
Maximum breathability that allows moisture vapour to evaporate.
Thermoregulatory function keeps the body at an optimum performance temperature.
Perfect fit for maximum comfort and freedom of movement, specially developed for golfers.
The WINDSTOPPER® membrane is an ultra thin, totally windproof protective layer which is laminated to a lightweight textile layer. The membrane is made of the versatile polymer PTFE (polytetrafluorethylene) which is expanded to create a microporous structure. These micropores are 900 times larger than water vapour molecules, allowing perspiration to pass through unhindered. |