import collections

from utlz import flo
from utlz import StructContext

_SctListEntry = collections.namedtuple(
    typename='SctListEntry',
    field_names=[
        'sct_len',
        'sct_der',
    ]
)

_TlsExtension18 = collections.namedtuple(
    typename='TlsExtension18',
    field_names=[
        'tls_extension_type',
        'tls_extension_len',
        'signed_certificate_timestamp_list_len',
        'sct_list',
    ]
)


def TlsExtension18(extension_18_tdf):
    with StructContext(extension_18_tdf) as struct:
        data_dict = {
            'tls_extension_type': struct.read('!H'),
            'tls_extension_len': struct.read('!H'),
            'signed_certificate_timestamp_list_len': struct.read('!H'),
        }
        sct_list = []
        while struct.offset < struct.length:
            sct_len = struct.read('!H')
            sct_der = struct.read(flo('!{sct_len}s'))
            sct_list.append(_SctListEntry(sct_len, sct_der))
        return _TlsExtension18(sct_list=sct_list, **data_dict)


_SignedCertificateTimestampList = collections.namedtuple(
    typename='SignedCertificateTimestampList',
    field_names=[
        'signed_certificate_timestamp_list_len',
        'sct_list',
    ]
)


def SignedCertificateTimestampList(sctlist):
    with StructContext(sctlist) as struct:
        data_dict = {
            'signed_certificate_timestamp_list_len': struct.read('!H'),
        }
        sct_list = []
        while struct.offset < struct.length:
            sct_len = struct.read('!H')
            sct_der = struct.read(flo('!{sct_len}s'))
            sct_list.append(_SctListEntry(sct_len, sct_der))
        return _SignedCertificateTimestampList(sct_list=sct_list, **data_dict)
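

# A minimal usage sketch (hedged): the payload bytes below are made up for
# illustration and are not a real SCT; this simply exercises
# SignedCertificateTimestampList() with a struct-packed one-entry list.
if __name__ == '__main__':
    import struct

    der = b'\x01\x02\x03'  # fake DER-encoded SCT body
    blob = struct.pack('!HH3s', 2 + len(der), len(der), der)
    parsed = SignedCertificateTimestampList(blob)
    assert parsed.signed_certificate_timestamp_list_len == 5
    assert parsed.sct_list[0].sct_der == der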
|
The Terms of Trade of Form Lab LTD (hereinafter called “The Company”) of Duke Studios, 3 Sheaf Street, Leeds, West Yorkshire, LS10 1HD, UK.
All orders, in whatever terms, are accepted subject to the following conditions and no additions or alterations shall apply unless specifically agreed in writing by a Director or the Secretary of the Company. Previous dealings between the Company and any customer shall not vary or replace these terms or be deemed in any circumstances to do so. The customer acknowledges that before entering into an agreement for the purchase of any goods from the Company he has expressly represented and warranted that he is not insolvent and has not committed any act of bankruptcy, or being a Company with limited or unlimited liability, knows of no circumstances which would entitle any debenture holder or secured creditor to appoint a receiver, to petition for winding up of the Company or apply for the appointment of an administrator or exercise any other rights over or against the Company’s assets. Each provision of these terms is to be construed as a separate limitation applying and surviving even if for any reason one or more of the said provisions is held inapplicable or unreasonable in any circumstances. In any interpretation of these conditions the word ‘goods’ shall where applicable include, but not by way of limitation, any apparatus, services rendered or work done.
1.1 Quotations are valid for 30 days, but the Company reserves the right to increase quoted prices at any time (whether before or after the date of the Company’s acceptance of an order) to cover: (i.) circumstances beyond the Company’s control including, but not by way of limitation, increases due to exchange rate fluctuations, rises in taxes and the cost of materials or transport. (ii.) Extra costs incurred as a result of the cancellation, alteration or rescheduling of orders due to the customer’s instructions or lack of instructions. Prices quoted do not include VAT unless explicitly stated. All accounts are payable 30 days from the date of the Invoice.
2.1 The Company reserves the right (without prejudice to any other remedy) to cancel any uncompleted order or to suspend delivery in the event of the customer’s commitments to the Company not being met.
3.1 The acceptance of a cancellation of an order by the customer shall be at the discretion of the Company.
4.1 If an order is cancelled in any of the circumstances set out above, then the customer shall indemnify the Company against all loss, costs, damages, charges and expenses arising out of the order and the cancellation thereof.
5.1 The Company will use its best endeavours to comply with its quoted delivery dates but the Company shall not be liable for any loss or damage (whether direct or consequential) whatsoever arising from late delivery of goods or materials and the customer shall not be entitled to treat the contract as repudiated by reason of any such late delivery.
6.1 The acceptance by the Company of any order for goods shall constitute an agreement to sell the goods and not a sale of them and no title to the said goods shall pass to the customer by reason of delivery or acceptance of the same.
7.1 The Company shall remain the sole and absolute owner of the goods until such time as the agreed price of the goods shall have been paid in full to the Company by the customer. Until such time the customer shall be the bailee of the goods for the company and shall store them upon his premises separately from his own goods or those of any other person and in a manner which makes them readily identifiable as the goods of the Company.
8.1 Goods the subject of any agreement by the company to sell shall be at the risk of the customer as soon as they are delivered by the company to his vehicles, the vehicles of his carriers or his premises or otherwise to his order.
9.1 The customer’s right to the possession of the goods shall cease if he commits any available act of bankruptcy or (being a company) shall go into liquidation (save for the purpose of amalgamation or reconstruction of a solvent company) or shall have a receiver appointed of its undertaking or if the customer shall enter into any arrangement or composition for the benefit of his creditors or shall suffer any distress or execution to be levied on his goods or (being a company) shall do anything which would entitle any person to present a petition for winding up or to apply for an administration order. The customer agrees that the Company may for the purpose of recovery of its goods enter the premises of the customer and repossess such goods.
10.1 The customer shall be at liberty to incorporate the Company’s goods into another product or chattel subject to the condition that if the goods the property of the Company are admixed or united in any way with those of the customer, the product thereof shall become and/or shall be deemed to be for all purposes the property of the Company. If the goods the property of the Company are admixed or united in any way with the property of any person or persons other than the customer or are processed with or incorporated therein, the product thereof shall become and shall be deemed for all purposes to be owned in common with that other person or persons.
11.1 On sale to a sub-purchaser of any products, goods or chattels to which the Company’s goods have been attached or been incorporated, the proceeds thereof shall be held in trust for the Company, shall not be mingled with other monies and shall not be paid into any overdrawn bank account but shall be paid into a fiduciary account for the Company with the customer’s bankers and not until payment to the Company of the agreed price shall the customer be entitled to transfer any other monies to any other account.
12.1 The customer shall inspect the goods immediately upon delivery thereof and shall within fourteen days from such delivery give notice in writing to the Company of any damage or loss or shortage of goods, or of any matter or thing by reason whereof the customer may allege that the goods are not in accordance with the contract or are defective in material or workmanship. If the customer shall fail to give such notice the goods shall be conclusively presumed to be in all respects in accordance with the contract and free from any defect which would be apparent upon reasonable examination of the goods and the customer shall be deemed to have accepted the goods accordingly. In the event that the customer establishes to the Company’s reasonable satisfaction that the goods are not in accordance with the contract or are so defective, the customer’s sole remedy in respect of such non-accordance or defects shall be limited as the Company may elect to the replacement of the faulty part or refund of the purchase price against the return of the goods.
13.1 Defects after delivery. The Company will make good, by repair, by re-working or, at the Company’s option, by the supply of a replacement, defects which under proper use appear in such part or parts of the goods as are of the Company’s manufacture within a period of twelve months after the goods have been delivered and arise solely from faulty materials or workmanship.
Provided always that: (i.) Any such goods requiring inspection for repair or replacement are delivered promptly by the customer, carriage paid, to the Company. (ii.) The goods are properly maintained and operated in accordance with any instructions supplied to the customer by the Company. (iii.) Any repairs to the goods which may become necessary are carried out by the Company or its agents or otherwise as the Company may at its discretion agree in writing. (iv.) Prompt notification of the discovery of any defect in the goods is given to the Company and, if aggravated damage may result from continued operation, the goods are not used again until repairs have been effected.
14.1 The Company will use all reasonable endeavours to procure for the customer the benefit of such warranties and other rights as are conferred upon the Company in relation to defects in such part or parts of the goods as are not of the Company’s manufacture by terms of the Company’s agreement with the suppliers of the goods.
15.1 These terms set out the Company’s entire liability in respect of the goods, and the Company’s liability under these terms shall be in lieu and to the exclusion of all other warranties, conditions, terms and liabilities express, implied or statutory or otherwise in respect of the quality or the fitness for any particular purpose of the goods or otherwise howsoever (notwithstanding any advice or representation to the customer, all liability in respect of which howsoever arising is expressly excluded) except any implied by law or statute and which by law or statute cannot be excluded. Save as provided in these terms and except as aforesaid the Company shall not be under liability, whether in contract, tort or otherwise in respect of defects in the goods or failure to correspond with the specification or sample or for any injury, damage or loss resulting from such defects or from any work done in connection therewith.
16.1 Limitation of liability: The Company’s liability (if any) whether in contract, tort or otherwise in respect of any defect in the goods, or for any breach of the agreement or of any duty owed by the customer in connection therewith shall be further limited in the aggregate to the price of the goods in question.
17.1 Patent Rights: The sale of any goods and the publication of any information or technical data relating to such goods does not imply freedom from patent or other protective rights in respect of any particular application of the goods.
18.1 Force Majeure: The Company shall have no liability in respect of failure to deliver or perform or delay in delivering or performing any obligations under a contract due to any cause outside the reasonable control of the Company including but not limited to civil commotion, strikes, lock-outs, war, fire, accidents, epidemics, governmental regulations or requirements, unavailability of materials or failure of original manufacturer or supplier, carrier or sub-contractor to deliver the goods and if the delay or failure has continued for a period of 3 months then either party may give notice in writing to the other determining the contract and on such determination the Company shall refund to the customer any payment which the customer has already made on account of the price of the goods after deduction of any payment due to the Company.
19.1 Law: Any contract between the Company and the customer shall be governed by English law. Any dispute arising out of or in connection with these terms shall be determined by the English Courts.
20.1 Health and Safety at Work Act 1974. For the purposes of section 6 of the Health and Safety at Work Act 1974 the customer hereby undertakes that the goods supplied by the Company will be used as specified and for laid down uses in accordance with the appropriate Health and Safety information supplied by the Company and, in particular, ensure that this information will be brought to the attention of all users of the goods. All such information provided by the Company is based on results gained from experience and tests by the manufacturers and is believed to be accurate and adequate for the uses laid down but no liability can be accepted for uses outside those laid down.
|
"""add column report_consent_removal_date to genomic_set_member
Revision ID: 2e1d3f329efd
Revises: 1ea7864c251e
Create Date: 2020-09-25 15:51:21.977008
"""
from alembic import op
import sqlalchemy as sa
import rdr_service.model.utils
from rdr_service.participant_enums import PhysicalMeasurementsStatus, QuestionnaireStatus, OrderStatus
from rdr_service.participant_enums import WithdrawalStatus, WithdrawalReason, SuspensionStatus, QuestionnaireDefinitionStatus
from rdr_service.participant_enums import EnrollmentStatus, Race, SampleStatus, OrganizationType, BiobankOrderStatus
from rdr_service.participant_enums import OrderShipmentTrackingStatus, OrderShipmentStatus
from rdr_service.participant_enums import MetricSetType, MetricsKey, GenderIdentity
from rdr_service.model.base import add_table_history_table, drop_table_history_table
from rdr_service.model.code import CodeType
from rdr_service.model.site_enums import SiteStatus, EnrollingStatus, DigitalSchedulingStatus, ObsoleteStatus
# revision identifiers, used by Alembic.
revision = '2e1d3f329efd'
down_revision = '1ea7864c251e'
branch_labels = None
depends_on = None
def upgrade(engine_name):
globals()["upgrade_%s" % engine_name]()
def downgrade(engine_name):
globals()["downgrade_%s" % engine_name]()
def upgrade_rdr():
# ### commands auto generated by Alembic - please adjust! ###
op.add_column('genomic_set_member', sa.Column('report_consent_removal_date', sa.DateTime(), nullable=True))
op.add_column('genomic_set_member_history',
sa.Column('report_consent_removal_date', sa.DateTime(), nullable=True))
# ### end Alembic commands ###
def downgrade_rdr():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_column('genomic_set_member', 'report_consent_removal_date')
op.drop_column('genomic_set_member_history', 'report_consent_removal_date')
# ### end Alembic commands ###
def upgrade_metrics():
# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###
def downgrade_metrics():
# ### commands auto generated by Alembic - please adjust! ###
pass
# ### end Alembic commands ###
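

# For illustration only: Alembic's multi-database env.py ends up invoking the
# dispatchers above once per configured engine, and the globals() lookup
# routes each call to the matching per-engine function. A hedged sketch of
# what that amounts to (not part of the migration itself):
#
#     upgrade('rdr')       # runs upgrade_rdr(), adding the two columns
#     upgrade('metrics')   # runs upgrade_metrics(), a no-op here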
|
So much for the showdown.
COS’ Lady Eagles claimed their sixth straight Golden Valley Conference title by 10-running Feather River in both games of Tuesday’s doubleheader in Weed 13-0 and 10-0.
COS’ offense exploded in the first inning of the opener, batting around and plating five runs. Faith Evenson (five RBIs in Game 1) and Meredith Barnes went deep to put the home team in control.
It was 10-0 by the third, which raised the question: how did COS lose twice to the Golden Eagles in Quincy?
McKaylee Pittman also homered and drove in two.
It just got scarier for Feather River in Game 2. Screaming Eagle Dacia Hale did the honors this time with a grand slam in the first after Evenson, Pittman and Meredith Barnes reached base.
“It was an outside pitch. I have been working on that in practice with my coach. I saw the results in practice and knew that is the pitch for me now,” Hale said.
The second inning was more of the same. Amanda Krueger put down a nice bunt that loaded the bases after a walk and a Kathryn Persaud single.
Evenson followed with a long two-run double for 6-0. Pittman then drove a two-run double off the left field fence for 8-0.
Barnes struck out but still reached base on a wild pitch. Hale then lined a two-run single that made it 10-0. That’s how well things were going for the Eagles.
A 34-5 record and another conference championship were in the bag long before the conclusion of Game 2.
Eagles Anna Miller and Vanessa Bodily got the job done in the pitching department, not allowing a run to cross home plate.
There’s a reason COS has slugged 54 homers in 39 games.
“Lifting weights, it pays off. They get their work in whether I am here or not,” Eastman said.
Pittman and Evenson are hitting over .500 with plenty of homers.
Eastman is hoping for a No. 2 seed and home-field advantage straight through the super regionals before the state tournament.
“We want to fill this place for the playoffs,” Eastman said, pointing to the area behind the outfield fences.
|
import asyncio

from asynctest import TestCase, CoroutineMock

from atlassian_jwt_auth.contrib.aiohttp import (
    JWTAuthVerifier, HTTPSPublicKeyRetriever)
from atlassian_jwt_auth.tests import utils, test_verifier


class SyncJWTAuthVerifier(JWTAuthVerifier):
    def __init__(self, *args, loop=None, **kwargs):
        if loop is None:
            loop = asyncio.get_event_loop()
        self.loop = loop
        super().__init__(*args, **kwargs)

    def verify_jwt(self, *args, **kwargs):
        return self.loop.run_until_complete(
            super().verify_jwt(*args, **kwargs)
        )


class JWTAuthVerifierTestMixin(test_verifier.BaseJWTAuthVerifierTest):
    loop = None

    def _setup_mock_public_key_retriever(self, pub_key_pem):
        m_public_key_ret = CoroutineMock(spec=HTTPSPublicKeyRetriever)
        m_public_key_ret.retrieve.return_value = pub_key_pem.decode()
        return m_public_key_ret

    def _setup_jwt_auth_verifier(self, pub_key_pem, **kwargs):
        m_public_key_ret = self._setup_mock_public_key_retriever(pub_key_pem)
        return SyncJWTAuthVerifier(m_public_key_ret, loop=self.loop, **kwargs)


class JWTAuthVerifierRS256Test(
        utils.RS256KeyTestMixin, JWTAuthVerifierTestMixin, TestCase):
    """Tests for aiohttp.JWTAuthVerifier class for RS256 algorithm"""


class JWTAuthVerifierES256Test(
        utils.ES256KeyTestMixin, JWTAuthVerifierTestMixin, TestCase):
    """Tests for aiohttp.JWTAuthVerifier class for ES256 algorithm"""
|
1952 : Ray Charles signs to Atlantic after leaving Swingtime Records; the label will take him in a harder R&B direction than the crooner-style pop and West Coast Blues he had been recording.
1956 : Jerry Lee Lewis, then all of nineteen years old, travels to Memphis to audition for Sam Phillips at Sun Records. However, Phillips is vacationing in Florida, so Jerry Lee records a few songs for him to hear when he returns.
1957 : A young Jimi Hendrix catches Elvis Presley's performance at Seattle's Sicks Stadium.
1967 : A young guitarist named Boz Scaggs joins The Steve Miller Band, the blues band led by his childhood friend, Steve Miller.
1967 : The Beatles meet up at Paul McCartney's house in London to decide what to do following the death of their manager, Brian Epstein. They decide to be their own managers, and McCartney takes the lead on most business decisions. With hefty responsibilities outside of music, things get tense and the group breaks up two years later.
1972 : The O'Jays' "Back Stabbers" is certified gold.
1976 : West Coast musical impresario Lou Adler and his right-hand man, Neil Silver, are kidnapped in Los Angeles by a couple who ransom them for $25,000. The couple are caught within the week, but an accomplice flees and is never caught.
1980 : Fleetwood Mac ends a 9-month tour at the Hollywood Bowl. Lindsey Buckingham announces that it will be the last Fleetwood Mac show for "a long time". He's right: the band doesn't play live again for over two years.
2000 : The last remaining original member of The Platters, Herb Reed, is awarded a court injunction against a group using the same name but containing no actual original members.
2010 : T.I. and his wife Tamika "Tiny" Cottle are arrested on drug charges in Los Angeles. During a routine traffic stop, police smelled marijuana coming from the rapper's car; the vehicle was searched and the couple was taken to jail. Both T.I. and Tiny were released after posting $10,000 bail.
2011 : While boarding a Southwest flight from Oakland to Burbank, Green Day's Billie Joe Armstrong is asked by a flight attendant to pull up his sagging pants. Armstrong doesn't take kindly to the request, and eventually he and his traveling companion are booted from the flight. Armstrong quickly responds on Twitter, writing, "Just got kicked off a southwest flight because my pants sagged too low!" The tweet is quickly retweeted by his followers, forcing Southwest to release a statement apologizing for the incident.
2012 : Geoff Tate, late of the band Queensryche, announces plans to get a "new Queensryche" together. Pledging their support are Rudy Sarzo (formerly of Quiet Riot), Bobby Blotzer (formerly Ratt), Glen Drover (formerly of Megadeth), Kelly Gray and Randy Gane (formerly of Myth). There's still some question as to whether they'll actually be able to call the band Queensryche, since Tate is still wrapped up in lawsuits with the other three members over his firing in the same year and the use of the name.
|
# coding: UTF-8
import unittest

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy, Model
from flask.ext.cache import Cache

from flask.ext.sqlalchemy_cache import CachingQuery, FromCache

Model.query_class = CachingQuery

db = SQLAlchemy()
cache = Cache()


class Country(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(100), nullable=False)

    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name


def create_app():
    app = Flask(__name__)
    app.config['CACHE_TYPE'] = 'simple'
    db.init_app(app)
    cache.init_app(app)
    return app


class TestFromCache(unittest.TestCase):

    def setUp(self):
        self.app = create_app()
        self.ctx = self.app.app_context()
        self.ctx.push()
        db.create_all()
        db.session.add(Country(name='Brazil'))
        db.session.commit()

    def tearDown(self):
        db.session.remove()
        db.drop_all()
        self.ctx.pop()

    def test_cache_hit(self):
        q = Country.query.order_by(Country.name.desc())
        caching_q = q.options(FromCache(cache))
        # cache miss
        country = caching_q.first()
        self.assertEqual('Brazil', country.name)
        # add another record
        c = Country(name='Germany')
        db.session.add(c)
        db.session.commit()
        # no cache used
        self.assertEqual('Germany', q.first().name)
        # cache hit
        self.assertEqual('Brazil', caching_q.first().name)

    def test_no_results(self):
        # regression test (check #3) to handle zero results gracefully
        Country.query.filter_by(name="URSS").options(FromCache(cache)).all()

    def test_special_chars(self):
        unicode_name = u"Côte d'Ivoire"
        unicode_country = Country(unicode_name)
        db.session.add(unicode_country)
        db.session.commit()
        Country.query.filter_by(name=unicode_name).options(FromCache(cache)).all()
Industry firms transport freight by road. This includes bulk, stock and heavy haulage; refrigerated and liquid haulage; and the transport of cars and waste materials. Packing and the operation of freight terminals are not covered by the industry. Courier activities are also not included in the industry.
|
from argparse import ArgumentParser, FileType
import sys, os

import numpy as np

from giapy.earth_tools.elasticlove import compute_love_numbers, hLK_asymptotic
from giapy.earth_tools.viscellove import compute_viscel_numbers
from giapy.earth_tools.earthParams import EarthParams


def ellove():
    """usage: giapy-ellove [-h] [--lstart LSTART] [--params PARAMS]
                           [--nlayers NLAYERS]
                           lmax [outfile]

    Compute the elastic surface load Love numbers.

    positional arguments:
      lmax               maximum order number to compute
      outfile            file to save out

    optional arguments:
      -h, --help         show this help message and exit
      --lstart LSTART    starting order number (default: 1). Cannot be less
                         than 1.
      --params PARAMS    material parameter table
      --nlayers NLAYERS  number of layers (default: 100)
      --incomp           flag for incompressibility (default: False)
      --conv [CONV]      perform convergence check for asymptotic Love
                         number at supplied (very large) l (if flag present,
                         defaults to l=50000).
    """
    # Read the command line arguments.
    parser = ArgumentParser(description='Compute the elastic surface load Love numbers')
    parser.add_argument('-l', '--lstart', type=int, default=1,
                        help='starting order number (default: %(default)s). Cannot be less than 1.')
    parser.add_argument('lmax', type=int,
                        help='maximum order number to compute')
    parser.add_argument('--params', default=None,
                        help="""material parameter table with columns: r (km)
                        density (kg/m^3) bulk mod (GPa) shear mod (GPa)
                        g (m/s^2) (default: PREM)""")
    parser.add_argument('-n', '--nlayers', type=int, default=100,
                        help='number of layers (default: %(default)s)')
    parser.add_argument('outfile', nargs='?', type=FileType('w'),
                        default=sys.stdout,
                        help='file to save out')
    # type=int so a value supplied on the command line is parsed as an
    # integer rather than left as a string.
    parser.add_argument('--conv', nargs='?', type=int, const=50000, default=False,
                        help='''perform convergence check for asymptotic Love
                        number at supplied (very large) l (if present,
                        defaults to l=50000)''')
    parser.add_argument('--incomp', default=False, action='store_const',
                        const=True, help='impose incompressibility')
    args = parser.parse_args()

    # Set up the order number range.
    assert args.lstart >= 1, 'lstart must be 1 or greater.'
    ls = range(args.lstart, args.lmax+1)

    # Load PREM if no paramname given.
    if args.params is None:
        paramname = 'prem'
    else:
        paramname = os.path.abspath(args.params)
    params = EarthParams(model=paramname)

    # If convergence check requested, append to ls.
    if args.conv:
        ls = np.r_[ls, args.conv]

    zarray = np.linspace(params.rCore, 1., args.nlayers)

    # Compute the love numbers.
    hLks = compute_love_numbers(ls, zarray, params, err=1e-14, Q=2,
                                it_counts=False, comp=not args.incomp,
                                scaled=True)
    if args.conv:
        hLk_conv = hLks[:, -1]
        hLk_conv[-1] = args.conv*(1+hLk_conv[-1])
        hLks = hLks[:, :-1]

    # Write them out.
    fmt = '{0:'+'{0:.0f}'.format(1+np.floor(np.log10(args.lmax)))+'d}\t{1}\t{2}\t{3}\n'
    # Write out header.
    args.outfile.write("n\th'\tl'\tk'\n")
    for l, hLk in zip(ls, hLks.T):
        args.outfile.write(fmt.format(l, hLk[0], hLk[1]/l, -(1+hLk[2])))

    if args.conv:
        hLk_inf = np.array(hLK_asymptotic(params))
        errs = np.abs(hLk_conv - hLk_inf)/np.abs(hLk_inf)
        sys.stdout.write('''Difference of computed love numbers at {} from
                         analytic value (if too large, consider increasing
                         layers with '--nlayers'):\n'''.format(args.conv))
        for tag, err in zip('hLK', errs):
            sys.stdout.write('\t{} : {:.2f}%\n'.format(tag, err*100))
def velove():
    """usage: giapy-velove [-h] [--lstart LSTART] [--params PARAMS]
                           [--nlayers NLAYERS]
                           lmax [outfile]

    Compute the viscoelastic surface load Love numbers.

    positional arguments:
      lmax                   maximum order number to compute
      outfile                file to save out (default: stdout)

    optional arguments:
      -h, --help             show this help message and exit
      --lstart LSTART        starting order number (default: 1). Cannot be
                             less than 1.
      --params PARAMS        material parameter table
      -n, --nlayers NLAYERS  number of layers (default: 1000)
      --incomp               flag for incompressibility (default: False)
      -D, --lith LITH        flexural rigidity of lith (1e23 N m), overrides
                             params
    """
    # Read the command line arguments.
    parser = ArgumentParser(description='Compute the viscoelastic surface load Love numbers')
    parser.add_argument('-l', '--lstart', type=int, default=1,
                        help='starting order number (default: %(default)s). Cannot be less than 1.')
    parser.add_argument('lmax', type=int,
                        help='maximum order number to compute')
    parser.add_argument('--params', default=None,
                        help="""material parameter table with columns: r (km)
                        density (kg/m^3) bulk mod (GPa) shear mod (GPa)
                        g (m/s^2) viscosity (1e21 Pa s) (default: PREM)""")
    parser.add_argument('-n', '--nlayers', type=int, default=1000,
                        help='number of layers (default: %(default)s)')
    parser.add_argument('--incomp', default=False, action='store_const',
                        const=True, help='impose incompressibility')
    parser.add_argument('--lith', '-D', type=float, default=-1, dest='lith',
                        help='''The flexural rigidity of the lithosphere, in units
                        of 1e23 N m (overrides parameter table, if set)''')
    parser.add_argument('outfile', nargs='?', type=FileType('w'),
                        default=sys.stdout,
                        help='file to save out')
    args = parser.parse_args()

    # Set up the order number range.
    assert args.lstart >= 1, 'lstart must be 1 or greater.'
    ls = range(args.lstart, args.lmax+1)

    # Load PREM if no paramname given.
    if args.params is None:
        paramname = 'prem'
    else:
        paramname = os.path.abspath(args.params)

    # Load the parameters with no crust for viscoelastic response.
    params = EarthParams(model=paramname+'_nocrust')
    # Apply the lithospheric override when a non-negative rigidity was
    # supplied (the default of -1 means "use the parameter table").
    if args.lith >= 0:
        params.addLithosphere(D=args.lith)

    zarray = np.linspace(params.rCore, 1., args.nlayers)
    times = np.logspace(-4, np.log10(250), 30)

    # Compute the viscoelastic Love numbers.
    hLkf = compute_viscel_numbers(ls, times, zarray, params,
                                  comp=not args.incomp, scaled=True)
    if len(ls) == 1:
        hLkf = hLkf[None, ...]

    # Load the parameters with crust for elastic response.
    params_crust = EarthParams(model=paramname)
    # Compute the elastic response for lithosphere correction.
    hLke = compute_love_numbers(ls, zarray, params_crust, err=1e-14, Q=2,
                                it_counts=False, comp=not args.incomp,
                                scaled=True).T

    # Incorporate the lithosphere correction.
    a = (1. - 1./params.getLithFilter(n=np.asarray(ls)))
    hLkf += a[:, None, None]*hLke[:, :, None]
    # Add t=0 elastic response.
    hLkf = np.dstack([hLke[:, :, None], hLkf])
    # Convert to k' (incorporates self-gravitation of load).
    hLkf[:, 2, :] = -1 - hLkf[:, 2, :]

    # Write them out.
    fmt = '{0}\t{1}\t{2}\t{3}\n'
    # Write out header.
    args.outfile.write("# Viscoelastic Love numbers computed in giapy. Formatted:\n")
    args.outfile.write("# l\n")
    args.outfile.write("# t\th'\tl'\tk'\n")
    for l, hLkl in zip(ls, hLkf):
        args.outfile.write('# l={}\n'.format(l))
        for t, hLk in zip(np.r_[0, times], hLkl.T):
            args.outfile.write(fmt.format(t, hLk[0], hLk[1]/l, -(1+hLk[2])))
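

# Hedged usage sketch: the docstrings above show these functions installed as
# the giapy-ellove / giapy-velove console scripts, but since both parse their
# arguments from the command line they can also be driven programmatically by
# populating sys.argv first:
#
#     # equivalent to: giapy-ellove --nlayers 200 500 love_numbers.txt
#     sys.argv = ['giapy-ellove', '--nlayers', '200', '500', 'love_numbers.txt']
#     ellove()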
|
What is the price of the new-tech HPC hydraulic cone crusher for sale at Bauma Munich 2019?
The Bauma exhibition was held in Munich, Germany from 8 to 14 April 2019, with outstanding stone crushers on display, including jaw crushers, cone crushers, impact crushers, roller crushers, VSI sand crushers and mobile crushing plants. Among the cone crushers were the PY series spring cone crusher, the CZS Symons cone crusher and the HPC hydraulic cone crusher. What are the technical advantages of the new HPC cone crusher, and what is its price at Bauma Munich 2019? Great Wall Company analyzes and summarizes the cone crusher for you.
Firstly, the HPC hydraulic cone crusher is designed with hydraulic protection and a hydraulic cavity-cleaning system. When tramp iron or other metal is fed into the crusher, the system lifts the cover and discharges the material automatically, greatly reducing maintenance costs and downtime. Secondly, the discharge gap of the Great Wall HPC hydraulic cone crusher can be adjusted by a hydraulic device, and the crusher is lubricated with diluted oil; its unique multipoint hydraulic lubrication ensures stable and reliable working performance.
Thirdly, the HPC hydraulic cone crusher upgrades the traditional cone crusher structure: it adopts a fixed main shaft and a small spherical bearing, which improves crushing efficiency markedly. Last but not least, there are also great improvements in the crushing stroke, crushing speed and upgraded crushing cavity, which is why the HPC hydraulic cone crusher is so popular and highly favored in stone crushing plants. If you want details on HPC hydraulic cone crusher pricing or technical support, welcome to contact us online directly.
Previous: What are process of jaw crusher and cone crusher in 100tph stone crushing plant ?
|
"""
Contains Facebook-based authentication classes. These classes automatically
reads the tokens in the request and authenticate a Facebook user.
"""
from django.contrib.auth.models import User
from rest_framework.authentication import BaseAuthentication
from rest_framework.exceptions import NotAuthenticated
from hadiths import fbapi
from hadiths.models import FbUser
# The classes below are used in settings.py
class FacebookAuthentication(BaseAuthentication):
"""
Authenticate requests having Facebook authentication token.
"""
def authenticate(self, request):
"""
Try to authenticate the user depending on Facebook tokens. If they
are not available or invalid, the user returned is None.
:param request: The request being made by the user.
:return: The user or None.
"""
if request.method == 'GET' and request.path != '/apis/users/current':
# We don't authenticate GET requests since our data are open to
# everyone. An exception to that is when we need to get the
# current user.
return None
if 'fb_token' not in request.query_params:
return None
fb_token = request.query_params['fb_token']
fb_user_info = fbapi.get_current_user(fb_token)
if fb_user_info is None:
raise NotAuthenticated('Invalid Facebook access token.')
fb_id = fb_user_info['id']
try:
fb_user = FbUser.objects.get(fb_id=fb_id)
except FbUser.DoesNotExist:
return None
return fb_user.user, None
class FacebookOfflineAuthentication(BaseAuthentication):
"""
Like FacebookAuthentication, but can be used when the developer doesn't have
an internet connection. Obviously, it is fixed to return a certain user.
"""
def authenticate(self, request):
"""
Try to authenticate the user depending on Facebook tokens. If they
are not available or invalid, the user returned is None.
:param request: The request being made by the user.
:return: The user or None.
"""
if request.method == 'GET' and request.path != '/apis/users/current':
# We don't authenticate GET requests since our data are open to
# everyone. An exception to that is when we need to get the
# current user.
return None
if 'fb_token' not in request.query_params:
return None
return User.objects.first(), None
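

# As noted above, these classes are wired up in settings.py. A hedged sketch
# of that configuration (the dotted path assumes this module is importable as
# hadiths.auth; adjust to the real location):
#
#     REST_FRAMEWORK = {
#         'DEFAULT_AUTHENTICATION_CLASSES': (
#             'hadiths.auth.FacebookAuthentication',
#         ),
#     }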
|
#!/usr/bin/env python
from __future__ import division

__author__ = "Jai Ram Rideout"
__copyright__ = "Copyright 2012, The QIIME project"
__credits__ = ["Jai Ram Rideout", "Greg Caporaso"]
__license__ = "GPL"
__version__ = "1.5.0-dev"
__maintainer__ = "Jai Ram Rideout"
__email__ = "[email protected]"
__status__ = "Development"

"""Contains functions used in the alpha_diversity_by_sample_type.py script."""

from collections import defaultdict
from operator import itemgetter
from os import makedirs
from os.path import join
from tempfile import NamedTemporaryFile

from numpy import median
from pylab import savefig, tight_layout

from biom.parse import parse_biom_table
from cogent import DNA, LoadSeqs
from cogent.app.blast import blast_seqs, Blastall
from cogent.app.formatdb import build_blast_db_from_fasta_path
from cogent.parse.blast import BlastResult
from cogent.parse.fasta import MinimalFastaParser
from cogent.util.misc import remove_files

from qiime.parse import parse_mapping_file_to_dict
from qiime.pycogent_backports.distribution_plots import generate_box_plots
from qiime.util import (parse_command_line_parameters, get_options_lookup,
                        make_option)


def alpha_diversity_by_sample_type(adiv_fs, mapping_f,
                                   mapping_category='Sample_Type',
                                   min_num_samples=11,
                                   category_values_to_exclude=None):
    """Will exclude 'NA' category value by default if this parameter is not
    provided."""
    if category_values_to_exclude is None:
        category_values_to_exclude = ['NA']

    mapping_dict, mapping_comments = parse_mapping_file_to_dict(mapping_f)

    sample_type_map = {}
    #sample_type_counts = defaultdict(int)
    for samp_id in mapping_dict:
        sample_type_map[samp_id] = mapping_dict[samp_id][mapping_category]
        #sample_type_counts[sample_type_map[samp_id]] += 1

    sample_type_to_adiv = defaultdict(list)
    for adiv_f in adiv_fs:
        adiv_data = [line.strip().split('\t')
                     for line in adiv_f if line.strip()][1:]
        for samp_id, adiv in adiv_data:
            try:
                sample_type = sample_type_map[samp_id]
            except KeyError:
                sample_type = 'Unknown'
            # TODO do we need to normalize this? how?
            #adiv = float(adiv) / sample_type_counts[sample_type]
            adiv = float(adiv)
            sample_type_to_adiv[sample_type].append(adiv)

    plotting_data = [(median(v), '%s (n=%d)' % (k, len(v)), v) for k, v in
                     sample_type_to_adiv.items()
                     if k != 'Unknown' and k not in
                     category_values_to_exclude and
                     len(v) >= min_num_samples]
    plotting_data.sort()

    plot_fig = generate_box_plots(
        [dist[2] for dist in plotting_data],
        x_tick_labels=[dist[1] for dist in plotting_data],
        x_label=mapping_category, y_label='Alpha Diversity',
        title='Alpha Diversity by %s' % mapping_category)
    plot_fig.set_size_inches(12, 12)
    try:
        plot_fig.tight_layout()
    except ValueError:
        print "tight_layout() failed. Try making the plot figure larger " + \
              "with Figure.set_size_inches(). The labels will be cut off " + \
              "otherwise."
    return plotting_data, plot_fig
|
"""
HTTP-related methods.
Includes:
RequestHandler() - a customized handler to serve out /tmp/pillage/
VeilHTTPServer() - a small webserver for Veil that can run HTTP or HTTPS
"""
import BaseHTTPServer, threading, ssl, os
from SimpleHTTPServer import SimpleHTTPRequestHandler
# Prepend /tmp/pillage/ to any served file path- not the best way
# to do this (i.e. nesting) but it's quick and easy and all we need
# to host out of the directory we want
class RequestHandler(SimpleHTTPRequestHandler):
def translate_path(self, path):
return "/tmp/pillage/" + path
class VeilHTTPServer(threading.Thread):
"""
Version of a simple HTTP[S] Server with specifiable port and
SSL cert. Defaults to HTTP is no cert is specified.
Uses RequestHandler to serve a custom directory.
"""
def __init__(self, port=80, cert=''):
threading.Thread.__init__(self)
# remove the temp directory, recreate it and build a blank index.html
cleanCmd = "rm -rf /tmp/pillage/ && mkdir /tmp/pillage/ && touch /tmp/pillage/index.html"
os.system(cleanCmd)
self.server = BaseHTTPServer.HTTPServer(('0.0.0.0', port), RequestHandler)
self.serverType = "HTTP"
# wrap it all up in SSL if a cert is specified
if cert != "":
self.serverType = "HTTPS"
self.server.socket = ssl.wrap_socket(self.server.socket, certfile=cert, server_side=True)
def run(self):
print "\n [*] Setting up "+self.serverType+" server..."
try: self.server.serve_forever()
except: pass
def shutdown(self):
print "\n [*] Killing "+self.serverType+" server..."
# shut down the server/socket
self.server.shutdown()
self.server.socket.close()
self.server.server_close()
self._Thread__stop()
# make sure all the threads are killed
for thread in threading.enumerate():
if thread.isAlive():
try:
thread._Thread__stop()
except:
pass
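

# Hedged usage sketch (assumes port 8080 is free; note that construction
# wipes /tmp/pillage/, so payloads must be written there afterwards):
if __name__ == '__main__':
    server = VeilHTTPServer(port=8080)
    server.start()      # threading.Thread entry point; invokes run()
    raw_input(" [*] Serving /tmp/pillage/ - press enter to stop.\n")
    server.shutdown()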
|
Missed me? Hope you didn't get unnecessarily alarmed by the fact that I haven't blogged for 3 days in a row. But anyway, the most important thing is that I'm back for more blogging today!
This week is the start of the school holidays. It's supposed to be heaven on earth for me, since most of the teachers and students will not be around to harass me with problems with their laptops or computers, but just for this week I can't kick back into low gear and enjoy the scenery just yet. The reason? My uni's Maths exam that I'll be sitting for this coming Monday. So, while I'm totally free in school this week, I'll be kept busy by the tons of past-year Maths exam papers that I will be doing and redoing over and over again. Why? They expect me to get a B in order for me to be admitted into my course. Bloody elitists.
And as for the happenings back home, recently I've been forced to spend most of my precious weekend hours following my family around looking for furniture/lightings/utilities for the new house. Just yesterday we spent almost 2 hours at some shop choosing the TVs for our rooms and another 2 hours deciding on the lights for every room in the house. I never knew getting a new house included so much hassle. At the rate that we are spending and the amount of stuff that needs to be done, most probably the excess cash from the sale of the house would be used to redo/furnish the new house... Oh, and I still haven't even started to pack my clothes yet. Ok time for me to get back to hitting my exam papers.
|
import os, sys

import numpy as np
from numba import jit
import pylab as pl

from CosmologyFunctions import CosmologyFunctions
from convert_NFW_RadMass import MfracToMvir, MvirToMRfrac


@jit(nopython=True)
def battaglia_profile_2d(x, y, Rs, M200, R200, z, rho_critical, omega_b0,
                         omega_m0, cosmo_h,
                         P01=18.1, P02=0.154, P03=-0.758,
                         xc1=0.497, xc2=-0.00865, xc3=0.731,
                         beta1=4.35, beta2=0.0393, beta3=0.415):
    '''
    Using Battaglia et al (2012), Eq. 10. M200 in solar mass and R200 in Mpc.
    x = r/Rs where r and Rs are in angular diameter distance.

    The fit-parameter defaults are the Delta=200 values also used in
    bprofile() below, so call sites in this module that omit them get the
    standard Battaglia fit.

    Return:
        Pressure profile in keV/cm^3 at radius r in angular comoving distance.
        This result is confirmed by using Adam's code.
    '''
    # Rs & R200 are in the physical distance, i.e. angular comoving distance
    x = np.sqrt(x**2. + y**2.)
    r = x * Rs
    x = r / R200
    msolar = 1.9889e30  # kg
    mpc2cm = 3.0856e24  # cm
    G = 4.3e-9  # Mpc Mo^-1 (km/s)^2
    alpha = 1.0
    gamma = -0.3
    P200 = 200. * rho_critical * omega_b0 * G * M200 / omega_m0 / 2. / R200  # Msun km^2 / Mpc^3 / s^2
    # Delta=200
    P0 = P01 * ((M200 / 1e14)**P02 * (1. + z)**P03)
    xc = xc1 * ((M200 / 1e14)**xc2 * (1. + z)**xc3)
    beta = beta1 * ((M200 / 1e14)**beta2 * (1. + z)**beta3)
    # Delta=500
    #P0 = 7.49 * ((M200 / 1e14)**0.226 * (1. + z)**-0.957)
    #xc = 0.710 * ((M200 / 1e14)**-0.0833 * (1. + z)**0.853)
    #beta = 4.19 * ((M200 / 1e14)**0.0480 * (1. + z)**0.615)
    # Shock Delta=500
    #P0 = 20.7 * ((M200 / 1e14)**-0.074 * (1. + z)**-0.743)
    #xc = 0.438 * ((M200 / 1e14)**0.011 * (1. + z)**1.01)
    #beta = 3.82 * ((M200 / 1e14)**0.0375 * (1. + z)**0.535)
    #print P0, xc, beta
    #print (P200*msolar * 6.24e18 * 1e3 / mpc2cm**3), P0, xc, beta
    pth = P200 * P0 * (x / xc)**gamma * (1. + (x/xc))**(-1. * beta)  # (km/s)^2 M_sun / Mpc^3
    # Joule = kg m^2 / s^2, Joule = 6.24e18 eV = 6.24e15 keV
    pth *= (msolar * 6.24e15 * 1e6 / mpc2cm**3)  # keV/cm^3. 1e6 converts km^2 to m^2
    p_e = pth * 0.518  # For Y=0.24; Vikram, Lidz & Jain
    return p_e
@jit(nopython=True)
def battaglia_profile_proj(x, y, Rs, M200, R200, z, rho_critical, omega_b0,
                           omega_m0, cosmo_h, xmax):
    '''Projected to circle. xmax (the outer integration limit, in the same
    units as x) is an explicit argument here, mirroring
    arnaud_profile_proj() below.'''
    M = np.sqrt(xmax**2 - x**2)
    N = int(M / 0.01)
    if N == 0:
        return 2. * battaglia_profile_2d(x, 0., Rs, M200, R200, z,
                                         rho_critical, omega_b0, omega_m0,
                                         cosmo_h)
    else:
        xx = np.linspace(0, M, N)
        f = 0.0
        for x1 in xx:
            f += battaglia_profile_2d(x, x1, Rs, M200, R200, z, rho_critical,
                                      omega_b0, omega_m0, cosmo_h)
        f *= (2 * (xx[1] - xx[0]))
        return f
@jit(nopython=True)
def ks2002(x, Mvir, z, BryanDelta, rho_critical, omega_b0, omega_m0, cosmo_h, nu=150.):
    '''
    Output is pgas3d in unit of eV/cm^3
    '''
    Mvir, Rvir, M500, R500, rho_s, Rs = MvirToMRfrac(Mvir, z, BryanDelta, rho_critical, cosmo_h, frac=500.)
    conc = Rvir / Rs
    #print conc
    # Eq. 18
    eta0 = 2.235 + 0.202 * (conc - 5.) - 1.16e-3 * (conc - 5.)**2
    # Eq. 17
    gamma = 1.137 + 8.94e-2 * np.log(conc/5.) - 3.68e-3 * (conc - 5.)
    # Eq. 16
    B = 3 / eta0 * (gamma - 1.) / gamma / (np.log(1.+conc) / conc - 1./(1.+conc))
    #print conc, gamma, eta0, B
    # Eq. 15
    ygasc = (1. - B * (1. - np.log(1.+conc) / conc))**(1./(gamma-1.))
    # Eq. 21 of KS 2002. In the CRL code this is multiplied by the square of
    # the concentration; it arguably should be multiplied by the
    # concentration only once.
    rhogas0 = 7.96e13 * (omega_b0 * cosmo_h * cosmo_h/omega_m0) * (Mvir*cosmo_h/1e15) / Rvir**3 / cosmo_h**3 * conc * conc / ygasc / (1.+conc)**2 / (np.log(1.+conc)-conc/(1.+conc))
    # Eq. 19
    Tgas0 = 8.80 * eta0 * Mvir / 1e15 / Rvir  # keV. This is really kBT
    Pgas0 = 55.0 * rhogas0 / 1e14 * Tgas0 / 8.
    #x = 8.6e-3
    pgas3d = Pgas0 * (1. - B * (1. - np.log(1.+x) / x))**(gamma/(gamma-1.))
    #print x, gamma, eta0, B, Pgas0, (1. - B *(1. - np.log(1.+x) / x))**(gamma/(gamma-1.)), pgas3d
    pgas2d = 0.0
    txarr = np.linspace(x, 5*Rvir/Rs, 100)
    for tx in txarr:
        if tx <= 0:
            continue
        pgas2d += (1. - B * (1. - np.log(1.+tx) / tx))**(gamma/(gamma-1.))
    pgas2d = 2. * Pgas0 * pgas2d * (txarr[1] - txarr[0])
    #h = 6.625e-34
    #kB = 1.38e-23
    #Kcmb = 2.725
    #x = h * nu * 1e9 / kB / Kcmb
    #y_factor = Kcmb * (x / np.tanh(x / 2.) - 4)
    p_e = pgas3d * 0.518  # eV/cm^3
    return x*Rs/Rvir, pgas3d, p_e  # pgas3d*0.518, pgas2d, y_factor * pgas2d


def bprofile(r, Mvir, z, BryanDelta, rho_critical, omega_b0, omega_m0, cosmo_h, mtype='vir'):
    '''
    Using Battaglia et al (2012), Eq. 10. M200 in solar mass and R200 in Mpc.
    mtype: definition of mass provided, mtype='vir' or 'frac'

    Return:
        Pressure profile in eV/cm^3 at radius r
    '''
    if mtype == 'vir':
        Mvir, Rvir, M200, R200, rho_s, Rs = MvirToMRfrac(Mvir, z, BryanDelta, rho_critical, cosmo_h)
    if mtype == 'frac':
        Mvir, Rvir, M200, R200, rho_s, Rs = MfracToMvir(Mvir, z, BryanDelta, rho_critical, cosmo_h)
    print(M200, R200)
    # It seems R200 is in the physical distance, i.e. proper distance.
    # It needs to be multiplied by (1+z) to get the comoving unit, as r is
    # given in comoving units.
    R200 *= (1. + z)  # Comoving radius
    #r = x * (1. + z) * Rs
    #r = x * Rs
    x = r / R200
    #print Mvir, M200, R200
    msolar = 1.9889e30  # kg
    mpc2cm = 3.0856e24  # cm
    G = 4.3e-9  # Mpc Mo^-1 (km/s)^2
    alpha = 1.0
    gamma = -0.3
    P200 = 200. * rho_critical * omega_b0 * G * M200 / omega_m0 / 2. / (R200 / (1. + z))  # Msun km^2 / Mpc^3 / s^2
    P0 = 18.1 * ((M200 / 1e14)**0.154 * (1. + z)**-0.758)
    xc = 0.497 * ((M200 / 1e14)**-0.00865 * (1. + z)**0.731)
    beta = 4.35 * ((M200 / 1e14)**0.0393 * (1. + z)**0.415)
    #print P0, xc, beta
    #print (P200*msolar * 6.24e18 * 1e3 / mpc2cm**3), P0, xc, beta
    pth = P200 * P0 * (x / xc)**gamma * (1. + (x/xc))**(-1. * beta)  # (km/s)^2 M_sun / Mpc^3
    # Joule = kg m^2 / s^2, Joule = 6.24e18 eV = 6.24e15 keV
    pth *= (msolar * 6.24e15 * 1e6 / mpc2cm**3)  # keV/cm^3. 1e6 converts km^2 to m^2
    p_e = pth * 0.518  # For Y=0.24; Vikram, Lidz & Jain
    return x*R200/(1.+z)/Rvir, pth, p_e
#@jit(nopython=True)
def arnaud_profile(x, y, Mvir, zi, BD, rho_crit, hz, omega_b0, omega_m0, cosmo_h):
    Mvir, Rvir, M500, R500, rho_s, Rs = MvirToMRfrac(Mvir, zi, BD, rho_crit, cosmo_h, frac=500.0)
    print(M500, R500)
    r = x * R500
    x = np.sqrt(x**2. + y**2.)
    # Eq. 11, 12, 13
    P0 = 8.403 * (0.7/cosmo_h)**1.5
    c500 = 1.177
    gamma = 0.3081
    alpha = 1.0510
    beta = 5.4905
    px = P0 / (c500 * x)**gamma / (1. + (c500 * x)**alpha)**((beta - gamma) / alpha)
    # alpha_p=0.12 and alpha'_p(x) can be ignored to first approximation
    pr = 1.65 * 1e-3 * hz**(8./3.)*(M500/3.e14/0.7)**(2./3.+0.12) * px * 0.7**2  # keV/cm^-3
    return pr / 0.518


@jit(nopython=True)
def arnaud_profile_2d(x, y, Rs, M500, R500, zi, rho_crit, hz, omega_b0, omega_m0, cosmo_h):
    r = x * R500
    x = np.sqrt(x**2. + y**2.)
    # Eq. 11, 12, 13
    P0 = 8.403 * (0.7/cosmo_h)**1.5
    c500 = 1.177
    gamma = 0.3081
    alpha = 1.0510
    beta = 5.4905
    px = P0 / (c500 * x)**gamma / (1. + (c500 * x)**alpha)**((beta - gamma) / alpha)
    # alpha_p=0.12 and alpha'_p(x) can be ignored to first approximation
    pr = 1.65 * 1e-3 * hz**(8./3.)*(M500/3.e14/0.7)**(2./3.+0.12) * px * 0.7**2  # keV/cm^-3
    return pr / 0.518


@jit(nopython=True)
def arnaud_profile_proj(x, Rs, M500, R500, zi, rho_crit, hz, xmax, omega_b0, omega_m0, cosmo_h):
    M = np.sqrt(xmax**2 - x**2)
    N = int(M / 0.01)
    if N == 0:
        return 2. * arnaud_profile_2d(x, 0, Rs, M500, R500, zi, rho_crit, hz, omega_b0, omega_m0, cosmo_h)
    else:
        xx = np.linspace(0, M, N)
        f = 0.0
        for x1 in xx:
            f += arnaud_profile_2d(x, x1, Rs, M500, R500, zi, rho_crit, hz, omega_b0, omega_m0, cosmo_h)
        #print xx
        f *= (2 * (xx[1] - xx[0]))
        return f
if __name__=='__main__':
from scipy.interpolate import interp1d
z = 1. #0.0231
cosmo = CosmologyFunctions(z, 'wlsz.ini', 'battaglia')
omega_b0 = cosmo._omega_b0
omega_m0 = cosmo._omega_m0
cosmo_h = cosmo._h
BryanDelta = cosmo.BryanDelta()
rho_critical = cosmo.rho_crit() * cosmo._h * cosmo._h
rarr = np.logspace(-3, 3, 100)
Mvir = 1.e15 #/ cosmo_h
Mvir, Rvir, M200, R200, rho_s, Rs = MvirToMRfrac(Mvir, z, BryanDelta, rho_critical, cosmo_h)
print('%.2e %.2f %.2e %.2f %.2e %.2f'%(Mvir, Rvir, M200, R200, rho_s, Rs))
M200 = 8.915e14
R200 = 1.392
Rs = 0.53
xarr = rarr / Rs
pe_ba = np.array([battaglia_profile_2d(x, 0., Rs, M200, R200, z, rho_critical, omega_b0, omega_m0, cosmo_h) for x in xarr])
pl.subplot(121)
pl.loglog(np.logspace(-3, 3, 100), pe_ba, label='Vinu')
spl = interp1d(np.logspace(-3, 3, 100), pe_ba, fill_value='extrapolate')
#This file contains the angular radial bins NOT comoving radial bins and the 3d pressure profile from Adam's code. This is implemented lines between ~130 to 150
fa = np.genfromtxt('/media/luna1/vinu/software/AdamSZ/pressure3d_z_1_M_1e15')
pl.loglog(fa[:,0], fa[:,1], label='Adam')
pl.legend(loc=0)
pl.subplot(122)
pl.scatter(fa[:,0], fa[:,1]/spl(fa[:,0]))
pl.show()
sys.exit()
#ks2002(1.34e-2, Mvir, z, BryanDelta, rho_critical, omega_b0, omega_m0, cosmo_h, nu=150.)
#sys.exit()
rrvirarr, rrvirarr1, pgas3d_ksarr, pgas3d_baarr = [], [], [], []
pe_ba_arr, pe_ks_arr = [], []
for rrs in np.logspace(-2, np.log10(20), 30):
rrvir, pgas3d_ks, pe_ks = ks2002(rrs, Mvir, z, BryanDelta, rho_critical, omega_b0, omega_m0, cosmo_h, 150.)
rrvirarr.append(rrvir)
pe_ba = battaglia_profile_2d(rrs, 0., Rs, M200, R200, z, rho_critical, omega_b0, omega_m0, cosmo_h)
rrvirarr1.append(rrs/Rvir)
pgas3d_ksarr.append(pgas3d_ks)
pgas3d_baarr.append(pe_ba * 1e3 / 0.518)
pe_ba_arr.append(pe_ba * 1e3)
pe_ks_arr.append(pe_ks)
pl.subplot(121)
pl.loglog(rrvirarr1, pgas3d_baarr, c='k', label='Battaglia')
pl.loglog(rrvirarr, pgas3d_ksarr, c='g', label='KS')
f = np.genfromtxt('/media/luna1/vinu/software/komastu_crl/clusters/battagliaprofile/battaglia/xvir_pgas_tsz.txt')
pl.loglog(f[:,0], f[:,1], c='r', label='CRL Battaglia')
f = np.genfromtxt('/media/luna1/vinu/software/komastu_crl/clusters/komatsuseljakprofile/ks/xvir_pgas_tsz.txt')
pl.loglog(f[:,0], f[:,1], c='m', label='CRL KS')
pl.legend(loc=0)
f = np.genfromtxt('/media/luna1/vinu/software/komastu_crl/clusters/komatsuseljakprofile/ks/xvir_pgas_tsz.txt')
pl.subplot(122)
pl.loglog(rrvirarr1, pe_ba_arr, c='k', label='Battaglia electron')
pl.loglog(rrvirarr, pe_ks_arr, c='g', label='KS electron')
f = np.genfromtxt('/media/luna1/vinu/software/komastu_crl/clusters/battagliaprofile/battaglia/xvir_pgas_tsz.txt')
pl.loglog(f[:,0], f[:,1]*0.518, c='r', label='CRL Battaglia electron')
f = np.genfromtxt('/media/luna1/vinu/software/komastu_crl/clusters/komatsuseljakprofile/ks/xvir_pgas_tsz.txt')
pl.loglog(f[:,0], f[:,1]*0.518, c='m', label='CRL KS electron')
pl.legend(loc=0)
pl.show()
sys.exit()
    # NOTE: unreachable (sys.exit() above); pgas2darr is never defined in this script.
    pl.subplot(133)
pl.loglog(np.array(rrvirarr), pgas2darr, c='k', label='Vinu')
pl.loglog(f[:,0], f[:,2], c='r', label='KS')
pl.legend(loc=0)
pl.show()
|
Is your dog making holes in your yard? Fido is certainly adorable, and has some wonderful traits, but he isn’t perfect. In fact, our canine friends have a few bad habits, such as digging. Read on as a local White Rock, TX vet offers some helpful tips on how to stop your furry pal from turning your yard into a doggy construction zone.
Our four-legged pals sometimes dig to make themselves little dens where they can get relief from weather or biting insects. Make sure that Fido has shade and shelter outdoors, and limit his outdoor time in bad weather. It’s also important to keep your pooch up to date on his parasite control products.
Boredom is another common reason for digging in dogs. Offer Fido lots of fun toys, and take time to play with him every day.
Is Fido digging as a way to stash toys or treats? Make your furry pirate a sandbox, and put some treasures in there for him to dig up. If your canine pal knows that one spot is full of goods, he may not bother with the rest of the yard.
Does your dog dig in straight lines? Fido may be trying to reach something that is burrowing under your yard. (Note: this is common in terriers and other hunting breeds.) Use safe, humane methods to get rid of rodents and other vermin.
Is your furry buddy still intact? If so, he may be trying to escape to go looking for love. We recommend getting Fido fixed right away. Ask your vet about the benefits of spay/neuter surgery.
Even if Fido has a yard to patrol, he will likely get bored and restless if he never gets to leave the yard. Walk your canine companion every day. This will provide your furry friend with exercise, and allow him to enjoy a change of scenery. Plus, it’s a great way for you to spend some time with your four-legged pal!
Is your dog due for an exam, vaccinations, or parasite control products? Please do not hesitate to contact us, your local White Rock, TX pet clinic, for all of your dog’s veterinary care needs. We are here to help!
|
#!/usr/bin/env python2.7
# Copyright 2015, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# produces cleaner build.yaml files
import collections
import os
import sys
import yaml
TEST = (os.environ.get('TEST', 'false') == 'true')
_TOP_LEVEL_KEYS = ['settings', 'proto_deps', 'filegroups', 'libs', 'targets', 'vspackages']
_ELEM_KEYS = [
'name',
'gtest',
'cpu_cost',
'flaky',
'build',
'run',
'language',
'public_headers',
'headers',
'src',
'deps']
def repr_ordered_dict(dumper, odict):
return dumper.represent_mapping(u'tag:yaml.org,2002:map', odict.items())
yaml.add_representer(collections.OrderedDict, repr_ordered_dict)
def rebuild_as_ordered_dict(indict, special_keys):
outdict = collections.OrderedDict()
for key in sorted(indict.keys()):
if '#' in key:
outdict[key] = indict[key]
for key in special_keys:
if key in indict:
outdict[key] = indict[key]
for key in sorted(indict.keys()):
if key in special_keys: continue
if '#' in key: continue
outdict[key] = indict[key]
return outdict
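# Example of the resulting key order: '#'-comment keys first (sorted), then
# the special keys in their given order, then everything else alphabetically:
#
#   rebuild_as_ordered_dict({'src': ['a.c'], 'name': 'x', 'foo': 1},
#                           ['name', 'src'])
#   # -> OrderedDict([('name', 'x'), ('src', ['a.c']), ('foo', 1)])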
def clean_elem(indict):
for name in ['public_headers', 'headers', 'src']:
if name not in indict: continue
inlist = indict[name]
protos = list(x for x in inlist if os.path.splitext(x)[1] == '.proto')
others = set(x for x in inlist if x not in protos)
indict[name] = protos + sorted(others)
return rebuild_as_ordered_dict(indict, _ELEM_KEYS)
for filename in sys.argv[1:]:
with open(filename) as f:
js = yaml.load(f)
js = rebuild_as_ordered_dict(js, _TOP_LEVEL_KEYS)
for grp in ['filegroups', 'libs', 'targets']:
if grp not in js: continue
js[grp] = sorted([clean_elem(x) for x in js[grp]],
key=lambda x: (x.get('language', '_'), x['name']))
output = yaml.dump(js, indent=2, width=80, default_flow_style=False)
# massage out trailing whitespace
lines = []
for line in output.splitlines():
lines.append(line.rstrip() + '\n')
output = ''.join(lines)
if TEST:
with open(filename) as f:
assert f.read() == output
else:
with open(filename, 'w') as f:
f.write(output)
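# Typical invocation (paths illustrative): reformat files in place with
#   python build_cleaner.py build.yaml
# or run in check-only mode, which asserts the files are already clean:
#   TEST=true python build_cleaner.py build.yaml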
|
Compare to Uline S-298 and Save!
Self-Adhesive Packing List Envelope Plain Face 4.5" x 6" Side Loading is made from a blended polyethylene plastic film in clear or white, formulated for superior printability. Made with adhesives that stick well to cardboard, plastic, wood, glass and metal, these envelopes can hold documents such as packing slips, invoices, warranty information, operating instructions, wiring diagrams and manifest logs.
|
from django.db import models
#import ncbi
class Clone(models.Model):
"""
Represents DNA for a clone identified by a unique name and living in a
cellular environment like a bacterium or yeast.
"""
name = models.CharField(max_length=100)
accessions = models.CharField(max_length=200, null=True, blank=True)
def __unicode__(self):
return self.name
def get_accessions(self):
        if not self.accessions:
            # NCBI lookup is disabled; returning an empty list here keeps the
            # CharField untouched instead of assigning a Python list to it.
            #self.accessions = u"".join(ncbi.search_clone_by_name(self.name))
            #self.save()
            return []
        return self.accessions
class SequencedClone(models.Model):
"""
Represents a sequenced clone from NCBI's CloneDB reports.
"""
gi = models.IntegerField(blank=True, null=True)
clonename = models.CharField(max_length=100, db_index=True)
stdn = models.CharField(max_length=1)
chrom = models.CharField(max_length=5)
phase = models.IntegerField(blank=True, null=True)
clonestate = models.CharField(max_length=5)
gcenter = models.CharField(max_length=100)
accession = models.CharField(max_length=20, db_index=True)
seqlen = models.IntegerField(blank=True, null=True)
libabbr = models.CharField(max_length=10, db_index=True)
def __unicode__(self):
return u"%s (%s)" % (self.clonename, self.accession)
|
We have taken inspiration from the Persian Gardens of 500 BC to the Baroque Gardens of the 17th Century and beyond to create Art of the Garden fabrics, a collection of designs that embody the British love affair with horticulture.
Art of the Garden is perfect for creating a timelessly elegant look that is fit for modern living.
If you would like to create a look that harmonises throughout, team the Art of the Garden fabrics with the complementary fabric collections: Palm Grove Weaves and Moorbank Weaves and the coordinating Art of the Garden Wallpapers collection.
|
from myodbus import MyoDbus
import numpy
import argparse
import struct
import dbus
from dbus.mainloop.glib import DBusGMainLoop, threads_init
from gi.repository import GLib
################################################################################
#### Args
################################################################################
parser = argparse.ArgumentParser(description='Sample program for connecting to, configuring and reading sensor values from a Myo IMU sensor.')
parser.add_argument('--sleep', dest='sleep', action='store_true')
parser.add_argument('--myopath', dest='myopath', required=True, help="dbus path to Myo device. Example: /org/bluez/hci1/dev_XX_XX_XX_XX_XX_XX")
parser.set_defaults(sleep=False)
args = parser.parse_args()
################################################################################
#### Callback function
################################################################################
def handleIMU( interfaceName, payload, arrayOfString, myo_basepath=None):
print("\n################################################################################")
print("From Myo with path: {}".format(myo_basepath[:37]))
print("handleIMU arguments: \n\tInterface name: {}\n\tData: {}\n\t{}".format(interfaceName, payload, arrayOfString))
# Unpack sensor values
rb = payload['Value']
MYOHW_ORIENTATION_SCALE = 16384.0
MYOHW_ACCELEROMETER_SCALE = 2048.0
MYOHW_GYROSCOPE_SCALE = 16.0
vals = struct.unpack('10h', rb)
quat = vals[:4]
acc = vals[4:7]
gyr = vals[7:10]
    # Raw values are scaled int16s; divide by the myohw scale factors to
    # recover physical units (the original multiplied, which inflates them).
    acc = [ a / MYOHW_ACCELEROMETER_SCALE for a in acc ]
    gyr = [ g / MYOHW_GYROSCOPE_SCALE for g in gyr ]
    quat = [ q / MYOHW_ORIENTATION_SCALE for q in quat ]
    # Normalise the orientation quaternion.
    magnitude = numpy.sqrt(sum(q * q for q in quat))
    quat = [q / magnitude for q in quat]
print("quat: {}\nacc: {}\ngyro: {}".format( quat, acc, gyr) )
print("################################################################################")
################################################################################
#### Event loop
################################################################################
DBusGMainLoop(set_as_default=True)
loop = GLib.MainLoop()
# Get system bus
bus = dbus.SystemBus()
if __name__ == '__main__':
# New Myo
myo = MyoDbus(bus, args.myopath)
# Connect and configure
myo.connect(wait=True, verbose=True)
myo.lock()
myo.setNeverSleep()
myo.subscribeToIMU()
myo.attachIMUHandler( handleIMU )
myo.enableIMU()
print("Battery: {}%".format( myo.getBatterLevel() ) )
# Start main loop
try:
print("Running event loop! Press Ctrl+C to exit...")
loop.run()
except KeyboardInterrupt:
print("Shutting down...")
loop.quit()
print("Disconnecting...")
myo.unsubscribeFromIMU()
myo.disableIMU_EMG_CLF()
myo.vibrate(duration='short')
if args.sleep:
print("Setting Myo to deep sleep...")
myo.setDeepSleep()
|
Definition Adiantum capillus-veneris mRNA. clone: TST39A01NGRL0007_F23. 5' end sequence.
>tr|Q5W6Y0|Q5W6Y0_ORYSJ Os05g0357200 protein OS=Oryza sativa subsp.
|
import os
import sys
import random
import time
from random import seed, randint
import argparse
import platform
from datetime import datetime
import imp
import numpy as np
import fileinput
from itertools import product
import pandas as pd
from scipy.interpolate import griddata
from scipy.interpolate import interp2d
import seaborn as sns
from os import listdir
import matplotlib.pyplot as plt
import matplotlib as mpl
import math  # needed by compute_debye_huckel below
# sys.path.insert(0,'..')
# from notebookFunctions import *
# from .. import notebookFunctions
from Bio.PDB.PDBParser import PDBParser
from pyCodeLib import *
code = {"GLY" : "G", "ALA" : "A", "LEU" : "L", "ILE" : "I",
"ARG" : "R", "LYS" : "K", "MET" : "M", "CYS" : "C",
"TYR" : "Y", "THR" : "T", "PRO" : "P", "SER" : "S",
"TRP" : "W", "ASP" : "D", "GLU" : "E", "ASN" : "N",
"GLN" : "Q", "PHE" : "F", "HIS" : "H", "VAL" : "V",
"M3L" : "K", "MSE" : "M", "CAS" : "C"}
gamma_se_map_1_letter = { 'A': 0, 'R': 1, 'N': 2, 'D': 3, 'C': 4,
'Q': 5, 'E': 6, 'G': 7, 'H': 8, 'I': 9,
'L': 10, 'K': 11, 'M': 12, 'F': 13, 'P': 14,
'S': 15, 'T': 16, 'W': 17, 'Y': 18, 'V': 19}
def read_gamma(gammaFile):
data = np.loadtxt(gammaFile)
gamma_direct = data[:210]
gamma_mediated = data[210:]
return gamma_direct, gamma_mediated
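# Layout of the gamma file assumed above: 20*21/2 = 210 rows of direct-contact
# gammas (two wells per row) followed by 210 rows of mediated gammas, whose
# columns are (protein-mediated, water-mediated) -- matching the unpacking in
# change_gamma_format() below.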
def change_gamma_format(gamma_direct, gamma_mediated):
nwell = 2
gamma_ijm = np.zeros((nwell, 20, 20))
water_gamma_ijm = np.zeros((nwell, 20, 20))
protein_gamma_ijm = np.zeros((nwell, 20, 20))
m = 0
count = 0
for i in range(20):
for j in range(i, 20):
gamma_ijm[0][i][j] = gamma_direct[count][0]
gamma_ijm[0][j][i] = gamma_direct[count][0]
gamma_ijm[1][i][j] = gamma_direct[count][1]
gamma_ijm[1][j][i] = gamma_direct[count][1]
count += 1
count = 0
for i in range(20):
for j in range(i, 20):
water_gamma_ijm[m][i][j] = gamma_mediated[count][1]
water_gamma_ijm[m][j][i] = gamma_mediated[count][1]
count += 1
count = 0
for i in range(20):
for j in range(i, 20):
protein_gamma_ijm[m][i][j] = gamma_mediated[count][0]
protein_gamma_ijm[m][j][i] = gamma_mediated[count][0]
count += 1
return gamma_ijm, water_gamma_ijm, protein_gamma_ijm
def compute_chi(data):
    # NOTE: relies on a module-level `structure` and per-residue coordinate
    # arrays ca_all, cb_all, c_all, n_all prepared elsewhere; the original
    # loop indexed them with an undefined `i`, so the residue's global index
    # is used here instead.
    res_list = get_res_list(structure)
    energy = 0
    for res1globalindex, res1 in enumerate(res_list):
        ca = ca_all[res1globalindex]
        cb = cb_all[res1globalindex]
        c = c_all[res1globalindex]
        n = n_all[res1globalindex]
chi0 = -0.83
k_chi = 20*4.184
r_ca_cb = cb-ca
r_c_ca = ca-c
r_ca_n = n-ca
norm_r_ca_cb = np.sum(r_ca_cb**2)**0.5
norm_r_c_ca = np.sum(r_c_ca**2)**0.5
norm_r_ca_n = np.sum(r_ca_n**2)**0.5
a = np.cross(-r_c_ca,r_ca_n)/norm_r_c_ca/norm_r_ca_n
chi = np.dot(a,r_ca_cb)/norm_r_ca_cb
dchi = chi - chi0
energy += k_chi*dchi*dchi
return energy
def compute_debye_huckel(data):
res_list = get_res_list(structure)
k_dh = 4.15
debye_huckel = 0
k_screening = 1.0
screening_length = 10 # (in the unit of A)
min_seq_sep = 10
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
for res2globalindex, res2 in enumerate(res_list):
res2index = get_local_index(res2)
res2chain = get_chain(res2)
# if res2index - res1index >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
# if res2globalindex > res1globalindex:
if res2globalindex >= res1globalindex + min_seq_sep:
res1Name = three_to_one(res1.get_resname())
res2Name = three_to_one(res2.get_resname())
charge_1 = 0
charge_2 = 0
if res1Name == "R" or res1Name == "K":
charge_1 = 1
if res1Name == "D" or res1Name == "E":
charge_1 = -1
if res2Name == "R" or res2Name == "K":
charge_2 = 1
if res2Name == "D" or res2Name == "E":
charge_2 = -1
if charge_1 * charge_2 != 0:
r = get_interaction_distance(res1, res2)
debye_huckel += charge_1*charge_2/r*math.exp(-k_screening*r/screening_length)
debye_huckel *= k_dh
return debye_huckel
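# The screened Coulomb term accumulated above is the standard Debye-Huckel
# form,
#     V_ij = k_dh * q_i * q_j / r_ij * exp(-k_screening * r_ij / l_D),
# with screening length l_D = 10 A and charges +1 (R, K) / -1 (D, E); only
# pairs at least min_seq_sep apart in global index contribute.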
input_pdb_filename = "/Users/weilu/Research/server_backup/jan_2019/compute_energy/12asA00"
def phosphorylation(res1globalindex, res2globalindex, res1type, res2type, m, phosphorylated_residue_index, phosphorylated_residue_seq):
# // Four letter classes
# // 1) SHL: Small Hydrophilic (ALA, GLY, PRO, SER THR) or (A, G, P, S, T) or {0, 7, 14, 15, 16}
# // 2) AHL: Acidic Hydrophilic (ASN, ASP, GLN, GLU) or (N, D, Q, E) or {2, 3, 5, 6}
# // 3) BAS: Basic (ARG HIS LYS) or (R, H, K) or {1, 8, 11}
# // 4) HPB: Hydrophobic (CYS, ILE, LEU, MET, PHE, TRP, TYR, VAL) or (C, I, L, M, F, W, Y, V) or {4, 9, 10, 12, 13, 17, 18, 19}
bb_four_letter_map = [1, 3, 2, 2, 4, 2, 2, 1, 3, 4, 4, 3, 4, 4, 1, 1, 1, 4, 4, 4]
k_hypercharge = 1
if (res1globalindex+1) in phosphorylated_residue_index:
# print(res1globalindex, res2globalindex, k_hypercharge)
idx = phosphorylated_residue_index.index(res1globalindex+1)
if bb_four_letter_map[res2type] == 1:
k_hypercharge = m
elif bb_four_letter_map[res2type] == 2 or bb_four_letter_map[res2type] == 3:
k_hypercharge = m*m
else:
k_hypercharge = 1
res1type = res_type_map[phosphorylated_residue_seq[idx]]
if (res2globalindex+1) in phosphorylated_residue_index:
# print(res1globalindex, res2globalindex, k_hypercharge)
idx = phosphorylated_residue_index.index(res2globalindex+1)
if bb_four_letter_map[res1type] == 1:
k_hypercharge = m
elif bb_four_letter_map[res1type] == 2 or bb_four_letter_map[res1type] == 3:
k_hypercharge = m*m
else:
k_hypercharge = 1
res2type = res_type_map[phosphorylated_residue_seq[idx]]
return k_hypercharge, res1type, res2type
def compute_mediated(structure, protein_gamma_ijm, water_gamma_ijm, kappa=5.0, hasPhosphorylation=False, fixWellCenter=True):
if hasPhosphorylation:
import configparser
config = configparser.ConfigParser()
config.read("phosphorylation.dat")
m = eval(config['phosphorylation']['m'])
phosphorylated_residue_index = eval(config['phosphorylation']['phosphorylated_residue_index'])
phosphorylated_residue_seq = eval(config['phosphorylation']['phosphorylated_residue_seq'])
# print(m, phosphorylated_residue_index, phosphorylated_residue_seq)
# print(res_type_map['E'])
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
sequence = get_sequence_from_structure(structure)
cb_density = calculate_cb_density(res_list, neighbor_list)
r_min = 6.5
r_max = 9.5
# kappa = 5.0
min_seq_sep = 10
density_threshold = 2.6
density_kappa = 7.0
# phi_mediated_contact_well = np.zeros((2, 20,20))
v_mediated = 0
if not fixWellCenter:
a = pd.read_csv("/Users/weilu/opt/parameters/side_chain/cbd_cbd_real_contact_symmetric.csv")
cb_density = calculate_cb_density_wellCenter(res_list, neighbor_list, a)
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
rho_i = cb_density[res1globalindex]
for res2 in get_neighbors_within_radius(neighbor_list, res1, r_max+2.0):
res2index = get_local_index(res2)
res2chain = get_chain(res2)
res2globalindex = get_global_index(res_list, res2)
rho_j = cb_density[res2globalindex]
# if 1 is wrong. because B20 will interact with A1 twice.
if_1 = res2index - res1index >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex)
# if 2 is the correct one. should be used.
if_2 = res2globalindex - res1globalindex >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex)
if_3 = res2globalindex - res1globalindex >= min_seq_sep
# if if_1 and not if_2:
# print("true 1, false 2",res2globalindex, res1globalindex, res2chain, res1chain, res2index, res1index)
# if not if_1 and if_2:
# print("false 1, true 2", res2globalindex, res1globalindex, res2chain, res1chain, res2index, res1index)
# if if_3 and not if_1:
# print("true 3, false 1",res2globalindex, res1globalindex, res2chain, res1chain, res2index, res1index)
# if not if_3 and if_1:
# print("false 3, true 1, if3 stricker than if1,",res2globalindex, res1globalindex, res2chain, res1chain, res2index, res1index)
if if_3 and not if_2:
print("true 3, false 2",res2globalindex, res1globalindex, res2chain, res1chain, res2index, res1index)
if not if_3 and if_2:
print("false 3, true 2, if3 stricker than if2,",res2globalindex, res1globalindex, res2chain, res1chain, res2index, res1index)
# if res2index - res1index >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
# if res2globalindex - res1globalindex >= min_seq_sep or (res1chain != res2chain):
# if res2globalindex - res1globalindex >= min_seq_sep:
if res2globalindex - res1globalindex >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
res1type = get_res_type(res_list, res1)
res2type = get_res_type(res_list, res2)
rij = get_interaction_distance(res1, res2)
res1type_old = res1type
res2type_old = res2type
if hasPhosphorylation:
k_hypercharge, res1type, res2type = phosphorylation(res1globalindex, res2globalindex, res1type, res2type, m, phosphorylated_residue_index, phosphorylated_residue_seq)
else:
k_hypercharge = 1
protein_gamma = protein_gamma_ijm[0][res1type][res2type]*k_hypercharge
water_gamma = water_gamma_ijm[0][res1type][res2type]*k_hypercharge
if k_hypercharge != 1:
print(res1globalindex, res2globalindex, res1type_old, res2type_old, res1type, res2type, protein_gamma_ijm[0][res1type_old][res2type_old], water_gamma_ijm[0][res1type_old][res2type_old], protein_gamma, water_gamma, k_hypercharge)
_pij_protein = prot_water_switchFunc_sigmaProt(
rho_i, rho_j, density_threshold, density_kappa) * protein_gamma
_pij_water = prot_water_switchFunc_sigmaWater(
rho_i, rho_j, density_threshold, density_kappa) * water_gamma
if not fixWellCenter:
res1_name = res1.get_resname()
res2_name = res2.get_resname()
if res1_name == "GLY" or res2_name == "GLY":
r_min_res1_res2 = 6.5
r_max_res1_res2 = 9.5
else:
b = a.query(f"ResName1=='{res1_name}' and ResName2=='{res2_name}'")
if len(b) == 0:
b = a.query(f"ResName1=='{res2_name}' and ResName2=='{res1_name}'")
try:
r_min_res1_res2 = float(b["r_max"]) + 1.5
r_max_res1_res2 = float(b["r_max"]) + 4.5
except:
print(b)
# r_min_res1_res2 = 6.5
# r_max_res1_res2 = 9.5
v_mediated += (_pij_protein + _pij_water) * interaction_well(rij, r_min_res1_res2, r_max_res1_res2, kappa)
else:
v_mediated += (_pij_protein + _pij_water) * interaction_well(rij, r_min, r_max, kappa)
return v_mediated
input_pdb_filename = "/Users/weilu/Research/server_backup/jan_2019/compute_energy/12asA00"
def compute_direct(structure, gamma_ijm, kappa=5.0, hasPhosphorylation=False, r_min=2.5, fixWellCenter=True, environment=False):
if hasPhosphorylation:
import configparser
config = configparser.ConfigParser()
config.read("phosphorylation.dat")
m = eval(config['phosphorylation']['m'])
phosphorylated_residue_index = eval(config['phosphorylation']['phosphorylated_residue_index'])
phosphorylated_residue_seq = eval(config['phosphorylation']['phosphorylated_residue_seq'])
# print(m, phosphorylated_residue_index, phosphorylated_residue_seq)
# print(res_type_map['E'])
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
sequence = get_sequence_from_structure(structure)
if environment:
isH = {}
isP = {}
for i in range(20):
isH[dindex_to_1[i]] = res_type_map_HP[dindex_to_1[i]]
isP[dindex_to_1[i]] = 1 - res_type_map_HP[dindex_to_1[i]]
cbd_info = pd.read_csv("/Users/weilu/opt/parameters/side_chain/cbd_cbd_real_contact_symmetric.csv")
density_H = calculate_property_density_with_cbd_info(res_list, neighbor_list, isH, cbd_info).round(3)
density_P = calculate_property_density_with_cbd_info(res_list, neighbor_list, isP, cbd_info).round(3)
# print(density_H)
# print(density_P)
# print(isH, isP)
density_kappa = 1
d_HP0 = 0
# r_min = 4.5
r_max = 6.5
# kappa = 5
min_seq_sep = 10
# phi_pairwise_contact_well = np.zeros((20,20))
v_direct = 0
if not fixWellCenter:
a = pd.read_csv("/Users/weilu/opt/parameters/side_chain/cbd_cbd_real_contact_symmetric.csv")
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
# print(get_interaction_atom(res1).get_vector()[2], type(get_interaction_atom(res1).get_vector()[2]))
for res2 in get_neighbors_within_radius(neighbor_list, res1, r_max+2.0):
res2index = get_local_index(res2)
res2chain = get_chain(res2)
res2globalindex = get_global_index(res_list, res2)
# if res2index - res1index >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
# if res2globalindex - res1globalindex >= min_seq_sep:
if res2globalindex - res1globalindex >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
# print(i)
res1type = get_res_type(res_list, res1)
res2type = get_res_type(res_list, res2)
rij = get_interaction_distance(res1, res2)
if hasPhosphorylation:
k_hypercharge, res1type, res2type = phosphorylation(res1globalindex, res2globalindex, res1type, res2type, m, phosphorylated_residue_index, phosphorylated_residue_seq)
else:
k_hypercharge = 1
gamma = gamma_ijm[0][res1type][res2type] * k_hypercharge
# phi_pairwise_contact_well[res1type][res2type] += interaction_well(rij, r_min, r_max, kappa)
if not fixWellCenter:
res1_name = res1.get_resname()
res2_name = res2.get_resname()
if res1_name == "GLY" or res2_name == "GLY":
r_min_res1_res2 = 2.5
r_max_res1_res2 = 6.5
else:
b = a.query(f"ResName1=='{res1_name}' and ResName2=='{res2_name}'")
if len(b) == 0:
b = a.query(f"ResName1=='{res2_name}' and ResName2=='{res1_name}'")
try:
r_min_res1_res2 = float(b["r_min"]) - 0.5
r_max_res1_res2 = float(b["r_max"]) + 1.5
except:
print(b)
# r_min_res1_res2 = 2.5
# r_max_res1_res2 = 6.5
else:
r_min_res1_res2 = r_min
r_max_res1_res2 = r_max
if environment:
d_H_i = density_H[res1globalindex]
d_P_i = density_P[res1globalindex]
d_H_j = density_H[res2globalindex]
d_P_j = density_P[res2globalindex]
d_H = d_H_i + d_H_j
d_P = d_P_i + d_P_j
sigma_H = 0.5 * np.tanh(density_kappa * (d_H - d_P - d_HP0)) + 0.5
sigma_P = 1 - sigma_H
gamma_H = gamma_ijm[0][res1type][res2type]
gamma_P = gamma_ijm[1][res1type][res2type]
theta = interaction_well(rij, r_min_res1_res2, r_max_res1_res2, kappa)
v_direct += (gamma_H * sigma_H + gamma_P * sigma_P) * theta
else:
v_direct += gamma * interaction_well(rij, r_min_res1_res2, r_max_res1_res2, kappa)
return v_direct
def compute_burial(structure, burial_gamma, kappa=4.0, hasPhosphorylation=False):
if hasPhosphorylation:
import configparser
config = configparser.ConfigParser()
config.read("phosphorylation.dat")
m = eval(config['phosphorylation']['m'])
phosphorylated_residue_index = eval(config['phosphorylation']['phosphorylated_residue_index'])
phosphorylated_residue_seq = eval(config['phosphorylation']['phosphorylated_residue_seq'])
print(m, phosphorylated_residue_index, phosphorylated_residue_seq)
print(res_type_map['E'])
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
sequence = get_sequence_from_structure(structure)
cb_density = calculate_cb_density(res_list, neighbor_list)
rho_table = [[0.0, 3.0], [3.0, 6.0], [6.0, 9.0]]
v_burial = 0
for i in range(3):
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
res1type = get_res_type(res_list, res1)
res1density = cb_density[res1globalindex]
if hasPhosphorylation and (res1globalindex+1) in phosphorylated_residue_index:
idx = phosphorylated_residue_index.index(res1globalindex+1)
res1type = res_type_map[phosphorylated_residue_seq[idx]]
gamma = burial_gamma[i][res1type]
# print res1globalindex, res1index, res1chain, res1type, res1density
v_burial += gamma * interaction_well(res1density, rho_table[i][0], rho_table[i][1], kappa)
return v_burial
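# The burial term scores each residue's C-beta coordination density against
# three overlapping wells ([0,3], [3,6], [6,9]) with smooth tanh edges of
# steepness kappa, weighting each well by the residue-type-specific
# burial_gamma[i][res1type].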
def read_hydrophobicity_scale(seq, tableLocation, isNew=False):
seq_dataFrame = pd.DataFrame({"oneLetterCode":list(seq)})
# HFscales = pd.read_table("~/opt/small_script/Whole_residue_HFscales.txt")
# print(f"reading hydrophobicity scale table from {tableLocation}/Whole_residue_HFscales.txt")
HFscales = pd.read_csv(f"{tableLocation}/Whole_residue_HFscales.txt", sep="\t")
if not isNew:
# Octanol Scale
# new and old difference is at HIS.
code = {"GLY" : "G", "ALA" : "A", "LEU" : "L", "ILE" : "I",
"ARG+" : "R", "LYS+" : "K", "MET" : "M", "CYS" : "C",
"TYR" : "Y", "THR" : "T", "PRO" : "P", "SER" : "S",
"TRP" : "W", "ASP-" : "D", "GLU-" : "E", "ASN" : "N",
"GLN" : "Q", "PHE" : "F", "HIS+" : "H", "VAL" : "V",
"M3L" : "K", "MSE" : "M", "CAS" : "C"}
else:
code = {"GLY" : "G", "ALA" : "A", "LEU" : "L", "ILE" : "I",
"ARG+" : "R", "LYS+" : "K", "MET" : "M", "CYS" : "C",
"TYR" : "Y", "THR" : "T", "PRO" : "P", "SER" : "S",
"TRP" : "W", "ASP-" : "D", "GLU-" : "E", "ASN" : "N",
"GLN" : "Q", "PHE" : "F", "HIS0" : "H", "VAL" : "V",
"M3L" : "K", "MSE" : "M", "CAS" : "C"}
HFscales_with_oneLetterCode = HFscales.assign(oneLetterCode=HFscales.AA.str.upper().map(code)).dropna()
data = seq_dataFrame.merge(HFscales_with_oneLetterCode, on="oneLetterCode", how="left")
return data
def compute_membrane(structure, kappa=4.0):
k_membrane = 1
membrane_center = 0
k_m = 2
z_m = 15
tanh = np.tanh
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
# sequence = get_sequence_from_structure(structure)
seq = [three_to_one(res.get_resname()) for res in res_list]
sequence = "".join(seq)
v_membrane = 0
hydrophobicityScale_list = read_hydrophobicity_scale(sequence, "/Users/weilu/openmmawsem/helperFunctions")["DGwoct"].values
# print(hydrophobicityScale_list)
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
res1type = get_res_type(res_list, res1)
z = res1['CA'].get_coord()[-1]
# print res1globalindex, res1index, res1chain, res1type, res1density
v_membrane += k_membrane*(0.5*tanh(k_m*((z-membrane_center)+z_m))+0.5*tanh(k_m*(z_m-(z-membrane_center))))*hydrophobicityScale_list[res1globalindex]
return v_membrane
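# The membrane weight used above is a smooth slab indicator,
#     w(z) = 0.5*tanh(k_m*(z - z_c + z_m)) + 0.5*tanh(k_m*(z_m - (z - z_c))),
# which is ~1 for |z - z_c| < z_m (inside a membrane of half-thickness
# z_m = 15 A centred at z_c = membrane_center) and ~0 outside; each residue
# contributes w(z) times its octanol transfer free energy DGwoct.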
def compute_positive_inside_rule(structure, kappa=4.0):
k_membrane = 1
membrane_center = 0
k_m = 2
z_m = 15
tanh = np.tanh
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
# sequence = get_sequence_from_structure(structure)
seq = [three_to_one(res.get_resname()) for res in res_list]
sequence = "".join(seq)
v_membrane = 0
positive_inside_residue_table = {"G":0, "A":0, "V":0, "C":0, "P":0, "L":0, "I":0, "M":0, "W":0, "F":0,
"S":0, "T":0, "Y":0, "N":0, "Q":0,
"K":-1, "R":-1, "H":0,
"D":0, "E":0}
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
res1type = get_res_type(res_list, res1)
try:
z = res1['CB'].get_coord()[-1]
except:
z = res1['CA'].get_coord()[-1]
# print res1globalindex, res1index, res1chain, res1type, res1density
thickness = 15
z_m = thickness * positive_inside_residue_table[three_to_one(res1.get_resname())]
v_membrane += k_membrane*(z-membrane_center-z_m)**2
v_membrane /= 100
return v_membrane
input_pdb_filename = "/Users/weilu/Research/server_backup/jan_2019/compute_energy/12asA00.pdb"
def compute_direct_2(input_pdb_filename, gamma_ijm):
_all = []
seq = ""
p=PDBParser()
structure=p.get_structure("x", input_pdb_filename)
for model in structure:
for chain in model:
for residue in chain:
seq += code[residue.resname]
if residue.resname == "GLY":
x,y,z = residue["CA"].get_coord()
else:
x,y,z = residue["CB"].get_coord()
_all.append([x,y,z])
v_direct = 0
data = np.array(_all)
n = len(data)
for i in range(n):
x, y, z = data[i]
ai = gamma_se_map_1_letter[seq[i]]
for j in range(i+10, n):
xj, yj, zj = data[j]
aj = gamma_se_map_1_letter[seq[j]]
r = ((x-xj)**2 + (y-yj)**2 + (z-zj)**2)**0.5
gamma = gamma_ijm[0][ai][aj]
# gamma = 1
v_direct += gamma * interaction_well(r, 4.5, 6.5, 5)
# v_direct += 1
return v_direct
def compute_mediated_multiDensity(structure, protein_gamma_ijm, water_gamma_ijm, kappa=5.0):
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
sequence = get_sequence_from_structure(structure)
cb_density = calculate_cb_density(res_list, neighbor_list)
weight_density = calculate_cb_weight_density(res_list, neighbor_list)
r_min = 6.5
r_max = 9.5
# kappa = 5.0
min_seq_sep = 10
density_threshold = 2.6
weight_density_threshold = 3.0
density_kappa = 7.0
# phi_mediated_contact_well = np.zeros((2, 20,20))
v_mediated = 0
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
rho_i = cb_density[res1globalindex]
rho_i_weight = weight_density[res1globalindex]
for res2 in get_neighbors_within_radius(neighbor_list, res1, r_max+2.0):
res2index = get_local_index(res2)
res2chain = get_chain(res2)
res2globalindex = get_global_index(res_list, res2)
rho_j = cb_density[res2globalindex]
rho_j_weight = weight_density[res2globalindex]
# if res2index - res1index >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
if res2globalindex - res1globalindex >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
res1type = get_res_type(res_list, res1)
res2type = get_res_type(res_list, res2)
protein_gamma = protein_gamma_ijm[0][res1type][res2type]
water_gamma = water_gamma_ijm[0][res1type][res2type]
rij = get_interaction_distance(res1, res2)
_pij_protein = prot_water_switchFunc_sigmaProt(
rho_i, rho_j, density_threshold, density_kappa) * protein_gamma
_pij_water = prot_water_switchFunc_sigmaWater(
rho_i, rho_j, density_threshold, density_kappa) * water_gamma
v_mediated += (_pij_protein + _pij_water) * interaction_well(rij, r_min, r_max, kappa)
heavy_gamma = protein_gamma_ijm[0][res1type][res2type]
light_gamma = water_gamma_ijm[0][res1type][res2type]
_pij_heavy = prot_water_switchFunc_sigmaProt(
rho_i_weight, rho_j_weight, weight_density_threshold, density_kappa) * heavy_gamma
_pij_light = prot_water_switchFunc_sigmaWater(
rho_i_weight, rho_j_weight, weight_density_threshold, density_kappa) * light_gamma
v_mediated += (_pij_heavy + _pij_light) * interaction_well(rij, r_min, r_max, kappa)
return v_mediated
def compute_burial_multiDensity(structure, burial_gamma, kappa=4.0):
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
sequence = get_sequence_from_structure(structure)
cb_density = calculate_cb_density(res_list, neighbor_list)
weight_density = calculate_cb_weight_density(res_list, neighbor_list)
rho_table = [[0.0, 3.0], [3.0, 6.0], [6.0, 9.0]]
weight_rho_table = [[0.0, 4.0], [4.0, 8.0], [8.0, 28.0]]
v_burial = 0
for i in range(3):
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
res1type = get_res_type(res_list, res1)
res1density = cb_density[res1globalindex]
res1weight = weight_density[res1globalindex]
# print res1globalindex, res1index, res1chain, res1type, res1density
v_burial += burial_gamma[i][res1type] * interaction_well(res1density, rho_table[i][0], rho_table[i][1], kappa)
v_burial += burial_gamma[i][res1type] * interaction_well(res1weight, weight_rho_table[i][0], weight_rho_table[i][1], kappa)
return v_burial
def compute_direct_family_fold(structure, f_direct, kappa=5.0):
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
sequence = get_sequence_from_structure(structure)
r_min = 4.5
r_max = 6.5
# kappa = 5
min_seq_sep = 10
# phi_pairwise_contact_well = np.zeros((20,20))
v_direct = 0
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
# print(get_interaction_atom(res1).get_vector()[2], type(get_interaction_atom(res1).get_vector()[2]))
for res2 in get_neighbors_within_radius(neighbor_list, res1, r_max+2.0):
res2index = get_local_index(res2)
res2chain = get_chain(res2)
res2globalindex = get_global_index(res_list, res2)
# if res2index - res1index >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
if res2globalindex - res1globalindex >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
res1type = get_res_type(res_list, res1)
res2type = get_res_type(res_list, res2)
rij = get_interaction_distance(res1, res2)
# gamma = gamma_ijm[0][res1type][res2type]
gamma = f_direct[res1globalindex][res2globalindex]
# phi_pairwise_contact_well[res1type][res2type] += interaction_well(rij, r_min, r_max, kappa)
v_direct += gamma * interaction_well(rij, r_min, r_max, kappa)
return v_direct
def compute_mediated_family_fold(structure, f_water, f_protein, kappa=5.0):
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
sequence = get_sequence_from_structure(structure)
cb_density = calculate_cb_density(res_list, neighbor_list)
r_min = 6.5
r_max = 9.5
# kappa = 5.0
min_seq_sep = 10
density_threshold = 2.6
density_kappa = 7.0
# phi_mediated_contact_well = np.zeros((2, 20,20))
v_mediated = 0
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
rho_i = cb_density[res1globalindex]
for res2 in get_neighbors_within_radius(neighbor_list, res1, r_max+2.0):
res2index = get_local_index(res2)
res2chain = get_chain(res2)
res2globalindex = get_global_index(res_list, res2)
rho_j = cb_density[res2globalindex]
# if res2index - res1index >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
if res2globalindex - res1globalindex >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
res1type = get_res_type(res_list, res1)
res2type = get_res_type(res_list, res2)
rij = get_interaction_distance(res1, res2)
# protein_gamma = protein_gamma_ijm[0][res1type][res2type]
# water_gamma = water_gamma_ijm[0][res1type][res2type]
protein_gamma = f_protein[res1globalindex][res2globalindex]
water_gamma = f_water[res1globalindex][res2globalindex]
_pij_protein = prot_water_switchFunc_sigmaProt(
rho_i, rho_j, density_threshold, density_kappa) * protein_gamma
_pij_water = prot_water_switchFunc_sigmaWater(
rho_i, rho_j, density_threshold, density_kappa) * water_gamma
v_mediated += (_pij_protein + _pij_water) * interaction_well(rij, r_min, r_max, kappa)
return v_mediated
def compute_burial_family_fold(structure, f_burial, kappa=4.0):
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
sequence = get_sequence_from_structure(structure)
cb_density = calculate_cb_density(res_list, neighbor_list)
rho_table = [[0.0, 3.0], [3.0, 6.0], [6.0, 9.0]]
v_burial = 0
for i in range(3):
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
res1type = get_res_type(res_list, res1)
res1density = cb_density[res1globalindex]
# print res1globalindex, res1index, res1chain, res1type, res1density
# b_gamma = burial_gamma[i][res1type]
b_gamma = f_burial[res1globalindex][i]
v_burial += b_gamma * interaction_well(res1density, rho_table[i][0], rho_table[i][1], kappa)
return v_burial
def get_pre_and_post(res_list, index):
n = len(res_list)
if index == 0:
return res_list[0], res_list[1]
elif index == n - 1:
return res_list[index-1], res_list[index]
else:
return res_list[index-1], res_list[index+1]
def compute_direct_multiLetter(structure, gamma_ijm, kappa=5.0):
# gamma_ij_multiLetter = np.zeros((4, 4, 20, 20))
gamma_ij_multiLetter = np.zeros((80, 80))
for i in range(4):
for j in range(4):
for ii in range(20):
for jj in range(20):
gamma_ij_multiLetter[i*20+ii][j*20+jj] = gamma_ijm[0][ii][jj]
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
sequence = get_sequence_from_structure(structure)
r_min = 4.5
r_max = 6.5
# kappa = 5
min_seq_sep = 10
# phi_pairwise_contact_well = np.zeros((20,20))
v_direct = 0
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
# print(get_interaction_atom(res1).get_vector()[2], type(get_interaction_atom(res1).get_vector()[2]))
for res2 in get_neighbors_within_radius(neighbor_list, res1, r_max+2.0):
res2index = get_local_index(res2)
res2chain = get_chain(res2)
res2globalindex = get_global_index(res_list, res2)
# if res2index - res1index >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
if res2globalindex - res1globalindex >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
res1type = get_res_type(res_list, res1)
res2type = get_res_type(res_list, res2)
res1_pre, res1_post = get_pre_and_post(res_list, res1globalindex)
res2_pre, res2_post = get_pre_and_post(res_list, res2globalindex)
res1_neighbor_type = get_neighbor_res_type(res1_pre, res1_post)
res2_neighbor_type = get_neighbor_res_type(res2_pre, res2_post)
rij = get_interaction_distance(res1, res2)
gamma = gamma_ij_multiLetter[res1_neighbor_type*20+res1type][res2_neighbor_type*20+res2type]
# phi_pairwise_contact_well[res1type][res2type] += interaction_well(rij, r_min, r_max, kappa)
v_direct += gamma * interaction_well(rij, r_min, r_max, kappa)
return v_direct
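# The "multiLetter" variants extend the 20-letter residue alphabet to 80
# letters by pairing each residue with the 4-class type of its sequence
# neighbours (flat index = neighbour_class*20 + residue_type). Since the
# 80x80 table above is just the plain 20x20 gammas tiled, the energy reduces
# to compute_direct; supplying distinct per-class gammas would differentiate it.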
def compute_mediated_multiLetter(structure, protein_gamma_ijm, water_gamma_ijm, kappa=5.0):
# protein_gamma_ij_multiLetter = np.zeros((4, 4, 20, 20))
# water_gamma_ij_multiLetter = np.zeros((4, 4, 20, 20))
# for i in range(4):
# for j in range(4):
# protein_gamma_ij_multiLetter[i][j] = protein_gamma_ijm[0]
# water_gamma_ij_multiLetter[i][j] = water_gamma_ijm[0]
protein_gamma_ij_multiLetter = np.zeros((80, 80))
water_gamma_ij_multiLetter = np.zeros((80, 80))
for i in range(4):
for j in range(4):
for ii in range(20):
for jj in range(20):
protein_gamma_ij_multiLetter[i*20+ii][j*20+jj] = protein_gamma_ijm[0][ii][jj]
water_gamma_ij_multiLetter[i*20+ii][j*20+jj] = water_gamma_ijm[0][ii][jj]
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
sequence = get_sequence_from_structure(structure)
cb_density = calculate_cb_density(res_list, neighbor_list)
r_min = 6.5
r_max = 9.5
# kappa = 5.0
min_seq_sep = 10
density_threshold = 2.6
density_kappa = 7.0
# phi_mediated_contact_well = np.zeros((2, 20,20))
v_mediated = 0
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
rho_i = cb_density[res1globalindex]
for res2 in get_neighbors_within_radius(neighbor_list, res1, r_max+2.0):
res2index = get_local_index(res2)
res2chain = get_chain(res2)
res2globalindex = get_global_index(res_list, res2)
rho_j = cb_density[res2globalindex]
# if res2index - res1index >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
if res2globalindex - res1globalindex >= min_seq_sep or (res1chain != res2chain and res2globalindex > res1globalindex):
res1type = get_res_type(res_list, res1)
res2type = get_res_type(res_list, res2)
rij = get_interaction_distance(res1, res2)
res1_pre, res1_post = get_pre_and_post(res_list, res1globalindex)
res2_pre, res2_post = get_pre_and_post(res_list, res2globalindex)
res1_neighbor_type = get_neighbor_res_type(res1_pre, res1_post)
res2_neighbor_type = get_neighbor_res_type(res2_pre, res2_post)
gamma_p = protein_gamma_ij_multiLetter[res1_neighbor_type*20+res1type][res2_neighbor_type*20+res2type]
gamma_w = water_gamma_ij_multiLetter[res1_neighbor_type*20+res1type][res2_neighbor_type*20+res2type]
_pij_protein = prot_water_switchFunc_sigmaProt(
rho_i, rho_j, density_threshold, density_kappa) * gamma_p
_pij_water = prot_water_switchFunc_sigmaWater(
rho_i, rho_j, density_threshold, density_kappa) * gamma_w
v_mediated += (_pij_protein + _pij_water) * interaction_well(rij, r_min, r_max, kappa)
return v_mediated
def compute_burial_multiLetter(structure, burial_gamma, kappa=4.0):
    # Broadcast the (3, 20) burial gammas to all four neighbour classes; the
    # original overwrote its own argument using an undefined global.
    burial_gamma_multiLetter = np.zeros((4, 3, 20))
    for i in range(4):
        burial_gamma_multiLetter[i] = burial_gamma
res_list = get_res_list(structure)
neighbor_list = get_neighbor_list(structure)
sequence = get_sequence_from_structure(structure)
cb_density = calculate_cb_density(res_list, neighbor_list)
rho_table = [[0.0, 3.0], [3.0, 6.0], [6.0, 9.0]]
v_burial = 0
for i in range(3):
for res1globalindex, res1 in enumerate(res_list):
res1index = get_local_index(res1)
res1chain = get_chain(res1)
res1type = get_res_type(res_list, res1)
res1_pre, res1_post = get_pre_and_post(res_list, res1globalindex)
res1_neighbor_type = get_neighbor_res_type(res1_pre, res1_post)
res1density = cb_density[res1globalindex]
# print res1globalindex, res1index, res1chain, res1type, res1density
v_burial += burial_gamma_multiLetter[res1_neighbor_type][i][res1type] * interaction_well(res1density, rho_table[i][0], rho_table[i][1], kappa)
return v_burial
# def compute_single_helix_orientation(structure):
# res_list = get_res_list(structure)
# for res1globalindex, res1 in enumerate(res_list):
# for res2globalindex, res2 in enumerate(res_list):
def read_beta_parameters():
### directly copied from Nick Schafer's
# os.chdir(parameter_directory)
in_anti_HB = open("anti_HB", 'r').readlines()
in_anti_NHB = open("anti_NHB", 'r').readlines()
in_para_HB = open("para_HB", 'r').readlines()
in_para_one = open("para_one", 'r').readlines()
in_anti_one = open("anti_one", 'r').readlines()
p_par = np.zeros((20))
p_anti = np.zeros((20))
p_antihb = np.zeros((20,20,2))
p_antinhb = np.zeros((20,20,2))
p_parhb = np.zeros((20,20,2))
for i in range(20):
p_par[i] = float(in_para_one[i].strip())
p_anti[i] = float(in_anti_one[i].strip())
for j in range(20):
p_antihb[i][j][0] = float(in_anti_HB[i].strip().split()[j])
p_antinhb[i][j][0] = float(in_anti_NHB[i].strip().split()[j])
p_parhb[i][j][0] = float(in_para_HB[i].strip().split()[j])
for i in range(20):
for j in range(20):
p_antihb[i][j][1] = float(in_anti_HB[i+21].strip().split()[j])
p_antinhb[i][j][1] = float(in_anti_NHB[i+21].strip().split()[j])
p_parhb[i][j][1] = float(in_para_HB[i+21].strip().split()[j])
return p_par, p_anti, p_antihb, p_antinhb, p_parhb
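# Layout assumption for the parameter files: anti_HB, anti_NHB and para_HB
# each contain two 20x20 blocks (one per hydrogen-bonding well) separated by
# a single spacer line, which is why the second block is read with the i+21
# row offset above.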
def get_pap_gamma_APH(donor_idx, acceptor_idx, chain_i, chain_j, gamma_APH):
# if chain_i == chain_j and abs(j-i) < 13 or abs(j-i) > 16:
# if abs(j-i) < 13 or abs(j-i) > 16:
# if i-j < 13 or i-j > 16:
# if (donor_idx - acceptor_idx >= 13 and donor_idx - acceptor_idx <= 16) or chain_i != chain_j:
if (donor_idx - acceptor_idx >= 13 and donor_idx - acceptor_idx <= 16):
return gamma_APH
else:
return 0
def get_pap_gamma_AP(donor_idx, acceptor_idx, chain_i, chain_j, gamma_AP):
if (donor_idx - acceptor_idx >= 17) or chain_i != chain_j:
# if (donor_idx - acceptor_idx >= 17):
return gamma_AP
else:
return 0
def compute_pap1(structure):
all_res = list(structure.get_residues())
chains = [res.get_full_id()[2] for res in all_res]
n = len(all_res)
eta = 7
r0 = 8
e_aph = 0
e_ap = 0
for i in range(n-4):
for j in range(4, n):
chain_i = chains[i]
chain_j = chains[j]
gamma_APH = get_pap_gamma_APH(j, i, chain_i, chain_j, 1)
gamma_AP = get_pap_gamma_AP(j, i, chain_i, chain_j, 0.4)
dis = all_res[i]["CA"] - all_res[j]["CA"]
dis_p4 = all_res[i+4]["CA"] - all_res[j-4]["CA"]
rho1 = 0.5*(1+np.tanh(eta*(r0-dis)))
rho2 = 0.5*(1+np.tanh(eta*(r0-dis_p4)))
e_aph += -gamma_APH * rho1 * rho2
e_ap += -gamma_AP * rho1 * rho2
# if show > 1e-6:
# print(i, j, show, rho1, rho2)
# if i == 0:
# print(i, j, show, rho1, rho2)
# break
# print(a_)
return e_aph, e_ap
def get_pap_gamma_P(donor_idx, acceptor_idx, chain_i, chain_j, gamma_P):
if (donor_idx - acceptor_idx >= 9) or chain_i != chain_j:
return gamma_P
else:
return 0
def compute_pap2(structure):
all_res = list(structure.get_residues())
chains = [res.get_full_id()[2] for res in all_res]
n = len(all_res)
eta = 7
r0 = 8
e_p = 0
for i in range(n-4):
for j in range(n-4):
chain_i = chains[i]
chain_j = chains[j]
            gamma_P = get_pap_gamma_P(j, i, chain_i, chain_j, 0.4)
dis = all_res[i]["CA"] - all_res[j]["CA"]
dis_p4 = all_res[i+4]["CA"] - all_res[j+4]["CA"]
rho1 = 0.5*(1+np.tanh(eta*(r0-dis)))
rho2 = 0.5*(1+np.tanh(eta*(r0-dis_p4)))
e_p += -gamma_P * rho1 * rho2
# if show > 1e-6:
# print(i, j, show, rho1, rho2)
# if i == 0:
# print(i, j, show, rho1, rho2)
# break
# print(a_)
return e_p
def dis(a, b):
return ((a[0]-b[0])**2 + (a[1]-b[1])**2 + (a[2]-b[2])**2)**0.5
def compute_side_chain_energy_for_x(x, means, precisions_chol, log_det, weights):
    n_components, n_features = means.shape
    mean_dot_precisions_chol = np.zeros((n_components, n_features))
    log_prob = np.zeros(n_components)
for i in range(n_components):
mean_dot_precisions_chol[i] = np.dot(means[i], precisions_chol[i])
y = np.dot(x, precisions_chol[i]) - mean_dot_precisions_chol[i]
log_prob[i] = np.sum(np.square(y))
log_gaussian_prob = -.5 * (n_features * np.log(2 * np.pi) + log_prob) + log_det
c = np.max(log_gaussian_prob + np.log(weights))
score = np.log(np.sum(np.exp(log_gaussian_prob + np.log(weights) - c))) + c
kt = 1
E_side_chain = -score*kt
# print(E_side_chain)
return E_side_chain
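# The score above is a Gaussian-mixture log-likelihood evaluated through the
# Cholesky factors of the precision matrices (the same parametrisation
# scikit-learn's GaussianMixture uses): per-component log-probabilities are
# combined with the mixture weights via a log-sum-exp, where the shift `c`
# guards against underflow, and the side-chain energy is -kT*log p(x), kT = 1.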
def read_fasta(fastaFile):
seq = ""
with open(fastaFile, "r") as f:
for line in f:
if line[0] == ">":
pass
else:
# print(line)
seq += line.strip()
return seq
def compute_side_chain_energy(structure, seq):
E_side_chain_energy = 0
# parser = PDBParser()
# pdbFile = "/Users/weilu/Research/server/feb_2020/compare_side_chain_with_and_without/native/256_cbd_submode_7_debug/crystal_structure.pdb"
# fastaFile = "/Users/weilu/Research/server/feb_2020/compare_side_chain_with_and_without/native/256_cbd_submode_7_debug/crystal_structure.fasta"
# structure = parser.get_structure("x", pdbFile)
print(seq)
means_dic = {}
precisions_chol_dic = {}
log_det_dic = {}
weights_dic = {}
res_type_list = ['GLY', 'ALA', 'VAL', 'CYS', 'PRO', 'LEU', 'ILE', 'MET', 'TRP', 'PHE', 'SER', 'THR', 'TYR', 'GLN', 'ASN', 'LYS', 'ARG', 'HIS', 'ASP', 'GLU']
for res_type in res_type_list:
if res_type == "GLY":
continue
means = np.loadtxt(f"/Users/weilu/opt/parameters/side_chain/{res_type}_means.txt")
precisions_chol = np.loadtxt(f"/Users/weilu/opt/parameters/side_chain/{res_type}_precisions_chol.txt").reshape(3,3,3)
log_det = np.loadtxt(f"/Users/weilu/opt/parameters/side_chain/{res_type}_log_det.txt")
weights = np.loadtxt(f"/Users/weilu/opt/parameters/side_chain/{res_type}_weights.txt")
means_dic[res_type] = means
precisions_chol_dic[res_type] = precisions_chol
log_det_dic[res_type] = log_det
weights_dic[res_type] = weights
for res in structure.get_residues():
if res.get_full_id()[1] != 0:
continue
# x_com = get_side_chain_center_of_mass(res)
# resname = res.resname
resname = one_to_three(seq[res.id[1]-1])
if resname == "GLY":
continue
try:
n = res["N"].get_coord()
ca = res["CA"].get_coord()
c = res["C"].get_coord()
except:
continue
x_com = res["CB"].get_coord()
x = np.array([dis(x_com, n), dis(x_com, ca), dis(x_com, c)])
r_ca_com = dis(x_com, ca)
# resname = "TYR"
if resname == "GLY":
side_chain_energy = 0
else:
side_chain_energy = compute_side_chain_energy_for_x(x, means_dic[resname],
precisions_chol_dic[resname],
log_det_dic[resname],
weights_dic[resname])
if abs(side_chain_energy) > 10:
print(res.id[1], resname, x_com, x, round(side_chain_energy,3), round(r_ca_com,3))
# print(res.id[1], resname, x_com, round(side_chain_energy,3), round(r_ca_com,3))
E_side_chain_energy += side_chain_energy
return E_side_chain_energy
def get_side_chain_center_of_mass(atoms):
# ensure complete first
total = np.array([0., 0., 0.])
total_mass = 0
for atom in atoms:
if atom.get_name() in ["N", "CA", "C", "O", "OXT"]:
continue
if atom.element == "H":
continue
total += atom.mass * atom.get_coord()
total_mass += atom.mass
# print(atom.get_name(), atom.get_coord())
x_com = total / total_mass
return x_com
def compute_side_chain_exclude_volume_energy(structure, fileLocation='./cbd_cbd_real_contact_symmetric.csv'):
gamma_se_map_1_letter = { 'A': 0, 'R': 1, 'N': 2, 'D': 3, 'C': 4,
'Q': 5, 'E': 6, 'G': 7, 'H': 8, 'I': 9,
'L': 10, 'K': 11, 'M': 12, 'F': 13, 'P': 14,
'S': 15, 'T': 16, 'W': 17, 'Y': 18, 'V': 19}
r_min_table = np.zeros((20,20))
r_max_table = np.zeros((20,20))
# fileLocation = '/Users/weilu/Research/server/mar_2020/cmd_cmd_exclude_volume/cbd_cbd_real_contact_symmetric.csv'
df = pd.read_csv(fileLocation)
for i, line in df.iterrows():
res1 = line["ResName1"]
res2 = line["ResName2"]
r_min_table[gamma_se_map_1_letter[three_to_one(res1)]][gamma_se_map_1_letter[three_to_one(res2)]] = line["r_min"]
r_min_table[gamma_se_map_1_letter[three_to_one(res2)]][gamma_se_map_1_letter[three_to_one(res1)]] = line["r_min"]
r_max_table[gamma_se_map_1_letter[three_to_one(res1)]][gamma_se_map_1_letter[three_to_one(res2)]] = line["r_max"]
r_max_table[gamma_se_map_1_letter[three_to_one(res2)]][gamma_se_map_1_letter[three_to_one(res1)]] = line["r_max"]
all_res = get_res_list(structure)
n = len(all_res)
e = 0
for i in range(n):
for j in range(i+1, n):
res1 = all_res[i]
res2 = all_res[j]
resname1 = res1.resname
resname2 = res2.resname
if resname1 == "GLY" or resname2 == "GLY":
continue
cbd_1 = get_side_chain_center_of_mass(res1.get_atoms())
cbd_2 = get_side_chain_center_of_mass(res2.get_atoms())
r = dis(cbd_1, cbd_2)
r_max = r_max_table[gamma_se_map_1_letter[three_to_one(resname1)]][gamma_se_map_1_letter[three_to_one(resname2)]]
r_min = r_min_table[gamma_se_map_1_letter[three_to_one(resname1)]][gamma_se_map_1_letter[three_to_one(resname2)]]
if r_max - r_min < 0.1:
print(res1, res2, r_max, r_min)
e += np.heaviside(r_max-r, 0)*((r-r_max)/(r_max-r_min))**2
print(res1, cbd_1)
return e
|
One of Staffordshire's fastest-growing PR agencies continues to go from strength to strength, rivalling some of the industry's bigwigs.
Their service encompasses not only traditional PR practices but also all areas of digital. More than just your usual Twitter and Facebook job, they have a much broader and more varied way of making sure their clients get the most out of emerging technologies.
Their focus is content-led, be it offline or online - a great business model which has seen them double in size over the last couple of years. To sustain growth, they are looking to recruit an account manager to work across consumer and B2B clients. You'll preferably have PR agency experience; however, for the right person they will consider in-house and journalistic experience.
Whatever your path to date, you'll need to know your stuff and have a passion for PR, both offline and digital. You'll have the training, education and experience to confidently and intelligently pitch and present to senior clients, all without taking yourself too seriously. The culture is very supportive (lots of training and development), professional and friendly.
|
from SimPEG import Utils, np
from scipy.constants import mu_0, epsilon_0
from simpegEM.Utils.EMUtils import k, omega
def getKc(freq,sigma,a,b,mu=mu_0,eps=epsilon_0):
a = float(a)
b = float(b)
# return 1./(2*np.pi) * np.sqrt(b / a) * np.exp(-1j*k(freq,sigma,mu,eps)*(b-a))
return np.sqrt(b / a) * np.exp(-1j*k(freq,sigma,mu,eps)*(b-a))
def _r2(xyz):
return np.sum(xyz**2,1)
def _getCasingHertzMagDipole(srcloc,obsloc,freq,sigma,a,b,mu=mu_0*np.ones(3),eps=epsilon_0,moment=1.):
Kc1 = getKc(freq,sigma[1],a,b,mu[1],eps)
nobs = obsloc.shape[0]
dxyz = obsloc - np.c_[np.ones(nobs)]*np.r_[srcloc]
r2 = _r2(dxyz[:,:2])
sqrtr2z2 = np.sqrt(r2 + dxyz[:,2]**2)
k2 = k(freq,sigma[2],mu[2],eps)
return Kc1 * moment / (4.*np.pi) *np.exp(-1j*k2*sqrtr2z2) / sqrtr2z2
def _getCasingHertzMagDipoleDeriv_r(srcloc,obsloc,freq,sigma,a,b,mu=mu_0*np.ones(3),eps=epsilon_0,moment=1.):
HertzZ = _getCasingHertzMagDipole(srcloc,obsloc,freq,sigma,a,b,mu,eps,moment)
nobs = obsloc.shape[0]
dxyz = obsloc - np.c_[np.ones(nobs)]*np.r_[srcloc]
r2 = _r2(dxyz[:,:2])
sqrtr2z2 = np.sqrt(r2 + dxyz[:,2]**2)
k2 = k(freq,sigma[2],mu[2],eps)
return -HertzZ * np.sqrt(r2) / sqrtr2z2 * (1j*k2 + 1./ sqrtr2z2)
def _getCasingHertzMagDipoleDeriv_z(srcloc,obsloc,freq,sigma,a,b,mu=mu_0*np.ones(3),eps=epsilon_0,moment=1.):
HertzZ = _getCasingHertzMagDipole(srcloc,obsloc,freq,sigma,a,b,mu,eps,moment)
nobs = obsloc.shape[0]
dxyz = obsloc - np.c_[np.ones(nobs)]*np.r_[srcloc]
r2z2 = _r2(dxyz)
sqrtr2z2 = np.sqrt(r2z2)
k2 = k(freq,sigma[2],mu[2],eps)
return -HertzZ*dxyz[:,2] /sqrtr2z2 * (1j*k2 + 1./sqrtr2z2)
def _getCasingHertzMagDipole2Deriv_z_r(srcloc,obsloc,freq,sigma,a,b,mu=mu_0*np.ones(3),eps=epsilon_0,moment=1.):
HertzZ = _getCasingHertzMagDipole(srcloc,obsloc,freq,sigma,a,b,mu,eps,moment)
dHertzZdr = _getCasingHertzMagDipoleDeriv_r(srcloc,obsloc,freq,sigma,a,b,mu,eps,moment)
nobs = obsloc.shape[0]
dxyz = obsloc - np.c_[np.ones(nobs)]*np.r_[srcloc]
r2 = _r2(dxyz[:,:2])
r = np.sqrt(r2)
z = dxyz[:,2]
sqrtr2z2 = np.sqrt(r2 + z**2)
k2 = k(freq,sigma[2],mu[2],eps)
return dHertzZdr*(-z/sqrtr2z2)*(1j*k2+1./sqrtr2z2) + HertzZ*(z*r/sqrtr2z2**3)*(1j*k2 + 2./sqrtr2z2)
def _getCasingHertzMagDipole2Deriv_z_z(srcloc,obsloc,freq,sigma,a,b,mu=mu_0*np.ones(3),eps=epsilon_0,moment=1.):
HertzZ = _getCasingHertzMagDipole(srcloc,obsloc,freq,sigma,a,b,mu,eps,moment)
dHertzZdz = _getCasingHertzMagDipoleDeriv_z(srcloc,obsloc,freq,sigma,a,b,mu,eps,moment)
nobs = obsloc.shape[0]
dxyz = obsloc - np.c_[np.ones(nobs)]*np.r_[srcloc]
r2 = _r2(dxyz[:,:2])
r = np.sqrt(r2)
z = dxyz[:,2]
sqrtr2z2 = np.sqrt(r2 + z**2)
k2 = k(freq,sigma[2],mu[2],eps)
return (dHertzZdz*z + HertzZ)/sqrtr2z2*(-1j*k2 - 1./sqrtr2z2) + HertzZ*z/sqrtr2z2**3*(1j*k2*z + 2.*z/sqrtr2z2)
def getCasingEphiMagDipole(srcloc,obsloc,freq,sigma,a,b,mu=mu_0*np.ones(3),eps=epsilon_0,moment=1.):
    # Use the permeability of the outer medium (index 2), where the fields are
    # evaluated -- consistent with k2 = k(freq, sigma[2], mu[2], eps) in the
    # Hertz potential; multiplying by the full mu array would broadcast wrongly.
    return 1j * omega(freq) * mu[2] * _getCasingHertzMagDipoleDeriv_r(srcloc,obsloc,freq,sigma,a,b,mu,eps,moment)
def getCasingHrMagDipole(srcloc,obsloc,freq,sigma,a,b,mu=mu_0*np.ones(3),eps=epsilon_0,moment=1.):
return _getCasingHertzMagDipole2Deriv_z_r(srcloc,obsloc,freq,sigma,a,b,mu,eps,moment)
def getCasingHzMagDipole(srcloc,obsloc,freq,sigma,a,b,mu=mu_0*np.ones(3),eps=epsilon_0,moment=1.):
d2HertzZdz2 = _getCasingHertzMagDipole2Deriv_z_z(srcloc,obsloc,freq,sigma,a,b,mu,eps,moment)
k2 = k(freq,sigma[2],mu[2],eps)
HertzZ = _getCasingHertzMagDipole(srcloc,obsloc,freq,sigma,a,b,mu,eps,moment)
return d2HertzZdz2 + k2**2 * HertzZ
def getCasingBrMagDipole(srcloc,obsloc,freq,sigma,a,b,mu=mu_0*np.ones(3),eps=epsilon_0,moment=1.):
return mu_0 * getCasingHrMagDipole(srcloc,obsloc,freq,sigma,a,b,mu,eps,moment)
def getCasingBzMagDipole(srcloc,obsloc,freq,sigma,a,b,mu=mu_0*np.ones(3),eps=epsilon_0,moment=1.):
return mu_0 * getCasingHzMagDipole(srcloc,obsloc,freq,sigma,a,b,mu,eps,moment)
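# --- Usage sketch (not part of the original module) ---
# A minimal, illustrative call of the casing responses above. All parameter
# values here are assumptions for demonstration: sigma holds the
# conductivities of the inner region, the steel casing, and the formation,
# and a/b are the inner/outer casing radii in meters.
if __name__ == '__main__':
    srcloc = np.r_[0., 0., -100.]     # magnetic dipole source location (x, y, z)
    obsloc = np.c_[10., 0., -100.]    # a single observation point, shape (1, 3)
    sigma = np.r_[1e-8, 5.5e6, 1e-2]  # [inside, casing, formation] in S/m
    Bz = getCasingBzMagDipole(srcloc, obsloc, 100., sigma, 0.1, 0.11)
    print(Bz)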
|
Traveling with pets can be challenging, as many hotels and resorts would rather not cater to our furry friends. Still, pet-friendly accommodations are out there, including a surprising number of fine resorts that will gladly accommodate animal companions. In Maine, several resorts and hotels around the state offer special pet packages that encourage you to bring your friend along on your next visit.
Sheepscot Harbor is a large resort located on Davis Island off the coast of Maine. The resort sells and rents properties, and the rental accommodations are available year round. Rooms at the 13-room inn all have balconies with harbor views and are equipped with a refrigerator, microwave, television and wireless Internet service. The lodge suites are condominium-style accommodations with small kitchens, a living room and a patio or balcony. The 1920s cottages are located waterside with the same amenities as the inn and lodge rooms. Pets are welcome in all accommodations and the resort has a Pet Pantry and Companion Concierge Service that provides pet care while you are out shopping and sightseeing. Services available for pets include sitting, walking and massage, and the resort will accommodate any special dietary need with advance notice. For the humans, there is a restaurant onsite, Bintliff's Ocean Grille, serving brunch and dinner and providing room service.
Located in Kennebunkport along the southern Maine coast, the Yachtsman offers luxury accommodations within walking distance of the town and beaches. Rooms feature king-sized beds, a sitting area and writing desk and all have private patios with a view of the marina on the Kennebunk River. Complimentary breakfast and afternoon tea are served daily. The lodge does not have an onsite restaurant but will arrange for reservations and transportation to local eateries in Kennebunkport. The lodge offers several packages including the Pets are Guests Too Package, available year round. A two-night stay is required and pets receive toys and treats, and owners receive continental breakfast, afternoon tea and dinner for two at a local restaurant.
The Riverhouse Hotel in Camden offers hotel accommodations as well as an inn with larger rooms for lengthier stays. Camden is located along the central coast of Maine and the hotel is in town, near the harbor and local shops. A complimentary continental breakfast is provided for guests and the hotel has an onsite fitness center and indoor pool and spa. The 36 hotel rooms are equipped with free wireless Internet service, refrigerators, coffee makers and, in some rooms, microwave ovens. Children under the age of 16 stay at the hotel for free. There is no in-house restaurant but there are many dining options in town. The hotel has a V.I.P. (Very Important Pooch) Package, which includes mats, food and water bowls, a dog bed in your room, as well as toys and treats. The hotel charges a small pet fee. Arrangements to bring your dog must be made in advance.
Based in coastal Maine, Irene Lang has more than 20 years of experience as a professional business writer. With an M.B.A. from Rutgers University, Lang’s writing has primarily been in the fields of marketing, health care and travel. Her work has been published online at various websites.
Lang, Irene. "Pet Friendly Resorts in Maine." Travel Tips - USA Today, https://traveltips.usatoday.com/pet-friendly-resorts-maine-23205.html. Accessed 25 April 2019.
|
# -*- coding: utf-8 -*-
"""
On the Subject of Passwords
:Copyright: 2015 Jochen Kupperschmidt
:License: MIT, see LICENSE for details.
"""
from string import ascii_lowercase
PASSWORDS = frozenset([
'about', 'after', 'again', 'below', 'could',
'every', 'first', 'found', 'great', 'house',
'large', 'learn', 'never', 'other', 'place',
'plant', 'point', 'right', 'small', 'sound',
'spell', 'still', 'study', 'their', 'there',
'these', 'thing', 'think', 'three', 'water',
'where', 'which', 'world', 'would', 'write',
])
def ask_for_letters_and_match_passwords(ui, position_index, passwords):
letters = ask_for_letters_in_position(ui, position_index)
matches = list(get_passwords_matching_letters_in_position(passwords,
position_index,
letters))
if not matches:
ui.display_instruction('No password matches!')
return
if len(matches) == 1:
ui.display_instruction(matches[0])
return
print()
print(' Multiple candidates:')
for match in matches:
print(' ->', match)
ask_for_letters_and_match_passwords(ui, position_index + 1, matches)
def ask_for_letters_in_position(ui, position_index):
question_label = 'Which letters can be chosen at position {:d}?' \
.format(position_index + 1)
values = ui.ask_for_text(question_label)
return extract_letters(values)
def extract_letters(value):
"""Select and normalize ASCII letters, drop anything else."""
lowercase_values = frozenset(map(str.lower, value))
return lowercase_values.intersection(ascii_lowercase)
def get_passwords_matching_letters_in_position(passwords, position, letters):
"""Return all passwords that contain any of the given letters at the
indicated position.
"""
predicate = lambda password: password[position] in letters
return filter(predicate, passwords)
def execute(ui):
ask_for_letters_and_match_passwords(ui, 0, PASSWORDS)
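# --- Usage sketch (not part of the module) ---
# A minimal console stand-in for the `ui` object, which the module assumes
# exposes ask_for_text() and display_instruction(); the real implementation
# is provided elsewhere in the surrounding program.
if __name__ == '__main__':
    class ConsoleUI:
        def ask_for_text(self, question_label):
            return input(question_label + ' ')
        def display_instruction(self, text):
            print('=>', text)
    execute(ConsoleUI())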
|
Alexandria, VA (April 30, 2018) – The Transportation Intermediaries Association (TIA) has released its Fourth Quarter 2017 TIA 3PL Market Report. The results show that participants’ total shipments decreased slightly compared to Q3 2017 and gross profit margin percentage experienced a decrease of 90 basis points.
Combined total shipments of truckload, intermodal, and LTL increased 0.8 percent. Companies with less than $16M annual revenue bolstered total shipments by 20.5%, with a corresponding revenue improvement of 5.2% over Q3 2017 activities.
The report contains rolling eight-quarter trends, fuel price comparisons, and enables 3PLs and industry observers to view how the industry is performing as well as to compare their business to companies of a similar size. Additionally, the report includes detailed 3PL activities by transportation mode and measures comparative volume, revenue, margin and margin percentages, quarterly and year over year.
###

The Transportation Intermediaries Association (TIA) is the professional organization of the $166.1 billion third-party logistics industry. TIA is the only organization exclusively representing transportation intermediaries of all disciplines, doing business in domestic and international commerce. TIA is the voice of the 3PL industry to shippers, carriers, government officials and international organizations. TIA is the United States member of the International Federation of Freight Forwarder Associations, FIATA.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright(c) 2014 Krister Hedfors
#
#
# Example:
# Portscan hosta, hostb, hostc (a->a, a->b, a->c, b->a, b->b, ...)
# on all tcp port numbers listening on at least one of hosta, hostb, hostc:
#
# $ fab -H hosta,hostb,hostc list_open_ports > ports.txt
# $ plist="`grep out:.. ports.txt | cut -d' ' -f4 | sort -nu | tr '\n' ' '`"
# $ fab -H hosta,hostb,hostc portscan:"hosta hostb hostc $plist"
#
# You can also compare the results of two extensive portscans between various
# src and dst addresses using the regular `diff` command.
#
import zlib
from fabric.api import task
from fabric.api import run
from fabric.api import hide
from fabric.tasks import Task
class Portscan(Task):
'''
example: fab portscan:"127.0.0.1 10.0.0.2-100 21-23 25 80 443"
'''
name = 'portscan'
def run(self, hosts_and_ports):
portscanner = open('nbportscan.py').read()
cmd = 'python -c "{0}" {1}'.format(
portscanner, hosts_and_ports
)
with hide('running'):
run(cmd)
class ListOpenPorts(Task):
'''
show listening TCP-ports
'''
name = 'list_open_ports'
def run(self):
cmd = "netstat -nlt"
cmd += r"|sed -rne 's/.* ([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+):([0-9]+).*/\1 \2/p'"
with hide('running'):
run(cmd)
portscan = Portscan()
list_open_ports = ListOpenPorts()
|
Road signs in the Philippines are standardized in the Road Signs and Pavement Markings Manual, which plays the same role that the TSRGD 2011, the TSRGD 2016 and the various chapters of the Traffic Signs Manual play in the UK.
This is the official and ultimate list of traffic signs in the Philippines, with pictures and descriptions. The list will guide traffic enforcers, drivers and pedestrians toward proper traffic flow and road safety. For comparison, the Manual of Traffic Signs is the best reference for information on United States traffic signs, lane markings and other road markings, and offers a comprehensive listing of the most commonly used traffic signs in the United States. Also covered here are parking rules and restrictions relating to vehicle parking in the Philippines, and driving emergencies: what to do during mechanical failures and accidents.
Traffic signs vary among English-speaking countries. Tanzania, Zambia and Zimbabwe are all SADC members that drive on the left and use the SADC Road Traffic Signs Manual, and thus have identical road signs. Among the differences, Australia and the Philippines use rectangular signs for temporary conditions. Roadway signs in the United States increasingly use symbols rather than words to convey their message; symbols provide instant communication with roadway users, overcome language barriers, and are becoming standard for traffic control devices throughout the world.
In the Philippines, a streamlined process for getting a full license has been introduced, and everyone seems happy with the results; really, there is no downside to the new process. In the UK, the current Manual replaces the 1996 Traffic Signs Manual, published by the Department of the Environment; Chapter 8 of this Manual contains some modifications from the Chapter 8 published in October 2008.
|
# Given an array with n objects colored red, white or blue, sort them in-place so that objects of the same color are adjacent, with the colors in the order red, white and blue.
# Here, we will use the integers 0, 1, and 2 to represent the color red, white, and blue respectively.
# Note: You are not supposed to use the library's sort function for this problem.
# Example:
# Input: [2,0,2,1,1,0]
# Output: [0,0,1,1,2,2]
# Follow up:
# A rather straight forward solution is a two-pass algorithm using counting sort.
# First, iterate the array counting the number of 0's, 1's, and 2's, then overwrite the array with that many 0's, then 1's, followed by 2's.
# Could you come up with a one-pass algorithm using only constant space?
from typing import List

class Solution:
    def sortColors(self, nums: List[int]) -> None:
        """
        Do not return anything, modify nums in-place instead.
        """
        # n0/n1/n2 each point to the current end of the subarray of 0s/1s/2s
        n0 = -1
        n1 = -1
        n2 = -1
for i in range(len(nums)):
if nums[i] == 0:
n2 += 1
n1 += 1
n0 += 1
nums[n2] = 2
nums[n1] = 1
nums[n0] = 0
elif nums[i] == 1:
n2 += 1
n1 += 1
nums[n2] = 2
nums[n1] = 1
elif nums[i] == 2:
n2 += 1
nums[n2] = 2
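# For comparison, the follow-up's one-pass, constant-space requirement is
# often met with the classic Dutch national flag partition. A sketch, not
# part of the original solution:
class SolutionDutchFlag:
    def sortColors(self, nums):
        # low: next slot for a 0; high: next slot for a 2; mid scans the array.
        low, mid, high = 0, 0, len(nums) - 1
        while mid <= high:
            if nums[mid] == 0:
                nums[low], nums[mid] = nums[mid], nums[low]
                low += 1
                mid += 1
            elif nums[mid] == 1:
                mid += 1
            else:  # nums[mid] == 2
                nums[mid], nums[high] = nums[high], nums[mid]
                high -= 1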
|
Fifty-six-year-old finance broker Jarvis from Kelowna has many hobbies and interests, including learning an instrument and dominoes. He intends to stop working and take the family to a number of great heritage-listed places on earth, including the Birthplace of Jesus: Church of the Nativity and the Pilgrimage Route.
Thought Power is inside all of us; it is the energy that we all produce with our thoughts. We can generate our own reality with these thoughts. You are in command of your thoughts, whether they are wonderful or unpleasant. Whatever you set your mind to, you can make happen; your thoughts have the power to get you anything you want. Whatever you fix your mind on, you can have.
Thirty-eight-year-old science technician Stanforth from Gaspé enjoys amateur radio and cigar smoking. In the past year he completed a journey to the San Marino Historic Centre and Mount Titano. It is a company whose sole purpose is Apple products; whether it's iPhones or iPods, the site is entirely about Apple merchandise. If you are self-employed, you derive the revenue after deducting expenses and taxes.
RULE #4: Determine your personal motivator.
Borrowers can use the cash gained from tenant loans for almost any purpose. The bank or lender usually places few restrictions on the money, although they may inquire as to what its main use will be. It is considered a personal loan and is therefore an agreement between the borrower and lender. You can avail tenant loans in the UK to meet either personal or professional needs, such as buying a car, paying medical bills, a vacation, or paying off debts. Tenant loans in the UK can also be availed by people with bad credit status resulting from arrears, defaults, CCJs, IVAs and so forth, though at a slightly higher rate of interest.
Macedonia is not often known as a bastion of financial research. Most of its professors were educated and trained under Tito's socialist regime and find the transition to capitalism baffling. Many of them do not even know English. It is, therefore, to Gruevski's great credit that he succeeded in producing a comprehensive, erudite, and thought-provoking review of the issues tackled in his thesis. As an introduction to the topic of foreign direct investment and its role in emerging, developing, and transition economies, Gruevski's book is more than adequate. It measures up to many textbooks on the subject that it so aptly covers.
Finally, you should highlight anything in the book that you do not understand. Don't just skim over it and decide that it is not important because you do not understand it. That may indicate that it is one of the most important parts of the book, one of the things that you really need to learn more than anything else.
|
'''
Created on 26 Mar 2013
@author: hoekstra
'''
from flask.ext.login import login_required
import xml.etree.ElementTree as ET
import requests
import re
from linkitup import app
from linkitup.util.baseplugin import plugin
from linkitup.util.provenance import provenance
NIF_REGISTRY_URL = "http://nif-services.neuinfo.org/nif/services/registry/search?q="
## TODO: generate direct derivedfrom relations between tags/categories and results. This requires successive querying of the NIF endpoint.
@app.route('/nifregistry', methods=['POST'])
@login_required
@plugin(fields=[('tags','id','name'),('categories','id','name')], link='mapping')
@provenance()
def link_to_nif_registry(*args, **kwargs):
# Retrieve the article id from the wrapper
article_id = kwargs['article']['id']
app.logger.debug("Running NIF Registry plugin for article {}".format(article_id))
# Rewrite the tags and categories of the article in a form understood by the NIF Registry
match_items = kwargs['inputs']
query_string = "".join([ "'{}'".format(match_items[0]['label']) ] + [ " OR '{}'".format(item['label']) for item in match_items[1:]])
query_url = NIF_REGISTRY_URL + query_string
app.logger.debug("Query URL: {}".format(query_url))
response = requests.get(query_url)
tree = ET.fromstring(response.text.encode('utf-8'))
matches = {}
for result in tree.iter('registryResult') :
match_uri = result.attrib['url']
web_uri = result.attrib['url']
display_uri = result.attrib['shortName']
        if not display_uri:
            display_uri = result.attrib['name']
id_base = re.sub('\s|\(|\)','_',result.attrib['name'])
description = result[0].text[:600]
nifid = result.attrib['id']
entry_type = result.attrib['type']
# Create the match dictionary
match = {'type': "link",
'uri': match_uri,
'web': web_uri,
'show': display_uri,
'short': id_base,
'description': description,
'extra': nifid,
'subscript': entry_type,
'original':article_id}
# Append it to all matches
matches[match_uri] = match
# Return the matches
return matches
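# --- Stand-alone sketch (not part of the plugin) ---
# The registryResult parsing above can be exercised in isolation. The XML
# below is fabricated purely for illustration; its element and attribute
# names mirror what the loop expects, not real NIF Registry output.
if __name__ == '__main__':
    sample = ('<registryResults>'
              '<registryResult url="http://example.org/r/1" shortName="EX" '
              'name="Example Resource" id="nif-0000" type="resource">'
              '<description>An example entry.</description>'
              '</registryResult>'
              '</registryResults>')
    tree = ET.fromstring(sample)
    for result in tree.iter('registryResult'):
        print(result.attrib['shortName'] + ' ' + result.attrib['url'])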
|
#!/usr/bin/python
# Copyright (c) 2012 The Native Client Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
#
# IMPORTANT NOTE: If you make local mods to this file, you must run:
# % pnacl/build.sh driver
# in order for them to take effect in the scons build. This command
# updates the copy in the toolchain/ tree.
#
from driver_tools import Run, ParseArgs
from driver_env import env
EXTRA_ENV = { 'ARGS': '' }
# just pass all args through to 'ARGS' and eventually to the underlying tool
PATTERNS = [ ( '(.*)', "env.append('ARGS', $0)") ]
def main(argv):
if len(argv) == 0:
print get_help(argv)
return 1
env.update(EXTRA_ENV)
ParseArgs(argv, PATTERNS)
Run('"${RANLIB}" --plugin=${GOLD_PLUGIN_SO} ${ARGS}')
# only reached in case of no errors
return 0
def get_help(unused_argv):
return """
Usage: %s [options] archive
Generate an index to speed access to archives
The options are:
@<file> Read options from <file>
-t Update the archive's symbol map timestamp
-h --help Print this help message
-v --version Print version information
""" % env.getone('SCRIPT_NAME')
|
Interested in learning about the Best University Donor Recognition Walls?
Is your University looking for a University Donor Recognition Wall?
Today, most universities are looking to enhance their donor recruitment efforts, while simultaneously looking for ways to cut their operating budgets. Why? Because at the exact same time the cost of education has been rising dramatically owing to the rapid growth and advancement of technology (the amount and the cost of equipment required to, for example, learn to perform surgery today, is massively greater than it was even thirty years ago), states have been aggressively slashing the funding they provide for post-secondary education. While many of these budget cuts were necessitated by the recession, the sad reality is that most states have done little or nothing to restore the lost funding now that the recession is over.
• Forty-eight states — all except Alaska and North Dakota — are spending less per student than they did before the downturn.
• States cut funding deeply after the recession. The average state is spending $2,026 or 23 percent less per student than before the recession.
• Per-student funding in Arizona, Louisiana, and South Carolina is down by more than 40 percent since the start of the recession (Louisiana is among the eight states that continued to cut funding over the last year).
All of the above leaves universities in a bind; unless they can effectively recruit donors, they will be required to either cut back on services and research, or raise tuition fees at a time when most families cannot afford them as it is.
One of the best ways to attract more donors to your university is by inspiring your audience: by thanking your existing donors for their contributions, by engaging and informing your visitors about your university or school, and by designing and installing an interactive university donor recognition wall that tells your school's story along with your vision and mission. Interactive displays replace the standard static system, which consists of endless rows of engraved plaques, with a dynamic, touch-interactive interface that can display photos or even full profiles of donors, video stories about your organization on a timeline, and other multimedia content. Telling your university's story in multimedia will inspire potential new donors and students alike.
While digital donor walls have what appears to be a high up front cost, they pay for themselves over time, as once one is set up, all staff has to do to keep it up to date is enter updated information using a user friendly content management system. For universities, which typically have many, many donors, not having to pay for engraved plaques for each donor can add up to a substantial savings over the years. And, of course, it’s faster and easier to implement a digital donor recognition system; you don’t have to send the information about each donor away, have a plaque made up, engraved, sent back, etc. You just upload the information into your own system and voila, you’re done—something busy university staff will most certainly appreciate.
There is nothing more embarrassing than omitting the name of a major benefactor, misspelling his or her name, or failing to mention a manager or a major event. To avoid that, ensure that the interactive donor recognition wall at your university is equipped with an easy-to-use content management tool that allows people within your organization to keep all the content, including the list of donors, bios, text, images, videos, etc., up to date.
|
#!/usr/bin/python
#======================================================================
#
# Project : hpp_IOStressTest
# File : Libs/IOST_Testcase/IOST_SaveConfig.py
# Date : Dec 14, 2016
# Author : HuuHoang Nguyen
# Contact : [email protected]
# : [email protected]
# License : MIT License
# Copyright : 2016
# Description: The hpp_IOStressTest is under the MIT License, a copy of license which may be found in LICENSE
#
#======================================================================
import io
import os
import sys
import time
from pprint import pprint  # used when dumping config dictionaries below
from IOST_Basic import *
from IOST_Config import *
from IOST_Testcase import *
import gtk
import gtk.glade
import gobject
#======================================================================
try:
IOST_DBG_EN
if IOST_DBG_EN:
IOST_SaveConfig_DebugEnable =0
IOST_SaveConfig_DebugLevel =IOST_DBG_L01
else:
IOST_SaveConfig_DebugEnable =0
IOST_SaveConfig_DebugLevel =IOST_DBG_L01
except:
IOST_DBG_EN = False
IOST_SaveConfig_DebugEnable =0
IOST_SaveConfig_DebugLevel =IOST_DBG_L01
#======================================================================
class IOST_SaveConfig():
"""
"""
#----------------------------------------------------------------------
def __init__(self, glade_filename, window_name="", object_name="", builder=None):
""
self.IOST_SaveConfig_WindowName = window_name
self.IOST_SaveConfig_ObjectName = object_name
if not builder:
self.IOST_SaveConfig_Builder = gtk.Builder()
self.IOST_SaveConfig_Builder.add_from_file(glade_filename)
self.IOST_SaveConfig_Builder.connect_signals(self)
else:
self.IOST_SaveConfig_Builder = builder
self.IOST_SaveConfig_Builder.connect_signals(self)
#----------------------------------------------------------------------
def SaveConfig_Get_Objs(self, window_name):
""
self.CreateObjsDictFromDict(window_name, self.IOST_Objs[window_name], self.IOST_SaveConfig_Builder, 0)
self.IOST_Objs[window_name][self.IOST_SaveConfig_ObjectName].set_keep_above(True)
self.SaveConfig_Init_Objs()
#----------------------------------------------------------------------
def SaveConfig_Init_Objs(self):
self.SaveConfig_Set_StationInfoPath_Obj()
self.SaveConfig_Set_StationInfoSetup_Obj()
self.SaveConfig_Set_TestcasesPath_Obj()
self.SaveConfig_Set_TestcasesSetup_Obj()
self.SaveConfig_StationInfoPath = self.IOST_Data['IOST_RunPath']+'/StationInfo.json'
self.SaveConfig_TestcasesPath = self.IOST_Data['IOST_RunPath']+'/Testcases.json'
self.SaveConfig_StationInfo_SaveEnable = False
self.SaveConfig_Testcase_SaveEnable = False
        self.SaveConfig_Testcase_IP_IsDisabled = False
#----------------------------------------------------------------------
def SaveConfig_Show(self):
self.IOST_Objs[self.IOST_SaveConfig_WindowName][self.IOST_SaveConfig_ObjectName].show()
#----------------------------------------------------------------------
# def SaveConfig_Run(self):
# self.IOST_Objs[self.IOST_SaveConfig_WindowName][self.IOST_SaveConfig_ObjectName].run()
#----------------------------------------------------------------------
def SaveConfig_Hide(self):
self.IOST_Objs[self.IOST_SaveConfig_WindowName][self.IOST_SaveConfig_ObjectName].hide()
#----------------------------------------------------------------------
def SaveConfig_Destroy(self):
self.IOST_Objs[self.IOST_SaveConfig_WindowName][self.IOST_SaveConfig_ObjectName].destroy()
#----------------------------------------------------------------------
def on_IOST_SaveConfig_Skylark_delete_event(self, object, event, data=None):
""
self.SaveConfig_Hide()
return True
#----------------------------------------------------------------------
def on_IOST_SaveConfig_Skylark_destroy_event(self, object, event, data=None):
""
self.SaveConfig_Hide()
return True
#----------------------------------------------------------------------
def on_IOST_SaveConfig_StationInfoEnable_CB_toggled(self, object, data=None):
""
res = object.get_active()
if res:
path = self.SaveConfig_StationInfoPath
else:
path=''
iost_print(IOST_SaveConfig_DebugLevel, path, "StationInfoEnable_CB -> The path is :")
self.SaveConfig_Set_StationInfoPath_Obj(res, path)
self.SaveConfig_Set_StationInfoSetup_Obj(res)
self.SaveConfig_StationInfo_SaveEnable = res
#----------------------------------------------------------------------
def on_IOST_SaveConfig_TestcasesEnable_CB_toggled(self, object, data=None):
""
res = object.get_active()
if res:
path = self.SaveConfig_TestcasesPath
else:
path=''
iost_print(IOST_SaveConfig_DebugLevel, path, "TestcasesEnable_CB -> The path is :")
self.SaveConfig_Set_TestcasesPath_Obj(res, path)
self.SaveConfig_Set_TestcasesSetup_Obj(res)
self.SaveConfig_Testcase_SaveEnable = res
def on_IOST_SaveConfig_SaveTestcases_OfIPDisable_CB_toggled(self, object, data=None):
""
res = object.get_active()
        self.SaveConfig_Testcase_IP_IsDisabled = res
#----------------------------------------------------------------------
def SaveConfig_Set_StationInfoPath_Obj(self, sensitive=False, path=""):
self.IOST_Objs[self.IOST_SaveConfig_WindowName]['_StationInfoPath_TE'].set_sensitive(sensitive)
self.IOST_Objs[self.IOST_SaveConfig_WindowName]['_StationInfoPath_TE'].set_text(path)
#----------------------------------------------------------------------
def SaveConfig_Set_StationInfoSetup_Obj(self, sensitive=False):
self.IOST_Objs[self.IOST_SaveConfig_WindowName]['_StationInfoSetup_B'].set_sensitive(sensitive)
#----------------------------------------------------------------------
def SaveConfig_Set_TestcasesPath_Obj(self, sensitive=False, path=""):
self.IOST_Objs[self.IOST_SaveConfig_WindowName]['_TestcasesPath_TE'].set_sensitive(sensitive)
self.IOST_Objs[self.IOST_SaveConfig_WindowName]['_TestcasesPath_TE'].set_text(path)
#----------------------------------------------------------------------
def SaveConfig_Set_TestcasesSetup_Obj(self, sensitive=False):
self.IOST_Objs[self.IOST_SaveConfig_WindowName]['_TestcasesSetup_B'].set_sensitive(sensitive)
#----------------------------------------------------------------------
def on_IOST_SaveConfig_StationInfoSetup_B_clicked(self, object, data=None):
""
text = "Save configure file to < Station Info >"
self.SaveConfig_StationInfoPath = self.SaveConfig_SaveDialog(self.SaveConfig_StationInfoPath, text)
iost_print(IOST_SaveConfig_DebugLevel, self.SaveConfig_StationInfoPath, "self.SaveConfig_StationInfoPath" )
self.SaveConfig_Set_StationInfoPath_Obj(True, self.SaveConfig_StationInfoPath)
#----------------------------------------------------------------------
def on_IOST_SaveConfig_TestcasesSetup_B_clicked(self, object, data=None):
""
text = "Save configure file to < Testcases >"
self.SaveConfig_TestcasesPath = self.SaveConfig_SaveDialog(self.SaveConfig_TestcasesPath, text)
iost_print(IOST_SaveConfig_DebugLevel, self.SaveConfig_TestcasesPath, "self.SaveConfig_TestcasesPath")
self.SaveConfig_Set_TestcasesPath_Obj(True, self.SaveConfig_TestcasesPath)
#----------------------------------------------------------------------
def on_IOST_SaveConfig_Cancel_B_clicked(self,object, data=None):
""
self.SaveConfig_Hide()
return True
#----------------------------------------------------------------------
def on_IOST_SaveConfig_Save_B_clicked(self, object, data=None):
"""
self.SaveConfig_StationInfoPath = self.IOST_Data['IOST_RunPath']+'/StationInfo.json'
self.SaveConfig_TestcasesPath = self.IOST_Data['IOST_RunPath']+'/Testcases.json'
self.SaveConfig_StationInfo_SaveEnable = False
self.SaveConfig_Testcase_SaveEnable = False
"""
iost_print(IOST_SaveConfig_DebugLevel, self.SaveConfig_StationInfoPath, "self.SaveConfig_StationInfoPath :")
iost_print(IOST_SaveConfig_DebugLevel, self.SaveConfig_StationInfo_SaveEnable, "self.SaveConfig_StationInfo_SaveEnable :")
iost_print(IOST_SaveConfig_DebugLevel, self.SaveConfig_TestcasesPath, "self.SaveConfig_TestcasesPath :")
iost_print(IOST_SaveConfig_DebugLevel, self.SaveConfig_Testcase_SaveEnable, "self.SaveConfig_Testcase_SaveEnable :")
        iost_print(IOST_SaveConfig_DebugLevel, self.SaveConfig_Testcase_IP_IsDisabled, "self.SaveConfig_Testcase_IP_IsDisabled :")
self.save_cfg = IOST_Config()
"""
'self.save_cfg.IOST_Data' will store Testcases
'self.save_cfg.IOST_Objs' will store StationInfo
"""
# Begin process to StationInfo
self.save_cfg.IOST_Objs.update({"StationInfo": self.IOST_Data["StationInfo"]})
# Begin process to Testcase
ip_checked={}
#init ip_check variable
for ip in self.IOST_Data['ObjectUpdate']:
if ip == "AutoMail":
continue
if ip_checked.has_key(ip[:-1]):
continue
else:
ip_checked.update({ ip[:-1]: False } )
pprint (ip_checked)
#
for ip in self.IOST_Data['ObjectUpdate']:
#If ip is autoMail, ignore it
if ip == "AutoMail":
continue
#If ip was checked, ignore it
if ip_checked[ip[:-1]]:
continue
            # if check button 'Save the testcases of IPs which have been disabled' : Enable
            if self.SaveConfig_Testcase_IP_IsDisabled:
# print ip, ' : ', self.IOST_Data[ip[:-1]+'_PortNum']
for ip_port in range(0, self.IOST_Data[ip[:-1]+'_PortNum']):
if len( self.IOST_Data[ip[:-1]+str(ip_port)] ) <= 1:
continue
else:
if not ip_checked[ip[:-1]]:
ip_checked[ip[:-1]] = True
self.save_cfg.IOST_Data.update( { ip[:-1] : self.IOST_Data[ip[:-1]] } )
self.save_cfg.IOST_Data.update( { ip[:-1]+str(ip_port) : self.IOST_Data[ip[:-1]+str(ip_port)] } )
# iost_print(IOST_SaveConfig_DebugLevel, ip_port, "%s" %(ip[:-1]) )
# if not ip_checked[ip[:-1]]:
# ip_checked[ip[:-1]] = True
# self.save_cfg.IOST_Data.update( { ip[:-1] : self.IOST_Data[ip[:-1]] } )
# self.save_cfg.IOST_Data.update( { ip[:-1]+str(ip_port) : self.IOST_Data[ip[:-1]+str(ip_port)] } )
            # if check button 'Save the testcases of IPs which have been disabled' : Disable
else:
if not Str2Bool(self.IOST_Data[ip[:-1]]):
pass
else:
for ip_port in range(0, self.IOST_Data[ip[:-1]+'_PortNum']):
if Str2Bool( self.IOST_Data[ip[:-1]+str(ip_port)][0] ) :
if not ip_checked[ip[:-1]]:
ip_checked[ip[:-1]] = True
self.save_cfg.IOST_Data.update( { ip[:-1] : self.IOST_Data[ip[:-1]] } )
self.save_cfg.IOST_Data.update( { ip[:-1]+str(ip_port) : self.IOST_Data[ip[:-1]+str(ip_port)] } )
else:
continue
if IOST_SaveConfig_DebugEnable:
print "=================================================================="
print " Station_info "
print "=================================================================="
pprint (self.save_cfg.IOST_Objs)
print "==================================================================\n\n"
print "\n\n=================================================================="
print " Testcases "
print "=================================================================="
pprint (self.save_cfg.IOST_Data)
print "==================================================================\n\n"
if self.SaveConfig_StationInfo_SaveEnable:
iost_print(IOST_SaveConfig_DebugLevel, "Save Configure File to StationInfo", "")
self.SaveConfig_WriteConfigFile(self.SaveConfig_StationInfoPath, self.save_cfg.IOST_Objs)
if self.SaveConfig_Testcase_SaveEnable:
iost_print(IOST_SaveConfig_DebugLevel, "Save Configure File to the Testcase", "")
self.SaveConfig_WriteConfigFile(self.SaveConfig_TestcasesPath, self.save_cfg.IOST_Data)
self.SaveConfig_Hide()
return True
def SaveConfig_SaveDialog(self, current_filename, title_str=""):
""
if title_str=="":
title_str = "Save Config Files"
dlg = gtk.FileChooserDialog(title=title_str, action=gtk.FILE_CHOOSER_ACTION_SAVE)
dlg.add_button(gtk.STOCK_CANCEL, gtk.RESPONSE_CANCEL)
dlg.add_button(gtk.STOCK_SAVE, gtk.RESPONSE_OK)
dlg.set_do_overwrite_confirmation(True)
#Set file name
if os.path.isdir(current_filename):
dlg.set_current_name("")
dlg.set_current_folder( current_filename )
else:
dir_path = os.path.dirname(os.path.realpath(current_filename))
dlg.set_current_folder( dir_path )
dlg.set_current_name(os.path.basename(current_filename))
if dlg.run() == gtk.RESPONSE_OK:
filename = dlg.get_filename()
iost_print(IOST_SaveConfig_DebugLevel, filename, "SaveConfig_SaveDialog --> filename :")
dlg.destroy()
return filename
else:
dlg.destroy()
return current_filename
def SaveConfig_WriteConfigFile(self, file_name='', data=None):
""
try:
self.WriteFile(file_name, data)
except:
            MsgBox( "%s: %s" %("Can't open file for writing", file_name) )
return False
# try:
# # f = open(filename, "w")
# # f.write(buff)
# # f.close()
# # WriteFile(self, file_name="", data=None):
# # self.WriteFile(self.IOST_Data["IOST_Path"] + "/Temp_Configs/Data_Config_Before_ResetAll.json", self.IOST_Data)
# self.WriteFile(filename)
# except:
# dlg.destroy()
# MsgBox("%s: %s" % (_("Can't open file for writting"), filename) )
# return
|
YEP! Just call me a hypocrite, I suppose.
Don’t go to my facebook or instagram cause all you will see is pics of my baby. sigh. Handsome baby.
Well, we are back. Yep. Back in the saddle again. Easy comic for today since I’m getting back into the swing of things. I’m glad you guys stuck around.
ps- excuse the mess and dead links, I need to do a deep clean of the site.
|
"""URL config for rocket_league project."""
from cms.forms import CMSPasswordChangeForm
from cms.sitemaps import registered_sitemaps
from cms.views import TextTemplateView
from django.conf import settings
from django.conf.urls import include, patterns, url
from django.conf.urls.static import static
from django.contrib import admin
from django.views import generic
from rest_framework import routers
from .apps.replays import views as api_views
from .apps.users.views import StreamDataAPIView
admin.autodiscover()
router = routers.DefaultRouter()
router.register(r'maps', api_views.MapViewSet)
router.register(r'replays', api_views.ReplayViewSet)
router.register(r'replay-packs', api_views.ReplayPackViewSet)
router.register(r'players', api_views.PlayerViewSet)
router.register(r'goals', api_views.GoalViewSet)
router.register(r'seasons', api_views.SeasonViewSet)
router.register(r'components', api_views.ComponentViewSet)
urlpatterns = patterns(
"",
# Admin URLs.
url(r'^admin/password_change/$', 'django.contrib.auth.views.password_change',
{'password_change_form': CMSPasswordChangeForm}, name='password_change'),
url(r'^admin/password_change/done/$', 'django.contrib.auth.views.password_change_done', name='password_change_done'),
url(r"^admin/", include(admin.site.urls)),
url(r'^replays/', include('rocket_league.apps.replays.urls', namespace='replay')),
url(r'^replay-packs/', include('rocket_league.apps.replays.replaypack_urls', namespace='replaypack')),
# Permalink redirection service.
url(r"^r/(?P<content_type_id>\d+)-(?P<object_id>[^/]+)/$", "django.contrib.contenttypes.views.shortcut", name="permalink_redirect"),
# Google sitemap service.
url(r"^sitemap.xml$", "django.contrib.sitemaps.views.index", {"sitemaps": registered_sitemaps}),
url(r"^sitemap-(?P<section>.+)\.xml$", "django.contrib.sitemaps.views.sitemap", {"sitemaps": registered_sitemaps}),
# Basic robots.txt.
url(r"^robots.txt$", TextTemplateView.as_view(template_name="robots.txt")),
# There's no favicon here!
url(r"^favicon.ico$", generic.RedirectView.as_view(url='/static/build/img/icons/favicon.ico', permanent=True)),
url(r'^(?i)api/replays/(?P<replay_id>[a-f0-9]{8}-?[a-f0-9]{4}-?[a-f0-9]{4}-?[a-f0-9]{4}-?[a-f0-9]{12})/$', api_views.ReplayViewSet.as_view({'get': 'retrieve'})),
url(r'^api/', include(router.urls)),
url(r'^api/stream-data/(?P<user_id>\d+)/$', StreamDataAPIView.as_view(), name='stream-data'),
url(r'^api/latest-replay/(?P<user_id>\d+)/$', api_views.LatestUserReplay.as_view(), name='latest-replay'),
url(r'^api-docs/', include('rest_framework_swagger.urls')),
url(r'^login/$', 'django.contrib.auth.views.login', name='auth_login'),
url(r'^logout/$', 'django.contrib.auth.views.logout_then_login', name='auth_logout'),
url(r'', include('rocket_league.apps.site.urls', namespace='site')),
url(r'', include('rocket_league.apps.users.urls', namespace='users')),
url('', include('social.apps.django_app.urls', namespace='social'))
) + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
if settings.DEBUG:
urlpatterns += patterns(
"",
url("^404/$", generic.TemplateView.as_view(template_name="404.html")),
url("^500/$", generic.TemplateView.as_view(template_name="500.html")),
)
handler500 = "cms.views.handler500"
|
In the book The Four Agreements, the author Miguel Ruiz offers a code for living a life of peace and happiness. The first principle Miguel offers in this code of conduct for your life is Be Impeccable with your Word. It is much more difficult than it seems. People often tell me that they are pretty good at being impeccable with their word but struggle with the agreement Don’t Take Things Personally.
What people are surprised to find out is that if they are taking things personally, then they are not being impeccable.
What does it mean to be impeccable with your word?
First of all we have to consider our Word to be much more than the construct of words and phrases that comes out of our mouth. Our Word is the force that we create with and includes everything we express. It includes our emotions, physical actions, thoughts and our attitude. Walking around being silent while filled with hate or self rejection doesn’t meet the meaning of impeccability.
Expressing yourself impeccably is to express your self in the direction of truth and love. This includes expressing love, respect, and acceptance for your self. The emotions of jealousy, envy, frustration, and sadness fall into the category of not being impeccable. Anger and fear usually fall into that category as well. However, there is the exception of a real life-threatening situation, where natural fight-or-flight fear and anger come from your emotional integrity. In most cases, though, people don’t face real life-threatening situations very often.
Losing your job or finding out your partner is cheating on you doesn’t count as a real fight or flight situation. These may be painful situations emotionally as life is changing and the ego shattered, but the life of your physical body isn’t in any danger. Most of the time anger and fear seem to fall into the category of not being impeccable.
I’ve heard people say that they were speaking “their truth” about a situation, and so they believed they were being impeccable. When people claim to be speaking “their truth”, they are often speaking an opinion that they believe is right. Just because they believe their opinion doesn’t mean it is the truth. To anyone else it is just an opinion, and it can be filled with judgments and unpleasant emotions. When you are impeccable you don’t need to defend what you say by claiming it is impeccable.
Another aspect of being impeccable is to be without fault or blame. That means refraining from expressing criticisms, making judgments, or finding fault with your self or someone else. Being without criticism in your expression doesn’t just apply to the words you speak, but to the thoughts you think as well. Refraining from blaming people will lead you to take total responsibility for your life.
While this may seem like a pretty big shift in our consciousness, it gets bigger when we expand the meaning to include not finding fault with the world. Being impeccable means seeing the world without rejecting it for the way it is. It doesn’t mean being in denial about the way people mistreat each other. It just means that you don’t judge or reject people or the world for the way they are. Cleaning up the judgments in our mind can seem like quite the task when we consider how easy it is to become critical of things like politics, pollution, violence, crime, or traffic. It may take a while to empty our mind of criticisms, but it can be done.
You might think of being impeccable as being compassionate and accepting of others. You recognize and are aware of what people do, but you find a way to accept them as they are. When you see they don’t know the number of ways they are hurting each other and themselves you can forgive them because they don’t know what they are doing.
How are you not being impeccable when you take something personally?
When you take something personally you feel offended. Perhaps you are offended because someone said you were stupid. If you had 100% faith in your intelligence you would know that it was just their opinion and you wouldn’t believe their opinion. When you are aware and don’t believe them it doesn’t hurt emotionally.
If you feel hurt it is because you believed some part of the idea that you are stupid. When you express even a tiny bit of your faith in that opinion of stupidity you are expressing your faith in a self rejection. Your faith is a powerful part of your Word. Expressing your Word in a manner of self rejection is how we take something personally.
Being impeccable with your Word is about being truthful, honest, and kind. It is very simple, but not necessarily easy. We have learned many habits over the years that condition us to use our emotional and verbal expressions in unkind ways. Just the way that we talk to ourselves in our own mind can be so unpleasant.
To keep this one simple agreement to be impeccable with your Word will require some time and practice to master. Don’t assume that you will master it in your lifetime. At the same time, don’t assume that you won’t. Just know that every year that you become more impeccable with your Word, you will have more love and happiness in your life and relationships.
posted in category: Emotions, Four Agreements, Happiness, Impeccability, Personal Power, Self Mastery and The Mind.
Being Impeccable with your word means something entirely different to me. Don Miguel explains that each of us lives within a “dream.” It is essentially our perception of our reality. It consists of our morals, our values, our mores, and those concepts we have learned by virtue of our family, our society, and our influences.
Our word is like a sword and can slice through the belief systems of others.
Here is an example. I am a very intense person. I have a lot of energy and work hard because I was taught by my parents that hard work will result in prosperity. To me, having motivation, being active and working are all positive attributes. My parents taught us to be very observant. However, I met a man who criticized me and said: you worry a lot, you are very intense, you sweat the small stuff, and you are not self-assured.
Those negative perceptions he had of me caused me a great deal of sadness. What I had been taught were positives he had assessed as negatives. Suddenly, I began to question my dream. That is what is meant by being impeccable with your word. You must be aware of the feelings of others. You may say something that is not necessarily unkind, merely unaware of the dream of another, and what you say may destroy their perception of happiness. It’s different than just saying unkind things to me.
I have found in my journey over years of spending time with don Miguel that my understanding of “Impeccable” has evolved. It doesn’t mean what it used to. I didn’t have the insight then that I do now. I’m also not clinging to my definition. I remain flexible with the intent that I will expand my understanding in years to come.
|
#!/usr/bin/env python
# Copyright 2014 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
# Runs 'gn help' and various subhelps, and spits out html.
# TODO:
# - Handle numbered and dashed lists -> <ol> <ul>. (See "os" and "toolchain").
# - Handle "Arguments:" blocks a bit better (the argument names could be
# distinguished).
# - Convert "|blahblah|" to <code>.
# - Spit out other similar formats like wiki, markdown, whatever.
import cgi
import subprocess
import sys
def GetOutput(*args):
try:
return subprocess.check_output([sys.argv[1]] + list(args))
except subprocess.CalledProcessError:
return ''
def ParseTopLevel(out):
commands = []
output = []
for line in out.splitlines():
if line.startswith(' '):
command, sep, rest = line.partition(':')
command = command.strip()
is_option = command.startswith('-')
output_line = ['<li>']
if not is_option:
commands.append(command)
output_line.append('<a href="#' + cgi.escape(command) + '">')
output_line.append(cgi.escape(command))
if not is_option:
output_line.append('</a>')
output_line.extend([sep + cgi.escape(rest) + '</li>'])
output.append(''.join(output_line))
else:
output.append('<h2>' + cgi.escape(line) + '</h2>')
return commands, output
def ParseCommand(command, out):
first_line = True
got_example = False
output = []
for line in out.splitlines():
if first_line:
name, sep, rest = line.partition(':')
name = name.strip()
output.append('<h3><a name="' + cgi.escape(command) + '">' +
cgi.escape(name + sep + rest) + '</a></h3>')
first_line = False
else:
if line.startswith('Example'):
# Special subsection that's pre-formatted.
if got_example:
output.append('</pre>')
got_example = True
output.append('<h4>Example</h4>')
output.append('<pre>')
elif not line.strip():
output.append('<p>')
elif not line.startswith(' ') and line.endswith(':'):
# Subsection.
output.append('<h4>' + cgi.escape(line[:-1]) + '</h4>')
else:
output.append(cgi.escape(line))
if got_example:
output.append('</pre>')
return output
def main():
if len(sys.argv) < 2:
print 'usage: help_as_html.py <gn_binary>'
return 1
header = '''<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body { font-family: Arial, sans-serif; font-size: small; }
pre { font-family: Consolas, monospace; font-size: small; }
#container { margin: 0 auto; max-width: 48rem; width: 90%; }
</style>
</head>
<body>
<div id="container"><h1>GN</h1>
'''
footer = '</div></body></html>'
commands, output = ParseTopLevel(GetOutput('help'))
for command in commands:
output += ParseCommand(command, GetOutput('help', command))
print header + '\n'.join(output) + footer
return 0
if __name__ == '__main__':
sys.exit(main())
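# --- Illustration (not part of the original script) ---
# ParseCommand can be sanity-checked on a fabricated help blob; the text
# below is made up and only mirrors the "name: summary" / "Example" shape
# the parser expects:
#
#   sample = 'gen: Generate ninja files.\n\nExample\n  gn gen out/Debug'
#   print '\n'.join(ParseCommand('gen', sample))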
|
We made our first appearance in the pharmaceutical market in 2008 as YUSRA Pharmaceuticals, a company mainly engaged in the distribution of a wide range of pharmaceuticals and medical equipment. Our activity was limited to the purchase and distribution of products imported by other companies. While working as a distribution company, we gained an excellent knowledge of our pharmaceutical market and, most importantly, won an enormous number of valuable customers.
After operating as a distributor for quite some time and gaining tremendous experience in the market, we decided to boost our activity by importing our own products. To this end, after meticulous planning and a thorough analysis of the market, we established our own import and distribution company, YHY General Trading P.l.c., in 2011. With our new company we were able to establish business partnerships with many manufacturers and import a wide range of quality products, from small laboratory kits up to highly sophisticated medical and laboratory equipment. Through persistent endeavor we established partnerships with manufacturers from countries including Canada, the UK, Germany, the USA, China, India, and Malaysia.
|
from i3pystatus import IntervalModule
import subprocess
class Xkblayout(IntervalModule):
"""Displays and changes current keyboard layout.
``change_layout`` callback finds the current layout in the
``layouts`` setting and enables the layout following it. If the
current layout is not in the ``layouts`` setting the first layout
is enabled.
``layouts`` can be stated with or without variants, e.g.: status.register("xkblayout", layouts=["de neo", "de"])
"""
interval = 1
format = u"\u2328 {name}"
settings = (
("layouts", "List of layouts"),
)
layouts = []
on_leftclick = "change_layout"
def run(self):
kblayout = subprocess.check_output("setxkbmap -query | awk '/layout/,/variant/{print $2}'", shell=True).decode('utf-8').replace("\n", " ").strip()
self.output = {
"full_text": self.format.format(name=kblayout).upper(),
"color": "#ffffff"
}
def change_layout(self):
layouts = self.layouts
kblayout = subprocess.check_output("setxkbmap -query | awk '/layout/,/variant/{print $2}'", shell=True).decode('utf-8').replace("\n", " ").strip()
if kblayout in layouts:
position = layouts.index(kblayout)
try:
subprocess.check_call(["setxkbmap"] + layouts[position + 1].split())
except IndexError:
subprocess.check_call(["setxkbmap"] + layouts[0].split())
else:
subprocess.check_call(["setxkbmap"] + layouts[0].split())
|
Having netzero 0x800ccc0e - What Should You Do?
This post will explain what netzero 0x800ccc0e is, what causes it, and how to resolve the problem.
netzero 0x800ccc0e usually happens when your Windows system crashes and freezes for unpredictable lengths of time and with varying intensity. Most commonly, you will encounter program lock-ups, slow PC performance, system freezes, blue screen errors, startup or shutdown problems, and installation errors.
The Right Way to Fix the netzero 0x800ccc0e Problem
An efficient way to resolve netzero 0x800ccc0e is to use SmartPCFixer. We suggest you follow the steps below.
3. Click Fix all to get rid of netzero 0x800ccc0e.
We should never ignore the netzero 0x800ccc0e problem when we first encounter it. If the error is not troubleshooted effectively, you may run into more severe computer problems. Consequently, to protect computer security and personal information, you should repair the netzero 0x800ccc0e issue as soon as possible using the guide in this article. SmartPCFixer can be your best choice for troubleshooting netzero 0x800ccc0e in time.
|
import sys
import time
import matplotlib.pyplot as plt
import numpy as np
from PyQt5 import QtWidgets
from matplotlib.backends.backend_qt5agg import FigureCanvas
from matplotlib.backends.backend_qt5agg import NavigationToolbar2QT as NavToolbar
def main():
app = QtWidgets.QApplication(sys.argv)
main_window = QtWidgets.QMainWindow()
main_window.setWindowTitle('Matplotlib Example')
central_widget = TabWidget()
main_window.setCentralWidget(central_widget)
main_window.show()
app.exec_()
class TabWidget(QtWidgets.QTabWidget):
def __init__(self, parent=None):
super().__init__(parent)
xy_scatter_widget = XYScatterGraphWidget()
pie_widget = PieGraphWidget()
bar_widget = BarGraphWidget()
graph_widget = GraphWidget()
self.addTab(graph_widget, 'Graph Widget')
self.addTab(bar_widget, 'Bar Graph')
self.addTab(xy_scatter_widget, 'Scatter Graph')
self.addTab(pie_widget, 'Pie Graph')
class GraphWidget(QtWidgets.QWidget):
def __init__(self, parent=None):
super().__init__(parent)
self._figure = plt.Figure()
# Widget!
self._canvas = FigureCanvas(self._figure)
# widget!
toolbar = NavToolbar(self._canvas, self)
# Widget!
plot_button = QtWidgets.QPushButton('Plot!')
plot_button.clicked.connect(self.plot)
layout = QtWidgets.QVBoxLayout()
layout.addWidget(toolbar)
layout.addWidget(self._canvas)
layout.addWidget(plot_button)
self.setLayout(layout)
self.plot()
"""
self.random_signal.connect(self.random_slot)
self.random_signal.emit('hello', 5, False)
random_signal = QtCore.pyqtSignal(str, int, bool)
# you can add decorator in, but it's optional
@QtCore.pyqtSlot(str, int, bool)
def random_slot(self, string, integer, boolean, *args, **kwargs):
print(string, integer, boolean)
"""
    def plot(self):
        data = np.random.rand(20)
        ax = self._figure.add_subplot(111)
        ax.set_yscale('log')
        ax.set_xlim(-1, 6)
        ax.set_ylim(-1, 3)
        ax.set_xlabel('This is an x label')
        ax.set_ylabel('Set a Y label')
        ax.set_title('A really cool default chart')
        ax.plot(data, '*-', label=time.time())
        # Build the legend after plotting so the labeled line is picked up.
        ax.legend()
        self.update_canvas()
def update_canvas(self):
self._canvas.draw()
class XYScatterGraphWidget(GraphWidget):
def plot(self):
self._figure.clear()
ax = self._figure.add_subplot(111)
n = 100
x = np.random.rand(n)
y = np.random.rand(n)
colors = np.random.rand(n)
area = np.pi * (15 * np.random.rand(n)) ** 2
ax.scatter(x, y, s=area, c=colors, alpha=0.5)
self.update_canvas()
class PieGraphWidget(GraphWidget):
def plot(self):
labels = ['Eaten', 'Uneaten', 'Eat next']
n = len(labels)
data = np.random.rand(n) * 100
# control how the percentages are displayed
autopct = '%1.1f%%'
# colors = ['r', 'g', 'b']
explode = np.zeros(n)
explode[-1] = 0.1
self._figure.clear()
ax = self._figure.add_subplot(111)
ax.pie(data, explode=explode, labels=labels,
autopct=autopct, shadow=True, startangle=90)
self.update_canvas()
class BarGraphWidget(GraphWidget):
def plot(self):
self._figure.clear()
n = 10
y = np.random.rand(n) * 100
x = range(n)
width = 1 / 1.5
ax = self._figure.add_subplot(111)
ax.bar(x, y, width, color='blue')
self.update_canvas()
if __name__ == '__main__':
main()
|
Not sure how to fix the current orders, but I can tell you what happened. You have an incompletion procedure that contains one or more fields assigned to a status group defined like the standard status group 00. This means the documents are formally incomplete, but this incompleteness does not prevent the item from being delivered or billed.
I would advise updating the incompletion procedure to prevent this behaviour from occurring in the future. As for how to fix the 440, I'll leave that to someone else.
|
#!/bin/env python
# -*- coding: utf-8 -*-
"""
This setup is loosely following the instructions adapted from
https://hynek.me/articles/sharing-your-labor-of-love-pypi-quick-and-dirty/
"""
import errno
import os
import re
from setuptools import setup
def read(fname):
"""
Utility function to read the README file.
Used for the long_description. It's nice, because now
1) we have a top level README file and
2) it's easier to type in the README file than to put a raw string in below ...
"""
return open(os.path.join(os.path.dirname(__file__), fname)).read()
def get_version():
"""
Grabs and returns the version and release numbers from autotools.
"""
configure_ac = open(os.path.join('..', '..', 'configure.ac')).read()
major = re.search('m4_define\(\[mega_major_version\], \[([0-9]+)\]',
configure_ac)
minor = re.search('m4_define\(\[mega_minor_version\], \[([0-9]+)\]',
configure_ac)
micro = re.search('m4_define\(\[mega_micro_version\], \[(.+?)\]',
configure_ac)
if major:
major, minor, micro = major.group(1), minor.group(1), micro.group(1)
version = '.'.join([major, minor])
else:
version = 'raw_development'
if micro:
release = '.'.join([major, minor, micro])
else:
release = 'raw_development'
return version, release
def make_symlink(src, dst):
    """Makes a symlink, ignores errors if it's there already."""
    try:
        os.symlink(src, dst)
    except OSError as e:
        # Compare error codes rather than locale-dependent message strings.
        if e.errno != errno.EEXIST:
            raise
def remove_file(fname):
    """Removes a file/link, ignores errors if it's not there any more."""
    try:
        os.remove(fname)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise
# Put native library modules into a "good place" for the package.
make_symlink('../../src/.libs/libmega.so', 'libmega.so')
make_symlink('.libs/_mega.so', '_mega.so')
# Create a dummy __init__.py if not present.
_init_file = '__init__.py'
_init_file_created = False
if not os.path.exists(_init_file):
with open(_init_file, 'wb') as fd:
_init_file_created = True
setup(
name='megasdk',
version=get_version()[1],
description='Python bindings to the Mega file storage SDK.',
long_description=read('DESCRIPTION.rst'),
url='http://github.com/meganz/sdk/',
license='Simplified BSD',
author='Guy Kloss',
author_email='[email protected]',
packages=['mega'],
package_dir={'mega': '.'},
package_data = {
'mega': ['libmega.so', '_mega.so'],
},
exclude_package_data = {'': ['test_libmega.py']},
include_package_data=True,
keywords=['MEGA', 'privacy', 'cloud', 'storage', 'API'],
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'Natural Language :: English',
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Topic :: Software Development :: Libraries :: Python Modules',
],
)
# Clean up some temporary stuff.
remove_file('libmega.so')
remove_file('_mega.so')
if _init_file_created:
remove_file(_init_file)
|
tects a part, small though it be, of the soil from the direct warmth of the sun. Forests thus are like great canopies sheltering from the sun's rays those sections upon which they grow. Lands so covered possess a capacity for holding much moisture. Contained in the leaves and trunks of trees, and more particularly in the spongy moss and numerous streams, it is saved from rapid evaporation, and consequently lowers the temperature of the atmosphere over it.
Vapors, then, attracted toward mountains by gravity, or carried thither by winds, will at times collect first over those sections which are wooded, and will have a tendency to remain there, be condensed, and deposit rain.
It may not be out of place to notice here another fact coming under my observation. Winds sweeping across a country, when they encounter mountains, are crowded against them, and, by the pressure from behind, are forced up along their sides and over their crests. Clouds that are in their paths, and which are borne onward to the slopes of such mountains, are sometimes carried up to and over their tops. Slopes which are destitute of timber present very few obstacles to such a result. Forests, on the other hand, break or lessen the mechanical strength of wind, and so increase the probability of their augmenting the volume of rainfall.
WHICH UNIVERSE SHALL WE STUDY?
"There is something which sets itself up as a just reflection of the universe, and which it is possible to study as if it were the universe itself; that is, the multitude of traditional unscientific opinions about the universe. These opinions are, in one sense, part of the universe; to study them from the historic point of view is to study the universe; but when they are assumed as an accurate reflection of it so as to divert attention from the original, as they are by all the votaries of authority or tradition, then they may be regarded as a spurious universe outside and apart from the real one, and such students of opinion may be said to study, and yet not to study the universe.
|
#!/usr/bin/env python
# @@@ START COPYRIGHT @@@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# @@@ END COPYRIGHT @@@
### this script should be run on all nodes with trafodion user ###
import sys
import json
from constants import TRAF_CFG_DIR, TRAF_CFG_FILE
from common import append_file, write_file, ParseXML, err, run_cmd
def run(dbcfgs_json):
    dbcfgs = json.loads(dbcfgs_json)
hbase_xml_file = dbcfgs['hbase_xml_file']
dcs_conf_dir = '%s/dcs' % (TRAF_CFG_DIR)
dcs_srv_file = dcs_conf_dir + '/servers'
dcs_master_file = dcs_conf_dir + '/masters'
dcs_site_file = dcs_conf_dir + '/dcs-site.xml'
rest_site_file = '%s/rest/rest-site.xml' % (TRAF_CFG_DIR)
### dcs setting ###
# servers
nodes = dbcfgs['node_list'].split(',')
dcs_cnt = dbcfgs['dcs_cnt_per_node']
dcs_servers = ''
for node in nodes:
dcs_servers += '%s %s\n' % (node, dcs_cnt)
write_file(dcs_srv_file, dcs_servers)
### modify dcs config files ###
# modify master
dcs_master = nodes[0]
append_file(dcs_master_file, dcs_master+'\n')
# modify dcs-site.xml
net_interface = run_cmd('ip route |grep default|awk \'{print $5}\'')
hb = ParseXML(hbase_xml_file)
zk_hosts = hb.get_property('hbase.zookeeper.quorum')
zk_port = hb.get_property('hbase.zookeeper.property.clientPort')
p = ParseXML(dcs_site_file)
p.add_property('dcs.zookeeper.property.clientPort', zk_port)
p.add_property('dcs.zookeeper.quorum', zk_hosts)
p.add_property('dcs.dns.interface', net_interface)
if dbcfgs['dcs_ha'] == 'Y':
dcs_floating_ip = dbcfgs['dcs_floating_ip']
dcs_backup_nodes = dbcfgs['dcs_backup_nodes']
p.add_property('dcs.master.floating.ip', 'true')
p.add_property('dcs.master.floating.ip.external.interface', net_interface)
p.add_property('dcs.master.floating.ip.external.ip.address', dcs_floating_ip)
p.rm_property('dcs.dns.interface')
# set DCS_MASTER_FLOATING_IP ENV for trafci
dcs_floating_ip_cfg = 'export DCS_MASTER_FLOATING_IP=%s' % dcs_floating_ip
append_file(TRAF_CFG_FILE, dcs_floating_ip_cfg)
# modify master with backup master host
for dcs_backup_node in dcs_backup_nodes.split(','):
append_file(dcs_master_file, dcs_backup_node)
p.write_xml()
### rest setting ###
p = ParseXML(rest_site_file)
p.add_property('rest.zookeeper.property.clientPort', zk_port)
p.add_property('rest.zookeeper.quorum', zk_hosts)
p.write_xml()
### run sqcertgen ###
run_cmd('sqcertgen')
# main
try:
dbcfgs_json = sys.argv[1]
except IndexError:
err('No db config found')
run(dbcfgs_json)
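# A hedged example (not part of the original script) of the dbcfgs JSON that
# this script expects as its single command-line argument. The keys are
# inferred from the usage above; the values are illustrative only.
#
#   {
#       "hbase_xml_file": "/etc/hbase/conf/hbase-site.xml",
#       "node_list": "node-1,node-2,node-3",
#       "dcs_cnt_per_node": "4",
#       "dcs_ha": "N",
#       "dcs_floating_ip": "192.168.0.100",
#       "dcs_backup_nodes": "node-2,node-3"
#   }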
|
For the iF product design award 2011, the organizers received 2,756 submissions from 43 countries for products registered in 16 categories. Kia Sportage and Kia Optima won in the category "Transportation Design".
One of the main focuses in this year's awards was on the participants' brand image. In addition to design quality, the jury also recognised a number of other criteria, including workmanship, level of innovativeness, eco-friendliness, ergonomics and safety.
'These latest awards for the all-new Kia Sportage and Optima are positive proof of the tremendous strides Kia has made in revolutionising the styling of our product line-up,' said Tae-Hyun Oh, senior vice-president & chief operating officer, Kia Motors Corporation.
Last year the Kia Venga was awarded both the iF design prize and red dot award, while the Kia Soul has also previously received the red dot award.
|
# Copyright (c) 2019 UAVCAN Consortium
# This software is distributed under the terms of the MIT License.
# Author: Pavel Kirienko <[email protected]>
from __future__ import annotations
import typing
import itertools
import dataclasses
from ._frame import FrameFormat
@dataclasses.dataclass(frozen=True)
class FilterConfiguration:
identifier: int
"""The reference CAN ID value."""
mask: int
"""Mask applies to the identifier only. It does not contain any special flags."""
format: typing.Optional[FrameFormat]
"""None means no preference -- both formats will be accepted."""
    def __post_init__(self) -> None:
        max_identifier = 2 ** self.identifier_bit_length - 1
        if not (0 <= self.identifier <= max_identifier):
            raise ValueError(f"Invalid identifier: {self.identifier}")
        if not (0 <= self.mask <= max_identifier):
            raise ValueError(f"Invalid mask: {self.mask}")
@property
def identifier_bit_length(self) -> int:
# noinspection PyTypeChecker
return int(self.format if self.format is not None else max(FrameFormat))
@staticmethod
def new_promiscuous(frame_format: typing.Optional[FrameFormat] = None) -> FilterConfiguration:
"""
Returns a configuration that accepts all frames of the specified format.
If the format is not specified, no distinction will be made.
Note that some CAN controllers may have difficulty supporting both formats on a single filter.
"""
return FilterConfiguration(identifier=0, mask=0, format=frame_format)
@property
def rank(self) -> int:
"""
This is the number of set bits in the mask.
This is a part of the CAN acceptance filter configuration optimization algorithm;
see :func:`optimize_filter_configurations`.
We return negative rank for configurations which do not distinguish between extended and base frames
in order to discourage merger of configurations of different frame types, since they are hard to
support in certain CAN controllers. The effect of this is that we guarantee that an ambivalent filter
configuration will never appear if the controller has at least two acceptance filters.
Negative rank is computed by subtracting the number of bits in the CAN ID
(or 29 if the filter accepts both base and extended identifiers) from the original rank.
"""
mask_mask = 2 ** self.identifier_bit_length - 1
rank = bin(self.mask & mask_mask).count("1")
if self.format is None:
rank -= int(self.identifier_bit_length) # Discourage merger of ambivalent filters.
return rank
def merge(self, other: FilterConfiguration) -> FilterConfiguration:
"""
This is a part of the CAN acceptance filter configuration optimization algorithm;
see :func:`optimize_filter_configurations`.
Given two filter configurations ``A`` and ``B``, where ``A`` accepts CAN frames whose identifiers
belong to ``Ca`` and likewise ``Cb`` for ``B``, the merge product of ``A`` and ``B`` would be a
new filter configuration that accepts CAN frames belonging to a new set which is a superset of
the union of ``Ca`` and ``Cb``.
"""
mask = self.mask & other.mask & ~(self.identifier ^ other.identifier)
identifier = self.identifier & mask
fmt = self.format if self.format == other.format else None
return FilterConfiguration(identifier=identifier, mask=mask, format=fmt)
def __str__(self) -> str:
out = "".join(
(str((self.identifier >> bit) & 1) if self.mask & (1 << bit) != 0 else "x")
for bit in reversed(range(int(self.format or FrameFormat.EXTENDED)))
)
return (self.format.name[:3].lower() if self.format else "any") + ":" + out
def optimize_filter_configurations(
configurations: typing.Iterable[FilterConfiguration], target_number_of_configurations: int
) -> typing.Sequence[FilterConfiguration]:
"""
Implements the CAN acceptance filter configuration optimization algorithm described in the Specification.
The algorithm was originally proposed by P. Kirienko and I. Sheremet.
Given a
set of ``K`` filter configurations that accept CAN frames whose identifiers belong to the set ``C``,
and ``N`` acceptance filters implemented in hardware, where ``1 <= N < K``, find a new
set of ``K'`` filter configurations that accept CAN frames whose identifiers belong to the set ``C'``,
such that ``K' <= N``, ``C'`` is a superset of ``C``, and ``|C'|`` is minimized.
The algorithm is not defined for ``N >= K`` because this configuration is considered optimal.
The function returns the input set unchanged in this case.
If the target number of configurations is not positive, a ValueError is raised.
The time complexity of this implementation is ``O(K!)``; it should be optimized.
"""
if target_number_of_configurations < 1:
raise ValueError(f"The number of configurations must be positive; found {target_number_of_configurations}")
configurations = list(configurations)
while len(configurations) > target_number_of_configurations:
options = itertools.starmap(
lambda ia, ib: (ia[0], ib[0], ia[1].merge(ib[1])), itertools.permutations(enumerate(configurations), 2)
)
index_replace, index_remove, merged = max(options, key=lambda x: int(x[2].rank))
configurations[index_replace] = merged
del configurations[index_remove] # Invalidates indexes
assert all(map(lambda x: isinstance(x, FilterConfiguration), configurations))
return configurations
def _unittest_can_media_filter_faults() -> None:
from pytest import raises
with raises(ValueError):
FilterConfiguration(0, -1, None)
with raises(ValueError):
FilterConfiguration(-1, 0, None)
for fmt in FrameFormat:
with raises(ValueError):
FilterConfiguration(2 ** int(fmt), 0, fmt)
with raises(ValueError):
FilterConfiguration(0, 2 ** int(fmt), fmt)
with raises(ValueError):
optimize_filter_configurations([], 0)
# noinspection SpellCheckingInspection
def _unittest_can_media_filter_str() -> None:
assert str(FilterConfiguration(0b10101010, 0b11101000, FrameFormat.EXTENDED)) == "ext:xxxxxxxxxxxxxxxxxxxxx101x1xxx"
assert (
str(FilterConfiguration(0b10101010101010101010101010101, 0b10111111111111111111111111111, FrameFormat.EXTENDED))
== "ext:1x101010101010101010101010101"
)
assert str(FilterConfiguration(0b10101010101, 0b11111111111, FrameFormat.BASE)) == "bas:10101010101"
assert str(FilterConfiguration(123, 456, None)) == "any:xxxxxxxxxxxxxxxxxxxx001xx1xxx"
assert str(FilterConfiguration.new_promiscuous()) == "any:xxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
assert repr(FilterConfiguration(123, 456, None)) == "FilterConfiguration(identifier=123, mask=456, format=None)"
def _unittest_can_media_filter_merge() -> None:
assert FilterConfiguration(123456, 0, None).rank == -29 # Worst rank
assert FilterConfiguration(123456, 0b110, None).rank == -27 # Two better
assert FilterConfiguration(1234, 0b110, FrameFormat.BASE).rank == 2
assert (
FilterConfiguration(0b111, 0b111, FrameFormat.EXTENDED)
.merge(FilterConfiguration(0b111, 0b111, FrameFormat.BASE))
.rank
== -29 + 3
)
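# A hedged usage sketch (not part of the original module): merging more
# desired filter configurations down to the number of hardware acceptance
# filters a CAN controller provides. The identifier values are made up for
# illustration.
#
#   cfgs = [
#       FilterConfiguration(identifier=0x123, mask=0x7FF, format=FrameFormat.BASE),
#       FilterConfiguration(identifier=0x1E123456, mask=0x1FFFFFFF, format=FrameFormat.EXTENDED),
#       FilterConfiguration(identifier=0x1E654321, mask=0x1FFFFFFF, format=FrameFormat.EXTENDED),
#   ]
#   reduced = optimize_filter_configurations(cfgs, target_number_of_configurations=2)
#   assert len(reduced) <= 2  # the result accepts a superset of the original IDs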
|
I know you all have been waiting for this…. I have too. I finally feel like I have finished decorating for the holidays. It took long enough. I changed it up a bit this year. My usual colors were red and green. This year I went white and silver. Dreaming of a white Christmas, anyone?
I thoroughly enjoyed decorating with some of my finds this year. Like this great sled. And the horse head.
I recently found a tobacco basket. A big one. And I have been looking for one for literally a year. It was such an exciting find! So I used it to display our many Christmas cards.
In my newly finished dining room, I just used fresh greenery (that I found on the curb) to dress up a wall display.
The center of the dining table is decorated with candles and some gorgeous ornaments that I found at a local antique shop.
While on a school field trip with my son, I snagged some really great oversized ornaments. They make the best display in my foyer.
The staircase got a touch of glam too with gorgeous sequined silver ribbon and lots of lights.
Stay tuned for part 2 where I share our tree! And the fun space I created in the room my family uses the most.
|
#!/usr/bin/env python
"""Mixin class to be used in tests for DB implementations."""
import itertools
import random
from typing import Any, Callable, Dict, Iterable, Optional, Text
from grr_response_server.databases import db
class QueryTestHelpersMixin(object):
"""Mixin containing helper methods for list/query methods tests."""
def DoOffsetAndCountTest(self,
fetch_all_fn: Callable[[], Iterable[Any]],
fetch_range_fn: Callable[[int, int], Iterable[Any]],
error_desc: Optional[Text] = None):
"""Tests a DB API method with different offset/count combinations.
This helper method works by first fetching all available objects with
fetch_all_fn and then fetching all possible ranges using fetch_fn. The test
passes if subranges returned by fetch_fn match subranges of values in
the list returned by fetch_all_fn.
Args:
fetch_all_fn: Function without arguments that fetches all available
objects using the API method that's being tested.
fetch_range_fn: Function that calls an API method that's being tested
passing 2 positional arguments: offset and count. It should return a
list of objects.
error_desc: Optional string to be used in error messages. May be useful to
identify errors from a particular test.
"""
all_objects = fetch_all_fn()
self.assertNotEmpty(all_objects,
"Fetched objects can't be empty (%s)." % error_desc)
for i in range(len(all_objects)):
for l in range(1, len(all_objects) + 1):
results = list(fetch_range_fn(i, l))
expected = list(all_objects[i:i + l])
self.assertListEqual(
results, expected,
"Results differ from expected (offset %d, count %d%s): %s vs %s" %
(i, l,
(", " + error_desc) if error_desc else "", results, expected))
def DoFilterCombinationsTest(self,
fetch_fn: Callable[..., Iterable[Any]],
conditions: Dict[Text, Any],
error_desc: Optional[Text] = None):
"""Tests a DB API method with different keyword arguments combinations.
This test method works by fetching sets of objects for each individual
condition and then checking that combinations of conditions produce
expected sets of objects.
Args:
fetch_fn: Function accepting keyword "query filter" arguments and
returning a list of fetched objects. When called without arguments,
fetch_fn is expected to return all available objects.
conditions: A dictionary of key -> value, where key is a string
identifying a keyword argument to be passed to fetch_fn and value is a
value to be passed. All possible permutations of conditions will be
tried on fetch_fn.
error_desc: Optional string to be used in error messages. May be useful to
identify errors from a particular test.
"""
perms = list(
itertools.chain.from_iterable([
itertools.combinations(sorted(conditions.keys()), i)
for i in range(1,
len(conditions) + 1)
]))
self.assertNotEmpty(perms)
all_objects = fetch_fn()
expected_objects = {}
for k, v in conditions.items():
expected_objects[k] = fetch_fn(**{k: v})
for condition_perm in perms:
expected = all_objects
kw_args = {}
for k in condition_perm:
expected = [e for e in expected if e in expected_objects[k]]
kw_args[k] = conditions[k]
got = fetch_fn(**kw_args)
# Make sure that the order of keys->values is stable in the error message.
kw_args_str = ", ".join(
"%r: %r" % (k, kw_args[k]) for k in sorted(kw_args))
self.assertListEqual(
got, expected, "Results differ from expected ({%s}%s): %s vs %s" %
(kw_args_str,
(", " + error_desc) if error_desc else "", got, expected))
def DoFilterCombinationsAndOffsetCountTest(self,
fetch_fn: Callable[...,
Iterable[Any]],
conditions: Dict[Text, Any],
error_desc: Optional[Text] = None):
"""Tests a DB API methods with combinations of offset/count args and kwargs.
This test methods works in 2 steps:
1. It tests that different conditions combinations work fine when offset
and count are 0 and db.MAX_COUNT respectively.
2. For every condition combination it tests all possible offset and count
combinations to make sure correct subsets of results are returned.
Args:
fetch_fn: Function accepting positional offset and count arguments and
keyword "query filter" arguments and returning a list of fetched
objects.
conditions: A dictionary of key -> value, where key is a string
identifying a keyword argument to be passed to fetch_fn and value is a
        value to be passed. All possible combinations of conditions will be
tried on fetch_fn.
error_desc: Optional string to be used in error messages. May be useful to
identify errors from a particular test.
"""
self.DoFilterCombinationsTest(
lambda **kw_args: fetch_fn(0, db.MAX_COUNT, **kw_args),
conditions,
error_desc=error_desc)
perms = list(
itertools.chain.from_iterable([
itertools.combinations(sorted(conditions.keys()), i)
for i in range(1,
len(conditions) + 1)
]))
self.assertNotEmpty(perms)
for condition_perm in perms:
kw_args = {}
for k in condition_perm:
kw_args[k] = conditions[k]
# Make sure that the order of keys->values is stable in the error message.
kw_args_str = ", ".join(
"%r: %r" % (k, kw_args[k]) for k in sorted(kw_args))
self.DoOffsetAndCountTest(
lambda: fetch_fn(0, db.MAX_COUNT, **kw_args), # pylint: disable=cell-var-from-loop
lambda offset, count: fetch_fn(offset, count, **kw_args), # pylint: disable=cell-var-from-loop
error_desc="{%s}%s" %
(kw_args_str, ", " + error_desc) if error_desc else "")
def InitializeClient(db_obj, client_id=None):
"""Initializes a test client.
Args:
db_obj: A database object.
client_id: A specific client id to use for initialized client. If none is
provided a randomly generated one is used.
Returns:
A client id for initialized client.
"""
if client_id is None:
client_id = "C."
for _ in range(16):
client_id += random.choice("0123456789abcdef")
db_obj.WriteClientMetadata(client_id, fleetspeak_enabled=False)
return client_id
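# A hedged usage sketch (not part of the original mixin): a database test can
# combine the mixin with a concrete DB test base and exercise a paginated API.
# The test base and the ListPaths method are illustrative only.
#
#   class PathsTest(QueryTestHelpersMixin, MyDbTestBase):
#
#     def testListPathsRespectsOffsetAndCount(self):
#       client_id = InitializeClient(self.db)
#       ...  # write a few path records for client_id
#       self.DoOffsetAndCountTest(
#           lambda: self.db.ListPaths(client_id, 0, db.MAX_COUNT),
#           lambda offset, count: self.db.ListPaths(client_id, offset, count),
#           error_desc="paths")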
|
The Sani-Safe® product line offers the foremost standard for professional cutlery. A textured, slip-resistant, easy-to-clean polypropylene handle withstands both high and low temperatures. An impervious blade-to-handle seal provides the utmost in sanitary performance. Blades are manufactured from quality stainless steel, are precision ground for just the right flexibility, and excel in commercial use. Made in USA. Items marked with the NSF logo are NSF Certified.
|
# Copyright (C) 2019-2020 The Software Heritage developers
# See the AUTHORS file at the top-level directory of this distribution
# License: GNU General Public License version 3, or any later version
# See top-level LICENSE file for more information
from typing import Dict, Iterable, List, Set
from swh.model.model import Content, Directory, Revision, Sha1Git, SkippedContent
from swh.storage import get_storage
from swh.storage.interface import StorageInterface
class FilteringProxyStorage:
"""Filtering Storage implementation. This is in charge of transparently
filtering out known objects prior to adding them to storage.
Sample configuration use case for filtering storage:
    .. code-block:: yaml
storage:
cls: filter
storage:
cls: remote
url: http://storage.internal.staging.swh.network:5002/
"""
object_types = ["content", "skipped_content", "directory", "revision"]
def __init__(self, storage):
self.storage: StorageInterface = get_storage(**storage)
def __getattr__(self, key):
if key == "storage":
raise AttributeError(key)
return getattr(self.storage, key)
def content_add(self, content: List[Content]) -> Dict[str, int]:
contents_to_add = self._filter_missing_contents(content)
return self.storage.content_add(
[x for x in content if x.sha256 in contents_to_add]
)
def skipped_content_add(self, content: List[SkippedContent]) -> Dict[str, int]:
contents_to_add = self._filter_missing_skipped_contents(content)
return self.storage.skipped_content_add(
[x for x in content if x.sha1_git is None or x.sha1_git in contents_to_add]
)
def directory_add(self, directories: List[Directory]) -> Dict[str, int]:
missing_ids = self._filter_missing_ids("directory", (d.id for d in directories))
return self.storage.directory_add(
[d for d in directories if d.id in missing_ids]
)
def revision_add(self, revisions: List[Revision]) -> Dict[str, int]:
missing_ids = self._filter_missing_ids("revision", (r.id for r in revisions))
return self.storage.revision_add([r for r in revisions if r.id in missing_ids])
    def _filter_missing_contents(self, contents: List[Content]) -> Set[bytes]:
        """Return only the sha256 keys of the contents missing from the storage
        Args:
            contents: List of contents to check for existence in the
                storage, keyed by sha256
        """
        missing_contents = [c.hashes() for c in contents]
        return set(
            self.storage.content_missing(missing_contents, key_hash="sha256")
        )
def _filter_missing_skipped_contents(
self, contents: List[SkippedContent]
) -> Set[Sha1Git]:
"""Return only the content keys missing from swh
Args:
content_hashes: List of sha1_git to check for existence in swh
storage
"""
missing_contents = [c.hashes() for c in contents if c.sha1_git is not None]
ids = set()
for c in self.storage.skipped_content_missing(missing_contents):
if c is None or c.get("sha1_git") is None:
continue
ids.add(c["sha1_git"])
return ids
    def _filter_missing_ids(self, object_type: str, ids: Iterable[bytes]) -> Set[bytes]:
        """Filter missing ids from the storage for a given object type.
        Args:
            object_type: object type to use {revision, directory}
            ids: Iterable of object_type ids
        Returns:
            Missing ids from the storage for object_type
        """
        missing_ids = list(ids)
        fn_by_object_type = {
            "revision": self.storage.revision_missing,
            "directory": self.storage.directory_missing,
        }
        fn = fn_by_object_type[object_type]
        return set(fn(missing_ids))
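# A hedged usage sketch (not part of the original module): wiring the filter
# proxy in front of an in-memory storage and adding the same content twice.
# The exact swh.storage API may differ between versions.
#
#   storage = FilteringProxyStorage(storage={"cls": "memory"})
#   content = Content.from_data(b"hello")
#   storage.content_add([content])  # actually written to the backend
#   storage.content_add([content])  # filtered out: already known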
|
NZ Hothouse has grown, packed and marketed New Zealand’s freshest and most flavoursome hothouse produce for over 30 years.
KPH Transport is the distribution arm of NZ Hothouse, and provides the logistical support to the packing, production and marketing functions of the group.
NZ Hothouse is strategically located in New Zealand’s largest produce production area. It is less than 2 kilometers from a major motorway interchange and within 30 minutes of Auckland’s international and domestic airports.
|
from os import sep as os_sep
from django.contrib.auth import get_user_model
from django.contrib.auth.decorators import login_required
from django.forms import widgets, forms, fields
from django.http import Http404
from django.shortcuts import render
from django.utils.translation import ugettext_lazy as _
from django.views.decorators.cache import never_cache
from django.views.decorators.csrf import csrf_protect
from repanier_v2.const import DECIMAL_ZERO, EMPTY_STRING
from repanier_v2.models.customer import Customer
from repanier_v2.picture.const import SIZE_S
from repanier_v2.tools import get_repanier_template_name
from repanier_v2.widget.checkbox import RepanierCheckboxWidget
from repanier_v2.widget.picture import RepanierPictureWidget
class CustomerForm(forms.Form):
long_name = fields.CharField(label=_("My name is"), max_length=100)
zero_waste = fields.BooleanField(
label=EMPTY_STRING,
required=False,
widget=RepanierCheckboxWidget(label=_("Family zero waste")),
)
subscribe_to_email = fields.BooleanField(
label=EMPTY_STRING,
required=False,
widget=RepanierCheckboxWidget(
label=_("I agree to receive mails from this site")
),
)
email1 = fields.EmailField(
label=_(
"My main email address, used to reset the password and connect to the site"
)
)
email2 = fields.EmailField(
label=_("My secondary email address (does not allow to connect to the site)"),
required=False,
)
phone1 = fields.CharField(label=_("My main phone number"), max_length=25)
phone2 = fields.CharField(
label=_("My secondary phone number"), max_length=25, required=False
)
city = fields.CharField(label=_("My city"), max_length=50, required=False)
address = fields.CharField(
label=_("My address"),
widget=widgets.Textarea(attrs={"cols": "40", "rows": "3"}),
required=False,
)
picture = fields.CharField(
label=_("My picture"),
widget=RepanierPictureWidget(upload_to="customer", size=SIZE_S, bootstrap=True),
required=False,
)
about_me = fields.CharField(
label=_("About me"),
widget=widgets.Textarea(attrs={"cols": "40", "rows": "3"}),
required=False,
)
def clean_email1(self):
email1 = self.cleaned_data["email1"]
user_model = get_user_model()
qs = (
user_model.objects.filter(email=email1, is_staff=False)
.exclude(id=self.request.customer_id)
.order_by("?")
)
if qs.exists():
self.add_error(
"email1",
_("The email {} is already used by another user.").format(email1),
)
return email1
    def __init__(self, *args, **kwargs):
        # "request" is required by clean_email1 to exclude the current customer.
        self.request = kwargs.pop("request", None)
        super().__init__(*args, **kwargs)
@login_required()
@csrf_protect
@never_cache
def published_customer_view(request, customer_id=0):
print("######### customer_id : {}".format(customer_id))
user = request.user
if user.is_repanier_staff:
customer = Customer.objects.filter(id=customer_id, is_active=True).first()
else:
customer = (
Customer.objects.filter(id=user.customer_id, is_active=True)
.filter(id=customer_id)
.first()
)
if not customer:
raise Http404
from repanier_v2.globals import (
REPANIER_SETTINGS_MEMBERSHIP_FEE,
REPANIER_SETTINGS_DISPLAY_WHO_IS_WHO,
)
if REPANIER_SETTINGS_MEMBERSHIP_FEE > DECIMAL_ZERO:
membership_fee_valid_until = customer.membership_fee_valid_until
else:
membership_fee_valid_until = None
template_name = get_repanier_template_name("published_customer_form.html")
if request.method == "POST": # If the form has been submitted...
        form = CustomerForm(request.POST, request=request)  # A form bound to the POST data
if form.is_valid(): # All validation rules pass
# Process the data in form.cleaned_data
# ...
if customer is not None:
customer.long_name = form.cleaned_data.get("long_name")
customer.phone1 = form.cleaned_data.get("phone1")
customer.phone2 = form.cleaned_data.get("phone2")
customer.email2 = form.cleaned_data.get("email2").lower()
customer.subscribe_to_email = form.cleaned_data.get(
"subscribe_to_email"
)
customer.city = form.cleaned_data.get("city")
customer.address = form.cleaned_data.get("address")
customer.picture = form.cleaned_data.get("picture")
customer.about_me = form.cleaned_data.get("about_me")
customer.zero_waste = form.cleaned_data.get("zero_waste")
customer.save()
            # Important: keep this after the block above, because re-binding CustomerForm(data, request=request) discards form.cleaned_data
email = form.cleaned_data.get("email1")
user_model = get_user_model()
user = user_model.objects.filter(email=email).order_by("?").first()
            if user is None or user.email != email:
                # user.email != email covers the case-insensitive SQL match
customer.user.username = customer.user.email = email.lower()
# customer.user.first_name = EMPTY_STRING
# customer.user.last_name = customer.short_name
customer.user.save()
# User feed back : Display email in lower case.
data = form.data.copy()
data["email1"] = customer.user.email
data["email2"] = customer.email2
form = CustomerForm(data, request=request)
return render(
request,
template_name,
{
"form": form,
"membership_fee_valid_until": membership_fee_valid_until,
"display_who_is_who": REPANIER_SETTINGS_DISPLAY_WHO_IS_WHO,
"update": True,
},
)
return render(
request,
template_name,
{
"form": form,
"membership_fee_valid_until": membership_fee_valid_until,
"display_who_is_who": REPANIER_SETTINGS_DISPLAY_WHO_IS_WHO,
"update": False,
},
)
else:
form = CustomerForm() # An unbound form
field = form.fields["long_name"]
field.initial = customer.long_name
field = form.fields["phone1"]
field.initial = customer.phone1
field = form.fields["phone2"]
field.initial = customer.phone2
field = form.fields["email1"]
field.initial = customer.user.email
field = form.fields["email2"]
field.initial = customer.email2
field = form.fields["subscribe_to_email"]
field.initial = customer.subscribe_to_email
field = form.fields["city"]
field.initial = customer.city
field = form.fields["address"]
field.initial = customer.address
field = form.fields["picture"]
field.initial = customer.picture
if hasattr(field.widget, "upload_to"):
field.widget.upload_to = "{}{}{}".format("customer", os_sep, customer.id)
field = form.fields["about_me"]
field.initial = customer.about_me
field = form.fields["zero_waste"]
field.initial = customer.zero_waste
return render(
request,
template_name,
{
"form": form,
"membership_fee_valid_until": membership_fee_valid_until,
"display_who_is_who": REPANIER_SETTINGS_DISPLAY_WHO_IS_WHO,
"update": None,
},
)
|
Sets the ALT_ON meta state of the resulting key.
On Honeycomb and above, sets the CTRL_ON meta state of the resulting key.
Sets the SHIFT_ON meta state of the resulting key.
On Honeycomb and above, sets the CTRL_ON meta state of the resulting key. On Gingerbread and below, this is a noop.
|
#!/usr/bin/env python
# coding=utf-8
import os,sys
from ops.views.ssh_settings import zabbixurl,zabbixpwd,zabbixuser
import time
from pyzabbix import ZabbixAPI
# Log in to Zabbix
zabbix = ZabbixAPI(zabbixurl)
zabbix.session.verify = False
zabbix.login(zabbixuser, zabbixpwd)
def group_list():
return zabbix.hostgroup.get(
output=['groupid', 'name']
)
def host_list(group=None):
if group:
return zabbix.host.get(
output=['host', 'hostid', 'name', 'available'],
groupids=[group],
selectGroups=['name']
)
else:
return zabbix.host.get(
output=['host', 'hostid', 'name', 'available'],
selectGroups=['name']
)
def cpu_list(hostid):
if hostid:
item = zabbix.item.get(hostids=[hostid], output=["name", "key_", "value_type", "hostid", "status", "state"],
filter={'key_': 'system.cpu.load[percpu,avg1]'})
itemid = item[0]['itemid']
t_till = int(time.time())
t_from = t_till - 2 * 24 * 60 * 60
return zabbix.history.get(
# hostids=[hostid],
itemids=[itemid],
history=0,
output='extend',
sortfield='clock',
sortorder='ASC',
time_from=t_from,
time_till=t_till
)
def memory_list(hostid):
if hostid:
item = zabbix.item.get(hostids=[hostid], output=["name", "key_", "value_type", "hostid", "status", "state"],
filter={'key_': 'vm.memory.size[available]'})
itemid = item[0]['itemid']
t_till = int(time.time())
t_from = t_till - 2 * 24 * 60 * 60
return zabbix.history.get(
# hostids=[hostid],
itemids=[itemid],
history=3,
output='extend',
sortfield='clock',
sortorder='ASC',
time_from=t_from,
time_till=t_till
)
def disk_list(hostid):
if hostid:
item = zabbix.item.get(hostids=[hostid], output=["name", "key_", "value_type", "hostid", "status", "state"],
filter={'key_': 'vfs.fs.size[/,free]'})
itemid = item[0]['itemid']
t_till = int(time.time())
t_from = t_till - 2 * 24 * 60 * 60
return zabbix.history.get(
# hostids=[hostid],
itemids=[itemid],
history=3,
output='extend',
sortfield='clock',
sortorder='ASC',
time_from=t_from,
time_till=t_till
)
def event_list():
t_till = int(time.time())
t_from = t_till - 7 * 24 * 60 * 60
triggers = zabbix.trigger.get(
output=['triggerid', 'description', 'priority']
)
triggerDict = {}
for trigger in triggers:
triggerDict[trigger['triggerid']] = trigger
events = zabbix.event.get(
output='extend',
selectHosts=['name', 'host'],
sortfield='clock',
sortorder='DESC',
time_from=t_from,
time_till=t_till
)
return [{
'clock': event['clock'],
'eventid': event['eventid'],
'acknowledged': event['acknowledged'],
'hosts': event['hosts'],
'trigger': triggerDict.get(event['objectid'])
} for event in events]
def usage(hostid):
diskItemids = zabbix.item.get(
hostids=[hostid],
output=["name",
"key_",
"value_type",
"hostid",
"status",
"state"],
filter={'key_': 'vfs.fs.size[/,pfree]'}
)
diskUsage = zabbix.history.get(itemids=[diskItemids[0]['itemid']], history=0, output='extend', sortfield='clock',
sortorder='ASC', limit=1)
return [{
'diskUsage': diskUsage,
}]
def service_history_list(service):
if service:
t_till = int(time.time())
t_from = t_till - 7 * 24 * 60 * 60
        # all monitored items matching the given service name
items = zabbix.item.get(
output=['itemid'],
filter={'name': service},
selectHosts=['name', 'host'],
)
history = []
for item in items:
history.append(
zabbix.history.get(
itemids=[item['itemid']],
history=3,
output='extend',
sortfield='clock',
sortorder='ASC',
time_from=t_from,
time_till=t_till,
)
)
return {
'items': items,
'history': history
}
def service_item_list(service):
if service:
        # all monitored items matching the given service name
items = zabbix.item.get(
output=['itemid'],
filter={'name': service},
selectHosts=['name', 'host'],
)
return items
def history_list(itemid):
if itemid:
t_till = int(time.time())
t_from = t_till - 7 * 24 * 60 * 60
return zabbix.history.get(
itemids=[itemid],
history=3,
output='extend',
sortfield='clock',
sortorder='ASC',
time_from=t_from,
time_till=t_till,
)
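# A hedged usage sketch (not part of the original module): list host groups,
# pick a host from the first group, and pull its last two days of CPU-load
# history.
#
#   groups = group_list()
#   hosts = host_list(groups[0]['groupid']) if groups else host_list()
#   if hosts:
#       for point in cpu_list(hosts[0]['hostid'])[-5:]:
#           print(point['clock'], point['value'])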
|
Continuing our love for semi precious stones, this necklace truly shows off the colour of the natural stone. The asymmetric design makes it a little different from your usual semi precious pendant, and the chain tassel adds even more interest and movement.
Colours and shapes of the semi precious stone may vary slightly from the image due to this being a naturally occurring stone.
|
#!/usr/bin/env python
import logging, os
# Flask app debug value.
#DEBUG = False # DEBUG AND INFO log levels won't be shown..
DEBUG = True
# Used for running the flask server independent of mod_wsgi
# using SERVER_NAME as the var name makes views fail..
# Set servername to gameserver base url
#SERVERNAME = "127.0.0.1"
SERVERNAME = "localhost"
SERVER_PORT = 8139
#SERVER_PORT = 5000
# URLs from which request (ajax) can be made to this server
ALLOWED_DOMAINS = "*" # all
#ALLOWED_DOMAINS = "http://"+SERVERNAME # allow calls from elsewhere in the same server
#PREFERRED_URL_SCHEME='https'
# Get the current dir for the application
APPLICATION_PATH = os.path.dirname(os.path.realpath(__file__))
# If mod_wsgi is used, messages will also get logged to the apache log files
LOG_FILE = os.path.join(APPLICATION_PATH, "nlpserver.log")
DEFAULT_LOG_FORMATTER = logging.Formatter(\
"%(asctime)s - %(levelname)s - %(message)s")
# these not really needed since DEBUG_VAL above influences this
#DEFAULT_LOG_LEVEL = logging.DEBUG
#DEFAULT_LOG_LEVEL = logging.WARNING
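# A hedged usage sketch (not part of this settings module): a Flask app can
# load these values with config.from_object(); this assumes the module is
# importable as "settings".
#
#   from flask import Flask
#   app = Flask(__name__)
#   app.config.from_object('settings')
#   app.run(host=app.config['SERVERNAME'], port=app.config['SERVER_PORT'],
#           debug=app.config['DEBUG'])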
|
Pearls signify elegance and so does Ink. With a line of pearls bridging the neckline and the body, this top will definitely make you stand apart from the crowd. Team it up with your favorite pants for a day out or layer it under a blazer for an office meeting, Ink will never let you down.
|
# Copyright Contributors to the Pyro project.
# SPDX-License-Identifier: Apache-2.0
from jax import lax
import jax.numpy as jnp
import jax.random as random
from jax.scipy.special import logsumexp
from jax.tree_util import tree_map
from numpyro.distributions import constraints
from numpyro.distributions.continuous import (
Cauchy,
Laplace,
Logistic,
Normal,
SoftLaplace,
StudentT,
)
from numpyro.distributions.distribution import Distribution
from numpyro.distributions.util import (
is_prng_key,
lazy_property,
promote_shapes,
validate_sample,
)
class LeftTruncatedDistribution(Distribution):
arg_constraints = {"low": constraints.real}
reparametrized_params = ["low"]
supported_types = (Cauchy, Laplace, Logistic, Normal, SoftLaplace, StudentT)
def __init__(self, base_dist, low=0.0, validate_args=None):
assert isinstance(base_dist, self.supported_types)
assert (
base_dist.support is constraints.real
), "The base distribution should be univariate and have real support."
batch_shape = lax.broadcast_shapes(base_dist.batch_shape, jnp.shape(low))
self.base_dist = tree_map(
lambda p: promote_shapes(p, shape=batch_shape)[0], base_dist
)
(self.low,) = promote_shapes(low, shape=batch_shape)
self._support = constraints.greater_than(low)
super().__init__(batch_shape, validate_args=validate_args)
@constraints.dependent_property(is_discrete=False, event_dim=0)
def support(self):
return self._support
@lazy_property
def _tail_prob_at_low(self):
# if low < loc, returns cdf(low); otherwise returns 1 - cdf(low)
loc = self.base_dist.loc
sign = jnp.where(loc >= self.low, 1.0, -1.0)
return self.base_dist.cdf(loc - sign * (loc - self.low))
@lazy_property
def _tail_prob_at_high(self):
# if low < loc, returns cdf(high) = 1; otherwise returns 1 - cdf(high) = 0
return jnp.where(self.low <= self.base_dist.loc, 1.0, 0.0)
def sample(self, key, sample_shape=()):
assert is_prng_key(key)
u = random.uniform(key, sample_shape + self.batch_shape)
loc = self.base_dist.loc
sign = jnp.where(loc >= self.low, 1.0, -1.0)
return (1 - sign) * loc + sign * self.base_dist.icdf(
(1 - u) * self._tail_prob_at_low + u * self._tail_prob_at_high
)
@validate_sample
def log_prob(self, value):
sign = jnp.where(self.base_dist.loc >= self.low, 1.0, -1.0)
return self.base_dist.log_prob(value) - jnp.log(
sign * (self._tail_prob_at_high - self._tail_prob_at_low)
)
def tree_flatten(self):
base_flatten, base_aux = self.base_dist.tree_flatten()
if isinstance(self._support.lower_bound, (int, float)):
return base_flatten, (
type(self.base_dist),
base_aux,
self._support.lower_bound,
)
else:
return (base_flatten, self.low), (type(self.base_dist), base_aux)
@classmethod
def tree_unflatten(cls, aux_data, params):
if len(aux_data) == 2:
base_flatten, low = params
base_cls, base_aux = aux_data
else:
base_flatten = params
base_cls, base_aux, low = aux_data
base_dist = base_cls.tree_unflatten(base_aux, base_flatten)
return cls(base_dist, low=low)
class RightTruncatedDistribution(Distribution):
arg_constraints = {"high": constraints.real}
reparametrized_params = ["high"]
supported_types = (Cauchy, Laplace, Logistic, Normal, SoftLaplace, StudentT)
def __init__(self, base_dist, high=0.0, validate_args=None):
assert isinstance(base_dist, self.supported_types)
assert (
base_dist.support is constraints.real
), "The base distribution should be univariate and have real support."
batch_shape = lax.broadcast_shapes(base_dist.batch_shape, jnp.shape(high))
self.base_dist = tree_map(
lambda p: promote_shapes(p, shape=batch_shape)[0], base_dist
)
(self.high,) = promote_shapes(high, shape=batch_shape)
self._support = constraints.less_than(high)
super().__init__(batch_shape, validate_args=validate_args)
@constraints.dependent_property(is_discrete=False, event_dim=0)
def support(self):
return self._support
@lazy_property
def _cdf_at_high(self):
return self.base_dist.cdf(self.high)
def sample(self, key, sample_shape=()):
assert is_prng_key(key)
u = random.uniform(key, sample_shape + self.batch_shape)
return self.base_dist.icdf(u * self._cdf_at_high)
@validate_sample
def log_prob(self, value):
return self.base_dist.log_prob(value) - jnp.log(self._cdf_at_high)
def tree_flatten(self):
base_flatten, base_aux = self.base_dist.tree_flatten()
if isinstance(self._support.upper_bound, (int, float)):
return base_flatten, (
type(self.base_dist),
base_aux,
self._support.upper_bound,
)
else:
return (base_flatten, self.high), (type(self.base_dist), base_aux)
@classmethod
def tree_unflatten(cls, aux_data, params):
if len(aux_data) == 2:
base_flatten, high = params
base_cls, base_aux = aux_data
else:
base_flatten = params
base_cls, base_aux, high = aux_data
base_dist = base_cls.tree_unflatten(base_aux, base_flatten)
return cls(base_dist, high=high)
class TwoSidedTruncatedDistribution(Distribution):
arg_constraints = {"low": constraints.dependent, "high": constraints.dependent}
reparametrized_params = ["low", "high"]
supported_types = (Cauchy, Laplace, Logistic, Normal, SoftLaplace, StudentT)
def __init__(self, base_dist, low=0.0, high=1.0, validate_args=None):
assert isinstance(base_dist, self.supported_types)
assert (
base_dist.support is constraints.real
), "The base distribution should be univariate and have real support."
batch_shape = lax.broadcast_shapes(
base_dist.batch_shape, jnp.shape(low), jnp.shape(high)
)
self.base_dist = tree_map(
lambda p: promote_shapes(p, shape=batch_shape)[0], base_dist
)
(self.low,) = promote_shapes(low, shape=batch_shape)
(self.high,) = promote_shapes(high, shape=batch_shape)
self._support = constraints.interval(low, high)
super().__init__(batch_shape, validate_args=validate_args)
@constraints.dependent_property(is_discrete=False, event_dim=0)
def support(self):
return self._support
@lazy_property
def _tail_prob_at_low(self):
# if low < loc, returns cdf(low); otherwise returns 1 - cdf(low)
loc = self.base_dist.loc
sign = jnp.where(loc >= self.low, 1.0, -1.0)
return self.base_dist.cdf(loc - sign * (loc - self.low))
@lazy_property
def _tail_prob_at_high(self):
# if low < loc, returns cdf(high); otherwise returns 1 - cdf(high)
loc = self.base_dist.loc
sign = jnp.where(loc >= self.low, 1.0, -1.0)
return self.base_dist.cdf(loc - sign * (loc - self.high))
def sample(self, key, sample_shape=()):
assert is_prng_key(key)
u = random.uniform(key, sample_shape + self.batch_shape)
# NB: we use a more numerically stable formula for a symmetric base distribution
# A = icdf(cdf(low) + (cdf(high) - cdf(low)) * u) = icdf[(1 - u) * cdf(low) + u * cdf(high)]
# will suffer by precision issues when low is large;
# If low < loc:
# A = icdf[(1 - u) * cdf(low) + u * cdf(high)]
# Else
# A = 2 * loc - icdf[(1 - u) * cdf(2*loc-low)) + u * cdf(2*loc - high)]
loc = self.base_dist.loc
sign = jnp.where(loc >= self.low, 1.0, -1.0)
return (1 - sign) * loc + sign * self.base_dist.icdf(
(1 - u) * self._tail_prob_at_low + u * self._tail_prob_at_high
)
@validate_sample
def log_prob(self, value):
# NB: we use a more numerically stable formula for a symmetric base distribution
# if low < loc
# cdf(high) - cdf(low) = as-is
# if low > loc
# cdf(high) - cdf(low) = cdf(2 * loc - low) - cdf(2 * loc - high)
sign = jnp.where(self.base_dist.loc >= self.low, 1.0, -1.0)
return self.base_dist.log_prob(value) - jnp.log(
sign * (self._tail_prob_at_high - self._tail_prob_at_low)
)
def tree_flatten(self):
base_flatten, base_aux = self.base_dist.tree_flatten()
if isinstance(self._support.lower_bound, (int, float)) and isinstance(
self._support.upper_bound, (int, float)
):
return base_flatten, (
type(self.base_dist),
base_aux,
self._support.lower_bound,
self._support.upper_bound,
)
else:
return (base_flatten, self.low, self.high), (type(self.base_dist), base_aux)
@classmethod
def tree_unflatten(cls, aux_data, params):
if len(aux_data) == 2:
base_flatten, low, high = params
base_cls, base_aux = aux_data
else:
base_flatten = params
base_cls, base_aux, low, high = aux_data
base_dist = base_cls.tree_unflatten(base_aux, base_flatten)
return cls(base_dist, low=low, high=high)
def TruncatedDistribution(base_dist, low=None, high=None, validate_args=None):
"""
A function to generate a truncated distribution.
    :param base_dist: The base distribution to be truncated. This should be a
        univariate distribution. Currently, only the following distributions are
        supported: Cauchy, Laplace, Logistic, Normal, SoftLaplace, and StudentT.
    :param low: the value which is used to truncate the base distribution from
        below. Set this parameter to None to skip truncation from below.
    :param high: the value which is used to truncate the base distribution from
        above. Set this parameter to None to skip truncation from above.
"""
if high is None:
if low is None:
return base_dist
else:
return LeftTruncatedDistribution(
base_dist, low=low, validate_args=validate_args
)
elif low is None:
return RightTruncatedDistribution(
base_dist, high=high, validate_args=validate_args
)
else:
return TwoSidedTruncatedDistribution(
base_dist, low=low, high=high, validate_args=validate_args
)
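# A hedged usage sketch (not part of the original module): a Normal truncated
# to the interval [0, 5], sampled and scored.
#
#   from jax import random
#   d = TruncatedDistribution(Normal(0.0, 1.0), low=0.0, high=5.0)
#   samples = d.sample(random.PRNGKey(0), sample_shape=(1000,))
#   log_p = d.log_prob(samples)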
class TruncatedCauchy(LeftTruncatedDistribution):
arg_constraints = {
"low": constraints.real,
"loc": constraints.real,
"scale": constraints.positive,
}
reparametrized_params = ["low", "loc", "scale"]
def __init__(self, low=0.0, loc=0.0, scale=1.0, validate_args=None):
self.low, self.loc, self.scale = promote_shapes(low, loc, scale)
super().__init__(
Cauchy(self.loc, self.scale), low=self.low, validate_args=validate_args
)
@property
def mean(self):
return jnp.full(self.batch_shape, jnp.nan)
@property
def variance(self):
return jnp.full(self.batch_shape, jnp.nan)
def tree_flatten(self):
if isinstance(self._support.lower_bound, (int, float)):
aux_data = self._support.lower_bound
else:
aux_data = None
return (self.low, self.loc, self.scale), aux_data
@classmethod
def tree_unflatten(cls, aux_data, params):
d = cls(*params)
if aux_data is not None:
d._support = constraints.greater_than(aux_data)
return d
class TruncatedNormal(LeftTruncatedDistribution):
arg_constraints = {
"low": constraints.real,
"loc": constraints.real,
"scale": constraints.positive,
}
reparametrized_params = ["low", "loc", "scale"]
def __init__(self, low=0.0, loc=0.0, scale=1.0, validate_args=None):
self.low, self.loc, self.scale = promote_shapes(low, loc, scale)
super().__init__(
Normal(self.loc, self.scale), low=self.low, validate_args=validate_args
)
@property
def mean(self):
low_prob = jnp.exp(self.log_prob(self.low))
return self.loc + low_prob * self.scale ** 2
@property
def variance(self):
low_prob = jnp.exp(self.log_prob(self.low))
return (self.scale ** 2) * (
1 + (self.low - self.loc) * low_prob - (low_prob * self.scale) ** 2
)
def tree_flatten(self):
if isinstance(self._support.lower_bound, (int, float)):
aux_data = self._support.lower_bound
else:
aux_data = None
return (self.low, self.loc, self.scale), aux_data
@classmethod
def tree_unflatten(cls, aux_data, params):
d = cls(*params)
if aux_data is not None:
d._support = constraints.greater_than(aux_data)
return d
class TruncatedPolyaGamma(Distribution):
truncation_point = 2.5
num_log_prob_terms = 7
num_gamma_variates = 8
assert num_log_prob_terms % 2 == 1
arg_constraints = {}
support = constraints.interval(0.0, truncation_point)
def __init__(self, batch_shape=(), validate_args=None):
super(TruncatedPolyaGamma, self).__init__(
batch_shape, validate_args=validate_args
)
def sample(self, key, sample_shape=()):
assert is_prng_key(key)
denom = jnp.square(jnp.arange(0.5, self.num_gamma_variates))
x = random.gamma(
key, jnp.ones(self.batch_shape + sample_shape + (self.num_gamma_variates,))
)
x = jnp.sum(x / denom, axis=-1)
return jnp.clip(x * (0.5 / jnp.pi ** 2), a_max=self.truncation_point)
@validate_sample
def log_prob(self, value):
value = value[..., None]
all_indices = jnp.arange(0, self.num_log_prob_terms)
two_n_plus_one = 2.0 * all_indices + 1.0
log_terms = (
jnp.log(two_n_plus_one)
- 1.5 * jnp.log(value)
- 0.125 * jnp.square(two_n_plus_one) / value
)
even_terms = jnp.take(log_terms, all_indices[::2], axis=-1)
odd_terms = jnp.take(log_terms, all_indices[1::2], axis=-1)
sum_even = jnp.exp(logsumexp(even_terms, axis=-1))
sum_odd = jnp.exp(logsumexp(odd_terms, axis=-1))
return jnp.log(sum_even - sum_odd) - 0.5 * jnp.log(2.0 * jnp.pi)
def tree_flatten(self):
return (), self.batch_shape
@classmethod
def tree_unflatten(cls, aux_data, params):
return cls(batch_shape=aux_data)
|
In “The Quantifiable Edges Guide to Fed Days” I discussed Fed Days that close at new highs. The basic finding was that when the market closed at a short-term high on a Fed Day, it was likely to pull back over the next few days, but when it closed at a long-term high, the rally was likely to continue. Below is a study from the guide, with all the stats updated.
This suggests further upside is likely over the next 1-2 weeks.
|
import shortest_path_switch
import simple_switch
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.controller import ofp_event
import os
import socket
import json
from ryu.lib import hub
SOCKFILE = '/tmp/hello_sock'
class ShortestRestSwitch(simple_switch.SimpleSwitch13):
def __init__(self, *args, **kwargs):
super(ShortestRestSwitch, self).__init__(*args, **kwargs)
self.sock = None
self.config = {}
self.start_sock_server()
def set_vtable(self, host, vlan):
if self.vtable[host] != vlan:
self.vtable.update({host:vlan})
self.SimpleSwitchDeleteFlow(self.default_datapath, host)
def recv_loop(self):
print('start loop')
while True:
            print('wait for recv')
            data = self.sock.recv(1024)
            print('Received new vtable from web')
print(data)
msg = json.loads(data)
if msg:
print('get msg')
for host, vlan in msg.items():
self.set_vtable(host, vlan)
def start_sock_server(self):
if os.path.exists(SOCKFILE):
os.unlink(SOCKFILE)
self.sock = hub.socket.socket(hub.socket.AF_UNIX, hub.socket.SOCK_DGRAM)
self.sock.bind(SOCKFILE)
hub.spawn(self.recv_loop)
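# A hedged client sketch (not part of the original class): pushing a new
# host -> VLAN mapping to the controller over the unix datagram socket,
# in the JSON format recv_loop() expects. The host key is illustrative.
#
#   import json, socket
#   sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
#   sock.sendto(json.dumps({"00:00:00:00:00:01": 10}).encode(),
#               '/tmp/hello_sock')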
|
Volunteers from First Cornwallis took a couple of days and embarked on an exterior painting project at Doug & Marie's home.
July 27 2016 - Update: Project coming to an end... Many hands and laughter could be heard as the crew wrapped up work at Doug and Marie's home.
A member of the Convention of Atlantic Baptist Churches and the Eastern Valley United Baptist Association.
|
from datetime import timedelta
from alembic import op
import sqlalchemy as sa
from sqlalchemy.orm import sessionmaker
from portal.models.audit import Audit
from portal.models.role import Role
from portal.models.user import User, UserRoles
from portal.models.user_consent import UserConsent
"""Correct user_consent acceptance_date as default arg wasn't updating
Revision ID: 883fd1095361
Revises: 67c2bea62313
Create Date: 2018-10-11 12:48:33.980877
"""
# revision identifiers, used by Alembic.
revision = '883fd1095361'
down_revision = '67c2bea62313'
Session = sessionmaker()
def upgrade():
bind = op.get_bind()
session = Session(bind=bind)
admin = User.query.filter_by(email='[email protected]').first()
admin = admin or User.query.join(
UserRoles).join(Role).filter(
sa.and_(
Role.id == UserRoles.role_id, UserRoles.user_id == User.id,
Role.name == 'admin')).first()
admin_id = admin.id
query = session.query(UserConsent).join(
Audit, UserConsent.audit_id == Audit.id).with_entities(
UserConsent, Audit.timestamp)
eligible_uc_ids = {}
for uc, timestamp in query:
if uc.acceptance_date.microsecond != 0:
# skip the honest ones, that differ by milliseconds
if timestamp - uc.acceptance_date < timedelta(seconds=5):
continue
if timestamp - uc.acceptance_date > timedelta(days=8):
raise ValueError(
"too big of a jump - please review {} {} {}".format(
uc.user_id, timestamp, uc.acceptance_date))
eligible_uc_ids[uc.id] = (
uc.acceptance_date, timestamp.replace(microsecond=0))
# now update each in eligible list outside of initial query
for uc_id, dates in eligible_uc_ids.items():
old_acceptance_date, new_acceptance_date = dates
msg = "Correct stale default acceptance_date {} to {}".format(
old_acceptance_date, new_acceptance_date)
uc = session.query(UserConsent).get(uc_id)
audit = Audit(
user_id=admin_id, subject_id=uc.user_id, context='consent',
comment=msg)
uc.audit = audit
uc.acceptance_date = new_acceptance_date
session.commit()
def downgrade():
# no value in undoing that mess
pass
|
Liverworts and arthropods rise onto dry land as the first plants and land animals. A collision of asteroids subjects the Earth to a powerful meteorite shower. It does not cause large-scale extinction, but accelerates the evolution of new species in the Baltic and other seas.
The first land animals were millipede-like arthropods; the oldest fossils are 460 million years old. The transition was facilitated by a jointed exoskeleton that allowed locomotion without the support of water. Around the same time, land plants evolved from green algae; fossils 472 million years old have been found in Argentina. Terrestrial life offered plants two benefits: more efficient gas exchange and photosynthesis. However, the availability of water was limited and gravity hindered vertical growth. Liverworts didn’t grow high, but had a protective surface layer (the cuticle) that prevents desiccation.
A collision occurred in the asteroid belt, between Jupiter and Mars, causing a large number of meteorites to hit the Earth. The closest evidence can be found in southern Sweden at the southern edge of Lake Vättern, in Kinekulla, where many meteorites are found in the limestone layers. Life in the shallow sea was very diverse, and it seems that the shower of meteorites only sped up the diversification of species rather than slowing it down. The climate on Earth was warm and the sea level high, but the climate was starting to cool.
|
# Copyright 2020 The SQLFlow Authors. All rights reserved.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License
from abc import ABCMeta, abstractmethod
import six
from six.moves.urllib.parse import parse_qs, urlparse
class ResultSet(six.Iterator):
"""Base class for DB query result, caller can iteratable this object
to get all result rows"""
def __init__(self):
self._generator = None
def __iter__(self):
return self
def _gen(self):
fetch_size = 128
while True:
rows = self._fetch(fetch_size) or []
for r in rows:
yield r
if len(rows) < fetch_size:
break
def __next__(self):
if self._generator is None:
self._generator = self._gen()
return next(self._generator)
@abstractmethod
def _fetch(self, fetch_size):
"""Fetch given count of records in the result set
Args:
fetch_size: max record to retrive
Returns:
A list of records, each record is a list
represent a row in the result set
"""
pass
def raw_column_info(self):
return self.column_info()
@abstractmethod
def column_info(self):
"""Get the result column meta, type in the meta maybe DB-specific
Returns:
A list of column metas, like [(field_a, INT), (field_b, STRING)]
"""
pass
@abstractmethod
def success(self):
"""Return True if the query is success"""
return False
@abstractmethod
def close(self):
"""Close the ResultSet explicitly, release any resource incurred by this query
implementation should support close multi-times"""
pass
def error(self):
"""Get the error message if self.success()==False
Returns:
The error message
"""
return ""
@six.add_metaclass(ABCMeta)
class Connection(object):
"""Base class for DB connection
Args:
        conn_uri: a connection URI in the scheme://name:passwd@host/path?params
format.
"""
def __init__(self, conn_uri):
self.uristr = conn_uri
self.uripts = self._parse_uri()
self.driver = self.uripts.scheme
self.params = parse_qs(
self.uripts.query,
keep_blank_values=True,
)
for k, l in self.params.items():
if len(l) == 1:
self.params[k] = l[0]
def __enter__(self, *args, **kwargs):
return self
def __exit__(self, *args, **kwargs):
self.close()
def param(self, param_name, default_value=""):
if not self.params:
return default_value
return self.params.get(param_name, default_value)
def _parse_uri(self):
"""Parse the connection string into URI parts
Returns:
A ParseResult, different implementations should always pack
the result into ParseResult
"""
return urlparse(self.uristr)
@abstractmethod
def _get_result_set(self, statement):
"""Get the ResultSet for given statement
Args:
statement: the statement to execute
Returns:
A ResultSet object
"""
pass
def query(self, statement):
"""Execute given statement and return a ResultSet
Typical usage will be:
rs = conn.query("SELECT * FROM a;")
result_rows = [r for r in rs]
rs.close()
Args:
statement: the statement to execute
Returns:
            A ResultSet object which is iterable; each generated
            record in the iterator is a result row wrapped in a list
"""
rs = self._get_result_set(statement)
if rs.success():
return rs
else:
raise Exception('Execute "%s" error\n%s' % (statement, rs.error()))
def is_query(self, statement):
"""Return true if the statement is a query SQL statement."""
s = statement.strip()
s = s.upper()
if s.startswith("SELECT") and s.find("INTO") == -1:
return True
if s.startswith("SHOW") and s.find("CREATE") >= 0 or s.find(
"DATABASES") >= 0 or s.find("TABLES") >= 0:
return True
if s.startswith("DESC") or s.startswith("EXPLAIN"):
return True
return False
def execute(self, statement):
"""Execute given statement and return True on success
Args:
statement: the statement to execute
Returns:
True on success, False otherwise
"""
rs = None
try:
rs = self._get_result_set(statement)
if rs.success():
# NOTE(sneaxiy): must execute commit!
# Otherwise, the `INSERT` statement
# would have no effect even though
# the connection is closed.
self.commit()
return True
else:
raise Exception('Execute "%s" error\n%s' %
(statement, rs.error()))
finally:
if rs is not None:
rs.close()
def get_table_schema(self, table_name):
"""Get table schema for given table
Args:
table_name: name of the table to get schema
Returns:
A list of (column_name, column_type) tuples
"""
rs = self.query("SELECT * FROM %s limit 0" % table_name)
column_info = rs.column_info()
rs.close()
return column_info
@abstractmethod
def close(self):
"""
Close the connection, implementation should support
close multi-times
"""
pass
def commit(self):
pass
def persist_table(self, table):
pass
def __del__(self):
self.close()
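# ---------------------------------------------------------------------------
# A minimal, hypothetical sketch of a concrete driver built on the base
# classes above. The sqlite3 wiring here is an illustrative assumption for
# this document, not part of SQLFlow itself.
import sqlite3


class SQLiteResultSet(ResultSet):
    def __init__(self, cursor):
        super(SQLiteResultSet, self).__init__()
        self._cursor = cursor

    def _fetch(self, fetch_size):
        return self._cursor.fetchmany(fetch_size)

    def column_info(self):
        # sqlite3 exposes column names only; the type is left generic here.
        return [(d[0], "TEXT") for d in (self._cursor.description or [])]

    def success(self):
        return True

    def close(self):
        self._cursor.close()


class SQLiteConnection(Connection):
    def __init__(self, conn_uri):
        super(SQLiteConnection, self).__init__(conn_uri)
        # e.g. "sqlite:///tmp/test.db" parses to path "/tmp/test.db"
        self._conn = sqlite3.connect(self.uripts.path)

    def _get_result_set(self, statement):
        cursor = self._conn.cursor()
        cursor.execute(statement)
        return SQLiteResultSet(cursor)

    def commit(self):
        self._conn.commit()

    def close(self):
        self._conn.close()

# Usage sketch: SQLiteConnection("sqlite:///tmp/test.db").query("SELECT 1;")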
|
Answer: Execute transaction SU01 and fill in all the fields. When creating a new user, you must enter an initial password for that user on the Logon Data tab. All other data is optional.
Q. What is the difference between USOBX_C and USOBT_C?
Answer: The table USOBX_C defines which authorization checks are to be performed within a transaction and which are not (despite the authority-check command being programmed). This table also determines which authorization checks are maintained in the Profile Generator. The table USOBT_C defines, for each transaction and for each authorization object, which default values an authorization created from the authorization object should have in the Profile Generator.
Q. What authorizations are required to create and maintain user master records?
Q. What is a reference user?
Answer: A reference user is, like a system user, a general, non-personally related user. Additional authorizations can be assigned within the system using a reference user. A reference user for additional rights can be assigned for every user in the Roles tab.
Q. What is a derived role?
Answer: Derived roles refer to roles that already exist. The derived roles inherit the menu structure and the functions included (transactions, reports, Web links, and so on) from the role referenced. A role can only inherit menus and functions if no transaction codes have been assigned to it before. Derived roles are an expedient way of maintaining roles that do not differ in their functionality (identical menus and identical transactions) but have different characteristics with regard to the organizational level.
Q. What is a composite role?
Answer: A composite role is a container which can collect several different roles. For reasons of clarity, it does not make sense, and is therefore not allowed, to add composite roles to composite roles. Composite roles are also called composite activity groups.
Q. What does user compare do?
Answer: If you are also using the role to generate authorization profiles, you should note that the generated profile is not entered in the user master record until the user master records have been compared. You can automate this by scheduling report PFCG_TIME_DEPENDENCY on a daily basis.
Organizational level fields should only be created before you start setting up your system. If you create organizational level fields later, you might have to do an impact analysis. The authorization data may have to be postprocessed in roles.
Q. How many profiles can be assigned to any user master record?
Answer: The maximum number of profiles that can be assigned to any user is 312. Table USR04 (profile assignments for users) contains both information on the change status of a user and the list of profile names assigned to the user.
The field PROFS is used for saving the change flag (C = user was created, M = user was changed) and the names of the profiles assigned to the user. The field is defined with a length of 3750 characters. Since the first two characters are reserved for the change flag, 3748 characters remain for the list of profile names per user. With a maximum length of 12 characters per profile name, this results in a maximum of 312 profiles per user.
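The arithmetic behind this limit can be checked directly; a minimal sketch in Python (the variable names are illustrative):
field_length = 3750          # total length of the PROFS field
change_flag_chars = 2        # the first two characters hold the change flag
profile_name_length = 12     # maximum length of a profile name
max_profiles = (field_length - change_flag_chars) // profile_name_length
print(max_profiles)          # -> 312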
Q. Can you add a composite role to another composite role?
Answer: No.
Q. How to reset the SAP* password from the Oracle database?
Where mandt is the client.
Q. What is the difference between a role and a profile?
Answer: A role acts as a container that collects transactions and generates the associated profile. The Profile Generator (PFCG) in the SAP system automatically generates the corresponding authorization profile. Developers used to perform this step manually before PFCG was introduced by SAP. Any maintenance of the generated profile should be done using PFCG.
Q. How to find out all roles with T-code SU01?
Answer: You can use SUIM > Roles by complex criteria or report RSUSR070 to find this out.
Q. How to find out all the users who have SU01?
Answer: You can use SUIM > Users by complex criteria or report RSUSR002 to find this out.
Q. How to find out all the roles for one composite role or a selection of composite roles?
Q. How to find out all the derived roles for one or more Master (Parent) roles?
Q. How can I check all the organizational values for any role?
Type in the role here and hit execute. You can always download all the information to a spreadsheet as well.
Q. How do I restrict access to files through AL11?
Q. How can I add one role to many users?
Answer: Use SU10. If you have fewer than 16 users, you can paste the user IDs.
Q. What are the best practices for locking expired users?
Answer: Lock the user, remove all the roles and profiles assigned to the user, and move them to the TERM user group.
Q. How can password rules be enforced?
Answer: Password rules can be enforced using profile parameters. Follow the link to learn more about the profile parameters.
Q. How to remove duplicate roles with different start and end dates from the user master?
Answer: You can use report PRGN_COMPRESS_TIMES to do this. Please refer to note 365841 for more info. Also check the profile, following the instructions below.
Q. How can I display all roles?
Q. How can I find out all activities (ACTVT) in SAP?
Answer: All possible activities (ACTVT) are stored in table TACT (transaction SM30), and the valid activities for each authorization object can be found in table TACTZ (transaction SE16).
Q. How many fields can be present in one authorization object?
Answer: A maximum of 10 authorization fields can be present in one authorization object.
Q. How to check the table logs?
Answer: First, check whether logging is activated for the table using transaction SE13. If table logging is enabled, you can see the table logs in transaction SCU3.
Q. What is the basic difference between SU22 and SU24?
Answer: SU22 displays and updates the values in tables USOBT and USOBX, while SU24 does the same for tables USOBT_C and USOBX_C. The _C stands for Customer. The Profile Generator gets its data from the _C tables. In the USOBT and USOBX tables the values are the SAP standard values as shown in SU24. With SU25 one can (initially) transfer the USOBT values to the USOBT_C table.
|
from data_importers.management.commands import BaseHalaroseCsvImporter
from django.contrib.gis.geos import Point
class Command(BaseHalaroseCsvImporter):
council_id = "NTL"
addresses_name = (
"2021-03-25T10:36:12.245581/Neath PT polling_station_export-2021-03-23.csv"
)
stations_name = (
"2021-03-25T10:36:12.245581/Neath PT polling_station_export-2021-03-23.csv"
)
elections = ["2021-05-06"]
csv_delimiter = ","
def station_record_to_dict(self, record):
if record.pollingstationname == "St. Joseph's R.C Church Hall":
rec = super().station_record_to_dict(record)
rec["location"] = Point(-3.800506, 51.6536899, srid=4326)
return rec
# Godrergraig Workingmens Club Glanyrafon Road Ystalyfera SA9 2HA
if (
record.pollingstationnumber == "22"
and record.pollingstationpostcode == "SA9 2HA"
):
record = record._replace(pollingstationpostcode="SA9 2DE")
# Dyffryn Clydach Memorial Hall The Drive Longford Neath
if record.pollingstationname == "Dyffryn Clydach Memorial Hall":
record = record._replace(pollingstationpostcode="SA10 7HD")
# Clyne Community Centre Clyne Resolven
if record.pollingstationname == "Clyne Community Centre":
record = record._replace(pollingstationpostcode="SA11 4BP")
return super().station_record_to_dict(record)
def address_record_to_dict(self, record):
uprn = record.uprn.strip().lstrip("0")
if uprn in [
"10009182194", # FFRWD VALE BUNGALOW, DWR Y FELIN ROAD, NEATH
"100100609132", # 35B PENYWERN ROAD, NEATH
"10009177319", # MAES MELYN BUNGALOW, DRUMMAU ROAD, SKEWEN, NEATH
"100101040244", # 11A REGENT STREET EAST, NEATH
"100100598607", # 134 SHELONE ROAD, NEATH
"10023946967", # 134A SHELONE ROAD, BRITON FERRY
"100100600419", # 113 GNOLL PARK ROAD, NEATH
"100100991905", # BROOKLYN, TONMAWR ROAD, PONTRHYDYFEN, PORT TALBOT
"10009184466", # CILGARN FARM COTTAGE, CWMAVON, PORT TALBOT
"10009186526", # TYN Y CAEAU, MARGAM ROAD, MARGAM, PORT TALBOT
"10014164971", # FLAT TYN-Y-CAEAU MARGAM ROAD, MARGAM
"10009184513", # 1 ALMS HOUSE, MARGAM, PORT TALBOT
]:
return None
if record.housepostcode in [
"SA12 8EP",
"SA10 9DJ",
"SA10 6DE",
"SA11 3PW",
"SA11 1TS",
"SA11 1TW",
"SA12 9ST",
"SA8 4PX",
"SA11 3QE",
]:
return None
return super().address_record_to_dict(record)
|
Live Oak Pet Services, established in August 2005, provides pet cremation services to the Brazos Valley. Working with local vets and members of the communities in the area, Live Oak Pet Services helps families with the deaths of their pets and memorializes them.
Live Oak does individual, witnessed and communal cremations. Remains can then be put in a customized urn, garden stone, pendant or even a diamond.
Around the neck of Kerri Smith hangs a beautiful aqua crystal. It encompasses the remains of Smith's prized Rottweiler, "Dusty Britches."
"… Britches because I wanted to keep her close to my heart, not only in my memory but as a real part of her around my neck. I have had the necklace since February 2011 and have never taken it off," said Smith.
Smith welcomed Dusty Britches, named after a saloon in Bandera, Texas, to her home when the dog was eight weeks old. Following in the footsteps of her award-winning parents, Dusty Britches became an AKC Champion in 2003.
Smith founded Live Oak Pet Services on her family ranch. Scott Mason, now the President of Live Oak, is the co-owner.
"… of their beloved pets. Our staff is very experienced and has worked in pet clinics. This insight, as well as being pet owners ourselves, gives us the ability to talk with grieving clients and relate to exactly what they are going through. We listen to them and listen to their story," said Mason.
In July 2006, Live Oak performed a famous cremation: the 1983 Kentucky Derby winner, "Sunny's Halo." There is a memorial for him at Churchill Downs, where he is buried at the finish line along with Brokers Tip (1933), Swaps (1955) and Carry Back (1961).
|
import sys
import numpy as np
import csv
from datetime import datetime
from datetime import timedelta
def subtract_dates(date1, date2):
"""
    Takes two dates in %Y-%m-%d format. Returns date1 - date2, measured in days.
"""
date_format = "%Y-%m-%d"
a = datetime.strptime(date1, date_format)
b = datetime.strptime(date2, date_format)
delta = a - b
#print(date1,"-",date2,"=",delta.days)
return delta.days
def steps_to_date(steps, start_date):
date_format = "%Y-%m-%d"
date_1 = datetime.strptime(start_date, "%Y-%m-%d")
new_date = (date_1 + timedelta(days=steps)).date()
return new_date
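# Example (sanity check of the two helpers above):
#   subtract_dates("2012-03-01", "2012-02-29") returns 1, and
#   steps_to_date(1, "2012-02-29") returns datetime.date(2012, 3, 1).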
def _processEntry(row, table, data_type, date_column, count_column, start_date):
"""
Code to process a population count from a CSV file.
column <date_column> contains the corresponding date in %Y-%m-%d format.
column <count_column> contains the population size on that date.
"""
if len(row) < 2:
return table
if row[0][0] == "#":
return table
if row[1]=="":
return table
# Make sure the date column becomes an integer, which contains the offset in days relative to the start date.
row[date_column] = subtract_dates(row[date_column], start_date)
if data_type == "int":
table = np.vstack([table,[int(row[date_column]), int(row[count_column])]])
else:
table = np.vstack([table,[float(row[date_column]), float(row[count_column])]])
return table
def AddCSVTables(table1, table2):
"""
Add two time series tables. This version does not yet support interpolation between values.
(The UNHCR data website also does not do this, by the way)
"""
table = np.zeros([0,2])
offset = 0
last_c2 = np.zeros(([1,2]))
for c2 in table2:
# If table 2 date value is higher, then keep adding entries from table 1
while c2[0] > table1[offset][0]:
table = np.vstack([table,[table1[offset][0], last_c2[1]+table1[offset][1]]])
if(offset < len(table1)-1):
offset += 1
else:
break
# If the two match, add a total.
if c2[0] == table1[offset][0]:
table = np.vstack([table,[c2[0], c2[1]+table1[offset][1]]])
if(offset < len(table1)-1):
offset += 1
last_c2 = c2
continue
# If table 1 value is higher, add an aggregate entry, and go to the next iteration without increasing the offset.
if c2[0] < table1[offset][0]:
table = np.vstack([table,[c2[0], c2[1]+table1[offset][1]]])
last_c2 = c2
continue
return table
def ConvertCsvFileToNumPyTable(csv_name, data_type="int", date_column=0, count_column=1, start_date="2012-02-29"):
"""
    Converts a CSV file to a table with date offsets from the given start date (default 29 Feb 2012).
CSV format for each line is:
yyyy-mm-dd,number
Default settings:
- subtract_dates is used on column 0.
- Use # sign to comment out lines. (first line is NOT ignored by default)
"""
table = np.zeros([0,2])
with open(csv_name, newline='') as csvfile:
values = csv.reader(csvfile)
row = next(values)
if(len(row)>1):
if len(row[0])>0 and "DateTime" not in row[0]:
table = _processEntry(row, table, data_type, date_column, count_column, start_date)
for row in values:
table = _processEntry(row, table, data_type, date_column, count_column, start_date)
return table
class DataTable:
def __init__(self, data_directory="mali2012", data_layout="data_layout_refugee.csv", start_date="2012-02-29", csvformat="generic"):
"""
read in CSV data files containing refugee data.
"""
self.csvformat = csvformat
self.total_refugee_column = 1
self.days_column = 0
self.header = []
self.data_table = []
self.start_date = start_date
self.override_refugee_input = False # Use modified input data for FLEE simulations
self.override_refugee_input_file = ""
self.data_directory = data_directory
if self.csvformat=="generic":
with open("%s/%s" % (data_directory, data_layout), newline='') as csvfile:
values = csv.reader(csvfile)
for row in values:
if(len(row)>1):
if(row[0][0] == "#"):
continue
self.header.append(row[0])
#print("%s/%s" % (data_directory, row[1]))
csv_total = ConvertCsvFileToNumPyTable("%s/%s" % (data_directory, row[1]), start_date=start_date)
for added_csv in row[2:]:
csv_total = AddCSVTables(csv_total, ConvertCsvFileToNumPyTable("%s/%s" % (data_directory, added_csv), start_date=start_date))
self.data_table.append(csv_total)
#print(self.header, self.data_table)
def override_input(self, data_file_name):
"""
Do not use the total refugee count data as the input value, but instead take values from a separate file.
"""
self.override_refugee_input_file = data_file_name
self.override_refugee_input = True
self.header.append("total (modified input)")
self.data_table.append(ConvertCsvFileToNumPyTable("%s" % (data_file_name), start_date=self.start_date))
def get_daily_difference(self, day, day_column=0, count_column=1, Debug=False, FullInterpolation=True):
"""
Extrapolate count of new refugees at a given time point, based on input data.
count_column = column which contains the relevant difference.
FullInterpolation: when disabled, the function ignores any decreases in refugee count.
when enabled, the function can return negative numbers when the new total is higher than the older one.
"""
self.total_refugee_column = count_column
self.days_column = day_column
ref_table = self.data_table[0]
        if self.override_refugee_input:
ref_table = self.data_table[self._find_headerindex("total (modified input)")]
# Refugees only come in *after* day 0.
if int(day) == 0:
ref_table = self.data_table[0]
new_refugees = 0
for i in self.header[1:]:
new_refugees += self.get_field(i, 0, FullInterpolation)
#print("Day 0 data:",i,self.get_field(i, 0, FullInterpolation))
return int(new_refugees)
else:
new_refugees = 0
for i in self.header[1:]:
new_refugees += self.get_field(i, day, FullInterpolation) - self.get_field(i, day-1, FullInterpolation)
#print self.get_field("Mbera", day), self.get_field("Mbera", day-1)
return int(new_refugees)
# If the day exceeds the validation data table, then we return 0
return 0
def get_interpolated_data(self, column, day):
"""
        Gets the value in a given column for a given day. Interpolates between days as needed.
"""
ref_table = self.data_table[column]
old_val = ref_table[0,self.total_refugee_column]
#print(ref_table[0][self.days_column])
old_day = ref_table[0,self.days_column]
if day <= old_day:
return old_val
for i in range(1, len(ref_table)):
#print(day, ref_table[i][self.days_column])
if day < ref_table[i,self.days_column]:
old_val = ref_table[i-1,self.total_refugee_column]
old_day = ref_table[i-1,self.days_column]
fraction = float(day - old_day) / float(ref_table[i,self.days_column] - old_day)
if fraction > 1.0:
print("Error with days_column: ", ref_table[i,self.days_column])
return -1
#print(day, old_day, ref_table[i][self.total_refugee_column], old_val)
return int(old_val + fraction * float(ref_table[i,self.total_refugee_column] - old_val))
#print("# warning: ref_table length exceeded for column: ",day, self.header[column], ", last ref_table values: ", ref_table[i-1][self.total_refugee_column], ref_table[i][self.days_column])
return int(ref_table[-1,self.total_refugee_column])
def get_raw_data(self, column, day):
"""
        Gets the raw value in a given column for a given day. Does not interpolate.
"""
ref_table = self.data_table[column]
old_val = ref_table[0][self.total_refugee_column]
old_day = 0
for i in range (0,len(ref_table)):
if day >= ref_table[i][self.days_column]:
old_val = ref_table[i][self.total_refugee_column]
old_day = ref_table[i][self.days_column]
else:
break
return int(old_val)
def _find_headerindex(self, name):
"""
Finds matching index number for a particular name in the list of headers.
"""
for i in range(0,len(self.header)):
if self.header[i] == name:
return i
print(self.header)
sys.exit("Error: can't find the header %s in the header list" % (name))
def get_field(self, name, day, FullInterpolation=True):
"""
Gets in a given named column for a given day. Interpolates between days if needed.
"""
i = self._find_headerindex(name)
if FullInterpolation:
#print(name, i, day, self.get_interpolated_data(i, day))
return self.get_interpolated_data(i, day)
else:
return self.get_raw_data(i, day)
def print_data_values_for_location(self, name, last_day):
"""
print all data values for selected location.
"""
for i in range(0,last_day):
print(i, self.get_field(name,i))
def is_interpolated(self, name, day):
"""
Checks if data for a given day is inter/extrapolated or not.
"""
for i in range(0,len(self.header)):
if self.header[i] == name:
ref_table = self.data_table[i]
for j in range(0, len(ref_table)):
if int(day) == int(ref_table[j][self.days_column]):
return False
if int(day) < int(ref_table[j][self.days_column]):
return True
return True
#def d.correctLevel1Registrations(name, date):
# correct for start date.
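# Minimal usage sketch (assumes the default "mali2012" data directory and
# "data_layout_refugee.csv" layout file from the constructor defaults; the
# "Mbera" header name is an example taken from the debug comment above):
#
#   d = DataTable(data_directory="mali2012", start_date="2012-02-29")
#   print(d.get_daily_difference(10))   # new refugees arriving on day 10
#   print(d.get_field("Mbera", 10))     # interpolated count for one location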
|
Two leaders in expedition adventure racing create a new partnership to further strengthen the AR industry. The joint venture is mutually focused on building the profile of the sport and delivering the best quality experience in endurance adventure sport.
Today, US adventure racing company Primal Quest and New Zealand’s 100% Pure Racing have announced a significant new partnership that enhances their collective position worldwide. The move links two of adventure racing’s most recognised expedition events together and will deliver real benefits to competitors, media and fans.
The joint venture will strategically leverage the relative strengths of each company: Primal Quest's iconic event status in the world's largest sports market in North America, and 100% Pure Racing's technological and media expertise, developed at GODZone, the biggest expedition adventure race in the world.
100% Pure Racing is now an equal partner in Primal Quest. As part of the deal, the company will provide its proprietary live coverage, web, media and technology platform to Primal Quest for its highly anticipated 2018 event. The 8th edition of Primal Quest will be held in British Columbia, Canada.
“This is exciting news for adventure racing as a whole, for fans of the sport and for every athlete who believes that these types of events should be true expedition experiences. GODZone has attracted a vast competitor and fan base and we will be 100% focused on delivering the same for Primal Quest”, says 100% Pure Racing CEO Warren Bates.
Primal Quest founder and director Maria Burton says the new partnership underpins her company's adventure racing vision.
The new partnership will showcase adventure racing using a unified media platform that will inspire existing athletes and bring in a whole new generation of adventure racers. This expansion philosophy neatly dovetails with 100% Pure Racing’s recently announced partnership with Australia’s Adventure 1 Series, bringing the A1 series to New Zealand with five races now under that umbrella.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
Bone. Base component of the StickMan.
"""
import math
from kivy.graphics.context_instructions import PopMatrix, PushMatrix, Rotate
from kivy.properties import NumericProperty
from kivy.uix.image import Image
__author__ = "Victor RENÉ"
__copyright__ = "Copyright 2015, bisector"
__credits__ = ["Kivy Team"]
__license__ = "MIT"
__version__ = "0.1"
__maintainer__ = "Victor RENÉ"
__email__ = "[email protected]"
__status__ = "Production"
class Bone(Image):
angle = NumericProperty()
def __init__(self, **kw):
super(Bone, self).__init__(**kw)
self.name = kw['name'] if 'name' in kw else None
self.allow_stretch = True
self.keep_ratio = False
self.source = 'img/bone.png'
self.next = []
self.prev = None
self.head = None
self.tip = None
self.bone_length = 0
self.radius = None
with self.canvas.before:
PushMatrix()
self.rotation = Rotate()
with self.canvas.after:
PopMatrix()
self.bind(pos=self.update, size=self.update, angle=self.rotate)
def attach(self, bone):
bone.prev = self
self.next.append(bone)
def attach_all(self, bones):
for bone in bones:
self.attach(bone)
def rotate(self, *args):
if self.prev:
self.rotation.angle = self.prev.rotation.angle + self.angle
else: self.rotation.angle = self.angle
self.tip = self.get_tip_pos()
for bone in self.next:
self.coerce(bone)
def update(self, *args):
self.radius = self.width / 2
# approximate for head / tip radii
self.bone_length = self.height - self.radius * 2
self.head = self.x + self.radius, self.top - self.radius
self.tip = self.get_tip_pos()
self.rotation.origin = self.head
for bone in self.next:
self.coerce(bone)
def get_tip_pos(self):
a = (self.rotation.angle - 90) * math.pi / 180
dx = math.cos(a) * self.bone_length
dy = math.sin(a) * self.bone_length
return self.x + self.radius + dx, self.top - self.radius + dy
def set_head_pos(self, pos):
radius = self.width / 2
head_x, head_y = pos
self.pos = head_x - radius, head_y - radius - self.bone_length
def coerce(self, bone):
bone.set_head_pos(self.tip)
bone.rotate()
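# Minimal usage sketch (assumes a running Kivy app and the img/bone.png
# asset referenced above; the sizes and names are illustrative):
#
#   upper_arm = Bone(name='upper_arm', size=(20, 80))
#   forearm = Bone(name='forearm', size=(20, 70))
#   upper_arm.attach(forearm)    # forearm's head now follows upper_arm's tip
#   upper_arm.angle = 30         # rotating the parent coerces the child too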
|
Gay men who have come to the United States legally but feel that they cannot return home are becoming more prominent in our group’s work. The government-sponsored persecution of sexual minorities in Africa, the Middle East, and Russia has forced many people visiting or temporarily working in the United States to conclude that they will face death if they go home.
This week the Guardian Group is looking for either short- or long-term housing for a young gay man from Nigeria who has been in this country several months.
He escaped Nigeria to avoid being persecuted for his work with gay men and with AIDS projects.
He is reportedly an outstanding young man who speaks English fluently.
If you are interested in helping with housing for this man — or any other sexual-minority refugee attempting to settle in San Francisco — please let us know!
This entry was posted in General Comments on August 25, 2014 by Moderator.
|
#
# sb2dot - a sandbox binary profile to dot convertor for iOS 9 and OS X 10.11
# Copyright (C) 2015 Stefan Esser / SektionEins GmbH <[email protected]>
# uses and extends code from Dionysus Blazakis with his permission
#
# module: outputdot.py
# task: cheap .dot file generator
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
import os
def dump_node_to_dot(g, u, visited):
    if u in visited:
return ""
tag = g.getTag(u)
tag = str(tag)
tag = tag.replace("\\", "\\\\")
tag = tag.replace("\"", "\\\"")
tag = tag.replace("\0", "")
edges = list(g.edges[u])
visited[u] = True;
out = "n%u [label=\"%s\"];\n" % (u, tag)
if len(edges) == 0:
return out
out+= "n%u -> n%u [color=\"green\"];\n" % (u, edges[0]);
out+= "n%u -> n%u [color=\"red\"];\n" % (u, edges[1]);
out+=dump_node_to_dot(g, edges[0], visited)
out+=dump_node_to_dot(g, edges[1], visited)
return out;
def dump_to_dot(g, offset, name, cleanname, profile_name):
u = offset * 8
visited = {}
orig_name = name
if len(name) > 128:
name = name[0:128]
name = name + ".dot"
name = name.replace("*", "")
name = name.replace(" ", "_")
cleanname = cleanname.replace("\\", "\\\\")
cleanname = cleanname.replace("\"", "\\\"")
cleanname = cleanname.replace("\0", "")
profile_name = os.path.basename(profile_name)
profile_name = profile_name.replace("\\", "\\\\")
profile_name = profile_name.replace("\"", "\\\"")
profile_name = profile_name.replace("\0", "")
f = open(profile_name + "_" + name, 'w')
print "[+] generating " + profile_name + "_" + name
f.write("digraph sandbox_decision { rankdir=HR; labelloc=\"t\";label=\"sandbox decision graph for\n\n%s\n\nextracted from %s\n\n\n\"; \n" % (cleanname, profile_name))
out = "n0 [label=\"%s\";shape=\"doubleoctagon\"];\n" % (cleanname)
out+= "n0 -> n%u [color=\"black\"];\n" % (u);
out = out + dump_node_to_dot(g, u, visited)
f.write(out)
f.write("} \n")
f.close()
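# Usage sketch: dump_to_dot expects a graph object `g` with an `edges`
# mapping from a node id to its two successor ids (the green/true and
# red/false branches written above) and a getTag(node) method returning a
# printable label. A hypothetical minimal shape, inferred from those calls:
#
#   class Graph(object):
#       def __init__(self):
#           self.edges = {}   # node id -> [true_successor, false_successor]
#           self.tags = {}    # node id -> label
#       def getTag(self, u):
#           return self.tags.get(u, "")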
|
Unbelievable Product. A Healthy Glow!
Loved it better than any others out there. So pretty and not like other spray tans!
|
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#!/usr/bin/python
import time
import datetime
import json
from google.cloud import pubsub
from oauth2client.client import GoogleCredentials
from Adafruit_BME280 import *
from tendo import singleton
me = singleton.SingleInstance() # will sys.exit(-1) if other instance is running
# constants - change to fit your project and location
SEND_INTERVAL = 60 #seconds
sensor = BME280(t_mode=BME280_OSAMPLE_8, p_mode=BME280_OSAMPLE_8, h_mode=BME280_OSAMPLE_8)
credentials = GoogleCredentials.get_application_default()
# change project to your Project ID
project="weatherproject"
# change topic to your PubSub topic name
topic = "weatherdata"
# set the following four constants to be indicative of where you are placing your weather sensor
sensorID = "s-Googleplex"
sensorZipCode = "94043"
sensorLat = "37.421655"
sensorLong = "-122.085637"
def read_sensor(weathersensor):
tempF = weathersensor.read_temperature_f()
# pascals = sensor.read_pressure()
# hectopascals = pascals / 100
pressureInches = weathersensor.read_pressure_inches()
dewpoint = weathersensor.read_dewpoint_f()
humidity = weathersensor.read_humidity()
temp = '{0:0.2f}'.format(tempF)
hum = '{0:0.2f}'.format(humidity)
dew = '{0:0.2f}'.format(dewpoint)
pres = '{0:0.2f}'.format(pressureInches)
return (temp, hum, dew, pres)
def createJSON(id, timestamp, zip, lat, long, temperature, humidity, dewpoint, pressure):
data = {
'sensorID' : id,
'timecollected' : timestamp,
'zipcode' : zip,
'latitude' : lat,
'longitude' : long,
'temperature' : temperature,
'humidity' : humidity,
'dewpoint' : dewpoint,
'pressure' : pressure
}
json_str = json.dumps(data)
return json_str
def main():
publisher = pubsub.PublisherClient()
topicName = 'projects/' + project + '/topics/' + topic
last_checked = 0
while True:
if time.time() - last_checked > SEND_INTERVAL:
last_checked = time.time()
temp, hum, dew, pres = read_sensor(sensor)
currentTime = datetime.datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')
s = ", "
weatherJSON = createJSON(sensorID, currentTime, sensorZipCode, sensorLat, sensorLong, temp, hum, dew, pres)
try:
publisher.publish(topicName, weatherJSON, placeholder='')
print weatherJSON
            except Exception:
print "There was an error publishing weather data."
time.sleep(0.5)
if __name__ == '__main__':
main()
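# Example payload produced by createJSON() above (values are illustrative;
# the key set matches the dict built in that function):
#   {"sensorID": "s-Googleplex", "timecollected": "2018-06-01 12:00:00",
#    "zipcode": "94043", "latitude": "37.421655", "longitude": "-122.085637",
#    "temperature": "68.50", "humidity": "45.20", "dewpoint": "46.10",
#    "pressure": "29.92"}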
|
ATP-binding cassette (ABC) proteins were first recognized for their role in multidrug resistance (MDR) in chemotherapeutic treatments, which is a major impediment to the successful treatment of many forms of malignant tumors in humans. These proteins, highly conserved throughout vertebrate species, were later linked to cellular detoxification and credited with protecting aquatic organisms from xenobiotic insults in the so-called multixenobiotic resistance mechanism (MXR). In recent years, research on these proteins in aquatic species has highlighted their importance in the detoxification mechanisms of fish, and it is necessary to continue these studies. Several transporters have been identified as relevant to the transport of xenobiotics in the ecotoxicological context, such as P-glycoproteins (Pgps), multidrug-resistance-associated proteins (MRPs 1-5) and the breast cancer resistance protein (BCRP). In mammals, several nuclear receptors have been identified as mediators of phase I and II metabolizing enzymes and ABC transporters. In aquatic species, knowledge on the co-regulation of the detoxification mechanism is scarce and needs to be addressed. The interaction of emerging contaminants, which can act as chemosensitizers, with ABC transporters in aquatic organisms can compromise detoxification processes, may have population-level effects, and should be studied in more detail. This review intends to summarize recent advances in research on MXR mechanisms in fish species, focusing on (1) the regulation and functioning of ABC proteins; (2) their cooperation with phase I and II biotransformation enzymes; and (3) their ecotoxicological relevance, including information on emerging pollutants with the ability to modulate ABC transporter expression and activity. Several lines of evidence clearly suggest an important role for these transporters in detoxification mechanisms, and they must be further investigated in fish to establish the mechanistic basis for their use as biomarkers in environmental monitoring.
|
# Copyright 2021 Google LLC. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from connector import channel
from google3.cloud.graphite.mmv2.services.google.storage import object_pb2
from google3.cloud.graphite.mmv2.services.google.storage import object_pb2_grpc
from typing import List
class Object(object):
def __init__(
self,
name: str = None,
bucket: str = None,
generation: int = None,
metageneration: int = None,
id: str = None,
self_link: str = None,
content_type: str = None,
time_created: str = None,
updated: str = None,
custom_time: str = None,
time_deleted: str = None,
temporary_hold: bool = None,
event_based_hold: bool = None,
retention_expiration_time: str = None,
storage_class: str = None,
time_storage_class_updated: str = None,
size: int = None,
md5_hash: str = None,
media_link: str = None,
metadata: dict = None,
owner: dict = None,
crc32c: str = None,
component_count: int = None,
etag: str = None,
customer_encryption: dict = None,
kms_key_name: str = None,
content: str = None,
service_account_file: str = "",
):
channel.initialize()
self.name = name
self.bucket = bucket
self.content_type = content_type
self.custom_time = custom_time
self.temporary_hold = temporary_hold
self.event_based_hold = event_based_hold
self.storage_class = storage_class
self.md5_hash = md5_hash
self.metadata = metadata
self.crc32c = crc32c
self.customer_encryption = customer_encryption
self.kms_key_name = kms_key_name
self.content = content
self.service_account_file = service_account_file
def apply(self):
stub = object_pb2_grpc.StorageObjectServiceStub(channel.Channel())
request = object_pb2.ApplyStorageObjectRequest()
if Primitive.to_proto(self.name):
request.resource.name = Primitive.to_proto(self.name)
if Primitive.to_proto(self.bucket):
request.resource.bucket = Primitive.to_proto(self.bucket)
if Primitive.to_proto(self.content_type):
request.resource.content_type = Primitive.to_proto(self.content_type)
if Primitive.to_proto(self.custom_time):
request.resource.custom_time = Primitive.to_proto(self.custom_time)
if Primitive.to_proto(self.temporary_hold):
request.resource.temporary_hold = Primitive.to_proto(self.temporary_hold)
if Primitive.to_proto(self.event_based_hold):
request.resource.event_based_hold = Primitive.to_proto(
self.event_based_hold
)
if Primitive.to_proto(self.storage_class):
request.resource.storage_class = Primitive.to_proto(self.storage_class)
if Primitive.to_proto(self.md5_hash):
request.resource.md5_hash = Primitive.to_proto(self.md5_hash)
if Primitive.to_proto(self.metadata):
request.resource.metadata = Primitive.to_proto(self.metadata)
if Primitive.to_proto(self.crc32c):
request.resource.crc32c = Primitive.to_proto(self.crc32c)
if ObjectCustomerEncryption.to_proto(self.customer_encryption):
request.resource.customer_encryption.CopyFrom(
ObjectCustomerEncryption.to_proto(self.customer_encryption)
)
else:
request.resource.ClearField("customer_encryption")
if Primitive.to_proto(self.kms_key_name):
request.resource.kms_key_name = Primitive.to_proto(self.kms_key_name)
if Primitive.to_proto(self.content):
request.resource.content = Primitive.to_proto(self.content)
request.service_account_file = self.service_account_file
response = stub.ApplyStorageObject(request)
self.name = Primitive.from_proto(response.name)
self.bucket = Primitive.from_proto(response.bucket)
self.generation = Primitive.from_proto(response.generation)
self.metageneration = Primitive.from_proto(response.metageneration)
self.id = Primitive.from_proto(response.id)
self.self_link = Primitive.from_proto(response.self_link)
self.content_type = Primitive.from_proto(response.content_type)
self.time_created = Primitive.from_proto(response.time_created)
self.updated = Primitive.from_proto(response.updated)
self.custom_time = Primitive.from_proto(response.custom_time)
self.time_deleted = Primitive.from_proto(response.time_deleted)
self.temporary_hold = Primitive.from_proto(response.temporary_hold)
self.event_based_hold = Primitive.from_proto(response.event_based_hold)
self.retention_expiration_time = Primitive.from_proto(
response.retention_expiration_time
)
self.storage_class = Primitive.from_proto(response.storage_class)
self.time_storage_class_updated = Primitive.from_proto(
response.time_storage_class_updated
)
self.size = Primitive.from_proto(response.size)
self.md5_hash = Primitive.from_proto(response.md5_hash)
self.media_link = Primitive.from_proto(response.media_link)
self.metadata = Primitive.from_proto(response.metadata)
self.owner = ObjectOwner.from_proto(response.owner)
self.crc32c = Primitive.from_proto(response.crc32c)
self.component_count = Primitive.from_proto(response.component_count)
self.etag = Primitive.from_proto(response.etag)
self.customer_encryption = ObjectCustomerEncryption.from_proto(
response.customer_encryption
)
self.kms_key_name = Primitive.from_proto(response.kms_key_name)
self.content = Primitive.from_proto(response.content)
def delete(self):
stub = object_pb2_grpc.StorageObjectServiceStub(channel.Channel())
request = object_pb2.DeleteStorageObjectRequest()
request.service_account_file = self.service_account_file
if Primitive.to_proto(self.name):
request.resource.name = Primitive.to_proto(self.name)
if Primitive.to_proto(self.bucket):
request.resource.bucket = Primitive.to_proto(self.bucket)
if Primitive.to_proto(self.content_type):
request.resource.content_type = Primitive.to_proto(self.content_type)
if Primitive.to_proto(self.custom_time):
request.resource.custom_time = Primitive.to_proto(self.custom_time)
if Primitive.to_proto(self.temporary_hold):
request.resource.temporary_hold = Primitive.to_proto(self.temporary_hold)
if Primitive.to_proto(self.event_based_hold):
request.resource.event_based_hold = Primitive.to_proto(
self.event_based_hold
)
if Primitive.to_proto(self.storage_class):
request.resource.storage_class = Primitive.to_proto(self.storage_class)
if Primitive.to_proto(self.md5_hash):
request.resource.md5_hash = Primitive.to_proto(self.md5_hash)
if Primitive.to_proto(self.metadata):
request.resource.metadata = Primitive.to_proto(self.metadata)
if Primitive.to_proto(self.crc32c):
request.resource.crc32c = Primitive.to_proto(self.crc32c)
if ObjectCustomerEncryption.to_proto(self.customer_encryption):
request.resource.customer_encryption.CopyFrom(
ObjectCustomerEncryption.to_proto(self.customer_encryption)
)
else:
request.resource.ClearField("customer_encryption")
if Primitive.to_proto(self.kms_key_name):
request.resource.kms_key_name = Primitive.to_proto(self.kms_key_name)
if Primitive.to_proto(self.content):
request.resource.content = Primitive.to_proto(self.content)
response = stub.DeleteStorageObject(request)
@classmethod
def list(self, bucket, service_account_file=""):
stub = object_pb2_grpc.StorageObjectServiceStub(channel.Channel())
request = object_pb2.ListStorageObjectRequest()
request.service_account_file = service_account_file
request.Bucket = bucket
return stub.ListStorageObject(request).items
def to_proto(self):
resource = object_pb2.StorageObject()
if Primitive.to_proto(self.name):
resource.name = Primitive.to_proto(self.name)
if Primitive.to_proto(self.bucket):
resource.bucket = Primitive.to_proto(self.bucket)
if Primitive.to_proto(self.content_type):
resource.content_type = Primitive.to_proto(self.content_type)
if Primitive.to_proto(self.custom_time):
resource.custom_time = Primitive.to_proto(self.custom_time)
if Primitive.to_proto(self.temporary_hold):
resource.temporary_hold = Primitive.to_proto(self.temporary_hold)
if Primitive.to_proto(self.event_based_hold):
resource.event_based_hold = Primitive.to_proto(self.event_based_hold)
if Primitive.to_proto(self.storage_class):
resource.storage_class = Primitive.to_proto(self.storage_class)
if Primitive.to_proto(self.md5_hash):
resource.md5_hash = Primitive.to_proto(self.md5_hash)
if Primitive.to_proto(self.metadata):
resource.metadata = Primitive.to_proto(self.metadata)
if Primitive.to_proto(self.crc32c):
resource.crc32c = Primitive.to_proto(self.crc32c)
if ObjectCustomerEncryption.to_proto(self.customer_encryption):
resource.customer_encryption.CopyFrom(
ObjectCustomerEncryption.to_proto(self.customer_encryption)
)
else:
resource.ClearField("customer_encryption")
if Primitive.to_proto(self.kms_key_name):
resource.kms_key_name = Primitive.to_proto(self.kms_key_name)
if Primitive.to_proto(self.content):
resource.content = Primitive.to_proto(self.content)
return resource
class ObjectOwner(object):
def __init__(self, entity: str = None, entity_id: str = None):
self.entity = entity
self.entity_id = entity_id
@classmethod
def to_proto(self, resource):
if not resource:
return None
res = object_pb2.StorageObjectOwner()
if Primitive.to_proto(resource.entity):
res.entity = Primitive.to_proto(resource.entity)
if Primitive.to_proto(resource.entity_id):
res.entity_id = Primitive.to_proto(resource.entity_id)
return res
@classmethod
def from_proto(self, resource):
if not resource:
return None
return ObjectOwner(
entity=Primitive.from_proto(resource.entity),
entity_id=Primitive.from_proto(resource.entity_id),
)
class ObjectOwnerArray(object):
@classmethod
def to_proto(self, resources):
if not resources:
return resources
return [ObjectOwner.to_proto(i) for i in resources]
@classmethod
def from_proto(self, resources):
return [ObjectOwner.from_proto(i) for i in resources]
class ObjectCustomerEncryption(object):
def __init__(
self, encryption_algorithm: str = None, key_sha256: str = None, key: str = None
):
self.encryption_algorithm = encryption_algorithm
self.key_sha256 = key_sha256
self.key = key
@classmethod
def to_proto(self, resource):
if not resource:
return None
res = object_pb2.StorageObjectCustomerEncryption()
if Primitive.to_proto(resource.encryption_algorithm):
res.encryption_algorithm = Primitive.to_proto(resource.encryption_algorithm)
if Primitive.to_proto(resource.key_sha256):
res.key_sha256 = Primitive.to_proto(resource.key_sha256)
if Primitive.to_proto(resource.key):
res.key = Primitive.to_proto(resource.key)
return res
@classmethod
def from_proto(self, resource):
if not resource:
return None
return ObjectCustomerEncryption(
encryption_algorithm=Primitive.from_proto(resource.encryption_algorithm),
key_sha256=Primitive.from_proto(resource.key_sha256),
key=Primitive.from_proto(resource.key),
)
class ObjectCustomerEncryptionArray(object):
@classmethod
def to_proto(self, resources):
if not resources:
return resources
return [ObjectCustomerEncryption.to_proto(i) for i in resources]
@classmethod
def from_proto(self, resources):
return [ObjectCustomerEncryption.from_proto(i) for i in resources]
class Primitive(object):
@classmethod
def to_proto(self, s):
if not s:
return ""
return s
@classmethod
def from_proto(self, s):
return s
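# Minimal usage sketch (assumes channel.initialize() can reach the backing
# service and that the bucket already exists; all names are illustrative):
#
#   obj = Object(
#       name="greeting.txt",
#       bucket="my-example-bucket",
#       content_type="text/plain",
#       content="hello world",
#       service_account_file="/path/to/key.json",
#   )
#   obj.apply()                                     # create or update
#   items = Object.list("my-example-bucket", "/path/to/key.json")
#   obj.delete()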
|
Free from potentially drying detergents and SLS's this creamy cleanser leaves your skin thoroughly cleansed and soft. Australia's leading salon brand, Alpha-H, pioneered the use of glycolic acids in skin care nearly 20 years ago, meaning they've had time to get the technology just right for fast and visible results on signs of ageing, sun damage, sensitivity and acne.
This hydrating non-foaming creamy cleansing lotion is perfect for normal, dry and sensitive skin types. Its 3-in-1 function means that it completely removes all traces of make-up (including waterproof mascara) as well as toning and re-balancing the skin's natural pH level. Containing aloe vera to condition and soothe alongside vitamin E to hydrate and neutralise free radicals, this cleanser will leave the skin perfectly cleansed, soft and soothed without the tight, drying effect of many cleansers.
Use morning and evening. Apply Alpha-H Balancing Cleanser to damp skin, massage gently for up to a minute and then rinse off with warm water or a warm damp face cloth. To remove eye make-up and lipstick, apply a small amount to a damp cotton pad and circle gently around the area that needs cleansing.
Aqua, Ethylhexyl Palmitate, Aloe Barbadensis Leaf Extract, Sorbitan Stearate, Glyceryl Stearate, C12-15 Alkyl Benzoate, Cetyl Alcohol, Lauryl Glucoside, Phenoxyethanol, Caprylyl Glycol, Tocopheryl Acetate, Citric Acid, Glycerin, Parfum.
I use this product as my second cleanser, it works really well, I'm on my second bottle already and will be definitely getting a third. I find it hard to get a cleanser that works with my combination skin, it evens out my oily areas without drying out my already dry patches. After using it for a week, morning and night, I saw results in my pores and in the pigmentation of my skin. My skin can sometimes feel a little tight after, but feels beautifully smooth and clear after moisturiser.
It is very concentrated; a small amount goes a long way. I have had my bottle for months now and I use it every single night. It is very calming on the skin and it melts makeup away; I can use it on the eye area.
-oily to combination skin- I have just finished my third tube of this cleanser, it is the most incredible cleanser I have ever used and highly recommend it to all. I am in my late 20's and have extremely oily skin on my t-zone and dry skin around the nose and little patches on my cheeks but this cleanser seems to combat that perfectly. I use this with my Clarisonic mia on a daily basis and love it! My skin is so smooth and clear. 5 stars!!
I discovered this cleanser via the blog A Model Recommends. I've used it for a few years, day and night, as part of my double cleanse. I also use Emma Hardie Moringa Cleansing Balm. The pairing is a dream for skin. This Balancing Cleanser is appropriately named: skin is left clean, clear and fresh with no harsh effects. It's a permanent fixture on my bathroom shelf. It lasts for ages and is great value for money.
I'm starting to have breakouts since using this cleanser. Thought this was a gentle light one because I have sensitive skin, but it didn't work for me.
I never leave reviews but always read them and felt the need to share how much I love this cleanser! I have been using it for a little over a week and it has already completely changed my skin! I've changed nothing else in my routine yet my skin has become smoother, less irritated, my spots have dramatically reduced and I've had no new breakouts for the first time in months, I've already ordered another one so I have a supply of it! Everyone has noticed the improvement and I can't stop looking at my skin. The only thing I would say is that you might need to go in with a balm or oil first to remove heavy makeup, otherwise it's amazing.
I had this in the Alpha H Discovery Collection and loved it so much I bought the full size when I'd finished the tube. My skin flits from moderately oily to very dry/sensitive and although I don't use this every day, I find a couple of times a week as my second cleanse, with makeup removed, keeps oil at bay and helps breakouts. Will probably repurchase; a lovely product.
This would have to be my absolute favourite cleanser. Creamy and light and free from those foaming nasties, it leaves my skin feeling clean but never tight or dry. Highly recommend to all.
I tried this as part of the Discovery Set. A gentle no-frills cleanser which smells faintly of cucumber and mint? Not quite sure but a refreshing light scent! This doesn't contain any AHA unlike the rest of the Alpha-H range so it is good for use around the eye area if needed. Removes light make up and cleanses well yet is gentle on the skin.
Upon first application I have to admit, I was a little underwhelmed with this product and thought I had just added to my ever-expanding collection of never-quite-finished bottles of cleansers. However, after investing in 7 (yes, 7) washcloths and actually starting to use the product as directed, not just rinsing it off my face, I am totally in love. It is a very gentle and softening cleanser and despite my combination skin, it has not broken me out. I really recommend the product, especially a double cleanse, which is leaving my skin seriously buffed at the moment.
I'm on the 3rd bottle of this cleanser and I just love it! I've normal to dry skin, and it's creamy but not heavy texture is absolutely perfect, after using it my skin feels so hydrated and clean. I used it both in the morning and evening (in the evening as a 2nd cleanser) and I couldn't be happier with it!
This is my 3rd bottle and only Tata Harper and REN cleansers are as good as this. I have dry, dehydrated skin in my early 30’s with the odd hormonal breakout and this stuff is fantastic. Creamy but not heavy, it doesn’t leave my skin feeling tight, stripped or dry, cleans really well (it even removes makeup, especially mascara, brilliantly with minimal effort) and my skin really enjoys this. Perfect 1st or 2nd cleanse or both! Would highly recommend; it lasts a long time and is at a good price point.
The first few days of using this cleanser, I was over the moon at how soft my skin was feeling, but after a few more uses my skin started to get extremely oily around the T-zone. I usually have pretty normal skin, so it came as a shock to have quite an oily texture to my skin after using this product for about a week. I'd say it was because I used the product twice a day. I started to use it every other day and my skin went back to being very soft and smooth. I would recommend this product for everyday use for dry skin, as it really replenishes and moisturises, or for every-now-and-again use for normal skin types.
My skin is usually completely clear apart from a couple of blackheads on my T-zone. After using this alongside the essential hydration cream for 4 days I started to break out all over my face. I continued to use it for another week and my breakout got worse and my skin became very red, despite me not having sensitive skin. Obviously, from reading other reviews, this product works for some people; however, based on my experience I wouldn't recommend it.
I know this is a star product for Alpha-H but I didn't fall in love with it. I use it mostly in the mornings but it sometimes feels tingly around my eyes, which is not a good thing for a cleanser. It's OK, but nothing exciting. I will use it up but won't repurchase. I think you can find better cream cleansers that are more affordable. I would give it 3.5 stars!
This cleanser is amazing. It's creamy, non-irritating and non-foaming. It removes makeup and leaves my skin feeling soft and nourished.
It has to be the best cleanser I've tried. My skin feels so clean afterwards without feeling stripped. It hasn't broken me out which some cleansers tend to do, after using it, the skin looks healthier and more hydrated.
Very gentle and non-drying. I personally use this as a first cleanse as it's so effective at removing makeup, including eye makeup. It really does help to regulate the skin; upon first use I noticed my skin became less oily and makeup lasted longer.
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Feb 3 16:00:29 2017
@author: daniele
"""
#import matplotlib.pyplot as plt
import numpy as np
from scipy import signal
import librosa
from os import walk, path, makedirs
def wav_file_list(source):
# list all file in source directory
filenames = []
for (dirpath, dirnames, filenames) in walk(source):
break
# drop all non wav file
wav_filenames = [f for f in filenames if f.lower().endswith('.wav')]
return wav_filenames
# compute a spectrogram
def spectrogram(filepath, fs, N, overlap, win_type='hamming'):
# Load an audio file as a floating point time series
x, fs = librosa.core.load(filepath,sr=fs)
# Returns: np.ndarray [shape=(1 + n_fft/2, t), dtype=dtype], dtype=64-bit complex
X = librosa.core.stft(x, n_fft=N, window=signal.get_window(win_type,N), hop_length=N-overlap, center=False)
#Sxx = np.abs(X)**2
Sxx = librosa.logamplitude(np.abs(X)**2,ref_power=np.max)
return Sxx
# extract the spectrograms from the files in source and save them to dest
def extract_spectrograms(source, dest, fs, N, overlap, win_type='hamming'):
wav_filenames = wav_file_list(source)
for w in wav_filenames:
Sxx=spectrogram(path.join(source,w), fs, N, overlap, win_type)
np.save(path.join(dest,w[0:-4]),Sxx)
# compute the mel coefficients, the deltas and the delta-deltas
def log_mel(filepath, fs, N, overlap, win_type='hamming', n_mels=128, fmin=0.0, fmax=None, htk=True):
coefficients = []
# Load an audio file as a floating point time series
x, fs = librosa.core.load(filepath,sr=fs)
# Power spectrum
S = np.abs(librosa.core.stft(x, n_fft=N, window=signal.get_window(win_type,N), hop_length=N-overlap, center=False))**2
# Build a Mel filter
mel_basis = librosa.filters.mel(fs, N, n_mels, fmin, fmax, htk)
# Filtering
mel_filtered = np.dot(mel_basis, S)
mel_filtered = librosa.logamplitude(mel_filtered)
coefficients.append(mel_filtered)
# add delta e delta-deltas
#coefficients.append(librosa.feature.delta(mel_filtered, delta_width*2+1, order=1, axis=-1))
#coefficients.append(librosa.feature.delta(mel_filtered, delta_width*2+1, order=2, axis=-1))
return coefficients
def mfcc_e(filepath, fs, N, overlap, n_mels=26, fmin=0.0, fmax=None, htk=True, delta_width=None):
x, fs = librosa.core.load(filepath, sr=fs)
mfcc = librosa.feature.mfcc(y=x, sr=fs, n_mfcc=n_mels, hop_length=N - overlap, fmin=fmin, fmax=fmax, htk=htk)
delta = librosa.feature.delta(mfcc, delta_width * 2 + 1, order=1, axis=-1)
acc = librosa.feature.delta(mfcc, delta_width * 2 + 1, order=2, axis=-1)
    coefficients = np.vstack((mfcc,delta,acc))
    # NOTE: the assignment below overrides the stacked matrix, so only the
    # plain MFCCs are returned and the deltas/delta-deltas are discarded.
    coefficients = mfcc
return coefficients
# extract the mels and compute their deltas and delta-deltas from the files in source, saving the results to dest
# for the log version it is enough to apply librosa.logamplitude(.) to the single submatrix or to the whole matrix
def extract_log_mel(source, dest, fs, N, overlap, win_type='hamming', n_mels=128, fmin=0.0, fmax=None, htk=True):
wav_filenames = wav_file_list(source)
for w in wav_filenames:
mels=log_mel(path.join(source,w), fs, N, overlap, win_type, n_mels, fmin, fmax, htk)
np.save(path.join(dest,w[0:-4]),mels)
# compute the MFCCs, the deltas and the delta-deltas
def extract_MFCC(source, dest, fs, n_mels, N, overlap, fmin, fmax, htk, delta_width):
wav_filenames = wav_file_list(source)
for w in wav_filenames:
mfcc=mfcc_e(path.join(source,w), fs, N, overlap, n_mels, fmin, fmax, htk, delta_width)
x, fs = librosa.core.load(path.join(source,w), sr=fs)
centr=librosa.feature.spectral_centroid(y=x, sr=fs, n_fft=2048, hop_length=N-overlap, freq=None)
mfcc = np.vstack((mfcc,centr))
#zcr=librosa.feature.zero_crossing_rate(y=x, frame_length=N, hop_length=N-overlap, center=True)
#mfcc = np.vstack((mfcc, zcr))
np.save(path.join(dest, w[0:-4]), mfcc)
if __name__ == "__main__":
root_dir = path.realpath('../../')
wav_dir_path = path.join(root_dir,'wav')
dest_path_spec=path.join(root_dir,'dataset','spectrograms')
dest_path_log_mel=path.join(root_dir,'dataset','logmel_NEW')
dest_path_mfcc = path.join(root_dir, 'dataset', 'MFCC_D_A')
dest_path = path.join(root_dir, 'dataset', 'MFCC_CENTR')
if (not path.exists(dest_path)):
makedirs(dest_path)
window_type = 'hamming'
fft_length = 256
window_length = 480
overlap = 160
Fs = 16000
n_mels = 26
fmin=0.0
fmax=Fs/2
htk=True
delta_width=2
# extract_spectrograms(wav_dir_path, dest_path_spec, Fs, fft_length, overlap, window_type)
# extract_log_mel(wav_dir_path, dest_path_log_mel, Fs, window_length, overlap, window_type, n_mels, fmin, fmax, htk)
extract_MFCC(wav_dir_path, dest_path, Fs, n_mels, window_length, overlap, fmin, fmax, htk, delta_width)
import os
print(os.path.realpath('.'))
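The saved .npy matrices can be reloaded for training; a minimal sketch, assuming the MFCC_CENTR destination directory produced by the run above:
import numpy as np
from os import listdir, path

feature_dir = path.join(path.realpath('../../'), 'dataset', 'MFCC_CENTR')
for fname in listdir(feature_dir):
    if fname.endswith('.npy'):
        features = np.load(path.join(feature_dir, fname))
        # Rows: n_mels MFCCs (plus deltas/delta-deltas when delta_width is set) and one centroid row.
        print(fname, features.shape)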
|
Melani Cholie - Songwriter, Composer - "Painted Skies" with this I say bye for 1 month holiday in Ireland where I hope to get impressions for new music AND I hope to listen to a lot of good live music!! I will catch up in July and hope you will not forget me during my holiday time :-))) Love from Mel!
"Painted Skies" with this I say bye for 1 month holiday in Ireland where I hope to get impressions for new music AND I hope to listen to lot of good live music!! I will catch up in July and hope you will not forget me during my holiday time :-))) Love from Mel!
|
from Bio import SeqIO
from os import path
from pincer.objects.Contig import Contig
class FileNotInPathException(Exception):
pass
class NotFastaException(Exception):
pass
class Sequence(object):
def __init__(self, filename):
self.file, self.name = self.get_file_and_name(filename)
self.contigs = self.iterateAndAppend_toContigs(filename)
def __repr__(self):
string = "Sequence {} with {} contigs totaling {}bps"
return string.format(self.name, self.get_contig_number(), self.get_total_seq_length())
def __len__(self):
return self.get_total_seq_length()
def __iter__(self):
return iter(self.contigs)
def __getitem__(self, i):
return self.contigs[i]
def get_contig_number(self):
return len(self.contigs)
def get_total_seq_length(self):
lengths = map(len, self.contigs)
return sum(lengths)
    def get_file_and_name(self, filename):
        if not path.isfile(filename):
            raise FileNotInPathException("{} is not a file!".format(filename))
        # Use os.path helpers instead of manual string slicing; the exit(1)
        # calls after each raise were unreachable and have been dropped.
        basename = path.basename(filename)
        name, extension = path.splitext(basename)
        if extension.lstrip(".") not in ("fasta", "fa"):
            raise NotFastaException("{} is not a fasta file!".format(filename))
        return basename, name
def iterateAndAppend_toContigs(self, filename):
tmp_contigs = []
with open(filename, "r") as handle:
for record in SeqIO.parse(handle, "fasta"):
tmp_contigs.append(Contig(record.id, record.seq))
return tmp_contigs
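A minimal usage sketch, assuming the pincer package is importable and a file named assembly.fasta (hypothetical) exists on disk:

seq = Sequence('assembly.fasta')
print(seq)               # Sequence assembly with N contigs totaling Mbps
print(len(seq))          # total base pairs across all contigs
for contig in seq:       # __iter__ walks the Contig objects
    print(contig)
first_contig = seq[0]    # __getitem__ supports indexing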
|
Is Team down for everyone or just me? - Check status for team.forsvarsmakten.se now!
Is team down for everyone or just me? Our website down tool checks the reachability of the team.forsvarsmakten.se URL in real time. This page lets you quickly find out whether it is down (right now) for other users as well, or whether you are experiencing some kind of network connection error. Please allow us a few seconds to finish the test.
FORSVARSMAKTEN.SE WEBSITE IS NOT WORKING?
Having problems loading team.forsvarsmakten.se? If you noticed team not working or received a cannot-connect-to-team error message, then you have come to the right place. This page tries to establish a connection with the forsvarsmakten.se domain name's web server to perform a network-independent team down-or-not test. If the site is up, try the troubleshooting tips below, but if the site is down, there is not much you can do. Read more about what we do and how we do it.
From (common) unpaid bills to an unfortunate natural disaster (cut wires), there are plenty of reasons why team might be down right now.
Search social networks like Twitter or Facebook to see if other people have experienced problems with team.forsvarsmakten.se or not.
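The test the page describes boils down to opening a TCP connection to the site's web server; a minimal Python sketch, assuming HTTPS on port 443:

import socket

def is_reachable(host, port=443, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(is_reachable('team.forsvarsmakten.se'))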
|
from collections import ChainMap
from itertools import chain
from django.core.management.base import BaseCommand
from django.conf import settings
from django.db.utils import IntegrityError
from legistar.bills import LegistarAPIBillScraper
from opencivicdata.legislative.models import Bill
from lametro.models import LAMetroSubject
from lametro.smartlogic import SmartLogic
class ClassificationMixin:
DEFAULT_FACET = 'topics_exact'
FACET_CLASSES = {
'bill_type_exact': (
'Board Report Type',
),
'lines_and_ways_exact': (
'Transportation Method',
'Bus Line',
'Bus Way',
'Rail Line',
),
'phase_exact': (
'Transportation Phase',
),
'project_exact': (
'Project',
'Project Development',
'Project Finance',
'Capital Project',
'Construction Project',
'Grant Project',
'Other Working Project',
),
'metro_location_exact': (
'All Transportation Locations',
'Alignment',
'Division',
'Employee Parking Lot',
'Park ‘n’ Ride',
'Radio Station',
'Route',
'Station',
'Surplus, Temporary And Miscellaneous Property',
'Terminal',
'Transportation Location',
),
'geo_admin_location_exact': (
'All Location',
'Administrative Division',
'Electoral Districts',
'Sector',
'Corridor',
'Geographic Location',
'City',
'Country',
'County',
'Neighborhood',
'State',
'Unincorporated Area',
'Point of Interest',
'Subregion',
),
'significant_date_exact': (
'Dates',
),
'motion_by_exact': (
'Board Member',
),
'plan_program_policy_exact': (
'Plan',
'Program',
'Policy'
),
}
@property
def smartlogic(self):
if not hasattr(self, '_smartlogic'):
self._smartlogic = SmartLogic(settings.SMART_LOGIC_KEY)
return self._smartlogic
@property
def classifications(self):
if not hasattr(self, '_classifications'):
self._classifications = ChainMap(*[
{subject: facet for subject in list(self.get_subjects_from_classes(facet, classes))}
for facet, classes in self.FACET_CLASSES.items()
])
return self._classifications
def get_subjects_from_classes(self, facet_name, classes):
self.stdout.write('Getting {}'.format(facet_name))
# Per Steve from SmartLogic, "multiple filters can be combined into an
# OR type filter". So, string all classes together to query for terms
# belonging to any of them.
#
# Use an array of tuples instead of a dictionary, because each param
# uses the FILTER key (and dictionaries can't contain duplicate keys).
params = [('FILTER', 'CL={}'.format(cls)) for cls in classes]
params.append(('FILTER', 'AT=System: Legistar'))
response = self.smartlogic.terms(params)
yield from (t['term']['name'] for t in response['terms'])
class Command(BaseCommand, ClassificationMixin):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.legistar = LegistarAPIBillScraper()
self.legistar.BASE_URL = 'https://webapi.legistar.com/v1/metro'
self.legistar.retry_attempts = 0
self.legistar.requests_per_minute = 0
def handle(self, *args, **options):
current_topics = set(chain(*Bill.objects.values_list('subject', flat=True)))
# Delete topics no longer associated with any bills.
deleted, _ = LAMetroSubject.objects.exclude(name__in=current_topics).delete()
self.stdout.write('Removed {0} stale topics'.format(deleted))
# Create LAMetroSubject instances for all existing topics. Subjects are
# unique on name. Ignore conflicts so we can bulk create instances
# without querying for or introducing duplicates.
LAMetroSubject.objects.bulk_create([
LAMetroSubject(name=s, classification=self.DEFAULT_FACET) for s in current_topics
], ignore_conflicts=True)
for_update = []
for topic in self.legistar.topics():
try:
subject = LAMetroSubject.objects.get(name=topic['IndexName'])
except LAMetroSubject.DoesNotExist:
# The database only contains topics that are related to at least
# one bill. By contrast, the API contains all topics, regardless
# of whether they're currently in use. Skip unused topics.
pass
else:
self.stdout.write('Updating {}'.format(subject))
subject.guid = topic['api_metadata']
subject.classification = self.classifications.get(subject.name,
self.DEFAULT_FACET)
self.stdout.write('Classification: {}'.format(subject.classification))
for_update.append(subject)
LAMetroSubject.objects.bulk_update(for_update, ['guid', 'classification'])
update_count = len(for_update)
topic_count = LAMetroSubject.objects.count()
try:
assert update_count == topic_count
except AssertionError:
raise AssertionError('Updated only {0} of {1} total topics'.format(update_count, topic_count))
else:
self.stdout.write('Updated all {0} topics'.format(topic_count))
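The classifications property chains one {subject: facet} dict per facet; ChainMap searches the dicts in order, so the first facet whose term list contains a subject wins, and .get() supplies the fallback. A stdlib-only sketch of that lookup behavior, with made-up subject names:

from collections import ChainMap

# Hypothetical subject -> facet dicts, one per facet, in FACET_CLASSES order.
by_bill_type = {'Budget': 'bill_type_exact'}
by_project = {'Purple Line': 'project_exact', 'Budget': 'project_exact'}

classifications = ChainMap(by_bill_type, by_project)
print(classifications.get('Purple Line'))                # project_exact
print(classifications.get('Budget'))                     # bill_type_exact (first map wins)
print(classifications.get('Unknown', 'topics_exact'))    # fall back to the default facet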
|
When partner opens 1 of a major suit, select the option below that best fits your holding and make the indicated bid. “Pieces” refers to the number of cards held in partner’s opening major suit. Total Points refers to HCP plus distribution points.
Holding 6-7 HCP, bid 1NT with 1,2,3 or 4 pieces.
Holding 8-10 Total Points, bid 1NT with 1-2 pieces, but raise partner's major to the 2-lvl with 3-4 pieces.
Holding 11-12 HCP, bid 1NT with 1-2 pieces; with 11-12 Total Points and 3 pieces, also bid 1NT; but jump raise partner's major to the 3-lvl with 4 pieces if holding 11-12 Total Points.
~2 of a new suit (2/1) holding a good 5-card side suit. A “good” side suit may differ in the eyes of different partnerships; I prefer the suit contain a minimum of 2 of the top 3 honors (alternately 5 HCP) in the suit.
~Holding a balanced hand, bid 2NT if playing the Jacoby 2NT convention (recommended).
~Holding a splinter (singleton or void) but no good 5-card side suit, make a double jump shift in the splinter.
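These responses form a small decision table, so a code sketch may help; a minimal Python sketch of the 1NT/raise cases only (splinters, 2/1 and Jacoby 2NT omitted), assuming total_points already includes distribution points:

def respond_to_major(hcp, total_points, pieces):
    """Simplified responder bids over partner's 1-of-a-major opening."""
    if 6 <= hcp <= 7:
        return '1NT'                      # any number of pieces
    if 8 <= total_points <= 10:
        return '1NT' if pieces <= 2 else 'raise to the 2-lvl'
    if 11 <= total_points <= 12:
        return '1NT' if pieces <= 3 else 'jump raise to the 3-lvl'
    return 'see 2/1, Jacoby 2NT or splinter options'

print(respond_to_major(hcp=9, total_points=10, pieces=3))  # raise to the 2-lvl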
|
""" Module: loader
Provide a Sqoreboard API for loading graph data into Sqoreboard objects.
Provides:
def load_node
def load_node_by_unique_property
def load_nodes_by_property
def load_edge
def load_edges
def load_neighbors
"""
from model.graph import GraphOutputError
from model.graph import reader
import sqfactory
# TODO make this a Singleton object and create a reference to SqFactory
# so we don't grab it all over the place.
def load_node(node_id):
""" Return a SqNode subclass for the given id.
Wrap a call to a Graph API that returns a GraphNode and call on
sqfactory to parse it into a SqNode subclass.
Required:
id node_id id of node to fetch
Returns:
SqNode single instance of concrete SqNode subclass
"""
node = None
try:
graph_node = reader.get_node(node_id)
if graph_node:
factory = sqfactory.get_factory()
node = factory.construct_node_and_edges(graph_node)
except GraphOutputError as e:
# logger.debug(e.reason)
print(e.reason)
return node
def load_node_by_unique_property(key, value, node_type_return_filter=None):
""" Return a single SqNode for a property and node type filter.
Wrap a call to load_nodes_by_property(). pop and return the first and
only node returned. The sqfactory module will parse it into a SqNode
subclass.
This should only be used for combinations of property key/value and
return node type which the caller knows to be restricted to exactly
one stored node. The canonical example is guaranteeing unique email
address or third party ID in User storage.
Ideally, this property would be indexed in the database. In fact,
the underlying graph or data layer may impose this restriction on
queries and throw an error if no index exists for this property.
Required:
str key property to look up
mixed value property value to look up
Optional:
list node_type_return_filter node types to filter for
Returns:
SqNode single SqNode instance
"""
nodes = load_nodes_by_property(key, value, node_type_return_filter)
# next(iter(...)) works in both Python 2 and 3; dict.values()[0] is Python 2 only.
return next(iter(nodes.values())) if nodes else None
def load_nodes_by_property(key, value, node_type_return_filter=None):
""" Return a list of SqNodes for a given property and node type.
Wrap a call to a Graph API that returns a GraphNode and call on
sqfactory to parse it into a SqNode subclass.
Ideally, this property would be indexed in the database. In fact,
the underlying graph or data layer may impose this restriction on
queries and throw an error if no index exists for this property.
Required:
str key property key to look up
mixed value property value to look up
Optional:
list node_type_return_filter list of SqNode types to return
Returns:
dict SqNodes keyed on ID (or None)
"""
nodes = None
try:
graph_nodes = reader.get_nodes_by_index(
key,
value,
node_type_return_filter)
nodes = {}
factory = sqfactory.get_factory()
for id, graph_node in graph_nodes.items():
nodes[id] = factory.construct_node_and_edges(graph_node)
except GraphOutputError as e:
# logger.debug(e.reason)
print(e.reason)
return nodes
def load_edge(edge_id):
""" Return a SqEdge subclass for the given id.
Wrap a call to a Graph API that returns a GraphEdge and call on
sqfactory to parse it into a SqEdge subclass.
Required:
id edge_id id of edge to fetch
Returns:
SqEdge single instance of concrete SqEdge subclass
"""
edge = None
try:
factory = sqfactory.get_factory()
edge = factory.construct_edge(reader.get_edge(edge_id))
except GraphOutputError as e:
# logger.debug(e.reason)
print(e.reason)
return edge
def load_edges(node_id):
""" Return a dict of SqEdge subclasses for the given SqNode ID.
Wrap a call to a Graph API that returns a GraphEdge and call on
sqfactory to parse it into a SqEdge subclass.
Required:
id node_id id of edge to fetch
Returns:
dict concrete SqEdge subclasses keyed on ID
"""
edges = None
try:
graph_node = reader.get_node(node_id)
edges = {}
factory = sqfactory.get_factory()
for id, graph_edge in graph_node.edges().items():
edges[id] = factory.construct_edge(graph_edge)
except GraphOutputError as e:
# logger.debug(e.reason)
print(e.reason)
return edges
def load_neighbors(
node_id,
edge_type_pruner=None,
node_type_return_filter=None):
""" Load a SqNode and its specified SqEdges and neighbor SqNodes.
Required:
id node_id SqNode id
Optional:
list edge_type_pruner list of SqEdge types to traverse
list node_type_return_filter list of SqNode types to return
Returns:
tuple (SqNode, dict) => (start, neighbors)
"""
node = None
neighbor_nodes = None
try:
# get node, outgoing edges, neighbor nodes
graph_path = reader.get_path_to_neighbor_nodes(
node_id,
edge_type_pruner,
node_type_return_filter)
# load nodes and edges into SqNodes and SqEdges
factory = sqfactory.get_factory()
node = factory.construct_node_and_edges(graph_path.get_start_node())
neighbor_nodes = {}
for id, graph_node in graph_path.get_neighbor_nodes().items():
neighbor_nodes[id] = factory.construct_node_and_edges(graph_node)
except GraphOutputError as e:
# logger.debug(e.reason)
print(e.reason)
# TODO: this only works for depth-1 queries because of graph fan-out, so
# we need something different for queries of depth-2 and up.
return (node, neighbor_nodes)
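The TODO near the top of the module suggests fetching the factory once instead of in every loader; a minimal module-level cache sketch, assuming sqfactory.get_factory() is idempotent:

_factory = None

def _get_cached_factory():
    """ Fetch the SqFactory once and reuse it across loader calls. """
    global _factory
    if _factory is None:
        _factory = sqfactory.get_factory()
    return _factory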
|
Bring the kids to our HALLOWEEN WORKSHOP at Nordic Stella and enjoy a cozy time together.
|
"""
Copyright 2008-2015 Free Software Foundation, Inc.
This file is part of GNU Radio
GNU Radio Companion is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation; either version 2
of the License, or (at your option) any later version.
GNU Radio Companion is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
"""
import ast
import weakref
import re
from . import Constants
from .Constants import VECTOR_TYPES, COMPLEX_TYPES, REAL_TYPES, INT_TYPES
from .Element import Element
from .utils import odict
# Blacklist certain ids; it's not complete, but should help
import __builtin__
ID_BLACKLIST = ['self', 'options', 'gr', 'blks2', 'wxgui', 'wx', 'math', 'forms', 'firdes'] + dir(__builtin__)
try:
from gnuradio import gr
ID_BLACKLIST.extend(attr for attr in dir(gr.top_block()) if not attr.startswith('_'))
except ImportError:
pass
_check_id_matcher = re.compile('^[a-zA-Z]\w*$')  # the stray '|' in the original character class wrongly allowed '|' as a first character
_show_id_matcher = re.compile('^(variable\w*|parameter|options|notebook)$')
def _get_keys(lst):
return [elem.get_key() for elem in lst]
def _get_elem(lst, key):
try:
return lst[_get_keys(lst).index(key)]
except ValueError:
raise ValueError('Key "{0}" not found in {1}.'.format(key, _get_keys(lst)))
def num_to_str(num):
""" Display logic for numbers """
def eng_notation(value, fmt='g'):
"""Convert a number to a string in engineering notation. E.g., 5e-9 -> 5n"""
template = '{0:' + fmt + '}{1}'
magnitude = abs(value)
for exp, symbol in zip(range(9, -15-1, -3), 'GMk munpf'):
factor = 10 ** exp
if magnitude >= factor:
return template.format(value / factor, symbol.strip())
return template.format(value, '')
if isinstance(num, COMPLEX_TYPES):
num = complex(num) # Cast to python complex
if num == 0:
return '0'
output = eng_notation(num.real) if num.real else ''
output += eng_notation(num.imag, '+g' if output else 'g') + 'j' if num.imag else ''
return output
else:
return str(num)
class Option(Element):
def __init__(self, param, n):
Element.__init__(self, param)
self._name = n.find('name')
self._key = n.find('key')
self._opts = dict()
opts = n.findall('opt')
# Test against opts when non enum
if not self.get_parent().is_enum() and opts:
raise Exception('Options for non-enum types cannot have sub-options')
# Extract opts
for opt in opts:
# Separate the key:value
try:
key, value = opt.split(':')
except:
raise Exception('Error separating "{0}" into key:value'.format(opt))
# Test against repeated keys
if key in self._opts:
raise Exception('Key "{0}" already exists in option'.format(key))
# Store the option
self._opts[key] = value
def __str__(self):
return 'Option {0}({1})'.format(self.get_name(), self.get_key())
def get_name(self):
return self._name
def get_key(self):
return self._key
##############################################
# Access Opts
##############################################
def get_opt_keys(self):
return self._opts.keys()
def get_opt(self, key):
return self._opts[key]
def get_opts(self):
return self._opts.values()
class TemplateArg(object):
"""
A cheetah template argument created from a param.
The str of this class evaluates to the param's to code method.
The use of this class as a dictionary (enum only) will reveal the enum opts.
The __call__ or () method can return the param evaluated to a raw python data type.
"""
def __init__(self, param):
self._param = weakref.proxy(param)
def __getitem__(self, item):
return str(self._param.get_opt(item)) if self._param.is_enum() else NotImplemented
def __str__(self):
return str(self._param.to_code())
def __call__(self):
return self._param.get_evaluated()
class Param(Element):
is_param = True
def __init__(self, block, n):
"""
Make a new param from nested data.
Args:
block: the parent element
n: the nested odict
"""
# If the base key is a valid param key, copy its data and overlay this params data
base_key = n.find('base_key')
if base_key and base_key in block.get_param_keys():
n_expanded = block.get_param(base_key)._n.copy()
n_expanded.update(n)
n = n_expanded
# Save odict in case this param will be base for another
self._n = n
# Parse the data
self._name = n.find('name')
self._key = n.find('key')
value = n.find('value') or ''
self._type = n.find('type') or 'raw'
self._hide = n.find('hide') or ''
self._tab_label = n.find('tab') or block.get_param_tab_labels()[0]
if self._tab_label not in block.get_param_tab_labels():
block.get_param_tab_labels().append(self._tab_label)
# Build the param
Element.__init__(self, block)
# Create the Option objects from the n data
self._options = list()
self._evaluated = None
for option in map(lambda o: Option(param=self, n=o), n.findall('option')):
key = option.get_key()
# Test against repeated keys
if key in self.get_option_keys():
raise Exception('Key "{0}" already exists in options'.format(key))
# Store the option
self.get_options().append(option)
# Test the enum options
if self.is_enum():
# Test against options with identical keys
if len(set(self.get_option_keys())) != len(self.get_options()):
raise Exception('Options keys "{0}" are not unique.'.format(self.get_option_keys()))
# Test against inconsistent keys in options
opt_keys = self.get_options()[0].get_opt_keys()
for option in self.get_options():
if set(opt_keys) != set(option.get_opt_keys()):
raise Exception('Opt keys "{0}" are not identical across all options.'.format(opt_keys))
# If a value is specified, it must be in the options keys
if value or value in self.get_option_keys():
self._value = value
else:
self._value = self.get_option_keys()[0]
if self.get_value() not in self.get_option_keys():
raise Exception('The value "{0}" is not in the possible values of "{1}".'.format(self.get_value(), self.get_option_keys()))
else:
self._value = value or ''
self._default = value
self._init = False
self._hostage_cells = list()
self.template_arg = TemplateArg(self)
def get_types(self):
return (
'raw', 'enum',
'complex', 'real', 'float', 'int',
'complex_vector', 'real_vector', 'float_vector', 'int_vector',
'hex', 'string', 'bool',
'file_open', 'file_save', '_multiline', '_multiline_python_external',
'id', 'stream_id',
'grid_pos', 'notebook', 'gui_hint',
'import',
)
def __repr__(self):
"""
Get the repr (nice string format) for this param.
Returns:
the string representation
"""
##################################################
# Truncate helper method
##################################################
def _truncate(string, style=0):
max_len = max(27 - len(self.get_name()), 3)
if len(string) > max_len:
if style < 0: # Front truncate
string = '...' + string[3-max_len:]
elif style == 0: # Center truncate
string = string[:max_len/2 - 3] + '...' + string[-max_len/2:]
elif style > 0: # Rear truncate
string = string[:max_len-3] + '...'
return string
##################################################
# Simple conditions
##################################################
if not self.is_valid():
return _truncate(self.get_value())
if self.get_value() in self.get_option_keys():
return self.get_option(self.get_value()).get_name()
##################################################
# Split up formatting by type
##################################################
# Default center truncate
truncate = 0
e = self.get_evaluated()
t = self.get_type()
if isinstance(e, bool):
return str(e)
elif isinstance(e, COMPLEX_TYPES):
dt_str = num_to_str(e)
elif isinstance(e, VECTOR_TYPES):
# Vector types
if len(e) > 8:
# Large vectors use code
dt_str = self.get_value()
truncate = 1
else:
# Small vectors use eval
dt_str = ', '.join(map(num_to_str, e))
elif t in ('file_open', 'file_save'):
dt_str = self.get_value()
truncate = -1
else:
# Other types
dt_str = str(e)
# Done
return _truncate(dt_str, truncate)
def __repr2__(self):
"""
Get the repr (nice string format) for this param.
Returns:
the string representation
"""
if self.is_enum():
return self.get_option(self.get_value()).get_name()
return self.get_value()
def __str__(self):
return 'Param - {0}({1})'.format(self.get_name(), self.get_key())
def get_color(self):
"""
Get the color that represents this param's type.
Returns:
a hex color code.
"""
try:
return {
# Number types
'complex': Constants.COMPLEX_COLOR_SPEC,
'real': Constants.FLOAT_COLOR_SPEC,
'float': Constants.FLOAT_COLOR_SPEC,
'int': Constants.INT_COLOR_SPEC,
# Vector types
'complex_vector': Constants.COMPLEX_VECTOR_COLOR_SPEC,
'real_vector': Constants.FLOAT_VECTOR_COLOR_SPEC,
'float_vector': Constants.FLOAT_VECTOR_COLOR_SPEC,
'int_vector': Constants.INT_VECTOR_COLOR_SPEC,
# Special
'bool': Constants.INT_COLOR_SPEC,
'hex': Constants.INT_COLOR_SPEC,
'string': Constants.BYTE_VECTOR_COLOR_SPEC,
'id': Constants.ID_COLOR_SPEC,
'stream_id': Constants.ID_COLOR_SPEC,
'grid_pos': Constants.INT_VECTOR_COLOR_SPEC,
'notebook': Constants.INT_VECTOR_COLOR_SPEC,
'raw': Constants.WILDCARD_COLOR_SPEC,
}[self.get_type()]
except:
return '#FFFFFF'
def get_hide(self):
"""
Get the hide value from the base class.
Hide the ID parameter for most blocks. Exceptions below.
If the parameter controls a port type, vlen, or nports, return part.
If the parameter is an empty grid position, return part.
These parameters are redundant to display in the flow graph view.
Returns:
hide the hide property string
"""
hide = self.get_parent().resolve_dependencies(self._hide).strip()
if hide:
return hide
# Hide ID in non variable blocks
if self.get_key() == 'id' and not _show_id_matcher.match(self.get_parent().get_key()):
return 'part'
# Hide port controllers for type and nports
if self.get_key() in ' '.join(map(lambda p: ' '.join([p._type, p._nports]),
self.get_parent().get_ports())):
return 'part'
# Hide port controllers for vlen, when == 1
if self.get_key() in ' '.join(map(
lambda p: p._vlen, self.get_parent().get_ports())
):
try:
if int(self.get_evaluated()) == 1:
return 'part'
except:
pass
# Hide empty grid positions
if self.get_key() in ('grid_pos', 'notebook') and not self.get_value():
return 'part'
return hide
def validate(self):
"""
Validate the param.
The value must be evaluated and type must a possible type.
"""
Element.validate(self)
if self.get_type() not in self.get_types():
self.add_error_message('Type "{0}" is not a possible type.'.format(self.get_type()))
self._evaluated = None
try:
self._evaluated = self.evaluate()
except Exception, e:
self.add_error_message(str(e))
def get_evaluated(self):
return self._evaluated
def evaluate(self):
"""
Evaluate the value.
Returns:
evaluated type
"""
self._init = True
self._lisitify_flag = False
self._stringify_flag = False
self._hostage_cells = list()
t = self.get_type()
v = self.get_value()
#########################
# Enum Type
#########################
if self.is_enum():
return v
#########################
# Numeric Types
#########################
elif t in ('raw', 'complex', 'real', 'float', 'int', 'hex', 'bool'):
# Raise exception if python cannot evaluate this value
try:
e = self.get_parent().get_parent().evaluate(v)
except Exception, e:
raise Exception('Value "{0}" cannot be evaluated:\n{1}'.format(v, e))
# Raise an exception if the data is invalid
if t == 'raw':
return e
elif t == 'complex':
if not isinstance(e, COMPLEX_TYPES):
raise Exception('Expression "{0}" is invalid for type complex.'.format(str(e)))
return e
elif t == 'real' or t == 'float':
if not isinstance(e, REAL_TYPES):
raise Exception('Expression "{0}" is invalid for type float.'.format(str(e)))
return e
elif t == 'int':
if not isinstance(e, INT_TYPES):
raise Exception('Expression "{0}" is invalid for type integer.'.format(str(e)))
return e
elif t == 'hex':
return hex(e)
elif t == 'bool':
if not isinstance(e, bool):
raise Exception('Expression "{0}" is invalid for type bool.'.format(str(e)))
return e
else:
raise TypeError('Type "{0}" not handled'.format(t))
#########################
# Numeric Vector Types
#########################
elif t in ('complex_vector', 'real_vector', 'float_vector', 'int_vector'):
if not v:
# Turn a blank string into an empty list, so it will eval
v = '()'
# Raise exception if python cannot evaluate this value
try:
e = self.get_parent().get_parent().evaluate(v)
except Exception, e:
raise Exception('Value "{0}" cannot be evaluated:\n{1}'.format(v, e))
# Raise an exception if the data is invalid
if t == 'complex_vector':
if not isinstance(e, VECTOR_TYPES):
self._lisitify_flag = True
e = [e]
if not all([isinstance(ei, COMPLEX_TYPES) for ei in e]):
raise Exception('Expression "{0}" is invalid for type complex vector.'.format(str(e)))
return e
elif t == 'real_vector' or t == 'float_vector':
if not isinstance(e, VECTOR_TYPES):
self._lisitify_flag = True
e = [e]
if not all([isinstance(ei, REAL_TYPES) for ei in e]):
raise Exception('Expression "{0}" is invalid for type float vector.'.format(str(e)))
return e
elif t == 'int_vector':
if not isinstance(e, VECTOR_TYPES):
self._lisitify_flag = True
e = [e]
if not all([isinstance(ei, INT_TYPES) for ei in e]):
raise Exception('Expression "{0}" is invalid for type integer vector.'.format(str(e)))
return e
#########################
# String Types
#########################
elif t in ('string', 'file_open', 'file_save', '_multiline', '_multiline_python_external'):
# Do not check if file/directory exists, that is a runtime issue
try:
e = self.get_parent().get_parent().evaluate(v)
if not isinstance(e, str):
raise Exception()
except:
self._stringify_flag = True
e = str(v)
if t == '_multiline_python_external':
ast.parse(e) # Raises SyntaxError
return e
#########################
# Unique ID Type
#########################
elif t == 'id':
# Can python use this as a variable?
if not _check_id_matcher.match(v):
raise Exception('ID "{0}" must begin with a letter and may contain letters, numbers, and underscores.'.format(v))
ids = [param.get_value() for param in self.get_all_params(t, 'id')]
if v in ID_BLACKLIST:
raise Exception('ID "{0}" is blacklisted.'.format(v))
if self._key == 'id':
# Id should only appear once, or zero times if block is disabled
if ids.count(v) > 1:
raise Exception('ID "{0}" is not unique.'.format(v))
else:
# Id should exist to be a reference
if ids.count(v) < 1:
raise Exception('ID "{0}" does not exist.'.format(v))
return v
#########################
# Stream ID Type
#########################
elif t == 'stream_id':
# Get a list of all stream ids used in the virtual sinks
ids = [param.get_value() for param in filter(
lambda p: p.get_parent().is_virtual_sink(),
self.get_all_params(t),
)]
# Check that the virtual sink's stream id is unique
if self.get_parent().is_virtual_sink():
# Id should only appear once, or zero times if block is disabled
if ids.count(v) > 1:
raise Exception('Stream ID "{0}" is not unique.'.format(v))
# Check that the virtual source's steam id is found
if self.get_parent().is_virtual_source():
if v not in ids:
raise Exception('Stream ID "{0}" is not found.'.format(v))
return v
#########################
# GUI Position/Hint
#########################
elif t == 'gui_hint':
if ':' in v:
tab, pos = v.split(':')
elif '@' in v:
tab, pos = v, ''
else:
tab, pos = '', v
if '@' in tab:
tab, index = tab.split('@')
else:
index = '?'
# TODO: Problem with this code. Produces bad tabs
widget_str = ({
(True, True): 'self.%(tab)s_grid_layout_%(index)s.addWidget(%(widget)s, %(pos)s)',
(True, False): 'self.%(tab)s_layout_%(index)s.addWidget(%(widget)s)',
(False, True): 'self.top_grid_layout.addWidget(%(widget)s, %(pos)s)',
(False, False): 'self.top_layout.addWidget(%(widget)s)',
}[bool(tab), bool(pos)]) % {'tab': tab, 'index': index, 'widget': '%s', 'pos': pos}
# FIXME: Move replace(...) into the make template of the qtgui blocks
# Return a string here
class GuiHint(object):
def __init__(self, ws):
self._ws = ws
def __call__(self, w):
return (self._ws.replace('addWidget', 'addLayout') if 'layout' in w else self._ws) % w
def __str__(self):
return self._ws
return GuiHint(widget_str)
#########################
# Grid Position Type
#########################
elif t == 'grid_pos':
if not v:
# Allow for empty grid pos
return ''
e = self.get_parent().get_parent().evaluate(v)
if not isinstance(e, (list, tuple)) or len(e) != 4 or not all([isinstance(ei, int) for ei in e]):
raise Exception('A grid position must be a list of 4 integers.')
row, col, row_span, col_span = e
# Check row, col
if row < 0 or col < 0:
raise Exception('Row and column must be non-negative.')
# Check row span, col span
if row_span <= 0 or col_span <= 0:
raise Exception('Row and column span must be greater than zero.')
# Get hostage cell parent
try:
my_parent = self.get_parent().get_param('notebook').evaluate()
except:
my_parent = ''
# Calculate hostage cells
for r in range(row_span):
for c in range(col_span):
self._hostage_cells.append((my_parent, (row+r, col+c)))
# Avoid collisions
params = filter(lambda p: p is not self, self.get_all_params('grid_pos'))
for param in params:
for parent, cell in param._hostage_cells:
if (parent, cell) in self._hostage_cells:
raise Exception('Another graphical element is using parent "{0}", cell "{1}".'.format(str(parent), str(cell)))
return e
#########################
# Notebook Page Type
#########################
elif t == 'notebook':
if not v:
# Allow for empty notebook
return ''
# Get a list of all notebooks
notebook_blocks = filter(lambda b: b.get_key() == 'notebook', self.get_parent().get_parent().get_enabled_blocks())
# Check for notebook param syntax
try:
notebook_id, page_index = map(str.strip, v.split(','))
except:
raise Exception('Bad notebook page format.')
# Check that the notebook id is valid
try:
notebook_block = filter(lambda b: b.get_id() == notebook_id, notebook_blocks)[0]
except:
raise Exception('Notebook id "{0}" is not an existing notebook id.'.format(notebook_id))
# Check that page index exists
if int(page_index) not in range(len(notebook_block.get_param('labels').evaluate())):
raise Exception('Page index "{0}" is not a valid index number.'.format(page_index))
return notebook_id, page_index
#########################
# Import Type
#########################
elif t == 'import':
# New namespace
n = dict()
try:
exec v in n
except ImportError:
raise Exception('Import "{0}" failed.'.format(v))
except Exception:
raise Exception('Bad import syntax: "{0}".'.format(v))
return filter(lambda k: str(k) != '__builtins__', n.keys())
#########################
else:
raise TypeError('Type "{0}" not handled'.format(t))
def to_code(self):
"""
Convert the value to code.
For string and list types, check the init flag, call evaluate().
This ensures that evaluate() was called to set the xxxify_flags.
Returns:
a string representing the code
"""
v = self.get_value()
t = self.get_type()
# String types
if t in ('string', 'file_open', 'file_save', '_multiline', '_multiline_python_external'):
if not self._init:
self.evaluate()
return repr(v) if self._stringify_flag else v
# Vector types
elif t in ('complex_vector', 'real_vector', 'float_vector', 'int_vector'):
if not self._init:
self.evaluate()
if self._lisitify_flag:
return '(%s, )' % v
else:
return '(%s)' % v
else:
return v
def get_all_params(self, type, key=None):
"""
Get all the params from the flowgraph that have the given type and
optionally a given key
Args:
type: the specified type
key: the key to match against
Returns:
a list of params
"""
return sum([filter(lambda p: ((p.get_type() == type) and ((key is None) or (p.get_key() == key))), block.get_params()) for block in self.get_parent().get_parent().get_enabled_blocks()], [])
def is_enum(self):
return self._type == 'enum'
def get_value(self):
value = self._value
if self.is_enum() and value not in self.get_option_keys():
value = self.get_option_keys()[0]
self.set_value(value)
return value
def set_value(self, value):
# Must be a string
self._value = str(value)
def set_default(self, value):
if self._default == self._value:
self.set_value(value)
self._default = str(value)
def get_type(self):
return self.get_parent().resolve_dependencies(self._type)
def get_tab_label(self):
return self._tab_label
def get_name(self):
return self.get_parent().resolve_dependencies(self._name).strip()
def get_key(self):
return self._key
##############################################
# Access Options
##############################################
def get_option_keys(self):
return _get_keys(self.get_options())
def get_option(self, key):
return _get_elem(self.get_options(), key)
def get_options(self):
return self._options
##############################################
# Access Opts
##############################################
def get_opt_keys(self):
return self.get_option(self.get_value()).get_opt_keys()
def get_opt(self, key):
return self.get_option(self.get_value()).get_opt(key)
def get_opts(self):
return self.get_option(self.get_value()).get_opts()
##############################################
# Import/Export Methods
##############################################
def export_data(self):
"""
Export this param's key/value.
Returns:
a nested data odict
"""
n = odict()
n['key'] = self.get_key()
n['value'] = self.get_value()
return n
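num_to_str's nested eng_notation helper scales a value by powers of 1000 and appends the matching SI symbol; since it is nested inside num_to_str it cannot be imported, so here is a standalone restatement of the same logic:

def eng_notation(value, fmt='g'):
    """Convert a number to engineering notation: 5e-9 -> '5n', 2.4e6 -> '2.4M'."""
    template = '{0:' + fmt + '}{1}'
    magnitude = abs(value)
    # The space in the symbol string marks the prefix-less 10**0 slot.
    for exp, symbol in zip(range(9, -16, -3), 'GMk munpf'):
        factor = 10 ** exp
        if magnitude >= factor:
            return template.format(value / factor, symbol.strip())
    return template.format(value, '')

print(eng_notation(5e-9))   # 5n
print(eng_notation(2.4e6))  # 2.4M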
|
When it's time to replace your Xerox (6R1122) toner cartridge, make this page your very first shopping stop. Not only will you find a collection of toner intended to work in your printer, you'll also be able to choose from a number of different stores to find the lowest-priced cartridge. With shopping made so simple, you know you can always find cheap prices on Xerox (6R1122) toner right here.
Help other shoppers by leaving a comment about your experience buying Xerox (6R1122) printer ink or toner.
|
import unittest
import numpy as np
import pickle
from syft.nn.linear import LinearClassifier
from syft.he.paillier import KeyPair, PaillierTensor
from capsule.django_client import LocalDjangoCapsuleClient
class PySonarNotebooks(unittest.TestCase):
def test_model_training_demo_notebook(self):
"""If this test fails, you probably broke the demo notebook located at
PySonar/notebooks/Sonar - Decentralized Model Training Simulation
(local blockchain).ipynb """
c = LocalDjangoCapsuleClient()
d = LinearClassifier(desc="DiabetesClassifier", n_inputs=10, n_labels=1, capsule_client=c)
d.encrypt()
self.assertTrue(True)
class PySyftNotebooks(unittest.TestCase):
def test_paillier_HE_example_notebook(self):
"""If this test fails, you probably broke the demo notebook located at
PySyft/notebooks/Syft - Paillier Homomorphic Encryption Example.ipynb
"""
pubkey, prikey = KeyPair().generate()
x = PaillierTensor(pubkey, np.array([1, 2, 3, 4, 5.]))
# unittest's assertEqual cannot compare numpy arrays element-wise (bool() of an
# array is ambiguous), so use np.array_equal / np.allclose instead.
out1 = x.decrypt(prikey)
self.assertTrue(np.array_equal(out1, np.array([1., 2., 3., 4., 5.])))
out2 = (x + x[0]).decrypt(prikey)
self.assertTrue(np.array_equal(out2, np.array([2., 3., 4., 5., 6.])))
out3 = (x * 5).decrypt(prikey)
self.assertTrue(np.array_equal(out3, np.array([5., 10., 15., 20., 25.])))
out4 = (x + x / 5).decrypt(prikey)
self.assertTrue(np.allclose(out4, np.array([1.2, 2.4, 3.6, 4.8, 6.])))
pubkey_str = pubkey.serialize()
prikey_str = prikey.serialize()
pubkey2, prikey2 = KeyPair().deserialize(pubkey_str, prikey_str)
out5 = prikey2.decrypt(x)
self.assertTrue(np.array_equal(out5, np.array([1., 2., 3., 4., 5.])))
y = PaillierTensor(pubkey, (np.ones(5)) / 2)
out6 = prikey.decrypt(y)
self.assertTrue(np.array_equal(out6, np.array([.5, .5, .5, .5, .5])))
y_str = pickle.dumps(y)
y2 = pickle.loads(y_str)
out7 = prikey.decrypt(y2)
self.assertTrue(np.array_equal(out7, np.array([.5, .5, .5, .5, .5])))
def test_paillier_linear_classifier_notebook(self):
"""If this test fails, you probably broke the demo notebook located at
PySyft/notebooks/Syft - Paillier Homomorphic Encryption Example.ipynb
"""
capsule = LocalDjangoCapsuleClient()
model = LinearClassifier(capsule_client=capsule)
assert(model.capsule == capsule)
try:
model = model.encrypt()
encrypted = True
except Exception as e:
encrypted = False
print('[!]', e)
input = np.array([[0, 0, 1, 1], [0, 0, 1, 0],
[1, 0, 1, 1], [0, 0, 1, 0]])
target = np.array([[0, 1], [0, 0], [1, 1], [0, 0]])
for iter in range(3):
model.learn(input, target, alpha=0.5)
if encrypted:
model = model.decrypt()
for i in range(len(input)):
model.forward(input[i])
|
Very spooky looking! Injection frame with wire arms, liner fabric robe, blow molded face, fog hose and nozzle included. Just attach to any of our fog machines for a spooky effect! Reaper is approximately 58 inches tall, wire arms can extend out to approximately 21 inches.
|
'''
Problem:
Given [int], find a non-empty sub-array with the max sum.
'''
def subarray(target):
    # Exit early: if everything is non-negative, the whole array is optimal.
    if all(item >= 0 for item in target):
        return target
    # If everything is non-positive, the best subarray is the single largest item.
    if all(item <= 0 for item in target):
        return [max(target)]
    # Build an incremental (prefix) sum lookup.
    lookup = {}
    running = 0
    for index, item in enumerate(target):
        running += item
        lookup[index] = running
    # Find all points where the values cross 0. The start and end of target
    # should also be included.
    ups = [0]
    downs = []
    last = None
    # Build the cross lookup.
    for index, item in enumerate(target):
        # Set last.
        if last is None:
            last = item
            continue
        # Check for a hit; we only care about positives.
        if last <= 0 and item > 0:
            ups.append(index)
        elif last > 0 and item <= 0:
            downs.append(index - 1)
        # Update last.
        last = item
    # Add the end.
    downs.append(len(target) - 1)
    # Permute the cross points of interest. (The max/sum locals are renamed:
    # shadowing the builtins made the earlier max(target) call an UnboundLocalError.)
    best = None
    result = None
    for start in ups:
        for end in downs:
            # end == start is a valid single-element span; only skip end < start.
            if end < start:
                continue
            # Sum of target[start:end+1] via the prefix sums.
            total = lookup[end] - lookup[start] + target[start]
            # Track the best span; "if not best" would misfire when best == 0.
            if best is None or total > best:
                best = total
                result = (start, end)
    # Return result.
    start, end = result
    return target[start: end + 1]
test = [13, -3, -25, -20, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]
print(test)
print(subarray(test))
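For comparison, Kadane's algorithm solves the same problem in a single pass without the crossing-point bookkeeping; a minimal sketch:

def kadane(target):
    """Return the maximum-sum contiguous subarray of a non-empty list."""
    best_sum = current_sum = target[0]
    best_start = best_end = current_start = 0
    for index, item in enumerate(target[1:], start=1):
        # Either extend the current run or start a new one at this item.
        if current_sum + item < item:
            current_sum = item
            current_start = index
        else:
            current_sum += item
        if current_sum > best_sum:
            best_sum = current_sum
            best_start, best_end = current_start, index
    return target[best_start: best_end + 1]

print(kadane([13, -3, -25, -20, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]))
# [18, 20, -7, 12] (sum 43), the same span subarray() finds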
|
It’s been less than 24 hours since Night, Night Farm showed up on my doorstep, and it has already moved up to the top of Charlotte’s favorite stack.
With the winning combination of being written by the author of one of her other top favorite books (Night, Night Prayer) and being about farm animals, it isn’t much of a surprise. And we’ve already read it at least two dozen times!
Night Night Farm features favorite animals on the farm getting ready for bedtime. Dressed in adorable little PJ's, little mice take you on the rounds to say good night to each barnyard family. You get to see pigs taking a bath before bed, a foal who was just too tired after a run to the orchard, and kittens who drank too much milk to move.
The pictures are adorable. As you all know, pictures in children’s books are a big seller for me and Night Night Farm definitely would catch my eye in a bookstore with the sweet, but more realistic drawings of each animal. The words are simple, so it’d make a great early reader as well as a bedtime story!
I love the emphasis on thanking God for all the little things that make our days in Night Night Prayer, and while Night Night Farm is more about tucking the animals in bed than a bedtime prayer, it does close out the story by taking a moment to recognize God as the Creator of all our animal friends, and our Creator too!
In all, Night Night Farm was a great addition to our bookshelf, and one that I already know will be loved for years to come.
This book was given to me by BookLook Bloggers. All thoughts and opinions are my own.
|
#
# Gramps - a GTK+/GNOME based genealogy program
#
# Copyright (C) 2001-2007 Donald N. Allingham
# Copyright (C) 2009-2010 Nick Hall
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
"""
Provide the base classes for GRAMPS' DataView classes
"""
#----------------------------------------------------------------
#
# python modules
#
#----------------------------------------------------------------
from abc import abstractmethod
import html
import logging
_LOG = logging.getLogger('.navigationview')
#----------------------------------------------------------------
#
# gtk
#
#----------------------------------------------------------------
from gi.repository import Gdk
from gi.repository import Gtk
#----------------------------------------------------------------
#
# Gramps
#
#----------------------------------------------------------------
from gramps.gen.const import GRAMPS_LOCALE as glocale
_ = glocale.translation.sgettext
from .pageview import PageView
from ..uimanager import ActionGroup
from gramps.gen.utils.db import navigation_label
from gramps.gen.constfunc import mod_key
from ..utils import match_primary_mask
DISABLED = -1
MRU_SIZE = 10
MRU_TOP = '<section id="CommonHistory">'
MRU_BTM = '</section>'
#------------------------------------------------------------------------------
#
# NavigationView
#
#------------------------------------------------------------------------------
class NavigationView(PageView):
"""
The NavigationView class is the base class for all Data Views that require
navigation functionality. Views that need bookmarks and forward/backward
should derive from this class.
"""
def __init__(self, title, pdata, state, uistate, bm_type, nav_group):
PageView.__init__(self, title, pdata, state, uistate)
self.bookmarks = bm_type(self.dbstate, self.uistate, self.change_active)
self.fwd_action = None
self.back_action = None
self.book_action = None
self.other_action = None
self.active_signal = None
self.mru_signal = None
self.nav_group = nav_group
self.mru_active = DISABLED
self.uimanager = uistate.uimanager
self.uistate.register(state, self.navigation_type(), self.nav_group)
def navigation_type(self):
"""
Indicates the navigation type. Navigation type can be the string
name of any of the primary Objects. A History object will be
created for it, see DisplayState.History
"""
return None
def define_actions(self):
"""
Define menu actions.
"""
PageView.define_actions(self)
self.bookmark_actions()
self.navigation_actions()
def disable_action_group(self):
"""
Normally, this would not be overridden from the base class. However,
in this case, we have additional action groups that need to be
handled correctly.
"""
PageView.disable_action_group(self)
self.uimanager.set_actions_visible(self.fwd_action, False)
self.uimanager.set_actions_visible(self.back_action, False)
def enable_action_group(self, obj):
"""
Normally, this would not be overridden from the base class. However,
in this case, we have additional action groups that need to be
handled correctly.
"""
PageView.enable_action_group(self, obj)
self.uimanager.set_actions_visible(self.fwd_action, True)
self.uimanager.set_actions_visible(self.back_action, True)
hobj = self.get_history()
self.uimanager.set_actions_sensitive(self.fwd_action,
not hobj.at_end())
self.uimanager.set_actions_sensitive(self.back_action,
not hobj.at_front())
def change_page(self):
"""
Called when the page changes.
"""
hobj = self.get_history()
self.uimanager.set_actions_sensitive(self.fwd_action,
not hobj.at_end())
self.uimanager.set_actions_sensitive(self.back_action,
not hobj.at_front())
self.uimanager.set_actions_sensitive(self.other_action,
not self.dbstate.db.readonly)
self.uistate.modify_statusbar(self.dbstate)
def set_active(self):
"""
Called when the page becomes active (displayed).
"""
PageView.set_active(self)
self.bookmarks.display()
hobj = self.get_history()
self.active_signal = hobj.connect('active-changed', self.goto_active)
self.mru_signal = hobj.connect('mru-changed', self.update_mru_menu)
self.update_mru_menu(hobj.mru, update_menu=False)
self.goto_active(None)
def set_inactive(self):
"""
Called when the page becomes inactive (not displayed).
"""
if self.active:
PageView.set_inactive(self)
self.bookmarks.undisplay()
hobj = self.get_history()
hobj.disconnect(self.active_signal)
hobj.disconnect(self.mru_signal)
self.mru_disable()
def navigation_group(self):
"""
Return the navigation group.
"""
return self.nav_group
def get_history(self):
"""
Return the history object.
"""
return self.uistate.get_history(self.navigation_type(),
self.navigation_group())
def goto_active(self, active_handle):
"""
Callback (and usable function) that selects the active person
in the display tree.
"""
active_handle = self.uistate.get_active(self.navigation_type(),
self.navigation_group())
if active_handle:
self.goto_handle(active_handle)
hobj = self.get_history()
self.uimanager.set_actions_sensitive(self.fwd_action,
not hobj.at_end())
self.uimanager.set_actions_sensitive(self.back_action,
not hobj.at_front())
def get_active(self):
"""
Return the handle of the active object.
"""
hobj = self.uistate.get_history(self.navigation_type(),
self.navigation_group())
return hobj.present()
def change_active(self, handle):
"""
Changes the active object.
"""
hobj = self.get_history()
if handle and not hobj.lock and not (handle == hobj.present()):
hobj.push(handle)
@abstractmethod
def goto_handle(self, handle):
"""
Needs to be implemented by classes derived from this.
Used to move to the given handle.
"""
def selected_handles(self):
"""
Return the active person's handle in a list. Used for
compatibility with those list views that can return multiply
selected items.
"""
active_handle = self.uistate.get_active(self.navigation_type(),
self.navigation_group())
return [active_handle] if active_handle else []
####################################################################
# BOOKMARKS
####################################################################
def add_bookmark(self, *obj):
"""
Add a bookmark to the list.
"""
from gramps.gen.display.name import displayer as name_displayer
active_handle = self.uistate.get_active('Person')
active_person = self.dbstate.db.get_person_from_handle(active_handle)
if active_person:
self.bookmarks.add(active_handle)
name = name_displayer.display(active_person)
self.uistate.push_message(self.dbstate,
_("%s has been bookmarked") % name)
else:
from ..dialog import WarningDialog
WarningDialog(
_("Could Not Set a Bookmark"),
_("A bookmark could not be set because "
"no one was selected."),
parent=self.uistate.window)
def edit_bookmarks(self, *obj):
"""
Call the bookmark editor.
"""
self.bookmarks.edit()
def bookmark_actions(self):
"""
Define the bookmark menu actions.
"""
self.book_action = ActionGroup(name=self.title + '/Bookmark')
self.book_action.add_actions([
('AddBook', self.add_bookmark, '<PRIMARY>d'),
('EditBook', self.edit_bookmarks, '<shift><PRIMARY>D'),
])
self._add_action_group(self.book_action)
####################################################################
# NAVIGATION
####################################################################
def navigation_actions(self):
"""
Define the navigation menu actions.
"""
# add the Forward action group to handle the Forward button
self.fwd_action = ActionGroup(name=self.title + '/Forward')
self.fwd_action.add_actions([('Forward', self.fwd_clicked,
"%sRight" % mod_key())])
# add the Backward action group to handle the Forward button
self.back_action = ActionGroup(name=self.title + '/Backward')
self.back_action.add_actions([('Back', self.back_clicked,
"%sLeft" % mod_key())])
self._add_action('HomePerson', self.home, "%sHome" % mod_key())
self.other_action = ActionGroup(name=self.title + '/PersonOther')
self.other_action.add_actions([
('SetActive', self.set_default_person)])
self._add_action_group(self.back_action)
self._add_action_group(self.fwd_action)
self._add_action_group(self.other_action)
def set_default_person(self, *obj):
"""
Set the default person.
"""
active = self.uistate.get_active('Person')
if active:
self.dbstate.db.set_default_person_handle(active)
def home(self, *obj):
"""
Move to the default person.
"""
defperson = self.dbstate.db.get_default_person()
if defperson:
self.change_active(defperson.get_handle())
else:
from ..dialog import WarningDialog
WarningDialog(_("No Home Person"),
_("You need to set a 'default person' to go to. "
"Select the People View, select the person you want as "
"'Home Person', then confirm your choice "
"via the menu Edit -> Set Home Person."),
parent=self.uistate.window)
def jump(self, *obj):
"""
A dialog to move to a Gramps ID entered by the user.
"""
dialog = Gtk.Dialog(title=_('Jump to by Gramps ID'),
transient_for=self.uistate.window)
dialog.set_border_width(12)
label = Gtk.Label(label='<span weight="bold" size="larger">%s</span>' %
_('Jump to by Gramps ID'))
label.set_use_markup(True)
dialog.vbox.add(label)
dialog.vbox.set_spacing(10)
dialog.vbox.set_border_width(12)
hbox = Gtk.Box()
hbox.pack_start(Gtk.Label(label=_("%s: ") % _('ID')), True, True, 0)
text = Gtk.Entry()
text.set_activates_default(True)
hbox.pack_start(text, False, True, 0)
dialog.vbox.pack_start(hbox, False, True, 0)
dialog.add_buttons(_('_Cancel'), Gtk.ResponseType.CANCEL,
_('_Jump to'), Gtk.ResponseType.OK)
dialog.set_default_response(Gtk.ResponseType.OK)
dialog.vbox.show_all()
if dialog.run() == Gtk.ResponseType.OK:
gid = text.get_text()
handle = self.get_handle_from_gramps_id(gid)
if handle is not None:
self.change_active(handle)
else:
self.uistate.push_message(
self.dbstate,
_("Error: %s is not a valid Gramps ID") % gid)
dialog.destroy()
def get_handle_from_gramps_id(self, gid):
"""
Get an object handle from its Gramps ID.
Needs to be implemented by the inheriting class.
"""
pass
def fwd_clicked(self, *obj):
"""
Move forward one object in the history.
"""
hobj = self.get_history()
hobj.lock = True
if not hobj.at_end():
hobj.forward()
self.uistate.modify_statusbar(self.dbstate)
self.uimanager.set_actions_sensitive(self.fwd_action,
not hobj.at_end())
self.uimanager.set_actions_sensitive(self.back_action, True)
hobj.lock = False
def back_clicked(self, *obj):
"""
Move backward one object in the history.
"""
hobj = self.get_history()
hobj.lock = True
if not hobj.at_front():
hobj.back()
self.uistate.modify_statusbar(self.dbstate)
self.uimanager.set_actions_sensitive(self.back_action,
not hobj.at_front())
self.uimanager.set_actions_sensitive(self.fwd_action, True)
hobj.lock = False
####################################################################
# MRU functions
####################################################################
def mru_disable(self):
"""
Remove the UI and action groups for the MRU list.
"""
if self.mru_active != DISABLED:
self.uimanager.remove_ui(self.mru_active)
self.uimanager.remove_action_group(self.mru_action)
self.mru_active = DISABLED
def mru_enable(self, update_menu=False):
"""
Enables the UI and action groups for the MRU list.
"""
if self.mru_active == DISABLED:
self.uimanager.insert_action_group(self.mru_action)
self.mru_active = self.uimanager.add_ui_from_string(self.mru_ui)
if update_menu:
self.uimanager.update_menu()
def update_mru_menu(self, items, update_menu=True):
"""
Builds the UI and action group for the MRU list.
"""
menuitem = ''' <item>
<attribute name="action">win.%s%02d</attribute>
<attribute name="label">%s</attribute>
</item>
'''
menus = ''
self.mru_disable()
nav_type = self.navigation_type()
hobj = self.get_history()
menu_len = min(len(items) - 1, MRU_SIZE)
data = []
for index in range(menu_len - 1, -1, -1):
name, _obj = navigation_label(self.dbstate.db, nav_type,
items[index])
menus += menuitem % (nav_type, index, html.escape(name))
data.append(('%s%02d' % (nav_type, index),
make_callback(hobj.push, items[index]),
"%s%d" % (mod_key(), menu_len - 1 - index)))
self.mru_ui = [MRU_TOP + menus + MRU_BTM]
self.mru_action = ActionGroup(name=self.title + '/MRU')
self.mru_action.add_actions(data)
self.mru_enable(update_menu)
####################################################################
# Template functions
####################################################################
@abstractmethod
def build_tree(self):
"""
Rebuilds the current display. This must be overridden by the derived
class.
"""
@abstractmethod
def build_widget(self):
"""
Builds the container widget for the interface. Must be overridden by
the derived class. Returns a gtk container widget.
"""
def key_press_handler(self, widget, event):
"""
Handle the control+c (copy) and control+v (paste), or pass it on.
"""
if self.active:
if event.type == Gdk.EventType.KEY_PRESS:
if (event.keyval == Gdk.KEY_c and
match_primary_mask(event.get_state())):
self.call_copy()
return True
return super(NavigationView, self).key_press_handler(widget, event)
def call_copy(self):
"""
Navigation specific copy (control+c) handler. If the
copy can be handled, it returns true, otherwise false.
The code brings up the Clipboard (if already exists) or
creates it. The copy is handled through the drag and drop
system.
"""
nav_type = self.navigation_type()
handles = self.selected_handles()
return self.copy_to_clipboard(nav_type, handles)
def make_callback(func, handle):
"""
Generates a callback function based off the passed arguments
"""
return lambda x, y: func(handle)
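A concrete view supplies the abstract pieces above; a minimal sketch of a derived class (all names hypothetical, with a bare Gtk.TreeView standing in for a real container):

class PersonListView(NavigationView):
    """Hypothetical view that navigates Person objects."""

    def navigation_type(self):
        # A History object will be created for this primary object type.
        return 'Person'

    def build_widget(self):
        # Return the container widget for the interface.
        self.list = Gtk.TreeView()
        return self.list

    def build_tree(self):
        # Repopulate self.list from self.dbstate.db.
        pass

    def goto_handle(self, handle):
        # Select the row corresponding to handle.
        pass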
|
Many times, along with financial advice and guidance, clients also expect finance experts to help execute their recommendations. This may include providing services like capital raising, looking for new investment opportunities, or searching for prospective buyers for any disinvestment plans. Nowadays, clients are looking for a single-stop shop for all these solutions. However, as a Finance Expert, it is best for you to concentrate on what you specialize in: introducing innovative and creative financial solutions and recommending customized financial strategies. You can leave the execution of your clients' capital, funding and investment requirements to us, based on our proven Capital Matchmaking and Ownership Succession expertise. Above all, given the competitive market conditions, it is getting tougher for you to win new clients. With the help of our V4G Partnership Alliance Program, you can keep track of the latest industry trends, establish a direct connection with top-line industry professionals, generate new client leads, and reach new heights in your career.
Apply now to be considered for the V4G Partner Alliance Program. You just need to fill out a short form to get started. Join us today as a preferred Finance Expert and get closer to expanding and strengthening your business.
|