# -*- coding: utf-8 -*-
from PyQt5.QtWidgets import *
from PyQt5.QtCore import *
class ParameterWidget(QWidget):
    def __init__(self, parent=None):
        super(ParameterWidget, self).__init__(parent)
        self.formLayout = QFormLayout()

        self.widthLineEdit = QLineEdit()
        self.widthLineEdit.setText('800')
        self.formLayout.addRow('width', self.widthLineEdit)

        self.heightLineEdit = QLineEdit()
        self.heightLineEdit.setText('600')
        self.formLayout.addRow('height', self.heightLineEdit)

        self.sppLineEdit = QLineEdit()
        self.sppLineEdit.setText('1')
        self.formLayout.addRow('samples', self.sppLineEdit)

        self.nphotonLineEdit = QLineEdit()
        self.nphotonLineEdit.setText('1000000')
        self.formLayout.addRow('photons', self.nphotonLineEdit)

        self.scaleLineEdit = QLineEdit()
        self.scaleLineEdit.setText('0.01')
        self.formLayout.addRow('scale', self.scaleLineEdit)

        self.setLayout(self.formLayout)


class ControlWidget(QWidget):
    def __init__(self, parent=None):
        super(ControlWidget, self).__init__(parent)
        self.paramWidget = ParameterWidget()

        self.loadPushButton = QPushButton()
        self.loadPushButton.setText('Load')
        self.estimatePushButton = QPushButton()
        self.estimatePushButton.setText('Estimate')
        self.renderPushButton = QPushButton()
        self.renderPushButton.setText('Render')

        self.boxLayout = QVBoxLayout()
        self.boxLayout.addWidget(self.paramWidget)
        self.boxLayout.addWidget(self.loadPushButton)
        self.boxLayout.addWidget(self.estimatePushButton)
        self.boxLayout.addWidget(self.renderPushButton)
        self.setLayout(self.boxLayout)

    @property
    def width_value(self):
        return int(self.paramWidget.widthLineEdit.text())

    @width_value.setter
    def width_value(self, value):
        self.paramWidget.widthLineEdit.setText(str(value))

    @property
    def height_value(self):
        return int(self.paramWidget.heightLineEdit.text())

    @height_value.setter
    def height_value(self, value):
        self.paramWidget.heightLineEdit.setText(str(value))

    @property
    def sample_per_pixel(self):
        return int(self.paramWidget.sppLineEdit.text())

    @sample_per_pixel.setter
    def sample_per_pixel(self, value):
        self.paramWidget.sppLineEdit.setText(str(value))

    @property
    def num_photons(self):
        return int(self.paramWidget.nphotonLineEdit.text())

    @num_photons.setter
    def num_photons(self, value):
        self.paramWidget.nphotonLineEdit.setText(str(value))

    @property
    def bssrdf_scale(self):
        return float(self.paramWidget.scaleLineEdit.text())

    @bssrdf_scale.setter
    def bssrdf_scale(self, value):
        self.paramWidget.scaleLineEdit.setText(str(value))
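A minimal usage sketch (not part of the original module; it assumes the two classes above are in scope): create a QApplication, show the control panel, and round-trip one of the typed properties.

import sys

if __name__ == '__main__':
    app = QApplication(sys.argv)
    control = ControlWidget()
    control.width_value = 1024      # the setter writes back into the QLineEdit
    control.show()
    print(control.width_value)      # -> 1024, parsed from the line edit text
    sys.exit(app.exec_())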
|
It was very nice to meet many of you at open house. Thank you to all the families that provided snack!
The year’s first PTO meeting is scheduled for September 25th beginning at 5 pm.
School begins promptly at 8:45 am and ends at 3:45 pm. If you anticipate being late, please call the school.
Please remember to meet your child at our class number at dismissal time to avoid congestion around the school doors. Please remind your child to say good-bye before leaving. I will walk the students out beginning at 3:40 pm. Any students who remain at 3:45 pm will be escorted to room 306.
Please return all forms that have been sent home: emergency contact, student information and lunch form.
If you are interested in volunteering in the classroom please email me. I am looking for volunteers that can commit to a regular schedule. Please also consider volunteering in the lunchroom and during lunch recess. If you intend on volunteering, you need to fill out a volunteer packet prior to volunteering. I also need volunteers to sharpen pencils and a volunteer to wash classroom tables after school. If you cannot commit to a weekly volunteer time, you can also reach out about reading books to the class.
Children are permitted to have “pop top” water bottles in the classroom.
In Phonics, we will work on recognizing rhymes, building words using the alphabet chart, hearing sounds in sequence and saying words slowly to predict letter sequence. We will focus on the short sound of /i/.
This week we will continue to read a collection of books about friendship and being unique. We will read the books “How to Lose All Your Friends”, “Recess Queen”, and “Enemy Pie”.
We will continue to write narratives in Writing Workshop. We will work on telling the stories of our lives using illustrations and focusing on small moment rather than an entire day.
In Math, we will learn all about tally marks, collecting data and writing number stories. We will also learn more about appropriate use of math tools.
In Science, we will begin our unit on space systems, focusing on the sun, moon and stars. We will continue our unit on families. A huge thank you to all the families that completed the Friday letter. The students loved seeing what their families wrote to them. The family letter is a great way to connect home and school. The students especially loved that their family had Friday homework and they did not!
|
# -*- coding: utf-8 -*-
"""Test cases for the WikidataQuery query syntax and API."""
#
# (C) Pywikibot team, 2014
#
# Distributed under the terms of the MIT license.
#
from __future__ import absolute_import, unicode_literals
import os
import time
import pywikibot
import pywikibot.data.wikidataquery as query
from pywikibot.page import ItemPage, PropertyPage, Claim
from tests.aspects import unittest, WikidataTestCase, TestCase
class TestDryApiFunctions(TestCase):

    """Test WikiDataQuery API functions."""

    net = False

    def testQueries(self):
        """
        Test Queries and check whether they're behaving correctly.

        Check that we produce the expected query strings and that
        invalid inputs are rejected correctly.
        """
        q = query.HasClaim(99)
        self.assertEqual(str(q), "claim[99]")

        q = query.HasClaim(99, 100)
        self.assertEqual(str(q), "claim[99:100]")

        q = query.HasClaim(99, [100])
        self.assertEqual(str(q), "claim[99:100]")

        q = query.HasClaim(99, [100, 101])
        self.assertEqual(str(q), "claim[99:100,101]")

        q = query.NoClaim(99, [100, 101])
        self.assertEqual(str(q), "noclaim[99:100,101]")

        q = query.StringClaim(99, "Hello")
        self.assertEqual(str(q), 'string[99:"Hello"]')

        q = query.StringClaim(99, ["Hello"])
        self.assertEqual(str(q), 'string[99:"Hello"]')

        q = query.StringClaim(99, ["Hello", "world"])
        self.assertEqual(str(q), 'string[99:"Hello","world"]')

        self.assertRaises(TypeError, lambda: query.StringClaim(99, 2))

        q = query.Tree(92, [1], 2)
        self.assertEqual(str(q), 'tree[92][1][2]')

        # missing third arg
        q = query.Tree(92, 1)
        self.assertEqual(str(q), 'tree[92][1][]')

        # missing second arg
        q = query.Tree(92, reverse=3)
        self.assertEqual(str(q), 'tree[92][][3]')

        q = query.Tree([92, 93], 1, [2, 7])
        self.assertEqual(str(q), 'tree[92,93][1][2,7]')

        # bad tree arg types
        self.assertRaises(TypeError, lambda: query.Tree(99, "hello"))

        q = query.Link("enwiki")
        self.assertEqual(str(q), 'link[enwiki]')

        q = query.NoLink(["enwiki", "frwiki"])
        self.assertEqual(str(q), 'nolink[enwiki,frwiki]')

        # bad link arg types
        self.assertRaises(TypeError, lambda: query.Link(99))
        self.assertRaises(TypeError, lambda: query.Link([99]))

        # HasClaim with tree as arg
        q = query.HasClaim(99, query.Tree(1, 2, 3))
        self.assertEqual(str(q), "claim[99:(tree[1][2][3])]")

        q = query.HasClaim(99, query.Tree(1, [2, 5], [3, 90]))
        self.assertEqual(str(q), "claim[99:(tree[1][2,5][3,90])]")


class TestLiveApiFunctions(WikidataTestCase):

    """Test WikiDataQuery API functions."""

    cached = True

    def testQueriesWDStructures(self):
        """Test queries using Wikibase page structures like ItemPage."""
        q = query.HasClaim(PropertyPage(self.repo, "P99"))
        self.assertEqual(str(q), "claim[99]")

        q = query.HasClaim(PropertyPage(self.repo, "P99"),
                           ItemPage(self.repo, "Q100"))
        self.assertEqual(str(q), "claim[99:100]")

        q = query.HasClaim(99, [100, PropertyPage(self.repo, "P101")])
        self.assertEqual(str(q), "claim[99:100,101]")

        q = query.StringClaim(PropertyPage(self.repo, "P99"), "Hello")
        self.assertEqual(str(q), 'string[99:"Hello"]')

        q = query.Tree(ItemPage(self.repo, "Q92"), [1], 2)
        self.assertEqual(str(q), 'tree[92][1][2]')

        q = query.Tree(ItemPage(self.repo, "Q92"),
                       [PropertyPage(self.repo, "P101")], 2)
        self.assertEqual(str(q), 'tree[92][101][2]')

        self.assertRaises(TypeError,
                          lambda: query.Tree(PropertyPage(self.repo, "P92"),
                                             [PropertyPage(self.repo, "P101")],
                                             2))

        c = pywikibot.Coordinate(50, 60)
        q = query.Around(PropertyPage(self.repo, "P625"), c, 23.4)
        self.assertEqual(str(q), 'around[625,50,60,23.4]')

        begin = pywikibot.WbTime(site=self.repo, year=1999)
        end = pywikibot.WbTime(site=self.repo, year=2010, hour=1)

        # note no second comma
        q = query.Between(PropertyPage(self.repo, "P569"), begin)
        self.assertEqual(str(q), 'between[569,+00000001999-01-01T00:00:00Z]')

        q = query.Between(PropertyPage(self.repo, "P569"), end=end)
        self.assertEqual(str(q), 'between[569,,+00000002010-01-01T01:00:00Z]')

        q = query.Between(569, begin, end)
        self.assertEqual(str(q),
                         'between[569,+00000001999-01-01T00:00:00Z,+00000002010-01-01T01:00:00Z]')

        # try negative year
        begin = pywikibot.WbTime(site=self.repo, year=-44)
        q = query.Between(569, begin, end)
        self.assertEqual(str(q),
                         'between[569,-00000000044-01-01T00:00:00Z,+00000002010-01-01T01:00:00Z]')

    def testQueriesDirectFromClaim(self):
        """Test construction of the right Query from a page.Claim."""
        # Datatype: item
        claim = Claim(self.repo, 'P17')
        claim.setTarget(pywikibot.ItemPage(self.repo, 'Q35'))
        q = query.fromClaim(claim)
        self.assertEqual(str(q), 'claim[17:35]')

        # Datatype: string
        claim = Claim(self.repo, 'P225')
        claim.setTarget('somestring')
        q = query.fromClaim(claim)
        self.assertEqual(str(q), 'string[225:"somestring"]')

        # Datatype: external-id
        claim = Claim(self.repo, 'P268')
        claim.setTarget('somestring')
        q = query.fromClaim(claim)
        self.assertEqual(str(q), 'string[268:"somestring"]')

        # Datatype: commonsMedia
        claim = Claim(self.repo, 'P18')
        claim.setTarget(
            pywikibot.FilePage(
                pywikibot.Site(self.family, self.code),
                'Foo.jpg'))
        q = query.fromClaim(claim)
        self.assertEqual(str(q), 'string[18:"Foo.jpg"]')

    def testQuerySets(self):
        """Test that we can join queries together correctly."""
        # construct via queries
        qs = query.HasClaim(99, 100).AND(query.HasClaim(99, 101))
        self.assertEqual(str(qs), 'claim[99:100] AND claim[99:101]')
        self.assertEqual(repr(qs), 'QuerySet(claim[99:100] AND claim[99:101])')

        qs = query.HasClaim(99, 100).AND(query.HasClaim(99, 101)).AND(query.HasClaim(95))
        self.assertEqual(str(qs), 'claim[99:100] AND claim[99:101] AND claim[95]')

        # construct via a list of queries
        qs = query.HasClaim(99, 100).AND([query.HasClaim(99, 101), query.HasClaim(95)])
        self.assertEqual(str(qs), 'claim[99:100] AND claim[99:101] AND claim[95]')

        qs = query.HasClaim(99, 100).OR([query.HasClaim(99, 101), query.HasClaim(95)])
        self.assertEqual(str(qs), 'claim[99:100] OR claim[99:101] OR claim[95]')

        q1 = query.HasClaim(99, 100)
        q2 = query.HasClaim(99, 101)

        # different joiners get explicit grouping parens (the api also allows
        # implicit, but we don't do that)
        qs1 = q1.AND(q2)
        qs2 = q1.OR(qs1).AND(query.HasClaim(98))
        self.assertEqual(str(qs2),
                         '(claim[99:100] OR (claim[99:100] AND claim[99:101])) AND claim[98]')

        # if the joiners are the same, no need to group
        qs1 = q1.AND(q2)
        qs2 = q1.AND(qs1).AND(query.HasClaim(98))
        self.assertEqual(str(qs2),
                         'claim[99:100] AND claim[99:100] AND claim[99:101] AND claim[98]')

        qs1 = query.HasClaim(100).AND(query.HasClaim(101))
        qs2 = qs1.OR(query.HasClaim(102))
        self.assertEqual(str(qs2), '(claim[100] AND claim[101]) OR claim[102]')

        qs = query.Link("enwiki").AND(query.NoLink("dewiki"))
        self.assertEqual(str(qs), 'link[enwiki] AND nolink[dewiki]')

    def testQueryApiSyntax(self):
        """Test that we can generate the API query correctly."""
        w = query.WikidataQuery("http://example.com")

        qs = w.getQueryString(query.Link("enwiki"))
        self.assertEqual(qs, "q=link%5Benwiki%5D")
        self.assertEqual(w.getUrl(qs), "http://example.com/api?q=link%5Benwiki%5D")

        # check labels and props work OK
        qs = w.getQueryString(query.Link("enwiki"), ['en', 'fr'], ['prop'])
        self.assertEqual(qs, "q=link%5Benwiki%5D&labels=en,fr&props=prop")


class TestApiSlowFunctions(TestCase):

    """Test slow WikiDataQuery API functions."""

    hostname = 'https://wdq.wmflabs.org/api'

    def testQueryApiGetter(self):
        """Test that we can actually retrieve data and that caching works."""
        w = query.WikidataQuery(cacheMaxAge=0)

        # this query doesn't return any items, save a bit of bandwidth!
        q = query.HasClaim(105).AND([query.NoClaim(225), query.HasClaim(100)])

        # check that the cache file is created
        cacheFile = w.getCacheFilename(w.getQueryString(q, [], []))

        # remove existing cache file
        try:
            os.remove(cacheFile)
        except OSError:
            pass

        data = w.query(q)
        self.assertFalse(os.path.exists(cacheFile))

        w = query.WikidataQuery(cacheMaxAge=0.1)
        data = w.query(q)
        self.assertTrue(os.path.exists(cacheFile))

        self.assertIn('status', data)
        self.assertIn('items', data)

        t1 = time.time()
        data = w.query(q)
        t2 = time.time()

        # check that the cache access is fast
        self.assertLess(t2 - t1, 0.2)


if __name__ == '__main__':  # pragma: no cover
    try:
        unittest.main()
    except SystemExit:
        pass
|
It’s almost a new school year around here! We school year-round, but most new stuff happens in the fall, especially with the classes I’m teaching now. So before we start things back up, we’re doing some review.
The Engineer is doing better with reading, but practice practice practice is key! Plus the Princess is a brand new Kindergartener this year and needs to start doing more school stuff.
Boy did this project take over my life! Here are the very basic, scattered instructions on how to make a DIY puppet theatre.
|
"""Implementation of the NumberReplacer operator.
"""
import parso
from ..ast import is_number
from .operator import Operator
# List of offsets that we apply to numbers in the AST. Each index into the list
# corresponds to a single mutation.
OFFSETS = [
    +1,
    -1,
]


class NumberReplacer(Operator):
    """An operator that modifies numeric constants."""

    def mutation_positions(self, node):
        if is_number(node):
            for _ in OFFSETS:
                yield (node.start_pos, node.end_pos)

    def mutate(self, node, index):
        """Modify the numeric value on `node`."""
        assert index < len(OFFSETS), 'received index with no associated offset'
        assert isinstance(node, parso.python.tree.Number)

        val = eval(node.value) + OFFSETS[index]  # pylint: disable=W0123
        return parso.python.tree.Number(' ' + str(val), node.start_pos)

    @classmethod
    def examples(cls):
        return (
            ('x = 1', 'x = 2'),
            ('x = 1', 'x = 0', 1),
            ('x = 4.2', 'x = 5.2'),
            ('x = 4.2', 'x = 3.2', 1),
            ('x = 1j', 'x = (1+1j)'),
            ('x = 1j', 'x = (-1+1j)', 1),
        )
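A hedged walkthrough of the operator on a one-line module (it assumes `NumberReplacer` above is importable and that its base `Operator` needs no constructor arguments; both are assumptions here, not guarantees about any particular cosmic-ray release):

import parso

code = 'x = 1'
tree = parso.parse(code)

# Find the number literal by walking the leaves of the parse tree.
leaf = tree.get_first_leaf()
while leaf is not None and leaf.type != 'number':
    leaf = leaf.get_next_leaf()

op = NumberReplacer()
print(list(op.mutation_positions(leaf)))   # one (start, end) pair per OFFSET
mutated = op.mutate(leaf, 0)               # apply OFFSETS[0], i.e. +1
print(mutated.value)                       # ' 2'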
|
This course will extend your awareness of the impact of ICT upon society, industry and commerce, and will consider its limitations. You will take a critical look at the use of information systems within organisations. You will complete two projects of your choice that involve solving a real-life problem.
A minimum of 5 A-C GCSE subject passes, one to be in ICT, including a minimum of 2 grade Bs, one to be in English Language or a related subject.
You will be required to successfully complete module examinations and two pieces of coursework.
You could progress to a Foundation Degree in IT and e-commerce or a BA Hons degree in ICT or Business related degree at university. Following this you could enter employment as a systems analyst, help desk clerk, I.T. technician or teacher.
You will need to purchase a course textbook to support your study of this subject, as well as a removable storage device.
|
from kobato.plugin import KobatoBasePlugin, kobato_plugin_register
import sys
class KobatoConfig(KobatoBasePlugin):
    def prepare(self, parser):
        subparsers = parser.add_subparsers(help='sub-command help')

        set_ = subparsers.add_parser('set', help='Set config values')
        set_.add_argument('name')
        set_.add_argument('value')
        set_.set_defaults(func=self.set)

        show = subparsers.add_parser('show', help='Show config values')
        show.add_argument('name')
        show.set_defaults(func=self.show)

        reset = subparsers.add_parser('reset', help='Remove setting from config')
        reset.add_argument('name')
        reset.set_defaults(func=self.reset)

    def run(self, args):
        # TODO FIXME
        raise NotImplementedError('TODO: SHOW HELP')

    def set(self, args):
        try:
            (group, name) = args['name'].split('.')
        except ValueError:
            print('Please provide path in format: <group>.<name>')
            sys.exit(1)

        val = args['value']

        if self._config.get(group) is None:
            self._config[group] = {}

        # recognize booleans passed on the command line
        val = {
            'true': True,
            'false': False,
            'True': True,
            'False': False
        }.get(val, val)

        self._config[group][name] = val
        self._config.dump()

    def show(self, args):
        path = args['name']

        def print_group(name):
            if self._config.get(name) is None:
                print('No such group in config:', name)
                return
            for line in self._config.get(name):
                print(' {}.{} => {}'.format(name, line, self._config.get(name).get(line)))

        def print_line(group, name):
            if self._config.get(group) is None:
                print('No such group in config:', name)
                return
            if self._config.get(group).get(name) is None:
                print('No such entry found in config')
                return
            print(' {}.{} => {}'.format(group, name, self._config.get(group).get(name)))

        def error(*args, **kwargs):
            print('Invalid format')
            sys.exit(1)

        # dispatch on the number of dot-separated path components
        {
            1: print_group,
            2: print_line
        }.get(len(path.split('.')), error)(*path.split('.'))

    def reset(self, args):
        try:
            (group, name) = args['name'].split('.')
        except ValueError:
            print('You can\'t reset config group. Reset individual entries.'
                  ' Please provide path in format: <group>.<name>')
            sys.exit(1)

        try:
            del self._config[group][name]
        except KeyError:
            pass

        self._config.dump()


kobato_plugin_register(
    'config',
    KobatoConfig,
    aliases=['cfg'],
    description='Set, reset and view kobato config'
)
|
Whoever says Tramadol is evil knows nothing about this remarkable drug. The first thing people assume about the medicine is that it is habit-forming, but you should know that it has no malicious intent to make you dependent on it. The drug is a dependable source of comfort to many people suffering from pain due to medical and non-medical issues. Here, Tramadol is the constant while the patient is the variable: the drug is taken by many different people, everyone reacts in a different way, and a given person may or may not develop certain side effects. The best way to find out whether your body will reject the drug is to try it under a doctor's direction at the prescribed doses. If you find it unsuitable, consult your doctor for an alternative. However, it is not the fault of the medicine if you have to face side effects; it is simply that your body is not compatible with the ingredients or the mechanism of Tramadol. It would therefore be very wrong to call the drug 'evil' based on a single personal experience. Also, you have to make sure that you have done the two most important things before using a new medicine: give all the major facts, such as allergies, medical history, and health issues, to the doctor, and research the prescription. If your doctor knows your body, he will know whether Tramadol is good for you. If you go through the prescription online or read the label, you will know if it is not suitable for you. However, if you face side effects even after these two things are done, it may be due to a new allergy or incompatibility that even you did not know about earlier. Although it is the doctor's duty to brief you on the common side effects and the tendency of addiction in some medications, to be doubly sure you can always check the details of your drug online if you have any doubt. There is an endless pile of information on any drug online; you can learn everything from start to finish. Protecting your body is entirely in your hands. Tramadol has held the record for treating countless people for decades, so it cannot be classified as a bad drug. You should simply confirm whether or not your body is compatible with it.
|
# -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from selenium.common.exceptions import NoSuchElementException
from selenium.common.exceptions import NoAlertPresentException
import unittest, time, re
class Kbrd(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.Firefox()
        self.driver.implicitly_wait(30)
        self.base_url = "http://localhost/"
        self.verificationErrors = []
        self.accept_next_alert = True

    def test_kbrd(self):
        driver = self.driver
        # the project name is stored in a local file called 'property'
        with open('property') as f:
            project_name = f.read()
        driver.get(self.base_url + "/" + project_name + "/")
        driver.find_element_by_css_selector("div.ace_content").click()
        driver.find_element_by_class_name("ace_text-input").send_keys("process w {}")

        # switch the editor to emacs keybindings and save with C-x C-s
        driver.find_element_by_link_text("Edit").click()
        driver.find_element_by_link_text("Preferences").click()
        Select(driver.find_element_by_id("setKeyboardHandler")).select_by_visible_text("emacs")
        driver.find_element_by_xpath("//div[10]").click()
        driver.find_element_by_class_name("ace_text-input").send_keys(Keys.CONTROL, "x", "s")
        time.sleep(1)
        driver.find_element_by_css_selector("#fileSaveToDiskModal > div.modal-dialog > div.modal-content > div.modal-header > button.close").click()
        time.sleep(3)

        # switch to vim keybindings and save with :w
        driver.find_element_by_link_text("Edit").click()
        driver.find_element_by_link_text("Preferences").click()
        Select(driver.find_element_by_id("setKeyboardHandler")).select_by_visible_text("vim")
        driver.find_element_by_xpath("//div[10]").click()
        driver.find_element_by_class_name("ace_text-input").send_keys(":", "w", Keys.ENTER)
        time.sleep(1)

    def is_element_present(self, how, what):
        try:
            self.driver.find_element(by=how, value=what)
        except NoSuchElementException:
            return False
        return True

    def is_alert_present(self):
        try:
            self.driver.switch_to_alert()
        except NoAlertPresentException:
            return False
        return True

    def close_alert_and_get_its_text(self):
        try:
            alert = self.driver.switch_to_alert()
            alert_text = alert.text
            if self.accept_next_alert:
                alert.accept()
            else:
                alert.dismiss()
            return alert_text
        finally:
            self.accept_next_alert = True

    def tearDown(self):
        self.driver.quit()
        self.assertEqual([], self.verificationErrors)


if __name__ == "__main__":
    unittest.main()
|
Heaney’s statement, about the act of writing, also resonates for me as a reader. We read to see ourselves – to illuminate our present and our past – to set the darkness echoing.
In Edmonton we rarely, if ever, are given the opportunity to see ourselves and our city reflected in the pages of a book. There have been strides in adult literature, but until the publication of Rutherford the Time-Travelling Moose, there have been no picture books celebrating Edmonton for children.
Rutherford the Time-Travelling Moose was initially conceived as a picture book about Edmonton’s history in the context of Rutherford House, the home of the first Premier of Alberta, Alexander Rutherford (1905-1910). The Friends of Rutherford House Society issued a call for story proposals and Thomas Wharton, an award-winning local author, won the bid along with illustrator Amanda Schutz, also a resident of Edmonton.
Wharton was commissioned to “animate” the bones of the story, taking it beyond its original confines to the places in Edmonton the premier and his family would have known.
As luck would have it, Rutherford is a time-travelling moose. With the adventurous girl astride his back, Rutherford spirits her back to the ice age and then travels forward through various eras, including a visit to a First Nations settlement along the banks of the North Saskatchewan River, fur traders at Fort Edmonton, an early 20th century legislature (where the Famous Five can be spotted in the foreground), the High Level Bridge with a streetcar running along the top (as its facsimile does to this day) and other recognizable Edmonton landmarks and historical touch points.
Unless you’re from a major city like New York or London, like me you probably get excited when you see familiar people and place names in books, simply because it doesn’t happen that often. This alone makes Rutherford the Time-Travelling Moose essential reading for every child in Edmonton, but as an illustration junkie it’s particularly gratifying to see my city presented so beautifully. Schutz’s bold and colourful palette serves Edmonton well, and her humorous characterizations keep the mood light and fun.
Indeed. Rutherford the Time-Travelling Moose leaves the reader, young and old, hungry for more Edmonton-based (or moose-based) stories.
It’s time to set that darkness echoing.
THOMAS WHARTON is the award-winning author of the young adult trilogy Perilous Realm and the Alberta-centric novel Icefields (a Canada Reads finalist). Wharton was born in Grande Prairie, Alberta and is currently an associate professor of writing and English at the University of Alberta in Edmonton.
AMANDA SCHUTZ is an Edmonton-based illustrator and designer. Schutz is the creative director of the design firm Curio Studio and her work has appeared in many children’s books and publications.
This Moose Belongs to Me by Oliver Jeffers is like the shreddie in a bag of Nuts ‘n Bolts: impossible to resist, and so spectacular in flavour it makes the pretzels and cheesy things pale in comparison. I know. Could this metaphor be more strained? I will confess that I’m experiencing difficulty coming up with adjectives for the singular brilliance of artists like Oliver Jeffers, Jon Klassen, Lisbeth Zwerger, and the other illustrators who populate this blog (and elsewhere). While these artists are few, they are without a doubt masters of their respective mediums, and at the core of what is arguably a high point in the history of illustration. It is a low point as well: there are many more bad books than good, and e-readers threaten to erase, or at least diminish, the sensual and visual pleasure of a truly great picture book. At this moment, however, the privilege still exists, and I would strongly recommend that you get your hands on This Moose Belongs to Me.
|
from urllib.request import urlopen
from json import loads
from rtmbot.core import Plugin
def get_elements(studio):
    valid_studio_values = ['studio', 'teknikerrom']
    studio = studio.strip().lower()
    if studio not in valid_studio_values:
        return False

    elements_url = urlopen('http://pappagorg.radiorevolt.no/v1/sendinger/currentelements/' + studio).read().decode()
    elements = loads(elements_url)

    if elements['current']:
        current_class = elements['current']['class'].lower()
        if current_class == 'music':
            return 'Låt: {0} - {1}'.format(elements['current']['title'], elements['current']['artist'])
        elif current_class == 'audio':
            return 'Lydsak: {0}'.format(elements['current']['title'])
        elif current_class == 'promotion':
            return 'Jingle: {0}'.format(elements['current']['title'])
        else:
            return 'Unknown ({0}): {1}'.format(current_class, elements['current']['title'])
    elif elements['previous'] or elements['next']:
        return 'Stikk'
    return False


def has_elements(studio):
    valid_studio_values = ['studio', 'teknikerrom', 'autoavvikler']
    studio = studio.strip().lower()
    if studio not in valid_studio_values:
        return False

    elements_url = urlopen('http://pappagorg.radiorevolt.no/v1/sendinger/currentelements/' + studio).read().decode()
    elements = loads(elements_url)
    return elements['current'] or elements['next'] or elements['previous']


def debug():
    # Consistency checks for the automation player ("autoavvikler"); the
    # warning strings below are the bot's Norwegian output messages.
    elements_url = urlopen('http://pappagorg.radiorevolt.no/v1/sendinger/currentelements/autoavvikler').read().decode()
    elements = loads(elements_url)
    warnings = list()

    if scheduled_replay():
        if not elements['current']:
            if elements['previous']:
                warnings.append('Reprisen i autoavvikler er for kort, og har sluttet!')
            else:
                warnings.append('Planlagt reprise i autoavvikler, men ingen elementer i autoavvikler!')
        elif elements['next']:
            warnings.append('Det ligger mer enn ett element i autoavvikler. Nå: {0}({1}), neste: {2}({3})'.format(
                elements['current']['title'], get_type(elements['current']['class']), elements['next']['title'],
                get_type(elements['next']['class'])))
        elif elements['previous']:
            warnings.append(
                'Det lå et element før gjeldende element i autoavvikler. Nå: {0}({1}), forrige: {2}({3})'.format(
                    elements['current']['title'], get_type(elements['current']['class']), elements['previous']['title'],
                    get_type(elements['previous']['class'])
                ))
    else:
        studio = has_elements('studio')
        tekrom = has_elements('teknikerrom')
        if studio:
            if elements['current'] or elements['previous']:
                warnings.append('Ligger elementer i både autoavvikler og i studio.')
        if tekrom:
            if elements['current'] or elements['previous']:
                warnings.append('Ligger elementer i både autoavvikler og i teknikerrom.')
        if not tekrom and not studio:
            if elements['current']:
                warnings.append('Ser ut som noen har slunteret unna og lagt inn reprise.')
                if elements['next']:
                    warnings.append('Det ligger mer enn ett element i autoavvikler. Nå: {0}({1}), neste: {2}({3})'.format(
                        elements['current']['title'], get_type(elements['current']['class']), elements['next']['title'],
                        get_type(elements['next']['class'])))
            if not elements['current']:
                if elements['previous']:
                    warnings.append(
                        'Det er ingen elementer som spiller noe sted! (det lå et i autoavvikler, men det stoppet)')
                else:
                    warnings.append('Det er ingen elementer som spiller noe sted!')
    return warnings


def get_type(class_type):
    if class_type == 'music':
        return 'låt'
    if class_type == 'audio':
        return 'lyd'
    if class_type == 'promotion':
        return 'jingle'
    return 'ukjent'


def scheduled_replay():
    current_shows_url = urlopen('http://pappagorg.radiorevolt.no/v1/sendinger/currentshows').read().decode()
    current_shows_data = loads(current_shows_url)
    return '(R)' in current_shows_data['current']['title']


def get_show():
    current_shows_url = urlopen('http://pappagorg.radiorevolt.no/v1/sendinger/currentshows').read().decode()
    current_shows_data = loads(current_shows_url)
    show_end = current_shows_data['current']['endtime'].split(' ')[-1]
    show_start = current_shows_data['current']['starttime'].split(' ')[-1]
    show_now = current_shows_data['current']['title']
    show_next = current_shows_data['next']['title']
    return 'Nå: {0} ({1} - {2}), Neste: {3}'.format(show_now, show_start, show_end, show_next)


class DabPlugin(Plugin):
    def process_message(self, data):
        if data['text'] == '.dab':
            for warning in debug():
                self.outputs.append([data['channel'], warning])
            if scheduled_replay():
                self.outputs.append([data['channel'], get_show()])
            else:
                self.outputs.append([data['channel'], get_show()])
                studio = get_elements('studio')
                tekrom = get_elements('teknikerrom')
                if studio:
                    self.outputs.append([data['channel'], studio + ' i studio 1.'])
                if tekrom:
                    self.outputs.append([data['channel'], tekrom + ' i teknikerrom.'])
|
People often ask where we send all of the donated shoes we receive. Well, the MORE Foundation takes these shoes to Ghana and Honduras and sells them in major metro areas to vendors who in turn create jobs and help improve the quality of life in these cities. The vast majority of Africans cannot afford new shoes. The proceeds from the sale of the shoes provide training and tools to the poorest rural farmers and support MORE’s Agroforestry efforts.
MORE’s Agroforestry efforts have the potential to create a huge number of jobs in the region. Reforesting 5% of the land area of Ghana (a country the size of Oregon) would create a million jobs, and the size of the economy would easily double from 40 billion to 80 billion a year. Three billion trees would have a value of half a trillion USD.
Each farm we supply with 1,000 trees increases in value to over $100,000 USD in 10 years. One acre with 1,000 trees will be worth $100,000 USD. This is a 1,000% increase in income for the subsistence farmer. The used athletic shoes you collect provide 100% of the funding for this program. Every shoe your store collects pulls down one ton of carbon.
You are the change that must happen. This year 360,000 trees will sequester over 150,000 tons of carbon within 10 years and another 200,000 tons across 20 years. We could easily reach one million tons annually with only one million pairs of used athletic shoes. Every 100 pairs of used athletic shoes creates another job and strengthens democracy.
So keep on donating those shoes people!
|
from __future__ import print_function, absolute_import, division
import six
import numpy as np
from astropy.io.registry import UnifiedReadWriteMethod
from .io.core import StokesSpectralCubeRead, StokesSpectralCubeWrite
from .spectral_cube import SpectralCube, BaseSpectralCube
from . import wcs_utils
from .masks import BooleanArrayMask, is_broadcastable_and_smaller
__all__ = ['StokesSpectralCube']

VALID_STOKES = ['I', 'Q', 'U', 'V', 'RR', 'LL', 'RL', 'LR']


class StokesSpectralCube(object):
    """
    A class to store a spectral cube with multiple Stokes parameters.

    The individual Stokes cubes can share a common mask in addition to having
    component-specific masks.
    """

    def __init__(self, stokes_data, mask=None, meta=None, fill_value=None):

        self._stokes_data = stokes_data
        self._meta = meta or {}
        self._fill_value = fill_value

        reference = tuple(stokes_data.keys())[0]

        for component in stokes_data:

            if not isinstance(stokes_data[component], BaseSpectralCube):
                raise TypeError("stokes_data should be a dictionary of "
                                "SpectralCube objects")

            if not wcs_utils.check_equality(stokes_data[component].wcs,
                                            stokes_data[reference].wcs):
                raise ValueError("All spectral cubes in stokes_data "
                                 "should have the same WCS")

            if component not in VALID_STOKES:
                raise ValueError("Invalid Stokes component: {0} - should be "
                                 "one of I, Q, U, V, RR, LL, RL, LR".format(component))

            if stokes_data[component].shape != stokes_data[reference].shape:
                raise ValueError("All spectral cubes should have the same shape")

        self._wcs = stokes_data[reference].wcs
        self._shape = stokes_data[reference].shape

        if isinstance(mask, BooleanArrayMask):
            if not is_broadcastable_and_smaller(mask.shape, self._shape):
                raise ValueError("Mask shape is not broadcastable to data shape:"
                                 " {0} vs {1}".format(mask.shape, self._shape))

        self._mask = mask

    @property
    def shape(self):
        return self._shape

    @property
    def mask(self):
        """
        The underlying mask
        """
        return self._mask

    @property
    def wcs(self):
        return self._wcs

    def __dir__(self):
        if six.PY2:
            return self.components + dir(type(self)) + list(self.__dict__)
        else:
            return self.components + super(StokesSpectralCube, self).__dir__()

    @property
    def components(self):
        return list(self._stokes_data.keys())

    def __getattr__(self, attribute):
        """
        Descriptor to return the Stokes cubes
        """
        if attribute in self._stokes_data:
            if self.mask is not None:
                return self._stokes_data[attribute].with_mask(self.mask)
            else:
                return self._stokes_data[attribute]
        else:
            raise AttributeError("StokesSpectralCube has no attribute {0}".format(attribute))

    def with_mask(self, mask, inherit_mask=True):
        """
        Return a new StokesSpectralCube instance that contains a composite mask
        of the current StokesSpectralCube and the new ``mask``.

        Parameters
        ----------
        mask : :class:`MaskBase` instance, or boolean numpy array
            The mask to apply. If a boolean array is supplied,
            it will be converted into a mask, assuming that
            `True` values indicate included elements.
        inherit_mask : bool (optional, default=True)
            If True, combines the provided mask with the
            mask currently attached to the cube

        Returns
        -------
        new_cube : :class:`StokesSpectralCube`
            A cube with the new mask applied.

        Notes
        -----
        This operation returns a view into the data, and not a copy.
        """
        if isinstance(mask, np.ndarray):
            if not is_broadcastable_and_smaller(mask.shape, self.shape):
                raise ValueError("Mask shape is not broadcastable to data shape: "
                                 "%s vs %s" % (mask.shape, self.shape))
            mask = BooleanArrayMask(mask, self.wcs)

        if self._mask is not None:
            return self._new_cube_with(mask=self.mask & mask if inherit_mask else mask)
        else:
            return self._new_cube_with(mask=mask)

    def _new_cube_with(self, stokes_data=None,
                       mask=None, meta=None, fill_value=None):

        data = self._stokes_data if stokes_data is None else stokes_data
        mask = self._mask if mask is None else mask
        if meta is None:
            meta = {}
            meta.update(self._meta)

        fill_value = self._fill_value if fill_value is None else fill_value

        cube = StokesSpectralCube(stokes_data=data, mask=mask,
                                  meta=meta, fill_value=fill_value)

        return cube

    def with_spectral_unit(self, unit, **kwargs):

        stokes_data = {k: self._stokes_data[k].with_spectral_unit(unit, **kwargs)
                       for k in self._stokes_data}

        return self._new_cube_with(stokes_data=stokes_data)

    read = UnifiedReadWriteMethod(StokesSpectralCubeRead)
    write = UnifiedReadWriteMethod(StokesSpectralCubeWrite)
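A small construction sketch (assuming both classes are exported at the spectral_cube package top level, as the relative imports above suggest; the WCS and data values are made up): two Stokes components sharing one WCS, with attribute access returning the component cubes.

import numpy as np
from astropy.wcs import WCS
from spectral_cube import SpectralCube, StokesSpectralCube

wcs = WCS(naxis=3)
wcs.wcs.ctype = ['RA---TAN', 'DEC--TAN', 'FREQ']  # spectral axis last in FITS order

data = np.random.random((5, 4, 3))
stokes = StokesSpectralCube({'I': SpectralCube(data, wcs=wcs),
                             'Q': SpectralCube(0.1 * data, wcs=wcs)})

print(stokes.components)               # ['I', 'Q']
print(stokes.I.shape)                  # component cubes come back via __getattr__
masked = stokes.with_mask(data > 0.5)  # the boolean array becomes a BooleanArrayMask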
|
Nearly nine in ten U.S. workers are optimistic about their financial future, according to a new report from Bank of America Merrill Lynch. Surveyed employees attributed their bright outlook to a variety of factors, including the improving economy and strong financial markets. However, the most frequently cited reasons for workers’ elevated financial optimism were more personal issues, such as living within their means, being in good health, and having a well-paying job.
Confident employees, though, are also aware that their financial situation can change, and many surveyed workers admitted that they still have money-related concerns. For example, nearly two-thirds (64 percent) of respondents said that they are worried about running out of money in retirement. Other frequently cited concerns included having to delay retirement, becoming seriously ill and not being able to work, the challenge of paying for their kids’ education, job security, having to support other family members, becoming a financial burden to their family, and being able to cover their mortgage or rent payments.
With so many issues on their minds, it should not be too surprising that employees will often spend time at work thinking about their personal financial matters. In fact, the surveyed employees, based on their responses, spend a median of two hours at work each week focused on their personal finances instead of their job responsibilities. Over roughly 50 working weeks, that equates to about 100 hours, or two-and-a-half workweeks¹, of lost productivity every year. More than half (53 percent) of the surveyed employees who said that they are stressed when it comes to their financial situation even admitted that the stress interferes with their ability to focus and be productive at work.
Younger employees were found to spend significantly more time at work each week worrying about their personal finances (double that of Gen X and four times that of Baby Boomers), and Millennials were twice as likely as Baby Boomers to say that financial stress interferes with their work. Beyond just feeling stressed, financial concerns can have a real impact on employees' physical well-being as well. Nearly six in ten workers, for instance, said that their financial worries have negatively affected their health. That means that financial stress in the workplace can hurt employers not only through lost productivity but also through potentially higher health insurance costs.
Further, 56 percent of surveyed workers who reported experiencing an increase in their healthcare costs said that they are contributing less to their financial goals as a result. For roughly half of these respondents that means they are paying down less debt, and nearly two-thirds said that they are saving less for retirement as a result of higher healthcare outlays. Unsurprisingly, access to health insurance and a 401(k) plan were the employer-provided benefits that surveyed workers said they value the most. Employee respondents, though, also reported that they want employers to help with managing a range of financial matters.
The top issue that workers said they need assistance with is saving for retirement, and 67 percent of employees admitted that their employer has already been influential in getting them to set money aside for retirement. However, many employees also stated that they would like their employer to bring financial professionals into the workplace to provide both general education and education tailored to their age and finances, as well as help creating a personalized financial strategy. Eighty-six percent of surveyed workers even said that they would participate in any kind of financial education program if provided by their employer.
1. Assuming a full-time workweek equals 40 hours.
|
# Copyright (C) 2011 Google Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
import unittest
import sys
import os

from webkitpy.common.system.outputcapture import OutputCapture
from webkitpy.layout_tests.port.gtk import GtkPort
from webkitpy.layout_tests.port import port_testcase
from webkitpy.common.system.executive_mock import MockExecutive
from webkitpy.thirdparty.mock import Mock
from webkitpy.common.system.filesystem_mock import MockFileSystem
from webkitpy.tool.mocktool import MockOptions


class GtkPortTest(port_testcase.PortTestCase):
    port_name = 'gtk'
    port_maker = GtkPort

    def test_show_results_html_file(self):
        port = self.make_port()
        port._executive = MockExecutive(should_log=True)
        expected_stderr = "MOCK run_command: ['Tools/Scripts/run-launcher', '--release', '--gtk', 'file://test.html'], cwd=/mock-checkout\n"
        OutputCapture().assert_outputs(self, port.show_results_html_file, ["test.html"], expected_stderr=expected_stderr)

    def assertLinesEqual(self, a, b):
        if hasattr(self, 'assertMultiLineEqual'):
            self.assertMultiLineEqual(a, b)
        else:
            self.assertEqual(a.splitlines(), b.splitlines())

    def test_get_crash_log(self):
        core_directory = os.environ.get('WEBKIT_CORE_DUMPS_DIRECTORY', '/path/to/coredumps')
        core_pattern = os.path.join(core_directory, "core-pid_%p-_-process_%e")
        mock_empty_crash_log = """\
Crash log for DumpRenderTree (pid 28529):
Coredump core-pid_28529-_-process_DumpRenderTree not found. To enable crash logs:
- run this command as super-user: echo "%(core_pattern)s" > /proc/sys/kernel/core_pattern
- enable core dumps: ulimit -c unlimited
- set the WEBKIT_CORE_DUMPS_DIRECTORY environment variable: export WEBKIT_CORE_DUMPS_DIRECTORY=%(core_directory)s
STDERR: <empty>""" % locals()

        def _mock_gdb_output(coredump_path):
            return (mock_empty_crash_log, [])

        port = self.make_port()
        # stub out the gdb call with the mock defined above
        port._get_gdb_output = _mock_gdb_output
        log = port._get_crash_log("DumpRenderTree", 28529, "", "", newer_than=None)
        self.assertLinesEqual(log, mock_empty_crash_log)

        log = port._get_crash_log("DumpRenderTree", 28529, "", "", newer_than=0.0)
        self.assertLinesEqual(log, mock_empty_crash_log)
|
At Shower Pan Install, we'll be there to fulfill all your goals when it comes to Shower Pan in Stoneville, MS. Our company has a staff of skilled professional contractors and the most impressive technologies in the industry to present exactly what you might need. We guarantee that you get the very best service, the best price tag, and the finest products. Contact us by dialing 888-356-1115 to learn more.
At Shower Pan Install, we know that you need to stay within budget and lower your costs anywhere you can. On the other hand, you want the absolute best and highest quality of work when it comes to Shower Pan in Stoneville, MS. Our initiatives to save a little money will never compromise on the superior quality of our services. Our aim is to ensure that you receive the right materials and a result that endures as time passes. We'll do that by providing you with the top deals in the field and steering clear of expensive mistakes. If you want to spend less money, Shower Pan Install is the service to contact. You're able to communicate with our company at 888-356-1115 to learn more.
Shower Pan services are available in Stoneville, MS.
On the subject of Shower Pan in Stoneville, MS, you'll need to be well informed to make the very best judgments. We will make sure that you know what to expect. We take the unexpected surprises out of the situation by supplying adequate and thorough advice. Start by contacting 888-356-1115 to talk about your task. With this phone call, you'll get your questions answered, and we're going to arrange a time to begin the work. We work with you through the entire process, and our team will appear on time and prepared.
If you're organizing a task for Shower Pan in Stoneville, MS, you'll find great reasons to prefer Shower Pan Install. We have got the greatest customer satisfaction scores, the very best quality supplies, and the most helpful and efficient money saving strategies. We are aware of your preferences and intentions, and we're here to help you using our expertise. Contact 888-356-1115 if you need Shower Pan in Stoneville, and we're going to work with you to systematically accomplish your task.
|
"""
Regular ini rules, except:
special settings:
_use: those will be marked as dependencies of the current section
_provider: the python provider that creates shell commands
keys can have no value
values can be lists (whitespace is separator)
values can contain references to other sections and particular keys in them
"""
from ConfigParser import (ConfigParser, NoOptionError, Error,
NoSectionError, MAX_INTERPOLATION_DEPTH)
import re
class SpacesConfigParser(ConfigParser):
_USES_OPT = "_uses"
_PROVIDER_OPT = "_provider"
def gettuple(self, section, option):
value = self.get(section, option)
return list(filter(None, (x.strip() for x in value.splitlines())))
def getuses(self, section):
out = []
try:
for uses in self.gettuple(section, self._USES_OPT):
if uses[0] == '[':
uses = uses[1:]
if uses[-1] == ']':
uses = uses[:-1]
if not self.has_section(uses):
raise NoSectionError(uses)
out.append(uses)
except NoOptionError:
pass
# now those used for interpolation
for o, v in self.items(section, raw=True):
m = self._KEYCRE.match(v)
if m.group(1):
if not self.has_section(m.group(1)):
raise NoSectionError(m.group(1))
out.append(m.group(1))
return set(out)
def getprovider(self, section):
return self.get(section, self._PROVIDER_OPT)
def _interpolate(self, section, option, rawval, vars):
# do the string interpolation
value = rawval
depth = MAX_INTERPOLATION_DEPTH
while depth: # Loop through this until it's done
depth -= 1
if value and "[" in value:
value = self._KEYCRE.sub(self._interpolation_replace, value)
try:
value = value % vars
except KeyError, e:
raise InterpolationMissingOptionError(
option, section, rawval, e.args[0])
else:
break
if value and "%(" in value:
raise InterpolationDepthError(option, section, rawval)
return value
_KEYCRE = re.compile(r"\[([^\]]*)\]:(\S+)|.")
def _interpolation_replace(self, match):
s = match.group(1)
if s is None:
return match.group()
elif self.has_section(s):
o = match.group(2)
if o is None:
return match.group()
# try exact match
if self.has_option(s, o):
return self.get(s, o)
# try partial; longest first
for option in reversed(sorted(self.options(s))):
if o.startswith(option):
v = self.get(s, option, raw=True)
return v + o[len(option):]
raise NoOptionError(s, o)
else:
raise NoSectionError(s)
if __name__ == "__main__":
from StringIO import StringIO
cfg = """
[test section 1]
testkeya: 1
testkeyb: a
b
_provider: BlahProvider
[test section 2]
#_uses: [test section 1]
testkeya: [test section 1]:testkeyafoo
testkeyb: [test section 1]:testkeyb
_provider: FooProvider
"""
config = SpacesConfigParser(allow_no_value=True)
config.readfp(StringIO(cfg), 'cfg')
#print config.sections()
#print config.items('test section 1')
#print config.items('test section 2')
#print config.gettuple('test section 2', 'testkeya')
#print config.gettuple('test section 2', 'testkeyb')
#print config.gettuple('test section 1', 'testkeyb')
#print config.gettuple('test section 1', 'testkeya')
print config.getuses('test section 1')
print config.getuses('test section 2')
#print config.getprovider('test section 1')
#print config.getprovider('test section 2')
|
Hard to reach, but not so crowded. Wonderful nature! The lake is naturally cold, but in summer it is warm enough for swimming.
The idyllic Gaisalpsee invites you to enjoy the mountain world up here before you start the descent into the valley.
Great hike: take the Nebelhorn lift to the Höfatsblick mountain station, then head downhill on trail 7 past the upper Geissalpsee to the lower Geissalpsee. A very nice mountain lake and ideal for a rest. The Geissalpe further down is also suitable for a stop, but it takes a little longer to reach from the lake. Via the Wallrafweg, past the ski-flying hill, we return to Oberstdorf. A very nice, long tour. Have fun!
The lower Gaisalpsee is a high mountain lake at 1508 meters altitude in the Allgäu Alps, northeast of and below the Rubihorn (1957 m). The 3.5-acre lake drains via the Gaisalpbach into the Iller.
The lake lies on the trail from Reichenbach to the Nebelhorn, which passes the Gaisalpe, the lower and upper Gaisalpsee, the Gaisfußsattel and Gund on the way to the Edmund Probst House.
A wonderful place to rest: in the sunshine it is warm and sheltered from the wind, with views of the Gaisalphorn and Rubihorn, and on the descent the Alpengasthof Gaisalpe awaits with a delicious Kaiserschmarrn...
From here you can climb up to the Rubihorn, Entschekopf and Nebelhorn. All very nice tours.
|
# -*- coding: utf-8 -*-
import datetime
import logging
from django.utils import timezone
from atracker.models import EventType
from .utils.queries import get_media_for_label
from .utils.xls_output_label import label_statistics_as_xls
TITLE_MAP = {
"playout": "Airplay statistics",
"download": "Download statistics",
"stream": "Stream statistics",
}
log = logging.getLogger(__name__)
def yearly_summary_for_label_as_xls(year, label, event_type_id, output=None):
    log.debug("generating {} statistics for {} - {}".format(event_type_id, label, year))

    year = int(year)
    event_type = EventType.objects.get(pk=event_type_id)
    title = "{}: open broadcast radio".format(TITLE_MAP.get(event_type.title))

    start = datetime.datetime.combine(datetime.date(year, 1, 1), datetime.time.min)
    end = datetime.datetime.combine(datetime.date(year, 12, 31), datetime.time.max)

    objects = get_media_for_label(
        label=label, start=start, end=end, event_type_id=event_type_id
    )
    years = [{"start": start, "end": end, "objects": objects}]

    label_statistics_as_xls(label=label, years=years, title=title, output=output)

    return output


def summary_for_label_as_xls(label, event_type_id, output=None):
    log.debug(
        "generating {} statistics for {} - since created".format(event_type_id, label)
    )

    event_type = EventType.objects.get(pk=event_type_id)
    title = "{}: open broadcast radio".format(TITLE_MAP.get(event_type.title))

    years = []
    year_start = label.created.year if label.created.year >= 2014 else 2014
    year_end = timezone.now().year

    for year in range(year_end, year_start - 1, -1):
        start = datetime.datetime.combine(datetime.date(year, 1, 1), datetime.time.min)
        end = datetime.datetime.combine(datetime.date(year, 12, 31), datetime.time.max)
        objects = get_media_for_label(
            label=label, start=start, end=end, event_type_id=event_type_id
        )
        years.append({"start": start, "end": end, "objects": objects})

    label_statistics_as_xls(label=label, years=years, title=title, output=output)

    return output
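A hypothetical invocation from a Django shell (the `Label` model path, the event type id, and the filename are placeholders, not names taken from this project):

from myapp.models import Label  # hypothetical model path

label = Label.objects.first()
with open('airplay-2019.xls', 'wb') as output:
    # event_type_id=1 is assumed to be the "playout" event type here
    yearly_summary_for_label_as_xls(2019, label, event_type_id=1, output=output)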
|
Extremely basic and easy to use Soldering Iron.
Comes with EU plug and US adapter.
For those who are looking for an affordable solution for just installs and tune-ups.
We found this to be the ideal solder gun for most basic ebike applications.
No need for an expensive soldering gun to do light ebike soldering.
|
#!/usr/bin/python
"""
(C) Copyright 2018 ALBA-CELLS
Authors: Marc Rosanes, Carlos Falcon, Zbigniew Reszela, Carlos Pascual
The program is distributed under the terms of the
GNU General Public License (or the Lesser GPL).
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
"""
import argparse
from argparse import RawTextHelpFormatter
from txm2nexuslib.images.multiplenormalization import normalize_images
def main():

    def str2bool(v):
        return v.lower() in ("yes", "true", "t", "1")

    description = ('Normalize images located in different hdf5 files\n'
                   'Each file containing one of the images to be normalized')

    parser = argparse.ArgumentParser(description=description,
                                     formatter_class=RawTextHelpFormatter)
    parser.register('type', 'bool', str2bool)

    parser.add_argument('file_index_fn', metavar='file_index_fn',
                        type=str, help='DB index json filename of hdf5 data '
                                       'files to be normalized')

    parser.add_argument('-d', '--date', type=int,
                        default=None,
                        help='Date of files to be normalized\n'
                             'If None, no filter is applied\n'
                             '(default: None)')

    parser.add_argument('-s', '--sample', type=str,
                        default=None,
                        help='Sample name of files to be normalized\n'
                             'If None, all sample names are normalized\n'
                             '(default: None)')

    parser.add_argument('-e', '--energy', type=float,
                        default=None,
                        help='Energy of files to be normalized\n'
                             'If None, no filter is applied\n'
                             '(default: None)')

    parser.add_argument('-t', '--table_h5', type=str,
                        default="hdf5_proc",
                        help='DB table of hdf5 to be normalized\n'
                             'If None, default tinyDB table is used\n'
                             '(default: hdf5_proc)')

    parser.add_argument('-a', '--average_ff', type='bool',
                        default=True,
                        help='Compute average FF and normalize using it\n'
                             '(default: True)')

    parser.add_argument('-c', '--cores', type=int,
                        default=-1,
                        help='Number of cores used for the format conversion\n'
                             '(default is max of available CPUs: -1)')

    args = parser.parse_args()

    normalize_images(args.file_index_fn, table_name=args.table_h5,
                     date=args.date, sample=args.sample, energy=args.energy,
                     average_ff=args.average_ff, cores=args.cores)


if __name__ == "__main__":
    main()
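The same normalization can also be driven from Python directly; this mirrors the call `main()` issues with the default CLI values (the index filename is illustrative):

from txm2nexuslib.images.multiplenormalization import normalize_images

# same arguments main() passes after parsing the default CLI options;
# 'workflow_index.json' is a made-up filename
normalize_images('workflow_index.json', table_name='hdf5_proc',
                 date=None, sample=None, energy=None,
                 average_ff=True, cores=-1)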
|
The all-new 2016 Civic is set to make its Philippine debut sometime in April or May this year. We know because Honda Cars Philippines (HCPI) already began taking reservations last month (see link). But before that happens, we found this quick 60-second TVC from Honda Indonesia that gives us a good idea what it’ll look like when it hits local showrooms in a few weeks’ time.
So what’s to look forward to with this all-new Civic? For starters, it now flaunts a sportier exterior. Highlighted by a meaner-looking front fascia, a coupe-like side profile, and a new rear end design, the overall look complements the Civic’s new weapon under the hood.
Yup, for the first time the Civic now comes with a turbocharged 1.5L VTEC engine that unleashes 172 hp. This powerplant will be available in the top-of-the-line variant, the Civic RS Turbo.
As of the moment, HCPI has yet to reveal the car’s exact launch date. But it’s said that the 1.8E variant will be sold at P1.1-million while the 1.5 RS Turbo will be available at P1.3-million.
For more info, specs, and full price list of the all-new 2016 Civic, visit the AutoDeal Car Guide.
|
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file '/home/julio/Desktop/SIMNAV/simnav/gui/base/ui/resultadosTorre.ui'
#
# Created by: PyQt5 UI code generator 5.7.1
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Form(object):
    def setupUi(self, Form):
        Form.setObjectName("Form")
        Form.resize(494, 339)
        self.gridLayout_2 = QtWidgets.QGridLayout(Form)
        self.gridLayout_2.setObjectName("gridLayout_2")
        self.gridLayout = QtWidgets.QGridLayout()
        self.gridLayout.setObjectName("gridLayout")
        self.label = QtWidgets.QLabel(Form)
        font = QtGui.QFont()
        font.setPointSize(12)
        font.setItalic(True)
        self.label.setFont(font)
        self.label.setObjectName("label")
        self.gridLayout.addWidget(self.label, 0, 0, 1, 1)
        self.resultadosSimulacion = QtWidgets.QTextBrowser(Form)
        self.resultadosSimulacion.setObjectName("resultadosSimulacion")
        self.gridLayout.addWidget(self.resultadosSimulacion, 1, 0, 1, 1)
        self.gridLayout_2.addLayout(self.gridLayout, 0, 0, 1, 1)

        self.retranslateUi(Form)
        QtCore.QMetaObject.connectSlotsByName(Form)

    def retranslateUi(self, Form):
        _translate = QtCore.QCoreApplication.translate
        Form.setWindowTitle(_translate("Form", "Form"))
        self.label.setText(_translate("Form", "Resultados internos columna de destilación"))
|
Complete your training with Passguide's latest audio tools for the EMC E20-895 Backup Recovery - Avamar Expert for Implementation Engineers exam. The updated audio guide and test dumps support students at the right time, and a dedicated team supervises your preparation for the audio training. The certification is essential but difficult; the online study notes and computer-based training make the material manageable and let you showcase the skills the exam demands in a competitive job market. The online preparation materials and testing engine are built around your requirements, so the audio exam and exam engine, backed by Passguide study products, will raise your chances of success.
Passguide's tools are highly supportive: the E20-895 video training gives you guidance and experience, while the mp3 guide and computer-based training let you check your progress on the Backup Recovery - Avamar Expert for Implementation Engineers material. The audio guide and CBT are effective for students preparing for the exam, and we welcome feedback on improving these tools. Practice exams and preparation materials support your study, the audio lectures prove highly effective, and together they teach the vital concepts you will need in the practical field. With Passguide's guidance, the interactive exam engine and study notes can raise your understanding to the level the exam requires.
If you have little time, a lot of content to learn, and are worried about your success, there is no need to worry. The updated audio exam and interactive testing engine cover the vital contents of the E20-895 audio training, and the lab questions are designed to improve learners' chances of success. The video lectures lead you straight toward success, and the lab questions and video lectures designed by Passguide will lift your performance. This is an up-to-date platform for all learners: take the interactive exam engine and audio study guide for guidance and pass the exam, and the certification will help convince a potential employer to hire you.
Experience the Passguide E20-895 exam testing engine for yourself.
Simply submit your e-mail address below to get started with our interactive software demo of your EMC E20-895 exam.
|
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
# 02110-1301, USA.
import smtplib
from textwrap import wrap
from kitchen.text.converters import to_unicode, to_bytes
from kitchen.iterutils import iterate
from . import log
from .util import get_rpm_header
from .config import config
#
# All of the email messages that bodhi is going to be sending around.
#
MESSAGES = {
'new': {
'body': u"""\
%(email)s has submitted a new update for %(release)s\n\n%(updatestr)s
""",
'fields': lambda agent, x: {
'email': agent,
'release': x.release.long_name,
'updatestr': unicode(x)
}
},
'deleted': {
'body': u"""\
%(email)s has deleted the %(package)s update for %(release)s\n\n%(updatestr)s
""",
'fields': lambda agent, x: {
'package': x.title,
'email': agent,
'release': '%s %s' % (x.release.long_name, x.status),
'updatestr': unicode(x)
}
},
'edited': {
'body': u"""\
%(email)s has edited the %(package)s update for %(release)s\n\n%(updatestr)s
""",
'fields': lambda agent, x: {
'package': x.title,
'email': agent,
'release': '%s %s' % (x.release.long_name, x.status),
'updatestr': unicode(x)
}
},
'pushed': {
'body': u"""\
%(package)s has been successfully pushed for %(release)s.\n\n%(updatestr)s
""",
'fields': lambda agent, x: {
'package': x.title,
'release': '%s %s' % (x.release.long_name, x.status),
'updatestr': unicode(x)
}
},
'testing': {
'body': u"""\
%(submitter)s has requested the pushing of the following update to testing:\n
%(updatestr)s
""",
'fields': lambda agent, x: {
'submitter': agent,
'updatestr': unicode(x)
}
},
'unpush': {
'body': u"""\
%(submitter)s has requested the unpushing of the following update:\n
%(updatestr)s
""",
'fields': lambda agent, x: {
'submitter': agent,
'updatestr': unicode(x)
}
},
'obsolete': {
'body': u"""\
%(submitter)s has obsoleted the following update:\n\n%(updatestr)s
""",
'fields': lambda agent, x: {
'submitter': agent,
'updatestr': unicode(x)
}
},
'unpushed': {
'body': u"""\
The following update has been unpushed\n\n%(updatestr)s
""",
'fields': lambda agent, x: {
'updatestr': unicode(x)
}
},
'revoke': {
'body': u"""\
%(submitter)s has revoked the request of the following update:\n\n%(updatestr)s
""",
'fields': lambda agent, x: {
'submitter': agent,
'updatestr': unicode(x)
}
},
'stable': {
'body': u"""\
%(submitter)s has requested the pushing of the following update stable:\n
%(updatestr)s
""",
'fields': lambda agent, x: {
'submitter': agent,
'updatestr': unicode(x)
}
},
'moved': {
'body': u"""\
The following update has been moved from Testing to Stable:\n\n%(updatestr)s
""",
'fields': lambda agent, x: {
'updatestr': unicode(x)
}
},
'stablekarma': {
'body': u"""\
The following update has reached a karma of %(karma)d and is being
automatically marked as stable.\n
%(updatestr)s
""",
'fields': lambda agent, x: {
'karma': x.karma,
'updatestr': unicode(x)
}
},
'unstable': {
'body': u"""\
The following update has reached a karma of %(karma)d and is being
automatically marked as unstable. This update will be unpushed from the
repository.\n
%(updatestr)s
""",
'fields': lambda agent, x: {
'karma': x.karma,
'updatestr': unicode(x)
}
},
'comment': {
'body': u"""\
The following comment has been added to the %(package)s update:
%(comment)s
To reply to this comment, please visit the URL at the bottom of this mail
%(updatestr)s
""",
'fields': lambda agent, x: {
'package': x.title,
'comment': x.comments[-1],
'updatestr': unicode(x)
}
},
'old_testing': {
'body': u"""\
The update for %(package)s has been in 'testing' status for over 2 weeks.
This update can be marked as stable after it achieves a karma of
%(stablekarma)d or by clicking 'Push to Stable'.
This is just a courtesy nagmail. Updates may reside in the testing repository
for more than 2 weeks if you deem it necessary.
You can submit this update to be pushed to the stable repository by going to
the following URL:
https://admin.fedoraproject.org/updates/request/stable/%(package)s
or by running the following command with the bodhi-client:
bodhi -R stable %(package)s
%(updatestr)s
""",
'fields': lambda agent, x: {
'package': x.title,
'stablekarma': x.stable_karma,
'updatestr': unicode(x)
}
},
'security': {
'body': u"""\
%(submitter)s has submitted the following update.
%(updatestr)s
To approve this update and request that it be pushed to stable, you can use the
link below:
https://admin.fedoraproject.org/updates/approve/%(package)s
""",
'fields': lambda agent, x: {
'package': x.title,
'submitter': agent,
'updatestr': unicode(x)
}
},
}
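# Sketch of how a MESSAGES entry is rendered (illustrative only; 'update'
# stands for a hypothetical bodhi Update object carrying the attributes the
# 'fields' lambdas above dereference):
#   entry = MESSAGES['new']
#   body = entry['body'] % entry['fields']('someuser', update)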
fedora_errata_template = u"""\
--------------------------------------------------------------------------------
Fedora%(testing)s Update Notification
%(updateid)s
%(date)s
--------------------------------------------------------------------------------
Name : %(name)s
Product : %(product)s
Version : %(version)s
Release : %(release)s
URL : %(url)s
Summary : %(summary)s
Description :
%(description)s
--------------------------------------------------------------------------------
%(notes)s%(changelog)s%(references)s
This update can be installed with the "yum" update program. Use
su -c 'yum%(yum_repository)s update %(name)s' at the command line.
For more information, refer to "Managing Software with yum",
available at https://docs.fedoraproject.org/yum/.
All packages are signed with the Fedora Project GPG key. More details on the
GPG keys used by the Fedora Project can be found at
https://fedoraproject.org/keys
--------------------------------------------------------------------------------
"""
fedora_epel_errata_template = u"""\
--------------------------------------------------------------------------------
Fedora EPEL%(testing)s Update Notification
%(updateid)s
%(date)s
--------------------------------------------------------------------------------
Name : %(name)s
Product : %(product)s
Version : %(version)s
Release : %(release)s
URL : %(url)s
Summary : %(summary)s
Description :
%(description)s
--------------------------------------------------------------------------------
%(notes)s%(changelog)s%(references)s
This update can be installed with the "yum" update programs. Use
su -c 'yum%(yum_repository)s update %(name)s' at the command line.
For more information, refer to "Managing Software with yum",
available at https://docs.fedoraproject.org/yum/.
All packages are signed with the Fedora EPEL GPG key. More details on the
GPG keys used by the Fedora Project can be found at
https://fedoraproject.org/keys
--------------------------------------------------------------------------------
"""
maillist_template = u"""\
================================================================================
%(name)s-%(version)s-%(release)s (%(updateid)s)
%(summary)s
--------------------------------------------------------------------------------
%(notes)s%(changelog)s%(references)s
"""
def get_template(update, use_template='fedora_errata_template'):
"""
Build the update notice for a given update.
@param use_template: the template to generate this notice with
"""
from bodhi.models import UpdateStatus, UpdateType
use_template = globals()[use_template]
line = unicode('-' * 80) + '\n'
templates = []
for build in update.builds:
h = get_rpm_header(build.nvr)
info = {}
info['date'] = str(update.date_pushed)
info['name'] = h['name']
info['summary'] = h['summary']
info['version'] = h['version']
info['release'] = h['release']
info['url'] = h['url']
if update.status is UpdateStatus.testing:
info['testing'] = ' Test'
info['yum_repository'] = ' --enablerepo=updates-testing'
else:
info['testing'] = ''
info['yum_repository'] = ''
info['subject'] = u"%s%s%s Update: %s" % (
update.type is UpdateType.security and '[SECURITY] ' or '',
update.release.long_name, info['testing'], build.nvr)
info['updateid'] = update.alias
info['description'] = h['description']
info['product'] = update.release.long_name
info['notes'] = ""
if update.notes and len(update.notes):
info['notes'] = u"Update Information:\n\n%s\n" % \
'\n'.join(wrap(update.notes, width=80))
info['notes'] += line
# Add this updates referenced Bugzillas and CVEs
i = 1
info['references'] = ""
if len(update.bugs) or len(update.cves):
info['references'] = u"References:\n\n"
parent = True in [bug.parent for bug in update.bugs]
for bug in update.bugs:
# Don't show any tracker bugs for security updates
if update.type is UpdateType.security:
# If there is a parent bug, don't show trackers
if parent and not bug.parent:
log.debug("Skipping tracker bug %s" % bug)
continue
title = (bug.title != 'Unable to fetch title' and
bug.title != 'Invalid bug number') and \
' - %s' % bug.title or ''
info['references'] += u" [ %d ] Bug #%d%s\n %s\n" % \
(i, bug.bug_id, title, bug.url)
i += 1
for cve in update.cves:
info['references'] += u" [ %d ] %s\n %s\n" % \
(i, cve.cve_id, cve.url)
i += 1
info['references'] += line
# Find the most recent update for this package, other than this one
lastpkg = build.get_latest()
# Grab the RPM header of the previous update, and generate a ChangeLog
info['changelog'] = u""
if lastpkg:
oldh = get_rpm_header(lastpkg)
oldtime = oldh['changelogtime']
text = oldh['changelogtext']
del oldh
if not text:
oldtime = 0
elif len(text) != 1:
oldtime = oldtime[0]
info['changelog'] = u"ChangeLog:\n\n%s%s" % \
(to_unicode(build.get_changelog(oldtime)), line)
try:
templates.append((info['subject'], use_template % info))
except UnicodeDecodeError:
# We can't trust the strings we get from RPM
log.debug("UnicodeDecodeError! Will try again after decoding")
for (key, value) in info.items():
if value:
info[key] = to_unicode(value)
templates.append((info['subject'], use_template % info))
return templates
def _send_mail(from_addr, to_addr, body):
"""A lower level function to send emails with smtplib"""
smtp_server = config.get('smtp_server')
if not smtp_server:
log.info('Not sending email: No smtp_server defined')
return
smtp = None
try:
log.debug('Connecting to %s', smtp_server)
smtp = smtplib.SMTP(smtp_server)
smtp.sendmail(from_addr, [to_addr], body)
except:
log.exception('Unable to send mail')
finally:
if smtp:
smtp.quit()
def send_mail(from_addr, to_addr, subject, body_text, headers=None):
if not from_addr:
from_addr = config.get('bodhi_email')
if not from_addr:
log.warn('Unable to send mail: bodhi_email not defined in the config')
return
if to_addr in config.get('exclude_mail'):
return
from_addr = to_bytes(from_addr)
to_addr = to_bytes(to_addr)
subject = to_bytes(subject)
body_text = to_bytes(body_text)
msg = ['From: %s' % from_addr, 'To: %s' % to_addr]
if headers:
for key, value in headers.items():
msg.append('%s: %s' % (key, to_bytes(value)))
msg += ['Subject: %s' % subject, '', body_text]
body = '\r\n'.join(msg)
log.info('Sending mail to %s: %s', to_addr, subject)
_send_mail(from_addr, to_addr, body)
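# Minimal usage sketch (addresses, subject and header value are hypothetical):
#   send_mail('updates@example.com', 'packager@example.com',
#             'Test subject', u'Body text',
#             headers={'X-Bodhi-Update-Type': 'bugfix'})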
def send(to, msg_type, update, sender=None, agent=None):
""" Send an update notification email to a given recipient """
assert agent, 'No agent given'
critpath = getattr(update, 'critpath', False) and '[CRITPATH] ' or ''
headers = {}
if msg_type != 'buildroot_override':
headers = {
"X-Bodhi-Update-Type": update.type.description,
"X-Bodhi-Update-Release": update.release.name,
"X-Bodhi-Update-Status": update.status.description,
"X-Bodhi-Update-Builds": ",".join([b.nvr for b in update.builds]),
"X-Bodhi-Update-Title": update.title,
"X-Bodhi-Update-Pushed": update.pushed,
"X-Bodhi-Update-Submitter": update.user.name,
}
if update.request:
headers["X-Bodhi-Update-Request"] = update.request.description
initial_message_id = "<bodhi-update-%s-%s-%s@%s>" % (
update.id, update.user.name, update.release.name,
config.get('message_id_email_domain'))
if msg_type == 'new':
headers["Message-ID"] = initial_message_id
else:
headers["References"] = initial_message_id
headers["In-Reply-To"] = initial_message_id
for person in iterate(to):
send_mail(sender, person, '[Fedora Update] %s[%s] %s' % (critpath,
msg_type, update.title), MESSAGES[msg_type]['body'] %
MESSAGES[msg_type]['fields'](agent, update), headers)
def send_releng(subject, body):
""" Send the Release Engineering team a message """
send_mail(config.get('bodhi_email'), config.get('release_team_address'),
subject, body)
def send_admin(msg_type, update, sender=None):
""" Send an update notification to the admins/release team. """
send(config.get('release_team_address'), msg_type, update, sender)
|
Erie Mayor Joe Schember expects the board to meet on Nov. 19.
Now that five new members of the Erie Metropolitan Transit Authority's nine-member board of directors have been appointed, Erie Mayor Joe Schember said he expects the panel to get back to business soon.
He also hopes Erie County Council will soon help complete the board by signing off on its four remaining appointments.
Schember said EMTA's new board should be on schedule to meet in late November, now that it has a quorum. EMTA's board has not met since Aug. 27.
EMTA's next regular meeting is scheduled for Nov. 19. The board's scheduled Sept. 24 and Oct. 22 meetings were canceled because of a lack of a quorum, said Jeremy Peterson, EMTA's acting executive director.
At the EMTA board's Aug. 27 meeting, executive director Lynn Schantz resigned and three board members — then-Chairman Gary Grack, Fred Rush and Chris Loader — walked out of the meeting.
The other board members believed the three had resigned and voted to accept their resignations, according to EMTA's solicitor, Jill Nagy.
Schember and Erie County Executive Kathy Dahlkemper then asked for the resignations of all the board members in an attempt to address disagreements and infighting which they say had plagued the board for years. On Sept. 25, nine new appointees were announced.
The mayor appoints five members and the county executive four, with approval from Erie City Council and Erie County Council, respectively.
However, Grack and Rush later told City Council members and others that they did not resign and wanted to continue to serve in their posts. City Council approved Schember's appointments anyway.
Voting in favor of Schember's picks were City Council President Sonya Arrington and members Liz Allen, Kathy Schaaf and Mel Witherspoon. Voting against the appointments were Bob E. Merski, Casimir Kwitowski and Jim Winarski, council's liaison to EMTA.
County Council has yet to act on Dahlkemper's appointments, with some members questioning the county executive's authority to make the appointments.
Erie County Councilwoman Carol Loll has asked that the appointments be placed on council's Oct. 30 agenda, even though she continues to seek more information regarding the EMTA, she said.
Councilman Fiore Leone had originally said that he believed it was council's job to make the appointments, not Dahlkemper's. However, on Friday Leone said he will support the appointments if the council members whose districts are outside the city of Erie do.
The county's administrative code requires that appointees to the EMTA board come from those four districts. Dahlkemper's appointees represent all but one of them, District 7, which Loll represents. Loll has said that she is not satisfied with the efforts Dahlkemper made to seek applicants from her district.
Both Schember and Peterson said they hope the situation is resolved soon, because all board members need some level of training on EMTA's operations before they begin conducting business.
Schember's appointees are Gyan Ghising of Erie, owner of My Way Bar & Grill, 2218 East Lake Road; Rick Griffith, president and owner of Rick Griffith Properties, 2955 W. 17th St.; Jessica Molczan, an EMTA rider and advocacy specialist at Voices for Independence, 1432 Wilkins Road; George Willis of Erie, a retired senior vice president at Urban Engineers; and Ben Wilson of Millcreek Township, a workforce development services manager at the Greater Erie Community Action Committee, 18 W. Ninth St.
City appointees do not have to live in the city because EMTA serves the entire county, according to Schember's office.
Dahlkemper's appointees are Millcreek Township's Tom Bly, owner of Accudyn Products, 2400 Yoder Drive; Ashley Lawson of Greene Township, an honors program coordinator at Gannon University; Dave Robinson of Union Township, chief executive of the Union City Family Support Center; and Lyn Twillie-Darby of Millcreek, a senior attorney at Erie Insurance.
Staff writer Matt Rink contributed to this report.
Kevin Flowers can be reached at 870-1693 or by email. Follow him on Twitter at twitter.com/ETNflowers.
|
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import os
from ironic_lib import disk_utils
from ironic_lib import utils as ironic_utils
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import fileutils
from six.moves.urllib import parse
from ironic.common import dhcp_factory
from ironic.common import exception
from ironic.common.i18n import _
from ironic.common import keystone
from ironic.common import states
from ironic.common import utils
from ironic.conductor import task_manager
from ironic.conductor import utils as manager_utils
from ironic.drivers import base
from ironic.drivers.modules import agent_base_vendor
from ironic.drivers.modules import deploy_utils
from ironic.drivers.modules import image_cache
LOG = logging.getLogger(__name__)
# NOTE(rameshg87): This file now registers some of opts in pxe group.
# This is acceptable for now as a future refactoring into
# separate boot and deploy interfaces is planned, and moving config
# options twice is not recommended. Hence we would move the parameters
# to the appropriate place in the final refactoring.
pxe_opts = [
cfg.StrOpt('pxe_append_params',
default='nofb nomodeset vga=normal',
help=_('Additional append parameters for baremetal PXE boot.')),
cfg.StrOpt('default_ephemeral_format',
default='ext4',
help=_('Default file system format for ephemeral partition, '
'if one is created.')),
cfg.StrOpt('images_path',
default='/var/lib/ironic/images/',
help=_('On the ironic-conductor node, directory where images '
'are stored on disk.')),
cfg.StrOpt('instance_master_path',
default='/var/lib/ironic/master_images',
help=_('On the ironic-conductor node, directory where master '
'instance images are stored on disk. '
'Setting to <None> disables image caching.')),
cfg.IntOpt('image_cache_size',
default=20480,
help=_('Maximum size (in MiB) of cache for master images, '
'including those in use.')),
# 10080 here is 1 week - 60*24*7. It is entirely arbitrary in the absence
# of a facility to disable the ttl entirely.
cfg.IntOpt('image_cache_ttl',
default=10080,
help=_('Maximum TTL (in minutes) for old master images in '
'cache.')),
cfg.StrOpt('disk_devices',
default='cciss/c0d0,sda,hda,vda',
help=_('The disk devices to scan while doing the deploy.')),
]
iscsi_opts = [
cfg.PortOpt('portal_port',
default=3260,
help=_('The port number on which the iSCSI portal listens '
'for incoming connections.')),
]
CONF = cfg.CONF
CONF.register_opts(pxe_opts, group='pxe')
CONF.register_opts(iscsi_opts, group='iscsi')
DISK_LAYOUT_PARAMS = ('root_gb', 'swap_mb', 'ephemeral_gb')
@image_cache.cleanup(priority=50)
class InstanceImageCache(image_cache.ImageCache):
def __init__(self):
super(self.__class__, self).__init__(
CONF.pxe.instance_master_path,
# MiB -> B
cache_size=CONF.pxe.image_cache_size * 1024 * 1024,
# min -> sec
cache_ttl=CONF.pxe.image_cache_ttl * 60)
def _get_image_dir_path(node_uuid):
"""Generate the dir for an instances disk."""
return os.path.join(CONF.pxe.images_path, node_uuid)
def _get_image_file_path(node_uuid):
"""Generate the full path for an instances disk."""
return os.path.join(_get_image_dir_path(node_uuid), 'disk')
def _save_disk_layout(node, i_info):
"""Saves the disk layout.
The disk layout used for deployment of the node, is saved.
:param node: the node of interest
:param i_info: instance information (a dictionary) for the node, containing
disk layout information
"""
driver_internal_info = node.driver_internal_info
driver_internal_info['instance'] = {}
for param in DISK_LAYOUT_PARAMS:
driver_internal_info['instance'][param] = i_info[param]
node.driver_internal_info = driver_internal_info
node.save()
def check_image_size(task):
"""Check if the requested image is larger than the root partition size.
:param task: a TaskManager instance containing the node to act on.
:raises: InstanceDeployFailure if size of the image is greater than root
partition.
"""
i_info = deploy_utils.parse_instance_info(task.node)
image_path = _get_image_file_path(task.node.uuid)
image_mb = disk_utils.get_image_mb(image_path)
root_mb = 1024 * int(i_info['root_gb'])
if image_mb > root_mb:
msg = (_('Root partition is too small for requested image. Image '
'virtual size: %(image_mb)d MB, Root size: %(root_mb)d MB')
% {'image_mb': image_mb, 'root_mb': root_mb})
raise exception.InstanceDeployFailure(msg)
def cache_instance_image(ctx, node):
"""Fetch the instance's image from Glance
This method pulls the AMI and writes them to the appropriate place
on local disk.
:param ctx: context
:param node: an ironic node object
:returns: a tuple containing the uuid of the image and the path in
the filesystem where image is cached.
"""
i_info = deploy_utils.parse_instance_info(node)
fileutils.ensure_tree(_get_image_dir_path(node.uuid))
image_path = _get_image_file_path(node.uuid)
uuid = i_info['image_source']
LOG.debug("Fetching image %(ami)s for node %(uuid)s",
{'ami': uuid, 'uuid': node.uuid})
deploy_utils.fetch_images(ctx, InstanceImageCache(), [(uuid, image_path)],
CONF.force_raw_images)
return (uuid, image_path)
def destroy_images(node_uuid):
"""Delete instance's image file.
:param node_uuid: the uuid of the ironic node.
"""
ironic_utils.unlink_without_raise(_get_image_file_path(node_uuid))
utils.rmtree_without_raise(_get_image_dir_path(node_uuid))
InstanceImageCache().clean_up()
def get_deploy_info(node, address, iqn, port=None, lun='1'):
"""Returns the information required for doing iSCSI deploy in a dictionary.
:param node: ironic node object
:param address: iSCSI address
:param iqn: iSCSI iqn for the target disk
:param port: iSCSI port, defaults to one specified in the configuration
:param lun: iSCSI lun, defaults to '1'
:raises: MissingParameterValue, if some required parameters were not
passed.
:raises: InvalidParameterValue, if any of the parameters have invalid
value.
"""
i_info = deploy_utils.parse_instance_info(node)
params = {
'address': address,
'port': port or CONF.iscsi.portal_port,
'iqn': iqn,
'lun': lun,
'image_path': _get_image_file_path(node.uuid),
'node_uuid': node.uuid}
is_whole_disk_image = node.driver_internal_info['is_whole_disk_image']
if not is_whole_disk_image:
params.update({'root_mb': i_info['root_mb'],
'swap_mb': i_info['swap_mb'],
'ephemeral_mb': i_info['ephemeral_mb'],
'preserve_ephemeral': i_info['preserve_ephemeral'],
'boot_option': deploy_utils.get_boot_option(node),
'boot_mode': _get_boot_mode(node)})
# Append disk label if specified
disk_label = deploy_utils.get_disk_label(node)
if disk_label is not None:
params['disk_label'] = disk_label
missing = [key for key in params if params[key] is None]
if missing:
raise exception.MissingParameterValue(
_("Parameters %s were not passed to ironic"
" for deploy.") % missing)
if is_whole_disk_image:
return params
# configdrive and ephemeral_format are nullable
params['ephemeral_format'] = i_info.get('ephemeral_format')
params['configdrive'] = i_info.get('configdrive')
return params
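# Illustrative call (the node object, portal address and IQN are hypothetical):
#   params = get_deploy_info(node, '10.0.0.5',
#                            'iqn.2008-10.org.openstack:%s' % node.uuid)
# For a partition image the returned dict also carries the root/swap/ephemeral
# sizes, boot_option and boot_mode collected above.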
def continue_deploy(task, **kwargs):
"""Resume a deployment upon getting POST data from deploy ramdisk.
This method raises no exceptions because it is intended to be
invoked asynchronously as a callback from the deploy ramdisk.
:param task: a TaskManager instance containing the node to act on.
:param kwargs: the kwargs to be passed to deploy.
:raises: InvalidState if the event is not allowed by the associated
state machine.
:returns: a dictionary containing the following keys:
For partition image:
'root uuid': UUID of root partition
'efi system partition uuid': UUID of the uefi system partition
(if boot mode is uefi).
NOTE: If key exists but value is None, it means partition doesn't
exist.
For whole disk image:
'disk identifier': ID of the disk to which image was deployed.
"""
node = task.node
params = get_deploy_info(node, **kwargs)
def _fail_deploy(task, msg):
"""Fail the deploy after logging and setting error states."""
LOG.error(msg)
deploy_utils.set_failed_state(task, msg)
destroy_images(task.node.uuid)
raise exception.InstanceDeployFailure(msg)
# NOTE(lucasagomes): Let's make sure we don't log the full content
# of the config drive here because it can be up to 64MB in size,
# so instead let's log "***" in case config drive is enabled.
if LOG.isEnabledFor(logging.logging.DEBUG):
log_params = {
k: params[k] if k != 'configdrive' else '***'
for k in params
}
LOG.debug('Continuing deployment for node %(node)s, params %(params)s',
{'node': node.uuid, 'params': log_params})
uuid_dict_returned = {}
try:
if node.driver_internal_info['is_whole_disk_image']:
uuid_dict_returned = deploy_utils.deploy_disk_image(**params)
else:
uuid_dict_returned = deploy_utils.deploy_partition_image(**params)
except Exception as e:
msg = (_('Deploy failed for instance %(instance)s. '
'Error: %(error)s') %
{'instance': node.instance_uuid, 'error': e})
_fail_deploy(task, msg)
root_uuid_or_disk_id = uuid_dict_returned.get(
'root uuid', uuid_dict_returned.get('disk identifier'))
if not root_uuid_or_disk_id:
msg = (_("Couldn't determine the UUID of the root "
"partition or the disk identifier after deploying "
"node %s") % node.uuid)
_fail_deploy(task, msg)
if params.get('preserve_ephemeral', False):
# Save disk layout information, to check that they are unchanged
# for any future rebuilds
_save_disk_layout(node, deploy_utils.parse_instance_info(node))
destroy_images(node.uuid)
return uuid_dict_returned
def do_agent_iscsi_deploy(task, agent_client):
"""Method invoked when deployed with the agent ramdisk.
This method is invoked by drivers for doing iSCSI deploy
using agent ramdisk. This method assumes that the agent
is booted up on the node and is heartbeating.
:param task: a TaskManager object containing the node.
:param agent_client: an instance of agent_client.AgentClient
which will be used during iscsi deploy (for exposing node's
target disk via iSCSI, for install boot loader, etc).
:returns: a dictionary containing the following keys:
For partition image:
'root uuid': UUID of root partition
'efi system partition uuid': UUID of the uefi system partition
(if boot mode is uefi).
NOTE: If key exists but value is None, it means partition doesn't
exist.
For whole disk image:
'disk identifier': ID of the disk to which image was deployed.
:raises: InstanceDeployFailure, if it encounters some error
during the deploy.
"""
node = task.node
i_info = deploy_utils.parse_instance_info(node)
wipe_disk_metadata = not i_info['preserve_ephemeral']
iqn = 'iqn.2008-10.org.openstack:%s' % node.uuid
portal_port = CONF.iscsi.portal_port
result = agent_client.start_iscsi_target(
node, iqn,
portal_port,
wipe_disk_metadata=wipe_disk_metadata)
if result['command_status'] == 'FAILED':
msg = (_("Failed to start the iSCSI target to deploy the "
"node %(node)s. Error: %(error)s") %
{'node': node.uuid, 'error': result['command_error']})
deploy_utils.set_failed_state(task, msg)
raise exception.InstanceDeployFailure(reason=msg)
address = parse.urlparse(node.driver_internal_info['agent_url'])
address = address.hostname
uuid_dict_returned = continue_deploy(task, iqn=iqn, address=address)
root_uuid_or_disk_id = uuid_dict_returned.get(
'root uuid', uuid_dict_returned.get('disk identifier'))
# TODO(lucasagomes): Move this bit saving the root_uuid to
# continue_deploy()
driver_internal_info = node.driver_internal_info
driver_internal_info['root_uuid_or_disk_id'] = root_uuid_or_disk_id
node.driver_internal_info = driver_internal_info
node.save()
return uuid_dict_returned
def _get_boot_mode(node):
"""Gets the boot mode.
:param node: A single Node.
:returns: A string representing the boot mode type. Defaults to 'bios'.
"""
boot_mode = deploy_utils.get_boot_mode_for_deploy(node)
if boot_mode:
return boot_mode
return "bios"
def validate(task):
"""Validates the pre-requisites for iSCSI deploy.
Validates whether node in the task provided has some ports enrolled.
This method validates whether conductor url is available either from CONF
file or from keystone.
:param task: a TaskManager instance containing the node to act on.
:raises: InvalidParameterValue if the URL of the Ironic API service is not
configured in config file and is not accessible via Keystone
catalog.
:raises: MissingParameterValue if no ports are enrolled for the given node.
"""
try:
# TODO(lucasagomes): Validate the format of the URL
CONF.conductor.api_url or keystone.get_service_url()
except (exception.KeystoneFailure,
exception.CatalogNotFound,
exception.KeystoneUnauthorized) as e:
raise exception.InvalidParameterValue(_(
"Couldn't get the URL of the Ironic API service from the "
"configuration file or keystone catalog. Keystone error: %s") % e)
# Validate the root device hints
deploy_utils.parse_root_device_hints(task.node)
deploy_utils.parse_instance_info(task.node)
class ISCSIDeploy(base.DeployInterface):
"""iSCSI Deploy Interface for deploy-related actions."""
def get_properties(self):
return {}
def validate(self, task):
"""Validate the deployment information for the task's node.
:param task: a TaskManager instance containing the node to act on.
:raises: InvalidParameterValue.
:raises: MissingParameterValue
"""
task.driver.boot.validate(task)
node = task.node
# Check the boot_mode, boot_option and disk_label capabilities values.
deploy_utils.validate_capabilities(node)
# TODO(rameshg87): iscsi_ilo driver uses this method. Remove
# and copy-paste it's contents here once iscsi_ilo deploy driver
# broken down into separate boot and deploy implementations.
validate(task)
@task_manager.require_exclusive_lock
def deploy(self, task):
"""Start deployment of the task's node.
Fetches instance image, updates the DHCP port options for next boot,
and issues a reboot request to the power driver.
This causes the node to boot into the deployment ramdisk and triggers
the next phase of PXE-based deployment via agent heartbeats.
:param task: a TaskManager instance containing the node to act on.
:returns: deploy state DEPLOYWAIT.
"""
node = task.node
cache_instance_image(task.context, node)
check_image_size(task)
manager_utils.node_power_action(task, states.REBOOT)
return states.DEPLOYWAIT
@task_manager.require_exclusive_lock
def tear_down(self, task):
"""Tear down a previous deployment on the task's node.
Power off the node. All actual clean-up is done in the clean_up()
method which should be called separately.
:param task: a TaskManager instance containing the node to act on.
:returns: deploy state DELETED.
:raises: NetworkError if the cleaning ports cannot be removed.
:raises: InvalidParameterValue when the wrong state is specified
or the wrong driver info is specified.
:raises: other exceptions by the node's power driver if something
wrong occurred during the power action.
"""
manager_utils.node_power_action(task, states.POWER_OFF)
task.driver.network.unconfigure_tenant_networks(task)
return states.DELETED
@task_manager.require_exclusive_lock
def prepare(self, task):
"""Prepare the deployment environment for this task's node.
Generates the TFTP configuration for PXE-booting both the deployment
and user images, fetches the TFTP image from Glance and add it to the
local cache.
:param task: a TaskManager instance containing the node to act on.
:raises: NetworkError: if the previous cleaning ports cannot be removed
or if new cleaning ports cannot be created.
:raises: InvalidParameterValue when the wrong power state is specified
or the wrong driver info is specified for power management.
:raises: other exceptions by the node's power driver if something
wrong occurred during the power action.
:raises: any boot interface's prepare_ramdisk exceptions.
"""
node = task.node
if node.provision_state == states.ACTIVE:
task.driver.boot.prepare_instance(task)
else:
if node.provision_state == states.DEPLOYING:
# Adding the node to provisioning network so that the dhcp
# options get added for the provisioning port.
manager_utils.node_power_action(task, states.POWER_OFF)
task.driver.network.add_provisioning_network(task)
deploy_opts = deploy_utils.build_agent_options(node)
task.driver.boot.prepare_ramdisk(task, deploy_opts)
def clean_up(self, task):
"""Clean up the deployment environment for the task's node.
Unlinks TFTP and instance images and triggers image cache cleanup.
Removes the TFTP configuration files for this node.
:param task: a TaskManager instance containing the node to act on.
"""
destroy_images(task.node.uuid)
task.driver.boot.clean_up_ramdisk(task)
task.driver.boot.clean_up_instance(task)
provider = dhcp_factory.DHCPFactory()
provider.clean_dhcp(task)
def take_over(self, task):
pass
def get_clean_steps(self, task):
"""Get the list of clean steps from the agent.
:param task: a TaskManager object containing the node
:raises NodeCleaningFailure: if the clean steps are not yet
available (cached), for example, when a node has just been
enrolled and has not been cleaned yet.
:returns: A list of clean step dictionaries.
"""
steps = deploy_utils.agent_get_clean_steps(
task, interface='deploy',
override_priorities={
'erase_devices': CONF.deploy.erase_devices_priority})
return steps
def execute_clean_step(self, task, step):
"""Execute a clean step asynchronously on the agent.
:param task: a TaskManager object containing the node
:param step: a clean step dictionary to execute
:raises: NodeCleaningFailure if the agent does not return a command
status
:returns: states.CLEANWAIT to signify the step will be completed
asynchronously.
"""
return deploy_utils.agent_execute_clean_step(task, step)
def prepare_cleaning(self, task):
"""Boot into the agent to prepare for cleaning.
:param task: a TaskManager object containing the node
:raises NodeCleaningFailure: if the previous cleaning ports cannot
be removed or if new cleaning ports cannot be created
:returns: states.CLEANWAIT to signify an asynchronous prepare.
"""
return deploy_utils.prepare_inband_cleaning(
task, manage_boot=True)
def tear_down_cleaning(self, task):
"""Clean up the PXE and DHCP files after cleaning.
:param task: a TaskManager object containing the node
:raises NodeCleaningFailure: if the cleaning ports cannot be
removed
"""
deploy_utils.tear_down_inband_cleaning(
task, manage_boot=True)
class VendorPassthru(agent_base_vendor.BaseAgentVendor):
"""Interface to mix IPMI and PXE vendor-specific interfaces."""
@task_manager.require_exclusive_lock
def continue_deploy(self, task, **kwargs):
"""Method invoked when deployed using iSCSI.
This method is invoked during a heartbeat from an agent when
the node is in wait-call-back state. This deploys the image on
the node and then configures the node to boot according to the
desired boot option (netboot or localboot).
:param task: a TaskManager object containing the node.
:param kwargs: the kwargs passed from the heartbeat method.
:raises: InstanceDeployFailure, if it encounters some error during
the deploy.
"""
task.process_event('resume')
node = task.node
LOG.debug('Continuing the deployment on node %s', node.uuid)
uuid_dict_returned = do_agent_iscsi_deploy(task, self._client)
root_uuid = uuid_dict_returned.get('root uuid')
efi_sys_uuid = uuid_dict_returned.get('efi system partition uuid')
self.prepare_instance_to_boot(task, root_uuid, efi_sys_uuid)
self.reboot_and_finish_deploy(task)
|
The Holmes 8' x 10' by Surya at Design Interiors in the Tampa, St. Petersburg, Clearwater, Florida area. Product availability may vary. Contact us for the most current availability on this product.
|
import logging
from celery import group
from app import app, celery, client
from taciturn import registry
logger = logging.getLogger("taciturn")
from celery.utils.log import get_task_logger
task_logger = get_task_logger(__name__)
# Synchronous tasks.
def verify(request):
"""
Verifies a request against the config.
"""
if request is None:
return False
if len(request) < 1:
return False
if not isinstance(request[0], str):
return False
if request[0] != client.api_key:
logger.error("Failed to validate API key! Key provided was: {}".format(request[0]))
return False
return True
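# Illustrative: a request is a sequence whose first element is the API key
# string, e.g. verify(['my-api-key', 0, {'username': 'someone'}]) returns True
# only when 'my-api-key' equals client.api_key (the key here is hypothetical).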
# Async stuff
@celery.task
def process(data, ptype):
task_logger.info("Entered processor, type {}".format(ptype))
if data[2]["username"] == app.config["API_USERNAME"]:
return
if ptype == 0:
g = group(func.s(data) for func in registry.topic_created_registry.values())
elif ptype == 1:
g = group(func.s(data) for func in registry.post_created_registry.values())
else:
task_logger.error("Unable to handle event of type {}".format(ptype))
return
# Call the group.
g.apply_async()
|
I received a sample of these in my goody bag at a dog rescue event. My two finicky pups love these treats! Can’t wait to try more Freshpet products!
|
#!/usr/bin/env python
"""Tests for grr.lib.timeseries."""
from grr.lib import flags
from grr.lib import test_lib
from grr.lib import timeseries
class TimeseriesTest(test_lib.GRRBaseTest):
def makeSeries(self):
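# Helper: builds a 100-point series in which entry i is the pair
# [value=i, timestamp=(i + 5) * 10000], i.e. timestamps 60000..1050000.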
s = timeseries.Timeseries()
for i in range(1, 101):
s.Append(i, (i+5) * 10000)
return s
def testAppendFilterRange(self):
s = self.makeSeries()
self.assertEqual(100, len(s.data))
self.assertEqual([1, 60000], s.data[0])
self.assertEqual([100, 1050000], s.data[-1])
s.FilterRange(100000, 200000)
self.assertEqual(10, len(s.data))
self.assertEqual([5, 100000], s.data[0])
self.assertEqual([14, 190000], s.data[-1])
def testNormalize(self):
s = self.makeSeries()
s.Normalize(10 * 10000, 100000, 600000)
self.assertEqual(5, len(s.data))
self.assertEqual([9.5, 100000], s.data[0])
self.assertEqual([49.5, 500000], s.data[-1])
s = timeseries.Timeseries()
for i in range(0, 1000):
s.Append(0.5, i * 10)
s.Normalize(200, 5000, 10000)
self.assertEqual(25, len(s.data))
self.assertListEqual(s.data[0], [0.5, 5000])
self.assertListEqual(s.data[24], [0.5, 9800])
s = timeseries.Timeseries()
for i in range(0, 1000):
s.Append(i, i * 10)
s.Normalize(200, 5000, 10000, mode=timeseries.NORMALIZE_MODE_COUNTER)
self.assertEqual(25, len(s.data))
self.assertListEqual(s.data[0], [519, 5000])
self.assertListEqual(s.data[24], [999, 9800])
def testToDeltas(self):
s = self.makeSeries()
self.assertEqual(100, len(s.data))
s.ToDeltas()
self.assertEqual(99, len(s.data))
self.assertEqual([1, 60000], s.data[0])
self.assertEqual([1, 1040000], s.data[-1])
s = timeseries.Timeseries()
for i in range(0, 1000):
s.Append(i, i * 1e6)
s.Normalize(20 * 1e6,
500 * 1e6,
1000 * 1e6,
mode=timeseries.NORMALIZE_MODE_COUNTER)
self.assertEqual(25, len(s.data))
self.assertListEqual(s.data[0], [519, int(500 * 1e6)])
s.ToDeltas()
self.assertEqual(24, len(s.data))
self.assertListEqual(s.data[0], [20, int(500 * 1e6)])
self.assertListEqual(s.data[23], [20, int(960 * 1e6)])
def testNormalizeFillsGapsWithNone(self):
s = timeseries.Timeseries()
for i in range(21, 51):
s.Append(i, (i+5) * 10000)
for i in range(81, 101):
s.Append(i, (i+5) * 10000)
s.Normalize(10 * 10000, 10 * 10000, 120 * 10000)
self.assertEqual(11, len(s.data))
self.assertEqual([None, 100000], s.data[0])
self.assertEqual([22.5, 200000], s.data[1])
self.assertEqual([None, 600000], s.data[5])
self.assertEqual([None, 1100000], s.data[-1])
def testMakeIncreasing(self):
s = timeseries.Timeseries()
for i in range(0, 5):
s.Append(i, i * 1000)
for i in range(0, 5):
s.Append(i, (i+6) * 1000)
self.assertEqual(10, len(s.data))
self.assertEqual([4, 10000], s.data[-1])
s.MakeIncreasing()
self.assertEqual(10, len(s.data))
self.assertEqual([8, 10000], s.data[-1])
def testAddRescale(self):
s1 = timeseries.Timeseries()
for i in range(0, 5):
s1.Append(i, i * 1000)
s2 = timeseries.Timeseries()
for i in range(0, 5):
s2.Append(2*i, i * 1000)
s1.Add(s2)
for i in range(0, 5):
self.assertEqual(3 * i, s1.data[i][0])
s1.Rescale(1/3.0)
for i in range(0, 5):
self.assertEqual(i, s1.data[i][0])
def testMean(self):
s = timeseries.Timeseries()
self.assertEqual(None, s.Mean())
s = self.makeSeries()
self.assertEqual(100, len(s.data))
self.assertEqual(50, s.Mean())
def main(argv):
test_lib.main(argv)
if __name__ == "__main__":
flags.StartMain(main)
|
Auto insurance protects you against financial loss if you have an accident. It is a contract between you and the insurance company: you agree to pay the premium and the insurance company agrees to pay your losses as defined in your policy. Let Agency Name help you find the car insurance policy that fits your needs in State today!
|
# -*- coding: utf-8 -*-
"""
distpatch.helpers
~~~~~~~~~~~~~~~~~
Helper functions for distpatch.
:copyright: (c) 2011 by Rafael Goncalves Martins
:license: GPL-2, see LICENSE for more details.
"""
import atexit
import os
import shutil
import tempfile
from subprocess import call
def tempdir(*args, **kwargs):
def cleanup(directory):
if os.path.isdir(directory):
shutil.rmtree(directory)
dirname = tempfile.mkdtemp(*args, **kwargs)
atexit.register(cleanup, dirname)
return dirname
def uncompressed_filename_and_compressor(tarball):
'''returns the filename of the given tarball uncompressed and the compressor.
'''
compressors = {
'.gz': ('gzip', ''),
'.bz2': ('bzip2', ''),
'.xz': ('xz', ''),
'.lzma': ('lzma', ''),
'.tgz': ('gzip', '.tar'),
'.tbz2': ('bzip2', '.tar'),
}
dest, ext = os.path.splitext(tarball)
compressor = compressors.get(ext.lower(), None)
if compressor is None:
return tarball, None
return dest + compressor[1], compressor[0]
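# Illustrative results (file names are hypothetical):
#   uncompressed_filename_and_compressor('foo.tar.gz') -> ('foo.tar', 'gzip')
#   uncompressed_filename_and_compressor('foo.tbz2') -> ('foo.tar', 'bzip2')
#   uncompressed_filename_and_compressor('foo.zip') -> ('foo.zip', None)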
def uncompress(fname, output_dir=None):
# extract to a temporary directory and move back, to keep both files:
# compressed and uncompressed.
base_src = os.path.basename(fname)
base_dest, compressor = uncompressed_filename_and_compressor(base_src)
tmp_dir = tempdir()
tmp_src = os.path.join(tmp_dir, base_src)
tmp_dest = os.path.join(tmp_dir, base_dest)
local_dir = os.path.dirname(os.path.abspath(fname))
local_src = os.path.join(local_dir, base_src)
if output_dir is None:
local_dest = os.path.join(local_dir, base_dest)
else:
local_dest = os.path.join(output_dir, base_dest)
shutil.copy2(local_src, tmp_src)
if compressor is not None:
rv = call([compressor, '-fd', tmp_src])
if rv != os.EX_OK:
raise RuntimeError('Failed to decompress file: %d' % rv)
if not os.path.exists(tmp_dest):
raise RuntimeError('Decompressed file not found: %s' % tmp_dest)
shutil.move(tmp_dest, local_dest)
# we do automatic cleanup, but we should remove it here to save disk space
shutil.rmtree(tmp_dir)
return local_dest
def format_size(size):
KB = 1024
MB = KB * 1024
GB = MB * 1024
TB = GB * 1024
size = float(size)
if size > TB:
return '%.3f TB' % (size / TB)
elif size > GB:
return '%.3f GB' % (size / GB)
elif size > MB:
return '%.3f MB' % (size / MB)
elif size > KB:
return '%.3f KB' % (size / KB)
else:
return '%.0f B' % size
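# Illustrative values: format_size(500) -> '500 B',
# format_size(2048) -> '2.000 KB', format_size(5 * 1024 ** 2) -> '5.000 MB'.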
|
Learn Mohiniyattam by Gopika Varma.
Smt. Gopika Varma, the renowned Mohiniattam dancer, came under the direct purview of Kalamandalam Smt. Kalyani Kuttyamma. She was fortunate to learn Abhinaya, the quintessence of Mohiniyattam, from Sri Kalamandalam Krishnan Nair, doyen of the Kathakali stage. Gopika has performed extensively at various art festivals, sabhas and temples, nationwide and internationally. Her Mohiniyattam is marked by extreme grace and fluidity of movement, transmitting radiance through expressions that at times sigh with nostalgia, purity and sanctity, while being blessed with geometrical precision and technical movements and expressions, her own personal worship of the Divine. Mohiniyattam is a classical dance form from Kerala, India. It is one of the eight Indian classical dance forms recognized by the Sangeet Natak Akademi and is considered a very graceful form of dance meant to be performed as solo recitals by women. The term Mohiniyattam comes from the words "Mohini", meaning a woman who enchants onlookers, and "aattam", meaning graceful and sensuous body movements; the word literally means "dance of the enchantress". The essence of Mohiniyattam is its emphasis on "Abhinaya", through which the artist directly reaches out to the rasikas and transports them to a divine plane.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright 2006 University of Oslo, Norway
#
# This file is part of Cerebrum.
#
# Cerebrum is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# Cerebrum is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Cerebrum; if not, write to the Free Software Foundation,
# Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
"""
Process changelog entries, create user for persons registered
by ABC-import
"""
import sys
import cereconf
from Cerebrum import Errors
from Cerebrum.Utils import Factory
from Cerebrum.modules import CLHandler
def build_account(person_id):
person.clear()
try:
person.find(person_id)
except Errors.NotFoundError:
logger.error("Could not find person %s.", person_id)
return None
person_aff = person.get_affiliations()
acc_id = account.list_accounts_by_owner_id(person_id)
if acc_id == []:
unames = account.suggest_unames(person)
if unames[0] is None:
logger.error('Could not generate user name for %s.', person_id)
return None
account.clear()
account.populate(unames[0], constants.entity_person, person_id,
None, default_creator_id, default_expire_date)
account.write_db()
for s in cereconf.BOFHD_NEW_USER_SPREADS:
account.add_spread(constants.Spread(s))
account.write_db()
if person_aff:
for row in person_aff:
account.set_account_type(row['ou_id'], row['affiliation'])
account.write_db()
return account.entity_id
def main():
global db, constants, logger, person, account
global default_creator_id, default_expire_date
db = Factory.get('Database')()
db.cl_init(change_program='auto_create')
acc = Factory.get('Account')(db)
constants = Factory.get('Constants')(db)
clconstants = Factory.get('CLConstants')(db)
cl_handler = CLHandler.CLHandler(db)
logger = Factory.get_logger('cronjob')
person = Factory.get('Person')(db)
account = Factory.get('Account')(db)
acc.find_by_name(cereconf.INITIAL_ACCOUNTNAME)
default_creator_id = acc.entity_id
default_expire_date = None
try:
cl_events = cl_handler.get_events('auto_create',
(clconstants.person_create,))
if cl_events == []:
logger.info("Nothing to do.")
sys.exit(0)
for event in cl_events:
if event['change_type_id'] == clconstants.person_create:
new_acc_id = build_account(event['subject_entity'])
if new_acc_id is None:
logger.error('Could not create an account for %s',
event['subject_entity'])
continue
cl_handler.confirm_event(event)
except TypeError as e:
logger.warn("No such event, %s" % e)
return None
cl_handler.commit_confirmations()
if __name__ == '__main__':
main()
|
Discover the gorgeous scenic wonders of the Wanaka region and be blown away by the picture-perfect mountains, restored villages and world-famous Reflection Tree. Your knowledgeable guide will show you the best photo opportunities, you'll have a beer in the Cardrona Hotel, and you'll enjoy free time to explore!
Your tour begins with pickup from your accommodation, and then you'll drive to charming Arrowtown. You'll walk among tiny restored cottages and become enchanted with the historic buildings of this town, which was the site of the Gold Rush in the 1860s.
You'll also make your way to Wanaka, a gorgeous town that has made a name for itself as a trendy eco resort town. Historically, Wanaka was used by the Maori people as a pathway to greenstone sites and later by goldminers for accommodation; it is surrounded by majestic mountains and giant glaciers. It's also the home of the Reflection Tree - a natural wonder that makes for a perfect photo! After this, you'll have an hour to wander around Wanaka: have lunch at a cafe, browse the local shops and take in the scenic views.
On the way back you'll be taken along the Crown Range route. This is the highest sealed road in New Zealand; it is usually covered with snow in winter and boasts some incredible viewing platforms, where you will stretch your legs and be able to get some spectacular photos.
The last stop on the tour is the iconic Cardrona Hotel. One of New Zealand's oldest hotels and one of the most photographed buildings in the area, the Cardrona offers up a rustic, old-fashioned feel, and so it should: it was built in 1863 at the height of the Gold Rush. You'll receive your complimentary beer or soft drink (you choose) to enjoy in the beer garden or before the fire, before the last little bit of your drive back to your hotel.
There will be approximately 10 people per group.
It is recommended to bring warm clothes and a raincoat, depending on the weather.
This experience is located in Queenstown, New Zealand.
|
import logging
from pajbot.managers.adminlog import AdminLogManager
from pajbot.managers.db import DBManager
from pajbot.models.command import Command
from pajbot.models.command import CommandExample
from pajbot.models.user import User
from pajbot.modules import BaseModule
from pajbot.modules import ModuleType
from pajbot.modules.basic import BasicCommandsModule
from pajbot.modules import ModuleSetting
log = logging.getLogger(__name__)
class PermabanModule(BaseModule):
ID = __name__.split(".")[-1]
NAME = "Permaban"
DESCRIPTION = "Permabans a user (re-bans them if unbanned by a mod)"
CATEGORY = "Moderation"
ENABLED_DEFAULT = True
MODULE_TYPE = ModuleType.TYPE_ALWAYS_ENABLED
PARENT_MODULE = BasicCommandsModule
SETTINGS = [
ModuleSetting(
key="unban_from_chat",
label="Unban the user from chat when the unpermaban command is used",
type="boolean",
required=True,
default=False,
),
ModuleSetting(
key="enable_send_timeout",
label="Timeout the user for one second to note the unban reason in the mod logs",
type="boolean",
required=True,
default=True,
),
ModuleSetting(
key="timeout_reason",
label="Timeout Reason | Available arguments: {source}",
type="text",
required=False,
placeholder="",
default="Un-permabanned by {source}",
constraints={},
),
]
@staticmethod
def permaban_command(bot, source, message, **rest):
if not message:
return
username = message.split(" ")[0]
with DBManager.create_session_scope() as db_session:
user = User.find_by_user_input(db_session, username)
if not user:
bot.whisper(source, "No user with that name found.")
return False
if user.banned:
bot.whisper(source, "User is already permabanned.")
return False
user.banned = True
bot.ban(
user,
reason=f"User has been added to the {bot.nickname} banlist. Contact a moderator level 1000 or higher for unban.",
)
log_msg = f"{user} has been permabanned"
bot.whisper(source, log_msg)
AdminLogManager.add_entry("Permaban added", source, log_msg)
def unpermaban_command(self, bot, source, message, **rest):
if not message:
return
username = message.split(" ")[0]
with DBManager.create_session_scope() as db_session:
user = User.find_by_user_input(db_session, username)
if not user:
bot.whisper(source, "No user with that name found.")
return False
if user.banned is False:
bot.whisper(source, "User is not permabanned.")
return False
user.banned = False
log_msg = f"{user} is no longer permabanned"
bot.whisper(source, log_msg)
AdminLogManager.add_entry("Permaban remove", source, log_msg)
if self.settings["unban_from_chat"] is True:
bot.unban(user)
if self.settings["enable_send_timeout"] is True:
bot.timeout(user, 1, self.settings["timeout_reason"].format(source=source), once=True)
def load_commands(self, **options):
self.commands["permaban"] = Command.raw_command(
self.permaban_command,
level=1000,
description="Permanently ban a user. Every time the user types in chat, he will be permanently banned again",
examples=[
CommandExample(
None,
"Default usage",
chat="user:!permaban Karl_Kons\n" "bot>user:Karl_Kons has now been permabanned",
description="Permanently ban Karl_Kons from the chat",
).parse()
],
)
self.commands["unpermaban"] = Command.raw_command(
self.unpermaban_command,
level=1000,
description="Remove a permanent ban from a user",
examples=[
CommandExample(
None,
"Default usage",
chat="user:!unpermaban Karl_Kons\n" "bot>user:Karl_Kons is no longer permabanned",
description="Remove permanent ban from Karl_Kons",
).parse()
],
)
|
Four abstracts have been selected for the KENET Computer Science (CS) and Information Systems (IS) Mini-Grant. The Mini-Grant attracted 35 submissions from 16 public and private universities after the call was announced on March 26, 2018. Each grant amounts to Kshs. 1,500,000 (USD 15,000) and runs for 12 months, from August 1, 2018 to July 31, 2019. The areas of focus for the grants were Distributed Ledger Technologies (DLTs) and Big Data Analytics (BDA).
Out of the 20 entries in BDA and 15 in DLTs, two mini-grants were awarded per research area of focus. The Mini-Grant is intended to promote early-stage CS/IS research and development in current and emerging research areas, as well as to strengthen the CS/IS Special Interest Groups (SIGs). Early-stage research was the focal point of the Mini-Grant, enabling researchers to undertake proof-of-concept work in support of Research and Development (R&D) ideas and concepts.
Some winners worked in groups led by a lead researcher, while others submitted individual research proposals. The winners represented five KENET member institutions: Africa Nazarene University, Murang’a University of Technology, Meru University of Science and Technology, University of Nairobi and Strathmore University.
The winners were selected by a panel of CS/IS experts based on several criteria, including relevance, potential for publication in refereed journals and/or conferences, stakeholder buy-in, student engagement, and awareness of, and strategies to comply with, policy and regulatory requirements.
At the end of their research project, the winners are expected to publish their work in reputable journals and grow a community of researchers. Hence, through this mini-grant, KENET hopes to not only support individual research teams, but to facilitate institutional collaboration and formation of communities of practice in the research areas of focus, leading to enhanced research capacity in member institutions.
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
'''
freeseer - vga/presentation capture software
Copyright (C) 2011 Free and Open Source Software Learning Centre
http://fosslc.org
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
For support, questions, suggestions or any other inquiries, visit:
http://wiki.github.com/Freeseer/freeseer/
@author: Thanh Ha
'''
from PyQt4 import QtCore, QtGui
from freeseer.frontend.qtcommon.dpi_adapt_qtgui import QGroupBoxWithDpi
from freeseer.frontend.qtcommon.dpi_adapt_qtgui import QWidgetWithDpi
class GeneralWidget(QWidgetWithDpi):
'''
Widget for the General settings tab: language, appearance and reset options.
'''
def __init__(self, parent=None):
'''
Build the General tab UI: heading, language, appearance and reset sections.
'''
super(GeneralWidget, self).__init__(parent)
self.mainLayout = QtGui.QVBoxLayout()
self.mainLayout.addStretch(0)
self.setLayout(self.mainLayout)
fontSize = self.font().pixelSize()
fontUnit = "px"
if fontSize == -1: # Font is set as points, not pixels.
fontUnit = "pt"
fontSize = self.font().pointSize()
boxStyle = "QGroupBox {{ font-weight: bold; font-size: {}{} }}".format(fontSize + 1, fontUnit)
BOX_WIDTH = 400
BOX_HEIGHT = 60
#
# Heading
#
self.title = QtGui.QLabel(u"{0} General {1}".format(u'<h1>', u'</h1>'))
self.mainLayout.insertWidget(0, self.title)
self.mainLayout.insertSpacerItem(1, QtGui.QSpacerItem(0, fontSize * 2))
#
# Language
#
languageBoxLayout = QtGui.QVBoxLayout()
self.languageGroupBox = QGroupBoxWithDpi("Language")
self.languageGroupBox.setLayout(languageBoxLayout)
self.languageGroupBox.setSizePolicy(QtGui.QSizePolicy.Fixed, QtGui.QSizePolicy.Fixed)
self.languageGroupBox.setFixedSize(BOX_WIDTH, BOX_HEIGHT)
self.languageGroupBox.setStyleSheet(boxStyle)
self.mainLayout.insertWidget(2, self.languageGroupBox)
languageLayout = QtGui.QHBoxLayout()
languageBoxLayout.addLayout(languageLayout)
self.translateButton = QtGui.QPushButton("Help us translate")
self.languageComboBox = QtGui.QComboBox()
self.languageComboBox.setContextMenuPolicy(QtCore.Qt.ActionsContextMenu)
languageLayout.addWidget(self.languageComboBox, 2)
languageLayout.addSpacerItem(self.qspacer_item_with_dpi(40, 0))
languageLayout.addWidget(self.translateButton, 1)
#
# Appearance
#
appearanceBoxLayout = QtGui.QVBoxLayout()
self.appearanceGroupBox = QGroupBoxWithDpi("Appearance")
self.appearanceGroupBox.setLayout(appearanceBoxLayout)
self.appearanceGroupBox.setSizePolicy(QtGui.QSizePolicy.Fixed, QtGui.QSizePolicy.Fixed)
self.appearanceGroupBox.setFixedSize(BOX_WIDTH, BOX_HEIGHT)
self.appearanceGroupBox.setStyleSheet(boxStyle)
self.mainLayout.insertWidget(3, self.appearanceGroupBox)
self.autoHideCheckBox = QtGui.QCheckBox("Auto-Hide to system tray on record")
appearanceBoxLayout.addWidget(self.autoHideCheckBox)
#
# Reset
#
resetBoxLayout = QtGui.QVBoxLayout()
self.resetGroupBox = QGroupBoxWithDpi("Reset")
self.resetGroupBox.setLayout(resetBoxLayout)
self.resetGroupBox.setSizePolicy(QtGui.QSizePolicy.Fixed, QtGui.QSizePolicy.Fixed)
self.resetGroupBox.setFixedSize(BOX_WIDTH / 2, BOX_HEIGHT)
self.resetGroupBox.setStyleSheet(boxStyle)
self.mainLayout.addWidget(self.resetGroupBox)
self.mainLayout.addSpacerItem(self.qspacer_item_with_dpi(0, 20))
resetLayout = QtGui.QHBoxLayout()
resetBoxLayout.addLayout(resetLayout)
self.resetButton = QtGui.QPushButton("Reset settings to defaults")
resetLayout.addWidget(self.resetButton)
if __name__ == "__main__":
import sys
app = QtGui.QApplication(sys.argv)
main = GeneralWidget()
main.show()
sys.exit(app.exec_())
|
I drew it and then went to dinner.
Upon returning home, I started painting with exactly one hour to spare before the start of Breaking Bad - the only TV show I am currently watching.
I literally ran back and forth from the painting to the TV during commercial breaks to wrap this one up. It is admittedly a bit rushed, which is a shame because I am fond of the composition. This one will get added to the pile of paintings that could be improved with a bit more time.
Despite the rush, I am glad I got to take a break and paint tonight. I spent the entirety of the last 48 hours working on "school stuff" - be it college or middle school. September is a busy month - another reason why I should have finished this project in AUGUST.
|
#!/usr/bin/env python
# encoding: utf8
#
# Copyright © Burak Arslan <burak at arskom dot com dot tr>,
# Arskom Ltd. http://www.arskom.com.tr
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# 3. Neither the name of the owner nor the names of its contributors may be
# used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY DIRECT,
# INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
# EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
"""This is a simple db-backed persistent task queue implementation.
The producer (client) writes requests to a database table. The consumer (server)
polls the database every 10 seconds and processes new requests.
"""
import time
import logging
from sqlalchemy import MetaData
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from spyne import MethodContext, Application, rpc, TTableModel, Integer32, \
UnsignedInteger, ByteArray, Mandatory as M
# client stuff
from spyne import RemoteService, RemoteProcedureBase, ClientBase
# server stuff
from spyne import ServerBase, Service
from spyne.protocol.soap import Soap11
db = create_engine('sqlite:///:memory:')
TableModel = TTableModel(MetaData(bind=db))
#
# The database tables used to store tasks and worker status
#
class TaskQueue(TableModel):
__tablename__ = 'task_queue'
id = Integer32(primary_key=True)
data = ByteArray(nullable=False)
class WorkerStatus(TableModel):
__tablename__ = 'worker_status'
worker_id = Integer32(pk=True, autoincrement=False)
task = TaskQueue.customize(store_as='table')
#
# The consumer (server) implementation
#
class Consumer(ServerBase):
transport = 'http://sqlalchemy.persistent.queue/'
def __init__(self, db, app, consumer_id):
super(Consumer, self).__init__(app)
self.session = sessionmaker(bind=db)()
self.id = consumer_id
if self.session.query(WorkerStatus).get(self.id) is None:
self.session.add(WorkerStatus(worker_id=self.id))
self.session.commit()
def serve_forever(self):
while True:
# get the id of the last processed job
last = self.session.query(WorkerStatus).with_lockmode("update") \
.filter_by(worker_id=self.id).one()
# get new tasks
task_id = 0
if last.task is not None:
task_id = last.task.id
task_queue = self.session.query(TaskQueue) \
.filter(TaskQueue.id > task_id) \
.order_by(TaskQueue.id)
for task in task_queue:
initial_ctx = MethodContext(self)
# this is the critical bit, where the request bytestream is put
# in the context so that the protocol can deserialize it.
initial_ctx.in_string = [task.data]
# these two lines are purely for logging
initial_ctx.transport.consumer_id = self.id
initial_ctx.transport.task_id = task.id
# The ``generate_contexts`` call parses the incoming stream and
# splits the request into header and body parts.
# There will be only one context here because no auxiliary
# methods are defined.
for ctx in self.generate_contexts(initial_ctx, 'utf8'):
# This is standard boilerplate for invoking services.
self.get_in_object(ctx)
if ctx.in_error:
self.get_out_string(ctx)
logging.error(''.join(ctx.out_string))
continue
self.get_out_object(ctx)
if ctx.out_error:
self.get_out_string(ctx)
logging.error(''.join(ctx.out_string))
continue
self.get_out_string(ctx)
logging.debug(''.join(ctx.out_string))
last.task_id = task.id
self.session.commit()
time.sleep(10)
#
# The producer (client) implementation
#
class RemoteProcedure(RemoteProcedureBase):
def __init__(self, db, app, name, out_header):
super(RemoteProcedure, self).__init__(db, app, name, out_header)
self.Session = sessionmaker(bind=db)
def __call__(self, *args, **kwargs):
session = self.Session()
for ctx in self.contexts:
self.get_out_object(ctx, args, kwargs)
self.get_out_string(ctx)
out_string = ''.join(ctx.out_string)
print(out_string)
session.add(TaskQueue(data=out_string))
session.commit()
session.close()
class Producer(ClientBase):
def __init__(self, db, app):
super(Producer, self).__init__(db, app)
self.service = RemoteService(RemoteProcedure, db, app)
#
# The service to call.
#
class AsyncService(Service):
@rpc(M(UnsignedInteger))
def sleep(ctx, integer):
print("Sleeping for %d seconds..." % (integer))
time.sleep(integer)
def _on_method_call(ctx):
print("This is worker id %d, processing task id %d." % (
ctx.transport.consumer_id, ctx.transport.task_id))
AsyncService.event_manager.add_listener('method_call', _on_method_call)
if __name__ == '__main__':
# set up logging
logging.basicConfig(level=logging.DEBUG)
logging.getLogger('sqlalchemy.engine.base.Engine').setLevel(logging.DEBUG)
# Setup colorama and termcolor, if they are there
try:
from termcolor import colored
from colorama import init
init()
except ImportError:
logging.error("Install 'termcolor' and 'colorama' packages to get "
"colored log output")
def colored(s, *args, **kwargs):
return s
logging.info(colored("Creating database tables...", 'yellow', attrs=['bold']))
TableModel.Attributes.sqla_metadata.create_all()
logging.info(colored("Creating Application...", 'blue'))
application = Application([AsyncService], 'spyne.async',
in_protocol=Soap11(), out_protocol=Soap11())
logging.info(colored("Making requests...", 'yellow', attrs=['bold']))
producer = Producer(db, application)
for i in range(10):
producer.service.sleep(i)
logging.info(colored("Spawning consumer...", 'blue'))
# process requests. it'd make most sense if this was in another process.
consumer = Consumer(db, application, consumer_id=1)
consumer.serve_forever()
|
Mormedi was present at the 5th Annual Meeting of Spanish Design Associations (ENAD), and participated actively in the group that prepared what will be the third edition of Transferencias.design in 2018.
The ENAD meetings serve in part to take the pulse of design in Spain and to identify points of convergence not only among design associations, but also among representatives from companies, educational centers and government institutions. These meetings review activity to date and also shape, in a very active and decentralized way, future initiatives to open new paths in the field of design. All this is brought to life through presentations, seminars and working sessions.
One of these initiatives is Transferencias.design, a multidisciplinary meeting where participants from companies, institutions, the professions and teaching discuss how they interact and how this interaction contributes to the development of new designer profiles. The first edition, in 2016, dealt with the perception of these profiles, and the second, in 2017, with the experiences and problems that large companies and entities were facing in the area of design.
In addition to the aforementioned themes and working groups, ENAD participants agreed on secondary tracks that would take shape during the year, with working groups assigned to them as well. The findings and activities of all of these groups would be presented and discussed at the next meeting. We hope to see you there.
|
# -*- encoding: utf-8 -*-
# transformation functions to apply to features
from collections import defaultdict, namedtuple
from math import ceil
from numbers import Number
from shapely.geometry.collection import GeometryCollection
from shapely.geometry import box as Box
from shapely.geometry import LinearRing
from shapely.geometry import LineString
from shapely.geometry import Point
from shapely.geometry import Polygon
from shapely.geometry.multilinestring import MultiLineString
from shapely.geometry.multipoint import MultiPoint
from shapely.geometry.multipolygon import MultiPolygon
from shapely.geometry.polygon import orient
from shapely.ops import linemerge
from shapely.strtree import STRtree
from sort import pois as sort_pois
from StreetNames import short_street_name
from sys import float_info
from tilequeue.process import _make_valid_if_necessary
from tilequeue.process import _visible_shape
from tilequeue.tile import calc_meters_per_pixel_area
from tilequeue.tile import normalize_geometry_type
from tilequeue.tile import tolerance_for_zoom
from tilequeue.transform import calculate_padded_bounds
from util import to_float
from util import safe_int
from zope.dottedname.resolve import resolve
import csv
import pycountry
import re
import shapely.errors
import shapely.wkb
import shapely.ops
import kdtree
feet_pattern = re.compile('([+-]?[0-9.]+)\'(?: *([+-]?[0-9.]+)")?')
number_pattern = re.compile('([+-]?[0-9.]+)')
# pattern to detect numbers with units.
# PLEASE: keep this in sync with the conversion factors below.
unit_pattern = re.compile('([+-]?[0-9.]+) *(mi|km|m|nmi|ft)')
# multiplicative conversion factor from the unit into meters.
# PLEASE: keep this in sync with the unit_pattern above.
unit_conversion_factor = {
'mi': 1609.3440,
'km': 1000.0000,
'm': 1.0000,
'nmi': 1852.0000,
'ft': 0.3048
}
# used to detect if the "name" of a building is
# actually a house number.
digits_pattern = re.compile('^[0-9-]+$')
# used to detect station names which are followed by a
# parenthetical list of line names.
station_pattern = re.compile(r'([^(]*)\(([^)]*)\).*')
# used to detect if an airport's IATA code is the "short"
# 3-character type. there are also longer codes, and ones
# which include numbers, but those seem to be used for
# less important airports.
iata_short_code_pattern = re.compile('^[A-Z]{3}$')
def _to_float_meters(x):
if x is None:
return None
as_float = to_float(x)
if as_float is not None:
return as_float
# trim whitespace to simplify further matching
x = x.strip()
# try looking for a unit
unit_match = unit_pattern.match(x)
if unit_match is not None:
value = unit_match.group(1)
units = unit_match.group(2)
value_as_float = to_float(value)
if value_as_float is not None:
return value_as_float * unit_conversion_factor[units]
# try if it looks like an expression in feet via ' "
feet_match = feet_pattern.match(x)
if feet_match is not None:
feet = feet_match.group(1)
inches = feet_match.group(2)
feet_as_float = to_float(feet)
inches_as_float = to_float(inches)
total_inches = 0.0
parsed_feet_or_inches = False
if feet_as_float is not None:
total_inches = feet_as_float * 12.0
parsed_feet_or_inches = True
if inches_as_float is not None:
total_inches += inches_as_float
parsed_feet_or_inches = True
if parsed_feet_or_inches:
# international inch is exactly 25.4mm
meters = total_inches * 0.0254
return meters
# try and match the first number that can be parsed
for number_match in number_pattern.finditer(x):
potential_number = number_match.group(1)
as_float = to_float(potential_number)
if as_float is not None:
return as_float
return None
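# For illustration, a few expected conversions through the helper above
# (hypothetical inputs; to_float is assumed to behave like float() with
# a None fallback):
#   _to_float_meters('100')    -> 100.0
#   _to_float_meters('2 km')   -> 2000.0
#   _to_float_meters('5\'6"')  -> (5 * 12 + 6) * 0.0254 = 1.6764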
def _to_int_degrees(x):
if x is None:
return None
as_int = safe_int(x)
if as_int is not None:
# always return within range of 0 to 360
return as_int % 360
# trim whitespace to simplify further matching
x = x.strip()
cardinals = {
'north': 0, 'N': 0, 'NNE': 22, 'NE': 45, 'ENE': 67,
'east': 90, 'E': 90, 'ESE': 112, 'SE': 135, 'SSE': 157,
'south': 180, 'S': 180, 'SSW': 202, 'SW': 225, 'WSW': 247,
'west': 270, 'W': 270, 'WNW': 292, 'NW': 315, 'NNW': 337
}
# protect against bad cardinal notations
return cardinals.get(x)
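# For illustration (hypothetical inputs):
#   _to_int_degrees('450') -> 90   (normalized into the 0-360 range)
#   _to_int_degrees('NE')  -> 45   (cardinal lookup)
#   _to_int_degrees('foo') -> None (unrecognized notation)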
def _coalesce(properties, *property_names):
for prop in property_names:
val = properties.get(prop)
if val:
return val
return None
def _remove_properties(properties, *property_names):
for prop in property_names:
properties.pop(prop, None)
return properties
def _is_name(key):
"""
Return True if this key looks like a name.
This isn't as simple as testing if key == 'name', as there are alternative
name-like tags such as 'official_name', translated names such as 'name:en',
and left/right names for boundaries. This function aims to match all of
those variants.
"""
# simplest and most common case first
if key == 'name':
return True
# translations next
if key.startswith('name:'):
return True
# then any of the alternative forms of name
return any(key.startswith(p) for p in tag_name_alternates)
def _remove_names(props):
"""
Remove entries in the props dict for which the key looks like a name.
Modifies the props dict in-place and also returns it.
"""
for k in props.keys():
if _is_name(k):
props.pop(k)
return props
def _has_name(props):
"""
Return true if any of the props look like a name.
"""
for k in props.keys():
if _is_name(k):
return True
return False
def _building_calc_levels(levels):
levels = max(levels, 1)
levels = (levels * 3) + 2
return levels
def _building_calc_min_levels(min_levels):
min_levels = max(min_levels, 0)
min_levels = min_levels * 3
return min_levels
# slightly bigger than the tallest structure in the world. at the time
# of writing, the Burj Khalifa at 829.8m. this is used as a check to make
# sure that nonsense values (e.g: buildings a million meters tall) don't
# make it into the data.
TALLEST_STRUCTURE_METERS = 1000.0
def _building_calc_height(height_val, levels_val, levels_calc_fn):
height = _to_float_meters(height_val)
if height is not None and 0 <= height <= TALLEST_STRUCTURE_METERS:
return height
levels = _to_float_meters(levels_val)
if levels is None:
return None
levels = levels_calc_fn(levels)
if 0 <= levels <= TALLEST_STRUCTURE_METERS:
return levels
return None
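# For illustration (hypothetical tag values): a building tagged
# height='22' yields 22.0 directly, while one tagged only with
# building_levels='5' is estimated via _building_calc_levels as
# 5 * 3 + 2 = 17.0 meters.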
def add_id_to_properties(shape, properties, fid, zoom):
properties['id'] = fid
return shape, properties, fid
def detect_osm_relation(shape, properties, fid, zoom):
# Assume all negative ids indicate the data was a relation. At the
# moment, this is true because only osm contains negative
# identifiers. Should this change, this logic would need to become
# more robust
if isinstance(fid, Number) and fid < 0:
properties['osm_relation'] = True
return shape, properties, fid
def remove_feature_id(shape, properties, fid, zoom):
return shape, properties, None
def building_height(shape, properties, fid, zoom):
height = _building_calc_height(
properties.get('height'), properties.get('building_levels'),
_building_calc_levels)
if height is not None:
properties['height'] = height
else:
properties.pop('height', None)
return shape, properties, fid
def building_min_height(shape, properties, fid, zoom):
min_height = _building_calc_height(
properties.get('min_height'), properties.get('building_min_levels'),
_building_calc_min_levels)
if min_height is not None:
properties['min_height'] = min_height
else:
properties.pop('min_height', None)
return shape, properties, fid
def synthesize_volume(shape, props, fid, zoom):
area = props.get('area')
height = props.get('height')
if area is not None and height is not None:
props['volume'] = int(area * height)
return shape, props, fid
def building_trim_properties(shape, properties, fid, zoom):
properties = _remove_properties(
properties,
'building', 'building_part',
'building_levels', 'building_min_levels')
return shape, properties, fid
def road_classifier(shape, properties, fid, zoom):
source = properties.get('source')
assert source, 'Missing source in road query'
if source == 'naturalearthdata.com':
return shape, properties, fid
properties.pop('is_link', None)
properties.pop('is_tunnel', None)
properties.pop('is_bridge', None)
kind_detail = properties.get('kind_detail', '')
tunnel = properties.get('tunnel', '')
bridge = properties.get('bridge', '')
if kind_detail.endswith('_link'):
properties['is_link'] = True
if tunnel in ('yes', 'true'):
properties['is_tunnel'] = True
if bridge and bridge != 'no':
properties['is_bridge'] = True
return shape, properties, fid
def add_road_network_from_ncat(shape, properties, fid, zoom):
"""
Many South Korean roads appear to have an "ncat" tag, which seems to
correspond to the type of road network (perhaps "ncat" = "national
category"?)
This filter carries that through into "network", unless it is already
populated.
"""
if properties.get('network') is None:
tags = properties.get('tags', {})
ncat = _make_unicode_or_none(tags.get('ncat'))
if ncat == u'국도':
# national roads - gukdo
properties['network'] = 'KR:national'
elif ncat == u'광역시도로':
# metropolitan city roads - gwangyeoksido
properties['network'] = 'KR:metropolitan'
elif ncat == u'특별시도':
# special city (Seoul) roads - teukbyeolsido
properties['network'] = 'KR:metropolitan'
elif ncat == u'고속도로':
# expressways - gosokdoro
properties['network'] = 'KR:expressway'
elif ncat == u'지방도':
# local highways - jibangdo
properties['network'] = 'KR:local'
return shape, properties, fid
def road_trim_properties(shape, properties, fid, zoom):
properties = _remove_properties(properties, 'bridge', 'tunnel')
return shape, properties, fid
def _reverse_line_direction(shape):
if shape.type != 'LineString':
return False
shape.coords = shape.coords[::-1]
return True
def road_oneway(shape, properties, fid, zoom):
oneway = properties.get('oneway')
if oneway in ('-1', 'reverse'):
did_reverse = _reverse_line_direction(shape)
if did_reverse:
properties['oneway'] = 'yes'
elif oneway in ('true', '1'):
properties['oneway'] = 'yes'
elif oneway in ('false', '0'):
properties['oneway'] = 'no'
return shape, properties, fid
def road_abbreviate_name(shape, properties, fid, zoom):
name = properties.get('name', None)
if not name:
return shape, properties, fid
short_name = short_street_name(name)
properties['name'] = short_name
return shape, properties, fid
def route_name(shape, properties, fid, zoom):
rn = properties.get('route_name')
if rn:
name = properties.get('name')
if not name:
properties['name'] = rn
del properties['route_name']
elif rn == name:
del properties['route_name']
return shape, properties, fid
def place_population_int(shape, properties, fid, zoom):
population_str = properties.pop('population', None)
population = to_float(population_str)
if population is not None:
properties['population'] = int(population)
return shape, properties, fid
def population_rank(shape, properties, fid, zoom):
population = properties.get('population')
pop_breaks = [
1000000000,
100000000,
50000000,
20000000,
10000000,
5000000,
1000000,
500000,
200000,
100000,
50000,
20000,
10000,
5000,
2000,
1000,
200,
0,
]
for i, pop_break in enumerate(pop_breaks):
if population >= pop_break:
rank = len(pop_breaks) - i
break
else:
rank = 0
properties['population_rank'] = rank
return (shape, properties, fid)
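# For illustration (hypothetical value): a population of 150,000 first
# satisfies the 100,000 break at index 9 of the 18 breaks above, so
# population_rank = 18 - 9 = 9.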
def pois_capacity_int(shape, properties, fid, zoom):
pois_capacity_str = properties.pop('capacity', None)
capacity = to_float(pois_capacity_str)
if capacity is not None:
properties['capacity'] = int(capacity)
return shape, properties, fid
def pois_direction_int(shape, props, fid, zoom):
direction = props.get('direction')
if not direction:
return shape, props, fid
props['direction'] = _to_int_degrees(direction)
return shape, props, fid
def water_tunnel(shape, properties, fid, zoom):
tunnel = properties.pop('tunnel', None)
if tunnel in (None, 'no', 'false', '0'):
properties.pop('is_tunnel', None)
else:
properties['is_tunnel'] = True
return shape, properties, fid
def admin_level_as_int(shape, properties, fid, zoom):
admin_level_str = properties.pop('admin_level', None)
if admin_level_str is None:
return shape, properties, fid
try:
admin_level_int = int(admin_level_str)
except ValueError:
return shape, properties, fid
properties['admin_level'] = admin_level_int
return shape, properties, fid
def tags_create_dict(shape, properties, fid, zoom):
tags_hstore = properties.get('tags')
if tags_hstore:
tags = dict(tags_hstore)
properties['tags'] = tags
return shape, properties, fid
def tags_remove(shape, properties, fid, zoom):
properties.pop('tags', None)
return shape, properties, fid
tag_name_alternates = (
'int_name',
'loc_name',
'nat_name',
'official_name',
'old_name',
'reg_name',
'short_name',
'name_left',
'name_right',
'name:short',
)
def _alpha_2_code_of(lang):
try:
alpha_2_code = lang.alpha_2.encode('utf-8')
except AttributeError:
return None
return alpha_2_code
# a structure to return language code lookup results preserving the priority
# (lower is better) of the result for use in situations where multiple inputs
# can map to the same output.
LangResult = namedtuple('LangResult', ['code', 'priority'])
def _convert_wof_l10n_name(x):
lang_str_iso_639_3 = x[:3]
if len(lang_str_iso_639_3) != 3:
return None
try:
lang = pycountry.languages.get(alpha_3=lang_str_iso_639_3)
except KeyError:
return None
return LangResult(code=_alpha_2_code_of(lang), priority=0)
def _convert_ne_l10n_name(x):
if len(x) != 2:
return None
try:
lang = pycountry.languages.get(alpha_2=x)
except KeyError:
return None
return LangResult(code=_alpha_2_code_of(lang), priority=0)
def _normalize_osm_lang_code(x):
# first try an alpha-2 code
try:
lang = pycountry.languages.get(alpha_2=x)
except KeyError:
# next, try an alpha-3 code
try:
lang = pycountry.languages.get(alpha_3=x)
except KeyError:
# finally, try a "bibliographic" code
try:
lang = pycountry.languages.get(bibliographic=x)
except KeyError:
return None
return _alpha_2_code_of(lang)
def _normalize_country_code(x):
x = x.upper()
try:
c = pycountry.countries.get(alpha_2=x)
except KeyError:
try:
c = pycountry.countries.get(alpha_3=x)
except KeyError:
try:
c = pycountry.countries.get(numeric=x)
except KeyError:
return None
alpha2_code = c.alpha_2
return alpha2_code
osm_l10n_lookup = set([
'zh-min-nan',
'zh-yue'
])
def _convert_osm_l10n_name(x):
if x in osm_l10n_lookup:
return LangResult(code=x, priority=0)
if '_' not in x:
lang_code_candidate = x
country_candidate = None
else:
fields_by_underscore = x.split('_', 1)
lang_code_candidate, country_candidate = fields_by_underscore
lang_code_result = _normalize_osm_lang_code(lang_code_candidate)
if lang_code_result is None:
return None
priority = 0
if country_candidate:
country_result = _normalize_country_code(country_candidate)
if country_result is None:
result = lang_code_result
priority = 1
else:
result = '%s_%s' % (lang_code_result, country_result)
else:
result = lang_code_result
return LangResult(code=result, priority=priority)
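# For illustration (hypothetical suffixes from OSM 'name:*' tags):
#   _convert_osm_l10n_name('en')    -> LangResult('en', priority=0)
#   _convert_osm_l10n_name('zh_TW') -> LangResult('zh_TW', priority=0)
#   _convert_osm_l10n_name('en_XX') -> LangResult('en', priority=1)
#     (the bogus country code is dropped, at a lower priority)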
def tags_name_i18n(shape, properties, fid, zoom):
tags = properties.get('tags')
if not tags:
return shape, properties, fid
name = properties.get('name')
if not name:
return shape, properties, fid
source = properties.get('source')
is_wof = source == 'whosonfirst.org'
is_osm = source == 'openstreetmap.org'
is_ne = source == 'naturalearthdata.com'
if is_osm:
alt_name_prefix_candidates = [
'name:left:', 'name:right:', 'name:', 'alt_name:', 'old_name:'
]
convert_fn = _convert_osm_l10n_name
elif is_wof:
alt_name_prefix_candidates = ['name:']
convert_fn = _convert_wof_l10n_name
elif is_ne:
# replace name_xx with name:xx in tags
for k in tags.keys():
if k.startswith('name_'):
value = tags.pop(k)
tag_k = k.replace('_', ':')
tags[tag_k] = value
alt_name_prefix_candidates = ['name:']
convert_fn = _convert_ne_l10n_name
else:
# conversion function only implemented for things which come from OSM,
# NE or WOF - implement more cases here when more localized named
# sources become available.
return shape, properties, fid
langs = {}
for k, v in tags.items():
for candidate in alt_name_prefix_candidates:
if k.startswith(candidate):
lang_code = k[len(candidate):]
normalized_lang_code = convert_fn(lang_code)
if normalized_lang_code:
code = normalized_lang_code.code
priority = normalized_lang_code.priority
lang_key = '%s%s' % (candidate, code)
if lang_key not in langs or \
priority < langs[lang_key][0].priority:
langs[lang_key] = (normalized_lang_code, v)
for lang_key, (lang, v) in langs.items():
properties[lang_key] = v
for alt_tag_name_candidate in tag_name_alternates:
alt_tag_name_value = tags.get(alt_tag_name_candidate)
if alt_tag_name_value and alt_tag_name_value != name:
properties[alt_tag_name_candidate] = alt_tag_name_value
return shape, properties, fid
def _no_none_min(a, b):
"""
Usually, `min(None, a)` will return None. This isn't
what we want, so this one will return a non-None
argument instead. This is basically the same as
treating None as greater than any other value.
"""
if a is None:
return b
elif b is None:
return a
else:
return min(a, b)
def _sorted_attributes(features, attrs, attribute):
"""
When the list of attributes is a dictionary, use the
sort key parameter to order the feature attributes.
evaluate it as a function and return it. If it's not
in the right format, attrs isn't a dict then returns
None.
"""
sort_key = attrs.get('sort_key')
reverse = attrs.get('reverse')
assert sort_key is not None, "Configuration " + \
"parameter 'sort_key' is missing, please " + \
"check your configuration."
# first, we find the _minimum_ ordering over the
# group of key values. this is because we only do
# the intersection in groups by the cutting
# attribute, so can only sort in accordance with
# that.
group = dict()
for feature in features:
val = feature[1].get(sort_key)
key = feature[1].get(attribute)
val = _no_none_min(val, group.get(key))
group[key] = val
# extract the sorted list of attributes from the
# grouped (attribute, order) pairs, ordering by
# the order.
all_attrs = sorted(group.iteritems(),
key=lambda x: x[1], reverse=bool(reverse))
# strip out the sort key in return
return [x[0] for x in all_attrs]
# the table of geometry dimensions indexed by geometry
# type name. it would be better to use geometry type ID,
# but it seems like that isn't exposed.
#
# each of these is a bit-mask, so zero dimensions is
# represented by 1, one by 2, etc... this is to support
# things like geometry collections where the type isn't
# statically known.
_NULL_DIMENSION = 0
_POINT_DIMENSION = 1
_LINE_DIMENSION = 2
_POLYGON_DIMENSION = 4
_GEOMETRY_DIMENSIONS = {
'Point': _POINT_DIMENSION,
'LineString': _LINE_DIMENSION,
'LinearRing': _LINE_DIMENSION,
'Polygon': _POLYGON_DIMENSION,
'MultiPoint': _POINT_DIMENSION,
'MultiLineString': _LINE_DIMENSION,
'MultiPolygon': _POLYGON_DIMENSION,
'GeometryCollection': _NULL_DIMENSION,
}
# returns the dimensionality of the object. so points have
# zero dimensions, lines one, polygons two. multi* variants
# have the same as their singular variant.
#
# geometry collections can hold many different types, so
# we use a bit-mask of the dimensions and recurse down to
# find the actual dimensionality of the stored set.
#
# returns a bit-mask, with these bits ORed together:
# 1: contains a point / zero-dimensional object
# 2: contains a linestring / one-dimensional object
# 4: contains a polygon / two-dimensional object
def _geom_dimensions(g):
dim = _GEOMETRY_DIMENSIONS.get(g.geom_type)
assert dim is not None, "Unknown geometry type " + \
"%s in transform._geom_dimensions." % \
repr(g.geom_type)
# recurse for geometry collections to find the true
# dimensionality of the geometry.
if dim == _NULL_DIMENSION:
for part in g.geoms:
dim = dim | _geom_dimensions(part)
return dim
def _flatten_geoms(shape):
"""
Flatten a shape so that it is returned as a list
of single geometries.
>>> [g.wkt for g in _flatten_geoms(shapely.wkt.loads('GEOMETRYCOLLECTION (MULTIPOINT(-1 -1, 0 0), GEOMETRYCOLLECTION (POINT(1 1), POINT(2 2), GEOMETRYCOLLECTION (POINT(3 3))), LINESTRING(0 0, 1 1))'))]
['POINT (-1 -1)', 'POINT (0 0)', 'POINT (1 1)', 'POINT (2 2)', 'POINT (3 3)', 'LINESTRING (0 0, 1 1)']
>>> _flatten_geoms(Polygon())
[]
>>> _flatten_geoms(MultiPolygon())
[]
""" # noqa
if shape.geom_type.startswith('Multi'):
return shape.geoms
elif shape.is_empty:
return []
elif shape.type == 'GeometryCollection':
geoms = []
for g in shape.geoms:
geoms.extend(_flatten_geoms(g))
return geoms
else:
return [shape]
def _filter_geom_types(shape, keep_dim):
"""
Return a geometry which consists of the geometries in
the input shape filtered so that only those of the
given dimension remain. Collapses any structure (e.g:
of geometry collections) down to a single or multi-
geometry.
>>> _filter_geom_types(GeometryCollection(), _POINT_DIMENSION).wkt
'GEOMETRYCOLLECTION EMPTY'
>>> _filter_geom_types(Point(0,0), _POINT_DIMENSION).wkt
'POINT (0 0)'
>>> _filter_geom_types(Point(0,0), _LINE_DIMENSION).wkt
'GEOMETRYCOLLECTION EMPTY'
>>> _filter_geom_types(Point(0,0), _POLYGON_DIMENSION).wkt
'GEOMETRYCOLLECTION EMPTY'
>>> _filter_geom_types(LineString([(0,0),(1,1)]), _LINE_DIMENSION).wkt
'LINESTRING (0 0, 1 1)'
>>> _filter_geom_types(Polygon([(0,0),(1,1),(1,0),(0,0)],[]), _POLYGON_DIMENSION).wkt
'POLYGON ((0 0, 1 1, 1 0, 0 0))'
>>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (POINT(0 0), LINESTRING(0 0, 1 1))'), _POINT_DIMENSION).wkt
'POINT (0 0)'
>>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (POINT(0 0), LINESTRING(0 0, 1 1))'), _LINE_DIMENSION).wkt
'LINESTRING (0 0, 1 1)'
>>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (POINT(0 0), LINESTRING(0 0, 1 1))'), _POLYGON_DIMENSION).wkt
'GEOMETRYCOLLECTION EMPTY'
>>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (POINT(0 0), GEOMETRYCOLLECTION (POINT(1 1), LINESTRING(0 0, 1 1)))'), _POINT_DIMENSION).wkt
'MULTIPOINT (0 0, 1 1)'
>>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (MULTIPOINT(-1 -1, 0 0), GEOMETRYCOLLECTION (POINT(1 1), POINT(2 2), GEOMETRYCOLLECTION (POINT(3 3))), LINESTRING(0 0, 1 1))'), _POINT_DIMENSION).wkt
'MULTIPOINT (-1 -1, 0 0, 1 1, 2 2, 3 3)'
>>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (LINESTRING(-1 -1, 0 0), GEOMETRYCOLLECTION (LINESTRING(1 1, 2 2), GEOMETRYCOLLECTION (POINT(3 3))), LINESTRING(0 0, 1 1))'), _LINE_DIMENSION).wkt
'MULTILINESTRING ((-1 -1, 0 0), (1 1, 2 2), (0 0, 1 1))'
>>> _filter_geom_types(shapely.wkt.loads('GEOMETRYCOLLECTION (POLYGON((-2 -2, -2 2, 2 2, 2 -2, -2 -2)), GEOMETRYCOLLECTION (LINESTRING(1 1, 2 2), GEOMETRYCOLLECTION (POLYGON((3 3, 0 0, 1 0, 3 3)))), LINESTRING(0 0, 1 1))'), _POLYGON_DIMENSION).wkt
'MULTIPOLYGON (((-2 -2, -2 2, 2 2, 2 -2, -2 -2)), ((3 3, 0 0, 1 0, 3 3)))'
""" # noqa
# flatten the geometries, and keep the parts with the
# dimension that we want. each item in the parts list
# should be a single (non-multi) geometry.
parts = []
for g in _flatten_geoms(shape):
if _geom_dimensions(g) == keep_dim:
parts.append(g)
# figure out how to construct a multi-geometry of the
# dimension wanted.
if keep_dim == _POINT_DIMENSION:
constructor = MultiPoint
elif keep_dim == _LINE_DIMENSION:
constructor = MultiLineString
elif keep_dim == _POLYGON_DIMENSION:
constructor = MultiPolygon
else:
raise ValueError('Unknown dimension %d in _filter_geom_types'
% keep_dim)
if len(parts) == 0:
return constructor()
elif len(parts) == 1:
# return the singular geometry
return parts[0]
else:
if keep_dim == _POINT_DIMENSION:
# not sure why the MultiPoint constructor wants
# its coordinates differently from MultiPolygon
# and MultiLineString...
coords = []
for p in parts:
coords.extend(p.coords)
return MultiPoint(coords)
else:
return constructor(parts)
# creates a list of indexes, each one for a different cut
# attribute value, in priority order.
#
# STRtree stores geometries and returns these from the query,
# but doesn't appear to allow any other attributes to be
# stored along with the geometries. this means we have to
# separate the index out into several "layers", each having
# the same attribute value. which isn't all that much of a
# pain, as we need to cut the shapes in a certain order to
# ensure priority anyway.
#
# intersect_func is a functor passed in to control how an
# intersection is performed. it is passed
class _Cutter:
def __init__(self, features, attrs, attribute,
target_attribute, keep_geom_type,
intersect_func):
group = defaultdict(list)
for feature in features:
shape, props, fid = feature
attr = props.get(attribute)
group[attr].append(shape)
# if the user didn't supply any options for controlling
# the cutting priority, then just make some up based on
# the attributes which are present in the dataset.
if attrs is None:
all_attrs = set()
for feature in features:
all_attrs.add(feature[1].get(attribute))
attrs = list(all_attrs)
# alternatively, the user can specify an ordering
# function over the attributes.
elif isinstance(attrs, dict):
attrs = _sorted_attributes(features, attrs,
attribute)
cut_idxs = list()
for attr in attrs:
if attr in group:
cut_idxs.append((attr, STRtree(group[attr])))
self.attribute = attribute
self.target_attribute = target_attribute
self.cut_idxs = cut_idxs
self.keep_geom_type = keep_geom_type
self.intersect_func = intersect_func
self.new_features = []
# cut up the argument shape, projecting the configured
# attribute to the properties of the intersecting parts
# of the shape. adds all the selected bits to the
# new_features list.
def cut(self, shape, props, fid):
original_geom_dim = _geom_dimensions(shape)
for cutting_attr, cut_idx in self.cut_idxs:
cutting_shapes = cut_idx.query(shape)
for cutting_shape in cutting_shapes:
if cutting_shape.intersects(shape):
shape = self._intersect(
shape, props, fid, cutting_shape,
cutting_attr, original_geom_dim)
# if there's no geometry left outside the
# shape, then we can exit the function
# early, as nothing else will intersect.
if shape.is_empty:
return
# if there's still geometry left outside, then it
# keeps the old, unaltered properties.
self._add(shape, props, fid, original_geom_dim)
# only keep geometries where either the type is the
# same as the original, or we're not trying to keep the
# same type.
def _add(self, shape, props, fid, original_geom_dim):
# if keeping the same geometry type, then filter
# out anything that's different.
if self.keep_geom_type:
shape = _filter_geom_types(
shape, original_geom_dim)
# don't add empty shapes, they're completely
# useless. the previous step may also have created
# an empty geometry if there weren't any items of
# the type we're looking for.
if shape.is_empty:
return
# add the shape as-is unless we're trying to keep
# the geometry type or the geometry dimension is
# identical.
self.new_features.append((shape, props, fid))
# intersects the shape with the cutting shape and
# handles attribute projection. anything "inside" is
# kept as it must have intersected the highest
# priority cutting shape already. the remainder is
# returned.
def _intersect(self, shape, props, fid, cutting_shape,
cutting_attr, original_geom_dim):
inside, outside = \
self.intersect_func(shape, cutting_shape)
# intersections are tricky, and it seems that the geos
# library (perhaps only certain versions of it) don't
# handle intersection of a polygon with its boundary
# very well. for example:
#
# >>> import shapely.geometry as g
# >>> p = g.Point(0,0).buffer(1.0, resolution=2)
# >>> b = p.boundary
# >>> b.intersection(p).wkt
# 'MULTILINESTRING ((1 0, 0.7071067811865481 -0.7071067811865469), (0.7071067811865481 -0.7071067811865469, 1.615544574432587e-15 -1), (1.615544574432587e-15 -1, -0.7071067811865459 -0.7071067811865491), (-0.7071067811865459 -0.7071067811865491, -1 -3.231089148865173e-15), (-1 -3.231089148865173e-15, -0.7071067811865505 0.7071067811865446), (-0.7071067811865505 0.7071067811865446, -4.624589118372729e-15 1), (-4.624589118372729e-15 1, 0.7071067811865436 0.7071067811865515), (0.7071067811865436 0.7071067811865515, 1 0))' # noqa
#
# the result multilinestring could be joined back into
# the original object. but because it has separate parts,
# each requires duplicating the start and end point, and
# each separate segment gets a different polygon buffer
# in Tangram - basically, it's a problem all round.
#
# two solutions to this: given that we're cutting, then
# the inside and outside should union back to the
# original shape - if either is empty then the whole
# object ought to be in the other.
#
# the second solution, for when there is actually some
# part cut, is that we can attempt to merge lines back
# together.
if outside.is_empty and not inside.is_empty:
inside = shape
elif inside.is_empty and not outside.is_empty:
outside = shape
elif original_geom_dim == _LINE_DIMENSION:
inside = _linemerge(inside)
outside = _linemerge(outside)
if cutting_attr is not None:
inside_props = props.copy()
inside_props[self.target_attribute] = cutting_attr
else:
inside_props = props
self._add(inside, inside_props, fid,
original_geom_dim)
return outside
def _intersect_cut(shape, cutting_shape):
"""
intersect by cutting, so that the cutting shape defines
a part of the shape which is inside and a part which is
outside as two separate shapes.
"""
inside = shape.intersection(cutting_shape)
outside = shape.difference(cutting_shape)
return inside, outside
# intersect by looking at the overlap size. we can define
# a cut-off fraction and if that fraction or more of the
# area of the shape is within the cutting shape, it's
# inside, else outside.
#
# this is done using a closure so that we can curry away
# the fraction parameter.
def _intersect_overlap(min_fraction):
# the inner function is what will actually get
# called, but closing over min_fraction means it
# will have access to that.
def _f(shape, cutting_shape):
overlap = shape.intersection(cutting_shape).area
area = shape.area
# need an empty shape of the same type as the
# original shape, which should be possible, as
# it seems shapely geometries all have a default
# constructor to empty.
empty = type(shape)()
if ((area > 0) and (overlap / area) >= min_fraction):
return shape, empty
else:
return empty, shape
return _f
# intersect by looking at the overlap length. if more than a minimum fraction
# of the shape's length is within the cutting area, then we will consider it
# totally "cut".
def _intersect_linear_overlap(min_fraction):
# the inner function is what will actually get
# called, but closing over min_fraction means it
# will have access to that.
def _f(shape, cutting_shape):
overlap = shape.intersection(cutting_shape).length
total = shape.length
empty = type(shape)()
if ((total > 0) and (overlap / total) >= min_fraction):
return shape, empty
else:
return empty, shape
return _f
# find a layer by iterating through all the layers. this
# would be easier if they layers were in a dict(), but
# that's a pretty invasive change.
#
# returns None if the layer can't be found.
def _find_layer(feature_layers, name):
for feature_layer in feature_layers:
layer_datum = feature_layer['layer_datum']
layer_name = layer_datum['name']
if layer_name == name:
return feature_layer
return None
# shared implementation of the intercut algorithm, used both when cutting
# shapes and using overlap to determine inside / outsideness.
#
# the filter_fn are used to filter which features from the base layer are cut
# with which features from the cutting layer. cutting layer features which do
# not match the filter are ignored, base layer features are left in the layer
# unchanged.
def _intercut_impl(intersect_func, feature_layers, base_layer, cutting_layer,
attribute, target_attribute, cutting_attrs, keep_geom_type,
cutting_filter_fn=None, base_filter_fn=None):
# the target attribute can default to the attribute if
# they are distinct. but often they aren't, and that's
# why target_attribute is a separate parameter.
if target_attribute is None:
target_attribute = attribute
# search through all the layers and extract the ones
# which have the names of the base and cutting layer.
# it would seem to be better to use a dict() for
# layers, and this will give odd results if names are
# allowed to be duplicated.
base = _find_layer(feature_layers, base_layer)
cutting = _find_layer(feature_layers, cutting_layer)
# base or cutting layer not available. this could happen
# because of a config problem, in which case you'd want
# it to be reported. but also can happen when the client
# selects a subset of layers which don't include either
# the base or the cutting layer. then it's not an error.
# the interesting case is when they select the base but
# not the cutting layer...
if base is None or cutting is None:
return None
base_features = base['features']
cutting_features = cutting['features']
# filter out any features that we don't want to cut with
if cutting_filter_fn is not None:
cutting_features = filter(cutting_filter_fn, cutting_features)
# short-cut return if there are no cutting features => there's nothing
# to do.
if not cutting_features:
return base
# make a cutter object to help out
cutter = _Cutter(cutting_features, cutting_attrs,
attribute, target_attribute,
keep_geom_type, intersect_func)
skipped_features = []
for base_feature in base_features:
if base_filter_fn is None or base_filter_fn(base_feature):
# we use shape to track the current remainder of the
# shape after subtracting bits which are inside cuts.
shape, props, fid = base_feature
cutter.cut(shape, props, fid)
else:
skipped_features.append(base_feature)
base['features'] = cutter.new_features + skipped_features
return base
class Where(object):
"""
A "where" clause for filtering features based on their properties.
This is commonly used in post-processing steps to configure which features
in the layer we want to operate on, allowing us to write simple Python
expressions in the YAML.
"""
def __init__(self, where):
self.fn = compile(where, 'queries.yaml', 'eval')
def __call__(self, feature):
shape, props, fid = feature
local = defaultdict(lambda: None)
local.update(props)
return eval(self.fn, {}, local)
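# A minimal usage sketch for Where (hypothetical feature values). Unknown
# property names evaluate to None thanks to the defaultdict locals:
#   keep_water = Where("kind == 'water'")
#   keep_water((shape, {'kind': 'water'}, 1))  # -> True
#   keep_water((shape, {'kind': 'park'}, 2))   # -> False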
# intercut takes features from a base layer and cuts each
# of them against a cutting layer, splitting any base
# feature which intersects into separate inside and outside
# parts.
#
# the parts of each base feature which are outside any
# cutting feature are left unchanged. the parts which are
# inside have their property with the key given by the
# 'target_attribute' parameter set to the same value as the
# property from the cutting feature with the key given by
# the 'attribute' parameter.
#
# the intended use of this is to project attributes from one
# layer to another so that they can be styled appropriately.
#
# - feature_layers: list of layers containing both the base
# and cutting layer.
# - base_layer: str name of the base layer.
# - cutting_layer: str name of the cutting layer.
# - attribute: optional str name of the property / attribute
# to take from the cutting layer.
# - target_attribute: optional str name of the property /
# attribute to assign on the base layer. defaults to the
# same as the 'attribute' parameter.
# - cutting_attrs: list of str, the priority of the values
# to be used in the cutting operation. this ensures that
# items at the beginning of the list get cut first and
# those values have priority (won't be overridden by any
# other shape cutting).
# - keep_geom_type: if truthy, then filter the output to be
# the same type as the input. defaults to True, because
# this seems like an eminently sensible behaviour.
# - base_where: if truthy, a Python expression which is
# evaluated in the context of a feature's properties and
# can return True if the feature is to be cut and False
# if it should be passed through unmodified.
# - cutting_where: if truthy, a Python expression which is
# evaluated in the context of a feature's properties and
# can return True if the feature is to be used for cutting
# and False if it should be ignored.
#
# returns a feature layer which is the base layer cut by the
# cutting layer.
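# For illustration only, a hypothetical post-process entry in the layer
# configuration might wire this up as follows (the exact schema depends
# on the config loader in use):
#
#   - fn: transform.intercut
#     params:
#       base_layer: roads
#       cutting_layer: landuse
#       attribute: kind
#       target_attribute: landuse_kind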
def intercut(ctx):
feature_layers = ctx.feature_layers
base_layer = ctx.params.get('base_layer')
assert base_layer, \
'Parameter base_layer was missing from intercut config'
cutting_layer = ctx.params.get('cutting_layer')
assert cutting_layer, \
'Parameter cutting_layer was missing from intercut ' \
'config'
attribute = ctx.params.get('attribute')
# sanity check on the availability of the cutting
# attribute.
assert attribute is not None, \
'Parameter attribute to intercut was None, but ' + \
'should have been an attribute name. Perhaps check ' + \
'your configuration file and queries.'
target_attribute = ctx.params.get('target_attribute')
cutting_attrs = ctx.params.get('cutting_attrs')
keep_geom_type = ctx.params.get('keep_geom_type', True)
base_where = ctx.params.get('base_where')
cutting_where = ctx.params.get('cutting_where')
# compile the where-clauses, if any were configured
if base_where:
base_where = Where(base_where)
if cutting_where:
cutting_where = Where(cutting_where)
return _intercut_impl(
_intersect_cut, feature_layers, base_layer, cutting_layer,
attribute, target_attribute, cutting_attrs, keep_geom_type,
base_filter_fn=base_where, cutting_filter_fn=cutting_where)
# overlap measures the area overlap between each feature in
# the base layer and each in the cutting layer. if the
# fraction of overlap is greater than the min_fraction
# constant, then the feature in the base layer is assigned
# a property with its value derived from the overlapping
# feature from the cutting layer.
#
# the intended use of this is to project attributes from one
# layer to another so that they can be styled appropriately.
#
# it has the same parameters as intercut, see above.
#
# returns a feature layer which is the base layer with
# overlapping features having attributes projected from the
# cutting layer.
def overlap(ctx):
feature_layers = ctx.feature_layers
base_layer = ctx.params.get('base_layer')
assert base_layer, \
'Parameter base_layer was missing from overlap config'
cutting_layer = ctx.params.get('cutting_layer')
assert cutting_layer, \
'Parameter cutting_layer was missing from overlap ' \
'config'
attribute = ctx.params.get('attribute')
# sanity check on the availability of the cutting
# attribute.
assert attribute is not None, \
'Parameter attribute to overlap was None, but ' + \
'should have been an attribute name. Perhaps check ' + \
'your configuration file and queries.'
target_attribute = ctx.params.get('target_attribute')
cutting_attrs = ctx.params.get('cutting_attrs')
keep_geom_type = ctx.params.get('keep_geom_type', True)
min_fraction = ctx.params.get('min_fraction', 0.8)
base_where = ctx.params.get('base_where')
cutting_where = ctx.params.get('cutting_where')
# use a different function for linear overlaps (i.e: roads with polygons)
# than area overlaps. keeping this explicit (rather than relying on the
# geometry type) means we don't end up with unexpected lines in a polygonal
# layer.
linear = ctx.params.get('linear', False)
if linear:
overlap_fn = _intersect_linear_overlap(min_fraction)
else:
overlap_fn = _intersect_overlap(min_fraction)
# compile the where-clauses, if any were configured
if base_where:
base_where = Where(base_where)
if cutting_where:
cutting_where = Where(cutting_where)
return _intercut_impl(
overlap_fn, feature_layers, base_layer,
cutting_layer, attribute, target_attribute, cutting_attrs,
keep_geom_type, cutting_filter_fn=cutting_where,
base_filter_fn=base_where)
# intracut cuts a layer with a set of features from that same
# layer, which are then removed.
#
# for example, with water boundaries we get one set of linestrings
# from the admin polygons and another set from the original ways
# where the `maritime=yes` tag is set. we don't actually want
# separate linestrings, we just want the `maritime=yes` attribute
# on the first set of linestrings.
def intracut(ctx):
feature_layers = ctx.feature_layers
base_layer = ctx.params.get('base_layer')
assert base_layer, \
'Parameter base_layer was missing from intracut config'
attribute = ctx.params.get('attribute')
# sanity check on the availability of the cutting
# attribute.
assert attribute is not None, \
'Parameter attribute to intracut was None, but ' + \
'should have been an attribute name. Perhaps check ' + \
'your configuration file and queries.'
base = _find_layer(feature_layers, base_layer)
if base is None:
return None
    # unlike intercut & overlap, which work on separate layers,
# intracut separates features in the same layer into
# different sets to work on.
base_features = list()
cutting_features = list()
for shape, props, fid in base['features']:
if attribute in props:
cutting_features.append((shape, props, fid))
else:
base_features.append((shape, props, fid))
cutter = _Cutter(cutting_features, None, attribute,
attribute, True, _intersect_cut)
for shape, props, fid in base_features:
cutter.cut(shape, props, fid)
base['features'] = cutter.new_features
return base
# place kinds, as used by OSM, mapped to their rough
# min_zoom so that we can provide a defaulted,
# non-curated min_zoom value.
_default_min_zoom_for_place_kind = {
'locality': 13,
'isolated_dwelling': 13,
'farm': 13,
'hamlet': 12,
'village': 11,
'suburb': 10,
'quarter': 10,
'borough': 10,
'town': 8,
'city': 8,
'province': 4,
'state': 4,
'sea': 3,
'country': 0,
'ocean': 0,
'continent': 0
}
# if the feature does not have a min_zoom attribute already,
# which would have come from a curated source, then calculate
# a default one based on the kind of place it is.
def calculate_default_place_min_zoom(shape, properties, fid, zoom):
min_zoom = properties.get('min_zoom')
if min_zoom is not None:
return shape, properties, fid
# base calculation off kind
kind = properties.get('kind')
if kind is None:
return shape, properties, fid
min_zoom = _default_min_zoom_for_place_kind.get(kind)
if min_zoom is None:
return shape, properties, fid
# adjust min_zoom for state / country capitals
if kind in ('city', 'town'):
if properties.get('region_capital'):
min_zoom -= 1
elif properties.get('country_capital'):
min_zoom -= 2
properties['min_zoom'] = min_zoom
return shape, properties, fid
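# an illustrative example (not part of the original module): a town which
# is a region capital would default to min_zoom 8 - 1 = 7:
#
#   >>> _, props, _ = calculate_default_place_min_zoom(
#   ...     None, {'kind': 'town', 'region_capital': True}, 1, 10)
#   >>> props['min_zoom']
#   7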
def _make_new_properties(props, props_instructions):
"""
make new properties from existing properties and a
dict of instructions.
the algorithm is:
- where a key appears with value True, it will be
copied from the existing properties.
- where it's a dict, the values will be looked up
in that dict.
- otherwise the value will be used directly.
"""
new_props = dict()
for k, v in props_instructions.iteritems():
if v is True:
# this works even when props[k] = None
if k in props:
new_props[k] = props[k]
elif isinstance(v, dict):
# this will return None, which allows us to
# use the dict to set default values.
original_v = props.get(k)
if original_v in v:
new_props[k] = v[original_v]
elif isinstance(v, list) and len(v) == 1:
# this is a hack to implement escaping for when the output value
# should be a value, but that value (e.g: True, or a dict) is
# used for some other purpose above.
new_props[k] = v[0]
else:
new_props[k] = v
return new_props
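# an illustrative sketch of the three instruction types (assumed values,
# not from the original module):
#
#   >>> _make_new_properties(
#   ...     {'name': 'Foo', 'kind': 'park'},
#   ...     {'name': True,                     # copy if present
#   ...      'kind': {'park': 'green_space'},  # look up in mapping
#   ...      'source': 'example'})             # use value directly
#   {'name': 'Foo', 'kind': 'green_space', 'source': 'example'}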
def _snap_to_grid(shape, grid_size):
"""
Snap coordinates of a shape to a multiple of `grid_size`.
This can be useful when there's some error in point
positions, but we're using an algorithm which is very
sensitive to coordinate exactness. For example, when
calculating the boundary of several items, it makes a
big difference whether the shapes touch or there's a
very small gap between them.
This is implemented here because it doesn't exist in
GEOS or Shapely. It exists in PostGIS, but only because
it's implemented there as well. Seems like it would be a
useful thing to have in GEOS, though.
>>> _snap_to_grid(Point(0.5, 0.5), 1).wkt
'POINT (1 1)'
>>> _snap_to_grid(Point(0.1, 0.1), 1).wkt
'POINT (0 0)'
>>> _snap_to_grid(Point(-0.1, -0.1), 1).wkt
'POINT (-0 -0)'
>>> _snap_to_grid(LineString([(1.1,1.1),(1.9,0.9)]), 1).wkt
'LINESTRING (1 1, 2 1)'
    >>> _snap_to_grid(Polygon([(0.1,0.1),(3.1,0.1),(3.1,3.1),(0.1,3.1),(0.1,0.1)],[[(1.1,0.9),(1.1,1.9),(2.1,1.9),(2.1,0.9),(1.1,0.9)]]), 1).wkt
'POLYGON ((0 0, 3 0, 3 3, 0 3, 0 0), (1 1, 1 2, 2 2, 2 1, 1 1))'
>>> _snap_to_grid(MultiPoint([Point(0.1, 0.1), Point(0.9, 0.9)]), 1).wkt
'MULTIPOINT (0 0, 1 1)'
>>> _snap_to_grid(MultiLineString([LineString([(0.1, 0.1), (0.9, 0.9)]), LineString([(0.9, 0.1),(0.1,0.9)])]), 1).wkt
'MULTILINESTRING ((0 0, 1 1), (1 0, 0 1))'
""" # noqa
# snap a single coordinate value
def _snap(c):
return grid_size * round(c / grid_size, 0)
# snap all coordinate pairs in something iterable
def _snap_coords(c):
return [(_snap(x), _snap(y)) for x, y in c]
# recursively snap all coordinates in an iterable over
# geometries.
def _snap_multi(geoms):
return [_snap_to_grid(g, grid_size) for g in geoms]
shape_type = shape.geom_type
if shape.is_empty or shape_type == 'GeometryCollection':
return None
elif shape_type == 'Point':
return Point(_snap(shape.x), _snap(shape.y))
elif shape_type == 'LineString':
return LineString(_snap_coords(shape.coords))
elif shape_type == 'Polygon':
exterior = LinearRing(_snap_coords(shape.exterior.coords))
interiors = []
for interior in shape.interiors:
interiors.append(LinearRing(_snap_coords(interior.coords)))
return Polygon(exterior, interiors)
elif shape_type == 'MultiPoint':
return MultiPoint(_snap_multi(shape.geoms))
elif shape_type == 'MultiLineString':
return MultiLineString(_snap_multi(shape.geoms))
elif shape_type == 'MultiPolygon':
return MultiPolygon(_snap_multi(shape.geoms))
else:
raise ValueError('_snap_to_grid: unimplemented for shape type %s'
% repr(shape_type))
def exterior_boundaries(ctx):
"""
    create new features from the boundaries of polygons
in the base layer, subtracting any sections of the
boundary which intersect other polygons. this is
added as a new layer if new_layer_name is not None
otherwise appended to the base layer.
the purpose of this is to provide us a shoreline /
river bank layer from the water layer without having
any of the shoreline / river bank draw over the top
of any of the base polygons.
properties on the lines returned are copied / adapted
from the existing layer using the new_props dict. see
_make_new_properties above for the rules.
buffer_size determines whether any buffering will be
done to the index polygons. a judiciously small
amount of buffering can help avoid "dashing" due to
tolerance in the intersection, but will also create
small overlaps between lines.
any features in feature_layers[layer] which aren't
polygons will be ignored.
note that the `bounds` kwarg should be filled out
automatically by tilequeue - it does not have to be
provided from the config.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
base_layer = ctx.params.get('base_layer')
assert base_layer, 'Missing base_layer parameter'
new_layer_name = ctx.params.get('new_layer_name')
prop_transform = ctx.params.get('prop_transform')
buffer_size = ctx.params.get('buffer_size')
start_zoom = ctx.params.get('start_zoom', 0)
snap_tolerance = ctx.params.get('snap_tolerance')
layer = None
# don't start processing until the start zoom
if zoom < start_zoom:
return layer
# search through all the layers and extract the one
# which has the name of the base layer we were given
# as a parameter.
layer = _find_layer(feature_layers, base_layer)
# if we failed to find the base layer then it's
# possible the user just didn't ask for it, so return
# an empty result.
if layer is None:
return None
if prop_transform is None:
prop_transform = {}
features = layer['features']
# this exists to enable a dirty hack to try and work
# around duplicate geometries in the database. this
# happens when a multipolygon relation can't
# supersede a member way because the way contains tags
# which aren't present on the relation. working around
# this by calling "union" on geometries proved to be
# too expensive (~3x current), so this hack looks at
# the way_area of each object, and uses that as a
# proxy for identity. it's not perfect, but the chance
# that there are two overlapping polygons of exactly
# the same size must be pretty small. however, the
# STRTree we're using as a spatial index doesn't
# directly support setting attributes on the indexed
# geometries, so this class exists to carry the area
# attribute through the index to the point where we
# want to use it.
class geom_with_area:
def __init__(self, geom, area):
self.geom = geom
self.area = area
self._geom = geom._geom
# STRtree started filtering out empty geoms at some version, so
# we need to proxy the is_empty property.
self.is_empty = geom.is_empty
# create an index so that we can efficiently find the
# polygons intersecting the 'current' one. Note that
# we're only interested in intersecting with other
# polygonal features, and that intersecting with lines
# can give some unexpected results.
indexable_features = list()
indexable_shapes = list()
for shape, props, fid in features:
if shape.type in ('Polygon', 'MultiPolygon'):
# the data comes back clipped from the queries now so we
# no longer need to clip here
snapped = shape
if snap_tolerance is not None:
snapped = _snap_to_grid(shape, snap_tolerance)
# geometry collections are returned as None
if snapped is None:
continue
# snapping coordinates and clipping shapes might make the shape
# invalid, so we need a way to clean them. one simple, but not
# foolproof, way is to buffer them by 0.
if not snapped.is_valid:
snapped = snapped.buffer(0)
# that still might not have done the trick, so drop any polygons
# which are still invalid so as not to cause errors later.
if not snapped.is_valid:
# TODO: log this as a warning!
continue
# skip any geometries that may have become empty
if snapped.is_empty:
continue
indexable_features.append((snapped, props, fid))
indexable_shapes.append(geom_with_area(snapped, props.get('area')))
index = STRtree(indexable_shapes)
new_features = list()
# loop through all the polygons, taking the boundary
# of each and subtracting any parts which are within
# other polygons. what remains (if anything) is the
# new feature.
for feature in indexable_features:
shape, props, fid = feature
boundary = shape.boundary
cutting_shapes = index.query(boundary)
for cutting_item in cutting_shapes:
cutting_shape = cutting_item.geom
cutting_area = cutting_item.area
# dirty hack: this object is probably a
# superseded way if the ID is positive and
# the area is the same as the cutting area.
# using the ID check here prevents the
# boundary from being duplicated.
is_superseded_way = \
cutting_area == props.get('area') and \
props.get('id') > 0
if cutting_shape is not shape and \
not is_superseded_way:
buf = cutting_shape
if buffer_size is not None:
buf = buf.buffer(buffer_size)
boundary = boundary.difference(buf)
# filter only linestring-like objects. we don't
# want any points which might have been created
# by the intersection.
boundary = _filter_geom_types(boundary, _LINE_DIMENSION)
if not boundary.is_empty:
new_props = _make_new_properties(props, prop_transform)
new_features.append((boundary, new_props, fid))
if new_layer_name is None:
# no new layer requested, instead add new
# features into the same layer.
layer['features'].extend(new_features)
return layer
else:
# make a copy of the old layer's information - it
# shouldn't matter about most of the settings, as
# post-processing is one of the last operations.
# but we need to override the name to ensure we get
# some output.
new_layer_datum = layer['layer_datum'].copy()
new_layer_datum['name'] = new_layer_name
new_layer = layer.copy()
new_layer['layer_datum'] = new_layer_datum
new_layer['features'] = new_features
new_layer['name'] = new_layer_name
return new_layer
def _inject_key(key, infix):
"""
OSM keys often have several parts, separated by ':'s.
When we merge properties from the left and right of a
boundary, we want to preserve information like the
left and right names, but prefer the form "name:left"
rather than "left:name", so we have to insert an
infix string to these ':'-delimited arrays.
>>> _inject_key('a:b:c', 'x')
'a:x:b:c'
>>> _inject_key('a', 'x')
'a:x'
"""
parts = key.split(':')
parts.insert(1, infix)
return ':'.join(parts)
def _merge_left_right_props(lprops, rprops):
"""
Given a set of properties to the left and right of a
boundary, we want to keep as many of these as possible,
but keeping them all might be a bit too much.
So we want to keep the key-value pairs which are the
same in both in the output, but merge the ones which
are different by infixing them with 'left' and 'right'.
>>> _merge_left_right_props({}, {})
{}
>>> _merge_left_right_props({'a':1}, {})
{'a:left': 1}
>>> _merge_left_right_props({}, {'b':2})
{'b:right': 2}
>>> _merge_left_right_props({'a':1, 'c':3}, {'b':2, 'c':3})
{'a:left': 1, 'c': 3, 'b:right': 2}
>>> _merge_left_right_props({'a':1},{'a':2})
{'a:left': 1, 'a:right': 2}
"""
keys = set(lprops.keys()) | set(rprops.keys())
new_props = dict()
# props in both are copied directly if they're the same
# in both the left and right. they get left/right
# inserted after the first ':' if they're different.
for k in keys:
lv = lprops.get(k)
rv = rprops.get(k)
if lv == rv:
new_props[k] = lv
else:
if lv is not None:
new_props[_inject_key(k, 'left')] = lv
if rv is not None:
new_props[_inject_key(k, 'right')] = rv
return new_props
def _make_joined_name(props):
"""
Updates the argument to contain a 'name' element
generated from joining the left and right names.
Just to make it easier for people, we generate a name
which is easy to display of the form "LEFT - RIGHT".
The individual properties are available if the user
wants to generate a more complex name.
>>> x = {}
>>> _make_joined_name(x)
>>> x
{}
>>> x = {'name:left':'Left'}
>>> _make_joined_name(x)
>>> x
{'name': 'Left', 'name:left': 'Left'}
>>> x = {'name:right':'Right'}
>>> _make_joined_name(x)
>>> x
{'name': 'Right', 'name:right': 'Right'}
>>> x = {'name:left':'Left', 'name:right':'Right'}
>>> _make_joined_name(x)
>>> x
{'name:right': 'Right', 'name': 'Left - Right', 'name:left': 'Left'}
>>> x = {'name:left':'Left', 'name:right':'Right', 'name': 'Already Exists'}
>>> _make_joined_name(x)
>>> x
{'name:right': 'Right', 'name': 'Already Exists', 'name:left': 'Left'}
""" # noqa
# don't overwrite an existing name
if 'name' in props:
return
lname = props.get('name:left')
rname = props.get('name:right')
if lname is not None:
if rname is not None:
props['name'] = "%s - %s" % (lname, rname)
else:
props['name'] = lname
elif rname is not None:
props['name'] = rname
def _linemerge(geom):
"""
Try to extract all the linear features from the geometry argument
and merge them all together into the smallest set of linestrings
possible.
This is almost identical to Shapely's linemerge, and uses it,
except that Shapely's throws exceptions when passed a single
linestring, or a geometry collection with lines and points in it.
So this can be thought of as a "safer" wrapper around Shapely's
function.
"""
geom_type = geom.type
result_geom = None
if geom_type == 'GeometryCollection':
# collect together everything line-like from the geometry
# collection and filter out anything that's empty
lines = []
for line in geom.geoms:
line = _linemerge(line)
if not line.is_empty:
lines.append(line)
result_geom = linemerge(lines) if lines else None
elif geom_type == 'LineString':
result_geom = geom
elif geom_type == 'MultiLineString':
result_geom = linemerge(geom)
else:
result_geom = None
if result_geom is not None:
# simplify with very small tolerance to remove duplicate points.
# almost duplicate or nearly colinear points can occur due to
# numerical round-off or precision in the intersection algorithm, and
# this should help get rid of those. see also:
# http://lists.gispython.org/pipermail/community/2014-January/003236.html
#
# the tolerance here is hard-coded to a fraction of the
# coordinate magnitude. there isn't a perfect way to figure
# out what this tolerance should be, so this may require some
# tweaking.
epsilon = max(map(abs, result_geom.bounds)) * float_info.epsilon * 1000
result_geom = result_geom.simplify(epsilon, True)
result_geom_type = result_geom.type
# the geometry may still have invalid or repeated points if it has zero
# length segments, so remove anything where the length is less than
# epsilon.
if result_geom_type == 'LineString':
if result_geom.length < epsilon:
result_geom = None
elif result_geom_type == 'MultiLineString':
parts = []
for line in result_geom.geoms:
if line.length >= epsilon:
parts.append(line)
result_geom = MultiLineString(parts)
return result_geom if result_geom else MultiLineString([])
def _orient(geom):
"""
Given a shape, returns the counter-clockwise oriented
version. Does not affect points or lines.
This version is required because Shapely's version is
only defined for single polygons, and we want
something that works generically.
In the example below, note the change in order of the
coordinates in `p2`, which is initially not oriented
CCW.
>>> p1 = Polygon([[0, 0], [1, 0], [0, 1], [0, 0]])
>>> p2 = Polygon([[0, 1], [1, 1], [1, 0], [0, 1]])
>>> orient(p1).wkt
'POLYGON ((0 0, 1 0, 0 1, 0 0))'
>>> orient(p2).wkt
'POLYGON ((0 1, 1 0, 1 1, 0 1))'
>>> _orient(MultiPolygon([p1, p2])).wkt
'MULTIPOLYGON (((0 0, 1 0, 0 1, 0 0)), ((0 1, 1 0, 1 1, 0 1)))'
"""
def oriented_multi(kind, geom):
oriented_geoms = [_orient(g) for g in geom.geoms]
return kind(oriented_geoms)
geom_type = geom.type
if geom_type == 'Polygon':
geom = orient(geom)
elif geom_type == 'MultiPolygon':
geom = oriented_multi(MultiPolygon, geom)
elif geom_type == 'GeometryCollection':
geom = oriented_multi(GeometryCollection, geom)
return geom
def _fix_disputed_left_right_kinds(props):
"""
After merging left/right props, we might find that any kind:XX for disputed
borders are mixed up as kind:left:XX or kind:right:XX and we want to merge
them back together again.
"""
keys = []
for k in props.keys():
if k.startswith('kind:left:') or k.startswith('kind:right:'):
keys.append(k)
for k in keys:
prefix = 'kind:left:' if k.startswith('kind:left:') else 'kind:right:'
new_key = 'kind:' + k[len(prefix):]
value = props.pop(k)
props[new_key] = value
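# e.g. (hypothetical values): {'kind:left:xx': 'country'} becomes
# {'kind:xx': 'country'} after this call.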
def admin_boundaries(ctx):
"""
Given a layer with admin boundaries and inclusion polygons for
land-based boundaries, attempts to output a set of oriented
boundaries with properties from both the left and right admin
boundary, and also cut with the maritime information to provide
a `maritime_boundary: True` value where there's overlap between
the maritime lines and the admin boundaries.
    Note that admin boundaries must already be correctly oriented.
In other words, it must have a positive area and run counter-
clockwise around the polygon for which it is an outer (or
clockwise if it was an inner).
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
base_layer = ctx.params.get('base_layer')
assert base_layer, 'Parameter base_layer missing.'
start_zoom = ctx.params.get('start_zoom', 0)
layer = None
# don't start processing until the start zoom
if zoom < start_zoom:
return layer
layer = _find_layer(feature_layers, base_layer)
if layer is None:
return None
# layer will have polygonal features for the admin
# polygons and also linear features for the maritime
# boundaries. further, we want to group the admin
# polygons by their kind, as this will reduce the
# working set.
admin_features = defaultdict(list)
maritime_features = list()
new_features = list()
# Sorting here so that we have consistent ordering of left/right side
# on boundaries.
sorted_layer = sorted(layer['features'], key=lambda f: f[1]['id'])
for shape, props, fid in sorted_layer:
dims = _geom_dimensions(shape)
kind = props.get('kind')
maritime_boundary = props.get('maritime_boundary')
# the reason to use this rather than compare the
# string of types is to catch the "multi-" types
# as well.
if dims == _LINE_DIMENSION and kind is not None:
admin_features[kind].append((shape, props, fid))
elif dims == _POLYGON_DIMENSION and maritime_boundary:
maritime_features.append((shape, {'maritime_boundary': False}, 0))
# there are separate polygons for each admin level, and
# we only want to intersect like with like because it
# makes more sense to have Country-Country and
# State-State boundaries (and labels) rather than the
# (combinatoric) set of all different levels.
for kind, features in admin_features.iteritems():
num_features = len(features)
envelopes = [g[0].envelope for g in features]
for i, feature in enumerate(features):
boundary, props, fid = feature
prop_id = props['id']
envelope = envelopes[i]
# intersect with *preceding* features to remove
# those boundary parts. this ensures that there
# are no duplicate parts.
for j in range(0, i):
cut_shape, cut_props, cut_fid = features[j]
# don't intersect with self
if prop_id == cut_props['id']:
continue
cut_envelope = envelopes[j]
if envelope.intersects(cut_envelope):
try:
boundary = boundary.difference(cut_shape)
except shapely.errors.TopologicalError:
                        # NOTE: we have seen TopologicalErrors here that
                        # look like: "TopologicalError: This operation
                        # could not be performed. Reason: unknown"
pass
if boundary.is_empty:
break
# intersect with every *later* feature. now each
# intersection represents a section of boundary
# that we want to keep.
for j in range(i+1, num_features):
cut_shape, cut_props, cut_fid = features[j]
# don't intersect with self
if prop_id == cut_props['id']:
continue
cut_envelope = envelopes[j]
if envelope.intersects(cut_envelope):
try:
inside, boundary = _intersect_cut(boundary, cut_shape)
except (StandardError, shapely.errors.ShapelyError):
# if the inside and remaining boundary can't be
# calculated, then we can't continue to intersect
# anything else with this shape. this means we might
# end up with erroneous one-sided boundaries.
# TODO: log warning!
break
inside = _linemerge(inside)
if not inside.is_empty:
new_props = _merge_left_right_props(props, cut_props)
new_props['id'] = props['id']
_make_joined_name(new_props)
_fix_disputed_left_right_kinds(new_props)
new_features.append((inside, new_props, fid))
if boundary.is_empty:
break
# anything left over at the end is still a boundary,
# but a one-sided boundary to international waters.
boundary = _linemerge(boundary)
if not boundary.is_empty:
new_props = props.copy()
_make_joined_name(new_props)
new_features.append((boundary, new_props, fid))
# use intracut for maritime, but it intersects in a positive
# way - it sets the tag on anything which intersects, whereas
# we want to set maritime where it _doesn't_ intersect. so
# we have to flip the attribute afterwards.
cutter = _Cutter(maritime_features, None,
'maritime_boundary', 'maritime_boundary',
_LINE_DIMENSION, _intersect_cut)
for shape, props, fid in new_features:
cutter.cut(shape, props, fid)
# flip the property, so define maritime_boundary=yes where
# it was previously unset and remove maritime_boundary=no.
for shape, props, fid in cutter.new_features:
maritime_boundary = props.pop('maritime_boundary', None)
if maritime_boundary is None:
props['maritime_boundary'] = True
layer['features'] = cutter.new_features
return layer
def _unicode_len(s):
if isinstance(s, str):
return len(s.decode('utf-8'))
elif isinstance(s, unicode):
return len(s)
return None
def _delete_labels_longer_than(max_label_chars, props):
"""
Delete entries in the props dict where the key starts with 'name' and the
unicode length of the value is greater than max_label_chars.
If one half of a left/right pair is too long, then the opposite in the pair
is also deleted.
"""
to_delete = set()
for k, v in props.iteritems():
if not k.startswith('name'):
continue
length_chars = _unicode_len(v)
if length_chars is None:
# huh? name isn't a string?
continue
if length_chars <= max_label_chars:
continue
to_delete.add(k)
if k.startswith('name:left:'):
opposite_k = k.replace(':left:', ':right:')
to_delete.add(opposite_k)
elif k.startswith('name:right:'):
opposite_k = k.replace(':right:', ':left:')
to_delete.add(opposite_k)
for k in to_delete:
if k in props:
del props[k]
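# e.g. (hypothetical values): with max_label_chars=4, the props
# {'name:left:xx': 'Foo', 'name:right:xx': 'Borduria'} lose *both* entries,
# since 'Borduria' is too long and left/right pairs are deleted together.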
def drop_names_on_short_boundaries(ctx):
"""
Drop all names on a boundaries which are too small to render the shortest
name.
"""
params = _Params(ctx, 'drop_names_on_short_boundaries')
layer_name = params.required('source_layer')
start_zoom = params.optional('start_zoom', typ=int, default=0)
end_zoom = params.optional('end_zoom', typ=int)
pixels_per_letter = params.optional('pixels_per_letter', typ=(int, float),
default=10.0)
layer = _find_layer(ctx.feature_layers, layer_name)
zoom = ctx.nominal_zoom
if zoom < start_zoom or \
(end_zoom is not None and zoom >= end_zoom):
return None
    # tolerance_for_zoom gives us the size of one pixel in meters at this
    # zoom, so multiplying by pixels_per_letter gives meters per letter.
meters_per_letter = pixels_per_letter * tolerance_for_zoom(zoom)
for shape, props, fid in layer['features']:
geom_type = shape.geom_type
if geom_type in ('LineString', 'MultiLineString'):
# simplify to one letter size. this gets close to what might
# practically be renderable, and means we're not counting any
# sub-letter scale fractal crinklyness towards the length of
# the line.
label_shape = shape.simplify(meters_per_letter)
if geom_type == 'LineString':
shape_length_meters = label_shape.length
else:
# get the longest section to see if that's labellable - if
# not, then none of the sections could have a label and we
# can drop the names.
shape_length_meters = max(part.length for part in label_shape)
# maximum number of characters we'll be able to print at this
# zoom.
max_label_chars = int(shape_length_meters / meters_per_letter)
_delete_labels_longer_than(max_label_chars, props)
return None
def handle_label_placement(ctx):
"""
Converts a geometry label column into a separate feature.
"""
layers = ctx.params.get('layers', None)
zoom = ctx.nominal_zoom
location_property = ctx.params.get('location_property', None)
label_property_name = ctx.params.get('label_property_name', None)
label_property_value = ctx.params.get('label_property_value', None)
label_where = ctx.params.get('label_where', None)
start_zoom = ctx.params.get('start_zoom', 0)
if zoom < start_zoom:
return None
assert layers, 'handle_label_placement: Missing layers'
assert location_property, \
'handle_label_placement: Missing location_property'
assert label_property_name, \
'handle_label_placement: Missing label_property_name'
assert label_property_value, \
'handle_label_placement: Missing label_property_value'
layers = set(layers)
if label_where:
label_where = compile(label_where, 'queries.yaml', 'eval')
for feature_layer in ctx.feature_layers:
if feature_layer['name'] not in layers:
continue
padded_bounds = feature_layer['padded_bounds']
point_padded_bounds = padded_bounds['point']
clip_bounds = Box(*point_padded_bounds)
new_features = []
for feature in feature_layer['features']:
shape, props, fid = feature
label_wkb = props.pop(location_property, None)
new_features.append(feature)
if not label_wkb:
continue
local_state = props.copy()
local_state['properties'] = props
if label_where and not eval(label_where, {}, local_state):
continue
label_shape = shapely.wkb.loads(label_wkb)
if not (label_shape.type in ('Point', 'MultiPoint') and
clip_bounds.intersects(label_shape)):
continue
point_props = props.copy()
point_props[label_property_name] = label_property_value
point_feature = label_shape, point_props, fid
new_features.append(point_feature)
feature_layer['features'] = new_features
def generate_address_points(ctx):
"""
Generates address points from building polygons where there is an
addr:housenumber tag on the building. Removes those tags from the
building.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
assert source_layer, 'generate_address_points: missing source_layer'
start_zoom = ctx.params.get('start_zoom', 0)
if zoom < start_zoom:
return None
layer = _find_layer(feature_layers, source_layer)
if layer is None:
return None
new_features = []
for feature in layer['features']:
shape, properties, fid = feature
# We only want to create address points for polygonal
# buildings with address tags.
if shape.geom_type not in ('Polygon', 'MultiPolygon'):
continue
addr_housenumber = properties.get('addr_housenumber')
# consider it an address if the name of the building
# is just a number.
name = properties.get('name')
if name is not None and digits_pattern.match(name):
if addr_housenumber is None:
addr_housenumber = properties.pop('name')
# and also suppress the name if it's the same as
# the address.
elif name == addr_housenumber:
properties.pop('name')
# if there's no address, then keep the feature as-is,
# no modifications.
if addr_housenumber is None:
continue
label_point = shape.representative_point()
# we're only interested in a very few properties for
# address points.
label_properties = dict(
addr_housenumber=addr_housenumber,
kind='address')
source = properties.get('source')
if source is not None:
label_properties['source'] = source
addr_street = properties.get('addr_street')
if addr_street is not None:
label_properties['addr_street'] = addr_street
oid = properties.get('id')
if oid is not None:
label_properties['id'] = oid
label_feature = label_point, label_properties, fid
new_features.append(label_feature)
layer['features'].extend(new_features)
return layer
def parse_layer_as_float(shape, properties, fid, zoom):
"""
If the 'layer' property is present on a feature, then
this attempts to parse it as a floating point number.
The old value is removed and, if it could be parsed
as a floating point number, the number replaces the
original property.
"""
layer = properties.pop('layer', None)
if layer:
layer_float = to_float(layer)
if layer_float is not None:
properties['layer'] = layer_float
return shape, properties, fid
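# e.g. (hypothetical values): {'layer': '2'} becomes {'layer': 2.0}, while
# an unparseable {'layer': 'abc'} just loses the property.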
def drop_features_where(ctx):
"""
Drop features entirely that match the particular "where"
condition. Any feature properties are available to use, as well as
the properties dict itself, called "properties" in the scope.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
assert source_layer, 'drop_features_where: missing source layer'
start_zoom = ctx.params.get('start_zoom', 0)
end_zoom = ctx.params.get('end_zoom')
where = ctx.params.get('where')
assert where, 'drop_features_where: missing where'
if zoom < start_zoom:
return None
if end_zoom is not None and zoom >= end_zoom:
return None
layer = _find_layer(feature_layers, source_layer)
if layer is None:
return None
where = compile(where, 'queries.yaml', 'eval')
new_features = []
for feature in layer['features']:
shape, properties, fid = feature
local = properties.copy()
local['properties'] = properties
local['geom_type'] = shape.geom_type
if not eval(where, {}, local):
new_features.append(feature)
layer['features'] = new_features
return layer
def _project_properties(ctx, action):
"""
Project properties down to a subset of the existing properties based on a
predicate `where` which returns true when the function `action` should be
performed. The value returned from `action` replaces the properties of the
feature.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
where = ctx.params.get('where')
source_layer = ctx.params.get('source_layer')
assert source_layer, '_project_properties: missing source layer'
start_zoom = ctx.params.get('start_zoom', 0)
end_zoom = ctx.params.get('end_zoom')
geom_types = ctx.params.get('geom_types')
if zoom < start_zoom:
return None
if end_zoom is not None and zoom >= end_zoom:
return None
layer = _find_layer(feature_layers, source_layer)
if layer is None:
return None
if where is not None:
where = compile(where, 'queries.yaml', 'eval')
new_features = []
for feature in layer['features']:
shape, props, fid = feature
# skip some types of geometry
if geom_types and shape.geom_type not in geom_types:
new_features.append((shape, props, fid))
continue
# we're going to use a defaultdict for this, so that references to
# properties which don't exist just end up as None without causing an
# exception. we also add a 'zoom' one. would prefer '$zoom', but
# apparently that's not allowed in python syntax.
local = defaultdict(lambda: None)
local.update(props)
local['zoom'] = zoom
# allow decisions based on meters per pixel zoom too.
meters_per_pixel_area = calc_meters_per_pixel_area(zoom)
local['pixel_area'] = meters_per_pixel_area
if where is None or eval(where, {}, local):
props = action(props)
new_features.append((shape, props, fid))
layer['features'] = new_features
return layer
def drop_properties(ctx):
"""
Drop all configured properties for features in source_layer
"""
properties = ctx.params.get('properties')
all_name_variants = ctx.params.get('all_name_variants', False)
assert properties, 'drop_properties: missing properties'
def action(p):
if all_name_variants and 'name' in properties:
p = _remove_names(p)
return _remove_properties(p, *properties)
return _project_properties(ctx, action)
def drop_names(ctx):
"""
Drop all names on properties for features in this layer.
"""
def action(p):
return _remove_names(p)
return _project_properties(ctx, action)
def remove_zero_area(shape, properties, fid, zoom):
"""
All features get a numeric area tag, but for points this
is zero. The area probably isn't exactly zero, so it's
probably less confusing to just remove the tag to show
that the value is probably closer to "unspecified".
"""
# remove the property if it's present. we _only_ want
# to replace it if it matches the positive, float
# criteria.
area = properties.pop("area", None)
# try to parse a string if the area has been sent as a
# string. it should come through as a float, though,
# since postgres treats it as a real.
if isinstance(area, (str, unicode)):
area = to_float(area)
if area is not None:
# cast to integer to match what we do for polygons.
# also the fractional parts of a sq.m are just
# noise really.
area = int(area)
if area > 0:
properties['area'] = area
return shape, properties, fid
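# e.g. (hypothetical values): an 'area' of '123.4' becomes 123, while an
# 'area' of 0 is removed entirely.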
# circumference of the extent of the world in mercator "meters"
_MERCATOR_CIRCUMFERENCE = 40075016.68
# _Deduplicator handles the logic for deduplication. a feature
# is considered a duplicate if it has the same property tuple
# as another and is within a certain distance of the other.
#
# the property tuple is calculated by taking a tuple or list
# of keys and extracting the value of the matching property
# or None. if none_means_unique is true, then if any tuple
# entry is None the feature is considered unique and kept.
#
# note: distance here is measured in coordinate units; i.e:
# mercator meters!
class _Deduplicator:
def __init__(self, property_keys, min_distance,
none_means_unique):
self.property_keys = property_keys
self.min_distance = min_distance
self.none_means_unique = none_means_unique
self.seen_items = dict()
def keep_feature(self, feature):
"""
Returns true if the feature isn't a duplicate, and should
be kept in the output. Otherwise, returns false, as
another feature had the same tuple of values.
"""
shape, props, fid = feature
key = tuple([props.get(k) for k in self.property_keys])
if self.none_means_unique and any([v is None for v in key]):
return True
seen_geoms = self.seen_items.get(key)
if seen_geoms is None:
# first time we've seen this item, so keep it in
# the output.
self.seen_items[key] = [shape]
return True
else:
# if the distance is greater than the minimum set
# for this zoom, then we also keep it.
distance = min([shape.distance(s) for s in seen_geoms])
if distance > self.min_distance:
# this feature is far enough away to count as
# distinct, but keep this geom to suppress any
# other labels nearby.
seen_geoms.append(shape)
return True
else:
# feature is a duplicate
return False
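# illustrative usage sketch (hypothetical values, not from the original
# module):
#
#   dedup = _Deduplicator(['name', 'kind'], 100.0, True)
#   kept = [f for f in features if dedup.keep_feature(f)]
#
# the first feature seen with a given (name, kind) tuple is kept; later
# ones are dropped unless they lie more than 100 mercator meters from
# every kept feature with the same tuple.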
def remove_duplicate_features(ctx):
"""
Removes duplicate features from a layer, or set of layers. The
definition of duplicate is anything which has the same values
for the tuple of values associated with the property_keys.
If `none_means_unique` is set, which it is by default, then a
value of None for *any* of the values in the tuple causes the
feature to be considered unique and completely by-passed. This
is mainly to handle things like features missing their name,
where we don't want to remove all but one unnamed feature.
For example, if property_keys was ['name', 'kind'], then only
the first feature of those with the same value for the name
and kind properties would be kept in the output.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
source_layers = ctx.params.get('source_layers')
start_zoom = ctx.params.get('start_zoom', 0)
property_keys = ctx.params.get('property_keys')
geometry_types = ctx.params.get('geometry_types')
min_distance = ctx.params.get('min_distance', 0.0)
none_means_unique = ctx.params.get('none_means_unique', True)
end_zoom = ctx.params.get('end_zoom')
# can use either a single source layer, or multiple source
# layers, but not both.
assert bool(source_layer) ^ bool(source_layers), \
('remove_duplicate_features: define either source layer or source '
'layers, but not both')
# note that the property keys or geometry types could be empty,
# but then this post-process filter would do nothing. so we
# assume that the user didn't intend this, or they wouldn't have
# included the filter in the first place.
assert property_keys, \
'remove_duplicate_features: missing or empty property keys'
assert geometry_types, \
'remove_duplicate_features: missing or empty geometry types'
if zoom < start_zoom:
return None
if end_zoom is not None and zoom >= end_zoom:
return None
# allow either a single or multiple layers to be used.
if source_layer:
source_layers = [source_layer]
# correct for zoom: min_distance is given in pixels, but we
# want to do the comparison in coordinate units to avoid
# repeated conversions.
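    # (at zoom z the world is 2**z tiles of 256 pixels, i.e. 2**(z+8)
    # pixels across, so the mercator circumference divided by 2**(zoom+8)
    # is the size of one pixel in mercator meters.)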
min_distance = (min_distance * _MERCATOR_CIRCUMFERENCE /
float(1 << (zoom + 8)))
# keep a set of the tuple of the property keys. this will tell
# us if the feature is unique while allowing us to maintain the
# sort order by only dropping later, presumably less important,
# features. we keep the geometry of the seen items too, so that
# we can tell if any new feature is significantly far enough
# away that it should be shown again.
deduplicator = _Deduplicator(property_keys, min_distance,
none_means_unique)
for source_layer in source_layers:
layer_index = -1
# because this post-processor can potentially modify
# multiple layers, and that wasn't how the return value
# system was designed, instead it modifies layers
# *in-place*. this is abnormal, and as such requires a
# nice big comment like this!
for index, feature_layer in enumerate(feature_layers):
layer_datum = feature_layer['layer_datum']
layer_name = layer_datum['name']
if layer_name == source_layer:
layer_index = index
break
if layer_index < 0:
# TODO: warn about missing layer when we get the
# ability to log.
continue
layer = feature_layers[layer_index]
new_features = []
for feature in layer['features']:
shape, props, fid = feature
keep_feature = True
if geometry_types is not None and \
shape.geom_type in geometry_types:
keep_feature = deduplicator.keep_feature(feature)
if keep_feature:
new_features.append(feature)
# NOTE! modifying the layer *in-place*.
layer['features'] = new_features
        feature_layers[layer_index] = layer
# returning None here would normally indicate that the
# post-processor has done nothing. but because this
# modifies the layers *in-place* then all the return
# value is superfluous.
return None
def merge_duplicate_stations(ctx):
"""
    Normalise station names by removing any parenthetical line
    lists at the end (e.g: "Foo St (A, C, E)"). Parse this and
use it to replace the `subway_routes` list if that is empty
or isn't present.
Use the root relation ID, calculated as part of the exploration of the
transit relations, plus the name, now appropriately trimmed, to merge
station POIs together, unioning their subway routes.
Finally, re-sort the features in case the merging has caused
the station POIs to be out-of-order.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
assert source_layer, \
        'merge_duplicate_stations: missing source layer'
start_zoom = ctx.params.get('start_zoom', 0)
end_zoom = ctx.params.get('end_zoom')
if zoom < start_zoom:
return None
# we probably don't want to do this at higher zooms (e.g: 17 &
# 18), even if there are a bunch of stations very close
# together.
if end_zoom is not None and zoom >= end_zoom:
return None
layer = _find_layer(feature_layers, source_layer)
if layer is None:
return None
seen_stations = {}
new_features = []
for feature in layer['features']:
shape, props, fid = feature
kind = props.get('kind')
name = props.get('name')
if name is not None and kind == 'station':
# this should match station names where the name is
# followed by a ()-bracketed list of line names. this
# is common in NYC, and we want to normalise by
# stripping these off and using it to provide the
# list of lines if we haven't already got that info.
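            # e.g: "Foo St (A, C, E)" would be split into the name
            # "Foo St" and the routes ['A', 'C', 'E'].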
m = station_pattern.match(name)
subway_routes = props.get('subway_routes', [])
transit_route_relation_id = props.get(
'mz_transit_root_relation_id')
if m:
# if the lines aren't present or are empty
if not subway_routes:
lines = m.group(2).split(',')
subway_routes = [x.strip() for x in lines]
props['subway_routes'] = subway_routes
# update name so that it doesn't contain all the
# lines.
name = m.group(1).strip()
props['name'] = name
# if the root relation ID is available, then use that for
# identifying duplicates. otherwise, use the name.
key = transit_route_relation_id or name
seen_idx = seen_stations.get(key)
if seen_idx is None:
seen_stations[key] = len(new_features)
# ensure that transit routes is present and is of
# list type for when we append to it later if we
# find a duplicate.
props['subway_routes'] = subway_routes
new_features.append(feature)
else:
# get the properties and append this duplicate's
# transit routes to the list on the original
# feature.
seen_props = new_features[seen_idx][1]
# make sure routes are unique
unique_subway_routes = set(subway_routes) | \
set(seen_props['subway_routes'])
seen_props['subway_routes'] = list(unique_subway_routes)
else:
# not a station, or name is missing - we can't
# de-dup these.
new_features.append(feature)
# might need to re-sort, if we merged any stations:
# removing duplicates would have changed the number
# of routes for each station.
if seen_stations:
sort_pois(new_features, zoom)
layer['features'] = new_features
return layer
def normalize_station_properties(ctx):
"""
Normalise station properties by removing some which are only used
during importance calculation. Stations may also have route
information, which may appear as empty lists. These are
removed. Also, flags are put on the station to indicate what
kind(s) of station it might be.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
assert source_layer, \
        'normalize_station_properties: missing source layer'
start_zoom = ctx.params.get('start_zoom', 0)
end_zoom = ctx.params.get('end_zoom')
if zoom < start_zoom:
return None
# we probably don't want to do this at higher zooms (e.g: 17 &
# 18), even if there are a bunch of stations very close
# together.
if end_zoom is not None and zoom >= end_zoom:
return None
layer = _find_layer(feature_layers, source_layer)
if layer is None:
return None
for shape, props, fid in layer['features']:
kind = props.get('kind')
# get rid of temporaries
root_relation_id = props.pop('mz_transit_root_relation_id', None)
props.pop('mz_transit_score', None)
if kind == 'station':
# remove anything that has an empty *_routes
# list, as this most likely indicates that we were
# not able to _detect_ what lines it's part of, as
# it seems unlikely that a station would be part of
# _zero_ routes.
for typ in ['train', 'subway', 'light_rail', 'tram']:
prop_name = '%s_routes' % typ
routes = props.pop(prop_name, [])
if routes:
props[prop_name] = routes
props['is_%s' % typ] = True
# if the station has a root relation ID then include
# that as a way for the client to link together related
# features.
if root_relation_id:
props['root_id'] = root_relation_id
return layer
def _match_props(props, items_matching):
"""
Checks if all the items in `items_matching` are also
present in `props`. If so, returns true. Otherwise
returns false.
Each value in `items_matching` can be a list, in which case the
value from `props` must be any one of those values.
"""
for k, v in items_matching.iteritems():
prop_val = props.get(k)
if isinstance(v, list):
if prop_val not in v:
return False
elif prop_val != v:
return False
return True
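# e.g. (hypothetical values):
#   _match_props({'kind': 'station', 'railway': 'halt'},
#                {'kind': ['station', 'halt']})
# returns True, since 'station' is one of the allowed values.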
def keep_n_features(ctx):
"""
Keep only the first N features matching `items_matching`
in the layer. This is primarily useful for removing
features which are abundant in some places but scarce in
others. Rather than try to set some global threshold which
works well nowhere, instead sort appropriately and take a
number of features which is appropriate per-tile.
This is done by counting each feature which matches _all_
the key-value pairs in `items_matching` and, when the
count is larger than `max_items`, dropping those features.
Only features which are within the unpadded bounds of the
tile are considered for keeping or dropping. Features
entirely outside the bounds of the tile are always kept.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
assert source_layer, 'keep_n_features: missing source layer'
start_zoom = ctx.params.get('start_zoom', 0)
end_zoom = ctx.params.get('end_zoom')
items_matching = ctx.params.get('items_matching')
max_items = ctx.params.get('max_items')
unpadded_bounds = Box(*ctx.unpadded_bounds)
# leaving items_matching or max_items as None (or zero)
# would mean that this filter would do nothing, so assume
# that this is really a configuration error.
assert items_matching, 'keep_n_features: missing or empty item match dict'
assert max_items, 'keep_n_features: missing or zero max number of items'
if zoom < start_zoom:
return None
# we probably don't want to do this at higher zooms (e.g: 17 &
# 18), even if there are a bunch of features in the tile, as
# we use the high-zoom tiles for overzooming to 20+, and we'd
# eventually expect to see _everything_.
if end_zoom is not None and zoom >= end_zoom:
return None
layer = _find_layer(feature_layers, source_layer)
if layer is None:
return None
count = 0
new_features = []
for shape, props, fid in layer['features']:
keep_feature = True
if _match_props(props, items_matching) and \
shape.intersects(unpadded_bounds):
count += 1
if count > max_items:
keep_feature = False
if keep_feature:
new_features.append((shape, props, fid))
layer['features'] = new_features
return layer
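# illustrative effect (hypothetical values): with
# items_matching={'kind': 'peak'} and max_items=5, only the first five
# peak features intersecting the unpadded tile bounds are kept; later
# peaks are dropped, while non-matching features and features outside the
# bounds always pass through.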
def rank_features(ctx):
"""
Assign a rank to features in `rank_key`.
Enumerate the features matching `items_matching` and insert
the rank as a property with the key `rank_key`. This is
useful for the client, so that it can selectively display
only the top features, or de-emphasise the later features.
    Note that only features within the unpadded bounds are ranked.
Features entirely outside the bounds of the tile are not modified.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
assert source_layer, 'rank_features: missing source layer'
start_zoom = ctx.params.get('start_zoom', 0)
items_matching = ctx.params.get('items_matching')
rank_key = ctx.params.get('rank_key')
unpadded_bounds_shp = Box(*ctx.unpadded_bounds)
# leaving items_matching or rank_key as None would mean
# that this filter would do nothing, so assume that this
# is really a configuration error.
assert items_matching, 'rank_features: missing or empty item match dict'
assert rank_key, 'rank_features: missing or empty rank key'
if zoom < start_zoom:
return None
layer = _find_layer(feature_layers, source_layer)
if layer is None:
return None
count = 0
for shape, props, fid in layer['features']:
if (_match_props(props, items_matching) and
unpadded_bounds_shp.intersects(shape)):
count += 1
props[rank_key] = count
return layer
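# illustrative effect (hypothetical values): with
# items_matching={'kind': 'city'} and rank_key='rank', the first matching
# city intersecting the tile gets rank=1, the second rank=2, and so on;
# non-matching features and those outside the bounds are left unmodified.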
def normalize_aerialways(shape, props, fid, zoom):
aerialway = props.get('aerialway')
# normalise cableway, apparently a deprecated
# value.
if aerialway == 'cableway':
props['aerialway'] = 'zip_line'
# 'yes' is a pretty unhelpful value, so normalise
# to a slightly more meaningful 'unknown', which
# is also a commonly-used value.
if aerialway == 'yes':
props['aerialway'] = 'unknown'
return shape, props, fid
def numeric_min_filter(ctx):
"""
    Keep only features which have properties equal to or greater
than the configured minima. These are in a dict per zoom
like this:
{ 15: { 'area': 1000 }, 16: { 'area': 2000 } }
This would mean that at zooms 15 and 16, the filter was
active. At other zooms it would do nothing.
Multiple filters can be given for a single zoom. The
`mode` parameter can be set to 'any' to require that only
one of the filters needs to match, or any other value to
use the default 'all', which requires all filters to
match.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
    assert source_layer, 'numeric_min_filter: missing source layer'
filters = ctx.params.get('filters')
mode = ctx.params.get('mode')
# assume missing filter is a config error.
assert filters, 'numeric_min_filter: missing or empty filters dict'
# get the minimum filters for this zoom, and return if
# there are none to apply.
minima = filters.get(zoom)
if not minima:
return None
layer = _find_layer(feature_layers, source_layer)
if layer is None:
return None
# choose whether all minima have to be met, or just
# one of them.
aggregate_func = all
if mode == 'any':
aggregate_func = any
new_features = []
for shape, props, fid in layer['features']:
keep = []
for prop, min_val in minima.iteritems():
val = props.get(prop)
keep.append(val >= min_val)
if aggregate_func(keep):
new_features.append((shape, props, fid))
layer['features'] = new_features
return layer
def copy_features(ctx):
"""
Copy features matching _both_ the `where` selection and the
`geometry_types` list to another layer. If the target layer
doesn't exist, it is created.
"""
feature_layers = ctx.feature_layers
source_layer = ctx.params.get('source_layer')
target_layer = ctx.params.get('target_layer')
where = ctx.params.get('where')
geometry_types = ctx.params.get('geometry_types')
assert source_layer, 'copy_features: source layer not configured'
assert target_layer, 'copy_features: target layer not configured'
assert where, \
('copy_features: you must specify how to match features in the where '
'parameter')
assert geometry_types, \
('copy_features: you must specify at least one type of geometry in '
'geometry_types')
src_layer = _find_layer(feature_layers, source_layer)
if src_layer is None:
return None
tgt_layer = _find_layer(feature_layers, target_layer)
if tgt_layer is None:
# create target layer if it doesn't already exist.
tgt_layer_datum = src_layer['layer_datum'].copy()
tgt_layer_datum['name'] = target_layer
tgt_layer = src_layer.copy()
tgt_layer['name'] = target_layer
tgt_layer['features'] = []
tgt_layer['layer_datum'] = tgt_layer_datum
new_features = []
for feature in src_layer['features']:
shape, props, fid = feature
if _match_props(props, where):
# need to deep copy, otherwise we could have some
# unintended side effects if either layer is
# mutated later on.
shape_copy = shape.__class__(shape)
new_features.append((shape_copy, props.copy(), fid))
tgt_layer['features'].extend(new_features)
return tgt_layer
def make_representative_point(shape, properties, fid, zoom):
"""
Replaces the geometry of each feature with its
representative point. This is a point which should be
within the interior of the geometry, which can be
important for labelling concave or doughnut-shaped
polygons.
"""
label_placement_wkb = properties.get('mz_label_placement', None)
if label_placement_wkb:
shape = shapely.wkb.loads(label_placement_wkb)
else:
shape = shape.representative_point()
return shape, properties, fid
def add_iata_code_to_airports(shape, properties, fid, zoom):
"""
If the feature is an airport, and it has a 3-character
IATA code in its tags, then move that code to its
properties.
"""
kind = properties.get('kind')
if kind not in ('aerodrome', 'airport'):
return shape, properties, fid
tags = properties.get('tags')
if not tags:
return shape, properties, fid
iata_code = tags.get('iata')
if not iata_code:
return shape, properties, fid
# IATA codes should be uppercase, and most are, but there
# might be some in lowercase, so just normalise to upper
# here.
iata_code = iata_code.upper()
if iata_short_code_pattern.match(iata_code):
properties['iata'] = iata_code
return shape, properties, fid
def add_uic_ref(shape, properties, fid, zoom):
"""
    If the feature has a valid uic_ref tag (7 digits), then move it
to its properties.
"""
tags = properties.get('tags')
if not tags:
return shape, properties, fid
uic_ref = tags.get('uic_ref')
if not uic_ref:
return shape, properties, fid
uic_ref = uic_ref.strip()
if len(uic_ref) != 7:
return shape, properties, fid
try:
uic_ref_int = int(uic_ref)
except ValueError:
return shape, properties, fid
else:
properties['uic_ref'] = uic_ref_int
return shape, properties, fid
def _freeze(thing):
"""
Freezes something to a hashable item.
"""
if isinstance(thing, dict):
return frozenset([(_freeze(k), _freeze(v)) for k, v in thing.items()])
elif isinstance(thing, list):
return tuple([_freeze(i) for i in thing])
return thing
def _thaw(thing):
"""
Reverse of the freeze operation.
"""
if isinstance(thing, frozenset):
return dict([_thaw(i) for i in thing])
elif isinstance(thing, tuple):
return list([_thaw(i) for i in thing])
return thing
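# e.g: _freeze({'a': [1, 2]}) == frozenset([('a', (1, 2))]), which is
# hashable (usable as a dict key or set member); _thaw reverses the
# round-trip back to {'a': [1, 2]}.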
def quantize_val(val, step):
# special case: if val is very small, we don't want it rounding to zero, so
# round the smallest values up to the first step.
if val < step:
return int(step)
result = int(step * round(val / float(step)))
return result
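# e.g: quantize_val(3, 5) == 5 (small values round up to the first step),
# quantize_val(12, 5) == 10 and quantize_val(13, 5) == 15.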
def quantize_height_round_nearest_5_meters(height):
return quantize_val(height, 5)
def quantize_height_round_nearest_10_meters(height):
return quantize_val(height, 10)
def quantize_height_round_nearest_20_meters(height):
return quantize_val(height, 20)
def quantize_height_round_nearest_meter(height):
    # cast to int for consistency with the other quantize functions above.
    return int(round(height))
def _merge_lines(linestring_shapes, _unused_tolerance):
list_of_linestrings = []
for shape in linestring_shapes:
list_of_linestrings.extend(_flatten_geoms(shape))
    # if the list of linestrings is empty, return an empty list. this
    # avoids generating an empty GeometryCollection, which causes problems
    # further down the line, usually while formatting the tile.
if not list_of_linestrings:
return []
multi = MultiLineString(list_of_linestrings)
result = _linemerge(multi)
return [result]
def _drop_small_inners_multi(shape, area_tolerance):
"""
Drop inner rings (holes) of the given shape which are smaller than the area
tolerance. The shape must be either a Polygon or MultiPolygon. Returns a
shape which may be empty.
"""
from shapely.geometry import MultiPolygon
if shape.geom_type == 'Polygon':
shape = _drop_small_inners(shape, area_tolerance)
elif shape.geom_type == 'MultiPolygon':
multi = []
for poly in shape:
new_poly = _drop_small_inners(poly, area_tolerance)
if not new_poly.is_empty:
multi.append(new_poly)
shape = MultiPolygon(multi)
else:
shape = MultiPolygon([])
return shape
def _drop_small_outers_multi(shape, area_tolerance):
"""
Drop individual polygons which are smaller than the area tolerance. Input
can be a single Polygon or MultiPolygon, in which case each Polygon within
the MultiPolygon will be compared to the area tolerance individually.
Returns a shape, which may be empty.
"""
from shapely.geometry import MultiPolygon
if shape.geom_type == 'Polygon':
if shape.area < area_tolerance:
shape = MultiPolygon([])
elif shape.geom_type == 'MultiPolygon':
multi = []
for poly in shape:
if poly.area >= area_tolerance:
multi.append(poly)
shape = MultiPolygon(multi)
else:
shape = MultiPolygon([])
return shape
def _merge_polygons(polygon_shapes, tolerance):
"""
Merge a list of polygons together into a single shape. Returns list of
shapes, which might be empty.
"""
list_of_polys = []
for shape in polygon_shapes:
list_of_polys.extend(_flatten_geoms(shape))
    # if the list of polygons is empty, return an empty list. this avoids
    # generating an empty GeometryCollection, which causes problems further
    # down the line, usually while formatting the tile.
if not list_of_polys:
return []
# first, try to merge the polygons as they are.
try:
result = shapely.ops.unary_union(list_of_polys)
return [result]
except ValueError:
pass
# however, this can lead to numerical instability where polygons _almost_
# touch, so sometimes buffering them outwards a little bit can help.
try:
from shapely.geometry import JOIN_STYLE
# don't buffer by the full pixel, instead choose a smaller value that
        # shouldn't be noticeable.
buffer_size = tolerance / 16.0
list_of_buffered = [
p.buffer(buffer_size, join_style=JOIN_STYLE.mitre, mitre_limit=1.5)
for p in list_of_polys
]
result = shapely.ops.unary_union(list_of_buffered)
return [result]
except ValueError:
pass
# ultimately, if it's not possible to merge them then bail.
# TODO: when we get a logger in here, let's log a big FAIL message.
return []
def _merge_polygons_with_buffer(polygon_shapes, tolerance):
"""
Merges polygons together with a buffer operation to blend together
adjacent polygons. Originally designed for buildings.
It does this by first merging the polygons into a single MultiPolygon and
then dilating or buffering the polygons by a small amount (tolerance). The
shape is then simplified, small inners are dropped and it is shrunk back
by the same amount it was dilated by. Finally, small polygons are dropped.
Many cities around the world have dense buildings in blocks, but these
buildings can be quite detailed; having complex facades or interior
courtyards or lightwells. As we zoom out, we often would like to keep the
"visual texture" of the buildings, but reducing the level of detail
significantly. This method aims to get closer to that, merging neighbouring
buildings together into blocks.
"""
from shapely.geometry import JOIN_STYLE
area_tolerance = tolerance * tolerance
# small factor, relative to tolerance. this is used so that we don't buffer
# polygons out by exactly the same amount as we buffer them inwards. using
# the exact same value ends up causing topology problems when two points on
    # opposing sides of the polygon meet each other exactly.
epsilon = tolerance * 1.0e-6
result = _merge_polygons(polygon_shapes, tolerance)
if not result:
return result
assert len(result) == 1
result = result[0]
# buffer with a mitre join, as this keeps the corners sharp and (mostly)
# keeps angles the same. to avoid spikes, we limit the mitre to a little
# under 90 degrees.
result = result.buffer(
tolerance - epsilon, join_style=JOIN_STYLE.mitre, mitre_limit=1.5)
result = result.simplify(tolerance)
result = _drop_small_inners_multi(result, area_tolerance)
result = result.buffer(
-tolerance, join_style=JOIN_STYLE.mitre, mitre_limit=1.5)
result = _drop_small_outers_multi(result, area_tolerance)
# don't return invalid results!
if result.is_empty or not result.is_valid:
return []
return [result]
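# Illustrative sketch (not part of the original module): two buildings
# separated by a gap smaller than the tolerance get blended into a single
# block, then shrunk back to roughly their original footprint.
#
#   >>> from shapely.geometry import box
#   >>> merged = _merge_polygons_with_buffer(
#   ...     [box(0, 0, 10, 10), box(10.5, 0, 20, 10)], 1.0)
#   >>> len(merged), merged[0].geom_type
#   (1, 'Polygon')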
def _union_bounds(a, b):
"""
Union two (minx, miny, maxx, maxy) tuples of bounds, returning a tuple
which covers both inputs.
"""
if a is None:
return b
elif b is None:
return a
else:
aminx, aminy, amaxx, amaxy = a
bminx, bminy, bmaxx, bmaxy = b
return (min(aminx, bminx), min(aminy, bminy),
max(amaxx, bmaxx), max(amaxy, bmaxy))
def _intersects_bounds(a, b):
"""
Return true if two bounding boxes intersect.
"""
aminx, aminy, amaxx, amaxy = a
bminx, bminy, bmaxx, bmaxy = b
if aminx > bmaxx or amaxx < bminx:
return False
elif aminy > bmaxy or amaxy < bminy:
return False
return True
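# e.g. (illustrative):
#   _union_bounds((0, 0, 2, 2), (1, 1, 3, 3)) == (0, 0, 3, 3)
#   _union_bounds(None, (1, 1, 3, 3)) == (1, 1, 3, 3)
#   _intersects_bounds((0, 0, 1, 1), (2, 2, 3, 3)) == False
#   _intersects_bounds((0, 0, 1, 1), (1, 1, 2, 2)) == True   # touching counts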
# RecursiveMerger is a set of functions to merge geometry recursively in a
# quad tree.
#
# It consists of three functions plus a tolerance. Any of the functions can
# be a no-op; node and root take a single argument, a list of shapes, while
# leaf also receives the tolerance. All of them return a list of shapes.
#
# * leaf: called at the leaves of the quad tree with original geometry.
# * node: called at internal nodes of the quad tree with the results of
# either calls to leaf() or node().
# * root: called once at the root with the results of the top node (or leaf
# if it's a degenerate single-level tree).
# * tolerance: a length that is approximately a pixel, or the size by which
# things can be simplified or snapped to.
#
# These allow us to merge transformed versions of geometry, where leaf()
# transforms the geometry to some other form (e.g: buffered for buildings),
# node merges those recursively, and then root reverses the buffering.
#
RecursiveMerger = namedtuple('RecursiveMerger', 'leaf node root tolerance')
# A bucket used to sort shapes into the next level of the quad tree.
Bucket = namedtuple("Bucket", "bounds box shapes")
def _mkbucket(*bounds):
"""
Convenience method to make a bucket from a tuple of bounds (minx, miny,
maxx, maxy) and also make the Shapely shape for that.
"""
from shapely.geometry import box
return Bucket(bounds, box(*bounds), [])
def _merge_shapes_recursively(shapes, shapes_per_merge, merger, depth=0,
bounds=None):
"""
Group the shapes geographically, returning a list of shapes. The merger,
which must be a RecursiveMerger, controls how the shapes are merged.
This is to help merging/unioning, where it's better to try and merge shapes
which are adjacent or near each other, rather than just taking a slice of
a list of shapes which might be in any order.
The shapes_per_merge controls at what depth the tree starts merging.
Smaller values mean a deeper tree, which might increase performance if
merging large numbers of items at once is slow.
This method is recursive, and will bottom out after 5 levels deep, which
might mean that sometimes more than shapes_per_merge items are merged at
once.
"""
assert isinstance(merger, RecursiveMerger)
# don't keep recursing. if we haven't been able to get to a smaller number
# of shapes by 5 levels down, then perhaps there are particularly large
# shapes which are preventing things getting split up correctly.
if len(shapes) <= shapes_per_merge and depth == 0:
return merger.root(merger.leaf(shapes, merger.tolerance))
elif depth >= 5:
return merger.leaf(shapes, merger.tolerance)
# on the first call, figure out what the bounds of the shapes are. when
# recursing, use the bounds passed in from the parent.
if bounds is None:
for shape in shapes:
bounds = _union_bounds(bounds, shape.bounds)
minx, miny, maxx, maxy = bounds
midx = 0.5 * (minx + maxx)
midy = 0.5 * (miny + maxy)
# find the 4 quadrants of the bounding box and use those to bucket the
# shapes so that neighbouring shapes are more likely to stay together.
buckets = [
_mkbucket(minx, miny, midx, midy),
_mkbucket(minx, midy, midx, maxy),
_mkbucket(midx, miny, maxx, midy),
_mkbucket(midx, midy, maxx, maxy),
]
for shape in shapes:
for bucket in buckets:
if shape.intersects(bucket.box):
bucket.shapes.append(shape)
break
else:
raise AssertionError(
"Expected shape %r to intersect at least one quadrant, but "
"intersects none." % (shape.wkt))
# recurse if necessary to get below the number of shapes per merge that
# we want.
grouped_shapes = []
for bucket in buckets:
if len(bucket.shapes) > shapes_per_merge:
recursed = _merge_shapes_recursively(
bucket.shapes, shapes_per_merge, merger,
depth=depth+1, bounds=bucket.bounds)
grouped_shapes.extend(recursed)
# don't add empty lists!
elif bucket.shapes:
grouped_shapes.extend(merger.leaf(bucket.shapes, merger.tolerance))
fn = merger.root if depth == 0 else merger.node
return fn(grouped_shapes)
def _noop(x):
return x
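# Illustrative sketch (not part of the original module): this mirrors how
# _merge_features_by_property drives the quad tree below, merging polygons
# only at the leaves and passing results through unchanged.
#
#   merger = RecursiveMerger(root=_noop, node=_noop, leaf=_merge_polygons,
#                            tolerance=1.0)
#   merged_shapes = _merge_shapes_recursively(list_of_polygons, 100, merger)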
def _merge_features_by_property(
features, geom_dim, tolerance,
update_props_pre_fn=None,
update_props_post_fn=None,
max_merged_features=None,
merge_shape_fn=None,
merge_props_fn=None):
assert geom_dim in (_POLYGON_DIMENSION, _LINE_DIMENSION)
if merge_shape_fn is not None:
_merge_shape_fn = merge_shape_fn
elif geom_dim == _LINE_DIMENSION:
_merge_shape_fn = _merge_lines
else:
_merge_shape_fn = _merge_polygons
features_by_property = {}
skipped_features = []
for feature in features:
shape, props, fid = feature
shape_dim = _geom_dimensions(shape)
if shape_dim != geom_dim:
skipped_features.append(feature)
continue
orig_props = props.copy()
p_id = props.pop('id', None)
if update_props_pre_fn:
props = update_props_pre_fn((shape, props, fid))
if props is None:
skipped_features.append((shape, orig_props, fid))
continue
frozen_props = _freeze(props)
if frozen_props in features_by_property:
record = features_by_property[frozen_props]
record[-1].append(shape)
record[-2].append(orig_props)
else:
features_by_property[frozen_props] = (
(fid, p_id, [orig_props], [shape]))
new_features = []
for frozen_props, (fid, p_id, orig_props, shapes) in \
features_by_property.iteritems():
if len(shapes) == 1:
# restore original properties if we only have a single shape
new_features.append((shapes[0], orig_props[0], fid))
continue
num_shapes = len(shapes)
shapes_per_merge = num_shapes
if max_merged_features and max_merged_features < shapes_per_merge:
shapes_per_merge = max_merged_features
# reset fid if we're going to split up features, as we don't want
# them all to have duplicate IDs.
fid = None
merger = RecursiveMerger(root=_noop, node=_noop, leaf=_merge_shape_fn,
tolerance=tolerance)
for merged_shape in _merge_shapes_recursively(
shapes, shapes_per_merge, merger):
# don't keep any features which have become degenerate or empty
# after having been merged.
if merged_shape is None or merged_shape.is_empty:
continue
if merge_props_fn is None:
# thaw the frozen properties to use in the new feature.
props = _thaw(frozen_props)
else:
props = merge_props_fn(orig_props)
if update_props_post_fn:
props = update_props_post_fn((merged_shape, props, fid))
new_features.append((merged_shape, props, fid))
new_features.extend(skipped_features)
return new_features
def quantize_height(ctx):
"""
Quantize the height property of features in the layer according to the
per-zoom configured quantize function.
"""
params = _Params(ctx, 'quantize_height')
zoom = ctx.nominal_zoom
source_layer = params.required('source_layer')
start_zoom = params.optional('start_zoom', default=0, typ=int)
end_zoom = params.optional('end_zoom', typ=int)
quantize_cfg = params.required('quantize', typ=dict)
layer = _find_layer(ctx.feature_layers, source_layer)
if layer is None:
return None
if zoom < start_zoom:
return None
if end_zoom is not None and zoom >= end_zoom:
return None
quantize_fn_dotted_name = quantize_cfg.get(zoom)
if not quantize_fn_dotted_name:
# no changes at this zoom
return None
quantize_height_fn = resolve(quantize_fn_dotted_name)
for shape, props, fid in layer['features']:
height = props.get('height', None)
if height is not None:
props['height'] = quantize_height_fn(height)
return None
def merge_building_features(ctx):
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
start_zoom = ctx.params.get('start_zoom', 0)
end_zoom = ctx.params.get('end_zoom')
drop = ctx.params.get('drop')
exclusions = ctx.params.get('exclude')
max_merged_features = ctx.params.get('max_merged_features')
assert source_layer, 'merge_building_features: missing source layer'
layer = _find_layer(ctx.feature_layers, source_layer)
if layer is None:
return None
if zoom < start_zoom:
return None
if end_zoom is not None and zoom >= end_zoom:
return None
# this formula seems to give a good balance between larger values, which
# merge more but can merge everything into a blob if too large, and small
# values which retain detail.
tolerance = min(5, 0.4 * tolerance_for_zoom(zoom))
def _props_pre((shape, props, fid)):
if exclusions:
for prop in exclusions:
if prop in props:
return None
        # also drop building properties that we don't want to consider
        # for merging. area and volume will be re-calculated afterwards.
props.pop('area', None)
props.pop('volume', None)
if drop:
for prop in drop:
props.pop(prop, None)
return props
def _props_post((merged_shape, props, fid)):
# add the area and volume back in
area = int(merged_shape.area)
props['area'] = area
height = props.get('height')
if height is not None:
props['volume'] = height * area
return props
layer['features'] = _merge_features_by_property(
layer['features'], _POLYGON_DIMENSION, tolerance, _props_pre,
_props_post, max_merged_features,
merge_shape_fn=_merge_polygons_with_buffer)
return layer
def merge_polygon_features(ctx):
"""
Merge polygons having the same properties, apart from 'id' and 'area', in
the source_layer between start_zoom and end_zoom inclusive.
Area is re-calculated post-merge and IDs are preserved for features which
are unique in the merge.
"""
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
start_zoom = ctx.params.get('start_zoom', 0)
end_zoom = ctx.params.get('end_zoom')
merge_min_zooms = ctx.params.get('merge_min_zooms', False)
buffer_merge = ctx.params.get('buffer_merge', False)
buffer_merge_tolerance = ctx.params.get('buffer_merge_tolerance')
assert source_layer, 'merge_polygon_features: missing source layer'
layer = _find_layer(ctx.feature_layers, source_layer)
if layer is None:
return None
if zoom < start_zoom:
return None
if end_zoom is not None and zoom >= end_zoom:
return None
tfz = tolerance_for_zoom(zoom)
if buffer_merge_tolerance:
tolerance = eval(buffer_merge_tolerance, {}, {
'tolerance_for_zoom': tfz,
})
else:
tolerance = tfz
def _props_pre((shape, props, fid)):
# drop area while merging, as we'll recalculate after.
props.pop('area', None)
if merge_min_zooms:
props.pop('min_zoom', None)
return props
def _props_post((merged_shape, props, fid)):
# add the area back in
area = int(merged_shape.area)
props['area'] = area
return props
def _props_merge(all_props):
merged_props = None
for props in all_props:
if merged_props is None:
merged_props = props.copy()
else:
min_zoom = props.get('min_zoom')
merged_min_zoom = merged_props.get('min_zoom')
if min_zoom and (merged_min_zoom is None or
min_zoom < merged_min_zoom):
merged_props['min_zoom'] = min_zoom
return merged_props
merge_props_fn = _props_merge if merge_min_zooms else None
merge_shape_fn = _merge_polygons_with_buffer if buffer_merge else None
layer['features'] = _merge_features_by_property(
layer['features'], _POLYGON_DIMENSION, tolerance, _props_pre,
_props_post, merge_props_fn=merge_props_fn,
merge_shape_fn=merge_shape_fn)
return layer
def _angle_at(linestring, pt):
import math
if pt == linestring.coords[0]:
nx = linestring.coords[1]
elif pt == linestring.coords[-1]:
nx = pt
pt = linestring.coords[-2]
else:
assert False, "Expected point to be first or last"
if nx == pt:
return None
dx = nx[0] - pt[0]
dy = nx[1] - pt[1]
if dy < 0.0:
dx = -dx
dy = -dy
a = math.atan2(dy, dx) / math.pi * 180.0
# wrap around at exactly 180, because we don't care about the direction of
# the road, only what angle the line is at, and 180 is horizontal same as
# 0.
if a == 180.0:
a = 0.0
assert 0 <= a < 180
return a
def _junction_merge_candidates(ids, geoms, pt, angle_tolerance):
# find the angles at which the lines join the point
angles = []
for i in ids:
a = _angle_at(geoms[i], pt)
if a is not None:
angles.append((a, i))
# turn that into an angle->index associative list, so
# that we can tell which are the closest pair of angles.
angles.sort()
# list of pairs of ids, candidates to be merged.
candidates = []
# loop over the list, removing the closest pair, as long
    # as they're within the tolerance angle of each other.
while len(angles) > 1:
min_angle = None
for j in xrange(0, len(angles)):
angle1, idx1 = angles[j]
angle0, idx0 = angles[j-1]
# usually > 0 since angles are sorted, but might be negative
# on the first index (angles[-1]). note that, since we're
# taking the non-directional angle, the result should be
# between 0 and 180.
delta_angle = angle1 - angle0
if delta_angle < 0:
delta_angle += 180
if min_angle is None or delta_angle < min_angle[0]:
min_angle = (delta_angle, j)
        if min_angle is None or min_angle[0] >= angle_tolerance:
            break
        # merge the closest pair; use the index saved in min_angle, not the
        # leftover loop variable from the search above.
        j = min_angle[1]
        candidates.append((angles[j][1], angles[j-1][1]))
del angles[j]
del angles[j-1]
return candidates
def _merge_junctions_in_multilinestring(mls, angle_tolerance):
"""
Merge LineStrings within a MultiLineString across junctions where more
than two lines meet and the lines appear to continue across the junction
at the same angle.
The angle_tolerance (in degrees) is used to judge whether two lines
look like they continue across a junction.
Returns a new shape.
"""
endpoints = defaultdict(list)
for i, ls in enumerate(mls.geoms):
endpoints[ls.coords[0]].append(i)
endpoints[ls.coords[-1]].append(i)
seen = set()
merged_geoms = []
for pt, ids in endpoints.iteritems():
# we can't merge unless we've got at least 2 lines!
if len(ids) < 2:
continue
candidates = _junction_merge_candidates(
ids, mls.geoms, pt, angle_tolerance)
for a, b in candidates:
if a not in seen and b not in seen and a != b:
merged = linemerge(MultiLineString(
[mls.geoms[a], mls.geoms[b]]))
if merged.geom_type == 'LineString':
merged_geoms.append(merged)
seen.add(a)
seen.add(b)
elif (merged.geom_type == 'MultiLineString' and
len(merged.geoms) == 1):
merged_geoms.append(merged.geoms[0])
seen.add(a)
seen.add(b)
# add back any left over linestrings which didn't get merged.
for i, ls in enumerate(mls.geoms):
if i not in seen:
merged_geoms.append(ls)
if len(merged_geoms) == 1:
return merged_geoms[0]
else:
return MultiLineString(merged_geoms)
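# Illustrative sketch (not part of the original module): three segments meet
# at (1, 0); the two collinear ones continue at the same angle and merge into
# a single 3-point line, while the perpendicular spur is left alone.
#
#   >>> mls = MultiLineString([[(0, 0), (1, 0)], [(1, 0), (2, 0)],
#   ...                        [(1, 0), (1, 1)]])
#   >>> merged = _merge_junctions_in_multilinestring(mls, 15.0)
#   >>> sorted(len(g.coords) for g in merged.geoms)
#   [2, 3]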
def _loop_merge_junctions(geom, angle_tolerance):
"""
Keep applying junction merging to the MultiLineString until there are no
merge opportunities left.
A single merge step will only carry out one merge per LineString, which
means that the other endpoint might miss out on a possible merge. So we
loop over the merge until all opportunities are exhausted: either we end
up with a single LineString or we run a step and it fails to merge any
candidates.
    For a total number of possible merges, N, a single step could leave up to
    two thirds of them unmerged, depending on the order of the candidates.
    This means we should need only O(log N) steps to merge them all.
"""
if geom.geom_type != 'MultiLineString':
return geom
# keep track of the number of linestrings in the multilinestring. we'll
# use that to figure out if we've merged as much as we possibly can.
mls_size = len(geom.geoms)
while True:
geom = _merge_junctions_in_multilinestring(geom, angle_tolerance)
# merged everything down to a single linestring
if geom.geom_type == 'LineString':
break
# made no progress
elif len(geom.geoms) == mls_size:
break
        assert len(geom.geoms) < mls_size, \
            "Number of geometries should reduce at each merge step."
# otherwise, keep looping
mls_size = len(geom.geoms)
return geom
def _simplify_line_collection(shape, tolerance):
"""
Calling simplify on a MultiLineString doesn't always simplify if it would
make the MultiLineString non-simple.
    Since we're trying to sort linestrings into non-overlapping sets, we don't
    care whether they overlap at this point. However, we do want to make sure
    that any colinear points in the individual LineStrings are removed.
"""
if shape.geom_type == 'LineString':
shape = shape.simplify(tolerance)
elif shape.geom_type == 'MultiLineString':
new_geoms = []
for geom in shape.geoms:
new_geoms.append(geom.simplify(tolerance))
shape = MultiLineString(new_geoms)
return shape
def _merge_junctions(features, angle_tolerance, simplify_tolerance,
split_threshold):
"""
Merge LineStrings within MultiLineStrings within features across junction
boundaries where the lines appear to continue at the same angle.
If simplify_tolerance is provided, apply a simplification step. This can
help to remove colinear junction points left over from any merging.
Finally, group the lines into non-overlapping sets, each of which generates
a separate MultiLineString feature to ensure they're already simple and
further geometric operations won't re-introduce intersection points.
Large linestrings, with more than split_threshold members, use a slightly
different algorithm which is more efficient at very large sizes.
Returns a new list of features.
"""
new_features = []
for shape, props, fid in features:
if shape.geom_type == 'MultiLineString':
shape = _loop_merge_junctions(shape, angle_tolerance)
if simplify_tolerance > 0.0:
shape = _simplify_line_collection(shape, simplify_tolerance)
if shape.geom_type == 'MultiLineString':
disjoint_shapes = _linestring_nonoverlapping_partition(
shape, split_threshold)
for disjoint_shape in disjoint_shapes:
new_features.append((disjoint_shape, props, None))
else:
new_features.append((shape, props, fid))
return new_features
def _first_positive_integer_not_in(s):
"""
Given a set of positive integers, s, return the smallest positive integer
which is _not_ in s.
For example:
>>> _first_positive_integer_not_in(set())
1
>>> _first_positive_integer_not_in(set([1]))
2
>>> _first_positive_integer_not_in(set([1,3,4]))
2
>>> _first_positive_integer_not_in(set([1,2,3,4]))
5
"""
if len(s) == 0:
return 1
last = max(s)
for i in xrange(1, last):
if i not in s:
return i
return last + 1
# utility class so that we can store the array index of the geometry
# inside the shape index.
class _geom_with_index(object):
def __init__(self, geom, index):
self.geom = geom
self.index = index
self._geom = geom._geom
self.is_empty = geom.is_empty
class OrderedSTRTree(object):
"""
An STR-tree geometry index which remembers the array index of the
geometries it was built with, and only returns geometries with lower
indices when queried.
This is used as a substitute for a dynamic index, where we'd be able
to add new geometries as the algorithm progressed.
"""
def __init__(self, geoms):
self.shape_index = STRtree([
_geom_with_index(g, i) for i, g in enumerate(geoms)
])
def query(self, shape, idx):
"""
Return the index elements which have bounding boxes intersecting the
given shape _and_ have array indices less than idx.
"""
for geom in self.shape_index.query(shape):
if geom.index < idx:
yield geom
class SplitOrderedSTRTree(object):
"""
An ordered STR-tree index which splits the geometries it is managing.
    This is a simple, first-order approximation to a dynamic index. If the
    input geometries are sorted by increasing size, then the geometries in the
    "small" first section are much less likely to overlap, and we don't need
    to consult the "big" section at all until the query's array index reaches
    the split point.
This should cut down the number of expensive queries, as well as the
number of subsequent intersection tests to check if the shapes within the
bounding boxes intersect.
"""
def __init__(self, geoms):
split = int(0.75 * len(geoms))
self.small_index = STRtree([
_geom_with_index(g, i) for i, g in enumerate(geoms[0:split])
])
self.big_index = STRtree([
_geom_with_index(g, i + split) for i, g in enumerate(geoms[split:])
])
self.split = split
def query(self, shape, i):
for geom in self.small_index.query(shape):
if geom.index < i:
yield geom
# don't need to query the big index at all unless i >= split. this
# should cut down on the number of yielded items that need further
# intersection tests.
if i >= self.split:
for geom in self.big_index.query(shape):
if geom.index < i:
yield geom
def _linestring_nonoverlapping_partition(mls, split_threshold=15000):
"""
Given a MultiLineString input, returns a list of MultiLineStrings
which are individually simple, but cover all the points in the
input MultiLineString.
The OGC definition of a MultiLineString says it's _simple_ if it
consists of simple LineStrings and the LineStrings only meet each
other at their endpoints. This means that anything which makes
MultiLineStrings simple is going to insert intersections between
crossing lines, and decompose them into separate LineStrings.
In general we _do not want_ this behaviour, as it prevents
simplification and results in more points in the geometry. However,
there are many operations which will result in simple outputs, such
as intersections and unions. Therefore, we would prefer to take the
hit of having multiple features, if the features can be decomposed
in such a way that they are individually simple.
"""
# only interested in MultiLineStrings for this method!
assert mls.geom_type == 'MultiLineString'
# simple (and sub-optimal) greedy algorithm for making sure that
# linestrings don't intersect: put each into the first bucket which
# doesn't already contain a linestring which intersects it.
#
# this will be suboptimal. for example:
#
# 2 4
# | |
# 3 ---+-+---
# | |
# 1 -----+---
# |
#
# (lines 1 & 2 do _not_ intersect).
#
# the greedy algorithm will use 3 buckets, as it'll put lines 1 & 2 in
# the same bucket, forcing 3 & 4 into individual buckets for a total
# of 3 buckets. optimally, we can bucket 1 & 3 together and 2 & 4
# together to only use 2 buckets. however, making this optimal seems
# like it might be a Hard problem.
#
# note that we don't create physical buckets, but assign each shape a
# bucket ID which hasn't been assigned to any other intersecting shape.
# we can assign these in an arbitrary order, and use an index to reduce
# the number of intersection tests needed down to O(n log n). this can
# matter quite a lot at low zooms, where it's possible to get 150,000
# tiny road segments in a single shape!
# sort the geometries before we use them. this can help if we sort things
# which have fewer intersections towards the front of the array, so that
# they can be done more quickly.
def _bbox_area(geom):
minx, miny, maxx, maxy = geom.bounds
return (maxx - minx) * (maxy - miny)
# if there's a large number of geoms, switch to the split index and sort
# so that the spatially largest objects are towards the end of the list.
# this should make it more likely that earlier queries are fast.
if len(mls.geoms) > split_threshold:
geoms = sorted(mls.geoms, key=_bbox_area)
shape_index = SplitOrderedSTRTree(geoms)
else:
geoms = mls.geoms
shape_index = OrderedSTRTree(geoms)
# first, assign everything the "null" bucket with index zero. this means
# we haven't gotten around to it yet, and we can use it as a sentinel
# value to check for logic errors.
bucket_for_shape = [0] * len(geoms)
for idx, shape in enumerate(geoms):
overlapping_buckets = set()
# assign the lowest bucket ID that hasn't been assigned to any
# overlapping shape with a lower index. this is because:
# 1. any overlapping shape would cause the insertion of a point if it
# were allowed in this bucket, and
# 2. we're assigning in-order, so shapes at higher array indexes will
# still be assigned to the null bucket. we'll get to them later!
for indexed_shape in shape_index.query(shape, idx):
if indexed_shape.geom.intersects(shape):
bucket = bucket_for_shape[indexed_shape.index]
assert bucket > 0
overlapping_buckets.add(bucket)
bucket_for_shape[idx] = _first_positive_integer_not_in(
overlapping_buckets)
results = []
for bucket_id in set(bucket_for_shape):
# by this point, no shape should be assigned to the null bucket any
# more.
assert bucket_id > 0
# collect all the shapes which have been assigned to this bucket.
shapes = []
for idx, shape in enumerate(geoms):
if bucket_for_shape[idx] == bucket_id:
shapes.append(shape)
if len(shapes) == 1:
results.append(shapes[0])
else:
results.append(MultiLineString(shapes))
return results
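# Illustrative sketch (not part of the original module): two crossing lines
# can't share a simple MultiLineString without introducing an intersection
# point, so they are partitioned into two separate shapes.
#
#   >>> mls = MultiLineString([[(0, 0), (2, 2)], [(0, 2), (2, 0)]])
#   >>> len(_linestring_nonoverlapping_partition(mls))
#   2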
def _drop_short_segments_from_multi(tolerance, mls):
return MultiLineString(
[g for g in mls.geoms if g.length >= tolerance])
def _drop_short_segments(tolerance, features):
new_features = []
for shape, props, fid in features:
if shape.geom_type == 'MultiLineString':
shape = _drop_short_segments_from_multi(tolerance, shape)
elif shape.geom_type == 'LineString':
if shape.length < tolerance:
shape = None
if shape and not shape.is_empty:
new_features.append((shape, props, fid))
return new_features
def merge_line_features(ctx):
"""
Merge linestrings having the same properties, in the source_layer
between start_zoom and end_zoom inclusive.
By default, will not merge features across points where more than
two lines meet. If you set merge_junctions, then it will try to
merge where the line looks contiguous.
"""
params = _Params(ctx, 'merge_line_features')
zoom = ctx.nominal_zoom
source_layer = params.required('source_layer')
start_zoom = params.optional('start_zoom', default=0, typ=int)
end_zoom = params.optional('end_zoom', typ=int)
merge_junctions = params.optional(
'merge_junctions', default=False, typ=bool)
junction_angle_tolerance = params.optional(
'merge_junction_angle', default=15.0, typ=float)
drop_short_segments = params.optional(
'drop_short_segments', default=False, typ=bool)
short_segment_factor = params.optional(
'drop_length_pixels', default=0.1, typ=float)
simplify_tolerance = params.optional(
'simplify_tolerance', default=0.0, typ=float)
split_threshold = params.optional(
'split_threshold', default=15000, typ=int)
assert source_layer, 'merge_line_features: missing source layer'
layer = _find_layer(ctx.feature_layers, source_layer)
if layer is None:
return None
if zoom < start_zoom:
return None
if end_zoom is not None and zoom >= end_zoom:
return None
layer['features'] = _merge_features_by_property(
layer['features'], _LINE_DIMENSION, simplify_tolerance)
if drop_short_segments:
tolerance = short_segment_factor * tolerance_for_zoom(zoom)
layer['features'] = _drop_short_segments(
tolerance, layer['features'])
if merge_junctions:
layer['features'] = _merge_junctions(
layer['features'], junction_angle_tolerance, simplify_tolerance,
split_threshold)
return layer
def normalize_tourism_kind(shape, properties, fid, zoom):
"""
There are many tourism-related tags, including 'zoo=*' and
'attraction=*' in addition to 'tourism=*'. This function promotes
    things with zoo and attraction tags so that those values become
    their main kind.
See https://github.com/mapzen/vector-datasource/issues/440 for more details.
""" # noqa
zoo = properties.pop('zoo', None)
if zoo is not None:
properties['kind'] = zoo
properties['tourism'] = 'attraction'
return (shape, properties, fid)
attraction = properties.pop('attraction', None)
if attraction is not None:
properties['kind'] = attraction
properties['tourism'] = 'attraction'
return (shape, properties, fid)
return (shape, properties, fid)
# a whitelist of the most common fence types from OSM.
# see https://taginfo.openstreetmap.org/keys/fence_type#values
_WHITELIST_FENCE_TYPES = set([
'avalanche',
'barbed_wire',
'bars',
'brick', # some might say a fence made of brick is called a wall...
'chain',
'chain_link',
'concrete',
'drystone_wall',
'electric',
'grate',
'hedge',
'metal',
'metal_bars',
'net',
'pole',
'railing',
'railings',
'split_rail',
'steel',
'stone',
'wall',
'wire',
'wood',
])
def build_fence(ctx):
"""
    Some landuse polygons have an extra barrier fence tag; in those cases we
want to create an additional feature for the fence.
See https://github.com/mapzen/vector-datasource/issues/857 for more
details.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
base_layer = ctx.params.get('base_layer')
new_layer_name = ctx.params.get('new_layer_name')
prop_transform = ctx.params.get('prop_transform')
assert base_layer, 'Missing base_layer parameter'
start_zoom = ctx.params.get('start_zoom', 16)
layer = None
# don't start processing until the start zoom
if zoom < start_zoom:
return layer
# search through all the layers and extract the one
# which has the name of the base layer we were given
# as a parameter.
layer = _find_layer(feature_layers, base_layer)
# if we failed to find the base layer then it's
# possible the user just didn't ask for it, so return
# an empty result.
if layer is None:
return None
if prop_transform is None:
prop_transform = {}
features = layer['features']
new_features = list()
# loop through all the polygons, if it's a fence, duplicate it.
for feature in features:
shape, props, fid = feature
barrier = props.pop('barrier', None)
if barrier == 'fence':
fence_type = props.pop('fence_type', None)
            # filter only polygon-like objects. we don't
            # want any points or lines which might have been
            # created by the intersection.
filtered_shape = _filter_geom_types(shape, _POLYGON_DIMENSION)
if not filtered_shape.is_empty:
new_props = _make_new_properties(props, prop_transform)
new_props['kind'] = 'fence'
if fence_type in _WHITELIST_FENCE_TYPES:
new_props['kind_detail'] = fence_type
new_features.append((filtered_shape, new_props, fid))
if new_layer_name is None:
# no new layer requested, instead add new
# features into the same layer.
layer['features'].extend(new_features)
return layer
else:
# make a copy of the old layer's information - it
# shouldn't matter about most of the settings, as
# post-processing is one of the last operations.
# but we need to override the name to ensure we get
# some output.
new_layer_datum = layer['layer_datum'].copy()
new_layer_datum['name'] = new_layer_name
new_layer = layer.copy()
new_layer['layer_datum'] = new_layer_datum
new_layer['features'] = new_features
new_layer['name'] = new_layer_name
return new_layer
def normalize_social_kind(shape, properties, fid, zoom):
"""
Social facilities have an `amenity=social_facility` tag, but more
information is generally available in the `social_facility=*` tag, so it
is more informative to put that as the `kind`. We keep the old tag as
well, for disambiguation.
Additionally, we normalise the `social_facility:for` tag, which is a
semi-colon delimited list, to an actual list under the `for` property.
This should make it easier to consume.
"""
kind = properties.get('kind')
if kind == 'social_facility':
tags = properties.get('tags', {})
if tags:
social_facility = tags.get('social_facility')
if social_facility:
properties['kind'] = social_facility
# leave the original tag on for disambiguation
properties['social_facility'] = social_facility
# normalise the 'for' list to an actual list
for_list = tags.get('social_facility:for')
if for_list:
properties['for'] = for_list.split(';')
return (shape, properties, fid)
def normalize_medical_kind(shape, properties, fid, zoom):
"""
Many medical practices, such as doctors and dentists, have a speciality,
which is indicated through the `healthcare:speciality` tag. This is a
semi-colon delimited list, so we expand it to an actual list.
"""
kind = properties.get('kind')
if kind in ['clinic', 'doctors', 'dentist']:
tags = properties.get('tags', {})
if tags:
speciality = tags.get('healthcare:speciality')
if speciality:
properties['speciality'] = speciality.split(';')
return (shape, properties, fid)
class _AnyMatcher(object):
def match(self, other):
return True
def __repr__(self):
return "*"
class _NoneMatcher(object):
def match(self, other):
return other is None
def __repr__(self):
return "-"
class _SomeMatcher(object):
def match(self, other):
return other is not None
def __repr__(self):
return "+"
class _TrueMatcher(object):
def match(self, other):
return other is True
def __repr__(self):
return "true"
class _ExactMatcher(object):
def __init__(self, value):
self.value = value
def match(self, other):
return other == self.value
def __repr__(self):
return repr(self.value)
class _NotEqualsMatcher(object):
def __init__(self, value):
self.value = value
def match(self, other):
return other != self.value
def __repr__(self):
return repr(self.value)
class _SetMatcher(object):
def __init__(self, values):
self.values = values
def match(self, other):
return other in self.values
def __repr__(self):
        return repr(self.values)
class _GreaterThanEqualMatcher(object):
def __init__(self, value):
self.value = value
def match(self, other):
return other >= self.value
def __repr__(self):
return '>=%r' % self.value
class _GreaterThanMatcher(object):
def __init__(self, value):
self.value = value
def match(self, other):
return other > self.value
def __repr__(self):
return '>%r' % self.value
class _LessThanEqualMatcher(object):
def __init__(self, value):
self.value = value
def match(self, other):
return other <= self.value
def __repr__(self):
return '<=%r' % self.value
class _LessThanMatcher(object):
def __init__(self, value):
self.value = value
def match(self, other):
return other < self.value
def __repr__(self):
return '<%r' % self.value
_KEY_TYPE_LOOKUP = {
'int': int,
'float': float,
}
def _parse_kt(key_type):
kt = key_type.split("::")
type_key = kt[1] if len(kt) > 1 else None
fn = _KEY_TYPE_LOOKUP.get(type_key, str)
return (kt[0], fn)
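# e.g. (illustrative): a "key::type" heading is split into the key name and a
# conversion function, defaulting to str when no type is given.
#   _parse_kt('population::int') == ('population', int)
#   _parse_kt('kind') == ('kind', str)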
class CSVMatcher(object):
def __init__(self, fh):
keys = None
types = []
rows = []
self.static_any = _AnyMatcher()
self.static_none = _NoneMatcher()
self.static_some = _SomeMatcher()
self.static_true = _TrueMatcher()
# CSV - allow whitespace after the comma
reader = csv.reader(fh, skipinitialspace=True)
for row in reader:
if keys is None:
target_key = row.pop(-1)
keys = []
for key_type in row:
key, typ = _parse_kt(key_type)
keys.append(key)
types.append(typ)
else:
target_val = row.pop(-1)
for i in range(0, len(row)):
row[i] = self._match_val(row[i], types[i])
rows.append((row, target_val))
self.keys = keys
self.rows = rows
self.target_key = target_key
def _match_val(self, v, typ):
if v == '*':
return self.static_any
if v == '-':
return self.static_none
if v == '+':
return self.static_some
if v == 'true':
return self.static_true
if isinstance(v, str) and ';' in v:
return _SetMatcher(set(v.split(';')))
if v.startswith('>='):
assert len(v) > 2, 'Invalid >= matcher'
return _GreaterThanEqualMatcher(typ(v[2:]))
if v.startswith('<='):
assert len(v) > 2, 'Invalid <= matcher'
return _LessThanEqualMatcher(typ(v[2:]))
if v.startswith('>'):
assert len(v) > 1, 'Invalid > matcher'
return _GreaterThanMatcher(typ(v[1:]))
if v.startswith('<'):
            assert len(v) > 1, 'Invalid < matcher'
return _LessThanMatcher(typ(v[1:]))
if v.startswith('!'):
assert len(v) > 1, 'Invalid ! matcher'
return _NotEqualsMatcher(typ(v[1:]))
return _ExactMatcher(typ(v))
def __call__(self, shape, properties, zoom):
vals = []
for key in self.keys:
# NOTE zoom and geometrytype have special meaning
if key == 'zoom':
val = zoom
elif key.lower() == 'geometrytype':
val = shape.type
else:
val = properties.get(key)
vals.append(val)
for row, target_val in self.rows:
if all([a.match(b) for (a, b) in zip(row, vals)]):
return (self.target_key, target_val)
return None
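# Illustrative sketch (not part of the original module); the column names
# here are hypothetical. The last CSV column is the target property to set,
# the others are matchers tested against feature properties ('zoom' and
# 'geometrytype' are treated specially):
#
#   from StringIO import StringIO
#   matcher = CSVMatcher(StringIO(
#       'zoom::int, kind, min_zoom\n'
#       '>=13, park, 13\n'))
#   matcher(None, {'kind': 'park'}, 14)  # -> ('min_zoom', '13')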
class YAMLToDict(dict):
def __init__(self, fh):
import yaml
data = yaml.load(fh)
assert isinstance(data, dict)
for k, v in data.iteritems():
self[k] = v
def csv_match_properties(ctx):
"""
Add or update a property on all features which match properties which are
given as headings in a CSV file.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
start_zoom = ctx.params.get('start_zoom', 0)
end_zoom = ctx.params.get('end_zoom')
target_value_type = ctx.params.get('target_value_type')
matcher = ctx.resources.get('matcher')
assert source_layer, 'csv_match_properties: missing source layer'
assert matcher, 'csv_match_properties: missing matcher resource'
if zoom < start_zoom:
return None
if end_zoom is not None and zoom >= end_zoom:
return None
layer = _find_layer(feature_layers, source_layer)
if layer is None:
return None
def _type_cast(v):
if target_value_type == 'int':
return int(v)
return v
for shape, props, fid in layer['features']:
m = matcher(shape, props, zoom)
if m is not None:
k, v = m
props[k] = _type_cast(v)
return layer
def update_parenthetical_properties(ctx):
"""
If a feature's name ends with a set of values in parens, update
its kind and increase the min_zoom appropriately.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
start_zoom = ctx.params.get('start_zoom', 0)
end_zoom = ctx.params.get('end_zoom')
parenthetical_values = ctx.params.get('values')
target_min_zoom = ctx.params.get('target_min_zoom')
drop_below_zoom = ctx.params.get('drop_below_zoom')
assert parenthetical_values is not None, \
'update_parenthetical_properties: missing values'
assert target_min_zoom is not None, \
'update_parenthetical_properties: missing target_min_zoom'
if zoom < start_zoom:
return None
if end_zoom is not None and zoom >= end_zoom:
return None
layer = _find_layer(feature_layers, source_layer)
if layer is None:
return None
new_features = []
for shape, props, fid in layer['features']:
name = props.get('name', '')
if not name:
new_features.append((shape, props, fid))
continue
keep = True
for value in parenthetical_values:
if name.endswith('(%s)' % value):
props['kind'] = value
props['min_zoom'] = target_min_zoom
if drop_below_zoom and zoom < drop_below_zoom:
keep = False
if keep:
new_features.append((shape, props, fid))
layer['features'] = new_features
return layer
def height_to_meters(shape, props, fid, zoom):
"""
If the properties has a "height" entry, then convert that to meters.
"""
height = props.get('height')
if not height:
return shape, props, fid
props['height'] = _to_float_meters(height)
return shape, props, fid
def elevation_to_meters(shape, props, fid, zoom):
"""
If the properties has an "elevation" entry, then convert that to meters.
"""
elevation = props.get('elevation')
if not elevation:
return shape, props, fid
props['elevation'] = _to_float_meters(elevation)
return shape, props, fid
def normalize_cycleway(shape, props, fid, zoom):
"""
If the properties contain both a cycleway:left and cycleway:right
with the same values, those should be removed and replaced with a
single cycleway property. Additionally, if a cycleway_both tag is
present, normalize that to the cycleway tag.
"""
cycleway = props.get('cycleway')
cycleway_left = props.get('cycleway_left')
cycleway_right = props.get('cycleway_right')
cycleway_both = props.pop('cycleway_both', None)
if cycleway_both and not cycleway:
props['cycleway'] = cycleway = cycleway_both
if (cycleway_left and cycleway_right and
cycleway_left == cycleway_right and
(not cycleway or cycleway_left == cycleway)):
props['cycleway'] = cycleway_left
del props['cycleway_left']
del props['cycleway_right']
return shape, props, fid
def add_is_bicycle_related(shape, props, fid, zoom):
"""
If the props contain a bicycle_network tag, cycleway, or
highway=cycleway, it should have an is_bicycle_related
boolean. Depends on the normalize_cycleway transform to have been
run first.
"""
props.pop('is_bicycle_related', None)
if ('bicycle_network' in props or
'cycleway' in props or
'cycleway_left' in props or
'cycleway_right' in props or
props.get('bicycle') in ('yes', 'designated') or
props.get('ramp_bicycle') in ('yes', 'left', 'right') or
props.get('kind_detail') == 'cycleway'):
props['is_bicycle_related'] = True
return shape, props, fid
def drop_properties_with_prefix(ctx):
"""
Iterate through all features, dropping all properties that start
with prefix.
"""
prefix = ctx.params.get('prefix')
assert prefix, 'drop_properties_with_prefix: missing prefix param'
feature_layers = ctx.feature_layers
for feature_layer in feature_layers:
for shape, props, fid in feature_layer['features']:
for k in props.keys():
if k.startswith(prefix):
del props[k]
def _drop_small_inners(poly, area_tolerance):
ext = poly.exterior
inners = []
for inner in poly.interiors:
area = Polygon(inner).area
if area >= area_tolerance:
inners.append(inner)
return Polygon(ext, inners)
def drop_small_inners(ctx):
"""
    Drop inner rings which are smaller than the given pixel_area tolerance.
"""
zoom = ctx.nominal_zoom
start_zoom = ctx.params.get('start_zoom', 0)
end_zoom = ctx.params.get('end_zoom')
pixel_area = ctx.params.get('pixel_area')
source_layers = ctx.params.get('source_layers')
assert source_layers, \
"You must provide source_layers (layer names) to drop_small_inners"
assert pixel_area, \
"You must provide a pixel_area parameter to drop_small_inners"
if zoom < start_zoom:
return None
if end_zoom and zoom >= end_zoom:
return None
meters_per_pixel_area = calc_meters_per_pixel_area(zoom)
area_tolerance = meters_per_pixel_area * pixel_area
for layer in ctx.feature_layers:
layer_datum = layer['layer_datum']
layer_name = layer_datum['name']
if layer_name not in source_layers:
continue
new_features = []
for feature in layer['features']:
shape, props, fid = feature
geom_type = shape.geom_type
if geom_type == 'Polygon':
new_shape = _drop_small_inners(shape, area_tolerance)
if not new_shape.is_empty:
new_features.append((new_shape, props, fid))
elif geom_type == 'MultiPolygon':
polys = []
for g in shape.geoms:
new_g = _drop_small_inners(g, area_tolerance)
if not new_g.is_empty:
polys.append(new_g)
if polys:
new_features.append((MultiPolygon(polys), props, fid))
else:
new_features.append(feature)
layer['features'] = new_features
def simplify_layer(ctx):
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
source_layer = ctx.params.get('source_layer')
assert source_layer, 'simplify_layer: missing source layer'
tolerance = ctx.params.get('tolerance', 1.0)
start_zoom = ctx.params.get('start_zoom', 0)
end_zoom = ctx.params.get('end_zoom')
if zoom < start_zoom:
return None
if end_zoom is not None and zoom >= end_zoom:
return None
layer = _find_layer(feature_layers, source_layer)
if layer is None:
return None
# adjust tolerance to be in coordinate units
tolerance = tolerance * tolerance_for_zoom(zoom)
new_features = []
for (shape, props, fid) in layer['features']:
simplified_shape = shape.simplify(tolerance,
preserve_topology=True)
shape = _make_valid_if_necessary(simplified_shape)
new_features.append((shape, props, fid))
layer['features'] = new_features
return layer
def simplify_and_clip(ctx):
"""simplify geometries according to zoom level and clip"""
zoom = ctx.nominal_zoom
simplify_before = ctx.params.get('simplify_before')
assert simplify_before, 'simplify_and_clip: missing simplify_before param'
meters_per_pixel_area = calc_meters_per_pixel_area(zoom)
tolerance = tolerance_for_zoom(zoom)
for feature_layer in ctx.feature_layers:
simplified_features = []
layer_datum = feature_layer['layer_datum']
is_clipped = layer_datum['is_clipped']
clip_factor = layer_datum.get('clip_factor', 1.0)
padded_bounds = feature_layer['padded_bounds']
area_threshold_pixels = layer_datum['area_threshold']
area_threshold_meters = meters_per_pixel_area * area_threshold_pixels
layer_tolerance = layer_datum.get('tolerance', 1.0) * tolerance
# The logic behind simplifying before intersecting rather than the
# other way around is extensively explained here:
# https://github.com/mapzen/TileStache/blob/d52e54975f6ec2d11f63db13934047e7cd5fe588/TileStache/Goodies/VecTiles/server.py#L509,L527
simplify_before_intersect = layer_datum['simplify_before_intersect']
# perform any simplification as necessary
simplify_start = layer_datum['simplify_start']
should_simplify = simplify_start <= zoom < simplify_before
for shape, props, feature_id in feature_layer['features']:
geom_type = normalize_geometry_type(shape.type)
original_geom_dim = _geom_dimensions(shape)
padded_bounds_by_type = padded_bounds[geom_type]
layer_padded_bounds = calculate_padded_bounds(
clip_factor, padded_bounds_by_type)
if should_simplify and simplify_before_intersect:
# To reduce the performance hit of simplifying potentially huge
# geometries to extract only a small portion of them when
# cutting out the actual tile, we cut out a slightly larger
# bounding box first. See here for an explanation:
# https://github.com/mapzen/TileStache/blob/d52e54975f6ec2d11f63db13934047e7cd5fe588/TileStache/Goodies/VecTiles/server.py#L509,L527
min_x, min_y, max_x, max_y = layer_padded_bounds.bounds
gutter_bbox_size = (max_x - min_x) * 0.1
gutter_bbox = Box(
min_x - gutter_bbox_size,
min_y - gutter_bbox_size,
max_x + gutter_bbox_size,
max_y + gutter_bbox_size)
clipped_shape = shape.intersection(gutter_bbox)
simplified_shape = clipped_shape.simplify(
layer_tolerance, preserve_topology=True)
shape = _make_valid_if_necessary(simplified_shape)
if is_clipped:
shape = shape.intersection(layer_padded_bounds)
if should_simplify and not simplify_before_intersect:
simplified_shape = shape.simplify(layer_tolerance,
preserve_topology=True)
shape = _make_valid_if_necessary(simplified_shape)
# this could alter multipolygon geometries
if zoom < simplify_before:
shape = _visible_shape(shape, area_threshold_meters)
# don't keep features which have been simplified to empty or
# None.
if shape is None or shape.is_empty:
continue
# if clipping and simplifying caused this to become a geometry
# collection of different geometry types (e.g: by just touching
# the clipping box), then trim it back to the original geometry
# type.
if shape.type == 'GeometryCollection':
shape = _filter_geom_types(shape, original_geom_dim)
# if that removed all the geometry, then don't keep the
# feature.
if shape is None or shape.is_empty:
continue
simplified_feature = shape, props, feature_id
simplified_features.append(simplified_feature)
feature_layer['features'] = simplified_features
_lookup_operator_rules = {
'United States National Park Service': (
'National Park Service',
'US National Park Service',
'U.S. National Park Service',
'US National Park service'),
'United States Forest Service': (
'US Forest Service',
'U.S. Forest Service',
'USDA Forest Service',
'United States Department of Agriculture',
'US National Forest Service',
'United State Forest Service',
'U.S. National Forest Service'),
    'National Parks & Wildlife Service NSW': (
        'Department of National Parks NSW',
        'Dept of NSW National Parks',
        'Dept of National Parks NSW',
'NSW National Parks',
'NSW National Parks & Wildlife Service',
'NSW National Parks and Wildlife Service',
'NSW Parks and Wildlife Service',
'NSW Parks and Wildlife Service (NPWS)',
'National Parks and Wildlife NSW',
'National Parks and Wildlife Service NSW')}
normalized_operator_lookup = {}
for normalized_operator, variants in _lookup_operator_rules.items():
for variant in variants:
normalized_operator_lookup[variant] = normalized_operator
def normalize_operator_values(shape, properties, fid, zoom):
"""
    There are many variants of operator values, such as 'National Park
    Service', 'U.S. National Park Service' and 'US National Park Service',
    which all refer to the same operator. This function replaces known
    variants with a single normalized value.
See https://github.com/tilezen/vector-datasource/issues/927.
"""
operator = properties.get('operator', None)
if operator is not None:
normalized_operator = normalized_operator_lookup.get(operator, None)
if normalized_operator:
properties['operator'] = normalized_operator
return (shape, properties, fid)
return (shape, properties, fid)
def _guess_type_from_network(network):
"""
Return a best guess of the type of network (road, hiking, bus, bicycle)
from the network tag itself.
"""
if network in ['iwn', 'nwn', 'rwn', 'lwn']:
return 'hiking'
elif network in ['icn', 'ncn', 'rcn', 'lcn']:
return 'bicycle'
else:
# hack for now - how can we tell bus routes from road routes?
# it seems all bus routes are relations, where we have a route type
# given, so this should default to roads.
return 'road'
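# e.g. (illustrative): _guess_type_from_network('ncn') == 'bicycle'
#      _guess_type_from_network('rwn') == 'hiking'
#      _guess_type_from_network('US:I') == 'road'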
# a mapping of operator tag values to the networks that they are (probably)
# part of. this would be better specified directly on the data, but sometimes
# it's just not available.
#
# this is a list of the operators with >=100 uses on ways tagged as motorways,
# which should hopefully allow us to catch most of the important ones. they're
# mapped to the country they're in, which should be enough in most cases to
# render the appropriate shield.
_NETWORK_OPERATORS = {
'Highways England': 'GB',
'ASF': 'FR',
'Autopista Litoral Sul': 'BR',
'DNIT': 'BR',
'Εγνατία Οδός': 'GR',
'Αυτοκινητόδρομος Αιγαίου': 'GR',
'Transport Scotland': 'GB',
'The Danish Road Directorate': 'DK',
"Autostrade per l' Italia S.P.A.": 'IT',
'Νέα Οδός': 'GR',
'Autostrada dei Fiori S.P.A.': 'IT',
'S.A.L.T.': 'IT',
'Welsh Government': 'GB',
'Euroscut': 'PT',
'DIRIF': 'FR',
'Administración central': 'ES',
'Αττική Οδός': 'GR',
'Autocamionale della Cisa S.P.A.': 'IT',
'Κεντρική Οδός': 'GR',
'Bundesrepublik Deutschland': 'DE',
'Ecovias': 'BR',
'東日本高速道路': 'JP',
'NovaDutra': 'BR',
'APRR': 'FR',
'Via Solutions Südwest': 'DE',
'Autoroutes du Sud de la France': 'FR',
'Transport for Scotland': 'GB',
'Departamento de Infraestructuras Viarias y Movilidad': 'ES',
'ViaRondon': 'BR',
'DIRNO': 'FR',
'SATAP': 'IT',
'Ολυμπία Οδός': 'GR',
'Midland Expressway Ltd': 'GB',
'autobahnplus A8 GmbH': 'DE',
'Cart': 'BR',
'Μορέας': 'GR',
'Hyderabad Metropolitan Development Authority': 'PK',
'Viapar': 'BR',
'Autostrade Centropadane': 'IT',
'Triângulo do Sol': 'BR',
}
def _ref_importance(ref):
try:
# first, see if the reference is a number, or easily convertible
# into one.
ref = int(ref or 0)
except ValueError:
# if not, we can try to extract anything that looks like a sequence
# of digits from the ref.
m = _ANY_NUMBER.match(ref)
if m:
ref = int(m.group(1))
else:
# failing that, we assume that a completely non-numeric ref is
# a name, which would make it quite important.
ref = 0
# make sure no ref is negative
ref = abs(ref)
return ref
def _guess_network_gb(tags):
# for roads we put the original OSM highway tag value in kind_detail, so we
# can recover it here.
highway = tags.get('kind_detail')
ref = tags.get('ref', '')
networks = []
# although roads are part of only one network in the UK, some roads are
# tagged incorrectly as being part of two, so we have to handle this case.
for part in ref.split(';'):
if not part:
continue
# letter at the start of the ref indicates the road class. generally
# one of 'M', 'A', or 'B' - although other letters exist, they are
# rarely used.
letter, number = _splitref(part)
# UK is tagged a bit weirdly, using the highway tag value in addition
# to the ref to figure out which road class should be applied. the
# following is not applied strictly, but is a "best guess" at the
# appropriate signage colour.
#
# https://wiki.openstreetmap.org/wiki/United_Kingdom_Tagging_Guidelines
if letter == 'M' and highway == 'motorway':
networks.append(('GB:M-road', 'M' + number))
elif ref.endswith('(M)') and highway == 'motorway':
networks.append(('GB:M-road', 'A' + number))
elif letter == 'A' and highway == 'trunk':
networks.append(('GB:A-road-green', 'A' + number))
elif letter == 'A' and highway == 'primary':
networks.append(('GB:A-road-white', 'A' + number))
elif letter == 'B' and highway == 'secondary':
networks.append(('GB:B-road', 'B' + number))
return networks
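# Illustrative sketch (not part of the original module), assuming the
# module's _splitref helper splits 'M25' into ('M', '25'):
#
#   _guess_network_gb({'kind_detail': 'motorway', 'ref': 'M25'})
#   # -> [('GB:M-road', 'M25')]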
def _guess_network_ar(tags):
ref = tags.get('ref')
if ref is None:
return None
elif ref.startswith('RN'):
return [('AR:national', ref)]
elif ref.startswith('RP'):
return [('AR:provincial', ref)]
return None
def _guess_network_with(tags, fn):
"""
Common function for backfilling (network, ref) pairs by running the
"normalize" function on the parts of the ref. For example, if the
ref was 'A1;B2;C3', then the normalize function would be run on
fn(None, 'A1'), fn(None, 'B2'), etc...
This allows us to back-fill the network where it can be deduced from
the ref in a particular country (e.g: if all motorways are A[0-9]).
"""
ref = tags.get('ref', '')
networks = []
for part in ref.split(';'):
part = part.strip()
if not part:
continue
network, ref = fn(None, part)
networks.append((network, part))
return networks
def _guess_network_au(tags):
return _guess_network_with(tags, _normalize_au_netref)
# list of all the state codes in Brazil, see
# https://en.wikipedia.org/wiki/ISO_3166-2:BR
_BR_STATES = set([
'DF', # Distrito Federal (federal district, not really a state)
'AC', # Acre
'AL', # Alagoas
'AP', # Amapá
'AM', # Amazonas
'BA', # Bahia
'CE', # Ceará
'ES', # Espírito Santo
'GO', # Goiás
'MA', # Maranhão
'MT', # Mato Grosso
'MS', # Mato Grosso do Sul
'MG', # Minas Gerais
'PA', # Pará
'PB', # Paraíba
'PR', # Paraná
'PE', # Pernambuco
'PI', # Piauí
'RJ', # Rio de Janeiro
'RN', # Rio Grande do Norte
'RS', # Rio Grande do Sul
'RO', # Rondônia
'RR', # Roraima
'SC', # Santa Catarina
'SP', # São Paulo
'SE', # Sergipe
'TO', # Tocantins
])
# additional road types
_BR_NETWORK_EXPANSION = {
# Minas Gerais state roads
'AMG': 'BR:MG',
'LMG': 'BR:MG:local',
'MGC': 'BR:MG',
# CMG seems to be coupled with BR- roads of the same number
'CMG': 'BR:MG',
# Rio Grande do Sul state roads
'ERS': 'BR:RS',
'VRS': 'BR:RS',
'RSC': 'BR:RS',
# access roads in São Paulo?
'SPA': 'BR:SP',
# connecting roads in Paraná?
'PRC': 'BR:PR',
# municipal roads in Paulínia
'PLN': 'BR:SP:PLN',
# municipal roads in São Carlos
# https://pt.wikipedia.org/wiki/Estradas_do_munic%C3%ADpio_de_S%C3%A3o_Carlos#Identifica%C3%A7%C3%A3o
'SCA': 'BR:SP:SCA',
}
def _guess_network_br(tags):
ref = tags.get('ref')
networks = []
# a missing or blank ref isn't going to give us much information
if not ref:
return networks
# track last prefix, so that we can handle cases where the ref is written
# as "BR-XXX/YYY" to mean "BR-XXX; BR-YYY".
last_prefix = None
for prefix, num in re.findall('([A-Za-z]+)?[- ]?([0-9]+)', ref):
# if there's a prefix, save it for potential later use. if there isn't
# then use the previous one - if any.
if prefix:
last_prefix = prefix
else:
prefix = last_prefix
# make sure the prefix is from a network that we know about.
if prefix == 'BR':
network = prefix
elif prefix in _BR_STATES:
network = 'BR:' + prefix
elif prefix in _BR_NETWORK_EXPANSION:
network = _BR_NETWORK_EXPANSION[prefix]
else:
continue
networks.append((network, '%s-%s' % (prefix, num)))
return networks
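# e.g. (illustrative): a ref written as 'BR-101/116' re-uses the last seen
# prefix for the second number, yielding:
#   _guess_network_br({'ref': 'BR-101/116'})
#   # -> [('BR', 'BR-101'), ('BR', 'BR-116')]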
def _guess_network_ca(tags):
nat_name = tags.get('nat_name:en') or tags.get('nat_name')
ref = tags.get('ref')
network = tags.get('network')
networks = []
if network and ref:
networks.append((network, ref))
if nat_name and nat_name.lower() == 'trans-canada highway':
        # note: no ref for TCH. some provinces appear to add route numbers
        # from their provincial highways to the TCH shields, e.g:
# https://commons.wikimedia.org/wiki/File:TCH-16_(BC).svg
networks.append(('CA:transcanada', ref))
if not networks and ref:
# final fallback - all we know is that this is a road in Canada.
networks.append(('CA', ref))
return networks
def _guess_network_ch(tags):
ref = tags.get('ref', '')
networks = []
for part in ref.split(';'):
if not part:
continue
network, ref = _normalize_ch_netref(None, part)
if network or ref:
networks.append((network, ref))
return networks
def _guess_network_cn(tags):
return _guess_network_with(tags, _normalize_cn_netref)
def _guess_network_es(tags):
return _guess_network_with(tags, _normalize_es_netref)
def _guess_network_fr(tags):
return _guess_network_with(tags, _normalize_fr_netref)
def _guess_network_de(tags):
return _guess_network_with(tags, _normalize_de_netref)
def _guess_network_ga(tags):
return _guess_network_with(tags, _normalize_ga_netref)
def _guess_network_gr(tags):
ref = tags.get('ref', '')
networks = []
for part in ref.split(';'):
if not part:
continue
# ignore provincial refs, they should be on reg_ref. see:
# https://wiki.openstreetmap.org/wiki/WikiProject_Greece/Provincial_Road_Network
if part.startswith(u'ΕΠ'.encode('utf-8')):
continue
network, ref = _normalize_gr_netref(None, part)
networks.append((network, part))
return networks
def _guess_network_in(tags):
ref = tags.get('ref', '')
networks = []
for part in ref.split(';'):
if not part:
continue
network, ref = _normalize_in_netref(None, part)
# note: we return _ref_ here, as normalize_in_netref might have changed
# the ref part (e.g: in order to split MDR54 into (network=MDR, ref=54)
networks.append((network, ref))
return networks
def _guess_network_mx(tags):
return _guess_network_with(tags, _normalize_mx_netref)
def _guess_network_my(tags):
return _guess_network_with(tags, _normalize_my_netref)
def _guess_network_no(tags):
return _guess_network_with(tags, _normalize_no_netref)
def _guess_network_pe(tags):
return _guess_network_with(tags, _normalize_pe_netref)
def _guess_network_jp(tags):
ref = tags.get('ref', '')
name = tags.get('name:ja') or tags.get('name')
network_from_name = None
if name:
if isinstance(name, str):
name = unicode(name, 'utf-8')
if name.startswith(u'国道') and \
name.endswith(u'号'):
network_from_name = 'JP:national'
networks = []
for part in ref.split(';'):
if not part:
continue
network, ref = _normalize_jp_netref(None, part)
if network is None and network_from_name is not None:
network = network_from_name
if network and part:
networks.append((network, part))
return networks
def _guess_network_kr(tags):
ref = tags.get('ref', '')
network_from_tags = tags.get('network')
# the name often ends with a word which appears to mean expressway or
# national road.
name_ko = _make_unicode_or_none(tags.get('name:ko') or tags.get('name'))
if name_ko and network_from_tags is None:
if name_ko.endswith(u'국도'):
# national roads - gukdo
network_from_tags = 'KR:national'
elif name_ko.endswith(u'광역시도로'):
# metropolitan city roads - gwangyeoksido
network_from_tags = 'KR:metropolitan'
elif name_ko.endswith(u'특별시도'):
# special city (Seoul) roads - teukbyeolsido
network_from_tags = 'KR:metropolitan'
elif (name_ko.endswith(u'고속도로') or
name_ko.endswith(u'고속도로지선')):
# expressways - gosokdoro (and expressway branches)
network_from_tags = 'KR:expressway'
elif name_ko.endswith(u'지방도'):
# local highways - jibangdo
network_from_tags = 'KR:local'
networks = []
for part in ref.split(';'):
if not part:
continue
network, ref = _normalize_kr_netref(None, part)
if network is None and network_from_tags is not None:
network = network_from_tags
if network and part:
networks.append((network, part))
return networks
def _guess_network_pl(tags):
return _guess_network_with(tags, _normalize_pl_netref)
def _guess_network_pt(tags):
return _guess_network_with(tags, _normalize_pt_netref)
def _guess_network_ro(tags):
return _guess_network_with(tags, _normalize_ro_netref)
def _guess_network_ru(tags):
ref = tags.get('ref', '')
network = tags.get('network')
networks = []
for part in ref.split(';'):
if not part:
continue
# note: we pass in the network tag, as that can be important for
# disambiguating Russian refs.
        # use a separate name here, so that the original network tag is
        # passed in unchanged for every part of the ref (assigning back to
        # `network` would feed the normalized value into later iterations).
        net, _ = _normalize_ru_netref(network, part)
        networks.append((net, part))
return networks
def _guess_network_sg(tags):
return _guess_network_with(tags, _normalize_sg_netref)
def _guess_network_tr(tags):
ref = tags.get('ref', '')
networks = []
for part in _COMMON_SEPARATORS.split(ref):
part = part.strip()
if not part:
continue
network, ref = _normalize_tr_netref(None, part)
if network or ref:
networks.append((network, ref))
return networks
def _guess_network_ua(tags):
return _guess_network_with(tags, _normalize_ua_netref)
_COMMON_SEPARATORS = re.compile('[;,/]')
def _guess_network_vn(tags):
ref = tags.get('ref', '')
# some bare refs can be augmented from the network tag on the way, or
# guessed from the name, which often starts with the type of the road.
network_from_tags = tags.get('network')
if not network_from_tags:
name = tags.get('name') or tags.get('name:vi')
if name:
name = unicode(name, 'utf-8')
if name.startswith(u'Tỉnh lộ'):
network_from_tags = 'VN:provincial'
elif name.startswith(u'Quốc lộ'):
network_from_tags = 'VN:national'
networks = []
for part in _COMMON_SEPARATORS.split(ref):
if not part:
continue
network, ref = _normalize_vn_netref(network_from_tags, part)
if network or ref:
networks.append((network, ref))
return networks
def _guess_network_za(tags):
ref = tags.get('ref', '')
networks = []
for part in _COMMON_SEPARATORS.split(ref):
if not part:
continue
network, ref = _normalize_za_netref(tags.get('network'), part)
networks.append((network, part))
return networks
def _do_not_backfill(tags):
return None
def _sort_network_us(network, ref):
if network is None:
network_code = 9999
elif network == 'US:I':
network_code = 1
elif network == 'US:US':
network_code = 2
else:
network_code = len(network.split(':')) + 3
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
_AU_NETWORK_IMPORTANCE = {
'N-highway': 0,
'A-road': 1,
'M-road': 2,
'B-road': 3,
'C-road': 4,
'N-route': 5,
'S-route': 6,
'Metro-road': 7,
'T-drive': 8,
'R-route': 9,
}
def _sort_network_au(network, ref):
if network is None or \
not network.startswith('AU:'):
network_code = 9999
else:
network_code = _AU_NETWORK_IMPORTANCE.get(network[3:], 9999)
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_br(network, ref):
if network is None:
network_code = 9999
elif network == 'BR:Trans-Amazonian':
network_code = 0
else:
network_code = len(network.split(':')) + 1
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_ca(network, ref):
if network is None:
network_code = 9999
elif network == 'CA:transcanada':
network_code = 0
elif network == 'CA:yellowhead':
network_code = 1
else:
network_code = len(network.split(':')) + 2
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_ch(network, ref):
if network is None:
network_code = 9999
elif network == 'CH:national':
network_code = 0
elif network == 'CH:regional':
network_code = 1
elif network == 'e-road':
network_code = 99
else:
network_code = len(network.split(':')) + 2
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_cn(network, ref):
if network is None:
network_code = 9999
elif network == 'CN:expressway':
network_code = 0
elif network == 'CN:expressway:regional':
network_code = 1
elif network == 'CN:JX':
network_code = 2
elif network == 'AsianHighway':
network_code = 99
else:
network_code = len(network.split(':')) + 3
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_es(network, ref):
if network is None:
network_code = 9999
elif network == 'ES:A-road':
network_code = 0
elif network == 'ES:N-road':
network_code = 1
elif network == 'ES:autonoma':
network_code = 2
elif network == 'ES:province':
network_code = 3
elif network == 'ES:city':
network_code = 4
elif network == 'e-road':
network_code = 99
else:
network_code = 5 + len(network.split(':'))
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_fr(network, ref):
if network is None:
network_code = 9999
elif network == 'FR:A-road':
network_code = 0
elif network == 'FR:N-road':
network_code = 1
elif network == 'FR:D-road':
network_code = 2
elif network == 'FR':
network_code = 3
elif network == 'e-road':
network_code = 99
else:
network_code = 5 + len(network.split(':'))
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_de(network, ref):
if network is None:
network_code = 9999
elif network == 'DE:BAB':
network_code = 0
elif network == 'DE:BS':
network_code = 1
elif network == 'DE:LS':
network_code = 2
elif network == 'DE:KS':
network_code = 3
elif network == 'DE:STS':
network_code = 4
elif network == 'DE:Hamburg:Ring':
network_code = 5
elif network == 'e-road':
network_code = 99
else:
network_code = len(network.split(':')) + 6
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_ga(network, ref):
if network is None:
network_code = 9999
elif network == 'GA:national':
network_code = 0
elif network == 'GA:L-road':
network_code = 1
else:
network_code = 2 + len(network.split(':'))
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_gr(network, ref):
if network is None:
network_code = 9999
elif network == 'GR:motorway':
network_code = 0
elif network == 'GR:national':
network_code = 1
elif network == 'e-road':
network_code = 99
else:
network_code = len(network.split(':')) + 3
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_in(network, ref):
if network is None:
network_code = 9999
elif network == 'IN:NH':
network_code = 0
elif network == 'IN:SH':
network_code = 1
elif network == 'IN:MDR':
network_code = 2
else:
network_code = len(network.split(':')) + 3
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_ir(network, ref):
if network is None:
network_code = 9999
elif network == 'AsianHighway':
network_code = 99
else:
network_code = len(network.split(':'))
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_kz(network, ref):
if network is None:
network_code = 9999
elif network == 'KZ:national':
network_code = 0
elif network == 'KZ:regional':
network_code = 1
elif network == 'e-road':
network_code = 99
elif network == 'AsianHighway':
network_code = 99
else:
network_code = 2 + len(network.split(':'))
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_la(network, ref):
if network is None:
network_code = 9999
elif network == 'LA:national':
network_code = 0
elif network == 'AsianHighway':
network_code = 99
else:
network_code = 1 + len(network.split(':'))
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_mx(network, ref):
if network is None:
network_code = 9999
elif network == 'MX:MEX':
network_code = 0
else:
network_code = len(network.split(':')) + 1
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_my(network, ref):
if network is None:
network_code = 9999
elif network == 'MY:federal':
network_code = 0
elif network == 'MY:expressway':
network_code = 1
elif network == 'AsianHighway':
network_code = 99
else:
network_code = len(network.split(':')) + 2
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_no(network, ref):
if network is None:
network_code = 9999
elif network == 'NO:oslo:ring':
network_code = 0
elif network == 'e-road':
network_code = 1
    # these are the lower-case forms produced by _normalize_no_netref below.
    elif network == 'NO:riksvei':
        network_code = 2
    elif network == 'NO:fylkesvei':
        network_code = 3
else:
network_code = len(network.split(':')) + 4
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_gb(network, ref):
if network is None:
network_code = 9999
elif network == 'GB:M-road':
network_code = 0
elif network == 'GB:A-road-green':
network_code = 1
elif network == 'GB:A-road-white':
network_code = 2
elif network == 'GB:B-road':
network_code = 3
elif network == 'e-road':
network_code = 99
else:
network_code = len(network.split(':')) + 4
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_pl(network, ref):
if network is None:
network_code = 9999
elif network == 'PL:motorway':
network_code = 0
elif network == 'PL:expressway':
network_code = 1
elif network == 'PL:national':
network_code = 2
elif network == 'e-road':
network_code = 99
else:
network_code = len(network.split(':')) + 3
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_pt(network, ref):
if network is None:
network_code = 9999
elif network == 'PT:motorway':
network_code = 0
elif network == 'PT:primary':
network_code = 1
elif network == 'PT:secondary':
network_code = 2
elif network == 'PT:national':
network_code = 3
elif network == 'PT:rapid':
network_code = 4
elif network == 'PT:express':
network_code = 5
elif network == 'PT:regional':
network_code = 6
elif network == 'PT:municipal':
network_code = 7
elif network == 'e-road':
network_code = 99
else:
network_code = len(network.split(':')) + 8
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_ro(network, ref):
if network is None:
network_code = 9999
elif network == 'RO:motorway':
network_code = 0
elif network == 'RO:national':
network_code = 1
elif network == 'RO:county':
network_code = 2
elif network == 'RO:local':
network_code = 3
elif network == 'e-road':
network_code = 99
else:
network_code = len(network.split(':')) + 4
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_ru(network, ref):
ref = _make_unicode_or_none(ref)
if network is None:
network_code = 9999
elif network == 'RU:national' and ref:
if ref.startswith(u'М'):
network_code = 0
elif ref.startswith(u'Р'):
network_code = 1
elif ref.startswith(u'А'):
network_code = 2
else:
network_code = 9999
elif network == 'RU:regional':
network_code = 3
elif network == 'e-road':
network_code = 99
elif network == 'AsianHighway':
network_code = 99
else:
network_code = len(network.split(':')) + 4
if ref is None:
ref = 9999
else:
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_tr(network, ref):
ref = _make_unicode_or_none(ref)
if network is None:
network_code = 9999
elif network == 'TR:motorway':
network_code = 0
elif network == 'TR:highway':
# some highways are "main highways", so it makes sense to show them
# before regular other highways.
# see footer of https://en.wikipedia.org/wiki/State_road_D.010_(Turkey)
if ref in ('D010', 'D100', 'D200', 'D300', 'D400',
'D550', 'D650', 'D750', 'D850', 'D950'):
network_code = 1
else:
network_code = 2
elif network == 'TR:provincial':
network_code = 3
elif network == 'e-road':
network_code = 99
elif network == 'AsianHighway':
network_code = 99
else:
network_code = len(network.split(':')) + 4
if ref is None:
ref = 9999
else:
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_ua(network, ref):
ref = _make_unicode_or_none(ref)
if network is None:
network_code = 9999
elif network == 'UA:international':
network_code = 0
elif network == 'UA:national':
network_code = 1
elif network == 'UA:regional':
network_code = 2
elif network == 'UA:territorial':
network_code = 3
elif network == 'e-road':
network_code = 99
else:
network_code = len(network.split(':')) + 4
if ref is None:
ref = 9999
else:
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_vn(network, ref):
if network is None:
network_code = 9999
elif network == 'VN:expressway':
network_code = 0
elif network == 'VN:national':
network_code = 1
elif network == 'VN:provincial':
network_code = 2
elif network == 'VN:road':
network_code = 3
elif network == 'AsianHighway':
network_code = 99
else:
network_code = len(network.split(':')) + 4
if ref is None:
ref = 9999
else:
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
def _sort_network_za(network, ref):
if network is None:
network_code = 9999
elif network == 'ZA:national':
network_code = 0
elif network == 'ZA:provincial':
network_code = 1
elif network == 'ZA:regional':
network_code = 2
elif network == 'ZA:metropolitan':
network_code = 3
elif network == 'ZA:kruger':
network_code = 4
elif network == 'ZA:S-road':
network_code = 5
else:
network_code = len(network.split(':')) + 6
if ref is None:
ref = 9999
else:
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
_AU_NETWORK_EXPANSION = {
'A': 'AU:A-road',
'M': 'AU:M-road',
'B': 'AU:B-road',
'C': 'AU:C-road',
'N': 'AU:N-route',
'R': 'AU:R-route',
'S': 'AU:S-route',
'T': 'AU:T-drive',
'MR': 'AU:Metro-road',
}
def _splitref(ref):
"""
Split ref into a leading alphabetic part and a trailing (possibly numeric)
part.
"""
# empty strings don't have a prefix
if not ref:
return None, ref
for i in xrange(0, len(ref)):
if not ref[i].isalpha():
return ref[0:i], ref[i:].strip()
# got to the end, must be all "prefix", which probably indicates it's not
# a ref of the expected prefix-suffix form, and we should just return the
# ref without a prefix.
return None, ref
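# illustrative examples (not from the original source):
#
#   _splitref('M25')  -> ('M', '25')
#   _splitref('25')   -> ('', '25')    # no alphabetic prefix
#   _splitref('ABC')  -> (None, 'ABC') # all prefix, no suffix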
def _normalize_au_netref(network, ref):
"""
Take the network and ref of an Australian road and normalise them so that
the network is in the form 'AU:road-type' and the ref is numeric. This is
based on a bunch of logic about what kinds of Australian roads exist.
Returns new (network, ref) values.
"""
# grab the prefix, if any, from the ref. we can use this to "back-fill" the
# network.
prefix, ref = _splitref(ref)
if network and network.startswith('AU:') and \
network[3:] in _AU_NETWORK_IMPORTANCE:
# network is already in the form we want!
pass
elif network in _AU_NETWORK_EXPANSION:
network = _AU_NETWORK_EXPANSION[network]
elif prefix in _AU_NETWORK_EXPANSION:
# backfill network from ref, if possible. (note that ref must
# be non-None, since mz_networks entries have either network or
# ref, or both).
network = _AU_NETWORK_EXPANSION[prefix]
return network, ref
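# illustrative examples (not from the original source):
#
#   _normalize_au_netref(None, 'M1') -> ('AU:M-road', '1')
#   _normalize_au_netref('A', '3')   -> ('AU:A-road', '3')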
def _normalize_br_netref(network, ref):
# try to add detail to the network by looking at the ref value,
# which often has additional information.
for guess_net, guess_ref in _guess_network_br(dict(ref=ref)):
if guess_ref == ref and (
network is None or guess_net.startswith(network)):
network = guess_net
break
if network == 'BR':
if ref == 'BR-230':
return 'BR:Trans-Amazonian', ref
else:
return network, ref
elif network and network.startswith('BR:'):
# turn things like "BR:BA-roads" into just "BR:BA"
if network.endswith('-roads'):
network = network[:-6]
return network, ref
elif network in _BR_STATES:
# just missing the 'BR:' at the start?
return 'BR:' + network, ref
else:
return None, ref
def _normalize_ca_netref(network, ref):
if isinstance(network, (str, unicode)) and \
network.startswith('CA:NB') and \
ref.isdigit():
refnum = int(ref)
if refnum >= 200:
network = 'CA:NB3'
elif refnum >= 100:
network = 'CA:NB2'
return network, ref
def _normalize_cd_netref(network, ref):
if network == 'CD:rrig':
network = 'CD:RRIG'
return network, ref
def _normalize_ch_netref(network, ref):
prefix, ref = _splitref(ref)
if network == 'CH:Nationalstrasse':
# clean up the ref by removing any prefixes and extra stuff after
# the number.
ref = ref.split(' ')[0]
network = 'CH:national'
elif prefix == 'A':
network = 'CH:motorway'
elif network not in ('CH:motorway', 'CH:national', 'CH:regional'):
network = None
ref = None
return network, ref
def _normalize_cn_netref(network, ref):
if ref and ref.startswith('S'):
network = 'CN:expressway:regional'
elif ref and ref.startswith('G'):
network = 'CN:expressway'
elif ref and ref.startswith('X'):
network = 'CN:JX'
elif network == 'CN-expressways':
network = 'CN:expressway'
elif network == 'CN-expressways-regional':
network = 'CN:expressway:regional'
elif network == 'JX-roads':
network = 'CN:JX'
return network, ref
# network prefixes used by the autonomous communities; the comments give the
# region names (cf. ISO 3166-2 codes)
_ES_AUTONOMA = set([
'ARA', # Aragon
'A', # Aragon & Andalusia (and also Álava, Basque Country)
'CA', # Cantabria (also Cadiz?)
'CL', # Castile & Leon
'CM', # Castilla-La Mancha
'C', # Catalonia (also Cistierna & Eivissa?)
'EX', # Extremadura
'AG', # Galicia
'M', # Madrid
'R', # Madrid
'Ma', # Mallorca
'Me', # Menorca
'ML', # Melilla
'RC', # Menorca
'RM', # Murcia
'V', # Valencia (also A Coruna?)
'CV', # Valencia
'Cv', # Valencia
])
# network prefixes used by the provinces; the comments give the province
# names (cf. ISO 3166-2 codes)
_ES_PROVINCES = set([
'AC', # A Coruna
'DP', # A Coruna
'AB', # Albacete
'F', # Alicante?
'AL', # Almeria
'AE', # Asturias
'AS', # Asturias
'AV', # Avila
'BA', # Badajoz
'B', # Barcelona
'BP', # Barcelona
'BV', # Barcelona
'BI', # Bizkaia
'BU', # Burgos
'CC', # Caceres
'CO', # Cordoba
    'CR',   # Ciudad Real
'GIP', # Girona
'GIV', # Girona
'GI', # Gipuzkoa & Girona
'GR', # Granada
'GU', # Guadalajara
'HU', # Huesca
'JA', # Jaen
'JV', # Jaen
'LR', # La Rioja
'LE', # Leon
'L', # Lerida
'LP', # Lerida
'LV', # Lerida
'LU', # Lugo
'MP', # Madrid
'MA', # Malaga
'NA', # Navarre
'OU', # Orense
'P', # Palencia
'PP', # Palencia
'EP', # Pontevedra
'PO', # Pontevedra
'DSA', # Salamanca
'SA', # Salamanca
'NI', # Segovia
'SG', # Segovia
'SE', # Sevilla
'SO', # Soria
'TP', # Tarragona
'TV', # Tarragona
'TE', # Teruel
'TO', # Toledo
'VA', # Valladolid
'ZA', # Zamora
'CP', # Zaragoza
'Z', # Zaragoza
'PM', # Baleares
'PMV', # Baleares
])
# network prefixes used by cities; the comments give the city names
_ES_CITIES = set([
'AI', # Aviles
'IA', # Aviles
'CT', # Cartagena
'CS', # Castello
'CU', # Cudillero
'CHE', # Ejea de los Caballeros
'EL', # Elx/Elche
'FE', # Ferrol
'GJ', # Gijon
'H', # Huelva
'VM', # Huelva
'J', # Jaen
'LN', # Lena
'LL', # Lleida
'LO', # Logrono
'ME', # Merida
'E', # Mollerussa? / Eivissa
'MU', # Murcia
'O', # Oviedo
'PA', # Pamplona
'PR', # Parres
'PI', # Pilona
'CHMS', # Ponferrada?
'PT', # Puertollano
'SL', # Salas
'S', # Santander
'SC', # Santiago de Compostela
'SI', # Siero
'VG', # Vigo
'EI', # Eivissa
])
def _normalize_es_netref(network, ref):
prefix, num = _splitref(ref)
# some A-roads in Spain are actually province or autonoma roads. these are
# distinguished from the national A-roads by whether they have 1 or 2
# digits (national) or 3 or more digits (autonoma / province). sadly, it
# doesn't seem to be possible to tell whether it's an autonoma or province
# without looking at the geometry, which is left as a TODO for later rainy
# days.
num_digits = 0
if num:
num = num.lstrip('-')
for c in num:
if c.isdigit():
num_digits += 1
else:
break
if prefix in ('A', 'AP') and num_digits > 0 and num_digits < 3:
network = 'ES:A-road'
elif prefix == 'N':
network = 'ES:N-road'
elif prefix == 'E' and num:
# e-roads seem to be signed without leading zeros.
network = 'e-road'
ref = 'E-' + num.lstrip('0')
elif prefix in _ES_AUTONOMA:
network = 'ES:autonoma'
elif prefix in _ES_PROVINCES:
network = 'ES:province'
elif prefix in _ES_CITIES:
network = 'ES:city'
else:
network = None
ref = None
return network, ref
_FR_DEPARTMENTAL_D_ROAD = re.compile(
'^FR:[0-9]+:([A-Z]+)-road$', re.UNICODE | re.IGNORECASE)
def _normalize_fr_netref(network, ref):
prefix, ref = _splitref(ref)
if prefix:
# routes nationales (RN) are actually signed just "N"? also, RNIL
# are routes delegated to local departments, but still signed as
# routes nationales.
if prefix in ('RN', 'RNIL'):
prefix = 'N'
# strip spaces and leading zeros
if ref:
ref = prefix + ref.strip().lstrip('0')
# backfill network from refs if network wasn't provided from another
# source.
if network is None:
network = 'FR:%s-road' % (prefix,)
# networks are broken down by department, e.g: FR:01:D-road, but we
# only want to match on the D-road part, so throw away the department
# number.
if isinstance(network, (str, unicode)):
m = _FR_DEPARTMENTAL_D_ROAD.match(network)
if m:
# see comment above. TODO: figure out how to not say this twice.
prefix = m.group(1).upper()
if prefix in ('RN', 'RNIL'):
prefix = 'N'
network = 'FR:%s-road' % (prefix,)
return network, ref
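# illustrative examples (not from the original source):
#
#   _normalize_fr_netref(None, 'RN7')           -> ('FR:N-road', 'N7')
#   _normalize_fr_netref('FR:01:D-road', 'D14') -> ('FR:D-road', 'D14')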
def _normalize_de_netref(network, ref):
prefix, ref = _splitref(ref)
if prefix:
if prefix == 'Ring':
ref = 'Ring ' + ref
else:
ref = prefix + ref
if not network:
network = {
'A': 'DE:BAB',
'B': 'DE:BS',
'L': 'DE:LS',
'K': 'DE:KS',
'St': 'DE:STS',
'S': 'DE:STS',
'Ring': 'DE:Hamburg:Ring',
}.get(prefix)
if network == 'Landesstra\xc3\x9fen NRW':
network = 'DE:LS'
elif network == 'Kreisstra\xc3\x9fen Hildesheim':
network = 'DE:KS'
elif network == 'BAB':
network = 'DE:BAB'
return network, ref
def _normalize_ga_netref(network, ref):
prefix, num = _splitref(ref)
if prefix in ('N', 'RN'):
network = 'GA:national'
ref = 'N' + num
elif prefix == 'L':
network = 'GA:L-road'
ref = 'L' + num
else:
network = None
ref = None
return network, ref
def _normalize_gr_netref(network, ref):
ref = _make_unicode_or_none(ref)
prefix, ref = _splitref(ref)
    # this might look bizarre, but it's because the Greek capital letters
# epsilon and omicron look very similar (in some fonts identical) to the
# Latin characters E and O. it's the same below for capital alpha and A.
# these are sometimes mixed up in the data, so we map them to the same
# networks.
if prefix in (u'ΕΟ', u'EO'):
network = 'GR:national'
elif (prefix in (u'Α', u'A') and
(network is None or network == 'GR:motorway')):
network = 'GR:motorway'
# keep A prefix for shield text
ref = u'Α' + ref
elif network == 'e-road':
ref = 'E' + ref
elif network and network.startswith('GR:provincial:'):
network = 'GR:provincial'
return network, ref
def _normalize_in_netref(network, ref):
prefix, ref = _splitref(ref)
if prefix == 'NH':
network = 'IN:NH'
elif prefix == 'SH':
network = 'IN:SH'
elif prefix == 'MDR':
network = 'IN:MDR'
elif network and network.startswith('IN:NH'):
network = 'IN:NH'
elif network and network.startswith('IN:SH'):
network = 'IN:SH'
elif network and network.startswith('IN:MDR'):
network = 'IN:MDR'
elif ref == 'MDR':
network = 'IN:MDR'
ref = None
elif ref == 'ORR':
network = 'IN:NH'
else:
network = None
return network, ref
def _normalize_ir_netref(network, ref):
net, num = _splitref(ref)
if network == 'AH' or net == 'AH':
network = 'AsianHighway'
# in Iran, the Wikipedia page for the AsianHighway template suggests
# that the AH route is shown as "A1Tr" (with the "Tr" in a little box)
# https://en.wikipedia.org/wiki/Template:AHN-AH
#
# however, i haven't been able to find an example on a real road sign,
# so perhaps it's not widely shown. anyway, we probably want "A1" as
# the shield text.
ref = 'A' + num
elif network == 'IR:freeways':
network = 'IR:freeway'
return network, ref
def _normalize_la_netref(network, ref):
# apparently common mistake: Laos is LA, not LO
if network == 'LO:network':
network = 'LA:national'
return network, ref
# mapping of mexican road prefixes into their network values.
_MX_ROAD_NETWORK_PREFIXES = {
'AGS': 'MX:AGU', # Aguascalientes
'BC': 'MX:BCN', # Baja California
'BCS': 'MX:BCS', # Baja California Sur
'CAM': 'MX:CAM', # Campeche
'CHIS': 'MX:CHP', # Chiapas
'CHIH': 'MX:CHH', # Chihuahua
'COAH': 'MX:COA', # Coahuila
'COL': 'MX:COL', # Colima
'DGO': 'MX:DUR', # Durango
'GTO': 'MX:GUA', # Guanajuato
'GRO': 'MX:GRO', # Guerrero
'HGO': 'MX:HID', # Hidalgo
'JAL': 'MX:JAL', # Jalisco
# NOTE: couldn't find an example for Edomex.
'MICH': 'MX:MIC', # Michoacán
'MOR': 'MX:MOR', # Morelos
'NAY': 'MX:NAY', # Nayarit
'NL': 'MX:NLE', # Nuevo León
'OAX': 'MX:OAX', # Oaxaca
'PUE': 'MX:PUE', # Puebla
'QRO': 'MX:QUE', # Querétaro
'ROO': 'MX:ROO', # Quintana Roo
'SIN': 'MX:SIN', # Sinaloa
'SLP': 'MX:SLP', # San Luis Potosí
'SON': 'MX:SON', # Sonora
'TAB': 'MX:TAB', # Tabasco
'TAM': 'MX:TAM', # Tamaulipas
# NOTE: couldn't find an example for Tlaxcala.
'VER': 'MX:VER', # Veracruz
'YUC': 'MX:YUC', # Yucatán
'ZAC': 'MX:ZAC', # Zacatecas
# National roads
'MEX': 'MX:MEX',
}
def _normalize_mx_netref(network, ref):
# interior ring road in Mexico City
if ref == 'INT':
network = 'MX:CMX:INT'
ref = None
elif ref == 'EXT':
network = 'MX:CMX:EXT'
ref = None
prefix, part = _splitref(ref)
if prefix:
net = _MX_ROAD_NETWORK_PREFIXES.get(prefix.upper())
if net:
network = net
ref = part
# sometimes Quintana Roo is also written as "Q. Roo", which trips up
# the _splitref() function, so this just adjusts for that.
if ref and ref.upper().startswith('Q. ROO'):
network = 'MX:ROO'
ref = ref[len('Q. ROO'):].strip()
return network, ref
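# illustrative examples (not from the original source):
#
#   _normalize_mx_netref(None, 'MEX 15')     -> ('MX:MEX', '15')
#   _normalize_mx_netref(None, 'Q. Roo 109') -> ('MX:ROO', '109')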
# roads in Malaysia can have a state prefix similar to the letters used on
# vehicle license plates. see Wikipedia for a list:
#
# https://en.wikipedia.org/wiki/Malaysian_State_Roads_system
#
# these are mapped to the abbreviations given in the table on:
#
# https://en.wikipedia.org/wiki/States_and_federal_territories_of_Malaysia
#
_MY_ROAD_STATE_CODES = {
'A': 'PRK', # Perak
'B': 'SGR', # Selangor
'C': 'PHG', # Pahang
'D': 'KTN', # Kelantan
'J': 'JHR', # Johor
'K': 'KDH', # Kedah
'M': 'MLK', # Malacca
    'N': 'NSN',   # Negeri Sembilan
'P': 'PNG', # Penang
'R': 'PLS', # Perlis
'SA': 'SBH', # Sabah
'T': 'TRG', # Terengganu
'Q': 'SWK', # Sarawak
}
def _normalize_my_netref(network, ref):
prefix, number = _splitref(ref)
if prefix == 'E':
network = 'MY:expressway'
elif prefix in ('FT', ''):
network = 'MY:federal'
# federal highway 1 has many parts (1-1, 1-2, etc...) but it's not
# clear that they're actually signed that way. so throw the part
# after the dash away.
ref = number.split('-')[0]
elif prefix == 'AH':
network = 'AsianHighway'
elif prefix == 'MBSA':
network = 'MY:SGR:municipal'
# shorten ref so that it is more likely to fit in a 5-char shield.
ref = 'BSA' + number
elif prefix in _MY_ROAD_STATE_CODES:
network = 'MY:' + _MY_ROAD_STATE_CODES[prefix]
else:
network = None
return network, ref
def _normalize_jp_netref(network, ref):
if network and network.startswith('JP:prefectural:'):
network = 'JP:prefectural'
elif network is None:
prefix, _ = _splitref(ref)
if prefix in ('C', 'E'):
network = 'JP:expressway'
return network, ref
def _normalize_kr_netref(network, ref):
net, part = _splitref(ref)
if net == 'AH':
network = 'AsianHighway'
ref = part
elif network == 'AH':
network = 'AsianHighway'
return network, ref
def _normalize_kz_netref(network, ref):
net, num = _splitref(ref)
if net == 'AH' or network == 'AH':
network = 'AsianHighway'
ref = 'AH' + num
elif net == 'E' or network == 'e-road':
network = 'e-road'
ref = 'E' + num
return network, ref
def _normalize_no_netref(network, ref):
prefix, number = _splitref(ref)
if prefix == 'Rv':
network = 'NO:riksvei'
ref = number
elif prefix == 'Fv':
network = 'NO:fylkesvei'
ref = number
elif prefix == 'E' and number:
network = 'e-road'
ref = 'E ' + number.lstrip('0')
elif prefix == 'Ring':
network = 'NO:oslo:ring'
ref = 'Ring ' + number
elif network and network.lower().startswith('no:riksvei'):
network = 'NO:riksvei'
elif network and network.lower().startswith('no:fylkesvei'):
network = 'NO:fylkesvei'
else:
network = None
return network, ref
_PE_STATES = set([
'AM', # Amazonas
'AN', # Ancash
'AP', # Apurímac
'AR', # Arequipa
'AY', # Ayacucho
'CA', # Cajamarca
'CU', # Cusco
'HU', # Huánuco
'HV', # Huancavelica
'IC', # Ica
'JU', # Junín
'LA', # Lambayeque
'LI', # La Libertad
'LM', # Lima (including Callao)
'LO', # Loreto
'MD', # Madre de Dios
'MO', # Moquegua
'PA', # Pasco
'PI', # Piura
'PU', # Puno
'SM', # San Martín
'TA', # Tacna
'TU', # Tumbes
'UC', # Ucayali
])
def _normalize_pe_netref(network, ref):
prefix, number = _splitref(ref)
# Peruvian refs seem to be usually written "XX-YY" with a dash, so we have
# to remove that as it's not part of the shield text.
if number:
number = number.lstrip('-')
if prefix == 'PE':
network = 'PE:PE'
ref = number
elif prefix in _PE_STATES:
network = 'PE:' + prefix
ref = number
else:
network = None
return network, ref
def _normalize_ph_netref(network, ref):
if network == 'PH:nhn':
network = 'PH:NHN'
return network, ref
def _normalize_pl_netref(network, ref):
if network == 'PL:motorways':
network = 'PL:motorway'
elif network == 'PL:expressways':
network = 'PL:expressway'
if ref and ref.startswith('A'):
network = 'PL:motorway'
elif ref and ref.startswith('S'):
network = 'PL:expressway'
return network, ref
# expansion from ref prefixes to (network, shield text prefix).
#
# https://en.wikipedia.org/wiki/Roads_in_Portugal
#
# note that it seems signs generally don't have EN, ER or EM on them. instead,
# they have N, R and, presumably, M - although i wasn't able to find one of
# those. perhaps they're not important enough to sign with a number.
_PT_NETWORK_EXPANSION = {
'A': ('PT:motorway', 'A'),
'IP': ('PT:primary', 'IP'),
'IC': ('PT:secondary', 'IC'),
'VR': ('PT:rapid', 'VR'),
'VE': ('PT:express', 'VE'),
'EN': ('PT:national', 'N'),
'ER': ('PT:regional', 'R'),
'EM': ('PT:municipal', 'M'),
'E': ('e-road', 'E'),
}
def _normalize_pt_netref(network, ref):
prefix, num = _splitref(ref)
result = _PT_NETWORK_EXPANSION.get(prefix)
if result and num:
network, letter = result
ref = letter + num.lstrip('0')
else:
network = None
return network, ref
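# illustrative examples (not from the original source), showing how the EN
# prefix is signed as N and how leading zeros are dropped:
#
#   _normalize_pt_netref(None, 'EN105') -> ('PT:national', 'N105')
#   _normalize_pt_netref(None, 'A01')   -> ('PT:motorway', 'A1')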
# note that there's another road class, DX, which is documented, but doesn't
# currently exist.
# see https://en.wikipedia.org/wiki/Roads_in_Romania
#
_RO_NETWORK_PREFIXES = {
'A': 'RO:motorway',
'DN': 'RO:national',
'DJ': 'RO:county',
'DC': 'RO:local',
'E': 'e-road',
}
def _normalize_ro_netref(network, ref):
prefix, num = _splitref(ref)
network = _RO_NETWORK_PREFIXES.get(prefix)
if network is not None:
ref = prefix + num
else:
ref = None
return network, ref
def _normalize_ru_netref(network, ref):
ref = _make_unicode_or_none(ref)
prefix, num = _splitref(ref)
# get rid of any stuff trailing the '-'. seems to be a section number or
# mile marker?
if num:
num = num.lstrip('-').split('-')[0]
if prefix in (u'М', 'M'): # cyrillic M & latin M!
ref = u'М' + num
elif prefix in (u'Р', 'P'):
if network is None:
network = 'RU:regional'
ref = u'Р' + num
elif prefix in (u'А', 'A'):
if network is None:
network = 'RU:regional'
ref = u'А' + num
elif prefix == 'E':
network = 'e-road'
ref = u'E' + num
elif prefix == 'AH':
network = 'AsianHighway'
ref = u'AH' + num
else:
ref = None
if isinstance(ref, unicode):
ref = ref.encode('utf-8')
return network, ref
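# illustrative examples (not from the original source); note that the
# returned refs use the Cyrillic letters, UTF-8 encoded:
#
#   _normalize_ru_netref(None, 'M-4')  -> (None, 'М4')
#   _normalize_ru_netref(None, 'P-23') -> ('RU:regional', 'Р23')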
_TR_PROVINCIAL = re.compile('^[0-9]{2}-[0-9]{2}$')
# NOTE: there's also an "NSC", which is under construction
_SG_EXPRESSWAYS = set([
'AYE', # Ayer Rajah Expressway
'BKE', # Bukit Timah Expressway
'CTE', # Central Expressway
'ECP', # East Coast Parkway
'KJE', # Kranji Expressway
'KPE', # Kallang-Paya Lebar Expressway
'MCE', # Marina Coastal Expressway
'PIE', # Pan Island Expressway
'SLE', # Seletar Expressway
'TPE', # Tampines Expressway
])
def _normalize_sg_netref(network, ref):
if ref in _SG_EXPRESSWAYS:
network = 'SG:expressway'
else:
network = None
ref = None
return network, ref
def _normalize_tr_netref(network, ref):
prefix, num = _splitref(ref)
if num:
num = num.lstrip('-')
if prefix == 'O' and num:
# see https://en.wikipedia.org/wiki/Otoyol
network = 'TR:motorway'
ref = 'O' + num.lstrip('0')
elif prefix == 'D' and num:
# see https://en.wikipedia.org/wiki/Turkish_State_Highway_System
network = 'TR:highway'
# drop section suffixes
ref = 'D' + num.split('-')[0]
elif ref and _TR_PROVINCIAL.match(ref):
network = 'TR:provincial'
elif prefix == 'E' and num:
network = 'e-road'
ref = 'E' + num
else:
network = None
ref = None
return network, ref
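# illustrative examples (not from the original source):
#
#   _normalize_tr_netref(None, 'O-2')   -> ('TR:motorway', 'O2')
#   _normalize_tr_netref(None, '06-02') -> ('TR:provincial', '06-02')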
def _normalize_ua_netref(network, ref):
ref = _make_unicode_or_none(ref)
prefix, num = _splitref(ref)
if num:
num = num.lstrip('-')
if not num:
network = None
ref = None
elif prefix in (u'М', 'M'): # cyrillic M & latin M!
if network is None:
network = 'UA:international'
ref = u'М' + num
elif prefix in (u'Н', 'H'):
if network is None:
network = 'UA:national'
ref = u'Н' + num
elif prefix in (u'Р', 'P'):
if network is None:
network = 'UA:regional'
ref = u'Р' + num
elif prefix in (u'Т', 'T'):
network = 'UA:territorial'
ref = u'Т' + num.replace('-', '')
elif prefix == 'E':
network = 'e-road'
ref = u'E' + num
else:
ref = None
network = None
if isinstance(ref, unicode):
ref = ref.encode('utf-8')
return network, ref
def _normalize_vn_netref(network, ref):
ref = _make_unicode_or_none(ref)
prefix, num = _splitref(ref)
if num:
num = num.lstrip(u'.')
if not num:
network = None
ref = None
elif prefix == u'CT' or network == 'VN:expressway':
network = 'VN:expressway'
ref = u'CT' + num
elif prefix == u'QL' or network == 'VN:national':
network = 'VN:national'
ref = u'QL' + num
elif prefix in (u'ĐT', u'DT'):
network = 'VN:provincial'
ref = u'ĐT' + num
elif prefix == u'TL' or network in ('VN:provincial', 'VN:TL'):
network = 'VN:provincial'
ref = u'TL' + num
elif ref:
network = 'VN:road'
else:
network = None
ref = None
if isinstance(ref, unicode):
ref = ref.encode('utf-8')
return network, ref
def _normalize_za_netref(network, ref):
prefix, num = _splitref(ref)
ndigits = len(num) if num else 0
# N, R & M numbered routes all have special shields which have the letter
# above the number, which would make it part of the shield artwork rather
# than the shield text.
if prefix == 'N':
network = 'ZA:national'
ref = num
elif prefix == 'R' and ndigits == 2:
# 2-digit R numbers are provincial routes, 3-digit are regional routes.
# https://en.wikipedia.org/wiki/Numbered_routes_in_South_Africa
network = 'ZA:provincial'
ref = num
elif prefix == 'R' and ndigits == 3:
        network = 'ZA:regional'
ref = num
elif prefix == 'M':
# there are various different metropolitan networks, but according to
# the Wikipedia page, they all have the same shield. so lumping them
# all together under "metropolitan".
network = 'ZA:metropolitan'
ref = num
elif prefix == 'H':
# i wasn't able to find documentation for these, but there are
# H-numbered roads with good signage, which appear to be only in the
# Kruger National Park, so i've named them that way.
network = 'ZA:kruger'
elif prefix == 'S':
# i wasn't able to find any documentation for these, but there are
# plain white-on-green signs for some of these visible.
network = 'ZA:S-road'
else:
ref = None
network = None
return network, ref
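# illustrative examples (not from the original source), showing the 2 vs 3
# digit split for R routes:
#
#   _normalize_za_netref(None, 'N1')   -> ('ZA:national', '1')
#   _normalize_za_netref(None, 'R61')  -> ('ZA:provincial', '61')
#   _normalize_za_netref(None, 'R339') -> ('ZA:regional', '339')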
def _shield_text_ar(network, ref):
# Argentinian national routes start with "RN" (ruta nacional), which
# should be stripped, but other letters shouldn't be!
if network == 'AR:national' and ref and ref.startswith('RN'):
return ref[2:]
# Argentinian provincial routes start with "RP" (ruta provincial)
if network == 'AR:provincial' and ref and ref.startswith('RP'):
return ref[2:]
return ref
_AU_NETWORK_SHIELD_TEXT = {
'AU:M-road': 'M',
'AU:A-road': 'A',
'AU:B-road': 'B',
'AU:C-road': 'C',
}
def _shield_text_au(network, ref):
# shields on M, A, B & C roads should have the letter, but not other types
# of roads.
prefix = _AU_NETWORK_SHIELD_TEXT.get(network)
if prefix:
ref = prefix + ref
return ref
def _shield_text_gb(network, ref):
# just remove any space between the letter and number(s)
prefix, number = _splitref(ref)
if prefix and number:
return prefix + number
else:
return ref
def _shield_text_ro(network, ref):
# the DN, DJ & DC networks don't have a prefix on the displayed shields,
# see:
# https://upload.wikimedia.org/wikipedia/commons/b/b0/Autostrada_Sibiu_01.jpg
# https://upload.wikimedia.org/wikipedia/commons/7/7a/A1_Arad-Timisoara_-_01.JPG
if network in ('RO:national', 'RO:county', 'RO:local'):
return ref[2:]
return ref
# do not strip anything from the ref apart from whitespace.
def _use_ref_as_is(network, ref):
return ref.strip()
# CountryNetworkLogic centralises the logic around country-specific road
# network processing. this allows us to do different things, such as
# back-filling missing network tag values or sorting networks differently
# based on which country they are in. (e.g: in the UK, an "M" road is more
# important than an "A" road, even though they'd sort the other way
# alphabetically).
#
# the different logic sections are:
#
# * backfill: this is called as fn(tags) to unpack the ref tag (and any other
# meaningful tags) into a list of (network, ref) tuples to use
# instead. For example, it's common to give ref=A1;B2;C3 to
# indicate multiple networks & shields.
#
# * fix: this is called as fn(network, ref) and should fix whatever problems it
# can and return the replacement (network, ref). remember! either
# network or ref can be None!
#
# * sort: this is called as fn(network, ref) and should return a numeric value
# where lower numeric values mean _more_ important networks.
#
# * shield_text: this is called as fn(network, ref) and should return the
# shield text to output. this might mean stripping leading alpha
# numeric characters - or not, depending on the country.
#
CountryNetworkLogic = namedtuple(
'CountryNetworkLogic', 'backfill fix sort shield_text')
CountryNetworkLogic.__new__.__defaults__ = (None,) * len(
CountryNetworkLogic._fields)
_COUNTRY_SPECIFIC_ROAD_NETWORK_LOGIC = {
'AR': CountryNetworkLogic(
backfill=_guess_network_ar,
shield_text=_shield_text_ar,
),
'AU': CountryNetworkLogic(
backfill=_guess_network_au,
fix=_normalize_au_netref,
sort=_sort_network_au,
shield_text=_shield_text_au,
),
'BR': CountryNetworkLogic(
backfill=_guess_network_br,
fix=_normalize_br_netref,
sort=_sort_network_br,
),
'CA': CountryNetworkLogic(
backfill=_guess_network_ca,
fix=_normalize_ca_netref,
sort=_sort_network_ca,
),
'CH': CountryNetworkLogic(
backfill=_guess_network_ch,
fix=_normalize_ch_netref,
sort=_sort_network_ch,
),
'CD': CountryNetworkLogic(
fix=_normalize_cd_netref,
),
'CN': CountryNetworkLogic(
backfill=_guess_network_cn,
fix=_normalize_cn_netref,
sort=_sort_network_cn,
shield_text=_use_ref_as_is,
),
'DE': CountryNetworkLogic(
backfill=_guess_network_de,
fix=_normalize_de_netref,
sort=_sort_network_de,
shield_text=_use_ref_as_is,
),
'ES': CountryNetworkLogic(
backfill=_guess_network_es,
fix=_normalize_es_netref,
sort=_sort_network_es,
shield_text=_use_ref_as_is,
),
'FR': CountryNetworkLogic(
backfill=_guess_network_fr,
fix=_normalize_fr_netref,
sort=_sort_network_fr,
shield_text=_use_ref_as_is,
),
'GA': CountryNetworkLogic(
backfill=_guess_network_ga,
fix=_normalize_ga_netref,
sort=_sort_network_ga,
shield_text=_use_ref_as_is,
),
'GB': CountryNetworkLogic(
backfill=_guess_network_gb,
sort=_sort_network_gb,
shield_text=_shield_text_gb,
),
'GR': CountryNetworkLogic(
backfill=_guess_network_gr,
fix=_normalize_gr_netref,
sort=_sort_network_gr,
),
'IN': CountryNetworkLogic(
backfill=_guess_network_in,
fix=_normalize_in_netref,
sort=_sort_network_in,
shield_text=_use_ref_as_is,
),
'IR': CountryNetworkLogic(
fix=_normalize_ir_netref,
sort=_sort_network_ir,
shield_text=_use_ref_as_is,
),
'JP': CountryNetworkLogic(
backfill=_guess_network_jp,
fix=_normalize_jp_netref,
shield_text=_use_ref_as_is,
),
'KR': CountryNetworkLogic(
backfill=_guess_network_kr,
fix=_normalize_kr_netref,
),
'KZ': CountryNetworkLogic(
fix=_normalize_kz_netref,
sort=_sort_network_kz,
shield_text=_use_ref_as_is,
),
'LA': CountryNetworkLogic(
fix=_normalize_la_netref,
sort=_sort_network_la,
),
'MX': CountryNetworkLogic(
backfill=_guess_network_mx,
fix=_normalize_mx_netref,
sort=_sort_network_mx,
),
'MY': CountryNetworkLogic(
backfill=_guess_network_my,
fix=_normalize_my_netref,
sort=_sort_network_my,
shield_text=_use_ref_as_is,
),
'NO': CountryNetworkLogic(
backfill=_guess_network_no,
fix=_normalize_no_netref,
sort=_sort_network_no,
shield_text=_use_ref_as_is,
),
'PE': CountryNetworkLogic(
backfill=_guess_network_pe,
fix=_normalize_pe_netref,
shield_text=_use_ref_as_is,
),
'PH': CountryNetworkLogic(
fix=_normalize_ph_netref,
),
'PL': CountryNetworkLogic(
backfill=_guess_network_pl,
fix=_normalize_pl_netref,
sort=_sort_network_pl,
),
'PT': CountryNetworkLogic(
backfill=_guess_network_pt,
fix=_normalize_pt_netref,
sort=_sort_network_pt,
shield_text=_use_ref_as_is,
),
'RO': CountryNetworkLogic(
backfill=_guess_network_ro,
fix=_normalize_ro_netref,
sort=_sort_network_ro,
shield_text=_shield_text_ro,
),
'RU': CountryNetworkLogic(
backfill=_guess_network_ru,
fix=_normalize_ru_netref,
sort=_sort_network_ru,
shield_text=_use_ref_as_is,
),
'SG': CountryNetworkLogic(
backfill=_guess_network_sg,
fix=_normalize_sg_netref,
shield_text=_use_ref_as_is,
),
'TR': CountryNetworkLogic(
backfill=_guess_network_tr,
fix=_normalize_tr_netref,
sort=_sort_network_tr,
shield_text=_use_ref_as_is,
),
'UA': CountryNetworkLogic(
backfill=_guess_network_ua,
fix=_normalize_ua_netref,
sort=_sort_network_ua,
shield_text=_use_ref_as_is,
),
'US': CountryNetworkLogic(
backfill=_do_not_backfill,
sort=_sort_network_us,
),
'VN': CountryNetworkLogic(
backfill=_guess_network_vn,
fix=_normalize_vn_netref,
sort=_sort_network_vn,
shield_text=_use_ref_as_is,
),
'ZA': CountryNetworkLogic(
backfill=_guess_network_za,
fix=_normalize_za_netref,
sort=_sort_network_za,
shield_text=_use_ref_as_is,
),
}
# regular expression to look for a country code at the beginning of the network
# tag.
_COUNTRY_CODE = re.compile('^([a-z][a-z])[:-](.*)', re.UNICODE | re.IGNORECASE)
def _fixup_network_country_code(network):
if network is None:
return None
m = _COUNTRY_CODE.match(network)
if m:
suffix = m.group(2)
# fix up common suffixes which are plural with ones which are singular.
if suffix.lower() == 'roads':
suffix = 'road'
network = m.group(1).upper() + ':' + suffix
return network
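# illustrative examples (not from the original source):
#
#   _fixup_network_country_code('gb:A-road') -> 'GB:A-road'
#   _fixup_network_country_code('za-roads')  -> 'ZA:road'
#   _fixup_network_country_code('e-road')    -> 'e-road'  # unchanged: 'e-'
#                                                         # isn't two letters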
def merge_networks_from_tags(shape, props, fid, zoom):
"""
Take the network and ref tags from the feature and, if they both exist, add
them to the mz_networks list. This is to make handling of networks and refs
more consistent across elements.
"""
network = props.get('network')
ref = props.get('ref')
mz_networks = props.get('mz_networks', [])
country_code = props.get('country_code')
# apply some generic fixes to networks:
# * if they begin with two letters and a colon, then make sure the two
# letters are upper case, as they're probably a country code.
# * if they begin with two letters and a dash, then make the letters upper
# case and replace the dash with a colon.
# * expand ;-delimited lists in refs
for i in xrange(0, len(mz_networks), 3):
t, n, r = mz_networks[i:i+3]
if t == 'road' and n is not None:
n = _fixup_network_country_code(n)
mz_networks[i+1] = n
if r is not None and ';' in r:
refs = r.split(';')
mz_networks[i+2] = refs.pop()
for new_ref in refs:
mz_networks.extend((t, n, new_ref))
# for road networks, if there's no explicit network, but the country code
# and ref are both available, then try to use them to back-fill the
# network.
if props.get('kind') in ('highway', 'major_road') and \
country_code and ref:
# apply country-specific logic to try and backfill the network from
# structure we know about how refs work in the country.
logic = _COUNTRY_SPECIFIC_ROAD_NETWORK_LOGIC.get(country_code)
# if the road is a member of exactly one road relation, which provides
# a network and no ref, and the element itself provides no network,
# then use the network from the relation instead.
if network is None:
solo_networks_from_relations = []
for i in xrange(0, len(mz_networks), 3):
t, n, r = mz_networks[i:i+3]
if t == 'road' and n and (r is None or r == ref):
solo_networks_from_relations.append((n, i))
# if we found one _and only one_ road network, then we use the
# network value and delete the [type, network, ref] 3-tuple from
# mz_networks (which is a flattened list of them). because there's
# only one, we can delete it by using its index.
if len(solo_networks_from_relations) == 1:
network, i = solo_networks_from_relations[0]
# add network back into properties in case we need to pass it
# to the backfill.
props['network'] = network
del mz_networks[i:i+3]
if logic and logic.backfill:
networks_and_refs = logic.backfill(props) or []
# if we found a ref, but the network was not provided, then "use
# up" the network tag by assigning it to the first network. this
# deals with cases where people write network="X", ref="1;Y2" to
# mean "X1" and "Y2".
if networks_and_refs:
net, r = networks_and_refs[0]
if net is None and network is not None:
networks_and_refs[0] = (network, r)
# if we extracted information from the network and ref, then
# we don't want to process it again.
network = None
ref = None
for net, r in networks_and_refs:
mz_networks.extend(['road', net, r])
elif network is None:
# last ditch backfill, if we know nothing else about this element,
# at least we know what country it is in. but don't add if there's
# an entry in mz_networks with the same ref!
if ref:
found = False
for i in xrange(0, len(mz_networks), 3):
t, _, r = mz_networks[i:i+3]
if t == 'road' and r == ref:
found = True
break
if not found:
network = country_code
# if there's no network, but the operator indicates a network, then we can
# back-fill an approximate network tag from the operator. this can mean
# that extra refs are available for road networks.
elif network is None:
operator = props.get('operator')
backfill_network = _NETWORK_OPERATORS.get(operator)
if backfill_network:
network = backfill_network
if network and ref:
props.pop('network', None)
props.pop('ref')
mz_networks.extend([_guess_type_from_network(network), network, ref])
if mz_networks:
props['mz_networks'] = mz_networks
return (shape, props, fid)
# a pattern to find any number in a string, as a fallback for looking up road
# reference numbers.
_ANY_NUMBER = re.compile('[^0-9]*([0-9]+)')
def _default_sort_network(network, ref):
"""
Returns an integer representing the numeric importance of the network,
where lower numbers are more important.
This is to handle roads which are part of many networks, and ensuring
that the most important one is displayed. For example, in the USA many
roads can be part of both interstate (US:I) and "US" (US:US) highways,
and possibly state ones as well (e.g: US:NY:xxx). In addition, there
are international conventions around the use of "CC:national" and
"CC:regional:*" where "CC" is an ISO 2-letter country code.
Here we treat national-level roads as more important than regional or
lower, and assume that the deeper the network is in the hierarchy, the
less important the road. Roads with lower "ref" numbers are considered
more important than higher "ref" numbers, if they are part of the same
network.
"""
if network is None:
network_code = 9999
elif ':national' in network:
network_code = 1
elif ':regional' in network:
network_code = 2
    elif network == 'e-road':
network_code = 9000
else:
network_code = len(network.split(':')) + 3
ref = _ref_importance(ref)
return network_code * 10000 + min(ref, 9999)
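# worked example (illustrative, not from the original source), assuming that
# _ref_importance parses the leading number, i.e. _ref_importance('1') == 1
# and _ref_importance('20') == 20:
#
#   _default_sort_network('XX:national', '20')   -> 1 * 10000 + 20 = 10020
#   _default_sort_network('XX:regional:YY', '1') -> 2 * 10000 + 1  = 20001
#
# so the national route sorts first, despite its higher ref number.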
_WALKING_NETWORK_CODES = {
'iwn': 1,
'nwn': 2,
'rwn': 3,
'lwn': 4,
}
_BICYCLE_NETWORK_CODES = {
'icn': 1,
'ncn': 2,
'rcn': 3,
'lcn': 4,
}
def _generic_network_importance(network, ref, codes):
# get a code based on the "largeness" of the network
code = codes.get(network, len(codes))
# get a numeric ref, if one is available. treat things with no ref as if
# they had a very high ref, and so reduced importance.
try:
ref = max(int(ref or 9999), 0)
except ValueError:
# if ref isn't an integer, then it's likely a name, which might be
# more important than a number
ref = 0
return code * 10000 + min(ref, 9999)
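# illustrative examples (not from the original source):
#
#   _generic_network_importance('iwn', '12', _WALKING_NETWORK_CODES)
#     -> 1 * 10000 + 12 = 10012
#   _generic_network_importance('lwn', None, _WALKING_NETWORK_CODES)
#     -> 4 * 10000 + 9999 = 49999  # missing ref treated as very unimportant
#   _generic_network_importance('rwn', 'Coast Path', _WALKING_NETWORK_CODES)
#     -> 3 * 10000 + 0 = 30000     # named route treated as important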
def _walking_network_importance(network, ref):
return _generic_network_importance(network, ref, _WALKING_NETWORK_CODES)
def _bicycle_network_importance(network, ref):
return _generic_network_importance(network, ref, _BICYCLE_NETWORK_CODES)
def _bus_network_importance(network, ref):
return _generic_network_importance(network, ref, {})
_NUMBER_AT_FRONT = re.compile(r'^(\d+\w*)', re.UNICODE)
_SINGLE_LETTER_AT_FRONT = re.compile(r'^([^\W\d]) *(\d+)', re.UNICODE)
_LETTER_THEN_NUMBERS = re.compile(r'^[^\d\s_]+[ -]?([^\s]+)',
re.UNICODE | re.IGNORECASE)
_UA_TERRITORIAL_RE = re.compile(r'^(\w)-(\d+)-(\d+)$',
re.UNICODE | re.IGNORECASE)
def _make_unicode_or_none(ref):
if isinstance(ref, unicode):
# no need to do anything, it's already okay
return ref
elif isinstance(ref, str):
# it's UTF-8 encoded bytes, so make it a unicode
return unicode(ref, 'utf-8')
# dunno what this is?!!
return None
def _road_shield_text(network, ref):
"""
Try to extract the string that should be displayed within the road shield,
based on the raw ref and the network value.
"""
# FI-PI-LI is just a special case?
if ref == 'FI-PI-LI':
return ref
# These "belt" roads have names in the ref which should be in the shield,
# there's no number.
if network and network == 'US:PA:Belt':
return ref
    # Ukrainian roads sometimes have internal dashes which should be removed.
    # match the country prefix case-insensitively, since network values are
    # usually upper-cased by _fixup_network_country_code.
    if network and (network.startswith('UA:') or network.startswith('ua:')):
m = _UA_TERRITORIAL_RE.match(ref)
if m:
return m.group(1) + m.group(2) + m.group(3)
# Greek roads sometimes have alphabetic prefixes which we should _keep_,
# unlike for other roads.
if network and (network.startswith('GR:') or network.startswith('gr:')):
return ref
# If there's a number at the front (optionally with letters following),
# then that's the ref.
m = _NUMBER_AT_FRONT.match(ref)
if m:
return m.group(1)
# If there's a letter at the front, optionally space, and then a number,
# the ref is the concatenation (without space) of the letter and number.
m = _SINGLE_LETTER_AT_FRONT.match(ref)
if m:
return m.group(1) + m.group(2)
# Otherwise, try to match a bunch of letters followed by a number.
m = _LETTER_THEN_NUMBERS.match(ref)
if m:
return m.group(1)
# Failing that, give up and just return the ref as-is.
return ref
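# illustrative examples (not from the original source):
#
#   _road_shield_text('US:US', u'US 101')  -> u'101'
#   _road_shield_text('FR:A-road', u'A 6') -> u'A6'
#   _road_shield_text(None, u'95A')        -> u'95A'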
def _default_shield_text(network, ref):
"""
Without any special properties of the ref to make the shield text from,
just use the 'ref' property.
"""
return ref
# _Network represents a type of route network.
# prefix is what we should insert into
# the property we put on the feature (e.g: prefix + 'network' for
# 'bicycle_network' and so forth). shield_text_fn is a function called with the
# network and ref to get the text which should be shown on the shield.
_Network = namedtuple(
'_Network', 'prefix shield_text_fn network_importance_fn')
_ROAD_NETWORK = _Network(
'',
_road_shield_text,
None)
_FOOT_NETWORK = _Network(
'walking_',
_default_shield_text,
_walking_network_importance)
_BIKE_NETWORK = _Network(
'bicycle_',
_default_shield_text,
_bicycle_network_importance)
_BUS_NETWORK = _Network(
'bus_',
_default_shield_text,
_bus_network_importance)
_NETWORKS = {
'road': _ROAD_NETWORK,
'foot': _FOOT_NETWORK,
'hiking': _FOOT_NETWORK,
'bicycle': _BIKE_NETWORK,
'bus': _BUS_NETWORK,
'trolleybus': _BUS_NETWORK,
}
def extract_network_information(shape, properties, fid, zoom):
"""
Take the triples of (route_type, network, ref) from `mz_networks` and
extract them into two arrays of network and shield_text information.
"""
mz_networks = properties.pop('mz_networks', None)
country_code = properties.get('country_code')
country_logic = _COUNTRY_SPECIFIC_ROAD_NETWORK_LOGIC.get(country_code)
if mz_networks is not None:
# take the list and make triples out of it
itr = iter(mz_networks)
groups = defaultdict(list)
for (type, network, ref) in zip(itr, itr, itr):
n = _NETWORKS.get(type)
if n:
groups[n].append([network, ref])
for network, vals in groups.items():
all_networks = 'all_' + network.prefix + 'networks'
all_shield_texts = 'all_' + network.prefix + 'shield_texts'
shield_text_fn = network.shield_text_fn
if network is _ROAD_NETWORK and country_logic and \
country_logic.shield_text:
shield_text_fn = country_logic.shield_text
shield_texts = list()
network_names = list()
for network_name, ref in vals:
network_names.append(network_name)
ref = _make_unicode_or_none(ref)
if ref is not None:
ref = shield_text_fn(network_name, ref)
# we try to keep properties as utf-8 encoded str, but the
# shield text function may have turned them into unicode.
# this is a catch-all just to make absolutely sure.
if isinstance(ref, unicode):
ref = ref.encode('utf-8')
shield_texts.append(ref)
properties[all_networks] = network_names
properties[all_shield_texts] = shield_texts
return (shape, properties, fid)
def _choose_most_important_network(properties, prefix, importance_fn):
"""
Use the `_network_importance` function to select any road networks from
`all_networks` and `all_shield_texts`, taking the most important one.
"""
all_networks = 'all_' + prefix + 'networks'
all_shield_texts = 'all_' + prefix + 'shield_texts'
networks = properties.pop(all_networks, None)
shield_texts = properties.pop(all_shield_texts, None)
country_code = properties.get('country_code')
if networks and shield_texts:
def network_key(t):
return importance_fn(*t)
tuples = sorted(set(zip(networks, shield_texts)), key=network_key)
# i think most route designers would try pretty hard to make sure that
# a segment of road isn't on two routes of different networks but with
# the same shield text. most likely when this happens it's because we
# have duplicate information in the element and relations it's a part
# of. so get rid of anything with network=None where there's an entry
# with the same ref (and network != none).
seen_ref = set()
new_tuples = []
for network, ref in tuples:
if network:
if ref:
seen_ref.add(ref)
new_tuples.append((network, ref))
elif ref is not None and ref not in seen_ref:
# network is None, fall back to the country code
new_tuples.append((country_code, ref))
tuples = new_tuples
if tuples:
# expose first network as network/shield_text
network, ref = tuples[0]
properties[prefix + 'network'] = network
properties[prefix + 'shield_text'] = ref
# replace properties with sorted versions of themselves
properties[all_networks] = [n[0] for n in tuples]
properties[all_shield_texts] = [n[1] for n in tuples]
return properties
def choose_most_important_network(shape, properties, fid, zoom):
for net in _NETWORKS.values():
prefix = net.prefix
if net is _ROAD_NETWORK:
country_code = properties.get('country_code')
logic = _COUNTRY_SPECIFIC_ROAD_NETWORK_LOGIC.get(country_code)
importance_fn = None
if logic:
importance_fn = logic.sort
if not importance_fn:
importance_fn = _default_sort_network
else:
importance_fn = net.network_importance_fn
properties = _choose_most_important_network(
properties, prefix, importance_fn)
return (shape, properties, fid)
def buildings_unify(ctx):
"""
Unify buildings with their parts. Building parts will receive a
root_id property which will be the id of building parent they are
associated with.
"""
zoom = ctx.nominal_zoom
start_zoom = ctx.params.get('start_zoom', 0)
if zoom < start_zoom:
return None
source_layer = ctx.params.get('source_layer')
    assert source_layer is not None, 'buildings_unify: missing source_layer'
feature_layers = ctx.feature_layers
layer = _find_layer(feature_layers, source_layer)
if layer is None:
return None
class geom_with_building_id(object):
def __init__(self, geom, building_id):
self.geom = geom
self.building_id = building_id
self._geom = geom._geom
self.is_empty = geom.is_empty
indexable_buildings = []
parts = []
for feature in layer['features']:
shape, props, feature_id = feature
kind = props.get('kind')
if kind == 'building':
building_id = props.get('id')
if building_id:
indexed_building = geom_with_building_id(shape, building_id)
indexable_buildings.append(indexed_building)
elif kind == 'building_part':
parts.append(feature)
if not (indexable_buildings and parts):
return
buildings_index = STRtree(indexable_buildings)
for part in parts:
best_overlap = 0
root_building_id = None
part_shape, part_props, part_feature_id = part
indexed_buildings = buildings_index.query(part_shape)
for indexed_building in indexed_buildings:
building_shape = indexed_building.geom
intersection = part_shape.intersection(building_shape)
overlap = intersection.area
if overlap > best_overlap:
best_overlap = overlap
root_building_id = indexed_building.building_id
if root_building_id is not None:
part_props['root_id'] = root_building_id
def truncate_min_zoom_to_2dp(shape, properties, fid, zoom):
"""
    Round the "min_zoom" property to two decimal places.
"""
min_zoom = properties.get('min_zoom')
if min_zoom:
properties['min_zoom'] = round(min_zoom, 2)
return shape, properties, fid
def truncate_min_zoom_to_1dp(shape, properties, fid, zoom):
"""
    Round the "min_zoom" property to one decimal place.
"""
min_zoom = properties.get('min_zoom')
if min_zoom:
properties['min_zoom'] = round(min_zoom, 1)
return shape, properties, fid
class Palette(object):
"""
A collection of named colours which allows relatively fast lookup of the
closest named colour to any particular input colour.
Inspired by https://github.com/cooperhewitt/py-cooperhewitt-swatchbook
"""
def __init__(self, colours):
self.colours = colours
self.namelookup = dict()
for name, colour in colours.items():
assert len(colour) == 3, \
"Colours must lists of be of length 3 (%r: %r)" % \
(name, colour)
for val in colour:
assert isinstance(val, int), \
"Colour values must be integers (%r: %r)" % (name, colour)
assert val >= 0 and val <= 255, \
"Colour values must be between 0 and 255 (%r: %r)" % \
(name, colour)
self.namelookup[tuple(colour)] = name
self.tree = kdtree.create(colours.values())
def __call__(self, colour):
"""
Returns the name of the closest colour in the palette to the input
colour.
"""
node, dist = self.tree.search_nn(colour)
return self.namelookup[tuple(node.data)]
def get(self, name):
return self.colours.get(name)
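# A minimal usage sketch for Palette (colour values invented), assuming the
# kdtree module imported elsewhere in this file is available:
#
#   palette = Palette({'red': [255, 0, 0], 'navy': [0, 0, 128]})
#   palette((250, 10, 10))  # -> 'red', the closest named colour
#   palette.get('navy')     # -> [0, 0, 128]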
def palettize_colours(ctx):
"""
Derive a colour from each feature by looking at one or more input
attributes and match that to a palette of name to colour mappings given
in the `colours` parameter. The name of the colour will be output in the
    feature's properties using a key from the `attribute` parameter.
"""
from vectordatasource.colour import parse_colour
layer_name = ctx.params.get('layer')
assert layer_name, \
'Parameter layer was missing from palettize config'
attr_name = ctx.params.get('attribute')
assert attr_name, \
'Parameter attribute was missing from palettize config'
colours = ctx.params.get('colours')
assert colours, \
'Dict mapping colour names to RGB triples was missing from config'
input_attrs = ctx.params.get('input_attributes', ['colour'])
layer = _find_layer(ctx.feature_layers, layer_name)
palette = Palette(colours)
for (shape, props, fid) in layer['features']:
colour = None
for attr in input_attrs:
colour = props.get(attr)
if colour:
break
if colour:
rgb = parse_colour(colour)
if rgb:
props[attr_name] = palette(rgb)
return layer
def backfill_from_other_layer(ctx):
"""
Matches features from one layer with the other on the basis of the feature
ID and, if the configured layer property doesn't exist on the feature, but
the other layer property does exist on the matched feature, then copy it
across.
The initial use for this is to backfill POI kinds into building kind_detail
when the building doesn't already have a different kind_detail supplied.
"""
layer_name = ctx.params.get('layer')
assert layer_name, \
'Parameter layer was missing from ' \
'backfill_from_other_layer config'
other_layer_name = ctx.params.get('other_layer')
assert other_layer_name, \
'Parameter other_layer_name was missing from ' \
'backfill_from_other_layer config'
layer_key = ctx.params.get('layer_key')
assert layer_key, \
'Parameter layer_key was missing from ' \
'backfill_from_other_layer config'
other_key = ctx.params.get('other_key')
assert other_key, \
'Parameter other_key was missing from ' \
'backfill_from_other_layer config'
layer = _find_layer(ctx.feature_layers, layer_name)
other_layer = _find_layer(ctx.feature_layers, other_layer_name)
# build an index of feature ID to property value in the other layer
other_values = {}
for (shape, props, fid) in other_layer['features']:
# prefer to use the `id` property rather than the fid.
fid = props.get('id', fid)
kind = props.get(other_key)
# make sure fid is truthy, as it can be set to None on features
# created by merging.
if kind and fid:
other_values[fid] = kind
# apply those to features which don't already have a value
for (shape, props, fid) in layer['features']:
if layer_key not in props:
fid = props.get('id', fid)
value = other_values.get(fid)
if value:
props[layer_key] = value
return layer
def drop_layer(ctx):
"""
Drops the named layer from the list of layers.
"""
layer_to_delete = ctx.params.get('layer')
for idx, feature_layer in enumerate(ctx.feature_layers):
layer_datum = feature_layer['layer_datum']
layer_name = layer_datum['name']
if layer_name == layer_to_delete:
del ctx.feature_layers[idx]
break
return None
def _fixup_country_specific_networks(shape, props, fid, zoom):
"""
Apply country-specific fixup functions to mz_networks.
"""
mz_networks = props.get('mz_networks')
country_code = props.get('country_code')
logic = _COUNTRY_SPECIFIC_ROAD_NETWORK_LOGIC.get(country_code)
if logic and logic.fix and mz_networks:
new_networks = []
# mz_networks is a list of repeated [type, network, ref, ...], it isn't
# nested!
itr = iter(mz_networks)
for (type, network, ref) in zip(itr, itr, itr):
if type == 'road':
network, ref = logic.fix(network, ref)
new_networks.extend([type, network, ref])
props['mz_networks'] = new_networks
return (shape, props, fid)
def road_networks(ctx):
"""
Fix up road networks. This means looking at the networks from the
relation(s), if any, merging that with information from the tags on the
original object and any structure we expect from looking at the country
code.
"""
params = _Params(ctx, 'road_networks')
layer_name = params.required('layer')
layer = _find_layer(ctx.feature_layers, layer_name)
zoom = ctx.nominal_zoom
funcs = [
merge_networks_from_tags,
_fixup_country_specific_networks,
extract_network_information,
choose_most_important_network,
]
new_features = []
for (shape, props, fid) in layer['features']:
for fn in funcs:
shape, props, fid = fn(shape, props, fid, zoom)
new_features.append((shape, props, fid))
layer['features'] = new_features
return None
# helper class to wrap logic around extracting required and optional parameters
# from the context object passed to post-processors, making its use more
# concise and readable in the post-processor method itself.
#
class _Params(object):
def __init__(self, ctx, post_processor_name):
self.ctx = ctx
self.post_processor_name = post_processor_name
def required(self, name, typ=str, default=None):
"""
Returns a named parameter of the given type and default from the
context, raising an assertion failed exception if the parameter wasn't
present, or wasn't an instance of the type.
"""
value = self.optional(name, typ=typ, default=default)
assert value is not None, \
'Required parameter %r was missing from %r config' \
% (name, self.post_processor_name)
return value
def optional(self, name, typ=str, default=None):
"""
Returns a named parameter of the given type, or the default if that
parameter wasn't given in the context. Raises an exception if the
value was present and is not of the expected type.
"""
value = self.ctx.params.get(name, default)
if value is not None:
assert isinstance(value, typ), \
'Expected parameter %r to be of type %s, but value %r is of ' \
'type %r in %r config' \
% (name, typ.__name__, value, type(value).__name__,
self.post_processor_name)
return value
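# Sketch of how a post-processor might use _Params (names illustrative):
#
#   def my_post_processor(ctx):
#       params = _Params(ctx, 'my_post_processor')
#       layer_name = params.required('layer')  # asserts if missing
#       start_zoom = params.optional('start_zoom', typ=int, default=0)
#       ...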
def point_in_country_logic(ctx):
"""
Intersect points from source layer with target layer, then look up which
country they're in and assign property based on a look-up table.
"""
params = _Params(ctx, 'point_in_country_logic')
layer_name = params.required('layer')
country_layer_name = params.required('country_layer')
country_code_attr = params.required('country_code_attr')
# single attribute version
output_attr = params.optional('output_attr')
# multiple attribute version
output_attrs = params.optional('output_attrs', typ=list)
# must provide one or the other
assert output_attr or output_attrs, 'Must provide one or other of ' \
'output_attr or output_attrs for point_in_country_logic'
logic_table = params.optional('logic_table', typ=dict)
if logic_table is None:
logic_table = ctx.resources.get('logic_table')
assert logic_table is not None, 'Must provide logic_table via a param ' \
'or resource for point_in_country_logic'
where = params.optional('where')
layer = _find_layer(ctx.feature_layers, layer_name)
country_layer = _find_layer(ctx.feature_layers, country_layer_name)
if where is not None:
where = compile(where, 'queries.yaml', 'eval')
# this is a wrapper around a geometry, so that we can store extra
# information in the STRTree.
class country_with_value(object):
def __init__(self, geom, value):
self.geom = geom
self.value = value
self._geom = geom._geom
self.is_empty = geom.is_empty
# construct an STRtree index of the country->value mapping. in many cases,
# the country will cover the whole tile, but in some other cases it will
# not, and it's worth having the speedup of indexing for those.
countries = []
for (shape, props, fid) in country_layer['features']:
country_code = props.get(country_code_attr)
value = logic_table.get(country_code)
if value is not None:
countries.append(country_with_value(shape, value))
countries_index = STRtree(countries)
for (shape, props, fid) in layer['features']:
# skip features where the 'where' clause doesn't match
if where:
local = props.copy()
if not eval(where, {}, local):
continue
candidates = countries_index.query(shape)
for candidate in candidates:
# given that the shape is (expected to be) a point, all
# intersections are the same (there's no measure of the "amount of
# overlap"), so we might as well just stop on the first one.
if shape.intersects(candidate.geom):
if output_attrs:
for output_attr in output_attrs:
props[output_attr] = candidate.value[output_attr]
else:
props[output_attr] = candidate.value
break
return None
def max_zoom_filter(ctx):
"""
For features with a max_zoom, remove them if it's < nominal zoom.
"""
params = _Params(ctx, 'max_zoom_filter')
layers = params.required('layers', typ=list)
nominal_zoom = ctx.nominal_zoom
for layer_name in layers:
layer = _find_layer(ctx.feature_layers, layer_name)
features = layer['features']
new_features = []
for feature in features:
_, props, _ = feature
max_zoom = props.get('max_zoom')
if max_zoom is None or max_zoom >= nominal_zoom:
new_features.append(feature)
layer['features'] = new_features
return None
def min_zoom_filter(ctx):
"""
    For features with a min_zoom, remove them if it's >= nominal zoom + 1.
"""
params = _Params(ctx, 'min_zoom_filter')
layers = params.required('layers', typ=list)
nominal_zoom = ctx.nominal_zoom
for layer_name in layers:
layer = _find_layer(ctx.feature_layers, layer_name)
features = layer['features']
new_features = []
for feature in features:
_, props, _ = feature
min_zoom = props.get('min_zoom')
if min_zoom is not None and min_zoom < nominal_zoom + 1:
new_features.append(feature)
layer['features'] = new_features
return None
def tags_set_ne_min_max_zoom(ctx):
"""
Override the min zoom and max zoom properties with __ne_* variants from
Natural Earth, if there are any.
"""
params = _Params(ctx, 'tags_set_ne_min_max_zoom')
layer_name = params.required('layer')
layer = _find_layer(ctx.feature_layers, layer_name)
for _, props, _ in layer['features']:
min_zoom = props.pop('__ne_min_zoom', None)
if min_zoom is not None:
            # don't overstuff features into tiles when they fall into the
            # long tail of features that won't display, but keep their
            # min_zoom consistent with when they actually show in tiles
if min_zoom % 1 > 0.5:
min_zoom = ceil(min_zoom)
props['min_zoom'] = min_zoom
elif props.get('kind') == 'country':
# countries and regions which don't have a min zoom joined from NE
# are probably either vandalism or unrecognised countries. either
# way, we probably don't want to see them at zoom, which is lower
# than most of the curated NE min zooms. see issue #1826 for more
# information.
props['min_zoom'] = max(6, props['min_zoom'])
elif props.get('kind') == 'region':
props['min_zoom'] = max(8, props['min_zoom'])
max_zoom = props.pop('__ne_max_zoom', None)
if max_zoom is not None:
props['max_zoom'] = max_zoom
return None
def whitelist(ctx):
"""
Applies a whitelist to a particular property on all features in the layer,
optionally also remapping some values.
"""
params = _Params(ctx, 'whitelist')
layer_name = params.required('layer')
start_zoom = params.optional('start_zoom', default=0, typ=int)
end_zoom = params.optional('end_zoom', typ=int)
property_name = params.required('property')
whitelist = params.required('whitelist', typ=list)
remap = params.optional('remap', default={}, typ=dict)
where = params.optional('where')
# check that we're in the zoom range where this post-processor is supposed
# to operate.
if ctx.nominal_zoom < start_zoom:
return None
if end_zoom is not None and ctx.nominal_zoom >= end_zoom:
return None
if where is not None:
where = compile(where, 'queries.yaml', 'eval')
layer = _find_layer(ctx.feature_layers, layer_name)
features = layer['features']
for feature in features:
_, props, _ = feature
# skip this feature if there's a where clause and it evaluates falsey.
if where is not None:
local = props.copy()
local['zoom'] = ctx.nominal_zoom
if not eval(where, {}, local):
continue
value = props.get(property_name)
if value is not None:
if value in whitelist:
# leave value as-is
continue
elif value in remap:
# replace with replacement value
props[property_name] = remap[value]
else:
# drop the property
props.pop(property_name)
return None
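# An illustrative (assumed) config stanza for the whitelist post-processor;
# the layer, property and values are invented:
#
#   params:
#     layer: roads
#     property: surface
#     whitelist: [paved, unpaved]
#     remap: {asphalt: paved, dirt: unpaved}
#     where: "zoom < 14"
#
# values in `whitelist` pass through unchanged, keys in `remap` are
# rewritten, and anything else causes the property to be dropped.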
def remap(ctx):
"""
Maps some values for a particular property to others. Similar to whitelist,
but won't remove the property if there's no match.
"""
params = _Params(ctx, 'remap')
layer_name = params.required('layer')
start_zoom = params.optional('start_zoom', default=0, typ=int)
end_zoom = params.optional('end_zoom', typ=int)
property_name = params.required('property')
remap = params.optional('remap', default={}, typ=dict)
where = params.optional('where')
# check that we're in the zoom range where this post-processor is supposed
# to operate.
if ctx.nominal_zoom < start_zoom:
return None
if end_zoom is not None and ctx.nominal_zoom >= end_zoom:
return None
if where is not None:
where = compile(where, 'queries.yaml', 'eval')
layer = _find_layer(ctx.feature_layers, layer_name)
features = layer['features']
for feature in features:
shape, props, _ = feature
# skip this feature if there's a where clause and it evaluates falsey.
if where is not None:
local = props.copy()
local['zoom'] = ctx.nominal_zoom
local['geom_type'] = shape.geom_type
if not eval(where, {}, local):
continue
value = props.get(property_name)
if value in remap:
# replace with replacement value
props[property_name] = remap[value]
return None
def backfill(ctx):
"""
Backfills default values for some features. In other words, if the feature
lacks some or all of the defaults, then set those defaults.
"""
    params = _Params(ctx, 'backfill')
layer_name = params.required('layer')
start_zoom = params.optional('start_zoom', default=0, typ=int)
end_zoom = params.optional('end_zoom', typ=int)
defaults = params.required('defaults', typ=dict)
where = params.optional('where')
# check that we're in the zoom range where this post-processor is supposed
# to operate.
if ctx.nominal_zoom < start_zoom:
return None
if end_zoom is not None and ctx.nominal_zoom >= end_zoom:
return None
if where is not None:
where = compile(where, 'queries.yaml', 'eval')
layer = _find_layer(ctx.feature_layers, layer_name)
features = layer['features']
for feature in features:
_, props, _ = feature
        # skip this feature if there's a where clause and it evaluates falsey.
if where is not None:
local = props.copy()
local['zoom'] = ctx.nominal_zoom
if not eval(where, {}, local):
continue
for k, v in defaults.iteritems():
if k not in props:
props[k] = v
return None
def clamp_min_zoom(ctx):
"""
Clamps the min zoom for features depending on context.
"""
params = _Params(ctx, 'clamp_min_zoom')
layer_name = params.required('layer')
start_zoom = params.optional('start_zoom', default=0, typ=int)
end_zoom = params.optional('end_zoom', typ=int)
clamp = params.required('clamp', typ=dict)
property_name = params.required('property')
# check that we're in the zoom range where this post-processor is supposed
# to operate.
if ctx.nominal_zoom < start_zoom:
return None
if end_zoom is not None and ctx.nominal_zoom >= end_zoom:
return None
layer = _find_layer(ctx.feature_layers, layer_name)
features = layer['features']
for feature in features:
_, props, _ = feature
value = props.get(property_name)
min_zoom = props.get('min_zoom')
if value is not None and min_zoom is not None:
min_val = clamp.get(value)
if min_val is not None and min_val > min_zoom:
props['min_zoom'] = min_val
return None
def add_vehicle_restrictions(shape, props, fid, zoom):
"""
Parse the maximum height, weight, length, etc... restrictions on vehicles
and create the `hgv_restriction` and `hgv_restriction_shield_text`.
"""
from math import floor
def _one_dp(val, unit):
deci = int(floor(10 * val))
if deci % 10 == 0:
return "%d%s" % (deci / 10, unit)
return "%.1f%s" % (0.1 * deci, unit)
def _metres(val):
# parse metres or feet and inches, return cm
metres = _to_float_meters(val)
if metres:
return True, _one_dp(metres, 'm')
return False, None
def _tonnes(val):
tonnes = to_float(val)
if tonnes:
return True, _one_dp(tonnes, 't')
return False, None
def _false(val):
return val == 'no', None
Restriction = namedtuple('Restriction', 'kind parse')
restrictions = {
'maxwidth': Restriction('width', _metres),
'maxlength': Restriction('length', _metres),
'maxheight': Restriction('height', _metres),
'maxweight': Restriction('weight', _tonnes),
'maxaxleload': Restriction('wpa', _tonnes),
'hazmat': Restriction('hazmat', _false),
}
hgv_restriction = None
hgv_restriction_shield_text = None
for osm_key, restriction in restrictions.items():
osm_val = props.pop(osm_key, None)
if osm_val is None:
continue
restricted, shield_text = restriction.parse(osm_val)
if not restricted:
continue
if hgv_restriction is None:
hgv_restriction = restriction.kind
hgv_restriction_shield_text = shield_text
else:
hgv_restriction = 'multiple'
hgv_restriction_shield_text = None
if hgv_restriction:
props['hgv_restriction'] = hgv_restriction
if hgv_restriction_shield_text:
props['hgv_restriction_shield_text'] = hgv_restriction_shield_text
return shape, props, fid
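# Worked example (tag values invented): maxheight=4.2 alone yields
# hgv_restriction='height' and hgv_restriction_shield_text='4.2m'; adding
# maxweight=7.5 as well collapses the result to hgv_restriction='multiple'
# with no shield text, per the logic above.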
def load_collision_ranker(fh):
import yaml
from vectordatasource.collision import CollisionRanker
data = yaml.load(fh)
assert isinstance(data, list)
return CollisionRanker(data)
def add_collision_rank(ctx):
"""
Add or update a collision_rank property on features in the given layers.
The collision rank is looked up from a YAML file consisting of a list of
filters (same syntax as in kind/min_zoom YAML) and "_reserved" blocks.
Collision rank indices are automatically assigned based on where in the
list a matching filter is found.
"""
feature_layers = ctx.feature_layers
zoom = ctx.nominal_zoom
start_zoom = ctx.params.get('start_zoom', 0)
end_zoom = ctx.params.get('end_zoom')
ranker = ctx.resources.get('ranker')
where = ctx.params.get('where')
assert ranker, 'add_collision_rank: missing ranker resource'
if zoom < start_zoom:
return None
if end_zoom is not None and zoom >= end_zoom:
return None
if where:
where = compile(where, 'queries.yaml', 'eval')
for layer in feature_layers:
layer_name = layer['layer_datum']['name']
for shape, props, fid in layer['features']:
# use the "where" clause to limit the selection of features which
# we add collision_rank to.
add_collision_rank = True
if where:
local = defaultdict(lambda: None)
local.update(props)
local['layer_name'] = layer_name
local['_has_name'] = _has_name(props)
add_collision_rank = eval(where, {}, local)
if add_collision_rank:
props_with_layer = props.copy()
props_with_layer['$layer'] = layer_name
rank = ranker((shape, props_with_layer, fid))
if rank is not None:
props['collision_rank'] = rank
return None
# mappings from the fclass_XXX values in the Natural Earth disputed areas data
# to the matching Tilezen kind.
_REMAP_VIEWPOINT_KIND = {
'Disputed (please verify)': 'disputed',
'Indefinite (please verify)': 'indefinite',
'Indeterminant frontier': 'indeterminate',
'International boundary (verify)': 'country',
'Lease limit': 'lease_limit',
'Line of control (please verify)': 'line_of_control',
'Overlay limit': 'overlay_limit',
'Unrecognized': 'unrecognized_country',
'Map unit boundary': 'map_unit',
'Breakaway': 'disputed_breakaway',
'Claim boundary': 'disputed_claim',
'Elusive frontier': 'disputed_elusive',
'Reference line': 'disputed_reference_line',
'Admin-1 region boundary': 'macroregion',
'Admin-1 boundary': 'region',
'Admin-1 statistical boundary': 'region',
'Admin-1 statistical meta bounds': 'region',
'1st Order Admin Lines': 'region',
'Unrecognized Admin-1 region boundary': 'unrecognized_macroregion',
'Unrecognized Admin-1 boundary': 'unrecognized_region',
'Unrecognized Admin-1 statistical boundary': 'unrecognized_region',
'Unrecognized Admin-1 statistical meta bounds': 'unrecognized_region',
}
def remap_viewpoint_kinds(shape, props, fid, zoom):
"""
Remap Natural Earth kinds in kind:* country viewpoints into the standard
Tilezen nomenclature.
"""
for key in props.keys():
if key.startswith('kind:'):
props[key] = _REMAP_VIEWPOINT_KIND.get(props[key])
return (shape, props, fid)
def _list_of_countries(value):
"""
Parses a comma or semicolon delimited list of ISO 3166-1 alpha-2 codes,
discarding those which don't match our expected format. We also allow a
special pseudo-country code "iso".
Returns a list of lower-case, stripped country codes (plus "iso").
"""
from re import match
from re import split
countries = []
candidates = split('[,;]', value)
for candidate in candidates:
# should have an ISO 3166-1 alpha-2 code, so should be 2 ASCII
# latin characters.
candidate = candidate.strip().lower()
if candidate == 'iso' or match('[a-z][a-z]', candidate):
countries.append(candidate)
return countries
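# For example, _list_of_countries('US; gb, iso,x1') returns
# ['us', 'gb', 'iso']: codes are stripped and lowercased, and anything that
# isn't two latin letters (or the special pseudo-code 'iso') is discarded.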
def unpack_viewpoint_claims(shape, props, fid, zoom):
"""
Unpack OSM "claimed_by" list into viewpoint kinds.
For example; "claimed_by=AA;BB;CC" should become "kind:aa=country,
kind:bb=country, kind:cc=country" (or region, etc... as appropriate for
    the main kind, which should be "unrecognized_TYPE").
Additionally, "recognized_by=XX;YY;ZZ" indicates that these viewpoints,
although they don't claim the territory, recognize the claim and should
see it in their viewpoint as a country/region/county.
"""
prefix = 'unrecognized_'
kind = props.get('kind')
claimed_by = props.get('claimed_by')
recognized_by = props.get('recognized_by')
if kind and kind.startswith(prefix) and claimed_by:
claimed_kind = kind[len(prefix):]
for country in _list_of_countries(claimed_by):
props['kind:' + country] = claimed_kind
if recognized_by:
for viewpoint in _list_of_countries(recognized_by):
props['kind:' + viewpoint] = claimed_kind
return (shape, props, fid)
class _DisputeMasks(object):
"""
Creates a "mask" of polygons by buffering disputed border lines and
provides an interface through cut() to intersect other border lines and
apply kind:xx=unrecognized_* to them.
This allows us to handle disputed borders - we effectively clip them out
of the disputant's viewpoint by setting a property that will hide them.
"""
def __init__(self, buffer_distance):
self.buffer_distance = buffer_distance
self.masks = []
def add(self, shape, props):
from shapely.geometry import CAP_STYLE
from shapely.geometry import JOIN_STYLE
disputed_by = props.get('disputed_by', '')
disputants = _list_of_countries(disputed_by)
if disputants:
# we use a flat cap to avoid straying too much into nearby lines
# and a mitred join to avoid creating extra geometry points to
# represent the curve, as this slows down intersection checks.
buffered_shape = shape.buffer(
self.buffer_distance, CAP_STYLE.flat, JOIN_STYLE.mitre)
self.masks.append((buffered_shape, disputants))
def empty(self):
return not self.masks
def cut(self, shape, props, fid):
"""
Cut the (shape, props, fid) feature against the masks to apply the
dispute to the boundary by setting 'kind:xx' to unrecognized.
"""
updated_features = []
# figure out what we want the boundary kind to be, if it's intersected
# with the dispute mask.
kind = props['kind']
if kind.startswith('unrecognized_'):
unrecognized = kind
else:
unrecognized = 'unrecognized_' + kind
for mask_shape, disputants in self.masks:
# we don't want to override a kind:xx if it has already been set
# (e.g: by a claim), so we filter out disputant viewpoints where
# a kind override has already been set.
#
# this is necessary for dealing with the case where a border is
# both claimed and disputed in the same viewpoint.
non_claim_disputants = []
for disputant in disputants:
key = 'kind:' + disputant
if key not in props:
non_claim_disputants.append(disputant)
if shape.intersects(mask_shape):
cut_shape = shape.intersection(mask_shape)
cut_shape = _filter_geom_types(cut_shape, _LINE_DIMENSION)
shape = shape.difference(mask_shape)
shape = _filter_geom_types(shape, _LINE_DIMENSION)
if not cut_shape.is_empty:
new_props = props.copy()
for disputant in non_claim_disputants:
new_props['kind:' + disputant] = unrecognized
updated_features.append((cut_shape, new_props, None))
if not shape.is_empty:
updated_features.append((shape, props, fid))
return updated_features
# tuple of boundary kind values on which we should set alternate viewpoints
# from disputed_by ways.
_BOUNDARY_KINDS = ('country', 'region', 'county', 'locality',
'aboriginal_lands')
def apply_disputed_boundary_viewpoints(ctx):
"""
Use the dispute features to apply viewpoints to the admin boundaries.
We take the 'mz_internal_dispute_mask' features and build a mask from them.
The mask is used to move the information from 'disputed_by' lists on the
mask features to 'kind:xx' overrides on the boundary features. The mask
features are discarded afterwards.
"""
params = _Params(ctx, 'apply_disputed_boundary_viewpoints')
layer_name = params.required('base_layer')
start_zoom = params.optional('start_zoom', typ=int, default=0)
end_zoom = params.optional('end_zoom', typ=int)
layer = _find_layer(ctx.feature_layers, layer_name)
zoom = ctx.nominal_zoom
if zoom < start_zoom or \
(end_zoom is not None and zoom >= end_zoom):
return None
# we tried intersecting lines against lines, but this often led to a sort
# of "dashed pattern" in the output where numerical imprecision meant two
# lines don't quite intersect.
#
# we solve this by buffering out the shape by a small amount so that we're
# more likely to get a clean cut against the boundary line.
#
# tolerance for zoom is the length of 1px at 256px per tile, so we can take
# a fraction of that to get sub-pixel alignment.
buffer_distance = 0.1 * tolerance_for_zoom(zoom)
# first, separate out the dispute mask geometries
masks = _DisputeMasks(buffer_distance)
# features that we're going to return
new_features = []
# boundaries, which we pull out separately to apply the disputes to
boundaries = []
for shape, props, fid in layer['features']:
kind = props.get('kind')
if kind == 'mz_internal_dispute_mask':
masks.add(shape, props)
elif kind in _BOUNDARY_KINDS:
boundaries.append((shape, props, fid))
# we want to apply disputes to already generally-unrecognised borders
# too, as this allows for multi-level fallback from one viewpoint
# possibly through several others before reaching the default.
elif (kind.startswith('unrecognized_') and
kind[len('unrecognized_'):] in _BOUNDARY_KINDS):
boundaries.append((shape, props, fid))
else:
# pass through this feature - we just ignore it.
new_features.append((shape, props, fid))
# quick escape if there are no masks (which should be the common case)
if masks.empty():
# keep the boundaries and other features we already passed through,
# but drop the masks - we don't want them in the output.
new_features.extend(boundaries)
else:
for shape, props, fid in boundaries:
# cut boundary features against disputes and set the alternate
# viewpoint on any which intersect.
features = masks.cut(shape, props, fid)
new_features.extend(features)
layer['features'] = new_features
return layer
def update_min_zoom(ctx):
"""
Update the min zoom for features matching the Python fragment "where"
clause. If none is provided, update all features.
The new min_zoom is calculated by evaluating a Python fragment passed
in through the "min_zoom" parameter. This is evaluated in the context
of the features' parameters, plus a zoom variable.
If the min zoom is lower than the current min zoom, the current one is
kept. If the min zoom is increased, then it's checked against the
current zoom and the feature dropped if it's not in range.
"""
params = _Params(ctx, 'update_min_zoom')
layer_name = params.required('source_layer')
start_zoom = params.optional('start_zoom', typ=int, default=0)
end_zoom = params.optional('end_zoom', typ=int)
min_zoom = params.required('min_zoom')
where = params.optional('where')
layer = _find_layer(ctx.feature_layers, layer_name)
zoom = ctx.nominal_zoom
if zoom < start_zoom or \
(end_zoom is not None and zoom >= end_zoom):
return None
min_zoom = compile(min_zoom, 'queries.yaml', 'eval')
if where:
where = compile(where, 'queries.yaml', 'eval')
new_features = []
for shape, props, fid in layer['features']:
local = defaultdict(lambda: None)
local.update(props)
local['zoom'] = zoom
if where and eval(where, {}, local):
new_min_zoom = eval(min_zoom, {}, local)
if new_min_zoom > props.get('min_zoom'):
props['min_zoom'] = new_min_zoom
if new_min_zoom >= zoom + 1 and zoom < 16:
# DON'T add feature - it's masked by min zoom.
continue
new_features.append((shape, props, fid))
layer['features'] = new_features
return layer
def major_airport_detector(shape, props, fid, zoom):
if props.get('kind') == 'aerodrome':
passengers = props.get('passenger_count', 0)
kind_detail = props.get('kind_detail')
# if we didn't detect that the airport is international (probably
# missing tagging to indicate that), but it carries over a million
# passengers a year, then it's probably an airport in the same class
# as an international one.
#
# for example, TPE (Taipei) airport hasn't got any international
# tagging, but carries over 45 million passengers a year. however,
# CGH (Sao Paulo Congonhas) carries 21 million, but is actually a
# domestic airport -- however it's so large we'd probably want to
# display it at the same scale as an international airport.
if kind_detail != 'international' and passengers > 1000000:
props['kind_detail'] = 'international'
# likewise, if we didn't detect a kind detail, but the number of
# passengers suggests it's more than just a flying club airfield,
# then set a regional kind_detail.
elif kind_detail is None and passengers > 10000:
props['kind_detail'] = 'regional'
return shape, props, fid
_NE_COUNTRY_CAPITALS = [
'Admin-0 region capital',
'Admin-0 capital alt',
'Admin-0 capital',
]
_NE_REGION_CAPITALS = [
'Admin-1 capital',
'Admin-1 region capital',
]
def capital_alternate_viewpoint(shape, props, fid, zoom):
"""
Removes the fclass_* properties and replaces them with viewpoint overrides
for country_capital and region_capital.
"""
fclass_prefix = 'fclass_'
default_country_capital = props.get('country_capital', False)
default_region_capital = props.get('region_capital', False)
for k in props.keys():
if k.startswith(fclass_prefix):
viewpoint = k[len(fclass_prefix):]
fclass = props.pop(k)
country_capital = fclass in _NE_COUNTRY_CAPITALS
region_capital = fclass in _NE_REGION_CAPITALS
if country_capital:
props['country_capital:' + viewpoint] = True
elif region_capital:
props['region_capital:' + viewpoint] = True
if default_country_capital and not country_capital:
props['country_capital:' + viewpoint] = False
elif default_region_capital and not region_capital:
props['region_capital:' + viewpoint] = False
return shape, props, fid
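# Illustrative example (props invented): a place with
#   {'country_capital': True, 'fclass_iso': 'Admin-1 capital'}
# becomes
#   {'country_capital': True, 'region_capital:iso': True,
#    'country_capital:iso': False}
# i.e. the 'iso' viewpoint demotes the default country capital to a region
# capital.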
|
We consider the consumer decision journey at every stage of the funnel. With that in mind, we craft efficient, effective, channel-agnostic media strategies based on meaningful consumer insights, derived from a strong strategic foundation and a data-driven mindset.
Our transparency lets you see results.
As an agency, being independent makes all the difference for our clients. We offer complete transparency, so what you see is what you get: no hidden fees, no surprises, just the media strategy you approved at the start of the process. And because we’re not beholden to any holding-company deals, 100% of your media dollars are spent connecting with your audience – and driving business results.
We measure up because we measure more.
With a top-notch, in-house analytics team on hand from the jump, we’re able to measure everything we do at every stage of the journey, so we can ensure that our KPIs align with your business goals. By grounding everything we do in strategy and data, we’re constantly optimizing our media plans to drive performance.
First we dig into the specifics of your brand to identify the challenges and metrics that will inform what success looks like.
We transform the strategic platform and ideas into a meaningful and specific connections plan to be executed.
We look at the most effective way to execute the plan, so we can maximize the impact of our working media dollars.
We ensure that our strategies and tactics are delivering on the overarching KPIs and business goals.
With low overall brand awareness and stagnating conversion rates, we developed a digital solution to fill the funnel in order to drive ongoing policy application starts for Jewelers Mutual. By reallocating 20% of the ad spend to upper funnel tactics (video and display) and dedicating the remaining budget to focus on driving overall conversion, we leveraged a programmatic approach to engage multiple funnel stages in a targeted way. As a result, after just one quarter we increased overall conversions by 30% and decreased our cost-per-acquisition by almost 32%.
We developed a dense and layered media plan with an integrated channel mix to help our client stay on top in this highly competitive market. Because video, audio, display, native and rich-media were purchased programmatically, LC was able to ensure incremental reach as the campaign progressed. Video and audio elevated the brand. Select out-of-home buys penetrated key geographic territories. Social and rich-media units drove engagement and education, while digital was optimized continually to maximize campaign effectiveness as a whole. As a result, a Media Mix model study found the integrated approach drove a 9:1 ROI.
|
# Copyright (c) 2020 University of Chicago.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
class BaseFilter(metaclass=abc.ABCMeta):
enforcement_opts = []
def __init__(self, conf=None):
self.conf = conf
for opt in self.enforcement_opts:
self.conf.register_opt(opt, 'enforcement')
def __getattr__(self, name):
func = getattr(self.conf.enforcement, name)
return func
@abc.abstractmethod
def check_create(self, context, lease_values):
pass
@abc.abstractmethod
def check_update(self, context, current_lease_values, new_lease_values):
pass
@abc.abstractmethod
def on_end(self, context, lease_values):
pass
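# A minimal sketch of a concrete filter (illustrative, not part of the real
# codebase): subclasses implement the three hooks and can declare extra
# configuration options via `enforcement_opts`.
#
#   class MaxLeaseDurationFilter(BaseFilter):
#       enforcement_opts = []  # e.g. an oslo.config IntOpt (assumed)
#
#       def check_create(self, context, lease_values):
#           pass  # raise here if the new lease violates the policy
#
#       def check_update(self, context, current_lease_values,
#                        new_lease_values):
#           pass
#
#       def on_end(self, context, lease_values):
#           pass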
|
UWG Continuing Education offers both Instructor-Led Courses and Online Insurance Courses.
This class is the designated 40HR class for individuals wishing to sell Property & Casualty insurance in the State of Georgia. Upon completion, students will sit for the State exam. Students will learn General Insurance Principles, PC Terms, Liability, Loss Valuation, Personal Lines, Commercial Lines, and the various Homeowner Policies. Please call ahead at 678-839-6614 for book ordering information.
This class will allow students to prepare to sit for the state insurance exam to sell Life, Accident, and Sickness insurance in the State of Georgia. Students will learn about the types of policies, underwriting, taxes, retirement, and other terms needed to pass the exam.
The University of West Georgia has partnered with WebCE, the leading provider of online continuing education (CE) for insurance professionals nationwide, to deliver high quality CE courses to insurance agents and adjusters.
University of West Georgia offers insurance continuing education in all 50 states plus the District of Columbia.
|
#!/usr/bin/env python
# Computes the matrix of similarities between structures in a xyz file
# by first getting SOAP descriptors for all environments, finding the best
# match between environments using the Hungarian algorithm, and finally
# summing up the environment distances.
# Supports periodic systems, matching between structures with different
# atom number and kinds, and sports the infrastructure for introducing an
# alchemical similarity kernel to match different atomic species
# import sys, os, pickle
import sys, os
import cPickle as pickle
import gc
from lap.lap import best_pairs, best_cost, lcm_best_cost
from lap.perm import xperm, mcperm, rematch
import numpy as np
from environments import environ, alchemy, envk
import quippy
__all__ = [ "structk", "structure" ]
class structure:
def __init__(self, salchem=None):
self.env={}
self.species={}
self.zspecies = []
self.atz = []
self.nenv=0
self.alchem=salchem
if self.alchem is None: self.alchem=alchemy()
self.globenv = None
def getnz(self, sp):
if sp in self.species:
return self.species[sp]
else: return 0
def getatomenv(self, i):
if i>=len(self.atz):
raise IndexError("Trying to access atom past structure size")
k=0
lsp = {}
for z in self.atz:
if z in lsp: lsp[z]+=1
else: lsp[z] = 0
if i==k:
return self.env[z][lsp[z]]
k+=1
def getenv(self, sp, i):
if sp in self.env and i<len(self.env[sp]):
return self.env[sp][i]
else:
return environ(self.nmax,self.lmax,self.alchem,sp) # missing atoms environments just returned as isolated species!
def ismissing(self, sp, i):
if sp in self.species and i<self.species[sp]:
return False
else: return True
def parse(self, fat, coff=5.0, cotw=0.5, nmax=4, lmax=3, gs=0.5, cw=1.0, nocenter=[], noatom=[], kit=None, soapdump=None):
""" Takes a frame in the QUIPPY format and computes a list of its environments. """
# removes atoms that are to be ignored
at = fat.copy()
nol = []
for s in range(1,at.z.size+1):
if at.z[s] in noatom: nol.append(s)
if len(nol)>0: at.remove_atoms(nol)
self.nmax = nmax
self.lmax = lmax
self.atz = at.z.copy()
self.species = {}
for z in at.z:
if z in self.species: self.species[z]+=1
else: self.species[z] = 1
self.zspecies = self.species.keys();
self.zspecies.sort();
lspecies = 'n_species='+str(len(self.zspecies))+' species_Z={ '
for z in self.zspecies: lspecies = lspecies + str(z) + ' '
lspecies = lspecies + '}'
at.set_cutoff(coff);
at.calc_connect();
self.nenv = 0
if not soapdump is None:
soapdump.write("####### SOAP VECTOR FRAME ######\n")
for sp in self.species:
if sp in nocenter:
self.species[sp]=0
continue # Option to skip some environments
# first computes the descriptors of species that are present
if not soapdump is None: sys.stderr.write("SOAP STRING: "+"soap central_reference_all_species=F central_weight="+str(cw)+" covariance_sigma0=0.0 atom_sigma="+str(gs)+" cutoff="+str(coff)+" cutoff_transition_width="+str(cotw)+" n_max="+str(nmax)+" l_max="+str(lmax)+' '+lspecies+' Z='+str(sp)+"\n")
desc = quippy.descriptors.Descriptor("soap central_reference_all_species=F central_weight="+str(cw)+" covariance_sigma0=0.0 atom_sigma="+str(gs)+" cutoff="+str(coff)+" cutoff_transition_width="+str(cotw)+" n_max="+str(nmax)+" l_max="+str(lmax)+' '+lspecies+' Z='+str(sp) )
            try:
                psp = desc.calc(at)["descriptor"].T
            except TypeError:
                # re-raise: psp would be undefined below without a descriptor
                print("Interface change in QUIP/GAP. Update your code first.")
                raise
if not soapdump is None:
soapdump.write("Specie %d - %d atoms\n"% (sp,len(psp)))
for p in psp:
np.savetxt(soapdump,[p])
# now repartitions soaps in environment descriptors
lenv = []
for p in psp:
nenv = environ(nmax, lmax, self.alchem)
nenv.convert(sp, self.zspecies, p)
lenv.append(nenv)
self.env[sp] = lenv
self.nenv += self.species[sp]
# adds kit data
if kit is None: kit = {}
for sp in kit:
if not sp in self.species:
self.species[sp]=0
self.env[sp] = []
for k in range(self.species[sp], kit[sp]):
self.env[sp].append(environ(self.nmax,self.lmax,self.alchem,sp))
self.nenv+=1
self.species[sp] = kit[sp]
self.zspecies = self.species.keys()
self.zspecies.sort()
# also compute the global (flattened) fingerprint
self.globenv = environ(nmax, lmax, self.alchem)
for k, se in self.env.items():
for e in se:
self.globenv.add(e)
# divides by the number of atoms in the structure
for sij in self.globenv.soaps: self.globenv.soaps[sij]*=1.0/self.nenv
# self.globenv.normalize() #if needed, normalization will be done later on.....
def gcd(a,b):
if (b>a): a,b = b, a
while (b): a, b = b, a%b
return a
def lcm(a,b):
return a*b/gcd(b,a)
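# e.g. gcd(12, 18) == 6 and lcm(4, 6) == 12 (exact under Python 2 integer
# division); lcm() is used below to bring structures of different
# periodicity to a common number of environments before matching.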
#def gstructk(strucA, strucB, alchem=alchemy(), periodic=False):
#
# return envk(strucA.globenv, strucB.globenv, alchem)
def structk(strucA, strucB, alchem=alchemy(), periodic=False, mode="match", fout=None, peps=0.0, gamma=1.0, zeta=1.0, xspecies=False):
# computes the SOAP similarity KERNEL between two structures by combining atom-centered kernels
# possible kernel modes include:
# average : scalar product between averaged kernels
# match: best-match hungarian kernel
# permanent: average over all permutations
# average kernel. quick & easy!
if mode=="fastavg":
genvA=strucA.globenv
genvB=strucB.globenv
return envk(genvA, genvB, alchem)**zeta, 0
elif mode=="fastspecies":
# for now, only implement standard Kronecker alchemy
senvB = environ(strucB.nmax, strucB.lmax, strucB.alchem)
kk = 0
for za in strucA.zspecies:
if not za in strucB.zspecies: continue
senvA = environ(strucA.nmax, strucA.lmax, strucA.alchem)
for ia in xrange(strucA.getnz(za)):
senvA.add(strucA.getenv(za, ia))
senvB = environ(strucB.nmax, strucB.lmax, strucB.alchem)
for ib in xrange(strucB.getnz(za)):
senvB.add(strucB.getenv(za, ib))
kk += envk(senvA, senvB, alchem)**zeta
kk/=strucA.nenv*strucB.nenv
return kk,0
# for zb, nzb in nspeciesB:
# for ib in xrange(nzb):
# return envk(genvA, genvB, alchem), 0
nenv = 0
if periodic: # replicate structures to match structures of different periodicity
# we do not check for compatibility at this stage, just assume that the
# matching will be done somehow (otherwise it would be exceedingly hard to manage in case of non-standard alchemy)
nspeciesA = []
nspeciesB = []
for z in strucA.zspecies:
nspeciesA.append( (z, strucA.getnz(z)) )
for z in strucB.zspecies:
nspeciesB.append( (z, strucB.getnz(z)) )
nenv=nenvA = strucA.nenv
nenvB = strucB.nenv
else:
# top up missing atoms with isolated environments
# first checks which atoms are present
zspecies = sorted(list(set(strucB.zspecies+strucA.zspecies)))
nspecies = []
for z in zspecies:
nz = max(strucA.getnz(z),strucB.getnz(z))
nspecies.append((z,nz))
nenv += nz
nenvA = nenvB = nenv
nspeciesA = nspeciesB = nspecies
np.set_printoptions(linewidth=500,precision=4)
kk = np.zeros((nenvA,nenvB),float)
ika = 0
ikb = 0
for za, nza in nspeciesA:
for ia in xrange(nza):
envA = strucA.getenv(za, ia)
ikb = 0
for zb, nzb in nspeciesB:
for ib in xrange(nzb):
envB = strucB.getenv(zb, ib)
if alchem.mu > 0 and (strucA.ismissing(za, ia) ^ strucB.ismissing(zb, ib)):
# includes a penalty dependent on "mu", in a way that is consistent with the definition of kernel distance
                        kk[ika,ikb] = np.exp(-alchem.mu)
else:
if za == zb or not xspecies: #uncomment to zero out kernels between different species
kk[ika,ikb] = envk(envA, envB, alchem)**zeta
else: kk[ika,ikb] = 0
ikb+=1
ika+=1
aidx = {}
ika=0
for za, nza in nspeciesA:
aidx[za] = range(ika,ika+nza)
ika+=nza
ikb=0
bidx = {}
for zb, nzb in nspeciesB:
bidx[zb] = range(ikb,ikb+nzb)
ikb+=nzb
if fout != None:
# prints out similarity information for the environment pairs
fout.write("# atomic species in the molecules (possibly topped up with dummy isolated atoms): \n")
for za, nza in nspeciesA:
for ia in xrange(nza): fout.write(" %d " % (za) )
fout.write("\n");
for zb, nzb in nspeciesB:
for ib in xrange(nzb): fout.write(" %d " % (zb) )
fout.write("\n");
fout.write("# environment kernel matrix: \n")
for r in kk:
for e in r:
fout.write("%20.14e " % (e) )
fout.write("\n")
#fout.write("# environment kernel eigenvalues: \n")
#ev = np.linalg.eigvals(kk)
#for e in ev:
# fout.write("(%8.4e,%8.4e) " % (e.real,e.imag) )
#fout.write("\n");
# Now we have the matrix of scalar products.
# We can first find the optimal scalar product kernel
# we must find the maximum "cost"
if mode == "match":
if periodic and nenvA != nenvB:
nenv = lcm(nenvA, nenvB)
hun = lcm_best_cost(1-kk)
else:
hun=best_cost(1.0-kk)
cost = 1-hun/nenv
elif mode == "permanent":
# there is no place to hide: cross-species environments are not necessarily zero
if peps>0: cost = mcperm(kk, peps)
else: cost = xperm(kk)
cost = cost/np.math.factorial(nenv)/nenv
elif mode == "rematch":
cost=rematch(kk, gamma, 1e-6) # hard-coded residual error for regularized gamma
# print cost, kk.sum()/(nenv*nenv), envk(strucA.globenv, strucB.globenv, alchem)
elif mode == "average":
cost = kk.sum()/(nenvA*nenvB)
# print 'elem: {}'.format(kk.sum())
# print 'elem norm: {}'.format(cost)
# print 'avg norm: {}'.format((nenvA*nenvB))
else: raise ValueError("Unknown global fingerprint mode ", mode)
return cost,kk
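# Typical call (arguments illustrative): with two parsed `structure`
# objects sA and sB,
#
#   cost, kk = structk(sA, sB, alchem=alchemy(), mode="rematch", gamma=0.5)
#
# returns the global similarity `cost` plus the raw environment kernel
# matrix `kk` (the "fast*" modes return 0 in its place, since they never
# build the matrix).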
class structurelist(list):
def __init__(self, basedir="tmpstructures"):
self.basedir=basedir
# create the folder if it is not there
if not os.path.exists(basedir):os.makedirs(basedir)
self.count=0
def exists(self, index):
# return true if the file associated with index exists, false otherwise
f=self.basedir+'/sl_'+str(index)+'.dat'
return os.path.isfile(f)
# @profile
def append(self, element):
#pickle the element for later use
ind=self.count
f=self.basedir+'/sl_'+str(ind)+'.dat'
file = open(f,"wb")
gc.disable()
pickle.dump(element, file,protocol=pickle.HIGHEST_PROTOCOL) # HIGHEST_PROTOCOL is 2 in py 2.7
file.close()
gc.enable()
self.count+=1
# @profile
def __getitem__(self, index):
f = self.basedir+'/sl_'+str(index)+'.dat'
try:
file = open(f,"rb")
except IOError:
raise IOError("Cannot load descriptors for index %d" % (index) )
gc.disable()
l = pickle.load(file)
file.close()
gc.enable()
return l
|
If you're looking for a family vacation that involves watching enormous quantities of water go off a cliff, you can't beat Niagara Falls. We went there recently with several other families, and our feeling of awe and wonderment can best be summed up by the words of my friend Libby Burger, who, when we first beheld the heart-stopping spectacle of millions of gallons of water per second hurtling over the precipice and thundering into the mist-enshrouded gorge below, said, "I have to tinkle."
The falls have been casting this magical spell ever since they were discovered thousands of years ago by Native Americans, who gave them the name "Niagara," which means "Place Where There Will Eventually Be Museums Dedicated, for No Apparent Reason, to Frankenstein, John F. Kennedy, Harry Houdini and Elvis." And this has certainly proved to be true, as today the area around the falls features an extremely dense wad of tourist attractions. In addition to the museums (both wax and regular), there was a place where you could see tiny scale models of many world-famous buildings such as the Vatican; plus one of the world's largest floral clocks; plus, of course, miniature golf courses, houses of horror and countless stores selling souvenir plates, cups, clocks, knives, spoons, refrigerator magnets, thermometers, folding combs, toothbrushes, toenail clippers, hats, T-shirts, towels, boxer shorts and random slabs of wood, all imprinted with what appears to be the same blurred, heavily colorized picture, taken in about 1948, depicting some object that could be Niagara Falls, or could also be hamsters mating.
Of course, the big tourism attraction is Niagara Falls, a geological formation caused by the Great Lakes being attracted toward gravity. Also limestone is involved. We learned these facts from a giant-screen movie that we paid to get into after the children became bored with looking at the actual falls, a process that took them perhaps four minutes. They are modern children. They have Nintendo. They have seen what appears to be a real dinosaur eat what appears to be a real lawyer in the movie "Jurassic Park." They are not about to be impressed by mere water.
The movie featured a dramatic re-enactment of the ancient falls legend of "The Maid of the Mist." This was an Indian maiden whose father wanted her to marry a fat toothless old man who, in the movie, looks a lot like U.S. House of Representatives Ways and Means Committee Chairman and noted stamp collector Dan Rostenkowski wearing a bad wig. The maiden was so upset about this that she paddled a canoe over the Falls, thus becoming one with the Thunder God, the Mist God, the God of Canoe Repair, etc. At least that is the legend. Some of us were skeptical. As my friend Gene Weingarten put it: "I think she became one with the rocks."
Since that time a number of people have gone over the falls in barrels, not always with positive health results. What would motivate people to take such a terrible risk? My theory is that they were tourists. They probably paid ADMISSION to get into the barrels. I bet that, even as they were going over the brink, they were videotaping the barrel interiors.
|
# -*- coding: utf-8 -*-
from __future__ import absolute_import
from dp_tornado.engine.helper import Helper as dpHelper
try:
from wand.image import Image
except ImportError:
Image = None
class WandHelper(dpHelper):
@property
def Image(self):
return Image
def load(self, src):
return Image(filename=src)
def size(self, src):
return src.width, src.height
def crop(self, img, left, top, right, bottom):
img.crop(left, top, right, bottom)
return img
def resize(self, img, width, height, kwargs=None):
if kwargs is None:
img.resize(width, height)
else:
raise Exception('Not implemented method.')
return img
def border(self, img, border, border_color):
raise Exception('Not implemented method.')
def radius(self, img, radius, border, border_color):
raise Exception('Not implemented method.')
def colorize(self, img, colorize):
raise Exception('Not implemented method.')
def save(self, img, ext, dest, kwargs):
if ext.lower() == 'jpg':
ext = 'jpeg'
img.format = ext
img.save(filename=dest)
return True
def iter_seqs(self, img, kwargs):
yield 0, img
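# A minimal usage sketch (paths invented), assuming the `wand` package is
# installed (so Image is not None) and `helper` is a WandHelper instance
# obtained through dp_tornado:
#
#   img = helper.load('/tmp/in.png')
#   w, h = helper.size(img)
#   img = helper.resize(img, 320, 240)
#   helper.save(img, 'jpg', '/tmp/out.jpg', kwargs=None)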
|
Course fees minus the deposit are refundable until 30 days prior to the course date.
After that, your course fee is non-refundable.
We design our courses purposefully. To deliver a great course, we need an optimal number of participants, which means both a minimum and a maximum number of people attending. In addition, we tend not to keep waiting lists: once you register, we count you in. We want to know who you are and what your needs are so that we can fine-tune our courses accordingly. Without a firm understanding of who is coming and what they are committed to for their personal learning, you won’t get the value you came for, and if participants enroll and un-enroll, we can’t deliver on our promises to you. Moreover, as with any business operation, we have hard costs, and our venue requires payment thirty days out, so we need firm numbers for the program.
Thank you for your understanding. We love working with you and are committed to delivering the best possible product to you.
|
# -*- coding: utf-8 -*-
import scrapy
import re
import urllib.request
from scrapy.http import Request
from spider_jd_phone.items import SpiderJdPhoneItem
class JdPhoneSpider(scrapy.Spider):
name = 'jd_phone'
allowed_domains = ['jd.com']
    str_keyword = '手机京东自营'  # search keyword: "mobile phones, JD self-operated"
encode_keyword = urllib.request.quote(str_keyword)
url = 'https://search.jd.com/Search?keyword=' + encode_keyword + '&enc=utf-8&qrst' \
'=1&rt' \
'=1&stop=1&spm=2.1.0&vt=2&page=1&s=1&click=0'
# start_urls = [url]
# def start_requests(self):
# print(">>>进行第一次爬取<<<")
# print("爬取网址:%s" % self.url)
# yield Request(self.encode_url,
# headers={
# 'User-Agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_0) "
# "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.167 Safari/537.36"
# })
    # set the uid of the user to crawl, in preparation for building the crawl URL later
# uid = "19940007"
# start_urls = ["http://19940007.blog.hexun.com/p1/default.html"]
def start_requests(self):
print(">>>进行第一次爬取<<<")
# 首次爬取模拟成浏览器进行
# yield Request(
# "http://" + str(self.uid) + ".blog.hexun.com/p1/default.html",
# headers={
# 'User-Agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_0) "
# "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.167 Safari/537.36"
# })
url = "https://search.jd.com/Search?keyword=%E6%89%8B%E6%9C%BA%E4%BA%AC%E4%B8%9C%E8%87%AA%E8%90%A5&enc=utf-8&qrst=1&rt%27=1&stop=1&spm=2.1.0&vt=2&page=1&s=1&click=0"
print(url)
yield Request("https://search.jd.com/Search?keyword=" +
self.str_keyword + "&enc=utf-8&qrst=1&rt'=1&stop=1&spm=2.1.0&vt=2&page=1&s=1&click=0",
headers={
'User-Agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_0) "
"AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.167 Safari/537.36"
})
def parse(self, response):
print(">>>parsing...<<<")
item = SpiderJdPhoneItem()
# print(str(response.body))
file_object = open('test.html', 'wb')
file_object.write(response.body)
file_object.close()
item['price'] = response.xpath("//div["
"@class='p-price']//i/text()").extract()
item['name'] = response.xpath("//div[@class='p-name "
"p-name-type-2']//em").extract()
print("获取item:{}".format(item))
print("长度:%s" % len(item['price']))
print("长度:%s" % len(item['name']))
print("=====================")
yield item
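Assuming the standard Scrapy project layout implied by the spider_jd_phone package, the spider can be run with the scrapy crawl jd_phone command, or from a small runner script like the hypothetical sketch below (the spiders.jd_phone module path is an assumption):

# Hypothetical runner script; 'scrapy crawl jd_phone' works as well.
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

from spider_jd_phone.spiders.jd_phone import JdPhoneSpider

process = CrawlerProcess(get_project_settings())
process.crawl(JdPhoneSpider)
process.start()  # blocks until the crawl finishes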
|
The time has come to add a canine companion to your family. The options are endless, but you can narrow them down substantially to getting a rescue dog.
One reason to choose a rescue dog is this: they need a home. Even if they are in caring, “no kill” shelters, dogs do best in a home environment surrounded by love. Other reasons for getting a shelter dog include the fact that they are typically healthy, with vaccinations, spaying, and neutering, and they are far less expensive than a dog from a breeder or puppy mill.
When choosing a rescue dog, it is important to make sure you choose the right one. Dogs are living creatures, and you don’t want to have to send one back because of a poor match.
Before you go to the first shelter, assess your family situation. By assessing your situation, you can narrow your choices to find a dog with the lineage, temperament, energy level and age to fit in with your family.
Am I home a lot, or will the dog be left alone for long periods of time?
Do I have little children or am I planning to have children soon?
What kind of flooring and furniture do I have and what type of dogs may fit best into my home?
Do I have time to house train?
Do I have other pets?
Do I do a lot of entertaining?
How often do I vacation?
Do I have a budget for shots and veterinary appointments?
Don’t just drop by the nearest shelter to choose a dog. Carefully decide which shelters you will visit. Ask your friends and family, and call the ones in your area.
What is the shelter’s return policy?
Will they allow me to take the dogs for a walk so that I can see how well we get along?
Does the shelter provide medical and behavior information?
It might be worth your while to take along a trainer or someone who is familiar with dogs. Tell them ahead of time what you’ve determined by following step #1 – what type of temperament, breed and energy level you are looking for. The trainer will be able to see things about your new dog that you might not notice. This could save you, your family, and the shelter dog a lot of stress and heartache.
It’s easy to get caught up in the enthusiasm of getting a new dog or puppy and not take steps to make sure you do what is best for yourself and for the dog. As easy as it is to fall in love with a dog immediately, a dog or puppy that is chosen hastily could end up being unpredictable, and that affection could fade just as quickly.
What if, in the end, you determine adopting a rescue dog is not for you right now? There is no shame in changing your mind. You can always volunteer or donate to a shelter instead.
Have you fostered or adopted a pet? What were your experiences? What advice would you give?
Photo Credit: Photo of Adoptapoalooza in Union Park – September 2013.
The entryway of your home says a lot about you. It is the first glimpse you give of your home, and of your life.
A cluttered entryway is not just disconcerting; it can also be a hazard where someone might trip or fall.
Whether your entryway is an open space, an alcove, or a closet, here are a few ideas for keeping it clean and organized.
Don’t pack it full of stuff.
Sure, it is convenient to have your coats, boots, hats, and scarves in one place, but unless you live alone, you will never be able to find the items you need on the way out the door. Put anything you don’t use every day in the appropriate place. As the seasons change, swap out your items, putting away the ones you won’t need for the next few months.
Keep items off the floor.
Lifting items even a few inches off the floor makes the space seem bigger and cleaner. Consider using shoe racks where you can organize your most-used shoes. Place items like umbrellas and sports gear in buckets or baskets, which can easily be pulled in and out of the entryway space as needed. Storage benches are a great way to create a comfortable seating place while hiding some of the doorway clutter.
Use hooks and racks to organize coats, hats, handbags, book bags and other items towards the top of the space. Leave a couple of empty hooks for guests as well, so they won’t have to throw their jacket over a chair.
Keep everyone’s stuff separate, especially if you have kids.
If necessary, use labels or color coding so that each child knows where to place his or her “stuff.” Train them to put their items away every time they walk in the door, and your clutter problem will be eliminated.
Cleaning the kitchen after a meal has got to be the least favorite chore. This is especially true if you have to deal with burned-on food in your pots and pans. These tips will help you get your cookware clean in no time.
Remember – you should not scrub pans that have a non-stick coating.
Burned milk not only ruins your pots, but the terrible smell will stink up your home.
You will want to clean that pot immediately by covering the bottom of the pot with salt and adding warm water for 15 minutes before wiping it right off with a sponge. As for the smell, open a couple of windows and use the fan.
Cleaning burns from a cast iron skillet is easier than it looks.
Cast iron should be rinsed with hot water right after cooking before the food items dry onto the pan. For stubborn residue, use a little bit of salt and a paper towel to scrub at the stains, and rinse with hot water.
Scorched food on a pot can be cooked away in most cases.
Put a 50/50 mix of water and vinegar in the pot and cook it for about 10 minutes. You will notice the burned-on food loosening from your pot. A good scrubbing with a bristle dish brush will help you get all of the remaining grime.
A long soak is probably unnecessary.
Baked-on and greasy foods in your lasagna and spare-rib roasting pans don’t need a week-long soak to come clean. Just fill your pan with water to cover the burnt-on spots and place a used dryer sheet in the water. In 30 minutes, your pan will come clean with very little effort. Give it an extra once-over with dish soap and hot water to remove dryer sheet residue.
Prevent food from burning in the first place.
Never walk away from cooking food. Find a project to do in the kitchen if you are waiting for your water to boil or your sauces to simmer. Turn off the pot before your rice, pasta, or eggs are done cooking. The heat from the pan will finish them off nicely. Turn down the heat on sauces, allowing them to cook slowly. Try lining some of your baking dishes with aluminum foil. Also, consider investing in quality pots and pans.
|
#! A simple Psi4 input script to compute an SCF reference using Psi4's libJK
import time
import numpy as np
import psi4
psi4.set_output_file("output.dat", False)
# Water
mol = psi4.geometry("""
0 1
O
H 1 1.1
H 1 1.1 2 104
symmetry c1
""")
psi4.set_options({"basis": "aug-cc-pVDZ",
"scf_type": "df",
"e_convergence": 1e-8
})
# Set tolerances
maxiter = 12
E_conv = 1.0E-6
D_conv = 1.0E-5
# Integral generation from Psi4's MintsHelper
wfn = psi4.core.Wavefunction.build(mol, psi4.core.get_global_option("BASIS"))
mints = psi4.core.MintsHelper(wfn.basisset())
S = mints.ao_overlap()
# Get nbf and ndocc for closed shell molecules
nbf = wfn.nso()
ndocc = wfn.nalpha()
if wfn.nalpha() != wfn.nbeta():
    raise Exception("Only valid for RHF wavefunctions!")
psi4.core.print_out('\nNumber of occupied orbitals: %d\n' % ndocc)
psi4.core.print_out('Number of basis functions: %d\n\n' % nbf)
# Build H_core
V = mints.ao_potential()
T = mints.ao_kinetic()
H = T.clone()
H.add(V)
# Orthogonalizer A = S^(-1/2)
A = mints.ao_overlap()
A.power(-0.5, 1.e-16)
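# With A = S^(-1/2), the similarity transform F' = A F A yields an
# orthonormal basis (Loewdin symmetric orthogonalization), so F' can be
# diagonalized directly in build_orbitals() below.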
# Diagonalize routine
def build_orbitals(diag):
Fp = psi4.core.triplet(A, diag, A, True, False, True)
Cp = psi4.core.Matrix(nbf, nbf)
eigvecs = psi4.core.Vector(nbf)
Fp.diagonalize(Cp, eigvecs, psi4.core.DiagonalizeOrder.Ascending)
C = psi4.core.doublet(A, Cp, False, False)
Cocc = psi4.core.Matrix(nbf, ndocc)
Cocc.np[:] = C.np[:, :ndocc]
D = psi4.core.doublet(Cocc, Cocc, False, True)
return C, Cocc, D
# Build core orbitals
C, Cocc, D = build_orbitals(H)
# Setup data for DIIS
t = time.time()
E = 0.0
Enuc = mol.nuclear_repulsion_energy()
Eold = 0.0
# Initialize the JK object
jk = psi4.core.JK.build(wfn.basisset())
jk.set_memory(int(1.25e8)) # 1GB
jk.initialize()
jk.print_header()
diis_obj = psi4.p4util.solvers.DIIS(max_vec=3, removal_policy="largest")
psi4.core.print_out('\nTotal time taken for setup: %.3f seconds\n' % (time.time() - t))
psi4.core.print_out('\nStart SCF iterations:\n\n')
t = time.time()
for SCF_ITER in range(1, maxiter + 1):
# Compute JK
jk.C_left_add(Cocc)
jk.compute()
jk.C_clear()
# Build Fock matrix
F = H.clone()
F.axpy(2.0, jk.J()[0])
F.axpy(-1.0, jk.K()[0])
# DIIS error build and update
diis_e = psi4.core.triplet(F, D, S, False, False, False)
diis_e.subtract(psi4.core.triplet(S, D, F, False, False, False))
diis_e = psi4.core.triplet(A, diis_e, A, False, False, False)
diis_obj.add(F, diis_e)
# SCF energy and update
FH = F.clone()
FH.add(H)
SCF_E = FH.vector_dot(D) + Enuc
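    # i.e. E_SCF = Tr[D (H + F)] + E_nuc for the closed-shell RHF density D.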
dRMS = diis_e.rms()
psi4.core.print_out('SCF Iteration %3d: Energy = %4.16f dE = % 1.5E dRMS = %1.5E\n'
% (SCF_ITER, SCF_E, (SCF_E - Eold), dRMS))
if (abs(SCF_E - Eold) < E_conv) and (dRMS < D_conv):
break
Eold = SCF_E
# DIIS extrapolate
F = diis_obj.extrapolate()
# Diagonalize Fock matrix
C, Cocc, D = build_orbitals(F)
if SCF_ITER == maxiter:
psi4.clean()
raise Exception("Maximum number of SCF cycles exceeded.\n")
psi4.core.print_out('Total time for SCF iterations: %.3f seconds \n\n' % (time.time() - t))
#print(psi4.energy("SCF"))
psi4.core.print_out('Final SCF energy: %.8f hartree\n' % SCF_E)
psi4.compare_values(-76.0033389840197202, SCF_E, 6, 'SCF Energy')
|
Abstract: Background: Good nurses show concern for patients by caring for them effectively and attentively to foster their well-being. However, nurses cannot be taught didactically to be “good” or any trait that characterizes a good nurse. Nurses’ self-awareness of their role traits warrants further study. Objectives: This study aimed (a) to develop a strategy to elicit nurses’ self-exploration of the importance of good nurse traits and (b) to explore any discrepancies between such role traits perceived by nurses as ideally and actually important. Research design: For this mixed-method study, we used good nurse trait card play to trigger nurses’ reflections based on clinical practice. Nurse participants appraised the ideal and actual importance of each trait using a Q-sort grid. The gap between the perceived ideal and actual importance of each trait was examined quantitatively, while trait-related clinical experiences were analyzed qualitatively. Participants and research context: Participants were 35 in-service nurses (mean age = 31.6 years (range = 23–49 years); 10.1 years of nursing experience (range = 1.5–20 years)) recruited from a teaching hospital in Taiwan. Ethical considerations: The study was approved by the Institutional Review Board of the study site. Findings: Good nurse trait card play with a Q-sort grid served as an icebreaker to help nurse participants talk about their experiences as embodied in good quality nursing care. Nurses’ perceived role-trait discrepancies were divided into three categories: over-performed, least discrepant, and under-performed. The top over-performed trait was “obedience.” Discussion: Patients’ most valued traits (“patient,” “responsible,” “cautious,” and “considerate”) were perceived by participants as ideally important but were under-performed, perhaps due to experienced nurses’ loss of idealism. Conclusion: Good nurse trait card play with Q-sort grid elicited nurses’ self-dialogue and revealed evidence of the incongruity between nurses’ perceived ideal and actual importance of traits. The top over-performed trait, “obedience,” deserves more study.
Shu-Yueh Chen is in the Department of Nursing, Hung Kuang University, Taichung City, Taiwan, R.O.C.
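The discrepancy analysis described above can be illustrated with a small sketch. This is purely a hypothetical illustration of a per-trait ideal-minus-actual gap score with invented traits and ratings, not the authors' analysis code:

# Hypothetical illustration: invented traits and Q-sort-style ratings,
# showing the ideal-minus-actual gap described in the abstract.
ideal  = {"patient": 4, "responsible": 4, "cautious": 3, "obedience": 1}
actual = {"patient": 2, "responsible": 3, "cautious": 2, "obedience": 4}

gaps = {trait: ideal[trait] - actual[trait] for trait in ideal}
for trait, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    if gap > 0:
        label = "under-performed"
    elif gap < 0:
        label = "over-performed"
    else:
        label = "least discrepant"
    print("{}: {} (gap {:+d})".format(trait, label, gap))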
|
# -*- coding: utf-8 -*-
'''
Created on 10/01/2012
@author: lm / @fsalamero
'''
import pygame
import sys
import time
from pygame.locals import *
#------------------------------------------------------------------#
# Pygame initialization
#------------------------------------------------------------------#
#pygame.mixer.pre_init(44100,16,2,1024)
pygame.init()
#------------------------------------------------------------------#
# Variable definitions
#------------------------------------------------------------------#
fps = 60
tiempo = 0
BLANCO = (255,255,255)
AMARILLO = (255,255,0)
pelotaX = 50
pelotaY = 50
pelotaDX = 5
pelotaDY = 5
raquetaX = 50
raquetaY = 250
raquetaDY = 5
raqueta2X = 740
raqueta2Y = 250
raqueta2DY = 5
puntos1 = 0
puntos2 = 0
tipoLetra = pygame.font.Font('data/Grandezza.ttf', 96)
tipoLetra2 = pygame.font.Font('data/Grandezza.ttf', 24)
tipoLetra3 = pygame.font.Font('data/Grandezza.ttf', 48)
sonidoPelota = pygame.mixer.Sound('data/sonidoPelota.wav')
sonidoRaqueta = pygame.mixer.Sound('data/sonidoRaqueta.wav')
sonidoError = pygame.mixer.Sound('data/sonidoError.wav')
sonidoAplausos = pygame.mixer.Sound('data/sonidoAplausos.wav')
sonidoLaser = pygame.mixer.Sound('data/sonidoLaser.wav')
imagenDeFondo = 'data/pingpong.jpg'
#------------------------------------------------------------------#
# Create the game screen (SURFACE)
#------------------------------------------------------------------#
visor = pygame.display.set_mode((800,600),FULLSCREEN)
#------------------------------------------------------------------#
# Program functions
#------------------------------------------------------------------#
def pausa():
    # This function waits until a key is pressed
esperar = True
while esperar:
for evento in pygame.event.get():
if evento.type == KEYDOWN:
esperar = False
sonidoLaser.play()
def mostrarIntro():
    # Show the intro screen and wait for a key press
fondo = pygame.image.load(imagenDeFondo).convert()
visor.blit(fondo, (0,0))
mensaje1 = 'PONG'
texto1 = tipoLetra.render(mensaje1, True, AMARILLO)
    mensaje2 = 'Press any key to start'
texto2 = tipoLetra2.render(mensaje2, True, BLANCO)
visor.blit(texto1, (50,100,200,100))
visor.blit(texto2, (235,340,350,30))
pygame.display.update()
pausa()
def dibujarJuego():
    # Draw the table, the ball, the paddles and the scoreboards
    # First clear the screen to black
visor.fill((0,0,0))
    # Draw the ball and the paddles
pygame.draw.circle(visor, BLANCO, (pelotaX,pelotaY),4,0)
pygame.draw.rect(visor, BLANCO, (raquetaX,raquetaY,10,50))
pygame.draw.rect(visor, BLANCO, (raqueta2X,raqueta2Y,10,50))
    # Draw the net
for i in range(10):
pygame.draw.rect(visor, BLANCO, (398,10+60*i,4,30))
    # Draw the scoreboards
marcador1 = tipoLetra.render(str(puntos1), True, BLANCO)
marcador2 = tipoLetra.render(str(puntos2), True, BLANCO)
visor.blit(marcador1, (300,20,50,50))
visor.blit(marcador2, (450,20,50,50))
    # Finally, flip everything onto the screen
pygame.display.update()
def decirGanador():
    # Announce which player has won and wait
sonidoAplausos.play()
if puntos1 == 11:
        ganador = 'Player 1'
else:
        ganador = 'Player 2'
    mensaje = 'Winner: ' + ganador
texto = tipoLetra3.render(mensaje, True, AMARILLO)
visor.blit(texto, (110,350,600,100))
pygame.display.update()
pausa()
#------------------------------------------------------------------#
# Main body of the game
#------------------------------------------------------------------#
pygame.mouse.set_visible(False)
mostrarIntro()
time.sleep(0.75)
while True:
#----------------------------------------------------------------#
    # Manage the game speed
#----------------------------------------------------------------#
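    # Cap the loop at 'fps' frames per second: each frame lasts 1000/fps ms,
    # so keep looping (busy-waiting) until that much time has elapsed.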
if pygame.time.get_ticks()-tiempo < 1000/fps:
continue
tiempo = pygame.time.get_ticks()
#----------------------------------------------------------------#
    # Event loop: check whether the player wants to quit
#----------------------------------------------------------------#
for evento in pygame.event.get():
if evento.type == KEYDOWN:
if evento.key == K_ESCAPE:
pygame.quit()
sys.exit()
#----------------------------------------------------------------#
    # Move the ball
#----------------------------------------------------------------#
    # First, watch for situations where the ball must change direction
    # Check for a hit by player 1's paddle
diff1 = pelotaY-raquetaY
if pelotaX == raquetaX + 10 and diff1 >= 0 and diff1 <= 50:
pelotaDX = -pelotaDX
sonidoRaqueta.play()
    # Check for a hit by player 2's paddle
diff2 = pelotaY-raqueta2Y
if pelotaX == raqueta2X and diff2 >= 0 and diff2 <= 50:
pelotaDX = -pelotaDX
sonidoRaqueta.play()
    # Check whether the ball has reached the edge of the screen
if pelotaY < 5 or pelotaY > 595:
pelotaDY = -pelotaDY
sonidoPelota.play()
    # Move the ball
pelotaX += pelotaDX
pelotaY += pelotaDY
#----------------------------------------------------------------#
    # Move the paddles
#----------------------------------------------------------------#
    # Check whether player 1 is moving the paddle
teclasPulsadas = pygame.key.get_pressed()
if teclasPulsadas[K_a]:
raquetaY += raquetaDY
if teclasPulsadas[K_q]:
raquetaY -= raquetaDY
    # Keep the paddle from leaving the screen
if raquetaY < 0:
raquetaY = 0
elif raquetaY > 550:
raquetaY = 550
    # Now do the same for player 2
if teclasPulsadas[K_l]:
raqueta2Y += raqueta2DY
if teclasPulsadas[K_p]:
raqueta2Y -= raqueta2DY
if raqueta2Y < 0:
raqueta2Y = 0
elif raqueta2Y > 550:
raqueta2Y = 550
#----------------------------------------------------------------#
    # Check whether a point has been scored
#----------------------------------------------------------------#
    # First, check whether the ball has reached the edge
if pelotaX > 800 or pelotaX < 0:
        # If so, reset the positions and update the score
sonidoError.play()
time.sleep(1)
raquetaY = 250
raqueta2Y = 250
if pelotaX > 800:
puntos1 = puntos1 + 1
else:
puntos2 = puntos2 + 1
pelotaX = 400
pelotaDX = -pelotaDX
#----------------------------------------------------------------#
    # Draw the game on screen
#----------------------------------------------------------------#
dibujarJuego()
#---------------------------------------------------------------#
    # Check whether the game is over
#---------------------------------------------------------------#
if puntos1 == 11 or puntos2 ==11:
decirGanador()
puntos1 = 0
puntos2 = 0
visor.fill((0,0,0))
mostrarIntro()
|
The problem is even greater for people with weakened immune systems, as toxic mold can be poisonous. Our company employs the most effective approach to mold testing in Kingwood, WV.
You can count on our mold testing team in Kingwood, WV to deliver on-call services any day of the week. You can take comfort in knowing that your health is our most important concern. We have invested our funds and experience in the most advanced testing and remediation equipment to identify and remove toxic mold effectively and precisely from your residential or business structures.
Our team not only kills the mold; we also identify the source and take the most appropriate precautionary measures to prevent reinfestation by mold spores. This is done by conducting thorough mold testing in Kingwood, WV. We also look for ventilation or leak issues in the attic that might be adding to the moisture problems. When we find a mold growth condition, we will offer you the most cost-effective mold remediation solutions.
If you have any questions or would like a price quote for mold testing in Kingwood, WV, give us a call at 888-548-5870 today.
With our equipped and qualified technicians, you will surely be impressed with the way they handle your potential mold problem. They bring the knowledge, eagerness, and expertise necessary for mold testing in Kingwood, WV. In fact, each of our employees undergoes strict training to ensure that every customer is pleased with the quality and service that we deliver. All our employees are guided by the principle that we expect nothing less than 100% customer satisfaction on every job.
For more than two decades, our company has earned a reputation for integrity, ethics, honesty, and top quality customer service. This has led us to rise above our competitors, making us one of the leading mold testing companies in Kingwood, WV. We have a long list of satisfied clients, and they include homeowners, general contractors, property management companies, hospitals, attorneys, real estate developers, realtors, insurance companies, public and private schools and universities.
We have a group of mold testing employees and associates in Kingwood, WV. This means that we have the resources, expertise, manpower and equipment to assist you with any of your mold testing needs and to complete the job promptly and efficiently. If you’re seeking fast, immediate results, we can point you in the right direction.
Let us assist you with your mold testing needs in Kingwood, WV; request a free quote at 888-548-5870.
|
from datetime import date
from django.conf import settings
from django.contrib.auth.models import User
from django.contrib.sitemaps import Sitemap
from django.contrib.sites.models import Site
from django.core.exceptions import ImproperlyConfigured
from django.test import TestCase
from django.utils.unittest import skipUnless
from django.utils.formats import localize
from django.utils.translation import activate, deactivate
class SitemapTests(TestCase):
urls = 'django.contrib.sitemaps.tests.urls'
def setUp(self):
self.old_USE_L10N = settings.USE_L10N
self.old_Site_meta_installed = Site._meta.installed
# Create a user that will double as sitemap content
User.objects.create_user('testuser', '[email protected]', 's3krit')
def tearDown(self):
settings.USE_L10N = self.old_USE_L10N
Site._meta.installed = self.old_Site_meta_installed
def test_simple_sitemap(self):
"A simple sitemap can be rendered"
# Retrieve the sitemap.
response = self.client.get('/simple/sitemap.xml')
# Check for all the important bits:
        self.assertEqual(response.content, """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url><loc>http://example.com/location/</loc><lastmod>%s</lastmod><changefreq>never</changefreq><priority>0.5</priority></url>
</urlset>
""" % date.today().strftime('%Y-%m-%d'))
def test_localized_priority(self):
"The priority value should not be localized (Refs #14164)"
# Localization should be active
settings.USE_L10N = True
activate('fr')
self.assertEqual(u'0,3', localize(0.3))
# Retrieve the sitemap. Check that priorities
# haven't been rendered in localized format
response = self.client.get('/simple/sitemap.xml')
self.assertContains(response, '<priority>0.5</priority>')
self.assertContains(response, '<lastmod>%s</lastmod>' % date.today().strftime('%Y-%m-%d'))
deactivate()
def test_generic_sitemap(self):
"A minimal generic sitemap can be rendered"
# Retrieve the sitemap.
response = self.client.get('/generic/sitemap.xml')
expected = ''
for username in User.objects.values_list("username", flat=True):
expected += "<url><loc>http://example.com/accounts/%s/</loc></url>" %username
# Check for all the important bits:
        self.assertEqual(response.content, """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
%s
</urlset>
""" %expected)
@skipUnless("django.contrib.flatpages" in settings.INSTALLED_APPS, "django.contrib.flatpages app not installed.")
def test_flatpage_sitemap(self):
"Basic FlatPage sitemap test"
# Import FlatPage inside the test so that when django.contrib.flatpages
# is not installed we don't get problems trying to delete Site
# objects (FlatPage has an M2M to Site, Site.delete() tries to
        # delete related objects, but the M2M table doesn't exist).
from django.contrib.flatpages.models import FlatPage
public = FlatPage.objects.create(
url=u'/public/',
title=u'Public Page',
enable_comments=True,
registration_required=False,
)
public.sites.add(settings.SITE_ID)
private = FlatPage.objects.create(
url=u'/private/',
title=u'Private Page',
enable_comments=True,
registration_required=True
)
private.sites.add(settings.SITE_ID)
response = self.client.get('/flatpages/sitemap.xml')
# Public flatpage should be in the sitemap
self.assertContains(response, '<loc>http://example.com%s</loc>' % public.url)
# Private flatpage should not be in the sitemap
self.assertNotContains(response, '<loc>http://example.com%s</loc>' % private.url)
def test_requestsite_sitemap(self):
# Make sure hitting the flatpages sitemap without the sites framework
# installed doesn't raise an exception
Site._meta.installed = False
# Retrieve the sitemap.
response = self.client.get('/simple/sitemap.xml')
# Check for all the important bits:
        self.assertEqual(response.content, """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<url><loc>http://testserver/location/</loc><lastmod>%s</lastmod><changefreq>never</changefreq><priority>0.5</priority></url>
</urlset>
""" % date.today().strftime('%Y-%m-%d'))
def test_sitemap_get_urls_no_site_1(self):
"""
Check we get ImproperlyConfigured if we don't pass a site object to
Sitemap.get_urls and no Site objects exist
"""
Site._meta.installed = True
Site.objects.all().delete()
self.assertRaises(ImproperlyConfigured, Sitemap().get_urls)
def test_sitemap_get_urls_no_site_2(self):
"""
Check we get ImproperlyConfigured when we don't pass a site object to
Sitemap.get_urls if Site objects exists, but the sites framework is not
actually installed.
"""
Site.objects.get_current()
Site._meta.installed = False
self.assertRaises(ImproperlyConfigured, Sitemap().get_urls)
|
The Hamilton County Unit 10 School District provides resources and education to more than 1,200 students. The Unit 10 office is located at 117 North Washington Street, McLeansboro, IL 62859. For more information on each of the schools within the district, click the links below.
|
from vtk import *
from tonic import camera as tc
def update_camera(renderer, cameraData):
camera = renderer.GetActiveCamera()
camera.SetPosition(cameraData['position'])
camera.SetFocalPoint(cameraData['focalPoint'])
camera.SetViewUp(cameraData['viewUp'])
def create_spherical_camera(renderer, dataHandler, phiValues, thetaValues):
camera = renderer.GetActiveCamera()
return tc.SphericalCamera(dataHandler, camera.GetFocalPoint(), camera.GetPosition(), camera.GetViewUp(), phiValues, thetaValues)
def create_cylindrical_camera(renderer, dataHandler, phiValues, translationValues):
camera = renderer.GetActiveCamera()
return tc.CylindricalCamera(dataHandler, camera.GetFocalPoint(), camera.GetPosition(), camera.GetViewUp(), phiValues, translationValues)
class CaptureRenderWindow(object):
def __init__(self, magnification=1):
self.windowToImage = vtkWindowToImageFilter()
self.windowToImage.SetMagnification(magnification)
self.windowToImage.SetInputBufferTypeToRGB()
self.windowToImage.ReadFrontBufferOn()
self.writer = None
def SetRenderWindow(self, renderWindow):
self.windowToImage.SetInput(renderWindow)
def SetFormat(self, mimeType):
if mimeType == 'image/png':
self.writer = vtkPNGWriter()
self.writer.SetInputConnection(self.windowToImage.GetOutputPort())
elif mimeType == 'image/jpg':
self.writer = vtkJPEGWriter()
self.writer.SetInputConnection(self.windowToImage.GetOutputPort())
def writeImage(self, path):
if self.writer:
self.windowToImage.Modified()
self.windowToImage.Update()
self.writer.SetFileName(path)
self.writer.Write()
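A minimal usage sketch for the CaptureRenderWindow helper above; renderWindow stands in for an already-configured vtkRenderWindow, and the output filename is illustrative:

# Hypothetical usage; 'renderWindow' is an existing vtkRenderWindow.
capture = CaptureRenderWindow(magnification=2)
capture.SetRenderWindow(renderWindow)
capture.SetFormat('image/png')
renderWindow.Render()                 # make sure the frame buffer is current
capture.writeImage('frame-0000.png')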
|
The sun is starting to show its face again, and for the first time this year I didn't need my coat with me today. As summer approaches, I am starting to get excited about getting our tent out of storage and heading into the countryside to spend the evening wrapped in blankets around a little camp fire.
Always at this time of year I begin to look ahead to the summer and dream about the sunshine, the smell of freshly cut grass, or that delicious fresh scent right after a summer rain storm. And who can resist perfectly ripe avocados mashed on toast with fresh tomatoes? It's one of my favourite snacks when it's warm, that or just impatiently spooning the avocado straight into my mouth.
Hurry up summer! Can't you tell I'm desperate?!?!
Margate Gen-Do!! i.e. My Sister's Hen Do in Margate!
© FABSICKLE. All rights reserved.
|
"""Test deCONZ remote events."""
from copy import deepcopy
from homeassistant.components.deconz.deconz_event import CONF_DECONZ_EVENT
from .test_gateway import DECONZ_WEB_REQUEST, setup_deconz_integration
from tests.common import async_capture_events
SENSORS = {
"1": {
"id": "Switch 1 id",
"name": "Switch 1",
"type": "ZHASwitch",
"state": {"buttonevent": 1000},
"config": {},
"uniqueid": "00:00:00:00:00:00:00:01-00",
},
"2": {
"id": "Switch 2 id",
"name": "Switch 2",
"type": "ZHASwitch",
"state": {"buttonevent": 1000},
"config": {"battery": 100},
"uniqueid": "00:00:00:00:00:00:00:02-00",
},
"3": {
"id": "Switch 3 id",
"name": "Switch 3",
"type": "ZHASwitch",
"state": {"buttonevent": 1000, "gesture": 1},
"config": {"battery": 100},
"uniqueid": "00:00:00:00:00:00:00:03-00",
},
"4": {
"id": "Switch 4 id",
"name": "Switch 4",
"type": "ZHASwitch",
"state": {"buttonevent": 1000, "gesture": 1},
"config": {"battery": 100},
"uniqueid": "00:00:00:00:00:00:00:04-00",
},
}
async def test_deconz_events(hass):
"""Test successful creation of deconz events."""
data = deepcopy(DECONZ_WEB_REQUEST)
data["sensors"] = deepcopy(SENSORS)
gateway = await setup_deconz_integration(hass, get_state_response=data)
assert "sensor.switch_1" not in gateway.deconz_ids
assert "sensor.switch_1_battery_level" not in gateway.deconz_ids
assert "sensor.switch_2" not in gateway.deconz_ids
assert "sensor.switch_2_battery_level" in gateway.deconz_ids
assert len(hass.states.async_all()) == 3
assert len(gateway.events) == 4
switch_1 = hass.states.get("sensor.switch_1")
assert switch_1 is None
switch_1_battery_level = hass.states.get("sensor.switch_1_battery_level")
assert switch_1_battery_level is None
switch_2 = hass.states.get("sensor.switch_2")
assert switch_2 is None
switch_2_battery_level = hass.states.get("sensor.switch_2_battery_level")
assert switch_2_battery_level.state == "100"
events = async_capture_events(hass, CONF_DECONZ_EVENT)
gateway.api.sensors["1"].update({"state": {"buttonevent": 2000}})
await hass.async_block_till_done()
assert len(events) == 1
assert events[0].data == {
"id": "switch_1",
"unique_id": "00:00:00:00:00:00:00:01",
"event": 2000,
}
gateway.api.sensors["3"].update({"state": {"buttonevent": 2000}})
await hass.async_block_till_done()
assert len(events) == 2
assert events[1].data == {
"id": "switch_3",
"unique_id": "00:00:00:00:00:00:00:03",
"event": 2000,
"gesture": 1,
}
gateway.api.sensors["4"].update({"state": {"gesture": 0}})
await hass.async_block_till_done()
assert len(events) == 3
assert events[2].data == {
"id": "switch_4",
"unique_id": "00:00:00:00:00:00:00:04",
"event": 1000,
"gesture": 0,
}
await gateway.async_reset()
assert len(hass.states.async_all()) == 0
assert len(gateway.events) == 0
|
Synopsis: An affluent urban professional from West Amman and a young Black village farmer by the Dead Sea. Both share history and roots. Both find themselves unwilling participants in another cycle of history that pits the prosperity of one against the survival of another.
|
import os
import json
import boto3 as boto
from datetime import datetime
def printJson(jsonObject, label):
"""Pretty print JSON document with indentation."""
systime = str(datetime.now())
print("********************************* {} *********************************".format(label))
print("--------------------------------- {} ---------------------------------".format(systime))
print(json.dumps(jsonObject, indent=4, sort_keys=True))
print("----------------------------------------------------------------------")
def getTeam(team_id):
"""Given a team id, lookup the team data from DynamoDB."""
table_name = os.environ['SLACK_TEAMS']
dynamodb = boto.resource('dynamodb')
table = dynamodb.Table(table_name)
table_key = {'team_id': team_id}
result = table.get_item(Key=table_key)['Item']
return result
def getAccessToken(team):
"""Extract the access token from the team data given."""
access_token = None
if team:
access_token = team['access_token']
return access_token
def process(event, context):
"""
Process the incoming Slack event.
If the incoming event is a file_share event and the file shared is an image,
lookup the team data from the database based on the team id in the event.
Enrich the event with a new object to include the access_token and other
details and return the information. StepFunction will take this enriched
event to the next layer of Lambda functions to process.
"""
# Event comes as a JSON. No need to convert.
body = event
printJson(body, "process_events Input")
team_id = body['team_id']
team = getTeam(team_id)
access_token = getAccessToken(team)
slack_event = body['event']
printJson(slack_event, "slack_event")
slack_event_type = slack_event['type']
slack_event_channel = slack_event['channel']
    slack_event_subtype = slack_event.get('subtype')  # not every message event has a subtype
slack_event_ts = slack_event['ts']
    slack_event_username = slack_event.get('username')  # may be absent on some events
slack_event_file = None
celebrity_detected = False
celebs = None
if (slack_event_subtype) and (slack_event_type == 'message') and (slack_event_subtype == 'file_share'):
slack_event_file = slack_event['file']
file_type = slack_event_file['filetype']
if file_type == 'jpg' or file_type == 'png':
file_url = slack_event_file['url_private']
process_events = {
'team_id': team_id,
'team': team,
'slack_access_token': access_token,
'slack_event_type': slack_event_type,
'slack_event_channel': slack_event_channel,
'slack_event_subtype': slack_event_subtype,
'slack_event_ts': slack_event_ts,
'slack_event_username': slack_event_username,
'slack_event_filetype': file_type,
'slack_event_file_url': file_url
}
event['process_events'] = process_events
printJson(event, "Return this event downstream")
print("****** Done with process_events")
return event
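For reference, here is a minimal sketch of the event shape that process() reads; the field values below are invented, and real Slack Events API payloads carry many more fields:

# Hypothetical incoming event, trimmed to the fields process() reads.
sample_event = {
    "team_id": "T0001",
    "event": {
        "type": "message",
        "subtype": "file_share",
        "channel": "C0001",
        "ts": "1530000000.000001",
        "username": "someuser",
        "file": {"filetype": "jpg", "url_private": "https://files.slack.com/..."},
    },
}
# process(sample_event, None) would enrich this with a 'process_events'
# object (access token, channel, file URL, ...) for the next Step Functions
# state, assuming the SLACK_TEAMS table and DynamoDB are reachable.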
|
Former classmates at Sarah Lawrence College, Amy and Devi have been discussing films and feminism together for decades, frequently over craft cocktails while wearing berets. With this blog we endeavor to share our ideas (which do not always agree) and become more actively participatory in public feminist discourse. We hope this effort grows in time to include more diverse voices and opinions as we join our feminist peers in the fight to move Beyond the Bechdel. We welcome you to join us! New posts every Tuesday and Thursday.
A former ballerina, published fiction author and sometime film instructor at the University of Notre Dame, I’m a proud alumna of the American Film Institute’s Directing Workshop for Women. When I’m not writing and directing films, I chase monkeys around the world with my anthropologist partner. For more info about me (and my work), click HERE.
My posts will take a pseudo-academic approach to film and feminism, exploring a series of related topics much as I conduct my college courses, drawing from history, sociology, anthropology, psychology, gender studies and film theory.
Amy has worked as a criminal defense attorney in the Bronx and as a prosecutor of members of the service for the NYPD. As a card-carrying member of the Resistance, she provides pro bono legal work for immigrants.
To change the world one film review at a time. I will explore how perceived race and gender issues impact film and the Hollywood film system.
|
from win32com.adsi import adsi
from win32com.adsi.adsicon import *
from win32com.adsi import adsicon
import pythoncom, pywintypes, win32security
options = None # set to optparse options object
ADsTypeNameMap = {}
def getADsTypeName(type_val):
# convert integer type to the 'typename' as known in the headerfiles.
if not ADsTypeNameMap:
for n, v in adsicon.__dict__.items():
if n.startswith("ADSTYPE_"):
ADsTypeNameMap[v] = n
return ADsTypeNameMap.get(type_val, hex(type_val))
def _guid_from_buffer(b):
return pywintypes.IID(b, True)
def _sid_from_buffer(b):
return str(pywintypes.SID(b))
_null_converter = lambda x: x
converters = {
'objectGUID' : _guid_from_buffer,
'objectSid' : _sid_from_buffer,
'instanceType' : getADsTypeName,
}
def log(level, msg, *args):
if options.verbose >= level:
print("log:", msg % args)
def getGC():
cont = adsi.ADsOpenObject("GC:", options.user, options.password, 0, adsi.IID_IADsContainer)
enum = adsi.ADsBuildEnumerator(cont)
# Only 1 child of the global catalog.
for e in enum:
gc = e.QueryInterface(adsi.IID_IDirectorySearch)
return gc
return None
def print_attribute(col_data):
prop_name, prop_type, values = col_data
if values is not None:
log(2, "property '%s' has type '%s'", prop_name, getADsTypeName(prop_type))
value = [converters.get(prop_name, _null_converter)(v[0]) for v in values]
if len(value) == 1:
value = value[0]
print(" %s=%r" % (prop_name, value))
else:
print(" %s is None" % (prop_name,))
def search():
gc = getGC()
if gc is None:
log(0, "Can't find the global catalog")
return
prefs = [(ADS_SEARCHPREF_SEARCH_SCOPE, (ADS_SCOPE_SUBTREE,))]
hr, statuses = gc.SetSearchPreference(prefs)
log(3, "SetSearchPreference returned %d/%r", hr, statuses)
if options.attributes:
attributes = options.attributes.split(",")
else:
attributes = None
h = gc.ExecuteSearch(options.filter, attributes)
hr = gc.GetNextRow(h)
while hr != S_ADS_NOMORE_ROWS:
print("-- new row --")
if attributes is None:
# Loop over all columns returned
while 1:
col_name = gc.GetNextColumnName(h)
if col_name is None:
break
data = gc.GetColumn(h, col_name)
print_attribute(data)
else:
# loop over attributes specified.
for a in attributes:
try:
data = gc.GetColumn(h, a)
print_attribute(data)
except adsi.error as details:
                    if details.args[0] != E_ADS_COLUMN_NOT_SET:
raise
print_attribute( (a, None, None) )
hr = gc.GetNextRow(h)
gc.CloseSearchHandle(h)
def main():
global options
from optparse import OptionParser
parser = OptionParser()
parser.add_option("-f", "--file", dest="filename",
help="write report to FILE", metavar="FILE")
parser.add_option("-v", "--verbose",
action="count", default=1,
help="increase verbosity of output")
parser.add_option("-q", "--quiet",
action="store_true",
help="suppress output messages")
parser.add_option("-U", "--user",
help="specify the username used to connect")
parser.add_option("-P", "--password",
help="specify the password used to connect")
parser.add_option("", "--filter",
default = "(&(objectCategory=person)(objectClass=User))",
help="specify the search filter")
parser.add_option("", "--attributes",
help="comma sep'd list of attribute names to print")
options, args = parser.parse_args()
if options.quiet:
if options.verbose != 1:
parser.error("Can not use '--verbose' and '--quiet'")
options.verbose = 0
if args:
parser.error("You need not specify args")
search()
if __name__=='__main__':
main()
|
Welcome to the home page of the FP7-funded project SEMEP.
SEMEP, the Search for ElectroMagnetic Earthquake Precursors, was an EU FP7-funded project in response to the FP7-SPACE-2010-1 call for collaborative projects between the EU and Russia. The project began on December 1st, 2010 and ran for two years, until November 30th, 2012.
The search for earthquake-related phenomena using satellite observations has an evident advantage: the availability of planet-wide observations makes it possible to perform global surveys of the ionosphere, looking for electromagnetic perturbation effects that arise due to seismic activity. This information could help to pinpoint regions of strong seismic activity, determine their extent, and provide indicators and inputs for the short-term forecast of the occurrence of an earthquake.
However, ground-based measurements are also necessary when investigating the relationship between ionospheric perturbations and seismic and volcanic activity, because they are the only way to connect the physical processes in the crust with the signals recorded by the satellite.
Such phenomena are of great interest because they have been observed to begin several days before the main shock. As a consequence they could be used, in conjunction with other precursory effects, to provide short-term forecasts for the occurrence of large earthquakes.
|
from django.contrib.gis.db import models
from django.contrib.gis.gdal import DataSource
from django.core.exceptions import ValidationError
from geomaps.validation import GdbValidator
from geomaps.dataloader import GdbLoader
from geomaps.postprocess import StandardLithologyProcessor, GeologicEventProcessor
from gsconfig.layers import LayerGenerator
from gsmlp.generators import GeologicUnitViewGenerator
from vocab.parser import updateVocabulary
# GeoMap is a class that represents the upload of a single NCGMP File Geodatabase
class GeoMap(models.Model):
class Meta:
db_table = 'geomaps'
verbose_name = 'Geologic Map'
name = models.CharField(max_length=50)
title = models.CharField(max_length=200)
fgdb_path = models.CharField(max_length=200)
map_type = models.CharField(max_length=200, choices=(('Direct observation', 'New mapping'), ('Compilation', 'Compilation')))
metadata_url = models.URLField(blank=True)
is_loaded = models.BooleanField(default=False)
def __unicode__(self):
return self.name
def clean(self):
try:
self.dataSource = DataSource(self.fgdb_path)
        except Exception:
raise ValidationError(self.fgdb_path + " could not be opened by GDAL")
else:
validator = GdbValidator(self.dataSource)
valid = validator.isValid()
if not valid:
err = ValidationError(validator.validationMessage())
err.asJson = validator.logs.asJson()
raise err
def load(self):
loader = GdbLoader(self)
loader.load()
self.is_loaded = True
self.save()
def populateRepresentativeValues(self):
for dmu in self.descriptionofmapunits_set.all():
dmu.generateRepresentativeValues()
def createGsmlp(self):
geologicUnitViewGen = GeologicUnitViewGenerator(self)
geologicUnitViewGen.buildGeologicUnitViews()
def createLayers(self):
layerGen = LayerGenerator(self)
return layerGen.createNewLayers()
# The following are classes from the GeoSciML Portrayal Schema
class GeologicUnitView(models.Model):
class Meta:
db_table = 'geologicunitview'
verbose_name = "GeologicUnitView"
owningmap = models.ForeignKey('GeoMap')
identifier = models.CharField(max_length=200, unique=True)
name = models.CharField(max_length=200, blank=True)
description = models.TextField()
geologicUnitType = models.CharField(max_length=200, blank=True)
rank = models.CharField(max_length=200, blank=True)
lithology = models.CharField(max_length=200, blank=True)
geologicHistory = models.TextField()
observationMethod = models.CharField(max_length=200, blank=True)
positionalAccuracy = models.CharField(max_length=200, blank=True)
source = models.CharField(max_length=200, blank=True)
geologicUnitType_uri = models.CharField(max_length=200)
representativeLithology_uri = models.CharField(max_length=200)
representativeAge_uri = models.CharField(max_length=200)
representativeOlderAge_uri = models.CharField(max_length=200)
representativeYoungerAge_uri = models.CharField(max_length=200)
specification_uri = models.CharField(max_length=200)
metadata_uri = models.CharField(max_length=200)
genericSymbolizer = models.CharField(max_length=200, blank=True)
shape = models.MultiPolygonField(srid=4326)
objects = models.GeoManager()
def __unicode__(self):
return self.identifier
# The following are classes that represent tables from an NCGMP Database
# Each class contains a ForeignKey to the GeoMap Class, which is the upload
# that the feature came into the system with
class MapUnitPolys(models.Model):
class Meta:
db_table = 'mapunitpolys'
verbose_name = 'Map Unit Polygon'
verbose_name_plural = 'MapUnitPolys'
owningmap = models.ForeignKey('GeoMap')
mapunitpolys_id = models.CharField(max_length=200, unique=True)
mapunit = models.ForeignKey('DescriptionOfMapUnits', db_column='mapunit')
identityconfidence = models.CharField(max_length=200)
label = models.CharField(max_length=200, blank=True)
symbol = models.CharField(max_length=200, blank=True)
notes = models.TextField(blank=True)
datasourceid = models.ForeignKey('DataSources', db_column='datasourceid', to_field='datasources_id')
shape = models.MultiPolygonField(srid=4326)
objects = models.GeoManager()
def __unicode__(self):
return self.mapunitpolys_id
class ContactsAndFaults(models.Model):
class Meta:
db_table = 'contactsandfaults'
verbose_name = 'Contact or Fault'
verbose_name_plural = 'ContactsAndFaults'
owningmap = models.ForeignKey('GeoMap')
contactsandfaults_id = models.CharField(max_length=200, unique=True)
type = models.CharField(max_length=200)
isconcealed = models.IntegerField()
existenceconfidence = models.CharField(max_length=200)
identityconfidence = models.CharField(max_length=200)
locationconfidencemeters = models.FloatField()
label = models.CharField(max_length=200, blank=True)
datasourceid = models.ForeignKey('DataSources', db_column='datasourceid', to_field='datasources_id')
notes = models.TextField(blank=True)
symbol = models.IntegerField()
shape = models.MultiLineStringField(srid=4326)
objects = models.GeoManager()
def __unicode__(self):
return self.contactsandfaults_id
class DescriptionOfMapUnits(models.Model):
class Meta:
db_table = 'descriptionofmapunits'
verbose_name = 'Description of a Map Unit'
verbose_name_plural = 'DescriptionOfMapUnits'
ordering = ['hierarchykey']
owningmap = models.ForeignKey('GeoMap')
descriptionofmapunits_id = models.CharField(max_length=200, unique=True)
mapunit = models.CharField(max_length=200)
label = models.CharField(max_length=200)
name = models.CharField(max_length=200)
fullname = models.CharField(max_length=200)
age = models.CharField(max_length=200, blank=True)
description = models.TextField()
hierarchykey = models.CharField(max_length=200)
paragraphstyle = models.CharField(max_length=200, blank=True)
areafillrgb = models.CharField(max_length=200)
areafillpatterndescription = models.CharField(max_length=200, blank=True)
descriptionsourceid = models.ForeignKey('DataSources', db_column='descriptionsourceid', to_field='datasources_id')
generallithologyterm = models.CharField(max_length=200, blank=True)
generallithologyconfidence = models.CharField(max_length=200, blank=True)
objects = models.GeoManager()
def __unicode__(self):
return self.name
def representativeValue(self):
repValues = self.representativevalue_set.all()
if repValues.count() > 0: return repValues[0]
else: return RepresentativeValue.objects.create(owningmap=self.owningmap, mapunit=self)
def generateRepresentativeValues(self):
StandardLithologyProcessor(self).guessRepresentativeLithology()
GeologicEventProcessor(self).guessRepresentativeAge()
def preferredAge(self):
extAttrIds = ExtendedAttributes.objects.filter(ownerid=self.descriptionofmapunits_id, property="preferredAge").values_list("valuelinkid", flat=True)
return GeologicEvents.objects.filter(geologicevents_id__in=extAttrIds)
def geologicHistory(self):
extAttrIds = ExtendedAttributes.objects.filter(ownerid=self.descriptionofmapunits_id).exclude(property="preferredAge").values_list("valuelinkid", flat=True)
return GeologicEvents.objects.filter(geologicevents_id__in=extAttrIds)
class DataSources(models.Model):
class Meta:
db_table = 'datasources'
verbose_name = 'Data Source'
verbose_name_plural = 'DataSources'
ordering = ['source']
owningmap = models.ForeignKey('GeoMap')
datasources_id = models.CharField(max_length=200, unique=True)
notes = models.TextField()
source = models.CharField(max_length=200)
objects = models.GeoManager()
def __unicode__(self):
return self.source
class Glossary(models.Model):
class Meta:
db_table = 'glossary'
verbose_name_plural = 'Glossary Entries'
ordering = ['term']
owningmap = models.ForeignKey('GeoMap')
glossary_id = models.CharField(max_length=200, unique=True)
term = models.CharField(max_length=200)
definition = models.CharField(max_length=200)
    definitionsourceid = models.ForeignKey('DataSources', db_column='definitionsourceid', to_field='datasources_id')
objects = models.GeoManager()
def __unicode__(self):
return self.term
class StandardLithology(models.Model):
class Meta:
db_table = 'standardlithology'
verbose_name_plural = 'Standard Lithology'
owningmap = models.ForeignKey('GeoMap')
standardlithology_id = models.CharField(max_length=200, unique=True)
mapunit = models.ForeignKey('descriptionofmapunits', db_column='mapunit')
parttype = models.CharField(max_length=200)
lithology = models.CharField(max_length=200)
proportionterm = models.CharField(max_length=200, blank=True)
    proportionvalue = models.FloatField(blank=True, null=True)
scientificconfidence = models.CharField(max_length=200)
datasourceid = models.ForeignKey('DataSources', db_column='datasourceid', to_field='datasources_id')
objects = models.GeoManager()
def __unicode__(self):
return self.mapunit.mapunit + ': ' + self.lithology
class ExtendedAttributes(models.Model):
class Meta:
db_table = 'extendedattributes'
verbose_name_plural = 'ExtendedAttributes'
ordering = ['ownerid']
owningmap = models.ForeignKey('GeoMap')
extendedattributes_id = models.CharField(max_length=200)
ownertable = models.CharField(max_length=200)
ownerid = models.CharField(max_length=200)
property = models.CharField(max_length=200)
propertyvalue = models.CharField(max_length=200, blank=True)
valuelinkid = models.CharField(max_length=200, blank=True)
qualifier = models.CharField(max_length=200, blank=True)
notes = models.CharField(max_length=200, blank=True)
datasourceid = models.ForeignKey('DataSources', db_column='datasourceid', to_field='datasources_id')
objects = models.GeoManager()
def __unicode__(self):
return self.property + ' for ' + self.ownerid
class GeologicEvents(models.Model):
class Meta:
db_table = 'geologicevents'
verbose_name_plural = 'GeologicEvents'
ordering = ['event']
owningmap = models.ForeignKey('GeoMap')
geologicevents_id = models.CharField(max_length=200)
event = models.CharField(max_length=200)
agedisplay = models.CharField(max_length=200)
ageyoungerterm = models.CharField(max_length=200, blank=True)
ageolderterm = models.CharField(max_length=200, blank=True)
timescale = models.CharField(max_length=200, blank=True)
ageyoungervalue = models.FloatField(blank=True, null=True)
ageoldervalue = models.FloatField(blank=True, null=True)
notes = models.CharField(max_length=200, blank=True)
datasourceid = models.ForeignKey('DataSources', db_column='datasourceid', to_field='datasources_id')
objects = models.GeoManager()
def __unicode__(self):
return self.event + ': ' + self.agedisplay
def mapunits(self):
mapUnits = []
for ext in ExtendedAttributes.objects.filter(valuelinkid=self.geologicevents_id):
try: mapUnits.append(DescriptionOfMapUnits.objects.get(descriptionofmapunits_id=ext.ownerid))
except DescriptionOfMapUnits.DoesNotExist: continue
return mapUnits
def isPreferredAge(self, dmu):
if "preferredAge" in ExtendedAttributes.objects.filter(valuelinkid=self.geologicevents_id, ownerid=dmu.descriptionofmapunits_id).values_list("property", flat=True):
return True
else:
return False
def inGeologicHistory(self, dmu):
if ExtendedAttributes.objects.filter(valuelinkid=self.geologicevents_id, ownerid=dmu.descriptionofmapunits_id).exclude(property="preferredAge").count() > 0:
return True
else:
return False
# The following are vocabulary tables for storing a simplified view of CGI vocabularies
class Vocabulary(models.Model):
class Meta:
db_table = 'cgi_vocabulary'
verbose_name_plural = "Vocabularies"
name = models.CharField(max_length=200)
url = models.URLField()
def __unicode__(self):
return self.name
def update(self):
updateVocabulary(self)
class VocabularyConcept(models.Model):
class Meta:
db_table = 'cgi_vocabularyconcept'
ordering = [ 'label' ]
uri = models.CharField(max_length=200)
label = models.CharField(max_length=200)
definition = models.TextField()
vocabulary = models.ForeignKey("Vocabulary")
def __unicode__(self):
return self.label
class AgeTerm(models.Model):
class Meta:
db_table = 'cgi_ageterm'
ordering = [ 'olderage' ]
uri = models.CharField(max_length=200)
label = models.CharField(max_length=200)
olderage = models.FloatField(verbose_name='Older age')
youngerage = models.FloatField(verbose_name='Younger age')
vocabulary = models.ForeignKey("Vocabulary")
def __unicode__(self):
return self.label
# The following are "helper" tables for generating GSMLP effectively
class RepresentativeValue(models.Model):
class Meta:
db_table = 'representativevalue'
owningmap = models.ForeignKey('GeoMap')
mapunit = models.ForeignKey('descriptionofmapunits', db_column='mapunit')
representativelithology_uri = models.CharField(max_length=200, default="http://www.opengis.net/def/nil/OGC/0/missing")
representativeage_uri = models.CharField(max_length=200, default="http://www.opengis.net/def/nil/OGC/0/missing")
representativeolderage_uri = models.CharField(max_length=200, default="http://www.opengis.net/def/nil/OGC/0/missing")
representativeyoungerage_uri = models.CharField(max_length=200, default="http://www.opengis.net/def/nil/OGC/0/missing")
objects = models.GeoManager()
def __unicode__(self):
return "Representative values for " + self.mapunit.mapunit
|
We all aren’t born to marry; eventually we grow up until one day our family decides to hitch us up, or by god’s grace we fall in love and want to judicially stamp it as marriage. Everything looks planned and set, but like any big project, with time everything starts to change.
A few years down the line, when we talk with our friends, we laugh at this institution called marriage and even ponder why we marry at all, don’t we? Laughing, sharing and venting our frustrations are all fine and natural, but have you ever found yourself in a situation where there is nobody around to act as a guide, or where someone you trusted as a guide proved to be completely fatal for your marriage? Well, today I’m going to tell you two such stories; read carefully, and please bear with my amateur writing.
Sudha loved Ramesh, but her parents forcibly married her to Jagjit. She cried and tried to persuade her mother, who had once been her best friend and guide, but suddenly it felt as if she had no identity beyond being a torchbearer of the family’s pride. Her mother only prepared her to accept her fate and settle down in the life chosen for her. Like any small-town girl she took her first step, but she never knew that she was slipping straight into the darkest of worlds. Jagjit was sadistic, and the night-horrors she suffered were beyond imagination. After the night it first happened, she called her mother the very next day. For her, mother was the best guide and confidante, but her mother didn’t even bother to listen. To her, everything Sudha said felt like an excuse to dishonor the family’s reputation and to rekindle her lost romance. Her mother shut her up, and her suffering continued for a few more years, until one day she collapsed in front of her 5-year-old daughter. That day she realised that she really needed to do something with her life, or else one day her daughter might face similar consequences. She called her mother and said in no uncertain terms that either they come and take her, or she would run away with her daughter.
But before her parents could come, she was brutally assaulted by none other than her own husband; she finally got her chance to leave, but as the living dead! Had she been listened to earlier, her life wouldn’t have turned into a complete catastrophe!
Now there is another story, of Shishir and Seema, both successfully settled in their professional lives and enjoying the best moments with their two-year-old son. They were both ambitious, and there is nothing wrong with being ambitious, but Seema didn’t want to send their son to daycare. She was adamant that one of their parents should come and take care of the kid while they were at work. Shishir’s parents were quite aged, and he never liked the idea of sustaining this arrangement in the longer run, but Seema always had her mother to support her, so he couldn’t argue. Though her mother’s presence gave their child the best possible care and supervision, the couple slowly began missing out on the fun in their lives. Staying in a small apartment, they found it difficult to talk, to sit with each other, and to have couple time, because her mother was always around, making a show of how much she was sacrificing for them. And to please her mother for this immense support, Seema began ignoring her husband and unknowingly sidelining him as well. For Seema, her mother was her best friend, savior and true guide.
If he ever gathered the courage to talk, her mother would always interrupt and take it as an insult and an attempt to dominate her daughter. So with time it became husband versus mother-in-law; the real conversation between husband and wife never happened. Things turned uglier day by day, until one day the guy lost it all! After one such big fight, Seema’s mother sat next to her and asked why she was living with a man who didn’t value her, and why she needed a man like him at all. She said, “If I’m here with you, you don’t need a man just for the namesake of taking care of your child, so better leave him,” especially since Seema had already begun earning more than he did! And within minutes Seema took her decision; after all, her mother was her true well-wisher and guide!
Things could have been better if they had approached a relationship coach rather than someone who, in the disguise of a trusted relationship, ruined their lives. I'm not saying that those who love us don't care about us, but it is also a brutal truth that these people often replace care with their pride, fear and judgemental behaviour, and end up digging an open well for us rather than pulling us out. So when seeking advice, we must always have that one person in our life who can show us the best possible solution without any bias or preconceived notions.
This blog is part of #a2zblogchatter challenge hosted by Blogchatter and seventh in the series.
|
#!/usr/bin/env python
import vtk

def main():
    colors = vtk.vtkNamedColors()

    geoGraticle = vtk.vtkGeoGraticule()
    transformProjection = vtk.vtkGeoTransform()
    destinationProjection = vtk.vtkGeoProjection()
    sourceProjection = vtk.vtkGeoProjection()
    transformGraticle = vtk.vtkTransformFilter()
    # The reader pipeline below is created but never wired up in this sample;
    # to overlay data, call reader.SetFileName(...) and connect reader ->
    # transformReader -> readerMapper -> readerActor like the graticule pipeline.
    reader = vtk.vtkXMLPolyDataReader()
    transformReader = vtk.vtkTransformFilter()
    graticleMapper = vtk.vtkPolyDataMapper()
    readerMapper = vtk.vtkPolyDataMapper()
    graticleActor = vtk.vtkActor()
    readerActor = vtk.vtkActor()

    geoGraticle.SetGeometryType(geoGraticle.POLYLINES)
    geoGraticle.SetLatitudeLevel(2)
    geoGraticle.SetLongitudeLevel(2)
    geoGraticle.SetLongitudeBounds(-180, 180)
    geoGraticle.SetLatitudeBounds(-90, 90)

    # destinationProjection defaults to lat/long; override it with "rouss"
    # (the Roussilhe projection).
    destinationProjection.SetName("rouss")
    destinationProjection.SetCentralMeridian(0.)
    transformProjection.SetSourceProjection(sourceProjection)
    transformProjection.SetDestinationProjection(destinationProjection)
    transformGraticle.SetInputConnection(geoGraticle.GetOutputPort())
    transformGraticle.SetTransform(transformProjection)
    graticleMapper.SetInputConnection(transformGraticle.GetOutputPort())
    graticleActor.SetMapper(graticleMapper)

    renderWindow = vtk.vtkRenderWindow()
    renderer = vtk.vtkRenderer()
    interactor = vtk.vtkRenderWindowInteractor()
    renderWindow.SetInteractor(interactor)
    renderWindow.AddRenderer(renderer)
    renderWindow.SetSize(640, 480)
    renderer.SetBackground(colors.GetColor3d("BurlyWood"))

    renderer.AddActor(readerActor)
    renderer.AddActor(graticleActor)

    renderWindow.Render()
    interactor.Initialize()
    interactor.Start()

if __name__ == '__main__':
    main()
|
The College of Agriculture Extension reports (1913-1972; 126 cubic feet) document the annual reports of extension work in Indiana's ninety-two counties; some include agricultural agent work, home demonstration work and, beginning in the 1960s, cooperative extension work. There is a rich collection of photographs from some counties during the earliest years and beyond. Some reports feature a “farm family of the year” among the documentation. Types of materials include: bulletins, correspondence, newspaper clippings, photographs, programs and reports.
Purdue University played a major role in the development of Indiana agriculture. In the 1880s the university began experimental work in agriculture, and the ongoing research generated valuable information on crop rotation, soil fertility, care of livestock, fruit production and marketing, and numerous other farm-related topics. In 1889, the late Professor W.C. Latta became a key figure in the enactment of the Farmer’s Institute Act, which legally recognized the work the university had been doing in holding farm schools and ‘moveable’ schools throughout the state. Activities at the time included Boys’ Corn Clubs; the first county agents were appointed in 1906, and the first home demonstration agents in 1910. This work in turn led to the Clore Act of 1911, which authorized the expansion of extension work under the direction of the Department of Agriculture Extension. On May 8, 1914, President Woodrow Wilson signed the Smith-Lever Act, which provided for co-operative relations between state and nation to aid agricultural education, and the Extension Service became the educational arm of the United States Department of Agriculture. Forty-two states had extension work in some form in 1914, and 929 counties already employed 1,350 extension workers. By mid-June 1918, 2,435 counties nationally had agricultural agents, 1,715 counties had home demonstration agents, and 4-H membership soared to half a million members. Laws affecting extension have helped direct programs during World War I, the Depression, World War II and the post-war years.
In 1940 about 55,000 Indiana boys and girls between ten and twenty years of age were enrolled in these groups, which led to great improvements in farm life. In one year, more than 2,000,000 people attended the lectures and demonstrations conducted by county agents, home demonstration agents and specialists from Purdue. In the 1960s and 1970s additional programs were added, the Agricultural Extension Service was renamed the Cooperative Extension Service, and agent titles were changed to County Extension Agent. In the 1980s and 1990s the “farm crisis” redirected extension; Indiana combined 10 areas into 5 districts and downsized positions in 1987, with a strong emphasis on accountability and collaboration with organizations with similar goals.
Extension continues to take the university to the people, and the demonstration method is still relevant. The use of new technology has changed how information is communicated and disseminated, and organizational stress and resource redirection are common. The Cooperative Extension Service continues to be proactive, responsive and collaborative, and it is committed to the growth and development of people through life-long learning. The goals are still to empower customers, develop volunteers, build collaborative partnerships, increase the capacity to secure resources, utilize appropriate technologies and communication networks, and create a climate for staff to realize their potential.
Source: Retrieved July 11, 2011 from: http://www.ag.purdue.edu/counties/dubois/Documents/4-H%20Adult%20Leaders/CooperativeExtensionServiceHistory.pdf Writer’s Program (Ind.) (1941). Indiana: a guide to the Hoosier state. New York, Oxford University Press.
*Please note that it is best to search by county. The reports were processed at two different times, because some were located at a later date. Some older reports may be interspersed with the more recent years. For the most part, the reports are arranged alphabetically by Indiana county and then chronologically.
College of Agriculture, Janet Bechman and Margaret Titus.
Method of Acquisition: Transfer of records received from Margaret Titus on October 22, 2010.
Whenever possible, original order of the materials has been retained. All materials have been housed in acid-free boxes.
09099999: Date range for collection updated 1/11/13; final updates to finding aid completed 5/19/16. Collection identifier updated from UA 14.13.01 to UA 59 as of January 25, 2017.
College of Agriculture Extension reports. Purdue University Archives and Special Collections. https://archives.lib.purdue.edu/repositories/2/resources/506 Accessed April 26, 2019.
|
#!/usr/bin/env python3
# This file is a part of ENGO629-ROBPCA
# Copyright (c) 2015 Jeremy Steward
# License: http://www.gnu.org/licenses/gpl-3.0-standalone.html GPL v3+
"""
Defines a class which computes the ROBPCA method as defined by Mia Hubert,
Peter J. Rousseeuw and Karlien Vandem Branden (2005)
"""
import sys
import numpy as np
from sklearn.covariance import fast_mcd, MinCovDet
from numpy.random import choice
from .classic_pca import principal_components
class ROBPCA(object):
"""
Implements the ROBPCA algorithm as defined by Mia Hubert, Peter J.
Rousseeuw, and Karlien Vandem Branden (2005)
"""
def __init__(self, X, k=0, kmax=10, alpha=0.75, mcd=True):
"""
Initializes the class instance with the data you wish to compute the
ROBPCA algorithm over.
Parameters
----------
X : An n x p data matrix (where n is number of data points and p
is number of dimensions in data) which is to be reduced.
k : Number of principal components to compute. If k is missing,
the algorithm itself will determine the number of components
by finding a k such that L_k / L_1 > 1e-3 and (sum_1:k L_j /
sum_1:r L_j >= 0.8)
kmax : Maximal number of components that will be computed. Set to 10
by default
alpha : Assists in determining step 2. The higher alpha is, the more
efficient the estimates will be for uncontaminated data.
However, lower values for alpha make the algorithm more
robust. Can be any real value in the range [0.5, 1].
mcd : Specifies whether or not to use the MCD covariance matrix to
compute the principal components when p << n.
"""
if k < 0:
raise ValueError("ROBPCA: number of dimensions k must be greater than or equal to 0.")
if kmax < 1:
raise ValueError("ROBPCA: kmax must be greater than 1 (default is 10).")
if not (0.5 <= alpha <= 1.0):
raise ValueError("ROBPCA: alpha must be a value in the range [0.5, 1.0].")
        if not isinstance(mcd, bool):
raise ValueError("ROBPCA: mcd must be either True or False.")
if k > kmax:
print("ROBPCA: WARNING - k is greater than kmax, setting k to kmax.",
file=sys.stderr)
k = kmax
self.data = X
self.k = k
self.kmax = kmax
self.alpha = alpha
self.mcd = mcd
return
@staticmethod
def reduce_to_affine_subspace(X):
"""
Takes the data-matrix and computes the affine subspace spanned by n
observations of the mean-centred data.
Parameters
----------
X : X is an n by p data matrix where n is the number of observations
and p is the number of dimensions in the data.
Returns
--------
Z : Z is the affine subspace of the data matrix X. It is the same
data as X but represents itself within its own dimensionality.
rot : Specifies the PCs computed here that were used to rotate X into
the subspace Z.
"""
# Compute regular PCA
# L -> lambdas (eigenvalues)
# PC -> principal components (eigenvectors)
_, PC = principal_components(X)
centre = np.mean(X, axis=0)
# New data matrix
Z = np.dot((X - centre), PC)
return Z, PC
def num_least_outlying_points(self):
"""
Determines the least number of outlying points h, which should be less
than n, our number of data points. `h` is computed as the maximum of
either:
alpha * n OR
(n + kmax + 1) / 2
Returns
--------
h : number of least outlying points.
"""
n = self.data.shape[0]
return int(np.max([self.alpha * n, (n + self.kmax + 1) / 2]))
@staticmethod
def direction_through_hyperplane(X):
"""
Calculates a direction vector between two points in Z, where Z is an
n x p matrix. This direction is projected upon to find the number of
least outlying points using the Stahel-Donoho outlyingness measure.
Parameters
----------
X : Affine subspace of mean-centred data-matrix
Returns
--------
        d : direction vector through points sampled from X, or None if the
            sampled points are degenerate
"""
        n = X.shape[0]
        p = X.shape[1]
d = None
if n > p:
P = np.array(X[choice(n,p), :])
Q, R = np.linalg.qr(P)
if np.linalg.matrix_rank(Q) == p:
d = np.linalg.solve(Q, np.ones(p))
else:
P = np.array(X[choice(n,2), :])
tmp = P[1, :] - P[0, :]
            N = np.sqrt(np.dot(tmp, tmp))
if N > 1e-8:
d = tmp / N
return d
def find_least_outlying_points(self, X):
"""
Finds the `h` number of points in the dataset that are least-outlying.
Does this by first computing the modified Stahel-Donoho
affine-invariant outlyingness.
Parameters
----------
X : The data matrix with which you want to find the least outlying
points using the modified Stahel-Donoho outlyingness measure
Returns
--------
H0 : indices of data points from X which have the least
outlyingness
"""
n, p = X.shape
        self.h = self.num_least_outlying_points()
        num_directions = min(250, n * (n - 1) // 2)
        # Drop degenerate directions (direction_through_hyperplane may return None)
        B = np.array([b for b in (ROBPCA.direction_through_hyperplane(X)
                                  for _ in range(num_directions)) if b is not None])
B_norm = np.linalg.norm(B, axis = 1)
index_norm = B_norm > 1e-12
A = np.dot(np.diag(1 / B_norm[index_norm]), B[index_norm, :])
# Used as a matrix because there's a bug in fast_mcd / MinCovDet
# The bug is basically due to single row / column arrays not
# maintaining the exact shape information (e.g. np.zeros(3).shape
# returns (3,) and not (3,1) or (1,3).
        Y = np.matrix(np.dot(X, A.T))
ny, ry = Y.shape
# Set up lists for univariate t_mcd and s_mcd
t_mcd = np.zeros(ry)
s_mcd = np.zeros(ry)
for i in range(ry):
mcd = MinCovDet(support_fraction=self.alpha).fit(Y[:,i])
            t_mcd[i] = mcd.location_[0]
            s_mcd[i] = np.sqrt(mcd.covariance_[0, 0])  # robust scale, not variance
# Supposedly if any of the s_mcd values is zero we're supposed to
# project all the data points onto the hyperplane defined by the
# direction orthogonal to A[i, :]. However, the reference
# implementation in LIBRA does not explicitly do this, and quite
# frankly the code is so terrible I've moved on for the time being.
outl = np.max(np.abs(np.array(Y) - t_mcd) / s_mcd, axis=1)
        H0 = np.argsort(outl)[:self.h]  # indices of the h least outlying points
return H0
def compute_pc(self):
"""
Robustly computes the principal components of the data matrix.
This is primarily broken up into one of several ways, depending on the
dimensionality of the data (whether p > n or p < n)
"""
X, rot = ROBPCA.reduce_to_affine_subspace(self.data)
centre = np.mean(self.data, axis=0)
        if np.linalg.matrix_rank(X) == 0:
raise ValueError("All data points collapse!")
n, p = X.shape
        self.h = self.num_least_outlying_points()
# Use MCD instead of ROBPCA if p << n
if p < min(np.floor(n / 5), self.kmax) and self.mcd:
mcd = MinCovDet(support_fraction=self.alpha).fit(X)
loc = mcd.location_
cov = mcd.covariance_
L, PCs = principal_components(X, lambda x: cov)
result = {
'location' : np.dot(rot,loc) + centre,
'covariance' : np.dot(rot, cov),
'eigenvalues' : L,
'loadings' : np.dot(rot, PCs),
}
return result
# Otherwise just continue with ROBPCA
        H0 = self.find_least_outlying_points(X)
Xh = X[H0, :]
Lh, Ph = principal_components(Xh)
centre_Xh = np.mean(Xh, axis=0)
        self.kmax = min(int(np.sum(Lh > 1e-12)), self.kmax)
# If k was not set or chosen to be 0, then we should calculate it
# Basically we test if the ratio of the k-th eigenvalue to the 1st
# eigenvalue (sorted decreasingly) is larger than 1e-3 and if the
# fraction of cumulative dispersion is greater than 80%
        if self.k == 0:
            test, = np.where(Lh / Lh[0] <= 1e-3)
            if len(test):
                # test[0] is the first index whose eigenvalue ratio drops
                # below the cutoff; keep only the components before it.
                self.k = min(int(np.sum(Lh > 1e-12)), int(test[0]), self.kmax)
            else:
                self.k = min(int(np.sum(Lh > 1e-12)), self.kmax)
            cumulative = np.cumsum(Lh[:self.k]) / np.sum(Lh)
            if cumulative[self.k - 1] > 0.8:
                self.k = int(np.where(cumulative >= 0.8)[0][0]) + 1
        # Re-centre in the original coordinates and restrict to k components
        # (Python slicing is 0-based, so columns 0..k-1 are the leading PCs).
        centre = centre + np.dot(rot, centre_Xh)
        X2 = np.dot(X - centre_Xh, Ph)
        rot = np.dot(rot, Ph)
        X2 = X2[:, :self.k]
        rot = rot[:, :self.k]
mcd = MinCovDet(support_fraction=(self.h / n)).fit(X2)
loc = mcd.location_
cov = mcd.covariance_
L, PCs = principal_components(X2, lambda x: cov)
result = {
'location' : np.dot(rot, loc) + centre,
'covariance' : np.dot(rot,cov),
'eigenvalues' : L,
'loadings' : np.dot(rot, PCs),
}
return result
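A minimal usage sketch for the class above (the import path is hypothetical; adjust it to wherever this module lives in the package):
import numpy as np
from engo629.robpca import ROBPCA  # hypothetical import path

X = np.random.randn(200, 6)  # 200 observations in 6 dimensions
model = ROBPCA(X, k=0, kmax=10, alpha=0.75, mcd=True)
result = model.compute_pc()
print(result['eigenvalues'])
print(result['loadings'])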
|
Written by Dominik Joe Pantůček on January 31, 2019.
It has been quite some time since we discussed some of the interesting programming work we are doing here at Trustica. And, as you may have guessed, we have encountered quite a few things that needed to be done and that apparently no-one had done before. Read on to see how you can style UI elements in Racket.
Racket is a powerful Scheme-derived language that includes a plethora of utility libraries out of the box. One of these libraries – or better said, collections – allows you to quickly develop cross-platform UI applications. It has been a few years now since we started using it for most of our back-end and support GUI software.
First we get a reference to the font structure using the global font list object, and then we tell the button widget to use it. The trouble is that the button widget ignores the font we pass it.
As you can see, it passes everything on as-is to the underlying button% class, but in the case of a text label, the label is pre-rendered onto a bitmap canvas. This is actually just sample code from StackOverflow, extended to allow not only a colour change but also a custom font.
And as you can see in Picture 1 below, the result looks pretty neat!
Once again we have proved that Racket is an extremely flexible tool – even for relatively simple things. One can only guess why more people are not programming in this Scheme-based language.
Hope you share our excitement about the tools we use and stay tuned for more information about Cryptoucan™ soon!
|
import sys, pprint, math, numpy, simpy, getopt, itertools, operator, random
from rvs import *
from arepeat_models import *
from arepeat_sim import *
# ************************ Multiple Qs for Jobs with multiple Tasks **************************** #
class Job(object):
def __init__(self, _id, k, tsize, n=0):
self._id = _id
self.k = k
self.tsize = tsize
self.n = n
self.prev_hop_id = None
def __repr__(self):
return "Job[id= {}, k= {}, n= {}]".format(self._id, self.k, self.n)
def deep_copy(self):
j = Job(self._id, self.k, self.tsize, self.n)
j.prev_hop_id = self.prev_hop_id
return j
class JG(object): # Job Generator
def __init__(self, env, ar, k_dist, tsize_dist):
self.env = env
self.ar = ar
self.k_dist = k_dist
self.tsize_dist = tsize_dist
self.nsent = 0
self.out = None
self.action = None
def init(self):
self.action = self.env.process(self.run() )
def run(self):
while 1:
yield self.env.timeout(random.expovariate(self.ar) )
self.nsent += 1
k = self.k_dist.gen_sample()
tsize = self.tsize_dist.gen_sample()
self.out.put(Job(self.nsent, k, tsize) )
class Task(object):
def __init__(self, jid, k, size, remaining):
self.jid = jid
self.k = k
self.size = size
self.remaining = remaining
self.prev_hop_id = None
self.ent_time = None
def __repr__(self):
return "Task[jid= {}, k= {}, size= {}, remaining= {}]".format(self.jid, self.k, self.size, self.remaining)
def deep_copy(self):
t = Task(self.jid, self.k, self.size, self.remaining)
t.prev_hop_id = self.prev_hop_id
t.ent_time = self.ent_time
return t
class PSQ(object): # Process Sharing Queue
def __init__(self, _id, env, h, out):
self._id = _id
self.env = env
self.h = h
self.out = out
self.t_l = []
self.tinserv_l = []
self.got_busy = None
self.sinterrupt = None
self.add_to_serv = False
self.cancel = False
self.cancel_jid = None
self.store = simpy.Store(env)
self.action = env.process(self.serv_run() )
self.action = env.process(self.put_run() )
self.lt_l = []
self.sl_l = []
def __repr__(self):
return "PSQ[id= {}]".format(self._id)
def length(self):
return len(self.t_l)
def serv_run(self):
while True:
self.tinserv_l = self.t_l[:self.h]
if len(self.tinserv_l) == 0:
# sim_log(DEBUG, self.env, self, "idle; waiting for arrival", None)
self.got_busy = self.env.event()
yield (self.got_busy)
# sim_log(DEBUG, self.env, self, "got busy!", None)
continue
# TODO: This seems wrong
# t_justmovedHoL = self.tinserv_l[-1]
# self.out.put_c({'m': 'HoL', 'jid': t_justmovedHoL.jid, 'k': t_justmovedHoL.k, 'qid': self._id} )
serv_size = len(self.tinserv_l)
r_l = [self.tinserv_l[i].remaining for i in range(serv_size) ]
time = min(r_l)
i_min = r_l.index(time)
# sim_log(DEBUG, self.env, self, "back to serv; time= {}, serv_size= {}".format(time, serv_size), None)
start_t = self.env.now
self.sinterrupt = self.env.event()
yield (self.sinterrupt | self.env.timeout(time) )
serv_t = (self.env.now - start_t)/serv_size
for i in range(serv_size):
try:
self.t_l[i].remaining -= serv_t
except IndexError:
break
if self.add_to_serv:
# sim_log(DEBUG, self.env, self, "new task added to serv", None)
self.sinterrupt = None
self.add_to_serv = False
elif self.cancel:
for t in self.t_l:
if t.jid == self.cancel_jid:
# sim_log(DEBUG, self.env, self, "cancelled task in serv", t)
self.t_l.remove(t)
self.sinterrupt = None
self.cancel = False
else:
t = self.t_l.pop(i_min)
# sim_log(DEBUG, self.env, self, "serv done", t)
lt = self.env.now - t.ent_time
self.lt_l.append(lt)
self.sl_l.append(lt/t.size)
t.prev_hop_id = self._id
self.out.put(t)
def put_run(self):
while True:
t = (yield self.store.get() )
_l = len(self.t_l)
self.t_l.append(t)
if _l == 0:
self.got_busy.succeed()
elif _l < self.h:
self.add_to_serv = True
self.sinterrupt.succeed()
def put(self, t):
# sim_log(DEBUG, self.env, self, "recved", t)
t.ent_time = self.env.now
return self.store.put(t) # .deep_copy()
def put_c(self, m):
# sim_log(DEBUG, self.env, self, "recved; tinserv_l= {}".format(self.tinserv_l), m)
# if m['m'] == 'cancel':
jid = m['jid']
if jid in [t.jid for t in self.tinserv_l]:
self.cancel = True
self.cancel_jid = jid
self.sinterrupt.succeed()
else:
for t in self.t_l:
if t.jid == jid:
self.t_l.remove(t)
class FCFS(object):
def __init__(self, _id, env, sl_dist, out):
self._id = _id
self.env = env
self.sl_dist = sl_dist
self.out = out
self.t_l = []
self.t_inserv = None
self.got_busy = None
self.cancel_flag = False
self.cancel = None
self.lt_l = []
self.sl_l = []
self.action = env.process(self.serv_run() )
def __repr__(self):
return "FCFS[_id= {}]".format(self._id)
def length(self):
return len(self.t_l) + (self.t_inserv is not None)
def serv_run(self):
while True:
if len(self.t_l) == 0:
self.got_busy = self.env.event()
yield (self.got_busy)
self.got_busy = None
# sim_log(DEBUG, self.env, self, "got busy!", None)
self.t_inserv = self.t_l.pop(0)
self.cancel = self.env.event()
clk_start_time = self.env.now
st = self.t_inserv.size * self.sl_dist.gen_sample()
# sim_log(DEBUG, self.env, self, "starting {}s-clock on ".format(st), self.t_inserv)
yield (self.cancel | self.env.timeout(st) )
if self.cancel_flag:
# sim_log(DEBUG, self.env, self, "cancelled clock on ", self.t_inserv)
self.cancel_flag = False
else:
# sim_log(DEBUG, self.env, self, "serv done in {}s on ".format(self.env.now-clk_start_time), self.t_inserv)
lt = self.env.now - self.t_inserv.ent_time
self.lt_l.append(lt)
self.sl_l.append(lt/self.t_inserv.size)
self.t_inserv.prev_hop_id = self._id
self.out.put(self.t_inserv)
self.t_inserv = None
def put(self, t):
# sim_log(DEBUG, self.env, self, "recved", t)
_l = len(self.t_l)
t.ent_time = self.env.now
self.t_l.append(t) # .deep_copy()
if self.got_busy is not None and _l == 0:
self.got_busy.succeed()
def put_c(self, m):
# sim_log(DEBUG, self.env, self, "recved", m)
# if m['m'] == 'cancel':
jid = m['jid']
for t in self.t_l:
if t.jid == jid:
self.t_l.remove(t)
    if self.t_inserv is not None and jid == self.t_inserv.jid:
self.cancel_flag = True
self.cancel.succeed()
class JQ(object):
def __init__(self, env, in_qid_l):
self.env = env
self.in_qid_l = in_qid_l
self.jid__t_l_map = {}
self.deped_jid_l = []
self.jid_HoLqid_l_map = {}
self.movedHoL_jid_l = []
self.store = simpy.Store(env)
self.action = env.process(self.run() )
# self.store_c = simpy.Store(env)
# self.action = env.process(self.run_c() )
def __repr__(self):
return "JQ[in_qid_l= {}]".format(self.in_qid_l)
def run(self):
while True:
t = (yield self.store.get() )
if t.jid in self.deped_jid_l: # Redundant tasks of a job may be received
continue
if t.jid not in self.jid__t_l_map:
self.jid__t_l_map[t.jid] = []
self.jid__t_l_map[t.jid].append(t.deep_copy() )
t_l = self.jid__t_l_map[t.jid]
if len(t_l) > t.k:
log(ERROR, "len(t_l)= {} > k= {}".format(len(t_l), t.k) )
elif len(t_l) < t.k:
continue
else:
self.jid__t_l_map.pop(t.jid, None)
self.deped_jid_l.append(t.jid)
self.out_c.put_c({'jid': t.jid, 'm': 'jdone', 'deped_from': [t.prev_hop_id for t in t_l] } )
def put(self, t):
# sim_log(DEBUG, self.env, self, "recved", t)
return self.store.put(t)
# def run_c(self):
# while True:
# m = (yield self.store_c.get() )
# if m['m'] == 'HoL':
# jid, k, qid = m['jid'], m['k'], m['qid']
# if m['jid'] in self.movedHoL_jid_l: # Redundant tasks may move HoL simultaneously
# continue
# if jid not in self.jid_HoLqid_l_map:
# self.jid_HoLqid_l_map[jid] = []
# self.jid_HoLqid_l_map[jid].append(qid)
# HoLqid_l = self.jid_HoLqid_l_map[jid]
# if len(HoLqid_l) > k:
# log(ERROR, "len(HoLqid_l)= {} > k= {}".format(len(HoLqid_l), k) )
# elif len(HoLqid_l) < k:
# continue
# else:
# self.movedHoL_jid_l.append(jid)
# HoLqid_l = self.jid_HoLqid_l_map[jid]
# self.out_c.put_c({'m': 'jHoL', 'jid': jid, 'at': HoLqid_l} )
# self.jid_HoLqid_l_map.pop(jid, None)
# def put_c(self, m):
# sim_log(DEBUG, self.env, self, "recved", m)
# return self.store_c.put(m)
class MultiQ(object):
def __init__(self, env, N, sching_m, sl_dist):
self.env = env
self.N = N
self.sching_m = sching_m
self.jq = JQ(env, list(range(self.N) ) )
self.jq.out_c = self
# self.q_l = [PSQ(i, env, h=4, out=self.jq) for i in range(self.N) ]
# sl_dist = DUniform(1, 1) # Dolly()
self.q_l = [FCFS(i, env, sl_dist, out=self.jq) for i in range(self.N) ]
self.jid_info_m = {}
self.store = simpy.Store(env)
self.action = env.process(self.run() )
self.jtime_l = []
self.k__jtime_m = {}
def __repr__(self):
return "MultiQ[N= {}]".format(self.N)
def tlt_l(self):
l = []
for q in self.q_l:
l.extend(q.lt_l)
return l
def tsl_l(self):
l = []
for q in self.q_l:
l.extend(q.sl_l)
return l
def get_sorted_qids(self):
qid_length_m = {q._id: q.length() for q in self.q_l}
# print("qid_length_m= {}".format(qid_length_m) )
qid_length_l = sorted(qid_length_m.items(), key=operator.itemgetter(1) )
# print("qid_length_l= {}".format(qid_length_l) )
return [qid_length[0] for qid_length in qid_length_l]
def run(self):
while True:
j = (yield self.store.get() )
toi_l = random.sample(range(self.N), j.n)
# toi_l = self.get_sorted_qids()[:j.n]
for i in toi_l:
self.q_l[i].put(Task(j._id, j.k, j.tsize, j.tsize) )
self.jid_info_m[j._id] = {'k': j.k, 'ent_time': self.env.now, 'tsize': j.tsize, 'qid_l': toi_l}
def put(self, j):
# sim_log(DEBUG, self.env, self, "recved", j)
if self.sching_m['t'] == 'coded':
# j.n = j.k + self.sching_m['n-k']
j.n = min(self.N, math.floor(j.k*self.sching_m['r'] ) )
# return self.store.put(j.deep_copy() )
return self.store.put(j)
def put_c(self, m):
# sim_log(DEBUG, self.env, self, "recved", m)
# if m['m'] == 'jHoL':
# jid = m['jid']
# jinfo = self.jid_info_m[jid]
# for i in jinfo['qid_l']:
# if i not in m['at']:
# self.q_l[i].put_c({'m': 'cancel', 'jid': jid} )
# elif m['m'] == 'jdone':
jid = m['jid']
jinfo = self.jid_info_m[jid]
t = (self.env.now - jinfo['ent_time'] )/jinfo['tsize']
# t = self.env.now - jinfo['ent_time'] - jinfo['tsize']
self.jtime_l.append(t)
if jinfo['k'] not in self.k__jtime_m:
self.k__jtime_m[jinfo['k'] ] = []
self.k__jtime_m[jinfo['k'] ].append(t)
for i in jinfo['qid_l']:
if i not in m['deped_from']:
self.q_l[i].put_c({'m': 'cancel', 'jid': jid} )
self.jid_info_m.pop(jid, None)
# class MultiQ_RedToIdle(MultiQ):
# def __init__(self, env, N, sching_m):
# super().__init__(env, N, sching_m)
# self.qid_redjid_l = {i: [] for i in range(N) }
# def __repr__(self):
# return "MultiQ_RedToIdle[N = {}]".format(self.N)
# def get_sorted_qid_length_l(self):
# qid_length_m = {q._id: q.length() for q in self.q_l}
# qid_length_l = sorted(qid_length_m.items(), key=operator.itemgetter(1) )
# return qid_length_l
# def run(self):
# while True:
# j = (yield self.store.get() )
# qid_length_l = self.get_sorted_qid_length_l()
# toqid_l = []
# for i, qid_length in enumerate(qid_length_l):
# if i < j.k:
# toqid_l.append(qid_length[0] )
# elif i < j.n:
# if qid_length[1] == 0:
# qid = qid_length[0]
# toqid_l.append(qid)
# self.qid_redjid_l[qid].append(j._id)
# for i, qid in enumerate(toqid_l):
# if i < j.k:
# for jid in self.qid_redjid_l[qid]:
# self.q_l[qid].put_c({'m': 'cancel', 'jid': jid} )
# self.qid_redjid_l[qid].remove(jid)
# self.q_l[qid].put(Task(j._id, j.k, j.tsize, j.tsize) )
# self.jid_info_m[j._id] = {'ent_time': self.env.now, 'tsize': j.tsize, 'qid_l': toqid_l}
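A minimal wiring sketch for the classes above (assuming rvs provides the DUniform distribution with a gen_sample() method, as the commented-out line in MultiQ suggests):
env = simpy.Environment()
mq = MultiQ(env, N=10, sching_m={'t': 'coded', 'r': 1.5}, sl_dist=DUniform(1, 1))
jg = JG(env, ar=0.5, k_dist=DUniform(1, 4), tsize_dist=DUniform(1, 1))
jg.out = mq
jg.init()
env.run(until=10000)
print(numpy.mean(mq.jtime_l))  # mean job time, normalized by task size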
|
Daniel J. Hansen, a partner at the Zadroga Act claim law firm of Turley, Hansen & Partners, was quoted last week in Metro NY on the new 9/11 Victim Compensation Fund draft rules released by Special Master Sheila Birnbaum. Like his partner Troy Rosasco, Hansen was disappointed that the new rules only permit compensation for “physical harm”, not for psychological injuries. Special Master Birnbaum will appear at a “Town Hall” meeting tomorrow night, 6/29, from 6:00pm to 8:30pm at the United Federation of Teachers office located at 50 Broadway in lower Manhattan.
Zadroga Act 9/11 Victim Compensation Fund Special Master Sheila Birnbaum has released updated draft regulations regarding Zadroga claims procedures. The new rules limit compensation to claims involving “physical harm”. The new rules significantly expand the geographic boundaries for claimants. The decisions of Dr. John Howard, Director of the World Trade Center Health Program, on which illnesses will be covered, including certain cancers, will guide the Special Master.
The new rules also permit the “amendment” of prior Victim Compensation Fund claims made under the original Fund, which closed in December 2003. The Special Master will have to “pro-rate” or reduce the initial awards, which may upset some claimants.
The British Broadcasting Corporation (BBC), a worldwide leader in television news reporting, visited our side of the pond last week to interview Turley Hansen partner Troy Rosasco at our office in the Woolworth Building in lower Manhattan for a documentary commemorating the 10th anniversary of 9/11. Rosasco answered questions about the new Zadroga 9/11 Act and the re-opening of the 9/11 Victim Compensation Fund. After leaving our office, the crew was off to interview Congressman Jerrold Nadler, co-sponsor of the Zadroga Act in the US House of Representatives.
|
# -*- coding: utf-8 -*-
#
# This file is part of Invenio.
# Copyright (C) 2014, 2015, 2016 CERN.
#
# Invenio is free software; you can redistribute it
# and/or modify it under the terms of the GNU General Public License as
# published by the Free Software Foundation; either version 2 of the
# License, or (at your option) any later version.
#
# Invenio is distributed in the hope that it will be
# useful, but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Invenio; if not, write to the
# Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston,
# MA 02111-1307, USA.
#
# In applying this license, CERN does not
# waive the privileges and immunities granted to it by virtue of its status
# as an Intergovernmental Organization or submit itself to any jurisdiction.
"""Record ID provider."""
from __future__ import absolute_import, print_function
from ..models import PIDStatus, RecordIdentifier
from .base import BaseProvider
class RecordIdProvider(BaseProvider):
"""Record identifier provider."""
pid_type = 'recid'
"""Type of persistent identifier."""
pid_provider = None
"""Provider name.
The provider name is not recorded in the PID since the provider does not
provide any additional features besides creation of record ids.
"""
default_status = PIDStatus.RESERVED
"""Record IDs are by default registered immediately.
Default: :attr:`invenio_pidstore.models.PIDStatus.RESERVED`
"""
@classmethod
def create(cls, object_type=None, object_uuid=None, **kwargs):
"""Create a new record identifier.
Note: if the object_type and object_uuid values are passed, then the
        PID status will be automatically set to
:attr:`invenio_pidstore.models.PIDStatus.REGISTERED`.
:param object_type: The object type. (Default: None.)
:param object_uuid: The object identifier. (Default: None).
        :param kwargs: Extra keyword arguments; ``pid_value`` is generated
            automatically and must not be passed in.
"""
# Request next integer in recid sequence.
assert 'pid_value' not in kwargs
kwargs['pid_value'] = str(RecordIdentifier.next())
kwargs.setdefault('status', cls.default_status)
if object_type and object_uuid:
kwargs['status'] = PIDStatus.REGISTERED
return super(RecordIdProvider, cls).create(
object_type=object_type, object_uuid=object_uuid, **kwargs)
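A minimal usage sketch (assuming an application context with a database session; the import path follows invenio-pidstore's layout):
import uuid
from invenio_pidstore.providers.recordid import RecordIdProvider

provider = RecordIdProvider.create(object_type='rec', object_uuid=uuid.uuid4())
print(provider.pid.pid_value)  # next integer in the recid sequence
print(provider.pid.status)     # REGISTERED, since an object was attached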
|
Tswelo Kodisang is the Chief HR Officer for Tiger Brands where he is responsible for organizing and managing agendas, HR strategy and systems, and performance. He is a commercially driven international HR leader who operates as a business partner and trusted advisor. Results-driven, Tswelo has worked across Africa, the Middle East, Turkey, Israel, and England. For him, the road to success is through hard work, dedication, and the desire to continually improve at what you do.
|
# -*- coding: utf-8 -*-
##############################################################################
#
# Copyright (C) 2018 Compassion CH (http://www.compassion.ch)
# Releasing children from poverty in Jesus' name
# @author: Emanuel Cino <[email protected]>
#
# The licence is in the file __manifest__.py
#
##############################################################################
from odoo import api, models
class Contract(models.Model):
_inherit = 'recurring.contract'
@api.multi
def contract_waiting_mandate(self):
"""
In case the sponsor paid the first month online, we want to force
activation of contract and later put it in waiting mandate state.
"""
for contract in self.filtered('invoice_line_ids'):
invoices = contract.invoice_line_ids.mapped('invoice_id')
payment = self.env['account.payment'].search([
('invoice_ids', 'in', invoices.ids),
('state', '=', 'draft')
])
if payment:
# Activate contract
contract._post_payment_first_month()
contract.contract_active()
return super(Contract, self).contract_waiting_mandate()
def associate_group(self, payment_mode_id):
res = super(Contract, self).associate_group(payment_mode_id)
self.group_id.on_change_payment_mode()
return res
|
Construction Project Manager Guys is here for all your goals involving Construction Project Managers in Woodsfield, OH. You are looking for the most sophisticated technology in the field, and our crew of highly trained contractors will provide exactly that. We make sure you get the most excellent service, the best price, and the very best quality supplies. Call us today at 888-492-2478 and we will go over your alternatives, answer your questions, and arrange an appointment to begin organizing your project.
At Construction Project Manager Guys, we are aware that you must stay within your budget and reduce costs wherever possible. But saving money should never mean that you give up quality on Construction Project Managers in Woodsfield, OH. Our efforts to help you save money won't sacrifice the high quality of our services. We utilize the highest quality practices and materials to ensure that any work can withstand the years, and we save you money with strategies that won't change the quality of your job. This is feasible because we know how to save you time and money on products and labor. To lower your costs, Construction Project Manager Guys is the service to get in touch with. Contact 888-492-2478 to talk with our client care reps today.
To make the best decisions regarding Construction Project Managers in Woodsfield, OH, you have to be well informed. You shouldn't go into it blindly, and you should know what to expect. You will not face any unexpected situations when you choose Construction Project Manager Guys. You can begin by discussing your project with our customer service representatives when you dial 888-492-2478. During this call, you'll get your questions answered, and we will establish a time to start work. We consistently arrive at the arranged time, ready to work with you.
You've got plenty of reasons to turn to Construction Project Manager Guys for your needs regarding Construction Project Managers in Woodsfield, OH. We are the first choice when you need the best cost-saving options, the finest materials, and the best customer service. We're ready to assist you with the greatest expertise and practical knowledge in the industry. Call 888-492-2478 whenever you need Construction Project Managers in Woodsfield, and we will work with you to systematically complete your project.
|
#!/usr/bin/env python
import sys
import argparse
import shutil
import numpy as np
import netCDF4 as nc
from scipy import signal
"""
Smooth out bathymetry by applying a gaussian blur.
See: http://wiki.scipy.org/Cookbook/SignalSmooth
Convolving a noisy image with a gaussian kernel (or any bell-shaped curve)
blurs the noise out and leaves the low-frequency details of the image standing
out.
After smoothing various things need to be fixed up:
- A minimum depth should be set, e.g. we don't want ocean depths of < 1m
- All land points should be the same in the smoothed bathymetry.
FIXME: what about using scipy.convolve?
FIXME: what about a more generalised smoothing tool?
"""
def gauss_kern(size, sizey=None):
""" Returns a normalized 2D gauss kernel array for convolutions """
size = int(size)
if not sizey:
sizey = size
else:
sizey = int(sizey)
x, y = np.mgrid[-size:size+1, -sizey:sizey+1]
g = np.exp(-(x**2/float(size) + y**2/float(sizey)))
return g / g.sum()
def blur_image(im, n, ny=None) :
"""
Blurs the image by convolving with a gaussian kernel of typical
size n. The optional keyword argument ny allows for a different
size in the y direction.
"""
g = gauss_kern(n, sizey=ny)
improc = signal.convolve(im, g, mode='valid')
return improc
def main():
parser = argparse.ArgumentParser()
parser.add_argument("point_x", help="x coordinate of centre point.", type=int)
parser.add_argument("point_y", help="y coordinate of centre point.", type=int)
parser.add_argument("size", help="""
Rows and columns that will be smoothed. For example if size == 10,
then a 10x10 array centred at (x, y) will be smoothed.""", type=int)
parser.add_argument("--kernel", help="Size of guassian kernel.", default=5, type=int)
parser.add_argument("--minimum_depth", help="""
After smoothing, set values to a minimum depth.
This is used to fix up shallow waters""", default=40, type=int)
parser.add_argument("input_file", help="Name of the input file.")
parser.add_argument("input_var", help="Name of the variable to blur.")
parser.add_argument("output_file", help="Name of the output file.")
args = parser.parse_args()
shutil.copy(args.input_file, args.output_file)
f = nc.Dataset(args.output_file, mode='r+')
input_var = f.variables[args.input_var][:]
north_ext = args.point_y + args.size
assert(north_ext < input_var.shape[0])
south_ext = args.point_y - args.size
assert(south_ext >= 0)
east_ext = args.point_x + args.size
assert(east_ext < input_var.shape[1])
west_ext = args.point_x - args.size
assert(west_ext >= 0)
input_var = input_var[south_ext:north_ext,west_ext:east_ext]
# We need to extend/pad the array by <kernel> points along each edge.
var = np.pad(input_var, (args.kernel, args.kernel), mode='edge')
smoothed = blur_image(var, args.kernel)
# After smoothing make sure that there is a certain minimum depth.
smoothed[(smoothed > 0) & (smoothed < args.minimum_depth)] = args.minimum_depth
# Ensure that all land points remain the same.
smoothed[input_var == 0] = 0
f.variables[args.input_var][south_ext:north_ext,west_ext:east_ext] = smoothed
f.close()
if __name__ == '__main__':
sys.exit(main())
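A quick sketch of the smoothing helpers above on synthetic data, no NetCDF file required; the pad-then-'valid'-convolve dance preserves the array shape:
import numpy as np
depths = np.random.uniform(0, 200, size=(50, 50))  # synthetic bathymetry
kernel = 5
padded = np.pad(depths, (kernel, kernel), mode='edge')
smoothed = blur_image(padded, kernel)
assert smoothed.shape == depths.shape  # 'valid' convolution undoes the padding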
|
You can explore life, the universe and everything in it. Some exhibits are millions of years old, others less than a decade.
This famous museum in the heart of Scotland's capital city is actually two buildings. The collections in the landmark Museum of Scotland building tell the story of Scotland: its land, its people and its culture. The Royal Museum building, with its magnificent glass ceiling, houses international collections spanning nature to art, culture to science.
Located at Edinburgh Castle, explore over 400 years of the Scottish military experience. Uncover stories of courage and determination, victory and defeat, heroics and heartbreak.
The Museum of Childhood opened in 1955 and is a favourite with adults and children alike. It was the first museum in the world to specialise in the history of childhood.
The Museum of Edinburgh is home to important collections relating to the history of Edinburgh, from prehistoric times to the present day.
One of the museum's great treasures is the National Covenant, signed by Scotland's Presbyterian leadership in 1638, while the collections of Scottish pottery and the items relating to Field Marshal Earl Haig are of national importance.
East Fortune played an important role as an airfield during two World Wars. Now the National Museum of Flight hangars are packed with aircraft that reveal how flight developed from the Wright brothers to Concorde.
|
# Copyright 2016 MakeMyTrip (Kunal Aggarwal)
#
# This file is part of dataShark.
#
# dataShark is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# dataShark is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with dataShark. If not, see <http://www.gnu.org/licenses/>.
import csv
from datetime import datetime
class Plugin:
def __init__(self, conf):
self.path = conf.get('path', 'UseCase.csv')
self.separator = conf.get('separator', ',')
self.quote_char = conf.get('quote_char', '"')
self.title = conf.get('title', 'Use Case')
self.debug = conf.get('debug', False)
if self.debug == "true":
self.debug = True
else:
self.debug = False
def save(self, dataRDD, progType):
if progType == "streaming":
dataRDD.foreachRDD(lambda s: self.__writeToCSV(s))
elif progType == "batch":
self.__writeToCSV(dataRDD)
    def __writeToCSV(self, dataRDD):
        rows = dataRDD.collect()  # collect once instead of re-collecting below
        if rows:
            with open(self.path, 'a') as csvfile:
                spamwriter = csv.writer(csvfile, delimiter = self.separator, quotechar = self.quote_char, quoting = csv.QUOTE_MINIMAL)
                currentTime = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
                for row in rows:
                    if self.debug:
                        print(row)
                    csvrow = [currentTime, row[0], row[1]]
                    try:
                        metadata = row[2]
                    except IndexError:
                        metadata = {}
                    if metadata:
                        for key in sorted(metadata):
                            csvrow.append(metadata[key])
                    spamwriter.writerow(csvrow)
            print("%s - (%s) - Written %s Documents to CSV File %s" % (datetime.now().strftime("%Y-%m-%d %H:%M:%S"), self.title, len(rows), self.path))
        else:
            print("%s - (%s) - No RDD Data Received" % (datetime.now().strftime("%Y-%m-%d %H:%M:%S"), self.title))
|
Door Staywell 775, brown version, with manual opening and a removable lockable flap. The door allows your cat or dog comfortable and free passage back and forth. Suitable for small, medium and large dogs up to 45 kg in weight.
Pet door for small, medium and large dogs, the Staywell 775 from the manufacturer PetSafe, with manual opening. The door allows your cat or dog comfortable and free passage back and forth. Brown design, suitable for dogs up to 45 kg with a chest width of up to 29 cm; weather resistant. The door is closed by a plug flap, which stores inside the door. Suitable for installation in wood, PVC, metal, glass and brick. We recommend it mainly for dogs up to the stated maximum. With double-glazed and tempered glass, we recommend consulting a specialist. A user manual for all doors is included in the package.
Pet door dimensions: height 45.6 cm, width 38.6 cm. Flap size: height 35.6 cm, width 30.5 cm.
Can be installed in wood, PVC, glass and brick.
|
from setuptools import setup
with open('README.rst') as f:
readme = f.read()
setup(
name="tryagain",
version=__import__('tryagain').__version__,
license='MIT',
description="A lightweight and pythonic retry helper",
long_description=readme,
author="Thomas Feldmann",
author_email="[email protected]",
url="https://github.com/tfeldmann/tryagain",
py_modules=["tryagain"],
classifiers=[
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
# Specify the Python versions you support here. In particular, ensure
# that you indicate whether you support Python 2, Python 3 or both.
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.6',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
],
keywords=[
'retry', 'unstable', 'tryagain', 'redo', 'try', 'again', 'exception'],
)
|
I have a business card in Adobe Illustrator.
I need help cutting my logo out of this Adobe Illustrator file and providing me the logo from the card in a PNG file.
This is a 2 minute job.
Hello, I can do this quick job for you. Can you please send me the AI file? https://www.freelancer.com/u/TheVisionColor Contact me to discuss further details. Akraam Y.
It will take me only a few minutes to complete. I am ready to start the work and would like to see the existing file. I will make modifications until you are satisfied.
|
#coding:utf-8
# User Management
import web
import auth
from app.views import wrapper as views
from app.models import wtdata
import config
class register:
def GET(self):
return views.layout.register(info='')
def POST(self):
"""
Posted data has three keys:
'username', 'email', 'password'
"""
data = web.input()
email = data['email']
email_validate = auth.validate_email(email)
if not email_validate:
            return views.layout.register(info='invalid email address')
email_exist = wtdata.email_exist(email)
if email_exist:
            return views.layout.register(info='email already exists')
pwd = data['password']
        hashed_pwd = auth.encrypt_password(pwd)
        # (debug print of the password hash removed; avoid logging secrets)
user_info = {}
user_info['username'] = data['username']
user_info['password'] = hashed_pwd
user_info['email'] = email
user_id = wtdata.add_user(user_info)
wtdata.add_default_category(user_id)
return web.seeother('/login')
class login:
def GET(self):
return views.layout.login(info='')
def POST(self):
        data = web.input()
email = data['email']
if not config.DEBUG:
email_validate = auth.validate_email(email)
if not email_validate:
                return views.layout.login(info='email is not valid')
pwd = data['password']
hashed_pwd = wtdata.get_pwd_by_email(email)
if hashed_pwd:
login_success = auth.validate_password(hashed_pwd, pwd)
else:
login_success = False
if login_success:
username = wtdata.get_username_by_email(email)
userid = wtdata.get_userid_by_email(email)
auth.set_login_state(username, userid)
return web.seeother('/home/' + str(userid))
else:
return views.layout.login(info='Username or Password is Wrong')
class logout:
def GET(self):
auth.clear_login_state()
return web.seeother('/')
|
Spread the cost from only £104.88 a month with Interest Free Credit.
The Cross Round Table by Matthew Hilton for Case is a distinct design that accentuates a timeless look adorned with fine details. It combines comfort with functionality while creating a contemporary piece of furniture worth investing in for your home. The unique lines and shape of this table make it a very expressive piece and the perfect centrepiece in your room. Made of solid wood, the table is complemented by a chamfered edge detail, which gives it a lightweight and simple aesthetic. Hilton's signature splayed-leg design provides an impressive amount of legroom and stability for a comfortable dining experience. Seating up to 8 people around its 1.5 m top, it is robust enough for family dining and even the smallest dinner parties.
Comfortably seats 6 to 8 people, perfectly suited to family dining or small dinner parties.
Clear lacquer finish allows the natural warmth of the wood to come through, and requires minimal maintenance.
Made from solid wood, it features a large table top and pedestal-style base, designed to offer ample leg room and stability.
Pairs perfectly with the Profile Chair by Matthew Hilton.
Made from solid wood with a clear lacquer, the table requires little maintenance.
Chamfered edges provide a lightweight and simple aesthetic.
Pedestal style base is in-keeping with the Cross collection and helps provide ample leg room.
Solid Oak with clear polyurethane lacquer.
|
from setuptools import setup
setup(
name="electrum-frc-server",
version="0.9",
scripts=['run_electrum_frc_server','electrum-frc-server'],
install_requires=['plyvel','jsonrpclib', 'irc>=11'],
package_dir={
'electrumfrcserver':'src'
},
py_modules=[
'electrumfrcserver.__init__',
'electrumfrcserver.utils',
'electrumfrcserver.storage',
'electrumfrcserver.deserialize',
'electrumfrcserver.networks',
'electrumfrcserver.blockchain_processor',
'electrumfrcserver.server_processor',
'electrumfrcserver.processor',
'electrumfrcserver.version',
'electrumfrcserver.ircthread',
'electrumfrcserver.stratum_tcp',
'electrumfrcserver.stratum_http'
],
description="Freicoin Electrum Server",
author="Thomas Voegtlin",
author_email="[email protected]",
license="GNU Affero GPLv3",
url="https://github.com/spesmilo/electrum-server/",
long_description="""Server for the Electrum-FRC Lightweight Freicoin Wallet"""
)
|
Convenience Store Woman, the English-language debut of Japanese writer Murata, is brilliant, witty, and sweet in ways that recall Amélie and Shopgirl. Keiko, a Tokyo woman in her 30s, finds her calling as a checkout girl at a national convenience store chain called Smile Mart: Quirky Keiko, who has never fit in, can finally pretend to be a normal person. Her story of conforming for convenience (literally) is one that women all over the world know all too well, as is her family’s pressure to get married and settle down, but Murata’s sparkly writing and knack for odd, beautiful details are totally her own.
Before she’d even published in the U.S., New Zealand novelist Young had been lauded here—winning the prestigious Windham-Campbell Prize for nonfiction in 2017. Like a Lena Dunham from Down Under, Young’s writing explores fragility and resilience with a visceral, bodily focus. Her essays are both grounded in particulars—there are ruminations on Katherine Mansfield, arm hair, and the oppression of attempting to work in the same space as someone else—and universal.
Arceneaux is a hysterically funny, vulnerable writer whose memoir is a triumph of self-exploration, tinged with but not overburdened by his reckoning with our current political moment. His story of growing up black and gay in Houston, in his very Catholic family, is more than a little heartbreaking at times, though knowing that Arceneaux has come through the other side as a successful author with such a strong sense of self makes it ultimately uplifting. He navigates with crucial nuance his many worlds as they’ve hemmed him in, made him stronger, and brought him to new places. The result is a piece of personal and cultural storytelling that is as fun as it is illuminating.
Amid the raft of motherhood memoirs out this summer, it’s refreshing to read a book unapologetically dedicated to the fulfillment of single life. Like a more zoomed-in chapter from Rebecca Traister’s All the Single Ladies, MacNicol’s offering is a personable, entertaining reflection on the author’s 40th year. Though she is known for her exacting, emotional, and poignant writing on things like career burnout and parental illness, as well as founding the women’s networking organization The List, this memoir allows MacNicol a broader and looser canvas.
When Davis left her urban, magazine-editing life, she wanted to do something more fundamental. Rather than tell people what to eat, where to travel, and how to lead their best lives, she wanted to, well, learn how to carve a pig. The quest took her to a slaughterhouse in Gascony, France, where, under the tutelage of some no-nonsense meat-revering brothers, she learned the art and labor of butchery. Back in the U.S., she founded the Portland Meat Collective, a school that would extend the lessons she had absorbed and bring the experience of eating and cooking meat away from mass production and into people’s hands and homes. Davis writes with the precision and pacing of a former editor, but one who has gained experience that extends well beyond Manhattan skyscrapers.
A dense, powerful novel, Citkowitz’s The Shades tells the story of married couple Catherine and Michael Hall and their lives following their daughter’s death. This is only Citkowitz’s second book—her debut, Ether, was published in 2010—but it reads like the work of a writer with a much lengthier pedigree. In fact, Citkowitz possesses literary bona fides that are not immediately apparent, having worked as a screenwriter and coming from bookish stock: Her mother was Anglo-Irish novelist Lady Caroline Blackwood and her stepfather was poet Robert Lowell.
This debut novel from Colombian writer Contreras—original, politically daring, and passionately written—is the coming-of-age female empowerment story we need in 2018, infusing a genre that feels largely tapped out with much-needed life. The author creates a lush but very harsh world—Pablo Escobar’s Bogota of the 1990s—that could feel familiar (see: Narcos) if not for her heroines, who make the story new again. Two young women, one wealthy and one not, try to survive in increasingly unstable times, and what unfolds is an intimate story that also feels global, showing how women in particular are caught up in violence and resource exploitation.
Dangarembga’s first novel, Nervous Conditions, published when she was in her 20s, has since become part of the national canon of Zimbabwe, called by Doris Lessing “the novel we have all been waiting for.” (It also recently ranked on the BBC’s The 100 Stories That Shaped the World.) Set in what was then called Rhodesia, before the country ended white rule, that first book follows her protagonist, Tambudzai Sigauke, as she embarks upon her education. Dangarembga has returned to that central character in her latest book, opening with her living in a squalid youth hostel in Harare, confronting the limits of her hoped-for future.
When Jeanne McCulloch’s mother insisted that her father go off alcohol cold turkey before her daughter’s wedding, it sent Mr. McCulloch into a coma. But the show—or wedding—must go on, at least according to Jeanne’s high-drama mother, and so while her father lay in Southampton Hospital, Jeanne walked down the aisle and into her short-lived marriage. Set mostly over the course of the wedding weekend in 1983, the memoir expands to tell a bigger story about society in the late 20th century. Partly a breezy scene piece, partly a meditation on the familial forces that make us who we are, All Happy Families is a distinct and evocative work.
Is seaweed the new kale? The market for edible seaweed, according to Shetterly’s Seaweed Chronicles, is about $5 billion per year. (Some think the longevity of the Japanese people can be traced to their consumption of the algae.) This book is no diet manifesto, however; instead it’s a lovely, first-person consideration of the diaphanous organisms—and an evaluation of their environmental history and promise: A carbon-absorbing environmental superhero, macro-algae biofuel may fuel your VWs and Hondas in the future, to boot.
New England native Nicole Hennessy is devoted to her Buddhist practice and a particularly demanding guru. But after several years as the Master’s most dedicated pupil—and sometimes sexual partner—she finds herself needing to sever the connection and move on with her life. A belated coming-of-age novel with a quest at its core, The Devoted is an assured and promising debut.
Riffing off of Virginia Woolf’s desire for “a room of one’s own,” renowned literary biographer Tomalin’s memoir recounts how she ironically found herself in bringing other, mostly dead people back to life in her work. The writer is a master craftswoman, and it’s a thrill to see her prose and capacity for moving storytelling turned on her own life. Delving into the more difficult parts of her past—her husband, a journalist, was killed in Israel in 1973, leaving her on her own with their four children—Tomalin also re-creates a few raucous London literary scenes (particularly in the ’70s and ’80s) and discusses falling in love again in her 50s. If it leads you to read some of her biographies (Jane Austen is a favorite), you’ll be better off.
|
from django import template
from django.conf import settings
from django.db.models import (
Count,
Max,
Min,
)
register = template.Library()
TAG_MAX = getattr(settings, 'TAGCLOUD_MAX', 5.0)
TAG_MIN = getattr(settings, 'TAGCLOUD_MIN', 1.0)
def get_weight_closure(tag_min, tag_max, count_min, count_max):
"""Gets a closure for generating the weight of the tag.
Args:
tag_min: the minimum weight to use for a tag
tag_max: the maximum weight to use for a tag
count_min: the minimum number a tag is used
count_max: the maximum number a tag is used
Returns:
A closure to be used for calculating tag weights
"""
def linear(count, tag_min=tag_min, tag_max=tag_max,
count_min=count_min, count_max=count_max):
# Prevent a division by zero here, found to occur under some
# pathological but nevertheless actually occurring circumstances.
if count_max == count_min:
factor = 1.0
else:
factor = float(tag_max - tag_min) / float(count_max - count_min)
return tag_max - (count_max - count) * factor
return linear
@register.simple_tag  # assignment_tag was removed in Django 2.0; simple_tag supports the same '... as var' usage
def get_weighted_tags(tags):
"""Annotates a list of tags with the weight of the tag based on use.
Args:
tags: the list of tags to annotate
Returns:
The tag list annotated with weights
"""
# Annotate each tag with the number of times it's used
use_count = tags.annotate(use_count=Count('taggit_taggeditem_items'))
if len(use_count) == 0:
return tags
# Get the closure needed for adding weights to tags
get_weight = get_weight_closure(
TAG_MIN,
TAG_MAX,
use_count.aggregate(Min('use_count'))['use_count__min'],
use_count.aggregate(Max('use_count'))['use_count__max'])
tags = use_count.order_by('name')
# Add weight to each tag
for tag in tags:
tag.weight = get_weight(tag.use_count)
return tags
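# --- Usage sketch (comments only, not part of the original module) ---
# Assumptions: this module lives in a templatetags package, here
# hypothetically named 'tagcloud_tags', inside an installed app, and the
# tags come from django-taggit (implied by 'taggit_taggeditem_items').
# Inside a template:
#
#     {% load tagcloud_tags %}
#     {% get_weighted_tags tags as weighted %}
#     {% for tag in weighted %}
#         <span style="font-size: {{ tag.weight }}em">{{ tag.name }}</span>
#     {% endfor %}
#
# where 'tags' is a taggit Tag queryset passed in by the view; with the
# default settings each weight falls between TAGCLOUD_MIN and TAGCLOUD_MAX.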
|
Our platform gives you all the tools you need, plus dedicated dev support to make sure you launch booking with success.
Build brand-consistent booking flows directly within your chat application.
Send leads directly to your clients' CRM in order to track their success down the funnel.
Define a default service across all vendors and push leads through the booking flow faster.
Enter a website to see what chat with automated booking flows can do.
Sync to your vendors' Google and Outlook calendars for real-time availability.
Easily view, evaluate and track a lead's journey from start to finish.
Enhance existing tools, and discover more ways to connect your clients with their leads.
Build brand-consistent booking flows into your application with our open API.
Create qualifying questions that are unique to each of your clients, in order to qualify their leads.
Notify your client of newly booked appointments with customized email notifications and reminders.
Synchronize your clients' external calendars and send bookings directly into their Google or Outlook calendar.
Notify and remind both your clients and their leads by SMS so they never miss a meeting.
Book any client, no matter where they are in the world, with OnSched's worldwide time zone support.
|
"""Composable matchers for HTTP headers."""
import re
RE_TYPE = type(re.compile(r''))
class BaseMatcher:
"""Matcher base class."""
def match(self, request):
"""
Check HTTP request headers against some criteria.
This method checks whether the request headers satisfy some
criteria or not. Subclasses should override it and provide a
specialized implementation.
The default implementation just returns False.
`request`: a Django request.
returns: a boolean.
"""
return False
def __invert__(self):
return Not(self)
def __and__(self, other):
return And(self, other)
def __or__(self, other):
return Or(self, other)
def __xor__(self, other):
return Xor(self, other)
class And(BaseMatcher):
"""Composite matcher that implements the bitwise AND operation."""
def __init__(self, matcher1, matcher2):
"""
Initialize the instance.
`matcher1`, `matcher2`: matchers of any type.
"""
self._matchers = (matcher1, matcher2)
def match(self, request):
"""
Compute the bitwise AND between the results of two matchers.
`request`: a Django request.
returns: a boolean.
"""
return all(matcher.match(request) for matcher in self._matchers)
def __repr__(self):
return '({!r} & {!r})'.format(*self._matchers)
class Or(BaseMatcher):
"""Composite matcher that implements the bitwise OR operation."""
def __init__(self, matcher1, matcher2):
"""
Initialize the instance.
`matcher1`, `matcher2`: matchers of any type.
"""
self._matchers = (matcher1, matcher2)
def match(self, request):
"""
Compute the bitwise OR between the results of two matchers.
`request`: a Django request.
returns: a boolean.
"""
return any(matcher.match(request) for matcher in self._matchers)
def __repr__(self):
return '({!r} | {!r})'.format(*self._matchers)
class Xor(BaseMatcher):
"""Composite matcher that implements the bitwise XOR operation."""
def __init__(self, matcher1, matcher2):
"""
Initialize the instance.
`matcher1`, `matcher2`: matchers of any type.
"""
self._matcher1 = matcher1
self._matcher2 = matcher2
def match(self, request):
"""
Compute the bitwise XOR between the results of two matchers.
`request`: a Django request.
returns: a boolean.
"""
return self._matcher1.match(request) is not self._matcher2.match(request)
def __repr__(self):
return '({!r} ^ {!r})'.format(self._matcher1, self._matcher2)
class Not(BaseMatcher):
"""Composite matcher that implements the bitwise NOT operation."""
def __init__(self, matcher):
"""
Initialize the instance.
`matcher`: a matcher of any type.
"""
self._matcher = matcher
def match(self, request):
"""
Compute the bitwise NOT of the result of a matcher.
`request`: a Django request.
returns: a boolean.
"""
return not self._matcher.match(request)
def __repr__(self):
return '~{!r}'.format(self._matcher)
class Header(BaseMatcher):
"""HTTP header matcher."""
def __init__(self, name, value):
"""
Initialize the instance.
`name`: a header name, as string.
`value`: a header value, as string, compiled regular expression
object, or iterable of strings.
"""
self._name = name
self._value = value
self._compare_value = self._get_value_comparison_method()
def _get_value_comparison_method(self):
if isinstance(self._value, RE_TYPE):
return self._compare_value_to_re_object
if isinstance(self._value, str):
return self._compare_value_to_str
return self._compare_value_to_iterable
def _compare_value_to_re_object(self, request_value):
return bool(self._value.fullmatch(request_value))
def _compare_value_to_str(self, request_value):
return request_value == self._value
def _compare_value_to_iterable(self, request_value):
return request_value in set(self._value)
def match(self, request):
"""
Inspect a request for headers with given name and value.
This method checks whether:
a) the request contains a header with the same exact name the
matcher has been initialized with, and
b) the header value is equal, matches, or belongs to the value
the matcher has been initialized with, depending on that being
respectively a string, a compiled regexp object, or an iterable
of strings.
`request`: a Django request.
returns: a boolean.
"""
try:
request_value = request.META[self._name]
except KeyError:
return False
else:
return self._compare_value(request_value)
def __repr__(self):
return '{}({!r}, {!r})'.format(self.__class__.__name__, self._name, self._value)
class HeaderRegexp(BaseMatcher):
"""HTTP header matcher based on regular expressions."""
def __init__(self, name_re, value_re):
"""
Initialize the instance.
`name_re`: a header name, as regexp string or compiled regexp
object.
`value_re`: a header value, as regexp string or compiled regexp
object.
"""
self._name_re = re.compile(name_re)
self._value_re = re.compile(value_re)
def match(self, request):
"""
Inspect a request for headers that match regular expressions.
This method checks whether the request contains at least one
header whose name and value match the respective regexps the
matcher has been initialized with.
`request`: a Django request.
returns: a boolean.
"""
for name, value in request.META.items():
if self._name_re.fullmatch(name) and self._value_re.fullmatch(value):
return True
return False
def __repr__(self):
return '{}({!r}, {!r})'.format(self.__class__.__name__, self._name_re, self._value_re)
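# --- Usage sketch (a minimal demo, not part of the original module) ---
# Assumptions: Django is installed, and settings.configure() with the
# framework defaults is enough for RequestFactory in this context.
if __name__ == '__main__':
    from django.conf import settings
    settings.configure()
    from django.test import RequestFactory

    request = RequestFactory().get('/', HTTP_X_REQUESTED_WITH='XMLHttpRequest')
    is_ajax = Header('HTTP_X_REQUESTED_WITH', 'XMLHttpRequest')
    wants_json = HeaderRegexp(r'HTTP_ACCEPT', r'.*application/json.*')
    matcher = is_ajax & ~wants_json
    print(matcher)                 # (Header(...) & ~HeaderRegexp(...))
    print(matcher.match(request))  # True: header present, Accept regexp not matched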
|
A rare snow and ice storm is hitting the southeastern U.S. this morning, paralyzing travelers and stranding school buses.
States of emergency have been declared in North Carolina, South Carolina, Georgia, Alabama, Louisiana and Mississippi.
Many residents across the region are inexperienced at driving in the snow, and many cities don't have big fleets of salt trucks or snowplows.
Many don't even own a snow shovel, and stores have been quickly selling out.
The few inches of snow and slick roads have caused massive gridlock and hundreds of accidents. As of last night, Georgia officials were reporting nearly 1,000 crashes, causing more than 100 injuries and one fatality.
Schools closed early on Tuesday as the snow began to fall and there were several reports of stranded school buses in Georgia. Thousands of other students in Alabama and Georgia spent the night in their schools as wind chills dropped into the single digits.
In the report above, WAGA-TV reporter Buck Lanford brought us the latest on America's Newsroom. You can read more details on the storm at FoxNews.com.
In the segment below, Martha MacCallum spoke by phone with Montgomery, Alabama Mayor Todd Strange, who explained that his city does not have snowplows and salting equipment. Overall though, the city has not experienced any major problems. Strange said city and local government workers were sent home well in advance.
In a press conference moments ago, Georgia Governor Nathan Deal gave an update on the children who were stranded at their schools. Students are being escorted back to their homes by police and buses.
|
import numpy
from six.moves import range

from .m_step_cy import *

__all__ = ['compute_cluster_mean', 'compute_covariance_matrix']
def get_diagonal(x):
'''
Return a writeable view of the diagonal of x
'''
return x.reshape(-1)[::x.shape[0]+1]
def compute_cluster_mean(kk, cluster):
data = kk.data
num_clusters = len(kk.num_cluster_members)
num_features = kk.num_features
cluster_mean = numpy.zeros(num_features)
num_added = numpy.zeros(num_features, dtype=int)
spikes = kk.get_spikes_in_cluster(cluster)
prior = 0
if cluster==kk.mua_cluster:
prior = kk.mua_point
elif cluster>=kk.num_special_clusters:
prior = kk.prior_point
do_compute_cluster_mean(
spikes, data.unmasked, data.unmasked_start, data.unmasked_end,
data.features, data.values_start, data.values_end,
cluster_mean, num_added, data.noise_mean, prior)
return cluster_mean
def compute_covariance_matrix(kk, cluster, cluster_mean, cov):
if cluster<kk.first_gaussian_cluster:
return
data = kk.data
num_cluster_members = kk.num_cluster_members
num_clusters = len(num_cluster_members)
num_features = kk.num_features
block = cov.block
block_diagonal = get_diagonal(block)
spikes_in_cluster = kk.spikes_in_cluster
spikes_in_cluster_offset = kk.spikes_in_cluster_offset
spike_indices = spikes_in_cluster[spikes_in_cluster_offset[cluster]:spikes_in_cluster_offset[cluster+1]]
f2m = numpy.zeros(num_features)
ct = numpy.zeros(num_features)
if kk.use_mua_cluster and cluster==kk.mua_cluster:
point = kk.mua_point
do_var_accum_mua(spike_indices, cluster_mean,
kk.data.noise_mean, kk.data.noise_variance,
cov.unmasked, block,
data.unmasked, data.unmasked_start, data.unmasked_end,
data.features, data.values_start, data.values_end,
f2m, ct, data.correction_terms, num_features,
)
else:
point = kk.prior_point
do_var_accum(spike_indices, cluster_mean,
kk.data.noise_mean, kk.data.noise_variance,
cov.unmasked, block,
data.unmasked, data.unmasked_start, data.unmasked_end,
data.features, data.values_start, data.values_end,
f2m, ct, data.correction_terms, num_features,
)
# add correction term for diagonal
cov.diagonal[:] += len(spike_indices)*data.noise_variance[cov.masked]
# Add prior
block_diagonal[:] += point*data.noise_variance[cov.unmasked]
cov.diagonal[:] += point*data.noise_variance[cov.masked]
# Normalise
factor = 1.0/(num_cluster_members[cluster]+point-1)
cov.block *= factor
cov.diagonal *= factor
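# --- Note on get_diagonal (not part of the original module) ---
# For a C-contiguous square array the diagonal entries sit shape[0]+1 apart
# in the flattened buffer, so the strided view above is writeable, unlike
# ndarray.diagonal(), which returns a read-only view. For example:
#     a = numpy.zeros((4, 4))
#     d = get_diagonal(a)
#     d += 1.0                          # writes through to a
#     assert numpy.allclose(a, numpy.eye(4))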
|
Justine Cowan has been a writer, in various capacities, for 25 years. Her first screenplay, "A Grunt's Tale", was awarded Honorable Mention in the 77th Annual Writer's Digest Writing Competition (out of 17,000 entries). "A Grunt's Tale" was also recognized as a top winner in the SkyFest I 2008 Script Competition.
Justine's other completed screenplays include "The Octopus Charmer" (which was also recognized as a top winner in the SkyFest I 2008 Script Competition), "Flashdrive", and "Janie Loves Bollywood". She is currently working on a drama about baseball.
|
# Example of conversion of scattering cross section from SANS in absolute
# units into SESANS using a Hankel transformation
# everything is in units of metres except specified otherwise
# Wim Bouwman ([email protected]), June 2013
from __future__ import division
from pylab import *
from scipy.special import jv as besselj
# q-range parameters
q = arange(0.0003, 1.0, 0.0003); # [nm^-1] range wide enough for Hankel transform
dq=(q[1]-q[0])*1e9; # [m^-1] step size in q, needed for integration
nq=len(q);
Lambda=2e-10; # [m] wavelength
# sample parameters
phi=0.1; # volume fraction
R=100; # [nm] radius particles
DeltaRho=6e14; # [m^-2]
V=4/3*pi*R**3 * 1e-27; # [m^3]
th=0.002; # [m] thickness sample
#2 PHASE SYSTEM
st= 1.5*Lambda**2*DeltaRho**2*th*phi*(1-phi)*R*1e-9 # scattering power in sesans formalism
# Form factor solid sphere
qr=q*R;
P=(3.*(sin(qr)-qr*cos(qr)) / qr**3)**2;
# Structure factor dilute
S=1.;
#2 PHASE SYSTEM
# scattered intensity [m^-1] in absolute units according to SANS
I=phi*(1-phi)*V*(DeltaRho**2)*P*S;
clf()
subplot(211) # plot the SANS calculation
plot(q,I,'k')
loglog(q,I)
xlim([0.01, 1])
ylim([1, 1e9])
xlabel(r'$Q [nm^{-1}]$')
ylabel(r'$d\Sigma/d\Omega [m^{-1}]$')
# Hankel transform to nice range for plot
nz=61;
zz=linspace(0,240,nz); # [nm], should be less than reciprocal from q
G=zeros(nz);
for i in range(len(zz)):
integr=besselj(0,q*zz[i])*I*q;
G[i]=sum(integr);
G=G*dq*1e9*2*pi; # integration step, convert q into [m^-1], and 2*pi for the circular integration
# plot(zz,G);
stt= th*Lambda**2/4/pi/pi*G[0] # scattering power according to SANS formalism
PP=exp(th*Lambda**2/4/pi/pi*(G-G[0]));
subplot(212)
plot(zz,PP,'k',label="Hankel transform") # Hankel transform 1D
xlabel('spin-echo length [nm]')
ylabel('polarisation normalised')
# hold(True) was removed in matplotlib 3.0; subsequent plot() calls overlay by default
# Cosine transformation of 2D scattering patern
if False:
qy,qz = meshgrid(q,q)
qr=R*sqrt(qy**2 + qz**2); # reuse variable names Hankel transform, but now 2D
P=(3.*(sin(qr)-qr*cos(qr)) / qr**3)**2;
# Structure factor dilute
S=1.;
# scattered intensity [m^-1] in absolute units according to SANS
I=phi*V*(DeltaRho**2)*P*S;
GG=zeros(nz);
for i in range(len(zz)):
integr=cos(qz*zz[i])*I;
GG[i]=sum(sum(integr));
GG=4*GG* dq**2; # take integration step into account; factor 4 for the four quadrants
# plot(zz,GG);
sstt= th*Lambda**2/4/pi/pi*GG[0] # scattering power according to SANS formalism
PPP=exp(th*Lambda**2/4/pi/pi*(GG-GG[0]));
plot(zz,PPP,label="cosine transform") # cosine transform 2D
# For comparison calculation in SESANS formalism, which overlaps perfectly
def gsphere(z,r):
"""
Calculate SESANS-correlation function for a solid sphere.
Wim Bouwman after formulae Timofei Kruglov J.Appl.Cryst. 2003 article
"""
d = z/r
g = zeros_like(z)
g[d==0] = 1.
low = ((d > 0) & (d < 2))
dlow = d[low]
dlow2 = dlow**2
    print(dlow.shape, dlow2.shape)
g[low] = sqrt(1-dlow2/4.)*(1+dlow2/8.) + dlow2/2.*(1-dlow2/16.)*log(dlow/(2.+sqrt(4.-dlow2)))
return g
if True:
plot(zz,exp(st*(gsphere(zz,R)-1)),'r', label="analytical")
legend()
show()
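# --- Vectorized alternative (a sketch, not in the original script) ---
# numpy broadcasting can replace the explicit Hankel-transform loop above;
# this reuses q [nm^-1], I [m^-1], zz [nm] and dq [m^-1] defined earlier.
from scipy.special import j0
J = j0(outer(zz, q))            # (nz, nq) Bessel kernel; q*z is dimensionless
G_vec = 2*pi*J.dot(I*q*1e9)*dq  # q converted to [m^-1], as in the loop version
assert allclose(G_vec, G)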
|
Grand Theft Auto free download whatsapp for sony ericsson wi for world also 's the social Rockstar Editor, which is js a simple time of creating schools to find. whatsapp sony ericsson for Sony-Ericsson Wi apps Download and whatsapp sony ericsson for Sony-Ericsson Wi games download from brothersoft. whatsapp for pc for Sony-Ericsson Wi apps Download and whatsapp for pc for Sony-Ericsson Wi games download from brothersoft mobile.
free download whatsapp for Sony-Ericsson Wi apps Download and free download whatsapp for Sony-Ericsson Wi games download from brothersoft . Free download whatsapp for sony ericsson wi. Download. Wifi hacker free download for nokia mobile. Whatsapp messenger for symbian v Backuptrans android whatsapp transfer. Download whatsapp application for sony ericsson wi. Jual sony Sony ericsson wi free download brothersoft .
WhatsApp Messenger is a smartphone messaging app which allows you to exchange messages with your friends and contacts without having to pay for SMS. WHATSAPP INSTALL/USE - Guide on whatApp installation and for Sony Ericsson C - Download App Free. As what download whatsapp emblazoned, an fantastic voice works been to find timers due and good to know in the project. If you consider those policies by. WhatsApp is a direct iPhone 2 iPhone messenger chat application. android phone, apk, Download WhatsApp Messenger Mobile Software For Symbian Phones . Sony-Ericsson: C, Equinox TM, Hazel, J Naite, Ji Cedar, Ji, Wi, Wi, Wi, W, Wi, Wi, Wiv, Wi, Wi, Wiv. Sony Ericsson: Ki, Pi, Wi, Mi, Ki, Si, Sa, Wi, Wi, Wi, Wi, Wi, Wi, Ki, Ki, Ki, Wi, Si, Wc.
23 Mar I ended up reluctantly getting a Sony Ericsson Wi. The good thing is that I won't be having it with me all the time. I certainly intend to have. Sony Ericsson Hazel MORE PICTURES. Released , May Compare · Opinions. Also known as Sony Ericsson Hazel GreenHeart but there is no whatsapp and youtube working in my phone. Reply Sony Ericsson W · More related. Get WhatsApp Messenger and say goodbye to smsWhatsApp Messenger is a Download WhatsApp Messenger Mobile Software For Symbian Phones . Sony- Ericsson: C, Equinox TM, Hazel, J Naite, Ji Cedar, Ji, P, Wi, Wi, Wi, W, Wi, Wi, Wiv, Wi, Wi, Wiv. Hi, If you forget the security code, you get this error.. The default security code for nokia is If you forgot you can generate master code.
|
from werkzeug.security import generate_password_hash, check_password_hash
from . import db
from . import login_manager
from flask_login import UserMixin
class Role(db.Model):
__tablename__ = 'roles'
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(64), unique=True)
users = db.relationship('User', backref='role', lazy='dynamic')
def __repr__(self):
return '<Role %r>' % self.name
class User(db.Model, UserMixin):
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True)
email = db.Column(db.String(120), unique=True, index=True)
username = db.Column(db.String(64), unique=True, index=True)
role_id = db.Column(db.Integer, db.ForeignKey('roles.id'))
password_hash = db.Column(db.String(128))
@property
def password(self):
raise AttributeError('password is not a readable attribute')
@password.setter
def password(self, password):
self.password_hash = generate_password_hash(password)
def verify_password(self, password):
return check_password_hash(self.password_hash, password)
def __repr__(self):
return '<User %r>' % self.username
@login_manager.user_loader
def load_user(user_id):
return User.query.get(int(user_id))
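# --- Usage sketch (comments only, so the module stays importable) ---
# Run inside an application context, e.g. `flask shell`; the values below
# are illustrative only:
#
#     u = User(email='[email protected]', username='alice')
#     u.password = 'cat'         # stored only as a salted hash
#     db.session.add(u)
#     db.session.commit()
#     u.verify_password('cat')   # True
#     u.verify_password('dog')   # False
#     u.password                 # raises AttributeError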
|
How many Amazon Alexa Echoes were sold over the 2017 holidays?
Amazon sold "millions" more Amazon devices than a year ago over the holidays.
Echo Dot was the top-selling product on Amazon across all categories.
Echo Spot and Echo Buttons also sold out.
Looking for an indicator that Amazon's Alexa is a beloved product? This holiday season, shoppers asked Alexa to order ... more Alexa devices.
The company said on Tuesday that it sold "tens of millions" of devices powered by its smart voice assistant.
While there are scores of purchases that can be made using Alexa, the top purchases made through "voice shopping" were the Echo Dot and Fire TV Stick with Alexa Voice Remote, the company said.
Amazon's Alexa app also topped the App Store charts for Apple's iPhone, presumably as people set up their new devices.
The small, inexpensive Echo Dot was the top-selling product on Amazon across all categories, and the newer Echo Spot and Echo Buttons also sold out.
Alexa also seems to be a big driver of the Fire TV category. Twice as many Fire TV Sticks (the less-sophisticated version) were purchased this year — but usage of Alexa on the higher-end Fire TV devices was up 889 percent year over year in the U.S.
Alexa's breakout holiday seems to come hand-in-hand with a surge in smart-home interest in general: Smart plugs, smart light bulbs, and robot vacuums were also top sellers.
"The more that Amazon can not just get its Echo devices out there, but get its Alexa technology embedded in a variety of things, including cars, that'll be the long-term win," RBC analyst Mark Mahaney told CNBC's "Squawk Alley" on Tuesday.
|
__author__ = 'sibirrer'
# this file contains a class to evaluate a Sersic mass profile
import numpy as np
import astrofunc.util as util
from astrofunc.LensingProfiles.sersic_utils import SersicUtil
import astrofunc.LensingProfiles.calc_util as calc_util
class Sersic(SersicUtil):
"""
this class contains functions to evaluate a Sersic mass profile: https://arxiv.org/pdf/astro-ph/0311559.pdf
"""
def function(self, x, y, n_sersic, r_eff, k_eff, center_x=0, center_y=0):
"""
        returns the lensing potential of the Sersic profile
"""
n = n_sersic
x_red = self._x_reduced(x, y, n, r_eff, center_x, center_y)
b = self.b_n(n)
#hyper2f2_b = util.hyper2F2_array(2*n, 2*n, 1+2*n, 1+2*n, -b)
hyper2f2_bx = util.hyper2F2_array(2*n, 2*n, 1+2*n, 1+2*n, -b*x_red)
f_eff = np.exp(b)*r_eff**2/2.*k_eff# * hyper2f2_b
f_ = f_eff * x_red**(2*n) * hyper2f2_bx# / hyper2f2_b
return f_
def derivatives(self, x, y, n_sersic, r_eff, k_eff, center_x=0, center_y=0):
"""
returns df/dx and df/dy of the function
"""
x_ = x - center_x
y_ = y - center_y
r = np.sqrt(x_**2 + y_**2)
if isinstance(r, int) or isinstance(r, float):
r = max(self._s, r)
else:
r[r < self._s] = self._s
alpha = -self.alpha_abs(x, y, n_sersic, r_eff, k_eff, center_x, center_y)
f_x = alpha * x_ / r
f_y = alpha * y_ / r
return f_x, f_y
def hessian(self, x, y, n_sersic, r_eff, k_eff, center_x=0, center_y=0):
"""
returns Hessian matrix of function d^2f/dx^2, d^f/dy^2, d^2/dxdy
"""
x_ = x - center_x
y_ = y - center_y
r = np.sqrt(x_**2 + y_**2)
if isinstance(r, int) or isinstance(r, float):
r = max(self._s, r)
else:
r[r < self._s] = self._s
d_alpha_dr = self.d_alpha_dr(x, y, n_sersic, r_eff, k_eff, center_x, center_y)
alpha = -self.alpha_abs(x, y, n_sersic, r_eff, k_eff, center_x, center_y)
#f_xx_ = d_alpha_dr * calc_util.d_r_dx(x_, y_) * x_/r + alpha * calc_util.d_x_diffr_dx(x_, y_)
#f_yy_ = d_alpha_dr * calc_util.d_r_dy(x_, y_) * y_/r + alpha * calc_util.d_y_diffr_dy(x_, y_)
#f_xy_ = d_alpha_dr * calc_util.d_r_dy(x_, y_) * x_/r + alpha * calc_util.d_x_diffr_dy(x_, y_)
f_xx = -(d_alpha_dr/r + alpha/r**2) * x_**2/r + alpha/r
f_yy = -(d_alpha_dr/r + alpha/r**2) * y_**2/r + alpha/r
f_xy = -(d_alpha_dr/r + alpha/r**2) * x_*y_/r
return f_xx, f_yy, f_xy
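# --- Consistency check (a sketch, not part of the original module) ---
# Assumptions: astrofunc is installed and Sersic() can be constructed
# without arguments; parameter values below are illustrative only. The
# analytic deflection from derivatives() should agree with a central
# finite-difference gradient of the potential function().
if __name__ == '__main__':
    sersic = Sersic()
    kwargs = {'n_sersic': 4., 'r_eff': 1., 'k_eff': 0.5}
    x, y, eps = 1.3, -0.7, 1e-6
    f_x, f_y = sersic.derivatives(x, y, **kwargs)
    f_x_num = (sersic.function(x + eps, y, **kwargs)
               - sersic.function(x - eps, y, **kwargs)) / (2 * eps)
    f_y_num = (sersic.function(x, y + eps, **kwargs)
               - sersic.function(x, y - eps, **kwargs)) / (2 * eps)
    np.testing.assert_allclose([f_x, f_y], [f_x_num, f_y_num], rtol=1e-4)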
|
Hi there, I discovered your website by way of Google at the same time as searching for a related subject, your web site got here up, it appears great. I have added to favourites|added to my bookmarks.
I can’t get into my facebook account, they are tellin me to please verify my account. To tell my webmaster about the problem. What could be the problem? Can some1 assis me plz n thankz?
The funniest part about this thread is that you’re at school. I know you think it’s a joke, but all of the people trying to block you are a lot smarter than you think. They don’t block and forget, they often review logs and look for suspicious names like “www.lamekidswanteasyanswers.com/proxy”. Forget about it, go to class, and maybe one day you’ll be smart enough to figure it out on your own. Have a good day!
Hi Guys, tell me please, I am now in Germany, and my parents are in Kiev how to to pay them anything so talk to me less? I found just such a an article, maybe someone has used a similar service, or heard of him? Tell me please whether this actually real?
—www.moneypizza.com—- this is a great proxy, doesn’t use java, but for facebook, youtube or anythingelse it’s great!
As per my guess your site http://www.webmasterview.com has a truly clean and very simple look. I’ll try to be creative just like your website.
this will take you to facebook even if it’s blocked…(notice how there is an s after the http.
Is there other proxise to unblock sites from school or work?
Because the school has blocked this sites using this proxy and maybe other proxies.
Please leave a comment here to read the response anyone that can provide us better proxies that most schools and work do not block them at all.
It no longer works for me. It takes me to a ‘view’ page, and then takes me back to your site and says something about subscriptions. Please fix this, I used it often and was really grateful you had it up for us to use.
help!!!!!!!! i’m struggling with a blocker called x-keeper and i want to play Runescape which it blocks it! can anyone help me?
yall Should Unblock Facebook !!
http://www.webmasterview.com is a pretty good site so far.
Nothing works so far.. exept i can log on facebook but not acces anyone elses pages or my own for that matter, needd some proxys :}.
how come some times it does not work?
If system load increases due to high usage, proxy will doze off temporarily.
any one no how to get onto chatroulette.
i have to plug my phone into my laptop to get internet and i cant seem to get passed the content control page. i am able to get on youtube and facebok and stuff but cant get on chatroulette, does any one know of any where i can get passed it?
May 27, 2010 at 6:36 pm Gee.
They have blocked all the https: and almost every proxy, anyone?
myspace wont come up right now, but im sure it will here soon. im praying to god it does cuz i need to change my status!
i need a proxy! its very urgent! im very bored at school…and i am not able to get on anything! please help me…most of the proxies have been blocked by our district.
Hey, my school has blocked all sites that’re labled as Proxies. The only one I can use is Meebo which only allows you to chat.
Do you guys know of any other sites that are not labeled as proxies???
there is no way to get to any of these sits. i bet teachers look on here and c and then they know to block them. cz iv tried so many different ones. and none of them work. all denide!
my school has blocked facebook and now i cant get into anything. can anyone tell me how to unblock facebook on my school computers?
Where did the proxy go?
If you could get this site to work that would fantastic! Right now I can get to the login part but can’t get past login. After login there is a sever selection and then it loads a flash page. Thanks much, btw other sites listed as proxies will be blocked so I can’t use those.
If you r School dont allow Youtube. Then do this http://video.google.ca/?hl=en&tab=wv then search your video.. it will say youtube tho but its on google they cant do nothing google is a safe server any questions email me with your name and what your emailing about..
help ive tried everything! get me on facebook!
DUDE NONE of these work! i am not joking my school blocked EVERYTHING!
anyone know how to unblock myspace from school ????
dude, my school filter detects proxies and all that.
i can go to the login but it wont load after that whats up !??
how can you get around blocked websites when the sites to unblock it are blocked?
i can only get onto bebo and facebooks log in pages and it wont load after that why is this??
Alright. I still haven’t figured out a way to get onto sites like myspace, and facebook. but I have figured out a way to get on Msn, AIM, and yahoo.
remember to include the S after http.
hey i need hlep getting all the proxys to be unblocked to get around fliter…. can some1 help me this filter does detect outside sourses and anonomous websites!!! can some1 please help me!!!! i need to game!…. the fliters name is… smart filter and i cant get on anything fun!!! please help me! (distress)!!!!
this would be awesome if you could get into ur facebook page…whiich i can’t do. do u know any websites that allows u to do so???
if you realy have probs with getting to blocked websites try http://www.flyproxy.com also http://www.vvd1.net it’ll really work! enjoy!
|