''' My alterations of hist2d from the triangle package. '''
import numpy as np
import matplotlib.pyplot as pl
from matplotlib.colors import LinearSegmentedColormap
import matplotlib.cm as cm


def hist2d(x, y, *args, **kwargs):
    """ Plot a 2-D histogram of samples. """
    ax = kwargs.pop("ax", pl.gca())
    extent = kwargs.pop("extent", [[x.min(), x.max()], [y.min(), y.max()]])
    bins = kwargs.pop("bins", 50)
    color = kwargs.pop("color", "k")
    linewidths = kwargs.pop("linewidths", None)
    plot_datapoints = kwargs.get("plot_datapoints", True)
    plot_contours = kwargs.get("plot_contours", False)

    cmap = kwargs.get("cmap", 'gray')
    cmap = cm.get_cmap(cmap)
    cmap._init()
    cmap._lut[:-3, :-1] = 0.
    cmap._lut[:-3, -1] = np.linspace(1, 0, cmap.N)

    X = np.linspace(extent[0][0], extent[0][1], bins + 1)
    Y = np.linspace(extent[1][0], extent[1][1], bins + 1)
    try:
        H, X, Y = np.histogram2d(x.flatten(), y.flatten(), bins=(X, Y),
                                 weights=kwargs.get('weights', None))
    except ValueError:
        raise ValueError("It looks like at least one of your sample columns "
                         "have no dynamic range. You could try using the "
                         "`extent` argument.")

    V = 1.0 - np.exp(-0.5 * np.arange(1.5, 2.1, 0.5) ** 2)
    Hflat = H.flatten()
    inds = np.argsort(Hflat)[::-1]
    Hflat = Hflat[inds]
    sm = np.cumsum(Hflat)
    sm /= sm[-1]
    for i, v0 in enumerate(V):
        try:
            V[i] = Hflat[sm <= v0][-1]
        except:
            V[i] = Hflat[0]

    X1, Y1 = 0.5 * (X[1:] + X[:-1]), 0.5 * (Y[1:] + Y[:-1])
    X, Y = X[:-1], Y[:-1]

    if plot_datapoints:
        ax.plot(x, y, "o", color=color, ms=1.5, zorder=-1, alpha=0.2,
                rasterized=True)
    if plot_contours:
        ax.contourf(X1, Y1, H.T, [V[-1], H.max()],
                    cmap=LinearSegmentedColormap.from_list("cmap", ([1] * 3, [1] * 3), N=2),
                    antialiased=False)
    if plot_contours:
        # ax.pcolor(X, Y, H.max() - H.T, cmap=cmap)
        ax.contour(X1, Y1, H.T, V, colors=color, linewidths=linewidths)

    ax.set_xlim(extent[0])
    ax.set_ylim(extent[1])
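A minimal usage sketch for the helper above; the sample data and figure handling are illustrative and not part of the original module:

import numpy as np
import matplotlib.pyplot as pl

x = np.random.randn(5000)
y = 0.5 * x + np.random.randn(5000)
fig, ax = pl.subplots()
# Draw the scatter of samples plus density contours on the given axes.
hist2d(x, y, ax=ax, bins=40, color="b", plot_datapoints=True, plot_contours=True)
pl.show()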
Are you new to investing? Were you thinking you would start investing some money in the stock market to make some additional income, but you have no idea how to start? Don't worry, we were all beginners once, and so was I. This article will give you an implementation and operational manual on how to start investing. I would guess you want to start from the beginning, so let me start from scratch. To invest in the stock market you will need some initial capital. Ideally you want to start with at least a $10,000 account; however, $5,000 or even $2,000 will work. If you do not have such a sum, I recommend opening a savings account and transferring money into it until you have saved your initial capital. Do not open your investing account and save in it, since that account will earn little or no interest, and investing is not only about buying and selling stocks; saving money is part of the game. If you do not have an emergency account yet, create it first. If you cannot save into both accounts (part of your savings into your future investing account, part into your emergency account), fund your emergency account first. The purpose of this account is not only to back you up in case you need money for emergency expenses; it also protects your investments. If you need to pay for something important, you do not want to withdraw money from your investing account by selling stocks in bad times or when they show a loss (for example, right after opening an initial position). You can open one or two independent savings accounts for this emergency purpose. I would not recommend opening more than two, since they are hard to maintain. I use two accounts: one for short-term emergencies and a second as a long-term account. On the short-term account I hold about $2,000 to cover emergency expenses that I cannot pay out of my regular salary. The long-term account would cover my living expenses if I lost my job, and I will draw money from it only in that case. An emergency account is crucial, and not many Americans have one. If you want to take your financial responsibility seriously, start building it as soon as you can. In good times, save for bad times. Look at today's financial mess. Too many people forgot to create their safety net. Too many people believed that the housing market would always go up. Too many people took their equity and bought new TVs, cars, or vacations. When bad times arrived, those same people realized they had no backup. They lost their TVs, cars, and even homes. And when you lose your job on top of that, you end up in real trouble. Of course, your life should not be one long stretch of saving, ending with a large account but a life you never enjoyed. Find a middle way: save regularly while still living and enjoying your life. Later you will have built your money making machine and you can simply enjoy it. At the beginning, though, more effort now will pay sweet results later. The most important part of the entire process of investing is your own learning. If you believe that investing in stocks is easy money and a get-rich-quick scheme, you are not ready yet. To be honest, I started investing while I still believed that, and that is why I lost money. Do not make the same mistake I did. There is a lot of information out there.
You can find plenty of books as well as plenty of websites about investing. Read some of them, but be careful when selecting what to read. Look for the strategy that fits you best, stick to it, and avoid all other information; otherwise you will end up confused and lost. You can start by reading on the internet to find some strategies and then keep looking for more information about each particular strategy, so you can gather as much information as possible before making a decision. You can also open a hypothetical account and try those strategies on paper first, to determine which one fits you best. You can also start reading books from my Library if you think the RSS (Reverse Scale System) approach fits you. You can buy those books, or you can borrow them from your local library. If you are as enthusiastic about investing as I am, you will want to buy them so they are handy whenever you want to re-read them and refresh your memory on tips and advice you have read and almost forgotten. You need to decide what your strategy will be. Do you want value investing or growth investing? Do you prefer a long-term approach (buy a stock and hold positions for weeks and months), a short-term one (trading within days), or day trading (buying and selling the same day)? Do some research on the internet about strategies, read about them, and start implementing the one you chose and that works for you (at least on paper). Start studying the market. You can subscribe to some newsletters if you want, or you can use free websites such as MSN.com, Yahoo.com, etc. I also use Investors Business Daily to read about the markets. Reading about the markets gives you some idea of what the market is doing, so you do not invest against it (another way to lose money). The reading can be frustrating, too, like these days with the financial crisis, but you will know when it is time to start buying or when it is better to stay aside. I like the RSS strategy because I do not have time to trade daily. My approach is long term (but not buy and hold), so I like buying stocks and holding them as long as they are making me money, which can be a couple of days as well as months or years. I will write more about the RSS strategy below. You may have heard or read advice such as "analyze your trades, why you did this or that, so you can learn from your mistakes." Great advice! I tried so many times. I searched the internet and books for how to actually do it, so many times, unsuccessfully. Then I came up with this blog. By writing all my ideas and thoughts here, it is not only me who benefits from it. You can all read it, learn, get your own ideas, and improve your own investing. You can use a handwritten journal, a computer, a blog, whatever will help you summarize your thoughts, ideas, and records. Start doing it during your paper trading. Write down everything. Record your thoughts on the market, stocks, the economy, stock picking, etc. Record why you are selecting a particular stock, why you are going to buy it, and when and why you are going to sell it. All the recorded information will help you not only analyze your trades but also adjust your strategy and investing plans. Sometimes you may feel like a dummy and have no idea what to write ("the stock appeared on my screen list" looks stupid and says nothing about why you are buying it), but keep trying. One day you will find the proper language for communicating with yourself.
Sometimes you will find that you are trying to lie to yourself, making excuses for and justifying your bad trades. It has happened to me as well: I bought a stock and only later tried to find a reason for it. You should find your reason first and then buy. You do not have to create a detailed report on each stock the way Jim Jubak does (I like that guy and his approach) in his Jubak's Picks, but you should get close to it. Write down the name of the company, what business it is in, and how it is doing; read some news, so you become familiar with the company. There are some strategies (Nicholas Darvas, for example, partially used one) that do not care about the company at all. They are purely technical, and the only thing that matters is what the stock is doing in the current market. They look for a breakout on high volume, and that is literally all they need for trading. I want to at least know what the company does for business so I can look at its peers and its possible outlook in the market. However, do not dig too deep. All those numbers you may find will be largely useless. You may later realize that your company is making huge profits and its stock is still falling like a rock. Do not even look for reasons why your stocks are doing what they are doing in the market. Either there is no reason, or by the time you find it, it will be too late to do anything about it. Did you ever try to find out how to do all this? Were you searching books and the internet? Did you find anything? I hadn't, until I found a book about the Reverse Scale System by Braden Glett. This was the first book that gave me a clue what to do, how to create my investing plans, and how to handle money management. "Before the book" I tried so many times and nothing worked well for me. "After the book" I became confident; I have a plan and I know what to do at every moment while opening, holding, and closing every single position. Even if you go for another strategy, completely different from what I do here, this book is worth reading for its loss-control plan and money management. I am not going to describe it here; just buy the book and read it. Then open a spreadsheet and create your own plan with all the formulas and ideas, and use it every time you log into your investing account to place an order. Write down all your strategies, your stop-loss plan, pyramiding plan, and exit plan, and never trade unless you have done your homework and calculations. This plan will help you limit your losses, protect your gains and money, pick new trades, drop old trades, keep you from over-investing your account, and more. After your plan is done and all strategies are identified, learn them! Learn patience and discipline. You need to be able to stick to your plan no matter what. Do not second-guess it. Be confident in it, trust it, and trust yourself. While studying and creating your plans you had plenty of time to practice on a hypothetical account, so now there is no room for doubts and questioning. You can adjust, slightly modifying your numbers and calculations, but the general strategy should remain intact. This is a hard part to learn. Do you have your savings ready? Now it is time to select your broker and open your investing account. Do some research on the internet. Ask brokers to provide you with either trial access or a list of the services you can get. Ask about fees and commissions, and test their technical and customer support to see how responsive and professional they are.
Also look at what additional value you can get from them. Some brokers have very low or no commissions, but no additional services are available. Others may charge commissions you consider high, but you get a lot of free analytical tools on their website, so you do not have to buy them elsewhere. You will need a charting program in which you can draw your own marks, lines, and studies, create your own alerts, and define your own trading algorithms, and not all of those programs can be found for free on the internet. Find some brokers and compare their services, support, and requirements. You also need to decide what type of broker you want: a full-service broker or a discount broker. A full-service broker will be available to advise you whenever you need it. These brokers are, however, very expensive, and their minimum amounts to open an account are high, often $100,000 and more. A discount broker is mostly an online one, and it is easy to open an account with one. Some have no minimum investment requirements; some have a $2,000 minimum. Also check whether your selected broker is insured and registered with SIPC. If you like what you find about your selected broker, open a new account. It is easy and quick. You can have a fully working account within ten minutes and start trading as soon as the money transferred from your savings account is deposited into your new investing account. Here you may find a list of some brokers to start your search. It is not a full list, but it is a good starting point. Done? Welcome to the exciting world of pure capitalism: Wall Street.
### The following originates from https://github.com/coady/multimethod # Copyright 2020 Aric Coady # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ### import abc import collections import functools import inspect import itertools import types import typing from typing import Callable, Iterable, Iterator, Mapping __version__ = "1.3" def groupby(func: Callable, values: Iterable) -> dict: """Return mapping of key function to values.""" groups = collections.defaultdict(list) # type: dict for value in values: groups[func(value)].append(value) return groups def get_types(func: Callable) -> tuple: """Return evaluated type hints in order.""" if not hasattr(func, "__annotations__"): return () annotations = dict(typing.get_type_hints(func)) annotations.pop("return", None) params = inspect.signature(func).parameters return tuple(annotations.pop(name, object) for name in params if annotations) class DispatchError(TypeError): pass class subtype(type): """A normalized generic type which checks subscripts.""" def __new__(cls, tp, *args): if tp is typing.Any: return object if isinstance(tp, typing.TypeVar): if not tp.__constraints__: return object tp = typing.Union[tp.__constraints__] origin = getattr(tp, "__extra__", getattr(tp, "__origin__", tp)) args = tuple(map(cls, getattr(tp, "__args__", None) or args)) if set(args) <= {object} and not (origin is tuple and args): return origin bases = (origin,) if type(origin) is type else () namespace = {"__origin__": origin, "__args__": args} return type.__new__(cls, str(tp), bases, namespace) def __init__(self, tp, *args): if isinstance(self.__origin__, abc.ABCMeta): self.__origin__.register(self) def __getstate__(self): return self.__origin__, self.__args__ def __eq__(self, other): return ( isinstance(other, subtype) and self.__getstate__() == other.__getstate__() ) def __hash__(self): return hash(self.__getstate__()) def __subclasscheck__(self, subclass): origin = getattr( subclass, "__extra__", getattr(subclass, "__origin__", subclass) ) args = getattr(subclass, "__args__", ()) if origin is typing.Union: return all(issubclass(cls, self) for cls in args) if self.__origin__ is typing.Union: return issubclass(subclass, self.__args__) return ( # check args first to avoid a recursion error in ABCMeta len(args) == len(self.__args__) and issubclass(origin, self.__origin__) and all(map(issubclass, args, self.__args__)) ) class signature(tuple): """A tuple of types that supports partial ordering.""" parents = None # type: set def __new__(cls, types: Iterable): return tuple.__new__(cls, map(subtype, types)) def __le__(self, other) -> bool: return len(self) <= len(other) and all(map(issubclass, other, self)) def __lt__(self, other) -> bool: return self != other and self <= other def __sub__(self, other) -> tuple: """Return relative distances, assuming self >= other.""" mros = (subclass.mro() for subclass in self) return tuple( mro.index(cls if cls in mro else object) for mro, cls in zip(mros, other) ) class multimethod(dict): """A callable directed acyclic graph of methods.""" pending = 
None # type: set def __new__(cls, func): namespace = inspect.currentframe().f_back.f_locals self = functools.update_wrapper(dict.__new__(cls), func) self.pending = set() self.get_type = type # default type checker return namespace.get(func.__name__, self) def __init__(self, func: Callable): try: self[get_types(func)] = func except NameError: self.pending.add(func) def register(self, *args): """Decorator for registering a function. Optionally call with types to return a decorator for unannotated functions. """ if len(args) == 1 and hasattr(args[0], "__annotations__"): return overload.register(self, *args) return lambda func: self.__setitem__(args, func) or func def __get__(self, instance, owner): return self if instance is None else types.MethodType(self, instance) def parents(self, types: tuple) -> set: """Find immediate parents of potential key.""" parents = {key for key in self if isinstance(key, signature) and key < types} return parents - {ancestor for parent in parents for ancestor in parent.parents} def clean(self): """Empty the cache.""" for key in list(self): if not isinstance(key, signature): super().__delitem__(key) def __setitem__(self, types: tuple, func: Callable): self.clean() types = signature(types) parents = types.parents = self.parents(types) for key in self: if types < key and (not parents or parents & key.parents): key.parents -= parents key.parents.add(types) if any(isinstance(cls, subtype) for cls in types): self.get_type = get_type # switch to slower generic type checker super().__setitem__(types, func) self.__doc__ = self.docstring def __delitem__(self, types: tuple): self.clean() super().__delitem__(types) for key in self: if types in key.parents: key.parents = self.parents(key) self.__doc__ = self.docstring def __missing__(self, types: tuple) -> Callable: """Find and cache the next applicable method of given types.""" self.evaluate() if types in self: return self[types] groups = groupby(signature(types).__sub__, self.parents(types)) keys = groups[min(groups)] if groups else [] funcs = {self[key] for key in keys} if len(funcs) == 1: return self.setdefault(types, *funcs) msg = f"{self.__name__}: {len(keys)} methods found" # type: ignore raise DispatchError(msg, types, keys) def __call__(self, *args, **kwargs): """Resolve and dispatch to best method.""" return self[tuple(map(self.get_type, args))](*args, **kwargs) def evaluate(self): """Evaluate any pending forward references. This can be called explicitly when using forward references, otherwise cache misses will evaluate. 
""" while self.pending: func = self.pending.pop() self[get_types(func)] = func @property def docstring(self): """a descriptive docstring of all registered functions""" docs = [] for func in set(self.values()): try: sig = inspect.signature(func) except ValueError: sig = "" doc = func.__doc__ or "" docs.append(f"{func.__name__}{sig}\n {doc}") return "\n\n".join(docs) class multidispatch(multimethod): """Provisional wrapper for future compatibility with `functools.singledispatch`.""" get_type = multimethod(type) get_type.__doc__ = """Return a generic `subtype` which checks subscripts.""" for atomic in (Iterator, str, bytes): get_type[ atomic, ] = type @multimethod # type: ignore[no-redef] def get_type(arg: tuple): """Return generic type checking all values.""" return subtype(type(arg), *map(get_type, arg)) @multimethod # type: ignore[no-redef] def get_type(arg: Mapping): """Return generic type checking first item.""" return subtype(type(arg), *map(get_type, next(iter(arg.items()), ()))) @multimethod # type: ignore[no-redef] def get_type(arg: Iterable): """Return generic type checking first value.""" return subtype(type(arg), *map(get_type, itertools.islice(arg, 1))) def isa(*types) -> Callable: """Partially bound `isinstance`.""" return lambda arg: isinstance(arg, types) class overload(collections.OrderedDict): """Ordered functions which dispatch based on their annotated predicates.""" __get__ = multimethod.__get__ def __new__(cls, func): namespace = inspect.currentframe().f_back.f_locals self = functools.update_wrapper(super().__new__(cls), func) return namespace.get(func.__name__, self) def __init__(self, func: Callable): self[inspect.signature(func)] = func def __call__(self, *args, **kwargs): """Dispatch to first matching function.""" for sig, func in reversed(self.items()): arguments = sig.bind(*args, **kwargs).arguments if all( predicate(arguments[name]) for name, predicate in func.__annotations__.items() ): return func(*args, **kwargs) raise DispatchError("No matching functions found") def register(self, func: Callable) -> Callable: """Decorator for registering a function.""" self.__init__(func) # type: ignore return self if self.__name__ == func.__name__ else func # type: ignore class multimeta(type): """Convert all callables in namespace to multimethods.""" class __prepare__(dict): def __init__(*args): pass def __setitem__(self, key, value): if callable(value): value = getattr(self.get(key), "register", multimethod)(value) super().__setitem__(key, value)
There's just something comforting about sitting around the fireplace at home. That's what this cocktail reminds us of. It's not smoky, and it's served chilled, but the red wine and walnut bitters give it a warm, soothing feel.
1. Prep the Rich Syrup enough in advance that it can cool.
2. Chill a coupe glass by filling it with ice and cold water.
3. Add all liquid ingredients to a mixing glass.
4. Add ice to the mixing glass. Stir.
5. Empty the chilled coupe glass.
6. Strain the cocktail into the chilled coupe glass.
7. Twist orange peel over the drink to express its oils, then place the peel in the glass as garnish.
""" This is the play """ import numpy as np import matplotlib.pyplot as plt import math from sklearn.datasets import make_blobs from functions import selection_algorithm, scl plot = True verbose = False tracking = False selection = True # Generate the data n_samples = 1500 random_state = 20 # Does not converge random_state = 41 random_state = 105 n_features = 2 centers = 3 X, y = make_blobs(n_samples, n_features, centers, random_state=random_state) # Seed the random number generator np.random.seed(random_state) # The algorithm N = 3 m = 1 s = 2 # Number of neurons to change per round D = math.inf eta = 1.0 / n_samples eta = 0.1 neurons = np.random.rand(N, n_features) D_vector = np.zeros(n_samples) T = 50 # Initialize neuron to data hash with empty list neuron_to_data = {} for neuron in range(N): neuron_to_data[neuron] = [] follow_neuron_0_x = [] follow_neuron_0_y = [] follow_neuron_1_x = [] follow_neuron_1_y = [] follow_neuron_2_x = [] follow_neuron_2_y = [] total_distortion = [] time = np.arange(T) s_half_life = 10 s_0 = 2 s_sequence = np.floor(s_0 * np.exp(-time / s_half_life)).astype('int') for t, s in zip(time, s_sequence): # Data loop for x_index, x in enumerate(X): # Conventional competitive learning distances = np.linalg.norm(neurons - x, axis=1) closest_neuron = np.argmin(distances) # Modify neuron weight difference = x - neurons[closest_neuron, :] neurons[closest_neuron, :] += eta * difference # Store the distance to each D_vector[x_index] = np.linalg.norm(neurons[closest_neuron, :] - x) neuron_to_data[closest_neuron].append(x_index) if tracking: follow_neuron_0_x.append(neurons[0, 0]) follow_neuron_0_y.append(neurons[0, 1]) follow_neuron_1_x.append(neurons[1, 0]) follow_neuron_1_y.append(neurons[1, 1]) follow_neuron_2_x.append(neurons[2, 0]) follow_neuron_2_y.append(neurons[2, 1]) # Selection if selection: neurons = selection_algorithm(neurons, D_vector, neuron_to_data, s) if verbose: print('winning neuron', closest_neuron) print('distances', distances) if t % 10 == 0: print('time', t) total_distortion.append(np.sum(D_vector)) if plot: # Visualize X fig = plt.figure(figsize=(16, 12)) ax = fig.add_subplot(211) ax.plot(X[:, 0], X[:, 1], 'x', markersize=6) ax.hold(True) if True: ax.plot(neurons[0, 0], neurons[0, 1], 'o', markersize=12, label='neuron 1') ax.plot(neurons[1, 0], neurons[1, 1], 'o', markersize=12, label='neuron 2') ax.plot(neurons[2, 0], neurons[2, 1], 'o', markersize=12, label='neuron 3') ax.legend() if tracking: ax.plot(follow_neuron_0_x, follow_neuron_0_y, 'o-', markersize=12) ax.plot(follow_neuron_1_x, follow_neuron_1_y, 'o-', markersize=12) ax.plot(follow_neuron_2_x, follow_neuron_2_y, 'o-', markersize=12) ax2 = fig.add_subplot(212) ax2.plot(time, total_distortion) plt.show()
So, I’m a dad – I guess that’s pretty obvious from the title of this blog. I have two daughters – one is currently three-and-a-half and I see her at weekends, while the other is ten months and I see her every day, as we all live together. 1) To enable and encourage me to write regularly, and not about the things I write about every day at work. Although it might sound fun to tell people what you think of Sienna Miller’s latest red carpet outfit, believe me, it gets boring! 2) Being a dad is not talked about that much. Everyone, probably naturally, concentrates on the mum’s role in bringing up baby, as it were. Poor old dad is usually left to bring home enough money to buy the weekly groceries and fund the endless supply of nappies, clothes and toys, to name but a few things. I want to try and redress that balance in some small way, by posting my musings about being a (hopefully) good dad. I probably should have started this a year or so ago, as I’ll have nothing to say about the early months, but hey, both of my girls have a lot of years to go and a lot of trials and tribulations to go through. I’m sure there’ll still be plenty to comment about. That’s it for the moment – I’ll be back with something new soon!
# -*- coding: utf8 -*-

class Personne:
    "Definition of the Personne class"
    # No overloading
    # No direct polymorphism

    ### Static class attributes ###
    ctr = 0

    ### Instance attributes ###
    # No prior declaration needed

    ### Method called before the instance is created ###
    def __new__(cls, *args):
        # __new__ must return the new instance; otherwise __init__ is never run
        return object.__new__(cls)

    ### Constructor ###
    def __init__(self, nom, prenom):
        Personne.ctr += 1
        self.nom = nom
        self.prenom = prenom

    ### Destructor ###
    def __del__(self):
        pass

    ### Method used by str() ###
    def __str__(self):
        return self.prenom + " " + self.nom

    ### Static method ###
    def get_ctr():
        return Personne.ctr

    ### getter, setter ###
    @property
    def name(self):
        return self.nom

    @name.setter
    def name(self, name):
        self.nom = name

    get_ctr = staticmethod(get_ctr)


pers = Personne("xxx", "Gichin")
pers.name = "Funakoshi"
print("Name = ", pers.name)

### List all attributes and methods of the class ###
print("All objects: " + str(dir(pers)))
print("All methods: " + str([methode for methode in dir(pers) if callable(getattr(pers, methode))]))
Start out by preheating your oven to 350 degrees F. Then you can get going by trimming your green beans. I used fresh green beans from my grandmother’s farm, but any store-bought kind will work. I have not tried this with frozen green beans, but as long as you thaw them first it should work fine. You can even buy pre-cut green beans to save yourself this step! Once the green beans are trimmed you can wash, peel, and dice your potatoes. I used the same cutting board from the green beans. Cut them into evenly sized, one-inch cubes so that they can cook evenly. Still using the same cutting board (can you tell I like minimal dishes??), trim and cut your chicken into one-inch cubes. Grab two glass baking dishes (I used 9×13 dishes) and put your potatoes in one, and then your chicken and green beans in the other. There will be some overlap, but try to keep it to a minimum to ensure everything cooks evenly. Pour the Italian dressing over both dishes. Cover the dishes with tin foil and bake for about 1 hour, or until the potatoes are soft and the chicken is cooked through. Enjoy immediately or store in the fridge for up to 5 days. I’ve also made this dish with fresh parmesan cheese added on top of the potatoes and it was amazing! Set out two 9x13 glass baking dishes. Put the potatoes in one, and then the chicken and green beans in the other. Sprinkle everything with salt and pepper. Then pour the Italian dressing on top.
import datetime import rethinkdb as r from flask import request from flask.ext.restful import Resource, abort from dynaconfig import db from time import time def config_id(user_id, config_name): return "{}-{}".format(user_id, config_name) class Config(Resource): def get(self, user_id, config_name): current_config = list(r.table("config").get_all(config_id(user_id, config_name), index="name").run(db.conn)) if current_config: return current_config[0] else: return abort(404, message="Could not find config with name='{}' for user id={}".format(config_name, user_id)) def post(self, user_id, config_name): values = request.json current_config = list(r.table("config").get_all(config_id(user_id, config_name), index="name").run(db.conn)) if current_config: current_config = current_config[0] old_audit = current_config["values"] _id = current_config["id"] old_audit = current_config["audit_trail"] old_values = current_config["values"] current_version = current_config["highest_version"] + 1 new_audit = self._create_audit(old_values, values, current_version) if new_audit["changes"]: return r.table("config").get(_id).update({ "version": r.row["highest_version"] + 1, "last_version": r.row["version"], "highest_version": r.row["highest_version"] + 1, "values": r.literal(values), "audit_trail": r.row["audit_trail"].default([]).append(new_audit) }).run(db.conn) else: return abort(302, message="Config did not change") else: return r.table("config").insert({ "name": "{}-{}".format(user_id, config_name), "version": 0, "highest_version": 0, "last_version": 0, "values": values, "audit_trail": [self._create_audit({}, values, 0)] }).run(db.conn) def _create_audit(self, old_values, new_values, version): audit_values = [] for k in old_values: if k in new_values: if old_values[k] != new_values[k]: audit_values.append({"key": k, "action": "updated", "value": new_values[k]}) else: audit_values.append({"key": k, "action": "removed", "value": old_values[k]}) new_keys = set(new_values.keys()).difference(set(old_values.keys())) for k in new_keys: audit_values.append({ "key": k, "action": "added", "value": new_values[k] }) return {"created_at": int(time() * 1000), "changes": audit_values, "version": version} class RevertConfig(Resource): def put(self, user_id, config_name, version): current_config = list(r.table("config").get_all(config_id(user_id, config_name), index="name").run(db.conn)) if current_config: current_config = current_config[0] if 0 <= version <= current_config["highest_version"]: current_version = current_config["version"] audit_trail = current_config["audit_trail"] values = self._revert_config(current_config["values"], audit_trail, version, current_version) return r.table("config").get(_id).update({ "version": version, "last_version": r.row["version"], "values": r.literal(values) }).run(db.conn) else: return abort(404, message="Version={} for config with name='{}' for user id={} could not be found".format(version, config_name, user_id)) else: return abort(404, message="Could not find config with name='{}' for user id={}".format(config_name, user_id)) def _revert_config(self, config, audits, current_version, expected_version): assert(not current_version == expected_version) if current_version > expected_version: changes = reversed([a for audit_map in map(lambda audit: audit["changes"] if audit["version"] >= expected_version else [], audits) for a in audit_map]) elif expected_version > current_version: changes =[a for audit_map in map(lambda audit: audit["changes"] if audit["version"] <= expected_version else 
[], audits) for a in audit_map] for change in changes: action = change["action"] key = change["key"] value = change["value"] if action in ["updated", "removed"]: config[key] = value elif action == "added": if expected_version == 0: config[key] = value else: del config[key] return config
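An illustrative example of the diff format produced by _create_audit above; the config values are hypothetical and not from the original code:

# Hypothetical old and new config values:
old = {"retries": 3, "timeout": 30}
new = {"retries": 3, "timeout": 60, "debug": True}
# _create_audit(old, new, version=1) returns a dict whose "changes" list is:
#   [{"key": "timeout", "action": "updated", "value": 60},
#    {"key": "debug", "action": "added", "value": True}]
# "retries" is unchanged, so it produces no entry; a key present only in
# `old` would appear with action "removed" and its old value, which is what
# _revert_config relies on when rolling a config back to an earlier version.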
OUR ONLINE CATALOG and this magnifier lens tool set page serve as a cross-section of our China exports. Greater variety is available. Email us a photo example of what you seek. Export prices often change; all are reconfirmed after your inquiry. You will be emailed a pro-forma invoice offer. China Factory Minimum Quantity of these magnifier lens tool set items can be negotiated with factories. Dollar Amount is often more important to smaller factories than the quantity of each piece. The minimum quantity can often be divided among several magnifier lens tool set or stock numbers. Please inquire with us about your specific needs for quantities smaller than those listed. Smaller quantities can result in a slightly higher price. Ask us for a quote. Custom Orders are possible with any of our magnifier lens tool set products. Send us a .jpg example of what you want. If we don't have it, we can get it.
# Author: Julien Goret <[email protected]> # URL: https://github.com/sarakha63/Sick-Beard # # This file is based upon tvtorrents.py. # # This file is part of Sick Beard. # # Sick Beard is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # Sick Beard is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with Sick Beard. If not, see <http://www.gnu.org/licenses/>. import sickbeard import generic from sickbeard import helpers, logger, exceptions, tvcache from lib.tvdb_api import tvdb_api, tvdb_exceptions from sickbeard.name_parser.parser import NameParser, InvalidNameException class EthorProvider(generic.TorrentProvider): def __init__(self): generic.TorrentProvider.__init__(self, "Ethor") self.supportsBacklog = False self.cache = EthorCache(self) self.url = 'http://ethor.net/' def isEnabled(self): return sickbeard.ETHOR def imageName(self): return 'ethor.png' class EthorCache(tvcache.TVCache): def __init__(self, provider): tvcache.TVCache.__init__(self, provider) # only poll every 15 minutes self.minTime = 15 def _getRSSData(self): if not sickbeard.ETHOR_KEY: raise exceptions.AuthException("Ethor requires an API key to work correctly") url = 'http://ethor.net/rss.php?feed=dl&cat=45,43,7&rsskey=' + sickbeard.ETHOR_KEY logger.log(u"Ethor cache update URL: " + url, logger.DEBUG) data = self.provider.getURL(url) return data def _parseItem(self, item): ltvdb_api_parms = sickbeard.TVDB_API_PARMS.copy() ltvdb_api_parms['search_all_languages'] = True (title, url) = self.provider._get_title_and_url(item) if not title or not url: logger.log(u"The XML returned from the Ethor RSS feed is incomplete, this result is unusable", logger.ERROR) return try: myParser = NameParser() parse_result = myParser.parse(title) except InvalidNameException: logger.log(u"Unable to parse the filename "+title+" into a valid episode", logger.DEBUG) return try: t = tvdb_api.Tvdb(**ltvdb_api_parms) showObj = t[parse_result.series_name] except tvdb_exceptions.tvdb_error: logger.log(u"TVDB timed out, unable to update episodes from TVDB", logger.ERROR) return logger.log(u"Adding item from RSS to cache: " + title, logger.DEBUG) self._addCacheEntry(name=title, url=url, tvdb_id=showObj['id']) provider = EthorProvider()
I used to be a big fan of Oxford band Ride and I would like to get the soon-to-be-released 3CD boxed set “OX4: The Best of Ride”, which contains the best tracks from Ride’s brief but bright reign. Ride are probably best remembered as the progenitors of shoegazing, which dominated the early 90s indie scene, and specialised in skyscraping walls of sound and noisy, uplifting rock. They broke out of the boundaries of British rock music by drawing on the extremities of bands such as the Velvet Underground, My Bloody Valentine and The Jesus and Mary Chain, and the resulting songs all had an explosive rush that fizzed with enthusiasm. The three discs comprise OX4, the greatest-hits disc; Firing Blanks, a disc of unreleased material; and the live disc, recorded at the 1992 Reading Festival, which serves as a good document of Ride as a live band. The album features tracks from their first two EPs, Ride and Play, such as the starry-eyed clatter of “Chelsea Girl” and the harmonic jangle of “Like a Daydream”; tracks from the debut album Nowhere like “Taste”, “Seagull”, “Dreams Burn Down” and “Vapour Trail”; and, by way of “Unfamiliar”, “From Time to Time”, “Twisterella” and “Leave Them All Behind”, it also contains awesome tracks like “Drive Blind”, “Eight Miles High”, “European Son”, “The Model”, “Sight of You” and “That Man”, charting their growth from shoegazers into champions of retrodelica, with Carnival of Light-era tracks such as “I Don’t Know Where It Comes From”. By summoning classic tunes out of the ether, Ride sounded like true pioneers. First created in 1998 by musicians Damon Albarn & Jamie Hewlett while they were living in Westbourne Grove, Gorillaz is an English musical project which consists of the music itself and an extensive fictional universe depicting a “virtual band” of cartoon characters. The band has four animated members: 2D (lead vocalist, keyboard, and melodica), Murdoc Niccals (bass guitar and drum machine), Noodle (guitar, keyboard, and occasional vocals) and Russel Hobbs (drums and percussion). The idea to create the band came about when the two were watching MTV: “if you watch MTV for too long, it’s a bit like hell – there’s nothing of substance there. So we got this idea for a cartoon band, something that would be a comment on that,” Hewlett said. Their fictional universe is explored through the band’s website and music videos, as well as a number of other media, such as short cartoons. The music is a collaboration between various musicians, Albarn being the only permanent musical contributor. Their style is a composition of multiple musical genres, with a large number of influences including alternative rock, dub, hip hop, electronic, and pop music. The band’s 2001 debut album Gorillaz sold over seven million copies and earned them an entry in the Guinness Book of World Records as the Most Successful Virtual Band. It was nominated for the Mercury Prize in 2001, but the nomination was later withdrawn at the band’s request. Their second studio album, Demon Days, released in 2005, went five times platinum in the UK, double platinum in the United States, earned five Grammy Award nominations for 2006 and won one of them in the Best Pop Collaboration with Vocals category. Gorillaz have also released two B-sides compilations and a remix album. The combined sales of the Gorillaz and Demon Days albums had exceeded 15 million by 2007. The band’s third studio album, Plastic Beach, was released in March 2010.
Their latest album, The Fall, was released in December 2010 as a free download for sub-division members, then in April 2011 as a physical release. The Lost Sirens follows on from the release of TOTAL: From Joy Division To New Order, which was released in early 2011, and the surprise success of the Lost Sirens track “Hellbent”, and features Bernard Sumner, Peter Hook, Stephen Morris and Philip Cunningham. It features a truly eclectic mix of electronica and guitars in the way only New Order know how, and contains eight tracks which were recorded during the sessions for Waiting For The Sirens Call and are all previously unreleased. “Hellbent” features on The Lost Sirens in its original non-radio-edit form (as per TOTAL) and “I Told You So” is a previously unreleased mix. Here is the track listing for the album.
import pickle from django.db import models from .encryption import get_cipher_and_iv, padding from django.utils.timezone import now class EncryptedUploadedFileMetaData(models.Model): ''' Meta data for saved files. ''' # File uuid file_id = models.CharField(max_length=50, primary_key=True) encrypted_name = models.CharField(max_length=200) # salt for AES cipher iv = models.CharField(max_length=50) # File Access Expiration date expire_date = models.DateTimeField(auto_now=False, null=True, blank=True) # File access one time flag one_time = models.BooleanField(default=False) # Clear file size size = models.IntegerField(default=0, null=True, blank=True) @classmethod def save_(cls, file_): ''' writes metadata for a given file ''' cipher = get_cipher_and_iv(file_.passphrase, file_.iv)[0] metadata = cls() metadata.file_id = file_.name for attr in ('size', 'one_time', 'iv', 'expire_date'): setattr(metadata, attr, getattr(file_, attr, None)) # Encrypts plain filename and content-type together encrypted_name = cipher.encrypt( padding(file_.clear_filename + '|' + file_.content_type)) metadata.encrypted_name = pickle.dumps(encrypted_name) metadata.iv = pickle.dumps(metadata.iv) metadata.save() return metadata @classmethod def update(cls, file_, **kwargs): ''' Updates metadata for a given file ''' from .storage import InexistentFile try: metadata = cls.objects.get( file_id=file_.name) except cls.DoesNotExist: raise InexistentFile for arg, val in kwargs.items(): setattr(metadata, arg, val) metadata.save() @classmethod def load(cls, file_): ''' Load metadata for a given file. ''' from .storage import InexistentFile, ExpiredFile try: metadata = cls.objects.get( file_id=file_.name) except cls.DoesNotExist: raise InexistentFile for attr in ('size', 'one_time', 'iv', 'expire_date'): setattr(file_, attr, getattr(metadata, attr, None)) file_.iv = pickle.loads(file_.iv) cipher = get_cipher_and_iv(file_.passphrase, file_.iv)[0] encrypted_name = pickle.loads(metadata.encrypted_name) file_.clear_filename, file_.content_type = \ cipher.decrypt(encrypted_name).split('|') # File access has expired if file_.expire_date and file_.expire_date < now(): metadata.delete() raise ExpiredFile('This file has expired') # File is accessed only once if file_.one_time: metadata.delete()
State attorneys issued a wholesale rebuttal July 18 in responding to a lawsuit brought by three companies that work on the North Slope against the state Revenue Department over how detailed portions of the state’s oil tax laws are applied. ExxonMobil, Hilcorp Energy and SAE Exploration, a seismic imaging company, sued the Department of Revenue in state Superior Court June 8, alleging Tax Division officials are using an unenforceable guidance document to improperly collect upwards of $160 million in oil production taxes. The state’s 14-page response, signed by Assistant Attorney General Katherine Demarest, admits the advisory bulletin in question, issued by Tax Director Ken Alper in March 2017, interprets oil tax law and acknowledges the companies sued in the proper venue, but in line-by-line fashion flatly denies the rest of the complaint. ExxonMobil and Hilcorp contend the state informally adopted the six-page advisory bulletin as a regulation packet and subsequently applied it to collectively increase their fiscal year 2018 oil production taxes by about $110 million and to collect another roughly $50 million plus interest in retroactive taxes since 2014. The state’s filing rejects the allegation that tax regulations “incorporate” the advisory bulletin, asserting instead that it “interprets statutes” as allowed by state law. State attorneys also contend that one or more of the plaintiffs may lack standing in the lawsuit and that the state’s sovereign immunity may be applicable to this case. They request the case be dismissed and the state be compensated for the costs of its defense. SAE, which holds refundable tax credits earned through its seismic shoots, planned to sell those credits to producers, which in turn could use them against their tax liability. Limiting the amount of credits the producers can apply to their production taxes has in turn hampered SAE’s ability to sell its credits and reduced their market value, according to the original complaint. The advisory bulletin explains that use of the sliding-scale credit prevents a company from using tax credits to take its production tax liability below the 4 percent gross minimum tax floor. The sliding-scale credit, which starts at $8 per barrel when oil prices are less than $80 per barrel and steps down to nothing at extremely high prices, is used as a way to build progressivity into the production tax for oil produced from the state’s large, legacy fields such as Prudhoe Bay and Kuparuk River. However, if a producer were to forgo the per-barrel credit or use a fixed $5 per barrel credit for “new” oil production, the new oil credit and others could reduce a production tax liability to less than the 4 percent floor, according to the bulletin. The companies insist that Revenue’s own regulations allow taxpayers to choose the order in which credits are applied. The state’s filing does concur with that assertion, but additionally “denies that such an option relieves taxpayers of the duty to accurately report and pay tax,” the filing reads. The companies argue further that a 2011 advisory bulletin, issued under former Gov. Sean Parnell’s administration, stated that North Slope producers could reduce their liability below the minimum tax by using new oil or other credits. The Legislature and Parnell administration overhauled the oil and gas production tax system in 2013 with Senate Bill 21, which survived a voter referendum to repeal it in 2014. The current oil production tax law has since been modified twice at the behest of Gov.
Bill Walker in 2016 and House Democrats last year.
import os.path as path from sys import argv from json import load from os import system from subprocess import Popen def make_frankenstein(question_names, file_name): global server_directory global client_directory def read_json(file_name): with open(file_name) as input_file: return load(input_file) def run_tests(tests, config): file_dict = {} for f in config: file_name = f['file'] for question_name in f['questions']: file_dict[question_name] = file_name for test in tests: questions = test['questions'] stdin = test['input'] args = test['args'] files_made = [] # TODO possibly redundant for question in questions: file_name = file_dict[question] if file_name not in files_made: make_frankenstein(questions, file_name) files_made.append(file_name) #system(config['compile']) #(stdout, stderr) = Popen(args).communicate(stdin) def main(): config = read_json(argv[1]) tests = read_json(argv[2]) global server_directory global client_directory server_directory = read_json(argv[3]) client_directory = read_json(argv[4]) run_tests(tests, config) if __name__ == '__main__': main()
Located just 5 minutes walk from Brixton Station with fast services into the city centre on the Victoria line, this is an excellent location from which to explore the city. We are also lucky enough to be a few minutes walk from Brixton Market which boasts a thriving restaurant culture with a buzzing local vibe. My home is a traditional terraced house with a modern extension and I offer a comfortable room with access to a shared bathroom.
# stdlib import datetime # ============================================================================== # last checked 2021/03/25 # https://developers.facebook.com/docs/graph-api/changelog/versions # Version: Release Date, Expiration Date _API_VERSIONS = { "10.0": ["Feb 23, 2021", None], "9.0": ["Nov 10, 2020", "Feb 23, 2023"], "8.0": ["Aug 4, 2020", "Nov 1, 2022"], "7.0": ["May 5, 2020", "Aug 4, 2022"], "6.0": ["Feb 3, 2020", "May 5, 2022"], "5.0": ["Oct 29, 2019", "Feb 3, 2022"], "4.0": ["Jul 29, 2019", "Nov 2, 2021"], "3.3": ["Apr 30, 2019", "Aug 3, 2021"], "3.2": ["Oct 23, 2018", "May 4, 2021"], "3.1": ["Jul 26, 2018", "Oct 27, 2020"], "3.0": ["May 1, 2018", "Jul 28, 2020"], "2.12": ["Jan 30, 2018", "May 5, 2020"], "2.11": ["Nov 7, 2017", "Jan 28, 2020"], "2.10": ["Jul 18, 2017", "Nov 7, 2019"], "2.9": ["Apr 18, 2017", "Jul 22, 2019"], "2.8": ["Oct 5, 2016", "Apr 18, 2019"], "2.7": ["Jul 13, 2016", "Oct 5, 2018"], "2.6": ["Apr 12, 2016", "Jul 13, 2018"], "2.5": ["Oct 7, 2015", "Apr 12, 2018"], "2.4": ["Jul 8, 2015", "Oct 9, 2017"], "2.3": ["Mar 25, 2015", "Jul 10, 2017"], "2.2": ["Oct 30, 2014", "Mar 27, 2017"], "2.1": ["Aug 7, 2014", "Oct 31, 2016"], "2.0": ["Apr 30, 2014", "Aug 8, 2016"], "1.0": ["Apr 21, 2010", "Apr 30, 2015"], } # >>> datetime.datetime.strptime("Apr 1, 2010", "%b %d, %Y") _format = "%b %d, %Y" API_VERSIONS = {} for (_v, _ks) in _API_VERSIONS.items(): API_VERSIONS[_v] = [ datetime.datetime.strptime(_ks[0], _format), datetime.datetime.strptime(_ks[1], _format) if _ks[1] else None, ]
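A small helper sketch using the table above; the function name and cutoff logic are illustrative assumptions, not part of the original module:

def is_available(version, on=None):
    # Hypothetical helper: True if the given Graph API version has been
    # released and has not yet passed its expiration date (None means no
    # announced expiration).
    on = on or datetime.datetime.utcnow()
    released, expires = API_VERSIONS[version]
    return on >= released and (expires is None or on < expires)

# Example: "2.8" expired on Apr 18, 2019, so is_available("2.8") is now False.
print(is_available("10.0"), is_available("2.8"))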
See the gallery for the tag and special word "Waste". You can use these 8 images of quotes as desktop wallpapers. A film is a terrible thing to waste. Don't be silly and don't waste your time. ✍ Author: George Allen, Sr. Many more quotes about "Waste" appear further down the page. The International Health Partnership Plus is addressing the need to harmonize development assistance and reduce the current waste, duplication, and high transaction costs. We waste an awful lot of energy as a Nation through inefficient use of energy. By burning nuclear waste as fuel, we believe we can power the United States cleanly for hundreds of years without ever touching new resources. I'm not gonna waste your time, so I wouldn't expect you to waste my time. In an organic system you don't waste anything. We need to educate the consumer to accept a tiny blemish on an orange. What I say is, national defense is the most important thing we do in Washington, but there's still waste in the military budget. You know, the thing that I do to waste time is think of things I want to make. That's how my mind is employed.
# vim: tabstop=4 shiftwidth=4 softtabstop=4 # Copyright 2010 OpenStack LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Possible task states for instances. Compute instance task states represent what is happening to the instance at the current moment. These tasks can be generic, such as 'spawning', or specific, such as 'block_device_mapping'. These task states allow for a better view into what an instance is doing and should be displayed to users/administrators as necessary. """ SCHEDULING = 'scheduling' BLOCK_DEVICE_MAPPING = 'block_device_mapping' NETWORKING = 'networking' SPAWNING = 'spawning' IMAGE_SNAPSHOT = 'image_snapshot' IMAGE_BACKUP = 'image_backup' UPDATING_PASSWORD = 'updating_password' RESIZE_PREP = 'resize_prep' RESIZE_MIGRATING = 'resize_migrating' RESIZE_MIGRATED = 'resize_migrated' RESIZE_FINISH = 'resize_finish' RESIZE_REVERTING = 'resize_reverting' RESIZE_CONFIRMING = 'resize_confirming' RESIZE_VERIFY = 'resize_verify' REBOOTING = 'rebooting' REBOOTING_HARD = 'rebooting_hard' PAUSING = 'pausing' UNPAUSING = 'unpausing' SUSPENDING = 'suspending' RESUMING = 'resuming' POWERING_OFF = 'powering-off' POWERING_ON = 'powering-on' RESCUING = 'rescuing' UNRESCUING = 'unrescuing' DELETING = 'deleting' STOPPING = 'stopping' STARTING = 'starting'
The era of incomplete, one-dimensional data is over. Quovo aggregates financial accounts and enriches data with groundbreaking proprietary technology to normalize, transform, and reconcile disparate data from any source. Quovo’s smart aggregation enables instantaneous, powerful analysis for investment data. See any statistic, over any time period, for any portfolio, on the fly. Even feed data into sophisticated simulations, effortlessly. Because Quovo takes full ownership of the data workflow, our platform can translate large, complex data sets into a variety of visualizations, making otherwise messy data beautiful, usable, and valuable. With bank-level encryption and 3rd party security validation, Quovo ensures that your data remains secure. Hosted in the cloud, our technology turns your browser or mobile device into a professional-grade data and insights application. CIBC Innovation Banking provides $6.5 million expansion capital to wealth management analytics company WealthEngine, Inc.
from bitset import iterate, size, subtract, contains, first from components import components from utils import Infinity def get_neighborhood(N, subset): result = 0 for v in iterate(subset): result |= N[v] return result def get_neighborhood_2(N, subset): result = get_neighborhood(N, subset) for v in iterate(result): result |= N[v] return result def increment_un(G, X, UN_X, v): """Compute UN of X|v, based on the UN of X""" U = set() for S in UN_X: U.add(subtract(S, v)) U.add(subtract(S, v) | (G.neighborhoods[v] & (G.vertices - (X | v)))) return U def check_decomposition(G, decomposition): un = {0} lboolw = 1 left = 0 right = G.vertices for v in decomposition: un = increment_un(G, left | v, v, un) lboolw = max(lboolw, len(un)) left = left | v right = subtract(right, v) return lboolw def incremental_un_heuristic(G): lboolw_components = [] decomposition_components = [] for component in components(G): best_lboolw = Infinity best_decomposition = None for i, start in enumerate([first(component)]): #for i, start in enumerate(iterate(component)): print('{}th starting vertex'.format(i)) right = subtract(component, start) left = start un_left = increment_un(G, 0, {0}, start) booldim_left = 1 decomposition = [start] lboolw = len(un_left) for _ in range(size(component) - 1): best_vertex, best_un, _ = greedy_step(G, left, right, un_left, booldim_left, {}, Infinity) booldim_left = len(best_un) lboolw = max(lboolw, booldim_left) un_left = best_un decomposition.append(best_vertex) right = subtract(right, best_vertex) left = left | best_vertex if lboolw < best_lboolw: best_lboolw = lboolw best_decomposition = decomposition lboolw_components.append(best_lboolw) decomposition_components.append(best_decomposition) total_lboolw = max(lboolw_components) total_decomposition = [v for part in decomposition_components for v in part] return total_lboolw, total_decomposition def greedy_step(G, left, right, un_left, booldim_left, un_table, bound): best_vertex = None best_booldim = Infinity best_un = None if size(right) == 1: return right, {0}, 1 assert size(right) > 1 candidates = get_neighborhood_2(G.neighborhoods, left) & right # Trivial cases are slow for v in iterate(candidates): if trivial_case(G.neighborhoods, left, right, v): new_un = increment_un(G, left, un_left, v) new_booldim = len(new_un) return v, new_un, new_booldim for v in iterate(candidates): if left | v not in un_table: un_table[left | v] = increment_un(G, left, un_left, v) new_un = un_table[left | v] new_booldim = len(new_un) # Apply pruning if new_booldim >= bound: # print('pruning') continue if new_booldim < best_booldim: best_vertex = v best_booldim = new_booldim best_un = new_un # If nothing found if best_vertex == None: best_un = increment_un(G, left, un_left, v) best_booldim = len(best_un) best_vertex = v assert best_vertex != None return best_vertex, best_un, best_booldim def trivial_case(N, left, right, v): # No neighbors if contains(left, N[v]): return True # Twins for u in iterate(left): if N[v] & right == subtract(N[u], v) & right: return True return False
Prime Minister Scott Morrison is being criticised for downgrading the international development and Pacific role in his ministry. Scott Morrison has been accused of sending the wrong message to Australia's Pacific neighbours by downgrading a job that focused on the region and international development. The responsibility has been relegated to an assistant minister's role, with Anne Ruston taking over the portfolio from former Minister for International Development and the Pacific Concetta Fierravanti-Wells. Ms Fierravanti-Wells resigned from the front bench last week after criticising Malcolm Turnbull and voting against him in her party's leadership spill. But she isn't happy to see the position downgraded, saying it sends the "wrong signal". "I am disappointed that this has happened at a time where we have growing interest and growing contestability in the Pacific region," she told ABC Radio on Thursday. "Downgrading this position does send the wrong signal at a time when we are spending record amounts of overseas development assistance in the Pacific - $1.3 billion." Labor foreign affairs spokeswoman Penny Wong also criticised the change, together with Mr Morrison's choice not to attend the Pacific Islands Forum in Nauru this week. "Those are not the actions of a leader nor a government that recognises the importance of the Pacific to Australia," she told ABC Radio.
#!/usr/bin/env python import usb.core import usb.util import os import select import commands G13_VENDOR_ID = 0x046d G13_PRODUCT_ID = 0xc21c G13_KEY_ENDPOINT = 1 G13_LCD_ENDPOINT = 2 G13_REPORT_SIZE = 8 G13_LCD_BUFFER_SIZE = 0x3c0 G13_NUM_KEYS = 40 G13_NUM_MODES = 4 G13_KEYS = (["G%d" % x for x in range(1, 23)] + ["UNDEF1", "LIGHT_STATE", "BD"] + ["L%d" % x for x in range(1, 5)] + ["M%d" % x for x in range(1, 4)] + ["MR", "LEFT", "DOWN", "TOP", "UNDEF3", "LIGHT", "LIGHT2", "MISC_TOGGLE"]) # G13_KEYS = ["G13_KEY_%s" % x for x in G13_KEYS] LIBUSB_REQUEST_TYPE_STANDARD = (0x00 << 5) LIBUSB_REQUEST_TYPE_CLASS = (0x01 << 5), LIBUSB_REQUEST_TYPE_VENDOR = (0x02 << 5) LIBUSB_REQUEST_TYPE_RESERVED = (0x03 << 5) LIBUSB_RECIPIENT_DEVICE = 0x00 LIBUSB_RECIPIENT_INTERFACE = 0x01, LIBUSB_RECIPIENT_ENDPOINT = 0x02 LIBUSB_RECIPIENT_OTHER = 0x03 class G13Device(object): def __init__(self, device): self.device = device self.device.set_configuration() # TODO: do we need to manually claim the interface? self.unique_id = "%d_%d" % (self.device.bus, self.device.address) self.key_maps = [{}] * G13_NUM_MODES self.key_states = {} self.mode = 0 self.init_lcd() self.set_mode_leds(0) self.set_key_color(0, 0, 0) # TODO: self.write_lcd(g13_logo) # TODO: self.uinput = self.create_uinput() self.create_command_fifo() def init_lcd(self): self.device.ctrl_transfer(0, 9, 1, 0, None, 1000) def set_mode_leds(self, leds): data = [0x05, leds, 0x00, 0x00, 0x00] self.device.ctrl_transfer(LIBUSB_REQUEST_TYPE_CLASS[0] | LIBUSB_RECIPIENT_INTERFACE[0], 9, 0x305, 0, data, 1000) def set_mode(self, mode): self.set_mode_leds(mode) # TODO: implement proper mode handling def set_key_color(self, red, green, blue): data = [0x05, red, green, blue, 0x00] self.device.ctrl_transfer(LIBUSB_REQUEST_TYPE_CLASS[0] | LIBUSB_RECIPIENT_INTERFACE[0], 9, 0x307, 0, data, 1000) def create_command_fifo(self): self.command_fifo_name = "/tmp/g13_cmd_%s" % self.unique_id if os.path.exists(self.command_fifo_name): os.remove(self.command_fifo_name) os.mkfifo(self.command_fifo_name, 0666) self.command_fifo = os.open(self.command_fifo_name, os.O_RDWR | os.O_NONBLOCK) def handle_commands(self): """ Handle commands sent to the command fifo. 
""" ready = select.select([self.command_fifo], [], [], 0) if not len(ready[0]): return False data = os.read(self.command_fifo, 1000) print "< %s" % data lines = data.splitlines() for line in lines: command = commands.Command.parse_command(line) if command: command.execute(self) def get_key_state(self, key): if key not in self.key_states: return False return self.key_states[key] def set_key_state(self, key, state): self.key_states[key] = state def get_key_action(self, key): return self.key_maps[self.mode].get(key, None) def bind_key(self, key, action): if key not in G13_KEYS: raise Exception("The specified key isn't a known G13 key") self.key_maps[self.mode][key] = action self.key_states[key] = False def handle_keys(self): report = [] try: report = self.device.read(usb.util.ENDPOINT_IN | G13_KEY_ENDPOINT, G13_REPORT_SIZE) except usb.core.USBError, e: if not str(e).startswith("[Errno 60]"): raise if len(report): for g13_key_index, g13_key_name in enumerate(G13_KEYS): actual_byte = report[3 + (g13_key_index / 8)] mask = 1 << (g13_key_index % 8) is_pressed = (actual_byte & mask) == 0 # if the key has changed state, we're going to want to perform the action current_state = self.get_key_state(g13_key_name) # print ["%02x" % x for x in report] if current_state != is_pressed: print "key: %s %s -> state %s %s" % (g13_key_name, current_state, actual_byte & mask, is_pressed) self.set_key_state(g13_key_name, is_pressed) if not current_state: action = self.get_key_action(g13_key_name) if action: action.perform(self, is_pressed) return True def cleanup(self): # TODO: destroy the device cleanly? os.close(self.command_fifo) os.remove(self.command_fifo_name) def parse_args(): import argparse parser = argparse.ArgumentParser(description="user-mode g13 driver") # parser.add_argument("--verbose", "-v", action=store_const, const=bool, # default=False, "be verbose") args = parser.parse_args() return args def find_devices(): g13s = [] devices = usb.core.find(idVendor=G13_VENDOR_ID, idProduct=G13_PRODUCT_ID, find_all=True) print devices for device in devices: g13s.append(G13Device(device)) return g13s def main(): # args = parse_args() g13s = find_devices() print g13s running = True while running: try: for g13 in g13s: g13.handle_commands() status = g13.handle_keys() if not status: running = False except KeyboardInterrupt: running = False for g13 in g13s: g13.cleanup() if __name__ == '__main__': main()
America Online last month junked dozens of replies from Harvard University to its early applicants. Scared off by the anthrax postal threat, the esteemed university was looking for an efficient way to inform Harvard hopefuls on whether they were accepted, denied or deferred. Enter AOL, the leading Internet service provider, which blocked nearly 100 of the do-or-die letters, putting them in the dreaded “spam” file. “This wasn’t exactly the instant response we intended,” said William Fitzsimmons, dean of admissions and financial aid at Harvard. Harvard used e-mail to notify nearly all of the 6,000 students who applied under the early-admissions guidelines. Officials at AOL could not explain why, but said the company identified the Harvard messages as junk mail, the bane of Internet existence. Certain characteristics, such as address, size and quantity, trigger automatic gatekeepers that block certain e-mail from reaching its intended target, said Nicholas Graham, an AOL spokesman.
"""Handling e-mail messages.""" import datetime import email.header import email.message import logging import typing as t import dateutil.parser from .connection import Connection _LOG = logging.getLogger(__name__) def recode_header(raw_data: t.Union[bytes, str]) -> str: """Normalize the header value.""" decoded_data = email.header.decode_header(raw_data) try: return email.header.make_header(decoded_data) except UnicodeDecodeError as err: try: return email.header.make_header([(decoded_data[0][0], 'utf-8')]) except: _LOG.exception('both "%s" and "utf-8" fail to decode the header', decoded_data[0][1]) raise ValueError(f'after decoding {raw_data!r}, obtained {decoded_data!r}' ' which cannot be re-made into a header') from err def is_name_and_address(text: str) -> bool: return '<' in text and '>' in text def split_name_and_address(text) -> tuple: if is_name_and_address(text): begin = text.rfind('<') end = text.rfind('>') assert begin < end return text[begin + 1:end], text[:begin] return text, None def recode_timezone_info(dt: datetime.datetime): name = dt.tzname() dst = dt.dst() dst = (' ' + dst) if dst != datetime.timedelta() else '' if name == 'UTC': return f'{name}{dst}' offset = dt.utcoffset() offset = ('+' if offset >= datetime.timedelta() else '') + str(offset.total_seconds() / 3600) if name is None or not name: return f'UTC{offset}{dst}' return f'{name} (UTC{offset}{dst})' class Message: """An e-mail message.""" def __init__(self, msg: email.message.EmailMessage = None, server: Connection = None, folder: str = None, msg_id: int = None): assert folder is None or isinstance(folder, str), type(folder) assert msg_id is None or isinstance(msg_id, int), type(msg_id) self._email_message = msg # type: email.message.EmailMessage self._origin_server = server # type: Connection self._origin_folder = folder # type: str self._origin_id = msg_id # type: int self.from_address = None # type: str self.from_name = None # type: str self.reply_to_address = None # type: str self.reply_to_name = None # type: str self.to_address = None # type: str self.to_name = None # type: str self.subject = None # type: str self.datetime = None # type: datetime.datetime self.timezone = None self.local_date = None self.local_time = None self.received = [] self.return_path = None self.envelope_to = None self.message_id = None self.content_type = None self.other_headers = [] self.flags = set() # type: t.Set[str] self.contents = [] self.attachments = [] if msg is not None: self._init_headers_from_email_message(msg) self._init_contents_from_email_message(msg) @property def date(self) -> datetime.date: if self.datetime is None: return None return self.datetime.date() @property def time(self) -> datetime.time: if self.datetime is None: return None return self.datetime.time() @property def is_read(self) -> bool: return 'Seen' in self.flags @property def is_unread(self) -> bool: return not self.is_read() @property def is_answered(self) -> bool: return 'Answered' in self.flags @property def is_flagged(self) -> bool: return 'Flagged' in self.flags @property def is_deleted(self) -> bool: return 'Deleted' in self.flags def _init_headers_from_email_message(self, msg: email.message.EmailMessage) -> None: for key, value in msg.items(): self._init_header_from_keyvalue(key, value) def _init_header_from_keyvalue(self, key: str, value: str) -> None: if key == 'From': self.from_address, self.from_name = split_name_and_address(str(recode_header(value))) elif key == 'Reply-To': self.reply_to_address, self.reply_to_name = split_name_and_address( 
str(recode_header(value))) elif key == 'To': self.to_address, self.to_name = split_name_and_address(str(recode_header(value))) elif key == 'Subject': self.subject = str(recode_header(value)) elif key == 'Date': self._init_datetime_from_header_value(value) elif key == 'Received': self.received.append(value) elif key == 'Return-Path': self.return_path = value elif key == 'Envelope-To': self.envelope_to = value elif key == 'Message-Id': self.message_id = value elif key == 'Content-Type': self.content_type = value else: self.other_headers.append((key, value)) def _init_datetime_from_header_value(self, value: str): self.datetime = None try: self.datetime = dateutil.parser.parse(value) except ValueError: try: self.datetime = dateutil.parser.parse(value, fuzzy=True) _LOG.debug( 'dateutil failed to parse string "%s" into a date/time,' ' using fuzzy=True results in: %s', value, self.datetime, exc_info=1) except ValueError: _LOG.debug( 'dateutil failed to parse string "%s" into a date/time,' ' even using fuzzy=True', value, exc_info=1) if self.datetime is not None: self.timezone = recode_timezone_info(self.datetime) def _init_contents_from_email_message(self, msg: email.message.EmailMessage) -> None: if not msg.get_payload(): return if msg.get_content_maintype() != 'multipart': self._init_contents_part(msg) return content_type = msg.get_content_type() parts = msg.get_payload() if isinstance(parts, str): _LOG.error('one of %i parts in a message is %s, but it has no subparts', len(parts), content_type) assert not parts, parts return assert isinstance(parts, list), type(parts) assert parts if content_type == 'multipart/alternative': if len(parts) > 1: _LOG.warning('taking last alternative of %i available in part type %s' ' - ignoring others', len(parts), content_type) self._init_contents_from_email_message(parts[-1]) elif content_type == 'multipart/related': if len(parts) > 1: _LOG.warning('taking only first part of %i available in part type %s' ' - ignoring related parts', len(parts), content_type) self._init_contents_from_email_message(parts[0]) elif content_type == 'multipart/mixed': for part in parts: self._init_contents_from_email_message(part) else: raise NotImplementedError(f'handling of "{content_type}" not implemented') def _init_contents_part(self, part: email.message.Message): content_type = part.get_content_type() if content_type not in {'text/plain', 'text/html'}: _LOG.info('treating message part with type %s as attachment', content_type) self.attachments.append(part) return charset = part.get_content_charset() if charset: text = part.get_payload(decode=True) try: text = text.decode(charset) except UnicodeDecodeError: _LOG.exception('failed to decode %i-character text using encoding "%s"', len(text), charset) else: text = part.get_payload() try: if isinstance(text, bytes): text = text.decode('utf-8') except UnicodeDecodeError: _LOG.exception('failed to decode %i-character text using encoding "%s"', len(text), 'utf-8') if not isinstance(text, str): _LOG.error('no content charset in a message %s in part %s -- attachment?', self.str_headers_compact(), part.as_bytes()[:128]) self.attachments.append(part) text = None if not text: return self.contents.append(text) def move_to(self, server: Connection, folder_name: str) -> None: """Move message to a specific folder on a specific server.""" assert isinstance(folder_name, str), type(folder_name) if server is not self._origin_server: from .imap_connection import IMAPConnection assert isinstance(self._origin_server, IMAPConnection), 
type(self._origin_server) assert isinstance(server, IMAPConnection), type(server) parts = self._origin_server.retrieve_message_parts( self._origin_id, ['UID', 'ENVELOPE', 'FLAGS', 'INTERNALDATE', 'BODY.PEEK[]'], self._origin_folder) _LOG.warning('moving %s between servers: from %s "%s" to %s "%s"', self, self._origin_server, self._origin_folder, server, folder_name) server.add_message(parts, folder_name) self._origin_server.delete_message(self._origin_id, self._origin_folder) return if folder_name == self._origin_folder: _LOG.debug('move_to() destination same as origin, nothing to do') return from .imap_connection import IMAPConnection assert isinstance(self._origin_server, IMAPConnection), type(self._origin_server) _LOG.warning('moving %s within same server %s: from "%s" to "%s"', self, self._origin_server, self._origin_folder, folder_name) self._origin_server.move_message(self._origin_id, folder_name, self._origin_folder) def copy_to(self, server: Connection, folder: str) -> None: raise NotImplementedError() def send_via(self, server: Connection) -> None: server.send_message(self._email_message) def str_oneline(self): return (f'{type(self).__name__}(From:{self.from_name}<{self.from_address}>,' f'To:{self.to_name}<{self.to_address}>,Subject:{self.subject},' f'DateAndTime:{self.datetime})') def str_headers(self): return '\n'.join([ f'From: {self.from_address}', f' {self.from_name}', f'Reply-To: {self.reply_to_address}', f' {self.reply_to_name}', f'To: {self.to_address}', f' {self.to_name}', f'Subject: {self.subject}', f'Date: {self.date}', f'Time: {self.time}', f'Timezone: {self.timezone}', f'Locally: {self.local_date}', f' {self.local_time}', # '', # ' Received: {}'.format(self.received), # ' Return-Path: {}'.format(self.return_path), # ' Envelope-To: {}'.format(self.envelope_to), # ' Message-Id: {}'.format(self.message_id), # ' Content-Type: {}'.format(self.content_type), # 'Other headers:', # '\n'.join([' {}: {}'.format(k, v) for k, v in self.other_headers]), ]) def str_headers_compact(self): return '\n'.join([ f'From: {self.from_address} {self.from_name}', f'Reply-To: {self.reply_to_address} {self.reply_to_name}', f'To: {self.to_address} {self.to_name}', f'Subject: {self.subject}', f'Datetime: {self.date} {self.time} {self.timezone}' ]) def str_quote(self): raise NotImplementedError() def str_forward(self): raise NotImplementedError() def str_complete(self): return '\n'.join([ self.str_headers(), '', f'id: {self._origin_id}', f'flags: {self.flags}', '', 'contents{}:'.format(f' (multipart, {len(self.contents)} parts)' if len(self.contents) > 1 else ''), 80*'=', (80*'=' + '\n').join(self.contents), 80*'=', ]) def __str__(self): return self.str_oneline() def __repr__(self): return self.str_headers_compact()
Die Engineering Manager The primary scope of this position is to provide, through direct interaction with the Engineering staff, the management of all Die Engineering projects. The position will be responsible for the quality, delivery and cost associated with each project through Die Engineering. Scheduling of Die Engineering workloads and time periods. Assignment of projects to the designated Designer/Engineer in concurrence with the Engineering Projects List and by priority. Support new quoting opportunities. Establish estimated hours and feasibility. Quote project cost. Maintain daily awareness of each group's progress (planned hours vs. actual hours). Provide design and technical input to maintain design quality and progress. Responsible for all personnel in the Engineering department, including interviewing, hiring and firing. Expand Die Design guidelines/procedures/work instructions. Coordinate projects with outside suppliers and tooling vendors. Educate customers regarding project technical issues. Track design costs. Improve die design technology. Engineering degree or equivalent experience preferred. At least ten years' experience in progressive dies with high-speed (Bruderer) presses and processes. Mechanical aptitude and ingenuity gained through tool making and/or die design experience. Proven analytical capability in electro-mechanical problem solving.
"""empty message Revision ID: 56a9f1dcf35 Revises: None Create Date: 2015-05-05 23:24:42.576471 """ # revision identifiers, used by Alembic. revision = '56a9f1dcf35' down_revision = None from alembic import op import sqlalchemy as sa def upgrade(): ### commands auto generated by Alembic - please adjust! ### op.create_table('users', sa.Column('id', sa.Integer(), nullable=False), sa.Column('username', sa.String(length=80), nullable=False), sa.Column('email', sa.String(length=80), nullable=False), sa.Column('password', sa.String(length=128), nullable=True), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('first_name', sa.String(length=30), nullable=True), sa.Column('last_name', sa.String(length=30), nullable=True), sa.Column('active', sa.Boolean(), nullable=True), sa.Column('is_admin', sa.Boolean(), nullable=True), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('email'), sa.UniqueConstraint('username') ) op.create_table('roles', sa.Column('id', sa.Integer(), nullable=False), sa.Column('name', sa.String(length=80), nullable=False), sa.Column('user_id', sa.Integer(), nullable=True), sa.ForeignKeyConstraint(['user_id'], ['users.id'], ), sa.PrimaryKeyConstraint('id'), sa.UniqueConstraint('name') ) op.create_table('questions', sa.Column('id', sa.Integer(), nullable=False), sa.Column('text', sa.String(length=400), nullable=True), sa.Column('created_at', sa.DateTime(), nullable=True), sa.Column('user_id', sa.Integer(), nullable=True), sa.ForeignKeyConstraint(['user_id'], ['users.id'], ), sa.PrimaryKeyConstraint('id') ) op.create_table('answers', sa.Column('id', sa.Integer(), nullable=False), sa.Column('text', sa.String(length=400), nullable=True), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('question_id', sa.Integer(), nullable=True), sa.Column('user_id', sa.Integer(), nullable=True), sa.ForeignKeyConstraint(['question_id'], ['questions.id'], ), sa.ForeignKeyConstraint(['user_id'], ['users.id'], ), sa.PrimaryKeyConstraint('id') ) op.create_table('likes', sa.Column('id', sa.Integer(), nullable=False), sa.Column('created_at', sa.DateTime(), nullable=False), sa.Column('answer_id', sa.Integer(), nullable=True), sa.Column('user_id', sa.Integer(), nullable=True), sa.ForeignKeyConstraint(['answer_id'], ['answers.id'], ), sa.ForeignKeyConstraint(['user_id'], ['users.id'], ), sa.PrimaryKeyConstraint('id') ) ### end Alembic commands ### def downgrade(): ### commands auto generated by Alembic - please adjust! ### op.drop_table('likes') op.drop_table('answers') op.drop_table('questions') op.drop_table('roles') op.drop_table('users') ### end Alembic commands ###
bftw.h (bfs.git, a breadth-first version of find) declares: the callback function type for bftw(), the data about the current file, the pointer passed to bftw(), and the flags that control bftw() behavior.
# encoding: utf-8 """ Test suite for pptx.oxml.presentation module """ from __future__ import absolute_import, print_function, unicode_literals import pytest from ..unitdata.presentation import a_sldId, a_sldIdLst class DescribeCT_SlideIdList(object): def it_can_add_a_sldId_element_as_a_child(self, add_fixture): sldIdLst, expected_xml = add_fixture sldIdLst.add_sldId('rId1') assert sldIdLst.xml == expected_xml def it_knows_the_next_available_slide_id(self, next_id_fixture): sldIdLst, expected_id = next_id_fixture assert sldIdLst._next_id == expected_id # fixtures ------------------------------------------------------- @pytest.fixture def add_fixture(self): sldIdLst = a_sldIdLst().with_nsdecls().element expected_xml = ( a_sldIdLst().with_nsdecls('p', 'r').with_child( a_sldId().with_id(256).with_rId('rId1')) ).xml() return sldIdLst, expected_xml @pytest.fixture(params=[ ((), 256), ((256,), 257), ((257,), 256), ((300,), 256), ((255,), 256), ((257, 259), 256), ((256, 258), 257), ((256, 257), 258), ((257, 258, 259), 256), ((256, 258, 259), 257), ((256, 257, 259), 258), ((258, 256, 257), 259), ]) def next_id_fixture(self, request): existing_ids, expected_id = request.param sldIdLst_bldr = a_sldIdLst().with_nsdecls() for n in existing_ids: sldIdLst_bldr.with_child(a_sldId().with_id(n)) sldIdLst = sldIdLst_bldr.element return sldIdLst, expected_id
Maven Repositories, Without the Headache. Is Your Team Hosting Their Own Maven Repositories? How much time, money, and productivity are you losing? Both small teams and large organizations developing with Java or Java Virtual Machine (JVM) related technologies use maven to manage their dependencies and private artifacts. They implement repository managment by running local repositories on internal servers, maintained by one or more members of the engineering team. Whether you're using an open source or commercially licensed repository manager, you need to be aware of the hidden costs of running and maintaining your own servers. When you host your own maven repositories, it costs your team valuable time that could be better spent working on projects that are critical to your business goals and objectives. With CloudRepo, all of these time sinks are eliminated from your team's shoulders and become our responsibility. Servers, hard disks, and network hardware are all things you need to purchase when you're hosting your own maven repositories. Licensing fees vary with the particular maven server you're licensing, but these fees are in addition to your hardware costs. CloudRepo's subscription model provides a simple, monthly subscription fee based on the amount of storage you use. No more hidden costs to worry about. As a manager of an engineering team, you know that productivity is one of the most difficult metrics to measure. However, productivity is also one of the most important metrics to achieve for business success. Hosting your own internal maven repositories can impact your team's productivity when the repository server becomes overloaded or unavailable. When this happens your team's development process can come to a screeching halt while your developers wait for the server to come back online. CloudRepo was built from the ground up to be highly available and scalable so that an outage of one server, or a spike in load, does not impact your team's productivity. Unburden your team and save money and time when you move to CloudRepo. Signup takes less than 20 seconds. Setup your account in less than a minute and never worry about running your own local repository servers again. We manage your remote repositories, so your team doesn't have to. Public access to maven artifacts stored in CloudRepo remote repositories allows your users to read and write to CloudRepo by simply configuring your maven project file to point at the CloudRepo repository url. Any apache maven compatible tool or HTTPS client can read from our remote repositories. Writes to CloudRepo remote repositories always require write permissions granted via access control. Leverage CloudRepo's remote repositories as a highly available distribution platform for your apache maven binaries, libraries, or other artifacts. The Maven Central Repository is the world's largest remote repository and stores the majority of the world's open source, public artifacts. By default, all maven project files use the Maven Central Repository to retrieve artifacts. While this central public repository server works well for some teams, it may not be the right fit for your team, especially if you wish to publish private artifacts as well. Whether you're frustrated with the upload process of Maven Central or you just want to have more control over your maven artifacts, CloudRepo provides a simple, easy to use, alternative to the Maven Central Repository. Private repositories restrict access of your private artifacts to authorized individuals. 
Just like public repositories, private repositories allow read and write access to your private artifacts via any apache maven compatible tool or HTTPS client. Credentials are required to access any private artifacts stored within private maven repositories. Read and write permissions are granted via access control. Eliminate the need to install, manage, and maintain your internal maven repositories by moving your maven projects to CloudRepo. We manage all of your private artifacts, so you don't have to. All CloudRepo remote repositories are globally accessible and available to your team via HTTPS URLs from any internet connection. Have remote employees or other team members? No longer will they struggle or fight with VPN or other network configuration when accessing your repositories. Apache Maven requires either a local repository or remote repository to push/pull public or private artifacts from. Configure Apache Maven project file to read and write from your CloudRepo remote repositories just as you would any other remote repository server. No maven plugins, special command line tools, or anything needed. CloudRepo allows you to share your remote repositories with the public by providing the ability to create a public repositories. Distribute your repository url to your customers, no access control required.. If you want your remote repositories to be accessed solely by your team, use a private repository. All access to private repositories are protected by access control which require user level authentication in order to read and write. All of the benefits of hosting your repositories in the cloud are included with every CloudRepo subscription. When your team needs access to your repositories, they must be available. If your repositories are down for any reason, your team will be blocked. CloudRepo hosts your repositories across multiple servers so, even if multiple servers go down, your business can keep operating without impact. Your maven artifacts are critical to your organization. When you host your maven repositories on a single server, in a single location, you're putting these artifacts at risk. CloudRepo stores your artifacts redundantly across multiple locations and availability zones. Running your local repositories on a single server makes it difficult to scale when load increases. When you host your repositories with CloudRepo, our servers scale automatically without any interruption to your team. The days of overloading your local repository server are over when you move your artifacts to CloudRepo. If you lose your local repository server and all your artifacts in a disaster, what will happen to your business? Moving to CloudRepo, disaster recovery is no longer something you have to worry about. CloudRepo has been architected from the ground up with security in mind. Grant your users read and write permissions via the access control features of CloudRepo. We also offer integration with Okta if you'd like to manage access control with Okta. CloudRepo offers two types of access for your repository users: read/write or read only. Users with read/write access can upload and download maven artifacts. Users with read only permissions can only read artifacts. When your maven artifacts aren't being accessed, they are stored encrypted at rest. Encryption at rest protects your data from unauthorized access. All APIs are restricted to HTTPS in order to protect the privacy of maven artifacts while they're in flight. No one comes close to matching or beating our pricing. 
Save time and money when you host all of your maven repositories with CloudRepo. We don't want your team to worry about data transfer limits to and from your repositories, so for business plans we removed them!
#!/usr/bin/env python2 import os, re, tempfile, shutil, time, signal, unittest, sys from subprocess import Popen, check_call def slurp(file): with open(file) as f: return f.read() def wait_until_exists(file, timeout=1, delay=0.02): start = time.time() while time.time() < start + timeout: if os.path.exists(file): return time.sleep(delay) raise Exception("timeout waiting for " + file) def run_tests(tmpdir): socket_file = os.path.join(tmpdir, "logductd.sock") logs_dir = os.path.join(tmpdir, "logs") daemon = Popen([sys.executable, "-m", "logduct.daemon", "-s", socket_file, "-d", logs_dir, "--trust-blindly"]) unit = "dummyunit" stdio_log = os.path.join(logs_dir, unit, "stdio.log") third_log = os.path.join(logs_dir, unit, "third.log") try: wait_until_exists(socket_file) # stdio check_call([sys.executable, "-m", "logduct.run", "-s", socket_file, "-u", unit, "echo", "hello"]) wait_until_exists(stdio_log) data = slurp(stdio_log) match = re.match(r"\d\d:\d\d:\d\d.\d\d\d (unknown|echo)\[\d+\]: hello\n", data) assert match # pipe fd check_call([sys.executable, "-m", "logduct.run", "-s", socket_file, "-u", unit, "--fd", "3:third", "--no-stdio", "bash", "-c", "echo there >&3"]) wait_until_exists(third_log) data = slurp(third_log) match = re.match(r"\d\d:\d\d:\d\d.\d\d\d: there\n", data) assert match finally: daemon.send_signal(signal.SIGTERM) time.sleep(0.2) daemon.kill() def main(): try: tmpdir = tempfile.mkdtemp("logduct-test") run_tests(tmpdir) finally: shutil.rmtree(tmpdir) class Test(unittest.TestCase): def test(self): main() if __name__ == '__main__': main()
You are here: Home / 102 Years Ago Today – Suffragists Under Attack! On January 10, 1917 suffrage leaders from the Congressional Union, discouraged by President Wilson’s refusal to support the suffrage amendment, decided on a daring, attention-grabbing demonstration – picketing the White House. Silent Sentinels appeared at the White House gates, holding banners with such messages as “Mr. President, How Long Must Women Wait for Liberty?” and “Mr. President, What Will You Do for Woman Suffrage?,” braving cold, rain and snow in a desperate attempt to garner support for their cause. And in another ironic twist, somewhere along the way Gardiner seems to have changed her mind about suffrage. Her 1924 obituary lists her as secretary of the National League of Women Voters. It’s so important to rediscover our progenitors. And, in an example like that of Ruth Kimball Gardiner, to be reminded that Woman Suffrage was a multi-faceted, multi-grey-area’d issue–not simply either all Pro or all Anti.
import sys
import json
from os.path import join, dirname, abspath
from rdflib import Graph, Namespace, Literal
from indra.sources import sofia


# Note that this is just a placeholder, it doesn't resolve as a URL
sofia_ns = Namespace('http://cs.cmu.edu/sofia/')
indra_ns = 'http://sorger.med.harvard.edu/indra/'
indra_rel_ns = Namespace(indra_ns + 'relations/')
isa = indra_rel_ns.term('isa')


def save_ontology(g, path):
    with open(path, 'wb') as out_file:
        g_bytes = g.serialize(format='nt')
        # Replace extra new lines in string and get rid of empty line at end
        g_bytes = g_bytes.replace(b'\n\n', b'\n').strip()
        # Split into rows and sort
        rows = g_bytes.split(b'\n')
        rows.sort()
        g_bytes = b'\n'.join(rows)
        out_file.write(g_bytes)


def build_ontology(ont_json, rdf_path):
    G = Graph()
    for top_key, entries in ont_json.items():
        for entry_key, examples in entries.items():
            if '/' in entry_key:
                parent, child = entry_key.split('/', maxsplit=1)
                parent_term = sofia_ns.term(parent)
                child_term = sofia_ns.term(entry_key)
                rel = (child_term, isa, parent_term)
                G.add(rel)
    save_ontology(G, rdf_path)
    # Return the graph so callers (such as the __main__ block below) receive
    # the constructed ontology instead of None.
    return G


if __name__ == '__main__':
    # Path to a SOFIA ontology JSON file
    sofia_ont_json_file = sys.argv[1]
    with open(sofia_ont_json_file, 'r') as fh:
        sofia_ont_json = json.load(fh)
    sofia_rdf_path = join(dirname(abspath(sofia.__file__)), 'sofia_ontology.rdf')
    G = build_ontology(sofia_ont_json, sofia_rdf_path)
The Industrial Spraying business provides its customers with integrated solutions and services in the areas of protection, finishes and lubrication. It operates in a variety of markets: wood, metal and plastic but also glass, leather and foods. The companies in the Group's Industrial Spraying business offer a comprehensive range of complementary products and solutions: manual, automatic or robotic pumps, machines, reinforced hoses and applicators. These products enable our customers to improve their productivity significantly through increasingly accurate spraying, while protecting the environment and operator health. Through SAMES KREMLIN, EXEL Industries group offers equipment for distributing and applying paints (liquids and powders), glues, adhesives and lubricants. TRICOFLEX specializes in the manufacture of reinforced hoses. As a major player in the automotive, aerospace and agricultural machinery markets, our Industrial Spraying companies continue their development strategy by intensifying the complementarity of their products and solutions. Major ongoing investment in research and innovation, protected by a large number of international patents, positions the Group to maintain its leadership position and secure its long-term growth. Our final priority is customer support. Improving the skills of our local teams, expanding our partner network and developing our services are all part of the Group's everyday priorities. Close proximity and quality are the essential ingredients for excellent customer relations.
#!/usr/bin/python
# -*- coding:utf-8 -*-
import time
from tlgflaws import *


## A filter that finds all pages which were changed today.
class FRecentlyChanged(FlawFilter):
    shortname = 'RecentlyChanged'               # name that identifies the filter (do not translate!)
    label = _('Recently Changed')               # label shown in the frontend next to the checkbox
    description = _('Page was touched today.')  # longer description text for the tooltip
    group = _('Timeliness')                     # group the filter is sorted into

    # The Action class for this filter
    class Action(TlgAction):
        # execute() filters the pages and puts the results into resultQueue.
        def execute(self, resultQueue):
            cur = getCursors()[self.wiki]
            # Generate format strings for multiple pages.
            format_strings = ','.join(['%s'] * len(self.pageIDs))
            # Start of the current day in the format used by the Wikipedia database
            today = time.strftime('%Y%m%d000000', time.localtime(time.time()))
            params = []
            params.extend(self.pageIDs)
            params.append(today)
            # Find the subset of pages that were changed today
            cur.execute('SELECT * FROM page WHERE page_id IN (%s) AND page_touched >= %%s' % format_strings, params)
            changed = cur.fetchall()
            # Return all pages that were found
            for row in changed:
                resultQueue.put(TlgResult(self.wiki, row, self.parent))

    # We want to process 100 pages per action.
    def getPreferredPagesPerAction(self):
        return 100

    # Create an action.
    def createActions(self, wiki, pages, actionQueue):
        actionQueue.put(self.Action(self, wiki, pages))


# Register the filter when the module is loaded:
#FlawFilters.register(FRecentlyChanged)
I was told that buying 5 rolls of "Rolled Shingles" is the best thing to put under my AGP to keep grass and weeds from breaking through. Has anyone heard of this? I called a local supply store and asked if they had any "Rolled Shingles". The guy said they did and said that they sell them ALL the time for pool installs. He said this before I even told him what I wanted it for. So I'm guessing it's used far more than I thought. I don't install pools so I wouldn't know. At least he gave you the answer you were looking for. Looks like rolled shingles it is! I just have a hard time going with a supply store and one install guy telling me it's the way to go. I really wish someone else would give me their input on this and let me know if it's the right thing to do. When I redid my ABG 24' round pool 8 yrs ago, I put 1" styrofoam down first, cut it to fit, grey taped the seams. I thought it would be a cheaper alternative to "Happy Feet" or something the pool store was selling. It was great for about 5 yrs. The bottom was smooth and flat and felt spongy under foot. But for the last 2 or 3 yrs, it has all got compressed and is not soft any more and you can see where the seams are. I have 3 places where moles have left tunnels under the bottom so I expect that I might be putting in a new liner next season. I haven't decided what to put down the next time. Went by the supply store today and talked to them about using the rolled shingles for a weed/grass blocker under the pool. They told me that it works awesome because it's thick and doesn't shift like plastic can. The guy told me he had it put under his pool 6 yrs ago and hasn't had any issues with it at all. The gritty part faces down and the felt part faces the liner. The stuff looks pretty thick. I say for $5 a roll and $50 to the installer for doing it, it's a darn good deal for added protection.
# -*- coding: utf-8 -*- # Copyright 2020 Google LLC # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # import warnings from typing import Awaitable, Callable, Dict, Optional, Sequence, Tuple, Union from google.api_core import gapic_v1 # type: ignore from google.api_core import grpc_helpers_async # type: ignore from google.auth import credentials as ga_credentials # type: ignore from google.auth.transport.grpc import SslCredentials # type: ignore import packaging.version import grpc # type: ignore from grpc.experimental import aio # type: ignore from google.cloud.retail_v2.types import catalog as gcr_catalog from google.cloud.retail_v2.types import catalog_service from .base import CatalogServiceTransport, DEFAULT_CLIENT_INFO from .grpc import CatalogServiceGrpcTransport class CatalogServiceGrpcAsyncIOTransport(CatalogServiceTransport): """gRPC AsyncIO backend transport for CatalogService. Service for managing catalog configuration. This class defines the same methods as the primary client, so the primary client can load the underlying transport implementation and call it. It sends protocol buffers over the wire using gRPC (which is built on top of HTTP/2); the ``grpcio`` package must be installed. """ _grpc_channel: aio.Channel _stubs: Dict[str, Callable] = {} @classmethod def create_channel(cls, host: str = 'retail.googleapis.com', credentials: ga_credentials.Credentials = None, credentials_file: Optional[str] = None, scopes: Optional[Sequence[str]] = None, quota_project_id: Optional[str] = None, **kwargs) -> aio.Channel: """Create and return a gRPC AsyncIO channel object. Args: host (Optional[str]): The host for the channel to use. credentials (Optional[~.Credentials]): The authorization credentials to attach to requests. These credentials identify this application to the service. If none are specified, the client will attempt to ascertain the credentials from the environment. credentials_file (Optional[str]): A file with credentials that can be loaded with :func:`google.auth.load_credentials_from_file`. This argument is ignored if ``channel`` is provided. scopes (Optional[Sequence[str]]): A optional list of scopes needed for this service. These are only used when credentials are not specified and are passed to :func:`google.auth.default`. quota_project_id (Optional[str]): An optional project to use for billing and quota. kwargs (Optional[dict]): Keyword arguments, which are passed to the channel creation. Returns: aio.Channel: A gRPC AsyncIO channel object. 
""" return grpc_helpers_async.create_channel( host, credentials=credentials, credentials_file=credentials_file, quota_project_id=quota_project_id, default_scopes=cls.AUTH_SCOPES, scopes=scopes, default_host=cls.DEFAULT_HOST, **kwargs ) def __init__(self, *, host: str = 'retail.googleapis.com', credentials: ga_credentials.Credentials = None, credentials_file: Optional[str] = None, scopes: Optional[Sequence[str]] = None, channel: aio.Channel = None, api_mtls_endpoint: str = None, client_cert_source: Callable[[], Tuple[bytes, bytes]] = None, ssl_channel_credentials: grpc.ChannelCredentials = None, client_cert_source_for_mtls: Callable[[], Tuple[bytes, bytes]] = None, quota_project_id=None, client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO, always_use_jwt_access: Optional[bool] = False, ) -> None: """Instantiate the transport. Args: host (Optional[str]): The hostname to connect to. credentials (Optional[google.auth.credentials.Credentials]): The authorization credentials to attach to requests. These credentials identify the application to the service; if none are specified, the client will attempt to ascertain the credentials from the environment. This argument is ignored if ``channel`` is provided. credentials_file (Optional[str]): A file with credentials that can be loaded with :func:`google.auth.load_credentials_from_file`. This argument is ignored if ``channel`` is provided. scopes (Optional[Sequence[str]]): A optional list of scopes needed for this service. These are only used when credentials are not specified and are passed to :func:`google.auth.default`. channel (Optional[aio.Channel]): A ``Channel`` instance through which to make calls. api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint. If provided, it overrides the ``host`` argument and tries to create a mutual TLS channel with client SSL credentials from ``client_cert_source`` or applicatin default SSL credentials. client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]): Deprecated. A callback to provide client SSL certificate bytes and private key bytes, both in PEM format. It is ignored if ``api_mtls_endpoint`` is None. ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials for grpc channel. It is ignored if ``channel`` is provided. client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]): A callback to provide client certificate bytes and private key bytes, both in PEM format. It is used to configure mutual TLS channel. It is ignored if ``channel`` or ``ssl_channel_credentials`` is provided. quota_project_id (Optional[str]): An optional project to use for billing and quota. client_info (google.api_core.gapic_v1.client_info.ClientInfo): The client info used to send a user-agent string along with API requests. If ``None``, then default info will be used. Generally, you only need to set this if you're developing your own client library. always_use_jwt_access (Optional[bool]): Whether self signed JWT should be used for service account credentials. Raises: google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport creation failed for any reason. google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials`` and ``credentials_file`` are passed. 
""" self._grpc_channel = None self._ssl_channel_credentials = ssl_channel_credentials self._stubs: Dict[str, Callable] = {} if api_mtls_endpoint: warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning) if client_cert_source: warnings.warn("client_cert_source is deprecated", DeprecationWarning) if channel: # Ignore credentials if a channel was passed. credentials = False # If a channel was explicitly provided, set it. self._grpc_channel = channel self._ssl_channel_credentials = None else: if api_mtls_endpoint: host = api_mtls_endpoint # Create SSL credentials with client_cert_source or application # default SSL credentials. if client_cert_source: cert, key = client_cert_source() self._ssl_channel_credentials = grpc.ssl_channel_credentials( certificate_chain=cert, private_key=key ) else: self._ssl_channel_credentials = SslCredentials().ssl_credentials else: if client_cert_source_for_mtls and not ssl_channel_credentials: cert, key = client_cert_source_for_mtls() self._ssl_channel_credentials = grpc.ssl_channel_credentials( certificate_chain=cert, private_key=key ) # The base transport sets the host, credentials and scopes super().__init__( host=host, credentials=credentials, credentials_file=credentials_file, scopes=scopes, quota_project_id=quota_project_id, client_info=client_info, always_use_jwt_access=always_use_jwt_access, ) if not self._grpc_channel: self._grpc_channel = type(self).create_channel( self._host, credentials=self._credentials, credentials_file=credentials_file, scopes=self._scopes, ssl_credentials=self._ssl_channel_credentials, quota_project_id=quota_project_id, options=[ ("grpc.max_send_message_length", -1), ("grpc.max_receive_message_length", -1), ], ) # Wrap messages. This must be done after self._grpc_channel exists self._prep_wrapped_messages(client_info) @property def grpc_channel(self) -> aio.Channel: """Create the channel designed to connect to this service. This property caches on the instance; repeated calls return the same channel. """ # Return the channel from cache. return self._grpc_channel @property def list_catalogs(self) -> Callable[ [catalog_service.ListCatalogsRequest], Awaitable[catalog_service.ListCatalogsResponse]]: r"""Return a callable for the list catalogs method over gRPC. Lists all the [Catalog][google.cloud.retail.v2.Catalog]s associated with the project. Returns: Callable[[~.ListCatalogsRequest], Awaitable[~.ListCatalogsResponse]]: A function that, when called, will call the underlying RPC on the server. """ # Generate a "stub function" on-the-fly which will actually make # the request. # gRPC handles serialization and deserialization, so we just need # to pass in the functions for each. if 'list_catalogs' not in self._stubs: self._stubs['list_catalogs'] = self.grpc_channel.unary_unary( '/google.cloud.retail.v2.CatalogService/ListCatalogs', request_serializer=catalog_service.ListCatalogsRequest.serialize, response_deserializer=catalog_service.ListCatalogsResponse.deserialize, ) return self._stubs['list_catalogs'] @property def update_catalog(self) -> Callable[ [catalog_service.UpdateCatalogRequest], Awaitable[gcr_catalog.Catalog]]: r"""Return a callable for the update catalog method over gRPC. Updates the [Catalog][google.cloud.retail.v2.Catalog]s. Returns: Callable[[~.UpdateCatalogRequest], Awaitable[~.Catalog]]: A function that, when called, will call the underlying RPC on the server. """ # Generate a "stub function" on-the-fly which will actually make # the request. 
# gRPC handles serialization and deserialization, so we just need # to pass in the functions for each. if 'update_catalog' not in self._stubs: self._stubs['update_catalog'] = self.grpc_channel.unary_unary( '/google.cloud.retail.v2.CatalogService/UpdateCatalog', request_serializer=catalog_service.UpdateCatalogRequest.serialize, response_deserializer=gcr_catalog.Catalog.deserialize, ) return self._stubs['update_catalog'] __all__ = ( 'CatalogServiceGrpcAsyncIOTransport', )
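For context, here is a hedged sketch of how this transport is usually exercised indirectly, through the asynchronous client that selects it; the project and location in the resource name below are placeholders.

import asyncio
from google.cloud import retail_v2

async def list_catalogs_example():
    # The async client uses CatalogServiceGrpcAsyncIOTransport under the hood.
    client = retail_v2.CatalogServiceAsyncClient()
    parent = "projects/example-project/locations/global"  # hypothetical resource name
    pager = await client.list_catalogs(parent=parent)
    async for catalog in pager:
        print(catalog.name)

# asyncio.run(list_catalogs_example())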
Freestone Railroad: THE FUN NEVER STOPS at Gilbert Arizona's Freestone park! Instant Buy One Get One Free RideBand & Monthly Offers! at Freestone Railroad in Gilbert, Arizona! Pay one low price and ride for a full year! Get your Year Pass now! See the Admissions page for details. Join our E-mail list today and receive money-saving coupons directly to your inbox!! Ride the rails on the miniature railroad. Catch a 'desert wave' on the waterless Wave Runner. Pick your favorite pony from dozens on the beautiful antique carousel. Finish up with a ride on the mini Ferris Wheel, the perfect spot to look back on a day of fun! There's even fun food, at family-friendly prices. This Gilbert family fun attraction is open weekends and geared especially towards kids from 2-10, and has everything your family could want! Get Your $2 Off Coupon HERE! Located in Freestone Park just East of Lindsay Rd. on Juniper (Between Guadalupe and Elliott). ©2019 Freestone Railroad, LLC | For more family entertainment visit Desert Breeze Railroad or Enchanted Island!
# -*- coding: utf-8 -*- from openprocurement.api.utils import context_unpack, json_view, APIResource from openprocurement.tender.core.utils import ( save_tender, optendersresource, apply_patch ) from openprocurement.tender.belowthreshold.utils import ( check_status, ) from openprocurement.tender.core.validation import ( validate_patch_tender_data, ) @optendersresource(name='belowThreshold:Tender', path='/tenders/{tender_id}', procurementMethodType='belowThreshold', description="Open Contracting compatible data exchange format. See http://ocds.open-contracting.org/standard/r/master/#tender for more info") class TenderResource(APIResource): @json_view(permission='view_tender') def get(self): """Tender Read Get Tender ---------- Example request to get tender: .. sourcecode:: http GET /tenders/64e93250be76435397e8c992ed4214d1 HTTP/1.1 Host: example.com Accept: application/json This is what one should expect in response: .. sourcecode:: http HTTP/1.1 200 OK Content-Type: application/json { "data": { "id": "64e93250be76435397e8c992ed4214d1", "tenderID": "UA-64e93250be76435397e8c992ed4214d1", "dateModified": "2014-10-27T08:06:58.158Z", "procuringEntity": { "id": { "name": "Державне управління справами", "scheme": "https://ns.openprocurement.org/ua/edrpou", "uid": "00037256", "uri": "http://www.dus.gov.ua/" }, "address": { "countryName": "Україна", "postalCode": "01220", "region": "м. Київ", "locality": "м. Київ", "streetAddress": "вул. Банкова, 11, корпус 1" } }, "value": { "amount": 500, "currency": "UAH", "valueAddedTaxIncluded": true }, "itemsToBeProcured": [ { "description": "футляри до державних нагород", "primaryClassification": { "scheme": "CPV", "id": "44617100-9", "description": "Cartons" }, "additionalClassification": [ { "scheme": "ДКПП", "id": "17.21.1", "description": "папір і картон гофровані, паперова й картонна тара" } ], "unitOfMeasure": "item", "quantity": 5 } ], "enquiryPeriod": { "endDate": "2014-10-31T00:00:00" }, "tenderPeriod": { "startDate": "2014-11-03T00:00:00", "endDate": "2014-11-06T10:00:00" }, "awardPeriod": { "endDate": "2014-11-13T00:00:00" }, "deliveryDate": { "endDate": "2014-11-20T00:00:00" }, "minimalStep": { "amount": 35, "currency": "UAH" } } } """ if self.request.authenticated_role == 'chronograph': tender_data = self.context.serialize('chronograph_view') else: tender_data = self.context.serialize(self.context.status) return {'data': tender_data} @json_view(content_type="application/json", validators=(validate_patch_tender_data, ), permission='edit_tender') def patch(self): """Tender Edit (partial) For example here is how procuring entity can change number of items to be procured and total Value of a tender: .. sourcecode:: http PATCH /tenders/4879d3f8ee2443169b5fbbc9f89fa607 HTTP/1.1 Host: example.com Accept: application/json { "data": { "value": { "amount": 600 }, "itemsToBeProcured": [ { "quantity": 6 } ] } } And here is the response to be expected: .. 
sourcecode:: http HTTP/1.0 200 OK Content-Type: application/json { "data": { "id": "4879d3f8ee2443169b5fbbc9f89fa607", "tenderID": "UA-64e93250be76435397e8c992ed4214d1", "dateModified": "2014-10-27T08:12:34.956Z", "value": { "amount": 600 }, "itemsToBeProcured": [ { "quantity": 6 } ] } } """ tender = self.context if self.request.authenticated_role != 'Administrator' and tender.status in ['complete', 'unsuccessful', 'cancelled']: self.request.errors.add('body', 'data', 'Can\'t update tender in current ({}) status'.format(tender.status)) self.request.errors.status = 403 return if self.request.authenticated_role == 'chronograph': apply_patch(self.request, save=False, src=self.request.validated['tender_src']) check_status(self.request) save_tender(self.request) else: apply_patch(self.request, src=self.request.validated['tender_src']) self.LOGGER.info('Updated tender {}'.format(tender.id), extra=context_unpack(self.request, {'MESSAGE_ID': 'tender_patch'})) return {'data': tender.serialize(tender.status)}
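To make the documented exchange concrete, here is a hedged client-side sketch of the PATCH request from the docstring, using the requests library; the host and tender id are placeholders, and real deployments typically also require authorization (for example an acc_token query parameter), which is omitted here.

import requests

host = "https://example.com"                    # hypothetical API host
tender_id = "4879d3f8ee2443169b5fbbc9f89fa607"  # id from the docstring example
payload = {"data": {"value": {"amount": 600},
                    "itemsToBeProcured": [{"quantity": 6}]}}

response = requests.patch("{}/tenders/{}".format(host, tender_id), json=payload)
response.raise_for_status()
print(response.json()["data"]["dateModified"])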
Hello all, my post Auto Repair Sample Resume Gracecollege Us Sample Resume Templates 18294 Decorating Ideas was published on April 9, 2019 at 3:44 am and has been viewed by 0 users so far. If you want to use this image, click the download link below to go to the download page. Right-click the image and select "Save Image As" to download Auto Repair Sample Resume Gracecollege Us Sample Resume Templates 18294 to your computer, or select "Set Desktop Background As" if your browser has that capability. The first picture is named Auto Repair Sample Resume Gracecollege Us Sample Resume Templates and has a resolution of 638x825 pixels. The post contains tips and photos and has been seen by users searching for Auto Repair Sample Resume Gracecollege Us Sample Resume Templates through search engines. I uploaded this post to provide the best content for visitors of domain.com; as administrators, we try to make Auto Repair Sample Resume Gracecollege Us Sample Resume Templates 18294 what you are looking for, and we will update with new posts every day. If you are not happy with this site, I will keep trying to do my best for you. Hopefully domain.com will always assist you in finding what you need.
# 3p import ntplib # project from checks import AgentCheck from utils.ntp import get_ntp_args, set_user_ntp_settings DEFAULT_OFFSET_THRESHOLD = 60 # in seconds class NtpCheck(AgentCheck): DEFAULT_MIN_COLLECTION_INTERVAL = 900 # in seconds def check(self, instance): service_check_msg = None offset_threshold = instance.get('offset_threshold', DEFAULT_OFFSET_THRESHOLD) try: offset_threshold = int(offset_threshold) except (TypeError, ValueError): raise Exception('Must specify an integer value for offset_threshold. Configured value is %s' % repr(offset_threshold)) set_user_ntp_settings(dict(instance)) req_args = get_ntp_args() self.log.debug("Using ntp host: {0}".format(req_args['host'])) try: ntp_stats = ntplib.NTPClient().request(**req_args) except ntplib.NTPException: self.log.debug("Could not connect to NTP Server {0}".format( req_args['host'])) status = AgentCheck.UNKNOWN ntp_ts = None else: ntp_offset = ntp_stats.offset # Use the ntp server's timestamp for the time of the result in # case the agent host's clock is messed up. ntp_ts = ntp_stats.recv_time self.gauge('ntp.offset', ntp_offset, timestamp=ntp_ts) if abs(ntp_offset) > offset_threshold: status = AgentCheck.CRITICAL service_check_msg = "Offset {0} secs higher than offset threshold ({1} secs)".format(ntp_offset, offset_threshold) else: status = AgentCheck.OK self.service_check('ntp.in_sync', status, timestamp=ntp_ts, message=service_check_msg)
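The core of the check is the comparison between the reported clock offset and the configured threshold. A standalone sketch, assuming the public pool.ntp.org server, shows the same logic with ntplib directly; `offset` and `recv_time` are the fields the check reports on.

import ntplib

OFFSET_THRESHOLD = 60  # seconds, mirroring DEFAULT_OFFSET_THRESHOLD

stats = ntplib.NTPClient().request('pool.ntp.org', version=3)
status = 'CRITICAL' if abs(stats.offset) > OFFSET_THRESHOLD else 'OK'
print('ntp.offset={:.3f}s at ts={} -> {}'.format(stats.offset, stats.recv_time, status))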
A key function of Public Relations is conducting community and village consultations. Consultations with all Fijians are critical to ensuring that everyone is aware of and understands the intent of government as outlined in the People's Charter for Change, Peace and Progress (PCCPP). Consultations are facilitated by the PR Divisional Coordinators: Eastern (including the maritime zones), Northern and Western. The PR Divisional Coordinators collaborate with the Divisional Commissioners and Provincial Officers on the consultation schedules and logistics. The consultations provide the PR Division with the opportunity to relay government initiatives from the PCCPP and also to give people the opportunity to provide feedback. "The PR Officers' competencies in delivering substantive and relevant presentations and sound communication skills have enabled them to be effective agents of advocacy work." It is important to disseminate Government's intent during public forums and meetings, such as the one attended in the Lau Group. The public relations consultations being carried out in the greater Suva – Nasinu corridor are part of the integrated concept of Government agencies reaching out to the people. "The Government has brought a significant number of changes for the betterment of the Fijian people."
import sublime, sublime_plugin import cgi, time, os from ..libs import NodeJS from ..libs import FlowCLI from ..libs import flow from ..libs import util from ..libs.popup_manager import popup_manager from .wait_modified_async import JavascriptEnhancementsWaitModifiedAsyncViewEventListener show_flow_errors_css = "" with open(os.path.join(os.path.dirname(os.path.abspath(__file__)), "show_flow_errors.css"), encoding="utf-8") as css_file: show_flow_errors_css = "<style>"+css_file.read()+"</style>" class JavascriptEnhancementsShowFlowErrorsViewEventListener(JavascriptEnhancementsWaitModifiedAsyncViewEventListener, sublime_plugin.ViewEventListener): description_by_row_column = {} diagnostics = { "error": [], "warning": [] } diagnostic_regions = { "error": [], "warning": [] } diagnostic_scope = { "error": "storage", "warning": "keyword" } callback_setted_use_flow_checker_on_current_view = False prefix_thread_name = "javascript_enhancements_show_flow_errors_view_event_listener" wait_time = .15 modified = False def on_load_async(self): self.on_modified_async() def on_activated_async(self): self.on_modified_async() def on_modified(self): self.modified = True def on_modified_async(self): super(JavascriptEnhancementsShowFlowErrorsViewEventListener, self).on_modified_async() def on_selection_modified_async(self): view = self.view sel = view.sel()[0] if view.find_by_selector('source.js.embedded.html') and (self.diagnostics["error"] or self.diagnostics["warning"] or view.get_regions("javascript_enhancements_flow_error") or view.get_regions("javascript_enhancements_flow_warning")): pass elif not util.selection_in_js_scope(view) or not self.are_there_errors(): flow.hide_errors(view) return for key, value in self.diagnostics.items(): if not value and not view.get_regions("javascript_enhancements_flow_error"): flow.hide_errors(view, level=key) error_region = None error_level = "" for region in view.get_regions("javascript_enhancements_flow_error"): if region.contains(sel): error_region = region error_level = "error" break if not error_region: for region in view.get_regions("javascript_enhancements_flow_warning"): if region.contains(sel): error_region = region error_level = "warning" break if not self.can_check(): return error_description = "" if error_region: row_region, col_region = view.rowcol(error_region.begin()) end_row_region, endcol_region = view.rowcol(error_region.end()) try : error_description = self.description_by_row_column[str(row_region)+":"+str(end_row_region)+":"+str(col_region)+":"+str(endcol_region)+":"+error_level] except KeyError as e: if str(row_region+1)+":"+str(row_region+1)+":0:0:"+error_level in self.description_by_row_column: error_description = self.description_by_row_column[str(row_region+1)+":"+str(row_region+1)+":0:0:"+error_level] for key, value in self.diagnostics.items(): if value: error_count = len(value) error_count_text = 'Flow: {} {}{}'.format( error_count, key, '' if error_count is 1 else 's' ) if error_level == key and error_region: view.set_status( 'javascript_enhancements_flow_' + key, error_count_text + ': ' + error_description ) else: view.set_status('javascript_enhancements_flow_' + key, error_count_text) def on_modified_async_with_thread(self, recheck=True): self.modified = False view = self.view if view.find_by_selector('source.js.embedded.html'): pass elif not util.selection_in_js_scope(view): flow.hide_errors(view) return if not self.can_check(): return self.wait() flow_cli = FlowCLI(view) result = flow_cli.check_contents() self.diagnostics = { "error": [], 
"warning": [] } self.diagnostic_regions = { "error": [], "warning": [] } self.description_by_row_column = {} if result[0] and len(result[1]['errors']) > 0: for error in result[1]['errors']: description = '' operation = error.get('operation') row = -1 error_level = error['level'] self.diagnostics[error_level].append(error) for i in range(len(error['message'])): message = error['message'][i] # check if the error path is the same file opened on the current view. # this check is done because sometimes flow put errors from other files (for example when defining new flow definitions) if message['path'] and message['path'] != view.file_name(): continue if i == 0 : row = int(message['line']) - 1 endrow = int(message['endline']) - 1 col = int(message['start']) - 1 endcol = int(message['end']) self.diagnostic_regions[error_level].append(util.rowcol_to_region(view, row, endrow, col, endcol)) if operation: description += operation["descr"] if not description : description += "'"+message['descr']+"'" else : description += " " + message['descr'] if row >= 0 : self.description_by_row_column[str(row)+":"+str(endrow)+":"+str(col)+":"+str(endcol)+":"+error_level] = description if not self.modified : need_update_sublime_status = False for key, value in self.diagnostic_regions.items(): view.erase_regions('javascript_enhancements_flow_' + key) if value: view.add_regions( 'javascript_enhancements_flow_' + key, value, self.diagnostic_scope[key], 'dot', sublime.DRAW_SQUIGGLY_UNDERLINE | sublime.DRAW_NO_FILL | sublime.DRAW_NO_OUTLINE ) if not need_update_sublime_status: need_update_sublime_status = True else: view.erase_status("javascript_enhancements_flow_" + key) if need_update_sublime_status: self.on_selection_modified_async() elif (recheck) : sublime.set_timeout_async(lambda: self.on_modified_async_with_thread(recheck=False)) def on_hover(self, point, hover_zone) : view = self.view if view.find_by_selector('source.js.embedded.html') and (self.diagnostics["error"] or self.diagnostics["warning"] or view.get_regions("javascript_enhancements_flow_error") or view.get_regions("javascript_enhancements_flow_warning")): pass elif not util.selection_in_js_scope(view) or not self.are_there_errors(): flow.hide_errors(view) return for key, value in self.diagnostics.items(): if not value and not view.get_regions("javascript_enhancements_flow_error"): flow.hide_errors(view, level=key) if hover_zone != sublime.HOVER_TEXT : return sel = sublime.Region(point, point) is_hover_error = False region_hover_error = None error_level = "" for region in view.get_regions("javascript_enhancements_flow_error"): if region.contains(sel): region_hover_error = region is_hover_error = True error_level = "error" break if not is_hover_error: for region in view.get_regions("javascript_enhancements_flow_warning"): if region.contains(sel): region_hover_error = region is_hover_error = True error_level = "warning" break if not is_hover_error: return if not self.can_check(): return row_region, col_region = view.rowcol(region_hover_error.begin()) end_row_region, endcol_region = view.rowcol(region_hover_error.end()) error = None try : error = self.description_by_row_column[str(row_region)+":"+str(end_row_region)+":"+str(col_region)+":"+str(endcol_region)+":"+error_level] except KeyError as e: if str(row_region+1)+":"+str(row_region+1)+":0:0:"+error_level in self.description_by_row_column: error = self.description_by_row_column[str(row_region+1)+":"+str(row_region+1)+":0:0:"+error_level] if error: text = cgi.escape(error).split(" ") html = "" i = 0 while 
i < len(text) - 1: html += text[i] + " " + text[i+1] + " " i += 2 if i % 10 == 0 : html += " <br> " if len(text) % 2 != 0 : html += text[len(text) - 1] row_region, col_region = view.rowcol(region_hover_error.begin()) end_row_region, endcol_region = view.rowcol(region_hover_error.end()) # here the css code for the <a> element is not working, so the style is inline. popup_manager.set_visible("javascript_enhancements_flow_" + error_level, True) view.show_popup(""" <html> <body> """ + show_flow_errors_css + """ """ + html + """ <br> <a style="display: block; margin-top: 10px; color: #333;" class="copy-to-clipboard" href="copy_to_clipboard">Copy</a> </body> </html>""", sublime.HIDE_ON_MOUSE_MOVE_AWAY, point, 1150, 80, lambda action: sublime.set_clipboard(error) or view.hide_popup(), lambda: popup_manager.set_visible("javascript_enhancements_flow_" + error_level, False) ) def can_check(self): view = self.view settings = util.get_project_settings() if settings : if not settings["project_settings"]["flow_checker_enabled"] or not util.is_project_view(view) : flow.hide_errors(view) return False elif settings["project_settings"]["flow_checker_enabled"] : comments = view.find_by_selector('source.js comment') flow_comment_found = False for comment in comments: if "@flow" in view.substr(comment) : flow_comment_found = True break if not flow_comment_found : flow.hide_errors(view) return False elif not view.settings().get("javascript_enhancements_use_flow_checker_on_current_view") : flow.hide_errors(view) return False return True def are_there_errors(self): view = self.view return True if self.diagnostics["error"] or self.diagnostics["warning"] or view.get_regions("javascript_enhancements_flow_error") or view.get_regions("javascript_enhancements_flow_warning") else False
I challenge Bob Sullivan’s perspective, which accuses the developers behind Ashton Meeting Place shopping center (not a mall) of ignoring the community’s concerns (‘‘Are AMP developers really listening to the community?” June 14 letter). There have been few individuals (out of the proclaimed 800) against the shopping center that have attended any or all meetings (four) the developers held. It appears false information continues to circulate at the opposition’s hand. How can anyone make informed assessments when they choose not to attend the developers’ meetings? Where have the citizens of the self-created Sandy Spring-Ashton Rural Preserve Consortium been? The developers have been up-front and honest, and provided the experts to answer questions regarding the development of AMP. They have also listened to the concerns of the community and have incorporated many of those changes at a huge cost (their own) to the project. How many developers do you know that actually inform and ask for input from the community? The developers should be applauded for creating an esthetically beautiful center, which captures a yesteryear feel. The center respects the environment and will serve the community’s needs well. Mr. Sullivan crossed the line, however, when he accused an investor of blanketing the community with a letter ‘‘informing his neighbors that the plans are revised to reflect their concerns and all is well.” He implied that he spoke untruths about the project and misrepresented the community. This is totally false and disrespectful. The majority of the people living in the Ashton⁄Sandy Spring area do feel that AMP will benefit the community. That investor and his brother grew up on the corner in Ashton and worked a family business gasoline station. They are part of the history in Ashton⁄Sandy Spring. They have given countless hours in multiple community efforts and businesses including the Sandy Spring Lions Club, the Sandy Spring Museum, Sandy Spring Friends House, Sandy Spring Bank, Montgomery General Hospital, Wintergrowth, Brooke Grove Foundation, the Sandy Spring Fire Department, the Olney Theatre and many local churches. That investor has devoted his almost 80 years of life to the betterment of this community through physical, intellectual and financial support. He would not intentionally mislead the people in any circumstance. Anyone who has truly known him, has only known him to be kind, compassionate, generous and honest. By the way, ‘‘the most cherished community institution, the Sandy Spring Museum,” as Mr. Sullivan quoted, would not be on the property it is currently sitting without that investor. He is the one who procured the property for the museum and went on (along with several others) to establish and assist in providing funds for operating the organization. He was also on the board for many years, contributed financially, and procured funds for operations. In fact, he is still donating and raising money for the museum currently. Through good deeds, hard work, and vision, he has ensured the museum’s ongoing success. This community owes him a debt of gratitude for significantly contributing to the preservation of its heritage. Perhaps Mr. Sullivan should list for the community all of the citizens who are against AMP (including himself) that have subdivided and sold off their properties (or portions of their properties) to ‘‘unruralize” the area. Those individuals have served their own personal needs well. 
As owners of their property, they were able to decide what they wanted to do with the land without opposition. They have created havoc on our roads both in the development of those houses and by contributing to the excessive traffic that clogs routes 108 and New Hampshire Avenue daily. I welcome the opportunity to travel only one mile in that traffic, rather than three or four. The developers have presented an exceptional design on the only remaining commercial property in Ashton. Let’s welcome all of the new citizens of Ashton and Sandy Spring, enjoy the beauty of the center, and serve the needs of the citizens living in the surrounding area.
# Copyright (C) 2010-2014 GRNET S.A. # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. from django.core.urlresolvers import reverse from django.utils.translation import ugettext as _ from django.utils.http import urlencode from django.contrib.auth import authenticate from django.http import ( HttpResponse, HttpResponseBadRequest, HttpResponseForbidden) from django.core.exceptions import ValidationError from django.views.decorators.http import require_http_methods from urlparse import urlunsplit, urlsplit, parse_qsl from astakos.im.util import restrict_next from astakos.im.user_utils import login as auth_login, logout from astakos.im.views.decorators import cookie_fix import astakos.im.messages as astakos_messages from astakos.im.settings import REDIRECT_ALLOWED_SCHEMES import logging logger = logging.getLogger(__name__) @require_http_methods(["GET"]) @cookie_fix def login(request): """ If there is no `next` request parameter returns 400 (BAD REQUEST). Otherwise, if `next` request parameter is not among the allowed schemes, returns 403 (Forbidden). If the request user is authenticated and has signed the approval terms, redirects to `next` request parameter. If not, redirects to approval terms in order to return back here after agreeing with the terms. Otherwise, redirects to login in order to return back here after successful login. 
""" next = request.GET.get('next') if not next: return HttpResponseBadRequest('Missing next parameter') if not restrict_next(next, allowed_schemes=REDIRECT_ALLOWED_SCHEMES): return HttpResponseForbidden(_( astakos_messages.NOT_ALLOWED_NEXT_PARAM)) force = request.GET.get('force', None) response = HttpResponse() if force == '' and request.user.is_authenticated(): logout(request) if request.user.is_authenticated(): # if user has not signed the approval terms # redirect to approval terms with next the request path if not request.user.signed_terms: # first build next parameter parts = list(urlsplit(request.build_absolute_uri())) params = dict(parse_qsl(parts[3], keep_blank_values=True)) parts[3] = urlencode(params) next = urlunsplit(parts) # build url location parts[2] = reverse('latest_terms') params = {'next': next} parts[3] = urlencode(params) url = urlunsplit(parts) response['Location'] = url response.status_code = 302 return response renew = request.GET.get('renew', None) if renew == '': request.user.renew_token( flush_sessions=True, current_key=request.session.session_key ) try: request.user.save() except ValidationError, e: return HttpResponseBadRequest(e) # authenticate before login user = authenticate( username=request.user.username, auth_token=request.user.auth_token ) auth_login(request, user) logger.info('Token reset for %s' % user.username) parts = list(urlsplit(next)) parts[3] = urlencode({ 'uuid': request.user.uuid, 'token': request.user.auth_token }) url = urlunsplit(parts) response['Location'] = url response.status_code = 302 return response else: # redirect to login with next the request path # first build next parameter parts = list(urlsplit(request.build_absolute_uri())) params = dict(parse_qsl(parts[3], keep_blank_values=True)) # delete force parameter if 'force' in params: del params['force'] parts[3] = urlencode(params) next = urlunsplit(parts) # build url location parts[2] = reverse('login') params = {'next': next} parts[3] = urlencode(params) url = urlunsplit(parts) response['Location'] = url response.status_code = 302 return response
If you have a medical emergency please call 911 right away! If you are in need of assistance, please call our office at 920-232-1130. From Highway 41 South: Directions for visitors coming from Green Bay or other cities north of Oshkosh. Highway 41 runs through Oshkosh. Take Hwy 41 Southbound. Exit off 41S via 9th Ave exit to the right, then turn right onto S. Washburn St. (Frontage Road). Head north on S. Washburn St. past Fire Station #16 on your left to the second driveway at OptiVision, then turn left into driveway where you will see Tower West Building located behind OptiVision. Fox Valley Dermatology is on 2nd Floor of Tower West Building. From Highway 41 North: Directions for visitors coming from Fond du Lac, Milwaukee or other cities south of Oshkosh. Highway 41 runs through Oshkosh. Take Hwy 41 Northbound. Exit off 41N stay in left lane around first roundabout traveling west on 9th Ave. crossing over Hwy 41. Carefully move to right lane through second roundabout on 9th Ave, then turn right at third roundabout onto S. Washburn St. (Frontage Road). Head north on S. Washburn St. past Fire Station #16 on your left to the second driveway at OptiVision, then turn left into driveway where you will see Tower West Building located behind OptiVision. Fox Valley Dermatology is on 2nd Floor of Tower West Building.
# Copyright (c) 2006-2014 LOGILAB S.A. (Paris, FRANCE) <[email protected]> # Copyright (c) 2014-2020 Claudiu Popa <[email protected]> # Copyright (c) 2014 Google, Inc. # Copyright (c) 2015-2017 Ceridwen <[email protected]> # Copyright (c) 2015 Florian Bruhin <[email protected]> # Copyright (c) 2015 Radosław Ganczarek <[email protected]> # Copyright (c) 2016 Moises Lopez <[email protected]> # Copyright (c) 2017 Hugo <[email protected]> # Copyright (c) 2017 Łukasz Rogalski <[email protected]> # Copyright (c) 2017 Calen Pennington <[email protected]> # Copyright (c) 2018 Ville Skyttä <[email protected]> # Copyright (c) 2018 Ashley Whetter <[email protected]> # Copyright (c) 2018 Bryce Guinta <[email protected]> # Copyright (c) 2019 Uilian Ries <[email protected]> # Copyright (c) 2019 Thomas Hisch <[email protected]> # Copyright (c) 2020-2021 hippo91 <[email protected]> # Copyright (c) 2020 David Gilman <[email protected]> # Copyright (c) 2020 Konrad Weihmann <[email protected]> # Copyright (c) 2020 Felix Mölder <[email protected]> # Copyright (c) 2020 Michael <[email protected]> # Copyright (c) 2021 Pierre Sassoulas <[email protected]> # Copyright (c) 2021 Marc Mueller <[email protected]> # Licensed under the LGPL: https://www.gnu.org/licenses/old-licenses/lgpl-2.1.en.html # For details: https://github.com/PyCQA/astroid/blob/master/LICENSE """astroid packaging information""" from typing import Optional __version__ = "2.5.6" # For an official release, use 'alpha_version = False' and 'dev_version = None' alpha_version: bool = False # Release will be an alpha version if True (ex: '1.2.3a6') dev_version: Optional[int] = None if dev_version is not None: if alpha_version: __version__ += f"a{dev_version}" else: __version__ += f".dev{dev_version}" version = __version__
G stands for the total expenditure incurred by the government. The government has to meet the expenses required to acquire all kinds of goods and services. For public services to be made available to the country, the largest single expense is the salaries paid to those who provide those services. Analysts also point out that a major share of the nation's revenue is assigned to procuring military weapons, followed by the ever-mounting salary bill for service providers. Government spending as such has a crucial impact on the economy of the nation, although a study from that point of view would take us away from the immediate concern of the present context. Government spending, which we can call public sector spending, cannot be dispensed with. There are certain services the private sector cannot afford to provide to the public: building roads, without which no meaningful commuting is possible; laying railway tracks; giving basic education to all; and providing medical services. For all of these, the government has to spend, and there is no other way. Needless to say, the government has to employ thousands of people across all its departments and their attendant functions, and it has to pay their salaries. All the expenses the government has to meet in this way count as government spending. There is one more aspect of spending by the government. A nation can also borrow from other nations or from international bodies that lend money at a very low rate of interest over a long period of, say, 30 years. For the loan, the nation has to pay interest and eventually repay the principal. A country typically borrows money from developed countries to build national infrastructure on a massive scale. The fourth component of GDP is the net exports of the country. It measures how much revenue the country has generated through its exports. Exporting means our products are purchased by foreigners, so their purchases of our goods add to our income. But we also buy products from other countries; we import those products and therefore spend our money on foreign goods. Naturally, the money we generate must be calculated by subtracting what we spend on foreign goods from what we receive from foreign buyers. Put simply: domestic spending on foreign goods is Imports; foreigners' spending on domestic goods is Exports; and Net Exports is Exports (X) minus Imports (M). We have seen so far what is meant by GDP, and in our next session we will try to understand the difference between Gross Domestic Product and Gross National Product, which are not the same.
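To make the arithmetic concrete, here is a small worked example of the expenditure approach; all figures are invented sample numbers, not data for any real economy.

```python
# Illustrative expenditure-approach GDP calculation with made-up numbers.
# GDP = C + I + G + (X - M), where M denotes imports.
consumption = 11000          # C: household consumption
investment = 3500            # I: business investment
government_spending = 4000   # G: government purchases and salaries
exports = 2500               # X: foreigners' spending on domestic goods
imports = 3000               # M: domestic spending on foreign goods

net_exports = exports - imports   # can be negative (a trade deficit)
gdp = consumption + investment + government_spending + net_exports

print("Net exports:", net_exports)   # -500
print("GDP:", gdp)                   # 18000
```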
from itertools import product import matplotlib matplotlib.use("agg") from matplotlib import pyplot as plt import seaborn as sns import pandas as pd import common import numpy as np MIN_CALLS = 100 colors = common.get_colors(snakemake.config) props = product(snakemake.params.callers, snakemake.params.len_ranges, snakemake.params.fdrs) calls = [] for _calls, (caller, len_range, fdr) in zip(snakemake.input.varlociraptor_calls, props): calls.append({"caller": caller, "len_range": len_range, "fdr": float(fdr), "calls": _calls}) calls = pd.DataFrame(calls) calls = calls.set_index("caller", drop=False) def plot_len_range(minlen, maxlen): def plot(caller): color = colors[caller] label = "varlociraptor+{}".format(caller) fdrs = [] alphas = [] calls_ = calls.loc[caller] calls_ = calls_[calls_["len_range"].map(lambda r: r == [minlen, maxlen])] calls_ = calls_.sort_values("fdr") for e in calls_.itertuples(): c = pd.read_table(e.calls) n = c.shape[0] if n < MIN_CALLS: continue true_fdr = 1.0 - common.precision(c) if fdrs and fdrs[-1] == true_fdr: continue fdrs.append(true_fdr) alphas.append(e.fdr) plt.plot(alphas, fdrs, ".-", color=color, label=label) for caller in calls.index.unique(): plot(caller) plt.plot([0, 1], [0, 1], ":", color="grey") sns.despine() ax = plt.gca() handles, _ = ax.get_legend_handles_labels() return ax, handles common.plot_ranges( snakemake.params.len_ranges, plot_len_range, xlabel="FDR threshold", ylabel="true FDR") plt.savefig(snakemake.output[0], bbox_inches="tight")
If you want your own hosting server capable of running multiple busy web sites, then a dedicated server is exactly what you need. With generous CPU and RAM allocations and plenty of disk space and traffic quota, it will bring real value to your Internet presence. Choosing our dedicated services gives you a high degree of control over your server setup. Besides the dedicated server itself, you will also be able to pick an operating system. On the signup form you will find a pull-down selection of Linux distributions – Ubuntu, Debian or CentOS. All you have to do is tell us which Linux OS you would like and we will install it for you, together with our free, in-house-built Website Control Panel. Your dedicated server will be housed in one of the best-known datacenters in the US – Steadfast, located in central Chicago, Illinois. The facility provides excellent operating conditions for your sites and apps, and the risk of outages caused by natural catastrophes such as severe weather, earthquakes or flooding is reduced to a minimum. 24x7x365 onsite security monitoring and gigabit network connections to anywhere in the world are also provided.
from insights.parsers.jboss_standalone_main_conf import JbossStandaloneConf from insights.tests import context_wrap from insights.parsers import jboss_standalone_main_conf import doctest JBOSS_STANDALONE_CONFIG = """ <?xml version='1.0' encoding='UTF-8'?> <server xmlns="urn:jboss:domain:1.7"> <management> <security-realms> <security-realm name="ManagementRealm"> <authentication> <local default-user="$local" skip-group-loading="true"/> <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/> </authentication> <authorization map-groups-to-roles="false"> <properties path="mgmt-groups.properties" relative-to="jboss.server.config.dir"/> </authorization> </security-realm> <security-realm name="ApplicationRealm"> <authentication> <local default-user="$local" allowed-users="*" skip-group-loading="true"/> <properties path="application-users.properties" relative-to="jboss.server.config.dir"/> </authentication> <authorization> <properties path="application-roles.properties" relative-to="jboss.server.config.dir"/> </authorization> </security-realm> </security-realms> <audit-log> <formatters> <json-formatter name="json-formatter"/> </formatters> <handlers> <file-handler name="file" formatter="json-formatter" path="audit-log.log" relative-to="jboss.server.data.dir"/> </handlers> <logger log-boot="true" log-read-only="false" enabled="false"> <handlers> <handler name="file"/> </handlers> </logger> </audit-log> <management-interfaces> <native-interface security-realm="ManagementRealm"> <socket-binding native="management-native"/> </native-interface> <http-interface security-realm="ManagementRealm"> <socket-binding http="management-http"/> </http-interface> </management-interfaces> <access-control provider="simple"> <role-mapping> <role name="SuperUser"> <include> <user name="$local"/> </include> </role> </role-mapping> </access-control> </management> </server> """ def test_jboss_standalone_conf(): jboss_standalone_conf = JbossStandaloneConf( context_wrap(JBOSS_STANDALONE_CONFIG, path="/root/jboss/jboss-eap-6.4/standalone/configuration/standalone.xml")) assert jboss_standalone_conf is not None assert jboss_standalone_conf.file_path == "/root/jboss/jboss-eap-6.4/standalone/configuration/standalone.xml" assert jboss_standalone_conf.get_elements( ".//management/security-realms/security-realm/authentication/properties")[0].get( "relative-to") == 'jboss.server.config.dir' def test_jboss_standalone_conf_doc_examples(): env = { 'JbossStandaloneConf': JbossStandaloneConf, 'jboss_main_config': JbossStandaloneConf(context_wrap(JBOSS_STANDALONE_CONFIG, path='/root/jboss/jboss-eap-6.4/standalone/configuration/standalone.xml')) } failed, total = doctest.testmod(jboss_standalone_main_conf, globs=env) assert failed == 0
I cannot tell you how thrilled I am for longer days! I’ve been rearranging my closet this week. Moving the darker colors to the back. And the brighter colors to the front. Breaking out the cargo pants. Hoping to wear the lightweight spring scarves one more time before the humidity sets in. And looking forward to when it’s warm enough to wear ruffled tanks again. Just stumbled into your blog today and I LOVE it. It seems you have many of the same addictions,er, I mean interests that do – cooking, scrapping, etc. Great site! What a cute blog!! Love your hubby’s Auburn shirt! War Eagle! Those photos were soooo fun! so crisp, so up-close, so colorful, so pretty! loved them! You have inspired me. I going in tomorrow……into that black-hole I call a closet. Thanks Amanda for showing me how nice clothes can look when they’re hung up and ironed. you have a very shique style. i adore it- i want your closet!! You are as cute as a button. That is all I have to say. Love the photographs!! Just what I needed to get started on my own Spring Cleaning. It’s still just a little cool here in North Carolina, but last week we had an 83 degree day!! You can just never tell. Love the posts and being able to live vicariously through you sometimes. Happy Spring! You have fabulous taste. I love your clothes. The Loft is my favorite store and I have a few of your same pieces. Thanks for sharing, you are an amazing photographer. I am also happy spring has arrived here in northern AL. I’m sitting here enjoying one of your favorite snack- yogurt, granola, and fresh strawberries! THANKS for the reminder- its one of my favorites too! I live out west, very close to the mountains- and its snowing, but oh when the sun shines on the fresh powder, its breath taking. Those are very pretty girly clothes but I think I’ll stick with my jeans and t- shirt wardrobe till it’s hot enough for my landscaping attire(shorts and t-shirts.)That’s about as good as it gets around here except for church. I live in the Boston suburbs and I wear ruffled tanks all.year.long. With a cardi of course! All of your clothes are so pretty! I wish my closet looked like that! I can not wait for Spring! Great photos! It’s nice to live vicariously through those who actually have nice weather this time of year. We’ve got -11C with the wind chill this morning. Brrr! Love the photo’s! Here in South Florida spring is pretty much year round but I have to say a little closet cleaning is in need in my house too. Happy Spring to you! I love Spring too!! Gives me an excuse to go shopping for new brighter, funner, lighter clothes! My boyfriend always thinks I’m crazy when I say “spring/summer” scarves. He wonders why anyone in their right mind would want to wear them in the summer, especially here in Alabama. You are so right, once the humidity kicks they go back into hiding. I often wish I was still on the East Coast where some days when the weather isn’t 100 degrees you could still swing it if you were by the sea. I am so looking forward to spring. Like you, I can’t wait to shed all the darks for light colors and springy looks. It has been a long winter for all of us! So cute! I have been doing the same thing this week! I really need to retire some spring clothes before adding new! I’m so with you Amanda, can’t wait ’till spring! It’s snowing here today and I’m ready for sunshine! I am a scarf addict – I can’t wait to break out my Spring scarves! 85 in gainesville today…yeehaw! love the photos. Oh my the dreaded thoughts of spring cleaning. 
Organizing the closet, right now I just throw junk in and close it fast for fear of it all coming out to get me. But the thought of warm weather coming and your pictures has given me hope that I am not alone and its time to bring a little color into my wardrobe. Yea spring is back. Have a good weekend Amanda. Your photography skills always amaze me. Your photos are ALWAYS stunning! Thanks for sharing. I am about to do the same thing today, here in the UK.We had our hottest day of the year yesterday. First of all, congratulations for the photos! Second, sorry for my English, I know that is not perfect (I’m italian) .. Third, and most important: please tell me with that camera lens you have taken these pictures, the bokeh is amazing! I love all of your beautiful scarves! So pretty and girly!
import logging

import requests
from flask import Flask, Response, request

app = Flask(__name__)
LOG = logging.getLogger(__name__)


def split_url(url):
    """Splits the given URL into a tuple of (protocol, host, uri)"""
    proto, rest = url.split(':', 1)
    rest = rest[2:].split('/', 1)
    host, uri = (rest[0], rest[1]) if len(rest) == 2 else (rest[0], "")
    return (proto, host, uri)


def parse_url(url):
    """Parses out Referer info indicating the request is from a previously proxied page.

    For example, if:

        Referer: http://localhost:8080/proxy/google.com/search?q=foo

    then the result is:

        ("google.com", "search?q=foo")
    """
    proto, host, uri = split_url(url)
    print("url={}".format(url))
    print("{}, {}, {}".format(proto, host, uri))
    if uri.find("/") < 0:
        return None
    first, rest = uri.split("/", 1)
    return {'proto': proto, 'host': host, 'uri': uri, 'url': rest}


@app.route('/proxy/<path:url>', methods=['GET'])
def proxy_get(url):
    rd = parse_url(request.url)
    print("rd=", str(rd))
    # print('request.path={}'.format(request.path))
    # print('request.full_path={}'.format(request.full_path))
    # print('request.script_root={}'.format(request.script_root))
    # print('request.base_url={}'.format(request.base_url))
    # print('request.url={}'.format(request.url))
    # print('request.url_root={}'.format(request.url_root))
    LOG.debug("Fetching {}".format(url))
    # The parsed target carries no scheme of its own, so default to http here;
    # passing the bare "host/path" straight to requests raises a MissingSchema error.
    target = rd['url']
    if not target.startswith(('http://', 'https://')):
        target = 'http://' + target
    # _params = {} if params is None else params
    r = requests.get(target)  # , params=_params)
    # 'from_cache' only exists when requests_cache patches the session,
    # so fall back to False instead of raising AttributeError.
    print("From cache: {}".format(getattr(r, 'from_cache', False)))
    print('headers=', dict(r.headers))
    return Response(r.content, headers=dict(r.headers)), r.status_code
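For a quick sanity check of the route above, Flask's built-in test client can be used. The module name proxy_app and the target example.com are hypothetical choices for this sketch, and the call still performs a real outbound HTTP request.

```python
# Hypothetical smoke test for the /proxy/<url> route defined above.
# Assumes the module was saved as proxy_app.py; the target host is illustrative.
from proxy_app import app

with app.test_client() as client:
    resp = client.get('/proxy/example.com/')
    print(resp.status_code)
    print(resp.data[:200])
```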
Trichotillomania – the disorder where a person has a compulsive urge to pull out his or her hair – can be devastating to a person’s life. It can make it hard for a person to be social. It can also make a person feel wracked with feelings of negative emotions. And of course, it could have an extremely negative impact on a person’s physical appearance. While various forms of treatment are available for people that suffer from trichotillomania to help work through and manage the disorder, these gradual processes may not offer a short-term solution that is needed to remedy the loss of hair that was pulled out. Fortunately, people that suffer from this disorder can turn to trichotillomania wigs as a means to cope with its physical aspects. By their very nature, trichotillomania wigs are a little different than the average wig. The wigs for hair pullers are typically designed with the notion that the skin on the top of the head of a person suffering from the disorder is extremely sensitive. This level of sensitivity is heightened due to the fact that the hairs on the head have been yanked out with a measure of force. As a result, the base of a good trichotillomania wig – that is, the part that fits onto the scalp – will be made of comfortable fabrics, such as lace or cotton. Additionally, having a sense of realism is extremely crucial for good trichotillomania wigs. To achieve this realism, the wigs are typically made from human hair that has been donated through various charitable programs. Once the hair is received by the wig making company, it is treated and colored so it looks as natural as possible. It helps to curb hair being pulled out – Because the hair extensions are not actual hair that is rooted into a scalp, it cannot be pulled out of the scalp. As a result, the hair that is attached to the extensions is safe. It helps natural hair to regain health – One of the negative effects of trichotillomania is how the natural hair of the person suffering from the disorder degrades and weakens over time as a result of being pulled out. However, since the hair extensions provide a barrier of sorts for the natural hair that it attaches to, the hair is allowed to grow and gradually regain its strength. Using a trichotillomania wig can be a powerful ally for a person that is suffering from the disorder because it does more than mask compulsive hair pulling. It gives these people a huge boost in their self-confidence. This may sound obvious, but this feeling goes a little deeper. One of the big side effects of trichotillomania is feelings of guilt or depression over the act, and these feelings are noted to be one of the possible causes of the disorder, thus creating a vicious cycle. However, a trichotillomania wig or trichotillomania hair extensions help to break this cycle, which in turn returns a state of self-assuredness that may otherwise be lost. While obtaining a trichotillomania wig can be an important step in helping a person that suffers from the disorder, it is not the final step. Indeed, this type of wig will only improve the cosmetic aspects of the disorder. If a person wishes to go further to address trichotillomania and its problems, he or she should still look into a form of treatment.
# -*- coding: utf-8 -*- ############################################################################## # # OpenERP, Open Source Management Solution # Copyright (C) 2004-2010 Tiny SPRL (<http://tiny.be>). # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU Affero General Public License as # published by the Free Software Foundation, either version 3 of the # License, or (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Affero General Public License for more details. # # You should have received a copy of the GNU Affero General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # ############################################################################## from openerp import api from openerp.osv import osv from openerp.osv import fields import logging _logger = logging.getLogger(__name__) # import pdb; pdb.set_trace() # import vimpdb; vimpdb.hookPdb() # set propertie global... RAW = 'raw' BACHA = 'bacha' SERVICE = 'service' INPUT = 'input' OTHER = '*' M2 = 'm2' AREA = 'area' UNITS = 'units' LOC_DESPACHO = 'loc_despacho' LOC_STOCK = 'stock' LOC_REC_STOCK = 'rec_stock' LOC_OWN_STOCK = 'own' LOC_CUSTOMERS = 'customers' MAIN_COMPANY = 'company' _xml_data = { # ---- Prod Categ ----- RAW : 'product_marble.prod_categ_raw_material', BACHA : 'product_marble.prod_categ_bachas', SERVICE: 'product_marble.prod_categ_services', INPUT : 'product_marble.prod_categ_inputs', # ---- Prod UOM ----- M2 : 'product_marble.product_uom_square_meter', AREA : 'product_marble.product_uom_categ_area', UNITS : 'product.product_uom_categ_unit', # ---- Warehouse location Stock ----- LOC_DESPACHO : 'product_marble.location_deposito_despacho', LOC_STOCK : 'product_marble.location_deposito_stock_propio', LOC_OWN_STOCK : 'product_marble.location_deposito_stock_propio', LOC_CUSTOMERS : 'product_marble.location_deposito_stock_clientes', LOC_REC_STOCK : 'product_marble.location_deposito_stock_propio_recortes', # ---- Base Company ----- MAIN_COMPANY : 'base.main_company', } _prop = {} @api.model def set_prop(self): global _prop for key in _xml_data: xml_id = _xml_data[key] if not _prop.get(key) or _prop.get(key) < 0: ids = self.env.ref(xml_id) _prop[key] = ids.id if ids and ids.id > 0 else -1 #_logger.info(">> set_prop >> _prop = %s", _prop) @api.model def get_prod_types(self): _logger.info(">> get_prod_type >> 1- self = %s", self) types = { get_prop(self, RAW) : RAW, get_prop(self, BACHA) : BACHA, get_prop(self, SERVICE) : SERVICE, get_prop(self, INPUT) : INPUT, } _logger.info(">> get_prod_type >> 2- types = %s", types) return types # --- Migracion ------------------------- # # def get_prop(self, cr, uid, key): # global _prop # if not _prop.get(key): # set_prop(self, cr, uid, []) # return _prop[key] @api.model def get_prop(self, key): global _prop # valido db... 
db = 'db_name' db_name = _prop.get(db,False) if (not db_name or db_name != self._cr.dbname): _prop.clear() _prop[db] = self._cr.dbname # if not _prop.get(key): set_prop(self) return _prop[key] # ---------------------------------------- # --- Migracion ------------------------- # # def get_raw_material_id(self, cr, uid): # return get_prop(self, cr, uid, 'raw_material_id') # # def get_bachas_id(self, cr, uid): # return get_prop(self, cr, uid, 'bachas_id') # # def get_services_id(self, cr, uid): # return get_prop(self, cr, uid, 'services_id') # # def get_inputs_id(self, cr, uid): # return get_prop(self, cr, uid, 'inputs_id') # # def get_uom_m2_id(self, cr, uid): # return get_prop(self, cr, uid, 'uom_m2_id') # # def get_uom_units_id(self, cr, uid): # return get_prop(self, cr, uid, 'uom_units_id') @api.model def get_raw_material_id(self): return get_prop(self, RAW) @api.model def get_bachas_id(self): return get_prop(self, BACHA) @api.model def get_services_id(self): return get_prop(self, SERVICE) @api.model def get_inputs_id(self): return get_prop(self, INPUT) @api.model def get_uom_m2_id(self): return get_prop(self, M2) @api.model def get_uom_units_id(self): return get_prop(self, UNITS) @api.model def get_location_despacho(self): return get_prop(self, LOC_DESPACHO) @api.model def get_location_stock(self): return get_prop(self, LOC_STOCK) @api.model def get_location_recortes_stock(self): return get_prop(self, LOC_REC_STOCK) @api.model def get_location_own_id(self): return get_prop(self, LOC_OWN_STOCK) @api.model def get_location_customers_id(self): return get_prop(self, LOC_CUSTOMERS) @api.model def get_main_company_id(self): return get_prop(self, MAIN_COMPANY) # ---------------------------------------- # --- Migracion ------------------------- # # def is_raw_material(self, cr, uid, cid): # return (cid == get_prop(self, cr, uid, RAW)) # # def is_bachas(self, cr, uid, cid): # return (cid == get_prop(self, cr, uid, BACHAS)) # # def is_services(self, cr, uid, cid): # return (cid == get_prop(self, cr, uid, SERVICES)) # # def is_inputs(self, cr, uid, cid): # return (cid == get_prop(self, cr, uid, INPUTS)) @api.model def is_raw_material(self, cid): return (cid == get_prop(self, RAW)) @api.model def is_bachas(self, cid): return (cid == get_prop(self, BACHA)) @api.model def is_services(self, cid): return (cid == get_prop(self, SERVICE)) @api.model def is_inputs(self, cid): return (cid == get_prop(self, INPUT)) # ---------------------------------------- # def get_raw_material_id(self, cr, uid): # # ids = self.pool.get('product.category').search(cr, uid, [('name','ilike','Marble Work')], limit=1) # # return ids[0] or False # return get_raw_material_id(self, cr, uid) # def get_product_uom_m2_id(self, cr, uid): # global _prop # key = 'uom_m2_id' # # if (not _prop) or (_prop.get(key) < 0): # obj = self.pool.get('product.uom') # ids = obj.search(cr, uid, [('name','ilike','m2')], limit=1) # _prop[key] = ids[0] if ids and ids[0] > 0 else -1 # # # _logger.info("3 >> get_product_uom_m2_id >> _prop = %s", _prop) # return _prop[key] def is_raw_material_by_category_id(self, cr, uid, ids): """ - Obj: Determina si Category [ids] es Marble Work si pertenece a la categ: Marble Work o no... - Inp: [ids] = lista de category_id. 
- Out: {category_id: true/false, ..} """ result = {} if not ids: return result marble_work_id = get_raw_material_id(self, cr, uid) result = {c:(c == marble_work_id) for c in ids} # _logger.info("1 >> is_raw_material_by_category_id >> result = %s", result) return result def is_raw_material_by_product_id(self, cr, uid, ids): """ - Obj: Determina por cada producto [ids], si pertenece a la categ: Marble Work o no... - Inp: [ids] = lista de products ids. - Out: {prod_id: is_marble, ..} """ result = {} if not ids: return result marble_work_id = get_raw_material_id(self, cr, uid) obj = self.pool.get('product.product') for p in obj.read(cr, uid, ids, ['categ_id']): result.update({p['id']: (marble_work_id == p['categ_id'][0])}) # _logger.info("1 >> is_raw_material_by_product_id >> result = %s", result) return result def is_bacha_by_product_id(self, cr, uid, ids): result = {} if not ids: return result bacha_id = get_bachas_id(self, cr, uid) obj = self.pool.get('product.product') for p in obj.read(cr, uid, ids, ['categ_id']): result.update({p['id']: (bacha_id == p['categ_id'][0])}) # _logger.info("1 >> is_raw_material_by_product_id >> result = %s", result) return result def is_input_by_product_id(self, cr, uid, ids): result = {} if not ids: return result input_id = get_inputs_id(self, cr, uid) obj = self.pool.get('product.product') for p in obj.read(cr, uid, ids, ['categ_id']): result.update({p['id']: (input_id == p['categ_id'][0])}) # _logger.info("1 >> is_raw_material_by_product_id >> result = %s", result) return result def is_service_by_product_id(self, cr, uid, ids): result = {} if not ids: return result service_id = get_services_id(self, cr, uid) obj = self.pool.get('product.product') for p in obj.read(cr, uid, ids, ['categ_id']): result.update({p['id']: (service_id == p['categ_id'][0])}) # _logger.info("1 >> is_raw_material_by_product_id >> result = %s", result) return result # def get_data(self, cr, uid, list_tuple, fid): # """ # - Obj: Recupero 'value' segun 'fid' (find id), en list_tuple... # - Inp: # arg 4: [list_tuple] = lista de tuplas: [(id, value to display), ..] # arg 5: [fid] = 'fid' a localizar en 'list_tuple'. # - Out: 'value' referenciado por 'fid'. 
# """ # if not list_tuple or not fid: # return "" # # return dict(list_tuple).get(fid) # ---------- Stock ---------- def query_stock_move_input(self, cr, uid): str = "\n\n >>> Stock Move Input <<<\n" obj = self.pool.get('stock.move') domain = [ '&','|', '&', ('picking_id','=',False), ('location_id.usage', 'in', ['customer','supplier']), '&', ('picking_id','!=',False), ('picking_id.type','=','in'), ('plaque_id','>','0') ] ids = obj.search(cr, uid, domain) _logger.info(">> query_stock_input >> 1 >> ids = %s", ids) for m in obj.browse(cr, uid, ids): str += "%s - %s - %s - %s - %s \n" % (m.id, m.product_uom, m.plaque_id, m.plaque_qty, m.name) _logger.info(str) return True def query_stock_move_output(self, cr, uid): str = "\n\n >>> Stock Move Output <<<\n" obj = self.pool.get('stock.move') domain = [ '&','|', '&', ('picking_id','=',False), ('location_dest_id.usage', 'in', ['customer','supplier']), '&', ('picking_id','!=',False), ('picking_id.type','=','out'), ('plaque_id','>','0') ] ids = obj.search(cr, uid, domain) _logger.info(">> query_stock_input >> 2 >> ids = %s", ids) for m in obj.browse(cr, uid, ids): str += "%s - %s - %s - %s - %s \n" % (m.id, m.product_uom, m.plaque_id, m.plaque_qty, m.name) _logger.info(str) return True def get_stock_move_by_product(self, cr, uid, ids): _logger.info(">> get_stock_move_by_product >> 000 >> ids = %s", ids) str = "\n\n >>> Stock Move In/Out by Product <<<\n" obj = self.pool.get('stock.move') domain = [ # ('product_id.categ_id','=', get_raw_material_id(self, cr, uid)), << producto tipo marble. ('product_id', 'in', ids), ] _logger.info(">> get_stock_move_by_product >> 111 >> domain = %s", domain) ids = obj.search(cr, uid, domain) _logger.info(">> get_stock_move_by_product >> 222 >> ids = %s", ids) for m in obj.browse(cr, uid, ids): str += "%s - %s - %s - %s - %s \n" % (m.id, m.product_uom, m.plaque_id, m.plaque_qty, m.name) _logger.info(str) return True def query_stock_move_test(self, cr, uid): query_stock_move_input(self, cr, uid) query_stock_move_output(self, cr, uid) # ------------------------------------------------------- def print_dict(msg, val): nl = '\n' res = msg + nl + nl for k in val: res += str(k) + ':' + str(val[k]) + nl res += nl + nl _logger.info(res) # ------------------------------------------------------- def get_loc_parents(self, loc, res): res += (loc and loc.id and [loc.id]) or [] if loc and loc.location_id: get_loc_parents(self, loc.location_id, res) return res # vim:expandtab:smartindent:tabstop=4:softtabstop=4:shiftwidth=4:
Vector’s ongoing and successful work with Adelphi University has helped them generate higher quality leads that convert into students, improve their brand recognition and search results, reduce their costs, and increase their online visibility. A distinct collaboration between our design, digital marketing, and development teams ensures that more of Adelphi’s prospects are turned into students. With profitable campaigns that are tied to the university’s business goals and aspirations, their ROI is much higher due to higher conversion rates. Vector and Adelphi University have been working together for years— and our partnership has been undeniably successful. Due to our long-term client relationship, we continuously collaborate with Adelphi in an ongoing effort to give them the best possible results to match their strategic vision. Adelphi University is a large university with an even larger website, making the work both challenging and rewarding. Since Adelphi’s site includes information about everything from its extensive list of academic programs to its admissions process to its student life, the Vector Team had to focus on prioritizing. Our team strategically directed their energy to the pages where we were able to make the biggest impact. After collaborating with Adelphi to understand their business, brand, and visual design landscape, we worked on both optimizing their organic search results and managing their paid search (PPC) campaigns. Adelphi had an outdated system that only tracked new leads according to ‘last touch’ attribution (i.e. if the user does a Google search after initial contact, the lead would be attributed to Google). We developed and implemented an attribution model that moved beyond this “last touch”, allowing us to track which channels and campaigns were successfully introducing people into the sales funnel. Next, we launched cross-channel, cross device remarketing campaigns to track users across multiple devices. We also integrated our cutting-edge proprietary technology that makes changes to campaign bids and budgets in real time, every 30 minutes, to automatically shift Adelphi's budget to the campaigns that are working best for them. This innovative approach enabled Adelphi to prioritize and reprioritize their campaigns more efficiently. From the PPC perspective, working with Adelphi has yielded results that are a testament to the partnership’s success. Adelphi has seen a 320% increase in overall conversions, with a 68% drop in overall cost per acquisition (CPA). We were able to achieve this success by allocating funds to programs that have the highest probability of success instead of sticking to rigid allocations that are not as efficient. In-line with Vector’s overall integrated approach to projects, we use our findings from our work with SEO and PPC in order to help the two build off of each other. When we began working with Adelphi, we ran a technical audit to analyze and correct any issues, such as missing site maps, HTML errors, and duplicate pages that could affect their organic rankings. Adelphi has since seen a 57% increase in page 1 organic rankings. We also focused heavily on creating keyword-enriched title tags and meta descriptions. By having thoroughly researched the site in order to see which pages generated the most traffic and had the highest conversions, we knew which pages to prioritize. Through search keyword research, mapping, and theming of pages we were able to increase Adelphi’s visibility. 
By helping Adelphi create landing pages that familiarize new visitors with the university, we were able to nurture relationships with users and convert them into customers. Using curated, relevant content to pull people in, Adelphi saw a 98% increase in clicks and an 18% drop in cost per click (CPC), with only a 63% increase in spend. With refined pages serving more targeted markets, Adelphi has had an extremely high yield rate: 34% of the applications influenced or generated by our efforts turned into enrolled students. The work we do with Adelphi is ongoing. We stay on top of their constantly changing and growing site, and we regularly go through a re-optimization process in which we repeat our original SEO process, adjusting for current rankings in an effort to keep improving them. Our relationship with Adelphi is highly collaborative: our work involves a great deal of exchanging information back and forth, but the process is smooth and effective. By layering in audience data (both 3rd-party and Adelphi’s 1st-party) to move potential students through the funnel, we are able to refine our work based on their feedback. We continue to build on our organic success while keeping Adelphi’s strict university guidelines in mind. With strong SEO and PPC strategies accelerating their growth, we always make sure our work is relevant, stays on-brand, and adheres to their standards and expectations.
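The budget-shifting mechanic described earlier in this case study is proprietary, but the core idea of periodically reallocating spend toward the campaigns with the cheapest conversions can be sketched in a few lines. The campaign names, spend figures, and conversion counts below are invented for illustration, and a production system would add pacing rules, minimum-data thresholds, and calls to the ad platform APIs.

```python
# Toy sketch of periodic budget reallocation toward better-performing campaigns.
# All numbers are invented; this is not the actual proprietary system.
campaigns = {
    "brand-search":  {"spend": 400.0, "conversions": 40},
    "grad-programs": {"spend": 300.0, "conversions": 12},
    "display-remkt": {"spend": 300.0, "conversions": 6},
}
total_budget = 1000.0

def cost_per_acquisition(c):
    return c["spend"] / c["conversions"] if c["conversions"] else float("inf")

# Weight each campaign by the inverse of its CPA: cheaper conversions earn more budget.
weights = {name: 1.0 / cost_per_acquisition(c) for name, c in campaigns.items()}
weight_sum = sum(weights.values())

new_budgets = {name: round(total_budget * w / weight_sum, 2) for name, w in weights.items()}
print(new_budgets)  # {'brand-search': 625.0, 'grad-programs': 250.0, 'display-remkt': 125.0}
```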
from django import template from django.db.models import Sum from reports.models import Event register = template.Library() @register.simple_tag def get_event_statistics(disaster, month, year): """ Return monthly summary of a disaster category and its impact. """ events_with_impacts = Event.objects.filter( eventimpacts__isnull=False, disaster=disaster, created__month=month, created__year=year ).distinct() events_with_impacts = events_with_impacts.annotate( evac_total=Sum('eventimpacts__evac_total'), affected_total=Sum('eventimpacts__affected_total'), affected_injury=Sum('eventimpacts__affected_injury'), affected_death=Sum('eventimpacts__affected_death'), loss_total=Sum('eventimpacts__loss_total') ) return events_with_impacts @register.simple_tag def get_eventimpact_total(event_statistics): """ Calculate the sum impact of a particular disaster category. """ eventimpact_total = { 'evac_total': 0, 'affected_total': 0, 'affected_injury': 0, 'affected_death': 0, 'loss_total': 0 } for event_statistic in event_statistics: if event_statistic.evac_total: eventimpact_total['evac_total'] += event_statistic.evac_total if event_statistic.affected_total: eventimpact_total['affected_total'] += event_statistic.affected_total if event_statistic.affected_injury: eventimpact_total['affected_injury'] += event_statistic.affected_injury if event_statistic.affected_death: eventimpact_total['affected_death'] += event_statistic.affected_death if event_statistic.loss_total: eventimpact_total['loss_total'] += event_statistic.loss_total return eventimpact_total
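As a usage illustration, the two tags above can also be called directly as plain functions, for example from a Django shell. The Disaster model name and the templatetags module path used here are assumptions for the sketch, not something confirmed by the snippet above.

```python
# Hypothetical direct usage of the template tags, e.g. from `manage.py shell`.
# The model name `Disaster` and the module path `report_tags` are assumptions.
from reports.models import Disaster
from reports.templatetags.report_tags import (
    get_event_statistics, get_eventimpact_total)

flood = Disaster.objects.first()
stats = get_event_statistics(flood, month=1, year=2019)
totals = get_eventimpact_total(stats)
print(totals["affected_total"], totals["loss_total"])
```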
Congress General Secretary Priyanka Gandhi Vadra. Amethi: Congress General Secretary Priyanka Gandhi Vadra on Thursday revealed she was eyeing the 2022 Assembly elections in Uttar Pradesh much more than the 2019 general elections just weeks ahead of the first phase of polling. At a programme organised by party workers at the AH Inter College in Amethi, the parliamentary constituency of her brother and party President Rahul Gandhi, she took Congress workers by surprise when she asked them if they were preparing for the 2022 Assembly elections. "Are you preparing for elections... I am not talking about 2019, but 2022," she asked a worker who was greeting her at the function organised by the Congress cadre in the Gandhi bastion of Amethi, where Priyanka was on her day-long visit during her three-day programme in the state. When she was appointed party General Secretary in-charge of eastern Uttar Pradesh, Rahul Gandhi had said that he had sent his sister to the state with a larger plan. "She hasn't been sent here for four months, she has been sent here with a larger plan... We will not only defeat BJP in 2019 but also win 2022 elections," Rahul Gandhi had told reporters here on January 23. In several speeches following that in Amethi he had reiterated the point. Her Thursday’s comments were yet another disclosure of the party's plan in the state. In the 2017 Uttar Pradesh Assembly elections, the Congress had lost all the five assembly constituencies in Amethi. The BJP had won four while one seat was won by the Samajwadi Party. With Priyanka Gandhi Vadra in an official role, the party is eyeing a major share of seats in the assembly elections in the state where it has been out of power since 1991. Meanwhile, on Wednesday, Priyanka reached Amethi during the second leg of her campaign and held a marathon 11-hour meeting with party workers at the college campus. She first held the meeting for over five and half hours and then returned for another round of meeting which lasted for over four hours. She held discussions with the party's booth workers till 12.30 am. Later on Thursday, she is scheduled to meet the workers and leaders of her mother's constituency, Rae Bareli. She will then go to Ayodhya on Friday, where the party workers have also planned an eight-km long road show.
#!/usr/bin/python # -*- coding: utf-8 -*- ### BEGIN LICENSE # Copyright (C) 2007-2011 Tualatrix Chou <[email protected]> # Copyright (C) 2013 ~ 2014 National University of Defense Technology(NUDT) & Kylin Ltd # This program is free software: you can redistribute it and/or modify it # under the terms of the GNU General Public License version 3, as published # by the Free Software Foundation. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranties of # MERCHANTABILITY, SATISFACTORY QUALITY, or FITNESS FOR A PARTICULAR # PURPOSE. See the GNU General Public License for more details. # # You should have received a copy of the GNU General Public License along # with this program. If not, see <http://www.gnu.org/licenses/>. ### END LICENSE import dbus import dbus.service import logging log = logging.getLogger("DbusProxy") INTERFACE = 'com.ubuntukylin.youker' UKPATH = '/' SHOWED = False def show_message(*args): from dialogs import ErrorDialog title = 'Daemon start failed' message = ('Youker Assisant systemdaemon didn\'t start correctly.\n' 'If you want to help developers debugging, try to run "<b>sudo /usr/lib/python2.7/dist-packages/youker-assistant-daemon/src/start_systemdbus.py</b>" in a terminal.') ErrorDialog(title=title, message=message).launch() def nothing(*args): return None class DbusProxy: try: __system_bus = dbus.SystemBus() __object = __system_bus.get_object(INTERFACE, UKPATH) except Exception, e: __object = None def __getattr__(self, name): global SHOWED try: return self.__object.get_dbus_method(name, dbus_interface=self.INTERFACE) except Exception, e: #log.error(e) if not SHOWED: SHOWED = True return show_message else: return nothing def get_object(self): return self.__object class AccessDeniedException(dbus.DBusException): '''This exception is raised when some operation is not permitted.''' _dbus_error_name = 'com.ubuntukylin.youker.AccessDeniedException' def init_dbus(dbus_iface=INTERFACE, dbus_path=UKPATH): '''init dbus''' proxy = DbusProxy() return proxy if __name__ == '__main__': print init_dbus()
Interested in becoming the next infected site? Bring Zombie Shooters Association to your club or range, we make it easy for you to get started! Review our Club Affiliation Information and contact us today. We’ll gladly work with you to get ZSA to your area. Feel free to email us at: [email protected].
# Script to recurse through all subdirectories from pandas.io.json import json_normalize import flatten_json import xmltodict import argparse import zipfile import shutil import pandas import tqdm import json import os import re ''' ------------------------------ CONSTANTS ------------------------------ ''' JSON_WORKBOOKS_ARRAY_OPENER = '{"jsonified-workbooks":[' JSON_WORKBOOKS_ARRAY_CLOSER = ']}' JSON_WORKBOOKS_ARRAY_DELIMITER = ',' STARTING_DIRECTORY = '/' EXTRACT_NAME = 'extracted_workbook' SECURITY_WARNING = r""" SECURITY WARNING This program relies on creating a temporary directory to extract Tableau archives. Be sure that only appropriate users have access to tmp directories. On DOS systems, the default tmp directory is located at "C:\Windows\Temp for a system", and for an individual, the tmp directory is located at "C:\Users\<username>\AppData\Local\Temp". The default tmp directory in a nix system is located at /tmp To prevent inappropriate access, local tmp directories are used. You may want to customize the tmp location used dependent on your policy or need. """ ''' parser = argparse.ArgumentParser(description='Process some integers.') parser.add_argument('integers', metavar='N', type=int, nargs='+', help='an integer for the accumulator') parser.add_argument('--tmpdir', dest='accumulate', action='store_const', const=sum, default=max, help='sum the integers (default: find the max)') args = parser.parse_args() ''' ''' -------------------------- RUNTIME CONSTANTS -------------------------- ''' FOLDER_PATH = 'Users' + os.sep + os.getenv('username') + os.sep + 'Documents' + os.sep + 'Desktop' + os.sep + 'demo_folder' DISK_PATH = "C:" + os.sep STARTING_DIRECTORY = os.path.join(os.sep, DISK_PATH, FOLDER_PATH) OUTPUT_DIRECTORY = os.path.join(os.sep, DISK_PATH, FOLDER_PATH) TMP_DIRECTORY = os.path.join(os.sep, DISK_PATH, FOLDER_PATH) REMOVE_IMAGE_RAW_BITS = True # This needs to come from some sort of command line argument REMOVE_CUSTOM_SQL = False # This needs to come from some sort of command line argument def unzip(zip_archive, destination): print('suggested destination: ' + destination) with zipfile.ZipFile(zip_archive, "r") as zip_archive: zip_archive.extractall(destination) # we don't want to merge python dictionaries, we want to add each dictionary to # a larger dictionary def get_all_workbooks(starting_directory, unzip_tmp_directory): print('current dir: ' + starting_directory) tableau_workbooks_xml = [] for item in os.listdir(starting_directory): if item.endswith('.twb'): # straightforward xml extraction tableau_workbooks_xml.append( get_workbook_xml(starting_directory + '\\' + item) ) elif item.endswith('.twbx'): # extract archive, extract xml extract_destination = unzip_tmp_directory + '/' + EXTRACT_NAME archive_directory = unzip(starting_directory + '\\' + item, extract_destination) archive_workbook_xml = get_all_workbooks(extract_destination, unzip_tmp_directory) if type(archive_workbook_xml) is list: for xml_string in archive_workbook_xml: if type(xml_string) is not str: print('Unexpected type!') else: tableau_workbooks_xml.append(xml_string) elif type(archive_workbook_xml) is str: tableau_workbooks_xml.extend(archive_workbook_xml) else: print('Unexpected type! 
Error appending XML data to tableau_workbooks_xml') exit() # Remove your unzipped archive shutil.rmtree(extract_destination) elif os.path.isdir(starting_directory + '\\' + item): # recurse over subdirectory tableau_workbooks_xml.extend(get_all_workbooks(starting_directory + '\\' + item, unzip_tmp_directory)) return tableau_workbooks_xml def get_workbook_xml(path): xml_contents = '' print('Path = ' + path) with open(path, "r") as file_stream: xml_contents = file_stream.read() return xml_contents def list_of_xml_to_list(xml_list): json_list = [] print('XML_LIST type: ' + str(type(xml_list))) for xml in xml_list: if type(xml) is list: for xml_string in xml: json_list.append(xmltodict.parse(xml_string)) print('XML type: ' + str(type(xml))) print('XML type: ' + str(xml)) json_list.append(xmltodict.parse(xml)) return json_list # Given the state of a flag, start cleaning process def cleanse(json_summary): json_summary = json_summary # not sure if this needs to be declared locally, this might be removable # Remove the raw image content, not information about the images if REMOVE_IMAGE_RAW_BITS: # workbook.thumbnails.thumbnail[].#text try: for element in json_summary['workbook']['thumbnails']['thumbnail']: # If this is not an element, it will be interpreted as a string which will crash the metho d and program if type(element) is not str: del element['#text'] try: # Case where there is only one image in the workbook, not multiple del json_summary['workbook']['thumbnails']['thumbnail']['#text'] except KeyError as key_error: pass except TypeError as type_error: # happens when slicing a list, depends on what's in the attribute pass except KeyError as key_error: pass # workbook.external.shapes.shape[].#text try: for element in json_summary['workbook']['external']['shapes']['shape']: # If this is not an element, it will be interpreted as a string which will crash the metho d and program if type(element) is not str: del element['#text'] try: # Case where there is only one image in the workbook, not multiple del json_summary['workbook']['external']['shapes']['shape']['#text'] except KeyError as key_error: pass except TypeError as type_error: # happens when slicing a list, depends on what's in the attribute pass except KeyError as key_error: pass # Remove the raw SQL information, not the individual formulas that are used throughout the report if REMOVE_CUSTOM_SQL: # connection.metadata-records.metadata-record[].attributes.attribute[].#text try: for data_source in json_summary['workbook']['datasources']['datasource']: # If this is not an element, it will be interpreted as a string which will crash the method and program for metadata_record in data_source['connection']['metadata-records']['metadata-record']: for attribute in metadata_record['attributes']['attribute']: if type(attribute) is not str: del attribute['#text'] except KeyError as key_error: print('Recovered from key error when attempting to remove custom sql.') print(key_error) return json_summary def get_windows_username(): os.getlogin() def get_nix_username(): os.popen('whoami').read() def make_tmp_directory(tmp_directory_location): if os.path.exists(tmp_directory_location): # don't make dir pass else: pass def clear_tmp_directory(): pass def get_workbook_name(): # return $(FILE_NAME or PATH?) pass def detect_custom_sql(): # if datasources.datasource[].connection.@class == ("sqlproxy" or "postgres") # however, still not indicative of custom SQL; are the XML queries you removed indicative? 
pass def main(): os.system('cls') input('Defaulting to starting directory: ' + STARTING_DIRECTORY) input('Defaulting to output directory: ' + OUTPUT_DIRECTORY) input('Defaulting to tmp directory: ' + TMP_DIRECTORY) input('Remove image raw bits (.#text): ' + str(REMOVE_IMAGE_RAW_BITS)) input('Remove custom SQL: ' + str(REMOVE_CUSTOM_SQL)) workbook_xml_list = get_all_workbooks(STARTING_DIRECTORY, TMP_DIRECTORY) output_file_path = OUTPUT_DIRECTORY + os.sep + 'json_output.json' json_list = list_of_xml_to_list(workbook_xml_list) try: # file I/O writing output and removing a trailing character # Write output with open(output_file_path, 'w') as output_file: output_file.write(JSON_WORKBOOKS_ARRAY_OPENER) with open(output_file_path, 'a') as output_file: for json_summary in json_list: string = json.dumps(cleanse(json_summary)) output_file.write(string + JSON_WORKBOOKS_ARRAY_DELIMITER) # remove trailing '},' that was generated at the end of the loop with open(output_file_path, 'rb+') as filehandle: filehandle.seek(-1, os.SEEK_END) filehandle.truncate() with open(output_file_path, 'a') as output_file: output_file.write(JSON_WORKBOOKS_ARRAY_CLOSER) except IOError as e: print('Failed to open or write JSON output.' + str(e)) if __name__ == "__main__": main()
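The helper stubs near the end of the script (get_windows_username, get_nix_username, make_tmp_directory) either drop their result or are left as pass; a minimal sketch of how they could be filled in, keeping the same names and relying only on the standard library already imported above, might be:

def get_windows_username():
    # os.getlogin() returns the name of the currently logged-in user
    return os.getlogin()

def get_nix_username():
    # strip the trailing newline emitted by `whoami`
    return os.popen('whoami').read().strip()

def make_tmp_directory(tmp_directory_location):
    # only create the directory when it does not already exist
    if not os.path.exists(tmp_directory_location):
        os.makedirs(tmp_directory_location)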
Discussion forum for Kelvin Jackson's fans (Miami (Ohio), NCAAF). Please post trade rumors, injury reports and amateur scout suggestions. Please do not post inappropriate comments, this is a friendly forum for fans. If you see inappropriate comments, then please report them by clicking the report abuse link aside the comment. Comments you post may require a paid membership to delete.
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os import sys import time import paddle.fluid as fluid from paddle.fluid import unique_name import paddle.fluid.core as core import paddle from paddle.fluid.layer_helper import LayerHelper from paddle.distributed import fleet from paddle.distributed.fleet.meta_optimizers.ascend import ascend_parser, ascend_optimizer from collections import namedtuple Block = namedtuple('Block', ['program']) Loss = namedtuple('Loss', ['block']) paddle.enable_static() OpRole = core.op_proto_and_checker_maker.OpRole OP_ROLE_KEY = core.op_proto_and_checker_maker.kOpRoleAttrName() OP_ROLE_VAR_KEY = core.op_proto_and_checker_maker.kOpRoleVarAttrName() role = fleet.PaddleCloudRoleMaker(is_collective=True) fleet.init(role) def init_communicator(startup_program, main_program, current_endpoint, endpoints, ring_id): nranks = len(endpoints) other_endpoints = endpoints[:] other_endpoints.remove(current_endpoint) group_rank = endpoints.index(current_endpoint) assert group_rank >= 0 block = startup_program.global_block() nccl_id_var = block.create_var( name=unique_name.generate('nccl_id'), persistable=True, type=core.VarDesc.VarType.RAW) block.append_op( type='c_gen_nccl_id', inputs={}, outputs={'Out': nccl_id_var}, attrs={ 'rank': group_rank, 'endpoint': current_endpoint, 'other_endpoints': other_endpoints, OP_ROLE_KEY: OpRole.Forward, }) block.append_op( type='c_comm_init', inputs={'X': nccl_id_var}, outputs={}, attrs={ 'nranks': nranks, 'rank': group_rank, 'ring_id': ring_id, OP_ROLE_KEY: OpRole.Forward, }) # add input op for test fill_var_name = "tensor@Filled" fill_var = block.create_var( name=fill_var_name, shape=[10, 10], dtype='float32', persistable=False, stop_gradient=True) block.append_op( type="fill_constant", outputs={"Out": fill_var_name}, attrs={ "shape": [10, 10], "dtype": fill_var.dtype, "value": 1.0, "place_type": 1 }) with fluid.program_guard(main_program): op_type = "c_allreduce_sum" data = fluid.layers.fill_constant(shape=[1], dtype='float32', value=2.5) helper = LayerHelper(op_type, **locals()) helper.append_op( type=op_type, inputs={'X': [data]}, outputs={'Out': [data]}, attrs={'ring_id': ring_id, 'use_calc_stream': True}) print("startup program:", startup_program) print("main program:", main_program) def train(world_endpoints, world_device_ids, local_device_ids, local_rank): startup_programs = [] main_programs = [] #trainer_endpoints=["127.0.0.1:6071","127.0.0.1:6072","127.0.0.1:6073","127.0.0.1:6074"] trainer_endpoints = world_endpoints groups = [[], [], []] groups[0] = [trainer_endpoints[0], trainer_endpoints[1]] groups[1] = [trainer_endpoints[2], trainer_endpoints[3]] groups[2] = [trainer_endpoints[0], trainer_endpoints[2]] print("groups:", groups) for i in range(len(trainer_endpoints)): startup_programs.append(fluid.Program()) main_programs.append(fluid.Program()) for idx, group in enumerate(groups): for te in group: te_idx = trainer_endpoints.index(te) startup_program = 
startup_programs[te_idx] main_program = main_programs[te_idx] init_communicator(startup_program, main_program, te, group, idx) print(len(startup_programs)) print(startup_programs[local_rank]) print(main_programs[local_rank]) print("local rank: ", local_rank) print("local startup program: ", startup_programs[local_rank]) startup_program = startup_programs[local_rank] main_program = main_programs[local_rank] loss = Loss(Block(main_program)) optimizer = ascend_optimizer.AscendOptimizer(None, fetch_list=[]) optimizer.minimize( loss, startup_program, auto_dp=True, rank_table_file=os.getenv("RANK_TABLE_FILE", None)) exe = paddle.static.Executor(paddle.CPUPlace()) exe.run(startup_program) exe.run(main_program) worker_endpoints = fleet.worker_endpoints() world_device_ids = fleet.world_device_ids() local_device_ids = fleet.local_device_ids() local_rank = int(fleet.local_rank()) print("worker_endpoints:", worker_endpoints) print("world_device_ids:", world_device_ids) print("local_device_ids:", local_device_ids) print("local_rank:", local_rank) train(worker_endpoints, world_device_ids, local_device_ids, local_rank)
Cato Lingerie Home Improvement Store Near Me Stores Houston is one of the pictures related to the previous picture in the collection gallery. If you would like to see it in a high-resolution [HD Resolution] version, right-click on the image, choose the "Save as Image" option, and you are done. You will then have the Cato Lingerie Home Improvement Store Near Me Stores Houston picture you want. The exact dimensions of the image are 1000x720 pixels. You can also browse more pictures by collection below this picture. Find other pictures or articles about Cato Lingerie here. We hope this helps you find the information you are looking for.
import sys # http://www.unicode.org/Public/UNIDATA/auxiliary/BidiMirroring.txt # This parses a file in the format of the above file and outputs a table # suitable for bsearch(3). This table maps Unicode code points to their # 'mirror'. (Mirroring is used when rendering RTL characters, see the Unicode # standard). By convention, this mapping should be commutative, but this code # doesn't enforce or check this. def main(infile, outfile): pairs = [] for line in infile: line = line[:-1] if len(line) == 0 or line[0] == '#': continue if '#' in line: (data, _) = line.split('#', 1) else: data = line if ';' not in data: continue (a, b) = data.split(';', 1) a = int(a, 16) b = int(b, 16) pairs.append((a, b)) pairs.sort() print >>outfile, '// Generated from Unicode Bidi Mirroring tables\n' print >>outfile, '#ifndef MIRRORING_PROPERTY_H_' print >>outfile, '#define MIRRORING_PROPERTY_H_\n' print >>outfile, '#include <stdint.h>' print >>outfile, 'struct mirroring_property {' print >>outfile, ' uint32_t a;' print >>outfile, ' uint32_t b;' print >>outfile, '};\n' print >>outfile, 'static const struct mirroring_property mirroring_properties[] = {' for pair in pairs: print >>outfile, ' {0x%x, 0x%x},' % pair print >>outfile, '};\n' print >>outfile, 'static const unsigned mirroring_properties_count = %d;\n' % len(pairs) print >>outfile, '#endif // MIRRORING_PROPERTY_H_' if __name__ == '__main__': if len(sys.argv) != 3: print 'Usage: %s <input .txt> <output .h>' % sys.argv[0] else: main(file(sys.argv[1], 'r'), file(sys.argv[2], 'w+'))
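The generator above is written for Python 2 (print >> redirection and the file() builtin); if it ever needs to run under Python 3, a minimal sketch of the entry point — same parsing logic, abridged header output — could look like this:

import sys

def main(infile, outfile):
    pairs = []
    for line in infile:
        line = line.rstrip('\n')
        if not line or line.startswith('#'):
            continue
        # drop trailing comments, keep only the data portion
        data = line.split('#', 1)[0]
        if ';' not in data:
            continue
        a, b = data.split(';', 1)
        pairs.append((int(a, 16), int(b, 16)))
    pairs.sort()
    print('// Generated from Unicode Bidi Mirroring tables\n', file=outfile)
    print('static const struct mirroring_property mirroring_properties[] = {', file=outfile)
    for pair in pairs:
        print('  {0x%x, 0x%x},' % pair, file=outfile)
    print('};\n', file=outfile)
    print('static const unsigned mirroring_properties_count = %d;' % len(pairs), file=outfile)

if __name__ == '__main__':
    if len(sys.argv) != 3:
        print('Usage: %s <input .txt> <output .h>' % sys.argv[0])
    else:
        with open(sys.argv[1], 'r') as fin, open(sys.argv[2], 'w') as fout:
            main(fin, fout)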
Once you have obtained the SAMSUNG I9506 Galaxy S4 unlock code, it is extremely simple to remove the carrier restrictions of your handset and use it with any compatible network around the world. - Make sure that your SAMSUNG I9506 Galaxy S4 is network locked. Do NOT purchase an unlock code if your phone is not locked; if you order anyway, you will NOT be eligible for a refund. - After you receive the unlock code, simply enter it on your phone keypad and your SAMSUNG I9506 Galaxy S4 is SIM unlocked!
from PyQt5 import QtGui as QG from PyQt5 import QtWidgets as QW from PyQt5 import QtCore as QC from constants import Constants class StartUI(QW.QDialog): mode_set = QC.pyqtSignal(int) """Main user interface window""" def __init__(self, parent = None): super(StartUI, self).__init__(parent) self.setup_ui() def setup_ui(self): self.modeasker = QW.QWidget() modelayout = QW.QVBoxLayout() self.modeasker.setLayout(modelayout) modelayout.addWidget(QW.QLabel('Choose mode:')) buttonlayout = QW.QHBoxLayout() self.sparkpix_g = QG.QPixmap(":/sparkbutton_gray.png") self.sparkicon = QG.QIcon(self.sparkpix_g) self.sparkpix = QG.QPixmap(":/sparkbutton.png") self.transientpix_g = QG.QPixmap(":/transientbutton_gray.png") self.transienticon = QG.QIcon(self.transientpix_g) self.transientpix = QG.QPixmap(":/transientbutton.png") self.sparkb = QW.QPushButton(self.sparkicon,'') self.sparkb.setCheckable(True) self.sparkb.setIconSize(QC.QSize(140, 140)) self.sparkb.setSizePolicy(QW.QSizePolicy.Expanding, QW.QSizePolicy.Expanding) self.transientb = QW.QPushButton(self.transienticon,'') self.transientb.setCheckable(True) self.transientb.setMouseTracking(True) self.transientb.setIconSize(QC.QSize(140, 140)) self.transientb.setSizePolicy(QW.QSizePolicy.Expanding, QW.QSizePolicy.Expanding) buttonlayout.addWidget(self.sparkb) buttonlayout.addWidget(self.transientb) modelayout.addLayout(buttonlayout) self.gobutton = QW.QPushButton('OK') self.gobutton.setEnabled(False) modelayout.addWidget(self.gobutton) self.setLayout(QW.QVBoxLayout()) self.layout().addWidget(self.modeasker) #self.setCentralWidget(self.modeasker) onsc = lambda : self.setbuttons(0) ontc = lambda : self.setbuttons(1) self.sparkb.clicked[()].connect(onsc) self.transientb.clicked[()].connect(ontc) self.gobutton.clicked[()].connect(self.go) #self.setWindowFlags(QC.Qt.Dialog) def go(self): if self.sparkb.isChecked(): self.mode_set.emit(Constants.SPARK_TYPE) else: self.mode_set.emit(Constants.TRANSIENT_TYPE) #self.close() return QW.QDialog.accept(self) def setbuttons(self, state): if not self.gobutton.isEnabled(): self.gobutton.setEnabled(True) if state == 0: self.sparkb.setChecked(True) self.transientb.setChecked(False) self.sparkicon = QG.QIcon(self.sparkpix) self.transienticon = QG.QIcon(self.transientpix_g) elif state == 1: self.transientb.setChecked(True) self.sparkb.setChecked(False) self.sparkicon = QG.QIcon(self.sparkpix_g) self.transienticon = QG.QIcon(self.transientpix) self.sparkb.setIcon(self.sparkicon) self.transientb.setIcon(self.transienticon)
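A small launch sketch for this dialog — it assumes the compiled Qt resource module providing the :/sparkbutton*.png icons has been imported, and that Constants comes from the project's constants module as above:

import sys
from PyQt5 import QtWidgets as QW

def on_mode_set(mode):
    # mode is Constants.SPARK_TYPE or Constants.TRANSIENT_TYPE
    print('Mode chosen:', mode)

if __name__ == '__main__':
    app = QW.QApplication(sys.argv)
    dialog = StartUI()
    dialog.mode_set.connect(on_mode_set)
    if dialog.exec_() == QW.QDialog.Accepted:
        print('OK pressed')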
The Russian Duma has declared that Kyiv’s decision to make Ukrainian the language of instruction in Ukrainian schools is “an act of ethnocide” of the ethnic Russian people in Ukraine, thus denouncing in another what Moscow is itself doing in Russia and ignoring who is really responsible for the shift away from Russian ethnic identity in Ukraine. In a Kasparov.ru commentary, Russian analyst Igor Yakovenko notes that “ethnocide is the policy of the intentional destruction of national identity and the self-consciousness of a people,” something that can be achieved either by genocide or by forced assimilation into another human community. There is no genocide of ethnic Russians going on in Ukraine except in the fevered imaginations of some Russian commentators, Yakovenko says; but there is assimilation of ethnic Russians into the Ukrainian nation – not as a result of Kyiv’s policies but rather because of the actions and statements of the Russian government. In Soviet times, the share of ethnic Russians in the Ukrainian population rose from 9.23 percent in 1926 to 22.07 percent in 1989, the result of the mass murder of the Ukrainian peasantry by Stalin, the Moscow-organized in-migration of ethnic Russians, and Moscow’s encouragement of Russian as opposed to Ukrainian identity. The next Ukrainian census is scheduled for 2020, and it will show a precipitous decline in the share of ethnic Russians in the population, Yakovenko says. A recent survey found that only six percent of the citizens of Ukraine now say they are ethnic Russians. The figure in 2020 will likely be even lower. Most ethnic Russians in Ukraine are characterized by “bi-ethnicity,” the Russian analyst says. That is, people who hold this identity view themselves as part of two peoples simultaneously – Russians and Ukrainians. But now, if they have to choose, almost all of these people will choose to identify as Ukrainians. That shift is all the more troubled because of the ethnic closeness of the Ukrainian and Russian peoples, and it is proceeding more intensively above all because of the war that Russia has unleashed against Ukraine. But it is not connected only with the war, Yakovenko says. It also reflects the hatred of Ukraine spewed out by Russian media outlets, which still reach many people in Ukraine. As a result, “to be an ethnic Russian in Ukraine is becoming a problem,” the analyst says. It is not just a problem of how others view ethnic Russians in Ukraine, he argues; it is also a problem of self-consciousness, of how ethnic Russians in Ukraine see themselves. They do not see themselves as Moscow TV commentators insist they should, and they are choosing to be Ukrainians even though under different circumstances they might have chosen otherwise. “If it weren’t for the war, the political talk shows, and a number of other broadcasts of Russian television,” ethnic Russians in Ukraine wouldn’t be confronted with a choice. But when they hear what those who are invading their country say, they make the only reasonable choice and become Ukrainians. Thus, Yakovenko says, “an ethnocide of the Russian people in Ukraine is really occurring. Russia by its military actions and its television broadcasts is intentionally carrying it out.” That Moscow should blame Kyiv for what the Russian authorities are themselves doing is only yet another confirmation of that reality.
# Copyright 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import socket from unittest import mock from octavia.amphorae.backends.utils import haproxy_query as query from octavia.common import constants from octavia.common import utils as octavia_utils import octavia.tests.unit.base as base STATS_SOCKET_SAMPLE = ( "# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq," "econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg," "downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim," "rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp" "_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot" ",cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk," "last_agt,qtime,ctime,rtime,ttime,\n" "http-servers:listener-id,id-34821,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0," "1,1,575,575,,1,3,1,,0,,2,0,,0,L4TOUT,,30001,0,0,0,0,0,0,0,,,,0,0,,,,,-1,," ",0,0,0,0,\n" "http-servers:listener-id,id-34824,0,0,0,0,,0,0,0,,0,,0,0,0,0,DOWN,1,1,0," "1,1,567,567,,1,3,2,,0,,2,0,,0,L4TOUT,,30001,0,0,0,0,0,0,0,,,,0,0,,,,,-1,," ",0,0,0,0,\n" "http-servers:listener-id,BACKEND,0,0,0,0,200,0,0,0,0,0,,0,0,0,0,DOWN,0,0," "0,,1,567,567,,1,3,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,0,0,0,0,-1,,,0,0,0," "0,\n" "tcp-servers:listener-id,id-34833,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,1,0,1,1," "560,560,,1,5,1,,0,,2,0,,0,L4TOUT,,30000,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0," "\n" "tcp-servers:listener-id,id-34836,0,0,0,0,,0,0,0,,0,,0,0,0,0,UP,1,1,0,1,1," "552,552,,1,5,2,,0,,2,0,,0,L4TOUT,,30001,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0," "\n" "tcp-servers:listener-id,id-34839,0,0,0,0,,0,0,0,,0,,0,0,0,0,DRAIN,0,1,0," "0,0,552,0,,1,5,2,,0,,2,0,,0,L7OK,,30001,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0," "\n" "tcp-servers:listener-id,id-34842,0,0,0,0,,0,0,0,,0,,0,0,0,0,MAINT,0,1,0," "0,0,552,0,,1,5,2,,0,,2,0,,0,L7OK,,30001,,,,,,,0,,,,0,0,,,,,-1,,,0,0,0,0," "\n" "tcp-servers:listener-id,BACKEND,0,0,0,0,200,0,0,0,0,0,,0,0,0,0,UP,0,0,0,," "1,552,552,,1,5,0,,0,,1,0,,0,,,,,,,,,,,,,,0,0,0,0,0,0,-1,,,0,0,0,0," ) INFO_SOCKET_SAMPLE = ( 'Name: HAProxy\nVersion: 1.5.3\nRelease_date: 2014/07/25\nNbproc: 1\n' 'Process_num: 1\nPid: 2238\nUptime: 0d 2h22m17s\nUptime_sec: 8537\n' 'Memmax_MB: 0\nUlimit-n: 4031\nMaxsock: 4031\nMaxconn: 2000\n' 'Hard_maxconn: 2000\nCurrConns: 0\nCumConns: 32\nCumReq: 32\n' 'MaxSslConns: 0\nCurrSslConns: 0\nCumSslConns: 0\nMaxpipes: 0\n' 'PipesUsed: 0\nPipesFree: 0\nConnRate: 0\nConnRateLimit: 0\n' 'MaxConnRate: 0\nSessRate: 0\nSessRateLimit: 0\nMaxSessRate: 0\n' 'SslRate:0\nSslRateLimit: 0\nMaxSslRate: 0\nSslFrontendKeyRate: 0\n' 'SslFrontendMaxKeyRate: 0\nSslFrontendSessionReuse_pct: 0\n' 'SslBackendKeyRate: 0\nSslBackendMaxKeyRate: 0\nSslCacheLookups: 0\n' 'SslCacheMisses: 0\nCompressBpsIn: 0\nCompressBpsOut: 0\n' 'CompressBpsRateLim: 0\nZlibMemUsage: 0\nMaxZlibMemUsage: 0\nTasks: 4\n' 'Run_queue: 1\nIdle_pct: 100\nnode: amphora-abd35de5-e377-49c5-be32\n' 'description:' ) class QueryTestCase(base.TestCase): def setUp(self): 
self.q = query.HAProxyQuery('') super().setUp() @mock.patch('socket.socket') def test_query(self, mock_socket): sock = mock.MagicMock() sock.connect.side_effect = [None, socket.error] sock.recv.side_effect = ['testdata', None] mock_socket.return_value = sock self.q._query('test') sock.connect.assert_called_once_with('') sock.send.assert_called_once_with(octavia_utils.b('test\n')) sock.recv.assert_called_with(1024) self.assertTrue(sock.close.called) self.assertRaisesRegex(Exception, 'HAProxy \'test\' query failed.', self.q._query, 'test') def test_get_pool_status(self): query_mock = mock.Mock() self.q._query = query_mock query_mock.return_value = STATS_SOCKET_SAMPLE self.assertEqual( {'tcp-servers:listener-id': { 'status': constants.UP, 'listener_uuid': 'listener-id', 'pool_uuid': 'tcp-servers', 'members': {'id-34833': constants.UP, 'id-34836': constants.UP, 'id-34839': constants.DRAIN, 'id-34842': constants.MAINT}}, 'http-servers:listener-id': { 'status': constants.DOWN, 'listener_uuid': 'listener-id', 'pool_uuid': 'http-servers', 'members': {'id-34821': constants.DOWN, 'id-34824': constants.DOWN}}}, self.q.get_pool_status() ) def test_show_info(self): query_mock = mock.Mock() self.q._query = query_mock query_mock.return_value = INFO_SOCKET_SAMPLE self.assertEqual( {'SslRateLimit': '0', 'SessRateLimit': '0', 'Version': '1.5.3', 'Hard_maxconn': '2000', 'Ulimit-n': '4031', 'PipesFree': '0', 'SslRate': '0', 'ZlibMemUsage': '0', 'CumConns': '32', 'ConnRate': '0', 'Memmax_MB': '0', 'CompressBpsOut': '0', 'MaxConnRate': '0', 'Uptime_sec': '8537', 'SslCacheMisses': '0', 'MaxZlibMemUsage': '0', 'SslCacheLookups': '0', 'CurrSslConns': '0', 'SslBackendKeyRate': '0', 'CompressBpsRateLim': '0', 'Run_queue': '1', 'CumReq': '32', 'SslBackendMaxKeyRate': '0', 'SslFrontendSessionReuse_pct': '0', 'Nbproc': '1', 'Tasks': '4', 'Maxpipes': '0', 'Maxconn': '2000', 'Pid': '2238', 'Maxsock': '4031', 'CurrConns': '0', 'Idle_pct': '100', 'CompressBpsIn': '0', 'SslFrontendKeyRate': '0', 'MaxSessRate': '0', 'Process_num': '1', 'Uptime': '0d 2h22m17s', 'PipesUsed': '0', 'SessRate': '0', 'MaxSslRate': '0', 'ConnRateLimit': '0', 'CumSslConns': '0', 'Name': 'HAProxy', 'SslFrontendMaxKeyRate': '0', 'MaxSslConns': '0', 'node': 'amphora-abd35de5-e377-49c5-be32', 'description': '', 'Release_date': '2014/07/25'}, self.q.show_info() )
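Outside of these unit tests, the class under test is normally pointed at a live HAProxy stats socket; a brief usage sketch (the socket path is a placeholder, not taken from the tests):

from octavia.amphorae.backends.utils import haproxy_query as query

q = query.HAProxyQuery('/var/run/haproxy-stats.sock')  # placeholder socket path
info = q.show_info()            # parsed "show info" output as a dict
print(info.get('Version'))
pools = q.get_pool_status()     # per-pool and per-member status keyed by pool:listener
for name, pool in pools.items():
    print(name, pool['status'], pool['members'])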
It looks like Chinese checkers, but it is played like chess. A fascinating game that I found mesmerising to watch. These elderly folks gathered almost daily to challenge each other, or simply to observe and consider which moves they would make if they were playing. Such an awesome community of gentlemen.
import numpy import scipy.stats # Sensor class represents a sensor connected to the virtual device. class Sensor: def __init__(self, sensor_id, name, sensor_type, minimum, maximum, sampling_rate, distribution): self._id = sensor_id self._name = name self._type = sensor_type self._minimum = minimum self._maximum = maximum self._sampling_rate = sampling_rate self._distribution = distribution self._status = False self._value = 0 # Overload of the print of a sensor object. def __str__(self): return "" + str(self._id) + " " + self._name + " " + str(self._status) # ID getter. @property def id(self): return self._id # Name getter. @property def name(self): return self._name # Name setter. @name.setter def name(self, name): self._name = name # Type getter. @property def type(self): return self._type # Type setter. @type.setter def type(self, type): self._type = type # Minimum getter. @property def minimum(self): return self._minimum # Minimum setter. @minimum.setter def minimum(self, minimum): self._minimum = minimum # Maximum getter. @property def maximum(self): return self._maximum # Maximum setter. @maximum.setter def maximum(self, maximum): self._maximum = maximum # Sampling rate getter. @property def sampling_rate(self): return self._sampling_rate # Sampling rate setter. @sampling_rate.setter def sampling_rate(self, sampling_rate): self._sampling_rate = sampling_rate # Distribution getter. @property def distribution(self): return self._distribution # Distribution setter. @distribution.setter def distribution(self, distribution): self._distribution = distribution # Value getter. @property def value(self): return self._value # Function for generate a random integer value following a normal distribution. def normalDistributionInteger(self): x = numpy.arange(self._minimum, self._maximum + 1) sigma = numpy.std(x) xU, xL = x + 0.5, x - 0.5 prob = scipy.stats.norm.cdf(xU, scale=3*sigma) - scipy.stats.norm.cdf(xL, scale=3*sigma) prob = prob / prob.sum() # normalize the probabilities so their sum is 1 self._value = numpy.random.choice(x, p=prob) # Function for generate a float integer value following a normal distribution. def normalDistributionFloat(self): x = numpy.linspace(round(self._minimum, 2), round(self._maximum + 1, 2)) sigma = numpy.std(x) xU, xL = x + 0.50, x - 0.50 prob = scipy.stats.norm.cdf(xU, scale=3*sigma) - scipy.stats.norm.cdf(xL, scale=3*sigma) prob = prob / prob.sum() # normalize the probabilities so their sum is 1 self._value = round(numpy.random.choice(x, p=prob), 2) # Function for generate a random integer value following a uniform distribution. def uniformDistributionInteger(self): self._value = numpy.random.randint(self._minimum, self._maximum + 1) # Function for generate a random float value following a normal distribution. def uniformDistributionFloat(self): x = numpy.random.uniform(self.minimum, self._maximum + 1) # Round the generated number with 2 decimals. self._value = round(x, 2) # Activate the sensor. def activate(self): self._status = True # Deactivate the sensor. def deactivate(self): self._status = False # Return the status of the sensor. def isActive(self): return self._status # Assign a random value to the value attribute depending from the distribution and type. 
def randomValue(self): if self._status: if (self._distribution == 'normal') and (self._type == 'integer'): self.normalDistributionInteger() elif (self._distribution == 'normal') and (self._type == 'float'): self.normalDistributionFloat() elif (self._distribution == 'uniform') and (self._type == 'integer'): self.uniformDistributionInteger() elif (self._distribution == 'uniform') and (self._type == 'float'): self.uniformDistributionFloat() else: print("Device", self._id, "is not active, please activate it before")
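A short usage sketch of the class (the constructor values below are illustrative, not taken from any configuration file):

# an integer sensor drawing from a uniform distribution between 15 and 30
sensor = Sensor(sensor_id=1, name="temperature", sensor_type="integer",
                minimum=15, maximum=30, sampling_rate=1.0, distribution="uniform")
sensor.activate()
sensor.randomValue()   # refreshes the value because the sensor is active
print(sensor)          # -> "1 temperature True"
print(sensor.value)    # e.g. 22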
Just when I was worried spring would never arrive, we are having a gorgeous, sunny day, almost warm enough for shorts. To celebrate, I am wearing Balmain's Vent Vert, the classic green floral fragrance created by Germaine Cellier and released in 1947. Like just about everything else, this was "reworked" to appeal to modern sensibilities; the reformulation was done by perfumer Calice Becker, and I believe the relaunch was in 1990. The notes for the new version are greens, orange blossom, lemon, lime, basil, rose, galbanum, lily of the valley, freesia, hyacinth, tagetes, ylang-ylang, violet, oakmoss, sandalwood, sage, iris, amber, and musk. Vent Vert starts out a strong, bitter, slightly perfume-y green. After a few minutes the bitterness calms and the citrus notes and orange blossom take the stage, later still, the citrus notes are replaced by the other florals. Throughout, it retains a strong undertone of crushed stems, leaves, and grass, just slightly bitter and dry. It is fresh and light, a summery meadow kind of thing, very beautiful, very calm. The far dry down is simply lovely, with just a hint of oakmoss, and just before it fades away I can make out the iris and sandalwood as well. Happily for me, I don't smell any amber. I have the Eau de Toilette, and the lasting power is reasonable but not spectacular. The original version of Balmain Vent Vert is said to be far superior, and last year I finally bought a vintage bottle from ebay. Of course, it is hard to say how accurately it represents the "real" original, given that the bottle I purchased must be well over 10 years old, and it is likely that some of the ingredients had long since been replaced by cheaper alternatives by the time my particular bottle was produced. At any rate, I suppose it is heresy, but I prefer the new version. The old is more uncompromisingly green and bitter, and there is a darker edge to the florals. It does, however, seem more in keeping with Germaine Cellier's other perfumes, such as Bandit and Fracas, which tend to have a disquieting effect. Update: I recently had the chance to try the original version again, this time from another bottle. It was far superior to the bottle I had purchased on ebay, and I liked it much better. It is still so different from the reformulation that they might as well have been given different names. You've awakened my lemming! I've really liked all the Balmains I have sampled so far. Hi R! One of my mom's perfumes – nice enough but not for me. Your review is great (as usual!). And sad to say, Vent Vert is the only Balmain I've tried so far! I wish the line was easier to find — do you know of any department stores that carry it? you are particularly acute in your descriptions of green notes in fragrance … especially that frequent bitter tone. I love green notes in the drydown, but can't wait to get past that bitterness. I probably feel toward “bitter” the way some feel toward “powder”: a necessary evil. anyway, wonderful review — and I actually am encouraged that you favor the new! I bought Ivoire from eBay and have seen it lots at TJ Maxx. Plus, I see all the others (that I'm familiar with) on eBay pretty regularly, and at many of the online discount shops (like parfum1, etc.) — all can be had for very little $$$. Thanks G, I'll keep an eye out at TJ Maxx! M, I wonder if I just like the new because it is what I tried first? Sort of like: I know Sean Connery is the best Bond, but I like Roger Moore because I saw all his movies first. Ok, weird analogy. Maybe I need a nap. 
I was eager to try Vent Vert again – I remember it from a purchase I made in France about 35 years ago! The new version, however, is disappointing in that it doesn't seem (to me, anyway) very close to the original — seems a bit heavier, sweeter … though I am beginning to like it for its own qualities. Can anyone recommend a truly light, green fragrance to try? The new version is bound to disappoint if you're used to the old, I completely agree. I tried the new one first, so didn't have the comparison & liked it very much. Are you looking for a green floral or just plain old green? If the latter, try Miller et Bertaux no. 3, or if you can get your hands on it, Gobin Daude Sous Le Buis. Other possibilities: Sisley Eau de Campagne, Diptyque Virgilio. Many thanks for the suggestions! I've bought the M & B (found a great price on 3.4 oz) and samples of the others. They sound like just what I'm looking for. Hope you will like it, it is one of my favorite greens! Very summery & lively. I love it! By chance, a seller on eBay from whom I'd ordered a couple of other scents included a sample of the M&B with the order. It came yesterday & I'm enjoying it today. Would you also recommend a couple of your favorite green florals? I received also a sample of the Dyptique, and while it's intriguing, it's a bit TOO purely green for me. For floral, Annick Goutal Camille or L'Artisan Jacinthe des Bois might suit you. My mother gave me a bottle of the real Vent Vert in 1964. It smelled the way a French perfume should,cut through the sickly, candy/powder/stifling scents worn by the girls at school and created a pleasant, green, biting* aura around me. The 'Vent Vert' on sale these days I do not recognize. *When I say 'biting' I mean it as a compliment. Not every perfume smells 'romantic' or 'sweet' – even in a pleasant way. My mother often brought back perfumes from Paris and I recall their unusual compositions so different from what was being sold here. I am still see-sawing over Bandit. Ah, that is just how I used to feel about Jean Couturier Coriandre. I can't say I don't recognize it now because I try not to smell it when I see it…which is rare, and usually for 9.99 at TJ Maxx. Although I have never tried Balmain's Vent Vert, I am intrigued, for I have been searching for a substitute for my beloved “Vetiver” by the House of Carven. Is anyone familiar with both? I would like to receive any comments of comparison. Hmmm…I have not smelled the Carven, but rather doubt Vent Vert would be a good substitute for a mostly-vetiver scent. Are you just not happy with the newer formulation of the Carven? And what other vetivers have you tried? oh dear i'm a little late to the party….i adore Vent Vert, have and love the parfum and the edt. its a sweet-ish green for sure, the parfum being rich as well. Ivoire is an HG- odd, woody, clean and my husband loves it too. it's just perfect. may i recommend Carven's Ma Griffe. once read it described as a “pointy” green fragrance and that is a perfect description. off the common scent path for sure but may be perfect for someone searching for a special green frag. may i ask you about Ma Griffe, there seem to be 2 bottle styles “out there', an angular plain glass bottle with a green plastic lid, and the squat square glass bottle with the small lid. do you think there is a difference in the scent? wondering if mine might be vintage-ish. 
I'm sorry but I had to delete the MUA quote you had posted below…that is copyrighted material and you'd quoted more than would be allowed by fair use laws. I have seen at least 3 different Ma Griffe bottles, but I've no idea how they're dated, sorry! well i guess i will just have to keep sniffing. thanks so much. Just tried a parfum version packaged in the square bottle with label on the edge – called vintage on ebay, but I don’t know how vintage it is. Perhaps it’s the ’90 version. Gorrrrrgeous. I do love a green floral – I was expecting more of a blast of galbanum, but only got enough to make it fresh as cut grass: perfect, in other words. Yeah, have a feeling your bottle is not terribly old: the vintage ones I’ve smelled are very galbanum-heavy, esp. in the opening. I just got my sample of Vent Vert , and it seemed strangely familiar. I tried it next to Chanel No. 19 and that was it. VV is like a slightly airier version of 19 on me – does anyone else get this? They have similar notes…wonder if they were more alike in their original forms?
from google.appengine.api import memcache from google.appengine.api import urlfetch from google.appengine.ext import ndb from googleapiclient import errors from googleapiclient import discovery from oauth2client.contrib import appengine import cgi import csv import httplib2 import io import logging import time import random import os RELOAD_ACL_QUERY_PARAM = 'grow-reload-acl' SCOPE = 'https://www.googleapis.com/auth/drive' EDIT_URL = 'https://docs.google.com/spreadsheets/d/{}' RETRY_ERRORS = [ 'backendError', 'internalServerError', 'quotaExceeded', 'userRateLimitExceeded', ] discovery.logger.setLevel(logging.WARNING) urlfetch.set_default_fetch_deadline(60) class Error(Exception): pass class Settings(ndb.Model): sheet_id = ndb.StringProperty() sheet_gid_global = ndb.StringProperty() sheet_gid_admins = ndb.StringProperty() @classmethod def instance(cls): key = ndb.Key(cls.__name__, 'Settings') ent = key.get() if ent is None: ent = cls(key=key) ent.put() logging.info('Created settings -> {}'.format(key)) return ent def get_query_dict(): query_string = os.getenv('QUERY_STRING', '') return cgi.parse_qs(query_string, keep_blank_values=True) def create_service(api='drive', version='v2'): credentials = appengine.AppAssertionCredentials(SCOPE) http = httplib2.Http() http = credentials.authorize(http) return discovery.build(api, version, http=http) def _request_with_backoff(service, url): for n in range(0, 5): resp, content = service._http.request(url) if resp.status in [429]: logging.info('Attempt {} for {}'.format(n, url)) logging.info(resp) time.sleep((2 ** (n + 1)) + random.random()) continue return resp, content raise Error('Error {} {} downloading sheet: {}'.format(resp.status, resp.reason, url)) def _request_sheet_content(sheet_id, gid=None): service = create_service() logging.info('Loading ACL -> {}'.format(sheet_id)) for n in range(0, 5): try: resp = service.files().get(fileId=sheet_id).execute() except errors.HttpError as error: if error.resp.reason in RETRY_ERRORS: logging.info('Attempt {} for {}'.format(n, url)) time.sleep((2 ** (n + 1)) + random.random()) continue raise if 'exportLinks' not in resp: raise Error('Nothing to export: {}'.format(sheet_id)) for mimetype, url in resp['exportLinks'].iteritems(): if not mimetype.endswith('csv'): continue if gid is not None: url += '&gid={}'.format(gid) resp, content = _request_with_backoff(service, url) if resp.status != 200: text = 'Error {} downloading sheet: {}:{}' text = text.format(resp.status, sheet_id, gid) raise Error(text) return content def get_sheet(sheet_id, gid=None, use_cache=True): """Returns a list of rows from a sheet.""" query_dict = get_query_dict() force_cache = RELOAD_ACL_QUERY_PARAM in query_dict cache_key = 'google_sheet:{}:{}'.format(sheet_id, gid) logging.info('Loading Google Sheet -> {}'.format(cache_key)) result = memcache.get(cache_key) if result is None or force_cache or not use_cache: content = _request_sheet_content(sheet_id, gid=gid) fp = io.BytesIO() fp.write(content) fp.seek(0) reader = csv.DictReader(fp) result = [row for row in reader] logging.info('Saving Google Sheet in cache -> {}'.format(cache_key)) memcache.set(cache_key, result) return result def append_rows(sheet_id, gid, rows_to_append): rows = [] for row in rows_to_append: values = [] for item in row: values.append({ 'userEnteredValue': { 'stringValue': item, }, }) rows.append({'values': values}) service = create_service(api='sheets', version='v4') requests = [] requests.append({ 'appendCells': { 'fields': 'userEnteredValue', 'rows': rows, 'sheetId': 
gid, }, }) body = {'requests': requests} resp = service.spreadsheets().batchUpdate( spreadsheetId=sheet_id, body=body).execute() def get_spreadsheet_url(sheet_id, gid=None): url = 'https://docs.google.com/spreadsheets/d/{}'.format(sheet_id) if gid: url += '#gid={}'.format(gid) return url def create_sheet(title='Untitled Grow Website Access'): service = create_service() data = { 'title' : title, 'mimeType' : 'application/vnd.google-apps.spreadsheet' } resp = service.files().insert(body=data, fields='id').execute() logging.info('Created sheet -> {}'.format(resp['id'])) return resp['id'] def share_sheet(file_id, emails): service = create_service() for email in emails: permission = { 'type': 'user', 'role': 'writer', 'value': email, } service.permissions().insert( fileId=file_id, body=permission, fields='id', ).execute() logging.info('Shared sheet -> {}'.format(email)) def get_or_create_sheet_from_settings(title=None, emails=None): settings = Settings.instance() if settings.sheet_id is None: if title: title = '{} Website Access'.format(title) sheet_id = create_sheet(title=title) share_sheet(sheet_id, emails) settings.sheet_id = sheet_id settings.put() sheet_id = settings.sheet_id sheet_gid_global = settings.sheet_gid_global resp = get_sheet(sheet_id, gid=sheet_gid_global) return resp
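A rough usage sketch of the module's helpers — the sheet id, tab gid, and e-mail addresses are placeholders, and it assumes the App Engine service-account credentials set up above:

SHEET_ID = 'placeholder-sheet-id'
GID = 0  # tab id within the spreadsheet

rows = get_sheet(SHEET_ID, gid=GID, use_cache=False)   # list of dicts, one per row
for row in rows:
    print(row)

append_rows(SHEET_ID, GID, [['jane@example.com', 'writer']])
share_sheet(SHEET_ID, ['admin@example.com'])
print(get_spreadsheet_url(SHEET_ID, gid=GID))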
Not unlike the feeling that new parents experience, families faced with the sudden need to care for a senior parent or relative usually feel scared and overwhelmed. How to cope with the new care challenges, as well as the simple addition of so many daily tasks to an already busy workload, leave some difficult questions to answer. What type of care tasks can I provide? What do I need to learn about elder care and medication before I begin? Could home care damage a cherished family relationship? Will I be able to face the more personal and intimate requirements of care? These questions and many more arise when such a seismic and, usually, an unexpected duty of care enters our lives. Home care is a challenging job in its own right. That challenge increases exponentially when the dependent is one’s own mother, father, or another relative. The potential for strained relationships and disagreements, added to the increased burden of everyday tasks and chores, mean some big decisions regarding how you’ll care for your elderly parent and what support you’ll need. Today, we’ll look at some of the factors you’ll need to consider as you begin to take on the task of providing care and comfort for a senior family member. The first thing to do is sit down with your senior family member – or even someone who understands their daily routine – to draw up a complete list of tasks that absolutely must get done every day, week, and month. These could include a wide range of activities, so start with an open mind and give the task plenty of time. Think about physical activities where the person might need help, like dressing in the morning, getting up and around the house, heading out of the house for a morning walk, and any other common morning task. This will set you off thinking about the daily routine and help you both walk through a typical day. In each scenario, consider whether your loved one’s condition will limit them from getting through the activity and how you can help make it easier, preferably without intruding or taking it over completely. Regular chores like cleaning, weekly trips to the grocery store, walking the dog, and bringing in the mail will provide another example of areas you can help. Admin tasks such as making sure bills get paid and services are arranged also need to be considered, as seniors often rely on these services and won’t be able to cope without them. Keep in mind that this is not a list of everything you will do, but merely a starting point to separate out the most vital tasks from those that are less important, or which your loved one should still be able to accomplish. You can also divide the responsibilities among family, friends, and professional service providers in some cases, making the burden less difficult to bear for the primary caregiver. As the offers of help increase from family and friends, it’s often the case that the voice of your senior family member can become drowned out. Don’t allow this to happen. The sense of isolation and losing control will be considerably worse than any element of care that may be overlooked for a moment. Firstly, make sure that the discussions about care and what’s required include your loved one, wherever possible. Their wishes should be considered in every case, even when a decision has to be made that they might not like. Even having some input is better than feeling that family members are taking everything away from them, as it offers some sense of control and influence. 
As much as their condition makes it possible, allow them that input. Sometimes the very act of having a conversation about care can be therapeutic. The shift to full-time care can be particularly difficult, so take it one step at a time and be openly communicative with your senior family member at every possible stage. There are various levels of support you can receive in caregiving, from a few tasks being taken care of on your behalf to a complete home care solution. From your task list created in the earlier section, start with the tasks that you know will be toughest to achieve based on your skills and schedule, then make sure you seek out qualified help to get them done. This could be anything from getting evening meals delivered because you can’t get out of work quickly enough to cook, to having an care provider check in during the early morning when you can’t be around to set up the day. A professional home care provider will cover all kinds of services. The kind of service – whether agency or individual, medical professional or home health assistant – will depend entirely on the requirements of your senior family member and the availability of secondary caregivers in the form of other friends and family. You might only need a few hours help a week at first, but that requirement could grow and already having a caregiver on hand whom you trust will make the transition that much easier. Even with the many treatments and care options available, the most important thing to remember about caring for a parent into their senior years is communication. As you take on a new set of care duties, it’s important to keep in mind that this person is family first, patient or dependent second. Talk to them as you always have, play games, read, laugh and cry with him or her. Enjoy all the regular activities you have throughout their life, as much as their condition allows it. Old age raises all kinds of health problems and challenges, but they need not remove the fullness of life from the things you love to do with your senior. As our family members age, that all-important quality of life is increasingly based on the care you can provide – personally or with the help of a professional – and the experiences you help them have. We know you will give everything to the care of your senior family member, but make sure that in doing so you don’t overlook the importance of celebrating the life and memorable moments you have experienced together.
from __future__ import division import numpy as np import cv2 as cv import scipy.spatial from cv_utils import Box, img_utils, feature_extractor as fe _DEF_TM_OPT = dict(feature='rgb', distance='correlation', normalize=True, retain_size=True) def match_one(template, image, options=None): """ Match template and find exactly one match in the Image using specified features. :param template: Template Image :param image: Search Image :param options: Options include - features: List of options for each feature :return: (Box, Score) Bounding box of the matched object, Heatmap value """ heatmap, scale = multi_feat_match(template, image, options) min_val, _, min_loc, _ = cv.minMaxLoc(heatmap) top_left = tuple(scale * x for x in min_loc) score = min_val h, w = template.shape[:2] return Box(top_left[0], top_left[1], w, h), score def multi_feat_match(template, image, options=None): """ Match template and image by extracting multiple features (specified) from it. :param template: Template image :param image: Search image :param options: Options include - features: List of options for each feature :return: """ h, w = image.shape[:2] scale = 1 if options is not None and 'features' in options: heatmap = np.zeros((h, w), dtype=np.float64) for foptions in options['features']: f_hmap, _ = feature_match(template, image, foptions) heatmap += cv.resize(f_hmap, (w, h), interpolation=cv.INTER_AREA) heatmap /= len(options['features']) else: heatmap, scale = feature_match(template, image, options) return heatmap, scale def feature_match(template, image, options=None): """ Match template and image by extracting specified feature :param template: Template image :param image: Search image :param options: Options include - feature: Feature extractor to use. Default is 'rgb'. Available options are: 'hog', 'lab', 'rgb', 'gray' :return: Heatmap """ op = _DEF_TM_OPT.copy() if options is not None: op.update(options) feat = fe.factory(op['feature']) tmpl_f = feat(template, op) img_f = feat(image, op) scale = image.shape[0] / img_f.shape[0] heatmap = match_template(tmpl_f, img_f, op) return heatmap, scale def match_template(template, image, options=None): """ Multi channel template matching using simple correlation distance :param template: Template image :param image: Search image :param options: Other options: - distance: Distance measure to use. Default: 'correlation' - normalize: Heatmap values will be in the range of 0 to 1. Default: True - retain_size: Whether to retain the same size as input image. 
Default: True :return: Heatmap """ # If the input has max of 3 channels, use the faster OpenCV matching if len(image.shape) <= 3 and image.shape[2] <= 3: return match_template_opencv(template, image, options) op = _DEF_TM_OPT.copy() if options is not None: op.update(options) template = img_utils.gray3(template) image = img_utils.gray3(image) h, w, d = template.shape im_h, im_w = image.shape[:2] template_v = template.flatten() heatmap = np.zeros((im_h - h, im_w - w)) for col in range(0, im_w - w): for row in range(0, im_h - h): cropped_im = image[row:row + h, col:col + w, :] cropped_v = cropped_im.flatten() if op['distance'] == 'euclidean': heatmap[row, col] = scipy.spatial.distance.euclidean(template_v, cropped_v) elif op['distance'] == 'correlation': heatmap[row, col] = scipy.spatial.distance.correlation(template_v, cropped_v) # normalize if op['normalize']: heatmap /= heatmap.max() # size if op['retain_size']: hmap = np.ones(image.shape[:2]) * heatmap.max() h, w = heatmap.shape hmap[:h, :w] = heatmap heatmap = hmap return heatmap def match_template_opencv(template, image, options): """ Match template using OpenCV template matching implementation. Limited by number of channels as maximum of 3. Suitable for direct RGB or Gray-scale matching :param options: Other options: - distance: Distance measure to use. (euclidean | correlation | ccoeff). Default: 'correlation' - normalize: Heatmap values will be in the range of 0 to 1. Default: True - retain_size: Whether to retain the same size as input image. Default: True :return: Heatmap """ # if image has more than 3 channels, use own implementation if len(image.shape) > 3: return match_template(template, image, options) op = _DEF_TM_OPT.copy() if options is not None: op.update(options) method = cv.TM_CCORR_NORMED if op['normalize'] and op['distance'] == 'euclidean': method = cv.TM_SQDIFF_NORMED elif op['distance'] == 'euclidean': method = cv.TM_SQDIFF elif op['normalize'] and op['distance'] == 'ccoeff': method = cv.TM_CCOEFF_NORMED elif op['distance'] == 'ccoeff': method = cv.TM_CCOEFF elif not op['normalize'] and op['distance'] == 'correlation': method = cv.TM_CCORR heatmap = cv.matchTemplate(image, template, method) # make minimum peak heatmap if method not in [cv.TM_SQDIFF, cv.TM_SQDIFF_NORMED]: heatmap = heatmap.max() - heatmap if op['normalize']: heatmap /= heatmap.max() # size if op['retain_size']: hmap = np.ones(image.shape[:2]) * heatmap.max() h, w = heatmap.shape hmap[:h, :w] = heatmap heatmap = hmap return heatmap
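A small usage sketch of the matching helpers (the image paths are placeholders; cv_utils must be importable just as in the module above):

import cv2 as cv

template = cv.imread('template.png')   # small patch to look for
image = cv.imread('scene.png')         # larger image to search in

# single best match with the default RGB correlation settings
box, score = match_one(template, image)
print(box, score)

# averaged heatmap over several features; the names come from the feature_match docstring
options = {'features': [{'feature': 'gray'}, {'feature': 'hog'}]}
heatmap, scale = multi_feat_match(template, image, options)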
If you’re curious about the growth potential of the virtual-reality market—especially from an employment perspective—look no further than development giant Valve, which devotes nearly a third of its staff to VR applications. That’s massive, considering all the other projects under Valve’s umbrella, and a sign that many in tech see VR as the future of gaming (and perhaps much more). That staffing statistic comes courtesy of Alan Yates, a high-ranking engineer at Valve, in comments on Reddit. (Hat tip to Uploadvr.com for pointing it out.) “Key individuals that solved most of the really hard technological problems and facilitated this generation of consumer headsets are still here working on the next generation,” he wrote in a commenting thread. Valve is currently looking for virtual reality engineers with experience in human-computer interaction, computer vision (tracking, photogrammetry, bundle adjustment), 3D skills, and engine integration (Unity and Unreal preferred). A lot of smaller firms working in the VR space have gotten very good at leveraging specific technologies such as eye tracking and display hardware, Yates continued in another part of the thread, but haven’t managed to turn that knowledge into a successful product. “The minimum viable product is now a pretty high bar and that can stifle innovation,” he wrote. With that in mind, Valve is apparently planning on licensing its research and technology to those smaller creators, provided the smaller company’s device ends up interoperable with Valve’s platform. For smaller developers interested in VR, licensing from Valve, Facebook, Microsoft or another company in the space may eventually prove the best route forward. That would certainly save money on research and development, and allow devs to focus on creating next-generation products, although it raises some longer-term concerns about licensing conditions and vendor lock-in. Valve isn’t the only competitor in the virtual-reality space, and it remains to be seen whether its open approach to the VR ecosystem will pay off, especially since there are hints that competitors may take a different stance. (Facebook, for example, seems determined to lock down some software on the Oculus platform.) Given how it’s still very early days for VR, it may be years before any trends become clear. But in the interim, from an employee perspective, Valve is betting very big on VR as the future.
""" Linear algebra routines ======================= Linear Algebra Basics:: inv --- Find the inverse of a square matrix solve --- Solve a linear system of equations solve_banded --- Solve a linear system of equations with a banded matrix solveh_banded --- Solve a linear system of equations with a Hermitian or symmetric banded matrix, returning the Cholesky decomposition as well det --- Find the determinant of a square matrix norm --- matrix and vector norm lstsq --- Solve linear least-squares problem pinv --- Pseudo-inverse (Moore-Penrose) using lstsq pinv2 --- Pseudo-inverse using svd Eigenvalues and Decompositions:: eig --- Find the eigenvalues and vectors of a square matrix eigvals --- Find the eigenvalues of a square matrix eig_banded --- Find the eigenvalues and vectors of a band matrix eigvals_banded --- Find the eigenvalues of a band matrix lu --- LU decomposition of a matrix lu_factor --- LU decomposition returning unordered matrix and pivots lu_solve --- solve Ax=b using back substitution with output of lu_factor svd --- Singular value decomposition of a matrix svdvals --- Singular values of a matrix diagsvd --- construct matrix of singular values from output of svd orth --- construct orthonormal basis for range of A using svd cholesky --- Cholesky decomposition of a matrix cholesky_banded --- Cholesky decomposition of a banded symmetric or Hermitian matrix cho_factor --- Cholesky decomposition for use in solving linear system cho_solve --- Solve previously factored linear system qr --- QR decomposition of a matrix schur --- Schur decomposition of a matrix rsf2csf --- Real to complex schur form hessenberg --- Hessenberg form of a matrix matrix Functions:: expm --- matrix exponential using Pade approx. expm2 --- matrix exponential using Eigenvalue decomp. expm3 --- matrix exponential using Taylor-series expansion logm --- matrix logarithm cosm --- matrix cosine sinm --- matrix sine tanm --- matrix tangent coshm --- matrix hyperbolic cosine sinhm --- matrix hyperbolic sine tanhm --- matrix hyperbolic tangent signm --- matrix sign sqrtm --- matrix square root funm --- Evaluating an arbitrary matrix function. Iterative linear systems solutions:: cg --- Conjugate gradient (symmetric systems only) cgs --- Conjugate gradient squared qmr --- Quasi-minimal residual gmres --- Generalized minimal residual bicg --- Bi-conjugate gradient bicgstab --- Bi-conjugate gradient stabilized """ postpone_import = 1 depends = ['misc','lib.lapack']
The first workshop took place at the University of Bristol on 16-17 April 2009 and focused on theoretical questions of memory. The event was organised around two central strands, ‘Memory and Politics’ and ‘Memory and Media’, and welcomed keynote papers from Dr Susannah Radstone (University of East London) and Professor Ansgar Nünning (Justus-Liebig-Universität Gießen). A report of discussions from the final session summarises the main points raised during the workshop and suggests ways forward for the Network and for forthcoming events. A list of recommended reading was sent to members prior to the workshop to provide a common basis for discussion, particularly on theoretical questions of memory. Please find below the outlines of both keynote papers, which have kindly been provided by the speakers. Useful lists of suggested reading have been included in both cases.
"""Provides widgets related to submodules""" from __future__ import absolute_import from qtpy import QtWidgets from qtpy.QtCore import Qt from qtpy.QtCore import Signal from .. import cmds from .. import core from .. import qtutils from .. import icons from ..i18n import N_ from ..widgets import defs from ..widgets import standard class SubmodulesWidget(QtWidgets.QFrame): def __init__(self, context, parent): QtWidgets.QFrame.__init__(self, parent) self.setToolTip(N_('Submodules')) self.tree = SubmodulesTreeWidget(context, parent=self) self.setFocusProxy(self.tree) self.main_layout = qtutils.vbox(defs.no_margin, defs.spacing, self.tree) self.setLayout(self.main_layout) # Titlebar buttons self.refresh_button = qtutils.create_action_button( tooltip=N_('Refresh'), icon=icons.sync()) self.open_parent_button = qtutils.create_action_button( tooltip=N_('Open Parent'), icon=icons.repo()) self.button_layout = qtutils.hbox(defs.no_margin, defs.spacing, self.open_parent_button, self.refresh_button) self.corner_widget = QtWidgets.QWidget(self) self.corner_widget.setLayout(self.button_layout) titlebar = parent.titleBarWidget() titlebar.add_corner_widget(self.corner_widget) # Connections qtutils.connect_button(self.refresh_button, context.model.update_submodules_list) qtutils.connect_button(self.open_parent_button, cmds.run(cmds.OpenParentRepo, context)) class SubmodulesTreeWidget(standard.TreeWidget): updated = Signal() def __init__(self, context, parent=None): standard.TreeWidget.__init__(self, parent=parent) self.context = context self.main_model = model = context.model self.setSelectionMode(QtWidgets.QAbstractItemView.SingleSelection) self.setHeaderHidden(True) # UI self._active = False self.list_helper = BuildItem() self.itemDoubleClicked.connect(self.tree_double_clicked) # Connections self.updated.connect(self.refresh, type=Qt.QueuedConnection) model.add_observer(model.message_submodules_changed, self.updated.emit) def refresh(self): if not self._active: return items = [self.list_helper.get(entry) for entry in self.main_model.submodules_list] self.clear() self.addTopLevelItems(items) def showEvent(self, event): """Defer updating widgets until the widget is visible""" if not self._active: self._active = True self.refresh() return super(SubmodulesTreeWidget, self).showEvent(event) def tree_double_clicked(self, item, _column): path = core.abspath(item.path) cmds.do(cmds.OpenRepo, self.context, path) class BuildItem(object): def __init__(self): self.state_folder_map = {} self.state_folder_map[''] = icons.folder() self.state_folder_map['+'] = icons.staged() self.state_folder_map['-'] = icons.modified() self.state_folder_map['U'] = icons.merge() def get(self, entry): """entry: same as returned from list_submodule""" name = entry[2] path = entry[2] # TODO better tip tip = path + '\n' + entry[1] if entry[3]: tip += '\n({0})'.format(entry[3]) icon = self.state_folder_map[entry[0]] return SubmodulesTreeWidgetItem(name, path, tip, icon) class SubmodulesTreeWidgetItem(QtWidgets.QTreeWidgetItem): def __init__(self, name, path, tip, icon): QtWidgets.QTreeWidgetItem.__init__(self) self.path = path self.setIcon(0, icon) self.setText(0, name) self.setToolTip(0, tip)
Certified Boom Repair specializes in the refurbishment of material handling equipment such as cranes, forklifts and man-lifts. Our turn-key operation provides disassembly, boom repair, reassembly with manufacturer-provided parts, and third-party OSHA-licensed inspections, so that when your crane is complete you can take it straight to work. Please call us whenever you need this type of service.
# Copyright 2013 The Chromium Authors. All rights reserved. # Use of this source code is governed by a BSD-style license that can be # found in the LICENSE file. import hashlib import os def CallAndRecordIfStale( function, record_path=None, input_paths=None, input_strings=None, force=False): """Calls function if the md5sum of the input paths/strings has changed. The md5sum of the inputs is compared with the one stored in record_path. If this has changed (or the record doesn't exist), function will be called and the new md5sum will be recorded. If force is True, the function will be called regardless of whether the md5sum is out of date. """ if not input_paths: input_paths = [] if not input_strings: input_strings = [] md5_checker = _Md5Checker( record_path=record_path, input_paths=input_paths, input_strings=input_strings) if force or md5_checker.IsStale(): function() md5_checker.Write() def _UpdateMd5ForFile(md5, path, block_size=2**16): with open(path, 'rb') as infile: while True: data = infile.read(block_size) if not data: break md5.update(data) def _UpdateMd5ForDirectory(md5, dir_path): for root, _, files in os.walk(dir_path): for f in files: _UpdateMd5ForFile(md5, os.path.join(root, f)) def _UpdateMd5ForPath(md5, path): if os.path.isdir(path): _UpdateMd5ForDirectory(md5, path) else: _UpdateMd5ForFile(md5, path) class _Md5Checker(object): def __init__(self, record_path=None, input_paths=None, input_strings=None): if not input_paths: input_paths = [] if not input_strings: input_strings = [] assert record_path.endswith('.stamp'), ( 'record paths must end in \'.stamp\' so that they are easy to find ' 'and delete') self.record_path = record_path md5 = hashlib.md5() for i in sorted(input_paths): _UpdateMd5ForPath(md5, i) for s in input_strings: md5.update(s) self.new_digest = md5.hexdigest() self.old_digest = '' if os.path.exists(self.record_path): with open(self.record_path, 'r') as old_record: self.old_digest = old_record.read() def IsStale(self): return self.old_digest != self.new_digest def Write(self): with open(self.record_path, 'w') as new_record: new_record.write(self.new_digest)
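# A small usage sketch (not part of the original file). The callback, paths
# and strings below are illustrative assumptions only; note that record_path
# must end in '.stamp' to satisfy the assertion in _Md5Checker.
if __name__ == '__main__':
  def _ExampleBuildStep():
    print('inputs changed - rebuilding...')

  CallAndRecordIfStale(
      _ExampleBuildStep,
      record_path='example_target.stamp',
      input_paths=[__file__],             # hash this script itself
      input_strings=['gcc', '-O2'],       # command-line flags count as inputs too
      force=False)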
My one gripe about Paalkova – How to make Palkova / Milk Sweet is that it comes in endless exciting-sounding flavor combinations, and they are all generally absolutely divine. In a heavy-bottomed pan, add milk (use full-fat milk to get a larger quantity) and allow it to boil. Once it boils, mix it well. Allow a thickened layer of milk to form on top, then stir it back into the milk. Now reduce the flame to medium to avoid burning the milk. Allow it to boil, stirring and scraping the sides of the pan from time to time, until it reduces by half. Now you can see the milk reducing into a thick paste. Once it thickens further, add sugar and mix well. Once the sugar melts, the mixture will loosen again. Mix it again and allow it to thicken. Add ghee, mix well, and allow it to cook until it becomes semi-solid. When it is semi-solid, transfer it to a bowl and allow it to cool down. As it cools, it will thicken further. Finally, Paalkova – How to make Palkova / Milk Sweet is ready to serve warm or cold!! Cooking time will vary depending on the flame and the pan you use. Add cardamom powder if you like. Use a heavy-bottomed (nonstick or ceramic) pan to make the process easier and to avoid burning. Add more sugar according to your desired sweetness. Stir between the bottom and sides of the pan to avoid burning. Once it cools down, store it in the refrigerator for up to one week.
"""Provide a non-decreasing clock() function. In Windows, time.clock() provides number of seconds from first call, so use that. In Unix, time.clock() is CPU time, and time.time() reports system time, which may not be non-decreasing.""" import time import sys _MAXFORWARD = 100 _FUDGE = 1 class RelativeTime(object): # pylint: disable=R0903 """Non-decreasing time implementation for Unix""" def __init__(self): self.time = time.time() self.offset = 0 def get_time(self): """Calculate a non-decreasing time representation""" systemtime = time.time() now = systemtime + self.offset if self.time < now < self.time + _MAXFORWARD: self.time = now else: # If time jump is outside acceptable bounds, move ahead one second # and note the offset self.time += _FUDGE self.offset = self.time - systemtime return self.time if sys.platform != 'win32': clock = RelativeTime().get_time # pylint: disable=C0103 else: from time import clock
If you had any issues browsing some of your favorite websites last week you weren’t alone! Access to Amazon, Twitter, PayPal and Spotify, just to name a few, was totally cut off on Friday. Industry professionals call it a distributed denial-of-service, or DDoS, attack. In other words, a whole bunch of devices connected to the Internet sent a lot of data to machines (servers) that made them crash. Hackers did it, though exactly who remains in question. The attack vector, though, is definitely known. Websites were not actually attacked. Instead, the suspects targeted the path that 99.9% of people use to get to the websites. A bunch of online machines called domain name system servers, or DNS servers for short, help the Internet work. They allow regular folks like you and me to type in a website name and get there. Behind the scenes, these DNS servers link the website name that you type to an IP address. So, the home address of the website, instead of 231 Main St., USA, is usually something like 198.51.100.24. Remembering complex numbers like this isn’t the strong suit of most humans, so we hit the easy button and leverage the simplicity of the DNS servers. At least until they’re massively attacked and taken offline. With Election Day less than two weeks away, some folks are worried that key information may not make its way to voters if we see another DDoS attack. In Hong Kong in 2014, a DDoS attack took down a key election website during a big vote. It affected the information flow during the campaign. So now the pay-for-protection business model is booming. It’s something I follow as part of my MarketVOX Trader service. Disruptive technologies could help solve some cyber security problems, and I look at ways to actually profit from them. You see, if you want major protection from a DDoS attack, you have to pay a major protection service provider to employ a massive online infrastructure to safeguard your precious site. It’s kind of like paying your local “Union Rep,” Uncle Vito, to operate your store in his territory under his protection, whether you like it or not. DDoS attacks are just the tip of the iceberg when it comes to threats. There are a ton of far worse events that can occur from the mischievous minds operating online. Plain theft has been the biggest issue in recent years. Yes, getting your company’s credit card payment system hacked into is a problem, but imagine having all of your company’s intellectual property stolen. Intellectual property (IP) includes everything from drawings, plans, training materials, trade secrets, and, of course, research and development projects. According to a new survey from Deloitte, one-fifth of the 2,500 professionals surveyed suspect that employees and other insiders steal company IP. In the automotive industry, one-third held this suspicion! That’s a big problem when, across the S&P 500 in 2015, companies’ total value consisted of 87% intellectual property and only 13% tangible assets. One technique companies employ to lockdown IP theft is role-based access control on their networks. If you don’t have a critical role on a program, your access to the data associated with that program is restricted. Role-based network identities, though, are usually accessed by… a username and password. The problem with usernames and passwords is that hackers can easily steal and leverage them to download the guarded information. Some companies have swapped to biometrics for access, which include fingerprints, iris scans and other bodily identifiers. 
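To make the role-based idea concrete, here's a minimal Python sketch (the users, programs and roles are invented purely for illustration, not taken from any real system).

# Role-based access control in miniature: access is granted by a role on a
# program, not by who you are. All names are made up for illustration.
ROLE_GRANTS = {
    ('alice', 'wing-design'): 'engineer',   # alice works on the wing-design program
    ('bob', 'wing-design'): 'auditor',      # bob may review it but not change it
}
READ_ROLES = {'engineer', 'auditor'}
WRITE_ROLES = {'engineer'}

def can_access(user, program, action):
    role = ROLE_GRANTS.get((user, program))
    if role is None:
        return False                        # no role on this program, no access
    return role in (WRITE_ROLES if action == 'write' else READ_ROLES)

print(can_access('alice', 'wing-design', 'write'))  # True
print(can_access('bob', 'wing-design', 'write'))    # False
print(can_access('eve', 'wing-design', 'read'))     # False

In a real system, of course, identity has to be established first, whether by a username and password or by a biometric scan, before any of these checks run.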
But what happens when the biometric data eventually gets stolen? Most people just reset their password when it’s compromised. But resetting your fingerprint or eyeball is tough, even for a guy like Jason Bourne. One New York City-based startup company called HYPR Corp. is working to solve the problem of biometric theft with novel solutions that involve decentralizing the data and encrypting it. HYPR recently closed a $3 million seed round to make its cybersecurity products widely-known. Bottom line: cyber security and protection continue to be a growing and profitable business sector. The problem is that no one really respects it or is willing to pay for it until they’re the ones compromised. It really is a tangled web of cyber security. Appropriate, with Halloween fast approaching. P.S. Hidden Profits editor, John Del Vecchio, just released a new book on Wednesday. It’s called The Rule of 72: Compound Your Money and Uncover Hidden Stock Profits. It reveals the “legalized” lies that corporations are telling to drive up stock prices. And how they put your hard-earned money at risk. Get your copy on Amazon today or go to www.BuyRuleof72.com.
# -*- coding: utf-8 -*- """ v1: Created on May 31, 2016 author: Daniel Garrett ([email protected]) """ import numpy as np def maxdmag(s, ranges, x): """Calculates the maximum difference in magnitude for a given population and apparent separation value Args: s (ndarray): Apparent separation (AU) ranges (tuple): pmin (float): minimum geometric albedo Rmin (float): minimum planetary radius (km) rmax (float): maximum distance from star (AU) x (float): Conversion factor for AU to km Returns: maxdmag (ndarray): Maximum difference in magnitude for given population and separation """ pmin, Rmin, rmax = ranges PhiL = lambda b: (1./np.pi)*(np.sin(b) + (np.pi - b)*np.cos(b)) maxdmag = -2.5*np.log10(pmin*(Rmin*x/rmax)**2*PhiL(np.pi - np.arcsin(s/rmax))) return maxdmag def mindmag(s, ranges, x): """Calculates the minimum difference in magnitude for a given population and apparent separation value Args: s (ndarray): Apparent separation (AU) ranges (tuple): pmax (float): maximum geometric albedo Rmax (float): maximum planetary radius (km) rmin (float): minimum distance from star (AU) rmax (float): maximum distance from star (AU) x (float): Conversion factor for AU to km Returns: mindmag (ndarray): Minimum difference in magnitude for given population and separation """ pmax, Rmax, rmin, rmax = ranges bstar = 1.104728818644543 PhiL = lambda b: (1./np.pi)*(np.sin(b) + (np.pi - b)*np.cos(b)) if type(s) == np.ndarray: mindmag = -2.5*np.log10(pmax*(Rmax*x*np.sin(bstar)/s)**2*PhiL(bstar)) mindmag[s < rmin*np.sin(bstar)] = -2.5*np.log10(pmax*(Rmax*x/rmin)**2*PhiL(np.arcsin(s[s < rmin*np.sin(bstar)]/rmin))) mindmag[s > rmax*np.sin(bstar)] = -2.5*np.log10(pmax*(Rmax*x/rmax)**2*PhiL(np.arcsin(s[s > rmax*np.sin(bstar)]/rmax))) else: if s < rmin*np.sin(bstar): mindmag = -2.5*np.log10(pmax*(Rmax*x/rmin)**2*PhiL(np.arcsin(s/rmin))) elif s > rmax*np.sin(bstar): mindmag = -2.5*np.log10(pmax*(Rmax*x/rmax)**2*PhiL(np.arcsin(s/rmax))) else: mindmag = -2.5*np.log10(pmax*(Rmax*x*np.sin(bstar)/s)**2*PhiL(bstar)) return mindmag
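# A short usage sketch (not part of the original module). The albedo, radius,
# and orbital ranges below are illustrative assumptions only, not values
# prescribed by this code.
if __name__ == '__main__':
    x = 1.495978707e8                      # km per AU
    ranges_max = (0.05, 6052.0, 5.0)       # pmin, Rmin (km), rmax (AU)
    ranges_min = (0.7, 71492.0, 0.5, 5.0)  # pmax, Rmax (km), rmin (AU), rmax (AU)

    s = np.linspace(0.1, 4.5, 5)           # apparent separations (AU)
    print(maxdmag(s, ranges_max, x))
    print(mindmag(s, ranges_min, x))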
Opinion: Can blended finance close the generational gap of aid workers? It’s becoming more and more clear to me that the world of economic development is bifurcating. On the one hand, we have the “old school” of development professionals with deep experience on the ground and working within a relatively rigid institutional and financial framework. This category would include many of the multilateral organizations and bilateral aid agencies for whom I have worked for much of my career — the World Bank, the U.S. Agency for International Development, the regional development banks, and so on. It also includes the ecosystem of consultants, implementers, contractors, and NGOs that has evolved to serve these traditional development agencies. On the other hand, you have the “new kids on the block,” who hail from the private sector and represent huge reserves of capital. This includes millennials, who are slated to inherit $30 trillion from the baby boomers and who want to invest their inheritance responsibly; large companies with trillions of dollars of idle cash on their balance sheets, hoping that corporate venturing can replace traditional corporate social responsibility and mitigate disruption in many markets; and the new philanthropists, who want to invest rather than donate. These new players see social entrepreneurship and impact investing as a way to achieve both profits and purpose. Generally speaking, these two worlds do not intersect much. Nowhere was that more evident than last month, when we saw the World Bank annual meetings in Washington, D.C., at the same time as the Social Capital Markets conference in San Francisco. Although a smattering of people from the U.N., the World Bank, USAID, and other development agencies attended SOCAP this year, they made up at most 5 percent of the crowd. Most people — many of them very young — came to SOCAP from the “new kids on the block” segment. And yet, there was significant overlap in the topics of discussion at these meetings. Most notably, the focus on mobilizing trillions (rather than billions) of dollars to meet the U.N.’s Sustainable Development Goals. The SDGs envision a world free from poverty by 2030 and lay out 17 goals we have to meet to get there. I’ve been excited and impressed by the wide range of stakeholders who seem motivated by the SDGs. Corporates, civil society, governments: everyone seems to be talking about how to achieve the Goals. Clearly, if the Goals will require $2.4 trillion more than is currently being spent via traditional development assistance and foreign direct investment, then private capital markets and larger investors will have to get involved. At the same time, it is estimated that, should the SDGs actually materialize, they will offer a $12 trillion business opportunity by 2030. That’s the kind of number that has companies and investors perking up their ears. Indeed, when the U.N.’s Business and Sustainable Development Commission laid out the business case for achieving the SDGs, it was confirming a growing sense that the SDGs represent good business. But the new players and the market can’t do this all on their own — at least not on the accelerated schedule demanded by the SDGs. We need a “worlds collide” approach to solving the world’s most intractable economic, social, and environmental problems. One promising solution gaining traction is blended finance. Blended finance refers to how grant funding from the more traditional development players can leverage and crowd in private capital. 
This happens when the subsidized funds reduce real and perceived risks, better enable both potential investee entrepreneurs and new investors to facilitate transactions, and bring together otherwise uneasy bedfellows to help them speak the same language. At DAI, we are heading a consortium of “new” players (CrossBoundary, Tideline, and Convergence) to help USAID accelerate the development of blended finance. USAID is innovating at the frontier of blended finance with a new program called The Invest Project, which seeks to unlock the potential of private capital to drive inclusive growth in countries around the world. Encouraging investment in these markets — especially in high-impact areas important to USAID such as agriculture, financial services, infrastructure, energy, clean water, health, and education — requires new forms of collaboration between USAID and the investment community. Whether through blended finance or other mechanisms, I hope that we see more of this dialogue and understanding across the two worlds, traditional and new. It’s time for a meeting of the minds, not only to mobilize the kind of cash required to meet the SDGs, but to come up with new ways to address development problems. If each of us continues to work solely on our own and within our comfort zones, we will struggle to generate the breakthrough models we so desperately need.
import json import calendar import datetime import decimal import sys from typing import Union, Any, Dict from algoliasearch.helpers import get_items # Python 3 if sys.version_info >= (3, 0): from urllib.parse import urlencode else: from urllib import urlencode # pragma: no cover class QueryParametersSerializer(object): @staticmethod def serialize(query_parameters): # type: (Dict[str, Any]) -> str for key, value in get_items(query_parameters): if isinstance(value, (list, dict)): value = json.dumps(value) elif isinstance(value, bool): value = "true" if value else "false" query_parameters[key] = value return urlencode(sorted(get_items(query_parameters), key=lambda val: val[0])) class SettingsDeserializer(object): @staticmethod def deserialize(data): # type: (Dict[str, Any]) -> dict keys = { "attributesToIndex": "searchableAttributes", "numericAttributesToIndex": "numericAttributesForFiltering", "slaves": "replicas", } for deprecated_key, current_key in get_items(keys): if deprecated_key in data: data[current_key] = data.pop(deprecated_key) return data class DataSerializer(object): @staticmethod def serialize(data): # type: (Union[Dict[str, Any], list]) -> str return json.dumps(data, cls=JSONEncoder) class JSONEncoder(json.JSONEncoder): def default(self, obj): # type: (object) -> object if isinstance(obj, decimal.Decimal): return float(obj) elif isinstance(obj, datetime.datetime): return int(calendar.timegm(obj.utctimetuple())) elif type(obj).__str__ is not object.__str__: return str(obj) return json.JSONEncoder.default(self, obj)
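# A small usage sketch (not part of the original module); the parameter
# values below are illustrative only.
if __name__ == '__main__':
    params = {
        'query': 'phone',
        'hitsPerPage': 20,
        'facets': ['brand'],
        'getRankingInfo': True,
    }
    # Lists/dicts are JSON-encoded, booleans become "true"/"false", and keys
    # are sorted, so the resulting query string is stable across runs.
    print(QueryParametersSerializer.serialize(params))

    legacy = {'attributesToIndex': ['name'], 'slaves': ['index_replica']}
    # Deprecated setting names are mapped to their current equivalents.
    print(SettingsDeserializer.deserialize(legacy))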
===Cats and dogs===
Cats and dogs are our most common companion animals - at least in the West. How do you feel about the following activities?
*Keeping large dogs in flats.
*Keeping dogs tied to a post all day.
*De-clawing cats.
*Trimming or cutting the whiskers of cats.
*Neutering dogs.
*Neutering cats.
*Cutting off or surgically shaping dogs' ears or tails.
import json from flask import url_for, make_response from .flaskapp import app, resource from .util import * @app.route('/<username>', methods=['GET', 'PUT']) def user(username): if request.method == 'GET': return get_user(username) elif request.method == 'PUT': return put_user(username) # should never be reached abort(500) # pragma: no coverage @resource def get_user(username): user = get_user_or_404(username) return { 'username': user.username, 'resources': { 'maps': { 'url': url_for_user_maps(user) }, 'layers': { 'url': url_for_user_layers(user) }, 'buckets': { 'url': url_for_user_buckets(user) }, }, } def put_user(username): # This will replace the one currently in the DB user = model.User(username) user.save() response = make_response(json.dumps({ 'url': url_for_user(user) }), 201) response.headers['Location'] = url_for_user(user) response.headers['Content-Type'] = 'application/json' return response
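# A client-side sketch of the two routes above (not part of the original
# module), assuming the Flask app is served locally on port 5000; the
# username "alice" is purely illustrative.
#
#   import requests
#
#   created = requests.put('http://localhost:5000/alice')
#   print(created.status_code, created.headers.get('Location'))
#
#   profile = requests.get('http://localhost:5000/alice')
#   print(sorted(profile.json()['resources'].keys()))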
Versatile duo Mike Blair and Greig Laidlaw will steer the Edinburgh Rugby back-line against Cardiff Blues in round four of the Heineken Cup at Murrayfield Stadium tomorrow (kick-off 8pm). Two wins from three in Heineken Cup Pool 2 means Edinburgh Rugby are just three points behind this weekend’s opponents, who leapfrogged Edinburgh into the Pool’s top spot in the reverse fixture last Friday (25-8). Now, with home advantage, the capital club have the chance to redress the Pool positions, with a win capable of restoring Edinburgh to the top spot with two matches remaining – Racing Metro away followed by London Irish at Murrayfield. The selection of Blair and Laidlaw is the only change in the Edinburgh Rugby back-line, while two personnel changes have been made in the pack – Grant Gilchrist and Netani Talei coming in for Esteban Lozada and Stuart McInally in the second and back-row. Talei’s introduction means there’s no room for McInally in the match-day 23, with Ross Rennie retaining the back-row replacement berth, while Lozada takes Steven Turnbull’s place on the bench. Talei, who started and scored in the club’s last home Heineken Cup match against Racing Metro, said: “Everyone’s looking forward to this game, the boys are confident and have been focussed on this match since we left Cardiff on Saturday. It’s a big, big game for us. Other notable inclusions see centre Matt Scott return from a hip injury sustained in the side’s 48-47 win over Racing Metro, while the experienced Phil Godman provides the half-back replacement option for Laidlaw and Blair from the bench.
import numpy as np, os, itertools import pandas as pd from .comparison_metrics import (sim_xy, selInf_R, glmnet_lasso, coverage, compare_sampler_MLE) def compare_sampler_mle(n=500, p=100, rho=0.35, s=5, beta_type=1, snr_values=np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.42, 0.71, 1.22, 2.07]), target="selected", tuning_rand="lambda.1se", randomizing_scale= np.sqrt(0.50), ndraw=50, outpath=None): df_selective_inference = pd.DataFrame() if n > p: full_dispersion = True else: full_dispersion = False snr_list = [] for snr in snr_values: snr_list.append(snr*np.ones(2)) output_overall = np.zeros(23) for i in range(ndraw): output_overall += np.squeeze( compare_sampler_MLE(n=n, p=p, nval=n, rho=rho, s=s, beta_type=beta_type, snr=snr, target = target, randomizer_scale=randomizing_scale, full_dispersion=full_dispersion, tuning_rand=tuning_rand)) nreport = output_overall[22] randomized_MLE_inf = np.hstack(((output_overall[0:7] / float(ndraw - nreport)).reshape((1, 7)), (output_overall[7:11] / float(ndraw)).reshape((1, 4)))) randomized_sampler_inf = np.hstack(((output_overall[11:18] / float(ndraw - nreport)).reshape((1, 7)), (output_overall[18:22] / float(ndraw)).reshape((1, 4)))) df_MLE = pd.DataFrame(data=randomized_MLE_inf, columns=['coverage', 'length', 'prop-infty', 'tot-active', 'bias', 'sel-power', 'time', 'power', 'power-BH', 'fdr-BH', 'tot-discoveries']) df_MLE['method'] = "MLE" df_sampler = pd.DataFrame(data=randomized_sampler_inf, columns=['coverage', 'length', 'prop-infty', 'tot-active', 'bias', 'sel-power', 'time', 'power', 'power-BH', 'fdr-BH', 'tot-discoveries']) df_sampler['method'] = "Sampler" df_selective_inference = df_selective_inference.append(df_MLE, ignore_index=True) df_selective_inference = df_selective_inference.append(df_sampler, ignore_index=True) snr_list = list(itertools.chain.from_iterable(snr_list)) df_selective_inference['n'] = n df_selective_inference['p'] = p df_selective_inference['s'] = s df_selective_inference['rho'] = rho df_selective_inference['beta-type'] = beta_type df_selective_inference['snr'] = pd.Series(np.asarray(snr_list)) df_selective_inference['target'] = target if outpath is None: outpath = os.path.dirname(__file__) outfile_inf_csv = (os.path.join(outpath, "compare_" + str(n) + "_" + str(p) + "_inference_betatype" + str(beta_type) + target + "_rho_" + str(rho) + ".csv")) outfile_inf_html = os.path.join(outpath, "compare_" + str(n) + "_" + str(p) + "_inference_betatype" + str(beta_type) + target + "_rho_" + str(rho) + ".html") df_selective_inference.to_csv(outfile_inf_csv, index=False) df_selective_inference.to_html(outfile_inf_html)
It seems even the massive corporations were not on top of it, as Google has been fined 50 million euros (£44m) for GDPR breaches. The French data regulator CNIL has ruled that Google showed a lack of transparency, provided inadequate information, and lacked valid consent regarding ads personalisation. It also stated that it believed people were not sufficiently informed about how Google collected data to personalise advertising. This decision stemmed from two complaints against Google filed in May 2018 by privacy rights groups noyb and La Quadrature du Net (LQDN). While Google EU has its base in Ireland, the French data protection regulator took the case as its Irish counterpart had limited power. This decision highlights how hard the GDPR can bite when it is breached, something that has worried a number of businesses since its implementation. In response to the decision, Google have said they are “deeply committed to meeting those expectations and the consent requirements of the GDPR.” See this as a warning: make sure you are compliant, or brace yourself for hefty financial consequences.
#!/usr/bin/env python # -*- coding: utf-8 -*- """ Alaa El Jawad ~~~~~~~~~~~~~ This node subscribes to the mesured position of the sailboat and publishes an interval where the sailboat should be """ import rospy from shepherd_msg.msg import RosInterval, SailboatPoseInterval, SailboatPose # -------------------------------------------------------------------------------- # ROS Node initialisation # -------------------------------------------------------------------------------- rospy.init_node('sailboatX_locator') # -------------------------------------------------------------------------------- # Get sensors precision from rosparam (if any) # -------------------------------------------------------------------------------- # GPS Precision gps_noise = 2 if rospy.has_param('sailboat_gps_noise'): gps_noise = rospy.get_param('sailboat_gps_noise') rospy.loginfo('I Precision was set to %f', gps_noise) else: msg = 'GPS Precision was not set in param server, defaulting to: {} m' msg = msg.format(gps_noise) rospy.loginfo(msg) # IMU Precision imu_noise = 0.2 if rospy.has_param('sailboat_imu_noise'): imu_noise = rospy.get_param('sailboat_imu_noise') rospy.loginfo('IMU Precision was set to %f', imu_noise) else: msg = 'IMU Precision was not set in param server, defaulting to: {} deg' msg = msg.format(imu_noise) rospy.loginfo(msg) # -------------------------------------------------------------------------------- # Publisher of the interval of pose # -------------------------------------------------------------------------------- pose_pub = rospy.Publisher('pose_interval', SailboatPoseInterval, queue_size=1) est_pub = rospy.Publisher('pose_est', SailboatPose, queue_size=1) # -------------------------------------------------------------------------------- # Subscribe to SailboatPose mesured data # -------------------------------------------------------------------------------- def publish_pose_interval(msg): global gps_noise, imu_noise poseI = SailboatPoseInterval() poseI.x = RosInterval(msg.pose.x - gps_noise, msg.pose.x + gps_noise) poseI.y = RosInterval(msg.pose.y - gps_noise, msg.pose.y + gps_noise) poseI.theta = RosInterval( msg.pose.theta - imu_noise, msg.pose.theta + imu_noise) pose_pub.publish(poseI) # for the moment pose_est=pose_noisy est_pub.publish(msg) sb_pose_sub = rospy.Subscriber( 'pose_noisy', SailboatPose, publish_pose_interval) rospy.spin()
In August 2016, I hiked a solo round trip from Tuolumne Meadows to what John Muir called Dome Dam, a tough granite wall that resisted the carving of the multiple glaciers that passed down this canyon over the last million years. Just beyond this granite dam, the Tuolumne River falls rapidly, and literally, in multiple cascades -- Tuolumne Falls and White Cascade -- on its way to the next flat of Glen Aulin. I wanted to climb two of the domes that form this "Dome Dam". The first is 500-foot-high "Dingley Dome"; the second, about 800 feet high, is one I call River Dome II. The first offers a fine view of the Cathedral Range peaks, and the second a fine view down the Grand Canyon of the Tuolumne. I made both climbs and wended my way back off trail by following the winding, singing Tuolumne River to camp. The late-season Tuolumne was low but beautiful. Enjoy these photographs of my day's adventure!
# -*- coding: utf-8 -*- """Tests for pomodoro service.""" import os import sys import time import unittest import pytest from unittest.mock import Mock from pomito import main, pomodoro, task from pomito.plugins.ui import UIPlugin from pomito.plugins.task import TaskPlugin from pomito.test import PomitoTestFactory class PomodoroServiceTests(unittest.TestCase): """Tests for pomodoro service. - test_break_stopped_without_start - test_session_stopped_without_start - test_interruption_stopped_without_start - test_get_config_gets_value_for_plugin_and_key - test_get_config_throws_for_invalid_plugin - test_get_config_throws_for_invalid_key - test_get_config_throws_for_invalid_inifile """ def setUp(self): test_factory = PomitoTestFactory() self.pomodoro_service = test_factory.create_fake_service() self.dummy_task = Mock(spec=task.Task) self.dummy_callback = Mock() def tearDown(self): self.pomodoro_service._pomito_instance.exit() def test_current_task_none_for_default_pomodoro(self): assert self.pomodoro_service.current_task is None def test_current_task_is_set_for_running_session(self): self.pomodoro_service.start_session(self.dummy_task) assert self.pomodoro_service.current_task == self.dummy_task self.pomodoro_service.stop_session() def test_current_task_none_after_session_stop(self): self.pomodoro_service.start_session(self.dummy_task) self.pomodoro_service.stop_session() assert self.pomodoro_service.current_task is None def test_get_config_gets_value_for_plugin_and_key(self): pass def test_get_config_returns_none_invalid_plugin(self): val = self.pomodoro_service.get_config("dummy_plugin", "dummy_key") assert val is None def test_get_task_plugins_gets_list_of_all_task_plugins(self): from pomito import plugins plugins.PLUGINS = {'a': plugins.task.nulltask.NullTask(None), 'b': self.pomodoro_service} task_plugins = self.pomodoro_service.get_task_plugins() assert task_plugins == [plugins.PLUGINS['a']] def test_get_tasks_returns_tasks_for_the_user(self): self.pomodoro_service.get_tasks() self.pomodoro_service \ ._pomito_instance \ .task_plugin.get_tasks.assert_called_once_with() def test_get_tasks_by_filter_returns_tasks_match_filter(self): self.pomodoro_service.get_tasks_by_filter("dummy_filter") self.pomodoro_service \ ._pomito_instance \ .task_plugin.get_tasks_by_filter \ .assert_called_once_with("dummy_filter") def test_get_task_by_id_returns_task_matching_task_idish(self): self.pomodoro_service.get_task_by_id(10) self.pomodoro_service \ ._pomito_instance \ .task_plugin.get_task_by_id \ .assert_called_once_with(10) def test_start_session_throws_if_no_task_is_provided(self): self.assertRaises(Exception, self.pomodoro_service.start_session, None) def test_stop_session_waits_for_timer_thread_to_join(self): self.pomodoro_service.start_session(self.dummy_task) assert self.pomodoro_service._timer.is_alive() self.pomodoro_service.stop_session() assert self.pomodoro_service._timer.is_alive() is False def test_stop_break_waits_for_timer_thread_to_join(self): self.pomodoro_service.start_break() assert self.pomodoro_service._timer.is_alive() self.pomodoro_service.stop_break() assert self.pomodoro_service._timer.is_alive() is False def test_session_started_is_called_with_correct_session_count(self): self.pomodoro_service.signal_session_started \ .connect(self.dummy_callback, weak=False) self.pomodoro_service.start_session(self.dummy_task) self.dummy_callback.assert_called_once_with(None, session_count=0, session_duration=600, task=self.dummy_task) self.pomodoro_service.signal_session_started \ 
.disconnect(self.dummy_callback) self.pomodoro_service.stop_session() def test_session_stopped_for_reason_interrupt(self): self.pomodoro_service.signal_session_stopped \ .connect(self.dummy_callback, weak=False) self.pomodoro_service.start_session(self.dummy_task) self.pomodoro_service.stop_session() self.dummy_callback.\ assert_called_once_with(None, session_count=0, task=self.dummy_task, reason=pomodoro.TimerChange.INTERRUPT) self.pomodoro_service.signal_session_stopped \ .disconnect(self.dummy_callback) def test_session_stopped_for_reason_complete(self): self.pomodoro_service.signal_session_stopped \ .connect(self.dummy_callback, weak=False) self.pomodoro_service.start_session(self.dummy_task) self.pomodoro_service._timer.trigger_callback(pomodoro.TimerChange.COMPLETE) self.dummy_callback.assert_called_once_with(None, session_count=1, task=self.dummy_task, reason=pomodoro.TimerChange.COMPLETE) self.pomodoro_service.signal_session_stopped\ .disconnect(self.dummy_callback) def test_break_started_shortbreak(self): self._test_break_started(pomodoro.TimerType.SHORT_BREAK, 120) def test_break_started_longbreak(self): self.pomodoro_service._session_count = 4 self._test_break_started(pomodoro.TimerType.LONG_BREAK, 300) def _test_break_started(self, break_type, duration): self.pomodoro_service.signal_break_started \ .connect(self.dummy_callback, weak=False) self.pomodoro_service.start_break() self.dummy_callback\ .assert_called_once_with(None, break_type=break_type, break_duration=duration) self.pomodoro_service.stop_break() self.pomodoro_service.signal_break_started \ .disconnect(self.dummy_callback) def test_break_stopped_shortbreak_for_reason_complete(self): self.pomodoro_service.signal_break_stopped\ .connect(self.dummy_callback, weak=False) self.pomodoro_service.start_break() self.pomodoro_service._timer.trigger_callback(pomodoro.TimerChange.COMPLETE) self.dummy_callback.assert_called_once_with(None, break_type=pomodoro.TimerType.SHORT_BREAK, reason=pomodoro.TimerChange.COMPLETE) self.pomodoro_service.signal_break_stopped\ .disconnect(self.dummy_callback) def test_break_stopped_shortbreak_for_reason_interrupt(self): self.pomodoro_service.signal_break_stopped\ .connect(self.dummy_callback, weak=False) self.pomodoro_service.start_break() self.pomodoro_service.stop_break() self.dummy_callback.assert_called_once_with(None, break_type=pomodoro.TimerType.SHORT_BREAK, reason=pomodoro.TimerChange.INTERRUPT) self.pomodoro_service.signal_break_stopped\ .disconnect(self.dummy_callback) def test_break_stopped_longbreak_for_interrupt(self): self.pomodoro_service._session_count = 4 self.pomodoro_service.signal_break_stopped\ .connect(self.dummy_callback, weak=False) self.pomodoro_service.start_break() self.pomodoro_service.stop_break() self.dummy_callback.assert_called_once_with(None, break_type=pomodoro.TimerType.LONG_BREAK, reason=pomodoro.TimerChange.INTERRUPT) self.pomodoro_service.signal_break_stopped\ .disconnect(self.dummy_callback) def test_get_data_dir_returns_correct_default(self): expected_data_dir = os.path.join(os.path.expanduser("~"), "pomito") if sys.platform.startswith("linux"): home_dir = os.getenv("HOME") alt_data_dir = os.path.join(home_dir, ".local/share") expected_data_dir = os.path\ .join(os.getenv("XDG_DATA_HOME") or alt_data_dir, "pomito") data_dir = self.pomodoro_service.get_data_dir() assert data_dir == expected_data_dir def test_get_db_returns_a_valid_database(self): test_db = "dummy_db" pomodoro_service = pomodoro.Pomodoro(main.Pomito(database=test_db)) assert 
pomodoro_service.get_db() == test_db @pytest.mark.perf def test_session_started_perf(self): t = Mock(spec=task.Task) pomito = main.Pomito(None) pomito.ui_plugin = DummyUIPlugin() pomito.task_plugin = Mock(spec=TaskPlugin) pomito._message_dispatcher.start() pomito.pomodoro_service.signal_session_started \ .connect(pomito.ui_plugin.notify_session_started, weak=False) time_start = time.time() # initial timestamp pomito.pomodoro_service.start_session(t) time.sleep(1) time_end = pomito.ui_plugin.timestamp self.assertAlmostEqual(time_start, time_end, delta=0.1) pomito.ui_plugin.timestamp = None pomito.pomodoro_service.stop_session() pomito.exit() class TimerTests(unittest.TestCase): def setUp(self): self.timestamp_start = 0.0 self.timestamp_end = 0.0 self.delta = 0.0 self.mock_callback = Mock() def tearDown(self): self.timestamp_start = self.timestamp_end = self.delta = 0.0 def dummy_callback(self, reason='whatever'): self.timestamp_end = time.time() self.delta += (self.timestamp_end - self.timestamp_start) self.timestamp_start = self.timestamp_end self.reason = reason def test_mock_callback_reason_increment_and_complete(self): timer = pomodoro.Timer(0.2, self.mock_callback, 0.1) timer.start() time.sleep(0.3) assert self.mock_callback.call_count == 2 self.assertListEqual(self.mock_callback.call_args_list, [((pomodoro.TimerChange.INCREMENT,), {}), ((pomodoro.TimerChange.COMPLETE,), {})], 'invalid notify_reason') def test_mock_callback_reason_interrupt(self): timer = pomodoro.Timer(10, self.mock_callback, 1) timer.start() timer.stop() time.sleep(0.1) assert self.mock_callback.call_count == 1 self.assertListEqual(self.mock_callback.call_args_list, [((pomodoro.TimerChange.INTERRUPT,), {})], 'invalid notify_reason') def test_start_throws_when_called_on_same_thread(self): def callback_with_catch(reason): try: timer.start() assert False # expect previous call to throw except RuntimeError: pass timer = pomodoro.Timer(10, callback_with_catch, 1) timer.start() timer.stop() time.sleep(0.1) def test_stop_throws_when_called_on_same_thread(self): def callback_with_catch(reason): try: timer.stop() assert False # expect previous call to throw except RuntimeError: pass timer = pomodoro.Timer(10, callback_with_catch, 1) timer.start() timer.stop() time.sleep(0.1) @pytest.mark.perf def test_callback_granular(self): duration = 60.00 delta_granular = 1.0 # windows if sys.platform.startswith("linux"): delta_granular = 0.03 timer = pomodoro.Timer(duration, self.dummy_callback) self.timestamp_start = time.time() timer.start() time.sleep(duration + 2) assert self.reason == pomodoro.TimerChange.COMPLETE self.assertAlmostEqual(self.delta, duration, delta=delta_granular) class DummyUIPlugin(UIPlugin): def __init__(self): """Create an instance of dummy plugin.""" self.timestamp = 100.0 return def run(self): pass def notify_session_started(self, sender, **kwargs): self.timestamp = time.time() return def initialize(self): pass
I've been extremely disappointed with Jean-Michael Seri this season. When he arrived from Nice in the summer, I was as surprised as anyone that Fulham managed to pull off such a coup in bringing him to the club and excited to see a player of his magnitude strut his stuff at Craven Cottage. I wasn't disappointed in the first few games - he was magnificent against Crystal Palace, while he was on a different planet altogether in the win over Burnley. But since then, he just seems to be, well, there. He's made 200 more passes than any other player in white this season, but they don't seem to be passes that actually contribute to much for Fulham. Seri hasn't done anything that has made me think he's a player that could've played for Barcelona, and his performances have been very disappointing to say the least. That was the same in the Chelsea defeat - he started as the defensive point in a midfield diamond but was caught in possession within three minutes by N'Golo Kante, which led to the first goal in what was an almost carbon copy of the goal he gifted to Manchester City at the Etihad. I know there's a player there - he made some decent tackles - but he just isn't having the impact that you'd expect from a player of his calibre. Is it because he's taking a while to adapt to the Premier League? Is it because of the frantic pace the English league has compared to France? Is he not suited to Fulham's system? Did the players around him at Nice make him look better than he actually was? I don't know the answer, but we have to start seeing more from Seri this season. Fulham's search for a clean sheet continued after they shipped two goals at Stamford Bridge, meaning they're now the only team in the top four tiers of English football not to shut out their opponents this season. The 35 goals they've conceded are also the joint-most the club have conceded after 14 games of a top-flight campaign, tied with the 1959/60 season, and six more than any other side in the Premier League this season. Claudio Ranieri has stressed the importance of keeping a clean sheet, but at the moment it really doesn't look like that will happen any time soon, and the Italian knows as well as anyone that if you keep leaking goals you will go down. We saw another change in shape from the manager to try and stifle Chelsea's more attacking players, but yet another individual mistake early in the game meant that it was even more of an uphill struggle after just three minutes. Those individual errors plagued Slavisa Jokanovic's 12 games in the Premier League, but there's very little a manager can do about them once those players take to the pitch - tactics only go so far before you have to trust your players once they cross the white line. Fulham desperately need a clean sheet and they need it soon. That leads me nicely to my next point. While I'm sure most reading this won't consider Stamford Bridge one of their favourite grounds, from a journalist's point of view it's fantastic. The press box is located about five rows behind the dugouts, meaning we can see and hear everything that the new managers are doing and shouting at their teams on the pitch. It makes a huge difference from Craven Cottage, where I can barely see Ranieri during the game.
On Sunday, I got to witness Ranieri's more animated side as he slammed his hands down in frustration at Fulham's defending and shouted 'oh no' every time they lost the ball. It's clear from watching him that he's as frustrated as the fans are with his side's defensive capabilities, and that defensive shape is high on the menu at Motspur Park at the moment. In press conferences Ranieri is a jovial character, almost like a lovable granddad, but we got to see first-hand what he is like as Ranieri the manager on Sunday. Ranieri is living up to his title as the Tinkerman as he changed from the formation he played against Southampton, instead employing a diamond in the midfield with Ryan Sessegnon up top. It didn't quite work as he would've wanted - Chelsea exploited space down the flanks while Sessegnon didn't see much of the ball up front, so he was pulled into the wide area of a front three, with Stefan Johansen on the other flank, to try to provide that width. That didn't work either, so Ranieri rang the changes at half time and brought on Floyd Ayite and Aboubakar Kamara to give his side that much-needed width, and it worked well. Fulham moved the ball out wide better and got into some good positions, but they weren't able to put in any quality crosses for Aleksandar Mitrovic to feed off as he cut a frustrated figure. Ranieri certainly showed he can tinker cleverly with the half-time subs at Stamford Bridge, but he's still in the process of finding out what works for his new team. Who would've predicted after the Cardiff City defeat that Calum Chambers would be a revelation in midfield for Fulham? But that's what's happened, with Chambers now an important part of Ranieri's midfield options after another solid game in the heart of it for his loan club at Stamford Bridge. I think he does well there in a defensive capacity because of his reading of the game, which is probably one of his strongest aspects, meaning he's able to be in the right positions to stop the opposing side's attacks - something we haven't really seen from Andre-Frank Zambo Anguissa yet this season. Going forward there's still a lot to work on - he had a number of really good chances for Fulham, and we saw him turn into peak Ronaldinho before smashing the ball out for a throw-in when he meant to cross it - but it's very encouraging from the Arsenal loanee.
#FileName:Apriori.py import sys import copy def InsertToSortedList(tlist,item): i = 0 while i < len(tlist): if item < tlist[i]: break elif item == tlist[i]: return i+=1 tlist.insert(i, item) def CompItemSet(x,y): i = 0 while i < len(x) and i < len(y): if x[i] < y[i]: return -1 elif x[i] > y[i]: return 1 i += 1 if i == len(x) and i == len(y): return 0 elif i < len(y): return -1 return 1 RawFile = "casts.list.txt" ResultFile = "casts.fis.txt" infile = file(RawFile,'r') s = infile.readline().lower() WordIDTable = {} WordList = [] WordFreqSet = {} TSet = [] #Transaction Database ItemSet = [] #A transaction # load transactions while len(s) > 0: items = s.strip().split('\t') for i in range(1, len(items)): tmpstr = items[i].strip() if tmpstr not in WordIDTable: WordList.append(tmpstr) WordIDTable[tmpstr] = len(WordList) WordFreqSet[WordIDTable[tmpstr]] = 1 else: WordFreqSet[WordIDTable[tmpstr]] += 1 InsertToSortedList(ItemSet,WordIDTable[tmpstr]) TSet.append(ItemSet) ItemSet = [] s = infile.readline().lower() infile.close() print len(WordList), "person names loaded!" print len(TSet), "transactions loaded!" #ItemSetComp = lambda x,y:CompItemSet(x,y) TSet.sort(CompItemSet) MinSupCount = 5 # set the minimum support LSet = [] # frequent item set CSet = [] # candidate item set CSet.append([]) # get 1-frequent item set LSet.append([]) for (item,freq) in WordFreqSet.items(): if freq >= MinSupCount: LSet[0].append([item]) LSet[0].sort(CompItemSet) print len(LSet[0]), "1-frequent item sets found!" # remove transactions containing no 1-frequent item set # and get 2-frequent item set Freq2Dic = {} for itemset in TSet: i = 0 while i < len(itemset): if WordFreqSet[itemset[i]] < MinSupCount: itemset.remove(itemset[i]) i += 1 if len(itemset) < 1: TSet.remove(itemset) elif len(itemset) > 1: # generate the dictionary of 2-item pairs, calculate the frequency for j in range(len(itemset)-1): for k in range(j+1,len(itemset)): temps = str(itemset[j])+'-'+str(itemset[k]) if temps not in Freq2Dic: Freq2Dic[temps] = 1 else: Freq2Dic[temps] += 1 # Get 2-frequent item set CSet.append([]) LSet.append([]) for (item,freq) in Freq2Dic.items(): if freq >= MinSupCount: templist = [] parts = item.split('-') templist.append(int(parts[0])) templist.append(int(parts[1])) LSet[1].append(templist) LSet[1].sort(CompItemSet) print len(TSet), "transactions after pruning!" def IsEqual(list1, list2): i = 0 while i < len(list1) and i < len(list2): if list1[i] != list2[i]: return False i += 1 if i == len(list1) and i == len(list2): return True else: return False ################################### # for pruning # 1: You need to decide whether 'newSet' is included in the candidate item sets for (k+1) # 'tmpDic' is the dictionary built from k-frequent item sets def IsValid(newSet, tmpDic): # TODO: for i in range(len(newSet)-2): s = "" for j in range(len(newSet)): if j != i: s += "-" + str(newSet[j]) if s[1:] not in tmpDic: return False return True # link and prune def GenCand(k, LSet, CSet): # generate the dictionary built from k-frequent item sets PreSetDic = {} for itemset in LSet[k-1]: s = "" for j in range(len(itemset)): s += "-" + str(itemset[j]) temps = s[1:] if temps not in PreSetDic: PreSetDic[temps] = True else: print "Duplicate frequent itemset found!" 
################################### # 2: You need to generate the candidate item sets for (k+1) # You MAY call the function 'IsValid(,)' you have built, and use the dictionary 'PreSetDic' generated above # TODO: for i in range(len(LSet[k-1])-1): itemSet1 = LSet[k-1][i] for j in range(i+1,len(LSet[k-1])): n = 0 itemSet2 = LSet[k-1][j] while n < len(itemSet1) and n < len(itemSet2): if itemSet1[n] != itemSet2[n]: break n += 1 if len(itemSet1) - n == 1 and len(itemSet2) - n == 1: newItemSet = copy.copy(itemSet1) newItemSet.append(itemSet2[n]) if IsValid(newItemSet, PreSetDic): CSet[k].append(newItemSet) else: break def GetFreq(candlist,tarlist): ci = 0 ti = 0 while ci < len(candlist) and ti < len(tarlist): if candlist[ci] < tarlist[ti]: break elif candlist[ci] == tarlist[ti]: ci += 1 ti += 1 else: ti += 1 if len(candlist) == ci: return 1 else: return 0 # print the solution info k = 2 while len(LSet[k-1]) > 1: print len(LSet[k-1]), str(k)+"-frequent item sets found!" CSet.append([]) GenCand(k,LSet,CSet) # You are supposed to complete this function print len(CSet[k]), str(k+1)+"-candidate item sets found!" LSet.append([]) for candlist in CSet[k]: count = 0 for tarlist in TSet: count += GetFreq(candlist,tarlist) if count >= MinSupCount: LSet[k].append(candlist) k += 1 # write the result outfile = file(ResultFile, 'w') i = 1 num = 0 for fislist in LSet: if len(fislist) < 1: LSet.remove(fislist) continue num += len(fislist) outfile.write(str(i)+"-frequent item sets:\r\n") for fis in fislist: for itemid in fis: outfile.write(WordList[itemid-1]) outfile.write('\t') outfile.write('\r\n') i += 1 outfile.close() print num, "frequent item sets in total!"
from PyQt4 import QtCore

from hashmal_lib.core import chainparams
from base import Plugin, BasePluginUI, Category

def make_plugin():
    p = Plugin(ChainParams)
    p.has_gui = False
    return p

class ChainParamsObject(QtCore.QObject):
    """This class exists so that a signal can be emitted when chainparams presets change."""
    paramsPresetsChanged = QtCore.pyqtSignal()

class ChainParams(BasePluginUI):
    """For augmentation purposes, we use this plugin to help with chainparams presets."""
    tool_name = 'Chainparams'
    description = 'Chainparams allows plugins to add chainparams presets for Hashmal to use.'
    category = Category.Core

    def __init__(self, *args):
        super(ChainParams, self).__init__(*args)
        self.chainparams_object = ChainParamsObject()
        self.paramsPresetsChanged = self.chainparams_object.paramsPresetsChanged

        self.augment('chainparams_presets', None, callback=self.on_chainparams_augmented)

    def add_params_preset(self, preset):
        try:
            chainparams.add_preset(preset)
        except Exception as e:
            self.error(str(e))

    def on_chainparams_augmented(self, data):
        # Assume data is iterable and add each preset it contains.
        try:
            for i in data:
                self.add_params_preset(i)
            return
        except Exception:
            # data is not an iterable; treat it as a single preset.
            self.add_params_preset(data)
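As an aside, the reason ChainParamsObject exists at all is that PyQt signals must be declared as class attributes of a QObject subclass, presumably something the plugin base class is not. Below is a minimal, self-contained sketch of that signal-holder pattern; it is illustrative only, uses made-up names, and does not touch Hashmal's API.

# signal_holder_demo.py -- illustrative sketch of the QObject-as-signal-holder
# pattern used by ChainParamsObject above; all names here are hypothetical.
import sys
from PyQt4 import QtCore

class PresetsNotifier(QtCore.QObject):
    # pyqtSignal only works as a class attribute of a QObject subclass,
    # which is why the plugin wraps its signal in a small helper object.
    presetsChanged = QtCore.pyqtSignal()

if __name__ == '__main__':
    app = QtCore.QCoreApplication(sys.argv)
    notifier = PresetsNotifier()
    notifier.presetsChanged.connect(lambda: sys.stdout.write('presets changed\n'))
    notifier.presetsChanged.emit()  # a direct connection fires synchronously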
Post by Corinne. Originally published August 2, 2017. Last updated May 20, 2018.

Some of Munich's most impressive charms can be enjoyed free of charge. Marienplatz, Viktualienmarkt, Englischer Garten, Karlsplatz, Odeonsplatz — some of the most pleasant and notable spots in the city are available for everyone to enjoy. Mere blocks from that downtown area sit two impressive structures overlooking the Isar River: the Friedensengel (Angel of Peace) and the Maximilianeum.

Post by Corinne. Originally published July 31, 2017. Last updated November 25, 2018.

Spending a Day in Washington, D.C. Spending a day in Washington, D.C. is a great idea. Spending a day in Washington, D.C. in August (the first day of the month, actually) can give a traveler second thoughts. But there I was with a day in Washington, D.C. The nation's capital is full of fantastic options of things to see and do. There's no shortage of great restaurants and shops. And while the city does have plenty of public transit options — from the subway to buses doing circuits of the most popular tourist attractions — the relatively flat landscape makes for fantastic walking. Yeah, a day in Washington, D.C. is full of fun.

Post by Corinne. Originally published August 17, 2016. Last updated May 12, 2017.

In southern Munich, Germany, on the edge of the Theresienwiese — perhaps best known for playing home every year to Oktoberfest — stands a woman. Nearly 61 feet tall, she is clothed classically in a draped Grecian gown. A lion sits loyally at her side, and her left arm is outstretched with a wreath of oak leaves. She represents Bavaria. It would be easy to draw comparisons to another famous female: the Statue of Liberty. But the Bavaria statue, as she is called, is older. And Lady Bavaria has a bit of a secret!

Post by Corinne. Originally published September 18, 2015. Last updated November 11, 2017.

Post by Corinne. Originally published December 26, 2014. Last updated July 14, 2018.

I feel like taking photographs of monuments, busts and sculptures in parks has become my thing. They seem like such underrated works of art that we all take for granted. During our recent trip to Golden Gate Park in San Francisco, I loved turning every corner and not knowing what would be next. Often, it was a statue. Who would it memorialize? So often it seemed somewhat random. English great Shakespeare holds court in the sunny park with German greats Goethe and Schiller. The three rub bronzed elbows with United States presidents, Beethoven (although, as a bust, he is elbow-less), Francis "oh say can you see" Scott Key and Czech philosopher and politician Tomáš Garrigue Masaryk, who is also lacking in the below-the-chin area. And these are merely the statues that we stumbled upon. It's simply lovely!

Post by Corinne. Originally published November 10, 2014. Last updated November 9, 2014.

Sometimes there are places or buildings that I'd like to see but am pretty confident I'll never see. For example, it's pretty likely I'll never get to see the Azadi Tower, or "Freedom Tower," in Tehran, Iran. I don't recall the first time I saw the monument, but it was likely in the news. Over the past few years, it has commonly been used as a backdrop for former President Mahmoud Ahmadinejad's speeches. Originally I thought it was a bridge, but the Azadi Tower is actually a monument in a public square.

Post by Corinne. Originally published September 2, 2013. Last updated September 2, 2013.