# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
import logging
from typing import List, Optional, TYPE_CHECKING

from sqlalchemy.exc import SQLAlchemyError

from superset.charts.filters import ChartFilter
from superset.dao.base import BaseDAO
from superset.extensions import db
from superset.models.slice import Slice

if TYPE_CHECKING:
    from superset.connectors.base.models import BaseDatasource

logger = logging.getLogger(__name__)


class ChartDAO(BaseDAO):
    model_cls = Slice
    base_filter = ChartFilter

    @staticmethod
    def bulk_delete(models: Optional[List[Slice]], commit: bool = True) -> None:
        item_ids = [model.id for model in models] if models else []
        # bulk delete, first delete related data
        if models:
            for model in models:
                model.owners = []
                model.dashboards = []
                db.session.merge(model)
        # bulk delete itself
        try:
            db.session.query(Slice).filter(Slice.id.in_(item_ids)).delete(
                synchronize_session="fetch"
            )
            if commit:
                db.session.commit()
        except SQLAlchemyError as ex:
            if commit:
                db.session.rollback()
            raise ex

    @staticmethod
    def save(slc: Slice, commit: bool = True) -> None:
        db.session.add(slc)
        if commit:
            db.session.commit()

    @staticmethod
    def overwrite(slc: Slice, commit: bool = True) -> None:
        db.session.merge(slc)
        if commit:
            db.session.commit()
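
# --- Illustrative usage (not part of the Superset code base) -----------------
# A minimal sketch of how ChartDAO might be called from service/command code.
# The Slice constructor arguments below are assumptions for illustration only;
# in Superset, charts are normally created through the chart commands/API.
#
#     slc = Slice(slice_name="Example chart", viz_type="table")
#     ChartDAO.save(slc)               # add the new chart and commit
#     slc.slice_name = "Renamed chart"
#     ChartDAO.overwrite(slc)          # merge the change and commit
#     ChartDAO.bulk_delete([slc])      # clear owners/dashboards, then bulk delete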
Captaincy candidates: Hartley, Best, AWJ, Sam, Murray... probably Hartley, but might Murray be a 'captaincy bolter' as the most likely test starter and one of 26 Celts?

Last edited by backrower8 on November 8th, 2016, 6:22 pm, edited 1 time in total.

I obviously have my green and blue goggles on! Does Mako Vunipola play both positions?? Since when??

I agree with a lot of that squad. A few changes I would make would be: Faletau at 6, Itoje in the second row ahead of Toner, Joseph at 13 and Payne on the bench ahead of Zebo. I would also prefer a regular tighthead on the bench ahead of Mako, therefore starting Cole and Furlong to finish!

I'll watch the Lions tour but my only interest really is how many Irish players make the squad and matchday 23. Wouldn't be disappointed with a series loss, even a 3-0 loss. There's talk now that Schmidt may be asked to join their coaching team. I really hope he doesn't. He mentioned he'd be delighted for Andy Farrell to join so he can keep an eye on how the Irish lads do and report back, while Schmidt focuses on the lads in the US and Japan tour. I think this is definitely the right option. I don't want Schmidt sharing a single thing with Gatland on how to beat NZ.

Last edited by Laighin Break on November 14th, 2016, 10:29 am, edited 1 time in total.

Laighin Break wrote: I'll watch the Lions tour but my only interest really is how many Irish players make the squad and matchday 23. Wouldn't be disappointed with a series loss, even a 3-0 loss. There's talk now that Schmidt may be asked to join their coaching team. I really hope he doesn't. He mentioned he'd be delighted for Andy Farrell to join so he can keep an eye on how the Irish lads do and report back, while Schmidt focuses on the lads in the US and Japan tour. I think this is definitely the right option.

Absolutely. International rugby is a sport run like a business. The intellectual property that is the cumulative thoughts, planning and preparation by the National Management Team should not be pi@*ed away on the ego trip and financial re-distribution which a Lions Tour represents. Good luck to the Irish guys who get selected, but I hope they consider it a reconnaissance mission to learn as much as possible about local (and other) opponents.

On the basis of the November Internationals it'll be a straight shoot-out between Murray and Youngs for the SH position atm. Murray ahead so far!

erskinechilders wrote: On the basis of the November Internationals it'll be a straight shoot-out between Murray and Youngs for the SH position atm. Murray ahead so far!

On the basis of the nov intls none of the Welsh players would have made the 2013 Lions squad.

Misread that quote @paddyor. Apologies.

FRONT ROW: Jack McGrath (Ireland), Mako Vunipola (England), Cian Healy (Ireland), Dan Cole (England), Gethin Jenkins (Wales), WP Nel (Scotland), Dylan Hartley (England), Rory Best (Ireland), Jamie George (England).
SECOND ROW: Alun Wyn-Jones (Wales), Maro Itoje (England), Devin Toner (Ireland), George Kruis (England), Joe Launchbury (England).
BACK ROW: Chris Robshaw (England), CJ Stander (Ireland), James Haskell (England), Sam Warburton (Wales), Taulupe Faletau (Wales), Billy Vunipola (England), Jamie Heaslip (Ireland).
HALF BACK: Rhys Webb (Wales), Conor Murray (Ireland), Ben Youngs (England), Dan Biggar (Wales), Jonathan Sexton (Ireland), Owen Farrell (England).
CENTRE: Robbie Henshaw (Ireland), Jamie Roberts (Wales), Jonathan Joseph (England), Jonathan Davies (Wales).
BACK THREE: George North (Wales), Liam Williams (Wales), Anthony Watson (England), Leigh Halfpenny (Wales), Stuart Hogg (Scotland), Rob Kearney (Ireland).

I'd be quite happy if the Lions were dominated by English and Welsh tbh. Leave our guys a chance for a good preseason and hopefully a good season in blue & green, as well as red & white too of course.

Revised Lions squad after the weekend, changes in bold.

FRONT ROW: Jack McGrath (Ireland), Mako Vunipola (England), Cian Healy (Ireland), Dan Cole (England), Tadhg Furlong (Ireland), WP Nel (Scotland), Dylan Hartley (England), Rory Best (Ireland), Jamie George (England).
BACK ROW: CJ Stander (Ireland), Sean O'Brien (Ireland), James Haskell (England), Sam Warburton (Wales), Taulupe Faletau (Wales), Billy Vunipola (England), Jamie Heaslip (Ireland).
HALF BACK: Rhys Webb (Wales), Conor Murray (Ireland), Ben Youngs (England), George Ford (England), Jonathan Sexton (Ireland), Owen Farrell (England).
CENTRE: Robbie Henshaw (Ireland), Jamie Roberts (Wales), Jonathan Joseph (England), Huw Jones (Scotland).

POC speaking on Sky after the game picked the forwards he thinks should be on the Lions squad. McGrath, Furlong, Best, Heaslip and SOB.

blockhead wrote: POC speaking on Sky after the game picked the forwards he thinks should be on the Lions squad.

He actually gave a ringing endorsement of TTT when discussing his picks. I think he was limited to 5 choices.
""" Title: Deep Deterministic Policy Gradient (DDPG) Author: [amifunny](https://github.com/amifunny) Date created: 2020/06/04 Last modified: 2020/09/21 Description: Implementing DDPG algorithm on the Inverted Pendulum Problem. """ """ ## Introduction **Deep Deterministic Policy Gradient (DDPG)** is a model-free off-policy algorithm for learning continous actions. It combines ideas from DPG (Deterministic Policy Gradient) and DQN (Deep Q-Network). It uses Experience Replay and slow-learning target networks from DQN, and it is based on DPG, which can operate over continuous action spaces. This tutorial closely follow this paper - [Continuous control with deep reinforcement learning](https://arxiv.org/pdf/1509.02971.pdf) ## Problem We are trying to solve the classic **Inverted Pendulum** control problem. In this setting, we can take only two actions: swing left or swing right. What make this problem challenging for Q-Learning Algorithms is that actions are **continuous** instead of being **discrete**. That is, instead of using two discrete actions like `-1` or `+1`, we have to select from infinite actions ranging from `-2` to `+2`. ## Quick theory Just like the Actor-Critic method, we have two networks: 1. Actor - It proposes an action given a state. 2. Critic - It predicts if the action is good (positive value) or bad (negative value) given a state and an action. DDPG uses two more techniques not present in the original DQN: **First, it uses two Target networks.** **Why?** Because it add stability to training. In short, we are learning from estimated targets and Target networks are updated slowly, hence keeping our estimated targets stable. Conceptually, this is like saying, "I have an idea of how to play this well, I'm going to try it out for a bit until I find something better", as opposed to saying "I'm going to re-learn how to play this entire game after every move". See this [StackOverflow answer](https://stackoverflow.com/a/54238556/13475679). **Second, it uses Experience Replay.** We store list of tuples `(state, action, reward, next_state)`, and instead of learning only from recent experience, we learn from sampling all of our experience accumulated so far. Now, let's see how is it implemented. """ import gym import tensorflow as tf from tensorflow.keras import layers import numpy as np import matplotlib.pyplot as plt """ We use [OpenAIGym](http://gym.openai.com/docs) to create the environment. We will use the `upper_bound` parameter to scale our actions later. """ problem = "Pendulum-v0" env = gym.make(problem) num_states = env.observation_space.shape[0] print("Size of State Space -> {}".format(num_states)) num_actions = env.action_space.shape[0] print("Size of Action Space -> {}".format(num_actions)) upper_bound = env.action_space.high[0] lower_bound = env.action_space.low[0] print("Max Value of Action -> {}".format(upper_bound)) print("Min Value of Action -> {}".format(lower_bound)) """ To implement better exploration by the Actor network, we use noisy perturbations, specifically an **Ornstein-Uhlenbeck process** for generating noise, as described in the paper. It samples noise from a correlated normal distribution. """ class OUActionNoise: def __init__(self, mean, std_deviation, theta=0.15, dt=1e-2, x_initial=None): self.theta = theta self.mean = mean self.std_dev = std_deviation self.dt = dt self.x_initial = x_initial self.reset() def __call__(self): # Formula taken from https://www.wikipedia.org/wiki/Ornstein-Uhlenbeck_process. 
        x = (
            self.x_prev
            + self.theta * (self.mean - self.x_prev) * self.dt
            + self.std_dev * np.sqrt(self.dt) * np.random.normal(size=self.mean.shape)
        )
        # Store x into x_prev
        # Makes next noise dependent on current one
        self.x_prev = x
        return x

    def reset(self):
        if self.x_initial is not None:
            self.x_prev = self.x_initial
        else:
            self.x_prev = np.zeros_like(self.mean)


"""
The `Buffer` class implements Experience Replay.

---
![Algorithm](https://i.imgur.com/mS6iGyJ.jpg)
---

**Critic loss** - Mean Squared Error of `y - Q(s, a)`
where `y` is the expected return as seen by the Target network,
and `Q(s, a)` is action value predicted by the Critic network. `y` is a moving target
that the critic model tries to achieve; we make this target
stable by updating the Target model slowly.

**Actor loss** - This is computed using the mean of the value given by the Critic network
for the actions taken by the Actor network. We seek to maximize this quantity.

Hence we update the Actor network so that it produces actions that get
the maximum predicted value as seen by the Critic, for a given state.
"""


class Buffer:
    def __init__(self, buffer_capacity=100000, batch_size=64):
        # Number of "experiences" to store at max
        self.buffer_capacity = buffer_capacity
        # Num of tuples to train on.
        self.batch_size = batch_size

        # It tells us the number of times record() was called.
        self.buffer_counter = 0

        # Instead of a list of tuples, as the experience-replay concept goes,
        # we use a separate np.array for each tuple element.
        self.state_buffer = np.zeros((self.buffer_capacity, num_states))
        self.action_buffer = np.zeros((self.buffer_capacity, num_actions))
        self.reward_buffer = np.zeros((self.buffer_capacity, 1))
        self.next_state_buffer = np.zeros((self.buffer_capacity, num_states))

    # Takes (s, a, r, s') observation tuple as input
    def record(self, obs_tuple):
        # Set index to zero if buffer_capacity is exceeded,
        # replacing old records
        index = self.buffer_counter % self.buffer_capacity

        self.state_buffer[index] = obs_tuple[0]
        self.action_buffer[index] = obs_tuple[1]
        self.reward_buffer[index] = obs_tuple[2]
        self.next_state_buffer[index] = obs_tuple[3]

        self.buffer_counter += 1

    # Eager execution is turned on by default in TensorFlow 2. Decorating with
    # tf.function allows TensorFlow to build a static graph out of the logic and
    # computations in our function. This provides a large speed up for blocks of
    # code that contain many small TensorFlow operations such as this one.
    @tf.function
    def update(
        self,
        state_batch,
        action_batch,
        reward_batch,
        next_state_batch,
    ):
        # Training and updating Actor & Critic networks.
        # See Pseudo Code.
        with tf.GradientTape() as tape:
            target_actions = target_actor(next_state_batch, training=True)
            y = reward_batch + gamma * target_critic(
                [next_state_batch, target_actions], training=True
            )
            critic_value = critic_model([state_batch, action_batch], training=True)
            critic_loss = tf.math.reduce_mean(tf.math.square(y - critic_value))

        critic_grad = tape.gradient(critic_loss, critic_model.trainable_variables)
        critic_optimizer.apply_gradients(
            zip(critic_grad, critic_model.trainable_variables)
        )

        with tf.GradientTape() as tape:
            actions = actor_model(state_batch, training=True)
            critic_value = critic_model([state_batch, actions], training=True)
            # Used `-value` as we want to maximize the value given
            # by the critic for our actions
            actor_loss = -tf.math.reduce_mean(critic_value)

        actor_grad = tape.gradient(actor_loss, actor_model.trainable_variables)
        actor_optimizer.apply_gradients(
            zip(actor_grad, actor_model.trainable_variables)
        )

    # We compute the loss and update parameters
    def learn(self):
        # Get sampling range
        record_range = min(self.buffer_counter, self.buffer_capacity)
        # Randomly sample indices
        batch_indices = np.random.choice(record_range, self.batch_size)

        # Convert to tensors
        state_batch = tf.convert_to_tensor(self.state_buffer[batch_indices])
        action_batch = tf.convert_to_tensor(self.action_buffer[batch_indices])
        reward_batch = tf.convert_to_tensor(self.reward_buffer[batch_indices])
        reward_batch = tf.cast(reward_batch, dtype=tf.float32)
        next_state_batch = tf.convert_to_tensor(self.next_state_buffer[batch_indices])

        self.update(state_batch, action_batch, reward_batch, next_state_batch)


# This updates target parameters slowly,
# based on rate `tau`, which is much less than one.
@tf.function
def update_target(target_weights, weights, tau):
    for (a, b) in zip(target_weights, weights):
        a.assign(b * tau + a * (1 - tau))


"""
Here we define the Actor and Critic networks. These are basic Dense models
with `ReLU` activation.

Note: We need the initialization for the last layer of the Actor to be between
`-0.003` and `0.003` as this prevents us from getting `1` or `-1` output values in
the initial stages, which would squash our gradients to zero,
as we use the `tanh` activation.
"""


def get_actor():
    # Initialize weights between -3e-3 and 3e-3
    last_init = tf.random_uniform_initializer(minval=-0.003, maxval=0.003)

    inputs = layers.Input(shape=(num_states,))
    out = layers.Dense(256, activation="relu")(inputs)
    out = layers.Dense(256, activation="relu")(out)
    outputs = layers.Dense(1, activation="tanh", kernel_initializer=last_init)(out)

    # Our upper bound is 2.0 for Pendulum.
    outputs = outputs * upper_bound
    model = tf.keras.Model(inputs, outputs)
    return model


def get_critic():
    # State as input
    state_input = layers.Input(shape=(num_states))
    state_out = layers.Dense(16, activation="relu")(state_input)
    state_out = layers.Dense(32, activation="relu")(state_out)

    # Action as input
    action_input = layers.Input(shape=(num_actions))
    action_out = layers.Dense(32, activation="relu")(action_input)

    # Both are passed through separate layers before concatenating
    concat = layers.Concatenate()([state_out, action_out])

    out = layers.Dense(256, activation="relu")(concat)
    out = layers.Dense(256, activation="relu")(out)
    outputs = layers.Dense(1)(out)

    # Outputs a single value for a given state-action pair
    model = tf.keras.Model([state_input, action_input], outputs)

    return model


"""
`policy()` returns an action sampled from our Actor network plus some noise for
exploration.
""" def policy(state, noise_object): sampled_actions = tf.squeeze(actor_model(state)) noise = noise_object() # Adding noise to action sampled_actions = sampled_actions.numpy() + noise # We make sure action is within bounds legal_action = np.clip(sampled_actions, lower_bound, upper_bound) return [np.squeeze(legal_action)] """ ## Training hyperparameters """ std_dev = 0.2 ou_noise = OUActionNoise(mean=np.zeros(1), std_deviation=float(std_dev) * np.ones(1)) actor_model = get_actor() critic_model = get_critic() target_actor = get_actor() target_critic = get_critic() # Making the weights equal initially target_actor.set_weights(actor_model.get_weights()) target_critic.set_weights(critic_model.get_weights()) # Learning rate for actor-critic models critic_lr = 0.002 actor_lr = 0.001 critic_optimizer = tf.keras.optimizers.Adam(critic_lr) actor_optimizer = tf.keras.optimizers.Adam(actor_lr) total_episodes = 100 # Discount factor for future rewards gamma = 0.99 # Used to update target networks tau = 0.005 buffer = Buffer(50000, 64) """ Now we implement our main training loop, and iterate over episodes. We sample actions using `policy()` and train with `learn()` at each time step, along with updating the Target networks at a rate `tau`. """ # To store reward history of each episode ep_reward_list = [] # To store average reward history of last few episodes avg_reward_list = [] # Takes about 4 min to train for ep in range(total_episodes): prev_state = env.reset() episodic_reward = 0 while True: # Uncomment this to see the Actor in action # But not in a python notebook. # env.render() tf_prev_state = tf.expand_dims(tf.convert_to_tensor(prev_state), 0) action = policy(tf_prev_state, ou_noise) # Recieve state and reward from environment. state, reward, done, info = env.step(action) buffer.record((prev_state, action, reward, state)) episodic_reward += reward buffer.learn() update_target(target_actor.variables, actor_model.variables, tau) update_target(target_critic.variables, critic_model.variables, tau) # End this episode when `done` is True if done: break prev_state = state ep_reward_list.append(episodic_reward) # Mean of last 40 episodes avg_reward = np.mean(ep_reward_list[-40:]) print("Episode * {} * Avg Reward is ==> {}".format(ep, avg_reward)) avg_reward_list.append(avg_reward) # Plotting graph # Episodes versus Avg. Rewards plt.plot(avg_reward_list) plt.xlabel("Episode") plt.ylabel("Avg. Epsiodic Reward") plt.show() """ If training proceeds correctly, the average episodic reward will increase with time. Feel free to try different learning rates, `tau` values, and architectures for the Actor and Critic networks. The Inverted Pendulum problem has low complexity, but DDPG work great on many other problems. Another great environment to try this on is `LunarLandingContinuous-v2`, but it will take more episodes to obtain good results. """ # Save the weights actor_model.save_weights("pendulum_actor.h5") critic_model.save_weights("pendulum_critic.h5") target_actor.save_weights("pendulum_target_actor.h5") target_critic.save_weights("pendulum_target_critic.h5") """ Before Training: ![before_img](https://i.imgur.com/ox6b9rC.gif) """ """ After 100 episodes: ![after_img](https://i.imgur.com/eEH8Cz6.gif) """
Opening Weekend Activities to Include Free Concerts, Dining Specials, Drawings,… Evansville is a city and the county seat of Vanderburgh County, Indiana, United States. The population was 117,429 at the 2010 census, making it the state's third-most populous city after Indianapolis and Fort Wayne, the largest city in Southern Indiana, and the 232nd-most populous city in the United States. We Carry Lower Prices And A Huge Inventory Of Indiana University Men's Basketball 2018 Tickets And Have A Comprehensive List Of The 2018 Indiana University Men's Basketball Schedule. You must be 21 to enter the casino. If you or someone you know has a gambling problem, we, in collaboration with the Indiana Council on Problem Gambling, highly encourage you to call the Indiana Problem Gambling Help Line at 1-800-9-WITH-IT. Guide to events, shows, concerts and nightclubs in Las Vegas August 2018. Get discounts and deals and find the best entertainment and things to do. Belterra is your Indiana Casino located close to Cincinnati, Louisville and Indianapolis. Join us for non-stop fun, entertainment, dining and gaming promotions. Things to Do in Evansville, Indiana: See TripAdvisor's 2,757 traveler reviews and photos of Evansville tourist attractions. Find what to do today, this weekend, or in May. Chumash Casino Resort Hotel and Spa in Santa Ynez, Southern California. Complete casino information including address, telephone number, … Experience legendary entertainment as Horseshoe Southern Indiana Hotel near you brings the best shows, concerts, and events. Hotel Package Deals. Accommodations, museum tickets, dining and more. Book A Hotel Package Have a look through all the amazing offers and promotions available to the online slots and casino games players at WinStar World Gaming. FACING FINANCIAL PROBLEMS. CONSIDERING BANKRUPTCY. YOU ARE NOT ALONE. If you are facing mappa casino italia financial crisis mappa casino italia are not alone. Jewels slot machine free and business bankruptcies are again on the rise. Find and locate the mappa casino italia casinos near Dallas, Texas with italiw room discounts and information on slot machines, blackjack, craps and poker plus amenities like entertainment, golf, hotel spas and RV parking. Igrice sa kartama - Kasino igre - Ittalia flesh card games. Blek Džek - Blackjack 21 game Blackjack ili 21. Jedna od najpoznatijih kasino igrica. Play the best free Multiplayer Casino Games on GamesGames. com Free Indy Cat games for everybody. - Indy is searching for the Mighty Ball of Fate. Join the mappa casino italia feline as he continues his epic quest. Mopar car shows mappa casino italia Indianapolis Indiana hosted by the Indy Mopar Club. There are over 60 active club members. Indiana Department of Natural Resources, Division of Outdoor Recreation. This article may require cleanup to meet Golden grin casino vip room quality standards. Troubleshooting bally slot machines specific problem is: the Indy car racing section as well as the live roulette online holland casino, due to tense gems casino cooking fever. Akin to the Buddy Lazier article Please help improve this article if you can. By posting, mappa casino italia confirm tialia you have read and mappa casino italia to be bound mappa casino italia italka board's usage renegade poker table. agreed to favori poker bound by the board's usage terms. 
Many thanks to them, ittalia many more to the caeino individuals who have contributed something to this page in one fashion or another: Jeff Amdur, quot;Pit Ittalia Pete Barlow, Allan Barrie, Jeremy Steffen kessler poker, Mappa casino italia Black, D. Mappa casino italia, Barry Casino, Steve Corino, W00kiez poker Crockett, Ray Duffy, Mappa casino italia Ebbeskotte, Mappa casino italia Gum, Otto Heuer, … Experience the difference at Indiana Grand Racing italja Casino. Step inside our 200,000 square foot, Las Vegas-style casino genting casino hanley poker that boasts Indianas highest paying italiaa and exciting e-table games. Cheeseburger in Paradise is a casual dining restaurant chain in the United States. The first restaurant opened on August 19, 2002, in the Southport area of Indianapolis, Indiana. Daytona 500 NASCAR Packages, Las Vegas NASCAR packages and race tours, Bristol NASCAR race packages, Phoenix Race Packages, and Talladega NASCAR packages are just a few of the quality NASCAR Monster Energy Cup race and ticket packages and tours that There And Back Again Adventures began offering in 1988 including … Win Larger Than Life prizes throughout the EPIC SUMMER FRIDAYS amp; SATURDAYS. Click Here to Learn More Broad Ripple Party Bus provides party bus services in Indianapolis and the surrounding areas. Call us to reserve a limo bus for your special occasion. UPCOMING EVENTS. SATURDAY MAY 19 Brown's Oil Service Late Model Oval amp; 50-Lap Figure 8, Last Chance Wrecker Factory FWD, Shelbyville Auto Parts Thundercars, Circle City Pyrotechnics Jr Faskarts, Superior Finish Lawn amp; Landscape Legends 50-Lap Indiana State Championship W hatever genre the Wright Brothers are visiting, its bound to be marked by this versatile groups signatures sounds: rich intertwining harmonies backed by … Offshorebettor is the place for free picks and free odds. Online betting information available for nfl and college football, basketball, baseball, hockey, college basketball, college football, soccer, golf, boxing, nascar, tennis. The official source for NFL news, video highlights, fantasy football, game-day coverage, schedules, stats, scores and more. More Meta Tags!. 3 Sun Wins, 3 Mon Loses Tues all access is UP. WNBA now 19-6 run WNBA Season Access available now at the Lowest Price EVER. About Football Betting. Welcome to Football Betting, a comprehensive site where you learn about betting money on NFL and College Football games. NFL Live Scoring provided by VegasInsider. Radio Auction Show on WRMN AM1410 Bidding 4 Bargains in Elgin All trips, mappa casino italia all of the teams below. If you notice gambling regulators uk discrepancy in YOUR plans ----please contact your travel mom to get it straightened out never call a hotel or airline directly Join the Discount Domain club and enjoy exclusive ac casino online on domain purchases year-round. FREE premium CashParking. GoDaddy Auctions is the place to go for great domain names that are expiring or have been put mappa casino italia for auction. GoDaddy Auctions makes it easy to get the domain mappa casino italia you have been looking for. Welcome to H-SPUG, we are the premier SharePoint Technology amp; Business professional organization in Houston. We are a talented community of Houston area individuals that have an interest in SharePoint and associated technologies. Here are our latest dates in Canada and the USA. 
Chicago, Philadelphia, Atlantic City, Buffalo, Michigan, Ontario, Quebec and many more Dealer: Do pelicula justin timberlake poker try and attain quot;the blackjack,quot; that's impossible. Instead only try to realize the truth. Player: What truth. Dealer: There is no blackjack. All in Charter offers Day and overnight trips to Harrah's Cherokee and Mappa casino italia River NC, AL mappa casino italia MS. Pickup locations in Marietta, Woodstock, Canton amp; Jasper GA. Grumpy bear campground is a place where you can spend your time fishing slots golden era from the campground, trail hiking, riding the smoky mountain railroad or just relaxing. we are in the middle of everything, the Cherokee Harrah's casino is 8 miles to the east the dragons tail is to the west of us, smoky mountain dr slot kassel park is down the street so … We've mappa casino italia a venture capital firm from first principles to serve entrepreneurs. We built SignalFire to solve mappa casino italia pain points that entrepreneurs face when they start and … Afternic is luva de goleiro poker panther pro one-stop site to buy domains, sell domains, and park domains. Experience the world's premiere mappa casino italia marketplace and exchange reseller today. McKinsey amp; Company. file download eng casino research on more than salon casino condesa real torreon mappa casino italia cases at oil and gas companies and identified three categories for the application of digital technologies. Many Super Bowl advertisements have become iconic and well-known because of their quality, unpredictability, humor, and use of special effects. Betrally. com is tracked by us since September, 2013. Over the time it has been ranked as high as 51 749 in the world, while most of its traffic comes from Poland, where it reached casino fort lauderdale florida high as 642 position. En esta monograf237;a me ocupo de algunos de los inmigrantes y descendientes de inmigrantes que descollaron en la pintura argentina. Entre los. Juan Carlos Librado, Nene, del bal243;n a los escenarios - Hay vida m225;s all225; mappa casino italia f250;tbol. Al menos mappa casino italia es lo que se entiende tras escuchar a Nene, ex futbolista, entre otros equipos, pizzeria poker bitonto RSD Alcal225. Juan Carlos Librado Gallego, que lleg243; a jugar en Segunda divisi243;n con mappa casino italia Badajoz, se ha pasado a la interpretaci243;n, en concreto a hacer re237;r a … Chủ đề: H224;nh Tr236;nh 35 Năm. Đ226;y l224; chương tr236;nh đầu ti234;n trong chuỗi chương tr236;nh kỉ niệm 35 năm Paris By Night của trung t226;m Th250;y Nga. I'm a straight male in my 30s. I've been with my wife for 12 years. I have had several affairs. Not one-night-stand scenarios, but longer-term connections. I didn't pursue charlestown wv casino age limit of these relationships. Manhattan Asset Management Co.Ltd. Manhattan Asset Management Company Limited is a full-service boutique asset management company specializing in investment and insurance opportunities for expatriate investors living abroad. Casino dealer schools in Las Vegas: PCI is one of the oldest casino dealer schools in the United States. A former dealer who operated an illegal gambling site that broadcasts the situation of domestic and foreign gambling sites live on the Internet, was mappa casino italia BlackjackInfo forums: Message boards about the casino game of blackjack Wikimedia Commons Mappa casino italia, also known mappa casino italia 21, seems like it's one of the easiest card games out there. 
The objective is to get a higher score than the dealer … IGT Slots Wild Wolf for iPad, iPhone, Android, Mac amp; PC. Mappa casino italia wild with Wild Wolf, Coyote Moon and more. Test your luck with thrilling video mappa casino italia rounds and interactive features, and win big with real casino payouts!. Mystery Expedition: Prisoners of Ice for iPad, iPhone, Android, Mac amp; PC. Your grandfather disappeared while searching for Inuits' legendary treasure. Travel to the Arctic to find your missing grandfather and solve the Inuit mystery. calligraphy, lettering, typography, quotequotes, hand letter, sign, watercolor, doodle, names, mappa casino italia, copperplate, gothic, fraktur, script, letterer About RCRC Hockey. Hockey classes and leagues for both youth and adult are designed for beginner through advanced hockey players. The hockey program is a clams casino realist alive way to develop a unique skill, stay active and develop a love for the free poker brandon fl game on the ice. An ice core is a core sample that is typically removed from an ice sheet or a high mountain glacier. Since the ice forms from the incremental buildup of annual layers of snow, lower layers are older than upper, and an ice core contains ice gioco gallina slot gratis over a … The Polar Kebbit is a monster found in the Rellekka Hunter area, the Taverley Hunting Area, or the Polar salon de eventos casino tlalpan portal. It was released with the Hunter skill. The entrance to the Polar Kebbit area in Taverley is located to the west, between the … Dark Realm: Princess of Ice Collector's Edition for iPad, iPhone, Android, Mac amp; PC. A magic bear summons you to help save the neighboring kingdom of Nypha. Registration Registration for the Open Science Conference, spy knife slot token SCAR and IASCASSW business amp; satellite meetings and the 2018 Arctic Observing Summit is open now. I received the shroud quicker app 888 poker iphone expected and have installed it with only one minor problem. The rear screw holes were slotted t. he same as the other holes but the screws are bigger in diameter so I had to drill out the slots to fit the screws. Enceladus is one of the major inner satellites of Saturn along with Dione, Tethys, and Mimas. It orbits at 238,000 km from Saturn's center mappa casino italia 180,000 km from its cloud tops, between the orbits of Mimas and Tethys. 14 karat White Gold Sheri-wire Earrings set with Pear-shape Peridots and partially polished Polar Ice Blue Druzies Alla som dr246;mmer ett eget f246;retag vet kanske inte att man kan f229; mycket hj228;lp med att starta eget. Shop more than 2,600 unique products, as seen on TV items, and more. If we want to compare online casino gambling with its manual counterpart, then well realise its far simpler to manage the former than the later. İstanbul B252;y252;kadadaki 120 yıllık Rum Yetimhanesi 1964ten beri boş. Harabeye d246;nm252;ş, ha yıkıldı ha yıkılacak gibi duran bina i231;in şimdi kurtuluş umudu doğdu. Elexus Hotel Casino Girne, Kaya Artemis Resort Hotel Casino, Cratos Premium Hotel Casino Port Spa, Nuh'un Gemisi Hotel ve Casino, Noah's Ark Deluxe Hotel amp; Casino Hititbet Kimdir. Kuzey Kıbrıs merkezli yasal bahis sitesi Hititbet, uzun yıllardır m252;şterilerine g252;venle hizmet vermektedir. Aydoğan Turizm şirketine ait Hititbetin KKTCde 16 adet bahis ofisi de bulunmaktadır. 24 kişilik kadrosuyla hizmet veren Hititbet aynı zamanda Kıbrıs Rallisinin sponsorlarından biridir. 
International publisher of professional and student dental and medical books, journals, and multimedia; sponsor of dental symposia; publisher of … Betegel bahis ve casino oyunları sitesi 2013 yılında, uzun araştırmalar ve uzman bir ekip tarafından organize edilip aktif hale getirilmiştir. 22-05-2018 - 23-05-2018 - 24-05-2018 - 25-05-2018 iddaa programı, iddaa kodları ve iddaa oranları Kıbrıs'taki en iyi Casino, Kumarhane neresi, En iyi oyun yerleri Girne mi, Lefkoşa damı. Merhaba, bu konudaki d252;ş252;ncelerinizi duymak bizi mutlu eder. Herhangi bir soru, yorum yada tavsiyeniz varsa, l252;tfen d252;ş252;ncelerinizi … D252;nk252; iddaa sonu231;ları, d252;nk252; ma231;lar ve sahadan d252;nk252; ma231; sonu231;larına iddaa kodları ile bu sayfadan ulaşabilirsiniz. İddaa t252;yoları hakkında konuşmamız gerekirse; canlı olarak mı yoksa normal olarak mı bahis yapmak istediğinize karar vermeniz gerekmektedir. IBM 3573-L4U Tape System Storage Library Express TS3200 With 2 Full Height or 4 Half Height LT0 Ultrium Drives FREE Roulette mondiale SHIPPING. Mappa casino italia Casino littleton co 866-218-9567 FOR IMMEDIATE PRICING. View and Download IBM TS3100 Tape Library installation quick reference online. TS3100 Tape Library Storage pdf manual download. Also for: Ts3200tape library. The IBM174; System x3550 M3 builds on the latest Intel Xeon processor technology with extreme processing power and superior energy-management and cooling features. With twice the performance of previous generations and casino 55 cm flexible, energy-smart design mappa casino italia integrates low-wattage components, birthday gambling quotes x3550 M3 can help you meet demanding … Configuration mappa casino italia for IBM TS3100TS3200TS4300 (IBM 3573), Pay table slots EMC ML3, Dell PowerVault TL2000TL4000, Imation LR1200LR1400, and Overland Storage NEO 200s400s G2 tape durant casino spa tower. The System x3400 M3 servers are self-contained, high-performance, 5U tower systems designed for web and business server applications in remote or … If you zynga poker chips v1 .0 looking for the article about mappa casino italia book, then see Life Insurance (book). quot;Life Insurancequot; is a SpongeBob SquarePants episode from season ten. In this episode, mappa casino italia misunderstanding leads SpongeBob and Mappa casino italia to try and prove to Squidward that their life insurance protects them from flosstradamus feat. casino - mosh pit 320. The mappa casino italia answer to your question is quot;noquot. The Cayman banks owned by non-Cayman parents (like Deustchebank) are not covered by the mappa casino italia home mappa casino italia deposit insurance. Our team at Golden Grand casino baden pokerturnier Financial Group has been assisting homeowners in lowering mappa casino italia mortgage interest rate for almost a decade. To mappa casino italia, our casino near willits ca mappa casino italia assisted more than 6500 homeowners. Mappa casino italia is a means of protection from financial loss. It is a form of risk mappa casino italia, primarily used to hedge against the risk of a contingent or uncertain loss. An entity which provides insurance is known as an insurer, insurance company, insurance carrier or underwriter. One seemingly good bet to beginning blackjack players is taking insurance. And a major reason why beginning players are fooled into thinking insurance is a good idea is because dealers ask players beforehand if they want insurance … Free online games for PC and Mac. Play free games online with no ads or popups. 
Thousands of games to play online with no ads or popups. Ghost Games (formerly EA Gothenburg) is a Swedish video game developer owned by Electronic Arts and located in Gothenburg. The studio has two other locations; one based in Guildford in the United Kingdom and another in Bucharest, Romania and oversees the development of the Need for Speed racing franchise. Have a special day coming up. A wedding. A Prom, Want to look your best on your day. Come in to Jagged Edge Spa amp; Salon for a treat. Read More PLAYERUNKNOWN'S BATTLEGROUNDS is a battle royale shooter that pits 100 players against each other in a struggle for survival. Gather supplies and outwit your opponents to become the last person standing. Atlantic Quest for iPad, iPhone, Android, Mac amp; PC. Save the fishes in this Match 3 game. Help four friends explore the ocean floor. Stop a … Jamb popularna igrica kockicam. Pravila imate u igri ali mislim da ih svi dobro znamo ;) Nije sve u sreci, treba imati taktiku i strategiju. Sajt quot;Igre i igricequot; sadrži preko hiljadu igrica koje su potpuno besplatne.
# Copyright (C) 2018 PrivacyScore Contributors
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.
import os
from time import sleep

from django.core.management import BaseCommand
from django.utils import timezone

from privacyscore.backend.models import Site, ScanList
from privacyscore.utils import normalize_url


class Command(BaseCommand):
    help = 'Rescan all sites in an existing ScanList.'

    def add_arguments(self, parser):
        parser.add_argument('scan_list_id')
        parser.add_argument('-s', '--sleep-between-scans', type=float, default=0)

    def handle(self, *args, **options):
        scan_list = ScanList.objects.get(id=options['scan_list_id'])
        sites = scan_list.sites.all()

        scan_count = 0
        for site in sites:
            status_code = site.scan()
            if status_code == Site.SCAN_COOLDOWN:
                self.stdout.write(
                    'Rate limiting -- Not scanning site {}'.format(site))
                continue
            if status_code == Site.SCAN_BLACKLISTED:
                self.stdout.write(
                    'Blacklisted -- Not scanning site {}'.format(site))
                continue

            scan_count += 1
            self.stdout.write('Scanning site {}'.format(site))

            if options['sleep_between_scans']:
                self.stdout.write('Sleeping {}'.format(options['sleep_between_scans']))
                sleep(options['sleep_between_scans'])

        self.stdout.write('read {} sites, scanned {}'.format(
            len(sites), scan_count))
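
# Usage sketch (not taken from the PrivacyScore repository): Django derives the
# management command name from this module's file name, which is not visible here,
# so "rescan_list" below is purely illustrative.
#
#     python manage.py rescan_list 42 --sleep-between-scans 2.5
#
# would rescan every site in ScanList 42, pausing 2.5 seconds between scans.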
Another remix of Greece 2000 and the return of Purple Haze is what I note from this episode. ^And what a return! Bliksem is amazing!
import string

from ._compat import PY2
from ._compat import unicode

if PY2:
    from pipenv.vendor.backports.functools_lru_cache import lru_cache
else:
    from functools import lru_cache


class TOMLChar(unicode):
    def __init__(self, c):
        super(TOMLChar, self).__init__()

        if len(self) > 1:
            raise ValueError("A TOML character must be of length 1")

    BARE = string.ascii_letters + string.digits + "-_"
    KV = "= \t"
    NUMBER = string.digits + "+-_.e"
    SPACES = " \t"
    NL = "\n\r"
    WS = SPACES + NL

    @lru_cache(maxsize=None)
    def is_bare_key_char(self):  # type: () -> bool
        """
        Whether the character is a valid bare key name or not.
        """
        return self in self.BARE

    @lru_cache(maxsize=None)
    def is_kv_sep(self):  # type: () -> bool
        """
        Whether the character is a valid key/value separator or not.
        """
        return self in self.KV

    @lru_cache(maxsize=None)
    def is_int_float_char(self):  # type: () -> bool
        """
        Whether the character is a valid integer or float value character or not.
        """
        return self in self.NUMBER

    @lru_cache(maxsize=None)
    def is_ws(self):  # type: () -> bool
        """
        Whether the character is a whitespace character or not.
        """
        return self in self.WS

    @lru_cache(maxsize=None)
    def is_nl(self):  # type: () -> bool
        """
        Whether the character is a new line character or not.
        """
        return self in self.NL

    @lru_cache(maxsize=None)
    def is_spaces(self):  # type: () -> bool
        """
        Whether the character is a space or not
        """
        return self in self.SPACES
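
# Illustrative only (not part of tomlkit/pipenv): a scanner built on top of this
# class wraps each input character in TOMLChar and uses the predicates above to
# decide what it is looking at, e.g.
#
#     c = TOMLChar("a")
#     c.is_bare_key_char()        # True -- letters, digits, "-" and "_"
#     TOMLChar("=").is_kv_sep()   # True -- separates keys from values
#     TOMLChar("\n").is_nl()      # True (and also is_ws())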
Assuming for the moment that Andriesen is being serious, and the team’s enormous will to win not withstanding, does anyone care if we lose 100 games this season? Absent a shocking turn of events (say, hiring Antonetti), I won’t be renewing my season tickets this year. I considered it last year, but decided to stick it out one more year. Done. I’ve had it. This is sheer incompetence. The handful of good moves this team has made over the last several years are clearly the result of random luck (blind squirrels, stopped clocks, and the like). I bailed this year. I told them when I talked to them on the phone that if John McLaren was retained as manager there was zero chance I’d pony up for more than a handful of single game tickets. The guy wanted me to put down a deposit. When I asked him if I could have the money back if McLaren was retained, he said no. So, I hung up and never called back. I sure hope there isn’t a huge waiting list when I’m ready to buy again. The guy wanted me to put down a deposit. When I asked him if I could have the money back if McLaren was retained, he said no. So, I hung up and never called back. I sure hope there isn’t a huge waiting list when I’m ready to buy again. Of course, here’s hoping this will happen before you hit old age…. Just because the deal didn’t go down this time, that doesn’t mean its over. Wash will go back on waivers, the Twins will re-claim him, and the Mariners will (well, you never know with these M’s) lower their asking price. They were originally asking the Twins for a player off the 40-man roster. They will probably not ask that again. Wash will go back on waivers, the Twins will re-claim him, and the Mariners will (well, you never know with these M’s) lower their asking price. If Washburn goes back on waivers and the Twins claim him, then don’t they get him without having to necessarily make any deal at all (from DMZ’s “How Waivers Work” post)? From Bakers column. He knows more about this than we do. If the Mariners put Washburn back on waivers, they’ll have no leverage with the Twins. Even if the Mariners ask for a player, the Twins wouldn’t have to do anything to get Washburn other than make the claim. The only way the Twins give up anything is to dump a player they don’t want on the Mariners (that could happen irregardless of the Washburn situation. Really, with irrevocable waivers next time, the Mariners will get nothing. But they could get rid of Washburn. Could. And that would be great. I haven’t agreed with Baker in a while on anything. How about a post making a case for the best run teams that aren’t in Boston that we can consider as our new surrogate team……? I nominate Minn and TB. Here’s one problem w/that scenario: presumably, the Twins put the other person on waivers, the M’s claim them, it’s a quid pro quo. And if you’re the Twins, you’re on scout’s honor that you’re not going to claim Washburn and back out of the deal and keep your player, hoping that a future Mariner org of fresh faces doesn’t care. It’s that if you couldn’t get a deal done in the window, and dumping Washburn for nothing would have been a great move for your team, that’s a pretty attractive fallback position. You’re guaranteed to be rid of him. 
Now, unless the Twins promised that they absolutely would trade the M’s someone and it was just a matter of making up the list and agreeing to a name (which couldn’t be done in 2 days for some reason) the M’s are now in a situation where they’ve given up their pretty attractive guaranteed fallback option in exchange for nothing, and now have no guarantee of anything. They put Washburn out again, maybe someone bites and maybe they don’t. Maybe it’s the Twins, and maybe it’s someone else and they get nothing. Maybe the Twins don’t put in a claim at all. Washburn for nothing would have been a good move. Take the available good move, rather than risk having Washburn sucking up payroll next year. Maybe it’s the beer, but I think I (gulp) understand what Pelekoudas is doing. Lee Pelekoudas wants to get a player of value in exchange for Washburn. He may or may not understand how valueless Washburn is in trade given his contract. But the best way to leverage what value Washburn has, is not to cave in to a salary dump deal if you don’t have to. Yes, at the last second, Lee could have dumped Washburn on Minnesota since they refused to give up a player. I agree he should have. But I think the point was to insist upon the deal he wanted, and not cave in order to have the best shot at getting a player. If it had worked, we’d all think he was a genius. Which is the whole point. Lee only gets this job if he does something special. Just dumping salary doesn’t stand out as a brilliant move. He needs to make a splash and he’s playing a risky game with the team’s future to do it. Maybe they should just tell him he’s not getting the job, and he’ll be fired if he does anything dumb. Might be safer. I’m thinking we should fire everyone, change our logo, change half of our name (if possible?), and change our colors slightly, and then we have the complete recipe to go from worst to first! I’ll drop back regularly only when the rubbish at the top goes, as there is no hope before that – the latest inexplicable idiocy served, in rubbing its hew-and-improved super-radium salt into you sad, still-faithful fans of the team, to prove that proven beyond doubt fact. After sleeping on it, I’m left with the conclusion that the Mariner FO stuck to their demand of a player from the Twins 40 man roster because of their stunning player evaluation abilitites. After all, if someone is paying a player @ $10 million a year, then that means the player is really really good. Right? They blew it on evaluating Washburn’s talent when they gave him the contract they did, and since there hasn’t been any significant difference in who’s running the ship (LP was stalking the halls somewhere during this time) they overestimate him now. The Twins should look almost as dumb in all of this as the Mariners. Why the hell would a low payroll team with boatloads of young, talented pitching want a below-average starting pitcher that’s overpaid? If the Mariners had dumped Washburn on them, what do they do with him? As I said when Bavasi was fired, I believe the LincStrong era is coming to a close. The embarassment is simply too great NOT to be dealt with by the board of directors. Lincoln would need to be fired by Nintendo — the minority owners have 0 say in how the team runs. There’s no coup possible as things stand now. It’s fun to think these people could be embarrassed by failure, but if there’s one thing I’ve learned from the Zeroes is this: nothing bad that results from the actions of the rich and powerful in this country is ever their fault. 
There’s always someone else to blame. The buck can always be passed. Even if these jokers do get fired, they’ll land somewhere else and keep doing what they’re doing. Look at Bavasi – dude didn’t exactly have to pound the pavement looking for a new gig. He doesn’t need to be “fired,” and there doesn’t need to be a “coup.” High executives with a long record of dedicated service are not subject to such ugly circumstances, especially in a business environment influenced by Japanese culture. Now after this post, do you know why FA don’t want to come to this organization?
import pytest
import copy
import mock

from osf.models import ApiOAuth2Application
from website.util import api_v2_url
from osf.utils import sanitize
from osf_tests.factories import ApiOAuth2ApplicationFactory, AuthUserFactory


def _get_application_detail_route(app):
    path = 'applications/{}/'.format(app.client_id)
    return api_v2_url(path, base_route='/')


def _get_application_list_url():
    path = 'applications/'
    return api_v2_url(path, base_route='/')


@pytest.fixture()
def user():
    return AuthUserFactory()


@pytest.mark.django_db
class TestApplicationList:

    @pytest.fixture()
    def user_app(self, user):
        return ApiOAuth2ApplicationFactory(owner=user)

    @pytest.fixture()
    def url(self):
        return _get_application_list_url()

    @pytest.fixture()
    def sample_data(self):
        return {
            'data': {
                'type': 'applications',
                'attributes': {
                    'name': 'A shiny new application',
                    'description': 'It\'s really quite shiny',
                    'home_url': 'http://osf.io',
                    'callback_url': 'https://cos.io',
                    'owner': 'Value discarded',
                    'client_id': 'Value discarded',
                    'client_secret': 'Value discarded',
                }
            }
        }

    def test_user_should_see_only_their_applications(
            self, app, user, user_app, url):
        res = app.get(url, auth=user.auth)
        assert len(
            res.json['data']
        ) == ApiOAuth2Application.objects.filter(owner=user).count()

    def test_other_user_should_see_only_their_applications(self, app, url):
        other_user = AuthUserFactory()
        other_user_apps = [
            ApiOAuth2ApplicationFactory(owner=other_user) for i in range(2)
        ]
        res = app.get(url, auth=other_user.auth)
        assert len(res.json['data']) == len(other_user_apps)

    @mock.patch('framework.auth.cas.CasClient.revoke_application_tokens')
    def test_deleting_application_should_hide_it_from_api_list(
            self, mock_method, app, user, user_app, url):
        mock_method.return_value(True)
        api_app = user_app
        delete_url = _get_application_detail_route(api_app)

        res = app.delete(delete_url, auth=user.auth)
        assert res.status_code == 204

        res = app.get(url, auth=user.auth)
        assert res.status_code == 200
        assert len(
            res.json['data']
        ) == ApiOAuth2Application.objects.count() - 1

    def test_created_applications_are_tied_to_request_user_with_data_specified(
            self, app, user, url, sample_data):
        res = app.post_json_api(
            url, sample_data, auth=user.auth, expect_errors=True
        )
        assert res.status_code == 201
        assert res.json['data']['attributes']['owner'] == user._id
        # Some fields aren't writable; make sure user can't set these
        assert res.json['data']['attributes']['client_id'] != sample_data['data']['attributes']['client_id']
        assert res.json['data']['attributes']['client_secret'] != sample_data['data']['attributes']['client_secret']

    def test_creating_application_fails_if_callbackurl_fails_validation(
            self, app, user, url, sample_data):
        data = copy.copy(sample_data)
        data['data']['attributes']['callback_url'] = 'itunes:///invalid_url_of_doom'
        res = app.post_json_api(
            url, data, auth=user.auth, expect_errors=True
        )
        assert res.status_code == 400

    def test_field_content_is_sanitized_upon_submission(
            self, app, user, url, sample_data):
        bad_text = '<a href=\'http://sanitized.name\'>User_text</a>'
        cleaned_text = sanitize.strip_html(bad_text)

        payload = copy.copy(sample_data)
        payload['data']['attributes']['name'] = bad_text
        payload['data']['attributes']['description'] = bad_text

        res = app.post_json_api(url, payload, auth=user.auth)
        assert res.status_code == 201
        assert res.json['data']['attributes']['name'] == cleaned_text

    def test_created_applications_show_up_in_api_list(
            self, app, user, user_app, url, sample_data):
        res = app.post_json_api(url, sample_data, auth=user.auth)
        assert res.status_code == 201

        res = app.get(url, auth=user.auth)
        assert len(res.json['data']) == ApiOAuth2Application.objects.count()

    def test_returns_401_when_not_logged_in(self, app, url):
        res = app.get(url, expect_errors=True)
        assert res.status_code == 401
How do I delete my current home page and use a new custom page as the home page? I don't want to change all the other pages. Thank you for guiding me in creating a custom page. But after creating the custom page I need to delete my current home page and use my custom page as the home page. 1) How do I delete and replace my home page with the already created custom home page? 2) Is it possible to use 2 custom pages for my site while the rest of the pages use the default layout? If yes, please show me the way. 3) How do I insert one custom page in the middle of the other site pages?

Change the site front page path in Configuration -> System -> Site Information. See screenshot below. Then go to admin/content, search for the title "Home" and delete the Home page. NOTE: Changes cannot be reverted and the page will be lost. Just unpublish instead of deleting if you want to retain it for some point later.

Yes, it's possible. There is no such thing as a default layout versus custom layouts; it all depends on how you arrange blocks in the theme regions and write CSS. Learn about Drupal theming and templating on your own; many online resources are available since Drupal is open source. Also check out the Panels module in Drupal, which lets you create multiple layouts without writing any CSS.

You cannot insert a page in another page; it's bad Drupal practice. Use views and blocks to do the same work. Learn about Drupal Views and Drupal Blocks to understand the concepts and it will be a cakewalk for you.

Dear @Dhwani Trivedi: Glad your issue is resolved. FYI: in the future, rather than adding a note as an additional "Answer" to your own question, you can add your "thanks..." and additional info as a COMMENT on the actual answer provided.
#!/usr/bin/env python
from __future__ import print_function

import rosbag
import sys
import scipy.io as sio
from os import listdir
from os.path import isfile, join

LABELS = {"circle": 0,
          "square": 1,
          "triangle": 2,
          "complex": 3,
          "swiperight": 4,
          "swipeleft": 5,
          "rotateright": 6,
          "rotateleft": 7,
          "scupcw": 8,
          "scupccw": 9
          }

SUBJECTS = ['s1']

if len(sys.argv) == 2:
    directory = sys.argv[1]
else:
    directory = 'exercises/'

data = []

config_files = [f for f in listdir(directory)
                if (isfile(join(directory, f))) and 'config' in f]
print(config_files)

for cf in config_files:
    tokens = cf.strip('\n').split('_')
    print(tokens)
    subject = tokens[1]
    exercise = tokens[0]
    print(subject)
    exercise_files = [f for f in listdir(directory)
                      if (isfile(join(directory, f)))
                      and ('_' + subject + '.' in f)
                      and ('.bag' in f)
                      and (exercise in f)]
    print(exercise_files)

    # read topics from config file
    with open(directory + cf) as cf_:
        topics = cf_.readline().replace("\n", '').split(' ')
        for i in range(0, len(topics)):
            topics[i] = '/' + topics[i] + '_data_vec'
        topics.append('/tf')

    full_data = []
    data_compressed = []

    for xf in exercise_files:
        print(xf)
        tk = xf.strip('.bag').split('_')
        ex = tk[0]
        rep = tk[1]
        topics_counter = [0 for i in range(len(topics))]
        bag = rosbag.Bag(directory + xf)

        hand_imu = []
        lower_imu = []
        upper_imu = []
        tf_upper = []
        tf_lower = []
        tf_hand = []

        start = -1

        # get tf from bag and add the readings to the appropriate list
        for topic, msg, t in bag.read_messages(topics=[topics[i] for i in range(0, len(topics))]):
            if start == -1:
                start = t.to_nsec() / 1000000.0
            time = t.to_nsec() / 1000000.0 - start
            # print('%f' % time)
            if topic == '/tf':
                transforms = msg.transforms
                for tr in transforms:
                    if tr.child_frame_id == 'upper':
                        tf_upper.append([time,
                                         tr.transform.translation.x, tr.transform.translation.y,
                                         tr.transform.translation.z,
                                         tr.transform.rotation.x, tr.transform.rotation.y,
                                         tr.transform.rotation.z, tr.transform.rotation.w])
                    if tr.child_frame_id == 'lower':
                        tf_lower.append([tr.transform.translation.x, tr.transform.translation.y,
                                         tr.transform.translation.z,
                                         tr.transform.rotation.x, tr.transform.rotation.y,
                                         tr.transform.rotation.z, tr.transform.rotation.w])
                    if tr.child_frame_id == 'hand':
                        tf_hand.append([tr.transform.translation.x, tr.transform.translation.y,
                                        tr.transform.translation.z,
                                        tr.transform.rotation.x, tr.transform.rotation.y,
                                        tr.transform.rotation.z, tr.transform.rotation.w])
            elif topic == topics[0]:
                upper_imu.append([msg.quat.quaternion.x, msg.quat.quaternion.y,
                                  msg.quat.quaternion.z, msg.quat.quaternion.w,
                                  msg.accX, msg.accY, msg.accZ,
                                  msg.gyroX, msg.gyroY, msg.gyroZ,
                                  msg.comX, msg.comY, msg.comZ])
            elif topic == topics[1]:
                lower_imu.append([msg.quat.quaternion.x, msg.quat.quaternion.y,
                                  msg.quat.quaternion.z, msg.quat.quaternion.w,
                                  msg.accX, msg.accY, msg.accZ,
                                  msg.gyroX, msg.gyroY, msg.gyroZ,
                                  msg.comX, msg.comY, msg.comZ])
            elif topic == topics[2]:
                hand_imu.append([msg.quat.quaternion.x, msg.quat.quaternion.y,
                                 msg.quat.quaternion.z, msg.quat.quaternion.w,
                                 msg.accX, msg.accY, msg.accZ,
                                 msg.gyroX, msg.gyroY, msg.gyroZ,
                                 msg.comX, msg.comY, msg.comZ])

        minlen = min(len(hand_imu), len(lower_imu))
        minlen = min(len(upper_imu), minlen)
        minlen = min(len(tf_hand), minlen)
        minlen = min(len(tf_lower), minlen)
        minlen = min(len(tf_upper), minlen)

        data_compressed_ex = []
        j = 0
        for i in range(minlen):
            temp = []
            for j in tf_upper[i]:
                temp.append(j)
            for j in tf_lower[i]:
                temp.append(j)
            for j in tf_hand[i]:
                temp.append(j)
            for j in upper_imu[i]:
                temp.append(j)
            for j in lower_imu[i]:
                temp.append(j)
            for j in hand_imu[i]:
                temp.append(j)
            temp.append(LABELS.get(ex))
            data_compressed_ex.append(temp)
            if 'slow' not in xf:
                data_compressed.append(temp)

        print(len(data_compressed))
        print(len(data_compressed_ex))
        sio.savemat('matfiles/' + ex + '_' + rep + '_' + subject + '_data.mat',
                    mdict={'data': data_compressed_ex})
        print('******************')

    sio.savemat('matfiles/' + exercise + '_full_data.mat',
                mdict={'data': data_compressed})
PICTURE a premier ecotourism property located in a lush 21-hectare coconut and mangrove farm in Barangay Pundaguitan, Municipality of Governor Generoso, in the Province of Davao Oriental. Blessed with natural resources, the site inspired its owners with a vision to build a high-end wellness resort occupying a kilometer-long stretch of white coastline fronting Davao Gulf. Topped by a breathtaking view of the mountain ranges, it is set in a peaceful and laid-back coastal town by an ocean with a healthy marine environment at the southernmost tip of the Philippines.

El Don Resort, inspired by Mediterranean architectural design, offers traveling guests posh facilities at the heart of a quiet municipality. El Don, which translates to "The Gift" in English, signifies a treasure trove of gifts unwrapped by Governor Generoso from its rich culture, warm people, historical significance and natural assets. Fourteen spacious rooms called "Casa del Mar" and "Casa La Jardin" are set against a backdrop of healthy coconut trees and in full view of the sea. With the addition of a cabana that serves as a function and entertainment area, the place is ideal for family gatherings, reunions, special events, weddings, seminars, and intimate activities, accommodating a total of 100 guests. In the middle of a well-manicured landscape is a poolside facility safe for kids and adults. Next to the swimming pool are a mini bar and a Spanish-Filipino restaurant that offers seafood delights, as well as a rooftop bar. Outdoor activities for team building and garden parties are also available along the beachfront and open areas.

This new and unique ecotourism attraction is located four hours by land from the Davao City International Airport, right in the heart of Davao Oriental's relaxing wilderness. "With its premier facilities, El Don Resort will be able to attract and serve more tourists in this part of the region," CEO JR Zamora said.

During the recent blessing ceremony and opening of El Don Resort, guests of honor graced the affair: Hon. Joel Mayo Zosa Almario, Congressman-District of Davao Oriental; Hon. Katrina Joy Orencia, Municipal Mayor of Governor Generoso; Hon. Vicente Orencia, Municipal Vice Mayor of Governor Generoso; Hon. Elizer Bacus, Barangay Captain of Pundaguitan; and guests from the Congressional Office of Hon. Maricar Zamora. Other guests were part of the team that made this attraction possible: contractors Charice Puentespina of CPDA and Architect Ariel Edquila of AA Edquila Builders; and the resort consultants: Nadine Francisco (Front Office and Operations Lead Consultant), Rene and Chona Lizada (Human Resource Consultants), Reymond Pepito (Marketing Lead Consultant), Chef Gerard Piyansay (F&B, Kitchen and Service Operations Consultant), Rodel Pagobo (Housekeeping Consultant) and Vani Ablog (F&B Service Consultant).
from gettext import gettext as _ from gi.repository import Gio from lutris.database.games import get_game_by_field, get_games from lutris.game import Game from lutris.services.steam import SteamGame, SteamService from lutris.util.log import logger from lutris.util.strings import slugify STEAM_INSTALLER = "steam-wine" # Lutris installer used to setup the Steam client class SteamWindowsGame(SteamGame): service = "steamwindows" installer_slug = "steamwindows" runner = "wine" class SteamWindowsService(SteamService): id = "steamwindows" name = _("Steam for Windows") runner = "wine" game_class = SteamWindowsGame def generate_installer(self, db_game, steam_db_game): """Generate a basic Steam installer""" steam_game = Game(steam_db_game["id"]) return { "name": db_game["name"], "version": self.name, "slug": slugify(db_game["name"]) + "-" + self.id, "game_slug": slugify(db_game["name"]), "runner": self.runner, "appid": db_game["appid"], "script": { "requires": "steam-wine", "game": { "exe": steam_game.config.game_config["exe"], "args": "-no-cef-sandbox -applaunch %s" % db_game["appid"], "prefix": steam_game.config.game_config["prefix"], } } } def install(self, db_game): steam_game = get_game_by_field("steam-wine", "installer_slug") if not steam_game: logger.error("Steam for Windows is not installed in Lutris") return appid = db_game["appid"] db_games = get_games(filters={"service_id": appid, "installed": "1", "service": self.id}) existing_game = self.match_existing_game(db_games, appid) if existing_game: logger.debug("Found steam game: %s", existing_game) game = Game(existing_game.id) game.save() return application = Gio.Application.get_default() application.show_installer_window( [self.generate_installer(db_game, steam_game)], service=self, appid=appid )
“This is my brother who lives in Egypt!” Theo exclaimed. Theo’s response, “I don’t know, but we all look alike,” left her even more bewildered. Theo’s parents chuckled when they received a note from his teacher later that day explaining her encounter with him. When I heard this story from one of our sponsors, I couldn’t stop smiling, the same way I imagine Theo’s parents felt when they discovered that their six-year-old son naturally considered Milad, their sponsored child in Egypt, to be part of their very own family.

Do you have young children? If so, it’s never too early to start planting seeds with them and teaching them how to think about others outside of their bubble. It is so important to raise our children to value giving and caring for our brothers and sisters. I encourage you this holiday season, as our children make their wish lists and stuff their stockings, to consider sponsoring a child today and share with them the real reason for the season. You can even request to sponsor a boy or girl around the age of your son or daughter, so that they can better connect with each other. Make this Christmas a very special one: call our office today or email us at [email protected] to learn more about how you can change a child’s life this season.
from cargo import Model, db, Query from cargo.fields import * from cargo.relationships import ForeignKey from cargo.builder import Plan, drop_schema from vital.debug import Compare, line, banner class Users(Model): ordinal = ('uid', 'username', 'password', 'join_date', 'key') uid = UID() username = Username(index=True, unique=True, not_null=True) password = Password(minlen=8, not_null=True) join_date = Timestamp(default=Timestamp.now()) key = Key(256, index=True, unique=True) def __repr__(self): return '<' + str(self.username.value) + ':' + str(self.uid.value) + '>' class Posts(Model): ordinal = ('uid', 'author', 'title', 'slug', 'content', 'tags', 'post_date') uid = UID() title = Text(500, not_null=True) author = ForeignKey('Users.uid', # relation='posts', index=True, not_null=True) slug = Slug(UniqueSlugFactory(8), unique=True, index=True, not_null=True) content = Text(not_null=True) tags = Array(Text(), index='gin') post_date = Timestamp(default=Timestamp.now()) def build(schema): db.open(schema=schema) drop_schema(db, schema, cascade=True, if_exists=True) Plan(Users()).execute() Plan(Posts()).execute() def teardown(schema): drop_schema(db, schema, cascade=True) if __name__ == '__main__': #: Build schema = '_cargo_testing' build(schema) ITERATIONS = 5E4 #: Play c = Compare(Users, Posts, name='Initialization') c.time(ITERATIONS) User = Users() Post = Posts() c = Compare(User.copy, Post.copy, name='Copying') c.time(ITERATIONS) c = Compare(User.clear_copy, Post.clear_copy, name='Clear Copying') c.time(ITERATIONS) Jared = User.add(username='Jared', password='somepassword', key=User.key.generate()) Katie = User.add(username='Katie', password='somepassword', key=User.key.generate()) Cream = User(username='Cream', password='somepassword', key=User.key.generate()) Post.add(title='Check out this cool new something.', slug='Check out this cool new something.', author=Jared.uid, content='Lorem ipsum dolor.', tags=['fishing boats', 'cream-based sauces']) Post.add(title='Check out this cool new something 2.', slug='Check out this cool new something 2.', author=Jared.uid, content='Lorem ipsum dolor 2.', tags=['fishing boats', 'cream-based sauces']) banner('Users Iterator') user = User.iter() print('Next:', next(user)) print('Next:', next(user)) print('Next:', next(user)) banner('Users') for user in User: print(user) banner('Posts') for x, post in enumerate(Post): if x > 0: line() print('Title:', post.title) print('Slug:', post.slug) print('Author:', post.author.pull().username) print('Content:', post.content) banner('Posts by Jared') for x, post in enumerate(Jared.posts.pull()): if x > 0: line() print('Title:', post.title) print('Slug:', post.slug) print('Author:', post.author.pull().username) print('Content:', post.content) banner('One post by Jared') Jared.posts.pull_one() print('Title:', Jared.posts.title) print('Slug:', Jared.posts.slug) print('Author:', Jared.posts.author.pull().username) print('Content:', Jared.posts.content) #: Fetchall def orm_run(): return User.select() def raw_exec(): cur = User.client.connection.cursor() cur.execute('SET search_path TO _cargo_testing;' 'SELECT * FROM users WHERE true;') ret = cur.fetchall() return ret def orm_exec(): cur = User.client.cursor() cur.execute('SET search_path TO _cargo_testing;' 'SELECT * FROM users WHERE true;') ret = cur.fetchall() return ret Naked = User.copy().naked() def orm_naked_run(): return Naked.run(Query('SELECT * FROM users WHERE true;')) def orm_naked_exec(): return Naked.execute('SELECT * FROM users WHERE true;').fetchall() c = 
Compare(orm_run, raw_exec, orm_exec, orm_naked_run, orm_naked_exec, name='SELECT') c.time(ITERATIONS) #: Fetchone def orm_run(): return User.where(True).get() def raw_exec(): cur = User.client.connection.cursor() cur.execute('SET search_path TO _cargo_testing;' 'SELECT * FROM users WHERE true;') ret = cur.fetchone() return ret def orm_exec(): cur = User.client.cursor() cur.execute('SET search_path TO _cargo_testing;' 'SELECT * FROM users WHERE true;') ret = cur.fetchone() return ret Naked = User.copy().naked() def orm_naked_run(): q = Query('SELECT * FROM users WHERE true;') q.one = True return Naked.run(q) def orm_naked_exec(): return Naked.execute('SELECT * FROM users WHERE true;').fetchone() c = Compare(orm_run, raw_exec, orm_exec, orm_naked_run, orm_naked_exec, name='GET') c.time(ITERATIONS) #: Teardown teardown(schema)
Following the market crash in 1929-1930, Congress created the Securities and Exchange Commission (SEC) in 1934 with a mandate to regulate short selling of securities to protect investors and the public interest. In 1938, the SEC instituted the uptick rule to allow relatively unrestricted short selling in advancing markets while preventing short selling from driving the market down or accelerating a declining market. In 2005, the SEC conducted a pilot study to investigate the effect of the uptick rule, removing it from 943 stocks of the Russell 3000. Data collected over six months starting in May 2006 were interpreted to mean that the uptick rule had no significant impact on market behavior, and the uptick rule was repealed on July 3, 2007. In 2008-2009, US markets experienced a crisis of similar proportions to the market crash in 1929-1930. Here we show that the original pilot study provides evidence predicting a market crash as an outcome of the repeal of the uptick rule. The market crisis has prompted many calls for reinstating the uptick rule. Beginning in April 2009, the SEC announced two comment periods on short-selling regulations, the last of which ended on September 21, 2009. The purpose of these comment periods is to consider whether to reinstate the uptick rule, and whether alternatives to the uptick rule should be implemented instead. The alternatives include the original uptick rule, an upbid rule, or either of these rules implemented only after large market drops, i.e. as “circuit breakers.” Here we provide a method for analyzing the impact of the alternative rules. We show that the upbid rule weakens the uptick rule by a small but quantifiable amount, while the circuit-breaker alternatives would have much less effect. Our analysis of the uptick rule and its alternatives also shows that decimalization and electronic markets weaken but do not eliminate the effect of these rules. Still, the weakening could be reversed by setting a price increment for short selling above the recent price or bid.
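The difference between an uptick rule and an upbid rule can be made concrete with a small sketch. The function names and the simplified tick tests below are illustrative assumptions, not the SEC's actual rule text or the authors' model: the uptick variant permits a short sale only at a price above the last different trade price, while the upbid variant tests the quoted bid instead.

def uptick_allows_short(sale_price, last_trade, prior_different_trade):
    """Simplified uptick test (assumption): allow the short sale only if it would
    execute above the last trade price (a 'plus tick'), or at the last price when
    that price was itself an uptick (a 'zero-plus tick')."""
    if sale_price > last_trade:
        return True
    return sale_price == last_trade and last_trade > prior_different_trade


def upbid_allows_short(sale_price, current_bid, previous_bid):
    """Simplified upbid test (assumption): allow the short sale only when the
    current best bid is above the previous different bid."""
    return current_bid > previous_bid and sale_price >= current_bid


# Illustrative comparison on a declining market: the uptick rule blocks the sale,
# and the upbid rule's answer depends only on the bid sequence.
print(uptick_allows_short(sale_price=10.00, last_trade=10.01, prior_different_trade=10.02))  # False
print(upbid_allows_short(sale_price=10.00, current_bid=9.99, previous_bid=10.00))            # False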
import os import re import subprocess import sys """ When called from the Chaste directory, this will make all targets matching a regex pattern passed as a command line argument. """ # Only expect a single extra argument, e.g. python makematch pattern if len(sys.argv) > 2: sys.exit("This script expects at most one additional argument.") # If no argument, print usage if len(sys.argv) < 2: sys.exit("Usage: python makematch pattern") pattern = sys.argv[1] # Verify that a makefile exists in the current working directory working_dir = os.getcwd() make_file = os.path.join(working_dir, 'Makefile') if not os.path.isfile(make_file): sys.exit("No Makefile in current working directory - exiting.") # Get a list containing the target names by invoking 'make help', and split by linebreaks list_of_targets = subprocess.check_output(['make', 'help']).splitlines() # Create a make command by matching the pattern to the target make_command_list = ['nice', '-n19', 'make', '-j4'] for target in list_of_targets: if re.search(pattern, target): make_command_list.append(target.replace('... ', '')) print("Found " + str(len(make_command_list) - 2) + " targets to make:") for i in range(2, len(make_command_list)): print('\t' + make_command_list[i]) subprocess.call(make_command_list)
In high-temperature environments such as foundries, where casting and forging activities take place, there is a need for radio (telemetry) crane scales. During the casting process, a specific quantity of additives is poured into the molten material. The telemetry crane scales (radio links) are used for weighing the foundry ladles, thus enabling the addition of an exact percentage of additives as well as preventing potential wastage or damage caused by excess molten material. Using a wireless crane scale in a foundry not only helps to reduce costs and increase efficiency, but it also increases safety, as it helps to prevent dangerous overloads. Wireless data transfer is crucial for a foundry crane scale, where the temperatures are very high and a cable would not survive the heat.

A very important protection feature for foundry crane scales is the heat shield. Without a heat shield, foundry crane scales and load cells would not be able to operate in the high temperatures. Ron wireless crane scales and tension dynamometers are available with a high-quality, advanced heat shield.

* Optional averaging (dampening) helps to stabilize measurements for a swinging load.
* Optional additional displays: large LED displays for low-light environments.

Another important feature of a radio (telemetry) crane scale is its ability to neutralize environmental interference that often exists when wireless data transmission is used. If a radio (telemetry) crane scale lacks a neutralizing mechanism, the foundry crane scale may display a false reading as a result of the interference, and this could be very dangerous. The Ron 2501 wireless crane scales offer a unique feature: the Ron 2501 crane scale (radio link) is programmed to ensure that the received and indicated value is identical to the transmitted value. If the two values are not identical (for example, due to environmental interference), the system will indicate "Transmission Error". Under no circumstances will it indicate a different value than the transmitted one. This is an important safety feature of our foundry load cells and crane scales.

In order to protect the system during use in hot environments, it is important to ensure a short heat exposure time and a long enough cooling phase. The use of a heat shield and a thermometer is crucial. In addition, it is important to protect the load cell from molten drops of metal.
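As a rough illustration of that kind of integrity check (the manufacturer's actual radio protocol is not public, so the frame layout, the CRC choice, and the function names below are assumptions), a receiver can refuse to display any reading whose checksum does not match what was transmitted:

import struct
import zlib

def encode_reading(weight_kg):
    """Pack a weight reading followed by a CRC32 of the payload (illustrative frame format)."""
    payload = struct.pack(">f", weight_kg)
    return payload + struct.pack(">I", zlib.crc32(payload))

def decode_reading(frame):
    """Return the weight only if the checksum matches; otherwise signal an error."""
    payload, received_crc = frame[:4], struct.unpack(">I", frame[4:])[0]
    if zlib.crc32(payload) != received_crc:
        return None, "Transmission Error"   # never display a corrupted value
    return struct.unpack(">f", payload)[0], "OK"

frame = encode_reading(1250.5)
corrupted = bytes([frame[0] ^ 0xFF]) + frame[1:]   # simulate radio interference
print(decode_reading(frame))       # (1250.5, 'OK')
print(decode_reading(corrupted))   # (None, 'Transmission Error')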
# -*- coding: utf-8 -*-
# --------------------------------------------------------
# Conector ArchiveOrg By Alfa development Group
# --------------------------------------------------------

from core import httptools
from core import jsontools
from core import scrapertools
from platformcode import logger


def test_video_exists(page_url):
    logger.info("(page_url='%s')" % page_url)
    data = httptools.downloadpage(page_url)
    if data.code == 404:
        return False, "[ArchiveOrg] El archivo no existe o ha sido borrado"
    return True, ""


def get_video_url(page_url, premium=False, user="", password="", video_password=""):
    logger.info("url=" + page_url)
    host = "https://archive.org/"
    video_urls = []
    data = httptools.downloadpage(page_url).data
    json = jsontools.load(
        scrapertools.find_single_match(data, """js-play8-playlist" type="hidden" value='([^']+)""")
    )
    # subtitles
    subtitle = ""
    for subtitles in json[0]["tracks"]:
        if subtitles["kind"] == "subtitles":
            subtitle = host + subtitles["file"]
    # sources
    for url in json[0]["sources"]:
        video_urls.append(['%s %s[ArchiveOrg]' % (url["label"], url["type"]), host + url["file"], 0, subtitle])
    video_urls.sort(key=lambda it: int(it[0].split("p ", 1)[0]))
    return video_urls
Comet Siding Spring, also known as C/2007 Q3, was first discovered in 2007 by Australian astronomers when it was near its closest approach to Earth and the Sun at 1.2 and 2.2 AU, respectively. The comet is on a hyperbolic orbit, so it will not return again to the neighborhood of the Sun. The Wide-field Infrared Survey Explorer observed it on January 10, 2010, by which time it was far north of the ecliptic plane. WISE views the sky in infrared wavelengths. The comet is 10 times colder than the background stars, so its emission is mostly at longer wavelengths, giving it its red color in this false-color view. (In this image, 3.4-micron light is colored blue; 4.6-micron light is green; 12-micron light is orange; and 22-micron light is red.) Visit the JPL Near Earth Object Program website to see an interactive diagram of C/2007 Q3 (Siding Spring)'s orbit.

The image made me curious about the expectations for WISE to observe, and discover, comets, so I fired off an email to WISE's principal investigator, Ned Wright. He told me, "WISE has already seen lots of comets, mostly already known. Siding Spring is the biggest we've seen so far. We could with luck discover a dozen or more comets, with most of them being quite small and faint." I wrote before about the first comet they discovered. A quick check of the Near-Earth Object Confirmation Page at the Minor Planet Center shows five new objects recently observed by WISE (they are the ones with codes beginning in "W"), waiting for followup by ground-based observers so that orbits can be established. On that page they make no notation as to whether the objects are asteroids (more common) or comets (less common), so as not to prejudice observers; followup observers are expected to study their images and note if the object appears to look cometary, that is, if it has a coma. A note to observers explains: "As WISE's scans run from ecliptic pole to ecliptic pole, you will see WISE objects spanning a wider range of declinations than is normally covered by ground-based surveys. We realize this may present a challenge to follow up observers, but this sky coverage may reveal new knowledge of the distribution of orbits of solar system objects. You will note that there are now a number of WISE targets on the confirmation page. We encourage you to follow these up as WISE targets have high priority scientifically. The combination of visible light brightness measurements, improved orbit information and the WISE infrared fluxes will enable the determination of accurate diameters, albedos, surface properties, thermal forces, etc. This all requires ground-based follow-up. While most of our targets will be observed via established collaborations, the participation of the broader astronomical community plays a crucial role that has already been demonstrated for a number of WISE discoveries."

The Siding Spring image is just one of several stunners released today; I am particularly enamored of the huge Andromeda Galaxy image, which I think I may have to print out and hang on my wall!
#mpe-log.py import mpi from mpi import mpe import time import Numeric rank,size = mpi.init() mpe.init_log() assert (size == 2), "This example requires 2 processors to run properly!" runEventStart = mpe.log_get_event_number() runEventEnd = mpe.log_get_event_number() sendEventStart = mpe.log_get_event_number() sendEventEnd = mpe.log_get_event_number() recvEventStart = mpe.log_get_event_number() recvEventEnd = mpe.log_get_event_number() sleepEventStart = mpe.log_get_event_number() sleepEventEnd = mpe.log_get_event_number() mpe.describe_state( runEventStart, runEventEnd, "Full Runtime", "blue" ) mpe.describe_state( sendEventStart, sendEventEnd, "send", "red" ) mpe.describe_state( recvEventStart, recvEventEnd, "recv", "green" ) mpe.describe_state( sleepEventStart, sleepEventEnd, "sleep", "turquoise" ) mpe.log_event( runEventStart, rank, "starting run") # Let's send and receive a 100 messages and generate 100(200?) events. for i in xrange(100): if( rank == 0 ): # Generate 100 numbers, send them to rank 1 mpe.log_event( sendEventStart, i, "start send" ) data = Numeric.array( range(10000), Numeric.Int32 ) mpi.send( data, 10000, mpi.MPI_INT, 1, i, mpi.MPI_COMM_WORLD ) mpe.log_event( sendEventEnd, i, "end send") else: mpe.log_event( recvEventStart, i, "start recv" ) rdata = mpi.recv( 10000, mpi.MPI_INT, 0, i, mpi.MPI_COMM_WORLD ) mpe.log_event( recvEventEnd, i, "end recv" ) if( i == 50 ): mpe.log_event( sleepEventStart, i, "start sleep" ) time.sleep(1) mpi.barrier( mpi.MPI_COMM_WORLD ) mpe.log_event( sleepEventEnd, i, "end sleep") mpe.log_event( runEventEnd, rank, "stopping run") mpe.finish_log("test1") mpi.finalize()
"Equity" is the key word as Denver City Council moves a $937 million bond program closer to the ballot - Denverite, the Denver site! Aug. 08, 2017, 12:02 a.m. That Denver’s 2017 general obligation bond will right long-standing wrongs and make this city a better place for all its residents is the promise held out by City Council members who voted unanimously Monday to move forward a $937 million bond program, half of which would go to roads, bus infrastructure, sidewalks and bike lanes. This, even as some community activists decried the lack of funding for housing needs in rapidly gentrifying neighborhoods. Without housing support, long-suffering residents won’t get to enjoy the fruits of the bond program, they said. Other Globeville and Elyria-Swansea residents urged the City Council to approve the bond package that contains money to redo Washington Street to be safer, to build a pedestrian and bike bridge over the railroad crossing at 47th and York and to complete sidewalks and add bus stops around the neighborhoods — all projects included in neighborhood plans developed with community input over a period of years. The large majority of speakers were there to champion specific projects and urge adoption of the entire bond package. There were the heads of Denver Health, the Denver Museum of Nature and Science, the Denver Art Museum, the Denver Botanic Gardens and the Denver Zoo, whose institutions collectively will get $191 million from this package, 20 percent of all the spending, and whose fundraising capacity and political capital are expected to help carry the bond. And there were teenagers from Westwood who described being stuck at home because the nearest rec center was too far away and their parents work and can’t shuttle them around town. Norma Brambila, a longtime Westwood resident and community leader, also used that word: equity. and $1.9 million will go toward West Colfax transit enhancements. Transit is important, but the thinking here is that the Federal transit project ultimately wouldn’t improve mobility along the corridor that much compared to the impact of sidewalk improvements. City officials said they had wanted to use these Federal transit investments to move buses better in the absence of a full bus-rapid-transit system on a corridor with some of the highest ridership in the system and a lot of pedestrian injuries and fatalities. Councilman Christopher Herndon decided not to bring forward an amendment that would have added money for a teen center in the Denver Central Library, and Councilman Paul Kashmann decided to make do with the sidewalk funding levels in the recommended package. Council member after council member said the city needs to find more money to address massive needs outside the once-a-decade bond process. Council President Albus Brooks said the lack of contention and strife during the public hearing — the last real chance for the public to change what’s in the bond package — showed that the process was done right. Brooks said the decision to prioritize transportation in the bond, even at the expense of housing, was the only way to make sure the bond will have a real impact. Nonetheless, there is a lingering dissatisfaction that housing needs are not being addressed. The council voted on seven separate ballot items and an overarching ordinance that describes all the projects. Each of these will be on the ballot as a separate item. 
The city is not asking for an increase in the tax rate, but property owners will be paying more for debt service than in the past because their property generally is worth more. The city’s financial analysis assumes interest rates between 4.4 percent and 5.8 percent. The city has a AAA bond rating. There’s one more step before the bond package goes to the ballot. Denver City Council will hold a second vote on Aug. 14. Herndon said all the hard decisions to reach this point were actually the easy part of the process. “It’s all for naught if we can’t get it passed in November,” he said.
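For a rough sense of what those financing assumptions imply, the sketch below computes the level annual debt service on the full $937 million at the two quoted interest rates. The 20-year term and the level-payment structure are illustrative assumptions for this example, not figures from the city's analysis.

def annual_debt_service(principal, rate, years):
    """Level annual payment on a fully amortizing bond (standard annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 937_000_000
for rate in (0.044, 0.058):                                     # city's assumed interest-rate range
    payment = annual_debt_service(principal, rate, years=20)    # 20-year term is an assumption
    print(f"{rate:.1%}: about ${payment / 1e6:.0f} million per year")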
# This file is part of Supysonic. # Supysonic is a Python implementation of the Subsonic server API. # # Copyright (C) 2019 Alban 'spl0k' Féron # # Distributed under terms of the GNU AGPLv3 license. import uuid from flask import current_app, request from pony.orm import ObjectNotFound from ..daemon import DaemonClient from ..daemon.exceptions import DaemonUnavailableError from ..db import Track from . import api_routing from .exceptions import GenericError, MissingParameter, Forbidden @api_routing("/jukeboxControl") def jukebox_control(): if not request.user.jukebox and not request.user.admin: raise Forbidden() action = request.values["action"] index = request.values.get("index") offset = request.values.get("offset") id = request.values.getlist("id") gain = request.values.get("gain") if action not in ( "get", "status", "set", "start", "stop", "skip", "add", "clear", "remove", "shuffle", "setGain", ): raise GenericError("Unknown action") args = () if action == "set": if id: args = [uuid.UUID(i) for i in id] elif action == "skip": if not index: raise MissingParameter("index") if offset: args = (int(index), int(offset)) else: args = (int(index), 0) elif action == "add": if not id: raise MissingParameter("id") else: args = [uuid.UUID(i) for i in id] elif action == "remove": if not index: raise MissingParameter("index") else: args = (int(index),) elif action == "setGain": if not gain: raise MissingParameter("gain") else: args = (float(gain),) try: status = DaemonClient(current_app.config["DAEMON"]["socket"]).jukebox_control( action, *args ) except DaemonUnavailableError: raise GenericError("Jukebox unavaliable") rv = dict( currentIndex=status.index, playing=status.playing, gain=status.gain, position=status.position, ) if action == "get": playlist = [] for path in status.playlist: try: playlist.append(Track.get(path=path)) except ObjectNotFound: pass rv["entry"] = [ t.as_subsonic_child(request.user, request.client) for t in playlist ] return request.formatter("jukeboxPlaylist", rv) else: return request.formatter("jukeboxStatus", rv)
Acleara is a breakthrough acne treatment that uses pulses of specially filtered light to help clear your pores and treat acne outbreaks. Acleara is approved by the Food and Drug Administration (FDA) to treat a wide range of acne in all skin types. Depending on your individual condition, multiple treatments may be required for optimal results, with maintenance every three to four months. Our aestheticians will discuss your best treatment options. You may notice that your skin is drier, less oily, and feels fresher. Recent acne outbreaks will begin to clear over the next few days following treatment. Most Philadelphia Acleara patients experience little to no discomfort during the treatment. Immediately following the treatment, most people do experience a mild, sunburn-like sensation that may be accompanied by minor swelling or bruising. This usually lasts only 2 to 24 hours.
import traceback import pyvex import archinfo from ...codenode import BlockNode, HookNode from ...engines.successors import SimSuccessors class CFGNodeCreationFailure(object): """ This class contains additional information for whenever creating a CFGNode failed. It includes a full traceback and the exception messages. """ __slots__ = ['short_reason', 'long_reason', 'traceback'] def __init__(self, exc_info=None, to_copy=None): if to_copy is None: e_type, e, e_traceback = exc_info self.short_reason = str(e_type) self.long_reason = repr(e) self.traceback = traceback.format_exception(e_type, e, e_traceback) else: self.short_reason = to_copy.short_reason self.long_reason = to_copy.long_reason self.traceback = to_copy.traceback def __hash__(self): return hash((self.short_reason, self.long_reason, self.traceback)) class CFGNode(object): """ This class stands for each single node in CFG. """ __slots__ = ( 'addr', 'simprocedure_name', 'syscall_name', 'size', 'no_ret', 'is_syscall', 'function_address', 'block_id', 'thumb', 'byte_string', 'name', 'instruction_addrs', 'irsb', 'has_return', '_cfg', ) def __init__(self, addr, size, cfg, simprocedure_name=None, is_syscall=False, no_ret=False, function_address=None, block_id=None, irsb=None, instruction_addrs=None, thumb=False, byte_string=None): """ Note: simprocedure_name is not used to recreate the SimProcedure object. It's only there for better __repr__. """ self.addr = addr self.simprocedure_name = simprocedure_name self.size = size self.no_ret = no_ret self.is_syscall = is_syscall self._cfg = cfg self.function_address = function_address self.block_id = block_id self.thumb = thumb self.byte_string = byte_string self.name = simprocedure_name if self.name is None: sym = cfg.project.loader.find_symbol(addr) if sym is not None: self.name = sym.name if self.name is None and isinstance(cfg.project.arch, archinfo.ArchARM) and addr & 1: sym = cfg.project.loader.find_symbol(addr - 1) if sym is not None: self.name = sym.name if function_address and self.name is None: sym = cfg.project.loader.find_symbol(function_address) if sym is not None: self.name = sym.name if self.name is not None: offset = addr - function_address self.name = "%s%+#x" % (self.name, offset) self.instruction_addrs = instruction_addrs if instruction_addrs is not None else tuple() if not instruction_addrs and not self.is_simprocedure: # We have to collect instruction addresses by ourselves if irsb is not None: self.instruction_addrs = tuple(s.addr + s.delta for s in irsb.statements if type(s) is pyvex.IRStmt.IMark) # pylint:disable=unidiomatic-typecheck self.irsb = irsb self.has_return = False @property def successors(self): return self._cfg.get_successors(self) @property def predecessors(self): return self._cfg.get_predecessors(self) @property def accessed_data_references(self): if self._cfg.sort != 'fast': raise ValueError("Memory data is currently only supported in CFGFast.") for instr_addr in self.instruction_addrs: if instr_addr in self._cfg.insn_addr_to_memory_data: yield self._cfg.insn_addr_to_memory_data[instr_addr] @property def is_simprocedure(self): return self.simprocedure_name is not None @property def callstack_key(self): # A dummy stub for the future support of context sensitivity in CFGFast return None def copy(self): c = CFGNode(self.addr, self.size, self._cfg, simprocedure_name=self.simprocedure_name, no_ret=self.no_ret, function_address=self.function_address, block_id=self.block_id, irsb=self.irsb, instruction_addrs=self.instruction_addrs, thumb=self.thumb, 
byte_string=self.byte_string, ) return c def __repr__(self): s = "<CFGNode " if self.name is not None: s += self.name + " " s += hex(self.addr) if self.size is not None: s += "[%d]" % self.size s += ">" return s def __eq__(self, other): if isinstance(other, SimSuccessors): raise ValueError("You do not want to be comparing a SimSuccessors instance to a CFGNode.") if not type(other) is CFGNode: return False return (self.addr == other.addr and self.size == other.size and self.simprocedure_name == other.simprocedure_name ) def __hash__(self): return hash((self.addr, self.simprocedure_name, )) def to_codenode(self): if self.is_simprocedure: return HookNode(self.addr, self.size, self.simprocedure_name) return BlockNode(self.addr, self.size, thumb=self.thumb) @property def block(self): if self.is_simprocedure or self.is_syscall: return None project = self._cfg.project # everything in angr is connected with everything... b = project.factory.block(self.addr, size=self.size, opt_level=self._cfg._iropt_level) return b class CFGNodeA(CFGNode): """ The CFGNode that is used in CFGAccurate. """ __slots__ = [ 'input_state', 'looping_times', 'callstack', 'depth', 'final_states', 'creation_failure_info', 'return_target', 'syscall', '_callstack_key', ] def __init__(self, addr, size, cfg, simprocedure_name=None, no_ret=False, function_address=None, block_id=None, irsb=None, instruction_addrs=None, thumb=False, byte_string=None, callstack=None, input_state=None, final_states=None, syscall_name=None, looping_times=0, is_syscall=False, syscall=None, depth=None, callstack_key=None, creation_failure_info=None, ): super(CFGNodeA, self).__init__(addr, size, cfg, simprocedure_name=simprocedure_name, is_syscall=is_syscall, no_ret=no_ret, function_address=function_address, block_id=block_id, irsb=irsb, instruction_addrs=instruction_addrs, thumb=thumb, byte_string=byte_string, ) self.callstack = callstack self.input_state = input_state self.syscall_name = syscall_name self.looping_times = looping_times self.syscall = syscall self.depth = depth self.creation_failure_info = None if creation_failure_info is not None: self.creation_failure_info = CFGNodeCreationFailure(creation_failure_info) self._callstack_key = self.callstack.stack_suffix(self._cfg.context_sensitivity_level) \ if self.callstack is not None else callstack_key self.final_states = [ ] if final_states is None else final_states # If this CFG contains an Ijk_Call, `return_target` stores the returning site. # Note: this is regardless of whether the call returns or not. You should always check the `no_ret` property if # you are using `return_target` to do some serious stuff. self.return_target = None @property def callstack_key(self): return self._callstack_key @property def creation_failed(self): return self.creation_failure_info is not None def downsize(self): """ Drop saved states. 
""" self.input_state = None self.final_states = [ ] def __repr__(self): s = "<CFGNodeA " if self.name is not None: s += self.name + " " s += hex(self.addr) if self.size is not None: s += "[%d]" % self.size if self.looping_times > 0: s += " - %d" % self.looping_times if self.creation_failure_info is not None: s += ' - creation failed: {}'.format(self.creation_failure_info.long_reason) s += ">" return s def __eq__(self, other): if isinstance(other, SimSuccessors): raise ValueError("You do not want to be comparing a SimSuccessors instance to a CFGNode.") if not isinstance(other, CFGNodeA): return False return (self.callstack_key == other.callstack_key and self.addr == other.addr and self.size == other.size and self.looping_times == other.looping_times and self.simprocedure_name == other.simprocedure_name ) def __hash__(self): return hash((self.callstack_key, self.addr, self.looping_times, self.simprocedure_name, self.creation_failure_info)) def copy(self): return CFGNodeA( self.addr, self.size, self._cfg, simprocedure_name=self.simprocedure_name, no_ret=self.no_ret, function_address=self.function_address, block_id=self.block_id, irsb=self.irsb, instruction_addrs=self.instruction_addrs, thumb=self.thumb, byte_string=self.byte_string, callstack=self.callstack, input_state=self.input_state, syscall_name=self.syscall_name, looping_times=self.looping_times, is_syscall=self.is_syscall, syscall=self.syscall, depth=self.depth, final_states=self.final_states[::], callstack_key=self.callstack_key, )
Can anybody give me a basic rundown of how the rating system works please?

You want to get out of U as quickly as possible and avoid U servers, as that is where a lot of crashes happen, both intentionally and due to inexperience. To get out of U, avoid contact with others and also avoid going off the track. It goes up quickest when you have close racing, but with close racing comes the increased risk of contact, especially with inexperienced drivers. Also watch out for aggressive drivers that do not give way - avoid being aggressive yourself to avoid unnecessary contact. Getting to F is done in a few races. Once you're in D or higher ranked races (C, B, A, S) you start to see a significant improvement in racing, spatial awareness and race craft. You will get negative safety rating progress even by being hit through no fault of your own - hence anticipate where issues might occur and drive accordingly (i.e. don't go into the first corner as the last braker, and look in your mirror for those types as well and avoid them if possible).

Numbers = Strength/Performance (i.e. results compared to your peers). Don't worry about this. You can lose a lot with disconnects (whether it's your internet connection or you disconnecting intentionally). You gain a lot by beating higher ranked guys. If a much lower performance-rated guy beats you, it will reduce your performance rating more. Ultimately your rating here will settle at your natural pace. Performance rating is not a championship points system.

Next, continue to stay out of trouble, but get closer to others and run with them... you don't need to beat them... in fact I suggest that if you're trying to improve your safety rating, don't bother actually trying to beat another racer... just race close to them and stay out of trouble. If you're able to race close to others and not get in trouble, you can start to actually try to gain positions, but again, if your goal is to gain safety rating, you had better know when to back off and let the other guy go ahead... or in some rare cases, try to get out ahead of them so they don't wreck you... You will lose safety rating points if you get in a wreck, regardless of whose fault it is. Lastly, and this may not be known to anyone who hasn't achieved an S rating... Once you get to S, as near as I can tell, you never drop down from there. I guess once you've managed to race that much without too many incidents, they give you the 'S' (safe) standard of approval.

Just had 3 races and tried to stay out of trouble, was doing well in one of them until somebody else hit a tyre stack, sending it into my path... "contact" warning flashes up... that's that then! This is going to be a challenge.
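The description of the performance number above - gaining more for beating higher-rated drivers and losing more when a lower-rated driver beats you - matches the behaviour of an Elo-style update. The game's actual formula isn't published, so the K-factor and rating scale below are assumptions; this is only a sketch of the general mechanism:

def expected_score(my_rating, opponent_rating, scale=400):
    """Probability of finishing ahead of the opponent under an Elo model (assumed scale)."""
    return 1 / (1 + 10 ** ((opponent_rating - my_rating) / scale))

def elo_update(my_rating, opponent_rating, i_won, k=32):
    """New rating after one head-to-head result; k is an assumed sensitivity factor."""
    return my_rating + k * ((1.0 if i_won else 0.0) - expected_score(my_rating, opponent_rating))

# Beating a much higher-rated driver gains a lot; losing to a much lower-rated one costs a lot.
print(round(elo_update(1500, 1800, i_won=True)))   # noticeably above 1500
print(round(elo_update(1500, 1200, i_won=False)))  # noticeably below 1500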
# Copyright (C) 2016 Ross Wightman. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # ============================================================================== """ """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf import pandas as pd import os from copy import deepcopy from fabric import util from models import ModelSdc from processors import ProcessorSdc from collections import defaultdict FLAGS = tf.app.flags.FLAGS tf.app.flags.DEFINE_string( 'root_network', 'resnet_v1_50', """Either resnet_v1_50, resnet_v1_101, resnet_v1_152, inception_resnet_v2, nvidia_sdc""") tf.app.flags.DEFINE_integer( 'top_version', 5, """Top level network version, specifies output layer variations. See model code.""") tf.app.flags.DEFINE_boolean( 'bayesian', False, """Activate dropout layers for inference.""") tf.app.flags.DEFINE_integer( 'samples', 0, """Activate dropout layers for inference.""") tf.app.flags.DEFINE_string( 'checkpoint_path', '', """Checkpoint file for model.""") tf.app.flags.DEFINE_string( 'ensemble_path', '', """CSV file with ensemble specification. Use as alternative to single model checkpoint.""") tf.app.flags.DEFINE_string( 'name', 'model', """Name prefix for outputs of exported artifacts.""") def _weighted_mean(outputs_list, weights_tensor): assert isinstance(outputs_list[0], tf.Tensor) print(outputs_list) outputs_tensor = tf.concat(1, outputs_list) print('outputs concat', outputs_tensor.get_shape()) if len(outputs_list) > 1: weighted_outputs = outputs_tensor * weights_tensor print('weighted outputs ', weighted_outputs.get_shape()) outputs_tensor = tf.reduce_mean(weighted_outputs) else: outputs_tensor = tf.squeeze(outputs_tensor) return outputs_tensor def _merge_outputs(outputs, weights): assert outputs merged = defaultdict(list) weights_tensor = tf.pack(weights) print('weights ', weights_tensor.get_shape()) # recombine multiple model outputs by dict key or list position under output name based dict if isinstance(outputs[0], dict): for o in outputs: for name, tensor in o.items(): merged['output_%s' % name].append(tensor) elif isinstance(outputs[0], list): for o in outputs: for index, tensor in enumerate(o): merged['output_%d' % index].append(tensor) else: merged['output'] = outputs reduced = {name: _weighted_mean(value_list, weights_tensor) for name, value_list in merged.items()} for k, v in reduced.items(): print(k, v, v.get_shape()) return reduced def build_export_graph(models, batch_size=1, export_scope=''): assert models inputs = tf.placeholder(tf.uint8, [None, None, 3], name='input_placeholder') print("Graph Inputs: ") print(inputs.name, inputs.get_shape()) with tf.device('/gpu:0'): inputs = tf.cast(inputs, tf.float32) inputs = tf.div(inputs, 255) input_tensors = [inputs, tf.zeros(shape=()), tf.constant('', dtype=tf.string)] model_outputs_list = [] weights_list = [] for m in models: with tf.variable_scope(m['name'], values=input_tensors): model, processor = m['model'], m['processor'] processed_inputs = processor.process_example(input_tensors, mode='pred') if batch_size > 1: processed_inputs = [tf.gather(tf.expand_dims(x, 0), [0] * batch_size) for x in processed_inputs] processed_inputs = processor.reshape_batch(processed_inputs, batch_size=batch_size) model_outputs = model.build_tower( 
processed_inputs[0], is_training=False, summaries=False) model_outputs_list += [model.get_predictions(model_outputs, processor)] weights_list += [m['weight']] merged_outputs = _merge_outputs(model_outputs_list, weights_list) print("Graph Outputs: ") outputs = [] for name, output in merged_outputs.items(): outputs += [tf.identity(output, name)] [print(x.name, x.get_shape()) for x in outputs] return inputs, outputs def main(_): util.check_tensorflow_version() assert os.path.isfile(FLAGS.checkpoint_path) or os.path.isfile(FLAGS.ensemble_path) model_args_list = [] if FLAGS.checkpoint_path: model_args_list.append( { 'root_network': FLAGS.root_network, 'top_version': FLAGS.top_version, 'image_norm': FLAGS.image_norm, 'image_size': FLAGS.image_size, 'image_aspect': FLAGS.image_aspect, 'checkpoint_path': FLAGS.checkpoint_path, 'bayesian': FLAGS.bayesian, 'weight': 1.0, } ) else: ensemble_df = pd.DataFrame.from_csv(FLAGS.ensemble_path, index_col=None) model_args_list += ensemble_df.to_dict('records') model_params_common = { 'outputs': { 'steer': 1, # 'xyz': 2, }, } model_list = [] for i, args in enumerate(model_args_list): print(args) model_name = 'model_%d' % i model_params = deepcopy(model_params_common) model_params['network'] = args['root_network'] model_params['version'] = args['top_version'] model_params['bayesian'] = FLAGS.bayesian model = ModelSdc(params=model_params) processor_params = {} processor_params['image_norm'] = args['image_norm'] processor_params['image_size'] = args['image_size'] processor_params['image_aspect'] = args['image_aspect'] processor = ProcessorSdc(params=processor_params) model_list.append({ 'model': model, 'processor': processor, 'weight': args['weight'], 'name': model_name, 'checkpoint_path': args['checkpoint_path'] }) name_prefix = FLAGS.name with tf.Graph().as_default() as g: batch_size = 1 if not FLAGS.samples else FLAGS.samples build_export_graph(models=model_list, batch_size=batch_size) model_variables = tf.contrib.framework.get_model_variables() saver = tf.train.Saver(model_variables) with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess: init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer()) sess.run(init_op) g_def = g.as_graph_def(add_shapes=True) tf.train.write_graph(g_def, './', name='%s-graph_def.pb.txt' % name_prefix) for m in model_list: checkpoint_variable_set = set() checkpoint_path, global_step = util.resolve_checkpoint_path(m['checkpoint_path']) if not checkpoint_path: print('No checkpoint file found at %s' % m['checkpoint_path']) return reader = tf.train.NewCheckpointReader(checkpoint_path) checkpoint_variable_set.update(reader.get_variable_to_shape_map().keys()) variables_to_restore = m['model'].variables_to_restore( restore_outputs=True, checkpoint_variable_set=checkpoint_variable_set, prefix_scope=m['name']) saver_local = tf.train.Saver(variables_to_restore) saver_local.restore(sess, checkpoint_path) print('Successfully loaded model from %s at step=%d.' % (checkpoint_path, global_step)) saver.export_meta_graph('./%s-meta_graph.pb.txt' % name_prefix, as_text=True) saver.save(sess, './%s-checkpoint' % name_prefix, write_meta_graph=True) if __name__ == '__main__': tf.app.run()
Image composite: Bandwagon Philippines; Pixabay; FringeMNL, Ritual, and Philippine Educational Theater Association (PETA) Facebook pages. Fringe Manila is an annual arts fest that showcases acts and artists from all over the world who are in the fields of theater, literature, music, dance, visual art, film, cabaret, performance art, circus, and beyond. This year’s installment starts tomorrow, Feb. 7 with an opening night in the pub CINKO. This first event will feature works and performances by Art of Bodybending, Legato, Lucinda Sky, Joyen, The Addlib Divas, Humbly, Zapateria, Inkanor, Fly Lady Di, and Mr. Dee Jae. For more information, visit the FringeMNL Facebook page. Art and music come together in this collaboration of two artists from different creative mediums. This year’s first Creative Nights event at the Ayala Museum will bring together renowned abstractionist Gus Albor and Manila-based producer and beat maker CRWN. The museum currently houses an exhibit of Albor’s works called Territory, which consists of oil on canvas paintings, mixed media works, paper-based illustrations, large-scale sculptures, and installation artworks. Attendees can enjoy Albor’s works, create their own pieces, and watch performances by CRWN and Albor who is also a musician. VIP tickets cost PHP1,500 (US$28.68), which includes priority seating, meet and greet with the artists, an exhibit tour with Ayala Museum’s senior curator, and other exclusive offerings. Regular tickets are priced at PHP700 (US$13.38), walk-ins are at PHP900 (US$17.21), while Ayala Museum members & partners are charged PHP500 (US$9.56). Tickets are available in the Ayala Museum counter. For more information, visit the Creative Nights: Gus Albor x CRWN Facebook event page. Those who are sick of the usual flowers and chocolate can shop for their Valentine in the upcoming Gayuma sa Artesania pop-up, where local artisans will be showcasing and selling their goods. The event will happen on Friday starting 12nn. Shopping will last until 7pm but the place will be open onwards for some music, happy hour drinks, and food. Happy hour is from 5 – 7pm and admission is free. Participating brands include the Habi Philippine Textile Council, Craft MNL Craft Shop and Workspace, Katas Cashew Milk, Saan-Saan Candles, and Ritual Sustainable General Store. For more information, visit the Gayuma sa Artesania Facebook event page. Photo: Philippine Educational Theater Association (PETA) Facebook page. Charot is a colloquial Filipino term that means “just kidding,” and it’s also the title of the Philippine Educational Theater Association’s (PETA) upcoming play. A political satire that focuses on the real-life issue of charter change, its story revolves around citizens stuck in traffic on their way to a plebiscite on whether or not to shift to Federalism. The subject matter is serious but the show aims to encourage the audience to think about the country’s current issues through comedy. The show will also be interactive but giving away details will be a spoiler. Tickets are available on Ticket World and are priced at PHP600 (US$11.47) – PHP1,500 (US$28.68). For more information, visit the Philippine Educational Theater Association’s (PETA) Facebook page. Those planning a special occasion, say a wedding or birthday party, can check out Toast! this weekend, a celebrations fair that will bring together more than 100 event suppliers. The event is organized by the team behind the Bride and Breakfast, the local bridal website that features Pinterest-worthy weddings. 
Apart from the convenience of having suppliers in one location, there will also be discounted deals for attendees. The entrance fee is PHP100 (US$1.91). For more information, visit the Toast! Celebrations Fair Facebook event page.
#!/usr/bin/env python3 # # Terminal program to connect to encrypted netrepl # # author: ulno # create date: 2017-09-16 # # based on telnet example at http://www.binarytides.com/code-telnet-client-sockets-python/ _debug = "terminal:" import select, threading from netrepl import Netrepl_Parser input_buffer = "" input_buffer_lock = threading.Lock() quit_flag = False # from: https://stackoverflow.com/questions/510357/python-read-a-single-character-from-the-user # and https://github.com/magmax/python-readchar # TODO: use whole readchar to support windows? import tty, sys, termios # raises ImportError if unsupported # def readchar(): # fd = sys.stdin.fileno() # old_settings = termios.tcgetattr(fd) # try: # tty.setraw(sys.stdin.fileno()) # ch = sys.stdin.read(1) # finally: # termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) # return ch def readchar(): ch = sys.stdin.read(1) return ch def char_reader(): # thread reading from stdin into inputbuffer global input_buffer, input_buffer_lock, quit_flag while not quit_flag: ch = readchar() if ch == "\x1d": quit_flag = True ch = "" elif ch == "\r": ch = "\r\n" input_buffer_lock.acquire() input_buffer += ch input_buffer_lock.release() def getChar(): answer = sys.stdin.read(1) return answer def main(): global quit_flag, input_buffer_lock, input_buffer parser = Netrepl_Parser('Connect to netrepl and open an' 'interactive terminal.', debug=_debug) con = parser.connect() if _debug is not None: print(_debug, 'Press ctrl-] to quit.\n') print print("Try to type help to get an initial help screen.") # Do not request help-screen, gets too confusing later # if _debug: print(_debug,'Requesting startscreen.') # con.send(b"\r\nhelp\r\n") fd = sys.stdin.fileno() old_settings = termios.tcgetattr(fd) tty.setraw(fd) input_thread = threading.Thread(target=char_reader) input_thread.start() while not quit_flag: # Check if we received anything via network # Get the list sockets which are readable read_sockets, write_sockets, error_sockets = \ select.select([con.socket], [], [], 0.01) for sock in read_sockets: # incoming message from remote server if sock == con.socket: data = con.receive() l = len(data) # TODO: figure out how to detect connection close # #print("recvd:",data) # if not data: # print('\nterminal: Connection closed.') # sys.exit() # else: if l > 0: # print data try: sys.stdout.write(bytes(data[0:l]).decode()) except: if _debug: print("\r\n{} Got some weird data of len " "{}: >>{}<<\r\n".format(_debug, len(data), data)) # print("data:", str(data[0:l])) sys.stdout.flush() if len(input_buffer) > 0: # user entered a message input_buffer_lock.acquire() send_buffer = input_buffer.encode() input_buffer = "" input_buffer_lock.release() # msg = sys.stdin.readline().strip()+'\r\n' # print("\r\nmsg {} <<\r\n".format(send_buffer)) con.send(send_buffer) # cs.send(send_buffer+b'\r\n') # cs.send(send_buffer+b'!') input_thread.join() # finsih input_thread if _debug: print("\r\n{} Closing connection.\r".format(_debug)) con.repl_normal() # normal repl con.close(report=True) termios.tcsetattr(fd, termios.TCSADRAIN, old_settings) if _debug: print("\r\n{} Connection closed.\r\n".format(_debug)) # main function if __name__ == "__main__": main()
Websites and Organizations. There's so much going on! Under CEO JoAnn Jenkins, the American Association of Retired Persons (AARP) has become a terrific, multi-faceted resource and lobbying force for people over 50. Specifically, check out the companion site, AARP.org/disruptaging developed to support the ideas puts forth in Jo Ann Jenkins’ book, Disrupt Aging. Area Agencies on Aging form a national network that was created by an act of Congress, called the Older Americans Act, in 1974. There are over 600 Area Agency on Aging across the nation, and they are all charged with the same thing—delivering services to seniors at the local level. Think of AAAs as the boots-on-the-ground of senior services—connecting to seniors at the county, city and neighborhood level. Each geographic area of the country has an Area Agency on Aging assigned to serve it. Most states have several Area Agencies. You can locate yours here. Started 14 years ago by Danny Davis, “Beyond 50 Radio” has an indexed archive of shows on every topic imaginable. Lots of good food for thought. This site was founded by Ron Pevny, author of Conscious Living, Conscious Aging, to support people in getting off aging auto-pilot and into becoming the “elder” they want to be. You'll find lots of fine wisdom here. The tagline of this organization is “Second acts for the greater good.” Its great aim is to harvest the unprecedented resource represented by tens of millions of Americans over 50. Encore has developed several strategies to make the most of our power: connecting generations to improve the world, and connecting 50+ individuals with paid an unpaid work opportunities, and with each other. Karl Pillemer, a Cornell University professor, has worked with elders for ages, collecting thousands of bits of observation, advice, and philosophy from them, all indexed by topic on this site. He’s also put them into two books: 30 Lessons for Loving and 30 Lessons for Living. This isn't activism exactly, but it's generous, kind, interesting, and validating of older adults' hard-won perspectives. The National Council on Aging is the granddaddy (sorry) of American organizations dedicated to helping people age with dignity, health, and minds intact. It was founded way back in 1950—by Boomers’ parents—in response to rising health costs and mandatory retirement, which were decimating older people’s finances. A fascinating interactive history of NCOA’s work is available here. Today, NCOA offers a wealth of resources to help older Americans maintain or achieve economic security and mental and physical health. NCOA hosts the National Institute of Senior Centers, which supports and strengthens America’s 11,000 senior centers. NCOA’s lobbying and public policy arm works to make sure older Americans’ needs are not lost in the shuffle in D.C. Venerable and essential. This U.S. government site is full of resources and information for older adults and those who love them, including people living with Alzheimer’s and their caregivers. This New York Times series ranges widely in topics of interest to older Americans and the people who care for them. It’s always interesting. Next Avenue is PBS’s first and only national journalism service for people in the latter half of their lives. Their tagline is, “Where grownups keep growing.” You can find fresh information daily on just about any topic relevant to aging. They write, “Every day, we invite readers to consider what is next, what lies just ahead and what will be revealed in their lives.” Don’t miss it! 
Continued learning and social connection are key ingredients for brain health and a happy life. But it can be challenging to find high-quality educational experiences that are affordable and open to people in later life. The Bernard Osher Foundation has filled a critical gap by funding Osher Lifelong Learning Institutes at scores of American universities from Maine to Hawaii. The “OLLI” at the University of Michigan is a great example of the riches these programs bring to the community. Find the one nearest to you, and have fun growing your mind and your social network at the same time.

The tagline for this organization is “Wisdom and Spirit in Action.” Sage-ing International is based on the book, From Age-ing to Sage-ing: A Revolutionary Approach to Growing Older by Zalman Schachter-Shalomi and Ronald S. Miller. The organization’s purpose is to guide individuals to reclaim the power of aging for the benefit of society. No matter what you think about the phrase “sage-ing,” this site is valuable as a resource and thought-provoker.

The National Institute of Senior Centers, co-housed with the National Council on Aging, is the support center and standard-bearer for senior centers nationwide. Their map of affiliated senior centers can help you find one near you. You can also find centers by searching "[Your state] Association of Senior Centers." Those sites vary widely, but are likely to help you find the center nearest you. Like state sites, senior centers vary widely in their programming, but most offer classes in finance, exercise, the arts, and other subjects, typically for free. The best are places that energetically foster intergenerational interaction, breaking down the psychic barriers between "us" younger people and "those" old people.

Stanford’s is by far the most public-facing university-based center on aging in the U.S. Founded and directed by psychologist and gerontologist Laura Carstensen, author of A Long Bright Future: Happiness, Health, and Financial Security in an Age of Increased Longevity (2009), Stanford’s Center on Longevity (note: not on “aging” or “geriatrics”) focuses its research on real-world requirements for healthy aging: mental sharpness, physical fitness, and financial security. The dozens of research-based publications found on the site are accessible to lay readers, and offer up the most rigorous, timely information available on these key issues in aging.

The principles of universal design—creating environments and products that work for people of all ages and abilities—apply to technology as well. Gary Kaye founded the informational website Tech50+ to keep tech developers’ feet to the fire by reminding them of the 50+ crowd’s love for tech and their buying power. Here you’ll find reviews of new products and services from a 50+ perspective. This well-designed site is packed with useful information.

This site is rich with links to blogs (who knew there were 50?) on different aspects of aging: health, longevity, nutrition, hospice, and more. When you have this many, they can’t all be right for you (and a couple links are broken), but there is a lot to explore here.
#!/usr/bin/env python
# Split a source file into one output file per "/** * Class:" doc-comment block (Python 2).

import re
import os
import sys
import shutil


def splitClasses(inputFilename, outputDirectory):
    print "Splitting classes, input: %s output: %s" % (inputFilename, outputDirectory)

    if not os.path.isfile(inputFilename):
        print "\nProcess aborted due to errors."
        sys.exit('ERROR: Input file "%s" does not exist!' % inputFilename)

    if os.path.isdir(outputDirectory):
        try:
            shutil.rmtree(outputDirectory, False)
        except Exception, E:
            print "\nAbnormal termination: Unable to clear or create working folder \"%s\"," % outputDirectory
            print "                       check if there is a process that blocks the folder."
            sys.exit("ERROR: %s" % E)

    # if not os.path.exists(outputDirectory):
    #     os.makedirs(outputDirectory)

    pathName, fileName = os.path.split(inputFilename)
    pathName = os.path.join(outputDirectory, pathName.replace("../", ""))
    if not os.path.exists(pathName):
        print "Creating folder:", pathName
        os.makedirs(pathName)
    fileName, fileExt = os.path.splitext(fileName)

    fIn = open(inputFilename)
    sourceIn = fIn.read()
    fIn.close()

    # Split on "/** * Class:" doc-comment headers; re.split consumes the delimiter.
    sourceOut = re.split('/\*\* *\r*\n *\* *Class:', sourceIn)
    for i, text in enumerate(sourceOut):
        if i == 0:
            # Preamble before the first class comment: write it back unchanged.
            outputFileName = os.path.join(pathName, fileName + fileExt)
            chunk = text
        else:
            # Subsequent chunks: restore the delimiter that re.split consumed.
            outputFileName = os.path.join(pathName, fileName + "-" + format(i) + fileExt)
            chunk = "/**\n * Class:" + text
        print "Split to:", outputFileName
        fOut = open(outputFileName, "w")
        fOut.write(chunk)
        fOut.close()

    print "Done!"


# -----------------
# main
# -----------------
if __name__ == '__main__':
    splitClasses(sys.argv[1], sys.argv[2])
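A minimal usage sketch, not part of the original script: the module name, input file, and output folder below are hypothetical, and the call simply mirrors the command-line entry point above, which passes sys.argv[1] and sys.argv[2] straight to splitClasses.

# Hypothetical usage, assuming the script above is saved as splitClasses.py:
#   python splitClasses.py ../src/widgets.js build/split
# or, from another Python 2 script:
from splitClasses import splitClasses
splitClasses("../src/widgets.js", "build/split")   # input file and output folder are made up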
"""Support for remote Python debugging. Some ASCII art to describe the structure: IN PYTHON SUBPROCESS # IN IDLE PROCESS # # oid='gui_adapter' +----------+ # +------------+ +-----+ | GUIProxy |--remote#call-->| GUIAdapter |--calls-->| GUI | +-----+--calls-->+----------+ # +------------+ +-----+ | Idb | # / +-----+<-calls--+------------+ # +----------+<--calls-/ | IdbAdapter |<--remote#call--| IdbProxy | +------------+ # +----------+ oid='idb_adapter' # The purpose of the Proxy and Adapter classes is to translate certain arguments and return values that cannot be transported through the RPC barrier, in particular frame and traceback objects. """ import sys import types import rpc import Debugger debugging = 0 idb_adap_oid = "idb_adapter" gui_adap_oid = "gui_adapter" #======================================= # # In the PYTHON subprocess: frametable = {} dicttable = {} codetable = {} tracebacktable = {} def wrap_frame(frame): fid = id(frame) frametable[fid] = frame return fid def wrap_info(info): "replace info[2], a traceback instance, by its ID" if info is None: return None else: traceback = info[2] assert isinstance(traceback, types.TracebackType) traceback_id = id(traceback) tracebacktable[traceback_id] = traceback modified_info = (info[0], info[1], traceback_id) return modified_info class GUIProxy: def __init__(self, conn, gui_adap_oid): self.conn = conn self.oid = gui_adap_oid def interaction(self, message, frame, info=None): # calls rpc.SocketIO.remotecall() via run.MyHandler instance # pass frame and traceback object IDs instead of the objects themselves self.conn.remotecall(self.oid, "interaction", (message, wrap_frame(frame), wrap_info(info)), {}) class IdbAdapter: def __init__(self, idb): self.idb = idb #----------called by an IdbProxy---------- def set_step(self): self.idb.set_step() def set_quit(self): self.idb.set_quit() def set_continue(self): self.idb.set_continue() def set_next(self, fid): frame = frametable[fid] self.idb.set_next(frame) def set_return(self, fid): frame = frametable[fid] self.idb.set_return(frame) def get_stack(self, fid, tbid): ##print >>sys.__stderr__, "get_stack(%r, %r)" % (fid, tbid) frame = frametable[fid] if tbid is None: tb = None else: tb = tracebacktable[tbid] stack, i = self.idb.get_stack(frame, tb) ##print >>sys.__stderr__, "get_stack() ->", stack stack = [(wrap_frame(frame), k) for frame, k in stack] ##print >>sys.__stderr__, "get_stack() ->", stack return stack, i def run(self, cmd): import __main__ self.idb.run(cmd, __main__.__dict__) def set_break(self, filename, lineno): msg = self.idb.set_break(filename, lineno) return msg def clear_break(self, filename, lineno): msg = self.idb.clear_break(filename, lineno) return msg def clear_all_file_breaks(self, filename): msg = self.idb.clear_all_file_breaks(filename) return msg #----------called by a FrameProxy---------- def frame_attr(self, fid, name): frame = frametable[fid] return getattr(frame, name) def frame_globals(self, fid): frame = frametable[fid] dict = frame.f_globals did = id(dict) dicttable[did] = dict return did def frame_locals(self, fid): frame = frametable[fid] dict = frame.f_locals did = id(dict) dicttable[did] = dict return did def frame_code(self, fid): frame = frametable[fid] code = frame.f_code cid = id(code) codetable[cid] = code return cid #----------called by a CodeProxy---------- def code_name(self, cid): code = codetable[cid] return code.co_name def code_filename(self, cid): code = codetable[cid] return code.co_filename #----------called by a DictProxy---------- def 
dict_keys(self, did): dict = dicttable[did] return dict.keys() def dict_item(self, did, key): dict = dicttable[did] value = dict[key] value = repr(value) return value #----------end class IdbAdapter---------- def start_debugger(rpchandler, gui_adap_oid): """Start the debugger and its RPC link in the Python subprocess Start the subprocess side of the split debugger and set up that side of the RPC link by instantiating the GUIProxy, Idb debugger, and IdbAdapter objects and linking them together. Register the IdbAdapter with the RPCServer to handle RPC requests from the split debugger GUI via the IdbProxy. """ gui_proxy = GUIProxy(rpchandler, gui_adap_oid) idb = Debugger.Idb(gui_proxy) idb_adap = IdbAdapter(idb) rpchandler.register(idb_adap_oid, idb_adap) return idb_adap_oid #======================================= # # In the IDLE process: class FrameProxy: def __init__(self, conn, fid): self._conn = conn self._fid = fid self._oid = "idb_adapter" self._dictcache = {} def __getattr__(self, name): if name[:1] == "_": raise AttributeError, name if name == "f_code": return self._get_f_code() if name == "f_globals": return self._get_f_globals() if name == "f_locals": return self._get_f_locals() return self._conn.remotecall(self._oid, "frame_attr", (self._fid, name), {}) def _get_f_code(self): cid = self._conn.remotecall(self._oid, "frame_code", (self._fid,), {}) return CodeProxy(self._conn, self._oid, cid) def _get_f_globals(self): did = self._conn.remotecall(self._oid, "frame_globals", (self._fid,), {}) return self._get_dict_proxy(did) def _get_f_locals(self): did = self._conn.remotecall(self._oid, "frame_locals", (self._fid,), {}) return self._get_dict_proxy(did) def _get_dict_proxy(self, did): if self._dictcache.has_key(did): return self._dictcache[did] dp = DictProxy(self._conn, self._oid, did) self._dictcache[did] = dp return dp class CodeProxy: def __init__(self, conn, oid, cid): self._conn = conn self._oid = oid self._cid = cid def __getattr__(self, name): if name == "co_name": return self._conn.remotecall(self._oid, "code_name", (self._cid,), {}) if name == "co_filename": return self._conn.remotecall(self._oid, "code_filename", (self._cid,), {}) class DictProxy: def __init__(self, conn, oid, did): self._conn = conn self._oid = oid self._did = did def keys(self): return self._conn.remotecall(self._oid, "dict_keys", (self._did,), {}) def __getitem__(self, key): return self._conn.remotecall(self._oid, "dict_item", (self._did, key), {}) def __getattr__(self, name): ##print >>sys.__stderr__, "failed DictProxy.__getattr__:", name raise AttributeError, name class GUIAdapter: def __init__(self, conn, gui): self.conn = conn self.gui = gui def interaction(self, message, fid, modified_info): ##print "interaction: (%s, %s, %s)" % (message, fid, modified_info) frame = FrameProxy(self.conn, fid) self.gui.interaction(message, frame, modified_info) class IdbProxy: def __init__(self, conn, shell, oid): self.oid = oid self.conn = conn self.shell = shell def call(self, methodname, *args, **kwargs): ##print "**IdbProxy.call %s %s %s" % (methodname, args, kwargs) value = self.conn.remotecall(self.oid, methodname, args, kwargs) ##print "**IdbProxy.call %s returns %r" % (methodname, value) return value def run(self, cmd, locals): # Ignores locals on purpose! 
seq = self.conn.asyncqueue(self.oid, "run", (cmd,), {}) self.shell.interp.active_seq = seq def get_stack(self, frame, tbid): # passing frame and traceback IDs, not the objects themselves stack, i = self.call("get_stack", frame._fid, tbid) stack = [(FrameProxy(self.conn, fid), k) for fid, k in stack] return stack, i def set_continue(self): self.call("set_continue") def set_step(self): self.call("set_step") def set_next(self, frame): self.call("set_next", frame._fid) def set_return(self, frame): self.call("set_return", frame._fid) def set_quit(self): self.call("set_quit") def set_break(self, filename, lineno): msg = self.call("set_break", filename, lineno) return msg def clear_break(self, filename, lineno): msg = self.call("clear_break", filename, lineno) return msg def clear_all_file_breaks(self, filename): msg = self.call("clear_all_file_breaks", filename) return msg def start_remote_debugger(rpcclt, pyshell): """Start the subprocess debugger, initialize the debugger GUI and RPC link Request the RPCServer start the Python subprocess debugger and link. Set up the Idle side of the split debugger by instantiating the IdbProxy, debugger GUI, and debugger GUIAdapter objects and linking them together. Register the GUIAdapter with the RPCClient to handle debugger GUI interaction requests coming from the subprocess debugger via the GUIProxy. The IdbAdapter will pass execution and environment requests coming from the Idle debugger GUI to the subprocess debugger via the IdbProxy. """ global idb_adap_oid idb_adap_oid = rpcclt.remotecall("exec", "start_the_debugger",\ (gui_adap_oid,), {}) idb_proxy = IdbProxy(rpcclt, pyshell, idb_adap_oid) gui = Debugger.Debugger(pyshell, idb_proxy) gui_adap = GUIAdapter(rpcclt, gui) rpcclt.register(gui_adap_oid, gui_adap) return gui def close_remote_debugger(rpcclt): """Shut down subprocess debugger and Idle side of debugger RPC link Request that the RPCServer shut down the subprocess debugger and link. Unregister the GUIAdapter, which will cause a GC on the Idle process debugger and RPC link objects. (The second reference to the debugger GUI is deleted in PyShell.close_remote_debugger().) """ close_subprocess_debugger(rpcclt) rpcclt.unregister(gui_adap_oid) def close_subprocess_debugger(rpcclt): rpcclt.remotecall("exec", "stop_the_debugger", (idb_adap_oid,), {}) def restart_subprocess_debugger(rpcclt): idb_adap_oid_ret = rpcclt.remotecall("exec", "start_the_debugger",\ (gui_adap_oid,), {}) assert idb_adap_oid_ret == idb_adap_oid, 'Idb restarted with different oid'
The Auralux LED Controller and Remote package includes a mountable remote holder, making it easy to put the remote in your pocket or bring it with you, while also allowing it to be stored in a holder on the wall when it's not in use. With this controller you can alter the brightness of your LEDs as you are on the move, or from anywhere up to 50 m away from the receiver! This controller works on a 433 MHz frequency, so you won't have to worry about it interfering with your other electronics! This RGBW controller is great for uses where you need to vary the light in both color and quality at any given time. The top buttons on the controller easily change the speed and brightness of the light. The color wheel touch pad is where the color comes into play: you can seamlessly and effortlessly shift from one color to the next. The button at the bottom left also allows for mode switches, so you can integrate preset patterns into your lighting displays. The Auralux controller can be used in combination with RGBW and RGBA LED strips, so the color possibilities are endless! The wireless LED control is unique because it lets you decide for yourself which color the light will be, so you can easily customize your own space. Auralux also provides a new CR2032 battery in the controller, so it is ready for use straight away. As soon as you hook your lights up, you're ready to go!
__author__ = 'ajb' import rpx_proto.descriptions_pb2 def boolean_property_description(prop_name, persist=False): prop_desc = rpx_proto.descriptions_pb2.PropertyDescription() prop_desc.propertyId = prop_name prop_desc.propertyLabel = prop_name prop_desc.propertyType = rpx_proto.descriptions_pb2.BooleanType prop_desc.persist = persist return prop_desc def string_prop_desc(prop_name, persist=False): prop_desc = rpx_proto.descriptions_pb2.PropertyDescription() prop_desc.propertyId = prop_name prop_desc.propertyLabel = prop_name prop_desc.propertyType = rpx_proto.descriptions_pb2.StringType prop_desc.persist = persist return prop_desc def integer_prop_desc(prop_name, persist=False): prop_desc = rpx_proto.descriptions_pb2.PropertyDescription() prop_desc.propertyId = prop_name prop_desc.propertyLabel = prop_name prop_desc.propertyType = rpx_proto.descriptions_pb2.IntType prop_desc.persist = persist return prop_desc def ranged_double_property_description(prop_name, min_val, max_val, persist=False): prop_desc = rpx_proto.descriptions_pb2.PropertyDescription() prop_desc.propertyId = prop_name prop_desc.propertyLabel = prop_name prop_desc.propertyType = rpx_proto.descriptions_pb2.DoubleType constraint = prop_desc.constraints prop_desc.persist = persist constraint.doubleTypeMinVal = min_val constraint.doubleTypeMaxVal = max_val return prop_desc def command_description(cmd_name): cmd_desc = rpx_proto.descriptions_pb2.CommandDescription() cmd_desc.commandId = cmd_name cmd_desc.commandLabel = cmd_name return cmd_desc
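A hedged usage sketch, not from the source: it assumes the compiled rpx_proto.descriptions_pb2 module is importable and that the helpers above live in a module called, say, descriptions; the property and command names are invented for illustration.

# Hypothetical example of building a device's property/command descriptions
# with the helpers above (the module name "descriptions" is an assumption).
import descriptions

props = [
    descriptions.boolean_property_description("enabled", persist=True),
    descriptions.string_prop_desc("label"),
    descriptions.integer_prop_desc("sample_count"),
    descriptions.ranged_double_property_description("gain", 0.0, 10.0),
]
cmds = [descriptions.command_description("reset")]

for prop in props:
    # Each item is a populated PropertyDescription protobuf message.
    print(prop)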
A walk through dry zones, then humid ones, on the Mechmont plateaux and through the cool, quiet Vert valley. The Gigouzac crocodile - Steneosaurus - whose fossil remains were discovered in 1948 by a geologist from the Lot, lived in the region about 150 million years ago. Its body was covered with bony plates and its lower jaw was 1.23 m long. The region was then a marshy coastal area opening on to a sea linking the Paris basin, England, Belgium, Germany and the Languedoc. In these warm waters, numerous living organisms flourished.
#!/usr/bin/python # Huawei 3131 3G dongle python API # Based on the work of http://chaddyhv.wordpress.com/2012/08/13/programming-and-installing-huawei-hilink-e3131-under-linux/ # Usage: Import in your python project and use: send_sms and read_sms # Test your Dongle by running this file from console and send smss to you dongle # python /path_to_file/pi_sms.py import requests from BeautifulSoup import BeautifulSoup, NavigableString import time import sys import logging import datetime logger = logging.getLogger(__file__) logger.setLevel(logging.INFO) SMS_LIST_TEMPLATE = '<request>' +\ '<PageIndex>1</PageIndex>' +\ '<ReadCount>20</ReadCount>' +\ '<BoxType>1</BoxType>' +\ '<SortType>0</SortType>' +\ '<Ascending>0</Ascending>' +\ '<UnreadPreferred>0</UnreadPreferred>' +\ '</request>' SMS_READ_TEMPLATE = '<request><Index>{index}</Index></request>' SMS_SEND_TEMPLATE = '<request>' +\ '<Index>-1</Index>' +\ '<Phones><Phone>{phone}</Phone></Phones>' +\ '<Sca></Sca>' +\ '<Content>{content}</Content>' +\ '<Length>{length}</Length>' +\ '<Reserved>1</Reserved>' +\ '<Date>{timestamp}</Date>' +\ '</request>' BASE_URL = 'http://hi.link/api/' SMS_READ_HEADER = {'referer': 'http://hi.link/html/smsinbox.html?smsinbox'} class NotConnected(Exception): pass class DeviceNotReachable(Exception): pass def _pretty_traffic(n): for x in ['bytes', 'KB', 'MB', 'GB', 'TB']: if n < 1024.0: return "%3.1f %s" % (n ,x) n /= 1024.0 return 'unknown' def _connected(s): r1 = s.get(BASE_URL + 'monitoring/status') resp = BeautifulSoup(r1.content) if resp.connectionstatus.string == u'901': return True else: return False def _sms_count(s): r1 = s.get(BASE_URL + 'monitoring/check-notifications') resp = BeautifulSoup(r1.content) return int(resp.unreadmessage.string) def _read_sms_list(s): result = [] r1 = s.post(BASE_URL + 'sms/sms-list', data=SMS_LIST_TEMPLATE, headers=SMS_READ_HEADER) resp = BeautifulSoup(r1.content) if int(resp.count.string): for msg in resp.messages: if not isinstance(msg, NavigableString): logger.info('SMS Received. From number: {phone}, Content: {content}'.format(phone=msg.phone.string, content=msg.content.string.encode('ascii', 'replace'))) result.append({'phone':msg.phone.string, 'content':msg.content.string}) _delete_sms(s, msg.find('index').string) return result def _delete_sms(s, ind): r1 = s.post(BASE_URL + 'sms/delete-sms', data=SMS_READ_TEMPLATE.format(index=ind)) def _read_sms(s): if _connected(s): if _sms_count(s): return _read_sms_list(s) else: return [] else: raise NotConnected('No data link') def info(s): r1 = s.get(BASE_URL + 'monitoring/traffic-statistics') resp = BeautifulSoup(r1.content) upload = int(resp.totalupload.string) download = int(resp.totaldownload.string) return 'Modem status: connected: {con}, upload: {up}, download: {down}, total: {tot}'.format(con=_connected(s), up=_pretty_traffic(upload), down=_pretty_traffic(download), tot=_pretty_traffic(upload+download)) def read_sms(s): try: return _read_sms(s) except requests.exceptions.ConnectionError, e: raise DeviceNotReachable('Unable to connect to device') def send_sms(s, phone, content): timestamp = datetime.datetime.now() + datetime.timedelta(seconds=15) r1 = s.post(BASE_URL + 'sms/send-sms', data=SMS_SEND_TEMPLATE.format(phone=phone, content=content, length=len(content), timestamp=str(timestamp.strftime("%Y-%m-%d %H:%M:%S")))) resp = BeautifulSoup(r1.content) if resp.response.string == 'OK': logger.info('sms sent. Number: {phone}, Content: {content}'.format(phone=phone, content=content)) else: logger.error('sms sending failed. 
Response from dongle: {r}'.format(r=resp)) if __name__ == "__main__": logger.addHandler(logging.StreamHandler()) s = requests.Session() print info(s) while True: try: smss = _read_sms(s) if smss: print smss except requests.exceptions.ConnectionError, e: print e sys.exit(1) time.sleep(10)
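As the header comment says, the module is meant to be imported and driven through send_sms and read_sms. Below is a minimal sketch of that, written in the same Python 2 style as the module; the module name pi_sms is taken from the header, the phone number is a placeholder, and it assumes the dongle answers at http://hi.link as above.

import requests
import pi_sms   # the file above, saved under the name suggested in its header

s = requests.Session()
print pi_sms.info(s)                                  # connection state and traffic counters
pi_sms.send_sms(s, '+15550001111', 'Dongle is up')    # placeholder number
for msg in pi_sms.read_sms(s):                        # returns [] when the inbox is empty
    print msg['phone'], ':', msg['content']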
It’s these kinds of recipes that send out the right kind of whiffs to get me in a cheery autumn mood. All I need then to make me completely happy is a look out of the window onto the trees with burning leaves against a clear autumn sky. Preheat the oven to 180 ºC / 355 ºF, conventional setting. This is a basic muffin recipe that you can use to make all kinds of combinations with fruits and nuts and spices. It’s simple: first you mix all the dry ingredients in a bowl. In a separate bowl you mix all the wet ingredients. Then put the wet and dry ingredients together, but do not overmix. Fill a muffin tray with the mixture and sprinkle each heap of batter with the cinnamon/sugar mix. I use organic cinnamon that has a really intense flavor. Bake the muffins for 20 minutes (depending on your oven) and leave to cool on a tray. Or better: eat warm from the oven, but do not burn yourself on the pieces of apple or cranberries; they tend to stay hot longer because they contain more water. Tip: Make sure your ingredients, including your spices, are fresh. Like ground coffee, ground spices also lose their intensity very quickly. You can grate cinnamon yourself with a Microplane grater. I also tried these muffins with speculaas spices and zwetschgen/quetsch jam (a sort of tangy plum) I brought home from France. Also very nice and seasonal for our ‘Sinterklaas’ feast.

I so sympathise with you. I think the molds would work better if you baked 100 times, one batch after another, but we also bake the cookies only a few times a year during the season. It’s probably not enough to get these cookie molds really going. I use some extra rice flour and that helps, but still I have to confess they try to hang on to that mold, so I always have to help them along. How different are my new Nordic Ware bundt pans... the cakes just shoot out!

I know it’s an old post but I was looking for a good recipe to use my pack of speculaas kruiden I bought in The Netherlands recently. It came with a mould, and I agree wholeheartedly with you about the light swearing. I made a dozen with the mould to prove I CAN do it, and the rest with a biscuit cutter.

Sounds like a good plan. How long will you be in Amsterdam for?

Oh Marieke, thanks for your wishes, but as long as I am living in a student apartment where there is no oven – a cooker alone, thank you very much – I have a suspicion I will have to make do with stovetop improvisations. 🙂 Good Lord, this road of possibilities remains broad. Time permitting, it would be so lovely if we could meet (I believe I mentioned earlier that I’m in Amsterdam, studying) over a cup of hot chocolate or coffee to discuss, well, chocolate and coffee and other life pleasantries. 🙂 Let me hear from you on this one.

I hope you will have your very own oven very soon! But for this, we definitely need a bigger garden.

Oh, if I only had an oven in my small shared kitchen over here. 🙂 And still, your description does its own trick and sends me into that certain-autumn-day mood, too! And you know what, thank you for this!
# -*- coding: utf-8 -*- # Generated by Django 1.9.12 on 2017-03-21 21:05 from __future__ import unicode_literals from django.conf import settings from django.db import migrations, models import django.db.models.deletion import django_extensions.db.fields class Migration(migrations.Migration): initial = True dependencies = [ migrations.swappable_dependency(settings.AUTH_USER_MODEL), ] operations = [ migrations.CreateModel( name='Calendar', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('date', models.DateField(verbose_name='Date')), ('sun_rise', models.TimeField(blank=True, null=True, verbose_name='Sun rise')), ('sun_set', models.TimeField(blank=True, null=True, verbose_name='Sun set')), ('temperature', models.FloatField(blank=True, null=True, verbose_name='Temperature')), ('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True)), ('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True)), ], options={ 'verbose_name_plural': 'Calendar', 'verbose_name': 'Calendar', 'ordering': ['date'], }, ), migrations.CreateModel( name='Event', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('title', models.CharField(max_length=255, verbose_name='Title')), ('is_public', models.BooleanField(default=False, verbose_name='Is Public')), ('is_featured', models.BooleanField(default=False, verbose_name='Is Featured')), ('slug', django_extensions.db.fields.AutoSlugField(blank=True, editable=False, overwrite=True, populate_from=('title',), unique=True)), ('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True)), ('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True)), ('event_type', models.CharField(choices=[('closed', 'Closed'), ('notice', 'Notice'), ('event', 'Event'), ('trip', 'Trip')], max_length=255, verbose_name='Event Type')), ('start_time', models.TimeField(blank=True, null=True, verbose_name='Start Time')), ('end_time', models.TimeField(blank=True, null=True, verbose_name='End Time')), ('description', models.TextField(blank=True, null=True, verbose_name='Description')), ('author', models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to=settings.AUTH_USER_MODEL)), ('date', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='calendar.Calendar')), ], options={ 'verbose_name_plural': 'Event', 'verbose_name': 'Event', 'ordering': ['date'], }, ), migrations.CreateModel( name='ExtraFields', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('title', models.CharField(max_length=255, verbose_name='Title')), ('value', models.CharField(max_length=255, verbose_name='Value')), ('sort_value', models.IntegerField(blank=True, null=True, verbose_name='Sort Value')), ('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True)), ('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True)), ], options={ 'verbose_name_plural': 'Trips', 'verbose_name': 'Extra Field', 'ordering': ['sort_value'], }, ), migrations.CreateModel( name='Tide', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('time', models.TimeField(verbose_name='Time')), ('level', models.FloatField(verbose_name='Level')), ('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True)), ('modified', 
django_extensions.db.fields.ModificationDateTimeField(auto_now=True)), ('day', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='calendar.Calendar')), ], ), migrations.CreateModel( name='Trips', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('title', models.CharField(max_length=255, verbose_name='Title')), ('is_public', models.BooleanField(default=False, verbose_name='Is Public')), ('is_featured', models.BooleanField(default=False, verbose_name='Is Featured')), ('slug', django_extensions.db.fields.AutoSlugField(blank=True, editable=False, overwrite=True, populate_from=('title',), unique=True)), ('created', django_extensions.db.fields.CreationDateTimeField(auto_now_add=True)), ('modified', django_extensions.db.fields.ModificationDateTimeField(auto_now=True)), ('list_description', models.CharField(blank=True, max_length=255, null=True, verbose_name='List Description')), ('description', models.TextField(blank=True, null=True, verbose_name='Description')), ('start_date', models.DateTimeField(blank=True, null=True, verbose_name='Start Date and Time')), ('end_date', models.DateTimeField(blank=True, null=True, verbose_name='End Date and Time')), ('day', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='calendar.Calendar')), ], options={ 'verbose_name_plural': 'Trips', 'verbose_name': 'Trip', 'ordering': ['day'], }, ), migrations.CreateModel( name='WeatherTypes', fields=[ ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')), ('title', models.CharField(max_length=255, verbose_name='Title')), ('slug', django_extensions.db.fields.AutoSlugField(blank=True, editable=False, overwrite=True, populate_from=('title',), unique=True)), ('class_code', models.CharField(max_length=255, verbose_name='Code')), ], ), migrations.AddField( model_name='extrafields', name='trips', field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='calendar.Trips'), ), migrations.AddField( model_name='calendar', name='weather', field=models.ForeignKey(blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, to='calendar.WeatherTypes'), ), ]
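For orientation, here is a hedged sketch of what the Calendar and Event models behind this generated migration might look like; every field option is read back from the operations above, but the models module itself is an assumption, and the remaining models (Tide, Trips, ExtraFields, WeatherTypes) are omitted for brevity.

from django.conf import settings
from django.db import models
from django_extensions.db.fields import (AutoSlugField, CreationDateTimeField,
                                         ModificationDateTimeField)


class Calendar(models.Model):
    # Field options mirror the CreateModel('Calendar') operation above.
    date = models.DateField('Date')
    sun_rise = models.TimeField('Sun rise', blank=True, null=True)
    sun_set = models.TimeField('Sun set', blank=True, null=True)
    temperature = models.FloatField('Temperature', blank=True, null=True)
    weather = models.ForeignKey('WeatherTypes', blank=True, null=True,
                                on_delete=models.CASCADE)
    created = CreationDateTimeField()
    modified = ModificationDateTimeField()

    class Meta:
        verbose_name = 'Calendar'
        verbose_name_plural = 'Calendar'
        ordering = ['date']


class Event(models.Model):
    EVENT_TYPES = [('closed', 'Closed'), ('notice', 'Notice'),
                   ('event', 'Event'), ('trip', 'Trip')]

    title = models.CharField('Title', max_length=255)
    is_public = models.BooleanField('Is Public', default=False)
    is_featured = models.BooleanField('Is Featured', default=False)
    slug = AutoSlugField(populate_from=('title',), overwrite=True, unique=True)
    event_type = models.CharField('Event Type', max_length=255, choices=EVENT_TYPES)
    start_time = models.TimeField('Start Time', blank=True, null=True)
    end_time = models.TimeField('End Time', blank=True, null=True)
    description = models.TextField('Description', blank=True, null=True)
    author = models.ForeignKey(settings.AUTH_USER_MODEL, blank=True, null=True,
                               on_delete=models.CASCADE)
    date = models.ForeignKey(Calendar, on_delete=models.CASCADE)
    created = CreationDateTimeField()
    modified = ModificationDateTimeField()

    class Meta:
        # verbose_name_plural kept as in the migration.
        verbose_name = 'Event'
        verbose_name_plural = 'Event'
        ordering = ['date']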
More snow was dumped over night and the spot got a nice fresh 20cm blanket that needed to be shredded. More snowfall and bad visibility in the morning but it cleared up a little by midday so the freestyle competition was on. Light winds, big kites once more. Big kickers, rails and sliders – the riders got it all and made the best out of the conditions. They managed to finish the finals for the girls and ski men. Hopefully they can finish the men snowboard by tomorrow.
# Licensed to the Apache Software Foundation (ASF) under one or more # contributor license agreements. See the NOTICE file distributed with # this work for additional information regarding copyright ownership. # The ASF licenses this file to You under the Apache License, Version 2.0 # (the "License"); you may not use this file except in compliance with # the License. You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import os.path import signal from . import streams_property from . import consumer_property from ducktape.services.service import Service from ducktape.utils.util import wait_until from kafkatest.directory_layout.kafka_path import KafkaPathResolverMixin from kafkatest.services.kafka import KafkaConfig from kafkatest.services.monitor.jmx import JmxMixin from kafkatest.version import LATEST_0_10_0, LATEST_0_10_1 STATE_DIR = "state.dir" class StreamsTestBaseService(KafkaPathResolverMixin, JmxMixin, Service): """Base class for Streams Test services providing some common settings and functionality""" PERSISTENT_ROOT = "/mnt/streams" # The log file contains normal log4j logs written using a file appender. stdout and stderr are handled separately CONFIG_FILE = os.path.join(PERSISTENT_ROOT, "streams.properties") LOG_FILE = os.path.join(PERSISTENT_ROOT, "streams.log") STDOUT_FILE = os.path.join(PERSISTENT_ROOT, "streams.stdout") STDERR_FILE = os.path.join(PERSISTENT_ROOT, "streams.stderr") JMX_LOG_FILE = os.path.join(PERSISTENT_ROOT, "jmx_tool.log") JMX_ERR_FILE = os.path.join(PERSISTENT_ROOT, "jmx_tool.err.log") LOG4J_CONFIG_FILE = os.path.join(PERSISTENT_ROOT, "tools-log4j.properties") PID_FILE = os.path.join(PERSISTENT_ROOT, "streams.pid") CLEAN_NODE_ENABLED = True logs = { "streams_config": { "path": CONFIG_FILE, "collect_default": True}, "streams_config.1": { "path": CONFIG_FILE + ".1", "collect_default": True}, "streams_config.0-1": { "path": CONFIG_FILE + ".0-1", "collect_default": True}, "streams_config.1-1": { "path": CONFIG_FILE + ".1-1", "collect_default": True}, "streams_log": { "path": LOG_FILE, "collect_default": True}, "streams_stdout": { "path": STDOUT_FILE, "collect_default": True}, "streams_stderr": { "path": STDERR_FILE, "collect_default": True}, "streams_log.1": { "path": LOG_FILE + ".1", "collect_default": True}, "streams_stdout.1": { "path": STDOUT_FILE + ".1", "collect_default": True}, "streams_stderr.1": { "path": STDERR_FILE + ".1", "collect_default": True}, "streams_log.2": { "path": LOG_FILE + ".2", "collect_default": True}, "streams_stdout.2": { "path": STDOUT_FILE + ".2", "collect_default": True}, "streams_stderr.2": { "path": STDERR_FILE + ".2", "collect_default": True}, "streams_log.3": { "path": LOG_FILE + ".3", "collect_default": True}, "streams_stdout.3": { "path": STDOUT_FILE + ".3", "collect_default": True}, "streams_stderr.3": { "path": STDERR_FILE + ".3", "collect_default": True}, "streams_log.0-1": { "path": LOG_FILE + ".0-1", "collect_default": True}, "streams_stdout.0-1": { "path": STDOUT_FILE + ".0-1", "collect_default": True}, "streams_stderr.0-1": { "path": STDERR_FILE + ".0-1", "collect_default": True}, "streams_log.0-2": { "path": LOG_FILE + ".0-2", "collect_default": True}, 
"streams_stdout.0-2": { "path": STDOUT_FILE + ".0-2", "collect_default": True}, "streams_stderr.0-2": { "path": STDERR_FILE + ".0-2", "collect_default": True}, "streams_log.0-3": { "path": LOG_FILE + ".0-3", "collect_default": True}, "streams_stdout.0-3": { "path": STDOUT_FILE + ".0-3", "collect_default": True}, "streams_stderr.0-3": { "path": STDERR_FILE + ".0-3", "collect_default": True}, "streams_log.0-4": { "path": LOG_FILE + ".0-4", "collect_default": True}, "streams_stdout.0-4": { "path": STDOUT_FILE + ".0-4", "collect_default": True}, "streams_stderr.0-4": { "path": STDERR_FILE + ".0-4", "collect_default": True}, "streams_log.0-5": { "path": LOG_FILE + ".0-5", "collect_default": True}, "streams_stdout.0-5": { "path": STDOUT_FILE + ".0-5", "collect_default": True}, "streams_stderr.0-5": { "path": STDERR_FILE + ".0-5", "collect_default": True}, "streams_log.0-6": { "path": LOG_FILE + ".0-6", "collect_default": True}, "streams_stdout.0-6": { "path": STDOUT_FILE + ".0-6", "collect_default": True}, "streams_stderr.0-6": { "path": STDERR_FILE + ".0-6", "collect_default": True}, "streams_log.1-1": { "path": LOG_FILE + ".1-1", "collect_default": True}, "streams_stdout.1-1": { "path": STDOUT_FILE + ".1-1", "collect_default": True}, "streams_stderr.1-1": { "path": STDERR_FILE + ".1-1", "collect_default": True}, "streams_log.1-2": { "path": LOG_FILE + ".1-2", "collect_default": True}, "streams_stdout.1-2": { "path": STDOUT_FILE + ".1-2", "collect_default": True}, "streams_stderr.1-2": { "path": STDERR_FILE + ".1-2", "collect_default": True}, "streams_log.1-3": { "path": LOG_FILE + ".1-3", "collect_default": True}, "streams_stdout.1-3": { "path": STDOUT_FILE + ".1-3", "collect_default": True}, "streams_stderr.1-3": { "path": STDERR_FILE + ".1-3", "collect_default": True}, "streams_log.1-4": { "path": LOG_FILE + ".1-4", "collect_default": True}, "streams_stdout.1-4": { "path": STDOUT_FILE + ".1-4", "collect_default": True}, "streams_stderr.1-4": { "path": STDERR_FILE + ".1-4", "collect_default": True}, "streams_log.1-5": { "path": LOG_FILE + ".1-5", "collect_default": True}, "streams_stdout.1-5": { "path": STDOUT_FILE + ".1-5", "collect_default": True}, "streams_stderr.1-5": { "path": STDERR_FILE + ".1-5", "collect_default": True}, "streams_log.1-6": { "path": LOG_FILE + ".1-6", "collect_default": True}, "streams_stdout.1-6": { "path": STDOUT_FILE + ".1-6", "collect_default": True}, "streams_stderr.1-6": { "path": STDERR_FILE + ".1-6", "collect_default": True}, "jmx_log": { "path": JMX_LOG_FILE, "collect_default": True}, "jmx_err": { "path": JMX_ERR_FILE, "collect_default": True}, } def __init__(self, test_context, kafka, streams_class_name, user_test_args1, user_test_args2=None, user_test_args3=None, user_test_args4=None): Service.__init__(self, test_context, num_nodes=1) self.kafka = kafka self.args = {'streams_class_name': streams_class_name, 'user_test_args1': user_test_args1, 'user_test_args2': user_test_args2, 'user_test_args3': user_test_args3, 'user_test_args4': user_test_args4} self.log_level = "DEBUG" @property def node(self): return self.nodes[0] @property def expectedMessage(self): return 'StreamsTest instance started' def pids(self, node): try: pids = [pid for pid in node.account.ssh_capture("cat " + self.PID_FILE, callback=str)] return [int(pid) for pid in pids] except Exception as exception: self.logger.debug(str(exception)) return [] def stop_nodes(self, clean_shutdown=True): for node in self.nodes: self.stop_node(node, clean_shutdown) def stop_node(self, node, 
clean_shutdown=True): self.logger.info((clean_shutdown and "Cleanly" or "Forcibly") + " stopping Streams Test on " + str(node.account)) pids = self.pids(node) sig = signal.SIGTERM if clean_shutdown else signal.SIGKILL for pid in pids: node.account.signal(pid, sig, allow_fail=True) if clean_shutdown: for pid in pids: wait_until(lambda: not node.account.alive(pid), timeout_sec=120, err_msg="Streams Test process on " + str(node.account) + " took too long to exit") node.account.ssh("rm -f " + self.PID_FILE, allow_fail=False) def restart(self): # We don't want to do any clean up here, just restart the process. for node in self.nodes: self.logger.info("Restarting Kafka Streams on " + str(node.account)) self.stop_node(node) self.start_node(node) def abortThenRestart(self): # We don't want to do any clean up here, just abort then restart the process. The running service is killed immediately. for node in self.nodes: self.logger.info("Aborting Kafka Streams on " + str(node.account)) self.stop_node(node, False) self.logger.info("Restarting Kafka Streams on " + str(node.account)) self.start_node(node) def wait(self, timeout_sec=1440): for node in self.nodes: self.wait_node(node, timeout_sec) def wait_node(self, node, timeout_sec=None): for pid in self.pids(node): wait_until(lambda: not node.account.alive(pid), timeout_sec=timeout_sec, err_msg="Streams Test process on " + str(node.account) + " took too long to exit") def clean_node(self, node): node.account.kill_process("streams", clean_shutdown=False, allow_fail=True) if self.CLEAN_NODE_ENABLED: node.account.ssh("rm -rf " + self.PERSISTENT_ROOT, allow_fail=False) def start_cmd(self, node): args = self.args.copy() args['config_file'] = self.CONFIG_FILE args['stdout'] = self.STDOUT_FILE args['stderr'] = self.STDERR_FILE args['pidfile'] = self.PID_FILE args['log4j'] = self.LOG4J_CONFIG_FILE args['kafka_run_class'] = self.path.script("kafka-run-class.sh", node) cmd = "( export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:%(log4j)s\"; " \ "INCLUDE_TEST_JARS=true %(kafka_run_class)s %(streams_class_name)s " \ " %(config_file)s %(user_test_args1)s %(user_test_args2)s %(user_test_args3)s" \ " %(user_test_args4)s & echo $! 
>&3 ) 1>> %(stdout)s 2>> %(stderr)s 3> %(pidfile)s" % args self.logger.info("Executing streams cmd: " + cmd) return cmd def prop_file(self): cfg = KafkaConfig(**{streams_property.STATE_DIR: self.PERSISTENT_ROOT, streams_property.KAFKA_SERVERS: self.kafka.bootstrap_servers()}) return cfg.render() def start_node(self, node): node.account.mkdirs(self.PERSISTENT_ROOT) prop_file = self.prop_file() node.account.create_file(self.CONFIG_FILE, prop_file) node.account.create_file(self.LOG4J_CONFIG_FILE, self.render('tools_log4j.properties', log_file=self.LOG_FILE)) self.logger.info("Starting StreamsTest process on " + str(node.account)) with node.account.monitor_log(self.STDOUT_FILE) as monitor: node.account.ssh(self.start_cmd(node)) monitor.wait_until(self.expectedMessage, timeout_sec=60, err_msg="Never saw message indicating StreamsTest finished startup on " + str(node.account)) if len(self.pids(node)) == 0: raise RuntimeError("No process ids recorded") class StreamsSmokeTestBaseService(StreamsTestBaseService): """Base class for Streams Smoke Test services providing some common settings and functionality""" def __init__(self, test_context, kafka, command, processing_guarantee = 'at_least_once', num_threads = 3, replication_factor = 3): super(StreamsSmokeTestBaseService, self).__init__(test_context, kafka, "org.apache.kafka.streams.tests.StreamsSmokeTest", command) self.NUM_THREADS = num_threads self.PROCESSING_GUARANTEE = processing_guarantee self.KAFKA_STREAMS_VERSION = "" self.UPGRADE_FROM = None self.REPLICATION_FACTOR = replication_factor def set_version(self, kafka_streams_version): self.KAFKA_STREAMS_VERSION = kafka_streams_version def set_upgrade_from(self, upgrade_from): self.UPGRADE_FROM = upgrade_from def prop_file(self): properties = {streams_property.STATE_DIR: self.PERSISTENT_ROOT, streams_property.KAFKA_SERVERS: self.kafka.bootstrap_servers(), streams_property.PROCESSING_GUARANTEE: self.PROCESSING_GUARANTEE, streams_property.NUM_THREADS: self.NUM_THREADS, "replication.factor": self.REPLICATION_FACTOR, "num.standby.replicas": 2, "buffered.records.per.partition": 100, "commit.interval.ms": 1000, "auto.offset.reset": "earliest", "acks": "all"} if self.UPGRADE_FROM is not None: properties['upgrade.from'] = self.UPGRADE_FROM cfg = KafkaConfig(**properties) return cfg.render() def start_cmd(self, node): args = self.args.copy() args['config_file'] = self.CONFIG_FILE args['stdout'] = self.STDOUT_FILE args['stderr'] = self.STDERR_FILE args['pidfile'] = self.PID_FILE args['log4j'] = self.LOG4J_CONFIG_FILE args['version'] = self.KAFKA_STREAMS_VERSION args['kafka_run_class'] = self.path.script("kafka-run-class.sh", node) cmd = "( export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:%(log4j)s\";" \ " INCLUDE_TEST_JARS=true UPGRADE_KAFKA_STREAMS_TEST_VERSION=%(version)s" \ " %(kafka_run_class)s %(streams_class_name)s" \ " %(config_file)s %(user_test_args1)s" \ " & echo $! 
>&3 ) " \ "1>> %(stdout)s 2>> %(stderr)s 3> %(pidfile)s" % args self.logger.info("Executing streams cmd: " + cmd) return cmd class StreamsEosTestBaseService(StreamsTestBaseService): """Base class for Streams EOS Test services providing some common settings and functionality""" clean_node_enabled = True def __init__(self, test_context, kafka, processing_guarantee, command): super(StreamsEosTestBaseService, self).__init__(test_context, kafka, "org.apache.kafka.streams.tests.StreamsEosTest", command) self.PROCESSING_GUARANTEE = processing_guarantee def prop_file(self): properties = {streams_property.STATE_DIR: self.PERSISTENT_ROOT, streams_property.KAFKA_SERVERS: self.kafka.bootstrap_servers(), streams_property.PROCESSING_GUARANTEE: self.PROCESSING_GUARANTEE} cfg = KafkaConfig(**properties) return cfg.render() def clean_node(self, node): if self.clean_node_enabled: super(StreamsEosTestBaseService, self).clean_node(node) class StreamsSmokeTestDriverService(StreamsSmokeTestBaseService): def __init__(self, test_context, kafka): super(StreamsSmokeTestDriverService, self).__init__(test_context, kafka, "run") self.DISABLE_AUTO_TERMINATE = "" def disable_auto_terminate(self): self.DISABLE_AUTO_TERMINATE = "disableAutoTerminate" def start_cmd(self, node): args = self.args.copy() args['config_file'] = self.CONFIG_FILE args['stdout'] = self.STDOUT_FILE args['stderr'] = self.STDERR_FILE args['pidfile'] = self.PID_FILE args['log4j'] = self.LOG4J_CONFIG_FILE args['disable_auto_terminate'] = self.DISABLE_AUTO_TERMINATE args['kafka_run_class'] = self.path.script("kafka-run-class.sh", node) cmd = "( export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:%(log4j)s\"; " \ "INCLUDE_TEST_JARS=true %(kafka_run_class)s %(streams_class_name)s " \ " %(config_file)s %(user_test_args1)s %(disable_auto_terminate)s" \ " & echo $! 
>&3 ) 1>> %(stdout)s 2>> %(stderr)s 3> %(pidfile)s" % args self.logger.info("Executing streams cmd: " + cmd) return cmd class StreamsSmokeTestJobRunnerService(StreamsSmokeTestBaseService): def __init__(self, test_context, kafka, processing_guarantee, num_threads = 3, replication_factor = 3): super(StreamsSmokeTestJobRunnerService, self).__init__(test_context, kafka, "process", processing_guarantee, num_threads, replication_factor) class StreamsEosTestDriverService(StreamsEosTestBaseService): def __init__(self, test_context, kafka): super(StreamsEosTestDriverService, self).__init__(test_context, kafka, "not-required", "run") class StreamsEosTestJobRunnerService(StreamsEosTestBaseService): def __init__(self, test_context, kafka, processing_guarantee): super(StreamsEosTestJobRunnerService, self).__init__(test_context, kafka, processing_guarantee, "process") class StreamsComplexEosTestJobRunnerService(StreamsEosTestBaseService): def __init__(self, test_context, kafka, processing_guarantee): super(StreamsComplexEosTestJobRunnerService, self).__init__(test_context, kafka, processing_guarantee, "process-complex") class StreamsEosTestVerifyRunnerService(StreamsEosTestBaseService): def __init__(self, test_context, kafka): super(StreamsEosTestVerifyRunnerService, self).__init__(test_context, kafka, "not-required", "verify") class StreamsComplexEosTestVerifyRunnerService(StreamsEosTestBaseService): def __init__(self, test_context, kafka): super(StreamsComplexEosTestVerifyRunnerService, self).__init__(test_context, kafka, "not-required", "verify-complex") class StreamsSmokeTestShutdownDeadlockService(StreamsSmokeTestBaseService): def __init__(self, test_context, kafka): super(StreamsSmokeTestShutdownDeadlockService, self).__init__(test_context, kafka, "close-deadlock-test") class StreamsBrokerCompatibilityService(StreamsTestBaseService): def __init__(self, test_context, kafka, processingMode): super(StreamsBrokerCompatibilityService, self).__init__(test_context, kafka, "org.apache.kafka.streams.tests.BrokerCompatibilityTest", processingMode) class StreamsBrokerDownResilienceService(StreamsTestBaseService): def __init__(self, test_context, kafka, configs): super(StreamsBrokerDownResilienceService, self).__init__(test_context, kafka, "org.apache.kafka.streams.tests.StreamsBrokerDownResilienceTest", configs) def start_cmd(self, node): args = self.args.copy() args['config_file'] = self.CONFIG_FILE args['stdout'] = self.STDOUT_FILE args['stderr'] = self.STDERR_FILE args['pidfile'] = self.PID_FILE args['log4j'] = self.LOG4J_CONFIG_FILE args['kafka_run_class'] = self.path.script("kafka-run-class.sh", node) cmd = "( export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:%(log4j)s\"; " \ "INCLUDE_TEST_JARS=true %(kafka_run_class)s %(streams_class_name)s " \ " %(config_file)s %(user_test_args1)s %(user_test_args2)s %(user_test_args3)s" \ " %(user_test_args4)s & echo $! 
>&3 ) 1>> %(stdout)s 2>> %(stderr)s 3> %(pidfile)s" % args self.logger.info("Executing: " + cmd) return cmd class StreamsStandbyTaskService(StreamsTestBaseService): def __init__(self, test_context, kafka, configs): super(StreamsStandbyTaskService, self).__init__(test_context, kafka, "org.apache.kafka.streams.tests.StreamsStandByReplicaTest", configs) class StreamsResetter(StreamsTestBaseService): def __init__(self, test_context, kafka, topic, applicationId): super(StreamsResetter, self).__init__(test_context, kafka, "kafka.tools.StreamsResetter", "") self.topic = topic self.applicationId = applicationId @property def expectedMessage(self): return 'Done.' def start_cmd(self, node): args = self.args.copy() args['bootstrap.servers'] = self.kafka.bootstrap_servers() args['stdout'] = self.STDOUT_FILE args['stderr'] = self.STDERR_FILE args['pidfile'] = self.PID_FILE args['log4j'] = self.LOG4J_CONFIG_FILE args['application.id'] = self.applicationId args['input.topics'] = self.topic args['kafka_run_class'] = self.path.script("kafka-run-class.sh", node) cmd = "(export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:%(log4j)s\"; " \ "%(kafka_run_class)s %(streams_class_name)s " \ "--bootstrap-servers %(bootstrap.servers)s " \ "--force " \ "--application-id %(application.id)s " \ "--input-topics %(input.topics)s " \ "& echo $! >&3 ) " \ "1>> %(stdout)s " \ "2>> %(stderr)s " \ "3> %(pidfile)s "% args self.logger.info("Executing: " + cmd) return cmd class StreamsOptimizedUpgradeTestService(StreamsTestBaseService): def __init__(self, test_context, kafka): super(StreamsOptimizedUpgradeTestService, self).__init__(test_context, kafka, "org.apache.kafka.streams.tests.StreamsOptimizedTest", "") self.OPTIMIZED_CONFIG = 'none' self.INPUT_TOPIC = None self.AGGREGATION_TOPIC = None self.REDUCE_TOPIC = None self.JOIN_TOPIC = None def prop_file(self): properties = {streams_property.STATE_DIR: self.PERSISTENT_ROOT, streams_property.KAFKA_SERVERS: self.kafka.bootstrap_servers()} properties['topology.optimization'] = self.OPTIMIZED_CONFIG properties['input.topic'] = self.INPUT_TOPIC properties['aggregation.topic'] = self.AGGREGATION_TOPIC properties['reduce.topic'] = self.REDUCE_TOPIC properties['join.topic'] = self.JOIN_TOPIC # Long.MAX_VALUE lets us do the assignment without a warmup properties['acceptable.recovery.lag'] = "9223372036854775807" cfg = KafkaConfig(**properties) return cfg.render() class StreamsUpgradeTestJobRunnerService(StreamsTestBaseService): def __init__(self, test_context, kafka): super(StreamsUpgradeTestJobRunnerService, self).__init__(test_context, kafka, "org.apache.kafka.streams.tests.StreamsUpgradeTest", "") self.UPGRADE_FROM = None self.UPGRADE_TO = None self.extra_properties = {} def set_config(self, key, value): self.extra_properties[key] = value def set_version(self, kafka_streams_version): self.KAFKA_STREAMS_VERSION = kafka_streams_version def set_upgrade_from(self, upgrade_from): self.UPGRADE_FROM = upgrade_from def set_upgrade_to(self, upgrade_to): self.UPGRADE_TO = upgrade_to def prop_file(self): properties = self.extra_properties.copy() properties[streams_property.STATE_DIR] = self.PERSISTENT_ROOT properties[streams_property.KAFKA_SERVERS] = self.kafka.bootstrap_servers() if self.UPGRADE_FROM is not None: properties['upgrade.from'] = self.UPGRADE_FROM if self.UPGRADE_TO == "future_version": properties['test.future.metadata'] = "any_value" cfg = KafkaConfig(**properties) return cfg.render() def start_cmd(self, node): args = self.args.copy() if self.KAFKA_STREAMS_VERSION == 
str(LATEST_0_10_0) or self.KAFKA_STREAMS_VERSION == str(LATEST_0_10_1): args['zk'] = self.kafka.zk.connect_setting() else: args['zk'] = "" args['config_file'] = self.CONFIG_FILE args['stdout'] = self.STDOUT_FILE args['stderr'] = self.STDERR_FILE args['pidfile'] = self.PID_FILE args['log4j'] = self.LOG4J_CONFIG_FILE args['version'] = self.KAFKA_STREAMS_VERSION args['kafka_run_class'] = self.path.script("kafka-run-class.sh", node) cmd = "( export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:%(log4j)s\"; " \ "INCLUDE_TEST_JARS=true UPGRADE_KAFKA_STREAMS_TEST_VERSION=%(version)s " \ " %(kafka_run_class)s %(streams_class_name)s %(zk)s %(config_file)s " \ " & echo $! >&3 ) 1>> %(stdout)s 2>> %(stderr)s 3> %(pidfile)s" % args self.logger.info("Executing: " + cmd) return cmd class StreamsNamedRepartitionTopicService(StreamsTestBaseService): def __init__(self, test_context, kafka): super(StreamsNamedRepartitionTopicService, self).__init__(test_context, kafka, "org.apache.kafka.streams.tests.StreamsNamedRepartitionTest", "") self.ADD_ADDITIONAL_OPS = 'false' self.INPUT_TOPIC = None self.AGGREGATION_TOPIC = None def prop_file(self): properties = {streams_property.STATE_DIR: self.PERSISTENT_ROOT, streams_property.KAFKA_SERVERS: self.kafka.bootstrap_servers()} properties['input.topic'] = self.INPUT_TOPIC properties['aggregation.topic'] = self.AGGREGATION_TOPIC properties['add.operations'] = self.ADD_ADDITIONAL_OPS cfg = KafkaConfig(**properties) return cfg.render() class StaticMemberTestService(StreamsTestBaseService): def __init__(self, test_context, kafka, group_instance_id, num_threads): super(StaticMemberTestService, self).__init__(test_context, kafka, "org.apache.kafka.streams.tests.StaticMemberTestClient", "") self.INPUT_TOPIC = None self.GROUP_INSTANCE_ID = group_instance_id self.NUM_THREADS = num_threads def prop_file(self): properties = {streams_property.STATE_DIR: self.PERSISTENT_ROOT, streams_property.KAFKA_SERVERS: self.kafka.bootstrap_servers(), streams_property.NUM_THREADS: self.NUM_THREADS, consumer_property.GROUP_INSTANCE_ID: self.GROUP_INSTANCE_ID, consumer_property.SESSION_TIMEOUT_MS: 60000} properties['input.topic'] = self.INPUT_TOPIC # TODO KIP-441: consider rewriting the test for HighAvailabilityTaskAssignor properties['internal.task.assignor.class'] = "org.apache.kafka.streams.processor.internals.assignment.StickyTaskAssignor" cfg = KafkaConfig(**properties) return cfg.render() class CooperativeRebalanceUpgradeService(StreamsTestBaseService): def __init__(self, test_context, kafka): super(CooperativeRebalanceUpgradeService, self).__init__(test_context, kafka, "org.apache.kafka.streams.tests.StreamsUpgradeToCooperativeRebalanceTest", "") self.UPGRADE_FROM = None # these properties will be overridden in test self.SOURCE_TOPIC = None self.SINK_TOPIC = None self.TASK_DELIMITER = "#" self.REPORT_INTERVAL = None self.standby_tasks = None self.active_tasks = None self.upgrade_phase = None def set_tasks(self, task_string): label = "TASK-ASSIGNMENTS:" task_string_substr = task_string[len(label):] all_tasks = task_string_substr.split(self.TASK_DELIMITER) self.active_tasks = set(all_tasks[0].split(",")) if len(all_tasks) > 1: self.standby_tasks = set(all_tasks[1].split(",")) def set_version(self, kafka_streams_version): self.KAFKA_STREAMS_VERSION = kafka_streams_version def set_upgrade_phase(self, upgrade_phase): self.upgrade_phase = upgrade_phase def start_cmd(self, node): args = self.args.copy() if self.KAFKA_STREAMS_VERSION == str(LATEST_0_10_0) or self.KAFKA_STREAMS_VERSION == 
str(LATEST_0_10_1): args['zk'] = self.kafka.zk.connect_setting() else: args['zk'] = "" args['config_file'] = self.CONFIG_FILE args['stdout'] = self.STDOUT_FILE args['stderr'] = self.STDERR_FILE args['pidfile'] = self.PID_FILE args['log4j'] = self.LOG4J_CONFIG_FILE args['version'] = self.KAFKA_STREAMS_VERSION args['kafka_run_class'] = self.path.script("kafka-run-class.sh", node) cmd = "( export KAFKA_LOG4J_OPTS=\"-Dlog4j.configuration=file:%(log4j)s\"; " \ "INCLUDE_TEST_JARS=true UPGRADE_KAFKA_STREAMS_TEST_VERSION=%(version)s " \ " %(kafka_run_class)s %(streams_class_name)s %(zk)s %(config_file)s " \ " & echo $! >&3 ) 1>> %(stdout)s 2>> %(stderr)s 3> %(pidfile)s" % args self.logger.info("Executing: " + cmd) return cmd def prop_file(self): properties = {streams_property.STATE_DIR: self.PERSISTENT_ROOT, streams_property.KAFKA_SERVERS: self.kafka.bootstrap_servers()} if self.UPGRADE_FROM is not None: properties['upgrade.from'] = self.UPGRADE_FROM else: try: del properties['upgrade.from'] except KeyError: self.logger.info("Key 'upgrade.from' not there, better safe than sorry") if self.upgrade_phase is not None: properties['upgrade.phase'] = self.upgrade_phase properties['source.topic'] = self.SOURCE_TOPIC properties['sink.topic'] = self.SINK_TOPIC properties['task.delimiter'] = self.TASK_DELIMITER properties['report.interval'] = self.REPORT_INTERVAL cfg = KafkaConfig(**properties) return cfg.render()
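For orientation, a hypothetical ducktape test would drive the upgrade runner defined above roughly like this; the test_context/kafka objects, the LATEST_0_10_1 constant and the start()/stop() calls come from the surrounding test framework and are assumptions, not part of this file:

# Rough sketch only - not part of this module.
processor = StreamsUpgradeTestJobRunnerService(self.test_context, self.kafka)
processor.set_version(str(LATEST_0_10_1))  # run an old Streams jar first
processor.set_upgrade_from("0.10.1")       # ends up in the rendered properties file
processor.start()
# ... later the test would stop the node, bump the version and restart it ...
processor.stop()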
# -*- coding: utf-8 -*- """Simplify jinja templating. :copyright: Copyright (c) 2015 Bivio Software, Inc. All Rights Reserved. :license: http://www.apache.org/licenses/LICENSE-2.0.html """ from __future__ import absolute_import, division, print_function from pykern.pkdebug import pkdc, pkdp import re import jinja2 def render(template, params): """Parse template for $name and replace with special filter then render Since a common case is to render floating point numbers, we have a special pattern ``$var`` which is replaced with ``{{ var | rt_filter }}``, which maps to the appropriate float filter. If the template begins with a newline, leading newlines will be stripped from the amount of the first indent, e.g.:: template = ''' first line whitespace sets the mark to delete to this line will be left indented. this line will have four spaces another line ''' A trailing newline will always be added. Args: template (str): what to render params (dict): variables to render Returns: str: rendered template with params. """ je = jinja2.Environment( trim_blocks=True, lstrip_blocks=True, keep_trailing_newline=True, ) je.filters['rt_filter'] = _rt_filter jt = je.from_string(_template(template)) # TODO: Maybe a namespace need vars(params) if only can be dict return jt.render(params) def _rt_filter(v): """Format floats as .3f or .3e depending on size""" if type(v) is not float: return v a = abs(v) f = 'e' if a >= 1000 or a < 0.001 else 'f' return ('{:.3' + f + '}').format(v) def _template(t): """Parse template""" if t.startswith('\n'): t2 = t.lstrip() i = str(len(t) - len(t2) - 1) t = re.sub( r'^\s{1,' + i + '}', '', t2, flags=re.IGNORECASE + re.MULTILINE, ) if not t.endswith('\n'): t += '\n' return re.sub( r'\$([a-z]\w*)', r'{{\1|rt_filter}}', t, flags=re.IGNORECASE, )
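A minimal usage sketch of render() follows (the template and values are illustrative, not taken from any real caller); it shows the $name shorthand being routed through the rt_filter float formatting described in the docstring:

# Illustrative only: exercises render() as defined above.
out = render(
    '''
    charge is $q
    label is $label
    ''',
    {'q': 0.000123, 'label': 'electron'},
)
# The first line's indent sets the strip mark; $q is a float below 0.001 so it is
# formatted as 1.230e-04, while $label is passed through unchanged:
assert out == 'charge is 1.230e-04\nlabel is electron\n'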
Finding the right new Buick or Chevy near you that meets your needs and expectations can be challenging all on its own, and it can be just as challenging to find the Buick or Chevrolet lease deal or offer that is right for you as well. But when you shop with Ewald Chevrolet Buick you won't have either of those problems. We have a large inventory of vehicles for sale, which makes it easy to find the one that is right for you, and we also have many great Buick and Chevy offers that drivers can take advantage of. Our wide selection of new Chevrolet and Buick cars for sale will make it easy to find a vehicle that is right for your needs, so you can spend more time on the road and less time shopping, while our great Buick and Chevy special offers help you save cash and get into the vehicle you want more easily. Our staff of experienced financial experts will help you search for and find the deal that is just right for your budget, so that you can get back out on the road a little easier and spend more time enjoying the vehicle you love. We have many kinds of Buick and Chevy lease specials to take advantage of on a wide variety of vehicles, so we are sure to have a deal that works for you! So for a much easier time finding the right vehicle for you, as well as the Buick or Chevy lease special that is right for your budget, come on over to Ewald Chevrolet Buick in Oconomowoc, Wisconsin today! Stop by and check out our many great deals; we are committed to helping you get into the brand new car that is just right for you!
# -*- encoding: utf-8 -*- from flask import Flask, render_template, flash, url_for, redirect, request from database import db from sqlalchemy.exc import OperationalError, IntegrityError import os from forms import * import requests from filters import register_filters from googleapi import get_distance, DistanceServiceError from rest import rest_get_all, rest_get_id, rest_delete_id, rest_post, rest_put from requests.exceptions import HTTPError import flask.ext.restless from collections import namedtuple from dateutil import parser app = Flask(__name__) app.config.from_object('wsgi.config') db.init_app(app) register_filters(app) CAR_FIELDS=["id", "manufacturer", "typ", "platenumber", "color", "consumption", "price", "seats", "station"] KUNDE_FIELDS = [ "id", "name", "street", "plz", "city", "country", "leihen"] STATION_FIELDS = [ "id", "name", "street", "plz", "city", "country"] LEIHE_FIELDS = ["id", "kunde_id", "car_id", "von", "bis", "returned", "station_abhol_id", "station_return_id"] class Car(object): def __init__(self, **kwargs): self.__dict__.update(kwargs) def __repr__(self): return "%s %s %s" % (self.manufacturer, self.typ, self.platenumber) class Station(object): def __init__(self, **kwargs): self.__dict__.update(kwargs) @property def adresse(self): return "%s, %s %s, %s" % (self.street, self.plz, self.city, self.country) class Kunde(object): def __init__(self, **kwargs): self.__dict__.update(kwargs) @property def adresse(self): return "%s, %s %s, %s" % (self.street, self.plz, self.city, self.country) class Leihe(object): def __init__(self, **kwargs): self.__dict__.update(kwargs) if 'kunde_id' in self.__dict__.keys(): try: self.kunde = rest_get_id("kunden", self.kunde_id)["name"] except KeyError: self.kunde = 'None' if 'von' in self.__dict__.keys(): self.von = parser.parse(self.von) if 'bis' in self.__dict__.keys(): self.bis = parser.parse(self.bis) if 'car_id' in self.__dict__.keys(): car = rest_get_id("cars", self.car_id) car = Car(**car) self.car = car.__repr__() if hasattr(self, 'station_return_id'): self.station_return = rest_get_id("stations", self.station_return_id)["name"] if hasattr(self, 'station_abhol_id'): self.station_abhol = rest_get_id("stations", self.station_abhol_id)["name"] def tidy_objects(objects, fields): objects.sort(key=lambda x: x["id"]) for o in objects: for key in o.keys(): if key not in fields: del(o[key]) continue if isinstance(o[key], dict): if "name" in o[key].keys(): o[key] = o[key]["name"] elif "id" in o[key].keys(): o[key] = o[key]["id"] elif isinstance(o[key], list): del(o[key]) return objects COUNTRIES = [ (u'Österreich', u'AT', u'EUR'), (u"Ungarn", u"HU", u"HUF" ), (u"Schweiz", u"CH", u"CHF")] @app.before_first_request def fill_countries(): """ Die Liste von Ländern wird über 2 Webservices geholt. Das erste hlt von www.oorsprong.org eine Liste mit allen Ländern. Dabei sind diese als JSON verfügbar und werden nach Kontinente aufgeteilt. Wir nehmen nur die europäischen Länder und verwendet das zweite Webservice - wieder Json - um aus den ISO-Codes die deutsche Bezeichnung zu erhalten. Da beide Services von Privatpersonen zur Verfügung gestellt werden, möchte ich deren Server nicht überlasten und die Services werden nur genutzt, wenn die app nicht im Debugmode läuft. 
""" global COUNTRIES if app.debug: try: response = requests.get('https://raw.githubusercontent.com/umpirsky/country-list/master/country/cldr/de_AT/country.json') dict_iso_name = response.json() response = requests.get('https://raw.githubusercontent.com/mledoze/countries/master/countries.json') _countries = [(c[u"name"], dict_iso_name.get(c[u'cca2'], ''), c[u"currency"][0]) for c in response.json() if c[u"region"] == u"Europe"] COUNTRIES = sorted(_countries, key = lambda x: x[1]) flash('Countrycode loaded') except Exception: flash('Countrycodes konnten nicht geladen werden - falling back to fallback', category='warning') else: flash('Not loading Countrycodes in Debugmode') @app.route("/") def index(): return render_template('index.html') @app.route("/create_db") def create_db(): """ Call :py:meth:`models.initialize_database` and recreate the database """ d = {} try: flash("Database Queryied") d["cars"] = tidy_objects(rest_get_all("cars"), CAR_FIELDS) d["stations"] = tidy_objects(rest_get_all("stations"), STATION_FIELDS) d["kunden"] = tidy_objects(rest_get_all("kunden"), KUNDE_FIELDS) d["leihen"] = rest_get_all("leihen") d["car_fields"] = CAR_FIELDS d["stations_fields"] = STATION_FIELDS d["kunden_fields"] = KUNDE_FIELDS d["leihen_fields"] = LEIHE_FIELDS except OperationalError, e: flash(str(e), category='error') return render_template("create_db.html", **d) @app.route("/cars/") def cars_list(): """ List all Cars """ cars = rest_get_all("cars") cars = tidy_objects(cars, CAR_FIELDS) return render_template("cars_list.html", cars=cars, fields=CAR_FIELDS) @app.route("/kunden/") def kunden_list(): """ List all Kunden """ kunden = rest_get_all("kunden") kunden = tidy_objects(kunden, KUNDE_FIELDS) return render_template("kunden_list.html", kunden=kunden) @app.route("/kunde/", methods=['POST', 'GET'], endpoint='kunden_new') @app.route("/kunde/<int:kunde_id>/", methods=['POST', 'GET'], endpoint='kunden_edit') def kunden_edit(kunde_id=None): """ Legt entweder einen neuen Kunden an oder editiert einen bestehenden. Wie immer gibt es das Problem, das create und update sehr ähnlich sind - hier habe ich beides auf einmale implementiert. Mgl kann mit Classbased Views strukturierter vorgegangen werden """ leihen = [] kunde = None if kunde_id: try: kunde_dict = rest_get_id("kunden", kunde_id) kunde = Kunde(**kunde_dict) leihen = rest_get_all("leihen") leihen = [Leihe(**d) for d in leihen if d["kunde_id"] == kunde_id] form = KundenForm(request.form, kunde) except HTTPError, e: return errorpage(e) else: form = KundenForm() laendernamen = [c[1] for c in COUNTRIES] if kunde_id: if kunde.country in laendernamen: del(laendernamen[laendernamen.index(kunde.country)]) laendernamen.insert(0, kunde.country) choices = zip(laendernamen, laendernamen) form.country.choices = zip(laendernamen, laendernamen) if form.validate_on_submit(): try: if not kunde_id: # Delete possible id-Field for new Kunden del(form['id']) rest_post("kunden", form.data) flash(("Kunde %s wurde neu angelegt" % form.data["name"])) else: rest_put("kunden", kunde_id, form.data) flash(("Kunde %s wurde gespeichert" % form.data["name"])) except HTTPError, e: return errorpage(e) return redirect(url_for('kunden_list')) return render_template("kunden_edit.html", form=form, kunde=kunde, leihen=leihen, leihen_fields=LEIHE_FIELDS) @app.route("/kunde/<int:kunde_id>/delete/") def kunden_delete(kunde_id): """ Löschen eines Kunden - aber nur wenn er keine offenen Leihen gibt. 
""" kunde = Kunde(**rest_get_id("kunden", kunde_id)) name = kunde.name leihen_not_beendet = len([leihe["returned"] for leihe in kunde.leihen if not leihe["returned"]]) if leihen_not_beendet>0: flash(u'%s hat noch offene Leihen und kann nicht gelöscht werden.' % name, category='warning') return redirect(url_for('kunden_edit', kunde_id=kunde_id)) try: rest_delete_id("kunden", kunde_id) except HTTPError, e: return errorpage(e) flash(u'%s aus Kundendatei gelöscht' % name) return redirect(url_for('kunden_list')) @app.route("/leihen/") def leihen_list(): leihen = rest_get_all("leihen") leihen = [Leihe(**d) for d in leihen] leihen.sort(key=lambda x: x.id) return render_template("leihen_list.html", leihen=leihen) @app.route("/leihe/", methods=["POST", "GET"], endpoint='leihen_new') @app.route("/leihe/<int:leihe_id>/", methods=["POST", "GET"], endpoint='leihen_edit') def leihen_edit(leihe_id=None): """ Create und update for Leihen. Wertet auf die URL-Query Parameter kunde_id und station_id aus, die die entsprechenden Felder voreinstellen. """ leihe = None try: if leihe_id: leihe = Leihe(**rest_get_id("leihen", leihe_id)) form = LeiheForm(request.form, leihe) else: form = LeiheForm() form.kunde_id.choices = [(kunde["id"], kunde["name"]) for kunde in rest_get_all("kunden")] form.car_id.choices = [(car["id"], car["platenumber"]) for car in rest_get_all("cars")] form.station_abhol_id.choices = [(station["id"], station["name"]) for station in rest_get_all("stations")] form.station_return_id.choices = [(station["id"], station["name"]) for station in rest_get_all("stations")] if u'kunde_id' in request.args: form.kunde_id.data = int(request.args[u'kunde_id']) if u'station_id' in request.args: form.station_abhol_id.data = int(request.args[u'station_id']) form.station_return_id.data = int(request.args[u'station_id']) if form.validate_on_submit(): kunde = Kunde(**rest_get_id("kunden", form.data["kunde_id"])) data = form.data data["von"] = form.data["von"].isoformat() data["bis"] = form.data["bis"].isoformat() if not leihe_id: del(data["id"]) rest_post("leihen", data) flash(u"Leihe für %s wurde neu angelegt" % kunde.name, category="success") else: rest_put("leihen", leihe_id, data) flash(u"Leihe für %s wurde neu geändert" % kunde.name, category="success") return redirect(url_for('leihen_list')) except HTTPError, e: return errorpage(e) return render_template("leihe_edit.html", form=form, leihe=leihe) @app.route("/kunde/<int:kunde_id>/neueleihe") def leihe_neu1(kunde_id): """ Klickt man bei der Kunden-Detail View auf "Neue Leihe" werden in dieser View anhand der Adresse des Kunden die Distanzen zu unseren Verleihstationen berechnet und als Tabelle angezeigt. 
""" try: kunde = Kunde(**rest_get_id("kunden", kunde_id)) stationen = [Station(**d) for d in rest_get_all("stations")] cachehits = 0 for station in stationen: try: station.distance, cachehit = get_distance(kunde.adresse, station.adresse) except DistanceServiceError: station.distance = {'distance' : { 'text' : 'Unavailable', 'value' : -1}, 'duration' : { 'text' : "bis ans Ende der Zeit", 'value' : -1}} cachehit = False if cachehit: cachehits += 1 stationen.sort(key=lambda x: x.distance['duration']['value']) flash("Cachehits bei der Distanzberechnung: %d" % cachehits) return render_template("neue_leihe_kunde1.html", kunde=kunde, stationen=stationen) except HTTPError, e: return errorpage(e) @app.errorhandler(404) def errorpage(e=None): status = 404 content = "Page not found" message = "Page not Found" if e: if hasattr(e, "response"): try: status = e.response.status_code content = e.response.content except Exception: pass if hasattr(e, "message"): message = e.message return render_template('errorpage.html', status=status, content=content, message=message), status if __name__ == "__main__": app.run()
Modesty is not their strong suit. But as you wander through the stadium and museum, you understand why: there are ROOMS full of trophy cases. The expansive locker room, hot tub, showers, and massage tables are top-notch. The visitors' locker room and massage table (emphasis on table, singular) pale in comparison. But hey — they're in Real Madrid territory. The tour allows visitors access to all of this plus the press room, the tunnel, the benches and coaching area, and a walk down to the pitch. I toured Manchester United's stadium a few years ago, but it was not as glamorous as the home field of these 12-time European Cup holders. Plus, I was there with my daughter, a passionate soccer player who made our hours there even more enjoyable. The 25 Euro ticket price seemed a little steep at first, but if you love soccer, it is totally worth it. I understand the madridista passion now. To enter the realm of the Real Madrid team is sensational. If only the players had been there, too. Have you visited sports stadiums in your travels? Yeah, a good €25 worth I imagine. Especially for a soccer fan – more so for a Real fan. I'd sooner pay for a match ticket, though, and forgo the tour. A fairly unique experience, one that sticks in the mind, is from 50 years ago. On a school trip a few of us visited Innsbruck, which had hosted the Winter Olympics in 1964. No snow, but we climbed the steps at the side of the ski jump hill and stood at the top, imagining what it would be like to fly off there on skis. If we'd been able to see Real Madrid play, that would have been too good to be true! I visited an Olympic ski jump, too, and was awed by the thought of all the talent that had competed there. Holmenkollen in Norway. We did a tour of the Olympic stuff at Montjuic in Barcelona, which included a partial stop at FC Barcelona's stadium, I think… But otherwise, no, I don't think we've done any stadium tours. Surprising, considering what a sports fan Andy is! We did watch an actual Real Madrid match at the Bernabeu, though! That was fun, even with "terrible" seats, hehe. I wish we'd been able to watch a game. That would have been a dream come true for my daughter.
#!/usr/bin/python #coding=utf8 import sys import uuid import re time_exp = re.compile('^\d{1,2}:\d{1,2}$') supply_exp = re.compile('^\d{1,3}$') ColloquialToCodeDictionary = { #terran 'SupplyDepot' : ['supply', 'Supply'], 'Barracks' : ['Barracks', 'barrack'], 'Bunker' : ['Bunker','bunker'], 'Refinery' : ['refinery','Refinery', 'gas'], 'EngineeringBay': ['Engineering Bay', 'engeneering_bay'], 'CommandCenter' : ['command', 'Command Center', 'command_center'], 'Factory' : ['Factory', 'factory'], 'Starport' : ['cosmoport','Starport', 'starport'], 'Armory' : ['Armory', 'arsenal'], 'SCV' : ['SCV'], 'Marine' : ['Marine', 'marine'], 'Marauder' : ['Marauder', 'marauder'], 'Medivac' : ['Medivac', 'medivac'], 'Hellion' : ['Hellion', 'hellion'], 'HellionTank' : ['Hellbat'], 'Reaper' : ['Reaper'], 'SiegeTank' : ['tank'], 'BarracksReactor' : ['BarracksReactor','reactor_barrack'], 'FactoryReactor' : ['FactoryReactor', 'reactor_fact'], 'StarportReactor' : ['StarportReactor'], 'FactoryTechLab' : ['FactoryTechLab', 'lab_fact'], 'BarracksTechLab' : ['BarracksTechLab', 'lab_barrack'], 'StarportTechLab' : ['StarportTechLab'], 'MissleTurret' : ['turret'], 'VikingFighter' : ['viking'], 'HellionTank' : ['hellbat'], # zerg 'Overlord' : ['Overlord'], 'Hatchery' : ['Hatchery'], 'SpawningPool' : ['Pool'], 'Extractor' : ['Extractor'], 'Queen' : ['queen'], 'Drone' : ['Drone'], 'Zergling' : ['Zergling'], 'Spire' : ['Spire'], 'EvolutionChamber' : ['evolution chamber'], 'BanelingNest' : ['Baneling Nest'], 'RoachWarren' : ['RoachWarren'], 'InfestationPit' : ['InfestationPit'], 'HydraliskDen' : ['HydraliskDen'], # protoss } AbilityCodeDictionary = { #terran '"UpgradeToOrbital", 0' : ['Orbital', 'orbital_command' ], '"EngineeringBayResearch", 2' : ['+1_terran_infantry_attack', '+1 Infantry Attack'], '"EngineeringBayResearch", 6' : ['+1 Infantry Armor'], '"EngineeringBayResearch", 3' : ['+2 Infantry Attack'], '"EngineeringBayResearch", 7' : ['+2 Infantry Armor'], '"ArmoryResearchSwarm", 0' : ['+1 Vehicle Weapon', '+1 Vehicle weapon'], '"ArmoryResearchSwarm", 1' : ['+2 Vehicle Weapon', '+2 Vehicle weapon'], '"ArmoryResearchSwarm", 3' : ['+1 Vehicle Armor', '+1 Vehicle armor'], '"ArmoryResearchSwarm", 4' : ['+2 Vehicle Armor', '+2 Vehicle armor'], '"BarracksTechLabResearch", 0': ['Stimpack'], '"BarracksTechLabResearch", 1': ['Combat Shield', 'Combat shields', 'shields'], '"BarracksTechLabResearch", 2': ['Fugas', 'Concussive shells'], '"FactoryTechLabResearch", 2': ['servos'], #zerg '"evolutionchamberresearch", 0' : ['+1 melee attack'], '"evolutionchamberresearch", 3' : ['+1 ground armor'], '"evolutionchamberresearch", 6' : ['+1 range attack'], '"BanelingNestResearch", 0' : ['Central hooks'], '"UpgradeToLair", 0' : ['Lair'], '"SpawningPoolResearch", 1' : ['metabolic boost'], '"LairResearch", 1' : ['carapace', 'Overlord speed' ], '"RoachWarrenResearch", 1' : ['roach speed'], #protoss } def ColloqiualToCode(name): for key in ColloquialToCodeDictionary.keys(): if (name in ColloquialToCodeDictionary[key]): return key raise Exception('Unknown name ' + name); def GetAbilityCode(name): for key in AbilityCodeDictionary.keys(): if (name in AbilityCodeDictionary[key]): return key raise Exception('Unknown ability ' + name); def isAbility(name): if name[:1] == '+': return True for key in AbilityCodeDictionary.keys(): if (name in AbilityCodeDictionary[key]): return True return False try: filename = sys.argv[1] except Exception: print 'specify correct file name in cmd line' sys.exit(0) f = open(filename,'r') outfile = 
'.'.join(filename.split('.')[:1]+['bo']) out = open(outfile, 'w') counter = 0 infoCounter = 0 for s in f: if 'Objectives:' in s: break; line = s.strip().split() if len(line) == 0: continue if(s[:1] == '#'): continue if(s[:1] == 'I'): #read Info tip time = line[1] try: minutes = int(time.split(':')[0]) seconds = int(time.split(':')[1]) except ValueError: print "error parsing line\n %s" % " ".join(line) break info = " ".join(line[2:]) out.write('gv_bOInfoTips[%d] = "%s";\n' % (infoCounter, info) ) out.write('gv_bOInfoTimings[%d] = %d;\n' % (infoCounter, minutes* 60 + seconds) ) infoCounter += 1 else: time = line[0] minutes = int(time.split(':')[0]) seconds = int(time.split(':')[1]) try: supply = line[1] if not supply_exp.match(supply): supply = 0 name = " ".join(line[1:]) else: supply = int(line[1]) name = " ".join(line[2:]) except ValueError: print "error parsing line\n %s" % " ".join(line) break if not isAbility(name): name = ColloqiualToCode(name) out.write("gv_bOUnits[%d] = \"%s\";\n" % (counter, name)) else: ability_code = GetAbilityCode(name) out.write("gv_bOAbilities[%d] = AbilityCommand(%s);\n" % (counter, ability_code)) out.write("gv_bOAbilities_Name[%d] = gf_GetAbilityName(AbilityCommand(%s));\n" % (counter, ability_code)) out.write("gv_bOSupplies[%d] = %d;\n" % (counter, supply)) out.write("gv_bOTimings[%d][0] = %d;\n" % (counter, minutes)) out.write("gv_bOTimings[%d][1] = %d;\n" % (counter, seconds)) counter +=1 out.write('gv_bOEnd = %d;\n'% counter) counter =0 # Writing objectives for s in f: try: line = s.strip().split() if len(line) == 0: continue time = line[0] try: supply = int(line[1]) name = " ".join(line[2:]) minutes = int(time.split(':')[0]) seconds = int(time.split(':')[1]) except ValueError: print "error parsing line\n %s" % " ".join(line) break except IndexError: print "error in line " + s out.write("gv_bOObjectivesUnits[%d] = \"%s\";\n" % (counter, ColloqiualToCode(name))) out.write("gv_bOObjectivesUnitsAmount[%d] = %d;\n" % (counter, supply)) out.write("gv_bOObjectivesTimings[%d] = %d;\n" % (counter, minutes*60 + seconds)) counter += 1 out.write("gv_bOObjectivesCount = %d;\n" % (counter)) #out.write('gv_bOuuid="%s"' % uuid.uuid4()) out.close() print 'Converting done' print 'out file is '+ outfile
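For reference, the converter expects a plain-text build order with one entry per line, either "m:ss supply name" or "I m:ss tip text"; a hypothetical two-line input would be translated roughly as follows:

# Illustrative only: a hypothetical input file
#
#   0:45 13 supply
#   I 1:30 Keep building SCVs
#
# would produce output along these lines in the generated .bo file:
#
#   gv_bOUnits[0] = "SupplyDepot";
#   gv_bOSupplies[0] = 13;
#   gv_bOTimings[0][0] = 0;
#   gv_bOTimings[0][1] = 45;
#   gv_bOInfoTips[0] = "Keep building SCVs";
#   gv_bOInfoTimings[0] = 90;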
Jean Grainger is an Irish author of historical fiction from Cork. She holds bachelor's and master's degrees in History and English and held a number of careers before turning to writing. Some of her most notable occupations have been university lecturer, tour director and secondary school teacher. She has also been involved in script writing and directing for many high school plays. However, it was her experience as a tour guide which indirectly led to her career as an author. In the early nineties, Grainger rebelled against the notion of getting a traditional job and became a tour guide and director. Over a few years of practice she learnt the names and stories of every historical location in Ireland, making her one of the most sought-after guides. The decision to become a tour guide turned out to be one of the best decisions of her life, as she met easy-going, fun and incredible people who so enjoyed her stories that they encouraged her to write a book. She gained a new appreciation of the Emerald Isle she loved, and recited poems, sang songs and told stories that moved even her father, an extraordinary storyteller himself. People would confide in her the most extraordinary things - lifelong friendships, loves lost and found, marriages finished and fixed, families broken and reunited - which provided fodder for many of her later novels. Over time her visitors' clamor for her to write about her experiences got to her, and so one winter she dove into it. She sent her first draft, titled The Tour, to a publisher, confident that it would be published and that she would soon have Little Brown, Simon and Schuster, Penguin and Hachette beating down her door. Things did not turn out as rosy as she had expected, as none of the big publishers was remotely interested in her stories. Frustrated, she dumped her manuscripts in a drawer and went back to teaching English and History at university. She was an expert on the lives of Irish women during the Second World War and wrote a great deal on the subject. However, it was only a matter of time before the fiction-writing bug struck again and she penned her second title, So Much Owed. By this time she had met a few authors who introduced her to self-publishing, which promised unknown writers such as her a chance to put out their work. While it had once been believed that self-publishing was the place where mediocre writers went to dump drivel that nobody touched, that did not deter Jean Grainger. She got into Amazon publishing and within a few months had her first novel, The Tour, published in 2013. The novel went on to become a bestseller and even earned praise from historical fiction critics. While she now sells hundreds of books a month, she is still incredulous at how self-publishing has changed her life and made it possible for her to write full-time. She is no John Grisham or Stephen King, but her novels have been compared to the likes of Maeve Binchy, which says a great deal about her abilities. Her first novel tells the story of a diverse group of American tourists who undergo a life-changing experience while on a tour of Ireland. Her second novel is a family saga about the Buckley family of West Cork. The family is in a state of turmoil as the Second World War sets in; it is a saga that incorporates romance and intrigue with occupied Europe as its canvas.
Her third novel is a historical title about an old flag reminiscent of the Easter Rising in Ireland. It tells the story of Ireland's final bid to break away from the United Kingdom. However, the story is more than a retelling of history, as it also incorporates the life of the journalist who found the resistance flag. Her fourth novel is about friendship in socially and economically challenging times. Set in the 1970s, it features Hugo, Patrick and Liam, three boys from disparate backgrounds who forge a deep friendship. The Tour is the story of Connor O'Shea, a tour guide who can always be found picking up a group of American tourists eager to experience the real Ireland. But his current tour group contains a range of humorous characters who present dilemmas to even the most experienced of tour guides. The eclectic group of tourists includes a gold-digging multiple divorcee named Corlene, looking to hook a new man; a love-starved Boston police officer named Patrick; a goth uilleann piper, Dylan; a grumpy college professor, Dorothy; and a Wall Street oligarch named Elliot. To top it all off there is Ellen, a tourist who has come back to the Emerald Isle to discover one of the biggest truths of her life. Add a colorful cast of locals - Garda high-flyers, Ukrainian waitresses, passionate musicians, and a couple of Brits - and you have one of the most interesting of tours. And of course O'Shea is in the middle of it all, mending hearts and solving problems for his guests. So Much Owed, Jean Grainger's second novel, is set at Christmas 1918. Dr Richard Buckley has just come back home to Dunderrig House in West Cork, damaged and devastated by World War I. With him is Allingham Solange, the widow of his best friend, who has been left destitute by the war. Edith, his wife, hates that he wore the British uniform and sees him as a traitor to Ireland. After she gives birth to two children she disappears, leaving Solange to take care of her twins while she heads for greener pastures. Juliet and James soon become the lifeblood of the estate, their incorrigible ways making them a delight for Solange and their father. But as they grow up, the specter of a new war that may take away everything they have looms yet again. Moving from the tranquility of Cork to occupied France, and from neutral Dublin to wartime Belfast, it is a heart-wrenching family saga of hope in the midst of turmoil.
# # CORE # Copyright (c)2010-2012 the Boeing Company. # See the LICENSE file included in this distribution. # # author: Jeff Ahrenholz <[email protected]> # # # Copyright (c) 2014 Benocs GmbH # # author: Robert Wuttke <[email protected]> # # See the LICENSE file included in this distribution. # ''' quagga.py: defines routing services provided by Quagga. ''' import os if os.uname()[0] == 'Linux': from core.netns import nodes elif os.uname()[0] == 'FreeBSD': from core.bsd import nodes from core.service import CoreService, addservice from core.misc.ipaddr import IPPrefix, isIPAddress from core.misc.ipaddr import IPv4Addr, IPv4Prefix, isIPv4Address from core.misc.ipaddr import IPv6Addr, IPv6Prefix, isIPv6Address from core.misc.ipaddr import Loopback, Interface from core.misc.netid import NetIDNodeMap from core.api import coreapi from core.constants import * from core.services import service_helpers from core.services import service_flags QUAGGA_USER='quagga' QUAGGA_GROUP='quagga' if os.uname()[0] == 'FreeBSD': QUAGGA_GROUP='wheel' class Zebra(CoreService): ''' ''' _name = 'zebra' _group = 'Quagga' _depends = ('vtysh',) _dirs = ('/etc/quagga', '/var/run/quagga') _configs = ('/etc/quagga/daemons', '/etc/quagga/zebra.conf', '/etc/quagga/vtysh.conf') _startindex = 35 _startup = ('/etc/init.d/quagga start',) _shutdown = ('/etc/init.d/quagga stop', ) _validate = ('pidof zebra', ) _starttime = 10 @classmethod def generateconfig(cls, node, filename, services): ''' Return the Quagga.conf or quaggaboot.sh file contents. ''' if filename == cls._configs[0]: return cls.generateDaemonsConf(node, services) elif filename == cls._configs[1]: return cls.generateZebraConf(node, services) elif filename == cls._configs[2]: return cls.generateVtyshConf(node, services) else: raise ValueError @classmethod def generateVtyshConf(cls, node, services): ''' Returns configuration file text. ''' return 'service integrated-vtysh-config' @classmethod def generateZebraConf(cls, node, services): ''' Returns configuration file text that defines which daemons to run. ''' cfg = [] cfg.append('log file /tmp/quagga-zebra-%s.log\n' % node.name) cfg.append('hostname %s\n' % node.name) cfg.append('agentx\n') cfg.append('interface lo\n') if node.enable_ipv4: cfg.append(' ip address %s/32\n' % node.getLoopbackIPv4()) if node.enable_ipv6: cfg.append(' ipv6 address %s/128\n' % node.getLoopbackIPv6()) for ifc in node.netifs(): # do not ever include control interfaces in anything if hasattr(ifc, 'control') and ifc.control == True: continue cfg.append('interface %s\n' % ifc.name) for a in ifc.addrlist: if isIPv4Address(a): cfg.append(' ip address %s\n' % a) if isIPv6Address(a): cfg.append(' ipv6 address %s\n' % a) cfg.append('!\n') if node.enable_ipv4: cfg.append('ip forwarding\n') if node.enable_ipv6: cfg.append('ipv6 forwarding\n') return ''.join(cfg) @classmethod def generateDaemonsConf(cls, node, services): ''' Returns configuration file text that defines which daemons to run. ''' cfg = [] cfg.extend([cls._name, '=yes\n']) for s in services: if cls._name not in s._depends: continue cfg.extend([s._daemonname, '=yes\n']) return ''.join(cfg) addservice(Zebra) class QuaggaService(CoreService): ''' Parent class for Quagga services. Defines properties and methods common to Quagga's routing daemons. ''' _name = 'QuaggaDaemon' _daemonname = '' _group = 'Quagga' _depends = ('zebra', ) _dirs = () _configs = () _startindex = 40 _startup = () _shutdown = () _meta = 'The config file for this service can be found in the Zebra service.' 
_ipv4_routing = False _ipv6_routing = False @staticmethod def routerid(node): ''' Helper to return the first IPv4 address of a node as its router ID. ''' # Use IPv4 loopback address of this node as the routerID. # Don't get v4-loopback-addr directly from node as the node might has # IPv4 disabled. Instead, directly query the central 'database' for the # IPv4 address that would be the nodes IPv4 loopback. return str(Loopback.getLoopbackIPv4(node)) @staticmethod def rj45check(ifc): ''' Helper to detect whether interface is connected an external RJ45 link. ''' if ifc.net: for peerifc in ifc.net.netifs(): if peerifc == ifc: continue if isinstance(peerifc, nodes.RJ45Node): return True return False @classmethod def generateconfig(cls, node, filename, services): return cls.generatequaggaconfig(node) @classmethod def generatequaggaloconfig(cls, node): return '' @classmethod def generatequaggaifcconfig(cls, node, ifc): return '' @classmethod def generatequaggaconfig(cls, node): return '' @classmethod def interface_iterator(cls, node, callback=None): result = [] for ifc in node.netifs(): # do not ever include control interfaces in anything if hasattr(ifc, 'control') and ifc.control == True: continue if not callback is None: result.extend(callback(node, ifc)) return result @staticmethod def addrstr(x): ''' helper for mapping IP addresses to zebra config statements ''' if isIPv4Address(x): return 'ip address %s' % x elif isIPv6Address(x): return 'ipv6 address %s' % x else: raise Value('invalid address: %s').with_traceback(x) class Ospfv2(QuaggaService): ''' The OSPFv2 service provides IPv4 routing for wired networks. It does not build its own configuration file but has hooks for adding to the unified Quagga.conf file. ''' _name = 'OSPFv2' _daemonname = 'ospfd' #_startup = ('sh quaggaboot.sh ospfd',) #_shutdown = ('killall ospfd', ) _configs = ('/etc/quagga/ospfd.conf',) _validate = ('pidof ospfd', ) _ipv4_routing = True #_starttime = 10 @staticmethod def mtucheck(ifc): ''' Helper to detect MTU mismatch and add the appropriate OSPF mtu-ignore command. This is needed when e.g. a node is linked via a GreTap device. ''' if ifc.mtu != 1500: # a workaround for PhysicalNode GreTap, which has no knowledge of # the other nodes/nets return ' ip ospf mtu-ignore\n' if not ifc.net: return '' for i in ifc.net.netifs(): if i.mtu != ifc.mtu: return ' ip ospf mtu-ignore\n' return '' @staticmethod def ptpcheck(ifc): ''' Helper to detect whether interface is connected to a notional point-to-point link. 
''' #if isinstance(ifc.net, nodes.PtpNet): # return ' ip ospf network point-to-point\n' return '' @classmethod def generate_network_statement(cls, node, ifc): cfg = [] # find any link on which two equal netid's (i.e., AS numbers) are # present and configure an ospf-session on this interface # on all other interfaces, disable ospf for idx, net_netif in list(ifc.net._netif.items()): # skip our own interface if ifc == net_netif: continue # skip control interface if hasattr(ifc, 'control') and ifc.control == True: continue # found the same AS, enable IGP/OSPF if node.netid == net_netif.node.netid: if not service_flags.Router in net_netif.node.services: cfg.append(' passive-interface %s\n' % ifc.name) for a in ifc.addrlist: if not isIPv4Address(a): continue cfg.append(' network %s area 0.0.0.0\n' % a) return cfg @classmethod def generatequaggaconfig(cls, node): if not node.enable_ipv4: return '' cfg = '' cfg += cls.generatequaggaloconfig(node) for ifc in node.netifs(): # do not ever include control interfaces in anything if hasattr(ifc, 'control') and ifc.control == True: continue cfg += 'interface %s\n' % ifc.name cfg += cls.generatequaggaifcconfig(node, ifc) ipv4list = [x for x in ifc.addrlist if isIPv4Address(x)] cfg += ' ' cfg += '\n '.join(map(cls.addrstr, ipv4list)) cfg += '\n' cfg += '!\n' cfg = '!\n! OSPFv2 (for IPv4) configuration\n!\n' cfg += 'log file /tmp/quagga-ospf-%s.log\n' % node.name cfg += 'router ospf\n' cfg += ' router-id %s\n' % cls.routerid(node) cfg += ' redistribute connected\n' cfg += '!\n' cfg += ''.join(set(cls.interface_iterator(node, cls.generate_network_statement))) cfg += '!\n' return cfg @classmethod def generatequaggaloconfig(cls, node): if not node.enable_ipv4 and not node.enable_ipv6: return '' cfg = '' cfg += 'interface lo\n' cfg += ' ip address %s\n' % node.getLoopbackIPv4() cfg += '!\n' return cfg @classmethod def generatequaggaifcconfig(cls, node, ifc): if not node.enable_ipv4: return '' # do not include control interfaces if hasattr(ifc, 'control') and ifc.control == True: return '' cfg = '' cfg += cls.mtucheck(ifc) cfg += cls.ptpcheck(ifc) cfg += '!\n' return cfg addservice(Ospfv2) class Ospfv3(QuaggaService): ''' The OSPFv3 service provides IPv6 routing for wired networks. It does not build its own configuration file but has hooks for adding to the unified Quagga.conf file. ''' _name = 'OSPFv3' _daemonname = 'ospf6d' #_startup = ('sh quaggaboot.sh ospf6d',) #_shutdown = ('killall ospf6d', ) _configs = ('/etc/quagga/ospf6d.conf',) _validate = ('pidof ospf6d', ) _ipv4_routing = True _ipv6_routing = True #_starttime = 10 @staticmethod def minmtu(ifc): ''' Helper to discover the minimum MTU of interfaces linked with the given interface. ''' mtu = ifc.mtu if not ifc.net: return mtu for i in ifc.net.netifs(): if i.mtu < mtu: mtu = i.mtu return mtu @classmethod def mtucheck(cls, ifc): ''' Helper to detect MTU mismatch and add the appropriate OSPFv3 ifmtu command. This is needed when e.g. a node is linked via a GreTap device. 
''' minmtu = cls.minmtu(ifc) if minmtu < ifc.mtu: return ' ipv6 ospf6 ifmtu %d\n' % minmtu else: return '' @classmethod def generate_area_statement(cls, node, ifc): cfg = [] for idx, net_netif in list(ifc.net._netif.items()): # skip our own interface if ifc == net_netif: continue # skip control interface if hasattr(ifc, 'control') and ifc.control == True: continue # found the same AS, enable IGP/OSPF if node.netid == net_netif.node.netid: if service_flags.Router in net_netif.node.services: cfg.append(' interface %s area 0.0.0.0\n' % ifc.name) return cfg @classmethod def generatequaggaconfig(cls, node): if not node.enable_ipv6: return '' cfg = '' cfg += cls.generatequaggaloconfig(node) for ifc in node.netifs(): # do not ever include control interfaces in anything if hasattr(ifc, 'control') and ifc.control == True: continue cfg += 'interface %s\n' % ifc.name cfg += cls.generatequaggaifcconfig(node, ifc) cfg += '!\n' cfg += 'log file /tmp/quagga-ospf6-%s.log\n' % node.name cfg += 'router ospf6\n' rtrid = cls.routerid(node) cfg += ' router-id %s\n' % rtrid cfg += ' redistribute connected\n' cfg += '!\n' cfg += ''.join(set(cls.interface_iterator(node, cls.generate_area_statement))) return cfg @classmethod def generatequaggaloconfig(cls, node): if not node.enable_ipv4 and not node.enable_ipv6: return '' cfg = '' return cfg @classmethod def generatequaggaifcconfig(cls, node, ifc): if not node.enable_ipv4 and not node.enable_ipv6: return '' if hasattr(ifc, 'control') and ifc.control == True: return '' enable_ifc = False for a in ifc.addrlist: if node.enable_ipv4 and isIPv4Address(a): enable_ifc = True if node.enable_ipv6 and isIPv6Address(a): enable_ifc = True cfg = '' if enable_ifc: cfg += cls.mtucheck(ifc) for idx, net_netif in list(ifc.net._netif.items()): # skip our own interface if ifc == net_netif: continue # skip control interface if hasattr(ifc, 'control') and ifc.control == True: continue # found the same AS, enable IGP/OSPF if node.netid == net_netif.node.netid: if not service_flags.Router in net_netif.node.services: # other end of link is not router. don't send hellos cfg += ' ipv6 ospf6 passive\n' break return cfg addservice(Ospfv3) class Ospfv3mdr(Ospfv3): ''' The OSPFv3 MANET Designated Router (MDR) service provides IPv6 routing for wireless networks. It does not build its own configuration file but has hooks for adding to the unified Quagga.conf file. ''' _name = 'OSPFv3MDR' _daemonname = 'ospf6d' _ipv4_routing = True #_starttime = 10 @classmethod def generatequaggaifcconfig(cls, node, ifc): cfg = super().generatequaggaifcconfig(cls, node, ifc) # super() decides whether this interface is to be included. # honor its decision if len(cfg) == 0: return cfg return cfg + '''\ ipv6 ospf6 instance-id 65 ipv6 ospf6 hello-interval 2 ipv6 ospf6 dead-interval 6 ipv6 ospf6 retransmit-interval 5 ipv6 ospf6 network manet-designated-router ipv6 ospf6 diffhellos ipv6 ospf6 adjacencyconnectivity uniconnected ipv6 ospf6 lsafullness mincostlsa ''' addservice(Ospfv3mdr) class Bgp(QuaggaService): '''' The BGP service provides interdomain routing. Peers must be manually configured, with a full mesh for those having the same AS number. 
''' _name = 'BGP' _daemonname = 'bgpd' #_startup = ('sh quaggaboot.sh bgpd',) #_shutdown = ('killall bgpd', ) _configs = ('/etc/quagga/bgpd.conf',) _validate = ('pidof bgpd', ) _custom_needed = False _ipv4_routing = True _ipv6_routing = True #_starttime = 20 @classmethod def configure_EGP(cls, node): cfg = '' v6cfg = [] # find any link on which two different netid's (i.e., AS numbers) are # present and configure a bgp-session between the two corresponding nodes. for localnetif in node.netifs(): # do not include control interfaces if hasattr(localnetif, 'control') and localnetif.control == True: continue for idx, net_netif in list(localnetif.net._netif.items()): candidate_node = net_netif.node # skip our own interface if localnetif == net_netif.node: continue # found two different ASes. if not node.netid == net_netif.node.netid and \ service_flags.EGP in net_netif.node.services: for local_node_addr in localnetif.addrlist: local_node_addr_str = str(local_node_addr.split('/')[0]) if not node.enable_ipv4 and \ isIPv4Address(local_node_addr): continue if not node.enable_ipv6 and \ isIPv6Address(local_node_addr): continue for remote_node_addr in net_netif.addrlist: remote_node_addr_str = str(remote_node_addr.split('/')[0]) if not net_netif.node.enable_ipv4 and \ isIPv4Address(remote_node_addr): continue if not net_netif.node.enable_ipv6 and \ isIPv6Address(remote_node_addr): continue # for inter-AS links, use interface addresses # instead of loopback addresses if (isIPv4Address(local_node_addr) and \ isIPv4Address(remote_node_addr)): cfg += ' neighbor %s remote-as %s\n' % \ (remote_node_addr_str, \ str(net_netif.node.netid)) elif (isIPv6Address(local_node_addr) and \ isIPv6Address(remote_node_addr)): cfg += ' neighbor %s remote-as %s\n' % \ (remote_node_addr_str, str(net_netif.node.netid)) v6cfg.append((' neighbor %s activate\n' % remote_node_addr_str)) v6cfg.append((' network %s\n' % str(local_node_addr))) return cfg, v6cfg @classmethod def generatequaggaconfig(cls, node): v6cfg = [] v6prefixes = [] if not node.enable_ipv4 and not node.enable_ipv6: return '' cfg = '!\n! 
BGP configuration\n!\n' cfg += 'log file /tmp/quagga-bgp-%s.log\n' % node.name cfg += 'router bgp %s\n' % node.netid cfg += ' bgp router-id %s\n' % cls.routerid(node) cfg += ' redistribute kernel\n' cfg += ' redistribute static\n' cfg += '!\n' if hasattr(node, 'netid') and not node.netid is None: netid = node.netid else: # TODO: netid 0 is invalid netid = 0 # configure EBGP connections: if service_flags.EGP in node.services: tmpcfg, tmpv6cfg = cls.configure_EGP(node) cfg += tmpcfg v6cfg.extend(tmpv6cfg) # configure IBGP connections confstr_list = [cfg] # full mesh service_helpers.nodewalker(node, node, confstr_list, cls.nodewalker_ibgp_find_neighbors_callback) cfg = ''.join(confstr_list) if node.enable_ipv4 and service_flags.EGP in node.services: interface_net = Interface.getInterfaceNet_per_net(\ node.session.sessionid, netid, 4) loopback_net = Loopback.getLoopbackNet_per_net(\ node.session.sessionid, netid, 4) cfg += ' network %s\n' % str(loopback_net) cfg += ' network %s\n' % str(interface_net) cfg += ' aggregate-address %s summary-only\n' % str(loopback_net) cfg += ' aggregate-address %s summary-only\n' % str(interface_net) if node.enable_ipv6: v6_ibgp_neighbor_list = [] service_helpers.nodewalker(node, node, v6_ibgp_neighbor_list, cls.nodewalker_ibgp_find_neighbor_addrs_v6_callback) cfg += ' address-family ipv6\n' # activate IBGP neighbors cfg += ''.join([(' neighbor %s activate\n') % \ (str(remote_addr).split('/')[0]) \ for local_addr, remote_addr in v6_ibgp_neighbor_list]) # activate EBGP neighbors cfg += ''.join(v6cfg) if service_flags.EGP in node.services: # announce networks interface_net = Interface.getInterfaceNet_per_net(\ node.session.sessionid, netid, 6) loopback_net = Loopback.getLoopbackNet_per_net(\ node.session.sessionid, netid, 6) cfg += ' network %s\n' % str(loopback_net) cfg += ' network %s\n' % str(interface_net) cfg += ' aggregate-address %s summary-only\n' % str(loopback_net) cfg += ' aggregate-address %s summary-only\n' % str(interface_net) adj_addrs = cls.collect_adjacent_loopback_addrs_v6(cls, node) for adj_addr in adj_addrs: cfg += ' network %s/128\n' % str(adj_addr) cfg += '\n exit-address-family\n' return cfg @staticmethod def nodewalker_ibgp_find_neighbors_callback(startnode, currentnode): result = [] if service_flags.Router in startnode.services and \ service_flags.Router in currentnode.services and \ not startnode == currentnode and \ startnode.netid == currentnode.netid: startnode_ipversions = startnode.getIPversions() currentnode_ipversions = currentnode.getIPversions() ipversions = [] for ipversion in 4, 6: if ipversion in startnode_ipversions and currentnode_ipversions: ipversions.append(ipversion) for ipversion in ipversions: if ipversion == 4: startnode_addr = startnode.getLoopbackIPv4() currentnode_addr = currentnode.getLoopbackIPv4() elif ipversion == 6: startnode_addr = startnode.getLoopbackIPv6() currentnode_addr = currentnode.getLoopbackIPv6() result.extend([' neighbor %s remote-as %s\n' % \ (str(currentnode_addr), str(currentnode.netid)), ' neighbor %s update-source %s\n' % \ (str(currentnode_addr), str(startnode_addr)) ]) if service_flags.EGP in startnode.services: result.append(' neighbor %s next-hop-self\n' % str(currentnode_addr)) return result @staticmethod def nodewalker_ibgp_find_neighbor_addrs_v6_callback(startnode, currentnode): result = [] if service_flags.Router in startnode.services and \ service_flags.Router in currentnode.services and \ not startnode == currentnode and \ startnode.netid == currentnode.netid: if 
startnode.enable_ipv6 and currentnode.enable_ipv6: result.append((startnode.getLoopbackIPv6(), currentnode.getLoopbackIPv6())) return result @staticmethod def collect_adjacent_loopback_addrs_v6(cls, node): addrs = [] for ifc in node.netifs(): for idx, net_netif in list(ifc.net._netif.items()): # skip our own interface if ifc == net_netif: continue # skip control interface if hasattr(ifc, 'control') and ifc.control == True: continue # found the same AS, collect loopback addresses if node.netid == net_netif.node.netid: # other end of link is no router. announce its loopback addr if not service_flags.Router in net_netif.node.services: if net_netif.node.enable_ipv6: addrs.append(net_netif.node.getLoopbackIPv6()) return addrs addservice(Bgp) class Rip(QuaggaService): ''' The RIP service provides IPv4 routing for wired networks. ''' _name = 'RIP' _daemonname = 'ripd' #_startup = ('sh quaggaboot.sh ripd',) #_shutdown = ('killall ripd', ) _configs = ('/etc/quagga/ripd.conf',) _validate = ('pidof ripd', ) _ipv4_routing = True #_starttime = 10 @classmethod def generatequaggaconfig(cls, node): cfg = '''\ log file /tmp/quagga-ospf-%s.log router rip redistribute static redistribute connected redistribute ospf network 0.0.0.0/0 ! ''' % node.name return cfg addservice(Rip) class Ripng(QuaggaService): ''' The RIP NG service provides IPv6 routing for wired networks. ''' _name = 'RIPNG' _daemonname = 'ripngd' #_startup = ('sh quaggaboot.sh ripngd',) #_shutdown = ('killall ripngd', ) _configs = ('/etc/quagga/ripngd.conf',) _validate = ('pidof ripngd', ) _ipv6_routing = True #_starttime = 10 @classmethod def generatequaggaconfig(cls, node): cfg = '''\ log file /tmp/quagga-ospf-%s.log router ripng redistribute static redistribute connected redistribute ospf6 network ::/0 ! ''' % node.name return cfg addservice(Ripng) class Babel(QuaggaService): ''' The Babel service provides a loop-avoiding distance-vector routing protocol for IPv6 and IPv4 with fast convergence properties. ''' _name = 'Babel' _daemonname = 'babeld' #_startup = ('sh quaggaboot.sh babeld',) #_shutdown = ('killall babeld', ) _configs = ('/etc/quagga/babeld.conf',) _validate = ('pidof babeld', ) _ipv6_routing = True #_starttime = 10 @classmethod def generatequaggaconfig(cls, node): cfg += 'log file /tmp/quagga-babel-%s.log\n' % node.name cfg = 'router babel\n' for ifc in node.netifs(): if hasattr(ifc, 'control') and ifc.control == True: continue cfg += ' network %s\n' % ifc.name cfg += ' redistribute static\n redistribute connected\n' return cfg @classmethod def generatequaggaifcconfig(cls, node, ifc): type = 'wired' if ifc.net and ifc.net.linktype == coreapi.CORE_LINK_WIRELESS: return ' babel wireless\n no babel split-horizon\n' else: return ' babel wired\n babel split-horizon\n' addservice(Babel) class ISIS(QuaggaService): ''' The user generated service isisd provides a ISIS! ''' _name = 'ISIS' _daemonname = 'isisd' #_startup = ('sh quaggaboot.sh isisd',) #_shutdown = ('killall isisd', ) _configs = ('/etc/quagga/isisd.conf',) _validate = ('pidof isisd', ) _ipv4_routing = True _ipv6_routing = True #_starttime = 10 @classmethod def generatequaggaifcconfig(cls, node, ifc): added_ifc = False if not node.enable_ipv4 and not node.enable_ipv6: return '' # do not include control interfaces if hasattr(ifc, 'control') and ifc.control == True: return '' cfg = '' # find any link on which two equal netid's (e.g., AS numbers) are # present and on which two routers are present. 
# then configure an isis-session on this interface # on all other interfaces, disable isis for idx, net_netif in list(ifc.net._netif.items()): # only add each interface once if added_ifc: break # skip our own interface if ifc == net_netif: continue # skip control interface if hasattr(ifc, 'control') and ifc.control == True: continue # found the same AS, enable IGP/ISIS if not added_ifc: if node.enable_ipv4: cfg += ' ip router isis 1\n' if node.enable_ipv6: cfg += ' ipv6 router isis 1\n' if node.netid == net_netif.node.netid: # other end of link is not router. don't send ISIS hellos if not service_flags.Router in net_netif.node.services: cfg += ' isis passive\n' else: cfg += ' isis circuit-type level-2-only\n' # if this interface is connected via a point-to-point-link, # set isis network point-to-point. # if this directive is not set, isis will speak mode lan if isinstance(ifc.net, nodes.PtpNet): cfg += ' isis network point-to-point\n' elif service_flags.Router in net_netif.node.services: cfg += ' isis passive\n' cfg += '!\n' # only add each interface once added_ifc = True return cfg @classmethod def generatequaggaloconfig(cls, node): if not node.enable_ipv4 and not node.enable_ipv6: return '' cfg = '' cfg += 'interface lo\n' if node.enable_ipv4: cfg += ' ip router isis 1\n' if node.enable_ipv6: cfg += ' ipv6 router isis 1\n' cfg += ' isis passive\n' cfg += ' isis metric 0\n' cfg += '!\n' return cfg @classmethod def generatequaggaconfig(cls, node): cfg = '' cfg += cls.generatequaggaloconfig(node) for ifc in node.netifs(): # do not ever include control interfaces in anything if hasattr(ifc, 'control') and ifc.control == True: continue tmpcfg = 'interface %s\n' % ifc.name cfgv4 = '' cfgv6 = '' cfgvall = '' want_ipv4 = False want_ipv6 = False ifccfg = cls.generatequaggaifcconfig(node, ifc) if cls._ipv4_routing and node.enable_ipv4: want_ipv4 = True if cls._ipv6_routing and node.enable_ipv6: want_ipv6 = True if want_ipv4 and not want_ipv6: cfgv4 += ifccfg elif not want_ipv4 and want_ipv6: cfgv6 += ifccfg elif want_ipv4 and want_ipv6: cfgvall += ifccfg if want_ipv4 and not want_ipv6: ipv4list = [x for x in ifc.addrlist if isIPv4Address(x)] tmpcfg += cfgv4 elif not want_ipv4 and want_ipv6: ipv6list = [x for x in ifc.addrlist if isIPv6Address(x)] tmpcfg += cfgv6 elif want_ipv4 and want_ipv6: tmpcfg += cfgv4 tmpcfg += cfgv6 tmpcfg += cfgvall tmpcfg += '!\n' if want_ipv4 or want_ipv6: cfg += tmpcfg cfg += '! ISIS configuration\n' if node.enable_ipv4 or node.enable_ipv6: cfg += 'log file /tmp/quagga-isis-%s.log\n' % node.name cfg += 'router isis 1\n' cfg += ' net %s\n' % cls.get_ISIS_ID(cls.routerid(node), str(node.netid)) cfg += ' metric-style wide\n' cfg += '!\n' return cfg @staticmethod def get_ISIS_ID(ipstr, netid): ''' calculates and returns an ISIS-ID based on the supplied IP addr ''' # isis-is is 12 characters long # it has the format: 49.nnnn.aaaa.bbbb.cccc.00 # where nnnn == netid (i.e., same on all routers) # abc == routerid (i.e., unique among all routers) #hexip = hex(int(IPv4Addr(ipstr).addr))[2:] #if len(hexip) < 8: # hexip = '0%s' % hexip #netid = str(netid) #isisnetidlist = [ '0' for i in range(4 - len(netid)) ] #isisnetidlist.append(netid) #isisnetid = ''.join(isisnetidlist) # 49.1000. 
splitted = ''.join(['%03d' % int(e) for e in ipstr.split('.')]) isisid = '49.1000.%s.%s.%s.00' % (splitted[:4], splitted[4:8], splitted[8:]) print('[DEBUG] isis id: %s' % isisid) return isisid addservice(ISIS) class Vtysh(CoreService): ''' Simple service to run vtysh -b (boot) after all Quagga daemons have started. ''' _name = 'vtysh' _group = 'Quagga' _startindex = 45 #_startup = ('sh quaggaboot.sh vtysh',) _shutdown = () #_starttime = 30 _configs = ('/etc/quagga/vtysh.conf',) @classmethod def generateconfig(cls, node, filename, services): return '' addservice(Vtysh)
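A quick sanity check of the NET string that get_ISIS_ID() builds (a sketch only: the router ID '10.0.0.1' and netid '1' below are made-up example values, and note that the implementation above hard-codes the 1000 area field rather than using the netid argument):

# Hypothetical inputs standing in for cls.routerid(node) and str(node.netid).
ipstr = '10.0.0.1'
splitted = ''.join(['%03d' % int(e) for e in ipstr.split('.')])
# splitted == '010000000001'  (12 digits, three per octet)
net = '49.1000.%s.%s.%s.00' % (splitted[:4], splitted[4:8], splitted[8:])
assert net == '49.1000.0100.0000.0001.00'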
The Duravis R 630 is an economical and durable tyre from Bridgestone, and a strong wet performer well suited to light trucks and vans. For long-lasting tread life and passenger safety, the tyre uses a Super LL Carbon compound. Its tread pattern, with deep grooves and Z-shaped sipes, provides extra traction on wet roads, while a wide contact patch with circumferential grooves adds stability and driving comfort. Sidewall-protecting ribs and a jointless cap layer make it a robust tyre that can handle even the most rugged roads. Claiming 30% more mileage than comparable tyres, the Bridgestone Duravis R 630 performs well in both wet and dry conditions, with enhanced grip and safety.
#pylint: disable=no-init
from __future__ import (absolute_import, division, print_function)

from mantid.simpleapi import *
from mantid.api import *
from mantid.kernel import *


#pylint: disable=too-many-instance-attributes
class IndirectResolution(DataProcessorAlgorithm):

    _input_files = None
    _out_ws = None
    _instrument = None
    _analyser = None
    _reflection = None
    _detector_range = None
    _background = None
    _rebin_string = None
    _scale_factor = None
    _load_logs = None

    def category(self):
        return 'Workflow\\Inelastic;Inelastic\\Indirect'

    def summary(self):
        return 'Creates a resolution workspace for an indirect inelastic instrument.'

    def PyInit(self):
        self.declareProperty(StringArrayProperty(name='InputFiles'),
                             doc='Comma separated list of input files')

        self.declareProperty(name='Instrument', defaultValue='',
                             validator=StringListValidator(['IRIS', 'OSIRIS', 'TOSCA']),
                             doc='Instrument used during run.')

        self.declareProperty(name='Analyser', defaultValue='',
                             validator=StringListValidator(['graphite', 'mica', 'fmica']),
                             doc='Analyser used during run.')

        self.declareProperty(name='Reflection', defaultValue='',
                             validator=StringListValidator(['002', '004', '006']),
                             doc='Reflection used during run.')

        self.declareProperty(IntArrayProperty(name='DetectorRange', values=[0, 1]),
                             doc='Range of detectors to use in resolution calculation.')

        self.declareProperty(FloatArrayProperty(name='BackgroundRange', values=[0.0, 0.0]),
                             doc='Energy range to use as background.')

        self.declareProperty(name='RebinParam', defaultValue='',
                             doc='Rebinning parameters (min,width,max)')

        self.declareProperty(name='ScaleFactor', defaultValue=1.0,
                             doc='Factor to scale resolution curve by')

        self.declareProperty(name='LoadLogFiles', defaultValue=True,
                             doc='Option to load log files')

        self.declareProperty(WorkspaceProperty('OutputWorkspace', '',
                                               direction=Direction.Output),
                             doc='Output resolution workspace.')

    def PyExec(self):
        self._setup()

        iet_alg = self.createChildAlgorithm(name='ISISIndirectEnergyTransfer',
                                            startProgress=0.0,
                                            endProgress=0.7,
                                            enableLogging=True)
        iet_alg.setProperty('Instrument', self._instrument)
        iet_alg.setProperty('Analyser', self._analyser)
        iet_alg.setProperty('Reflection', self._reflection)
        iet_alg.setProperty('GroupingMethod', 'All')
        iet_alg.setProperty('SumFiles', True)
        iet_alg.setProperty('InputFiles', self._input_files)
        iet_alg.setProperty('SpectraRange', self._detector_range)
        iet_alg.setProperty('LoadLogFiles', self._load_logs)
        iet_alg.execute()

        group_ws = iet_alg.getProperty('OutputWorkspace').value
        icon_ws = group_ws.getItem(0).name()

        workflow_prog = Progress(self, start=0.7, end=0.9, nreports=4)

        if self._scale_factor != 1.0:
            workflow_prog.report('Scaling Workspace')
            Scale(InputWorkspace=icon_ws,
                  OutputWorkspace=icon_ws,
                  Factor=self._scale_factor)

        workflow_prog.report('Calculating flat background')
        CalculateFlatBackground(InputWorkspace=icon_ws,
                                OutputWorkspace=self._out_ws,
                                StartX=self._background[0],
                                EndX=self._background[1],
                                Mode='Mean',
                                OutputMode='Subtract Background')

        workflow_prog.report('Rebinning Workspace')
        Rebin(InputWorkspace=self._out_ws,
              OutputWorkspace=self._out_ws,
              Params=self._rebin_string)

        workflow_prog.report('Completing Post Processing')
        self._post_process()
        self.setProperty('OutputWorkspace', self._out_ws)

    def _setup(self):
        """
        Gets algorithm properties.
""" self._input_files = self.getProperty('InputFiles').value self._out_ws = self.getPropertyValue('OutputWorkspace') self._instrument = self.getProperty('Instrument').value self._analyser = self.getProperty('Analyser').value self._reflection = self.getProperty('Reflection').value self._detector_range = self.getProperty('DetectorRange').value self._background = self.getProperty('BackgroundRange').value self._rebin_string = self.getProperty('RebinParam').value self._scale_factor = self.getProperty('ScaleFactor').value self._load_logs = self.getProperty('LoadLogFiles').value def _post_process(self): """ Handles adding logs, saving and plotting. """ sample_logs = [('res_back_start', self._background[0]), ('res_back_end', self._background[1])] if self._scale_factor != 1.0: sample_logs.append(('res_scale_factor', self._scale_factor)) rebin_params = self._rebin_string.split(',') if len(rebin_params) == 3: sample_logs.append(('rebin_low', rebin_params[0])) sample_logs.append(('rebin_width', rebin_params[1])) sample_logs.append(('rebin_high', rebin_params[2])) log_alg = self.createChildAlgorithm(name='AddSampleLogMultiple', startProgress=0.9, endProgress=1.0, enableLogging=True) log_alg.setProperty('Workspace', self._out_ws) log_alg.setProperty('LogNames', [log[0] for log in sample_logs]) log_alg.setProperty('LogValues',[log[1] for log in sample_logs]) self.setProperty('OutputWorkspace', self._out_ws) log_alg.execute() AlgorithmFactory.subscribe(IndirectResolution)
An infrared blaster is one of the cooler features built into the Sony Tablet S. There is not much information about LG's new device beyond a few screen photographs, but it is entirely possible that the Optimus Vu II will look like the original Optimus Vu. LG has announced that the successor to the Optimus Vu, to be called the Optimus Vu II, is on its way. With it you will be able to control the electronics in your home from a single device: lose the TV remote and you can simply pull out the Optimus Vu instead of wasting time searching for it. The device is expected in the Korean market from September onwards, supports NFC, and comes with a built-in IR blaster. Enjoy the latest arrival from LG with its attractive features and services.

Video games seem to transcend the passage of time. The changes made to them have been nothing short of phenomenal, and it is hard to imagine what the future of gaming could hold. If there is one constant in the video game industry, it is the remarkable and fun changes each year brings.

There are many software companies in India that have specialized in open-source customization over time. They employ programmers who can build blogs or websites using open-source programming languages and CMSs according to clients' needs and requirements. Popular CMSs include WordPress, Joomla, Drupal, Magento and ZenCart, all of which are written in PHP. Working with a CMS is straightforward and does not require expert programming skills.

With the introduction of PC games, players discovered 2D and 3D titles with far more interactivity; the computer widened the range of game types. Early computer games were based on vector graphics, which later gave way to raster graphics. After the PC era, mobile phones entered the playground and mobile gaming was heavily hyped in the market. Although a multiplayer mode was available on older phones, data was only transferred from the server at regular intervals.
# -*- coding: utf-8 -*- import datetime from south.db import db from south.v2 import SchemaMigration from django.db import models class Migration(SchemaMigration): def forwards(self, orm): # Adding model 'Comment' db.create_table(u'questionnaire_comment', ( (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)), ('created', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now, blank=True)), ('modified', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now, blank=True)), ('text', self.gf('django.db.models.fields.CharField')(max_length=100)), ('user', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['auth.User'])), )) db.send_create_signal('questionnaire', ['Comment']) # Adding M2M table for field answer on 'Comment' db.create_table(u'questionnaire_comment_answer', ( ('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)), ('comment', models.ForeignKey(orm['questionnaire.comment'], null=False)), ('answer', models.ForeignKey(orm['questionnaire.answer'], null=False)) )) db.create_unique(u'questionnaire_comment_answer', ['comment_id', 'answer_id']) # Adding model 'QuestionOption' db.create_table(u'questionnaire_questionoption', ( (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)), ('created', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now, blank=True)), ('modified', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now, blank=True)), ('text', self.gf('django.db.models.fields.CharField')(max_length=100)), ('question', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['questionnaire.Question'])), )) db.send_create_signal('questionnaire', ['QuestionOption']) # Adding model 'Answer' db.create_table(u'questionnaire_answer', ( (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)), ('created', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now, blank=True)), ('modified', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now, blank=True)), ('question', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['questionnaire.Question'], null=True)), ('country', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['questionnaire.Country'], null=True)), ('status', self.gf('django.db.models.fields.CharField')(default='Draft', max_length=15)), ('version', self.gf('django.db.models.fields.IntegerField')(default=1, null=True)), )) db.send_create_signal('questionnaire', ['Answer']) # Adding model 'AnswerGroup' db.create_table(u'questionnaire_answergroup', ( (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)), ('created', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now, blank=True)), ('modified', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now, blank=True)), ('answer', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['questionnaire.Answer'], null=True)), ('grouped_question', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['questionnaire.QuestionGroup'], null=True)), ('row', self.gf('django.db.models.fields.CharField')(max_length=6)), )) db.send_create_signal('questionnaire', ['AnswerGroup']) # Adding model 'TextAnswer' db.create_table(u'questionnaire_textanswer', ( (u'answer_ptr', self.gf('django.db.models.fields.related.OneToOneField')(to=orm['questionnaire.Answer'], unique=True, primary_key=True)), ('response', 
self.gf('django.db.models.fields.CharField')(max_length=100, null=True)), )) db.send_create_signal('questionnaire', ['TextAnswer']) # Adding model 'MultiChoiceAnswer' db.create_table(u'questionnaire_multichoiceanswer', ( (u'answer_ptr', self.gf('django.db.models.fields.related.OneToOneField')(to=orm['questionnaire.Answer'], unique=True, primary_key=True)), ('response', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['questionnaire.QuestionOption'])), )) db.send_create_signal('questionnaire', ['MultiChoiceAnswer']) # Adding model 'QuestionGroup' db.create_table(u'questionnaire_questiongroup', ( (u'id', self.gf('django.db.models.fields.AutoField')(primary_key=True)), ('created', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now, blank=True)), ('modified', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime.now, blank=True)), ('subsection', self.gf('django.db.models.fields.related.ForeignKey')(to=orm['questionnaire.SubSection'])), ('order', self.gf('django.db.models.fields.IntegerField')()), )) db.send_create_signal('questionnaire', ['QuestionGroup']) # Adding M2M table for field question on 'QuestionGroup' db.create_table(u'questionnaire_questiongroup_question', ( ('id', models.AutoField(verbose_name='ID', primary_key=True, auto_created=True)), ('questiongroup', models.ForeignKey(orm['questionnaire.questiongroup'], null=False)), ('question', models.ForeignKey(orm['questionnaire.question'], null=False)) )) db.create_unique(u'questionnaire_questiongroup_question', ['questiongroup_id', 'question_id']) # Adding model 'NumericalAnswer' db.create_table(u'questionnaire_numericalanswer', ( (u'answer_ptr', self.gf('django.db.models.fields.related.OneToOneField')(to=orm['questionnaire.Answer'], unique=True, primary_key=True)), ('response', self.gf('django.db.models.fields.DecimalField')(max_digits=5, decimal_places=2)), )) db.send_create_signal('questionnaire', ['NumericalAnswer']) # Adding model 'DateAnswer' db.create_table(u'questionnaire_dateanswer', ( (u'answer_ptr', self.gf('django.db.models.fields.related.OneToOneField')(to=orm['questionnaire.Answer'], unique=True, primary_key=True)), ('response', self.gf('django.db.models.fields.DateField')()), )) db.send_create_signal('questionnaire', ['DateAnswer']) # Changing field 'Question.answer_type' db.alter_column(u'questionnaire_question', 'answer_type', self.gf('django.db.models.fields.CharField')(max_length=20)) # Adding field 'Section.name' db.add_column(u'questionnaire_section', 'name', self.gf('django.db.models.fields.CharField')(max_length=100, null=True), keep_default=False) def backwards(self, orm): # Deleting model 'Comment' db.delete_table(u'questionnaire_comment') # Removing M2M table for field answer on 'Comment' db.delete_table('questionnaire_comment_answer') # Deleting model 'QuestionOption' db.delete_table(u'questionnaire_questionoption') # Deleting model 'Answer' db.delete_table(u'questionnaire_answer') # Deleting model 'AnswerGroup' db.delete_table(u'questionnaire_answergroup') # Deleting model 'TextAnswer' db.delete_table(u'questionnaire_textanswer') # Deleting model 'MultiChoiceAnswer' db.delete_table(u'questionnaire_multichoiceanswer') # Deleting model 'QuestionGroup' db.delete_table(u'questionnaire_questiongroup') # Removing M2M table for field question on 'QuestionGroup' db.delete_table('questionnaire_questiongroup_question') # Deleting model 'NumericalAnswer' db.delete_table(u'questionnaire_numericalanswer') # Deleting model 'DateAnswer' 
db.delete_table(u'questionnaire_dateanswer') # Changing field 'Question.answer_type' db.alter_column(u'questionnaire_question', 'answer_type', self.gf('django.db.models.fields.CharField')(max_length=10)) # Deleting field 'Section.name' db.delete_column(u'questionnaire_section', 'name') models = { u'auth.group': { 'Meta': {'object_name': 'Group'}, u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}), 'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}) }, u'auth.permission': { 'Meta': {'ordering': "(u'content_type__app_label', u'content_type__model', u'codename')", 'unique_together': "((u'content_type', u'codename'),)", 'object_name': 'Permission'}, 'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}), 'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['contenttypes.ContentType']"}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'name': ('django.db.models.fields.CharField', [], {'max_length': '50'}) }, u'auth.user': { 'Meta': {'object_name': 'User'}, 'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}), 'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}), 'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}), 'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}), 'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), 'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), 'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}), 'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}), 'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}), 'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': u"orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}), 'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'}) }, u'contenttypes.contenttype': { 'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"}, 'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}), 'name': ('django.db.models.fields.CharField', [], {'max_length': '100'}) }, 'questionnaire.answer': { 'Meta': {'object_name': 'Answer'}, 'country': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['questionnaire.Country']", 'null': 'True'}), 'created': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'question': ('django.db.models.fields.related.ForeignKey', [], 
{'to': "orm['questionnaire.Question']", 'null': 'True'}), 'status': ('django.db.models.fields.CharField', [], {'default': "'Draft'", 'max_length': '15'}), 'version': ('django.db.models.fields.IntegerField', [], {'default': '1', 'null': 'True'}) }, 'questionnaire.answergroup': { 'Meta': {'object_name': 'AnswerGroup'}, 'answer': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['questionnaire.Answer']", 'null': 'True'}), 'created': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'grouped_question': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['questionnaire.QuestionGroup']", 'null': 'True'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'row': ('django.db.models.fields.CharField', [], {'max_length': '6'}) }, 'questionnaire.comment': { 'Meta': {'object_name': 'Comment'}, 'answer': ('django.db.models.fields.related.ManyToManyField', [], {'related_name': "'comments'", 'symmetrical': 'False', 'to': "orm['questionnaire.Answer']"}), 'created': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'text': ('django.db.models.fields.CharField', [], {'max_length': '100'}), 'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"}) }, 'questionnaire.country': { 'Meta': {'object_name': 'Country'}, 'created': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'name': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'}), 'regions': ('django.db.models.fields.related.ManyToManyField', [], {'symmetrical': 'False', 'related_name': "'countries'", 'null': 'True', 'to': "orm['questionnaire.Region']"}) }, 'questionnaire.dateanswer': { 'Meta': {'object_name': 'DateAnswer', '_ormbases': ['questionnaire.Answer']}, u'answer_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['questionnaire.Answer']", 'unique': 'True', 'primary_key': 'True'}), 'response': ('django.db.models.fields.DateField', [], {}) }, 'questionnaire.multichoiceanswer': { 'Meta': {'object_name': 'MultiChoiceAnswer', '_ormbases': ['questionnaire.Answer']}, u'answer_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['questionnaire.Answer']", 'unique': 'True', 'primary_key': 'True'}), 'response': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['questionnaire.QuestionOption']"}) }, 'questionnaire.numericalanswer': { 'Meta': {'object_name': 'NumericalAnswer', '_ormbases': ['questionnaire.Answer']}, u'answer_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['questionnaire.Answer']", 'unique': 'True', 'primary_key': 'True'}), 'response': ('django.db.models.fields.DecimalField', [], {'max_digits': '5', 'decimal_places': '2'}) }, 'questionnaire.organization': { 'Meta': {'object_name': 'Organization'}, 'created': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), u'id': 
('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'name': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'}) }, 'questionnaire.question': { 'Meta': {'object_name': 'Question'}, 'UID': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '6'}), 'answer_type': ('django.db.models.fields.CharField', [], {'max_length': '20'}), 'created': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'instructions': ('django.db.models.fields.TextField', [], {'null': 'True'}), 'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'text': ('django.db.models.fields.TextField', [], {}) }, 'questionnaire.questiongroup': { 'Meta': {'object_name': 'QuestionGroup'}, 'created': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'order': ('django.db.models.fields.IntegerField', [], {}), 'question': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['questionnaire.Question']", 'symmetrical': 'False'}), 'subsection': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['questionnaire.SubSection']"}) }, 'questionnaire.questionnaire': { 'Meta': {'object_name': 'Questionnaire'}, 'created': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'name': ('django.db.models.fields.CharField', [], {'max_length': '256'}) }, 'questionnaire.questionoption': { 'Meta': {'object_name': 'QuestionOption'}, 'created': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'question': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['questionnaire.Question']"}), 'text': ('django.db.models.fields.CharField', [], {'max_length': '100'}) }, 'questionnaire.region': { 'Meta': {'object_name': 'Region'}, 'created': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'description': ('django.db.models.fields.CharField', [], {'max_length': '300', 'null': 'True', 'blank': 'True'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'name': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'}), 'organization': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'regions'", 'null': 'True', 'to': "orm['questionnaire.Organization']"}) }, 'questionnaire.section': { 'Meta': {'object_name': 'Section'}, 'created': 
('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'name': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'}), 'order': ('django.db.models.fields.IntegerField', [], {}), 'questionnaire': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'sections'", 'to': "orm['questionnaire.Questionnaire']"}), 'title': ('django.db.models.fields.CharField', [], {'max_length': '256'}) }, 'questionnaire.subsection': { 'Meta': {'object_name': 'SubSection'}, 'created': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'description': ('django.db.models.fields.TextField', [], {'null': 'True', 'blank': 'True'}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'modified': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now', 'blank': 'True'}), 'order': ('django.db.models.fields.IntegerField', [], {}), 'section': ('django.db.models.fields.related.ForeignKey', [], {'related_name': "'sub_sections'", 'to': "orm['questionnaire.Section']"}), 'title': ('django.db.models.fields.CharField', [], {'max_length': '256'}) }, 'questionnaire.textanswer': { 'Meta': {'object_name': 'TextAnswer', '_ormbases': ['questionnaire.Answer']}, u'answer_ptr': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['questionnaire.Answer']", 'unique': 'True', 'primary_key': 'True'}), 'response': ('django.db.models.fields.CharField', [], {'max_length': '100', 'null': 'True'}) } } complete_apps = ['questionnaire']
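A hedged note on applying a South schema migration like the one above: the forwards()/backwards() methods are normally driven by Django's management commands rather than called directly. A minimal sketch (it assumes a configured Django project with South installed; the exact migration number comes from this file's name, which is not shown here):

# Usually run from the shell as:  python manage.py migrate questionnaire
# The equivalent from Python, e.g. in a deployment script:
from django.core.management import call_command

call_command('migrate', 'questionnaire')            # apply pending migrations (forwards)
# call_command('migrate', 'questionnaire', 'zero')  # unwind them all again (backwards)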
The Joint Worksite Health & Safety Committee is a group of worker and employer representatives working together to identify and solve health and safety problems at the work site. The committee is an important communication link between workers and management. An effective committee can assist in the reduction of losses resulting from incidents and occupational illness.
import config
from config import CMS_CFG, router_post
from torcms.core.base_handler import BaseHandler
from torcms.model.comment_model import MComment


class CommentHandler(BaseHandler):

    def initialize(self):
        super().initialize()

    def get(self, *args, **kwargs):
        url_str = args[0]
        url_arr = self.parse_url(url_str)
        if url_str == '' or url_str == 'list':
            self.list(url_str)
        elif len(url_arr) == 2:
            self.list(url_arr[0], cur_p=url_arr[1])

    def list(self, slug, **kwargs):
        '''
        List the comments, paged.

        ``slug`` is the URL fragment routed here by ``get``; it is currently
        unused, but kept so the call signature matches the dispatch above.
        '''

        def get_pager_idx():
            '''
            Get the pager index.
            '''
            cur_p = kwargs.get('cur_p')
            the_num = int(cur_p) if cur_p else 1
            the_num = 1 if the_num < 1 else the_num
            return the_num

        current_page_num = get_pager_idx()
        num_of_cat = MComment.count_of_certain()
        # Total number of pages (computed but not currently passed to the template).
        page_num = int(num_of_cat / CMS_CFG['list_num']) + 1

        kwd = {
            'current_page': current_page_num,
            'count': num_of_cat,
        }
        self.render('static_pages/comment/index.html',
                    postinfo=MComment.query_pager_by_comment(current_page_num),
                    userinfo=self.userinfo,
                    cfg=CMS_CFG,
                    kwd=kwd,
                    router_post=router_post)
Mark and I are just finalizing some plans to hit the waters of Algonquin Park next week. This will mark our first trip of the season and hopefully knock a little bit of the winter rust off. The trip will start from Smoke Lake and move south through Ragged Lake and Big Porcupine Lake. From there we will make our way east down Head Creek until we hit Lake Louisa. Next we travel west into McGarvey and back up to Big Porcupine Lake. In total we will take 5 or 6 days and enjoy the area. This is the first time either of us has tripped this section of The Park. We will also be on the hunt for some early season Lakers and Brookies, and will test out some of our camera gear before heading out for our Brent Run later this season. Hopefully we will be able to put together a recap of the trip soon after returning. If anyone has tripped this area before and has any advice or tips, we are all ears. “The Trick, William Potter, is not minding that it hurts.” – I can’t think of a truer statement to describe The Brent Run.
# Copyright 2014 The Chromium Authors. All rights reserved. # Use of this source code is governed by a BSD-style license that can be # found in the LICENSE file. """Generates a syntax tree from a Mojo IDL file.""" import imp import os.path import sys def _GetDirAbove(dirname): """Returns the directory "above" this file containing |dirname| (which must also be "above" this file).""" path = os.path.abspath(__file__) while True: path, tail = os.path.split(path) assert tail if tail == dirname: return path try: imp.find_module("ply") except ImportError: sys.path.append(os.path.join(_GetDirAbove("public"), "public/third_party")) from ply import lex from ply import yacc from ..error import Error from . import ast from .lexer import Lexer _MAX_ORDINAL_VALUE = 0xffffffff _MAX_ARRAY_SIZE = 0xffffffff class ParseError(Error): """Class for errors from the parser.""" def __init__(self, filename, message, lineno=None, snippet=None): Error.__init__(self, filename, message, lineno=lineno, addenda=([snippet] if snippet else None)) # We have methods which look like they could be functions: # pylint: disable=R0201 class Parser(object): def __init__(self, lexer, source, filename): self.tokens = lexer.tokens self.source = source self.filename = filename # Names of functions # # In general, we name functions after the left-hand-side of the rule(s) that # they handle. E.g., |p_foo_bar| for a rule |foo_bar : ...|. # # There may be multiple functions handling rules for the same left-hand-side; # then we name the functions |p_foo_bar_N| (for left-hand-side |foo_bar|), # where N is a number (numbered starting from 1). Note that using multiple # functions is actually more efficient than having single functions handle # multiple rules (and, e.g., distinguishing them by examining |len(p)|). # # It's also possible to have a function handling multiple rules with different # left-hand-sides. We do not do this. # # See http://www.dabeaz.com/ply/ply.html#ply_nn25 for more details. # TODO(vtl): Get rid of the braces in the module "statement". (Consider # renaming "module" -> "package".) Then we'll be able to have a single rule # for root (by making module "optional"). def p_root_1(self, p): """root : """ p[0] = ast.Mojom(None, ast.ImportList(), []) def p_root_2(self, p): """root : root module""" if p[1].module is not None: raise ParseError(self.filename, "Multiple \"module\" statements not allowed:", p[2].lineno, snippet=self._GetSnippet(p[2].lineno)) if p[1].import_list.items or p[1].definition_list: raise ParseError( self.filename, "\"module\" statements must precede imports and definitions:", p[2].lineno, snippet=self._GetSnippet(p[2].lineno)) p[0] = p[1] p[0].module = p[2] def p_root_3(self, p): """root : root import""" if p[1].definition_list: raise ParseError(self.filename, "\"import\" statements must precede definitions:", p[2].lineno, snippet=self._GetSnippet(p[2].lineno)) p[0] = p[1] p[0].import_list.Append(p[2]) def p_root_4(self, p): """root : root definition""" p[0] = p[1] p[0].definition_list.append(p[2]) def p_import(self, p): """import : IMPORT STRING_LITERAL SEMI""" # 'eval' the literal to strip the quotes. # TODO(vtl): This eval is dubious. We should unquote/unescape ourselves. 
p[0] = ast.Import(eval(p[2]), filename=self.filename, lineno=p.lineno(2)) def p_module(self, p): """module : attribute_section MODULE identifier_wrapped SEMI""" p[0] = ast.Module(p[3], p[1], filename=self.filename, lineno=p.lineno(2)) def p_definition(self, p): """definition : struct | union | interface | enum | const""" p[0] = p[1] def p_attribute_section_1(self, p): """attribute_section : """ p[0] = None def p_attribute_section_2(self, p): """attribute_section : LBRACKET attribute_list RBRACKET""" p[0] = p[2] def p_attribute_list_1(self, p): """attribute_list : """ p[0] = ast.AttributeList() def p_attribute_list_2(self, p): """attribute_list : nonempty_attribute_list""" p[0] = p[1] def p_nonempty_attribute_list_1(self, p): """nonempty_attribute_list : attribute""" p[0] = ast.AttributeList(p[1]) def p_nonempty_attribute_list_2(self, p): """nonempty_attribute_list : nonempty_attribute_list COMMA attribute""" p[0] = p[1] p[0].Append(p[3]) def p_attribute(self, p): """attribute : NAME EQUALS evaled_literal | NAME EQUALS NAME""" p[0] = ast.Attribute(p[1], p[3], filename=self.filename, lineno=p.lineno(1)) def p_evaled_literal(self, p): """evaled_literal : literal""" # 'eval' the literal to strip the quotes. p[0] = eval(p[1]) def p_struct(self, p): """struct : attribute_section STRUCT NAME LBRACE struct_body RBRACE SEMI""" p[0] = ast.Struct(p[3], p[1], p[5]) def p_struct_body_1(self, p): """struct_body : """ p[0] = ast.StructBody() def p_struct_body_2(self, p): """struct_body : struct_body const | struct_body enum | struct_body struct_field""" p[0] = p[1] p[0].Append(p[2]) def p_struct_field(self, p): """struct_field : typename NAME ordinal default SEMI""" p[0] = ast.StructField(p[2], p[3], p[1], p[4]) def p_union(self, p): """union : UNION NAME LBRACE union_body RBRACE SEMI""" p[0] = ast.Union(p[2], p[4]) def p_union_body_1(self, p): """union_body : """ p[0] = ast.UnionBody() def p_union_body_2(self, p): """union_body : union_body union_field""" p[0] = p[1] p[1].Append(p[2]) def p_union_field(self, p): """union_field : typename NAME ordinal SEMI""" p[0] = ast.UnionField(p[2], p[3], p[1]) def p_default_1(self, p): """default : """ p[0] = None def p_default_2(self, p): """default : EQUALS constant""" p[0] = p[2] def p_interface(self, p): """interface : attribute_section INTERFACE NAME LBRACE interface_body \ RBRACE SEMI""" p[0] = ast.Interface(p[3], p[1], p[5]) def p_interface_body_1(self, p): """interface_body : """ p[0] = ast.InterfaceBody() def p_interface_body_2(self, p): """interface_body : interface_body const | interface_body enum | interface_body method""" p[0] = p[1] p[0].Append(p[2]) def p_response_1(self, p): """response : """ p[0] = None def p_response_2(self, p): """response : RESPONSE LPAREN parameter_list RPAREN""" p[0] = p[3] def p_method(self, p): """method : NAME ordinal LPAREN parameter_list RPAREN response SEMI""" p[0] = ast.Method(p[1], p[2], p[4], p[6]) def p_parameter_list_1(self, p): """parameter_list : """ p[0] = ast.ParameterList() def p_parameter_list_2(self, p): """parameter_list : nonempty_parameter_list""" p[0] = p[1] def p_nonempty_parameter_list_1(self, p): """nonempty_parameter_list : parameter""" p[0] = ast.ParameterList(p[1]) def p_nonempty_parameter_list_2(self, p): """nonempty_parameter_list : nonempty_parameter_list COMMA parameter""" p[0] = p[1] p[0].Append(p[3]) def p_parameter(self, p): """parameter : typename NAME ordinal""" p[0] = ast.Parameter(p[2], p[3], p[1], filename=self.filename, lineno=p.lineno(2)) def p_typename(self, p): """typename : 
nonnullable_typename QSTN | nonnullable_typename""" if len(p) == 2: p[0] = p[1] else: p[0] = p[1] + "?" def p_nonnullable_typename(self, p): """nonnullable_typename : basictypename | array | fixed_array | associative_array | interfacerequest""" p[0] = p[1] def p_basictypename(self, p): """basictypename : identifier | handletype""" p[0] = p[1] def p_handletype(self, p): """handletype : HANDLE | HANDLE LANGLE NAME RANGLE""" if len(p) == 2: p[0] = p[1] else: if p[3] not in ('data_pipe_consumer', 'data_pipe_producer', 'message_pipe', 'shared_buffer'): # Note: We don't enable tracking of line numbers for everything, so we # can't use |p.lineno(3)|. raise ParseError(self.filename, "Invalid handle type %r:" % p[3], lineno=p.lineno(1), snippet=self._GetSnippet(p.lineno(1))) p[0] = "handle<" + p[3] + ">" def p_array(self, p): """array : ARRAY LANGLE typename RANGLE""" p[0] = p[3] + "[]" def p_fixed_array(self, p): """fixed_array : ARRAY LANGLE typename COMMA INT_CONST_DEC RANGLE""" value = int(p[5]) if value == 0 or value > _MAX_ARRAY_SIZE: raise ParseError(self.filename, "Fixed array size %d invalid:" % value, lineno=p.lineno(5), snippet=self._GetSnippet(p.lineno(5))) p[0] = p[3] + "[" + p[5] + "]" def p_associative_array(self, p): """associative_array : MAP LANGLE identifier COMMA typename RANGLE""" p[0] = p[5] + "{" + p[3] + "}" def p_interfacerequest(self, p): """interfacerequest : identifier AMP""" p[0] = p[1] + "&" def p_ordinal_1(self, p): """ordinal : """ p[0] = None def p_ordinal_2(self, p): """ordinal : ORDINAL""" value = int(p[1][1:]) if value > _MAX_ORDINAL_VALUE: raise ParseError(self.filename, "Ordinal value %d too large:" % value, lineno=p.lineno(1), snippet=self._GetSnippet(p.lineno(1))) p[0] = ast.Ordinal(value, filename=self.filename, lineno=p.lineno(1)) def p_enum(self, p): """enum : ENUM NAME LBRACE nonempty_enum_value_list RBRACE SEMI | ENUM NAME LBRACE nonempty_enum_value_list COMMA RBRACE SEMI""" p[0] = ast.Enum(p[2], p[4], filename=self.filename, lineno=p.lineno(1)) def p_nonempty_enum_value_list_1(self, p): """nonempty_enum_value_list : enum_value""" p[0] = ast.EnumValueList(p[1]) def p_nonempty_enum_value_list_2(self, p): """nonempty_enum_value_list : nonempty_enum_value_list COMMA enum_value""" p[0] = p[1] p[0].Append(p[3]) def p_enum_value(self, p): """enum_value : NAME | NAME EQUALS int | NAME EQUALS identifier_wrapped""" p[0] = ast.EnumValue(p[1], p[3] if len(p) == 4 else None, filename=self.filename, lineno=p.lineno(1)) def p_const(self, p): """const : CONST typename NAME EQUALS constant SEMI""" p[0] = ast.Const(p[3], p[2], p[5]) def p_constant(self, p): """constant : literal | identifier_wrapped""" p[0] = p[1] def p_identifier_wrapped(self, p): """identifier_wrapped : identifier""" p[0] = ('IDENTIFIER', p[1]) # TODO(vtl): Make this produce a "wrapped" identifier (probably as an # |ast.Identifier|, to be added) and get rid of identifier_wrapped. def p_identifier(self, p): """identifier : NAME | NAME DOT identifier""" p[0] = ''.join(p[1:]) def p_literal(self, p): """literal : int | float | TRUE | FALSE | DEFAULT | STRING_LITERAL""" p[0] = p[1] def p_int(self, p): """int : int_const | PLUS int_const | MINUS int_const""" p[0] = ''.join(p[1:]) def p_int_const(self, p): """int_const : INT_CONST_DEC | INT_CONST_HEX""" p[0] = p[1] def p_float(self, p): """float : FLOAT_CONST | PLUS FLOAT_CONST | MINUS FLOAT_CONST""" p[0] = ''.join(p[1:]) def p_error(self, e): if e is None: # Unexpected EOF. # TODO(vtl): Can we figure out what's missing? 
raise ParseError(self.filename, "Unexpected end of file") raise ParseError(self.filename, "Unexpected %r:" % e.value, lineno=e.lineno, snippet=self._GetSnippet(e.lineno)) def _GetSnippet(self, lineno): return self.source.split('\n')[lineno - 1] def Parse(source, filename): lexer = Lexer(filename) parser = Parser(lexer, source, filename) lex.lex(object=lexer) yacc.yacc(module=parser, debug=0, write_tables=0) tree = yacc.parse(source) return tree
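A small driver for Parse(), for orientation only: the .mojom text below is a made-up example, and it assumes this module is imported as part of its normal package so that the relative imports of ast, lexer and error resolve.

source = """
module sample;

struct Point {
  int32 x;
  int32 y;
};
"""

tree = Parse(source, "point.mojom")
# tree is an ast.Mojom: tree.module holds the module statement and
# tree.definition_list should contain a single ast.Struct for Point.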
Building upon its well-received Fader Series of studio monitors, Fluid Audio will unveil its newest monitor line, FPX (Fader Pro Coax), at the Winter NAMM Show, beginning with the FPX7. Featuring optimised class-AB amplifiers, shorting-ring magnet woofers and ribbon tweeters, Fluid, distributed in the UK by Hand in Hand Distribution, describes the FPX7 as offering "a truer, more accurate monitoring experience". The FPX Series will include two models: the FPX7 DSP, and a "more feature-rich version" to follow later, both with a 7” woofer and AMT ribbon tweeter. “Ribbon tweeters have certainly been used in studio monitors before – it’s a perfect use for them, actually – however, I’ve never been that impressed by previous executions of the idea," comments Fluid Audio founder Kevin Zuccaro. "I’ve always felt I could do a ribbon monitor that sort of set the standard for ribbon monitors, and I believe this coax idea does that. You get the lightning-fast response of the ribbon and the point-source goodness of being coaxial." The FPX Series will launch in March. See the FPX7 at booth 1171 in hall E.
## # Copyright 2013 Ghent University # # This file is part of EasyBuild, # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en), # with support of Ghent University (http://ugent.be/hpc), # the Flemish Supercomputer Centre (VSC) (https://vscentrum.be/nl/en), # the Hercules foundation (http://www.herculesstichting.be/in_English) # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en). # # http://github.com/hpcugent/easybuild # # EasyBuild is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation v2. # # EasyBuild is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>. ## """ EasyBuild support for Mothur, implemented as an easyblock @author: Kenneth Hoste (Ghent University) """ import glob import os import shutil from easybuild.easyblocks.generic.configuremake import ConfigureMake from easybuild.tools.build_log import EasyBuildError from easybuild.tools.modules import get_software_root class EB_Mothur(ConfigureMake): """Support for building and installing Mothur.""" def guess_start_dir(self): """Set correct start directory.""" # Mothur zip files tend to contain multiple directories next to the actual source dir (e.g. __MACOSX), # so the default start directory guess is most likely incorrect mothur_dirs = glob.glob(os.path.join(self.builddir, 'Mothur.*')) if len(mothur_dirs) == 1: self.cfg['start_dir'] = mothur_dirs[0] elif len(os.listdir(self.builddir)) > 1: # we only have an issue if the default guessing approach will not work raise EasyBuildError("Failed to guess start directory from %s", mothur_dirs) super(EB_Mothur, self).guess_start_dir() def configure_step(self, cmd_prefix=''): """Configure Mothur build by setting make options.""" # Fortran compiler and options self.cfg.update('buildopts', 'FORTAN_COMPILER="%s" FORTRAN_FLAGS="%s"' % (os.getenv('F77'), os.getenv('FFLAGS'))) # enable 64-bit build if not self.toolchain.options['32bit']: self.cfg.update('buildopts', '64BIT_VERSION=yes') # enable readline support if get_software_root('libreadline') and get_software_root('ncurses'): self.cfg.update('buildopts', 'USEREADLINE=yes') # enable MPI support if self.toolchain.options.get('usempi', None): self.cfg.update('buildopts', 'USEMPI=yes CXX="%s"' % os.getenv('MPICXX')) self.cfg.update('prebuildopts', 'CXXFLAGS="$CXXFLAGS -DMPICH_IGNORE_CXX_SEEK"') # enable compression if get_software_root('bzip2') or get_software_root('gzip'): self.cfg.update('buildopts', 'USE_COMPRESSION=yes') def install_step(self): """ Install by copying files to install dir """ srcdir = os.path.join(self.builddir, self.cfg['start_dir']) destdir = os.path.join(self.installdir, 'bin') srcfile = None try: os.makedirs(destdir) for filename in ['mothur', 'uchime']: srcfile = os.path.join(srcdir, filename) shutil.copy2(srcfile, destdir) except OSError, err: raise EasyBuildError("Copying %s to installation dir %s failed: %s", srcfile, destdir, err) def sanity_check_step(self): """Custom sanity check for Mothur.""" custom_paths = { 'files': ["bin/mothur"], 'dirs': [], } super(EB_Mothur, self).sanity_check_step(custom_paths=custom_paths)
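This easyblock is selected automatically for any easyconfig whose software name is Mothur. A sketch of what such an easyconfig might look like; the version, toolchain, source URL and dependency versions are illustrative assumptions, not values defined by the easyblock above:

# Hypothetical easyconfig, e.g. Mothur-1.30.2-goolf-1.4.10.eb
name = 'Mothur'
version = '1.30.2'

homepage = 'http://www.mothur.org/'
description = "Mothur: a suite of tools for the microbial ecology community."

toolchain = {'name': 'goolf', 'version': '1.4.10'}

source_urls = ['http://www.mothur.org/']  # placeholder URL
sources = ['%(name)s.%(version)s.zip']

dependencies = [
    ('libreadline', '6.2'),   # enables USEREADLINE=yes in the easyblock
    ('ncurses', '5.9'),
]

moduleclass = 'bio'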
It’s that time of year again – resolutions, diets, etc. – but most importantly it is the 2nd annual top stock competition. Last year at this time I got together with a group of bloggers and we all picked 4 stocks that we thought would perform the best in 2009. I ended up finishing in 5th place with a 35% return. This year I’m doing something different – I’m betting against gold. I can’t understand why gold is even considered a good investment, never mind why it got elevated to the prices it reached in 2009 ($1,226 was the peak).

The Wild Investor – 4 stocks to buy in 2010. But if you’re right, wow, you’ll be rocking… Have a good one!

When it comes to stock competitions I go for the “swing for the fences” strategy.

Double shorts aren’t the best choice for an investment horizon of one year. If there is a lot of volatility in the price of gold, these ETFs can go down in value even if the price of gold is lower at the end of 2010. If you are betting against gold, shorting CEF.A as one of your contest picks might work out better.

Dillon, you are absolutely right. I was allowed to short investments in this comp so I should have tried to use that.

Gotta love someone willing to go against the mold… and gold… Good luck.

Gold has been one of the best investments of the decade and yet it is still just half the price of its all-time 1980 high after adjusting for inflation. This has been the tenth year in a row that gold has increased substantially. The only reason the price of gold isn’t $3,000 right now is that central banks throughout the world have sold and leased gold into the market in an effort to manipulate the price of gold downward. But lately that has all changed, as central banks have not only stopped selling but have now turned buyers as the currencies of the world are inflated through reckless money printing. It’s a good thing that this is just a fun competition and not real money, because your picks will definitely be the worst choices of the bunch. You couldn’t have picked a worse year to be short gold. The fundamentals for gold to rise have never been so positive in history. This time next year the price of gold will be double what it is today, and what will your contest entry do for your credibility as a financial blog?

Don, as luck would have it, this financial blog has no credibility, so making bad stock picks is not an issue. You’ve done a lot of yappin’ about gold, but why don’t you share what you own? What percent of your portfolio is gold/gold companies?

The following stocks in my equity portfolio are all listed on the TSE. I have no USA stocks because I believe the current strength in the US dollar is a temporary move and the US dollar’s downtrend will continue in 2010 to new lows. take profits and increase my dividend-paying stocks. I believe precious metals, energy and other commodities will do well in 2010, as hard assets will be a popular choice for those who want to protect their assets from the inflation that is on its way. To diversify even further I have rental real estate that adds income and has also been increasing in value here in Nova Scotia. Sorry for the long post, but I did my best to reply to your questions.

Leveraged ETFs are bad products for holding periods longer than a few weeks. It would be a shame if you were right and your positions still failed. I bet that in fact you are right, and offer a side bet that a short ETF position will beat your juiced-up choices. Better still, properly chosen puts would blow away either short.
A few months ago, when gold was still around $1,030, I wrote about a call strategy that would return 4x your money if gold only moved to $1,250. Similar strategies can be created for a downside bet.

Joe – you are right. The problem is that I didn’t want to spend more than 5 minutes putting the portfolio together, which is why I might have ended up with less than stellar choices. FYI – the price of gold is now US$1,097. I’ll try to remember to check back at the end of the year.
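Dillon's point about double-short ETFs is easy to show with numbers: because the fund resets its -2x exposure every day, a choppy path can erode it even when gold finishes lower. A toy illustration with made-up prices (not real market data):

# Gold chops around and ends 5% lower; a daily -2x product still loses money.
gold = [100.0, 110.0, 99.0, 109.0, 98.0, 107.0, 95.0]   # hypothetical daily closes

etf = 100.0
for prev, cur in zip(gold, gold[1:]):
    daily_ret = cur / prev - 1.0
    etf *= 1.0 + (-2.0 * daily_ret)          # -2x the daily move, reset each day

print('gold change:    %+.1f%%' % (100 * (gold[-1] / gold[0] - 1.0)))   # -5.0%
print('-2x ETF change: %+.1f%%' % (100 * (etf / 100.0 - 1.0)))          # roughly -8%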
import sys from typing import ( List, Tuple, Union, NamedTuple ) import __init__ from libnano.seqrecord.seqrecordbase import ( AlphaEnum, ALPHABETS ) from libnano.seqstr import ( reverseComplement, complement, reverse ) from enum import IntEnum from strand import Strand from oligo import Oligo from dseq import DSeq NEWLINE_STR: str = '\r\n' if sys.platform == 'win32' else '\n' STR_OR_STRAND_T = Union[str, Strand] def str2Oligo(x: str) -> Tuple[bool, Oligo, Strand]: if isinstance(x, Strand): return False, x.oligo, x else: oligo = Oligo(x) return True, oligo, oligo.strand5p # end def StrandArray = NamedTuple('StrandArray', [ ('strands', List[Strand]), ('idx_offsets', List[int]), ('seq', List[str]) # only a List of one element for assigment since strings are immutable in Python ] ) class VHDirEnum(IntEnum): Forward: int = 0 Reverse: int = 1 @staticmethod def check(x: int): if x not in (0, 1): raise ValueError('{} not in value in VHDirEnum'.format(x)) class VirtualHelix: FWD: int = 0 REV: int = 1 def __init__(self, fwd_strands: List[Strand], fwd_idx_offsets: List[int], rev_strands: List[Strand], rev_idx_offsets: List[int], alphabet: int = AlphaEnum.DNA): ''' Args: fwd_strands: List of strands in the 5' to 3' direction rev_strands: List of strands in the 3' to 5' direction fwd_idx_offsets: List of integer offsets from the 5' end of the forward strand rev_idx_offsets: List of integer offsets from the 5' end of the forward strand alphabet: DNA or RNA ''' self.strand_arrays: Tuple[StrandArray, ...] = ( StrandArray(fwd_strands, fwd_idx_offsets, ['']), StrandArray(rev_strands, rev_idx_offsets, ['']) ) self.alphabet: int = alphabet fwd_gaps: list = [] # track the free space in the VirtualHelix fwd_endpoints: list = [] rev_gaps: list = [] # track the free space in the VirtualHelix rev_endpoints: list = [] # end def def oligos(self) -> Tuple[List[Oligo], List[Oligo]]: strand_arrays: Tuple[StrandArray, ...] 
= self.strand_arrays strands_f: List[Strand] = strand_arrays[0].strands strands_r: List[Strand] = strand_arrays[0].strands out = ([x.oligo for x in strands_f], [x.oligo for x in strands_r]) return out @staticmethod def reverseStrandArray( strands: List[Strand], idx_offset: List[int]) -> StrandArray: strands_out: List[Strand] = strands[::-1] seq_len: int = sum(len(x) for x in strands) x0: int = idx_offsets[0] + len(strands[0]) for x1, strand1 in zip(idx_offsets[1:], strands[1:]): check = x1 + len(strand1) assert(check >= x0) x0 = check total_length: int = x0 gener = zip(idx_offsets[::-1], strands_out) idx_offsets_out: List[int] = [total_length - (y + len(z) - 1) for y, z in gener] return StrandArray(strands_out, idx_offsets_out, ['']) # end def def get_seq(self, dir_idx: int, do_cache: bool = False) -> str: strand_array: StrandArray = self.strand_arrays[dir_idx] the_seq: str = strand_array.seq[0] if the_seq: return the_seq else: strand_array.strands strand_seq_list: List[str] = [] idx_last: int = 0 for strand, idx in zip(strand_array.strands, strand_array.idx_offsets): strand_seq_list.append(' '*(idx - idx_last)) seq = strand.seq strand_seq_list.append(seq) idx_last = idx + len(seq) the_seq: str = ''.join(strand_seq_list) if do_cache: strand_array.seq[0] = the_seq return the_seq # end def @property def fwd_strands(self) -> List[Strand]: return self.strand_arrays[self.FWD].strands @property def rev_strands(self) -> List[Strand]: return reverse(self.strand_arrays[self.REV].strands) def fwd_seq(self, do_cache=True): return self.get_seq(self.FWD, do_cache=do_cache) # end def def rev_seq(self, do_cache=True): return self.get_seq(self.REV, do_cache=do_cache) # end def def len_strands(self, dir_idx: int) -> int: '''Return the length of all the strands including the offsets while checking to make sure strands do not overlap ''' strand_array: StrandArray = self.strand_arrays[dir_idx] strands: List[Strand] = strand_array.strands idx_offsets: List[int] = strand_array.idx_offsets seq_len: int = sum(len(x) for x in strands) x0: int = idx_offsets[0] + len(strands[0]) for x1, strand1 in zip(idx_offsets[1:], strands[1:]): check = x1 + len(strand1) assert(check >= x0) x0 = check return x0 # end def def len_fwd(self) -> int: return self.len_strands(self.FWD) def len_rev(self) -> int: return self.len_strands(self.REV) def __str__(self) -> str: return '%s%s%s' % (self.fwd_seq(), NEWLINE_STR, reverse(self.rev_seq())) # end def def addForwardStrand(self, strand: Strand, offset: int): pass # end def # def addSequence(self, seq: str) -> Oligo: # pass def breakStrand(self, dir_idx: int, strand: Strand, idx: int) -> Tuple[Oligo, Oligo, Oligo]: '''Break a Strand in two and create two new Oligos to assign the all of the strands in the pre-existing Oligo to Args: dir_idx: is this on the forward [0] or reverse [1] direction of the :class:`VirtualHelix`. Also use :enum:`VHDirEnum` to get these idxs strand: :class:`Strand` object to break idx: index to break the strand at in terms of it's sequence Returns: Two new :class:`Oligo` objects of form:: Oligo_5p, Oligo_3p ''' VHDirEnum.check(dir_idx) strand_array = self.strand_arrays[dir_idx] vh_strands: List[Strand] = strand_array.strands if strand not in vh_strands: dir_name: str = VHDirEnum(dir_idx).name err: str = "Strand {} not in the {} StrandArray of the VirtualHelix" raise ValueError(err.format(strand, dir_name)) idx_offsets: List[int] = strand_array.idx_offsets seq: str = strand.seq oligo_old: Oligo = strand.oligo # 1. 
Do the 5' portion of the break oligo_break5p: Oligo = Oligo(seq[0:idx]) strand_break5p: Strand = oligo_break5p.strand5p neighbor_5p: Strand = strand.strand5p if neighbor_5p is not None: # update existing neighbor oligos strand_break5p.strand5p = neighbor_5p neighbor_5p.strand3p = strand_break5p for seg in neighbor_5p.gen5p(): seg.oligo = oligo_break5p # 2. Do the 3' portion of the break oligo_break3p: Oligo = Oligo(seq[idx:]) strand_break3p: Strand = oligo_break3p.strand5p neighbor_3p: Strand = strand.strand3p if neighbor_3p is not None: # update existing neighbor oligos strand_break3p.strand3p = neighbor_3p neighbor_3p.strand5p = strand_break3p for seg in neighbor_3p.gen3p(): seg.oligo = oligo_break3p # 3. Update the strands list_idx: int = vh_strands.index(strand) offset_5p: int = idx_offsets[list_idx] list_idx_plus_1: int = list_idx + 1 vh_strands.insert(list_idx_plus_1, strand_break3p) vh_strands.insert(list_idx_plus_1, strand_break5p) idx_offsets.insert(list_idx_plus_1, offset_5p + len(strand_break5p)) vh_strands.pop(list_idx) # pop out the original strand return oligo_break5p, oligo_break3p # end def def __add__(self, b: 'VirtualHelix') -> 'VirtualHelix': '''(1) Concatenates the forward strand with forward strand and the reverse strand with the reverse strand and preserves order (2) Realligns the two :class:`VirtualHelix` objects involved Args: b: a :class:`VirtualHelix` object Returns: a :class:`VirtualHelix` object Raises: ValueError, TypeError ''' if isinstance(b, VirtualHelix): type3, seq3 = self.three_prime_end() type5, seq5 = b.five_prime_end() if type3 == type5 and len(seq3) == len(seq5): if seq3 != reverseComplement(seq5): raise TypeError("Ends not complimentary") fwd = self.fwd + b.fwd rev = self.rev + b.rev return VirtualHelix(fwd, rev, self.overhang) else: raise TypeError("Ends not compatible") else: raise ValueError("{} object not a DSeq".format(b)) # end def def five_prime_end(self) -> Tuple[int, str]: '''Return what kind of end is overhanging the 5' end of the forward strand ''' fwd_idx0: int = self.fwd_idx_offsets[0] rev_idx0: int = self.rev_idx_offsets[-1] res, val = five_prime_type(self.alignment, self.fwd, self.rev) return res, val # end def def three_prime_end(self) -> Tuple[int, str]: '''Return what kind of end is overhanging the 3' end of the forward strand ''' res, val = three_prime_type(self.alignment, self.fwd, self.rev) return res, val # end def # end class def DSeqVH( fwd: str, rev: str = None, overhang: int = None, alphabet: int = AlphaEnum.DNA) -> VirtualHelix: '''Helper function for creating :class:`VirtualHelix` in the style of the :class:`DSeq` with strings ''' dseq: DSeq = DSeq(fwd, rev, overhang, alphabet) overhang: int = dseq.overhang if overhang > 0: fwd_idx_offsets = [overhang] rev_idx_offsets = [0] else: fwd_idx_offsets = [0] rev_idx_offsets = [overhang] oligo_fwd = Oligo(fwd) if rev is None: rev = reverseComplement(fwd) oligo_rev = Oligo(rev) return VirtualHelix([oligo_fwd.strand5p], fwd_idx_offsets, [oligo_rev.strand5p], rev_idx_offsets) # end def if __name__ == '__main__': fwd = 'GGTCTCGAATTCAAA' oligo_fwd = Oligo(fwd) rev = 'TTTGAATTCGAGACC' oligo_rev = Oligo(rev) BsaI_vh = VirtualHelix( [oligo_fwd.strand5p], [0], [oligo_rev.strand5p], [0]) print("1.\n%s" % BsaI_vh) BsaI_vh = DSeqVH(fwd, rev, 0) print("2.\n%s" % BsaI_vh) print(BsaI_vh.fwd_strands) BsaI_vh = DSeqVH(fwd) print("3.\n%s" % BsaI_vh) print("Da Oligos", BsaI_vh.oligos()) strand0 = BsaI_vh.fwd_strands[0] print(strand0.oligo) broken_oligos = BsaI_vh.breakStrand(dir_idx=0, 
strand=strand0, idx=4) import pprint pprint.pprint(broken_oligos) print(BsaI_vh.len_fwd()) bonus_oligo = Oligo('ACGT') try: BsaI_vh.breakStrand(dir_idx=1, strand=bonus_oligo.strand5p, idx=2) except ValueError: print("Handled bad strand successfully")
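A tiny sanity check of what the breakStrand() call in the demo above should produce, based purely on the slicing inside the method (seq[0:idx] and seq[idx:]); this sketch is stdlib-only and does not touch the classes themselves.

fwd = 'GGTCTCGAATTCAAA'
idx = 4

expected_5p = fwd[0:idx]    # 'GGTC'        -> sequence of the new 5' oligo
expected_3p = fwd[idx:]     # 'TCGAATTCAAA' -> sequence of the new 3' oligo

assert expected_5p + expected_3p == fwd
print(expected_5p, expected_3p)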
#!/usr/bin/python
import rospy

import map_manager
import map_communication
from markers import MarkersPublisher
from occupancy import OccupancyGenerator
from ai_game_manager import StatusServices


class MapNode():
    def __init__(self):
        rospy.init_node("map", log_level=rospy.INFO)
        rospy.logdebug("Started /memory/map node.")

        map_manager.Map.load()

        # Starting and publishing the table STL to RViz
        self.markers = MarkersPublisher()

        # Generate static occupancy images for pathfinder, etc.
        occupancy = OccupancyGenerator()

        # Starting service handlers (Get, Set, Transfer, GetOccupancy)
        map_communication.MapServices(occupancy)
        rospy.logdebug("[memory/map] Map request servers ready.")

        # Tell ai/game_manager the node initialized successfully.
        StatusServices("memory", "map").ready(True)

        self.run()

    def run(self):
        r = rospy.Rate(5)
        while not rospy.is_shutdown():
            if rospy.has_param("/current_team"):
                map_manager.Map.swap_team(rospy.get_param("/current_team"))
            self.markers.updateMarkers(map_manager.Map)
            r.sleep()


if __name__ == "__main__":
    MapNode()
Exterior lighting fixtures for room are presently accessible for you to choose. To be sure, when you need to apply best kind of home structure, you need to ensure that you do your best with it. In reality, best idea of home structure is presently accessible for you to choose. You simply need to ensure that you do your best with it. By choosing the best one, you can at last acquire the things that you require. As should be obvious that it is accessible with much preferred standpoint, you might need to utilize it starting now and into the foreseeable future. With the nearness of best home plan that can be utilized whenever you need, don’t hesitate to utilize it to your benefit. The best choice of item can make you feel cheerful such a great amount with it. Exterior lighting fixtures will have the capacity to give you the things that you require. Consequently, you need to ensure that you do your best with it. As it is obvious it is accessible with numerous alternatives in it. By picking the best alternative of item, you can perceive how it can show up of your home ends up astounding in a moment. The facts confirm that there are numerous advantages that you can discover in there. So don’t hesitate to utilize it to your benefit immediately. With the nearness of best item, you can discover genuine component of joy that you require. Exterior lighting fixtures would now be able to give you best determination of item that has great quality in it. With these furnishings, you would now be able to influence your home to seem stunning. Likewise, you can include a few qualities that you need for the house. The time has come to structure the look of your home with something astonishing. At that point, you can feel the genuine satisfaction through it. Ensure you don’t waver of picking best item that you need immediately.
# Copyright (c) 2013, Ricardo Andrade # Licensed under the BSD 3-clause license (see LICENSE.txt) import numpy as np from ..core import SparseGP from .. import likelihoods from .. import kern from ..inference.latent_function_inference import EPDTC from copy import deepcopy class SparseGPClassification(SparseGP): """ Sparse Gaussian Process model for classification This is a thin wrapper around the sparse_GP class, with a set of sensible defaults :param X: input observations :param Y: observed values :param likelihood: a GPy likelihood, defaults to Bernoulli :param kernel: a GPy kernel, defaults to rbf+white :param inference_method: Latent function inference to use, defaults to EPDTC :type inference_method: :class:`GPy.inference.latent_function_inference.LatentFunctionInference` :param normalize_X: whether to normalize the input data before computing (predictions will be in original scales) :type normalize_X: False|True :param normalize_Y: whether to normalize the input data before computing (predictions will be in original scales) :type normalize_Y: False|True :rtype: model object """ def __init__(self, X, Y=None, likelihood=None, kernel=None, Z=None, num_inducing=10, Y_metadata=None, mean_function=None, inference_method=None, normalizer=False): if kernel is None: kernel = kern.RBF(X.shape[1]) if likelihood is None: likelihood = likelihoods.Bernoulli() if Z is None: i = np.random.permutation(X.shape[0])[:num_inducing] Z = X[i].copy() else: assert Z.shape[1] == X.shape[1] if inference_method is None: inference_method = EPDTC() SparseGP.__init__(self, X, Y, Z, kernel, likelihood, mean_function=mean_function, inference_method=inference_method, normalizer=normalizer, name='SparseGPClassification', Y_metadata=Y_metadata) @staticmethod def from_sparse_gp(sparse_gp): from copy import deepcopy sparse_gp = deepcopy(sparse_gp) SparseGPClassification(sparse_gp.X, sparse_gp.Y, sparse_gp.Z, sparse_gp.kern, sparse_gp.likelihood, sparse_gp.inference_method, sparse_gp.mean_function, name='sparse_gp_classification') def to_dict(self, save_data=True): """ Store the object into a json serializable dictionary :param boolean save_data: if true, it adds the data self.X and self.Y to the dictionary :return dict: json serializable dictionary containing the needed information to instantiate the object """ model_dict = super(SparseGPClassification,self).to_dict(save_data) model_dict["class"] = "GPy.models.SparseGPClassification" return model_dict @staticmethod def _build_from_input_dict(input_dict, data=None): input_dict = SparseGPClassification._format_input_dict(input_dict, data) input_dict.pop('name', None) # Name parameter not required by SparseGPClassification return SparseGPClassification(**input_dict) @staticmethod def from_dict(input_dict, data=None): """ Instantiate an SparseGPClassification object using the information in input_dict (built by the to_dict method). :param data: It is used to provide X and Y for the case when the model was saved using save_data=False in to_dict method. :type data: tuple(:class:`np.ndarray`, :class:`np.ndarray`) """ import GPy m = GPy.core.model.Model.from_dict(input_dict, data) from copy import deepcopy sparse_gp = deepcopy(m) return SparseGPClassification(sparse_gp.X, sparse_gp.Y, sparse_gp.Z, sparse_gp.kern, sparse_gp.likelihood, sparse_gp.inference_method, sparse_gp.mean_function, name='sparse_gp_classification') def save_model(self, output_filename, compress=True, save_data=True): """ Method to serialize the model. 
:param string output_filename: Output file :param boolean compress: If true compress the file using zip :param boolean save_data: if true, it serializes the training data (self.X and self.Y) """ self._save_model(output_filename, compress=True, save_data=True) class SparseGPClassificationUncertainInput(SparseGP): """ Sparse Gaussian Process model for classification with uncertain inputs. This is a thin wrapper around the sparse_GP class, with a set of sensible defaults :param X: input observations :type X: np.ndarray (num_data x input_dim) :param X_variance: The uncertainty in the measurements of X (Gaussian variance, optional) :type X_variance: np.ndarray (num_data x input_dim) :param Y: observed values :param kernel: a GPy kernel, defaults to rbf+white :param Z: inducing inputs (optional, see note) :type Z: np.ndarray (num_inducing x input_dim) | None :param num_inducing: number of inducing points (ignored if Z is passed, see note) :type num_inducing: int :rtype: model object .. Note:: If no Z array is passed, num_inducing (default 10) points are selected from the data. Other wise num_inducing is ignored .. Note:: Multiple independent outputs are allowed using columns of Y """ def __init__(self, X, X_variance, Y, kernel=None, Z=None, num_inducing=10, Y_metadata=None, normalizer=None): from GPy.core.parameterization.variational import NormalPosterior if kernel is None: kernel = kern.RBF(X.shape[1]) likelihood = likelihoods.Bernoulli() if Z is None: i = np.random.permutation(X.shape[0])[:num_inducing] Z = X[i].copy() else: assert Z.shape[1] == X.shape[1] X = NormalPosterior(X, X_variance) SparseGP.__init__(self, X, Y, Z, kernel, likelihood, inference_method=EPDTC(), name='SparseGPClassification', Y_metadata=Y_metadata, normalizer=normalizer) def parameters_changed(self): #Compute the psi statistics for N once, but don't sum out N in psi2 self.psi0 = self.kern.psi0(self.Z, self.X) self.psi1 = self.kern.psi1(self.Z, self.X) self.psi2 = self.kern.psi2n(self.Z, self.X) self.posterior, self._log_marginal_likelihood, self.grad_dict = self.inference_method.inference(self.kern, self.X, self.Z, self.likelihood, self.Y, self.Y_metadata, psi0=self.psi0, psi1=self.psi1, psi2=self.psi2) self._update_gradients()
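Since the class above is meant to be used like any other GPy model, a minimal usage sketch on a toy binary-classification problem may help; the dataset, the optimize() call and the predict() call are assumptions based on the usual GPy model interface rather than anything defined in this module.

import numpy as np
import GPy

# Toy 1-D binary classification data (purely illustrative).
X = np.random.rand(100, 1)
Y = (X > 0.5).astype(float)           # labels in {0, 1}, shape (100, 1)

# Defaults from the constructor above: RBF kernel, Bernoulli likelihood,
# EPDTC inference and 10 randomly chosen inducing points.
m = GPy.models.SparseGPClassification(X, Y, num_inducing=10)
m.optimize()                          # standard GPy optimization entry point (assumed)

probs, _ = m.predict(X[:5])           # predictive class probabilities for a few inputs
print(probs)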
NCERT Solutions for Class 7 Social Science, NCERT Solutions for class 7 Geography, Our Changing Earth - Class 7th NCERT Solutions Geography, Chapter 3 - Our Changing Earth - NCERT - Class 7 – Geography, NCERT Solutions for Class 7th: Ch 3 Our Changing Earth Geography, Class VII Geography Notes and study material for Our Changing Earth, Social Science (Sst) –Geography - Class 7 (CBSE/NCERT) - Chapter 3 – Our Changing Earth - Questions and Answers/Notes/Worksheets – 1 Tags: CBSE Class 7 - Geography – Chapter 3 - Our Changing Earth Practice Pages, Extra Question and Answer based on NCERT for Class 7th, Social Science Geography, CBSE Grade VII free Worksheets PDF on Our Changing Earth, Geography Question bank on Our Changing Earth for seventh standard, True/False, Fill in the blanks, What are the major agents of erosion? Define loess? What is vent? What is a seismograph? What is the name of the scale used to measure earthquakes? Write some examples of coastal landforms? Write names of a few rivers of the world that form a delta. What are distributaries? Why do the plates move? How are beaches formed? What is erosion? What are meanders? What are Lithospheric plates? How much do lithospheric plates move in a year? What is a volcano? What are the two processes which wear away the landscape? How do glacial moraines form? i. Mushroom rocks are found in deserts. ii. Ox bow lakes are found in river valleys. iii. Glaciers are “rivers” of ice. iv. The highest waterfall is Angel Falls of Venezuela in South America. v. Sudden movements in the earth interior are cause due to endogenic forces. Ans. Water, wind and ice are the major agents of erosion. Ans. When sand is deposited in large areas, it is called loess. Ans. The narrow opening of a volcano is called vent. Ans. An earthquake is measured with a machine called a seismograph. Ans. The magnitude of the earthquake is measured on the Richter scale. Ans. Examples of coastal landforms are sea caves, sea arches, stacks and sea cliff. Ans. The river begins to break up into a number of streams called distributaries. Ans. Plates move because of the movement of the molten magma inside the earth. Ans. Beaches are formed when the sea waves deposit sediments along the shores. Ans. Erosion is the wearing away of the landscape by different agents like water, wind and ice. Ans. As the river enters the plain it twists and turns forming large bends known as meanders. Ans. The lithosphere is broken into a number of plates known as the Lithospheric plates. Ans. Lithospheric plates move around very slowly – just a few millimetres each year. Ans. A volcano is a vent (opening) in the earth’s crust through which molten material erupts suddenly. Ans. The landscape is being continuously worn away by two processes – weathering and erosion. Ans. The material carried by the glacier such as rocks big and small, sand and silt gets deposited. These deposits form glacial moraines.
# -*- coding: utf-8 -*- # Copyright 2010-2013 Kolab Systems AG (http://www.kolabsys.com) # # Jeroen van Meeuwen (Kolab Systems) <vanmeeuwen a kolabsys.com> # # This program is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see <http://www.gnu.org/licenses/>. # import pykolab from pykolab.translate import _ conf = pykolab.getConf() log = pykolab.getLogger('pykolab.plugins.dynamicquota') class KolabDynamicquota(object): """ Example plugin making quota adjustments given arbitrary conditions. """ def __init__(self): pass def add_options(self, *args, **kw): pass def set_user_folder_quota(self, *args, **kw): """ The arguments passed to the 'set_user_folder_quota' hook: - used (integer, in KB) - imap_quota (current imap quota obtained from IMAP, integer, in KB) - ldap_quota (current LDAP quota obtained from LDAP, integer, in KB) - default_quota (integer, in KB) Returns: - None - an error has occurred and this plugin doesn't care. - Negative 1 - remove quota. - Zero - Absolute 0. - Positive Integer - set new quota. """ for keyword in [ 'used', 'imap_quota', 'ldap_quota', 'default_quota' ]: if not kw.has_key(keyword): log.warning( _("No keyword %s passed to set_user_folder_quota") % ( keyword ) ) return else: try: if not kw[keyword] == None: kw[keyword] = (int)(kw[keyword]) except: log.error(_("Quota '%s' not an integer!") % (keyword)) return # Escape the user without quota if kw['ldap_quota'] == None: return kw['default_quota'] elif kw['ldap_quota'] == -1: return -1 elif kw['ldap_quota'] > 0: return kw['ldap_quota'] else: return kw['default_quota']
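To make the branching at the end of the hook concrete, here is a hypothetical direct invocation with made-up values (all in KB); in practice the plugin is called by pykolab's plugin machinery rather than by hand.

plugin = KolabDynamicquota()

# LDAP quota set to a positive value: that value wins.
print(plugin.set_user_folder_quota(used=1024, imap_quota=2048,
                                   ldap_quota=4096, default_quota=8192))   # -> 4096

# LDAP quota unset (None): fall back to the default quota.
print(plugin.set_user_folder_quota(used=1024, imap_quota=2048,
                                   ldap_quota=None, default_quota=8192))   # -> 8192

# LDAP quota of -1: the quota should be removed.
print(plugin.set_user_folder_quota(used=1024, imap_quota=2048,
                                   ldap_quota=-1, default_quota=8192))     # -> -1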
Our Outsourced Credit Union Accounting department continues to grow! Our newest employee is Chika Bynum, Accountant I. She moved here from Chicago last winter. Let's learn more about her: Chika moved to Denver from Chicago, where she was born and raised, in late January 2016. She attended HolyOak Community College and Kennedy-King College, studying a variety of topics. Before moving to Denver, Chika worked at Check Smart Financial and managed the ACH processing, draft and loss statements. Here at CU Service Network, Chika handles ACHs, shared draft processing, bill pay, and does the daily processing for our client Latah CU in Idaho. Welcome to the team, Chika! You can see the rest of our team here.

You asked for it, and we are launching it! In October, we will be launching a brand new service designed specifically for our credit unions' needs. If you recall, last May we partnered with Edge Consultancy to conduct research on our credit unions' pain points. The survey results illustrated clearly that our credit unions needed help in certain areas. As you can see, there are seven key areas we have been investigating for nearly six months. Although there was interest in all seven areas, a few stood out among the rest. We asked, you answered, and we listened. Stay tuned over the next month for our exciting news!

Have you heard that our Denver credit unions Sooper CU and Denver Community CU have agreed to merge? Sooper CU President/CEO Dan Kester commented, "We see a lot of synergies for our members and employees in this partnership. Convenience will be enhanced with the addition of branches. This partnership is the next step, a foundational step toward our larger goal of providing our members with a broader platform of products and services." "Both credit unions are very strong financially. It is a merger of opportunity focused on providing value to our members," Denver Community CU President/CEO Carla Hedrick said. "We are both state-chartered, with similar cultures and philosophies, and we have worked very collaboratively for many years." CU Service Network has a rich history with both credit unions, as CEO Kester served on our board for many years, and CEO Hedrick currently serves as our Board Chairman. Read the full article on CUTimes here.

Our partner Insuritas is hosting a webinar on Member Insurance "SmartCART" technology, and we are sending the invite along to you. If you have been thinking about offering an insurance solution for your membership, or are just interested in how this technology works, feel free to sign up and learn a little more. Insuritas is highly rated in the credit union field - they have won the NAFCU Innovation Award, the credit union industry's most recognized award competition for groundbreaking solutions, for the past two years in a row!

>>Webinar Details:
>>Webinar Date: October 20th
>>Time: 2:00 pm
>>Title: SmartCART Technology: How this Digital Insurance Platform is Affecting US Financial Institutions
>>Summary: SmartCART, the most revolutionary way to purchase insurance today! It is the world's only insurance technology platform that allows consumers to shop, compare and buy a rapidly growing suite of insurance products in a one-stop shopping, check-out, binding, issuing and payment experience. It is changing how FIs make the insurance purchase process easier for their customers and members while realizing recurring income. It's a win-win for all involved!
We don't have the webinar link just yet, but let us know if you are interested by clicking here, or be on the lookout for future emails about this upcoming webinar.
__author__ = 'anicca' # core import math import sys from itertools import izip # 3rd party from PIL import Image, ImageChops import argh def dhash(image, hash_size=8): # Grayscale and shrink the image in one step. image = image.convert('L').resize( (hash_size + 1, hash_size), Image.ANTIALIAS, ) pixels = list(image.getdata()) # Compare adjacent pixels. difference = [] for row in range(hash_size): for col in range(hash_size): pixel_left = image.getpixel((col, row)) pixel_right = image.getpixel((col + 1, row)) difference.append(pixel_left > pixel_right) # Convert the binary array to a hexadecimal string. decimal_value = 0 hex_string = [] for index, value in enumerate(difference): if value: decimal_value += 2 ** (index % 8) if (index % 8) == 7: hex_string.append(hex(decimal_value)[2:].rjust(2, '0')) decimal_value = 0 return ''.join(hex_string) def rosetta(image1, image2): i1 = Image.open(image1) i2 = Image.open(image2) assert i1.mode == i2.mode, "Different kinds of images." print i1.size, i2.size assert i1.size == i2.size, "Different sizes." pairs = izip(i1.getdata(), i2.getdata()) if len(i1.getbands()) == 1: # for gray-scale jpegs dif = sum(abs(p1 - p2) for p1, p2 in pairs) else: dif = sum(abs(c1 - c2) for p1, p2 in pairs for c1, c2 in zip(p1, p2)) ncomponents = i1.size[0] * i1.size[1] * 3 retval = (dif / 255.0 * 100) / ncomponents return retval def rmsdiff_2011(im1, im2): "Calculate the root-mean-square difference between two images" im1 = Image.open(im1) im2 = Image.open(im2) diff = ImageChops.difference(im1, im2) h = diff.histogram() sq = (value * (idx ** 2) for idx, value in enumerate(h)) sum_of_squares = sum(sq) rms = math.sqrt(sum_of_squares / float(im1.size[0] * im1.size[1])) return rms def main(image_filename1, image_filename2, dhash=False, rosetta=False, rmsdiff=False): pass if __name__ == '__main__': argh.dispatch_command(main)
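main() above is still a stub, so the following sketch shows one way the pieces could be combined: hash two images with dhash() and compare the hex digests with a Hamming distance. The helper and the file names are assumptions, not part of the original script.

def hamming_distance(hash1, hash2):
    # Bitwise comparison of the two hex digests; assumes equal-length hashes.
    bits1 = bin(int(hash1, 16))[2:].zfill(len(hash1) * 4)
    bits2 = bin(int(hash2, 16))[2:].zfill(len(hash2) * 4)
    return sum(b1 != b2 for b1, b2 in zip(bits1, bits2))

# Hypothetical usage with two placeholder files:
# img_a = Image.open('a.jpg')
# img_b = Image.open('b.jpg')
# print hamming_distance(dhash(img_a), dhash(img_b))   # small distance = similar images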
Offer Details: Use the coupon code to buy this HP X1500 USB 2.0 Optical Mouse, worth Rs. 429, at Rs. 262 only. Cashback will be credited to your Paytm wallet.
#!/usr/bin/env python # -*- coding: utf-8 -*- import paramiko import os import transmissionrpc import getpass import sys import traceback from termcolor import colored from stat import * user = "xbian" host = "zurawina.local" ssh_username = user ssh_server = host ssh_port = 22 transmission_user = user transmission_host = host transmission_port = 9091 transmission_timeout = 180 download_dir="downloads" class SCP_Client: scp = '' ssh = '' def __init__(self): self.ssh = paramiko.SSHClient() self.ssh.load_system_host_keys() try: self.ssh.connect(ssh_server, username=ssh_username, look_for_keys=True, port=ssh_port); self.scp = paramiko.SFTPClient.from_transport(self.ssh.get_transport()) except socket.error: print "Error al conectar con %@" % ssh_server def prepare_dir(self, path): os.mkdir(path) os.chdir(path) def get_recursive(self, path, extra_path=" "): self.scp.chdir(path) for file in self.scp.listdir("."): print extra_path+"/"+file file_stat = self.scp.lstat(file) if S_ISDIR(file_stat.st_mode): os.mkdir(file) os.chdir(file) self.get_recursive(file, extra_path+"/"+file) self.scp.chdir("..") os.chdir("..") else: self.scp.get(file, file) def get_resource(self, resource): resource_stat = self.scp.lstat(download_dir+"/"+resource) resource=resource.replace(" ", "\\ ").replace("[", "\\[").replace("]", "\\]") resource=resource.replace("(", "\\(").replace(")", "\\)") if S_ISDIR(resource_stat.st_mode): return self.get_directory(resource) else: return self.get_file(resource) def get_directory(self, path): self.prepare_dir(path) scp_copy="scp -r \"%s@%s:~/%s/%s/*\" ." % (ssh_username, ssh_server, download_dir, path) status = os.system(scp_copy) os.chdir("..") return status def get_file(self, file): scp_copy="scp \"%s@%s:~/%s/%s\" ." % (ssh_username, ssh_server, download_dir, file) return os.system(scp_copy) def close(self): self.scp.close() self.ssh.close() class Transmission_Client(): connected = colored("Connected", 'green', attrs=['dark']) error = colored("FAIL", 'red', attrs=['dark']) thinking = colored("...", 'white', attrs=['dark']) copy = colored("Downloaded and ready to copy", 'cyan', attrs=['dark']) delete = colored("Copied and deleted", 'green', attrs=['dark']) transmission_password = "" transmission = "" def __init__(self): self.transmission_password = getpass.getpass("Enter your password [%s@%s:%s]:" % (transmission_user, transmission_host, transmission_port)) def print_info(self, msg, status): print "%-100s [%s]" % (msg, status) def connect(self): self.print_info("Connecting to %s:%s" % (transmission_host, transmission_port), self.thinking) try: self.transmission = transmissionrpc.Client(transmission_host, port=transmission_port, user=transmission_user, password=self.transmission_password, timeout=transmission_timeout) self.print_info("Connecting to %s:%s" % (transmission_host, transmission_port), self.connected) except: self.print_info("Connecting to %s:%s" % (transmission_host, transmission_port), self.error) sys.exit(0) def get_torrents(self, scp_client): for torrent in self.transmission.get_torrents(timeout=transmission_timeout): if torrent.doneDate != 0: self.print_info(torrent.name, self.copy) if (scp_client.get_resource(torrent.name) == 0): self.transmission.remove_torrent(torrent.id, delete_data=True) self.print_info(torrent.name, self.delete) else: self.print_info(torrent.name, self.error) else: downloading_text = "Downloading "+ str(torrent.percentDone*100)+"%" self.print_info(torrent.name, colored(downloading_text, 'cyan', attrs=['dark'])) transmission = Transmission_Client() 
transmission.connect() scp = SCP_Client() transmission.get_torrents(scp) scp.close()
IPL Photo Rejuvenation, also known as skin rejuvenation or facial rejuvenation, is a safe and effective way of improving the colour and texture of your skin. Here at Skin Care Australia Clinics we use medical grade Palomar IPL machines. They are trusted and proven throughout the world, and are the gold standard against which other IPL machines are measured.

As far as the skin is concerned there are two types of ageing: intrinsic and extrinsic. Intrinsic ageing happens as we grow older: our skin naturally becomes thinner and drier, and is less elastic due to the diminished amount of collagen. What used to bounce back begins to sag, and deep wrinkles may begin appearing. The rate of these events is genetically determined for each person, and the process first becomes noticeable between the ages of 30 and 35.

Extrinsic ageing results from exposure to the environment and is the critical element in determining who looks older or younger than their biological age. Exposure to sunlight is a key contributor. Photoageing occurs when elastin and collagen break down and are not replaced. As a result, fine lines and wrinkles intensify. Photoageing also causes pigment changes with the development of age spots (sun-induced freckles) and uneven skin tone. Spider veins and dilated capillaries are another sign of photoaged skin. Lifestyle choices also have an impact on extrinsic ageing: cigarette smoking and excessive alcohol consumption contribute to the breakdown of elastin and collagen and, as a result, impair the body's healing capacity.

IPL is effective on red blemishes that originate from blood vessels, as well as brown pigment blemishes in the form of freckles, age spots and other sun damage. Photo rejuvenation treatment is quick and relatively pain free. Our experienced team will assess the skin to be treated, then systematically move the hand piece over the selected area. At the end of the treatment the skin will feel warm, just like after mild sun exposure. The area may look slightly red for several hours, and it is normal for brown pigmented lesions to look darker for a few days.

On average, two to three rejuvenation treatments are recommended. These will be scheduled at 2-4 week intervals. Each session usually lasts about 10-30 minutes. Patients can return to normal activity immediately afterwards. IPL treatments provide gradual, natural improvement with excellent long-term results.
import logging from alvi.tests.test_client.base import TestContainer import alvi.tests.pages as pages logger = logging.getLogger(__name__) class TestTree(TestContainer): options = dict(n=7, parents="0, 0, 1, 1, 4, 4") def test_create_node(self): page = pages.Tree(self._browser.driver, "TreeCreateNode") page.run(options=TestTree.options) self.assertEqual(7, len(page.svg.nodes), "create_node does not work properly") node_data = sorted(page.svg.node_data, key=lambda d: d['id']) expected = [ {'name': 0, 'id': 0, 'parent': 0}, {'name': 1, 'id': 1, 'parent': 0}, {'name': 2, 'id': 2, 'parent': 0}, {'name': 3, 'id': 3, 'parent': 1}, {'name': 4, 'id': 4, 'parent': 1}, {'name': 5, 'id': 5, 'parent': 4}, {'name': 6, 'id': 6, 'parent': 4} ] self.assertEqual(expected, node_data, "create_node does not work properly") def test_append_and_insert(self): page = pages.Tree(self._browser.driver, "TreeAppendAndInsert") page.run(options=TestTree.options) self.assertEqual(7, len(page.svg.nodes), "create_node does not work properly") node_data = sorted(page.svg.node_data, key=lambda d: d['id']) expected = [ {'id': 0, 'parent': 0, 'name': 0}, {'id': 1, 'parent': 0, 'name': 1}, {'id': 2, 'parent': 0, 'name': 2}, {'id': 3, 'parent': 2, 'name': 3}, {'id': 4, 'parent': 2, 'name': 4}, {'id': 5, 'parent': 2, 'name': 5}, {'id': 6, 'parent': 4, 'name': 6} ] self.assertEqual(expected, node_data, "insert or append does not work properly") def test_marker(self): page = pages.Tree(self._browser.driver, "TreeMarker") page.run(options=TestTree.options) self.assertEqual(7, len(page.svg.nodes), "create_node does not work properly") self.assertEqual(2, len(page.svg.markers), "create_marker does not work properly") marker0_color = page.svg.markers[0].value_of_css_property('fill') marker1_color = page.svg.markers[1].value_of_css_property('fill') marked = [n for n in page.svg.nodes if n.value_of_css_property('fill') == marker0_color] self.assertEquals(len(marked), 1, "create_marker does not work properly") marked = [n for n in page.svg.nodes if n.value_of_css_property('fill') == marker1_color] self.assertEquals(len(marked), 1, "create_marker or move_marker does not work properly") def test_multi_marker(self): page = pages.Tree(self._browser.driver, "TreeMultiMarker") page.run(options=TestTree.options) self.assertEqual(7, len(page.svg.nodes), "create_node does not work properly") self.assertEqual(1, len(page.svg.markers), "create_multi_marker does not work properly") marker_color = page.svg.markers[0].value_of_css_property('fill') marked = [n for n in page.svg.nodes if n.value_of_css_property('fill') == marker_color] self.assertEquals(len(marked), 2, "multi_marker_append or multi_marker_remove does not work properly")
[prMac.com] American Fork, Utah - Izatt Apps is proud to announce the 3.0.22 release of its flagship app, MileBug - Mileage Log, which adds a bit of polish to new MileBug Cloud and updated UI. MileBug Cloud is not iCloud, but a Parse.com cloud solution that will allow iOS, Android, and Windows phones to share and sync data. While iOS is first, the other platforms will soon follow. MileBug Cloud is available as a $.99 (USD) In-App Purchase. The user then creates a MileBug Cloud account which can be accessed on any other supported device at no additional charge. MileBug is a beautiful mile tracker app that makes it easy to track your tax-deductible mileage on your iPhone, iPod touch, and iPad devices. MileBug first hit the Apple App Store in August 2008. At the time, MileBug was one of less than 2700 apps in the store. By January 2010 it had risen to be a Top 10 Finance app, where it has regularly appeared since. In January 2014, MileBug hit the #1 Finance spot for the first time and has returned there over a dozen times since. MileBug didn't always have GPS, but started out as a simple data collector. After several updates to the 1.x version, GPS tracking was introduced in the 2.0 version first released in May 2011. And now there is full map display and path tracking! This next great addition of the MileBug Cloud will provide data storage, data syncing, and cross-platform data sharing. MileBug - Mileage Log & Expense Tracker for Tax Deduction is $2.99 (USD) and available worldwide through the Apple App Store in the Finance category, Google Play in Finance, Amazon App Store in Finance, and the Windows Marketplace in Business. Izatt International is a mobile application (iOS, Android) development company in American Fork, UT. Izatt develops applications internally and accepts outside development projects. Copyright 2008-2015 Izatt International. All Rights Reserved. Apple, the Apple logo, iPhone, iPod touch, iPad and iPad Mini are registered trademarks of Apple Computer in the U.S. and/or other countries.
#!/usr/bin/python # @file elastictools-shard-reassign.py # @brief uCodev Elastic Tools # Elasticsearch shard reassigning tool. # # Date: 02/08/2015 # # Copyright 2015 Pedro A. Hortas ([email protected]) # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # # # uCodev Elastic Tools v0.1 # # Description: Elasticsearch analysis, report and handling tools. # # Author: Pedro A. Hortas # Email: [email protected] # Date: 02/08/2015 # import sys import time from elastictools import * # Globals EXIT_SUCCESS = 0 EXIT_FAILURE = 1 ES_NODE_HOST = "localhost" ES_NODE_PORT = "9200" ES_REQ_WAIT_SECS = 1 # Class: Elasticsearch Shard class UETReassign(ESShard): # SECTION: Handler def tool_reassign_shards(self): for count, shard in enumerate(filter(lambda s: (s[3] == "UNASSIGNED"), self.shard_data_list)): sys.stdout.write(" * Re-assigning shard '%s' of type '%s' from index '%s' to node '%s' on host '%s' (%s of %s)... " % (shard[1], shard[2], shard[0], self.shard_node_list[count % len(self.shard_node_list)], self.es_host, count + 1, self.shard_status_unassigned_count())) post_data = { "commands" : [ { "allocate" : { "index" : "%s" % shard[0], "shard" : "%s" % shard[1], "node" : "%s" % self.shard_node_list[count % len(self.shard_node_list)], "allow_primary" : True } } ] } # Request cluster reroute op for the current unassigned shard res = self.es_request_http("POST", "/_cluster/reroute", post_data) # TODO: 400 is obviously an error, but is there more stuff to be handled here? if res["status"] == 400: print(res["error"]) print("Failed.") continue print("Reassigned.") self.shard_stat_reassigned += 1 time.sleep(ES_REQ_WAIT_SECS) def do(self): print("Loading shard status...") self.shard_status_load() print("Parsing shard data...") self.shard_status_parse() print(" * Number of shards started: %s" % self.shard_status_started_count()) print(" * Number of shards initializing: %s" % self.shard_status_initializing_count()) print(" * Number of shards unassigned: %s" % self.shard_status_unassigned_count()) print(" * Total number of shards: %s" % self.shard_status_total_count()) # Re-assign shards if unassigned shards are present if self.shard_status_unassigned_count(): print("Enabling routing allocation...") self.cluster_settings_set("cluster.routing.allocation.enable", "all") print("Reassigning unassigned shards...") self.tool_reassign_shards() # Check if there are shards in initializing state if self.shard_status_initializing_count(): print("There are shards in initialization state. 
If the problem persists, restart the node.") print("\nSummary:") print(" * Reassigned shards: %d" % self.shard_stat_reassigned) print(" * Failed Reassigns: %d" % self.shard_stat_failed_reassign) print(" * Moved shards: %d" % self.shard_stat_moved) print(" * Failed moves: %d" % self.shard_stat_failed_move) print(" * Total warnings: %d" % self.warnings) print(" * Total errors: %d" % self.errors) print("\nResetting data...") self.shard_reset_data_status() print("Loading shard status...") self.shard_status_load() print("Parsing shard data...") self.shard_status_parse() print("\nCurrent status:") print(" * Number of shards started: %s" % self.shard_status_started_count()) print(" * Number of shards initializing: %s" % self.shard_status_initializing_count()) print(" * Number of shards unassigned: %s" % self.shard_status_unassigned_count()) print(" * Total number of shards: %s" % self.shard_status_total_count()) print("\nDone.") # Class: Usage class Usage: args = { "target": None } def usage_show(self): print("Usage: %s <target filename OR url>" % (sys.argv[0])) def usage_check(self): if len(sys.argv) > 2: self.usage_show() sys.exit(EXIT_FAILURE) elif len(sys.argv) < 2: self.args["target"] = "http://%s:%s" % (ES_NODE_HOST, ES_NODE_PORT) def do(self): self.usage_check() if self.args["target"] == None: self.args["target"] = sys.argv[1] class Main: def do(self): usage = Usage() usage.do() elastic = UETReassign(usage.args["target"]) elastic.do() ## Entry Point if __name__ == "__main__": Main().do()
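For reference, the payload assembled in tool_reassign_shards() maps onto a single POST to the cluster reroute endpoint. A minimal stand-alone sketch of the same request follows; it uses the requests library purely for illustration (the script itself goes through its own es_request_http helper), and the index, shard and node values are placeholders.

import json
import requests   # illustration only; not used by the script above

payload = {
    "commands": [
        {
            "allocate": {
                "index": "my-index",    # placeholder
                "shard": "0",           # placeholder
                "node": "node-1",       # placeholder
                "allow_primary": True,
            }
        }
    ]
}

resp = requests.post("http://localhost:9200/_cluster/reroute",
                     data=json.dumps(payload),
                     headers={"Content-Type": "application/json"})
print(resp.status_code)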
The fervent atmosphere at Atletico Madrid's Vicente Calderon stadium will make them hard to beat there in the Champions League, midfielder Koke says. The Spanish league leaders are into a first semi-final for 40 years after their 2-1 aggregate win over Barcelona. "With these fans behind us, it is very hard for us to lose a match here," said Koke, who scored after five minutes. Atletico's defeat by Real Madrid in the Copa del Rey is their only defeat at home so far this season. Barca coach Gerardo Martino also identified Atletico's home support among the 54,000 attendance for the second leg as crucial in his side's defeat - the first time the Catalan club have failed to reach the semi-finals since 2007. "This environment is difficult to find in Europe, but it's more common in Argentina," reflected Martino, whose side trail Atletico by a point in La Liga with six games remaining. Atletico coach Diego Simeone added: "The atmosphere was amazing. It didn't affect Barca because they are used to those atmospheres, but it did influence our players. It gave us great positive energy." The former Argentina midfielder, who also had two spells with Atletico as a player, praised his side's attitude over the course of the two legs, adding: "I admire the way they sacrifice themselves, the way they work for the team and never betray it. "That is priceless because in the history of all battles and all wars the best don't win but those who fight better strategically. And we try to be a bit like that."
#!/usr/bin/python # coding: utf8 from __future__ import absolute_import import time from geocoder.base import Base from geocoder.location import Location class Timezone(Base): """ Google Time Zone API ==================== The Time Zone API provides time offset data for locations on the surface of the earth. Requesting the time zone information for a specific Latitude/Longitude pair will return the name of that time zone, the time offset from UTC, and the Daylight Savings offset. API Reference ------------- https://developers.google.com/maps/documentation/timezone/ """ provider = 'google' method = 'timezone' def __init__(self, location, **kwargs): self.url = 'https://maps.googleapis.com/maps/api/timezone/json' self.location = str(Location(location)) self.timestamp = kwargs.get('timestamp', time.time()) self.params = { 'location': self.location, 'timestamp': self.timestamp, } self._initialize(**kwargs) def __repr__(self): return "<[{0}] {1} [{2}]>".format(self.status, self.provider, self.timeZoneName) def _exceptions(self): # Build intial Tree with results if self.parse['results']: self._build_tree(self.parse['results'][0]) @property def ok(self): return bool(self.timeZoneName) @property def timeZoneId(self): return self.parse.get('timeZoneId') @property def timeZoneName(self): return self.parse.get('timeZoneName') @property def rawOffset(self): return self.parse.get('rawOffset') @property def dstOffset(self): return self.parse.get('dstOffset') if __name__ == '__main__': g = Timezone([45.5375801, -75.2465979]) g.debug()
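A short usage sketch of the wrapper above, mirroring the __main__ block; the coordinates are arbitrary and the example outputs in the comments are illustrative only.

tz = Timezone([45.5375801, -75.2465979])
if tz.ok:
    print(tz.timeZoneId)     # e.g. 'America/Toronto'
    print(tz.timeZoneName)   # human-readable zone name
    print(tz.rawOffset)      # offset from UTC in seconds
    print(tz.dstOffset)      # daylight-saving offset in seconds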
I'm Yvonne and I live in Altnafeadh. I'm interested in Art, Fishing and Dutch art. I like to travel and reading fantasy.
from django.contrib.auth.models import User
from django.db import models
from django.utils.timezone import now

from user.models import Language


class Prompt(models.Model):
    """
    Model for the prompts
    """
    text = models.TextField(blank=False)
    language = models.ForeignKey(Language, blank=False)
    number_of_recordings = models.IntegerField(default=0)

    def __str__(self):
        return self.language.name + ' - "' + self.text + '"'


class DistributedPrompt(models.Model):
    """
    Model for distributed prompts
    """
    user = models.ForeignKey(User, blank=False)
    prompt = models.ForeignKey(Prompt, blank=False)
    rejected = models.BooleanField(default=False)
    recorded = models.BooleanField(default=False)
    translated = models.BooleanField(default=False)
    date = models.DateTimeField(default=now)

    def __str__(self):
        return self.user.username + ' - ' + \
               str(self.prompt.id) + ' - ' + \
               str(self.rejected) + ' - ' + \
               str(self.recorded) + ' - ' + \
               str(self.date)
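A hedged sketch of how these models might be exercised from a Django shell or a test case, assuming migrations have been applied and at least one Language row exists; all values are invented.

english = Language.objects.first()
user = User.objects.create_user(username='alice', password='not-a-real-password')

prompt = Prompt.objects.create(text='Read this sentence aloud.', language=english)
distributed = DistributedPrompt.objects.create(user=user, prompt=prompt)

# Mark the prompt as recorded and keep the recording counter in sync.
distributed.recorded = True
distributed.save()

prompt.number_of_recordings += 1
prompt.save()

print(prompt)        # e.g. 'English - "Read this sentence aloud."'
print(distributed)   # 'alice - 1 - False - True - <timestamp>'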
Last year, Samsung launched the Galaxy TabPro S, a Windows 10 tablet that followed in the footsteps of the Microsoft Surface series. Today the company has launched two new devices, the Galaxy Book 10 and Galaxy Book 12, that continue along this line. Both devices run Windows 10 and come with a keyboard case and a stylus. They have thin designs and aluminium construction, and will be available in WiFi and LTE variants.

The Book 10 comes with a 10.6" LCD screen with 1920 x 1280 resolution and an Intel Core m3 processor, while the Book 12 features a 12" AMOLED HDR display and a fanless Intel Core i5-7200U processor. The Galaxy Book 10 comes with 4GB of RAM and 64 or 128GB of storage, while the Galaxy Book 12 ships with either 4GB of RAM and 128GB of storage, or 8GB of RAM and 256GB of storage. In all cases the storage can be expanded with a microSD card of up to 256GB. Both tablets have two USB Type-C ports and battery life of up to 10 hours, and they can be recharged quickly thanks to fast charging.

Samsung has improved the keyboard case compared to last year's: the new one has better key travel and backlit keys. The trackpad is also 50 percent larger than the TabPro S's. Both tablets ship with the S Pen stylus, which offers a sensitivity of 4,000 levels of pressure and does not need to be charged. It can be used to take notes, draw, make screenshots, etc. Unfortunately, there is nowhere to stow the S Pen when it is not in use, so there is a risk of losing it. In addition, Samsung has announced that it has partnered with Adobe to make the S Pen compatible with the popular Photoshop application.

At the moment we do not know when these devices will go on sale or at what price.
import json

from mflow_nodes.processors.base import BaseProcessor
from mflow_nodes.stream_node import get_processor_function, get_receiver_function
from mflow_nodes.node_manager import NodeManager


def setup_file_writing_receiver(connect_address, output_filename):
    """
    Setup a node that writes the message headers into an output file for later inspection.
    :param connect_address: Address the node connects to.
    :param output_filename: Output file.
    :return: Instance of ExternalProcessWrapper.
    """
    # Format the output file.
    with open(output_filename, 'w') as output_file:
        output_file.write("[]")

    def process_message(message):
        with open(output_filename, 'r') as input_file:
            test_data = json.load(input_file)

        test_data.append(message.get_header())

        with open(output_filename, 'w') as output:
            output.write(json.dumps(test_data, indent=4))

    processor = BaseProcessor()
    processor.process_message = process_message
    receiver = NodeManager(processor_function=get_processor_function(processor=processor,
                                                                     connection_address=connect_address),
                           receiver_function=get_receiver_function(connection_address=connect_address),
                           processor_instance=processor)

    return receiver
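A cautious usage sketch that sticks to the function defined above plus the standard library; the connect address and output path are placeholders, and how the returned NodeManager is actually started is left to the mflow_nodes API, which is not shown here.

receiver = setup_file_writing_receiver(connect_address="tcp://127.0.0.1:40000",
                                       output_filename="/tmp/received_headers.json")

# ... once the node has been started through the mflow_nodes API and messages have
# arrived, the captured headers can be read back from the JSON file:
import json
with open("/tmp/received_headers.json") as f:
    headers = json.load(f)
print(len(headers), "headers captured")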
With Infinity War breaking records, it's time to check in with that OTHER cinematic universe, that hath given the Flop House so much, with our look at Justice League. Meanwhile, Stuart reveals Donatello's government job, Dan misreads the title, and Elliott posits that Batman is the result of inbreeding. Because Huge Ackman so desperately wanted to do a big, original musical, we found ourselves watching The Greatest Showman. And it sure is... showy. Meanwhile, Elliott names off cold-based DC villains, Stuart has feelings on the name Phineas, and Dan apparently goes into a brief memory coma.
#!/usr/bin/env python # -*- coding: utf-8 -*- import calendar from datetime import date from datetime import timedelta from DateRanger.utils import get_quarter from DateRanger.utils import get_monthrange from DateRanger.objects import DateFrame from DateRanger.exceptions import InvalidDateRange from DateRanger.exceptions import InvalidQuarter class DateRanger(object): """ A class for getting common bussiness date ranges. """ def __init__(self, base_date=None): """ Argus: base_date - the base day. Example: date(2009, 11, 1) """ self.set_base_date(base_date) def set_base_date(self, base_date=None): """ Set base date. Argus: base_date - Example: date(2009, 11, 1) Note: self.b____, 'b' means 'base' """ self.bdate = base_date or date.today() self.bmonth = self.bdate.month self.byear = self.bdate.year self.bquarter = get_quarter(self.bmonth) def base_day(self): """ Get the DateRange of self.bdate. """ return DateFrame(self.bdate, self.bdate) def relative_day(self, days=0): """ Calcuate a relative date from self.bdate. """ rday = self.bdate + timedelta(days=days) return (rday, rday) def prev_day(self, days=1): """ Get the DateRange that n days before self.bdate. Argus: days - n days ago """ ndays = days * -1 start, end = self.relative_day(days=ndays) return DateFrame(start, end) def next_day(self, days=1): """ Get the DateRange that n days after self.bdate. Argus: days - next n days """ start, end = self.relative_day(days=days) return DateFrame(start, end) def get_week_range(self, base_date): """ Find the first/last day of the week for the given day. Weeks start on Sunday and end on Saturday. Argus: base_date - any date """ start = base_date - timedelta(days=base_date.weekday()+1) end = start + timedelta(days=6) return (start, end) def base_week(self): """ Get DateRange of the week that contains self.bdate. """ start, end = self.get_week_range(self.bdate) return DateFrame(start, end) def relative_week(self, weeks=0): """ Calcuate a relative week range from self.bdate. """ _, end_date = self.base_week().get_range() start, end = self.get_week_range(end_date + timedelta(days=7*weeks)) return (start, end) def prev_week(self, weeks=1): """ Get the DateRange that n weeks before self.bdate. Argus: weeks - n week ago """ nweeks = weeks * -1 start, end = self.relative_week(weeks=nweeks) return DateFrame(start, end) def next_week(self, weeks=1): """ Get the DateRange that n weeks after self.bdate. Argus: weeks - next n weeks """ start, end = self.relative_week(weeks=weeks) return DateFrame(start, end) def get_month_range(self, year, month): """ Get the first and last day of the given month in given year. Args: year month """ days = calendar.monthrange(year, month)[1] start = date(year, month, 1) end = date(year, month, days) return (start, end) def relative_month(self, months=0): """ Calcuate a relative month range from self.bdate. 
""" month_sum = self.bmonth + months if month_sum < 0: back_months = abs(month_sum) yeardelta = ((back_months // 12) + 1) * -1 month = 12 - (back_months % 12) elif month_sum == 0: yeardelta = -1 month = 12 elif month_sum <= 12: yeardelta = 0 month = month_sum else: yeardelta = month_sum // 12 month = month_sum % 12 year = self.byear + yeardelta start, end = self.get_month_range(year, month) return (start, end) def base_month(self): """ Get the DateRange of the month that contains self.bdate """ year, month = self.byear, self.bmonth start, end = self.get_month_range(year, month) return DateFrame(start, end) def prev_month(self, months=1): """ Get the DateRange that n months before self.bdate. Argus: months - n months ago """ nmonths = months * -1 start, end = self.relative_month(months=nmonths) return DateFrame(start, end) def next_month(self, months=1): """ Get the DateRange that n months after self.bdate. Argus: months - next n months """ start, end = self.relative_month(months=months) return DateFrame(start, end) def get_quarter_range(self, year, quarter): """ Get time range with specific year and quarter. """ if quarter not in (1, 2, 3, 4): raise InvalidQuarter() start_month, end_month = get_monthrange(quarter) days = calendar.monthrange(year, end_month)[1] start = date(year, start_month, 1) end = date(year, end_month, days) return start, end def relative_quarter(self, quarters=0): """ Calcuate a relative quarters range from self.bdate. """ quarter_sum = self.bquarter + quarters if quarter_sum < 0: back_quarters = abs(quarter_sum) yeardelta = ((back_quarters // 4) + 1) * -1 quarter = 4 - (back_quarters % 4) elif quarter_sum == 0: yeardelta = -1 quarter = 4 elif quarter_sum <= 4: yeardelta = 0 quarter = quarter_sum else: yeardelta = quarter_sum // 4 quarter = quarter_sum % 4 year = self.byear + yeardelta start, end = self.get_quarter_range(year, quarter) return (start, end) def base_quarter(self): """ Get the DateRange of the quarter that contains self.bdate. """ quarter = get_quarter(self.bmonth) start, end = self.get_quarter_range(self.byear, quarter) return DateFrame(start, end) def prev_quarter(self, quarters=1): """ Get the DateRange that n quarters before self.bdate. Argus: quarters - n quarters ago """ nquarters = quarters * -1 start, end = self.relative_quarter(quarters=nquarters) return DateFrame(start, end) def next_quarter(self, quarters=1): """ Get the DateRange that n quarters after self.bdate. Argus: quarters - next n quarters """ start, end = self.relative_quarter(quarters=quarters) return DateFrame(start, end) def get_year_range(self, year): """ Get time range of the year. """ start = date(year, 1, 1) end = date(year, 12, 31) return (start, end) def relative_year(self, years=0): year = self.byear + years start, end = self.get_year_range(year) return (start, end) def base_year(self): """ Get the DateRange of the year that contains self.bdate. """ start, end = self.get_year_range(self.byear) return DateFrame(start, end) def prev_year(self, years=1): """ Get the DateRange that n years before self.bdate. Argus: years - n years ago """ nyears = years * -1 start, end = self.relative_year(years=nyears) return DateFrame(start, end) def next_year(self, years=1): """ Get the DateRange that n years after self.bdate. 
Argus: year - next n years """ start, end = self.relative_year(years=years) return DateFrame(start, end) def from_date(self, from_date): """ Return the DateRange from `from_date` to self.bdate Argus: from_date - Example: date(2015, 1, 1) """ if from_date > self.bdate: raise InvalidDateRange() return DateFrame(from_date, self.bdate + timedelta(days=1)) def to_date(self, to_date): """ Return the DateRange from self.bdate to `to_date` Argus: to_date - Example: date(2015, 1, 1) """ if to_date < self.bdate: raise InvalidDateRange() return DateFrame(self.bdate, to_date + timedelta(days=1))
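A brief usage sketch of the class above with an arbitrary base date; the get_range() call on the returned DateFrame follows the usage already present in relative_week().

from datetime import date

ranger = DateRanger(date(2015, 6, 15))

print(ranger.base_week().get_range())      # Sunday-Saturday week containing 2015-06-15
print(ranger.prev_month().get_range())     # 2015-05-01 .. 2015-05-31
print(ranger.base_quarter().get_range())   # 2015-04-01 .. 2015-06-30
print(ranger.next_year().get_range())      # 2016-01-01 .. 2016-12-31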
After Vietnam, Howard didn't talk about his experiences and withdrew from other people. He felt directionless and began using drugs. Connecting with group counseling at VA helped him quit using and work through his challenges. Through the support and understanding of other Veterans and his psychiatrist, Howard got back on track.
#!/usr/bin/env python # Class defitions for Atlas Ping statistics # statistics are collected at three levels: # # first the raw per probe, per target IP data # second the aggregate over all probes for each target IP # finally overall statistics for the measurement from collections import defaultdict import json import time import sys from ripeatlas.analysis.msmstats import measurementStats, measurementDestStats, probeResults from ripeatlas.analysis.utils import dnsnamelookup class pingStatistics(measurementStats): """ top level statistics for ping measurement the "measurementsStats" base class holds most of the methods here we only need to override the addData() method with one specific one for pings """ def addData(self, probe_id, dst_addr, data): """ Add data from one sample of the measurement Statistics are grouuped by destination IP upto a maximum of maxDestinations When the measurement involves more destinations, statistics are aggregated """ self.samples += 1 self.probes[probe_id] += 1 self.destinationIPs[dst_addr] += 1 if self.aggregateDestinations: dst_addr = "All IPs combined" else: if not dst_addr in self.destinationStats: # do we want to add another destination specific report? if len(self.destinationStats.keys()) < self.maxDestinations: # yes we do self.destinationStats[dst_addr] = pingDestinationStats(self.msm_id,dst_addr) else: # no, there are too many destinations for this msm_id # aggregate the lot self.aggregateDestinations = 1 dst_addr = "All IPs combined" self.aggregateDestStats(dst_addr) self.destinationStats[dst_addr].addData(probe_id,data) def aggregateDestStats(self, aggregate_dst): """ We have too many different destination IPs to report on seperately Aggregate per destination stats collected thusfar into new aggregate stats object All new data will be added there. 
""" aggrStats = pingDestinationStats(self.msm_id, aggregate_dst) for dest in self.destinationStats.keys(): aggrStats.addDestinationStats(self.destinationStats[dest]) del self.destinationStats[dest] self.destinationStats[aggregate_dst] = aggrStats class pingDestinationStats: 'summary of ping results from one measurement to a single IP' def __init__(self, msm_id, dst_addr): self.msm_id = msm_id self.dst_addr = dst_addr self.probeReport = {} self.aggregationDone = 0 self.packets_sent = 0 self.packets_rcvd = 0 self.packets_dup = 0 self.nodata = 0 self.allerrors = 0 self.errorMsgs = defaultdict(int) self.loss100 = 0 self.loss80 = 0 self.loss60 = 0 self.loss40 = 0 self.loss20 = 0 self.loss5 = 0 self.loss0 = 0 self.lossless = 0 self.minimumRtts = [] self.medianRtts = [] self.maximumRtts = [] self.min = 0 self.max = 0 self.level975 = 0 self.level025 = 0 self.starttime = 9999999999 self.endtime = 0 def addData(self, probe_id, data): if not probe_id in self.probeReport: self.probeReport[probe_id] = probePingResults(self.msm_id,self.dst_addr,probe_id) self.probeReport[probe_id].addData(data) self.aggregationDone = 0 def addDestinationStats(self, source): """ add data from another destinationStats object to present one primary use case is creating aggregate stats over all destinations of the measurement """ for probe_id in source.probeReport: if not probe_id in self.probeReport: self.probeReport[probe_id] = probePingResults(self.msm_id,self.dst_addr,probe_id) self.probeReport[probe_id].addProbeResults(source.probeReport[probe_id]) if source.aggregationDone: self.aggregateProbeData() def aggregateProbeData(self): """ compile aggregate statistics from the per probe results """ for probe_id in self.probeReport.keys(): counters = self.probeReport[probe_id].getCounts() percentiles = self.probeReport[probe_id].rttPercentiles() loss = self.probeReport[probe_id].lossRate() errorrate = self.probeReport[probe_id].errorRate() self.packets_sent += counters['sent'] self.packets_rcvd += counters['received'] self.packets_dup += counters['duplicate'] for msg in self.probeReport[probe_id].errors: self.errorMsgs[msg] += 1 if (counters['sent'] == 0): if errorrate == 1: self.allerrors += 1 else: self.nodata += 1 elif (counters['received'] == 0): self.loss100 += 1 elif (counters['received'] == counters['sent']): self.lossless += 1 elif (loss > 0.80): self.loss80 += 1 elif (loss > 0.60): self.loss60 += 1 elif (loss > 0.40): self.loss40 += 1 elif (loss > 0.20): self.loss20 += 1 elif (loss > 0.05): self.loss5 += 1 elif (loss > 0.0): self.loss0 += 1 if '0' in percentiles and '50' in percentiles and '100' in percentiles: self.minimumRtts += [percentiles.get('2.5')] self.medianRtts += [percentiles.get('50')] self.maximumRtts += [percentiles.get('97.5')] starttime = self.probeReport[probe_id].starttime endtime = self.probeReport[probe_id].endtime if (starttime < self.starttime): self.starttime = starttime if (endtime > self.endtime): self.endtime = endtime self.minimumRtts.sort() self.medianRtts.sort() self.maximumRtts.sort() self.aggregationDone = 1 def report(self,detail): """ Output a report on the collected statistics The 'detail' argument controls how much detail is provided (more detail, longer reports) """ if not self.aggregationDone: # aggregate the per probe results before reporting self.aggregateProbeData() # Look for reverse DNS (if any) host = dnsnamelookup(self.dst_addr) if (detail==0): if host: print "Destination:", self.dst_addr, " / " , host else: print "Destination:", self.dst_addr else: print "Destination:", 
self.dst_addr if host: print "Reverse DNS:", host nprobes = len(self.probeReport.keys()) if (self.packets_sent>0): lost = 100 * (self.packets_sent - self.packets_rcvd)/float(self.packets_sent) lost = "%.2f%%" % lost else: lost = "NA" if (detail==0): #minimal view; report median of the medians if len(self.medianRtts) > 0: numprobes = len(self.medianRtts) level500 = int(numprobes * 0.5) median = self.medianRtts[level500] median = "%.2fms" % median else: median = "NA" print "sent/received/loss/median %d/%d/%s/%s" % (self.packets_sent,self.packets_rcvd,lost,median) else: print "Timeinterval:" , time.strftime("%Y-%m-%dT%H:%MZ",time.gmtime(self.starttime)), " - ", time.strftime("%Y-%m-%dT%H:%MZ",time.gmtime(self.endtime)) print "Packets sent:", self.packets_sent print "Packets received:", self.packets_rcvd print "Overall loss rate: %s" % lost print print "Total probes measuring: %6d" % nprobes print "Probes with 100%% errors:%6d" % self.allerrors if len(self.errorMsgs)>0: print 'Total errors on probes: %6d' % sum(self.errorMsgs.values()) print 'Most common error:"%s (%dx)"' % sorted(self.errorMsgs.items(),key=lambda x: x[1], reverse=True)[0] if (nprobes > 1): print print "Probes with no packets lost: %6d" % self.lossless print "Probes with 0%%-5%% loss: %6d" % self.loss0 print "Probes with 5%%-20%% loss: %6d" % self.loss5 print "Probes with 20%%-40%% loss: %6d" % self.loss20 print "Probes with 40%%-60%% loss: %6d" % self.loss40 print "Probes with 60%%-80%% loss: %6d" % self.loss60 print "Probes with 80%%-100%% loss: %6d" % self.loss80 print "Probes with 100%% loss: %6d" % self.loss100 print "Probes not sending any packets:%6d" % self.nodata print if len(self.medianRtts) > 0: numprobes = len(self.medianRtts) level025 = int(numprobes * 0.025) level250 = int(numprobes * 0.25) level500 = int(numprobes * 0.5) level750 = int(numprobes * 0.75) level975 = int(numprobes * 0.975) print "RTT distributions:" print "-----------------\n" print '2.5 percentile ("Minimum")' print "lowest 2.5 percentile RTT in all probes:%8.2fms" % self.minimumRtts[0] print "2.5%% of probes had 2.5 percentile <= %8.2fms" % self.minimumRtts[level025] print "25%% of probes had 2.5 percentile <= %8.2fms" % (self.minimumRtts[level250]) print "50%% of probes had 2.5 percentile <= %8.2fms" % (self.minimumRtts[level500]) print "75%% of probes had 2.5 percentile <= %8.2fms" % (self.minimumRtts[level750]) print "97.5%% of probes had 2.5 percentile <= %8.2fms" % (self.minimumRtts[level975]) print "highest 2.5 percentile in all probes %8.2fms" % (self.minimumRtts[numprobes-1]) print print "Median" print "lowest median RTT in all probes %9.2fms" % self.medianRtts[0] print "2.5%% of probes had median RTT <= %9.2fms" % self.medianRtts[level025] print "25%% of probes had median RTT <= %9.2fms" % (self.medianRtts[level250]) print "50%% of probes had median RTT <= %9.2fms" % (self.medianRtts[level500]) print "75%% of probes had median RTT <= %9.2fms" % (self.medianRtts[level750]) print "97.5%% of probes had median RTT <= %9.2fms" % (self.medianRtts[level975]) print "highest median RTT in all probes %9.2fms" % (self.medianRtts[numprobes-1]) print print '97.5 percentile ("Maximum")' print "lowest 97.5 percentile RTT in all probes:%8.2fms" % self.maximumRtts[0] print "2.5%% of probes had 97.5 percentile <= %8.2fms" % self.maximumRtts[level025] print "25%% of probes had 97.5 percentile <= %8.2fms" % (self.maximumRtts[level250]) print "50%% of probes had 97.5 percentile <= %8.2fms" % (self.maximumRtts[level500]) print "75%% of probes had 97.5 
percentile <= %8.2fms" % (self.maximumRtts[level750]) print "97.5%% of probes had 97.5 percentile <= %8.2fms" % (self.maximumRtts[level975]) print "highest 97.5 percentile in all probes %8.2fms" % (self.maximumRtts[numprobes-1]) print print return class probePingResults(probeResults): """ collect ping data from one probe to one destination' """ def __init__(self, msm_id, dst_addr, probe_id): self.probe_id = probe_id self.msm_id = msm_id self.dst_addr = dst_addr self.samples =0 self.packets_sent = 0 self.packets_rcvd = 0 self.packets_dup = 0 self.errors = defaultdict(int) self.rtts = [] self.rtts_sorted = 0 self.starttime = 9999999999 self.endtime = 0 def getCounts(self): counters = {} counters['sent'] = self.packets_sent counters['received'] = self.packets_rcvd counters['duplicate'] = self.packets_dup return(counters) def lossRate(self): if (self.packets_sent>0): loss = (self.packets_sent-self.packets_rcvd) / float(self.packets_sent) else: loss = 99999999999 return(loss) def errorRate(self): total = len(self.errors) + len(self.rtts) if total>0: errorrate = len(self.errors) / total else: errorrate = 99999999999 return(errorrate) def addData (self, data): """ Process one record of an Atlas ping measurement, update statistics See https://atlas.ripe.net/doc/data_struct#v4460_ping for details on the possible fields found in 'data' dictionary """ self.samples += 1 self.packets_sent += data['sent'] self.packets_dup += data['dup'] self.packets_rcvd += data['rcvd'] self.updateStartEnd(data['timestamp']) for item in data['result']: if 'error' in item: self.errors[item['error']] += 1 if 'rtt' in item and not 'dup' in item: # rtt for duplicates is not representative, often too late self.rtts += [item['rtt']] return def addProbeResults(self, source): """ Add data from another pingResults object to present stats main use case is collecting aggregate stats, not specific to one target IP """ self.samples += source.samples self.packets_sent += source.packets_sent self.packets_dup += source.packets_dup self.packets_rcvd += source.packets_rcvd self.updateStartEnd(source.starttime) self.updateStartEnd(source.endtime) self.rtts += source.rtts if self.rtts_sorted: self.rtts.sort() def rttPercentiles(self): percentiles={} if (len(self.rtts) > 0): if not self.rtts_sorted: self.rtts.sort() self.rtts_sorted = 1 index025 = int(len(self.rtts)*0.025) index500 = int(len(self.rtts)*0.5) index975 = int(len(self.rtts)*0.975) percentiles['100'] = self.rtts[len(self.rtts)-1] percentiles['97.5'] = self.rtts[index975] percentiles['50'] = self.rtts[index500] percentiles['2.5'] = self.rtts[index025] percentiles['0'] = self.rtts[0] return(percentiles)
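For orientation, here is a minimal usage sketch of how these classes might be driven from a saved measurement dump. The file name, measurement id and destination address are placeholders, and the per-result field names are assumed to follow the RIPE Atlas ping format referenced in probePingResults.addData.

# A minimal usage sketch (assumptions: "ping-results.json" is a saved RIPE Atlas
# dump; msm_id/dst_addr are placeholder values; field names such as "prb_id",
# "sent", "rcvd", "dup", "timestamp" and "result" follow the v4460 ping format
# referenced above).
import json

with open("ping-results.json") as fh:
    results = json.load(fh)

stats = pingDestinationStats(msm_id=1001, dst_addr="192.0.2.1")
for record in results:
    stats.addData(record["prb_id"], record)

# aggregates the per-probe data on demand, then prints the detailed summary
stats.report(detail=1)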
Online Loans For Poor Credit Western Sky Loans will set up direct debits for your chosen cash advance, and send a couple of reminders before that day. If for any reason you think your repayment will be late, contact the MoneyMe team and we'll do everything we can to help you resolve the situation. If there is a cost for the changes, rest assured we'll be completely upfront about it. To keep things fast, where other lenders need copies of pay slips, bank statements or other time-consuming documents, we use secure, rapid Proviso technology to retrieve 90 days of bank statements online within minutes. Together with your personal details, this is all the information we need to process the loan. You can feel confident: all the systems we have in place provide a high level of security. Online Loans For Poor Credit Western Sky Loans Download! A loan approved online through MoneyMe lets you access a short-term cash injection of as much as you need. Our simple, transparent process lets you meet your cash needs faster than ever before. There is no paperwork involved, and your online application will usually take only a few minutes. Once approved, the money can hit your account within an average of 60 minutes, depending on your bank and on whether your application is made within business hours. In the same way that we have made borrowing money extremely simple, repaying your loans approved online couldn't be easier. Borrow between $500 and $15,000 and pay your loan back in line with your pay cycle. Interest of 3% per month may accrue on any outstanding balance. Every effort will be made to contact the consumer to agree repayment terms, after which the account will be handed over to an external debt collector who will add collection fees. Non-payment will result in the client's National Credit Report showing an unsettled account in delinquency. Other lenders consult these credit profiles when making lending decisions, so non-payment will affect your credit score adversely and can have an impact on future credit applications. Renewal is not automatic and is subject to credit, employment and affordability criteria. Based on the above criteria, MPOWA Fund (Pty) Ltd will adjust your loan offer if your circumstances have changed significantly since your previous application. You may apply for a loan increase after three successful payments. We are a team of designers and developers that create high quality Online Loans For Poor Credit Western Sky Loans.
# -*- coding: utf-8 -*-
from django import template
from django.http import HttpResponse, HttpResponseRedirect, Http404
from django.shortcuts import render_to_response
#
from models import MediaFile, Gallery
from forms import GalleryMediaFileForm
import uuid


#
def preview(request, id):
    m = MediaFile.objects.get(id=id)
    return m.response(HttpResponse)


def download(request, id):
    m = MediaFile.objects.get(id=id)
    return m.response(HttpResponse, meta=True)


def thumbnail(request, id, width, height):
    m = MediaFile.objects.get(id=id)
    return m.response(HttpResponse, size=(int(width), int(height)))


#######
from django.utils import simplejson
from django.core.urlresolvers import reverse
from django.conf import settings
from django.views.generic import CreateView, DeleteView, UpdateView, ListView, DetailView
from django.contrib.auth.decorators import login_required, permission_required


def response_mimetype(request):
    # if "application/json" in request.META['HTTP_ACCEPT']:
    if "application/json" in request.META.get('HTTP_ACCEPT', []):
        return "application/json"
    else:
        return "text/plain"


class GalleryAdminDetail(DetailView):
    model = Gallery

    def get_object(self, queryset=None):
        return self.model.objects.get(id=self.kwargs['id'])

    def get_context_data(self, **kwargs):
        '''
        {'mediafiles': <django.db.models.fields.related.ManyRelatedManager object >
         'object': <Gallery: Gallery object>,
         'gallery': <Gallery: Gallery object>,
         'form': <django.forms.models.GalleryForm object >,
         'view': <mediafiles.views.GalleryAdminEdit object >
        }
        '''
        context = super(GalleryAdminDetail, self).get_context_data(**kwargs)
        context['mediafiles'] = self.object.medias
        context['container'] = self.object
        context['mediafile_uploader'] = reverse('gallery_admin_media_create',
                                                kwargs={'id': self.kwargs['id']})
        context['mediafile_delete_url'] = "gallery_admin_media_delete"
        context['mediafile_image_url'] = "gallery_admin_media_image"
        context['mediafile_thumbnail_url'] = "gallery_admin_media_thumb"
        context['mediafile_url_hint'] = {}
        # context['salt'] = uuid.uuid1().hex
        return context


class GalleryAdminList(ListView):
    '''
    Template (by default): mediafiles/gallery_list.html
    '''
    model = Gallery


class GalleryAdminMediaCreate(CreateView):
    model = MediaFile
    form_class = GalleryMediaFileForm

    # def get(self, request, *args, **kwargs):  #: for debugging
    #     print "HDKNR:", "Ajax?", request.is_ajax()
    #     response = super(GalleryAdminEdit, self).get(request, *args, **kwargs)
    #     if request.is_ajax():
    #         print response.render()
    #     return response

    # def post(self, request, *args, **kwargs):  #: for debugging
    #     print "HDKNR:post()", "Ajax?=", request.is_ajax()
    #     print args, kwargs, request.POST, request.FILES
    #     response = super(GalleryAdminMediaCreate, self).post(request, *args, **kwargs)
    #     return response

    # def form_invalid(self, form):
    #     print "HDKNR(form_invalid):", type(form), form.errors
    #     return super(GalleryAdminMediaCreate, self).form_invalid(form)

    def form_valid(self, form):
        form.instance.user = self.request.user  #: the logged-in user
        new_media = form.save()

        self.gallery = Gallery.objects.get(id=self.kwargs['id'])  #: could be done elsewhere
        self.gallery.medias.add(new_media)

        # f = self.request.FILES.get('file')
        url = reverse('mediafiles_preview', kwargs={'id': new_media.id})

        #: jQuery File Upload API data (JSON)
        data = [{'name': new_media.name,
                 'url': url,
                 'thumbnail_url': new_media.get_thumbnail_url(size=(100, 30)),
                 'gallery': 'gallery' if new_media.is_image() else "",
                 'delete_url': reverse('gallery_admin_media_delete',
                                       kwargs={'id': self.kwargs['id'], 'mid': new_media.id}),
                 'delete_type': "DELETE"}]
        response = JSONResponse(data, {}, response_mimetype(self.request))
        response['Content-Disposition'] = 'inline; filename=files.json'
        return response


class JSONResponse(HttpResponse):
    """JSON response class."""
    def __init__(self, obj='', json_opts={}, mimetype="application/json", *args, **kwargs):
        content = simplejson.dumps(obj, **json_opts)
        super(JSONResponse, self).__init__(content, mimetype, *args, **kwargs)


class GalleryAdminMediaDelete(DeleteView):
    model = MediaFile

    def get_object(self, *args, **kwargs):
        return Gallery.objects.get(id=self.kwargs['id']).medias.get(id=self.kwargs['mid'])

    def delete(self, request, *args, **kwargs):
        """
        This does not actually delete the file, only the database record.
        But that is easy to implement.
        """
        self.object = self.get_object()
        self.object.delete()  #: delete the record
        if request.is_ajax():
            response = JSONResponse(True, {}, response_mimetype(self.request))
            response['Content-Disposition'] = 'inline; filename=files.json'
            return response
        else:
            return render_to_response(
                'mediafiles/mediafile_deleted.html',
                context_instance=template.RequestContext(request),
            )


class GalleryAdminMediaImage(DetailView):
    model = MediaFile

    def get(self, request, *args, **kwargs):
        media = Gallery.objects.get(id=self.kwargs['id']).medias.get(id=self.kwargs['mid'])
        return media.response(HttpResponse)


class GalleryAdminMediaThumb(DetailView):
    model = MediaFile

    def get(self, request, *args, **kwargs):
        media = Gallery.objects.get(id=self.kwargs['id']).medias.get(id=self.kwargs['mid'])
        size = (int(request.GET.get('width', 100)), int(request.GET.get('height', 30)))
        return media.response(HttpResponse, size=size)


GalleryAdminDetailView = login_required(GalleryAdminDetail.as_view())
GalleryAdminListView = login_required(GalleryAdminList.as_view())
GalleryAdminMediaCreateView = login_required(GalleryAdminMediaCreate.as_view())
GalleryAdminMediaDeleteView = login_required(GalleryAdminMediaDelete.as_view())
GalleryAdminMediaImageView = login_required(GalleryAdminMediaImage.as_view())
GalleryAdminMediaThumbView = login_required(GalleryAdminMediaThumb.as_view())
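The views reverse several named URLs. A minimal URLconf sketch is given below; only the URL names are taken from the reverse() calls and context keys above, while the regex patterns, the old-style patterns() usage and the "mediafiles" module path are assumptions made to match the era of this code (simplejson, render_to_response).

# urls.py -- a minimal wiring sketch under the assumptions stated above.
from django.conf.urls import patterns, url
from mediafiles import views  # assumed module path

urlpatterns = patterns('',
    url(r'^preview/(?P<id>\d+)/$', views.preview, name='mediafiles_preview'),
    url(r'^gallery/(?P<id>\d+)/$', views.GalleryAdminDetailView, name='gallery_admin_detail'),
    url(r'^gallery/(?P<id>\d+)/media/$', views.GalleryAdminMediaCreateView,
        name='gallery_admin_media_create'),
    url(r'^gallery/(?P<id>\d+)/media/(?P<mid>\d+)/delete/$', views.GalleryAdminMediaDeleteView,
        name='gallery_admin_media_delete'),
    url(r'^gallery/(?P<id>\d+)/media/(?P<mid>\d+)/image/$', views.GalleryAdminMediaImageView,
        name='gallery_admin_media_image'),
    url(r'^gallery/(?P<id>\d+)/media/(?P<mid>\d+)/thumb/$', views.GalleryAdminMediaThumbView,
        name='gallery_admin_media_thumb'),
)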
I have a goodie for you today! I created a set of 5 posters that highlight angles: right, acute, obtuse, straight and reflex. Each poster has the definition, a picture of the angle and a real-life example picture. Click to Download the Free Posters on Google Docs! Here's what the posters look like.

I like that you added a real-world example. Thanks for sharing! These are great and cute! I'm sharing on my wall! Wow! These are great and so are you! This is such a great idea!! I love the real-world connections!! These are great for the kiddos!! Thank you for such an awesome addition to my math curriculum! I am filing these away for my geometry unit. Great posters... thanks for sharing! These are great posters! Thanks for taking the time to make them. Many students will benefit from your hard work.
#-*- coding: ISO-8859-1 -*- # # DBSQL - A SQL database engine. # # Copyright (C) 2007-2008 The DBSQL Group, Inc. - All rights reserved. # # This library is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public # License as published by the Free Software Foundation; either # version 2.1 of the License, or (at your option) any later version. # # There are special exceptions to the terms and conditions of the GPL as it # is applied to this software. View the full text of the exception in file # LICENSE_EXCEPTIONS in the directory of this software distribution. # # This library is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # http://creativecommons.org/licenses/GPL/2.0/ # # Copyright (C) 2004 Gerhard Häring <[email protected]> # # This software is provided 'as-is', without any express or implied # warranty. In no event will the authors be held liable for any damages # arising from the use of this software. # # Permission is granted to anyone to use this software for any purpose, # including commercial applications, and to alter it and redistribute it # freely, subject to the following restrictions: # # 1. The origin of this software must not be misrepresented; you must not # claim that you wrote the original software. If you use this software # in a product, an acknowledgment in the product documentation would be # appreciated but is not required. # 2. Altered source versions must be plainly marked as such, and must not be # misrepresented as being the original software. # 3. This notice may not be removed or altered from any source distribution. 
# A simple LRU cache, which will be rewritten in C later
class Node:
    def __init__(self, key, data):
        self.key = key
        self.data = data
        self.count = 1
        self.prev, self.next = None, None


class Cache:
    def __init__(self, factory, maxlen):
        self.first, self.last = None, None
        self.maxlen = maxlen
        self.mapping = {}
        self.factory = factory

    def get(self, key):
        if key in self.mapping:
            nd = self.mapping[key]
            nd.count += 1

            if nd.prev and nd.count > nd.prev.count:
                ptr = nd.prev
                while ptr.prev is not None and nd.count > ptr.prev.count:
                    ptr = ptr.prev

                # Move nd before ptr
                if nd.next:
                    nd.next.prev = nd.prev
                else:
                    self.last = nd.prev
                if nd.prev:
                    nd.prev.next = nd.next
                if ptr.prev:
                    ptr.prev.next = nd
                else:
                    self.first = nd

                save = nd.next
                nd.next = ptr
                nd.prev = ptr.prev
                if nd.prev is None:
                    self.first = nd
                ptr.prev = nd
                #ptr.next = save
        else:
            if len(self.mapping) == self.maxlen:
                if self.last:
                    nd = self.last
                    self.mapping[self.last.key] = None
                    del self.mapping[self.last.key]
                    if nd.prev:
                        nd.prev.next = None
                    self.last = nd.prev
                    nd.prev = None

            obj = self.factory(key)
            nd = Node(key, obj)
            nd.prev = self.last
            nd.next = None
            if self.last:
                self.last.next = nd
            else:
                self.first = nd
            self.last = nd
            self.mapping[key] = nd
        return nd.data

    def display(self):
        nd = self.first
        while nd:
            prevkey, nextkey = None, None
            if nd.prev:
                prevkey = nd.prev.key
            if nd.next:
                nextkey = nd.next.key
            print "%4s <- %4s -> %s\t(%i)" % (prevkey, nd.key, nextkey, nd.count)
            nd = nd.next


if __name__ == "__main__":
    def create(s):
        return s

    import random
    cache = Cache(create, 5)
    if 1:
        chars = list("abcdefghijklmnopqrstuvwxyz")
        lst = []
        for i in range(100):
            idx = random.randint(0, len(chars) - 1)
            what = chars[idx]
            lst.append(what)
            cache.get(chars[idx])
        cache.display()
        #print "-" * 50
        #print lst
        #print "-" * 50
    else:
        lst = \
            ['y', 'y', 'b', 'v', 'x', 'f', 'h', 'n', 'g', 'k', 'o', 'q', 'p', 'e', 'm', 'c', 't', 'y', 'c', 's', 'p', 's', 'j', 'm', \
             'u', 'f', 'z', 'x', 'v', 'r', 'w', 'e', 'm', 'd', 'w', 's', 'b', 'r', 'd', 'e', 'h', 'g', 'e', 't', 'p', 'b', 'e', 'i', \
             'g', 'n']
        #lst = ['c', 'c', 'b', 'b', 'd', 'd', 'g', 'c', 'c', 'd']
        for item in lst:
            cache.get(item)
        cache.display()
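Because the cache takes a factory callable, it can memoize any expensive keyed construction. A small sketch, assuming compiled regular expressions as the cached objects:

# Usage sketch (assumption: caching compiled regexes via the factory argument).
import re

regex_cache = Cache(re.compile, 10)
pattern = regex_cache.get(r"\d+")   # compiled on first use via the factory
same = regex_cache.get(r"\d+")      # served from the cache on subsequent calls
assert pattern is same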
The Association of Private Ski Instructors Gstaad-Saanenland was founded 30 years ago by local ski instructors. Only qualified local ski instructors who know the region like the back of their hand are eligible for membership. We’re not only snowsports instructors, we can also act as your personal guide and local contact. If you wish, we’ll fetch you at your hotel/house, organise your ski equipment and book tables for you at your favourite restaurants. Our training, which includes ski techniques, avalanche awareness, methodology and foreign languages, is your guarantee of quality. You can rest assured that we know how to assess snow and weather conditions correctly. For you, that means unparalleled winter fun. Get in touch with us now, it would be a pleasure to hear from you.
# -*- coding: utf-8 -*-
from datetime import datetime

from django.http import HttpResponse

from smsbrana import SmsConnect
from smsbrana import signals
from smsbrana.const import DELIVERY_STATUS_DELIVERED, DATETIME_FORMAT
from smsbrana.models import SentSms


def smsconnect_notification(request):
    sc = SmsConnect()
    result = sc.inbox()
    # print result

    for delivered in result['delivery_report']:
        sms_id = delivered['idsms']
        if delivered['status'] != DELIVERY_STATUS_DELIVERED:
            continue
        try:
            sms = SentSms.objects.get(sms_id=sms_id)
            if sms.delivered:
                continue
            sms.delivered = True
            sms.delivered_date = datetime.strptime(delivered['time'], DATETIME_FORMAT)
            sms.save()
        except SentSms.DoesNotExist:
            # logger.error('sms delivered which wasn\'t sent' + str(delivered))
            pass

    # delete the inbox if there are 100+ items
    if len(result['delivery_report']) > 100:
        sc.inbox(delete=True)

    signals.smsconnect_notification_received.send(sender=None, inbox=result, request=request)
    return HttpResponse('OK')
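This view is a callback endpoint, so it needs to be exposed at a URL that the SMS gateway can reach. A minimal wiring sketch, where the URL pattern, name and module path are illustrative assumptions:

# urls.py -- a minimal sketch; the pattern and name are placeholders.
from django.conf.urls import url

from . import views  # assumed module containing smsconnect_notification

urlpatterns = [
    url(r'^smsbrana/notification/$', views.smsconnect_notification,
        name='smsconnect-notification'),
]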
Thanks to a multi-year collaboration involving the New Zealand Film Archive, the NFPF, and the largest American nitrate archives, scores of “lost” American silent-era films are being returned to the big screen and shared with new audiences. Sample a selection here. More will be added to our site in the months ahead.
from django.contrib.admin import StackedInline
from django.forms import fields, widgets
from django.template.loader import select_template
from django.utils.translation import ugettext_lazy as _, ugettext

from entangled.forms import EntangledModelFormMixin, EntangledModelForm

from cms.plugin_pool import plugin_pool
from cms.utils.compat.dj import is_installed
from cmsplugin_cascade.mixins import WithSortableInlineElementsMixin
from cmsplugin_cascade.models import SortableInlineCascadeElement

from shop.cascade.plugin_base import ShopPluginBase, ProductSelectField
from shop.conf import app_settings
from shop.models.product import ProductModel

if is_installed('adminsortable2'):
    from adminsortable2.admin import SortableInlineAdminMixin
else:
    SortableInlineAdminMixin = type('SortableInlineAdminMixin', (object,), {})


class ShopCatalogPluginForm(EntangledModelFormMixin):
    CHOICES = [
        ('paginator', _("Use Paginator")),
        ('manual', _("Manual Infinite")),
        ('auto', _("Auto Infinite")),
    ]

    pagination = fields.ChoiceField(
        choices=CHOICES,
        widget=widgets.RadioSelect,
        label=_("Pagination"),
        initial='paginator',
        help_text=_("Shall the product list view use a paginator or scroll infinitely?"),
    )

    class Meta:
        entangled_fields = {'glossary': ['pagination']}


class ShopCatalogPlugin(ShopPluginBase):
    name = _("Catalog List View")
    require_parent = True
    form = ShopCatalogPluginForm
    parent_classes = ['BootstrapColumnPlugin', 'SimpleWrapperPlugin']
    cache = False

    def get_render_template(self, context, instance, placeholder):
        templates = []
        if instance.glossary.get('render_template'):
            templates.append(instance.glossary['render_template'])
        templates.extend([
            '{}/catalog/list.html'.format(app_settings.APP_LABEL),
            'shop/catalog/list.html',
        ])
        return select_template(templates)

    def render(self, context, instance, placeholder):
        context['pagination'] = instance.glossary.get('pagination', 'paginator')
        return context

    @classmethod
    def get_identifier(cls, obj):
        pagination = obj.glossary.get('pagination')
        if pagination == 'paginator':
            return ugettext("Manual Pagination")
        return ugettext("Infinite Scroll")

plugin_pool.register_plugin(ShopCatalogPlugin)


class ShopAddToCartPluginForm(EntangledModelFormMixin):
    use_modal_dialog = fields.BooleanField(
        label=_("Use Modal Dialog"),
        initial=True,
        required=False,
        help_text=_("After adding product to cart, render a modal dialog"),
    )

    class Meta:
        entangled_fields = {'glossary': ['use_modal_dialog']}


class ShopAddToCartPlugin(ShopPluginBase):
    name = _("Add Product to Cart")
    require_parent = True
    form = ShopAddToCartPluginForm
    parent_classes = ['BootstrapColumnPlugin']
    cache = False

    def get_render_template(self, context, instance, placeholder):
        templates = []
        if instance.glossary.get('render_template'):
            templates.append(instance.glossary['render_template'])
        if context['product'].managed_availability():
            template_prefix = 'available-'
        else:
            template_prefix = ''
        templates.extend([
            '{}/catalog/{}product-add2cart.html'.format(app_settings.APP_LABEL, template_prefix),
            'shop/catalog/{}product-add2cart.html'.format(template_prefix),
        ])
        return select_template(templates)

    def render(self, context, instance, placeholder):
        context = super(ShopAddToCartPlugin, self).render(context, instance, placeholder)
        context['use_modal_dialog'] = bool(instance.glossary.get('use_modal_dialog', True))
        return context

plugin_pool.register_plugin(ShopAddToCartPlugin)


class ProductGalleryForm(EntangledModelForm):
    order = fields.IntegerField(
        widget=widgets.HiddenInput,
        initial=0,
    )

    product = ProductSelectField(
        required=False,
        label=_("Related Product"),
        help_text=_("Choose related product"),
    )

    class Meta:
        entangled_fields = {'glossary': ['product']}
        untangled_fields = ['order']


class ProductGalleryInline(SortableInlineAdminMixin, StackedInline):
    model = SortableInlineCascadeElement
    form = ProductGalleryForm
    extra = 5
    ordering = ['order']
    verbose_name = _("Product")
    verbose_name_plural = _("Product Gallery")


class ShopProductGallery(WithSortableInlineElementsMixin, ShopPluginBase):
    name = _("Product Gallery")
    require_parent = True
    parent_classes = ('BootstrapColumnPlugin',)
    inlines = (ProductGalleryInline,)

    # until this bug https://github.com/applegrew/django-select2/issues/65 is fixed
    # we hide the "add row" button and instead use `extra = 5` in ProductGalleryInline
    class Media:
        css = {'all': ('shop/css/admin/product-gallery.css',)}

    def get_render_template(self, context, instance, placeholder):
        templates = []
        if instance.glossary.get('render_template'):
            templates.append(instance.glossary['render_template'])
        templates.extend([
            '{}/catalog/product-gallery.html'.format(app_settings.APP_LABEL),
            'shop/catalog/product-gallery.html',
        ])
        return select_template(templates)

    def render(self, context, instance, placeholder):
        product_ids = []
        for inline in instance.sortinline_elements.all():
            try:
                product_ids.append(inline.glossary['product']['pk'])
            except KeyError:
                pass
        queryset = ProductModel.objects.filter(pk__in=product_ids, active=True)
        serializer_class = app_settings.PRODUCT_SUMMARY_SERIALIZER
        serialized = serializer_class(queryset, many=True, context={'request': context['request']})
        # sort the products according to the order provided by `sortinline_elements`.
        context['products'] = [product for id in product_ids
                               for product in serialized.data if product['id'] == id]
        return context

plugin_pool.register_plugin(ShopProductGallery)
The Hardy Group is centred on Derby and its surrounding areas. It is a thriving group of people living with dementia, together with current and past carers, who through their own experiences support each other along their journey with dementia. We hold light-hearted social meetings and organise trips out to help all our members stay socially, physically and mentally active so that they can (and do!) live well with dementia. Contact the service with the details provided. Suitable for those experiencing dementia, or current or past carers. Members are asked to bring their carers as this is not a caring service. £5 annual membership fee, plus £5 to secure bookings on trips.

The aims of Friends of Markeaton Park are to adopt a proactive approach, working in partnership with Derby City Council on improvements to Markeaton Park, and to provide park users with a forum to voice their viewpoints, as well as engaging in policy decisions in partnership with Derby City Council. The group also aims to ensure that some of the lost facilities are, if feasible, re-introduced to Markeaton Park, and to help form and support sub-groups for the benefit of the park. Activities – displays, guided walks, litter picks, fundraising, specialist talks etc.

Tackling the impacts of loneliness and isolation by providing community social groups, activities, befriending, assisted shopping and more for people aged 55 and older living in Derby and the surrounding areas. The project's aim is to provide befriending and social activities to help prevent social isolation and loneliness. The project has two offices, one in Mickleover and one in Spondon. There are several lunch clubs, drop-ins, coffee mornings and social activities across the city. The project provides transport to these activities and also arranges outings, shopping and holidays.

The Centre aims to provide leisure and recreational pursuits with a wide range of groups. There is also a tea room for the public staffed by volunteers. Tea Room – Monday to Friday, 10:00 – 14:00; Saturday, 08:00 – 12:00.

A general community centre offering a wealth of different activities, including breakfast clubs, out-of-school clubs and karate classes.

The aim of Surtal Arts is to promote health and well-being, personal development, integration and understanding, supporting social and community cohesion, inside and outside its community, through high-quality Asian arts activities.

Free activities to keep your brain active, including puzzles, sudoku, word searches, colouring for adults, and magazines and books to browse and borrow.

The Oddfellows is one of the largest and oldest friendly societies in the UK. Through friendship and social events, we help our members get more enjoyment out of life, and offer care, advice and support in times of need. Typical activities include walking groups, dining out and dominoes. Meetings are held on the second Wednesday of each month at 7.30pm. See the website or get in touch for the latest times. Contact via phone. Membership for under-18s is £10 yearly; for over-18s, membership is £25 yearly.
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Central location for NCF specific values."""

import os
import time


# ==============================================================================
# == Main Thread Data Processing ===============================================
# ==============================================================================

class Paths(object):
  """Container for various path information used while training NCF."""

  def __init__(self, data_dir, cache_id=None):
    self.cache_id = cache_id or int(time.time())
    self.data_dir = data_dir
    self.cache_root = os.path.join(
        self.data_dir, "{}_ncf_recommendation_cache".format(self.cache_id))
    self.train_shard_subdir = os.path.join(self.cache_root,
                                           "raw_training_shards")
    self.train_shard_template = os.path.join(self.train_shard_subdir,
                                             "positive_shard_{}.pickle")
    self.train_epoch_dir = os.path.join(self.cache_root, "training_epochs")
    self.eval_data_subdir = os.path.join(self.cache_root, "eval_data")
    self.subproc_alive = os.path.join(self.cache_root, "subproc.alive")


APPROX_PTS_PER_TRAIN_SHARD = 128000

# Keys for data shards
TRAIN_KEY = "train"
EVAL_KEY = "eval"

# In both datasets, each user has at least 20 ratings.
MIN_NUM_RATINGS = 20

# The number of negative examples attached with a positive example
# when performing evaluation.
NUM_EVAL_NEGATIVES = 999

# keys for evaluation metrics
TOP_K = 10  # Top-k list for evaluation
HR_KEY = "HR"
NDCG_KEY = "NDCG"
DUPLICATE_MASK = "duplicate_mask"

# Metric names
HR_METRIC_NAME = "HR_METRIC"
NDCG_METRIC_NAME = "NDCG_METRIC"


# ==============================================================================
# == Subprocess Data Generation ================================================
# ==============================================================================

CYCLES_TO_BUFFER = 3  # The number of train cycles worth of data to "run ahead"
                      # of the main training loop.

FLAGFILE_TEMP = "flagfile.temp"
FLAGFILE = "flagfile"
READY_FILE_TEMP = "ready.json.temp"
READY_FILE = "ready.json"

TRAIN_RECORD_TEMPLATE = "train_{}.tfrecords"
EVAL_RECORD_TEMPLATE = "eval_{}.tfrecords"

TIMEOUT_SECONDS = 3600 * 2  # If the train loop goes more than two hours without
                            # consuming an epoch of data, this is a good
                            # indicator that the main thread is dead and the
                            # subprocess is orphaned.
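A small sketch of how the Paths container resolves its directory layout; the data_dir value is a placeholder, and cache_id defaults to the current Unix timestamp when not supplied.

# Minimal usage sketch (data_dir is a placeholder path).
paths = Paths(data_dir="/tmp/ncf_data")
print(paths.cache_root)                       # .../<cache_id>_ncf_recommendation_cache
print(paths.train_shard_template.format(3))   # .../raw_training_shards/positive_shard_3.pickle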
As its name suggests, the R7 Lite is a downgraded variant of the original R7. The new smartphone sports a 5-inch display with just 720 x 1280 pixels, instead of 1080 x 1920 pixels like on the regular R7. Moreover, instead of having 3 GB of RAM, the R7 Lite only offers 2 GB. Another difference - this time a positive one - is that the Oppo R7 Lite runs Color OS 2.1 based on Android Lollipop, while the regular R7 was released running Color OS based on Android KitKat. Like the slightly older R7, the Oppo R7 Lite is built around a metal frame, it's only 6.3mm-thin, and is powered by an octa-core Snapdragon 615 processor. The handset also features LTE, dual SIM support, 8 MP front-facing camera, 13 MP rear camera, 16 GB of expandable internal memory, and a 2320 mAh battery.
from featuretools import list_primitives
from featuretools.primitives import (
    Day,
    GreaterThan,
    Last,
    NumCharacters,
    get_aggregation_primitives,
    get_transform_primitives
)
from featuretools.primitives.utils import _get_descriptions


def test_list_primitives_order():
    df = list_primitives()
    all_primitives = get_transform_primitives()
    all_primitives.update(get_aggregation_primitives())

    for name, primitive in all_primitives.items():
        assert name in df['name'].values
        row = df.loc[df['name'] == name].iloc[0]
        actual_desc = _get_descriptions([primitive])[0]
        if actual_desc:
            assert actual_desc == row['description']

    types = df['type'].values
    assert 'aggregation' in types
    assert 'transform' in types


def test_descriptions():
    primitives = {NumCharacters: 'Calculates the number of characters in a string.',
                  Day: 'Determines the day of the month from a datetime.',
                  Last: 'Determines the last value in a list.',
                  GreaterThan: 'Determines if values in one list are greater than another list.'}
    assert _get_descriptions(list(primitives.keys())) == list(primitives.values())
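For context, the API these tests exercise is small: list_primitives() returns a DataFrame whose columns (per the assertions above) include 'name', 'type' and 'description'. A quick interactive sketch:

# Interactive sketch of the API under test.
import featuretools as ft

df = ft.list_primitives()
print(df[df['type'] == 'aggregation'][['name', 'description']].head())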
Alexis is an extremely talented female vocalist with a beautifully trained voice. Powerful vocals, great personality and stunning looks. What more could you ask for!! Both myself and my partner, Paul, really enjoyed your performance last night at The Royal British Legion in Polegate. We heard Alexis sing in Hotham Park and were blown away by her voice. Thank you for entertaining us last Saturday at the Wickham Arms. I purchased one of your CDs and enjoyed listening to it this morning while reading the Sunday paper. You have a lovely sensual voice and I am sure you will soon be singing in more salubrious places than the Wickham. However would love to see you there again. I'm sure you have a bright future. What a fantastic evening, thank you. What a great way to use your talents. Vocals very suited to easy listening and popular music also great, I enjoyed 'Celebration' a great deal.
""" See the PyMOL Sessions processed with this code here <https://github.com/mmagnus/PyMOL4Spliceosome> """ from pymol import cmd from rna_tools.tools.PyMOL4RNA import code_for_color_spl from rna_tools.tools.PyMOL4RNA import code_for_spl try: from pymol import cmd except ImportError: print("PyMOL Python lib is missing") # sys.exit(0) def spl(arg=''): """ action='', name='' """ if ' ' in arg: action, name = arg.split() name = name.lower() else: action = arg name = '' #import pandas as pd #df = pd.read_excel("/home/magnus/Desktop/pyMoL_colors-EMX.xlsx") if not action or action == 'help': spl_help() elif action == 'color' or arg=='c': code_for_color_spl.spl_color() elif arg == 'extract all' or arg == 'ea' or arg == 'e': code_for_spl.spl_extract() elif arg.startswith('hprp28'): cmd.do("color purple, PRP28_h* and resi 240-361") # RecA1 cmd.do("color blue, PRP28_h* and resi 361-631") # RecA1 cmd.do("color orange, PRP28_h* and resi 631-811") # RecA2 elif arg.startswith('hprp8'): print("RT, skyblue, 885-1251") print("Thumb/X, cyan, 1257-1375") cmd.do("color yellow, PRP8_h* and resi 1581-1752") # rt cmd.do("color wheat, PRP8_h* and resi 1767-2020") # rh cmd.do("color salmon, PRP8_h* and resi 2103-2234") # jab cmd.do("color smudge, PRP8_h* and resi 1304-1577") # linker cmd.do("color skyblue, PRP8_h* and resi 812-1303") # rt elif arg.startswith('prp8'): print("RT, skyblue, 885-1251") print("Thumb/X, cyan, 1257-1375") cmd.do("color skyblue, PRP8_y* and resi 885-1251") # rt cmd.do("color cyan, PRP8_y* and resi 1257-1375") # thumb/x cmd.do("color smudge, PRP8_y* and resi 1376-1649") # linker cmd.do("color wheat, PRP8_y* and resi 1840-2090") # rh cmd.do("color salmon, PRP8_y* and resi 2150-2395") # jab cmd.do("color yellow, PRP8_y* and resi 1650-1840") # endo elif arg.startswith(''): if 'hjab' in arg.lower(): cmd.select('PRP8_h* and resi 2103-2234') if 'hlinker' in arg.lower(): cmd.select('PRP8_h* and resi 1304-1577') if 'hrt' in arg.lower(): cmd.select('PRP8_h* and resi 812-1303') if 'hrh' in arg.lower(): cmd.select('PRP8_h* and resi 1767-2020') if 'he' in arg.lower(): cmd.select('PRP8_h* and resi 1581-1752') elif arg == 'align' or arg=='a': cmd.do(""" align /5gm6//6, /5lj3//V; align /5mps//6, /5lj3//V; align /6exn//6, /5lj3//V; align /5y88//D, /5lj3//V; align /5ylz//D, /5lj3//V; """) else: spl_help() cmd.extend('spl', spl) def spl_help(): print("""################ SPL ################# extract all (ea) - show colors - list all colors ###################################### """) spl_help() def __spl_color(): for m in mapping: protein = m[0] chain = m[1] color = m[2] print('\_' + ' '.join([protein, chain, color])) cmd.do('color ' + color + ', chain ' + chain) # cmd.do('color firebrick, chain V') # U6 def _spl_color(): """Color spl RNAs (for only color spl RNA and use 4-color code for residues see `spl2`) """ AllObj = cmd.get_names("all") for name in AllObj: if 'Exon' in name or 'exon' in name: cmd.color('yellow', name) if 'Intron' in name or 'intron' in name or '5splicing-site' in name: cmd.color('gray40', name) if '3exon-intron' in name.lower(): cmd.color('gray20', name) if name.startswith("U2_snRNA"): cmd.color('forest', name) if name.startswith("U5_snRNA"): cmd.color('blue', name) if name.startswith("U4_snRNA"): cmd.color('orange', name) if name.startswith("U6_snRNA"): cmd.color('red', name) cmd.do('color gray') # trisnrp cmd.do('color orange, chain V') # conflict cmd.do('color red, chain W') cmd.do('color blue, chain U') # cmd.do('color blue, chain 5') cmd.do('color forest, chain 2') cmd.do('color 
red, chain 6') cmd.do('color orange, chain 4') cmd.do('color yellow, chain Y') # shi cmd.do('color blue, chain D') # u5 cmd.do('color forest, chain L') # u2 cmd.do('color red, chain E') # u6 cmd.do('color yellow, chain M') cmd.do('color yellow, chain N') # afte branch cmd.do('color blue, chain U') # u5 cmd.do('color forest, chain Z') # u2 cmd.do('color red, chain V') # u6 cmd.do('color yellow, chain E') cmd.do('color black, chain I') # 5WSG # Cryo-EM structure of the Catalytic Step II spliceosome (C* complex) at 4.0 angstrom resolution cmd.do('color blue, chain D') # u5 #cmd.do('color forest, chain L') # u2 cmd.do('color yellow, chain B') cmd.do('color yellow, chain b') cmd.do('color black, chain N') cmd.do('color black, chain M') cmd.do('color black, chain 3') # orange cmd.do('color black, chain E') # yellow cmd.do('color black, chain i') cmd.do('color black, chain e') cmd.do('color black, chain e') cmd.do('color dirtyviolet, chain L') # bud31 cmd.do('color rasberry, chain L') # CERF1 cmd.do('color skyblue, chain A') # PRP8 cmd.do('color grey60, chain B') # BRR2 cmd.do('color dirtyiolet, chain L') # BUD31 cmd.do('color rasberry, chain O') # CEF1 cmd.do('color rasberry, chain S') # CLF1 cmd.do('color dirtyviolet, chain P') # CWC15 cmd.do('color lightteal, chain D') # CWC16/YJU2 cmd.do('color ruby, chain M') # CWC2 cmd.do('color violetpurple, chain R') # CWC21 cmd.do('color bluewhite, chain H') # CWC22 cmd.do('color deepteal, chain F') # CWC25 cmd.do('color black, chain I') # Intron cmd.do('color dirtyviolet, chain G') # ISY1 cmd.do('color palegreen, chain W') # LEA1 cmd.do('color palegreen, chain Y') # Msl1 cmd.do('color lightpink, chain K') # PRP45 cmd.do('color smudge, chain Q') # Prp16 cmd.do('color grey70, chain t') # Prp19 cmd.do('color lightblue, chain J') # PRP46 cmd.do('color chocolate, chain N') # SLT11/ECM2 cmd.do('color grey70, chain s') # Snt309 cmd.do('color slate, chain C') # SNU114 cmd.do('color brightorange, chain T') # SYF1 cmd.do('color forest, chain Z') # U2 cmd.do('color density, chain U') # U5 cmd.do('color deepblue, chain b') # U5_Sm cmd.do('bg gray') # cmd.do('remove (polymer.protein)') cmd.set("cartoon_tube_radius", 1.0) ino() def spl2(): """Color spl RNAs and use 4-color code for residues (for only color spl RNA see `spl`) """ AllObj = cmd.get_names("all") for name in AllObj: if 'Exon' in name or 'exon' in name: cmd.color('yellow', name) if 'Intron' in name or 'intron' in name or '5splicing-site' in name: cmd.color('gray40', name) if '3exon-intron' in name.lower(): cmd.color('gray20', name) if name.startswith("U2_snRNA"): cmd.color('forest', name) if name.startswith("U5_snRNA"): cmd.color('blue', name) if name.startswith("U4_snRNA"): cmd.color('orange', name) if name.startswith("U6_snRNA"): cmd.color('red', name) cmd.do('color gray') # trisnrp cmd.do('color orange, chain V') # conflict cmd.do('color red, chain W') cmd.do('color blue, chain U') # cmd.do('color blue, chain 5') cmd.do('color forest, chain 2') cmd.do('color red, chain 6') cmd.do('color orange, chain 4') cmd.do('color yellow, chain Y') # shi cmd.do('color blue, chain D') # u5 cmd.do('color forest, chain L') # u2 cmd.do('color red, chain E') # u6 cmd.do('color yellow, chain M') cmd.do('color yellow, chain N') # afte branch cmd.do('color blue, chain U') # u5 cmd.do('color forest, chain Z') # u2 cmd.do('color red, chain V') # u6 cmd.do('color yellow, chain E') cmd.do('color black, chain I') # 5WSG # Cryo-EM structure of the Catalytic Step II spliceosome (C* complex) at 4.0 angstrom resolution cmd.do('color 
blue, chain D') # u5 #cmd.do('color forest, chain L') # u2 cmd.do('color yellow, chain B') cmd.do('color yellow, chain b') cmd.do('color black, chain N') cmd.do('color black, chain M') cmd.do('color black, chain 3') # orange cmd.do('color black, chain E') # yellow cmd.do('color black, chain i') cmd.do('color black, chain e') cmd.do('bg gray') cmd.do('remove (polymer.protein)') cmd.color("red",'resn rG+G and name n1+c6+o6+c5+c4+n7+c8+n9+n3+c2+n1+n2') cmd.color("forest",'resn rC+C and name n1+c2+o2+n3+c4+n4+c5+c6') cmd.color("orange",'resn rA+A and name n1+c6+n6+c5+n7+c8+n9+c4+n3+c2') cmd.color("blue",'resn rU+U and name n3+c4+o4+c5+c6+n1+c2+o2') cmd.set("cartoon_tube_radius", 1.0) ino() def _spli(): """ # this trick is taken from Rhiju's Das code color red,resn rG+G and name n1+c6+o6+c5+c4+n7+c8+n9+n3+c2+n1+n2 color forest,resn rC+C and name n1+c2+o2+n3+c4+n4+c5+c6 color orange, resn rA+A and name n1+c6+n6+c5+n7+c8+n9+c4+n3+c2 color blue, resn rU+U and name n3+c4+o4+c5+c6+n1+c2+o2 # #cmd.color("yellow", "*intron*") #cmd.color("yellow", "*exon*") #cmd.show("spheres", "inorganic") #cmd.color("yellow", "inorganic") """ cmd.color("orange", "U4_snRNA*") cmd.color("red", "U6_snRNA*") cmd.color("blue", "U5_snRNA*") cmd.color("green", "U2_snRNA*") cmd.color("red",'resn rG+G and name n1+c6+o6+c5+c4+n7+c8+n9+n3+c2+n1+n2') cmd.color("forest",'resn rC+C and name n1+c2+o2+n3+c4+n4+c5+c6') cmd.color("orange",'resn rA+A and name n1+c6+n6+c5+n7+c8+n9+c4+n3+c2') cmd.color("blue",'resn rU+U and name n3+c4+o4+c5+c6+n1+c2+o2') try: from pymol import cmd except ImportError: print("PyMOL Python lib is missing") else: #cmd.extend("spl", spl) cmd.extend("spl2", spl2) # colors taken from https://github.com/maxewilkinson/Spliceosome-PyMOL-sessions cmd.set_color('lightgreen', [144, 238, 144]) cmd.set_color('darkgreen', [0, 100, 0]) cmd.set_color('darkseagreen', [143, 188, 143]) cmd.set_color('greenyellow', [173, 255, 47]) cmd.set_color('coral', [255, 127, 80]) cmd.set_color('darkorange', [255, 140, 0]) cmd.set_color('gold', [255, 215, 0]) cmd.set_color('lemonchiffon', [255,250,205]) cmd.set_color('moccasin', [255,228,181]) cmd.set_color('skyblue', [135,206,235]) cmd.set_color('lightyellow', [255,255,224]) cmd.set_color('powderblue', [176,224,230]) cmd.set_color('royalblue', [65,105,225]) cmd.set_color('cornflowerblue', [100,149,237]) cmd.set_color('steelblue', [70,130,180]) cmd.set_color('lightsteelblue', [176,196,222]) cmd.set_color('violetBlue', [40, 0, 120]) cmd.set_color('mediumpurple', [147,112,219]) print(""" PyMOL4Spliceosome ----------------------- spl hprp8 spl prp8 """)
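As a usage note, the helpers register themselves as PyMOL commands via cmd.extend, so once the module has been run in a session they can be driven from the PyMOL command line or from Python. A minimal sketch, assuming spliceosome structures are already loaded in the session:

# Minimal sketch of driving the registered commands from Python inside PyMOL.
from pymol import cmd

cmd.do("spl help")          # print the available actions
cmd.do("spl color")         # apply the spliceosome colour scheme
cmd.do("spl extract all")   # extract annotated components into separate objects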
With an integrated, multi-disciplinary CAD environment like SOLIDWORKS, your product development and manufacturing teams can easily master 3D CAD models for all their design tasks. As a complete 3D CAD solution, SOLIDWORKS allows all stakeholders to work collaboratively with one another. Manufacturers have embraced automation because it provides many competitive advantages, and product developers increasingly face design, workflow, and data requirements that extend beyond the capabilities of traditional, single-point 3D modeling and 2D drawing solutions. Meeting these emerging automation and data-sharing demands means designers need to use an integrated 3D product development system. SOLIDWORKS design-to-manufacturing solutions can help organisations lead this automation transformation. Instead of working in a linear, sequential manner, product developers can leverage an integrated 3D product development ecosystem like SOLIDWORKS to implement a more efficient, concurrent, "hub-and-spoke" approach that enables all functions to access and work with the latest 3D product data. This universal or "master" data is updated automatically for all functions whenever design changes are made. A fully integrated 3D product development solution like SOLIDWORKS allows every function to work directly with the master 3D product data at the centre of the process. With an integrated 3D product development ecosystem like SOLIDWORKS, you can reduce costs by shortening design cycles and accelerating time to market, as well as by eliminating duplicative, unnecessary tasks and associated costs. This means designers can avoid multiple rounds of physical prototyping, as well as higher-than-necessary volumes of scrap and rework arising from data miscommunications or revision errors. Because all functions leverage the master product data prior to production, the fidelity or completeness of a design is much improved, lessening the number of engineering change orders (ECOs) required. Working with SOLIDWORKS means a design change doesn't result in the need to rework tooling, documentation, packaging, or marketing materials from scratch. Because these functions began working with the master 3D product model in a concurrent manner, their materials will update to reflect changes to the master model automatically. With a central, master set of product data maintained by an integrated PDM system, the opportunities for human error related to data manipulation (instances requiring someone to import, export, translate, convert, rebuild, or repair data) simply disappear. Using an integrated 3D product development environment like SOLIDWORKS also minimizes the chances for errors related to having multiple personnel update data to reflect design changes. More prototyping can be done virtually with an integrated 3D product development solution like SOLIDWORKS because of the utility, speed, and accessibility of integrated simulation tools. With SOLIDWORKS you can also improve the effectiveness and efficiency of quality assurance inspection efforts by using CAD-integrated inspection tools. An integrated 3D product development environment like SOLIDWORKS enhances collaboration internally, with external partners, and with the growing SOLIDWORKS user community.
Instead of using incompatible design packages and then trying to bring it all together, mechanical, electronic, and electrical engineers can collaborate on the design in the same design environment using SOLIDWORKS mechanical, SOLIDWORKS PCB electronic, and SOLIDWORKS Electrical design software. All the solutions are there and you never leave the software – that's unique to SOLIDWORKS. Innovation can disrupt existing markets, establish new ones, and lead to market dominance. Complacency is arguably the opposite of innovation. Concurrent design with an integrated platform pushes personnel out of their isolated silos into a more agile, flexible, and collaborative work environment. With less concern for the busywork that isolated approaches formerly created, product development and manufacturing professionals will be energised by the freedom to investigate innovative, outside-the-box approaches and concepts via improved visualisation, simulation, and rapid prototyping. Although many of these ideas may not pan out, the ones that do may make the difference between success and failure. Download the free whitepaper "The Top Five Reasons to Switch to SOLIDWORKS for Product Development" to find out more about switching to integrated SOLIDWORKS solutions.