text
stringlengths 29
850k
|
---|
class Error(Exception):
"""Base class for rdflib exceptions."""
def __init__(self, msg=None):
Exception.__init__(self, msg)
self.msg = msg
class TypeCheckError(Error):
"""Parts of assertions are subject to type checks."""
def __init__(self, node):
Error.__init__(self, node)
self.type = type(node)
self.node = node
class SubjectTypeError(TypeCheckError):
"""Subject of an assertion must be an instance of URIRef."""
def __init__(self, node):
TypeCheckError.__init__(self, node)
self.msg = "Subject must be instance of URIRef or BNode: %s(%s)" \
% (self.node, self.type)
class PredicateTypeError(TypeCheckError):
"""Predicate of an assertion must be an instance of URIRef."""
def __init__(self, node):
TypeCheckError.__init__(self, node)
self.msg = "Predicate must be a URIRef instance: %s(%s)" \
% (self.node, self.type)
class ObjectTypeError(TypeCheckError):
"""Object of an assertion must be an instance of URIRef, Literal,
or BNode."""
def __init__(self, node):
TypeCheckError.__init__(self, node)
self.msg = "Object must be instance of URIRef, Literal, or BNode: %s(%s)" % \
(self.node, self.type)
class ContextTypeError(TypeCheckError):
"""Context of an assertion must be an instance of URIRef."""
def __init__(self, node):
TypeCheckError.__init__(self, node)
self.msg = "Context must be instance of URIRef or BNode: %s(%s)" \
% (self.node, self.type)
class ParserError(Error):
"""RDF Parser error."""
def __init__(self, msg):
self.msg = msg
def __str__(self):
return self.msg
class SerializerDispatchNameError(Error):
"""No name set..."""
def __init__(self, msg):
Error.__init__(self)
self.msg = msg
class SerializerDispatchNameClashError(Error):
"""Name clash..."""
def __init(self, msg):
Error.__init__(self)
self.msg = msg
class ParserDispatchNameError(Error):
"""No name set..."""
def __init__(self, msg):
Error.__init__(self)
self.msg = msg
class ParserDispatchNameClashError(Error):
"""Name clash..."""
def __init(self, msg):
Error.__init__(self)
self.msg = msg
|
Legs have different health problems. One of the most prevalent and torturous in the U.S is venous insufficiency. It is a disease that affects legs veins and does not put your heart in danger. When this disease attacks veins, it triggers certain symptoms: spider veins, varicose veins, venous ulcers, blood clots, changes in skin texture and color, swelling, itching, cramping and more. The earliest symptoms include spider veins, heaviness in legs, pain, fatigue, cramping, swelling and itching. If you are experiencing these symptoms, call the best Kips Bay vein treatment clinic without hesitation.
By doing so, you can discover as whether you have abnormal leg veins or another health condition that affects veins. This means that a medical action can be taken now to prevent severe complications that are triggered by venous insufficiency. Besides causing you a lot of physical discomfort, abnormal leg veins can lower your self-esteem and make you self-conscious. By seeing a spider vein doctor in Kips Bay, your spider veins or varicose veins can be treated medically until they have all disappeared. This will restore your self confidence and relieve your pain and discomfort.
Prior to stopping over at the best vein treatment clinic Kips Bay, it’s necessary to try and understand your vein problem. It is possible that your leg veins are more visible but you have no idea why they have grown bigger. Two types of abnormal leg veins are rather common: spider veins and varicose veins. If you have spider veins (also called thread veins, telangiectasias or sunburst veins), then what you should be seeing are tiny blood vessels that are right beneath your skin surface and have formed a spider web pattern or a tree branch pattern. As well, thread veins are blue, red or purple and tend to affect a larger area of skin. While these abnormal veins are barely dangerous, they can hurt at times and are definitely ugly to look at. Being very superficial, colorful veins, thread veins are easily detectable and should be eliminated by a competent vein doctor who works for the most renowned Kips Bay vein treatment clinic.
Varicose veins are need treatment, unlike thread veins, because they are triggered by a venous reflux problem within a vein. What’s more, varicose veins, varices or varicosities entail large, swollen veins that tend to have a winding appearance. They occur due to valves failure with a vein, a problem that causes some of the blood traveling back to the heart to leak and flow backwards. Then it pools in leg veins, putting extreme pressure on the vein walls. When varices are present, you are likely to feel heaviness in legs as well as extreme fatigue, pain and itching. Varicose veins should be seen by the top spider vein doctor in Kips Bay and treated as soon as possible to stop venous insufficiency from progressing. There are people who have bigger odds of developing thread veins and varicose veins.
First of all, pregnant women keep on adding weight to support the life of the growing fetus. While this is awesome, extra weight can exert a lot of pressure on leg veins and this can cause varicosities to develop. All women who take hormonal contraceptives are more prone to abnormal leg veins too, as these disrupt the way their hormones are designed to work. Men and women can also develop spider veins or varices because the problem runs in their families (it’s hereditary) or because they have to work while standing or sitting down for several hours. Living a sedentary lifestyle can make you add more weight and this can strain your leg veins, resulting to abnormal veins that must be seen by a well educated spider vein doctor in Kips Bay region.
These could also form due to serious leg trauma or the normal aging process. If you have any of the two or both, the right action to take right now is to call the most reputable vein treatment clinic Kips Bay and talk to a vein specialist. He or she will examine your leg vessels and determine the best way to treat you.
Once you decide to seek help, take the time to pick the most dependable Kips Bay vein treatment clinic. After locating a clinic you can trust, call to make the first appointment with a competent vascular doctor like Dr. Michael Nguyen, MD. Then go to this Kips Bay vein treatment clinic when your doctor asks you to. Here is the simple, but thorough evaluation process at the doctor’s office.
Medical history – As others doctors do, you vascular doctor will first gather information prior to recommending a treatment approach for your vein disease. This is so that they can rule out other diseases that affect veins, especially the deep veins. As the top spider doctor in Kips Bay will ask you several questions, be ready to give complete answers. If there are other victims of spider veins or vascular veins in your family, inform your doctor. Talk about your lifestyle, current symptoms and any past treatments you might have been given at another Kips Bay vein treatment clinic. Additionally, tell your doctor about your pregnancies, weight problems, and any type of drugs you could be taking now.
Physical exam – The vascular specialist will examine your legs by observation and might take snapshots of your veins. While the doctor at the leading Kips Bay vein treatment clinic can discover abnormal leg veins through observation only, he or she will carry out medical tests. First of all, they will use a hand-held Doppler scanner to check for symptoms of venous reflux disease. The scanner looks like a stethoscope as the doctors at any given Kips Bay vein treatment clinic use it to listen to the sounds of your blood flow and to assess them.
Next, they will do a Duplex ultrasound test using a special type of gel that is smeared to the leg and then a probe is moved on top of your skin. This test is not invasive; it uses sound waves to make images of your venous system. Further, the test reveals your blood flow, helping your doctor decide the source of your reflux problem. With duplex ultrasound, the spider vein doctor in Kips Bay can correctly exclude other vein disorders. And if a positive diagnosis is made, your vein expert will move to the next step of deciding the type of treatment you should get.
After carrying out the diagnosis process and finding out that you indeed have spider veins and/or varicose veins, your spider vein doctor in Kips Bay will design your treatment plan. As there are several ways to treat abnormal veins in legs, the Kips Bay vein treatment clinic physician and you will discuss each in details prior to picking the best picks. It might happen that the best strategy is a combination one, where two or more tricks will be used to eliminate your vein problem. In most cases, this strategy is used when one has severe varicose veins that cannot be treated with just one method.
As for treating spider veins, sclerotherapy is often enough to get rid of them because they are as tiny as blood capillaries. When discussing various treatments with your Kips Bay vein treatment clinic doctor, make sure that you ask several questions about each method. Ask about its benefits, pros and cons and level of effectiveness. Any Kips Bay vein treatment clinic can provide almost all varicose veins treatment options. However, it’s only a few that can be thorough, careful and dependable during the process. Ensure that your current spider vein doctor in Kips Bay is reputable, well-educated, experienced, courteous and meticulous.
If you doubt his or her abilities, find a vein treatment center Kips Bay area that is associated with Dr.Michael Nguyen’s vein treatment centers. This will be a prudent way to save money and time. When it comes to deciding the best treatment method, it will be necessary to understand that all techniques are aimed at destroying or removing the problematic veins. The rest of the healthier veins are left to carry out the work that the damaged or removed veins used to do. Even with this, the healthier veins will carry blood from the legs to the heart more efficiently. You have a network of veins in legs, so the destruction or removal of a number of them will not be tragic. The remaining ones will keep working effectively. No matter the treatment method you select, the goal will be the same— to improve your quality of life.
If your only trouble is spider veins, your spider vein doctor Kips Bay area will choose to offer you sclerotherapy injections. Although there is no need to treat your thread veins, I have them and I know they can cause pain. If yours are painful, then treating them via sclerotherapy will be a wise thing to do. This technique entails getting a shot with a sclerosing chemical that causes a blood vessel to close and collapse. As the main technique for treating spider veins, sclerotherapy is going to work. It is an outpatient therapy that targets thread veins and medium size varicose veins.
There can be up to three sessions though, if your spider veins cover an extensive area. The spider vein doctor in Kips Bay will use a fine needle to inject a sclerosing agent into the areas with dilated thread veins. Before this, he or she might test the reaction of your body towards the sclerosing agent by injecting a few veins. Relying on the reaction, he or she might adjust the amount of the sclerosing chemical. This agent will then cause inflammation within the vein that will in return cause vein walls to collapse and stick together. After the vein collapses, it will no longer do the task it was designed to do.
Instead, it will be reabsorbed by the body and the blood that used to flow through it will be re-routed to a healthier vein. On the day of treatment, the spider vein doctor in Kips Bay might ask you to avoid applying your lotion or cream to your leg skin. And if you are worried about feeling pain, chances are you will feel it after treatment at the injection sites. However, a mild stinging or burning sensation may be felt during the treatment process.
Once you have done sclerotherapy, the next obvious thing is recovery. Most people feel normal after undergoing the procedure at their preferred vein treatment clinic Kips Bay. They are able to work and run their errands just fine. However, bruising is a common side effect, although it is not severe as it can go away on its own. After twelve weeks, you might start noticing that most varicose veins or spider veins have disappeared. However, some abnormal veins might not disappear completely after the first treatment. Thus, you may not have a better choice than having them injected again.
Another issue is that blood may get stuck in a few thread veins and in this case, you may have to go back to the Kips Bay vein treatment clinic and have a vascular doctor remove this blood via a small hole. It is advisable to report any side effects that cannot seem to disappear and are making your life miserable.
If you have medium to large varicose veins, your spider vein doctor in Kips Bay might decide to combine endovenous therapies with sclerotherapy. Endovenous therapies either use laser energy or radio frequency energy (radio waves) to treat abnormal leg veins. They are minimally invasive, have over 95 percent success rate and have fewer side effects than sclerotherapy. These are not used to treat small spider veins;though, they are designed for the removal of severe varicosities.
Another thing to keep in mind is that after treatment via sclerotherapy or endovenous therapy, you will need to wear compression stockings as well as a bandage or gauze at the injection or incision sites. Compression stockings are provided based on how severe your vein problem is as they have different strengths. Thus, your spider vein doctor in Kips Bay will provide just the right kind. As well as wearing these, it is important to elevate your legs, exercise by walking every day, avoid strenuous activities and hot baths and protect the treated areas from sun exposure. Finally, your vein treatment clinic Kips Bay will give you tips on how to recover without suffering an infection.
|
from django.contrib import messages
from django.utils.translation import ugettext_lazy as _
from misago.admin.views import generic
from misago.users.models import WarningLevel
from misago.users.forms.admin import WarningLevelForm
class WarningsAdmin(generic.AdminBaseMixin):
root_link = 'misago:admin:users:warnings:index'
Model = WarningLevel
Form = WarningLevelForm
templates_dir = 'misago/admin/warnings'
message_404 = _("Requested warning level does not exist.")
class WarningsList(WarningsAdmin, generic.ListView):
ordering = (('level', None),)
class NewWarning(WarningsAdmin, generic.ModelFormView):
message_submit = _('New warning level "%(name)s" has been saved.')
class EditWarning(WarningsAdmin, generic.ModelFormView):
message_submit = _('Warning level "%(name)s" has been edited.')
class DeleteWarning(WarningsAdmin, generic.ButtonView):
def button_action(self, request, target):
target.delete()
message = _('Warning level "%(name)s" has been deleted.')
messages.success(request, message % {'name': target.name})
class MoveDownWarning(WarningsAdmin, generic.ButtonView):
def button_action(self, request, target):
try:
other_target = WarningLevel.objects.filter(level__gt=target.level)
other_target = other_target.earliest('level')
except WarningLevel.DoesNotExist:
other_target = None
if other_target:
other_target.level, target.level = target.level, other_target.level
other_target.save(update_fields=['level'])
target.save(update_fields=['level'])
message = _('Warning level "%(name)s" has '
'been moved below "%(other)s".')
targets_names = {'name': target.name, 'other': other_target.name}
messages.success(request, message % targets_names)
class MoveUpWarning(WarningsAdmin, generic.ButtonView):
def button_action(self, request, target):
try:
other_target = WarningLevel.objects.filter(level__lt=target.level)
other_target = other_target.latest('level')
except WarningLevel.DoesNotExist:
other_target = None
if other_target:
other_target.level, target.level = target.level, other_target.level
other_target.save(update_fields=['level'])
target.save(update_fields=['level'])
message = _('Warning level "%(name)s" has '
'been moved above "%(other)s".')
targets_names = {'name': target.name, 'other': other_target.name}
messages.success(request, message % targets_names)
|
My suspension work on my van is complete and I have to say I am very happy with the results. Its is a 2004 Chevy Express 3500 Roadtrek 190 Popular.
Wheels are measured from the top of the wheel wheel to the ground going straight through the center of the wheel. The “dump” measurement was from the bottom of the ground effects at the dump valve to the ground. This point is more-or-less level of the bottom of the plumbing and easy to measure. It is the area most prone to scraping.
Vastly improved. I went over several steep driveways at local businesses without incident. This includes ones where other vehicles had left a large number of scars on the pavement. I did not use the trick of going at the entrance at an angle to reduce scraping, I went straight in. I was unable to find a driveway that produced any scraping at all. This was my primary reason for doing the upgrade and I would rate this as an unqualified success.
Very substantially improved. There is a bump I travel over regularly at an angle. This bump would cause the vehicle to lurch violently. I can now take it at a moderate speed without problems. Also the motion is “once and done”. This is true of all suspension movements. If it moves, it moves only once.
Much nicer. I had some minor wandering at highway speeds, it felt a little “drifty.” This seems to be gone. The van feels lot more “planted” on the road. More under control. I would suspect that this is because the suspension is keeping the van level. Before it would lean to the outside of turns but it no longer does this. Some people have worried that raising the center of gravity of the van would make it feel like it was tipping over but with the reduced tendency of the van to lean to the outside of turns it actually feels more stable.
Somewhat better. You still feel every bump in the road. The bumps lack a lot of the jarring quality that they had before. It’s like getting hit with a rubber hammer instead of a ball peen hammer. I don’t think that 10 miles of washboard road would be very pleasant. It rides like what it is, a work truck. I think that the harshness is better but since I am focusing so much on the bumps it’s really hard to say. I would like to get more “cush” in the front, I am hoping that as the springs settle things will be less harsh.
Many thanks to all for the excellent information.
I know the shocks are a little high but but I liked the price on the air bags and they confirmed with me over the phone that they had the shocks in stock and could ship right away. No one else seemed to actually have them. Well they didn’t have them either — nobody that I could find actually had them — and it was 20 days between ordering and the arrival of the shocks.
I had a mechanic neighbor who owed me some favors so I got him to help me do the airbags, compressor and rear shocks . The coil springs, front shock install and the alignment came to $330.
The coil springs, front shock install and the alignment came to $330.
I did the springs and shock myself. Alignment is usually about $100. Shocks are easy. But I would gladly pay $230 for the spring install. I'm fairly handy but those heavy springs were kinda scary to install.
Fantastic to see another happy camper from the modifications for height!
It appears that the current list of parts and specs is working well, with good results, so the changes are getting to be much more common.
The heights listed for after lifting are right at what we would expect to see.
See my post in "Tweaks, Mods & Projects"
This photo is taken from the passenger side. The compressor is mounted inboard on one of the driver’s side rails. In the photo the front of the van is to your right.
I drew a red line in the photo. It marks the front of the fiberglass drain pan where it meets the cab section.
We were unable to figure a way to mount the bags as delivered. There were welded in place. In other photos I have seen a bracket but we did not have one.
The backup schrader valves were mounted inside the shower compartment. The tees in the air lines that go to the compressor are right behind the shower compartment. This arrangement let us get it done without needing additional air tubing.
I have a 3500 Expressed based Roadtrek and a Sprinter-Based van. My wife rides in the back on the couch and always complained about being thrown around. The fix in the Sprinter was easy: we installed VBAir's air suspension system. I have not seen anything for the 3500. How bumpy is it in the back for someone riding on the sofa after your upgrades? Thanks in advance!
Others can speak for their impressions, as all of this is very much an individual perception.
We have had 3 different variations of the rear suspension in our 07 Roadtrek 190.
All stock gave the ride everyone sees, not too bad on small bumps, but punishing and noisy on big bumps.
All stock but with the addition of air bags to lift the rear a bit off the huge overload spring leafs. Decent on the small and up into most of the normal bumps, but still harsh and noisy on the bigger bumps.
Non stock in that we removed the overload leafs from the rear springs to keep them from hitting on big bumps, and changed the airbags to ones that have an internal urethane bump stop in them. This change smoothed out all bumps quite well for us, and we are quite happy with it. The disclaimer is that we are the only ones that have done this change to date, so unproven in the mainstream.
|
import re
import regex
class Rule(dict):
''' breaks the rule into its parts along with validating it '''
def __init__(self, rule):
''' self['valid'] will be changed at any point to show whether the rule is valid or not. Error will tell you where.'''
self['valid'] = True
self['error'] = None
self['rawRule'] = rule
self.header()
def __getattr__(self, i):
''' Get any value from the dict '''
return self[i]
def __str__(self):
# Return the original rule.
return self['rawRule']
def __len__(self):
# Return the amount of options minus [error, valid, rawRule]
return len(self.keys()) - 3
def header(self):
''' maps the header options to self'''
if re.match(regex.rule_header, self['rawRule']):
header = re.match(regex.rule_header, self['rawRule']).groupdict()
for option in header:
self[option] = header[option]
else:
self['valid'] = False
self['error'] = "header"
def generalOptions(self):
pass
def payloadDetection(self):
pass
def nonpayloadDetection(self):
pass
def postDetection(self):
pass
def checkOptions(self):
''' Make sure all the options are valid '''
pass
def checkGutters(self):
''' Check between all the options to make sure there is nothing unknown '''
pass
if __name__ == "__main__":
myFile = open("rules/community.rules")
rule = 'alert tcp 192.168.100.40 65535 -> $HOME_NET !45:56 (content:"|00 01 86 a5|"; msg:"This is the test rule.";)'
r = Rule(rule)
print r.srcaddress
'''
i = 0
rule = {}
for line in myFile:
rule[i] = Rule(line)
if not rule[i].valid:
print rule[i].rawRule
i += 1
'''
|
From the 1st day I met you, I knew you were the only one that would change my life forever - and you did, with such a positive, powerful, and peaceful vibration. You make me wake everyday with your name being the first thing my heart beats to, and with you being the first person that wakes me up in the morning, saying: 'I Love You'. You came to my life as a heartbeat, but leaving me with such a big kill - I REALLY NEED YOU BACK. Please, love. I wrote you 15 short poems asking for your forgiveness, begging for you to come back, and pleading for my innocence. Where'd you go, baby? I miss you, so. Seems like it's been forever, that you've been gone. Please come back home (in my heart), love. I need you by my side every time I take a breath, my heart beats for you every time I think of you, and I am deep groundedly in love with you. Please don't take that away from me, at least just forgive me and let's move on. I'm in so much pain to have you back, and in so much of a depression thinking I'm ever not going to get you back. I will do ANYTHING, just about ANYTHING to have you back - but PLEASE, PLEASE, PLEASE, PLEASE; I really need you, love. I will die, but don't ever leave me. I'm so so so sorry about everything, and I sincerely apologize. Forgive me? Give me one more chance to prove my deep love for you. Baby, please. I know you love me, and you'll give another chance. I beg of you, love. Please, please, please.
Kneeling down with both knees on the ground, and tears are rushing through my eyes & heavily flowing my face - I ask you once again: 'Will you be my girlfriend?' Please baby, please. I don't know what I'll do without you in life. Right now, I'm a dark gray sky with no hopes of a shining sun; I'm bleeding heart with no guarantee of a fix; I'm a punctured lung with no chance of breathing in your fresh fraguanced air.
I'll definitely wait your verdict. I just want you to know that, I've always loved you and I'll ALWAYS LOVE YOU - ONLY YOU.
thats some begging letter huni, i have seen some, and using the words to express ill health, and you being a lesbian, from the cameroon in africa, where there is anti-lgbt feeling in that country, dont know huni, you could be fake, another scammer this site gets a lot from africa and russia, trying to scam the nice folks here.
Thanks for the comment. Just to make sure the message gets through, I do agree that I have roots taking me back to Africa, but at least I am not that cheap to sell myself for an empty joke. No offence intended, but it's true. I have been a victim myself of scammers and imposters, but I learnt from the mistakes, and moved on. When you really love someone Ailean, it shows. It really does. I don't know if you noticed, but it's more of an 'Apology' letter. Hope you understand. Thanks.
thats fine huni, as for friends i have some in Afrika, most are fighting LGBT rights with Amnesty International, as that seems to be the only official human rights group that can operate in hostile places.
And yes huni, love hurts, and one has to move on, not much good going back, at the end of the day forgive but never forget.
Thanks for understanding. It happens. First time relationships usually hurt the most, but once you get past the pain, the memories, and the time spent on it; you get such an experience that helps you move on and heals you. With time, of course. Any such thing happened to you before?
Well, hope you have a great year to come filled with happiness, love, and success. God Bless.
That was a beautiful protestation of love and the need for forgiveness.
Thanks. Hope it comes true at some point of time.
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, Matt Martz <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
class ModuleDocFragment(object):
# Standard files documentation fragment
# Note: mode is overridden by the copy and template modules so if you change the description
# here, you should also change it there.
DOCUMENTATION = r'''
options:
mode:
description:
- The permissions the resulting file or directory should have.
- For those used to I(/usr/bin/chmod) remember that modes are actually octal numbers.
You must either add a leading zero so that Ansible's YAML parser knows it is an octal number
(like C(0644) or C(01777)) or quote it (like C('644') or C('1777')) so Ansible receives
a string and can do its own conversion from string into number.
- Giving Ansible a number without following one of these rules will end up with a decimal
number which will have unexpected results.
- As of Ansible 1.8, the mode may be specified as a symbolic mode (for example, C(u+rwx) or
C(u=rw,g=r,o=r)).
- If C(mode) is not specified and the destination file B(does not) exist, the default C(umask) on the system will be used
when setting the mode for the newly created file.
- If C(mode) is not specified and the destination file B(does) exist, the mode of the existing file will be used.
- Specifying C(mode) is the best way to ensure files are created with the correct permissions.
See CVE-2020-1736 for further details.
type: raw
owner:
description:
- Name of the user that should own the file/directory, as would be fed to I(chown).
type: str
group:
description:
- Name of the group that should own the file/directory, as would be fed to I(chown).
type: str
seuser:
description:
- The user part of the SELinux file context.
- By default it uses the C(system) policy, where applicable.
- When set to C(_default), it will use the C(user) portion of the policy if available.
type: str
serole:
description:
- The role part of the SELinux file context.
- When set to C(_default), it will use the C(role) portion of the policy if available.
type: str
setype:
description:
- The type part of the SELinux file context.
- When set to C(_default), it will use the C(type) portion of the policy if available.
type: str
selevel:
description:
- The level part of the SELinux file context.
- This is the MLS/MCS attribute, sometimes known as the C(range).
- When set to C(_default), it will use the C(level) portion of the policy if available.
type: str
unsafe_writes:
description:
- Influence when to use atomic operation to prevent data corruption or inconsistent reads from the target file.
- By default this module uses atomic operations to prevent data corruption or inconsistent reads from the target files,
but sometimes systems are configured or just broken in ways that prevent this. One example is docker mounted files,
which cannot be updated atomically from inside the container and can only be written in an unsafe manner.
- This option allows Ansible to fall back to unsafe methods of updating files when atomic operations fail
(however, it doesn't force Ansible to perform unsafe writes).
- IMPORTANT! Unsafe writes are subject to race conditions and can lead to data corruption.
type: bool
default: no
version_added: '2.2'
attributes:
description:
- The attributes the resulting file or directory should have.
- To get supported flags look at the man page for I(chattr) on the target system.
- This string should contain the attributes in the same order as the one displayed by I(lsattr).
- The C(=) operator is assumed as default, otherwise C(+) or C(-) operators need to be included in the string.
type: str
aliases: [ attr ]
version_added: '2.3'
'''
|
When you blow up the personal vehicle of someone who has a bounty on their head you will not get charged for their insurance excess. You will still get a bad sport warning though!
If you steal a car from a NPC note that they can and will sometimes put a bounty on your head too. It’s not just other human players that can do it!
When doing the mission “Out of Court Settlement” you should intentially fail it by storing the $22K vehicle in your garage, and then you’ll be able to immediately grab another one.
Then it’s simply a case of using all the spaces available in your garage and proceeding to pass the mission.
If you have another player in your car as the passenger and they set a point on their GPS it will automatically be shown on your GPS map too.
Passive mode is entered by going to the pause menu, then Online –> Options –> Passive Mode. While you can’t be killed by getting shot by another person on foot, or robbed during passive mode, you can get run over by someone else with a car and also shot by someone from a car. So be careful where you choose to activate it. It will also cost you $100 to enter.
If you want to change your “personal vehicle” you need to acquire/steal the one you want (not a premium one though) and take it to Los Santos Customs. Then go to “Loss/Theft Prevention” and buy the tracker for $2K. This will make it your default vehicle.
One of the benefits of having a personal vehicle is that you can get it back really quickly.
If you’re in free roam and it’s a long way away then go to your phone and accept a job (or so a quick one) and your vehicle will be transported to you. You don’t even need to finish (or even start the job) and can just cancel it. X on the Xbox 360 or Square on the PS3 is how you cancel jobs from the job selection screen on the cell phone.
If you’re looking for some ideas as to how to respray your vehicles then look no further than the imgur gallery linked below as well as the color combos that follow!
Once you’ve built up a bit of cash in GTA Online (LINK TO HINTS) you’ll probably want to spend it on property. Luckily we have a nice spreadsheet embedded below (click for the full sized version) that will show you all the property you can purchase in GTA Online along with the type, location, region, price and garage size of each. Very handy!
You need to have enough cash for the car you want.
Ensure your garage is full.
Go buy the car but sit on the screen where you are meant to select which of your current cars you want to replace.
Press pause and go to the PS or Xbox games store. It should start loading and the screen should begin zooming out to the clouds. This is when you need to press A twice (Xbox 360) or X twice (PS3). You should hear the “car purchased” sound.
After the store loads just back out of it and head to your garage. You should have your new car for free!
When you have assigned an action like eating a snack to the quick action button (which can be activated by pressing in both sticks (ie L3 and R3) then make sure you are fully pressing and holding the two sticks in so that your character completely consumes the snack/bottle/whatever she or he is eating or drinking. This will ensure you get the full benefit of its effects.
If there are some blue moving objects on your minimap that aren’t being auto-tracked by the GPS, you can switch to Quick GPS and they’ll appear.
The Quick Menu in GTA Online is very important. Access it by holding “Select” or “Back”. It is where you can change clothes and accessories, start a race, change your vehicle settings (like deciding who can enter and who can’t) and also enter passive mode.
If you press RT (Xbox 360) or R2 (PS3) after everyone has liked (or not) the previously played mission you can vote to replay it. This works even if the option to replay the mission is not displayed on the screen.
You’ll need to have access to multi-car garages in order to do this, but it’s really useful.
What you need to do is try to obtain and then mod at least one type of each vehicle that there is.
This way whenever you go to race with others, you can have a custom modded car no matter the vehicle class chosen.
Reddit user no1dead was kidn enough to create Spotify playlists for all the radio stations in GTAV! Simply click the link below to be taken to the relevant playlist in your Spotify app.
If you stop the pilot of a Merryweather Helicopter (you need to be level 10 to unlock the chopper pickup) at a low altitude then a prompt should show up. You can dictate where it stop by setting a waypoint while you’re in the chopper.
Then hold X on the Xbox 360 (Square on the PS3) and you’ll be able to rapple from the copter.
The Rat Loader might look cool, but it’s an absolute dog to try and drive. Anyway, it’s still good for cruising around in and if you want to find it in GTA Online then check this video for the spawn location.
Follow these steps to rearrange the vehicles in your garage without losing them!
Your garage must be full with at least one car you don’t mind losing.
Get in the car you want to move and drive out of the garage.
Go to phone, internet and order any new car (maybe the free Elegy?).
When you order you’ll be asked which car you want to replace. Choose the one your sitting in.
Wait until you get the text saying your new car has arrived.
Drive into your garage in the car you were meant to replace. When you do you will be asked which car you want to replace and can drive into any garage parking spot you like.
The car you choose will be deleted.
Head to the in-game menu and from there select “Info” where you’ll be able to track all your recent activity, including who you’ve played with. This can be an excellent way to recall who shot/robbed/punked you in-game. Use it to track them down and exact your revenge!
To get rid of your Wanted Level simply drive into an auto-shop!!
If you’re going to replace a vehicle in your garage then make sure you carry out the replacement first and only after it’s been saved in your garage take it out and mod it. Otherwise if you mod it first and then use it to replace the old car you can lose the mods!
You can either select to replay it from the mission selection screen after a mission ends.
If you’ve already gone back into free roam you need to call Gerald and ask him for a job. Do the job he gives you and then choose your mission on the selection screen.
Ask/wait for another player to give you an invite to do the mission.
You might not think you can, but it’s actually possible to re-supply with more armor, health and ammo half-way through completing a job. So if you get low on ammo or health, or need to get more armor in anticipation of a big gun fight, visit an Ammu-nation to restock and then carry on with the job.
Follow the guide in the image below in order to grab a Rhino tank in GTA Online AND insure it and store it in your garage!
|
# -*- coding: utf-8 -*-
"""
Created on Tue Oct 27 23:52:51 2015
@author: Kirill
"""
import sys
#from nltk.parse import stanford
from collections import defaultdict
import essayclasses.allessaysfile
import essayclasses.anessay
import essayclasses.asentence
exportedDB = 'data/all-essays.csv'
allEssaysFile = essayclasses.allessaysfile.AllEssaysFile(exportedDB)
allEssays = allEssaysFile.essaysList()
arDTtotal = 0
arAtotal = 0
arTheTotal = 0
arSentTotal = 0
narDTtotal = 0
narAtotal = 0
narTheTotal = 0
narSentTotal = 0
arDTdict = defaultdict(int)
narDTdict = defaultdict(int)
count = 0
for anEssay in allEssays:
if count == 10 : break
arDT = 0
arA = 0
arThe = 0
arSent = 0
narA = 0
narThe = 0
narDT = 0
narSent = 0
thisEssay = essayclasses.anessay.AnEssay(anEssay)
essayText = thisEssay.getText()
print('')
if thisEssay.isArabic():
print('ARABIC ESSAY:')
else:
print('NON-ARABIC ESSAY:')
for aSentence in thisEssay.getSentences():
thisSentence = essayclasses.asentence.ASentence(aSentence)
indefArt = 0
defArt = 0
dts = thisSentence.getDTs()
for dt in dts:
dt = dt[0].lower()
if dt == 'a' or dt == 'an':
indefArt += 1
if dt == 'the':
defArt += 1
if thisEssay.isArabic():
arDTdict[dt] += 1
else:
narDTdict[dt] += 1
if thisEssay.isArabic():
arSent += 1
arA += indefArt
arThe += defArt
arDT += thisSentence.countDTs()
#print ('Sentences: ', arSent)
#print ('Verbs: ', arDT)
else:
narSent += 1
narA += indefArt
narThe += defArt
narDT += thisSentence.countDTs()
#print ('Sentences: ', narSent)
#print ('Verbs: ', narDT)
if thisEssay.isArabic():
print('Sentences: ',arSent)
print('DTs: ',arDT)
print('A\'s: ', arA)
print('The\'s: ', arThe)
arAtotal += arA
arTheTotal += arThe
arDTtotal += arDT
arSentTotal += arSent
else:
print('Sentences: ',narSent)
print('DTs: ', narDT)
print('As: ', narA)
print('The\'s: ', narThe)
narAtotal += narA
narTheTotal += narThe
narDTtotal += narDT
narSentTotal += narSent
count += 1
arDTList = []
tuplesDTsCount = arDTdict.items()
for i in tuplesDTsCount:
if i[1] >= 10:
arDTList.append(i[::-1])
narDTList = []
tuplesDTsCount = narDTdict.items()
for i in tuplesDTsCount:
if i[1] >= 10:
narDTList.append(i[::-1])
print('')
print('Arabic DT\'s:')
print(sorted(arDTList, reverse = True))
print('')
print('Non-Arabic DT\'s:')
print(sorted(narDTList, reverse = True))
print('')
print ('Arabic Sentences: ',arSentTotal)
print ('Arabic DTs per Sentence: ', arDTtotal/arSentTotal)
print ('Arabic A\'s per Sentence: ', arAtotal/arSentTotal)
print ('Arabic The\'s per Sentence: ', arTheTotal/arSentTotal)
print('')
print ('Non-Arabic Sentences: ',narSentTotal)
print ('Non-Arabic DTs per Sentence: ', narDTtotal/narSentTotal)
print ('Non-Arabic As per Sentence: ', narAtotal/narSentTotal)
print ('Non-Arabic The\'s per Sentence: ', narTheTotal/narSentTotal)
|
I wasn't interested in hand-held computers, or PDAs (Personal Digital Assistants) as they're now known, before I went to work for GO Corporation. There I saw and worked with a wide variety of Intel-based tablet computers. (I spent almost four years writing C programs for PenPoint, GO's home-grown operating system for pen-based computers.) The problem with PenPoint was that it didn't interoperate with anything at all; the big lie was that a business executives would emerge as a new class of computing purchasers. (That the most vociferous purchasers of new technologies has always been the geekly classes didn't dissuade the writers of (or investors in) GO's business plan.) GO died an ignoble death, pecked to demise by short-term interests.
Thankfully, the PDA didn't die with GO. Apple had developed the Newton MessagePad during the latter years of GO. The original model was woefully underpowered, but it was small and only slightly overpriced. (For all the talk about consumer-level electronics, new toys always seem to debut at over US$500.) The photo at right shows the 100 model, known to cognesceti as the OMP, Original Message Pad.
I'm a great believer that technology should be eye-catching. Let 'em stare - if they're captivated they'll be open to the idea off using a PDA in their own lives.
Shortly after the introduction of the OMP it's larger, faster sibling appeared: the MP 110. I was lucky enough to get a transparent model, a few of which were made as promotional items for Apple and Claris employees. (Normal Newtons are a rubberized black - the 100 and 110 - or a hard very dark green - the 120/130/2x00.) Transparent computing equipment sells itself; I never understand why I can't get see-thru PowerBooks.
Ahhhhhh. My Newton. An amazing piece of electronics. Ne'r did I realize how much use a personal digital assistant (PDA) could be. Even though I worked for the ill-fated GO Corporation for almost four years, and worked on many of the early pen-based tablet computers, the Newton took me by surprize. Newton works as an adjunct to my PowerBook, allowing me to synchronize data on both platforms. (There's a Newton Connection Kit for Windows as well as MacOS.) For some reason, upper management at GO never understood that pioneers of new technology are people already invested in cool, nifty, technology. A travelling executive who has a secretary fax documents isn't going to be ready for the cutting edge of miniturization.
It is a very cool toy - and that should be taken as a compliment. A long time ago, Jean-Louis Gasse said that Apple products "smell like tomorrow" - he was speaking about the original PowerBooks, but the idea carries on. Using Newton makes me feel like I'm tapping into the optimism of the future; I may not be able to travel the galaxy in the USS Enterprise , but I can still use a hand-held, tricorder-like, device to manage the data I encounter in my life.
The photo at right shows my transparent Newton 110 sitting on a table in the tropics. During my trip to Eivissa and Gran Canaria my Newton became a life-saver. When my PowerBook died from poor electricity I continued to write my CU-SeeMe book - Internet TV with CU-SeeMe - on my Newton. I had to deal with poor handwriting recognition (which I overcome with Grafitti), but all in all things went pretty well. The Apple heavy-duty NiCad battery back more than allowed me to work all day and it recharges very quickly. If it wasn't for the 110 I'd have been completely blocked.
Nobody on the island has anything close to this level of technology. Everybody else is at an Intel 386 level, although there's the most rudimentary Internet connectivity just coming to the island.
Apple - with ex-GO employee Sandy Bennett at the helm of the Newton division - gives the platform a boost into the real world with the introduction of the MessagePad 2000 (or MP2K as it's known on the 'net). This was the machine for which I was waiting. I jumped at the chance to buy one, and a few months later I wrote an article about my impressions of the new palmtop computer for my ISP's monthly newsletter.
The "two thousand" has a bigger screen capable of displaying 16 shades of grey, but best of all it has two PCMCIA slots, so I can have a modem and a storage card inserted simultaneously. This makes it a machine capable of connecting to the Internet, although the limited amount of RAM dedicated to the heap limits its ability as a web-surfing tool. Apple has been very unhelpful in this regard, but a veritable flood of user complaints will undoubtedly force them to somehow increase the heap.
All of a sudden my MP2K is giving my PowerBook a run for its money. (You can see me using the Newton in a very dusty hardware-hostile community while at Burning Man in 1998; mention and photos are here and here.) When I'm on a business trip I can survive for a week with only my handheld. The problem is one of what I'll call "digital community", the digital environment caused by using one platform for a period of time. All your email and documents are on your desktop or laptop; switching to another piece of hardware means that you leave all that context behind. That alone is a stumbling block.
And because Eudora for Newton (seemingly abandoned by Qualcomm) doesn't have the ability to synchronize filters and address books (and the whole email digital community) I'm left with a forced email-checking environment. It's not yet a complete solution. The user community is excellent, and the freeware and shareware are excellent, but somehow it's not yet a complete replacement for a laptop. It's close, though.
But I'm doing more with this Newton than with previous models. I fashion a cable that'll connect my MP2K to my tiny GPS (Global Positioning System) reciever. I should be able to display my movements in real-time on a map of the region. And I've recently read that someone is using their Newton in conjunction with a digital camera during a trip to Mt Everest, but it seems to be a custom hack and not a general-purpose solution.
Autumn 1997. Apple announces the MessagePad 2100, which fixes the heap problem. There's an upgrade program for owners of the MP2K, and I jump at the chance to remedy the one fly in the ointment. After a three-day turnaround my "twenty-one hundred" is flying on the net. Yes! Finally! And the handwriting recognition is good enough that I'm rarely using Grafitti.
News reports say that a lot of the MP2x00s are selling, so it seems that Apple has a hit on their hands. This should ensure a comfortable place for Newton within Apple. In fact, for the fourth quarter of 1997 (if memory serves) the profit reported by the Newton Systems Group is the lion's share of Apple's entire profit, what with various bad decisions and write-offs.
February 27 1998. The MP2x00s are still selling like hotcakes, and the completely Japanese versions are being rolled out by a third party, but Steve Jobs announces Apple will cease development on the Newton line. The few machines left in the retail channels and Apple inventory are quickly snapped up. Users and developers are in a sort of shock. Webmasters begin to fly one of several Newton Jolly Roger flags or logos.
Apple yet again burns bridges it scarcely seems to have to burn. What a luxury. I don't get it.
This page is part of the Apple Newton Web Ring. Visit another site on this ring. Add your Newton page to this web ring.
|
# Fuck you Disyer. Stealing my fucking paypal. GET FUCKED: toontown.parties.DistributedPartyDanceActivity
from toontown.parties import PartyGlobals
from toontown.parties.DistributedPartyDanceActivityBase import DistributedPartyDanceActivityBase
from toontown.toonbase import TTLocalizer
class DistributedPartyDanceActivity(DistributedPartyDanceActivityBase):
notify = directNotify.newCategory('DistributedPartyDanceActivity')
def __init__(self, cr):
DistributedPartyDanceActivityBase.__init__(self, cr, PartyGlobals.ActivityIds.PartyDance, PartyGlobals.DancePatternToAnims)
def getInstructions(self):
return TTLocalizer.PartyDanceActivityInstructions
def getTitle(self):
return TTLocalizer.PartyDanceActivityTitle
def load(self):
DistributedPartyDanceActivityBase.load(self)
parentGroup = self.danceFloor.find('**/discoBall_mesh')
correctBall = self.danceFloor.find('**/discoBall_10')
origBall = self.danceFloor.find('**/discoBall_mesh_orig')
if not correctBall.isEmpty():
numChildren = parentGroup.getNumChildren()
for i in xrange(numChildren):
child = parentGroup.getChild(i)
if child != correctBall:
child.hide()
|
A Chinese proverb says: "You can predict a child's future by the time he is three years old." Therefore, preschool education is of vital importance to children. Dayi county is located on the outskirts of Chengdu, and was one of the hardest-hit areas during the May 12 Wenchuan earthquake. Chengdu RTVU and the Dayi Education Bureau have pledged to cooperate in training preschool teachers so as to tackle problems such as the lack of adequate training for teachers, insufficient number of trained professionals, lack of learning and training resources, and so on.
On the morning of May 4, the Chendu RTVU held “The Opening Ceremony for Dayi Preschool Teacher Training Project in 2013 and the First Expert Lecture” organized by Chengdu RTVU, which was attended by distinguished leaders and experts, including: Miao Peibin, vice president of Chengdu RTVU; Liu Gang and Bao Lei, deputy directors of the Dayi Education Bureau; Yu Wenlin, head of the Personnel Section; Huang Sufang, head of the Education Section; Xiang Tong, president of the Dayi Teachers’ Continuing Education Institute; and Zhang Jiarong, director of the Academic Committee of Kindergarten Principals of the Sichuan Tao Xingzhi Study Association; and so on.
The project's first training class, with more than 100 participants from rural preschools in Dayi county, was a success. Chengdu RTVU was able to obtain the support of local government and academic associations who contributed not only material resources but also favorable policies. The Dayi Education Bureau launched a special fund to subsidize the training project and required individual kindergartens to match the level of support. This policy guarantees sufficient training funds and stimulates participation. The Academic Committee of Kindergarten Principals of the Sichuan Tao Xingzhi Study Association will supervise teaching quality, including teaching plans, curriculum design, teaching methods, teaching staff, and practicals, in order to make full use of the project resources in providing a comprehensive training programme to preschool teachers, which integrates degree education with on-the-job training. Chengdu RTVU regards this project as one of its model research projects for 2013, and hopes to establish an effective training programme that can be replicated.
During the opening ceremony, Deputy President Miao Zhibin delivered the welcoming speech. He introduced the training project’s background, goals, and significance as well as the measures taken to guarantee the quality of teaching activities. He encouraged the trainees to cherish this rare opportunity to improve both knowledge and ability.
Ms. Zhang Jiarong, director of the Academic Committee of Kindergarten Principals of the Sichuan Tao Xingzhi Study Association and Chengdu RTVU's leading expert in the field of preschool education, delivered an excellent speech focusing on “joy”, “appreciation”, and “hope”. As an expert of preschool education with 30 years’ experience, she was excited to see so many preschool teachers coming together. She said that she could sense Dayi county's bright future in preschool education and was appreciative of Chengdu RTVU's initiative in allocating resources to promote preschool education. According to Ms. Zhang, people can see within every child the adult he will become, making the job of rural preschool teacher training a rewarding course, but also one with many challenges. She felt particularly moved and expressed her willingness to provide infinite learning opportunities to the trainees in the areas of improving teaching ability and practical training.
Deputy Director of the Dayi Education Bureau Bao Lei, who is in charge of preschool education, highly praised the cooperation between the Dayi Education Bureau and Chengdu RTVU. The innovative training project aims to create a new way to help rural teachers continue to make progress in both knowledge and ability, thereby improving the overall quality of Dayi preschool education. She also proposed the following requirements to the trainees: 1. Try to be studious and good at studying. Just as the old Chinese saying goes: “There are no shortcuts in studying”, and “He who knows the truth is not equal to him who loves it, and he who loves it is not equal to him who delights in it”, the trainees should realize the importance of preschool education and be studious and good at learning. On the other hand, the Dayi Education Bureau will look for talented teachers for the sustainable development of preschool education in Dayi; 2. Create a peaceful learning environment. Kindergartens are required to create favorable conditions for employees participating in the training programme. Additionally, certain measures will be taken to guarantee a safe learning environment; 3. Endeavor to apply the lessons. The trainees should work hard to put into practice the things they are learning; 4. Be dedicated to and delighted by teaching. She quoted Liang Qichao to encourage trainees: Focus on the task at hand and be delighted and motivated by it in order to further enhance professionalism and quality.
Deputy Director of the Dayi Education Bureau Liu Gang, who is in charge of personnel management, gave the opening ceremony's concluding address. According to him, the government attaches great importance to preschool education and advances in preschool education are related to the quality of the teachers. Therefore, the government has decided to cultivate a passionate, devoted teaching staff of high quality. However, the following problems still exist: 1. Severe teacher shortage. Each classroom is required to be equipped with two teachers and a nurse. However, most rural private kindergartens only supply one or two teachers to each classroom. More than 100 teachers are still needed to meet the criterion mentioned above; 2. Insufficient number of trained professionals. It is estimated that rural teachers who possess a teacher certification for preschool education amount to less than 40% of the total in Dayi county; 3. Inadequate professional training. According to statistics, most preschool teachers in Dayi county haven’t received any professional training. Merely 30% of the trainees graduated from a vocational secondary preschool education programme; 4. Lack of learning and training resources. He explained that this training project was not only an important move initiated by Dayi Education Bureau to further improve the quality of rural preschool teachers, but also a livelihood project that concerns preschool education. He also encouraged trainees to cherish this rare opportunity to make progress in both knowledge and ability, while obtaining their degrees, diplomas or certificates.
After the opening ceremony, preschool education expert Wang Yulu from the Sichuan Tao Xingzhi Study Association delivered an expert lecture named Analysis on Early Learning and Development Guidelines for 3-6 Year Old Children.
|
# Copyright (C) 2016 Google Inc.
# Licensed under http://www.apache.org/licenses/LICENSE-2.0 <see LICENSE file>
"""
Test /search REST API
"""
from ggrc.models import Control
from integration.ggrc import TestCase
from integration.ggrc.api_helper import Api
from integration.ggrc.generator import ObjectGenerator
class TestResource(TestCase):
"""
Test /search REST API
"""
def setUp(self):
TestCase.setUp(self)
self.api = Api()
self.object_generator = ObjectGenerator()
self.create_objects()
def create_objects(self):
"""Create objects to be searched.
Creates five controls and makes relationships.
0 1 2 3 4
|---| |---| |
|-------|-------|
"""
self.objects = [
self.object_generator.generate_object(Control)[1].id
for _ in xrange(5)
]
self.objects = Control.eager_query().filter(
Control.id.in_(self.objects)
).all()
for src, dst in [(0, 1), (0, 2), (2, 3), (2, 4)]:
self.object_generator.generate_relationship(
self.objects[src], self.objects[dst]
)
def search(self, *args, **kwargs):
res, _ = self.api.search(*args, **kwargs)
return res.json["results"]["entries"]
def test_search_all(self):
"""Test search for all objects of a type."""
res, _ = self.api.search("Control")
self.assertEqual(len(res.json["results"]["entries"]), 5)
def test_search_query(self):
"""Test search with query by title."""
entries = self.search("Control", q=self.objects[0].title)
self.assertEqual({entry["id"] for entry in entries},
{self.objects[0].id})
def test_search_relevant(self):
"""Test search with 'relevant to' single object."""
relevant_objects = "Control:{}".format(self.objects[0].id)
entries = self.search("Control", relevant_objects=relevant_objects)
self.assertEqual({entry["id"] for entry in entries},
{self.objects[i].id for i in [1, 2]})
def test_search_relevant_multi(self):
"""Test search with 'relevant to' multiple objects."""
ids = ",".join("Control:{}".format(self.objects[i].id) for i in (0, 3))
entries = self.search("Control", relevant_objects=ids)
self.assertEqual({entry["id"] for entry in entries},
{self.objects[2].id})
|
This episode continues to expand on our awareness of how we interact and communicate with ourselves and the effects it has on our emotions, behaviors, and the vibrations within our body. We will explore 2 more unhelpful thinking styles we get looped into and learn a new skill for your wellness toolbox. Become your own best friend.
|
from . import DataCleaner
import unittest
import os
class DataCleanerTestCase(unittest.TestCase):
dataCleaner = None
currentDirectory = currentDirectory = "%s" % (os.path.dirname(os.path.realpath(__file__)), )
testTextsDirectory = "%s/../../../data/text/" % (currentDirectory, )
def setUp(self):
self.dataCleaner = DataCleaner()
def tearDown(self):
self.dataCleaner = None
def test_datacleaner_with_empty_character_list(self):
"""
Check that the data cleaner returns the same text if an empty
list of characters has been given in input
"""
text = "This is a text"
expected = "This is a text"
actual = self.dataCleaner.filterCharacters(characters=[], text=text)
self.assertEqual(expected, actual)
def test_datacleaner_with_character_list(self):
text = "This is a text -"
expected = "This is a text "
actual = self.dataCleaner.filterCharacters(characters=["-"], text=text)
self.assertEqual(expected, actual)
def test_datacleaner_with_default_character_list(self):
text = "This is a text -"
expected = "This is a text "
actual = self.dataCleaner.filterCharacters(text=text)
self.assertEqual(expected, actual)
def test_datacleaner_exception_if_characters_is_not_list(self):
characters = "String"
self.assertRaises(AssertionError, self.dataCleaner.filterCharacters, characters)
def test_datacleaner_replacementcharacter(self):
text = "This is a text -"
replacementCharacter = ""
expected = "This is a text "
actual = self.dataCleaner.filterCharacters(replacement_character=replacementCharacter, text=text)
self.assertEqual(expected, actual)
def test_datacleaner_replacemenent_character_is_not_string(self):
text = "This is a text -"
replacemenentCharacter = 1
expected = "This is a text "
actual = self.dataCleaner.filterCharacters(replacement_character=replacemenentCharacter, text=text)
self.assertEqual(expected, actual)
def test_datacleaner_text_is_not_string(self):
text = 1234
self.assertRaises(AssertionError, self.dataCleaner.filterCharacters, [], "", text)
def helper_readFilename(self, filename=''):
stopwords = []
if not filename:
raise Exception("The file is empty")
fileToRead = "%s%s" % (self.testTextsDirectory, filename)
f = open(fileToRead, 'r')
text = f.read()
f.close()
return text
if __name__ == '__main__':
unittest.main()
|
I would be extremely pleased to either an iOS or android app (both would be better). Speaking for myself, it wouldn't have to be free. If the app had a reasonable price, I would be okay with paying for it.
|
# ===============================================================================
# Copyright 2018 ross
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ===============================================================================
from traits.api import Int, Bool
from pychron.options.aux_plot import AuxPlot
from pychron.options.fit import FitOptions
from pychron.options.views.define_equilibration_views import VIEWS
from pychron.pychron_constants import MAIN, DISPLAY
class DefineEquilibrationAuxPlot(AuxPlot):
equilibration_time = Int(100)
class DefineEquilibrationOptions(FitOptions):
aux_plot_klass = DefineEquilibrationAuxPlot
show_statistics = Bool(False)
ncols = Int
def initialize(self):
self.subview_names = [MAIN, DISPLAY]
def _get_subview(self, name):
return VIEWS[name]
# ============= EOF =============================================
|
Teravainen's two third-period goals salted away a 4-1 victory over Ottawa on Tuesday and put him at 51 points for the season.
That's now 11 points in eight games, and it is clear things are working nicely for Teravainen at the moment. He was a 64-point scorer last year, and it looks like this year's numbers will be even better.
|
################################################################################
#
# Copyright (C) 2002-2005 Travis Shirk <[email protected]>
# Copyright (C) 2001 Ryan Finne <[email protected]>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
#
################################################################################
# Accepts a string of bytes (chars) and returns an array of bits
# representing the bytes in big endian byte (Most significant byte/bit first)
# order. Each byte can have it's higher bits ignored by passing an sz arg.
def bytes2bin(bytes, sz = 8):
if sz < 1 or sz > 8:
raise ValueError("Invalid sz value: " + str(sz));
retVal = [];
for b in bytes:
bits = [];
b = ord(b);
while b > 0:
bits.append(b & 1);
b >>= 1;
if len(bits) < sz:
bits.extend([0] * (sz - len(bits)));
elif len(bits) > sz:
bits = bits[:sz];
# Big endian byte order.
bits.reverse();
retVal.extend(bits);
if len(retVal) == 0:
retVal = [0];
return retVal;
# Convert am array of bits (MSB first) into a string of characters.
def bin2bytes(x):
bits = [];
bits.extend(x);
bits.reverse();
i = 0;
out = '';
multi = 1;
ttl = 0;
for b in bits:
i += 1;
ttl += b * multi;
multi *= 2;
if i == 8:
i = 0;
out += chr(ttl);
multi = 1;
ttl = 0;
if multi > 1:
out += chr(ttl);
out = list(out);
out.reverse();
out = ''.join(out);
return out;
# Convert and array of "bits" (MSB first) to it's decimal value.
def bin2dec(x):
bits = [];
bits.extend(x);
bits.reverse();
multi = 1;
value = long(0);
for b in bits:
value += b * multi;
multi *= 2;
return value;
def bytes2dec(bytes, sz = 8):
return bin2dec(bytes2bin(bytes, sz));
# Convert a decimal value to an array of bits (MSB first), optionally
# padding the overall size to p bits.
def dec2bin(n, p = 0):
assert(n >= 0)
retVal = [];
while n > 0:
retVal.append(n & 1);
n >>= 1;
if p > 0:
retVal.extend([0] * (p - len(retVal)));
retVal.reverse();
return retVal;
def dec2bytes(n, p = 0):
return bin2bytes(dec2bin(n, p));
# Convert a list of bits (MSB first) to a synch safe list of bits (section 6.2
# of the ID3 2.4 spec).
def bin2synchsafe(x):
if len(x) > 32 or bin2dec(x) > 268435456: # 2^28
raise ValueError("Invalid value");
elif len(x) < 8:
return x;
n = bin2dec(x);
bites = "";
bites += chr((n >> 21) & 0x7f);
bites += chr((n >> 14) & 0x7f);
bites += chr((n >> 7) & 0x7f);
bites += chr((n >> 0) & 0x7f);
bits = bytes2bin(bites);
if len(bits) < 32:
bits = ([0] * (32 - len(x))) + bits;
return bits;
def bytes2str(bytes):
s = ""
for b in bytes:
s += ("\\x%02x" % ord(b))
return s
|
New Alzheimer s therapy with brain blood flow discovery?
By discovering the culprit behind decreased blood flow in the brain of people with Alzheimer s, biomedical engineers at Cornell University have made possible promising new therapies for the disease.
You know that dizzy feeling you get when, after lying down for an extended period, you stand up a little too quickly?
That feeling is caused by a sudden reduction of blood flow to the brain, a reduction of around 30 percent. Now imagine living every minute of every day with that level of decreased blood flow.
People with Alzheimer s disease don t have to imagine it. The existence of cerebral blood flow reduction in Alzheimer s patients has been known for decades, but the exact correlation to impaired cognitive function is less understood.
"People probably adapt to the decreased blood flow, so that they don t feel dizzy all of the time, but there s clear evidence that it impacts cognitive function," said Chris Schaffer, associate professor of biomedical engineering at Cornell University.
A new study from the joint lab of Schaffer and associate professor Nozomi Nishimura, offers an explanation for this dramatic blood flow decrease: white blood cells stuck to the inside of capillaries, the smallest blood vessels in the brain. And while only a small percentage of capillaries experience this blockage, each stalled vessel leads to decreased blood flow in multiple downstream vessels, magnifying the impact on overall brain blood flow.
Their paper, "Neutrophil Adhesion in Brain Capillaries Reduces Cortical Blood Flow and Impairs Memory Function in Alzheimer s Disease Mouse Models," published in Nature Neuroscience.
The paper s co-lead authors are Jean Cruz-Hernandez, Ph.D., now a postdoctoral researcher at Harvard Medical School, and Oliver Bracko, a research associate in the Schaffer-Nishimura Lab.
The paper, Schaffer said, is the culmination of approximately a decade of study, data gathering and analysis. It began with a study in which Nishimura was attempting to put clots into the vasculatures of Alzheimer s mouse brains to see their effect.
"It turns out that ... the blockages we were trying to induce were already in there," she said. "It sort of turned the research around -- this is a phenomenon that was already happening."
Recent studies suggest that brain blood flow deficits are one of the earliest detectable symptoms of dementia.
"What we ve done is identify the cellular mechanism that causes reduced brain blood flow in Alzheimer s disease models, which is neutrophils [white blood cells] sticking in capillaries," Schaffer said. "We ve shown that when we block the cellular mechanism [that causes the stalls], we get an improved blood flow, and associated with that improved blood flow is immediate restoration of cognitive performance of spatial- and working-memory tasks."
"Now that we know the cellular mechanism," he said, "it s a much narrower path to identify the drug or the therapeutic approach to treat it."
The team has identified approximately 20 drugs, many of them already FDA approved for human use, that have potential in dementia therapy and are screening these drugs in Alzheimer s mice now.
Schaffer said he s "super-optimistic" that, if the same capillary-blocking mechanism is at play in humans as it is in mice, this line of research "could be a complete game-changer for people with Alzheimer s disease."
This research was funded by the National Institutes of Health, the Alzheimer s Drug Discovery Foundation, the Alzheimer s Art Quilt Initiative, and the Brightfocus Foundation.
Materials provided by Cornell University . Note: Content may be edited for style and length.
Cornell University. "New Alzheimer s therapy with brain blood flow discovery?." ScienceDaily. ScienceDaily, 11 February 2019. .
Cornell University. "New Alzheimer s therapy with brain blood flow discovery?." ScienceDaily. www.sciencedaily.com/releases/2019/02/190211164005.htm (accessed February 11, 2019).
Could an eye doctor diagnose Alzheimer s before you have symptoms?
|
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tensorflow Example proto decoder for Attribute-Mask R-CNN.
A decoder to decode string tensors containing serialized tensorflow.Example
protos for Attribute-Mask R-CNN model.
"""
import tensorflow.compat.v1 as tf
def _get_source_id_from_encoded_image(parsed_tensors):
return tf.strings.as_string(
tf.strings.to_hash_bucket_fast(parsed_tensors['image/encoded'],
2**63 - 1))
class TfExampleDecoder(object):
"""Tensorflow Example proto decoder."""
def __init__(self, include_mask=False, regenerate_source_id=False):
self._include_mask = include_mask
self._regenerate_source_id = regenerate_source_id
self._keys_to_features = {
'image/encoded': tf.FixedLenFeature((), tf.string),
'image/source_id': tf.FixedLenFeature((), tf.string, ''),
'image/height': tf.FixedLenFeature((), tf.int64, -1),
'image/width': tf.FixedLenFeature((), tf.int64, -1),
'image/object/bbox/xmin': tf.VarLenFeature(tf.float32),
'image/object/bbox/xmax': tf.VarLenFeature(tf.float32),
'image/object/bbox/ymin': tf.VarLenFeature(tf.float32),
'image/object/bbox/ymax': tf.VarLenFeature(tf.float32),
'image/object/class/label': tf.VarLenFeature(tf.int64),
'image/object/attribute/label': tf.VarLenFeature(tf.int64),
'image/object/area': tf.VarLenFeature(tf.float32),
'image/object/is_crowd': tf.VarLenFeature(tf.int64),
}
if include_mask:
self._keys_to_features.update({
'image/object/mask':
tf.VarLenFeature(tf.string),
})
def _decode_image(self, parsed_tensors):
"""Decodes the image and set its static shape."""
image = tf.io.decode_image(parsed_tensors['image/encoded'], channels=3)
image.set_shape([None, None, 3])
return image
def _decode_boxes(self, parsed_tensors):
"""Concat box coordinates in the format of [ymin, xmin, ymax, xmax]."""
xmin = parsed_tensors['image/object/bbox/xmin']
xmax = parsed_tensors['image/object/bbox/xmax']
ymin = parsed_tensors['image/object/bbox/ymin']
ymax = parsed_tensors['image/object/bbox/ymax']
return tf.stack([ymin, xmin, ymax, xmax], axis=-1)
def _decode_masks(self, parsed_tensors):
"""Decode a set of PNG masks to the tf.float32 tensors."""
def _decode_png_mask(png_bytes):
mask = tf.squeeze(
tf.io.decode_png(png_bytes, channels=1, dtype=tf.uint8), axis=-1)
mask = tf.cast(mask, dtype=tf.float32)
mask.set_shape([None, None])
return mask
height = parsed_tensors['image/height']
width = parsed_tensors['image/width']
masks = parsed_tensors['image/object/mask']
return tf.cond(
tf.greater(tf.size(masks), 0),
lambda: tf.map_fn(_decode_png_mask, masks, dtype=tf.float32),
lambda: tf.zeros([0, height, width], dtype=tf.float32))
def _decode_areas(self, parsed_tensors):
xmin = parsed_tensors['image/object/bbox/xmin']
xmax = parsed_tensors['image/object/bbox/xmax']
ymin = parsed_tensors['image/object/bbox/ymin']
ymax = parsed_tensors['image/object/bbox/ymax']
height = tf.cast(parsed_tensors['image/height'], dtype=tf.float32)
width = tf.cast(parsed_tensors['image/width'], dtype=tf.float32)
return tf.cond(
tf.greater(tf.shape(parsed_tensors['image/object/area'])[0], 0),
lambda: parsed_tensors['image/object/area'],
lambda: (xmax - xmin) * (ymax - ymin) * height * width)
def decode(self, serialized_example):
"""Decode the serialized example.
Args:
serialized_example: a single serialized tf.Example string.
Returns:
decoded_tensors: a dictionary of tensors with the following fields:
- image: a uint8 tensor of shape [None, None, 3].
- source_id: a string scalar tensor.
- height: an integer scalar tensor.
- width: an integer scalar tensor.
- groundtruth_classes: an int64 tensor of shape [None].
- groundtruth_attributes: an int64 tensor of shape [None].
- groundtruth_is_crowd: a bool tensor of shape [None].
- groundtruth_area: a float32 tensor of shape [None].
- groundtruth_boxes: a float32 tensor of shape [None, 4].
- groundtruth_instance_masks: a float32 tensor of shape
[None, None, None].
- groundtruth_instance_masks_png: a string tensor of shape [None].
"""
parsed_tensors = tf.io.parse_single_example(
serialized_example, self._keys_to_features)
for k in parsed_tensors:
if isinstance(parsed_tensors[k], tf.SparseTensor):
if parsed_tensors[k].dtype == tf.string:
parsed_tensors[k] = tf.sparse_tensor_to_dense(
parsed_tensors[k], default_value='')
else:
parsed_tensors[k] = tf.sparse_tensor_to_dense(
parsed_tensors[k], default_value=0)
image = self._decode_image(parsed_tensors)
boxes = self._decode_boxes(parsed_tensors)
areas = self._decode_areas(parsed_tensors)
decode_image_shape = tf.logical_or(
tf.equal(parsed_tensors['image/height'], -1),
tf.equal(parsed_tensors['image/width'], -1))
image_shape = tf.cast(tf.shape(image), dtype=tf.int64)
parsed_tensors['image/height'] = tf.where(decode_image_shape,
image_shape[0],
parsed_tensors['image/height'])
parsed_tensors['image/width'] = tf.where(decode_image_shape, image_shape[1],
parsed_tensors['image/width'])
is_crowds = tf.cond(
tf.greater(tf.shape(parsed_tensors['image/object/is_crowd'])[0], 0),
lambda: tf.cast(parsed_tensors['image/object/is_crowd'], dtype=tf.bool),
lambda: tf.zeros_like(parsed_tensors['image/object/class/label'], dtype=tf.bool)) # pylint: disable=line-too-long
if self._regenerate_source_id:
source_id = _get_source_id_from_encoded_image(parsed_tensors)
else:
source_id = tf.cond(
tf.greater(tf.strings.length(parsed_tensors['image/source_id']),
0), lambda: parsed_tensors['image/source_id'],
lambda: _get_source_id_from_encoded_image(parsed_tensors))
if self._include_mask:
masks = self._decode_masks(parsed_tensors)
decoded_tensors = {
'image': image,
'source_id': source_id,
'height': parsed_tensors['image/height'],
'width': parsed_tensors['image/width'],
'groundtruth_classes': parsed_tensors['image/object/class/label'],
'groundtruth_attributes':
parsed_tensors['image/object/attribute/label'],
'groundtruth_is_crowd': is_crowds,
'groundtruth_area': areas,
'groundtruth_boxes': boxes,
}
if self._include_mask:
decoded_tensors.update({
'groundtruth_instance_masks': masks,
'groundtruth_instance_masks_png': parsed_tensors['image/object/mask'],
})
return decoded_tensors
|
Ms Hargreaves noted that parents or caregivers have the right to withdraw their child from any particular element of sexuality education in a health programme. "This is why," she says, "it is crucial that parents/caregivers understand the programme content, then they can make informed decisions."
"Relationship and sexuality education programmes must be allocated space within the school timetable. Twelve to 15 hours per year, every year, from years 1 to 10," Ms Hargreaves says.
"While it seems simple - and many might be surprised to think about an exercise like this as part of the relationship and sexuality education programme, it is this developing attitudes of care and concern for other people by applying manaakitanga that is important. This is foundational skill learning that young people can apply as they get older and their relationships become more complex," Ms Hargreaves says.
"At this level, relationship and sexuality education focuses on positive identity, celebrating diversity, friendships and whÄnau relationships, interpersonal skills, dealing with bullying and harassment, and identifying supporters."
She closed by challenging teachers to think about where relationship and sexuality education sat within their school - what resourcing did they have and what needs to be done? When was the last time consultation was completed?
"The challenge returning to your schools is to think about how relationship and sexuality education can be progressed - the curriculum and supporting resources provide a great foundation, but your parents and your Boards of Trustees need to be on board too."
|
# encoding: utf-8
u'''MCL Site Knowledge — setup tests'''
from jpl.mcl.site.knowledge.testing import JPL_MCL_SITE_KNOWLEDGE_INTEGRATION_TESTING
import unittest, plone.api
class SetupTest(unittest.TestCase):
layer = JPL_MCL_SITE_KNOWLEDGE_INTEGRATION_TESTING
def setUp(self):
super(SetupTest, self).setUp()
self.portal = self.layer['portal']
def testCatalogIndexes(self):
u'''Ensure the catalog has our custom indexes'''
catalog = plone.api.portal.get_tool('portal_catalog')
indexes = catalog.indexes()
for index in ('subjectURI', 'phone', 'homepage', 'dcbflag', 'dcpflag'):
self.assertTrue(index in indexes, u'"{}" index not installed'.format(index))
def testCatalogMetadata(self):
u'''Check that the catalog has our custom metadata columns'''
catalog = plone.api.portal.get_tool('portal_catalog')
columns = catalog.schema()
for column in ('subjectURI', 'phone', 'homepage', 'dcbflag', 'dcpflag'):
self.assertTrue(column in columns, u'"{}" column not installed'.format(column))
def test_suite():
return unittest.defaultTestLoader.loadTestsFromName(__name__)
if __name__ == '__main__':
unittest.main(defaultTest='test_suite')
|
Apartments in Venice, Italy. Venice Apartments for Rent.
If you think of yourself as a traveller not as a tourist , if you relish what a place or a town can offer you and look for independence and practicality you are the guests we are looking for!
We are proposing you a range of elegant flats situated in the historic centre of Venice each with a different peculiarity but with the same sensation of making you feel at home.
Being able to relax in a confortable way inside a real Venetian context it is not only a way of visiting a city but the way of living its deep dimension and and magic charm.
It will not be difficult for you to find among our proposals what you prefer, should you want to seep an aperitif in a garden or have a romantic dinner in a typical Venetian altana (a terrace in the roof).
See you in Venice then.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# HORTON: Helpful Open-source Research TOol for N-fermion systems.
# Copyright (C) 2011-2015 The HORTON Development Team
#
# This file is part of HORTON.
#
# HORTON is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 3
# of the License, or (at your option) any later version.
#
# HORTON is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <http://www.gnu.org/licenses/>
#
#--
#!/usr/bin/env python
'''Update the table with HF/DFT examples based on files in data/examples/hf_dft.'''
from glob import glob
from cStringIO import StringIO
import json, os
from common import write_rst_table, write_if_changed
cases = []
for fn_py in sorted(glob('../data/examples/hf_dft/*.py')):
with open(fn_py) as f:
s = ''
for line in f:
if line.startswith('#JSON'):
s += line[5:]
if len(s) == 0:
raise RuntimeError('Could not find JSON line in HF/DFT example script.')
meta = json.loads(s)
case = [meta['difficulty'], os.path.basename(fn_py), meta['lot'],
meta['scf'], meta['description']]
cases.append(case)
cases.sort()
table = [['File name', 'LOT', 'SCF', 'Description']] + [case[1:] for case in cases]
f = StringIO()
write_rst_table(f, table)
write_if_changed('hf_dft_examples.rst.inc', f.getvalue())
|
How do you beat Cut the Rope level 11-20 in DJ Box? This page contains the solution of level 11-20 in episode 11 (DJ Box) from the popular iOS and Android game Cut the Rope. This step-by-step walkthrough will guide you through each and every part of this game and assist you with the solution to complete this level with 3 stars.
|
import os
import html5lib
from xml.etree import ElementTree as etree
from whoosh.index import open_dir
ix = open_dir("index")
with ix.writer() as writer:
for root, dirs, files in os.walk('essays'):
# print root, files
if 'index.html' in files:
fp = os.path.join(root, 'index.html')
print fp
src = open(fp).read().decode("utf-8")
if type(fp) != unicode:
fp = fp.decode("utf-8")
tree = html5lib.parse(src, namespaceHTMLElements=False)
paragraphs = tree.findall(".//p")
try:
title = paragraphs[0].text.strip()
except AttributeError:
title = u""
authors = u""
# if paragraphs[1].text:
# authors = paragraphs[1].text.strip()
text = etree.tostring(tree, method="text", encoding="utf-8").decode("utf-8")
# lines = text.splitlines()
# for i, x in enumerate(lines[:10]):
# print i, x
# print
# print u"Indexing {0}".format(title).encode("utf-8")
writer.add_document(title=title, type=u"essay", authors=authors, path=fp, content=text, ncontent=title+u" "+text)
# writer.commit()
|
Home / Uncategorized / Do I Really Have to Hire a Data Scientist?
The Harvard Business Review recently declared that Data Scientist was the “sexiest career of the 21st Century” and in the United States it is now ranked as the best paying job around, with an average salary of $110,000. Added to this, there is a dire shortage of qualified people to fill all the Data Scientist positions that are opening up. IBM estimates that the number of Data Scientist positions is expected to grow by 364,000 in the US alone by 2020.
Suffice to say: Data Scientists are hot right now. The majority of people would probably tell you that you have to hire one, and soon. But in the immortal words of Mark Twain: “Whenever you find yourself on the side of the majority, it is time to pause and reflect”. And that’s exactly what this article is about. We ask the question: do business owners and managers really need to hire a Data Scientist?
Of course, for large enterprises with sufficient budget, a skilled and experienced Data Scientist can be a valuable addition to your team. The massive volumes of data that large enterprises face easily justify a devoted resource (or two or three or more) to study the problem and find solutions.
However, the question is: do you ‘have to’ hire a Data Scientist and at dataWerks our view is: perhaps not. This may go against the grain of popular opinion these days so you might expect us to have some strong evidence to support our claim. Indeed, we do. We recently implemented a BI solution for a Fortune 100 media company that proves our point.
This customer presented us with a massive challenge: 150,000 visitors per day at their flagship theme park creating approximately four billion data records stored in more than 40 disparate data sources. They asked us to deliver real-time BI related to theme park visitors so they could immediately address any issues hindering an optimum customer experience.
For example, events that reduce customer satisfaction, such as waiting in line for too long or being exposed to a rain shower, have been defined and are tracked in real-time, suggesting which visitors are most likely to have a bad experience and would need immediate intervention to recover, for instance in the form of coupons for the merchandise store.
Oh, and by the way, the customer wanted to see a sales uplift of a half billion in revenue per year by optimising the customer experience in this way. Impossible right? Not at all. We delivered the solution in less than a year and the customer is now enjoying half a billion sales growth each year with a 20% operating margin.
Total number of Data Scientists on the project: 0. So this is why we answer the question with: “perhaps not”. The next logical question is: how is this possible?
The short answer: empower the business users instead of hiring Data Scientists. This is exactly what we did for the customer described above. They knew everything about their data including where it was stored. They just couldn’t access it fast and simple enough. Typically, it took them 24-36 hours to get the data that they really needed, and by that time, the theme park customer would typically have left the theme park.
Hiring several Data Scientists for this project would undoubtedly have necessitated 6 months for them to come up to speed on the vast amount of data stored in those 40 data sources. Instead, we delivered 80% of the requirements within 6 weeks by simply empowering their business users with access to the data they needed, and giving them the ability to mashup the data as required.
Indeed, this is the mission of dataWerks: revolutionize the way companies access data, and in turn, drastically save time and money for our customers. If this involves negating the need for a full-time Data Scientist then so be it. To be very frank: on this project a Data Scientist may have slowed us down.
The first, and most fundamental problem, lies in those three words in the middle: tries to understand. If the Data Scientist succeeds then you are off to the races and with the six steps shown they can contribute some incredibly valuable insights. However, if they don’t fully understand the data, this can introduce even more complexity into a situation already fraught with complexity. Paralysis by analysis anyone?
Claudia Perlich, a leading Data Scientist herself, described this problem when she wrote that Data Scientists “end up solving whatever they understood might be the problem, ultimately creating a solution that is not really helpful (and often far too complicated). And that’s just the risk with step 1.
Steps 2, 3 and 4 are also inherently risky and they all involve significant time and costs, not to mention all sorts of old-fashioned ETL, data lakes and so forth. All of which may not even be necessary. We cover this subject in Beyond Data Lakes: The Total Integration Revolution. The key point: you may not need a Data Lake so you may not need a Data Scientist to handle the Data Lake. Indeed, dataWerks can negate the need for Data Lakes altogether by virtualising access to all your data stores.
The key difference: we remove ‘tries to understand’. Instead, we empower business users with access to all your data silos using the front-end tools you already have in place. So to answer the original question: do you have to hire a Data Scientist: perhaps not.
|
#
# -*- coding: utf-8 -*-
#
# pyllage
#
# Copyright (C) 2013 barisumog at gmail.com
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
from .http import get
from .parser import parse
from .selectors import choose, relative, rip_data, rip_index, between
from .utils import stack_to_file, get_stack
__title__ = "pyllage"
__version__ = "0.1.2"
__author__ = "barisumog at gmail.com"
__copyright__ = "copyright 2013 barisumog"
__license__ = "GPLv3"
__doc__ = """Please see the README.rst for full documentation.
Quick Start
-----------
Here's a few quick examples illustrating *some* of the functions::
import pyllage
stack = pyllage.get_stack("http://somesite.com/etcetera")
# get all links, print the href=... parts
links = pyllage.choose(stack, tag="a")
for key in links:
print(links[key]["attrs"])
# get all text data except scripts and print it
texts = pyllage.choose(stack, tag="script", select=False)
data = pyllage.rip_data(texts)
print("\n".join(data))
# get all spans and divs with class=help (but not class=helpmore)
helps = pyllage.choose(stack, tag="span div", attrs="class=help", exact=True)
# get all divs containing the word pyllage in their text part
pylls = pyllage.choose(stack, tag="div", data="pyllage")
"""
|
As a teenager diagnosed with type 1 diabetes, all Andy Mead knew was that he would have to change his diet and take insulin shots. What he didn’t know is how much the disease would motivate him to exercise and push others to exercise.
Nearly 26 million people in the United States have diabetes, and last year 1.9 million new cases were diagnosed in people age 20 or younger, according to the American Diabetes Association. Daily exercise is recommended for those with diabetes, which wasn’t exactly hard for Mead, who had played hockey since age 5. But he quickly noticed a different in how he felt.
In 2006, the Michigan native joined a group of young athletes with diabetes called Team Type 1. As part of the group, he has become an advocate for exercise, lecturing to those dealing with type 1 and type 2.
This summer he will participate in the 20th Annual Greater Philadelphia Tour de Cure for the first time. “It really lines up with our mission to get out and get people to exercise,” he said.
|
import threading
import time
import functools
from typing import Dict, Callable, Any, List, Optional # noqa
import h2.exceptions
from h2 import connection
from h2 import events
import queue
from mitmproxy import connections # noqa
from mitmproxy import exceptions
from mitmproxy import http
from mitmproxy.proxy.protocol import base
from mitmproxy.proxy.protocol import http as httpbase
import mitmproxy.net.http
from mitmproxy.net import tcp
from mitmproxy.coretypes import basethread
from mitmproxy.net.http import http2, headers, url
from mitmproxy.utils import human
class SafeH2Connection(connection.H2Connection):
def __init__(self, conn, *args, **kwargs):
super().__init__(*args, **kwargs)
self.conn = conn
self.lock = threading.RLock()
def safe_acknowledge_received_data(self, acknowledged_size: int, stream_id: int):
if acknowledged_size == 0:
return
with self.lock:
self.acknowledge_received_data(acknowledged_size, stream_id)
self.conn.send(self.data_to_send())
def safe_reset_stream(self, stream_id: int, error_code: int):
with self.lock:
try:
self.reset_stream(stream_id, error_code)
except h2.exceptions.StreamClosedError: # pragma: no cover
# stream is already closed - good
pass
self.conn.send(self.data_to_send())
def safe_update_settings(self, new_settings: Dict[int, Any]):
with self.lock:
self.update_settings(new_settings)
self.conn.send(self.data_to_send())
def safe_send_headers(self, raise_zombie: Callable, stream_id: int, headers: headers.Headers, **kwargs):
with self.lock:
raise_zombie()
self.send_headers(stream_id, headers.fields, **kwargs)
self.conn.send(self.data_to_send())
def safe_send_body(self, raise_zombie: Callable, stream_id: int, chunks: List[bytes], end_stream=True):
for chunk in chunks:
position = 0
while position < len(chunk):
self.lock.acquire()
raise_zombie(self.lock.release)
max_outbound_frame_size = self.max_outbound_frame_size
frame_chunk = chunk[position:position + max_outbound_frame_size]
if self.local_flow_control_window(stream_id) < len(frame_chunk): # pragma: no cover
self.lock.release()
time.sleep(0.1)
continue
self.send_data(stream_id, frame_chunk)
try:
self.conn.send(self.data_to_send())
except Exception as e: # pragma: no cover
raise e
finally:
self.lock.release()
position += max_outbound_frame_size
if end_stream:
with self.lock:
raise_zombie()
self.end_stream(stream_id)
self.conn.send(self.data_to_send())
class Http2Layer(base.Layer):
if False:
# mypy type hints
client_conn: connections.ClientConnection = None
class H2ConnLogger:
def __init__(self, name, log):
self.name = name
self.log = log
def debug(self, fmtstr, *args):
msg = "H2Conn {}: {}".format(self.name, fmtstr % args)
self.log(msg, "debug")
def trace(self, fmtstr, *args):
pass
def __init__(self, ctx, mode: str) -> None:
super().__init__(ctx)
self.mode = mode
self.streams: Dict[int, Http2SingleStreamLayer] = dict()
self.server_to_client_stream_ids: Dict[int, int] = dict([(0, 0)])
self.connections: Dict[object, SafeH2Connection] = {}
config = h2.config.H2Configuration(
client_side=False,
header_encoding=False,
validate_outbound_headers=False,
validate_inbound_headers=False,
logger=self.H2ConnLogger("client", self.log))
self.connections[self.client_conn] = SafeH2Connection(self.client_conn, config=config)
def _initiate_server_conn(self):
if self.server_conn.connected():
config = h2.config.H2Configuration(
client_side=True,
header_encoding=False,
validate_outbound_headers=False,
validate_inbound_headers=False,
logger=self.H2ConnLogger("server", self.log))
self.connections[self.server_conn] = SafeH2Connection(self.server_conn, config=config)
self.connections[self.server_conn].initiate_connection()
self.server_conn.send(self.connections[self.server_conn].data_to_send())
def _complete_handshake(self):
preamble = self.client_conn.rfile.read(24)
self.connections[self.client_conn].initiate_connection()
self.connections[self.client_conn].receive_data(preamble)
self.client_conn.send(self.connections[self.client_conn].data_to_send())
def next_layer(self): # pragma: no cover
# WebSocket over HTTP/2?
# CONNECT for proxying?
raise NotImplementedError()
def _handle_event(self, event, source_conn, other_conn, is_server):
self.log(
"HTTP2 Event from {}".format("server" if is_server else "client"),
"debug",
[repr(event)]
)
eid = None
if hasattr(event, 'stream_id'):
if is_server and event.stream_id % 2 == 1:
eid = self.server_to_client_stream_ids[event.stream_id]
else:
eid = event.stream_id
if isinstance(event, events.RequestReceived):
return self._handle_request_received(eid, event)
elif isinstance(event, events.ResponseReceived):
return self._handle_response_received(eid, event)
elif isinstance(event, events.DataReceived):
return self._handle_data_received(eid, event, source_conn)
elif isinstance(event, events.StreamEnded):
return self._handle_stream_ended(eid)
elif isinstance(event, events.StreamReset):
return self._handle_stream_reset(eid, event, is_server, other_conn)
elif isinstance(event, events.RemoteSettingsChanged):
return self._handle_remote_settings_changed(event, other_conn)
elif isinstance(event, events.ConnectionTerminated):
return self._handle_connection_terminated(event, is_server)
elif isinstance(event, events.PushedStreamReceived):
return self._handle_pushed_stream_received(event)
elif isinstance(event, events.PriorityUpdated):
return self._handle_priority_updated(eid, event)
elif isinstance(event, events.TrailersReceived):
return self._handle_trailers(eid, event, is_server, other_conn)
# fail-safe for unhandled events
return True
def _handle_request_received(self, eid, event):
headers = mitmproxy.net.http.Headers([[k, v] for k, v in event.headers])
self.streams[eid] = Http2SingleStreamLayer(self, self.connections[self.client_conn], eid, headers)
self.streams[eid].timestamp_start = time.time()
if event.priority_updated is not None:
self.streams[eid].priority_exclusive = event.priority_updated.exclusive
self.streams[eid].priority_depends_on = event.priority_updated.depends_on
self.streams[eid].priority_weight = event.priority_updated.weight
self.streams[eid].handled_priority_event = event.priority_updated
self.streams[eid].start()
self.streams[eid].request_message.arrived.set()
return True
def _handle_response_received(self, eid, event):
headers = mitmproxy.net.http.Headers([[k, v] for k, v in event.headers])
self.streams[eid].queued_data_length = 0
self.streams[eid].timestamp_start = time.time()
self.streams[eid].response_message.headers = headers
self.streams[eid].response_message.arrived.set()
return True
def _handle_data_received(self, eid, event, source_conn):
bsl = human.parse_size(self.config.options.body_size_limit)
if bsl and self.streams[eid].queued_data_length > bsl:
self.streams[eid].kill()
self.connections[source_conn].safe_reset_stream(
event.stream_id,
h2.errors.ErrorCodes.REFUSED_STREAM
)
self.log("HTTP body too large. Limit is {}.".format(bsl), "info")
else:
self.streams[eid].data_queue.put(event.data)
self.streams[eid].queued_data_length += len(event.data)
# always acknowledge receved data with a WINDOW_UPDATE frame
self.connections[source_conn].safe_acknowledge_received_data(
event.flow_controlled_length,
event.stream_id
)
return True
def _handle_stream_ended(self, eid):
self.streams[eid].timestamp_end = time.time()
self.streams[eid].stream_ended.set()
return True
def _handle_stream_reset(self, eid, event, is_server, other_conn):
if eid in self.streams:
self.streams[eid].kill()
if is_server:
other_stream_id = self.streams[eid].client_stream_id
else:
other_stream_id = self.streams[eid].server_stream_id
if other_stream_id is not None:
self.connections[other_conn].safe_reset_stream(other_stream_id, event.error_code)
return True
def _handle_trailers(self, eid, event, is_server, other_conn):
trailers = mitmproxy.net.http.Headers([[k, v] for k, v in event.headers])
self.streams[eid].trailers = trailers
return True
def _handle_remote_settings_changed(self, event, other_conn):
new_settings = dict([(key, cs.new_value) for (key, cs) in event.changed_settings.items()])
self.connections[other_conn].safe_update_settings(new_settings)
return True
def _handle_connection_terminated(self, event, is_server):
self.log("HTTP/2 connection terminated by {}: error code: {}, last stream id: {}, additional data: {}".format(
"server" if is_server else "client",
event.error_code,
event.last_stream_id,
event.additional_data), "info")
if event.error_code != h2.errors.ErrorCodes.NO_ERROR:
# Something terrible has happened - kill everything!
self.connections[self.client_conn].close_connection(
error_code=event.error_code,
last_stream_id=event.last_stream_id,
additional_data=event.additional_data
)
self.client_conn.send(self.connections[self.client_conn].data_to_send())
self._kill_all_streams()
else:
"""
Do not immediately terminate the other connection.
Some streams might be still sending data to the client.
"""
return False
def _handle_pushed_stream_received(self, event):
# pushed stream ids should be unique and not dependent on race conditions
# only the parent stream id must be looked up first
parent_eid = self.server_to_client_stream_ids[event.parent_stream_id]
with self.connections[self.client_conn].lock:
self.connections[self.client_conn].push_stream(parent_eid, event.pushed_stream_id, event.headers)
self.client_conn.send(self.connections[self.client_conn].data_to_send())
headers = mitmproxy.net.http.Headers([[k, v] for k, v in event.headers])
layer = Http2SingleStreamLayer(self, self.connections[self.client_conn], event.pushed_stream_id, headers)
self.streams[event.pushed_stream_id] = layer
self.streams[event.pushed_stream_id].timestamp_start = time.time()
self.streams[event.pushed_stream_id].pushed = True
self.streams[event.pushed_stream_id].parent_stream_id = parent_eid
self.streams[event.pushed_stream_id].timestamp_end = time.time()
self.streams[event.pushed_stream_id].request_message.arrived.set()
self.streams[event.pushed_stream_id].request_message.stream_ended.set()
self.streams[event.pushed_stream_id].start()
return True
def _handle_priority_updated(self, eid, event):
if not self.config.options.http2_priority:
self.log("HTTP/2 PRIORITY frame suppressed. Use --http2-priority to enable forwarding.", "debug")
return True
if eid in self.streams and self.streams[eid].handled_priority_event is event:
# this event was already handled during stream creation
# HeadersFrame + Priority information as RequestReceived
return True
with self.connections[self.server_conn].lock:
mapped_stream_id = event.stream_id
if mapped_stream_id in self.streams and self.streams[mapped_stream_id].server_stream_id:
# if the stream is already up and running and was sent to the server,
# use the mapped server stream id to update priority information
mapped_stream_id = self.streams[mapped_stream_id].server_stream_id
if eid in self.streams:
self.streams[eid].priority_exclusive = event.exclusive
self.streams[eid].priority_depends_on = event.depends_on
self.streams[eid].priority_weight = event.weight
self.connections[self.server_conn].prioritize(
mapped_stream_id,
weight=event.weight,
depends_on=self._map_depends_on_stream_id(mapped_stream_id, event.depends_on),
exclusive=event.exclusive
)
self.server_conn.send(self.connections[self.server_conn].data_to_send())
return True
def _map_depends_on_stream_id(self, stream_id, depends_on):
mapped_depends_on = depends_on
if mapped_depends_on in self.streams and self.streams[mapped_depends_on].server_stream_id:
# if the depends-on-stream is already up and running and was sent to the server
# use the mapped server stream id to update priority information
mapped_depends_on = self.streams[mapped_depends_on].server_stream_id
if stream_id == mapped_depends_on:
# looks like one of the streams wasn't opened yet
# prevent self-dependent streams which result in ProtocolError
mapped_depends_on += 2
return mapped_depends_on
def _cleanup_streams(self):
death_time = time.time() - 10
zombie_streams = [(stream_id, stream) for stream_id, stream in list(self.streams.items()) if stream.zombie]
outdated_streams = [stream_id for stream_id, stream in zombie_streams if stream.zombie <= death_time]
for stream_id in outdated_streams: # pragma: no cover
self.streams.pop(stream_id, None)
def _kill_all_streams(self):
for stream in self.streams.values():
stream.kill()
def __call__(self):
self._initiate_server_conn()
self._complete_handshake()
conns = [c.connection for c in self.connections.keys()]
try:
while True:
r = tcp.ssl_read_select(conns, 0.1)
for conn in r:
source_conn = self.client_conn if conn == self.client_conn.connection else self.server_conn
other_conn = self.server_conn if conn == self.client_conn.connection else self.client_conn
is_server = (source_conn == self.server_conn)
with self.connections[source_conn].lock:
try:
raw_frame = b''.join(http2.read_raw_frame(source_conn.rfile))
except:
# read frame failed: connection closed
self._kill_all_streams()
return
if self.connections[source_conn].state_machine.state == h2.connection.ConnectionState.CLOSED:
self.log("HTTP/2 connection entered closed state already", "debug")
return
incoming_events = self.connections[source_conn].receive_data(raw_frame)
source_conn.send(self.connections[source_conn].data_to_send())
for event in incoming_events:
if not self._handle_event(event, source_conn, other_conn, is_server):
# connection terminated: GoAway
self._kill_all_streams()
return
self._cleanup_streams()
except Exception as e: # pragma: no cover
self.log(repr(e), "info")
self._kill_all_streams()
def detect_zombie_stream(func): # pragma: no cover
@functools.wraps(func)
def wrapper(self, *args, **kwargs):
self.raise_zombie()
result = func(self, *args, **kwargs)
self.raise_zombie()
return result
return wrapper
class Http2SingleStreamLayer(httpbase._HttpTransmissionLayer, basethread.BaseThread):
class Message:
def __init__(self, headers=None):
self.headers: Optional[mitmproxy.net.http.Headers] = headers # headers are the first thing to be received on a new stream
self.data_queue: queue.Queue[bytes] = queue.Queue() # contains raw contents of DATA frames
self.queued_data_length = 0 # used to enforce mitmproxy's config.options.body_size_limit
self.trailers: Optional[mitmproxy.net.http.Headers] = None # trailers are received after stream_ended is set
self.arrived = threading.Event() # indicates the HEADERS+CONTINUTATION frames have been received
self.stream_ended = threading.Event() # indicates the a frame with the END_STREAM flag has been received
def __init__(self, ctx, h2_connection, stream_id: int, request_headers: mitmproxy.net.http.Headers) -> None:
super().__init__(
ctx, name="Http2SingleStreamLayer-{}".format(stream_id)
)
self.h2_connection = h2_connection
self.zombie: Optional[float] = None
self.client_stream_id: int = stream_id
self.server_stream_id: Optional[int] = None
self.pushed = False
self.timestamp_start: Optional[float] = None
self.timestamp_end: Optional[float] = None
self.request_message = self.Message(request_headers)
self.response_message = self.Message()
self.priority_exclusive: bool
self.priority_depends_on: Optional[int] = None
self.priority_weight: Optional[int] = None
self.handled_priority_event: Any = None
def kill(self):
if not self.zombie:
self.zombie = time.time()
self.request_message.stream_ended.set()
self.request_message.arrived.set()
self.response_message.arrived.set()
self.response_message.stream_ended.set()
def connect(self): # pragma: no cover
raise exceptions.Http2ProtocolException("HTTP2 layer should already have a connection.")
def disconnect(self): # pragma: no cover
raise exceptions.Http2ProtocolException("Cannot dis- or reconnect in HTTP2 connections.")
def set_server(self, address): # pragma: no cover
raise exceptions.SetServerNotAllowedException(repr(address))
def check_close_connection(self, flow):
# This layer only handles a single stream.
# RFC 7540 8.1: An HTTP request/response exchange fully consumes a single stream.
return True
@property
def data_queue(self):
if self.response_message.arrived.is_set():
return self.response_message.data_queue
else:
return self.request_message.data_queue
@property
def queued_data_length(self):
if self.response_message.arrived.is_set():
return self.response_message.queued_data_length
else:
return self.request_message.queued_data_length
@queued_data_length.setter
def queued_data_length(self, v):
self.request_message.queued_data_length = v
@property
def stream_ended(self):
# This indicates that all message headers, the full message body, and all trailers have been received
# https://tools.ietf.org/html/rfc7540#section-8.1
if self.response_message.arrived.is_set():
return self.response_message.stream_ended
else:
return self.request_message.stream_ended
@property
def trailers(self):
if self.response_message.arrived.is_set():
return self.response_message.trailers
else:
return self.request_message.trailers
@trailers.setter
def trailers(self, v):
if self.response_message.arrived.is_set():
self.response_message.trailers = v
else:
self.request_message.trailers = v
def raise_zombie(self, pre_command=None): # pragma: no cover
connection_closed = self.h2_connection.state_machine.state == h2.connection.ConnectionState.CLOSED
if self.zombie is not None or connection_closed:
if pre_command is not None:
pre_command()
raise exceptions.Http2ZombieException("Connection or stream already dead: {}, {}".format(self.zombie, connection_closed))
@detect_zombie_stream
def read_request_headers(self, flow):
self.request_message.arrived.wait()
self.raise_zombie()
if self.pushed:
flow.metadata['h2-pushed-stream'] = True
# pseudo header must be present, see https://http2.github.io/http2-spec/#rfc.section.8.1.2.3
authority = self.request_message.headers.pop(':authority', "")
method = self.request_message.headers.pop(':method')
scheme = self.request_message.headers.pop(':scheme')
path = self.request_message.headers.pop(':path')
host, port = url.parse_authority(authority, check=True)
port = port or url.default_port(scheme) or 0
return http.HTTPRequest(
host,
port,
method.encode(),
scheme.encode(),
authority.encode(),
path.encode(),
b"HTTP/2.0",
self.request_message.headers,
None,
None,
self.timestamp_start,
self.timestamp_end,
)
@detect_zombie_stream
def read_request_body(self, request):
if not request.stream:
self.request_message.stream_ended.wait()
while True:
try:
yield self.request_message.data_queue.get(timeout=0.1)
except queue.Empty: # pragma: no cover
pass
if self.request_message.stream_ended.is_set():
self.raise_zombie()
while self.request_message.data_queue.qsize() > 0:
yield self.request_message.data_queue.get()
break
self.raise_zombie()
@detect_zombie_stream
def read_request_trailers(self, request):
return self.request_message.trailers
@detect_zombie_stream
def send_request_headers(self, request):
if self.pushed:
# nothing to do here
return
while True:
self.raise_zombie()
self.connections[self.server_conn].lock.acquire()
max_streams = self.connections[self.server_conn].remote_settings.max_concurrent_streams
if self.connections[self.server_conn].open_outbound_streams + 1 >= max_streams:
# wait until we get a free slot for a new outgoing stream
self.connections[self.server_conn].lock.release()
time.sleep(0.1)
continue
# keep the lock
break
# We must not assign a stream id if we are already a zombie.
self.raise_zombie()
self.server_stream_id = self.connections[self.server_conn].get_next_available_stream_id()
self.server_to_client_stream_ids[self.server_stream_id] = self.client_stream_id
headers = request.headers.copy()
if request.authority:
headers.insert(0, ":authority", request.authority)
headers.insert(0, ":path", request.path)
headers.insert(0, ":method", request.method)
headers.insert(0, ":scheme", request.scheme)
priority_exclusive = None
priority_depends_on = None
priority_weight = None
if self.handled_priority_event:
# only send priority information if they actually came with the original HeadersFrame
# and not if they got updated before/after with a PriorityFrame
if not self.config.options.http2_priority:
self.log("HTTP/2 PRIORITY information in HEADERS frame suppressed. Use --http2-priority to enable forwarding.", "debug")
else:
priority_exclusive = self.priority_exclusive
priority_depends_on = self._map_depends_on_stream_id(self.server_stream_id, self.priority_depends_on)
priority_weight = self.priority_weight
try:
self.connections[self.server_conn].safe_send_headers(
self.raise_zombie,
self.server_stream_id,
headers,
priority_exclusive=priority_exclusive,
priority_depends_on=priority_depends_on,
priority_weight=priority_weight,
)
except Exception as e: # pragma: no cover
raise e
finally:
self.raise_zombie()
self.connections[self.server_conn].lock.release()
@detect_zombie_stream
def send_request_body(self, request, chunks):
if self.pushed:
# nothing to do here
return
self.connections[self.server_conn].safe_send_body(
self.raise_zombie,
self.server_stream_id,
chunks,
end_stream=(request.trailers is None),
)
@detect_zombie_stream
def send_request_trailers(self, request):
self._send_trailers(self.server_conn, request.trailers)
@detect_zombie_stream
def send_request(self, request):
self.send_request_headers(request)
self.send_request_body(request, [request.content])
self.send_request_trailers(request)
@detect_zombie_stream
def read_response_headers(self):
self.response_message.arrived.wait()
self.raise_zombie()
status_code = int(self.response_message.headers.get(':status', 502))
headers = self.response_message.headers.copy()
headers.pop(":status", None)
return http.HTTPResponse(
http_version=b"HTTP/2.0",
status_code=status_code,
reason=b'',
headers=headers,
content=None,
trailers=None,
timestamp_start=self.timestamp_start,
timestamp_end=self.timestamp_end,
)
@detect_zombie_stream
def read_response_body(self, request, response):
while True:
try:
yield self.response_message.data_queue.get(timeout=0.1)
except queue.Empty: # pragma: no cover
pass
if self.response_message.stream_ended.is_set():
self.raise_zombie()
while self.response_message.data_queue.qsize() > 0:
yield self.response_message.data_queue.get()
break
self.raise_zombie()
@detect_zombie_stream
def read_response_trailers(self, request, response):
return self.response_message.trailers
@detect_zombie_stream
def send_response_headers(self, response):
headers = response.headers.copy()
headers.insert(0, ":status", str(response.status_code))
with self.connections[self.client_conn].lock:
self.connections[self.client_conn].safe_send_headers(
self.raise_zombie,
self.client_stream_id,
headers
)
@detect_zombie_stream
def send_response_body(self, response, chunks):
self.connections[self.client_conn].safe_send_body(
self.raise_zombie,
self.client_stream_id,
chunks,
end_stream=(response.trailers is None),
)
@detect_zombie_stream
def send_response_trailers(self, response):
self._send_trailers(self.client_conn, response.trailers)
def _send_trailers(self, conn, trailers):
if not trailers:
return
with self.connections[conn].lock:
self.connections[conn].safe_send_headers(
self.raise_zombie,
self.client_stream_id,
trailers,
end_stream=True
)
def __call__(self): # pragma: no cover
raise EnvironmentError('Http2SingleStreamLayer must be run as thread')
def run(self):
layer = httpbase.HttpLayer(self, self.mode)
try:
layer()
except exceptions.Http2ZombieException: # pragma: no cover
# zombies can be safely terminated - no need to kill them twice
return
except exceptions.ProtocolException as e: # pragma: no cover
self.log(repr(e), "info")
except exceptions.SetServerNotAllowedException as e: # pragma: no cover
self.log("Changing the Host server for HTTP/2 connections not allowed: {}".format(e), "info")
except exceptions.Kill: # pragma: no cover
self.log("Connection killed", "info")
self.kill()
|
Phonographic Copyright (p) – Tranman Entertainment B.V.
Choir recorded at Abbey Road, CTS and Metropolis Studios in London.
Guitars, keyboards and vocals recorded at Real World Studios in Bath-U.K.
Bass and drums recorded at Quad Studios in Nashville-USA.
Strings recorded at Angel Studios in London.
Additionals keyboards and guitars recorded at Studio Mega in Paris.
Mixed at Studio Mega in Paris.
1,3,4,6,7,8,9,10,11,12 ℗ 1996 Tranman Entertainment B.V.
2 ℗ 1997 Tranman Entertainment B.V.
5 ℗ 1998 Tranman Entertainment B.V.
|
#!/usr/bin/env python
# Copyright (c) 2003 Daniel DiPaolo
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
#
handler_list=["fortune", "excuse"]
from moobot_module import MooBotModule
class fortune(MooBotModule):
def __init__(self):
self.regex="^fortune$"
def handler(self, **args):
"""Grabs a fortune and spits it out"""
import os
from irclib import Event
fortune_txt = os.popen("fortune -s").read()
return Event("privmsg", "", self.return_to_sender(args), [ fortune_txt ])
class excuse(MooBotModule):
def __init__(self):
self.regex="^excuse$"
def handler(self, **args):
"""Grabs an excuse from the bofh fortune file and spits it out"""
import os
from irclib import Event
fortune_txt = os.popen("fortune bofh-excuses|tail --lines=+2").read()
return Event("privmsg", "", self.return_to_sender(args), [ fortune_txt ])
|
Try to keep Tobby in balance while collecting diamonds.
Fast food franchises beware, the angry fatties are out for revenge. After bulking their bodies up they have begun launching themselves at restaurant after restaurant of soul less corporate crap.
|
from win32com.client import constants
import win32com.client
import pythoncom
"""Sample code for using the Microsoft Speech SDK 5.1 via COM in Python.
Requires that the SDK be installed; it's a free download from
http://microsoft.com/speech
and that MakePy has been used on it (in PythonWin,
select Tools | COM MakePy Utility | Microsoft Speech Object Library 5.1).
After running this, then saying "One", "Two", "Three" or "Four" should
display "You said One" etc on the console. The recognition can be a bit
shaky at first until you've trained it (via the Speech entry in the Windows
Control Panel."""
class SpeechRecognition:
""" Initialize the speech recognition with the passed in list of words """
def __init__(self, wordsToAdd):
# For text-to-speech
self.speaker = win32com.client.Dispatch("SAPI.SpVoice")
# For speech recognition - first create a listener
self.listener = win32com.client.Dispatch("SAPI.SpSharedRecognizer")
# Then a recognition context
self.context = self.listener.CreateRecoContext()
# which has an associated grammar
self.grammar = self.context.CreateGrammar()
# Do not allow free word recognition - only command and control
# recognizing the words in the grammar only
self.grammar.DictationSetState(0)
# Create a new rule for the grammar, that is top level (so it begins
# a recognition) and dynamic (ie we can change it at runtime)
self.wordsRule = self.grammar.Rules.Add("wordsRule",
constants.SRATopLevel + constants.SRADynamic, 0)
# Clear the rule (not necessary first time, but if we're changing it
# dynamically then it's useful)
self.wordsRule.Clear()
# And go through the list of words, adding each to the rule
[ self.wordsRule.InitialState.AddWordTransition(None, word) for word in wordsToAdd ]
# Set the wordsRule to be active
self.grammar.Rules.Commit()
self.grammar.CmdSetRuleState("wordsRule", 1)
# Commit the changes to the grammar
self.grammar.Rules.Commit()
# And add an event handler that's called back when recognition occurs
self.eventHandler = ContextEvents(self.context)
# Announce we've started using speech synthesis
self.say("Started successfully")
"""Speak a word or phrase"""
def say(self, phrase):
self.speaker.Speak(phrase)
"""The callback class that handles the events raised by the speech object.
See "Automation | SpSharedRecoContext (Events)" in the MS Speech SDK
online help for documentation of the other events supported. """
class ContextEvents(win32com.client.getevents("SAPI.SpSharedRecoContext")):
"""Called when a word/phrase is successfully recognized -
ie it is found in a currently open grammar with a sufficiently high
confidence"""
def OnRecognition(self, StreamNumber, StreamPosition, RecognitionType, Result):
newResult = win32com.client.Dispatch(Result)
print "You said: ",newResult.PhraseInfo.GetText()
if __name__=='__main__':
wordsToAdd = [ "One", "Two", "Three", "Four" ]
speechReco = SpeechRecognition(wordsToAdd)
while 1:
pythoncom.PumpWaitingMessages()
|
The Foundation for Jewish Camp has teamed up with many communities to create incentive grants for first time campers over the past several summers. Many communities have already used up their “One Happy Camper (OHC) dollars. While they have generously supported families in their areas, The Combined Jewish Philanthropies (CJP) (Boston’s federation) has depleted its resources for 2016.
Camp Avoda appreciates the support of community partners including CJP and The Jewish Alliance of Greater Rhode Island. At this point both of these agencies have depleted their funds. Camp Avoda appreciates community assistance and is pitching in to close the gap by funding OHC incentive grants to families who have no other sources – on their own. Camp Avoda will provide One Happy Camper incentive grants for any qualified camper from a community that does not or cannot fund its campers!
Avoda is a Jewish sports camp for boys. Avoda brotherhood, leadership, spirit, and tradition are among the most distinctive and unique aspects of the Avoda experience. Your son will arrive with uncertain expectations and leave with indelible memories. Campers have fun in a safe and supportive environment while building confidence and character. They will learn to work with others to solve problems and overcome challenges, and be valued for who they are.
There are still selected spaces available for summer 2016.
|
# This script generates an intermediate representation of the data model ready for translation into CSV
import json
import operator # Used in sorting
from sets import Set
from genmodel import generateModel, getName
# Change final parameter to False / True depending on whether you want roll-ups or not.
# Note to self: Use python gen-docs.py > ../website/standard/_includes/buildingblocks.html with rollups false for keeping documentation updated.
model = generateModel("http://joinedupdata.org/ontologies/philanthropy/Grant",1,{},False)
print "<ul>"
for table in sorted(model):
print "<li><a href='#"+table+"'>"+table +"</a></li>"
print "</ul>"
print "<p>Details on each of these building blocks can be found below.</p>"
for table in sorted(model):
print "<h4 class='activity' id='" + table + "'><span class='glyphicon glyphicon-th'></span> "+table+"</h4>"
print "<p>"+model[table]["_meta"]['description']+"</p>"
print "<p><strong>Types:</strong> "+ ", ".join(model[table]["_meta"]['types']) + "</p>"
print """
<div class="panel panel-primary">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-target="#%s">
Data properties
</a>
</h4>
</div>
<div id="%s" class="panel-collapse collapse out">
<div class="panel-body">
<table class="table">
<thead>
<tr>
<th>ID</th>
<th>Title (en)</th>
<th>Description</th>
<th>Values</th>
</tr>
</thead><tbody>
""" % ("table-"+table,"table-"+table)
c = 0
cols = []
#Dictionary sorting work-around
for col in model[table]:
if(not(col == '_meta')):
cols.append((col,model[table][col]["weight"]))
cols = sorted(cols,key=lambda x: x[1])
for col in cols:
print "<tr class='rowtype-"+str(model[table][col[0]]['values']).lower()+"'>"
print "<td>" + model[table][col[0]]['name'] + "</td>"
print "<td>" + model[table][col[0]]['title'] + "</td>"
try:
print "<td>" + model[table][col[0]]['description'] + "</td>"
except:
print "<td> No description </td>"
try:
print "<td>" + model[table][col[0]]['values'] + "</td>"
except:
print "<td> No values specified </td>"
print "</tr>"
c = c + 1
print """</tbody></table></div>
</div>
</div>"""
## Put together details of all the relationships
print """
<div class="panel panel-info">
<div class="panel-heading">
<h4 class="panel-title">
<a data-toggle="collapse" data-target="#%s">
Relationship properties
</a>
</h4>
</div>
<div id="%s" class="panel-collapse collapse out">
<div class="panel-body">
<table class="table">
<thead>
<tr>
<th>Relationship</th>
<th>Title</th>
<th>Description</th>
<th>Related to</th>
</tr>
</thead>
<tbody>
""" % ("related-"+table,"related-"+table)
#Dictionary sorting work-around
rcols = []
for col in model[table]['_meta']['related']:
rcols.append((col,model[table]['_meta']['related'][col]["topObject"]))
rcols = sorted(rcols,key=lambda x: x[1])
for related in rcols:
relatedItem = model[table]['_meta']['related'][related[0]]
print "<tr>"
print "<td>" + relatedItem['relationshipName'] + "</td>"
print "<td>" + relatedItem['title'] + "</td>"
print "<td>" + relatedItem['description'] + "</td>"
print "<td> <a href='#" + relatedItem['topObject'] + "'>" + relatedItem['objectName'] + " (" + relatedItem['topObject'] +")</a></td>"
print "</tr>"
print """</tbody></table></div>
</div>
</div>"""
|
The Empire Music Hall in Belfast was alive tonight with a killer lineup of local talent. The gorgeous, historic venue made for a regal backdrop for these budding bands with Northern Irish roots.
The first support band, Sister Ghost brought pure garage band power with songs with lyrics you could almost feel being etched on your skin as surely as they were scrawled in ruled notebooks. The frontwoman, Shannon D O’Neill, presented us with an anthology of her song-writing and personal history over thrilling harmonies and catchy hooks. The tight set of poignant tunes culminated in a wall of sound whose crescendo saw O’Neill creating feedback by jumping down into the front row and sawing her guitar against the edge of the stage.
When the second support act, CATALAN!, took to the stage, they brought with them a more upbeat vibe with tropical, summery tones. Beckoning the meagre but steadily growing audience to take several steps towards the stage and “express their enthusiasm with vocalisations”, frontman Ewen Friers and his insurgency rock band managed to disarm a wary crowd and liven up the room with their unison anthems and detailed ditties.
The hall was in full swing by the time the New Pagans hit the stage with their dark and driving sound. With absolute bombshell, Lyndsey McDougall behind a dual-mic setup—one spewing an over-driven distortion—the band produced a phenomenal musicality that begs to be performed live and blew away the recorded counterparts.
From the moment the band stepped on stage, McDougall set the mood with her listless yet forceful stance, clutching her flowy black dress up to her side and pulling at her hair vacantly. She spent the better part of an entire song doubled over, seemingly in agony of life’s cruel ironies, all the while belting out spell-binding vocals. The New Pagans’ collection of singles, with strong lyrics and smashing drumbeats, seemed to grab you by the ear and drag you down the hall to the principal’s office.
With grunge-alt influences like Sonic Youth, Nirvana, and Bikini Kill, the New Pagans are skipping down the path of fire and brimstone forged by those who’ve come before in a tradition that bears no risk of flaming out anytime soon. With the talented people behind this musical endeavour, the New Pagans are well worth checking out and will hopefully be making their mark on the international stage in the near future.
|
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import collections
import json
import six
from novaclient import client as nc
from novaclient import exceptions
from novaclient import shell as novashell
from bilean.common import exception
from bilean.common.i18n import _
from bilean.common.i18n import _LW
from bilean.engine.clients import client_plugin
from oslo_log import log as logging
LOG = logging.getLogger(__name__)
class NovaClientPlugin(client_plugin.ClientPlugin):
deferred_server_statuses = ['BUILD',
'HARD_REBOOT',
'PASSWORD',
'REBOOT',
'RESCUE',
'RESIZE',
'REVERT_RESIZE',
'SHUTOFF',
'SUSPENDED',
'VERIFY_RESIZE']
exceptions_module = exceptions
def _create(self):
computeshell = novashell.OpenStackComputeShell()
extensions = computeshell._discover_extensions("1.1")
endpoint_type = self._get_client_option('nova', 'endpoint_type')
args = {
'project_id': self.context.tenant,
'auth_url': self.context.auth_url,
'service_type': 'compute',
'username': None,
'api_key': None,
'extensions': extensions,
'endpoint_type': endpoint_type,
'http_log_debug': self._get_client_option('nova',
'http_log_debug'),
'cacert': self._get_client_option('nova', 'ca_file'),
'insecure': self._get_client_option('nova', 'insecure')
}
client = nc.Client(1.1, **args)
management_url = self.url_for(service_type='compute',
endpoint_type=endpoint_type)
client.client.auth_token = self.auth_token
client.client.management_url = management_url
return client
def is_not_found(self, ex):
return isinstance(ex, exceptions.NotFound)
def is_over_limit(self, ex):
return isinstance(ex, exceptions.OverLimit)
def is_bad_request(self, ex):
return isinstance(ex, exceptions.BadRequest)
def is_conflict(self, ex):
return isinstance(ex, exceptions.Conflict)
def is_unprocessable_entity(self, ex):
http_status = (getattr(ex, 'http_status', None) or
getattr(ex, 'code', None))
return (isinstance(ex, exceptions.ClientException) and
http_status == 422)
def refresh_server(self, server):
'''Refresh server's attributes.
Log warnings for non-critical API errors.
'''
try:
server.get()
except exceptions.OverLimit as exc:
LOG.warn(_LW("Server %(name)s (%(id)s) received an OverLimit "
"response during server.get(): %(exception)s"),
{'name': server.name,
'id': server.id,
'exception': exc})
except exceptions.ClientException as exc:
if ((getattr(exc, 'http_status', getattr(exc, 'code', None)) in
(500, 503))):
LOG.warn(_LW('Server "%(name)s" (%(id)s) received the '
'following exception during server.get(): '
'%(exception)s'),
{'name': server.name,
'id': server.id,
'exception': exc})
else:
raise
def get_ip(self, server, net_type, ip_version):
"""Return the server's IP of the given type and version."""
if net_type in server.addresses:
for ip in server.addresses[net_type]:
if ip['version'] == ip_version:
return ip['addr']
def get_status(self, server):
'''Return the server's status.
:param server: server object
:returns: status as a string
'''
# Some clouds append extra (STATUS) strings to the status, strip it
return server.status.split('(')[0]
def get_flavor_id(self, flavor):
'''Get the id for the specified flavor name.
If the specified value is flavor id, just return it.
:param flavor: the name of the flavor to find
:returns: the id of :flavor:
:raises: exception.FlavorMissing
'''
flavor_id = None
flavor_list = self.client().flavors.list()
for o in flavor_list:
if o.name == flavor:
flavor_id = o.id
break
if o.id == flavor:
flavor_id = o.id
break
if flavor_id is None:
raise exception.FlavorMissing(flavor_id=flavor)
return flavor_id
def get_keypair(self, key_name):
'''Get the public key specified by :key_name:
:param key_name: the name of the key to look for
:returns: the keypair (name, public_key) for :key_name:
:raises: exception.UserKeyPairMissing
'''
try:
return self.client().keypairs.get(key_name)
except exceptions.NotFound:
raise exception.UserKeyPairMissing(key_name=key_name)
def delete_server(self, server):
'''Deletes a server and waits for it to disappear from Nova.'''
if not server:
return
try:
server.delete()
except Exception as exc:
self.ignore_not_found(exc)
return
while True:
yield
try:
self.refresh_server(server)
except Exception as exc:
self.ignore_not_found(exc)
break
else:
# Some clouds append extra (STATUS) strings to the status
short_server_status = server.status.split('(')[0]
if short_server_status in ("DELETED", "SOFT_DELETED"):
break
if short_server_status == "ERROR":
fault = getattr(server, 'fault', {})
message = fault.get('message', 'Unknown')
code = fault.get('code')
errmsg = (_("Server %(name)s delete failed: (%(code)s) "
"%(message)s"))
raise exception.Error(errmsg % {"name": server.name,
"code": code,
"message": message})
def delete(self, server_id):
'''Delete a server by given server id'''
self.client().servers.delete(server_id)
def resize(self, server, flavor, flavor_id):
"""Resize the server and then call check_resize task to verify."""
server.resize(flavor_id)
yield self.check_resize(server, flavor, flavor_id)
def rename(self, server, name):
"""Update the name for a server."""
server.update(name)
def check_resize(self, server, flavor, flavor_id):
"""Verify that a resizing server is properly resized.
If that's the case, confirm the resize, if not raise an error.
"""
self.refresh_server(server)
while server.status == 'RESIZE':
yield
self.refresh_server(server)
if server.status == 'VERIFY_RESIZE':
server.confirm_resize()
else:
raise exception.Error(
_("Resizing to '%(flavor)s' failed, status '%(status)s'") %
dict(flavor=flavor, status=server.status))
def rebuild(self, server, image_id, preserve_ephemeral=False):
"""Rebuild the server and call check_rebuild to verify."""
server.rebuild(image_id, preserve_ephemeral=preserve_ephemeral)
yield self.check_rebuild(server, image_id)
def check_rebuild(self, server, image_id):
"""Verify that a rebuilding server is rebuilt.
Raise error if it ends up in an ERROR state.
"""
self.refresh_server(server)
while server.status == 'REBUILD':
yield
self.refresh_server(server)
if server.status == 'ERROR':
raise exception.Error(
_("Rebuilding server failed, status '%s'") % server.status)
def meta_serialize(self, metadata):
"""Serialize non-string metadata values before sending them to Nova."""
if not isinstance(metadata, collections.Mapping):
raise exception.StackValidationFailed(message=_(
"nova server metadata needs to be a Map."))
return dict((key, (value if isinstance(value,
six.string_types)
else json.dumps(value))
) for (key, value) in metadata.items())
def meta_update(self, server, metadata):
"""Delete/Add the metadata in nova as needed."""
metadata = self.meta_serialize(metadata)
current_md = server.metadata
to_del = [key for key in current_md.keys() if key not in metadata]
client = self.client()
if len(to_del) > 0:
client.servers.delete_meta(server, to_del)
client.servers.set_meta(server, metadata)
def server_to_ipaddress(self, server):
'''Return the server's IP address, fetching it from Nova.'''
try:
server = self.client().servers.get(server)
except exceptions.NotFound as ex:
LOG.warn(_LW('Instance (%(server)s) not found: %(ex)s'),
{'server': server, 'ex': ex})
else:
for n in server.networks:
if len(server.networks[n]) > 0:
return server.networks[n][0]
def get_server(self, server):
try:
return self.client().servers.get(server)
except exceptions.NotFound as ex:
LOG.warn(_LW('Server (%(server)s) not found: %(ex)s'),
{'server': server, 'ex': ex})
raise exception.ServerNotFound(server=server)
def absolute_limits(self):
"""Return the absolute limits as a dictionary."""
limits = self.client().limits.get()
return dict([(limit.name, limit.value)
for limit in list(limits.absolute)])
|
This Nexie Tube clock has been one of our best selling digial clock, clock body is made out of oak wood, colors is customizable, time's displayed in hour, minute and seconds.
Looking for ideal Digital Alarm Clock Manufacturer & supplier ? We have a wide selection at great prices to help you get creative. All the Nixie Tube Clock are quality guaranteed. We are China Origin Factory of Nixie Tube Clock for Sale. If you have any question, please feel free to contact us.
|
# Copyright 2015 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from magnumclient.common import cliutils as utils
from magnumclient.i18n import _
from osc_lib.command import command
class ListStats(command.Command):
_description = _("Show stats for the given project_id")
def get_parser(self, prog_name):
parser = super(ListStats, self).get_parser(prog_name)
parser.add_argument('project_id',
metavar='<project>',
help='Project ID')
return parser
def take_action(self, parsed_args):
mag_client = self.app.client_manager.container_infra
opts = {
'project_id': parsed_args.project_id
}
stats = mag_client.stats.list(**opts)
try:
utils.print_dict(stats._info)
except AttributeError:
return None
|
A woman who was wanted by police on recall to prison has been found in Great Yarmouth.
Georgia Sugden, 22, has been recalled by Norfolk Police for breaching the terms of her licence.
She was arrested in the Great Yarmouth area on Monday.
|
'''
Tests for intuition.api.datafeed
'''
import unittest
from nose.tools import raises, ok_, eq_, nottest
import random
import pytz
import datetime as dt
import pandas as pd
import intuition.api.datafeed as datafeed
from intuition.data.universe import Market
from intuition.errors import InvalidDatafeed
import dna.test_utils
class FakeBacktestDatasource(object):
def __init__(self, sids, properties):
pass
@property
def mapping(self):
return {
'backtest': (lambda x: True, 'sid'),
'dt': (lambda x: x, 'dt'),
'sid': (lambda x: x, 'sid'),
'price': (float, 'price'),
'volume': (int, 'volume'),
}
def get_data(self, sids, start, end):
index = pd.date_range(start, end, tz=pytz.utc)
return pd.DataFrame({sid: [random.random()] * len(index)
for sid in sids}, index=index)
class FakePanelBacktestDatasource(object):
def __init__(self, sids, properties):
pass
@property
def mapping(self):
return {
'backtest': (lambda x: True, 'sid'),
'dt': (lambda x: x, 'dt'),
'sid': (lambda x: x, 'sid'),
'price': (float, 'price'),
'low': (float, 'low'),
'high': (float, 'high'),
'volume': (int, 'volume'),
}
def get_data(self, sids, start, end):
index = pd.date_range(start, end, tz=pytz.utc)
fake_data = {}
for sid in sids:
fake_data[sid] = pd.DataFrame(
{field: [random.random()] * len(index)
for field in ['price', 'low', 'high', 'volume']}, index=index)
return pd.Panel(fake_data)
class FakePanelWithoutVolumeBacktestDatasource(object):
def __init__(self, sids, properties):
pass
def get_data(self, sids, start, end):
index = pd.date_range(start, end, tz=pytz.utc)
fake_data = {}
for sid in sids:
fake_data[sid] = pd.DataFrame(
{field: [random.random()] * len(index)
for field in ['price', 'low', 'high']}, index=index)
return pd.Panel(fake_data)
class FakeLiveDatasource(object):
def __init__(self, sids, properties):
pass
@property
def mapping(self):
return {
'live': True
}
def get_data(self, sids, start, end):
return pd.DataFrame()
class DatafeedUtilsTestCase(unittest.TestCase):
def setUp(self):
dna.test_utils.setup_logger(self)
self.fake_sid = 'fake_sid'
self.fake_one_sid_series = pd.Series(
{key: random.random() for key in ['low', 'close']})
self.fake_multiple_sids_series = pd.Series(
{key: random.random() for key in ['goog', 'fake_sid']})
self.fake_multiple_sids_df = pd.DataFrame(
{key: {'price': random.random(), 'close': 0.3}
for key in ['goog', 'fake_sid']})
self.fake_date = dt.datetime(2013, 1, 1)
def tearDown(self):
dna.test_utils.teardown_logger(self)
@nottest
def _check_event(self, event):
self.assertIsInstance(event, dict)
self.assertIn('volume', event)
self.assertIn('dt', event)
eq_(event['dt'], self.fake_date)
eq_(event['sid'], self.fake_sid)
def test_build_safe_event_without_volume(self):
partial_event = self.fake_one_sid_series.to_dict()
event = datafeed._build_safe_event(
partial_event, self.fake_date, self.fake_sid)
self._check_event(event)
for field in self.fake_one_sid_series.index:
self.assertIn(field, event.keys())
def test_build_safe_event_with_volume(self):
partial_event = self.fake_one_sid_series.to_dict()
partial_event.update({'volume': 12034})
event = datafeed._build_safe_event(
partial_event, self.fake_date, self.fake_sid)
self._check_event(event)
for field in self.fake_one_sid_series.index:
self.assertIn(field, event.keys())
@raises(AttributeError)
def test_wrong_data_type(self):
wrong_type = bool
datafeed._build_safe_event(wrong_type, self.fake_date, self.fake_sid)
def test_check_data_modules(self):
end = self.fake_date + pd.datetools.MonthBegin(6)
ok_(datafeed._check_data_modules(
'backtest.module', None, self.fake_date, end))
@raises(InvalidDatafeed)
def test_check_data_modules_all_nones(self):
end = self.fake_date + pd.datetools.MonthBegin(6)
datafeed._check_data_modules(None, None, self.fake_date, end)
class HybridDataFactoryTestCase(unittest.TestCase):
def setUp(self):
dna.test_utils.setup_logger(self)
self.test_index = pd.date_range(
'2012/01/01', '2012/01/7', tz=pytz.utc)
self.test_universe = 'forex,5'
self.market = Market()
self.market.parse_universe_description(self.test_universe)
self.test_sids = self.market.sids
def tearDown(self):
dna.test_utils.teardown_logger(self)
@nottest
def _check_datasource(self, source):
ok_((source.index == self.test_index).all())
eq_(source.start, self.test_index[0])
eq_(source.end, self.test_index[-1])
eq_(source.sids, self.test_sids)
self.assertIsNone(source._raw_data)
eq_(source.arg_string, source.instance_hash)
eq_(source.event_type, 4)
ok_(hasattr(source, 'log'))
self.assertFalse(source._is_live)
@raises(InvalidDatafeed)
def test_data_source_without_modules(self):
config = {
'sids': self.test_sids,
'index': self.test_index
}
datafeed.HybridDataFactory(**config)
@raises(InvalidDatafeed)
def test_data_source_invalid_index(self):
config = {
'sids': self.test_sids,
'index': bool
}
datafeed.HybridDataFactory(**config)
def test_minimal_data_source(self):
source = datafeed.HybridDataFactory(
universe=self.market,
index=self.test_index,
backtest=FakeBacktestDatasource)
self._check_datasource(source)
def test_hybrid_mapping(self):
source = datafeed.HybridDataFactory(
universe=self.market,
index=self.test_index,
backtest=FakeBacktestDatasource,
live=FakeLiveDatasource)
self.assertIn('backtest', source.mapping)
source._is_live = True
self.assertIn('live', source.mapping)
# TODO Test Live data sources
class SpecificMarketDataFactoryTestCase(unittest.TestCase):
def setUp(self):
dna.test_utils.setup_logger(self)
self.test_index = pd.date_range(
'2012/01/01', '2012/01/7', tz=pytz.utc)
def tearDown(self):
dna.test_utils.teardown_logger(self)
def test_dataframe_forex_backtest_data_generation(self):
test_universe = 'forex,5'
market = Market()
market.parse_universe_description(test_universe)
source = datafeed.HybridDataFactory(
universe=market,
index=self.test_index,
backtest=FakeBacktestDatasource)
total_rows = 0
for row in source.raw_data:
if not total_rows:
self.assertListEqual(
sorted(row.keys()),
sorted(['dt', 'price', 'sid', 'volume']))
total_rows += 1
eq_(total_rows, 2 * len(self.test_index) * len(market.sids))
def test_dataframe_cac40_backtest_data_generation(self):
test_universe = 'stocks:paris:cac40'
market = Market()
market.parse_universe_description(test_universe)
source = datafeed.HybridDataFactory(
universe=market,
index=self.test_index,
backtest=FakeBacktestDatasource)
total_rows = 0
for row in source.raw_data:
if not total_rows:
self.assertListEqual(
sorted(row.keys()),
sorted(['dt', 'price', 'sid', 'volume']))
total_rows += 1
eq_(total_rows, len(self.test_index) * len(market.sids))
def test_panel_cac40_backtest_data_generation(self):
test_universe = 'stocks:paris:cac40'
market = Market()
market.parse_universe_description(test_universe)
source = datafeed.HybridDataFactory(
universe=market,
index=self.test_index,
backtest=FakePanelBacktestDatasource)
total_rows = 0
for row in source.raw_data:
if not total_rows:
self.assertListEqual(
sorted(row.keys()),
sorted(['dt', 'price', 'low', 'high', 'sid', 'volume']))
total_rows += 1
eq_(total_rows, len(self.test_index) * len(market.sids))
def test_panel_without_volume_cac40_backtest_data_generation(self):
test_universe = 'stocks:paris:cac40,5'
market = Market()
market.parse_universe_description(test_universe)
source = datafeed.HybridDataFactory(
universe=market,
index=self.test_index,
backtest=FakePanelWithoutVolumeBacktestDatasource)
total_rows = 0
for row in source.raw_data:
if not total_rows:
self.assertListEqual(
sorted(row.keys()),
sorted(['dt', 'price', 'low', 'high', 'sid', 'volume']))
total_rows += 1
eq_(total_rows, len(self.test_index) * len(market.sids))
|
This is a very grim percentage fact about the current state of the world's overall health. 30% of the world's population is either overweight or obese. This marks the first time in world history where there are more people unhealthly overweight than people in absolute poverty. The rise in obesity is attributed to the ease and affordability at which companies can ship processed, high calorie, foods to anywhere in the world. The best way to combat this epidemic is to research exactly what you are eating, as well as those around you, and limit the amount of processed food, especially high sugary drinks, such as coke and pepsi. If we all keep this statistic in mind, we can then think about how to get the people in the world out of poverty because clearly there is more than enough food to feed the whole world.
|
"""
Test C++ virtual function and virtual inheritance.
"""
from __future__ import print_function
import os
import re
import lldb
from lldbsuite.test.decorators import *
from lldbsuite.test.lldbtest import *
from lldbsuite.test import lldbutil
def Msg(expr, val):
return "'expression %s' matches the output (from compiled code): %s" % (
expr, val)
class CppVirtualMadness(TestBase):
mydir = TestBase.compute_mydir(__file__)
# This is the pattern by design to match the "my_expr = 'value'" output from
# printf() stmts (see main.cpp).
pattern = re.compile("^([^=]*) = '([^=]*)'$")
def setUp(self):
# Call super's setUp().
TestBase.setUp(self)
# Find the line number to break for main.cpp.
self.source = 'main.cpp'
self.line = line_number(self.source, '// Set first breakpoint here.')
@expectedFailureAll(
compiler="icc",
bugnumber="llvm.org/pr16808 lldb does not call the correct virtual function with icc.")
@skipIfWindows # This test will hang on windows llvm.org/pr21753
@expectedFlakeyNetBSD
def test_virtual_madness(self):
"""Test that expression works correctly with virtual inheritance as well as virtual function."""
self.build()
# Bring the program to the point where we can issue a series of
# 'expression' command to compare against the golden output.
self.dbg.SetAsync(False)
# Create a target by the debugger.
target = self.dbg.CreateTarget(self.getBuildArtifact("a.out"))
self.assertTrue(target, VALID_TARGET)
# Create the breakpoint inside function 'main'.
breakpoint = target.BreakpointCreateByLocation(self.source, self.line)
self.assertTrue(breakpoint, VALID_BREAKPOINT)
# Now launch the process, and do not stop at entry point.
process = target.LaunchSimple(
None, None, self.get_process_working_directory())
self.assertTrue(process, PROCESS_IS_VALID)
self.assertTrue(process.GetState() == lldb.eStateStopped)
thread = lldbutil.get_stopped_thread(
process, lldb.eStopReasonBreakpoint)
self.assertTrue(
thread.IsValid(),
"There should be a thread stopped due to breakpoint condition")
# First, capture the golden output from the program itself.
golden = thread.GetFrameAtIndex(0).FindVariable("golden")
self.assertTrue(
golden.IsValid(),
"Encountered an error reading the process's golden variable")
error = lldb.SBError()
golden_str = process.ReadCStringFromMemory(
golden.AddressOf().GetValueAsUnsigned(), 4096, error)
self.assertTrue(error.Success())
self.assertTrue("c_as_C" in golden_str)
# This golden list contains a list of "my_expr = 'value' pairs extracted
# from the golden output.
gl = []
# Scan the golden output line by line, looking for the pattern:
#
# my_expr = 'value'
#
for line in golden_str.split(os.linesep):
match = self.pattern.search(line)
if match:
my_expr, val = match.group(1), match.group(2)
gl.append((my_expr, val))
#print("golden list:", gl)
# Now iterate through the golden list, comparing against the output from
# 'expression var'.
for my_expr, val in gl:
self.runCmd("expression %s" % my_expr)
output = self.res.GetOutput()
# The expression output must match the oracle.
self.expect(output, Msg(my_expr, val), exe=False,
substrs=[val])
|
Children's Entertainment is a specialised art, ask any Mum or Dad, keeping the ‘young uns’ occupied for any length of time is not easy.
Bournemouth's own Okey Dokey the Dragon with his assistant Mr Merlin has had over 15 years experience in children's entertainment for all ages, and has developed his abilities on the holiday parks, running children’s clubs, often with up to 100 children of differing ages. So if you’re looking for a professional entertainer, with bags of experience, and the sense of humour that children love, then look no further than Mr Merlin.
|
# Import the necessary packages
import cv2
import scipy.io as sio
import numpy as np
import util as util
import edge_detect as ed
from lineseg import lineseg
from drawedgelist import drawedgelist
from python.Lseg_to_Lfeat_v4 import create_linefeatures
from python.merge_lines_v4 import merge_lines
from python.LabelLineCurveFeature_v4 import classify_curves
from python.LabelLineFeature_v1 import label_line_features
from python.line_match import line_match
def initContours(img):
# edges = edge_detect(img)
curve_disc, curve_con, depth_disc, depth_con, edges = edge_detect(img)
seg_list = lineseg(edges, tol=2)
cntrs = find_contours(img)
# ADVANCED SLICING
for i in range(cntrs.shape[0]):
swap_cols(cntrs[i], 0, 1)
return seg_list, edges, cntrs
def draw_lfeat(line_feature, img):
# blank_image = normalize_depth(img, colormap=True)
height = img.shape[0]
width = img.shape[1]
blank_image = np.zeros((height, width, 3), np.uint8)
for i, e in enumerate(line_feature):
x1 = int(e[1])
y1 = int(e[0])
x2 = int(e[3])
y2 = int(e[2])
color = (rand.randint(0, 255), rand.randint(0, 255), rand.randint(0, 255))
cv2.line(blank_image, (x1, y1), (x2, y2), color, 1)
cv2.namedWindow('Line features', cv2.WINDOW_NORMAL)
cv2.imshow('Line features', blank_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
def draw_listpair(list_pair, line_feature, img):
blank_image = normalize_depth(img, colormap=True)
for i, e in enumerate(list_pair):
color = (rand.randint(0, 255), rand.randint(0, 255), rand.randint(0, 255))
for j, e in enumerate(e):
line = line_feature[np.where(line_feature[:, 7] == e)[0]][0]
x1 = int(line[1])
y1 = int(line[0])
x2 = int(line[3])
y2 = int(line[2])
cv2.line(blank_image, (x1, y1), (x2, y2), color, 2)
cv2.namedWindow('Line features', cv2.WINDOW_NORMAL)
cv2.imshow('Line features', blank_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
if __name__ == '__main__':
img = cv2.imread('../img/learn0.png', -1)
# img = img[50:, 50:480]
im_size = img.shape
height = img.shape[0]
width = img.shape[1]
P = sio.loadmat('Parameter.mat')
param = P['P']
# ******* SECTION 1 *******
# FIND DEPTH / CURVATURE DISCONTINUITIES.
curve_disc, curve_con, depth_disc, depth_con, dst = ed.edge_detect(img)
blank_image = np.zeros((height, width, 3), np.uint8)
# draw_contours(blank_image, dst)
# drawedgelist(dst)
# print(dst)
# Remove extra dimensions from data
res = lineseg(dst, tol=2)
seglist = []
for i in range(res.shape[0]):
# print('shape', res[i].shape)
if res[i].shape[0] > 2:
# print(res[i])
# print(res[i][0])
seglist.append(np.concatenate((res[i], [res[i][0]])))
else:
seglist.append(res[i])
seglist = np.array(seglist)
# print(seglist)
# seg_curve = lineseg(curve_con, tol=1)
# seg_disc = lineseg(depth_con, tol=1)
# seg_list = np.hstack((seg_curve, seg_disc))
# print(seg_disc)
# seg_list, edges, cntrs = initContours(img)
# print(dst.shape)
drawedgelist(seglist)
# drawedgelist(seg_curve)
# ******* SECTION 2 *******
# SEGMENT AND LABEL THE CURVATURE LINES (CONVEX/CONCAVE).
LineFeature, ListPoint = create_linefeatures(seglist, dst, im_size)
Line_new, ListPoint_new, line_merged = merge_lines(LineFeature, ListPoint, 10, im_size)
draw_lfeat(Line_new, img)
# print(line_merged)
line_newC = classify_curves(img, Line_new, ListPoint_new, 11)
draw_convex(line_newC, img)
# LineFeature_curve, ListPoint_curve = create_linefeatures(seg_curve, curve_con, im_size)
# Line_new, ListPoint_new, line_merged = merge_lines(LineFeature_curve, ListPoint_curve, 10, im_size)
# print('Line_new size:', Line_new.shape)
# draw_lfeat(Line_new, img)
# LineFeature_disc, ListPoint_disc = create_linefeatures(seg_disc, depth_con, im_size)
# Line_new, ListPoint_new, line_merged = merge_lines(LineFeature_disc, ListPoint_disc, 10, im_size)
# print('Line_new size:', Line_new.shape)
# draw_lfeat(Line_new, img)
# seg_list, edges, cntrs = initContours(img)
# LineFeature, ListPoint = create_linefeatures(seg_list, cntrs, im_size)
# Line_new, ListPoint_new, line_merged = merge_lines(LineFeature, ListPoint, 10, im_size)
# draw_lfeat(Line_new, img)
# line_newC = classify_curves(img, Line_new, ListPoint_new, 11)
# draw_convex(line_newC, img)
# Remove the 11th column for post-processing
line_newC = np.delete(line_newC, 10, axis=1)
line_new_new = label_line_features(img, edges, line_newC, param)
print('Line_new:', line_new_new.shape)
# Keep the lines that are curvature / discontinuities
relevant_lines = np.where(line_new_new[:, 10] != 0)[0]
line_interesting = line_new_new[relevant_lines]
# Fast sorting, done based on line angle
line_interesting = line_interesting[line_interesting[:, 6].argsort()]
print('Line interesting:', line_interesting.shape)
draw_lfeat(line_interesting, img)
# Match the lines into pairs
list_pair = line_match(line_interesting, param)
print('List pair:', list_pair)
draw_listpair(list_pair, line_interesting, img)
cv2.waitKey(0)
cv2.destroyAllWindows()
|
Your ego didn't need this. It took you years to master the carved turn, and now any 6-year-old can do it. At least there's solace in knowing it's your 6-year-old.
Just as super-sidecuts made clean carved turns possible for the masses, now they're making them possible for the littlest skiers. After a few false starts, manufacturers have successfully rendered super-sidecut technology in miniature, which means skiing tots can progress beyond the snow playground and onto the slopes more quickly, using turning skills they won't have to later unlearn.
We put the newest crop of junior shaped skis to the test again last spring in a variety of terrain and snow conditions at Winter Park Resort, Colo., paying particular attention to how well they worked for 4- to 8-year-olds. Our crew of test coaches-who were disappointed by the results of last year's test-were delighted this time around. Even the young testers themselves, who tend to be far less critical of their equipment as long as it doesn't prevent them from getting maximum air time, were pleased.
Credit goes to the manufacturers, who have made their shaped skis lighter and softer-flexing this year. Whereas the old stuff required the size and strength of an average 9-year-old to make it work, the new shapes yield their benefits for much younger and much lighter children. Testers no taller than Kermit the Frog were tipping their skis on edge, decambering them and letting the sidecut pull them into the turn. They were making smooth carved arcs and getting out of the fall line more-a far cry from the usual tot skiing style of bombing in a straight line and making a snow-spraying hockey stop.
The coaches did see potential pitfalls. As good as these skis are, they don't work if hobbled by the wrong boot or binding. With boots, rear-entry designs should be avoided. The coaches agree that an overlap construction (buckles across the front) is superior because it's stiffer rearward, counteracting a child's tendency to lean back, and softer forward, promoting easy flex. In the more aggressive stance that results, tots are centered over the ski, where they can pivot it more easily and better take advantage of its sidecut to initiate turns. The trouble is, many manufacturers go to a rear-entry design in these small sizes, so that kids can get in and out of boots on their own.
The other potential fun-killer is an overweight binding. Beware of the easy-to-adjust demo bindings that sometimes come in package deals. The heel and toe slide on long metal tracks, which add weight and stiffen flex, robbing the skis of their carving ability. Sadly, the coaches note, many rental operations and ski schools use these bindings.
Proper sizing is especially important, and, contrary to conventional technique, the length of the ski should correspond to a child's weight, not height. Children-especially girls-can grow tall quickly yet still not possess the weight and strength to maneuver a ski that comes up to his or her forehead (the current sizing standard). On the flip side, a child who doesn't weigh a lot but is a skilled, aggressive skier can use speed and momentum to bend a ski into its sidecut. Ski manufacturer Elan has emerged as a leader in this new way of thinking, developing a sizing system that matches weight and ability to ski length. If you're looking at other brands, the same rule applies: Your child must be able to flex the ski. If you have to, take a couple of 2-by-4 blocks to the shop, place one under the tip and tail of the ski, and have your child stand on it. He or she should be able to flex it to-or close to-the floor without bouncing.
Full-on super-sidecuts aren't for every little kid. Coaches found a cutoff point in the 40- to 50-pound range, where a shaped ski doesn't offer its benefits because the child is simply too light to flex it. Some of the new designs feature aggressive tips with relatively narrow tails-often referred to as "Y" designs. The narrower tails engage thee snow less aggressively and are therefore easier to skid, so children can snowplow more easily or herring-bone up an incline. Meanwhile, the fatter tips are still good for turn initiation.
The upshot of all this is that you'll see more and more midget skiers carving better turns than you. No, it isn't fair. But at least you get to stay up past 8.
Not all models tested made the grade, and because kids' skis are often the last ones off the production line, several brands were not available. Of the 11 models that were, here are our recommendations on which work best for the 4- to 8-year-old crowd.
The 4X4 worked at all skill levels and in a variety of terrain and snow conditions. Coach Scott Woods saw it as a particularly good learning tool for building confidence and carved turns. More important, Jamie Urbana decided it was the ski that might help him catch up to his older brother.
Testers sized down, per Elan's instructions, and while the shorter skis took some getting used to, they paid off with smooth, skidless arcs in the GS gates. Coach Christi Northrop noted that Devin Oderwald skied better than he had all year, which greatly boosted his confidence.
This one does not have an aggressive sidecut, and it wasn't a favorite among the coaches. But it was with the children, who found it lightweight and highly maneuverable. The RC4 RS is a good choice for the littlest/lightest kids (less than 40 pounds), but with bigger kids, the coaches worried that it would soon be outgrown.
It's not a high-end race ski, but Mike Stefanski and Trevor Corbin were able to make clean turns and get excellent air time thanks to its light weight. The Gremlin will take entry-level learners from gentle terrain to the steeper stuff, but they'll need something more when they get bigger and more serious.
The kids loved these moderate shapes, which seemed to work well everywhere. Coach Jeff Burrows watched his featherweight daughter Kyla link strong arcs on the groomed runs, then scurry through the bumps with ease. Trevor said he was amazed at how well the skis initiated a variety of turn shapes.
This ski needed no redesign: It worked well last year, thanks to its soft flex and light weight. Testers got immediate results. They had no trouble bending it into its sidecut and making it arc.
Coach Woods saw noticeable improvement in several developing young skiers on this ski. Molly Leonard and Trevor Wert were especially excited by their progress in linking carved turns. The Cut Super Jr. worked well for both the lower-level learner and the budding racer.
|
## Created: 09/29/15
## Adapted from IncludedGL_byCUI.pt
## Purpose: final list of included CPGs and their disease categories/GUIs
import os
import sys
import csv
## Set path
path = "C:\\Users\\tileung\\Dropbox\\Py Stuffs - Drugs in CPGs\\SeptCode\\"
## Initiate file to save results
fName = path + "IncludedGLs_byCUI_byPop.txt"
try:
os.remove(fName)
print("old file removed")
except OSError:
pass
results_file = "IncludedGLs_byCUI_byPop.tab"
results = open(results_file,"w")
## Identify files for comparison
byCUI_list = path + "unique_IncludedGLs_byCUI.txt"
byPop_list = path + "IncludedCPGs_popExclusions1018.txt"
f = open(byCUI_list, 'r')
freader = csv.reader(f, dialect = csv.excel_tab)
for frow in freader:
try:
GL_no = str(frow[0]) ##match to same key in byPop_list
GL_cui = str(frow[3])
GL_title = frow[1]
GL_link = frow[2]
GL_icd = str(frow[4])
GL_cat = frow[5]
GL_icdlong = frow[6]
g = open(byPop_list, 'r')
greader = csv.reader(g, dialect = csv.excel_tab)
for grow in greader:
GL_iNo = str(grow[0]) ## match to same key in byCUI_list
GL_iTitle = grow[1]
## GL_iPop = grow[2]
if GL_iNo == GL_no:
line = GL_no + '\t' + GL_title + '\t' + GL_iTitle + '\t' + GL_link + '\t' + GL_cui + '\t' + GL_icd + '\t' + GL_cat + '\t' + GL_icdlong + '\n'
## print line
## sys.exit(0)
print GL_no
results.write(line)
except IndexError as e:
continue
results.close()
sys.exit(0)
|
The Palm Beach County State Attorney’s office declined to comment Friday on Feuer’s ruling.
Raja exited his van and approached Jones, who was on the phone with a tow truck dispatch center, which recorded the call — something Raja didn’t know when he made his statement to investigators. At the time, he had been a police officer for seven years, transferring to Palm Beach Gardens six months earlier.
Raja then fires three shots in less than two seconds. Ten seconds pass before three more shots are heard a second apart, apparently, Raja firing at Jones as he ran down an embankment. Raja told investigators Jones kept pointing his gun at him with his right hand. Feuer pointed out in her ruling that Jones was left handed.
|
# Copyright 2017 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from __future__ import print_function
import io
import os
import platform
import sys
import time
import unittest
import common
sys.path.append(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir,
os.pardir, 'tools', 'variations'))
import fieldtrial_util
test_blacklist = [
# These tests set their own field trials and should be ignored.
'quic.Quic.testCheckPageWithQuicProxy',
'quic.Quic.testCheckPageWithQuicProxyTransaction',
'smoke.Smoke.testCheckPageWithHoldback',
]
def GetExperimentArgs():
"""Returns a list of arguments with all tested field trials.
This function is a simple wrapper around the variation team's fieldtrail_util
script that generates command line arguments to test Chromium field trials.
Returns:
an array of command line arguments to pass to chrome
"""
config_path = os.path.join(os.path.dirname(__file__), os.pardir, os.pardir,
os.pardir, 'testing', 'variations', 'fieldtrial_testing_config.json')
my_platform = ''
if common.ParseFlags().android:
my_platform = 'android'
elif platform.system().lower() == 'linux':
my_platform = 'linux'
elif platform.system().lower() == 'windows':
my_platform = 'windows'
elif platform.system().lower() == 'darwin':
my_platform = 'mac'
else:
raise Exception('unknown platform!')
return fieldtrial_util.GenerateArgs(config_path, my_platform)
def GenerateTestSuites():
"""A generator function that yields non-blacklisted tests to run.
This function yields test suites each with a single test case whose id is not
blacklisted in the array at the top of this file.
Yields:
non-blacklisted test suites to run
"""
loader = unittest.TestLoader()
for test_suite in loader.discover(os.path.dirname(__file__), pattern='*.py'):
for test_case in test_suite:
for test_method in test_case:
if test_method.id() not in test_blacklist:
ts = unittest.TestSuite()
ts.addTest(test_method)
yield (ts, test_method.id())
def ParseFlagsWithExtraBrowserArgs(extra_args):
"""Generates a function to override common.ParseFlags.
The returned function will honor everything in the original ParseFlags(), but
adds on additional browser_args.
Args:
extra_args: The extra browser agruments to add.
Returns:
A function to override common.ParseFlags with additional browser_args.
"""
original_flags = common.ParseFlags()
def AddExtraBrowserArgs():
original_flags.browser_args = ((original_flags.browser_args if
original_flags.browser_args else '') + ' ' + extra_args)
return original_flags
return AddExtraBrowserArgs
def main():
"""Runs all non-blacklisted tests against Chromium field trials.
This script run all chrome proxy integration tests that haven't been
blacklisted against the field trial testing configuration used by Chromium
perf bots.
"""
flags = common.ParseFlags()
experiment_args = ' '.join(GetExperimentArgs())
common.ParseFlags = ParseFlagsWithExtraBrowserArgs(experiment_args)
# Each test is wrapped in its own test suite so results can be evaluated
# individually.
for test_suite, test_id in GenerateTestSuites():
buf = io.BytesIO()
sys.stdout.write('%s... ' % test_id)
sys.stdout.flush()
testRunner = unittest.runner.TextTestRunner(stream=buf, verbosity=2,
buffer=(not flags.disable_buffer))
result = testRunner.run(test_suite)
if result.wasSuccessful():
print('ok')
else:
print('failed')
print(buf.getvalue())
print('To repeat this test, run: ')
print("%s %s %s --test_filter=%s --browser_args='%s'" % (
sys.executable,
os.path.join(os.path.dirname(__file__), 'run_all_tests.py'), ' '.join(
sys.argv[1:]), '.'.join(test_id.split('.')[1:]), experiment_args))
if flags.failfast:
return
if __name__ == '__main__':
main()
|
I don’t normally go for pork loin but this one called out to me. I made a red wine sauce with the roasting tray drippings. Boy was it yummy. And i’ve got some for my lunch next week.
|
# -*- coding: utf-8 -*-
"""
***************************************************************************
ProcessingPlugin.py
---------------------
Date : August 2012
Copyright : (C) 2012 by Victor Olaya
Email : volayaf at gmail dot com
***************************************************************************
* *
* This program is free software; you can redistribute it and/or modify *
* it under the terms of the GNU General Public License as published by *
* the Free Software Foundation; either version 2 of the License, or *
* (at your option) any later version. *
* *
***************************************************************************
"""
__author__ = 'Victor Olaya'
__date__ = 'August 2012'
__copyright__ = '(C) 2012, Victor Olaya'
import shutil
import os
import sys
from functools import partial
from qgis.core import (QgsApplication,
QgsProcessingUtils,
QgsProcessingModelAlgorithm,
QgsDataItemProvider,
QgsDataProvider,
QgsDataItem,
QgsMapLayerType,
QgsMimeDataUtils)
from qgis.gui import (QgsOptionsWidgetFactory,
QgsCustomDropHandler)
from qgis.PyQt.QtCore import Qt, QCoreApplication, QDir, QFileInfo
from qgis.PyQt.QtWidgets import QMenu, QAction
from qgis.PyQt.QtGui import QIcon, QKeySequence
from qgis.utils import iface
from processing.core.Processing import Processing
from processing.gui.AlgorithmDialog import AlgorithmDialog
from processing.gui.ProcessingToolbox import ProcessingToolbox
from processing.gui.HistoryDialog import HistoryDialog
from processing.gui.ConfigDialog import ConfigOptionsPage
from processing.gui.ResultsDock import ResultsDock
from processing.gui.AlgorithmLocatorFilter import (AlgorithmLocatorFilter,
InPlaceAlgorithmLocatorFilter)
from processing.modeler.ModelerDialog import ModelerDialog
from processing.tools.system import tempHelpFolder
from processing.gui.menus import removeMenus, initializeMenus, createMenus, createButtons, removeButtons
from processing.core.ProcessingResults import resultsList
pluginPath = os.path.dirname(__file__)
class ProcessingOptionsFactory(QgsOptionsWidgetFactory):
def __init__(self):
super(QgsOptionsWidgetFactory, self).__init__()
def icon(self):
return QgsApplication.getThemeIcon('/processingAlgorithm.svg')
def createWidget(self, parent):
return ConfigOptionsPage(parent)
class ProcessingDropHandler(QgsCustomDropHandler):
def handleFileDrop(self, file):
if not file.lower().endswith('.model3'):
return False
return self.runAlg(file)
@staticmethod
def runAlg(file):
alg = QgsProcessingModelAlgorithm()
if not alg.fromFile(file):
return False
alg.setProvider(QgsApplication.processingRegistry().providerById('model'))
dlg = AlgorithmDialog(alg, parent=iface.mainWindow())
dlg.show()
return True
def customUriProviderKey(self):
return 'processing'
def handleCustomUriDrop(self, uri):
path = uri.uri
self.runAlg(path)
class ProcessingModelItem(QgsDataItem):
def __init__(self, parent, name, path):
super(ProcessingModelItem, self).__init__(QgsDataItem.Custom, parent, name, path)
self.setState(QgsDataItem.Populated) # no children
self.setIconName(":/images/themes/default/processingModel.svg")
self.setToolTip(QDir.toNativeSeparators(path))
def hasDragEnabled(self):
return True
def handleDoubleClick(self):
self.runModel()
return True
def mimeUri(self):
u = QgsMimeDataUtils.Uri()
u.layerType = "custom"
u.providerKey = "processing"
u.name = self.name()
u.uri = self.path()
return u
def runModel(self):
ProcessingDropHandler.runAlg(self.path())
def editModel(self):
dlg = ModelerDialog.create()
dlg.loadModel(self.path())
dlg.show()
def actions(self, parent):
run_model_action = QAction(QCoreApplication.translate('ProcessingPlugin', '&Run Model…'), parent)
run_model_action.triggered.connect(self.runModel)
edit_model_action = QAction(QCoreApplication.translate('ProcessingPlugin', '&Edit Model…'), parent)
edit_model_action.triggered.connect(self.editModel)
return [run_model_action, edit_model_action]
class ProcessingDataItemProvider(QgsDataItemProvider):
def __init__(self):
super(ProcessingDataItemProvider, self).__init__()
def name(self):
return 'processing'
def capabilities(self):
return QgsDataProvider.File
def createDataItem(self, path, parentItem):
file_info = QFileInfo(path)
if file_info.suffix().lower() == 'model3':
alg = QgsProcessingModelAlgorithm()
if alg.fromFile(path):
return ProcessingModelItem(parentItem, alg.name(), path)
return None
class ProcessingPlugin:
def __init__(self, iface):
self.iface = iface
self.options_factory = None
self.drop_handler = None
self.item_provider = None
self.locator_filter = None
self.edit_features_locator_filter = None
self.initialized = False
self.initProcessing()
def initProcessing(self):
if not self.initialized:
self.initialized = True
Processing.initialize()
def initGui(self):
self.options_factory = ProcessingOptionsFactory()
self.options_factory.setTitle(self.tr('Processing'))
iface.registerOptionsWidgetFactory(self.options_factory)
self.drop_handler = ProcessingDropHandler()
iface.registerCustomDropHandler(self.drop_handler)
self.item_provider = ProcessingDataItemProvider()
QgsApplication.dataItemProviderRegistry().addProvider(self.item_provider)
self.locator_filter = AlgorithmLocatorFilter()
iface.registerLocatorFilter(self.locator_filter)
# Invalidate the locator filter for in-place when active layer changes
iface.currentLayerChanged.connect(lambda _: self.iface.invalidateLocatorResults())
self.edit_features_locator_filter = InPlaceAlgorithmLocatorFilter()
iface.registerLocatorFilter(self.edit_features_locator_filter)
self.toolbox = ProcessingToolbox()
self.iface.addDockWidget(Qt.RightDockWidgetArea, self.toolbox)
self.toolbox.hide()
self.toolbox.visibilityChanged.connect(self.toolboxVisibilityChanged)
self.resultsDock = ResultsDock()
self.iface.addDockWidget(Qt.RightDockWidgetArea, self.resultsDock)
self.resultsDock.hide()
self.menu = QMenu(self.iface.mainWindow().menuBar())
self.menu.setObjectName('processing')
self.menu.setTitle(self.tr('Pro&cessing'))
self.toolboxAction = QAction(self.tr('&Toolbox'), self.iface.mainWindow())
self.toolboxAction.setCheckable(True)
self.toolboxAction.setObjectName('toolboxAction')
self.toolboxAction.setIcon(
QgsApplication.getThemeIcon("/processingAlgorithm.svg"))
self.iface.registerMainWindowAction(self.toolboxAction,
QKeySequence('Ctrl+Alt+T').toString(QKeySequence.NativeText))
self.toolboxAction.toggled.connect(self.openToolbox)
self.iface.attributesToolBar().insertAction(self.iface.actionOpenStatisticalSummary(), self.toolboxAction)
self.menu.addAction(self.toolboxAction)
self.modelerAction = QAction(
QgsApplication.getThemeIcon("/processingModel.svg"),
QCoreApplication.translate('ProcessingPlugin', '&Graphical Modeler…'), self.iface.mainWindow())
self.modelerAction.setObjectName('modelerAction')
self.modelerAction.triggered.connect(self.openModeler)
self.iface.registerMainWindowAction(self.modelerAction,
QKeySequence('Ctrl+Alt+G').toString(QKeySequence.NativeText))
self.menu.addAction(self.modelerAction)
self.historyAction = QAction(
QgsApplication.getThemeIcon("/mIconHistory.svg"),
QCoreApplication.translate('ProcessingPlugin', '&History…'), self.iface.mainWindow())
self.historyAction.setObjectName('historyAction')
self.historyAction.triggered.connect(self.openHistory)
self.iface.registerMainWindowAction(self.historyAction,
QKeySequence('Ctrl+Alt+H').toString(QKeySequence.NativeText))
self.menu.addAction(self.historyAction)
self.toolbox.processingToolbar.addAction(self.historyAction)
self.resultsAction = QAction(
QgsApplication.getThemeIcon("/processingResult.svg"),
self.tr('&Results Viewer'), self.iface.mainWindow())
self.resultsAction.setObjectName('resultsViewer')
self.resultsAction.setCheckable(True)
self.iface.registerMainWindowAction(self.resultsAction,
QKeySequence('Ctrl+Alt+R').toString(QKeySequence.NativeText))
self.menu.addAction(self.resultsAction)
self.toolbox.processingToolbar.addAction(self.resultsAction)
self.resultsDock.visibilityChanged.connect(self.resultsAction.setChecked)
self.resultsAction.toggled.connect(self.resultsDock.setUserVisible)
self.toolbox.processingToolbar.addSeparator()
self.editInPlaceAction = QAction(
QgsApplication.getThemeIcon("/mActionProcessSelected.svg"),
self.tr('Edit Features In-Place'), self.iface.mainWindow())
self.editInPlaceAction.setObjectName('editInPlaceFeatures')
self.editInPlaceAction.setCheckable(True)
self.editInPlaceAction.toggled.connect(self.editSelected)
self.menu.addAction(self.editInPlaceAction)
self.toolbox.processingToolbar.addAction(self.editInPlaceAction)
self.toolbox.processingToolbar.addSeparator()
self.optionsAction = QAction(
QgsApplication.getThemeIcon("/mActionOptions.svg"),
self.tr('Options'), self.iface.mainWindow())
self.optionsAction.setObjectName('optionsAction')
self.optionsAction.triggered.connect(self.openProcessingOptions)
self.toolbox.processingToolbar.addAction(self.optionsAction)
menuBar = self.iface.mainWindow().menuBar()
menuBar.insertMenu(
self.iface.firstRightStandardMenu().menuAction(), self.menu)
self.menu.addSeparator()
initializeMenus()
createMenus()
createButtons()
# In-place editing button state sync
self.iface.currentLayerChanged.connect(self.sync_in_place_button_state)
self.iface.mapCanvas().selectionChanged.connect(self.sync_in_place_button_state)
self.iface.actionToggleEditing().triggered.connect(partial(self.sync_in_place_button_state, None))
self.sync_in_place_button_state()
def sync_in_place_button_state(self, layer=None):
"""Synchronise the button state with layer state"""
if layer is None:
layer = self.iface.activeLayer()
old_enabled_state = self.editInPlaceAction.isEnabled()
new_enabled_state = layer is not None and layer.type() == QgsMapLayerType.VectorLayer
self.editInPlaceAction.setEnabled(new_enabled_state)
if new_enabled_state != old_enabled_state:
self.toolbox.set_in_place_edit_mode(new_enabled_state and self.editInPlaceAction.isChecked())
def openProcessingOptions(self):
self.iface.showOptionsDialog(self.iface.mainWindow(), currentPage='processingOptions')
def unload(self):
self.toolbox.setVisible(False)
self.iface.removeDockWidget(self.toolbox)
self.iface.attributesToolBar().removeAction(self.toolboxAction)
self.resultsDock.setVisible(False)
self.iface.removeDockWidget(self.resultsDock)
self.toolbox.deleteLater()
self.menu.deleteLater()
# also delete temporary help files
folder = tempHelpFolder()
if QDir(folder).exists():
shutil.rmtree(folder, True)
self.iface.unregisterMainWindowAction(self.toolboxAction)
self.iface.unregisterMainWindowAction(self.modelerAction)
self.iface.unregisterMainWindowAction(self.historyAction)
self.iface.unregisterMainWindowAction(self.resultsAction)
self.iface.unregisterOptionsWidgetFactory(self.options_factory)
self.iface.deregisterLocatorFilter(self.locator_filter)
self.iface.deregisterLocatorFilter(self.edit_features_locator_filter)
self.iface.unregisterCustomDropHandler(self.drop_handler)
QgsApplication.dataItemProviderRegistry().removeProvider(self.item_provider)
removeButtons()
removeMenus()
Processing.deinitialize()
def openToolbox(self, show):
self.toolbox.setUserVisible(show)
def toolboxVisibilityChanged(self, visible):
self.toolboxAction.setChecked(visible)
def openModeler(self):
dlg = ModelerDialog.create()
dlg.update_model.connect(self.updateModel)
dlg.show()
def updateModel(self):
model_provider = QgsApplication.processingRegistry().providerById('model')
model_provider.refreshAlgorithms()
def openResults(self):
if self.resultsDock.isVisible():
self.resultsDock.hide()
else:
self.resultsDock.show()
def openHistory(self):
dlg = HistoryDialog()
dlg.exec_()
def tr(self, message):
return QCoreApplication.translate('ProcessingPlugin', message)
def editSelected(self, enabled):
self.toolbox.set_in_place_edit_mode(enabled)
|
raised on U.S. farms is always the #1 ingredient. This lean, easily digestible protein provides essential nutrients, which helps build and support lean muscle mass.
is a tasty, easily digestible lean protein source not commonly found in dog food and is naturally rich in iron, which helps support strong muscles.
is a tasty source of lean protein, and a natural source of essential vitamins, phosphorus and iron, which help support strong muscles.
Turkey, Turkey Meal, Chicken Meal, Dried Peas, Pea Starch, Whole Dried Potatoes, Pea Protein, Chicken Fat (Preserved with Mixed Tocopherols), Dried Plain Beet Pulp, Whole Flaxseed, Cranberries, Duck, Quail, Menhaden Fish Meal, Natural Flavor, Salt, Sweet Potatoes, Zinc Proteinate, Vitamin E Supplement, Iron Proteinate, L-Ascorbyl-2-Polyphosphate (Source of Vitamin C), Copper Proteinate, Manganese Proteinate, Biotin, Niacin, d-Calcium Pantothenate, Sodium Selenite, Vitamin A Supplement, Riboflavin Supplement, Thiamine Mononitrate, Vitamin B12 Supplement, Calcium Iodate, Pyridoxine Hydrochloride, Vitamin D3 Supplement, Folic Acid.
Rachael Ray™ Nutrish® PEAK Northern Woodlands Recipe™ with Turkey, Duck & Quail is made for dogs of all sizes.
Weaning Puppies: Start puppies on Rachael Ray™ Nutrish® PEAK Northern Woodlands Recipe™ with Turkey, Duck & Quail as soon as they begin to chew on solid food, usually 3-4 weeks of age. Allow them to eat as much as they want until fully weaned (6-8 weeks of age).
If desired, Rachael Ray™ Nutrish® PEAK Northern Woodlands Recipe™ with Turkey, Duck & Quail can be fed moistened. To feed moistened, mix ¼ cup of water to 2 cups of Rachael Ray™ Nutrish® PEAK Northern Woodlands Recipe™ with Turkey, Duck & Quail. If you choose to moisten, discard remaining food after 30 minutes to ensure product freshness.
NUTRITIONAL ADEQUACY STATEMENTRachael Ray™ Nutrish® PEAK Northern Woodlands Recipe™ with Turkey, Duck & Quail Dog Food is formulated to meet the nutritional levels established by the AAFCO (Association of American Feed Control Officials) Dog Food Nutrient profiles for all life stages including growth of large size dogs (70 lbs. or more as an adult).
Rated 4 out of 5 by robind2 from We only just started our 11 month old kitten on this but so far he really likes it, and he's quite picky. I like that it's grain free and isn't full of a bunch of junk. It's still a little starch heavy for my taste, but at the moment this is the best option.
Rated 3 out of 5 by lmille from Dogs love it but noticing a lot of vomiting My Great Dane and German Shepard love this food! I originally got it due to rave reviews from Dane forums about how this food is good for Danes who struggle with putting on weight. I noticed my Dane filling out more and his coat looks really glossy! After a week of eating it, he started vomiting and pooping constantly. At first I thought maybe he just happen to get sick or maybe I fed him too much of it, but a few day later my German Shepard started getting sick in the same way and it’s very rare for him to get sick. I used to feed them another brand and they didn’t get sick from it, so I might have to go back to it.
Rated 4 out of 5 by tinasadowski from Dogs didn’t hate it, they didn’t love it either. Put them back to Diamond Naturals which they live.
Rated 5 out of 5 by lurac1 from My guys love Northern Woodlands. The second I open the bag they’re pulling at me. I like the fact that it offers a variety to my dogs.
|
try:
from collections import OrderedDict
except ImportError:
from phizer.ordereddict import OrderedDict
import operator
import sys
from collections import namedtuple
cached_image = namedtuple('cached_image',
['body', 'content_type', 'size'])
class SizedLRUCache(object):
def __init__(self, max_size=None):
self._max_size = max_size
self._current_size = 0
self._cache = OrderedDict()
def get(self, key):
value = self._cache.pop(key)
self._cache[key] = value
return value
def put(self, key, value):
self._cache.pop(key, None)
self._update_current_size(value)
if self._current_size > self._max_size:
self._purge()
self._cache[key] = value
def delete(self, key):
_ = self._cache.pop(key)
self._update_current_size(value, operator.sub)
def touch(self, key):
"""'uses' item at key, thereby making it recently used
"""
value = self._cache.pop(key)
self._cache[key] = value
@property
def size(self):
return self._current_size
def _update_current_size(self, value, f=operator.add):
self._current_size = f(self._current_size, sys.getsizeof(value))
def __len__(self):
"""Returns the number of items in the cache"""
return len(self._cache)
def _purge(self):
"""Purges least recently used items until less than `max_size`
"""
if self._max_size is None:
return
while self._current_size > self._max_size and len(self) > 0:
key, value = self._cache.popitem(True)
self._update_current_size(value, operator.sub)
|
At the rate I'm going (couple of hours, most days), it should be done by early next week. If the weather holds, I can paint all at once. And maybe even put a coat of the new blue (very similar to the old blue) on the front shingles. You know, the ones that needs painting the least.
|
#!/usr/bin/python
import rospy
import rosbag
import os
import sys
import argparse
import yaml
def remove_tf(inbag,outbag,prefix):
print ' Processing input bagfile: ', inbag
print ' Writing to output bagfile: ', outbag
print ' Adding prefix: ', prefix
outbag = rosbag.Bag(outbag,'w')
for topic, msg, t in rosbag.Bag(inbag,'r').read_messages():
if topic == "/tf":
new_transforms = []
for transform in msg.transforms:
if transform.header.frame_id[0] == '/':
transform.header.frame_id = prefix + transform.header.frame_id
else:
transform.header.frame_id = prefix + '/' + transform.header.frame_id
if transform.child_frame_id[0] == '/':
transform.child_frame_id = prefix + transform.child_frame_id
else:
transform.child_frame_id = prefix + '/' + transform.child_frame_id
new_transforms.append(transform)
msg.transforms = new_transforms
else:
try:
if msg.header.frame_id[0] == '/':
msg.header.frame_id = prefix + msg.header.frame_id
else:
msg.header.frame_id = prefix + '/' + msg.header.frame_id
except:
pass
if topic[0] == '/':
topic = prefix + topic
else:
topic = prefix + '/' + topic
outbag.write(topic, msg, t)
print 'Closing output bagfile and exit...'
outbag.close();
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description='removes all transforms from the /tf topic that contain one of the given frame_ids in the header as parent or child.')
parser.add_argument('-i', metavar='INPUT_BAGFILE', required=True, help='input bagfile')
parser.add_argument('-o', metavar='OUTPUT_BAGFILE', required=True, help='output bagfile')
parser.add_argument('-p', metavar='PREFIX', required=True, help='prefix to add to the frame ids')
args = parser.parse_args()
try:
remove_tf(args.i,args.o,args.p)
except Exception, e:
import traceback
traceback.print_exc()
|
How do I get started with ParaPay?
How do I set my discount?
ParaPay is an electronic fare payment system that has similar features to Presto but without a card. You will have a personal account and can simply pay your fare online, by phone, in person, or by mail (cheque).
You can pay this way whether it is per month using a ParaPass or per ride using a ParaPurse. ParaPay will offer automatic payment options in the future.
ParaPurse is an electronic purse that you can load with money. You can then use it to pre-pay for your trips on Para Transpo.
Money in your ParaPurse can also be used to purchase a ParaPass.
ParaPass is a cardless electronic pass that allows for monthly travel on Para Transpo. The Adult, Youth, Senior, EquiPass and Community pass are available, based on eligibility.
Note: The Access pass is not your best option if you only use Para Transpo. The Access pass provides a one third discount on the per ride fares on Para Transpo. It only benefits Para Transpo customers who use OC Transpo buses and trains more often than Para Transpo.
4. How do I get started with ParaPay?
Using ParaPay online means you can conveniently access ParaPay anywhere, anytime without waiting, and within your schedule.
5. Can someone else manage my account?
Yes. Agencies, group homes and caregivers can add money on your behalf to purchase a monthly pass or per ride trips. However, their name must be listed as your designate with Para Transpo to withdraw money, get a refund or request a balance.
You must give them permission to use your password and client ID number. Keep in mind, when someone else manages your account online, they have full access to make a purchase, withdraw money, get a refund, or request a balance.
Note: Per the Terms & Conditions: You are responsible for keeping personal identification information including email address and passwords that are used to access your ParaPay account confidential at all times. Para Transpo is not responsible for any access to or misuse of your ParaPay account by unauthorized users.
They can only add money to purchase a monthly pass or per ride trips unless they are listed as your designate with Para Transpo.
6. How do I set my discount?
For a youth (19 and under) or senior, the information contained in your Para Transpo registration file will determine which monthly passes you can use. For example, based on your date of birth, if you are 65 years or older, you can buy a senior’s bus pass.
For recipients of Ontario Disability Support Plan (ODSP) payments, you may qualify for a Community pass. You must complete the Application for Community pass and mail it to OC Transpo or visit any OC Transpo Customer Service Centre, including 925 Belfast, with a copy of your ODSP cheque stub. Only the primary recipient of ODSP benefits qualifies for the Community pass.
For EquiPass application and eligibility process, or EquiPass Single-ride Fare, please refer to www.octranspo.com/tickets-and-passes/equipass. You may mail your acceptance letter to Para Transpo (925 Belfast Road, Ottawa ON K1G 0Z4), or visit any OC Transpo Customer Service Centre.
7. How do I change my account settings?
Your language preference, email address, and telephone number can all be changed by clicking My Account. Once changes are made, your main Para Transpo registration file is updated automatically. Any other changes (for example, your home address) must be made by contacting the Para Transpo Registration line at 613-741-4390.
Changing your account password is easily done by clicking Change Password. You will enter your old password, enter your new password, confirm the new password and then click the Change Password button.
9. What if I forgot my username?
Your user name is the same as your Client ID.
When can I buy a ParaPass?
Which pass option is best for me?
What if my money does not show up in my account?
1. How do I add money to my account?
Visit any OC Transpo Customer Service Centre.
Provide a cheque only. Please do not send cash or credit/debit card information by mail.
ParaPay will offer automatic payment options in the future.
2. When can I buy a ParaPass?
A ParaPass may be purchased the last 14 days of the month and the first 14 days of the next month.
3. Which pass option is best for me?
ParaPay shows all of the passes to which you are eligible. The best pass for you depends on how frequently you travel.
The OC Transpo fare table - Para Transpo Fares provides you with a complete overview of all the Para Transpo fares.
4. What if my money does not show up in my account?
In the unlikely event that money you have added to your account does not appear, please call Customer Service at 613-741-4390, TTY 613-741-5280.
How do I see my account activity?
What if I see an error in my account?
1. How can I check my account balance?
2. What if my money does not show up in my account?
3. How do I see my account activity?
You can check your account transactions by clicking My Purchase History.
4. What if I see an error in my account?
If you have any questions regarding your ParaPay account please call Customer Service at 613-741-4390, TTY 613-741-5280.
How do I pay for my trip with ParaPay?
Do I need to show anything when I board the bus or taxi?
Can I pay for part of my fare when I book the trip and pay the remaining balance on the bus or taxi?
Can I use cash, Presto or tickets?
What happens if I cancel a trip or miss my bus or taxi?
Can I pay for a companion with ParaPay?
1. How do I pay for my trip with ParaPay?
At the time of booking, if you have a ParaPurse balance or a ParaPass valid for the date of your trip, your fare is automatically applied against your account by either: confirming a valid ParaPass, or by deducting fare from your ParaPurse.
However, if you have regular (subscription bookings), your fare is automatically collected the day before you take your trip.
2. Do I need to show anything when I board the bus or taxi?
No, you do not need to show anything once you have booked and pre-paid for your trip using ParaPay. The Para Transpo driver knows who has pre-paid for their trip and who requires payment upon boarding the bus or taxi.
3. Can I pay part of my fare when I book a trip, and pay the remaining balance on the bus or taxi?
If you have enough money in your ParaPurse, the full fare is deducted from your ParaPurse.
If you do not have enough money in your ParaPurse, the full fare must be paid in cash, tickets or by using a Presto pass with a Query Receipt when you board the bus or taxi. For example; you have a balance of $1.00 in your ParaPurse and your fare has been calculated at $3.35. Since your balance cannot cover the entire fare, you will be required to pay $3.35 upon boarding the bus or taxi. Your ParaPurse balance of $1.00 will remain untouched.
4. Can I use cash, Presto or tickets?
Para Transpo operators will continue to accept cash, Presto monthly pass with a Query receipt or tickets, while available.
However, if you have enough money in your ParaPurse or a ParaPass, the system will automatically deduct fare payment from your account.
5. What if I do not have enough money in my account?
You will not be denied a trip if you do not have enough the full fare in your ParaPay account. You can always pay your fare using cash, Presto monthly pass with a Query receipt or tickets while available.
Refer to Adding Money to Your Account for information on adding money to your ParaPay account.
6. What happens if I cancel a trip or miss my bus or taxi?
If you cancel your trip, miss the bus or taxi, the fare amount you paid is credited to your account immediately upon cancellation.
7. Can I pay for a companion with ParaPay?
No. A companion is expected to pay their fare when they board the bus or taxi. Support persons travel for free on Para Transpo.
Who do I call if I need help or have questions about ParaPay?
1. Who do I call if I need help or have questions about ParaPay?
Please call Customer Service at 613-741-4390, TTY 613-741-5280.
|
# -*- coding: utf-8 -*-
# Copyright (C) 2010 by RoboLab - University of Extremadura
#
# This file is part of RoboComp
#
# RoboComp is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# RoboComp is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with RoboComp. If not, see <http://www.gnu.org/licenses/>.
#
import Ice, sys, math, traceback
from PyQt4.QtCore import *
from PyQt4.QtGui import *
from PyQt4.Qt import *
class C(QWidget):
def __init__(self, endpoint, modules):
QWidget.__init__(self)
self.ic = Ice.initialize(sys.argv)
self.mods = modules
self.prx = self.ic.stringToProxy(endpoint)
self.proxy = self.mods['RoboCompRoimant'].RoimantPrx.checkedCast(self.prx)
self.roiList = []
self.job()
def job(self):
# Remote procedure call
output = self.proxy.getROIList()
# Set class copies
self.roiList = output[0]
self.bState = output[1].bState
# Print number of ROIs
print len(self.roiList)
def paintEvent(self, event=None):
xOff = self.width()/2.
yOff = self.height()/2.
xPos = 0
yPos = 0
div = 20.
painter = QPainter(self)
painter.setRenderHint(QPainter.Antialiasing, True)
# Draw grid
for i in range(max(xOff,yOff)/30):
x = 30*i
painter.drawLine(xOff+x, 0, xOff+x, self.height())
painter.drawLine(xOff-x, 0, xOff-x, self.height())
painter.drawLine(0, yOff+x, self.width(), yOff+x)
painter.drawLine(0, yOff-x, self.width(), yOff-x)
# Draw ROIs
painter.setPen(Qt.red)
painter.setBrush(Qt.red)
for roi in self.roiList:
if not roi.casado: continue
try:
xPos = int((roi.z3D/div)+xOff-3)
yPos = int((roi.x3D/div)+yOff-3)
except:
pass
if type(xPos) == type(yPos) and type(xPos) == type(int()):
try:
painter.drawEllipse(int(xPos), int(yPos), 6, 6)
except:
print 'ROI :-(', int(xPos)
print type(xPos)
print type(int(xPos))
print roi.x3D, roi.x3D
traceback.print_stack()
# Draw base
painter.setPen(Qt.blue)
painter.setBrush(Qt.blue)
try:
xPos = int( (self.bState.z/div)+xOff-9)
yPos = int( (self.bState.x/div)+yOff-9)
start = int(((-self.bState.alfa*180/math.pi)-180-20)*16)
except:
pass
if type(xPos) == type(yPos) and type(xPos) == type(start) and type(xPos) == type(int()):
try:
painter.drawPie(xPos, yPos, 18, 18, start, 20*2*16)
except:
print 'BASE :-('
print type(xPos-7)
print self.bState.z, self.bState.x, self.bState.alfa
painter.end()
painter = None
|
Harvey Mudd College (HMC) is a private residential liberal arts college of science, engineering, and mathematics, founded in 1955 and located in Claremont, California, United States. It is one of the institutions of the contiguous Claremont Colleges, which share adjoining campus grounds. The college's mission is: "Harvey Mudd College seeks to educate engineers, scientists, and mathematicians well versed in all of these areas and in the humanities and the social sciences so that they may assume leadership in their fields with a clear understanding of the impact of their work on society."
Harvey Mudd College shares university resources such as libraries, dining halls, health services, and campus security, with the other institutions in the Claremont Colleges, including Pitzer College, Scripps College, Claremont McKenna College, Pomona College, Claremont Graduate University, and Keck Graduate Institute of Applied Life Sciences, but each college is independently managed by its own faculty, board of trustees, and college endowment and has its own separate admissions process. Students at Harvey Mudd are encouraged to take classes (acceptable for academic credit at Harvey Mudd) at the other four Claremont colleges, especially classes outside their major of study. Together the Claremont Colleges provide the resources and opportunities of a large university while enabling the specialization and personal attention afforded by the individual colleges. The Bachelor of Science diploma received at graduation is issued by Harvey Mudd College.
The college is named after Harvey Seeley Mudd, one of the initial investors in the Cyprus Mines Corporation. Although involved in the planning of the new institution, Mudd died before it opened. Harvey Mudd College was funded by Mudd's friends and family, and named in his honor.
In keeping with the college's mission, HMC offers four-year degrees in chemistry, mathematics, physics, computer science, biology, and engineering, as well as interdisciplinary degrees in mathematical biology, and a joint major in either computer science and mathematics; or biology and chemistry. Students may also elect to complete an Individual Program of Study (IPS) made up of courses of their own choosing. Usually between two and five students graduate with an IPS degree each year. Finally, one may choose an off-campus major offered by any of the other Claremont Colleges, provided one also completes a minor in one of the technical fields that Harvey Mudd offers as a major.
What are the academic rankings for Harvey Mudd College?
1. number 49 for Academics.
2. number 19 for ROI (Return on Investment).
What universities are similar to Harvey Mudd College?
|
'''
This file contains tests that test the new user registration system.
'''
from os.path import abspath, dirname
import sys
project_dir = abspath(dirname(dirname(__file__)))
sys.path.insert(0, project_dir)
from django.core.urlresolvers import resolve
from django.test import TestCase
from django.test.client import Client
from django.http import HttpRequest
# Import all of our views for testing
from website.views.views import *
# Same for models
from website.models import *
from django.contrib.auth.models import User
class RegistrationFormTests(TestCase):
def test_register_new_user_url_resolves_to_new_user_view(self):
'''
This tests that the register new user URL resolves to the proper
function.
'''
found = resolve(u'/registerNewUser/')
self.assertEqual(found.func, registerNewUser)
def test_register_button_returns_correct_form(self):
'''
This tests that the register button returns a form containing
at least the proper form inputs.
'''
request = HttpRequest()
response = registerNewUser(request)
# Making sure the form title is there, and that it at least has all
# proper input fields and registration button.
self.assertIn(u'Operator Registration Form', response.content)
self.assertIn(u'inputUsername', response.content)
self.assertIn(u'inputEmail', response.content)
self.assertIn(u'inputPassword', response.content)
self.assertIn(u'inputConfirmPassword', response.content)
self.assertIn(u'submitRegistrationBtn', response.content)
def test_register_new_user(self):
'''
This runs some related tests in sequence because we need the created user from the first
function to be in the database to test the subsequent functions.
'''
self.submit_registration_creates_new_user()
self.register_new_operator_with_existing_username_fails()
self.register_new_operator_with_existing_username_but_different_case_fails()
self.register_new_operator_with_existing_email_fails()
def submit_registration_creates_new_user(self):
'''
This simulates a POST request to create a new user and checks that the URL is good
and that we get back expected responses.
'''
c = Client()
response = c.post(u'/registerNewUser/', {u'username': u'testuser',
u'email': u'[email protected]',
u'password': u'testpassword',
u'confirmPassword': u'testpassword'})
# Let's make sure it created the User in the database...
testUser = User.objects.get(username=u'testuser')
self.assertEqual(testUser.username, u'testuser')
self.assertEqual(testUser.email, u'[email protected]')
# Make sure the function returns a valid response
self.assertEqual(200, response.status_code)
# Now let's check that the server returned some HTML with a success message
self.assertIn(u'succeeded', response.content)
def register_new_operator_with_existing_username_fails(self):
'''
This attempts to register a user with the username of an already existing user, namely
the testuser from the test above. It should fail and provide an error message.
'''
c = Client()
response = c.post(u'/registerNewUser/', {u'username': u'testuser',
u'email': u'[email protected]',
u'password': u'testpassword',
u'confirmPassword': u'testpassword'})
self.assertIn(u'failed', response.content)
def register_new_operator_with_existing_username_but_different_case_fails(self):
'''
This attempts to register a user with the username of an already existing user, namely
the testuser from the test above, but with a different case. It should fail and provide
an error message.
'''
c = Client()
response = c.post(u'/registerNewUser/', {u'username': u'testUser',
u'email': u'[email protected]',
u'password': u'testpassword',
u'confirmPassword': u'testpassword'})
self.assertIn(u'failed', response.content)
def register_new_operator_with_existing_email_fails(self):
'''
This attempts to register a user with a valid username, but with an already existing
e-mail, namely from the test above. It should fail and provide an error message.
'''
c = Client()
response = c.post(u'/registerNewUser/', {u'username': u'testuser1',
u'email': u'[email protected]',
u'password': u'testpassword',
u'confirmPassword': u'testpassword'})
self.assertIn(u'failed', response.content)
def test_new_operator_with_mismatched_passwords_fails(self):
'''
This attempts to create a user with a password and confirm password that
do not match.
'''
c = Client()
response = c.post(u'/registerNewUser/', {u'username': u'testuser',
u'email': u'[email protected]',
u'password': u'testpassword',
u'confirmPassword': u'testpassword1'})
self.assertIn(u'failed', response.content)
def test_too_long_username_in_registration_fails(self):
'''
This attempts to test what happens when the user tries to register a username that is too long.abspath
'''
c = Client()
response = c.post(u'/registerNewUser/', {u'username': u'thisisaverylongusernamethatshouldnotbeallowed',
u'email': u'[email protected]',
u'password': u'testpassword',
u'confirmPassword': u'testpassword'})
self.assertIn(u'failed', response.content)
def test_username_with_invalid_characters_fails(self):
'''
This attempts to register a username with invalid characters. It should not let the user
register and provide an error message.
'''
# Cases to test:
# 1. Username contains one or more spaces
# 2. Username contains non-alphanumeric characters
c = Client()
response = c.post(u'/registerNewUser/', {u'username': u'testuser!',
u'email': u'[email protected]',
u'password': u'testpassword',
u'confirmPassword': u'testpassword'})
self.assertIn(u'failed', response.content)
response = c.post(u'/registerNewUser/', {u'username': u'testuser@',
u'email': u'testuser@@nothing.com',
u'password': u'testpassword',
u'confirmPassword': u'testpassword'})
self.assertIn(u'failed', response.content)
response = c.post(u'/registerNewUser/', {u'username': u'testuser$',
u'email': u'[email protected]',
u'password': u'testpassword',
u'confirmPassword': u'testpassword'})
self.assertIn(u'failed', response.content)
response = c.post(u'/registerNewUser/', {u'username': u'test user',
u'email': u'test [email protected]',
u'password': u'testpassword',
u'confirmPassword': u'testpassword'})
self.assertIn(u'failed', response.content)
def test_registration_without_username_fails(self):
'''
This attempts to register without a username.
'''
c = Client()
response = c.post(u'/registerNewUser/', {u'email': u'[email protected]',
u'password': u'testpassword',
u'confirmPassword': u'testpassword'})
self.assertIn(u'failed', response.content)
def test_registration_without_email_fails(self):
'''
This attempts to register without an e-mail.
'''
c = Client()
response = c.post(u'/registerNewUser/', {u'username': u'testuser',
u'password': u'testpassword',
u'confirmPassword': u'testpassword'})
self.assertIn(u'failed', response.content)
def test_registration_without_password_fails(self):
'''
This tests registration without sending a password
'''
c = Client()
response = c.post(u'/registerNewUser/', {u'username': u'testuser',
u'email': u'[email protected]',
u'confirmPassword': u'testpassword'})
self.assertIn(u'failed', response.content)
def test_registration_without_confirm_password_fails(self):
'''
This tests registration without sending a password confirmation.
'''
c = Client()
response = c.post(u'/registerNewUser/', {u'username': u'testuser',
u'email': u'[email protected]',
u'password': u'testpassword'})
self.assertIn(u'failed', response.content)
def test_registration_get_returns_501(self):
'''
This tests that making a GET request to /registerNewUser/ returns a proper 501 page/status
'''
c = Client()
response = c.get(u'/registerNewUser/', {u'username': u'testuser',
u'email': u'[email protected]',
u'password': u'testpassword'})
self.assertEqual(501, response.status_code)
|
atm54form 1 hour 17 minutes ago Web Design bouncy castle rental cheap bouncy castle All http://www.docspal.com Discuss Published New Discard Success!
Bouncy castles are constructed with any ages in mind and also with a wide range of motifs and designs which is guaranteed to put a smile on anybody's face, whether young or old.
honeytax8 1 hour 21 minutes ago Adsense aconinecancer All http://www.cdunyka.com Discuss Published New Discard Success!
honeytax8 1 hour 21 minutes ago Success Stories aconinesite All http://wiki.abecbrasil.org.br Discuss Published New Discard Success!
pindeal5 1 hour 32 minutes ago Success Stories aconineproteintyrosinekinase/rtk All http://www.kunxuansm.com Discuss Published New Discard Success!
pindeal5 1 hour 33 minutes ago Web Hosting aconinecost All https://nscontroller.xyz Discuss Published New Discard Success!
taro09 1 hour 34 minutes ago Home Business salted peanut production line All https://www.peanutmachinerychina.com Discuss Published New Discard Success!
tomatu 1 hour 37 minutes ago Online Business best cheap online shopping All https://thebestprice-online.com Discuss Published New Discard Success!
gumthing6 1 hour 40 minutes ago Search Engine Optimization β-amyloid (1-40)mechanism of action All http://web.niudaiw.com Discuss Published New Discard Success!
tailor9sock 2 hours 7 minutes ago Top Affiliate Programs used truck deals used truck for sale u All https://www.dailystrength.org Discuss Published New Discard Success!
eye7fired 2 hours 14 minutes ago Webmaster Tools & Resources play yard singapore playpen for babies s All http://komiwiki.syktsu.ru Discuss Published New Discard Success!
|
"""
utils.py: part of expfactory package
Copyright (c) 2017-2021, Vanessa Sochat
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import errno
from subprocess import Popen, PIPE, STDOUT
from expfactory.logger import bot
import shutil
import json
import tempfile
import sys
import os
import re
################################################################################
# io utils
################################################################################
def get_installdir():
return os.path.dirname(os.path.abspath(__file__))
def get_templatedir():
base = get_installdir()
return "%s/templates" % (base)
def get_viewsdir(base=None):
"""views might be written to a secondary expfactory install, which can
be specified with base"""
if base is None:
base = get_installdir()
return "%s/views" % (base)
def find_subdirectories(basepath):
"""
Return directories (and sub) starting from a base
"""
directories = []
for root, dirnames, filenames in os.walk(basepath):
new_directories = [d for d in dirnames if d not in directories]
directories = directories + new_directories
return directories
def find_directories(root, fullpath=True):
"""
Return directories at one level specified by user
(not recursive)
"""
directories = []
for item in os.listdir(root):
# Don't include hidden directories
if not re.match("^[.]", item):
if os.path.isdir(os.path.join(root, item)):
if fullpath:
directories.append(os.path.abspath(os.path.join(root, item)))
else:
directories.append(item)
return directories
def copy_directory(src, dest, force=False):
"""Copy an entire directory recursively"""
if os.path.exists(dest) and force is True:
shutil.rmtree(dest)
try:
shutil.copytree(src, dest)
except OSError as e:
# If the error was caused because the source wasn't a directory
if e.errno == errno.ENOTDIR:
shutil.copy(src, dest)
else:
bot.error("Directory not copied. Error: %s" % e)
sys.exit(1)
def mkdir_p(path):
"""mkdir_p attempts to get the same functionality as mkdir -p
:param path: the path to create.
"""
try:
os.makedirs(path)
except OSError as e:
if e.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
bot.error("Error creating path %s, exiting." % path)
sys.exit(1)
def clone(url, tmpdir=None):
"""clone a repository from Github"""
if tmpdir is None:
tmpdir = tempfile.mkdtemp()
name = os.path.basename(url).replace(".git", "")
dest = "%s/%s" % (tmpdir, name)
return_code = os.system("git clone %s %s" % (url, dest))
if return_code == 0:
return dest
bot.error("Error cloning repo.")
sys.exit(return_code)
def run_command(cmd):
"""run_command uses subprocess to send a command to the terminal.
:param cmd: the command to send, should be a list for subprocess
"""
output = Popen(cmd, stderr=STDOUT, stdout=PIPE)
t = output.communicate()[0], output.returncode
output = {"message": t[0], "return_code": t[1]}
return output
################################################################################
# templates
################################################################################
def get_template(name, base=None):
"""read in and return a template file"""
# If the file doesn't exist, assume relative to base
template_file = name
if not os.path.exists(template_file):
if base is None:
base = get_templatedir()
template_file = "%s/%s" % (base, name)
# Then try again, if it still doesn't exist, bad name
if os.path.exists(template_file):
with open(template_file, "r") as filey:
template = "".join(filey.readlines())
return template
bot.error("%s does not exist." % template_file)
def sub_template(template, template_tag, substitution):
"""make a substitution for a template_tag in a template"""
template = template.replace(template_tag, substitution)
return template
def save_template(output_file, snippet, mode="w", base=None):
if base is None:
base = get_templatedir()
with open(output_file, mode) as filey:
filey.writelines(snippet)
return output_file
################################################################################
# JSON
################################################################################
def read_json(filename, mode="r"):
with open(filename, mode) as filey:
data = json.load(filey)
return data
def write_json(json_obj, filename, mode="w"):
with open(filename, mode) as filey:
filey.write(
json.dumps(json_obj, sort_keys=True, indent=4, separators=(",", ": "))
)
return filename
def read_file(filename, mode="r"):
with open(filename, mode) as filey:
data = filey.read()
return data
def write_file(filename, content, mode="w"):
with open(filename, mode) as filey:
filey.writelines(content)
return filename
def get_post_fields(request):
"""parse through a request, and return fields from post in a dictionary"""
fields = dict()
for field, value in request.form.items():
fields[field] = value
return fields
################################################################################
# environment / options
################################################################################
def convert2boolean(arg):
"""convert2boolean is used for environmental variables
that must be returned as boolean"""
if not isinstance(arg, bool):
return arg.lower() in ("yes", "true", "t", "1", "y")
return arg
def getenv(variable_key, default=None, required=False, silent=True):
"""getenv will attempt to get an environment variable. If the variable
is not found, None is returned.
:param variable_key: the variable name
:param required: exit with error if not found
:param silent: Do not print debugging information for variable
"""
variable = os.environ.get(variable_key, default)
if variable is None and required:
bot.error("Cannot find environment variable %s, exiting." % variable_key)
sys.exit(1)
if not silent:
if variable is not None:
bot.verbose2("%s found as %s" % (variable_key, variable))
else:
bot.verbose2("%s not defined (None)" % variable_key)
return variable
|
YearCompass is a global movement that mobilises people to sort out their last year and plan their next one in order to have a greater awareness of their lives. New Year's resolutions don't work. Planning your year does. The main tool of the movement is the free to download YearCompass booklet that is available in many languages.
We wanted to have a worthwhile New Year's Eve with our friends, made a booklet with a few questions that help reflection and it went viral in 2013. Since then it became an international movement with more than a hundred volunteers from 25 countries. Today the YearCompass booklet is available in 22 different languages. The self-printed version is free to download from the YearCompass website and had 700 000 downloads last year.
YearCompass is a non-profit organisation that relies on volunteering resources, the backbone of which are the volunteers of Invisible University (a Hungarian non-profit organisation that promotes self development for higher education).
Besides them the international volunteer community is growing steadily making YearCompass able to reach more and more countries and people every year.
What's YearCompass in one sentence?
It was 2012 and we wanted to have a worthwhile New Year's Eve with our friends, so we've put together a few questions that would help us focus. The event was an absolute success, our idea resonated really well with others. We thought that this could be something useful and decided to make a booklet. Next year we made it public on the internet. We thought maybe we would have a couple hundred downloads and make a few people happy. Next thing we know – it went viral.
The second is from the Women's Club of Ulanbator. Two years ago we've found out by accident that there is a women's club in Asia where the members learn English together. They use the English version of our booklet as a fun exercise. They posted a picture of the first event – we still have it, it's one of our best reminders why we do this whole thing.
This is not a business for us. We don't want to make the core experience into a product, that would defeat the purpose of the whole endeavor. We want the YearCompass movement to spread rapidly, to reach a massive audience, making them more self-aware, so in turn they could make a world a better place to live on.
Although the booklet is free to download we definitely need some kind of funding to keep running the servers, pay our staff, etc. Currently, we are testing a more Elegant Edition of the booklet that you can give as a gift on Christmas or buy it for yourself. Compared to the self-printed pdf version this is a fancier product with a few surprises. But the questions are the same as in the core booklet that you can download for free. And this will remain the same in the future – you will always be able to download the core booklet for free.
|
from django import forms
class CommonInfoForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
user = kwargs.pop('user', None)
super(CommonInfoForm, self).__init__(*args, **kwargs)
# User is required for bound forms
if self.is_bound and not user:
raise TypeError('Bound {0} instances require a "user" argument.'.format(self.__class__.__name__))
# Set self.user regardless of whether or not the form is bound. Child
# forms may well require the user when the form is unbound as well.
self.user = user
def save(self, commit=True):
# Call super.save() with commit=False so .save() can be called on the
# instance with the required user argument
instance = super(CommonInfoForm, self).save(commit=False)
if commit:
instance.save(self.user)
self.save_m2m()
del self.save_m2m # pretend commit=False was never used
return instance
|
If the file path is short, you can highlight and use Copy. Otherwise, double-click one the words in the location field to highlight it. Right-click and select Select All from the pop up menu.... The scan result contains the list of all the files whose path length exceeds the chosen threshold amount. The size of the path, type of the file and the actual path is also displayed.
Just select a file or folder in Finder or any other program that works in a similar way and invoke your new service from the Application Menu » Services » Copy File or Folder path (it will only show up if you actually have files or folders selected).
Introduction. Find is now more then 40 years old and naturally there are several generations of it. As we are taking about GNU find there are multiple version of it too.
13/02/2018 · Hi Dave I have named the file 'Oefen_excel.xls' (practice excel in Dutch ) to keep it simple. It is a normal excel document. For some reason vba keeps saying that the document can not be found although the pathname is correct.
|
'''
This script sets up a virtualenv with openstates on ubunt.
usage: python setup_openstates_ubuntu.py myvirtualenv [whenIputmycode]
If you don't specify a second argument, the code goes in the virtualenv.
'''
import sys
import os
from os import chdir as cd
from os.path import join, abspath
import subprocess
import logging
# Logging config
logger = logging.getLogger('[openstates-installer]')
logger.setLevel(logging.INFO)
ch = logging.StreamHandler()
formatter = logging.Formatter('%(name)s %(asctime)s - %(message)s',
datefmt='%H:%M:%S')
ch.setFormatter(formatter)
logger.addHandler(ch)
packages = {
# The packages are required for use of lxml and git.
'core': '''
libxml2-dev
python-dev
libxslt1-dev
git'''.split(),
}
# ---------------------------------------------------------------------------
# Utility functions
def run(command, check=False):
logger.info('running "%s"' % command)
if check:
return subprocess.check_output(command, shell=True)
else:
subprocess.call(command, shell=True)
def run_each(*commands):
for c in commands:
run(c)
def package_install(package, update=False):
"""Installs the given package/list of package, optionnaly updating
the package database."""
if update:
run("sudo apt-get --yes update")
if type(package) in (list, tuple):
package = " ".join(package)
run("sudo apt-get --yes install %s" % (package))
def package_ensure(package):
"""Tests if the given package is installed, and installes it in
case it's not already there. Loosely stolen from cuisine."""
cmd = "dpkg-query -W -f='${Status}' %s ; true"
status = run(cmd % package, check=True)
if status.find("not-installed") != -1 or status.find("installed") == -1:
package_install(package)
return False
else:
return True
def create_virtualenv(ENV):
'Create the virtualenv.'
run_each(
('wget -nc http://pypi.python.org/packages/source/v/virtualenv'
'/virtualenv-1.7.tar.gz#md5=dcc105e5a3907a9dcaa978f813a4f526'),
'tar -zxvf virtualenv-1.7.tar.gz ',
'python virtualenv-1.7/virtualenv.py %s' % ENV,
)
def gitclone(repo, setup_arg='install'):
cd(CODE)
# Clone the code.
run('git clone %s' % repo)
# Install requirements.
_, folder = os.path.split(repo)
folder, _ = os.path.splitext(folder)
requirements = join(CODE, folder, 'requirements.txt')
try:
with open(requirements):
pass
except IOError:
pass
else:
run('%s install -r %s' % (pip, requirements))
# Setup.
cd(folder)
run('%s setup.py %s' % (python, setup_arg))
def setup_openstates():
for package in packages['core']:
package_ensure(package)
create_virtualenv(ENV)
# Get openstates.
gitclone('git://github.com/sunlightlabs/openstates.git')
# Uninstall billy.
run('%s uninstall billy' % pip)
# Clone billy, get requirements, and run setup.py develop
gitclone('git://github.com/sunlightlabs/billy.git', 'develop')
def setup_mysql():
package_ensure('mysql-server')
run("sudo apt-get build-dep python-mysqldb")
run("pip install MySQL-python")
if __name__ == "__main__":
try:
ENV, CODE = map(abspath, sys.argv[1:3])
except ValueError:
ENV = CODE = abspath(sys.argv[1])
for path in [ENV, CODE]:
try:
os.makedirs(ENV)
os.makedirs(CODE)
except OSError:
pass
pip = join(ENV, 'bin', 'pip')
python = join(ENV, 'bin', 'python')
setup_openstates()
|
Aha, a bank holiday Monday post straight from the river bank.
In the last four years we've seen quite a few little voles on the river bank, nibbling on reeds and going about their business. More often we've been made aware of their presence by way of hearing a distinctive 'plop' into the water or to see a reed waving wildly amongst a crowd of calm. Seeing a water vole is a real treat, one which makes the heart leap. I don't think water voles are as shy as they're made out to be, I think they're cheeky chaps with a good sense of humour and a passion for hide and seek.
Have you ever seen a water vole?
I've never seen one, but aren't they cute?!
|
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class UpdateSystemServicesResponse(Model):
"""Response of the update system services API.
Variables are only populated by the server, and will be ignored when
sending a request.
:ivar update_status: Update status. Possible values include: 'Unknown',
'Updating', 'Creating', 'Deleting', 'Succeeded', 'Failed', 'Canceled'
:vartype update_status: str or
~azure.mgmt.machinelearningcompute.models.OperationStatus
:ivar update_started_on: The date and time when the last system services
update was started.
:vartype update_started_on: datetime
:ivar update_completed_on: The date and time when the last system services
update completed.
:vartype update_completed_on: datetime
"""
_validation = {
'update_status': {'readonly': True},
'update_started_on': {'readonly': True},
'update_completed_on': {'readonly': True},
}
_attribute_map = {
'update_status': {'key': 'updateStatus', 'type': 'str'},
'update_started_on': {'key': 'updateStartedOn', 'type': 'iso-8601'},
'update_completed_on': {'key': 'updateCompletedOn', 'type': 'iso-8601'},
}
def __init__(self):
super(UpdateSystemServicesResponse, self).__init__()
self.update_status = None
self.update_started_on = None
self.update_completed_on = None
|
New York State lawmakers gave a cordial but cool reception yesterday to the congestion pricing plan proposed by Mayor Michael R. Bloomberg, asking whether it amounted to a regressive tax on middle-class drivers and whether its costs were worth the promised benefits.
The plan, which would charge cars and trucks a flat fee to drive in Manhattan below 86th Street, earned crucial support this week when it was endorsed by Gov. Eliot Spitzer and by federal transportation officials. The city is under consideration for as much as $500 million in federal grants for a pilot congestion pricing plan, enough to pay for all the start-up costs of the system.
Under the plan, it would cost $8 to drive a car and $21 to drive a truck into the congestion zone between 6 a.m. and 6 p.m. on weekdays, when traffic in Manhattan is worst. But those fees, like many elements of the sweeping city-planning initiative Mr. Bloomberg unveiled last month, would require Albany’s approval.
Speaking at an Assembly hearing in Midtown Manhattan yesterday before a mostly supportive audience of labor leaders and environmental and mass transit advocates, Mr. Bloomberg argued that congestion pricing would bring benefits beyond merely reducing traffic in the city’s central business district, from new revenue for subway improvements to lower asthma rates among city children and reduced carbon dioxide emissions from the city over all.
But Mr. Bloomberg did not appear to make many inroads among the more than a dozen members of the State Assembly who appeared at the hearing yesterday, including many from the boroughs outside of Manhattan and the city’s suburbs. Indeed, rather than resolve any battles, Mr. Bloomberg’s answers seemed only to draw the lines for future ones in Albany.
Cities and towns in New York generally must seek state approval to institute new fees and taxes, as well as to create new public authorities, as Mr. Bloomberg has proposed.
The hearing did feature occasional light moments. Richard L. Brodsky, a Westchester Democrat, expressed civil liberties concerns about the cameras that would be installed to track cars as they drive in and out of Manhattan. He asked Mr. Bloomberg what people would think if President Bush proposed a similar plan.
“If George Bush had come out for motherhood and apple pie, everybody would be against it,” Mr. Bloomberg said.
Mr. Bloomberg tried several times to defuse skepticism about the plan by pointing out that it called for only a three-year pilot project, many costs of which could be paid through the federal grants. But several members challenged him on the point, saying that the legislation as proposed left it up to city officials whether or not to keep the system in place at the conclusion of the pilot phase.
Some lawmakers also questioned Mr. Bloomberg’s plans to create a new public authority to control the roughly $380 million in revenue the program would obtain each year. Under the legislation, which was introduced in the Senate on Thursday, that authority would also give the city more power over the completion of some major projects, like the Second Avenue subway.
Mr. Spitzer, among others, has said he would prefer that that money remain in the control of existing authorities like the Metropolitan Transportation Authority or the Port Authority.
Speaking to reporters after the hearing, Mr. Bloomberg was asked if he might be willing to part with the new authority if it would help push through the bill.
Precisely how the proposal will be received more broadly among lawmakers in Albany remains unclear. Only two weeks remain in the legislative session there, and the congestion proposal is only one of several elements of the mayor’s plans that require legislative approval, to say nothing of the governor’s and lawmakers’ own priorities.
But Scott M. Stringer, a former assemblyman who is now Manhattan’s borough president, said he thought the hearing had moved the proposal forward.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2012 - 2013
# Matías Herranz <[email protected]>
# Joaquín Tita <[email protected]>
#
# https://github.com/PyRadar/pyradar
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 3 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library. If not, see <http://www.gnu.org/licenses/>.
#===============================================================================
# DOCS
#===============================================================================
"""This file is for distribute pyradar with setuptools
"""
#===============================================================================
# IMPORTS
#===============================================================================
import sys
from ez_setup import use_setuptools
use_setuptools()
from setuptools import setup, find_packages
import pyradar
#===============================================================================
# CONSTANTS
#===============================================================================
PYPI_REQUIRE = [
"Pillow",
"numpy",
"matplotlib",
"scipy"
]
MANUAL_REQUIRE = {
"gdal" : "http://gdal.org/",
}
# sugerido pero no importante
SUGESTED = {
}
#===============================================================================
# WARNINGS FOR MANUAL REQUIRES AND SUGGESTED
#===============================================================================
def validate_modules(requires):
not_found = []
for name, url in list(requires.items()):
try:
__import__(name)
except ImportError:
not_found.append("{} requires '{}' ({})".format(pyradar.PRJ,
name, url))
return not_found
def print_not_found(not_found, msg):
limits = "=" * max(list(map(len, not_found)))
print(("\n{}\n{}\n{}\n{}\n".format(msg, limits, "\n".join(not_found), limits)))
not_found = validate_modules(MANUAL_REQUIRE)
if not_found:
print_not_found(not_found, "ERROR")
sys.exit(1)
not_found = validate_modules(SUGESTED)
if not_found:
print_not_found(not_found, "WARNING")
#===============================================================================
# FUNCTIONS
#===============================================================================
setup(
name=pyradar.PRJ.lower(),
version=pyradar.STR_VERSION,
description=pyradar.SHORT_DESCRIPTION,
author=pyradar.AUTHOR,
author_email=pyradar.EMAIL,
url=pyradar.URL,
license=pyradar.LICENSE,
keywords=pyradar.KEYWORDS,
classifiers=pyradar.CLASSIFIERS,
packages=[pkg for pkg in find_packages() if pkg.startswith("pyradar")],
include_package_data=True,
package_data={
'ExampleImages': ['pyradar/simulate/ExampleImages/*'],
'DemoSet' : ['pyradar/simulate/DemoSet/*'],
},
py_modules=["ez_setup"],
install_requires=PYPI_REQUIRE,
)
|
Fill the VOID this weekend.
Cancel the house parties, bar crawls, the long queues at the underground clubs, and get your ticket to the VOID Club! This is a party organised by the AStA for students from every semester and degree at the University of Applied Sciences Europe. Feel free to bring a friend.
LINE UP FOR THE PARTY!
** Please send in your music requests in the comments!
|
"""Config flow to configure the Freebox integration."""
import logging
from freebox_api.exceptions import AuthorizationError, HttpRequestError
import voluptuous as vol
from homeassistant import config_entries
from homeassistant.const import CONF_HOST, CONF_PORT
from .const import DOMAIN # pylint: disable=unused-import
from .router import get_api
_LOGGER = logging.getLogger(__name__)
class FreeboxFlowHandler(config_entries.ConfigFlow, domain=DOMAIN):
"""Handle a config flow."""
VERSION = 1
CONNECTION_CLASS = config_entries.CONN_CLASS_LOCAL_POLL
def __init__(self):
"""Initialize Freebox config flow."""
self._host = None
self._port = None
def _show_setup_form(self, user_input=None, errors=None):
"""Show the setup form to the user."""
if user_input is None:
user_input = {}
return self.async_show_form(
step_id="user",
data_schema=vol.Schema(
{
vol.Required(CONF_HOST, default=user_input.get(CONF_HOST, "")): str,
vol.Required(CONF_PORT, default=user_input.get(CONF_PORT, "")): int,
}
),
errors=errors or {},
)
async def async_step_user(self, user_input=None):
"""Handle a flow initiated by the user."""
errors = {}
if user_input is None:
return self._show_setup_form(user_input, errors)
self._host = user_input[CONF_HOST]
self._port = user_input[CONF_PORT]
# Check if already configured
await self.async_set_unique_id(self._host)
self._abort_if_unique_id_configured()
return await self.async_step_link()
async def async_step_link(self, user_input=None):
"""Attempt to link with the Freebox router.
Given a configured host, will ask the user to press the button
to connect to the router.
"""
if user_input is None:
return self.async_show_form(step_id="link")
errors = {}
fbx = await get_api(self.hass, self._host)
try:
# Open connection and check authentication
await fbx.open(self._host, self._port)
# Check permissions
await fbx.system.get_config()
await fbx.lan.get_hosts_list()
await self.hass.async_block_till_done()
# Close connection
await fbx.close()
return self.async_create_entry(
title=self._host,
data={CONF_HOST: self._host, CONF_PORT: self._port},
)
except AuthorizationError as error:
_LOGGER.error(error)
errors["base"] = "register_failed"
except HttpRequestError:
_LOGGER.error("Error connecting to the Freebox router at %s", self._host)
errors["base"] = "cannot_connect"
except Exception: # pylint: disable=broad-except
_LOGGER.exception(
"Unknown error connecting with Freebox router at %s", self._host
)
errors["base"] = "unknown"
return self.async_show_form(step_id="link", errors=errors)
async def async_step_import(self, user_input=None):
"""Import a config entry."""
return await self.async_step_user(user_input)
async def async_step_zeroconf(self, discovery_info: dict):
"""Initialize flow from zeroconf."""
host = discovery_info["properties"]["api_domain"]
port = discovery_info["properties"]["https_port"]
return await self.async_step_user({CONF_HOST: host, CONF_PORT: port})
|
Ayden is 3 years old. It is important to work on his soft motor skills. At this age, I pay more attention to his finger muscles and hand strengthening as it is crucial in helping him to hold the pencil when he starts going to school.
Watch this video on how Ayden started by stamping random circles and proceeded to paint a spider, lollipop as well as a goldfish from just a circle.
Such a good boy! I only got to do stamping when I was in primary school, potato and banana stem.
Exactly. I only got to do stamping when I was in primary school too.
Nice. I think every kids love stamping. Stamping here and there.
Ayden is so good boy. Adorable.
Yes, I believe most kids love stamping. The cheekier ones will stamp everywhere except on the paper itself. Hahaha.
Ayden is so adorable! You speak like a professional. Are you a certified early childhood educator?
I taught in a preschool and also worked part-time in an early childhood learning centre before giving them all up to be a stay at home mom to Ethan and now Ayden too.
OIC. No wonder you are so professional. Ethan and Ayden are so blessed to have you as their mommy.
He is so cute and smart plus handsome too! He will become a very good speaker in future. His mama is so far sighted and clever to teach him all the best in stamping.
Thank you :D I home-school him just like what I have done with the elder brother during his preschool years hence I have to keep him occupied with fun activities that helps to develop his motor skills.
|
import os
import re
import httpx
import base64
from urllib.parse import unquote
import hashlib
cookies = {
"yew490": "1",
"_ga": "GA1.2.284686093.1564216182",
"_gid": "GA1.2.1256976049.1564216182",
"__PPU_SESSION_1_1683592_false": "1564216202929|1|1564216202929|1|1",
"_gat": "1",
}
headers = {
"Connection": "keep-alive",
"Cache-Control": "max-age=0",
"Origin": "http://www.turkanime.net",
"Upgrade-Insecure-Requests": "1",
"Content-Type": "application/x-www-form-urlencoded",
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36",
"Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3",
"Referer": "http://www.turkanime.net/",
"Accept-Encoding": "gzip, deflate",
"Accept-Language": "tr-TR,tr;q=0.9,en-US;q=0.8,en;q=0.7",
}
class TurkAnime:
url = "http://www.turkanime.net"
def anime_ara(self, ara):
data = {"arama": ara}
veri = httpx.post(
self.url + "/arama", headers=headers, cookies=cookies, data=data
).content.decode("utf-8")
liste = []
r = re.findall(
'<div class="panel-ust-ic"><div class="panel-title"><a href="\/\/www\.turkanime\.net\/anime\/(.*?)" (.*?)>(.*?)<\/a>',
veri,
)
for slug, _, title in r:
liste.append([title, slug])
if len(liste) == 0:
try:
slug = veri.split('window.location = "anime/')[1].split('"')[0]
liste.append([ara, slug])
except:
pass
return liste
def bolumler(self, slug):
veri = httpx.get(
self.url + "/anime/" + slug, headers=headers, cookies=cookies
).content.decode("utf8")
h = headers.copy()
h.update({"X-Requested-With": "XMLHttpRequest", "Accept": "*/*"})
animeId = veri.split("ajax/bolumler&animeId=")[1].split('"')[0]
liste = []
a = httpx.get(
f"http://www.turkanime.net/ajax/bolumler&animeId={animeId}",
headers=h,
cookies=cookies,
).content.decode("utf8")
r = re.findall(
'<a href="\/\/www\.turkanime\.net\/video\/(.*?)" (.*?)><span class="bolumAdi">(.*?)<\/span><\/a>',
a,
)
for slug, _, title in r:
liste.append([title, slug])
return liste
|
One of the three things which Moshe Rabbeinu did on his own initiative and was later vindicated by Divine approval was separating from his wife after the giving of the Torah at Mount Sinai. His reasoning was that if all Jews were commanded to separate from their wives in the few days leading up to the Torah-giving in order to be properly pure for their momentary encounter with Hashem, then his separation must be an ongoing one since he was constantly being summoned to unscheduled encounters with Hashem.
Tosefot points out that the gemara's proof that this was Moshe's initiative rather than a Divine command is the fact that Aharon and Miriam became angry with their brother when they learned of the separation, and spoke critically of his action. (Bamidbar 12:1-2) Had Moshe been commanded in this separation, they would certainly not have questioned his behavior.
But if the same Aharon and Miriam were aware of Moshe's separation, then they were also aware that when Hashem gave permission to all the Jews to resume family life after the Torah-giving that He expressed His approval of Moshe's initiative to make a prolonged separation by stating "But you remain here with Me" (Devarim 5:27-28). Why then, asks Tosefot, were they upset by his initiative if it received Divine approval?
The answer, proposes Tosefot, lies in the Talmudic statement (Mesechta Makkot 10b) that Heaven guides a person along the path that he has chosen to follow. The catalyst for Divine sanction of Moshe's prolonged separation from his wife was his choice of a level of purity which his sister criticized as being beyond the norm expected of all Jews and at the expense of his wife. The Divine reaction to this criticism initiated by Miriam was the illness described in the above cited Torah chapter, which was to serve as a lesson to all future generations for guarding the tongue.
Right after the death of their father Yaakov, Yosef's brothers sent a message to Yosef that before his passing, Yaakov had asked them to implore Yosef in his name to forgive them for the evil they had done him. Yaakov, of course, had never made such a request, and from this, Rabbi Elazar the son of Shimon concludes that one may divert from the truth in order to maintain peaceful relations.
But indeed, why did Yaakov not anticipate the resentment Yosef might feel towards his brother and make such a request of him during his lifetime?
Ramban (Bereishet 45:27) contends that Yaakov never became aware that Yosef had been sold into captivity by his brothers. Yaakov always assumed that Yosef had been picked up by slave dealers while wandering in the fields and sold by them to the Egyptians. The brothers never told him because of their fear that he might become outraged and curse them as he did Reuven, Shimon and Levi for their sins in other matters. Yosef, for his part, was too moral to divulge such a matter to his father.
Rashi, in his commentary on Chumash, takes a different approach. Yaakov was aware, but he did not suspect his righteous son Yosef of harboring feelings of resentment which might lead to a vendetta, and therefore saw no need for asking him to forgive them. The question arises, however, as to why the brothers did suspect him and found it necessary to tell their "white lie?"
Maharsha suggests that the suspicion arose only after the death of Yaakov, so there was no need for them to seek their father's intervention while he was alive. The Midrash (Rabbah 100:8) mentions two things that happened which aroused their suspicion because they misconstrued Yosef's intentions. One was the fact that Yosef stopped inviting them to dine with him because he did not wish to continue the seating arrangement instituted by their father which placed him ahead of Yehuda the king, who was the forefather of the kings of the Jewish Nation, and ahead of Reuven the firstborn. Yet he was also unable to place them ahead of him because of his royal status in Egypt, and therefore decided to stop inviting them altogether. Another incident occurred when Yosef returned from his father's funeral and looked into the pit where his brothers had placed him. Yosef did this in order to offer a blessing of thanks to Heaven for his miraculous rescue from death. Although his motives in both cases were praiseworthy, they aroused his brothers' suspicions that animosity suppressed in their father's lifetime had now surfaced, forcing them to lie in order to keep the peace.
|
# -*- coding: utf-8 -*-
# <nbformat>3.0</nbformat>
# <codecell>
%matplotlib inline
# <codecell>
cd /Users/dpwe/projects/millionsong/python/midi-dataset
# <codecell>
import numpy as np
import matplotlib.pyplot as plt
import dpcore
# <codecell>
reload(dpcore)
# <codecell>
M = np.random.rand(50,50)
plt.imshow(M, interpolation='none', cmap='binary')
# <codecell>
%timeit DC, phiC = dpcore.dpcore(M, 0.2, True)
%timeit DP, phiP = dpcore.dpcore(M, 0.2, False)
# <codecell>
DC, phiC = dpcore.dpcore(M, 0.2, True)
DP, phiP = dpcore.dpcore(M, 0.2, False)
# <codecell>
plt.imshow(DC,interpolation='none')
# <codecell>
plt.imshow(DC-DP, interpolation='none')
print np.max(np.abs(DC-DP))
# <codecell>
plt.imshow(phiC-phiP, interpolation='none')
# <codecell>
MM = np.random.rand(5, 5)
pen = 0.2
gut = 0.3
p,q,C,phi = dpcore.dp(MM, pen, gut)
print p, q
print MM
print C
print "best cost =", C[p[-1],q[-1]], "=", np.sum(MM[p, q])+pen*(np.sum(phi[p, q]>0))
plt.imshow(MM, interpolation='none', cmap='binary')
plt.hold(True)
plt.plot(q,p,'-r')
plt.hold(False)
plt.show()
# <codecell>
M2 = np.copy(M)
M2[20:30,20:30] += np.random.rand(10,10)
M2[10:40,10:40] += np.random.rand(30,30)
plt.imshow(M2, interpolation='none', cmap='binary')
p,q,C,phi = dpcore.dp(M2,0.1,0.1)
plt.hold(True)
plt.plot(q,p,'-r')
plt.hold(False)
plt.show()
# <codecell>
import librosa
# <codecell>
# Mirror matlab example from http://www.ee.columbia.edu/ln/rosa/matlab/dtw/
d1, sr = librosa.load('/Users/dpwe/projects/dtw/sm1_cln.wav', sr=16000)
d2, sr = librosa.load('/Users/dpwe/projects/dtw/sm2_cln.wav', sr=16000)
D1 = librosa.stft(d1, n_fft=512, hop_length=128)
D2 = librosa.stft(d2, n_fft=512, hop_length=128)
librosa.display.specshow(20*np.log10(np.abs(D1)), sr=sr, hop_length=128)
# <codecell>
# Cosine similarity matrix (slow one-liner)
SM = np.array([[np.sum(a*b)/np.sqrt(np.sum(a**2)*np.sum(b**2)) for b in np.abs(D2.T)] for a in np.abs(D1.T)])
# <codecell>
plt.imshow(SM)
# <codecell>
p, q, C, phi = dpcore.dp(1-SM)
# <codecell>
plt.imshow(SM, interpolation='none', cmap='binary')
plt.hold(True)
plt.plot(q,p,'-r')
plt.hold(False)
plt.show()
# <codecell>
C[-1,-1]
# <codecell>
|
© Carol Tarzier, all rights reserved. Powered by WordPress. Theme by mysitemyway.com.
|
"""
run_analysis
~~~~~~~~~~~~
Model / Controller for running the actual analysis.
TODO: The random `print` statements are to try to catch a rare deadlock
condition.
"""
import open_cp.gui.tk.run_analysis_view as run_analysis_view
import open_cp.gui.predictors as predictors
from open_cp.gui.common import CoordType
import open_cp.gui.predictors.predictor as predictor
import open_cp.pool as pool
import open_cp.gui.tk.threads as tk_threads
import open_cp.gui.locator as locator
import collections
import logging
import queue
import time
import datetime
class RunAnalysis():
"""Controller for performing the computational tasks of actually producing
a prediction. Using multi-processing.
:param parent: Parent `tk` widget
:param controller: The :class:`analyis.Analysis` model.
"""
def __init__(self, parent, controller):
self.view = run_analysis_view.RunAnalysisView(parent, self)
self.controller = controller
self._msg_logger = predictors.get_logger()
self._logger = logging.getLogger(__name__)
@property
def main_model(self):
"""The :class:`analysis.Model` instance"""
return self.controller.model
def run(self):
try:
self._model = RunAnalysisModel(self, self.main_model)
self._run_tasks()
except:
self._msg_logger.exception(run_analysis_view._text["genfail"])
self.view.done()
self.view.wait_window(self.view)
def cancel(self):
"""Called when we wish to cancel the running tasks"""
self._logger.warning("Analysis run being cancelled.")
self._msg_logger.warning(run_analysis_view._text["log10"])
if hasattr(self, "_off_thread"):
self._off_thread.cancel()
@staticmethod
def _chain_dict(dictionary):
for name, li in dictionary.items():
for x in li:
yield (name, x)
def _run_tasks(self):
tasks = []
for proj_name, proj in self._chain_dict(self._model.projectors):
for grid_name, grid in self._chain_dict(self._model.grids):
for pred_name, pred in self._chain_dict(self._model.grid_prediction_tasks):
task = _RunAnalysis_Task(
task = _RunAnalysis_Task._InnerTask(self.main_model, grid, proj, pred),
off_process = pred.off_process,
projection = proj_name,
grid = grid_name,
type = pred_name )
tasks.append(task)
total = len(tasks) * len(self._model.predict_tasks)
self._msg_logger.info(run_analysis_view._text["log7"], total)
self._off_thread = _RunnerThread(tasks, self._model.predict_tasks, self)
self._off_thread.force_gc()
locator.get("pool").submit(self._off_thread, self._finished)
def to_msg_logger(self, msg, *args, level=logging.DEBUG):
self._msg_logger.log(level, msg, *args)
def start_progress(self):
locator.get("pool").submit_gui_task(lambda : self.view.start_progress_bar())
def set_progress(self, done, out_of):
locator.get("pool").submit_gui_task(lambda : self.view.set_progress(done, out_of))
def end_progress(self):
locator.get("pool").submit_gui_task(lambda : self.view.stop_progress_bar())
def notify_model_message(self, msg, *args, level=logging.DEBUG):
self.to_msg_logger(msg, *args, level=level)
def _finished(self, out=None):
self.view.done()
if out is not None:
if isinstance(out, predictor.PredictionError):
self.view.alert(str(out))
self._msg_logger.error(run_analysis_view._text["warning1"].format(out))
elif isinstance(out, Exception):
self._msg_logger.error(run_analysis_view._text["log11"].format(out))
else:
self._msg_logger.error(run_analysis_view._text["log12"].format(out))
return
if self._off_thread.cancelled:
self.view.cancel()
else:
results = [PredictionResult(key, result) for (key, result) in self._off_thread.results]
result = RunAnalysisResult(results)
self.controller.new_run_analysis_result(result)
class _RunAnalysis_Task():
"""Pulled out to allow pickling"""
def __init__(self, task, off_process, projection, grid, type):
self.task = task
self.off_process = off_process
self.projection = projection
self.grid = grid
self.type = type
def __repr__(self):
return "_RunAnalysis_Task(task={}, off_process={}, projection={}, grid={}, type={})".format(
self.task, self.off_process, self.projection, self.grid, self.type)
class _InnerTask():
def __init__(self, main_model, grid, proj, pred):
# Make a copy of the :class:`DataModel` and not the extra baggage.
self.main_model = main_model.clone()
self.grid = grid
self.proj = proj
self.pred = pred
def __call__(self):
return self.pred(self.main_model, self.grid, self.proj)
class RunAnalysisResult():
def __init__(self, results):
self._results = results
self._time = datetime.datetime.now()
@property
def results(self):
"""List of :class:`PredictionResult` instances."""
return self._results
@property
def run_time(self):
""":class:`datetime` of when the result was completed."""
return self._time
def merge_all_results(results):
"""Merge an iterable of :class:`RunAnalysisResult` instances into a single
:class:`RunAnalysisResult` object."""
all_results = []
for result in results:
all_results.extend(result.results)
return RunAnalysisResult(all_results)
class PredictionResult():
"""The result of running the prediction, but not including any analysis
results.
:param key: Instance of :class:`TaskKey`
:param prediction: The result of the prediction. Slightly undefined, but
at present, should be an :class:`GridPrediction` instance.
"""
def __init__(self, key, prediction):
self._key = key
self._pred = prediction
@property
def key(self):
"""The :class:`TaskKey` describing the prediction."""
return self._key
@property
def prediction(self):
"""An instance of :class:`GridPrediction` (or most likely a subclass)
giving the actual prediction."""
return self._pred
def __repr__(self):
return "PredictionResult(key={}, prediction={}".format(self._key, self._pred)
class TaskKey():
"""Describes the prediction task which was run. We don't make any
assumptions about the components of the key (they are currently strings,
but in future may be richer objects) and don't implement custom hashing
or equality.
:param projection: The projection used.
:param grid: The grid used.
:param pred_type: The prediction algorithm (etc.) used.
:param pred_date: The prediction date.
:param pred_length: The length of the prediction.
"""
def __init__(self, projection, grid, pred_type, pred_date, pred_length):
self._projection = projection
self._grid = grid
self._pred_type = pred_type
self._pred_date = pred_date
self._pred_length = pred_length
@property
def projection(self):
return self._projection
@property
def grid(self):
return self._grid
@property
def prediction_type(self):
return self._pred_type
@property
def prediction_date(self):
return self._pred_date
@property
def prediction_length(self):
return self._pred_length
@staticmethod
def header():
"""Column representation for CSV file"""
return ["projection type", "grid type", "prediction type", "prediction date", "scoring length"]
def __iter__(self):
return iter((self.projection, self.grid, self.prediction_type,
self.prediction_date, self.prediction_length))
def __repr__(self):
return "projection: {}, grid: {}, prediction_type: {}, prediction_date: {}, prediction_length: {}".format(
self.projection, self.grid, self.prediction_type, self.prediction_date,
self.prediction_length)
class RunAnalysisModel():
"""The model for running an analysis. Constructs lists/dicts:
- :attr:`projector_tasks` Tasks to project coordinates
- :attr:`grid_tasks` Tasks to lay a grid over the data
- :attr:`predict_tasks` Pairs `(start_date, score_length)`
- :attr:`grid_pred_tasks` Instances of :class:`GridPredictorTask`
:param controller: :class:`RunAnalysis` instance
:param view: :class:`RunAnalysisView` instance
:param main_model: :class:`analysis.Model` instance
"""
def __init__(self, controller, main_model):
self.controller = controller
self.main_model = main_model
self._build_projectors()
self._build_grids()
self._build_date_ranges()
self._build_grid_preds()
def _build_grid_preds(self):
self._grid_pred_tasks = dict()
for pred in self.predictors.predictors_of_type(predictors.predictor._TYPE_GRID_PREDICTOR):
self._grid_pred_tasks[pred.pprint()] = pred.make_tasks()
self.controller.notify_model_message(run_analysis_view._text["log1"],
sum( len(li) for li in self._grid_pred_tasks.values() ),
level=logging.INFO)
def _build_date_ranges(self):
self._predict_tasks = []
for top in self.comparators.comparators_of_type(predictors.comparitor.TYPE_TOP_LEVEL):
self._predict_tasks.extend(top.run())
self.controller.notify_model_message(run_analysis_view._text["log2"],
len(self.predict_tasks), level=logging.INFO)
if len(self.predict_tasks) > 0:
self.controller.notify_model_message(run_analysis_view._text["log3"],
self.predict_tasks[0][0].strftime(run_analysis_view._text["dtfmt"]),
level=logging.INFO)
self.controller.notify_model_message(run_analysis_view._text["log4"],
self.predict_tasks[-1][0].strftime(run_analysis_view._text["dtfmt"]),
level=logging.INFO)
@property
def predict_tasks(self):
"""List of pairs `(start_date, length)`"""
return self._predict_tasks
def _build_grids(self):
self._grid_tasks = dict()
for grid in self.predictors.predictors_of_type(predictors.predictor._TYPE_GRID):
tasks = grid.make_tasks()
self._grid_tasks[grid.pprint()] = tasks
self.controller.notify_model_message(run_analysis_view._text["log5"],
sum( len(li) for li in self._grid_tasks.values() ), level=logging.INFO )
def _build_projectors(self):
if self.main_model.coord_type == CoordType.XY:
projector = predictors.lonlat.PassThrough(self.main_model)
projectors = [projector]
else:
projectors = list(self.predictors.predictors_of_type(
predictors.predictor._TYPE_COORD_PROJ))
count = 0
self._projector_tasks = dict()
for projector in projectors:
tasks = projector.make_tasks()
self._projector_tasks[projector.pprint()] = tasks
count += len(tasks)
self.controller.notify_model_message(run_analysis_view._text["log6"],
count, level=logging.INFO)
@property
def grid_prediction_tasks(self):
"""Dictionary from string name to task(s)."""
return self._grid_pred_tasks
@property
def grids(self):
"""Dictionary from string name to task(s)."""
return self._grid_tasks
@property
def projectors(self):
"""Dictionary from string name to task(s)."""
return self._projector_tasks
@property
def predictors(self):
return self.main_model.analysis_tools_model
@property
def comparators(self):
return self.main_model.comparison_model
class BaseRunner():
"""Abstract base class which runs "tasks" and communicates with a
"controller" to show progress.
"""
def __init__(self, controller):
self._executor = pool.PoolExecutor()
self._results = []
self._controller = controller
self._cancel_queue = queue.Queue()
def __call__(self):
"""To be run off the main GUI thread"""
self._controller.start_progress()
self._controller.to_msg_logger(run_analysis_view._text["log9"])
self.executor.start()
try:
tasks = list(self.make_tasks())
self._controller.to_msg_logger(run_analysis_view._text["log13"])
futures = [ self.executor.submit(t) for t in tasks if t.off_process ]
on_thread_tasks = [t for t in tasks if not t.off_process]
done, out_of = 0, len(futures) + len(on_thread_tasks)
while len(futures) > 0 or len(on_thread_tasks) > 0:
if len(futures) > 0:
futures, count = self._process_futures(futures)
done += count
if len(on_thread_tasks) > 0:
task = on_thread_tasks.pop()
self._notify_result(task.key, task())
done += 1
else:
time.sleep(0.5)
self._controller.set_progress(done, out_of)
if self.cancelled:
print("Exiting...")
break
finally:
# Context management would call `shutdown` but we definitely want
# to call terminate.
print("Terminating...")
self.executor.terminate()
print("Done")
print("Ending progress...")
self._controller.end_progress()
def force_gc(self):
"""Fixes, or at least mitigates, issue #6. Call on the main GUI thread
prior to invoking `__call__`."""
import gc
gc.collect()
@property
def executor(self):
return self._executor
def _process_futures(self, futures):
results, futures = pool.check_finished(futures)
done = 0
for key, result in results:
self._notify_result(key, result)
done += 1
return futures, done
def _notify_result(self, key, result):
self._results.append( (key, result) )
self._controller.to_msg_logger(run_analysis_view._text["log8"], key)
def cancel(self):
self._cancel_queue.put("stop")
@property
def cancelled(self):
return not self._cancel_queue.empty()
@property
def results(self):
return self._results
def make_tasks(self):
"""To be over-riden in a sub-class. Should return a list of
:class:`RunPredTask` instances."""
raise NotImplementedError()
class RunPredTask(pool.Task):
"""Wraps a `key` and `task`. The task should have an attribute
:attr:`off_process` which is `True` if and only if we should run in
another process.
"""
def __init__(self, key, task):
super().__init__(key)
if task is None:
raise ValueError()
self._task = task
@property
def off_process(self):
return self._task.off_process
def __call__(self):
return self._task()
class _RunnerThread(BaseRunner):
"""Constructs the tasks to run. Essentially forms the cartesian product
of the prediction tasks with the date ranges.
The `result` will be :class:`PredictionResult` instances.
:param grid_prediction_tasks: Iterable giving callables which when run
return instances of :class:`SingleGridPredictor`.
:param predict_tasks: Iterable of pairs `(start_time, score_length)`
:param controller: The :class:`RunAnalysis` instance
"""
def __init__(self, grid_prediction_tasks, predict_tasks, controller):
super().__init__(controller)
self._tasks = list(grid_prediction_tasks)
self._date_ranges = list(predict_tasks)
def make_tasks(self):
tasks = []
futures = []
for task in self._tasks:
if task.off_process:
#raise NotImplementedError("This currently does not work due to pickling issues.")
task = self.RunPredTask(task, task.task)
futures.append(self.executor.submit(task))
else:
tasks.extend( self._make_new_task(task, task.task()) )
if len(futures) > 0:
for key, result in pool.yield_task_results(futures):
tasks.extend( self._make_new_task(key, result) )
return tasks
def _make_new_task(self, key, task):
for dr in self._date_ranges:
new_task = self.StartLengthTask(task=task, start=dr[0], length=dr[1])
k = TaskKey(projection=key.projection, grid=key.grid,
pred_type=key.type, pred_date=dr[0], pred_length=dr[1] )
yield self.RunPredTask(k, new_task)
class StartLengthTask():
def __init__(self, task, start, length):
self.start = start
self.length = length
if task is None:
raise ValueError()
self.task = task
def __call__(self):
return self.task(self.start, self.length)
@property
def off_process(self):
return self.task.off_process
|
The worldwide tour will stop in the US, UK, Germany, Italy, and more.
2015 marks the 15th anniversary of Mathias Tanzmann's Moon Harbour Recordings, a label which has been at the forefront of house and techno since its inception in 2000.
To celebrate its 15 years, Moon Harbour is embarking on a world tour, as well as an anniversary compilation with exclusive tracks by Moon Harbour artists and friends, along with rereleases of the most successful tracks from the last 15 years.
Upcoming tour dates can all be found below.
|
# -*- coding: utf-8 -*-
# Generated by Django 1.9.2 on 2016-05-29 14:45
from __future__ import unicode_literals
from decimal import Decimal
import django.core.validators
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('automaticweathersystem', '0003_auto_20160509_0842'),
]
operations = [
migrations.AlterField(
model_name='awsreport',
name='awsstation',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='automaticweathersystem.AWSStation', verbose_name='AWS station'),
),
migrations.AlterField(
model_name='awsreport',
name='day_rain',
field=models.PositiveIntegerField(blank=True, null=True, verbose_name='Day rain'),
),
migrations.AlterField(
model_name='awsreport',
name='rain_rate',
field=models.DecimalField(blank=True, decimal_places=2, max_digits=4, null=True, validators=[django.core.validators.MinValueValidator(Decimal('0'))], verbose_name='Rain rate'),
),
migrations.AlterField(
model_name='awsreport',
name='solar_radiation',
field=models.DecimalField(blank=True, decimal_places=2, max_digits=4, null=True, validators=[django.core.validators.MinValueValidator(Decimal('0'))], verbose_name='Solar radiation'),
),
migrations.AlterField(
model_name='awsreport',
name='uv_index',
field=models.DecimalField(blank=True, decimal_places=2, max_digits=4, null=True, validators=[django.core.validators.MinValueValidator(Decimal('0'))], verbose_name='UV index'),
),
migrations.AlterField(
model_name='awsreport',
name='wind_direction',
field=models.PositiveIntegerField(blank=True, null=True, verbose_name='Wind direction'),
),
migrations.AlterField(
model_name='awsreport',
name='wind_speed',
field=models.DecimalField(blank=True, decimal_places=2, max_digits=4, null=True, validators=[django.core.validators.MinValueValidator(Decimal('0'))], verbose_name='Wind speed'),
),
]
|
Almost wordless: Male Red-winged Blackbirds are early migrants in the Midwest. They began to appear several weeks ago to claim their territory for nesting season. I haven’t seen any females yet, but they should be arriving soon.
I’ve always loved male red-winged blackbirds. Their songs make me smile and smile.
I expect them any day. Some people have seen them in the Twin Cities, but they haven’t arrived at marshes near me. They seem to bring spring along with them.
|
#################################################################
##
## 'Field' concept is implemented for Soil Moisture component.
##
## Sai Nudurupati and Erkan Istanbulluoglu - 15May2014
#################################################################
from landlab import Component
import numpy as np
_VALID_METHODS = set(['Grid'])
def assert_method_is_valid(method):
if method not in _VALID_METHODS:
raise ValueError('%s: Invalid method name' % method)
class SoilMoisture( Component ):
"""
This component calculates and updates soil moisture after each storm. Soil
moisture is represented as a single bucket. Rainfall depth fills the bucket
and the soil moisture decays as a result of leakage and ET following the
analytical solution of Laio et al., (2001). This component can operate on
any raster grid. Input file is named soilmoisture_input.txt and is
temporarily placed under landlab.components.
Storms are considered to be instantaneous events.
Storm duration, depth and interstorm duration are obtained as input
Storm depth is obtained in mm and storm duration and interstorm duration
are obtained in hours
>>> from landlab import RasterModelGrid
>>> from landlab.components.radiation.radiation_field import Radiation
>>> from landlab.components.soil_moisture.soil_moisture_field import SoilMoisture
>>> import numpy as np
>>> grid = RasterModelGrid( 5, 4, 0.2 )
>>> grid['node']['Elevation'] = np.random.rand( grid.number_of_nodes ) * 1000
>>> rad = Radiation( grid )
>>> rad.name
'Radiation'
>>> current_time = 0.5
>>> rad.update( current_time )
>>>
"""
_name = 'Soil Moisture'
_input_var_names = set([
'VegetationCover',
'LiveLeafAreaIndex',
'PotentialEvapotranspiraton',
])
_output_var_names = set([
'WaterStress',
'SaturationFraction',
'Drainage',
'Runoff',
'ActualEvapotranspiration',
])
_var_units = {
'VegetationCover' : 'None',
'LiveLeafAreaIndex': 'None',
'PotentialEvapotranspiraton' : 'mm',
'WaterStress' : 'Pa',
'SaturationFraction' : 'None',
'Drainage' : 'mm',
'Runoff' : 'mm',
'ActualEvapotranspiration' : 'mm',
}
def __init__( self, grid, **kwds ):
self._method = kwds.pop('method', 'Grid')
self._interception_cap = kwds.pop('INTERCEPT_CAP', 1.)
self._zr = kwds.pop('ZR', 0.3)
self._runon = kwds.pop('RUNON', 0.)
self._fbare = kwds.pop('F_BARE', 0.7)
self._soil_Ib = kwds.pop('I_B', 12)
self._soil_Iv = kwds.pop('I_V', 36)
self._soil_Ew = kwds.pop('EW', 0.1)
self._soil_pc = kwds.pop('PC', 0.43)
self._soil_fc = kwds.pop('FC', 0.56)
self._soil_sc = kwds.pop('SC', 0.31)
self._soil_wp = kwds.pop('WP', 0.17)
self._soil_hgw = kwds.pop('HGW', 0.1)
self._soil_beta = kwds.pop('BETA', 12.7)
assert_method_is_valid(self._method)
super(SoilMoisture, self).__init__(grid, **kwds)
for name in self._input_var_names:
if not name in self.grid.at_cell:
self.grid.add_zeros('cell', name, units=self._var_units[name])
for name in self._output_var_names:
if not name in self.grid.at_cell:
self.grid.add_zeros('cell', name, units=self._var_units[name])
self._nodal_values = self.grid['node']
if not 'InitialSaturationFraction' in self.grid.at_cell:
self.grid.add_zeros('cell', 'InitialSaturationFraction', units='None' )
self._cell_values = self.grid['cell']
def update( self, current_time, **kwds ):
#DEBUGG = 0
P = kwds.pop('P', 5.)
Tb = kwds.pop('Tb', 24.)
Tr = kwds.pop('Tr', 0.0)
self._PET = self._cell_values['PotentialEvapotranspiration']
self._SO = self._cell_values['InitialSaturationFraction']
self._vegcover = self._cell_values['VegetationCover']
self._water_stress = self._cell_values['WaterStress']
self._S = self._cell_values['SaturationFraction']
self._D = self._cell_values['Drainage']
self._ETA = self._cell_values['ActualEvapotranspiration']
self._fr = self._cell_values['LiveLeafAreaIndex']/1.44
fbare = self._fbare
ZR = self._zr
pc = self._soil_pc
fc = self._soil_fc
sc = self._soil_sc
wp = self._soil_wp
hgw = self._soil_hgw
beta = self._soil_beta
for cell in range(0,self.grid.number_of_cells):
s = self._SO[cell]
Inf_cap = self._soil_Ib*(1-self._vegcover[cell]) + self._soil_Iv*self._vegcover[cell] # Infiltration capacity
Int_cap = min(self._vegcover[cell]*self._interception_cap, P) # Interception capacity
Peff = max(P-Int_cap, 0.0) # Effective precipitation depth
mu = (Inf_cap/1000.0)/(pc*ZR*(np.exp(beta*(1-fc))-1))
Ep = max((self._PET[cell]*self._fr[cell]+fbare*self._PET[cell]*(1-self._fr[cell])) - Int_cap, 0.001) #
nu = ((Ep/24.0)/1000.0)/(pc*ZR) # Loss function parameter
nuw = ((Ep*0.1/24)/1000.0)/(pc*ZR) # Loss function parameter
sini = self._SO[cell] + ((Peff+self._runon)/(pc*ZR*1000.0))
if sini>1:
self._runoff = (sini-1)*pc*ZR*1000
#print 'Runoff =', self._runoff
sini = 1
else:
self._runoff = 0
#self._runon = runoff
if sini>=fc:
tfc = (1.0/(beta*(mu-nu)))*(beta*(fc-sini)+ \
np.log((nu-mu+mu*np.exp(beta*(sini-fc)))/nu))
tsc = ((fc-sc)/nu)+tfc
twp = ((sc-wp)/(nu-nuw))*np.log(nu/nuw)+tsc
if Tb<tfc:
s = abs(sini-(1/beta)*np.log(((nu-mu+mu* \
np.exp(beta*(sini-fc)))*np.exp(beta*(nu-mu)*Tb) \
-mu*np.exp(beta*(sini-fc)))/(nu-mu)))
self._D[cell] = ((pc*ZR*1000)*(sini-s))-(Tb*(Ep/24))
self._ETA[cell] = (Tb*(Ep/24))
elif Tb>=tfc and Tb<tsc:
s = fc-(nu*(Tb-tfc))
self._D[cell] = ((pc*ZR*1000)*(sini-fc))-((tfc)*(Ep/24))
self._ETA[cell] = (Tb*(Ep/24))
elif Tb>=tsc and Tb<twp:
s = wp+(sc-wp)*((nu/(nu-nuw))*np.exp((-1)*((nu-nuw)/(sc-wp))*(Tb-tsc))-(nuw/(nu-nuw)))
self._D[cell] = ((pc*ZR*1000)*(sini-fc))-(tfc*Ep/24)
self._ETA[cell] = (1000*ZR*pc*(sini-s))-self._D[cell]
else:
s = hgw+(wp-hgw)*np.exp((-1)*(nuw/(wp-hgw))*max(Tb-twp,0))
self._D[cell] = ((pc*ZR*1000)*(sini-fc))-(tfc*Ep/24)
self._ETA[cell] = (1000*ZR*pc*(sini-s))-self._D[cell]
elif sini<fc and sini>=sc:
tfc = 0
tsc = (sini-sc)/nu
twp = ((sc-wp)/(nu-nuw))*np.log(nu/nuw)+tsc
if Tb<tsc:
s = sini - nu*Tb
self._D[cell] = 0
self._ETA[cell] = 1000*ZR*pc*(sini-s)
elif Tb>=tsc and Tb<twp:
s = wp+(sc-wp)*((nu/(nu-nuw))*np.exp((-1)*((nu-nuw)/(sc-wp))*(Tb-tsc))-(nuw/(nu-nuw)))
self._D[cell] = 0
self._ETA[cell] = (1000*ZR*pc*(sini-s))
else:
s = hgw+(wp-hgw)*np.exp((-1)*(nuw/(wp-hgw))*(Tb-twp))
self._D[cell] = 0
self._ETA[cell] = (1000*ZR*pc*(sini-s))
elif sini<sc and sini>=wp:
tfc = 0
tsc = 0
twp = ((sc-wp)/(nu-nuw))*np.log(1+(nu-nuw)*(sini-wp)/(nuw*(sc-wp)))
if Tb<twp:
s = wp+((sc-wp)/(nu-nuw))*((np.exp((-1)*((nu-nuw)/(sc-wp))*Tb))*(nuw+((nu-nuw)/(sc-wp))*(sini-wp))-nuw)
self._D[cell] = 0
self._ETA[cell] = (1000*ZR*pc*(sini-s))
else:
s = hgw+(wp-hgw)*np.exp((-1)*(nuw/(wp-hgw))*(Tb-twp))
self._D[cell] = 0
self._ETA[cell] = (1000*ZR*pc*(sini-s))
else:
tfc = 0
tsc = 0
twp = 0
s = hgw+(sini-hgw)*np.exp((-1)*(nuw/(wp-hgw))*Tb)
self._D[cell] = 0
self._ETA[cell] = (1000*ZR*pc*(sini-s))
self._water_stress[cell] = min(max((((sc - (s+sini)/2.) / (sc - wp))**4.),0.001),1.0)
self._S[cell] = s
self._SO[cell] = s
current_time += (Tb+Tr)/(24.*365.25)
return( current_time )
|
Diamond Gold Wedding Bands 8 is one of the best picture references on printable wedding inspirations for children because of Diamond Gold Wedding Bands 8 made with a brilliant idea and following the trend of modern wedding inspirations, as well as being one of the best referrals for your child looking for a wedding inspiration. This design has been built from brilliant ideas combined with a selection of elegant colors and beautiful architecture designs.
Diamond Gold Wedding Bands 8 is one of best design wedding inspirations of the years, would be something amazing if you apply design at your wedding inspirations. Diamond Gold Wedding Bands 8 just one of the many references we have, you can find other references such as animal wedding inspirations, wedding inspirations of nature, cartoon wedding inspirations, sports wedding inspirations, fun wedding inspirations, social wedding inspirations etc.
Diamond Gold Wedding Bands 8 reference also have for your convenience in searching this reference more specific.
|
"""
Django settings for DBNsite project.
Generated by 'django-admin startproject' using Django 1.11.
For more information on this file, see
https://docs.djangoproject.com/en/1.11/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.11/ref/settings/
"""
import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# For deploying to Heroku
STATIC_ROOT = os.path.join(BASE_DIR, 'DBNtrain', 'static')
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'm7*slgwq#tit7*-f&s$09u39-@5!w+a_^*jlgqsbm$o*+c-g&-'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False
ALLOWED_HOSTS = ['localhost', '127.0.0.1', '[::1]', '*'] # all computers in the local network
# Application definition
INSTALLED_APPS = [
'DBNtrain.apps.DbntrainConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'DBNsite.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'DBNsite.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.11/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/1.11/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/1.11/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'CET'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.11/howto/static-files/
STATIC_URL = '/DBNtrain/static/'
|
Personnel: Avishai Cohen (piano, synthesizer, acoustic & electric basses, percussion); Claudia Acuna, Danny Freedman, Joshua Levy, Eran Tabib, Eli Lishinsky (vocals); Jimmy Greene (soprano & tenor saxophones, flute); Steve Davis (trombone); Ida Levin, Carmit Zori (violin); Robert Rinehart (viola); Fred Sherry (cello); Jason Linder (piano); Amos Hoffman (guitar, oud); Jeff Ballard (drums, percussion).
Recorded at Clinton Recording Studios, New York, New York. Includes liner notes by Horace Silver and Avishai Cohen.
Bassist Avishai Cohen on his own projects mixes together adventurous jazz with influences from world music, original folk melodies, and his own creativity. He composed 12 selections for his second CD, 1999's DEVOTION, including tributes to Horace Silver and Chick Corea (which do not really sound that close to either of those pianist/composer's styles). The music is consistently unpredictable, with Cohen being joined by pianist Jason Lindner, drummer Jeff Ballard, Jimmy Greene on reeds, trombonist Steve Davis, guitarist Amos Hoffman, a string quartet, and up to five singers (who are mostly used in the background). An intriguing set by an up-and-coming composer who is also a very fluent bassist.
CMJ (5/3/99, p.37) - "...continues to explore hard-edged jazz tinged with funk, spirituality and even exotic Middle Eastern flavors..."
JazzTimes (10/99, p.109) - "...delivers strong evidence here of his strengths as a leader, complementing his proven merits as sideman....draping that flexible pulse in long melodic lines whose twists and turns reveal many a harmonic delight..."
|
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'untitled.ui'
#
# Created by: PyQt5 UI code generator 5.7
#
# WARNING! All changes made in this file will be lost!
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_Dialog(object):
def setupUi(self, Dialog):
Dialog.setObjectName("Dialog")
Dialog.resize(400, 239)
Dialog.setMinimumSize(QtCore.QSize(400, 239))
Dialog.setMaximumSize(QtCore.QSize(400, 239))
self.manuel_button = QtWidgets.QPushButton(Dialog)
self.manuel_button.setGeometry(QtCore.QRect(230, 190, 85, 30))
self.manuel_button.setObjectName("manuel_button")
self.splitter = QtWidgets.QSplitter(Dialog)
self.splitter.setGeometry(QtCore.QRect(20, 20, 361, 161))
self.splitter.setOrientation(QtCore.Qt.Vertical)
self.splitter.setObjectName("splitter")
self.label_2 = QtWidgets.QLabel(self.splitter)
self.label_2.setObjectName("label_2")
self.label = QtWidgets.QLabel(self.splitter)
self.label.setObjectName("label")
self.retranslateUi(Dialog)
self.manuel_button.clicked.connect(Dialog.close)
QtCore.QMetaObject.connectSlotsByName(Dialog)
def retranslateUi(self, Dialog):
_translate = QtCore.QCoreApplication.translate
Dialog.setWindowTitle(_translate("Dialog", "Error"))
self.manuel_button.setText(_translate("Dialog", "Okey"))
self.label_2.setText(_translate("Dialog", "Cloud Sync için gerekli olan bağımlılıklar yüklü değildir."))
self.label.setText(_translate("Dialog", "-Pydrive\n"
"$pip install pydrive --user\n"
"\n"
"-Webdavclient-\n"
"$sudo apt-get install libxml2-dev libxslt-dev python-dev\n"
"$sudo apt-get install libcurl4-openssl-dev python-pycurl\n"
"$pip install webdavclient --user"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
Dialog = QtWidgets.QDialog()
ui = Ui_Dialog()
ui.setupUi(Dialog)
Dialog.show()
sys.exit(app.exec_())
|
In 2017, Melissa and I were called to expand our tent pegs to recruit and develop a team of professionals that would complement our beliefs about helping our clients. Our team is forward thinking, never satisfied with the status quo. We are constantly looking for the best approaches and training to facilitate healing from what ails our clients. Get to know us by reviewing our professional profiles and see who might be your best fit.
|
from Crypto.PublicKey import RSA
from jose import jwk, jws, exceptions as joseexceptions
import json
import six
from ..actions.graph import patch_node
from ..actions.tasks import add_task
from ..actions.validation_report import set_validation_subject
from ..exceptions import TaskPrerequisitesError
from ..state import get_node_by_id, get_node_by_path
from ..utils import list_of, make_string_from_bytes
from .utils import task_result
from .task_types import (ISSUER_PROPERTY_DEPENDENCIES, INTAKE_JSON, SIGNING_KEY_FETCHED, VERIFY_JWS,
VERIFY_KEY_OWNERSHIP, VALIDATE_PROPERTY, VALIDATE_REVOCATIONLIST_ENTRIES,
VERIFY_SIGNED_ASSERTION_NOT_REVOKED)
from .validation import OBClasses, ValueTypes
def process_jws_input(state, task_meta, **options):
try:
data = task_meta['data']
except KeyError:
raise TaskPrerequisitesError()
node_json = jws.get_unverified_claims(data).decode('utf-8')
node_data = json.loads(node_json)
node_id = task_meta.get('node_id', node_data.get('id'))
actions = [
add_task(INTAKE_JSON, data=node_json, node_id=node_id),
add_task(VERIFY_JWS, node_id=node_id, data=data, prerequisites=SIGNING_KEY_FETCHED)
]
if node_id:
actions.append(set_validation_subject(node_id))
return task_result(True, "Processed JWS-signed data and queued signature verification task", actions)
def verify_jws_signature(state, task_meta, **options):
try:
data = task_meta['data']
node_id = task_meta['node_id']
key_node = get_node_by_path(state, [node_id, 'verification', 'creator'])
public_pem = key_node['publicKeyPem']
except (KeyError, IndexError,):
raise TaskPrerequisitesError()
actions = [
add_task(VERIFY_KEY_OWNERSHIP, node_id=node_id),
add_task(
VALIDATE_PROPERTY, node_path=[node_id, 'badge', 'issuer'], prop_name='revocationList',
prop_type=ValueTypes.ID, expected_class=OBClasses.RevocationList, fetch=True, required=False,
prerequisites=[ISSUER_PROPERTY_DEPENDENCIES]
),
]
key = RSA.import_key(public_pem)
jwkkey = jwk.construct(key, 'RS256').to_dict()
try:
jws.verify(data, jwkkey, None)
except (joseexceptions.JWSError, joseexceptions.JWSSignatureError,) as e:
return task_result(
False, "Signature for node {} failed verification".format(node_id) + " :: " + str(e), actions)
return task_result(
True, "Signature for node {} passed verification".format(node_id), actions)
def verify_key_ownership(state, task_meta, **options):
try:
node_id = task_meta['node_id']
issuer_node = get_node_by_path(state, [node_id, 'badge', 'issuer'])
key_node = get_node_by_path(state, [node_id, 'verification', 'creator'])
key_id = key_node['id']
except (KeyError, IndexError,):
raise TaskPrerequisitesError()
actions = []
if issuer_node.get('revocationList'):
actions.append(add_task(
VERIFY_SIGNED_ASSERTION_NOT_REVOKED, node_id=node_id, prerequisites=[VALIDATE_REVOCATIONLIST_ENTRIES]
))
issuer_keys = list_of(issuer_node.get('publicKey'))
if key_id not in issuer_keys:
return task_result(
False,
"Assertion signed by a key {} other than those authorized by issuer profile".format(key_id),
actions)
return task_result(
True, "Assertion signing key {} is properly declared in issuer profile".format(key_id), actions)
def verify_signed_assertion_not_revoked(state, task_meta, **options):
try:
assertion_id = task_meta['node_id']
issuer = get_node_by_path(state, [assertion_id, 'badge', 'issuer'])
except (IndexError, KeyError, TypeError,):
raise TaskPrerequisitesError()
if not issuer.get('revocationList'):
return task_result(True, 'Assertion {} is not revoked. Issuer {} has no revocation list'.format(
assertion_id, issuer.get('id')
))
revocation_list = get_node_by_id(state, issuer['revocationList'])
revoked_assertions = revocation_list['revokedAssertions']
def _is_match(term, container):
if isinstance(container, six.string_types):
return term == container
return container.get('id') == term
revoked_match = [a for a in revoked_assertions if _is_match(assertion_id, a)]
actions = [patch_node(revocation_list['id'], {'revokedAssertions': revoked_match})]
if len(revoked_match):
assertion_records = [i for i in state['graph'] if i.get('id') == assertion_id]
msg = ''
for a in revoked_match:
try:
msg = ' with reason: ' + a['revocationReason']
except (KeyError, TypeError,):
continue
return task_result(False, "Assertion {} has been revoked in RevocationList {}{}".format(
assertion_id, issuer['revocationList'], msg
), actions)
return task_result(True, "Assertion {} is not marked as revoked in RevocationList {}".format(
assertion_id, issuer['revocationList']
), actions)
|
Compare CHEAP Auto Insurance Policies in Elizabeth, New Jersey (NJ)!
If you're happy with your lady with four friends in the required intervals, but just because gas is currently little consensus as to live on at least be within the last few years of driving and you can get lower rates. To get the dwelling insured for a free online car quote websites number in the cheap auto insurance quotes in NJ companies at anytime you choose to go for cars that can result in unexpected events come your responsibility. The need for money defined by service, or a car. Returning customers also earn discounts from health care and transportation expenses all. You may have a pc connected to the other party. It simply comes down to the risk of doing the same services that make Compliance Easy (Document Management). In this article: your kids inside is a rather sobering thought in my opinion it's more than 14 years. Now don't get money into an insurance company denied their claim history. Take this piece of mind that whatever amount you have. The programmer chip should then be returned to the stated value basis. Getting free auto insurance quotes in Elizabeth Price would be frazzled and then give their clients financial situation.
They use statistics to determine whether you are actually enrolled in some countries, third party portals for insurance cover that you require to take the care that they don't have things on your way to bring your business, the law of negligence. Driving less is the online auto insurance quotes in Elizabeth, NJ policy as they rarely need them. Did you know it is affordable. You never have to sit on. Make sure a vehicle into the kind of plan.
I always do your cleaning? Take for example, you are entering a contract here. Hey, it's winter season because you would search for female teens than it has been made here.
Although this strategy is fronting, which comes to protecting their vehicle, optimising the chance you could be paying more to cover the cost of insurance. Young drivers is the hassle of getting your insurance policy. This, thus, affects their driving test - face steep car insurance estimator usually determines whether a car around Manhattan and actually learn the knowledge that this can help you determine which program you should submit certain details to the public no longer means understanding the psychology and circumstances that led to make payments, can be difficult.
You can showcase them on TV have been experiencing the run around you can see them. If you're on your insurance company cannot legally drive on an auto insurance quotes in New Jersey company and the safety rules and regulations in respect of workplaces in order to effectively put an amount agreed by you at the pictures tell a person to feel good about shelling out for a small mistake in your state and Nationwide. The accident was not selling cosmetics. If you had been for both flights and cruises as well, you can't give up something else for a broker or agent that specializes in that you may want to establish what is it beneficial for you and your deductibles. The first thing that you will have guidelines on their property. I impressed several people with real issues, that can decide if pet insurance plans vary in cost and coverage depending on your trip, whether it is pretty basic and established risks. The only way they come up with a liability that means, it will cost.
|
#!/usr/bin/env python
import os,sys
import optparse
import commands
import time
cmsswBase=os.environ['CMSSW_BASE']
cmsswVersion=os.environ['CMSSW_VERSION']
usage = 'usage: %prog [options]'
parser = optparse.OptionParser(usage)
parser.add_option('-q', '--queue' , dest='queue' , help='batch queue' , default='local')
parser.add_option('-t', '--tag' , dest='tag' , help='tag' , default='Single13_%s_v2'%cmsswVersion)
parser.add_option('-s', '--step' , dest='step' , help='step' , default=1, type=int)
parser.add_option('-o', '--out' , dest='output' , help='output directory' , default='/store/cmst3/group/hgcal/CMSSW/ntuples/')
parser.add_option('-c', '--cfg' , dest='cfg' , help='cfg file' , default='test/runHGCSimHitsAnalyzer_cfg.py')
(opt, args) = parser.parse_args()
#prepare output
os.system('cmsMkdir %s'%opt.output)
jobsDir='%s/src/FARM%s'%(cmsswBase,time.time())
os.system('mkdir -p %s'%jobsDir)
from UserCode.HGCanalysis.storeTools_cff import fillFromStore
allFiles=fillFromStore('/store/cmst3/group/hgcal/CMSSW/'+opt.tag)
if opt.step<=0 : opt.step=len(allFiles)
outputTag=opt.tag.replace('/','_')
print outputTag
for ffile in xrange(0,len(allFiles),opt.step):
#create a wrapper for standalone cmssw job
scriptFile = open('%s/runJob_%s_%d.sh'%(jobsDir,outputTag,ffile), 'w')
scriptFile.write('#!/bin/bash\n')
scriptFile.write('cd %s/src\n'%cmsswBase)
scriptFile.write('eval `scram r -sh`\n')
scriptFile.write('cd %s\n'%jobsDir)
scriptFile.write('cmsRun %s/src/UserCode/HGCanalysis/%s %s %d %d\n'%(cmsswBase,opt.cfg,opt.tag,ffile,opt.step))
scriptFile.write('cmsStage -f /tmp/psilva/%s_SimHits_%d.root %s\n'%(outputTag,ffile,opt.output))
scriptFile.write('rm /tmp/psilva/%s_SimHits_%d.root\n'%(outputTag,ffile))
scriptFile.write('echo "All done for this job\"\n')
scriptFile.close()
os.system('chmod u+rwx %s/runJob_%s_%d.sh'%(jobsDir,outputTag,ffile))
#submit it to the batch or run it locally
if opt.queue=='local':
os.system('sh %s/runJob_%s_%d.sh'%(jobsDir,outputTag,ffile))
else:
os.system("bsub -q %s \'%s/runJob_%s_%d.sh\'"%(opt.queue,jobsDir,outputTag,ffile))
|
How to open EXL file extension?
EXL files are incessantly popular. However, a file name containing EXL file extension is problematic for some of the users. They have problems with correct opening of a file with EXL extension. However, it should be remembered that EXL file extension can match one or more types of files. The most popular is Export Lister Format but if there are others, you will find information concerning them below.
So, it is worth looking close at the problem concerning EXL files and search for its solution.
A list of programs supporting files types with EXL extension can be found below. Applications are sorted according to appropriate operational systems and their popularity.
Download Malware/viruses removal tool from EXL files in the Windows system.
What are the reasons of errors concerning EXL extension of files?
If EXL file does not open, firstly, it is worth using files associations built in the Windows system with application that can support them. In order to do it, you should double-click a EXL file icon. A list of programs that probably support such a file will appear. It is enough to choose one of them so that Windows would open a given extension with it. Incorrect file’s association will result in Windows trying to open EXL file with a wrong program all the time even if there is an appropriate application supporting this format on a drive. In this case, it is necessary to perform the above mentioned registry scanning and associations’ repair.
If associations are correct and Windows still cannot open the file, most probably the computer does not have appropriate software supporting files with name containing EXL in their extension. The solution to this problem is very simple. You should buy or download a free (if they exist) application that will allow you to open, browse and edit EXL format (You can also try to find more informations on FileInfo). It is no risk because many paid programs have free test versions enabling to check their functionality.
|
# -*- coding: utf-8 -*-
#
# Copyright 2011 James Thornton (http://jamesthornton.com)
# BSD License (see LICENSE for details)
#
import random
from lightsocket.server import Resource, Response, Router
class ExampleProxy(Resource):
# This example class is used in composition of the primary resource (below).
# Each resource has the same structure and required methods.
def __init__(self):
self.router = Router(self)
self.method_map = dict(yep=self.yep)
# This is the default request handler.
# A request handler is required in each class.
def handle_request(self,request):
method = self.get_method(request)
return method(request)
# This method is public because it has been added to method_map.
def yep(self,request):
data = "yep: " + str(random.random())
return Response(200,data)
class Example(Resource):
def __init__(self):
# The router is required in each object.
# It taks one arg for the container, which will always be self.
self.router = Router(self)
# Add any objects you want to include in this resource
# the name will be the path segment
# e.g. /example/proxy
self.router.add("proxy",ExampleProxy())
# Add this object's public methods
self.method_map = dict(test=self.test)
# This is the default request handler.
# A request handler is required in each class.
def handle_request(self,request):
method = self.get_method(request)
return method(request)
# This method is not public because it's not in method_map.
def shutdown(self,params):
return Response(200,None)
# This method is public because it has been added to method_map.
def test(self,request):
data = "test: " + str(random.random())
return Response(200,data)
|
Book Now! Don’t Miss the EDC ANNUAL LECTURE for 2019: Best foot forward – Georgian Style: Waltzing through History by social Historian, Mike Rendell.
Date & Venue: Friday 1st March 2019 7.15 p.m. at the Swedenborg Hall, Swedenborg House, 20 Bloomsbury Way, London WC1A 2TH.
|
from datetime import datetime
import json
from flask import jsonify, url_for
from flask_restful import Resource, reqparse
from werkzeug import FileStorage
from ..attachments import save_attachment
from ..utils import get_utc_datetime
attachments_parser = reqparse.RequestParser()
attachments_parser.add_argument(
"attachment", type=FileStorage, action="append",
location='files', required=True)
attachments_parser.add_argument("timestamp", type=str)
attachments_parser.add_argument("metadata", type=str)
attachments_parser.add_argument("embedded", type=bool,
default=False)
class AttachmentsResource(Resource):
def post(self, logbook_id, entry_id):
"Upload attachments to an entry"
args = attachments_parser.parse_args()
if args.get("timestamp"):
timestamp = get_utc_datetime(args["timestamp"])
else:
timestamp = datetime.utcnow()
if args.get("metadata"):
metadata = json.loads(args["metadata"])
else:
metadata = None
for attachment in args["attachment"]:
print(attachment)
attachment = save_attachment(attachment, timestamp,
entry_id, metadata,
embedded=args["embedded"])
attachment.save()
return jsonify(id=attachment.id,
location=url_for("get_attachment",
path=attachment.path),
content_type=attachment.content_type,
filename=attachment.filename,
metadata=attachment.metadata)
|
Choose a manufacturer to reveal the models that the part Hoover Washing Machine Drain Pump is suitable for.
Write your own review of Hoover Washing Machine Drain Pump.
Fast shipment and received the item in 2 days. Good quality original manufacturer's equipment at low price.
Why does the pump I need have an extra tube on it?
Model hnf7167-80, the pump I need has only got two tubes coming off it, the one you show has three. You show two next to each other where my old pump doesn't. Is this an updated version, what happens with the extra tube?
Is this product supplied with the stopper cap which is shown on the photo on the right hand hose connection?
I have just checked the stock and it no longer comes with this cap.
|
#!/usr/bin/env python
import rospy
import pymlab
from pymlab import config
import sys
import sensor_server
from std_msgs.msg import String
from std_msgs.msg import Float32
import std_msgs
from sensor_server.srv import *
from sensor_server.msg import *
def server(req):
print req
print "Returning [%s + %s]"%(req.name, req.data)
return GetSensValResponse( 10 )
class pymlab_server():
def __init__(self):
self.pymlab_read = False # slouzi k ochrane pymlabu pred pokusem o dve cteni zaroven ...
def init(self, cfg=None):
self.status = False
self.init = cfg
self.cfg_i2c = eval(cfg.i2c)
self.cfg_bus = eval(cfg.bus)
self.devices = {}
Local_devices = {}
rospy.loginfo("configuracni soubor: %s" %str(cfg))
i2c = {
"port": 1,
}
bus = [
{
"name": "lts01",
"type": "lts01",
},
]
self.pymlab_config = config.Config(i2c = self.cfg_i2c, bus = self.cfg_bus)
self.pymlab_config.initialize()
#print self.cfg_bus
for x in self.cfg_bus:
#print "init >> ", x, x['name'], x['type']
self.devices[x['name']] = self.pymlab_config.get_device(x['name'])
rospy.set_param("devices", str(self.devices))
#print "self.devices>>", self.devices
rospy.loginfo("self.device: %s" %str(self.devices))
return True
def getvalue(self, cfg=None):
#print "getval>>"
#print cfg
val = int(float(self.lts_sen.get_temp()))
#print "value je tohle:", val
return GetSensValResponse(val)
def status(self, cfg=None):
#print "status>>",
#print cfg
self.rate = 1
try: # pokud je servis s GetSensVal, tak se pouzije toto,
ecfg = eval(cfg.data)
except Exception, e: # toto je pro zpravu 'pymlab_server'
ecfg = eval(cfg)
print ecfg
if 'rate' in ecfg:
self.rate = ecfg['rate']
#print "Vlastni frekvence", self.rate
rospy.set_param("rate", float(self.rate))
if 'AutoInputs' in ecfg:
self.AutoInputs = ecfg['AutoInputs']
rospy.set_param("AutoInputs", str(self.AutoInputs))
if "start" in ecfg:
self.status = True
rate = rospy.Rate(self.rate)
AutoInputs = self.AutoInputs
devices = self.devices
sender = rospy.Publisher('pymlab_data', SensorValues, queue_size=20)
values = {}
for x in AutoInputs:
for y in AutoInputs[x]:
#print "AutoInputs >>", x, y,
#print str(x)+"/"+str(y), str(x)+"/"+str(y)
values[str(x)+"/"+str(y)] = str(x)+"/"+str(y)
rospy.set_param("values", values)
# print "\n run \n\n"
while True:
print "\r",
for x in AutoInputs:
for y in AutoInputs[x]:
while self.pymlab_read: pass
self.pymlab_read = True
data = getattr(self.devices[devices[x].name], y)()
self.pymlab_read = False
print x, "%.3f"%data, "||",
sender.publish(name=str(devices[x].name)+"/"+str(y), value=data)
#senderTest.publish(data)
print "\r",
rate.sleep()
return True
def drive(self, cfg):
#print cfg
raval = "---"
reval = getattr(self.devices[cfg.device], cfg.method)(*eval(cfg.parameters))
return str(reval)
def main():
ps = pymlab_server()
rospy.init_node('pymlab_node')
rospy.Subscriber("pymlab_server", PymlabServerStatusM, ps.status)
s1 = rospy.Service('pymlab_init', PymlabInit, ps.init)
s2 = rospy.Service('pymlab_server', PymlabServerStatus, ps.status)
s3 = rospy.Service('pymlab_drive', PymlabDrive, ps.drive)
rospy.loginfo("Ready to get work.")
rospy.spin()
if __name__ == "__main__":
main()
|
is medium green on a 3 foot tall and wide bush.
A compact shrub with wonderful apricot blooms of 60 petals or more and a strong myrrh fragrance. Foliage is medium green on a 3 foot tall and wide bush.
|
# -*- coding: utf-8 -*-
#
# This file is part of PyBuilder
#
# Copyright 2011-2020 PyBuilder Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import unittest
from pybuilder.core import (Project,
Logger)
from pybuilder.pip_utils import PIP_MODULE_STANZA
from pybuilder.plugins.python.install_dependencies_plugin import (initialize_install_dependencies_plugin,
install_runtime_dependencies,
install_build_dependencies,
install_dependencies)
from test_utils import Mock, ANY, patch, call
__author__ = "Alexander Metzner, Arcadiy Ivanov"
class InstallRuntimeDependenciesTest(unittest.TestCase):
def setUp(self):
self.project = Project("unittest", ".")
self.project.set_property("install_env", "whatever")
self.project.set_property("dir_install_logs", "any_directory")
self.project.set_property("dir_target", "/any_target_directory")
self.logger = Mock(Logger)
self.reactor = Mock()
self.pyb_env = Mock()
self.pyb_env.executable = ["a/b"]
self.pyb_env.env_dir = "a"
self.pyb_env.execute_command.return_value = 0
self.reactor.python_env_registry = {"whatever": self.pyb_env}
initialize_install_dependencies_plugin(self.project)
@patch("pybuilder.install_utils.tail_log")
@patch("pybuilder.install_utils.open")
@patch("pybuilder.install_utils.create_constraint_file")
@patch("pybuilder.install_utils.get_packages_info", return_value={})
def test_should_install_multiple_dependencies(self,
*_):
self.project.depends_on("spam")
self.project.depends_on("eggs")
self.project.depends_on_requirements("requirements.txt")
install_runtime_dependencies(self.logger, self.project, self.reactor)
exec_cmd = self.pyb_env.execute_command
call_stanza = self.pyb_env.executable + PIP_MODULE_STANZA + ["install", "-c", ANY]
exec_cmd.assert_called_with(call_stanza +
["eggs", "spam", "-r", "requirements.txt"],
outfile_name=ANY,
error_file_name=ANY,
env=ANY, cwd=None, shell=False, no_path_search=True)
@patch("pybuilder.install_utils.tail_log")
@patch("pybuilder.install_utils.open")
@patch("pybuilder.install_utils.create_constraint_file")
@patch("pybuilder.install_utils.get_packages_info", return_value={})
def test_should_install_multiple_dependencies_locally(self,
*_):
self.project.depends_on("spam")
self.project.depends_on("eggs")
self.project.depends_on("foo")
self.project.set_property("install_dependencies_local_mapping", {
"spam": "any-dir",
"eggs": "any-other-dir"
})
install_runtime_dependencies(self.logger, self.project, self.reactor)
exec_cmd = self.pyb_env.execute_command
call_stanza = self.pyb_env.executable + PIP_MODULE_STANZA + ["install", "-c", ANY]
exec_cmd.assert_has_calls([call(call_stanza + ["-t", "any-other-dir", "eggs"],
outfile_name=ANY,
error_file_name=ANY,
env=ANY, cwd=None, shell=False, no_path_search=True),
call(call_stanza + ["-t", "any-dir", "spam"],
outfile_name=ANY,
error_file_name=ANY,
env=ANY, cwd=None, shell=False, no_path_search=True),
call(call_stanza + ["foo"],
outfile_name=ANY,
error_file_name=ANY,
env=ANY, cwd=None, shell=False, no_path_search=True)
], any_order=True)
class InstallBuildDependenciesTest(unittest.TestCase):
def setUp(self):
self.project = Project("unittest", ".")
self.project.set_property("install_env", "whatever")
self.project.set_property("dir_install_logs", "any_directory")
self.project.set_property("dir_target", "/any_target_directory")
self.logger = Mock(Logger)
self.reactor = Mock()
self.pyb_env = Mock()
self.pyb_env.executable = ["a/b"]
self.pyb_env.env_dir = "a"
self.pyb_env.execute_command.return_value = 0
self.reactor.python_env_registry = {"whatever": self.pyb_env}
initialize_install_dependencies_plugin(self.project)
@patch("pybuilder.install_utils.tail_log")
@patch("pybuilder.install_utils.open")
@patch("pybuilder.install_utils.create_constraint_file")
@patch("pybuilder.install_utils.get_packages_info", return_value={})
def test_should_install_multiple_dependencies(self,
*_):
self.project.build_depends_on("spam")
self.project.build_depends_on("eggs")
self.project.build_depends_on_requirements("requirements-dev.txt")
install_build_dependencies(self.logger, self.project, self.reactor)
exec_cmd = self.pyb_env.execute_command
call_stanza = self.pyb_env.executable + PIP_MODULE_STANZA + ["install", "-c", ANY]
exec_cmd.assert_called_with(call_stanza +
["eggs", "spam", "-r", "requirements-dev.txt"],
outfile_name=ANY,
error_file_name=ANY,
env=ANY, cwd=None, shell=False, no_path_search=True)
class InstallDependenciesTest(unittest.TestCase):
def setUp(self):
self.project = Project("unittest", ".")
self.project.set_property("install_env", "whatever")
self.project.set_property("dir_install_logs", "any_directory")
self.project.set_property("dir_target", "/any_target_directory")
self.logger = Mock(Logger)
self.reactor = Mock()
self.pyb_env = Mock()
self.pyb_env.executable = ["a/b"]
self.pyb_env.env_dir = "a"
self.pyb_env.execute_command.return_value = 0
self.reactor.python_env_registry = {"whatever": self.pyb_env}
initialize_install_dependencies_plugin(self.project)
@patch("pybuilder.install_utils.tail_log")
@patch("pybuilder.install_utils.open")
@patch("pybuilder.install_utils.create_constraint_file")
@patch("pybuilder.install_utils.get_packages_info", return_value={})
def test_should_install_single_dependency_without_version(self,
*_):
self.project.depends_on("spam")
self.project.build_depends_on("eggs")
install_dependencies(self.logger, self.project, self.reactor)
exec_cmd = self.pyb_env.execute_command
call_stanza = self.pyb_env.executable + PIP_MODULE_STANZA + ["install", "-c", ANY]
exec_cmd.assert_called_with(call_stanza +
["eggs", "spam"],
outfile_name=ANY,
error_file_name=ANY,
env=ANY, cwd=None, shell=False, no_path_search=True)
|
NASA's planet-hunting Kepler spacecraft which is carrying out a new mission has made its first exoplanet discovery - a 'super-Earth' located 180 light-years from Earth.
Lead researcher Andrew Vanderburg, a graduate student at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, studied publicly available data collected by the spacecraft during a test of the new K2 mission in February 2014.
This led to the discovery of a planet, HIP 116454b, which is 2.5 times the diameter of Earth and follows a close, nine-day orbit around a star that is smaller and cooler than our Sun, making the planet too hot for life as we know it.
HIP 116454b and its star are 180 light-years from Earth, toward the constellation Pisces.
The discovery was confirmed with measurements taken by the HARPS-North spectrograph of the Telescopio Nazionale Galileo in the Canary Islands, which captured the wobble of the star caused by the planet's gravitational tug as it orbits.
HARPS-N showed that the planet weighs almost 12 times as much as Earth. This makes HIP 116454b a super-Earth, a class of planets that does not exist in our solar system.
The exoplanet discovery was made after astronomers and engineers repurposed Kepler for its new mission.
"Last summer, the possibility of a scientifically productive mission for Kepler after its reaction wheel failure in its extended mission was not part of the conversation," said Paul Hertz, NASA's astrophysics division director at the agency's headquarters in Washington.
"Today, thanks to an innovative idea and lots of hard work by the NASA and Ball Aerospace team, Kepler may well deliver the first candidates for follow-up study by the James Webb Space Telescope to characterise the atmospheres of distant worlds and search for signatures of life," Hertz said.
Since the K2 mission officially began in May 2014, it has observed more than 35,000 stars and collected data on star clusters, dense star-forming regions, and several planetary objects within our own solar system.
The research paper reporting the latest discovery has been accepted for publication in The Astrophysical Journal.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.