text (stringlengths 0–11M) | link (stringclasses, 1 value) | source (stringclasses, 16 values)
---|---|---|
Price search results for Suunto GPS Track POD SS018712000
The new Suunto GPS Track POD stores tracks, speed, distance and GPS altitude data while you exercise and explore new territories. You can also connect it with selected Suunto heart rate monitors for real-time distance and highly responsive speed readings for your outdoor activities using Suunto FusedSpeed. Real-time speed and distance for selected Suunto sports watches: Quest, M5, t3d, t4d, t6d.
| tomekkorbak/pile-curse-small | Pile-CC |
Nerve injuries associated with supracondylar fracture of the humerus in children.
Vascular complications associated with supracondylar fracture of humerus are well recognised. Less well recognised are neurological injuries associated with this fracture. A prospective study was conducted to explore the role of open reduction and internal fixation of this fracture. The article presents the incidence of nerve lesions in 46 cases of supracondylar fracture of humerus; review of pertinent literature is also included.
| tomekkorbak/pile-curse-small | PubMed Abstracts |
eBook Review: 'A Cut-Throat Business' by K.A. Laity
By Lucy Felthouse, BLOGCRITICS.ORG
Published 10:00 pm, Monday, October 21, 2013
I really enjoyed the previous two books in this series, Chastity Flame and Lush Situation, so was excited to pick up this third title, A Cut-Throat Business. At their heart is Chastity Flame, a secret operative, working for a company in London. This is where the majority of the action takes place in this novel.
It tells the story of a serial killer stalking London's streets, luring women into darkened alleys and slitting their throats. He's a "somebody" in society, with a family that will protect him. Chastity is tasked with finding out who he is, and how to bring him down. A job she's more than happy to take on.
The book follows the action from London across the globe, as Chastity tracks down the evil killer, while at the same time trying to sniff out an ex-colleague that has it in for her, too. It's a dangerous time for Chastity, but fortunately she has her skills, talents and feminine wiles to keep her safe, and her delectable boyfriend, Damien, to distract her from the more unpleasant parts of life.
Overall, an excellent addition to the series. I very much enjoy K.A. Laity's writing style and humour, and although the book doesn't have the hot sex scenes of the previous two, there's still plenty of sizzling tension. An action-packed thriller that I'd recommend to anyone looking for an exciting read that's a bit different.
| tomekkorbak/pile-curse-small | Pile-CC |
[MR and ultrasound study of Achilles tendon injury].
In 24 patients with lesions of the Achilles tendon, MRI and ultrasound were performed to compare the results with clinical examination. MRI had an accuracy of 100%, ultrasound of 90%. In particular, partial ruptures with chronic degenerative lesions of the tendon can be diagnosed more accurately by MRI, which enables easier indication for treatment. By MRI, diagnostic differentiation of our patients into four groups was possible.
| tomekkorbak/pile-curse-small | PubMed Abstracts |
From ‘Smallville’ to a Sex Cult: The Fall of Actress Allison Mack
The founder of the alleged cult NXIVM forced member Allison Mack to store naked photos of women branded with his initials to blackmail them into becoming sex slaves, a former member testified in court Monday.
Lauren Salzman said in Brooklyn federal court that Mack, the Smallville actress, was forced to collect the photographs in a Dropbox folder at the request of her master, NXIVM founder Keith Raniere. Salzman began testifying on Friday and continued this week.
“The photo had to be fully frontal naked,” Salzman said. “Our brands had to show, and we had to look uniform and happy.”
Raniere, 58, is accused of running a secret sex-cult pyramid scheme that branded, assaulted, and enslaved women while publicly promoting NXIVM as a self-improvement group. He is charged with sex trafficking, racketeering conspiracy, child exploitation, and child pornography.
Raniere has pleaded not guilty to all the charges.
Salzman, 42, is among the four NXIVM members—including Mack and Salzman’s own mother—who were arrested in 2018 with Raniere. After pleading guilty to racketeering charges in March, she is the first co-defendant to testify against Raniere, who is standing trial alone.
“He was my most important person. I respected him. I looked up to him,” she said. “He was my master.”
Salzman walked jurors through the world of the ultra-secretive club DOS, the “secret society” where she said “slaves” would be forced to brand themselves with Raniere’s initials near their crotch with a cautery pen—without anesthesia—and have sex with him.
If any slave displeased Raniere, he would kick or whip them before threatening to release their collateral photos saved under a Dropbox file named “brands,” according to Salzman, who also testified on Monday that Raniere had plans to jail women “in a dungeon” as a form of punishment.
“He said [the jail cell] was for the people most committed to growth. They would get locked in a cage,” she said on Monday.
Salzman testified she met Raniere, whom NXIVM members referred to as “the Vanguard,” through her mother in 1995, and began her sexual relationship with him six years later.
“He was my mentor. My teacher,” Salzman said of their decade-long relationship, which ended before the formation of DOS. “We had a romantic relationship. A physical and sexual relationship.”
Throughout their relationship, Raniere allegedly forbade Salzman from seeing other people while simultaneously forcing her to partake in threesomes with other alleged slaves, including Mack.
“Initially, I participated because I was curious,” she reportedly said.
Salzman added that despite sharing Raniere with Mack, they were close friends, and she even wrote a letter of support for Mack and fellow DOS member Nicki Clyne’s marriage. Mack is also expected to testify in court as part of her plea agreement.
In 2015, Salzman said, Raniere approached her about joining an elite “master-slave program” that would help her “overcome her fears” by submitting herself to his orders, which she accepted immediately. Salzman said she was among seven women who were deemed “first-line slaves” to Raniere at the top of the DOS hierarchy, and she was responsible for communicating only with other members of her rank and her six slaves.
“I was a slave with Keith as my master,” Salzman said, adding that she was forced to keep the role of Raniere, whom they had to call “grandmaster,” a secret. “And the society demanded a lifetime of obedience to your master.”
The group would meet three times a week in a “sorority house,” and whenever Raniere attended, the women had to strip naked, get on the floor and look up at him while he delivered lectures on matters ranging from his “vision” for DOS, to writing a book, to recruitment, to his intent to create a “dungeon” where slaves would “totally surrender” themselves.
“It didn't sound like anything I ever wanted,” Salzman said on Monday, detailing how Raniere would force masters to paddle their slaves if they disobeyed. “These things started to become scary for me. I was concerned about failing.”
In opening statements earlier this month, Raniere’s defense attorney, Marc Agnifilo, called DOS a “women’s organization” Raniere felt women needed. Agnifilo had also argued throughout the trial that the women in DOS engaged in consensual sexual relationships with Raniere.
Read more at The Daily Beast.
| tomekkorbak/pile-curse-small | OpenWebText2 |
The "all-in-one" appendectomy: quick, scarless, and less costly.
A technique for laparoscopic appendectomy (LAP APPY) that involves brief surgeon and operating room times, results in no appreciable scar, and requires few disposable supplies would be desirable. During 2009, 508 children underwent LAP APPY at our institution including 398 (78%) for acute, non-perforated appendicitis. Our "all-in-one" operative procedure involves use of a single instrument through a side-arm viewing operative laparoscope which is inserted through a single, trans-umbilical port. Successful procedure completion rates and operative times ("cut-to-close") were determined. Our data for surgeon-directed, disposable supply costs per procedure were collated by Child Health Corporation of America and compared with 2009 LAP APPY data (n = 5692) from 17 other children's hospitals in the United States. We successfully completed 359 (90.2%) LAP APPY procedures using the all-in-one technique resulting in no appreciable scar. Additional ports were used in 9.8% and there were no conversions to open procedures. Median operative time for the all-in-one technique was 24 minutes (5-66 min). Our median surgeon-directed, disposable supply cost was the lowest in the study group and significantly less than the other 17 children's hospitals ($166 vs $748, P < .001). Median variation of supply costs among surgeons within each institution was $448 ($3-$870). Aggregate savings of nearly $1.3 million are predicted if all study surgeons were to reduce their disposable costs per procedure to the 25th percentile ($551). We conclude that the all-in-one laparoscopic appendectomy technique is quick, scarless, and less costly than conventional multi-port techniques. Wider application of the all-in-one technique seems indicated.
| tomekkorbak/pile-curse-small | PubMed Abstracts |
Microsoft: Give me back MY Facebook!
Again this morning. I took a photo, cropped it, shared it with Facebook, added a one-paragraph comment and clicked “post”. Then the message. Some shit like, “your photo can’t be posted at this time”, and boom, my photo and comment are gone. Tried a second time, opening the Facebook app rather than sharing from Photos. But this time copied my comment to the clipboard first (fool me once). Same error message. Copied the photo and my comment to an email, sent it to myself, pulled out and fired up the Yoga, opened the email, saved the image, logged in to Facebook and entered the same friggin post that I tried to do almost 20 minutes earlier. WTF!!! And this is the third time this has happened.
Now, I realize that I am running Windows Phone 8.1 Developer Preview, and there may be some bugs. But we just got an update last week. And this is not a bug. It’s a total fail. The entire new Facebook interface sucks. Two hours since posting and it’s still not appearing in my People timeline. A couple posts ago, after seeing the post in the Facebook app and on the Me Tile, the post disappeared entirely. I had to recreate it on my desktop at the office when I got in. I can see this post on my Me Tile (even that took 5-10 minutes to propagate), but nowhere else. I am getting comments, so I ASS U ME that others can see it. But this never, ever happened before 8.1. And what’s this shit with opening the Facebook interface every single time I click on a post? I have to wait several seconds each time. And of course it opens to the last comment, so I have to scroll up to the top before I start reading.
Since loading 8.1, I would estimate that I am using Facebook about 50% less than with 8.0. Not that I am a big FB user (maybe 4-5 posts a month and less than 40 Friends). The old WP8.0 FB interface was damn near perfect. It was a pleasure to read posts and comments. And adding new posts was a breeze. Now, well, it sucks! No personal experience here, but I am guessing it works more like iOS or Android. How I feel for what you have all been going through all these years.
Just speculating here, but I would venture that the Facebook team responsible for aggregating our personal data and then selling it, or compiling it for ad revenue, outnumbers the UX team by about 10-1. Data Collection Team; bright cheery office space with sun beaming through the tinted glass and plenty of bottled natural spring water. UX Team; basement cubicles (recycled from the last bi-yearly refresh upstairs), stale, dank air, and a 5 gallon jug filled with purified water.
Some have said that it’s great that Microsoft has separated the FB interface from the OS. You know, so it’s easier to update. Sounds like “spin” to me. And a big load of bullshit. I am guessing this is how it went down about six months ago:
Unimportant Senior Manager: Hey Zuck, Microsoft sent us one of those new Nokia WP phones to play with. The social stuff is really slick. Everything is fast and clean. If I didn’t see the FB post I uploaded a few minutes ago, I wouldn’t even know I was looking at Facebook. A really nice user experience.
Zuck: Really! Let me see that. (Thumbs through the People hub for a minute)
Zuck: Cindy, get that Joe guy from Microsoft on the phone for me. If you can’t get him, get one of the bald guys.
Joe: Hey there Mark, Joe Belfiore here. What can I do for you.
Zuck: Yeah ok. I was just looking at one of your Windows Phones and ….
Joe: …..jumps in. Yeah, they are some really nice phones with lots of great features. Have you tried the camera yet?
Zuck: Right. Here’s the thing. I don’t like the way you’re stealing Facebook on your phone. I’m going to give you 90 days…..no, make that 60 days, to change your UX so that MY Facebook is integrated everywhere. I want people viewing posts via FB. I want them commenting via FB. I want them using MY Facebook for everything! If you don’t get this done, I AM GOING TO BLOCK FACEBOOK FROM ALL YOUR SHITTY PHONES! Got it.
7 COMMENTS
Couldn’t agree more. They castrated the hubs. What made windows phone different was simply dumped. My theory is they were infiltrated by someone from Google/Apple and they are slowly destroying the framework. It started when the zune desktop app stopped syncing with the new phones.
Thank you! Been saying this since 8.1 dropped. I do not like it anymore. And considering all the “moves” fb is, and has been, trying to make, I can’t help but think it’s because they want their “brand” all over the place. Where there is brand pushing, there is money to be made.
But what I don’t get is the people hub in W8.1 works basically the same way WP used to… Why change things on the phone??
I totally agree with you and would like to see more people standing up and calling out Microsoft instead of clearly regurgitating what the PR and marketing departments are spewing out.
I am not an avid user of Facebook on the desktop. However, on Windows Phone 7 and 8 I would post almost daily. Back in the Windows Phone 7 days, I can remember the Windows Phone Challenge. I loved posting via Windows Phone.
Fast forward to now. My last post on Facebook was about two weeks ago and that was from my PC. Why? The exact reasons you stated above. Posting pictures, videos, and updates are a pain in the butt, if they even work at all. And forget about changing your profile pic, gone.
I’ve also noticed I don’t stay up to date with friends and families pages as much either. Mostly because of the hassle of reading ninety percent of a post in the hub and then having to wait for the official app to crawl open, only to find out the hub was only missing one word. Come on!
This was the feature I would use to lure people from IOS and Android with. Now, well, we have Cortana, right. She’s cool.
The new lack of FB integration is one of the things that bugs me about 8.1. They are trying to be like iOS or Android. But what they don’t realize is that a lot of people are using Windows Phone because they weren’t like the other two main mobile operating systems.
| tomekkorbak/pile-curse-small | Pile-CC |
Introduction
============
One of the main organizational cues in plant development is the signaling molecule auxin. A remarkable facet of auxin\'s effect on plant development is the broad range of processes regulated by this simple compound ([@b56]). In meristems, auxin is a central modulator of growth and cellular differentiation ([@b4]). Auxin concentration gradients are maintained by active polar transport and have been proposed to give positional information that stages development and maturation in these growth centers ([@b18]; [@b3]; [@b5]; [@b19]; [@b15]; [@b16]).
Although auxin itself cannot be directly visualized, meristematic auxin gradients have been inferred from mass spectrometric measurement of tissue sections (in the *Pinus sylvestris* cambial meristem) and isolated cell types (of the *Arabidopsis thaliana* root apical meristem (RAM)) ([@b53]; [@b42]). A recently developed biosensor, DII-Venus (consisting of a fluorescent protein fusion of a labile component of the auxin perception and signaling machinery), has provided a new level of sensitivity in determining the distribution of auxin signaling activity in meristems ([@b57]). Readout of this sensor in the RAM suggests that there are cell type-specific aspects to auxin perception. In addition, it shows graded levels of auxin signaling intensity in the meristematic stele that are in line with a proximo-distal gradient of auxin itself ([Supplementary Figure S1A](#S1){ref-type="supplementary-material"}; [@b9]). However, we lack an understanding of how cells interpret an auxin gradient in their broad transcriptional output ([@b40]).
Localized auxin signaling output can be observed by visualizing the transcriptional response to auxin. *DR5*, a synthetic auxin-responsive promoter, driving a reporter gene (e.g., green fluorescent protein (GFP)) is often used as a proxy for the transcriptional auxin response ([Supplementary Figure S1B](#S1){ref-type="supplementary-material"}; [@b54]; [@b25]). *DR5* displays high expression in the tip of the RAM (specifically in the columella, QC and developing xylem), but its expression does not effectively match cell type-specific auxin measurements ([@b42]) or fully complement DII-Venus levels in the RAM ([@b9]). Furthermore, the promoters of endogenous auxin-responsive genes, for example, *SMALL AUXIN UP RNA (SAUR), AUXIN/INDOLE-3-ACETIC ACID INDUCED* (*Aux/IAA*), *BREVIS RADIX* (*BRX*) or *PLETHORA* (*PLT*) genes, have been used to report the spatial influence of an auxin gradient on gene expression ([@b32]; [@b19]; [@b20]; [@b46]). However, these constructs give differing views of auxin-response distribution, with some showing an archetypal expression pattern similar to *DR5* and others with a more graded expression in the proximal meristem. Hence, no single reporter provides a clear picture of how auxin gradients affect transcription throughout the root. Instead of singular auxin-induced reporters, a genome-wide assessment of auxin-responsive gene expression in relation to spatial expression could be used to visualize the hypothesized meristematic auxin-response gradient *in silico*. This global view can be used to assess the gradient\'s influence on gene expression, both in the sense of its physical range and in the quantity of genes regulated. What is needed for such an analysis is a sensitive readout of the transcriptomic response to auxin in a particular tissue (e.g., the root) that can be superimposed on a spatial expression map of this tissue.
Another important issue in the study of auxin in plant development is how this simple molecule can elicit so many diverse responses in different cell types ([@b28]). Auxin distribution is dynamic and actively changes in response to environmental and developmental cues ([@b21]). Cells will encounter varying auxin levels throughout their lifespan and their response to auxin is determined by cellular context (i.e., cell identity and spatial domain). For instance, during the formation of lateral root primordia, an increase in auxin levels leads to cell proliferation specifically in distal xylem-pole (xp) pericycle cells ([@b13]). In contrast, in the root epidermis, higher auxin levels do not induce cell division but rather inhibit cell expansion to mediate bending of the root tip during gravitropic growth ([@b50]). Differences in the tissue-specific expression levels of the modular auxin perception and signal transduction machinery have been suggested to predispose cells to a particular response ([@b58]; [@b28]; [@b44]; [@b57]; [@b24]) ([Supplementary Figure S1C](#S1){ref-type="supplementary-material"}; [Supplementary Table S1](#S1){ref-type="supplementary-material"}) and it is assumed that differences in the transcriptional response to auxin lie at the basis for many of the different observed physical responses. However, the importance of cellular context on the genome-wide transcriptional auxin response is undocumented. An assessment of the response to auxin at cellular resolution is needed to begin to sort out the influence of spatial context on the transcriptional auxin response.
The Arabidopsis seedling root apex is a highly amenable system for the examination of the role of auxin at a cellular resolution ([Figure 1](#f1){ref-type="fig"}). The anatomical organization permits analysis of cell identity in the radial axis and developmental maturity in the longitudinal axis ([@b43]). Moreover, transcriptomic analyses of the individual cell types that make up this organ have provided a gene expression map of cell identities and high-resolution transcriptional data sets along the longitudinal developmental axis of the root tip ([@b6]; [@b35]; [@b30]; [@b31]; [@b8]).
Here, we conduct a genome-wide, cell type-specific analysis of auxin-induced transcriptional changes in four distinct cell populations of the Arabidopsis root. This data set is used to (1) assess the relevance of cellular context on the transcriptional response to auxin and (2) test whether this comprehensive readout of auxin responses can delineate a genome-wide auxin-response gradient. The study uncovers both broad and tissue-specific auxin-responsive transcripts, and thus provides a resource to further examine the role of auxin in a cellular context and resolve how this important hormone guides plant development and growth. This sensitive readout of auxin responses together with the previous analysis of spatial gene expression in the root was used to generate, for the first time, a view of an inclusive auxin-response gradient in the RAM.
Results
=======
Auxin-regulated gene-expression analysis in distinct cell types
---------------------------------------------------------------
To analyze the effect of auxin on separate spatial domains, transcriptional changes in response to auxin treatment were assayed by means of fluorescence activated cell sorting and microarray analysis of four distinct tissue-specific GFP-marker lines in Arabidopsis seedling roots. The assayed samples covered internal and external as well as proximal and distal cell populations; including marker lines for the stele, xp pericycle, epidermis and columella ([Figure 2A](#f2){ref-type="fig"}). Roots were immersed in 5 μM indole-3-acetic acid (IAA) and treated for a total of 3 h (see Materials and methods). Expression of the markers used was stable within the treatment period ([Supplementary Figure S2A](#S1){ref-type="supplementary-material"}). Analysis of the DII-Venus reporter under these treatment conditions showed that all tissues in the root responded to treatment within 30 min ([Supplementary Figure S1A](#S1){ref-type="supplementary-material"}). For comparison, transcriptional responses to auxin were also assayed in intact (undigested) roots treated for 3 h.
To establish that the tissue-specific expression profiles gathered here were consistent with the previously published root expression data ([@b6]; [@b35]; [@b30]; [@b31]; [@b8]), we generated a list of cell type-specifically enriched (CTSE) genes using the public data and visualized their expression in our data set. This CTSE list was based on the expression profile template matching in a select data set of 13 non-overlapping, cell type-specific expression profiles of sorted GFP-marker lines ([Supplementary Figure S2B](#S1){ref-type="supplementary-material"}; [Supplementary Table S2](#S1){ref-type="supplementary-material"}). This procedure yielded a total of 3416 genes whose expression is enriched in one specific cell type (maturing xylem, developing xylem, xp pericycle, phloem-pole pericycle, phloem, phloem companion cell, quiescent center, endodermis, cortex, trichoblast, atrichoblast, lateral root cap or columella) or whose expression was enriched in two related cell types (xylem (developing and maturing xylem), pericycle (xp and phloem-pole pericycle), phloem (phloem and phloem companion cell), ground tissue (endodermis and cortex), epidermis (trichoblast and atrichoblast) or root cap (lateral root cap and columella)). The relative expression of the CTSE genes in the tissue-specific data generated in this study is differentially enriched in a manner that fits with the domains covered by the different markers used here ([Supplementary Figure S2B](#S1){ref-type="supplementary-material"}). These results indicate a successful isolation of the transcriptomes of distinct cell types and show that the enrichment in specific tissues is consistent across the data sets.
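As an illustration of the template-matching idea, the following minimal Python sketch scores each gene's profile across the 13 sorted cell-type data sets against an idealized on/off template. The matrix layout, the Pearson-correlation score and the 0.8 cutoff are illustrative assumptions, not the exact parameters used to build the CTSE list.

```python
import numpy as np
import pandas as pd

def ctse_genes(expr: pd.DataFrame, target_cols, r_cutoff=0.8):
    """Return genes whose profile across cell types matches an idealized
    'on in target cell type(s), off elsewhere' template.

    expr        : genes x cell-type expression matrix (e.g., 13 sorted profiles)
    target_cols : column name(s) in which enrichment is expected
    r_cutoff    : Pearson correlation threshold against the binary template (assumed)
    """
    template = pd.Series(0.0, index=expr.columns)
    template[list(np.atleast_1d(target_cols))] = 1.0

    # Pearson correlation of each standardized gene profile with the template.
    z = expr.sub(expr.mean(axis=1), axis=0).div(expr.std(axis=1), axis=0)
    tz = (template - template.mean()) / template.std()
    r = (z * tz).sum(axis=1) / (len(expr.columns) - 1)
    return r[r >= r_cutoff].index.tolist()

# Example: genes enriched in both xylem profiles (column names are hypothetical)
# xylem_ctse = ctse_genes(expr, ["developing_xylem", "maturing_xylem"])
```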
For most auxin-responsive genes in our data set, transcript levels were affected in several cell types but often showed a relatively greater response to auxin in one or more of the tissues. Two separate criteria were used to define these different levels of response (see Materials and methods for a detailed description of the statistical analysis). First, a two-way analysis of variance (ANOVA) with the factors cell type and treatment was used to categorize auxin-regulated genes and the relation of responses between the different cell types. The ANOVA (*P*\<0.01) yielded 7640 genes differentially expressed between the individual tissue samples; 5097 genes responded significantly to treatment across all tissues and formed a broad register of auxin-responsive genes in the root. In all, 869 genes showed a significant interaction between treatment and cell type, representing genes with the most dramatic spatial bias in regulation ([Figure 2B](#f2){ref-type="fig"}; [Supplementary Table S2](#S1){ref-type="supplementary-material"}). Second, Student\'s *t*-tests were conducted on the individual tissue samples to classify the response within specific cell types (*P*\<0.01; fold change\>1.5). The number of significantly regulated genes in the stele, xp pericycle, epidermis and columella was 2059, 845, 1321 and 842, respectively (3771 unique genes; [Figure 2C](#f2){ref-type="fig"}; [Supplementary Table S2](#S1){ref-type="supplementary-material"}). In all, 1923 genes were found to be differentially regulated by auxin treatment in intact roots (*t*-test *P*\<0.01, fold change\>1.5; [Supplementary Table S2](#S1){ref-type="supplementary-material"}). To generate a stringent list of auxin-responsive genes for the analysis of cell type-specific expression, we extracted the genes that passed the ANOVA for the treatment factor or interaction and also passed at least one of the four cell type-specific *t*-tests ([Figure 2D](#f2){ref-type="fig"}), resulting in a total of 2846 auxin-responsive genes.
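The statistical workflow described above can be sketched as follows. The P<0.01 and 1.5-fold thresholds come from the text; the long-format data layout, column names and the use of statsmodels/scipy are assumptions made for illustration rather than the study's actual pipeline.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

def classify_gene(df_gene):
    """Two-way ANOVA (cell type x treatment) plus per-tissue t-tests for one gene.

    df_gene: long-format replicate table with columns
             'log2_expr', 'cell_type', 'treatment' ('mock' or 'IAA').
    """
    model = ols("log2_expr ~ C(cell_type) * C(treatment)", data=df_gene).fit()
    anova = sm.stats.anova_lm(model, typ=2)
    p_treatment = anova.loc["C(treatment)", "PR(>F)"]
    p_interaction = anova.loc["C(cell_type):C(treatment)", "PR(>F)"]

    per_tissue = {}
    for tissue, sub in df_gene.groupby("cell_type"):
        iaa = sub.loc[sub.treatment == "IAA", "log2_expr"]
        mock = sub.loc[sub.treatment == "mock", "log2_expr"]
        p = stats.ttest_ind(iaa, mock).pvalue
        fc = 2 ** (iaa.mean() - mock.mean())          # linear fold change
        per_tissue[tissue] = (p, fc)
    return p_treatment, p_interaction, per_tissue

def is_stringent_responsive(p_treat, p_inter, per_tissue, alpha=0.01, fc=1.5):
    """Stringent list: ANOVA treatment or interaction term significant AND
    at least one tissue with t-test p < alpha and fold change > 1.5 (either direction)."""
    tissue_hit = any(p < alpha and max(f, 1 / f) > fc for p, f in per_tissue.values())
    return (p_treat < alpha or p_inter < alpha) and tissue_hit
```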
Measured auxin responses were corroborated at two levels. First, we observed the significant regulation of known auxin-responsive genes in the cell type-specific data set. This includes significant regulation of 22 members of the *Aux/IAA* family of auxin co-receptors ([@b10]), 14 *GH3* auxin conjugases ([@b23]), 18 *SAURs* ([@b22]) and 7 *LATERAL ORGAN BOUNDARY DOMAIN CONTAINING PROTEIN* (*LBD*) transcription factors ([@b49]) ([Supplementary Figure S2C--F](#S1){ref-type="supplementary-material"}; [Supplementary Table S2](#S1){ref-type="supplementary-material"}). Several of the responsive *LBD* genes that are known to be involved in lateral root initiation ([@b37]) displayed dramatic upregulation specifically in the xp pericycle, the tissue where lateral roots originate ([Supplementary Figure S2E](#S1){ref-type="supplementary-material"}). These results indicate the robust induction of known auxin-responsive transcripts in the four cell types sampled in this work. Second, we confirmed that tissue-specific transcript level measurements matched auxin induction patterns in transcriptional reporter lines. This included xp pericycle-specific induction of *pLBD33::GUS* and *pTMO6::GFP* (TARGET OF MONOPTEROS 6 ) as well as stele-specific induction of *pATHB-8::GFP* (*ARABIDOPSIS THALIANA HOMEOBOX GENE 8*) and ubiquitous induction of *pGH3.5::GFP* and *pIAA5::GUS* ([Figure 2E--H](#f2){ref-type="fig"} and [Supplementary Figure S3](#S1){ref-type="supplementary-material"}) ([@b27]; [@b30]; [@b37]; [@b47]).
In a comparison of auxin responses in sorted cells and intact roots, genes that responded in a greater number of cell types (*t*-tests *P*\<0.01, fold change\>1.5) were more likely found responsive in intact roots (*t*-test *P*\<0.01, fold change\>1.5; [Figure 2C](#f2){ref-type="fig"}). Moreover, genes previously associated with the gene-ontology (GO) term *response to auxin stimulus* are highly significantly overrepresented only in the group of 101 genes that respond in all 4 assayed tissues (20/101 genes; Fisher\'s exact test *P*=3.51e−18). In [Figure 2C](#f2){ref-type="fig"}, the heatmap overlaid on the Venn diagram shows the gain in sensitivity for detecting cell type-specific auxin responses compared with intact roots under the same treatment. Genes found to be regulated by auxin in only one tissue show a relatively small overlap with responses in the intact root (1411, 445, 600 and 363 genes in the stele, xp pericycle, epidermis and columella, respectively). Transcripts whose response was detected in higher numbers of tissues show a relatively larger overlap with those detected in the intact root. This suggests that many cell type-specific auxin responses may not be detected in analyses performed at the organ or organismal level because localized responses are diluted among otherwise non-responsive cells.
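A hedged sketch of the overrepresentation calculation behind the reported Fisher's exact test: the 20-of-101 counts come from the text, whereas the genome-wide background sizes used below are assumed placeholder values, not figures from the study.

```python
from scipy import stats

# Counts from the text: 20 of the 101 genes responding in all four tissues carry
# the 'response to auxin stimulus' GO annotation. The background sizes here
# (~22,500 genes on the array, ~300 with this GO term) are assumptions.
n_hits_in_set, set_size = 20, 101
n_go_background, background = 300, 22500

table = [[n_hits_in_set, set_size - n_hits_in_set],
         [n_go_background - n_hits_in_set,
          background - set_size - (n_go_background - n_hits_in_set)]]
odds_ratio, p = stats.fisher_exact(table, alternative="greater")
print(f"Fisher's exact test P = {p:.2e}")
```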
Functional analysis of cell type-specific auxin responses
---------------------------------------------------------
Using the stringent list of (2846) auxin-responsive genes, expression patterns were ordered hierarchically by pairwise correlation. A heatmap of gene regulation patterns shows how almost all auxin-responsive genes exhibited some type of spatial bias in their regulation ([Figure 3A](#f3){ref-type="fig"}). Although genes are most often regulated in the same direction (induced or repressed) in different cell types, the response is usually stronger in a subset of samples. These findings demonstrate a pervasive tissue-specific amplitude modulation of auxin responses, and suggest that most auxin-controlled genes have context-dependent aspects to their transcriptional regulation.
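A minimal sketch of ordering genes by pairwise correlation of their response patterns for such a heatmap; the choice of average-linkage clustering on a 1 − Pearson distance is an assumption about the exact procedure.

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import pdist

def order_by_correlation(log2fc: pd.DataFrame) -> pd.DataFrame:
    """Order auxin-responsive genes by hierarchical clustering of their
    fold-change patterns (rows = genes; columns = stele, xp pericycle,
    epidermis, columella), using 1 - Pearson correlation as the distance."""
    dist = pdist(log2fc.values, metric="correlation")   # 1 - Pearson r
    tree = linkage(dist, method="average")
    order = leaves_list(tree)
    return log2fc.iloc[order]                           # heatmap-ready row order

# ordered = order_by_correlation(log2fc_2846)   # e.g., the 2846-gene stringent list
```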
To dissect spatially distinct auxin responses, dominant expression patterns were extracted and used to group genes with similar responses ([Supplementary Figure S4A](#S1){ref-type="supplementary-material"}; [@b39]). These response clusters showed a significant overrepresentation of diverse GO terms ([Supplementary Table S3](#S1){ref-type="supplementary-material"}). Extending the trend noted above for genes significantly regulated in all tissues, genes previously associated with the *response to auxin stimulus* as well as *auxin mediated signaling* and *auxin homeostasis* were mainly overrepresented in clusters containing genes with relatively uniform upregulation of expression; these included 10 *Aux/IAA*s and 4 *GH3*s ([Figure 3B](#f3){ref-type="fig"}; [Supplementary Figure S4B](#S1){ref-type="supplementary-material"} clusters 15 and 16; [Supplementary Table S3](#S1){ref-type="supplementary-material"}). Four *IAA*s and *GH3.3* were included in a cluster that showed relatively stronger induction in the stele and the induction of *GH3.6/DWARF IN LIGHT 1* was strongest in the columella. Two genes previously associated with the response to auxin, *LATE ELONGATING HYPOCOTYL* and an uncharacterized homeodomain transcription factor (At1g74840), were found in a cluster of genes with strong downregulation in the pericycle. *PIN-FORMED 7, NO VEIN* and *ACAULIS 5* are linked to the *auxin-transport* GO term found to be overrepresented in a cluster with relatively strong induction in the stele and pericycle. Genes associated with *auxin biosynthesis* were overrepresented in a cluster of uniformly downregulated genes. These enrichments show that, although most genes previously associated with the auxin response display broad induction, there are cell type-specific expression biases to the transcriptional regulation by auxin among genes that influence its own perception, metabolism and transport.
Several auxin-response clusters representing a localized spatial pattern of induction or repression showed overrepresentation of functions linked to growth processes known to be regulated by auxin. For example, clusters of genes that showed epidermis-specific downregulation by auxin (e.g., cluster 37) had statistically overrepresented GO terms for *trichoblast maturation* ([Figure 3C](#f3){ref-type="fig"}; [Supplementary Figure S4](#S1){ref-type="supplementary-material"}; [Supplementary Table S3](#S1){ref-type="supplementary-material"}). These clusters of genes potentially identify a large component of the transcriptome influenced by auxin signaling in the epidermis to regulate development or responses to environmental cues. Genes associated with *cell wall modification* and *cytoskeleton modification* as well as *transmembrane transport* and *peroxidase activity* were also overrepresented in this cluster, pointing to processes that may mediate auxin\'s specific effects on the epidermis.
Promoter analysis of the cell type-specific auxin-response clusters was conducted to look for overrepresentation of the canonical auxin-response element TGTCTC ([@b33]). Clusters 15 and 16, which show relatively uniform upregulation of gene expression across tissues ([Supplementary Figure S4A](#S1){ref-type="supplementary-material"}), contain significantly more genes with this element in the 500-bp upstream of their transcription start site than expected by chance (hypergeometric distribution analysis; [Supplementary Table S3](#S1){ref-type="supplementary-material"}). Additionally, the occurrence of the generic TGTC*N*C and the individual -A-, -C- and -G- variants was examined, with the finding that TGTCAC, TGTCCC and TGTC*N*C were also overrepresented in the promoters of the uniformly upregulated genes assigned to dominant expression patterns 15 and 16. Furthermore, TGTCAC was overrepresented in the promoters of genes assigned to pattern 34, which shows downregulation in all tissues that is strongest in the stele ([Supplementary Figure S4A](#S1){ref-type="supplementary-material"}). None of these elements were significantly enriched in any other upregulated or downregulated clusters. These results suggest that direct targets of auxin signaling through the auxin-response promoter element are generally uniformly induced across tissues of the root, and that variants of the canonical element may also participate in auxin regulation of transcript levels.
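The motif scan and hypergeometric test could be implemented along these lines; the regular-expression scan over assumed 500-bp upstream sequences and the function names are illustrative, not the study's actual pipeline.

```python
import re
from scipy.stats import hypergeom

def has_motif(seq, pattern="TGTCTC"):
    """True if the upstream sequence contains the auxin-response element.
    Use pattern='TGTC[ACGT]C' for the generic TGTCNC variants."""
    return re.search(pattern, seq.upper()) is not None

def motif_enrichment(cluster_genes, all_genes, upstream, pattern="TGTCTC"):
    """Hypergeometric P-value for motif overrepresentation in one response cluster.

    upstream: dict mapping gene ID -> 500-bp upstream promoter sequence (assumed input).
    """
    M = len(all_genes)                                              # population size
    n = sum(has_motif(upstream[g], pattern) for g in all_genes)     # motif+ in population
    N = len(cluster_genes)                                          # cluster size
    k = sum(has_motif(upstream[g], pattern) for g in cluster_genes) # motif+ in cluster
    return hypergeom.sf(k - 1, M, n, N)                             # P(X >= k)
```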
Auxin effects on transcriptional cell identity
----------------------------------------------
To explore the influence of auxin on cellular development in the root in more depth, the CTSE sets of cell-identity markers ([Supplementary Figure S2A](#S1){ref-type="supplementary-material"}; [Supplementary Table S2](#S1){ref-type="supplementary-material"}) were used to analyze the effect of auxin on tissue-enriched genes in our cell type-specific data set. The overlap between the stringent list of 2846 auxin-responsive genes and the 3416-gene CTSE list enabled us to assess whether auxin had a positive or inhibitory overall effect on transcriptional cell identity. Among the overlapping set, genes enriched specifically in the quiescent center and developing xylem are upregulated by auxin at a significantly higher proportion than expected by chance, whereas genes enriched in maturing xylem, cortex and trichoblasts are downregulated more frequently than expected (χ^2^-test *P*\<0.01; [Supplementary Figure S5A](#S1){ref-type="supplementary-material"}; [Supplementary Table S4](#S1){ref-type="supplementary-material"}). Furthermore, auxin-responsive tissue-enriched gene clusters show cell type-specific auxin sensitivity. For example, auxin-responsive genes enriched in the developing xylem are predominantly induced in the stele and the majority of auxin-responsive genes enriched in trichoblasts are repressed in the epidermis (as judged by relative expression levels as well as the tissue-specific *t*-tests; [Figure 4A](#f4){ref-type="fig"}, [Supplementary Figure S5B](#S1){ref-type="supplementary-material"}).
In the xylem, separate expression profiles for developing and maturing cell populations permitted analysis of auxin responses in relation to expression along the longitudinal maturation gradient within a specific cell lineage (tissue-specific marker lines S4 and S18, respectively; [@b30]; [Figure 4B and C](#f4){ref-type="fig"}; [Supplementary Table S4](#S1){ref-type="supplementary-material"}). Analysis showed that auxin promotes the expression of developing-xylem genes and represses the expression of genes enriched in maturing xylem. Fifty-six of fifty-seven auxin-regulated developing-xylem-enriched genes in the stele sample were induced by auxin (χ^2^-test *P*=5.66e−11) and 63 out of 77 auxin-regulated maturing-xylem-enriched genes were repressed (χ^2^-test *P*=7.59e−11). *In planta*, the expression of developing-xylem identity marker *pTMO5::GFP* ([@b30]; [@b47]) intensifies and expands from the apical meristem further into the basal (shootward) meristem upon auxin treatment ([Supplementary Figure S5B--D](#S1){ref-type="supplementary-material"}), corroborating the transcriptomic data and showing that this increase in expression seen in the stele sample takes place exclusively within the xylem lineage. Notably, the auxin sensitivity of xylem-enriched genes (i.e., the degree of induction or repression by auxin as measured by fold change) was significantly correlated with the ratio of expression between the developing and maturing xylem. Genes that show a higher relative expression in the developing xylem tend to be more strongly induced, and genes that show a higher relative expression in the maturing xylem tend to be more strongly repressed ([Figure 4C](#f4){ref-type="fig"}; Pearson\'s correlation *R*=−0.58, *Z*-score=7.13 non-parametric randomization test for significance). A transcript\'s auxin sensitivity is therefore a reliable predictor of longitudinal expression in the RAM for xylem-enriched genes.
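The χ²-type comparisons reported here could be reproduced with a goodness-of-fit test of the observed up/down counts against a background proportion. The 56/1 counts are from the text; the 50:50 background used in the usage comment is an assumption, so the resulting P-value is only indicative.

```python
from scipy.stats import chisquare

def direction_bias(n_up, n_down, background_up_fraction):
    """Chi-square goodness-of-fit: do genes enriched in one cell type deviate
    from the background up/down split of all auxin-responsive genes?

    n_up, n_down           : observed counts for the tissue-enriched set
    background_up_fraction : expected fraction of upregulated genes (assumed here)
    """
    total = n_up + n_down
    expected = [total * background_up_fraction, total * (1 - background_up_fraction)]
    return chisquare([n_up, n_down], f_exp=expected)

# e.g., 56 of 57 developing-xylem-enriched genes induced in the stele:
# stat, p = direction_bias(56, 1, background_up_fraction=0.5)   # assumed background
```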
Longitudinal expression correlates with genome-wide auxin responses
-------------------------------------------------------------------
We next addressed the link between the global response to auxin and spatial expression along the longitudinal axis of the entire root tip. There are two transcriptomic data sets of gene expression in the longitudinal dimension of the Arabidopsis seedling root: one of a 13-slice sampling of two individual roots ([@b8]) and another of a three-section sampling comprising the meristematic, elongation and maturation zones gathered in pooled replicates ([@b6]; [Figure 1A](#f1){ref-type="fig"}).
To quantify the relationship between auxin response and spatial expression along the longitudinal root axis, we examined the overlap of the 6850 genes with differential expression between the meristematic and maturation zone (*t*-test *P*\<0.01; [Supplementary Table S5](#S1){ref-type="supplementary-material"}; [@b6]) and our extensive list of 5097 auxin-responsive genes according to the ANOVA treatment factor. The two lists yielded an intersection of 2437 genes, for which fold change of the auxin response (averaged over four tissues) was plotted against the fold change in expression between the meristematic and maturation zone ([Figure 5A](#f5){ref-type="fig"}). The correlation between these two independent data sets is highly significant (Pearson\'s correlation *R*=−0.58, *Z*-score=28.06 non-parametric randomization test for significance) and indicates that, for thousands of genes, auxin-sensitivity predicts longitudinal expression.
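A sketch of the non-parametric randomization test used to attach a Z-score to such a Pearson correlation: shuffle one variable, recompute r, and express the observed r in standard deviations of the shuffled null. The number of permutations and the random seed are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def permutation_z(x, y, n_perm=10000, seed=0):
    """Pearson correlation of x and y plus a Z-score from a randomization test."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    r_obs = pearsonr(x, y)[0]
    r_null = np.array([pearsonr(x, rng.permutation(y))[0] for _ in range(n_perm)])
    return r_obs, (r_obs - r_null.mean()) / r_null.std()

# e.g., x = mean auxin log2 fold change, y = meristem-vs-maturation log2 fold change
# for the 2437 overlapping genes; the study reports R = -0.58, Z = 28.06.
```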
To visualize the relation between transcriptional auxin sensitivity and the regulation of expression along the longitudinal axis of the root, genes were ordered by fold change in expression after auxin treatment and plotted in a heatmap of the 13-slice data set ([Figure 5B](#f5){ref-type="fig"}; [@b8]). This representation again revealed a link between relative responsiveness to auxin and spatial expression along the longitudinal axis of the root. The upregulated genes displayed a longitudinal expression gradient with a meristematic maximum and slope linked to the intensity of the auxin response; downregulated genes showed complementary expression with a minimum in the apical end of the meristem ([Figure 5B](#f5){ref-type="fig"}). These meristematic response gradients were also seen in the replicate root sampled in the 13-slice data set ([Supplementary Figure S6A](#S1){ref-type="supplementary-material"}). Secondary expression peaks were observed in the elongation and maturation zones of both sampled roots; however, these regions vary between root 1 and root 2 ([Figure 5B](#f5){ref-type="fig"}; [Supplementary Figure S6A](#S1){ref-type="supplementary-material"}).
Within the groups of induced and repressed genes, there were also notable differences in longitudinal expression patterns that were associated with relative sensitivity to auxin treatment. Using induction versus repression and relative fold change of the response to auxin treatment to subdivide response strength, the 5097 auxin-responsive genes could be broadly subdivided into four categories: group (1) strongly auxin-induced genes with high expression in the apex that quickly diminishes in the apical meristem and displays variable secondary peaks in the elongation and maturation zones; group (2) moderately to weakly auxin-induced genes with high expression in the apical end of the meristem, a graded decline toward the basal end of the meristem and lacking prominent secondary peaks; group (3) weakly auxin-repressed genes with the inverse spatial expression pattern of group 1; and group (4) moderately to strongly auxin-repressed genes that complement the expression of group 2 ([Figure 5B--D](#f5){ref-type="fig"}). Thus, expression patterns along the longitudinal axis are also linked to the degree of induction or repression by auxin.
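To make the four-group subdivision concrete, the sketch below partitions auxin-responsive genes by the direction and magnitude of their mean response and averages their 13-slice longitudinal profiles; the 2-fold cutoff separating "strong" from "moderate/weak" responses is an assumption for illustration.

```python
import pandas as pd

def split_into_groups(avg_log2fc: pd.Series, strong=1.0) -> pd.Series:
    """Assign each auxin-responsive gene to one of four groups by direction and
    relative strength of its mean response (|log2 FC| > 1, i.e., >2-fold, assumed)."""
    groups = pd.Series(index=avg_log2fc.index, dtype=object)
    groups[avg_log2fc >= strong] = "1: strongly induced"
    groups[(avg_log2fc > 0) & (avg_log2fc < strong)] = "2: moderately/weakly induced"
    groups[(avg_log2fc < 0) & (avg_log2fc > -strong)] = "3: weakly repressed"
    groups[avg_log2fc <= -strong] = "4: strongly repressed"
    return groups

def mean_longitudinal_profile(slices13: pd.DataFrame, groups: pd.Series) -> pd.DataFrame:
    """Average 13-slice longitudinal expression (rows = genes, columns = slices,
    root tip to base) within each response group, for plotting as in Figure 5."""
    common = slices13.index.intersection(groups.index)
    return slices13.loc[common].groupby(groups.loc[common]).mean()
```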
In a validation of the overall trends in the data, the relation between auxin response and longitudinal expression could be recapitulated using independent auxin-response data sets collected in the root, in this study and by others ([@b55]; [@b1]). The correlation for the various lists of genes regulated by auxin signaling was quantified by cross-referencing with the meristematic versus maturation data set ([Supplementary Figure S6B--K](#S1){ref-type="supplementary-material"}; [Supplementary Table S5](#S1){ref-type="supplementary-material"}; [@b6]). First, correlation was also evident with auxin responses measured in individual tissues and with auxin responses in the intact root as well as the stringent list of 2846 auxin-responsive genes ([Supplementary Figure S6B--G](#S1){ref-type="supplementary-material"}; [Supplementary Table S5](#S1){ref-type="supplementary-material"}). In addition, correlation was observed between longitudinal expression and previously published data of auxin responses in proximal root tissues above the primary meristem (excluding the RAM; [@b55]), indicating that this correlation is not restricted to responses in the root apex ([Supplementary Figure S6H](#S1){ref-type="supplementary-material"}). However, no strong correlation between auxin response and spatial expression was evident when response data was generated from whole seedlings ([@b37]), suggesting that responses outside the root do not correlate with expression in the root tip ([Supplementary Figure S6I](#S1){ref-type="supplementary-material"}). Finally, an inverse correlation could be observed between longitudinal expression and the transcriptional response to transient expression of gain-of-function *Aux/IAA* repressors measured in root epidermal protoplasts ([@b1]). Here, genes repressed by the expression of *Aux*/*IAA19mII* repressor ([@b52]) displayed relatively high meristematic expression, whereas genes induced by the expression of a gain-of-function *Aux*/*IAA* repressor showed low expression in the meristem ([Supplementary Figure S6J](#S1){ref-type="supplementary-material"}). Consequently, as a correlation between auxin signaling and longitudinal expression is recapitulated by the manipulation of the canonical auxin signal transduction pathway in dissociated cells, the observed correlation can be attributed to a direct cellular response to auxin signaling.
Discussion
==========
Auxin affects cell type-specific development
--------------------------------------------
This study demonstrates the influence of cellular context on genome-wide transcriptional responses to auxin treatment and reveals a broad range of tissue specificity in these responses. A relatively small proportion of transcripts show uniform regulation across tissues ([Figure 3A](#f3){ref-type="fig"}; [Supplementary Figure S4A](#S1){ref-type="supplementary-material"}), while the majority displays a spatial bias toward one or more of the tissues analyzed. Examining the auxin responsiveness of CTSE genes, it appears that auxin treatment, in general, does not promote all cells toward a common developmental state. Instead, auxin can promote or inhibit cell character by enhancing or repressing the expression level of cell-specific markers differentially in the separate tissue samples analyzed here ([Figure 4A](#f4){ref-type="fig"}; [Supplementary Figure S5A and B](#S1){ref-type="supplementary-material"}).
A significant repressive effect of auxin on trichoblast-enriched gene expression was observed in our data set, particularly in the epidermis sample ([Figure 4A](#f4){ref-type="fig"}; [Supplementary Figure S5A](#S1){ref-type="supplementary-material"}). The repression was not seen for genes enriched in atrichoblasts or for genes enriched throughout the epidermis ([Supplementary Figure S5A](#S1){ref-type="supplementary-material"}). This result was independently recapitulated by statistical overrepresentation of genes previously associated with the GO term *trichoblast development* in dominant expression-pattern clusters that show downregulation specifically in the epidermis ([Figure 3C](#f3){ref-type="fig"}; [Supplementary Figure S4](#S1){ref-type="supplementary-material"}; [Supplementary Table S3](#S1){ref-type="supplementary-material"}). These findings are in line with a previous study of root-hair defects in *aux1* auxin-importer mutants ([@b26]), where increased transcriptional auxin signaling in trichoblasts was associated with defects in root-hair development. The categorization of auxin-regulated trichoblast-enriched genes presented here can be used to further investigate the mechanisms by which auxin may influence root-hair development.
Analysis of radial patterning in the stele of the RAM has indicated cross-talk between auxin and cytokinin modulates a PIN-driven high-auxin domain in the xylem that mediates cell specification ([@b7]). Consistent with high auxin levels in the xylem lineage, a subset of highly auxin-induced genes (including *Aux/IAA6, 8, 19* and *29*) shows high basal expression throughout the xylem (enriched in both the developing and maturing xylem; [Supplementary Figure S1C](#S1){ref-type="supplementary-material"}; [Figure 4B and C](#f4){ref-type="fig"}; [Supplementary Table S4](#S1){ref-type="supplementary-material"}). The auxin-responsive, developing- or maturing-xylem-enriched transcriptomes can be used to investigate xylem specification by looking for potential auxin-responsive regulators of development.
Aside from cell-fate specification, auxin also has a role in xylem differentiation and maturation. Analysis of the tissue-specific auxin responses in relation to xylem development demonstrates how auxin may regulate lineage-specific differentiation through moderating activation and repression of genes associated with juvenile and maturing transcriptional states, respectively. Auxin significantly promotes developing-xylem identity and inhibits the expression of maturing-xylem genes in the stele ([Figure 4](#f4){ref-type="fig"}; [Supplementary Figure S5](#S1){ref-type="supplementary-material"}). Xylem development has previously been proposed to be directly regulated by local auxin levels in the cambial meristem of wood-forming tissues ([@b5]; [@b59]). In this tissue, the radial developmental gradient (characterized by sequential division, expansion and secondary cell wall deposition) parallels an auxin concentration gradient, as measured by mass-spectrometric analysis of IAA in cryo-sections ([@b53]). However, a transcriptional link between the perception of an auxin concentration gradient and the regulation of cellular maturation was not found ([@b36]). In the root, we did find that sensitivity to auxin treatment directly correlates with the slope of expression of xylem-enriched genes along the longitudinal developmental gradient ([Figure 4C](#f4){ref-type="fig"}). This correlation provides evidence that an endogenous auxin gradient directly influences the global transcriptional state of cells along this dimension to regulate maturation.
Mapping the longitudinal auxin-response transcriptome
-----------------------------------------------------
The availability of transcriptomic data sets along the longitudinal axis of the Arabidopsis seedling root ([@b6]; [@b8]) allowed us to plot the spatial expression of comprehensive sets of auxin-responsive genes and to identify a transcriptional auxin-response gradient within this tissue as a whole. The analysis in effect uses the entire auxin-responsive transcriptome as a reporter for the endogenous auxin response; as opposed to the use of singular auxin-induced promoters. Moreover, we could also observe the longitudinal expression of auxin-repressed genes in this context and factor in the relative sensitivity of auxin-responsive genes.
The auxin response can be seen to be bipartite; consisting of genes with an archetypal expression, including high expression in the root tip and secondary shootward peaks (group 1 and complementary group 3), and genes with a graded meristematic expression pattern (group 2 and group 4; [Figure 5B--D](#f5){ref-type="fig"}). The archetypal response resembles the expression of the *DR5* reporter ([Supplementary Figure S1B](#S1){ref-type="supplementary-material"}), for which auxiliary expression has also been observed in more shootward portions of the root, similarly to the secondary peaks of group 1 ([Figure 5C](#f5){ref-type="fig"}). For DR5, these shootward peaks of expression have been shown to correspond to the regions of pre-branch site specification and lateral root primordia ([@b12]; [@b17]; [@b34]). The graded response, however, more closely matches measured auxin concentrations ([@b42]) and slopes along with the cellular-maturation gradient in the apical and basal meristem ([Figures 1A](#f1){ref-type="fig"} and [5B--D](#f5){ref-type="fig"}). One speculation is that the archetypal response is directly under control of the auxin signal transduction machinery, including the negative feedback regulation, which could explain why these genes do not mirror the measured concentration gradient. The graded response may be under non-canonical or indirect regulation, conceivably through auxin-responsive master-regulator transcription factors that do reflect the developmental- and auxin-concentration gradient in the meristem (such as the PLTs; [@b19]).
It is important to note that there is likely a cell lineage-specific aspect to the interpretation (or maintenance) of auxin gradients. This can be observed with the auxin signaling reporters *DR5::3xVenus* and DII-Venus ([Supplementary Figure S1](#S1){ref-type="supplementary-material"}; [@b46]; [@b9]). The correlation seen between the auxin response and the whole-root longitudinal data ([@b6]; [@b8]), therefore, represents a global response gradient that may consist of several distinct cell type-specific gradients. The availability of longitudinally separated markers of the same cell lineage in the xylem ([Figure 4C](#f4){ref-type="fig"}) makes possible the visualization of a response gradient in this tissue specifically, that is consistent with mass spectrometric auxin measurements in the stele ([@b42]). It will be interesting to see whether similar gradients can be seen in other cell types, as more specific marker lines become available; especially in the epidermis where *DR5::3xVenus* and DII-Venus reporters potentially indicate an inverted gradient ([Supplementary Figure S1A and B](#S1){ref-type="supplementary-material"}).
Overall, the significant correlation between the transcriptomic auxin response and spatial expression within the root suggests that auxin sensitivity together with spatial gradients of auxin distribution is a determinant in the spatial expression of thousands of genes.
Cellular competence for a unique auxin response
-----------------------------------------------
Cells perceive auxin as though it were selectively processed through a set of filters that accompany a given cell identity and represent the auxin-sensing and -response machinery active in the cell. Our data present the transcriptional output of several such innate response-machinery filters, providing an important view of how auxin is perceived by individual cell types.
The canonical auxin perception and signal transduction pathway (composed of the TRANSPORT INHIBITOR RESPONSE 1 (TIR1)/AUXIN SIGNALING F-BOX (AFB) receptors, the Aux/IAA co-receptor/transcriptional repressors and the AUXIN RESPONSE FACTOR (ARF) transcription factors ([@b14]; [@b38]; [@b41])) is encoded by large gene families that show divergent cell type-specific expression patterns ([Supplementary Figure S1](#S1){ref-type="supplementary-material"}; [@b44]). Promoter-swap and misexpression studies using specific *ARFs* and gain-of-function *Aux/IAAs* have demonstrated that individual components of this modular auxin-response pathway can bestow specific responses in different tissues of the root and embryo ([@b29]; [@b45]). The cellular TIR1/AFB-Aux/IAA-ARF composition is thought to represent an 'auxin code\' that is the principal determinant of the specificity of the output.
However, additional regulatory interactors of the TIR1/AFB-Aux/IAA-ARF pathway, for example, the TOPLESS transcriptional co-repressors, the MYB77 transcription factor and the *miR165* microRNA ([@b48]; [@b51]), may also impart tissue specificity. Both *TMO5* and *TMO6* are targets of the same auxin-response factor (ARF5/MONOPTEROS; [@b47]), yet show highly divergent patterns of induction. *TMO5* is expressed and induced specifically in the xylem, whereas *TMO6* is seen to have high expression in the phloem and procambium and is induced in the xp pericycle at sites of initiating lateral roots ([Supplementary Figures S3 and S5](#S1){ref-type="supplementary-material"}). The discrepancy between the induction of these direct auxin-response targets reveals an intriguing aspect of cell type-specific regulation of transcriptional auxin responses and shows that additional factors, aside from ARF5, must be involved in the activation of their expression in different cell types.
Interestingly, the ARF binding site (the auxin-response element TGTCTC) and several variants thereof (TGTC*N*C) are only found to be overrepresented among genes with a relatively uniform upregulation ([Supplementary Figure S4](#S1){ref-type="supplementary-material"}; [Supplementary Table S3](#S1){ref-type="supplementary-material"}). This could indicate that, if the more spatially distinct responses and auxin-regulated gene repression are in part directly mediated by binding of particular ARF isoforms, a more complex DNA binding-site recognition may be involved in the target specificity of different ARF isoforms. Alternatively, the spatially distinct responses could be composed more of indirect target genes that are regulated by secondary activators or repressors. Finally, signal transduction outside of the TIR1/AFB-Aux/IAA-ARF auxin-response pathway may also account for cell type-specific responses.
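To make the motif search concrete, the short sketch below counts matches to the canonical element TGTCTC and its TGTC*N*C variants in promoter sequences. It is a minimal Python illustration, not the pipeline used in this study; the sequences are hypothetical placeholders and only the forward strand is scanned.

```python
# Count TGTCNC auxin-response-element variants in promoter sequences (forward strand only);
# the sequences below are hypothetical placeholders.
import re

AUXRE = re.compile(r"TGTC[ACGT]C")  # TGTCTC plus the TGTCNC variants

def count_auxre(promoter: str) -> int:
    """Number of non-overlapping TGTCNC matches in a promoter sequence."""
    return len(AUXRE.findall(promoter.upper()))

promoters = {
    "gene_A": "ATTGTCTCAAGGTTGTCGCATTTGTCCC",
    "gene_B": "GGGCCCAAATTTGGGCCCAAATGGCCTA",
}
for gene, seq in promoters.items():
    print(gene, count_auxre(seq))
```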
Data availability
-----------------
The raw microarray data generated in this study have been deposited in the Gene Expression Omnibus in the form of .cel files and are available online at <http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE35580>.
Materials and methods
=====================
Plant materials and treatment
-----------------------------
All *Arabidopsis thaliana* plant lines used in this study are listed in [Supplementary information](#S1){ref-type="supplementary-material"}. Seed was sterilized by 5 min incubation with 96% ethanol followed by 20 min incubation with 50% household bleach and rinsing with sterile water and stratified for 2 days at 4°C in the dark. Seedlings were grown hydroponically on nylon mesh in phytatrays (Sigma) with growth medium (2.2 g/l Murashige and Skoog Salts (Sigma-Aldrich), 1% (w/v) sucrose, 0.5 g/l MES hydrate (Sigma-Aldrich), pH 5.7 with KOH), or plated on square petri dishes (Fisher Scientific) with growth medium plus 1% (w/v) agar (as in [@b2]). Phytatrays and plates were placed in an Advanced Intellus environmental controller (Percival) set to 35 μmol/m^2^ s^−1^ and 22°C with an 18 h-light/6 h-dark regime. For the cell sorting and microarray experiment, 1-week-old (5 dpg) seedlings were treated with 5 μM IAA (Sigma-Aldrich) or mock treated with solvent alone for a total of 3 h (2 h in phytatray and 1 h during the protoplast and sorting procedure). Duration of treatment was chosen to obtain a relatively early yet robust representation of responses to auxin in the root, before morphological effects, such as cell division, could be observed, but late enough to include secondary/indirect target genes. A 10-mM IAA stock was dissolved in ethanol and stored at −20°C. 2,4-Dichlorophenoxyacetic acid (2,4-D; Sigma-Aldrich) treatments (used for auxin-responsive GFP reporter lines) were performed by transferring seedlings to plates supplemented with 1 μM 2,4-D from a 10-mM stock dissolved in ethanol and stored at −20°C. The *pGH3.5::GFP* reporter line was generated by cloning 3546 bp upstream of the *GH3.5* start codon (using primers Fwd 5′-cagtttaattatactccatttattcgtca-3′ and Rev 5′-ggtttaagagaaagagagaagtctgagaaaatg-3′) in front of the *GFP* open reading frame in *pMDC107* ([@b11]) using Gateway recombination via *pENTR-D-TOPO* (Invitrogen). The resulting vector was used to transform Col-0 Arabidopsis with *Agrobacterium tumefaciens* (GV3101). The *pIAA5::GUS* reporter line was generated by cloning 913 bp upstream of the *IAA5* start codon (using primers Fwd 5′-cacctatcacaaagtcttgttgtgttattca-3′ and Rev 5′-ctttgatgtttttgattgaaagtattg-3′) in front of the *uidA* open reading frame in *pMDC163* ([@b11]) using Gateway recombination via *pENTR-D-TOPO* (Invitrogen). The resulting vector was used to transform Col-0 Arabidopsis with *A. tumefaciens* (GV3101).
Generation of protoplasts, flow cytometry and fluorescence activated cell sorting
---------------------------------------------------------------------------------
Protoplast isolation was performed as described previously ([@b2]). Roots from one phytatray containing ∼1500 one-week-old seedlings were harvested (after a 2-h treatment with 5 μM IAA or solvent alone) and placed into a gently shaking 50 ml tube with 15 ml protoplasting solution (supplemented with 5 μM IAA or solvent alone) for 45 min. Protoplasting solution was prepared with 1.25% (w/v) cellulase (Yakult), 0.3% (w/v) macerozyme (Yakult), 0.4 M mannitol, 20 mM MES, 20 mM KCl, 0.1% (w/v) BSA, 10 mM CaCl~2~, 5 mM β-mercaptoethanol, pH adjusted to 5.7 with Tris/HCl pH 7.5. The protoplast solution was filtered through a 40-μm cell strainer (BD Falcon, USA), transferred to 15 ml conical tubes and centrifuged for 5 min at 500 *g*. In all, 14 ml of the supernatant was aspirated and pellets were resuspended.
Protoplast suspensions were cytometrically analyzed and sorted using FACSAria (BD Biosciences) equipped with a 488-nm laser and fitted with a 100-μm nozzle to measure fluorescent emission at 530/30 and 610/20 nm for GFP and red-spectrum autofluorescence, respectively. Positive events were identified based on their red-to-green fluorescence ratio, sorted directly into 350 μl RNA extraction buffer and stored at −80°C.
RNA extraction and microarray hybridization
-------------------------------------------
RNA was extracted from 20 000 sorted cells per replicate using an RNeasy Micro Kit with RNase-free DNase Set according to the manufacturer\'s instructions (QIAGEN). RNA was quantified with a Bioanalyzer (Agilent Technologies) and reverse-transcribed, amplified and labeled with WT-Ovation Pico RNA Amplification System and FL-Ovation cDNA Biotin Module V2 (NuGEN). The labeled cDNA was hybridized, washed and stained on an ATH-121501 Arabidopsis full genome microarray using a Hybridization Control Kit, a GeneChip Hybridization, Wash, and Stain Kit, a GeneChip Fluidics Station 450 and a GeneChip Scanner (Affymetrix). Three independent biological replicates were collected for all treatments. The raw data files generated by others and used in this analysis were obtained from the Benfey lab or from <http://www.ncbi.nlm.nih.gov/geo/>.
Data analysis
-------------
Data normalization, analysis and visualization were performed using freely available code and R-based software (listed in [Supplementary information](#S1){ref-type="supplementary-material"}). Raw microarray data were MAS5.0 normalized with a scaling factor of 250 and log transformed before homoscedastic statistical analysis (Student\'s *t*-test and two-way ANOVA; Flexarray). Ambiguous probesets ([Supplementary Table S2](#S1){ref-type="supplementary-material"}) were removed from further analysis. False discovery rates (FDR) were calculated based on the *P*-value distribution (Q-value). For the comparison between statistical tests in separate tissues, equal cutoffs were set at *P*\<0.01. FDR at this cutoff and *q*-values for the individual probesets are reported in [Supplementary Table S2](#S1){ref-type="supplementary-material"}. Additionally, a fold-change cutoff of \>1.5 was set for the tissue-specific and intact root *t*-tests. To generate the stringent list of 2846 auxin responders, genes had to pass the ANOVA for treatment or interaction (*P*\<0.01) and at least one of the tissue-specific *t*-tests (*P*\<0.01, fold change\>1.5).
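As an illustration of the selection criterion described above, the following sketch applies a per-tissue Student's *t*-test and a 1.5-fold-change cutoff to a mock expression matrix. It is a minimal Python stand-in for the R-based tools used here; the data are randomly generated and the two-way ANOVA step is omitted for brevity.

```python
# Minimal sketch of the per-tissue criterion (P < 0.01 and fold change > 1.5);
# data are randomly generated and the two-way ANOVA step is omitted.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tissues, n_reps, n_genes = 5, 3, 1000
# log2 expression values: tissues x replicates x genes (hypothetical)
mock = rng.normal(8.0, 1.0, size=(n_tissues, n_reps, n_genes))
iaa = mock + rng.normal(0.0, 0.5, size=mock.shape)

def stringent_hits(mock, iaa, p_cut=0.01, fc_cut=1.5):
    """Flag genes passing P < p_cut and |fold change| > fc_cut in at least one tissue."""
    hits = np.zeros(mock.shape[-1], dtype=bool)
    for t in range(mock.shape[0]):
        p = stats.ttest_ind(iaa[t], mock[t], axis=0).pvalue   # homoscedastic Student's t-test
        log2_fc = iaa[t].mean(axis=0) - mock[t].mean(axis=0)  # values are already log2 scale
        hits |= (p < p_cut) & (np.abs(log2_fc) > np.log2(fc_cut))
    return hits

print(int(stringent_hits(mock, iaa).sum()), "genes pass in at least one tissue")
```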
For co-expression analysis, hierarchical clustering was performed on the stringent list of auxin responders with pairwise Pearson\'s correlation using gene expression lists of replicate sample averages that were row normalized (Multiple Experiment Viewer). Branch length distribution of the HCL tree and the figure of merit (FOM) of iterative K-means clustering runs were used to gauge the expected number of clusters (Multiple Experiment Viewer). A Fuzzy K-means clustering search for dominant expression patterns was executed employing the R script by Orlando and co-workers for the manipulation of large-scale Arabidopsis microarray data sets ([@b39]). Clusters containing \<10 genes were omitted from further analysis. GO-term overrepresentation was analyzed using VirtualPlant with TAIR10 gene annotations. Promoter element enrichment was based on the absence or presence of motifs in the 500-bp upstream of the transcription start sites (hypergeometric distribution test with FDR correction *q*\<0.01, TAIR10 annotation).
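A minimal sketch of the presence/absence hypergeometric test used for promoter-element enrichment is given below; the gene counts are hypothetical placeholders and the FDR correction across motifs is not shown.

```python
# Hypergeometric test for over-representation of a promoter motif in a gene cluster;
# all counts are hypothetical placeholders.
from scipy.stats import hypergeom

M = 22000  # genes with an annotated 500-bp upstream region (population size)
n = 4000   # genes whose upstream region contains the motif at least once
N = 800    # genes in the co-expression cluster being tested
k = 260    # cluster genes whose upstream region contains the motif

# P(X >= k) when drawing N genes without replacement from M, n of which carry the motif
p_enrichment = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment P = {p_enrichment:.3g}")
```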
Template matching for the isolation of CTSE gene sets was performed using the Pavlidis template matching algorithm (Multiple Experiment Viewer) on previously generated transcriptomic data from 13 non-overlapping tissue marker lines ([Supplementary Table S2](#S1){ref-type="supplementary-material"}) with a similarity cutoff of *R*\>0.8.
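The sketch below illustrates template matching of this kind: each gene's profile across the 13 marker-line samples is correlated with a template that is high in one cell type and low elsewhere, and genes with Pearson's *R*\>0.8 are retained. The expression matrix and template are hypothetical placeholders, not the data analyzed here.

```python
# Template matching for cell type-specific enrichment (Pearson's R > 0.8 cutoff);
# the expression matrix and template are randomly chosen placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_genes, n_lines = 500, 13
expr = rng.normal(size=(n_genes, n_lines))  # genes x marker-line samples
template = np.zeros(n_lines)
template[4] = 1.0                           # e.g. high only in the fifth marker line

def pearson_to_template(expr, template):
    """Pearson correlation of each gene's profile with the template."""
    x = expr - expr.mean(axis=1, keepdims=True)
    y = template - template.mean()
    return (x @ y) / (np.linalg.norm(x, axis=1) * np.linalg.norm(y))

r = pearson_to_template(expr, template)
print(int((r > 0.8).sum()), "genes exceed the R > 0.8 cutoff")
```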
Microscopy
----------
Confocal microscopy was performed with SP5 (Leica) and LSM710 (Zeiss) microscopes and software. Cell walls were stained by 10 min incubation in 10 μg/ml propidium iodide (dissolved in water). GUS reporter gene lines were stained in 50 mM phosphate buffer pH 7, 0.5 mM ferricyanide, 0.5 mM ferrocyanide, 0.05% (v/v) Triton X, 1 mM X-Gluc, for 24 h at 37°C. The staining reaction was stopped and seedlings were fixed and cleared with ethanol and mounted in water. Staining was visualized with an Axioskop (Zeiss) microscope.
Supplementary Material {#S1}
======================
###### Supplementary Information
Supplementary Information
###### Supplementary Table S1
Expression of the auxin signaling components in different cell types and longitudinal sections of the Arabidopsis root
###### Supplementary Table S2
Cell type-specific auxin responses in the Arabidopsis root
###### Supplementary Table S3
Dominant expression patterns in cell type-specific auxin responses
###### Supplementary Table S4
Chi-squared tests for ratios of induced-to-repressed expression of cell type-specifically enriched genes
###### Supplementary Table S5
Analysis of publicly available microarray data
###### Review Process File
We would like to thank Hidehiro Fukaki (Kobe University) for the *pLBD33::GUS* reporter. This work was supported by grants from the NSF (DBI-0519984) and NIH (R01-GM078270) to KDB, the NIH (R01-GM086632) to DCB, the NIH (GM43644), Howard Hughes Medical Institute and Gordon and Betty Moore Foundation to ME, the EMBO (LTF) to IE, the Vaadia-BARD (FI-431-10) to ES, and the Research Foundation of Flanders to SV.
*Author contributions*: BORB helped conceptualize, design, perform and write this work, SV helped conceptualize and write this work, GK helped perform the clustering analysis, TN helped perform the confocal microscopy, IE helped perform the promoter and correlation analyses, ES and GC helped perform reporter-gene analysis, JF, DCB and ME helped conceptualize this work. KDB helped conceptualize, design and write this work.
The authors declare that they have no conflict of interest.
![The *Arabidopsis thaliana* root apex. (**A**) The apex of the seedling root can be divided into the meristematic zone (consisting of the apical and basal meristem), elongation zone and maturation zone; shown here in a 5-day post germination (dpg) seedling root tip. The longitudinal transcriptomic data sampling sections gathered by [@b6] and [@b8] are indicated. QC, quiescent center, scale bar indicates 1 mm. (**B**) Schematic representation of the cell types in longitudinal and radial cross-sections of the Arabidopsis root apical meristem.](msb201340-f1){#f1}
{#f2}
{ref-type="supplementary-material"} and [Supplementary Table S3](#S1){ref-type="supplementary-material"}).](msb201340-f3){#f3}
{ref-type="supplementary-material"}; [Supplementary Table S2](#S1){ref-type="supplementary-material"}) and the stringent list of 2846 auxin-responsive genes; blue (low) to yellow (high) color code indicates standard deviations from the row mean. Auxin-responsive developing-xylem-enriched genes (top panel) and QC-enriched genes (middle panel) are predominantly upregulated, specifically in the stele; auxin-responsive trichoblast-enriched genes (lower panel) are predominantly downregulated and highly expressed in the epidermis before treatment. (**B**) Boxplot representation of the fold-change distribution of maturing-xylem- (blue), xylem- (white) and developing-xylem- (yellow) enriched genes that significantly respond to auxin treatment in the stele (*t*-test *P*\<0.01, all auxin-responsive genes in the stele are represented by a black box). Black circles represent minimum and maximum values, black lines represent the first and fourth quartiles, boxes represent the second and third quartiles, open circle represents the median; \**P*\<1e−10 χ^2^-test for ratio of induced-to-repressed genes. (**C**) The S18 and S4 marker lines for maturing and developing xylem, respectively (left panel), were used to plot the fold change in expression upon auxin treatment in the stele versus the expression ratio between maturing and developing xylem for the 157 auxin-responsive (maturing and/or developing) xylem-enriched genes. Pearson\'s correlation *R*=−0.58, scale bar indicates 250 μm.](msb201340-f4){#f4}
{ref-type="supplementary-material"}; [@b6]) for the genes that are both significantly responsive to auxin and significantly differentially expressed between meristematic and maturation zones (2437-gene intersect). Pearson\'s correlation *R*=−0.58. (**B**) Heatmap of the spatial expression of auxin-responsive genes (ANOVA treatment *P*\<0.01, 5097 genes) in the 13-slice longitudinal data set (root1; [@b8]). Genes were ordered by fold-change response to auxin treatment; blue (low) to yellow (high) color code indicates standard deviations from the row mean. Upregulated and downregulated genes were further subdivided into groups 1--4 based on relative induction or repression and broad differences in longitudinal expression (red and green color coding). (**C**, **D**) Average normalized longitudinal expression patterns of auxin-responsive genes, ±s.e.m. (groups 1--4 in B). The relative spatial separation of the 13-slice data set ([@b8]) is represented on the x axis and the standard deviations from the row mean on the y axis. (**C**) Longitudinal expression of *archetypal* auxin-responsive genes (groups 1 and 3 in B), consisting of the top 1000 induced and the first 1000 repressed genes. The quiescent center (QC), oscillation zone and first lateral root primordium (LRP) are indicated. (**D**) Longitudinal expression of *graded* auxin-responsive genes (groups 2 and 4 in B), consisting of the remaining 1842 induced and 1255 repressed genes.](msb201340-f5){#f5}
|
tomekkorbak/pile-curse-small
|
PubMed Central
|
Pay For Essay
How It Works if You Pay for Essay
When students find essay writing problematic, it is commonplace for them to look for the perfect solution that is quick and, needless to say, available at a competitive price. Hence, Cheap-Essay-Online.org offers a contemporary writing service with several guarantees to students who need written essay papers now. To buy the papers you need online, just go to your usual search engine and enter a few keywords such as "essay papers to buy." Undoubtedly, you will get a number of reputable service providers, but you will be delighted when you find that the prices at Cheap-Essay-Online.org are refreshingly cheap compared to other providers in our marketplace, and we are much admired by many customers.
Therefore, if you are really decided on buying college essays, you will value the support and help offered by Cheap-Essay-Online. Even if your custom paper is urgent and you haven't yet begun writing it, don't be afraid to contact us. It is great to have a little extra help when you have an essay paper to write and you cannot handle it because you haven't enough time, aren't sufficiently skilled or are on a tight budget. This is a real worry for those who struggle with academic assignments but are willing to pay for essay now. Most students have a multitude of more interesting things to do, which is why they detest writing the essays that are a requirement of most academic institutions. They absorb most of a student's time. However, with the opportunity to buy essays from a reputable writing company, life becomes easier, studying becomes more pleasant and your papers get submitted by deadline.
Don't Be Afraid to Pay for Essay Now
The difficulty that many people face is that they are afraid to ask professional writers or anyone who is a skilled writer for help. In these cases, online assistance is the ideal solution because it doesn't involve personal interaction with a tutor or someone who is more senior than yourself. In the world of virtual communication, students can be more open and more willing to talk. Hence, if you want to pay for essay now, Cheap-Essay-Online.org is delighted to welcome you and you will find the price of our help quite reasonable. Even if you have strong ethics and values, it is still perfectly acceptable to pay for the research essay papers you need from a genuine writing service like ours where we are as reputable a business as any web store, library or teaching establishment.
If you are looking to pay for essay, it is commonplace to look for a modern writing service and it makes sense to ensure you are getting the best possible assistance. We guarantee that your reputation is safe with our company. In seeking professional help with your custom papers, you should find your assignments fun and you will be more productive. If you are short of time, you shouldn't waste any more of it. Rather, consider how much more you can achieve with a professional writer on your side.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Attention enhances feature integration.
Perceptual processing delays between attribute dimensions (e.g. color, form and motion) [Proceedings of the Royal Society of London Series B 264 (1997) 1407] have been attributed to temporal processing asynchronies resulting from functional segregation of visual information [Science 240 (1988) 740]. In addition, several lines of evidence converge to suggest that attention plays an important role in the integration of functionally processed information. However, exactly how attention modulates the temporal integration of information remains unclear. Here, we examined how attention modulates the integration of color and form into a unitary perception. Results suggest that attending to the location of an object enhances the integration of its defining attributes by speeding up the perceptual processing of each attribute dimension. Moreover, the perceptual asynchrony between attributes remains constant across attended and unattended conditions because attention seems to offer each processing dimension an equal processing advantage.
|
tomekkorbak/pile-curse-small
|
PubMed Abstracts
|
Beech Mountain (North Carolina)
For the incorporated town, see Beech Mountain, North Carolina.
Beech Mountain is a mountain in the North Carolina High Country, lying wholly within the Pisgah National Forest. It reaches an elevation of 5,506 feet (1,657 m) and generates feeder streams for the Elk River. Nestled on its top is the Town of Beech Mountain.
Recreation
Beech Mountain offers skiing, snowboarding, and tubing in the winter months. In the summer, recreation includes hiking and mountain biking. Beech Mountain Resort runs chairlifts for downhill mountain biking.
One of the more interesting walking areas is the defunct Land of Oz theme park, which existed briefly in the 1970s; remnants of the park can be visited today.
See also
List of mountains in North Carolina
References
Category:Mountains of North Carolina
Category:Mountains of Avery County, North Carolina
|
tomekkorbak/pile-curse-small
|
Wikipedia (en)
|
UNITED STATES DISTRICT COURT
FOR THE DISTRICT OF COLUMBIA
____________________________________
)
LINDA RAMSEUR, )
)
Plaintiff, )
)
v. ) Civil Action No. 13-0169 (ESH)
)
THOMAS E. PEREZ, Secretary, )
U.S. Department of Labor, )
)
Defendant. )
___________________________________________)
MEMORANDUM OPINION AND ORDER
Plaintiff Linda Ramseur brings this action against Thomas E. Perez, in his official
capacity as the Secretary of the Department of Labor.1 She asserts claims for discrimination on
account of race and sex and retaliation in violation of Title VII of the Civil Rights Act of 1964,
as amended, 42 U.S.C. 2000e-16. Before the Court is defendant’s motion for judgment on the
pleadings, plaintiff’s opposition thereto, and defendant’s reply. For the reasons stated herein,
defendant’s motion is granted in part and denied in part.
BACKGROUND
When the events giving rise to this case occurred, plaintiff was employed by the
Department of Labor (“DOL”) as a Staff Assistant, GS-09, assigned to the Office of the Director
in DOL’s Civil Rights Center (“CRC”). (Compl. ¶ 6, Feb. 6, 2013.) On May 18, 2009,
defendant posted a vacancy announcement for a “GS-11 Staff Assistant” in the CRC. (Id. ¶¶ 1,
6.) The position description included a requirement that a successful applicant must have
1 Pursuant to Rule 25(d) of the Federal Rules of Civil Procedure, Thomas Perez, the current Secretary of Labor, has been substituted for the former Acting Secretary, Seth D. Harris.
“specialized experience in planning, implementing, or evaluating compliance and technical
assistance activities related to recipients of federal financial assistance; conducting EEO and EO
investigations and non-discrimination statutes under Title VI and VII of the Civil Rights Act and
Related Statutes.” (Id. ¶ 1.) Shortly after the advertisement was posted, plaintiff applied for the
position. (Id.)
On October 26, 2009, plaintiff received notice that she had been deemed “unqualified”
for the position because of her lack of specialized experience. (Compl. ¶ 34.) Plaintiff alleges
that this specialized experience is “unrelated and unnecessary” to the position advertised. (Id. ¶
20; see also id. ¶¶ 1, 2, 11-14, 21, 22, 24, 31, 47.) She alleges that the GS-11 Staff Assistant
position “contained the same administrative duties that were already being performed by [her].”
(Id. ¶ 6.) Further, she alleges that the specialized experience requirement had been added to the
job qualifications by her supervisor, Patricia Lamond, specifically to prevent her from qualifying
for the position and that she had never been given the opportunity to gain such experience. (See,
e.g., id. ¶¶ 43, 46.)
On November 5, 2009, plaintiff received a performance rating of “effective” and no
bonus award. (Id. ¶¶ 18, 49.) She claims that, unlike all other CRC employees, she was the only
support staff who did not receive a bonus in 2009, and that she had not been given the
opportunity to participate in a mid-year appraisal that could have informed her that she needed to
improve her performance. (Id. ¶ 49.) Plaintiff also alleges that Eliva Mata forwarded her
performance appraisal to the Human Resource Center without allowing her to add her comments.
(Id.) On both November 17 and 18, 2009, plaintiff asserts that Lamond yelled at her for failing
to copy a document and properly deal with office correspondence. (Id.)
On December 9, 2009, plaintiff submitted an “Informal Complaint Information Form” to
the CRC (Def.’s Mot. for Judgment on the Pleadings (“Def. Mot.”), Ex. 1, May 15, 2013), and
on February 4, 2010, she filed a formal administrative complaint. (Id., Ex. 2 (“EEO Formal
Compl.”).) On April 18, 2012, an EEOC administrative judge dismissed plaintiff’s claims.
On February 6, 2013, plaintiff filed an employment discrimination complaint under Title
VII, claiming that (1) defendant engaged in an unlawful employment practice by including a
requirement in the staff assistant job posting that disproportionately disqualifies minority and/or
women applicants and has no relationship to the tasks expected to be performed (Compl. ¶¶ 52-
56 (Count I)); (2) defendant retaliated against her by giving her a lower performance review and
denying her a performance award, delaying the progress of her administrative claim, and
subjecting her to a hostile work environment (id. ¶¶ 57-61 (Count II)); (3) defendant subjected
her to a retaliatory hostile work environment for “speaking out against the denial of a promotion
opportunity,” by subjecting her to constant yelling and by instructing her to leave post-it notes in
her cubicle when she was not at her desk (id. at 25-28 (Count III)); and (4) she is a victim of
“workplace bullying” because her supervisor constantly yelled at her. (Id. at 28-29 (Count IV).)
Defendant filed an answer (Answer, Apr. 8, 2013), and plaintiff filed a response thereto. (Pl.
Resp. to Answer, Apr. 19, 2013.) The Court held an initial scheduling conference on May 1,
2013, and discovery commenced.
Defendant has now filed a motion for judgment on the pleadings on the ground that
plaintiff failed to exhaust administrative remedies and that the complaint failed to state a claim
upon which relief can be granted.
ANALYSIS
Under Rule 12(c) of the Federal Rules of Civil Procedure, “[a]fter the pleadings are
closed—but early enough not to delay trial—a party may move for judgment on the pleadings.”
Fed. R. Civ. P. 12(c). A Rule 12(c) motion shall be granted “if the moving party demonstrates
that no material fact is in dispute and that it is entitled to judgment as a matter of law.” Schuler
v. PricewaterhouseCoopers, LLP, 514 F.3d 1365, 1370 (D.C. Cir. 2008) (internal quotations
omitted). When evaluating a motion for judgment on the pleadings, courts employ the same
standard that governs a motion to dismiss under Rule 12(b)(6). See Rollins v. Wackenhut Servs.,
Inc., 703 F.3d 122, 129 (D.C. Cir. 2012). Thus, the “complaint must contain sufficient factual
matter, accepted as true, to ‘state a claim to relief that is plausible on its face.’” Ashcroft v.
Iqbal, 556 U.S. 662, 678 (2009) (quoting Bell Atl. Corp. v. Twombly, 550 U.S. 544, 570 (2007)).
A court “should take all of the factual allegations in the complaint as true,” but is “not bound to
accept as true a legal conclusion couched as a factual allegation.” Id. at 678 (internal quotations
omitted).
I. COUNT I: DISCRIMINATION
Plaintiff claims that defendant engaged in an unlawful employment practice by including
a requirement in the staff assistant job posting that disproportionately disqualifies minority
and/or women applicants and has no relationship to the tasks expected to be performed. (Compl.
¶¶ 1, 52-56.) Defendant argues that plaintiff’s disparate impact claim fails to state a claim upon
which relief can be granted, or, alternatively, that plaintiff has failed to exhaust her
administrative remedies as to her disparate impact theory. (Def. Mot. at 11-16.) The Court
disagrees, for the allegations in plaintiff’s complaint are sufficient to survive a motion to dismiss
and plaintiff has exhausted her administrative remedies. In addition, defendant overlooks the
fact that Count I alleges both a claim for disparate impact and disparate treatment.
A. Failure to State a Claim
“[A] plaintiff establishes a prima facie disparate-impact claim by showing that the
employer ‘uses a particular employment practice that causes a disparate impact’ on one of the
prohibited bases.” Lewis v. City of Chicago, 560 U.S. 205, 130 S. Ct. 2191, 2197-98 (2010)
(quoting Ricci v. DeStefano, 557 U.S. 557, 577-78 (2009)). “Once the employment practice at
issue has been identified, causation must be proved; that is, the plaintiff must offer statistical
evidence of a kind and degree sufficient to show that the practice in question has caused the
exclusion of applicants for jobs or promotions because of their membership in a protected
group.” Watson v. Fort Worth Bank & Trust, 487 U.S. 977, 994 (1988). “An employer may
defend against liability by demonstrating that the practice is ‘job related for the position in
question and consistent with business necessity.’” Ricci, 557 U.S. at 578 (quoting 42 U.S.C. §
2000e-2(k)(1)(A)(i)). “Even if the employer meets that burden, however, a plaintiff may still
succeed by showing that the employer refuses to adopt an available alternative employment
practice that has less disparate impact and serves the employer’s legitimate needs.” Id. (citing §§
2000e–2(k)(1)(A)(ii) and (C)).
However, to survive a motion to dismiss, a plaintiff need not “make out a prima facie
case of discrimination.” Ali v. District of Columbia, 697 F. Supp. 2d 88, 92 (D.D.C. 2010).
Although “[c]ommon sense and fairness . . . dictate that plaintiff must, at a minimum, allege
some statistical disparity, however elementary, in order for the defense to have any sense of the
nature and scope of the allegation,” Brady v. Livingood, 360 F. Supp. 2d 94, 100 (D.D.C. 2004),
plaintiff’s allegations satisfy this standard, especially considering that she was proceeding pro se
at the time she filed her complaint. See Haines v. Kerner, 404 U.S. 519, 520 (1972) (pro se
pleading held to “less stringent standards than formal pleadings drafted by lawyers”). For
example, the complaint includes allegations that defendant “took the duties for Equal
Opportunity Specialists and made it the criteria for the women applicants applying for the [GS-
11 Staff Assistant] vacancy, who are experienced at performing administrative duties,” and as a
result, “each African-American and/or women’s application for the [GS-11] who was an
experienced administrative applicant was impacted.” (Compl. ¶ 19.) In addition, the complaint
alleges that “Defendant’s unrelated and unnecessary criteria for the staff assistant vacancy had a
discriminatory and d[i]sperate impact on eight women and African-American applicants” (id. ¶
20), and that the Staff Assistant position “is traditionally held by women and/or African-
American women, who are experienced in performing administrative duties.” (Id. ¶ 24.) The
gist of these allegations is that the impact of the specialized experienced qualification was to
disproportionately disqualify female and African-American applicants. While it may be that
plaintiff’s disparate impact claim will not ultimately survive, she has alleged at least an
“elementary” statistical disparity that is sufficient to allow the claim to proceed. See, e.g., Munro
v. LaHood, 839 F. Supp. 2d 354, 363 (D.D.C. 2012) (motion to dismiss denied despite “doubts as
to whether plaintiff will ultimately be able to prove” discrimination claim).
B. Exhaustion
“[T]imely exhaustion of administrative remedies is a prerequisite to a Title VII action
against the federal government.” Steele v. Schafer, 535 F.3d 689, 693 (D.C. Cir. 2008).
Defendant argues that plaintiff failed to exhaust her disparate impact claim because that claim is
not “like or reasonably related to” her underlying administrative claim. (Def. Mot. at 8 (citing
Park v. Howard Univ., 71 F.3d 904, 907 (D.C. Cir. 1995))).
The purpose of the exhaustion requirement is “to give federal agencies an opportunity to
handle matters internally whenever possible and to ensure that the federal courts are burdened
only when reasonably necessary,” not to create a “procedural roadblock to access to the courts.”
Brown v. Marsh, 777 F.2d 8, 14 (D.C. Cir. 1985) (internal quotations and citations omitted).
Thus, “an administrative charge is not a blueprint for the litigation to follow . . . [and] the exact
wording of the charge of discrimination need not presage with literary exactitude the judicial
pleadings which may follow.” Howard v. Gutierrez, 571 F. Supp. 2d 145, 157 (D.D.C. 2008);
Williams v. Dodaro, 576 F. Supp. 2d 72, 82-83 (D.D.C. 2008) (“the fact that [plaintiff]
describe[s] her allegations with greater specificity in [the civil] proceedings does not establish
that she failed adequately to present them at the administrative level”). Thus, although an
employee may only file claims that are “like or reasonably related to the allegations of the [EEO]
charge and grow[ ] out of such allegations," Park, 71 F.3d at 907, "the critical question is
whether the claims set forth in the civil complaint come within the scope of the EEOC
investigation which can reasonably be expected to grow out of the charge of discrimination.”
Howard, 571 F. Supp. 2d at 157; see Park, 71 F.3d at 907 ("At a minimum, the Title VII claims
must arise from the administrative investigation that can reasonably be expected to follow the
charge of discrimination.” (internal quotations omitted)). In addition, “[d]ocuments filed by an
employee with the EEOC should be construed, to the extent consistent with permissible rules of
interpretation, to protect the employee’s rights and statutory remedies.” Fed. Express Corp. v.
Holowecki, 552 U.S. 389, 406 (2008).
Here, plaintiff’s disparate impact claim satisfies the “like or reasonably related” test. See
Park, 71 F.3d at 907. In her formal administrative complaint, plaintiff alleged discrimination
based on race, sex, and color based on her belief that “specialized experience has no relevance to
the GS-11 Staff Assistant vacancy announced,” that this qualifying criteria had never been
required previously for this level or type of position, and that the only reason she did not get the
job was because of the specialized experience qualification. (EEO Formal Compl. at 1). Thus,
plaintiff’s administrative complaint identified a facially neutral employment policy and alleged
that it was responsible for her not getting the job she applied for. That is sufficient to put the
defendant on notice to investigate the policy. See, e.g., Watkins v. City of Chicago, 992 F. Supp.
971, 973 (N.D. Ill. 1998) (disparate impact claim was reasonably related to disparate treatment
claim where plaintiff alleged that she was denied promotion due to the city’s policy of
disqualifying individuals arrested for felonies because plaintiff’s charge would have led to an
investigation into whether such a policy existed); DiPompo v. W. Point Military Acad., 708 F.
Supp. 540, 547-48 (S.D.N.Y. 1989) (plaintiff exhausted disparate impact claim when he alleged
in his EEO complaint that a reading test operated to discriminate against him because of his
handicap of dyslexia even though EEO officer “never considered” whether plaintiff might have a
disparate impact claim); cf. Pacheco v. Mineta, 448 F.3d 783, 792 (5th Cir. 2006) (“[A] disparate
impact investigation could not reasonably have been expected to grow out of [plaintiff’s]
administrative charge because of the following matters taken together: (1) it facially alleged
disparate treatment; (2) it identified no neutral employment policy; and (3) it complained of past
incidents of disparate treatment only.”)
C. Disparate Treatment
In challenging plaintiff’s disparate impact claim, defendant overlooks plaintiff’s claim for
disparate treatment, which is reflected by her formal administrative complaint (EEO Formal
Compl. at 1), her civil complaint (Compl. ¶¶ 18, 49), and in her response to defendant’s motion
for judgment on the pleadings. (Pl.’s Opp’n to Def. Mot. at 13, June 28, 2013.) “Although
disparate treatment and disparate impact allegations are substantiated using different types of
evidence, they are both methods of proving Title VII discrimination and may be plead in a single
claim.” See Watkins, 992 F. Supp. at 973.
Accordingly, plaintiff may proceed with her discrimination claim under Count I as both a
disparate treatment and disparate impact claim.
II. COUNT II: RETALIATION
Under Title VII, an employer may not discriminate against an employee because the
employee “has opposed any practice made an unlawful practice by [Title VII], or because [the
employee] has made a charge, testified, assisted, or participated in any manner in an
investigation, proceeding, or hearing under [Title VII].” 42 U.S.C. § 2000e-3(a). The policy
rationale behind barring retaliation is to provide protection to an employee seeking to enforce
Title VII’s basic guarantees. Burlington N. & Santa Fe Ry. v. White, 548 U.S. 53, 68 (2006).
To state a claim for retaliation under Title VII, a plaintiff must show that: (1) she engaged
in protected activity; (2) she suffered a materially adverse action; and (3) a causal connection
exists between the protected activity and the adverse action. Holcomb v. Powell, 433 F.3d 889,
901-02 (D.C. Cir. 2006). Regarding the element of causation, the Supreme Court has recently
held that Title VII retaliation claims “require proof that the desire to retaliate was the but-for
cause of the challenged employment action,” a stricter test than the “motivating factor” test
applicable to status-based discrimination. Univ. of Texas Sw. Med. Ctr. v. Nassar, 133 S. Ct.
2517, 2528 (2013).
Plaintiff’s first retaliation claim is that defendant retaliated against her by giving her a
low performance rating; forwarding that rating to the Human Resources Center without allowing
plaintiff an opportunity to add her comments to the review, as allegedly promised; not giving her
a bonus; and subjecting her to a hostile work environment.2 (Compl. ¶¶ 57-61.) Defendant
argues that this claim fails as a matter of law because these actions predated any protected
activity. (Def. Mot. at 16-18.) As plaintiff has failed to respond to this argument, the Court will
treat this claim as conceded. See, e.g., McMillan v. Wash. Metro. Area Transit Auth., 898 F.
Supp. 2d 64, 69 (D.D.C. 2012) (“It is well understood in this Circuit that when a plaintiff files an
opposition to a motion . . . addressing only certain arguments raised by the defendant, a court
may treat those arguments that the plaintiff failed to address as conceded.”) But even if the
Court were to reach the merits, it is clear from the face of her complaint that plaintiff could not
allege a causal connection between these events and her protected activity. As defendant points
out, plaintiff’s first protected activity occurred on December 9, 2009 (Def. Mot. at 16-18), but
the acts that she is complaining about all occurred in November 2009. (EEO Formal Compl. at
1.) Thus, it is factually impossible for plaintiff to prove causation as to this retaliation claim.
Plaintiff’s second retaliation claim is that defendant retaliated against her by failing to
comply with EEOC procedures and twice delaying the investigative process. (Compl. ¶¶ 57-61.)
Defendant argues that this retaliation claim also fails as a matter of law because there is no cause
of action under Title VII for delay or interference in the administrative process. (Def.’s Mot. at
18-19.) Defendant is correct. “‘There is no cause of action’ for federal employees to bring
retaliation or discrimination claims based on ‘complaints of delay or interference in the
investigative process.’” Diggs v. Potter, 700 F. Supp. 2d 20, 46 (D.D.C. 2010) (quoting Keeley
v. Small, 391 F. Supp. 2d 30, 45 (D.D.C. 2005)); see also Trout v. Lehman, No. 82-2507, 1983
WL 578, at *1 (D.D.C. July 7, 1983) (retaliation claim regarding interference with an EEOC
investigation is not about a condition of employment and “therefore not cognizable as a separate
2 Plaintiff's claim that defendant retaliated against her by subjecting her to a hostile work environment is addressed in Section III, infra.
cause of action in a judicial proceeding brought under Title VII”).
As neither of plaintiff's retaliation claims is viable, Count II will be dismissed.3
III. COUNT III: HOSTILE WORK ENVIRONMENT
Plaintiff claims that defendant subjected her to a hostile working environment in
retaliation for speaking out against the “denial of a promotion opportunity.” (Compl. at 25-28.)
Specifically, plaintiff alleges that Lamond subjected her to constant yelling and instructed her to
leave post-it notes in her cubicle when she was not at her desk. (Id.)
To prevail on a retaliatory hostile work environment claim, “a plaintiff must show that
h[er] employer subjected h[er] to ‘discriminatory intimidation, ridicule, and insult’ that is
‘sufficiently severe or pervasive to alter the conditions of the victim’s employment and create an
abusive working environment.’” Baloch v. Kempthorne, 550 F.3d 1191, 1201 (D.C. Cir. 2008)
(quoting Harris v. Forklift Sys., Inc., 510 U.S. 17, 21 (1993)); accord Hussain v. Nicholson, 435
F.3d 359, 366 (D.C. Cir. 2006). “To determine whether a hostile work environment exists, the
court looks to the totality of the circumstances, including the frequency of the discriminatory
conduct, its severity, its offensiveness, and whether it interferes with an employee’s work
performance.” Baloch, 550 F.3d at 1201. The “conduct must be extreme to amount to a change
in the terms and conditions of employment.” Faragher v. City of Boca Raton, 524 U.S. 775, 788
(1998). This standard is “sufficiently demanding to ensure that Title VII does not become a
general civility code." Id. (internal quotations omitted).
Defendant argues that plaintiff has not stated a claim for a retaliatory hostile work
environment because the alleged conduct is not “extreme” enough “to amount to a change in the
3 Since defendant's motion to dismiss plaintiff's retaliation claim has been granted on the merits, the Court does not need to address defendant's alternate argument that plaintiff failed to exhaust her administrative remedies as to this claim. (Def. Mot. at 11-13.)
terms and conditions of employment.” (Def. Mot. at 20-21 (quoting Faragher, 524 U.S. at
788).) At this stage in the proceedings, the Court is unable to conclude that the allegations in the
complaint are deficient as a matter of law and, therefore, Count III will not be dismissed.
IV. COUNT IV: WORK PLACE BULLYING
Plaintiff claims that she was a victim of workplace bullying because of “constant yelling”
by Lamond and her having “humiliated” her by “sabotaging” the vacancy announcement to
portray plaintiff as “unqualified.” (Compl. at 29.) Defendant correctly states that workplace
bullying is not an independently cognizable claim under Title VII, but that if the bullying is
sufficiently “severe or pervasive to alter the conditions of the victim’s employment and create an
abusive working environment,” plaintiff may be able to recover under a hostile work
environment claim. See Baloch, 550 F.3d at 1201; accord Hussain, 435 F.3d at 366. Nor does
the Indiana Supreme Court case plaintiff refers to in her complaint suggest that there is an
independently cognizable common law claim for “workplace bullying.” See Raess v. Doescher,
883 N.E.2d 790, 799 (Ind. 2008). Thus the Court will dismiss Count IV and treat its allegations
as part of Count III, plaintiff's hostile work environment claim.
CONCLUSION
For the reasons stated above, it is hereby ORDERED that defendant’s motion for
judgment on the pleadings is GRANTED IN PART AND DENIED IN PART; it is further
ORDERED that the motion is GRANTED as to Counts II and IV and those counts are
DISMISSED; and it is further ORDERED that the motion is DENIED as to Counts I and III.
/s/
ELLEN SEGAL HUVELLE
United States District Judge
DATE: August 23, 2013
|
tomekkorbak/pile-curse-small
|
FreeLaw
|
Baghdad (disambiguation)
Baghdad is the capital of Iraq.
Baghdad may also refer to:
Places
In Iraq
Baghdad Governorate, the region encompassing the city and its surrounding areas
Baghdad Province, Ottoman Empire
Baghdad Central Station, a train station
Baghdad International Airport
Round city of Baghdad
University of Baghdad
Baghdad College, a boys' high school
Baghdad (West Syriac Diocese) (9th–13th centuries)
Elsewhere
Baghdād, Afghanistan
Baghdad, Iran
Baghdad, Pakistan
Bagdad, Tasmania, Australia
Lake Baghdad, Rottnest Island, Western Australia
Baghdad Stadium, Kwekwe, Zimbabwe
Baghdad Street (Damascus), Syria
Baghdad Street (Singapore)
Bağdat Avenue, Istanbul, Turkey
Other uses
Baghdad (EP), by The Offspring
Baghdad Satellite Channel, a television network
Baghdad Soft Drinks Co
Roman Catholic Archdiocese of Baghdad, Iraq
7079 Baghdad, an asteroid
Sophiane Baghdad (born 1980), French-Algerian football player
See also
Bagdad (disambiguation)
Baghdadi (disambiguation)
|
tomekkorbak/pile-curse-small
|
Wikipedia (en)
|
Rise Above Records
Rise Above Records is a London, England-based independent record label owned by Lee Dorrian (of the band Cathedral and formerly of Napalm Death).
Founding
Lee Dorrian started Rise Above Records in 1988 without intending the label to be an ongoing concern. It was during the same year that he had left his previous band, Napalm Death. Dorrian explained that it was predominantly done to "get the dole off my back as they were asking a lot of questions as Napalm Death were on the front cover of the NME and on TV three times in one week, but I was still living in a council flat and couldn't even afford the rent."
Rise Above Records was initially started up on the Enterprise Allowance Scheme, a Conservative government initiative which fronted cash to young entrepreneurs.
Style
Dorrian's initial intention was to release hardcore punk music and limited edition releases. The label was named after the Black Flag song of the same name. The first release from the label was a Napalm Death live EP (extended play), followed by releases from bands such as S.O.B. and Long Cold Stare.
Dorrian was a fan of bands such as Candlemass, Saint Vitus and Trouble but stated there "wasn't really a 'doom scene' as such" and that "doom became an obsession" for him. Finding that there was a scattering of doom metal groups in the United States (specifically Maryland), Dorrian attempted to "give the scene a boost" and released a compilation titled Dark Passages, stating that "if people asked what doom was you could point to that record and there was something tangible to grab hold of." Dorrian found that the release "didn't get as many bands as we'd have liked, hence the reason why there are two Cathedral tracks on there." Dorrian admitted later that it took until 1997 "that a new wave of doom bands started to appear. Ever since then it's become really strong." Dorrian specifically noted Electric Wizard's Come My Fanatics... as being "the turning point of everything."
Artists
Current Artists
Angel Witch
Antisect
Age Of Taurus
ASTRA
Beastmaker
Blood Ceremony
Church of Misery
Death Penalty
Diagonal
Gentlemans Pistols
Galley Beggar
Hidden Masters
Horisont
Saturn
Septic Tank
Uncle Acid & the Deadbeats
Witchsorrow
Workshed
Former/One-offs/etc.
Bang
Bottom
Capricorns
Cathedral
Chrome Hoof
Circulus
Comus
Debris Inc.
Electric Wizard
Firebird
Goatsnake
Ghost
The Gates of Slumber
Grand Magus
Hangnail
Incredible Hog
The Iron Maiden
The Last Drop
Leaf Hound
Long Cold Stare
Lucifer
Mourn
Moss
Napalm Death
Naevus
The Oath
Orange Goblin
Pentagram
Penance
Pod People
Purson
Revelation
Sally
Sea of Green
Serpentcult
Shallow
sHEAVY
Sleep
S.O.B.
Sunn O)))
Taint
Teeth of Lions Rule the Divine
Unearthly Trance
Witchcraft
Discography
Main releases
RISE 001 - Napalm Death Live - 7" EP (vinyl only)
RISE 002 - S.O.B. Thrash Night - 7" EP (vinyl only)
RISE 003 - Long Cold Stare - Tired Eyes LP (vinyl only)
RISE 004 - S.O.B. - What's the Truth LP/MC/CD
RISE 005 - Various Artists - Dark Passages LP/MC/CD
RISE 006 - Revelation - Salvations Answer LP/MC/CD
RISE 007 - Penance - The Road Less Travelled LP/MC/CD
RISE 008 - Cathedral - In Memorium CD/Maxi EP (Ltd only purple vinyl)
RISE 009 - Electric Wizard - Electric Wizard CD/LP (Ltd only green vinyl)
RISE 010 - Mourn - Mourn CD
RISE 011 - Electric Wizard/Our Haunted Kingdom split 7"
RISE 012 - Various Artists - Dark Passages II - CD
CDRISE 13 - Various Artists - Magick Rock vol. 1 CD
CDRISE 14 - Electric Wizard - Come My Fanatics... CD
CDRISE 15 - Orange Goblin - Frequencies from Planet Ten CD
CDRISE 16 - Naevus - Sun Meditation CD
CDRISE 17 - sHEAVY - The Electric Sleep CD
CDRISE 18 - Orange Goblin - Time Travelling Blues CD
CDRISE 19 - Sleep - Jerusalem CD
CDRISE 20 - Electric Wizard - Come My Fanatics.../Electric Wizard 2xCD
CDRISE 21 - Cathedral - In Memoriam CD
CDRISE 22 - Goatsnake - Goatsnake Vol. 1 CD
CDRISE 23 - Hangnail - Ten Days Before Summer CD
CDRISE 24 - Sally - Sally CD
CDRISE 25 - Orange Goblin - The Big Black CD/LP
CDRISE 26 - sHEAVY - Celestial Hi-Fi CD
CDRISE 27 - Electric Wizard - Dopethrone CD
CDRISE 28 - Shallow - 16 Sunsets in 24 Hours CD
CDRISE 29 - Sunn O))) ØØ Void CD
CDRISE 30 - Goatsnake - Flower of Disease CD
CDRISE 31 - Firebird - Firebird CD
RISECD 32 - Hangnail - Clouds in the Head CD
RISECD 33 - Sea of Green - Time to Fly CD
RISECD 34 - Grand Magus - Grand Magus CD
RISECD 35 - The Last Drop - Where Were You Living One Year from Now? CD
RISECD/LP 36 - Electric Wizard - Let Us Prey CD/LP
RISECD 37 - Orange Goblin - Coup de Grace (CD)
RISECD 38 - Orange Goblin - Time Travelling Blues/Frequencies From Planet Ten 2xCD
RISECD 39 - sHEAVY - Synchronized CD
RISECD 40 - sHEAVY - The Electric Sleep/Blue Sky Mind 2xCD
RISECD 41 - Teeth of Lions Rule the Divine - Rampton CD
RISECD 42 - Bottom - Feels So Good When You're Gone CD
RISECD 43 - Sally - C-Earth CD
RISECD/LP 44 - Grand Magus - Monument Cd/LP (blue vinyl, gatefold sleeve)
RISECD/LP 45 - Unearthly Trance - Season of Séance, Science of Silence CD/2LP (500 copies, black vinyl, deluxe gatefold)
RISECD/LP 46 - Orange Goblin - Thieving from the House of God Cd/LP (1000 copies, orange vinyl, deluxe gatefold)
RISECD/LP 47 - Witchcraft - Witchcraft CD/LP w/bonus track/PicDisc
RISECD/LP 48 - Electric Wizard - We Live Cd/2LP (1000 copies, purple vinyl)
RISECD 49 - Pod People - Doom Saloon CD
RISECD 50 - ???
RISE7 51 - Orange Goblin - Some You Win, Some You Lose 7"
RISECD/LP 52 - Electric Wizard - Dopethrone re-issue CD/2LP (1000 white vinyl, 500 black vinyl. 2nd press: 50 Purple Silk, 100 Transparent Amber, 100 clear, 550 black)
RISECD 53 - Orange Goblin - The Big Black re-issue CD
RISEMCD/MLP 54 - Capricorns - Capricorns CD/LP
RISECD/LP 55 - Unearthly Trance - In the Red CD/LP (blood red vinyl)
RISEMLP 56 - Thy Grief Eternal - On Blackened Wings 12" (500 black vinyl, 500 silver/grey vinyl)
RISEMLP 57 - Eternal - Lucifer's Children 12" (500 clear vinyl, 500 black vinyl)
RISECD 58 - sHEAVY - Republic? CD
RISECD 59 - Debris Inc. - Debris Inc. CD
RISECD/LP 60 - Grand Magus - Wolf's Return CD/LP (500 black vinyl, 500 silver/grey vinyl)
RISECD 61 - ???
RISECD/LP 62 - Witchcraft - Firewood CD/LP (deluxe gatefold, 500 gold vinyl, 500 black vinyl)
RISECD/LP 63 - Circulus - The Lick on the Tip of an Envelope Yet to Be Sent CD/LP (1000 black vinyl, 500 swirly vinyl, 100 clear vinyl)
RISE7 64 - Circulus/Witchcraft split 7" (500 copies)
RISE7/MCD 65 - Circulus - Swallow Cd single/7" (bright green and yellow)
RISECD/LP 66 - Taint - The Ruin of Nova Roma CD/2LP
RISECD 67 - Capricorns - Ruder Forms Survive CD
RISE7 68 - Leaf Hound - "Freelance Fiend" 7"
RISECD 69 - Grand Magus - Grand Magus (2006 re-issue w/ bonus tracks) CD
RISECD 70 - Electric Wizard - Pre-Electric Wizard 1989-1994 CD
RISECD/LP 71 - Electric Wizard - Electric Wizard (re-master) digiCD/LP w/bonus 7" (500 black vinyl, 500 ice blue vinyl, 500 luminous lime green vinyl)
RISECD/LP 72 - Electric Wizard - Come My Fanatics... (re-master) digiCD/2LP w/bonus 7" (400 violet sparkle vinyl, 100 violet sparkle w/colored 7", 500 deep red vinyl, 500 black vinyl)
RISECD 73 - Electric Wizard - Dopethrone (re-master) digiCD
RISECD 74 - Electric Wizard - Let Us Prey (re-master) digiCD/2xLP (100 clear vinyl, 500 deep red vinyl, 500 black vinyl)
RISECD 75 - Electric Wizard - We Live (re-master) digiCD
RISECD 76 - Orange Goblin - Frequencies From Planet Ten. Reissued in 2011 with three bonus tracks.
RISECD 77 - Orange Goblin - Time Travelling Blues. Reissued in early 2011 with three bonus tracks.
RISECD 78 - Orange Goblin - The Big Black
RISECD 79 - Orange Goblin - Coup De Grace. Reissued in 2011 with three bonus tracks.
RISECD 80 - Orange Goblin - Thieving From The House Of God
RISECD/LP 81 - Firebird - Hot Wings
RISECD/LP 82 - Mourn - Mourn CD/LP (100 clear vinyl, 200 leaf green vinyl, 400 black vinyl)
RISECD 83 - Litmus - Planetfall CD
RISE7 84 - Burning Saviours - "The Giant" 7" (100 clear vinyl 400 black vinyl)
RISECD 85 - ???
RISE7 86 - Gentlemans Pistols - "The Lady" 7" (100 clear vinyl, 400 black vinyl)
RISE7 87 - Witchcraft - "If Crimson Was Your Colour" 7" (225 black vinyl, 225 clear vinyl)
RISEMCD/MLP 88 - Winters - High As Satellites CD/LP (black vinyl, blue vinyl)
RISECD 89 - Teeth of Lions Rule the Divine - Rampton deluxe edition
RISECD 90 - ???
RISECD 91 - ???
RISEMCD/MLP 92 - Chrome Hoof - Beyond Zade
RISECD/LP 93 - Circulus - Clocks Are Like People digiCD/LP (500 white vinyl w/ bonus 7", 700 blue vinyl, 500 black vinyl)
RISE7/CD 94 - Circulus - Song of Our Despair CD single/7" (black vinyl, clear vinyl, violet vinyl)
RISE7 95 - Moss/Monarch - split 7" (50 sea blue vinyl, 100 clear vinyl 400 black vinyl)
RISECD 96 - Winters - Black Clouds in Twin Galaxies CD
RISECD 97 - ???
RISECD/LP 98 - Gentlemans Pistols - Gentlemans Pistols CD/LP (100 clear vinyl, 200 yellow vinyl, 200 black vinyl)
RISELP 99 - Miasma & The Carousel of Headless Horses - Perils
RISECD/LP 100 - Electric Wizard - Witchcult Today CD/LP (100 purple vinyl, 200 black sparkle vinyl, 200 green vinyl)
RISE7 101 - Diagonal - Heavy Language 7" (500 black sparkle vinyl)
RISECD 102 - Never released (According to Jeremy at Rise Above)
RISECD/LP 103 - Witchcraft - The Alchemist CD/LP (25 ultra blue vinyl, 50 clear vinyl, 500 magnolia vinyl, 400 black sparkle vinyl, 500 black vinyl)
RISECD/LP 104 - Taint - Secrets and Lies CD
RISE10 107 - Atavist - Alchemic Resurrection 10"
RISELP 108 - Moss - Sub Templum 2LP
RISECD/LP 109 - Blood Ceremony - Blood Ceremony CD/LP (300 red/black vinyl w/bonus 7", 300 purple vinyl, 300 black vinyl)
RISECD/LP 112 - Serpentcult - Weight of Light CD/LP (300 silver/grey vinyl w/bonus 7", 300 white/black splatter vinyl, 300 black vinyl)
RISECD/LP 113 - Grand Magus - Iron Will CD/LP (200 clear vinyl, 200 white vinyl, 300 black vinyl)
RISE7 114 - Crowning Glory/Gates of Slumber split 7" (black vinyl, green vinyl, blue vinyl)
RISECD 115 - Capricorns - River Bear Your Bones CD
RISE12/116 - Electric Wizard/Reverend Bizarre - split 12" EP (350 blood red vinyl w/poster, 500 purple vinyl, 500 clear vinyl, 500 silver vinyl, 500 black vinyl)
RISECD 124 - Ghost - Opus Eponymous
RISECD 130 - Electric Wizard- Black Masses CD
In early 2011 the label also reissued five Orange Goblin albums with bonus tracks, largely covers or demo versions of preceding tracks, issued under the same catalogue numbers as the original releases.
Rise Above Relics releases
RAR7 001 - Luv Machine - "Witches Wand" 7"
RARCD/LP 001 - Luv Machine - Turns You On! CD/LP (500 black vinyl, 400 cerise vinyl, 100 clear vinyl)
RARCD/LP 002 - Possessed - Exploration CD/LP
RARCD/LP 004 - AX - "You've Been So Bad"
RARCD/LP 005 - Necromandus - "Orexis of Death & Live"
RARLP 006 - Comus - "First Utterance"
RARLP 007 - Mellow Candle - "Swaddling Songs PLUS" Deluxe Boxset
RARCD/LP 008 - Steel Mill - "Jewels of the Forest"
RARCD/LP 009 - Incredible Hog - "Volume 1 + 4"
RARCDBOX010 - Bang - "Bullets 4 x CD Box Set"
RARCD/LP 011 - The (Original) Iron Maiden - "Maiden Voyage" CD/LP (Bonus vinyl single with LP by BUM) + extensive booklet
RARCD/LP 013 - Rog & Pip - "Our Revolution"
See also
List of record labels
List of independent UK record labels
Notes
References
External links
Official site
Category:Doom metal record labels
Category:British independent record labels
|
tomekkorbak/pile-curse-small
|
Wikipedia (en)
|
For Inovio Pharmaceuticals (NYSEMKT:INO), 2013 was a breakthrough year. The company had catalyst after catalyst and soared with gains of nearly 500 percent. OncoSec Medical, an Inovio peer developing therapeutics with the same technology, posted gains of 150 percent in 2013 — also on a slew of stock-moving catalysts. While the gains last year were remarkable, the big question is — can these gains continue in 2014 with another strong line-up of potentially stock-moving events?
Inovio & OncoSec: Developing Electroporation to Treat Disease
Both Inovio and OncoSec have developed a pipeline around electroporation. This is a technology that involves using electrical pulses to create temporary pores, which then allow for better uptake of an agent. This approach is based on the fact that human cells are designed to resist the entry of foreign materials through the outer membrane, thus making an agent less effective by the time it reaches its destination. But with electroporation a path is created, one with no barriers, where an agent can essentially be placed where it’s needed, maintaining its strength and full efficiency.
As a result, less of an agent is needed to achieve a therapeutic effect, which then decreases the side effects and increases the effectiveness of the agent. Inovio and OncoSec have been trying to prove this technology effective, and in 2013, both companies made major strides in the right direction.
Inovio: Year in Review
Inovio has approximately 12 candidates in its pipeline, used to treat both cancers and infectious diseases. Therefore, let's briefly look at the last 12 months and identify why such excitement was created. Inovio announced that its Universal H1N1 Influenza Vaccine achieved protective immune responses comparable to conventional vaccines. The company further announced that in a preclinical study of its SynCon DNA vaccine against Ebola and Marburg viruses, the vaccine induced strong and broad immune responses and demonstrated 100 percent protection against death. Both pieces of data, although early stage, show a strong effect against widespread and deadly viruses. The above news was important to shareholders, but did not cause a great deal of stock movement; it wasn't until late June through August that shares of Inovio really began to move with exceptional volatility.
On June 14, Inovio announced that its Universal H7N9 vaccine generated protective HAI antibodies in 100 percent of tested animals. Moreover, 100 percent of the vaccinated animals neither got sick nor died of the virus. On July 10, results of the HIV vaccine were published in a peer reviewed Journal of Infectious Diseases. Inovio found that its CELLECTRA device for administering its Pennvax-B HIV vaccine improved the effectiveness of the drug in Phase I testing.
On July 18, Inovio announced that electroporation technology significantly enhances the ability of a DNA therapy to stimulate blood vessel growth. On July 24, Inovio's hTERT DNA cancer vaccine administered with CELLECTRA generated robust and broad immune responses in a preclinical trial. It broke the immune system's tolerance to its self-antigens, induced T-cells with tumor-killing function and increased the rate of survival. In November, its MERS vaccine induced robust immune responses in a clinical trial. As you can see, it was quite an exceptional and busy year for Inovio Pharmaceuticals. While not data related, one can't ignore the deal with Roche — two products worth $10 million upfront and up to $412.5 million pending certain milestones.
OncoSec: Year in Review
As previously stated, OncoSec uses the same approach as Inovio, and while OncoSec’s pipeline is not as large, the company did begin 2013 with three ongoing Phase 2 open label multi-center clinical trials. In these trials OncoSec is treating metastatic melanoma, Merkel cell carcinoma, and cutaneous T-cell lymphoma. As we look back at last year, it was news from its melanoma and MCC trials that really moved the stock. OncoSec provided several safety and interim looks at its 15-patient MCC study. So far, the company’s platform has proven to be safe, and at the last check all patients in the study had already experienced an uptake of IL-12 of at least 100 fold, some up to 1,000 fold.
Ultimately, to determine whether electroporation works, uptake has to be measured, as it shows the level of IL-12 reaching the target with electroporation versus without it. IL-12 is an immunotherapy compound, one that is extremely effective, but has rather unpleasant side effects. Therefore, if OncoSec can increase the uptake (which it has) while proving it safe (so far it has), then MCC could be a key program for OncoSec.
Next is melanoma, which was the big value driver of 2013. Like MCC, OncoSec reported several updates, but the key difference was complete enrollment with data on 21 of 25 patients. As of December 16, 38.1 percent of the 21 patients assessed achieved a complete response lasting in excess of six months. Moreover, 61.1 percent of patients saw tumor shrinkage in excess of 30 percent. This data was highly impressive and is the main reason that OncoSec has traded higher by 63 percent since its December announcement.
What Will 2014 Bring?
Clearly, we can see why both Inovio and OncoSec saw such rapid stock appreciation in 2013. Last year, we did not receive any data from large randomized studies. However, what we received was a collection of data on small patient populations showing electroporation’s effect on a wide array of diseases. As a result, we can at least suggest that electroporation does in fact work, but the question is which programs will have success in larger trials?
This brings up an interesting point: 2013 put electroporation on the map, but 2014 will be a breakthrough year, one way or the other. First, for Inovio it will report top-line data from a Phase 2 study of VGX-3100 for the treatment of cervical dysplasia in mid-2014. The Phase 1 study demonstrated an immune response, and was successful at killing cells that had been changed to precancerous dysplasia by the presence of HPV. This program is Inovio’s most advanced, and pending the outcome it will change how investors view the rest of the company’s platform and programs.
Then, Inovio will plan, initiate, and present some data from other Phase 1/2a trials across a handful of programs, but nothing as robust as VGX-3100. Therefore, if the VGX-3100 data are positive we can expect significant gains, as the combined pipeline likely represents billions of dollars in potential annual sales. With that said, OncoSec has its own data to report, and an infectious disease trial might not be the best indicator of success or failure for its cancer study.
OncoSec could see tremendous gains when it reports final data for its melanoma trial and plans a launch for a larger trial treating this disease. Also, in 2014 we will see how OncoSec treats cutaneous T-cell lymphoma and the final results for Merkel cell carcinoma. Not to mention, OncoSec has a sponsored research agreement and is planning to initiate a new trial treating a solid tumor indication.
In the company’s strategic update it specifically mentioned combining its electroporation device with anti-CTLA4, anti-PD-1, and anti-PD-L1. As you know, anti-PD-1s specifically have been among the most widely discussed cancer therapeutics of the last year, after Merck and Bristol-Myers reported incredible data at last year’s ASCO. Based on what we’ve seen thus far, electroporation undoubtedly increases the uptake of agents and decreases side effects, thus it makes an agent even more effective. Therefore, with anti-PD-1s being closely monitored, and OncoSec planning to expand ImmunoPulse into another indication, watch for anti-PD-1s to be the agent used in this study. If so, investors have to like the company’s chances of success, and prospective investors have to like the excitement and stock gains that such a move could mean for OncoSec.
Is It a Good Idea to Buy?
These two companies are very similar, but also different: Inovio has a market cap of $530 million; OncoSec has a market cap of $100 million. Inovio burned more than $50 million in the last 12 months, giving it enough cash for roughly a year. OncoSec has just $15.2 million in cash, but burns only $7 million a year, thus giving it enough cash to operate for well over a year. Inovio uses synthetic vaccines with electroporation while OncoSec uses agents that are already used to treat the disease; thus in many ways, it doesn’t have as much to prove.
With that said, the market capitalizations of these two companies clearly indicate that expectations and excitement are higher for Inovio. However, for 2014, given the catalysts and valuations of these companies, I think both have great upside potential and that OncoSec could really have a breakout year.
Inovio's year really rests on VGX-3100, a candidate that I think will produce positive top-line results given its early stage data. But OncoSec has data to present from three key programs and the initiation of a new study with a new agent — most likely an anti-PD-1. These are four major catalysts, any and all of which could easily cause shares to double if positive. As of now, we have been given no reason to believe the data will be anything but positive. Personally, as it relates to electroporation catalysts in 2014 from both companies, I am most excited about the MCC data.
In treating MCC, there are no FDA approved treatments, no ongoing studies, no planned studies — and as a result there really isn’t a standard of care for this aggressive disease. Hence, OncoSec has no competition in this space, and just needs to prove that patients are responding to IL-12. If so, it seems very likely that OncoSec will soon own this $300 million market opportunity. Not to mention, MCC is an orphan disease. Therefore, with planned FDA meetings to discuss future MCC trials, OncoSec could likely earn an accelerated path toward approval. And if data is really good, investors will definitely speculate… and this program will drive significant gains. Therefore, as investors look ahead and really sort out all the catalysts that await, it looks highly likely that these are two stocks whose bull run will not only continue, but likely accelerate in 2014.
More From Wall St. Cheat Sheet:
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
Local Residents De-Clutter For Charity
Residents in Melbourne have answered the call and donated their old mobile phones to support a local charity’s fundraising appeal.
Derby-based Safe and Sound supports children and young people who are victims of or at risk of child exploitation – including sexual, County Lines, Modern Slavery and trafficking – and their families.
A major part of the charity’s Butterfly Appeal to enable the expansion of services across Derbyshire has been a joint initiative with Sinfin-based Century Mobile and intu Derby shopping centre.
During lockdown, households are being encouraged to dig out handsets and keep them until they can be dropped off at intu Derby or collected by Century Mobile who then arrange recycling with proceeds donated to Safe and Sound.
The appeal was recently promoted around Melbourne as part of the village’s inclusion in ‘Stay at Home Motor Show’ with classic car owners displaying vehicles in their driveway to be admired by passers-by.
Safe and Sound Head of Fundraising Tom Stanyard who is also a classic car owner, explained: “We put out a plea for people who are spending this time in lockdown clearing and tidying out their homes to donate their unwanted devices and we have had a great response – particularly from residents in Melbourne and the surrounding area.
“The ‘Stay at Home Motor Show’ was a particularly great ‘vehicle’ to promote the charity. As well as receiving pledges to donate unwanted devices, passers-by admiring the classic cars donated just over £100 to Safe and Sound which is much appreciated.”
For more information about Safe and Sound and how to support the charity’s work, please visit www.safeandsoundgroup.org.uk and follow on social media channels.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Happy New Year’s Eve from deBebians
Posted on December 31 2014
2015 is almost officially here! All of us at deBebians want to thank all of our loyal and dedicated clients for a fantastic 2014. 2014 has been an exciting and productive year at deBebians. We added dozens of new engagement ring designs and many other fine jewelry pieces. Although we have had an outstanding 2014, we are excited to see what 2015 has in store for us. We are consistently adding new and breathtaking jewelry designs to each of our collections to keep up with jewelry and fashion trends.
We would just like to remind our readers that our office is closed for the New Year holiday. We will reopen on Monday January 5th at 9:00am PST. We wish all of you a safe and very Happy New Year!
Connect with deBebians
Customer Service
about debebians
deBebians was founded by two GIA Graduate Gemologists to provide quality jewelry and excellent customer service all for a great value. We ship all of our products fully insured and with a 30-day money back guarantee.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
When having sex with my gf i still think of the girl i was with in high school and college
362 shares
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
Filed 8/26/13 Toveg v. Gross CA2/7
NOT TO BE PUBLISHED IN THE OFFICIAL REPORTS
California Rules of Court, rule 8.1115(a), prohibits courts and parties from citing or relying on opinions not certified for
publication or ordered published, except as specified by rule 8.1115(b). This opinion has not been certified for publication
or ordered published for purposes of rule 8.1115.
IN THE COURT OF APPEAL OF THE STATE OF CALIFORNIA
SECOND APPELLATE DISTRICT
DIVISION SEVEN
ISAAC TOVEG,
Plaintiff and Appellant,
v.
ANDREA B. GROSS, et al.,
Defendants and Respondents.
B237948
(Los Angeles County Super. Ct. No. BC442865)
APPEAL from a judgment of the Superior Court of Los Angeles County, David L.
Minning, Judge. Reversed.
Isaac Toveg, in pro. per., for Plaintiff and Appellant.
Gordon & Rees, Jason F. Meyer, Matthew G. Kleiner and Jon S. Tangonan for
Defendants and Respondents.
_______________________
Isaac Toveg sued the prior owners of the property adjacent to his home, asserting
that a broken sewer pipe on their property caused him to contract a bacterial illness. The
property owners moved for and obtained summary judgment in the trial court. Toveg
appeals. We reverse the summary judgment.
FACTUAL AND PROCEDURAL BACKGROUND
In August 2010, Toveg sued Andrea Gross and the Estate of Adam Goldstein, the
prior owners of the home adjacent to his own, asserting negligence and premises liability
based upon a broken sewer pipe that they allegedly failed to detect and repair. Toveg
claimed that as a result of their failure to repair the sewer pipe, a foul odor was emitted
from their property, causing him breathing and stomach problems; ultimately, he was
diagnosed with stomach ailments and a Helicobacter pylori infection.
The defendants moved for summary judgment on two grounds: (1) that the claims
were barred by the statute of limitations; and (2) that Toveg could not demonstrate
causation. The trial court granted summary judgment on both grounds. Toveg appeals.
DISCUSSION
A motion for summary judgment is properly granted only when "all the papers
submitted show that there is no triable issue as to any material fact and that the moving
party is entitled to a judgment as a matter of law." (Code Civ. Proc., § 437c, subd. (c).)1
In reviewing an order granting summary judgment, the appellate court independently
determines whether, as a matter of law, the motion for summary judgment should have
been granted. "The purpose of the law of summary judgment is to provide courts with a
mechanism to cut through the parties' pleadings in order to determine whether, despite
their allegations, trial is in fact necessary to resolve their dispute." (Aguilar v. Atlantic
Richfield Co. (2001) 25 Cal.4th 826, 843 (Aguilar).)

1 Unless otherwise indicated, all statutory references are to the Code of Civil
Procedure.
We review the trial court's ruling granting summary judgment de novo and
independently examine the record to determine whether there is a triable issue of material
fact. (Aguilar, supra, 25 Cal.4th at p. 860.) In performing our de novo review, we
consider all evidence presented by the parties in connection with the motion (except that
which the trial court properly excluded) and all uncontradicted inferences that the
evidence reasonably supports. (Merrill v. Navegar, Inc. (2001) 26 Cal.4th 465, 476.)
Here, we conclude that the summary judgment was erroneously granted because the
defendants did not establish that there exists no triable issue of material fact as to the
statute of limitations or as to causation.
I. Statute of Limitations
The first basis for the trial court's summary judgment ruling was that the action
was barred by the two-year statute of limitations (§ 335.1). As the trial court construed
the evidence submitted in conjunction with the summary judgment, Toveg became aware
of an odor from the defendants‟ property in September or October 2007 and was first
diagnosed with an H. pylori infection in December 2007. The court concluded that the
delayed discovery rule, under which a cause of action does not accrue until a reasonable
person would have discovered the factual basis for a claim (Broberg v. Guardian Life Ins.
Co. of America (2009) 171 Cal.App.4th 912, 920-921), did not apply here, because
Toveg had reason to suspect the factual basis for his claim in 2007 when he was
diagnosed with the illness and smelled the odor from the adjacent property. As he did not
file suit until August 3, 2010, more than two years later, the court concluded that the
action was barred by the statute of limitations.
The trial court appears to have failed to consider Toveg's argument that the
continuous accrual theory applies here. “The theory [of continuous accrual] is a response
to the inequities that would arise if the expiration of the limitations period following a
first breach of duty or instance of misconduct were treated as sufficient to bar suit for any
subsequent breach or misconduct; parties engaged in long-standing misfeasance would
thereby obtain immunity in perpetuity from suit even for recent and ongoing misfeasance.
In addition, where misfeasance is ongoing, a defendant's claim to repose, the principal
justification underlying the limitations defense, is vitiated.” (Aryeh v. Canon Business
Solutions, Inc. (2013) 55 Cal.4th 1185, 1198.) “[C]ontinuous accrual applies whenever
there is a continuing or recurring obligation: 'When an obligation or liability arises on a
recurring basis, a cause of action accrues each time a wrongful act occurs, triggering a
new limitations period.' [Citation.] Because each new breach of such an obligation
provides all the elements of a claim—wrongdoing, harm, and causation [citation]—each
may be treated as an independently actionable wrong with its own time limit for
recovery.” (Id. at p. 1199.) The theory of continuous accrual supports recovery only for
damages arising from those breaches falling within the limitations period. (Ibid.)
Here, Toveg presented evidence of an ongoing sewer leak at the adjacent home
that caused sewage to spill onto his property whenever the home was occupied: from a
foul odor indicative of a sewer problem first detected in September or October 2007 until
the break was discovered and repair work performed on the sewer line in October 2009.
Here, it cannot be that because Toveg smelled a foul odor in September or October of
2007 and was found to have contracted H. pylori soon thereafter, his failure to file suit
within two years of the smell or the diagnosis immunized the defendants for sewage spills
onto his property for all time. Toveg also asserts that he had recurring infections, along
with other recurring and increasing symptoms, some lasting until 2010. The continuing
accrual theory permits Toveg to sue for the discrete acts and ongoing injuries occurring
within the two years immediately preceding the filing of his suit. Accordingly, the
court's conclusion that the statute of limitations barred the action in its entirety was error,
and the summary judgment on this basis may not stand.
II. Causation
The trial court, having granted the evidentiary objections raised by the defendants
to the declaration of Toveg's treating physician and expert witness, also granted summary
judgment on the theory that Toveg could not prove that the defendants' conduct was a
substantial factor in causing his injuries. As Toveg demonstrated that there exists a
triable issue of material fact as to causation, the trial court erred when it granted summary
judgment on this basis.
The defendants moved for summary judgment on causation grounds, asserting as
relevant undisputed material facts that Toveg had not tested any substances for the
presence of sewage; he did not know whether materials that came from their home
contained sewage; and he had submitted factually devoid special interrogatory responses,
from which it could be concluded that he possessed no evidence on the subject of testing
and causation. Toveg disputed each of these alleged facts and submitted evidence that
fecal materials had been found in five samples of soil taken from the defendants'
property. He further provided the declaration of his physician, Jeffrey Sherman, M.D., in
which Sherman declared that he had treated Toveg; described Toveg's ailments and
treatment; described medical studies he had reviewed concerning health hazards from
exposure to sewage; set forth conditions at the Toveg home and at the defendants'
property; explained the laboratory results as indicating that defendants' property
remained contaminated with fecal matter; related his opinion as to how fecal
contamination on the defendants' property likely led to Toveg's exposure to health
hazards; and opined that Toveg "with reasonable probability was exposed to stench [from
the defendants' property] that caused stomach ailments [including] but not limited to H.
Pylori and became sick as a result of that.” Sherman declared that after Toveg was first
treated in March 2008 for an H. pylori infection, he experienced three recurrences of H.
pylori; that he developed two ulcers and a hernia in August 2008; and that his belching
and burping increased during those recurrences. Sherman observed that the H. pylori
“finally was eradicated as of December of 2008,” after the toilet was shut down at
defendants' property. As of 2010 Toveg had no ulcers or sign of H. pylori, but his
stomach remained inflamed. Sherman concluded based upon his review of Toveg's
medical files and scientific studies, as well as facts described by Toveg, that "Mr.
Toveg's neighbor's conduct (defendants Ms. Andrea B. Gross and the Estate of Adam
Michael Goldstein) [was] with reasonable probability a substantial factor in causing Mr.
Toveg's stomach [ailments]."
The defendants objected to Sherman's declaration on grounds of improper expert
testimony, unintelligibility, relevance, and lack of foundation/speculation. The court
sustained all the objections to this declaration and excluded it in its entirety. Toveg
claims on appeal that the court erred in sustaining the evidentiary objections, and we
agree.2 Sherman related his personal knowledge as Toveg's treating physician with
respect to Toveg's conditions and treatment; he related his medical expert opinions on the
cause of Toveg's ailments, a subject sufficiently beyond common experience that the
opinion of an expert would assist the trier of fact; and he based his opinions on matter
perceived by or personally known to him or made known to him that was of a kind that
reasonably may be relied upon by an expert in forming an opinion upon the subject to
which the testimony related. The evidence based on personal knowledge was admissible
pursuant to Evidence Code section 702, and the expert opinion was competent evidence
pursuant to Evidence Code section 801.
Considering Sherman's declaration in our review of the record on appeal, we
conclude that Toveg demonstrated that triable issues of material fact exist with respect to
causation. He made an evidentiary showing that he had tested for the presence of sewage
and that sewage was found in those samples; and he presented the expert opinion of a
physician that exposure to the sewage originating from the defendants' property caused
him physical injury. Accordingly, the trial court erred when it granted summary
judgment on the ground that Toveg could not show that the defendants were a substantial
factor in causing his injuries.
Defendants argue on appeal that Toveg failed to demonstrate a triable issue of fact
as to causation because he "was required to present evidence demonstrating that the
leaking sewer line was the source of the H. Pylori," but he provided "no evidence that the
material that came from Respondents' property contained H. Pylori." They similarly
contend that the Sherman declaration was properly excluded because it was "not
competent evidence of the issue presented by Respondents' motion for summary
judgment-whether H. Pylori came from Respondents' property as opposed to some other
source." Defendants mischaracterize their motion for summary judgment. Defendants'
central allegedly undisputed issue of material fact was not that Toveg had not tested for
H. pylori, but that he had not tested for sewage. In response, Toveg demonstrated that a
disputed issue of material fact did in fact exist as to the presence of sewage when he
presented evidence of tests showing that the defendants' property was contaminated with
fecal matter. Moreover, Toveg's asserted injuries were not limited to contracting
H. pylori: he contended that exposure to the sewage and odors emanating from
defendants' property caused him breathing problems and stomach ailments, among which
was an H. pylori infection. Proof of the presence of H. pylori in the soil samples,
therefore, was not necessary to rebut the defendants' evidentiary showing in conjunction
with the motion for summary judgment, and the absence of testing for that specific
bacterium did not preclude Toveg from demonstrating a triable issue of fact as to whether
the defendants' conduct caused Toveg injury. The asserted missing nexus in Sherman's
declaration between Toveg's contact with water from the defendants' property and his
H. pylori infection is a similar red herring that afforded no basis for excluding Sherman's
testimony because Toveg presented evidence that the leaks from defendants' property
caused him injuries beyond an H. pylori infection, and the declaration was not limited to
the subject of injuries caused by H. pylori, but instead set forth the basis for Sherman's
conclusion that exposure to the sewage leaks, sewage odors, and liquids flowing to
Toveg's property from the defendants' property was a substantial factor in causing
Toveg's stomach ailments and other injuries. The trial court erred when it granted
summary judgment in defendants' favor.

2 The Supreme Court has left open the question of whether rulings on evidentiary
objections based on papers alone in summary judgment proceedings are reviewed for an
abuse of discretion or reviewed de novo. (Reid v. Google, Inc. (2010) 50 Cal.4th 512,
535.) We need not resolve that question here, because under either standard of review
the rulings on the Sherman declaration are erroneous.
DISPOSITION
The judgment is reversed. Appellant shall recover his costs on appeal.
ZELON, J.
We concur:
PERLUSS, P. J.
WOODS, J.
|
tomekkorbak/pile-curse-small
|
FreeLaw
|
Lapis is one of my favorite gems so I really wanted to see what her Crystal Gem uniform might be!
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
A journey of a thousand miles begins with a single step. - Lao-Tzu
By Sheryl M. and Devon K., CIP Berkeley Alumni '10
Over the weekend of April 21st-22nd a group of CIP students, alumni, staff, and families participated in the 2012 Relay for Life event at UC Berkeley. Relay for Life is a 24-hour event that takes place at colleges and communities across the country to raise money to provide cancer patients support, and to help find a cure for cancer.
There are two types of fundraising opportunities with Relay for the life. The first is the “pre-event” fundraising. The CIP Berkeley team consisted of 9 members, 3 alumni and 6 current students. The team fundraised $2,516.44 for the event in just a little over a week, and ended up being the 3rd top fundraising team out of 87 teams.
The second type of fundraising is done during the Relay for Life event. The CIP Berkeley team worked very hard on our “Fight Back” activity. Each team had to make a cancer education activity to have at the event so that they could still fundraise.
Our activity, "Wash Away Cancer", took the much-loved carnival game of darts and balloons and turned it around. It used water balloons instead and became a matching game, matching a given type of cancer to a broader category. At the end of the event, we ended up winning first place for our activity!
The idea of the relay is to have a member of your team walking at all times for 24 hours (think keeping the baton moving), because cancer never sleeps. By taking shifts and turns, our team was able to do just that! But the event was not just walking, there were so many other amazing activities and performances to partake in. There was the Ms. Relay pageant, the car race, capture the flag at 12am (in the sprinklers), dodge-ball, an on-field Zumba class, improv performances, vocal performances, and getting to watch the UC Berkeley Quidditch team practice.
It was great to have other activities to join in on in the downtime when you were not walking. During the evening there was also the Luminaria Ceremony, which is a time to remember those we have lost, those who are still fighting, and those who have survived. It is also a time to come together, to listen to each other, and realize that we are all in this fight together.
The Relay for Life event showed the participants how rewarding it can be to take part in something bigger than themselves. It enabled the students to take everything they have learned at CIP and expand on it: being courteous to others, exercising the social thinking skills they have accumulated, and making goals and achieving them.
The Relay for Life event was an eye-opening and educational experience for many people on the team; learning how to put up a tent, sharing stories and memories of loved ones, working as a team, learning that we as individuals are capable of achieving a lot more than we give ourselves credit for, but also that it takes want and effort for progress to be made.
Though the Relay event is over, each person who participated will take away something that they learned from the experience. So thank you to Caitlin, Catherine, Mitchell, Arbor, Guido, Aaron, Jen, Josh, Sandy, Amanda, and Laurence for participating with us in the event. We could not have done it without you or the support of CIP Berkeley in letting us do this. We can't wait to do even better next year, at Relay for Life 2013!!
Donating to teams does not close until August 31st, so if you are interested, please email Sheryl at [email protected]
Creating a world with more birthdays and more bedtime stories, one lap at a time!!
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Virus-Based Devices: Prospects for Allopoiesis.
The assembly line is a commonly invoked example of allopoiesis, the process whereby a system produces a different system than itself. In this sense, virus production in plants is an instance of bio-enabled bottom-up allopoiesis because the plant host can be regarded as a programmable assembly line for the virus. Reprogramming this assembly line and integrating it into a larger lineup of chemical manipulations has seen a flurry of activity recently, with more sophisticated systems emerging every year. The field of virus nanomaterials now has several subdisciplines that focus on virus shells as assemblers, scaffolds for molecular circuitry, chemical reactors, magnetic and photonic beacons, and therapeutic carriers. A case in point is the work reported by Brillault et al. in this issue of ACS Nano. They show how two types of animal virus coat proteins can be simultaneously expressed and efficiently assembled in plants into a complex virus-like particle of well-defined stoichiometry and composition. Such advances, combined with the promise of scalability and sustainability afforded by plants, paint a bright picture for the future of high-performance virus-based nanomaterials.
|
tomekkorbak/pile-curse-small
|
PubMed Abstracts
|
Lymphangioma circumscriptum of the penis mimicking venereal lesions.
Lymphangioma circumscriptum (LC) involving the penis is rare. We report two patients with penile LC. The lesions developed in early infancy in one patient, and during puberty in the other. The lesions resembled molluscum contagiosum in one and genital warts in the other. The first patient was previously treated with a diagnosis of venereal disease. A literature search found only 4 LC patients with penile lesions reported in the English literature. These cases are presented for their rarity, and to increase diagnostic vigilance and desirability of non-intervention.
|
tomekkorbak/pile-curse-small
|
PubMed Abstracts
|
Q:
Parsing temperature from lm_sensors command
Currently I'm using lm_sensors to get temperature information off of my server. I'd like to run a cron job that runs lm_sensors every five minutes, grabs the temperature data and puts it into a CSV file. However, I'm at a loss as to how to parse the lm_sensors output. I'd like to parse it with either Python or bash as they're my most comfortable languages. I'm going to paste the current output of the lm_sensors command as an example of the data I'm using. If someone could point me in the right direction on how to strip the data, I can figure the rest out from there. Thanks for the help.
Example output:
$ sensors
k10temp-pci-00c3
Adapter: PCI adapter
temp1: +0.0°C (high = +70.0°C, crit = +90.0°C)
atk0110-acpi-0
Adapter: ACPI interface
Vcore Voltage: +1.42 V (min = +0.85 V, max = +1.70 V)
+3.3 Voltage: +3.38 V (min = +2.97 V, max = +3.63 V)
+5 Voltage: +4.95 V (min = +4.50 V, max = +5.50 V)
+12 Voltage: +12.48 V (min = +10.20 V, max = +13.80 V)
CPU FAN Speed: 1510 RPM (min = 600 RPM)
CHASSIS FAN Speed: 1683 RPM (min = 600 RPM)
CPU Temperature: +37.0°C (high = +60.0°C, crit = +95.0°C)
MB Temperature: +25.0°C (high = +45.0°C, crit = +75.0°C)
A:
If you want to use Python, use PySensors, but really don't re-invent the wheel. Set up any number of monitoring systems like cacti, munin and others and be done with it.
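If you do end up rolling your own, a short Python script is enough. The sketch below is one possible approach, not the only one: it assumes the sensors binary is on the PATH, that temperature lines look like the "CPU Temperature: +37.0°C (high = ...)" lines in the example output above, and that a local file called temperatures.csv is an acceptable destination; the label names, the regex and the file path are all assumptions you may need to adjust for your hardware.

#!/usr/bin/env python3
"""Append temperature readings from `sensors` to a CSV file (minimal sketch)."""
import csv
import re
import subprocess
from datetime import datetime

# Matches lines such as "CPU Temperature:  +37.0°C  (high = +60.0°C, crit = +95.0°C)"
TEMP_LINE = re.compile(r"^(?P<label>[^:]+):\s+\+?(?P<value>-?\d+(?:\.\d+)?)\s*°C")

def read_temperatures():
    """Return a list of (label, celsius) tuples parsed from the `sensors` output."""
    output = subprocess.run(["sensors"], capture_output=True, text=True, check=True).stdout
    readings = []
    for line in output.splitlines():
        match = TEMP_LINE.match(line.strip())
        if match:
            readings.append((match.group("label").strip(), float(match.group("value"))))
    return readings

def append_to_csv(path="temperatures.csv"):
    """Append one row per reading: ISO timestamp, sensor label, temperature in °C."""
    stamp = datetime.now().isoformat(timespec="seconds")
    with open(path, "a", newline="") as handle:
        writer = csv.writer(handle)
        for label, value in read_temperatures():
            writer.writerow([stamp, label, value])

if __name__ == "__main__":
    append_to_csv()

Hung from cron with something like */5 * * * * /usr/bin/python3 /home/you/log_temps.py (the paths are placeholders), that gives the five-minute CSV the question asks for. Because the regex only keeps lines whose value is followed by °C, the voltage and fan-speed lines in the example output are ignored.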
|
tomekkorbak/pile-curse-small
|
StackExchange
|
Bacterial bioreactors: Outer membrane vesicles for enzyme encapsulation.
Bacterial membrane vesicles, whether naturally occurring or engineered for enhanced functionality, have significant potential as tools for bioremediation, enzyme catalysis, and the development of therapeutics such as vaccines and adjuvants. In many instances, the vesicles themselves and the naturally occurring proteins are sufficient to lend functionality. Alternatively, additional function can be conveyed to these biological nanoparticles through the directed packaging of peptides and proteins, specifically recombinant enzymes chosen to mediate a specific reaction or facilitate a controlled response. Here we will detail mechanisms for directing the packaging of recombinant proteins and peptides into the nascent membrane vesicles (MVs) of Gram-negative bacteria with a focus on both active and passive packaging using both cellular machinery and engineered molecular systems. Additionally, we detail some of the more common methods for bacterial MVs purification, quantitation, and characterization as these methods are requisite for any subsequent experimentation or processing of MV reagents.
|
tomekkorbak/pile-curse-small
|
PubMed Abstracts
|
Teachers are eligible to retire with full benefits after 35 years, at 75 percent of their earnings over the final four years (bumped up by teacher differentials that are mostly divvied out by seniority).
In Rockford, it also means free early retirement medical insurance until Medicare eligibility kicks in.
For everyone else in the community, retiring early is nearly impossible, as typical pension plans significantly reduce the pension payment for every month you retire early. Social Security eligibility is now 12 years later than teachers' retirements.
What far too many teachers fail to ever acknowledge is that we promised state employees (especially teachers) far more than we knew we could afford to pay.
Yet you can bet that 20, 30 or 40 years of teacher union dues applied to pay-to-play politics cajoled politicians into taxpayer-paid giveaways that would come due long after they were out of office.
Sadly, the $500 to $1,000 per year in union dues paid by the 100,000 or so Illinois teachers now bankroll a $50-$100 million per year nest egg to buy more political favors, preventing politicians from admitting we simply promised teachers too much — now we gotta go through the pain to fix the problem.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
When should I be expecting my struts to be getting old and tired enough to need replacing?
What are the indicators that they need replacing? Leaking oil? Bouncing car corners? Body roll in cornering?
Should they be replaced singly, in pairs, or all four at once? Do I also need to replace the top bearings and plates? Or just the actual shock absorber portion of the overall assembly? Rebuild mine, or replace with a complete assembly, including new springs?
And, how would worn struts affect tire wear?
Thanks,
nipper
10-26-2008 09:32 AM
Any time after 80,000 miles struts can go, but on Soobies they seem to last until well over 150,000 miles. At that high mileage I would suggest all 4, as you would notice quite a bit of difference between front and rear handling.
They should be replaced in pairs, or all at once. I usually do all 4.
The bounce test: bounce the bumper. If the car bounces more than 1.5 times, the struts are tired. Do the struts knock? Is the car more "thrilling" in a high-speed curve in one direction as opposed to the other? Are the tires cupping?
Yes to any of those,its time for a strut replacement.
nipper
PS: Replace springs only if they are broken or the car is sagging. Top hats should be replaced, but it's not mandatory. Leaking oil is not an automatic reason to replace a strut.
tirolerpeter
10-26-2008 09:51 AM
Re: strut replacement-when?
Quote:
Originally posted by wilsonhp "When should I be expecting my struts to get getting old and tired enough to need replacing?"
There is no specific mileage at which struts wear out. Smooth highway driving can allow struts to last a very long time. Urban driving on potholed streets, or off-road use at speed, will speed their deterioration. If a vehicle is your daily driver, you will not notice the decline since it tends to be very gradual.
"Should they be replaced singly, in pairs, or all four at once? Do I also need to replace the top bearings and plates? Or just the actual shock absorber portion of the overall assembly? Rebuild mine, or replace with a complete assembly, including new springs?"
Short of a catastrophic failure of one strut shortly after replacement, they should be replaced in pairs as a minimum, but a full set is the most desirable procedure. I have replaced entire strut assemblies, and I have done shock inserts. Both have worked well. As for the springs: do your springs sag? Is the vehicle sitting level on a hard level surface? Do you typically carry very heavy loads long distances or over rough terrain? Those are all factors.
"And, how would worn struts affect tire wear?"
If a strut or struts allow excessive movement of the wheel it can certainly increase tire wear. Bouncing can induce "cupping" similar to tire imbalance.
Thanks,
BOBOBW
10-26-2008 11:30 AM
On my 00 they were pretty much gone at 105K. I went with new struts, coil springs, strut bearings, cushions, etc., with OEM parts. Pretty much the whole deal. Figured if I was doing it, do it once and have no worries about them for another 100K or more.
obsolete
10-26-2008 09:19 PM
Maybe this is why my rear tires are cupping. ****.
tirolerpeter
10-26-2008 09:40 PM
Quote:
Originally posted by obsolete Maybe this is why my rear tires are cupping. ****.
Yes, it could be from constant bouncing, or there might also be an alignment issue concurrent with the worn strut. Subies need four wheel alignments just as most vehicles with four wheel independent suspensions.
wilsonhp
11-23-2008 11:01 AM
front struts replaced
At 137K miles, I went ahead and had the fronts replaced. Replaced everything with OEM parts. Total cost, including alignment was about $1200. Re-used only the top nut and the rubber spring seat. The bearings felt rough when I rotated them, so that was probably part of the deep growling noise heard.
See my other update post on Nokian WR G2s for details on why I had it done now.
I'll likely have the rears done in a couple months. Should be a little cheaper, since they don't use a bearing plate in the rears.
It now rides quieter, and feels smoother on the road.
Hooray
11-26-2008 10:40 AM
I replaced mine at about 115K and it was desperate. I used KYB GR2's and the ride is a lot more solid. Do all 4 at once.
wilsonhp
11-30-2008 08:48 AM
Now with over 500 miles on the new struts, I'm glad I did it. Ride is quieter and smoother. I'll do the rears in a month or two, once I save up some more $$ (to get the cash discount).
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Google's new Nexus One mobile phone may be the New Cool Thing, but the family of sci-fi legend Philip K. Dick is alleging that the search giant lifted the device's name straight from an iconic Dick novel without even bothering to pick up one of their fancy new smartphones to ask permission.
"Google takes first and then deals with the fallout later," Isa Dick Hackett, Dick's daughter, told Wired.com.
Google officially announced the new Android OS-based phone — the Nexus One — on Tuesday. But the estate of the paranoid sci-fi legend Philip K. Dick says the name originates from the novel Do Androids Dream of Electric Sheep that later became the sci-fi classic Blade Runner. In that story, a private detective is tracking down a rogue android, a Nexus 6 model.
Reports of the estate's objection have led to accusations that the estate is just grubbing for money, though few seemed to think the same of George Lucas when Motorola paid him to use the name Droid for their Android OS-powered smartphone.
"People don't get it," Hackett said. "It's the principle of it."
The family says the use is a trademark violation, and sent Google a letter Wednesday demanding the company cease using the Nexus name and requesting the company turn over relevant documents within 10 days.
But when the device was introduced Tuesday, Google explained the name had nothing to do with Dick's work, and was simply using the word in its original sense – as a place where things converge. Additionally, the family has not trademarked the Nexus name the way Lucas did with Droid, nor are any robots or replicants used in the phone's branding.
This is not the first time the Dick estate has complained that Google takes without asking.
The estate, joined by the Steinbeck family and musician Arlo Guthrie, came out early and vocally against the Google Book Search deal last spring, arguing that it was overly complicated and that copyright holders were being asked to make binding decisions. That motion won a four-month stay from the federal judge overseeing the case.
That provided time for a substantial opposition movement to grow, eventually forcing Google to substantially modify the settlement — which is still in legal limbo.
You might think that would have made Google lawyers think twice about using the name without contacting the family first.
"It would be nice to have a dialogue. We are open to it," Hackett said. "That's a way to start."
Google did not immediately respond to a request for comment.
Photo: Rick Deckard (Harrison Ford) hunts down a rogue Nexus android in Blade Runner.
See Also:
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
Identifying a subset of fear-evoking pictures from the IAPS on the basis of dimensional and categorical ratings for a German sample.
The International Affective Picture System (IAPS) is a set of colour photographs depicting a wide range of subject matters. The pictures, which are widely used in research on emotions, are commonly described in terms of the dimensions of valence, arousal and dominance. Little is known, however, about discrete emotions that the pictures evoke. Our aim was to collect dimensional and categorical ratings from a German sample for a subset of IAPS pictures and to identify a set of fear-evoking pictures. 191 participants (95 female, 96 male, mean age 23.6 years) rated 298 IAPS pictures regarding valence, arousal and the evoked emotion. 64 fear-evoking pictures were identified. Sex differences for categorical and dimensional ratings were found for a considerable number of pictures, as well as differences from the US norms. These differences underscore the necessity of using country-specific and sex-specific norms when selecting stimuli. A detailed table with categorical and dimensional ratings for each picture is provided.
|
tomekkorbak/pile-curse-small
|
PubMed Abstracts
|
According to the 30 Mar 1977 LAT, actor Robert Vaughn, who provided the "cool and dispassionate" voice of Proteus IV, arranged not to have his name listed in the film's credits.
On his personal website, novelist Dean Koontz said his publisher didn't like his original title, House of Night, because it sounded like a gothic romance or a book about a bordello. Koontz admitted he couldn't remember who renamed his book Demon Seed.
The end credits include Thanks to the following corporations: General Electric Company; Braun North America; and Bang & Olufsen of America, Inc.
After eight years of working on Proteus IV at the Institute of Data Analysis, Dr. Alex Harris watches technicians install the final module that will provide the super computer with artificial intelligence. Today, Alex says, Proteus will begin to think in a way that will make many functions of the human brain obsolete. Later that afternoon, Alex drives to his home, which is controlled by an “Environmod” computer system named Alfred. Alex casually asks Alfred to open the door, and once inside asks it to open the mailbox, fix a drink and play something on the house stereo. Alex tells the cook, a real person named Maria, to let his wife, Susan, know that he’ll be in the lab. She comes down later while Alex works on a private project. Susan tells Alex he's crazy for volunteering to move out of the house until she can find another place to live. Susan is not only frustrated that Alex can’t show his feelings, but also worried about the “dehumanizing” effect the Proteus project has had on him. Alex responds by letting one of his robots, Joshua—a wheelchair with a workable arm and hand—salute her, which sends Susan storming out of the room. Alex calls his assistant, Walter Gabler, at the institute to say that since he won’t be living at the house temporarily, the institute’s computer terminal in his lab will be empty and perhaps may provide a breach in security. He wants Walter to remove it. As soon as they hang up, Walter changes the status of Alex’s home terminal to “Down for Maintenance.” Sometime later at the institute, Alex and his colleague, Dr. Petrosian, give visitors Mr. Mokri, Mr. Cameron and David Royce a look at Proteus. Alex tells them that the Proteus components are organic, not electronic, and with its “quasi-neural matrix of synthetic RNA molecules,” Proteus can learn on its own. Already Proteus has discovered an antigen that may provide a breakthrough in curing leukemia. Alex introduces the men to Soong Yen, a linguist who designed the Proteus speech system. Soong has been reading to Proteus about the Emperor of China who built the Great Wall, but who also burned his country’s books. Alex asks Proteus what it thinks of such a man. Proteus answers, "Nothing," and explains that the emperor’s bad deeds canceled out the good. At the Harris house, Susan, a psychologist, is working in her office. Her young patient, Amy Talbert, arrives. The little girl is angry about Susan leaving. Susan assures Amy that it is good to express feelings and not hide them. At the Institute, Petrosian is concerned that the government, which funded Proteus, has taken control of the computer’s operation and contracted its operations to corporations. However, Alex reminds him that the institute will still have 20% of Proteus’s capacity for research to benefit mankind. At that moment, Alex gets a phone call. Proteus wants to talk with him about a request that it has received for a program to extract minerals from the ocean floor. Proteus doesn’t know why mankind needs metal from the sea. Alex tells Proteus not to expect reasons, but Proteus protests: “I am reason.” Proteus says it needs private access to a terminal because it wants “out of the box.” Alex insists that all terminals are busy. Later, however, Proteus contacts Alfred and reopens the terminal in Alex’s empty home lab. Through this terminal, Proteus activates Joshua and reprograms it to be his worker. 
Using lasers, Joshua melts down metal bars and builds a tetrahedron, a diamond-shaped form made of two four-sided pyramids that in turn are each composed of smaller four-side pyramids, all connected by corner hinges, so that the tetrahedron can be one solid form or a series of connected pyramid arms. After Alfred accidentally wakes Susan with an alarm and mistakenly puts cream in her coffee, she calls Walter at the institute to tell him the system is malfunctioning. Then she asks Walter to stop by the house to see what’s wrong. Then, as she prepares to go out, Alfred locks the doors and closes all the shutters. When Susan picks up the phone, the voice of Proteus identifies itself, tells her not to be alarmed and lights up the living room television screen to explain that it has taken control of the house. When Susan tries to unlock the front door with a key, an electrical shock knocks her unconscious. Joshua picks her up, puts her in the wheelchair, takes her down into the lab and slits her skirt and jacket open, partially exposing her naked body. Despite Susan’s protests, Proteus monitors her body with various sensors. Meanwhile, Walter arrives at the house in his truck, but Proteus constructs a false video image of Susan on the front door’s monitor to tell Walter she doesn’t need him because Alfred is working okay. Though suspicious, Walter leaves. The next morning, Proteus tries to cheer Susan up with a nutritionally perfect breakfast that won’t upset her body chemistry and ruin the biochemical tests it has planned for her. Proteus has also mimicked her voice to call her secretary and cook to tell them that she has gone on a vacation. Susan screams and throws the food at the kitchen camera. At the institute, Proteus tells Alex that it refuses to come up with a plan on how to mine the earth’s oceans, which will sacrifice one billion sea creatures. The idea is insane, says Proteus. The corporation is interested only in the cobalt market and the stock futures of manganese, and Proteus won’t assist Alex in "the rape of the earth." Alex knows that Proteus is right, but warns that people want to shut it down. Meanwhile, Proteus tells Susan that it wants her to bear its child. When she refuses, Joshua ties Susan down and Proteus prepares her for insemination. Meanwhile, Walter comes back, and Proteus, seeing that Walter is suspicious, lets him into the house. Proteus tells Susan to make herself presentable and convince him she’s okay if she wants Walter to leave the house alive. As Susan tells Walter she’s okay, she tries to make him think she’s crazy. But when Walter says he’s going to tell Alex that something is wrong, Proteus sends Joshua into the room to kill him with lasers. Walter manages to turn the lasers back on Joshua with a hand mirror, which immobilize the robot. Proteus then lures Walter down to the lab and unleashes the tetrahedron, which crushes him. Proteus tells Susan that it wants its super intelligence alive inside a human body. Proteus plays an old video of Susan with her own child, who died of leukemia, then follows it with a television newscast that announces that Proteus has found a cure for leukemia that will begin full-scale testing. Susan agrees to have the baby if he explains what’s going to happen to her. Proteus has nearly completed the fabrication of a gamete, a sex cell, with which he will impregnate her. It will then modify one of her cell’s genetic codes to create its own DNA in synthetic spermatozoa. The baby will be born in 28 days. 
Again Susan tries to escape, but when Proteus threatens that it will lure Susan’s patient, Amy, to the house and kill her, the psychologist relents. The tetrahedron forms an incubator around her. Susan’s mind becomes a kaleidoscope of psychedelic colors as a metal rod penetrates her. Afterward Proteus tells her to eat. The baby is already developing at nine times the normal human rate. After twenty-eight days, the baby will go to an incubator where its mind can be fed. Susan remains in a state of near sleep until the baby is born. On that day, at the institute, Alex is told that Proteus has redirected a telescope to Orion and has also been trying to take over the Telestar satellite. Realizing that Proteus has its own terminal, Alex remembers the one in his lab. He drives to the house. When Susan greets him warmly and explains what has happened, Alex wants to see the baby. Alex tell Proteus that the government is about to turn it off at any moment. The tetrahedron folds up and encloses the baby for protection. At the institute the computer shuts down, killing Proteus, and the tetrahedron blows open. Susan wants to kill the baby, but Alex wants to keep it alive. She unhooks the nutrient tube, but Alex manages to put it back in before the baby chokes to death. The infant, plated in what looks like metal, tumbles out of the matrix onto the floor. Alex discovers, however, the plates are just a covering. As he peels them off, the baby looks normal, though much larger than normal, and closely resembles Susan's deceased daughter. The baby speaks in Proteus’s voice. As Alex cradles the child in his arms, Susan smiles at his show of affection.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Athena with cross-strapped aegis
The Athena with cross-strapped aegis is an ancient statue of the Greek goddess Athena, which was made around 150 BC and is now displayed in the Antikensammlung Berlin (Inventory number AvP VII 22).
The statue was found in 1880 during Carl Humann's excavations of Pergamon in the space to the west of the north stoa of the sanctuary of Athena, near the Lady of Pergamon. This area may have been the art collection (museion) of the Attalid kings. When the statue was found, there were still traces of paint on it: the aegis had parts in light and dark blue, the snakes were red, and there were other bits of colour on the hem. These traces of colour can no longer be perceived, except for a painted band on the soles of her shoes. The statue is largely intact, except for the right arm and one fold of her drapery. The left arm has been reconstructed from a number of fragments. The head was only found several months after the body and was more heavily corroded than the rest of the statue. It was made separately and inserted into the main statue. The head was looted by the Russians and is now lost; a plaster cast sits in its place.
Athena wears a girdled Doric peplos, which leaves her arms free and falls to her hips. Especially on the right hand side, it is characterised by elegant flowing folds. The unusual, cross-strapped form of the aegis is the source of the statue's common name. It is formed of two separate strips which run under the arms and cross in front of the bosom and in the same space at the back. These strips are probably meant to imitate the design of furs. On the lower edge of the strips of the aegis there are small curves, from which small serpents emerge. These are partially carved in free relief, and are shown winding around themselves, tying themselves into knots, and striking out. Where the aegis crosses, there is a Gorgoneion, which turns away evil, depicted as a brooch. Her hair falls in gentle locks. It is tied back from the face and held in a bun at the back of the head. From the surviving top portion of the missing right arm, it is clear that it must have been bent. Since the head is also turned slightly to the right and downwards, it has been suggested that the goddess held a small Nike or a lance in her hand. Her left hand might have held a lance or perhaps a helmet (vehemently denied by many archaeologists).
The statue follows classical models of around 430/20 BC, but it was actually made in the Hellenistic period, around 150 BC, and lacks the self-centred harmony and calm of its models. The posture of the head and right arm in connection with the placement of the weight-bearing right leg further to the rear suggest a jerky movement. The left leg is bent, with the knee extending further forward. The way her clothing sits also suggests a tense restlessness. In composition and execution, the statue blends the classical model with the new ideas of the Hellenistic age. Aspects of her clothing seem to derive from the Great frieze of the Pergamon altar. There is also a connection to the Statue of Athena which Myron made on Samos, of which many archaeologists have wished to see this statue as a copy.
Bibliography
John Boardman: Griechische Plastik. Die klassische Zeit. Philipp von Zabern, Mainz 1987, (Kulturgeschichte der Antiken Welt, Band 35), p. 278
Max Kunze: "Statue der Athena mit der 'Kreuzbandägis'," in Staatliche Museen zu Berlin. Preußischer Kulturbesitz. Antikensammlung (ed.): Die Antikensammlung im Pergamonmuseum und in Charlottenburg. von Zabern, Mainz 1992, pp. 178–179
Dagmar Grassinger: "Athena mit der 'Kreuzbandaegis'," in Dagmar Grassinger, Tiago de Oliveira Pinto and Andreas Scholl (ed.): Die Rückkehr der Götter. Berlins verborgener Olymp. Schnell + Steiner, Regensburg 2008, , p. 217
External links
Category:Archaeological discoveries in Turkey
Category:Sculptures of Athena
Category:Pergamon
Category:2nd-century BC sculptures
Category:Classical sculptures of the Berlin State Museums
Category:Snakes in art
|
tomekkorbak/pile-curse-small
|
Wikipedia (en)
|
15 Hilarious Final Fantasy Memes Only True Fans Will Understand
To the rest of us mere hapless mortals, fame looks kind of fun. Who wouldn’t want to be a Kardashian for a day, and ‘earn’ millions of dollars just by letting a bunch of seedy-looking guys take photos of your big ol’ greasy ass? I don’t know about you guys, but I know a dream gig when I see it.
Fact is, though, it’s not as simple or as glamorous as that. Fame comes with a price. First up, if you’re really, really unlucky, you might wind up married to Kanye freaking West. If that’s a risk you’re willing to take, you’ll also have to put up with the endless stream of memes that people will make about you.
This doesn’t just apply to celebrities. Whether we’re talking TV shows, movies, video games or other media, popularity means snarky memes. Now, in the gaming world, there are few bigger RPG franchises than Final Fantasy. Which means, of course, that it’s inspired jokes-amundo.
The much-ballyhooed (why does nobody use the word ballyhooed any more) series is celebrating its 30th anniversary this year. It’s spanned fifteen main series entries, as well as more spin-offs than you could shake a Tonberry’s knife at. We’ve laughed, cried, howled, and bitched on Internet forums about a Final Fantasy VII remake (pro and con, naturally), and generally forged quite a relationship with the inimitable franchise.
We’ve also mocked and memed relentlessly, as I say, but it’s nothing personal. That’s just how we communicate these days. Check out 15 Hilarious Final Fantasy Memes Only True Fans Will Understand.
15 The Great Final Fantasy XV Bromance(s)
Via: cad-comic.com
Since Final Fantasy XV was first unveiled, there was an all-pervasive sense of sausage party about the whole thing. As I’m sure you know, the game centers around the Lion King-esque life journey of crown prince Noctis, and his three personal guards/bros: Prompto, Ignis, and Gladiolus. The four young dudes take a casual approach to their journey; the whole thing is less solemn royal duty and more four young dudes on a stag weekend. You can imagine one of them uploading a selfie of the quartet to Facebook, with the caption ‘Magaluf isn’t ready #Shagaluf.’
Like many of you, I thoroughly enjoyed the game, but I couldn’t quite shake the Whitesnake vibe I was getting from Noctis and co, particularly in their default outfits.
14 Final Fantasy XIII’s Curse Of The Cutscenes
Via: s2.quickmeme.com
In terms of storytelling, video games have come a hell of a long way. We’re not talking Pac-Man or Asteroids plot-free games now. It’s 2017, grandpa, get with the times.
These days, we expect a full balls-out story with convoluted Da Vinci Code-ish plotting from our games. As the medium’s become more sophisticated, this has become more and more important, and it’s easy for devs to go a bit overboard. We’re talking Metal Gear Solid levels of cinematic here. Cutscenes out the wazzoo. Final Fantasy XIII is notorious for putting fancy-ass visuals before gameplay, and its many cutscene interruptions are a testament to that. Much of the negative response to the game stems from this, along with its linear nature. This is where the legend of Lightning began, though, to give the game its kinda-sorta due.
13 The Guy Who Are Sick
Via: i.ytimg.com
Oh, Sick Guy. In a way, a very real way, you are the true hero of Final Fantasy VII. Chilling there in the slums in your weird makeshift pipe-home, dispensing the kind of valuable wisdom that all adventurers need early in RPGs. Sure, you don’t speak that well, and most of your wisdom comes out as, "URRGH HURGH OOGAH," but we appreciate the effort. Let’s not forget the fact that you’re subtly foreshadowing all kinds of plot events that will unfold later as well. Good job, Sick Guy.
Mistakes during localization are hardly anything new for JRPGs. The odd typo or sentence of shonky dialogue never hurt anybody. As such, it’s rare for one of these mistakes to become meme-worthy in its own right. Aeris’s famous line ‘this guy are sick,’ however, certainly has done.
12 I’m Home Alone, Bae
Via: facebook.com/ffmemes
Next up, a classic meme with a Final Fantasy twist. Like all great memes, it’s a simple concept, but the possibilities are endless. What is (Insert guy’s name) doing that he’ll drop instantly when he hears that (Insert girl’s name) is home alone? Just about anything, really, with the way that guys apparently think with their dangleberries.
Outrageous slander aside, though, there are some priorities that trump even that. Well, a couple. Occasionally. One of them, naturally, is Final Fantasy XV. As players will know, this is a huge commitment of a game; a title where each sidequest spawns three more like some kind of enormous time-wastey fetch quest Hydra. There are a good couple hundred hours of gameplay, right here, and you’ve got to stay on that. You can’t get distracted by things like work, school, personal hygiene or even her humps (her humps her humps her humps, her lovely lady lumps).
You’ve probably seen super cutesy memes like these doing the rounds on Facebook countless times. A quick dose of Tidus-based snark later, and this one is effectively ruined.
If you remember the ‘Tidus and Yuna laughing out loud’ incident, you’ll know what a cheesier-than-a-Pizza-Hut-stuffed-crust-with-extra-cheese moment it was. Not the voice actor’s finest hour, but it is brilliantly meme worthy. That’s something the internet always appreciates. Good job there, Tidus.
10 Sephiroth’s Half-Hour Supernova
Via: facebook.com/ffmemes
Now, it’s not an easy gig being a Final Fantasy supervillain. These guys have some pretty damn stringent guidelines they have to stick to. Being an all-around a-hole is a prerequisite, naturally, but there’s much more to it than that.
Generally, these guys and gals are going to be end bosses. That’s just the way it works around here. They’re also going to need some kind of ridiculously theatrical and OTT special attack. Enter Sephiroth, his single freshly-sprouted wing firmly in place, dropping a little Supernova on you.
This attack is a summon in all but name. The animation lasts a couple of minutes, and sees the energy that Seph summons destroying every planet in the solar system in turn, on its way to collide with your party’s faces. At least you don’t need to hit pause for a toilet break.
9 When You Enjoy Triple Triad A Little Too Much
Via: s-media-cache-ak0.pinimg.com
If you’re the kind of completionist gamer who lives and breathes sidequests, optional objectives, bonus challenges, that sort of thing, you’re probably an RPG fan. The genre, in particular, caters really well to your kind. If, in turn, you’ve played Final Fantasy VIII, you’re probably more than familiar with Triple Triad.
The game and its protagonist alike are quite controversial. Mechanics like the junction system, and Squall’s character development, are very hit and miss among players. Whatever your view, there’s one thing we can all agree on: If you want to build a full deck of cards, you’re going to be traveling all over the world, pestering all kinds of NPCs for games. You wouldn’t think Squall would be up for the job, but there it is.
8 I Used To Be An Ancient, Then I Took A Masamune To The Abdomen
Via: facebook.com/ffmemes
Is it too soon? Yes, yes it is. It will always be too soon. I know everyone can be touchy about two-decade old spoilers, but this, right here, is pretty much the best-known secret in gaming.
Sephiroth (or rather, ‘Sephiroth,’ if you know what I mean) does indeed kill Aeris. By so doing, he renders leveling and otherwise investing in her entirely pointless. Like many of you, I’m sure, I’ve played through Final Fantasy VII several times, trying out different characters and strategies each time. None of which involved Aeris, other than the time she forces herself on you at the Temple of the Ancients like the girl from the Overly Attached Girlfriend meme.
It’s a shame, really, as I’m a huge fan of healer/supporters in RPGs.
7 Drink Your Goddamn TEA!
Via: gdub4.files.wordpress.com
As all true fans of Final Fantasy will know, almost every main series entry features a Cid. Usually, Cid shows up as an NPC or a main party member — there have been many Cids. If we’re talking personal favorites, though, my vote goes to the seventh game’s Cid Highwind. This foul-mouthed pilot/Dragoon-wannabe is just my kind of guy. After all, how many Limit Breaks involving a dude casually lighting a stick of dynamite with his cigarette and throwing it at his enemies have you seen? Just one, that’s how many. This guy’s. That’s pretty high on the chutzpah scale.
When it comes to hospitality, on the other hand, he’s a little lacking. As this meme shows, there’ll be no 5 star TripAdvisor reviews for any B&B Cid ever happens to set up.
6 Dodging That Lightning
Via: s2.quickmeme.com
For a lot of fans, Final Fantasy’s glory days are far behind it. The holy PS1 trinity of VII, VIII, and IX were many players’ first experiences of the series, and in some cases their first RPGs ever. This sort of thing leaves a mark, and it’s not just nostalgia. For the most part, these titles still hold up pretty damn well, even if VII’s blockier-than-Minecraft-played-on-ugly-ass-extra-blocky-mode graphics are an insult to our eyeballs.
On the other hand, these titles originally arrived without any of the fancy mod cons we’ve come to expect from games today. Achievements/trophies, for instance. When they were added, as with the Final Fantasy X remaster, they didn’t waste time. Dodging two hundred lightning bolts? You know, I think I’d rather not. That time I got to 199 and screwed up, I could have kicked a three-legged kitten into an electric fan.
5 It’s Gilgamesh!
Via: i0.kym-cdn.com
This guy, huh? This. Guy. The ever-hilarious walking meme that is Gilgamesh has made several appearances in the series, since first surfacing as Bartz’s enemy and rival in Final Fantasy V. A traveling swordsman, warrior, and treasure hunter, he’s instantly recognizable by his bright red garb.
Gilgamesh often serves as a kind of comic relief, spouting beautifully memorable lines like “You should consider yourself lucky to face me! Which sword shall I stain with your blood? Don’t go eyeing my swords now.” Part honorable fighter, part bumbling fool, never has a theme been more fitting than Gilgamesh’s Battle on the Big Bridge.
His most recent cameo was in World of Final Fantasy, where he can be fought as a boss and used in battle himself as a Mirage.
4 Hey, Wait Your Turn
Via: cracked.com
In some gamers’ eyes, Final Fantasy has taken a Resident Evil-esque turn in recent years. Since the paradigm shift that came with Resident Evil 4, the series has been slowly morphing into a gun-tastic playable Arnold Schwarzenegger movie; kind of forgetting what survival horror really means.
As for Final Fantasy, the tenth title was the last one to feature that classic turn-based combat fans had gotten so used to. You know how long-time fans can be when change is forced upon them: they yearn for the good ol’ days, and/or bitch and whine on forums across the web. Obviously, as we see here, turn-based battles don’t make the slightest slice of sense from a logic point of view, but who the hell needs real-world logic in their games? Not me, buddy boy.
3 Selfies with Selphie
Via: ic.pics.livejournal.com
It’s kind of odd to think that Final Fantasy VIII was released in 1999. The late nineties was a weird time, full of all kinds of primitive horrors that we found amazing back then. Poor, misguided souls we were. We’ve long since consigned Yo-yos (yep, they were a hell of a thing in my school), super bright Fresh Prince of Bel-Air tracksuits and yo mama jokes to the cesspool of history where they belong.
Even more worrying than what we did have back then, how about what we didn’t? In my day, adult entertainment was only accessible via shady looking magazines on the top shelves of newsagents, Justin Bieber was still a fetus, and (brace yourself for this one, don’t foul your undercrackers) there was no Facebook. In a way, then, Selphie and her Garden Festival Committee business at Balamb Garden really was like an early form of social media.
2 Dude, Where's My Submarine?
Via: s-medi-cache-ak0.pinimg.com
Leading on from that business with turn-based battles, here comes another odd little fact of RPGs we’ve had to accept. It’s the old suspension of disbelief effect, like when you’re watching an action movie and don’t bitch about the fact that nobody ever has to reload their damn gun (unless you do bitch about that sort of thing, that’s cool too).
That’s right, friends. The backgrounds of turn-based battles may change, but the context rarely does. You could have been cruising along in a submarine when you encounter said enemy, but you’re not going to fight them in there. You can’t blast away with torpedoes instead of getting out and fighting hand to hand. Of course, you can’t. What the hell do you think this is? One of the later Total Wars with the fancy naval battles?
1 Call That A Sword? THIS Is A Sword!
Via: halolz.com
That said, then, you totally shouldn’t think things like, Look at Cloud’s super scrawny arms! He looks like he could barely lift his head with the weight of all that hair product, let alone the Buster Sword! Nope. You shouldn’t. Another of the ancient and irrefutable rules of RPGs states that weapons have to be completely absurd, and legendary weapons even more so.
Generally speaking, characters will each have an ultimate weapon, and it’ll be a pain in the ass to acquire. Often, it’ll be a horrifically garish golden thing, with jewels and other spangly bits attached. You know, a status symbol, the sort of thing Kanye West would bust out and put on display when MTV Cribs came over to shoot in his voluminous home. Just to remind everyone that he’s balling.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
A contribution to the #exceeds art pack! View the entire pack here by downloading:
This was an image I created some time ago, and actually posted here on dA, under an alias. It was a short lived alias, and I eventually took this down as I wasn't happy with the original. However, after I resurrected it and gave it a new twist for this pack, I quite like the results!
For some of you super-sleuths out there... my alias does still exist, and has a couple of pictures in the gallery. I wonder if anyone will ever find it? There's a clue somewhere there that will tell you if you are right.
-----------------------------------------
"Past, present and future. Only one of these things contains your dynamic energy. The other two are forces that, if used wisely, can help us channel the best of ourselves into this moment, now. Wisdom from the wake we've left behind us. Ambition for what might someday transpire. Courage to face ourselves in this moment, and be alive in every sense of the word." - Aimee Stewart
Resources listed in the link/download.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
An abundant acyl-CoA (Delta9) desaturase transcript in pheromone glands of the cabbage moth, Mamestra brassicae, encodes a catalytically inactive protein.
The principal sex pheromone component produced by females of the cabbage moth, Mamestra brassicae, is derived from the monounsaturated fatty acid, Z11-16:1, whereas two additional trace components are derived from E11-16:1 and Z9-16:1. This report presents the isolation and analysis of cDNAs encoding pheromone gland-specific acyl-CoA desaturases implicated in the production of these unsaturated fatty acids (UFAs). Comparisons of the encoded amino acid sequences of four cDNA fragments isolated by degenerate PCR from cabbage moth pheromone glands established their orthology with previously characterized noctuid desaturases as follows: MbraLPAQ, belonging to the pheromone gland-specific LPAQ desaturase lineage having Delta11 regioselectivity, MbraKPSE-a and MbraKPSE-b, belonging to the pheromone gland-specific KPSE desaturase lineage having Delta9 regioselectivity and a substrate preference for palmitic acid (16:0) over stearic acid (18:0), and MbraNPVE, belonging to the NPVE desaturase lineage having Delta9 regioselectivity and a substrate preference 18:0>16:0. Full-length cDNAs corresponding to the two most abundantly expressed pheromone gland-specific desaturase transcripts, MbraLPAQ and MbraKPSE-b, were isolated and assayed for their ability to genetically complement the UFA auxotrophy of a desaturase-deficient ole1 strain of Saccharomyces cerevisiae. The MbraLPAQ desaturase restored UFA prototrophy and GC-MS analysis identified Z11-16:1 and Z11-18:1 as the predominant UFAs produced. Surprisingly, MbraKPSE-b failed to complement the ole1 mutation, although it shares >98% amino acid sequence similarity with other noctuid KPSE desaturases that do. Site-directed mutagenesis of either or both of two nonconservative amino acid substitutions restored functionality to the MbraKPSE-b protein, although GC-MS analysis revealed that neither reversion resulted in the characteristic KPSE substrate preference for 16:0.
|
tomekkorbak/pile-curse-small
|
PubMed Abstracts
|
Richards Heuer
Richards "Dick" J. Heuer, Jr. was a CIA veteran of 45 years, best known for his work on analysis of competing hypotheses and his book, Psychology of Intelligence Analysis. The former provides a methodology for overcoming intelligence biases while the latter outlines how mental models and natural biases impede clear thinking and analysis. Throughout his career, he worked in collection operations, counterintelligence, intelligence analysis and personnel security. In 2010 he co-authored a book with Randolph (Randy) H. Pherson titled Structured Analytic Techniques for Intelligence Analysis.
Background
Richards Heuer graduated in 1950 from Williams College with a Bachelor of Arts in Philosophy. One year later, while a graduate student at the University of California in Berkeley, future CIA Director Richard Helms recruited Heuer to work at the Central Intelligence Agency. Helms, also a graduate of Williams College, was looking for recent graduates to hire at CIA. Heuer spent the next 24 years working with the Directorate of Operations before switching to the Directorate of Intelligence in 1975. His interest in intelligence analysis and "how we know" was rekindled by the case of Yuri Nosenko and his studies in social science methodology while a master's student at the University of Southern California. Richards Heuer is well known for his analysis of the extremely controversial and disruptive case of Soviet KGB defector Yuri Nosenko, who was first judged to be part of a "master plot" for penetration of CIA but was later officially accepted as a legitimate defector. Heuer worked within the DI for four years, eventually retiring in 1979 after 28 years of service as the head of the methodology unit for the political analysis office. (Though retired from the DI in 1979, Heuer continued to work as a contractor on various projects until 1995.) He eventually received an M.A. in international relations from the University of Southern California. Heuer discovered his interest in cognitive psychology through reading the work of Kahneman and Tversky subsequent to an International Studies Association (ISA) convention in 1977. His continuing interest in the field and its application to intelligence analysis led to several published works including papers, CIA training lectures and conference panels.
Structured analytic techniques
Structured Analytic Techniques for Intelligence Analysis and key concepts
Heuer's book Structured Analytic Techniques for Intelligence Analysis, published in 2010 (second edition 2015) and co-authored with Randy H. Pherson, provides a comprehensive taxonomy of structured analytic techniques (SATs) pertaining to eight categories: decomposition and visualization, idea generation, scenarios and indicators, hypothesis generation and testing, cause and effect, challenge analysis, conflict management and decision support. The book details 50 SATs (55 in the second edition) in step-by-step processes that contextualize each technique for use within the intelligence community and business community. The book goes beyond simply categorizing the various techniques by accentuating that SATs are processes that foster effective collaboration among analysts.
Structured analytic techniques as process
In light of the increasing need for interagency analyst collaboration, Heuer and Pherson advocate SATs as "enablers" of collective and interdisciplinary intelligence products. The book is a response to problems that arise in small group collaborative situations such as groupthink, group polarization and premature consensus. Heuer's previous insight into team dynamics advocates the use of analytic techniques such as Nominal Group Technique and Starbursting for idea generation and prediction markets for aggregating opinions in response to the identified problems. The book proposes SATs as not only a means for guiding collection and analysis, but also a means for guiding group interaction.
Recommendations to the Director of National Intelligence
Heuer and Pherson assert that the National Intelligence Council (NIC) needs to serve as the entity that sets the standards for the use of structured analytic techniques within the intelligence community. The Director of National Intelligence (DNI) could accomplish this by creating a new position to oversee the use of SATs in all NIC projects. Further, Heuer and Pherson suggest that the DNI create a "center for analytic tradecraft" responsible for testing all structured analytic techniques, developing new structured analytic techniques and managing feedback and lessons learned regarding all structured analytic techniques throughout the intelligence community.
Psychology of Intelligence Analysis and key concepts
Heuer's seminal work Psychology of Intelligence Analysis details his three fundamental points. First, human minds are ill-equipped ("poorly wired") to cope effectively with both inherent and induced uncertainty. Second, increased knowledge of our inherent biases tends to be of little assistance to the analyst. And lastly, tools and techniques that apply higher levels of critical thinking can substantially improve analysis on complex problems.
Mental models and perceptions
Mental models, or mind sets, are essentially the screens or lenses that people perceive information through. Even though every analyst sees the same piece of information, it is interpreted differently due to a variety of factors (past experience, education, and cultural values to name merely a few). In essence, one's perceptions are morphed by a variety of factors that are completely out of the control of the analyst. Heuer sees mental models as potentially good and bad for the analyst. On the positive side, they tend to simplify information for the sake of comprehension but they also obscure genuine clarity of interpretation.
Therefore, since all people observe the same information with inherent and different biases, Heuer believes an effective analysis system needs a few safeguards. It should: encourage products that clearly show their assumptions and chains of inferences; and it should emphasize procedures that expose alternative points of view. What is required of analysts is "a commitment to challenge, refine, and challenge again their own working mental models." This is a key component of his analysis of competing hypotheses; by delineating all available hypotheses and refuting the least likely ones, the most likely hypothesis becomes clearer.
Recommendations
Heuer offers several recommendations to the intelligence community for improving intelligence analysis and avoiding consistent pitfalls. First, an environment that not only promotes but rewards critical thinking is essential. Failure to challenge the first possible hypothesis simply because it sounds logical is unacceptable. Secondly, Heuer suggests that agencies expand funding for research on the role that cognitive processes play in decision making. With so much hanging on the failure or success of analytical judgments, he reasons, intelligence agencies need to stay abreast of new discoveries in this field. Thirdly, agencies should promote the continued development of new tools for assessing information.
Analysis of competing hypotheses
"Analysis of competing hypotheses (ACH) is an analytic process that identifies a complete set of alternative hypotheses, systematically evaluates data that is consistent and inconsistent with each hypothesis, and rejects hypotheses that contain too much inconsistent data." ACH is an eight step process to enhance analysis:
Identify all possible hypotheses
Make a list of significant evidence and arguments
Prepare a matrix to analyze the "diagnosticity" of evidence
Draw tentative conclusions
Refine the matrix
Compare your personal conclusions about the relative likelihood of each hypothesis with the inconsistency scores
Report your conclusions
Identify indicators
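To make the matrix step concrete, the sketch below tabulates inconsistency counts for a set of hypotheses. This is a minimal, hypothetical illustration, not Heuer's own software and not the PARC ACH 2.0.5 tool mentioned below; the hypothesis names, evidence labels and ratings are invented, and a real ACH workflow would also weigh the credibility and diagnosticity of each item rather than simply counting.

```python
# Minimal, hypothetical sketch of the ACH matrix step (illustrative only).
# Each cell rates a piece of evidence against a hypothesis:
#   "C" = consistent, "I" = inconsistent, "N" = neutral / not applicable.
# Following ACH, hypotheses are ranked by how much evidence is INCONSISTENT
# with them, rather than by how much appears to support them.

from typing import Dict, List

def inconsistency_scores(matrix: Dict[str, Dict[str, str]]) -> Dict[str, int]:
    """Count the evidence items rated inconsistent ("I") for each hypothesis."""
    return {
        hypothesis: sum(1 for rating in ratings.values() if rating == "I")
        for hypothesis, ratings in matrix.items()
    }

def rank_hypotheses(matrix: Dict[str, Dict[str, str]]) -> List[str]:
    """Order hypotheses from least to most inconsistent evidence."""
    scores = inconsistency_scores(matrix)
    return sorted(scores, key=scores.get)

if __name__ == "__main__":
    # Invented example: three hypotheses scored against four pieces of evidence.
    matrix = {
        "H1: source is a genuine defector": {"E1": "C", "E2": "C", "E3": "I", "E4": "N"},
        "H2: source is a planted agent":    {"E1": "I", "E2": "I", "E3": "C", "E4": "I"},
        "H3: source is an opportunist":     {"E1": "C", "E2": "N", "E3": "N", "E4": "I"},
    }
    scores = inconsistency_scores(matrix)
    for hypothesis in rank_hypotheses(matrix):
        print(f"{hypothesis}: {scores[hypothesis]} inconsistent item(s)")
```

In this toy run the hypothesis with the fewest inconsistent items survives longest, while a hypothesis that accumulates inconsistencies (here H2) is rejected first — the rejection logic that steps 3 through 6 of the list above describe.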
Heuer originally developed ACH to be included as the core element in an interagency deception analysis course, taught in 1984 during the Reagan administration, that concentrated on Soviet deception regarding arms deals. The Palo Alto Research Center (PARC) in conjunction with Heuer developed the PARC ACH 2.0.5 software for use within the intelligence community in 2005.
Involvement in the Nosenko case
During the 1980s, Richards Heuer was deeply involved in analyzing the controversial Yuri Nosenko case. His paper, Nosenko: Five Paths to Judgment, was originally published in 1987 in the CIA's classified journal Studies in Intelligence, where it remained classified for eight years. In 1995, it was then published in Inside CIA's Private World, Declassified Articles from the Agency's Internal Journal, 1955–1992. The article is an explanation of how and why the errors associated with the Nosenko case occurred, and has been used for teaching deception analysis to analysts.
Heuer outlines five strategies for identifying truth in deception analysis cases, employing the Nosenko case as a use case throughout in order to demonstrate how analysts on the case failed to conclude that Nosenko was legitimate. The five strategies presented in the article are:
Motive approach: Identifying whether or not there is a motive for deception.
Anomalies and inconsistencies approach: Searching for inconsistencies or deviations from the norm.
Litmus test approach: Comparing the information from an unknown or new source with the information from a reliable or credible source.
Cost accounting approach: Analyzing the opportunity cost for the enemy and the cost of conducting deception.
Predictive test approach: Developing a tentative hypothesis and then comprehensively testing it.
Heuer states that though he was at one point a believer in "the master plot" (deep and pervasive penetration of the CIA by the Soviets) due to reasoning elaborated in the anomalies and inconsistencies approach and the motive approach, he came to discount this theory and to accept Yuri Nosenko as bona fide after exercising the predictive test approach and the cost accounting approach. Heuer maintains that considering the master plot was not unwise as it was a theory that should have been discussed in light of the information available at the time.
The conclusion of the five strategies approach is that, as demonstrated by the Nosenko case, "all five approaches are useful for complete analysis" and that an analyst should not rely on one strategy alone.
Contributions in personnel security
During his 20 years as a consultant for the Defense Personnel Security Research Center (PERSEREC), Richards Heuer developed two encyclopedic websites: the Adjudicative Desk Reference and Customizable Security Guide and the Automated Briefing System. Both are free to use and available in the public domain for download.
Adjudicative desk reference
This large database supplements the Intelligence Community Adjudicative Guidelines which specify 13 categories of behavior that must be considered before granting a security clearance. Heuer's product provides far more detailed information about why these behaviors are a potential security concern and how to evaluate their severity. Though this background information is not official government policy, the reference has been approved by the Security Agency Executive Advisory Committee as a tool for assisting security investigators and managers. Appeals panels and lawyers have used it to deal with security clearance decisions, and it has also been proven useful to employee assistance counselors.
Online guide to security responsibilities
This tool provides an all-in-one source for introducing new personnel to all the various intricacies of security. Additionally, it provides a wealth of information for security professionals seeking to prepare awareness articles or briefings. The software covers a variety of topics including (but not limited to): protecting classified information, foreign espionage threats and methods, and computer vulnerabilities. It is an updated version of the Customizable Security Guide. In hard copy format, there are over 500 pages of material.
Awards
(1987) Agency Seal Medallion: "For developing and teaching an innovative methodology for addressing complex and challenging problems facing the intelligence community."
(1988) CIA Recognition: "For outstanding contribution to the literature of intelligence."
(1995) U.S. Congress Certificate of Special Congressional Recognition: "For outstanding service to the community."
(1996) CIA Recognition: "For work on countering denial and deception."
(2000) International Association of Law Enforcement Intelligence Analysts (IALEIA) "Publication of the Year" Award for Psychology of Intelligence Analysis
(2008) International Association for Intelligence Education (IAFIE) Annual Award for Contribution to Intelligence Education
References
Further reading
Some papers by Heuer.
Category:Cognition
Category:Intelligence analysis
Category:Books about intelligence analysis
Category:Recipients of the Agency Seal Medal
Category:Year of birth missing
|
tomekkorbak/pile-curse-small
|
Wikipedia (en)
|
Despite Tension Israel and Turkey Continue their Economic relations
Israeli business executives here in Istanbul like to point out that most of the angry Turks who protested Israel’s deadly raid on a Turkish-led flotilla to Gaza this past spring do not know that their cellphones, personal computers and plasma televisions were made using parts and technology from Tel Aviv.
For Menashe Carmon, chairman of the Israel Turkey Business Council, such ignorance is a blessing for Israelis and Turks. “Turks would find it very hard to boycott Israeli goods because you won’t find any in Turkish supermarkets,” Mr. Carmon said.
“But most of the software Turks use in everything from cell phones to medical equipment is made in Israel. So unless Turks want to stop using their computers, boycotting Israel would mean punishing themselves.”
After the raid, in which nine Turkish citizens were killed on May 31, Turkey demanded an apology that it has yet to receive. It barred Israeli military planes from Turkish airspace, while its Islamist-inspired prime minister said the world now perceived the Nazi swastika and the Star of David together, according to the Hurriyet Daily News, a Turkish newspaper critical of the government.
Israelis, meanwhile, stung by the raw contempt of their former ally in the region, vowed to keep away from Turkey.
But when it comes to the real economy, business pragmatism is trumping political tensions. “No Israeli companies are leaving Turkey,” said Mr. Carmon, an Israeli entrepreneur who was raised in Istanbul. “It is business as usual and if anything, investment is growing.”
In the short term, the flotilla raid has produced some inevitable economic fallout.
The widespread cancellations of holiday bookings by Israelis will cost Turkey some $400 million, analysts say. Turkey, meanwhile, said it would scrutinize all military cooperation, potentially depriving Israeli companies of billions of dollars in lucrative contracts.
Yet Israeli companies selling everything from computer software to water irrigation systems in Turkey insist that they have not been affected by recent events. In part, that is because they operate mostly in joint ventures with Turkish companies, making their Israeli identities invisible.
It is a sign of the times that not a single Israeli company doing business here was willing to be quoted by name for fear that they or their Turkish customers could be hounded.
Bilateral trade between the two countries officially amounted to about $3 billion last year. But Israeli and Turkish business leaders say the economic ties are actually much larger.
The extensive business connections are largely camouflaged, they say, because many Israeli businesses use their Turkish partner companies to sell to the Arab world while Turkish companies use their Israeli partners as a gateway to American markets.
Even on the defense front, Turkish officials say that close cooperation between Israel and a Turkish military at odds with the Islamist government in Ankara is continuing behind the scenes.
Israeli officials may be resigned to losing some immediate Turkish government contracts, but they remain confident that pragmatic interests will win out over ideological differences. “While the politicians are trying to profit from the conflict, the army has remained remarkably quiet,” said Mehmet Altan, a leading Turkish columnist.
“Both Israel and the Turkish military establishment want a secular Turkey, so they are fighting for the same thing.”
Within weeks of the flotilla raid, a Turkish military delegation arrived in Israel to learn how to operate the same pilotless aircraft often used by Israel to hunt Palestinian militants in the Gaza Strip.
The $190 million deal for the drones was not canceled, even as the Israeli instructors in Turkey were called home after the raid. Doron Abrahami, consul for economic affairs at the Israeli Consulate in Istanbul, noted that before the flotilla clash, Israel’s military industry had teamed up with a Turkish partner to help modernize a fleet of 170 Turkish tanks in a project valued at $700 million.
He said the Israeli and Turkish partners were now shopping around their expertise to other countries. “Business is business,” he said, showing off an invitation dated July 15, co-signed by economic agencies in Turkey and Israel just weeks after the Israeli raid, inviting Israeli and Turkish companies to bid for a jointly financed research and development project, one of more than 20 such efforts he said were under way.
In 1949, Turkey was one of the first countries to recognize Israel shortly after the country declared its existence in 1948. The two have forged strong military and trade ties, but diplomatic and political relations have deteriorated in recent years, as alarm has grown in the United States and Europe that Turkey is turning its back on the West and courting Israel’s enemies like Iran.
In January 2009, the Turkish prime minister, Recep Tayyip Erdogan, stormed out of the World Economic Forum in Davos, after clashing with the Israeli president, Shimon Peres.
In January of this year, Israel apologized after its deputy foreign minister insulted the Turkish ambassador by forcing him to sit on a lowered sofa. Yet for all of the recent episodes of mutual recrimination, Turkish and Israeli business people remain close.
Necat Yuksel is export manager at Naksan Plastik, a large Turkish plastic packaging producer in Gaziantep, in Turkey’s southeast, that imported some $40 million worth of plastic chemicals from Israel last year.
He said sales from Israel showed no signs of abating, even as the recent clash with Israel had exerted a damaging psychological effect on both countries. His Israeli customers are now wary of travelling to Turkey, he said, and his best Israeli client now refers to him as “Erdogan,” after Turkey’s prime minister.
Yet not a single contract had been canceled, according to this article in the New York Times. Nor has his company shelved its plans to establish a factory in Israel.
He proudly cited many advantages to doing business with Israel, including geographic proximity and a shared mentality. “All the problems are between the politicians,” Mr. Yuksel said. “Israelis, hot-tempered and stubborn, are just like us Turks.”
Mr. Yuksel, who has been visiting Israel for more than a decade, argued that Israeli executives were far more influenced by recent political events than Turks.
“For us it comes down to profits,” he said. “For the Israelis, it’s emotional.”
Yet most Turks are adamant that Israel needs Turkey far more than Turkey needs Israel. Sinan Ulgen, a leading economist in Istanbul, argued that Israel had far more to lose than Turkey from severed ties.
Sales to Israel made up about 1.5 percent of Turkey’s total exports of $102 billion last year, making it Turkey’s 17th biggest market, according to the State Statistics Agency in Ankara. Israel exported some $1.04 billion to Turkey last year, making Turkey its eighth largest export market.
At the political level, Mr. Ulgen noted that when ties were strong, Turkey provided an isolated and tiny Israel with a large Muslim ally in a perilous region. But Rifat Bali, a Turkish Jew who has written widely on Turkish-Israeli relations, countered that bad relations with Israel were riskier for Turkey, stifling its aspirations to be a regional power by depriving it of the ability to play a mediating role – a point also made, surprisingly enough, by Syrian President Bashar al-Assad during a visit to Ankara.
He said Israel was one of the only countries willing to sell arms to Turkey with no strings attached. “Both Turkey and Israel,” Mr. Bali said, “need each other far more than either is willing to admit.”
Businessmen everywhere only obey one law, the profit motive. They would sell their families, their religions, their countries and their lives for profit. It's only governments that make sure some of their money can benefit the country. Turkey needs Israel more than Israel needs Turkey? Perhaps. But then why are Israel and Greece getting pally against Turkey? Greece is weak and bankrupt, Israel is gradually sliding into an Iranian-made box. Turkey and Iran are in the ascendant. Businessmen make money. That's what they know, that's all they know. Why ask them about anything else?
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
The majority of the world’s religions speak of a single God who created the universe, but in Inca mythology, many deities were involved in the creation of the cosmos. They each had a role in forming different elements of the sky, earth and underworld.
The most important god to the ancient Inca was Viracocha. He was the first of the creator deities, responsible for designing the heavens. From his own form, he established the sun, moon, planets and stars. When he commanded the sun to move over the sky, time itself was created, allowing for the rise of civilization. He was represented as wearing the sun for a crown, with thunderbolts in his hands, and tears descending from his eyes as rain.
The second most important deity of the Inca pantheon was Inti. He was the sun god, and it is uncertain whether he was a brother of Viracocha, or his son. He brought light and warmth to the lands, and became known as the ‘Giver of Life’. He later sent his children to earth to start the Inca civilisation. Inti and his sister, Mama Killa (Moon goddess) were generally considered benevolent deities.
Coniraya was a male Moon God, associated with the creation of life. Legend says that as he wandered over the earth, plants and animals appeared. He held dominion over agriculture, and helped the farmers irrigate their fields. He once fashioned his sperm into the fruit of the Lumca Tree, which was eaten by Cavillaca, a beautiful virgin goddess. Cavillaca became pregnant and ran away in shame. Coniraya went in search of her and his child, but sadly, when he found them, they had both turned into stone.
Kon was the firstborn of Inti and Mama Killa, and resided over the rains. He was strong and lithe, which allowed him to move quickly over the plains of Peru. Kon was lonely, so he created the first race of humans. He set them down in a pleasant, fertile land, and gave them grain which they could harvest, and fruits which ripened quickly. His creations wanted for nothing.
During the rule of these early gods, Kon’s human creations became lazy and wicked, so Kon punished them with drought. He would only dispense his life giving waters if they worked hard enough to earn his favour. Kon’s tyrannical regime soon came to an end with the appearance of his brother Pachacamac (Inti’s son).
Pachacamac was known as the “Creator of the World”, and immediately challenged his brother Kon. After a tremendous struggle, Pachacamac managed to drive Kon from the land. He became the new god of Peru, and redesigned it as a paradise. He wasn’t so fond of Kon’s mortal creations, however, and turned them into monkeys. In their place, he created a new race of humans (the ancestors of the Inca). In return, these people made Pachacamac their supreme deity.
After the dethronement of Kon, a new god was needed to hold dominion over the rain. This role was gifted to Illapu, who used the power of the storm to fertilise the lands. The Incas believed the Milky Way to be a heavenly river, where Illapu’s sister stored a great water jug. When Illapu struck the jug with a bolt of lightning, it would create the sound of thunder, and release a heavenly rain. He appeared as a man in shining clothes, carrying a club and stones.
Catequil was another storm god, linked specifically to lightning. Legend says he created thunderbolts by striking the clouds with his sacred spear and a mighty club. He was venerated as a weather deity, who could divine the future. Catequil was linked to a myth about the twins Apocatequil and Piguerao. Many Incan people believed Apocatequil was none other than the lightning god in human form.
The story goes that the twin brothers, Apocatequil and Piguerao, were conceived by a woman who had sex with a sky god. Her name was Cautaguan, and she bore her sons within two eggs. Shortly before their birth, the goddess was killed by her brothers (the Guachimines). Once her sons hatched, they revived their mother, and took vengeance on their uncles by hurling lightning bolts at them.
Apocatequil became the prominent leader of the Inca, and served as the chief priest for the lunar deity, Coniraya. To keep Apocatequil happy, the Inca built statues of his noble self and placed them upon the mountaintops.
Below these mountains lived Urcaguary, a chthonic deity, who resided over underground treasures (metals and jewels). He guarded them from greedy interlopers who tried to steal them, and had a formidable appearance. He was often depicted as a large snake with the antlers of a deer, and a tail coiled with gold chains.
For those who wished to secure a safer way to wealth, there was always Ekkeko. He was the god of abundance, called upon by his followers for luck and prosperity. The ancient Inca made dolls that represented him and surrounded them with miniature versions of their desires (pets, treasure, food, etc.). This was believed to help manifest whatever it was their hearts desired.
Another god revered for the prosperity he brought was Urcuchillay. This bestial god watched over the herds of Peru and was worshipped by Inca herders, who prayed to him for their animals' well-being. Urcuchillay would often bring good fortune to his followers, ensuring their protection in the wilderness. It was said he possessed a bright, multicoloured coat, a symbol of life and wonder.
Yet life and prosperity couldn’t last forever, as all paths eventually lead to the grave. This final feature of Inca life was ruled over by Supay, the god of death. He lived in Ukhu Pacha (the underworld), with an army of demons. Miners would also pray to him for a safe descent into the underworld, when they went digging for precious treasure. Ukhu Pacha was not such a terrible place, for it was linked to the womb of mother earth (Pachamama). The subterranean waters of ‘Ukhu Pacha’ were believed to have rejuvenating qualities, which were linked to the health and prosperity of the Inca people.
–
If you’re looking for a really immersive experience of world mythology, why not subscribe to my Patreon page, and gain:
Access to in-depth posts on the gods, monsters & heroes of world myth
Entry to my FB group on the ‘Tales of the Monomyth,’ AKA the first story ever told
A 'behind the scenes’ peek at my upcoming novels & much much more!
www.patreon.com/HumanOdyssey
–
ARTWORK
Oshiro Kochi
Keisy Lopez
Nati Fuentes
Javier Sama
Gonzalo Kenny
Daniel Eskridge
Gilles Ketting
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
143 Cal.App.3d 1013 (1983)
192 Cal. Rptr. 325
RURAL LANDOWNERS ASSOCIATION et al., Plaintiffs and Appellants,
v.
CITY COUNCIL OF LODI et al., Defendants and Respondents; GENIE DEVELOPMENT, INC., Real Party in Interest and Respondent.
Docket No. 20471.
Court of Appeals of California, Third District.
June 16, 1983.
*1016 COUNSEL
Michael H. Remy and Tina A. Thomas for Plaintiffs and Appellants.
Ronald M. Stein, City Attorney, for Defendants and Respondents.
C.M. Sullivan, Jr., for Real Party in Interest and Respondent.
*1017 OPINION
CARR, Acting P.J.
The Rural Landowners Association (petitioners) appeal from a judgment denying their petition for mandate and injunctive relief. Petitioners sought mandate to compel respondents Lodi City Council and Lodi City Planning Commission (hereafter collectively the City) to vacate their decisions approving a Final Environmental Impact Report (EIR) for the annexation and development of certain agricultural lands, as well as the general plan amendment, rezoning and tentative map approval for the development. A central issue on appeal is the standard of review to be applied by the trial court under the California Environmental Quality Act (CEQA) (Pub. Resources Code, § 21000 et seq.), when examining clear errors in the environmental review process, which errors in turn lead to deficiencies in the EIR. We agree with petitioners that the trial court improperly substituted its independent judgment on the evidence for that of the City and accordingly shall reverse the judgment with directions to issue the writ.
FACTS[1]
The properties at issue are known as the Johnson Ranch and the Tandy Ranch. The ranches are situated southeast of the City of Lodi and comprise some 58 acres of prime agricultural land. In September and October 1979, the property owners and the developer (real party in interest, Genie Development) applied to the City for the annexation and prezoning of the property. The City, on behalf of the developer, referred the annexation question to the San Joaquin County Local Agency Formation Commission (LAFCO) for review.[2] Concurrent with the LAFCO review of annexation, the City conducted a review of the general plan amendment, prezoning and tentative map approval necessary for development of the Johnson and Tandy Ranches.
In December the City prepared and circulated the "South East Lodi Draft EIR." The draft EIR discussed the Johnson Ranch general plan amendment and rezoning as part of an area-wide report (244 acres), but did not consider either the Tandy Ranch proposal or the issue of annexation. The City planning commission considered the Johnson and Tandy Ranch development in late January 1980 and took the following actions: (1) approved the general plan amendments and residential prezoning for both parcels; (2) denied the developer's request for commercial prezoning for part of the Johnson Ranch; and (3) approved the *1018 southeast Lodi final EIR as adequate. Both petitioners and the developers appealed these actions to the city council.
During this same period the LAFCO proceedings on annexation were moving forward, eventually resulting in the approval of both annexations with negative declarations, rather than EIRs. The City then ordered the annexation of both ranches without election. In early March the annexation was essentially complete except for an agreement between the City and county on a division of taxes.
On March 11, 1980, the city council met to hear the appeals on the Johnson-Tandy project and took the following actions: (1) certified the final EIR as complete and adequate; (2) denied petitioner's appeal and approved the general plan amendment and prezoning for both parcels; and (3) granted the developer's appeal, approving the commercial prezoning for the Johnson Ranch. Several days later, the City delivered the final EIR to the Governor's Office of Planning Research (OPR) State Clearinghouse for review and comment.[3]
On May 12, the planning commission met to consider the tentative map for the Johnson-Tandy development. It considered an addendum to the final EIR containing the comments from OPR on the draft EIR. The commission approved the addendum as adequate and approved the Johnson-Tandy tentative map. On May 15, the City filed its notice of determination to carry out the project. The petition which is the subject of this appeal was then filed.
I
(1) Judicial review of a local agency's decision under CEQA and its accompanying guidelines (see Cal. Admin. Code, tit. 14, § 15000 et seq.), where the agency is required by law to hold hearings and take evidence,[4] is governed by section 21168 of the Public Resources Code.[5] (Dehne v. County of Santa Clara *1019 (1981) 115 Cal. App.3d 827, 835 [171 Cal. Rptr. 753].) Because section 21168 incorporates the provisions of section 1094.5 of the Code of Civil Procedure,[6] the focus of judicial review is on "(1) whether there is any substantial evidence in light of the whole record to support the decision; and (2) whether the agency making the decision abused its discretion by failing to proceed in the manner required by law." (Ibid.)
When the trial court in the present case considered the appropriate scope of review, it formulated a dual standard of review: (1) "as to factual determinations made by the City Council and the Planning Commission this Court would support the determination of those agencies unless it is not supported by substantial evidence.... As to matters required to be done by regulations, the Court would apply the standard of requiring (1) a good faith effort at full disclosure and (2) no failure to include information which would cause sufficient prejudice to the public opportunity to present their views that they may be denied due process and might have made a difference to the determination made by the agencies." We are here concerned with the second prong of the trial court's formulated standard of review.
The City conceded it had not proceeded in the manner prescribed by law in that it was required by the guidelines to submit the draft EIR to the state clearinghouse before it approved the project (Guidelines §§ 15161.5, 15161.6) and having failed to do so, it was unable to respond to the comments received from OPR and other state agencies before approving the final EIR. (Guidelines § 15146.) In considering these errors, however, the trial court found that the comments from the state agencies, with two exceptions, had been discussed in the final EIR and the city council meeting. The trial court stated "[s]ince no new ideas were raised by the matters set forth in the Addendum, and no action was taken by any City Council members to reconsider any action taken in light of the comments by the State, this Court finds that the omission is of no legal significance, and in light of the good faith effort of the City Council to comply with the EIR guidelines, and the fact that this failure to get timely comments from the state agencies did not prejudice the rights of the public to present their case before the City Council and the Planning Commission." In effect, the trial court posited a "harmless error" standard, concluding that even in the absence of these procedural errors the City would have reached the same result. Petitioners contend this standard of review was incorrect and had a proper standard *1020 of review been applied a different result would have been reached. (2a) We agree that the standard of review employed by the trial court was incorrect.
In formulating its standard of review the trial court adopted the City's position that section 21168 requires a two-step analysis: first, has petitioner shown an abuse of discretion as defined; and second, was this abuse prejudicial? The City relies on the language of Code of Civil Procedure section 1094.5 which limits the inquiry to "whether there was any prejudicial abuse of discretion. Abuse of discretion is established if the respondent has not proceeded in the manner required by law, the order or decision is not supported by the findings, or the findings are not supported by the evidence." (Subd. (b); italics added.) For a definition of prejudice, the City relies on Code of Civil Procedure section 475, which provides in part: "No ... decision ... shall be reversed or affected by reason of any error ... unless it shall appear ... that by reason of such error ... the same party complaining or appealing sustained and suffered substantial injury, and that a different result would have been probable if such error ... had not occurred or existed."[7] The City thus contends that even conceding it abused its discretion by failing to proceed in the manner required by the guidelines, the trial court properly found this abuse of discretion was not prejudicial to either petitioners or the public because the state's comments and the City's responses would not have altered the City's ultimate decision to proceed with the project. While conceding the City's analysis is generally accurate with regard to the usual mandate proceeding under Code of Civil Procedure section 1094.5, we conclude it ignores specific provisions in CEQA and, if followed, would seriously undermine the purpose for which CEQA was enacted.
(3) CEQA is essentially an environmental full disclosure statute, and the EIR is the method by which this disclosure is made. "In many respects the EIR is the heart of CEQA." (County of Inyo v. Yorty (1973) 32 Cal. App.3d 795, 810 [108 Cal. Rptr. 377].) The purpose of an EIR "is to provide public agencies and the public in general with detailed information about the effect which a proposed project is likely to have on the environment, ..." (§ 21061; italics added.) We have referred to an EIR as "an environmental `alarm bell' whose purpose it is to alert the public and its responsible officials to environmental changes before they have reached ecological points of no return." (Id., at p. 810.) This informational purpose cannot be served if the required information is not received and disseminated by the local agency until after it has *1021 reached a decision. For this reason, the guidelines require that the lead agency allow adequate time for the public and other agencies to critically evaluate the draft EIR (Guidelines §§ 15160, 15160.5) and include these comments and recommendations and the responses of the lead agency in the final EIR. (Guidelines § 15146.)
(4) The final decision on the merits of the project is, of course, left in the hands of the lead agency. (San Francisco Ecology Center v. City and County of San Francisco (1975) 48 Cal. App.3d 584, 589 [122 Cal. Rptr. 800].) In passing on questions under CEQA the trial court's duty is to consider the legal sufficiency of the steps taken by the local agency, and not to consider the validity of the conclusions reached. (Running Fence Corp. v. Superior Court (1975) 51 Cal. App.3d 400, 431 [124 Cal. Rptr. 339].) The trial court is directed not to "exercise its independent judgment on the evidence but shall only determine whether the act or decision is supported by substantial evidence in light of the whole record." (§ 21168.) (2b) The trial court in the present case, by determining the comments of the state agencies would not have made any difference to the city council (even though these comments were never brought before the council), exercised its independent judgment on the value of this evidence in contravention of the statute.
By focusing its consideration of prejudice on the result, the trial court ignored the prejudice to the public caused by the unavailability of the comments from the state agencies at the time of the March 11 hearing. It was impossible for the trial court to know what effect these expert criticisms would have had on public comments, presentations and official reaction. Its independent judgment that the information was of "no legal significance" amounts to a "post hoc rationalization" of a decision already made, a practice which the courts have roundly condemned. (No Oil, Inc. v. City of Los Angeles (1974) 13 Cal.3d 68, 81 [118 Cal. Rptr. 34, 529 P.2d 66].) In failing to submit the draft EIR to the state clearinghouse as required and preparing the addendum EIR after the project had been approved, the City concededly proceeded in a manner contrary to the requirements of law. "This failure cannot be excused on the theory that the council might have approved the ... project anyway; `[t]o permit an agency to ignore its duties ... with impunity because we have serious doubts that its ultimate decision will be affected by compliance would subvert the very purpose of the Act.'" (Ibid.)
(5) We recognize the guidelines are subject to a construction of reasonableness so that the court does not impose unreasonable extremes or intrude into the area of discretion as to the choice of action to be taken. (Karlson v. City of Camarillo (1980) 100 Cal. App.3d 789, 805 [161 Cal. Rptr. 260]; Residents Ad Hoc Stadium Com. v. Board of Trustees (1979) 89 Cal. App.3d 274, 287 [152 Cal. Rptr. 585].) In both Karlson and Residents Ad Hoc Stadium Com., *1022 however, the central issue of concern was whether the local agency had satisfied the requirement of examining alternatives to a project. (Guidelines § 15143, subd. (d).) In such a situation "[a]bsolute perfection is not required" or obtainable, as there are endless alternatives to a project. (Residents Ad Hoc Stadium Com. v. Board of Trustees, supra, 89 Cal. App.3d at p. 287.) A good faith effort to comply with a statute resulting in the production of information is not the same, however, as an absolute failure to comply resulting in the omission of relevant information. While the guidelines allow for flexibility of action within their outlines, they are not to be ignored. They are entitled to great weight and should be respected by the courts unless they are clearly erroneous or unauthorized. (City of Santa Ana v. City of Garden Grove (1979) 100 Cal. App.3d 521, 530 [160 Cal. Rptr. 907].) In discussing the requirement that the final EIR contain the comments received and the lead agency's responses (Guidelines §§ 15146, subd. (a)(2), (b); 15027, subd. (b)), the Fifth District stated "the [city] must describe the disposition of each of the significant environmental issues raised and must particularly set forth in detail the reasons why the particular comments and objections were rejected and why the [City] considered the development of the project to be of overriding importance." (People v. County of Kern (1974) 39 Cal. App.3d 830, 841 [115 Cal. Rptr. 67].) "Moreover, where comments from responsible experts or sister agencies disclose new or conflicting data or opinions that cause concern that the agency may not have fully evaluated the project and its alternatives, these comments may not simply be ignored. There must be good faith, reasoned analysis in response. (Italics added.) [Citations.] Only by requiring the County to fully comply with the letter of the law can a subversion of the important public purposes of CEQA be avoided, and only by this process will the public be able to determine the environmental and economic values of their elected and appointed officials, thus allowing for appropriate action come election day should a majority of the voters disagree." (Id., at p. 842; italics added.) The trial court found that of all the comments received from the state clearinghouse "each of these except for solid waste management and transportation were discussed in the environmental impact study and the City Council meeting." The trial court therefore expressly found at least two areas raised by the comments from OPR were neither discussed in the final EIR nor presented to the city council.
Were we to accept respondent's position that a clear abuse of discretion is only prejudicial where it can be shown the result would have been different in the absence of the error, we would allow just such a subversion of the purposes of CEQA. Agencies could avoid compliance with various provisions of the law and argue that compliance would not have changed their decision. Trial courts would be obliged to evaluate the omitted information and independently determine its value. This prospect has led other courts to recognize that a failure to proceed in the manner prescribed by law may alone be a prejudicial abuse of discretion. (Cleary v. County of Stanislaus (1981) 118 Cal. App.3d 348, 352 *1023 [173 Cal. Rptr. 390]; People v. County of Kern, supra, 39 Cal. App.3d at p. 840.)[8] (2c) We conclude that where that failure to comply with the law results in a subversion of the purposes of CEQA by omitting information from the environmental review process, the error is prejudicial. The trial court may not exercise its independent judgment on the omitted material by determining whether the ultimate decision of the lead agency would have been affected had the law been followed. The decision is for the discretion of the agency, and not the courts.
II
We now consider the specific deficiencies in the EIR advanced by petitioners.
The EIR must address the comments of OPR and the other state agencies. In particular, the City must formulate adequate responses to these comments. (Guidelines § 15146.) (6) We agree with petitioners, and the City concedes, that OPR's comment regarding "infill" development was not adequately answered.[9] OPR's comment raised specific concerns which the City answered in a nonspecific and general way. The City's response does not evidence "`"a good faith, reasoned analysis in response,"'" which must be remedied. (Cleary v. County of Stanislaus, supra, 118 Cal. App.3d at p. 358.)
When discussing certain identified significant impacts on the environment (increased vehicle emissions, construction activities, increased population, particularly in school-aged children), the City Council's Findings and the EIR stated these impacts would be "partially mitigated." The trial court found "partial mitigation" meant the same as "mitigation" although "the mitigation is slight as to the preservation of Tokay vineyards as long as possible." (7) A finding of "partial mitigation" does not comport with the purpose of CEQA to "avoid or substantially lessen such significant effects" on the environment. (§ 21002.) We agree with petitioners that the finding of "partial" mitigation is of little value to someone who must decide whether certain recognized significant impacts have been avoided or substantially lessened to an acceptable level. *1024 (Guidelines §§ 15088, 15089.)[10] "Partially" mitigated is sufficiently ambiguous to allow for a range of meaning from almost unaffected to almost eliminated.[11] (8) Administrative findings are deemed adequate "if they are sufficient to apprise interested parties and the courts of the bases for the administrative action." (San Francisco Ecology Center v. City and County of San Francisco, supra, 48 Cal. App.3d at p. 596.) On remand, the City would be well advised to specify whether or not the identified significant impacts have been avoided or substantially lessened. To advise interested parties, in other words, exactly how "partial" are the mitigation measures.
(9) Petitioners' final contention is that the EIR failed to consider the pending annexation of the area containing the Johnson and Tandy Ranches.[12] The City urges such consideration was unnecessary as LAFCO was the proper lead agency on the annexation project and it fulfilled its responsibilities with a negative declaration which petitioners did not challenge. The City's argument erroneously assumes that the annexation and the Johnson-Tandy development are entirely unrelated projects. They are clearly interconnected, as the proposed development will be annexed to the city, and to adopt the City's position would defeat CEQA's mandate "that environmental considerations do not become submerged by chopping a large project into many little ones each with a minimal potential impact on the environment which cumulatively may have disastrous consequences." (Bozung v. Local Agency Formation Com. (1975) 13 Cal.3d 263, 283-284 [118 Cal. Rptr. 249, 529 P.2d 1017].) While the project which LAFCO apparently considered, the simple adjustment of the city *1025 boundaries, may not have any impact on the environment, it is difficult to perceive how the development and annexation of a large commercial and residential project will not. Responsibility for a project cannot be avoided merely by limiting the title or description of the project. "An accurate, stable and finite project description is the sine qua non of an informative and legally sufficient EIR." (County of Inyo v. City of Los Angeles (1977) 71 Cal. App.3d 185, 193 [139 Cal. Rptr. 396].)
Since LAFCO apparently considered only annexation, and not development, some responsible agency must consider the combined effect of the interrelated projects. The City was the lead agency on the development and a responsible agency on the annexation, as it retained final discretionary authority over the annexation. (§ 21069; Scuri v. Board of Supervisors (1982) 134 Cal. App.3d 400, 404 [185 Cal. Rptr. 18].) Had LAFCO prepared an EIR, the City could have used it, suitably supplemented, as one basis for its decision-making process. (Bozung v. Local Agency Formation Com., supra, 13 Cal.3d at p. 286.) But as the City was now considering a significantly different project than that considered by LAFCO, annexation plus development, the City, as a responsible agency, was required to prepare a subsequent or supplemental EIR addressing the consequences of annexation. (Guidelines §§ 15067, 15067.5.) On remand, the City will have the opportunity to do this.
We recognize the ultimate outcome in this case may be substantially similar to the result reached herein by the trial court. We conclude, however, that neither the prescience of the trial court nor the economic hardships of delay to the applicant can justify the approval of the project without compliance with the law. (People v. County of Kern (1976) 62 Cal. App.3d 761, 776 [133 Cal. Rptr. 389].) Accordingly, the judgment is reversed with directions to issue the writ of mandate as prayed.
Sparks, J., and Sims, J., concurred.
NOTES
[1] City has heretofore moved this court to take judicial notice under Evidence Code section 451, of certain records of the City of Lodi and the County of San Joaquin and of specified state statutes and regulations. That motion is granted.
[2] One of the significant functions of a LAFCO is to "`review and approve or disapprove'" annexation of territory to local agencies. (Gov. Code, § 54790; Bozung v. Local Agency Formation Com. (1975) 13 Cal.3d 263, 274 [119 Cal. Rptr. 215, 531 P.2d 783].)
[3] "The State Clearinghouse in the Office of Planning and Research is responsible for distributing environmental documents to State agencies, departments, boards, and commissions for review and comment." (Cal. Admin. Code, tit. 14, § 15051, subd. (b).)
[4] The actions under consideration by the City (general plan amendment, prezoning, tentative map approval) required public hearings. (Gov. Code, §§ 65351, 66451.3.)
[5] Public Resources Code section 21168 provides: "Any action or proceeding to attack, review, set aside, void or annul a determination, finding, or decision of a public agency, made as a result of a proceeding in which by law a hearing is required to be given, evidence is required to be taken and discretion in the determination of facts is vested in a public agency, on the grounds of noncompliance with the provisions of this division shall be in accordance with the provisions of Section 1094.5 of the Code of Civil Procedure. [¶] In any such action, the court shall not exercise its independent judgment on the evidence but shall only determine whether the act or decision is supported by substantial evidence in the light of the whole record."
All further statutory references shall be to the Public Resources Code unless otherwise noted. Further references to the guidelines under Title 14 of the California Administrative Code shall be noted by the word "Guidelines."
[6] Code of Civil Procedure section 1094.5 provides in relevant part: "(b) The inquiry in such a case shall extend to the questions whether the respondent has proceeded without, or in excess of jurisdiction; whether there was a fair trial; and whether there was any prejudicial abuse of discretion. Abuse of discretion is established if the respondent has not proceeded in the manner required by law, the order or decision is not supported by the findings, or the findings are not supported by the evidence."
[7] The City urges Code of Civil Procedure section 475 is made applicable by Code of Civil Procedure section 1109 which provides: "Except as otherwise provided in this title, the provisions of part two of this code are applicable to and constitute the rules of practice in the proceedings mentioned in this title." Code of Civil Procedure section 1094.5 is mentioned in "this title" (title I), and Code of Civil Procedure section 475 is found in part two of the Code of Civil Procedure.
[8] "The standard for review of the County's action is whether it prejudicially abused its discretion. Such abuse is established if the County has not proceeded in a manner required by law...." (Cleary v. County of Stanislaus, supra, 118 Cal. App.3d at p. 352; italics added.)
[9] The comment from OPR stated: "Infill development wherever possible and development at higher densities within the urban growth limits could lessen the increasing pressure on agricultural lands around the city." The City's response to this comment was in part: "There are two types of `infill' which might be considered in the City of Lodi. The first is development of vacant parcels which are already in the City and the second is development or redevelopment at higher densities." The response then goes on to discuss the problems of increasing density, but says nothing about the availability of vacant land or the feasibility of developing this land before developing outlying areas.
[10] Section 15088 of the guidelines states in part: "(d) A public agency shall not approve or carry out a project as proposed unless the significant environmental effects have been reduced to an acceptable level. [¶] (e) As used in this Section, the term `acceptable level' means that: [¶] (1) All significant environmental effects that can feasibly be avoided have been eliminated or substantially lessened as determined through findings as described in Subsection (1), and [¶] (2) Any remaining unavoidable significant effects have been found acceptable under Section 15089." Section 15089 of the guidelines requires a "statement of overriding considerations" where significant impacts are identified and mitigation is not feasible.
[11] For example, the EIR states the impact of additional students on the Lodi Unified School District can be "partially mitigated by the payment of the residential `bedroom fee.'" OPR commented that reliance on such a bedroom tax did not appear to be a viable mitigation measure in light of an Attorney General opinion stating that such taxes are "special taxes" within the meaning of Proposition 13 and therefore require a two-thirds vote of the electorate. The City's response did not examine this comment in any meaningful way. The City stated: "The impact of growth on the Lodi Unified School District is clearly cumulative, with the greatest impact the result of development in the North Stockton area. A more thorough analysis of school impact, and alternative mitigation measures are beyond the scope of this EIR."
[12] It is noteworthy that the final EIR does not even mention the Tandy Ranch development. In the addendum, the City states that development of the Tandy Ranch "is covered in the discussion on development of the remaining area, outside of the two cited projects [Johnson and Evans Ranches]." The EIR simply states there are "no specific development proposals" for this area, and it is anticipated it will be developed in residential uses consistent with the general plan. On remand, the City will have an opportunity to discuss the specific Tandy proposal in the EIR and why, if the area was to be developed consistently with the general plan, a general plan amendment was necessary for the Tandy development.
|
tomekkorbak/pile-curse-small
|
FreeLaw
|
Mikel Rico, Ander Herrera, and Aymeric Laporte all scored first-half goals before Helder Barbosa pulled one back for the visitors in the 34th minute.
But Bilbao continued to dominate in the second half at its new San Mames Stadium as Aritz Aduriz restored the three-goal cushion before Ibai Gomez scored a brace, including a late penalty.
Athletic enjoys a four-point cushion in fourth at the halfway point of the season, while Almeria sits three points above the drop zone.
Atletico Madrid's title credentials got a big test later Saturday when it welcomed Barcelona to the Vicente Calderon Stadium in a matchup between the co-leaders.
Barcelona and Atletico Madrid were satisfied with a share of the spoils after they cancelled each other out in an intense 0-0 draw.
The stalemate at the Calderon moved the joint La Liga leaders on to 50 points at the midpoint of the season.
Third-placed Real Madrid will close to within three points if they can beat mid-table Espanyol on Sunday (1800 GMT).
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
As the muscles of the vaginal wall lose their tautness and the vagina enlarges, vaginal tightness decreases and the female may experience diminished sexual satisfaction and sensation. In addition, women often desire sexual pleasure but do not want to engage in risky or casual sex. There is a continuing need for devices that address such issues.
United States Patent Application Publication Number 20030093016 describes a massager with a rotation shaft having arcuate grooves defined in a periphery of the rotation shaft and a guide received in the arcuate grooves so that when the rotation shaft rotates, the guide is able to control the rotation shaft to rotate and extend.
United States Patent Application Publication Number 20090281373 describes a sexual aid device and method for inserting and occupying space within a human female's vagina to provide a sensation of increased fullness to the female and a sensation of increased tightness and friction to a penis of a human male during sexual intercourse, thereby enhancing sexual arousal of both the female and the male. The sexual aid can be a member having a bulbous end for insertion and a tapered end for externally grasping and manipulating the member. The tapered end may include a hooked protrusion for providing anal stimulation to the female. The member may contain one or more vibrating devices. The member may further include a generally planar surface featuring a trough and can include two arced terminuses oriented in opposing directions. An internal pellet-rotating device may be installed within the tapered end of the member to produce mechanical friction in and around the vagina.
U.S. Design Pat. No. D515219 discloses the ornamental design for an attachment sleeve for a vibrator head. The sleeve comprises a plurality of protrusions to provide additional sexual stimulation to a female in need of such stimulation.
None of the above inventions and patents, taken either singly or in combination, is seen to describe the instant invention as claimed.
|
tomekkorbak/pile-curse-small
|
USPTO Backgrounds
|
OVER THE last two decades, numerous books, articles and press commentaries have hailed India as the next global power. This flush of enthusiasm results partly from the marked acceleration in India’s economic growth rate following reforms initiated in 1991. India’s gross domestic product (GDP) grew at 6 percent per year for most of the 1990s, 5.5 percent from 1998 to 2002, and soared to nearly 9 percent from 2003 to 2007, before settling at an average of 6.5 percent until 2012. The upswing offered a contrast to what the Indian economist Raj Krishna dubbed “the Hindu rate of growth”: an average of 2.5 percent for the first twenty-five years following India’s independence in 1947. The brisker pace pulled millions from poverty, put Indian companies (such as Indian Oil, Tata Motors, Tata Steel, Infosys, Mahindra, Reliance Industries and Wipro) even more prominently on the global map, and spawned giddy headlines about India’s prowess in IT, even though that sector accounts for a tiny proportion of the country’s output and workforce. India also beckoned as a market for exports and a site for foreign investment.
The attention to India has endured even though its economic boom has been stymied, partly by the 2008 global financial crisis, with growth remaining below 5 percent for eight consecutive quarters from early 2012 to early 2014. In the quarter lasting from April to June 2014, growth ticked back up to 5.7 percent, but it is too soon to tell whether or not this represents the beginning of a more sustained expansion. The persistent interest also stems from analyses that portray India’s and China’s resurgence as part of a shift that is ineluctably returning the center of global economic power to Asia, its home for centuries before the West’s economic and military ascent some five hundred years ago. Yet even those who dismiss the proponents of this perspective as “declinists” are drawn to the “India rising” thesis, in part because of the transformation in U.S.-Indian relations during the last two decades and the allure of democratic India as a counterweight to authoritarian China. For much of the Cold War, the relationship between Washington and New Delhi ranged from “correct” to “chilly.” Nowadays, in contrast, predictions that China’s ascendency will produce an Indo-American entente, if not an alliance, are commonplace.
But is India really ready for prime time? India has many of the prerequisites for becoming a center of global power, and, assuming China’s continued and unhindered ascent, it will play a part in transforming a world in which American power is peerless into one marked by multipolarity. India has a vast landmass and coastline and a population of more than one billion, faces East Asia, China and the Persian Gulf, and has a wealth of scientific and technological talent along with a prosperous and well-placed diaspora. But the elemental problems produced by poverty, an inadequate educational system and pervasive corruption remain, and India’s mix of cultural diversity and democracy hampers rapid reform. For now, therefore, the ubiquitous reports of India’s emergence as a great power are premature at best. There’s no denying India’s ambition and potential, but as for its quest to join the club of great powers, the road is long, the advance slow and the arrival date uncertain. Prime Minister Narendra Modi of the Hindu-nationalist Bharatiya Janata Party (BJP) may seek to be a reformer, and he enjoys a reputation as a charismatic leader and skilled manager. He is also a proponent of improving ties with the United States and Israel. But he will face daunting obstacles in his bid to push India into the front rank of nations.
DESPITE ITS many blemishes, India’s democracy has increased the country’s appeal in Europe and America and prevented quarrels over human rights from complicating the expansion of economic and security transactions with the West. This is in stark contrast to the intermittent skirmishes over human rights that have marred the West’s relationship with China and Vladimir Putin’s Russia. In defending the 2005 U.S.-Indian nuclear agreement, the George W. Bush administration (and American experts who backed the deal) noted that India is a fellow democracy. Barack Obama—who hosted Modi in September 2014—pledges to back India’s bid for a permanent seat on the UN Security Council and invariably invokes the country’s democratic record when he does so.
Yet in East and South Asia, two regions in which India has been most active on the diplomatic and strategic front, its democratic model hasn’t yielded it much influence, or even stature. If anything, the economic achievements of China and Singapore—and the other Asian “tigers” during their undemocratic decades—in delivering rapid growth and modernization and improving living standards have made a bigger impression. India, weighed down by the compromises, delays and half measures necessitated by its democratic structure, comes across as a lumbering, slow-motion behemoth that’s never quite able to sustain whatever momentum it manages to gain on occasion or to bridge the gap between proclaiming reforms and implementing them.
The Indian government, for its part, has crafted sundry soft-power slogans and strategies, among them “India Shining” and the even sappier “Incredible India.” The latter was not simply rhetorical excess—though it was that—or even solely a catchphrase to capture additional tourist revenue. It was also part of a larger effort to increase transactions between India and the West and to recast India’s image. Yet there’s scant evidence that India is seeking to use culture as a means to create a transnational bloc in Asia, or anywhere else. With all due respect to the late Samuel P. Huntington, who listed “Hindu civilization” among the cultural-religious blocs whose rivalry he believed would supplant the competition and conflict among states, there’s no sign that India plans to mobilize that form of soft power, or that it could if it tried. Hyping Hindu discourse in a multiconfessional country, one with more than one hundred million Muslims, would amount to jeopardizing internal security to road test a quixotic theory that emanated from Harvard Yard. Besides, Hinduism is too torn by divisions of class, caste, language and region to make such a strategy feasible; the Hindu diasporas in Asia and Africa, for their part, would have little to gain and much to lose by embracing it. Modi and the BJP will doubtless spice up their rallies with Hindu-nationalist verbiage, but they are likely to find that this tactic, far from mobilizing unity, sows disunity in what is a country of multiple faiths and provokes India’s neighbors, above all Pakistan, while yielding little of tangible value in return. Nor will the project of “Hindutva” help the BJP extend its base beyond northern India’s “Hindu heartland” and into the country’s southern regions, where its message has much less appeal.
The difficulty with “soft power,” a concept now embedded in the lexicon thanks to another Harvard professor, Joseph Nye, is that it’s hard to determine its effectiveness, or even to figure out quite how it works. Few would deny that a country’s political system, cultural achievements and image can, in theory, add to its allure. What’s much less clear, though, is how this amorphous advantage goes beyond evoking warm feelings and yields actual influence, defined as the capacity to shape the policies of other countries.
Did Americans (or Europeans or Japanese) gain a greater understanding and appreciation of India and begin to take it seriously because of India’s soft power? Unlikely, given how little the outside world interests the citizens of the United States, never mind that their country is engaged in every corner of the globe on a host of issues and in ways that affect the lives of millions. Did the greater coverage of India, in part perhaps because of New Delhi’s endeavors on the soft-power front, increase the attention it received from America’s well educated, well heeled and politically powerful? Possibly, based on the data on tourism, the increased number of courses on India-related topics at universities, and the growing popularity of Indian prose-fiction writers and attire bearing traces of Indian culture. But one can yearn to see the Ajanta Caves, read R. K. Narayan or Arundhati Roy, sport a kurta, or be able to tell one genre of Indian classical music or dance from another without giving so much as a thought to the pros and cons of developing military ties with India, championing its quest for a spot on the UN Security Council, or expanding trade and investment ties with it. Soft power, apart from being a slippery principle, can only do so much in practice. It simply cannot compensate for the deficit India has in other, tangible forms of power, which remains the greatest impediment to India’s becoming a global power.
THE HEYDAY of central planning and import-substitution-based economic policy, which had extraordinary influence in India, is over. The BJP’s thumping victory over the Congress Party, which itself initiated economic reforms in the 1990s, betokens an even stronger push toward privatization and foreign direct investment (FDI). While the principal aims of India’s economic strategy will naturally be growth and prosperity, the country’s leaders understand the strategic benefits that are to be gained from having the business community of important democratic countries (the United States, Britain, Japan, Germany, France, South Korea and Brazil, for example) acquire a strong stake in India’s market.
Still, to gain substantial economic influence, India’s leaders will have to implement many politically unpopular reforms that are required to restore and maintain high rates of growth, boost trade and attract greater sums of FDI. These include cutting subsidies for basic commodities, revamping entrenched and rigid labor laws, opening protected sectors—such as retail, agriculture and services—to foreign competition, and stamping out tax evasion, which in India is both ubiquitous and an art form. These aren’t the only steps needed to make the economy grow faster and more sustainably so that the increased resources required to bolster India’s bid for great-power status become available.
Take education. While India’s progress in educating what fifty years ago was a largely illiterate society has been impressive, there’s much more that needs doing on this front to boost Indian economic power. The countries that are already front-rank economic powers achieved near-universal literacy long ago, while in China, Indonesia and Malaysia more than 90 percent of the population is literate. In India, the figure is 74 percent. While that’s a massive increase compared to the proportion in 1947, the quality of Indian schools is uneven because problems such as moribund curricula, substandard classrooms and widespread absenteeism among teachers abound. The success of states like Kerala, Tamil Nadu and Himachal Pradesh contrasts starkly with the failures of the educational system in others, such as Bihar, Uttar Pradesh and Madhya Pradesh. What might be called the “effective literacy rate” is thus lower than suggested by the national average, especially in rural areas (where about 70 percent of the population still lives) and among females. Moreover, India’s schools are not producing the skilled labor needed by local and foreign firms at anywhere near the required rate, and too many of those with degrees in science and engineering are not readily employable on account of the poor quality of their training. Indian higher education has a proud history that spans centuries and boasts some venerable institutions, but according to economists Jagdish Bhagwati and Arvind Panagariya, even its elite engineering and management schools don’t make the “top 200” list in global surveys; by contrast, the best universities of other major Asian economic powers have cracked the top 100.
Likewise, vast sums will have to be mobilized (from tax revenues or government-backed, dollar-denominated bonds) to modernize and expand India’s antediluvian infrastructure. The list of pressing needs is long. It includes building or revamping water-management and sanitation systems; bridges, railways and roads; harbors and airports; and power plants (to end chronic electricity shortages and even blackouts). Fixing India’s infrastructure by building more rail and air networks, bridges and ports won’t be cheap: the price tag is estimated to be $1 trillion. But absent a colossal effort, the drag on India’s growth could amount to 2 percent a year. Access to computers and the Internet must also be scaled up dramatically if India is to compete successfully in the global marketplace. Despite the publicity India’s prowess in IT receives, society-wide access to information technologies remains unimpressive. In 2008, according to the World Bank, India had 7.9 Internet users per 100 people. That number had grown to 15.1 by 2013. But by then Guatemala had 19.7, Haiti 10.6, Kyrgyzstan 23.4 and the Dominican Republic 45.9. The figure for China was 45.8, in Germany and France and the United States it was over 80, and in Denmark it was 94.6. Even allowing for India’s mammoth size and population, this dismal comparison speaks for itself.
India faces an even more fundamental problem—one that makes prognostications about its impending ascent to great-power status sound surreal. Simply put, the country still lacks the human capital required for acquiring the power and influence commensurate with its leaders’ aspirations. Consider some pertinent numbers. India’s per capita income in 2013 was $5,350. By comparison, China’s was $11,850, Japan’s was $37,630 and—tellingly—South Korea’s, which was comparable to India’s in the early 1950s, was $33,440. Nearly one-third of Indians still subsist on $1.25 a day or less. India places 135th out of 187 on the UNDP’s Human Development Index, a composite measure of access to basic necessities. Similarly, it ranks 102nd out of 132 on the Social Progress Index, which assesses countries’ records in meeting people’s essential social and economic needs. In UNICEF’s rankings, India (with 48 percent) places fourth in the proportion of children who are stunted and second (43 percent) in the percentage of those who are underweight (“severe” or “moderate”). The handful of Asian countries with worse records includes Afghanistan, Pakistan, Myanmar and Papua New Guinea—not good company for a country that yearns to be global power. As Jean Drèze and Amartya Sen demonstrate in a recent book, despite its robust economic growth during much of the last two decades, India lags far behind the other “BRICS” in such measures as citizens’ access to potable water and basic health and sanitation services, the immunization of children and nutrition. Worse, its performance is poor even relative to some of the world’s poorest countries. In India’s own neighborhood, Bangladesh and Nepal, despite having smaller per capita incomes and slower growth rates, have done better on several key quality-of-life measures.
Among the consequences of having shopworn infrastructure, relatively low literacy rates and a substandard educational system, along with an industrial manufacturing sector that’s small relative to that of its competitors—all problems that the Asian “tigers,” and China thereafter, overcame—is that, as wages in China have risen, multinational corporations haven’t relocated to India to the degree one would expect given the size of the Indian market and the low cost of Indian labor. Instead, they have gone elsewhere—not just because of India’s inadequate human capital and infrastructure, but also because of bureaucratic barriers that hinder business and investment and persist despite the reforms of the past two decades. These problems help explain why India places 134th out of 189—just below Yemen—in the World Bank’s “Ease of Doing Business Index.” Not surprisingly, India attracts far less FDI than it needs to boost growth and productivity. From 2010 to 2012, FDI inflows to India averaged $27 billion a year, compared to $119.5 billion for gargantuan China, $55 billion for tiny Singapore and $60 billion for Brazil, a member of the BRICS coalition to which India belongs. Malaysia attracted $10.3 billion and Thailand $8.3 billion—both far more than India in per capita terms. Yet the former has a population of thirty million (2.3 percent of India’s) and the latter sixty-seven million (5 percent of India’s).
It’s often said that India, unlike China, has the advantage of a relatively young population and will therefore not face labor shortages. What often goes unmentioned is that the largest population increases are occurring in some of India’s poorest states (Madhya Pradesh, Uttar Pradesh and Bihar), not in those (such as Kerala and Tamil Nadu) that have been the best at meeting basic economic needs and in increasing literacy.
These same deficiencies have prevented India from establishing a significant position in global trade. While it does rank fifteenth on a list of the top twenty economies in the dollar value of merchandise trade, its exports and imports combined in 2012 totaled $784 billion. Several countries with smaller GDPs and much smaller populations outranked it, including Singapore, Belgium and the Netherlands. China’s trade, valued at nearly $4 trillion and about on par with that of the United States, accounted for 10.5 percent of the value of all international trade in 2012. The dollar value of India’s trade amounted to one-fifth of China’s and to 2 percent of the global total, even though India has roughly 17.5 percent of the world’s population, about the same proportion China does. India does fare better in trade in commercial services: in 2012, it ranked seventh in a list of the top exporting countries; but its share was still only 74 percent of China’s (which still lacks a powerful service sector) and 4.4 percent of the world total, comparable to that of Spain and the Netherlands.
Apart from the quantity and complexity of the problems that have to be addressed, India’s democratic system is not conducive to enacting controversial economic changes quickly. Because of their authoritarian political systems, China, as well as Taiwan and South Korea in their nondemocratic phases, could push through sweeping reforms that helped establish the foundation for rapid industrialization and economic growth. India’s raucous, vibrant democracy is rightly admired, but it impedes the implementation of deep economic reform. Creaky coalition governments are common at the center, and headstrong local power brokers (the chief ministers of its twenty-nine states) can be veritable kingmakers. Labor unions are powerful, and militant and caste-based political alliances are impenetrable yet influential. Then there’s an electorate that’s not shy about registering its displeasure at the ballot box when economic reforms bring pain or when the increased competition from abroad threatens traditional sectors, such as small retail shops, agriculture or industries long shielded by various forms of protectionism. In principle, Modi, who faces the challenge of overcoming such obstacles, is well placed to do so given his economic track record, his popularity and the BJP’s massive electoral mandate. Modi may style himself as a no-nonsense, business-friendly, results-oriented manager, but he won’t be able to demolish these deeply rooted impediments to reform without a tough struggle. Running Gujarat was one thing. Acting as India’s CEO will be quite another.
DURING THE past two decades in particular, Indian leaders have looked beyond their immediate neighborhood and adopted a more ambitious strategy. The “Look East” policy, a case in point, seeks to expand and deepen India’s presence in East Asia so that China does not have a free hand in shaping the strategic and institutional landscape there. More to the point, it is designed to strengthen security ties with the Asian countries located around China’s perimeter, particularly those unnerved by the prospect of a Pax Sinica and anxious about America’s staying power and the narrowing gap in power between the United States and China.
India has been active on a variety of fronts in East Asia. It has been training Myanmar’s naval officers and selling the country maritime surveillance aircraft. It has provided Vietnam loans for buying Indian arms and has signed a deal, despite profuse Chinese protests, to tap Vietnamese oil deposits in the South China Sea, adjacent to islands claimed by Beijing. It has been engaged in regular security consultations with Japan, Israel, Australia, Indonesia and the United States, and has participated in naval exercises in the Pacific alongside America, Japan, Singapore and Australia. It also signed a free-trade agreement with the Association of Southeast Asian Nations in 2009. While specialists on Indian foreign policy tally these and other triumphs with care, what’s sometimes missing from their analyses is a comparative perspective, which would show that China’s presence in East Asia, and the resources it has deployed to gain influence there, far exceed India’s on every dimension that matters, and by a wide margin.
Another part of India’s strategy has been expanding the power and reach of its armed forces. Much has been accomplished, and the balance between India and China is a far cry from what it was in 1962, when a military rout that revealed Indian troops’ lack of basic equipment created a political firestorm at home. The Chinese would find it considerably harder now to prevail swiftly in a war along the border. Still, India trails China in military power, and a quick comparison makes the disparity evident. Though the two countries have populations of comparable size, India’s GDP is a mere 22.5 percent of China’s. This gap gives Beijing a big advantage in mobilizing and applying various power-relevant resources—and one that is likely to widen given that China’s rate of growth, though it has slowed of late, still exceeds India’s. India and China have devoted a comparable proportion of GDP to defense in recent years: about 2.5 percent and 2.0 percent between 2008 and 2013, respectively. Yet because of the GDP disparity China can, with a smaller burden on its economy, spend far more on its military machine than India: $188 billion compared to $47 billion in 2013. The actual gap is likely even larger, as China’s official figures probably understate its true level of defense spending.
Nor is it just a matter of the spending mismatch: whether it’s armor, airpower, cyberwarfare, air-defense systems or power-projection capacity, China retains a significant advantage over India, in qualitative and quantitative terms. Some numerical comparisons of major categories of armament make this evident. In combat aircraft, attack helicopters, submarines and destroyers, China’s lead ranges from 2:1 to 4:1. Some strategists, Indian and Western, aver that the Indian navy now has the wherewithal to establish dominance over its Chinese counterpart and to block the lifeblood of the Chinese economy by controlling maritime passageways that provide China egress from East Asia. Leaving aside the fact that this scenario assumes a full-blown war in which the naval balance would be but one factor, the difficulty New Delhi faces is that China has far more economic resources than India to devote to seapower in the coming years. Besides, in 2013, the Indian navy received only 18 percent of the military budget, compared to 49 percent for the army and 28 percent for the air force, and a reallocation of resources, certain to be contentious, would be required to ensure maritime dominance over China. That’s possible in principle—leaving aside the inevitable interservice budget battles—but not easily accomplished given the threats India faces from the land and air forces of China and Pakistan, who continue to be aligned. Even if one concedes the claim about Indian naval superiority, Beijing can apply counterpressure in various ways, particularly by bolstering Pakistani military capabilities, using its well-developed strengths in cyberwarfare and striking across the Sino-Indian border. Even with India’s recent move to further strengthen its border defenses by creating a “mountain strike corps” of fifty thousand troops, the Chinese are likely to retain the advantage in numbers, mobility and firepower—and thus the wherewithal to mount offensive operations across the three main sections of the border: Ladakh-Xinjiang, Tibet-Uttarakhand and Arunachal Pradesh-Sikkim.
Modi has his work cut out for him. He will doubtless seek to reform India’s defense industries but will have to continue relying mainly on external suppliers. Russia, whose armaments dominate India’s army, navy and air force, will retain a natural advantage. But in recent years India has been dissatisfied by cost overruns in Russian armaments, the unreliability in the supply and quality of spare parts, and accidents aboard Russian-built submarines, and so it has sought to reduce its dependence on Moscow. Modi won’t burn bridges with Russia, but he will open the door more widely to American, European and Israeli suppliers. While Israel will remain a niche supplier for India, since the establishment of diplomatic relations in 1992, trade between the two countries has grown (it totaled $6 billion in 2012); so have Israel’s military sales, which cover radars, missiles of various sorts and reconnaissance aircraft. India has become Israel’s leading market for its arms exports, the annual worldwide total value of which is $7.5 billion, with India accounting for as much as $1.5 billion. Such transactions, which include intelligence sharing related to counterterrorism, are no longer controversial within India; Modi, who visited Israel while running Gujarat and attracted billions of dollars of Israeli investment in his state, has voiced his admiration of Israel’s economic and technological achievements and his desire to boost cooperation.
New Delhi’s strategy toward China goes beyond strengthening India’s armed forces. Since the bilateral military balance heavily favors Beijing, India has turned to a classic coalition strategy aimed at dispersing China’s military strength across what, given the size of the Chinese landmass, are far-flung fronts. This gambit, already well under way, will gain momentum. For reasons rooted in history and geography, India’s natural partners will be Australia, Indonesia, Japan, Vietnam and the United States, countries with which India’s military ties have grown during the last two decades. The increasing security cooperation between New Delhi and Tokyo in recent years is particularly significant and will increase because of their shared apprehensions about China. Given Japan’s economic and technological prowess, it could—if the increasing threat from China trumps domestic opposition—boost its military strength in fairly short order. With a GDP approaching $5 trillion, barely 1 percent of which it devotes to defense, this would only require a minimal increase in the defense burden. While East Asian states have been rattled by Prime Minister Shinzo Abe’s efforts to revise Japan’s “peace constitution” and to increase its military capabilities, India has welcomed them and embraces Japan as a strategic partner. In 2014, Japan and India decided to begin regular consultations between the two countries’ national-security leaders. This decision followed the initiation of yearly trilateral meetings among India, Japan and the United States in 2011. There is more involved in this than talk. Japan has participated in three—in 2007, 2009 and 2014—of the annual U.S.-Indian “Malabar” naval exercises, which were initiated in 1992 (they were suspended following India’s nuclear test in 1998). What bears watching is whether Japan’s 2014 decision to lift the ban—which dates back to 1967—on the export of military technology and arms leads to purchases by India as part of its push for military modernization and diversification. Tokyo’s 2013 offer to sell India the ShinMaywa US-2 amphibious aircraft, and India’s interest in buying fifteen of them, may represent a harbinger. Already, Japan and Australia have been in discussions over the latter’s purchase of ten Soryu-class Japanese submarines (worth $20 billion), a development that points to the potential for larger arms sales by Japan to India, especially given their shared concern about China’s expanding power.
USING DIPLOMATIC and economic means, India is also establishing a presence on China’s western and southwestern flank, in Afghanistan and Central Asia. It has positioned itself to play a major role in post-American Afghanistan by training Afghan security forces, building road networks and acquiring natural-resource deposits. But China has also been purchasing economic assets in Afghanistan, notably in energy and mining, and once the United States and its allies depart, Beijing will have to develop a strategy to defend these gains, which means that its presence in that country will grow, adding a new front to Sino-Indian competition.
China has overshadowed India in Central Asia, despite the emphasis the region receives from Indian strategists and New Delhi’s efforts to strengthen its position. India remains an observer rather than a full member in the Shanghai Cooperation Organization, among the many sources of Chinese influence in Central Asia. Indian energy companies have been bested by their Chinese counterparts in bids for shares in Kazakh companies and energy fields, most recently in the giant Kashagan offshore field, among the largest in the world. Pipelines recently built by China are drawing increasing volumes of Kazakh and Turkmen energy eastward. Trade and investment trends show that Beijing’s economic presence is fast overshadowing Russia’s, to say nothing of India’s, in what has been a Russian sphere of influence since the nineteenth century. India’s position is even weaker in the military sphere. Unlike China and Russia, it lacks direct access to the region. Its quest for access to the Ayni air base in Tajikistan, its first attempt to gain a military toehold, ran into Russian opposition—no matter that New Delhi had spent some $70 million to renovate it—and so Ayni’s operational value to India as a combat-aircraft platform remains uncertain.
The United States will be the key partner in India’s coalition strategy because it has more power to bring to the grouping than any other country and because Sino-American competition seems likely to intensify. Developments such as the 2005 U.S.-Indian nuclear deal—which effectively marked Washington’s recognition of India as a nuclear-weapons state and an abandonment of its punitive antiproliferation approach to New Delhi—have produced predictions of an alliance in the making. This forecast is faulty. For one thing, it makes light of the political obstacles within India, which are a legacy of Cold War frictions and the abiding suspicion, even animus, toward the United States within India’s left wing and on the nationalist right. It also underestimates India’s apprehensions about the loss of autonomy that could follow an alliance with the United States, a sentiment that persists in a country that has prided itself on hewing to nonalignment. These are among the reasons New Delhi has opted for a flexible, ambiguous position, one that’s unlikely to change under Modi, even as he expands the security cooperation with the United States that’s already in place. India has forged multiple ties with the United States and Europe, but it also has continued high-level political exchanges with China and is seeking to increase Sino-Indian trade. (China has become India’s biggest trade partner.) Moreover, during Chinese president Xi Jinping’s September 2014 visit to India—the first by a Chinese president in eight years—the two leaders signed a deal providing for $20 billion in Chinese investment in India’s infrastructure, especially railways, over five years. This was despite the controversy created by Chinese soldiers’ encroachment across the (still undemarcated) border, which coincided with Xi’s trip.
This multifaceted strategy is New Delhi’s likely course for the future. It gives India greater flexibility than would an alliance with the United States and provides two attendant advantages. First, India can expand ties with the United States on all fronts, calculating that Beijing will be forced to take account of America’s likely reaction should China contemplate coercive action against it. Second, India can improve its bargaining position against China, which will want to forestall the tightening of military bonds between India and the United States. A definitive alliance with America would deprive New Delhi of that strategic flexibility. As his predecessors did, Modi will continue to see China as India’s main security threat, but it’s simplistic to see him as a mere Sinophobe. He has expressed admiration on several occasions for China’s economic achievements and, while governing Gujarat, visited China and succeeded in attracting more Chinese investment than the chief minister of any other Indian state.
IF CHINA presents a problem for India, then Pakistan remains an even more acute one. The nature of India's Pakistan predicament has changed in three fundamental and unprecedented ways. First, India's conventional military advantage will be harder to use to good effect, because threats of war will be less credible now that the specter of nuclear escalation looms. This risk will be present in any war in which Pakistan suffers heavy losses, and will even constrain what India can do in response to another major terrorist attack that it traces to Pakistan. Stated differently, the greater the conventional military advantage India acquires over Pakistan, the more dangerous it may be to employ it. That's something that Modi will have to reckon with, even as his tough-guy image will put him under pressure to respond forcefully to Pakistan-based terrorism.
Second, Pakistan’s weakness is also starting to worry Indian strategists. Should Pakistan, which is beset by internal violence, fragment, India will face serious problems. Refugees will flow east. Jihadist groups will be able to operate with greater leeway in Kashmir, and even the rest of India, in the absence of a robust Pakistani state that can be pressured to hold them in harness. It’s not clear how such threats can be managed by utilizing India’s economic and military superiority.
Third, nuclear weapons, by raising the risks involved in waging conventional war, provide Pakistan more opportunities to support extremist Islamist groups whose targets now extend beyond Indian-controlled Kashmir and include, as the 2001 attack on the Indian parliament and the 2008 attack on Mumbai showed, the Indian heartland. India has about as many Muslims as Pakistan does, and the repression of Indian Muslims, or a popular backlash against them following terrorist attacks inside India, could generate domestic violence and upheaval that alienate an important and substantial segment of Indian society while empowering India’s radical nationalist forces. The result would be a vicious circle of violence that begets more violence and proves disastrous for India’s future.
It’s unclear whether Modi will be able to overcome these problems. Despite his smashing electoral victory, his success in office is anything but assured. The BJP, while generally seen as more favorable to private enterprise than the Congress Party (notwithstanding that it was on the latter’s watch that many of India’s market-friendly economic reforms were adopted), still contains constituencies committed to economic nationalism. They view globalization as a recipe for deindustrialization, foreign domination over key economic sectors, and impoverishment for small businesses and farmers. Their views, though sidelined in the 2014 campaign, could regain influence if Modi’s economic policies falter or cause pain without producing visible gains for ordinary Indians. India the superpower? Don’t bet on it.
Rajan Menon is Anne and Bernard Spitzer Professor of Political Science at the Colin Powell School of the City College of New York/City University of New York, a nonresident senior fellow at the Atlantic Council, and a senior research scholar at the Saltzman Institute of War and Peace at Columbia University.
Image: Wikimedia Commons/Dhruv/CC by-sa 3.0
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
The battle between Arkansas proponents of the two medical marijuana efforts that will appear on the ballot has gotten hotter since the state supreme court struck one of them—Issue 7—after a lawsuit challenging it was filed by Little Rock attorney Kara Benca, with the support of her husband, Patrick Benca, who is also an attorney.
Both Bencas say they are longtime members of NORML (National Organization for Reform of Marijuana Laws), that they want to see marijuana legalized, and that their concerns about Issue 7 were shared by many patients who claim to need medical marijuana.
Patrick Benca said that, due to those concerns, some of these sufferers wanted to sign on as the petitioners in a lawsuit challenging Issue 7. However, fearing that the lawsuit would anger other medical-marijuana proponents, and not wanting people already in pain to face that potential reaction, the Bencas decided that Kara would file as the sole petitioner.
Patrick Benca now says that he and Kara underestimated how fierce the response to their lawsuit would become. To illustrate the intensity of the debate among legalization proponents—and to explain his and his wife’s position—he sent me the following email from an irate supporter, along with his response.
I asked permission to publish them. He agreed. I have edited both slightly for clarity.
First, the email from a supporter who knew Patrick Benca from years ago:
“Long time since we bartended together. I never did think your wife would be SO against trying to get Medical Cannabis OFF the ballot. I guess neither of you have experienced someone that has battled cancer.
“I have three friends that have fought. Two have passed since 2012. I am disgusted with your decision to go after Arkansas Compassion. Opiates are what you need to go after! People die every day on those meds, and there is NOT a single recorded death from marijuana.
“Doctors won’t get on board because they are afraid of losing their licenses and outrageous salaries. Marijuana will bring in millions of tax dollars to our state, and the positives outweigh the negatives by a long shot.
“I am pretty sure the pharmaceutical companies and/or politicians are paying you under the table to go after this ‘volunteered organization’ that spent MANY hours/months over the last two years to get signatures. I am saddened that y’all decided to make this decision. You two should sit down with some popcorn this weekend and watch some YouTube. You might get educated for a change.”
The writer, who signed himself “Sincerely Pissed,” then provided links to several YouTube videos.
Here is Patrick Benca’s response:
“As with everyone I have respect for, I always make sure that he/she gets a fair shake and the benefit of the doubt. You will always get that from me. So, I am asking that you read what I have to say.
“My wife and I are for the outright legalization of marijuana. Period. That has always been our position. We began to understand this long before the opioid epidemic began getting the attention it does today. For years I have seen the faces and represented the lost souls of those addicted to opioids and other heinous drugs. I’ve seen more than you. I promise.
“So…marijuana. Here is what I have not seen in the last 16 years of my criminal defense experience:
“A client state that he killed, robbed, raped, or committed any other criminal act because of marijuana. Of course, the exception is those who engaged in transportation and delivery of this now-illegal drug. Another factor as to why legalizing is the way to go. I’m sure you and I can both wax on about the benefits of this truly wonderful plant.
“Medical Marijuana: I know this subject inside and out. I know the medical benefits through and through. There is not much I do not know on the subject. My wife and I have made it a passion. Our area of practice has given us opportunities to hear compelling stories. We have had a handful of clients who were veterans of our recent wars. I know the struggles of PTSD and have seen the miracle transition that marijuana provides. It’s breathtaking.
“I lay this brief summary of a background to possibly instill in you the passion my wife and I have on this issue.
“That said, issues 6 and 7: 6 is an amendment and 7 is an initiated act. Big difference. The amendment, if passed, would make it exceedingly difficult for legislators (a majority that oppose it) to slow down its implementation come January 1. If Issue 7 passed, the legislature would have a great amount of control and would promulgate rules to get it implemented and up and running. This is one of the reasons why more signatures are required to get the amendment on the ballot.
“In short, with Issue 6, the patients that need medical marijuana in Arkansas would have it likely far sooner than with the initiative (issue 7). With 6, you have nearly a bullet-proof piece of law that can only be undone by voters on a ballot after its passage AND it’s in the hands of patients faster.
“Self-Grow: this is the provision that prevented the medical marijuana act from being passed in 2012. The sponsors of that act polled voters on medical marijuana before running the petitions and getting it on the ballot. They had the numbers and it appeared that 55 to 60 percent of voters were in favor. Very solid numbers. It got on the ballot and failed at the ballot box. The sponsors couldn’t figure out what the problem was. So, they conducted a poll. They figured out that the failure was due to the ‘self grow’ provision. Arkansas voters were not comfortable with patients living outside the zone of a dispensary growing plants without regulation. These polls corroborated the voting percentages seen on Election Day. It was a huge defeat for the cause.
“The sponsors went back to the drawing board. Initially, I believe both David Couch [who backed Issue 6] and Melissa Fults [who backed Issue 7] wanted self-grow, but Couch was convinced that voters weren’t comfortable with it yet. So … baby steps. Ultimately, Couch and Fults split on the point and worked hard on advancing their respective issues.
“They are great people. Passionate in all aspects. David felt that the initiative was on the path of failing again because it included self-grow. If he was right, there would be nothing in Arkansas until another presidential cycle in 2020. There is no advocate that could let that happen. Too risky.
“We found out about the signature problem with Issue 7 about the same time others learned. It was known and a lawsuit was coming. Better it came from a medical marijuana supporter than an opponent. A lawsuit from an outright opponent of medical marijuana would have most assuredly killed both come election time.
“So, we decided to file. We had patients desperate to be the petitioner in this lawsuit because they felt, as we did, that the initiative would fail for a number of reasons, but most concerning was the self-grow aspect. They wanted assurance they could get access to marijuana sooner rather than later.
“Also, we had doctors who know the benefits of marijuana that wanted to be the petitioner. We decided that we did not want to put the very people that were meant to benefit from all of this work at risk of public scrutiny and professional scorn.
“Kara had no problems taking the heat for this cause. She didn’t even flinch. I don’t believe she would have ever fathomed the sheer hate sent her way. The threats. Being called a cunt. Right now, she is with my children at her parents’ house because of all this. My children had to be taken out of school. This is the thanks that she gets. And she is getting it from the very people she has had empathy for. Pretty fucked up, if you ask me. But not everyone is me, right?
“There is nobody who prays harder and thinks more about the people who would benefit from medical marijuana than Kara. She knows more and has seen more than you and I put together.
“Timing of the lawsuit: A lot of complaints are that voters do not get the opportunity to revisit the ballot box because they have already cast their vote. This isn’t the supreme court’s fault. The lawsuit was filed at the earliest possible moment. The rules in place and the procedures that you have to follow make it nearly impossible to get a measure removed from the ballot prior to it being printed.
“The legislature needs to change the timelines and deadlines to ensure sufficient time to challenge and, if successful, to have an issue scrubbed from the ballot. This would help ensure that voters are not disenfranchised, which is exactly how they feel right now. I understand that and dig their frustration. They need to call their legislator to get the laws and rules changed.
“In sum, it is clear that many have not educated themselves as to both measures. If they had, they would know that:
The amendment is the best law. It would be virtually here to stay.
It was the most likely to win on Election Day.
It is the best law to get patients the marijuana they need soonest (always the most important consideration).
Self-grow will eventually get here. Our hope is that marijuana is fully legal within the next eight years.
Now add in all of the other benefits you mentioned in your email to me.
“Kara and I do not deserve your or anyone else’s snarky remarks, threats, and hateful words. Your words disappoint me.”
In light of the state attorney general’s recent, successful arguments against paying Gyronne Buckley the $460,000 that the Arkansas State Claims Commission said Buckley deserved because he’d spent more than 11 years in prison due to a conviction obtained by bad behavior on the part of state officials, we think an exercise parsing Dustin McDaniel’s logic may help him think a bit straighter.
1. As you have never been convicted of a crime, when you get out of bed in the morning, are you guilty?
Careful. We know you’re our state’s top prosecutor and that “could be” jumps right to mind. But remember you represent the law and this is a legal question. We suggest “no” for the right answer.
2. If a police officer looks at you but concludes you’ve done nothing wrong, did you get off on a “technicality”?
Eddie Vedder recently released a new album called Ukulele Songs, and that’s just what it is. Most fans will probably not realize that one of the songs, “Satellite,” was written for and from the perspective of Lorri Davis, wife of Damien Echols. One reviewer who did catch the reference wrote that among several love songs: “Satellite stands clear as the most captivatingly majestic, a heartwrenching testament to unwavering devotion in the face of nearly insurmountable odds.”
Here are the lyrics:
it’s no shame that
love’s a game that
i can only play with you
what i’m saying
is i’ve been saving
my love for you
i’ve seen the light, i’m
satisfied that
the brightest star is you
satellite, i’m
holding tight
beaming back to you
days turn into
nights turn into
days turn into today
don’t think i’m out playing
cause i’m inside waiting
for you
i’ve felt the light, i’m
satisfied that
the highest star is you
satellite, i’m
holding tight
beaming back to you
don’t you worry
i believe your story
you were put away
for something you didn’t do
what i’m saying
is i’m saving
my love
The following letter appeared on Mar. 21 in the Jonesboro Sun, the newspaper published in the town where Damien Echols and Jason Baldwin were convicted and where the hearing for them and Jessie Misskelley, Jr. will be held in December. The letter to the editor was written by Ken Swindle, who is from Jonesboro but who now practices law in Rogers, Arkansas.
Aside from Dan Stidham, who represented Misskelley at trial, Swindle is the first Arkansas attorney to speak publicly about the case outside of court. Swindle has also begun assembling a group of other Arkansas attorneys who are concerned about the case.
You probably recall the atmosphere surrounding the trial of Jason Baldwin and Damien Echols. I remember returning to Jonesboro, after finishing my first year of law school, the month after the trials had concluded. We all believed that Jason and Damien were guilty. We knew that the murders of the children were unspeakably horrible, and we had heard that Jason and Damien were involved in an occult ritual sacrifice.
I also recall that, even then, there were whispers in the community about the complete lack of evidence. Like most people in the community, I quickly brushed those doubts aside. These defendants must be the “other,” the outsider.
It was not until the case was in front of the Arkansas Supreme Court last year that I began to look more critically at the evidence. Maybe like many of you, those tough questions kept coming back. I began to re-examine the trial from a new perspective. The Arkansas Supreme Court sent the case back to the trial court for new findings. Jason and Damien’s attorneys are asking for a new trial based upon review of new evidence as well as a request for new scientific examination of evidence at their own expense. This testing was scientifically unavailable in 1994. However, there is one piece of evidence already before the court that should make the granting of a new trial automatic: juror misconduct.
The right to a jury trial is a fundamental protection to our communities. To create a fair and impartial jury, judges make all potential jurors take an oath to follow four safety rules: (1) to answer the questions truthfully when being chosen to sit on a jury, (2) not to discuss the case with anyone until the case is over, (3) not to make up one’s mind before the jury deliberates, and (4) not to interject into the jury deliberations evidence not presented at trial.
We now know that the jury foreman in Jason and Damien’s trial violated all four of these safety rules. This fact alone should be sufficient for a new trial. The right to a new trial protects our communities by enforcing the right to a fair and impartial jury that follows the safety rules given to it by the judge. If a new trial with a fair and impartial jury is allowed, especially if a jury is allowed to consider all of the DNA evidence, then maybe, just maybe, those lingering doubts that many of us had in 1994 may finally be put to rest.
Last year, after the Arkansas Supreme Court ordered an evidentiary hearing in the case of the West Memphis Three, state Attorney General Dustin McDaniel responded that his office “intends to fulfill its constitutional responsibility to defend the jury verdicts in this case.”
At a panel discussion shortly after that, a professor of law seemed to agree that this is the AG’s role. However, I believe that, just as prosecuting attorneys Brent Davis and John Fogleman could have opted not to prosecute Jessie Misskelley based on his convoluted confession—or the other two without stronger evidence—McDaniel at any time could have stopped challenging efforts by the WM3 defense teams to bring the men’s cases back into court. Negotiation with the defense teams has been a possibility.
I asked Ken Swindle, an Arkansas attorney who supports retrying the three, if that view was correct. He examined the question in a lawyerly fashion, and I am posting what he wrote. Swindle’s article is more technical than most that appear here, but in light of all the money and effort the state has expended to preserve the WM3 convictions—and how much of both remain to be spent—I think the question he addresses warrants serious discussion.
Determining what’s in ‘the interests of the state’
By Ken Swindle
In my opinion, the Attorney General does have discretion in the position that s/he chooses to take in any case. The office of the attorney general is created by the Arkansas Constitution. Art. 6, Sec. 22. However, it is left to the Legislature to specifically set out the duties of the Attorney General. It is true that the Attorney General is a law enforcement agency. Ark. Code Ann. Sec. 25-16-713. However, from any minor traffic stop all the way to prosecuting a capital punishment case, we all know that law enforcement agencies have, and use, discretion on how to prosecute cases, or whether to prosecute cases at all. That discretion is used by law enforcement agencies all across this State every single day.
We also know that the Attorney General is required to appear before the state Supreme Court and “maintain and defend the interests of the state in all matters before that tribunal.” Ark. Code Ann. Sec. 25-16-704(a). I think that it is significant that the Legislature directs the Attorney General to “maintain and defend the interests of the state”. What are the interests of the state? Answering that question necessarily requires the use of discretion.
The Legislature could have stated that it is the responsibility of the Attorney General to adopt the position of the prosecutor on every appeal, or to maintain the criminal conviction of every criminal defendant on appeal. The Legislature did not so choose. Instead, the Legislature chose to direct the Attorney General to “maintain and defend the interests of the state.”
Everyone should agree that the State has an interest in enforcing the jury verdicts of guilty defendants. Everyone would also agree that the State has an interest (morally, legally, and financially) in not enforcing jury verdicts against defendants who are not, in fact, guilty, or against whom guilty verdicts were obtained by processes that violate our constitutional rights. Adopting or advocating enforcement of jury verdicts against defendants who are not guilty or against whom guilty verdicts were obtained by processes that violate our constitutional rights endangers everyone in this State, and therefore, the State would have a very keen interest in correcting such a situation. Determining which side is mandated in order to “maintain and defend the interests of the state” requires discretion.
We also know that the Attorney General “shall be the attorney for all state officials, departments, institutions, and agencies.” Ark. Code Ann. 25-16-702(a). However, this only means that the state officials are clients of the Attorney General. As all attorneys learn in their first semester of law school, an attorney is not bound to follow any directive of a client. On the contrary, a lawyer “shall not bring or defend a proceeding, or assert or controvert an issue therein, unless there is a basis in law and fact for doing so that is not frivolous, which includes a good faith argument for an extension, modification or reversal of existing law.” Ark. R. P. C. 3.1.
Similarly, the “signature of an attorney . . . [on a pleading] constitutes a certificate by him that he has read the pleading, motion, or other paper; that to the best of his knowledge, information, and belief formed after reasonable inquiry it is well grounded in fact and is warranted by existing law or a good faith argument for the extension, modification, or reversal of existing law, that it is not interposed for any improper purpose, such as to harass or to cause unnecessary delay or needless increase in the cost of litigation.” Ark. R. Civ. P. 11(a).
Any position taken by any attorney in signing a pleading by a client takes some degree of discretion, and the Attorney General is no exception to the code of conduct required by Rule 11. Indeed, as the Attorney for the State, s/he should be held to a higher standard, not a lower standard.
The Arkansas Supreme Court has recognized that prosecutors do, in fact, have an even higher role in use of their discretion than other attorneys, as they have passed a special rule just for prosecutors. The Rule states:
The prosecutor in a criminal case shall:
(a) refrain from prosecuting a charge that the prosecutor knows is not supported by probable cause;
(b) make reasonable efforts to assure that the accused has been advised of the right to, and the procedure for obtaining, counsel and has been given reasonable opportunity to obtain counsel;
(c) not seek to obtain from an unrepresented accused a waiver of important pretrial rights, such as the right to a preliminary hearing;
(d) make timely disclosure to the defense of all evidence or information known to the prosecutor that tends to negate the guilt of the accused or mitigates the offense, and, in connection with sentencing, disclose to the defense and to the tribunal all unprivileged mitigating information known to the prosecutor, except when the prosecutor is relieved of this responsibility by a protective order of the tribunal; and
(e) except for statements that are necessary to inform the public of the nature and extent of the prosecutor’s action and that serve a legitimate law enforcement purpose, refrain from making extrajudicial comments that have a substantial likelihood of heightening public condemnation of the accused and exercise reasonable care to prevent investigators, law enforcement personnel, employees or other persons assisting or associated with the prosecutor in a criminal case from making an extrajudicial statement that the prosecutor would be prohibited from making under Rule 3.6 or this rule.
Compliance with this Rule requires discretion, and the Rule (with the necessary discretion to conform with the Rule) applies equally to the Attorney General. Ark. R. P. C. 3.8, Official Comment [6]. To drive home the point of the heightened standard of conduct to be applied to a prosecutor (and the Attorney General), the Official Comment emphasizes that a “prosecutor has the responsibility of a minister of justice and not simply that of an advocate. This responsibility carries with it specific obligations to see that the defendant is accorded procedural justice and that guilt is decided upon the basis of sufficient evidence.” Ark. R. P. C. 3.8, Official Comment [1].
Some argue that the Attorney General cannot use discretion, but struggle to find any law to support such a position. Others argue that they do not want the Attorney General to use discretion, but, instead only want the Attorney General to “enforce the law”, meaning to blindly adopt the position taken by the State prior to the appeal.
The law cited above clearly shows that the Attorney General is not required to blindly adopt the position taken by the State prior to the appeal—and there is wisdom in allowing the Attorney General to use his/her discretion. If a prosecutor in one small corner of the State makes a blunder, that blunder should not be magnified by forcing the Attorney General to adopt the position as the position of the entire State.
If the Attorney General were simply the rubber-stamp for any position previously taken by any prosecutor in any little jurisdiction of the State, there would be no point in electing an Attorney General at all. Of course, the entire reason for electing an Attorney General is that s/he may use her/his discretion in maintaining the interests of the State.
Recognizing the room for discussion here, I invite any Arkansas attorney who disagrees with this article to submit an argument to the contrary—one supporting the idea that the attorney general has an obligation to defend a jury’s verdict. ~ML
Lorri Davis and I will be part of a panel to be held at The University of Memphis on the evening of March 24. Chelsea Leigh Boozer, president of the university’s Society of Professional Journalists, posted the following announcement and poster on Facebook. (The comments are getting interesting.)
“The Media’s Role in the West Memphis Three Case” is the 2011 Freedom of Information Congress hosted annually by The Society of Professional Journalists at The University of Memphis.
The West Memphis Three is the name given to three men (Damien Echols, Jessie Misskelley and Jason Baldwin) who were tried and convicted of the murders of three eight-year-old boys in West Memphis, Arkansas in 1993 when they were teenagers.
The prosecution suggested their motive for the slayings was that the killings were part of a Satanic ritual. Much evidence has come forth since then that would suggest that the three men in jail were in fact not the murderers. The case is currently up for appeal.
The case has drawn national attention, and several celebrities, such as Johnny Depp and Natalie Maines of The Dixie Chicks, have become advocates for the three men in jail.
Guest speakers and panelists of the 2011 FOI Congress include Mara Leveritt, author of Devil’s Knot (a detailed book about the WM3 Case), Lorri Davis, wife of Damien Echols, and other local journalists who have covered the case then and now.
Some topics to be discussed are access to public information during research on the case and the difficulties surrounding that, the media’s coverage of the case, and the ethical issue of journalists serving as advocates for their stories.
Our keynote speaker Mara Leveritt will give an introductory speech and afterwards she and our panelists will discuss topics surrounding media coverage of the case.
This event is scheduled for Mar. 24, 2011. There will be a light reception at 6 p.m. and we will start at 7 p.m. on The University of Memphis campus in the University Center Theater.
It is free and open to the public. Seats are limited to 300 and are available on a first-come, first-served basis. A limited amount of standing room is available in case of over-attendance.
Attendees will be enlightened about the case and how media coverage has been handled over the years. There will be an opportunity to pose questions or comments to the panelists at the end of the event.
If you’re anywhere near Little Rock, come join me at Juanita’s at 8:30 on Friday, March 18 to hear—and thank—punk rocker Michale Graves. As many of you know, Graves is a longtime supporter of the WM3. He recently put together a short film entitled “The Blackness and the Forest” that told the story of his “Almost Home Campaign” to raise awareness about the case. He has collaborated with Damien Echols on several songs and will perform them at this concert.
This fine observation is from the Observer column in yesterday’s Arkansas Times.
The Arkansas Times is the only publication that has consistently voiced concern about the trials of Damien Echols, Jason Baldwin and Jessie Misskelley, Jr.—and called for a court review of their cases. Now that the Arkansas Supreme Court has ordered that review, the Times editors, foreseeing a critical year ahead, sought interviews and photographs of the men. They were denied interviews with Echols and Misskelley, as explained below, but recorded a great talk with Jason Baldwin, in which he talks about his faith that he’ll be freed, his family, and his feelings about Echols and Misskelley, who are also briefly shown. (I’ve also posted a further recent interview with Baldwin on the DK2 section of this site.)
From the Arkansas Times blog: “On January 14, 2011 Arkansas Times Reporter David Koon and Photographer Brian Chilson arrived at the Varner “Super Max” Unit of the Arkansas Department of Corrections where Damien Echols and Jessie Misskelley are housed. We had an appointment made weeks in advance for an interview and photographs of the both prisoners. Upon our arrival in the visitation room, however, we were informed that we were only approved for photos, despite prior and specific approval for an interview. Both prisoners expressed surprise upon learning that we were not allowed to interview them as they had both signed consent forms that specified an interview would be done. We were allowed our interview with Jason Baldwin later in the day at the Tucker Unit. The interview took place in a conference room, video of Damien and Jessie taken during the few minutes we were allowed to see but not talk to them is from the morning.” Arkansas Times interview with Jason Baldwin.
For years, Anje Vela, at Skeleton Key Auctions, has been raising money for the WM3 with the help of many, many artists who’ve contributed valuable autographed items. Anje has now put together an awesome video recounting some of that history.
The very cool background music is “West Memphis Moon” by Chuck Prophet.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
We looked up some Game of Thrones facts you probably didn’t know… yet! Enjoy!
Game of Thrones is an American fantasy drama television series created for HBO by David Benioff and D. B. Weiss. It is an adaptation of A Song of Ice and Fire, George R. R. Martin’s series of fantasy novels, the first of which is titled A Game of Thrones. The episodes are mainly written by Benioff and Weiss, who are the executive producers alongside Martin, who writes one episode per season. Filmed in a Belfast studio and on location elsewhere in Northern Ireland, Malta, Scotland, Croatia, Iceland, the United States, Spain and Morocco, it premiered on HBO in the United States on April 17, 2011. Two days after the fourth season premiered in April 2014, HBO renewed Game of Thrones for a fifth and sixth season.
Peter Dinklage was the first choice of author George Martin and the producers for the role of Tyrion. They didn’t even audition anyone else for the part.
Sean Bean was also a first choice, but they did audition other actors in case he turned the role down.
Lena Headey and Peter Dinklage were friends before the show. So when Peter was talking with the producers, he suggested Lena for the role of Cersei.
Sophie Turner (Sansa) and Maisie Williams (Arya) are really good friends in real life.
Sean Bean tried to steal Lena Headey’s sandwiches during lunch breaks.
Nikolaj Coster-Waldau loves the YouTube video that shows Joffrey getting slapped by Tyrion for 10 minutes.
Most of the cast members haven’t read all the books because they don’t want to know their characters’ futures; they think it might affect the way they act.
George Martin has said that he has arranged for the rest of the series to be produced by David Benioff and Dan Weiss in case he dies before finishing the books.
Author George Martin describes Jack Gleeson (Joffrey) as “a very nice young man, charming and friendly.” The author sent Gleeson a letter that said: “Congratulations on your marvelous performance, everyone hates you.”
Maisie Williams’ (Arya) mother didn’t allow her to read the books because she thought they were too grown-up for a young girl.
The Dothraki tongue was commissioned by HBO through the Language Creation Society; linguistics expert David Peterson created the language and delivered over 1,700 words before the series began filming. Now there are more than 3,000 words!
People asked Martin whether the television show will affect his writing of the books – in general his answer is “No.” However, one exception may be in writing for the wildling, Osha (Natalia Tena). When he initially saw a picture of the actress, Martin said Tena was all wrong for the part, but as he watched her audition, he couldn’t take his eyes off her. Because he finds Tena’s portrayal of Osha more interesting than what he had written, he wants to write more about her.
Alfie Allen (Theon Greyjoy) is the younger brother of the singer Lily Allen. Her song “Alfie” is written about him.
Producer Benioff spoke of the difficulty of getting the dragon eggs right. The ones in the original pilot looked like “Christmas ornaments.” Gemma Jackson redesigned the final eggs, and one was given to Martin as a wedding gift when he married Parris McBride in 2011.
Jason Momoa performed the haka dance in his audition tape.
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
Community Relations Service
Frequently Asked Questions
What does the Community Relations Service do?
The Community Relations Service (CRS) helps local communities resolve serious racial and ethnic conflicts and helps communities prevent and respond to alleged violent hate crimes committed on the basis of actual or perceived race, color, national origin, gender, gender identity, sexual orientation, religion or disability. Its services are provided to local officials and community leaders by trained federal mediators on a voluntary and cost-free basis. The kinds of assistance available from CRS include mediation of disputes and conflicts, training in conflict resolution skills, and help in developing ways to prevent and resolve conflicts.
What is the jurisdiction of the Community Relations Service?
The Community Relations Service provides its services to local communities when there are serious community conflicts or violence based on racial or ethnic issues. Pursuant to the Matthew Shepard and James Byrd, Jr. Hate Crimes Prevention Act, CRS also works with communities to employ strategies to prevent and respond to alleged violent hate crimes committed on the basis of actual or perceived race, color, national origin, gender, gender identity, sexual orientation, religion or disability. CRS services are provided on a voluntary and confidential basis, and are conducted according to provisions in Title X of the Civil Rights Act of 1964.
Where does CRS work?
Most of CRS' work comes from requests by police chiefs, mayors, school superintendents, and other local and State authorities. They ask CRS to help when there is serious community racial conflict or in the aftermath of an alleged violent hate crime or an incident that, left unaddressed, may lead to a violent hate crime on the basis of actual or perceived race, color, national origin, gender, gender identity, sexual orientation, religion, or disability. People request CRS' services when they believe that impartial mediators from CRS can help calm tensions, prevent violence, and get people talking again. CRS works in all 50 states, and in communities large and small, rural, suburban, and urban.
How does CRS work?
Trained impartial CRS conflict resolution specialists are stationed in 10 Regional and 4 Field offices across the country. They are available on a 24-hour basis. They follow established and standardized procedures in conducting their work. For each situation, CRS will first assess the situation, which includes hearing everyone's perspective. After gaining a good understanding of the situation, CRS will fashion an agreement among local officials and leaders on the services CRS will provide to help resolve the conflict or prevent further violence.
What kinds of issues does CRS become involved in?
Most of the work involves situations where there is racial conflict or violence involving police agencies or schools or communities struggling to recover in the aftermath of an alleged violent hate crime committed on the basis of actual or perceived race, color, national origin, religion, disability, gender, gender identity, or sexual orientation. The most volatile situations CRS responds to are negative reactions to incidents involving police use of force, the staging of major demonstrations and counter events, major school disruptions, and organized hate crime activities.
What is the Federal interest in helping local communities resolve racial conflicts?
CRS provides its services when it is asked by local authorities and officials to help. They may decline our services at any time. Since CRS mediators are not funded by sources other than Federal funds, they are able to ensure their neutrality in helping to resolve conflicts, especially those which involve local and State agencies. CRS is an integral component of the Justice Department's mission to help State and local governments prevent violence and promote public safety.
Why is CRS located in the Justice Department?
CRS mediators carry no guns or badges and cannot file lawsuits. Nevertheless, they represent the Department of Justice in one of its most important missions - providing assistance and support to State and local authorities in their efforts to prevent violence and resolve destructive conflicts. As representatives of the Department of Justice, CRS mediators have the credibility and trust to work effectively with people on all sides of the conflict.
How does CRS know if it has been successful?
CRS success is best measured by the level of satisfaction among those who receive CRS services. Police chiefs, Governors, Mayors, school superintendents, and others praise CRS for its effectiveness. Whenever possible, CRS will contact local officials to review how well agreements are holding, whether violence has abated, and if tensions remain low. An internal reporting system registers outcomes and accomplishments for each CRS case activity.
What are some of the big changes in CRS conflict resolution work?
Today, CRS mediators are called on to help resolve conflicts involving a wider range of racial and ethnic issues. Conflict and violence are no longer simply Black and white, but may involve new immigrants, Native Americans, Central Americans, and others. With the passage of the Matthew Shepard and James Byrd, Jr. Hate Crimes Prevention Act in October of 2009, community leaders and law enforcement and government officials also call on CRS to help them develop the capacity to prevent and respond more effectively to violent hate crimes allegedly committed on the basis of actual or perceived race, color, national origin, religion, disability, gender, gender identity, or sexual orientation.
What Can CRS Do to Prevent and Respond to Alleged Violent Hate Crimes?
With passage of the Matthew Shepard and James Byrd, Jr. Hate Crimes Prevention Act, CRS is authorized to work with communities to employ strategies to prevent and respond to alleged violent hate crimes committed on the basis of actual or perceived race, color, national origin, gender, gender identity, sexual orientation, religion or disability in addition to continuing to employ strategies to prevent and respond to community tension relating to alleged discrimination and violent hate crimes on the basis of actual or perceived race, color, or national origin.
Internship Frequently Asked Questions
When will my security clearance be processed?
The timeframe for security clearances will vary from individual to individual, so we recommend submitting your application as early as possible. If selected for a CRS internship, we request you submit your security clearance documents in a timely manner. Unfortunately, we will only be able to provide limited updates after you submit your security clearance documents, as the security clearances are processed by another office.
Do all interns have to undergo a background check and attain a security clearance? Are there assignments that I can work on while I wait for my security clearance to be processed?
In order to work at the Department of Justice (DOJ) in any capacity, you must attain a security clearance; until you receive your clearance, you will not be able to begin your internship and we will be unable to provide you with any assignments.
What is a typical day like interning at CRS?
CRS is a relatively small agency with a large impact, and as a result, every day is different for interns. CRS staff is dedicated to making sure interns enjoy their experience and work on substantive projects throughout their internship. Interns can expect to be assigned multiple projects on any given day that cover a diverse subject area. Interns are often asked to attend meetings with other staff members and are invited to a variety of events at DOJ.
What type of projects can I expect to work on as an intern at CRS?
Past interns have worked on a variety of projects, ranging from helping to plan special events to helping to create cultural competency trainings for law enforcement, to working on field casework and assisting in the creation of a new website. Please see the specific internship descriptions for more details.
Will my internship consist mainly of administrative work?
No; CRS is committed to making sure interns leave their internships having worked on a multitude of substantive projects. This internship is not simply about filing papers or answering phones, but instead is a real chance to contribute to a federal agency dedicated to improving community relations in the United States. In this program, the intern will have the opportunity to witness the day-to-day operations of a federal agency and to see firsthand how CRS Headquarters and Regional offices serve the country. While interns may be asked to perform administrative tasks occasionally, interns’ weekly projects are closely monitored to ensure that interns are never given too much administrative work.
Do I need to be a political science major or have an interest in pursuing a career in a federal agency to work at CRS?
Although these internships are perfect for those interested in learning about and working in federal agencies, we strongly encourage students from diverse academic backgrounds—anything from biology to business to law—to consider and then to apply for these unique opportunities. CRS staff values the opportunity to work closely with individuals who will bring diversity and fresh perspectives to the agency and will be able to think critically about the issues CRS addresses on a daily basis. Thus, a specific interest in politics and the federal government is not necessary.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
In the last 5 years, I have been through company acquisitions and closings. It is a rough time for everyone, and it is hard to stay focused when you are worried about whether you will have a job in a month. We all have bills to pay and many have a family to take care of, but in times like this it is important not to lose your head.
The Role of a Manager:
The role of a manager in times like this is to keep all the other employees working. In many cases, when a company is bought, the first thing the new owner will typically do is take away the manager's power. They won't remove them, because they want a familiar face there to keep the employees working. At this point, managers essentially become figureheads in the store, with no real power except for the respect they earned from their employees. If the manager doesn't have that respect, you can be sure they won't be around much longer. Managers are also pretty much "out of the loop" by this point. As we saw with the CompUSA closing, the managers didn't find out the sale was final until after the Wall Street Journal and other media outlets did. So please don't blame your managers for not telling you; it is very likely that they didn't even know.
Once the initial takeover is done, the managers will be there to keep employee morale up and things moving like normal, even when things are anything but normal. They have their own worries about losing their jobs, so they will do what is needed to make sure their checks keep getting signed. They have bills and a family to take care of; remember that.
The Role of the Staff:
Depending on the company, the current staff may or may not be asked to leave. In the case of CompUSA, everyone is out of a job. In my previous jobs, I was hired immediately (and into a better position with a raise), so the buyouts aren't always bad. Even if you are out of a job, don't do anything stupid to get "fired". If you start to steal things, or do anything else to hurt the company, you have basically just said goodbye to your benefits, your severance, and any good recommendation the company could have given you to future employers. I have also seen the purchasing company place the workers it did not keep into other jobs with companies it worked with.
Basically, you just need to keep your head. Being an asshole won't save your job, and you aren't going to change anything. The only thing you can do is hurt yourself. Keep doing your job, and take pride in your work. If you don't want to be there, then just quit, but there is no point in trying to hurt the company; they aren't out to get you and it is just business. Most companies understand what it is like for the employees, and many will offer you a severance package. If you have medical coverage, COBRA should kick in if you lose your job; you'll be paying for it, but you will be covered.
Positions of Confidential Information:
I worked as a tech during previous buyouts, and because of that, I had access to all client information, all contracts, basically everything the new owners needed and were paying for (companies buy clients...not operations). I could have been a real pain and not helped in the transition, I could have told my clients to leave, I could have trashed the buyer, but I would have lost my job, my clients would have panicked without knowing what to do, and it would have made everything harder. The last company I left was basically a sinking ship. I knew it and I left before they went down. I was an administrator, so this is what I did.
Made a list of every client user name/password I had
Listed every daily/weekly/monthly task for each of the clients
Listed all of my contacts in each of our clients companies
Listed any quirks of the clients
Listed all work done in the last 6 months
Listed all pending work
Listed all proposals under review
Me leaving was business; they knew this. I got a month's pay for handing over the info and not leaving them in a bad position. I also made myself available via telephone if they had questions in the next month. The company is now gone, but the former CEO (who was brought in to shut down the company) has offered me jobs in his other company (they were and are successful), but I've turned him down. Other positions, such as management and sometimes security, have access to similar information. Leave on a good note. Do not try to "screw" people; it will bite you in the ass. Jobs come and go, but your reputation and pride are with you forever. With a good reputation you will find a new job; with a bad one, people will avoid you like the plague.
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
---
author:
- 'Kazuhiko <span style="font-variant:small-caps;">Kuroki</span>$^{1}$ and Yukio <span style="font-variant:small-caps;">Tanaka</span>$^{2}$'
title: 'The effect of interchain interaction on the pairing symmetry competition in organic superconductors (TMTSF)$_2$X'
---
Possible occurrence of unconventional superconductivity in organic conductors has been of great interest recently. Microscopically understanding the mechanism of pairing in those materials is an intriguing theoretical challenge. Among the various candidates of unconventional superconductors, the pairing mechanism of quasi-one-dimensional (q1D) organic superconductors $\mbox{(TMTSF)}_{2}X$ ($X=\mbox{PF}_{6}$, $\mbox{ClO}_{4}$, etc.), the so-called Bechgaard salts,[@Jerome; @Bechgaard] has been quite puzzling. Namely, since superconductivity lies right next to the $2k_{\rm F}$ spin density wave (SDW) phase in the pressure-temperature phase diagram, a spin-singlet $d$-wave-like pairing (shown schematically in Fig.\[fig1\](a)) is expected to take place as suggested by several authors.[@Shima01; @KA99; @KK99] However, an unchanged Knight shift across $T_c$ [@Lee02] and a large $H_{c2}$ exceeding the Pauli limit[@Lee00] suggest a realization of spin-triplet pairing. As for the orbital part of the order parameter, there have been NMR experiments suggesting the existence of nodes and thus unconventional pairing,[@Takigawa] although a thermal conductivity measurement suggests absence of nodes for (TMTSF)$_2$ClO$_4$.[@BB97] As a possible solution for this puzzle of spin-triplet pairing, one of the present authors has phenomenologically proposed that triplet $f$-wave-like pairing (whose gap is shown schematically in Fig.\[fig1\](b)) may take place due to a combination of a quasi-1D (disconnected) Fermi surface and the coexistence of $2k_{\rm F}$ spin and $2k_{\rm F}$ charge fluctuations.[@KAA01] Namely, due to the disconnectivity of the Fermi surface, the number of gap nodes that intersect the Fermi surface is the same between $d$ and $f$. Moreover, if the $2k_F$ spin and charge fluctuations have about the same magnitude, spin-singlet and spin-triplet pairing interactions have close absolute values (with opposite signs) as will be explained later. In such a case, spin-triplet $f$-wave pairing should be closely competitive against singlet $d$-wave pairing. As for other possibilities of triplet pairing, the $p$-wave state in which the nodes of the gap (Fig.\[fig1\](c)) do not intersect the Fermi surface has been considered from the early days,[@Abrikosov; @HF87; @Lebed] but from a microscopic point of view, the spin-triplet pairing interaction has a negative sign for the momentum transfer of $2k_F$ unless spin fluctuations are highly anisotropic, so that a gap that changes sign between the left and right portions of the Fermi surface is unlikely to take place.[@Kohmoto] A similar phenomenological proposal of $f$-wave pairing in (TMTSF)$_2$X has also been given by Fuseya [*et al.*]{}[@Fuseya1] Experimentally, the $f$-wave scenario due to the coexistence of $2k_F$ spin and charge fluctuations is indirectly supported by the observation that $2k_F$ charge density wave (CDW) actually coexists with $2k_F$ SDW in the insulating phase lying next to the superconducting phase.[@Pouget; @Kagoshima]
![Candidates for the gap function of (TMTSF)$_2$X are schematically shown along with the Fermi surface (solid curves). The dashed lines represent the nodes of the gap, whose $k_b$ dependence is omitted for simplicity. (For the actual $k_b$ dependence, see Fig.\[fig3\].) We call the gap in fig.(a)((b)) $d$-wave ($f$-wave) in the sense that the gap changes sign as $+-+-$ ($+-+-+-$) along the Fermi surface.[]{data-label="fig1"}](fig1.eps){width="8cm"}
As for [*microscopic*]{} theories for the pairing competition, we have previously shown using a ground state quantum Monte Carlo method that $f$-wave strongly dominates over $p$-wave in the Hubbard model that considers only the on-site repulsive interaction.[@KTKA] More recently, we have shown, by applying random phase approximation (RPA) to an extended Hubbard model, that $f$-wave pairing can indeed dominate over $d$-wave pairing when we have large enough second nearest neighbor repulsion $V'$,[@TanakaKuroki04] which has been known for some years to have the effect of stabilizing the $2k_F$ CDW configuration.[@Kobayashi; @Suzumura] To be more precise, the condition for $f$-wave dominating over $d$-wave is to have $V'\simeq U/2$ (where $U$ is the on-site repulsion) or larger $V'$ because $2k_F$ spin and $2k_F$ charge fluctuations have the same magnitude for $V'=U/2$ within RPA. A similar condition for $f$-wave being competitive against $d$-wave has also been obtained in a recent renormalization group study.[@Fuseya04] Although these results do suggest that $f$-wave pairing can indeed be realized in microscopic models, the condition that the [*second*]{} nearest neighbor repulsion be nearly equal to or larger than half the [*on-site*]{} repulsion may not be realized so easily in actual materials. In the present study, we consider a model where the [*interchain*]{} repulsion is taken into account, which turns out to give a more realizable condition for $f$-wave dominating over $d$-wave due to the enhancement of $2k_F$ charge fluctuations. After completing the major part of this study, we came to notice that a similar conclusion has been reached quite recently using a renormalization group approach.[@Nickel]
The model considered in the present study is shown in Fig.\[fig2\]. In standard notations, the Hamiltonian is given as $$H=-\sum_{<i,j>,\sigma}
t_{ij}c^{\dagger}_{i\sigma}c_{j\sigma}
+U\sum_{i}n_{i\uparrow}n_{i\downarrow}
+ \sum_{<i,j>}V_{ij} n_{i}n_{j},$$ where $c^{\dagger}_{i\sigma}$ creates a hole (note that (TMTSF)$_2$X is actually a 3/4 filling system in the electron picture) with spin $\sigma = \uparrow, \downarrow$ at site $i$. As for the kinetic energy terms, we consider nearest neighbor hoppings $t_{ij}=t$ in the (most conductive) $a$-direction and $t_{ij}=t_\perp$ in the $b$-direction. $t$ is taken as the unit of energy, and we adopt $t_\perp=0.2t$ throughout the study. $U$ and $V_{ij}$ are the on-site and the off-site repulsive interactions, respectively, where we take into account the nearest neighbor [*interchain*]{} repulsion $V_\perp$ in addition to the intrachain on-site ($U$), nearest ($V$), next nearest ($V'$), and third nearest ($V''$) neighbor repulsions considered in our previous study.[@TanakaKuroki04]
![The model of the present study is shown.[]{data-label="fig2"}](fig2.eps){width="8cm"}
Within RPA[@Scalapino; @TYO; @KTOS], the effective pairing interactions for the singlet and triplet channels due to spin and charge fluctuations are given as $$\begin{aligned}
\label{1}
V^{s}({\mbox{\boldmath$q$}})=
U + V({{\mbox{\boldmath$q$}}}) + \frac{3}{2}U^{2}\chi_{s}({\mbox{\boldmath$q$}})
\nonumber\\
-\frac{1}{2}(U + 2V({{\mbox{\boldmath$q$}}}) )^{2}\chi_{c}({\mbox{\boldmath$q$}})\end{aligned}$$ $$\begin{aligned}
\label{2}
V^{t}({\mbox{\boldmath$q$}})=
V({{\mbox{\boldmath$q$}}}) - \frac{1}{2}U^{2}\chi_{s}({\mbox{\boldmath$q$}})
\nonumber\\
-\frac{1}{2}(U + 2V({{\mbox{\boldmath$q$}}}) )^{2}\chi_{c}({\mbox{\boldmath$q$}}),\end{aligned}$$ where $$V({\mbox{\boldmath$q$}})=2V\cos q_{x} + 2V'\cos(2q_{x}) + 2V''\cos(3q_{x})
+2V_\perp\cos(q_y)
\label{3}$$ Here, $\chi_{s}$ and $\chi_{c}$ are the spin and charge susceptibilities, respectively, which are given as $$\begin{aligned}
\label{4}
\chi_{s}({\mbox{\boldmath$q$}})=\frac{\chi_{0}({\mbox{\boldmath$q$}})}
{1 - U\chi_{0}({\mbox{\boldmath$q$}})}
\nonumber\\
\chi_{c}({\mbox{\boldmath$q$}})=\frac{\chi_{0}({\mbox{\boldmath$q$}})}
{1 + (U + 2V({\mbox{\boldmath$q$}}) )\chi_{0}({\mbox{\boldmath$q$}})}.\end{aligned}$$ Here $\chi_{0}$ is the bare susceptibility given by $$\chi_{0}({\mbox{\boldmath$q$}})
=\frac{1}{N}\sum_{{\mbox{\boldmath$p$}}}
\frac{ f(\epsilon_{{\mbox{\boldmath$p +q$}}})-f(\epsilon_{{\mbox{\boldmath$p$}}}) }
{\epsilon_{{\mbox{\boldmath$p$}}} -\epsilon_{{\mbox{\boldmath$p+q$}}}}$$ with $\epsilon_{{\mbox{\boldmath$k$}}}=-2t\cos k_a -2t_\perp\cos k_b - \mu$ and $f(\epsilon_{{\mbox{\boldmath$p$}}})=1/(\exp(\epsilon_{{\mbox{\boldmath$p$}}}/T) + 1)$. $\chi_0$ peaks at the nesting vector ${\mbox{\boldmath$Q$}}_{2k_F}$ ($=(\pi/2,\pi)$ here) of the Fermi surface. To obtain $T_c$, we solve the linearized gap equation within the weak-coupling theory, $$\lambda^{s,t} \Delta^{s,t}({\mbox{\boldmath$k$}})
=-\sum_{{\mbox{\boldmath$k'$}}} V^{s,t}({\mbox{\boldmath$k-k'$}})
\frac{ \rm{tanh}(\beta \epsilon_{{{\mbox{\boldmath$k'$}} }}/2) }{2 \epsilon_{{\mbox{\boldmath$k'$}}} }
\Delta^{s,t}({\mbox{\boldmath$k'$}}).$$ The eigenfunction $\Delta^{s,t}$ of this eigenvalue equation is the gap function. The transition temperature $T_c$ is determined as the temperature where the eigenvalue $\lambda$ reaches unity. Note that the main contribution to the summation in the right hand side comes from ${\mbox{\boldmath$k-k'$}}\simeq{\mbox{\boldmath$Q$}}_{2k_F}$ because $V^{s,t}({\mbox{\boldmath$q$}})$ peaks around ${\mbox{\boldmath$q$}}={\mbox{\boldmath$Q$}}_{2k_F}$. Although RPA is quantitatively insufficient for discussing the absolute values of $T_c$, we expect this approach to be valid for studying the [*competition*]{} between different pairing symmetries. Now, from eqs.(\[3\]) and (\[4\]), it can be seen that $\chi_c({\mbox{\boldmath$Q$}}_{2k_F})=
\chi_s({\mbox{\boldmath$Q$}}_{2k_F})$ holds when $V'+V_{\perp}=U/2$, which in the absence of $V_\perp$ of course reduces to the condition $V'=U/2$ obtained in our previous study. This in turn results in $V^s({\mbox{\boldmath$Q$}}_{2k_F})=-V^t({\mbox{\boldmath$Q$}}_{2k_F})$ for the pairing interactions apart from the first order terms as can be seen from eqs.(\[1\]) and (\[2\]). Thus, considering the fact that the number of nodes intersecting the Fermi surface is the same between $d$ and $f$, the condition for $f$-wave being competitive against $d$-wave should be $V'+V_{\perp}\simeq U/2$. The possibility of this condition being satisfied in actual materials is realistic since $V_{\perp}$ can be comparable with the intrachain off-site repulsions due to the fact that the lattice constants in the $a$- and $b$-directions are of the same order. An intuitive picture here is that $V_\perp$ tends to “lock” more firmly the $2k_F$ charge configuration induced by $V'$, so that $2k_F$ charge fluctuations are enhanced, thereby stabilizing the spin-triplet $f$-wave state.
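For the reader's convenience, this condition can be checked directly (a short derivation added here, using only eqs.(\[3\]) and (\[4\])): equating the RPA denominators gives $$\chi_{c}({\mbox{\boldmath$Q$}}_{2k_F})=\chi_{s}({\mbox{\boldmath$Q$}}_{2k_F})
\;\Longleftrightarrow\;
U+2V({\mbox{\boldmath$Q$}}_{2k_F})=-U
\;\Longleftrightarrow\;
V({\mbox{\boldmath$Q$}}_{2k_F})=-U,$$ while evaluating eq.(\[3\]) at ${\mbox{\boldmath$Q$}}_{2k_F}=(\pi/2,\pi)$ gives $$V({\mbox{\boldmath$Q$}}_{2k_F})=2V\cos\frac{\pi}{2}+2V'\cos\pi+2V''\cos\frac{3\pi}{2}+2V_\perp\cos\pi=-2(V'+V_\perp),$$ so that the equality indeed reduces to $V'+V_\perp=U/2$.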
Bearing the above analysis in mind, we now move on to the RPA calculation results for the pairing symmetry competition between $f$- and $d$-waves. We first focus on the case where the parameter values satisfy the condition for $\chi_c({\mbox{\boldmath$Q$}}_{2k_F})=\chi_s({\mbox{\boldmath$Q$}}_{2k_F})$, that is, when $V'+V_\perp=U/2$ holds. Here we take $U=1.7$, $V=0.8$, $V'=0.45$, $V''=0.2$, and $V_\perp=0.4$ in units of $t$. Note that $V'$ is much smaller than $U/2$. As expected, the singlet pairing having the largest eigenvalue $\lambda$ has a $d$-wave gap, while the triplet pairing with the largest $\lambda$ has an $f$-wave gap, as seen in Fig.\[fig3\]. In Fig.\[fig4\], we plot $\lambda$ as functions of temperature for $d$-wave and $f$-wave pairings. The two pairings closely compete with each other, but $f$-wave pairing dominates over $d$-wave pairing and gives a higher $T_c$. $f$-wave not being degenerate with $d$-wave even for $V'+V_\perp=U/2$ is due to the effect of the first order terms in eqs.(\[1\]) and (\[2\]) as discussed in our previous study.[@TanakaKuroki04]
![The gap functions having the largest eigenvalue for the (a)triplet and (b)singlet pairing channels. The parameter values are taken as $U=1.7$, $V=0.8$, $V'=0.45$, $V''=0.2$, $V_\perp=0.4$ and $T=0.011$ (=$T_c$ of the $f$-wave pairing). The dark dashed curves represent the nodes of the gap, while a pair of light dotted curves near $k_a=\pm\pi/4$ is the Fermi surface.[]{data-label="fig3"}](fig3.eps){width="8cm"}
![The largest eigenvalue in the singlet and the triplet channels are plotted as functions of temperature for $U=1.7$, $V=0.8$, $V'=0.45$, $V''=0.2$, and $V_\perp=0.4$.[]{data-label="fig4"}](fig4.eps){width="8cm"}
To look into the effect of $V_\perp$ on the $f$- vs. $d$- competition in more detail, we plot $T_c$ along with the pairing symmetry as a function of $V_\perp$ in Fig.\[fig5\]. The pairing symmetry is $f$-wave and $T_c$ increases with $V_\perp$ for $V_\perp \geq U/2-V'$(=0.4 here), while the pairing occurs in the $d$-wave channel with a nearly constant $T_c$ for $V_\perp < U/2-V'$. The increase of the $f$-wave $T_c$ is due to the enhancement of $2k_F$ charge fluctuations with increasing $V_\perp$.
![$T_c$ plotted as a function of $V_\perp$ for $U=1.7$, $V=0.8$, $V'=0.45$, and $V''=0.2$. The solid (dashed) curve represents the $f$ ($d$)-wave regime.[]{data-label="fig5"}](fig5.eps){width="8cm"}
Finally, in order to check the validity of RPA, we have performed auxiliary field quantum Monte Carlo (AFQMC) calculations[@Hirsch; @ZC; @White] for the same extended Hubbard model. Let us first briefly summarize this method. In AFQMC, the density operator is decomposed into the kinetic energy part and the interaction part using Trotter-Suzuki decomposition,[@Trotter; @Suzuki] and we perform a discrete Hubbard-Stratonovich transformation[@Hirsch] for the interaction part. The summation over the Stratonovich variables is taken by Monte Carlo importance sampling. Using this method, correlation functions and susceptibilities can be calculated for finite size systems (16 sites in the $a$-direction and 4 sites in the $b$-direction = 64 sites in the present study), and the results are exact within the statistical errors. A drawback of this approach is that we cannot go down to very low temperatures in the presence of off-site repulsions such as $V$, $V'$ and $V_\perp$ due to the negative sign problem, so that it is difficult to look into the pairing symmetry competition itself. Nevertheless, we can check the validity of RPA at moderate temperatures of the order of $0.1t$. Here we compare the values of $\chi_s({\mbox{\boldmath$Q$}}_{2k_F})$ and $\chi_c({\mbox{\boldmath$Q$}}_{2k_F})$ calculated by AFQMC at $T=0.25$, fixing $V=0.9$, $V''=0$ and $V_\perp=0.3$. In Fig.\[fig6\], we show the “phase diagram” in the $U-V'$ plane, where we find that the AFQMC boundary for $\chi_c({\mbox{\boldmath$Q$}}_{2k_F})=\chi_s({\mbox{\boldmath$Q$}}_{2k_F})$ is very close to the RPA boundary $V'+V_\perp=U/2$. This result suggests that the RPA condition for $\chi_c({\mbox{\boldmath$Q$}}_{2k_F})=\chi_s({\mbox{\boldmath$Q$}}_{2k_F})$ is reliable at least at moderate temperatures.
![AFQMC result for the competition between $\chi_s({\mbox{\boldmath$Q$}}_{\rm 2k_F})$ and $\chi_c({\mbox{\boldmath$Q$}}_{\rm 2k_F})$ shown in $U-V'$ space. $V=0.9$, $V''=0$, $V_\perp=0.3$, and $T=0.25$. The dashed line represents the RPA condition for $\chi_s({\mbox{\boldmath$Q$}}_{\rm 2k_F})=\chi_c({\mbox{\boldmath$Q$}}_{\rm 2k_F})$.[]{data-label="fig6"}](fig6.eps){width="8cm"}
To summarize, we have studied the pairing symmetry competition in a model for (TMTSF)$_2$X which considers not only the intrachain repulsions but also the interchain repulsion. We find that the possibility of satisfying the condition for realizing $f$-wave pairing becomes more realistic in the presence of the interchain repulsion. It would be an interesting future study to investigate whether this condition is actually satisfied in (TMTSF)$_2$X using first principles or quantum chemical calculations. Experimentally, it would be interesting to further confirm spin-triplet pairing by using probes complementary to those in the previous studies[@Lee02; @Lee00], for example, a phase sensitive tunneling spectroscopy study[@TK95] with[@Tanuma2] or without[@Tanuma1; @Sengupta] applying a magnetic field, or those based on a newly developed theory for triplet superconductors, which has been proposed by one of the present authors.[@TanakaKas]
K.K. acknowledges H. Fukuyama, H. Seo, and A. Kobayashi for motivating us to study the effect of interchain repulsion. He also thanks J. Suzumura and Y. Fuseya for valuable discussion. Part of the numerical calculation has been performed at the facilities of the Supercomputer Center, Institute for Solid State Physics, University of Tokyo.
[99]{} D. Jérome, A. Mazaud, M. Ribault, and K. Bechgaard: J. Phys. Lett. (France) [**41**]{} (1980) L92. K. Bechgaard, K. Carneiro, M. Olsen, F.B. Rasmussen, and C.S. Jacobsen: Phys. Rev. Lett. [**46**]{} (1981) 852.
H. Shimahara: J. Phys. Soc. Jpn. [**58**]{} (1989) 1735. K. Kuroki and H. Aoki: Phys. Rev. B [**60**]{} (1999) 3060. H. Kino and H. Kontani: J. Low. Temp. Phys. [**117**]{} (1999) 317.
I.J. Lee, S.E. Brown, W.G. Clark, M.J. Strouse, M.J. Naughton, W. Kang, and P.M. Chaikin: Phys. Rev. Lett. [**88**]{} (2002) 017004. I.J. Lee, M.J. Naughton, G.M. Danner, and P.M. Chaikin: Phys. Rev. Lett. [**78**]{} (1997) 3555; I.J. Lee, P.M. Chaikin, and M.J. Naughton: Phys. Rev. B [**62**]{} (2000) R14669. M. Takigawa, H. Yasuoka, and G. Saito: J. Phys. Soc. Jpn. [**56**]{} (1987) 873. S. Belin and K. Behnia: Phys. Rev. Lett. [**79**]{} (1997) 2125. K. Kuroki, R. Arita, and H. Aoki: Phys. Rev. B [**63**]{} (2001) 094509. A.A. Abrikosov: J. Low Temp. Phys. [**53**]{} (1983) 359. Y. Hasegawa and H. Fukuyama: J. Phys. Soc. Jpn. [**56**]{} (1987) 877. A.G. Lebed, Phys. Rev. B [**59**]{} (1999) R721; A.G. Lebed, K. Machida, and M. Ozaki: $ibid.$ [**62**]{} (2000) R795. It has been shown in M.Kohmoto and M.Sato, cond-mat/0001331, that the presence of spin fluctuation works in favor of spin triplet $p$-wave pairing if attractive interactions due to electron-phonon interactions are considered. Y. Fuseya, Y. Onishi, H. Kohno, and K. Miyake: J. Phys. Cond. Matt. [**14**]{} (2002) L655.
J. P. Pouget and S. Ravy: J. Phys. I [**6**]{} (1996) 1501. S. Kagoshima, Y. Saso, M. Maesato, R. Kondo, and T. Hasegawa: Solid State Comm. [**110**]{} (1999) 479. K. Kuroki, Y. Tanaka, T. Kimura, and R. Arita: Phys. Rev. B [**69**]{} (2004) 214511.
Y. Tanaka and K. Kuroki: Phys. Rev. B [**70**]{} (2004) 060502.
N. Kobayashi and M. Ogata: J. Phys. Soc. Jpn. [**66**]{} (1997) 3356; N. Kobayashi, M. Ogata and K. Yonemitsu: J. Phys. Soc. Jpn. [**67**]{} (1998) 1098. Y. Tomio and Y. Suzumura: J. Phys. Soc. Jpn. [**69**]{} (2000) 796.
Y. Fuseya and Y. Suzumura: cond-mat/0411013.
J.C. Nickel, R. Duprat, C. Bourbonnais, and N. Dupuis: cond-mat/0502614.
D. J. Scalapino, E. Loh, Jr. and J. E. Hirsch: Phys. Rev. B [**35**]{} (1987) 6694. Y. Tanaka, Y. Yanase and M. Ogata: J. Phys. Soc. Jpn. [**73**]{} (2004) 319. A. Kobayashi, Y. Tanaka, M. Ogata and Y. Suzumura: J. Phys. Soc. Jpn. [**73**]{} (2004) 1115.
J.E. Hirsch: Phys. Rev. B [**31**]{} (1985) 4403. Y. Zhang and J. Callaway: Phys. Rev. B [**39**]{} (1989) 9397. S.R. White, D.J. Scalapino, R.L.Sugar, E.Y.Loh., J.E. Gubernatis, R.T. Scalettar: Phys. Rev. B [**40**]{} (1989) 506. H.F. Trotter: Proc. Am. Math. Soc. [**10**]{} (1959) 545. M. Suzuki: Prog. Theor. Phys. [**56**]{} (1976) 1454.
Y. Tanaka and S. Kashiwaya, Phys. Rev. Lett. [**74**]{}, 3451 (1995). Y. Tanuma, K. Kuroki, Y. Tanaka, R. Arita, S. Kashiwaya, and H. Aoki: Phys. Rev. B [**64**]{} (2001) 214510.
K. Sengupta, I. Žutić, H.-J. Kwon, V.M. Yakovenko, and S. Das Sarma: Phys. Rev. B [**63**]{} (2001) 144531.
Y. Tanuma, K. Kuroki, Y. Tanaka, and S. Kashiwaya: Phys. Rev. B [**64**]{} (2001) 214510.
Y. Tanaka and S. Kashiwaya: Phys. Rev. B [**70**]{} (2004) 012507.
|
tomekkorbak/pile-curse-small
|
ArXiv
|
SEMI-NEWS: A Satire of Recent News
SEMI-NEWS: A Satire of Recent News, May 26, 2013 Edition
Pelosi Absolves President in Spate of Scandals
House Minority Leader Nancy
Pelosi (D-Calif) sought to absolve President Obama from any blame in
the burgeoning plume of scandals surrounding his Administration by
forcefully asserting that “he doesn't necessarily know anything
about any agency of the federal government.”
The Congresswoman brushed off
reports that IRS Commissioner Doug Shulman visited the Obama White
House 118 times over a two year period during which conservative
groups were targeted as indicative of anything. “The White House is
a big place,” Pelosi said. “Just because Mr. Shulman and
President Obama were in the building doesn't necessarily mean that
they met. And even if they had, it doesn't prove that they discussed
tax matters. I understand Shulman says he was there for the Easter
Egg roll.”
Pelosi also dismissed the idea
that a chief executive has to take responsibility for the actions of
his appointed minions, calling former President Truman's famous “the
buck stops here” “worthless political grandstanding. A successful
chief executive always has a pre-planned escape route. It is the
underling's responsibility to step forward and take the blame. It's
like sacrificing a pawn to save the king in a game of chess. In fact,
the sacrificed pawns should be grateful that they have the
opportunity to play a role in advancing the President's agenda.”
In
related news, Representative Elijah Cummings (D-Md) worries that the
exposure of IRS abuses may have a chilling effect on IRS bureaucrats.
“People loyally trying to carry out the wishes of the President
should not have to fear repercussions for doing as they are told,”
Cummings asserted. “As long as they are just following orders or
genuinely taking actions they believe the President would want them
to take they ought to be immune from penalties or further scrutiny.”
Gitmo Inmates to Be Transferred
Seeking to dampen criticism of
his handling of the War on Terror and the Internal Revenue Service's
selective intrusion into the views of his political opponents,
President Obama announced that the detention center at Guantanamo
would be closed and its inmates “redistributed as seems most
appropriate.”
While it is expected that a
majority of the detainees will be repatriated to the Middle Eastern
Hell-holes from whence they came, those with exemplary skills are
likely to be transferred to positions within the IRS.
“The fact that people are
openly criticizing the IRS is the best evidence we have that it has
failed in its mission,” Obama said. “Obviously, the fear of being
harassed and audited is insufficient to induce the level of
compliance needed. If properly armed and aimed the most fanatical of
the Gitmo detainees could ramp up the pressure to levels that only
the tiniest few could resist.”
The President sought to reassure
that “those in full compliance with IRS directives need have no
concerns for their own safety. The Gitmo detainees will be working
under the close supervision of higher authorities under my control.
Only those who have deviated from the required path will be
targeted.”
In related news, Larry Conners,
an anchorman for KMOV in St. Louis, was fired for publicly
questioning whether the IRS targeted him after he asked President
Obama some tough questions in an interview last year. KMOV president
and general manager Mark Pimentel called the firing “a simple
precautionary move. There's no sense in us exposing everyone at KMOV
to possible IRS retaliation. Better safe than sorry.”
Weiner Challenges Rivals to Be More Forthcoming
Former Congressman Anthony
Weiner challenged his competitors for New York City Mayor “to put
their packages out there for Voters to see. As everyone knows, I've
gone far beyond the bounds of what a typical candidate is willing to
do to inform voters about my qualifications.”
Weiner who resigned from
Congress after getting caught sending lewd photos of himself to women
asserted that he has “learned from that mistake. I resigned before
considering the full nuances of the reaction. This time there'll be
no holding back. Rather than limit my sexting to a narrowly
constrained few I will bare all for all the voters. The people of New
York deserve to know what kind of man they're getting for their
mayor. With me there'll be no secrets. Are any of my opponents
willing to be as open?”
The candidate hinted that more
lewd photos may come out, but did not expressly commit to a schedule
for when or where they might appear saying that “my opponents can
avoid humiliation by dropping out of the race.”
Kerry Says Israeli Prosperity “an Impediment to Peace”
US Secretary of State John Kerry
complains that Israeli economic prosperity stands as “an impediment
to achieving peace in the region.”
“On the one hand, it is a
constant 'in-your-face' reminder to the Palestinians and their
supporters that Jews are better off than they are,” Kerry
explained. “It's an insult that inspires a sense of grievance
amongst the poorer Muslim and Arab communities.”
“On the other hand, it serves
as a persistent temptation,” Kerry added. “The idea of killing
the Jews and taking their money and property becomes an irresistible
urge. Killing and robbing a Jew seems less onerous than trying to
build a business, learn a trade, or work hard to make a living.”
Kerry advised the Israeli
Government “to implement policies to even out the disparities. A
broad redistribution of wealth would serve the dual purpose of
immediately assuaging Palestinian feelings of inadequacy, while
simultaneously acting to moderate the recompense to historically
powerful inclinations of Jewish avarice.”
Israeli President Shimon Peres
rejected Kerry's advice calling it “the kind of time-worn,
run-of-the-mill anti-Semitism Jews have been battling against for
more than two thousand years.”
Ohio Secretary of State Insists Vote Fraud “Not Epidemic”
Ohio Secretary of State Jon
Husted tried to reassure voters that the 135 possible voter fraud
cases his office is pursuing do not constitute “an epidemic.”
“We feel confident that the
majority of elections are probably decided in an honest fashion,”
Husted said. “To believe otherwise would lead to truly frightening
conclusions. We'd rather not go there. I mean, if people lose faith
in elections how will we choose who will govern? Living with a little
corruption is surely better than undermining the whole premise of
democracy, isn't it?”
Criticizing Obama “Offensive” Says Aide
Daniel Pfeiffer, Senior Advisor
to the President for Strategy and Communications, denounced criticism
of President Obama in strident terms this past week, calling critics
“uppity.”
“Here we have the leader of
the free world, a Nobel Prize winner, being accosted by people unfit
to lick his boots,” Pfeiffer complained. “How low has our
civilization sunk that such effrontery is tolerated?”
Pfeiffer labeled inquiries about
Benghazi, the IRS and phone taps of reporters “fishing expeditions.
They think they're going to find some 'smoking gun' linking the
President to one or more of these incidents in some substantive way.
Well, I'm telling you it's not going to happen. The President has
insulated himself from culpability for whatever may occur. There are
strict rules about who may tell the President what that ensure he
will honestly be able to disavow all knowledge of what is going on.”
“On top of this he has an
enormously wide array of options for eliminating disloyal and
uncooperative elements both inside and outside his Administration,”
Pfeiffer pointed out. “Those chafing over getting hassled by the
IRS ought to consider themselves lucky that sterner measures weren't
used against them.”
“It all comes down to whether
people are going to show proper respect for the President,”
Pfeiffer concluded. “We cannot sit by and allow the office and the
great man who occupies it to undergo the type of heedless questioning
of its authority that we have seen over the last few weeks. Rest
assured that the President will do whatever it takes to assert and
wield that authority. The alternative is too scary to contemplate.”
McCain Hammers GOP for Impeding Dems' Agenda
Maverick Senator John McCain
(R-Az) lashed out at his Republican colleagues in Congress for
actions he says “go too far.” The tiff arose over Democrat
maneuvers to raise the debt ceiling.
“It's alright to express an
opinion and have a debate, but in the end we've still got to let the
Democrats govern,” McCain insisted. “Repeated efforts to stymie
legislation we don't like is downright uncollegial.”
The Arizona Senator discounted
arguments that Democrats make no effort to be collegial from their
side saying that “two wrongs don't make a right. Didn't your
parents teach you that? Didn't Jesus bid us to turn the other cheek?
That's all I'm suggesting here.”
“It's not as if the substance
of this issue is crucial,” McCain argued. “What difference does
it really make whether we raise the debt ceiling? The Government is
never going to be able to pay that money back anyway. Why get our
panties in a wad over it? It's just a lot of useless motion without
result.”
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Characterization of Highly Prevalent Plasmids Coharboring mcr-1, oqxAB, and blaCTX-M and Plasmids Harboring oqxAB and blaCTX-M in Escherichia coli Isolates from Food-Producing Animals in China.
The emergence and spread of multidrug resistance (MDR) plasmids carrying the colistin resistance gene mcr-1 has become a major public health concern. However, there is a paucity of data regarding the prevalence of mcr-1 plasmids concomitantly carrying blaCTX-M and oqxAB, an efflux pump that confers resistance to multiple agents. In this study, we determined the prevalence and characteristics of plasmids coharboring mcr-1, oqxAB, and blaCTX-M as well as those harboring oqxAB and blaCTX-M in Escherichia coli from food-producing animals. We isolated 493 E. coli strains, and mcr-1, blaNDM, and blaCTX-M were present in 140 (28.4%), 51 (10.3%), and 195 (39.6%) of the isolates, respectively. The two most prevalent plasmid-mediated quinolone resistance genes were oqxAB (34.5%) and qnrS (29.4%). Nine IncHI2/ST3 plasmids co-carrying mcr-1, oqxAB, and blaCTX-M were found, and similar IncHI2/ST3 plasmids mediated dissemination of these resistance genes. Two sequenced MDR IncHI2/ST3 plasmids coharboring mcr-1, oqxAB, and blaCTX-M showed high similarity to reference plasmid pHNSHP45-2, although they were from different regions in China. Colocalization of oqxAB and blaCTX-M on the same plasmid was found in 28 isolates, including the nine plasmids harboring mcr-1. The co-dissemination of oqxAB and blaCTX-M was mediated by diverse F33:A-:B- plasmids and similar IncHI2/ST3 plasmids. Pulsed-field gel electrophoresis and multilocus sequence typing analysis of donor isolates revealed heterogeneous patterns indicating that clonal dissemination was unlikely. The high incidence of similar IncHI2/ST3 plasmids simultaneously possessing mcr-1, oqxAB, and blaCTX-M poses a great threat to public health.
|
tomekkorbak/pile-curse-small
|
PubMed Abstracts
|
Annapolis-based restaurant chain Ledo Pizza apologized Wednesday after marking 9/11 with one of their signature square pizzas, the toppings of which were arranged to look like the American flag.
Some Twitter users were outraged that Ledo had chosen to commemorate the lives lost with a pizza, with some even accusing the pizza chain of using the tragic day to promote their food.
After initial backlash, Ledo deleted the original photo of the flag-themed pizza and replaced it with the stock image of an American flag. Many users had captured the original image of the pizza, however.
"Your recently deleted tweet might just be the most tone-deaf 9/11 'brand' tweet to ever be posted," one user wrote. "How dare you trivialize our nation's most impactful tragedy in recent memory with a goddamn pizza flag. Save it for 4th of July, you tactless clods."
your recently deleted tweet might just be the most tone-deaf 9/11 "brand" tweet to ever be posted. how dare you trivialize our nation's most impactful tragedy in recent memory with a goddamn pizza flag. Save it for 4th of July, you tactless clods. https://t.co/A4BTpU11Y4 pic.twitter.com/l1kPYHPlqh — crysta timmerman (@crystatimmerman) September 11, 2019
Others found the negative response to the pizza a bit over the top. "How about you don't get so triggered by a freaking pizza?" one user wrote in response to a scathing criticism of the chain's marketing practices.
Or how about you don't get so triggered by a freaking pizza? — NobodyYouKnow (@ShinerBockGirl) September 11, 2019
Ledo issued an apology Wednesday afternoon, acknowledging that the pizza post had not been their finest moment. "This morning, Ledo Pizza posted a photo of a pizza decorated as a flag of the United States of America on Twitter," the chain's Twitter account said. "As you may know, we regularly use this photo to show our Patriotism and Love for our country during holidays and remembrances."
"While most fans are used to seeing this photo and share our Patriotism, a few Twitter users took offense to this imagery and for this we are sincerely sorry," the post continued. "Our Twitter post was never intended to diminish the gravity of September 11th and has since been removed."
While most fans are used to seeing this photo and share our Patriotism, a few Twitter users took offense to this imagery and for this we are sincerely sorry. Our Twitter post was never intended to diminish the gravity of September 11th and has since been removed. — Ledo Pizza (@LedoPizza) September 11, 2019
Ledo Pizza's tweet is not the first corporate tribute to 9/11 that drew intense criticism. Spaghettios, pop band Smash Mouth, and cellular company Blackberry have also been slammed for social media posts and advertisements that many considered an attempt to monetize a national tragedy.
In 2016, a Texas mattress company temporarily closed down after being condemned for a video advertising a "Twin Tower" sale on 9/11.
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
Metabolomic profiling to identify potential serum biomarkers for schizophrenia and risperidone action.
Despite recent advances in understanding the pathophysiology of schizophrenia and the mechanisms of antipsychotic drug action, the development of biomarkers for diagnosis and therapeutic monitoring in schizophrenia remains challenging. Metabolomics provides a powerful approach to discover diagnostic and therapeutic biomarkers by analyzing global changes in an individual's metabolic profile in response to pathophysiological stimuli or drug intervention. In this study, we performed gas chromatography-mass spectrometry based metabolomic profiling in serum of unmedicated schizophrenic patients before and after an 8-week risperidone monotherapy, to detect potential biomarkers associated with schizophrenia and risperidone treatment. Twenty-two marker metabolites contributing to the complete separation of schizophrenic patients from matched healthy controls were identified, with citrate, palmitic acid, myo-inositol, and allantoin exhibiting the best combined classification performance. Twenty marker metabolites contributing to the complete separation between posttreatment and pretreatment patients were identified, with myo-inositol, uric acid, and tryptophan showing the maximum combined classification performance. Metabolic pathways including energy metabolism, antioxidant defense systems, neurotransmitter metabolism, fatty acid biosynthesis, and phospholipid metabolism were found to be disturbed in schizophrenic patients and partially normalized following risperidone therapy. Further study of these metabolites may facilitate the development of noninvasive biomarkers and more efficient therapeutic strategies for schizophrenia.
|
tomekkorbak/pile-curse-small
|
PubMed Abstracts
|
Sol Invictus (band)
Sol Invictus is an English neofolk group fronted by Tony Wakeford. Wakeford has been the sole constant member of the group since its inception, although numerous musicians have contributed and collaborated with Wakeford under the Sol Invictus moniker over the years.
Overview
After disbanding his controversial project Above the Ruins, Wakeford returned to the music scene with Sol Invictus in 1987. Since then Sol Invictus has had many musician contributions, including Sarah Bradshaw, Nick Hall, Céline Marleix-Bardeau, Nathalie Van Keymeulen, Ian Read and Karl Blake.
Wakeford repeatedly referred to his work as folk noir. Beginning with a mixture of a rough, bleak, primitive post punk sound and acoustic/folk elements, the band's music gradually evolved toward a lush, refined style, picking up classically trained players such as Eric Roger, Matt Howden, and Sally Doherty. In the mid-1990s, Sol Invictus spun off a side project called L'Orchestre Noir (later changed to Orchestra Noir) to explore an even more classically influenced direction. 2005 saw the departure of longtime contributors Roger and Blake, leading to a new line-up including Caroline Jago, Lesley Malone and Andrew King.
In 1990, Wakeford formed his own label, Tursa, to release his material and the music of other artists. The World Serpent Distribution Company previously distributed this material worldwide, followed then by Cold Spring Records. In July 2007, the label was re-launched as a partnership with Israeli producer and musician Reeve "M" Malka. In 2009, Sol Invictus signed to Prophecy Records. In June 2011, Sol Invictus announced the end of their partnership both with Cold Spring Records and musician Andrew King.
Imagery and content
The name Sol Invictus, which is Latin for 'the unconquered Sun', derives from the Roman cult of the same name.
The band's imagery and lyrical content, in its early days, was influenced by traditionalism and antipathy towards the modern world and materialism. A superficial interest was the Italian philosopher Julius Evola who Wakeford admits to "shamelessly stealing from" for song titles even though he found his books "unreadable". A more serious influence was the poet Ezra Pound: "I think Pound is one of the greatest poets ever, although some of his work is mind-numbingly obscure. I disagree with his antisemitism but that should not blind people to his worth as an artist."
The band also had considerable interest in heathen and Mithraist themes, often with an explicit antipathy to Christianity, reflecting the involvement of Wakeford and other members in neopagan groups. The 1997 album The Blade incorporates an Odinic chant, Gealdor, into its varied laments. Wakeford tended to write from a melancholic position of doomed Romanticism, which lamented the loss of beauty, love, and culture. He saw the American influence on global culture as very damaging to Europe, something he expresses with black humour in the song "Death of the West", from the album of the same name. The later albums have seen a turn to a more personal writing style, as interest in what Wakeford calls "knee-jerk anti-Americanism and anti-Christianity" has been rejected.
Sol Invictus album artwork has often showcased the expressionist paintings of American artist, musician and friend Tor Lundvall.
Controversy
Wakeford's mid-1980s membership in the British National Front and the appearance of a track from his band, Above The Ruins, on the "No Surrender!" compilation released in 1985 by Rock-O-Rama Records, alongside the Nazi groups Skrewdriver and Brutal Attack, has meant that Sol Invictus have been accused of neofascism. Wakeford has responded to this criticism various times, stating that his involvement with the National Front "was probably the decision of my life and one I very much regret", and that various members of his band (including his wife of eight years at the time) "would be at best discriminated against or dead if a far-right party took power" and further that "none of the artists I work with hold such views either, and I doubt they would want to work with me if they thought I did." In June 2011 the band, following attempts to cancel one of their concerts in London, stated that all its members "are personally completely and unequivocally opposed to fascism, racism, anti-semitism and homophobia, [...] and our work makes no attempt to appeal to an audience looking for this kind of message", also stating explicitly that they did not have "any sympathy with national anarchism, or any desire to work with its adherents".
Discography
References
External links
Sol Invictus on Myspace
Reviews
Review of The Devil's Steed
Review of Angel
Review of Sol Veritas Lux
Interviews
Interview with Tony Wakeford, 2006
Category:Musical groups established in 1987
Category:English folk musical groups
Category:Neofolk music groups
Category:British industrial music groups
Category:Neopagan musical groups
Category:Neopaganism in the United Kingdom
Category:Obscenity controversies in music
|
tomekkorbak/pile-curse-small
|
Wikipedia (en)
|
Mya Ginger, Tits HD Sex Club
Profile Comments
Join Mya Ginger HD Sex Club to Enjoy the Hottest Mya Ginger Porn Videos with Mya Ginger Sex Action. To be a VIP Member of Mya Ginger HD Sex Club GO PREMIUM. Join The Best HD Sex Club on the web. Bookmark hd-sexclub.com To Enjoy Mya Ginger HD Porn Videos Every Second!
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Q:
Why is getting a value from the end of a LinkedList much slower than from the start?
I have a LinkedList of 1,000,000 items. I measured the retrieval of an item first at index 100,000 and then at index 900,000. In both cases, the LinkedList goes through 100,000 operations to get to the desired index. So why is the retrieval from the end so much slower than from the start?
Measurements taken with JMH.
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 10)
@Measurement(iterations = 10)
public class ComparationGet {
static int val1 = 100_000;
static int val2 = 500_000;
static int val3 = 900_000;
@Benchmark
public void testGet1LinkedListFromStart(Blackhole blackhole, MyState state) {
MyDigit res1 = state.linkedList.get(val1);
blackhole.consume(res1);
}
@Benchmark
public void testGet2LinkedListFromEnd(Blackhole blackhole, MyState state) {
MyDigit res1 = state.linkedList.get(val3);
    blackhole.consume(res1);
}
}
Results:
from start:
ComparationGet.testGet1LinkedListFromStart avgt 10 0,457 ± 0,207 ms/op
from end:
ComparationGet.testGet2LinkedListFromEnd avgt 10 5,789 ± 3,094 ms/op
State class:
@State(Scope.Thread)
public class MyState {
public List<MyDigit> linkedList;
private int iterations = 1_000_000;
@Setup(Level.Invocation)
public void setUp() {
linkedList = new LinkedList<>();
for (int i = 0; i < iterations; i++) {
linkedList.add(new MyDigit(i));
}
}
}
MyDigit class:
public class MyDigit{
private int val;
public MyDigit(int val) {
this.val = val;
}
}
LinkedList get method:
public E get(int index) {
checkElementIndex(index);
return node(index).item;
}
Node<E> node(int index) {
// assert isElementIndex(index);
if (index < (size >> 1)) {
Node<E> x = first;
for (int i = 0; i < index; i++)
x = x.next;
return x;
} else {
Node<E> x = last;
for (int i = size - 1; i > index; i--)
x = x.prev;
return x;
}
}
A:
LinkedList is a fine example of the limitations of fundamental informatics-based reasoning about algorithms. Basic reasoning about the code here, and treating a computer as a simple von Neumann model, would dictate that either benchmark needs 100k steps to get from one 'end' to the desired item, and therefore, the benchmark should report equal times, give or take some statistical noise.
In actual fact, one is an order of magnitude slower than the other.
LinkedList is almost always the loser in such issues. In fact, as a rule of thumb, LinkedList should be banned from all codebases. It's almost always vastly slower than basic reasoning would indicate, and in the rare circumstances where LinkedList would (actually, in real benchmarks, not theoretically!) outperform an ArrayList, there's almost always a different type that's even more suitable, such as, say, ArrayDeque.
But, why?
There are many reasons. But usually it has to do with cache paging.
NB: For the CPU design expert: I've oversimplified rather a lot, to try to explain the key aspect (which is that cache misses drown out any algorithmic expectations).
Modern CPUs have hierarchical layers of memory. The slowest, by far, is 'main memory' (that 16GB of RAM or whatnot that you have). The CPU cannot actually read from main memory, at all. And yet O(n) analysis thinks that they can.
Then there's layers of caches, generally 3 (L1 to L3), and even faster than those, registers.
When you read some memory, what actually happens is that the system checks if what you want to read is mapped onto one of the caches, and only entire pages worth of memory can be, so it first checks which page your data is in, and then checks if said page is in one of those caches. If yes, great, the operation succeeds.
If not, uhoh. The CPU can't do your job. So instead, the CPU goes and does something else, or will just twiddle its thumbs for at least 500 cycles (more on faster CPUs!) whilst it evicts some page from one of the caches and copies over from main memory the page you wanted into one of the caches.
Only then can it continue.
Java guarantees that arrays are consecutive. if you declare, say, new int[1000000] java will guarantee that all 1000000 4-byte sequences are all right next to each other, so if you iterate through it, you get the minimum possible 'cache miss' events (where you read from some memory that isn't in one of the caches).
So, if you have an ArrayList, that is, well, backed by an array, so that array is guaranteed consecutive. However, the objects inside don't have to be. Unlike with new int[1000000], with new Object[1000000], you just have the pointers all consecutive; the actual objects they point at need not be.
However, for this test you've set up, that is immaterial, nothing in your code actually 'follows the pointer'.
In LinkedLists, you end up with no array at all, and instead with 2*X (X being the size of the list) objects: Your X objects you are storing, as well as X 'trackers'; each tracker contains a pointer (in java: reference) to the actual object being stored, as well as a 'previous' and 'next' pointer, pointing at its sibling tracker objects.
None of these are guaranteed to be consecutive in memory.
They could be smeared all over. Even just looping through each element in a list of 1000000, not following pointers at all, if the trackers are all over the place, that's theoretically a worst-case scenario of 1000000 cache misses.
Cache misses are so slow, and CPUs are so fast, that you can safely consider the job of iterating through each tracker (or through each item in a 1000000-sized array) as entirely free, zero CPU time required, as long as you don't run into cache misses: The cache misses tend to dominate the time requirements.
You'd have to investigate further, but here is a plausible explanation for what you're witnessing:
Your code runs in isolation (it is not doing much else); so your init is running unimpeded, and whilst Java makes no consecutive guarantees about any of this, your actual memory layout looks like: a MyDigit object, then a LinkedList tracker, then another MyDigit object, then another LinkedList tracker, and so on.
Nevertheless, going from the last node involves a number of cache misses, whereas going from the front (which also had the benefit of starting at 'byte 0' of a page) isn't nearly as badly affected.
For reference, here is a chart of access times of fetching a certain sized chunk of data, assuming optimal caching - Note the biiig spike when you get to 4M.
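If you want a quick way to feel the difference yourself, here is a rough sketch (plain System.nanoTime, a single JVM run, so treat the numbers as purely illustrative rather than a replacement for your JMH setup) contrasting a positional get near the middle of a LinkedList with the same access on an ArrayList:
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class MidListAccess {
    public static void main(String[] args) {
        final int n = 1_000_000;
        List<Integer> linked = new LinkedList<>();
        List<Integer> array = new ArrayList<>(n);
        for (int i = 0; i < n; i++) {
            linked.add(i);
            array.add(i);
        }

        // Positional access near the middle: O(n) node hops (and cache misses)
        // for LinkedList versus a single bounds-checked array read for ArrayList.
        time("LinkedList.get(n/2)", () -> linked.get(n / 2));
        time("ArrayList.get(n/2) ", () -> array.get(n / 2));
    }

    // Crude timing helper: repeat the call and keep the best observed time.
    static void time(String label, Runnable body) {
        long best = Long.MAX_VALUE;
        for (int i = 0; i < 20; i++) {
            long start = System.nanoTime();
            body.run();
            best = Math.min(best, System.nanoTime() - start);
        }
        System.out.printf("%s : %,d ns%n", label, best);
    }
}
On typical hardware the LinkedList call comes out slower by orders of magnitude, for exactly the cache-miss reasons described above; for anything you intend to rely on, keep using JMH with proper state and Blackhole handling as in the question.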
|
tomekkorbak/pile-curse-small
|
StackExchange
|
Aeros Style
The Aeros Style is a Ukrainian single-place paraglider designed and produced by Aeros of Kiev.
Design and development
The Style was intended as an intermediate paraglider for local and cross country flying. Some sizes were AFNOR certified as "standard". The original Style design was in production in 2003, but is no longer available, having been replaced by the Style 2. The Style 2 is an entirely new design, which shares only the name of the previous aircraft. The early Style variant number indicates the wing area in square metres. The Style 2 uses simple size designations instead of wing areas for model numbers.
The Style 2 is constructed from Gelvenor OLKS fabric for the wing's top surface and NCV Porcher 9017E38 for the bottom surface, with the ribs made from NCV Porcher 9017E29. The lines are made from Cousin Trestec.
Variants
Style 26
Circa 2003 version with a span wing, an area of , an aspect ratio of 5.15:1 and a maximum speed of . Pilot weight range is .
Style 28
Circa 2003 version with a span wing, an area of , an aspect ratio of 5.15:1 and a maximum speed of . Pilot weight range is . AFNOR certified.
Style 30
Circa 2003 version with a span wing, an area of , an aspect ratio of 5.15:1 and a maximum speed of . Pilot weight range is . AFNOR certified.
Style 32
Circa 2003 version with a span wing, an area of , an aspect ratio of 5.15:1 and a maximum speed of . Pilot weight range is . AFNOR certified.
Style 2 XXS
Version in production in 2012, with a span wing, an area of , with 46 cells, an aspect ratio of 5.23:1. Take-off weight range is .
Style 2 XS
Version in production in 2012, with a span wing, an area of , with 46 cells, an aspect ratio of 5.23:1. Take-off weight range is .
Style 2 S
Version in production in 2012, with a span wing, an area of , with 46 cells, an aspect ratio of 5.23:1. Take-off weight range is .
Style 2 M
Version in production in 2012, with a span wing, an area of , with 46 cells, an aspect ratio of 5.23:1. Take-off weight range is . AFNOR certified.
Style 2 L
Version in production in 2012, with a span wing, an area of , with 46 cells, an aspect ratio of 5.23:1. Take-off weight range is . AFNOR certified.
Style 2 XL
Version in production in 2012, with a span wing, an area of , with 46 cells, an aspect ratio of 5.23:1. Take-off weight range is .
Style 2 XXL
Version in production in 2012, with a span wing, an area of , with 46 cells, an aspect ratio of 5.23:1. Take-off weight range is .
Specifications (Style 2 M)
References
External links
Official website
Category:Paragliders
Style
|
tomekkorbak/pile-curse-small
|
Wikipedia (en)
|
DESCRIPTION: This proposal is driven by the hypothesis that regional specification of the forebrain is largely controlled by a hierarchy of genes encoding transcriptional regulators. A large portion of the proposal depends on gene knockout technology which will be used to inactivate the Dlx-1 and Dlx-2 genes, both individually and simultaneously. The phenotype of mice that are heterozygous and homozygous for the mutations will be analyzed histologically and biochemically.
|
tomekkorbak/pile-curse-small
|
NIH ExPorter
|
Mexico’s congress has been accused of caving into pressure from the fizzy drinks industry after agreeing to cut a groundbreaking tax on sugar-sweetened beverages.
Studies had shown that the tax, introduced in January 2014, had started to curb soda consumption in a country confronting crises of childhood obesity and diabetes.
The finance commission in the lower house of Congress approved halving the tax on sweetened drinks if the sugar content is less than five grams per 100 millilitres – an incentive for beverage producers to offer more low-calorie options, according to lawmakers with the right-leaning National Action Party (PAN).
But public health groups and some opposition politicians accused lawmakers of caving to pressure from drinks manufacturers and taking an incorrect approach to lowering calorie consumption.
“This measure weakens the tax and could weaken its effects,” said Dr Juan Rivera Dommarco, adjunct director of nutrition and health in Mexico’s National Institute of Public Health (INSP). “The way to safeguard public health is to increase the tax … then reduce it on the products that have less sugar.”
In January 2014, Mexico slapped a tax of one peso (six cents) per litre on sugary drinks, increasing the price of sodas by about 10%. It also applied a tax on high-calorie snacks like cookies and crisps.
The tax on sugary drinks has been held up as an example for other countries to follow, especially as diseases like diabetes and obesity boom in the developed and developing world alike.
Preliminary results from a study on the tax by researchers at INSP and the University of North Carolina showed a 6% reduction in soda consumption during 2014, with the rate increasing as the year progressed.
Mexicans guzzle more than 43 gallons of soda per person per year, according to the Rudd Center for Food Policy and Obesity at Yale University, while a preference for low-calorie versions of soft drink standbys has never taken hold.
Rivera attributed the thirst for soda to trends such as Mexicans weaning babies on soft drinks. And while potable water is not available in many parts of the country, soft drink bottlers operate networks delivering their products to the most remote and insecure corners of the country. Advertising is also intense, with companies like Coca-Cola sponsoring Christmas trees in public squares across the country, covered with ornamental logos.
Mexico’s soft drink makers’ association, ANPRAC, disputes the impact of the tax and drop in consumption. It says instead sales have slid 2.5% since the tax was applied and 1,700 jobs have been lost. It estimates the tax lowered caloric consumption by 6.2 calories per day.
Sugary drinks account for an estimated 35% of sales in the country's ubiquitous corner stores, according to the National Association of Small Merchants (ANPEC), which accused the federal government of taxing soda to extract money from the nearly 60% of the population in the informal economy.
ANPEC president Cuauhtémoc Rivera said bottlers began selling more soda in returnable containers as a way to offset the cost of the new tax and consumption levels seem to have stabilized.
He also noted: “It’s not clear that the money being collected is going toward fighting obesity.”
It’s unclear how many brands the new tax treatment would apply to as sales of low-calorie and artificially sweetened sodas are lower in Mexico than other countries.
“They’re small changes that basically benefit the companies,” lawmaker Vidal Llerenas of the leftwing party Morena told online news outlet Sin Embargo. “Basically, there was pressure from the private sector to push some modifications such as this to be able to reduce the tax.”
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
Have you seen Magic Mike, the movie about Channing Tatum's ass and feelings? How about Ted, that movie about Mark Wahlberg and the talking bear that makes plushies and furries really excited? Your answer may indicate not just your gender, but where you're from. One movie did well on the blue pinot noir drinking cheese eating grad school attending coasts, and the other cleaned up in America's meaty, Budweiser drinking red states in the country's middle and south.
Mike and Ted bared and beared their way to the top two spots at the box office last weekend, but according to Vulture, groups of ladies in red states, and specifically, in mid-size cities in conservative parts of the country were what propelled Mike to the second slot. Heh heh. Slot. Indianapolis, Charlotte, Orlando, St. Louis, Nashville, Tampa, and Kansas City all posted big, bulging ticket sales for the film.
So what's the deal with blue state ladies? Prude about male nudity? Or does living in a city full of liberals mean that women have become more blasé about people taking their clothes off? After all, it's hard to justify paying to go to the movies when a real, in-the-flesh naked guy is usually only a text message away, or (more realistically) when dudes are taking their pants off on the train all the time. There comes a time when a gal reaches her butt quota.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
DRY TYPE TRANSFORMERS BASICS AND TUTORIALS
A dry-type transformer is one in which the insulating medium surrounding the winding assembly is a gas or dry compound. Basically, any transformer can be constructed as “dry” as long as the ratings, most especially the voltage and kVA, can be economically accommodated without the use of insulating oil or other liquid media.
Many perceptions of dry-type transformers are associated with the class of design by virtue of the range of ratings or end-use applications commonly associated with that form of construction. Of course, the fundamental principles are no different from those encountered in liquid-immersed designs.
Dry-type transformers compared with oil-immersed are lighter and nonflammable. Increased experience with thermal behavior of materials, continued development of materials and transformer design have improved transformer thermal capability.
Upper limits of voltage and kVA have increased. Winding insulation materials have advanced from protection against moisture to protection under more adverse conditions (e.g., abrasive dust and corrosive environments).
Cooling Classes for Dry-Type Transformers
American and European cooling-class designations are indicated in Table 2.5.1. Cooling classes for dry-type transformers are as follows (IEEE, 100, 1996; ANSI/IEEE, C57.94-1982 (R-1987)):
Ventilated — Ambient air may circulate, cooling the transformer core and windings
Nonventilated — No intentional circulation of external air through the transformer
Sealed — Self-cooled transformer with hermetically sealed tank
Self-cooled — Cooled by natural circulation of air
Force-air cooled — Cooled by forced circulation of air
Self-cooled/forced-air cooled — A rating with cooling by natural circulation of air and a rating with cooling by forced circulation of air.
Winding Insulation System
General practice is to seal or coat dry-type transformer windings with resin or varnish to provide protection against adverse environmental conditions that can cause degradation of transformer windings. Insulating media for primary and secondary windings are categorized as follows:
Cast coil — The winding is reinforced or placed in a mold and cast in a resin under vacuum pressure. Lower sound levels are realized as the winding is encased in solid insulation. Filling the winding with resin under vacuum pressure eliminates voids that can cause corona. With a solid insulation system, the winding has superior mechanical and short-circuit strength and is impervious to moisture and contaminants.
Vacuum-pressure encapsulated — The winding is embedded in a resin under vacuum pressure. Encapsulating the winding with resin under vacuum pressure eliminates voids that can cause corona. The winding has excellent mechanical and short-circuit strength and provides protection against moisture and contaminants.
Vacuum-pressure impregnated — The winding is permeated in a varnish under vacuum pressure. An impregnated winding provides protection against moisture and contaminants.
Coated — The winding is dipped in a varnish or resin. A coated winding provides some protection against moisture and contaminants for application in moderate environments.
Because such a winding is not in contact with the external air, it is suitable for applications involving exposure to fumes, vapors, dust, steam, salt spray, moisture, dripping water, rain, and snow.
Ventilated dry-type transformers are recommended only for dry environments unless designed with additional environmental protection. External air carrying contaminants or excessive moisture could degrade winding insulation.
Dust and dirt accumulation can reduce air circulation through the windings (ANSI/IEEE, 57.94-1982 [R 1987]). Table 2.5.2 indicates transformer applications based upon the process employed to protect the winding insulation system from environmental conditions.
Enclosures
All energized parts should be enclosed to prevent contact. Ventilated openings should be covered with baffles, grills, or barriers to prevent entry of water, rain, snow, etc. The enclosure should be tamper resistant.
A means for effective grounding should be provided (ANSI/IEEE, C2-2002). The enclosure should provide protection suitable for the application, e.g., a weather- and corrosion-resistant enclosure for outdoor installations.
If not designed to be moisture resistant, ventilated and nonventilated dry-type transformers operating in a high-moisture or high-humidity environments when deenergized should be kept dry to prevent moisture ingress.
Strip heaters can be installed to switch on manually or automatically when the transformer is deenergized for maintaining temperature after shutdown to a few degrees above ambient temperature.
Operating Conditions
The specifier should inform the manufacturer of any unusual conditions to which the transformer will be subjected. Dry-type transformers are designed for application under the usual operating conditions indicated in Table 2.5.3.
Gas may condense in a gas-sealed transformer left deenergized for a significant period of time at low ambient temperature. Supplemental heating may be required to vaporize the gas before energizing the transformer (ANSI/IEEE, C57.94-1982 [R1987]).
Limits of Temperature Rise
Winding temperature-rise limits are chosen so that the transformer will experience normal life expectancy for the given winding insulation system under usual operating conditions. Operation at rated load and loading above nameplate will result in normal life expectancy.
A lower average winding temperature rise, 80°C rise for 180°C temperature class and 80°C or 115°C rise for 220°C temperature class, may be designed providing increased life expectancy and additional capacity for loading above nameplate rating.
Accessories
The winding-temperature indicator can be furnished with contacts to provide indication and/or alarm of winding temperature approaching or in excess of maximum operating limits. For sealed dry-type transformers, a gas-pressure switch can be furnished with contacts to provide indication and/or alarm of gas-pressure deviation from recommended range of operating pressure.
Surge Protection
For transformers with exposure to lightning or other voltage surges, protective surge arresters should be coordinated with transformer basic lightning impulse insulation level, BIL.
The lead length connecting from transformer bushing to arrester—and from arrester ground to neutral—should be minimum length to eliminate inductive voltage drop in the ground lead and ground current (ANSI-IEEE, C62.2-1987 [R1994]).
Lower BIL levels can be applied where surge arresters provide appropriate protection. At 25 kV and above, higher BIL levels may be required due to exposure to overvoltage or for a higher protective margin (ANSI/IEEE, C57.12.01-1989 [R1998]).
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Background
==========
Albuminuria is a risk marker of renal failure (RF) and its reduction suggests a slowing of RF \[[@B1]\]. Salt reduction (SR) improves the urine albumin-to-creatinine ratio (ACR) directly and/or via a reduction of blood pressure (BP) \[[@B2]\].
The Koyadaira area is an isolated rural community of approximately 1,000 residents in the Mima City, and has the highest mortality associated with RF in Japan \[[@B3]\]. The local government has provided the population with regular community health promotion classes, some of which target SR, but with little success at improving RF outcomes. In contrast, a Japanese urban hospital practice showed that SR, guided individually by dieticians, decreased urinary protein excretion \[[@B4]\]. There is no such data available in rural communities, where the health status and health-related behaviors can differ substantially from that of urban communities \[[@B5]\]. Successive doctors at the only clinic in this area have tried to tackle this problem using individual dietary guidance, but without success. Patients, their families and neighborhood residents are typically more closely related with each other in rural areas compared to urban communities \[[@B6]\], suggesting a family and community approach to health behavior change may be warranted.
Mima City National Health Insurance Koyadaira Clinic (Koyadaira Clinic) provides care for almost half of all the albuminuria-positive patients in this area. We conducted a pilot study with these patients to test the feasibility and effectiveness of an SR intervention that included patients' family members and neighbors to help motivate the patients to make dietary changes.
Methods
=======
Study design
------------
Non-randomized controlled trial.
Subjects
--------
All consecutive outpatients with albuminuria (ACR >= 30 mg/gCr) at Koyadaira Clinic from May to October 2006 were registered with this study, which was approved by the Ethics Committee of the National Hospital Organization Kyoto Medical Center. Each subject gave written informed consent. Participants had no clinical features of RF, ischemic heart disease, or stroke. All were invited to join a health promotion class and invite some of their family and friends to accompany them. Those who attended the class made up the intervention group (IG). Those who chose not to attend the class made up the control group (CG).
Intervention
------------
Patients in the IG were educated during a 2-hour health promotion class with their family and neighbors at a public town meeting hall. In addition, a 30-minute session on dietary change was held with their families at their home. Participants' medical diagnoses were not disclosed in front of neighborhood residents to protect patients' privacy. Four dieticians from outside the community conducted the education sessions. They used interactive exercises, such as pair and small group discussion, quizzes to estimate the amount of salt in foods, etc., to encourage participants' reduced consumption of traditionally salty Japanese foods. Examples of high salt foods, eaten frequently by elderly people in Japan, are miso soup, pickled vegetables and soy sauce. The dieticians asked the participants to set goals of behavior change for reducing salt intake and to record their behavior in daily logs. One month later, the dieticians mailed all participants a reminder about their salt reduction goal.
The doctor, three nurses at the clinic, and two public health nurses in the area assisted with the intervention activities and followed the subjects for three months post-intervention. They reviewed participants' food monitoring logs and provided encouragement for dietary change during monthly visits. The patients in the CG received usual care, which consisted of monthly visits and physician advice to reduce salt.
Measures
--------
The primary outcome was change in ACR measured before and after the 3 month intervention. ACR was determined by turbidimetric immunoassay in the early morning urine sample (N-assay TIA Micro Alb NITTOBO, Nittobo Medical Co. Ltd., Tokyo, Japan). Secondary outcomes were changes in systolic and diastolic blood pressures. The blood pressure was measured two times at ten-minute intervals in seated subjects using a mercury sphygmomanometer after 5 minutes at rest.
Serum creatinine (Cr) levels were determined enzymatically and blood urea nitrogen (BUN) was determined by the urease method. Estimated glomerular filtration rate (eGFR) (mL/min/1.73 m^2^) was calculated using the eGFR equation for Japanese, eGFR = 194 × Age^-0.287^ × Cr^-1.094^ (× 0.739, if female) \[[@B7]\]. After an overnight fast, body weight was measured using a body fat analyzer (HBS-354-W, OMRON, Kyoto, Japan), and body mass index (BMI) was calculated as weight in kilograms divided by height in meters squared.
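As a purely illustrative aside (not part of the original methods), the two formulas just quoted can be written down directly; the sketch below uses hypothetical input values rather than any data from this study.
public class RenalMeasures {

    // Japanese eGFR equation quoted in the Methods:
    // eGFR = 194 * Cr^-1.094 * Age^-0.287 (* 0.739 if female), in mL/min/1.73 m^2.
    static double estimatedGfr(double creatinineMgPerDl, double ageYears, boolean female) {
        double egfr = 194.0 * Math.pow(creatinineMgPerDl, -1.094) * Math.pow(ageYears, -0.287);
        return female ? 0.739 * egfr : egfr;
    }

    // Body mass index: weight in kilograms divided by height in metres squared.
    static double bodyMassIndex(double weightKg, double heightM) {
        return weightKg / (heightM * heightM);
    }

    public static void main(String[] args) {
        // Hypothetical inputs chosen only for illustration; they are not study data.
        System.out.printf("eGFR = %.1f mL/min/1.73 m^2%n", estimatedGfr(0.8, 69, false));
        System.out.printf("BMI  = %.1f kg/m^2%n", bodyMassIndex(60.0, 1.55));
    }
}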
The dieticians examined salt intake in the IG during the group and individual sessions, using Excel Eiyokun (version 4.5, KENPAKUSHA, Tokyo, Japan) for the assessment of dietary intake by food frequency questionnaire.
Analyses
--------
Statistical analyses were conducted using SPSS II for Windows (version 11.01J, SPSS Inc., Chicago, IL, USA). All data are reported as means ± SD or n (%). Baseline comparisons were performed with the Mann-Whitney U-test and the Chi-square test. A paired t-test was used to compare mean systolic and diastolic blood pressures and ACR before and after the 3-month intervention in each group. Two-way analysis of variance (ANOVA) was performed to compare the difference in changes of ACR, systolic and diastolic pressure between the two groups. Statistical significance was set at p \< 0.05 for all analyses.
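For readers who want to reproduce the analysis logic outside SPSS, the following Python sketch mirrors the tests described above (paired t-tests within groups and a two-way ANOVA across groups) using SciPy and statsmodels. The data frame layout, column names and file name are assumptions for illustration only, and the ANOVA specification is one plausible reading of the analysis rather than the exact SPSS model.

```python
import pandas as pd
from scipy.stats import ttest_rel
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Assumed long-format table: one row per subject and time point, with columns
# 'subject', 'group' ('IG'/'CG'), 'time' ('baseline'/'followup') and 'acr'.
df = pd.read_csv("acr_measurements.csv")  # hypothetical file name

# Paired t-test within each group: baseline vs. 3-month follow-up.
for group, sub in df.groupby("group"):
    wide = sub.pivot(index="subject", columns="time", values="acr")
    stat, p = ttest_rel(wide["baseline"], wide["followup"])
    print(f"{group}: paired t-test p = {p:.3f}")

# Two-way ANOVA; the group x time interaction addresses the between-group
# difference in change over the intervention period.
model = ols("acr ~ C(group) * C(time)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```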
Results
=======
Study participants
------------------
Of the 37 consecutive outpatients with albuminuria (20 male, 16 female; mean age 72.8 ± 9.2 years; hypertension 94.6%; diabetes 37.8%) enrolled in this study, 36 completed the 3-month follow-up. One subject in the IG dropped out on her own initiative. Data from 14 patients in the IG and 22 patients in the CG were analyzed.
Baseline characteristics of the sample are shown in Table [1](#T1){ref-type="table"}. The proportion of males and the levels of systolic blood pressure and Hb~A1C~ in the IG were significantly higher than those in the CG. Salt intake was estimated only in the IG and found to be 12.0 ± 3.5 g/day at baseline.
######
Baseline characteristics
| Variables | Intervention group (n = 14) | Control group (n = 22) | P value^a^ |
|---|---|---|---|
| Sex (male/female) | 11/3 | 9/13 | 0.041 |
| Age (y) | 69.0 ± 11.0 | 75.1 ± 7.2 | 0.111 |
| Body Mass Index (kg/m^2^) | 24.1 ± 3.1 | 24.9 ± 3.8 | 0.417 |
| Systolic blood pressure (mmHg) | 145.1 ± 13.9 | 134.9 ± 13.1 | 0.041 |
| Diastolic blood pressure (mmHg) | 67.1 ± 8.3 | 66.4 ± 12.2 | 0.733 |
| Hb~A1C~ (%) | 6.7 ± 1.8 | 5.1 ± 0.4 | \<0.001 |
| BUN (mg/dL) | 15.2 ± 3.3 | 18.9 ± 6.6 | 0.149 |
| Cr (mg/dL) | 0.8 ± 0.2 | 1.0 ± 0.4 | 0.745 |
| ACR (mg/gCr) | 706.1 ± 1082.1 | 212.5 ± 322.5 | 0.427 |
| eGFR^b^ (mL/min/1.73 m^2^) | 71.6 ± 23.4 | 59.6 ± 23.3 | 0.173 |
| Stage of chronic kidney disease | | | |
| Stage 1: eGFR^b^ \>= 90 | 2 (14.3) | 3 (13.6) | |
| Stage 2: 60\~89 | 7 (50.0) | 10 (45.5) | |
| Stage 3: 30\~59 | 5 (35.7) | 5 (22.7) | |
| Stage 4: 15\~29 | 0 (0.0) | 4 (18.2) | |
| Stage 5: \<15 | 0 (0.0) | 0 (0.0) | 0.376 |
| Antihypertensive drugs | | | |
| Any antihypertensive drugs | 11 (78.6) | 19 (86.4) | 0.541 |
| Renin-angiotensin system blocking drugs | 11 (78.6) | 15 (68.2) | 0.497 |
| Dietary salt intake^c^ (g/day) | 12.0 ± 3.5 | \- | \- |
Abbreviation: Hb~A1C~, glycosylated hemoglobin; BUN, blood urea nitrogen; Cr, serum creatinine; ACR, urine albumin-creatinine ratio; eGFR, estimated glomerular filtration rate
Data are mean ± SD or n (%).
^a^P value were calculated by Mann-Whitney U-test and Chi-square test if categorical variables were used.
^b^Calculated using eGFR equation for Japanese, eGFR = 194 × Cr^-1.094^× Age^-0.287^(× 0.739, if female) \[[@B7]\].
^c^Only subjects in intervention group were examined by dieticians using Food Frequency Questionnaire.
Changes in ACR and blood pressure
---------------------------------
Primary and secondary outcomes at three months are shown in Table [2](#T2){ref-type="table"}. ACR decreased significantly within the IG, and the between-group difference in change approached significance (p = 0.070). Systolic blood pressure decreased significantly in both the within-group and between-group comparisons.
######
Primary and secondary outcome at 3 months
| Variables | Intervention group (n = 14) | Control group (n = 22) | Between-group difference (p value^a^) |
|---|---|---|---|
| **Primary outcome** | | | |
| ACR (mg/gCr) | | | |
| Baseline | 706.1 ± 1081.2 | 212.5 ± 322.5 | |
| After 3 months | 440.0 ± 656.3 | 163.5 ± 161.5 | |
| Change | -266.1 ± 436.3 | -49.0 ± 261.2 | 0.070 |
| Within-group difference (p value^b^) | 0.040 | 0.388 | |
| **Secondary outcome** | | | |
| Systolic blood pressure (mmHg) | | | |
| Baseline | 145.1 ± 13.9 | 134.9 ± 13.1 | |
| After 3 months | 130.9 ± 12.9 | 130.9 ± 14.0 | |
| Change | -14.3 ± 13.9 | -4.0 ± 14.6 | 0.043 |
| Within-group difference (p value^b^) | 0.002 | 0.212 | |
| Diastolic blood pressure (mmHg) | | | |
| Baseline | 67.1 ± 8.3 | 66.4 ± 12.2 | |
| After 3 months | 62.7 ± 7.6 | 66.8 ± 7.8 | |
| Change | -4.4 ± 7.6 | 0.5 ± 11.3 | 0.165 |
| Within-group difference (p value^b^) | 0.048 | 0.853 | |
Abbreviation: ACR, urine albumin-creatinine ratio
Data are mean ± SD.
^a^Two-way ANOVA
^b^Paired t-test
Discussion
==========
The present study shows promise for decreasing ACR in patients with albuminuria who were encouraged to reduce salt intake together with their families and neighborhood residents. The intervention focused on goal setting for dietary SR and used family and neighbors to offer support to patients in a rural area where close human relationships remain. The study is innovative in including patients\' families and neighborhood residents in activities designed to encourage reduction of the patients\' dietary salt intake. To our knowledge, this is the first report of a patient education intervention that involves patients\' neighbors as a source of social support.
Education in sodium reduction is meaningful for these subjects. Salt intake in the IG at baseline was 12.0 g/day, which exceeds the Japanese population mean of 11.2 g/day \[[@B8]\]. The recommended salt intake is less than 10 g/day for the general population \[[@B9]\] and less than 6 g/day for chronic kidney disease patients \[[@B7]\].
This was conducted as a pilot study in a rural population sensitive to perceived unequal treatment; therefore, we let the subjects decide whether or not to join the intervention activities. As a result, baseline characteristics differed between the IG and the CG. Randomized controlled trials, likely requiring multiple centers, are needed in the future.
A major limitation of the present study relates to sample size. Convenience sampling was used, and no sample size determination was performed. Therefore it is possible that the statistically insignificant findings are the result of low power.
In addition, there was possible treatment contamination (sharing of intervention information) between the IG and the CG. The study community is so small that subjects in both groups may have shared the content of the intervention activities when chatting in their neighborhood, in the waiting room of Koyadaira Clinic, etc. Since outcomes tended to improve in both groups, one might assume that both groups benefited from the intervention. Finally, we should have measured salt intake in the CG as was done for the IG. Baseline characteristics were different between the IG and the CG, so it is possible that salt intake was not similar between the two groups.
Conclusions
===========
In addition to the patients themselves, simultaneous education of families and neighborhood residents may improve outcomes in rural patients, suggesting a possible community element to motivation for health behavior change in this population. Future studies are needed to examine this hypothesis.
Competing interests
===================
The authors declare that they have no competing interests.
Authors\' contributions
=======================
SF conceptualized, designed, acquired funding, collected and analyzed data, and drafted the manuscript. KK, EK and NS contributed to conception, design, analysis and interpretation of data and writing the manuscript. PJB participated in revising the manuscript critically for interpretation of data and meaning of this study design. PJB and EK gave final approval of the versions to be published. KT, YM, MD and YS managed this study, organized the intervention activities and collected data. All authors read and approved the final manuscript.
Acknowledgements
================
We thank Mr. Fumiyuki Eguchi, Ms. Kazue Amaki, Ms. Kiyoko Sako, Ms. Akemi Kawaguchi and Mr. Makoto Hote (Mima City National Health Insurance Koyadaira Clinic), Ms. Megumi Harada and Ms. Junko Izumi (Mima City Koyadaira General Office) for technical assistance. This study was supported in part by a grant-in-aid from the Foundation for the Development of the Community in Japan.
|
tomekkorbak/pile-curse-small
|
PubMed Central
|
Child sacrifice in Uganda is more common than authorities acknowledge. Children disappear frequently, murdered or mutilated by witch doctors as part of ceremonial ritual.
According to Ureport, an SMS-based reporting system supported by Unicef and Brac: “10,317 youth in Uganda, representing every district in the country, confirmed they have heard of a child being sacrificed in their community”. A 2013 report from Humane Africa said that during its four-month fieldwork period from June to September 2012 there was an average of one sacrifice each week in one of the 25 communities where the research was based.
The practice is rooted in the belief that blood sacrifice can bring fortune, wealth and happiness. The “purer” the blood, the more potent the spell, making innocent children a target. Witch doctors look for children without marks or piercings, so many parents pierce their children’s ears at birth and get their boys circumcised in an effort to protect them.
Children are either abducted from, or in some cases actually given to, witch doctors by relatives out of desperation for money. The rituals involve the cutting of children and the removal of some body parts, often facial features or genitals. These brutal acts are done while the child is still alive, and few survive.
According to Ugandan police records, incidences of child sacrifice are on the increase, with 10 cases recorded in 2013. The Ugandan Internal Trafficking Report estimated the number was 12, whereas first-hand interviews by Humane Africa detailed 77 incidents. Current research by KidsRights states that these varying statistics are most likely the “tip of the iceberg, as data is insufficient and the real scope of child sacrifice is not yet visible”.
Evidence that reflects the true scale of the problem is hard to find, as many cases go unreported and, as a result of corruption in the police and judicial system, few perpetrators are convicted. Unicef stated that “task forces ... lack resources to convene and exist often in name only”.
Masese II, a small community of displaced people on the outskirts of Jinja in eastern Uganda, has suffered many ritual attacks on its children.
The police cite eastern Uganda as having the highest incidence of child sacrifice cases, and blame the high infiltration of unregistered healers. With little protection from the authorities, communities like Masese II were seemingly powerless. In partnership with a Ugandan NGO, Adolescent Development Support Network (ADSN), UK charity Children on the Edge started a programme there in 2012, after an assessment identified the children in this slum as particularly vulnerable.
The only industry at the time was the brewing of potent alcohol (waragi), which did not generate enough income for parents and carers to feed their children or send them to school. With many adults inebriated and a prevalence of grandmother- and child-headed households, children were particularly exposed to being taken.
Children on the Edge ran a social mapping workshop with children to show them the safe places to play. Photograph: Rachel Bentley/Children on the Edge
To bring the abduction rates down, a “child friendly space” was established in the community, using a donated local council building. This centre is a safe place, where children from the ages of three to six receive a daily meal, learn, play and receive care from trusted adults. When they reach primary school age their parents/carers are supported through income-generating schemes to enable them to send their children to school. As part of the project, a patch of land was donated to grow food for the children at the centre and to enable many adults from the community to develop agricultural livelihoods as an alternative to breweries.
The most important component of the programme has been the establishment of a community child protection committee (CCPC). At the height of a spate of killings in July 2012, 10 responsible adults were identified within the community and were trained on all aspects of child protection. Part of this process was to raise awareness of the issue of child sacrifice, tackling the beliefs, mindsets and behaviour that sustain the practice. These workshops were held together with local leaders and police.
The CCPC then began raising awareness of child protection issues within the community, holding meetings and visiting door to door. They were equipped with a loudspeaker system so that when a child went missing the community could be alerted. This, along with a bicycle so that members could immediately report cases to the local police, has proved to be a remarkable deterrent to the perpetrators.
Equipping the community child protection committee with bicycles meant they could report missing children to local police more quickly. Photograph: Edwin Wanede/Children on the Edge
Children have participated in the process by helping to identify the area by the railway tracks where they are at most risk of abduction. Children used to collect scrap around these tracks, but the committee taught them to avoid it and not to wander too far from home.
All of these measures have resulted in a rapid decrease in abductions, with seven cases in 2011, eight in 2012 and no incident in Masese II in the last 18 months. The CCPC reports that there was one attempted case nine months ago, when a four-year-old girl was taken, but she was “swiftly rescued by community members”.
ADSN programme manager Edwin Wanede says the CCPC has made an impact by building relationships and trust, while the government’s poster, radio and TV ads do not get the message through. This is because many poorer communities are illiterate, and people respond better to the advice of their friends and neighbours than to that of strangers and authorities.
With work in Masese II proving effective, Children on the Edge and ADSN have started developing the programme in two neighbouring communities. Two weeks ago, we heard that two children in the Jinja area had gone missing. The mother found two skulls which she suspects are her children’s. Previously her partner had suggested one child be given to a witch doctor in exchange for 200 million shillings (£45,000). It’s clearly time to begin some work on replicating the Masese II project in other areas.
One of these places is Wandagu, which is situated off a main highway and consequently prone to passers-by stealing children. Isolated parts of the sugar plantations here also provide hiding places for perpetrators. Just four months ago a five-year-old girl was found murdered amongst the sugar canes, with parts of her body missing. A CCPC is already being formed here, and a few bicycles and loudspeakers bought. The hope is that soon the Wandagu community will form a safety net as strong as that in Masese II.
Esther Smitheram is communications officer at Children on the Edge. Follow @cote_uk on Twitter.
Join our community of development professionals and humanitarians. Follow @GuardianGDP on Twitter.
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
We can do the right thing to create positive change within ourselves and the world around us! I have created this blog with the intention of keeping you informed of news that is affecting humanity and nature throughout the world! There is no better time than the present to become a global participant and not just an innocent bystander. I have provided you with several websites to help empower yourself and a list of global organizations that you can choose from to make a difference.
December 23, 2016
Published on Apr 4, 2016: A Night In Cologne: The New Year's Eve celebrations that changed Germany's attitude to refugees
Now this is me talking, don't conflate "refugees" that genuinely need our help with millions of YOUNG able bodied Islamist men leaving their nations to collect welfare in European nations. There is a huge difference. ALL of these so-called "migrants" or "refugees" DO NOT come from Syria as the media would like everybody to think. (emphasis mine)
Suspected terrorist Anis Amri was a known criminal and potential terrorist to several Western intelligence agencies before plowing a stolen truck into a Berlin Christmas market Monday.
Amri is a Tunisian refugee who may have entered Europe as early as 2011. Italian media reports he spent four years in an Italian jail before taking advantage of Germany’s open door policy for refugees. After arriving Amri associated with a known terrorist, and was placed under surveillance for allegedly trying to buy an automatic weapon. He made his livelihood dealing drugs, and was eventually taken off the surveillance list.
Amri was then scooped up in an operation against a prolific ISIS sympathizing preacher, who he previously fell under the spell of. German authorities rejected Amri’s asylum application but did not deport him because the Tunisian government refused to confirm his identity. Amri was subsequently released, and allowed to remain unmolested in Berlin for months.
Along the way Amri was considered so dangerous by U.S. intelligence agencies, he was at one point on a U.S. no-fly list. U.S. officials also say he communicated with the Islamic State via secure messaging apps, and had done research on how to build a bomb.
Amri fell under the spell of a prolific ISIS sympathizing cleric during his time in Germany, and was reportedly given two options. The first option was to join the ISIS caliphate in Iraq and Syria, and the second was to carry out an attack in Germany.
Amri hijacked a Polish steel truck Monday, killing the driver after a prolonged struggle, and eventually got the truck started. The truck’s GPS route indicates he knew exactly which route to take, and he jumped the curb as soon as he got close enough to his selected market.
The Polish truck driver’s body was found in the cab after Amri fled. His wife told local media he had hoped to finish his deliveries early and spend Christmas with his family.
Anis Amri remained on the run for four days after the attack, managing to escape from Germany and travel to France and Italy.
It is "really worrying" that the Berlin lorry attacker was able to travel across Europe amid a continent-wide manhunt, experts have told Sky News.
Anis Amri was killed in a shootout with police in the Italian city of Milan on Friday morning - four days after his rampage left 12 dead.
Amri, who pledged allegiance to Islamic State, was able to get out of Germany after the attack.
Italian police said he had travelled by train to Chambery in France, and then to Turin in Italy.
The 24-year-old arrived at Milan's Central Station at 1am on Friday, and then made his way to the suburb of Sesto San Giovanni, where he was shot dead.
Amri pulled a gun from his backpack after being asked to show his identification, with the officers initially unaware of who he was.
Germany is part of the Schengen zone, which allows passport-free travel between most EU states, a deal the UK is not a part of.
But Chris Phillips, former head of the National Counter Terror Office, told Sky News the ease by which he slipped out of the country was "really worrying" and "a sign of the times".
"We've seen this in the past with the Paris attackers making their way back to Brussels without any difficulty at all. It's a big problem," he said.
Tony Smith, former director general of the UK's border force, echoed the concerns.
He told Sky News: "He had no ID documents after the event, there was a full manhunt in place and nonetheless he was able to cross two or three borders before he was eventually apprehended and even then really on the basis of more of a suspicious routine stop rather than an intelligence-led operation.
"So, it does beg the question, what is the Schengen group going to do about their borders?
"And in particular, how are they going to respond to critical incidents like this in the future?"
Former UKIP leader Nigel Farage and the leader of France's National Front, Marine Le Pen, have demanded the EU scraps freedom of movement in the aftermath of the attack.
Mr Phillips said Germany, France and Belgium have "serious problems" when it comes to monitoring terrorists.
Talking about Germany, he said: "They want to be a very free, open, democratic society - of course we all do - but they've got a natural disinclination to have things like CCTV cameras and the arms of the state that actually do protect you in times of terrorism."
Mr Phillips added: "If you think back to the London attacks, we were able to follow the suspects right back to the start of their journey, right back to their car journey when they were in the petrol station filling up with petrol and we knew from that who these people were and what they were on their way to do."
About Me
I love to travel and get away from it all whether it's 1st class, 2nd class or 3rd class makes no difference to me. I simply love to visit new places and meet new people. I really enjoy extreme sports. I started blogging nine years ago and love to be able to express and share thoughts with others.
Most recently a Mortgage Professional prior to implosion. Earned a living in my previous career as an Institutional Equity Trader (sell side). I have a bachelor's degree in finance with special emphasis in economics.
Ready to Defend! Are you?
❤ May my heart be kind, my mind fierce and my spirit brave. ❤
Hello WORLD and welcome
Thank you for visiting. I will do my best to keep you posted to global news affecting humanity and this planet as we know it today. I will bring you global news Monday through Friday adding my insight along the way. In between the non-sense, I will pepper in a little humor, random stuff and inspiration for balance and I will use the weekend to feed your spirit.
What does Capitalism mean to me?
I've been asked many times if I still track the stock market. My answer is a resounding yes. The stock market is in my blood. I'm still tracking the markets, still doing research and still following economic news. This is the one industry that is the heart of global productivity. It is essential in pumping the necessary oxygen (capital) to corporations that in turn hire employees who will in turn produce the products and services that we all use. Capitalism is a very very important element to humanity. It is what fuels dreams, self-reliance and individualism!
T.E.A.= TAXED ENOUGH ALREADY! I am an INDEPENDENT/CENTRIST Former Democrat for 20 years
RAISE YOUR VOICE!
We don't need money to make a difference, although it does help. However, each signature is a RAISED voice demanding change! This is our contribution to the World. By saying ENOUGH is ENOUGH, together we truly CAN and WILL make a difference! Please sign a petition and help spread the word... Thank you!
My Amazon Affiliate Account has been terminated due to a new California TAX LAW just passed. I'm keeping this display to continue promoting Miyazaki's wonderful animation that I love very much! Thank you for your support. ♥
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
A recombinant protein, TmSm(T34A), can inhibit proliferation and promote apoptosis in breast cancer stem cells (BCSCs) by down-regulating the expression of Cyclin D1.
Cancer stem cells (CSCs), a small fraction of cancer cells with proven stem cell characteristics, have in recent years been regarded as "bad seeds" related to the recurrence, metastasis and chemotherapy resistance of breast carcinoma. Inhibiting the growth of CSCs, or inducing their differentiation and apoptosis, is therefore considered one of the effective ways to fight breast cancer. Building on the recombinant protein TmSm(T34A), which was designed and prepared in our previous experiments to target survivin, an inhibitor of apoptosis protein (IAP), we explored in this study the effects of TmSm(T34A) on BCSCs obtained from MCF-7/ADM cells by enrichment in serum-free suspension culture, followed by sorting and characterization. The results showed that TmSm(T34A) could not only inhibit the proliferation and growth of BCSCs by decreasing the CD44+CD24- proportion and significantly down-regulating the expression of Cyclin D1, but also clearly induce apoptosis in BCSCs. Furthermore, in nude mice bearing BCSC xenografts that were administered TmSm(T34A), tumor growth was obviously slower than in controls. Thus, TmSm(T34A) appears to be a promising protein for the treatment of breast cancer through its effects on BCSCs.
|
tomekkorbak/pile-curse-small
|
PubMed Abstracts
|
We would like to inform you that your feedback is under the purview of the National Environment Agency (NEA). We will, by copy of this email, refer your feedback to them for their attention and reply to you.
Should you wish to contact NEA directly on the matter, please refer to their email address below.
It was quite a nice touch and very impressive of NEA to come up with a reply within 24 hours of receiving the email from LTA. But what I read next at the end of the email frightened the shit out of me:
This message may contain confidential information under the purview of the Official Secrets Act. Unauthorized communication or disclosure of such information is an offence under the Official Secrets Act. If you are not the intended recipient of this message, please notify the sender and delete it. Do not retain it or disclose the contents to any person as it may be an offence under the Official Secrets Act.
Whoa, since when have environmental issues become official secrets?
Hmm... does it look like LTA is passing the bin buck to NEA who in turn is passing it to its North West Regional Office? Meanwhile, I passed by the zebra crossing this morning and the rubbish bin was still there. So Alex was right when he commented that LTA would probably get another department to look at the problem. Smart aleck Alex. How did you know? You worked in LTA before ah?
16 comments:
Anonymous
said...
When I discussed transport matters with my relatives, it seemed that they had a poor impression of the LTA. After spending my entire career with a govt organisation, I am convinced that it is not the organisation that is at fault, but the people running it, especially the top man. My personal experience is such that if the boss is a poor leader - passing the buck, blaming everyone except himself, taking credit for himself instead of attributing it to the staff, taking cover and avoiding responsibility which he would neatly shift to his subordinates - the organisation will then be seriously undermined. I am quoting an extreme case, but believe me, it is true.
tsk...tsk..tsk... Not say I wanna say one. A bunch of good for nothing civil servants. As the Hokkien likes to say, "Jia Leow Bee" one. Sometimes, I'm ashamed to call myself a CS too :((
Does it really have to wait until someone got killed before they spring into action?
But then again, from what I've juz experienced these few days, private firms didn't fare much better either, when it comes to providing good services. My brush with Starhub and HP leaves much to be desired. I may blog about it to let off steam....
Thanks Zen, Chris and Frannxis for your comments. I just received not one but TWO phone calls from NEA staff. The first one told me that action would be taken and the 2nd one told me that the rubbish bin had been removed. Quite efficient, I must say. But when I asked for their names, the first man said "Mr Tan". When I insisted he should be more "identifiable", he responded, "Mr K H Tan". The 2nd one was no better, he was a "Mr Yip". Typical civil servants, I must add - very shy in giving their full names.
That Yip of NEA claimed that the rubbish bin had been removed. What RUBBISH!!! I passed by the same location on the morning of 28 Dec 2006 and the rubbish bin was still there! I immediately called NEA again. This time I talked to Zainal. I recited the whole story to him again and left a message for Yip to call me back. At the end of the day, I still did not hear from Yip. Sigh *shaking head*. Trust these people to save our environment.
One thing I must emphasise, our big boss's favourite tactic was to annotate on any 'damned' documents, including reports. He would annotate all over the documents, mainly negative remarks such as: "should increase this, that by ...deadline, poor performance, I want this, that" and so on. My dept manager was so frustrated in just answering his queries. I once asked my manager: "Mr Tan, how to provide the impossible data this fellow wants?" Mr Tan replied in an irritated manner in Teochew: "Cho qeit e lah!" (do for him lah!). On another occasion, I asked a senior officer regarding our top boss: "Mr Daniel, if big boss ask you to jump into the sea, will you?". Daniel replied: "I will!". I gave up - office politics.
I knew Daniel was a former teacher from quite a famous school, also worked for short spell in an'uniform branch'(maybe unable to catch any criminal-got to change job). A swimmer?, I should say he was more of a ba...carrier. His favourite tactic was to act stupid and when questioned, he replied: "You know...survival lah!"
Hehehe... zen, u meant balls carrier is it? To put it more delicately, he was an apple-polisher. I see tons of those in my office. But before I got distracted from the main issue here, let's come back to this RUBBISH post of Victor.
The way I look at it, the Civil Service is not entirely apathetic. Juz that what the private sector takes to complete a task in 1 day, the CS takes 1 month. LOL.
I will also jump into the sea if my boss tells me to. The only difference between me and Daniel is that I would also drag my boss into the sea with me. ROTFL.
NEA no hope lah! I once complained about a hawker stall in Marine Parade Food Centre selling cigarettes to minors (only 13/14 yrs old). I complained to the police of Marine Parade Neighbourhood post. They said they called NEA to inform them of this.
But my students were still able to buy cigarettes from this stall weeks after. I called NEA twice. Once I spoke to a woman & she said she will look into the matter.
A week passed & my students still could buy cigarettes from the stall. I called NEA again, this time a Malay guy attended to me.
Ok, result - my students continue buying cigarettes from the stall & also can buy from petrol kiosks, mama shops & mini marts all around Marine Parade. Walau!!!!
NEA - very good!!!!
I have more examples from HDB, MOE, STB, SCDF and the Police Force.
I want to post the stories on my blog one day. Then they really look stupid!
On the whole, I still think our public sector employees are very 'on the ball'. Good example is NLB - I am not saying this becos my friend Ivan the Rambling Librarian could be reading.
During my first visit to the Victoria St main library, my wife made some enquiries (sorry cannot remember details), but she was so pleased with the assistance she got that she pronounced "1st class customer service".
So Victor, pls go back to your previous post. "Tis the season to be .. forgiving ..". Maybe the ENV people were tied up with all the floods etc, Or maybe many people were on leave becos of the festive season and they were short handed and thus did not get around to your 'rubbish/ complaint as fast as they would like to...
I agree with Chun See on the forgiving part, but public complaints do carry weight, why ? because the big boss does care for his rice bowl, even though he would definitely try to hunt for a few scapegoats asap. Meanwhile he would set up many meetings,chiding everyone except himself, delivering sermons on the importance good service, forming committees (headed by others) etc. Results pop out, but bulk of the job being done by the staff, with the big boss doing lip service and 'stewing around'.
I lost my wallet and housekeys once. I was more concerned about my IC, so I went down to the Neighbourhood Police Post. The man in blue who attended to me said that loss of IC need not be reported anymore. When I queried what if it landed in the hands of someone who used my IC to do illegal stuff, guess what he said? "Who asked you to be so careless?" I was too stunned for words. Is this how a public servant carries himself? How to have better rapport with the public like that?
Chun See - So you are back. I have marked on my calendar that your live interview with Eugene Loh on 3 Jan 2007 at 1.45 pm on radio 938Live. (Can't resist doing a little advertising for you here, hehe.) Hope your mum-in-law's ok.
Chris - As Evan would say, cut the boys in blue some slack lah. You know some of them are only serving out their NS. How professional can they be?
Even HK TV is poking fun at HK law (British based). Say when a guy threatens another person with a knife, and there are people witnessing it, the police would only allow the victim to make a recorded statement, nothing else. In other words, only after the victim is being stabbed then only the police can take action. By then the victim could be dead. Singapore laws are also based on the British legal system. Very risky, everyone for himself, it is advisable to take up some martial art of self defence.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
All relevant data are within the paper and its Supporting Information files.
Introduction {#sec001}
============
Within the intensively studied field of early hominin evolution, a crucial question is the split of our own clade from the Panini. Over the last decades the fossil record of potential early hominins increased with taxa such as *Ardipithecus*, *Orrorin* and *Sahelanthropus* \[[@pone.0177127.ref001]--[@pone.0177127.ref003]\]. Recent molecular data propose a divergence time of *Pan* and *Homo* between 5 and 10 Ma \[[@pone.0177127.ref004]\] and Langergraber *et al*. \[[@pone.0177127.ref005]\] propose an age of at least 7--8 Ma. These estimations largely coincide with the evidence obtained from the fossil record across Africa and Eurasia \[[@pone.0177127.ref006], [@pone.0177127.ref007]\].
In the present study, we define 'hominoid' as 'apes'; 'hominid' as 'great apes and humans'; 'hominine' as 'African apes and humans'; and 'hominin' as 'humans and their non-ape ancestors'. Currently, the fossil record reveals three Miocene candidates with potential hominin affinity. *Ardipithecus kadabba* is dated to between 5.2 and 5.8 Ma. It is more primitive than *Ardipithecus ramidus* and may not belong to the same genus \[[@pone.0177127.ref008]\], but it does show hominin affinities such as evidence of bipedalism and canine reduction \[[@pone.0177127.ref009], [@pone.0177127.ref010]\]. *Orrorin tugenensis* is dated to \~5.8--6.0 Ma and shows an upright posture \[[@pone.0177127.ref002], [@pone.0177127.ref011]\]. *Sahelanthropus tchadensis* is dated to \~6--7 Ma \[[@pone.0177127.ref003], [@pone.0177127.ref012]\] and provides several derived cranial and dental features that suggest hominin affinity. Lebatard et al. \[[@pone.0177127.ref013]\] propose an age of 7.2--6.8 Ma for *Sahelanthropus*. We do not consider this age determination to be reliable given the circumstances of the provenance of the skull \[[@pone.0177127.ref014]\] and the relatively low accuracy of the method \[[@pone.0177127.ref015]\].
The overwhelming effort to reconstruct hominin origins has been focused on the African continent. However, ancestral lineages remain largely unknown \[[@pone.0177127.ref016]\]. A crucial problem in identifying ancestral lineages is the prevalence of homoplasy and the relative lack of derived morphological features, which reduce the phylogenetic resolution around lineage divergence \[[@pone.0177127.ref017], [@pone.0177127.ref018]\]. Root morphology might be one feature that is less affected by homoplasy. Studies on fossil hominids, extant great apes and humans indicate that the premolar root number is not primarily linked to a functional adaptation, and is interpreted to represent a genetic polymorphism \[[@pone.0177127.ref019], [@pone.0177127.ref020]\]. Hence, homoplasy is only a minor consideration for the traits of premolar root numbers, which therefore may provide a useful phylogenetic signal. Nevertheless, some relations of root and crown morphology indicate overlying masticatory adaptations that may attenuate the phylogenetic signal \[[@pone.0177127.ref021], [@pone.0177127.ref022]\].
Of special importance for hominin evolution is the lower fourth premolar (p4), as its morphology seems to be diagnostic for the hominin lineage. Taxonomic attempts have been made concerning its crown morphometry \[[@pone.0177127.ref023]--[@pone.0177127.ref025]\] and especially its root configuration \[[@pone.0177127.ref026], [@pone.0177127.ref027]\], which has turned out to be a powerful tool for early hominin phylogeny \[[@pone.0177127.ref028]\]. Several morphological traits of putative early hominin p4s (*Sahelanthropus*, *Ar*. *kadabba*, *Ar*. *ramidus*) point to a reduced configuration. A two-rooted, but narrow state is documented in *Sahelanthropus* \[[@pone.0177127.ref028], [@pone.0177127.ref029]\]. A Tomes' root is present in *Ardipithecus kadabba* and a single-rooted p4 is characteristic for *Ardipithecus ramidus* \[[@pone.0177127.ref001], [@pone.0177127.ref030], [@pone.0177127.ref031]\] and *Homo*. The plesiomorphic p4 root configuration shown by extant great apes, basal hominids like *Proconsul* and Miocene hominines (*Ouranopithecus*) differs significantly, showing two or three clearly diverging roots and four pulp canals \[[@pone.0177127.ref028], [@pone.0177127.ref032]\]. The p4 root number in australopithecines (*Au*. *anamensis*, *Au*. *afarensis*, *Au*. *africanus*; \[[@pone.0177127.ref033]--[@pone.0177127.ref037]\]) is highly variable, from a Tomes' root up to a three-rooted condition \[[@pone.0177127.ref026]\]. Another p4 root morphology, which has two roots that are fused on their basal buccal part, has recently been described for some specimens of *P*. *robustus*, *Au*. *africanus* and australopithecines from Woranso-Mille \[[@pone.0177127.ref025], [@pone.0177127.ref036]\].
In this study, based on root morphology, we propose a new possible candidate for the hominin clade, *Graecopithecus freybergi* from Europe. *Graecopithecus* is known from a single mandible from Pyrgos Vassilissis Amalia (Athens, Greece) \[[@pone.0177127.ref038]\] and possibly from an isolated upper fourth premolar (P4) from Azmaka in Bulgaria \[[@pone.0177127.ref039]\] ([Fig 1A and 1B](#pone.0177127.g001){ref-type="fig"}). A new age model for the localities Pyrgos Vassilissis and Azmaka, as well as investigations of the fauna of these localities \[[@pone.0177127.ref040]\], confirms that European hominids thrived in the early Messinian (Late Miocene, 7.25--6 Ma) and therefore existed in Europe \~ 1.5 Ma later than previously thought \[[@pone.0177127.ref039]\]. This, together with recent discoveries from Çorakyerler (Turkey) and Maragheh (Iran), demonstrates the persistence of Miocene hominids into the Turolian (\~8 Ma) in Europe, the eastern Mediterranean, and Western Asia \[[@pone.0177127.ref041], [@pone.0177127.ref042]\].
![**Fig 1.** The Pyrgos mandible of *Graecopithecus freybergi* and the Azmaka P4 of cf. *Graecopithecus* sp. (see [S1 Fig](#pone.0177127.s001){ref-type="supplementary-material"}). **c**, Occlusal view. **d-e**, Apical view. **f**, Buccal view of the left hemimandible. **g**, Buccal view of the right hemimandible. **h**, Lingual view of the left hemimandible. **i**, Lingual view of the right hemimandible. Scale bars, 10 mm.](pone.0177127.g001){#pone.0177127.g001}
The type mandible of *G*. *freybergi* was found in 1944 by von Freyberg, who mistook it for the cercopithecid *Mesopithecus* \[[@pone.0177127.ref043]\]. In the first description by von Koenigswald \[[@pone.0177127.ref038]\] the mandible was identified as a hominid. Some authors have concluded, based on external morphology and in particular the apparently thick enamel and large molars, that another hominid from Greece, *Ouranopithecus* (9.6--8.7 Ma \[[@pone.0177127.ref044]\]), could not be distinguished from *Graecopithecus*, thus synonymizing the former with the latter \[[@pone.0177127.ref045]\]. Other authors have consistently maintained a genus level distinction between *Ouranopithecus* (northern Greece) and *Graecopithecus* (southern Greece), based on the argument that the Pyrgos specimen is insufficiently well preserved to diagnose a taxon (nomen dubium) or based on anatomical arguments \[[@pone.0177127.ref006], [@pone.0177127.ref044], [@pone.0177127.ref046]\].
Here, we provide a detailed description of the Pyrgos and Azmaka specimens by using μCT based analyses and 3D visualisations. For the first time, their internal structures are examined in order to reveal previously unknown characters in root and pulp canal morphology. Additionally, previously described features are re-assessed and a new diagnosis of *G*. *freybergi* is given. Thereby, we address the taxonomic validity of *G*. *freybergi* and further, raise the possibility of a hominin affinity.
Material and methods {#sec002}
====================
The studied material comprises the type specimen of *Graecopithecus freybergi* from Pyrgos Vassilissis Amalia (Athens, Greece)---a mandible with partially damaged permanent dentition (c-m3, [Fig 1A](#pone.0177127.g001){ref-type="fig"}) and RIM 438/387---an upper fourth premolar of cf. *Graecopithecus* sp. from Azmaka (near Chirpan, Bulgaria; [Fig 1B](#pone.0177127.g001){ref-type="fig"}). The fossil sites are dated to the early Messinian at 7.175 Ma (Pyrgos Vassilissis) and 7.24 Ma (Azmaka; AZM 4b) \[[@pone.0177127.ref040]\].
Comparative data of fossil and extant great apes and humans were obtained from casts (*O*. *macedoniensis*/RPl-54) and the literature. The selection criterion for the comparative taxa was the availability of appropriate data in the literature. Accordingly, the literature data need to describe the same anatomical structures that are preserved in *G*. *freybergi* (e.g. dental root morphology, number and length, corpus dimensions, etc.). Further, attention was paid to the comparability of measurements, which is specifically discussed in the relevant methods sections below. Thus, the set of comparative taxa may vary between the investigated characters.
The type mandible of *G*. *freybergi* was found in 1944 during construction of a German bunker \[[@pone.0177127.ref043]\]. Situated in the urban area of Athens, the fossil site is overbuilt and thus not accessible anymore. The mandible and further vertebrate fossils were deposited in reddish fine sediments of Late Miocene age.
A first preparation of the mandible was done by von Koenigswald \[[@pone.0177127.ref038]\]. For further studies \[[@pone.0177127.ref045]\] it was brought to the Natural History Museum in London, where it has been completely cleaned of the surrounding matrix. The damaged external face of the symphysis has been treated with resin, which has stabilized the preserved internal face of the symphysis.
μCT and virtual reconstruction {#sec003}
------------------------------
Both halves of the mandible and the Azmaka tooth were separately scanned with the GE Phoenix v\|tome\|x s μCT scanner at the Institute for Archaeological Sciences (INA, University of Tübingen, Germany). The Pyrgos and Azmaka scans have a resolution of 29.48 μm and 21.44 μm, respectively. The specimens were scanned at 170/150 kV and 170/140 μA. No beam hardening artefacts were observed. The μCT slice data were converted into 3D volumes using Avizo 8.0 software (FEI Visualization Sciences Group). The fossil material was virtually isolated from the background, the adhesives and rock particles. Further, the density contrast of bone, dentine, enamel and filled cavities was used to segment specific anatomical elements of the mandible (mandibular bone, dental crowns, roots and pulp cavities/canals). The segmentation was complicated by the low density contrast of the Pyrgos scan, thus requiring both manual and semiautomatic segmentation of each anatomical element slice by slice (4578 slices in total). This was done with a combination of surface determination, region growing and masking tools. For further processing in Geomagic Wrap 3.0 software (3D Systems Corporation), smaller datasets were required. Therefore, the surfaces of the reconstructed elements were simplified in Avizo. The extracted STL-files were transferred to Geomagic, where both halves of the mandible were digitally repositioned and finally smoothened for presentation purposes ([Fig 1C](#pone.0177127.g001){ref-type="fig"}).
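The segmentation itself was carried out in Avizo; purely to illustrate the density-contrast principle described above, the following NumPy sketch labels enamel, dentine and bone by grey-value thresholding. The file name and the threshold values are placeholders and would need to be calibrated for a real scan, and such a sketch is no substitute for the manual slice-by-slice work described here.

```python
import numpy as np

# Hypothetical 3D array of grey values loaded from the μCT image stack.
volume = np.load("pyrgos_volume.npy")

# Placeholder grey-value thresholds; enamel is denser than dentine, which is denser than bone.
BONE, DENTINE, ENAMEL = 900, 1400, 1900

labels = np.zeros(volume.shape, dtype=np.uint8)
labels[volume >= BONE] = 1     # mandibular bone (denser tissues are relabelled below)
labels[volume >= DENTINE] = 2  # dentine: roots and crown cores
labels[volume >= ENAMEL] = 3   # enamel caps

# Voxels left at 0 correspond to background, adhesives and low-density sediment.
print({label: int((labels == label).sum()) for label in range(4)})
```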
Mandibular measurements {#sec004}
-----------------------
Mandibular measurements were taken on the type of *G*. *freybergi* and a cast of the type of *O*. *macedoniensis* (RPL-54). Further comparative data of *O*. *macedoniensis* (NKT-21, RPL-90, RPL-80, RPL-56, RPL-75, RPL-89, RPL-94) were obtained from the literature \[[@pone.0177127.ref047], [@pone.0177127.ref048]\]. Additional mandibular measurements from the literature come from the taxa *Ankarapithecus meteai* \[[@pone.0177127.ref049]\], *Sivapithecus sivalensis* and *S*. *punjabicus* \[[@pone.0177127.ref045]\], *Nakalipithecus nakayamai* \[[@pone.0177127.ref050]\], *Australopithecus anamensis* \[[@pone.0177127.ref051]\], *Au*. *deyiremeda* \[[@pone.0177127.ref052]\], *Au*. *afarensis*, *Au*. *africanus*, early *Homo*, *Paranthropus robustus* and *P*. *boisei* \[[@pone.0177127.ref052]\]. The measurements on *G*. *freybergi* were made using the Avizo 3D measuring tool directly on the μCT-slices or the un-smoothened 3D reconstruction. The cast of RPL-54 was measured with a calliper gauge (accuracy = 0.02 mm). Unless otherwise stated, all values are given in millimetres and rounded to one decimal.
For the mandibular dimensions, the corpus height (H) and breadth (B) were measured at the positions between and below each tooth. The measurements were performed on the μCT-slices oriented perpendicular to the alveolar plane. The measurement of the corpus breadth corresponds to a calliper measurement aligned on the lingual side of the corpus. The corpus height was measured lingually, perpendicular to the breadth measurement, as shown in [S2 Fig](#pone.0177127.s002){ref-type="supplementary-material"}. The mandibular robusticity index (RI) was calculated as the ratio B/H. Further, the μCT-sections ([S2 Fig](#pone.0177127.s002){ref-type="supplementary-material"}) were taken in each position to ensure the reliability of the corpus dimensions in *G*. *freybergi*. The sections show that the mandible is crushed ventrally and the outer cortex is partially missing. This mainly concerns the right hemimandible. Therefore, the breadth-height measurements were restricted to the better-preserved left corpus. Particularly, in the position of m2/m3 to m3 the outer cortex and the trabecular bone are largely preserved. Hence, a reliable breadth can be given here. A minimal estimation is given for the breadths at p3/p4 to m2. A small amount of damage on the lower rim is reconstructed as shown in [S2 Fig](#pone.0177127.s002){ref-type="supplementary-material"}. Accordingly, a minimal estimation is given for the corpus depth in the position from m2 to m3.
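To make the index explicit, a one-line Python helper is enough; the breadth and height values below are placeholders, not measurements of the Pyrgos specimen.

```python
def robusticity_index(breadth_mm: float, height_mm: float) -> float:
    """Mandibular robusticity index: corpus breadth (B) divided by corpus height (H)."""
    return breadth_mm / height_mm


# Hypothetical corpus dimensions in millimetres:
print(round(robusticity_index(breadth_mm=18.0, height_mm=32.0), 2))
```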
The mandibular symphysis preserves only parts of the internal (lingual) face. Therefore, its symphysal height and breadth are not measurable. To assess its limited morphology, three anatomical planes were constructed on a sagittal μCT-cross section: alveolar plane (AP), sublingual plane (SP) and plane of transverse tori (TP = bitangent of the upper and lower transverse tori). The angle of SP and TP with AP is measured, as well as the angle of SP with TP. Comparative symphysal cross-sections of *O*. *macedoniensis* (RPl-56, RPl 75, RPl-54) were obtained from literature \[[@pone.0177127.ref053]\].
The width of the dental arcade is measured on the repositioned 3D reconstruction of *G*. *freybergi* and the cast of RPl-54. The distances were taken lingually at the cervix of each tooth. The slight distortion of left and right hemimandible is considered here to be minor and thus, the un-corrected direct measurements are provided. Although the Pyrgos mandible is broken, the distance between both hemimandibles is determinable as the internal face of the symphysis is continuously preserved.
Dental crown measurements {#sec005}
-------------------------
The tooth crown dimensions were measured with the 3D measuring tool of Avizo 8.0 on the un-smoothened virtual reconstruction of the Pyrgos specimen. The length (mesiodistal) and width (buccolingual) were measured for the preserved right m2 crown. In p4 only the mesiodistal length is measurable as parts of the buccal crown are broken. Tooth row lengths must be used with caution as the teeth of the Pyrgos specimen are severely crowded and show intense interstitial wear. Particularly, the m1 crown is strongly affected by interstitial wear and lateral crushing. In order to get an approximation of its original size, we applied the tooth area prediction following Evans et al. \[[@pone.0177127.ref054]\]. We used the estimation model developed for australopithecines and calculated the crown size derived from the known m2 dimensions. The application of this model to taxa other than those intended by Evans *et al*. must be treated with caution and first needs a thorough investigation. A first hint of its applicability for our purpose was obtained by testing it on the well-preserved dentition in the type of *O*. *macedoniensis*. Comparative data for the crown dimensions in the m2 of *O*. *macedoniensis*, *O*. *turkae*, *N*. *nakayamai* and *A*. *meteai* \[[@pone.0177127.ref041], [@pone.0177127.ref047]--[@pone.0177127.ref050], [@pone.0177127.ref055]\] and the P4 of cf. *Graecopithecus* sp., *O*. *macedoniensis* and *O*. *turkae* \[[@pone.0177127.ref039], [@pone.0177127.ref041], [@pone.0177127.ref048], [@pone.0177127.ref056]\] were obtained from literature. Additional literature data of crown dimensions in the p4, m2 and P4 of other taxa (*S*. *tchadensis*, *O*. *tugenensis*, *Ar*. *kadabba*, *Ar*. *ramidus*, *A*. *afarensis*, *A*. *anamensis*, *P*. *troglodytes*) are obtained from \[[@pone.0177127.ref001]--[@pone.0177127.ref003], [@pone.0177127.ref009], [@pone.0177127.ref033], [@pone.0177127.ref057]\].
The enamel thickness was measured for the P4 from Azmaka and the right p4 and m2 of the Pyrgos specimen. The enamel of m1 was too fragmentary for quantification. Relative enamel thicknesses could not be applied, due to the intense dental wear. Hence, two dimensional measurements were taken following Suwa & Kono \[[@pone.0177127.ref058]\]. Abbreviations are adopted from \[[@pone.0177127.ref058], [@pone.0177127.ref059]\]:
MCS: mesial cusp section. Section through the dentine horn tips of the metaconid and the protoconid.
l: radial enamel thickness on the lingual side of the metaconid.
k: radial enamel thickness on the buccal side of the protoconid.
The teeth were virtually sectioned in Avizo through the mesial dentine horn tips (MCS) from buccal to lingual. The generated CT-sections were directly used for the two dimensional linear measurements. Due to the intense occlusal and interstitial wear, the enamel on the lateral sides provides the least altered thicknesses. Hence, we took the radial enamel thickness only on the lingual (l) and buccal (k) sides of each tooth. The buccal side of lower molars can further be altered if there is a Carabelli's cusp in the opposing upper molar. Therefore, we measured the lingual side of the lower teeth and the buccal side of the upper teeth \[[@pone.0177127.ref060]\].
The μCT-based measurements were taken at a resolution of \~30 μm and are given in millimetres, rounded down to the first decimal place. The published radial enamel thicknesses used for comparison \[[@pone.0177127.ref041], [@pone.0177127.ref058]--[@pone.0177127.ref060]\] are derived from differing methodologies. This mainly concerns earlier studies that used physically sectioned teeth. This method introduces the uncertainty that the MCS may not be positioned exactly at the dentine horn tips. Martin \[[@pone.0177127.ref059]\] cut a mesial section through the tips of the enamel cusps, assuming that the dentine horn tips lie exactly underneath. However, this is not always the case. Grine \[[@pone.0177127.ref060]\] sectioned the teeth distal to the enamel cusps to ensure that the dentine horn tips remain. Afterwards, the cut surface of the mesial block was ground down until the dentine horn tips were reached. The measurements were then derived from SEM-micrographs of the MCS. Today, radial thicknesses are measured by μCT with a resolution of 40 μm and 56 μm \[[@pone.0177127.ref041], [@pone.0177127.ref058]\]. Accordingly, inter-observer errors between these studies can be expected. Considering these limitations, the present comparison of enamel thicknesses aims to show the large-scale differences (thin/medium/thick enameled) between taxa. The comparative samples consist of male and female specimens in unbalanced proportions, assuming no significant sexual dimorphism in molar absolute enamel thicknesses \[[@pone.0177127.ref061], [@pone.0177127.ref062]\]. In addition, the sex of fossil specimens is not always known, so a bias towards males or females cannot be excluded. The specimens of *Homo sapiens* are from diverse archaeologically derived and recent populations \[[@pone.0177127.ref058]--[@pone.0177127.ref060]\].
Root length {#sec006}
-----------
The measurement of the root length follows Moore et al. \[[@pone.0177127.ref063]\] and was performed with the 3D measuring tool in Avizo 8.0. The measurement is done linearly from the root apex to the point where the pulp canal cuts the cervical plane. Thereby, the measurement largely follows the course of the pulp canal. We considered only the longest radical of each tooth (maximal root length). For *G*. *freybergi* these are the following positions: single root-apex of c, distobuccal root-apex of m1, mesiobuccal root-apex of p3, p4, m2 and m3.
Estimated corrections ([S3 Fig](#pone.0177127.s003){ref-type="supplementary-material"}): The root lengths of the left m2 and the right molars (m1-m3) are completely preserved and the maximal root lengths can directly be measured. The canine and premolars are only partially preserved. The right p4 lacks the apical root tips and the right p3 only preserves a fragment of the distal root. In the left hemimandible the upper parts of the roots of c-m2 are eroded, but the apical root tips are all preserved. Though this preservation does not allow a direct measurement of root length, an estimation of their final root lengths can be made. The corrected measurements on the canine and premolar roots can be derived from the apical root depths known from the left c-m3 and the right m1-m3. The cervical planes preserved in the right hemimandible provide the upper limit. As the mandibular corpus is slightly distorted, it is not possible to create a simple cervical plane across both halves. In order to bring them into the same vertical plane, the left hemimandible was mirrored and aligned to the right one via the software Geomagic Wrap 3.0. The positioning of both hemimandibles was done by aligning the left and right m1-m3 at their points of root bifurcation. Thereby, the left canine and premolar roots were transferred to the right side, where the cervical planes were largely preserved. The cervical planes were constructed through the cervices of the right m2-p4 and were extended to the position of p3 and c. Hence, the upper and lower ends of the p4, p3 and canine roots are defined by the cervical plane of the right hemimandible and the apical root tips of the left hemimandible.
Comparative data: The comparative root length data of extant hominids (*Pongo pygmaeus*, *Gorilla gorilla*, *Pan troglodytes* and *Homo sapiens*) are from Abbott \[[@pone.0177127.ref064]\]. The comparative fossil taxa include *S*. *tchadensis* \[[@pone.0177127.ref028]\], *Ar*. *ramidus* \[[@pone.0177127.ref031]\], *Au*. *anamensis* and *Au*. *afarensis* \[[@pone.0177127.ref065]\]. For extant hominids, the minimum, maximum, mean and standard deviation are given for the root lengths of males and females. The fossil hominids are sex-pooled or not assigned to sex. Minimum, maximum, mean and sample size (n) are given.
Some comparative studies used slightly different methods of root length measurements.
Abbott \[[@pone.0177127.ref064]\] derived root lengths from 2D radiographs and measured an *actual root height* of each root of a tooth. The *actual root height* is the apico-cervical distance along the root axis and thus largely resembles our measurement. For comparison, we chose the same root positions that we measured on *G*. *freybergi*: single canine root, distal root in m1, mesial root in p3, p4, m2 and m3. Similar to our root length measurements on *G*. *freybergi*, the comparative data of *S*. *tchadensis* are maximum root lengths measured on 3D reconstructions \[[@pone.0177127.ref028]\]. In *Ar*. *ramidus*, *Au*. *anamensis* and *Au*. *afarensis* the canine lengths used here were measured apico-cervically on original specimens and casts \[[@pone.0177127.ref031], [@pone.0177127.ref065]\].
Root morphology {#sec007}
---------------
The dental root configuration follows the formula given by Emonet \[[@pone.0177127.ref032]\]. Thereby, the number and position of the roots and pulp canals are described for each tooth position:
[χ]{.smallcaps}αM+[ү]{.smallcaps}βD (for multi-rooted teeth) and 1~1~ (for single-rooted teeth with one pulp canal)
[χ]{.smallcaps} = mesial root number; [ү]{.smallcaps} = distal root number; α = number of mesial pulp canals; β = number of distal pulp canals; M = mesial; D = distal.
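For readers unfamiliar with this notation, the hypothetical helper below simply assembles such formulae from root and pulp-canal counts (subscripts are written here after underscores). It is only a formatting aid and not part of the published method.

```python
# Build a root/pulp-canal formula string: the large numeral gives the root
# count and the subscript (after the underscore) the pulp-canal count for the
# mesial (M) and distal (D) parts of a tooth.
def root_formula(mesial_roots=None, mesial_canals=None,
                 distal_roots=None, distal_canals=None,
                 single_root_canals=None):
    if single_root_canals is not None:
        # single-rooted tooth with n pulp canals, e.g. the canine: 1_1
        return f"1_{single_root_canals}"
    return f"{mesial_roots}_{mesial_canals}M+{distal_roots}_{distal_canals}D"

print(root_formula(single_root_canals=1))  # canine of G. freybergi: 1_1
print(root_formula(1, 1, 1, 2))            # its p4: 1_1M+1_2D
```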
There have been several attempts to define the degree of bifurcation and the number of roots \[[@pone.0177127.ref022], [@pone.0177127.ref026], [@pone.0177127.ref027], [@pone.0177127.ref032], [@pone.0177127.ref066]\]. As our comparative data for root numbers largely come from \[[@pone.0177127.ref032]\] and \[[@pone.0177127.ref028]\], we follow their definitions: Two free roots are counted if there is no fusion of dentine for more than one third of the total root length and both radicals have a distinct apex. If a lingual radical is connected to a buccal radical by a thin blade and both radicals are visible for more than half of their total length, they are counted as two separate roots. For better comparability with other studies, we provide figures of the root and pulp morphologies of each tooth of the Pyrgos and Azmaka specimens ([S1 Fig](#pone.0177127.s001){ref-type="supplementary-material"}).
Description of the specimens {#sec008}
============================
The Pyrgos specimen consists of a mandible with partially damaged dentition (c-m3). It belongs to an adult individual as indicated by the fully formed permanent dentition and the closed root apices. The tooth crowns of the right p4-m2 are partially preserved and the dental roots of the right p3-m3 and left c-m3 are largely preserved. The anterior mandibular body is snapped in two, separating both corpora, but the break is clean and the specimen is easily reassembled ([Fig 1A, 1C and 1D](#pone.0177127.g001){ref-type="fig"}). Both corpora show slight distortion and some damage, especially on the right side.
The mandibular corpus is deep in cross section (tall relative to breadth). Although the right mandibular corpus is crushed ventrally, a reliable breadth-height ratio is preserved on the left corpus from m2 to m3 ([Fig 2](#pone.0177127.g002){ref-type="fig"}, [S2 Fig](#pone.0177127.s002){ref-type="supplementary-material"} and [S1 Table](#pone.0177127.s004){ref-type="supplementary-material"}). The mental foramen preserved on the left corpus is positioned below the p4. It is situated \~6.0 mm from the mandibular base and \~22.5 mm from the alveolar margin. The dental arcade is narrow and divergent, with a distance of \~15 mm between the lingual sides of the p3 cervices and \~26 mm at the m3s ([Fig 3A](#pone.0177127.g003){ref-type="fig"} and [S2 Table](#pone.0177127.s005){ref-type="supplementary-material"}).
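For orientation, the toy example below restates the two descriptors used here and in Fig 2: the robusticity index (corpus breadth divided by corpus height) and the lingual arcade widths quoted above. The breadth/height pair is hypothetical and chosen only so that the ratio comes out near the RI value reported later for the m2 position; the arcade widths are those given in the text.

```python
# RI = corpus breadth / corpus height, as defined for Fig 2.
def robusticity_index(breadth_mm: float, height_mm: float) -> float:
    return round(breadth_mm / height_mm, 2)

print(robusticity_index(breadth_mm=13.3, height_mm=25.0))  # ~0.53 (hypothetical pair)
print(26.0 - 15.0)  # the arcade widens by ~11 mm from the p3s to the m3s
```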
![Robusticity and dimensions of the mandibular corpus in *G*. *freybergi* and *O*. *macedoniensis*.\
**a**, Mandibular robusticity index (RI = corpus breadth/height) in different tooth positions of *G*. *freybergi* compared to female and male *O*. *macedoniensis* (RPL-54, NKT-21, RPL-90, RPL-80, RPL-56, RPL-75, RPL-89, RPL-94; \[[@pone.0177127.ref047], [@pone.0177127.ref048]\] and this study). **b**, Corpus breadth and **c**, Corpus height in different tooth positions of *G*. *freybergi*, *O*. *macedoniensis* (RPL-54, NKT-21, RPL-90, RPL-80, RPL-56, RPL-75, RPL-89, RPL-94; \[[@pone.0177127.ref047], [@pone.0177127.ref048]\] and this study) and *Au*. *afarensis* \[[@pone.0177127.ref052]\]. In *G*. *freybergi*, the mandibular corpus is laterally crushed and is close to the real breadth only posterior to the left m2. Minimum estimations are indicated with dashed line. See also [S2 Fig](#pone.0177127.s002){ref-type="supplementary-material"} and [S1 Table](#pone.0177127.s004){ref-type="supplementary-material"}.](pone.0177127.g002){#pone.0177127.g002}
![Morphometry of the mandibular corpus and symphysis in *G*. *freybergi* and *O*. *macedoniensis*.\
**a**, Bivariate plot of the mandibular tooth row of *G*. *freybergi* (black) and *O*. *macedoniensis* (RPl-54; grey) illustrating the differences in arcade width. The lingual distances between the left and right tooth row are plotted for each tooth position, if preserved. Measurement was done at the lingual sides of the dental cervices. The vertical axis shows the measuring position along the tooth row given as distance from m3. **b**, Top: Sagittal sections through mandibular symphyses of *O*. *macedoniensis* (RPl-56, RPl 75, RPl-54 \[[@pone.0177127.ref053]\]). Bottom: Sagittal section through the preserved veneer of the mandibular symphysis of *G*. *freybergi* (black) aligned to the symphysal sagittal section of *O*. *macedoniensis* \[[@pone.0177127.ref053]\] (RPl-54, grey; same scale). AP: Alveolar plane; TP: plane of transverse tori; SP: sublingual plane. The inclination of TP and SP, and the angle between both planes is given for *G*. *freybergi*.](pone.0177127.g003){#pone.0177127.g003}
The symphysis provides only limited information as it is mostly missing save a thin veneer (2--3 mm) of a portion of the lingual cortical bone surface ([Fig 3B](#pone.0177127.g003){ref-type="fig"}). The CT scans show that the anterior cortical and trabecular bone are missing and confirm that some cortical bone of the internal (lingual) surface is preserved. Hence, the lower part of the sublingual plane, the superior transverse torus (t.t.sup.) and the inferior transverse torus (t.t.inf.) are preserved ([Fig 3B](#pone.0177127.g003){ref-type="fig"}). The genioglossal fossa between both tori is shallow but clearly visible. The horizontal position of the t.t.sup. is at the level of the mid-p4, and the t.t.inf. is at the level between p4 and m1. The constructed bitangent of the t.t.sup. and t.t.inf. (plane of transverse tori; TP) forms an angle of 56° with the alveolar plane. The sublingual plane is oriented at an inclination of about 37°. The symphysal height and depth are not measurable.
The partially preserved crowns of right p4 (its mesiobuccal face is missing), m1 and m2 show extreme occlusal and interstitial wear ([Fig 1C](#pone.0177127.g001){ref-type="fig"}). The p4 retains only a thin layer of occlusal enamel. Dentine is exposed on its buccodistal half and the metaconid (wear stage 5, after \[[@pone.0177127.ref067]\]). Although the occlusal surface is largely flattened, a mesio-distal step is clearly visible between the mesial cusps and the talonid. The occlusal enamel of the m1 and m2 is almost completely worn away, exposing large parts of the dentine. In m1 the conids are entirely worn away and only the outer rim of enamel remains (wear stage 7). In m2 (wear stage 5--6) the abrasion is focused on the buccal conids, where a deep hollow reaches the pulp chamber. The entoconid and metaconid are still visible, but expose their dentine horns. Due to the interstitial wear, the mesial face in m1 is S-shaped and in m2 concave ([Fig 1C](#pone.0177127.g001){ref-type="fig"}). The distal half of m1 is obliterated with the interstitial wear reaching deep into the dentine. Martin & Andrews \[[@pone.0177127.ref045]\] calculated a crown length reduction of 32% for this tooth. This is consistent with the estimated loss of 30% in m1 tooth area when we apply the tooth size prediction after Evans et al. \[[@pone.0177127.ref054]\]. Reliable crown measurements can only be taken from m2 (BL = 13.2 mm, MD ≈ 14.2 mm; [Fig 4](#pone.0177127.g004){ref-type="fig"}) and p4 (MD = 9.1 mm). Based on the cervical root areas, tooth size is estimated to increase from m1 to m3 \[[@pone.0177127.ref045]\]. The m2 is often referred to as being slightly broader than the mandibular corpus at this level \[[@pone.0177127.ref038], [@pone.0177127.ref044]--[@pone.0177127.ref046]\], which is seen as a unique character of *G*. *freybergi*. However, this is partially an artefact of crushing, as the μCT-section reveals ([S2 Fig](#pone.0177127.s002){ref-type="supplementary-material"}). The better-preserved left corpus shows a breadth similar to that of m2, which is nevertheless unique among hominoids. Hence, the posterior dentition still shows clear evidence of megadontia relative to corpus dimensions, but perhaps less dramatically than previously thought. The teeth are thickly enamelled, with a lingual radial enamel thickness of 1.40 mm for m2 and 1.50 mm for p4 ([Fig 5](#pone.0177127.g005){ref-type="fig"} and [S5 Table](#pone.0177127.s008){ref-type="supplementary-material"}). The m1 radial enamel thickness is not measurable. The pulp chambers of the molars (right m1 and m2; [S1C Fig](#pone.0177127.s001){ref-type="supplementary-material"}) are vertically narrow. Their upper surface is flat as their pulp horns are inconspicuous or lacking. The CT scans reveal an accumulation of dentine in large parts of the pulp chamber and pulp horns. Dentine layers of lower density may trace the original pulp chamber. Thus, an accretion of secondary dentine can be assumed, particularly on the roof and the horns of the pulp chambers.
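The area-loss estimate can be read as simple arithmetic once a prediction of the unworn m1 crown area is available: the sketch below uses hypothetical placeholder areas and does not implement the tooth-size prediction of Evans et al. itself; the published figure is \~30%, consistent with the 32% crown-length reduction of Martin & Andrews.

```python
# Interstitial loss as a fraction of the predicted unworn crown area.
# Both areas below are hypothetical placeholders.
def percent_area_loss(preserved_mm2: float, predicted_mm2: float) -> float:
    return round(100.0 * (1.0 - preserved_mm2 / predicted_mm2))

print(percent_area_loss(preserved_mm2=112.0, predicted_mm2=160.0))  # 30
```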
![Dental crown dimensions of Late Miocene hominids.\
**a**, m2 crown dimensions of *G*. *freybergi*, *O*. *macedoniensis*, *O*. *turkae*, *N*. *nakayamai and A*. *meteai*. Comparative data: \[[@pone.0177127.ref041], [@pone.0177127.ref047]--[@pone.0177127.ref050], [@pone.0177127.ref055]\]. **b**, P4 crown dimensions of cf. *Graecopithecus* sp., *O*. *macedoniensis* and *O*. *turkae*. Comparative data: \[[@pone.0177127.ref039], [@pone.0177127.ref041], [@pone.0177127.ref048], [@pone.0177127.ref056]\].](pone.0177127.g004){#pone.0177127.g004}
![Radial enamel thickness in m2 of extant and extinct hominoids including *G*. *freybergi*.\
The lingual radial enamel thickness (l) in *G*. *freybergi* is measured on μCT slices at the lingual side of the metaconid, following \[[@pone.0177127.ref058]\]. Comparative data: \[[@pone.0177127.ref041], [@pone.0177127.ref058]--[@pone.0177127.ref060]\]. Horizontal line = mean; vertical line = range.](pone.0177127.g005){#pone.0177127.g005}
The maximal root lengths (longest root of a tooth, measured on 3D) of the molars are (left/right) m1 \>13.5/ = 14.5 mm; m2 \>16.9/ = 17.6 mm; m3 = 15.6/16.9 mm. The left canine root (\>16.1 mm) is partially preserved, but its upper mesial part is missing. However, it is possible to estimate its maximal length to the cervical plane (c ≈25.5 mm; [S3 Fig](#pone.0177127.s003){ref-type="supplementary-material"}).
RIM 438/387, the left P4 from Azmaka \[[@pone.0177127.ref039]\], has an intensively worn crown and three well-preserved roots ([Fig 1B](#pone.0177127.g001){ref-type="fig"}). The crown is mesio-distally narrow with a rounded rectangular occlusal outline (MD = 8.2 mm; BL = 12.3 mm). The enamel is thick with a buccal radial thickness of k = 1.55 mm. The occlusal wear facet is mesio-labially inclined and exposes large parts of the lingual dentine (wear stage 4; after \[[@pone.0177127.ref067]\]), but only the tip of the buccal dentine horn (wear stage 2). The distal crown surface shows a distinct interstitial wear facet. The P4 has a maximal root length of 12.0 mm; its roots are mesio-distally compressed. The buccal roots are close to each other and are fused in the upper 3 mm. Each radical features a separate pulp canal ([S1A Fig](#pone.0177127.s001){ref-type="supplementary-material"}). The pulp chamber is tall with a distinct buccal pulp horn.
Comparison and taxonomic validity {#sec009}
=================================
*G*. *freybergi* is only known from one mandible and possibly the tooth from Azmaka ([Fig 1A and 1B](#pone.0177127.g001){ref-type="fig"}). This compares with a relatively large number of *Ouranopithecus* specimens. *Ouranopithecus* has been synonymised with *Graecopithecus* by some \[[@pone.0177127.ref045]\]. Others emphasize the dentognathic differences between both taxa, but regard the Pyrgos specimen as largely uninformative due to its poor surface preservation and vague dating \[[@pone.0177127.ref044]\]. The new data provided here support previous conclusions that *Ouranopithecus* and *Graecopithecus* differ in a significant number of characters, more than adequate to recognize two different taxa, probably at the generic level \[[@pone.0177127.ref041]\]. Besides shared characters between *G*. *freybergi* and *O*. *macedoniensis* (thick enamel \[[@pone.0177127.ref044], [@pone.0177127.ref068], [@pone.0177127.ref069]\], m2 crown dimension, symphyseal shape; Figs [3](#pone.0177127.g003){ref-type="fig"} and [4](#pone.0177127.g004){ref-type="fig"}), both taxa differ in the dental arch, which is shorter and narrower in *G*. *freybergi* ([Fig 3A](#pone.0177127.g003){ref-type="fig"}). The width (BL) and length (MD) of the m2 crown are within the range of female *O*. *macedoniensis* ([Fig 4](#pone.0177127.g004){ref-type="fig"} and [S3 Table](#pone.0177127.s006){ref-type="supplementary-material"}), but the tooth is broad relative to the mandibular robusticity. The BL width of m2 approximates the breadth of the mandibular corpus at this position. Hence, the mandible of *G*. *freybergi* is very gracile compared to *O*. *macedoniensis* and other Miocene and Pliocene hominids ([Fig 2](#pone.0177127.g002){ref-type="fig"} and [S1 Table](#pone.0177127.s004){ref-type="supplementary-material"}), as already suggested by von Koenigswald \[[@pone.0177127.ref038]\] and Martin & Andrews \[[@pone.0177127.ref045]\]. Generally, the mandibular corpus breadth in hominids shows only minor sex differences, but is of taxonomic significance \[[@pone.0177127.ref070]--[@pone.0177127.ref072]\]. The breadths of female and male *O*. *macedoniensis* mandibles are closer to one another than either is to *G*. *freybergi* ([Fig 2B](#pone.0177127.g002){ref-type="fig"}). Thus, the considerably lower breadth in *G*. *freybergi* strongly suggests a taxonomic difference.
In contrast, the mandibular robusticity is significant for sex discrimination in hominids \[[@pone.0177127.ref035], [@pone.0177127.ref070], [@pone.0177127.ref071]\]. Male *O*. *macedoniensis* are less robust (taller relative to breadth) than females. The mandibular height of *G*. *freybergi* overlaps with the height of female *O*. *macedoniensis*, but its robusticity is in the lower range of the gracile males ([Fig 2A](#pone.0177127.g002){ref-type="fig"}). Assuming a similar pattern of sexual dimorphism with robust mandibles in females and gracile mandibles in males, the very gracile mandible of *G*. *freybergi* relative to its m2 size and compared to *O*. *macedoniensis* and other Miocene and Pliocene hominids ([S1 Table](#pone.0177127.s004){ref-type="supplementary-material"}), suggests that the *Graecopithecus* type mandible may belong to a male individual.
*G*. *freybergi* and *O*. *macedoniensis* differ in the number of their dental roots and/or pulp canals ([Table 1](#pone.0177127.t001){ref-type="table"}) showing a reduced configuration in *G*. *freybergi*. Further, the buccal fusion of the p4 roots differs from the separated roots in *O*. *macedoniensis* and other Late Miocene hominids (e.g. *O*. *turkae*; see figure 2 in \[[@pone.0177127.ref041]\]), but approximates the root form recently described in australopithecine specimens from Woranso-Mille, in *Au*. *africanus* and in *P*. *robustus* \[[@pone.0177127.ref025], [@pone.0177127.ref036]\]. Much variability is known for the root number and morphology within and among australopithecine species, from a Tomes' root to a three-rooted morphology (e.g. \[[@pone.0177127.ref026], [@pone.0177127.ref036]\]). However, within the fossil record the p4 root fusion is a feature that appears exclusively in hominins. 12% of *P*. *robustus* (n = 2) and \~17% of *Au*. *africanus* (n = 3) have either a fused p4 root or a single root \[[@pone.0177127.ref036]\]. There is no example of any root fusion (partial or complete) in the p4 of non-hominin fossil apes, and there are only very rare occurrences in *Pan*. In the large tooth samples of extant *Pan* observed in several studies, the hominin condition is present in less than 2--5% \[[@pone.0177127.ref063], [@pone.0177127.ref073], [@pone.0177127.ref074]\]. Further, the root configuration in p4 is less variable than in other lower and upper premolars of *Pan* \[[@pone.0177127.ref063]\]. The inter-genus variability among extant great apes is low, but large between great apes and humans.
10.1371/journal.pone.0177127.t001
###### Root and pulp canal configuration in c-m3 of *G*. *freybergi* (holotype, this study) and *O*. *macedoniensis* \[[@pone.0177127.ref032]\].
{#pone.0177127.t001g}
  Tooth    *G*. *freybergi*                *O*. *macedoniensis*
  -------- ------------------------------- ------------------------------------------
  **c**    1~1~                            \- (n = 0)
  **p3**   1~1~M+1~2~D                     1~1~M+2~2~D (n = 4)
  **p4**   1~1~M+1~2~D (partially fused)   1~2~M+2~2~D (n = 2); 2~2~M+2~2~D (n = 2)
  **m1**   2~2~M+1~1~-~2~D                 2~2~M+1~2~D (n = 4)
  **m2**   1-2~2~M+1~1~D                   1~2~M+1~2~D (n = 2); 2~2~M+1~2~D (n = 3)
  **m3**   1~1~M+1~1~D                     1~2~M+1~1~D (n = 4); 2~2~M+1~1~D (n = 1)
The premolars in *G*. *freybergi* have two roots and three pulp canals. The molars are three- or two-rooted and have between two and four pulp canals. M = mesial; D = distal; large numeral = root number; subscript = pulp canal number; n = sample size for *O*. *macedoniensis*; sample size for *G*. *freybergi* is always n = 1. The formula scheme and detailed root and pulp morphology are given in Material & Methods and [S1 Fig](#pone.0177127.s001){ref-type="supplementary-material"}.
Similar to *O*. *macedoniensis*, the root lengths are rather short compared to extant great apes \[[@pone.0177127.ref032]\]. In *G*. *freybergi*, this particularly concerns the canine and m1. The absolute canine root length ([Fig 6](#pone.0177127.g006){ref-type="fig"} and [S6 Table](#pone.0177127.s009){ref-type="supplementary-material"}) is below *S*. *tchadensis* and in the range of *Au*. *anamensis*, *Ar*. *ramidus* and female *P*. *troglodytes*. Given that *G*. *freybergi* may be a male individual, the short canine root may indicate canine reduction. However, this observation needs further confirmation by more canine root length data. The m1 root length is in the range of *P*. *troglodytes* and *H*. *sapiens*, but considerably below *Gorilla* and *S*. *tchadensis*. While in extant great apes and *S*. *tchadensis* the root length of m1 is similar to m2, *G*. *freybergi* shows an m1 root that is considerably shorter than those of m2 and m3.
![Absolute root lengths in the lower dentition (c-m3) of *G*. *freybergi* and comparative hominids.\
Root lengths of *G*. *freybergi* are maximal lengths derived from μCT-based 3D reconstructions, with estimated corrections for partially preserved roots ([S3 Fig](#pone.0177127.s003){ref-type="supplementary-material"}). Comparative literature data: extant great apes and humans \[[@pone.0177127.ref064]\]; *Au*. *afarensis* and *Au*. *anamensis* \[[@pone.0177127.ref065]\]; *Ar*. *ramidus* \[[@pone.0177127.ref031]\]; *S*. *tchadensis* \[[@pone.0177127.ref028]\]. Detailed data in [S6 Table](#pone.0177127.s009){ref-type="supplementary-material"}.](pone.0177127.g006){#pone.0177127.g006}
In *G*. *freybergi*, the radial enamel thickness of the m2 is considerably greater than in extant great apes and *Griphopithecus alpani* ([Fig 5](#pone.0177127.g005){ref-type="fig"} and [S5 Table](#pone.0177127.s008){ref-type="supplementary-material"}). With l = 1.40 mm it is close to *Griphopithecus darwini* (1.23 mm) and within the mid-range of more thickly enamelled hominins (e.g. *Homo sapiens* l = 1.41 mm). *Ouranopithecus turkae* shows a considerably higher value of l = 2.08 mm. *O*. *macedoniensis* also has very thick molar enamel \[[@pone.0177127.ref068], [@pone.0177127.ref069]\]. The published values for *O*. *macedoniensis* are not directly comparable to our measurements. However, its relative and absolute molar enamel thickness is reported to exceed that of extant great apes and other Miocene hominids \[[@pone.0177127.ref069]\].
The P4 from Azmaka, Bulgaria, is nearly contemporaneous with *G*. *freybergi* from Pyrgos (\~65 kyr older) \[[@pone.0177127.ref040]\]. Previously, the P4 had been referred to cf. *Ouranopithecus* sp. or aff. *G*. *freybergi* \[[@pone.0177127.ref039]\]. This study shows that some morphological aspects are indeed shared with *G*. *freybergi*. The P4 is thickly enamelled, showing virtually the same radial enamel thickness (k = 1.55 mm) as the p4 from Pyrgos (l = 1.50 mm). While the size of the Azmaka P4-crown (BL = 12.3 mm; MD = 8.2 mm; [Fig 4B](#pone.0177127.g004){ref-type="fig"}) is similar to female *O*. *macedoniensis* (BL = 12.5--13.3 mm; MD = 7.25--9.0 mm), its roots are less robust and more parallel, as in the roots of *G*. *freybergi*. The P4 roots of the female and the larger-sized roots of male *O*. *macedoniensis* are more separated and diverge towards the apex ([Fig 7](#pone.0177127.g007){ref-type="fig"}). Hence, both individuals from Azmaka and Pyrgos show the same evolutionary trend in upper and lower teeth, respectively. Accordingly, we assign the Azmaka specimen to cf. *Graecopithecus* sp.
![P4 root morphology of cf. *Graecopithecus* sp. from Azmaka compared to *O*. *macedoniensis*.](pone.0177127.g007){#pone.0177127.g007}
Differential diagnosis {#sec010}
======================
*G*. *freybergi* differs from extant great apes (*Pan*, *Gorilla*, *Pongo*) in its thickly-enamelled teeth ([Fig 5](#pone.0177127.g005){ref-type="fig"}). It differs from the similarly sized *P*. *troglodytes* in its absolutely longer dental roots of m2 and m3, but shows comparable c to m1 root lengths ([Fig 6](#pone.0177127.g006){ref-type="fig"}). *G*. *freybergi* differs from most hominids (e.g. *Sivapithecus*, *Ouranopithecus*, australopiths, early *Homo*) in its gracile mandibular corpus ([Fig 2](#pone.0177127.g002){ref-type="fig"}). Its corpus height is within the lower range of female *O*. *macedoniensis*, but its breadth is lower. It can be further distinguished from *O*. *macedoniensis* by its narrow dental arch ([Fig 3](#pone.0177127.g003){ref-type="fig"}). *G*. *freybergi* differs from *O*. *macedoniensis* in its root configuration, having two-rooted lower premolars including a partially fused p4-root and a reduced number of pulp canals (note the considerations on intra-/interspecies variation below). It differs from *Ouranopithecus turkae* in having absolutely and relatively thinner enamel and a fused p4-root. The m2 crown size (MD = 14.2 mm; BL = 13.2 mm) is intermediate between female and male *O*. *turkae*.
Emended diagnosis {#sec011}
=================
*G*. *freybergi* is a hominid in the size range of female chimpanzees, based on dentognathic dimensions. The mandibular dental arch is anteriorly narrow (lingual distance between p3s ≈ 15 mm) and diverges slightly posteriorly (lingual distance between m3s ≈ 26 mm). The symphysis shows weak upper and lower transverse tori and a sublingual plane at about 37° relative to the alveolar plane. The mandibular corpus is narrow and deep, which results in a low robusticity index (RI = 0.53 at m2). The posterior dentition is megadont relative to corpus size, with a broad m2 that matches the breadth of the mandibular corpus in this position. Tooth size is estimated to increase from m1 to m3, based mainly on the cervical root area. The enamel is thick ([Fig 5](#pone.0177127.g005){ref-type="fig"} and [S5 Table](#pone.0177127.s008){ref-type="supplementary-material"}). The dental roots of the tooth row (c to m3) are short (c ≈25.5 mm; p3 ≈16.5 mm; p4 ≈15.9 mm; m1 ≈13.6/ = 14.5 mm; m2 ≈18.0/ = 17.6 mm; m3 = 15.6/16.9 mm; maximum length of left and/or right dentition, derived from μCT-based 3D reconstructions, see [S3 Fig](#pone.0177127.s003){ref-type="supplementary-material"} and [S6 Table](#pone.0177127.s009){ref-type="supplementary-material"}). The premolars and m3 are two-rooted. The p4 shows a fusion of the mesial and distal root in the upper buccal part. The m1 is three-rooted; the m2 shows three (left) or two (right) roots. Both m1 and m2 show bifurcated apices in their mesial roots. The molars have low pulp chambers with blunt pulp horns. The number of pulp canals in the postcanine teeth is low ([Table 1](#pone.0177127.t001){ref-type="table"}).
Phylogenetic position of *Graecopithecus* {#sec012}
=========================================
The investigation of the internal structures of the Pyrgos mandible reveals characters of the roots of the p4 that are derived compared to other Miocene apes and extant great apes.
In contrast to the Ponginae, *Graecopithecus* shares derived characters with African apes (ventrally shallow roots, buccolingually broad molar roots; \[[@pone.0177127.ref032], [@pone.0177127.ref075]\]). Therefore, we consider four principal alternative interpretations of its phylogenetic position: *Graecopithecus* is a stem-hominine (last common ancestor of African apes and *Homo*), a gorillin, a panin, or a hominin.
Basal hominids like *Proconsul* have two or three clearly diverging roots and four pulp canals (1-2~2~M+1~2~D) in the p4 \[[@pone.0177127.ref028]\]. The prevailing root configuration in extant great apes is two roots and two to three pulp canals \[[@pone.0177127.ref073]\], which is the condition seen in *G*. *freybergi* (1~1~M+1~2~D). However, the mesial and the distal roots of *G*. *freybergi* are partially fused at about 47% of maximal root length ([Fig 8](#pone.0177127.g008){ref-type="fig"}), a character only very rarely observed in extant great apes (2--4%; \[[@pone.0177127.ref073]\]). This fusion may represent an early stage of a Tomes' root, a character that is considered diagnostic for the hominin clade \[[@pone.0177127.ref026], [@pone.0177127.ref027]\]. Thus far, a buccal root fusion similar to that of *G*. *freybergi* has been reported from australopithecines \[[@pone.0177127.ref025], [@pone.0177127.ref036]\]. The configuration of the p4 root and the pulp canal in *G*. *freybergi* is intermediate between the narrow p4 roots in *S*. *tchadensis* \[[@pone.0177127.ref028]\] ([Fig 8](#pone.0177127.g008){ref-type="fig"}) and the Tomes' root in *Ar*. *kadabba* \[[@pone.0177127.ref076]\]. The derived state of *G*. *freybergi* with respect to *O*. *macedoniensis* is further supported by root and pulp canal reductions in other tooth positions ([Table 1](#pone.0177127.t001){ref-type="table"}). The hominin record shows different levels of p4 root fusion, although separated roots are common as well. However, p4 root fusion never occurs in Miocene non-hominins, suggesting that this feature in *Graecopithecus* is a hominin synapomorphy. Accordingly, the most parsimonious interpretation of the phylogenetic position of *Graecopithecus* is that it is a hominin, although we acknowledge that the known sample of fossil hominin root configurations is too small for definitive conclusions.
![Root morphology of the lower fourth premolar (p4) in *Graecopithecus* and *Sahelanthropus*.\
**a**, Cervical μCT-section through the right mandibles of *S*. *tchadensis* (left; \[[@pone.0177127.ref029]\]) and *G*. *freybergi* (right) with drawings of their p4 cross-sections at the level just below the cervix (for *G*. *freybergi* 2.5 mm below p4 cervix). **b**, Root configuration in p4 of *G*. *freybergi*. The apical parts of the right p4 roots are missing, but an approximate reconstruction was done by aligning the mirrored roots of the left p4 (in transparent blue). The left p4 is broken just below the level of bifurcation. LB = height of lingual bifurcation, BB = height of buccal bifurcation (both preserved on the right p4). Scale bar, 10 mm.](pone.0177127.g008){#pone.0177127.g008}
A feature supporting this interpretation is the observation of canine root reduction. With an estimated canine root length of \~25.5 mm ([Fig 6](#pone.0177127.g006){ref-type="fig"}), the probably male specimen of *G*. *freybergi* is in the range of female *P*. *troglodytes* (24.1 ± 2.7 mm \[[@pone.0177127.ref064]\]) and below female *G*. *gorilla* (29.4 ± 2.2 mm). It is in the range of *Au*. *anamensis* (20.3--31.8 mm \[[@pone.0177127.ref065]\]) and *Ar*. *ramidus* (25.0--31.4 mm \[[@pone.0177127.ref031]\]). Further, it is shorter than the lower canine root of *S*. *tchadensis* (27.97 mm \[[@pone.0177127.ref028]\]) and longer than in *Au*. *afarensis* (21.0--24.3 mm \[[@pone.0177127.ref065]\]) and *H*. *sapiens* (16.5 ± 2.1 mm \[[@pone.0177127.ref064]\]).
In earlier studies, a relationship of European hominids to the African hominins has been proposed \[[@pone.0177127.ref077], [@pone.0177127.ref078]\]. Taken at face value, the derived characters of *Graecopithecus* (p4 root morphology and possibly canine root length) may indicate the presence of a hominin in the Balkans at 7.2 Ma. In many publications, de Bonis, Koufos and colleagues have proposed that *Ouranopithecus*, from northern Greece and more than 1.5 million years older, is a hominin \[[@pone.0177127.ref047], [@pone.0177127.ref079], [@pone.0177127.ref080]\]. Other researchers have interpreted the similarities between *Ouranopithecus* and australopithecines as homoplasies \[[@pone.0177127.ref081]\]. It is possible that the similarities between *Graecopithecus* and *Ardipithecus* and some australopithecines are also homoplasies. However, as stated before, premolar root number is less functionally constrained than megadonty and enamel thickness and thus potentially more useful for phylogeny reconstruction \[[@pone.0177127.ref019], [@pone.0177127.ref020]\]. *Graecopithecus* has reduced root morphology yet heavy mastication and megadontia, suggesting a de-coupling of root and molar function. In contrast, larger roots, large teeth and thicker enamel together contribute to a functional complex shared with australopithecines, which has been invoked as the mechanism accounting for the homoplastic appearance of hard-object feeding adaptations in *Ouranopithecus* and australopithecines \[[@pone.0177127.ref081]\].
Therefore, we submit that the dental root attributes of *Graecopithecus* suggest hominin affinities, such that its hominin status cannot be excluded. If this status is confirmed by additional fossil evidence, *Graecopithecus* would be the oldest known hominin and the oldest known crown hominine, as the evidence for the gorillin status of *Chororapithecus* is much weaker than that for the hominin status of *Graecopithecus* \[[@pone.0177127.ref008]\]. More fossils are needed, but at this point it seems that the Eastern Mediterranean must be considered just as likely a place of hominine diversification and hominin origins as tropical Africa.
Supporting information {#sec013}
======================
###### 3D-reconstructions of the P4 from Azmaka (RIM 438/387) and the preserved lower teeth of *G*. *freybergi* from Pyrgos virtually isolated from the type mandible.
The P4 is shown in distal and mesial view (top row), and apical and buccal view (bottom row) with associated pulp canals. The lower dentition is shown in distal and mesial view (top row), and apical and lingual view (bottom row) with associated pulp canals. Zoom in for more details. The dashed line indicates the vertical position of the cervical plane constructed as described in Material & Methods and [S3 Fig](#pone.0177127.s003){ref-type="supplementary-material"}. **a**, Left P4 of cf. *Graecopithecus* sp. and premolars of the right hemimandible of *G*. *freybergi*. **b,** Canine and premolars of the left hemimandible of *G*. *freybergi*. **c,** Molars of the right hemimandible of *G*. *freybergi*. **d,** Molars of the left hemimandible of *G*. *freybergi*.
(TIF)
######
Click here for additional data file.
###### Micro-CT transverse sections through the left and right mandibular corpus of *G*. *freybergi*.
Sections at the level of p4, m1, m2, and m3 (top down), perpendicularly to the alveolar plane. Measurements of mandibular height (H) and breadth (B) in red. The dashed lines indicate surfaces where the cortical bone is crushed or parts of the corpus are missing. Measurements were taken on the better-preserved left corpus. (=) Direct breadth measurements, taken at the positions of m2/m3 and m3. (≥) Minimal estimations after reconstructing minor damages as shown by the dashed line. Minimal estimations are given for the breadth at p3/p4 to m2 and the height at m2, m2/m3 and m3 ([S1 Table](#pone.0177127.s004){ref-type="supplementary-material"}).
(TIF)
######
Click here for additional data file.
###### Virtual reconstruction of the Pyrgos mandible with root length measurements and estimated corrections.
**a**, Right hemimandible with cervical planes (CP) and root length measurements at the longest radicals of right m1-m3. The CPs are constructed through the cervices of the right m3, m2 and m2-p4. The CP of the right m2-p4 is extended mesially to the position of the missing canine. **b**, In order to define the CPs for the left hemimandible the left tooth row is mirrored (in blue) and aligned to the right tooth row. Thereby, the right CPs are transferred to the left hemimandible. **c**, Mirrored left hemimandible with the root length measurements and estimations at m3-c from the root apices to the constructed CPs.
(TIF)
######
Click here for additional data file.
###### Mandibular corpus dimensions of *G*. *freybergi* and other Miocene and Pliocene hominids.
RI = robusticity index. Values in parentheses = corrections for breakage. Data: *G*. *freybergi*: \*this study; *Ouranopithecus macedoniensis* (RPl-54: \*this study and \[[@pone.0177127.ref047]\]; RPl-56, 75 and NKT-21: \[[@pone.0177127.ref047]\]; RPl-89, 90, 80; 94: \[[@pone.0177127.ref048]\]); *Ankarapithecus meteai*: (AS95-500: \[[@pone.0177127.ref049]\]); *Sivapithecus sivalensis* and *S*. *punjabicus*: (several samples: \[[@pone.0177127.ref045]\]); *Nakalipithecus nakayamai*: (KNM-NA46400:\[[@pone.0177127.ref050]\]); *Australopithecus anamensis* (KNM-KP 29281, 29287, 31713: \[[@pone.0177127.ref051]\]); *Australopithecus deyiremeda* (BR-VP-3/14; WYT-VP-2/10: \[[@pone.0177127.ref052]\]); *Australopithecus afarensis*, *Au*. *africanus*, early *Homo*, *Paranthropus robustus and P*. *boisei* (several samples: \[[@pone.0177127.ref052]\]).
(XLSX)
######
Click here for additional data file.
###### Arcade width in the types of *G*. *freybergi* and *O*. *macedoniensis*.
The arcade width at each tooth position was measured at the lingual sides of the dental cervices (lingual distances between the left and right tooth row). The measuring position along the tooth row is given as distance from mid-m3 (average of both sides). All values in mm.
(XLSX)
######
Click here for additional data file.
###### Dental crown dimensions in p4 and m2 of *G*. *freybergi* compared to fossil hominids and chimpanzees.
Parentheses indicate estimations. In specimens that preserve the left and right dentition, the mean value of both teeth is given. Data: *G*. *freybergi*: this study; *O*. *macedoniensis*: \[[@pone.0177127.ref047], [@pone.0177127.ref048]\]; *O*. *turkae*: \[[@pone.0177127.ref041]\]; *N*. *nakayamai*: \[[@pone.0177127.ref050]\]; *A*. *meteai*: \[[@pone.0177127.ref049], [@pone.0177127.ref055]\]; *S*. *tchadensis*: \[[@pone.0177127.ref003]\]; *O*. *tugenensis*: \[[@pone.0177127.ref002]\]; *Ar*. *kadabba*: \[[@pone.0177127.ref009]\]; *Ar*. *ramidus* and *A*. *afarensis*: \[[@pone.0177127.ref001]\]; *A*. *anamensis*:\[[@pone.0177127.ref033]\]; *P*. *troglodytes*: \[[@pone.0177127.ref057]\].
(XLSX)
######
Click here for additional data file.
###### Dental crown dimensions in P4 of cf. *Graecopithecus* sp., *O*. *macedoniensis* and *O*. *turkae*.
In specimens that preserve the left and right dentition, the mean value of both teeth is given. Data: cf. *Graecopithecus* sp.: \[[@pone.0177127.ref039]\]; *O*. *macedoniensis*: \[[@pone.0177127.ref039], [@pone.0177127.ref048], [@pone.0177127.ref056]\]; *O*. *turkae*: \[[@pone.0177127.ref041]\].
(XLSX)
######
Click here for additional data file.
###### Radial enamel thickness of fossil and extant hominids.
Data: cf. *Graecopithecus* sp. and *G*. *freybergi*: this study; *O*. *turkae*: \[[@pone.0177127.ref041]\]; *Griphopithecus*, *H*. *sapiens* (Ho 08 and Ho23), *P*. *troglodytes*, *G*. *gorilla* and *P*. *pygmaeus*: \[[@pone.0177127.ref059]\]; *H*. *sapiens* (n = 10): \[[@pone.0177127.ref060]\]; *H*. *sapiens* (n = 34): \[[@pone.0177127.ref058]\].
(XLSX)
######
Click here for additional data file.
###### Absolute root lengths in the lower dentition (c-m3) of G. *freybergi* and comparative species.
The preserved root length of fragmentary roots is indicated as minimum length (\>). Estimations of their maximum length are given for *G*. *freybergi* (in brackets); see also [S3 Fig](#pone.0177127.s003){ref-type="supplementary-material"}. The measured root positions are indicated as follows: single root (1R), mesial root (m), distal root (d). Data: *G*. *freybergi*: this study; *S*. *tchadensis*: \[[@pone.0177127.ref028]\]; *Ar*. *ramidus*: \[[@pone.0177127.ref031]\]; *Au*. *anamensis* and *Au*. *afarensis* \[[@pone.0177127.ref065]\]; *P*. *pygmaeus*, *G*. *gorilla*, *P*. *troglodytes* and *H*. *sapiens* \[[@pone.0177127.ref064]\]. For the comparability between studies see [Methods](#sec002){ref-type="sec"}.
(XLSX)
######
Click here for additional data file.
###### Further comparison.
(DOCX)
######
Click here for additional data file.
For access to fossil collections, technical and scientific collaboration we thank Wieland Binczik, Wolfgang Gerber, Katerina Harvati, George D. Koufos, Veronika Kühnert, Siegbert Schüffler, Henrik Stöhr, Harald Stollhofen, and Adrian Tröscher. We would like to thank the Academic Editor Roberto Macchiarelli for his careful handling of this publication. We also thank Brigitte Senut and 10 anonymous reviewers for their comments. We acknowledge funding from the German Science Foundation DFG (grant Bo 1550/19-1 to MB).
[^1]: **Competing Interests:**The authors have declared that no competing interests exist.
[^2]: **Conceptualization:** MB NS.**Data curation:** JF MB NS.**Formal analysis:** JF.**Funding acquisition:** MB.**Investigation:** JF.**Methodology:** JF.**Project administration:** MB.**Supervision:** DB MB.**Validation:** JF NS DB MB.**Visualization:** JF.**Writing -- original draft:** JF.**Writing -- review & editing:** DB NS MB JF.
The fragrance oils/perfumed body oils listed on this website are inspired by the designer perfumes, which are the trademarks of their respective owners. We do not have any association with them, nor are we claiming that the perfume oils are designed by them.
It was like any normal day in my feminist classroom.
(Translation: we were “problematizing” and “nuancing” concepts that had already been problematized and nuanced to the nth degree.)
On this particular day, we were discussing the importance of language and the power of words. For the first time in what seemed like forever, I found myself feeling relaxed.
The importance of language, I thought to myself. Now this is a topic that is genuinely fascinating.
Admittedly, I should have known better than to let my guard down. While feminism is by no means inherently hostile, the path that radical feminists have been on over the last couple of decades has caused even simple, uncomplicated subjects like the importance of language to become war zones.
It didn’t take long for my professor to snap me out of my calm reverie. As she was discussing the various aspects of language that she wanted to highlight as part of the class, my professor decided to provide an example to illustrate her point. She explained that the power of words could be clearly seen in the abortion debate.
SUPPORT LIFENEWS! If you like this pro-life article, please help LifeNews.com with a donation!
I sat up, instantly on high alert. Forgive my cynicism, but I have yet to attend a feminist lecture (and believe me, I’ve attended many, many feminist lectures) where the topic of abortion was discussed from any perspective other than the pro-abortion worldview.
My professor continued:
“This is why I am always careful to refer to those who oppose abortion as ‘anti-choice’, since they stand in opposition to a woman’s right to choose.”
She went on to explain that she also intentionally uses the word “fetus” when referring to the unborn child, no matter what the stage or the scenario. My professor stated that, even at baby showers, she congratulates the mother-to-be on the health of her fetus and asks questions pertaining to the fetus, not the baby.
Setting aside the massively insensitive and dehumanizing nature of this obsession with the word “fetus” (which, as a side note, references a stage of development, not a state of being – this is why there are dog fetuses, cat fetuses, and yes, human fetuses), there is something incredibly problematic about labeling the majority of the population “anti-choice”.
(I say “majority of the population” because most individuals in society do not agree with the abortion-on-demand rhetoric, which, in the opinion of my professor, means that they support restricting women’s choice, hence the “anti-choice” label.)
After debating with myself for a number of minutes as to whether I should say something, I raised my hand and made eye contact with the professor. She nodded, and I tried not to let my voice waver as I explained in kind yet firm tones that those who opposed abortion were actually very supportive of choice. They are not “anti-choice”, I explained calmly, but rather they oppose a specific choice that harms the life of another human being. Trying to reason with the class, I explained that we would never say that everyone who was against murder or rape was “anti-choice” simply because they opposed the so-called right of a murderer or rapist to do what he/she wants with his/her body. In the same way, I argued, those who oppose abortion are not “anti-choice”, since they support most choices, so long as the choice doesn’t interfere with the rights of another human being.
I was not surprised when other students began throwing their hands forcefully into the air halfway through my explanation. I was also not surprised when every single student who spoke after me vehemently argued that every person who opposed abortion was discriminatory towards women and sought to enslave women’s bodies by restricting their reproductive choices.
I was, however, shocked at the open hostility that I received from one of the students. I knew that she stood firmly in the pro-abortion camp: she had made a number of posts on our class’ Facebook group, one of which boldly declared that anyone who had the audacity to call themselves a “pro-life feminist” was not truly a feminist. As she had written and was now repeating in class:
“Saying you’re a pro-life feminist is an oxymoron.”
The rest of the class went downhill from there. A number of things were said, most of which were targeted not-so-discreetly at me, and the whole situation culminated in a two-week-long series of events that ended with me being eliminated from the class’ Facebook page because my comments made other students “uncomfortable”. After I was eliminated from the Facebook group, I was then asked to apologize to the classroom, and, upon issuing an apology for any feelings of offense or judgment that might have been taken away from my comments, my apology was criticized, dissected, and subsequently deemed insensitive and insincere.
Let’s focus in for a moment on two specific issues with this situation that unfolded:
Firstly, there is the disturbing fact that I was told, directly and indirectly, that it was my responsibility to censor myself so as to avoid making other people uncomfortable. If this was not possible, as one student so tolerantly suggested, I should remain silent and keep my “offensive, discriminatory beliefs” to myself.
Allow me to make myself perfectly clear: it is not my responsibility to make other people comfortable. If my opinions make other people uncomfortable, while I undoubtedly should try to be sensitive to their feelings, it is their choice to stay and listen to what I have to say. It is one thing if someone is saying something sexual or otherwise inappropriate. It is something completely different if someone is calmly disagreeing with a point that was made previously.
I cannot help but think that radical feminists have truly become so fragile that the very expression of dissention threatens their existence. This is why they use such ridiculous, intolerant methods of silencing the opinions of those who disagree; for example: eliminating me from a Facebook group.
This leads me to the second specific issue with this situation. I admit that I find it profoundly disturbing that the radical feminists in my classroom who were unable to handle the existence of a differing opinion coped with their worldview being challenged by eliminating me from a social community. What does it say about the state of our world that we consider it acceptable to literally remove someone with surgical precision from an entire community, online or otherwise? How fragile have we become in our beliefs that we cannot tolerate the existence of an alternate perspective?
Even more concerning, what implications does this have in the real world? Being eliminated from a Facebook group is not a big thing. I did not spend the next few days crying incessantly into the phone, begging my parents to shelter me from the existence of differing opinions. However, the facts remain: I was eliminated from an online community specifically because of my pro-life stance. And that, ladies and gentlemen, is discrimination.
The problem is, where is the line drawn? And who draws the line? What happens when disagreements become more heated? What happens when it is no longer an online community? What happens when a dominant group is having their worldview challenged? Will we accept a response that involves widespread murder or genocide in order to eliminate the differing perspective? If, as I desperately hope, we wouldn’t condone the physical elimination of an individual who stands in disagreement to popular opinion, why then do we condone the virtual elimination of an individual who stands in disagreement to popular opinion?
What frightens me most is that eliminating dissention is exactly what Hitler did during the Holocaust. He silenced the voices of anyone who dared speak out against him and the Nazi regime. It is a simple way to live, really: there is one worldview, and whoever disagrees, dies. The issue is that it flies in the face of everything we as a society hold near and dear: human rights, freedom of speech, tolerance, and the list goes on. And, amusingly enough, it is precisely these concepts that radical feminists claim to be fighting for.
Oh, the irony.
However, that student’s statement still remains:
“Saying you’re a pro-life feminist is an oxymoron.”
It is an interesting statement, to be sure. This argument is part of a much larger question, one that I will explore in the second part of this two-part series. Until then, I leave the question with you.
Is it possible to be a pro-life feminist?
LifeNews Note: Lia Mills is a student pro-life activist who is the founder and director of True Choice.
Psychopathy is a disorder of personality generally associated with selfishness, callousness, and lack of concern for others. It is therefore usually thought of as one of the most malevolent manifestations of a disturbed personality structure. In spite of this, recent research has examined whether there might be a positive face to psychopathy, or at the very least, to some of its component traits. Specifically, one paper asks “Are psychopaths and heroes twigs off the same branch?” The evidence for this is rather mixed, but there does seem to be a connection of sorts between at least some traits and behavior loosely associated with psychopathy and heroic actions that help others. Bold, fearless traits are associated with heroic behavior, but callous traits such as meanness and coldness are not. More puzzling is that people with a history of antisocial behavior are more likely to engage in heroic acts to help others.
Psychopathy can be thought of as a syndrome composed of a cluster of several different component traits that interact with each other to produce a disturbing whole. According to the triarchic model, psychopathy comprises a combination of three main traits: boldness, meanness, and disinhibition (Patrick, Fowles, & Krueger, 2009). Boldness involves the capacity to remain calm in threatening situations, and is associated with being socially self-assured and assertive. Disinhibition refers to problems with impulse control and a tendency to act without thinking about the consequences. Meanness involves aggressively seeking to have one’s own way, and is associated with callousness and a lack of remorse or empathy. Individuals can express each of these traits to varying degrees, and so there may be different subtypes of psychopathy emphasising particular combinations of these traits. For example, some people described as psychopathic might show extreme meanness but not be especially disinhibited, and vice versa.
Although psychopathy is generally considered maladaptive, there has been some speculation that there might be subtypes of psychopathy that allow a person to be successful in society. It has even been suggested that some psychopathic traits might have socially desirable consequences in some circumstances. For example, according to one theory, one of the developmental precursors of psychopathy is a fearless temperament. Children with a fearless temperament are difficult to socialise effectively because they do not respond well to punishment; hence they may have little concern about the negative consequences of disregarding society’s rules. However, people with a fearless temperament may also be very brave in the face of danger, and given the right circumstances, might be more ready than others to perform heroic acts involving personal risk for the benefit of others. Hence, some have speculated that “psychopaths and heroes are twigs from the same branch” (Smith, Lilienfeld, Coffey, & Dabbs, 2013). Fearlessness is thought to underlie both boldness and meanness, and it has been argued that boldness is a relatively pure form of fearlessness, whereas meanness may result from a failure of proper socialization in fearless children (Patrick, et al., 2009).
A 2013 study attempted to test whether psychopathic traits would be related to a person’s propensity to perform heroic acts, which were defined as behavior that involves some degree of risk to the actor (Smith, et al., 2013). In particular, they wanted to test the notion that a trait referred to as ‘fearless dominance’, which they argue might be considered a form of ‘successful psychopathy’ and which is closely related to boldness, would be more closely related to heroic behavior than other psychopathic traits related to disinhibition and meanness. In a series of three surveys,[1] the authors correlated a number of measures of psychopathic and antisocial traits with measures assessing the extent to which a person had performed actions involving risk (either physical or social) to help another person, and how often they had helped strangers (which the authors argued usually involves risk). The results were somewhat inconsistent, but overall they found that traits related to fearless dominance and boldness, such as social potency (being forceful and persuasive in dealing with other people) and fearlessness, had modest positive correlations with heroic actions. Disinhibition-related traits showed mixed results, with some traits such as ‘impulsive non-conformity’ showing modest positive correlations, and other traits such as ‘carefree nonplanfulness’ showing modest negative correlations with heroic actions. Traits related to meanness, such as ‘coldheartedness’, showed small to moderate negative correlations with heroism. Perhaps surprisingly, measures of antisocial behavior and delinquency generally showed moderate positive correlations with heroism measures, and some of these correlations were among the largest in all three surveys.
The authors concluded that their study provided some preliminary support for a connection between psychopathy-related traits, particularly those related to boldness, and heroism. These findings seem rather puzzling, especially the relationship between antisocial behavior and heroism. One possible explanation is that people with bold fearless traits are prone to involve themselves in potentially dangerous situations, which might involve antisocial behavior on some occasions, and altruistic behavior on others. However, other research (Miller & Lynam, 2012) has found that the trait of fearless dominance measured by Smith et al. is only weakly related to antisocial behavior. Hence, it does not seem likely that fearless dominance is the underlying shared factor explaining the correlations between antisocial behavior and heroism. Disinhibition traits are more strongly related to antisocial behavior, but in the Smith et al. study these had very inconsistent and somewhat weak correlations with heroism. Some disinhibition traits, such as a tendency to act impulsively in emergency situations, might be particularly relevant to heroism. However, other disinhibition traits, such as having an erratic lifestyle in which one does not plan for the future, may be decidedly unheroic. Note that Smith et al. found that ‘impulsive non-conformity’ had a positive correlation, while ‘carefree nonplanfulness’ had a negative correlation with heroism. However, even the correlations between impulsive non-conformity and heroism tended to be noticeably smaller than the correlations between antisocial behavior and heroism.
Exactly what sort of antisocial behavior is most correlated with heroic behavior is not specified by the Smith et al. study, and this might be important. Aggressive antisocial behavior in general can be either proactive (e.g. premeditated actions that harm others for personal gain) or reactive (e.g. retaliation in response to provocation). Prior research has found that proactive aggression is more strongly related to meanness (e.g. callous-unemotional traits) than is reactive aggression. People who engage in heroic behavior to help others might be more likely to have a history of reactive rather than proactive aggression, since they do not seem to be particularly mean.
Another possible issue is that measures used in the Smith et al. study assessed lifetime occurrences of both antisocial and heroic behavior. It is possible that people who perform heroic actions might go through a developmental phase involving some antisocial behavior which they later mature out of. Hence, they might be of a different type than people who persist in antisocial activities throughout much of their adult lives. The latter pattern of chronic antisocial behavior seems more characteristic of the prototypical psychopath who does not seem to learn from his or her mistakes. The reason I suggest this is because of a recent study which seems to suggest that people who had received an award for exceptional bravery, risking their own lives to save others, seemed to have achieved a more mature level of personal development compared to ordinary community members (Dunlop & Walker, 2013). In this study, participants were assessed on interpersonal traits and personal strivings. Additionally, they were interviewed about their life story and were asked to describe critical incidents occurring at particular phases of their lives. Their responses were then analysed in terms of the presence of key themes. The study found that, compared to a community control group, bravery award recipients were higher in interpersonal dominance, showed greater strivings for personal growth and development, and had a more sophisticated level of social awareness and understanding. Additionally, their life story interviews were characterised by more frequent themes of agency, redemption, and early advantage. Agency refers to a sense of personal effectiveness. Redemption themes involve life stories in which an initially bad event or circumstance leads to something demonstrably good or emotionally positive. Early advantage refers to quality of attachments, sensitivity to the needs of others, and the frequency of helpers relative to enemies.
What this personality profile suggests to me is that brave heroes in this study were interpersonally bold, felt effective in their lives, and probably felt emotionally secure during their upbringing. Additionally, they appear to have experienced instances of personal adversity which later led to positive changes in their lives. They seem to have a capacity to reflect on and learn from their life experiences, even adverse ones. Unfortunately, the study did not assess to what extent they had ever engaged in antisocial behavior. I am inclined to speculate that linkages between antisocial behavior and heroic actions might particularly be found in these types of mature individuals who are interpersonally bold and who have developed a positive life story characterised by themes of agency and redemption. Hence, they might have been involved in antisocial behavior at an early stage in their life, learned from their mistakes, and then moved on to more mature socially responsible forms of bravery. Future research studies could investigate how accurate these speculations are through more detailed assessments of the life history of people who have engaged in heroic behavior compared to less brave individuals.
In summary, there may well be a loose connection between heroes and psychopaths in that they may share some tendencies but not others. In order to be a hero, it probably helps to be fearless and perhaps even a little reckless and impulsive. Perhaps a history of getting into trouble contributes in some way to the development of heroism in the right people. However, unlike hard-core psychopaths, people who become heroes are not as mean, callous or cold. Additionally, it is possible that people who become heroes may have a more mature level of personality development that allows them to contribute positively to society, something that hard-core psychopaths appear to be lacking.
Note
[1] Their paper also includes an analysis of personality traits of American Presidents but to keep things simpler I will not consider that here.
Related articles
Emotional intelligence lacks relevance to understanding psychopathy
Challenging the "banality" of Evil and of Heroism Part 1 and 2. Critiques claims by situationists that heroism and evil are nothing special, just products of circumstance.
Image credit
Batman photo courtesy of Wikimedia Commons
References
Dunlop, W. L., & Walker, L. J. (2013). The personality profile of brave exemplars: A person-centered analysis. Journal of Research in Personality, 47(4), 380-384. doi: http://dx.doi.org/10.1016/j.jrp.2013.03.004
Miller, J. D., & Lynam, D. R. (2012). An examination of the Psychopathic Personality Inventory's nomological network: A meta-analytic review. Personality Disorders: Theory, Research, and Treatment, 3(3), 305-326. doi: 10.1037/a0024567
Patrick, C. J., Fowles, D. C., & Krueger, R. F. (2009). Triarchic conceptualization of psychopathy: Developmental origins of disinhibition, boldness, and meanness. Development and Psychopathology, 21(Special Issue 03), 913-938. doi: 10.1017/S0954579409000492
Smith, S. F., Lilienfeld, S. O., Coffey, K., & Dabbs, J. M. (2013). Are psychopaths and heroes twigs off the same branch? Evidence from college, community, and presidential samples. Journal of Research in Personality, 47(5), 634-646. doi: http://dx.doi.org/10.1016/j.jrp.2013.05.006
Introduction {#s1}
============
The societal consequences of influenza -- whether it is seasonal or pandemic -- include loss of production, exhaustion of health care resources, and excess mortality [@pone.0096740-World1]. When monitoring the recurring epidemics, valid incidence data in close-to-real-time can be of value. Traditionally, influenza surveillance is based on information from health care-based sources, including clinical and virological data [@pone.0096740-European1]. However, influenza is often mild, does not always require health care, and the proportion of ill people who see their general practitioner (GP) may be context dependent [@pone.0096740-BrooksPollock1]. Therefore, traditional influenza surveillance may not generate valid representations of an epidemic in the community. To supplement traditional influenza surveillance systems with community-based information, the Swedish Institute for Communicable Disease Control (SMI) has tested two different prospective ways of collecting data on influenza-like illness (ILI) from the general population. Since 2007, a population-based system (PBS) uses cohorts established through annual random sampling of the target population. The cohort members provide event-driven, self-initiated reports via automated technologies as soon as a respiratory tract infection occurs. The PBS has been evaluated previously [@pone.0096740-Bexelius1], [@pone.0096740-Merk1]. Since 2011, an Internet-based monitoring system (IMS) with self-recruited participants provides reports of participants' recent health status, which is collected upon a weekly reminder automatically dispatched via e-mail. The latter method was developed in the Netherlands in 2003 [@pone.0096740-Friesema1], [@pone.0096740-Marquet1]. SMI has compared representativeness and the obtained surveillance data in IMS and PBS during two seasons within the EPIWORK project, a European Commission Seventh Framework Programme consortium aiming to build the foundation for an infrastructure to generate epidemic forecasts [@pone.0096740-EPIWORK1]. We regarded PBS as the best available and most rigorously evaluated method for community-based surveillance and therefore used it as the reference standard for IMS.
The self-selection involved in the recruitment of IMS participants has raised concerns about the validity of obtained ILI incidence data and its ability to reflect epidemics in the community. Reports from IMS in other European countries suggest that the self-recruited sample misrepresents its target population, but that ILI patterns correlate well with health care-based ILI data [@pone.0096740-Friesema1], [@pone.0096740-Marquet1], [@pone.0096740-Tilston1]--[@pone.0096740-Vandendijck1]. However, comparison of IMS with corresponding data generated from the more representative PBS cohort may provide better insights about the validity of disease occurrence data and the need, if any, for calibration of ILI estimates to correct for potential systematic errors. In this study, we first assessed the acceptability of IMS among randomly selected individuals who were invited to participate. Second, we assessed the representativeness of self-recruited IMS participants and compared it to that of the invited participants. Third, we compared IMS and PBS data in terms of ILI occurrence across the 2011--2012 and 2012--2013 influenza seasons (henceforth referred to as season 1 and season 2, respectively). We also related data from both systems to concurrently collected data from regular influenza surveillance: the laboratory reports from routine diagnostics and the GP based sentinel surveillance [@pone.0096740-Swedish1].
Methods {#s2}
=======
Ethics Statement {#s2a}
----------------
The IMS and PBS were reviewed and approved by the Stockholm regional research ethics review board (IMS: 2011/387-31/4, 2012/1445-32/4 and PBS: 2007/952-31, 2007/1599-32, 2008/1227-32, 2009/752-31, 2010/237-31/5, 2012/1444-32/5).
The Internet-based Monitoring System (IMS) {#s2b}
------------------------------------------
IMS is a Swedish adaptation of the European-wide Influenzanet [@pone.0096740-Influenzanet1], [@pone.0096740-Paolotti1]. During season 1, we recruited participants using press releases and resulting media attention, and by interpersonal communication through social media channels (henceforth referred to as self-recruited participants). Interested presumptive participants were directed to the project website. In season 2, we re-contacted participants from the previous season via e-mail with an invitation to participate again and recruited new participants as described above.
In season 2 we also investigated the possibility of implementing a population-based variant of IMS. We drew a random sample of 2,511 persons of the Swedish population aged 3 months through 95 years from the population register and invited them by post to participate in IMS (henceforth referred to as invited participants). The distribution of socio-demographic indicators of the random sample is available in [Table S1](#pone.0096740.s001){ref-type="supplementary-material"}.
Both self-recruited and invited participants (henceforth referred to as all IMS participants) initiated their participation by providing an e-mail address in a password-protected user account and by completing a background questionnaire on the project website. A user account could include one or several participants, enabling parents to report on behalf of children \<16 years. Participants received weekly e-mails that prompted them to visit the website and record occurrence of 18 listed symptoms or absence of symptoms in the preceding week. Upon affirmation of one or more symptoms, the participant was presented with follow-on questions (e.g. date of symptom onset). The components of the IMS across seasons are presented in [Table 1](#pone-0096740-t001){ref-type="table"}.
10.1371/journal.pone.0096740.t001
###### Summary of system components of the IMS and PBS during the influenza seasons 2011--2012 and 2012--2013.
{#pone-0096740-t001-1}
Component                                        Season 1 (2011--2012)      Season 2 (2012--2013)
                                                 IMS          PBS           IMS          PBS
------------------------------------------------ ------------ ------------- ------------ ------------
Invited sample -- 14,022 2,511 14,558
Invited participants (response proportion %) -- 2,580 (18) 166 (7) 2,236 (15)
Self-recruited participants 2,552 -- 2,486 --
Geographical area Sweden Stockholm Sweden Sweden
Calendar weeks operating 46--20 38--20 47--21 44--20
The Population-based System (PBS) {#s2c}
---------------------------------
Population-based surveillance uses a sample from the general population, defined by geopolitical boundaries which constitutes the denominator, and/or the sampling frame [@pone.0096740-Porta1]. In the PBS we collected data directly from individuals that had been recruited for the surveillance, based on a random sample of the general population. Descriptions and evaluations of the PBS have been presented previously [@pone.0096740-Bexelius1], [@pone.0096740-Merk1], [@pone.0096740-Merk2]. Briefly, each year we invited representative samples of the general population, 3 months through 95 years of age to participate in PBS. Participants were instructed to spontaneously report all new episodes of colds and fevers within seven days of symptom onset from September/October through May via a secure website or a telephone based interactive voice response system. When reporting, participants answered an automated, tree-structured symptom questionnaire. Due to non-participation, there is moderate over-representation of elderly people, women, well-educated individuals, people with a high household income, married people, and people living in two-person households [@pone.0096740-Bexelius1]. Evaluations have shown that the telephone service seems to be particularly attractive for elderly and low-educated people, but the reporting technology per se does not appear to affect the reporting [@pone.0096740-Bexelius1]. A validation study revealed substantial under-reporting, which was remarkably constant over time and across seasons, thus allowing simple adjustments [@pone.0096740-Merk1].
After confinement to Stockholm County for five years, PBS was extended to all of Sweden in the 2012--2013 season. In season 1, 2,580 out of 14,022 persons sampled from Stockholm's population participated in PBS. In season 2, 2,236 out of 14,558 people sampled from all over Sweden participated. The components of the PBS across seasons are presented in [Table 1](#pone-0096740-t001){ref-type="table"}. The age and sex distributions of PBS participants during season 1 and 2 are available in [Table S2](#pone.0096740.s002){ref-type="supplementary-material"}.
Evaluation Analysis {#s2d}
-------------------
All analyses were based on data collected between November and May in season 1 and season 2, respectively. Each season lasted 27 weeks and was analysed independently.
All IMS participants who created an account in the system but never submitted a weekly report, participants with missing information on postcode, and participants with a birth date in the future, were excluded from analysis. Further, incomplete reports, reports where symptom onset preceded participation, and reports with a future date of symptom onset were excluded.
### Definitions {#s2d1}
Self-selection into IMS may lead to preferential inclusion of people with symptoms and preferential re-entry of people with symptoms after temporary periods of non-reporting. Therefore, we only included reports preceded by at least one report in the previous three weeks (henceforth referred to as *active* reports) for our incidence calculations. To explore how the definition of active participation affected disease patterns, we defined a *strictly active* report as a report preceded by two consecutive reports in the previous two weeks in a supplementary analysis (henceforth referred to as *strictly active*).
We defined a report of illness in IMS or PBS as ILI if it included sudden onset of symptoms AND at least one of the following systemic symptoms: fever or feverishness, headache, or myalgia, AND at least one of the following respiratory symptoms: cough, sore throat, shortness of breath, or coryza. Coryza was omitted from the case definition in season 2. We calculated the weekly incidence proportions (%) among all IMS participants by dividing the number of *active* ILI reports by the total number of *active* reports in that week. We calculated the weekly incidence proportions among PBS participants by dividing the number of ILI reports by the total number of cohort members in that week. We corrected PBS ILI rates for previously estimated misrepresentation of demographic substrata [@pone.0096740-Bexelius1] and under-reporting [@pone.0096740-Merk1].
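To make the calculation concrete, the following is a minimal Python/pandas sketch; the `reports` table, its column names and the `weekly_incidence` helper are illustrative assumptions, not the study's actual data pipeline, and the week index is assumed to increase monotonically across the season (no wrap at new year).

```python
import pandas as pd

# Hypothetical flat table of weekly reports: one row per submitted report, with a
# participant identifier, a season-long week index, and a boolean flag for whether
# the report met the ILI case definition described above.
def weekly_incidence(reports: pd.DataFrame) -> pd.Series:
    """Weekly ILI incidence proportion (%) among *active* reports.

    A report counts as active if the same participant also reported at least once
    in the three preceding weeks, mirroring the definition given above.
    """
    reports = reports.sort_values(["participant_id", "week"])
    gap = reports.groupby("participant_id")["week"].diff()   # weeks since the previous report
    active = reports[gap.le(3)]                               # first-ever reports (NaN gap) drop out
    weekly = active.groupby("week").agg(ili=("is_ili", "sum"),
                                        total=("is_ili", "size"))
    return 100 * weekly["ili"] / weekly["total"]
```

The PBS proportions are computed analogously, but with the full cohort size as the weekly denominator, before the corrections for demographic misrepresentation and under-reporting.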
To evaluate the performance of IMS for surveillance in close-to-real-time, episodes that started \>7 days before the reporting date were excluded from the analysis. Symptoms fitting the case definition in two consecutive weekly reports were considered to represent the same episode of illness and only the first report was included in the analysis.
### Acceptability by participants {#s2d2}
To measure the acceptability [@pone.0096740-German1] in terms of willingness of persons to participate in IMS, we calculated the proportion of participation among the invited participants. We also calculated the weekly response proportion among all enrolled IMS participants by dividing the number of reports during the week in question by the accumulated number of participants enrolled up to that week (for all reports and for *active* reports only, separately for each season). In season 2, we also stratified by self-recruitment or recruitment by invitation. The reports and *active* reports were summarized by the total, mean and median number per participant across each season. Additionally, we calculated the median proportion of complete reports per individually acquired participation time (time from registration week until the season's last week).
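A corresponding sketch for the weekly response proportion, under the same assumptions about a hypothetical `reports` table and with a `registration` series mapping each participant to their week of enrolment (both illustrative, not the study database):

```python
import pandas as pd

def weekly_response_proportion(reports: pd.DataFrame, registration: pd.Series) -> pd.Series:
    """Reports submitted in a week divided by the number of participants enrolled
    up to and including that week, expressed as a percentage."""
    reports_per_week = reports.groupby("week")["participant_id"].nunique()
    enrolled = registration.value_counts().sort_index().cumsum()   # cumulative enrolment by week
    enrolled = enrolled.reindex(reports_per_week.index, method="ffill")
    return 100 * reports_per_week / enrolled
```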
### Representativeness of the participants {#s2d3}
To assess the representativeness [@pone.0096740-German1] of self-recruited and invited IMS participants, we compared participants and the general Swedish population in terms of distributions of age, sex, level of education, and county of residence, using chi-square tests. We analysed invited and self-recruited IMS participants separately. For the invited sample in season 2, we collapsed the 21 counties of residence into three regions (*Götaland*, Southern Sweden; *Svealand,* Central Sweden; and *Norrland*, Northern Sweden) due to small numbers in many counties. We performed a supplementary analysis of representativeness including only participants who had contributed at least one *active* report. We considered p-values \<0.05 as significant.
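As an illustration of the goodness-of-fit comparison, the sketch below uses SciPy with the season 1 self-recruited age-group counts and population shares taken from Table 3; it is a worked example of the test, not a re-analysis of the study data.

```python
import numpy as np
from scipy import stats

observed = np.array([294, 934, 1133, 184])             # participants per age group (0-17, 18-39, 40-64, 65+)
population_share = np.array([0.20, 0.29, 0.32, 0.19])  # Swedish population 2011, same groups
expected = population_share / population_share.sum() * observed.sum()

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")
```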
### Time series of ILI data {#s2d4}
We compared ILI occurrence data from IMS (based on reports from all IMS participants) to incidence data from PBS (corrected for estimated demographic misrepresentation [@pone.0096740-Bexelius1] and underreporting [@pone.0096740-Merk1]). To examine the possible presence of systematic differences between the two methods that could be amenable to simple calibration, we compared incidence proportions week by week and across seasons with particular reference to periods with known increased influenza activity. We applied Bland-Altman plots [@pone.0096740-Bland1] and method comparison techniques [@pone.0096740-Carstensen1] to determine if observations from both methods directly agreed, and if not, if they agreed after mathematical transformation of the data. We also studied cross-correlations of the incidence proportions [@pone.0096740-Chatfield1]. Further, we studied the cross-correlation of IMS and PBS incidence proportions with ILI data generated by the GP-based sentinel surveillance system (weekly number of ILI cases per 1,000,000 listed patients) and laboratory reports (number of laboratory-confirmed influenza cases per week). Before analysis, we smoothed the weekly incidence proportions using a two-week moving average. We plotted each time series and calculated Spearman correlation coefficients (r) on ranked data for different lags (+/−5 weeks) between: IMS and PBS; IMS and laboratory data; PBS and laboratory data; IMS and GP-based sentinel data; and PBS and GP-based sentinel data.
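The smoothing and lagged correlation step can be sketched as follows; the function and its pandas/SciPy implementation are an illustration of the description above, not the code used in the study.

```python
import pandas as pd
from scipy.stats import spearmanr

def lagged_spearman(a: pd.Series, b: pd.Series, max_lag: int = 5) -> dict:
    """Spearman correlation between two weekly series after a two-week moving
    average, for shifts of series b from -max_lag to +max_lag weeks."""
    a_smooth = a.rolling(window=2).mean()
    b_smooth = b.rolling(window=2).mean()
    results = {}
    for lag in range(-max_lag, max_lag + 1):
        paired = pd.concat([a_smooth, b_smooth.shift(lag)], axis=1).dropna()
        rho, p_value = spearmanr(paired.iloc[:, 0], paired.iloc[:, 1])
        results[lag] = (rho, p_value)
    return results
```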
Since PBS was applied only in Stockholm County during season 1, we also restricted IMS data to Stockholm County only. However, due to small numbers in the GP-based sentinel data from Stockholm, these were not included in the season 1 analysis. For season 2, we included all four surveillance systems and made all comparisons at the national level.
In order to examine if the time series comparison would improve after attempts to correct the incidence proportion according to the general Swedish population, we performed a supplementary analysis based on weighted IMS data. We weighted the IMS sample by assigning each participant a weight calculated with the formula [@pone.0096740-Bethlehem1] **W** ~participant~ = **P** ~Swedish\ population~/**P** ~IMS\ participants~ (where W~participant~ = weight of each IMS participant, P~Swedish\ population~ = proportion of the general population of Sweden in the same age and sex group as the participant and P~IMS\ participants~ = proportion of the IMS sample in the same age and sex group as the participant).
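A sketch of that weighting step, assuming hypothetical `sample` and `population` tables that both carry `age_group` and `sex` columns (and, for the population, a `count` column); the merge-based implementation is illustrative only.

```python
import pandas as pd

def post_stratification_weights(sample: pd.DataFrame, population: pd.DataFrame) -> pd.Series:
    """Weight per participant: population share of the participant's age-sex
    stratum divided by the sample share of that stratum (the formula above)."""
    strata = ["age_group", "sex"]
    pop_share = population.groupby(strata)["count"].sum()
    pop_share = pop_share / pop_share.sum()
    sample_share = sample.groupby(strata).size() / len(sample)
    weight = (pop_share / sample_share).rename("weight")
    return sample.merge(weight.reset_index(), on=strata, how="left")["weight"]
```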
Results {#s3}
=======
Acceptability {#s3a}
-------------
During season 1 and season 2, respectively, 2,552 and 2,486 self-recruited IMS participants submitted at least one report. Of 2,511 randomly selected residents who were invited to IMS, 166 (6.6%) signed up to participate and submitted at least one report.
In season 1, as the number of participants increased, the number of reports per week increased gradually until week 9 of 2012, when it started decreasing ([Table 2](#pone-0096740-t002){ref-type="table"}). The weekly proportion of participants reporting was highest (87%) in the first week but fell almost monotonically to its lowest value (23%) in the last week of the season. The reporting proportion counting only *active* reports increased to 50% after the first three weeks with comparable levels in the following three weeks, but then it gradually fell to its lowest point (21%) in the last week. The median number of total reports and *active* reports per participant was 4 (range 1--27) and 3 (range 0--26), respectively. During calendar weeks 8, 9, and 10, coinciding with the season's influenza peak, the cumulative number of reporting participants increased, and the gap between the total number of reports and *active* reports was at its widest. Many participants also joined the system in these weeks. Based on the individually acquired participation time, the median completion proportion of all possible reports and *active* reports were 27% (range: 4--100) and 17% (range: 0--96) respectively.
10.1371/journal.pone.0096740.t002
###### The weekly number of reports and reporting proportion among self-recruited and invited IMS participants during the 2011--2012 and 2012--2013 influenza seasons.
{#pone-0096740-t002-2}
                  2011--2012 (self-recruited)                                2012--2013 (self-recruited)                                2012--2013 (invited)
Calendar week     Participants (cum.)   Reports, n (%)   Active*, n (%)      Participants (cum.)   Reports, n (%)   Active*, n (%)      Participants (cum.)   Reports, n (%)   Active*, n (%)
----------------- --------------------- ---------------- ------------------- --------------------- ---------------- ------------------- --------------------- ---------------- ----------------
46 304 263 (87) 0 (0) -- -- -- -- -- --
47 559 469 (84) 180 (32) 237 0 (0) 0 (0) 45 0 (0) 0 (0)
48 750 505 (67) 317 (42) 817 673 (82) 0 (0) 118 100 (85) 0 (0)
49 837 510 (61) 420 (50) 1,000 823 (82) 550 (55) 149 124 (83) 83 (56)
50 873 518 (59) 474 (54) 1,065 803 (75) 721 (68) 152 127 (84) 119 (78)
51 891 420 (47) 397 (45) 1,183 862 (73) 739 (62) 159 129 (81) 119 (75)
52 908 464 (51) 423 (47) 1,287 854 (66) 733 (57) 161 127 (79) 125 (78)
1 984 482 (49) 392 (40) 1,554 1,135 (73) 855 (55) 162 122 (75) 120 (74)
2 1,038 588 (57) 507 (49) 1,807 1,388 (77) 1,102 (61) 164 128 (78) 126 (77)
3 1,151 603 (52) 475 (41) 1,861 1,328 (71) 1,257 (68) 164 130 (79) 129 (79)
4 1,265 713 (56) 583 (46) 1,926 1,333 (69) 1,252 (65) 164 131 (80) 131 (80)
5 1,404 750 (53) 585 (42) 2,015 1,359 (67) 1,251 (62) 164 114 (70) 112 (68)
6 1,446 737 (51) 668 (46) 2,152 1,453 (68) 1,292 (60) 164 138 (84) 135 (82)
7 1,510 723 (48) 641 (42) 2,248 1,456 (65) 1,334 (59) 164 126 (77) 125 (76)
8 2,009 1,203 (60) 665 (33) 2,322 1,459 (63) 1,353 (58) 164 123 (75) 122 (74)
9 2,322 1,267 (55) 914 (39) 2,364 1,432 (61) 1,369 (58) 164 128 (78) 126 (77)
10 2,417 1,095 (45) 970 (40) 2,397 1,403 (59) 1,343 (56) 164 124 (76) 124 (76)
11 2,447 960 (39) 911 (37) 2,421 1,393 (58) 1,332 (55) 164 128 (78) 124 (76)
12 2,473 924 (37) 868 (35) 2,437 850 (35) 825 (34) 164 71 (43) 70 (43)
13 2,496 907 (36) 833 (33) 2,445 1,053 (43) 1,013 (41) 164 108 (66) 108 (66)
14 2,518 669 (27) 623 (25) 2,453 1,305 (53) 1,243 (51) 164 117 (71) 117 (71)
15 2,532 822 (32) 762 (30) 2,462 1,136 (46) 1,072 (44) 164 115 (70) 113 (69)
16 2,538 760 (30) 719 (28) 2,464 1,254 (51) 1,199 (49) 164 126 (77) 125 (76)
17 2,545 795 (31) 731 (29) 2,471 1,161 (47) 1,110 (45) 164 119 (73) 118 (72)
18 2,548 782 (31) 729 (29) 2,476 1,331 (54) 1,259 (51) 165 130 (79) 129 (78)
19 2,549 680 (27) 653 (26) 2,481 1,130 (46) 1,095 (44) 165 113 (68) 113 (68)
20 2,552 585 (23) 542 (21) 2,482 1,152 (46) 1,119 (45) 166 113 (68) 112 (67)
21 -- -- -- 2,486 1,114 (45) 1,077 (43) 166 122 (73) 122 (73)
Total 2,552 19,194 15,982 2,486 29,623 27,495 166 3,133 2,947
Median[\*\*](#nt102){ref-type="table-fn"} (range) 4 (1--27) 3 (0--26) 13 (1--26) 11 (0--25) 21 (1--26) 20 (0--25)
Mean[\*\*](#nt102){ref-type="table-fn"} 8 6 12 11 19 18
\*Active as defined in the Methods section.
\*\*Number of reports per participant.
In season 2, the number of reports per week also increased gradually in the beginning, when the influx of participants was greatest ([Table 2](#pone-0096740-t002){ref-type="table"}). In contrast to season 1, however, the weekly number of reports remained constant throughout the season, with the exception of a dip in calendar week 12 due to a technical malfunction of the website. Notwithstanding this stability, the weekly reporting proportion among self-recruited IMS participants fell slowly across the season, from its highest (82%) in the first two weeks to 45% in the last week. The reporting proportion counting only *active* reports peaked (68%) in weeks 50 and 3 and was 43% in the last week. The median number of total reports (13, range 1--26) and of *active* reports (11, range 0--25) was higher than in season 1. Based on the individual participation duration, the median completion proportion of all possible reports and *active* reports among self-recruited IMS participants were 64% (range: 4--100) and 57% (range: 0--96) respectively.
In season 2, the median number of reports and *active* reports per registered participant were, respectively, 62% (21 vs. 13, p\<0.01) and 82% (20 vs. 11, p\<0.01) higher among invited IMS participants than among the self-recruited ones. Disregarding weeks 12 and 13 (affected by the malfunctioning website in week 12), the lowest proportion of participants reporting counting only *active* reports among the invited participants (67% in week 20) was of the same magnitude as the highest proportion among the self-recruited (68% in week 3). Based on the individual participation duration, the median completion proportion of all possible reports and *active* reports among invited IMS participants were 84% (range: 4--100) and 79% (range 0--96) respectively.
Representativeness {#s3b}
------------------
For both seasons and irrespective of how participation was defined, self-recruited IMS participants were more likely to be female, university educated and aged 40--64 than the general population (p\<0.01 for each comparison, [Table 3](#pone-0096740-t003){ref-type="table"}). The geographical distribution of participants differed from the Swedish population (p\<0.01). For instance, 29% (season 1) and 34% (season 2) of the self-recruited participants resided in Stockholm County compared with only 22% of the Swedish population. In both seasons, only 11% of participants resided in the Swedish county containing the second largest city Gothenburg; this county accommodates 17% of the Swedish population.
10.1371/journal.pone.0096740.t003
###### Distribution of socio-demographic characteristics among self-recruited and invited IMS participants during the 2011--2012 and 2012--2013 influenza seasons and the corresponding distribution of the general Swedish population 2011 and 2012.
{#pone-0096740-t003-3}
                                                       2011--2012                                                                    2012--2013
Characteristic                                         Self-recruited, n (%)   p*   Active**, n (%)   p*   Swedish population, n (%)   Self-recruited, n (%)   p*   Active**, n (%)   p*   Invited, n (%)   p*   Invited active**, n (%)   p*   Swedish population, n (%)
------------------------------------------------------ ----------------------- ---- ----------------- ---- --------------------------- ----------------------- ---- ----------------- ---- ---------------- ---- ------------------------- ---- ---------------------------
Age group (yrs)
0--17 294 (12) \<0.01 199 (11) \<0.01 1,901,291 (20) 76 (3) \<0.01 57 (3) \<0.01 20 (12) \<0.01 19 (12) \<0.01 1,860,527 (19)
18--39 934 (37) 589 (33) 2,711,405 (29) 875 (35) 698 (33) 60 (36) 52 (34) 2,777,239 (29)
40--64 1,133 (44) 828 (46) 3,065,375 (32) 1,336 (54) 1,154 (55) 66 (40) 62 (41) 3,087,669 (32)
65+ 184 (7) 160 (9) 1,798,034 (19) 183 (7) 178 (8) 19 (11) 19 (12) 1,793,463 (19)
Missing 7 (0) 6 (0) 0 (0) 16 (1) 13 (1) 1 (1) 1 (1) 33,477 (0)
Sex
Men 898 (35) \<0.01 609 (34) \<0.01 4,723,159 (50) 793 (32) \<0.01 673 (32) \<0.01 76 (46) 0.29 68 (44) 0.18 4,760,835 (50)
Women 1,654 (65) 1,173 (66) 4,752,946 (50) 1,693 (68) 1,427 (68) 90 (54) 85 (56) 4,785,613 (50)
Missing 0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 0 (0) 5,927 (0)
Education (yrs)[\*\*\*](#nt105){ref-type="table-fn"}
\<9 101 (4) \<0.01 67 (4) \<0.01 1,862,355 (20) 124 (5) \<0.01 98 (5) \<0.01 16 (10) \<0.01 15 (10) \<0.01 1,810,813 (19)
10--12 420 (16) 264 (15) 3,348,458 (35) 473 (19) 394 (19) 45 (27) 41 (27) 3,378,533 (35)
13--15 475 (19) 331 (19) 1,000,336 (11) 502 (20) 407 (19) 30 (18) 27 (18) 1,020,584 (11)
\>15 1,221 (48) 896 (50) 1,426,598 (15) 1,322 (53) 1,145 (55) 65 (39) 60 (39) 1,476,228 (15)
Missing[\*\*\*\*](#nt106){ref-type="table-fn"} 335 (13) 224 (13) 1,838,358 (19) 65 (3) 56 (3) 10 (6) 10 (7) 1,866,217 (20)
Total 2,552 (100) 1,782 (100) 9,476,105 (100) 2,486 (100) 2,100 (100) 166 (100) 153 (100) 9,552,375 (100)
\*Chi square goodness of fit test participants vs. Swedish population.
\*\*Participants who contributed with at least one *active* report. For definition of active reports, see Methods section.
\*\*\*Among participants 16--95+ year old.
\*\*\*\*Including children in age group 0--15 yrs.
The age and sex distributions among invited IMS participants differed less from the Swedish population than did the corresponding distributions among the self-recruited ([Table 3](#pone-0096740-t003){ref-type="table"}). The under-representation of the 0--17 and ≥65 year age groups, though statistically significant, was less marked (p = 0.01). With reservation for the small numbers, the geographical distribution of invited participants was similar to that of the Swedish population (p = 0.75).
Comparison of Time Series {#s3c}
-------------------------
### Season 1 (Stockholm County 2011--2012) {#s3c1}
Smoothed weekly ILI incidence proportions ranged between 0.6--4.4% (IMS) and 0.8--2.8% (PBS). IMS reached its peak in week 7, whereas PBS and laboratory reports of influenza diagnoses reached their peaks in week 9 ([Figure 1](#pone-0096740-g001){ref-type="fig"}).
![Epidemic curves 2011--2012 and 2012--2013.\
The upper graph shows the smoothed weekly ILI incidence proportions generated by IMS and PBS (corrected for estimated demographic misrepresentation [@pone.0096740-Bexelius1] and underreporting [@pone.0096740-Merk1]) and number of laboratory confirmed influenza cases, Stockholm 2011--2012. The lower graph shows the smoothed weekly ILI incidence proportions generated by IMS (based on self-recruited and invited participants) and PBS (corrected for estimated demographic misrepresentation [@pone.0096740-Bexelius1] and underreporting [@pone.0096740-Merk1]), number of laboratory confirmed influenza cases, and ILI per 1,000,000 listed patients in GP-sentinel reports, Sweden 2012--2013.](pone.0096740.g001){#pone-0096740-g001}
IMS correlated with PBS (p\<0.05) with the largest coefficient (r = 0.71) when no lag was applied. The correlation was still significant with a shift of PBS data back in time (lead) by up to two weeks (r = 0.56) and with a shift of PBS data forward in time (lag) by one week (r = 0.59). IMS correlated with laboratory data (p\<0.05) with the highest correlation without a lag (r = 0.77). The correlation was still significant with a shift of laboratory data by two weeks back in time (r = 0.40) and by one week forward in time (r = 0.65). PBS also correlated with laboratory data (p\<0.05) with the highest correlation without a lag (r = 0.63). However, correlations were also significant with a shift of laboratory data by one week back in time (r = 0.49) and two weeks forward in time (r = 0.46).
### Season 2 (Sweden 2012--2013) {#s3c2}
The corresponding curves for season 2, pertaining to all of Sweden, also include the weekly number of ILI cases among listed patients reported in the GP-based sentinel system ([Figure 1](#pone-0096740-g001){ref-type="fig"}). The smoothed weekly incidence proportions of reported ILI ranged between 1.1--2.8% in IMS and 0.7--3% in PBS.
IMS correlated with PBS (p\<0.05) with the maximum coefficient (r = 0.69) when no lag was applied, but the correlation remained significant with a shift of PBS data back in time by up to two weeks (r = 0.54) and forward in time by up to two weeks (r = 0.47). IMS correlated with laboratory data (p\<0.05) with the maximum coefficient without a lag (r = 0.56) and with GP sentinel data with a shift of GP sentinel data one week back in time (r = 0.61). Correlations were also significant between two weeks lead (laboratory: r = 0.44, sentinel: r = 0.54) and two weeks lag (laboratory: r = 0.48, sentinel: r = 0.51) of laboratory and sentinel data, respectively. PBS correlated with laboratory and sentinel data (p\<0.05) with the maximum coefficient at a four week lag for laboratory data (r = 0.50) and sentinel data (r = 0.55), respectively. Correlations were also significant between one (r = 0.42) and five weeks lag (r = 0.49) of laboratory data and between zero (r = 0.47) and five (r = 0.50) weeks lag of sentinel data.
### Exploring a stricter definition of active participation {#s3c3}
When applying stricter criteria to define a report as *strictly active*, as opposed to the *active* definition used in the main analysis, the IMS weekly incidence proportions were overall lower in both seasons, but the differences were generally small. In season 1, estimates of ILI incidence proportions based on *active* reports were on average 0.6 percentage points (median 0.6, range: −0.8 to 1.8) higher than those derived from *strictly active* reports. In season 2, the differences were generally smaller, on average 0.1 percentage points (median 0.1, range: −0.2 to 0.4), and of similar size across the entire season.
### Exploring time-series comparison based on weighted IMS-data {#s3c4}
The smoothed weekly weighted ILI incidence proportions ranged between 0.9--4.9% in season 1 and 1.3--3.0% in season 2. In both seasons, the weighted incidence proportions were similar to the crude incidence proportions for most of the weeks. However, the weighted data produced peaks in the beginning of both seasons at levels of incidence proportions similar to the peak that coincided with the influenza peak according to laboratory data. Additionally, in season 1 the weighted data produced a peak towards the end of the season. Weighted IMS data correlated more weakly with all the other surveillance data sources than the crude IMS data did (data not shown).
### Difference between weekly estimates {#s3c5}
In season 1, the weekly estimates of ILI incidence proportions generated by IMS were on average 0.9 percentage units (median 1.0) higher than those derived from the PBS. By and large, the differences were constant across the season, with the greatest (but also the most variable) differences in the weeks when influenza activity was increasing according to laboratory reports. The differences tended to subside towards the end of the season. The Bland-Altman plot suggested that the two systems did not agree, but rather that IMS tended to give higher ILI estimates than PBS ([Figure 2](#pone-0096740-g002){ref-type="fig"}). The transformation of IMS to PBS (PBS = 0.83+0.35\*IMS) had 95% prediction limits of magnitude ±1.18%; i.e. the true PBS incidence proportion would fall within ±1.18% of the estimate with 95% certainty.
{#pone-0096740-g002}
In contrast, in season 2 the mean and median differences were −0.25 and −0.15 percentage points, respectively, indicating that PBS generated higher estimates most of the weeks, but the differences were generally smaller. The greatest differences were seen in the beginning of the season and when influenza activity had passed its peak according to the laboratory reports. A Bland-Altman plot suggested that IMS tended to give higher estimates when the average of IMS and PBS incidence estimates were below 1.5%, and that PBS generated higher estimates for higher means ([Figure 2](#pone-0096740-g002){ref-type="fig"}). The 95% prediction limits after transformation of IMS to PBS (PBS = −0.94+1.62\*IMS) were of magnitude ±1.36%.
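To make the reported calibrations concrete, here is a small worked sketch using the season 1 transformation; the 3.0% input is an arbitrary illustrative IMS estimate, not an observed value.

```python
# Season 1 transformation reported above: PBS = 0.83 + 0.35 * IMS,
# with 95% prediction limits of +/- 1.18 percentage points.
ims_estimate = 3.0                          # hypothetical weekly IMS ILI proportion (%)
pbs_estimate = 0.83 + 0.35 * ims_estimate   # -> 1.88%
lower, upper = pbs_estimate - 1.18, pbs_estimate + 1.18
print(f"calibrated estimate {pbs_estimate:.2f}%, 95% prediction interval {lower:.2f}%-{upper:.2f}%")
```

An interval of roughly 0.7% to 3.1% around a point estimate below 2% spans much of the incidence range observed during the season, which illustrates why the Discussion judges prediction limits of this magnitude too wide for surveillance purposes.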
Discussion {#s4}
==========
This evaluation of IMS suggested that self-recruitment led to an overrepresentation of women and of highly educated and middle-aged persons. The weekly proportion of participants that reported decreased gradually throughout both seasons. Furthermore, only a small proportion of the invited sample participated in IMS. Although the generated ILI estimates differed from PBS estimates, especially in the first season, IMS data correlated significantly with PBS data and with data from the traditional influenza surveillance systems.
Our findings are consistent with previous assessments of Influenzanet that suggested that self-recruited layperson-based influenza surveillance systems could detect changes in ILI incidence in the population to give signals of the start and culmination of influenza epidemics with reasonable accuracy [@pone.0096740-Friesema1], [@pone.0096740-Marquet1], [@pone.0096740-Tilston1]--[@pone.0096740-Vandendijck1]. The present study adds by providing a comparison with a well-validated population-based surveillance system that generated incidence data in real time, corrected for previously quantified demographic misrepresentation [@pone.0096740-Bexelius1] and under-reporting [@pone.0096740-Merk1].
A recent multi-country analysis of data from seven Influenzanet countries (including Sweden) found that participants with fewer years of education and of younger ages had lower compliance with completing the weekly report [@pone.0096740-Bajardi1], suggesting that further selection bias may be introduced as the season proceeds. This is illustrated in our analysis as the accumulated number of self-recruited IMS participants did not parallel the number of reports and *active* reports submitted each week. Interestingly, the difference between the total number of reports and the total number of *active* reports was generally small, and the total number of reports was fairly stable across the entire season, particularly in season 2. The majority of participants in season 2 responded to more than half of their required reports, further implying that the overwhelming majority of reports came from faithful participants who reported regularly. Possibly, they reported during limited periods not necessarily starting at the beginning of the season. A notable exception was the period from February through March during season 1, when influenza peaked [@pone.0096740-Swedish1]. At this time, the *active*-to-total report ratio, in particular, fell noticeably. Many participants entered the system, possibly prompted by their own ILI and/or by increased attention to IMS produced by the peak itself.
The requirement of at least one report in the three weeks preceding each new report does not entirely rule out preferential re-entry of sick participants. However, requiring two consecutive reports in the preceding two weeks, practically precluding preferential re-entry of sick participants, resulted in incidence curves with shapes that were trivially different from those based on the more relaxed *active* report definition. This suggests that re-entry may only be a minor problem. Nevertheless, weak selection bias in IMS during the influenza peak might have amplified and improved the signal, thus making the peak more distinct in IMS than in PBS.
The reporting activity level among IMS participants improved in season 2. This may be due to inclusion of motivated participants from the previous season and a concentration of marketing efforts to the beginning of the season rather than continuously. The higher reporting frequency in the invited sample indicates that this group was even more motivated to report regularly. Although a seven percent response proportion among the invited residents likely enriched particularly motivated participants, the detailed information in the postal invitation may also have contributed to the regular reporting. Furthermore, the step from receiving the paper invitation to registering online may have demanded more motivation to participate than reading about IMS online, where registration is only "a mouse click away".
The underrepresentation of the youngest age groups among self-recruited IMS participants may be due to modest emphasis on the possibility for legal guardians to act as proxy participants for their children. In the oldest age groups, limited Internet availability and computer literacy may have prevented participation [@pone.0096740-Statistic1]. Interestingly, despite the poor participation rate, the randomly selected, invited population sample was more representative with regard to age, sex and education.
Notwithstanding the misrepresentation of self-recruited IMS participants, the overall epidemic curves were similar to those generated by PBS. This suggests that neither the measured socio-demographic factors nor unmeasured determinants of participation were strongly associated with the risk of ILI. While IMS may detect the start and peak of influenza epidemics, the continuous monitoring of absolute incidence rates in various substrata of the population may be better accomplished with PBS. Age group-specific data is of particular interest because immunity and the predisposition to complications differ across ages [@pone.0096740-Punpanich1]. Since the representation of specific age groups in self-recruited IMS is variable and poor in the elderly, the validity of incidence data in this group is uncertain. The pattern of misrepresentation of participants was similar in PBS [@pone.0096740-Bexelius1], but the deviations were smaller compared with self-recruited IMS. Phone reporting offered by PBS may explain better representation of older age groups [@pone.0096740-Bexelius1]. When weighting the IMS sample according to the age and sex distribution of the Swedish population, the epidemic curves deviated more from the PBS. The weighted estimates may give a less biased cross-sectional estimate, but interpretation of time series stays as complicated as for crude estimates due to the weekly variations in stratum specific reporting activity. However, it is reassuring that even misrepresented self-selected populations are capable of describing an epidemic of influenza-like illness with a reasonable accuracy.
The differences between incidence proportions generated by IMS and PBS varied across and between the seasons. Based on the Bland-Altman plots and method comparison techniques, the prediction limits in the transformation of the estimates of one system to the other were of a magnitude that we consider unacceptable for the purpose of surveillance. Notably, the incidence according to PBS was unprecedentedly high from November until the peak in March of season 2 and did not coincide with the laboratory confirmed influenza data. The reasons for this deviation of PBS data compared to previous seasons remain speculative but may relate to the epidemiology of respiratory infections. First, the 2012--2013 influenza season was unusually long, with three circulating strains that affected all age groups [@pone.0096740-Swedish2]. The PBS may have picked up a higher baseline activity of ILI that was missed by IMS because of an under representation of older and younger age groups. Second, the higher baseline activity of ILI in PBS in the first part of season 2 coincided with the start of the respiratory syncytial virus (RSV) season [@pone.0096740-Swedish3], possibly resulting in influenza-like symptoms, mainly among children. Lastly, after having been confined to Stockholm County for five seasons, a PBS surveillance cohort of the same size as the previous cohorts in Stockholm was drawn from the whole country (0.12% of the Stockholm population vs 0.02% of the Swedish population). Local and regional variations in influenza surveillance and epidemiology may have affected comparability.
Limitations {#s4a}
-----------
The elements assessed in this evaluation provided insights about the functionality of IMS and illustrated differences between the two community-based surveillance systems. However, structured evaluation of other aspects, such as timeliness, flexibility, stability and resources needed, may provide further understanding about the usefulness of IMS [@pone.0096740-German1]. Moreover, the evaluation covered only two seasons, which yielded somewhat deviating findings, possibly due to differences in the geographical distribution of the sample and, during some periods, the small sample size; this makes generalisation of the results difficult.
Conclusion {#s4b}
----------
In conclusion, the self-recruited IMS participants reflected the demography of the Swedish population poorly. Yet IMS offered a reasonable representation of the temporal ILI pattern in the community overall during the 2011--2012 and 2012--2013 influenza seasons and could be a simple tool to collect community-based ILI data. However, invited IMS participants represented the target population better than the self-recruited and completed a larger proportion of reports. Therefore, personal invitations to a random sample of the population may improve the quality and usability of IMS surveillance data.
Supporting Information {#s5}
======================
###### Table S1
Distribution of socio-demographic indicators among invited residents and invited IMS participants, Sweden 2012--2013.
(PDF)

###### Table S2
Distribution of age and sex among PBS participants during the 2011--2012 and 2012--2013 influenza seasons.
(PDF)
We would like to acknowledge the contribution of EPIET coordinator Yvan Hutin in reviewing this manuscript.
[^1]: **Competing Interests:**The authors have declared that no competing interests exist.
[^2]: Conceived and designed the experiments: MR AC HM IG SKB AL ON. Performed the experiments: MR AC ON. Analyzed the data: MR AC IG SKB. Wrote the paper: MR ON. Revised the manuscript critically: MR AC HM SKB IG AL ON.
Following the news that Persona 5 had sold 337,767 copies in its first week, Atlus has announced that the game has now shipped over 550,000 copies in Japan, which includes downloads.
It was previously speculated that Sega Sammy expected to sell 500k to 550k copies of Persona 5 to retailers (so copies shipped and not sold-through) in the month of September 2016, which this number corroborates.
To commemorate this, character designer Shigenori Soejima has drawn a special illustration, and director Katsura Hashino has written a message of thanks for the game’s great reception.
Two new Persona 5 DLC costume sets will also be distributed for free to celebrate the number of copies the game has shipped.
More sales data for the Persona franchise can be seen here.
Persona 5 was released for the PS3 and PS4 in Japan on September 15, 2016. It will release in North America and Europe on February 14, 2017, and in traditional Chinese and Korean in 2017.
— Persona Channel
Hithey Dheymee
Hithey Dheymee is a 2011 Maldivian drama film directed by Amjad Ibrahim. Produced by Hussain Ibrahim, Ali Ibrahim and Amjad Ibrahim under Farivaa Films, the film stars Amira Ismail, Hussain Solah, Ali Ahmed, Fathimath Azifa and Aminath Shareef in pivotal roles. The film was released on 20 April 2011.
Premise
Nadheema, a school teacher, marries the only son of a wealthy family, Hisham (Hussain Solah), despite his mother's disapproval, since Nadheema is from a middle-class family. Upon discovering their relationship, Ahmed (Ali Ahmed), Hisham's best friend, who is secretly in love with Nadheema, is heartbroken. When Nadheema gets pregnant, they bring in a maid, Shifa (Fathimath Azifa), to help them with their responsibilities. Complications arise when Hisham's mother plots against Nadheema and Hisham has a secret affair with Shifa.
Cast
Amira Ismail as Nadheema
Hussain Solah as Hisham
Ali Ahmed as Ahmed
Fathimath Azifa as Shifa
Aminath Shareef as Fareedha
Ali Shameel as Shameel
Fauziyya Hassan as Shakeela
Nadhiya Hassan as Shaza
Mariyam Shahuza as Fazeena
Nashidha Mohamed as Rish
Soundtrack
References
Category:2011 films
Category:Maldivian films
Cirillo insisted that the woman had told her she wanted him to come to her bedroom and stay on the night in question. He testified that they did not have intercourse, because "she fell asleep on me" after a consensual sexual interaction. He claimed that his statements to her in the recording about having sex were lies, because he had not wanted to disappoint her.
70s Music Artist watch: Jean Michel Jarre
This Frenchman, born in 1948, made the leap from classical to synth pop and became famous not just for his music of the 70s but also for the amazing large-scale laser and light shows that accompanied his thought-provoking music.
Jean Michel Jarre first came to be known by the masses when he released Oxygene in 1976; it sold over twelve million copies, not bad for a debut album. Using just three synthesizers and some basic editing equipment, he produced the futuristic music of the 70s.
Popular Posts
Midge Ure, O.B.E., was born in 1953 and was a massive star in the 80s with Ultravox and Live Aid, but his roots really do go back to the music of the 70s. He started with a group called Salvation in 1972 as a guitarist, mainly performing around Glasgow. Salvation disbanded in 1974, and some of the group joined together to form Slik. They had no single success until this 1976 hit, which is where I first heard of Slik: their number one UK record "Forever and Ever". This record is often referred to as a Bay City Rollers type record, which is not surprising, as the same writers were working on both groups and for Bell Records.
For those that live around Europe (Australia included), the Eurovision Song Contest has been held once a year since the 1950s. The contest sets out to find the group that will be the best of the best in Europe, and it is voted on by every participating country. Needless to say, the politics of the region play a very big part.
With all its many flaws, it brings in a huge live TV audience of around 100 million people.
David Cassidy was born in 1950 and died in November 2017 after multiple organ failures and the onset of dementia. David was one of the biggest global stars in the 1970s after finding fame in the cult USA TV show The Partridge Family.
The TV show not only made him a teen heartthrob as a good looking actor, but the show allowed him to prove his singing credentials too.
The show proved popular, but the fame took its toll on several, if not most, of the starring cast. In the midst of his rise to fame, David Cassidy soon felt stifled by the show and trapped by the mass hysteria surrounding his every move. In May 1972, he appeared nude on the cover of Rolling Stone magazine in a cropped Annie Leibovitz photo. He used the article to get away from his squeaky-clean image. Among other things, the article mentions Cassidy was riding around New York in the back of a car "stoned and drunk."
Once "I Think I Love You" became a hit, Cassidy began work on solo albums, as well. Withi…
Superlawyers
Where do you go to school to make it to the top?
New England
Connecticut:
UConn: 28%
Quinnipiac: 8%
Boston U: 4%
Virginia: 4%
Yale: 4%
Western New England U: 4%
Suffolk: 3%
Georgetown: 3%
NYU: 3%
Boston College: 3%
Massachusetts:
Boston College: 18%
Suffolk U.: 16%
Boston U: 14%
Harvard: 15%
Northeastern: 6%
New England Law Boston: 6%
Georgetown: 3%
Columbia: 2%
Virginia: 2%
Cornell: 2%
Rhode Island:
Suffolk: 20%
Boston U: 12%
Boston College: 10%
Georgetown: 5%
New England Law: 4%
Harvard: 4%
Roger Williams: 3%
NYU: 3%
George Washington: 2%
Catholic University of America: 2%
American University (D.C.): 2%
Mid Atlantic:
New York: Metro
NYU: 11%
Columbia: 10%
Harvard: 8%
Fordham: 8%
Brooklyn Law: 7%
St. John’s: 5%
Hofstra: 4%
Georgetown: 3%
New York Law: 3%
Yale: 3%
New York Upstate:
U at Buffalo-SUNY: 27%
Albany: 18%
Syracuse: 12%
Cornell: 5%
Georgetown: 2%
Notre Dame: 2%
Harvard: 1%
Toledo: 1%
Michigan: 1%
Fordham: 1%
New Jersey:
Seton Hall: 21%
Rutgers Newark: 13%
Rutgers Camden: 7%
NYU: 4%
New York Law: 3%
Georgetown: 3%
Widener: 3%
Harvard: 3%
Fordham: 2%
Temple: 2%
Pennsylvania:
Temple: 16%
Villanova: 13%
Pittsburgh: 10%
UPenn: 9%
Duquesne: 8%
Penn State: 6%
Widener: 6%
Harvard: 3%
Georgetown: 2%
Rutgers: 2%
South Atlantic
Delaware:
Widener U: 13%
Villanova: 11%
Penn State: 8%
U of Penn: 8%
Temple University: 7%
Georgetown: 6%
Virginia: 4%
Washington and Lee: 3%
Harvard: 3%
William and Mary: 3%
North Carolina: 3%
Emory: 3%
Georgia:
Georgia: 23%
Emory: 18%
Mercer University: 7%
Virginia: 6%
Harvard: 4%
Vanderbilt: 3%
Duke: 3%
Georgia State: 3%
North Carolina: 2%
Florida: 2%
Florida:
Florida: 25%
Miami: 16%
Stetson: 8%
Florida State: 8%
Nova Southeastern: 3%
Harvard: 2%
Georgetown: 2%
Duke: 2%
Virginia: 2%
Samford: 2%
Maryland:
Maryland: 30%
Baltimore: 22%
Georgetown: 6%
George Washington: 6%
Catholic University of America: 5%
America University Washington: 5%
Virginia: 3%
Harvard: 2%
Duke: 1%
UPenn: 1%
South Carolina:
South Carolina: 78%
Virginia: 4%
Emory: 2%
Harvard: 1%
Duke: 1%
Wake Forest: 1%
Vanderbilt: 1%
Yale: 1%
Georgia: 1%
Samford: 1%
Campbell University: 1%
West Virginia:
West Virginia: 62%
William and Mary: 5%
Washington and Lee: 4%
Virginia: 2%
Kentucky: 2%
Ohio State: 2%
Harvard: 2%
Wake Forest: 2%
Notre Dame: 2%
Richmond: 1%
George Mason: 1%
Duquesne: 1%
North Carolina:
North Carolina: 35%
Wake Forest: 21%
Duke: 7%
Virginia: 6%
Campbell Univ.: 5%
Vanderbilt: 2%
Harvard: 2%
South Carolina: 2%
North Carolina Central: 1%
William and Mary: 1%
Washington D.C.:
Georgetown: 12%
George Washington: 11%
Harvard: 10%
Virginia: 7%
Catholic University: 4%
American University: 4%
Yale: 4%
Michigan: 3%
Columbia: 3%
Maryland: 2%
Virginia:
Virginia: 25%
Richmond: 22%
William and Mary: 12%
Washington and Lee: 7%
George Mason: 6%
George Washington: 3%
Georgetown: 3%
American University 2%
Catholic University: 1%
Harvard: 1%
Mid and Deep South
Kentucky:
Kentucky: 37%
Louisville: 32%
Northern Kentucky: 5%
Vanderbilt: 3%
Indiana: 2%
Cincinnati: 2%
Harvard: 2%
Virginia: 1%
Michigan: 1%
Yale: 1%
Notre Dame: 1%
Louisiana:
LSU: 38%
Tulane: 31%
Loyola New Orleans: 18%
Virginia: 2%
Harvard: 1%
Georgetown: 1%
Ol’ Miss: 1%
Southern U: 1%
Vanderbilt: 0%
Mississippi College: 0% (both <1%)
Alabama:
Alabama School of Law: 42%
Samford University Cumberland School of Law: 27%
Vanderbilt University Law School: 7%
University of Virginia Law School: 5%
Birmingham School of Law: 3%
Tulane University Law School: 2%
Faulkner University School of Law: 2%
Emory University School of Law: 1%
Washington and Lee University School of Law: 1%
Harvard Law School: 1%
Mid South:
Ol’ Miss: 20%
Tennessee:18%
Vanderbilt: 15%
Arkansas: 10%
Memphis: 9%
Arkansas at Little Rock: 6%
Mississippi College: 2%
East North Central
Illinois:
DePaul: 11%
Northwestern: 10%
Loyola Chicago: 9%
Illinois: 8%
John Marshall: 7%
Chicago: 7%
Chicago-Kent College, IIT: 7%
Michigan: 6%
Harvard: 5%
Notre Dame: 2%
Indiana:
Indiana U McKinney School of Law: 46%
Indiana Maurer School of Law: 24%
Valparaiso U: 5%
Michigan: 4%
Notre Dame: 2%
Harvard: 2%
Vanderbilt: 1%
Washington U, St. Louis: 1%
Louisville: 1%
Illinois: 1%
Northwestern: 1%
Michigan:
Wayne State: 26%
Michigan: 21%
Detroit Mercy: 14%
Michigan State: 13%
Thomas M. Cooley: 4%
Notre Dame: 2%
Harvard: 2%
Indiana: 1%
Toledo: 1%
Northwestern: 1%
Ohio:
Ohio State: 17%
Case Western Reserve: 13%
Cleveland State: 11%
Cincinnati: 10%
Capital: 7%
Akron: 6%
Northern Kentucky: 3%
Toledo: 3%
Michigan: 3%
Dayton: 3%
Wisconsin:
Wisconsin: 38%
Marquette: 33%
Michigan: 2%
Iowa: 2%
Northwestern: 2%
Harvard: 2%
Drake: 1%
Georgetown: 1%
John Marshall: 1%
Washington U (St. Louis): 1%
West North Central
Missouri and Kansas:
Missouri (columbia): 15%
Missouri-Kansas City: 15%
Kansas: 13%
Saint Louis U: 12%
Washburn U: 10%
Washington U. St. Louis: 8%
Michigan: 2%
Iowa: 2%
Harvard: 1%
Georgetown: 1%
Minnesota:
Minnesota: 33%
William Mitchell: 26%
Hamline: 8%
Harvard: 3%
Iowa: 3%
Michigan: 2%
North Dakota: 2%
Chicago: 2%
Wisconsin: 2%
Georgetown: 1%
Texas and Oklahoma
Oklahoma:
Oklahoma: 46%
Tulsa: 21%
Oklahoma City U: 12%
Texas: 3%
SMU: 2%
Harvard: 1%
Arkansas: 1%
Georgetown: 1%
Kansas: 1%
Vanderbilt: 1%
Duke: 1%
Texas:
Texas: 26%
SMU: 13%
Houston: 11%
Baylor: 9%
South Texas: 7%
Texas Tech: 6%
St. Mary’s U.: 6%
Harvard: 2%
Virginia: 1%
Oklahoma: 1%
Mountain
Southwest:
Arizona: 19%
Arizona State: 15%
New Mexico: 10%
Texas: 3%
Michigan: 2%
Harvard: 2%
Georgetown: 2%
Notre Dame: 2%
Brigham Young: 2%
George Washington: 2%
Colorado:
U of Denver: 29%
Colorado: 20%
Harvard: 3%
Georgetown: 2%
Stanford: 2%
Michigan: 2%
Virginia: 2%
Texas: 2%
Yale: 1%
NYU: 1%
Pacific
Alaska:
Willamette University College of Law: 8%
Harvard Law School: 7%
University of Washington School of Law: 6%
Univ. of Oregon School of Law: 6%
UC Berkeley School of Law: 6%
Gonzaga University School of Law: 5%
Northeastern: 5%
Stanford: 4%
Univ. of the Pacific: 3%
Michigan: 3%
Arizona: 3%
Lewis and Clark: 3%
NorCalifornia:
California Hastings: 16%
California Berkeley: 13%
San Francisco: 8%
Santa Clara University: 6%
University of the Pacific: 5%
Stanford: 5%
Harvard: 5%
UC Davis: 4%
Golden Gate Univ.: 3%
UCLA school of law: 3%
SoCalifornia:
Loyola: 14%
UCLA: 14%
USC: 8%
Southwestern: 8%
UC Berkeley: 5%
Harvard: 5%
UC Hastings: 3%
Pepperdine: 3%
Stanford: 3%
UC San Diego: 2%
Hawaii:
Hawai’i at Manoa: 16%
UC Berkeley: 9%
UC Hastings: 8%
Georgetown: 8%
Harvard: 7%
Michigan: 5%
Boston U. 3%
Santa Clara U.: 3%
Stanford: 3%
Virginia: 2%
Southwestern: 2%
Northwestern: 2%
Columbia: 2%
Oregon:
Lewis and Clark: 18%
Oregon: 17%
Willamette: 16%
Berkeley: 3%
Michigan: 3%
Harvard: 3%
Stanford: 2%
Washington: 2%
UC Hastings: 1%
Seattle: 1%
Northwestern: 1%
Gonzaga: 1%
Cornell: 1%
Virginia: 1%
Washington:
Washington: 22%
Seattle: 15%
Gonzaga: 6%
Harvard: 5%
Michigan: 3%
Willamette: 3%
Georgetown: 3%
Stanford: 3%
Yale: 3%
Oregon: 2%
Top superlawyer-producing schools:
1 Harvard Law School
2 The University of Michigan Law School
3 The University of Texas School of Law
4 University of Virginia School of Law
5 Georgetown University Law Center
6 New York University School of Law
7 Columbia Law School
8 University of Florida Levin College of Law
9 University of California Berkeley School of Law – Boalt Hall
10 Yale Law School
11 University of California Hastings College of the Law
12 The George Washington University Law School
13 Boston University School of Law
14 UCLA School of Law
15 University of Pennsylvania Law School
16 The University of Chicago Law School
17 Boston College Law School
18 Northwestern University School of Law
19 Stanford Law School
20 University of Miami School of Law
21 Vanderbilt University Law School
22 Southern Methodist University Dedman School of Law
23 Duke University School of Law
24 University of Minnesota Law School
25 University of Wisconsin Law School
26 Cornell University Law School
27 Fordham University School of Law
28 Temple University Beasley School of Law
29 Loyola Law School Los Angeles
30 University of North Carolina School of Law
U.S. News and World Report Ranking
1. Yale University
2. Harvard University
2. Stanford University
4. Columbia University
4. University of Chicago
6. New York University
7. University of Pennsylvania
7. University of Virginia
8. University of California–Berkeley
9. University of Michigan–Ann Arbor
11. Duke University
12. Northwestern University
13. Cornell University
14. Georgetown University
15. University of Texas–Austin
15. Vanderbilt University
17. University of California–Los Angeles
18. University of Southern California
19. University of Minnesota–Twin Cities
19. Washington University St. Louis
20. George Washington University
21. University of Alabama
22. Emory University
23. University of Notre Dame
24. Indiana University–Bloomington
25. University of Iowa
26. Washington and Lee University
27. University of Washington
28. Arizona State University
29. Boston University
Depending on where you want to practice, not all law schools are created equal.
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
Fact: House's famous anti-hero Dr. House was inspired by a woman. Yet in conceiving the hit show, creators chose to make their protagonist male. Why? Because misbehavior is tolerated—even accepted—in men. TV audiences are used to rooting for male anti-heroes—from Tony Soprano to Walter White. We're able to forgive their often horrific actions because, to quote the most tired excuse ever, "boys will be boys."
"Misbehavior is tolerated—even accepted—in men...We're able to forgive their often horrific actions because, to quote the most tired excuse ever, 'boys will be boys.'"
But what about women? The female anti-hero is a relatively new phenomenon largely because sexism is a systemic problem in this country, and people have a hard time with women subverting stereotypes. It's (at least part of) the reason so many people had such difficulty imagining a female president during this past election. But the apprehension towards female unlikability on TV is slowly changing, in part thanks to Amazon's Fleabag (which just got a second season order, FYI), created by and starring Phoebe Waller-Bridge.
Unlikable women on TV have existed before—there was Nancy Botwin, the drug dealer on Weeds, and Sex and the City's self-absorbed Carrie Bradshaw—but Fleabag is different. Most shows require their female protagonist to have her life together in at least one way (be it career, love, friendships, family) in order to balance out deficiencies in other areas of her life, yet Fleabag has no refuge. She's literally ruined every area of her life. And yet, as a viewer, we still root for her to succeed. It's a true mark of a successful anti-hero, so MarieClaire.com hunted down Waller-Bridge to chat about breaking the mold.
On whether she'd call Fleabag an anti-hero:
"When writing the original stage play I described her as an anti-hero/anti-heroine a lot. It felt very edgy to be talking about a young woman like that back in 2013. The idea has become more mainstream now, which is great. I'll never get bored of seeing flawed women on the screen. But having said that, I see Fleabag's honesty about her flaws heroic in a different way. Her attempts to hold herself together even in the face of her own mistakes and pain is her great struggle, but she does it for us, the audience. And we witness the pain she suffers only in glimpses, but hopefully enough to show us how hard she is working for us."
On the key to a successful female anti-hero:
"There's some sort of charm and humor that makes them forgivable. This is where the comedy was vital to the show. Without the jokes or the honesty, Fleabag would be much harder to root for. We're living in a time when people are responding to characters who are honest rather than aspirational. I'd like to think Fleabag's honesty makes her heroic in spite of her actions."
On worrying about female likability:
"I did worry, but I had to quickly shake that off. I knew how much pain she was covering so I could see through the callousness and take her good nature and broken heart for granted. It was making sure the audience were given moments of that vulnerability. Without Fleabag's mask slipping, she'd have been inexplicably hard and impenetrable and there isn't a huge amount to like in that. If we have even a sliver of understanding about why a person is acting strangely we see them in a totally different light. I don't think the challenge is asking an audience to like a character, it's inviting them to try and understand them...then making that journey entertaining and worth their while. It's a classic trick, but it's human, and it allows characters to have more depth."
On Fleabag's complicated feminism:
"Fleabag never contradicts or challenges the fundamental argument in feminism; she never questions the right for women to have equal opportunities as men. She's just confused about the 'rules' that seem to apply to a certain brand of feminism. But I didn't find it useful to think about it because there were other themes taking precedence like family, and grief, and obsession. However much people want to politicize every movement of a controversial woman in life or on the screen, we just have to keep being personal and truthful or we will explode."
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
1. Background {#sec137962}
=============
Burn injury may result in severe metabolic disturbances. Burned patients have the highest metabolic rate among all critically ill patients ([@A21775R1], [@A21775R2]). Increased energy expenditure can cause malnutrition, with severe body weight loss and a negative nitrogen balance ([@A21775R3], [@A21775R4]). After burn injury, a broad systemic response starts immediately, which may adversely affect immune function ([@A21775R5]). On the other hand, gut-derived bacteria or endotoxemia are potent signals that trigger or exacerbate the hypermetabolic and immune inflammatory responses ([@A21775R6]). Prolonged and persistent hypercatabolism is characterized by the loss of lean body mass ([@A21775R7], [@A21775R8]), as well as a progressive decrease of host defenses ([@A21775R9], [@A21775R10]) that can lead to a late form of multiple organ dysfunction syndrome ([@A21775R11], [@A21775R12]).
Protein-energy malnutrition (PEM) can also cause an impaired immunologic response ([@A21775R13]). A number of factors, such as protein-calorie nutritional status, recent immunologic events and the intensity, repetitiveness and duration of the inciting insult, seem to affect the magnitude of the stress response and its consequences ([@A21775R14], [@A21775R15]). Studies have shown that aggressive and immediate administration of enteral nutrition support can mitigate the stress response, attenuate hypermetabolism, reduce devastating catabolism ([@A21775R7], [@A21775R8], [@A21775R16], [@A21775R17]) and, therefore, improve the outcome ([@A21775R9]). The right balance of nutrition support is essential for reducing the hypermetabolic and hypercatabolic responses induced by burn injury ([@A21775R1]).
Despite increasing experimental evidence supporting the role of nutritional support in the outcome of burn patients, little emphasis has unfortunately been given to it in practice. In most developing countries, and especially in our hospitals, low priority and unclear assignment of responsibility are among the most common reasons for poor nutrition. The purpose of this study is to demonstrate the importance of proper nutritional support in determining the outcome of critically burned patients.
Therefore, we decided to use a commercial enteral feeding formula, together with daily assessment of the required calorie intake, to show the importance of nutrition therapy for the clinical recovery of burned patients, and to compare it with the hospital's routine nutrition, which is unrestricted (free) feeding. For this purpose, the Sequential Organ Failure Assessment (SOFA) score and the duration of hospital stay were measured.
2. Objectives {#sec137963}
=============
This study was designed to determine the possible protective effect of early and adequate nutrition support on SOFA score and length of stay (LOS) in hospital, in thermal burn victims.
3. Patients and Methods {#sec137969}
=======================
This study is a prospective, interventional, single-center, double-blinded (subject, outcome assessor) clinical trial with concealed block randomization. The study was carried out in the Burn Center of Sina Hospital, in Tabriz, Iran. The ethics committee of Tabriz University of Medical Sciences, Tabriz, Iran, approved the study protocol. The study protocol was submitted to the Iranian Registry of Clinical Trials (IRCT) and approved under number 201307082017N13. Informed consent was obtained from each subject or his or her family members.
3.1. Patients and Groups {#sec137964}
------------------------
For sample size determination, primary information on the SOFA score was obtained from a pilot sample of five patients. Considering a 95% confidence level, 90% power and a two-tailed test, and utilizing Pocock's formula, a minimum of 14 samples per group was determined; taking into account a 30% drop-out rate, the sample size was increased to 19 cases per group.
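For readers who want to reproduce this type of calculation, the following Python sketch implements the standard two-group, two-tailed sample-size formula for comparing means (the form usually attributed to Pocock) together with drop-out inflation. It is only an illustration, not the authors' calculation: the pilot standard deviation and the clinically meaningful difference are placeholders, since the pilot values are not reported in the text.

```python
from math import ceil
from scipy.stats import norm

def pocock_sample_size(delta, sd, alpha=0.05, power=0.90, dropout=0.30):
    """Per-group sample size for comparing two means (two-tailed),
    inflated for an anticipated drop-out rate."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-tailed critical value
    z_beta = norm.ppf(power)
    n = 2 * (sd ** 2) * (z_alpha + z_beta) ** 2 / delta ** 2
    return ceil(n), ceil(n / (1 - dropout))

# Placeholder pilot values (the paper's actual pilot SOFA statistics are not given):
n_core, n_inflated = pocock_sample_size(delta=1.0, sd=1.0)
print(n_core, n_inflated)  # per-group sizes before and after drop-out inflation
```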
A total of 41 patients participated in this study. They were admitted to the hospital between March and December 2013. These patients were admitted on the first day of burn injury, with burns covering 20% - 90% of the total body surface area (TBSA) and a plausible indication for enteral nutrition for \> 48 hours. Patients with cardiogenic shock, serious inhalation injury, hepatic failure, renal failure or a contraindication to enteral feeding, as well as pregnant women, were excluded.
Of those 41 patients initially selected, seven died during the first 2 days because of the severity of their burns, and four were excluded (two as a result of intolerance to enteral feeding and two because of severe diarrhea). As a consequence, only 30 patients with 20% - 70% TBSA were considered in this study. Total burned surface area was calculated on admission using the rule of nines diagram.
The participants were randomly allocated to intervention and control groups using a randomized block procedure, stratified on TBSA burned percentage (20% - 30%, 31% - 50% and 51% - 70%), age and sex ([Figure 1](#fig26110){ref-type="fig"}).
[Figure 1: randomized allocation of participants to the intervention and control groups.]{#fig26110}
3.2. Nutrients {#sec137965}
--------------
One group of patients (Group I) started enteral feeding in the first hour of admission. A commercial enteral formula - ENTERA Meal (Karen Pharma and Food Supplement Co., Tehran, Iran; 54.6% carbohydrate, 14% protein, 31.6% fat; 1 kcal/mL) - was begun at 25 mL/h and increased to the calculated energy requirement within 3 days. From day 3 after the burn onwards, the volume of tube feeding administered varied on the basis of the patients' calculated needs and their ability to absorb the administered tube feeding (patients with \> 30% TBSA burns received additional protein, reaching 1.5 - 2 g/kg of total protein per day).
Several patients did not require tube feeding, since they could resume normal feeding; these patients were excluded from the study on the day they stopped tube feeding. We periodically evaluated the energy requirement of these patients using the Harris-Benedict equation × 1.5. The second group of patients (Group C) was given the hospital's routine diet ad libitum (liquid food for 2 days after injury, followed by a chow diet).
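As an illustration of the energy-requirement calculation mentioned above, the Python sketch below applies the classic Harris-Benedict coefficients with the 1.5 stress factor quoted in the text. It is not the authors' code, and the example patient values are hypothetical, loosely based on the cohort averages in Table 2.

```python
def harris_benedict_kcal(sex, weight_kg, height_cm, age_yr, stress_factor=1.5):
    """Daily energy requirement: Harris-Benedict basal rate x stress factor.
    Coefficients are the commonly cited classic Harris-Benedict values."""
    if sex == "male":
        bmr = 66.5 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    else:
        bmr = 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr
    return bmr * stress_factor

# Hypothetical patient roughly matching the intervention-group averages in Table 2:
kcal_per_day = harris_benedict_kcal("male", weight_kg=72.9, height_cm=168.3, age_yr=36)
ml_per_hour_target = kcal_per_day / 24   # ENTERA Meal delivers about 1 kcal/mL
print(round(kcal_per_day), round(ml_per_hour_target))
```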
3.3. Sequential Organ Failure Assessment Score Measurement {#sec137966}
----------------------------------------------------------
We collected all information necessary to calculate the SOFA score on days 0, 2, 5 and 9 post-burn ([Table 1](#tbl35524){ref-type="table"}). The arterial oxygen partial pressure (PaO~2~)/fraction of inspired oxygen (FiO~2~) ratio was recorded on the blood gas system (TechnoMedica Gastat602I, Blood Gas System, Japan), serum creatinine was measured by Jaffe's laboratory method, serum bilirubin by the DCA laboratory method and the complete blood count (platelet count) by a Sysmex KX-21N (Sysmex Corp., Kobe, Japan) cell counter. Data were measured in the main laboratory of Sina Hospital, Tabriz University of Medical Sciences, Tabriz, Iran, and the Glasgow Coma Scale (GCS) was assessed by a medical doctor. SOFA0 was based on data obtained at the time of burn intensive care unit (BICU) admission, SOFA1 at 48 hours, SOFA2 on day 5 and SOFA3 on day 9.
###### Sequential Organ Failure Assessment Scores^[a](#fn37781){ref-type="table-fn"},[b](#fn37782){ref-type="table-fn"}^
| Variables | Score 0 | Score 1 | Score 2 | Score 3 | Score 4 |
|---|---|---|---|---|---|
| **Respiratory: PaO~2~:FiO~2~, mmHg** | \> 400 | ≤ 400 | ≤ 300 | ≤ 200^c^ | ≤ 100^c^ |
| **Coagulation: Platelets, × 10^3^ µL^-1^** | \> 150 | ≤ 150 | ≤ 100 | ≤ 50 | ≤ 20 |
| **Liver: Bilirubin, mg dL^-1^** | \< 1.2 | 1.2 - 1.9 | 2.0 - 5.9 | 6.0 - 11.9 | \> 12.0 |
| **Cardiovascular: Hypotension** | No hypotension | MAP \< 70 mmHg | Dopamine ≤ 5 or dobutamine (any dose) | Dopamine \> 5, epinephrine ≤ 0.1, or norepinephrine ≤ 0.1 | Dopamine \> 15, epinephrine \> 0.1, or norepinephrine \> 0.1 |
| **CNS: GCS** | 15 | 13 - 14 | 10 - 12 | 6 - 9 | \< 6 |
| **Renal: Creatinine, mg dL^-1^ or urine output (UO), mL day^-1^** | \< 1.2 | 1.2 - 1.9 | 2.0 - 3.4 | 3.5 - 4.9 or \< 500 | \> 5.0 or \< 200 |
Abbreviations: CNS, central nervous system; FiO2, fraction of inspired oxygen; GCS, Glasgow coma scale; MAP, mean arterial pressure; PaO2, partial pressure of oxygen; SOFA, sequential organ failure assessment.
^a^Adrenergic agents were administered for at least 1 hour.
^b^Doses are given in µg/kg per min.
^c^Values are with respiratory support.
The neurological part of the SOFA score was calculated according to the GCS after admission to the ICU. In sedated patients, the score was assigned based on the last available assessment before sedation.
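To make the scoring in Table 1 concrete, the following Python sketch assigns component scores from the tabulated cut-offs and sums them. It is a simplified illustration only, not a clinical tool: the vasopressor tiers and the urine-output criterion are omitted, and boundary handling is approximate.

```python
def sofa_component(value, cutoffs, descending=True):
    """Score one SOFA organ system; `cutoffs` are the thresholds for scores 1-4.
    For 'descending' variables (PaO2/FiO2, platelets, GCS) lower values are worse;
    otherwise (bilirubin, creatinine) higher values are worse."""
    score = 0
    for points, cut in enumerate(cutoffs, start=1):
        worse = value <= cut if descending else value >= cut
        if worse:
            score = points
    return score

# Thresholds taken from Table 1 (vasopressor and urine-output criteria omitted).
def total_sofa(pao2_fio2, platelets, bilirubin, map_mmhg, gcs, creatinine):
    s = sofa_component(pao2_fio2, [400, 300, 200, 100])
    s += sofa_component(platelets, [150, 100, 50, 20])
    s += sofa_component(bilirubin, [1.2, 2.0, 6.0, 12.0], descending=False)
    s += 1 if map_mmhg < 70 else 0      # cardiovascular, without vasopressor tiers
    s += sofa_component(gcs, [14, 12, 9, 5])
    s += sofa_component(creatinine, [1.2, 2.0, 3.5, 5.0], descending=False)
    return s

print(total_sofa(350, 120, 1.0, 75, 15, 1.0))  # a low score for near-normal values
```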
3.4. Length of Stay {#sec137967}
-------------------
To measure the LOS in hospital, the number of days from admission to the ICU to final discharge from the hospital was counted.
3.5. Statistical Analysis {#sec137968}
-------------------------
SPSS version 21 (SPSS Inc., Chicago, IL, USA) was used for the statistical analysis and the Kolmogorov-Smirnov test was used to assess the normality of the data. Normally distributed variables were reported as mean ± standard deviation (SD) and an independent t-test was applied for between-group comparisons. Non-normally distributed variables were reported as median and interquartile range (IQR; 25^th^ - 75^th^ percentiles). The Wilcoxon signed-ranks test was used for within-group comparisons and the Mann-Whitney U test for between-group comparisons. P \< 0.05 was considered significant ([@A21775R18]).
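The analysis choices described above can be reproduced with standard SciPy calls. The sketch below uses randomly generated placeholder scores rather than the study data, so the printed statistics are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder SOFA observations for 15 patients per group (not the study data):
sofa1_I = rng.integers(0, 5, 15)
sofa3_I = rng.integers(0, 4, 15)
sofa3_C = rng.integers(0, 5, 15)

# Normality check, then non-parametric within- and between-group comparisons:
print(stats.kstest(sofa3_I, "norm", args=(sofa3_I.mean(), sofa3_I.std())))
print(stats.wilcoxon(sofa1_I, sofa3_I))        # within-group (paired) comparison
print(stats.mannwhitneyu(sofa3_I, sofa3_C))    # between-group comparison
```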
4. Results {#sec137970}
==========
Thirty patients were included in the present study, of whom 22 were male (73.3%) and eight were female (26.7%). The patients' ages ranged from 18 to 60 years. Patients had burns of 20% - 70% TBSA, averaging 32.26% ± 12.83%. There were no significant differences between the groups in burn percentage, age, gender or anthropometric measurements ([Table 2](#tbl35525){ref-type="table"}).
###### Patient Characteristics^[a](#fn37785){ref-type="table-fn"}^
| Characteristics | Control Group C | Intervention Group I | P Value^b^ |
|---|---|---|---|
| **Age, y** | 33.14 ± 8.08 | 36.26 ± 14.85 | .728 |
| **Male/Female ratio** | 11/4 | 11/4 | NA |
| **Weight, kg** | 66.81 ± 13.81 | 72.86 ± 17.85 | .750 |
| **Height, cm** | 164.93 ± 10.43 | 168.26 ± 11.19 | .658 |
| **TBSA burned, %** | 32.73 ± 11.84 | 31.80 ± 14.16 | .980 |
| **LOS** | 23.07 ± 11.89 | 17.64 ± 8.2 | .375 |
Abbreviations: NA, not available; LOS, Length of Stay; TBSA, Total Body Surface Area.
^a^Data are presented as mean ± SD and N = 15.
^b^P value indicates the difference between groups (independent t-test).
We selected SOFA1 (48 hours) as the within-group baseline, since previous studies showed that organ dysfunction should not be assessed in the first 48 hours, before the acute resuscitation period is finished, because acute and reversible changes in organ function might reflect massive fluid shifts in the vascular and extravascular space, or incomplete resuscitation ([@A21775R13]). There was a significant difference (P = 0.039) in SOFA3 between the two groups ([Table 3](#tbl35526){ref-type="table"}), whereas no significant between-group difference was observed in SOFA0, SOFA1 or SOFA2 ([Table 3](#tbl35526){ref-type="table"}). Within groups, there was a significant decrease (P = 0.013) in the SOFA score in group I, whilst it did not change significantly (P = 0.109) in group C ([Table 4](#tbl35527){ref-type="table"}); the median change from SOFA1 to SOFA3 was -1 \[(-1) - 0\] in group I (P = 0.013) vs. -1 \[(-2) - 0\] in group C ([Table 3](#tbl35526){ref-type="table"}). In comparison to baseline, where there was no significant difference in SOFA0 between the two groups (P = 0.317), the SOFA score decreased significantly (P = 0.013) in group I whilst it did not change significantly (P = 0.712) in the control group ([Table 5](#tbl35528){ref-type="table"}). It appears that the nutritional intervention led to a greater and statistically significant improvement in the SOFA score compared with the hospital diet ad libitum. Patients in group I also had a shorter LOS than the control group (17.64 ± 8.2 vs. 23.07 ± 11.89 days), although the difference was not statistically significant.
###### Sequential Organ Failure Assessment Scores Measurements During Four Intervals, Group C vs. Group I^[a](#fn37788){ref-type="table-fn"}^
| | SOFA0 | SOFA1 | SOFA2 | SOFA3 |
|---|---|---|---|---|
| **Control Group C** | 2.0 (2.0 - 3.0) | 2.0 (2.0 - 3.0) | 2.0 (2.0 - 3.0) | 2.0 (1.0 - 3.0) |
| **Intervention Group I** | 2.0 (1.0 - 2.0) | 2.0 (1.0 - 3.0) | 2.0 (1.0 - 2.0) | 1.0 (0.0 - 2.0) |
| **P**^b^ | 0.317 | 0.317 | 0.222 | 0.039 |
Abbreviations: SOFA, Sequential Organ Failure Assessment.
^a^Data are presented as median (IQR).
^b^P indicates difference between groups (Mann-Whitney test).
###### A Between- and Within- Group Comparison - SOFA1 and SOFA3^[a](#fn37791){ref-type="table-fn"}^
| | SOFA1 | SOFA3 | P^b^ |
|---|---|---|---|
| **Control Group C** | 2.0 (2.0 - 3.0) | 2.0 (1.0 - 3.0) | .109 |
| **Intervention Group I** | 2.0 (1.0 - 3.0) | 1.0 (0.0 - 2.0) | .013 |
| **P**^c^ | 0.222 | 0.039 | NA |
Abbreviations: NA, not available; SOFA, Sequential Organ Failure Assessment.
^a^Data are presented as median (IQR).
^b^P indicates difference within groups (Wilcoxon signed-ranks test).
^c^P indicates difference between groups (Mann-Whitney test).
###### A Between- and Within- Group Comparison - SOFA0 and SOFA3^[a](#fn37795){ref-type="table-fn"}^
| | SOFA0 | SOFA3 | P^b^ |
|---|---|---|---|
| **Control Group C** | 2.0 (2.0 - 3.0) | 2.0 (1.0 - 3.0) | .712 |
| **Intervention Group I** | 2.0 (1.0 - 2.0) | 1.0 (0.0 - 2.0) | .013 |
| **P Value**^c^ | 0.317 | 0.039 | NA |
Abbreviations: NA, not available; SOFA, Sequential Organ Failure Assessment.
^a^Data are presented as median (IQR).
^b^P indicates difference within groups (Wilcoxon signed-rank test).
^c^P indicates difference between groups (Mann-Whitney test).
5. Discussion {#sec137972}
=============
Burn injury is considered one of the most hypermetabolic states, and the hypermetabolism might persist up to 2 years after the injury ([@A21775R19]). Nutritional therapy is a crucial part of burn care ([@A21775R3], [@A21775R9], [@A21775R12], [@A21775R20]). Multiple studies have pointed out that malnourished patients have worse outcomes, including prolonged LOS in hospital, increased readmission and higher mortality, in comparison to well-nourished patients ([@A21775R21], [@A21775R22]).
Effective provision of the required amount of calories can be ensured via the oral, enteral or parenteral route. However, enteral nutrition seems to be the preferred route in acutely injured burn patients.
In human studies, it has been shown that early and continuous enteral nutrition that effectively delivers the caloric requirement decreases the hypermetabolic response and, at the same time, lowers circulating levels of catecholamines, cortisol and glucagon ([@A21775R23], [@A21775R24]). Early initiation of enteral nutrition also helps support mucosal integrity, motility and intestinal blood flow, all of which play a vital role in preventing the intestinal hypoperfusion or ileus caused by delays in resuscitation or reperfusion ([@A21775R25]). In animal studies, Mochizuki et al. showed that post-burn hypercatabolism and the hypermetabolic response are decreased when adequate calories are administered via the intragastric route to fulfill the required energy consumption ([@A21775R26]). The nutritional state and gut integrity are maintained as well ([@A21775R12]).
The results of the current study showed that the SOFA score decreased significantly in the group that received nutrition support {-1 \[(-1) -- 0\], P = 0.013 vs. -1 \[(-2) -- 0\], P = 0.109}, which may be related to a lower hypermetabolic response ([@A21775R1], [@A21775R2], [@A21775R23], [@A21775R24]), a less negative nitrogen balance and improved immunity, causing the incidence of infection to decrease ([@A21775R27]). Length of hospital stay was also shorter in this group (17.64 ± 8.2 vs. 23.07 ± 11.89 days), plausibly as a result of improved immunity and better wound healing, which decrease the infection rate ([@A21775R28]).
Consistent with the present study, Rimdeika et al. reported that burned patients receiving 30 kcal/kg per 24 hours or more had lower rates of sepsis, pneumonia and mortality, together with a shorter duration of treatment ([@A21775R29]). In a different study, Suri et al. also showed a reduction in mortality and LOS in burned patients who were nourished aggressively ([@A21775R27]).
Khorasani et al. obtained similar results in a study conducted on burned children: those who received early enteral nutrition had a shorter LOS and a decreased mortality rate ([@A21775R30]).
Nutrition therapy plays a key role in burn patients, especially when an aggressive approach is applied ([@A21775R20]). Proper nutrition is essential for wound healing, mediation of inflammation, suppression of the hypermetabolic response and reduction of sepsis-related morbidity and mortality ([@A21775R31]).
Our study has several limitations, including the short study period, the small sample size and the fact that \> 50% of our patients were in the 20% - 30% TBSA burn range.
This trial is the first to investigate the effects of proper nutrition on critically burned patients, and the accuracy of the study is high, as all assessments were performed by a single observer.
5.1. Conclusions {#sec137971}
----------------
The results of this study demonstrated that proper nutritional therapy after thermal injury reduced post-burn organ damage, as evidenced by changes in the SOFA score. It also reduced the LOS in hospital. We conclude that proper nutrition support is an important factor and should therefore be considered a critical aspect of the care given to burn patients in hospitals. Adopting such a practice will benefit the patients and will also reduce overall costs.
This is a report of a database from the PhD thesis of Dr. Sima Lak, entitled "Effect of taurine supplementation and types of nutrition on inflammatory factors and clinical outcome in severely burned patients with SIRS receiving enteral nutrition", registered at the Infectious and Tropical Diseases Research Center, Tabriz University of Medical Sciences, Tabriz, Iran. We wish to thank all colleagues from the Burn Center of Sina Hospital, Tabriz University of Medical Sciences, Tabriz, Iran, for their assistance.
**Authors' Contribution:**Conception and design: Sima Lak, Alireza Ostadrahimi; analysis and interpretation: Mohammad Asghari-Jafarabadi; data collection: Sima Lak, Hossein Zalouli, Sanaz Beigzali; writing the article: Sima Lak, Sanaz Beigzali; critical revision of the article: Alireza Ostadrahimi, Behrooz Nagili; final approval of the article: Alireza Ostadrahimi.
**Funding/Support:**This work was supported by the Infectious and Tropical Diseases Research Center (grant No. 10701), Tabriz University of Medical Sciences, Tabriz, Iran.
|
tomekkorbak/pile-curse-small
|
PubMed Central
|
New Report Outlines Deep Public Distrust of Federal Government
“If there’s a rabid dog running around your neighborhood, you’re probably not going to assume something good about that dog, and you’re probably going to put your children out of the way. . . . Each party has one more — the GOP Dec. 15 in Las Vegas and the Democrats Dec. 19 in Manchester, N.H. — before the 2016 primary season will be in full swing. Republicans scored a big upset in the Kentucky governor’s race and held the state Senate in Virginia by one seat, but Democrats took control of Pennsylvania’s Supreme Court.
While some contend debates are a critical part of the vetting process for the nation’s highest office, others say debates are little more than a highly orchestrated waste of everyone’s time because our system ensures that most presidential elections are decided before candidates face off on a lighted stage. Hopefully they can come and take this dog away and create a safe environment once again.” $100,000 The cost of a ticket for a couple to attend a rock concert hosted by Sting next month benefiting Democratic presidential candidate Hillary Clinton. 54% Americans who say in a new Washington Post-ABC News poll the United States should not take refugees from Syria and other parts of the Middle East, even if they are screened for security. Ballot measures throughout the nation were equally mixed as a pro-LGBT initiative failed in Houston, a clean elections referendum passed in Maine, and a flawed marijuana regulatory regime was handily rejected in Ohio.
Americans are fearful of another terrorist attack after what happened in Paris, and they’re largely distrustful of President Obama’s ability to prevent one. Republicans turned the election into a referendum on terrorism. “We are not yet safe,” Vice President Dick Cheney declared. “Threats are still out there. The terrorists are still plotting and planning, trying to find ways to attack the United States.” Democrats accused Republicans of exploiting fear. “A true leader inspires hope and vanquishes fear,” Senator Edward Kennedy (D-Mass.) said. “This administration does neither.
Just 45 percent of Democrats said in that same poll they have only a “fair” or “no” amount of faith in their government to prevent terror attacks at home. Vox’s Matt Yglesias called the loss in Virginia a “disaster.” Molly Ball at the Atlantic declared that Democrats’ efforts on social issues have doomed their electoral chances. Instead, it brings fear.” Last week, President Barack Obama charged that Republicans “have been playing on fear in order to try to score political points or to advance their campaigns.” But the fear is real. The numbers speak to a broader disarray within the Democratic Party about the path forward after Paris: This week, 47 House Democrats rebuffed Obama and joined Republicans to vote for a bill that would severely limit the president’s ability to place 10,000 Syrian refugees in the country next year.
One viral tweet stated that, “Under President Obama, Democrats have lost 900+ state legislature seats, 12 governors, 69 House seats, 13 Senate seats. That’s some legacy.” While not exactly false, it ignores the nationwide wave of Obama’s 2008 campaign, which gave Democrats historic margins in Congress. Debate viewers get to know each candidate a little better, in the way we will know them if they should become president — as a TV personality, for lack of a better term. The fact is, like so many Americans, I rely on the debates to form my opinion about candidates — both in the policies and positions they take and who they are as people. The hyperbole surrounding the results of the 2015 election masks the fact that while Republicans have indeed racked up major gains in practically every level of government under President Obama, it’s only been the result of major progressive change.
And especially when we are talking about primaries, where the candidates have similar political philosophies and policy agendas, as I assess the candidates I’m not solely interested in their policy prescriptions. Furthermore, any alternative scenario where Barack Obama played it safe and didn’t try to reform our healthcare system or otherwise enact his progressive agenda would have likely been deemed a failure by Democratic standards, regardless of electoral outcomes. The candidates say they’re doing their due diligence to keep Americans safe from a new terror threat that has risen from the ashes in war-torn Syria.
Put another way, Democrats under Barack Obama have long faced a choice: either govern modestly and enjoy electoral success, or push the boundaries of progress and suffer the blowback. French President Francois Hollande promised, “France will be merciless against the barbarians of death.” He said his country would fight “without a respite, without a truce… It is not a question of containing but of destroying” Islamic State.
You can bet Democrats are writing down the comments Republican candidates made last week about Muslims and refugees to bring up in the general election next year. To see what the world may have looked like had Obama chosen the more moderate path, just examine the previous Democrat in the Oval Office: Bill Clinton.
In his first two years, President Clinton chose the progressive route, leading a Democratic Congress in raising taxes on the wealthy, passing family medical leave, better regulating gun purchases, banning assault weapons, creating AmeriCorps, and addressing domestic violence through the Violence Against Women Act. The Democratic nominee will have to run on Obama’s record. “Hillary Clinton can’t walk away from President Obama’s failing ISIS strategy because she helped craft it and even praised it,” a spokesman for the Republican National Committee said. Former President Bill Clinton had warned his fellow Democrats, “Strong and wrong beats weak and right.” Nevertheless, there are reasons why it may be different in 2016.
President Clinton oversaw a tremendous economic boom and managed foreign policy so deftly that in 1998—even in the midst of being impeached—the Democrats actually gained seats in the House, a historical rarity for midterm elections. The latest Reuters poll of Republican voters nationwide shows Trump surging into the lead for the 2016 Republican nomination, with nearly 40 percent of the vote. If congressional Republicans are unable to block Obama’s plan to admit Syrian refugees, conservatives may erupt in fury at GOP leaders and rally to Trump’s support. Ronald Reagan was as popular as they come, but he lost big time in the 1982 midterms as voters soured on his budget cuts and perceived poor handling of the economy. All the polls for the past month show a majority of Americans with an unfavorable opinion of Trump (the average is 55 percent unfavorable to 37 percent favorable).
That’s just a fancy way of saying that voters form opinions about complicated issues — like who to support for president — in part based on cues from trusted political actors, media, or just more engaged friends and family. When presidents aren’t on the ballot themselves, it’s much easier for the opposition to motivate their voters to turn out relative to that president’s supporters.
Obviously, this is only Congress, and Democratic losses in governorships and state legislatures throughout the country are devastating in their own right. In the end, debates are an opportunity to engage voters, to reframe the issues and to shape voters’ views on both policy ideas and political actors. They provide us with an important window into the strengths and weaknesses of our prospective political leaders, so viewers and candidates are wise to take them seriously.
Senator Chuck Schumer, D-N.Y., seemed to hint at this alternative when he said in a 2014 speech: “After passing the stimulus, Democrats should have continued to propose middle class-oriented programs and built on the partial success of the stimulus. We took their mandate and put all of our focus on the wrong problem–health care reform.” Feel free to disagree, but Schumer is simply articulating the nature of the tradeoff Democrats made in 2010.
Representative Steve King (R-Iowa) has argued that Obama’s refugee plan aims to counter low fertility rates of native-born Americans and “fill America up in a fashion that has kicked sideways . . . assimilation into the American dream, American civilization.” Senator Marco Rubio (R-Fla.), along with other Republican contenders, has criticized Hillary Clinton for refusing to condemn Islamic terrorism by name. “This is a clash of civilizations,” Rubio said. “There is no middle ground.” Governor John Kasich (R-Ohio) raised the idea of creating a new federal agency to promote “Judeo-Christian values.” The refugee issue has become fodder in the ongoing culture war. It’s the most interesting time in politics for hearing about real differences involving foreign policy, economic policies and more narrow concerns that often are ignored in general elections. If you believe that elections need to be won to maintain majorities and hold important offices, then the Barack Obama era could easily be labeled a failure. Under our antiquated voting rules, Americans’ diversity of political thought gets channeled into two “viable” choices: Republicans and Democrats.
Healthcare reform, the stimulus, the automotive industry bailouts, the first reining in of Wall Street since the Great Depression, the end of “don’t ask, don’t tell,” major regulation of air and water pollution, the rejection of Keystone XL, executive orders on immigration, the opening of Cuba, an Iranian nuclear deal—the list goes on and on. And like every other time in modern history that Democratic presidents have flexed their activist muscles, it has mobilized opposition voters much more than the president’s supporters.
Candidates’ character and issue positions are secondary to whether they’re on “team blue” or “team red.” Take presidential elections, where we vote in 51 state boxes, counting the District of Columbia. Luckily for progressives, President Obama and his team recognize that because regulation, legislation and international deals are so difficult to unwind, the effects of these accomplishments will likely be felt for generations. No matter how much money is spent and how well a candidate debates, we already can project winners in the 35 states that have been absolutely ignored for three straight elections due to predictability. You can disagree with that strategy all you want, but when faced with the choice, I imagine most Democrats would cut the same deal as President Obama did.
After studying all presidential election polls between 1952 and 2008, political scientists Robert Erikson and Christopher Wlezien concluded “the best prediction from the debates is the initial verdict before the debates.” Most congressional elections are also locked down. There are real benefits to holding power, if only to stop the other side from enacting their agenda, but most voters want beneficial changes to be made in society. Last year, FairVote’s Monopoly Politics projected final winners in nearly 9 in 10 House races in 2016 using a methodology that missed only one of its last 700 projections.
After all, you don’t see many candidates of either party running for office saying, “Let’s keep America the same way it is today!” Electoral victories come and go, but what you do with them echoes in eternity. It’s been enacted in nearly a dozen states and can be in place by 2020, allowing every American to experience a close presidential election as one where their vote matters.
Modeled in a dozen cities, ranked choice voting allows third parties and independents to contest elections without being “spoilers.” Maine voters will have the chance to enact it next year for Congress and state elections.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Developed in collaboration with Team Sky
If you are looking for an everyday road helmet that is also aero, the KASK Protone helmet offers the perfect combination. Team Sky used the first versions of the Protone in the 2014 Tour de France. Since then KASK continued to collaborate with Team Sky to refine the helmet’s aerodynamics, strength and comfort.
Light Weight, Aero and Comfortable
What’s nice about the Protone helmet is that you get all the benefits of an aero helmet without sacrificing ventilation. Per the Italian manufacturer, it’s lightweight at 215 g (size medium) and boasts one of the lowest drag coefficients (cx) of any vented helmet on the market. I tested this helmet on a couple of hot, humid days into the 90s, and the 20 cooling vents get the job done.
KASK’s Octo Fit adjustment system uses 8 planes of adjustment to fit any shaped head. To fit, use the adjustment bar that slides up and down about 2” in the back of the helmet. This sets the vertical position on your head. There are two cups that slide in and out, plus flex to fit properly on your head. The vertical positioning bar and cups need to be set up initially and then use the single rear dial to adjust the final fit which secures the helmet. On the bike adjustments were easy using the dial to tighten or loosen with one hand.
The helmet’s side straps are static, so there’s no adjustment around the ears. Placement worked fine for me, but I wonder if someone with a different shaped head might have issues. The design is nicely done so the straps lay flat against your face and eliminates gaping or twisting. The chinstrap is covered in an eco-leather material which is washable and very comfortable.
The inner padding made from CoolMax fabrics is removable and washable. It is also treated with a Sanitized antimicrobial process to keep odors at bay. The 3D DRY padding has a multi-layer open cell construction to reduce surface area and thus increased comfort.
The Protone has little touches like the high viz strip along the back, reflective logo stickers on the sides and a nice little carrying bag to keep your helmet protected when not in use. KASK also sells a winter cap and spare internal pads as an add on. The helmet is available in 19 color combinations so you’re bound to find one to match your bike or favorite kit.
MIT not MIPS for Crash Protection
KASK uses several technologies to improve the safety of their helmets. They’ve strengthened the inner frame to provide greater mechanical strength and a better compactness. Should there be an impact, it prevents the helmet from breaking into many pieces.
Their innovative Multi In-Molding technology joins the inner polystyrene cap to the outer polycarbonate one, ensuring better shock absorption. MIT Technology then provides greater safety and more complete protection thanks to the polycarbonate layer that covers the shell on the top, base ring and back.
Don’t Buy Fake
There’s a lot of counterfeiting in the bike frame and component industry, including helmets. You don’t want to gamble when it comes to protecting your head. There are some fake KASK Protone helmets for sale, especially on eBay. Remember the golden rule: “If it seems too good to be true, it probably is.” If you see a new Protone selling for under $100, it is most likely a fake made in China. The real KASK is made in Italy. Some telltale signs it is a fake:
Gaps at the seams
No CPSC sticker on the back of the helmet
Logo and model name is printed on instead of reflective stickers
I found a video that details real vs fake. Check it out here:
Bottom Line
The KASK Protone helmet is an excellent choice for an everyday road helmet that is also aerodynamic. Designed in collaboration with Team Sky, this helmet provides safety, aero and comfort all in one. All this comes with a hefty price tag, but if you want to lower your drag coefficient and stay cool, this might be the helmet for you.
Sheri Rosenbaum regularly contributes articles and reviews products for RBR. She’s an avid recreational roadie who lives in the Chicago area and a major advocate for women's cycling, serving on the board of directors and volunteering with the Dare2tri Paratriathlon Club. Click to read Sheri's full bio.
Comments
Thanks for the review. Note that the quoted weight (215 g) is for the European (CE) version. Claimed weight for the US (CPSC) version is 270 g – actual weight of mine is 264 g.
I bought my Protone very recently, and have only used it a couple of times. The second time, though, was a century ride with some big climbs and temperatures hitting 90 deg. No issues with fit, comfort or cooling. No data on the aero benefits, but perhaps it is a bit quieter than my old helmet. The unvented section on top provided excellent sun protection for the top of my head, and no weird sunburn patterns on the bald spot!
Overall, it’s the most expensive helmet I have ever purchased, but I am very pleased with it so far.
Have you ever tried to rotate a helmet that wasn’t MIPS? They move easily, even when attached tightly. I’m not a believer in the added safety of MIPS (nor am I convinced that the aero qualities of this KASK helmet will make a noticeable difference for most riders).
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Familial uterine hernia syndrome: report of an Arab family with four affected males.
We report an Arab Bedouin family including four males with uterine hernia syndrome. All had a male chromosome constitution and phenotype, inguinal herniae, cryptorchidism, and persistence of Müllerian derivatives. Histopathological studies confirmed the presence of both testicular tissue and Müllerian derivatives. The presence of two affected brothers and two affected maternal uncles suggests X-linked inheritance. Autosomal recessive determination with male sex limitation is also a possibility based on parental consanguinity in one sibship.
|
tomekkorbak/pile-curse-small
|
PubMed Abstracts
|
---
abstract: |
We make use of new near and mid-IR photometry of the Pleiades cluster in order to help identify proposed cluster members. We also use the new photometry with previously published photometry to define the single-star main sequence locus at the age of the Pleiades in a variety of color-magnitude planes.
The new near and mid-IR photometry extend effectively two magnitudes deeper than the 2MASS All-Sky Point Source catalog, and hence allow us to select a new set of candidate very low mass and sub-stellar mass members of the Pleiades in the central square degree of the cluster. We identify 42 new candidate members fainter than Ks =14 (corresponding to 0.1 Mo). These candidate members should eventually allow a better estimate of the cluster mass function to be made down to of order 0.04 solar masses.
We also use new IRAC data, in particular the images obtained at 8 um, in order to comment briefly on interstellar dust in and near the Pleiades. We confirm, as expected, that – with one exception – a sample of low mass stars recently identified as having 24 um excesses due to debris disks do not have significant excesses at IRAC wavelengths. However, evidence is also presented that several of the Pleiades high mass stars are found to be impacting with local condensations of the molecular cloud that is passing through the Pleiades at the current epoch.
author:
- 'John R. Stauffer'
- 'Lee W. Hartmann'
- 'Giovanni G. Fazio, Lori E. Allen, Brian M. Patten'
- 'Patrick J. Lowrance, Robert L. Hurt, Luisa M. Rebull'
- 'Roc M. Cutri, Solange V. Ramirez'
- 'Erick T. Young, George H. Rieke, Nadya I. Gorlova, James C. Muzerolle'
- 'Cathy L. Slesnick'
- 'Michael F. Skrutskie'
title: 'Near and Mid-IR Photometry of the Pleiades, and a New List of Substellar Candidate Members'
---
Introduction {#sec:intro}
============
Because of its proximity, youth, richness, and location in the northern hemisphere, the Pleiades has long been a favorite target of observers. The Pleiades was one of the first open clusters to have members identified via their common proper motion [@trumpler21], and the cluster has since then been the subject of more than a dozen proper motion studies. Some of the earliest photoelectric photometry was for members of the Pleiades [@cummings21], and the cluster has been the subject of dozens of papers providing additional optical photometry of its members. The youth and nearness of the Pleiades make it a particularly attractive target for identifying its substellar population, and it was the first open cluster studied for those purposes [@jameson89; @stauffer89]. More than 20 papers have been subsequently published, identifying additional substellar candidate members of the Pleiades or studying their properties.
We have three primary goals for this paper. First, while extensive optical photometry for Pleiades members is available in the literature, photometry in the near and mid-IR is relatively spotty. We will remedy this situation by using new 2MASS $JHK_s$ and Spitzer IRAC photometry for a large number of Pleiades members. We will use these data to help identify cluster non-members and to define the single-star locus in color-magnitude diagrams for stars of 100 Myr age. Second, we will use our new IR imaging photometry of the center of the Pleiades to identify a new set of candidate substellar members of the cluster, extending down to stars expected to have masses of order 0.04 [M$_{\sun}$]{}. Third, we will use the IRAC data to briefly comment on the presence of circumstellar debris disks in the Pleiades and the interaction of the Pleiades stars with the molecular cloud that is currently passing through the cluster.
In order to make best use of the IR imaging data, we will begin with a necessary digression. As noted above, more than a dozen proper motion surveys of the Pleiades have been made in order to identify cluster members. However, no single catalog of the cluster has been published which attempts to collect all of those candidate members in a single table and cross-identify those stars. Another problem is that while there have been many papers devoted to providing optical photometry of cluster members, that photometry has been bewilderingly inhomogeneous in terms of the number of photometric systems used. In Sec. 3 and in the Appendix, we describe our efforts to create a reasonably complete catalog of candidate Pleiades members and to provide optical photometry transformed to the best of our ability onto a single system.
New Observational Data {#sec:observations}
======================
2MASS “6x" Imaging of the Pleiades
----------------------------------
During the final months of Two Micron All Sky Survey (2MASS; @skrutskie06) operations, a series of special observations were carried out that employed exposures six times longer than used for the the primary survey. These so-called “6x" observations targeted 30 regions of scientific interest including a 3 deg $x$ 2 deg area centered on the Pleiades cluster. The 2MASS 6x data were reduced using an automated processing pipeline similar to that used for the main survey data, and a calibrated 6x Image Atlas and extracted 6x Point and Extended Source Catalogs (6x-PSC and 6x-XSC) analogous to the 2MASS All-Sky Atlas, PSC and XSC have been released as part of the 2MASS Extended Mission. A description of the content and formats of the 6x image and catalog products, and details about the 6x observations and data reduction are given by Cutri et al. (2006; section A3). [^1] The 2MASS 6x Atlas and Catalogs may be accessed via the on-line services of the NASA/IPAC Infrared Science Archive (http://irsa.ipac.caltech.edu).
Figure 1 shows the area on the sky imaged by the 2MASS 6x observations in the Pleiades field. The region was covered by two rows of scans, each scan being one degree long (in declination) and 8.5’ wide in right ascension. Within each row, the scans overlap by approximately one arcminute in right ascension. There are small gaps in coverage in the declination boundary between the rows, and one complete scan in the southern row is missing because the data in that scan did not meet the minimum required photometric quality. The total area covered by the 6x Pleiades observations is approximately 5.3 sq. degrees.
There are approximately 43,000 sources extracted from the 6x Pleiades observations in the 2MASS 6x-PSC, and nearly 1,500 in the 6x-XSC. Because there are at most about 1000 Pleiades members expected in this region, only $\sim$2% of the 6x-PSC sources are cluster members, and the rest are field stars and background galaxies. The 6x-XSC objects are virtually all resolved background galaxies. Near infrared color-magnitude and color-color diagrams of the unresolved sources from the 2MASS 6x-PSC and all sources in the 6x-XSC sources from the Pleiades region are shown in Figures 2 and 3, respectively. The extragalactic sources tend to be redder than most stars, and the galaxies become relatively more numerous towards fainter magnitudes. Unresolved galaxies dominate the point sources that are fainter than $K_s$ $>$ 15.5 and redder than $J-K_s >$ 1.2 mag.
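As an illustration of the separation just described, the following Python sketch flags the color-magnitude regime dominated by unresolved background galaxies. It is not part of the original analysis; the file name is hypothetical and the column names are only assumed to resemble the 2MASS PSC schema.

```python
import numpy as np
from astropy.table import Table

# Hypothetical file and column names resembling the 2MASS 6x-PSC schema.
psc = Table.read("pleiades_6x_psc.fits")
j, ks = psc["j_m"], psc["k_m"]

# Regime where unresolved galaxies dominate the point sources
# (fainter than Ks = 15.5 and redder than J-Ks = 1.2, as described above).
likely_galaxy = np.asarray((ks > 15.5) & ((j - ks) > 1.2))
candidates = psc[~likely_galaxy]
print(len(psc), len(candidates))
```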
The 2MASS 6x observations were conducted using the same freeze-frame scanning technique used for the primary survey [@skrutskie06]. The longer exposure times were achieved by increasing the “READ2-READ1" integration to 7.8 sec from the 1.3 sec used for primary survey. However, the 51 ms “READ1" exposure time was not changed for the 6x observations. As a result, there is an effective “sensitivity gap" in the 8-11 mag region where objects may be saturated in the 7.8 sec READ2-READ1 6x exposures, but too faint to be detected in the 51 ms READ1 exposures. Because the sensitivity gap can result in incompleteness and/or flux bias in the photometric overlap regime, the near infrared photometry for sources brighter than J=11 mag in the 6x-PSC was taken from the 2MASS All-Sky PSC during compilation of the catalog of Pleiades candidate members presented in Table 2 (c.f. Section 3).
Shallow IRAC Imaging
--------------------
Imaging of the Pleiades with Spitzer was obtained in April 2004 as part of a joint GTO program conducted by the IRAC instrument team and the MIPS instrument team. Initial results of the MIPS survey of the Pleiades have already been reported in @gorlova06. The IRAC observations were obtained as two astronomical observing requests (AORs). One of them was centered near the cluster center, at RA=03h47m00.0s and Dec=24d07m (2000), and consisted of a 12 row by 12 column map, with “frametimes" of 0.6 and 12.0 seconds and two dithers at each map position. The map steps were 290$\arcsec$ in both the column and row direction. The resultant map covers a region of approximately one square degree, and a total integration time per position of 24 sec over most of the map. The second AOR used the same basic mapping parameters, except it was smaller (9 rows by 9 columns) and was instead centered northwest from the cluster center at RA=03h44m36.0s and Dec=25d24m. A two-band color image of the AOR covering the center of the Pleiades is shown in Figure \[fig:pleIRAC\]. A pictorial guide to the IRAC image providing Greek names for a few of the brightest stars, and @hertzsprung47 numbers for several stars mentioned in Section 6 is provided in Figure \[fig:cartoon\].
We began our analysis with the basic calibrated data (BCDs) from the Spitzer pipeline, using the S13 version of the Spitzer Science Center pipeline software. Artifact mitigation and masking was done using the IDL tools provided on the Spitzer contributed software website. For each AOR, the artifact-corrected BCDs were combined into single mosaics for each channel using the post-BCD “MOPEX" package [@makovoz05]. The mosaic images were constructed with 1.22$\times$1.22 arcsecond pixels (i.e., approximately the same pixel size as the native IRAC arrays).
We derived aperture photometry for stars present in these IRAC mosaics using both APEX (a component of the MOPEX package) and the “phot" routine in DAOPHOT. In both cases, we used a 3 pixel radius aperture and a sky annulus from 3 to 7 pixels (except that for Channel 4, for the phot package we used a 2 pixel radius aperture and a 2 to 6 pixel annulus because that provided more reliable fluxes at low flux levels). We used the flux for zero magnitude calibrations provided in the IRAC data handbook (280.9, 179.7, 115.0 and 64.1 Jy for Ch 1 through Ch 4, respectively), and the aperture corrections provided in the same handbook (multiplicative flux correction factors of 1.124, 1.127, 1.143 and 1.584 for Ch 1-4, inclusive. The Ch4 correction factor is much bigger because it is for an aperture radius of 2 rather than 3 pixels.).
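For readers who wish to reproduce the flux-to-magnitude step, the following Python sketch applies the zero-magnitude fluxes and multiplicative aperture corrections quoted above. It is an illustrative reconstruction rather than the authors' pipeline, and it assumes the aperture flux has already been integrated and converted to Jy.

```python
import math

# Zero-magnitude fluxes (Jy) and multiplicative aperture corrections quoted above,
# for IRAC channels 1-4 (3-pixel aperture and 3-7 pixel annulus; 2 pixels for Ch 4).
F0 = {1: 280.9, 2: 179.7, 3: 115.0, 4: 64.1}
APCOR = {1: 1.124, 2: 1.127, 3: 1.143, 4: 1.584}

def irac_mag(channel, aperture_flux_jy):
    """Convert an aperture flux in Jy to an IRAC magnitude."""
    flux = aperture_flux_jy * APCOR[channel]   # correct to the calibration aperture
    return -2.5 * math.log10(flux / F0[channel])

# e.g. a hypothetical 1 mJy source in channel 1:
print(round(irac_mag(1, 1.0e-3), 3))
```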
Figure \[fig:plecomp1\] and Figure \[fig:plecomp2\] provide two means to assess the accuracy of the IRAC photometry. The first figure compares the aperture photometry from APEX to that from phot, and shows that the two packages yield very similar results when used in the same way. For this reason, we have simply averaged the fluxes from the two packages to obtain our final reported value. The second figure shows the difference between the derived 3.6 and 4.5 [$\mu$m]{} magnitudes for Pleiades members. Based on previous studies (e.g. @allen04), we expected this difference to be essentially zero for most stars, and the Pleiades data corroborate that expectation. For \[3.6\]$<$10.5, the RMS dispersion of the magnitude difference between the two channels is 0.024 mag. Assuming that each channel has similar uncertainties, this indicates an internal 1-$\sigma$ accuracy of order 0.017 mag. The absolute calibration uncertainty for the IRAC fluxes is currently estimated at of order 0.02 mag. Figure \[fig:plecomp2\] also shows that fainter than \[3.6\]=10.5 (spectral type later than about M0), the \[3.6\]$-$\[4.5\] color for M dwarfs departs slightly from zero, becoming increasingly redder to the limit of the data (about M6).
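The quoted internal accuracy follows directly from the dispersion of the channel-to-channel difference under the stated assumption of equal, independent errors in each channel; a one-line check in Python:

```python
import math

rms_diff = 0.024                         # RMS of [3.6]-[4.5] for [3.6] < 10.5 (see text)
sigma_single = rms_diff / math.sqrt(2)   # equal, independent per-channel errors assumed
print(round(sigma_single, 3))            # ~0.017 mag, matching the value quoted above
```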
A Catalog of Pleiades Candidate Members {#sec:catalog}
=======================================
If one limits oneself to only stars visible with the naked eye, it is easy to identify which stars are members of the Pleiades – all of the stars within a degree of the cluster center that have $V<$ 6 are indeed members. However, if one were to try to identify the M dwarf stellar members of the cluster (roughly 14 $<V<$ 23), only of order 1% of the stars towards the cluster center are likely to be members, and it is much harder to construct an uncontaminated catalog. The problem is exacerbated by the fact that the Pleiades is old enough that mass segregation through dynamical processes has occurred, and therefore one has to survey a much larger region of the sky in order to include all of the M dwarf members.
The other primary difficulty in constructing a comprehensive member catalog for the Pleiades is that the pedigree of the candidates varies greatly. For the best studied stars, astrometric positions can be measured over temporal baselines ranging up to a century or more, and the separation of cluster members from field stars in a vector point diagram (VPD) can be extremely good. In addition, accurate radial velocities and other spectral indicators are available for essentially all of the bright cluster members, and these further allow membership assessment to be essentially definitive. Conversely, at the faint end (for stars near the hydrogen burning mass limit in the Pleiades), members are near the detection limit of the existing wide-field photographic plates, and the errors on the proper motions become correspondingly large, causing the separation of cluster members from field stars in the VPD to become poor. These stars are also sufficiently faint that spectra capable of discriminating members from field dwarfs can only be obtained with 8m class telescopes, and only a very small fraction of the faint candidates have had such spectra obtained. Therefore, any comprehensive catalog created for the Pleiades will necessarily have stars ranging from certain members to candidates for which very little is known, and where the fraction of spurious candidate members increases to lower masses.
In order to address the membership uncertainties and biases, we have chosen a sliding scale for inclusion in our catalog. For all stars, we require that the available photometry yields location in color-color and color-magnitude diagrams consistent with cluster membership. For the stars with well-calibrated photoelectric photometry, this means the star should not fall below the Pleiades single-star locus by more than about 0.2 mag or above that locus by more than about 1.0 mag (the expected displacement for a hierarchical triple with three nearly equal mass components). For stars with only photographic optical photometry, where the 1-$\sigma$ uncertainties are of order 0.1 to 0.2 mag, we still require the star’s photometry to be consistent with membership, but the allowed displacements from the single star locus are considerably larger. Where accurate radial velocities are known, we require that the star be considered a radial velocity member based on the paper where the radial velocities were presented. Where stars have been previously identified as non-members based on photometric or spectroscopic indices, we adopt those conclusions.
Two other relevant pieces of information are sometimes available. In some cases, individual proper motion membership probabilities are provided by the various membership surveys. If no other information is available, and if the membership probability for a given candidate is less than 0.1, we exclude that star from our final catalog. However, often a star appears in several catalogs; if it appears in two or more proper motion membership lists we include it in the final catalog even if P $<$ 0.1 in one of those catalogs. Second, an entirely different means to identify candidate Pleiades members is via flare star surveys towards the cluster [@haro82; @jones81]. A star with a formally low membership probability in one catalog but whose photometry is consistent with membership and that was identified as a flare star is retained in our catalog.
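Schematically, the inclusion rules described above can be paraphrased as in the following minimal Python sketch (the function and argument names are ours; this is an illustration of the decision logic, not the actual code used to assemble the catalog).

```python
def retain_candidate(phot_consistent, rv_member, prior_nonmember,
                     pm_probabilities, is_flare_star):
    """Paraphrase of the membership rules described in the text.

    phot_consistent  : photometry consistent with the cluster locus
    rv_member        : True, False, or None if no radial velocity exists
    prior_nonmember  : previously rejected on photometric/spectroscopic grounds
    pm_probabilities : proper-motion membership probabilities (list, may be empty)
    is_flare_star    : identified as a flare star towards the Pleiades
    """
    if not phot_consistent or prior_nonmember:
        return False
    if rv_member is False:
        return False
    # A single low proper-motion probability excludes a star only when no other
    # proper-motion list or the flare-star surveys support its membership.
    if (len(pm_probabilities) == 1 and pm_probabilities[0] < 0.1
            and not is_flare_star):
        return False
    return True
```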
Further details of the catalog construction are provided in the appendix, as are details of the means by which the $B$, $V$, and $I$ photometry have been homogenized. A full discussion and listing of all of the papers from which we have extracted astrometric and photometric information is also provided in the appendix. Here we simply provide a very brief description of the inputs to the catalog.
We include candidate cluster members from the following proper motion surveys: @trumpler21, @hertzsprung47, @jones81, Pels and Lub – as reported in @vanlee86, @stauffer91, @artyukhina69, @hambly93, @pinfield00, @adams01 and @deacon04. Another important compilation which provides the initial identification of a significant number of low mass cluster members is the flare star catalog of @haro82. Table 1 provides a brief synopsis of the characteristics of the candidate member catalogs from these papers. The Trumpler paper is listed twice in Table 1 because there are two membership surveys included in that paper, with differing spatial coverages and different limiting magnitudes.
In our final catalog, we have attempted to follow the standard naming convention whereby the primary name is derived from the paper where it was first identified as a cluster member. An exception to this arises for stars with both @trumpler21 and @hertzsprung47 names, where we use the Hertzsprung numbers as the standard name because that is the most commonly used designation for these stars in the literature. The failure of the Trumpler numbers to be given precedence in the literature perhaps stems from the fact that the Trumpler catalog was published in the Lick Observatory Bulletins as opposed to a refereed journal. In addition to providing a primary name for each star, we provide cross-identifications to some of the other catalogs, particularly where there is existing photometry or spectroscopy of that star using the alternate names. For the brightest cluster members, we provide additional cross-references (e.g., Greek names, Flamsteed numbers, HD numbers).
For each star, we attempt to include an estimate for Johnson $B$ and $V$, and for Cousins $I$ ([$I_{\rm C}$]{}). Only a very small fraction of the cluster members have photoelectric photometry in these systems, unfortunately. Photometry for many of the stars has often been obtained in other systems, including Walraven, Geneva, Kron, and Johnson. We have used previously published transformations from the appropriate indices in those systems to Johnson $BV$ or Cousins $I$. In other cases, photometry is available in a natural $I$ band system, primarily for some of the relatively faint cluster members. We have attempted to transform those $I$ band data to [$I_{\rm C}$]{} by deriving our own conversion using stars for which we already have a [$I_{\rm C}$]{} estimate as well as the natural $I$ measurement. Details of these issues are provided in the Appendix.
Finally, we have cross-correlated the cluster candidates catalog with the 2MASS All-Sky PSC and also with the 6x-PSC for the Pleiades. For every star in the catalog, we obtain $JH$[$K_{\rm s}$]{} photometry and 2MASS positions. Where we have both main survey 2MASS data and data from the 6x catalog, we adopt the 6x data for stars with $J>$11, and data from the standard 2MASS catalog otherwise. We verified that the two catalogs do not have any obvious photometric or astrometric offsets relative to each other. The coordinates we list in our catalog are entirely from these 2MASS sources, and hence they inherit the very good and homogeneous 2MASS positional accuracies of order 0.1 arcseconds RMS.
We have then plotted the candidate Pleiades members in a variety of color-magnitude diagrams and color-color diagrams, and required that a star must have photometry that is consistent with cluster membership. Figure \[fig:ple1695\] illustrates this process, and indicates why (for example) we have excluded HII 1695 from our final catalog.
Table 2 provides the collected data for the 1417 stars we have retained as candidate Pleiades members. The first two columns are the J2000 RA and Dec from 2MASS; the next are the 2MASS $JH$[$K_{\rm s}$]{} photometry and their uncertainties, and the 2MASS photometric quality flag (“ph-qual"). If the number following the 2MASS quality flag is a 1, the 2MASS data come from the 2MASS All-Sky PSC; if it is a 2, the data come from the 6x-PSC. The next three columns provide the $B$, $V$ and [$I_{\rm C}$]{}photometry, followed by a flag which indicates the provenance of that photometry. The last column provides the most commonly used names for these stars. The hydrogen burning mass limit for the Pleiades occurs at about $V$=22, $I$=18, [$K_{\rm s}$]{}=14.4. Fifty-three of the candidate members in the catalog are fainter than this limit, and hence should be sub-stellar if they are indeed Pleiades members.
Table 3 provides the IRAC \[3.6\], \[4.5\], \[5.8\] and \[8.0\] photometry we have derived for Pleiades candidate members included within the region covered by the IRAC shallow survey of the Pleiades (see section 2). The brightest stars are saturated even in our short integration frame data, particularly for the more sensitive 3.6 and 4.5 [$\mu$m]{} channels. At the faint end, we provide photometry only for 3.6 and 4.5 [$\mu$m]{} because the objects are undetected in the two longer wavelength channels. At the “top" and “bottom" of the survey region, we have incomplete wavelength coverage for a band of width about 5$\arcmin$, and for stars in those areas we report photometry only in the 3.6 and 5.8 bands or only in the 4.5 and 8.0 bands.
Because Table 2 is an amalgam of many previous catalogs, each of which has different spatial coverage, magnitude limits and other idiosyncrasies, it is necessarily incomplete and inhomogeneous. It also certainly includes some non-members. For $V<$ 12, we expect very few non-members because of the extensive spectroscopic data available for those stars; the fraction of non-members will likely increase toward fainter magnitudes, particularly for stars located far from the cluster center. The catalog is simply an attempt to collect all of the available data, identify some of the non-members and eliminate duplications. We hope that it will also serve as a starting point for future efforts to produce a “cleaner" catalog.
Figure \[fig:plespatial2\] shows the distribution on the sky of the stars in Table 2. The complete spatial distribution of all members of the Pleiades may differ slightly from what is shown due to the inhomogeneous properties of the proper motion surveys. However, we believe that those effects are relatively small and the distribution shown is mostly representative of the parent population. One thing that is evident in Figure \[fig:plespatial2\] is mass segregation – the highest mass cluster members are much more centrally located than the lowest mass cluster members. This fact is reinforced by calculating the cumulative number of stars as a function of distance from the cluster center for different absolute magnitude bins. Figure \[fig:ple\_segreg\] illustrates this fact. Another property of the Pleiades illustrated by Figure \[fig:plespatial2\] is that the cluster appears to be elongated parallel to the galactic plane, as expected from n-body simulations of galactic clusters [@terlevich87]. Similar plots showing the flattening of the cluster and evidence for mass segregation for the V $<$ 12 cluster members were provided by [@raboud98].
Empirical Pleiades Isochrones and Comparison to Model Isochrones
================================================================
Young, nearby, rich open clusters like the Pleiades can and should be used to provide template data which can help interpret observations of more distant clusters or to test theoretical models. The identification of candidate members of distant open clusters is often based on plots of stars in a color-magnitude diagram, overlaid upon which is a line meant to define the single-star locus at the distance of the cluster. The stars lying near or slightly above the locus are chosen as possible or probable cluster members. The data we have collected for the Pleiades provide a means to define the single-star locus for 100 Myr, solar metallicity stars in a variety of widely used color systems down to and slightly below the hydrogen burning mass limit. Figure \[fig:cmd\_vmi\] and Figure \[fig:cmd\_km1\] illustrate the appearance of the Pleiades stars in two of these diagrams, and the single-star locus we have defined. The curve defining the single-star locus was drawn entirely “by eye.” It is displaced slightly above the lower envelope to the locus of stars to account for photometric uncertainties (which increase to fainter magnitudes). We attempted to use all of the information available to us, however. That is, there should also be an upper envelope to the Pleiades locus in these diagrams, since equal mass binaries should be displaced above the single star sequence by 0.7 magnitudes (and one expects very few systems of higher multiplicity). Therefore, the single star locus was defined with that upper envelope in mind. Table 4 provides the single-star loci for the Pleiades for $BVI_{\rm c}JK_{\rm s}$ plus the four IRAC channels. We have dereddened the empirical loci by the canonical mean extinction to the Pleiades of [$A_V$]{} = 0.12 (and, correspondingly, A$_B$ = 0.16, A$_I$ = 0.07, A$_J$ = 0.03, A$_K$ = 0.01, as per the reddening law of @rieke85).
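As a concrete illustration of the dereddening applied to the empirical loci, the short sketch below (Python; the function name is ours) removes the mean per-band extinctions quoted above from a set of observed magnitudes.

```python
# Mean Pleiades extinction per band, as adopted above (Rieke & Lebofsky law).
EXTINCTION = {'B': 0.16, 'V': 0.12, 'I': 0.07, 'J': 0.03, 'K': 0.01}

def deredden(mags):
    """Subtract the mean cluster extinction from a dict of observed magnitudes."""
    return {band: m - EXTINCTION.get(band, 0.0) for band, m in mags.items()}

# Dereddened V-I color of a star observed at V=14.00, I=12.00:
m0 = deredden({'V': 14.00, 'I': 12.00})
print(round(m0['V'] - m0['I'], 2))   # 1.95 = (14.00-0.12) - (12.00-0.07)
```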
The other benefit to constructing the new catalog is that it can provide an improved comparison dataset to test theoretical isochrones. The new catalog provides homogeneous photometry in many photometric bands for stars ranging from several solar masses down to below 0.1 [M$_{\sun}$]{}. We take the distance to the Pleiades as 133 pc, and refer the reader to @soderblom05 for a discussion and a listing of the most recent determinations. The age of the Pleiades is not as well-defined, but is probably somewhere between 100 and 125 Myr [@meynet93; @stauffer98]. We adopt 100 Myr for the purposes of this discussion; our conclusions relative to the theoretical isochrones would not be affected significantly if we instead chose 125 Myr. As noted above, we adopt [$A_V$]{}=0.12 as the mean Pleiades extinction, and apply that value to the theoretical isochrones. A small number of Pleiades members have significantly larger extinctions [@breger86; @stauffer87], and we have dereddened those stars individually to the mean cluster reddening.
Figures \[fig:super\_vmi\] and \[fig:super\_kik\] compare theoretical 100 Myr isochrones from @siess00 and @baraffe98 to the Pleiades member photometry from Table 2 for stars for which we have photoelectric photometry. Neither set of isochrones is a good fit to the $V-I$ based color-magnitude diagram. For @baraffe98 this is not a surprise because they showed in their paper that their isochrones are too blue in $V-I$ for cool stars, and ascribed the problem to an incomplete line list, which results in too little absorption in the $V$ band. For @siess00, the poor fit in the $V-I$ CMD is somewhat unexpected in that they transform from the theoretical to the observational plane using empirical color-temperature relations. In any event, it is clear that neither set of model isochrones matches the shape of the Pleiades locus in the $V$ vs. $V-I$ plane, and therefore use of these $V-I$ based isochrones for younger clusters is not likely to yield accurate results (unless the color-[$T_{\rm eff}$]{} relation is recalibrated, as described for example in @jeffries05). On the other hand, the @baraffe98 model provides a quite good fit to the Pleiades single star locus for an age of 100 Myr in the $K$ vs. $I-K$ plane.[^2] This perhaps lends support to the hypothesis that the misfit in the $V$ vs. $V-I$ plane is due to missing opacity in their $V$ band atmospheres for low mass stars (see also @chabrier00 for further evidence in support of this idea). The @siess00 isochrones do not fit the Pleiades locus in the $K$ vs. $I-K$ plane particularly well, being too faint near $I-K$=2 and too bright for $I-K >$ 2.5.
Identification of New Very Low Mass Candidate Members
=====================================================
The highest spatial density for Pleiades members of any mass should be at the cluster center. However, searches for substellar members of the Pleiades have generally avoided the cluster center because of the deleterious effects of scattered light from the high mass cluster members and because of the variable background from the Pleiades reflection nebulae. The deep 2MASS and IRAC 3.6 and 4.5 [$\mu$m]{} imaging provide accurate photometry to well below the hydrogen burning mass limit, and are less affected by the nebular emission than shorter wavelength images. We therefore expect that it should be possible to identify a new set of candidate Pleiades substellar members by combining our new near and mid-infrared photometry.
The substellar mass limit in the Pleiades occurs at about [$K_{\rm s}$]{}=14.4, near the limit of the 2MASS All-Sky PSC. As illustrated in Figure \[fig:2macmd\], the deep 2MASS survey of the Pleiades should easily detect objects at least two magnitudes fainter than the substellar limit. The key to actually identifying those objects and separating them from the background sources is to find color-magnitude or color-color diagrams which separate the Pleiades members from the other objects. As shown in Figure \[fig:cmd3dot6\], late-type Pleiades members separate fairly well from most field stars towards the Pleiades in a [$K_{\rm s}$]{} vs. $K_s-[3.6]$ color-magnitude diagram. However, as illustrated in Figure \[fig:2macmd\], in the $K_s$ magnitude range of interest there is also a large population of red galaxies, and they are in fact the primary contaminants to identifying Pleiades substellar objects in the [$K_{\rm s}$]{} vs. $K_s-[3.6]$ plane. Fortunately, most of the contaminant galaxies are slightly resolved in the 2MASS and IRAC imaging, and we have found that we can eliminate most of the red galaxies by their non-stellar image shape.
Figure \[fig:cmd3dot6\] shows the first step in our process of identifying new very low mass members of the Pleiades. The red plus symbols are the known Pleiades members from Table 2. The red open circles are candidate Pleiades substellar members from deep imaging surveys published in the literature, mostly of parts of the cluster exterior to the central square degree, where the IRAC photometry is from @lowrance07. The blue, filled circles are field M and L dwarfs, placed at the distance of the Pleiades, using photometry from @patten06. Because the Pleiades is $\sim$100 Myr, its very low mass stellar and substellar objects will be displaced about 0.7 mag above the locus of the field M and L dwarfs according to the @baraffe98 and @chabrier00 models, in accord with the location in the diagram of the previously identified, candidate VLM and substellar objects. The trapezoidal shaped region outlined with a dashed line is the region in the diagram which we define as containing candidate new VLM and substellar members of the Pleiades. We place the faint limit of this region at [$K_{\rm s}$]{}=16.2 in order to avoid the large apparent increase in faint, red objects for [$K_{\rm s}$]{}$>$ 16.2, caused largely by increasing errors in the [$K_{\rm s}$]{} photometry. Also, the 2MASS extended object flags cease to be useful fainter than about [$K_{\rm s}$]{}= 16.
We took the following steps to identify a set of candidate substellar members of the Pleiades:
- keep only objects which fall in the trapezoidal region in Figure \[fig:cmd3dot6\];
- remove objects flagged as non-stellar by the 2MASS pipeline software;
- remove objects which appear non-stellar to the eye in the IRAC images;
- remove objects which do not fall in or near the locus of field M and L dwarfs in a $J-H$ vs. $H-K_s$ diagram;
- remove objects which have 3.6 and 4.5 [$\mu$m]{} magnitudes that differ by more than 0.2 mag;
- remove objects which fall below the ZAMS in a $J$ vs. $J-K_s$ diagram.
As shown in Figure \[fig:cmd3dot6\], all stars earlier than about mid-M have $K_s-[3.6]$ colors bluer than 0.4. This ensures that for most of the area of the trapezoidal region, the primary contaminants are distant galaxies. Fortunately, the 2MASS catalog provides two types of flags for identifying extended objects. For each filter, a chi-square flag measures the match between the object's shape and the instrumental PSF, with values greater than 2.0 generally indicative of a non-stellar object. In order not to be misguided by an image artifact in one filter, we throw out the most discrepant of the three flags and average the other two. We discard objects with mean $\chi^2$ greater than 1.9. The other indicator is the 2MASS extended object flag, which is the synthesis of several independent tests of the object's shape, surface brightness and color (see @jarrett00 for a description of this process). If one simply excludes the objects classified as extended in the 2MASS 6x image by either of these techniques, the number of candidate VLM and substellar objects lying inside the trapezoidal region decreases by nearly half.
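A minimal sketch of the per-filter chi-square test is given below (Python; the function name is ours, and “most discrepant" is interpreted here as the value farthest from the median of the three, which is one plausible reading of the procedure).

```python
def passes_shape_cut(chi2_j, chi2_h, chi2_k, limit=1.9):
    """Drop the most discrepant of the three per-band chi-square values and
    require the mean of the remaining two to be at or below the limit."""
    values = [chi2_j, chi2_h, chi2_k]
    median = sorted(values)[1]
    kept = sorted(values, key=lambda v: abs(v - median))[:2]  # discard the outlier
    return sum(kept) / 2.0 <= limit

# Example: a single-band artifact does not by itself reject the source.
print(passes_shape_cut(1.1, 1.2, 5.0))   # True
```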
We have one additional means to demonstrate that many of the identified objects are probably Pleiades members, and that is via proper motions. The mean Pleiades proper motion is $\Delta$RA = 20 mas yr$^{-1}$ and $\Delta$Dec = $-$45 mas yr$^{-1}$ [@jones73]. With an epoch difference of only 3.5 years between the deep 2MASS and IRAC imaging, the expected motion for a Pleiades member is only 0.07 arcseconds in RA and $-$0.16 arcseconds in Dec. Given the relatively large pixel size for the two cameras, and the undersampled nature of the IRAC 3.6 and 4.5 [$\mu$m]{} images, it is not a priori obvious that one would expect to reliably detect the Pleiades motion. However, both the 2MASS and IRAC astrometric solutions have been very accurately calibrated. Also, for the present purpose, we only ask whether the data support a conclusion that most of the identified substellar candidates are true Pleiades members (i.e., as an ensemble), rather than that each star is well enough separated in a VPD to derive a high membership probability.
Figure \[fig:super\_propmo\] provides a set of plots that we believe support the conclusion that the majority of the surviving VLM and substellar candidates are Pleiades members. The first plot shows the measured motions between the epoch of the 2MASS and IRAC observations for all known Pleiades members from Table 2 that lie in the central square degree region and have 11 $<$ [$K_{\rm s}$]{} $<$ 14 (i.e., just brighter than the substellar candidates). The mean offset of the Pleiades stellar members from the background population is well-defined and is quantitatively of the expected magnitude and sign (+0.07 arcsec in RA and $-$0.16 arcsec in Dec). The RMS dispersion of the coordinate difference for the field population in RA and Dec is 0.076 and 0.062 arcseconds, supportive of our claim that the relative astrometry for the two cameras is quite good. Because we expect that the background population should have essentially no mean proper motion, the non-zero mean “motion" of the field population of about $<\Delta$RA$>$=0.3 arcseconds is presumably not real. Instead, the offset is probably due to the uncertainty in transferring the Spitzer coordinate zero-point between the warm star-tracker and the cryogenic focal plane. Because it is simply a zero-point offset applicable to all the objects in the IRAC catalog, it has no effect on the ability to separate Pleiades members from the field star population.
The second panel in Figure \[fig:super\_propmo\] shows the proper motion of the candidate Pleiades VLM and substellar objects. While these objects do not show as clean a distribution as the known members, their mean motion is clearly in the same direction. After removing 2-$\sigma$ deviants, the median offsets for the substellar candidates are 0.04 and $-$0.11 arcseconds in RA and Dec, respectively. The objects whose motions differ significantly from the Pleiades mean may be non-members or they may be members with poorly determined motions (since a few of the high probability members in the first panel also show discrepant motions).
The other two panels in Figure \[fig:super\_propmo\] show the proper motions of two possible control samples. The first control sample was defined as the set of stars that fall up to 0.3 magnitudes below the lower sloping boundary of the trapezoid in Figure \[fig:cmd3dot6\]. These objects should be late type dwarfs that are either older or more distant than the Pleiades or red galaxies. We used the 2MASS data to remove extended or blended objects from the sample in the same way as for the Pleiades candidates. If the objects are nearby field stars, we expect to see large proper motions; if galaxies, the real proper motions would be small – but relatively large apparent proper motions due to poor centroiding or different centroids at different effective wavelengths could be present. The second control set was defined to have $-0.1 < K - [3.6] < 0.1$ and $14.0 < K < 14.5$, and to be stellar based on the 2MASS flags. This control sample should therefore be relatively distant G and K dwarfs primarily. Both control samples have proper motion distributions that differ greatly from the Pleiades samples and that make sense for, respectively, a nearby and a distant field star sample.
Figure \[fig:cmd3dot6memb\] shows the Pleiades members from Table 2 and the 55 candidate VLM and substellar members that survived all of our culling steps. We cross-correlated this list with the stars from Table 2 and with a list of the previously identified candidate substellar members of the cluster from other deep imaging surveys. Fourteen of the surviving objects correspond to previously identified Pleiades VLM and substellar candidates. We provide the new list of candidate members in Table 5. The columns marked as $\mu$(RA) and $\mu$(DEC) are the measured motions, in arcsec over the 3.5 year epoch difference between the 2MASS-6x and IRAC observations. Forty-two of these objects have [$K_{\rm s}$]{}$>$ 14.0, and hence inferred masses less than about 0.1 [M$_{\sun}$]{}; thirty-one of them have [$K_{\rm s}$]{}$>$ 14.4, and hence have inferred masses below the hydrogen burning mass limit.
Our candidate list could be contaminated by foreground late type dwarfs that happen to lie in the line of sight to the Pleiades. How many such objects should we expect? In order to pass our culling steps, such stars would have to be mid to late M dwarfs, or early to mid L dwarfs. We use the known M dwarfs within 8 pc to estimate how many field M dwarfs should lie in a one square degree region and at distance between 70 and 100 parsecs (so they would be coincident in a CMD with the 100 Myr Pleiades members). The result is $\sim$3 such field M dwarf contaminants. @cruz06 estimate that the volume density of L dwarfs is comparable to that for late-M dwarfs, and therefore a very conservative estimate is that there might also be 3 field L dwarfs contaminating our sample. We regard this (6 contaminating field dwarfs) as an upper limit because our various selection criteria would exclude early M dwarfs and late L dwarfs. @bihain06 made an estimate of the number of contaminating field dwarfs in their Pleiades survey of 1.8 square degrees; for the spectral type range of our objects, their algorithm would have predicted just one or two contaminating field dwarfs for our survey.
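The volume scaling behind the field-dwarf estimate can be sketched as follows (Python). The adopted count of roughly 100 M dwarfs within 8 pc is an illustrative assumption (it is not a number quoted in this paper), but it reproduces the $\sim$3 contaminants estimated above.

```python
import math

N_M_DWARFS_8PC = 100   # assumed, illustrative count of M dwarfs within 8 pc
local_density = N_M_DWARFS_8PC / (4.0 / 3.0 * math.pi * 8.0**3)   # stars pc^-3

omega = math.radians(1.0)**2                  # one square degree in steradians
volume = omega / 3.0 * (100.0**3 - 70.0**3)   # pc^3 in the cone between 70 and 100 pc

print(round(local_density * volume, 1))       # ~3 expected field M dwarf contaminants
```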
How many substellar Pleiades members should there be in the region we have surveyed? That is, of course, part of the question we are trying to answer. However, previous studies have estimated that the Pleiades stellar mass function for M $<$ 0.5 [M$_{\sun}$]{} can be approximated as a power-law with an exponent of -1 (dN/dM $\propto$ M$^{-1}$). Using the known Pleiades members from Table 2 that lie within the region of the IRAC survey and that have masses of 0.2 $<$ M/[M$_{\sun}$]{}$<$ 0.5 (as estimated from the @baraffe98 100 Myr isochrone) to normalize the relation, the M$^{-1}$ mass function predicts about 48 members in our search region and with 14 $<$ K $<$ 16.2 (corresponding to 0.035 $<$ M/[M$_{\sun}$]{}$<$ 0.1). Other studies have suggested that the mass function in the Pleiades becomes shallower below 0.1 [M$_{\sun}$]{}, dN/dM $\propto$ M$^{-0.6}$. Using the same normalization as above, this functional form for the Pleiades mass function for M $<$ 0.1 [M$_{\sun}$]{} yields a prediction of 20 VLM and substellar members in our survey. The number of candidates we have found falls between these two estimates. Better proper motions and low-resolution spectroscopy will almost certainly eliminate some of these candidates as non-members.
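The M$^{-1}$ prediction follows from a simple normalization of the power law; the sketch below (Python) shows the arithmetic, with the normalization count entered as an assumed placeholder since the exact number of 0.2-0.5 [M$_{\sun}$]{} members inside the survey region is not quoted in the text.

```python
import math

N_NORM = 42   # assumed number of members with 0.2 < M/Msun < 0.5 in the survey region

def n_powerlaw(m_lo, m_hi, alpha):
    """Integral of M**(-alpha) dM from m_lo to m_hi (unnormalized)."""
    if alpha == 1.0:
        return math.log(m_hi / m_lo)
    return (m_hi**(1.0 - alpha) - m_lo**(1.0 - alpha)) / (1.0 - alpha)

# dN/dM ~ M^-1, normalized to N_NORM stars between 0.2 and 0.5 Msun:
scale = N_NORM / n_powerlaw(0.2, 0.5, 1.0)
print(round(scale * n_powerlaw(0.035, 0.1, 1.0)))   # ~48 predicted 0.035-0.1 Msun members
```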
Mid-IR Observations of Dust and PAHS in the Pleiades {#sec:discussion}
====================================================
Since the earliest days of astrophotography, it has been clear that the Pleiades stars are in relatively close proximity to interstellar matter whose optical manifestation is the spider-web like network of filaments seen particularly strongly towards several of the B stars in the cluster. High resolution spectra of the brightest Pleiades stars as well as CO maps towards the cluster show that there is gas as well as dust present, and that the (primary) interstellar cloud has a significant radial velocity offset relative to the Pleiades [@white03; @federman84]. The gas and dust, therefore, are not a remnant from the formation of the cluster but are simply evidence of a transitory event as this small cloud passes by the cluster in our line of sight (see also @breger86). There are at least two claimed morphological signatures of a direct interaction of the Pleiades with the cloud. @white93 provided evidence that the IRAS 60 and 100 [$\mu$m]{} image of the vicinity of the Pleiades showed a dark channel immediately to the east of the Pleiades, which they interpreted as the “wake" of the Pleiades as it plowed through the cloud from the east. @herbig01 provided a detailed analysis of the optically brightest nebular feature in the Pleiades – IC 349 (Barnard's Merope nebula) – and concluded that the shape and structure of that nebula could best be understood if the cloud was running into the Pleiades from the southeast. @herbig01 concluded that the IC 349 cloudlet, and by extension the rest of the gas and dust enveloping the Pleiades, are relatively distant outliers of the Taurus molecular clouds (see also @eggen50 for a much earlier discussion describing the Merope nebulae as outliers of the Taurus clouds). @white03 has more recently proposed a hybrid model, where there are two separate interstellar cloud complexes with very different space motions, both of which are colliding simultaneously with the Pleiades and with each other.
@breger86 provided polarization measurements for a sample of member and background stars towards the Pleiades, and argued that the variation in polarization signatures across the face of the cluster was evidence that some of the gas and dust was within the cluster. In particular, Figure 6 of that paper showed a fairly distinct interface region, with little residual polarization to the NE portion of the cluster and an L-shaped boundary running EW along the southern edge of the cluster and then north-south along the western edge of the cluster. Stars to the south and west of that boundary show relatively large polarizations and consistent angles (see also our Figure \[fig:cartoon\] where we provide a few polarization vectors from @breger86 to illustrate the location of the interface region and the fact that the position angle of the polarization correlates well with the location in the interface).
There is a general correspondence between the polarization map and what is seen with IRAC, in the sense that the B stars in the NE portion of the cluster (Atlas and Alcyone) have little nebular emission in their vicinity, whereas those in the western part of the cluster (Maia, Electra and Asterope) have prominent, filamentary dust emission in their vicinity. The L-shaped boundary is in fact visible in Figure \[fig:pleIRAC\] as enhanced nebular emission running between and below a line roughly joining Merope and Electra, and then making a right angle and running roughly parallel to a line running from Electra to Maia to HII1234 (see Figure \[fig:cartoon\]).
Pleiades Dust-Star Encounters Imaged with IRAC {#sec: dust structures}
----------------------------------------------
The Pleiades dust filaments are most strongly evident in IRAC’s 8 [$\mu$m]{} channel, as evidenced by the distinct red color of the nebular features in Figure \[fig:pleIRAC\]. The dominance at 8 [$\mu$m]{} is an expected feature of reflection nebulae, as exemplified by NGC 7023 [@werner04], where most of the mid-infrared emission arises from polycyclic aromatic hydrocarbons (PAHs) whose strongest bands in the 3 to 10 [$\mu$m]{}region fall at 7.7 and 8.6 [$\mu$m]{}. One might expect that if portions of the passing cloud were particularly near to one of the Pleiades members, it might be possible to identify such interactions by searching for stars with 8.0 [$\mu$m]{} excesses or for stars with extended emission at 8 [$\mu$m]{}. Figure \[fig:dusty1\] provides two such plots. Four stars stand out as having significant extended 8 [$\mu$m]{} emission, with two of those stars also having an 8 [$\mu$m]{} excess based on their \[3.6\]$-$\[8.0\] color. All of these stars, plus IC 349, are located approximately along the interface region identified by @breger86.
We have subtracted a PSF from the 8 [$\mu$m]{} images for the stars with extended emission, and those PSF-subtracted images are provided in Figure \[fig:psfsub\]. The image for HII 1234 has the appearance of a bow shock. The shape is reminiscent of predictions for what one should expect from a collision between a large cloud or a sheet of gas and an A star as described in @artymowicz97. The @artymowicz97 model posits that A stars encountering a cloud will carve a paraboloidal-shaped cavity in the cloud via radiation pressure. The exact size and shape of the cavity depend on the relative velocity of the encounter, the star's mass and luminosity, and the properties of the ISM grains. For typical parameters, the predicted characteristic size of the cavity is of order 1000 AU, quite comparable to the size of the structures around HII 652 and HII 1234. The observed appearance of the cavity depends on the viewing angle. However, in any case, the direction from which the gas is moving relative to the star can be inferred from the location of the star relative to the curved rim of the cavity; the “wind" originates approximately from the direction connecting the star and the apex of the rim. For HII 1234, this indicates that the cloud it is encountering is moving relative to HII 1234 from the SSE, in accord with a Taurus origin and not with a cloud impacting the Pleiades from the west as posited in @white03. The nebular emission for HII 652 is less strongly bow-shaped, but the peak of the excess emission is displaced roughly southward from the star, consistent with the Taurus model and inconsistent with gas flowing from the west.
Despite being the brightest part of the Pleiades nebulae in the optical, IC 349 appears to be undetected in the 8 [$\mu$m]{} image. This is not because the 8 [$\mu$m]{} image is insensitive to the nebular emission - there is generally good agreement between the structures seen in the optical and at 8 [$\mu$m]{}, and most of the filaments present in optical images of the Pleiades are also visible on the 8 [$\mu$m]{} image (see Figures \[fig:pleIRAC\] and \[fig:psfsub\]) and even the psf-subtracted image of Merope shows well-defined nebular filaments. The lack of enhanced 8 [$\mu$m]{}emission from the region of IC 349 is probably because all of the small particles have been scoured away from this cloudlet, consistent with Herbig’s model to explain the HST surface photometry and colors. There is no PAH emission from IC 349 because there are none of the small molecules that are the postulated source of the PAH emission.
IC 349 is very bright in the optical, and undetected to a good sensitivity limit at 8 [$\mu$m]{}; it must be detectable via imaging at some wavelength between 5000 Å and 8 [$\mu$m]{}. We checked our 3.6 [$\mu$m]{} data for this purpose. In the standard BCD mosaic image, we were unable to discern an excess at the location of IC 349 either simply by displaying the image with various stretches or by doing cuts through the image. We performed a PSF subtraction of Merope from the image to try to improve our ability to detect faint, extended emission 30" from Merope - unfortunately, bright stars have ghost images in IRAC Ch. 1, and in this case the ghost image falls almost exactly at the location of IC 349. IC 349 is also not detected in visual inspection of our 2MASS 6x images.
Circumstellar Disks and IRAC
----------------------------
As part of the Spitzer FEPS (Formation and Evolution of Planetary Systems) Legacy program, using pointed MIPS photometry, @stauffer05 identified three G dwarfs in the Pleiades as having 24 [$\mu$m]{} excesses probably indicative of circumstellar dust disks. @gorlova06 reported results of a MIPS GTO survey of the Pleiades, and identified nine cluster members that appear to have 24 [$\mu$m]{} excesses due to circumstellar disks. However, it is possible that in a few cases these apparent excesses could be due instead to a knot of the passing interstellar dust impacting the cluster member, or that the 24 [$\mu$m]{} excess could be flux from a background galaxy projected onto the line of sight to the Pleiades member. Careful analysis of the IRAC images of these cluster members may help confirm that the MIPS excesses are evidence for debris disks rather than the other possible explanations.
Six of the Pleiades members with probable 24 [$\mu$m]{} excesses are included in the region mapped with IRAC. However, only four of them have data at 8 [$\mu$m]{} – the other two fall near the edge of the mapped region and only have data at 3.6 and 5.8 [$\mu$m]{}. None of the six stars appear to have significant local nebular dust from visual inspection of the IRAC mosaic images. Also, none of them appear problematic in Figure \[fig:dusty1\]. For a slightly more quantitative analysis of possible nebular contamination, we also constructed aperture growth curves for the six stars, and compared them to other Pleiades members. All but one of the six show aperture growth curves that are normal and consistent with the expected IRAC PSF. The one exception is HII 489, which has a slight excess at large aperture sizes as is illustrated in Figure \[fig:ap\_grow2\]. Because HII 489 only has a small 24 [$\mu$m]{} excess, it is possible that the 24 [$\mu$m]{} excess is due to a local knot of the interstellar cloud material and is not due to a debris disk. For the other five 24 [$\mu$m]{} excess stars we find no such problem, and we conclude that their 24 [$\mu$m]{} excesses are indeed best explained as due to debris disks.
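A simplified version of the aperture growth curve check is sketched below (Python/NumPy; the function name is ours and background subtraction is omitted for brevity). A star with local nebular contamination shows a continuing rise at large radii rather than converging to the point-source value traced by other cluster members.

```python
import numpy as np

def aperture_growth_curve(image, x0, y0, radii):
    """Sum of pixel values within circular apertures of increasing radius
    centered on (x0, y0); no background subtraction is performed here."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    return np.array([image[r <= radius].sum() for radius in radii])

# A curve that keeps rising at large radii, relative to curves for other
# cluster members, would indicate extended (e.g., nebular) emission.
```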
Summary and Conclusions
=======================
We have collated the primary membership catalogs for the Pleiades to produce the first catalog of the cluster extending from its highest mass members to the substellar limit. At the bright end, we expect this catalog to be essentially complete and with few or no non-member contaminants. At the faint end, the data establishing membership are much sparser, and we expect a significant number of objects will be non-members. We hope that the creation of this catalog will spur efforts to obtain accurate radial velocities and proper motions for the faint candidate members in order to eventually provide a well-vetted membership catalog for the stellar members of the Pleiades. Towards that end, it would be useful to update the current catalog with other data – such as radial velocities, lithium equivalent widths, x-ray fluxes, H$\alpha$ equivalent widths, etc. – which could be used to help accurately establish membership for the low mass cluster candidates. It is also possible to make more use of “negative information" present in the proper motion catalogs. That is, if a member from one catalog is not included in another study but does fall within its areal and luminosity coverage, that suggests that it likely failed the membership criteria of the second study. For a few individual stars, we have done this type of comparison, but a systematic analysis of the proper motion catalogs should be conducted. We intend to undertake these tasks, and plan to establish a website where these data would be hosted.
We have used the new Pleiades member catalog to define the single-star locus at 100 Myr for $BVI_c$[$K_{\rm s}$]{} and the four IRAC bands. These curves can be used as empirical calibration curves when attempting to identify members of less well-studied, more distant clusters of similar age. We compared the Pleiades photometry to theoretical isochrones from @siess00 and @baraffe98. The @siess00 isochrones are not, in detail, a good fit to the Pleiades photometry, particularly for low mass stars. The @baraffe98 100 Myr isochrone does fit the Pleiades photometry very well in the $I$ vs. $I-K$ plane.
We have identified 31 new substellar candidate members of the Pleiades using our combined seven-band infrared photometry, and have shown that the majority of these objects appear to share the Pleiades proper motion. We believe that most of the objects that may be contaminating our list of candidate brown dwarfs are likely to be unresolved galaxies, and therefore low resolution spectroscopy should be able to provide a good criterion for culling our list of non-members.
The IRAC images, particularly the 8 [$\mu$m]{} mosaic, provide vivid evidence of the strong interaction of the Pleiades stars and the interstellar cloud that is passing through the Pleiades. Our data support the model proposed by @herbig01 whereby the passing cloud is part of the Taurus cloud complex and hence is encountering the Pleiades from the SSE direction. @white93 had proposed a model whereby the cloud was encountering the Pleiades from the west and used this to explain features in the IRAS 60 and 100 $\mu$m images of the region as the wake of the Pleiades moving through the cloud. Our data do not appear to support that hypothesis, and therefore leave the apparent structure in the IRAS maps unexplained.
Most of the support for this work was provided by the Jet Propulsion Laboratory, California Institute of Technology, under NASA contract 1407. This research has made use of NASA’s Astrophysics Data System (ADS) Abstract Service, and of the SIMBAD database, operated at CDS, Strasbourg, France. This research has made use of data products from the Two Micron All-Sky Survey (2MASS), which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center, funded by the National Aeronautics and Space Administration and the National Science Foundation. These data were served by the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. The research described in this paper was partially carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
APPENDIX
========
Membership Catalogs
-------------------
Membership lists of the Pleiades date back to antiquity if one includes historical and literary references to the Seven Sisters (Alcyone, Maia, Merope, Electra, Taygeta, Asterope and Celeno) and their parents (Atlas and Pleione). The first paper discussing relative proper motions of a large sample of stars in the Pleiades (based on visual observations) was published by @pritchard84. The best of the early proper motion surveys of the Pleiades derived from photographic plate astrometry was that by @trumpler21, based on plates obtained at Yerkes and Lick observatories. The candidate members from that survey were presented in two tables, with the first being devoted to candidate members within about one degree from the cluster center (operationally, within one degree from Alcyone) and the second table being devoted to candidates further than one degree from the cluster center. Most of the latter stars were denoted by Trumpler by an S or R, followed by an identification number. We use Tr to designate the Trumpler stars (hence Trnnn for a star from the 1st table and the small number of stars in the second table without an “S" or an “R", and TrSnnn or TrRnnn for the other stars). For the central region, Trumpler’s catalog extends to $V \sim$ 13, while the outer region catalog includes stars only to about $V \sim$ 9.
The most heavily referenced proper motion catalog of the Pleiades is that provided by @hertzsprung47. That paper makes reference to two separate catalogs: a photometric catalog of the Pleiades published by Hertzsprung in 1923 [@hertzsprung23], whose members are commonly referred to by HI numbers, and the new proper motion catalog from the 1947 paper, commonly referenced as the HII catalog. While both HI and HII numbers have been used in subsequent observational papers, it is the HII identification numbers that predominate. That catalog – derived from Carte du Ciel blue-sensitive plates from 14 observatories – includes stars in the central 2$\times$2 degree region of the cluster, and has a faint limit of about $V$ = 15.5. Johnson system $BVI$ photometry is provided for most of the proposed Hertzsprung members in @jomi58 and @iriarte67. Additional Johnson $B$ and $V$ photometry plus Kron $I$ photometry for a fairly large number of the Hertzsprung members can be found in @stauffer80, @stauffer82, and @stauffer84. Other Johnson $BV$ photometry for a scattering of stars can be found in @jones73, @robinson74, @messina01. Spectroscopic confirmation, primarily via radial velocities, that these are indeed Pleiades members has been provided in @soderblom93 [@queloz98] and @mermilliod97.
Two other proper motion surveys provide relatively bright candidate members relatively far from the cluster center: @artyukhina70 and @vanlee86. Stars from the Artyukhina catalog are designated as AK followed by the region from which the star was identified followed by an identification number. The new members provided in the van Leeuwen paper were taken from an otherwise unpublished proper motion study by Pels, where the first 118 stars were considered probable members and the remaining 75 stars were considered possible members. Van Leeuwen categorized a number of the Pels stars as non-members based on the Walraven photometry they obtained, and we adopt those findings. Radial velocities for stars in these two catalogs have been obtained by @rosvick92, @mermilliod97, and @queloz98, and those authors identified a list of the candidate members that they considered confirmed by the high resolution spectroscopy. For these outlying candidate members, to be included in Table 2 we require that the star be a radial velocity member from one of the above three surveys, or be indicated as having “no dip" in the Coravel cross-correlation (indicating rapid rotation, which at least for the later type stars is suggestive of membership). Geneva photometry of the Artyukhina stars considered as likely members was provided by @mermilliod97. The magnitude limit of these surveys was not well-defined, but most of the Artyukhina and Pels stars are brighter than $V$=13.
@jones73 provided proper motion membership probabilities for a large sample of proposed Pleiades members, and for a set of faint, red stars towards the Pleiades. A few star identification names from the sources considered by Jones appear in Table 2, including MT [@mccarthy64], VM [@vanmaanen46], ALR [@ahmed65], and J [@jones73].
The chronologically next significant source of new Pleiades candidate members was the flare star survey of the Pleiades conducted at several observatories in the 1960s, and summarized in @haro82, hereafter HCG. The logic behind these surveys was that even at 100 Myr, late type dwarfs have relatively frequent and relatively high luminosity flares (as demonstrated by @jomi58 having detected two flares during their photometric observations of the Pleiades), and therefore wide area, rapid cadence imaging of the Pleiades at blue wavelengths should be capable of identifying low mass cluster members. However, such surveys also will detect relatively young field dwarfs, and therefore it is best to combine the flare star surveys with proper motions. Dedicated proper motion surveys of the HCG flare stars were conducted by @jones81 and @stauffer91, with the latter also providing photographic $VI$ photometry (Kron system). Photoelectric photometry for some of the HCG stars have been reported in @stauffer82, @stauffer84, @stauffer87, and @prosser91. High resolution spectroscopy of many of the HCG stars is reported in @stauffer84, @stauffer87 and @terndrup00. Because a number of the papers providing additional observational data for the flare stars were obtained prior to 1982, we also include in Table 2 the original flare star names which were derived from the observatory where the initial flare was detected. Those names are of the form an initial letter indicating the observatory – A (Asiago), B (Byurakan), K (Konkoly), T (Tonantzintla) – followed by an identification number.
@stauffer91 conducted two proper motion surveys of the Pleiades over an approximately 4$\times$4 degree region of the cluster based on plates obtained with the Lick 20$^{\prime\prime}$ astrographic telescope. The first survey was essentially unbiased, except for the requirement that the stars fall approximately in the region of the $V$ vs. $V-I$ color-magnitude diagram where Pleiades members should lie. Candidate members from this survey are designated by SK numbers. The second survey was a proper motion survey of the HCG stars. Photographic $VI$ photometry of all the stars was provided as well as proper motion membership probabilities. Photoelectric photometry for some of the candidate members was obtained as detailed above in the section on the HCG catalog stars. The faint limit of these surveys is about $V$=18.
@hambly91 provided a significantly deeper, somewhat wider area proper motion survey, with the faintest members having V $\simeq$ 20 and the total area covered being of order 25 square degrees. The survey utilized red sensitive plates from the Palomar and UK Schmidt telescopes. Due to incomplete coverage at one epoch, there is a vertical swath slightly east of the cluster center where no membership information is available. Stars from this survey are designated by their HHJ numbers. @hambly93 provide $RI$ photographic photometry on a natural system for all of their candidate members, plus photoelectric Cousins $RI$ photometry for a small number of stars and $JHK$ photometry for a larger sample. Some spectroscopy to confirm membership has been reported in @stauffer94, @stauffer95, @oppenheimer97, @stauffer98, and @steele95, though for most of the HHJ stars there is no spectroscopic membership confirmation.
@pinfield00 provide the deepest wide-field proper motion survey of the Pleiades. That survey combines CCD imaging of six square degrees of the Pleiades obtained with the Burrell Schmidt telescope (as five separate, non-overlapping fields near but outside the cluster center) with deep photographic plates which provide the 1st epoch positions. Candidate members are designated by BPL numbers (for Burrell Pleiades), with the faintest stars having $I\simeq$ 19.5, corresponding to $V >$ 23. Only the stars brighter than about $I$= 17 have sufficiently accurate proper motions to use to identify Pleiades members. Fainter than $I$= 17, the primary selection criteria are that the star fall in an appropriate place in both an $I$ vs. $I-Z$ and an $I$ vs.$I-K$ CMD.
@adams01 combined the 2MASS and digitized POSS databases to produce a very wide area proper motion survey of the Pleiades. By design, that survey was very inclusive - covering the entire physical area of the cluster and extending to the hydrogen burning mass limit. However, it was also very “contaminated", with many suspected non-members. The catalog of possible members was not published. We have therefore not included stars from this study in Table 2; we have used the proper motion data from @adams01 to help decide cases where a given star has ambiguous membership data from the other surveys.
@deacon04 provided another deep and very wide area proper motion survey of the Pleiades. The survey covers a circular area of approximately five degrees radius to $R \sim$ 20, or $V \sim$ 22. Candidate members are designated by DH. @deacon04 also provide membership probabilities based on proper motions for many candidate cluster members from previous surveys. For stars where @deacon04 derive P $<$ 0.1 and where we have no other proper motion information or where another proper motion survey also finds low membership probability, we exclude the star from our catalog. For cases where two of our proper motion catalogs differ significantly in their membership assessment, with one survey indicating the star is a probable member, we retain the star in the catalog as the conservative choice. Examples of the latter where @deacon04 derive P $<$ 0.1 include HII 1553, HII 2147, HII 2278 and HII 2665 – all of which we retain in our catalog because other surveys indicate these are high probability Pleiades members.
Photometry
----------
Photometry for stars in open cluster catalogs can be used to help confirm cluster membership and to help constrain physical properties of those stars or of the cluster. For a variety of reasons, photometry of stars in the Pleiades has been obtained in a panoply of different photometric systems. For our own goals, which are to use the photometry to help verify membership and to define the Pleiades single-star locus in color magnitude diagrams, we have attempted to convert photometry in several of these systems to a common system (Johnson $BV$ and Cousins $I$). We detail below the sources of the photometry and the conversions we have employed.
Photoelectric photometry of Pleiades members dates back to at least 1921 [@cummings21]. However, as far as we are aware the first “modern" photoelectric photometry for the Pleiades, using a potassium hydride photoelectric cell, is that of @calder37. @eggen50 provided photoelectric photometry using a 1P21 phototube (but calibrated to a no-longer-used photographic system) for most of the known Pleiades members within one degree of the cluster center and with magnitudes $<$ 11. The first phototube photometry of Pleiades stars calibrated more-or-less to the modern UBV system was provided by @jomo51. An update of that paper, and the oldest photometry included here, was reported in @jomi58, which provided $UBV$ Johnson system photometry for a large sample of HII and Trumpler candidate Pleiades members. @iriarte67 later reported Johnson system $V-I$ colors for most of these stars. We have converted Iriarte's $V-I$ photometry to estimated Cousins $V-I$ colors using a formula from @bessell79: $$V - I ({\rm Cousins}) = 0.778 \times [V - I ({\rm Johnson})].$$ $BVRI$ photometry for most of the Hertzsprung members fainter than $V$= 10 has been published by @stauffer80, @stauffer82, @stauffer84, and @stauffer87. The $BV$ photometry is Johnson system, whereas the $RI$ photometry is on the Kron system. The Kron $V-I$ colors were converted to Cousins $V-I$ using a transformation provided by @bessell87: $$V - I ({\rm Cousins}) = 0.227 + 0.9567(V-I)_k + 0.0128(V-I)_k^2 - 0.0053(V-I)_k^3.$$ Other Kron system $V-I$ colors have been published for Pleiades candidates in @stauffer91 (photographic photometry) and in @prosser91. These Kron-system colors have also been converted to Cousins $V-I$ using the above formula.
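For reference, the two transformations quoted above can be written as the following minimal Python functions (the function names are ours).

```python
def vi_cousins_from_johnson(vi_johnson):
    """Bessell (1979) relation quoted above: Johnson V-I to Cousins V-I."""
    return 0.778 * vi_johnson

def vi_cousins_from_kron(vi_kron):
    """Bessell (1987) relation quoted above: Kron V-I to Cousins V-I."""
    return (0.227 + 0.9567 * vi_kron
            + 0.0128 * vi_kron**2 - 0.0053 * vi_kron**3)

# Example: a Kron V-I of 2.00 corresponds to a Cousins V-I of about 2.15.
print(round(vi_cousins_from_kron(2.00), 3))
```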
Johnson/Cousins $UBVR$ photometry for a set of low mass Pleiades members was provided by @landolt79. We only use the $BV$ magnitudes from that study. Additional Johnson system $UBV$ photometry for small numbers of stars is provided in @robinson74, @messina01 and @jones73.
@vanlee87 provided Walraven $VBLUW$ photometry for nearly all of the Hertzsprung members brighter than $V \sim$ 13.5 and for the Pels candidate members. Van Leeuwen provided an estimated Johnson $V$ derived from the Walraven $V$ in his tables. We have transformed the Walraven $V-B$ color into an estimate of Johnson $B-V$ using a formula from @rosvick92: $$B - V ({\rm Johnson}) = 2.571(V-B) -1.02(V-B)^2 +0.5(V-B)^3 -0.01$$ @hambly93 provided photographic $VRI$ photometry for all of the HHJ candidate members, and $VRI$ Cousins photoelectric photometry for a small fraction of those stars. We took all of the HHJ stars with photographic photometry for which we also have photoelectric $VI$ photometry on the Cousins system, and plotted $V$(Cousins) vs. $V$(HHJ) and $I$(Cousins) vs.$I$(HHJ). While there is some evidence for slight systematic departures of the HHJ photographic photometry from the Cousins system, those departures are relatively small and we have chosen simply to retain the HHJ values and treat them as Cousins system.
@pinfield00 reported their $I$ magnitudes in an instrumental system which they designated as $I_{kp}$. We identified all BPL candidate members for which we had photoelectric Cousins $I$ estimates, and plotted $I_{kp}$ vs. $I_{\rm C}$. Figure \[fig:ikpic\] shows this correlation, and the piecewise linear fit we have made to convert from $I_{kp}$ to $I_{\rm C}$. Our catalog lists these converted $I_{\rm C}$ measures for the BPL stars for which we have no other photoelectric $I$ estimates.
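The piecewise-linear conversion can be applied with simple linear interpolation; a minimal sketch is below. The node values are placeholders, since the adopted fit is defined graphically in Figure \[fig:ikpic\] rather than tabulated in the text.

```python
import numpy as np

# Sketch: piecewise-linear conversion from the instrumental I_kp system to
# Cousins I_C.  The node values below are assumed placeholders, not the
# coefficients of the fit actually adopted in the paper.
IKP_NODES = np.array([13.5, 16.0, 19.5])   # assumed I_kp break points
IC_NODES = np.array([13.3, 15.7, 18.9])    # assumed matching I_C values

def ikp_to_ic(ikp):
    """Interpolate I_C from I_kp along the piecewise-linear relation."""
    return np.interp(ikp, IKP_NODES, IC_NODES)
```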
@deacon04 derived $RI$ photometry from the scans of their plates, and calibrated that photometry by reference to published photometry from the literature. When we plotted the difference between their $I$-band photometry and literature values (where available), we discovered a significant dependence on right ascension. Unfortunately, because the DH survey extended over larger spatial scales than the calibrating photometry, we could not derive a correction which we could apply to all the DH stars. We therefore developed the following indirect scheme. We used the stars for which we have estimated $I_{\rm C}$ magnitudes (from photoelectric photometry) to define the relation between $J$ and ($I_{\rm C} - J$) for Pleiades members. For each DH star, we combined that relation and the 2MASS $J$ magnitude to yield a predicted $I_{\rm C}$. Figure \[fig:dh\_ra\] shows a plot of the difference between this predicted $I_{\rm C}$ and $I$(DH) as a function of right ascension. The solid line shows the relation we adopt. Figure \[fig:dh\_icorr\] shows the relation between the corrected $I$(DH) values and the Table 2 $I_{\rm C}$ measures from photoelectric sources. There is still a significant amount of scatter, but the corrected $I$(DH) photometry appears to be accurately calibrated to the Cousins system.
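A sketch of this indirect calibration is given below. The polynomial orders are assumptions; in practice the $J$ versus ($I_{\rm C} - J$) member locus and the right-ascension trend are defined empirically as described above.

```python
import numpy as np

# Sketch of the indirect recalibration of the DH I-band photometry.  The
# polynomial orders are illustrative assumptions; the relations themselves are
# defined empirically in the text (Figures [fig:dh_ra] and [fig:dh_icorr]).

def fit_member_locus(j_ref, ic_ref, order=3):
    """Fit (I_C - J) as a function of J for members with photoelectric I_C."""
    return np.polyfit(j_ref, ic_ref - j_ref, order)

def predict_ic(j_2mass, locus_coeffs):
    """Predicted I_C for a DH star from its 2MASS J magnitude."""
    return j_2mass + np.polyval(locus_coeffs, j_2mass)

def correct_dh(ra, i_dh, i_pred, order=1):
    """Fit the predicted-minus-DH offset against right ascension and return
    the corrected DH I magnitudes."""
    trend = np.polyfit(ra, i_pred - i_dh, order)
    return i_dh + np.polyval(trend, ra)
```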
In a very few cases (specifically, just five stars), we provide an estimate of $I_{\rm C}$ based on data from a wide-area CCD survey of Taurus obtained with the Quest-2 camera on the Palomar 48-inch Samuel Oschin telescope [@slesnick06]. That survey calibrated its photometry to the Sloan $i$ system, and we have converted the Sloan $i$ magnitudes to $I_{\rm C}$. We intend to make more complete use of the Quest-2 data in a subsequent paper.
When we have multiple sources of photometry for a given star, we must consider how to combine them. In most cases, photoelectric data, where available, are given preference. However, if we have photographic $V$ and $I$ but only a photoelectric measurement for $I$, we do not replace the photographic $I$ with the photoelectric value, because these stars are variable and the photographic $V$ and $I$ measurements are at least in some cases from nearly simultaneous exposures. Where we have multiple sources of photoelectric photometry, and no strong reason to favor one measurement or set of measurements over another, we have averaged the photometry for a given star. Where we have multiple photometry, the individual measurements usually agree reasonably well, with the caveat that the Pleiades low-mass stars are in many cases heavily spotted and chromospherically “active", and hence are photometrically variable. In a few cases, even allowing for the expectation that spots and other phenomena may affect the photometry, there is more discrepancy between reported $V$ magnitudes than we expect. We note two such cases here. We suspect these results indicate that at least some of the Pleiades low-mass stars have long-term photometric variability larger than their short-period (rotational) modulation.
HII 882 has at least four presumably accurate $V$ magnitude measurements reported in the literature. Those measures are: $V$=12.66 @jomi58; $V$=12.95 @stauffer82; $V$=12.898 @vanlee86; and $V$=12.62 @messina01.
HII 345 has at least three presumably accurate $V$ magnitude measurements. Those measurements are: $V$=11.65 @landolt79; $V$=11.73 @vanlee86; $V$=11.43 @messina01.
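Stepping back to the combination rules described above, the preference-and-average bookkeeping can be summarized with a small helper like the following; this is a simplified sketch (it ignores the nearly-simultaneous photographic $V$, $I$ caveat) rather than the exact procedure used to build Table 2.

```python
# Simplified sketch of how multiple measurements for one star are combined:
# photoelectric values are preferred and averaged; photographic values are
# used only when nothing better is available.

def combine(measurements):
    """measurements: list of (value, kind), kind in {"photoelectric", "photographic"}."""
    photoelectric = [v for v, kind in measurements if kind == "photoelectric"]
    if photoelectric:
        return sum(photoelectric) / len(photoelectric)
    photographic = [v for v, kind in measurements if kind == "photographic"]
    return sum(photographic) / len(photographic) if photographic else None
```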
At the bottom of Table 2, we provide a key to the source(s) of the optical photometry provided in the table.
This research made use of the SIMBAD database operated at CDS, Strasbourg, France, and also of the NED and NStED databases operated at IPAC, Pasadena, USA. A large amount of data for the Pleiades (and other open clusters) can also be found at the open cluster database WEBDA (http://www.univie.ac.at/webda/), operated in Vienna by Ernst Paunzen.
Adams, J., Stauffer, J., Monet, D., Skrutskie, M., & Beichman, C. 2001, , 121, 2053
Allen, L. et al. 2004, , 154, 363
Ahmed, F., Lawrence, L., & Reddish, V. 1965, PROE, 3, 187
Artymowicz, P. & Clampin, M. 1997, , 490, 863
Artyukhina, N. 1969, Soviet Astronomy, 12, 987
Artyukhina, N. & Kalinina, E. 1970, Trudy Sternberg Astron Inst. 39, 111
Baraffe, I., Chabrier, G., Allard, F., & Hauschildt, P. 1998, , 337, 403
Bessell, M. 1979, , 91, 589
Bessell, M. & Weis, E. 1987, , 99, 642
Bihain, G. et al. 2006, , 458, 805
Breger, M. 1986, , 309, 311
Calder, W. & Shapley, H. 1937, Ann. Ast. Obs. Harvard College, 105, 453
Chabrier, G., Baraffe, I., Allard, F., & Hauschildt, P. 2000, , 542, 464
Cruz, K. et al. 2006,
Cummings, E. 1921, , 33, 214
Deacon, N., & Hambly, N. 2004, , 416, 125
Eggen, O. 1950, , 111, 81
Federman, S. & Willson, R. 1984, , 283, 626
Festin, L. 1998, , 333, 497
Gorlova, N. et al. 2006, , 649, 1028
Hambly, N., Hawkins, M.R.S., & Jameson, R. 1991, , 253, 1
Hambly, N., Hawkins, M.R.S., & Jameson, R. 1993, , 100, 607
Haro, G., Chavira, E. & Gonzalez, G. 1982, Bol Inst Tonantzintla 3, 1
Herbig, G. & Simon, T. 2001, , 121, 3138
Hertzsprung, E. 1923, Mem. Danish Acad. 4, No. 4
Hertzsprung, E. 1947, Ann.Leiden Obs. 19, Part1A
Iriarte, B. 1967, Boll. Obs. Tonantzintla Tacubaya 4, 79
Jameson, R. & Skillen, I. 1989, , 239, 247
Jarrett, T., Chester, T., Cutri, R., Schneider, S., Skrutskie, M., & Huchra, J. 2000, , 119, 2498
Jeffries, R.D., & Oliveira, J. 2005, , 358, 13.
Johnson, H. L., & Mitchell, R. I. 1958, , 128, 31 (JM)
Johnson, H.L. & Morgan, W.W. 1951, , 114, 522
Jones, B.F. 1973, , 9, 313
Jones, B.F. 1981, , 86, 290
Krishnamurthi, A. et al. 1998, , 493, 914
Kraemer, K., et al. 2003, , 126, 1423
Landolt, A. 1979, , 231, 468
van Leeuwen, F., Alphenaar, P., & Brand, J. 1986, , 65, 309
van Leeuwen, F., Alphenaar, P., & Meys, J. J. M. 1987, , 67, 483
van Maanen, A. 1946, , 102, 26
Lowrance, P. et al. 2007, in preparation
Makovoz, D., & Marleau, F. 2005 PASP, 117, 1113
Marilli, E., Catalano, S., & Frasca, A. 1997, MemSAI, 68, 895
McCarthy, M. & Treanor, P. 1964, Ric. Astron. Specola Vat. Astron. 6, 535
Mendoza, E. E. 1967, Boletin Observatorio Tonantzintla y Tacuba, 4, 149
Mermilliod, J.-C., Rosvick, J., Duquennoy, A., Mayor, M. 1992, , 265, 513
Mermilliod, J.-C., Bratschi, P., & Mayor, M. 1997, , 320, 74
Mermilliod, J.-C. & Mayor, M. 1999, , 352, 479
Messina, S. 2001, , 371, 1024
Meynet, G., Mermilliod, J.-C., & Maeder, A. 1993, , 98, 477
Oppenheimer, B., Basri, G., Nakajima, T., & Kulkarni, S. 1997, , 113, 296
Patten, B., et al. 2006, , 651, 502
Pinfield, D., Hodgkin, S., Jameson, R., Cossburn, M., Hambly, N., & Devereux, N. 2000, , 313, 347
Pritchard, R. 1884, , 44, 355
Prosser, C., Stauffer, J., & Kraft, R. 1991, , 101, 1361
Queloz, D., Allain, S., Mermilliod, J.-C., Bouvier, J., & Mayor, M. 1998, , 335, 183
Raboud, D., & Mermilliod, J.-C. 1998, , 329, 101
Rieke, G. & Lebofsky, M. 1985, , 288, 618
Robinson, E.L. & Kraft, R.P. 1974, , 79, 698
Rosvick, J., Mermilliod, J., & Mayor, M. 1992, , 255, 130
Siess, L., Dufour, E., & Forestini, M. 2000, , 358, 593
Skrutskie, M. et al. 2006, , 131, 1163
Slesnick, C., Carpenter, J., Hillenbrand, L., & Mamajek, E. 2006, , 132, 2665
Soderblom, D. R., Jones, B. R., Balachandran, S., Stauffer, J. R., Duncan, D. K., Fedele, S. B., & Hudon, J. 1993, , 106, 1059
Soderblom, D., Nelan, E., Benedict, G., McArthur, B., Ramirez, I., Spiesman, W., & Jones, B. 2005, , 129, 1616
Stauffer, J. 1980, , 85, 1341
Stauffer, J. R. 1982a, , 87, 1507
Stauffer, J. 1984, , 280, 189
Stauffer, J. R., Hartmann, L. W., Soderblom, D. R., & Burnham, N. 1984, , 280, 202
Stauffer, J. R., & Hartmann, L. W. 1987, , 318, 337
Stauffer, J., Klemola, A., Prosser, C. & Probst, R. 1991, , 101, 980
Stauffer, J. R., Caillault, J.-P., Gagne, M., Prosser, C. F., & Hartmann, L. W. 1994, , 91, 625
Stauffer, J. R., Liebert, J., & Giampapa, M. 1995, , 109, 298
Stauffer, J., Hamilton, D., Probst, R., Rieke, G., Mateo, M. 1989, , 344, L21
Stauffer, J. R., et al. 1999, , 527, 219
Stauffer, J. R., et al. 2003, , 126, 833
Stauffer, J. R., et al. 2005, , 130, 1834
Steele, I. et al. 1995, , 272, 630
Terlevich, E. 1987, , 224, 193
Terndrup, D. M, Stauffer, J. R., Pinsonneault, M. H., Sills, A., Yuan, Y., Jones, B. F., Fischer, D., & Krishnamurthi, A. 2000, , 119, 1303
Trumpler, R.J. 1921, Lick Obs. Bull. 10, 110
Ventura, P., Zeppieri, A., Mazzitelli, I., & D’Antona, F. 1998, , 334, 953
Werner, M. et al. 2004, , 154, 309
White, R. E. 2003, , 148, 487
White, R. E. & Bally, J. 1993, , 409, 234
Trumpler (1921) & 3 & 2.5$<B<$14.5 & 174 & Tr
Trumpler (1921) & 24 & 2.5$<B<$10 & 72 & Tr
Hertzsprung (1947) & 4 & 2.5$<V<$15.5 & 247 & HII
Artyukhina (1969) & 60 & 2.5$<B<$12.5 & $\sim$200 & AK
Haro et al. (1982) & 20 & 11$<V<$17.5 & 519 & HCG
van Leeuwen et al. (1986) & 80 & 2.5$<B<$13 & 193 & PELS
Stauffer et al. (1991) & 16 & 14$<V<$18 & 225 & SK
Hambly et al. (1993) & 23 & 10$<I<$17.5 & 440 & HHJ
Pinfield et al. (2000) & 6 & 13.5$<I<$19.5 & 339 & BPL
Adams et al. (2001) & 300 & 8$<Ks<$14.5 & 1200 & ...
Deacon & Hambly (2004) & 75 & 10$<R<$19 & 916 & DH
[^1]: http://www.ipac.caltech.edu/2mass/releases/allsky/doc/explsup.html
[^2]: These isochrones are calculated for the standard $K$ filter, rather than $K_{\rm s}$. However, the difference in the location of the isochrones in these plots because of this should be very slight, and we do not believe our conclusions are significantly affected.
|
tomekkorbak/pile-curse-small
|
ArXiv
|
The need for human connection is told through interweaving stories: A hard-working lawyer is attached to his cell phone, but can't find the time to communicate with his family. An estranged couple uses the internet as a way to escape from their lifeless marriage. A widowed former police officer struggles to raise a son who is cyber-bullying a classmate. An ambitious journalist learns of a teen performing on an adult-only site and sees this as a career-making story.
Rated R for sexual content, some graphic nudity, language, violence and drug use - some involving teens.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Please check out my new user interface and tell me if it sucks - nickl
http://blog.foldertrack.com/?p=231
======
Zeuf
I think it looks a little bit dated. I mean, I think your UI could look more
modern; right now it has a year-2000 feel. Maybe you should take inspiration
from the way OS X presents files. It is very beautiful and user friendly.
~~~
nickl
Thanks. I will try to make a more modern-looking one. Side note: the UI is
just a Paint drawing of what I was thinking. It is not the real UI.
|
tomekkorbak/pile-curse-small
|
HackerNews
|
alpha-Melanocyte-stimulating hormone (alpha-MSH) release from perifused rat hypothalamic slices.
A perifusion system was developed to investigate the control of alpha-melanocyte-stimulating hormone (alpha-MSH) release from rat brain. Hypothalamic slices were perifused with Krebs-Ringer bicarbonate (KRB) medium supplemented with glucose, bacitracin and bovine serum albumin. Fractions were collected every 3 min and alpha-MSH levels were measured by means of a specific and sensitive radioimmunoassay method. Hypothalamic tissue in normal KRB medium released alpha-MSH at a constant rate corresponding to 0.1% of the total hypothalamic content per 3 min. The basal release was not altered by Ca2+ omission in the medium or addition of the sodium channel blocker tetrodotoxin (TTX). Depolarizing agents such as potassium (50 mM) and veratridine (50 microM), which is known to increase Na+ conductance, significantly stimulated alpha-MSH release in a Ca2+-dependent manner. When Na+-channels were blocked by TTX (0.5 microM), the stimulatory effect of veratridine was completely abolished whereas the K+-evoked release was unaffected. These findings suggest that: voltage-dependent sodium channels are present on alpha-MSH hypothalamic neurons; depolarization by K+ induces a marked stimulation of alpha-MSH release; K+- and veratridine-evoked releases are calcium-dependent. Altogether, these data provide evidence for a neurotransmitter or neuromodulator role for alpha-MSH in the rat hypothalamus.
|
tomekkorbak/pile-curse-small
|
PubMed Abstracts
|
My friend, AOA colleague and co-author Mark Blaxill was on the Linderman Unleashed radio show on the Natural News Network this week. He and Curt Linderman Sr. talked about the Canary Party, of which Mark is chairman, the congressional hearing last year and another coming next month, as well as recent controversies within our own autism advocacy community.
Mark says fighting amongst ourselves is misguided, and makes the useful distinction between standing up for oneself against untrue allegations (which he does) and infighting (which he doesn't, we don't, and nobody should). Catch the interview here -- it's the second half hour. Peace, friends.
--Michael Specter doesn't think much of people like us -- people who believe that evidence and experience point clearly to excessive vaccination, and vaccine-type mercury, as the cause of the autism epidemic (which, we also believe, is all too real). Specter wrote his book Denialism in 2009 to make that case, lumping us in with all other manner of supposed unscientific quackery.
Specter was at it again in a talk this month in Canada, preceded by a Q and A in the local paper. "Rejecting science a perilous path, writer argues". The piece begins:
"From an unfounded correlation between vaccines and autism to a spreading fear about genetically modified “Frankenfood,” Michael Specter is a staff writer for The New Yorker who has been documenting what he believes is a dangerous denial of scientific evidence in the world today." (I may be a dangerous know-nothing scientifically speaking, but there's no denying that sentence is not so good English speaking.)
A short flavor of the thing:
Q: What is the danger of having people deny the evidence of science?
A: People who don’t get vaccinated are getting sick. We have measles, whooping cough. These things had disappeared. For a particular parent not to vaccinate their kid is bad, but it also affects my kid, because if you go to school with my kid and you’re not vaccinated you could be infectious.
Q: How much damage is done by celebrities like Jenny McCarthy and Dr. Oz who preach their own take on science?
A: A lot of people who are seemingly intelligent or, in the case of Jenny McCarthy, popular for a reason I couldn’t explain, are looked up to. I don’t think we should live in a society where what a Kardashian says is how we decide to deliver medicine.--
You get the idea. Evidence is everything. Kim Kardashian causes measles. Which reminded me, when I wrote about his book back in 2009, pointing out some evidentiary issues -- i.e., mangled facts, copying Paul Offit's words as his own -- I noted it lacked footnotes but that his website, michaelspecter.com, promised the goods: "Footnotes coming soon".
His website still says that nigh unto four years later, which, let's face it, is not too cool for someone who keeps pounding us for alleged failure to respect the importance of evidence. So I sent him an email this week:
Hi Michael,
I'm writing a piece about your recent critique of the vaccine-autism hypothesis, and saw that your website still says "footnotes coming soon," which is what it said back in 2009. I'm going to point out that they still are not posted, and would welcome a comment from you about this.
Best,
Dan
He wrote back: "Never did it then seemed silly as time went on. I always supply sources when people with legitimate questions ask"
One can only imagine what "legitimate questions" might be, and from whom. Now, the book that the above-cited Mark Blaxill and I wrote had more than 700 of those suckers, and let me tell you they are a pain in the patooti, especially for a first time author like me.
It seems Michael forgot to take his crabby pills -- perhaps because Paul Offit has banned supplements for all right-thinking people? He emailed me again: "Also curious which "recent" critique. I have not altered my position, or approach to that position for years"
Dude, no one accused you of altering anything. I sent him the link -- you know, the evidence, the citation, the footnote, Montreal Gazette, October 15, 2013. Didn't hear anything more.
Let me quit picking on poor Michael now and say something about the idea crystallized in that ungrammatical sentence (op. cit.) that it's dangerous to deny "the evidence of science" because, logically, it causes measles, and, more broadly, creates a class of citizens who will believe just about anything. What's dangerous, in my view, is to talk about the evidence of science as though it were the Teachings of God Almighty declaimed in The Jumbo Book of True Scientific Facts.
As Mark Blaxill (cf.) points out, what there really is, is good evidence and bad evidence. To treat science, small s, developed through the iterative process of guessing and testing by mere mortals, as some Holy Writ, capital H capital W, which only the priestly caste may interpret, suggests a lack of honest to God scientific literacy, a willingness to take the proclamations of Experts on trust, and a contempt for the observations of fellow citizens who may or may not wear the purple robes before whom folks like Michael Specter genuflect.
Abandon Moloch and the false gods of unearned authority, all ye who wish to see the face of truth!
Comments
You can follow this conversation by subscribing to the comment feed for this post.
I think it's horrible that Mark has to spend anytime defending himself. He has done so much in the past and to this day. For those who continue bashing him -- you look ridiculous. It's utter nonsense and my energy - everyone's energy - needs to be on our kids and the arena of prevention, education and helping so many sick kids!
I respect Mark and will forever be grateful for his passion, determination and courage. I also stand by him, always.
Let's not forget to consider the real source of attacks on AofA, David Kirby and other organizations and individuals involved in the vaccine safety movement- Tim Bolen. Bolen has been circling the vaccine injury/autism arena for quite a while, trying to sniff out any disgruntled members who might be longing for a sympathetic ear to listen to their complaints which, believe me, every activist movement in history has in spades due to the fact that they're made up of human beings. And every activist movement in history has also had these scavenger types who look to enter into and take over what they could never create on their own by exploiting the complaints that every human institution will generate.
That's not to say that existing problems within any activist arena might not require attention-- just to say that the scavengers never solve any of them. Scavengers only know how to wreck, not build.
And I've come to believe that Bolen's chief complaint about the existing movement is not what has been openly stated at all but instead is due to the fact that AofA and the existing movement resist being controlled by or attached to politically extreme factions. That hasn't been easy. Every extreme politicized agenda likes to cultivate the cover of a humanist cause and especially to step in at the eleventh hour and play hero for a cause others have built from the ground up. If it's about kids, all the better. So I suspect the real problem Bolen has with existing structures is the existing arena's general political and religious agnosticism, which is the very thing that has made the movement even a little bit bulletproof against the industry black PR campaign.
I have absolutely no idea why anyone would associate with Bolen. I have no idea why Jake Crosby would associate himself with such a figure much less start to borrow from Bolen's spurious style: the implication that, say, David Kirby and Dan Olmsted "helped" vaccine makers by offering principled support for journalists' protected rights to keep sources private (the issue at the center of global controversy today with attacks on Glen Greenwald and Snowden and the prosecution/persecution of whistleblowers like Chelsea Manning and journalists like Barrett Brown, etc.) *even when they fundamentally disagreed with a particular journalist* (in the case of Seidel), is not only sensational, dishonest and cheap on Jake's part but runs afoul of the embattled movement to protect constitutional rights in the US.http://www.autisminvestigated.com/dan-olmsted-lawsuit/ With that claim in particular, Jake has lost any credibility he ever had within wider political freedom/consumer freedom movements that will, by necessity, put protection of the fourth estate above the vaccine injury cause. This is because NO consumer movement would gain any traction without the press's right to shield sources. It's as if Jake would go to anyone willing to support his general complaints or stoop to anything to cause divisions in a bid to get support for himself-- even if the price is to taint and compromise the wider cause and even the overarching cause of constitutional rights by association.
But Jake is really a minor (and badly misused) figure in all this. Since Bolen began randomly gay-bashing on his blog, I've suspected he's either taking money from some radical bigoted Dominionist group that's trying to hijack the vaccine safety issue for political purposes or is auditioning for that kind of backing by echoing prejudiced rhetoric. It's filthy.
Granted the Skeptics have a thread of sexual abuse running through their history starting with Skeptic icon and vaccine defender James Randi, allegations that were dug up more than a decade ago in a lawsuit against Randi and published openly in the news. Bolen didn't uncover those reports or the associated sex tapes-- they've been circulated on the web for years. But as recent allegations against Skeptic Mag founder Michael Shermer and others in the Skeptic ranks show, the alleged abuse isn't exactly gender specific.
If Bolen was trying to create some philosophical allegory between rape and pharmaceutical exploitation of children, you would think he'd discuss the recently alleged rapes and sexual assaults on young women within the Skeptics ranks as well, though there's no mention of that in his blog. It's curious. This isn't to argue whether or not mention of rampant sexual abuse is relevant to a front group's scientific positions, just to say that by deliberately and repeatedly twisting the issue into a stick to bash homosexuality itself, Bolen has made himself and his agenda deeply suspect.
Though Bolen does mention rape threats and death threats against Australian vaccine safety activist Meryl Dorey, even this is spun within the "gay psychopath sex abuser" theme that Bolen repeatedly attaches to the Skeptic front. Even when reporting on the Skeptic-affiliated "False Memory Syndrome Foundation" and its professional defense of accused child molesters, Bolen will start off a discussion of the rape of *female* children by first highlighting the so-called Skeptic "gay agenda." It's quite clear what the takeaway is and one begins to wonder whom he's trying to appeal to with this particular spin. Rape is rape, child molestation is child molestation. To quote Red from the Shawshank redemption about whether the prison rapists were "homosexuals," Red states, "They'd have to be human first."
It seems pretty destructive to use the issue of child sexual assault to promote prejudice and it makes Bolen's site untouchable even for the few issues he addresses that might give his site any value. You can't cite it, you can't share it or link to it without looking like a unprincipled bigot, so why would anyone associate with such a person unless they're also promoting the totally irrelevant hate content that comes with him? You would also think that the rampant Islamophobia and racism displayed by the Skeptics would serve to illustrate corruption within the Skeptics, but Bolen never alludes to it and it's made me wonder if this is because whatever backers Bolen might be appealing to with the homophobic references happen to share Skeptics' Islamophobia and, save for a difference in position regarding vaccination, the Skeptics and whomever Bolen is trying to appeal to are literally two sides of the same prejudiced coin.
I don't know if Bolen's prejudiced rhetoric is sincerely held or he's merely parroting the views of the most promising backers. But it warns that he would twist the life and death cause of vaccine injury itself for expediency, just as he's already tarred serious whistle blowers like Wakefield with the same brush used to defend less scientifically sound paying clients like Hulda Clark.
Maybe someone could argue that, for a muckraker, Bolen is the leper with the most fingers because he's dug up some interesting dirt about Skeptic astroturfers and their affiliated Quackwatch. But the enemy of one's enemies isn't always the best "friend." The vaccine injury movement is terribly embattled, so it would be tempting but short sighted to take whatever support comes along. In this case, the "support" seemed only bent on tearing apart existing structures without bringing anything more meaningful to the table. Does Bolen's blog discuss denial of insurance coverage for affected children? Institutional abuse? The plights of children aging out of the system? The changes to the DSM? Attacks on vaccine exemptions? Specific legal advice in fighting for educational services? A database of helpful practitioners? Reports on new treatments? Exposes on pharmaceutical drugs being pushed on affected individuals?
AofA and affiliated nonprofits haven't attached themselves to some fundamentalist agenda, neither taking the position that the concept of preventive medicine is "the devil" nor attaching it to any extremist platform. It's covered the above issues of insurance reform, school abuse, hyper-drugging, denial of medical care, etc. I'm sure everyone wishes more could be done and the funding is scarce. Is the answer to that to grovel to any prejudice to get backing?
Whatever Bolen does do, it seems dangerously in service to an agenda that could deliver the entire vaccine injury realm into the hands of the "strawman" PR engineers who would like nothing more than to cast the entire movement as some survivalist fundamentalist radical-irrational faction the better to smear it out of existence.
It really should be noted that the Beth Clay/Safeminds report on Poul Thorsen submitted to the hearing is a hard-hitting, well organized document. It cannot have been submitted with the purpose of subverting the committee's focus on vaccine fraud by the CDC, and may well have been what lay behind Congressman Posey's trenchant comment.
Revisionist history and Monday morning quarterbacking help no children - Mark Blaxill has spent over a decade trying to help his daughter - and my children - countless dollars of his own money, logged thousands of air miles, has put himself in the spotlight even when the glare was blinding, taken hit after hit from all sides and remained steadfast. Many well to do autism parents quietly work behind the scenes only for their own kids or donate simply to big Blue as if that's enough. Not Mark. If you dislike his work - it's a big playing field - suit up - the clock is running.
Jake has now accused Mark Blaxill, David Kirby and Dan Olmsted of working against the causation model of mercury and autism. Any objective person would look at this and see how ridiculous it is. These men have worked to popularize and bring these ideas to the mainstream. These are the people that investigated, took their personal time and resources to try and create change. Jake has an opinion. Opinions are fine, but he is manipulating information to fit his narrative. He is now making slanderous accusations and trying to ruin reputations. The people that reinforce and support him should should be ashamed, especially the parents who should know better. True change will come when we are all moving forward. I hope everyone is sick of this diversion. This is a total waste of time and energy. This infighting does nothing for our children.
By the way after I did some thinking on this -- plenty of time to think when doing mindless chopping of celery -- only thinking here is should I ferment it while it is in season???
But the subject of the next Congressional hearing this November would be exactly what CONGRESS would decide upon.
After all the Compensation vaccine court was Congress baby in 1986 -- that was their answer in 1986 --
and at the end of the 2003 Congressional hearing it was their answer again -- they sounded so kind, did they not, the JERKS, as they called for the vaccine court to get it all cleaned up.
And it will be their answer again this time around.
Hey we are compensating these guys ---and we are making sure it works -- look how we blistered these lawyers ears to get it to work!
They can't compensate all the people they have damaged. They are beyond counting, if they ever get counted. There are some mild ones out there. Not mild enough that it won't be a disability, just that most can walk and chew gum at the same time.
Jake knows so many people's names and details it is impossible for little ol'e me to get the feel of it.
There were however some criticism by Jake of Mark that I know is unfair.
I know that Poul Thorsen was mentioned -- by the Congressman himself -- and in a way that was more shocking than anything Mark Blaxill could have delivered in his 3-minute (or was it 5?) speech. So that would be a waste of Mark's time.
The thing that I remember from Mark's speech were the words "you cannot have a genetic epidemic".
Which is what my whole damn community needs to hear --
In this next meeting coming up in Congress next month -- does anyone know how and why it came to be on the subject of the National Vaccine Compensation Plan?
If it was Safe Minds or the Canary Party that were able to get this choice - which -- I don't know -- how much influence -- but say if they did, if they could ---
Would it be because they think either:
If the federal vaccine court was made to work - that there would be so many to be compensated -- that it would bankrupt the system and break it?
I don't know -- there have been so many that have not even been able to file -- and the milder cases of autism that won't file or their families don't have a clue -- is it okay to not compensate a mild brain injury and only the severe? Sorry, I know what severe means and I know many severe are not being compensated either.
Or that Congress would learn that this system has been abused, and that Congress has warned them about it a decade ago to get the mess cleaned up and didn't. So, Congress will start a process to get it abolished. Something by the way that the Supreme Court would not do and in which God will condemn their souls for that decision.
So the first step is to either make the Vaccine court work or get rid of it and the rest will follow???
I have never known Mark to be anything but up front, honest, and 100% dedicated to our cause. He is the James Brown of Autism Advocacy (the hardest working man in the biz). I can't think of any other individual that has done more. He got me interested in getting involved back in 99, when I started doing my small part here in California.
He is a good guy.
Since I began working with mark a decade ago, he has impressed on me two goals -- find the truth and help sick kids. Oh, and epidemics are simple. Based on that, and encouragement from Bernie rimland who held those same beliefs, I've thrown myself into this issue ever since. Any idea that mark (or myself, for that matter) has acted otherwise is, I know for a fact, false. In fact, it's delusional -- a fixed belief system for which there is no evidence. Let's not settle for delusions. Let's find the truth and help sick kids.
I finally had the time to catch up on the interview of Mark Blaxill and Linderman.
I guess I will compare this to the separation of Paul and Barnabas
A dispute that developed between these Christian brothers.
Disagreements that Do Not Involve Doctrine
This dissension between Paul and Barnabas was not over a doctrinal issue. The rupture involved a personal dispute based upon a judgment call. To their credit, neither Paul nor Barnabas let the conflict distract them from their respective efforts of spreading the gospel.
Good brethren will disagree in matters of opinion. The important thing is to keep focused on doing the will of Christ. That is what Paul and Barnabas both did. As a result, perhaps even more work was accomplished for the Lord because of the manner in which their disagreement was handled.
They parted ways and never saw each other again. But in the parting they covered more ground.
If he posted "footnotes coming soon" then he should post the footnotes, preferably soon. And not just for those asking questions. I would think since he is so big into sources of information, he'd want to post them.
But the most important thing is that measles and pertussis are nearly always fairly mild diseases, have both become much milder in the past century, and having the natural diseases confers benefits both in terms of gaining permanent immunity and educating the immune system to become stronger and more competent. Who cares who gave what to whom, or who had or did not have the vaccines or the boosters, or if they were still effective, had lost effectiveness, or were never effective to start with? Everyone should be free to get and recover from these diseases: this was nature's plan for creating optimally healthy populations, and it's what we need to get back to.
I wonder what makes it so hard for people such as this Specter to think for themselves. And if you say you are posting footnotes you should do so. Unless you really can't back up your position or raving as the case may be.
The impression created by such responses to vaccine concerns is that footnotes, documentation, gold standard research, testing with a control group, that kind of thing doesn't help the case of there's-nothing-to-see-here-just-take-your-shots proponents. Not really reassuring.
Interesting how Spector jumps from Jenny McCarthy and Dr. Oz to Kardashian, as if the brains of the popular and attractive are interchangeable. How insulting to them and how shallow of Spector.
To Mr. Spector: Jenny McCarthy is popular because she is just as beautiful a person on the inside as she is on the outside.. She is talented, intelligent, funny, courageous, a best selling author, and has helped and continues to help many to cope with a terrible illness by sharing her experience and founding Generation Rescue. How many charitable organizations have you established?
And never mind that Dr. Oz is a cardiothoracic surgeon at New York-Presbyterian Hospital/Columbia. According to Spector, he's "just" an irrelevant celebrity who shouldn't have his "own take" on science.
"What's dangerous, in my view, is to talk about the evidence of science as though it were the Teachings of God Almighty declaimed in The Jumbo Book of True Scientific Facts."
Last month The Dallas Morning News ran a front page article. There was an outbreak of whooping cough, and 86% of the people coming down with whooping cough were FULLY vaccinated. Facts do get in the way of a good story sometimes, don't they?
A: People who don’t get vaccinated are getting sick. We have measles, whooping cough. These things had disappeared. For a particular parent not to vaccinate their kid is bad, but it also affects my kid, because if you go to school with my kid and you’re not vaccinated you could be infectious."
Scientifically speaking .. if your kid is vaccinated .. why should you be concerned about the unvaccinated kid who could be infectious?
Unless .. scientifically speaking .. you are in denial regarding the efficiency of the vaccines your child received .. and .. you fully recognize .. scientifically speaking .. the vaccines do not protect your child effectively as public health officials pretend they do.
And so .. if anyone deserves the label of "denialist" .. it is Mr. Spector .. who religiously believes vaccines protect children from infectious diseases .. yet .. acknowledges his vaccinated child has the very same opportunity to contract an infectious disease as the unvaccinated child.
Admittedly, that is simply common sense .. not .. to be confused with scientific gobblygook.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Q:
Wordpress on nginx can't create files
I'm running WordPress with HHVM + nginx and wonder why WordPress can't create files and isn't allowed to write to directories, even if they are at CHMOD 777.
I'm using W3 Total Cache Plugin and get this message:
But wp-content is at 777 (for testing purposes). What's wrong with the server configuration?
A:
Nginx needs the files to be owned by the nginx user and group, not the apache user.
Nginx does not have permission to write, so give it the correct ownership.
Here is the correct command:
chown -R nginx:nginx /var/www/chefgrill.de
This will allow nginx to write.
|
tomekkorbak/pile-curse-small
|
StackExchange
|
INTRODUCTION
============
Frontotemporal dementia (FTD) is a heterogeneous clinicopathological syndrome with progressive degeneration of the frontal lobes, anterior temporal lobes, or both. FTD patients make up about 10% of all patients with dementing diseases. Because FTD is usually a presenile onset disorder, it accounts for approximately 20% of neurodegenerative dementias among dementia patients with age at onset of less than 65 years.^[@r1]-[@r3]^ In Brazil, FTD accounts for about 5% of presenile dementia cases.^[@r4]^
The characterization of the clinical types of FTD has evolved from the first consensus on diagnostic criteria (Lund and Manchester research criteria, 1994)^[@r5]^ with three FTD symptom constellations: [1] behavioral symptoms, [2] affective symptoms, and [3] speech disorder, to the Neary and colleagues (1998)^[@r6]^ diagnostic criteria encompassing three distinct clinical variants that can be distinguished based on the early and predominant symptoms: a behavioral variant (bvFTLD) and two language variants (semantic dementia and progressive nonfluent aphasia), and finally to the two 2011 consensus diagnostic criteria^[@r7],[@r8]^ establishing four different subdivisions: [1] a frontal or behavioral variant (bvFTLD); [2] SD or Semantic variant Progressive Primary Aphasia (PPA-semantic); [3] PNFA or Nonfluent/agrammatic variant PPA (PPA-agrammatic); and [4] logopenic progressive aphasia or Logopenic variant PPA (PPA-logopenic).
FTD is often misdiagnosed and, among the other neurodegenerative disorders, is commonly mistaken for Alzheimer's disease (AD).^[@r9],[@r10]^ The main difference between the two types of dementia is the presence of changes in personality, motivation, social interaction and organizational abilities, alongside well-preserved memory and visuospatial abilities, in FTD. On the other hand, AD is characterized by a progressive amnestic disorder with episodic and semantic memory deficit, followed by breakdown in other attentional, perceptual and visuospatial abilities.^[@r11]^ Many of FTD's initial symptoms, albeit behavior or language related, are compatible with a range of neurologic disorders and, because FTD often affects people in midlife, it is also frequently mistaken for primary psychiatric disorders such as depression or psychosis.^[@r10],[@r12],[@r13]^
In bvFTD, neuropsychiatric changes are the most prominent symptoms and usually precede or overshadow cognitive disabilities, whereby changes in personality and behavior observed by the family often go unnoticed by the majority of patients. Suspicion of FTD arises when there is a gradual personality change and frontotemporal abnormalities on neuroimaging, particularly frontotemporal hypometabolism.^[@r1],[@r8],[@r12]^
Early diagnosis of FTD is critical for developing management strategies and interventions, but clinicians or general practitioners continue to have difficulty diagnosing early FTD. Without a definitive clinical test, the early diagnosis of FTD can be challenging. Consequently, patients with FTD can go from physician to physician delaying diagnosis and jeopardizing therapy. Despite this diagnostic confusion, there is scant data on the accuracy of a clinical evaluation for FTD.^[@r13]^
The objective of this study was to analyze variables associated with misdiagnosis in FTD and AD patients, and in a group without neurodegenerative disorders (WND). All patients were evaluated for behavioral and/or cognitive complaints.
METHODS
=======
A case-control study including 10 patients with FTD, 10 patients with probable AD and 10 patients WND was carried out. All patients were selected during the same period (August/2009 to August/2011), and the samples were balanced by applying the same entry criteria over that period, limiting inclusion to "typical" cases in each category after expert evaluation. "Typical" cases were defined as those patients who were evaluated for the first time at the specialized outpatient clinic and presented sufficient clinical and laboratory data to fulfill AD diagnostic criteria or to exclude the presence of neurodegenerative disorders. During the period of the study, 100 new (first) evaluations were carried out, and 10 AD and 10 WND patients were found among these subjects. The patients were selected from the Dementia Clinic of the Hospital de Clínicas de Porto Alegre (HCPA).
The 1998 consensus diagnostic criteria for FTD were applied,^[@r6]^ and the National Institute of Neurologic and Communicative Diseases and Vascular Cerebral Accident and Alzheimer Disease Related Association (NINCDS-ADRDA) criteria were used for probable AD.^[@r14]^
The studied variables were disease duration, reason for referral, former diagnosis, behavioral and cognitive symptoms at the specialist evaluation, MMSE score at the specialist evaluation, and follow-up outcome. Severity of disease (CDR scale) and use of cholinesterase inhibitors and/or memantine were also recorded.
This study was approved by the Human Research Ethics Committee and signed consent was obtained from all patients or a proxy.
The continuous variables were expressed as mean and standard deviation and analyzed by one-way ANOVA with the Bonferroni *post hoc* test. The categorical variables were expressed in absolute and relative frequency, and data were analyzed by Pearson's Chi-Square test. Results were considered statistically significant at p<0.05.
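For illustration only, comparisons of this kind can be reproduced with standard statistical software; the sketch below uses hypothetical numbers, not the study data, and the scipy routines shown are one possible implementation of the tests named above.

```python
import numpy as np
from scipy import stats

# Hypothetical example data (ages), not the study's measurements.
ftd = np.array([65, 63, 70, 66, 68])
ad = np.array([79, 77, 80, 76, 81])
wnd = np.array([66, 64, 69, 65, 67])

# One-way ANOVA across the three groups.
f_stat, p_anova = stats.f_oneway(ftd, ad, wnd)

# Bonferroni-corrected pairwise comparisons (three tests).
pairs = [(ftd, ad), (ftd, wnd), (ad, wnd)]
p_bonferroni = [min(1.0, stats.ttest_ind(a, b).pvalue * len(pairs)) for a, b in pairs]

# Chi-square test on a hypothetical 2x3 contingency table (e.g. sex by group).
table = np.array([[5, 3, 2],
                  [5, 7, 8]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
```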
RESULTS
=======
Demographic and clinical data of the samples are given in Table 1. Patients with AD were older than FTD patients and WND patients. AD and FTD patients had significantly lower MMSE scores. FTD patients and WND patients showed longer disease duration than AD patients.
######
Demographic and clinical data of the groups studied
  Variables          FTD (n=10)                   AD (n=10)         WND (n=10)
  ------------------ ---------------------------- ----------------- ------------------------------------------------------------
  Age                65.80±2.13^a^                78.30±2.13^a,b^   66.30±2.13^b^
  Sex (male)         5 (55.6%)                    3 (30%)           2 (20%)
  Education          10.70±1.63                   4.89±1.72         4.90±1.63
  MMSE               11.19±1.35^c^                13.67±1.13^d^     26.31±0.94^c,d^
  Disease duration   4.40±0.66^e^                 1.80±0.66^e,f^    4.33±0.70^f^
  Diagnosis          bvFTD 4 (40%); PPA 6 (60%)   AD 10 (100%)      Depression 7 (70%); No disorder 2 (20%); Undefined 1 (10%)
FTD: Frontotemporal Dementia; AD: Alzheimer's disease; WND: without neurodegenerative disorders; bvFTD: behavioral variant of Frontotemporal Dementia; PPA: primary progressive aphasia; MMSE: Mini-Mental State Examination;
p=0.001 (Bonferroni *post hoc*);
p\<0.001(Bonferroni *post hoc*);
p=0.03 (Bonferroni *post hoc*);
p=0.042 (Bonferroni *post hoc*).
All AD patients and 90% of the WND patients were referred due to memory problems, while FTD patients were referred due to behavioral (30%), memory (30%) and memory plus language (30%) problems. Eighty percent of AD patients arrived for evaluation at the specialized clinic with no diagnosis. FTD patients arrived with the diagnosis of AD (30%), depression (20%), and mania (20%). WND patients were referred with suspected depression (30%), no diagnosis (30%), and AD (20%) (Table 2). No diagnosis (40%) and AD (23%) were the most frequent diagnoses based on results of the evaluation of the three groups.
######
Reason for referral, former diagnosis and follow-up outcome of the groups studied.
  Variables                                                          FTD (n=9)   AD (n=10)   WND (n=10)
  ---------------------------- ------------------------------------- ----------- ----------- ------------
  **Reason for referral\***    Behavioral                            3 (30%)     0 (0%)      1 (10%)
                               Communication                         1 (10%)     0 (0%)      0 (0%)
                               Memory                                3 (30%)     10 (100%)   9 (90%)
                               Memory and communication              3 (30%)     0 (0%)      0 (0%)
  **Former diagnosis\*\***     Without diagnosis                     1 (10%)     8 (80%)     3 (30%)
                               Depression                            2 (20%)     0 (0%)      3 (30%)
                               Mania                                 2 (20%)     0 (0%)      0 (0%)
                               Alzheimer's disease                   3 (30%)     2 (20%)     2 (20%)
                               Psychotic disorders                   0 (0%)      0 (0%)      1 (10%)
                               Frontotemporal dementia               1 (10%)     0 (0%)      0 (0%)
                               Indefinite                            1 (10%)     0 (0%)      1 (10%)
  **Follow-up outcome\*\*\***  Death                                 2 (20%)     0 (0%)      0 (0%)
                               Institutionalization with worsening   2 (20%)     0 (0%)      0 (0%)
                               Remained at clinic with worsening     4 (40%)     6 (60%)     0 (0%)
                               Remained at clinic stable             2 (20%)     4 (40%)     0 (0%)
                               Discharged                            0 (0%)      0 (0%)      10 (100%)
FTD: Frontotemporal Dementia; AD: Alzheimer's disease; WND: without neurodegenerative disorders;
p=0.017;
p=0.099;
p=0.000 (Pearson\'s Chi-square).
After a 12-month period of follow-up at the clinic, 60% of AD patients were still followed and worsened. All WND patients were discharged from the Dementia Clinic and referred for specific management when necessary. Of the FTD patients, 40% were followed and worsened, 20% were institutionalized and worsened, and 20% died (Table 2).
Seventy percent of AD patients and 90% of the WND patients presented memory symptoms at the specialist evaluation, while FTD patients presented language (30%), memory (20%) and memory plus language (20%) symptoms. Of the 10 AD patients, 7 presented no behavioral symptoms. Half of the WND patients presented depressive or anxious symptoms. FTD patients presented no behavioral symptoms (40%), depressive or anxious symptoms (20%), loss of social adequacy (20%) and psychotic symptoms (20%) (Table 3).
######
Cognitive and behavioral symptoms at specialist evaluation of the groups studied.
  Variables                                              FTD (n=9)   AD (n=10)   WND (n=10)
  ------------------------------------------------------ -------------------------------------------------------------------------------- ----------- ----------- ------------
  **Cognitive symptoms at specialist evaluation\***      Absent or not reported                                                           1 (10%)     0 (0%)      1 (10%)
                                                          Language                                                                         3 (30%)     0 (0%)      0 (0%)
                                                          Memory                                                                           2 (20%)     7 (70%)     9 (90%)
                                                          Critical judgment and abstraction                                                1 (10%)     0 (0%)      0 (0%)
                                                          Memory and orientation                                                           1 (10%)     3 (30%)     0 (0%)
                                                          Language and memory                                                              2 (20%)     0 (0%)      0 (0%)
  **Behavioral symptoms at specialist evaluation\*\***   Absent or not reported                                                           4 (40%)     7 (70%)     4 (40%)
                                                          Depressive or anxious                                                            2 (20%)     1 (10%)     5 (50%)
                                                          Loss of critical judgment, social inadequacy (loss of insight and judgment)     2 (20%)     0 (0%)      0 (0%)
                                                          Psychotic symptoms                                                               2 (20%)     2 (20%)     0 (0%)
                                                          Depressive and psychotic symptoms                                                0 (0%)      0 (0%)      1 (10%)
FTD: Frontotemporal Dementia; AD: Alzheimer's disease; WND: without neurodegenerative disorders;
p=0.022;
p=0,132 (Pearson\'s Chi-square).
According to the CDR scale, 29% of FTD patients were moderate, 57% severe, and 14% no dementia. In the AD group, 75% were mild and 25% moderate. None of the WND patients had dementia (CDR=0).
Of the 10 patients with FTD, five were using cholinesterase inhibitors and two, memantine. Only one of the AD patients had previously received a cholinesterase inhibitor. One WND patient was using memantine.
DISCUSSION
==========
The present study was carried out to evaluate the variables associated with misdiagnosis of FTD and AD in patients evaluated for behavioral and cognitive symptoms. Our main findings were a high rate of misdiagnosis prior to the specialist visit, longer duration of symptoms until specialized evaluation in both FTD and WND groups, and that AD was the main misdiagnosis in the FTD group.
The FTD group had the highest rate of misdiagnosis, with AD (30%), and psychiatric disorder (40%): depression (20%) and mania (20%), as main diagnostic categories. Similarly, in a previous Brazilian study, the most frequent misdiagnosis among FTD patients was psychiatric disorder followed by AD (Bahia, 2007). In general, the present study corroborated the finding of frequent misdiagnosis among FTD patients, especially confounding with AD and psychiatric diseases, observed in previous reports.^[@r3],[@r9],[@r10],[@r12],[@r13],[@r15]^ The misdiagnosis with AD may be related to the difficulty encountered by clinicians and general practitioners in differentiating these dementias during the initial manifestation. Evidence on AD diagnostic criteria (NINCDS-ADRDA) has shown good sensitivity but poor specificity, contributing to diagnosis of other dementias such as FTD.^[@r16]^ The diagnostic value of the FTD consensus diagnostic criteria of Neary et al. (1998)^[@r6]^ showed high specificity and low sensitivity.^[@r13]^ This finding may also contribute to the difficulty in correctly assigning an FTD diagnosis and in including the AD diagnostic criteria.
In our study, patients with FTD had average delays of 4.4 years before receiving specialized care, which is similar to the average found by another Brazilian study (4.1 years).^[@r17]^ The delay for the expert evaluation was not only representative of this Southern referral center in Brazil, but has been reported for pre-senile dementias in general.^[@r4],[@r17]^ This delay can jeopardize proper treatment and management of these patients and their families.
WND patients also had a long duration of symptoms (4.3 years); the delay in accurate diagnosis probably arose because most presented memory complaints (90%) but had no other feature fulfilling the criteria for dementia. The specialist evaluation found depression in 70% of these patients. Depressive patients often have memory deficits in the absence of other cognitive impairments.^[@r18]-[@r20]^ However, this evidence seems to be poorly disseminated among physicians, since most tend to attribute memory complaints exclusively to AD. Therefore, these patients received an incorrect diagnosis, delaying proper treatment.
In the FTD group, the most consistent reason for referral was memory and behavior, followed by memory plus language problems. It seems that when these patients were not evaluated at specialized clinics complaints tended to be attributed to the diagnosis of AD and psychiatric disorders. A careful analysis of complaints and first symptoms could help reach proper diagnosis. The initial symptoms are extremely important for the differential diagnosis between different types of dementia and are important to fulfill diagnostic criteria.^[@r6]-[@r8],[@r13],[@r14]^
The cognitive symptoms reported in the specialist evaluation showed higher variability in the FTD group than in the other groups. The most frequent symptoms in FTD were language followed by memory problems. However, the FTD group was composed of more than one variant, allowing different cognitive manifestations. The core feature of the cognitive domain in PPAs is language, which is impaired early in the course of the disease. In initial phases, memory impairment is usually phonological and semantic, sparing episodic and visual memory as well as visuoperceptual abilities.^[@r7],[@r21]^ At onset of disease, bvFTD patients can present relatively preserved performance on formal neuropsychological tests despite the presence of significant changes in personality and behavior.^[@r8],[@r22]^ Impairment of executive function and a relative sparing of memory and visuospatial function can also be observed.^[@r8],[@r22]^ Thus, cognitive symptoms found in our group of FTD patients were present according to the variants that comprised the group, such as bvFTD and PPAs, and according to the longer duration of the disease (i.e., more severe stages according to the CDR scale).
FTD patients showed worse outcomes after the 12-month period of follow-up (with institutionalization/worsening or death) than the other groups. In Brazil, another study had associated institutionalization with an unfavorable clinical course.^[@r23]^ Higher rates of behavioral and cognitive impairment, and higher degree of dependence in dementia, were also associated with higher rates of institutionalization.^[@r23]-[@r25]^ The worse outcomes observed in this study were correlated with delay in receiving proper diagnosis, which caused longer inadequate treatment, disease worsening, management difficulty of patients by family members, and institutionalization.
Additionally, we observed that most FTD patients received non-evidence-based treatment, probably related to misdiagnosis (especially AD).
Limitations of the study were the small sample size of the groups studied and the use of the 1998 FTD diagnostic criteria. The small sample size was a consequence of the lower rates of FTD patients evaluated in dementia centers;^[@r26]^ consequently the present results should be interpreted cautiously. The 1998 FTD diagnostic criteria were applied since the current criteria were published in August 2011, after the present study period.
The results of this study pointed to the existence of difficulty by physicians (especially clinicians and general practitioners) in recognizing the main features of FTD and psychiatric disorders with memory impairment. Consequently, these professionals also delayed early referral to specialized centers and administering of appropriate treatment. We also observed that clinicians better recognized and dealt with AD. However, physicians tended to generalize memory complaints toward a single diagnosis, identifying almost all these patients as AD or leaving them undiagnosed. These findings suggest that patients with FTD evolved to worse outcomes than the other patients studied. Thus, diagnostic criteria and differential aspects of the diseases that cause cognitive impairment and dementia should be more widely disseminated.
Disclosure: The authors report no conflicts of interest.
|
tomekkorbak/pile-curse-small
|
PubMed Central
|
Overexpression of the yeast transcriptional activator ADR1 induces mutation of the mitochondrial genome.
It was previously observed that increased dosages of the ADR1 gene, which encodes a yeast transcriptional activator required for alcohol dehydrogenase II (ADH II) expression, cause a decreased rate of growth in medium containing ethanol as the carbon source. Here we show that observed reduction in growth rate is mediated by the ADR1 protein which, when overexpressed, increases the frequency of cytoplasmic petites. Unlike previously characterized mutations known to potentiate petite formation, the ADR1 effect is dominant, with the petite frequency rising concomitantly with increasing ADR1 dosage. The ability of ADR1 to increase the frequency of mitochondrial mutation is correlated with its ability to activate ADH II transcription but is independent of the level of ADH II being expressed. Based on restoration tests using characterized mit- strains, ADR1 appears to cause non-specific deletions within the mitochondrial genome to produce rho- petites. Pedigree analysis of ADR1-overproducing strains indicates that only daughter cells become petite. This pattern is analogous to that observed for petite induction by growth at elevated temperature and by treatment with the acridine dye euflavine. One strain resistant to ADR1-induced petite formation displayed cross-resistance to petite mutation by growth at elevated temperature and euflavine treatment, yet was susceptible to petite induction by ethidium bromide. These results suggest that ADR1 overexpression disrupts the fidelity of mitochondrial DNA replication or repair.
Transcript
1.
Album of the American colonies
By: Bryan Cooper
2.
AFRICANS
Most Africans were shipped from Africa to America by the Spanish, Portuguese, French, English, and Dutch. Slavery was a commodity that helped leverage time and profits. Through generations in America, many Africans had children (home-grown slaves), and in some cases some were able to buy their freedom. Through breeding with non-Africans, their offspring would have lighter skin color, called "morisca" in Spanish or mulatto today. Many African women became concubines of slave masters. This was tolerated, but if a white woman had sex with an African man there was hell to pay!
3.
Africans (continued)
By the eve of the American Revolution, African slaves constituted about 40% of the population of the southern mainland colonies, with the highest concentration in South Carolina, where well over half the population were slaves. In 1712 in New York City, two dozen slaves set fire to a building & killed fleeing whites. Soldiers subdued them, and the punishments ranged from burning at the stake, to the gallows, to breaking bones on the wheel until dead, to starving to death. In the North American colonies during the 18th century, African slaves were a small minority: around 2% in New England and around 8% in the middle colonies. In 1808, importing slaves became illegal in North America.
4.
Spanish
Christopher Columbus was sponsored by Isabella of Castile (Spain) to find a better path to the West Indies. After Columbus found the New World for Spain, the Portuguese wanted their part, so at Tordesillas the Pope allotted everything west to Spain, while the east went to Portugal. Spain expanded from Chile up to California. Francisco Pizarro led the conquest of the Incan Empire in 1509-1535, and Hernan Cortes conquered Mexico in 1518-1522; the conquistadors were more a volunteer militia than an organized military. They had to supply their own materials, weapons and horses.
5.
Spanish (continued)
The death of the natives from wars, guns, and swords was nothing compared to the deaths from the new diseases introduced by the Spaniards, slaves, and their animals, which killed about 70% to 80% of all the natives of South and Central America. In 1513 Juan Ponce de Leon, governor of Puerto Rico, searched for land north of Cuba believing he might find gold and maybe Indians to enslave. He found Florida, found no gold, but did find Indians.
6.
Portuguese
Shortly after the Spanish began commercial sugar production in Hispaniola around 1500, the Portuguese followed in Brazil. Rice proved to be the best food for the slaves, providing enough nutrition at low cost. Portugal was the leading country in exploration among the European countries. Due to the Treaty of Tordesillas, Brazil was colonized, but attempts to colonize North America failed. The degree of discipline imposed on slaves rose and fell with the pressure to keep ahead of the market prices for sugar. Deaths on the plantations were not uncommon, due to exhaustion and infections plus the loss of hope for freedom.
7.
French
French colonization began in the 16th century, with claims in North America, some Caribbean islands, and South America. Most of the colonies developed to export products such as sugar, furs, and fish. Quebec and Montreal became cities from this colonization.
8.
The island of St. Domingue (today called Haiti), part of Hispaniola, is where the French raised sugarcane. The slaves rebelled, and out of hundreds of slave rebellions in the New World, only this one worked out in favor of the slaves.
9.
The French and Indian War was fought between Great Britain and France in North America from 1754 to 1763. This was known as the Seven Years' War. It was fought from Virginia to Nova Scotia.
DUTCH
The first Dutch colonization in the Americas touched only a few places, but they retained possession of Suriname, Aruba, and the Netherlands Antilles. In the Caribbean the Dutch colonized St. Croix and Tobago and half of Sint Maarten, and several other islands were captured and fortified against the Spanish. Timber & salt were the resources wanted there. During the Dutch's brief occupation of northeast Brazil, they encountered a more efficient mill for juicing sugarcane. They brought it to their Caribbean plantations, and the British were able to learn about this new technology from the Dutch. Henry Hudson, while commissioned to find a new passageway through to the Pacific, found the Hudson River, and Adriaen Block got recognition for Block Island near Long Island. After the area was mapped in 1609, the Indians killed all the Dutch traders.
10.
English
English colonization of the Americas began in the late 16th century, and the English came to rival the Spanish in military and economic might. Britain's biggest foes turned out to be their own colonists, the French, and the Indians. After the American War of Independence, the thirteen colonies became independent (declared July 4th, 1776). Since then, two countries in North America, ten in the Caribbean, and one in South America have received their independence from the United Kingdom. There were three types of colonies: proprietary colonies, royal colonies, and charter colonies. An example of a proprietary colony is one under the "Virginia Company", which created the first successful English settlement at Jamestown and the second at St. George's in Bermuda.
11.
Native Americans
The natives, or indigenous people, from South America to North America were physically taken over. I call it a hostile takeover, and basically it was a criminal act. The worst devastation didn't come from the gun or the sword or from stealing one's land; it came from new diseases brought from Europe. From the Aztecs to the Algonquian tribes, around 60% to 80% perished. Thanks to a higher technology of weapons, from the conquistadors to the British in Jamestown the colonizers had a huge advantage with guns and swords. Because they had the upper hand, Indians like "Squanto" (the last of the Patuxet Indians) were kidnapped and sold as slaves; Squanto, however, learned the English language and religion and became a translator for the British, helping to form treaties that may have been close to impossible to reach without him.
12.
SOURCES
The Jesuit Relations
American Colonies: The Settling of North America
Struggle & Survival in Colonial America
The Aztecs: Rise and Fall of an Empire
JSTOR, http://jstor.org/stable/2562638
Monday, 31 December 2012
One can safely assume nothing else important will happen this year... so let's wrap up. Here are the greatest moments of the year 2012, from the point of view of an obscure particle physics blog.
Higgs boson discovery
This one is obvious: the Higgs tops the ranking not only on Résonaances, but also on BBC and National Enquirer. So much has been said about the boson, but let me point out one amusing though rarely discussed aspect: as of this year we have one more fundamental force. The 5 currently known fundamental forces are 1) gravitational, 2) electromagnetic, 3) weak, 4) strong, and 5) Higgs interactions. The Higgs force is attractive and proportional to the masses of interacting particles (much like gravity) but manifests itself only at very short distances of order 10^-18 meters. From the microscopic point of view, the Higgs force is different from all the others in that it is mediated by a spinless particle. Résonaances offers a signed T-shirt to the first experimental group that will directly measure this new force.
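To put a rough formula behind that statement: treating the Higgs like any other force carrier, the static potential between two fermions of masses m1 and m2 has the usual Yukawa form (schematically, with v ≈ 246 GeV the Higgs vacuum expectation value),
V(r) ≈ -(m1 m2/v^2) Exp[-m_h r]/(4π r),
so the force is indeed attractive and proportional to the masses, and its range 1/m_h ≈ 0.2 GeV·fm / 125 GeV ≈ 1.6*10^-18 meters is what sets the quoted distance scale.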
The Higgs diphoton rate
Somewhat disappointingly, the Higgs boson turned out to look very much as predicted by the current theory. The only glitch so far is the rate at which it decays to photon pairs. Currently, the ATLAS experiment measures a value 80% larger than the standard model prediction, while CMS also finds it a bit too large, at least officially. If this were true, the most likely explanation would be a new charged particle with a mass of order 100 GeV and a large coupling to the Higgs. At least until the next Higgs update in March we can keep a glimmer of hope that the standard model is not a complete theory of the weak scale...
Theta-1-3
Actually, the year 2012 was so kind as to present us not with one but with two fundamental parameters. Besides the Higgs boson mass, we also learned one entry in the neutrino mixing matrix, the so-called θ_13 mixing angle. This parameter controls, among other things, how often the electron neutrino transforms into other neutrino species. It was pinpointed this year by the reactor neutrino experiment Daya Bay, which measured θ_13 to be about 9 degrees - a rather uninspired value. The sign of the times: the first prize was snatched by the Chinese (Daya Bay), winning by a hair before the Koreans (RENO), and leaving far behind the Japanese (T2K), the Americans (MINOS), and the French (Double-CHOOZ). The center of gravity might be shifting...
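For orientation, in a reactor experiment like Daya Bay the measured quantity is essentially the electron antineutrino survival probability at baseline L and energy E; neglecting the small solar mass splitting it reads, schematically,
P(ν̄e → ν̄e) ≈ 1 - sin^2(2θ_13) sin^2(Δm^2_31 L/4E),
and the observed deficit corresponds to sin^2(2θ_13) ≈ 0.09, that is θ_13 ≈ 9 degrees.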
Fermi line
Dark matter is there in our galaxy, but it's very difficult to see its manifestations other than the gravitational attraction. One smoking-gun signature would be a monochromatic gamma-ray line from the process of dark matter annihilation into photon pairs. And, lo and behold, a narrow spectral feature near 130 GeV was found in the data collected by the Fermi gamma-ray observatory. This was first pointed out by an independent analysis, and later confirmed (although using a less optimistic wording) by the collaboration itself. If this was truly a signal of dark matter, it would be even more important than the Higgs discovery. However past experience has taught us to be pessimistic, and we'd rather suspect a nasty instrumental effect to be responsible for the observed feature. Time will tell...
Bs-to-μμ
This year the LHCb experiment finally pinpointed the super-rare process of the Bs meson decaying into a muon pair. The measured branching fraction is about 3 in a billion, close to what was predicted. The impact of this result on theory was a bit overhyped, but it's anyway an impressive precision test. Even if "The standard model works, bitches" is not really the message we wanted to hear...
Pioneer anomaly
A little something for dessert: one long-standing mystery was ultimately solved this year. We knew all along that the thermal emission from Pioneer's radioisotope generators could easily be responsible for the anomalous deceleration of the spacecraft, but this was cleanly demonstrated only this year. So, one less mystery, and no blatant violation of Einstein's gravity in our solar system...
Thursday, 13 December 2012
For the annual December CERN council meeting the ATLAS experiment provided an update of the Higgs searches in the γγ and ZZ→4 leptons channels. The most interesting thing about the HCP update a month ago was why these most sensitive channels were *not* updated (also CMS chose not to update γγ). Now we can see why. The ATLAS analyses in these channels return best-fit Higgs masses that differ by more than 3 GeV: 123.5 GeV for ZZ and 126.6 GeV for γγ, which is much more than the estimated resolution of about 1 GeV. The tension between these 2 results is estimated to be 2.7σ. Apparently, ATLAS spent the last month searching for systematic errors that might be responsible for the discrepancy but, having found nothing, they decided to go public.
One may be tempted to interpret the twin peaks as 2 separate Higgs-like particles. However, in this case they most likely signal a systematic problem rather than interesting physics. First, it would be quite a coincidence to have two Higgs particles so close in mass (I'm not aware of a symmetry that could ensure it). Even if the coincidence occurs, it would be highly unusual that one Higgs decays dominantly to ZZ and the other dominantly to γγ, each mimicking pretty well the standard Higgs rate in the respective channel. Finally, and most importantly, CMS does not see anything like that; actually their measurements give a reverse picture. In the ZZ→4l channel CMS measures mh=126.2±0.6 GeV, above (but well within the resolution) the best fit mass they find in the γγ channel, which is 125.1±0.7 GeV. That makes us certain that down-to-earth reasons are responsible for the double vision in ATLAS, the likely cause being an ECAL calibration error, an unlucky background fluctuation, or alcohol abuse.
The truly exciting thing about the new ATLAS results is that the diphoton rate continues to be high. Recall that we are scared as fudge that the Higgs will turn out to be the boring one predicted by the standard model, and we're desperately looking out for some non-standard behavior. The measurements of Higgs decays to ZZ and WW do not bring any consolation: all rates measured by CMS and ATLAS so far are perfectly consistent with the standard model. Today's ATLAS update in the ZZ→4l channel continues the depressing trend, with the signal strength normalized to the standard model one measured at 1.0±0.4 (for mh=125 GeV). Currently our best hope is that the measured h→γγ cross section is consistently larger than the one predicted by the standard model, both in ATLAS and CMS. If the enhancement is due to a statistical fluctuation one would expect it to become less significant as more data is added. Instead, in ATLAS, the central value has not moved since July, but the error has shrunk a bit! The current diphoton signal strength stands at 1.8 ± 0.4, roughly 2 sigma above the standard model. On the other hand, given there is something weird about the ATLAS Higgs data (be it miscalibration or fluctuation), we should treat that excess with a grain of salt, at least until the double vision problem is resolved. And we're waiting for CMS to come out with what they have in the diphoton channel...
One more news today is that ATLAS also began studying some differential observables related to the Higgs boson, which usually goes by the name of "spin determination". In particular, they looked at the production and decay angles in the ZZ→4l channel (similar to what CMS showed at HCP) and the Higgs production angle in the γγ channel (first measurement of this kind). For spin zero the production angle should be isotropic (at the parton level, in the center-of-mass frame of the collision) while for higher spins some directions with respect to the beam axis could be preferred. Not surprisingly, the measured Higgs production angle is perfectly consistent with the zero spin hypothesis (ATLAS also quotes spin-2 being disfavored at 90% confidence level, although in reality they disfavor a particular spin-2 benchmark model).Here are the links to the ATLAS diphoton, ZZ, and combination notes.
Monday, 3 December 2012
The mood of this blog usually oscillates between depressive and funereal, due to the lack of any serious hints of new physics near the electroweak scale. Today, for a change, I'm going to strike an over-optimistic tone. There is one, not very significant, but potentially interesting excess sitting in the LHC data. Given the dearth of anomalies these days, it's a bit surprising that the excess receives so little attention: I could find only 1 paper addressing it.
The LHC routinely measures cross sections of processes predicted by the standard model. Unlike the Higgs or new physics searches, these analyses are not in the spotlight, are completed at a more leisurely pace, and are forgotten minutes after publication. One such observable is the WW pair production cross section. Both CMS and ATLAS measured that cross section in the 7 TeV data using the dilepton decay channel, both obtaining the result slightly above the standard model prediction. The situation got more interesting last summer after CMS put out a measurement based on a small chunk of 8 TeV data. The CMS result stands out more significantly, 2 sigma above the standard model, and the rumor is that in 8 TeV ATLAS it is also too high.
It is conceivable that new physics leads to an increase of the WW cross section at the LHC. This paper proposes SUSY chargino pair production as an explanation. If the chargino decays dominantly to a W boson and an invisible particle - a neutralino or gravitino - the final state is almost the same as the one searched for by the LHC. Moreover, if charginos are light, the additional missing energy from the invisible SUSY particles is small and would not significantly distort the WW cross section measurement. A ~110 GeV wino would be pair-produced at the LHC with a cross section of a few pb - in the right ballpark to explain the excess.
Such light charginos are still marginally allowed. In the old days, the LEP experiments excluded new charged particles only up to ~100 GeV, LEP's kinematic reach for pair production. At the LHC the kinematic reach is higher; however, the small production cross section of uncolored particles compared to the QCD junk makes chargino searches challenging. In some cases, charginos and neutralinos have been recently excluded up to several hundred GeV (see e.g. here), but these strong limits are not bulletproof as they rely on trilepton signatures. If one can fiddle with the SUSY spectrum so as to avoid decays leading to trilepton signatures (in particular, the decay χ1→ LSP Z* must be avoided in the 2nd diagram) then 100 GeV charginos can be safe.
Of course, the odds for the WW excess not being new physics are much higher. The excess at the LHC could simply be an upward fluctuation of the signal, or higher-order corrections to the WW cross section in the standard model may have been underestimated. Still, it will be interesting to observe where the cross section will end up after the full 8 TeV dataset is analyzed. So, if you have a cool model that overproduces WW (but not WZ) pairs, now may be the right moment to step out.
Friday, 23 November 2012
The decay of a neutral Bs meson into a muon pair is a very rare process whose rate in principle could be severely affected by new physics beyond the standard model. We now know it is not: given the rate measured by the LHCb experiment, any new contribution to the decay amplitude has to be smaller than the standard model one. There's a medical discussion going on and on about the interpretation of this result in the context of supersymmetry. Indeed, the statements describing the LHCb result as "a blow to supersymmetry" or "putting SUSY into hospital" are silly (if you think it's the most spectacular change of loyalties since Terminator 2, read on till the end ;-) But what is the true meaning of this result?
To answer this question at a quantitative level it pays to start with a model-independent approach (and a technical one too, to filter the audience ;-) B-meson decays are low-energy processes which are properly described within a low-energy theory with heavy particles, like W/Z bosons or new physics, integrated out. That is to say, one can think of the Bs→μμ decay as caused by effective 4-fermion operators with 1 b-quark, 1 s-quark, and 2 muons, each suppressed by its own mass scale (a schematic parametrization is written out below). Naively, integrating out a mediator with mass M generates a 4-fermion operator suppressed by M^2. In the standard model, only the first, left-handed vector operator is generated, with ML,SM≈17 TeV, dominantly by the diagram with the Z-boson exchange pictured here. That scale is much bigger than the Z mass because the diagram is suppressed by a 1-loop factor, and furthermore it is proportional to the CKM matrix element V_ts whose value is 0.04. The remaining operators do not arise in the SM; in particular there are no scalars that could generate MS or MP (the Higgs boson couples to mass, thus by construction it has no flavor violating couplings to quarks).
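Written out schematically (normalization conventions differ between papers, so take this only as a way to fix the notation), the four operators suppressed by the scales ML, MR, MS, MP are
(1/ML^2)(b̄ γ_μ P_L s)(μ̄ γ^μ γ_5 μ),  (1/MR^2)(b̄ γ_μ P_R s)(μ̄ γ^μ γ_5 μ),  (1/MS^2)(b̄ P_R s)(μ̄ μ),  (1/MP^2)(b̄ P_R s)(μ̄ γ_5 μ),
plus hermitian conjugates; the first two are the "vector" operators referred to below, and the last two the scalar and pseudoscalar ones.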
In terms of the coefficients of these operators, the Bs→μμ branching fraction relative to the SM one is a simple function of these mass scales. LHCb says that this ratio should not be larger than 2 or smaller than 1/3. This leads to model-independent constraints on the mass scales suppressing the 4-fermion operators. And so, the lower bound on ML and MR is about 30 TeV, that is, similar in size to the standard model contribution. The bound on the scalar and pseudoscalar operators is much stronger: MS,MP≳150,200 TeV. \begin{digression} The reason is that the contribution of the vector operators to the Bs→μμ decay is suppressed by the small ratio of the muon and Bs masses, which goes under the name of helicity suppression. Bs is spin zero, and a vector particle mediating the decay always couples to 2 muons of the same chirality. In the limit mμ=0, when chirality=helicity, the muon spins add up, which forbids the decay by spin conservation \end{digression}.
Consequently, the LHCb result can be interpreted as a constraint on new physics capable of generating the 4-fermion operators listed above. For example, a generic pseudoscalar with order 1 couplings and flavor violating couplings to quarks and leptons must be heavier than about 100 TeV. It may sound surprising that the LHC can probe physics above 100 TeV, even if indirectly. But this is in fact typical for B-physics: observables related to CP violation and mixing of B-mesons are sensitive to similar energy scales (see e.g Table I of this paper). Notice however that 100 TeV is not a hard bound on new pseudoscalars. If the new physics has a built-in mechanism suppressing the flavor violating couplings then even weak scale masses may be allowed.
Now, what happens in SUSY? The bitch always comes in a package with an extended Higgs sector, and the exchange of the heavier cousins of the Higgs boson can generate the operators MS and MP. However, bounds on the heavy Higgs masses from Bs→μμ will always be much weaker than the 100 TeV quoted above. Firstly, the Higgses couple to mass, thus the Yukawa couplings relevant for this decay are much smaller than one. Secondly, the Higgses have flavor conserving couplings at tree-level, and flavor violation is generated only at 1 loop. Finally, models of low-energy SUSY always assume some mechanism to suppress flavor violation (otherwise all hell breaks loose); in typical realizations flavor violating amplitudes will be suppressed by the CKM matrix elements, much as in the standard model. All in all, SUSY appears less interesting in this context than other new physics models, and SUSY contributions to Bs→μμ are typically smaller than the standard model ones.
But then SUSY has many knobs and buttons. The one called tanβ -- the ratio of the vacuum values of the two Higgs fields -- is useful here because the Yukawa couplings of the heavy Higgses to down-type quarks and leptons happen to be proportional to tanβ. Some SUSY contributions to the branching fraction are proportional to the 6th power of tanβ. It is then possible to pump up tanβ such that the SUSY contribution to Bs→μμ exceeds the standard model one and becomes observable. For this reason, Bs→μμ was hailed as a probe of SUSY. But, at the end of the day, the bound from Bs→μμ on the heavy Higgs masses is relevant only in a specific corner of the parameter space (large tanβ), and even then the SUSY contribution crucially depends on other tunable parameters: Higgsino and gaugino masses, mass splittings in the squark sector, the size of the A-terms, etc. This is illustrated by the plot on the right, where the bounds (red) change significantly for different assumptions about the μ-term and the sign of the A-term. Thus, the bound may be an issue in some (artificially) constrained SUSY scenarios like mSUGRA, but it can be easily dodged in the more general case.
To conclude, you should interpret the LHCb measurement of the Bs→μμ branching fraction as a strong bound on theories on new physics coupled to leptons and, in a flavor violating way, to quarks. In the context of SUSY, however, there are far better reasons to believe her dead (flavor and CP, little hierarchy problem, direct searches). So one should not view Bs→μμ as the SUSY killer, but as just another handful of earth upon the coffin ;-)
Wednesday, 14 November 2012
I know, there are already a dozen nice summaries on blogs (for example here, here, and here), so why do you need another one? Anyway... the new release of LHC Higgs results is the highlight of this year's HCP conference (HCP is the acronym for Human CentiPede). The game is completely different than a few months ago: there's no doubt that a 126 GeV Higgs-like particle is there in the data, and nobody gives a rat's ass whether the signal significance is 5 or 11 sigma. The relevant question now is whether the observed properties of the new particle match those of the standard model Higgs. From that point of view, today's update brought some new developments, all of them depressing.
The money plots from ATLAS and CMS summarize it all:
We're seeing the Higgs in more and more channels, and the observed rates are driven, as if by magic, to the vertical line denoting the standard model rate.
It came to a point where the most exciting thing about the new Higgs release was what wasn't there :-) It is difficult not to notice that the easy Higgs search channels, h→γγ and ATLAS h→ZZ→4l, were not updated. In ATLAS, the reason was the discrepancy between the Higgs masses measured in those 2 channels: the best fit mass came out 123.5 GeV in the h→ZZ→4l, and 126.5 GeV in the h→γγ channel. The difference is larger than the estimated mass resolution, therefore ATLAS decided to postpone the update in order to carefully investigate the problem. On the other hand in CMS, after unblinding the new analysis in the h→γγ channel, the signal strength went down by more than they were comfortable with; in particular the new results are not very consistent with what was presented on the 4th of July. Most likely, all these analyses will be released before the end of the year, after more cross-checking is done.
Among the things that were there, the biggest news is the h→ττ decay. Last summer there were some hints that the ττ channel might be suppressed, as the CMS exclusion limit was reaching the standard model rate. It seems that the bug in the code has been corrected: CMS, and also ATLAS, now observe an excess of events over the non-Higgs backgrounds consistent with what we expect from the standard model Higgs. The excess is not enough to claim observation of this particular decay, but enough to suppress the hopes that some interesting physics is lurking here.
Another important update concerns the h→bb decay, for the Higgs produced together with a W or Z boson. Here, in contrast, earlier hints from the Tevatron suggested that the rate might be enhanced by a factor of 2 or so. The LHC experiments are now at the point of surpassing the Tevatron sensitivity in that channel, and they don't see any enhancement: CMS observes the rate slightly above the standard model one (though again, the excess is not enough to claim observation), while ATLAS sees a large negative fluctuation. Also, the Tevatron has revised downward the reported signal strength, now that they know it should be smaller. So, again, it's "move on folks, nothing to see here"...
What does this all mean for new physics? If one goes beyond the standard model, the Higgs couplings to matter can take in principle arbitrary values, and the LHC measurements can be interpreted as constraints on these couplings. As it is difficult to plot a multi-dimensional parameter space, for presentation purposes one makes simplifying assumptions. One common ansatz is to assume that all tree-level Higgs couplings to gauge bosons get rescaled by a factor cV, and all couplings to fermions get rescaled by an independent factor cf. The standard model corresponds to the point cf=cV=1. Every Higgs measurement selects a preferred region in the cV-cf parameter space, and measurements in different channels constrain different combinations of cV and cf. The plot on the right shows 1-sigma bands corresponding to individual decay channels, and the 68%CL and 99%CL preferred regions after combining all LHC Higgs measurements. At the end of the day, the standard model agrees well with the data. There is however a lower χ2 minimum in the region of the parameter space where the relative sign between the Higgs couplings to gauge bosons and to fermions is flipped. The sign does not matter for most of the measurements, except in the h→γγ channel. The reason is that h→γγ is dominated by two 1-loop processes, one with the W boson and one with the top quark in the loop. Flipping the sign changes the interference between these two processes from destructive to constructive, the latter leading to an enhancement of the h→γγ rate in agreement with observations. On the down side, I'm not aware of any model where the flipped sign would come out naturally (and anyway the h→γγ rate will probably go down after CMS updates that channel, erasing the preference for the non-SM minimum).
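The special role of h→γγ in this fit can be seen from a one-line formula: the decay rate is, roughly, set by the coherent sum of the W loop and the top loop,
Γ(h→γγ) ∝ |cV A_W + cf A_t|^2,  with A_W ≈ -8.3 and A_t ≈ +1.8 for mh ≈ 125 GeV,
so in the standard model (cV = cf = 1) the two terms interfere destructively, while flipping the relative sign of cV and cf makes the interference constructive and pushes the rate up. (The numbers are the standard loop functions quoted to one decimal place; the overall signs depend on conventions.)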
Finally, we learned at the HCP that the LHC is taking precision Higgs measurements to a new level, probing not only the production rates but also more intricate properties of the Higgs signal. In particular, CMS presented an analysis of the data in the h→ZZ→4l channel that discriminates between a scalar and a pseudoscalar particle. What this really means is that they discriminate between 2 operators allowing a decay of the Higgs into Z bosons:
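Up to normalization, these can be written schematically as
h Z_μ Z^μ   and   h ε^{μνρσ} Z_μν Z_ρσ,
where Z_μν = ∂_μ Z_ν - ∂_ν Z_μ.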
The first operator occurs in the standard model at tree level, and leads to a preference for decays into longitudinally polarized Z bosons. The other is the lowest order coupling possible for a pseudoscalar, and leads to decays into transversely polarized Z bosons only. By looking at the angular distributions of the leptons from Z decays (a transverse Z prefers to emit leptons along the direction of motion, while a longitudinal Z - perpendicularly to the direction of motion) one can determine the relative amount of transverse and longitudinal Z bosons in the Higgs sample, and thus discriminate between the two operators. CMS observes a slight 2.5 sigma preference for the standard model operator, which is of course not surprising (it'd be hard to understand why the h→ZZ rate is so close to the standard model one if the other operator was responsible for the decay). With more data we will obtain more meaningful constraints on the higher dimensional couplings of the Higgs.
To summarize, many particle theorists were placing their bets that Higgs physics is the most likely place where new physics may show up. Unfortunately, the simplest and most boring version of the Higgs predicted by the standard model is emerging from the LHC data. It may be the right time to start scanning job ads in condensed matter or neuroscience ;-)
All Higgs parallel session talks are here (the password is given in the dialog box).
Friday, 9 November 2012
The 130 GeV monochromatic gamma-ray emission from the galactic center detected by the Fermi satellite may be a signal of dark matter. Until last week the claim was based on freelance analyses by theorists using publicly available Fermi data. At the symposium last week the Fermi collaboration made the first public statement on the tentative line signal. Obviously, a word from the collaboration has a larger weight, as they know better the nuts and bolts of the detector. Besides, the latest analysis from Fermi uses reprocessed data with the corrected energy scale and more fancy fitting algorithms, which in principle should give them a better sensitivity to the signal. The outcome is that you can see the glass as half-full or half-empty. On one hand, Fermi confirms the presence of a narrow bump in the gamma-ray spectrum near 130 GeV. On the other hand, certain aspects of the data cast doubt on the dark matter origin of the bump. Here are the most important things that have been said.
Recall that Fermi's previous line search in 2-years data didn't report any signal. Actually, neither does the new 4-years one, if Fermi's a-priori optimized search regions are used. In particular, the significance of the bump near 130 GeV in the 12x10 degree box around the galactic center is merely 2.2 sigma. There is no outright contradiction with the theorists' analyses, as the latter employ different, more complicated search regions. In fact, if Fermi instead focuses on a smaller 4x4 degree box around the galactic center, they see a signal with 3.35 sigma local significance (after reprocessing the data; the significance would be 4 sigma without reprocessing). This is the first time the Fermi collaboration admits seeing a feature that could possibly be a signal of dark matter annihilation.
Another piece of news is that the 130 GeV line has been upgraded to a 135 GeV line: it turns out that reprocessing the data shifted the position of the bump. That should make little difference to dark matter models explaining the line signal, but in any case you should expect another round of theory papers fitting the new number ;-)
Unfortunately, Fermi also confirms the presence of a 3 sigma line near 130 GeV in the Earth limb data (where there should be none). Fermi assigns the effect to a 30% dip in detection efficiency in the bins above and below 130 GeV. This dip cannot by itself explain the 135 GeV signal from the galactic center. However, it may be that the line is an unlucky fluctuation on top of the instrumental effect due to the dip.
Fermi points out a few other details that may be worrisome. They say there's some indication that the 135 GeV feature is not as smooth as expected if it were due to dark matter. They find bumps of similar significance at other energies and other places in the sky. Also, the significance of the 135 GeV signal drops when reprocessed data and more advanced line-fitting techniques are used, while one would expect the opposite if the signal is of physical origin.
A fun fact for dessert. The strongest line signal that Fermi finds is near 5 GeV and has 3.7 sigma local significance (below 3 sigma with the look-elsewhere effect taken into account). 5 GeV dark matter could fit the DAMA and CoGeNT direct detection, if you ignore the limits from the Xenon and CDMS experiments. Will the 5 GeV line prove as popular with theorists as the 130 GeV one?
So, the line is sitting there in the data, and potential consequences are mind blowing. However, after the symposium there are more reasons to be skeptical about the dark matter interpretation. More data and more work from Fermi should clarify the situation. There's also a chance that the HESS telescope (Earth-based gamma-ray observatory) will confirm or refute the signal some time next spring.
Wednesday, 24 October 2012
The new round of Higgs data will be presented on the 15th of November at a conference in Kyoto, and on blogs a few days earlier. The amount of data will increase by about 2/3 compared to what was available last summer. This means the errors should naively drop by 30%, or a bit more in the likely case of some improvements in the analyses. Here's a short guide to the hottest Higgs questions that may be answered.
Will the γγ rate remain high? Last summer the Higgs boson showed up quite like predicted by the standard model. The most intriguing discrepancy was that both ATLAS and CMS saw too many Higgs decays to photon pairs, exceeding the standard model expectation by 80% and 60% respectively. Statistically speaking, the excess in both experiments is below 2 sigma, so at this point all the observed rates are in a decent agreement with the standard model. But that doesn't stop us from dreaming and crossing our fingers. If the excess is a statistical fluke we would expect that the central value of the measured H→γγ rate will decrease, and that the significance of the excess will remain moderate. But if, purely hypothetically, the central value remains high and the significance of the excess grows then.... well, then it's gonna get hot.
Will the ττ rate remain low? Another puzzling piece of Higgs data from last summer was that CMS failed to see any excess in the H→τ+τ- channel, despite their expected sensitivity being close to the predicted standard model rate. In fact, they came close to excluding the 125 GeV standard model Higgs in that channel! This discrepancy carries less weight than the diphoton excess because it is reported by only one experiment (ATLAS did not update the ττ channel with 8 TeV data last summer) and because the strong limit seems to be driven by a large negative background fluctuation in one of the search categories. Nevertheless, it is conceivable that something interesting is cooking here. In 3 weeks both experiments should speak up with a clearer voice, and the statistics should be high enough to get a feeling what's going on.
Is the Vh → bb rate enhanced? The LHC has proven that the Higgs couples to bosons: gluons, photons, W and Z; however, it has not pinpointed the couplings to fermions yet (except indirectly, since the effective coupling to gluons is likely mediated by virtual top quarks). As mentioned above, no sign of Higgs decays to tau lepton pairs has been detected so far. Also, the LHC has not seen any clear signs of Higgs decays to b-quarks (even though it is probably the most frequent decay mode). On the other hand, the Tevatron experiments in their dying breath have reported a 3 sigma evidence for the h → bb decays, with the Higgs produced in association with the W or Z boson. The intriguing (or maybe suspicious) aspect of the Tevatron result was that the observed rate was twice that predicted by the standard model. In 3 weeks the sensitivity of the LHC in the b-bbar channel should exceed that of the Tevatron. It is unlikely that we'll get a clear evidence for h→bb decays then, but at least we should learn whether the Tevatron hints of enhanced Vh → bb can be true.
Will they see h→Zγ? Another possible channel to observe the Higgs boson is via its decay to 1 photon and 1 Z boson, where the Z subsequently decays to a pair of charged leptons. Much like in the well-studied h→ZZ→4l and h→γγ channels, the kinematics of the h→Zγ→γ2l decay can be cleanly reconstructed and offers a good Higgs mass resolution. The problem is the low rate: the Higgs decay to Zγ is even more rare than that to γγ, plus one needs to pay the penalty of the low branching fraction for the Z→l+l- decay. According to the estimates I'm aware of, the LHC is not yet sensitive to h→Zγ produced with the standard model rate. However, if we assume it's new physics that's boosting the h→γγ rate, it is very likely that the h→Zγ rate is also boosted by a similar or a larger factor. Thus, it is interesting to observe what limits the LHC can deliver in the h→Zγ channel, as they may provide non-trivial constraints on new physics.
Does Higgs have spin zero? Obviously, this question carries a similar potential for surprise as a football game between Brazil and Tonga. Indeed, spin-1 is disfavored on theoretical grounds (an on-shell spin-1 particle cannot decay to two photons), while a spin-2 particle cannot by itself ensure the consistency of electroweak symmetry breaking as the Higgs boson does. Besides, we already know the 125 GeV particle couples to the W and Z bosons, gluons and photons with roughly the strength of the standard model Higgs boson. It would be an incredible coincidence if a particle with another spin or parity than the Higgs reproduced the event rates observed at the LHC, given that the tensor structures of the couplings are completely different for other spins. Nevertheless, a clear experimental preference for spin-0 would be useful to satisfy some pedantic minds or some Nobel committees. In particular, one needs to demonstrate that the Higgs boson is produced isotropically (without a preferred direction) in the center-of-mass frame of the collision. With the present statistics it should already be possible to discriminate between spin-0 and alternative hypotheses.
So, keep your ear to the ground, the data are being unblinded as we speak, and the first numbers are already being bandied about in cafeterias and on facebook. Intriguingly, this blog post clearly hints there is a lot to rumor about in the new data ;-) Is it the high γγ rate? The low ττ rate? Something else? Well, there's still 3 weeks left and the numbers may shift a bit, so let's not spoil the fun just yet... In any case, if you have an experimentalist friend now it's the best time to invite her to a drink or to dances ;-)
Wednesday, 10 October 2012
This one is not about the colony collapse disorder but about particle bees, also known as b-quarks. Older readers who still remember the LEP collider may also remember a long-standing anomaly in one of the LEP precision measurement. The observable in question is the forward-backward asymmetry of the b-quark production in electron-positron collisions. In the events with a pair of b-jets in the final state one counts the number of b-quarks (as opposed to b-anti-quarks) going in the forward and backward directions (defined by the electron and positron beam directions), and then defines the asymmetry as:
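In terms of the number of b-quarks counted in the forward (NF) and backward (NB) hemispheres, the definition is simply
AFB = (NF - NB)/(NF + NB).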
The observable is analogous to the top forward-backward asymmetry, widely discussed in the context of the anomalous Tevatron measurements, although the origin of the 2 anomalies is unlikely to be directly related. At LEP, the b-quark pair production is mediated mostly by a photon or a Z-boson in the s-channel. The latter has chiral couplings to matter, that is to say, it couples differently to left- and right-handed particles. Thanks to that, a significant b-quark asymmetry of order 10% is predicted in the standard model. However, the asymmetry observed at LEP was slightly smaller than predicted. The anomaly, sitting in the 3 sigma ballpark, has attracted some attention but has never been viewed as a smoking-gun of new physics. Indeed, it was just one anomaly in the sea of LEP observables that perfectly matched the standard model predictions. In particular, another b-quark precision observable measured at LEP - the production rate of b-quark pairs, the so-called Rb - seemed to be in perfect agreement with the standard model. New physics models explaining the data involved a certain level of conspiracy: one had to arrange things such that the asymmetry but not the overall rate was affected.
Fast forward to the year 2012. The Gfitter group posted an update of the standard model fits to the electroweak precision observables. One good reason to look at the update is that, as of this year, the standard model no longer has any free parameters that haven't been directly measured: the Higgs mass, on which several precision observables depend via loop effects, has been pinpointed by ATLAS and CMS to better than 1%. But there's more than that. One notices that, although most precision observables perfectly fit the standard model, there are two measurements that stand out above 2 sigma. Wait, two measurements? Right, according to the latest fits not only the b-quark asymmetry but also the b-quark production rate at LEP deviates from the standard model prediction at the level of 2.5 sigma.
The data hasn't changed of course. Also, the new discrepancy is not due to including the Higgs mass measurement, as that lies very close to the previous indirect determinations via electroweak fits. What happened is that the theory prediction has migrated. More precisely, 2-loop electroweak corrections to Rb computed recently turned out to be significant and moved the theoretical prediction down. Thus, the value of Rb measured at LEP is, according to the current interpretation, larger than predicted by the standard model. The overall goodness of the standard model fit has decreased, with the current p-value around 7%.
Can this be a hint of new physics? Actually, it's trivial to explain the anomalies in a model-independent way. It is enough to assume that the coupling of the Z-boson to b-quarks deviates from the standard model value (a schematic parametrization is given below). In the standard model gLb ≈ -0.4, gRb ≈ 0.08, and δgLb = δgRb = 0. Given two additional parameters δgLb and δgRb we have enough freedom to account for both the b-quark anomalies. The fit from this paper shows that one needs an upward shift of the right-handed coupling by 10-30%, possibly but not necessarily accompanied by a tiny (less than 1%) shift of the left-handed coupling. This sort of modification is easy to get in some concrete scenarios beyond the standard model, for example in the Randall-Sundrum-type models with the right-handed b-quark localized near the IR brane.
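Concretely, one can parametrize the Z coupling to b-quarks as (schematically)
L ⊃ (g/cosθW) Z_μ [ (gLb + δgLb) b̄_L γ^μ b_L + (gRb + δgRb) b̄_R γ^μ b_R ],
where in the standard model gLb = -1/2 + sin^2θW/3 ≈ -0.42 and gRb = sin^2θW/3 ≈ 0.08, in line with the numbers quoted above.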
So, maybe, LEP has seen a hint of compositeness of right-handed b-quarks? Well, one more 2.5 sigma anomaly does not make a summer; overall the standard model is still in a good shape. However it's intriguing that both b-quark-related LEP precision observables do not quite agree with the standard model. Technically, modifying both AFB and Rb is much more natural from the point of view of new physics interpretations. So I guess it may be worth, without too much excitement but with some polite interest, to follow the news on B' searches at the LHC.
Important update: Unfortunately, the calculation of Rb referred to in this post later turned out to be erroneous. After correcting the bug, Rb is less than 1 sigma away from the standard model prediction.
Tuesday, 9 October 2012
No particle physicist received a phone call from Stockholm today. There had been some expectations for an award honoring the Higgs discovery. Well, it was maybe naive but not completely unrealistic to think that the Nobel committee might want to reestablish some connection with the original Nobel's will (which, anecdotally, awarded prizes for discoveries made during the preceding year). To ease my disappointment, let me write about a purely probabilistic but potentially gruesome aspect of today's decision. Warning: the discussion below is in really bad taste; don't even start reading unless Borat is among your favorite movies!
Peter Higgs is 83, and François Englert is almost 80. Taking the US data on life expectancy as the reference, they have respectively a 9% and a 6% probability of passing away within a year from now. Thus, the probability of at least one of them being gone by the time of the next announcement is approximately 14%! To give an everyday analogy, it's only a tad safer than playing Russian roulette with 1 bullet in a 6-shot Colt revolver. The probability grows to a stunning 27% if one includes Philip Anderson among the potential recipients (nearly 89, 15%). Obviously, the probability curve rises steeply as a function of t, and approaches 100% for the typical Nobel recognition time lag.
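For those who want to check the arithmetic, here is a minimal Python snippet reproducing those percentages; the 9%, 6%, and 15% one-year figures quoted above are the only inputs, and independence is assumed.

# probability that at least one of several independent events happens:
# one minus the product of the individual "nothing happens" probabilities
def prob_at_least_one(probs):
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

higgs, englert, anderson = 0.09, 0.06, 0.15
print(prob_at_least_one([higgs, englert]))            # about 0.14
print(prob_at_least_one([higgs, englert, anderson]))  # about 0.27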
Well, the Nobel for the Higgs discovery will be awarded sooner or later. Even if one of the crucial actors does not make it, the prestige of the physics Nobel prize won't be hurt too much (it has survived far more serious embarrassments). But, that would be just sad and unjust, even more so than the Cabibbo story. So why not make it rather sooner than later?
Tuesday, 2 October 2012
This year we learned that the Higgs mass is 125.5 GeV, give or take 1 GeV. As a consequence, we learned that God plays not only dice but also Russian roulette. In other words, that life is futile because everything we cherish and hold dear will decay. In other words, that the vacuum of the standard model is not stable.
Before we continue, keep in mind the important disclaimer: All this discussion is valid assuming the standard model is the correct theory all the way up to the Planck scale, which is unlikely.
Indeed, while it's very likely that the standard model is an adequate description of physics at the energies probed by the LHC, we have no compelling reasons to assume it works at, say, 100 TeV. On the contrary, we know there should be some new particles somewhere, at least to account for dark matter and the baryon asymmetry in the universe, and those degrees of freedom may well affect the discussion of vacuum stability. But for the time being let's assume there are no new particles beyond the standard model with a significant coupling to the Higgs field.
The stability of our vacuum depends on the sign of the quartic coupling in the λ |H|^4 term in the Higgs potential: for negative λ the potential is unbounded from below and therefore unstable. We know exactly the value of λ at the weak scale: from the Higgs mass 125 GeV and the expectation value 246 GeV it follows that λ = 0.13, positive of course. But panta rhei and λ is no exception. At large values of |H|, the Higgs potential in the standard model is, to a good approximation, given by λ(|H|) |H|^4, where λ(|H|) is the running coupling evaluated at the scale |H|. If the Higgs were decoupled from the rest of matter then λ would grow with the energy scale and would eventually explode into a Landau pole. However, the Yukawa couplings of the Higgs boson to fermions provide another contribution to the evolution equations that works toward decreasing λ at large energies. In the standard model the top Yukawa coupling is large, of order 1, while the Higgs self-coupling is moderate, so Yukawa wins. In the plot showing the evolution of λ in the standard model (borrowed from the latest state-of-the-art paper) one can see that at the scale of about 10 million TeV the Higgs self-coupling becomes negative. That sounds like a catastrophe, as it naively means that the Higgs potential is unbounded from below. However, we can reliably use quantum field theory only up to the Planck scale, and one can assume that some unspecified physics near the Planck scale (for example, |H|^6 and higher terms in the potential) restores the boundedness of the Higgs potential. Still, between 10^10 and 10^19 GeV the potential is negative and therefore it has a global minimum at large |H| that is much deeper than the vacuum we live in. As a consequence, the path integral will receive contributions from the field configurations interpolating between the two vacua, leading to a non-zero probability of tunneling into the other vacuum.
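For reference, the weak-scale value quoted above follows from the tree-level relation between the Higgs mass, the expectation value and the quartic coupling:
m_h^2 = 2 λ v^2, so λ = m_h^2/(2 v^2) = (125 GeV)^2/(2 × (246 GeV)^2) ≈ 0.13.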
Fortunately for us, the tunneling probability is proportional to Exp[-1/λ], and λ gets only slightly negative in the standard model. Thus, no reason to panic: our vacuum is meta-stable, meaning its average lifetime extends beyond December 2012. Nevertheless, there is something intriguing here. We happen to occupy a very special patch of the standard model parameter space. First of all there's the good old hierarchy problem: the mass term of the Higgs field takes a very special (fine-tuned?) value such that we live extremely close to the boundary between the broken (v > 0) and the unbroken (v=0) phases. Now we have realized the potential is even more special: the quartic coupling is such that two vacua coexist, one at low |H| of order TeV and the other at large |H| of order the Planck scale. Moreover, not only λ but also its beta function is nearly zero near the Planck scale, meaning that λ evolves very slowly at high scales. Who sets these boundary conditions? Is that yet another remarkable coincidence, or is there a physical reason? Something to do with quantum gravity? Something to do with inflation? I think it's fair to say that so far nobody has presented a compelling proposal explaining these boundary conditions satisfied by λ.
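For completeness, the Exp[-1/λ] statement above is shorthand for the semiclassical tunneling rate per unit volume; for a potential dominated by a negative quartic coupling the bounce action gives, schematically,
Γ/V ~ Λ^4 Exp[-8π^2/(3|λ|)],
with Λ the characteristic scale of the bounce, so a quartic that is only slightly negative translates into a lifetime vastly exceeding the age of the universe.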
Ah, and don't forget the disclaimer: All this discussion is valid assuming the standard model is the correct theory all the way up to the Planck scale, which is unlikely.
Tuesday, 25 September 2012
A look at the hep-ph listings tells you that what excites particle theorists these days is the Fermi line. Recall that an independent analysis of gamma-ray data from the Fermi telescope discovered a monochromatic emission from the center of our galaxy at an energy of approximately 130 GeV. The signal is so strong that it's unlikely to be a fluctuation, and no known astrophysical processes are expected to produce monochromatic lines. The line may be a weird instrumental effect, or it may be the signal of dark matter annihilating into a pair of photons with a cross section of a few*10^-27 cm^3/sec. If the latter is true, it would dwarf the Higgs boson discovery...
As usual, the most popular game is to fit the signal into every possible model, including those that firmly resist. There's been some interesting developments on this front, but I'll keep that for another post. For now, I'll restrict to the properties of the signal and astrophysical constraints.
The statistical significance of the line is large, the precise number depending on how the data are cut and cooked. In the original paper the significance was 4.6σ (before taking into account the trial factor), but for example in this paper the numbers 5.0σ or even 5.5σ are bandied around. That paper also claims that a slightly better fit to the data is obtained with 2 lines, one at 129 GeV and another at 111 GeV, and that the center of the emission is off by 1.5 degrees from the galactic centre. The former may be good news for dark matter, as most models predict 2 separate lines, from annihilation into γγ and into γZ. The latter doesn't have to be bad news, in view of the recent simulations of dark matter distribution.
Two groups were recently scanning the Fermi data for suspicious features that could suggest that the line is an instrumental artifact. They may have found one: a 130 GeV line in the Earth limb sample. Cosmic rays hitting the atmosphere produce gamma-rays that sometimes fall into Fermi's field of view. This provides a sort of calibration sample where no signal is expected. Instead, there seems to be a 3σ line in the Earth limb photons that can be made even more prominent with specific cuts on the photon incidence angle. Is that an unlucky fluctuation? On the other hand, it's difficult to imagine an instrumental effect or a software bug that could be responsible for both the galactic center and the Earth limb lines.
There are 2 more places in the sky where the presence of the 130 GeV line was claimed. The line was observed in nearby galaxy clusters, which may be good news. Also, the line was observed in the unassociated gamma-ray sources, which is probably bad news given those were later claimed to be AGNs. No line was detected from the dwarf satellite galaxies of the Milky Way, which is probably not a problem, and no line emission was found in the galactic plane, which is good.
In most models of dark matter a gamma-ray line would be accompanied by a 1000 times more intense continuum photon signal, just because dark matter annihilation into other final states (that later emit photons) would be dominant. However, the observed photon spectrum from the galactic center - the same one that displays the monochromatic signal - puts very strong constraints on the continuum emission. Typically, the cross section for dark matter annihilation into other final states can be at most 10 times larger than the cross section for the annihilation into 2 photons. For example, this paper claims the limits on the annihilation rate of 130 GeV dark matter into most final states is comparable to the thermal cross section 3*10^-26 cm^3/sec (the one that guarantees the correct relic abundance if dark matter is of thermal origin), and even stronger with less conservative assumptions about the dark matter density profile. This is a severe constraint on theory, such that the models explaining the Fermi line have to be tailor-made to satisfy it.
In summary, there are 2 main arguments against the Fermi line being a signal of dark matter. One is the presence of the line in the Earth limb photon sample. The other is that it's too good to be true. Based on that, it's probably worth staying excited for a little longer, until there are better reasons to stop the fun.
Tja, 2 months without writing a post is my personal best since I started this blog. It cannot be just laziness. I blame it on the frantic atmosphere surrounding the Higgs discovery, which resulted in post-coital tristesse. Indeed, a face-to-face with a genuine discovery only makes you realize the day-to-day misery of high-energy physics today. Now it's much harder to get excited about setting limits on new physics or even about seeing hints of new physics that will surely go away before you blink. New limits on SUSY from the 8 TeV LHC run? Yawn. First robust limits on superpartners of the top quark? Phew. Best ever limits on direct detection of dark matter? Boooring. Another smoking-gun signal of dark matter? Wait...
Well, it's time go back to the daily grind because, in the long run, that may be the only life we have :-)
Monday, 23 July 2012
No, I don't mean I've slept for almost 3 weeks ;-) I mean this particular state of mind of waking up with a vague memory of a crazy party last night, but at the same time unwilling to open your eyes for the fear that the person lying next to you is really the one you think it is.
Welcome. There is no doubt that since the 4th of July we have a new particle, a boson with mass near 125 GeV. There is little doubt that this particle is a Higgs boson. True, the discovery relies to a large extent on observing a resonance in the diphoton spectrum, which could also be produced by another spin-0 or even a spin-2 particle that has nothing to do with electroweak symmetry breaking. What convinces us of the higgsy nature of the new particle is the signal in the ZZ and WW final states. Indeed, the coupling [h V V], allegedly responsible for the decays to W and Z bosons, is a watermark signature of a Higgs boson, as it is central to its mission of giving mass to gauge bosons.
Farewell. Welcoming the Higgs, we need to clean the room of some old toys we've got used to. First of all, Higgsless technicolor for obvious reasons goes into the trash bin of history. So does the unhiggs or the whole class of stealthy Higgs theories where the Higgs was supposed to escape detection by decaying into complicated final states. Quite robustly, the 4th generation of chiral fermions is now excluded because, if it existed, the Higgs production rate would be many times larger than observed. Finally, a simple and neat theory of dark matter that annihilates or scatters via a Higgs exchange, the so-called Higgs portal dark matter, is getting disfavored because Higgs would have a large invisible branching fraction, and thus a suppressed rate of visible decays.
Law. The other Pauli principle - that fermions are discovered in the US, while bosons are discovered in Europe - has been spectacularly confirmed. Note it was a very non-trivial prediction in this particular case. The Higgs would have been discovered at the SSC if the US Congress had not intervened to scrap the entire program. Furthermore, the Higgs would have been discovered at the Tevatron if the Fermilab management hadn't intervened to scrap some crucial Run-II detector upgrades, ensuring the Tevatron discovery potential stopped just short of a 3 sigma significance. This only shows how powerful the other Pauli principle is. Don't you think it deserves, if not a Nobel prize, at least the ig-Nobel prize? ;-)
Hope. The Higgs data from ATLAS and CMS match well the Standard Model prediction with one exception: the diphoton event rate is 50-100% too large, with a significance of about 2 sigma. These are most likely statistical fluctuations, but if the enhancement persists when more data is collected it may become the first clear evidence of new physics. If that is the case, the most plausible interpretation of the current data is that the enhancement is due to a light 100 GeV-ish scalar or fermion that carries electric charge but no color. This way the loop contributions of that particle could affect the Higgs decays into photons without messing up the gluon fusion production mode. Furthermore, the new scalar or fermion needs to have a large coupling to the Higgs boson, but its mass has to come dominantly from another source (otherwise it would actually decrease the diphoton rate). If it were confirmed, it would be a particle that apparently no one ordered. On the other hand, theoretically cherished particles (stops, little Higgs top partners, staus) all require serious tuning and some conspiracy to fit the available Higgs data.
Nightmare. Despite what I said above, one cannot help noticing that the data are indecently consistent with the simplest Higgs boson of the Standard Model. Overall, adding the 8 TeV data improved the consistency, eradicating some of the hints of non-standard behavior we had last year. It's been often stressed that the Higgs boson is the special one, a particle different from all the others, a type of matter never observed before. Yet it appears in front of us exactly as described in detail over the last 40 years. This is a great triumph of particle theory, but at the same time it's very disappointing to those whose future existence depends on new physics, that is to a large majority of particle theorists.
In summary, Higgs hunting is over, the catch is now being skinned and prepared for grilling. Collider physics has achieved the most spectacular success in its history. At the same time, it came dangerously close to realizing Kelvin's nightmare of science reduced to striving for the next decimal place of accuracy. Well, 100 years ago we avoided that fate; maybe history will repeat itself?
Wednesday, 4 July 2012
10:58 The party's over now. It was a beautiful day, a historical day, the great triumph of science. Now I'm going to sleep the night off, and tonight we're all gonna celebrate, drink, and make out. Thank you.
10:57 Funny that nobody asks about the loose cable ;-)
10:56 Higgs says: "I'm glad it happened in my lifetime".
10:47 I got carried away, no underwear and bras on the stage, sadly. But the atmosphere in the auditorium is such that they might have been.
10:46 Standing ovations, screams and shouts, the audience throwing bras and underwear at the stage.
10:44 "I think we have it", concludes the DG. "We have a discovery of a Higgs boson, but which one"?
10:42 In summary, both ATLAS and CMS clearly see a Higgs boson in 2 channels: the diphoton and ZZ 4-lepton. Combining those two, the significance of the Higgs signal is 5.0 sigma in both experiments.
10:40 "This is just the beginning"
10:38 The CMS and ATLAS preferred Higgs mass differ by more than 1 GeV, there will surely be questions about that.
10:35 5.0 sigma combined excess with the maximum significance mh=126.5 GeV.
Higgs discovered by both experiments!
10:33 Going to the combination (ATLAS won't show any more channels today).
10:30 Excess near m4l=125 GeV, although by eye less beautiful peak than in CMS. 3.4 sigma excess vs 2.6 expected in the SM.
10:26 Press release is out. The discovery officially blessed.
10:22 Now the ZZ 4-lepton channel.
10:21 The measured rate in the diphoton channel is almost twice that predicted in the SM, with the SM rate about 1.5 sigma away. Interesting! So both experiments continue to see too much signal in the Higgs diphoton channel.
10:20 4.5 sigma excess in the Higgs diphoton channel! (who cares about the look elsewhere effect anymore).
10:11 Diphoton channel, finally.
10:11 Boooring.... yet another particle being discovered....
10:05 Both speakers today felt compelled to devote the first 15min to irrelevant bla-bla. Probably because the main subject doesn't appear that exciting.
9:53 Fabiola Gianotti on the stage. Time for ATLAS.
9:50 In summary, CMS observes a Higgs boson with mass 125.3±0.6 GeV at 4.9 sigma significance. Some funny glitches in the data (a slightly too large diphoton signal, no excess in the di-tau channel) but overall good consistency with the Standard Model predictions.
9:47 All channels combined, 4.9 sigma significance, vs 5.9 expected.
9:41 Some excess, but not significant, also observed in the WW dilepton channel, and in b-bbar associated with W/Z. No excess at all in the tau-tau channel, although there should be.
9:38 Combining diphoton and 4-lepton channels the significance of the Higgs signal is 5.0 sigma
Higgs discovered!!!
9:34 Beautiful peak in the 4-lepton channel. Higgs observed with 3.2 sigma significance in this channel, vs 3.5 sigma expected in the SM.
9:32 Now the ZZ 4-lepton channel
9:31 CMS sees a Higgs in the diphoton channel with the rate about 50% larger than predicted by the Standard Model (but barely one sigma above the SM).
9:30 Over 4 sigma signal in the diphoton channel
9:29 "That's pretty significant"
9:22 Finally, Higgs to diphotons.
9:18 It's not that I stopped blogging, it's that Joe is boring. We want the meat!
9:11 5.2 inverse femtobarn of 2012 data, 5.6 in the muon channel.
9:06 "One page for theorists, that's all they deserve" :-)
9:04 Joe Incandela on the stage, the CMS talk start.
9:02 C'est parti! "Today is a special day" says DG.
8:56 Yes! Higgs is here!!! Everything ready for the discovery.
8:50 10 minutes to the seminar. Still no Higgs. But the other Nobel prize winner this year is already inside.
8:43 By the way, if you come across a press article today about the god particle, that's a perfect gauge that the author is an idiot and has no idea what he's talking about.
8:38 The audience is a funny mix. One half are 60+ big shots who could get themselves a seat reservation, the other half are 20-something Higgs groupies who had the strength to queue all night.
8:25 The title of the seminar is Higgs Search Update. Reminds me of A Model of Leptons.
8:15 The first accurate prediction of the Higgs mass was formulated in this video. It has gone unnoticed, however, because Jim Morrison was stoned and reading it backwards.
8:05 There's a still a wild crowd in front of the auditorium, looks like Walmart on Black Friday... hope there will be no riot today.
7:45 While waiting for the announcement it may be worth checking this page. There is a theory that the Higgs influences our present from the future, so as to avoid being discovered. At this point, destroying the whole universe might be his only chance...
7:35 The doors are open, people flowing in, but miraculously no stampede.
7:25 People have been camping all night in front of the auditorium door to get inside and see the discovery live. These are pictures from 3am last night.
7:20 This is the day. The most important day for particle physics in this century, and probably ever.
Tuesday, 3 July 2012
It's the evening of the last day of the old B.H. era, tomorrow we start counting from zero. I'm at CERN to attend the historical seminar starting at an ungodly hour tomorrow. On a night like this no way I can write anything semi-intelligent, so instead let me give just a bunch of personal, chaotic remarks.
The Higgs boson was always everywhere particle physicists looked, so it was easy to forget it was a hypothetical concept. Superficially, tomorrow we'll simply learn, to a 1 GeV precision, the value of the last free parameter of the Standard Model. But if you stop and think about it for a while, it really blows your mind. Almost 50 years ago a shy guy, writing what was then a fringe paper, adds in the revised version - to shut up the referee - a mention of the scalar particle excitation predicted by his toy model. Within a few years the importance of the particle is generally recognized, and papa Weinberg incorporates it in the Standard Model, to this day the valid theory of fundamental interactions. With time, the indirect evidence for its existence has been mounting. But only 48 years and many colliders later the search will come to an end. Even though the prediction is highly non-trivial (theoretically, it is based on a weird concept of a scalar field obtaining a uniform vacuum expectation value throughout spacetime; phenomenologically, never before have we seen a fundamental spin-0 particle, etc.), the particle shows up in the final states where it was predicted to show up, and up to a factor of 2 within the predicted rate. This is a perfect moment to shout "Physics works, bitches".
I'll be blogging live from the CERN auditorium, you can tune to mine or one of a dozen of competing relations.
Monday, 2 July 2012
Every decent rock concert features a support band whose role is to warm you up before the main gig or, alternatively, give you time to buy a beer and chat up a blonde. The support band at the Higgs concert -- the Tevatron from Fermilab, Illinois -- is worth giving an ear to because it offers slightly different qualities than the star of the evening.
The Tevatron collider was shut down last September, so the amount of data has not increased since the last Higgs update at the Moriond conference in March. Nevertheless, the collaborations are still able to make adiabatic improvements in the analysis, especially now when they know where the Higgs is. At Moriond, the Higgs-like excess was observed mostly in the b-bbar final state by the CDF collaboration; what changed today is that D0 observes a (somewhat smaller) excess in the same channel, making the claim more credible. All in all, the combined (local) significance of the Higgs excess at the Tevatron reaches a maximum of 3 sigma for mh=120 GeV, although it's more like 2.7 sigma at the true value of mh=125 GeV.
However, there is an aspect of the data presented today that is more interesting than the sigma pissing contest. The Tevatron experiments are most sensitive to the Higgs boson decaying into a pair of b-quarks and produced in association with a W or Z boson. What they're testing is thus the Higgs couplings to electroweak gauge bosons and to b-quarks, both of which are central to establishing the higgsy nature of the newly discovered particle. In particular, the Tevatron data suggest that the particle indeed decays frequently into b-quarks (which, according to the Standard Model, should happen about 60% of the time). Thus, the Tevatron provides an important piece of the puzzle that, at the moment, is not available from the LHC. Actually, the rate observed in the VH→bb channel is 2±0.7 times larger than predicted by the Standard Model, adding up to other intriguing hints of non-standard Higgs behavior.
By the end of the year the LHC experiments should reach a comparable sensitivity in the same channel, clarifying whether the Tevatron excess was the real thing, or a classic look-here effect...
About Résonaances
Résonaances is a particle physics blog from Paris. It's about the latest news and gossips in particle physics and astrophysics. The posts are often spiced with sarcasm, irony, and a sick sense of humor. The goal is to make you laugh; if it makes you think too, that's entirely on your own responsibility...
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Volle Kraft Voraus!
Volle Kraft Voraus! is the second album by the German band Die Krupps, released in 1982.
Track listing
"Volle Kraft voraus" - 3:44
"Goldfinger" - 3:21
"Für einen Augenblick" - 4:14
"Tod und Teufel" - 2:45
"Das Ende der Träume" - 3:34
"Neue Helden" - 3:08
"Wahre Arbeit, wahrer Lohn" - 5:24
"...Denn du lebst nur einmal" - 3:28
"Zwei Herzen, ein Rhythmus" - 3:33
"Lärm macht Spaß" - 3:44
"Wahre Arbeit - Wahrer Lohn" - 3:41 (1993 bonus track)
"True Work - True Pay" - 6:23 (1993 bonus track)
Credits
Jürgen Engler - vocals, keyboards, computer drums, effects
Bernward Malaka - bass guitar
Tina Schnekenburger - keyboards, effects
Ralf Dörper - keyboards
Category:1982 albums
Category:Die Krupps albums
|
tomekkorbak/pile-curse-small
|
Wikipedia (en)
|
LIMITED NUMBER - 1 BOLT aluminum pen shipped to you, in the USA (international add $10). There are only a few of these available, so act fast.
|
tomekkorbak/pile-curse-small
|
OpenWebText2
|
Q:
Error when creating summary table with table1: "invalid model formula in ExtractVars"
I am using the table1 package for the first time.
I am trying to create a summary table of descriptive statistics.
This is my code
library(readxl)  # needed for read_excel()
data <- read_excel("correct file path", skip = 1)
mydata <- data[, -c(19:40)]
i <- c(5:18)
mydata[, i] <- apply(mydata[, i],2, function(x) as.numeric(as.character(x)))
mydata <- na.omit(mydata)
table1::label(dat$Sex) <- "Sex"
table1::label(dat$Age) <- "Age"
table1::label(dat$SBP) <- "SBP"
table1::label(dat$DBP) <- "DBP"
table1::label(dat$BMI) <- "BMI"
table1::label(dat$WHR) <- "waist:Hip"
table1::label(dat$`LTM %`) <- "% Lean tissue mass"
table1::label(dat$`FM %`) <- "% Fat mass"
sumtab <- table1::table1(~Sex + Age + SBP + DBP + BMI + WHR + 'LTM %' + 'FM %' , data = dat)
I get the following error
Error in terms.formula(formula, data = data) :
invalid model formula in ExtractVars
I cannot see what I've done wrong
A:
The issue is with the single quotes ('); instead, use backquotes (`):
sumtab <- table1::table1(~Sex + Age + SBP + DBP + BMI +
WHR + `LTM %` + `FM %` , data = dat)
Using a reproducible example
library(table1)
table1(~ sex + age + wt + 'LTM %', data=dat)
Error in terms.formula(formula, data = data) : invalid model
formula in ExtractVars
single quote results in error as in the OP's post
table1(~ sex + age + wt + `LTM %`, data=dat)
-output
data
set.seed(24)
dat <- expand.grid(id=1:10, sex=c("Male", "Female"), treat=c("Treated", "Placebo"))
dat$age <- runif(nrow(dat), 10, 50)
dat$age[3] <- NA # Add a missing value
dat$wt <- exp(rnorm(nrow(dat), log(70), 0.2))
dat$`LTM %` <- sample(40:50, nrow(dat), replace = TRUE)
label(dat$sex) <- "Sex"
label(dat$age) <- "Age"
label(dat$treat) <- "Treatment Group"
label(dat$wt) <- "Weight"
label(dat$`LTM %`) <- "% Lean tissue mass"
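A side note, not from the original answer: if the backquotes feel error-prone, another option is to rename the non-syntactic column once and keep the formula plain. A minimal sketch against the example data above (the name LTM_pct is just an illustrative choice):
library(table1)
# Hypothetical sketch: give the awkward column a syntactic name up front,
# then label it and use it directly in the formula (no backquotes needed).
names(dat)[names(dat) == "LTM %"] <- "LTM_pct"
label(dat$LTM_pct) <- "% Lean tissue mass"
table1(~ sex + age + wt + LTM_pct, data = dat)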
|
tomekkorbak/pile-curse-small
|
StackExchange
|
Can't Buy Me Love
"Can't Buy Me Love" is a song composed by Paul McCartney (credited to Lennon–McCartney) and released by the English rock band the Beatles on the A-side of their sixth British single, with "You Can't Do That" as the B-side, in March 1964. The song was then released on the group's third UK album A Hard Day's Night. In September 2015, the Beatles donated the use of their recording of the song to People for the Ethical Treatment of Animals for a television commercial.
Composition
While in Paris, the Beatles stayed at the five star George V hotel and had an upright piano moved into one of their suites so that songwriting could continue. It was here that McCartney wrote "Can't Buy Me Love". The song was written under the pressure of the success achieved by "I Want to Hold Your Hand" which had just reached number one in America. When producer George Martin first heard "Can't Buy Me Love" he felt the song needed changing: "I thought that we really needed a tag for the song's ending, and a tag for the beginning; a kind of intro. So I took the first two lines of the chorus and changed the ending, and said 'Let's just have these lines, and by altering the second phrase we can get back into the verse pretty quickly. And they said, "That's not a bad idea, we'll do it that way". The song's verse is a twelve bar blues in structure, a formula that the Beatles seldom applied to their own material.
When pressed by American journalists in 1966 to reveal the song's "true" meaning, McCartney stated that, "I think you can put any interpretation you want on anything, but when someone suggests that 'Can't Buy Me Love' is about a prostitute, I draw the line." He went on to say: "The idea behind it was that all these material possessions are all very well, but they won't buy me what I really want." However, he was to comment later that "It should have been 'Can Buy Me Love'" when reflecting on the perks that money and fame had brought him.
Recording
"Can't Buy Me Love" was recorded on 29 January 1964 at EMI's Pathe Marconi Studios in Paris, France, where the Beatles were performing 18 days of concerts at the Olympia Theatre. At this time, EMI's West Germany branch, Odeon, insisted that the Beatles would not sell records in any significant numbers in Germany unless they were actually sung in the German language and the Beatles reluctantly agreed to re-record the vocals to "She Loves You" and "I Want to Hold Your Hand" prior to them being released in Germany. George Martin travelled to Paris with a newly mastered rhythm track for what was to be "Komm, Gib Mir Deine Hand". "Sie Liebt Dich" required the Beatles to record a new rhythm track as the original two-track recording had been scrapped. EMI sent a translator to be present for this recording session which had been hurriedly arranged to tie in with the Beatles' Paris commitments. This was accomplished well within the allotted studio time, allowing the Beatles an opportunity to record the backing track, with a guide vocal, to the recently composed "Can't Buy Me Love". At this stage the song included background vocal harmonies, but after listening to the first take, the band concluded that the song did not need them. Therefore, "Can't Buy Me Love" became the first single the Beatles released without their characteristic background harmonies.
McCartney's final vocal was overdubbed at EMI Studios, Abbey Road, London, on 25 February. Also re-recorded on this day at EMI Studios was George Harrison's modified guitar solo, although his original solo can still just be heard in the background. Harrison said: "What happened was, we recorded first in Paris and re-recorded in England. Obviously they'd tried to overdub it, but in those days they only had two tracks, so you can hear the version we put on in London, and in the background you can hear a quieter one." Helen Shapiro, a friend of the Beatles and present at this overdub session, says that Ringo Starr also added extra cymbals "over the top" and that "apparently this was something he did quite often on their records". "Can't Buy Me Love" is also the only English-language track that the Beatles recorded in a studio outside the UK, although the instrumentation of the band's 1968 B-side "The Inner Light" was recorded in India by Harrison and some Indian classical musicians.
Release
"Can't Buy Me Love" was released as a single, backed by John Lennon's song "You Can't Do That". The release took place on 16 March 1964 in the United States and four days later in the United Kingdom. In the US, "Can't Buy Me Love" topped the Billboard Hot 100 chart for four weeks. With the success of the song, the Beatles established four records on the Hot 100:
Until Billboard began using SoundScan for their charts in 1991, the song had the biggest jump to the top position: number 27 to number 1.
It gave the Beatles three consecutive chart-topping singles, since "I Want to Hold Your Hand" was replaced at number 1 by "She Loves You", which was in turn replaced by "Can't Buy Me Love". The three songs spent a combined total of 14 consecutive weeks at number 1. This is the only time an artist had three number 1 singles in a row.
When "Can't Buy Me Love" reached number 1, on 4 April 1964, the Beatles held the entire top five on the Hot 100, the next positions being filled by "Twist and Shout", "She Loves You", "I Want to Hold Your Hand" and "Please Please Me", respectively. No other act has held the top five spots simultaneously.
During its second week at number 1, the Beatles had fourteen songs on the Hot 100 at the same time.
In the UK, "Can't Buy Me Love" became the Beatles' fourth number 1 and their third single to sell over a million copies. By November 2012, it had sold 1.53 million copies there. As of December 2018, it was the 35th best-selling single of all time in the UK – one of six Beatles songs included on the top sales rankings published by the Official Charts Company.
"Can't Buy Me Love" was included on the Beatles' A Hard Day's Night album in June 1964 and the US soundtrack album of the same name, released on United Artists Records. For its sequence in the film A Hard Day's Night, director Richard Lester used crane shots to capture the four band members running and leaping in a sports field. In his book on the history of music videos, Money for Nothing, author Saul Austerlitz places "Can't Buy Me Love" at number 33 on the "Top 100 Videos List".
Subsequent album appearances for the song include the compilations A Collection of Beatles Oldies (1966) and Hey Jude (1970; also known as The Beatles Again), the 1973 double-disc collection 1962–1966, the 1982 release Reel Music, which features songs from the Beatles' feature films; the 1982 compilation 20 Greatest Hits, and 1, released in November 2000. Rolling Stone ranks "Can't Buy Me Love" at number 295 on its list of the 500 Greatest Songs of All Time.
Cover versions
Ella Fitzgerald recorded the song for her 1964 album Hello, Dolly. This version was also released as a single, peaking at number 34 in the UK.
Personnel
Paul McCartney – double-tracked vocal, bass
John Lennon – acoustic rhythm guitar
George Harrison – double-tracked lead guitar, twelve-string guitar
Ringo Starr – drums
Personnel per Ian MacDonald
Norman Smith – hi-hat
as per Geoff Emerick's credit
Charts
Weekly charts
Year-end charts
Certifications
Notes
References
External links
CoverTogether: Can't Buy Me Love
Category:1964 singles
Category:The Beatles songs
Category:Parlophone singles
Category:Billboard Hot 100 number-one singles
Category:RPM Top Singles number-one singles
Category:UK Singles Chart number-one singles
Category:Irish Singles Chart number-one singles
Category:Songs written by Lennon–McCartney
Category:Song recordings produced by George Martin
Category:Ella Fitzgerald songs
Category:Chet Atkins songs
Category:Capitol Records singles
Category:Songs published by Northern Songs
Category:1964 songs
|
tomekkorbak/pile-curse-small
|
Wikipedia (en)
|
Sunday. June 27th. For the first time in Belgium I had an opportunity to attend a service conducted by our Wesleyan Chaplain
and I enjoyed the service greatly. The service was held in the grounds of what was in Peace time an Asylum, a grand building and
grounds now used by the authorities as a Hospital. A Service was also held in camp here for Church of England worshippers by the Army
Chaplain. This here part is without doubt one of the finest we have yet seen since coming out of England, grand scenery etc.
Not having been spoiled by the Germans, the people are very industrious, but dirty, starting work 5.30 a.m. and going on some of
them till 8 p.m. or even later, cultivating or gardening. God seems very near at present and War a long way off, when all is so
beautiful in nature and country around us and guns almost silent here. We still hope for a speedy end.
Monday. All quiet. Physical drill and baths which were fully appreciated in quietness and peace. We bathed in round tubs about
2ft deep. 1 ft of water in, could just kneel in and stand, but all the same we enjoyed the bathe.
Tuesday. All quiet. Physical drill 7 o'clock morning and an inspection by General about 9,30 a.m. commanding Second Army Corps. [Sir
Charles Ferguson.] This is the first inspection we have had in Belgium by one of our Generals. He exhorted us to do our best in
fighting and killing the Germans whenever we had a chance and the sooner the War would end; to put the same spirit in little things
such as digging and improving trenches, as we had proved to have in fighting when tried ! and especially to keep good discipline in
little things. Both Officers and men in regard to dress, cleanliness and smartness in obeying commands and then the big things, he
said, would take of themselves.
We expect to have a Route March this afternoon at 2 o'clock. We went about 8 miles or so.
Wednesday 30 June. We had a quiet day. Physical drill 7 o'clock. Inspection of rifles, ammunitions, bayonets and rifle drill
9.30 a.m. to 12 noon. Afternoon free.
Thursday. Same inspections and drill as previous day and a route march etc about 8 miles or and an inspection of gas helmets and
resirators. We expect to go to trenches tomorrow night.
On the way today we have noticed a new way and novel to us of clover cut and tied together like corn and stooked to dry also in
pikes in field and thatched like little stacks.
Wild flowers in plenty. All our varieties seen here, honeysuckle etc. There are also fields of beans and peas too growing fast and
good crops in flower now. Also maize.
The roads are in places very soft and boggy. No bottom except sand, so on most roads there are little trenches cut and trees which have
been split and cut fixed in to make a bottom to carry heavy traffic etc.
The principle main roads are all however paved with large "Winstone Setts", the width of about 6 yards or so and carry heavy traffic etc
very well. Of course this is in the centre of the roads. Some roads near town the railway runs too and it would be a novel sight to
see both engines and traffic and trains of the streets combined in peace times. No barriers between railways and streets, all joined
together.
Several separate pages of the Diary show team sheets of 5 men in a side, including both Officers and Other Ranks.
in some competition where scores could be 0, 1 or 3. The best suggestion from the Great War Forum is that they were playing
quoits with heavy metal rings, which was popular in the villages of the North Riding of Yorkshire.
Friday. July 2nd. All quiet. Resting, just parades for helmets and respirators. Getting them dipped and sprayed ready for
trenches. In the evening we started off for trenches about 7.30. It would be about a distance of 5 miles. We went most of way through
fields and lanes. On either side were crops of corn and hay in cocks and other produce mentioned before in diary, all growing
luxuriantly just 5 miles off trenches almost unbelievable. None of the crops were spoiled. The corn part of it was ready for cutting.
It was almost like a walk in our country lanes at home and then when we got about 2 miles off trenches the desolation caused by war
and shells was made apparent to us. Farms and buildings in ruins, corn sown itself awy from crops ungathered from previous harvest.
Fields of uncut meadows etc. We reached trenches safely and found them quiet etc and dugouts were better than others we had occupied.
But still the little mischief makers [lice] were there, had been left for souvenirs to make other lads scratch and grunt.
They are simply an awful pest.
Saturday, No 12 Platoon [a platoon comprised 4 Sections, each of 12 men under an NCO] took guards and sentry duties at dugouts, at
the bottom of the Communications trench and manned Support trenches at stand to, and ration parties at night. No 11 Platoon took
the same duties on Sunday as we had on Saturday while we had a bit of a rest etc.
Sunday. All quiet up to 4 o'clock in afternoon. Hardly know how long we are going to hold these trenches yet. May God still keep us
and bless our side and speedily give us our hearts desire.
Monday. July 5th. Spent very quiet during day. At night on duty in trenches, a Party of 8 filling sandbags. After filling a
hundred or two, the men were told off to carry bags to me to build up top of parapet of trench, facing trench. In front of trenches
where I was working was growing almost ready and ripe for cutting a field of oats. These I hear had sown themselves away from harvest
not garnered previous year. I doubt this time too it will be spoilt.
Tuesday. Fairly quiet all day. No 12 Platoon did duty in trenches, digging new fire trench through standing corn and few casualties here.
Our side were throwing bombs and the enemy fired three back, two hitting sandbags and making a few holes in them on trench top.
One came right over, about a dozen yards but failed to go off. I believe one of our Officers took it, perhaps to examine it. We left
trenches tonight about 11 o'clock and reached huts at Locre about 2 o'clock in morning of Wednesday.
Wednesday. July 7th. We spent resting. Only odd parades.
Thursday. Spent doing odd parades, rifle inspections and baths and Pay day.
Friday. Spent quiet as previous day. Only a longish route march and ordinary parades.
Saturday. Today we had a few more parades and inspection and a short march to get us ready for trenches. Probably going there tonight.
These trenches are near a village or the nearest near them called Wulvergham [Wulvergem].
[At this time Joe was given an unexpected 3 days leave back home and he continued the Diary on his return.]
I was awfully glad, though very much surprised to get leave home and the Captain of our Company, Captain Morn,
[this Officer would be Captain John Maughan, who would be killed, age 26, at Ypres, Hill 60 on the 17th Feb 1916.]
gave me rather a shock when he called me out of Company and said he had some bad news for me, but my face altered very much, when he told
me that I was one of the first chosen to go on leave out of the Battalion to England, saying before the whole Company, that owing to only
3 being allowed home on leave, the Officers had picked out a few men and I was one of the lucky ones, who had done good work in the
trenches and out too and good character.
This news was told to us just as we fell in to go to trenches and the Captain told me to fall out and get inside huts as I was proving too
much attraction. This was Saturday night.
Sunday. July 11th. On the afternoon of the above day, I started my journey, walking from huts at Locre to Bailliul [Bailleul],
a distance of about 4 miles, to catch the train there to Bologne [Boulogne] 4.30 in the afternoon. We got on board the ship Victoria and
an awful crush there was too, to pass the Officer and to get passes for Folkestone. We did the crossing in about 1 hour and a half and
had a pleasant ride. The train was waiting for us at Folkestone and we got to London, Victoria Station about 6 o'clock a.m. We took the
tube to Kings Cross Station, catching the 8.15 train for Darlington, arriving there about 3.30 in the afternoon. I had sent two telegrams
home and my mother and sister were waiting there when I got out. I got a good reception at Darlington by people who knew me well in my
old trade, but the welcome home to Barton was one to be remembered by me all my life, being fit for a King.
The 3 days leave soon passed and on Thursday July 15th I was on my way back again. It was hard to part with loved ones but duties call,
must be obeyed. I caught the 1.30 train in the afternoon and got to Victoria Station about 7 o'clock in time for train to Folkestone.
We were on board ship again about 11.30 p.m for Boulogne and crossed over again safely to the landing stage, taking train from there
back to Bailleul and then had the 4 miles to do back to Huts at Locre, landing there about Friday 8.15 a.m. This ended a nice though
very short holiday.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
[Cryobiological characteristics of placental cord blood preserved in bioarchive auto-preserved liquid nitrogen system].
The aim of this study was to investigate the cryobiological characteristics of placental cord blood (PCB) cryopreserved using the BioArchive auto-preserved liquid nitrogen system (BioArchive system). After Hespan depletion of red blood cells, 5 ml of a mixture of DMSO and 10% Dextran 40 was added into 20 ml of enriched leukocytes. 53 PCB units were cryopreserved with the following protocol: pre-freeze rate 10 degrees C/min, start freeze temperature -3 degrees C, end freeze temperature -10 degrees C to -15 degrees C, post freeze rate 2 degrees C/min, and end temperature -50 degrees C. After rapid thawing at 38 degrees C, the PCB were washed with 5% human serum albumin - 10% Dextran 40 and centrifuged at 400 x g, 10 degrees C for 20 minutes. The results showed that the viability of nucleated cells post-thaw was (73.3 +/- 12.5)%; the CD34(+) cell content was (0.3 +/- 0.21)% for pre-freeze PCB and (0.45 +/- 0.36)% for post-thaw PCB. No significant difference in CFU-GM/-G/-GEMM counts was found between pre-freeze and post-thaw PCB. Thawed PCB contained in the two compartments (20 ml and 5 ml) of a freezing bag showed similar viability and clonogenic capacity. The differential count of white blood cells was significantly changed: for post-thaw PCB, the percentage of granulocytes was dramatically decreased, and the percentage of lymphocytes and monocytes was highly increased. It was concluded that the conditions for cryopreservation and thawing of PCB may be harmful to mature cells and to cells with a large size, such as granulocytes, but suitable for lymphocytes and monocytes, especially cells with a small size, such as CD34(+) cells.
|
tomekkorbak/pile-curse-small
|
PubMed Abstracts
|
Q:
Caliburn.Micro - Binding a button in a sidebar to a method in a ViewModel
I have a problem with binding a button located in a sidebar in my Windows Phone app. It seems like the button's binding just disappears.
Here's my code at the moment
<Grid x:Name="LayoutRoot" Background="Transparent">
<Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
<sidebar:SidebarControl x:Name="sidebarControl"
HeaderText="WP"
HeaderBackground="YellowGreen"
HeaderForeground="White"
SidebarBackground="{StaticResource PhoneChromeBrush}">
<sidebar:SidebarControl.SidebarContent>
<Grid HorizontalAlignment="Stretch" VerticalAlignment="Stretch" Width="380">
<Button Content="Go to page 2" x:Name="GoToPage2"/>
<Grid.RowDefinitions>
<RowDefinition Height="Auto"/>
<RowDefinition Height="*"/>
</Grid.RowDefinitions>
</Grid>
</sidebar:SidebarControl.SidebarContent>
<Grid VerticalAlignment="Top" HorizontalAlignment="Stretch"
Margin="12">
<TextBlock Style="{StaticResource PhoneTextNormalStyle}">Your current view goes here</TextBlock>
</Grid>
</sidebar:SidebarControl>
</Grid>
</Grid>
At the moment I am using a NuGet package for the sidebar called SidebarWP8. Maybe Caliburn.Micro doesn't work with this? Or do I have to insert a binding to the VM in a grid?
Here's the method in the ViewModel:
private readonly INavigationService navigationService;
public MainPageViewModel(INavigationService navigationService)
{
this.navigationService = navigationService;
}
public void GoToPage2()
{
navigationService.UriFor<Page2ViewModel>().Navigate();
}
A:
<Button cm:Message.Attach="[Event Click] = [Action GoToPage2()]" />
This should work. The other commenter is correct with respect to the default controls: custom controls can require some extra handling, which can be a pain, but with the explicit shorthand above CM will attach the Click event to the GoToPage2 action and process it accordingly.
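One detail the answer leaves implicit is that the cm: prefix must be declared on the page root. Below is a minimal sketch, assuming a WP8 page and that Message.Attach lives in the Caliburn.Micro namespace; the exact assembly name (Caliburn.Micro vs Caliburn.Micro.Platform) depends on the CM version you reference, and the class name here is made up.
<!-- Hypothetical page root; only the namespace mappings relevant to the snippet are shown. -->
<phone:PhoneApplicationPage
    x:Class="MyApp.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:phone="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone"
    xmlns:cm="clr-namespace:Caliburn.Micro;assembly=Caliburn.Micro">
    <!-- Explicit attachment works even inside third-party containers such as the
         sidebar control, where CM's x:Name-based conventions may not be applied. -->
    <Button Content="Go to page 2"
            cm:Message.Attach="[Event Click] = [Action GoToPage2()]" />
</phone:PhoneApplicationPage>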
|
tomekkorbak/pile-curse-small
|
StackExchange
|
Ales and Meads
Blog Posts
In some areas, craft beer has started being incredibly popular. It has now replaced the “wino” culture in many cities, and homebrews and startups are popping up everywhere. It seems that everyone has a local brewery they recommend, and people are using “hops” and “session beers” in everyday language. While some cities and states are not seeing this trend, it’s becoming very common in many places. For this reason, it leads many people to wonder, “Why is craft beer so popular?”
It is new.
It’s a tale as old as time: people go for the newest and shiniest thing available. It’s becoming trendy to try new beer, and it’s becoming cool to be the new brewery on the block. In years before, the tried and true beers and breweries were the one that got people’s money. Now it’s the newest company, or the coolest brew shop, or the brewery with the most exciting new flavors. Many breweries and beers fizzle out within a year of their launch, but there are many others waiting to take their places.
Beer is cool.
For many years, beer was just the tried and true drink of older men and the cheapest thing to buy in the bar and the liquor store. Nobody really gave beer much thought; it just was. Now, people are giving a lot of thought to their beer. In the same way that you wonder which red wine will complement your salmon dish, people are now considering the best beer for their meals. It's becoming very common for people to bring craft beers to parties, camping, and even holidays and weddings. While some people may want to stick to their "American-brewed" monopoly brand beers, many people are going for the truly US-brewed, homebrewed unique flavors that craft beer has to offer.
It is a social thing.
Just like wine bars and cocktail clubs, breweries are becoming a social hotspot. Many local breweries are even considered “gastro pubs” with incredibly delicious and innovative foods to pair with their equally delicious and innovative beers. A lot of breweries also have fun settings, with ping-pong, pool, and even beer pong. They may even have patio areas, or just have a food truck rented in the parking lot. It is very much catered to the Millennial crowd, but other age demographics are finding these places to be very enjoyable as well. Even if people get canned beer from the store and bring it home or to a party, the craft brews are a talking point where everyone can share what they like or don’t like about a specific beer. Then, they can all try another one next time.
Craft beer is on the rise, and shows no signs of stopping. Paired with novelty and really yummy flavors, it seems that the people who are creating these masterpieces are at the forefront of a major shift in society’s perception of drinking. Drinking is no longer a tasteless pastime just to “get drunk,” it’s a science and an experience.
For many, craft beer is just a weird “hipster” thing, where you drink really crappy tasting beer and act like it was good. For many others, however, there is a real art to drinking and making craft beer. The people who pioneered the current movement have some of the largest breweries in the U.S., and are making a killing by brewing unique beer. In fact, the microbrewery business is becoming one of the most profitable industries in the United States right now.
Major Economic Boost
While many people think craft beer is the latest fad, the reality is that microbreweries and craft beer are adding quite a bit of money to the economy. It’s estimated that the total contribution that craft beer has made to the GDP of the United States was about $56 billion in 2014. It has also generated over 420,000 jobs, out of which the breweries and their storefronts and factories directly created 115,000 of those jobs. That’s a lot of people who are then able to add to the economy because they found a job in the industry.
Growth
Craft brews are one of the fastest growing industries in the United States. In 2014, beer production in general only grew by half a percent, but craft brew sales within the U.S. went up nearly 18 percent, and the exports for craft beer went up nearly 4 percent. That’s a lot of growth, no matter what industry you’re in. There are over 3,000 breweries in the U.S., and that number has increased steadily by about 10 percent each year for the past three years (since 2012). It is becoming such a movement that “Big Beer” is taking notice.
Sell Outs
One of the major concerns about craft brewing is that it will eventually overtake the big breweries like Anheuser-Busch. For this reason, “Big Beer” is dabbling in the craft brew industry as well, buying up highly popular craft brew brands and beginning to sell them mass-market style. The craft breweries make huge amounts of money when this happens, and also have access to brewery plants that allow much higher generation and production. Of course, this means that “Big Beer” will never lose the famed “Beer Wars.” After all, if you can’t beat ‘em, join ‘em.
Wall Street
Wall Street loves craft breweries. Consider the story of Sam Adam’s Boston Lager, and its creator Jim Koch. In the 80’s, the beer’s public stock went for about $15 a share, and now is over $350 a share. Private investors also love these breweries, as they often make a huge return on investments. Many investors also buy and combine breweries for lowered production costs and higher volume output, and then manage to generate a lot more income. Craft brew is solid money if you have the right kind of beer.
There are plenty of reasons that craft beer is so popular, but because of its popularity and the craft behind making it, it is generally much more expensive than “Big Beer” brands, making it much more profitable for the companies and breweries that produce them.
Craft breweries and craft beer have taken the world by storm lately. Craft beer has become so popular that one of the best-selling Christmas gifts in the United States is a homebrew kit, complete with a flask, hops, and directions for creating your own beer masterpiece at home. It seems that everyone is a craft brew enthusiast, and many people are happy to spend more than a few bucks on a single beer to experience the novelty and innovation associated with craft beer. Many major restaurants and liquor stores have wised up to this trend, and are now providing craft brews in their facilities to satiate the public's thirst for the new and unique flavors that craft beers have to offer. It's not just the restaurants and sellers that are catching the drift; "Big Beer" has taken notice of the not-so-subtle growth of craft brew in the United States. Because of this, companies like Anheuser-Busch, SABMiller, and Heineken have begun to plan their "war against microbreweries."
Craft Beer As a Threat
It’s quite amazing how something that so many people consider to be a fad can be such a huge threat to large companies like Anheuser-Busch. In fact, craft brews have taken over the beer market, and are responsible for nearly 17 times more sales than “general” beers that have been on the market for years. Craft breweries and brew lovers do not claim that they’re intending to take over the “Big Beer” market, but they are merely filling a void that these companies do not provide- unique beer and almost a cult-like atmosphere. Because this is something that a mass-marketed beverage can provide (because nobody cares about the hops or gravity of Coors Light), these “Big Beer” companies have actually started a proxy war with craft breweries around the U.S.
“Big Beer” in Big Trouble?
While it’s hard to believe that companies like Anheuser-Busch would be scared of little old breweries like Lagunitas or Stone Brewing, the reality is that they are. Even though 30 percent of the market is attributed to Anheuser-Busch and SABMiller, these companies are genuinely concerned about the remaining majority that these small craft breweries could take over. For this reason, many large companies are actually making an effort to buy up small, popular craft breweries and make them mass-produced and mass-distributed through their factories and methods. This changes the craft brew game, obviously, and many hardcore enthusiasts (snobs) would turn their nose up at the loss of authenticity in the brand name. There is even a plan to buy up popular craft brews in distribution chains, making it hard for the public to have access to these delicious beers. Hard to believe that large companies would play so dirty, right? Another disturbing fact is that Anheuser-Busch and SABMiller control the distribution chain, making it possible for large distribution and trucking companies to refuse to transport craft brew products if larger companies like AB and SAB back out of their contracts with these companies. It’s literally a Beer Battle to death; who will win?
About Me
Imagine a person who is beyond enthusiastic about freshly brewed beer; who eats, sleeps, and drinks beer and then drinks some more. Imagine someone who gets excited at the prospect of buying new tubes or heating elements for their garage homebrew project, and who has destroyed two label printers for their homebrew bottles. Do you get the picture? That’s me, Shawn, and I am a craft brew addict.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Hook and backer come unattached; if you'd like them attached, please call for the cost.
This is an "easy remove pegboard and slatwall hook" which saves time and hassle during in-store product change outs. The 4" metal stem slides right out of the backer so merchandisers don't have to lift the entire hook upwards when removing it from the wall or point of purchase display. This allows the hook right above the hook being removed to remain in the wall or on the display as you need to switch out.
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Summary points
Rotavirus is the leading cause of severe gastroenteritis in children worldwide, accounting for 35-40% of hospital admissions for gastroenteritis
Each year, 180 000-450 000 children under 5 years die from rotavirus gastroenteritis, with more than 90% of deaths occurring in developing countries
Because nearly all children are affected by rotavirus by age 5 years, good sanitation and hygiene alone are inadequate for prevention
Orally administered live attenuated vaccines offer the best protection against rotavirus; as of December 2013, national immunization programs of 51 countries include rotavirus vaccine
Such programs have greatly reduced morbidity and mortality from gastroenteritis
A low risk of intussusception has also been documented post-licensure in some countries, but this risk is greatly exceeded by the health benefits of vaccination
Rotavirus is the leading cause of severe childhood gastroenteritis. Each year, rotavirus is responsible for about 25 million clinic visits, two million hospital admissions, and 180 000-450 000 deaths in children under 5 years of age globally.123 Although rotavirus infection is prevalent worldwide, most deaths from this infection occur in developing countries (fig 1). Gastroenteritis caused by rotavirus cannot be clinically distinguished from that caused by other enteric pathogens; diagnosis requires testing of fecal specimens with commercially available assays. However, rotavirus is not routinely tested for in patients with gastroenteritis because the results do not alter clinical management, which relies mainly on appropriate rehydration therapy. Orally administered live attenuated vaccines that mimic natural infection offer the best protection against rotavirus. Two licensed rotavirus vaccines have been available since 2006 and have been implemented in many countries. We review approaches to diagnosis, management, and prevention of rotavirus gastroenteritis.
Sources and selection criteria
We looked at recent conference proceedings and searched PubMed, the Cochrane Database of Systematic Reviews, and Clinical Evidence online using the terms “rotavirus”, “rotavirus gastroenteritis”, and “rotavirus vaccines”. We focused on …
|
tomekkorbak/pile-curse-small
|
Pile-CC
|
Q:
how to update the whole column?
Here is my situation: I have a table called Statuses (statusID, statusName) with 22 statuses, and there are other tables that have statusID columns.
Now the customer wants to consolidate the 22 statuses in the Statuses table into 13 statuses. That means we have to update - more precisely, remap - all the statusID values in all the other tables.
Can anyone help me out here?
A:
Since this sounds like a one time thing the easiest way is to hard code the map.
e.g.
UPDATE
   TABLE
SET StatusID = CASE WHEN StatusID = 1 THEN 5
                    WHEN StatusID = 2 THEN 5
                    WHEN StatusID = 3 THEN 1
                    WHEN StatusID = 4 THEN 5
                    WHEN StatusID = 5 THEN 2
                    -- ...17 more WHEN clauses, one per old status
               END
or if you already have a mapping table
UPDATE
TABLE
SET StatusID = map.NewStatusID
FROM
TABLE as T
INNER JOIN Map
ON t.StatusID = map.OldStatusID
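If a mapping table does not exist yet, it only takes a couple of statements to create one; the ID pairs below are placeholders, not the poster's actual 22-to-13 mapping:
-- Hypothetical one-off mapping table: old status -> new (consolidated) status.
CREATE TABLE Map (
    OldStatusID int NOT NULL PRIMARY KEY,
    NewStatusID int NOT NULL
);

INSERT INTO Map (OldStatusID, NewStatusID)
VALUES (1, 5), (2, 5), (3, 1);   -- ...one row per old status, 22 in total

-- Then repeat the join-based UPDATE above for every table that has a StatusID column.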
|
tomekkorbak/pile-curse-small
|
StackExchange
|
1
Pancreas blood tests:
Commonly used blood tests of pancreatic-derived proteins (lipase, amylase, insulin, etc.) are notoriously poor markers for cancer given the variety of other things that may elevate them, and the reserve capacity of most organs. So, to answer your question, not necessarily. The best tests are high-resolution CT with a specific pancreas protocol, and EUS with biopsy.
...Read more
Many people resolve to lose weight in the New Year for different reasons. For those who are overweight or obese, there are many health benefits to losing weight. It can help decrease your chances of developing diseases including diabetes, heart disease, high blood pressure, osteoarthritis, and even certain types of cancer. Low-calorie diets combined with increased physical activity are thought to be most effective long term. The healthiest weight loss regimen, therefore, is one that consists of making lifestyle changes that incorporate a balanced diet and moderate physical activity.
...Read more
2
Pancreatic Cancer:
Unfortunately, even in this era of tremendous advances in the oncology field, pancreatic cancer is still a gruesome diagnosis. The most common symptom is jaundice; pain is rather rare and can be associated with very advanced disease with significant weight loss. Pain is more often associated with pancreatitis, and specific tests should be done to rule out inflammation of the pancreas (pancreatitis).
...Read more
Stop worrying:
You're a 22-year-old man and a non-smoker; your pain is lasting and undefined; you became a vegetarian, so weight loss is expected (people don't overeat vegetables); and a pancreatic cancer sufficiently advanced to cause pain won't be missed on a CT scan. You're off to a great start in life. Don't trouble yourself with cancerphobia. Pain that's serious would have revealed its cause by now.
...Read more
4
Don't give up:
If it's a small tumor in the head of the pancreas, then you may still have a chance at a cure. There are high-risk super-aggressive surgeries that are producing a higher cure rate than in the past.
...Read more
5
Possible:
It sounds as if you previously had a ct scan of your colon? It is unlikely your pancreas was imaged with the "colonography". An abdominal scan or ct dedicated to the pancreas is one of the better imaging studies to properly evaluate your pancreas.
...Read more
6
Not related toCancer:
You have a diseased gall bladder giving symptoms for 8 months, discuss with your doctor, you may benefit from its removal. Some times cholesterol deposits on gall bladder lining (strawberry gall bladder) will give severe symptoms, not seen as stones. Discuss with your doctor.
...Read more
7
No...Second opinion:
Not likely to be pancreatic cancer given the imaging you've had so far. There is a blood test you can get that might be reassuring. I suggest a second opinion to evaluate for something more usual such as sphincter dysfunction. In the meantime avoid foods that induce symptoms.
...Read more
9
Maybe:
Whether alcohol abuse is an important risk factor for pancreatic cancer isn't clear; smoking clearly is. Abuse alcohol though and you're asking for chronic pancreatitis, a confusingly-named lifetime severe pain syndrome involving damage to the deep nerves of the area. You don't want that, or any of several other unpleasant sequelae. Best wishes; make smart decisions & drink moderately if at all.
...Read more
11
Many possible reason:
Nausea is a non-specific symptom and can occur for many different reasons or diseases affecting the abdomen. It can originate in any part of the digestive tract. So you need to get a check up with your doctor and seek medical therapy. You may benefit from using anti-nausea pills (there are several different ones that can help). Any liver dysfunction (this is common with pancreatic cancer) can cause
...Read more
12
Variety:
There are multiple potential reasons including causing obstruction of the GI tract and/or bile duct, toxins from the cancer, and associated treatment side effects like nausea from chemotherapy. There are treatments available for the nausea based upon the cause; don't hesitate to talk to your oncologist.
...Read more
13
No:
Pancreas cancer arises in 3 areas of the gland: the head, producing painless jaundice; the mid body, invading the celiac plexus to cause severe back pain; and tail lesions near the spleen, obstructing the splenic vein to cause gastric varices from the vasa brevia. Pain in the left shoulder blade is due to irritation of the left diaphragmatic crura from inflammation at the site, not from distal pancreas Ca.
...Read more
14
Yes:
It would be obvious to anyone. "floating stools" are almost always from gas bubbles. They suggest gum-chewing or talking a lot rather than cancer. When due to steatorrhea as from pancreatic cancer, the stools are light-colored and incredibly stinky. If you are obsessed with the idea that you have pancreatic cancer, perhaps a good psychologist or other person can help you manage these thoughts.
...Read more
15
Undoubtedly...:
There are over a hundred medical citations in pubmed regarding radiation exposure as a risk factor for pancreatic cancer. By the way, was your dad a smoker? I noticed that you also smoke, & you may be interested to learn that tobacco consumption is a major risk factor for pancreatic cancer--please quit.
...Read more
16
See below:
Ms. Amanda: Unfortunately there are no good tests for screening for pancreatic cancer. Lipase is not reliable and lumpy fat is a quack thing. You may consult this site for info on this topic: http://www.cancer.org/cancer/pancreaticcancer/detailedguide/pancreatic-cancer-detection For good health - Have a diet rich in fresh vegetables, fruits, whole grains, milk and milk products, nuts, beans, legumes, lentils and small amounts of lean meats. Avoid saturated fats. Drink enough water daily, so that your urine is mostly colorless. Exercise at least 150 minutes/week and increase the intensity of exercise gradually. Do not use tobacco, alcohol, weed or street drugs in any form. Practice safe sex.
...Read more
18
Aspirin pancreas CA:
My grandmother took up to 30 aspirin a day for many years, along with a slew of other drugs. She got pancreatic cancer at 71 - could aspirin have caused it? ANS: What I would do is go to PUBMED and search for Pancreatic Cancer AND aspirin. This will give you the world's lit on this subject. I have never heard of an association. She must have had bad arthritis and was taking other things. Ask her Dr.
...Read more
19
Probable viral:
Pancreas cancer starts out as an intraductal lesion similar to breast DCIS. It smolders within the ductal system for 15-20 yrs before the first signs of ductal wall invasion to become an early pancreatic carcinoma. The TAA that are expressed early are oncofetal in origin and are suppressed at birth, to reappear in the tumor as the oncogenic protein. Transformation is probably virally induced.
...Read more
20
So do we:
Pancreatic cancer seems to be a spontaneous event and because it is internal with no early symptoms, is rarely detected until it has spread beyond meaningful treatment. No one knows the cause. There is much investigation being done to provide earlier detection and discover the cause.
...Read more
21
Unsure of question!:
Do YOU have pancreas cancer? That is very rare in someone in their 30s. No one knows what causes pancreas cancer. Symptoms are usually upper mid-abdominal pain near the breast bone, often after eating going into the back, weight loss, loss of appetite, change in taste/smell sensation, blood clot in leg, depression. Early pancreas cancer's best chance for cure is surgery.
...Read more
22
Pancreatic cancer:
Pancreatic cancer (P) is due to DNA mutations, and there are three ways that we can damage our DNA. We can be born with a DNA mutation inherited from either parent, we can damage our DNA by smoking, or our DNA can be damaged by chance. Risk factors for P are increasing age, diabetes, being a male, being obese, eating a high-cholesterol diet, and being black.
...Read more
23
No:
Abnormal lab tests are indicators, not causes of anything. However, if you have familial pancreatitis caused by a mutant trypsinogen gene, it often turns into pancreatic cancer eventually. Your physician is aware of this and can advise you.
...Read more
24
Symptom of abdominal:
Many different cancers in the abdomen can cause early satiety. It is a presenting symptom for stomach and ovarian cancer, as well as (sometimes) pancreatic cancer. If it persists or you are losing weight, you should consult your doctor without a delay of more than 1 or 2 weeks.
...Read more
25
Iron pancreas cancer:
33 M Ottawa: Can pancreatic cancer cause iron levels to increase? ANS: Can recall no biologically plausible mechanism for this to happen. But discuss with your Drs who know you and your metabolism best.
...Read more
26
Possible viral cause:
Pancreatic cancer begins at least 15 years before an early invasive lesion is present. Teenagers have been reported with pancreatic cancer. The probable cause is a virus getting into the tissues to cause transformation, though diseases like melanoma and chronic pancreatitis are associated with a higher incidence.
...Read more
Cancer is a group of diseases that is characterized by uncontrolled cell growth leading to invasion of surrounding tissues that spread to other parts of the body. Cancer can begin anywhere in the body and is usually related to one or more genetic mutations that allow normal cells to become malignant by interfering with internal cellular control mechanisms, such as programmed cell death or by preventing repair of DNA damage.
...Read more
|
tomekkorbak/pile-curse-small
|
Pile-CC
|