Adam Driver makes a solo trip down the red carpet while arriving at the 2020 Screen Actors Guild Awards on Sunday (January 19) at the Shrine Auditorium in Los Angeles. The 36-year-old actor kept things cool and handsome in a black tux as he stepped out for the awards show. Adam is nominated tonight for Outstanding Performance by a Male Actor in a Leading Role in a Motion Picture for his role in Marriage Story. FYI: Adam is wearing a Louis Vuitton tux.
Q: OpenWeatherMap: get average temperature on a day. Using the OpenWeatherMap API, how can I get the average temperature on a certain day using JavaScript? I know I am limited to getting data within the next 5 days. I have been looking but could not find a way to do it, and have not seen anybody else do it. A: You can make use of the API api.openweathermap.org/data/2.5/forecast?q={city name},{country code}. See https://openweathermap.org/forecast5 for more details. You can search the weather forecast for 5 days, with data every 3 hours, by city name. All weather data can be obtained in JSON and XML formats. API call: api.openweathermap.org/data/2.5/forecast?q={city name},{country code} Parameters: q (city name and country code divided by comma; use ISO 3166 country codes). Example API call: api.openweathermap.org/data/2.5/forecast?q=London,us&mode=xml
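The question itself (one average number per day) can be answered by fetching the 5-day/3-hour forecast, keeping the slots whose dt_txt falls on the target date, and averaging their temperatures. Below is a minimal TypeScript sketch (plain JavaScript is the same minus the type annotations); it is not official OpenWeatherMap sample code, it assumes the documented response shape (list[].dt_txt and list[].main.temp), and YOUR_API_KEY is a placeholder for a real key:

```typescript
// Average the 3-hourly forecast temperatures for one calendar day ("YYYY-MM-DD").
interface ForecastEntry {
  dt_txt: string;          // e.g. "2020-01-19 12:00:00"
  main: { temp: number };
}
interface ForecastResponse {
  list: ForecastEntry[];
}

async function averageTempOnDay(city: string, day: string): Promise<number> {
  const url =
    "https://api.openweathermap.org/data/2.5/forecast" +
    `?q=${encodeURIComponent(city)}&units=metric&appid=YOUR_API_KEY`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Forecast request failed: ${res.status}`);
  const data = (await res.json()) as ForecastResponse;

  // Keep only the eight (or fewer) 3-hour slots that fall on the requested day.
  const temps = data.list
    .filter((entry) => entry.dt_txt.startsWith(day))
    .map((entry) => entry.main.temp);
  if (temps.length === 0) throw new Error(`No forecast slots for ${day}`);

  return temps.reduce((sum, t) => sum + t, 0) / temps.length;
}

// Usage: averageTempOnDay("London,uk", "2020-01-20").then(console.log);
```

Since the free endpoint only covers the next five days, a requested date outside that window simply yields no slots.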
When Queen’s Park Rangers appointed Neil Warnock as manager, a little over a year ago, I was aghast: “Fourteen years since QPR last played in the Premier League, but if the chairman thinks Warnock is the answer, he must be asking the wrong question.” How wrong was I?
Sightings: The Fiction of Evangelical Friction “As significant as is the impact of polls, podiums, and pulpits, shifts in the debates on immigration policy reform among evangelical Protestants may depend on the frequency and quality of concrete relationships between those sitting in the pews. How neighborly.”
UN Report: Internet growth slows; most people still offline
A report by the UN Broadband Commission released on Monday said that growth in Internet access was predicted to drop further this year as rich economies reach saturation point, while 90 percent of people in the 48 poorest countries had no chance to go online. The access growth rate is expected to slow to 8.1% this year, down from 8.6% in 2014. Until 2012, growth rates had been in double digits for years. Some 57 percent of the world’s population, or more than 4 billion people, still did not use the Internet regularly or actively, according to the report – a figure that puts the UN target of having 60 percent of the world online by 2020 far out of reach. It blamed the cost of extending last-mile infrastructure to rural and remote customers, and a sharp slowdown in the growth of mobile cellular subscriptions globally. Women in poorer countries were particularly disadvantaged, the report said. In the developing world, 25 percent fewer women than men had Internet access, a number that rises to 50 percent in parts of sub-Saharan Africa. Only about 5 percent of the world’s estimated 7,100 languages were represented on the Internet, the report said. Many Internet users could not understand Latin script, so even reading domain names was a challenge, it added. The report showed the Asia-Pacific region as having the largest share – 46.6 percent – of the total global market for fixed broadband subscription, a rough indicator for overall Internet usage. Europe had 23.8 percent, followed by the Americas on 22.7 percent. Africa, Arab states and CIS countries all lagged far behind, well under double figures.
M and 10:30 AM? 344 How many minutes are there between 3:44 PM and 1:55 AM? 611 What is 643 minutes before 4:44 AM? 6:01 PM How many minutes are there between 6:51 PM and 2:17 AM? 446 What is 186 minutes before 1:38 AM? 10:32 PM How many minutes are there between 11:13 AM and 9:56 PM? 643 What is 607 minutes before 8:48 AM? 10:41 PM How many minutes are there between 6:02 AM and 1:44 PM? 462 What is 664 minutes before 2:57 AM? 3:53 PM How many minutes are there between 1:15 AM and 10:34 AM? 559 How many minutes are there between 8:42 PM and 2:14 AM? 332 What is 628 minutes after 1:06 AM? 11:34 AM What is 556 minutes after 7:12 AM? 4:28 PM What is 470 minutes after 5:47 AM? 1:37 PM How many minutes are there between 6:18 PM and 2:28 AM? 490 How many minutes are there between 2:00 AM and 9:03 AM? 423 What is 92 minutes after 9:47 AM? 11:19 AM What is 324 minutes before 9:12 AM? 3:48 AM How many minutes are there between 6:22 AM and 6:32 AM? 10 How many minutes are there between 8:17 PM and 12:47 AM? 270 What is 244 minutes before 12:58 AM? 8:54 PM How many minutes are there between 11:35 AM and 1:36 PM? 121 How many minutes are there between 6:31 PM and 3:44 AM? 553 How many minutes are there between 11:03 PM and 11:47 PM? 44 How many minutes are there between 8:15 PM and 3:23 AM? 428 How many minutes are there between 9:16 PM and 12:21 AM? 185 How many minutes are there between 3:34 AM and 6:39 AM? 185 What is 597 minutes after 1:55 AM? 11:52 AM What is 295 minutes after 6:20 PM? 11:15 PM What is 110 minutes after 7:24 PM? 9:14 PM What is 685 minutes after 6:15 PM? 5:40 AM How many minutes are there between 10:47 PM and 1:32 AM? 165 What is 676 minutes before 2:53 PM? 3:37 AM How many minutes are there between 7:46 PM and 3:58 AM? 492 What is 115 minutes after 10:03 PM? 11:58 PM What is 91 minutes before 9:38 PM? 8:07 PM What is 405 minutes before 6:55 PM? 12:10 PM How many minutes are there between 10:06 AM and 12:53 PM? 167 How many minutes are there between 6:14 PM and 5:46 AM? 692 How many minutes are there between 9:43 PM and 12:07 AM? 144 How many minutes are there between 7:39 AM and 6:06 PM? 627 How many minutes are there between 2:01 AM and 6:47 AM? 286 What is 254 minutes before 1:20 PM? 9:06 AM How many minutes are there between 5:04 PM and 5:48 PM? 44 What is 443 minutes before 5:23 PM? 10:00 AM What is 149 minutes after 5:09 PM? 7:38 PM How many minutes are there between 3:20 PM and 11:36 PM? 496 How many minutes are there between 8:18 PM and 2:46 AM? 388 What is 345 minutes after 12:35 AM? 6:20 AM How many minutes are there between 1:35 PM and 3:52 PM? 137 How many minutes are there between 10:50 PM and 2:47 AM? 237 How many minutes are there between 1:29 PM and 12:31 AM? 662 What is 156 minutes before 5:29 PM? 2:53 PM What is 707 minutes after 3:34 AM? 3:21 PM How many minutes are there between 10:50 AM and 1:36 PM? 166 How many minutes are there between 4:22 AM and 7:51 AM? 209 How many minutes are there between 1:51 AM and 11:19 AM? 568 What is 320 minutes after 8:17 AM? 1:37 PM How many minutes are there between 7:50 PM and 7:19 AM? 689 What is 586 minutes after 3:45 PM? 1:31 AM What is 243 minutes before 5:25 AM? 1:22 AM How many minutes are there between 4:00 AM and 5:02 AM? 62 How many minutes are there between 1:54 AM and 2:36 AM? 42 What is 325 minutes after 5:22 PM? 10:47 PM What is 318 minutes before 3:08 PM? 9:50 AM How many minutes are there between 7:03 PM and 6:45 AM? 702 How many minutes are there between 8:25 AM and 8:52 AM? 
27 What is 51 minutes after 3:21 AM? 4:12 AM How many minutes are there between 10:29 AM and 9:29 PM? 660 How many minutes are there between 10:34 PM and 7:43 AM? 549 What is 243 minutes after 8:00 AM? 12:03 PM What is 243 minutes before 4:39 AM? 12:36 AM How many minutes are there between 11:26 PM and 9:09 AM? 583 How many minutes are there between 11:20 AM and 11:31 AM? 11 How many minutes are there between 12:01 PM and 11:49 PM? 708 What is 27 minutes before 4:52 PM? 4:25 PM How many minutes are there between 11:01 PM and 8:29 AM? 568 How many minutes are there between 5:48 AM and 8:16 AM? 148 What is 309 minutes before 7:32 PM? 2:23 PM What is 472 minutes after 1:45 PM? 9:37 PM How many minutes are there between 8:18 PM and 3:16 AM? 418 How many minutes are there between 3:09 PM and 8:27 PM? 318 How many minutes are there between 10:06 AM and 7:38 PM? 572 How many minutes are there between 1:22 AM and 3:44 AM? 142 What is 158 minutes after 6:26 PM? 9:04 PM How many minutes are there between 1:43 AM and 12:17 PM? 634 How many minutes are there between 2:36 PM and 10:18 PM? 462 What is 244 minutes before 5:29 PM? 1:25 PM How many minutes are there between 7:11 AM and 5:31 PM? 620 How many minutes are there between 5:40 PM and 5:50 PM? 10 How many minutes are there between 3:00 AM and 7:46 AM? 286 How many minutes are there between 2:31 PM and 6:22 PM? 231 What is 331 minutes after 9:22 AM? 2:53 PM What is 291 minutes after 10:10 AM? 3:01 PM How many minutes are there between 6:08 AM and 1:08 PM? 420 What is 310 minutes before 8:59 AM? 3:49 AM What is 295 minutes after 2:43 AM? 7:38 AM What is 344 minutes after 11:45 PM? 5:29 AM How many minutes are there between 1:03 AM and 6:20 AM? 317 How many minutes are there between 10:37 AM and 12:43 PM? 126 What is 647 minutes before 1:29 AM? 2:42 PM What is 558 minutes after 3:03 PM? 12:21 AM How many minutes are there between 2:48 PM and 1:13 AM? 625 What is 680 minutes before 4:28 PM? 5:08 AM How many minutes are there between 12:05 AM and 1:47 AM? 102 What is 588 minutes before 2:05 PM? 4:17 AM How many minutes are there between 3:18 PM and 1:27 AM? 609 How many minutes are there between 7:26 PM and 3:50 AM? 504 How many minutes are there between 12:04 AM and 9:24 AM? 560 What is 251 minutes after 6:18 PM? 10:29 PM What is 589 minutes after 7:01 AM? 4:50 PM How many minutes are there between 12:58 AM and 5:56 AM? 298 How many minutes are there between 11:27 PM and 10:31 AM? 664 What is 56 minutes after 7:19 PM? 8:15 PM What is 176 minutes before 8:26 PM? 5:30 PM What is 538 minutes before 12:44 PM? 3:46 AM What is 605 minutes before 6:09 AM? 8:04 PM How many minutes are there between 1:17 AM and 11:45 AM? 628 How many minutes are there between 6:14 PM and 2:37 AM? 503 How many minutes are there between 5:20 AM and 11:59 AM? 399 What is 53 minutes before 5:48 AM? 4:55 AM How many minutes are there between 11:35 PM and 8:54 AM? 559 What is 304 minutes before 11:30 PM? 6:26 PM What is 27 minutes before 10:59 AM? 10:32 AM What is 493 minutes before 3:41 PM? 7:28 AM How many minutes are there between 5:08 AM and 2:46 PM? 578 How many minutes are there between 7:33 PM and 2:07 AM? 394 How many minutes are there between 9:33 AM and 11:30 AM? 117 What is 508 minutes after 1:37 AM? 10:05 AM What is 170 minutes before 4:34 PM? 1:44 PM How many minutes are there between 8:46 AM and 9:24 AM? 38 How many minutes are there between 5:16 AM and 12:59 PM? 463 What is 115 minutes before 10:31 PM? 8:36 PM What is 235 minutes before 1:44 PM? 
9:49 AM What is 672 minutes after 8:32 AM? 7:44 PM How many minutes are there between 5:43 PM and 8:33 PM? 170 How many minutes are there between 9:07 PM and 7:17 AM? 610 What is 323 minutes before 6:54 PM? 1:31 PM What is 485 minutes after 4:56 AM? 1:01 PM What is 553 minutes before 3:27 PM? 6:14 AM How many minutes are there between 1:57 PM and 4:04 PM? 127 What is 660 minutes after 3:21 PM? 2:21 AM What is 50 minutes after 9:38 PM? 10:28 PM What is 581 minutes before 11:41 AM? 2:00 AM What is 480 minutes after 5:11 AM? 1:11 PM What is 596 minutes after 10:31 AM? 8:27 PM What is 76 minutes before 9:49 PM? 8:33 PM How many minutes are there between 6:59 AM and 6:11 PM? 672 What is 374 minutes before 7:52 PM? 1:38 PM What is 36 minutes before 3:10 AM? 2:34 AM How many minutes are there between 11:19 AM and 6:17 PM? 418 What is 41 minutes before 8:28 PM? 7:47 PM How many minutes are there between 4:42 PM and 4:11 AM? 689 What is 644 minutes after 5:06 PM? 3:50 AM What is 317 minutes after 2:09 AM? 7:26 AM What is 149 minutes before 2:04 PM? 11:35 AM How many minutes are there between 5:48 AM and 10:39 AM? 291 What is 173 minu
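Every question-and-answer pair above is an instance of one of two templates: minutes elapsed between two clock times (wrapping past midnight when needed), or a clock time shifted forward or backward by some number of minutes. A minimal TypeScript sketch of the arithmetic behind the answers (the helper names are mine, chosen for illustration):

```typescript
// Parse "H:MM AM/PM" into minutes past midnight (12:00 AM -> 0, 12:00 PM -> 720).
function toMinutes(clock: string): number {
  const [, h, m, half] = clock.match(/(\d+):(\d+) (AM|PM)/)!;
  return ((Number(h) % 12) + (half === "PM" ? 12 : 0)) * 60 + Number(m);
}

// Render minutes past midnight back to "H:MM AM/PM", wrapping across days.
function toClock(total: number): string {
  const t = ((total % 1440) + 1440) % 1440;
  const h24 = Math.floor(t / 60);
  const h12 = h24 % 12 === 0 ? 12 : h24 % 12;
  return `${h12}:${String(t % 60).padStart(2, "0")} ${h24 < 12 ? "AM" : "PM"}`;
}

// Minutes from `start` forward to `end`, wrapping past midnight when needed.
function minutesBetween(start: string, end: string): number {
  return (toMinutes(end) - toMinutes(start) + 1440) % 1440;
}

// Checks against pairs from the text above:
console.log(minutesBetween("3:44 PM", "1:55 AM")); // 611
console.log(toClock(toMinutes("4:44 AM") - 643));  // "6:01 PM"
console.log(toClock(toMinutes("1:06 AM") + 628));  // "11:34 AM"
```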
I am by no means a food blogger, but today I want to take the opportunity to share about a wonderful new restaurant in town! Because tacos (like fashion) inspire me. It's called TacoArt and it's located in Quarry Village, near Trader Joe's. I was so excited to attend their Media Preview event and get one of the first tastes of their menu. Let me tell you: delicioso! I loved the tacos and salsa, but what really got me: the margs. Of course. But there are many more reasons this spot is a unique and exciting addition to our culinary community:
Kid-friendly Atmosphere: From the chalkboard wall ready for your kiddos' next masterpiece, to the brown paper tablecloths with crayons provided, this restaurant is a great family spot. They also offer hosted birthday parties with planned activities like art projects, and a taco bar for each birthday boy or girl!
An Artistic Touch: Designed by local San Antonio artist Joseph Silvas, the artwork decorating the space is pop-art infused with classic Mexican cuisine elements! It provides a fun and funky backdrop for a great meal out.
Owned by Locals: The restaurant is locally owned and operated by Adriana Llano, who previously opened Urban Taco in the same location.
Marinella Senatore
Marinella Senatore (born 1977) is an Italian visual artist.
Exhibitions
2011 – Macro Museum, Rome, IT
2012 – Kunstlerhaus Bethanien, Berlin, D
2012 – Matadero, Madrid, ES
2012 – Quad, Derby, UK
2012 – ViaFarini, Milan, IT
2013 – Castello di Rivoli (Turin)
2013 – Musei Civici, Cagliari, IT
2013 – Nomas Foundation/Teatro Valle occupato, Rome, IT
2014 – Estman Radio, INSITU, Berlin, D
2014 – Kunsthalle St.Gallen, CH
2014 – Museum of Contemporary Art, Santa Barbara, US
2014 – The School of Narrative Dance, Israel, Petach Tikva Museum of Art, IL
2014 – MOT International, London, UK
Recognition
2009: Dena Foundation Fellowship
2010: New York Prize
2011: fellowship, The American Academy in Rome; finalist, Furla Art Award
2013: Gotham Prize; fellowship, Castello di Rivoli – Museum of Contemporary Art
2014: Maxxi Prize
Residencies
2005 – Ratti Foundation, Como, IT
2009 – ArtOmi, Omi International Center, Ghent, US
2010 – ISCP, Brooklyn, New York, US
2011 – Künstlerhaus Bethanien, Berlin, D
2012 – American Academy, Rome, IT
2012 – Via Farini, Milan, IT
Watching a particularly beautiful movie of the sun helps show how the lines between science and art can sometimes blur. But there is more to the connection between the two disciplines: science and art techniques are often quite similar; indeed, one may inform the other or be improved based on lessons from the other arena. One such case is a technique known as a "gradient filter" -- recognizable to many people as an option available on a photo-editing program. Gradients are, in fact, a mathematical description that highlights the places of greatest physical change in space. A gradient filter, in turn, enhances places of contrast, making them all the more obviously different, a useful tool when adjusting photos. Scientists, too, use gradient filters to enhance contrast, accentuating fine structures that might otherwise be lost in the background noise. On the sun, for example, scientists wish to study a phenomenon known as coronal loops, which are giant arcs of solar material constrained to travel along that particular path by the magnetic fields in the sun's atmosphere. Observations of the loops, which can be more or less tangled and complex during different phases of the sun's 11-year activity cycle, can help researchers understand what's happening with the sun's complex magnetic fields, fields that can also power great eruptions on the sun such as solar flares or coronal mass ejections. The images here show an unfiltered image from the sun next to one that has been processed using a gradient filter. Note how the coronal loops are sharp and defined, making them that much easier to study. On the other hand, gradients also make great art. Watch the movie to see how the sharp loops on the sun next to the more fuzzy areas in the lower solar atmosphere provide a dazzling show.
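To make the idea concrete: the filter computes, at each pixel, the magnitude of the intensity gradient, which is how fast brightness changes there; that is why sharp edges such as coronal loops light up while smooth regions stay dark. A minimal TypeScript sketch using central differences on a grayscale image (an illustrative implementation, not the pipeline used for the actual solar images):

```typescript
// Gradient-magnitude filter: brightens pixels where intensity changes fastest.
// `img` is a grayscale image in row-major order, `w` pixels wide and `h` tall.
function gradientFilter(img: Float64Array, w: number, h: number): Float64Array {
  const out = new Float64Array(w * h);
  for (let y = 1; y < h - 1; y++) {
    for (let x = 1; x < w - 1; x++) {
      const i = y * w + x;
      const gx = (img[i + 1] - img[i - 1]) / 2; // horizontal central difference
      const gy = (img[i + w] - img[i - w]) / 2; // vertical central difference
      out[i] = Math.hypot(gx, gy);              // |grad I| = sqrt(gx^2 + gy^2)
    }
  }
  return out; // the one-pixel border is left at 0 for simplicity
}
```

A photo editor's gradient filter and the solar-image processing described above are both applications of exactly this operation, differing mainly in how the result is scaled and displayed.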
Kings 1, Stars 0, SO
DALLAS - With a pair of teams struggling to find the back of the net, a low-scoring affair was certainly in the cards on Thursday night in Big D. The expected result wound up being a straight flush. Justin Williams scored the winning goal in the third round of the shootout, lifting the Los Angeles Kings to a 1-0 victory over the sliding Dallas Stars at American Airlines Center. After Jere Lehtinen tied the shootout with a goal to open the third round, Williams was able to squirt a wrister between Dallas goalie Marty Turco's pads for the winner. Turco actually was able to briefly stop the puck while sliding back into the net, but it eventually slipped through and in. On the ice it was not called a goal, but the call was reversed after a short video review, giving the Kings the precious bonus point. The 12th place Stars are now five points behind both Anaheim and Nashville, the two teams that hold the final pair of playoff slots in the Western Conference. Both the Ducks and Predators have 80 points, one in front of the Edmonton Oilers, who were playing Phoenix later Thursday night. Coming in, the Kings had been outscored 10-2 in their last three games, which were all losses. The Stars, meanwhile, had scored two goals or less in each of their four straight defeats. Not surprisingly, the game went 65 minutes without a single tally. "We left a point out there," Turco said. "We had a lot of chances that didn't go our way. Other than that there's no other way to look at it other than we missed out on a point we desperately needed. They're a team that's struggling offensively, too. Defense was solid and we had the patience to keep everything out. We just needed one and it sucks we didn't get it, but the effort was there. Unfortunately we're just running out of time." It was the second two-goalie shutout game that the Stars have been involved in this year. "Trust and belief can turn into frustration and panic real quick," Turco said. "This team is mentally strong, we've been through a lot and we're going to continue to fight until they tell us to go home." Both teams were snakebit by the quality netminding of Turco and Los Angeles' Jonathan Quick during a game that looked ragged at times. Turco picked up his third shutout of the year -- and 36th of his career -- by making 30 saves, while Quick registered his fourth blanking of the season with 29 stops. "(Quick) played great today," Loui Eriksson said. "Our goalie played well, too. We played really well and I thought we had a couple good chances, but we couldn't capitalize on them. It's tough right now. We needed two points and we only got one." Dallas played the majority of the game shorthanded once again. In this season of unending injuries, the Stars were without durable defenseman Trevor Daley for all but three minutes of the contest after he was knocked out by a cheap shot from Kings forward Raitis Ivanans early on. Going after a loose puck behind the goal line, Ivanans shoved Daley from behind while he was in a vulnerable position, sending his head crashing violently into the boards. Daley, one of only three players (Eriksson and Mike Ribeiro are the others) to have appeared in all 74 games this season, stayed on the ice for a few minutes. He needed assistance to skate off before going directly to the dressing room. A minute later Krys Barch engaged Ivanans in a fight, but then had to leave the game as well after being hurt.
"Barchy and Trevor went down but there nothing you can do," said defenseman Stephane Robidas, who logged over 27 minutes of ice to lead the team. "As a player it's hard to see a teammate and a friend go down like that. I think it could've been a boarding call, but they didn't call it." It was another heartbreaker for a stung Stars team that lost its fifth straight. The injury-riddled Stars are just 4-11-2 since a 10-3 stretch that bridged January and February. "We hung around, but the five 'D' back there saw some fatigue set in," coach Dave Tippett said. "We have lots and lots of try from a lot of people, but we needed to bury one of those chances. One game at a time, scratch and claw for points, that's where we are right now. We have to win games to keep ourselves in the race." Turco helped preserve the scoreless tie with a terrific save on Kings forward Brad Richardson on a rebound in the slot with 12 minutes to go in regulation. Turco was then helped by the goal post 5 minutes later when Anze Kopitar ripped one off the pipe. With 1:53 to go, Kings winger Teddy Purcell had a glorious opportunity to end it when he let a sizzling wrist shot go from the slot that was labeled for the top-right corner. Turco, though, was able to fling his cat-like glove out to snuff the chance. During a scoreless first period, the Stars easily held the territorial edge, outshooting the Kings by a 12-3 margin, including a 10-1 advantage over the first 12 minutes of the game. But Quick was up to the task, especially when he stopped a four-shot flurry right in front that was capped by a pad save on rookie James Neal. "I think we were pretty dominant in the first," Robidas said. "We outshot them, then we sat back a little bit. It's just hard for us. We couldn't find a way to win." In the second period, Quick once again thwarted the Stars best opportunity when he stuck out his left pad to stuff Eriksson's shovel attempt from in tight just over 5 minutes in. The Kings managed to gain some momentum by outshooting Dallas by a 14-6 margin in the second period. As good as Quick was in the first, Turco matched him save for save in the second. He made several sprawling stops while not leaving juicy rebounds that the Kings could pounce on. Through the first two periods, the Stars were unable to convert on four power-play chances, while the Kings failed to capitalize on three of their man-advantage opportunities. Overall, Dallas was 0-for-5 while Los Angeles was 0-for-4. "We have to find ways to win, simple as that," defenseman Nicklas Grossman said. "We played a solid 65 minutes and Turks played great, it just wasn't enough. It's no time to feel sorry for ourselves. We have to keep pushing. We have to keep looking forward, keep building and get ourselves ready." The Stars will conclude their three-game homestand on Saturday when they host the Florida Panthers (7 p.m. FSN). The Panthers are also in a heated playoff chase in the Eastern Conference. They're just two points behind the eighth place Montreal Canadiens.
COMMERCIAL DESCRIPTION: Our twist on the German Märzen bier, we give you a Paso Märzen. Traditional imported malts offer subtle honey-like aromas with hints of Noble hop spice. The stars of this brew are the imported Pilsner and Vienna malts that offer malty sweetness that carries through to the end. German Hallertau hops add rich Noble hop character to balance this exceptional Marzen. This beer is brewed 100% in stainless steel and gets the name Oaktoberfest as an ode to our hometown, Paso Robles, Spanish for "Pass of The Oaks". True to Oktoberfest custom, this beer makes its debut in time for fall to celebrate the age-old German tradition. The Sampler, Brooklyn tap: Pours amber with a whitish head. Aroma is slight oak-aging, but mostly a lot of caramel and malt. Taste is slightly oaky at the top, but then gets to be more like a normal marzen with the sweet caramel tastes hitting it up. Good though. 2014/11/24 - Pours a clear yellow/orange gold colour with a foamy white head that quickly reduces to a film. Aroma is cereal, bready malts, honey and a hint of apple. Taste is cereal, biscuit, and muted honey. Mouthfeel is medium with low carbonation and a crisp, drying finish. An ok marzen, but the oak is indistinguishable. Pour is a clear tan with an average white head. Aroma is a little caramel malt with some bread. Flavor is a bready malt but kinda watery and flat after that. Finish is watery and cracker malt. The name is kinda deceiving. It's not an oak-aged marzen; the name has something to do with the Spanish name of the city where this is made. 12 oz. bottle from BevMo. Not sure what the oak in the name refers to here. Pretty typical lager nose of malt and yeast. Taste is mild roasted bitterness against a bready malt base. No fragrant hops here. There's a subtle honey-like sweetness with a dry finish over a light body. Nothing exceptional but smooth and drinkable. Hmm, holding a 99 for style, but 70-something overall. So, even if it is amazing, it is still meh. And, indeed. It is meh. Aroma is missing, flavor lacking, color bad. Alas, these are flaws on a great brew. Major, major flaws, in a decently made brew.
It's Russia, Stupid
By Barbara McQuade, The Daily Beast
04 December 17
The special counsel made something public he could’ve kept private. It’s full of hints and carries a message to Jared Kushner: Cooperate now.
The guilty plea by President Trump’s former national security adviser Michael Flynn on Friday showed once again that for special counsel Robert Mueller, the devil is in the documents. Flynn pleaded guilty to one count of making false statements to the Federal Bureau of Investigation. Filed along with his guilty plea were three documents: a criminal information containing the charge, a plea agreement, and a document called “Statement of the Offense.” Just as we saw in the guilty plea of former campaign advisor George Papadopoulos, the documents provide a number of interesting insights.
It's Russia, Stupid
One clear point that these documents reveal is that Mueller is eager to strip away extraneous issues and focus on Russia. The charge focuses on Flynn’s false statements regarding (1) sanctions against Russia for interfering with the election and (2) a request of Russia to block a United Nations resolution relating to Israeli settlements. The plea agreement agrees not to charge Flynn for undisclosed lobbying on behalf of the government of Turkey. It makes no mention of the reported kidnapping plot against Turkish cleric Fethullah Gulen, a potentially very serious violation of kidnapping or bribery statutes. This strategy demonstrates two goals for Mueller: keeping his eye on the Russia ball, and keeping the investigation moving quickly.
More to Come
The Statement of the Offense makes it clear that when Flynn spoke to Russia, he was not acting on his own as some rogue player. The Statement of Offense sets out a timeline indicating that Flynn’s conversations with Russian ambassador Sergey Kislyak were being discussed in real time with a “senior member of President-elect Trump’s Transition Team” and a “very senior member of the Presidential Transition Team.” The document notes that the “senior member” was with other transition officials at the Mar-A-Lago resort in Florida, where President-elect Trump was staying at the time. Some reports indicate that the “senior member” is Flynn’s former deputy K.T. McFarland and the “very senior member” is Trump’s son-in-law Jared Kushner. Regardless, Flynn knows who they are and is prepared to testify about them, according to the plea agreement. This disclosure sheds new light on the reports that Kushner sought a back channel for communication with Russia during the transition. Flynn likely can confirm or refute this report and explain why any back channel for communicating with Russia might have been sought. (Kushner denied it was a “secret back channel” and said communications were to be about Syria.) And unlike Papadopoulos, who has already pleaded guilty and agreed to cooperate with Mueller’s team, Flynn was a high-level member of Trump’s team who was involved in the campaign for a long period of time. Flynn cannot be dismissed as a low-level volunteer.
Flynn likely knows about any coordination between the campaign and Russia to interfere in the election, efforts to obtain information about Hillary Clinton, and any assistance in Russia’s cyber efforts to influence the election, such as the hacking and releasing of emails and the use of social media to influence voters. He likely will sit down with Mueller’s team for lengthy debriefing sessions if he has not done so already.
Cooperators Get Good Deals
Next, the plea agreement permits Flynn to plead guilty to a single count of making false statements, a relatively minor crime with a calculated sentencing guidelines range of zero to six months in prison. This document signals to other subjects of the investigation that they, too, might be able to get a good deal if they cooperate with Mueller. It might be too late for such a deal for Paul Manafort and Richard Gates, who were charged by the special counsel with a variety of fraud crimes in October, but not for others. The Statement of Offense is not a document that Mueller is required to file. Why, then, did he file it? In part, no doubt, he wants to lock Flynn into what he will testify to if necessary at any trial. But if locking in Flynn’s statement was Mueller’s goal, he could do that by having Flynn testify under oath and in private before the grand jury. So why make it public? The “senior member” and “very senior member” of the transition team mentioned in the documents know who they are. Including this language in a public document sends a message to them that if they want to cooperate, now is the time, and perhaps, they, too, can get a good deal.
Lying to the FBI Is a Big Deal
Mueller’s charges against Flynn and Papadopoulos make it clear that he takes lying to the FBI very seriously. Lying to the FBI is a significant crime because it makes it harder for investigators to uncover the truth. As a result, when FBI agents interview subjects, they show their badges to make sure that the person knows that they are in fact FBI agents. They tell the person that lying to the FBI is a crime. This occurs for two reasons: (1) to provide fair notice to the person that he should take this interview seriously and that lying brings significant consequences, and (2) to help prove at trial the essential element of the crime that the person was aware that lying to the FBI was illegal. This protocol was likely followed in this case. Flynn, like Papadopoulos, who has already pleaded guilty for lying to the FBI, is getting a pass for other crimes, but not for lying to the FBI. Mueller likely wants to hold accountable individuals who lie to the FBI and wants to deter lying by other subjects who are to be interviewed down the road.
Obstruction of Justice
The documents also raise the heat on the obstruction of justice investigation into President Trump. It now appears that when Trump allegedly asked then-FBI Director James Comey to let the investigation into Flynn go after Flynn was caught lying to the FBI, and later fired Comey when he did not, Trump was aware that the investigation could implicate not just Flynn, but also senior members of his transition team, and perhaps even himself. This information provides additional evidence that Trump may have acted “corruptly” when he fired Comey, the required motive under the obstruction of justice statute.
Left Unanswered
The documents raise some other questions. Why did Flynn lie to the FBI in the first place?
One theory is that his conduct may be a violation of a statute known as the Logan Act, which prohibits ordinary citizens from negotiating with foreign governments. This statute, though, is rather obscure and has never been enforced. It seems unlikely that Flynn even knew about the statute at the time he was interviewed. If Flynn was not concerned about prosecution under the Logan Act, was he concerned about the appearance of undermining U.S. foreign policy? Was he trying to protect other members of the transition team? Why did he talk to Russians in the first place before the inauguration? Was this the first time they had talked? Were these conversations somehow connected with an overall strategy by Russia not only to interfere with our election, but also to influence the conduct of their preferred candidate once he was in office? This is all part of the quest for the truth by Mueller and his team, and Flynn may have the answers.
1. Field of the Invention
The present invention relates to a vehicle-purpose lighting tool. More specifically, the present invention is directed to an improvement of vehicle-purpose lighting tools such as rear combination lamps.
2. Description of the Related Art
In vehicle-purpose lighting tools such as rear combination lamps and high-mounted stop lamps, light from the light sources is radiated outward through outer lenses (designed covers) so as to obtain the desired light emission. For instance, as indicated in FIG. 8, in a rear combination lamp in which LED lamps 102 are employed as light sources, the LED lamps 102 are installed inside an outer lens 101, and reflectors 103 are provided at peripheral portions of the LED lamps 102 (refer to, for instance, patent publication 1). In this structure, light from the LED lamps 102 travels either directly forward or via the reflectors 103, and is then radiated through the outer lens 101 to the external space.
Patent Publication 1. JP-A-2005-123092
In combination lamps such as those described above, in which LED lamps are employed as light sources, it is not desirable for the LED lamps to be visually recognizable from outside the lamp, except in cases where the LED lamps are deliberately exposed for design reasons. To this end, optical diffusion processing (for example, forming very fine grooves) is carried out on the front surfaces of the outer lenses, and the shapes of the reflectors are adjusted, so that fluctuations in luminance are reduced and the LED lamps are made less conspicuous. However, as long as the LED lamps are arranged inside the outer lenses, they remain positioned on the lines of sight of viewers. Accordingly, even when the above measures are taken, it is practically difficult to completely conceal the presence of the LED lamps. At the same time, improved designs for rear combination lamps are expected.
Dr. Daniel Lee Grotzinger was born in St. Marys, PA on July 7, 1948. He grew up with a deep interest in music and aviation. In his senior year he applied to both the Air Force Academy and for a Pitt university music scholarship. Neither came through. A semester was spent at St. Vincent college in Latrobe, PA studying for the Catholic priesthood before deciding to enlist in the USAF during the Viet-Nam war. He specialized in the radar systems for the B-58 and B-52. After four years of service he was discharged with spinal injuries. Several years of severe back pain with sciatica, with no relief from the usual medical methods, led to Dan trying chiropractic upon the advice of a church friend. In three visits the severe bilateral sciatic pain was resolved. Again, he applied to music school but found there was a long waiting list. Because of the tremendous help chiropractic had given him, he began to think there must be a lot of others who have gone through what he did and could use the same kind of help. After several prayer sessions he received a very clear witness that this should be his life calling. Physical therapy / exercise: For most neck pain, we recommend a nearly normal schedule from the onset. Physical therapy can help you return to full activity as soon as possible and prevent re-injury. Physical therapists will show proper lifting and walking techniques, and exercises to strengthen and stretch your neck, arms, and abdominal muscles. Massage, ultrasound, diathermy, heat, and traction may also be recommended for short periods. People may also benefit from yoga, chiropractic manipulation, and acupuncture. A 2010 questionnaire study of UK chiropractors indicated that only 45% of chiropractors disclosed to patients the serious risks associated with manipulation of the cervical spine, and that 46% believed there was a possibility the patient would refuse treatment if the risks were correctly explained.[206] In diagnosing the cause of neck pain, it is important to review the history of the symptoms. In reviewing the history, the doctor will note the location, intensity, duration, and radiation of the pain. Is the pain worsened or improved with turning or repositioning of the head? Any past injury to the neck and past treatments are noted. Aggravating and/or relieving positions or motions are also recorded. The neck is examined at rest and in motion. Tenderness is detected during palpation of the neck.
An examination of the nervous system is performed to determine whether or not nerve involvement is present. This guideline provides guidance on the assessment and management of major trauma, including resuscitation following major blood loss associated with trauma. For the purposes of this guideline, major trauma is defined as an injury or a combination of injuries that are life-threatening and could be life changing because it may result in long-term disability. This guideline covers both the pre-hospital and immediate hospital care of major trauma patients but does not include any management after definitive lifesaving intervention. It has been developed for health practitioners and professionals, patients and carers and commissioners of health services. This article gives a well-rounded picture about things that can cause neck and arm pain. However, a patient should consult with their own physician rather than doing a self-diagnosis. Some conditions, such as coronary artery disease (angina) or even lung tumors may mimic these conditions. It is best to have a skilled physician perform a thorough physical examination when the symptoms described are present. Cervical stenosis occurs when the spinal canal narrows and compresses the spinal cord and is most frequently caused by aging. The discs in the spine that separate and cushion vertebrae may dry out. As a result, the space between the vertebrae shrinks, and the discs lose their ability to act as shock absorbers. At the same time, the bones and ligaments that make up the spine become less pliable and thicken. These changes result in a narrowing of the spinal canal. In addition, the degenerative changes associated with cervical stenosis can affect the vertebrae by contributing to the growth of bone spurs that compress the nerve roots. Mild stenosis can be treated conservatively for extended periods of time as long as the symptoms are restricted to neck pain. Severe stenosis requires referral to a neurosurgeon. The best way to live with neck pain is to try to prevent it. The best things you can do to prevent neck pain are pay attention to your body, exercise, eat right, and maintain a healthy life style. In addition, do not sit at the computer for hours without getting up frequently to stretch the neck and back. Take the stress of the day out of your neck muscles and do your exercise routine. If you smoke, stop. Smoking is a predisposing factor for neck pain. If you are overweight, try to increase your activity level and eat healthier to get into shape. Check all that apply. Most people will not be able to check many of these! But the more you can check, the more worthwhile it is to ask your doctor if it’s possible that there’s something more serious going on than just neck pain. Most people who check off an item or two will turn out not to have an ominous health issue. But red flags are reasons to check… not reasons to worry. Neck pain results when the spine is stressed by injury, disease, wear and tear, or poor body mechanics. Acute neck pain is abrupt, intense pain that can radiate to the head, shoulders, arms, or hands. It typically subsides within days or weeks with rest, physical therapy and other self-care measures. You play an important role in the prevention, treatment and recovery process of neck pain. However, if chronic, pain will persist despite treatment and need further evaluation. Freedman Chiropractic Center, LLC was just recognized by The Home News Tribune in their 2017 Readers' Choice Awards as Best Chiropractic Office. 
Its owner and director, Dr. Ken Freedman, has been empowering patients to live healthier, more active lives since 1979. Dr. Freedman’s unique approach to chiropractic care balances clinical excellence, a long-standing commitment to whole body health and personalized recommendations and products to improve patient outcomes. Our comprehensive pain relief, injury rehabilitation and wellness services include chiropractic care, Reiki care, instructional classes, nutrition, purification, and ... Throughout its history chiropractic has been the subject of internal and external controversy and criticism.[22][224] According to Daniel D. Palmer, the founder of chiropractic, subluxation is the sole cause of disease and manipulation is the cure for all diseases of the human race.[4][42] A 2003 profession-wide survey[38] found "most chiropractors (whether 'straights' or 'mixers') still hold views of innate Intelligence and of the cause and cure of disease (not just back pain) consistent with those of the Palmers."[225] A critical evaluation stated "Chiropractic is rooted in mystical concepts. This led to an internal conflict within the chiropractic profession, which continues today."[4] Chiropractors, including D.D. Palmer, were jailed for practicing medicine without a license.[4] For most of its existence, chiropractic has battled with mainstream medicine, sustained by antiscientific and pseudoscientific ideas such as subluxation.[37] Collectively, systematic reviews have not demonstrated that spinal manipulation, the main treatment method employed by chiropractors, is effective for any medical condition, with the possible exception of treatment for back pain.[4] Chiropractic remains controversial, though to a lesser extent than in past years.[25] If you are seeking a drug and surgery-free alternative to alleviate your back or neck pain then you’ve come to the right place. When searching for a “chiropractor near me” online, you will be happy to know that your search is over. Our highly trained and certified chiropractors have offered safe, natural, and effective chiropractic care to the people of Orlando and the surrounding areas for many years. After selling his very successful practice in Boise, Idaho, Dr. Tim Dudley moved to Whitefish in 2015 to pursue his professional goals of starting a consulting business to teach other Chiropractors how to be effective and successful. He also wanted to practice what he was preaching and set out to create his dream practice from the ground up. While Whitefish has many chiropractors, there was room for Dr. Dudley’s unique and highly specialized skill set here, and it also put him and his wife much closer to where they were both raised, and to their families. Once Dr. Dudley found his ideal office space in downtown Whitefish, he opened his doors and has been helping and healing one spine at a time. Another part you have to play? Motivating yourself to continue your exercises at home. It’s important to remain active, and keep moving, so your adjustments can be helpful for as long as possible. Plus, the exercises your chiropractor or therapist give you can actually help you correct some of the issues causing your pain. If you don’t do them, you’re really just slowing down your own healing process. Are you looking for a Philadelphia chiropractor? Are you suffering from daily pain or have been injured in an auto accident, in sports, in your garden or at work? Dr.
Paul Rubin and Philadelphia Chiropractic can help you finally put a stop to aggravated pain, so you can sleep better, feel younger and be able to participate in the activities you enjoy. Philadelphia Chiropractic is a chiropractic clinic located in downtown Philadelphia in Center City. We look forward to helping you live a more active and healthy lifestyle with gentle, personalized rehabilitation and effective, lasting pain relief. ... Upon graduation, there may be a requirement to pass national, state, or provincial board examinations before being licensed to practice in a particular jurisdiction.[171][172] Depending on the location, continuing education may be required to renew these licenses.[173][174] Specialty training is available through part-time postgraduate education programs such as chiropractic orthopedics and sports chiropractic, and through full-time residency programs such as radiology or orthopedics.[175] Sharp neck pain is not in itself a red flag. Believe it or not there is no common worrisome cause of neck pain that is indicated by a sharp quality. In fact, oddly, sharp pains are actually a bit reassuring, despite how they feel. In isolation — with no other obvious problem — they usually indicate that you just have a temporary, minor source of irritation in the cervical spine. Serious causes of neck pain like infections, tumours, and spinal cord problems tend to grind you down with throbbing pains, not “stab” you. A large number of chiropractors fear that if they do not separate themselves from the traditional vitalistic concept of innate intelligence, chiropractic will continue to be seen as a fringe profession.[22] A variant of chiropractic called naprapathy originated in Chicago in the early twentieth century.[35][36] It holds that manual manipulation of soft tissue can reduce "interference" in the body and thus improve health.[36]
Your neck and shoulders contain muscles, bones, nerves, arteries, and veins, as well as many ligaments and other supporting structures. Many conditions can cause pain in the neck and shoulder area. In fact, neck pain is the third most common type of pain according to the American Pain Foundation. It is estimated that 70% of people will experience neck pain at some point in their lives. Spinal stenosis is narrowing of the spinal canal that causes compression of the spinal cord (cervical myelopathy). The narrowing is caused by disc bulging, bony spurs, and thickening of spinal ligaments. The squeezing of the spinal cord may not cause neck pain in all cases but is associated with leg numbness, weakness, and loss of bladder or rectum control. Chiropractors, like other primary care providers, sometimes employ diagnostic imaging techniques such as X-rays and CT scans that rely on ionizing radiation.[156] Although there is no clear evidence for the practice, some chiropractors may still X-ray a patient several times a year.[6] Practice guidelines aim to reduce unnecessary radiation exposure,[156] which increases cancer risk in proportion to the amount of radiation received.[157] Research suggests that radiology instruction given at chiropractic schools worldwide seems to be evidence-based.[48] However, there seems to be a disparity between some schools and the available evidence regarding radiography for patients with acute low back pain without any indication of a serious disease, which may contribute to chiropractic overuse of radiography for low back pain.[48] If you are lucky enough to have family and friends who regularly visit a chiropractor, ask them for help finding a “chiropractor near me.” A license to practice shows that the doctor is qualified, but a person who has worked with them can tell you about their bedside manner and demeanor. It helps to keep in mind what kind of doctor you generally prefer. Whether you like a warm, caring doctor or a capable but business-like doctor, a recommendation from a family member or friend may be able to help. Chiropractic doctors diagnose and treat patients whose health problems are associated with the body’s muscular, nervous and skeletal systems. Chiropractors believe that interference with these systems can impair normal functioning, cause pain and lower resistance to disease. They are most well known for the hands-on technique they practice to adjust imbalances in the patient’s skeletal system, particularly the spine. What's to know about cervical spondylosis? Cervical spondylosis is a type of osteoarthritis. It is very common, and it happens as people get older, and the vertebrae and discs in the neck deteriorate. Minor symptoms include neck pain and stiffness, but numbness and more severe effects are possible. Symptoms often resolve alone, but treatment is available. Employment of chiropractors is projected to grow 12 percent from 2016 to 2026, faster than the average for all occupations. People across all age groups are increasingly becoming interested in integrative or complementary healthcare as a way to treat pain and improve overall wellness. Chiropractic care is appealing to patients because chiropractors use nonsurgical methods of treatment and do not prescribe drugs.
I lost my insurance last month because I switched jobs and my new employer does not offer insurance. Can I just get a small look at my shoulder joints that I sprained 3 years ago? For cheap please. I don’t make much. It is a 3rd degree joint sprain between my left shoulder and clavicle. It has been really painful lately and I’m afraid it could be getting worse. An analysis of clinical and cost utilization data from the years 2003 to 2005 by an integrative medicine independent physician association (IPA), based on 70,274 member-months over a 7-year period, found that chiropractic services decreased patient costs, when compared with conventional medicine (visits to a medical doctor primary care provider) IPA performance for the same health maintenance organization product in the same geography and time frame, by 60% for in-hospital admissions, 59% for hospital days, 62% for outpatient surgeries and procedures, and 85% for pharmaceutical costs.[163] Rarely. Nearly all neck stiffness is minor, diffuse musculoskeletal pain: several mildly irritated structures adding up to uncomfortable, reluctant movement as opposed to physically limited movement. The most common scary neck stiffness is the “nuchal rigidity” of meningitis — which makes it very difficult and uncomfortable to tilt the head forward — but that will be accompanied by other serious warning signs, of course. Like feeling gross otherwise (flu-like malaise). How long does it take to build muscle with exercise? Performing particular exercises and eating the right foods can help to build muscle over time. In this article, we look in detail at how muscle builds up and how long it will take. We also give you some ideas about the types of exercise and diet that can achieve this, as well as some tips on how to exercise safely. Common misconceptions about chiropractic seem to be that chiropractors are only good for treating back pain, and that PT or massage is a substitute for an adjustment. While PT and massage are beneficial, they are not the same thing. Chiropractic adjustments are very specific stimulus and movement to very specific parts of the nervous system, which runs through the spinal column. Adjusting these areas allows for better communication between the nerves which control every muscle, joint, and organ of the body. People are often surprised to find that chiropractic adjustments not only make their backs feel better, but effectively treat other issues with other parts of their bodies. Having a better functioning nervous system allows your body to heal itself, maintain a higher immune response, cope with stress, and function without pain. Some benefits experienced by patients of The Chiropractor Whitefish are relief from sciatic pain, relief of neck pain, better sleep, more mood stability, relief of infant colic, eliminating headaches, and on and on.
Come see us and read through our testimonials written by hundreds of ecstatic patients. A D.C. program includes classwork in anatomy, physiology, biology, and similar subjects. Chiropractic students also get supervised clinical experience in which they train in spinal assessment, adjustment techniques, and making diagnoses. D.C. programs also may include classwork in business management and in billing and finance. Most D.C. programs offer a dual-degree option, in which students may earn either a bachelor’s or a master’s degree in another field while completing their D.C. The patients were put into two groups. One group received traditional medical care for back pain along with chiropractic care; the other group only received traditional care. While traditional care can include medication, the chiropractic care included spinal manipulation adjustments along with manual therapies such as ice, heat, cryotherapy, and rehabilitative exercises. Our North Wales / Lansdale / Blue Bell PA Office is here to enhance your quality of life through chiropractic care, and we believe that chiropractic and a proper exercise program can improve your overall health. Chiropractors don't just make the pain disappear. Our team will place you on a plan to help your pain, but also help find the source of the problem, and set goals on how to get better. Dr. Allen Conrad has been a chiropractor since 2001, serving the North Wales and Lansdale PA area. His office specializes in spinal decompression therapy, massage therapy, and chiropractic care for many types of injuries. Dr. Conrad se ... Chiropractors are not normally licensed to write medical prescriptions or perform major surgery in the United States,[62] (although New Mexico has become the first US state to allow "advanced practice" trained chiropractors to prescribe certain medications.[63][64]).
In the US, their scope of practice varies by state, based on inconsistent views of chiropractic care: some states, such as Iowa, broadly allow treatment of "human ailments"; some, such as Delaware, use vague concepts such as "transition of nerve energy" to define scope of practice; others, such as New Jersey, specify a severely narrowed scope.[65] US states also differ over whether chiropractors may conduct laboratory tests or diagnostic procedures, dispense dietary supplements, or use other therapies such as homeopathy and acupuncture; in Oregon they can become certified to perform minor surgery and to deliver children via natural childbirth.[62] A 2003 survey of North American chiropractors found that a slight majority favored allowing them to write prescriptions for over-the-counter drugs.[38] A 2010 survey found that 72% of Swiss chiropractors considered their ability to prescribe nonprescription medication as an advantage for chiropractic treatment.[66]
New Hampshire Primary Source covers breaking and behind-the-scenes news and analysis on the New Hampshire presidential primary and all things political in the Granite State. John DiStaso is the most experienced political writer in New Hampshire and has been writing a weekly column since 1982.

O'Malley's tour will follow the Democratic presidential debate scheduled for Dec. 19 at Saint Anselm College. "He will crisscross the Granite State talking about his bold vision to rebuild the American dream and his record of delivering on the issues progressives care about. Eight years ago, Americans voted for change," O'Malley's campaign said. "Now isn't the time to turn back the clock on progress." The former Maryland governor will hold what his campaign is calling "New Leadership Town Halls" on Dec. 20 at 11:30 a.m. at the Hopkinton Town Hall; at 5:15 p.m. at the American Legion Sweeney Post No. 2, 251 Maple Street, Manchester; and at 6:45 p.m. at the home of Jen Wilhelm, 181 Drew Road, Madbury. O'Malley on Dec. 21 will attend a "Future Leaders Forum" at Concord High School at 9:45 a.m. and hold employees-only town halls at 1:30 p.m. at C & S Wholesale Grocers, 10 Optical Ave., Keene, and at 4:15 p.m. at Dyn, 150 Dow St., Manchester. At 5:45 p.m., he will hold a town hall at the Searles Chapel and School, 3 Chapel Road, Windham.

(Earlier updates and the full Dec. 10 New Hampshire Primary Source column follow.)

(Friday, Dec. 11, update:) NEW ENDORSEMENTS FOR MARCO, JEB. The New Hampshire national convention delegate slates submitted by the campaigns of Republican presidential candidates Jeb Bush and Marco Rubio reveal new endorsements for the two Floridians. Bush's slate has 12 new endorsements, including Franklin Pierce University president and former White House Chief of Staff Andy Card, who served as chief of staff to former President George W. Bush. Also new on the Bush delegate list are former state Rep. Kevin K. Waterhouse of Windham; former U.S. Department of Labor Appeals Judge Wayne C. Beyer of Conway; former state Board of Education vice chair Debra Hamel of Marlborough; Rockingham County Republican Committee vice chair Michael J. Demartino of Exeter; German Ortiz of Manchester; state Reps. Lynn and Russell Ober of Hudson; former Dover City Councilor Gina M. Cruikshank; Alfonso J. Webb of Hampton; Richard M. Riley of Portsmouth; and Cody G. Aubin of Manchester. Bush campaign co-chair and state Senate President Chuck Morse said Bush has a strong grassroots organization and "has campaigned the New Hampshire way, with countless town halls, diner stops and house parties--and his delegate slate reflects the strong organization necessary to win here in February." Rubio's new endorsements, as disclosed on the delegate list, are former New Hampshire House Majority Whip Pam Price; Hampstead Republican Chairman Tyler Clark; and Mike DiCroce, Rockingham County GOP finance chair and 2014 candidate for Rockingham County attorney.
The Ayotte email comes a day after Hassan's campaign emailed supporters with a petition accusing Ayotte of voting last week "to leave open a loophole allowing known or suspected terrorists to buy guns and explosives." Hassan's petition seeks "common sense measures to make our communities safer."

Ayotte's campaign writes that two former Obama administration officials told the Senate Armed Services Committee on Tuesday "that the threat from ISIS is getting worse, not better." "This deteriorating situation comes as Iran continues to thumb its nose at the Obama administration's nuclear deal," the email says. "The Iranian regime has now conducted its second ballistic missile launch since the White House agreed to the misguided deal with Iran." The campaign then says Hassan has shown "blind support for President Obama's foreign policy and a troubling lack of knowledge about the threats facing our country." "Tell President Obama and Maggie Hassan a nuclear-armed Iran is unacceptable," the email says.

Hassan's email petition says that Ayotte not only voted for the terrorist gun "loophole," but also voted to block a measure to strengthen the background check system. "Add your name to tell Kelly Ayotte and the NRA that New Hampshire, and states from coast to coast, deserves real action on gun violence for safer, stronger communities," the Hassan campaign email says.

(The full Dec. 10 New Hampshire Primary Source follows.)

(Thursday, Dec. 10) TRUMP SHOWS NO SIGN OF WEAKNESS. There's a lot to look at in the WMUR/CNN polls that came out this week, but perhaps most striking is that Donald Trump's lead among the Republican presidential candidates is all-encompassing. While establishment Republicans are wishing, hoping and waiting for him to fall, there is no sign of that, yet. Yes, there are two months to go and there have been dramatic collapses in primaries past. But at this point, the bottom would have to fall out for Trump, who is in New Hampshire Thursday night, in order for him to spiral out of first place.

The New Hampshire Primary Poll released on Tuesday showed that Trump leads among women 27 percent to 15 percent over Marco Rubio; and, less surprisingly, among men, 35 percent to 14 percent. Trump not only has a big lead among registered Republicans, 29 percent to 13 percent for Rubio, but also among registered undeclared voters, 34 percent to 16 percent. He leads among self-described conservatives 34 percent to 15 percent over Rubio; among moderates 27 percent to 15 percent; and even among Republican voters who view themselves as liberals, 43 percent to 19 percent for Jeb Bush. Pro-Tea Party Republican voters support Trump over Rubio, 37 percent to 13 percent. Opponents of the Tea Party give him a 20 percent to 17 percent edge. He cuts across all age groups, leading among 18 to 34-year-olds, 37 percent to 11 percent over Bush; among 35 to 49-year-olds, 31 percent to 13 percent over Rubio; among 50 to 64-year-olds, 34 percent to 17 percent over Rubio; and among those 65 years old and older, 26 percent to 14 percent over Rubio. And he leads in all geographic areas of the state. Only 18 percent of those polled have firmly made up their minds on who they will vote for on Feb. 9, but it's an uphill climb at this point for anyone to grab the top spot from him, and it's getting steeper every day.

Funny thing, though: although Sanders leads Clinton, 50 percent to 40 percent, Clinton is viewed by 59 percent – and Sanders by only 28 percent – as the candidate who will actually win the primary.
It's odd, especially when one considers that Clinton has not even convinced all of her supporters that she will be the winner. A total of 84 percent of Clinton supporters believe she will win the primary, but 8 percent of her backers believe Sanders will win. Sanders' problem – and one that he will have to confront in the coming months – is that less than half of his own supporters, 48 percent, believe that he will win the primary, while 39 percent of Sanders' supporters believe Clinton will win. Not to mention that 70 percent of all voters believe Clinton has the best chance of winning the general election in November 2016, including 53 percent of Sanders' supporters. Sanders was named by only 17 percent, and 11 percent said they did not know. Only 32 percent of Sanders' supporters believe that he has the best chance among the three Democratic candidates of winning the general election. How to move from an extremely popular leader of a movement to a candidate with strong electability credentials has been Sanders' challenge for months now, and it continues to be.

MATCH-UPS. At odds with all of that is the latest poll from Public Policy Polling. Released on Monday, its poll showed that while Clinton and Sanders are in a virtual tie in New Hampshire (with Clinton ahead, 44 to 42 percent), it is Sanders who matches up better against the Republican candidates. In all cases, PPP pointed out, Sanders does slightly better than Clinton against the Republicans. So go figure.

SENATE, GOVERNOR. PPP, which is known as a Democratic-leaning pollster, also surveyed the U.S. Senate race, showing Kelly Ayotte and Maggie Hassan tied at 42 percent and neither candidate particularly popular. Ayotte was viewed favorably by 40 percent and unfavorably by 42 percent; Hassan was viewed favorably by 43 percent and unfavorably by 40 percent. While Ayotte's numbers were unchanged from the previous PPP poll in October, Hassan's favorable-unfavorable rating dropped from 50 percent-39 percent, which PPP attributed to Hassan's call for the federal government to stop admitting Syrian refugees into the country.

THAT SENATE RACE. Despite the increasing intensity of the first-in-the-nation primary campaigns, the U.S. Senate race between Hassan and Ayotte is already becoming so intense that one might think the election was only two months away, rather than 11 months. The latest battle focuses on competing social media ads. The Democratic Senatorial Campaign Committee launched a new digital ad campaign accusing Ayotte of "rewriting her record." It charges that Ayotte "tried to mislead New Hampshire about her position on closing the terrorist gun loophole." Ayotte voted for what the Democrats call a "watered-down" GOP alternative to a Democratic bill that would have banned the sale of firearms to those on the nation's terrorist watch list. Ayotte's camp said the bill she supported was a legitimate option that would have better protected 2nd Amendment rights of those who were placed on the watch list in error. The digital ads link to an "Ayotte Fact Check" site that alleges she has also misled on college debt, women's health, the environment and issues of interest to working families. Democrats have also pointed out that Ayotte's Facebook page has been inundated by critics of her vote for the GOP, rather than the Democratic, terrorist watch list bill. Ayotte's camp returned fire on Wednesday with its own paid digital "Get the Facts" ad accusing "Washington Democrats" of misleading voters on her record.
Her ad calls the Democratic terrorist watch list bill a "gimmick." Ayotte's camp also released a web video showing Hassan recently answering a question about ISIS. The campaign says the video shows that Hassan refused to say whether the U.S. is at war with "radical Islam" and shows Hassan mistakenly saying that ISIS declared war in the recent "bombing of Paris."

WITH REGRETS. Gov. Maggie Hassan's spokesman said a scheduling oversight was the reason she didn't make it to the New Hampshire Air Guard's annual holiday celebration last Sunday. LTC Gregory Heilshorn said about 1,200 guard members, family members and friends attended the event at the former Pease Air Force Base. He said the Guard routinely invites the governor and members of the congressional delegation, but this year only U.S. Sen. Kelly Ayotte attended. A spokesman for Gov. Maggie Hassan told us, "Gov. Hassan attends as many National Guard events as possible. As a result of an inadvertent error, the event did not make it onto the governor's schedule, and she certainly regrets missing it." We understand that the governor's office notified the House and Senate chiefs of staff that the governor would be out of state that day through Tuesday evening.

(Thursday afternoon, Dec. 10, update:) Word of Hassan's absence from the event drew criticism from Republican former U.S. Sen. Scott Brown, a retired colonel in the Army National Guard. "Gov. Hassan should be honest with Granite Staters and admit if she chose to campaign and fund-raise out of state instead of attending the New Hampshire Air Guard's annual holiday event. As commander-in-chief of the New Hampshire National Guard, the governor has a duty to put those men and women in uniform ahead of her personal political ambitions. "Blaming an 'inadvertent error' for not attending the Air Guard's most important annual event is no excuse and is clearly an attempt to do damage control after being caught abandoning her duties as governor. New Hampshire deserves a full-time governor instead of a distracted politician more concerned with getting to Washington than serving her constituents." Hassan campaign spokesman Aaron Jacobs responded: "Kelly Ayotte, who has said that she would still support Donald Trump for president, should stop trotting out one of Granite Staters' most disliked politicians to launch politically motivated attacks on her behalf. "Scott Brown, who only moved to New Hampshire to run for office, has no credibility with the people of New Hampshire and should keep his focus on hosting barbecues for the likes of Trump and the rest of the far-right Republican candidates." A Hassan supporter noted that the governor on Thursday attended the Operation Santa Claus event with members of the Guard and state employees.

SANTORUM, FIORINA TO ATTEND NHGOP TOWN HALL. Two more Republican presidential candidates have signed on to attend the New Hampshire Republican Party's "First-in-the-Nation Presidential Town Hall" on Jan. 22-23. We've learned former U.S. Sen. Rick Santorum has agreed to attend, and the party confirmed that also on board is former business executive Carly Fiorina. They are the second and third candidates to confirm appearances, following U.S. Sen. Rand Paul, who announced on Monday that he will attend. "New Hampshire voters are some of the most informed and passionate, and they are very concerned about the direction Barack Obama is taking our nation," Santorum said.
"We are a nation at war with radical Islam, our economy has flat-lined, and our communities are being torn apart from within. I am looking forward to addressing these and all other issues of concern to New Hampshire voters at the town hall." Fiorina said, "Whenever I visit New Hampshire, I'm reminded of how Granite State voters understand the great importance of citizen government and the first-in-the-nation primary. I look forward to speaking at the First-in-the-Nation Town Hall in January about my plan to take our country back and out of the hands of the professional political class." NHGOP Chairwoman Jennifer Horn said the party is pleased with the growing roster of speakers.

ENDORSING JEB. New Hampshire Primary Source has learned the latest state GOP activists to endorse Jeb Bush are Merrimack County Commissioner Bronwyn Asplund, former state Rep. Russell Day of Goffstown and former state Rep. and long-time Cheshire County Republican activist John Byrnes. "As our country faces increased danger both at home and abroad, Gov. Bush's proven leadership skills are what we need in the Oval Office," Asplund said. "Gov. Bush has a knowledge and understanding of the issues that is unmatched in the Republican field." Bush's campaign on Thursday is launching a digital ad focusing on his call for the U.S. to lead a coalition "to take out ISIS with overwhelming force" made in a speech at The Citadel in South Carolina on Nov. 18.

THE KOCHS IN NEW HAMPSHIRE. The progressive research group American Bridge has put together a 100-page report on the influence of the Koch brothers on New Hampshire politics, and the conclusion is, in a nutshell, that there is heavy influence. We're not taking sides in this fight, certainly, but we will say that the group has gone all out in promoting its belief that the Kochs have viewed New Hampshire as a key state in advancing their agenda.

LOCAL EFFORT FOR DEBT-FREE COLLEGE. State Rep. Marjorie Porter, D-Hillsborough, joined a conference call with lawmakers and officials from nine other states this week to talk about their local efforts to promote debt-free college tuition. The call was hosted by the Progressive Change Campaign Committee, which has long been a fan of U.S. Sen. Elizabeth Warren and has long been pushing her agenda – including debt-free tuition. Porter told us that she is gathering cosponsors and supporters for a resolution that would put the New Hampshire Legislature on record supporting efforts to enact legislation that would ensure that all students have access to debt-free higher education, increase state funding to higher education and increase state aid to students. Porter said it is too late to file the resolution for 2016 and so she plans to introduce it in 2017.

REACTING TO THE DONALD. Kelly Ayotte and Maggie Hassan this week weighed in on Donald Trump's call for a temporary halt to allowing Muslims into the U.S. Ayotte focused on the issue, saying: "I do not support religious-based tests for our immigration system and such a test would be inconsistent with the First Amendment to the Constitution. There should be fact-based risk assessments for entry into our country, which is why I've called for strengthening our refugee screening process and cosponsored legislation to strengthen the Visa Waiver Program to prohibit people who have recently traveled to countries like Iraq and Syria from traveling to the U.S.
under that program." Hassan focused on the issue and the candidate, saying: "Donald Trump's comments are dangerous, disgraceful, and completely at odds with the American values that we hold dear. Trump's hateful comments should be condemned by all, regardless of political party."

WEBSITE FUN. As we first reported on Twitter on Wednesday, someone who likes Donald Trump is having online fun with certain Republicans. As of Wednesday night, visitors to KellyAyotte.com, NHGOP.com and JebBush.com were mysteriously routed to DonaldJTrump.com, his official website.

BUSY WEEK. As one might expect with two months to go until the primary, the campaign trail in New Hampshire continues to be busy. Donald Trump on Thursday at 7 p.m. will speak to the New England Police Benevolent Association's executive meeting at the Sheraton Portsmouth Harborside Hotel. U.S. Sen. Lindsey Graham of South Carolina will speak Thursday at noon to the Portsmouth Rotary Club at the Portsmouth Country Club in Greenland. He will then hold a town hall at the Riverwoods Retirement Community in Exeter at 2 p.m., followed by a discussion with business leaders at the Sheehan Phinney Bass and Green law firm in Manchester at 4 p.m. Graham on Friday will hold a town hall meeting at Concord High School at 9:45 a.m., followed by a Chamber of Commerce roundtable event at the office of the Concord Chamber of Commerce. Ohio Gov. John Kasich will be in the state on Thursday and Friday. He will hold a town hall meeting at the Riverwoods Retirement Community at 12:30 p.m. on Thursday. On Friday he will hold town hall meetings at Gilchrest Metal Fabricating in Hudson at noon and at the Belknap Mills in Laconia at 5:30 p.m. And New Jersey Gov. Chris Christie arrives in the state on Friday evening for a 6 p.m. town hall meeting at the Inn on Main in Wolfeboro, followed on Saturday at 10 a.m. by a town hall at the Weare Middle School. At 12:30 p.m., he will visit the D.W. Diner in Merrimack with House Majority Leader Dick Hinch, who endorsed Christie earlier this week.

CAROL LINKS GUINTA TO TRUMP. Former U.S. Rep. Carol Shea-Porter earlier this week called on U.S. Rep. Frank Guinta to return a $1,000 contribution Guinta received from Trump. We verified the contribution through the Center for Responsive Politics' OpenSecrets.org. Shea-Porter said, "If Frank Guinta is at all concerned about keeping Americans safe, he should immediately return Donald Trump's contribution and condemn Trump's remarks" calling for a halt to allowing Muslims into the country. A Guinta spokesman did not respond to our request for comment.

CLOSE-UP. This week on "CloseUP," WMUR political director Josh McElveen will interview Republican presidential candidates Chris Christie and Rand Paul. He will also discuss efforts to have the federal government declare a state of emergency in New Hampshire to deal with the heroin crisis with Portsmouth City Councilor Stefany Shaheen, businesswoman Renee Plummer and Kriss Blevins.

NEW HAMPSHIRE PRIMARY VAULT. Check out WMUR political reporter Adam Sexton's series of "look backs" at past New Hampshire primaries on his New Hampshire Primary Vault page on WMUR.com. This week's featured spot focuses on the 1972 candidacy of Sam Yorty, who was then the Democratic mayor of Los Angeles.
This tutorial shows the steps to install an Ubuntu 15.10 (Wily Werewolf) server with Nginx, PHP, MariaDB, Postfix, pure-ftpd, BIND, Dovecot and ISPConfig 3. ISPConfig 3 is a web hosting control panel that allows you to configure the installed services through a web browser. This setup provides a full hosting server with web, email (incl. spam and antivirus filter), database, FTP and DNS services.

1. Preliminary Note

In this tutorial, I will use the hostname server1.example.com with the IP address 192.168.1.100 and the gateway 192.168.1.1 for the network configuration. These settings might differ for you, so you have to replace them where appropriate. Before proceeding further, you need to have a basic minimal installation of Ubuntu 15.10 in place, as explained in the minimal server tutorial. The steps in this tutorial have to be executed as root user, so I will not prepend "sudo" in front of the commands. Either log in as root user to your server before you proceed or run:

sudo su

to become root when you are logged in as a different user on the shell. The commands to edit files will use the editor "nano"; you can replace it with an editor of your choice. Nano is an easy-to-use file editor for the shell. If you like to use nano and haven't installed it yet, run:

apt-get install nano

2. Update Your Linux Installation

Edit /etc/apt/sources.list. Comment out or remove the installation CD from the file and make sure that the universe and multiverse repositories are enabled. It should look like this:

# See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to
# newer versions of the distribution.
deb http://de.archive.ubuntu.com/ubuntu/ wily main restricted
deb-src http://de.archive.ubuntu.com/ubuntu/ wily main restricted

## Major bug fix updates produced after the final release of the
## distribution.
deb http://de.archive.ubuntu.com/ubuntu/ wily-updates main restricted
deb-src http://de.archive.ubuntu.com/ubuntu/ wily-updates main restricted

## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu
## team, and may not be under a free licence. Please satisfy yourself as to
## your rights to use the software. Also, please note that software in
## multiverse WILL NOT receive any review or updates from the Ubuntu
## security team.
deb http://de.archive.ubuntu.com/ubuntu/ wily multiverse
deb-src http://de.archive.ubuntu.com/ubuntu/ wily multiverse
deb http://de.archive.ubuntu.com/ubuntu/ wily-updates multiverse
deb-src http://de.archive.ubuntu.com/ubuntu/ wily-updates multiverse

## N.B. software from this repository may not have been tested as
## extensively as that contained in the main release, although it includes
## newer versions of some applications which may provide useful features.
## Also, please note that software in backports WILL NOT receive any review
## or updates from the Ubuntu security team.
deb http://de.archive.ubuntu.com/ubuntu/ wily-backports main restricted universe multiverse
deb-src http://de.archive.ubuntu.com/ubuntu/ wily-backports main restricted universe multiverse

## Uncomment the following two lines to add software from Canonical's
## 'partner' repository.
## This software is not part of Ubuntu, but is offered by Canonical and the
## respective vendors as a service to Ubuntu users.
# deb http://archive.canonical.com/ubuntu wily partner
# deb-src http://archive.canonical.com/ubuntu wily partner

Then run:

apt-get update

to update the apt package database, and then:

apt-get upgrade

to install the latest updates (if there are any).
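If you are unsure whether the upgrade pulled in a new kernel, Ubuntu drops a marker file whenever an installed package requires a restart, so a quick optional check is:

[ -f /var/run/reboot-required ] && cat /var/run/reboot-required    # prints "*** System restart required ***" if a reboot is pending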
If you see that a new kernel gets installed as part of the updates, you should reboot the system afterward:

reboot

3. Change the Default Shell

/bin/sh is a symlink to /bin/dash; however, we need /bin/bash, not /bin/dash. Therefore we do this:

dpkg-reconfigure dash

Use dash as the default system shell (/bin/sh)? <-- No

If you don't do this, the ISPConfig installation will fail.

4. Disable AppArmor

AppArmor is a security extension (similar to SELinux) that should provide extended security. It is not installed by default from Ubuntu 13.10 onwards, so we will cross-check whether it is installed. In my opinion you don't need it to configure a secure system, and it usually causes more problems than advantages (think of it after you have done a week of troubleshooting because some service wasn't working as expected, and then you find out that everything was ok, only AppArmor was causing the problem). Therefore, I disable it (this is a must if you want to install ISPConfig later on). If AppArmor is present on your system, it can be stopped and removed like this:

service apparmor stop
update-rc.d -f apparmor remove
apt-get remove apparmor apparmor-utils

5. Install MariaDB

MariaDB is a fork of the MySQL database server, developed by the original MySQL developer Monty Widenius. According to tests found on the internet, MariaDB is faster than MySQL, and its development proceeds at a faster pace; therefore, most Linux distributions have replaced MySQL with MariaDB as the default "MySQL-like" database server. Install MariaDB like this:

apt-get install mariadb-client mariadb-server

In case you prefer MySQL over MariaDB, replace "mariadb-client mariadb-server" in the above command with "mysql-client mysql-server". We want MariaDB/MySQL to listen on all interfaces, not just localhost. Therefore we edit /etc/mysql/mariadb.conf.d/mysqld.cnf (for MariaDB) or /etc/mysql/my.cnf (for MySQL) and comment out the line bind-address = 127.0.0.1:

nano /etc/mysql/mariadb.conf.d/mysqld.cnf

[...]
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address = 127.0.0.1
[...]

Then we restart MariaDB:

service mysql restart

The systemd service name for MariaDB and MySQL is "mysql", so the restart command is the same for both database servers.

The default shell for Linux systems has been /bin/sh for many years. Ubuntu decided to switch to /bin/dash, but dash is not 100% compatible with sh, so shell scripts, especially some configure scripts used to compile software, fail with dash. For ISPConfig itself, the shell does not matter at all, but when you start to compile software like jailkit, as we will do in this tutorial, this can fail with dash. Using /bin/bash as the default shell has no negative effects.

This tutorial uses the latest Nginx version that is available from Ubuntu, as it is meant as a stable production server system and not as a testbed for the latest dev versions of software. If you want to install a newer third-party package, you can of course do that. Just ensure that your custom-compiled Nginx uses the exact same compile options as the one from Ubuntu (e.g., the same folders and the same user and group www-data).
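To see which options the installed Nginx package was built with before compiling your own, you can print them from the binary itself (note that Nginx writes this output to stderr):

nginx -V    # shows the Nginx version plus the configure arguments of the installed build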
'Fortnite' may be a virtual game, but it's having real-life, dangerous effects - spking
https://www.bostonglobe.com/metro/2019/03/31/unexplained-weight-loss-children-boston-nutritionist-makes-her-diagnosis-fortnite/eNMmGkK814IOsCwDDk2ZPN/story.html

====== bobbygoodlatte
Clickbait may be a virtual game for publishers, but it's having real-life, dangerous effects

------ dr1337
It's funny how Fortnite, which is owned by Epic, which is owned by Tencent, is turning out to be the 21st century's new opium den.
How much will the $5 billion tax hike cost your family?

Use our tax hike calculator to find out how much the permanent 32% income tax rate hike will cost you.

Illinois lawmakers voted to permanently increase the personal income tax rate to 4.95 percent from the current 3.75 percent rate.
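As a rough worked example (assuming a household's entire income is taxed at the personal rate): the increase is 4.95% − 3.75% = 1.2 percentage points, so a family with $60,000 of taxable income would pay about 1.2% × $60,000 = $720 more per year, and the rate itself rises by 1.20/3.75 ≈ 32%, which is where the "32% income tax rate hike" figure comes from.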
Cytotoxic effects of sodium phenylbutyrate on human neuroblastoma cell lines. Sodium phenylbutyrate (NaPB) is used in urea cycle disorders. We screened 6 neuroblastoma cell lines for in vitro potency of NaPB as an antiproliferative agent, evaluated multiple dosing schedules, and assessed its activity in combination with clinically active agents for neuroblastoma. We determined that NaPB achieves a 30-80% growth inhibition at 5 mM. Repeated dosing and prolonged drug exposure enhanced the cytotoxic effect. NaPB had additive cytotoxic effects when administered with vincristine; however, NaPB did not affect the activity of etoposide, adriamycin, 4-hydroxycyclophosphamide or cisplatinum. These results suggest that NaPB is an active agent against neuroblastoma and could be combined with vincristine in novel chemotherapy regimens.
541 So.2d 740 (1989)

Arturo RAINERMAN, Appellant,
v.
EAGLE NATIONAL BANK OF MIAMI, Appellee.

No. 88-1302.

District Court of Appeal of Florida, Third District.

April 11, 1989.

Myles J. Tralins, Miami, for appellant.

*741 Holland & Knight and Luis O'Naghten, Miami, for appellee.

Before SCHWARTZ, C.J., and NESBITT and FERGUSON, JJ.

PER CURIAM.

Appellant, Arturo Rainerman, contends that it is sufficient, in order to sustain a claimed fifth-amendment privilege, to show that the nature of the proceeding, or setting where the claim is made, is such that a response to any relevant question might be incriminating. See Compton v. Societe Eurosuisse, S.A., 494 F. Supp. 836 (S.D.Fla. 1980) (a witness may properly invoke the privilege against self-incrimination when he reasonably apprehends a risk of self-incrimination, even if the risk of prosecution is remote). Appellee, Eagle National Bank, responds that approval of a blanket assertion of the privilege, in any setting, would be improvident and that the privilege should be asserted only as to individual questions as they are posited during discovery. See Fischer v. E.F. Hutton & Co., Inc., 463 So.2d 289 (Fla. 2d DCA 1984) (in exercising his fifth-amendment privilege as to deposition questions, the defendant in a civil action, rather than simply refusing to answer any questions relating to the allegations of the suit, was required to make specific objection to a particular question).

The setting and nature of this proceeding is a deposition for discovery in aid of execution. Eagle was granted a judgment against Rainerman on a commercial promissory note and letter of credit. In a subsequent bankruptcy petition, Rainerman listed the obligations to Eagle among his dischargeable liabilities. Eagle obtained a final judgment from the bankruptcy court declaring that because of fraud, $50,000 of the debt owed to Eagle was not dischargeable in bankruptcy, 80 BR 549. Eagle is currently searching for Rainerman's assets in order to satisfy its judgment.

It is settled law that the privilege against self-incrimination may be properly asserted during discovery proceedings if the civil litigant has reasonable grounds to believe that direct answers to deposition or interrogatory questions would furnish a link in the chain of evidence needed to prove a crime against him. See Pillsbury Co. v. Conboy, 459 U.S. 248, 266, 103 S.Ct. 608, 619, 74 L.Ed.2d 430, 445-446 n. 1 (1983) (a witness need show only a realistic possibility that his answer will be used against him); Hoffman v. United States, 341 U.S. 479, 486, 71 S.Ct. 814, 818, 95 L.Ed. 1118, 1123-24 (1951); Meek v. Dean Witter Reynolds, 458 So.2d 412 (Fla. 4th DCA 1984) (no need to prove actual indictment or investigation); DeLisi v. Bankers Ins. Co., 436 So.2d 1099 (Fla. 4th DCA 1983); DeLisi v. Smith, 423 So.2d 934, 938 (Fla. 2d DCA 1982), rev. denied, 434 So.2d 887 (Fla. 1983). See generally Litchford, The Privilege Against Self-incrimination in Civil Litigation, 57 Fla.B.J. 139 (1983).

Presently pending against Rainerman are criminal proceedings arising out of banking transactions with Eagle and, as Eagle admits, there are other charges which could be brought depending on Rainerman's answers regarding his assets. Eagle contends that the existence of any privilege, nonetheless, depends on the specific questions propounded. In Fischer, 463 So.2d at 290, relied on by Eagle, the defendant refused to answer any questions relating to the allegations of the entire complaint.
In that case, it was appropriate to require that the objections be limited to only those questions where the privilege specifically applied. Here, however, the post-judgment proceeding in aid of execution, by its very nature, has narrowed the scope of inquiry to questions about Rainerman's assets and financial obligations. Allegations of Rainerman's fraudulent banking relationship with Eagle suggest that revelations in the discovery could furnish a link in ongoing and future criminal proceedings. In this posture of the case, any answers Rainerman may give to relevant questions may tend to incriminate him; therefore, he has a right to assert the privilege. See, e.g., Stewart v. Mussoline, 487 So.2d 96 (Fla. 3d DCA 1986) (mother charged with murdering her husband, allegedly for financial gain, was entitled *742 to invoke her fifth-amendment privilege and to refuse to answer questions concerning her financial status).

The order compelling answers to discovery in aid of execution is reversed and the cause is remanded.
Background {#Sec1}
==========

Methicillin-resistant *Staphylococcus aureus* (MRSA) has a prevalence of around 2.2 % in newly admitted patients to German hospitals \[[@CR1]\] and can be detected in 18--20 % of *Staphylococcus aureus* isolates derived from inpatients \[[@CR2]\]. To diminish the overall load of MRSA, a bundle of measures is recommended both on the hospital level (e.g., isolation) and on the individual level (e.g., decolonization therapy) \[[@CR3]\]. While the spread of MRSA is especially problematic in the inpatient sector, transmissions can be initiated by MRSA carriers readmitted to the hospital. In order to interrupt MRSA transmission, not only measures in the hospital but also follow-up and decolonization of patients in the outpatient sector are necessary. Meyer et al. \[[@CR4]\] reported that decolonization therapy can be applied successfully in the outpatient sector and pointed out that a close cooperation between the outpatient and inpatient sectors is necessary. However, a follow-up system for MRSA carriers (or carriers of other multiresistant pathogens) across sectoral borders (inpatient vs. outpatient) is missing in Germany, and we could not find any international studies concerning this subject.

The German College of General Practitioners and Family Physicians (DEGAM) published guidelines for the diagnosis and therapy of MRSA in the outpatient sector in September 2013 (three months before the survey). These recommendations are in accordance with the ones of the KBV (National Association of Statutory Health Insurance Physicians) published earlier. Both recommend MRSA screening for patients with an increased risk of being MRSA-positive, as well as three control swabs 48 h, 3--6 months and 12 months after application of a decolonization therapy. This treatment is recommended for all MRSA carriers; however, in case of factors that might decrease the success, such as chronic wounds, the physician can opt out. The guidelines recommend up to two decolonization therapies before consulting a specialist, e.g., a MRSA network. The treatment should comprise a bundle of measures (nasal ointment, mouthwashes, and daily disinfection of hair and skin for 5 days). Accompanying measures should include daily change of clothes, bedding and towels, and disinfection or daily change of hygiene utensils \[[@CR5]\].

In order to improve treatment of MRSA in the outpatient sector, reimbursement of the costs for MRSA screening, control swabs, and decolonization therapy was introduced in April 2012 by the Federal Joint Committee ("Gemeinsamer Bundesausschuß"), and the reimbursement is now paid by statutory health insurances. It is noteworthy that only the costs for nasal ointment (mupirocin) are covered by the statutory health insurances. The impact of this reimbursement has not been studied yet.

In addition, risk perceptions regarding MRSA are likely to cause stigmatization of MRSA carriers, as was described for the UK \[[@CR6]\] and for Sweden \[[@CR7]\]. Up to now, nothing is known about the perception of stigmatization due to MRSA in the daily life of MRSA carriers after discharge from hospital in Germany \[[@CR8]\]. Greiner \[[@CR9]\] demonstrated a considerable loss of quality of life for patients with MRSA infection, but no data are available about the quality of life of MRSA carriers without symptoms of MRSA infection.
Therefore, our study aimed at assessing knowledge, attitude, and practice related to MRSA among MRSA carriers and among physicians in outpatient care in two regions of Germany after the introduction of an additional reimbursement for MRSA specific care. In addition, we studied the perceived stigmatization and quality of life in MRSA carriers. Besides this quantitative approach, we initiated focus groups with MRSA carriers from the same study population \[[@CR10]\].

Methods {#Sec2}
=======

Study population {#Sec3}
----------------

MRSA carriers were recruited in collaboration with two tertiary care hospitals in Lower Saxony and North Rhine-Westphalia in November and December 2013. Inclusion criterion was a positive MRSA test during a hospital stay in 2012, that is, 12 to 24 months before initiation of the study. For simplification, we call these patients "MRSA carriers", regardless of their current MRSA status and of whether they initially had a MRSA colonization or a MRSA infection. The questionnaires were mailed to their home addresses from the hospitals.

The questionnaire for physicians was sent to all general practitioners (GPs), specialists for internal medicine, dermatologists, and urologists in the catchment area of the two hospitals between October and November 2013. We focused on these specialities in order to query physicians who presumably frequently deal with MRSA positive patients.

Ethics Statement {#Sec4}
----------------

All questionnaires were filled in anonymously by the participants. The study was approved by the Ethics committees of Hannover Medical School (No. 1893--2013) and the University Witten/Herdecke (No. 112/2013).

Questionnaires {#Sec5}
--------------

### Physicians in outpatient care {#Sec6}

We developed a questionnaire assessing the relevance of MRSA in their practices (11-point scale), knowledge of the refunding possibilities for MRSA screening and treatment, and satisfaction with financial reimbursement for MRSA specific care. Knowledge of MRSA was evaluated by a cumulative score; for further analysis, we dichotomized this knowledge score (lower group 0--3 points, higher group 4--7 points). To assess their activity regarding MRSA, we asked how many screening tests and decolonization therapies they had applied and whether they were members of a MRSA network, a regional quality management measure for training and discussions on MRSA. The questionnaire was piloted for clarity and comprehension with three GPs and one specialist in internal medicine working in the outpatient sector. The translated version of the questionnaire for physicians is available as Additional file [1](#MOESM1){ref-type="media"}.

### MRSA carriers {#Sec7}

Analogously, we developed a questionnaire for MRSA carriers, including questions on MRSA history, general state of health, perceived stigmatization, and socio-demographic data. As among physicians, knowledge of MRSA was assessed by a cumulative score based on 7 items and dichotomized for further analysis (lower group 0--3 points, higher group 4--7 points, one point for every correct answer). Two questions focused on the patients' attitudes towards MRSA, namely the importance respondents attributed personally to MRSA, and whether they were scared of MRSA, with answer categories on a 5-point Likert scale. We reclassified the responses into two categories (high: "yes, a lot" and "yes, some" vs. low: "neutral", "rather not" and "not").
Furthermore, we included questions on decolonization and control swabs as well as eight questions about perceived constraints in daily life and perceived stigmatization. The translated version of the questionnaire for MRSA carriers is available as Additional file [2](#MOESM2){ref-type="media"}. We added one question on self-rated health status, which was previously used as a single item in several studies, e.g., in the 1998 German Federal Health Survey \[[@CR11]\]. The questionnaire was piloted with four healthy adults for comprehension.

### Statistical analysis {#Sec8}

Differences between groups were tested using the chi-squared test for categorical variables and the Wilcoxon rank sum test for continuous variables; additionally, the odds ratio with 95 % confidence intervals is indicated for the univariable analysis. For explorative multivariable analysis of MRSA related knowledge, we used the dichotomized knowledge score as outcome variable for both MRSA carriers and physicians and applied logistic regression. Variables with *p* \< 0.25 in the univariable analysis, as well as age and sex, were included in an automatic forward selection model building procedure (using as cutoffs *p* = 0.2 for inclusion of variables and *p* = 0.05 for exclusion, based on the Wald test). The Hosmer--Lemeshow test was used to test the goodness of fit of the logistic models. The analysis was carried out using Stata 12 (StataCorp, College Station, US).
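To make the selection procedure concrete, the following is a minimal sketch of such a forward selection, re-implemented in Python with statsmodels rather than Stata; the data frame and column names (knowledge_hi, knowledge_score, relevance, age, sex) are hypothetical stand-ins for the study variables, predictors are assumed to be numerically coded, and dropped variables are never reconsidered, which simplifies a full stepwise procedure:

```python
import statsmodels.api as sm

def forward_select(df, outcome, candidates, p_enter=0.2, p_remove=0.05):
    """Forward selection for a logistic model: a variable enters at Wald
    p < p_enter and is dropped again if its p-value in the refitted model
    exceeds p_remove (mirroring the 0.2 / 0.05 cutoffs described above)."""
    selected, rejected = [], set()
    while True:
        pool = [v for v in candidates if v not in selected and v not in rejected]
        # among the remaining candidates, find the one with the smallest Wald p-value
        best_var, best_p = None, p_enter
        for var in pool:
            fit = sm.Logit(df[outcome], sm.add_constant(df[selected + [var]])).fit(disp=0)
            if fit.pvalues[var] < best_p:
                best_var, best_p = var, fit.pvalues[var]
        if best_var is None:
            return selected          # nothing left that meets the entry cutoff
        selected.append(best_var)
        # refit with the new variable and drop anything above the exclusion cutoff
        fit = sm.Logit(df[outcome], sm.add_constant(df[selected])).fit(disp=0)
        for var in list(selected):
            if fit.pvalues[var] > p_remove:
                selected.remove(var)
                rejected.add(var)

# Hypothetical usage: dichotomize the knowledge score (0-3 vs. 4-7 points),
# then select from the univariable candidates:
# df["knowledge_hi"] = (df["knowledge_score"] >= 4).astype(int)
# final_vars = forward_select(df, "knowledge_hi", ["relevance", "age", "sex"])
```

In the physicians' model reported below, only the subjective relevance variable survived this kind of selection.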
Results {#Sec9}
=======

Physicians in the outpatient sector {#Sec10}
-----------------------------------

The response proportion to our questionnaire for physicians was 9.5 % (80/851), 27.8 % of them being female and 38.0 % being GPs (Table [1](#Tab1){ref-type="table"}).

Table 1: Characteristics of responding physicians

| Characteristic | *N* (%)^a^ |
|---|---|
| Total | 79 (100 %) |
| Age (*n* = 75), median (IQR) in years | 52 (46--58) |
| Sex (*n* = 74): female | 22 (27.8 %) |
| Years of professional experience in ambulant health care (*n* = 74), median (IQR) in years | 13.5 (3--34) |
| Estimated number of MRSA positive patients in the last 12 months (*n* = 77), median (IQR) | 5 (3--10) |
| Number of screened patients in the last 12 months (*n* = 78), median (IQR) | 2 (0--6) |
| Number of decolonized patients ever (*n* = 79), median (IQR) | 2 (0--7) |
| Discipline (*n* = 75): general practitioner | 30 (38.0 %) |
| Discipline: internal medicine | 30 (38.0 %) |
| Discipline: dermatologist | 4 (5.1 %) |
| Discipline: urologist | 8 (10.1 %) |
| Discipline: other | 3 (3.8 %) |

*IQR* interquartile range; ^a^ calculation of proportions includes missing values in the denominator

Only 57.0 % of the physicians were able to correctly define a patient being at risk for MRSA according to the definition issued by the 'National Association of Statutory Health Insurance Physicians', 51.9 % knew at which time points control swabs are recommended, and 14.0 % answered all questions about the reimbursement correctly. Respondents achieved a median of four knowledge points out of seven possible. In the multivariable analysis regarding factors associated with knowledge related to MRSA, only the variable "relevance" was selected in the model; physicians attributing higher relevance to MRSA answered more questions correctly (odds ratio (OR) 1.4 per one point increase, 95 % confidence interval (95 % CI) 1.1 to 1.7, *p* = 0.002) (Table [2](#Tab2){ref-type="table"}). Those displaying more activity towards MRSA showed better knowledge in the univariable analysis; however, this was not significant in the multivariable model.

Table 2: Variables associated with physicians' knowledge related to MRSA

| Variable | Less knowledge (0--3 points), *n* (%)^a^ | More knowledge (4--7 points), *n* (%)^a^ | *p* | Univariable OR (95 % CI) | Multivariable OR (95 % CI)^b^ | *p* (Wald test) |
|---|---|---|---|---|---|---|
| Total | 36 (45.6 %) | 43 (54.4 %) | | | | |
| Age, median (IQR) | 52 (46--59) | 52 (45--57) | 0.701 | 0.8 (0.5--1.4) per ten years increase | | |
| Sex: female | 13 (39.4 %) | 9 (21.4 %) | 0.090 | 0.4 (0.1--1.1) | | |
| Sex: male | 20 (60.6 %) | 33 (78.6 %) | | 1 | | |
| Professional experience in years, median (IQR) | 13 (8--24) | 14.5 (8--22) | 0.909 | 0.9 (0.6--1.5) per 10 years increase | | |
| Discipline: general practitioner | 24 (72.7 %) | 24 (57.1 %) | 0.163 | 1 | | |
| Discipline: other specialist | 9 (27.3 %) | 18 (42.9 %) | | 2 (0.7--5.3) | | |
| Member of a MRSA network: yes | 2 (5.6 %) | 7 (16.3 %) | 0.135 | 3.3 (0.6--17.0) | | |
| Member of a MRSA network: no | 34 (94.4 %) | 36 (83.7 %) | | 1 | | |
| MRSA certificate: yes | 8 (22.2 %) | 23 (53.5 %) | 0.005 | 4.0 (1.5--10.8) | | |
| MRSA certificate: no | 28 (77.8 %) | 20 (46.5 %) | | 1 | | |
| Subjective relevance for physician's work, median (IQR) | 3.5 (1.5; 7) | 7 (5; 8) | 0.001 | 1.3 (1.1--1.6) per one point increase | 1.4 (1.1--1.7) per one point increase | 0.002 |
| Number of MRSA carriers in the last 12 months, median (IQR) | 5 (1.5; 7) | 6 (4; 10) | 0.030 | 1.7 (0.9--3.3) per increase of 10 | | |
| Number of screenings in the last 12 months, median (IQR) | 2 (0; 4) | 3 (1; 10) | 0.019 | 1.1 (0.8--1.6) per increase of 10 | | |
| Number of decolonizations, median (IQR) | 2 (0; 4.5) | 4 (1; 10) | 0.0625 | 1.4 (0.8--2.5) per increase of 10 | | |
| Satisfaction with refunding: content | 1 (3.9 %) | 11 (26.2 %) | 0.008 | 1 | | |
| Satisfaction with refunding: discontent | 7 (26.9 %) | 17 (40.5 %) | | 0.2 (0.0--2.0) | | |
| Satisfaction with refunding: don't know | 18 (69.2 %) | 14 (33.3 %) | | 0.1 (0.0--0.6) | | |

^a^ Differences to total *N* due to missing values; ^b^ logistic regression with forward selection of variables, mutually adjusted for all variables with reported ORs in the table

Nearly half of the physicians (45.6 %) stated that "Sufficient information about MRSA is available". One third (35.9 %) agreed fully with the general recommendations about the handling of MRSA carriers, and another 46.2 % agreed with them "in part". According to our respondents, MRSA findings are not always reported in the discharge documents: 59.0 % answered the findings were "often" or "very often" reported, 28.2 % "sometimes", and 12.8 % answered "seldom" or "never". A notification of a MRSA finding by telephone was even less common: 5.2 % answered "often", 10.4 % "sometimes", and 84.4 % "seldom" or "never".

Importance attributed to MRSA by the physicians was fairly heterogeneous: 22.1 % attributed low importance (0--2 points out of 11), and 32.5 % attributed high importance (9--10 out of 11). Physicians reported that they screened 2 patients (median) for MRSA in the last 12 months. Furthermore, they had initiated a median of 2 decolonization therapies ever (Table [1](#Tab1){ref-type="table"}); nearly one third of them (27.9 %) had never applied a decolonization therapy to a patient, while 22.8 % had applied it 10 times or more. Fifty-eight percent of the responding physicians were aware of the refunding possibilities, and satisfaction with the amount of refunding was distributed as follows: no participant was "very satisfied", 15.2 % were "satisfied", 30.4 % were "not satisfied", and 54.4 % answered "I don't know" or did not answer this question.

MRSA carriers {#Sec11}
-------------

Based on hospital records, 2250 MRSA carriers were eligible for our study. Of these, we excluded 1089 because their death was known or assumed by the hospital, and 251 letters were undeliverable. Of the remaining 910 MRSA carriers, 16.5 % (150) sent back a completed questionnaire (Additional file [3](#MOESM3){ref-type="media"}: Figure S1).
MRSA carriers appeared as a frail study population with a median age of 71.5 years; 33.3 % reported to have been assigned a formal long-term care level for purposes of German health insurance, which corresponds to a considerable and long-term need of nursing care (Table [3](#Tab3){ref-type="table"}).

Table 3: Characteristics of MRSA carriers

| Characteristic | *N* (%)^e^ |
|---|---|
| Total | 150 (100 %) |
| Age (*n* = 146), median (IQR) in years | 71.5 (60--78) |
| Sex (*n* = 146): female | 67 (44.7 %) |
| Education (*n* = 136): low^a^ | 83 (55.3 %) |
| Education: intermediate^b^ | 30 (20.0 %) |
| Education: high^c^ | 23 (15.3 %) |
| Living in a long-term care facility (*n* = 141): yes | 9 (6.0 %) |
| Living in a long-term care facility: no | 132 (88.0 %) |
| Need of nursing care ("Pflegestufe") (*n* = 144): yes | 50 (33.3 %) |
| Need of nursing care: no | 94 (62.7 %) |
| Migration background^d^ (*n* = 147): yes | 15 (10.0 %) |
| Migration background: no | 132 (88.0 %) |
| Risk factors for MRSA (multiple selection possible, *n* = 150): urinary catheter | 9 (6.0 %) |
| Risk factors: dialysis | 10 (6.7 %) |
| Risk factors: chronic wounds | 13 (8.7 %) |
| Risk factors: chronic skin disease | 12 (8.0 %) |
| Risk factors: occupational exposure to livestock | 1 (0.7 %) |
| Risk factors: no risk factor | 116 (77.3 %) |
| Decolonization therapy applied (*n* = 132): yes | 71 (47.3 %) |
| Decolonization therapy: in the hospital | 44 (29.3 %) |
| Decolonization therapy: at home | 21 (14.0 %) |
| Decolonization therapy: in the hospital and at home | 29 (19.3 %) |
| Decolonization therapy applied: no | 61 (40.6 %) |
| Control swabs for MRSA in the outpatient sector (*n* = 144): yes | 50 (33.3 %) |
| Control swab and decolonization therapy applied (*n* = 132): yes | 37 (24.7 %) |

^a^ Low level of school education (\<10 years); ^b^ intermediate level of school education (10--12 years); ^c^ high level of school education (12--13 years); ^d^ migration defined as not being born in Germany or/and mother tongue not German; ^e^ calculation of proportions includes missing values in the denominator. *IQR* interquartile range

To seven knowledge questions, MRSA carriers gave a median of three correct answers; a majority (64.0 %) answered "don't know" to at least one question (Fig. [1](#Fig1){ref-type="fig"}). In the multivariable analysis, those participants who had sought additional information through the internet (OR 5.1; 95 % CI 1.8 to 14.0, *p* = 0.002) or newspapers (OR 5.4; 95 % CI 1.6 to 17.5, *p* = 0.005) showed significantly more knowledge about MRSA. Older MRSA carriers had less knowledge (OR 0.7 per 10 year increase, 95 % CI 0.5 to 1.0, *p* = 0.049), as did those participants attaching more importance to MRSA (OR 0.4, 95 % CI 0.2 to 0.9, *p* = 0.034). Interestingly, education level was not associated with knowledge related to MRSA (Table [4](#Tab4){ref-type="table"}).
Fig. 1: Knowledge of MRSA carriers

Table 4: Variables associated with MRSA carriers' knowledge related to MRSA

| Variable | Less knowledge (0--3 points), *n* (%)^a^ | More knowledge (4--7 points), *n* (%)^a^ | *p* | Univariable OR (95 % CI) | Multivariable OR (95 % CI)^b^ | *p* |
|---|---|---|---|---|---|---|
| Total | 82 (54.7 %) | 68 (45.3 %) | | | | |
| Age, median (IQR) in years | 74 (65--79.5) | 66 (53--77) | 0.011 | 0.7 (0.5--0.9) per 10 year increase | 0.7 (0.5--1.0) per 10 year increase | 0.049 |
| Sex: female | 35 (43.8 %) | 32 (48.5 %) | 0.568 | 1.2 (0.6--2.3) | | |
| Sex: male | 45 (56.3 %) | 34 (51.5 %) | | 1 | | |
| Education: low | 53 (72.6 %) | 30 (47.6 %) | 0.011 | 1 | | |
| Education: intermediate | 12 (16.4 %) | 18 (28.6 %) | | 2.6 (1.1--6.2) | | |
| Education: high | 8 (11.0 %) | 15 (23.8 %) | | 3.3 (1.3--8.7) | | |
| Migration background: yes | 11 (13.9 %) | 4 (5.9 %) | 0.108 | 0.4 (0.1--1.3) | | |
| Migration background: no | 68 (86.1 %) | 64 (94.1 %) | | 1 | | |
| Source of information, internet: yes | 9 (11.0 %) | 25 (36.8 %) | \<0.0001 | 5.4 (2.0--11.0) | 5.0 (1.8--14.0) | 0.002 |
| Source of information, internet: no | 73 (89.0 %) | 43 (63.2 %) | | 1 | 1 | |
| Source of information, newspaper: yes | 8 (9.8 %) | 14 (20.6 %) | 0.015 | 2.4 (0.9--6.1) | 5.4 (1.6--17.5) | 0.005 |
| Source of information, newspaper: no | 74 (90.2 %) | 54 (79.4 %) | | 1 | 1 | |
| Source of information, television/radio: yes | 7 (8.5 %) | 13 (19.1 %) | 0.058 | 2.5 (0.9--6.8) | | |
| Source of information, television/radio: no | 75 (91.5 %) | 55 (80.9 %) | | 1 | | |
| MRSA discussed with healthcare professional: at least one | 54 (65.9 %) | 55 (80.9 %) | 0.040 | 2.2 (1.0--4.7) | | |
| MRSA discussed with healthcare professional: none at all | 28 (34.2 %) | 13 (19.1 %) | | 1 | | |
| Attitude, importance of MRSA: high | 51 (68.0 %) | 35 (51.5 %) | 0.004 | 0.5 (0.3--1.0) | 0.4 (0.2--0.9) | 0.034 |
| Attitude, importance of MRSA: low | 24 (32.0 %) | 33 (48.5 %) | | 1 | 1 | |
| Attitude, scared of MRSA: yes | 46 (59.0 %) | 29 (42.7 %) | 0.049 | 0.5 (0.3--1.0) | | |
| Attitude, scared of MRSA: no | 32 (41.0 %) | 39 (57.4 %) | | 1 | | |

^a^ Differences to total *N* due to missing values; ^b^ logistic regression with forward selection of variables, mutually adjusted for all variables with reported ORs in the table

About a quarter of the respondents (27.3 %) stated that no professional healthcare worker had talked to them about MRSA. The rest reported talking more often with hospital staff (physicians or nursing staff) than with physicians or nurses in the outpatient care service (data not shown). Twenty-one percent reported attributing no importance to the positive MRSA result, but 51.0 % were scared of MRSA. Half of the respondents (49.3 %) reported that their general state of health was "not so good" or "bad", and 20.0 % of the participants reported a deterioration of their quality of life due to MRSA. One third (30.7 %) responded affirmatively to at least one question indicating stigmatization, and the most frequently reported aspect was self-restriction of social contacts in order to prevent transmission (17.4 %). Of patients younger than 65 years (the retirement age in Germany), 10 % (4/40) reported occupational problems because of MRSA. Participants also reported stigmatization in the context of health care services (Fig. [2](#Fig2){ref-type="fig"}).

Fig. 2: Stigmatization related to MRSA

Only one third (33.3 %) reported that their MRSA status was evaluated after discharge from hospital. Nearly half of the respondents (47.3 %) received a decolonization therapy, and of those, 52.1 % (37/71) reported that a control swab was taken in the outpatient sector (Table [3](#Tab3){ref-type="table"}). Asked about the application of the recommended measures in detail, only ten participants (6.7 %) stated that all listed measures had been applied, the application of nasal ointment being the most common (64.3 % of all MRSA carriers) (Fig. [3](#Fig3){ref-type="fig"}). According to our data, the presence of self-reported risk factors for prolonged MRSA carriage, like chronic wounds or urinary catheters, did not influence the application of decolonization therapy (Chi^2^ test, *p* = 0.636).
Fig. 3: Decolonization therapy: application of single measures

Discussion {#Sec12}
==========

We analyzed knowledge, attitude, and practice among MRSA carriers and physicians in the outpatient sector in Germany after the introduction of reimbursement for MRSA related therapy. Physicians displayed heterogeneous knowledge and levels of activity regarding MRSA specific aspects. Almost one third of the responding MRSA carriers stated that no healthcare professional had ever talked to them about MRSA. Thirty percent claimed that their quality of life deteriorated due to MRSA, and one third experienced stigmatization due to MRSA or exerted self-stigmatization. Similarly, the reduction of social contacts and leisure activities among MRSA carriers was also reported in a Swedish study \[[@CR7]\] and in our own focus groups, where participants reported the reduction of social contacts, e.g., to their grandchildren, as a consequence of the MRSA finding \[[@CR10]\]. Such behaviour is generally not recommended in official guidelines \[[@CR12]\]. Nevertheless, some MRSA carriers seem to overestimate the risk associated with MRSA for healthy and non-hospitalized individuals. It might be advisable to proactively address this topic in patient information leaflets and physician-patient consultations.

Some participants reported a rejection by health care services like a nursing home or a rehabilitation clinic because of MRSA. Further studies are necessary to investigate to what extent the medical treatment of the underlying conditions is negatively influenced by a positive MRSA status.

MRSA carriers who sought additional information about MRSA on the internet or by reading newspapers could answer more knowledge questions correctly. This suggests that publicly available information on MRSA is used by MRSA carriers and has a positive effect on their knowledge. Hence, the quality and availability of public information on MRSA is important.

Thirty percent of the respondents claimed that no healthcare professional had ever talked to them about MRSA. Astonishingly, some of these also reported the application of a decolonization therapy. In these cases, either the decolonization therapy was applied without a proper explanation, or the responding MRSA carriers did not remember the education about MRSA. Respondents gave, however, more inconsistent answers in this area: only 71 respondents reported at least one decolonization therapy, whereas 90 reported the application of nasal ointment. These contradictions also underline the lack of specific knowledge among the MRSA carriers. Around 30 % of the answers to the knowledge questions were "don't know", which indicates rather substantial deficits in the knowledge among MRSA carriers. The need for more information was also a key finding in qualitative studies from Great Britain and Sweden \[[@CR6], [@CR7]\]. Participants of the qualitative part of our study also reported a need for more and adequate information \[[@CR10]\]. The regulation of the German National Association of Statutory Health Insurance allows reimbursement for ten minutes of conversation about MRSA twice during the treatment process \[[@CR13]\]. Taking the complexity of the topic into account, it seems challenging to give adequate information in ten minutes, which could in part explain the information deficit of the MRSA carriers.

Rather surprisingly, 42 % of the responding physicians stated that there was not enough information available on MRSA.
Various organizations in Germany (the Robert Koch-Institute, the Association of Health Insurance Doctors, etc.) have detailed, freely available information on MRSA on their websites. In order to raise awareness of MRSA among physicians in the outpatient sector, it might not be sufficient to make information available through the internet; it might be essential to additionally spread information proactively through medical journals or leaflets.

Only one third (33.3 %) of the MRSA carriers in our study reported having received control swabs in the outpatient sector. This can be put in the context of the results of the survey among physicians, which showed considerable heterogeneity in their activity towards MRSA. Correspondingly, only half of the MRSA carriers reported a decolonization therapy, even though it was recommended for all patients except those with underlying conditions such as dialysis. However, the application of a decolonization therapy was not associated with the presence of these conditions among patients in our study population. So why did only half of the patients get a decolonization therapy? One reason could be that the MRSA diagnosis is not always reported in the discharge letter. According to a recent report of the European Observatory on Health Systems and Policies, "communication between GPs and hospitals is problematic" in Germany \[[@CR14]\], as well as in some other European countries \[[@CR15]\]. According to this report, in Spain, e.g., cooperation between sectors is standard practice and has been facilitated by shared electronic clinical records and IT systems \[[@CR14]\]. One possibility to overcome the sectoral gap might be the establishment of infection control teams taking care of diagnosed MRSA carriers beyond their discharge from hospital.

Furthermore, it might be difficult for physicians to develop routines with regard to MRSA due to the small number of cases. Kock et al. \[[@CR2]\] estimated 132,000 MRSA cases in German hospitals per year; distributing them among the 37,353 general practitioners (GPs) in Germany in 2013 \[[@CR16]\], one GP would see on average 3.5 MRSA carriers per year (see the worked estimate at the end of this section). This is quite comparable to the median of 5 MRSA carriers reported by the physicians in our data. This small number might result in a vicious circle: physicians attach little importance to MRSA, do not seek information about MRSA, underestimate the number of patients at risk for MRSA, therefore apply screening measures too seldom, and consequently have low numbers of MRSA carriers among their patients.

Only a few respondents reported that all recommended measures (disinfectant washing, etc.) were applied. One reason could be that only the costs for nasal ointment are covered by the health insurances, the carriers having to bear the rest of the costs themselves. Another reason could be that physicians do not believe all the measures to be necessary: only one third (36 %) agreed fully with the general recommendations. The poor adherence and low agreement with the recommendations might be due to a lack of studies showing the effectiveness of the various recommended measures, as was highlighted before by Faetkenheuer et al. \[[@CR17]\]; however, our study gives no insight into which aspects of the recommendations the physicians disagree with, and why. Studies exploring the effectiveness of (outpatient) decolonization therapy and the effectiveness of its single components could improve the acceptance of these measures.
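As a worked version of the caseload estimate above (the division simply restates the figures cited from \[[@CR2]\] and \[[@CR16]\]):

$$\frac{132{,}000\ \text{hospital MRSA cases per year}}{37{,}353\ \text{GPs}} \approx 3.5\ \text{MRSA carriers per GP per year,}$$

which is of the same order of magnitude as the median of 5 carriers per year reported by the responding physicians.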
Limitations {#Sec13}
-----------

The study was carried out in two regions in Germany and might not reflect the situation in all of Germany. While MRSA status was based on medical records, all further data on decolonization therapy and medical conditions were self-reported by MRSA carriers. Misclassification concerning the application of control swabs, decolonization therapy or medical conditions, e.g., due to recall problems, cannot be excluded.

Only 9.5 % of the physicians returned our questionnaire. Thus, our analysis is prone to selection bias, and the results have to be interpreted with caution. Knowledge, attitude, and activity might be higher among participants than in the whole source population. However, 22 % of responding physicians indicated that MRSA was not important in their daily work, and 10 % stated that they had had no MRSA carriers among their patients in the last 12 months. Thus, we may assume that non-responders are uninterested in taking part in studies in general rather than specifically uninterested in MRSA. This may lead to less bias in our findings.

The response proportion of the MRSA carriers was 16.5 %. It can be assumed that in this population non-responders might be frailer than respondents, and thus face different problems concerning MRSA. Unfortunately, we are not able to show this. To approach these hard-to-reach populations, further research with a different approach, e.g., interviews, would be necessary.

Conclusion {#Sec14}
==========

Relevance attributed to MRSA and experience with the application of MRSA-specific therapy were highly heterogeneous among physicians in the outpatient sector. Raising awareness of MRSA in the outpatient healthcare sector appears imperative to improve the treatment of MRSA carriers. Additionally, health care professionals should be aware of possible stigmatization and of the fact that a lack of adequate information can result in inappropriate self-deprivation of social contacts. MRSA carriers need more and adequate information about MRSA. This information should include the fact that MRSA is no threat to healthy people.

Ethics approval and consent to participate {#Sec15}
------------------------------------------

The study was approved by the Ethics Committee of Hannover Medical School (No. 1893-2013) and the Ethics Committee of the University Witten/Herdecke (No. 112/2013). Participants provided informed consent by sending back the anonymous questionnaire.

Availability of data and materials {#Sec16}
----------------------------------

The dataset supporting the conclusions of this article is available from the corresponding author upon request.

Additional files {#Sec17}
================

Additional file 1: Questionnaire for physicians working in the outpatient sector. (PDF 217 kb)
Additional file 2: Questionnaire for MRSA carriers. (PDF 232 kb)
Additional file 3: Figure S1. Recruitment of MRSA carriers. (PDF 178 kb)

**Abbreviations**

CI: confidence interval
GP: general practitioner
IQR: interquartile range
MRSA: methicillin-resistant *Staphylococcus aureus*
OR: odds ratio

**Competing interests** The authors declare that they have no competing interests.

**Authors' contributions** SC and HRR conceived and designed the study. HRR, RM, and SC contributed to the development of the questionnaires. OS and SS contributed to the acquisition of data. SC, NR, and AK made contributions to the statistical analysis. HRR conducted the statistical analysis and drafted the manuscript.
All authors contributed to the interpretation of the data and the writing and revising of the manuscript, and approved the final manuscript.

**Acknowledgements** We thank all participants who completed the questionnaire for their commitment. Furthermore, we thank Prof. Dr. Dr. W. Bautsch, Klinikum Braunschweig, for his support.

Funding {#FPar1}
=======

Internal funding of the Helmholtz Centre for Infection Research.
Are you kidding? My previous blog generator (Son of BartleBlog) was not in good shape. The archives only covered 2000-2010, the "previous posts" links were a lottery, and the Spanish version of the site was missing whole sections.

So, what's Nikola? Nikola is a static website generator. One thing about this site is that it is, and has always been, just HTML. Every "dynamic" thing you see in it, like comments, is a third-party service. This site is just a bunch of HTML files sitting in a folder.

So, how does Nikola work? Nikola takes a folder full of txt files written in reStructuredText, and generates HTML fragments.
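Nikola itself is written in Python, so a Python sketch is the natural way to illustrate the idea. To be clear, this is not Nikola's actual code or API; the paths, names, and bare-bones page template below are all made up. It is just a minimal "folder of reStructuredText in, folder of HTML out" generator built on docutils:

```python
# toy_site_gen.py -- toy illustration of what a static site generator does.
# NOT Nikola's real API; the file layout and page template are illustrative.
from pathlib import Path

from docutils.core import publish_parts  # converts reST to HTML fragments

SRC = Path("posts")    # one .txt file per post, written in reStructuredText
OUT = Path("output")   # ends up as "just a bunch of HTML files in a folder"

def build() -> None:
    OUT.mkdir(exist_ok=True)
    for post in sorted(SRC.glob("*.txt")):
        # 'html_body' is the rendered fragment; wrap it in a trivial page.
        parts = publish_parts(post.read_text(encoding="utf-8"),
                              writer_name="html")
        page = f"<html><body>{parts['html_body']}</body></html>"
        (OUT / f"{post.stem}.html").write_text(page, encoding="utf-8")

if __name__ == "__main__":
    build()
```

Anything "dynamic" (comments, search) would then be bolted on as a third-party service, exactly as described above.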
Spontaneous coronary artery dissection causing sudden death. Mechanical arterial failure or primary vasculitis? A previously healthy 36-year-old woman died suddenly, and autopsy examination disclosed dissection of the left anterior descending coronary artery with luminal occlusion. A periadventitial inflammatory infiltrate consisting predominantly of eosinophils and including histiocytic multinucleated giant cells was present. The syndrome of spontaneous coronary artery dissection is a rare but well-described clinicopathologic entity. The pathogenesis, and specifically the significance of the periarterial inflammation, has been controversial. The findings in our case, as well as in others reported in the literature, suggest a primarily mechanical process inciting a localized inflammatory reaction, rather than a primary vasculitis.
Foot and Ankle Fellowship Websites: An Assessment of Accessibility and Quality. The Internet has been reported to be the first informational resource for many fellowship applicants. The objective of this study was to assess the accessibility of orthopaedic foot and ankle fellowship websites and to evaluate the quality of information provided via program websites. The American Orthopaedic Foot and Ankle Society (AOFAS) and the Fellowship and Residency Electronic Interactive Database (FREIDA) fellowship databases were accessed to generate a comprehensive list of orthopaedic foot and ankle fellowship programs. The databases were reviewed for links to fellowship program websites and compared with program websites accessed from a Google search. Accessible fellowship websites were then analyzed for the quality of recruitment and educational content pertinent to fellowship applicants. Forty-seven orthopaedic foot and ankle fellowship programs were identified. The AOFAS database featured direct links to 7 (15%) fellowship websites with the independent Google search yielding direct links to 29 (62%) websites. No direct website links were provided in the FREIDA database. Thirty-six accessible websites were analyzed for content. Program websites featured a mean 44% (range = 5% to 75%) of the total assessed content. The most commonly presented recruitment and educational content was a program description (94%) and description of fellow operative experience (83%), respectively. There is substantial variability in the accessibility and quality of orthopaedic foot and ankle fellowship websites. Recognition of deficits in accessibility and content quality may assist foot and ankle fellowships in improving program information online. Level IV.
Militant moderate, unwilling to concede any longer the terms of debate to the strident ideologues on the fringe. If you are a Democrat or a Republican, you're an ideologue. If you're a "moderate" who votes a nearly straight party-ticket, you're still an ideologue, but you at least have the decency to be ashamed of your ideology. ...and you're lying in the meantime.

Tuesday, July 30, 2013

The Future is Now

Perhaps I've mentioned this before, or perhaps not. If you want to see the future of America once liberals get done with it, you can look to the failed Soviet Union; to the imminent failure of Greece, Italy, Spain, Portugal; to the dinosaurs-on-life-support Cuba, Venezuela, North Korea, Bolivia … you probably cannot, though, look to communist China because – say what you will about myopic socialists, the Chinese have always looked ten generations ahead – they saw for themselves the slow toilet swirl their Soviet pals took, and the Chinese caught capitalism faster than a street walker catches clap.

But if none of those examples appeal to you as being too distant and removed in time, location or the foreign inability to pull off American Exceptionalism, then you might look instead to Detroit. Detroit just filed for bankruptcy. …like no one saw that coming. And when I say "no one", I'm talking specifically about liberals.

Detroit has been, for two generations, the poster-child of liberal sensibility. Detroit had two main industries: autos and Motown. Both rely on union labor – the United Auto Workers, and the American Federation of Musicians. While it is perfectly reasonable, though, for the UAW to denounce Henry Ford the 7th as a sweat-shopper, which session musician is going to tell Smokey Robinson or the Jackson 5 that they're keepin' a brother down?

The result is that one generation ago US auto makers started putting out Plymouth Valiants, Dodge K-Cars, Chevy Citations, AMC Gremlins and Ford Pintos. Not coincidentally, Japan supplanted the US as the maker of the world's best cars that same day. Americans were so bitter over Japanese cars, and so desperate to buy a vehicle that wasn't a US piece-o-crap, that they bought – get this – a Yugoslavian piece-o-crap instead. The Yugo rusted on the fly, if anything, faster than the Chevy Monza did. You could actually watch the corrosion work its way out from a dent caused by a morning hailstorm. With the Monza, it might take a week to see the rust bubble; with the Yugo, it popped up by lunch.

And, of course, with union labor comes union pensions. And early retirements. And several other factors that Greece is suddenly discovering they can't afford – even if Germany picks up the tab. And the pensions that the UAW dopes thought they were so clever to secure for themselves out of the endless coffers of The Big Three's vaults – they've largely gone away. Chrysler went bankrupt, Ford restructured at least once, and GM went bankrupt and folded two of its brands – Saturn and Oldsmobile – in response to our most recent "economic downturn".

Couple this with Detroit's tax rates – and Michigan's in general – designed to compel the wealthy to pay their Fair Share®, and Detroit saw its manufacturing base move out of state. Detroit has long had the highest property tax rates of any of America's 50 largest cities. Yes, Ford, Chrysler and GM still have a presence in the Detroit area as well as Detroit itself, but they're a shadow of their former selves.

For what it's worth, Berry Gordy moved his Motown to L.A.
in the early 70s. Even California had a better business climate than Detroit, even now. Of today's top ten employers in Detroit, Michigan, the first eight are one form of government employer or another [or heavily subsidized by the government – like healthcare and health insurance]. It's not until you get to 9 and 10 on the list that you see a private employer: GM and Chrysler, respectively; Ford doesn't even appear in the top 20, although three casinos do.

With the major private employers pulling out to the suburbs – or to Tennessee – and taking their tax debts with them, the city of Detroit found itself learning what so many liberals refuse to: small businesses thrive in the presence of big business. This is due to a few factors. The first is the classic economist's explanation: a large portion of what small businesses do is done for big business; big business is the small business's biggest customer. Without the big business there, there's no big customer, and the small customers aren't enough for the small business to earn a profit … they fold.

The second reason is the one that I may have mentioned before in response to the Occupy Wall Street socialist idiocy: when you kill off the top 1%, there's always, always, always going to be another 1% on the top of the economic heap. When you remove the outer layer of the onion, there's still a layer of the onion on the outside. When you cut off the top rung of the ladder, there's still a top rung. Removing the top layer by layer is fine as long as other people are at the top. But as the French found out during their Reign of Terror, eventually you will be at the top – and thus removed.

Soaking the rich through government policy because they're rich only succeeds in driving the rich to move to other … cities, other … states, other … countries. Where the rich go, so go the jobs – check out Detroit. The people who are left can't afford the taxes – they don't have jobs to earn the money – and those taxes not collected pay for all the stuff that the government has taken upon itself to do for them. As a result, it doesn't get done. It's not the rich who suffer; they're living elsewhere under a different government, and living quite well, usually. The ones who suffer are the poor because they can't afford to move.

Property tax delinquency in Detroit currently exceeds 50%; whole city blocks are abandoned and in ruins. Fire and police protection is a joke, crime rates have skyrocketed and a small fire is as likely to burn down a whole block as be put out. It's a good thing the Detroit Tigers lost the World Series last year: the city would have been completely torched out of celebration.

Detroit even had their own version of a liberal savior to rescue them from themselves: Kwame Kilpatrick. The City Savior attempted to fix Detroit's problems in much the same way our National Savior is doing it for the rest of us; Kilpatrick will be sentenced for it later this year, and it is likely to be a lengthy stint. He's already been in state prison on lesser savioring. Because the fact of the matter is, liberal fiscal politics doesn't work without cooking the books – what federal prosecutors call extortion and fraud. And then it typically requires payola and deceit to cover it up – what the same federal prosecutors call bribery and obstruction of justice.
Only the Mayor of Detroit doesn't have the same level of plausible deniability that a US President has, and Kilpatrick got caught playing Obama just as Obama was being elected to play himself. And Obama has minions to fall on their swords for him, people like Shyster General Eric Holder, Indoctrination Secretary Arne Duncan, Poverty Perpetuation Secretary Kathleen Sebelius, Disemployment Secretary Hilda Solis, and a few others, all tied into a happy knot by Head Book Cooker Timmy "The Great" Geithner.

If Detroit goes bankrupt, it falls under US bankruptcy law, which obliges creditors – after all the court hearings – to take a financial bath. In the meantime, though, the rich abandon ship taking their money [and jobs] with them, and there's no more police or fire protection, essential services – like filling potholes and replacing street lights – end in all neighborhoods, not just the poor sections, and the city gets parceled out to local gangs and to opportunists from other cities. But that's only until the courts sort it out.

The US government has gotten into the sad habit of paying the bills of bankrupt entities, so Detroit may get their bailout. Greece has Germany; Detroit may have D.C. It's happened before. The opportunity to happen again isn't even limited to Detroit; following close on these heels is any number of American cities – nearly all having a multi-generational history of liberal fiscal lunacy having guided them – and quite a few states, themselves historically liberal, such as Illinois, California, Michigan, New Jersey [etc].

I shouldn't have to remind anyone that the US government is very rapidly going broke itself due to liberal Democrat [and pseudo-conservative liberal Republican] profligacy, and if the US government goes bankrupt, police and fire protection [in the form of military defense] will evaporate; essential services will die; and US territory will get divvied up by local gangs [called warlords in geopolitics] or by foreign opportunists – often called "terrorists". Only the US going bankrupt would not be a matter for US bankruptcy law to handle. It would be handled, instead, in the traditional way: the bankrupt nation selling off its territory for the cash needed to pay its bills, or foreign creditors breaking kneecaps and repossessing territory by conquest. Or the nation simply degenerates into barbarism. What fun.

The irony icing on this let-them-eat-liberal-cake is that Detroit has just petitioned the US government for a waiver from having to comply with its obligation to join the Obamacare Borg. It seems that the city of Detroit, having more than 50 employees, is required to provide those employees – all AFSCME union members, as if it needs to be said – their health coverage. Only this health coverage, under Obamacare, is grossly more expensive than it was before.

This still comes as a shock to the 40% numbskulls in our country who believed the hucksters who sold them on Obamacare back in 2010; it is double-grossly more expensive than they were promised it would be. The cost was supposed to come down, after all. It is The Affordable Care Act. Remember? Employers who don't want to pay for it are simply racist, because the cost of insurance goes down when more people join the premium pool. Remember?!?

I do. I also remember saying several times that it doesn't work the way idiot liberals were told it does … by the politicians who desperately needed to buy their votes … because they couldn't win otherwise. They're now finding out that I was right. Again.
So if Greek and Soviet foolishness won't convince the liberals of what their liberal folly will bring, then they should take a drive to the Motor City and see for themselves. Tomorrowland has never been so real.
News

IV Infiltration Burn Case - Arizona Defense Verdict

March 22, 2017

Overland Park, Kansas – March 22, 2017 – Preferred Physicians Medical (PPM), industry-leading provider of professional liability insurance for anesthesia practices, announced that a Maricopa County, Arizona jury returned a defense verdict in favor of PPM's insured pediatric anesthesiologist and his anesthesia practice group.

A five-month-old female underwent craniotomy and fronto-orbital advancement due to premature closure of the sutures of the skull. General anesthesia was provided by PPM's insured pediatric anesthesiologist. After induction, two IVs were started, one in the patient's foot and the other in her left hand. During the procedure the patient lost an estimated 250 ccs of blood, which was replaced. She also had a period of hypotension that was treated with diluted calcium chloride. At the end of the procedure, when the patient's arms were untucked, an infiltration of the left arm and hand was noted. The patient sustained chemical burns to her left upper extremity as a result of the calcium chloride infiltration. Conservative treatment was provided. The child has scarring on her left forearm and hand, but no functional limitations.

The parents, on behalf of the child, filed a lawsuit naming the anesthesiologist and his anesthesia practice group as defendants. The patient alleged calcium gluconate should have been administered instead of calcium chloride. The patient claimed the administration of calcium gluconate would have resulted in her sustaining minimal, if any, scarring. The patient's demand before trial was $850,000. During negotiations, PPM offered $100,000 on behalf of the anesthesiologist. The patient reduced her demand to $650,000 and the case proceeded to trial.

The patient's anesthesiology expert, Gregory B. Hammer, M.D. from Stanford, California, testified the administration of calcium chloride was below the standard of care. He testified further that the anesthesiologist should have ensured calcium gluconate was available in the Pyxis or obtained it from the facility's pharmacy. He also testified the period of hypotension did not require immediate treatment and that waiting 10-15 minutes to obtain calcium gluconate from the pharmacy would have been appropriate. The defense anesthesiology expert testified the administration of diluted calcium chloride was within the standard of care. He testified further that the child's condition was emergent enough that it was appropriate not to wait to obtain calcium gluconate from the pharmacy.

After a four-day trial, the jury deliberated approximately 20 minutes before returning a verdict in favor of the anesthesiologist and his anesthesia practice group. Gary Fadell, Esq. with the law firm Fadell, Cheney & Burt, PLLC in Phoenix, Arizona represented PPM's insureds. Shelley Strome, Senior Claims Specialist, managed the file on behalf of PPM.
405 F.Supp. 506 (1975)

TEMPO TRUCKING AND TRANSFER CORPORATION, Plaintiff,
v.
G. R. DICKSON, Acting Commissioner of Customs, U. S. Customs Service, Department of the Treasury of the United States, Defendant.

No. 75 C 1208.

United States District Court, E. D. New York.

December 19, 1975.

*507 *508 *509 Marvin H. Wolf, Groman, Wolf & Ross, P.C., Carle Place, N. Y., for plaintiff.

*510 David G. Trager, U. S. Atty., E.D. N.Y., by Douglas J. Kramer, Asst. U. S. Atty., for defendant.

MEMORANDUM AND ORDER

BRAMWELL, District Judge.

This is an action to review a final decision of the Acting Commissioner of Customs, revoking plaintiff's customhouse cartman's license. Jurisdiction is asserted under 28 U.S.C. § 1346, 28 U.S.C. § 1355, and 5 U.S.C. § 701 et seq. Both parties have moved for judgment on the pleadings pursuant to Rule 12(c) of the Federal Rules of Civil Procedure.

Plaintiff Tempo Trucking and Transfer Corporation (hereinafter referred to as "Tempo") is engaged in the operation of a licensed and bonded trucking concern at the John F. Kennedy International Airport. Tempo was issued a customhouse cartman's license by the United States Customs Service, Department of the Treasury on June 29, 1964. The license authorizes Tempo to receive and transport bonded merchandise which has been entered and examined for customs purposes.[1]

On February 15, 1972, a carton containing jewelry was transported from Montreal to the J.F.K. Airport via Air Canada. The carton was to be picked up by Tempo at Air Canada and delivered to Pan American World Airways, Inc., which thereafter was to transport it to Paris (Govt. Ex. 8; Minutes of Administrative Hearing,[2] at 64, 93, 105). At approximately 10:00 A.M. on February 28, 1972, Tempo employee Thomas Rupa picked up the carton at Air Canada and delivered it to the Tempo warehouse (Tr. 71, 82). At about 2:45 P.M. Tempo employee Frank Elia attempted to deliver the carton to Pan American. At that time Cargo Service Supervisor Robert Duscek refused to accept the carton on behalf of Pan American, finding that it was slit open and appeared to contain a cinderblock (Tr. 117, 120-123). At approximately 9:00 P.M. two Tempo employees attempted to deliver the same carton to Pan American. Pan American employee William Barry rejected the carton and thereafter notified agents of the United States Customs Service (Tr. 143, 144).

By letter dated December 23, 1974, Area Director of Customs, George F. Dunn notified Tempo of his intention to revoke its customhouse cartman's license for violations of 19 C.F.R. § 112.30(a)(5) and (a)(9) (Govt. Ex. 1).[3] *511 Plaintiff's counsel, by letter dated December 30, 1974, filed a notice of appeal and requested a hearing (Govt. Ex. 2). A hearing was held before Hearing Officer Carl F. Nolte, Jr. on January 23, 1975 pursuant to 19 C.F.R. § 112.30(D). On March 14, 1975, the Hearing Officer transmitted his findings and conclusions to the Acting Commissioner of Customs, recommending that the license be revoked. By letter of decision dated July 9, 1975, the Acting Commissioner of Customs revoked Tempo's license effective July 31, 1975 (Complaint, Ex. "A"). On July 28, 1975, this Court issued a Temporary Restraining Order enjoining, inter alia, the revocation of Tempo's license. By stipulation filed August 7, 1975, the TRO was extended until there was a determination regarding Tempo's application for a preliminary injunction.
This Court denied the application for a preliminary injunction in an order entered September 23, 1975.[4]

Plaintiff seeks to overturn the determination of the Acting Commissioner of Customs on three grounds: (1) The Hearing Officer erred by failing to explore the facts and circumstances underlying the conviction of Tempo's president Anthony Garite as he was convicted on the basis of an "Alford" plea; (2) There was insufficient evidence that Tempo was guilty of any deceptive practices; and (3) The Hearing Officer, under the circumstances of this case, abused his discretion in denying Tempo's request for an adjournment of the administrative hearing.

JURISDICTION

Plaintiff asserts that jurisdiction may be predicated under 28 U.S.C. § 1346. Under the Tucker Act, 28 U.S.C. § 1346(a)(2), a district court has jurisdiction only over claims against the United States for money damages. Richardson v. Morris, 409 U.S. 464, 93 S.Ct. 629, 34 L.Ed.2d 647 (1973); R. E. D. M. Corporation v. LoSecco, 291 F.Supp. 53 (S.D.N.Y.1968), aff'd 412 F.2d 303 (2d Cir. 1969); Wells v. United States, 280 F.2d 275 (9th Cir. 1960); Clay v. United States, 93 U.S.App.D.C. 119, 210 F.2d 686 (1953), cert. denied, 347 U.S. 927, 74 S.Ct. 530, 98 L.Ed. 1080 (1954); see generally 1 Barron and Holtzoff, Federal Practice and Procedure (Wright ed.): Civil § 54. Plaintiff makes no claim for money damages here but rather seeks judicial review of an administrative determination revoking its license. Thus, plaintiff can not properly invoke the Tucker Act as a jurisdictional predicate.

Plaintiff further claims that jurisdiction exists under 28 U.S.C. § 1355. This statute provides that district courts have original jurisdiction "of any action or proceeding for the recovery or enforcement of any fine, penalty, or forfeiture". See generally 1 Barron and Holtzoff, Federal Practice and Procedure (Wright ed.): Civil § 44. Having been unable to find any authority in support of plaintiff's contention, this Court concludes that 28 U.S.C. § 1355 does not confer jurisdiction.

Although the Tariff Act of 1930 does not expressly provide for judicial review of an order revoking a customhouse cartman's license,[5] plaintiff contends that the Administrative Procedure Act, 5 U.S.C. § 701 et seq. (hereinafter "A.P.A."), confers jurisdiction upon this Court to review such a determination. The A.P.A. provides in relevant part:

    *512 A person suffering legal wrong because of agency action, or adversely affected or aggrieved by agency action within the meaning of a relevant statute, is entitled to judicial review thereof. 5 U.S.C. § 702.

    The form of proceeding for judicial review is the special statutory review proceeding relevant to the subject matter in a court specified by statute or, in the absence or inadequacy thereof, any applicable form of legal action . . . in a court of competent jurisdiction. 5 U.S.C. § 703.

    Agency action made reviewable by statute and final agency action for which there is no other adequate remedy in a court are subject to judicial review. 5 U.S.C. § 704.

It is only the introductory clause of section 10 of the A.P.A. which purports to place limitations on the right to judicial review. The introductory clause excludes administrative action from judicial review to the extent that "statutes preclude judicial review", or "agency action is committed to agency discretion by law." 5 U.S.C. § 701.
First, a statute must demonstrate clear and convincing evidence of an intent to preclude judicial review before courts will cut off an aggrieved party's right to be heard.[6] This Court has found no clear and convincing evidence in the Tariff Act of 1930 or in its legislative history indicating that Congress intended to preclude judicial review of an order revoking the license of a customhouse cartman.

Second, the agency discretion exception to the general rule that agency action is reviewable under the A.P.A. is a narrow one, and is only "applicable in those rare instances where `statutes are drawn in such broad terms that in a given case there is no law to apply' (citation omitted)." Citizens to Preserve Overton Park, Inc. v. Volpe, 401 U.S. 402, 410, 91 S.Ct. 814, 821, 28 L.Ed.2d 136 (1971). See Adams v. Richardson, 156 U.S.App.D.C. 267, 480 F.2d 1159 (1973). Here the grounds for the suspension or revocation of a cartman's license as set forth in 19 C.F.R. § 112.30(a) are sufficiently specific to permit judicial review.

This Court is mindful of the fact that neither the Tariff Act of 1930 nor the applicable Customs regulations provide for judicial review of an order revoking a cartman's license. However, where a statute does not specifically provide for judicial review, "nonstatutory judicial review" is still available.[7] Aquavella v. Richardson, 437 F.2d 397, 402 (2d Cir. 1971). Thus, having eliminated the exceptions to the presumption of reviewability set forth in the introductory clause of section 10, it would appear *513 that the A.P.A. affords the plaintiff a right to judicial review of the decision revoking its license.

The remaining question is whether the A.P.A. provides an independent basis for federal jurisdiction. The Second Circuit has not provided a conclusive answer to this question.[8] Indeed, this issue has produced a split of authority throughout the country.[9] Given the plethora of scholarly opinion on the subject, it would not appear to be profitable to attempt to review the numerous arguments on both sides of the issue. This Court is guided by the legislative history of the A.P.A. which does not provide a conclusive answer to the jurisdictional question but does indicate that Congress intended to afford a broad right of judicial review. In Citizens Committee For Hudson Valley v. Volpe, supra, 425 F.2d at 102, the Second Circuit stated:

    There can be no question at this late date that Congress intended by the Administrative Procedure Act to assure comprehensive review of "a broad spectrum of administrative actions," including those made reviewable by specific statutes without adequate review provisions as well as those for which no review is available under any other statute. Abbott Laboratories v. Gardner, supra, 387 U.S. [136] at 140, 87 S.Ct. 1507 [18 L.Ed. 2d 681]; see S.Rep. No. 752, 79th Cong. 1st Sess., 26 (1945); H.R.Rep. No. 1980, 79th Cong. 2d Sess., 41 (1946).

Were this Court to hold that the A.P.A. does not provide an independent grant of federal jurisdiction, it would frustrate "the important goal of subjecting final agency action to judicial scrutiny." 425 F.2d at 102. This Court therefore concludes that the A.P.A. provides it with jurisdiction to review the order of the Acting Commissioner of Customs.

STANDARD OF REVIEW

At the outset this Court must determine the proper standard of judicial review. Section 10(e) of the A.P.A., 5 U.S.C. § 706, provides that a reviewing court shall:

    . . . . . .
    (2) hold unlawful and set aside agency action, findings, and conclusions found to be —

    (A) arbitrary, capricious, an abuse of discretion, or otherwise not in accordance with law; [or]

    *514 (E) unsupported by substantial evidence in a case subject to sections 556 and 557 of this title or otherwise reviewed on the record of an agency hearing provided by statute;

As the findings of the Acting Commissioner of Customs were based upon a hearing record,[10] the appropriate standard for reviewing those findings is the "substantial evidence test". Camp v. Pitts, 411 U.S. 138, 141, 93 S.Ct. 1241, 1243, 36 L.Ed.2d 106 (1973); National Nutritional Foods Association v. Weinberger, 512 F.2d 688, 700 (2d Cir. 1975). Thus, this Court must determine whether, considered as a whole, the record in this case contains substantial evidence to support the findings made below. Universal Camera Corp. v. National Labor Relations Board, 340 U.S. 474, 71 S.Ct. 456, 95 L.Ed. 456 (1951). In Consolo v. Federal Maritime Commission, 383 U.S. 607, 619-620, 86 S.Ct. 1018, 1026, 16 L.Ed.2d 131 (1966), the Supreme Court set forth the definition of "substantial evidence":

    We have defined "substantial evidence" as "such relevant evidence as a reasonable mind might accept as adequate to support a conclusion." Consolidated Edison Co. of New York v. National Labor Relations Board, 305 U.S. 197, 229, 59 S.Ct. 206, 217, 83 L. Ed. 126. "[I]t must be enough to justify, if the trial were to a jury, a refusal to direct a verdict when the conclusion sought to be drawn from it is one of fact for the jury." National Labor Relations Board v. Columbian Enameling & Stamping Co., 306 U.S. 292, 300, 59 S.Ct. 501, 505, 83 L.Ed. 660 (footnote omitted). This is something less than the weight of the evidence, and the possibility of drawing two inconsistent conclusions from the evidence does not prevent an administrative agency's findings from being supported by substantial evidence. National Labor Relations Board v. Nevada Consolidated Copper Corp., 316 U.S. 105, 106, 62 S.Ct. 960, 961, 86 L. Ed. 1305; Keele Hair & Scalp Specialists, Inc. v. FTC, 5 Cir., 275 F.2d 18, 21.

In considering the penalty imposed upon Tempo by the Acting Commissioner of Customs, this Court is confined to a limited inquiry. The assessment of penalties by an administrative agency is not a factual finding but the exercise of a discretionary grant of power. A review of such penalties is limited to a determination of whether there has been an abuse of discretion. Jacob Siegel Co. v. Federal Trade Commission, 327 U.S. 608, 611-612, 66 S.Ct. 758, 760, 90 L.Ed. 888 (1946); G. H. Miller & Company v. United States, 260 F.2d 286, 296 (7th Cir. 1958), cert. denied, 359 U.S. 907, 79 S.Ct. 582, 3 L.Ed.2d 572; Beall Construction Company v. Occupational Safety and Health Review Commission, 507 F.2d 1041, 1046 (8th Cir. 1974); cf. Nadiak v. C.A.B., 305 F.2d 588, 593 (5th Cir. 1962), cert. denied, 372 U.S. 913, 83 S.Ct. 729, 9 L.Ed.2d 722. In this regard when the penalty chosen by an agency is within the range of sanctions provided by applicable disciplinary regulations, the severity of the sanction imposed is within the discretion of the agency. Ricci v. United States, 507 F.2d 1390, 1393 (Ct.Cl.1974). Further, the imposition of a sanction within the authority of an administrative agency is not rendered invalid in a particular case because it is more severe than sanctions imposed in other cases. Butz v. Glover Livestock Commission Company, Inc., 411 U.S. 182, 93 S.Ct. 1455, 36 L.Ed.2d 142 (1973), rehearing denied, 412 U.S. 933, 93 S.Ct. 2746, 37 L.Ed.2d 162.
After a thorough consideration of the entire record of the administrative hearing, it is the opinion of this Court that substantial evidence exists to support the findings made below. This Court also finds that the Acting Commissioner of Customs did not abuse his discretion in revoking Tempo's customhouse cartman's license.

*515 MERITS

Ten independent grounds for the suspension or revocation of the license of a customhouse cartman or lighterman are set forth in 19 C.F.R. § 112.30(a). This section provides in part:

    (a) Grounds for suspension or revocation of licenses. The district director may revoke or suspend the license of a cartman or lighterman if:

    . . . . . .

    (5) The holder of such a license or an officer of a corporation holding such a license is convicted of a felony, or is convicted of a misdemeanor involving theft, smuggling, or a theft-connected crime; [or]

    . . . . . .

    (9) The holder is guilty of any negligence, dishonest or deceptive practices or carelessness in the conduct of his business;

In a letter of decision dated July 9, 1975, the Acting Commissioner of Customs made the following findings:

    That the conduct of the hearing and procedural steps preliminary and subsequent thereto comply with the pertinent statutes and regulations;

    That the provisions of sections 112.30(a)(5) and (9) of the Customs Regulations authorize the revocation of the license of a customhouse cartman if an officer of a corporation holding a license is convicted of a misdemeanor involving theft, smuggling, or a theft-connected crime, or if the holder of such a license is guilty of any dishonest or deceptive practices in the conduct of his business;

    That Mr. Anthony Garite is the President of Tempo Trucking and Transfer Corporation;

    That Mr. Anthony Garite was convicted of a misdemeanor involving theft, smuggling, or a theft-connected crime in the U. S. District Court for the Eastern District of New York on May 4, 1973;

    That the evidence establishes that Mr. Garite was guilty of a deceptive practice on February 28, 1972, when he affixed Customs warning labels to a carton of merchandise without authorization; and

    That the charges were proved by a preponderance of the evidence. (Complaint, Ex. "A").

I.

In order to prove that an officer of Tempo was convicted of a misdemeanor involving theft or a theft-connected crime, the government introduced the following documents into evidence at the administrative hearing: a certified copy of a judgment of conviction of Tempo's then president Anthony Garite (Govt. Ex. 4); a transcript of the proceedings at Garite's sentencing on May 4, 1973 (Govt. Ex. 4-A); a transcript of the proceedings at the time Garite entered a guilty plea on February 26, 1973 (Govt. Ex. 4-B); and letters indicating that Garite was in fact the president of Tempo at the time of his conviction (Govt. Exs. 5, 6-A, 6-B). It is beyond dispute that the record contains substantial evidence to support the finding that Anthony Garite was convicted of violating 18 U.S.C. § 659 (a statute involving embezzlement and theft), and that Garite was president of Tempo at the time of his conviction.

Tempo now argues that since Garite's conviction was based upon an "Alford" type plea, the Hearing Officer abused his discretion by failing to explore the facts and circumstances underlying the conviction.[11]
*516 Plaintiff notes that 19 C.F.R. § 112.30(a) is not mandatory, i.e. the district director of customs is not required to suspend or revoke a license when one or more of the ten predicate grounds is factually established. Plaintiff argues that given the non-mandatory nature of 19 C.F.R. § 112.30(a), the mere finding of Garite's conviction was not a sufficient ground for revoking plaintiff's license since Garite never did expressly admit that he committed the particular acts claimed to underlie the conviction. Despite the clear language of this regulation, plaintiff contends that because Garite's conviction was based upon an Alford plea, the Hearing Officer had a duty to explore the facts underlying Garite's conviction.

Plaintiff's novel argument concerns the consequences of an Alford plea outside the particular case in which it is interposed. Although there is scant authority regarding this question, there is considerable authority with respect to the consequences of a plea of nolo contendere. It is therefore useful to proceed by analogy.

In North Carolina v. Alford, 400 U.S. 25, 91 S.Ct. 160, 27 L.Ed.2d 162 (1970), the Supreme Court held that a plea of guilty accompanied by protestations of innocence is constitutionally acceptable and is not so indicative of coercion and involuntariness as to invalidate the plea.[12] In support of its conclusion the Court stated:

    "[o]rdinarily, a judgment of conviction resting on a plea of guilty is justified by the defendant's admission that he committed the crime charged against him and his consent that judgment be entered without a trial of any kind. The plea usually subsumes both elements, and justifiably so, even though there is no express admission by the defendant that he committed the particular acts claimed to constitute the crimes charged in the indictment" (Citations omitted). 400 U.S. at 32, 91 S.Ct. at 164.

Thus the Court reasoned: that the express admission of guilt by a defendant is not a constitutional requisite to the imposition of a criminal penalty; and that a defendant may voluntarily, knowingly, and understandingly consent to the imposition of such a penalty even if he is unwilling or unable to admit his participation in the acts constituting the crime. 400 U.S. at 37, 91 S.Ct. at 167.

It is clear that Alford permits a court to accept a plea of guilty even when the defendant asserts his innocence and does not admit that he committed the particular acts claimed to constitute the crimes charged in the indictment. However, it is important to note that Alford did not eliminate the mandate of former Rule 11 of the Federal Rules of Criminal Procedure (1966), that a court "`shall not enter a judgment upon a plea of guilty unless it is satisfied that there is a factual basis for the plea'".[13] 400 U.S. at 38 n. 10, 91 S.Ct. at 168; United States ex rel. Dunn v. Casscles, 494 F.2d 397 (2d Cir. 1974).

The plea of nolo contendere has been defined in a wide variety of ways.[14] The general rule is that a plea of nolo contendere constitutes an implied confession of guilt and has the same effect as a plea of guilty as far as the proceedings on an indictment are concerned. Thus, a defendant who has been sentenced on such a plea is deemed convicted of the offense for which he was indicted. Hudson v. United States, 272 U.S. 451, 47 S.Ct. 127, 71 L.Ed. 347 (1925); Tseung Chu v. Cornell, 247 F.2d 929 (9th Cir. 1957); cert. denied, 355 U.S. 892, 78 S.Ct. 265, 2 L.Ed.2d 190.
*517 The principal difference between a plea of guilty and a nolo plea is that the latter may not be used against a defendant in a civil action based upon the same acts. As the nolo plea is not an express admission of guilt it can not be received as such in a subsequent proceeding against the defendant. 1 Wright and Miller, Federal Practice and Procedure: Criminal § 177; Annot., 89 A.L.R.2d 540, § 37.

Former Rule 11 of the Federal Rules of Criminal Procedure (1966) permits a defendant to plead nolo contendere with the consent of the court.[15] However, under both former Rule 11 and new Rule 11(b), a court may accept a plea of nolo contendere without being satisfied that there is a factual basis for the plea.[16]

Numerous courts have drawn a distinction between a defendant's guilt and his conviction in regard to the effect of a nolo plea outside the case in which it is interposed. Thus, a defendant's conviction upon a plea of nolo contendere is not an express admission of guilt, therefore the defendant is not estopped in a subsequent civil proceeding from denying the facts that were the basis of his plea. However, the fact of his conviction upon the nolo plea may be shown in a subsequent civil proceeding, and such a conviction subjects the defendant to the same consequences as if it were entered after a plea of guilty or not guilty. See Tseung Chu v. Cornell, supra; Annot., 89 A.L.R.2d 540 § 42. In United States v. Bagliore, 182 F.Supp. 714, 716 (E.D.N.Y.1960), the Court noted this distinction stating "[t]hough [a plea of nolo contendere] . . . does not constitute an admission of guilt in subsequent actions, it is of no help to a defendant if the conviction (regardless of how obtained), rather than admission of guilt, is the basis for the subsequent action."

This distinction is especially important where a statute provides that the "conviction" of a person of certain crimes is a ground for the revocation of a license. See Annot., 89 A.L.R.2d 540 § 45. A majority of cases support the view that where a statute provides as such, the judgment constitutes a "conviction" despite the fact that it is entered upon a nolo plea.[17] In general these cases do not afford a defendant the right to establish his innocence at a revocation proceeding merely because his conviction was based upon a nolo plea. Moreover, the cases in support of this position do not provide that when a license revocation hearing is held, the hearing officer has a duty to explore behind the conviction to determine if the defendant was in fact guilty of the crime for which he entered a nolo plea.[18]

*518 The applicable customs regulation, 19 C.F.R. § 112.30(a)(5), provides that the conviction of a corporate officer, not proof of facts underlying the conviction, is a ground for the suspension or revocation of a license. This Court is in agreement with those cases which hold that a judgment entered upon a nolo plea constitutes a "conviction" under a statute similar to the aforementioned customs regulations. Following this line of authority, it is the opinion of this Court that if Garite's conviction were based upon a plea of nolo contendere, the Hearing Officer would not have had a duty to explore the facts underlying the conviction, despite the fact that no factual basis for the nolo plea would have been established.
As Garite's conviction was based upon an Alford plea which was supported by the finding of a factual basis, it would seem to follow a fortiori that the hearing officer did not have a duty to explore the facts behind Garite's conviction. Consequently, this Court concludes that there was no abuse of discretion on the part of the Hearing Officer.

II.

Plaintiff further argues that there was insufficient evidence that it was guilty of any deceptive practices to sustain a revocation of its license. It is provided in 19 C.F.R. § 18.4(a)(1) that conveyances in which bonded merchandise is transported shall be sealed with red in-bond customs seals under customs supervision.[19]

The record demonstrates through legally competent evidence that when the carton was first picked up from customs supervision at Air Canada by Tempo employee Thomas Rupa, the carton was in good condition, had only one type of tape on it, and had no red customs seals (Tr. 71, 74, 99, 107-109). At the time that Tempo employee Frank Elia engaged in Tempo's first attempt to deliver the carton to Pan American, the carton was damaged, appeared to contain a cinderblock, and still did not have any customs seals (Tr. 122, 123, 129, 135). When Tempo engaged in its second attempt to deliver the carton to Pan American, the carton was heavily taped and had customs seals (Tr. 143, 145). On February 29, 1972, a Tempo employee brought the carton back to Air Canada and requested that a customs inspector cord and seal the carton. At that time the carton was covered with customs seals and had been retaped over the original taping (Tr. 76, 77).

In addition, the record contains the testimony of two special agents of the Bureau of Customs regarding an admission by Anthony Garite at an interview on February 29, 1972. At that time Garite stated that he first learned that jewelry was missing from the carton when Frank Elia returned the carton to the Tempo warehouse from Pan American. Garite stated that he feared that Tempo's license would be in jeopardy if Customs learned that the jewelry disappeared from the carton while it was in the possession of Tempo. He therefore retaped the carton and placed customs seals on it. He also directed two Tempo employees to attempt to redeliver the carton to Pan American (Tr. 163, 167-170, 205-206).

Indeed, upon reading the record as a whole, substantial evidence exists upon which a reasonable mind could properly arrive at the conclusion reached by the Acting Commissioner of Customs. Certainly, the specific intent necessary for the finding of a deceptive practice has a substantial basis in the admission of Garite and the inferences drawn from the direct testimony. *519 When viewed in the light furnished by the record in its entirety, the Court finds substantial evidence supporting the conclusion that Anthony Garite was guilty of a deceptive practice on February 28, 1972, when he affixed customs labels to a carton of merchandise without authorization.

Plaintiff argues that the affixing of customs seals without customs supervision is at best a "technical" violation of the customs regulations and is not a sufficient basis for the revocation of its license. Plaintiff notes that the testimony at the revocation hearing indicated substantial confusion regarding the purpose of customs seals. Plaintiff's assertions are unpersuasive. Administrative rules and regulations, if not in conflict with express statutory provisions, "have `the force and effect of law'", General Services Administration v. Benson, 415 F.2d 878, 880 (9th Cir. 1969); Whattoff v. United States, 355 F.2d 473, 478 (8th Cir. 1966),
and "their appearance in the Federal Register is tantamount to legal notice of their contents". United States v. Millsap, 208 F.Supp. 511, 516 (D. Wyo.1962); Doran v. United States, 304 F.Supp. 1162, 1169 (D.Puerto Rico 1969). Thus, Tempo had constructive notice that it was only to use customs labels with customs supervision under 19 C.F.R. § 18.4(a)(1). Indeed, there was testimony that it is customary for truckers to place these labels on conveyances without customs supervision (Tr. 100-104, 109, 127, 146, 174). However, the testimony at the hearing regarding the use and purpose of customs labels is not material to the legal import of the aforementioned regulation. Moreover, it must be noted that Tempo's violation of this regulation was found to be part of a deceptive scheme. As such it can not be considered a mere "technical" violation of customs regulations.

III.

At the outset of the administrative hearing on January 23, 1975, plaintiff's counsel requested an adjournment which was denied by the Hearing Officer (Tr. 35). Plaintiff contends that under the circumstances, the refusal to adjourn the hearing was an abuse of discretion.

On the morning of January 23, 1975, prior to the commencement of the hearing, Anthony Garite was served with an Order to Show Cause returnable February 7, 1975. This Show Cause Order sought that the judgment and order of probation of Garite entered May 4, 1973 (U. S. A. v. Anthony Garite, 73 CR 202), be "corrected" to include the Tempo Trucking and Transfer Corporation and to require that Garite surrender Tempo's license to the Bureau of Customs (Pl.Ex. "A").[21] At the commencement of the administrative hearing defendant's counsel asserted that the Show Cause Order could bring about the revocation of Garite's probation. Defense counsel therefore requested an adjournment *520 of the administrative hearing and argued that if one were not granted he might have to advise Garite to invoke the Fifth Amendment (Tr. 24). The Hearing Officer denied the application stating that he did not believe that Garite could incriminate himself at Tempo's license revocation hearing (Tr. 35). Tempo did not present a defense at the administrative hearing although Tempo's attorney did cross-examine all of the government witnesses. Tempo now argues that the refusal to grant an adjournment was the proximate cause for Tempo's failure to set forth any evidentiary showing.

It should be noted that the government never called Garite to take the stand so that he never invoked the privilege against self-incrimination on the record. See 8 Wigmore, Evidence § 2268 (McNaughton rev. 1961). Thus, the issue presented is whether the Hearing Officer, knowing that Garite's counsel might advise him not to testify at the license revocation hearing, abused his discretion by denying Tempo's request for an adjournment.

The Hearing Officer correctly concluded that Garite could not "incriminate" himself at the license revocation hearing. Garite could not interpose the privilege against self-incrimination as the subject of the administrative hearing was the factual circumstances underlying his plea of guilty and conviction. "It is well established that once a witness has been convicted for the transactions in question, he is no longer able to claim the privilege of the Fifth Amendment and may be compelled to testify." United States v. Romero, 249 F.2d 371, 375 (2d Cir. 1957). See United States v. Gernie, 252 F.2d 664, 670 (2d Cir. 1958), cert. denied, 356 U.S. 968, 78 S.Ct. 1006, 2 L.Ed.2d 1073, rehearing denied, 357 U.S. 944, 78 S.Ct. 1383, 2 L.Ed.2d 1558.
Garite's plea of guilty divested him of his Fifth Amendment right to refuse to testify regarding the transactions and events underlying his convictions.[22] United States v. Hoffman, 385 F.2d 501, 504 (7th Cir. 1967), cert. denied 390 U.S. 1031, 88 S.Ct. 1424, 20 L.Ed.2d 288; United States v. Karger, 439 F.2d 1108, 1109 (1st Cir. 1971), cert. denied, 403 U.S. 919, 91 S.Ct. 2230, 29 L.Ed.2d 696. See generally, 8 Wigmore, Evidence, § 2279 (McNaughton rev. 1961); McCormick, Evidence, § 121 (2d Ed. 1972); Annot., 9 A.L.R.3d 990 (1966); 21 Am.Jur.2d, Criminal Law § 359. Thus, the privilege against self-incrimination did not provide a justification for Garite's failure to testify at the license revocation hearing.

After having carefully considered the record it is the opinion of this Court that neither Garite nor Tempo were deprived of any fundamental rights at the administrative hearing. Tempo's failure to set forth any evidentiary showing was the product of its own independent choice. Further, the Hearing Officer was bound to follow the mandate of 19 C.F.R. § 112.30(d)(1) which provides that if a hearing is held it must take place within 30 days following application therefor. As Tempo requested a hearing by letter dated December 30, 1974 (Govt. Ex. 2), the Hearing Officer could not have adjourned the hearing to a date after the return date of the Show Cause Order. Under the circumstances presented it was not an abuse of discretion for the Hearing Officer to deny the request for an adjournment.

IV.

Accordingly, defendant's motion for judgment on the pleadings is granted.

NOTES

[1] The Customs regulations providing for the licensing of cartmen are set forth in 19 C.F.R. § 112. These regulations are authorized by 19 U.S.C. §§ 66, 1551a, 1565, and 1624.

[2] Hereinafter referred to as "Tr.".

[3] The Area Director's letter provided in part as follows:

    Please take notice that Section 112.30(a) provides that the district director may revoke or suspend the license of a cartman or lighterman if:

    (5) The holder of such a license or an officer or a corporation holding such a license is convicted of a felony, or is convicted of a misdemeanor involving theft, smuggling, or a theft-connected crime;

    (9) The holder is guilty of any negligence, dishonest or deceptive practices or carelessness in the conduct of his business.

    Pursuant thereto I propose to revoke your Customhouse Cartman's License No. 1711 on the following grounds:

    (1) That Anthony Garite an officer of your corporation on May 4, 1973 was convicted of a theft-connected misdemeanor in the United States District Court for the Eastern District of New York, to wit, violating T-18, U.S.C. 659, in that on or about February 28, 1972 he did knowingly have in his possession goods of a value under one hundred dollars ($100.00), that is, jewelry, which had been embezzled and stolen while said goods were moving as, were part of, and constituted a foreign shipment of freight, express and other property from Canada to France, then knowing the said goods to have been embezzled and stolen.

    (2) That you were guilty of dishonest and deceptive practices in the conduct of your business in that on February 28, 1972 Anthony Garite, an officer of your corporation deceptively affixed Customs Warning Labels to a carton of merchandise without authorization.
[4] This Court denied plaintiff's motion for a preliminary injunction having found it unlikely that plaintiff would succeed on the merits. Sonesta International Hotels Corp. v. Wellington Associates, 483 F.2d 247, 250 (2d Cir. 1973); Columbia Pictures Industries, Inc. v. American Broadcasting Companies, Inc., 501 F.2d 894, 897 (2d Cir. 1974); The instant memorandum opinion shall constitute this Court's findings of fact and conclusions of law as required by Rule 52(a) of the Federal Rules of Civil Procedure. [5] It is noteworthy that 19 U.S.C. § 1641(b) expressly provides for an appeal to the United States Court of Appeals from any order of the Secretary of the Treasury suspending or revoking the license of a customhouse broker. [6] In Association of Data Processing Service Organizations, Inc. v. Camp, 397 U.S. 150, 156-157, 90 S.Ct. 827, 831, 25 L.Ed.2d 184 (1970), the Supreme Court quoted from the legislative history of the A.P.A. as follows: The statutes of Congress are not merely advisory when they relate to administrative agencies, any more than in other cases. To preclude judicial review under this bill a statute, if not specific in withholding such review, must upon its face give clear and convincing evidence of an intent to withhold it. The mere failure to provide specially by statute for judicial review is certainly no evidence of intent to withhold review. H.R.Rep. No. 1980, 79th Cong., 2d Sess., 41. See also Barlow v. Collins, 397 U.S. 159, 90 S.Ct. 832, 25 L.Ed.2d 192 (1970). [7] In the words of one commentator, "[n]onstatutory review of federal administrative action refers to judicial review that is not obtained under a specific statutory review provision; it includes review proceedings that seek specific relief against a federal officer by injunction, mandamus, habeas corpus, or other common law remedies." Cramton, Nonstatutory Judicial Review of Federal Administrative Action: The Need For Statutory Reform of Sovereign Immunity, Subject Matter Jurisdiction, and Parties Defendant, 68 Mich.L.Rev. 389, 394-395 (1970). See Jacoby, The Effect of Recent Changes in The Law of "Nonstatutory" Judicial Review, 53 Georgetown L.J. 19 (1964); Byse and Fiocca, Section 1361 of the Mandamus and Venue Act of 1962 and "Nonstatutory" Judicial Review of Federal Administrative Action, 81 Harv.L.Rev. 308 (1967). [8] See Ove Gustavsson Contracting Co. v. Floete, 278 F.2d 912 (2d Cir. 1960), cert. denied, 364 U.S. 894, 81 S.Ct. 225, 5 L.Ed.2d 188 (1960); Cappadora v. Celebrezze, 356 F.2d 1 (2d Cir. 1966); Toilet Goods Assoc. v. Gardner, 360 F.2d 677 (2d Cir. 1966), aff'd, 387 U.S. 158, 87 S.Ct. 1520, 18 L.Ed. 2d 697 (1967); Citizens Committee For Hudson Valley v. Volpe, 425 F.2d 97 (2d Cir. 1970), cert. denied, 400 U.S. 949, 91 S. Ct. 237, 27 L.Ed.2d 256 (1970); Aquavella v. Richardson, supra; Mills v. Richardson, 464 F.2d 995 (2d Cir. 1972); Aguayo v. Richardson, 473 F.2d 1090 (2d Cir. 1973). [9] The cases holding that the A.P.A. does provide an independent grant of federal jurisdiction include: Davis v. Romney, 355 F. Supp. 29 (E.D.Pa.1973), aff'd in part, 490 F.2d 1360 (3d Cir. 1974); Ortego v. Weinberger, 516 F.2d 1005 (5th Cir. 1975); Rothman v. Hospital Service of Southern California, 510 F.2d 956 (9th Cir. 1975); Moore-McCormack Lines, Inc. v. United States, 413 F.2d 568, 188 Ct.Cl. 644 (1969); School Board of Okaloosa County v. Richardson, 332 F.Supp. 1263 (N.D.Fla.1971); Lyons v. Weinberger, 376 F.Supp. 248 (S. D.N.Y.1974). The cases answering this question in the negative include: Grant v. 
Hogan, 505 F.2d 1220 (3d Cir. 1974); International Federation of Professional and Technical Engineers, Loc. No. 1 v. Williams, 389 F.Supp. 287 (E.D.Va.1974), aff'd 510 F.2d 966 (4th Cir. 1975); Bramblett v. Desorby, 490 F.2d 405 (6th Cir. 1974); Twin Cities Chippewa Tribal Council v. Minnesota Chippewa Tribe, 370 F.2d 529 (8th Cir. 1967); Yahr v. Resor, 339 F.Supp. 964 (E. D.N.C.1972). It is noteworthy that some courts have relied upon the sovereign immunity doctrine in holding that the A.P.A. does not afford an independent jurisdictional keystone. The Second Circuit disposed of this problem in Kletschka v. Driver, 411 F.2d 436, 445 (2d Cir. 1969); where it held that the "A.P.A. constitutes a waiver of sovereign immunity concerning those claims which come within its scope." Accord, Kingsbrook Jewish Medical Center v. Richardson, 486 F.2d 663 (2d Cir. 1973). [10] The administrative hearing was held pursuant to 19 C.F.R. § 112.30(d) (2). [11] On February 26, 1973, Anthony Garite entered a plea of guilty to a violation of 18 U.S.C. § 659 before the Honorable Orrin G. Judd in the United States District Court for the Eastern District of New York. Throughout these proceedings Garite's attorney referred to Garite's plea as a "Serrano" plea (Govt. Ex. 4-B, p. 4). See People v. Serrano, 15 N.Y.2d 304, 258 N.Y.S.2d 386, 206 N.E.2d 330 (1965). As Garite's guilty plea did not include an express admission that he committed the particular acts claimed to constitute the crime charged (although it was not accompanied by a protestation of innocence), it was obviously entered pursuant to North Carolina v. Alford, 400 U.S. 25, 91 S.Ct. 160, 27 L.Ed.2d 162 (1970). [12] See generally Note, 39 Fordham L.Rev. 773 (1971); Note, 49 N.Carolina L.Rev. 795 (1971). [13] New Rule 11(f), effective December 1, 1975, retains the mandate of former Rule 11, providing: "[n]otwithstanding the acceptance of a plea of guilty, the court should not enter a judgment upon such plea without making such inquiry as shall satisfy it that there is a factual basis for the plea." The Advisory Committee notes that the new rule "does not speak directly to the issue of whether a judge may accept a plea of guilty where there is a factual basis for the plea but the defendant asserts his innocence." Fed.R.Crim.P. 11(f), Notes of Advisory Committee on Rules, 18 U.S.C.A. [14] See generally Annot., 89 A.L.R.2d 540 (1963); 1 Wright and Miller, Federal Practice and Procedure: Criminal § 177. For a general discussion of the history of the nolo plea see North Carolina v. Alford, supra, 400 U.S. at 35-36, n. 8, 91 S.Ct. at 166-167. [15] New Rule 11(b), effective December 1, 1975, retains the requirement that the defendant obtain the consent of the court in order to plead nolo contendere. See Fed.R. Crim.P. 11(b), Notes of Advisory Committee on Rules, 18 U.S.C.A. [16] North Carolina v. Alford, supra, 400 U.S. at 35-36 n. 8, 91 S.Ct. at 166-167; 1 Wright and Miller, Federal Practice and Procedure: Criminal §§ 174, 177; Fed.R. Crim.P. 11(b), Notes of Advisory Committee on Rules, 18 U.S.C.A. [17] See e. g., In Re Eaton, 14 Ill.2d 338, 152 N.E.2d 850 (1958); Lee v. Wisconsin State Board of Dental Examiners, 29 Wis.2d 330, 139 N.W.2d 61 (1966); Christensen v. Orr, 275 Cal.App.2d 12, 79 Cal.Rptr. 656 (4th Dist. 1969); In Re Snook, 94 Idaho 904, 499 P.2d 1260 (1972); In Re Lewis, 389 Mich. 668, 209 N.W.2d 203 (1973). [18] See e. g., Kravis v. Hock, 136 N.J.L. 161, 54 A.2d 778 (N.J.Ct.Err. & App.1947); Fox v. Scheidt, 241 N.C. 
31, 84 S.E.2d 259 (1954); In Re Teitelbaum, 13 Ill.2d 586, 150 N.E.2d 873 (1958); State v. Tibbels, 167 Neb. 247, 92 N.W.2d 546 (1958). [19] See 19 C.F.R. §§ 18.4(e) & (f), 24.13(b) & (f). [20] In this regard it is noteworthy that the finality accorded to administrative findings extends as well to inferences from the evidence having a substantial basis in the record. Feil v. Gardner, 281 F.Supp. 983, 986 (E.D.Wis.1968), aff'd. 402 F.2d 481 (7th Cir. 1968). Moreover, a reviewing court may not substitute its judgment for that of an administrative agency in reviewing inferences drawn from established facts. See discussion in 4 Davis, Administrative Law Treatise, § 29.05. [21] In his affidavit in support of the "motion for show cause order", Fred F. Barlow, special Attorney of the Department of Justice, noted that one of the conditions inducing the Government to dismiss the indictments in 72 CR 886 and 72 CR 1196 upon Garite's sentence in 73 CR 202 was a promise made by Garite's attorney, Marvin H. Wolf, Esq. At the sentencing proceedings before the Honorable Orrin G. Judd on May 4, 1973, Mr. Wolf promised on behalf of the defendant Garite to voluntarily surrender Tempo's license. In a Memorandum and Order filed February 28, 1975 (U. S. A. v. Anthony Garite, 73 CR 202), Judge Judd ordered: "That the judgment and order of probation entered May 4, 1973 be corrected to add as a special condition that defendant cause the Custom House License of Tempo Trucking and Transfer Corporation to be surrendered to the Bureau of Customs," and . . . "that defendant Anthony Garite appear before the court at 2:00 P.M. on March 6, 1975 for a hearing on whether he has violated such condition of probation and whether sentence should not now be imposed." On March 6, 1975 Judge Judd marked the case off the calendar without prejudice to the Government. On July 8, 1975 an order was filed discharging Garite from probation. [22] Moreover, it has been held that the danger of a revocation of probation does not constitute "incrimination" within the meaning of the Fifth Amendment. See Holdren v. People, 168 Colo. 474, 452 P.2d 28 (1969).
null
minipile
NaturalLanguage
mit
null
2009 in Kenya

A list of happenings in 2009 in Kenya:

Incumbents

President: Mwai Kibaki
Vice-President: Kalonzo Musyoka
Chief Justice: Johnson Gicheru

Events

January

January - Two corruption cases, the 2009 Triton Oil Scandal and the 2009 Kenyan Maize Scandal, broke out.
January 16 - Kenya makes an international food appeal due to drought-induced famine in certain parts of the country.
January 23 - Amos Kimunya is appointed the Minister of Trade. Kimunya had resigned from the post of Minister of Finance in July 2008 due to the Grand Regency Scandal. The former Minister of Trade, Uhuru Kenyatta, is appointed to the vacant position of Minister of Finance.
January 28 - A Nakumatt supermarket in the Nairobi CBD was destroyed by a fire (main article: 2009 Nakumatt supermarket fire).
January 31 - An oil spill ignition kills over 50 people near Molo town (main article: 2009 Kenyan oil spill ignition).

February

February 19 - Kenyan fishermen are forced to flee the disputed Migingo Island in Lake Victoria after Uganda deploys troops on the island.
February 25 - A UN report recommends dismissing the Attorney General Amos Wako and police commissioner Mohammed Hussein Ali due to killings by the police (The report). On September 8, 2009, Mohammed Hussein Ali was transferred to the position of Chief Executive of the Postal Corporation of Kenya. The new Police Commissioner is Mathew Iteere, the former General Service Unit Commandant.

March

March 30 - Student riots at Kenyatta University result in one fatality and destroyed property, including a computer laboratory.

April

April 6 - Kenyan Minister of Justice Martha Karua resigns, citing lack of progress with her reform agenda.
April 20 - Over 20 people die in a clash between the Mungiki sect and local residents in Karatina (main article: Mathira Massacre).
April 24 - Eliud Wabukala is elected the new archbishop of the Anglican Church of Kenya, replacing the outgoing Benjamin Nzimbi.

May

May 1 - A government official was forced to cut short his speech and abandon the May Day rally as angry workers hurled stones at dignitaries in protest over the government's refusal to deal with difficult living conditions.
May 7 - Thomas P. G. Cholmondeley was found guilty of manslaughter.

June

June 12 - The Othaya Police Chief, John Nzau, was shot dead. Fellow policemen were arrested as suspects.
June 16 - An oil tanker fire kills at least four and injures nearly 50 people at Kapokyek village near Kericho. The victims were siphoning fuel from the tanker, which had fallen off the road.

July

July 9 - Former UN Secretary-General Kofi Annan handed the names of the main suspects of the 2007-08 post-election violence to the International Criminal Court.
July 21 - A collision of two buses in Siapei along the Narok - Mau Mahiu road causes 22 fatalities.
July 23 - The SEACOM cable becomes operational, raising hopes of higher-speed and lower-cost internet connections in Kenya.
July 30 - The Kenyan cabinet announced that no special tribunal will be formed to handle the 2007-08 post-election crisis, and that the cases will be dealt with in local courts instead.

August

August 1 - A small airplane belonging to AIM-Air crashes into a flat in Nairobi's Highrise estate while approaching Wilson Airport, resulting in one fatality.
August 4–6 - The 8th African Growth and Opportunity Act (AGOA) conference was held at the Kenyatta International Conference Centre in Nairobi. Hillary Clinton, the United States Secretary of State, was among the speakers.
August 23 - A bus and a truck collide near Gilgil, resulting in 16 deaths.
August 24 - The 2009 Kenyan census is initiated.
August 24 - Long-distance buses and matatus are banned from entering the Nairobi CBD in order to reduce traffic congestion.
August 27 - Parliamentary by-elections were held in the constituencies of Shinyalu and Bomachoge. ODM retained the Shinyalu seat; the new MP is Justus Kizito, replacing the deceased Charles Lugano. In Bomachoge, Simon Ogari of ODM narrowly beat the 2007 winner Joel Onyancha of PNU. The Bomachoge seat was vacated after the 2007 elections at the constituency were nullified due to irregularities. For the first time in Kenya, the ballot boxes were transparent.

September

September 30 - Aaron Ringera resigns from the position of director of the Kenya Anti-Corruption Commission (KACC) due to pressure. He had been reappointed as the director of KACC on 31 August 2009, sparking mixed reactions.

October

October 4 - Fourteen people died when a matatu and a truck collided in Kericho.
October 20 - A building collapsed in Kiambu, killing several people.
October 27 - Mungiki chairman Maina Njenga was acquitted after murder charges against him were withdrawn for lack of evidence.

November

November 5 - Mungiki spokesman David Gitau Njuguna was shot dead in Nairobi by unknown assailants.
November 9 - A light cargo plane crashes at Wilson Airport, killing two crew members. The plane was carrying miraa to Somalia (main article: 2009 Kenyan Beech 1900D crash).
November 15 - Ten people died when Samburu cattle raiders attacked the Kisima village in Samburu County.
November 16 - The Kenyan government started to evict settlers from the Mau Forest.
November 17 - The Harmonized Draft Constitution of Kenya was released to the public (The Draft).
November 22 - Six people died when a trailer hit a matatu along the Nakuru–Eldoret Highway in Kiptenden, near Nakuru.

December

December 2 - Eleven people died as a bus and a lorry collided along the Nakuru–Eldoret highway at Mlango Tatu, Koibatek District.
December 17 - The High Court declares the South Mugirango Constituency parliamentary seat held by James Magara vacant due to irregularities in the election.

Deaths

January–March

January 1 - Kenyan al-Qaeda members Fahid Mohammed Ally Msalam and Sheikh Ahmed Salim Swedan were killed in a US airstrike in Pakistan.
January 26 - Pamela Mboya, former UN-HABITAT Kenya representative and widow of Tom Mboya.
January 28 - Angel Wainaina, actress and radio presenter, known for her role as Sergeant Maria in the Cobra Squad TV series, victim of the Nakumatt supermarket fire.
January 28 - Peter Serry, the CEO of Tusker FC, victim of the Nakumatt supermarket fire.
February 1 - Kadir Farah, former international football player, died of illness.
February 25 - Atieno Odhiambo, 63, scholar and writer, died of illness.
March 5 - Oscar Kamau Kingara and John Paul Oulo, human rights activists, were shot dead by the police.

April–June

April 9 - Farakh Yusuf, 54, rally co-driver.
April 13 - James Bett, a peacemaker, dies following a car accident.
May 1–3 - Bantu Mwaura, 40, human-rights activist, actor, director, poet and storyteller who wrote poetry in English, Swahili and Gikuyu.
May 4 - Charles Lugano, 59, Kenyan politician, illness.
June 18 - Professor Peter Kenya from Kenyatta University was shot dead by gangsters.

July–September

July 9 - Kinuthia Murugu, Youth and Sports Permanent Secretary, dies of gunshot wounds.
August 11 - Campbell R. Bridges, 71, Scottish-born Kenyan resident gemologist, stabbed to death near Voi.
August 13 - Major General Simeon Mutai, former Kenya Air Force commander, illness.
August 14 - Kimani Maruge, ~89, world's oldest pupil, stomach cancer.

October–December

October 9 - Francis Baldacchino, 72, the first Bishop of Malindi, dies of liver and heart complications in Malta.
October 26 - Patrick Ndururi, 40, runner.
November 2 - Kirugumi wa Wanjuki, 86, the hangman at Kamiti Maximum Security Prison, dies of pneumonia.
November 28 - Patrick Konchellah, 41, runner, Commonwealth Games gold medalist, dies of illness.
December 4 - Ronald Kiluta, former MP and assistant minister from Masinga Constituency, road accident.

Sports

January–March

January 13 - The Kenya national football team lost to Uganda in the 2008 CECAFA Cup final. Kenya's head coach Francis Kimanzi was sacked soon after the tournament. He was replaced by Antoine Hey from Germany in February 2009.
January 23 - Olympic champions Pamela Jelimo and Samuel Wanjiru take top honours at the 2008 Kenyan Sports Personality of the Year awards.
February 9–10 - The Kenyan national rugby sevens team reaches the main cup semifinal at the 2009 Wellington Sevens and the plate final of the 2009 USA Sevens.
March 5–7 - Kenya reaches the 2009 Rugby World Cup Sevens semi-final in Dubai.
March 27 - Kenya lost to Tunisia 1-2 in the 2010 FIFA World Cup qualifiers.
March 28 - Kenya reaches the main cup semi-final at the 2009 Hong Kong Sevens.
March 29 - Micah Kogo breaks the 10 kilometres road running world record at the Parelloop race in Brunssum, the Netherlands, timing 27:01 minutes.

April–June

April 3–5 - At the 2009 Adelaide Sevens, Kenya reaches its first ever World Sevens Series main cup final, but is defeated by South Africa in the final 7-26. At the group stage, Kenya had beaten South Africa.
April 3–5 - Carl Tundo wins the 2009 Safari Rally. It was his second Safari title; the previous victory was in 2004.
April 5 - Duncan Kibet wins the 2009 Rotterdam Marathon; his time of 2:04:27 hours makes him the 2nd fastest marathon runner ever after the world record holder Haile Gebrselassie. James Kwambai finished second and posted the same time as Kibet. Abel Kirui was 3rd; his time of 2:05:04 is also one of the fastest ever. On the same day, Vincent Kipruto of Kenya won the Paris Marathon, setting a new course record of 2:05:47.
April 9 - Josephine Owino was drafted by the Washington Mystics in the 3rd round of the 2009 WNBA draft.
April 19 - Gary Boyd of England wins the 2009 Kenya Open golf tournament.
April 20 - Kenya qualifies for the 2011 Cricket World Cup by finishing fourth at the 2009 ICC World Cup Qualifier in South Africa.
April 20 - Salina Kosgei wins the 2009 Boston Marathon.
April 21 – May 3 - Kenya finishes fourth at the 2009 IRB Junior World Rugby Trophy held in Nairobi. The tournament was won by Romania.
April 26 - Samuel Wanjiru won the 2009 London Marathon, setting a course record of 2:05:10 hours.
June 6 - Kenya lost to Nigeria 0-3 in Abuja in a 2010 World Cup qualifier.
June 20 - Kenya beat Mozambique 2-1 in Nairobi in a 2010 World Cup qualifier.
June 21 - Kenya wins the 2009 Safari Sevens.

July–September

July 1–12 - Kenya achieved three medals (medal record 1-1-1) at the 2009 Summer Universiade. All medals were won by swimmer Jason Dunford.
July 8–12 - Kenya topped the medal table at the 2009 World Youth Championships in Athletics (medal record 6-7-1).
July 12 - The Kenya women's national volleyball team qualifies for the 2010 FIVB Women's World Championship at a qualifying tournament held in Nairobi.
August 15–23 - Kenya finished with the third-best medal record at the 2009 World Championships in Athletics, with four gold, five silver and three bronze medals. The Kenyan gold medalists were Linet Masai (10,000 m women), Ezekiel Kemboi (3000 m steeplechase men), Vivian Cheruiyot (5000 m women) and Abel Kirui (marathon men).
September 4 - Kenya set a new world record in the 4 x 1500 metres relay, 14:36.23 minutes, at the Memorial Van Damme meeting in Brussels. Members of the team were Augustine Choge, William Biwott Tanui, Gideon Gathimba and Geoffrey Rono. The previous record in this rarely competed event was set by West Germany in 1977, making it the oldest world record when it was broken.
September 6 - Kenya lost to Mozambique 1-0 in Maputo in a 2010 World Cup qualifier.
September 26 - Impala RFC becomes the 2009 Kenyan rugby union champion by winning the Kenya Cup.

October–December

October 4 - Carl Tundo wins the 2009 Kenya National Rally Championship.
October 11 - Samuel Wanjiru wins the 2009 Chicago Marathon, setting the fastest marathon time ever run in the United States (2:05:41 hours). On the same day, Mary Keitany won the 2009 IAAF World Half Marathon Championships in Birmingham, while Philes Ongori secured a 1-2 for Kenya. Bernard Kipyego took silver in the men's race behind Zersenay Tadese of Eritrea. Kenya won both team competitions.
October 18 - Debutant Gilbert Yegon wins the 2009 Amsterdam Marathon, timing 2:06:18 hours, beating the old course record set by Haile Gebrselassie in 2005.
October 20 - AFC Leopards wins the Kenyan Cup, beating lower-division side Congo United 4-1 in the final.
October 25 - Moses Kipkosgei Kigen and Irene Jerotich win the 2009 Nairobi Marathon, both setting course records.
November 7 - Sofapaka wins the 2009 Kenya Premier League, despite playing in the league for the first time.
November 14 - Kenya lost to Nigeria 2-3 in Nairobi in their last game of the 2010 FIFA World Cup qualifiers. Kenya finished last in their group.
December 2 - Ian Duncan won the 2009 classic rally edition of the Safari Rally.

Entertainment

October 10 - The MTV Africa Music Awards 2009 were held in Nairobi.
null
minipile
NaturalLanguage
mit
null
Pulmonary vascular response to thrombin: effects of thromboxane synthetase inhibition with OKY-046 and OKY-1581.

We examined the effects of thromboxane synthetase inhibition with OKY-1581 and OKY-046 on pulmonary hemodynamics and lung fluid balance after thrombin-induced intravascular coagulation. Studies were made in anesthetized sheep prepared with lung lymph fistulas. Pulmonary intravascular coagulation was induced by i.v. infusion of alpha-thrombin over a 15 min period. Thrombin infusion in control sheep resulted in immediate increases in pulmonary artery pressure (Ppa) and pulmonary vascular resistance (PVR), which were associated with a rapid 3-fold increase in pulmonary lymph flow (Qlym) and a delayed increase in the lymph-to-plasma protein concentration (L/P) ratio, indicating an increase in pulmonary microvascular permeability to proteins. Thrombin-induced intravascular coagulation also increased arterial concentrations of thromboxane B2 (a metabolite of thromboxane A2) and 6-keto-PGF1 alpha (a metabolite of prostacyclin). Both OKY-1581 and OKY-046 prevented thromboxane B2 and 6-keto-PGF1 alpha generation. The initial increments in Ppa and PVR were attenuated in both treated groups. The increases in Qlym were gradual in the treated groups but attained the same levels as in the control group. However, the increases in Qlym were associated with decreases in the L/P ratio. In both treated groups, the leukocyte count decreased after thrombin infusion but then increased steadily above the baseline value, whereas the leukocyte count remained depressed in the control group after thrombin. These studies indicate that part of the initial pulmonary vasoconstrictor response to thrombin-induced intravascular coagulation is mediated by thromboxane generation. In addition, thromboxane may also contribute to the increase in lung vascular permeability to proteins that occurs after intravascular coagulation, and this effect may be mediated by a thromboxane-neutrophil interaction.
null
minipile
NaturalLanguage
mit
null
Pig War - sjcsjc
https://en.wikipedia.org/wiki/Pig_War_(1859)

====== Patrick_Devine

The interesting thing about the Pig War was that it helped galvanize Canada as a country by bringing Vancouver Island and British Columbia together and then becoming part of Canada. The division between the Gulf Islands and the San Juan Islands is still pretty strong to this day thanks to the division. I used to live in White Rock, BC and could see Orcas Island from my house, and yet have never been there, despite having been to most of the Gulf Islands. Had the decision gone the other way, the archipelago's economy would most likely have been much more entwined with that of Vancouver and Victoria.

------ zw123456

Full disclosure, I am completely biased. I live near Seattle and am a boating enthusiast. If you ever get the opportunity to visit the San Juan Islands, do it. I know I am biased, but some of the best, most wonderful memories for me are boating in the SJIs from a young age to the present. Incredible views, biking, hiking, music, everything. Very underrated part of the US. The pig war is a big excuse to drink :) Sorry for the unabashed plug, but it's just such a special place. I almost hit escape because I don't want it ruined, but HN people I think can dig this place in the right way :)

~~~ walrus01

One of the interesting differences between the southern gulf islands and the san juans is the wealth. Most obviously seen by the fact that there's at least 4 to 5 noteworthy airports and airstrips in the san juans, popular with private plane owners who live in Seattle and also have vacation homes in the San Juans. There's none on Saturna, Mayne, Pender, Galiano or Saltspring.

~~~ fatbird

There's no land-based airstrips, but all those islands are serviced by floatplanes and most have one or more luxury resorts. Aside from Salt Spring, they're more middle-class retiree islands (Salt Spring legit has int'l rich people building summer homes--my wife and I had breakfast at a B&B with a New York couple complaining about the cost of shipping Italian marble in via BC Ferries).

------ tomkat0789

For those who missed the Wikipedia link, the US troops were commanded by one George Pickett, the Confederate officer of the ill-fated Pickett's Charge during the Battle of Gettysburg. Small world!

------ interfixus

Canada has an ongoing territorial dispute with Denmark, concerning Hans Island way up north. There is military action. Canadian and Danish forces repeatedly take possession of the island, taking down the enemy flag and planting their own. They are also said to leave a bottle of Canadian whisky or Danish schnaps for the other side. Prospects of lethal escalation are presumably limited.

~~~ goodcanadian

The border is left undefined in the region of the island, but if you were to draw a straight line between the two points defined on either side of the island, it would pretty much bisect the island. I think that is the solution. Then, Canada could have a land border with Denmark!

~~~ interfixus

What marvellous fun we could have. We could even build a wall.

------ incompatible

I assumed it would be a war on pigs, like the Emu War in Western Australia was a "war" on emus.

* https://en.wikipedia.org/wiki/Emu_War

------ ddebernardy

> this dispute was a bloodless conflict.

Err... a pig died... :-)

~~~ Patrick_Devine

and he was delicious.

------ Humdeee

> Cutlar saying to Griffin, "It was eating my potatoes"; and Griffin replying,
> "It is up to you to keep your potatoes out of my pig."

Requirement fulfilled. Now does it make more sense to fence in the potatoes or fence in the pig?

~~~ anon1m0us

The pig. If your pig comes on my land, it's now likely _my_ pig.

https://en.wikipedia.org/wiki/Possession_is_nine-tenths_of_the_law

Which, coincidentally, has a case about the Hatfield/McCoy feud over a pig, whose ownership was granted to those who possessed the pig.

------ mrec

Commemorated in a comic here: http://www.veritablehokum.com/comic/the-pig-war/

------ moufestaphio

Interesting, I hadn't heard of this. Mentioned at the bottom of the article about Canadian resentment:

> Canada sought greater autonomy in international affairs.

One I did know about though was not much later, the Alaska border dispute a little later:

https://en.wikipedia.org/wiki/Alaska_boundary_dispute

------ dannykwells

Also see: https://heyarnold.fandom.com/wiki/The_Pig_War

------ dmitrygr

It takes so little provocation to get humans at each other's throats. Long term, I imagine that will be the undoing of our species.

------ andy_boot

I'm quite relieved that the higher-ups were able to successfully de-escalate the situation. It is so easy to read about situations where some pompous military officer tries to escalate and kick-start a conflict. But fortunately a situation that had gotten out of hand was de-escalated smoothly.

------ freeqaz

I grew up in the San Juan Islands. Funny to see this here -- my recollection of this war is primarily from visiting the different beaches where the camps were. And some re-enactments every year by people in the community. The world is a strange place!

------ mnehring

I feel I have to link to a song I grew up listening to that tells the story of this "war": https://youtu.be/4_Ap1rGgVLg

~~~ mnehring

Best line from the song: "They called it a war, but it wasn't very big. The only one got killed was a little British pig."

------ topkai22

This was the highlight of the mandatory Washington state history I had to take in 9th grade. That and something about socialist electoral victories in the early 1900s. I don't think I remember anything else. 5 months of state history for a state that was barely 100 years old might have been a bit much for a bunch of 14-year-olds.

~~~ jacobush

You squandered half a percent of the history reading about it :)

------ matt_morgan

"For several days, the British and U.S. soldiers exchanged insults, each side attempting to goad the other into firing the first shot, but discipline held on both sides, and thus no shots were fired."

So basically they all wanted to shoot each other, but not so much that they would defy orders. We absolutely suck.

~~~ nerfhammer

It gets nicer at the end: "As a result of the negotiations, both sides agreed to retain joint military occupation of the island until a final settlement could be reached, reducing their presence to a token force of no more than 100 men.... During the years of joint military occupation, the small British and American units on San Juan Island had an amicable mutual social life, visiting one another's camps to celebrate their respective national holidays and holding various athletic competitions. Park rangers tell visitors the biggest threat to peace on the island during these years was 'the large amounts of alcohol available'."
null
minipile
NaturalLanguage
mit
null
About Us

Jung Platform is an educational institute on Jungian psychology and other depth-psychological perspectives. We have produced several hundred classes and courses on depth-psychological topics, and have had participants from over 60 countries. The Jung Platform is widely respected and seen as one of the premier educational organizations in this field.

Our Goal

Our goal is to educate a global, diverse community about the medicine within the teachings of Carl Jung and other depth-psychological perspectives. Regardless of how much or how little prior knowledge you may have of Carl Jung and depth psychology, our lectures aim to help you navigate through and participate in the journey of life and unlock some of its clues. We offer online Jungian and other depth-psychological courses and lectures, delivered by experts in their field, that you can watch on demand over the Internet from the comfort of your home or anywhere else in the world. These events are applicable to people of all races, cultures and ethnicities, and provide a diverse, non-biased perspective.

Our Partners

Jung Platform is pleased to have formed an alliance with two organizations in the field of depth psychology: Shrink Rap Radio and The Depth Psychology Alliance. Shrink Rap Radio, Jung Platform, and Depth Psychology Alliance are joining forces to bring new educational content in the field of Jungian and depth psychology to interested individuals. Everyone intrigued by Jungian and depth psychology should take note of this strategic combination of expert resources delivering online interviews, videos, teleseminars, podcasts, and multi-platform book-related events featuring leading experts and authors in the field.
null
minipile
NaturalLanguage
mit
null
Breakthrough Starshot, a multi-faceted research and engineering program to develop and launch practical interstellar space missions by Breakthrough Initiatives, successfully flew their first spacecraft — the smallest ever launched. On June 23, a number of prototype “Sprites” — the world’s smallest fully functional space probes, built on a single circuit board — achieved LEO, piggybacking on OHB System AG’s Max Valier and Venta satellites. The 3.5-by-3.5 centimeter chips weigh just four grams but contain solar panels, computers, sensors, and radios. These vehicles are the next step of a revolution in spacecraft miniaturization that can contribute to the development of centimeter- and gram-scale “StarChips” envisioned by the Breakthrough Starshot project. The Sprite is the brainchild of Breakthrough Starshot’s Zac Manchester, whose 2011 Kickstarter campaign, “KickSat,” raised the first funds to develop the concept. The Sprites were constructed by researchers at Cornell University and transported into space as secondary payloads by the Max Valier and Venta satellites, the latter built by the Bremen-based OHB System AG, whose generous assistance made the mission possible. The Sprites remain attached to the satellites. Communications received from the mission show the Sprite system performing as designed. The spacecraft are in radio communication with ground stations in California and New York, as well as with amateur radio enthusiasts around the world. This mission is designed to test how well the Sprites' electronics perform in orbit, and demonstrates their novel radio communication architecture. Breakthrough Initiatives — including most notably, Breakthrough Starshot and Breakthrough Listen — are a set of long-term astronomical programs exploring the Universe, seeking scientific evidence of life beyond Earth, and encouraging public debate from a planetary perspective. Breakthrough Starshot, announced on April 12, 2016, by Yuri Milner and Stephen Hawking, is a $100 million research and engineering program aiming to demonstrate proof of concept for light-propelled spacecraft that could fly at 20 percent of light speed and, in just over 20 years after their launch, capture images and other measurements of the exoplanet Proxima b and other planets in our nearest star system, Alpha Centauri.
null
minipile
NaturalLanguage
mit
null
The toxicology of chromium with respect to its chemical speciation: a review. The properties of trivalent and hexavalent chromium are reviewed with respect to acute and chronic oral toxicity, dermal toxicity, systemic toxicity, toxicokinetics, cytotoxicity, genotoxicity and carcinogenicity. The hexavalent chromium compounds appear to be 10-100 times more toxic than the trivalent chromium compounds when both are administered by the oral route. Dermal irritancy and allergy are more frequently caused by contact with soluble hexavalent chromium compounds. The cytotoxicity of soluble and insoluble hexavalent chromium compounds to fibroblasts is 100-1000 times greater than that demonstrated by trivalent chromium compounds. In short-term tests, the hexavalent chromium compounds demonstrated genotoxic effects four times more frequently than did the trivalent chromium compounds. Carcinogenicity appears to be associated with the inhalation of the less soluble/insoluble hexavalent chromium compounds. The toxicology of chromium does not reside with the elemental form. It varies greatly among a wide variety of very different chromium compounds. Oxidation state and solubility are particularly important factors in considering the toxicity of chromium with respect to its chemical speciation.
null
minipile
NaturalLanguage
mit
null
Q: Expected identifier, got 'eth_compileSolidity'

constructor () public {

I'm unsure what identifier it is asking for, can anyone advise? Error section:

constructor () public {
    owner = msg.sender;
}

A:

pragma solidity ^0.4.11;

In the Solidity versions preceding 0.4.22, the constructor is defined as the function having the same name as the contract. Example:

contract Test {
    function Test() {...}
}

So, if you want to use the keyword constructor for your constructor, you have to use at least version 0.4.22 of Solidity.
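For illustration, a minimal sketch of the post-0.4.22 form (a hypothetical Test contract, not taken from the question):

pragma solidity ^0.4.22;

contract Test {
    // State variable holding the deployer's address.
    address public owner;

    // From compiler version 0.4.22 onward, the `constructor` keyword is
    // recognized, so this block parses without the "expected identifier" error.
    constructor() public {
        owner = msg.sender;
    }
}

With the pragma raised to ^0.4.22 (or, alternatively, the constructor renamed to match the contract name in the old style), the snippet from the question compiles; under ^0.4.11 the parser does not know the constructor keyword, which is what triggers the error.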
null
minipile
NaturalLanguage
mit
null
Statistics reflect that one in five adult Americans grew up in a household that included an alcoholic. As a result, these children face a greater risk of developing emotional problems than children who do not have a parent who is an alcoholic. Alcoholism tends to run in families; children with alcoholic parents are themselves at greater risk of developing alcoholism.

The child may perceive himself as the main reason his mother or father drinks, blaming himself for their issue. In addition, the child may fret consistently about the situation at home. He may worry that the alcoholic parent will get sick, and may also fear violence between his parents.

Parents suffering from alcoholism may make the child feel as though there is an awful secret at home. The embarrassed child consequently does not invite friends over and fears asking anyone for assistance. Due to the child's disappointment in his alcoholic parent, he may find it difficult to trust others.

Regardless of how the child behaves, the alcoholic parent will suddenly switch from being loving to angry. A child needs to have a regular daily schedule; this is important to his well-being; but in the home of an alcoholic parent, bedtimes and mealtimes are always changing. The child may develop an

In America, the number of drug and alcohol treatment centers has increased to staggering levels. Several of these facilities provide treatment for mental health, eating disorders, and sex addiction, plus programs pertaining to drug and alcohol rehabilitation. This type of structure is referred to as

Most drug and alcohol treatment centers provide the same services inside a safe and therapeutic atmosphere, where the individual can recover from drug addiction and/or alcoholism. These treatment centers tend to come in the form of residential addiction treatment centers, but they can also be in

All alcohol and drug treatment centers provide the alcoholic or drug addict with a nurturing, safe, and supportive environment to help her recover from her alcoholism and drug addiction. It does not matter if the individual undergoes residential treatment or day/night treatment; all alcohol treatment

Outpatient treatment programs are suitable when the individual has already rid himself of the drugs in his system (detoxification). Medications such as Subutex, Suboxone, Buprenex or Buprenorphine are often used as rapid detox in opiate addiction cases. These drugs help to prevent withdrawal symptoms.
null
minipile
NaturalLanguage
mit
null
Biomembranes are heterogeneous lipid assemblies in which preferential association of certain lipids, sterols, and proteins can lead to the formation of nanodomains, so-called rafts[1]. Such rafts, enriched in cholesterol and saturated lipids, display physicochemical properties different from those of their disordered fluid surroundings, and are believed to play an important role in the self-assembly of membrane proteins into functional platforms[2]. Traditionally, detergents were used as the main criterion for raft association; raft constituents were defined simply as the detergent-resistant membrane (DRM) remaining after non-ionic detergent solubilization. This criterion was usually combined with the use of methyl-β-cyclodextrin to extract cholesterol from cell membranes. Thus, DRM solubilization after cyclodextrin treatment was indicative of a raft component. In addition, if a biological process in living cells was disrupted by cyclodextrin treatment, the process was considered to be raft-based[3-9].

From the chemical point of view, cyclodextrins (CD) are composed of several glucose units, linked together by α-1,4-glucosidic bonds. The unique properties of these macrocycles result from their characteristic cylindrical structure with a hydrophilic exterior and a hydrophobic central cavity[10]. The latter serves as a suitable microenvironment for lipophilic compounds. The limiting parameter in the complexation process is the inner diameter of the cyclodextrin cone; for β-CD, consisting of seven glucose monomers, the diameter measures 7.8 Å and is perfect for complexation of cholesterol. To characterize the ability of β-CD to take up cholesterol from biomembranes, many studies have been devoted to model membranes[6,8,9,11-16]. Ohvo and Slotte[12] demonstrated that the desorption rates of various sterols from the surface film to the β-CD solution depend on the relative polarity of the sterols and increase considerably with increasing surface pressure. Moreover, sterol desorption from cholesterol/phospholipid mixed systems was found to be notably retarded in comparison to extraction from pure cholesterol monolayers. More recently, Sanchez et al.[17] have shown that β-CDs preferentially remove cholesterol from liquid-disordered (Ld) rather than liquid-ordered (Lo) membrane domains. With the notion that Lo domains supposedly resemble raft domains or DRMs, this finding casts doubt on the use of β-CDs to characterize the cholesterol content in vivo.

In order to interpret the experimental data, a clear molecular understanding of how β-CDs extract cholesterol from lipid membranes is needed. To provide such a detailed view, we resort to multiscale molecular dynamics (MD) simulations. Nowadays, MD simulation is a powerful tool (denoted "computational microscopy"[18,19]) that can be used to complement experimental techniques. Using atomistic resolution, we investigated the interaction between β-CD and cholesterol/lipid membrane models, systematically exploring the effect of lipid type and cholesterol content on the extraction process. At a coarse-grained level, we show the preferential extraction of cholesterol from Ld domains in phase-separated Ld/Lo membranes. Our results provide a molecular view on β-CD mediated cholesterol depletion from lipid membranes that may aid in interpreting experimental data. We expect it will also contribute to the design of more effective cyclodextrin derivatives for the medical treatment of lipid metabolism pathologies, for example in the treatment of Niemann-Pick type C disease[20].

Results
=======

Spontaneous extraction of cholesterol from mixed PC/CHOL monolayers by β-CD dimers
-----------------------------------------------------------------------------------

To provide a molecular view on β-CD mediated cholesterol extraction from model membranes, we simulated monolayers of diC16-PC/CHOL at a 1:3 ratio in the presence of β-CD dimers. Dimers, rather than monomers, were previously shown to be the dominant aggregation state of β-CDs in solution[21]. We observe that nearly every one of the twelve β-CD dimers ends up bound to the monolayer surface with a binding time scale between 50 and 150 ns, governed by their random diffusion (Figure 1A). Some of the β-CDs (about 30%) aggregate on the monolayer, forming stacked barrels, mostly tilted by 90° with respect to the monolayer surface normal. In this conformation extraction of cholesterol is not possible. The majority of the dimers, however, adopt a perpendicular orientation suitable for the extraction process to occur, stabilized in this position by adjacent β-CDs. Indeed, cholesterols are extracted from the monolayer by these β-CD dimers (Figure 1A, final snapshot). Two cholesterol molecules are completely extracted during the 500 ns length of the simulation. A repeat simulation shows extraction of three cholesterols. The averaged, normalized extraction rate is 0.0002 ns^−1 per β-CD. Similar rates were obtained in smaller test systems in which β-CDs were pre-adsorbed on the membrane surface (Supplementary Table S1). The fact that only a small number of β-CDs are able to extract cholesterol on the time scale of the simulation points to a kinetic barrier opposing extraction. As will become apparent later, this barrier reflects the formation of the encounter complex between β-CD and the cholesterol head. Once a β-CD is positioned on top of the cholesterol, the cholesterol is extracted rapidly (typically within 5 ns).

We also studied the ability of β-CD monomers to extract cholesterol using the same set-up, i.e. with 24 monomers placed at random initial positions in the aqueous solution (Figure 1B). Like the dimers, the monomers also have a high affinity for the monolayer surface, and binding is observed to occur on a similar timescale (within 200 ns). Under the conditions of the simulations, the β-CDs do not have enough time to form dimers, and bind the interface as monomers. Once adsorbed, diffusion is too slow and no further aggregation takes place on the time scale of the simulation (500 ns). In their monomeric form, the β-CDs are able to capture the cholesterol head group (Figure 1B, final snapshot), but are unable to completely extract cholesterol out of the monolayer, in line with results from simulations performed on pure cholesterol monolayers[21]. The same qualitative behavior was observed in three independent simulations.

Based on the results described in this section, we conclude that spontaneous cholesterol extraction from mixed lipid/cholesterol monolayers occurs on a sub-microsecond time scale, requiring a suitably oriented and interfacially embedded β-CD dimer.
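The extraction rate quoted above follows from a simple normalization over the number of β-CD molecules and the simulated time. As a consistency check, using only the numbers given in this section (on average 2.5 extraction events per 500 ns run, with 12 dimers, i.e. 24 β-CD molecules, present):

r = n_extr / (N_βCD × t_sim) = 2.5 / (24 × 500 ns) ≈ 2 × 10^−4 ns^−1,

which reproduces the reported value of 0.0002 ns^−1 per β-CD. The same normalization, applied per monomer, underlies the rates quoted for the coarse-grained systems below.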
Cholesterol content drastically affects extraction kinetics and thermodynamics ------------------------------------------------------------------------------ Varying the molar lipid-cholesterol ratio in the membrane has a dramatic effect on the cyclodextrin extraction activity. In the case of a diC~16~-PC/CHOL 1:1 molar ratio monolayer with nine *β*-CD dimers pre-adsorbed at the interface, no cholesterol is complexed during three independent 200 ns simulations. To understand the apparent large effect of the lipid/cholesterol ratio on the ability of *β*-CD to extract cholesterol, we analyzed the equilibrium configuration of the CD dimer at the monolayer interface. Snapshots of these configurations are shown in [Figure 2A--C](#f2){ref-type="fig"} at an increasing PC/cholesterol ratio from 0:1, 1:3, to 1:1. In the case of a pure cholesterol monolayer, in fact the energetically most favorable configuration for a single *β*-CD dimer is a tilted one ([Figure 2A](#f2){ref-type="fig"}). Lipid head groups appear to stabilize an upright orientation by formation of hydrogen bonds between the hydroxyl groups of the cyclodextrins and the phosphate groups of PC, clearly seen in the 1:3 mixture ([Figure 2B](#f2){ref-type="fig"}). Comparison of [Figure 2B and 2C](#f2){ref-type="fig"}, however, shows that the presence of too many lipid head groups does not allow the *β*-CDs to interact directly with the cholesterol molecules anymore. In the case of the 1:1 mixture, most cholesterols are shielded by the lipid head groups explaining the lack of spontaneous extraction events in our simulations. Spontaneous extraction was also not observed by increasing the amount of cholesterol to a lipid/cholesterol 1:2 molar ratio. As shown above, increasing the amount of cholesterol in the monolayer even further (diC~16~-PC/CHOL 1:3) facilitates the cholesterol extraction due to the presence of enough space between the phospholipid head groups for the proper positioning of the *β*-CDs on the monolayer surface. To get more insight into the inaccessibility of cholesterol by *β*-CD, we calculated the cholesterol-CD radial distribution function (RDF). The results are shown in [Supplementary Fig. S1](#s1){ref-type="supplementary-material"} as a function of cholesterol-lipid molar ratio. The probability of finding *β*-CD molecules within a distance of 0.2 nm to the cholesterol hydroxyl group (approximate distance for making a hydrogen bond) increases as the amount of phospholipid decreases. These results show that an important requirement for the spontaneous extraction is the proximity between cyclodextrins and cholesterol, facilitated at high cholesterol levels. To further quantify the extraction process, we calculated the potential of mean force (PMF) for the extraction of cholesterol from the membrane into a *β*-CD dimer. An example is shown in [Figure 3](#f3){ref-type="fig"} for the diC~16~-PC/CHOL 1:2 system. The PMF shows an overall favorable extraction energy, with a stabilization of 40 ± 2 kJ mol^−1^ in the complexed state. A clear barrier of about 20 ± 2 kJ mol^−1^ is also visible. As argued above, this barrier is due to the presence of lipid head groups that shield the cholesterol from being extracted. PMFs for other membrane compositions show an overall similar profile ([Fig. S2](#s1){ref-type="supplementary-material"}, [Table S2](#s1){ref-type="supplementary-material"}). 
The extraction free energy and barrier obtained from these PMFs are shown in the inset of [Figure 3](#f3){ref-type="fig"} as function of cholesterol fraction. In each case the final cholesterol-CD complex is energetically more favorable than the cholesterol-lipid association in the monolayer. The energy barrier for cholesterol desorption decreases with a decreasing amount of phospholipid present in the monolayers. Thus we conclude that for all these conditions cholesterol extraction is favorable, but that a kinetic barrier prevents rapid (sub-microsecond) extraction at high lipid/cholesterol ratios when cholesterol is shielded by surrounding lipids. Extraction process in presence of sphingomyelin and unsaturated lipids is similar --------------------------------------------------------------------------------- To evaluate whether or not the type of lipid effects the extraction process, monolayers containing sphingomyelin (SM), a ceramide based lipid, were also considered. The results are found to be similar to those obtained with diC~16~-PC monolayers. At a SM/CHOL 1:3 molar ratio, the simulations show spontaneous cholesterol extraction at a rate comparable to the PC/CHOL system (cf. [Table S1](#s1){ref-type="supplementary-material"}). When the concentration of SM increases, the extraction rate drops, with no spontaneous events observed for the SM/CHOL 1:1 monolayer. Comparing the binding modes of *β*-CD to PC and SM containing monolayers ([Figure 2C, D](#f2){ref-type="fig"}), a stronger binding of SM is noticeable due to the presence of the amide group in the ceramide linker. Analysis of the cholesterol-CD RDFs ([Fig. S1](#s1){ref-type="supplementary-material"}) and PMFs ([Fig. S2](#s1){ref-type="supplementary-material"}) confirm that the extraction process is very similar to that of PCs. Interestingly, and in agreement with experimental evidence[@b6], the barrier in the SM system seems to disappear already at a 1:2 ratio whereas for PC this is observed only at a 1:3 ratio. Consistent with these data is the occurrence of spontaneous cholesterol extraction in SM/CHOL, but not PC/CHOL, 1:2 monolayers ([Supplementary Fig. S3](#s1){ref-type="supplementary-material"}). In addition, we looked at the effect of substituting the fully saturated lipid tails for (poly)unsaturated lipid tails. PMFs for 4:1 and 1:1 mixtures of diC~18:2~-PC/CHOL monolayers are shown in [Supplementary Fig. S2](#s1){ref-type="supplementary-material"}. The energy profiles again look similar to those of the saturated lipids, with a net gain in free energy for extraction of cholesterol by the *β*-CD dimer but a barrier against spontaneous extraction. The associated extraction and barrier free energies are shown in the inset of [Figure 3](#f3){ref-type="fig"}. Due to the poor packing of unsaturated lipids and cholesterol, the free energy gain is larger compared to the saturated lipid case at high lipid/cholesterol ratio, e.g. Δ*G^extr^* = 0 kJ mol^−1^ for diC~16~-PC/CHOL 4:1 versus Δ*G^extr^* = 40 kJ mol^−1^ for diC~18:2~-PC/CHOL 4:1. Remarkably, this difference vanishes at a lipid/CHOL ratio of 1:1, with Δ*G^extr^* \~ 20 kJ mol^−1^ irrespective of lipid type. Based on our results on different lipid types, we conclude that substituting either glycerol based (diC~16~-PC) for ceramide based (SM) lipids or saturated for unsaturated lipids does not change the extraction process on a qualitative level. 
Subtle changes in the binding mode of *β*-CDs due to the sphingosine moiety and less efficient packing of cholesterol with unsaturated lipids, however, do modulate the free energy landscape and result in substantially different extraction kinetics. Extraction of cholesterol from bilayers is similar to monolayers ---------------------------------------------------------------- Most of the conclusions drawn so far are based on simulations of lipid model monolayers. In order to verify our results for biologically more relevant systems, we repeated some of the PMF calculations for lipid bilayers. The PMFs for extracting a single cholesterol into a *β*-CD dimer from either mono- or bilayer are compared in [Supplementary Fig. S4](#s1){ref-type="supplementary-material"}. In case of diC~16~-PC/CHOL 3:1, the free energy barrier as well as the final states are nearly identical between mono- and bilayer systems. Replacing diC~16~-PC by SM, the profiles still look similar but a notable difference (about 20 kJ mol^−1^) shows up in the relative free energy of the membrane bound state. Apparently cholesterol is tighter bound in the SM bilayer with respect to the monolayer. For the diC~18:2~-PC/CHOL 10:1 mixture, the PMFs are again nearly identical between the monolayer and bilayer membranes. Together, the data in [Fig. S4](#s1){ref-type="supplementary-material"} point to a very similar extraction mechanism between mono- and bilayers, allowing us to extrapolate most of our conclusions from monolayers to bilayers. Cholesterol is preferentially extracted from liquid-disordered membranes ------------------------------------------------------------------------ Finally, we turn to the question whether cholesterol is more easily extracted from *L~d~* or *L~o~* membranes. This question can be answered by a comparison of the PMFs among the different lipid compositions ([Fig. S2](#s1){ref-type="supplementary-material"}) for compositions mimicking those of *L~o~* (cholesterol fraction 0.3--0.6) and *L~d~* (cholesterol fraction below 0.25) domains. Although the PMF profiles are qualitatively similar, the end states show a larger free energy difference in the case of our *L~d~* mimic. Consequently, the energy barrier between the two states is reduced. Thus, complex formation in the *L~d~* phases is favored by two different effects: on the one hand, the formation of a more stable complex and on the other hand the decrease of the activation energy, see [Figure 3](#f3){ref-type="fig"}. Efficient extraction also depends on the affinity of *β*-CD for either *L~d~* or *L~o~* phase. To measure this, we calculated the binding free energy of a *β*-CD dimer on membranes of different compositions ([Fig. S5](#s1){ref-type="supplementary-material"}). This binding energy is found to be favorable by 35 ± 5 kJ mol^−1^ for both *L~d~* and *L~o~* mimicking membranes, suggesting that *β*-CDs will distribute quite evenly across a heterogeneous membrane. We thus expect that cholesterol is extracted more easily from a liquid-disordered phase, both from a kinetic and from a thermodynamic point of view. To test this prediction directly, we resort to a coarse-grained (CG) approach. Our CG model system is either a planar lipid bilayer or a small liposome, both composed of a ternary mixture of SM, diC~18:2~-PC, and CHOL at approximately 1:1:1 ratio. 
Consistent with the experimental ternary phase diagram of comparable systems[@b22], the membranes show coexisting *L~d~* and *L~o~* domains, with lipid/cholesterol compositions close to 10:1 for the *L~d~* and 1:1 for the *L~o~* domain. CD dimers were placed at a concentration of 0.04 M in the surrounding aqueous solution. Due to the rapid diffusion of the *β*-CDs, the lipid-water interface is covered by the carbohydrate in a matter of 10 s of nanoseconds, consistent with our atomistic resolution simulations. The final configuration of the systems are shown in [Fig. 4](#f4){ref-type="fig"}. In agreement with our calculated binding free energy calculations, *β*-CDs are found adsorbed in roughly equal amounts at the *L~d~* and *L~o~* domains in both planar and vesicular membranes. In case of the planar membrane we observe the formation of 11 *β*-CD/cholesterol complexes during a total of 0.1 ms simulation time ([Fig. 4](#f4){ref-type="fig"}, insert). Analyzing the type of lipids contacting a cholesterol at the moment of extraction, we can assign extraction from either ordered or disordered domains. We find that spontaneous extractions take place both from the *L~d~* and *L~o~* domains, with 8 and 3 occurrences respectively. Expressed as rate per *β*-CD monomer, this corresponds to 2 × 10^−6^ ns^−1^ cholesterol extractions for the *L~d~* and 7.5 × 10^−7^ ns^−1^ for the *L~o~* region. Given the about tenfold lower cholesterol concentration in the *L~d~* domain, it appears that cholesterol is extracted more efficiently from disordered membrane domains, supporting our predictions based on the atomistic PMFs. In the vesicular system, the overall extraction process is the same as for the planar bilayer. Again 11 cholesterol molecules were extracted, however, during a much shorter simulation time of 20 *μ*s. We explain the higher extraction efficiency by considering the high curvature of the simulated vesicle which increases the accessibility of cholesterols. Also in the vesicular system most cholesterols were extracted from the *L~d~* domain, but as the domains are less well defined further quantification proved difficult. Discussion ========== Our computational microscopy approach provides a molecular view on *β*-CD mediated cholesterol extraction from model membranes. We show that a *β*-CD dimer can efficiently extract cholesterol on a submicrosecond time scale, at least for membranes in which the cholesterol is not too strongly embedded. This occurs most easily in the presence of unsaturated lipids and/or at high cholesterol levels. The effect of unsaturation can be explained by considering the chemical potential of cholesterol. Bennett et al.[@b23] showed that a free energy of nearly 80--90 kJ mol^−1^ is required for the extraction of a single cholesterol molecule out of a diC~16~-PC/CHOL bilayer (in the range 0--40% cholesterol), however, this energy drops by about 10--20 kJ mol^−1^ in case of unsaturated PCs. The facilitated extraction at high cholesterol levels is caused by the ability of the *β*-CDs to directly interact with the cholesterols without being hampered by lipid head groups. Based on our PMF analysis, a large kinetic barrier appears for lipid fractions exceeding 0.3 (cf. [Figure 3](#f3){ref-type="fig"}). This barrier steeply increases with increasing lipid fraction. Our results explain the experimental observations made by Ohvo et al.[@b12] and recently by Flasinski et al.[@b6]. 
In the monolayer experiments of Ohvo et al. and Flasinski et al., a strong correlation was found between a reduced cholesterol content and a diminished ability of cyclodextrins to extract cholesterol. Our data also support the model of Lange and Steck[@b24], in which the ease of extraction of cholesterol from disordered membranes is explained by an increase in cholesterol activity. According to this model, cholesterols form stoichiometric complexes with saturated phospholipids in particular. These complexes prevent the projection of cholesterol into the aqueous phase, in contrast with activated (non-complexed) cholesterols, which undergo constant bobbing motions that ease their capture by extracellular acceptors such as CD. The model thus predicts facilitated uptake under conditions where complex formation is reduced, i.e. in the presence of unsaturated lipids or at high cholesterol levels, in line with our observations.

In regard to the ability of *β*-CD to extract cholesterol from either *L~d~* or *L~o~* domains, the two effects mentioned above appear to counter each other. The presence of unsaturated lipids favors extraction from *L~d~* domains, whereas a higher cholesterol fraction favors extraction from *L~o~* domains. Whether cholesterol is primarily extracted from *L~d~* or *L~o~* domains will therefore critically depend on lipid composition as well as extraction conditions. Based on the data shown in [Fig. 3](#f3){ref-type="fig"}, however, it becomes clear that cholesterol is preferentially extracted from the *L~d~* phase. Assuming a typical lipid/cholesterol ratio of 5:1 for the *L~d~* phase, and 2:1 for the *L~o~* phase, the free energy gain upon complexation of cholesterol by *β*-CD is about 40 kJ mol^−1^ and 10 kJ mol^−1^ for the *L~d~* and *L~o~* phase, respectively, a difference of 30 kJ mol^−1^. In addition, the barrier along the extraction pathway increases from 30 kJ mol^−1^ to 60 kJ mol^−1^, again a difference of 30 kJ mol^−1^. The use of slightly different compositions for the *L~d~* and *L~o~* phases changes these numbers but not the overall conclusion. Our CG simulations indeed show preferential extraction of cholesterol from the *L~d~* domain of a 1:1:1 SM/diC~18:2~-PC/CHOL mixture, despite the low concentration (0.1 molar ratio) of cholesterol in the *L~d~* domain.

Recent experimental data by Sanchez et al.[@b17] support our calculations. The experiment correlates changes in the Laurdan GP (generalized polarization) value with changes in the cholesterol content of the membrane. The measured GP value decreases in the *L~d~* phase as a result of the addition of *β*-CD. According to the authors, this change results from the removal of cholesterol from the *L~d~* phase, followed by a rapid re-equilibration of cholesterol from the *L~o~* to the *L~d~* phase. In another study, by Jablin et al.[@b9], neutron reflectometry was used to characterize membranes of various compositions before and after *β*-CD was added to the subphase. Only the structure of bilayers with molar cholesterol ratios mimicking the *L~d~* phase changed, indicating that *β*-CD is capable of removing non-complexed cholesterol.

Finally, we turn the discussion to the relevance of our data for the in vivo situation. A first point of concern is the temperature at which our simulations are performed: 288 K, considerably lower than physiological temperature. The lower temperature was chosen as it has experimentally been shown to be optimal for CD mediated cholesterol extraction from cholesterol monolayers[@b12].
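To put the two 30 kJ mol^−1^ differences quoted above in perspective, a back-of-the-envelope Boltzmann estimate is instructive. The sketch below is illustrative only: it assumes ideal two-state behavior and, for the kinetic part, an Arrhenius-type rate with equal prefactors in both phases, and it ignores concentration, domain area, and diffusion effects, which is why the raw CG event counts differ far less dramatically than these idealized factors.

```python
import math

R = 8.314   # gas constant, J mol^-1 K^-1
T = 288.0   # simulation temperature, K

ddG = 30e3  # J/mol; both the extraction free energy and the barrier differ
            # by ~30 kJ/mol between the L_d and L_o mimics (see text)

factor = math.exp(ddG / (R * T))
print(f"Boltzmann/Arrhenius factor for 30 kJ/mol at 288 K: {factor:.1e}")
# ~2.8e5: in this idealized two-state picture, both thermodynamics and
# kinetics favor extraction from L_d by roughly five orders of magnitude.
```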
To probe the effect of temperature, we recalculated the PMFs of cholesterol extraction from both *L~o~* and *L~d~* mimetic bilayers at 310 K. The resulting PMFs ([Fig. S6](#s1){ref-type="supplementary-material"}) show that the qualitative features do not change upon the increase in temperature. Changes on the order of 10 kJ mol^−1^ in the extraction free energy and the barrier height are observed, both being reduced at the higher temperature irrespective of membrane composition. These data point to an increased cholesterol desorption rate at higher temperature, but other factors such as the cyclodextrin monomer-dimer equilibrium and the membrane adsorption propensity are also temperature dependent and could potentially lead to less efficient extraction.

Another important difference between our simulated systems and real membranes is composition. Most of our results pertain to artificial model membrane systems, in particular monolayers rich in cholesterol. In fact, no spontaneous cholesterol extraction was observed unless the phospholipid-to-cholesterol ratio was 1:3 or the sphingomyelin-to-cholesterol ratio exceeded 1:1; the first is an impossibility for a bilayer, and the second implies more cholesterol than reported for any biological membrane. Furthermore, the lipid mix of SM and diC~18:2~-PC, chosen to illustrate the coexistence of *L~o~* and *L~d~* phases, results in phases with a large difference in cholesterol content and associated membrane order. In the plasma membrane of real cells, the difference in order between the two phases is not as large as in artificial, well-separated phases[@b25]. Direct simulation of cholesterol extraction from more realistic mixtures is, however, hampered for two main reasons. First, simulation of phase separation using more realistic mixtures requires much larger system sizes, as the width of the domain boundary will increase exponentially with reduced line tension between the domains. Second, as we have shown, the spontaneous extraction process from more realistic membranes becomes too slow to be sampled on the accessible microsecond time scale. In order to extrapolate our data to more realistic conditions, we therefore resorted to an indirect approach through computation of the energetic factors involved during cholesterol extraction. As discussed above, our energetic data (cf. [Fig. 3](#f3){ref-type="fig"}) suggest that, also at physiological compositions, cholesterol extraction occurs primarily from the *L~d~* phase. Based on recent in vivo studies involving T cells, Mahammad and Parmryd[@b26] also challenge the widespread assumption that *β*-CD preferentially targets cholesterol in lipid rafts and that sensitivity to *β*-CD is proof of lipid raft involvement in a cellular process. Nevertheless, in real cell membranes the situation might be more complicated than the picture emerging from model systems.

Keeping these considerations in mind, we summarize the main conclusion of our work in [Fig. 5](#f5){ref-type="fig"}. Addition of *β*-CDs to a membrane containing both *L~d~* and *L~o~* domains results in the following processes: i) rapid interfacial binding of *β*-CD dimers uniformly across the membrane, ii) cholesterol extraction mainly from the disordered domain by suitably oriented *β*-CD dimers, iii) re-equilibration of cholesterol between the two domains, notably the migration of cholesterol from the ordered to the disordered domains, and iv) eventual disappearance of the domains once the cholesterol fraction becomes too small.
The whole extraction process slows down as the overall cholesterol content decreases. Furthermore, due to the favorable adsorption free energy of *β*-CDs on the membrane, the desorption rate of the CD-cholesterol complex from the membrane into the aqueous phase is predicted to be very slow. Our results should be considered when interpreting experimental assays that use *β*-CDs to manipulate the cholesterol content of membranes, both in vitro and in vivo, and open the way to the rational design of more efficient *β*-CD based cholesterol carriers.

Methods
=======

System set-up
-------------

For the unbiased atomistic level extraction simulations, a monolayer consisting of 300 cholesterol molecules, 24 *β*-CDs, and 100 diC~16~-PC molecules was set up. The *β*-CDs were initially placed at a distance of 1.0 nm away from the monolayer surface, either in monomeric or dimeric conformations. A number of smaller scale systems, containing 9 *β*-CDs pre-assembled on the monolayer, were simulated to probe the effect of lipid composition on the extraction process. For the free energy calculations, systems consisting of a single *β*-CD dimer in direct contact with either a monolayer or bilayer were used. The CG lipid bilayer consisted of a ternary mixture (SM/diC~18:2~-PC/CHOL 35:35:30 molar ratio), phase separated into coexisting *L~o~* and *L~d~* domains. The initial configuration was taken from our previous work[@b27], replacing diC~16~-PC with SM. The bilayer contained 2000 lipids and was fully solvated with 30000 CG water beads. We added 20 *β*-CD dimers to achieve an overall concentration of 0.04 M. Additionally, a ternary vesicle was prepared using the same composition as the planar bilayer. Details of the system set-up are given in the [Supplementary Materials and Methods](#s1){ref-type="supplementary-material"}. [Table S1](#s1){ref-type="supplementary-material"} provides an overview of all simulations performed.

Computational details
---------------------

Simulations were performed using the GROMACS molecular dynamics package[@b28], version 4.0. The parameter set for the atomistic simulations of *β*-CD was based on the GROMOS force field for carbohydrates[@b29]. The parameters for cholesterol were taken from Höltje et al.[@b30]. Lipids were modeled using an in-house force field (de Vries et al., unpublished), based on the set of parameters for aliphatic hydrocarbons that are part of the GROMOS 53a6 force field[@b31]. The SPC model[@b32] was used for the solvent. The temperature of all systems was maintained at 288 K by weak coupling to a bath. A surface-tension coupling scheme was used to maintain a constant surface pressure of 33 mN m^−1^ for the monolayers. The values for temperature as well as for surface pressure used here have been shown to be optimal for cyclodextrin mediated cholesterol extraction from cholesterol monolayers[@b12]. Parameters for the simulation of the CG lipids and solvent were taken from the MARTINI model[@b33][@b34]. The *β*-CD model was based on the MARTINI carbohydrate model[@b35] and refined to reproduce the thermodynamics of the atomistic simulations. The temperature of both the planar and vesicular membrane systems was maintained at 300 K. In the case of the planar system, semi-isotropic pressure scaling was used. More details of the computations can be found in the [Supplementary Materials and Methods](#s1){ref-type="supplementary-material"}, including a description of the method used to compute the PMFs and the parameterization of the CG *β*-CDs.
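As an illustration of the final post-processing step, Δ*G^extr^* and Δ*G^bar^* can be read off a computed PMF profile in the manner of [Figure 3](#f3){ref-type="fig"}. The sketch below assumes a hypothetical two-column text file (distance in nm, free energy in kJ mol^−1^); the file name and format are illustrative assumptions and not part of the published protocol.

```python
import numpy as np

# Hypothetical PMF profile: column 1, distance r (nm) between the centers
# of mass of cholesterol and the beta-CD dimer; column 2, free energy (kJ/mol).
r, G = np.loadtxt("pmf_profile.dat", unpack=True)

G = G - G[np.argmax(r)]       # reference: cholesterol fully embedded (largest r)
G_complex = G[np.argmin(r)]   # complexed state (r close to 0 nm)

dG_extr = G_complex           # extraction free energy (negative = favorable)
dG_bar = G.max()              # barrier height, measured from the membrane state

print(f"dG_extr = {dG_extr:7.1f} kJ/mol")
print(f"dG_bar  = {dG_bar:7.1f} kJ/mol")
```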
Author Contributions
====================

C.A.L., A.H.V. & S.J.M. designed the research; C.A.L. performed the simulations; all authors contributed to the writing of the manuscript.

Supplementary Material {#s1}
======================

###### Supplementary Information

![Spontaneous extraction of cholesterol from a diC~16~-PC/cholesterol 1:3 monolayer by cyclodextrin.\
(A) Set-up with 12 *β*-CD dimers. Initially placed in the aqueous solution (top-left), the *β*-CDs adsorb at the lipid/water interface (middle) and adopt a number of configurations, either tilted or untilted with respect to the surface normal, or forming stacked barrels. Dimers present in the untilted orientation are capable of extracting cholesterol (top-right, red arrow). (B) Set-up with 24 *β*-CD monomers. Initially dispersed in solution (bottom-left), *β*-CD monomers are also able to bind to the surface of the monolayer (middle); however, only partial extraction of cholesterol is observed (bottom-right, red arrow). Color code: diC~16~-PC, green; cholesterol, yellow; *β*-CDs, white. Water is not depicted for clarity.](srep02071-f1){#f1}

![Close-ups of cyclodextrin membrane interaction.\
(A) Pure cholesterol monolayers, favoring the tilted conformation of the *β*-CD dimer. (B) Adding diC~16~-PC lipids (1:3 PC/CHOL ratio), a straight conformation is stabilized through the formation of hydrogen bonds between the lipid head groups and the hydroxyl groups of the *β*-CDs. (C) Adding more diC~16~-PC lipids (1:1 PC/CHOL), *β*-CDs are no longer able to make direct contact with cholesterols. (D) Replacing diC~16~-PC with SM (1:1 SM/CHOL) results in stronger binding of *β*-CD.](srep02071-f2){#f2}

![Energetic characterization of cholesterol extraction.\
Potential of mean force (PMF) for *β*-CD mediated cholesterol extraction from a diC~16~-PC/CHOL 1:2 monolayer. The reaction coordinate represents the distance between the centers of mass of the extracted cholesterol and the *β*-CD dimer. A distance of 0 nm corresponds to the cholesterol complexed with *β*-CD; the largest distance indicates full embedding of cholesterol in the monolayer. Red bars indicate the various states used to calculate the extraction free energy Δ*G^extr^* and barrier free energy Δ*G^bar^*. The inset shows Δ*G^extr^* and Δ*G^bar^* as a function of the lipid molar fraction for diC~16~-PC, SM, and diC~18:2~-PC/CHOL monolayers. The error bars on these data points do not exceed 5 kJ mol^−1^. Typical composition ranges for *L~o~* and *L~d~* membranes are indicated. Additional PMFs and a supporting table with free energy data are given in [Fig. S2](#s1){ref-type="supplementary-material"} and [Table S2](#s1){ref-type="supplementary-material"}.](srep02071-f3){#f3}

![Cholesterol extraction from coexisting liquid-ordered *L~o~* and liquid-disordered *L~d~* domains.\
Systems consist of SM/diC~18:2~-PC/CHOL at a 35:35:30 molar ratio with 0.04 M *β*-CD dimers in solution, and were simulated at a CG level. Snapshots are shown after 10 *μ*s of simulation for both planar (A) and vesicular (B) membranes. *β*-CDs interact with the lipids from both *L~d~* and *L~o~* domains, and spontaneous cholesterol extraction is observed in both systems. Preferential extraction takes place from the disordered phase. The zoomed view highlights some of the CD-CHOL complexes formed. Color code: SM, light green; diC~18:2~-PC, red; cholesterol, yellow; *β*-CD dimer, white.
Water molecules are not depicted for clarity.](srep02071-f4){#f4}

![Proposed model for cyclodextrin-mediated cholesterol extraction from lipid model membranes.\
*β*-CDs form dimers that bind to a membrane surface with a high affinity of 35 kJ mol^−1^, irrespective of the membrane phase (either *L~o~* or *L~d~*). Extraction, however, is more favorable from the *L~d~* phase, with a gain in free energy of 40 kJ mol^−1^ compared to 10 kJ mol^−1^ from the *L~o~* domain. The extraction of cholesterol from *L~d~* domains will cause a redistribution of cholesterol between the domains, and ultimately result in the disruption of phase coexistence.](srep02071-f5){#f5}
null
minipile
NaturalLanguage
mit
null
You can't plant him in your penthouse, he's going back to his plough (Image via Wikipedia)

We call songs that are a part of any serious music collection standards. Films that everyone should see are classics. These classics and standards not only entertain us, they provide common language reference points for us. We refer to lyrics, movie lines, or even entire scenes in casual conversation. They become part of the repertoire of dialogue that we use without thought and interpret without effort. But what happens if a person hasn’t really seen or heard one of these icons? How are they affected? I can tell you from personal experience. I’ve never seen The Wizard Of Oz.

A Respectful Pause

I know that there is a level of shock that occurs when I tell people that I’ve never watched the Wizard of Oz. Go on, take a moment. Now let’s move on to the question I get from people when this matter comes up. How is it possible that someone my age, who grew up with this film on television every year of his life, has not seen this movie? Some parts I can explain, others I can’t. I remember it being on when I was a kid, but I was always doing something else. Short attention span? Could be. Truthfully, there have been times I have avoided the film for no reason other than to see how long I can survive without it. Over the years, I have seen parts of the movie on television. I know there’s a scarecrow, a lion and a tin man. I know there are good and bad witches and a trip on a yellow brick road. I have no idea if they get what they need on the road trip. I suppose they do, but I can’t confirm that from my own knowledge.

The Consequence Of Ignorance

Like anyone who doesn’t know a language, I fill in the pieces I’m missing with the bits and pieces that I think I do know. Sometimes I get by, sometimes I don’t.

This is not my high school Italian teacher. This woman is much better looking. (Image via Wikipedia)

In high school, a girl I sat with used to sing the Wicked Witch Of The West theme whenever our Italian teacher (remember her?) walked in the room. Of course I got blamed for doing it. The teacher didn’t get agitated with me for pointing out that the voice that was singing was a girl’s and not mine. I got in trouble when the teacher asked why I would sing the Wicked Witch Theme about her and I answered “is that what it is?” She told my mother that was the moment she was sure I was lying to her. More recently, in conversations with others, I’ve figured out things like:

- Flying Monkeys aren’t just a good idea. They are also characters in the movie.
- In the movie, Flying Monkeys are bad. Obviously this movie is fiction.
- The Lion is not brave or tough.
- The reason the Wicked Witch Of The West is designated as wicked is that there is a Good Witch. The existence of a Good Witch seems counter-intuitive to me. Wicked Witch is redundant.

Usually there is some measure of embarrassment on my part when I learn these things. There is also the need for denial of the possibility that I grew up under a rock.

Will I Ever See The Movie?

I know that I will probably give in. There will be a point where I sit down with some popcorn and watch the movie. The whole movie. I don’t know when that will be. Part of what I like about the world is that I’ve always got something to learn. I’m always eager to see something new, even if it is old. But why rush on this? I’ve got time, and I don’t want to give up my dream of airborne simians.
Wow. I’m absolutely stunned. I mean, I probably haven’t seen it since the early 70s, I’m not one of those who watches it regularly but….wow, not once? Huh. I’m curious, has the Omawari-son seen it? Although I’ve never done this, I’m told that if you start the movie (on mute) and Dark Side of the Moon at precisely the same time, it’s supposed to be perfect. Maybe that’s how you should have your first experience.

Oma, I accept the fact that you’ve not seen the Wizard of Oz. Will you please forgive me for not reading (or watching) anything related to Harry Potter, or the three installments of Star Wars that came after the ‘original’ three? I could share other ‘classics’ I’ve yet to experience, but I fear the repercussions.

You’re not missing much by not seeing The Wizard of Oz. As long as you understand the references — following the yellow-brick road, paying no attention to the man behind the curtain, “Somewhere Over the Rainbow” — you’re good. The movie is a technical marvel — the “Avatar” of its time — but as narrative, it’s pretty hard to sit through.

I would say I pity you, but I understand the ‘holding off’ part. I have never seen Alice in Wonderland all the way through, and do not ever intend to. If I ever did, it would only be to please the masses. Rather like I should eat Cow Tongue to please the congregation, should I ever convert to Judaism. Not gonna happen, if only to avoid cow tongue.

I support your right to ignore this movie. (but you are so missing out…maybe see it with the first grandchild who comes along?)

If the robot flew I would watch the movie every day. They insist on calling it a tin man, but it has a good shape such that if he had rockets in his feet he would fly really well. Once airborne, he could dialogue with the monkeys and form a sort of detente…

The amount of “pop wisdom” I’ve missed is even more abundant than my ignorance of classics, which is saying something. I’ve never seen The Goonies (mentioned because people keep bringing it up in reference to Super 8 – you included, I believe), ET, any Disney films, Dr. Seuss books, etc. My childhood wasn’t much of a childhood, so in addition to the bad stuff, I didn’t get much of the good. While I was into music at a young age, I completely missed all the videos – didn’t have access. Hell, I loaded Da Da Da the other day to listen (on youtube) and it was quite interesting seeing the video for the first time for something I owned on a 12″ single in ’82. Weird stuff!

I think people’s connections to Seuss or those kid movies are because you saw them during a point in your life where you were safe and happy/carefree. Seeing them now isn’t going to affect me the same way. Your warm fuzzy feelings are like when, as a 40yo, you see a person you went to primary school with: you’re excited/delighted and it’s super-duper fun! That’s cos you remember your childhood friendship and the fun you had at recess.

Purely from a statistical standpoint, it’s amazing that you haven’t seen this movie. At this point, I would refuse to see it just to be contrary.

I’ve never seen It’s a Wonderful Life and I don’t really have any desire to. There. I said it.

Oh blah. There are TONS of popular movies I have never watched. I don’t watch many movies. And my responses to public pressure in these things remind me of something I learned in The Horse Whisperer. Apparently, horses are ‘into pressure animals’. That means they react to pressure by leaning/pushing into it. So if you want a horse to move, pushing it is the exact wrong thing to do.
Man is a social animal, and I am a social into pressure animal. When they’re pushing where I personally was not already inclined to go, I instinctively push back. My parents still have trouble with this idea.

If you really just need a way not to embarrass yourself, you could cheat and read the book. You’ll get most of the facts without ever seeing the actual cult object, and nearly everyone will be fooled. I won’t tell. Promise. Oh yeah, and I actually have seen the movie, but it took me years because I was scared witless of the monkeys. I don’t know about flying monkeys in principle, but those monkeys were bad. Definitely.

Please don’t let your regime conduct that particular experiment. I’d have to move to Spain or something.

I don’t know whether to click my heels three times and go somewhere else or stay here and dredge up the dozens of Wizard of Oz references I could. The best part of that movie is when it goes from black and white to full technicolor. It’s wonderful. I do wonder if my hatred of monkeys has anything to do with that movie. Or my hatred of oil cans, red shoes, men dressed as Cossacks, and cocaine. That was cocaine falling out of the sky, wasn’t it?

The cocaine explains a lot. The robot was probably the only one who still had his wits about him. Yeah, I know he was the Tin Man, but let’s face it, he’s a robot. A robot who doesn’t fly or have a ray gun. Not the good kind of robot.

I tend to be pretty far behind the curve when it comes to popular culture, but even I’ve seen Wizard of Oz. Anyway, there are so many lines from WoO that are part of daily usage that you can probably qualify as having seen it.

I’m with you, Oma, when you say you don’t get musicals…me either! I have also resisted anything Harry Potter or Star Wars or Star Trek or Avatar, which the nerds I live with find difficult to understand! I haven’t seen Gone With the Wind, The Godfather, or GoodFellas, or Titanic either. I have, however, seen Ole Yeller and The Wizard of Oz. Ole Yeller made me cry, and The Wizard of Oz scared me, especially the flying monkeys and the Wicked Witch! The costumes were very good though, considering it was the 1930’s…except for the silver paint making the Tin Man sick…

My cool cousins had a super talented artist friend who painted that album cover life-size on the wall of their family room (at that time called a rec or rumpus room, which was, of course, in the basement). As you walked down the stairs, it was right at the bottom and it looked like 6-foot-tall Elton John was inviting you to step in with him. Just as long as you had your enormous glasses and platform shoes.

Besides the cultural references, which I am sure you are aware of and which have been mentioned in the previous comments, the cinematic aspects of the film are significant. The first part of the movie takes place in “Kansas” and is filmed in black and white. When Dorothy and her wee dog Toto enter “The Land of Oz” it is filmed in colour. This would mark the first time most people saw a colour film. They were shocked, amazed and spellbound. With the advances in cinematography since then it seems extremely mundane. You just had to be there.
null
minipile
NaturalLanguage
mit
null
Distribution of penicillin-nonsusceptible pneumococcal clones in the Baltimore metropolitan area and variables associated with drug resistance. We assessed the distribution of the clonal groups (as determined by pulsed-field gel electrophoresis) of penicillin-nonsusceptible Streptococcus pneumoniae that caused invasive pneumococcal infection in the Baltimore metropolitan area during 1995 and 1996. Although S. pneumoniae caused invasive disease in individuals from a variety of demographic groups and locations, strains isolated during the season in which respiratory infections are most common were more likely to be from clonal groups associated with penicillin resistance than from other groups.
null
minipile
NaturalLanguage
mit
null
Liao Pen-yen

Liao Pen-yen (born 26 September 1956) is a Taiwanese politician who served two terms in the Legislative Yuan from 2002 to 2008.

Education

Liao graduated from Fu Jen Catholic University with a degree in business management.

Political career

Liao was elected the mayor of Shulin in 1993, serving in that position until 2002. During his tenure, Liao and other township heads were investigated for corruption, as they had charged multiple businesses a "township chief tax" to raise money for local community development funds. He ran in the legislative elections of 2001 and won a seat in the Legislative Yuan. Liao was the Taiwan Solidarity Union's caucus whip throughout most of his time in office. His expulsion from the TSU for refusing to support the party's policies, announced in October 2007 and confirmed in November, led four other party members to defect. Shortly after Liao's expulsion, the TSU ran ads in the United Daily News suggesting that Liao should join the Democratic Progressive Party. Later that month, Liao and a couple of other defectors launched reelection bids under the DPP banner. A group of women's rights organizations opposed Liao's candidacy, and his 2008 campaign was unsuccessful. Though he was reported to be leading the race six days before polls opened, Liao lost to Huang Chih-hsiung by 5.49% of the vote. Liao stood for election again in 2012, but did not win. He was elected to the New Taipei City Council in 2014.

Controversy

In 2010, the Taipei District Court found Liao not guilty of taking bribes from the Taiwan Dental Association. In September 2011, the Taiwan High Court heard an appeal of the case and sentenced him to seven years and three months' imprisonment, as well as a suspension of civil rights for three years. The High Court ruling was appealed to the Supreme Court, which cleared him of the charges in March 2016.

Personal life

Liao Pen-yen's son Liao Yi-kun ran for a legislative seat in 2016, but was defeated in a Democratic Progressive Party primary by Su Chiao-hui.

References

Category:1956 births
Category:Living people
Category:New Taipei Members of the Legislative Yuan
Category:Members of the 5th Legislative Yuan
Category:Members of the 6th Legislative Yuan
Category:Taiwan Solidarity Union Members of the Legislative Yuan
Category:Fu Jen Catholic University alumni
Category:Taiwanese people of Hakka descent
Category:Mayors of places in Taiwan
Category:Taiwanese city councilors
Category:Democratic Progressive Party Members of the Legislative Yuan
null
minipile
NaturalLanguage
mit
null
Q: Request Access Token in Postman for Azure Function App protected by Azure AD B2C

I have an AspNetCore 2.0 MVC web API secured by an Azure Active Directory B2C tenant. I have been able to use Postman to test the API end points by following this SO posting: Request Access Token in Postman for Azure AD B2C (in particular, the Microsoft documented steps referenced in SpottedMahn's comments: https://docs.microsoft.com/en-us/aspnet/core/security/authentication/azure-ad-b2c-webapi#use-postman-to-get-a-token-and-test-the-api ) Now, I am working on a serverless version of the above - the app is pretty much identical except that the endpoints have been implemented by Azure Functions in an Azure Functions App. The Functions App has Authentication on, Log in with Azure Active Directory, and the following settings: This is how I have set up the Application in the Azure B2C tenant: If I access the functions endpoint via a browser, I get successfully routed to the Azure AD B2C login page and can log in, then see the results from the API endpoint. So I'm pretty confident all is good w.r.t. the Azure AD B2C <-> Function App configuration. However, I can't use the Request Access Token technique linked above to get a token and inspect the endpoint in Postman. If I take the token obtained after authentication (for example, by using Fiddler and observing the id_token being returned), and in Postman I choose Bearer authentication and supply that id_token, then Postman successfully hits my endpoint. However, if I follow the steps in the linked document above, I do get the "login" popup and then do get a valid [looking] token, but when I click Use Token and run the request, I get "You do not have permission to view this directory or page." I'd really like to be able to request an access token from Postman just like I can with my AspNetCore 2.0 app (really just for the consistency, so I don't have to remember lots of different techniques). Is that possible for Azure Function Apps, and if so, any clues what I'm doing wrong in the above?

A: Ah, I stumbled upon it. I fixed it by adding the Postman API client id (note: the Postman API client id, not the Postman App client id) [those references will make sense in the context of the Microsoft how-to linked above] under "ALLOWED TOKEN AUDIENCES" (visible in the screenshot in the question above).
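For reference, Postman's Use Token button simply attaches the acquired token to the request as a Bearer credential. A minimal equivalent outside Postman might look like the sketch below; the function URL and token value are placeholders rather than real values from the setup above, and the token itself is still obtained via the Microsoft-documented B2C flow referenced in the question.

```python
import requests

# Placeholders -- substitute your own Function App endpoint and a token
# obtained through the Azure AD B2C sign-in flow (e.g., captured with
# Fiddler, or issued to the Postman application registered in the tenant).
function_url = "https://<your-function-app>.azurewebsites.net/api/<endpoint>"
access_token = "<token-from-b2c>"

resp = requests.get(
    function_url,
    headers={"Authorization": f"Bearer {access_token}"},
)

print(resp.status_code)
# 200 -> the token was accepted; 401/403 -> check that the token's audience
# (aud claim) is listed under ALLOWED TOKEN AUDIENCES on the Function App,
# which is exactly the fix described in the answer above.
print(resp.text[:500])
```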
null
minipile
NaturalLanguage
mit
null
New | Old | Green | Prefab

Historically Inspired … Historically Green®

New World Home is an award-winning producer of sustainable housing dedicated to fulfilling the convergence of historically inspired design and state-of-the-art green products with advanced building science and ultra-efficient manufacturing methodologies. The end result represents the advent of the New Old Green Prefab® home. A New World Home respects our country’s rich architectural heritage, integrates all of the modern amenities of the 21st century and exceeds the most stringent green building performance standards in the housing industry.

The Star-Ledger, NJ’s largest online newspaper, today published an update on the New Jersey Shore rebuilding efforts in the aftermath of Sandy. The Home & Garden article features multiple New World Home regional projects. Tyler Schmetterer, original New World Home founder, describes how these and other projects are contributing to the architectural integrity and safety of future generations of NJ homeowners.

In western New Jersey, there’s a high-profile supporter of modular building in former Gov. Christine Todd Whitman. On the family’s Tewksbury farmstead, Whitman’s daughter Kate lives in a traditionally styled modular home with her husband, Craig Annis, and their four children. For Whitman, who also headed the U.S. Environmental Protection Agency, the dwelling exemplifies how modular construction can produce highly energy-efficient “green” dwellings designed to mesh with historic areas.

A modular farmhouse, traditionally designed by New World Home on the farm owned by former NJ Gov. Christine Todd Whitman

The Whitman-Annis house, with its steel roof, conforms to requirements of the Oldwick Historic District in which it is located, says Tyler Schmetterer, founding partner of New World Home, which built the house. “The architecture is equally important as the green attributes,” he says. Behind the facade of a traditional farmhouse, builders installed bamboo floors, a reclaimed-wood kitchen island, water-conserving kitchen and bathroom fixtures, energy-efficient appliances and other features. “When we design something, we want it to look like it’s been there 150 years,” Schmetterer says.

Country Living magazine’s House of the Year in 2010 was this 1,607-square-foot modular cottage designed by New World Home. It was first shown in New York City and then moved to Crystal Springs Resort in Hardystown.

On a verdant lot in Sussex County, another New World Home design had been a model house at Crystal Springs Resort, an example of what could be built in the area. Created in partnership with Crystal Springs and in collaboration with Country Living to produce the magazine’s annual House of the Year in 2010, the 1,607-square-foot cottage spent two weeks as an exhibition at Manhattan’s World Financial Center that year. Then it was disassembled and trucked to its present location in Hardystown. The model cottage, which sold last year, stands as an example of the sturdiness of modular houses, Schmetterer says. “The modules are independently strong and reinforced by the rest of the house.” The sturdiness of a well-engineered modular structure also makes it a good solution for those building in hurricane-prone areas, he says. “Our stock home is built to withstand 120-miles-per-hour wind. In the Hamptons, that’s code. New Jersey is going to have to do the same thing,” Schmetterer says. “Look what Sandy did at 70 miles per hour.
The homes have to be able to withstand these storms.”

New Jersey Monthly magazine’s Home & Garden Special, Prefab Goes Green, by Lauren Payne in the May 2013 issue is now available on newsstands.

Prefab: Easy Being Green – The family of former NJ governor and Environmental Protection Agency chief Christine (“Christie”) Todd Whitman wanted a house that looked like it had always been part of its bucolic surroundings, but ecologically state-of-the-art. They found what they needed – in a factory with New World Home

Having participated in the process from start to finish, Governor Whitman says she is well satisfied. “We’d been searching the market for many years for an authentic green prefab housing company that incorporates traditional architecture,” she says. “My family and I were very excited when we connected with New World Home since the company is clearly making great strides in transforming the home building industry.”

Tyler Schmetterer, an original founder of New World Home, says, “With our collective commitment to the environment and the support of Governor Whitman and her wonderful family, this project is a testament to the current state of green prefab design and manufacturing in this country.”

Prefabulous series author Sheri Koones illustrates the many ways of using prefabrication to create almost-off-the-grid homes – houses that are not only environmentally friendly but also function on a fraction of the energy required by most houses, and additionally are more comfortable, healthier, quieter inside and far cheaper to operate. New World Home projects highlighted in the book include homes in East Hampton, NY; World Financial Center, NYC; Crystal Springs, NJ; and Oldwick, NJ. Tyler Schmetterer, original founder of New World Home and newly formed regional master licensee Systems Built, says, “We are thrilled to be collaborating with Sheri Koones and furthering our collective pursuit of promoting historic green milestones in the residential building industry.” Schmetterer adds, “With our shared commitment to sustainable housing and the ongoing support of homeowners like former Gov. Whitman’s family, the Prefabulous books will invariably bring greater awareness of the inherent advantages of green prefab housing and further encourage mainstream use of factory-built construction to provide healthier and more sustainable homes.”

Praise for Prefabulous + Almost Off the Grid

World Financial Center - NYC

“We need to learn from our past mistakes regarding energy consumption and embrace new ways to reduce our needs. It’s time for everyone to consider more clean energy options in our lives. Prefabulous + Almost Off the Grid: Your Path to Building an Energy-Independent Home will inspire and help you do that.” – Robert Redford, Environmentalist, Actor, Director

Whitman Family Home - Oldwick, NJ

“You can build a high quality, environmentally friendly and efficient home at a reasonable price with a look and feel of a traditional home. Advancements like those used in our house and the other houses in this book will transform the homebuilding industry.” – Christine Todd Whitman, former Governor of New Jersey and Administrator of the Environmental Protection Agency

Prefabulous + Almost Off the Grid is published by Abrams Books and is available online and in bookstores around the country.

This New House on the DIY Network checks out the so-called “New Old Green Modular” home.
It’s all about respect for architectural heritage, 21st-century amenities and performance exceeding certification standards, all on a schedule that makes these homes move-in ready in just a few weeks. This New House will air the original New World Home segment on the DIY Network on November 15, 2012 at 7:30pm E/P and again on December 18, 2012 at 7:00pm E/P. The episode, aptly entitled “Brave New World”, includes a house tour of the former NJ Gov. Whitman family home by Mark Jupiter, original co-founder of New World Home and newly formed regional master licensee Systems Built.

November 15, 2012 7:30 PM e/p
December 18, 2012 7:00 PM e/p

Mark Jupiter, original co-founder of New World Home and newly formed regional master licensee Systems Built, was recently interviewed live on-air by CNBC.

“You may think you only have two choices when buying a home, right? A home someone else has lived in, a used home, and a new home built by a developer. There’s a third alternative, a prefabricated custom home. That market is growing. How big might it become? Let’s ask Mark Jupiter, founding partner at New World Home.” - CNBC

Mark Jupiter, original co-founder of New World Home, interviewed on CNBC

COVER PICTURE: New World Home's 2010 New York City exhibition at the World Financial Center

COVER STORY: New World Home brings affordable, green, modular housing to New Jersey (and beyond)

“After years of research and development, New World Home recently unveiled its affordable Essential Housing™ product line. Today, the company, working with five manufacturers across the country, offers homebuyers myriad home-design options, all of which offer the highest possible energy savings and usage of Earth-friendly materials. Details like metal roofs, rainwater harvesting systems, dual-flush toilets, and use of low-VOC paints, stains, and finishes are standard energy-saving mechanisms that allow homeowners to save about 60 percent on their energy bills and more than 50 percent of their water usage.”

Factory Factor

Modular homes are still considered radical by many builders, but there’s a middle ground between box module and stick-built that they are starting to warm up to. We are of course referring to panelized walls, roof systems, and other prefab components as a means of moderating costs, reducing job site waste, and improving quality with structural pieces that aren’t exposed to weather for long stretches of time. Whereas “factory built” was once considered synonymous with “trailer park,” houses today that incorporate panelized design are nearly impossible to distinguish from conventionally built homes once they’re stitched up. And, contrary to some lingering bias, the prefab stuff is not invariably contemporary. Many factory-built homes now come in traditional styles such as Georgian, colonial, and even Victorian. Homes in the Country Living Collection by New World Home are about as traditional as they come and – surprise! – they’re modular. The portfolio includes five house plans ranging from 1,100 to 2,300 square feet, each of which can be built for $175 to $225 per square foot.
Do you currently own a piece of land? If the answer to this question is yes, then now is the time to unlock the true value of your land. New World Home builds and delivers homes that are historically inspired and affordably priced and that exceed the highest green standards in the industry. Your new home will arrive at your site up to 90% complete and is typically move-in ready in just 120 days from the start of production. New World Homes are built in an environmentally controlled factory setting utilizing state-of-the-art technology, where quality control is monitored at every station on the production line. Our construction process ensures an extremely well-built home that can save more than 50% in energy consumption, thousands of gallons per year in water, and thousands of dollars per year in utility and maintenance costs, and that provides superior indoor air quality.

Create your personal green legacy today. Your home is where your heritage and family history are rooted … to be shared, protected and passed on for future generations. Respecting the past with an eye to the future is a delicate balance that will protect your land for generations to come.

New York House Magazine recently wrote an article on the top 10 players who are making an impact by creating a more sustainable built environment. New World Home’s own Mark Jupiter and Tyler Schmetterer make the list:

Mark Jupiter and Tyler Schmetterer
Co-Founders, New World Home

Mark Jupiter and Tyler Schmetterer, Skidmore College roommates who have been best friends for over 25 years, pooled their resources to launch New World Home with the belief that their green modular homes can transform the housing industry. New World Home builds and delivers homes that are historically inspired and affordably priced and that exceed the highest green standards in the industry. Its New Old Green Modulars (NOGMs) have achieved LEED Platinum and NAHB Emerald certification, without relying on costly renewable energy sources (e.g. solar panels, wind turbines, geothermal systems, etc.). Through superior building science, smart design, and an ultra-tight building envelope, the homes achieve 50-percent-plus energy savings over a typical site-built home from day one. Having built several NOGMs in New York, Georgia, and New Jersey, the company plans to expand nationally through a turnkey licensing program to provide “on-demand housing”—working with developers, builders, landowners, and realtors to create regionally appropriate, sustainable housing. Because its homes have such a short building timeline, there is no longer the need for speculative building. This “on-demand housing” model, the executives feel, will become quite popular. New World Home is building a home in New Jersey for the family of former Governor and EPA head Christie Todd Whitman. It’s on target to be the first LEED Platinum and NAHB Emerald factory-built home in that state.
null
minipile
NaturalLanguage
mit
null
Early Adaptive Functioning Trajectories in Preschoolers With Autism Spectrum Disorders. In preschoolers with autism spectrum disorder (ASD), symptom severity has a negative impact on the development of adaptive functioning, with critical consequences for the quality of life of those children. Developmental features such as reduced social interest or the presence of behavioral problems can further impede daily life learning experiences. The first aim of this study is to confirm the negative impact of high symptom severity on adaptive functioning trajectories in preschoolers with ASD. The second objective is to explore whether reduced social interest and severe behavioral problems negatively affect developmental trajectories of adaptive functioning in young children with ASD. In total, 68 children with ASD and 48 age- and gender-matched children with typical development (TD) between 1.6 and 6 years were included in our study, and longitudinal data on adaptive functioning were collected (the mean length of the longitudinal data collection was 1.4 ± 0.6 years). Baseline measures of symptom severity, social interest, and behavioral problems were also obtained. We confirmed that children with ASD show parallel developmental trajectories but significantly lower adaptive functioning performance compared with children with TD. Furthermore, analyses within the ASD group demonstrated that children with higher symptom severity, reduced social interest, and higher scores for behavioral problems exhibited especially low or rapidly declining trajectories of adaptive functioning. These findings bolster the idea that social interest and behavioral problems are crucial for the early development of adaptive functioning in children with autism. The current study has clinical implications in pointing out early intervention targets in children with ASD.
null
minipile
NaturalLanguage
mit
null
Facebook said 53,000 people were talking about Terence Crutcher’s death by police, but instead it showed me Trends about Xbox, a Game Of Thrones actor, and New Jersey’s governor, even though they all had less chatter.

[Update 3:45pm PT: As of roughly 2:45pm Pacific, “Tulsa Police Shooting” became a Trend on Facebook. But the fact that it took an entire day to appear, and four hours after TechCrunch published this story, demonstrates just how badly Facebook Trends needs to be rethought. This article has been revised.]

Facebook denies it’s a media company, and has tried to distance itself from editorial decision-making by firing all its human Trend curators. But its values and stance towards important social issues are coded into the algorithms and processes that surface trends, and they’re not doing the public justice. Hopefully this incident will spur Facebook to re-examine how it chooses trends, the way the Ferguson protests inspired Jack Dorsey to get Twitter more involved with activism for worthy causes.

Yesterday, police in Tulsa, Oklahoma released disturbing video footage of Terence Crutcher’s death on September 16th. Crutcher’s SUV stalled on the freeway, but when police arrived, they drew their guns on him. With his hands up, he walked towards his car, and was then tased and shot by police. Video shows officers backing away rather than providing medical aid, and Crutcher later died in the hospital.

Video footage from a helicopter and dashboard camera exploded on Facebook Monday thanks to its auto-play feature. The story became front-page news on sites like CNN and the New York Times. But many users didn’t see anything about Terence Crutcher in Facebook’s Top or Politics trends. Facebook has not responded to a request for comment.

While Trends are personalized, users I talked to who work in social justice also aren’t seeing Crutcher listed, and searching for stories about the incident multiple times didn’t surface him in Trends. That seems to go against Facebook’s VP of News Feed Adam Mosseri’s statement last week at TechCrunch Disrupt SF that “we have a responsibility to make sure that people are finding value with the time they spend with News Feed and Facebook.” When asked about the impact of removing the human description writers from Trends, Mosseri insisted “I think it’s better”.

Humans still help Facebook avoid highlighting common words like #Lunch as a trend. But since the change, multiple fake stories, like 9/11 being an inside job, have slipped into Facebook’s Trends. Anybody who read the newspaper or looked at the numbers today could tell that Terence Crutcher is a trend. But Facebook’s system can’t yet, and that omission is a de facto editorial decision.
null
minipile
NaturalLanguage
mit
null
Matthew Thorsen

Farrington's Mobile Home Park in Burlington's New North End

Residents of Burlington's only mobile home park have signed a purchase agreement to buy the land on which their houses are parked. When the Farrington's Mobile Home Park went on the market for $5 million last November, its inhabitants worried they'd be displaced by a developer looking to capitalize on the prime real estate. Located just off North Avenue, the 11-acre property with 120 lots offers what is widely considered to be the most affordable home-owning option in a city where the cost of housing has escalated. Residents voted to form a cooperative, with the goal of purchasing the property themselves. Robert Farrington, one of several family members who inherited the New North End park, said at the time that he was "100 percent" in support of their effort. But the looming question for months was: Could the residents — many of whom are on fixed incomes — actually cobble together the money to make it happen?

Theresa Lefebvre, who's lived in Farrington's for three decades, is president of the new North Avenue Co-Op. She announced today that the group signed a purchase agreement with the Farringtons on Tuesday. It had help along the way from the city, nonprofits with experience financing cooperative purchases, and strong state laws protecting mobile home park tenants. Citing the complexity of the deal, she said they don't expect to close on the sale for another four months and aren't releasing further details about the transaction in the meantime, including the final price.

The co-op may need to sell a swath of green space at the southern end of the park to raise the necessary money, according to Lefebvre. But members are hoping instead to raise $800,000 through donations, which would allow them to preserve the land as a play area for children and to have a place to pile snow in the winter.
null
minipile
NaturalLanguage
mit
null
NO. 12-12-00331-CV

IN THE COURT OF APPEALS

TWELFTH COURT OF APPEALS DISTRICT

TYLER, TEXAS

CURTIS MCCLENDON, § APPEAL FROM THE 159TH
APPELLANT

V. § JUDICIAL DISTRICT COURT

DEEP EAST TEXAS PROPERTY
MANAGEMENT, LLC,
APPELLEE § ANGELINA COUNTY, TEXAS

MEMORANDUM OPINION

Curtis McClendon appeals the trial court’s summary judgment granted in favor of Appellee Deep East Texas Property Management, LLC (Deep East). In one issue, McClendon argues that the trial court erred in granting summary judgment in Deep East’s favor. We affirm.

BACKGROUND

Deep East is a property management company located in Lufkin, Texas, and is owned by Charles Royston and his wife. At all times pertinent to this appeal, Deep East had two employees––McClendon, who was the property manager, and the office manager, May Dessa Thomas. McClendon’s duties as property manager generally involved maintaining Deep East’s rental properties. On July 26, 2011, McClendon reported to Thomas that he had injured his back while moving a window air conditioning unit in conjunction with his duties as property manager. Thomas contacted Royston, who told her to take McClendon to the emergency room. Afterward, Thomas assisted McClendon in filing a workers’ compensation claim. McClendon returned to work several days after his injury under a doctor’s order restricting him to light duty work. But since there was no light duty work available for McClendon to perform, Royston permitted McClendon to take a leave of absence. During this time, McClendon sought medical treatment and reported to Deep East concerning his condition. Deep East initially relied on independent contractors to perform the maintenance on the properties that McClendon ordinarily would perform. But due to the added expense of hiring these independent contractors and additional maintenance needs that arose during this time, Deep East hired a new maintenance person to perform McClendon’s job duties and other duties until he was able to return.

On September 19, 2011, Royston and McClendon met at a Texas Burger restaurant in Corrigan, Texas. Royston asked McClendon how he was doing. McClendon answered that he had finished physical therapy and was waiting on the results of an MRI he had undergone that morning. Without inquiring further about the prospect of McClendon’s returning to work, Royston told McClendon that Deep East was terminating his employment because Royston did not know when he would be able to return to work. In addition, according to McClendon, Royston stated that he could no longer keep McClendon because “he didn’t know how long he would be on workman’s comp.”

On October 14, 2011, McClendon filed the instant suit against Deep East for retaliatory discharge. Subsequently, Deep East filed both traditional and no evidence motions for summary judgment. By its traditional motion, Deep East challenged McClendon’s evidence supporting the element of causation under Texas Labor Code, Section 451.001. In its no evidence motion, Deep East argued that McClendon lacked evidence to support his position that Deep East’s neutral reason for terminating McClendon’s employment was untrue. Ultimately, the trial court granted Deep East’s motions for summary judgment, and this appeal followed.

SUMMARY JUDGMENT

In his sole issue, McClendon argues that the trial court erred in granting summary judgment in Deep East’s favor.

Standard of Review

Because the propriety of summary judgment is a question of law, we review the trial court’s summary judgment determinations de novo.
See Valence Operating Co. v. Dorsett, 164 S.W.3d 656, 661 (Tex. 2005). The standard of review for a traditional summary judgment motion pursuant to Texas Rule of Civil Procedure 166a(c) is threefold: (1) the movant must show there is no genuine issue of material fact and he is entitled to judgment as a matter of law; (2) in deciding whether there is a disputed, material fact issue precluding summary judgment, the court must take as true evidence favorable to the nonmovant; and (3) the court must indulge every reasonable inference from the evidence in favor of the nonmovant and resolve any doubts in the nonmovant's favor. See TEX. R. CIV. P. 166a(c); Nixon v. Mr. Prop. Mgmt. Co., 690 S.W.2d 546, 548–49 (Tex. 1985); Palestine Herald-Press Co. v. Zimmer, 257 S.W.3d 504, 508 (Tex. App.–Tyler 2008, pet. denied). A defendant moving for summary judgment must either negate at least one essential element of the nonmovant’s cause of action or prove all essential elements of an affirmative defense. See Randall's Food Mkts., Inc. v. Johnson, 891 S.W.2d 640, 644 (Tex. 1995). We are not required to ascertain the credibility of affiants or to determine the weight of evidence in the affidavits, depositions, exhibits and other summary judgment proof. See Gulbenkian v. Penn, 252 S.W.2d 929, 932 (Tex. 1952); Zimmer, 257 S.W.3d at 508. The only question is whether an issue of material fact is presented. See TEX. R. CIV. P. 166a(c). Once the movant has established a right to summary judgment, the nonmovant has the burden to respond to the motion for summary judgment and present to the trial court any issues that would preclude summary judgment. See, e.g., City of Houston v. Clear Creek Basin Auth., 589 S.W.2d 671, 678–79 (Tex. 1979). When a trial court’s order granting summary judgment does not specify the ground or grounds relied on for the ruling, summary judgment will be affirmed on appeal if any of the theories advanced are meritorious. State Farm Fire & Cas. Co. v. S.S., 858 S.W.2d 374, 380 (Tex. 1993).

Workers’ Compensation Retaliation and Burden Shifting

To support his cause of action for retaliatory discharge, McClendon was required to demonstrate that his cause of action fell under Texas Labor Code, Section 451.001. See TEX. LAB. CODE ANN. § 451.001 (West 2006). Section 451.001 states that an employer may not discharge, or in any other manner discriminate, against an employee because that employee has filed a workers’ compensation claim in good faith. See id.; Parker v. Valerus Compression Svcs., LP, 365 S.W.3d 61, 66 (Tex. App.–Houston [1st Dist.] 2011, pet. denied). The purpose of the statute is to protect a person entitled to workers' compensation benefits from retaliation for exercising his statutory rights. See Parker, 365 S.W.3d at 66. An employee who shows a violation of section 451.001 may recover “reasonable damages incurred by the employee as a result of the violation.” TEX. LAB. CODE ANN. § 451.002 (West 2006).

Texas employs a burden shifting analysis for workers’ compensation retaliatory discharge claims under section 451.001. Parker, 365 S.W.3d at 66; see, e.g., Benners v. Blanks Color Imaging, Inc., 133 S.W.3d 364, 369 (Tex. App.–Dallas 2004, no pet.). As part of its prima facie case, the employee “has the initial burden of demonstrating a causal link between the discharge and the filing of the claim for workers' compensation benefits.” Terry v. S. Floral Co., 927 S.W.2d 254, 257 (Tex. App.–Houston [1st Dist.] 1996, no writ); see also Cont’l Coffee Prods. Co. v.
Cont'l Coffee Prods. Co. v. Cazarez, 937 S.W.2d 444, 450 (Tex. 1996) (applying standard of proof for causation in whistleblower actions to anti-retaliation claims under workers' compensation); Wal-Mart Stores, Inc. v. Amos, 79 S.W.3d 178, 184 (Tex. App.–Texarkana 2002, no pet.) (stating that as "an element of a prima facie case for retaliatory discharge" employee must "demonstrate the causal link between the discharge and the filing of the claim"); Dallas Cnty. v. Holmes, 62 S.W.3d 326, 329 (Tex. App.–Dallas 2001, no pet.) (stating that plaintiff must "prove a prima facie case" by establishing that "he, in good faith, filed a workers' compensation claim, and there exists a causal connection between the filing of the claim and the discharge or other act of discrimination"). The employee does not need to show that the workers' compensation claim was the sole reason for the employer's conduct; it is sufficient to demonstrate that but for the filing of the claim, "the employer's action would not have occurred when it did had the report not been made." Cont'l Coffee, 937 S.W.2d at 450; Parker, 365 S.W.3d at 67; Turner v. Precision Surgical, L.L.C., 274 S.W.3d 245, 252 (Tex. App.–Houston [1st Dist.] 2008, no pet.). In other words, the filing of the workers' compensation claim must be a reason for the employer's adverse employment action, but not necessarily the only reason. See Parker, 365 S.W.3d at 67.

An employee may prove the causal link between the adverse employment decision and the workers' compensation claim by direct or circumstantial evidence. Jenkins v. Guardian Indus. Corp., 16 S.W.3d 431, 436 (Tex. App.–Waco 2000, pet. denied). Circumstantial evidence of the causal link includes (1) knowledge of the compensation claim by those making the decision on termination, (2) expression of a negative attitude towards the employee's injured condition, (3) failure to adhere to established company policies, (4) discriminatory treatment in comparison to similarly situated employees, and (5) evidence that the stated reason for the discharge was false. Cont'l Coffee, 937 S.W.2d at 451; Benners, 133 S.W.3d at 369. Little or no lapse in time between the plaintiff's compensation claim and the employer's adverse employment action is also circumstantial evidence of a retaliatory motive. Johnson v. City of Houston, 203 S.W.3d 7, 11 (Tex. App.–Houston [14th Dist.] 2006, pet. denied); see also Green v. Lowe's Home Ctr., Inc., 199 S.W.3d 514, 519 (Tex. App.–Houston [1st Dist.] 2006, pet. denied) (stating that temporal proximity between assertion of protected right and termination may be evidence of causal connection). This evidence is relevant for determining whether a causal link exists, both in examining whether the employee established a prima facie case and the ultimate issue of whether the employee proved a retaliatory motive for the adverse employment action. See, e.g., Hertz Equip. Rental Corp. v. Barousse, 365 S.W.3d 46, 54–57 (Tex. App.–Houston [1st Dist.] 2011, pet. denied) (reviewing circumstantial evidence identified in Continental Coffee to determine whether evidence was legally and factually sufficient to support finding of retaliatory discharge); Green, 199 S.W.3d at 518–523 (reviewing circumstantial evidence identified in Continental Coffee to determine whether plaintiff established fact issue in response to summary judgment motion).
Once the employee establishes a prima facie claim, including a causal link, the burden shifts to the employer to rebut the alleged discrimination by offering proof of a legitimate, nondiscriminatory reason for its actions. Green, 199 S.W.3d at 519; Benners, 133 S.W.3d at 369. If the employer demonstrates a legitimate, nondiscriminatory reason, then the burden shifts back to the employee "to produce controverting evidence of a retaliatory motive" in order to survive a motion for summary judgment. Green, 199 S.W.3d at 519; see also Tex. Div.-Tranter, Inc. v. Carrozza, 876 S.W.2d 312, 314 (Tex. 1994) (employee must controvert employer's neutral explanation of employment decision based on direct or circumstantial evidence). The employee must present evidence that the employer's asserted reason for the discharge or other adverse employment action was pretextual or "challenge the employer's summary judgment evidence as failing to prove as a matter of law that the reason given was a legitimate, nondiscriminatory reason." Benners, 133 S.W.3d at 369. Summary judgment is proper if the employee fails to produce controverting evidence. Parker, 365 S.W.3d at 68; Terry, 927 S.W.2d at 257 (affirming summary judgment for employer because employee failed to produce evidence of retaliatory motive to rebut employer's neutral reason for firing her); Benners, 133 S.W.3d at 369, 372 (holding summary judgment proper because employee failed to raise a fact issue on retaliatory motive); Castor v. Laredo Cmty. Coll., 963 S.W.2d 783, 785–86 (Tex. App.–San Antonio 1998, no pet.) (holding employee failed to raise fact issue on retaliatory motive despite indulging all inferences in his favor).

Evidence of Causation

In its traditional motion for summary judgment, Deep East argued that the summary judgment evidence establishes that the necessary causal connection to establish discrimination under section 451.001 does not exist. In his response, McClendon argued that the summary judgment record contains direct evidence that satisfies the causation element. Specifically, McClendon contended that his deposition testimony concerning Royston's use of the term "workman's comp." in conjunction with his terminating McClendon's employment is direct evidence of causation sufficient to create a genuine issue of material fact. At his deposition, McClendon testified as follows:

Q. All right. On September the 19th of 2011, you had a meeting with Mr. Royston, is that correct?
A. Yes, sir.
....
Q. How did the conversation come up? How did you know you were supposed to meet?
A. I had called him after my MRI that morning and let him know that I did take it and they would have the results, they told me, within a few days or a week. And he called me back around I want to say 11:30, maybe 12:00 or so, and asked me did I want to meet for lunch.
....
[Q.] Was the meeting cordial?
A. Yeah. It was friendly, yeah.
Q. Any hostility, animosity in the meeting?
A. No . . . .
Q. When y'all sat down to talk, who talked first, you or him?
A. Well, actually, he asked me how things were going and I told him.
Q. Okay. What did you tell him about how things were going?
A. I told him about the MRI and that we was finished with the therapy until they come back with the MRI results. And - -
Q. Let me back up. You told him that you had finished with the therapy, correct?
A. Well - -
Q. The physical therapy?
A. Yes, sir. They stopped the therapy after it wasn't working.
....
Q. All right. So you told Mr. Royston that you were still waiting on the MRI results?
A. Yes, sir.
Q. What was said next?
A. Mr. Royston told me that he had had - - earlier in his years had back problems and on a back injury, you didn't know how long they would be until they healed, or if they ever healed, and that he couldn't hold my position and he couldn't afford to pay Ronnie whatever he was paying him to do construction work or whatever, contractual labor, so he was letting me go and giving Ronnie my position, because he did not know when I would be able to come back to work.
....
A. Yes, sir. He said, "You can't never tell about a back injury." And that's when he told me about his injury and how long he had problems with it. And said that he didn't know how long he could - - it would last, you know, if I would come back to work or not, so he had to do something with replacing me at my job. And I asked Mr. Royston, I said, "Well, I haven't even been off two whole months yet," you know, two full months. I said, you know, "We never gave it time to heal." And then I asked him, I said, "And I'm still under doctor's care. Do you know that I'm still under the doctor's care?" He said, "Yes." And I said, "So, I mean, how am I not being able to keep my job?" And that's when he told me that he couldn't keep it because he didn't know how long I would be on workman's comp.

On appeal, McClendon argues that there could be no reason for Royston to use the words "workman's comp." unless that was the underlying reason for his decision to terminate McClendon. He further contends that this interpretation is bolstered by the fact that Royston did not wait to find out the results of the MRI performed on McClendon that morning. Deep East responds that Royston's use of these words indicates nothing more than his awareness of the workers' compensation claim, but does not amount to evidence of a causal link between Royston's decision to terminate McClendon and McClendon's having filed a claim.

We do not consider Royston's statement in a vacuum. Instead, we consider the statement in the context of the entirety of the summary judgment record. Having done so, we note that apart from Royston's allegedly telling use of the term "workman's comp.," there is no other evidence of record that indicates (1) Deep East or anyone associated with it had any animosity toward McClendon for his filing a workers' compensation claim, (2) anyone expressed a negative attitude towards McClendon's injured condition, (3) Deep East failed to adhere to established company policies, or (4) the stated reason for the discharge, i.e., Royston, who had experience with his own back injury, did not know when McClendon would be able to return to work, was false. To the contrary, the evidence reflects that Royston told Thomas to take McClendon to the emergency room following his accident and that Thomas, after McClendon was released from the emergency room, assisted him in filing a workers' compensation claim. According to McClendon's own testimony, those associated with Deep East treated him congenially. When McClendon took leave, Deep East initially hired independent contractors to perform his duties and incurred greater expense as a result before having to hire another full time maintenance person to fill in for McClendon during his leave.
Finally, McClendon's testimony concerning the meeting that culminated in his termination does not reveal any evidence that tends to indicate that Royston's use of the term "workman's comp." was a revelation of his motivation for terminating McClendon. And despite McClendon's assertions to the contrary, we do not conclude that Royston's declining to wait for the results of McClendon's MRI when McClendon already had been on leave for nearly two months bolsters McClendon's interpretation of Royston's statement.

In sum, there is evidence that Royston referred to "workman's comp." when he terminated McClendon's employment with Deep East. But Royston's mere reference to "worker's comp.," without more, does not constitute evidence supporting the existence of a causal connection between McClendon's filing the workers' compensation claim and his discharge. If anything, Royston's choice of words establishes no more than his knowledge that McClendon had filed a workers' compensation claim. And while knowledge of a workers' compensation claim is considered as a factor in light of the record as a whole, it does not alone establish a causal link between the alleged discriminatory behavior and the filing of a claim. See Courtney v. Nibco, Inc., 152 S.W.3d 640, 644 (Tex. App.–Tyler 2004, no pet.); Lone Star Steel Co. v. Hatten, 104 S.W.3d 323, 327–28 (Tex. App.–Texarkana 2003, no pet.). Therefore, we hold that the trial court did not err in granting Deep East's traditional motion for summary judgment.[1] To the extent McClendon's sole issue pertains to the trial court's granting of Deep East's traditional motion for summary judgment, it is overruled.

DISPOSITION

Having overruled McClendon's sole issue in part, we affirm the trial court's judgment.

SAM GRIFFITH
Justice

Opinion delivered March 31, 2014.
Panel consisted of Worthen, C.J., Griffith, J., and Hoyle, J.
(PUBLISH)

[1] Because we have held that the trial court did not err in granting Deep East's traditional motion for summary judgment on the issue of causation, we do not address whether the trial court erred in granting Deep East's no evidence motion for summary judgment regarding whether McClendon presented controverting evidence of a retaliatory motive. See TEX. R. APP. P. 47.1.

COURT OF APPEALS
TWELFTH COURT OF APPEALS DISTRICT OF TEXAS

JUDGMENT

MARCH 31, 2014

NO. 12-12-00331-CV

CURTIS MCCLENDON, Appellant
V.
DEEP EAST TEXAS PROPERTY MANAGEMENT, LLC, Appellee

Appeal from the 159th District Court of Angelina County, Texas (Tr.Ct.No. CV-00889-11-10)

THIS CAUSE came to be heard on the oral arguments, appellate record and briefs filed herein, and the same being considered, it is the opinion of this court that there was no error in the judgment. It is therefore ORDERED, ADJUDGED and DECREED that the judgment of the court below be in all things affirmed, and that all costs of this appeal are hereby adjudged against the appellant, CURTIS MCCLENDON, for which execution may issue, and that this decision be certified to the court below for observance.

Sam Griffith, Justice.
Panel consisted of Worthen, C.J., Griffith, and Hoyle, J.
Mr. Carhart, who lives on Staten Island with his wife, Anna, and their two children, said his goal was to create a link between the generations of the game’s greatest stars. “I’ll never get into the Hall of Fame for my curveball or hitting prowess, so this is my only chance,” he said. Jon Shestakofsky, a spokesman for the Hall of Fame, said Mr. Carhart’s project “illustrates the seemingly limitless scope of baseball fandom.” “The Hall Ball’s journey serves as a glowing example of the power and pull of baseball, and the respect and reverence associated with the game’s all-time greats,” Mr. Shestakofsky said, though he would not say if the hall would consider Mr. Carhart’s submission. “He’s trying to make a human connection with the living and a spiritual connection with those who’ve moved on,” said John Thorn, the official historian for Major League Baseball. He added that when it came to 19th-century baseball history, Mr. Carhart was “about as nerdy as they come, which is high praise from me.” Mr. Carhart said the project grew out of his love for baseball and genealogy and was born during a family visit to Cooperstown, N.Y., which is home to the Hall of Fame, in 2010. His wife found a baseball in a creek next to Doubleday Field, which is part of the hall’s complex, and it eventually became the Hall Ball.
Istanbul Today Tour

1. The walk along the Bosphorus — only on this walk, standing on the deck of a ship sailing along the most beautiful strait of Eurasia, will you see the houses of millionaires and island homes, the palaces and castles of sultans. In Turkey they say: «If you feel sad, go to the Bosphorus». Dispel your sadness for years ahead – go to Istanbul.

2. Driving along the quay of the Bosphorus — Along this strait stretches the city wall of Istanbul, to whose gates Prince Oleg once nailed his shield. Along the Bosphorus run the ribbons of Istanbul's modern roads. The richest people in the world buy houses along the Bosphorus, where sultans and emperors once settled. The Bosphorus is the axis on which Istanbul spins. Everyone should drive along it at least once in their life.

3. Bebek district — in Turkey they say: as beautiful as a baby. A baby is «bebek». The district named Bebek is truly one of the most beautiful in Istanbul. Once chosen by the sultans for their Guards, today the neighborhood is famous for its cafes, bars, and restaurants. Bebek stays lively until morning.

4. Etiler district — one of the largest shopping centers in Europe, «Akmerkez», is situated in one of the most prestigious districts of Istanbul — Etiler. Not everyone gets to see the luxury of Istanbul and enjoy its comforts. But you will have this opportunity.

5. District Ulus and park Ulus — An island of comfort and peace amid the giant, constantly moving Istanbul — park Ulus. Small and one of the most beautiful parks of the city, it sits almost on the shore of the strait. Here you will stop to rest.
GREEN LIVING

There are many reasons to have indoor plants in offices. Besides looking beautiful and creating a more welcoming and gezellig workplace, research shows that there are numerous scientific benefits to having plants in your space as well. Plants have been proven to boost your mood, improve memory, and increase productivity. Unfortunately, the spaces where we spend most of our time and need to be at our best (the office) tend to be lacking in greenery. Here are some reasons and plant recommendations so you can convince your boss to step up the plant game.

REASON 1: Plants in offices clean the air and increase productivity

In addition to turning carbon dioxide into oxygen, some air-filtering indoor plants have been reported by NASA to help remove chemicals from the air. By removing such substances, commonly found in the walls and carpets of office buildings, plants help people think and work with a clearer mind.

REASON 2: Plants help reduce stress

Studies have shown indoor plants have stress-reducing qualities such as less tension and anxiety, decreased anger, and a reduction in depression and fatigue. Color psychology suggests that the color green has a relaxing and calming effect, so decorating an office with plants can have similar results.

Plant Mom's favorite plants for sunny offices:

Need help finding the perfect plant for your office? No matter what your question is or what kind of plant you have, I am here to answer your questions and give you the encouragement you need to be the best plant parent you can be. I want to share my love and knowledge of plants with you. So let's chat about plants!
Q: registerNib:forCellReuseIdentifier: with custom UITableViewCell and Storyboards

I'm migrating from customizing my TableViewCells in tableView:cellForRowAtIndexPath: to using a custom UITableViewCell subclass. Here's how I did it: First, I created an empty XIB, dragged a UITableViewCell there and put a UILabel on top. I created a class (a subclass of UITableViewCell) and in Interface Builder's properties editor set the class to MyCell. Then, in my TableViewController, I put the following:

```objc
- (void)viewDidLoad
{
    [super viewDidLoad];

    // load custom cell
    UINib *cellNIB = [UINib nibWithNibName:@"MyCell" bundle:nil];
    if (cellNIB) {
        [self.tableView registerNib:cellNIB forCellReuseIdentifier:@"MyCell"];
    } else {
        NSLog(@"failed to load nib");
    }
}
```

After that I wiped out all the custom code from tableView:cellForRowAtIndexPath: and left only the default lines:

```objc
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *CellIdentifier = @"MyCell";
    MyCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    return cell;
}
```

When I ran this, I expected to see a bunch of cells with a single label in each cell (the very label that I dropped in the middle while creating the XIB). But instead I see just plain white empty cells, and adding/removing components in the XIB layout doesn't help. I spent a day trying different options, like setting the reuseIdentifier in Interface Builder for the custom cell XIB and loading the view in tableView:cellForRowAtIndexPath:, but none helped.

A: ...but it turned out that the only thing I missed was clearing the reuseIdentifier for the prototype cell in my Storyboard for this TableViewController. It seems that the Storyboard initializes its views/components later than viewDidLoad is called, and instead of taking my nice custom cell, Xcode sets the real cell view for reusing to just a plain white cell, which is the standard for newly created TableViewControllers. So again: go to your TableView properties and remove the reuseIdentifier you set before ;)

I spent so much time on this, so I thought it might help someone if I share this experience here.
Articles molded from polypropylene are excellent in rigidity, heat resistance, surface glossiness, etc., and hence they have been widely applied to various uses. However, polypropylene is generally crystalline and is deteriorated in impact resistance, particularly in low-temperature impact resistance, and therefore application thereof is limited to certain uses. For increasing the impact resistance of polypropylene, conventionally adopted are a method of adding polyethylene to polypropylene and a method of adding thereto rubber-like materials such as polyisobutylene, polybutadiene and a non-crystalline ethylene/propylene copolymer. In particular, a method of adding a non-crystalline or low-crystalline ethylene/propylene random copolymer has been used in many cases.

However, the present inventors have studied polypropylene compositions comprising the non-crystalline or low-crystalline ethylene/propylene random copolymer and polypropylene, and found that the non-crystalline or low-crystalline ethylene/propylene random copolymer cannot improve the impact resistance so much and hence the ethylene/propylene random copolymer must be contained in the polypropylene composition in a large amount to obtain satisfactory impact resistance. When a large amount of the ethylene/propylene random copolymer is contained in the polypropylene composition, the composition can be improved in the impact resistance but seriously lowered in rigidity, heat resistance and surface hardness. On the other hand, if the ethylene/propylene random copolymer is contained in the polypropylene composition in a small amount to retain rigidity, heat resistance and surface hardness, the low-temperature impact resistance of the polypropylene composition cannot be sufficiently improved.

In place of using such a non-crystalline or low-crystalline ethylene/propylene random copolymer, a trial of adding another ethylene/α-olefin copolymer to polypropylene was made to obtain a polypropylene composition having high impact resistance. For example, Japanese Patent Publications No. 25693/1983 and No. 38459/1983 disclose a composition comprising crystalline polypropylene and an ethylene/1-butene copolymer which contains constituent units derived from 1-butene in an amount of not more than 15% by mol. In addition, Japanese Patent Laid-Open No. 243842/1986 discloses a polypropylene composition comprising crystalline polypropylene and an ethylene/1-butene copolymer which is obtained by using a titanium heterogeneous type catalyst. The polypropylene compositions disclosed in these publications are improved in the impact resistance and the rigidity, but they are desired to be much more improved in the low-temperature impact resistance.

Further, Japanese Patent Publication No. 42929/1988 discloses a polypropylene composition comprising crystalline polypropylene and an ethylene/1-butene copolymer which contains constituent units derived from 1-butene in an amount of 25 to 10% by weight and has an intrinsic viscosity [η] of not more than 1.5 dl/g. This polypropylene composition is insufficient in the impact resistance. Furthermore, Japanese Patent Laid-Open No. 250040/1991 describes that an ethylene/1-butene block copolymer containing constituent units derived from 1-butene in an amount of 10 to 90% by weight is used to increase impact resistance of polypropylene. However, this ethylene/1-butene block copolymer is not good in compatibility with polypropylene and insufficient in improvement of the impact resistance.
On that account, eagerly desired now is the advent of a polypropylene composition excellent in impact resistance, particularly in low-temperature impact resistance, as well as in rigidity and heat resistance. As a result of earnest studies by the present inventors to solve the problems associated with the prior arts, they have found that a polypropylene composition comprising polypropylene and a specific ethylene/α-olefin copolymer is excellent in rigidity, heat resistance and impact resistance, particularly in low-temperature heat resistance, and accomplished the present invention.
There is hereby provided a method comprising: detecting at a first radio node a first signal indicating the number of hops at which a second radio node that transmitted the first signal first detected a second, earlier signal; and deciding whether to transmit said first signal onwards from said first radio node based at least partly on (i) a direction indicator in said first signal, and (ii) a comparison of the respective numbers of hops at which said first and second radio nodes first detected said earlier second signal.
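By way of illustration only, a minimal sketch of the relaying decision described above; the flooding setup, the "toward"/"away" reading of the direction indicator, and all identifiers are assumptions introduced here for concreteness, not part of the claimed method:

```python
from dataclasses import dataclass

@dataclass
class Signal:
    origin_id: int    # identifies the earlier "second signal" being reported on
    sender_hops: int  # hop count at which the sender first detected that signal
    direction: str    # direction indicator carried in the first signal

class RadioNode:
    def __init__(self, node_id: int):
        self.node_id = node_id
        # hop count at which *this* node first detected each earlier signal
        self.first_detection_hops: dict[int, int] = {}

    def should_forward(self, sig: Signal) -> bool:
        """Decide whether to retransmit sig onwards, based on (i) its
        direction indicator and (ii) a comparison of the hop counts at
        which the sender and this node first detected the earlier signal."""
        own_hops = self.first_detection_hops.get(sig.origin_id)
        if own_hops is None:
            return False  # never detected the earlier signal; nothing to compare
        if sig.direction == "toward":
            # propagate toward the origin: forward only if we detected the
            # earlier signal at fewer hops than the sender did (assumed semantics)
            return own_hops < sig.sender_hops
        # propagate away from the origin: forward only on strictly more hops
        return own_hops > sig.sender_hops
```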
--- abstract: 'We have undertaken a systematic search for candidate supernovae from high-redshift Population III stars in a field that has been observed with repeated imaging on a cadence of 2–3 weeks over a 2.2 year baseline, the *Spitzer*/IRAC Dark Field. The individual epochs reach a typical 5$\sigma$ depth of 1$\mu$Jy in IRAC Channel 1 (3.6$\mu$m). Requiring a minimum of four epochs coverage, the total effective area searched is 214arcminutes$^2$. The unprecedented depth and multi-epochal nature of these data make it ideal for a first foray to detect transient objects which may be candidate luminous Pair Instability Supernovae from the primordial-metallicity first stars. The search was conducted over a broad range of timescales, allowing for different durations of the putative candidates’ light curve plateau phases. All candidates were vetted by inspection of the *Spitzer* imaging data, as well as deep *HST*/ACS F814W imaging available over the full field. While many resolved-source objects were identified with *Spitzer* variability, no transient objects were identified that could plausibly be identified as high-redshift supernovae candidates. The resulting 95% confidence level upper limit is 23deg$^{-2}$yr$^{-1}$, for sources with plateau timescales under 400/(1+z) days and brightnesses above $\sim1\,\mu$Jy.' author: - 'Mark I. Frost,$^{1,3}$ Jason Surace,$^1$ Leonidas A. Moustakas,$^2$ Jessica Krick,$ ^1$' title: 'A Pilot Search for Population III Supernova Candidates in the *Spitzer*/IRAC Dark Field' --- Introduction ============ Primordial metallicity Population III stars are thought to be the first luminous objects to form in the Universe. Their formation marks the end of the cosmic dark ages and an important transition of the universe from a homogeneous state to a highly structured one. The UV photons produced by such stars at high redshifts are also thought to be at least partly responsible for re-ionizing the universe (@tumlinson00, @bromm01b, @schaerer02, @schaerer03). It is believed that the explosive events that mark the end state of such stars seed the intergalactic medium with heavy elements (@gnedin97, @furlanetto03, @greif07). Hence, studying these objects is of great importance in helping us to understand the high-redshift universe. To date, the supernovae marking the deaths of first stars (Pop III SNe) at high redshift have not been observed (though see @woo07). We turn to theoretical modeling to gain a better understanding of the properties of such stars and when they might have existed. It is thought that Population III stars formed out of primordial-abundance H/He gas in low-mass dark matter halos. For primordial-abundance stars it is expected that the explosion mechanism may drive not only “classical” supernovae, but also “hypernovae” for certain progenitor masses, driven through an electron-positron instability mechanism that results in explosive events with up to approximately one hundred times greater luminosities [@um02]. Exactly when the epoch of the first stars began is still a matter of debate but estimates place it at $10<z<50$ (@wiseabel05). Thereafter, such objects could exist in primordial-metallicity pockets even at relatively low redshifts, even $z\leq2.5$ (e.g. @scann05, @torn07). 
The primordial metallicity of these stars leads to inefficient cooling mechanisms through H$_{2}$, leading to very high stellar masses and to a top-heavy initial mass function (IMF) with a large fraction of stars having $M_{\star}>100 M_{\sun}$ (@bromm99, @bromm02, @abel00, @abel02) and possibly even greater masses (@brommloeb04, @omukai03). In this paper we describe a search for candidate supernovae from high-redshift Population III stars in the *Spitzer* IRAC dark field. The target field is described in Section \[sec:df\]. In Section \[sec:expec\] we summarize the theoretical modeling used as a guide for detecting transient objects, which would include candidate Pop III SNe. Section \[sec:method\] outlines the search methods used. In Section \[sec:dis\] we compare the Pop III SNe rate upper limits we find with expectations from the literature, and highlight how this type of search may be extended in the future. Where needed, we adopt $H_0$=73 kms$^{-1}$Mpc$^{-1}$, $\Omega_m=0.27$, and $\Omega_\Lambda=0.73$. The IRAC Dark Field {#sec:df} =================== The IRAC dark field is the dark current calibration target for the Infra-Red Array Camera (IRAC) (@fazio04) on board the *Spitzer* space telescope. It is an extragalactic field of very low background, in *Spitzer*’s continuous viewing zone near the North Ecliptic Pole. This area is observed at the start and end of each IRAC observing campaign (2–3 weeks apart) since *Spitzer*’s first light. For technical reasons anchored on the need for high-quality dark frames and on the normal precession of the observatory, there is only a modest overlap in the observed area on the sky from epoch to epoch, making for a position-dependent observed cadence in the time domain data. The data used in this analysis are based on 128 distinct epochs over the first 2.2 years of *Spitzer*’s operations. Each epoch is composed of multiple individual exposures at all of the available IRAC exposure times. The total full IRAC mosaic in the adopted dataset has a total observing time of $>500$ hours, and covers an approximately circular area, 20arcminutes in diameter. Each point in this mosaic typically has more than $10$ hours of total integration time, with a maximum of $\sim80$ hours in the area of maximal overlap across epochs. It is classically confusion-limited in all four IRAC channels ($3.6\mu m$, $4.5\mu m$, $5.8\mu m$, $8.0\mu m$). The IRAC source extraction was done using Source EXtractor, [@bertin96]. The unprecedented depth and multi-epochal nature of these data make it ideal for a first foray into trying to detect supernovae from the first stars. The field will, in fact, continue to be observed through the full extent of the *Spitzer* Warm Mission, eventually giving a baseline of around seven (and possibly ten) years. There is now a wealth of multiwavelength data available in the dark field including *Chandra* and *HST*/ACS F814W imaging [@krick08]. To find transient objects, a common method is to search for significant residuals when differencing registered time-series observations. However, because the IRAC point spread function is asymmetric and fairly complex, and indeed has a different orientation on the image of each epoch, it is difficult to distinguish candidate transient objects from artifacts in the difference images. Therefore, the search is conducted through cross-correlating catalogs of objects detected in each individual epoch. Each epoch of observation has a typical 5$\sigma$ depth of 1$\mu$Jy, or $m_{\rm 3.6\mu m}(\rm{AB}) \sim 23.9$. 
The catalogs from all epochs were cross-matched with each other using a $1$arcsec radius, resulting in a master database with 31,492 sources, each of which had at least one detection in a single distinct epoch. This master database contains all the critical information for a transience search. For each object, the following are recorded: in which epochs the object is within the observed field and detected, and its flux density; in which epochs the object is within the observed field and *not* detected, and the known sensitivity at that position (based on the integration time). If the spectral energy distributions of putative high-redshift Pop III SNe are above the measured detection limits for some period of time contained within the timespan of this survey, they will appear as transient objects. We explore the expectations from theory and the potential efficacy of our search in the next section. Expectations from Theory {#sec:expec} ======================== Massive Pop III stars with primordial metallicity are thought to be common at high redshift. Stars with $M_{\star}<140 M_{\odot}$ or $M_{\star}>260 M_{\odot}$ are thought to form black holes at the end of their evolution (e.g. @fryer01, @hegerwoosley02). Those which have masses between 140–260 $M_{\odot}$ are thought to end their lives as pair-instability supernova (PISNe). Once helium burning in the core of such stars has ceased there is sufficient entropy to create positron-electron pairs [@wiseabel05]. This process converts thermal energy to the mass of the particle pair and the pressure in the core is reduced. This leads to a partial collapse which triggers a thermonuclear explosion. The star is completely destroyed leading to a PISN, in which no remnant is left behind. At least one relatively local analog may already have been observed (SN 2006gy in NGC 1260; @smith07, @woo07), which lends support to the possibility of this mechanism. PISNe would be “host-less” and as much as one hundred times more luminous than more-typical supernovae. To estimate the chances of detecting such objects, we turn to predictions from the literature for anticipated luminosities, durations (through their light curves), and frequency of events. @scann05, using light curves calculated by @weinmann04, predict peak apparent magnitudes of $m_{\rm AB}\sim26.8$ at $z=10$ for 250$M_{\odot}$ PISN, assuming negligible extinction. This suggests that the typical by-epoch depth of $m_{\rm 3.6\mu m}(\rm{AB}) \sim 24$ of our search may be able to detect such objects at $z\sim3-5$. PISN light curves are calculated in @wiseabel05, @heger02, and @scann05. A broad plateau phase is expected, which could last from $\sim$10 days to as long as a full year in the frame of the event. Since in the observed frame the light curve is stretched by a factor of (1$+$z), there could be events that would appear as near-continuous sources over the entire 2.2-year monitoring span of the current dataset. As we ultimately anticipate a 7- to 10-year dataset by the end of the *Spitzer* Warm Mission, there is great potential in future analyses to encompass more plateau-duration possibilities. In Table \[tab:rates\] we list several predicted Pop III SNe differential rates from the literature. The rates quoted in Table \[tab:rates\] for @heger02 and @cen03 incorporate the corrections determined by [@weinmann04] of $(1+z)^{-2}$ and $(1+z)^{-1}$, respectively. These rates are over redshift ranges largely beyond the sensitivity of our search, but are quoted here for completeness. 
By the review in @weinmann04, realistic rates are expected to be $dN/dz\sim4$deg$^{-2}$yr$^{-1}$ for $z>15$ and 0.2deg$^{-2}$yr$^{-1}$ for $z>25$. [@wiseabel05] find a Pop III SNe rate of 0.34deg$^{-2}$yr$^{-1}$ at $z=10$, which changes negligibly over the mass range $100 M_{\odot}<M_{\star}<500 M_{\odot}$. A wide range of values are expected, $\sim$0.2$-$50deg$^{-2}$ yr$^{-1}$, which is indicative of how the parameters involved in such predictions are still not well-constrained. We also refer the reader to @scann05 for further discussion of these predictions.

  dN/dz (${\rm deg}^{-2} {\rm yr}^{-1}$)   M$_{\rm progenitor}$ ($M_{\odot}$)   z          Reference
  ---------------------------------------- ------------------------------------ ---------- ----------------
  $\sim$0.2                                250                                  $\sim10$   [@heger02][^1]
  0.34                                     100-500                              10         [@wiseabel05]
  50                                       250                                  $>15$      [@mackey03]
  11                                       100                                  $>$13      [@cen03][^2]
  25                                       140-260                              5          [@weinmann04]

  : Predicted Population III supernova rates found in the literature.[]{data-label="tab:rates"}

Search Method {#sec:method}
=============

The search for transient objects is based on the master database described in Section \[sec:df\]. Systematic searches were conducted for three different ranges of possible plateau durations (in the observed frame), which are appropriate to the 2.2-year baseline of the present dataset, and which are plausible based on some of the theoretical expectations for the light curves. These are $0<t_{obs}<100$, $100<t_{obs}<200$, and $200<t_{obs}<400$ days. In addition to the practical convenience of analyzing these data in this compartmentalized fashion, it also anticipates the possibility that some transient-source light curves may be truncated by either end of our 2.2 year baseline by different timescales. We remind the reader that the light curves of all sources are largely discontiguous because the precise pointing (and orientation) varies epoch by epoch. This also affects the precise depth of each epoch, and changes the on-sky orientation of the IRAC diffraction pattern, essentially leading to rotating diffraction spikes across images. We find that the photometry of objects near bright stars is systematically contaminated by these rotating diffraction spikes. Therefore we identify the stars and the sources close enough to them to be affected, and explicitly remove them from our master catalog. Imaging with *HST*/ACS F814W was obtained between November & December 2006, one year after the end of the IRAC dataset described here. With these data, a plot of the Source EXtractor [@bertin96] [isoarea]{} value versus aperture magnitude photometry clearly separates the ridge of unresolved sources from resolved galaxies, because unresolved sources have smaller [isoarea]{} than galaxies at any given magnitude. This method identifies 1147 unresolved sources. We empirically determine a radius of $r=0.06\times S_{3.6\mu m}$arcsec (but no greater than 30arcsec) to mask out a circular area around each star, also removing an additional 5858 sources which were in close proximity. Given the relative non-homogeneity in the survey's depth and cadence sampling, we apply several criteria to remove erroneous sources. 1) We impose an *upper* flux density limit of 40$\mu$Jy, which is an empirically determined practical threshold for removing additional objects that are affected by diffraction spike artifacts, beyond the masked radii, and object de-blending issues.
2) For all sources we only include epochs which have a 5$\sigma$ detection of $>$1$\mu$Jy. 3) We do not consider sources that appear in only a *single* epoch. 4) Sources detected in only two or three epochs are only included in our search if they have 5$\sigma$ detections of $>$1$\mu$Jy in *all* epochs. 5) For any object with multiple epoch detections, we require the ${3.6\mu m}$ photometric uncertainty to be less than 10% in at least half of its detected epochs. This ensures that this source is not rejected on the basis of large uncertainties in a small number of epochs (e.g. due to some epochs being particularly shallow relative to the others). 6) Finally, we require at least two significant non-detections, to ensure a clear transient signal.

After application of each of these criteria, 650 candidates remained that warranted more careful follow-up. The ${3.6\mu m}$ light curves and the corresponding IRAC and *HST* imaging were all visually inspected by custom-built software that clearly identified all salient aspects for the object in the master database. No a priori restrictions were placed on the shape of the light curve. Several objects with otherwise flat light curves were found to have a dramatic "flare-up" in a single epoch. Careful inspection of the individual IRAC 100s exposures of that epoch showed the spike to be a cosmic ray incident. The remaining candidates were found to have resolved-source counterparts in the *HST* data, and are therefore candidate low-redshift active galactic nuclei, which, though extremely interesting in their own right, are clearly not Pop III SNe candidates. At the end of this careful analysis and vetting procedure, *no* viable candidates survived. In the next section we calculate the formal limits implied by our search.

Discussion & Conclusions {#sec:dis}
========================

No Pop III SNe candidates to our sensitivity limit of $m_{\rm 3.6\mu m}(\rm AB) \sim 24$ were identified. For the areal search over a total of 214arcminutes$^2$, the rate limit is below 8deg$^{-2}$yr$^{-1}$. Using the same area but also modeling this non-detection limit as a Poisson distribution, the 95%-confidence upper limit is 23deg$^{-2}$yr$^{-1}$, ignoring cosmic variance uncertainties. At $z\leq$\[3, 5, 10\] this also corresponds to a volumetric rate of \[910, 480, 270\]Gpc$^{-3}$yr$^{-1}$, respectively. These limits are approximate, as the precise survey area relevant to each individual source is a complicated function of the varying field and depth by epoch. Furthermore, these limits only apply to moderate-duration events, with plateau phases lasting less than $\sim400/(1+z)$days, by construction of the search. Many of the predicted light curves given e.g. by @scann05 have plateau phases up to 1yr in the rest frame, which would be too long in the observed frame to be detected by the present survey, but which may be approachable in future incarnations of the *Spitzer* Dark Field surveys. The @wiseabel05 and @heger02 $z=10$ predicted differential rates of $dN/dz\sim0.34$deg$^{-2}$yr$^{-1}$ and $\sim$0.2deg$^{-2}$yr$^{-1}$, respectively, are not ruled out, even if they appear at lower redshifts which may be above our sensitivity limit. Given that our search is most effectively probing $z\sim3-5$, the differential rate of [@weinmann04] of 25 deg$^{-2}$yr$^{-1}$ at $z=5$ is broadly comparable to our limit. It should be noted that the very luminous PISNe that we would be sensitive to in this survey may be only a small fraction of all high-redshift supernova events.
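As an aside, the arithmetic behind the limits quoted above is compact enough to verify directly; the following minimal sketch (a cross-check of ours, not part of the original analysis) reproduces both numbers from the quoted 214 arcminute$^2$ area and 2.2 year baseline, where the factor $-\ln 0.05\approx3.0$ is the standard zero-detection Poisson 95% limit:

```python
import math

area_deg2 = 214.0 / 3600.0           # 214 arcmin^2 expressed in deg^2
baseline_yr = 2.2                    # span of the monitored dataset
exposure = area_deg2 * baseline_yr   # deg^2 yr of effective search exposure

# Zero detections: demand P(0 events) = exp(-lambda) = 0.05 at 95% confidence
lam95 = -math.log(0.05)              # ~3.0 expected events

print(f"single-event rate:  {1.0 / exposure:5.1f} deg^-2 yr^-1")    # ~7.6, i.e. 'below 8'
print(f"95% CL upper limit: {lam95 / exposure:5.1f} deg^-2 yr^-1")  # ~22.9, i.e. ~23
```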
There is a distinction currently being made between Pop III.1 and III.2 stars, where the former class is of fully primordial abundance and forms in dark matter mini-halos, resulting in the pre-requisite stellar masses of above $\sim100\,M_{\odot}$ to produce PISNe (@jobro06, @mcktan08). These are distinct from the Pop III.2 stars, which are expected to form through atomic cooling processes, producing only $\sim 10\,M_{\odot}$ progenitors [@grebro06]. The PISNe Pop III.1 progenitors may consist of only some 10% of Pop III SNe [@grebro06]. Furthermore, the "pristine" Pop III.1 progenitors can suffer dramatic negative feedback (e.g. @mcktan08), which may additionally limit their relative numbers. These were all considerations not yet taken into account in the predictions by @mackey03, which led to the high expectation rates. To do a *comprehensive* search for Pop III SNe using *Spitzer* (or a future platform with similar capabilities) would require a survey area of 1 deg$^2$ with exposures of 5000 seconds ($50\times100$s exposures) per point on the sky. This would reach $m_{\rm AB}\sim26$ at the 3-5$\sigma$ level, depending on extinction. Each epoch would require approximately 200 hours of observation to cover the survey area with a 5 arcminute field of view. This greater depth complicates the IRAC data reduction, as confusion is a significant, but tractable, problem. Trading depth for greater area may not be optimal for detecting the *highest*-redshift Pop III SNe, which are expected to be quite faint, $m_{\rm AB}\gtrsim26$. This requirement is increasingly relaxed for lower redshift events, which may possibly exist at redshifts as low as $z\sim2$. This ideal survey would need to be imaged every two months for a period of several years in order to plausibly sample much or all of the plateau phase of the light curve. The total program would therefore be of the order of 2000-3000 hours. In the meantime, the current dataset continues to expand as *Spitzer* continues to operate, acquiring new IRAC observations every 2–3 weeks. Including observations made during the *Spitzer* Warm Mission, the full dataset will span at least seven years, and maybe as many as ten full years. Future searches following the procedure described here should be powerful for identifying plausible candidates, or at least in setting firmer limits on the production rate of PISNe at high redshift, and thus setting practical limits on the relative abundances of Pop III.1 versus Pop III.2 progenitors, ultimately informing studies of reionization of the high-redshift universe. Support for this work was provided by NASA through the *Spitzer* Space Telescope Visiting Graduate Student Program, through a contract issued by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. The work of LAM was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This research has made use of data from the [*Spitzer*]{} Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA, and the NASA/ESA [*Hubble*]{} Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program \#10521.
Support for program \#10521 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy,Inc., under NASA contract NAS 5-26555. Support for this work was also provided by STFC studentship PPA/S/S2005/04270. LAM is grateful to Tom Abel and Ranga-Ram Chary for many discussions on this topic. We would also like to thank Seb Oliver for his useful comments and suggestions. We are grateful to the anonymous referee for comments that have helped focus and improve this report. Abel, T., Bryan, G. L., & Norman, M. L. 2000, , 540, 39 Abel, T., Bryan, G. L., & Norman, M. L. 2002, Science, 295, 93 Bertin, E., & Arnouts, S. 1996, , 117, 393 Bromm, V., Coppi, P. S., & Larson, R. B. 1999, , 527, L5 Bromm, V., Kudritzki, R. P., & Loeb, A. 2001, , 552, 464 Bromm, V., Coppi, P. S., & Larson, R. B. 2002, , 564, 23 Bromm, V., & Loeb, A. 2004, New Astronomy, 9, 353 Cen, R. 2003, , 591, 12 Fazio, G. G., et al.  2004, , 154, 10 Fryer, C. L., Woosley, S. E., & Heger, A. 2001, , 550, 372 Furlanetto, S. R., & Loeb, A. 2003, , 588, 18 Gnedin, N. Y., & Ostriker, J. P. 1997, , 486, 581 Greif, T., & Bromm, V. 2006, MNRAS, 373, 128 Greif, T., Johnson, J. L., Bromm, V., Klessen, R. S. 2007, , 670, 1 Heger, A., & Woosley, S. E. 2002, , 567, 532 Heger, A., Woosley, S., Baraffe, I., & Abel, T. 2002, Lighthouses of the Universe: The Most Luminous Celestial Objects and Their Use for Cosmology: Proceedings of the MPA/ESO/MPE/USM Joint Astronomy Conference Held in Garching, Germany, 6-10 August 2001, ESO ASTROPHYSICS SYMPOSIA. ISBN 3-540-43769-X. Edited by M. Gilfanov, R. Sunyaev, and E. Churazov. Springer-Verlag, 2002, p. 369, 369 Johnson, J. L., & Bromm, V. 2006, MNRAS, 366, 247 Krick, J. E., Surace, J. A., Thompson, D., Ashby, M. L. N., Hora, J. L., Gorjian, V., & Yan, L. 2008, , 686, 918 Mackey, J., Bromm, V., & Hernquist, L. 2003, , 586, 1 McKey, C. F., & Tan, J. C. 2008, ApJ, 681, 771 Miralda-Escude, J., & Rees, M. J. 1997, , 478, L57 Omukai, K., & Palla, F. 2003, , 589, 677 O’Shea, B. W., McKee, C. F., Heger, A., & Abel, T. 2008, ArXiv e-prints, 801, arXiv:0801.2124 Scannapieco, E., Madau, P., Woosley, S., Heger, A., & Ferrara, A. 2005, , 633, 1031 Schaerer, D. 2002, , 382, 28 Schaerer, D. 2003, , 397, 527 Smith, N., et al. 2007, , 666, 1116 Tornatore, L., Ferrara, A., & Schneider, R. 2007, MNRAS, 382, 945 Tumlinson, J., & Shull, J. M. 2000, , 528, L65 Umeda, H., & Nomoto, K. 2002, , 565, 385 Weinmann, S. M., & Lilly, S. J. 2005, , 624, 526 Wise, J. H., & Abel, T. 2003, The Emergence of Cosmic Structure, 666, 97 Wise, J. H., & Abel, T. 2005, , 629, 615 Woosley, S. E., Blinnikov, S., & Heger, A. 2007, Nature, 450, 7168 Yoshida, N., Omukai, K., Hernquist, L., & Abel, T. 2006, , 652, 6 \[lastpage\] [^1]: Rate of [@heger02] incorporating correction factor determined by [@weinmann04] and recalculated at $z=10$ [^2]: Corrected rate determined by [@weinmann04]
Our style can best be expressed as a synthesis of ripe, Santa Cruz Mountain fruit character (enhanced by barrel-fermentation) and the traditional "méthode champenoise," bottle-fermentation and extended (tirage) ageing on the yeast. The wines are finished with minimal or zero dosage to better display the marriage of the cuvée and the yeast character from the extended tirage. Each bottling of Equinox is distinct, reflecting diverse vineyards and the unique character of the vintage. The Santa Cruz Mountains, home to some of the world's most distinct wines, was one of California's first designated wine grape-growing regions in the United States. The AVA (American Viticultural Area) designation occurred in 1981, five years after the "Judgment of Paris" wine competition, in which a blind tasting by a panel of experts found…
Q: What are the basic system requirements for installing an ASP.NET MVC 2 application

What are the basic system requirements for installing an ASP.NET MVC 2 application?

Edited: If I want to host the ASP.NET MVC 2 application on Windows XP, is there any special requirement such as IIS version, framework, or service packs?

A: As per the MVC download page, any of the following operating systems: Windows Server 2003, Windows Server 2008, Windows Vista, Windows XP.

Required framework: .NET 3.5 SP1.

As per the framework download page, it has these requirements:

Processor: 400 MHz Pentium processor or equivalent (minimum); 1 GHz Pentium processor or equivalent (recommended)
RAM: 96 MB (minimum); 256 MB (recommended)
Hard disk: up to 500 MB of available space may be required
Abstract

Application of physical and chemical concepts, complemented by studies of prokaryotes in ice cores and permafrost, has led to the present understanding of how microorganisms can metabolize at subfreezing temperatures on Earth and possibly on Mars and other cold planetary bodies. The habitats for life at subfreezing temperatures benefit from two unusual properties of ice. First, almost all ionic impurities are insoluble in the crystal structure of ice, which leads to a network of micron-diameter veins in which microorganisms may utilize ions for metabolism. Second, ice in contact with mineral surfaces develops a nanometre-thick film of unfrozen water that provides a second habitat that may allow microorganisms to extract energy from redox reactions with ions in the water film or ions in the mineral structure. On the early Earth and on icy planets, prebiotic molecules in veins in ice may have polymerized to RNA and polypeptides by virtue of the low water activity and high rate of encounter with each other in nearly one-dimensional trajectories in the veins. Prebiotic molecules may also have utilized grain surfaces to increase the rate of encounter and to exploit other physicochemical features of the surfaces.
--- abstract: 'We propose a feasible and effective approach to study quantum thermal transport through anharmonic systems. The main idea is to obtain an [*effective*]{} harmonic Hamiltonian for the anharmonic system by applying the self-consistent phonon theory. Using the effective harmonic Hamiltonian we study thermal transport within the framework of the nonequilibrium Green's function method using the celebrated Caroli formula. We corroborate our quantum self-consistent approach using the quantum master equation, which can deal with anharmonicity exactly but is limited to the weak system-bath coupling regime. Finally, in order to demonstrate its strength we apply the quantum self-consistent approach to study thermal rectification in a weakly coupled two-segment anharmonic system.' author: - Dahai He - Juzar Thingna - 'Jian-Sheng Wang' - Baowen Li title: 'Quantum thermal transport through anharmonic systems: A self-consistent approach' ---

Introduction {#sec:1}
============

Developing a first-principles-based approach for quantum thermal transport across low-dimensional systems not only provides insight into potential nanodevice applications, but is also crucial for better understanding nonequilibrium statistical physics. To date, quantum thermal transport across harmonic crystals has been extensively studied using the generalized Langevin approach [@Dhar06; @Dhar_rev], the nonequilibrium Green's function method [@Wang_revEPJB; @Wang_revFront], or the density matrix approach [@Dhar12]. The main advantage of these methods lies in their exact treatment of harmonic systems, giving rise to ballistic transport. On the other hand, recent progress in classical thermal transport has demonstrated interesting practical applications, such as the thermal diode, thermal transistor, and thermal logic gates [@Li_rev]. These studies suggest that the ability to manipulate thermal transport may lead to important technological breakthroughs, ranging from novel devices to improved thermal management in microelectronics, and even information processing by phonons. Unfortunately, the exactly solvable harmonic crystals do not exhibit these novel properties, and it turns out that anharmonicity is one of the key ingredients for their occurrence. Naturally, anharmonic systems are of great interest in order to deduce the basic microscopic origin of these novel properties. Hence, in the classical regime the effective phonon theory [@Nianbei06] and the self-consistent phonon theory [@Hu_SCPT06] were developed to study thermal transport in highly anharmonic systems. The former was based on the equipartition theorem, whereas the latter took advantage of the Feynman-Jensen inequality.

In the quantum regime, the role of anharmonicity is even more enticing but relatively unexplored. A plethora of techniques for weakly anharmonic systems have been developed in this regime [@Segal03; @Wang_revEPJB], but a robust theory for strong anharmonicity still eludes the community. One of the popular techniques to treat the strongly anharmonic quantum regime is based on the quantum master equation, which treats the system-bath interaction perturbatively [@Segal05; @Thingna12; @Thingna14]. Despite its ability to treat anharmonicity exactly, one major disadvantage is its inability to treat large phononic systems. This is mainly because this method operates in the eigenbasis of the system, whose size increases rapidly with temperature and the number of particles.
Hence, an approach that can deal with relatively large systems is critical to understanding the role of strong anharmonicity in thermal transport. In this paper we propose a feasible and effective approach to study thermal transport through anharmonic systems. The key idea is to renormalize the anharmonic Hamiltonian to an effective harmonic Hamiltonian using the quantum self-consistent phonon theory [@He08a]. We then apply the standard nonequilibrium Green's function machinery to study the effective harmonic model. The paper is organized as follows: In Sec. \[sec:2\], we introduce the anharmonic model and propose a modified Caroli formula for thermal transport based on the quantum self-consistent phonon theory. We then corroborate our quantum self-consistent approach with the help of the quantum master equation for mono- and di-atomic molecular junctions in Sec. \[sec:3\]. In Sec. \[sec:4\] we demonstrate an intriguing application of our method by investigating thermal rectification, and finally we summarize our main conclusions in Sec. \[sec:5\].

Quantum self-consistent phonon theory and modified Caroli formula {#sec:2}
=================================================================

![\[fig\_model\](Color online) Schematic illustration of the anharmonic model given by Eq. (\[s2-eq-Hamiltonian\]). The left and right harmonic baths are at temperatures $T_L$ and $T_R$ respectively. The central system consists of harmonic plus anharmonic interactions depicted by the periodic potential](Fig1.eps){width="\columnwidth"}

We consider the minimal model for thermal transport (as illustrated in Fig. \[fig\_model\]) that consists of a general one-dimensional system linearly coupled to two semi-infinite chains of harmonic oscillators, herein referred to as heat baths. The corresponding Hamiltonian $H$ of the total system reads [@Weiss08],

$$\begin{aligned} \label{s2-eq-Hamiltonian} H = H_{S} &+& \sum_{l}\frac{P_{l}^2}{2M_{l}} +\frac{M_{l}\omega_l^2}{2}\left(Q_l-\frac{c_l S_{L}}{M_{l}\omega_l^2}\right)^{2} \nonumber\\ &+&\sum_{r}\frac{P_{r}^2}{2M_{r}}+\frac{M_{r}\omega_r^2}{2}\left(Q_r-\frac{c_r S_{R}}{M_{r}\omega_r^2}\right)^{2},\end{aligned}$$

where $H_{S}$ describes the system of interest, and {$Q_{x}$, $P_{x}$, $M_{x}$, $\omega_{x}$} are the positions, conjugate momenta, masses, and mode frequencies of the left ($x=l$) and right ($x=r$) bath. The parameter $c_{x}$ is the system-bath coupling constant of the $x$-th mode of the left ($x=l$) or right ($x=r$) bath. The system operator $S_{\alpha}$ couples the system to the $\alpha$-th bath; in general it can be any system operator or a function thereof. The above Hamiltonian is commonly referred to as the Zwanzig-Caldeira-Leggett model [@Zwanzig73; @Caldeira83] and can be split into various regions as,

$$H=H_{S}+H_{L}+H_{R}+\sum_{\alpha = L,R} H_{S\alpha}+H_{\alpha}^{RN}.$$

The bath Hamiltonian is

$$\label{s2-eq-Hamiltonianbath} H_{\alpha}=\sum_{x}\frac{P_{x}^{2}}{2M_{x}}+\frac{M_{x}\omega_{x}^{2}}{2}Q_{x}^{2}.$$

The system-bath interaction Hamiltonian is

$$\label{s2-eq-Hamiltoniansysbath} H_{S\alpha}=S_{\alpha}\otimes B_{\alpha},$$

where $B_{\alpha}=-\sum_{x}c_{x}Q_{x}$ is the collective bath operator that couples to the system, and

$$\label{s2-eq-Hamiltonianrn} H_{\alpha}^{RN}=\frac{S_{\alpha}^2}{2}\sum_{x}\frac{c_{x}^{2}}{M_{x}\omega_{x}^{2}}$$

is known as the re-normalization (counter) term.
In this work, since we couple the system to the baths via position-position coupling, the re-normalization term is essential to maintain the translational invariance of the total system. In the above equations $\alpha = L;~x=l$ corresponds to the left bath and $\alpha = R;~x=r$ corresponds to the right bath. In order to simplify the description, we have considered a one-dimensional model, but the theory described below can be easily generalized to higher dimensions.

The main difficulty in calculating the heat current using the above Hamiltonian lies in the anharmonic interactions present in the system. To tackle such anharmonic interactions we apply the quantum self-consistent phonon theory (QSCPT) to obtain an effective harmonic system [@Feynman86; @He08a], and then obtain the heat current through the effective model using the machinery of the nonequilibrium Green's function (NEGF) [@Wang_revEPJB]. Without loss of generality, we consider the system to be a one-dimensional oscillator chain of the form $$H_{S}=\sum_{s}\frac{m}{2}\dot{x}_{s}^{2}+W(x_{s}-x_{s-1})+V(x_s),$$ where $W(\delta x)$ and $V(x)$ are the nearest-neighbor interaction and onsite potential, respectively. The partition function in the canonical ensemble can be expressed as a path integral over all possible trajectories, i.e., $$Z=\int\mathrm{D}\mathbf{x}e^{-\frac{S[\mathbf{x}]}{\hbar}},$$ where the measure of the functional integral is $\mathrm{D}\mathbf{x} \equiv \Pi d\mathbf{x}$ and $$\label{s2-eq-action} S[\mathbf{x}]=\int_0^{\hbar\beta}d\tau\left(\frac{m}{2}\dot{\mathbf{x}}^2+W(\delta \mathbf{x})+V[\mathbf{x}]\right).$$ In the action $S[\mathbf{x}]$ above, $\mathbf{x}$ and $\dot{\mathbf{x}}$ are implicit functions of the time variable $\tau$. The key idea of the QSCPT is to replace the original Euclidean action, Eq. (\[s2-eq-action\]), by an approximate one. To do this, we make a reasonable choice for the trial Hamiltonian $$\label{s2-eq-Hamiltonianeff} H_{S}^{eff}=\sum_{s}\frac{m}{2}\dot{x}_{s}^{2}+\frac{f_{c}}{2} (x_{s+1}- x_{s})^{2}+\frac{f}{2}x^{2}_{s},$$ where the parameters $f_c$ and $f$ are to be deduced by minimizing the right hand side of the Feynman-Jensen inequality [@Feynman98]: $$\label{s2-eq-feynman} F\leq F_{0}+\langle H_{S}-H_{S}^{eff}\rangle_{\mathrm{\scriptscriptstyle{canonical}}},$$ where $F_0=-k_{B}T\ln{Z_0}$. The trial partition function is $$Z_{0}=\int \mathrm{D}\mathbf{x}e^{-\frac{S_{0}[\mathbf{x}]}{\hbar}},$$ where $S_{0}[\mathbf{x}] = \int_0^{\hbar\beta}d\tau\left(m\dot{\mathbf{x}}^2+f_{c}\delta \mathbf{x}^{2}+f\mathbf{x}^{2}\right)/2$. The canonical average in Eq. (\[s2-eq-feynman\]) is computed with the trial Hamiltonian Eq. (\[s2-eq-Hamiltonianeff\]), and it can be easily calculated since the integrand takes a quadratic form.
Finally, the parameters $f_c$ and $f$ can be obtained by solving the following self-consistent equations: $$\begin{aligned} \label{eq:wp} \omega_{p}^{2}&=&\frac{2}{m}\left[\frac{\partial V(\rho)}{\partial \rho^{2}}+4\frac{\partial W(\delta\rho)}{\partial (\delta\rho^{2})}\sin^{2}\left(\frac{p\pi}{N}\right)\right],\\ \label{eq:rho} \rho^{2}&=&\langle x_{k}^{2}\rangle \nonumber \\ &=&\frac{\hbar}{2Nm}\sum_{p}\frac{1}{\omega_{p}}\coth\left(\frac{ \beta\hbar\omega_{p}}{2}\right),\\ \label{eq:drho} \delta\rho^{2}&\equiv&\langle(x_{k}-x_{k-1})^{2}\rangle \nonumber \\ &=& \frac{\hbar}{2Nm}\sum_{p} \frac{4\sin^{2}(p\pi/N)}{\omega_{p}}\coth\left(\frac{\beta\hbar\omega_{p}}{2}\right),\end{aligned}$$ where $\beta=(k_{B}T)^{-1}$, $T=(T_{L}+T_{R})/2$, and the variables $\omega_p,~\rho,$ and $\delta \rho$ depend implicitly on $f_c$ and $f$. Note that the canonical average is performed at the average temperature $T$, which requires that the heat baths have minimal influence on the system. This assumption can be realized in a variety of scenarios, e.g., when the system-bath coupling is weak and the temperature difference is small, or when the system is comprised of various segments, each strongly interacting with its own bath and weakly interacting with one another. It is worth noting here that although the effective phonon theory [@Nianbei06] is similar to the QSCPT described above, it does not capture the essential quantum physics, since it relies on the validity of the equipartition theorem. This serves as our main motivation to use the QSCPT and study *quantum* thermal transport.

Therefore, given the effective Hamiltonian, the model described by Eq. (\[s2-eq-Hamiltonian\]) is approximated as, $$H \approx H^{eff}_{S}+H_{L}+H_{R}+\sum_{\alpha =L,R} H_{S\alpha}+H_{\alpha}^{RN}.$$ Using the standard techniques to treat harmonic systems [@Dhar_rev; @Wang_revEPJB] we obtain the steady-state heat current given by the Landauer-like formula, $$\label{s2-eq-current} I_{L}=-I_{R}=\frac{1}{2\pi}\int_{0}^{\infty} d\omega\hbar\omega \widetilde{T}(\omega)(f_{L}-f_{R}).$$ The above formula is valid for any temperature difference between the left and right baths. The transmission function $\widetilde{T}(\omega)$ is given by a modified Caroli formula, $$\widetilde{T}(\omega)=\mathrm{Tr}(G^{r}\Gamma_{L}G^{a}\Gamma_{R}), \label{eq:caroli}$$ where $$\begin{aligned} \label{s2-eq-Greensfunc} G^r&=&\left[ m\omega^2I-\widetilde{K}-\Sigma_{L}^{r}-\Sigma_{R}^{r}\right]^{-1},\\ \label{s2-eq-Gamma} \Gamma_{\alpha}&=&-2\mathrm{Im}(\Sigma_{\alpha}^{r}),\\ f_{\alpha}&=&\frac{1}{e^{\hbar\omega/(k_{B}T_{\alpha})}-1}.\end{aligned}$$ Here $I$ is the identity matrix and $\Sigma_{\alpha}^{r}$ ($\omega$ dependence suppressed) is the retarded self-energy of the $\alpha$-th bath, which depends entirely on the properties of the bath. The effective force matrix $\widetilde{K}$ above is tridiagonal and $G^a=(G^r)^{\dag}$. It is important to note that the transmission function $\widetilde{T}(\omega)$ here is temperature dependent for anharmonic systems, since the trial parameters $f_c$ and $f$ are temperature dependent. Hence, owing to this inherent temperature dependence arising from the self-consistent equations (\[eq:wp\], \[eq:rho\], and \[eq:drho\]), we herein term Eq. (\[eq:caroli\]) the modified Caroli formula. Such a temperature dependent transmission function has been observed previously in the context of mean-field approximations [@Thingna12; @Zhang13; @Li13].
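As an illustration, the sketch below iterates Eqs. (\[eq:wp\])–(\[eq:drho\]) by fixed-point iteration and then evaluates the modified Caroli formula Eq. (\[eq:caroli\]). It is only a minimal sketch, not the authors' code: we assume quartic potentials $V(x)=kx^{2}/2+\lambda x^{4}/4$ and $W(\delta x)=k_{c}\delta x^{2}/2+\lambda_{c}\delta x^{4}/4$ (for which the Gaussian smearing gives $f=k+3\lambda\rho^{2}$ and $f_{c}=k_{c}+3\lambda_{c}\delta\rho^{2}$), free chain ends, the Lorentz-Drude lead self-energy evaluated in the following section, and units $\hbar=k_{B}=1$; all function and parameter names are ours.

```python
import numpy as np

def solve_scp(N, m, k, lam, kc, lamc, T, tol=1e-12, it_max=100000):
    """Fixed-point iteration of Eqs. (eq:wp)-(eq:drho) for quartic
    potentials; assumes a pinned chain (k > 0) so all omega_p > 0."""
    beta = 1.0 / T                                   # hbar = k_B = 1
    s2 = np.sin(np.pi * np.arange(N) / N) ** 2
    rho2 = drho2 = 0.0
    for _ in range(it_max):
        f = k + 3.0 * lam * rho2                     # Gaussian smearing
        fc = kc + 3.0 * lamc * drho2
        wp = np.sqrt((f + 4.0 * fc * s2) / m)        # Eq. (eq:wp)
        coth = 1.0 / np.tanh(0.5 * beta * wp)
        rho2_new = np.sum(coth / wp) / (2.0 * N * m)              # Eq. (eq:rho)
        drho2_new = np.sum(4.0 * s2 * coth / wp) / (2.0 * N * m)  # Eq. (eq:drho)
        done = abs(rho2_new - rho2) + abs(drho2_new - drho2) < tol
        rho2, drho2 = rho2_new, drho2_new
        if done:
            break
    return k + 3.0 * lam * rho2, kc + 3.0 * lamc * drho2

def transmission(omega, N, m, f, fc, gamma, wc):
    """Modified Caroli formula T(w) = Tr(G^r Gamma_L G^a Gamma_R), with
    the per-lead Lorentz-Drude self-energy Sigma^r = J(w)(w/wc - i)
    attached to the first and last sites (free ends assumed)."""
    J = gamma * m * omega / (1.0 + (omega / wc) ** 2)
    Kt = (np.diag(np.full(N, f + 2.0 * fc))
          - fc * (np.eye(N, k=1) + np.eye(N, k=-1)))
    Kt[0, 0] = Kt[-1, -1] = f + fc
    Sig = np.zeros((N, N), dtype=complex)
    Sig[0, 0] = Sig[-1, -1] = J * (omega / wc - 1j)
    Gr = np.linalg.inv(m * omega ** 2 * np.eye(N) - Kt - Sig)
    GammaL = np.zeros((N, N)); GammaL[0, 0] = 2.0 * J   # Gamma = 2 J(w)
    GammaR = np.zeros((N, N)); GammaR[-1, -1] = 2.0 * J
    return np.trace(Gr @ GammaL @ Gr.conj().T @ GammaR).real

# Example: transmission at w = 1 for an 8-site chain at temperature T = 0.5
f, fc = solve_scp(N=8, m=1.0, k=1.0, lam=1.0, kc=1.0, lamc=1.0, T=0.5)
print(transmission(1.0, 8, 1.0, f, fc, gamma=0.1, wc=10.0))
```

The heat current of Eq. (\[s2-eq-current\]) then follows by integrating $\hbar\omega\,\widetilde{T}(\omega)(f_{L}-f_{R})/2\pi$ over frequency, with the transmission recomputed at every mean temperature, since $f$ and $f_{c}$ are temperature dependent.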
It is worth emphasizing that the QSCPT [@He08a] has previously been used to evaluate the thermal conductivity via the kinetic theory as $\kappa = C v l$, with $C$ being the heat capacity, $v$ the phonon velocity, and $l$ the mean free path [@Shrivastava]. In this work we integrate, for the first time, the QSCPT with the NEGF, giving the "quantum self-consistent approach" a firm theoretical basis to be applied in the nonequilibrium *quantum* regime.

Corroborating the quantum self-consistent approach {#sec:3}
==================================================

The theory outlined above is a general approach applicable to any anharmonic system that can be approximated as a harmonic one. In this section, we will take specific examples of the system Hamiltonian $H_{S}$, i.e., a monoatomic molecule confined in a quartic potential and a diatomic molecule with a quartic interaction and/or onsite potential. The system of interest is linearly connected to two heat baths via the position operator, i.e., $S_{L}=S_{R}=x$ for the monoatomic molecule and $S_{L}=x_1,S_{R}=x_2$ for the diatomic case. In order to specify all properties of the baths we define a spectral density $$J_{\alpha}(\omega) = \frac{\pi}{2} \sum_{x} \frac{c_{x}^2}{M_{x}\omega_{x}} \delta\left(\omega - \omega_{x}\right),$$ which incorporates the effect of the system-bath coupling, since it is proportional to $c_{x}^{2}$. Above, $\alpha = L;~ x=l$ corresponds to the left bath and $\alpha = R;~ x=r$ to the right. Given the above definition we can now recast the re-normalization part of the Hamiltonian Eq. (\[s2-eq-Hamiltonianrn\]) as, $$H_{\alpha}^{RN} = \frac{S_{\alpha}^{2}}{\pi} \int_{0}^{\infty}d\omega\frac{J_{\alpha}(\omega)}{\omega} = \frac{S_{\alpha}^2}{2}\gamma_{\alpha}(0),$$ where $\gamma_{\alpha}(0)$ is commonly referred to as the damping kernel of the $\alpha$-th bath at time $0$. The self-energy $\Sigma_{\alpha}^{r}$ used in Eq. (\[s2-eq-Greensfunc\]) can now be expressed in terms of the spectral density [@Saito07] as, $$\Sigma_{\alpha}^{r} = \frac{1}{\pi}\mathrm{P}\int_{-\infty}^{+\infty}\frac{J_{\alpha}(\omega^{\prime})}{\omega-\omega^{\prime}}d\omega^{\prime}+\gamma_{\alpha}(0) - i J_{\alpha}(\omega),$$ where $\mathrm{P}$ denotes the principal value integral and, importantly, we have added the term coming from the re-normalization part of the Hamiltonian, i.e., $\gamma_{\alpha}(0)$, to the self-energy. It is important to note here that if the system-bath coupling has a nonlinear form, then the re-normalization term should be incorporated into the effective Hamiltonian Eq. (\[s2-eq-Hamiltonianeff\]). Throughout this work we will choose both the left and right baths to have the same properties, i.e., $J_{L}(\omega) = J_{R}(\omega) = J(\omega)$, and use a specific form of this spectral density, namely, $$J(\omega)=\frac{\gamma m \omega}{1+(\omega/\omega_{c})^{2}},$$ where the parameter $\gamma$ is the Stokesian damping coefficient and characterizes the system-bath coupling strength. The above spectral density corresponds to the Lorentz-Drude model of the heat bath, i.e., an Ohmic bath with a Lorentz-Drude cutoff $\omega_c$. Note that the theory given in Sec. \[sec:2\] is not restricted to any particular form of the spectral density, but the Lorentz-Drude form allows us to evaluate $H_{\alpha}^{RN}$ and $\Sigma^{r}=\Sigma_{L}^{r}+\Sigma_{R}^{r}$ analytically as, $$\begin{aligned} H_{\alpha}^{RN}&=&\frac{S_{\alpha}^{2}}{2}\gamma m \omega_{c} ,\\ \Sigma^{r}&=&2J(\omega)\left[\frac{\omega}{\omega_{c}}-i\right],\end{aligned}$$ which immediately leads to $\Gamma_{\alpha}=-2\mathrm{Im}(\Sigma_{\alpha}^{r})=2J(\omega)$ via Eq. (\[s2-eq-Gamma\]).

Monoatomic molecule
-------------------

The Hamiltonian for the monoatomic molecule is given by $$\label{s3-eq-hamiltonian} H_{S}=\frac{p^{2}}{2m}+\frac{k}{2}x^{2}+\frac{\lambda}{4}x^{4},$$ where $k$ and $\lambda$ are the spring constant and the strength of the quartic anharmonicity. According to the QSCPT, we can obtain the effective Hamiltonian for this model as, $$H_{S}^{eff}=\frac{p^{2}}{2m}+\frac{f}{2}x^{2},$$ where the effective force constant $f$ is obtained by solving the self-consistent nonlinear equation $$\label{s3-eq-f} f=k+\frac{3\hbar\lambda}{2m\Omega}\coth\left(\frac{\beta\hbar\Omega}{2}\right),$$ where $\Omega=\sqrt{f/m}$. Using the Landauer-like formula Eq. (\[s2-eq-current\]) one can obtain the heat current for the monoatomic molecule as $$\label{s3-eq-current-monatomic} I_{L}=\frac{1}{2\pi}\int_{0}^{\infty}d\omega\hbar\omega G^{r}\Gamma_{L} G^{a}\Gamma_{R}(f_{L}-f_{R}),$$ where $G^{r}$, $G^{a}$, $\Gamma_{L}$ and $\Gamma_{R}$ are all numbers for a single degree of freedom, with $G^{r}(\omega)=[m\omega^{2}-f-\Sigma^{r}(\omega)]^{-1}$.

![\[fig1\](Color online) Current $I_{L}$ as a function of temperature $T$ for various strengths of the quartic potential in a monoatomic molecule. Lines correspond to the quantum self-consistent approach and the empty symbols correspond to the quantum master equation. Inset shows the temperature dependence of the transmission function for $\lambda/k=8$. All common parameters are: $m=$ 1u, $k=$ 30.160meV/($\textup{\AA}^2$u), $\gamma=$ 0.92THz, $\omega_{c}=$ 0.92PHz, and $\Delta=0.05$. The left bath temperature $T_{L} = T\left(1+\Delta\right)$, whereas the right bath temperature $T_{R} = T\left(1-\Delta\right)$. $\lambda/k$ has dimensions of \[$\textup{\AA}^2$u\]$^{-1}$.](Fig2.eps){width="\columnwidth"}

Figure \[fig1\] shows the heat current $I_{L}$ calculated via the quantum self-consistent approach, Eqs. (\[s3-eq-f\]) and (\[s3-eq-current-monatomic\]) (lines), and the quantum master equation (empty symbols) [@Thingna12]. Since the quantum master equation is exact for any strength of anharmonicity, we treat it as our benchmark to validate our approach. In Fig. \[fig1\] we see an excellent agreement between the two fundamentally different approaches for considerably high values of anharmonicity and over the entire temperature range. The inset of Fig. \[fig1\] shows the strong temperature dependence of the transmission function, indicating the significant role anharmonicity plays in this system.

Diatomic molecule
-----------------

The Hamiltonian for the diatomic molecule is given by $$\begin{aligned} H_{S}=\frac{p^{2}_{1}}{2m}+\frac{p^{2}_{2}}{2m}&+&\frac{k}{2}(x^{2}_{1} +x^{2}_{2}) +\frac{k_{c}}{2}(x_{1} -x_{2})^{2} \nonumber \\ &+&\frac{\lambda}{4}(x^{4}_{1}+x^{4}_{2})+\frac{\lambda_{c}}{4}(x_{1} -x_{2})^{4},\end{aligned}$$ where $\lambda$ and $\lambda_c$ are the strengths of the anharmonic onsite and interaction potentials, respectively.
The effective Hamiltonian according to the QSCPT reads $$H_{S}^{eff}=\frac{p^{2}_{1}}{2m}+\frac{p^{2}_{2}}{2m}+\frac{f}{2}(x^{2}_{1} +x^{2}_{2})+\frac{f_{c}}{2}(x_{1}-x_{2})^{2},$$ where $f$ and $f_{c}$ are obtained by solving the following self-consistent nonlinear equations: $$\begin{aligned} \label{eq:fdi} f&=&k+\frac{3\hbar\lambda}{4m}\left[\frac{1}{\Omega_{1}}\coth\left(\frac{\beta\hbar\Omega_{1}}{2}\right) \right. \nonumber \\ && \left.+\frac{1}{\Omega_{2}} \coth\left(\frac{\beta\hbar\Omega_{2}}{2}\right)\right],\\ \label{eq:fcdi} f_{c}&=&k_{c}+\frac{3\hbar\lambda_{c}}{m\Omega_2}\coth\left(\frac{\beta\hbar\Omega_{2}}{2}\right),\end{aligned}$$ where $\Omega_{1}=\sqrt{f/m}$ and $\Omega_{2}=\sqrt{(2f_{c}+f)/m}$. The heat current across the diatomic molecule is then given by $$\begin{aligned} I_{L}&=&\frac{1}{2\pi}\int_{0}^{\infty}d\omega\hbar\omega \mathrm{Tr}(G^{r}\Gamma_{L} G^{a}\Gamma_{R})(f_{L}-f_{R}) \nonumber \\ &=&\frac{2}{\pi}\int_{0}^{\infty}d\omega\hbar\omega\left|G^{r}_{12}(\omega)\right|^{2}J^2(\omega)(f_{L}-f_{R}), \label{s3-eq-current-diatomic}\end{aligned}$$ where $$\begin{aligned} G^{r}_{12}(\omega)&=&\frac{f_{c}}{\left[b^{2}(\omega)-J^{2}(\omega)-f_{c}^{2}+i2b(\omega)J(\omega)\right]},\nonumber \\ b(\omega)&=&m\omega^{2}-(f+f_{c}+\gamma m\omega_{c})-\frac{J(\omega)\omega}{\omega_{c}}.\end{aligned}$$

Figure \[fig2\] shows a comparison between our quantum self-consistent approach, Eqs. (\[eq:fdi\]), (\[eq:fcdi\]), and (\[s3-eq-current-diatomic\]) (lines), and the quantum master equation (empty symbols) for various combinations of the anharmonic parameters $\lambda$ and $\lambda_c$. The favorable agreement further validates our approach, and the inset of Fig. \[fig2\] shows the temperature dependence of the transmission function, indicating the strong role of anharmonicity.

![\[fig2\](Color online) Current $I_{L}$ as a function of temperature $T$ for various strengths of the quartic interaction and quartic onsite potential in a diatomic molecule. Lines correspond to the quantum self-consistent approach and the empty symbols correspond to the quantum master equation. Inset shows the temperature dependence of the transmission function for $\lambda/k=\lambda_c/k_c = 8$\[$\textup{\AA}^2$u\]$^{-1}$. All common parameters are: $m=$ 1u, $k=k_c=$ 30.160meV/($\textup{\AA}^2$u), $\gamma=$ 0.92THz, $\omega_{c}=$ 0.92PHz, and $\Delta=0.05$. The left bath temperature $T_{L} = T\left(1+\Delta\right)$, whereas the right bath temperature $T_{R} = T\left(1-\Delta\right)$. $\lambda/k$ and $\lambda_c/k_c$ have dimensions of \[$\textup{\AA}^2$u\]$^{-1}$.](Fig3.eps){width="\columnwidth"}

Thermal rectification in a two-segment model {#sec:4}
============================================

In order to show the strength of the quantum self-consistent approach, we consider the stationary heat current across a chain consisting of two weakly coupled lattices, $$\label{s5-eq-Hamitonian} H_{C} = H_{1}+\frac{k_{int}}{2} (x_{N/2+1}-x_{N/2})^2+H_{2}.$$ The Hamiltonians for the left and right segments are given by $$H_{1} = \sum^{N/2}_{n=1} \frac {p_n^2}{2m} +V_{1}(x_{n+1}-x_{n}) +U_{1}(x_{n})$$ and $$H_{2} = \sum^{N}_{n=N/2+1} \frac {p_n^2}{2m} +V_{2}(x_{n+1}-x_{n}) +U_{2}(x_{n}).$$ Above, $V_{1(2)}$ represents the interaction potential and $U_{1(2)}$ the onsite potential. The classical version of this model has been extensively studied to investigate the thermal rectification effect [@Li_rev]. The occurrence of thermal rectification requires: i) asymmetry and ii) anharmonicity in the system.
The model above exhibits spatial asymmetry, due to the two segments having different parameters, and anharmonicity, due to the nonlinear interaction potentials. Hence, we choose the potentials of the two segments $a=1,2$ to take the form $$\label{s5-eq-int-potential} V_{a}(x)=\frac{1}{2}k_{c,a}x^2+\frac{1}{4}\lambda_{c,a}x^4,$$ and $$\label{s5-eq-on-potential} U_{a}(x)=\frac{1}{2}k_{a}x^2+\frac{1}{4}\lambda_{a}x^4.$$ In order to apply the quantum self-consistent approach it is crucial that the system remains approximately at one temperature $T$. Since the goal is to study thermal rectification, a phenomenon far beyond linear response, we choose the segment-segment coupling $k_{int}$ to be weak. This weak coupling between the two segments causes each segment to attain a temperature close to that of the bath it is connected to, i.e., the left segment $H_1$ attains the temperature $T_L$ and $H_2$ attains $T_R$. In accordance with our corroboration in Sec. \[sec:3\], we choose each segment to be weakly coupled to its respective bath. This allows us to safely apply our quantum self-consistent approach to each segment separately. According to the QSCPT, the Hamiltonian $H_{1(2)}$ can thus be approximated by the effective Hamiltonian $H_{1(2)}^{eff}$ that takes the form $$\label{s5-eq-H1eff} H_{1}^{eff} = \sum^{N/2}_{n=1} \frac {p_n^2}{2m} +\frac {f_{c,1}}{2} (x_{n+1}-x_{n})^2 +\frac {f_{1}}{2}x_n^2,$$ and $$\label{s5-eq-H2eff} H_{2}^{eff} = \sum^{N}_{n=N/2+1} \frac {p_n^2}{2m} +\frac {f_{c,2}}{2} (x_{n+1}-x_{n})^2 +\frac {f_{2}}{2}x_n^2.$$ The temperatures of the left and right heat baths are given by $T_{L(R)}=T(1\pm\Delta)$. Using the effective Hamiltonians Eqs. (\[s5-eq-H1eff\]) and (\[s5-eq-H2eff\]) in Eq. (\[s5-eq-Hamitonian\]), we can obtain the heat current through the system using the Landauer-like formula Eq. (\[s2-eq-current\]).

Figure \[fig5\] shows the heat current as a function of the dimensionless temperature difference $\Delta$. The sign of $\Delta$ indicates the direction of the current, i.e., $\Delta>0$ corresponds to a current from the left lead to the right lead and vice versa. We find that the heat current is substantially larger for $\Delta>0$ than for $\Delta<0$, leading to a large thermal rectification ratio $R= |I_L(\Delta)-I_L(-\Delta)|/{\rm max}\{I_L(\Delta),I_L(-\Delta)\}$, where $I_L(\Delta)$ is the current evaluated at a fixed value of the temperature difference $\Delta$. The maximum rectification we achieve is $\approx 98\%$, and to the best of our knowledge this value far exceeds the values obtained so far in the quantum regime [@WuPRL09; @WuPRE09].

![\[fig5\](Color online) Current $I_{L}$ as a function of the relative temperature difference $\Delta$. Parameters used for the calculation are: $m=$ 1u, $k_{c,1}=k_{c,2}=k_1=k_2=$ 60.321meV/($\textup{\AA}^2$u), $2\lambda_{c,1}/k_{c,1}=\lambda_{c,2}/k_{c,2}=$ 2\[$\textup{\AA}^2$u\]$^{-1}$, $2\lambda_1/k_1=\lambda_2/k_2=$ 0.4\[$\textup{\AA}^2$u\]$^{-1}$, $k_{int}=$ 3.016meV/($\textup{\AA}^2$u), $\gamma=$ 0.92THz, $\omega_{c}=$ 0.92PHz, $T=$ 490K, and $N=8$. The left bath temperature $T_{L} = T\left(1+\Delta\right)$, whereas the right bath temperature $T_{R} = T\left(1-\Delta\right)$. The schematic shows the model considered to obtain the thermal rectification.](Fig4.eps){width="\columnwidth"}

Conclusion {#sec:5}
==========

We have developed a model-independent quantum self-consistent approach to study thermal transport across anharmonic systems.
The key idea is to renormalize the anharmonic system to an effective harmonic one through a nonperturbative self-consistent approach. The effective Hamiltonian allows us to utilize the nonequilibrium Green's function machinery, which is exact for harmonic systems, in order to evaluate the heat current. In the case of strongly anharmonic systems we corroborate our approach with the master-equation-based formulation and find excellent agreement over the entire temperature range for mono- and di-atomic systems. Moreover, we also tackle an interesting two-segment anharmonic model consisting of $8$ particles that is well beyond the reach of master-equation-based formulations. The two-segment model exhibits a significantly large rectification ratio in the quantum regime, which is due to the strongly temperature-dependent phonon bands that overlap unequally for the two directions of the temperature bias [@Hu_SCPT06]. Overall, the quantum self-consistent approach is highly efficient and can be extended to higher dimensions, incorporating effects of mass disorder [@Chaudhuri10]. Complicated anharmonic potentials, like the onsite Morse potential [@Hu_SCPT06; @He09], could also be handled within this approach, making it highly versatile. However, one should bear in mind that the approach in its present form has two limitations. Firstly, since we apply the canonical average to the system at approximately the average temperature, the approach cannot be applied to study homogeneous systems under a large temperature gradient, although inhomogeneous systems such as the two-segment model illustrated in Sec. \[sec:4\] can still be easily studied. Secondly, even though anharmonicity can be captured exactly within this approach, it fails to capture the diffusive behavior of systems. In other words, the phonon mean-free path within this formulation is infinite. Thus, the approach captures the essential physics only of systems that are shorter than their actual phonon mean-free path, making it relevant to the field of nano-device engineering.

We acknowledge helpful discussions with Sahin Buyukdagli, Lifa Zhang, and the late Prof. Bambi Hu. D. H. is supported by the NSFC of China (Grant Nos. 11105112 and 11335006) and the NSF of Fujian Province (No. 2016J01036). J.-S. W. acknowledges support from FRC grant R-144-000-343-112.
null
minipile
NaturalLanguage
mit
null
Q: *Value relation* widget in QGIS

I cannot understand what the Value Relation widget of the Fields menu in QGIS actually does. For example: I have a point layer with some columns (ID, value1, ...) and a table (just a dbf file) with other values (ID, valueA, ...). If I open the Fields menu of the point layer I can set some options for the selected column: Layer (set to table.dbf), Key column (set to ID) and Value column (set to valueA). Now, if I create a new point, a dropdown menu appears in the row of the widget column, where I can choose the valueA from table.dbf. And that's ok. But what is the meaning of the Key column? Does someone have an explanation for that? Am I missing something?

A: The Key column holds the value that is saved to the field, replacing the previous value. The idea is that each key corresponds to a value that you see in the drop-down menu.

EDIT: Imagine a simple table:

Key | Value
1   | Apple
2   | Pear
3   | Passion fruit

Value relation enables you to select "Apple" in the drop-down, but save the value "1" (the key) to the field. As you can imagine, this can be very useful in relational DBs, such as PostgreSQL.
null
minipile
NaturalLanguage
mit
null
Roaming the field in Martin Stadium in a snowstorm in November is invigorating, but crawling out of bed for a slog through the stuff in March - not so much. Especially if you are the head coach and you have been there, done that before. "I'm getting too old for that," jokes Cougar head football coach Bill Doba, who decided to cancel today's practice after getting a glimpse of the snowfall this morning. That can be taken as both good news and bad news for members of the Cougar team, depending on where they sit in the team hierarchy. For newcomers like tight end Devin Frischknecht and defensive back Cornorris Atkins, the more reps they can get, the better. For the walking wounded like offensive lineman Joe Eppele and quarterback Kevin Lopina, it means extra rest for their injuries - in Eppele's case, just a minor tweak of the shoulder, and in Lopina's case, a pulled hamstring. Doba says Eppele is expected to practice tomorrow while Lopina will sit out. But practice or not, the Cougars' main concern remains the same - the defensive secondary.
null
minipile
NaturalLanguage
mit
null
Abderahmen Zoghbi, Institute of Astronomy, Cambridge

Narrow Line Seyfert 1 galaxies (NLS1s) are thought to be accreting at rates close to Eddington. They are highly variable, and are taken to be perfect targets for understanding the accretion process. The X-ray spectra of three Narrow Line Seyfert 1 galaxies from the XMM archive have been analysed. They are well fitted with the relativistically blurred photoionized disc reflection model. The spectrum of Mrk 478 seems not to change over a one-year period, despite a big drop in the flux. UGC 11763 shows a spectral drop around 7 keV, similar to those seen in other NLS1s, but has a remarkably hard spectrum (Gamma ~ 1.6).
null
minipile
NaturalLanguage
mit
null
The general features of the proteolytic fragmentation of fibrinogen have been established by our work and others' previous work. Further studies will focus on details, such as the fate of the thrombin-susceptible bonds during the fragmentation. Some physicochemical studies will be performed to characterize the isolated D and E fragments. The data on their unfolding by heat, obtained by differential scanning calorimetry, will be supplemented by studies of unfolding by acid, alkali, and denaturing agents, using difference spectroscopy and fluorescence for this purpose. The structural rigidity of the native fibrinogen molecule will be studied by nanosecond fluorimetry of derivatives labeled with fluorescent chromophores.
null
minipile
NaturalLanguage
mit
null
[Role of postoperative adjuvant radiotherapy in the treatment of class T2-3 N0 M0 adenocarcinoma of the kidney]. From 1968 to 1983, in the Department of Urology and in the Institute of Radiotherapy of the University of Brescia, the role of postoperative radiotherapy (PORT) in 95 patients with renal adenocarcinoma (T2-3 N0 M0) was investigated. From 1968 to 1978, 46 patients underwent radical nephrectomy and PORT; from 1978 to 1983, 49 patients were submitted to radical nephrectomy with regional lymphadenectomy (CH). Overall survival (PCS) at 5 years is 63% (PORT) vs 57% (CH) (p > 0.05). The probability of survival is better for left-sided neoplasms than for right-sided ones. In the CH group, the 5-year PCS is 40% vs 70%, respectively, for right vs left neoplasms (p < 0.05); in the PORT group, PCS is 59% (right) vs 70% (left). For right-sided cancers, 5-year PCS is higher for PORT (59%) than for CH (40%) patients (p < 0.05). In the PORT group acute bowel toxicity was 24% (grade 2, WHO). In 2 patients only (4.3%) was PORT stopped because of toxicity. PORT sequelae were investigated in the spinal cord, contralateral kidney, liver and bowel. Bowel sequelae (grade 2, Dische) were observed in 3 patients only (6.5%). In the T2-3 N0 M0 classes, radical nephrectomy with PORT may give the same results as aggressive surgery, with a low biological cost. Prognostic data might mean a different and more favorable loco-regional evolution for left-sided renal cancers.
null
minipile
NaturalLanguage
mit
null
OKC TOWING SERVICE
405-788-4080

VEHICLE LOCKOUT SERVICE

We will open your car damage-free and get you back on the road in no time. One of our experienced and professional drivers will get to your location with all of the necessary tools and supplies and open up your car, regardless of the make or model. We use state-of-the-art tools and equipment that will never cause any damage to your car, truck, van or SUV. Best of all, the entire process can be completed in just a few minutes. Our rates are very affordable and we would be happy to provide you with a free quote right over the phone. With our fast response time, specialized tools and exceptional customer service, OKC Towing Service is the leading provider in Oklahoma for your roadside assistance needs. Call us today at 405-788-4080.

Lockout tools used:

Auto Lockout Tool Set – an assortment of entry tools made for safely getting inside your vehicle without any damage to the frame or windows. These tools are built to assist people who have locked themselves out of their car.

Air or Plastic Wedge – a pump connected to a high-tension bag. This is a more forceful approach to opening a locked door, but very effective. After the air wedge is in the door, we use a special rod to access the vehicle's lock system and open your door without any scratches or damage.
null
minipile
NaturalLanguage
mit
null
Kaveh Mazaheri

Kaveh Mazaheri (born 14 September 1981 in Tehran, Iran) is an Iranian director and scriptwriter. He started out by writing film criticism in magazines. After graduating in railway engineering from Iran University of Science and Technology in 2005, he made his first short narrative film, entitled "Tweezers". He has directed four independent short narrative films and more than twenty short and medium-length documentaries. He has won numerous awards from national and international film festivals for the short film Retouch, such as "Best Short Fiction Film" at the Tribeca, Kraków, Palm Springs, Stockholm, Tirana, and Fajr film festivals.

Movies

Short Narrative Films
Tweezers (2007)
Cockroach (2016)
Retouch (2017)

Documentaries
Waxinema (2008)
Day of Blood (2008)
Soori's Trip (2010)
A Report about Mina (2015)
Flight to Pardis (2016)

Awards
Yamagata International Documentary Film Festival (Japan 2015) - Winner Jury Special Mention
Tribeca Film Festival (USA 2017) - Winner Best Narrative Short, Winner Jury Prize
Kraków Film Festival (Poland 2017) - Winner Best Short Fiction Film, Winner Don Quixote Award
Palm Springs International ShortFest (USA 2017) - Winner Best Live Action over 15 Minutes
Fajr Film Festival (Iran 2017) - Winner Best Short Film (Crystal Simorgh Prize)
25th Curtas Vila do Conde International Film Festival (Portugal 2017) - Winner Audience Award, Nominated Grand Prize
Stockholm International Film Festival (Sweden 2017) – Winner Best Short Film
Traverse City Film Festival (USA 2017) - Winner Best Short Fiction Film
Hancheng International Short Film Festival (China 2017) - Winner 3rd Prize for Best Short Film of Silk Road Competition
Ojai Film Festival (USA 2017) – Winner Best Narrative Short
Tirana International Film Festival (Albania 2017) – Winner Best Short Fiction
Wine Country Film Festival-WCFF (USA 2017) – Winner "COURAGE IN CINEMA" AWARD
Aix-en-Provence International Short Film Festival (France 2017) – Winner Jury Prize
Almería International Film Festival (Spain 2017) – Winner Best Screenplay
Iranian Film Festival – San Francisco (USA 2017) – Winner Best Screenplay for Short Film
Asiana International Short Film Festival (Korea 2017) – Winner Jury Special Mention
São Paulo International Short Film Festival (Brazil 2017) – Winner Audience Favorite Prize, Nominated Best Film
Bosphorus International Film Festival (Turkey 2017) – Winner Best International Short Fiction Film
International Short Film Festival ZUBROFFKA (Poland 2017) – Winner Best Film
SETEM Academy Silk Road International Film Festival (Turkey 2017) – Winner Grand Special Jury Prize
Moscow International Film Festival (Russia 2017) – Nominated Best Film of the Short Film Competition (Silver St. George)
Durban International Film Festival (South Africa 2017) - Nominated Best International Short Film
Dokufest International Documentary and Short Film Festival (Kosovo 2017) - Nominated Best Fiction Short Film
Encounters Film Festival (UK 2017) – Nominated Best Film
Moondance International Film Festival (USA 2017) – Nominated Best Short Film
Denver Film Festival (USA 2017) – Nominated Best Short Film
Tallgrass Film Festival (USA 2017) – Nominated Best Short Film
Tacoma Film Festival (USA 2017) – Nominated Best Short Film
Adelaide Film Festival (Australia 2017) – Nominated Best Short Film
Sedicicorto International Film Festival (Italy 2017) – Nominated Best Short Film
Chicago International Film Festival (USA 2017) – Nominated Gold Hugo for Best Fiction Short Film
Valladolid International Film Festival Seminci (Spain 2017) – Nominated Best Foreign Short at Meeting Point Section

References
Persian Wikipedia
https://tribecafilm.com/filmguide/retouch-2017
https://mubi.com/cast/kaveh-mazaheri
http://directorsnotes.com/2017/10/09/kaveh-mazaheri-retouch/
https://bingz.info/director/kaveh-mazaheri/
The film "Tweezers" on the short film website
Page of the film "Waxinema" on the short film website
Retouch

Category:1981 births
Category:Iranian screenwriters
Category:Iranian film directors
Category:Iranian documentary filmmakers
Category:Living people
null
minipile
NaturalLanguage
mit
null
“Upon incarceration, you are subject to servitude, and if you don’t comply then you will be subject to some form of punishment. This is nothing more than a caste system by which the minds of the incarcerated are taught to place themselves below the authority and to remain at their beck and call. This translates to their life during freedom.” – Inmate 100608190

Just a few months ago, North and South Carolinians were prepping for Hurricane Florence. They were wiping out the shelves at a variety of grocery stores, boarding up their residences and businesses, and in many cases evacuating their homes. In preparation for the heavy rain and wind, mandatory evacuations were issued. One in particular came from the governor of South Carolina, Henry McMaster. Reportedly, over one million people evacuated, including some who normally wouldn't, such as hospitalized patients and the staff of military bases. While there were millions fleeing for safety, there were thousands that weren't given a chance to decide to leave. Those thousands of people that I am referring to are people that aren't given the chance to make the basic everyday decisions that people like you and me in the free world are able to make. They are humans just like you and me, yet subjected to horrific and unimaginable conditions. The Governor of South Carolina tweeted "The people of SC who you're responsible for include those who are incarcerated." However, prison officials decided not to evacuate several prisons, even though those prisons fell within the mandatory evacuation zone. This, however, is not the first time something like this has happened. Back in 2005, the same decision was made during Hurricane Katrina not to evacuate prisons in mandatory evacuation zones. Thousands of prisoners found themselves in chest-level sewer flood waters with no means of escape. They were locked inside gym-like facilities, abandoned, and left for days without food or water. Many of us have heard the horror stories of what goes on behind prison walls, and yet we turn a blind eye. We turn a blind eye because although we ourselves become fearful at just hearing the word prison, we cannot imagine living in a world without the prison system. After all, prison systems help keep communities free of crime, right? At least that's what's been reiterated throughout American history. After the abolition of slavery, former slave states created the Black Codes. The Black Codes were a mere revision of the Slave Codes. Although slavery was abolished, states began to criminalize a range of gestures and acts that were illegal only when the person committing them was black. Crimes included homelessness, absence from work, breach of job contracts, the possession of firearms, and insulting gestures or acts. In 1865, the ratification of the Thirteenth Amendment to the Constitution abolished slavery and involuntary servitude, but what most of us didn't catch is the exception. Slavery was abolished "except as punishment for a crime, whereof the party shall have been duly convicted." The Black Codes created crimes that only black people could be duly convicted of. Convict labor came into play shortly afterward, wherein convicts were sold as forced laborers to lumber camps, brickyards, railroads, farms, plantations, and dozens of corporations throughout the South. Prisoners became younger and blacker, and the length of their sentences longer.
Although convict leasing faded after the Civil Rights era that depleted the existence of Jim Crow, the number of black prisoners and the length of their sentences have increased. Why? Because the War on Drugs was introduced, which birthed mass incarceration, now known as the New Jim Crow.

“One may perceive in the penitentiary many reflections of chattel slavery as it was practiced in the South. Both institutions subordinated their subjects to the will of others. Like Southern slaves, prison inmates follow a daily routine specified by their superiors. Both institutions reduced their subjects to dependence on others for the supply of basic human services such as food and shelter. Both isolated their subjects from the general population by confining them to a fixed habitat. And both frequently coerced their subjects to work, often for longer hours and for less compensation than free laborers.” – Historian Adam Jay Hirsch

The War on Drugs, implemented under the Nixon administration, gave local, state and federal police officers access to military bases, intelligence, research, weapons and other equipment for drug prohibition. Thousands of homes were raided, usually with forced and unannounced entry by SWAT teams. Sometimes as little as a gram of cocaine or marijuana was found, but that didn't stop these militarily equipped officers from murdering innocent people, handcuffing children and grandparents, verbally abusing and traumatizing them, and hauling their loved ones off to prison. Research has proven that drug crime was in fact not increasing during the Reagan administration. In reality, this was a ploy by politicians to start mass incarceration and increase production for large corporations... oh, and not to mention put money, drugs and assets into the hands of police departments. The police departments got to keep 80% of the value of forfeitures seized in drug raids (cash, clothes, cars, homes, and drugs). The money was their incentive for knocking people off and redistributing the drugs back into those same communities. Mass incarceration heated up in the 1980s during the Reagan-era "Just Say No" campaign. The Reagans appealed to the emotions of many Americans by expressing that they wanted our communities to be safer and that drug crime was rapidly increasing, and in order to ensure that we were protected, they implemented "tough on crime" stances that forced longer prison sentences for petty crimes. Thereafter, former President Bill Clinton initiated the three-strikes rule, significantly increasing the prison sentences of persons convicted of a felony who had previously been convicted of two or more violent or serious crimes; mass incarceration boiled over and reached an all-time high. A massive project of prison construction was then initiated during this time. More than 2 million people are currently incarcerated. More than 800,000 of those people are black. The Washington-based Sentencing Project published a study of U.S. populations in prison and jail and found that one in four black men between the ages of twenty and twenty-nine was among those numbers. The truth of the matter is that more African American adults are under correctional control today than were enslaved in 1850. This is no mistake. In fact, the statistics quoted have likely been normalized by many. Most will say they are not surprised, because racial stereotypes and assumptions have clouded our judgment.
We have internalized the effects of racism so much that even though we know the majority of our black men are locked away in cages, we have excused it. It is common to turn on the TV and see images of black men in handcuffs on the evening news, in movies and in the media. We tell ourselves that they deserve it and that our communities are safer without them, and then we turn back around and question where the bulk of them are when there is a lack of representation from them at the polls. We ask where they are when we need mechanical and physical support, when our children are left fatherless, and when women are forced to pick up the slack. We turn a blind eye to the effects of the prison system but are left with the handicapped pieces that are dished back out to us after the prison system has no use or room for them anymore. They are given back to the streets after they have "served their sentence" or got off on "good behavior", yet they are given to us branded and labeled as "criminals" and "felons". We are talking about a system that is supposed to rehabilitate them, but the recidivism rates prove that the system actually works in the opposite direction. For our black men, the chances of becoming incarcerated again are greater than the chances of becoming employed. When we talk about the prison system, we don't mention that large corporations and businesses invest in prison systems the most because they sustain the most profit. We don't mention that these same businesses and corporations can buy labor from a prisoner for as little as $18.50 a month. We don't mention the fact that this system incarcerates the majority of black fathers for child support, yet their labor in prison does not contribute at all to the mothers in need. African American men are locked into cages and structured into a subordinate position. They are told when to eat, when to wash, when to come out of their cells, when to work and when to talk on the phones. It's easy to imagine that African American men in urban areas chose a life of crime, instead of accepting the real possibility that their lives are structured in a way that guarantees their early admission into a system from which they can never escape. Not only does the system lock them out, they are locked out by their communities and forced to accept inferior positions. They lose respect from all over and are feared by many, including the ones that claim to love them. The bulk of the love and respect they will receive comes from the very streets that got them locked away, from the very people that live and breathe the same realities. What is not discussed is the mental effect this "rehabilitation system" has on its average prisoner. Reportedly, there are more people suffering from mental illness in jails and prisons than there are in all psychiatric hospitals in the United States. So why are we subjecting them to slave-like conditions, and why is the Thirteenth Amendment of the United States Constitution endorsing it? This, my friend, is the "protections and immunities" that the land of the free and home of the brave guarantees. Citizens receive protection from being subjected to slavery, but not if they are prisoners. After reading this, I encourage you to foster conversations amongst yourselves, your families, friends, communities, and politicians. I urge you to analyze the way that you think about the prison system, prisoners, and ex-convicts. A heavy hand has been laid upon us. As a people, we feel ourselves to be not only deeply injured, but grossly misunderstood.
Our white countrymen do not know us. They are strangers to our character, ignorant of our capacity, oblivious to our history and progress, and are misinformed as to the principles and ideas that control and guide us, as a people. The great mass of American citizens estimates us as being a characterless and purposeless people; and hence we hold up our heads, if at all, against the withering influence of a nation's scorn and contempt. – Frederick Douglass, 1853
null
minipile
NaturalLanguage
mit
null
Q: Java Slick2D drawImage nullpointer Here's my code

```java
package game.src;

import java.util.ArrayList;
import org.lwjgl.opengl.Drawable;
import org.newdawn.slick.AppGameContainer;
import org.newdawn.slick.BasicGame;
import org.newdawn.slick.Color;
import org.newdawn.slick.GameContainer;
import org.newdawn.slick.Graphics;
import org.newdawn.slick.Image;
import org.newdawn.slick.SlickException;

public class Main extends BasicGame {

    public static final int WIDTH = 1920;
    public static final int HEIGHT = 1080;

    private Image grass = null; // Initializing the image

    public Main() {
        super("Wizard game");
    }

    public static void main(String[] arguments) {
        try {
            AppGameContainer app = new AppGameContainer(new Main());
            app.setDisplayMode(WIDTH, HEIGHT, false);
            app.setAlwaysRender(true);
            app.start();
        } catch (SlickException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void init(GameContainer container) throws SlickException {
        Image grass = new Image("res/grass.png"); // Location of picture (see link below)
    }

    @Override
    public void update(GameContainer container, int delta) throws SlickException {
    }

    public void render(GameContainer container, Graphics g) throws SlickException {
        g.drawImage(grass, 0, 0); // This is the crash location
    }
}
```

Here's the console error

```
Fri Mar 28 22:21:36 CDT 2014 INFO:Slick Build #237
Fri Mar 28 22:21:36 CDT 2014 INFO:LWJGL Version: 2.9.0
Fri Mar 28 22:21:36 CDT 2014 INFO:OriginalDisplayMode: 1920 x 1080 x 32 @60Hz
Fri Mar 28 22:21:36 CDT 2014 INFO:TargetDisplayMode: 1920 x 1080 x 0 @0Hz
Fri Mar 28 22:21:36 CDT 2014 INFO:Starting display 1920x1080
Fri Mar 28 22:21:36 CDT 2014 INFO:Use Java PNG Loader = true
WARNING: Found unknown Windows version: Windows 7
Attempting to use default windows plug-in.
Loading: net.java.games.input.DirectAndRawInputEnvironmentPlugin
Fri Mar 28 22:21:37 CDT 2014 INFO:Found 5 controllers
Fri Mar 28 22:21:37 CDT 2014 INFO:0 : USB Multimedia Keyboard
Fri Mar 28 22:21:37 CDT 2014 INFO:1 : USB Multimedia Keyboard
Fri Mar 28 22:21:37 CDT 2014 INFO:2 : USB Multimedia Keyboard
Fri Mar 28 22:21:37 CDT 2014 INFO:3 : Turtle Beach PX21 Headset
Fri Mar 28 22:21:37 CDT 2014 INFO:4 : USB AUDIO
Fri Mar 28 22:21:37 CDT 2014 ERROR:null
java.lang.NullPointerException <--- I'm guessing it can't find the picture so it thinks it's null.
at org.newdawn.slick.Graphics.drawImage(Graphics.java:1384)
at org.newdawn.slick.Graphics.drawImage(Graphics.java:1433)
at game.src.Main.render(Main.java:65)
at org.newdawn.slick.GameContainer.updateAndRender(GameContainer.java:688)
at org.newdawn.slick.AppGameContainer.gameLoop(AppGameContainer.java:411)
at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:321)
at game.src.Main.main(Main.java:41)
Fri Mar 28 22:21:37 CDT 2014 ERROR:Game.render() failure - check the game code. <-- No shit
org.newdawn.slick.SlickException: Game.render() failure - check the game code. <-- No shit
at org.newdawn.slick.GameContainer.updateAndRender(GameContainer.java:691)
at org.newdawn.slick.AppGameContainer.gameLoop(AppGameContainer.java:411)
at org.newdawn.slick.AppGameContainer.start(AppGameContainer.java:321)
at game.src.Main.main(Main.java:41)
```

I don't have a clue what is going on, and I'm pretty sure I installed LWJGL and Slick correctly. I suspect that for some reason the path is not working, making the picture null. Image of my pathing http://imgur.com/vLRGIcf

A: In your init method, you shadow the grass variable.
Try changing Image grass = new Image("res/grass.png"); to just grass = new Image("res/grass.png");, in order to set the field grass rather than create a new variable in the method. Because you never change the grass field from null, you get a NullPointerException when you try to draw it.
null
minipile
NaturalLanguage
mit
null
Bishopric of Minden

The Bishopric of Minden was a Roman Catholic diocese () and a state, the Prince-Bishopric of Minden (), of the Holy Roman Empire. Its capital was Minden, which is in modern-day Germany.

History

The diocese was founded by Charlemagne in 803, after he had conquered the Saxons. It was subordinate to the Archbishopric-Electorate of Cologne. It became the Prince-Bishopric of Minden () in 1180, when the Duchy of Saxony was dissolved. In the 16th century, the Protestant Reformation started to take hold in the state, under the influence of the Duchy of Brunswick-Lüneburg. Minden was occupied by Sweden in the Thirty Years' War and was secularized. The Peace of Westphalia of 1648 gave it to the Margraviate of Brandenburg as the Principality of Minden (). From 1719, the Minden prince-bishopric was administered by Brandenburg-Prussia together with the adjacent County of Ravensberg as Minden-Ravensberg. In 1807, it became part of the Kingdom of Westphalia. In 1814, it returned to Prussia and became part of the Province of Westphalia.

As of 1789, the principality had an area of . It was bordered by (clockwise from the north): an exclave of the Landgraviate of Hesse-Kassel (or Hesse-Cassel), the Electorate of Hanover, the County of Schaumburg-Lippe, another exclave of Hesse-Kassel, the Principality of Lippe, the County of Ravensberg, and the Prince-Bishopric of Osnabrück. Cities included Minden and Lübbecke.

As for the diocese of Minden, there was no longer a legitimate Roman Catholic pontificate after the Swedish takeover in 1648. The diocesan area, comprising, besides the temporal prince-bishopric, parts of Brunswick-Wolfenbüttel and all of Schaumburg-Lippe, became the first defunct diocese taken care of by the Apostolic Vicariate of the Nordic Missions, in 1667. Between 1709 and 1780 the former diocesan area formed part of the Vicariate Apostolic of Upper and Lower Saxony, before it was reincorporated into the Nordic Missions. In 1821 the former Minden diocesan area within the former prince-bishopric boundary became part of the Diocese of Paderborn, whereas the Brunswickian part became part of the Apostolic Vicariate of Anhalt and Brunswick in 1825, only to join the Diocese of Hildesheim in 1834. The Schaumburg-Lippe area stayed with the Nordic Missions until their dissolution in 1930, becoming first part of the Diocese of Osnabrück and then, as of 1965, of Hildesheim.

Famous bishops
Saint Erkanbert (803–813)
Saint Hardward (813–853)
Saint Theoderich (853–880)
Saint Thietmar (1185–1206)
Francis of Waldeck (1530–53)
Julius, Duke of Brunswick-Lüneburg (1553–54)
Henry Julius, Duke of Brunswick-Lüneburg (1582–85, Protestant)
Christian the Elder, Duke of Brunswick-Lüneburg (1599–1625, Protestant)
Francis of Wartenberg (1631–48)

Auxiliary bishops
Johann Christiani von Schleppegrell, O.S.A. (1428–1468)
Johannes Tideln, O.P. (1477–1501)
Johannes Gropengeter, O.S.A. (1499–1508)
Ludwig von Siegen, O.F.M. (1502–1508)
Heinrich von Hattingen, O. Carm. (1515–1519)

See also
Ostwestfalen-Lippe

References
At Meyers Konversationslexikon, 1888
At NRW-Geschichte.de (with map)

Category:Subdivisions of Prussia
Category:Roman Catholic dioceses in the Holy Roman Empire
Category:Former Roman Catholic dioceses in Germany
Category:Prince-bishoprics of the Holy Roman Empire in Germany
Category:Former states and territories of North Rhine-Westphalia
Category:1807 disestablishments in Germany
Category:Dioceses established in the 9th century
Category:Religious organizations established in the 800s
Category:Religious organizations disestablished in 1648
Category:Lower Rhenish-Westphalian Circle
Category:1180s establishments in the Holy Roman Empire
Category:1180 establishments in Europe
null
minipile
NaturalLanguage
mit
null
Twenty-four-year-old Gizzy Fowler, a trans woman of color, was found dead in a suburb of Nashville, Tennessee earlier this week. This is the 10th known murder of a trans woman of color in the United States during 2014.

According to local media reports, Fowler's body was found in the driveway of an empty home in Bordeaux, Tennessee. Neighbors told police that they had heard gunshots. Multiple outlets misgendered Fowler, publishing her old name, her criminal history and old photos.

"There is an undeniable epidemic of fatal violence against transgender and gender non-conforming women, specifically transgender women of color, in the United States -- these ten lives cut short cannot be ignored," said Osman Ahmed, the National Coalition of Anti-Violence Programs (NCAVP) Research and Education Coordinator at the New York City Anti-Violence Project. "We need immediate action on a national level to address the alarming violence against transgender women in the United States."

November 20 marks the Transgender Day of Remembrance (TDOR), which memorializes those who were killed due to anti-transgender hatred or prejudice. Prior to Fowler's death, the list for 2014 included more than 70 names from around the world. Nashville TDOR organizers plan to honor Fowler during their event at the Scarritt-Bennett Center in Nashville. Fowler's family has indicated plans to attend.

Advocates are concerned that the ongoing minimizing of the true identities of the victims of these crimes minimizes the magnitude of what NCAVP characterizes as an "epidemic" of violence.

Anyone with information that could assist with the investigation is asked to contact Crime Stoppers at 74-CRIME, or to submit an electronic tip by texting the word "CASH" along with the message to 274637 (CRIMES) or online at www.nashvillecrimestoppers.com. Those who contact Crime Stoppers can remain anonymous and qualify for a cash reward.
null
minipile
NaturalLanguage
mit
null
Quick Chat: Daniel Juncadella 05.10.2016 Dani, you were deprived of your greatest success so far in the DTM, namely a podium in Budapest, because your floor did not comply with the regulations. How do you feel about this ruling? Daniel Juncadella: As a driver, you're powerless in such a situation. Unfortunately, this sometimes happens in motor racing. Obviously, it's a great pity. That was my first podium finish in the DTM, and up to then, it had been a super day for me. So I'm not too disappointed, because I had a mega weekend in Budapest. The race on Sunday was absolutely fantastic and I did everything that was in my power. In the end, though, it just wasn't meant to be. All the same, how important was it for you to have at last got onto the podium, especially as the component in question was not relevant to performance? Daniel Juncadella: It was mega important to me. This year is my fourth season in the DTM, and I'd never had a podium finish before. Yet up to the point of disqualification, I'd managed to do that on a weekend when we were not all that competitive. From that perspective, it was a really cool experience to be standing up there on the podium. So for me personally, the disqualification does not change the fact that I made it onto the podium by virtue of my own performance. What are your feelings going into the last race of the season at Hockenheim? Daniel Juncadella: Very good feelings. I scored my best DTM result so far on the previous race weekend, and the three weekends before that also went well for me. My crew and I are extremely well prepared for the final race of the season. Hopefully, I can finish on the podium again and maybe even step up a few rungs.
null
minipile
NaturalLanguage
mit
null
deb http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu trusty main #GCC toolchain updates # deb-src http://ppa.launchpad.net/ubuntu-toolchain-r/test/ubuntu trusty main #GCC toolchain updates
null
minipile
NaturalLanguage
mit
null
--- author: - 'Kazunori Kohri,' - Hiroki Matsui bibliography: - 'renormalization.bib' title: 'Electroweak Vacuum Instability and Renormalized Higgs Field Vacuum Fluctuations in the Inflationary Universe' ---

Introduction {#sec:intro}
============

The recent measurements of the Higgs boson mass $m_{h}=125.09\ \pm \ 0.21\ ({\rm stat})\ \pm \ 0.11\ ({\rm syst})\ {\rm GeV}$ [@Aad:2015zhl; @Aad:2013wqa; @Chatrchyan:2013mxa; @Giardino:2013bma] and the top quark mass $m_{t}=173.34\pm 0.27\ ({\rm stat})\ {\rm GeV}$ [@2014arXiv1403.4427A] suggest that the current electroweak vacuum state of the Universe is not stable and will finally undergo a catastrophic vacuum decay through quantum tunneling [@Kobzarev:1974cp; @Coleman:1977py; @Callan:1977pt], although the cosmological timescale for the quantum tunneling decay is longer than the age of the Universe [@Degrassi:2012ry; @Isidori:2001bm; @Ellis:2009tp; @EliasMiro:2011aa]. In de Sitter space, especially in the inflationary Universe, however, the curved background enlarges the vacuum field fluctuations $\left<{ \delta \phi }^{ 2 } \right>$ in proportion to the Hubble scale $H^{2}$. Therefore, if the large inflationary vacuum fluctuations $\left<{ \delta \phi }^{ 2 } \right>$ of the Higgs field overcome the barrier of the standard model Higgs effective potential $V_{\rm eff}\left( \phi \right)$, they trigger a catastrophic vacuum transition to the negative Planck-energy true vacuum and cause an immediate collapse of the Universe [@Espinosa:2007qp; @Fairbairn:2014zia; @Lebedev:2012sy; @Kobakhidze:2013tn; @Enqvist:2013kaa; @Herranen:2014cua; @Kobakhidze:2014xda; @Kamada:2014ufa; @Enqvist:2014bua; @Hook:2014uia; @Kearney:2015vba; @Espinosa:2015qea; @Kohri:2016wof; @East:2016anr]. The vacuum field fluctuations $\left<{ \delta \phi }^{ 2 } \right>$, i.e., the vacuum expectation values, are formally given by $$\begin{aligned} \left< { \delta \phi^{2} } \right>=\int { { d }^{ 3 }k{ \left| \delta { \phi }_{ k }\left( \eta ,x \right) \right| }^{ 2 } } =\int _{ 0 }^{ \infty }{ \frac { dk }{ k } } { \Delta }_{ \delta \phi }^{2}\left(\eta ,k \right)\label{eq:hfhfhedg} ,\end{aligned}$$ where $ { \Delta }_{ \delta \phi }^{2}\left(\eta ,k \right)$ is defined as the power spectrum of the quantum vacuum fluctuations. As is well known in quantum field theory (QFT), the vacuum expectation values $\left<{ \delta \phi }^{ 2 } \right>$ have ultraviolet divergences (quadratic and logarithmic), and therefore a regularization is necessary. The quadratic divergence corresponds to the normal contribution from the fluctuations of the vacuum in Minkowski space, and it can be eliminated by standard renormalization in flat spacetime. The logarithmic divergence, however, appears as a consequence of the expansion of the Universe, and it contributes physically to the origin of the primordial perturbations and to the backreaction on the inflaton field. We usually eliminate this logarithmic ultraviolet divergence by simply neglecting the modes with $k > aH$. This prescription corresponds to the stochastic Fokker-Planck (FP) equation, which treats the inflationary field fluctuations originating from the long-wavelength modes, i.e., the IR parts [@Linde:1993xx]. Recent works [@Hook:2014uia; @Kearney:2015vba; @Espinosa:2015qea; @Kohri:2016wof; @East:2016anr] on the electroweak vacuum stability during inflation are based on the stochastic Fokker-Planck (FP) equation.
However, from the viewpoint of QFT, we must treat short wave modes as carefully as long wave modes [@Parker:2007ni; @Agullo:2011qg], and it is necessary to renormalize the vacuum expectation values $\left<{ \delta \phi }^{ 2 } \right>$ in curved space-time in order to obtain the exact physical contributions [@Vilenkin:1982wt; @Paz:1988mt]. Thus, in this paper, we revisit the electroweak vacuum instability from the legitimate perspective of QFT in curved space-time. In the first part of this paper, we derive the one-loop Higgs effective potential in curved space-time via the adiabatic expansion method. In the second part, we discuss the renormalized field vacuum fluctuations $\left<{ \delta \phi }^{ 2 } \right>$ in de Sitter space by using adiabatic regularization and point-splitting regularization. In the third part, we investigate the electroweak vacuum instability during or after inflation from the global and homogeneous Higgs field $\phi$, and from the renormalized vacuum fluctuations $\left<{ \delta \phi }^{ 2 } \right>_{\rm ren}$ corresponding to the local and inhomogeneous Higgs field fluctuations. The behavior of the homogeneous Higgs field $\phi$ is determined by the effective potential $V_{\rm eff}\left( \phi \right)$ in curved space-time, and the excursion of the homogeneous Higgs field $\phi$ to the negative Planck-energy vacuum state can terminate inflation and trigger a catastrophic collapse of the Universe. The local and inhomogeneous Higgs fluctuations described by $\left<{ \delta \phi }^{ 2 } \right>_{\rm ren}$ generate catastrophic Anti-de Sitter (AdS) domains or bubbles and finally cause a vacuum transition. In this work, we improve our previous work [@Kohri:2016wof], provide a comprehensive study of the phenomenon and reach new conclusions. In addition, we work within zero-temperature field theory, leaving the generalization to the finite-temperature case and the discussion of the thermal history of the metastable Universe for a forthcoming work. This paper is organized as follows. In Section \[sec:effective\] we derive the Higgs effective potential in curved space-time by using the adiabatic expansion method. In Section \[sec:adiabatic\] we discuss the problem of renormalization of the vacuum fluctuations in de Sitter space by using adiabatic regularization. In Section \[sec:point\] we consider the renormalized vacuum fluctuations via point-splitting regularization and show that the renormalized expectation values obtained via point-splitting regularization are consistent with the previous results via adiabatic regularization. In Section \[sec:electroweak\] we discuss the behavior of the global Higgs field $\phi$ and the vacuum transitions via the renormalized Higgs field vacuum fluctuations $\left<{ \delta \phi }^{ 2 } \right>_{\rm ren}$, and investigate the electroweak vacuum instability during or after inflation. Finally, in Section \[sec:Conclusion\] we draw the conclusions of our work. Higgs effective potential in curved space-time via adiabatic expansion method {#sec:effective} ============================================================================= The behavior of the homogeneous Higgs field $\phi$ on the entire Universe is determined by the effective potential $V_{\rm eff}\left( \phi \right)$ in curved space-time. 
In this section, we review the standard derivation of the one-loop effective potential in curved space-time via the adiabatic expansion method [@Maroto:2014oca; @Albareti:2016cvx; @Ringwald:1987ui; @Sinha:1988ci; @Huang:1993fk], and then show how the effective potential $V_{\rm eff}\left( \phi \right)$ in curved space-time differs from that in Minkowski space-time, where we use the notations and the conventions of Ref.[@Maroto:2014oca; @Albareti:2016cvx]. For simplicity, we consider a flat Robertson-Walker background not including metric perturbations. Thus, the Friedmann-Lemaitre-Robertson-Walker (FLRW) metric is given by $$g_{\mu\nu}={\rm diag}\left( -1,\frac { { a }^{ 2 }\left( t \right) }{ 1-K { r }^{ 2 } } ,{ a }^{ 2 }\left( t \right) { r }^{ 2 },{ a }^{ 2 }\left( t \right) { r }^{ 2 }\sin^{2} { \theta } \right)\label{eqsdffedg},$$ where $a = a\left(t\right)$ is the scale factor with the cosmic time $t$ and $K$ is the curvature parameter, whose positive, zero, and negative values correspond to closed, flat, and hyperbolic space-time, respectively. For the spatially flat Universe, we take $K=0$. Then, the scalar curvature is obtained as $$R=6\left[ { \left( \frac { \dot { a } }{ a } \right) }^{ 2 }+\left( \frac { \ddot { a } }{ a } \right) \right]=6\left(\frac{a''}{a^{3}}\right)\label{eq:dgdgfdgedg},$$ where the conformal time $\eta$ has been introduced, defined by $d\eta=dt/a$. In de Sitter space-time, the scale factor becomes $a\left(t\right) =e^{Ht}$, or $a\left(\eta\right) =-1/\left(H\eta\right)$ in conformal time, and the scalar curvature is $R=12H^{2}$. The action for the Higgs field with the potential $V\left( \phi \right)$ in curved space-time is given by $$S\left[ \phi \right] =-\int { { d }^{ 4 }x\sqrt { -g } \left( \frac { 1 }{ 2 } { g }^{ \mu \nu } { \nabla }_{ \mu }\phi {\nabla }_{ \nu }\phi +V\left( \phi \right) \right) } \label{eq:ddddddgedg},$$ where we assume the simple form for the Higgs potential $$V\left( \phi \right) =\frac{1}{2}\left(m^{2}+\xi R\right)\phi^{2}+\frac{\lambda}{4}\phi^{4} \label{eq:aaaadg}.$$ Thus, the Klein-Gordon equation for the Higgs field is given by $$\Box \phi\left(\eta ,x\right) +V'\left( \phi\left(\eta ,x\right) \right) =0 \label{eq:dsssssdg},$$ where $\Box$ denotes the generally covariant d'Alembertian operator, $\Box =g^{\mu\nu}{ \nabla }_{ \mu }{ \nabla }_{ \nu }=1/\sqrt { -g } { \partial }_{ \mu }\left( \sqrt { -g } { \partial }^{ \mu } \right) $, and $\xi$ is the non-minimal Higgs-gravity coupling constant. There are two popular choices for $\xi$, i.e., minimal coupling ($\xi=0$) and conformal coupling ($\xi=1/6$), the latter being conformally invariant in the massless limit. However, the non-minimal Higgs-gravity coupling $\xi$ is inevitably generated through the loop corrections. In quantum field theory, we treat the Higgs field $\phi \left(\eta ,x\right)$ as an operator acting on the states. We assume that the vacuum expectation value of the Higgs field is $\phi\left(\eta \right)=\left< 0 \right| { \phi \left(\eta ,x\right) }\left| 0 \right>$. In this case, the Higgs field $\phi \left(\eta ,x\right)$ can be decomposed into a classical component and a quantum component as $$\phi \left(\eta ,x\right)=\phi \left(\eta \right)+\delta \phi \left(\eta ,x \right) \label{eq:kkkkgedg},$$ where $\left< 0 \right| { \delta\phi \left(\eta ,x\right) }\left| 0 \right>=0$. 
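As a quick symbolic cross-check of Eq. (\[eq:dgdgfdgedg\]) — our own sketch, not part of the derivation above — one can verify that the de Sitter scale factor indeed gives $R=12H^{2}$ in both time coordinates:

```python
import sympy as sp

t, eta, H = sp.symbols('t eta H', positive=True)

# Cosmic time: R = 6 [ (adot/a)^2 + addot/a ] with a = exp(H t)
a_t = sp.exp(H * t)
R_cosmic = sp.simplify(6 * ((sp.diff(a_t, t) / a_t)**2 + sp.diff(a_t, t, 2) / a_t))

# Conformal time: R = 6 a''/a^3 with a = -1/(H eta)  (eta < 0 in de Sitter)
a_eta = -1 / (H * eta)
R_conf = sp.simplify(6 * sp.diff(a_eta, eta, 2) / a_eta**3)

print(R_cosmic, R_conf)   # both -> 12*H**2
```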
In the one-loop approximation, we can obtain the following equations $$\begin{aligned} &&\Box \phi +V'\left( \phi \right) +\frac { 1 }{ 2 } V'''\left( \phi \right) \left< \delta { \phi }^{ 2 } \right> =0 \label{eq:lklkedg},\\ &&\Box \delta \phi +V''\left( \phi \right) \delta \phi =0 \label{eq:dppppg},\end{aligned}$$ where the effective mass term of the quantum field $\delta \phi$ is written as $$V''\left( \phi \right)={ m }^{ 2 }+3\lambda \phi^{2}+\xi R \label{eq:dyyytyg}.$$ The quantum field $\delta \phi$ is decomposed into each $k$ mode, $$\delta \phi \left( \eta ,x \right) =\int { { d }^{ 3 }k\left( { a }_{ k }\delta { \phi }_{ k }\left( \eta ,x \right) +{ a }_{ k }^{ \dagger }\delta { \phi }_{ k }^{ * }\left( \eta ,x \right) \right) } \label{eq:ddfkkfledg},$$ where $${ \delta \phi }_{ k }\left( \eta ,x \right) =\frac { { e }^{ ik\cdot x } }{ { \left( 2\pi \right) }^{ 3/2 } \sqrt { C\left( \eta \right) } } \delta { \chi }_{ k }\left( \eta \right) \label{eq:slkdlkgdg},$$ with $C\left(\eta \right)=a^{2}\left(\eta \right)$. Thus, the Higgs field vacuum fluctuations $\left<{ \delta \phi }^{ 2 } \right>$, i.e., the field expectation values, can be written as $$\begin{aligned} \left< 0 \right| { \delta \phi^{2} }\left| 0 \right>&=&\int { { d }^{ 3 }k{ \left| \delta { \phi }_{ k }\left( \eta ,x \right) \right| }^{ 2 } } \label{eq:ddkg;ergedg},\\ &=&\frac { 1 }{ 2{ \pi }^{ 2 }C\left( \eta \right) } \int _{ 0 }^{ \infty } { dk { k }^{ 2 }{ \left| \delta { \chi }_{ k } \right| }^{ 2 } } \label{eq:xkkgdddgedg},\end{aligned}$$ where $\left<{ \delta \phi }^{ 2 } \right>$ has ultraviolet divergences (quadratic and logarithmic) and requires a regularization, e.g. cut-off regularization or dimensional regularization. From Eq. (\[eq:dppppg\]), the Klein-Gordon equation for the quantum field $\delta \chi$ is written as $${ \delta \chi}''_{ k }+{ \Omega }_{ k }^{ 2 }\left( \eta \right) { \delta \chi }_{ k }=0 \label{eq:dlkrlekg}.$$ Here, we use the adiabatic (WKB) approximation to obtain the mode function. At the lowest order of the approximation, the time-dependent mode function is given by $$\delta{ \chi }_{ k }=\frac { 1 }{ \sqrt { 2{\Omega_{k}\left( \eta \right) }} } \exp\left( -i\int { \Omega_{k} \left( \eta \right) d\eta } \right) \label{eq:ddkreitjdg},$$ where ${ \Omega }_{ k }^{ 2 }\left( \eta \right) ={ k }^{ 2 }+C\left( \eta \right) \left( { m }^{ 2 } +3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right) $. More precisely, we must consider the higher-order approximations and include the exact effects of particle production in the one-loop effective potential (see Ref.[@Ringwald:1987ui] for the details). However, we can simply include such effects by adding the backreaction term of the vacuum field fluctuations, i.e., $\left<{ \delta \phi }^{ 2 } \right>_{\rm ren}$. Furthermore, we comment on the condition of the adiabatic (WKB) approximation (${ \Omega }_{ k }^{ 2 }>0$ and $\left| { \Omega ' }_{ k }/{ \Omega }_{ k }^{ 2 } \right| \ll 1$). This condition breaks down during inflation for a massless scalar field, or in the parametric resonance of preheating (see, e.g. Ref.[@Kofman:1997yn]). In these cases, the IR parts at $k<aH$ break the WKB approximation and we can expect enormous particle production. However, the UV parts at $k>aH$ are not affected by the cosmological dynamics of the Universe, and the effective potential generally originates from the UV parts, i.e., short wave modes. 
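To illustrate the WKB criterion just stated, here is a small numerical sketch (ours; the parameter values are arbitrary) that integrates Eq. (\[eq:dlkrlekg\]) in a regime where $|\Omega'_{k}/\Omega_{k}^{2}|\ll1$ and compares the exact mode amplitude with the lowest-order WKB result $|\delta\chi_{k}|^{2}\simeq1/2\Omega_{k}$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Exact mode equation chi'' + Omega_k^2 chi = 0 vs the lowest-order WKB
# amplitude |chi|^2 = 1/(2 Omega_k), in an adiabatic regime (k|eta| >> 1).
H, M, k = 1.0, 1.0, 50.0

def Omega(eta):                       # Omega_k = sqrt(k^2 + M^2 a^2), a = -1/(H eta)
    return np.sqrt(k**2 + (M / (H * eta))**2)

eta0, eta1 = -10.0, -1.0
chi0 = 1.0 / np.sqrt(2.0 * Omega(eta0))       # WKB initial data at eta0
sol = solve_ivp(lambda eta, y: [y[1], -Omega(eta)**2 * y[0]],
                (eta0, eta1), [chi0 + 0j, -1j * Omega(eta0) * chi0],
                rtol=1e-10, atol=1e-12, dense_output=True)

etas = np.linspace(eta0, eta1, 5)
exact = np.abs(sol.sol(etas)[0])**2
wkb = 1.0 / (2.0 * Omega(etas))
print(np.max(np.abs(exact / wkb - 1.0)))      # small: the WKB mode tracks well
```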
Therefore, just as in the radiation-dominated and the matter-dominated eras, we can adopt the adiabatic expansion method for the effective potential in de Sitter space by taking into account the IR backreaction effects (see Ref.[@Ringwald:1987ui] for the details). As a consequence, we must add the backreaction term, i.e., the renormalized field vacuum fluctuations $\left<{ \delta \phi }^{ 2 } \right>_{\rm ren}$ (see Section \[sec:adiabatic\] for the details) to the effective potential in curved space-time. Then, we write the field expectation value as follows $$\left< 0 \right| { \delta \phi^{2} }\left| 0 \right> =\frac { 1 }{ 4{ \pi }^{ 2 }a^{2} } \int_{ 0 }^{ \infty }{ dk\frac { { k }^{ 2 } }{ \sqrt { { k }^{ 2 }+ \left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right) a^{2} } } } \label{eq:dklgkhldg}.$$ The one-loop contribution to the Higgs effective potential is given by $$\begin{aligned} &&\frac { 1 }{ 2 } V'''\left( \phi \right) \left< 0 \right| { \delta \phi^{2} }\left| 0 \right> \nonumber \\ &=&\frac{d}{d\phi}\left(\frac { 1 }{ 4{ \pi }^{ 2 }a^{4} } \int _{ 0 }^{ \Lambda }{ dk\ { k }^{ 2 }\sqrt { { k }^{ 2 }+\left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right) a^{2} } }\right), \nonumber \\ &=&\frac{dV_{1}\left( \phi \right)}{d\phi} \label{eq:dskjksjg},\end{aligned}$$ where we take an ultraviolet cut-off $\Lambda$ in order to regularize the quadratic and logarithmic divergences. For convenience, we rewrite the classical Higgs field equation as follows $$\begin{aligned} \Box \phi +V'\left( \phi \right)+ V'_{1}\left( \phi \right) =0 \label{eq:dklhkgkdg}.\end{aligned}$$ In order to obtain the effective potential, we exactly calculate the integral $$\begin{aligned} V_{1}\left( \phi \right) &=&\frac { 1 }{ 4{ \pi }^{ 2 }a^{4} } \int _{ 0 }^{ \Lambda } { dk\ { k }^{ 2 }\sqrt { { k }^{ 2 }+\left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right) a^{2} } } ,\\ &=& \frac { 1 }{ 32{ \pi }^{ 2 }a^{4} } \Biggl[ \left( \Lambda \left( 2{ \Lambda }^{ 2 }+\left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right) a^{2} \right) \right) \nonumber \\ && \times \sqrt { { \Lambda }^{ 2 }+\left( { m }^{ 2 }+3\lambda \phi^{2} +\left( \xi -1/6 \right) R \right) a^{2} } +\left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right)^{2}a^{4} \nonumber \\ && \times \ln { \left( \frac {\left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right)^{1/2}a }{ \Lambda +\sqrt { { \Lambda }^{ 2 }+\left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right) a^{2} } } \right) } \Biggr] \label{ezzddgedg}.\end{aligned}$$ In the limit $\Lambda \rightarrow \infty$, we can obtain the following expression $$\begin{aligned} V_{1}\left( \phi \right) &=& \frac { { \Lambda }^{ 4 } }{ 16{ \pi }^{ 2 }{ a }^{ 4 } } +\frac { { \left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right)\Lambda }^{ 2 } }{ 16{ \pi }^{ 2 }{ a }^{ 2 } } -\frac { { \left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right)}^{ 2 } }{ 64{ \pi }^{ 2 } } \ln { \left( \frac { { \Lambda }^{ 2 } }{ { \mu }^{ 2 } } \right) } \nonumber \\&& +\frac { { \left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right) }^{ 2 } }{ 64{ \pi }^{ 2 } } \ln { \left( \frac { \left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right){ a }^{ 2 } }{ { \mu }^{ 2 } } \right) } \nonumber \\ && +\frac { { \left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right) }^{ 2 } }{ 64{ \pi }^{ 2 } }C_{i} \label{eq:ddjkjkjdf},\end{aligned}$$ where we introduced 
the renormalization scale $\mu$ and the constant $C_{i}= \left(1/2-2\ln { 2 }\right)$, which depends on the regularization method and the renormalization scheme. Here, we focus on the divergent contribution to the effective potential, which is given by $$\begin{aligned} V_{\Lambda}\left( \phi \right) &&= \frac { { \Lambda }^{ 4 } }{ 16{ \pi }^{ 2 }{ a }^{ 4 } } +\frac { { \left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right)\Lambda }^{ 2 } }{ 16{ \pi }^{ 2 }{ a }^{ 2 } } -\frac { { \left( { m }^{ 2 }+\left( \xi -1/6 \right) R \right)}^{ 2 } }{ 64{ \pi }^{ 2 } } \ln { \left( \frac { { \Lambda }^{ 2 } }{ { \mu }^{ 2 } } \right) } \nonumber \\ && +\left( \frac { 3\lambda { \Lambda }^{ 2 } }{ 16{ \pi }^{ 2 }a^{2}} -\frac { 6\lambda{ \left( { m }^{ 2 }+\left( \xi -1/6 \right) R \right)} }{ 64{ \pi }^{ 2 } } \ln { \left( \frac { { \Lambda }^{ 2 } }{ { \mu }^{ 2 } } \right) } \right) { \phi }^{ 2 }-\frac { 9{ \lambda }^{ 2 } }{ 64{ \pi }^{ 2 } } \ln { \left( \frac { { \Lambda }^{ 2 } }{ { \mu }^{ 2 } } \right) { \phi }^{ 4 } } \label{eq:asmsmedg}.\end{aligned}$$ For convenience, we replace $\Lambda \rightarrow a\Lambda$, $\mu/a \rightarrow \mu$ and the divergent contribution can be written as $$\begin{aligned} V_{\Lambda }\left( \phi \right) &&= \frac { { \Lambda }^{ 4 } }{ 16{ \pi }^{ 2 } } +\frac { { \left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right)\Lambda }^{ 2 } }{ 16{ \pi }^{ 2 } } -\frac { { \left( { m }^{ 2 }+\left( \xi -1/6 \right) R \right)}^{ 2 } }{ 64{ \pi }^{ 2 } } \ln { \left( \frac {{ \Lambda }^{ 2 } }{ { \mu }^{ 2 } } \right) } \nonumber \\ && +\left( \frac { 3\lambda { \Lambda }^{ 2 } }{ 16{ \pi }^{ 2 } } -\frac { 6\lambda{ \left( { m }^{ 2 }+\left( \xi -1/6 \right) R \right)}}{ 64{ \pi }^{ 2 } } \ln { \left( \frac {{ \Lambda }^{ 2 } }{ { \mu }^{ 2 } } \right) } \right) { \phi }^{ 2 } -\frac { 9{ \lambda }^{ 2 } }{ 64{ \pi }^{ 2 } } \ln { \left( \frac {{ \Lambda }^{ 2 } }{ { \mu }^{ 2 } } \right) { \phi }^{ 4 } } \label{eq:dklkfdg}.\end{aligned}$$ We can obviously remove all the divergences (quartic, quadratic and logarithmic) by absorbing them into the counter-terms as follows: $$V_{\rm eff}\left( \phi \right) =V\left( \phi \right)+ V_{1}\left( \phi \right) + { \delta }_{ cc}+\frac { 1 }{ 2 } { \delta }_{ m }{ \phi }^{ 2 }+\frac { 1 }{ 2 } { \delta }_{\xi }{ \phi }^{ 2 } +\frac { 1 }{ 4 } { \delta }_{ \lambda }{ \phi }^{ 4 } \label{eq:ddkrekkdg}.$$ We obtain the Higgs effective potential in curved space-time as follows: $$\begin{aligned} V_{\rm eff}\left( \phi \right) &&=\frac{1}{2}m^{2}\phi^{2}+\frac{1}{2}\xi R\phi^{2} +\frac{\lambda}{4}\phi^{4} \label{eq:jjfkgedg}\\ &&+\frac { { \left( { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R \right) }^{ 2 } }{ 64{ \pi }^{ 2 } } \left[ \ln { \left( \frac { { m }^{ 2 }+3\lambda \phi^{2}+\left( \xi -1/6 \right) R}{ { \mu }^{ 2 } } \right) } -C_{i} \right] \nonumber,\end{aligned}$$ which is consistent with the results obtained by using the heat kernel method [@PhysRevD.31.953; @PhysRevD.31.2439]. The effective potential in a curved background has been thoroughly investigated in the literature [@Herranen:2014cua; @Toms:1983qr; @Buchbinder:1985js; @Hu:1984js; @Balakrishnan:1991pm; @Muta:1991mw; @Kirsten:1993jn; @Elizalde:1993ee; @Elizalde:1993ew; @Elizalde:1993qh; @Elizalde:1994im; @Elizalde:1994ds; @Gorbar:2002pw; @Gorbar:2003yt; @Gorbar:2003yp; @Czerwinska:2015xwa], and there are a variety of ways to derive the effective potential in curved spacetime. 
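As a numerical sanity check of this $\Lambda\rightarrow\infty$ expansion (our own sketch; the mass parameter is an arbitrary test value and we set $a=1$), one can compare the exact integral of Eq. (\[ezzddgedg\]) with the asymptotic form of Eq. (\[eq:ddjkjkjdf\]), which confirms the constant $C_{i}=1/2-2\ln2$:

```python
import numpy as np
from scipy.integrate import quad

# Exact one-loop integral vs its large-Lambda expansion (a = 1 for brevity).
M2 = 1.7                    # m^2 + 3*lam*phi^2 + (xi - 1/6)*R, test value
mu, Ci = 1.0, 0.5 - 2.0 * np.log(2.0)

def V1_exact(Lam):
    val, _ = quad(lambda k: k**2 * np.sqrt(k**2 + M2), 0.0, Lam,
                  epsabs=0.0, epsrel=1e-12)
    return val / (4.0 * np.pi**2)

def V1_asym(Lam):
    return (Lam**4 + M2 * Lam**2) / (16.0 * np.pi**2) \
        + M2**2 / (64.0 * np.pi**2) * (np.log(M2 / mu**2)
                                       - np.log(Lam**2 / mu**2) + Ci)

for Lam in (5.0, 10.0, 30.0):
    print(Lam, V1_exact(Lam) - V1_asym(Lam))   # shrinks like 1/Lambda^2
```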
Now, we can read off the scale dependence of ${ m }^{ 2 }$, $\xi$ and $\lambda$ from Eq. (\[eq:jjfkgedg\]), and the $\beta$ functions are given by $$\begin{aligned} \beta_{\lambda } &\equiv& \frac { d\lambda }{ d\ln { \mu } } =\frac { 18{ \lambda }^{ 2 } }{ { \left( 4\pi \right) }^{ 2 } } \label{eq:dkldgedg},\\ \beta_{\xi } &\equiv& \frac { d\xi }{ d\ln { \mu } } =\frac { 6{ \lambda } }{ { \left( 4\pi \right) }^{ 2 } }\left( \xi -1/6 \right) \label{eq:dgeddlkgg}, \\ \beta_{ m^{2} }&\equiv& \frac { d{ { m }^{ 2 } } }{ d\ln { \mu } } =\frac { 6{ \lambda }m^{2} }{ { \left( 4\pi \right) }^{ 2 } } \label{eq:ddakdlgg}.\end{aligned}$$ Finally, we can rewrite the classical Higgs field equation by using the effective potential in curved space-time as follows: $$\begin{aligned} \Box \phi + V'_{\rm eff}\left( \phi \right) =0 \label{eq:ddlreredg}.\end{aligned}$$ The effective potential given by Eq. (\[eq:jjfkgedg\]) does not include the exact particle production effects, and we must consider the backreaction term from the renormalized field vacuum fluctuations $\left<{ \delta \phi }^{ 2 } \right>_{\rm ren}$ to improve the effective potential in curved space-time. Renormalized vacuum fluctuations via adiabatic regularization {#sec:adiabatic} ============================================================= The main difficulty with the vacuum instability in de Sitter space comes from the vacuum field fluctuations on the dynamical background. The renormalized vacuum fluctuations $\left< { \delta \phi }^{ 2 } \right>_{\rm ren}$ originate from the dynamical particle production effects, and they correspond to the local and inhomogeneous Higgs field fluctuations. In this section, we discuss the renormalized vacuum fluctuations $\left< { \delta \phi }^{ 2 } \right>_{\rm ren}$ by using adiabatic regularization. Adiabatic regularization [@0305-4470-13-4-022; @PhysRevD.10.3905; @FULLING1974176; @PhysRevD.9.341; @Haro:2010zz; @Haro:2010mx] is an extremely powerful method to remove the ultraviolet divergences from the field expectation value $\left< { \delta \phi }^{ 2 } \right>$ and to obtain the renormalized finite value, which carries the physical contribution. For convenience, we write the equation of the field $\delta\chi$ in the conformal time coordinate $\eta$ as $${ \delta \chi}''_{ k }+{ \Omega }_{ k }^{ 2 }\left( \eta \right) { \delta \chi }_{ k }=0 ,$$ where ${ \Omega }_{ k }^{ 2 }\left( \eta \right) ={ \omega }_{ k }^{ 2 }\left( \eta \right) + C\left( \eta \right) \left( \xi -1/6 \right) R $ and ${ \omega }_{ k }^{ 2 }\left( \eta \right) ={ k }^{ 2 }+C\left( \eta \right) \left( { m }^{ 2 }+3\lambda \phi^{2} \right) $. More precisely, the self-coupling term $3\lambda \phi^{2}$ includes the backreaction effect, i.e., $3\lambda\phi^{2}+3\lambda\left< { \delta \phi }^{ 2 } \right>_{\rm ren}$, where we shift the Higgs field $\phi^{2}\rightarrow \phi^{2}+\left< { \delta \phi }^{ 2 } \right>_{\rm ren} $. Therefore, the vacuum field fluctuations on the dynamical background become complicated and intricate in contrast with the static space-time case. For simplicity, we neglect the self-coupling term $3\lambda \phi^{2}$ in Section \[sec:adiabatic\] and Section \[sec:point\]. 
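The one-loop runnings in Eqs. (\[eq:dkldgedg\])-(\[eq:ddakdlgg\]) integrate in closed form, $\lambda(\mu)=\lambda_{0}/\left(1-18\lambda_{0}\ln(\mu/\mu_{0})/(4\pi)^{2}\right)$ and $\xi(\mu)-1/6=(\xi_{0}-1/6)\left(\lambda(\mu)/\lambda_{0}\right)^{1/3}$; the short sketch below (ours, with arbitrary initial values for this scalar-only toy model) checks a numerical integration against these solutions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-loop running: d(lam)/dt = 18 lam^2/(4pi)^2, d(xi)/dt = 6 lam (xi-1/6)/(4pi)^2,
# with t = ln(mu/mu0).
lam0, xi0, t_end = 0.1, 0.0, 40.0

sol = solve_ivp(lambda t, y: [18*y[0]**2/(4*np.pi)**2,
                              6*y[0]*(y[1] - 1/6)/(4*np.pi)**2],
                (0.0, t_end), [lam0, xi0], rtol=1e-10)

lam_exact = lam0 / (1 - 18*lam0*t_end/(4*np.pi)**2)
xi_exact = 1/6 + (xi0 - 1/6) * (lam_exact/lam0)**(1/3)
print(sol.y[0, -1], lam_exact)   # quartic coupling
print(sol.y[1, -1], xi_exact)    # xi is driven away from the fixed point 1/6
```

Since $\xi-1/6$ scales as $\lambda^{1/3}$, the conformal value $\xi=1/6$ is a fixed point of the running; this closed form is the scalar-only analogue of the factor $F(\mu)$ used in Section \[sec:electroweak\], where the full standard model contributions enter instead.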
The Wronskian condition is given as $${ \delta\chi }_{ k }{ \delta\chi }_{ k }^{ *\prime }-{ \delta\chi }_{ k }^{ * }{ \delta\chi }_{ k }^{\prime}= i\label{eq:dddrrrg},$$ which ensures the canonical commutation relations for the field operator $\delta\chi$ below $$\begin{aligned} \bigl[ { a }_{ k },{ a }_{ k' } \bigr] &=&\bigl[{ a }_{ k }^{ \dagger },{ a }_{ k' }^{ \dagger } \bigr]=0 \label{eq:dfphipedg},\\ \bigl[{ a }_{ k },{ a }_{ k' }^{ \dagger } \bigr] &=&\delta \left( k-k' \right) \label{eq:oeuoegedg}.\end{aligned}$$ The adiabatic vacuum $\left| 0 \right>_{\rm A} $ is the vacuum state which is annihilated by all the operators ${ a }_{ k }$, and it is defined by choosing $\delta\chi_{k}\left(\eta\right)$ to be a positive-frequency WKB mode. The adiabatic (WKB) approximation to the time-dependent mode function is written as $${\delta \chi }_{ k }=\frac { 1 }{ \sqrt { 2{ W }_{ k }\left( \eta \right) } } \exp\left( -i\int { { W }_{ k }\left( \eta \right) d\eta } \right) \label{eq:jgskedg},$$ where $${ W }_{ k }^{ 2 }={ \Omega }_{ k }^{ 2 }-\frac { 1 }{ 2 } \frac { { W }''_{ k } }{ { W }_{ k } } +\frac { 3 }{ 4 } \frac { { \left( { W }'_{ k } \right) }^{ 2 } }{ { W }_{ k }^{ 2 } } \label{eq:sklsdkldg}.$$ We can obtain the WKB solution by solving Eq. (\[eq:sklsdkldg\]) with an iterative procedure, and the lowest-order WKB solution ${ W }^{0}_{ k }$ is given by $$\left({ W }^{0}_{ k }\right)^{2}={ \Omega }_{ k }^{2}\label{eq:flkldg}.$$ The first-order WKB solution ${ W }^{1}_{ k }$ is given by $$\left({ W }^{1}_{ k }\right)^{2}={ \Omega }_{ k }^{2}-\frac { 1 }{ 2 } \frac { \left({ W}^{0}_{ k }\right)'' }{{ W }^{0}_{ k } } +\frac { 3 }{ 4 } \frac {\left({{ W}^{0}_{ k }}'\right)^{2} }{ \left({ W}^{0}_{ k }\right)^{2} } \label{eq:skdl;lgedg}.$$ At higher orders, in the nearly conformal case $\xi \simeq1/6$, we can obtain the following expression $$\begin{aligned} { W }_{ k }&\simeq&{ \omega }_{ k }+\frac { 3\left( \xi -1/6 \right) }{ 4{ \omega }_{ k } } \left( 2D'+{ D }^{ 2 } \right) -\frac { { m }^{ 2 }C }{ 8{ \omega }_{ k }^{ 3 } } \left( D'+{ D }^{ 2 } \right) +\frac { 5{ m }^{ 4 }{ C }^{ 2 }{ D }^{ 2 } }{ 32{ \omega }_{ k }^{ 5 } } \nonumber \\ &&+\frac { { m }^{ 2 }C }{ 32{ \omega }_{ k }^{ 5 } } \left( D'''+4D'D+3{ D' }^{ 2 }+6D'D^{ 2 }+D^{ 4 } \right) \nonumber \\ &&-\frac { { m }^{ 4 }{ C }^{ 2 } }{ 128{ \omega }_{ k }^{ 7 } } \left( 28D''D+19{ D' }^{ 2 }+122D'D^{ 2 }+47D^{ 4 } \right) \nonumber \\ &&+\frac { { 221m }^{ 6 }{ C }^{ 3 } }{ 256{ \omega }_{ k }^{ 9 } } \left( D'D^{ 2 }+D^{ 4 } \right) -\frac { 1105{ m }^{ 8 }{ C }^{ 4 }{ D }^{ 4 } }{ 2048{ \omega }_{ k }^{ 11 } } \nonumber \\ &&-\frac { \left( \xi -1/6 \right) }{ 8{ \omega }_{ k }^{ 3 } } \left( 3D'''+3D''D+3{ D' }^{ 2 } \right) \nonumber \\ &&+\left( \xi -1/6 \right) \frac { { m }^{ 2 }C }{ 32{ \omega }_{ k }^{ 5 } } \left( 30D''D+18{ D' }^{ 2 }+57D'D^{ 2 }+9D^{ 4 } \right) \nonumber \\ &&-\left( \xi -1/6 \right) \frac { { 75m }^{ 4 }{ C }^{ 2 } }{ 128{ \omega }_{ k }^{ 7 } } \left( 2D'D^{ 2 }+D^{ 4 } \right) \nonumber \\ &&-\frac { { \left( \xi -1/6 \right) }^{ 2 } }{ 32{ \omega }_{ k }^{ 3 } } \left( 36{ D' }^{ 2 }+36D'D^{ 2 }+9D^{ 4 } \right) \label{eq:oekgkf},\end{aligned}$$ where $D=C'/C$. 
The vacuum expectation values ${ \left< { \delta \phi }^{ 2 } \right> }_{ \rm ad }$ computed with the adiabatic vacuum state $\left| 0 \right>_{\rm A} $ can be written as $$\begin{aligned} { \left< { \delta \phi }^{ 2 } \right> }_{ \rm ad } ={ { _{ A }\left< { 0 }|{{ \delta \phi }^{ 2 } }|{ 0 } \right>_{A } } } =\frac { 1 }{ 4{ \pi }^{ 2 }C\left( \eta \right) } \int _{ 0 }^{ \Lambda }{ \frac { { k }^{ 2 } }{ { W }_{ k } } dk } \label{eq:dlajshegedg}.\end{aligned}$$ Adiabatic regularization is not a method of regularizing divergent integrals in the way that cut-off regularization or dimensional regularization is. Thus, Eq. (\[eq:dlajshegedg\]) still includes the divergences, which would need to be removed by such regularizations. However, the divergences in the exact expression ${ \left< { \delta \phi }^{ 2 } \right> }$, which come from the large $k$ modes, are the same as the divergences in the adiabatic expression ${ \left< { \delta \phi }^{ 2 } \right>_{\rm ad} }$. Thus, we can remove the divergences by subtracting the adiabatic expression ${ \left< { \delta \phi }^{ 2 } \right>_{\rm ad} }$ from the original expression ${ \left< { \delta \phi }^{ 2 } \right> }$ as follows: $$\begin{aligned} { \left< { \delta \phi }^{ 2 } \right> }_{ \rm ren } &=&{ \left< { \delta \phi }^{ 2 } \right> }-{ \left< { \delta \phi }^{ 2 } \right> }_{ \rm ad },\\ &=&\frac { 1 }{ 4{ \pi }^{ 2 }C\left( \eta \right) } \Biggl[ \int _{ 0 }^{ \Lambda }{ 2k^{2}{ \left| \delta { \chi }_{ k } \right| }^{ 2 }dk } -\int _{ 0 }^{ \Lambda }{ \frac { { k }^{ 2 } }{ { W }_{ k } } dk } \Biggr] \label{eq:dkhkkegedg}. \end{aligned}$$ This method has been shown to be equivalent to the point-splitting regularization [@Birrell513; @Anderson:1987yt], which has been used in a large number of space-time backgrounds. Massless minimally coupled cases {#sec:massless} -------------------------------- In this subsection, we discuss the vacuum expectation values in the massless minimally coupled case ($m = 0$ and $\xi = 0$). In this case, the power spectrum on super-horizon scales is given by $${ \Delta }_{ \delta \phi }^{2}\left( k \right) ={ \left( \frac { H }{ 2\pi } \right) }^{ 2 }\label{eq:hlkhggedg},$$ where $ { \Delta }_{ \delta \phi }^{2}\left( k \right) ={ k }^{ 3 }{ \left| \delta { \phi }_{ k } \right| }^{ 2 }/2{ \pi }^{ 2 }$; the scale invariance originates from the fact that the inflationary quantum field fluctuations freeze on super-horizon scales. If we take $aH$ as the UV cut-off and $H$ as the IR cut-off, the vacuum expectation values $\left< { \delta \phi }^{ 2 } \right>$ are simply given by $$\left< { \delta \phi }^{ 2 } \right>=\int _{ H }^{ aH }{ \frac { dk }{k } } { \Delta }_{ \delta \phi }^{2}\left( k \right) =\frac { { H }^{ 3 } }{ { 4{ \pi }^{ 2 } } } t \label{eq:rjktjgedg}.$$ Here, we review the renormalization of the vacuum expectation values $\left< { \delta \phi }^{ 2 } \right>$ with $m = 0$ and $\xi = 0$ by using the adiabatic regularization, where we use the results of Ref.[@Haro:2010zz; @Haro:2010mx], and show that Eq. (\[eq:rjktjgedg\]) is consistent with the results obtained via the adiabatic regularization. 
In this case, the mode function $\delta { \chi }_{ k }\left( \eta \right) $ can be exactly given by $$\delta { \chi }_{ k }\left( \eta \right) ={ a }_{ k }\delta { \varphi }_{ k }\left( \eta \right) +{ b }_{ k }\delta { \varphi }^{*}_{ k }\left( \eta \right) \label{eq:ohjijgedg},$$ where $$\delta { \varphi }_{ k }\left( \eta \right) =\sqrt { \frac { 1 }{ 2k } } { e }^{ -ik\eta }\left( 1+\frac { 1 }{ ik\eta } \right)\label{eq:fhghgedg}.$$ In the massless minimally coupled case, the vacuum expectation values $\left< { \delta \phi }^{ 2 } \right>$ have not only ultraviolet divergences but also infrared divergences. To avoid the infrared divergences, we assume that the Universe changes over from the radiation-dominated phase to the de Sitter phase as follows: $$a\left( \eta \right) =\begin{cases} 2-\frac { \eta }{ { \eta }_{ 0 } } ,\quad \left( \eta <{ \eta }_{ 0 } \right) \\ \frac { { \eta }_{ 0 } }{ \eta } ,\quad\quad\ \left( \eta >{ \eta }_{ 0 } \right) \end{cases}\label{eq:fhghedg}$$ where ${ \eta }_{ 0 }=-1/H$, and during the radiation-dominated phase $\left( \eta <{ \eta }_{ 0 } \right)$ we choose the mode function $$\delta { \chi }_{ k }={ e }^{ -ik\eta }/\sqrt { 2k }\label{eq:fhfhedg},$$ which is the in-vacuum state. Requiring that $\delta { \chi }_{ k }\left( \eta \right)$ and $\delta { \chi }_{ k }'\left( \eta \right)$ be continuous at ${ \eta }={ \eta }_{ 0 }$, we obtain the coefficients of the mode function $${ a }_{ k }=1+\frac { H }{ ik } -\frac { { H }^{ 2 } }{ 2{ k }^{ 2 } } ,\quad { b }_{ k }={ a }_{ k }+\frac { 2ik }{ 3H } +O\left( \frac { { k }^{ 2 } }{ { H }^{ 2 } } \right) \label{eq:fkhkokedg}.$$ For small $k$ modes in the de Sitter Universe $\left( \eta >{ \eta }_{ 0 } \right)$, we have $${ \left| \delta { \chi }_{ k } \right| }^{ 2 }=\frac { 1 }{ 2k } \left[ { \left( \frac { 2 }{ 3H\eta } +2+\frac { { H }^{ 2 }{ \eta }^{ 2 } }{ 6 } \right) }^{ 2 } +O\left( \frac { { k }^{ 2 } }{ { H }^{ 2 } } \right) +\cdots \right] \label{eq:joooegedg}.$$ Therefore, we obviously have no infrared divergences because $k^{2}{ \left| \delta { \chi }_{ k } \right| }^{ 2 }\sim O\left(k\right)$. 
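The matching can also be done numerically, as an independent check of Eq. (\[eq:fkhkokedg\]) (a sketch of our own, in $H=1$ units): solve the $2\times2$ linear system for $(a_{k},b_{k})$ at $\eta_{0}=-1/H$ and compare with the small-$k$ expansion, verifying the Bogoliubov normalization $|a_{k}|^{2}-|b_{k}|^{2}=1$ as well.

```python
import numpy as np

# Bogoliubov matching at eta0 = -1/H between the radiation-era in-mode
# e^{-ik eta}/sqrt(2k) and the de Sitter modes of Eq. (fhghgedg).
H = 1.0
eta0 = -1.0 / H

def phi_ds(k, eta):        # de Sitter positive-frequency mode
    return np.exp(-1j*k*eta)/np.sqrt(2*k) * (1 + 1/(1j*k*eta))

def dphi_ds(k, eta):       # its conformal-time derivative
    return np.exp(-1j*k*eta)/np.sqrt(2*k) * (
        -1j*k*(1 + 1/(1j*k*eta)) - 1/(1j*k*eta**2))

for k in (0.05, 0.2):
    chi = np.exp(-1j*k*eta0)/np.sqrt(2*k)
    A = np.array([[phi_ds(k, eta0), np.conj(phi_ds(k, eta0))],
                  [dphi_ds(k, eta0), np.conj(dphi_ds(k, eta0))]])
    a, b = np.linalg.solve(A, np.array([chi, -1j*k*chi]))
    print(abs(a)**2 - abs(b)**2,                       # -> 1 (normalization)
          abs(a - (1 + H/(1j*k) - H**2/(2*k**2))),     # -> 0 (a_k is exact)
          abs(b - (a + 2j*k/(3*H))))                   # -> O(k^2/H^2)
```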
For large $k$ modes, we can obtain the mode function $${ \left| \delta { \chi }_{ k } \right| }^{ 2 }=\frac { 1 }{ 2k } \left[ 1+\frac { 1 }{ { k }^{ 2 }{ \eta }^{ 2 } } -\frac { { H }^{ 2 } }{ { k }^{ 2 } } \cos { \left( 2k\left( 1/H+\eta \right) \right) } +O\left( \frac { { H }^{ 3 } }{ { k }^{ 3 } } \right) +\cdots \right] \label{eq:ogfhghedg}.$$ Here, we consider the following adiabatic (WKB) solution $$\begin{aligned} { W }_{ k }&=&{ \omega }_{ k }-\frac { 1 }{ 8{ \omega }_{ k } } \left( 2D'+{ D }^{ 2 } \right) -\frac { 1 }{ 8 } \frac { { m }^{ 2 }{ C }'' }{ { \omega }_{ k }^{ 3 } } +\frac { 5 }{ 32 } \frac { { { m }^{ 4 }\left( C' \right) }^{ 2 } }{ { \omega }_{ k }^{ 5 } } ,\\ &=&{ \omega }_{ k }-\frac { 1 }{ \eta^{2} { \omega }_{ k } } -\frac { 1 }{ 8 } \frac { { m }^{ 2 }{ C }'' }{ { \omega }_{ k }^{ 3 } } +\frac { 5 }{ 32 } \frac { { { m }^{ 4 }\left( C' \right) }^{ 2 } }{ { \omega }_{ k }^{ 5 } } \label{eq:ofhfhgedg}.\end{aligned}$$ We can obtain $$\frac{1}{{ W }_{ k }}\simeq\frac{1}{{ \omega }_{ k }}+\frac { 1 }{ \eta^{2} { \omega }_{ k }^{3} } +\frac { 1 }{ 8 } \frac { { m }^{ 2 }{ C }'' }{ { \omega }_{ k }^{ 5 } } -\frac { 5 }{ 32 } \frac { { { m }^{ 4 }\left( C' \right) }^{ 2 } }{ { \omega }_{ k }^{ 7 } } \label{eq:fhfhgedg}.$$ The condition of the adiabatic (WKB) approximation, i.e., ${ \Omega }_{ k }^{ 2 }>0$, requires $k>\sqrt{2}/\left| \eta \right|=\sqrt{2}aH$ as the cut-off on the $k$ modes. Therefore, Eq. (\[eq:dkhkkegedg\]) can be written as follows: $$\begin{aligned} { \left< { \delta \phi }^{ 2 } \right> }_{ \rm ren } &=&\lim _{ m\rightarrow 0 }\frac { 1 }{ 4{ \pi }^{ 2 }C\left( \eta \right) } \Biggl[ \int _{ 0 }^{ \Lambda }{ 2k^{2}{ \left| \delta { \chi }_{ k } \right| }^{ 2 }dk } -\int _{ \sqrt{2}/\left| \eta \right| }^{ \Lambda }{ \frac { { k }^{ 2 } }{ { W }_{ k } } dk } \Biggr] , \\ &=&\lim _{ m\rightarrow 0 }\frac { 1 }{ 4{ \pi }^{ 2 }C\left( \eta \right) } \Biggl[ \int _{ 0 }^{ \Lambda }{ 2k^{2}{ \left| \delta { \chi }_{ k } \right| }^{ 2 }dk } -\int _{\sqrt{2}/\left| \eta \right| }^{ \Lambda }{ \frac { { k }^{ 2 } }{ { \omega }_{ k } } dk} -\int _{ \sqrt{2}/\left| \eta \right| }^{ \Lambda }{ \frac { { k }^{ 2 } }{\eta^{2} { \omega }_{ k }^{3} } dk} \nonumber \\ &&-\frac { { m }^{ 2 }C'' }{ 8 } \int _{ \sqrt{2}/\left| \eta \right| }^{ \Lambda }{ \frac { { k }^{ 2 } }{ { \omega }_{ k }^{ 5 } } dk } +\frac { { 5m }^{ 4 }{ \left( C' \right) }^{ 2 } }{ 32 } \int _{ \sqrt{2}/\left| \eta \right| }^{ \Lambda } { \frac { { k }^{ 2 } }{ { \omega }_{ k }^{ 7 } } dk } \Biggr] \label{eq:fhgegedg}. 
\end{aligned}$$ For large $k$ modes, we can subtract the ultraviolet divergences as $$\lim _{ m\rightarrow 0 }\frac { 1 }{ 4{ \pi }^{ 2 }C\left( \eta \right) } \int _{ \sqrt{2}/\left| \eta \right| }^{ \Lambda } { \left(k-\frac { { k }^{ 2 } }{ { \omega }_{ k } }\right)dk } =0\label{eq:ohfgghedg},$$ $$\lim _{ m\rightarrow 0 }\frac { 1 }{ 4{ \pi }^{ 2 } \eta^{2}C\left( \eta \right) } \int _{ \sqrt{2}/\left| \eta \right| }^{ \Lambda } { \left(\frac{1}{k}-\frac { { k }^{ 2 } }{ { \omega }_{ k }^{3} }\right)dk } =0\label{eq:dk;dkggedg}.$$ Furthermore, we can eliminate the following divergences $$\lim _{ m\rightarrow 0 }\frac { { m }^{ 2 }C'' }{ 8 } \int _{ \sqrt{2}/\left| \eta \right| }^{ \Lambda } { \frac { { k }^{ 2 } }{ { \omega }_{ k }^{ 5 } } dk = \lim _{ m\rightarrow 0 } \frac { { 5m }^{ 4 }{ \left( C' \right) }^{ 2 } }{ 32 } \int _{ \sqrt{2}/\left| \eta \right| }^{ \Lambda } { \frac { { k }^{ 2 } }{ { \omega }_{ k }^{ 7 } } dk } } =0\label{eq:fhk;hkegedg}.$$ Therefore, we can obtain the following expression $$\begin{aligned} { \left< { \delta \phi }^{ 2 } \right> }_{ \rm ren } &=&\frac { 1 }{ 2{ \pi }^{ 2 }C\left( \eta \right) } \int _{ 0 }^{ \sqrt{2}/\left| \eta \right| } { k^{2}{ \left| \delta { \chi }_{ k } \right| }^{ 2 }dk } \nonumber \\ && +\frac { { \eta }^{ 2 }{ H }^{ 2 } }{ 4{ \pi }^{ 2 } } \int _{ \sqrt { 2 } /\left| \eta \right| }^{ \infty } { \left( -\frac { { H }^{ 2 } }{ { k }^{ 2 } } \cos { \left( 2k\left( 1/H+\eta \right) \right) } +\cdots \right) } kdk\label{eq:oehf:lhg}. \end{aligned}$$ At late cosmic times ($\eta\simeq0$, i.e., $N_{\rm tot}=Ht\gg1$), we have the following approximation $$\begin{aligned} { \left< { \delta \phi }^{ 2 } \right> }_{ \rm ren } &\simeq&\frac { { \eta }^{ 2 }{ H }^{ 2 } }{ 2{ \pi }^{ 2 } } \int _{ 0 }^{ \sqrt{2}/\left| \eta \right| } { k^{2}{ \left| \delta { \chi }_{ k } \right| }^{ 2 }dk }, \nonumber \\ &\simeq&\frac { 1 }{ 9{ \pi }^{ 2 } } \int _{ 0 }^{ H }{ kdk } +\frac { { H }^{ 2 } }{ 4{ \pi }^{ 2 } } \int _{ H }^{ \sqrt { 2 } /\left| \eta \right| }{ \frac { 1 }{ k } dk } \label{eq:oflhkegedg}, \end{aligned}$$ where we approximate ${ \left| \delta { \chi }_{ k } \right| }^{ 2 }=2/\left(9{ \eta }^{ 2 }{ H }^{ 2 }k\right)$ for small $k$ modes and ${ \left| \delta { \chi }_{ k } \right| }^{ 2 }=1/\left(2\eta^{2}k^{3}\right)$ for large $k$ modes. We can finally obtain $$\begin{aligned} { \left< { \delta \phi }^{ 2 } \right> }_{ \rm ren } &\simeq&\frac { { H }^{ 2 } }{ 18{ \pi }^{ 2 } } +\frac { { H }^{ 2 } }{ 4{ \pi }^{ 2 } } \left( \frac { 1 }{ 2 } \log { 2 } +Ht \right) \simeq \frac { { H }^{ 3 } }{ 4{ \pi }^{ 2 } } t\label{eq:odhhghedg}, \end{aligned}$$ which coincides with Eq. (\[eq:rjktjgedg\]) obtained via the physical cut-off. Massive non-minimally coupled cases {#sec:massive} ------------------------------------ In this subsection, we consider the massive non-minimally coupled case ($m \neq 0$ and $\xi \neq 0$). First, we discuss the renormalized vacuum fluctuations for $m\ll H$.[^1] In the $m\ll H$ case, the power spectrum on super-horizon scales can be written as $${ \Delta }_{ \delta \phi }^{2}\left( k \right) = { \left( \frac { H }{ 2\pi } \right) }^{ 2 }{ \left( \frac { k }{ aH } \right) }^{ 3-2\nu }\label{eq:hfojjgfghg},$$ where $\nu=\sqrt { 9/4-{ m }^{ 2 }/{ H }^{ 2 } }$. 
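The scaling of Eq. (\[eq:hfojjgfghg\]) can be checked directly from the standard Bunch-Davies mode function $\delta\chi_{k}=\sqrt{\pi/4}\,(-\eta)^{1/2}H_{\nu}^{(1)}(-k\eta)$, which is quoted in the next paragraph; the following sketch (ours, with test values $m\le H$ so that $\nu$ stays real) evaluates $\Delta^{2}_{\delta\phi}=k^{3}|\delta\chi_{k}|^{2}/(2\pi^{2}a^{2})$ deep outside the horizon:

```python
import numpy as np
from scipy.special import hankel1

# Super-horizon power spectrum from the Bunch-Davies mode, in H = 1 units.
H, k, eta = 1.0, 1.0, -1.0e-4      # late time: k|eta| << 1
a = -1.0 / (H * eta)

for m in (0.01, 0.5, 1.0):
    nu = np.sqrt(9.0/4.0 - (m/H)**2)
    chi2 = (np.pi/4.0) * (-eta) * np.abs(hankel1(nu, -k*eta))**2
    delta2 = k**3 * chi2 / (2.0 * np.pi**2 * a**2)
    expected = (H/(2*np.pi))**2 * (k/(a*H))**(3.0 - 2.0*nu)
    print(m, delta2 / expected)    # O(1); exactly 1 in the m -> 0 limit
```

The residual $O(1)$ factor is the standard small-argument prefactor $2^{2\nu}\Gamma(\nu)^{2}/2\pi$, which equals unity at $\nu=3/2$.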
If we take $aH$ as the UV cut-off and $H$ as the IR cut-off, the vacuum expectation values $\left< { \delta \phi }^{ 2 } \right>$ can be given by $$\left< { \delta \phi }^{ 2 } \right>=\int _{ H }^{ aH }{ \frac { dk }{k } } { \Delta }_{ \delta \phi }^{2}\left( k \right) =\frac { 3{ H }^{ 4 } }{ 8{ \pi }^{ 2 }{ m }^{ 2 } } \label{eq:gmkmkmg}.$$ Here, we briefly discuss the renormalized vacuum fluctuations ${ \left< { \delta \phi }^{ 2 } \right> }_{ \rm ren }$ for $m\ll H$ by using the adiabatic regularization. The UV parts can be eliminated by the adiabatic regularization, i.e., the following subtracted integral appearing in Eq. (\[eq:dkhkkegedg\]) converges: $${ \left< { \delta \phi }^{ 2 } \right> }_{ \rm div } =\frac { 1 }{ 2{ \pi }^{ 2 }C\left( \eta \right) } \int _{ \sqrt { 2 } /\left| \eta \right| }^{\Lambda } {k^{2}\left( { \left| \delta { \chi }_{ k } \right| }^{ 2 }-\frac { 1}{ 2{ W }_{ k } }\right)dk } \label{eq:fhfggedg},$$ where we take the mode function corresponding to the exact Bunch-Davies vacuum state, given by $$\delta { \chi }_{ k }=\sqrt { \frac { \pi }{ 4 } } { \left(-\eta\right) }^{ 1/2 }{ H }_{ \nu }^{ \left( 1 \right) }\left( -k\eta \right) \label{eq:hjojojhg},$$ where ${ H }_{ \nu }^{ \left( 1 \right) }$ is the Hankel function of the first kind. By discarding the convergent terms, whose orders are $O(m^2)$, we can obtain the following expression $$\begin{aligned} { \left< { \delta \phi }^{ 2 } \right> }_{ \rm ren } \simeq \frac { 1 }{ 2{ \pi }^{ 2 }C\left( \eta \right) } \int _{ 0 }^{ H }{ k^{2}{ \left| \delta { \chi }_{ k } \right| }^{ 2 }dk } +\frac { 1 }{ 2{ \pi }^{ 2 }C\left( \eta \right) } \int _{ H }^{ \sqrt { 2 } /\left| \eta \right| }{ k^{2}{ \left| \delta { \chi }_{ k } \right| }^{ 2 } dk } \label{eq:fkhdg}, \end{aligned}$$ just as in the massless minimally coupled case. In the late-time limit ($\eta\simeq0$ or $Ht\gg1$), the first integral converges to zero (see Ref.[@Haro:2010zz; @Haro:2010mx] for the details) and we obtain the following approximation $$\begin{aligned} { \left< { \delta \phi }^{ 2 } \right> }_{ \rm ren } &\simeq&\frac { 1 }{ 2{ \pi }^{ 2 }C\left( \eta \right) } \int _{ H }^{ \sqrt { 2 } /\left| \eta \right| } { k^{2}{ \left| \delta { \chi }_{ k } \right| }^{ 2 } dk },\nonumber \\ &\simeq& \frac { 3{ H }^{ 4 } }{ 8{ \pi }^{ 2 }{ m }^{ 2 } } \left[ 1-{ e }^{ -2{ m }^{ 2 }t/3H } \right] \label{eq:fkhfgedg},\end{aligned}$$ which coincides with Eq. (\[eq:gmkmkmg\]) obtained by using the physical cut-off. Next, we briefly discuss the renormalized vacuum fluctuations ${ \left< { \delta \phi }^{ 2 } \right> }_{ \rm ren }$ for $m\gg H$ by using the adiabatic regularization. In this case, the power spectrum on super-horizon scales is approximately given by [@Riotto:2002yw] $${ \Delta }_{ \delta \phi }^{2}\left( k \right) = { \left( \frac { H }{ 2\pi } \right) }^{ 2 } \left( \frac { H }{ m } \right) { \left( \frac { k }{ aH } \right) }^{ 3}\label{eq:fkh;fkhdg}.$$ For the very massive case $m\gg H$, the amplitude of the power spectrum is suppressed and the spectrum rapidly drops on long wave modes. Therefore, the massive field vacuum fluctuations are more inhomogeneous and break the scale invariance of the perturbation spectrum. In this case, we must pay attention to the UV cut-off. 
For the very massive case $m\gg H$, the adiabatic conditions ($\Omega^{2}_{k}>0$) are satisfied for all $k$ modes, and we can estimate the renormalized vacuum fluctuations by subtracting the lowest-order adiabatic (WKB) contribution $$\begin{aligned} { \left< { \delta \phi }^{ 2 } \right> }_{ \rm ren } &\simeq&{ \left< { \delta \phi }^{ 2 } \right> }_{W_{k}}-{ \left< { \delta \phi }^{ 2 } \right> }_{\Omega_{k}},\\ &=&\frac { 1 }{ 4{ \pi }^{ 2 }C\left( \eta \right) } \Biggl[ \int _{ 0 }^{ \Lambda }{{ \frac { { k }^{ 2 } }{ { W }_{ k } } dk }}-\int _{ 0 }^{ \Lambda } { \frac { { k }^{ 2 } }{ { \Omega }_{ k } } dk } \Biggr] \label{eq:ofhgjgjrg}. \end{aligned}$$ By using Eq. (\[eq:oekgkf\]), we can simply estimate the dominant terms as follows: $$\begin{aligned} &&\lim _{ \Lambda \rightarrow \infty }\frac { 1 }{ 4{ \pi }^{ 2 }C\left( \eta \right) } \Biggl[ \frac { { m }^{ 2 }C'' }{ 8 } \int _{ 0 }^{ \Lambda }{ \frac { { k }^{ 2 } }{ { \omega }_{ k }^{ 5 } } dk -\frac { { 5m }^{ 4 }{ \left( C' \right) }^{ 2 } }{ 32 } \int _{ 0 }^{ \Lambda }{ \frac { { k }^{ 2 } }{ { \omega }_{ k }^{ 7 } } dk } }\Biggr] \nonumber \\ &&=\lim _{ \Lambda \rightarrow \infty }\frac { 1 }{ 4{ \pi }^{ 2 }C\left( \eta \right) } \Biggl[ \frac { { m }^{ 2 }C'' }{ 8 } \frac { { \Lambda }^{ 3 } }{ 3{ m }^{ 2 }C{ \left( { \Lambda }^{ 2 }+{ m }^{ 2 }C \right) }^{ 3/2 } } -\frac { 5{ m }^{ 4 }{ \left( C' \right) }^{ 2 } }{ 32 } \frac { { { 5m }^{ 2 }C\Lambda }^{ 3 }+2{ \Lambda }^{ 5 } }{ 15{ m }^{ 4 }{ C }^{ 2 }{ \left( { \Lambda }^{ 2 }+{ m }^{ 2 }C \right) }^{ 5/2 } } \Biggr], \nonumber \\ &&=-\frac { 1 }{ 96{ \pi }^{ 2 }C\left( \eta \right) } \left[ \frac { 1 }{ 2 } { \left( \frac { C' }{ C } \right) }^{ 2 }-\frac { C'' }{ C } \right]=\frac { 1 }{ 48{ \pi }^{ 2 } } \frac { a'' }{ { a }^{ 3 } } =\frac{R}{288\pi^2}\label{eq:fhkgkgedg}.\end{aligned}$$ Renormalized vacuum fluctuations via point-splitting regularization {#sec:point} =================================================================== The point-splitting regularization is the method of regularizing divergences by splitting the points of the two-point function, and it has been studied in detail in Ref.[@Birrell:1982ix; @Bunch:1978yq; @Vilenkin:1982wt]. In this section, we review the renormalized vacuum fluctuations obtained by using the point-splitting regularization, and compare them with the results of the previous section. The regularized vacuum expectation values are expressed as [@Vilenkin:1982wt] $$\begin{aligned} \begin{split} \left< {\delta \phi }^{ 2 } \right>_{\rm reg} =& -\frac { 1 }{ 16{ \pi }^{ 2 }{ \epsilon }^{ 2 } } +\frac { R }{ 576{ \pi }^{ 2 } } +\frac { 1 }{ 16{ \pi }^{ 2 } } \left[ { m }^{ 2 }+\left( \xi -\frac { 1 }{ 6 } \right) R \right] \\ & \times \left[ \ln { \left( \frac { { \epsilon }^{ 2 }{ \mu }^{ 2 } }{ 12 } \right) } +\ln { \left( \frac { R }{ { \mu }^{ 2 } } \right) } +2\gamma -1+\psi \left( \frac { 3 }{ 2 } +\nu \right) +\psi \left( \frac { 3 }{ 2 } -\nu \right) \right] , \end{split}\label{eq:hfjhkdg}\end{aligned}$$ where we take the Bunch-Davies vacuum state; $\epsilon $ is the regularization parameter corresponding to the point separation, $\mu$ is the renormalization scale, $\gamma $ is Euler's constant, and $\psi \left( z \right)=\Gamma' \left( z \right)/\Gamma \left( z \right)$ is the digamma function. 
By using the $m^{2}$ and $\xi$ renormalization, we have the following expression $$\begin{aligned} \begin{split} \left< { \delta \phi }^{ 2 } \right>_{\rm ren} & = \frac { 1 }{ 16{ \pi }^{ 2 } }\biggr\{-{ m }^{ 2 }\ln { \left( \frac { 12{ m }^{ 2 } }{ { \mu }^{ 2 } } \right)}\\ & +\left[ { m }^{ 2 }+\left( \xi -\frac { 1 }{ 6 } \right) R \right] \left[ \ln { \left( \frac { R }{ { \mu }^{ 2 } } \right) +\psi \left( \frac { 3 }{ 2 } +\nu \right) } +\psi \left( \frac { 3 }{ 2 } -\nu \right) \right] \biggr\}, \end{split}\label{eq:hitihiegedg}\end{aligned}$$ where the additive constants $\psi \left( \frac { 3 }{ 2 } \pm \nu \right)$ have been chosen so that $\left< { \delta \phi }^{ 2 } \right>_{\rm ren} =0$ in the radiation-dominated Universe, where $R=0$. In the massive field theory, we can simply remove the renormalization scale $\mu$ by setting $\mu^{2}=12m^{2}$: $$\begin{aligned} \left< { \delta \phi }^{ 2 } \right>_{\rm ren} = \frac { 1 }{ 16{ \pi }^{ 2 } }\left[ { m }^{ 2 }+\left( \xi -\frac { 1 }{ 6 } \right) R \right] \left[ \ln { \left( \frac { R }{ { 12m }^{ 2 } } \right) +\psi \left( \frac { 3 }{ 2 } +\nu \right) } +\psi \left( \frac { 3 }{ 2 } -\nu \right) \right]\label{eq:klkrkkdg}.\end{aligned}$$ In the massless and nearly conformal coupling case, $m=0$ and $\xi \simeq1/6$, we cannot remove the renormalization scale $\mu$, and $\psi \left( \frac { 3 }{ 2 } \pm \nu \right)$ may be absorbed by the non-minimal coupling renormalization $$\left< { \delta \phi }^{ 2 } \right>_{\rm ren} =\frac { \xi -1/6 }{ 16{ \pi }^{ 2 } } R\ln { \left( \frac { R }{ { \mu }^{ 2 } } \right)}\label{eq:klkorkhkdg}.$$ In the massless conformal coupling case, $m=0$ and $\xi=1/6$, we can simply obtain the following expression [^2] $$\left< { \delta \phi }^{ 2 } \right>_{\rm ren} =\frac { R }{ 576{ \pi }^{ 2 } }=\frac { H^{2} }{ 48{ \pi }^{ 2 }}\label{eq:hgl;l;fldg}.$$ In the minimally coupled case $\xi=0$, taking the massless limit $m\rightarrow 0$, we obtain $$\left< { \delta \phi }^{ 2 } \right>_{\rm ren} \rightarrow\frac { R^{2} }{ 384{ \pi }^{ 2 }m^{2} }=\frac { 3H^{4} }{ 8{ \pi }^{ 2 }m^{2} }\label{eq:hl;lhf:kdg},$$ which coincides with Eq. (\[eq:fkhfgedg\]). For $z\gg1$, the real part of the digamma function can be approximated as [@Bunch:1978yq] $${\rm Re} \ {\psi \left( \frac{3}{2}+i z \right)}= \log { z } +\frac { 11 }{ 24{ z }^{ 2 } } -\frac { 127 }{ 960{ z }^{ 4 } } +\cdots \label{eq:flklkdg}.$$ In the massive case $m\gg H$, $$\begin{aligned} &&\ln { \left( \frac { { H }^{ 2 } }{ { m }^{ 2 } } \right) } +\psi \left( \frac { 3 }{ 2 } +\nu \right) +\psi \left( \frac { 3 }{ 2 } -\nu \right)\nonumber \\ \approx && \ln { \left( \frac { { H }^{ 2 } }{ { m }^{ 2 } } \right) } +\psi \left( \frac { 3 }{ 2 } +i\frac { m }{ H } \right) +\psi \left( \frac { 3 }{ 2 } -i\frac { m }{ H } \right),\\ \approx &&\frac { 11 }{ 12 } \frac { { H }^{ 2 } }{ { m }^{ 2 } } -\frac { 127 }{ 480 } \frac { { H }^{ 4 } }{ { m }^{ 4 } } +\cdots \label{eq:dlkglkdg}.\end{aligned}$$ Therefore, the renormalized vacuum fluctuations of the massive Higgs field for $m\gg H$ are given as follows: $$\begin{aligned} \left< { \delta \phi }^{ 2 } \right>_{\rm ren} &=& \frac { 1 }{ 16{ \pi }^{ 2 } }\left[ { m }^{ 2 }+\left( \xi -\frac { 1 }{ 6 } \right) R \right] \left(\frac { 11 }{ 12 } \frac { { H }^{ 2 } }{ { m }^{ 2 } } -\frac { 127 }{ 480 } \frac { { H }^{ 4 } }{ { m }^{ 4 } } +\cdots \right), \\ &\simeq& O\left( H^{2} \right) \label{eq:hldfkldg},\end{aligned}$$ which is consistent with Eq. (\[klklksssg\]). 
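The limits just quoted can be verified numerically from Eq. (\[eq:klkrkkdg\]); the sketch below is our own check (it sets $\xi=0$, works in $H=1$ units, and uses mpmath so that the digamma function can take the complex argument that arises for $m>3H/2$):

```python
import mpmath as mp

# <dphi^2>_ren from Eq. (klkrkkdg) with xi = 0 and R = 12 H^2, in H = 1 units:
# (1/16pi^2) (m^2 - 2H^2) [ln(R/12m^2) + psi(3/2+nu) + psi(3/2-nu)].
H = mp.mpf(1)

def phi2_ren(m):
    R = 12 * H**2
    nu = mp.sqrt(mp.mpf(9)/4 - (m/H)**2)   # imaginary for m > 3H/2
    bracket = (mp.log(R / (12*m**2)) + mp.digamma(mp.mpf(3)/2 + nu)
               + mp.digamma(mp.mpf(3)/2 - nu))
    return mp.re((m**2 - 2*H**2) * bracket) / (16 * mp.pi**2)

for m in (mp.mpf('0.01'), mp.mpf('1'), mp.mpf('10')):
    print(m, phi2_ren(m))
# m = 0.01 -> ~ 3H^4/(8 pi^2 m^2) ~ 3.8e2;  m >~ H -> O(H^2) in magnitude
```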
Therefore, the renormalized vacuum fluctuations $\left< { \delta \phi }^{ 2 } \right>_{\rm ren}$ via the point-splitting regularization are equivalent to the results of Eq. (\[eq:fhkgkgedg\]) via the adiabatic regularization. Finally, we summarize the renormalized vacuum fluctuations ${ \left< { \delta \phi }^{ 2 } \right> }_{ \rm ren }$ via the adiabatic regularization and the point-splitting regularization as follows: $$\left< { \delta \phi }^{ 2 } \right>_{\rm ren}\simeq\begin{cases} { H }^{ 3 }t /4{ \pi }^{ 2 } ,\quad\quad\quad \left( m=0 \right) \\ 3{ H }^{ 4} / 8{ \pi }^{ 2 }m^{2},\quad\ \left( m\ll H \right) \\ H^2/24\pi^{2}. \quad\quad\ \ \left( m\gtrsim H \right) \end{cases} \label{klklksssg}$$ Electroweak vacuum instability from dynamical behavior of homogeneous Higgs field and renormalized Higgs vacuum fluctuations {#sec:electroweak} ============================================================================================================================ In this section, we apply the results of Section \[sec:effective\], Section \[sec:adiabatic\] and Section \[sec:point\] to the investigation of the electroweak vacuum instability during inflation (de Sitter space) or after inflation. The instability of the electroweak false vacuum on the dynamical background is determined by the behavior of the homogeneous Higgs field $\phi$ and by the inhomogeneous Higgs field fluctuations, i.e., the renormalized vacuum fluctuations $\left< { \delta \phi }^{ 2 } \right>_{\rm ren}$. The behavior of the global and homogeneous Higgs field $\phi$ is governed by the effective Klein-Gordon equation $$\begin{aligned} \Box \phi + V'_{\rm eff}\left( \phi \right) =0\label{eq:gkfkjdkdg}.\end{aligned}$$ We can rewrite the effective Klein-Gordon equation as $$\begin{aligned} \ddot { \phi }\left(t\right) +3H\dot { \phi }\left(t\right) +V'_{\rm eff}\left( \phi \left(t\right) \right) =0. \label{eq:kdljkdddljlkdj}\end{aligned}$$ If we approximate the effective potential as $V'_{\rm eff}\left( \phi \right) =-m^{2}_{\rm eff}\phi$, the behavior of the coherent Higgs field $\phi\left(t\right) $ is described as follows: $$\begin{aligned} \phi\left(t\right) \propto { e }^{ \frac { 1 }{ 2 } \left( -3H+\sqrt { 9{ H }^{ 2 }+4m^{2}_{\rm eff} } \right) t } \simeq\begin{cases} { e }^{ m_{\rm eff}t } \ \ \quad\quad \left( m_{\rm eff}\gg H \right), \\ { e }^{ m^{2}_{\rm eff}t/3H } \quad \ \left( m_{\rm eff}\lesssim H \right).\end{cases} \label{eq:kdljklkdj}\end{aligned}$$ The one-loop standard model Higgs effective potential $V_{\rm eff}\left( \phi \right)$ in curved space-time  [@Herranen:2014cua], in the 't Hooft-Landau gauge and the $\overline {\rm MS } $ scheme, is given as follows: $$\begin{aligned} V_{\rm eff}\left( \phi \right) &&=\frac{1}{2}m^{2}(\mu)\phi^{2}+\frac{1}{2}\xi (\mu)R\phi^{2} +\frac{\lambda(\mu)}{4}\phi^{4} \label{eq:fhlgkdg}\\ &&+\sum _{ i=1 }^{ 9 }{ \frac { { n }_{ i } }{ 64{ \pi }^{ 2 } } { M }_{ i }^{ 4 }\left( \phi \right) \left[ \log{ \frac { { M }_{ i }^{ 2 }\left( \phi \right) }{ { \mu }^{ 2 } } } -{ C }_{ i } \right] } \nonumber,\end{aligned}$$ where $${ M }_{ i }^{ 2 }\left( \phi \right) ={ \kappa }_{ i }{ \phi }^{ 2 }+{ \kappa }'_{ i }+\theta_{i} R.$$ The coefficients $n_{i}$, ${ \kappa }_{ i }$, ${ \kappa }'_{ i }$ and $\theta_{i}$ are given in Table I of Ref.[@Herranen:2014cua]. 
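Equation (\[eq:kdljklkdj\]) is easy to verify numerically; the sketch below (our own, with toy values of $m_{\rm eff}/H$ and $H=1$) integrates Eq. (\[eq:kdljkdddljlkdj\]) for the inverted quadratic potential and compares the fitted late-time growth exponent with the exact root $\frac{1}{2}(-3H+\sqrt{9H^{2}+4m_{\rm eff}^{2}})$ and with the two limiting forms:

```python
import numpy as np
from scipy.integrate import solve_ivp

# phi'' + 3 H phi' - m_eff^2 phi = 0: measure the late-time growth exponent.
H = 1.0

def growth_rate(m_eff, t_end=30.0):
    sol = solve_ivp(lambda t, y: [y[1], -3*H*y[1] + m_eff**2 * y[0]],
                    (0.0, t_end), [1e-3, 0.0], rtol=1e-10, dense_output=True)
    t1, t2 = 0.8 * t_end, t_end
    return np.log(sol.sol(t2)[0] / sol.sol(t1)[0]) / (t2 - t1)

for m_eff in (0.3, 5.0):
    exact = 0.5 * (-3*H + np.sqrt(9*H**2 + 4*m_eff**2))
    limit = m_eff**2 / (3*H) if m_eff < H else m_eff
    print(m_eff, growth_rate(m_eff), exact, limit)
```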
Furthermore, the $\beta$-function for the non-minimal coupling $\xi\left( \mu \right)$ in the standard model, ignoring gravity, is given by $$\beta_{\xi }=\frac { 1 }{ { \left( 4\pi \right) }^{ 2 } }\left( \xi -1/6 \right) \left( 6\lambda +3{ y }_{ t }^{ 2 }-\frac { 3 }{ 4} { g' }^{ 2 }-\frac { 9 }{ 4 } { g }^{ 2 } \right) \label{eq:sdjldkdg}.$$ The running of the non-minimal coupling $\xi \left( \mu \right) $ can be obtained by integrating $\beta_{\xi }$: $$\xi \left( \mu \right) =\frac { 1 }{ 6 } +\left( { \xi }_{ \rm EW }-\frac { 1 }{ 6 } \right) F\left( \mu \right) \label{eq:gdjglg},$$ where $F\left( \mu \right)$ depends on the renormalization scale $\mu$. If we have a nearly minimal coupling ${ \xi }_{ \rm EW }\lesssim O\left(10^{-2}\right)$ at the electroweak scale, the running non-minimal coupling $\xi \left( \mu \right) $ becomes negative at some scale [@Herranen:2014cua]. On the other hand, we can take the initial condition of the running non-minimal coupling $\xi \left( \mu \right) $ at the Planck scale [@Espinosa:2015qea]. If the Hubble scale is larger than the instability scale [^3], i.e., $H>\Lambda_{I}$ and $\xi \left( H \right)<0 $, the Higgs effective potential in de Sitter space is destabilized, i.e., $V_{\rm eff}'\left( \phi \right)\lesssim 0$, and the homogeneous Higgs field $\phi$ on the entire Universe rolls down to the negative-energy Planck-scale true vacuum. However, as mentioned in Section \[sec:effective\], we must shift the Higgs field $\phi^{2}\rightarrow \phi^{2}+\left< { \delta \phi }^{ 2 } \right>_{\rm ren} $ in order to include the backreaction term via the renormalized vacuum fluctuations, and the one-loop standard model Higgs effective potential in curved space-time is modified as follows: $$\begin{aligned} V_{\rm eff}\left( \phi \right) &&=\frac{1}{2}m^{2}(\mu)\phi^{2}+\frac{1}{2}\xi (\mu)R\phi^{2} +\frac{1}{2}\lambda(\mu)\left< {\delta \phi }^{ 2 } \right>_{\rm ren}\phi^{2}+\frac{\lambda(\mu)}{4}\phi^{4} \label{eq:gkljkkdg} \\ &&+\sum _{ i=1 }^{ 9 }{ \frac { { n }_{ i } }{ 64{ \pi }^{ 2 } } { M }_{ i }^{ 4 }\left( \phi \right) \left[ \log{ \frac { { M }_{ i }^{ 2 }\left( \phi \right) }{ { \mu }^{ 2 } } } -{ C }_{ i } \right] } \nonumber,\end{aligned}$$ with $${ M }_{ i }^{ 2 }\left( \phi \right) ={ \kappa }_{ i }{ \phi }^{ 2 }+{ \kappa }_{ i } \left< {\delta \phi }^{ 2 } \right>_{\rm ren}+{ \kappa }'_{ i }+\theta_{i} R\label{eq:gkldfdfkdg}.$$ Here, we must choose the appropriate scale $\mu$ in order to suppress the higher-order corrections of $\log{ { { M }_{ i }^{ 2 }\left( \phi \right) }/{ { \mu }^{ 2 } } }$. In the case of flat spacetime, $R=0$, it is known that $\mu^{2} =\phi^{2}$ is a good choice to suppress the higher-order log-corrections. In de Sitter space, however, we must choose $\mu^{2} =\phi^{2}+R+\left< {\delta \phi }^{ 2 } \right>_{\rm ren}=\phi^{2}+12H^{2}+\left< {\delta \phi }^{ 2 } \right>_{\rm ren}$ to suppress the log-corrections. Therefore, if we have $\mu\simeq{ \left( 12H^{2}+\left< {\delta \phi }^{ 2 } \right>_{\rm ren}\right) }^{ 1/2 }>\Lambda_{I}$, the quartic term $\lambda(\mu)\phi^{4}/4$ makes a negative contribution to the effective potential. The Higgs field phenomenologically acquires various effective masses during or after inflation, e.g. the inflaton-Higgs coupling $\lambda_{\phi S}$ provides an extra contribution to the Higgs mass, $m^{2}_{\rm eff}=\lambda_{\phi S}S^{2}$, where $S$ is the inflaton field. 
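To see how the backreaction term in Eq. (\[eq:gkljkkdg\]) competes with the curvature mass, the following toy-model sketch (ours; a constant $\lambda<0$, tree-level terms only, and the small-$\xi$ case of Eq. (\[klksklkdj\]) below used for $\left<\delta\phi^{2}\right>_{\rm ren}$) locates the potential barrier $\phi_{\rm max}$ and tests whether it survives:

```python
import numpy as np

# Toy barrier: V = (xi R/2 + lam <dphi^2>/2) phi^2 + (lam/4) phi^4, lam < 0.
# A barrier exists iff xi*R > |lam| * <dphi^2>.
lam, H = -0.01, 1.0
R = 12.0 * H**2

for xi in (1e-4, 1e-3, 1e-2):
    var = H**2 / (32.0 * np.pi**2 * xi)     # <dphi^2>_ren for small xi (see below)
    m2 = xi * R + lam * var                  # quadratic coefficient of V
    if m2 > 0:
        print(xi, 'barrier at phi_max =', np.sqrt(m2 / abs(lam)))
    else:
        print(xi, 'no barrier: the potential is destabilized')
```

Balancing the two terms gives the crossover $\xi=\sqrt{|\lambda|/384\pi^{2}}\simeq1.6\times10^{-3}$ for $\lambda\simeq-0.01$, which is the origin of the $O(10^{-3})$ boundary derived below.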
However, in this work, we restrict our attention to the simple case in which the Higgs field couples only to gravity via the non-minimal Higgs-gravity coupling $\xi(\mu)$, and we disregard other inflationary effective mass-terms. For convenience, we only consider the inflationary effective mass-term $m^{2}_{\rm eff}=\xi(\mu)R=12\xi(\mu)H^{2 }_{\rm inf}$; using the results of Eq. (\[klklksssg\]), the renormalized vacuum field fluctuations are given by $$\left< { \delta \phi }^{ 2 } \right>_{\rm ren}\simeq\begin{cases} { H }^{ 2}_{\rm inf} / 32{ \pi }^{ 2 }\xi(\mu), \quad\quad \left( \xi(\mu) \ll O\left(10^{-1}\right) \right) \\ H^{2}_{\rm inf}/24\pi^{2}. \quad\quad\quad\quad\ \ \left( \xi(\mu)\gtrsim O\left(10^{-1}\right) \right) \end{cases} \label{klksklkdj}$$ Here, if we have $\mu\simeq{ \left( 12H^{2}_{\rm inf}+\left< {\delta \phi }^{ 2 } \right>_{\rm ren}\right) }^{ 1/2 }>\Lambda_{I}$ [^4], we can infer the sign of $V_{\rm eff}\left( \phi \right)$ from the inequality $\xi (\mu)R<\left| {\lambda(\mu)}\left< {\delta \phi }^{ 2 } \right>_{\rm ren} \right| $. If we assume $\xi (\mu)R=\xi (\mu)12H^{2}_{\rm inf}$, ${\lambda(\mu)}\left< {\delta \phi }^{ 2 } \right>_{\rm ren} \simeq{\lambda(\mu)}{ H }^{ 2}_{\rm inf} / 32{ \pi }^{ 2 }\xi(\mu)$ and $\lambda(\mu)\simeq -0.01$, we obtain the constraint on the non-minimal coupling ${ \xi (\mu) }\lesssim O\left(10^{-3}\right)$ where $H_{\rm inf}>\sqrt { 32{ \pi }^{ 2 }\xi \left( \mu \right) }\ \Lambda_{I} $. In this case, the homogeneous Higgs field $\phi$ rolls out to the negative Planck-energy vacuum state, and therefore the excursion of the homogeneous Higgs field $\phi$ to the Planck-scale true vacuum can terminate inflation and cause an immediate collapse of the Universe. On the other hand, the inhomogeneous Higgs field fluctuations $\left< { \delta \phi }^{ 2 } \right>_{\rm ren}$ can cause the vacuum transition of the Universe [@Espinosa:2007qp; @Fairbairn:2014zia; @Lebedev:2012sy; @Kobakhidze:2013tn; @Enqvist:2013kaa; @Herranen:2014cua; @Kobakhidze:2014xda; @Kohri:2016wof; @Kamada:2014ufa; @Enqvist:2014bua; @Hook:2014uia; @Kearney:2015vba; @Espinosa:2015qea]. If the inhomogeneous and local Higgs field gets over the hill of the effective potential, the local Higgs fields classically roll down into the negative Planck-energy true vacuum and catastrophic Anti-de Sitter (AdS) domains are formed. Note that not all AdS domains formed during inflation threaten the existence of the Universe [@Hook:2014uia; @Espinosa:2015qea]; whether they do depends strongly on the evolution of the AdS domains at the end of inflation (see Ref.[@Espinosa:2015qea] for the details). The AdS domains can either shrink or expand, eating other regions of the electroweak vacuum. Although high-scale inflation can generate more expanding AdS domains than shrinking ones during inflation, such domains never overcome the inflationary expansion of the Universe, i.e., one AdS domain cannot terminate the inflation of the Universe [^5]. However, after inflation, some AdS domains expand and consume the entire Universe. Therefore, the existence of AdS domains in our Universe is catastrophic, and so we focus on the conditions under which they are not generated during or after inflation. We assume that the probability distribution function of the Higgs field fluctuations is Gaussian, i.e. 
$$P\left( \phi, \left< { \delta \phi }^{ 2 } \right>_{\rm ren} \right) = \frac { 1 }{ \sqrt {2{ \pi }\left< { \delta \phi }^{ 2 } \right>_{\rm ren} } } \exp \left( -\frac {{ \phi }^{ 2 } }{ 2\left< { \delta \phi }^{ 2 } \right>_{\rm ren} } \right)\label{eq:hqqqqdg}.$$ By using Eq. (\[eq:hqqqqdg\]), the probability that the electroweak vacuum survives can be obtained as $$\begin{aligned} { P }\left( \phi<{ \phi }_{ \rm max }\right) &\equiv& \int _{ -{ \phi }_{\rm max } }^{ { \phi }_{ \rm max } } { P\left( \phi, \left< { \delta \phi }^{ 2 } \right>_{\rm ren} \right) d\phi }, \\ &=& {\rm erf}\left( \frac{{ \phi }_{ \rm max } } { \sqrt { 2\left< { \delta \phi }^{ 2 } \right>_{\rm ren} }} \right)\label{eq:hswegg},\end{aligned}$$ where ${ \phi }_{ \rm max }$ is defined as the field value at which the Higgs effective potential $V_{\rm eff}\left( \phi \right)$ given by Eq. (\[eq:gkljkkdg\]) takes its maximal value [^6]. On the other hand, the probability that the inhomogeneous Higgs field falls into the true vacuum can be expressed as $$\begin{aligned} { P }\left( \phi>{ \phi }_{ \rm max }\right)&=&1- {\rm erf}\left( \frac{{ \phi }_{ \rm max } } { \sqrt { 2\left< { \delta \phi }^{ 2 } \right>_{\rm ren} }} \right), \\ &\simeq&\frac {\sqrt{2\left< { \delta \phi }^{ 2 } \right>_{\rm ren} }}{\sqrt{\pi}\,{ \phi }_{ \rm max }}e^{-\frac {{ \phi }_{ \rm max }^{2}}{2\left< { \delta \phi }^{ 2 } \right>_{\rm ren}}}\label{eq:afklsjiijgg}.\end{aligned}$$ The condition that the vacuum decay does not occur in the whole observable Universe is given by $${ e }^{ 3{ N }_{ \rm hor } }{ P }\left( \phi>{ \phi }_{ \rm max }\right)<1,\label{eq:aaawegg}$$ where ${ e }^{ 3{ N }_{ \rm hor } }$ corresponds to the physical volume of our Universe at the end of inflation, and we take the e-folding number $N_{\rm hor }\simeq N_{\rm CMB }\simeq60$. Imposing Eq. (\[eq:afklsjiijgg\]) on the condition shown in (\[eq:aaawegg\]), we obtain, at leading exponential order, the relation $$\frac{\left< { \delta \phi }^{ 2 } \right>_{\rm ren}}{{ \phi }_{ \rm max }^{2}}<\frac{1}{6N_{\rm hor} }\label{eq:hswedg}.$$ Now, we consider the condition shown in (\[eq:hswedg\]) by using the Higgs effective potential in de Sitter space-time given by Eq. (\[eq:gkljkkdg\]) and the Higgs field vacuum fluctuations of Eq. (\[klklksssg\]). In the same way as in our previous work [@Kohri:2016wof], we can numerically obtain the constraint on the non-minimal coupling ${ \xi (\mu) }\lesssim O\left(10^{-2}\right)$ where $H_{\rm inf}>\sqrt { 32{ \pi }^{ 2 }\xi \left( \mu \right) }\ \Lambda_{I} $. Therefore, the catastrophic Anti-de Sitter (AdS) domains or bubbles from high-scale inflation can be avoided if a relatively large non-minimal Higgs-gravity coupling (or, e.g., an inflaton-Higgs coupling $\lambda_{\phi S}$) is introduced. Here, we summarize the conclusions obtained in this section as follows: - For ${ \xi (\mu) }\lesssim O\left(10^{-3}\right)$ and $H_{\rm inf}>\sqrt { 32{ \pi }^{ 2 }\xi \left( \mu \right) }\ \Lambda_{I} $, the Higgs effective potential during inflation is destabilized by the backreaction term ${\lambda(\mu)}\left< {\delta \phi }^{ 2 } \right>_{\rm ren}$, which overcomes the stabilizing term $\xi (\mu)R$. In this case the effective potential satisfies $V_{\rm eff}'\left( \phi \right)\lesssim 0$; the excursion of the homogeneous Higgs field $\phi$ to the negative Planck-energy vacuum state terminates the inflation of the Universe and causes a catastrophic collapse. 
Here, we summarize the conclusions obtained in this section as follows:

- In ${ \xi (\mu) }\lesssim O\left(10^{-3}\right)$ and $H_{\rm inf}>\sqrt { 32{ \pi }^{ 2 }\xi \left( \mu \right) }\ \Lambda_{I} $, the Higgs effective potential during inflation is destabilized by the backreaction term ${\lambda(\mu)}\left< {\delta \phi }^{ 2 } \right>_{\rm ren}$, which overcomes the stabilization term $\xi (\mu)R$. In this case the effective potential becomes negative, i.e. $V_{\rm eff}'\left( \phi \right)\lesssim 0$, and the excursion of the homogeneous Higgs field $\phi$ to the negative Planck-energy vacuum state terminates the inflation of the Universe and causes a catastrophic collapse.

- In $O\left(10^{-3}\right)\lesssim{ \xi (\mu) }\lesssim O\left(10^{-2}\right)$ and $H_{\rm inf}>\sqrt { 32{ \pi }^{ 2 }\xi \left( \mu \right) }\ \Lambda_{I} $, the inflationary effective mass $\xi (\mu)R=12\xi (\mu)H^{2}_{\rm inf}$ can raise and stabilize the Higgs effective potential during inflation, so the dangerous motion of the homogeneous Higgs field $\phi$ cannot occur. However, the inhomogeneous Higgs field fluctuations $\left< { \delta \phi }^{ 2 } \right>_{\rm ren}$ generate the catastrophic Anti-de Sitter (AdS) domains or bubbles, which finally cause the vacuum transition of the Universe.

- In ${ \xi (\mu) }\gtrsim O\left(10^{-2}\right)$ or $H_{\rm inf}<\sqrt { 32{ \pi }^{ 2 }\xi \left( \mu \right) }\ \Lambda_{I} $, the Higgs effective potential is stabilized and the catastrophic Anti-de Sitter (AdS) domains or bubbles are not formed during inflation. However, after inflation the effective mass-term $\xi (\mu)R$ from the non-minimal Higgs-gravity coupling drops rapidly and sometimes becomes negative. The stabilization via $\xi (\mu)R$ therefore disappears, and the Higgs effective potential becomes rather unstable due to the terms ${\lambda(\mu)}\left< {\delta \phi }^{ 2 } \right>_{\rm ren}$ or $\xi (\mu)R$ with $\mu\simeq{ \left( R+\left< {\delta \phi }^{ 2 } \right>_{\rm ren}\right) }^{ 1/2 }>\Lambda_{I} $. Furthermore, the non-minimal Higgs-gravity coupling can generate large Higgs field vacuum fluctuations via tachyonic resonance [@Kohri:2016wof; @Herranen:2015ima; @Ema:2016kpf]; thus the Higgs effective potential is destabilized, or the catastrophic Anti-de Sitter (AdS) domains or bubbles are formed, during the subsequent preheating stage.

In the rest of this section, we briefly discuss the instability of the electroweak false vacuum after inflation. Just after inflation, the inflaton field $S$ begins to oscillate coherently near the minimum of the inflaton potential $V_{\rm inf}\left( S \right)$ and produces an enormous amount of massive bosons via the parametric or the tachyonic resonance. This transient non-thermal stage is called preheating [@Kofman:1997yn], and is essentially different from the subsequent stages of reheating and thermalization. For simplicity, we approximate the inflaton potential by the quadratic form $$V_{\rm inf}\left( S \right) =\frac { 1 }{ 2 } { m }_{S}^{ 2 }{ S }^{ 2 }.\label{eq:assjfkjwegg}$$ In this case, the inflaton field $S$ classically oscillates as $$S\left( t \right) =\Phi \sin { \left( { m }_{ S }t\right) }, \quad \Phi =\sqrt { \frac { 8 }{ 3 } } \frac { { M }_{ \rm pl } }{ { m }_{ S }t }, \label{eq:aslsjdksg}$$ where the reduced Planck mass is ${ M }_{ \rm pl } = 2.4\times 10^{18}\ { \rm GeV }$. If the inflaton field $S$ dominates the energy density and the pressure of the Universe, i.e., during inflation or the preheating stage, the scalar curvature $R(t)$ is given by $$\begin{aligned} R\left( t \right) &=&\frac { 1 }{ { M }_{\rm pl }^{ 2 } } \left[ 4V_{\rm inf}\left( S \right) -{ \dot { S } }^{ 2 } \right],\\ &\simeq&\frac { { { m }_{ S }^{ 2 } }{ \Phi }^{ 2 } }{ { M }_{\rm pl }^{ 2 } } \left( 3\sin^{2}\left( { { m }_{ S }t }\right) -1 \right). \label{eq:asjsljfgg}\end{aligned}$$ When the inflaton field $S$ oscillates as in Eq. (\[eq:aslsjdksg\]), the effective mass $\xi (\mu)R$ changes drastically between positive and negative values. Therefore, the Higgs field vacuum fluctuations grow extremely rapidly via the tachyonic resonance, which is called geometric preheating [@Bassett:1997az; @Tsujikawa:1999jh].
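To make the sign-flipping of the curvature mass concrete, the short sketch below evaluates Eq. (\[eq:asjsljfgg\]) over one oscillation. The inflaton mass value is an assumption appropriate for quadratic chaotic inflation, and the envelope $\Phi$ is held fixed over the cycle for simplicity.

```python
import numpy as np

M_pl = 2.4e18          # reduced Planck mass [GeV]
m_S  = 7e-6 * M_pl     # inflaton mass, assuming quadratic chaotic inflation

def R_of_t(t, Phi):
    """Scalar curvature R(t) during coherent inflaton oscillations, Eq. above."""
    return (m_S**2 * Phi**2 / M_pl**2) * (3.0 * np.sin(m_S * t)**2 - 1.0)

# fraction of each cycle during which xi * R(t) acts as a tachyonic (negative) mass
t = np.linspace(0.0, 2.0 * np.pi / m_S, 200_001)
frac = np.mean(R_of_t(t, Phi=M_pl) < 0.0)
print(f"R(t) < 0 for ~{frac:.0%} of every oscillation")   # ~39%
```

Since $R(t)<0$ whenever $\sin^{2}(m_{S}t)<1/3$, the curvature term is tachyonic for a fraction $2\arcsin(1/\sqrt{3})/\pi\approx0.39$ of every cycle, which is the origin of the resonant amplification discussed next.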
The general equation for the $k$ modes of the Higgs field during preheating is given as follows: $$\begin{split} \frac {d^{2}\left(a^{3/2}\delta { \phi }_{ k } \right)}{ dt^{2}}+\left(\frac{k^{2} }{a^{2}}+V'_{\rm eff}\left(\phi \right) + \frac{1}{M_{\rm pl}^{2}} \left(\frac{3}{8}-\xi \right)\dot { S }\right. \\ \left.-\frac{1}{M_{\rm pl}^{2}} \left(\frac{3}{4}-4\xi \right)V\left( S \right) \right) \left(a^{3/2}\delta { \phi }_{ k } \right)=0\label{eq:fhslsdfff}. \end{split}$$ Eq. (\[eq:fhslsdfff\]) can be reduced to the following Mathieu equation $$\frac {d^{2}\left(a^{3/2}\delta { \phi }_{ k } \right)}{ dz^{2}}+\left(A_{k}-2q\cos2z\right) \left(a^{3/2}\delta { \phi }_{ k } \right)=0,\label{eq:kmathieu}$$ where we take $z=m_{S}t$ and $A_{k}$ and $q$ are given as $$\begin{aligned} A_{k}&=&\frac{k^{2}}{a^{2}m_{S}^{2}}+\frac{V'_{\rm eff}\left(\phi \right)}{m_{S}^{2}} +\frac{\Phi^{2}}{2 M_{\rm pl}^{2}}\xi, \\ q&=&\frac{3\Phi^{2}}{4M_{\rm pl}^{2}}\left(\xi-\frac{1}{4}\right).\end{aligned}$$ The solutions of the Mathieu equation (\[eq:kmathieu\]) with the non-minimal coupling show the tachyonic (broad) resonance when $q\gtrsim 1$ and the narrow resonance when $q<1$. In the tachyonic resonance regime, where $q\gtrsim 1$, i.e. $ \Phi^{2}\xi \gtrsim M_{\rm pl}^{2}$, the tachyonic resonance strongly amplifies the Higgs vacuum fluctuations at the end of inflation, $\left< { \delta \phi }^{ 2 } \right>_{\rm ren}\gg O\left(H^{2}\left(t \right)\right)$. In the context of preheating, $A_{k}$ and $q$ are $z$-dependent functions due to the expansion of the Universe, which makes it very difficult to derive analytical estimates (see e.g. Ref.[@Ema:2016kpf]). If we take $m_{S}\simeq 7\times10^{-6}M_{\rm pl} $, assuming chaotic inflation with a quadratic potential, we can numerically obtain the condition for the tachyonic resonance as ${ \xi (\mu) }\gtrsim O\left(10\right)$ (see e.g. Ref.[@Kohri:2016wof] for the details). In the narrow resonance regime, where $q< 1$, i.e. $\Phi^{2}\xi < M_{\rm pl}^{2}$, the tachyonic resonance cannot occur, and therefore the Higgs vacuum fluctuations after inflation decrease as $\left< { \delta \phi }^{ 2 } \right>_{\rm ren}\simeq O\left(H^{2}\left(t \right)\right)$ due to the expansion of the Universe. Here, we briefly summarize the results for the Higgs field vacuum fluctuations after inflation as follows: $$\begin{aligned} \begin{cases} \left< { \delta \phi }^{ 2 } \right>_{\rm ren}\gg O\left(H^{2}\left(t \right)\right), \quad\quad \left(\Phi^{2}\xi \gtrsim M_{\rm pl}^{2} \right)\\ \left< { \delta \phi }^{ 2 } \right>_{\rm ren}\simeq O\left(H^{2}\left(t \right)\right). \quad\quad\ \left(\Phi^{2}\xi < M_{\rm pl}^{2} \right) \end{cases} \label{eq:jkdljkdklkdj}\end{aligned}$$ In the same way as in the inflationary stage, if $\mu\simeq{ \left( R+\left< {\delta \phi }^{ 2 } \right>_{\rm ren}\right) }^{ 1/2 } >\Lambda_{I}$, we can infer the sign of $V_{\rm eff}\left( \phi \right)$ by using the inequality $\xi (\mu)R\left(t\right)<\left| {\lambda(\mu)}\left< {\delta \phi }^{ 2 } \right>_{\rm ren} \right| $, the scalar curvature $\left| R\left(t\right) \right| \simeq 3H^{2}\left(t\right)$ and the self-coupling $\lambda(\mu)\simeq -0.01$. If the tachyonic resonance occurs, the effective potential clearly becomes negative, i.e. $V_{\rm eff}'\left( \phi \right)\lesssim 0$, and the homogeneous Higgs field $\phi$ then rolls out to the negative Planck-energy vacuum state.
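The resonance classification above can be read off directly from the parameter $q$. A small hedged sketch follows; it assumes, as in the text, that the oscillation envelope $\Phi$ is of order $M_{\rm pl}$ right after inflation, and the listed couplings are arbitrary test values:

```python
import numpy as np

M_pl = 2.4e18   # reduced Planck mass [GeV]

def mathieu_q(xi, Phi):
    """Resonance parameter q = (3 Phi^2 / 4 M_pl^2)(xi - 1/4) of the Mathieu equation."""
    return 3.0 * Phi**2 / (4.0 * M_pl**2) * (xi - 0.25)

def regime(xi, Phi):
    return "tachyonic (broad)" if mathieu_q(xi, Phi) >= 1.0 else "narrow"

for xi in (1e-2, 1e-1, 1.0, 10.0):
    print(f"xi = {xi:6.2g}: q = {mathieu_q(xi, M_pl):7.3f} -> {regime(xi, M_pl)}")
```

With $\Phi\sim M_{\rm pl}$ the broad regime opens up only for $\xi(\mu)\gtrsim O(1)$; once the redshifting of $\Phi(t)$ during the oscillations is taken into account, the effective threshold moves up to the $\xi(\mu)\gtrsim O(10)$ quoted above.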
On the other hand, in the narrow resonance regime the same catastrophe, in which $\xi (\mu)R\left(t\right)<\left| {\lambda(\mu)}\left< {\delta \phi }^{ 2 } \right>_{\rm ren} \right| $, cannot occur, because the inhomogeneous Higgs field fluctuations after inflation are only $\left< { \delta \phi }^{ 2 } \right>_{\rm ren}\simeq O\left(H^{2}\left(t \right)\right)$ due to the expansion of the Universe, while the scalar curvature decreases as $\left| R\left(t\right) \right| \simeq 3H^{2}\left(t\right) $. Therefore, the narrow resonance does not destabilize the effective potential during preheating. However, we recall that the scalar curvature $R \left( t \right) $ shown in (\[eq:asjsljfgg\]) oscillates during each cycle $t\simeq 1/m_{S}$. The stabilization of the coherent Higgs field by $\xi (\mu)R\left( t \right)$ generally does not work at the end of inflation, because $\xi (\mu)R\left( t \right)$ changes sign during each oscillation cycle. If the oscillation time-scale $t\simeq 1/m_{S}$ is relatively long, the curvature term $\xi (\mu)R\left( t \right)$ can accelerate a catastrophic motion of the coherent Higgs field $\phi\left( t \right)$ immediately after the end of inflation. Here, we briefly discuss the development of the coherent Higgs field $\phi\left( t \right)$ at the end of inflation. Over one oscillation time-scale $t\simeq 1/m_{S}$, we can simply approximate the effective mass as $m^{2}_{\rm eff}\simeq \xi (\mu)R\left( t \right)\approx -3\xi (\mu)H^{2}_{\rm end}$. By using Eq. (\[eq:kdljklkdj\]), the classical behavior of the coherent Higgs field $\phi\left( t \right)$ at the end of inflation can be approximated as $$\begin{aligned} \phi\left( t \right)&\simeq& \phi_{\rm end } \cdot { e }^{\left(3\xi (\mu)H^{2}_{\rm end} \right)t/3H_{\rm end}},\\ &\simeq&\phi_{\rm end } \cdot { e }^{\left(3\xi (\mu)H^{2}_{\rm end}/3H_{\rm end}m_{S} \right)},\\ &\simeq&\phi_{\rm end } \cdot { e }^{\left(\xi (\mu)H_{\rm end}/m_{S} \right) }, \label{eq:jskjfdklkdj}\end{aligned}$$ where the coherent Higgs field $\phi_{\rm end }$ at the end of inflation is generally non-zero and corresponds to the Higgs field vacuum fluctuations $\left< {\delta \phi }^{ 2 } \right>_{\rm ren} \simeq O\left( H^{2}_{\rm end}\right)$ at the end of inflation. Therefore, if $\phi\left( t \right)>\phi_{\rm max}$ [^7], i.e. $H_{\rm end}/m_{S}\gtrsim (\log { 10\sqrt { 3\xi \left( \mu \right) } }) /\xi \left( \mu \right)$, the almost coherent Higgs field $\phi\left( t \right)$ produced at the end of inflation rolls out to the negative Planck-energy vacuum state and causes a catastrophic collapse of the Universe. Furthermore, inflation produces an enormous number of causally disconnected horizon-size domains, and our observable Universe contains $e^{3N_{\rm hor}}$ of them. Thus, by Eq. (\[eq:hswedg\]), we can consider one domain which has the large Higgs field fluctuations $6N_{\rm hor}\left< {\delta \phi }^{ 2 } \right>_{\rm ren}$. The classical motion of the coherent Higgs field in such a domain is given by $$\begin{aligned} \phi\left( t \right)&\simeq& \sqrt {6N_{\rm hor}\left< {\delta \phi }^{ 2 } \right>_{\rm ren} } \cdot { e }^{\left(3\xi (\mu)H^{2}_{\rm end} \right)t/3H_{\rm end}},\\ &\simeq& 10H_{\rm end} \cdot { e }^{\left(\xi (\mu)H_{\rm end}/m_{S} \right) }, \label{eq:jsjdlssdklkdj}\end{aligned}$$ where we take the e-folding number $N_{\rm hor}=60$. Therefore, if $\phi\left( t \right)>\phi_{\rm max}$, i.e. $H_{\rm end}/m_{S}\gtrsim ( \log { \sqrt { 3\xi \left( \mu \right) } })/\xi \left( \mu \right)$, the coherent Higgs field $\phi\left( t \right)$ in such a domain rolls out to the negative Planck-energy vacuum state and forms the catastrophic AdS domains or bubbles, which finally cause the vacuum transition of the Universe.
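Both instability thresholds just derived are monotone in $\xi(\mu)$ and can be tabulated directly. The sketch below is a plain transcription of the two inequalities, with arbitrary sample couplings; a negative threshold simply means that the corresponding patch already starts above the potential barrier:

```python
import numpy as np

def threshold_typical(xi):
    """H_end/m_S above which phi_end ~ H_end outgrows phi_max ~ 10 sqrt(3 xi) H_end."""
    return np.log(10.0 * np.sqrt(3.0 * xi)) / xi

def threshold_rare(xi):
    """Same threshold for the rare horizon patch that starts at phi_end ~ 10 H_end."""
    return np.log(np.sqrt(3.0 * xi)) / xi

for xi in (0.05, 0.5, 5.0):
    print(f"xi = {xi:4.2f}: typical patch unstable for H_end/m_S > "
          f"{threshold_typical(xi):6.1f}, rare patch for H_end/m_S > "
          f"{threshold_rare(xi):6.1f}")
```

The table makes the qualitative point of this subsection explicit: the larger the non-minimal coupling, the smaller the hierarchy $H_{\rm end}/m_{S}$ needed to trigger the post-inflationary instability.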
This conclusion depends strongly on the non-minimal coupling $\xi(\mu)$, the oscillation time-scale $t\simeq 1/m_{S}$ and the Hubble scale $H_{\rm end}$ at the end of inflation. Thus, a large non-minimal Higgs-gravity coupling $\xi(\mu)$ can destabilize the behavior of the coherent Higgs field after the end of inflation. However, if the curvature oscillation is very fast, the curvature mass-term $\xi (\mu)R\left(t\right)$ cannot generate the exponential growth of the coherent Higgs field $\phi\left( t \right)$ after inflation.

After inflation, the inflaton field $S$ oscillates and produces a huge amount of elementary particles. The particles produced during the preheating stage interact with each other and eventually form a thermal plasma. We note that thermal effects during the reheating stage raise the effective Higgs potential via the extra effective mass $m^{2}_{\rm eff}=O\left(T^{2}\right)$. The one-loop thermal corrections to the Higgs effective potential are given as follows: $$\begin{aligned} \Delta { V }_{\rm eff}\left( \phi,T \right)=\sum _{ i=W,Z,t,\phi}{ \frac { { n }_{ i }{ T }^{ 4 } }{ 2{ \pi }^{ 2 } } \int _{ 0 }^{ \infty }{ dk{ k }^{ 2 }\ln { \left( 1\mp { e }^{ -\sqrt { { k }^{ 2 }+{{ M }_{ i }^{ 2 }\left( \phi \right)}/{ { T }^{ 2 } }} } \right) } } } .\end{aligned}$$
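As a minimal numerical illustration of the bosonic piece of this correction (degeneracy factors and the field-dependent masses $M_{i}(\phi)$ are omitted here; the sketch only evaluates the dimensionless thermal integral and verifies its small-$M/T$ behaviour, which is what encodes the $O(T^{2})$ effective mass):

```python
import numpy as np
from scipy.integrate import quad

def J_B(y2):
    """Bosonic thermal function J_B(y^2) = int_0^inf dx x^2 ln(1 - e^{-sqrt(x^2 + y2)})."""
    f = lambda x: x**2 * np.log1p(-np.exp(-np.sqrt(x**2 + y2)))
    # start slightly above 0 to avoid the integrable log singularity at x = 0
    return quad(f, 1e-9, 40.0)[0]

# high-temperature behaviour: J_B(y^2) ~ J_B(0) + (pi^2/12) y^2, so DeltaV carries
# an effective thermal mass squared of O(T^2), which is what lifts the potential
for y2 in (1e-2, 1e-3, 1e-4):
    slope = (J_B(y2) - J_B(0.0)) / y2
    print(f"y^2 = {y2:g}: dJ_B/dy^2 = {slope:.4f}  (pi^2/12 = {np.pi**2/12:.4f})")
```

The numerical slope approaches $\pi^{2}/12$ from below as $M/T\to0$, confirming the thermal mass term.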
It is possible to lift the Higgs effective potential throughout the thermal phase via the large thermal effective mass. However, thermal effects cannot prevent the exponential growth of the coherent Higgs field $\phi\left( t \right)$ after inflation, or the large Higgs vacuum fluctuations generated via the tachyonic resonance during preheating, because there can be a considerable time lag between the oscillations of the inflaton and the build-up of the thermal bath. Furthermore, the thermal fluctuations of the Higgs field ${ \left< { \phi }^{ 2 } \right> }_{ T }\simeq { T }^{ 2 }/12$ might themselves generate the catastrophic AdS domains or bubbles if $T>\Lambda_{I}$ (see e.g. Ref.[@Kohri:2016wof; @Anderson:1990aa; @Arnold:1991cv; @Espinosa:1995se; @Rose:2015lna] for the details). Thus, the thermal effects cannot generally guarantee the stability of the electroweak vacuum in the inflationary Universe. Here, we summarize the conclusions obtained by the above discussion as follows:

- In $H_{\rm end}> \Lambda_{I} $ and $H_{\rm end}/m_{S}\gtrsim (\log 10{ \sqrt { 3\xi \left( \mu \right) } })/\xi \left( \mu \right)$, the almost coherent Higgs field $\phi \left( t \right)$ generated at the end of inflation grows exponentially and finally rolls out to the Planck-energy vacuum state, which leads to the catastrophic collapse of the Universe.

- In $H_{\rm end}> \Lambda_{I} $ and $H_{\rm end}/m_{S}\gtrsim (\log { \sqrt { 3\xi \left( \mu \right) } })/\xi \left( \mu \right)$, the coherent Higgs field $\phi \left( t \right)$ in one horizon-size domain grows exponentially at the end of inflation and forms catastrophic AdS domains or bubbles, which finally cause the vacuum transition of the Universe.

- In the tachyonic resonance regime $\Phi^{2}\xi \gtrsim M_{\rm pl}^{2}$, the Higgs field fluctuations increase enormously, $\left< { \delta \phi }^{ 2 } \right>_{\rm ren}\gg O\left( H^{2}\left( t \right) \right)$. The effective potential then becomes negative, i.e. $V_{\rm eff}'\left( \phi \right)\lesssim 0$, and the excursion of the homogeneous Higgs field $\phi \left( t \right)$ to the negative Planck-energy vacuum state occurs during the preheating stage and causes the catastrophic collapse of the Universe.

- In the narrow resonance regime $\Phi^{2}\xi < M_{\rm pl}^{2}$, the Higgs field fluctuations decrease as $\left< { \delta \phi }^{ 2 } \right>_{\rm ren}\simeq O\left( H^{2}\left(t \right)\right)$ due to the expansion of the Universe, and it is therefore improbable that the effective potential is destabilized during the preheating stage.

A relatively large non-minimal Higgs-gravity coupling, $\xi (\mu)\gtrsim O\left(10^{-2}\right)$, can stabilize the effective Higgs potential and suppress the formation of the catastrophic AdS domains or bubbles during inflation. However, after inflation the effective mass-term $\xi (\mu)R$ from the non-minimal coupling drops rapidly, sometimes becomes negative, and leads to the exponential growth of the coherent Higgs field $\phi\left( t \right)$ at the end of inflation, or to large Higgs vacuum fluctuations via the tachyonic resonance during the preheating stage. Therefore, the non-minimal coupling $\xi(\mu)$ cannot by itself prevent the catastrophic scenario during or after inflation. After all, if the Hubble scale is large, $H> \Lambda_{I} $, corresponding to a relatively large tensor-to-scalar ratio $r_{T}$ in the polarization measurements of the cosmic microwave background radiation, the safety of our electroweak vacuum is inevitably threatened during or after inflation by the behavior of the homogeneous Higgs field $\phi$ or by the generation of the catastrophic AdS domains or bubbles. We can avoid this situation by assuming inflaton-Higgs couplings $\lambda_{\phi S}$ [@Lebedev:2012sy], inflationary stabilizations [@Ema:2016ehh; @Kawasaki:2016ijp], or higher-order corrections from GUT- or Planck-scale new physics [@Branchina:2013jra; @Lalak:2014qua; @Branchina:2014efa; @Branchina:2014usa; @Branchina:2014rva], etc. In any case, however, the electroweak vacuum instability from inflation places tight constraints on physics beyond the standard model.

Conclusion {#sec:Conclusion}
==========

In this work, we have investigated the electroweak vacuum instability during and after inflation. In the inflationary Universe, i.e., in de Sitter space, the vacuum field fluctuations $\left< {\delta \phi }^{ 2 } \right>$ grow in proportion to the Hubble scale $H^{2}$. Therefore, the large inflationary vacuum fluctuations of the Higgs field $\left< {\delta \phi }^{ 2 } \right>$ are potentially catastrophic: they can trigger the vacuum decay to a negative-energy Planck-scale vacuum state and cause an immediate collapse of the Universe. However, the vacuum field fluctuations $\left< {\delta \phi }^{ 2 } \right>$, i.e., the vacuum expectation values, have an ultraviolet divergence, a well-known fact in quantum field theory, and a renormalization is therefore necessary to estimate the physical effects of the vacuum transition. Thus, we have revisited the electroweak vacuum instability during and after inflation from the legitimate perspective of QFT in curved space-time.
We have discussed the dynamics of the homogeneous Higgs field $\phi$, determined by the effective potential ${ V }_{\rm eff}\left( \phi \right)$ in curved space-time, and the renormalized vacuum fluctuations $\left< {\delta \phi }^{ 2 } \right>_{\rm ren}$ obtained by using adiabatic regularization and point-splitting regularization, where we assumed the simple scenario in which the Higgs field couples to gravity only via the non-minimal Higgs-gravity coupling $\xi(\mu)$. In this scenario, we conclude that the Hubble scale must satisfy $H<\Lambda_{I} $, or the Higgs effective potential in curved space-time must be stabilized below the Planck scale by new physics beyond the standard model. Otherwise, our electroweak vacuum is inevitably threatened by the catastrophic behavior of the homogeneous Higgs field $\phi$ or by the formation of AdS domains or bubbles during or after inflation.

This work is supported in part by MEXT KAKENHI No.15H05889 and No.JP16H00877 (K.K.), and JSPS KAKENHI Nos.26105520 and 26247042 (K.K.).

[^1]: In de Sitter space, the scalar curvature becomes $R=12H^{2}$, and therefore the non-minimal coupling $\xi$ provides the effective mass-term $m^{2}=12H^{2}\xi$.

[^2]: This is equal to the thermal fluctuations $\left< { \delta \phi }^{ 2 } \right>_{\rm ren} =T^{2}/12$ with the Gibbons-Hawking temperature $T=H/2\pi$ experienced by a point observer inside the de Sitter horizon (see, e.g. Ref.[@Acquaviva:2014xoa; @Obadia:2008rt]).

[^3]: We define the instability scale $\Lambda_{I}$ as the scale at which the derivative of the standard model Higgs effective potential in Minkowski space-time becomes negative. The current measurements of the Higgs boson mass, $m_{h}=125.09\ \pm \ 0.21\ ({\rm stat})\ \pm \ 0.11\ ({\rm syst})\ {\rm GeV}$  [@Aad:2015zhl; @Aad:2013wqa; @Chatrchyan:2013mxa; @Giardino:2013bma], and the top quark mass, $m_{t}=173.34\pm 0.27\ ({\rm stat})\pm 0.71\ ({\rm syst})\ {\rm GeV}$ [@2014arXiv1403.4427A], suggest the instability scale $\Lambda_{I} = 10^{10} \sim 10^{11} \ {\rm GeV}$ [@Buttazzo:2013uya].

[^4]: In the case ${ \left( 12H^{2}_{\rm inf}+\left< {\delta \phi }^{ 2 } \right>_{\rm ren}\right) }^{ 1/2 }<\Lambda_{I}$, the quartic term $\lambda(\mu)\phi^{4}/4$ makes a positive contribution to the effective potential unless $\phi>\Lambda_{I}$. Therefore, the homogeneous Higgs field $\phi$ cannot classically roll out to the Planck-scale vacuum state. However, it is still possible to generate AdS domains or bubbles via the inhomogeneous Higgs field fluctuations, as shown in (\[eq:hswedg\]).

[^5]: The expansion of AdS domains or bubbles never takes over the expansion of the inflationary dS space [@Espinosa:2015qea], and therefore it is impossible for one AdS domain to terminate the inflation of the Universe. However, if the non-inflating domains or the AdS domains dominate all of the space of the Universe [@Sekino:2010vc], the inflating space would crack, and the inflation of all of space would finally come to an end.

[^6]: The Higgs effective potential with the large effective mass-term can be approximated by $$V_{\rm eff }\left( \phi \right) \simeq \frac { 1 }{ 2 }m^{2}_{\rm eff}{ \phi }^{ 2 }\left( 1-\frac { 1 }{ 2 } { \left( \frac { \phi }{ { \phi}_{\rm max } } \right) }^{ 2 } \right),$$ where ${ \phi }_{\rm max }$ is given by $${\phi }_{ \rm max }=\sqrt { -\frac { m^{2}_{\rm eff}}{ { \lambda }\left( \mu \right)} }.$$ Our approximation $\phi_{\rm max}\simeq 10\ m_{\rm eff}$ is numerically valid for the one-loop effective potential.
[^7]: By using $\phi_{\rm max}\simeq 10m_{\rm eff}$, we can simply approximate $\phi_{\rm max}\simeq 10\sqrt{\xi (\mu)\left| R\left(t\right) \right|}\simeq 10H_{\rm end}\sqrt{3\xi (\mu)}$
Separation and properties of EA-rosette-forming lymphocytes in humans. Human peripheral blood lymphocytes were separated into subpopulations enriched or depleted with respect to B lymphocytes (Ig-bearing cells), T lymphocytes (cells forming rosettes with sheep erythrocytes: E-RFC) and Fc receptor-bearing lymphocytes (EA-RFC). From the distributions and recoveries of the various cell types it could be concluded that there was very little overlap between Ig-bearing lymphocytes and EA-RFC. The latter cells partly belonged to "null" (non-T, non-B) cells; it was, however, demonstrated that 30% of the EA-RFC were T cells (E-RFC). The lytic capacity in antibody-dependent lymphocytotoxicity (ADL) was shown to correspond with the proportions of EA-RFC in the various fractions. Non-T cells showed enhanced ADL activity when compared to the unseparated cells. Purified T cell populations also displayed ADL activity. Since the latter could not be due to contaminating non-T cells, this activity was ascribed to Fc receptor-bearing T lymphocytes.
+ -2? -45410031 In base 6, what is 0 + 34151425? 34151425 In base 9, what is 3104 - 288401? -285286 In base 7, what is 100 + 533004? 533104 In base 2, what is -100111101 - -111000001000100? 110111100000111 In base 7, what is 4 + 40433211? 40433215 In base 15, what is 2600599 + 5? 260059e In base 9, what is 1 + 34648808? 34648810 In base 7, what is -164 - -60061? 56564 In base 5, what is -232103211 + -14? -232103230 In base 4, what is 320 + 2300211130? 2300212110 In base 7, what is -104536164 + -3? -104536200 In base 11, what is 801a6299 - 3? 801a6296 In base 9, what is -6 + 4202564? 4202557 In base 14, what is 2a0 + 67? 327 In base 13, what is 52c07c - -b? 52c08a In base 7, what is 4 - 4505441? -4505434 In base 3, what is -20220000 - 12022110? -110012110 In base 5, what is -1 + 1243100143? 1243100142 In base 7, what is 25 - 4412463? -4412435 In base 6, what is 1 - -350544410? 350544411 In base 7, what is 3105430221 - 4? 3105430214 In base 2, what is -10000101110111010010011001 - 100? -10000101110111010010011101 In base 4, what is 23013022113 - 1? 23013022112 In base 4, what is -221033330 - 221? -221100211 In base 10, what is 29 - 3059564? -3059535 In base 14, what is -3 + -1a534033? -1a534036 In base 10, what is 1 + 1704310? 1704311 In base 5, what is -120213410442 - -4? -120213410433 In base 11, what is -569 - -127791? 127223 In base 7, what is 6202463631 - 0? 6202463631 In base 3, what is 10121 + 1021002? 1101200 In base 2, what is 10001001011000000 + 1110000010? 10001011001000010 In base 12, what is -2 + -2b264491? -2b264493 In base 2, what is 110 - -10011010111001101? 10011010111010011 In base 14, what is 9b68 - 7cac? 1c9a In base 14, what is -63d7 - 9b? -6494 In base 7, what is -2 + -2254361616? -2254361621 In base 7, what is -3612166126 - -11? -3612166115 In base 10, what is 1119 - -18397? 19516 In base 14, what is 5 - -a67208? a6720d In base 16, what is 146 - -cbc4e? cbd94 In base 10, what is -4 - 6384995? -6384999 In base 7, what is -2 + 2213406221? 2213406216 In base 11, what is -411158 - 117a? -412327 In base 16, what is -20c4221 + c? -20c4215 In base 11, what is 35 + 15965? 1599a In base 6, what is -15320 + 2021533? 2002213 In base 15, what is -5957eb4 + 1? -5957eb3 In base 12, what is 19a749 + 13? 19a760 In base 3, what is 1212 + -10110201000? -10110122011 In base 5, what is -10 - 44444102213? -44444102223 In base 10, what is -889098 + 0? -889098 In base 4, what is 21013330211 - 1330? 21013322221 In base 12, what is 7 - -90b2b5? 90b300 In base 11, what is -1 - -2398269? 2398268 In base 16, what is 35c894 + 4? 35c898 In base 6, what is 455035454 + 5? 455035503 In base 8, what is 2175 + -261236? -257041 In base 7, what is 114640123 - -102? 114640225 In base 12, what is -23869123a - 0? -23869123a In base 11, what is 18713 - 484? 1823a In base 3, what is -111 + 101211201201120? 101211201201002 In base 15, what is -13 + 262468? 262455 In base 10, what is -207 - -28? -179 In base 12, what is 10 + 21b329? 21b339 In base 10, what is -2 + 66147174? 66147172 In base 8, what is -273435 + 1047? -272366 In base 4, what is -1 - -1312220110? 1312220103 In base 7, what is -5 - -66032520? 66032512 In base 6, what is 25155342 - -2? 25155344 In base 15, what is 16 - -1eba0? 1ebb6 In base 11, what is 16390 - -1a6? 16586 In base 10, what is 852 + -14600? -13748 In base 15, what is 13d95 - -17a? 14020 In base 12, what is 870 + 5755b? 5820b In base 4, what is 31103331 + -220? 31103111 In base 5, what is -2 - -11422323002? 11422323000 In base 14, what is 43754a5c + 1? 
43754a5d In base 4, what is -30131102003 - 13? -30131102022 In base 5, what is -2011 - -1140431334? 1140424323 In base 11, what is -442346 + -61? -4423a7 In base 5, what is -420412 - -24444? -340413 In base 15, what is -26ca070 + -2? -26ca072 In base 8, what is 156705031 + -13? 156705016 In base 8, what is -1720017 + -770? -1721007 In base 9, what is 2 + -124153712? -124153710 In base 10, what is -1016875 - 9? -1016884 In base 6, what is 13331025 - 4? 13331021 In base 4, what is -3 + -2021013200233? -2021013200302 In base 5, what is 4 - 342414144? -342414140 In base 15, what is 16 - -211358? 21136e In base 2, what is 1101100001000 + -1100010110111? 1001010001 In base 7, what is 110 - -445165? 445305 In base 2, what is 1000001001010011100010010 + 1? 1000001001010011100010011 In base 8, what is 355 - 1541335? -1540760 In base 7, what is -3 - 143313152? -143313155 In base 11, what is -284389 - -274? -284115 In base 14, what is 3c9a002 - -5? 3c9a007 In base 6, what is 24404 - -325350? 354154 In base 13, what is -1804 + -54416? -55c1a In base 10, what is 102 + 204706? 204808 In base 3, what is 1100021222010 + 1? 1100021222011 In base 2, what is -110011 - 1000110000101010? -1000110001011101 In base 5, what is -2440144314 - 4? -2440144323 In base 11, what is 4155 - -4222? 8377 In base 5, what is 24023221 - -313? 24024034 In base 15, what is 30cc9 + -128? 30ba1 In base 12, what is -9a + -17281? -1735b In base 11, what is 11772354 + -5? 1177234a In base 15, what is 2d + c1b871? c1b89e In base 7, what is -64251546 - 116? -64251665 In base 2, what is -10110 + 101100100101000011010? 101100100101000000100 In base 8, what is 4514261403 + -2? 4514261401 In base 5, what is -4002222 - -1011? -4001211 In base 6, what is 0 + -11513024555? -11513024555 In base 3, what is -220012001 - 100122110? -1020211111 In base 7, what is -12 + -233160544? -233160556 In base 6, what is -304552 + -243112? -552104 In base 4, what is 2031 + -1000232133? -1000230102 In base 16, what is -1341ea - -1? -1341e9 In base 16, what is 9d74f9c + -3? 9d74f99 In base 6, what is -4035135110 + 10? -4035135100 In base 10, what is -17 - 4436250? -4436267 In base 8, what is -1031115 + 1331? -1027564 In base 3, what is -11100210 - 102221202? -121022112 In base 5, what is 102323231 + 30? 102323311 In base 3, what is 22220 - -11120022? 11220012 In base 14, what is 19a490 + 70? 19a520 In base 5, what is 2 - -14001133140? 14001133142 In base 3, what is -12011001110201 + 101? -12011001110100 In base 14, what is 287b767 + 1? 287b768 In base 13, what is 6 - -bb625? bb62b In base 5, what is 432 + 1023010? 1023442 In base 4, what is 30 - 100332112? -100332022 In base 14, what is 471c9 - -6c? 47257 In base 7, what is -42551032 + 4? -42551025 In base 9, what is 0 + 1606686? 1606686 In base 9, what is 85344163 + -2? 85344161 In base 15, what is -3 - -2d3830? 2d382c In base 12, what is -27 - a17158? -a17183 In base 4, what is 122223 - -3122010? 3310233 In base 11, what is 708307272 - -5? 708307277 In base 12, what is 3 - 4291a365? -4291a362 In base 14, what is -5cdbc - -169? -5cc53 In base 16, what is -83 - ad3d0d? -ad3d90 In base 15, what is 9045954 + 3? 9045957 In base 7, what is 415100526 + -5? 415100521 In base 7, what is -6 - 201404235? -201404244 In base 13, what is 2 - -25a276? 25a278 In base 14, what is 32 + -121115? -1210c3 In base 13, what is -3c519 + 169? -3c380 In base 2, what is -1010101011 + 110110110010? 101100000111 In base 12, what is -2182 - 4a650? -50812 In base 13, what is 226 + -15ac76? 
-15aa50 In base 2, what is -10100011111101100111 + 1011010? -10100011111100001101 In base 12, what is 12573 + a62? 13415 In base 9, what is 173 - -446172? 446355 In base 5, what is 4424400230 + -310? 4424344420 In base 13, what is 4 + 4418b05? 4418b09 In base 5, what is -4 - 2224030313? -2224030322 In base 13, what is -c6 - 29643? -29739 In base 7, what is 4550021660 + -1? 4550021656 In base 8, what is 1 - -47422162? 47422163 In base 9, what is 414 - -10148? 10563 In base 13, what is b86b950 - 2? b86b94b In base 15, what is -95742 - 8c? -957ce In base 13, what is 2132 + -33663? -31531 In base 6, what is 10 + -41041530? -41041520 In base 6, what is -322015341 - 4? -322015345 In base 5, what is 1010410240 - 30? 1010410210 In base 5, what is -400043 - -41221? -303322 In base 13, what is -45 - -10cc552? 10cc50a In base 7, what is -554603420 - -3? -554603414 In base 9, what is 17 + 36574058? 36574076 In base 11, what is 97563 + -38? 97526 In base 16, what is -7a - -7a69? 79ef In base 10, what is 19704761 + -4? 19704757 In base 5, what is 3100314 + 100220? 3201034 In base 2, what is 1000001001101111100 + 11000? 1000001001110010100 In base 4, what is 202220
The Liberation of Mosul — Now What? The Liberation of Mosul — Now What? The Liberation of Mosul — Now What?2017-07-132017-07-14/wp-content/uploads/2017/03/ma-logo-new.pngMichele Rigby Assadhttps://michelerigbyassad.com/wp-content/uploads/2017/07/mosul-devastation.jpg200px200px The Iraqi government announced its liberation of Mosul this week after nine months of intense operations to clear the city of ISIS. Three years ago ISIS waltzed across the Syria-Iraq border, took over Iraq’s second largest city, and declared Mosul part of its caliphate. But victory has come at the expense of the destruction of Mosul and the Ninevah Plains. Northern Iraq is utterly devastated. According to Sky News Arabia, 80% of Mosul has been destroyed which includes 308 schools, 12 educational institutes (including Mosul University) 11,000 homes, four electric stations, six water purification plants, 212 workshops/businesses, 63 houses of worship (historic mosques and churches), 29 hotels, factories, 9 hospitals, and 76 health centers. According to Refugees International, war and instability has caused the displacement of 3.3 million people. Today, 11 million people require assistance to survive: water, food, and shelter. They have lost their property, their homes, and their livelihoods. Most are not able to rebuild their shattered lives. Amnesty International warned in a report released Tuesday that the conflict in Mosul has created a “civilian catastrophe,” because extremists carried out forced displacement, summary killings and used civilians as human shields. On the other side of the equation, Iraq government forces (which include government troops, Shi’a militias, and Iranian advisors) have also engaged in extrajudicial killings. They are known for their “death squads” that shoot first and ask questions later. While there is great relief that most of Northern Iraq has been liberated, it doesn’t mean that everything’s OK now. There will be ongoing problems with remaining terrorist elements, regardless of whether they stay in Iraq, Syria, or go elsewhere in the region. Foreign fighters and their families have been abandoning ISIS and beginning the long journey home via Turkish land borders. Two months ago, two British nationals and a U.S. citizen (from Jacksonville, FL) were arrested by Turkish forces as they crossed the border. And that’s just a drop in the bucket of foreign fighters leaving Iraq. Once home, many of them will claim to have gone to Iraq or Syria for humanitarian purposes. Sure. It is immensely hard to determine which Iraqis are innocent civilians and which are Islamic State fighters, families and supporters. This is the messy part of war rarely acknowledged by the outside world–but remains a major driver of sectarian strife. Radical remnants don’t just disappear. Those that are not killed have to go SOMEWHERE. And Shi’a militants are working to ferret them out of the local population, and not in ways that would make any of us feel comfortable. We’re not out of the woods yet. The protracted humanitarian crisis may get worse. It takes time and resources to rebuild infrastructure and economies. Not only is Mosul’s infrastructure a disaster, cities across Northern Iraq are completely uninhabitable. (Having worked directly with Christians from historic villages such as Qaraqosh, I know that if these people could go back, they would in an instant. The fact that refugee camps are still full, indicates how ravaged the countryside is. 
If ISIS couldn’t control the area, they made sure no one else could either.) There is little to suggest that Baghdad has the capacity to move the country forward without encroaching on historic Sunni, Yazidi and Christian areas. And consider this wild card: The Kurdish government just announced an independence referendum for late September. This is a provocative, yet timely step that takes advantage of the chaos to advance the Kurds’ greatest dream. This could cause further tension in the region, depending on the Kurds’ ability to placate the Baghdad government, as well as Turkey and Iran with economic incentives. (Background on Kurdistan: The northeastern section of Iraq is referred to as Kurdistan. The Kurds are not Arabs. They are a distinctly different people group historically located in and along the border areas of Turkey, Iraq, and Iran. Kurds were severely persecuted under Saddam Hussein until the US, UK and France established a no fly zone in Northern Iraq after the First Gulf War in 1991. (You may recall shocking images, like the one below, of thousands of Kurds killed with mustard gas and nerve agents as part of Saddam’s al-Anfal genocidal campaign.) The no fly zone enabled the Kurds to get back on their feet and establish self-governing mechanisms.) Over the past three years, the Kurds have been hospitable to those displaced by ISIS. I commend that. But information I have just received from sources in Northern Iraq indicates that the Kurdish government may kick out tens of thousands of Christians, along with Arab Sunnis, who make up the largest portion of displaced persons. As far as the Christians go–they don’t have many options. Outside of the Christian cities and villages that they hail from, which are now destroyed, much of Iraq remains inhospitable to them—with or without ISIS. (Context: Saddam Hussein largely allowed Christians to live in peace, but since the Iraq War, the numbers of Christians in Iraq have tanked. In 2003, there were approximately 1.1 million Christians, and today it’s approximately 180,000. What’s worse is that many Christians are what I call “repeat refugees” which means that they have fled multiple Iraqi cities over the years. They are in a constant state of upheaval and are exhausted by the struggle to survive.) Photo Credit: Michele Rigby Assad Kurdish authorities are considering the closure of refugee camps. They are concerned that ISIS and similar groups are using them as a base for operations. (This is one of the most underreported dynamics in Western press. We like to think all people in refugee camps are innocent. Not true. Refugee camps are microcosms of the conflicts from which they emerge, meaning that there is representation from all elements of society. The last few refugee camps set up outside Mosul contain a large number of ISIS families. The terrorists sent their families out of the city when the fighting got too intense.) Recommendations While I support the right of the Kurds for self-determination, I would like our government to engage Kurdish authorities and ask that they allow Kurdistan to remain a safe haven for displaced persons, particularly those that are most vulnerable, such as Christians and Yazidis (an Islamic sect that is considered heretical by Sunnis and Shi’a). Our government should encourage Kurdish authorities to allow churches, mosques and nongovernmental organizations in Kurdistan to continue providing support to those whose lives were destroyed by ISIS.
Reconstruction support should not be funneled through Baghdad’s central government as that money will never reach intended recipients due to corruption and mismanagement.
Fluoride-decorated oxides for large enhancement of conductivity in intrinsic silicon nanowires. We demonstrate that by controlling surface state properties with a simple and rapid aqueous fluoride ion treatment, excellent electronic operation can be achieved on intrinsic Si nanowires without bulk doping, thereby taking advantage of the increase in surface-to-volume ratios in these diminutive structures. Forming the fluoride-decorated (F-decorated) oxide surface significantly increases the conductivity by more than one order of magnitude over the oxide surface and more than three orders of magnitude over the oxide-free H-terminated surface. This provides a methodology that might, in some instances, sidestep the difficult-to-control impurity doping of nanodevices.
Saturday, September 19, 2015 "Happy People" Video Devotion "Happy People" is the title of this KJV video devotion from Psalm 144, verses 11 through 15. What is the key to happiness? What does it take to make happy people? Surprisingly, very few people actually know the answer to that question, yet it appears that the writer of Psalm 144 did! Happiness is not bundled in the abundance of possessions, but rather, true happiness is a gift from the Lord. The Psalmist is correct in asserting, "Happy is that people, whose God is the Lord." About Me I live in Pella, Iowa, with my husband, 1 child still at home, 1 dog and 2 cats. Along with this blog, I have a website: Devotional Reflections from the Bible. In addition, I am a piano teacher. I know I am truly blessed to be able to do two of the most important things in my life - write Bible devotionals and teach music. God is so very good. My goal is to encourage others in their walk with the Lord, and to present the Gospel in a clear and direct manner. Christ is everything to me and that is my goal for my readers as well. He that dwelleth in the secret place of the Most High shall abide under the shadow of the Almighty. Psalm 91:1, KJV God Bless You, Linda
Phoenix police: Woman accidentally runs over man she was trying to help Phoenix police say a man died on Thursday afternoon after a woman who was trying to help him with a medical issue forgot to put her vehicle in park, causing it to run over him. Police say 34-year-old Michael Phillips was standing on the sidewalk near Bell Road and 42nd Avenue when "it appeared he was having a medical event," according to a release from the Phoenix Police Department. A 55-year-old woman pulled over in her 2004 Buick Sedan to help Phillips, who fell into the roadway as she stopped her car. Police say that in her rush to help Phillips, she forgot to put her car in park and it proceeded to roll over him. Phillips was taken to a local hospital where he was pronounced dead. Police say the woman, who is unidentified, remained at the scene and didn't show signs of impairment. Police say the Maricopa County Medical Examiner's Office will determine the cause of death. Reach the reporter Perry Vandell at 602-444-2474 or [email protected]. Follow him on Twitter @PerryVandell.
Q: Is there an R function to extract data from a matrix with associated column and row headings? I would like to extract all distance variables from a matrix as well the row name and heading for each variable so that I end up with 3 columns of data: row1, heading1, 1stdatapoint I am able to extract the distance data to a vector but unable to extract the associated row and heading information for each point. Gen.v<-c(Gen.mat) A: If I have understood you correctly you want to create a 3-column data frame from the matrix where 1st is the row name, second is the column name and third the value. We can do it using row and col functions to get row and column indices for each element in matrix and get the corresponding rownames and colnames respectively. data.frame(row = rownames(Gen.mat)[c(row(Gen.mat))], col = colnames(Gen.mat)[c(col(Gen.mat))], value = c(Gen.mat)) # row col value #1 row1 col1 1 #2 row2 col1 11 #3 row1 col2 2 #4 row2 col2 12 #5 row1 col3 3 #6 row2 col3 13 where Gen.mat is Gen.mat <- structure(c(1, 11, 2, 12, 3, 13), .Dim = 2:3, .Dimnames = list( c("row1", "row2"), c("col1", "col2", "col3"))) Gen.mat # col1 col2 col3 #row1 1 2 3 #row2 11 12 13
Browse by Tags Calculating the number of new and returning customers is a recurring question. I would say this is a “classical” Business Intelligence problem, very common in marketing department. I worked on these problems with many customers, with small and large datasets, and I wrote a DAX Pattern “New and Returning Customers” showing how to calculate: New ... I'm so happy that DAX Studio finally supports Excel 2013! As Darren Gosbell described in his blog, this release has a few internal changes that will better support future enhancements. I will port the code to capture the query plan for a query in this new release, but unfortunately it will require some weeks because I'm traveling a lot in these ... During a PowerPivot Workshop course we received an interesting question from a student: “Can I use LASTNONBLANK (and FIRSTNONBLANK) with a column which is not a date column?” The reason is that we introduce LASTNONBLANK in the Advanced Time Intelligence module, because its typical use case is on a date column. However, you can use these functions ... Apparently Excel does not offer a way to import data in Excel by using a DAX query on Analysis Services. The Data Connection Wizard seems to offers only the ability to create a PivotTable when you connect to Tabular, but not a Table (see the Table option disabled in the next picture). However, the workaround is to create a connection file and ... The DATE function in DAX has this simple syntax: DATE( <year>, <month>, <day> ) If you are like me, you never read the BOL notes that says in a clear way that it supports dates beginning with March 1, 1900. In fact, I was wrongly assuming that it would have supported any date that can be represented in a Date data type in Data ...
Forgotten Investment In Bitcoin Buys Man A Flat In Central Oslo In 2009, Kristoffer Koch bought 5,000 bitcoins for £16 and forgot about it A Norwegian citizen has made the headlines thanks to a small investment in Bitcoin that years later bought him a flat in Oslo. Kristoffer Koch, a 29-year-old engineer, got 150 kroner (£16) worth of bitcoins in 2009, at a time when the virtual currency was trading for pennies. He completely forgot about it until April this year, when he read an article about the soaring value of the virtual currency. Today, the typical price of one Bitcoin stands at around £132. Lost wallet Bitcoins are a digital currency based on an open-source, peer-to-peer Internet protocol, first introduced in 2009 by an anonymous developer known under the alias ‘Satoshi Nakamoto’. It was long thought that bitcoins cannot be traced to establish their ownership, leading to their popularity among certain Internet subcultures. However, several experts have suggested that the anonymity of Bitcoin is an illusion, and with some effort, digital wallets can be connected to real individuals. Norwegian broadcaster NRK reports that Koch invested in the newly created currency while he was writing a university paper on encryption. He completely forgot about this episode until reading about the wild price fluctuations of the Bitcoin market in April. Koch says he spent a day trying to find the password to his Bitcoin wallet, which turned out to be worth around £429,000. Cashing out just a fifth of this amount allowed the lucky engineer to buy and renovate a flat worth 2.6 million kroner (£116,000) in the wealthy Toyen neighbourhood in Oslo. “It’s bizarre, these psychological reflexes that make us attach a value to something that doesn’t have any in itself,” said Koch. Earlier this month, the price of Bitcoin dropped sharply following the closure of illegal ‘Dark Web’ marketplace Silk Road by the FBI, and the seizure of $3.6 million (£2.2m) worth of bitcoins. It has since fully recovered, hitting a new post-crash high last week. Max 'Beast from the East' Smolaks covers open source, public sector, startups and technology of the future at TechWeekEurope. If you find him looking lost on the streets of London, feed him coffee and sugar.
Liverpool hero: Reds did not blow title Former Liverpool striker John Aldridge doesn’t believe that his old team blew their chance of winning a first ever Premier League title, although he says naivety cost them. The Reds were in pole position with just three games to go, before a costly defeat at home to Chelsea and a shock 3-3 draw against Crystal Palace – a game they led by three goals – handed the momentum to Manchester City. The Sky Blues went on to win their fixtures in hand and take the title by two points, which was met with great disappointment in some sections of Merseyside. Prior to the loss against Jose Mourinho’s Blues, Liverpool had been on a fantastic 11-game winning run, during which time they’d beaten the likes of City, Manchester United and Tottenham with a series of impressive displays. Writing in his column for the Liverpool Echo, Aldridge says that he’s proud of his former side and that he does not feel they bottled it late on to lose out on the title: “Liverpool didn’t throw the title away. “What we did in the last 14 games of the season was unbelievable. To win 12, draw one and lose one and come second just shows how difficult it is to win the Premier League. “We were up against a top quality team which has been put together with a ridiculous amount of money. We pushed Manchester City all the way. We put ourselves in a position none of us thought was possible by winning 11 on the bounce. “We took 37 points out of the last 42 – that’s not blowing it. “What cost us in the end? Probably a bit of naivety, a lack of depth and some bad luck.”
Computational neuroscience in China. The ultimate goal of Computational Neuroscience (CNS) is to use and develop mathematical models and approaches to elucidate brain functions. CNS is a young and highly multidisciplinary field. It heavily interacts with experimental neuroscience and such other research areas as artificial intelligence, robotics, computer vision, information science and machine learning. This paper reviews the history of CNS in China, its current status and the prospects for its future development. Examples of CNS research in China are also presented.
Lino Enea Spilimbergo Lino Enea Spilimbergo (born Lino Claro Honorio Enea Spilimbergo) (12 August 1896 – 16 March 1964) was an Argentine artist and engraver, and he is considered to be one of the country's most important painters. Biography Lino Enea Spilimbergo was born in Buenos Aires in 1896, the son of Italian immigrants, Antonio Enea Spilimbergo and María Giacoboni, and his full name was Lino Claro Honorio Enea Spilimbergo. His early years were spent in the Buenos Aires neighborhood of Palermo. Whilst visiting his mother's relatives in northern Italy with his family he contracted pneumonia, which in later years caused him to suffer from asthma. Returning to Buenos Aires in 1902 he started his schooling, which ended in 1910, when he began working for the post office to support himself. From then on, until 1924, he kept this job in parallel with his painting. In 1917, he graduated from the Academia Nacional de Bellas Artes and in September of that year his father died. At the age of 22, he began writing his autobiography, and in 1920 he wrote a booklet about his thoughts, in order to arrange and organize his life and work. In 1921 the Salón Nacional de Bellas Artes accepted, for the first time, one of his pieces and later that year, following the recommendation of his doctor to live in a place with a drier climate, his employer agreed to relocate him to Desamparados in San Juan Province. He stayed there until he resigned his job in 1924, and it was during this period that he had his first individual exhibition. Using prize money he had won in an art exhibition, he traveled to Europe in 1925 and visited Florence, Venice, Palermo and other Italian cities in search of classical art sources, paying particular attention to frescos. He then moved to Paris where he studied in the mornings at the Académie de la Grande Chaumière and in the studio of the French painter, sculptor and writer André Lhote and came under the influence of post-cubism and the work of Paul Cézanne. In 1928, he returned to Argentina to live in Las Lomitas, San Juan Province, and in 1929, his son Antonio was born. A year later, he moved back to Buenos Aires and in 1933, together with the Mexican artist David Alfaro Siqueiros and the Argentine artists Antonio Berni and Lozano produced the mural entitled Ejercicio Plástico. To explain their principles and ideas, the group produced a document under the same title. The involvement in this work was a decisive event in Spilimbergo's life and marked the start of his career as a muralist. In 1945, together with Berni, Juan Carlos Castagnino, Manuel Colmeiro Guimarás and Demetrio Urruchúa, he was one of the contributors to the frescos which decorate the large central cupola of Galerías Pacífico on pedestrian Florida Street, Buenos Aires. During his years as a painter, Spilimbergo developed a very personal synthesis of diverse styles, in particular the classical and the modern. From the post-impressionism of his first period, dominated by landscapes and local scenes, he later passed on to a study of the human figure. His figures were solid and monumental and the surreal and metaphorical often found its way into his works. His subjects included the marginalised and the disinherited, from the slum dwellers of Buenos Aires to the rural workers of the northern provinces. He taught art at Instituto de Arte Gráfico during the period 1934–1939 where his students included Medardo Pantoja (1906–1976), Eolo Pons (1914), Luis Lusnich (1911–1995), and Leopoldo Presas (1915). 
At the National University of Cuyo, his students included Carlos Alonso. Spilimbergo's pictures were widely exhibited in Latin America, the United States and Europe during this period. Spilimbergo died in 1964 in the small town of Unquillo in Córdoba Province.
--- abstract: 'Knowledge of shape geometry plays a pivotal role in many shape analysis applications. In this paper we introduce a local geometry-inclusive global representation of 3D shapes based on computation of the shortest quasi-geodesic paths between all possible pairs of points on the 3D shape manifold. In the proposed representation, the normal curvature along the quasi-geodesic paths between any two points on the shape surface is preserved. We employ the eigenspectrum of the proposed global representation to address the problems of determination of region-based correspondence between isometric shapes and characterization of self-symmetry in the absence of prior knowledge in the form of user-defined correspondence maps. We further utilize the commutative property of the resulting shape descriptor to extract stable regions between isometric shapes that differ from one another by a high degree of isometry transformation. We also propose various shape characterization metrics in terms of the eigenvector decomposition of the shape descriptor spectrum to quantify the correspondence and self-symmetry of 3D shapes. The performance of the proposed 3D shape descriptor is experimentally compared with the performance of other relevant state-of-the-art 3D shape descriptors.' author: - 'Somenath Das[^1]' - 'Suchendra M. Bhandarkar[^2]' bibliography: - './iccv\_cefrl\_17.bib' date: 'June 17, 2017' title: Local Geometry Inclusive Global Shape Representation --- [**Keywords:**]{} 3D shape representation, eigenspectrum decomposition, shape correspondence, shape symmetry Introduction {#sec:intro} ============ In the field of shape analysis, the computation of an optimal global description of a 3D shape is critically dependent upon the underlying application. Local shape geometry is important for applications where it is essential to establish point-to-point correspondence between candidate shapes. On the other hand, applications based on shape similarity computation rely on a suitably formulated global metric to characterize shape similarity. Based on the objective(s) of the application and nature or modality of the underlying shape data/information (i.e., geometric, topological, etc.), 3D shape analysis applications can be broadly categorized as purely geometric, semantic or knowledge-driven [@van2011survey]. However, a large number of 3D shape analysis applications that belong to these categories or lie within their intersections are based upon a fundamental problem, i.e., that of determining the correspondence between the 3D shapes under consideration. Typical examples of these applications include rigid and non-rigid shape registration  [@gelfand2005robust; @chang2008automatic], shape morphing  [@kraevoy2004cross], self-symmetry detection  [@gal2006salient], shape deformation transfer  [@sumner2004deformation], 3D surface reconstruction  [@pekelny2008articulated], shape-based object recognition and retrieval [@jain2007spectral], to name a few. In each of the aforementioned applications, shape descriptors play a critical role in determining the necessary 3D shape correspondence. Depending on the nature of the application, 3D shape descriptors could be purely geometric and used to capture the local 3D geometry of the shapes whereas others may incorporate prior knowledge about the global 3D shape. 
Ideally, a 3D shape descriptor should demonstrate robustness to topological noise while simultaneously capturing the underlying invariant shape features that are useful in computing the correspondence between 3D shapes. In this paper, we address an important problem, i.e., that of determining correspondence between isometric 3D shapes (i.e., 3D shapes that have undergone isometry deformation or transformation with respect to each other) *without* using any prior knowledge about the underlying shapes. To this end, we propose a 3D shape descriptor based on estimation of the approximate geodesic distance between all point pairs on the 3D shape manifold. The proposed representation is used to address the computation of 3D self-symmetry, determination of correspondence between isometric 3D shapes and detection of the most stable parts of the 3D shape under varying degrees of isometry (i.e., non-rigid pose) transformation between shapes. Since the geodesics over a 3D shape manifold are defined as surface curves of constant normal curvature, they naturally encode the local surface geometry along the curve. On a discrete triangulated 3D surface mesh, the discrete approximation to a geodesic is characterized by an optimal balance in terms of the angular distribution on either side of the discrete geodesic computed over the local neighborhood of each mesh point on the geodesic (Figure \[fig:discrete-geodesics\]). This balance of local angular distribution is observed to encode the local geometry of the triangulated mesh along the discrete geodesic. The aforementioned approximation to a geodesic computed over a discrete 3D triangulated mesh is referred to as a *quasi-geodesic* [@martinez2005computing]. The proposed global shape descriptor represents the 3D shape by computing the quasi-geodesic paths between all point pairs on the discrete 3D triangulated surface mesh. The all-point-pairs geodesic matrix representation of 3D shapes presents a symmetrical pattern, as shown in Figure \[fig:2D-symm-maps\]. We employ the eigenspectrum of this representation to detect self-symmetry within a shape. Furthermore, we investigate the commutative property of the eigenvectors of the shape descriptor spectrum, which are shown to be approximately orthogonal to each other in discrete settings such as triangulated mesh representations of shapes. Approximate orthogonality refers to the fact that two distinct eigenvectors $\phi_i$ and $\phi_j$ of the spectrum yield $\langle \phi_i, \phi_j \rangle \approx 0$ under the inner-product operator $\langle \cdot,\cdot \rangle$. The proposed eigenspectral representation, however, is distinct from the well-known Laplace-Beltrami eigenspectrum that has been used extensively in several 3D shape analysis and 3D shape synthesis applications. In our case, we exploit the approximate commutativity of the shape descriptor eigenspectra to establish correspondence between isometric 3D shapes. It needs to be emphasized here that, unlike many related approaches [@kovnatsky2013coupled; @ovsjanikov2012functional], the proposed optimization criterion used to establish the correspondence between isometric 3D shapes does not exploit nor does it require prior user-specified maps between the 3D shapes. We use the proposed correspondence establishment scheme to test the hypothesis that the presence of implicit isometry between 3D shapes can be characterized using a global quasi-geodesic-based shape representation that encodes local shape geometry. A minimal numerical illustration of the orthogonality property invoked above is sketched next.
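The following Python sketch eigendecomposes a symmetric all-pairs distance matrix that stands in for $D_g(X)$ and verifies that distinct eigenvectors are numerically orthogonal. The sketch is purely illustrative; the random point set, its size and the tolerance are assumptions, not the meshes used in our experiments.

```python
import numpy as np

# Toy "shape": random 3D points standing in for mesh vertices
# (illustrative assumption; a real run would use mesh vertices
# and quasi-geodesic distances between them).
rng = np.random.default_rng(0)
pts = rng.standard_normal((50, 3))

# Symmetric all-pairs Euclidean distance matrix, a stand-in for D_g(X).
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Eigendecomposition of the symmetric operator; eigh returns a real
# spectrum and an orthonormal set of eigenvectors.
eigvals, eigvecs = np.linalg.eigh(D)

# Distinct eigenvectors of a symmetric matrix are orthogonal up to
# floating-point round-off.
gram = eigvecs.T @ eigvecs
assert np.allclose(gram, np.eye(len(pts)), atol=1e-8)
print("max deviation from orthonormality:",
      np.abs(gram - np.eye(len(pts))).max())
```

In this toy example the matrix is exactly symmetric, so the orthogonality holds up to round-off; for quasi-geodesics computed on real triangulated meshes, small numerical asymmetries in the computed pairwise distances are what make the orthogonality approximate in the sense used above.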
Beyond this hypothesis, we also contend that the proposed representation can be exploited to address problems such as self-symmetry detection and characterization, correspondence determination and stable part detection under isometry deformation without resorting to prior knowledge from external sources. To the best of our knowledge, the problem of correspondence determination in the absence of prior knowledge has not been previously addressed in the research literature. In some of our experiments, owing to the high variability in the isometry transformations, we obtain poor results for correspondence determination as a consequence of not requiring any prior knowledge. However, our experiments show that the proposed correspondence determination technique is able to detect stable corresponding parts between shapes, i.e., corresponding parts that have undergone the least degree of isometry deformation (Section \[sec:results\]). The remainder of the paper is organized as follows. In Section \[sec:related-work\], we present a brief survey of the most relevant works on 3D shape description, 3D symmetry detection and characterization and 3D correspondence determination with an added emphasis on functional maps between isometric shapes [@ovsjanikov2012functional]. Section \[sec:contribution\] describes the specific contributions of our work. The mathematical model on which the proposed technique is based is detailed in Section \[sec:shape-operator\]. In Section \[sec:results\], we present the experimental results for 3D self-symmetry detection, 3D correspondence determination between isometric shapes, and stable 3D part detection. We conclude the paper in Section \[sec:conclusion\] with an outline of directions for future work. ![image](2d-shape-symmetry-maps){width="0.95\linewidth"} Related Work and Background {#sec:related-work} =========================== The proposed global shape descriptor is based on the computation of all possible quasi-geodesics between all pairs of points over the discrete triangulated 3D surface mesh. The proposed descriptor is also capable of encoding the local geometry at the discrete points over the shape. As mentioned previously, the optimization criterion for establishing correspondence using the proposed descriptor does not require user-specified prior knowledge of corresponding points on the candidate shapes. In this section, we first present a brief survey of relevant local shape representation schemes and spectrum-dependent shape correspondence models [@heider2011localsurvey; @van2011survey]. We also discuss relevant work on coupled quasi-harmonic bases [@kovnatsky2013coupled], which exploits the commutativity of isometric shape spectra to establish correspondence between approximately isometric shapes. Local shape descriptors {#sec-geom-desc} ----------------------- The different classes of local shape descriptors can be categorized based on their approach towards sampling of the underlying local surface geometry. **Ring-based descriptors** are typically based on local sampling of a predefined metric over the discrete 3D surface mesh; however, they differ in their strategies for evaluation of the metric. Some of the prominent descriptors belonging to this class employ *blowing bubbles* [@shp-des-blowbubble-Mortara-ESV-Algca-2004; @shp-des-blowbubbleExtension-Pottman-ESV-2009] centered around a sample surface point, whereas others use the geodesic diameter to sample the surface metric in a local neighborhood [@pottmann2009integral].
These descriptors explicitly control the radius parameter of the bubbles or discs, which in turn determines the size of the sampled surface region. Some ring-based descriptors [@gatzke2005curvature] use the local surface normal vectors computed at discrete points on the surface mesh to capture the local surface features. *Geodesic fan descriptors* [@shp-des-geofans-Gatzke-SMA-2005; @shp-des-geonorm-Ong-2010] sample a local surface metric based on values of the local surface mesh curvature or the outward surface normal vector within regions of varying radii defined over the 3D surface mesh. *Splash descriptors* employ the values of the surface normal vector as the primary metric for local surface characterization [@stein1992structural], whereas *point descriptors* [@shp-des-point-Chua-IJCV-1997; @shp-des-pnt-Yamany-ICCV-1999; @shp-des-pnt-Yamany-PAMI-2002] encode the local geometric features on the surface mesh defined by the relative local surface normal at a sample point with respect to a plane or line segment superimposed at that point. One of the more prominent examples from this category of shape descriptors is the point descriptor proposed by Kokkinos et al. [@kokkinos2012intrinsic] where feature points are represented by local geometric and photometric fields. **Expanding descriptors** fit a hypothesis-based model to a surface region in order to characterize it. Salient shape descriptors from this category typically employ a parametric model involving features such as geodesic distance [@shp-des-blowbubble-Mortara-ESV-Algca-2004; @shp-des-fitpoly-Cipriano-VCG-2009], volume and/or surface area [@shp-des-art-Connoly-JMG-2005; @shp-des-blowbubbleExtension-Pottman-ESV-2009]. Some variants of this descriptor use a mesh smoothing [@li2005multiscale] or mesh saliency computation [@shp-des-art-Lee-TOG-2005] procedure that is employed over a specific region on the 3D surface mesh. **Iterative operator-based descriptors** capture the geometric changes within a shape by manipulating the entire mesh surface. As a manipulation strategy, they employ techniques such as smoothing [@shp-des-itrSmoothing-Li-Guskov-SOGP-2005] or estimation of local diffusion geometry [@bronstein2010gromov] over the mesh surface. The well-known Laplace-Beltrami operator [@rustamov2007laplace] is typically employed to compute the diffusion-based shape descriptors. Global shape representation {#sec-globl-shp-rprsnt} --------------------------- In most situations, knowledge of local surface geometry alone is insufficient to characterize the entire shape. Consequently, a global shape representation based upon local surface features is necessary to effectively address the correspondence problem, which is fundamental to many applications in computer vision and computer graphics. In recent times, surface descriptors based on the eigenspectrum of the Laplace-Beltrami operator have gained popularity in the context of the correspondence problem. Some important examples of surface descriptors from this class are based on the formulation of a diffusion process. The diffusion process is guided by the Laplace-Beltrami operator [@rustamov2007laplace] that samples a surface metric, such as the mesh connectivity, along the geodesic curves on the 3D surface mesh. In related work, Bronstein et al. [@bronstein2010gromov] use diffusion geometry to measure the point-to-point length along a specific path on the surface mesh using a random walk model.
Surface descriptors based on the heat kernel signature (HKS) [@sun2009concise; @bronstein2010scale] employ the heat diffusion model in conjunction with the eigenspectrum of the Laplace-Beltrami operator to characterize global shape. In an anisotropic variation, Boscaini et al. [@boscaini2016anisotropic] use the eigenspectrum of a directional version of the Laplace-Beltrami operator for shape representation. The wave kernel signature (WKS) [@aubry2011wave] is another popular category of shape descriptors based on the Laplace-Beltrami eigenspectrum, one that employs the principles of quantum mechanics instead of heat diffusion to characterize the shape. Smeets et al. [@smeets2012isometric] address the global representation of shape by computing the geodesic distances between sample points on the 3D surface mesh, resulting in a shape representation scheme that is shown to achieve robustness against nearly isometric deformations. Joint diagonalization of the commutative eigenspectrum {#sec-joint-diag} ------------------------------------------------------ Point- or region-specific correspondence between isometric shapes can be addressed by exploiting the commutative property of the shape spectrum representation. In this section we briefly describe coupled quasi-harmonic bases [@kovnatsky2013coupled], which employ this commutative property of isometric (or near-isometric) shape spectra to address correspondence between isometric shapes. **Commutative Eigenspectrum**. Formally, the commutative property implies that, given two unitary (Hermitian or orthogonal) operators $\Phi_X$ and $\Phi_Y$ defined over an isometric shape pair $X$ and $Y$, there exists a joint diagonalizer $\Psi$ such that both $\Psi^{T}\Phi_X\Psi$ and $\Psi^{T}\Phi_Y\Psi$ are diagonal [@coisperturbation]. The joint diagonalizer $\Psi$ represents the common eigenbases of the isometric shape spectra $\Phi_X$ and $\Phi_Y$. Shapes represented as discrete triangulated meshes need not be exactly isometric to each other due to discretization error. Therefore, in the discrete case, the respective shape spectra are only approximately commutative. The term “approximately commutative” is used in the following sense: the spectra $\Phi_X$ and $\Phi_Y$ of triangulated shapes $X$ and $Y$ are approximately commutative if $||\Phi_X\Phi_Y - \Phi_Y\Phi_X||_F \approx 0$, where $||.||_F$ denotes the Frobenius norm of a matrix. A detailed treatment of common bases for approximately commutative spectral operators can be found in [@coisperturbation; @yeredor2002non]. Some recent important works [@kovnatsky2015functional; @kovnatsky2013coupled] employ this principle, minimizing an optimization criterion in the least-squares sense to extract common spectral bases that are used to address correspondence between isometric shapes. Specifically, we describe the coupled quasi-harmonic bases of Kovnatsky et al. [@kovnatsky2013coupled] as follows. **Coupled Quasi-harmonic Bases** address correspondence between two approximately isometric shapes $X$ and $Y$ by finding common bases within their spectra. The optimization criterion proposed in that work finds bases $\widehat{\Phi}_X$ and $\widehat{\Phi}_Y$ that jointly diagonalize the Laplacians $\Delta_X$ and $\Delta_Y$ defined over the near-isometric shapes $X$ and $Y$. The common eigenbases $\widehat{\Phi}_X$ and $\widehat{\Phi}_Y$ are extracted by minimizing the criterion in eqn. \[eq-quasi-opt\].
$$\begin{aligned} \begin{split} \operatorname*{argmin}_{\widehat{\Phi}_X,\widehat{\Phi}_Y} \text{off}(\widehat{\Phi}_X^T W_X \widehat{\Phi}_X) + \text{off}(\widehat{\Phi}_Y^T W_Y \widehat{\Phi}_Y) + \\ ||F^T \widehat{\Phi}_X - G^T \widehat{\Phi}_Y||_F^2 \\ \text{such that } \widehat{\Phi}_X^T D_X \widehat{\Phi}_X = I \text{ and } \widehat{\Phi}_Y^T D_Y \widehat{\Phi}_Y = I\label{eq-quasi-opt} \end{split}\end{aligned}$$ Here $\text{off}(A) = \sum_{1\leq i \neq j \leq n} |a_{ij}^2|$ for an $n \times n$ matrix $A$ with elements $a_{ij}$. The matrices $W$ and $D$ are components of the discrete cotangent Laplacians $\Delta_X$ and $\Delta_Y$ for the discrete meshes $X$ and $Y$ such that $\Delta_X = W_X^{-1} D_X$ and $\Delta_Y = W_Y^{-1} D_Y$, following the cotangent discretization of the mesh Laplacian by Meyer et al. [@meyer2003discrete]. The third component of the optimization in eqn. \[eq-quasi-opt\] corresponds to a coarse correspondence between $X$ and $Y$ provided a priori in the form of point-wise mappings stored within the matrices $F$ and $G$, respectively. In the present work we likewise employ the common eigenbases of isometric shape spectra to establish correspondence between shapes. However, in contrast with [@kovnatsky2013coupled], our optimization criterion does not exploit any prior correspondence information. Contributions of the Paper {#sec:contribution} ========================== In this paper, we propose a global shape representation $D_g(X)$ for a 3D manifold $X$ that incorporates local surface geometry. The proposed representation is based on the computation of the shortest quasi-geodesic distances between all point pairs on the shape manifold. The proposed shape representation is shown to preserve the local surface geometry at each point on the 3D surface mesh. Furthermore, we effectively exploit the eigenspectrum of the proposed shape representation in the following applications: \(1) [*Self-symmetry characterization*]{}: We address the problem of self-symmetry characterization by exploiting the eigenspectrum of the proposed global shape descriptor $D_g(X)$. \(2) [*Correspondence determination*]{}: We determine the region-wise correspondence between isometric shapes without requiring the user to determine and specify [*a priori*]{} the point-wise mapping between the two shapes. \(3) [*Isometry deformation characterization*]{}: We exploit the results of the region-wise correspondence to characterize and quantify the extent of isometry deformation between the shapes. \(4) [*Stable part detection:*]{} We exploit the commutative property of the eigenfunctions of $D_g(X)$ to extract pose-invariant stable parts within non-rigid shapes. Local Geometry Inclusive Shape Operator {#sec:shape-operator} ======================================= In the proposed scheme a discrete 3D shape manifold $X$ is characterized by an operator $D_g(X)$ that is computed by determining the quasi-geodesics over the discrete manifold $X$. It is known that along a geodesic on a continuous manifold, the normal component of the local curvature dominates the tangential component. A discrete 3D shape manifold $X$, in the form of a triangulated 3D surface mesh, can be represented by a $C^2$ differentiable function $f:\mathbb{R}^3\rightarrow\mathbb{R}$ as $X = \{f(x_{1}),f(x_{2}),...,f(x_{n})\}$ where $n$ denotes the number of vertices $x_i, 1 \leq i \leq n$ of $X$ [@azencot2014functional; @martinez2014smoothed].
The quasi-geodesic computed for a discrete path $x_i\rightsquigarrow x_j$ minimizes the distance measure $d(f(x_i),f(x_j))$ between the vertices $x_i$ and $x_j$ of $X$. The proposed shape representation $D_g(X)$ records all such quasi-geodesics, computed between all vertex-pairs or point-pairs over the surface mesh $X$. Furthermore, the matrix representation of $D_g(X)$ reveals an implicit symmetrical form, as is evident for the example 3D shapes shown in Figure \[fig:2D-symm-maps\]. For discrete meshes, the computation of geodesics is enabled by stable schemes such as the ones described in [@martinez2005computing]. The local geometry along a quasi-geodesic over a discrete mesh is preserved as follows: Figure \[fig:discrete-geodesics\] depicts two scenarios where a probable quasi-geodesic (marked in red) crosses a neighborhood of mesh triangles. In either case, one can measure the discrete geodesic curvature at a point $P$ as follows: $$\begin{aligned} \kappa_{g}(P) = \frac{2\pi}{\theta}\left(\frac{\theta}{2} - \theta_r\right) \label{eq-kappa-g}\end{aligned}$$ In eqn. (\[eq-kappa-g\]), $\theta$ denotes the sum of all angles incident at point $P$ where the geodesic crosses the surface mesh. In both cases, depicted in Figure \[fig:discrete-geodesics\](a) and (b), the quasi-geodesics create angular distributions $\theta_{l}$ and $\theta_{r}$ such that $\theta_l = \sum_i \beta_{i}$ and $\theta_r = \sum_i \alpha_{i}$. Since the normal curvature is dominant along the quasi-geodesics, we can compute an optimum balance between $\theta_{l}$ and $\theta_{r}$ that minimizes the discrete geodesic curvature $\kappa_{g}$, which is the tangential curvature component along the quasi-geodesic. This optimal balance between the angular distributions along the quasi-geodesic approximately encodes the local surface geometry, as depicted in Figure \[fig:discrete-geodesics\](a) and (b). The spectral decomposition of the symmetric shape operator $D_g(X)$ results in the eigenspectrum $\Phi_X$ for shape $X$ as follows: $$\begin{aligned} D_g(X)\Phi_X = \Delta_X\Phi_X \label{eq-eigen}\end{aligned}$$ where $\Delta_X = {\rm diag}(\gamma_1,\gamma_2,...,\gamma_n)$ denotes the diagonal matrix of eigenvalues $\gamma_i, 1 \leq i \leq n$ and $\Phi_X = \{\Phi_X^1, \Phi_X^2, ...,\Phi_X^n\}$ denotes the eigenvectors $\Phi_X^i, 1 \leq i \leq n$ of $X$ with $n$ surface vertices. ![The right and left angular distributions $\theta_l$ and $\theta_r$ generated by a geodesic at point $P$ on the surface mesh. The angular measures $\theta_l$ and $\theta_r$ encode the local geometry on a discrete surface mesh.[]{data-label="fig:discrete-geodesics"}](discrete-geodesics){width="0.45\linewidth"} Self-symmetry characterization {#sec:self-symm} ------------------------------ We characterize self-symmetric regions over shape $X$ as follows. Two regions $X_1, X_2 \subset X$ are possible candidates for being symmetric regions if, for some upper bound $\varepsilon$: $$\begin{aligned} \left |\sum_{k=1}^{k_0}\Phi_X^k(p)-\sum_{k=1}^{k_0}\Phi_X^k(q) \right |_2 \leq \varepsilon \quad \forall p\in X_1, \; \forall q\in X_2 \label{eq-symm}\end{aligned}$$ where $| \cdot |_2$ denotes the $\mathcal{L}^2$ norm. Using spectral analysis, one can find a tight bound on $\varepsilon$ such that $\varepsilon \leq \sum_{p,q\in X_1, \; r,s \in X_2}\left |d(p,q)-d(r,s) \right |_2$ for a $C^2$ distance metric $d$  [@dunford1963linear]. The parameter $\varepsilon$ depends upon the variance of the geodesic error computed over the entire shape manifold $X$. A minimal computational sketch of how $D_g(X)$ and this self-symmetry criterion can be prototyped is given below.
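The sketch below is illustrative rather than the implementation used in our experiments: the all-pairs matrix $D_g(X)$ is approximated here by Dijkstra shortest paths along the mesh edges (a faithful implementation would instead employ a quasi-geodesic scheme such as [@martinez2005computing]), and the values of `k0` and `eps`, the ascending eigenvalue ordering, the assumption of a connected mesh, and the input arrays are all expository assumptions.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def quasi_geodesic_matrix(verts, faces):
    """Approximate the all-pairs matrix D_g(X) by shortest paths over
    the mesh edge graph (a surrogate for true quasi-geodesics).
    Assumes a connected triangulated mesh."""
    # Gather the three edges of every triangle, deduplicated.
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)
    w = np.linalg.norm(verts[edges[:, 0]] - verts[edges[:, 1]], axis=1)
    n = len(verts)
    graph = coo_matrix((w, (edges[:, 0], edges[:, 1])), shape=(n, n))
    return dijkstra(graph, directed=False)   # symmetric n x n distances

def self_symmetry_pairs(Dg, k0=20, eps=0.5):
    """Flag candidate symmetric vertex pairs via eqns. (eq-eigen)/(eq-symm)."""
    _, eigvecs = np.linalg.eigh(Dg)       # eigenspectrum of D_g(X)
    # "Lower-order" eigenvectors, taken here in ascending eigenvalue
    # order (an assumed convention).
    s = eigvecs[:, :k0].sum(axis=1)       # per-vertex sum of first k0 vectors
    diff = np.abs(s[:, None] - s[None, :])
    return diff <= eps                    # |sum_k phi(p) - sum_k phi(q)| <= eps
```

Given a vertex array `verts` and a triangle index array `faces`, `self_symmetry_pairs(quasi_geodesic_matrix(verts, faces))` returns a boolean mask over vertex pairs that are candidates for the symmetric regions of eqn. (\[eq-symm\]).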
As noted above, the parameter $\varepsilon$ for shape manifold $X$ is therefore a measure of the degree of isometry deformation of $X$ vis-a-vis the baseline shape. We report the bounds computed for different meshes in the Experimental Results section (Section \[sec:results\]). For characterizing self-symmetry we restrict ourselves to the lower-order eigenvectors by setting $k_0 \leq 20$. Furthermore, the above characterization can be used to jointly analyze the correspondence between two candidate isometric shapes $X$ and $Y$ (Section \[sec:correspondence\]). Correspondence determination between isometric shapes {#sec:correspondence} ----------------------------------------------------- Determining the compatibility between the eigenbases of various shapes plays a critical role in applications dealing with the analysis of multiple 3D shapes; in particular, determining the correspondence between 3D shapes. In related work, Ovsjanikov et al. [@ovsjanikov2012functional] represent the correspondence between two shapes by a parametric map between their functional spaces. However, functional map-based methods typically rely on user-specified prior knowledge of the mapping between the shapes for optimization of the correspondence criterion [@nguyen2011optimization; @ovsjanikov2012functional]. In contrast, the proposed approach does not assume any user-specified prior mapping between the shapes under consideration. For correspondence determination between two isometric shapes $X$ and $Y$ we exploit the fact that the eigendecomposition of the symmetric shape operators $D_g(X)$ and $D_g(Y)$ leads to approximately commutative eigenspectra $\Phi_X$ and $\Phi_Y$. The characterization “approximately commutative” is due to the discrete triangulation of the meshes and follows the formal definition given in Section \[sec-joint-diag\]. We couple $\Phi_X$ and $\Phi_Y$ by the commutative terms $\Phi_X^T \Delta_Y \Phi_Y$ and $\Phi_Y^T \Delta_X \Phi_X$ to solve the following optimization problem: $$\begin{aligned} \bar{\Phi}_X , \bar{\Phi}_Y = \operatorname*{argmin}_{\phi_x, \phi_y} \quad |\phi_x^T \Delta_Y \phi_y|_F + |\phi_y^T \Delta_X \phi_x|_F \label{eq-corr}\end{aligned}$$ where $\phi_x \subset \Phi_X$, $\phi_y \subset \Phi_Y$ and $| \cdot |_F$ denotes the Frobenius norm. It should be emphasized that eqn. (\[eq-corr\]) does not require that [*a priori*]{} correspondence maps be provided. The optimized maps $\bar{\Phi}_X$ and $\bar{\Phi}_Y$ over shapes $X$ and $Y$ encode the correspondence between them. From the optimized maps $\bar{\Phi}_X$ and $\bar{\Phi}_Y$, the relative correspondence error between shapes $X$ and $Y$ is given by $C_{X,Y} = \sum_{k=1}^{k_0} |\bar{\Phi}_X^k - \bar{\Phi}_Y^k|_2$. To compute $C_{X,Y}$ we consider the lower-order eigenvectors by setting $k_0 \leq 20$. Stable 3D region or part detection {#sec:stable-part} ---------------------------------- Relaxing the criterion for correspondence determination by not requiring a user-specified prior mapping between the shapes could result in poor correspondence between shapes that differ significantly from each other via isometry transformation. However, the optimization criterion for correspondence determination can also be used to identify common stable regions or parts within the shapes. The stable regions or parts are deemed to be the ones that have undergone the least amount of isometry deformation as a result of pose variation. We identify the stable regions $S_{X,Y}$ between shapes $X$ and $Y$ using the following criterion:
$$\begin{aligned} S_{X,Y} = \bigcup_p \; |\bar{\Phi}_X(p) - \bar{\Phi}_Y(p)|_2 \leq \varepsilon \label{eq-stable}\end{aligned}$$ where region $p$ represents a corresponding region in both shapes $X$ and $Y$ as identified by the correspondence optimization in eqn. (\[eq-corr\]). The parameter $\varepsilon$ is computed as mentioned in Section \[sec:self-symm\]. The stable part detection is quantified using the following criterion: $\bar{S}_{X,Y} = \sum_{p \in S_{X,Y}} \quad |\bar{\Phi}_X(p) - \bar{\Phi}_Y(p)|_2$. Experimental Results {#sec:results} ==================== For our experiments we have chosen the TOSCA dataset consisting of ten non-rigid shape categories, i.e., [*Cat*]{}, [*Dog*]{}, [*Wolf*]{}, two [*Human Males*]{}, [*Victoria*]{}, [*Gorilla*]{}, [*Horse*]{}, [*Centaur*]{} and [*Seahorse*]{} [@ovsjanikov2009shape]. Within each shape category, the individual shapes represent different transformations such as isometry, isometry coupled with topology change, different mesh triangulations of the same shape, etc. In this work, we consider shapes that are isometric to one another, i.e., shapes that differ via an isometry transformation. Examples of some shapes that differ from one another via isometry transformations are shown in Figure \[fig:example-tosca\]. Experimental results are presented for six different shape categories for each of the applications formally described in Sections \[sec:self-symm\], \[sec:correspondence\] and \[sec:stable-part\] using visual representations of the results followed by the corresponding numerical evaluations. We have experimented with coarse meshes that are reduced by more than 87% of their original size or resolution. The results of the proposed shape representation are compared with those from relevant state-of-the-art shape representation schemes. The comparable performance achieved by the proposed local geometry-inclusive global shape representation scheme, without any prior knowledge of point-to-point or region-wise correspondence, validates its central hypothesis: the implicit isometry existing within candidate shapes can be exploited to establish correspondence without a coarse correspondence provided a priori. ![Examples of isometry transformation for the shape categories [*human*]{} and [*centaur*]{} in the TOSCA dataset.[]{data-label="fig:example-tosca"}](iso-deformations-tosca){width="0.45\linewidth"} Results of 3D self symmetry detection {#sec-res-sym} ------------------------------------- Figure \[fig:self-symm\] depicts the self-symmetry maps obtained for the various shapes using eqn. (\[eq-symm\]). The maps in Figure \[fig:self-symm\] correspond to the second eigenvector $\Phi_X^2$ obtained from the spectral decomposition of the global operator $D_g(X)$ for each shape using eqn. (\[eq-eigen\]). Table \[tab-epsilon\] presents the self-symmetry characterization measure, denoted by the upper bound $\varepsilon$ in eqn. (\[eq-symm\]), for each shape category. This characterization measure represents the average degree of isometry transformation within a shape category vis-a-vis the baseline shape. Note that the shape category [*Michael*]{} represents one of the two [*Human Male*]{} shape categories in the TOSCA dataset.
![image](self-symmetry){width="0.75\linewidth"}

  Category         $\varepsilon$   Category        $\varepsilon$
  ---------------- --------------- --------------- ---------------
  [*Victoria*]{}   0.528           [*Dog*]{}       0.462
  [*Cat*]{}        0.282           [*Michael*]{}   0.566
  [*Horse*]{}      0.815           [*Centaur*]{}   0.203

  : Self-symmetry characterization measure for different shape categories in the TOSCA dataset. The average degree of isometry transformation within the category [*Horse*]{} is observed to be at least 30% higher than the others.[]{data-label="tab-epsilon"}

Results of 3D correspondence between isometric shapes {#sec-res-corr}
-----------------------------------------------------

Since the lower-order eigenvectors represent global shape geometry more accurately, we consider the first 20 eigenvectors to compute the global region-based correspondence between the isometric shapes. Figure \[fig:corr-consistency\] shows the results of correspondence determination between the isometric [*Human Male*]{} shapes obtained via the optimization procedure described in eqn. (\[eq-corr\]). The correspondence maps between the shapes are shown to be consistent across the different order eigenvectors.

  Category         Average $C_{X,Y}$   Category        Average $C_{X,Y}$
  ---------------- ------------------- --------------- -------------------
  [*Victoria*]{}   0.069               [*Dog*]{}       0.0624
  [*Cat*]{}        0.06                [*Michael*]{}   0.057
  [*Horse*]{}      0.0559              [*Centaur*]{}   0.052

  : Average relative error $C_{X,Y}$ in 3D correspondence determination between isometric shapes.[]{data-label="tab-cxy-corr"}

The relative correspondence error for these maps can be characterized by the measure $C_{X,Y}$ defined in Section \[sec:correspondence\]. Table \[tab-cxy-corr\] lists this measure for isometric shapes from different TOSCA shape categories. Lower $C_{X,Y}$ values denote a higher degree of correspondence accuracy achieved via the optimization procedure described in eqn. (\[eq-corr\]). Once again, we emphasize that the correspondence accuracy is achieved without requiring any user-specified prior mapping between the shapes. ![image](eigen-vec-consistency){width="0.65\linewidth"}

Results of 3D stable region or part detection
---------------------------------------------

![Stable region detection via the optimization procedure described in eqn. (\[eq-stable\]). The stable regions are detected between isometric shapes where the correspondence accuracy is observed to deteriorate due to a high degree of isometry transformation between the shapes. Unstable regions are ones that exhibit a higher degree of isometry transformation between them, for example, parts of the lower legs, the tail, etc.[]{data-label="fig:stable-region-detection"}](stable-parts-detection){width=".45\linewidth"}

Shapes from different categories display varying degrees of isometry transformations between them. As a result, the accuracy of global correspondence deteriorates for shapes that exhibit a very high degree of isometry deformation. This is expected since the proposed scheme does not assume any prior mapping information that could potentially improve the correspondence. However, using the optimization described in eqn. (\[eq-stable\]) we can identify the stable corresponding regions or parts within the shapes that are least transformed by isometry. The detected stable regions or parts for the [*Centaur*]{} shape category are depicted in Figure \[fig:stable-region-detection\]. A minimal sketch of how the correspondence error $C_{X,Y}$ and the stable-region mask of eqn. (\[eq-stable\]) can be evaluated is given below.
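As with the earlier sketch, the fragment below is a simplified illustration rather than the optimizer actually used for eqn. (\[eq-corr\]): it assumes vertex-aligned meshes of equal size, resolves only the per-eigenvector sign ambiguity as a stand-in for the full coupling of the bases, and treats `k0` and `eps` as assumed parameters.

```python
import numpy as np

def align_signs(phi_x, phi_y):
    """Resolve the per-eigenvector sign ambiguity (a simplifying
    stand-in for the full optimization of eqn. (eq-corr))."""
    signs = np.sign(np.sum(phi_x * phi_y, axis=0))
    signs[signs == 0] = 1.0
    return phi_x, phi_y * signs

def correspondence_error(phi_x, phi_y, k0=20):
    """C_{X,Y}: summed L2 distance between the first k0 aligned bases."""
    px, py = align_signs(phi_x[:, :k0], phi_y[:, :k0])
    return np.linalg.norm(px - py, axis=0).sum()

def stable_region_mask(phi_x, phi_y, k0=20, eps=0.5):
    """Per-vertex stable-region mask of eqn. (eq-stable): vertices whose
    aligned spectral coordinates differ by at most eps are deemed stable."""
    px, py = align_signs(phi_x[:, :k0], phi_y[:, :k0])
    return np.linalg.norm(px - py, axis=1) <= eps
```

Here `phi_x` and `phi_y` denote the $n \times k$ eigenvector matrices of $D_g(X)$ and $D_g(Y)$; the vertices flagged by `stable_region_mask` correspond to the stable regions $S_{X,Y}$, while highly deformed regions fall outside the mask.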
For various poses of the [*Centaur*]{} shape model, the more dynamic regions such as the tail and the lower legs exhibit low correspondence accuracy and hence are rejected by the optimization criterion in eqn. (\[eq-stable\]). However, regions that are least affected by the isometry deformation are detected as stable regions. These stable regions exhibit high correspondence accuracy and are depicted in Figure \[fig:stable-region-detection\]. We quantify the correspondence accuracy for the detected stable regions using the measure $\bar{S}_{X,Y}$ described in Section \[sec:stable-part\]. However, in our experiments, we observed a high positive correlation between the measures $C_{X,Y}$ and $\bar{S}_{X,Y}$. Hence we state that the results in Table \[tab-cxy-corr\] hold for the measure $\bar{S}_{X,Y}$ as well. Table \[tab-comparison\] compares the performance of the proposed representation scheme with the performance of other state-of-the-art representation schemes [@kim2011blended; @sahillioglu2011coarse]. These methods were further combined with functional maps [@ovsjanikov2012functional] to improve their correspondence through functional-map-supported local refinement. Performance comparison of these combined approaches with the proposed representation is also presented in Table \[tab-comparison\]. The numerical values presented in Table \[tab-comparison\] represent the highest percentage correspondence accuracy achieved by the various representation schemes along with the corresponding average geodesic errors. The performance of the proposed representation scheme compares very well with the performance of the other state-of-the-art representation schemes. We emphasize here the merit of the central hypothesis behind the present shape representation: comparable performance in self-symmetry detection and correspondence mapping between isometric shapes is achieved by the proposed representation scheme without using any prior correspondence mapping information between the shapes, unlike the other state-of-the-art correspondence models [@kim2011blended; @sahillioglu2011coarse].

  Methods                                                      Geodesic Error   % Correspondence
  ------------------------------------------------------------ ---------------- ------------------
  [@kim2011blended]                                             0.11             $\sim 95$
  [@ovsjanikov2012functional] and [@kim2011blended]             0.06             $\sim 95$
  [@sahillioglu2011coarse]                                      0.25             $\sim 90$
  [@ovsjanikov2012functional] and [@sahillioglu2011coarse]      0.2              $\sim 90$
  [Proposed Scheme]{}                                           0.15             $\sim 94$

  : Comparison between the proposed scheme and those described in Kim et al. [@kim2011blended], Ovsjanikov et al. [@ovsjanikov2012functional] and Sahillioglu and Yemez [@sahillioglu2011coarse] and their combinations.[]{data-label="tab-comparison"}

Conclusions and Future Work {#sec:conclusion}
===========================

In this paper we proposed a global shape representation scheme using quasi-geodesics computed over the entire discrete shape manifold. The spectral decomposition of this representation is effectively used to identify self-symmetric regions on the discrete shape manifold. By exploiting the commutative property of the eigenbases of the proposed representation, we successfully demonstrated its use in correspondence determination between isometric shapes. We also proposed characterization metrics for self-symmetry identification and correspondence determination. Furthermore, stable regions within shapes were identified for shape pairs that differ from each other by a high degree of isometry deformation.
The results of correspondence determination obtained via the proposed representation scheme were compared with those from relevant state-of-the-art representation schemes. The key contribution of this work is the fact that no prior knowledge, in the form of user-specified mappings, was used for correspondence determination or self-symmetry detection. As an extension of the present scheme, we intend to investigate and combine functional maps [@ovsjanikov2012functional], which may prove critical in exploring the group structure within isometric shapes. Furthermore, we intend to use this combined scheme to address correspondence determination between near-isometric shapes [@kovnatsky2013coupled]. [^1]: [email protected] [^2]: [email protected]
null
minipile
NaturalLanguage
mit
null
Omkarnath Thakur Omkarnath Thakur (24 June 1897 – 29 December 1967) was an Indian music teacher, musicologist and Hindustani classical singer. A disciple of classical singer Vishnu Digambar Paluskar of the Gwalior gharana, he became the principal of Gandharva Mahavidyalaya, Lahore, and later went on to become the first dean of the music faculty at Banaras Hindu University. Early life and training Thakur was born in 1897 in a village called Jahaj in the Princely State of Baroda (5 km from Khambhat in present-day Anand District, Gujarat), into a poor military family. His grandfather Mahashankar Thakur had fought in the Indian Rebellion of 1857 for Nanasaheb Peshwa. His father Gaurishankar Thakur was also in the military, employed by Maharani Jamnabai of Baroda, where he commanded 200 cavalrymen. The family moved to Bharuch in 1900, though it soon faced financial difficulties, as his father left the military to become a renunciate (sanyasi), leaving his wife to run the household; thus, by the age of five, Thakur started helping her out by doing various odd jobs, in mills, in a Ramlila troupe and even as a domestic help. When he was fourteen his father died. Impressed by his singing, Thakur and his younger brother Ramesh Chandra were sponsored by the wealthy Parsi philanthropist Shahpurji Mancherji Dungaji, circa 1909, to train in Hindustani classical music at the Gandharva Mahavidyalaya, a music school in Bombay, under classical singer Vishnu Digambar Paluskar. Thakur soon became a singer in the style of the Gwalior gharana and started accompanying his guru and other musicians. Later in his career he developed his own distinct style. Eventually, he made his concert debut in 1918, though he continued his training under his guru, Paluskar, until the latter's death in 1931. Career Thakur was made the principal of the Lahore branch of Paluskar's Gandharva Mahavidyalaya in 1916. Here he became acquainted with Patiala gharana singers such as Ali Baksh and Kale Khan, paternal uncle of Bade Ghulam Ali Khan. In 1919, he returned to Bharuch and started his own music school, Gandharva Niketan. During the 1920s, Thakur worked for the non-cooperation movement of Mahatma Gandhi at the local level, becoming the President of the Bharuch District Congress Committee of the Indian National Congress. His performances of the patriotic song Vande Mataram were a regular feature of the annual sessions of the Indian National Congress. Thakur toured Europe in 1933 and became one of the first Indian musicians to perform in Europe. During this tour, he performed privately for Benito Mussolini. Thakur's wife Indira Devi died the same year and he began to concentrate exclusively on music. Thakur's work as a performer and musicologist led to the creation of a music college at Banaras Hindu University that emphasized both; he was the first dean of its music faculty. He wrote books on Indian classical music and its history. Thakur's work is criticized in contemporary music literature as ignoring the contribution of Muslim musicians, whom he blamed for the deterioration of classical music. Thakur performed in Europe until 1954 and received the Padma Shri in 1955 and the Sangeet Natak Akademi Award in 1963. He retired in 1963 and was awarded honorary doctorates from Banaras Hindu University in 1963 and Rabindra Bharati University in 1964. Having survived a heart attack in 1954, he suffered a stroke in July 1965, which left him partially paralyzed for the last two years of his life.
See also List of people on stamps of India References Bibliography A Comparative Study of Khyāl Style: Pandit Omkarnath Thakur and His Student Pandit B.R. Bhatt, by Harriotte Cook Hurie. Wesleyan University, 1980. Category:Indian male classical singers Category:1897 births Category:1967 deaths Category:Banaras Hindu University faculty Category:Hindustani singers Category:Indian music educators Category:Indian musicologists Category:People from Gujarat Category:Recipients of the Padma Shri in arts Category:Recipients of the Sangeet Natak Akademi Award Category:Gwalior gharana Category:20th-century Indian musicians Category:Recipients of the Ranjitram Suvarna Chandrak Category:20th-century musicologists
null
minipile
NaturalLanguage
mit
null
TEXAS COURT OF APPEALS, THIRD DISTRICT, AT AUSTIN ON MOTION FOR REHEARING NO. 03-98-00533-CV Carole Keeton Rylander, Comptroller of Public Accounts of the State of Texas; and John Cornyn, Attorney General of the State of Texas, Appellants v. 3 Beall Brothers 3, Inc., Appellee FROM THE DISTRICT COURT OF TRAVIS COUNTY, 261ST JUDICIAL DISTRICT NO. 97-05710, HONORABLE HUME COFER, JUDGE PRESIDING We withdraw our original opinion and judgment issued July 15, 1999, and substitute this one in its place. 3 Beall Brothers 3, Inc. ("Bealls") sued appellants (collectively, "the Comptroller") in district court for a refund of "additional tax." (1) The district court granted summary judgment in favor of Bealls. We will reverse the district court judgment and render judgment in favor of the Comptroller. THE CONTROVERSY This is a franchise tax case. (2) The Texas franchise tax is imposed on the value of the privilege of doing business in Texas. See Bullock v. National Bancshares Corp., 584 S.W.2d 268, 270 (Tex. 1979); General Dynamics Corp. v. Sharp, 919 S.W.2d 861, 863 (Tex. App.--Austin 1996, writ denied). The franchise tax is imposed annually on each corporation that is incorporated in Texas or that conducts business in Texas. See Tex. Tax Code Ann. § 171.001 (West 1992 & Supp. 1999). A corporation's franchise tax liability is based on the business done by the corporation during its last accounting period that ends in the year before the year in which the tax is due (the "privilege period"). Id. §§ 171.151, .153, .1532. The tax is calculated by multiplying the franchise tax base by the franchise tax rate. Id. § 171.002. Prior to 1992, the franchise tax was based solely on a corporation's taxable capital. (3) Capital-intensive industries bore the brunt of the tax, even in unprofitable years. See General Dynamics Corp., 919 S.W.2d at 863. In 1991, the Texas Legislature amended the Franchise Tax Act to add earned surplus as a tax base from which to calculate the franchise tax. (4) See Tax Code § 171.002. The amendment became effective on January 1, 1992. The 1991 Franchise Tax Act amendment also added the "additional tax" that is at issue in this case. See Tex. Tax Code Ann. § 171.0011 (West 1992) (since amended). The additional tax is levied on a corporation that is subject to the franchise tax and that is no longer subject to the taxing jurisdiction of the State in relation to the tax on net taxable earned surplus. See id. § 171.0011(a). The additional tax is calculated by multiplying the franchise tax rate by the corporation's earned surplus computed on "the period beginning on the day after the last day for which the tax imposed on net taxable earned surplus was computed." Id. § 171.0011(b). The tax ends on the date the corporation "is no longer subject to the taxing jurisdiction of this state." Id. According to the Comptroller, the additional tax is designed to reduce tax revenue losses caused by corporate reorganizations. Prior to August 2, 1993, Bealls was an apparel retailer incorporated in Texas. As of January 1993, Bealls operated 110 retail outlets in Texas, 10 retail outlets in Oklahoma, 8 retail outlets in New Mexico, and 3 retail outlets in Alabama. Bealls used a fiscal year accounting period ending on the Saturday nearest to January 31. Thus, for the privilege of doing business in Texas for calendar year 1992, the Franchise Tax Act required Bealls to base its 1992 franchise tax return on its accounting year beginning February 4, 1990, and ending February 2, 1991.
For the privilege of doing business in Texas in calendar year 1993, the Act required Bealls to base its 1993 return on its accounting year beginning February 3, 1991, and ending February 1, 1992. Bealls paid the franchise tax assessed on its earned surplus for calendar years 1992 and 1993. Bealls ceased doing business in Texas for franchise tax purposes on August 2, 1993, when it merged with Palais Royal, Inc., (5) and has not since been subject to the regular annual franchise tax. See Sunoco Terminals, Inc. v. Bullock, 756 S.W.2d 418, 421 (Tex. App.--Austin 1988, no writ) (when two corporations merge, only one entity remains responsible for regular annual franchise tax); Texaco, Inc. v. Calvert, 526 S.W.2d 630, 634 (Tex. Civ. App.--Austin 1975, writ ref'd n.r.e.). Bealls did, however, earn $16.2 million in profits from February 2, 1992 (the day after the last day for which the tax imposed on net taxable earned surplus was computed) to August 2, 1993 (the day Bealls was no longer subject to the taxing jurisdiction of Texas). In accordance with the additional tax statute, Bealls' additional tax liability on these eighteen months of previously untaxed earned surplus was $732,559.27. Bealls paid the tax and filed a tax refund claim with the Comptroller. See Tax Code § 111.104. The claim was denied, as was Bealls' motion for rehearing. Id. § 111.105. Bealls then sued the Comptroller in district court, claiming that the additional tax was unconstitutional because fiscal year taxpayers who terminate their separate corporate existence at the same time as calendar year taxpayers owe tax on earned surplus over a longer period. For example, when Bealls ceased to do business in Texas on August 2, 1993, it paid the additional tax on eighteen months of income (February 2, 1992 to August 2, 1993) while a calendar year taxpayer ceasing to do business in Texas on the same date would have paid additional tax on only seven months of income (January 1, 1993 to August 2, 1993). Bealls argued that because there is no basis for this discrimination, the additional tax violates the constitutional requirements of equal protection and equal and uniform taxation. (6) Bealls also contended that the additional tax violated the federal commerce clause. See U.S. Const. art. I, § 8. The trial court granted Bealls' motion for summary judgment and denied the Comptroller's motion for summary judgment, and the Comptroller appeals to this Court. DISCUSSION AND HOLDINGS The parties have either stipulated to or do not dispute the facts material to this case. Consequently, the propriety of summary judgment is a question of law. See Natividad v. Alexsis, Inc., 875 S.W.2d 695, 699 (Tex. 1994). We therefore review the trial court's decision de novo to determine whether Bealls was entitled to judgment as a matter of law. (7) See id.; Nixon v. Mr. Property Management Co., 690 S.W.2d 546, 548-49 (Tex. 1985). In its first issue presented, the Comptroller argues that the additional tax is constitutional as applied to Bealls and other fiscal year taxpayers because (1) the additional tax applies the same standard of value to all taxpayers, and (2) the State has a legitimate interest in raising revenue and mitigating the fiscal effects of corporate reorganizations. The Comptroller asserts these arguments in response to Bealls' position on summary judgment that the additional tax is unconstitutional because it taxes Bealls, a fiscal year taxpayer, over a longer period of time than calendar year taxpayers. 
In determining the constitutionality of a statute, we begin with a presumption that it is constitutional. See Enron Corp. v. Spring Indep. Sch. Dist., 922 S.W.2d 931, 934 (Tex. 1996) (citing HL Farm Corp. v. Self, 877 S.W.2d 288, 290 (Tex. 1994) & Spring Branch Indep. Sch. Dist. v. Stamos, 695 S.W.2d 556, 558 (Tex. 1985)). Courts are to presume that "the Legislature has not acted unreasonably or arbitrarily; and a mere difference of opinion, where reasonable minds could differ, is not sufficient legal basis for striking down legislation . . . ." Texas Workers' Compensation Comm'n v. Garcia, 893 S.W.2d 504, 520 (Tex. 1995). Tax legislation receives special deference. See Vinson v. Burgess, 773 S.W.2d 263, 266 (Tex. 1989); see also Regan v. Taxation With Representation, 461 U.S. 540, 547 (1983). Moreover, because the legislature enacts statutes imposing franchise tax purely for revenue purposes, we liberally construe franchise tax statutes so as to effectuate their purpose. See Federal Crude Oil Co. v. Yount-Lee Oil Co., 52 S.W.2d 56, 61 (Tex. 1932). The party challenging the constitutionality of a statute bears the burden of demonstrating that the enactment fails to meet constitutional requirements. See Enron Corp., 922 S.W.2d at 934. Both the Texas Constitution and the United States Constitution require equal protection of the law. See Tex. Const. art. I, § 3; U.S. Const. amend. XIV, § 1. Bealls makes no claim that the additional tax statute infringes upon a fundamental right; therefore, the additional tax statute must only be rationally related to a legitimate state purpose to withstand Bealls' equal protection challenge. See Barshop v. Medina County Underground Water Conservation Dist., 925 S.W.2d 618, 631 (Tex. 1996); Reuters Am., Inc. v. Sharp, 889 S.W.2d 646, 656 (Tex. App.--Austin 1994, writ denied); see also Lehnhausen v. Lake Shore Auto Parts Co., 410 U.S. 356, 359 (1973). The equal and uniform requirement of the Texas Constitution is substantially similar to the equal protection clause of the Fourteenth Amendment. See Railroad Comm'n v. Channel Indus. Gas Co., 775 S.W.2d 503, 507 (Tex. App.--Austin 1989, writ denied). The mandate that all taxes be equal and uniform requires only that all persons falling within the same class be taxed alike. See Sharp v. Caterpillar, Inc., 932 S.W.2d 230, 240 (Tex. App.--Austin 1996, writ denied) (citing Hurt v. Cooper, 110 S.W.2d 896, 901 (Tex. 1937)). A tax classification will be upheld unless it has no rational basis. See id. Bealls argues that the additional tax violates the constitutional requirements of equal protection and equal and uniform taxation because similarly situated taxpayers are treated differently based solely upon their choice of accounting year. Bealls also contends that the Comptroller has no legitimate reason for favoring calendar year taxpayers or penalizing fiscal year taxpayers. Bealls relies primarily on Bullock v. Sage Energy Co., 728 S.W.2d 465 (Tex. App.--Austin 1987, writ ref'd n.r.e.). In that case, Sage, an operator of oil and gas properties, attacked a rule promulgated by the Comptroller that required a corporation to compute its franchise tax based on its financial condition as shown in its books and records of account. See id., 728 S.W.2d at 465. Because Sage's shares were publicly traded, the Securities and Exchange Commission ("SEC") required Sage to capitalize its intangible drilling costs on its books and records. 
Corporations whose shares were not publicly traded were not subject to this SEC regulation; they were able to treat intangible drilling costs as expenses on their books and records. Because the franchise tax statute in effect at the time calculated franchise tax liability solely on the amount of a taxpayer's taxable capital, capitalizing intangible drilling costs (as opposed to expensing) led to a higher franchise tax assessment for Sage. This Court concluded that although intangible drilling costs have the same value to all corporations, their value was ascertained by different standards under the Comptroller's rule. See id. at 468. While Sage's intangible drilling costs were capitalized at full value, the same costs for similar corporations were not capitalized at all based solely upon the accounting method employed by the corporation. See id. Accordingly, this Court held that "Sage was denied the right to equal and uniform taxation provided by the Constitution." (8) Id. The Comptroller argues that Sage Energy is distinguishable because it concerns the imposition of a particular method of accounting upon a corporation, while the instant case concerns a corporation's election of an accounting year. We agree. Both General Dynamics and Sunoco Terminals, two cases relied upon by the Comptroller, are more analogous to the instant case. In the former case, General Dynamics entered into a contract in 1984 with the United States military to manufacture fighter jets. See General Dynamics, 919 S.W.2d at 864. General Dynamics manufactured and delivered the jets over a period of seven years, receiving profits each year. For federal income tax purposes, however, General Dynamics elected to utilize the "completed contract" method of reporting its profits; thus, it realized the entire seven years' worth of profits, totaling $974 million, in 1991, the year in which the contract was completed. As discussed previously, in 1991, the Franchise Tax Act was amended to add earned surplus as a method of calculating a corporation's tax liability. Due to its use of the completed contract method of reporting its profits, General Dynamics owed franchise tax on the full $974 million of earned surplus. General Dynamics paid the tax under protest and sued the Comptroller. This Court held that the Comptroller was permitted to assess franchise tax on the full amount of earned surplus realized in 1991. See id. at 866-67. In Sunoco Terminals, a newly formed corporation received capital equipment from a related corporation. Due to the timing of the calculations necessary to compute franchise tax, the assets were included in the franchise tax bases of both corporations. This Court held that the double counting was a permissible consequence of the taxpayers' decision to transfer the assets at an inopportune time. See Sunoco Terminals, Inc., 756 S.W.2d at 420-21. Furthermore, this Court explained that the decision to establish a policy that "all sales of capital equipment by established companies to newly formed companies carry a concomitant tax credit" was one for the legislature and not the courts. Id. at 421. We also find Southern Clay Products, Inc. v. Bullock, 753 S.W.2d 781 (Tex. App.--Austin 1988, no writ), to be instructive. In that case, this Court again considered the computation of franchise tax based on a corporation's financial condition as shown in its books and records of account. The stock of Southern Clay was acquired by Gonzales Clay Corporation. 
Following the acquisition, the parent company of both Southern Clay and Gonzales Clay was a British corporation. The parent required Southern Clay to increase the book value of certain of its assets to reflect "takeover values," or the acquisition cost of the company. Accordingly, Southern Clay prepared two sets of accounting books, one to reflect the historical cost of its assets and one to reflect the higher takeover values. For the 1980 fiscal year, Southern Clay maintained both ledgers and based its franchise tax return on the historical cost of its assets. The Comptroller accepted the calculation. For the 1981 fiscal year, however, Southern Clay maintained only the ledger based on takeover values. The historical costs were recorded on subsidiary ledgers or worksheets. When Southern Clay attempted to employ the worksheets to calculate its franchise tax on a historical basis, the Comptroller objected, citing the rule requiring a corporation to report its assets and pay franchise tax based upon information contained in its general ledger. See id. at 783. The Comptroller assessed Southern Clay franchise tax based on the takeover values in its general ledger, and Southern Clay sued in district court for a refund. The district court rendered judgment that Southern Clay take nothing. Southern Clay appealed, arguing among other things that the franchise tax assessment violated the equal and uniform taxation clause of the Texas Constitution. Specifically, Southern Clay contended that: if a corporation with the same assets as appellant kept a general ledger based on its "takeover values" and a general ledger based on its historical values, the Comptroller would permit it to pay lower franchise tax based on the historical ledger kept in accordance with generally accepted accounting principles. At the same time, [Southern Clay] would be required to pay a higher franchise tax for the same value of doing business solely because it kept one general ledger based on "takeover values" along with subsidiary historical records that the Comptroller rejects as "working papers." Id. at 784. This Court affirmed the judgment of the district court and, in doing so, highlighted the district court's finding that there was no evidence that taxpayers similarly situated to Southern Clay were allowed to use the historical cost method of accounting while Southern Clay was required to use the takeover value method. Id. In General Dynamics, Sunoco Terminals, and Southern Clay, the taxpayer was responsible for the election that resulted in unfavorable tax consequences. There is no indication in General Dynamics that the taxpayer was required to utilize the completed contract method of reporting its profits; therefore, one must assume that the taxpayer could have reduced its 1991 franchise tax liability by structuring the transaction in a different manner. In Sunoco Terminals, the taxpayer could have reduced its franchise tax liability by structuring the asset transfer in a different manner, or by petitioning the Comptroller for an alternate allocation method. See Sunoco Terminals, 756 S.W.2d at 422. Similarly, Southern Clay was "not required by the Comptroller to use a particular accounting method but rather [could] employ one of its own choosing." Southern Clay, 753 S.W.2d at 784. Likewise, Bealls had the option of electing to maintain its accounts on either a calendar year or fiscal year basis. 
There is no doubt that Bealls' choice to be a fiscal year taxpayer resulted in a greater additional tax burden to Bealls than would have been assessed had Bealls chosen to be a calendar year taxpayer. Nonetheless, a tax system that results in one party paying a disproportionately higher tax is not inherently unconstitutional so long as the legislation is rationally related to a legitimate governmental goal and the system operates equally within each class. See Tandy Corp. v. Sharp, 872 S.W.2d 814, 818 (Tex. App.--Austin 1994, writ denied) (citing Channel Indus. Gas Co., 775 S.W.2d at 507-08). To determine whether the additional tax statute operates equally within the relevant class--taxpayers no longer subject to the regular annual franchise tax--we will analyze the mechanics of additional tax assessment. The additional tax statute states in part: § 171.0011. Additional Tax (a) An additional tax is imposed on a corporation that is subject to the tax imposed under Section 171.001 and that is no longer subject to the taxing jurisdiction of this state in relation to the tax on net taxable earned surplus. (b) The additional tax is equal to the rate then in effect under Section 171.002(a)(2) multiplied by the corporation's net taxable earned surplus computed on the period beginning on the day after the last day for which the tax imposed on net taxable earned surplus was computed under Section 171.1532 and ending on the date the corporation is no longer subject to the taxing jurisdiction of this state in relation to the tax on net taxable earned surplus. Tex. Tax Code Ann. § 171.0011 (West 1992). According to the stipulated facts, Bealls filed a franchise tax return for 1992. The reporting period was based on Bealls' accounting year from February 4, 1990 to February 2, 1991. Bealls also filed a franchise tax return for 1993. The reporting period was based on Bealls' accounting year from February 3, 1991 to February 1, 1992. Because Bealls merged with Palais Royal on August 2, 1993, Bealls filed an additional tax return for the period from February 2, 1992 to August 2, 1993, a period of eighteen months. Had Bealls elected to be a calendar year taxpayer, it would have had a reporting period from January 1, 1991 to December 31, 1991 for 1992 and from January 1, 1992 to December 31, 1992 for 1993. Its additional tax period would have begun on January 1, 1993 and ended on August 2, 1993. In each scenario, the beginning date of the additional tax period represents the first day on which the regular franchise tax was no longer applicable. Moreover, the ending date represents the first day Bealls was no longer subject to the taxing jurisdiction of this state in relation to the tax on net taxable earned surplus. Therefore, the additional tax applies even-handedly because the amount of the tax is always based upon the period of previously untaxed earned surplus. Moreover, the additional tax statute is rationally related to a legitimate government purpose. The Comptroller implemented the additional tax statute to raise revenue and promote the legitimate state purposes of convenience, efficiency, and reliability. Accordingly, the statute requires taxpayers to use their existing accounting years as a basis for the period of additional tax assessment. We cannot say that the Comptroller's decision to assess the additional tax on the period of previously untaxed earned surplus was irrational or unreasonable. See Grocers Supply Co. v. Sharp, 978 S.W.2d 638, 645 (Tex. App.--Austin 1998, pet. 
denied) (classification of taxpayers according to time Comptroller adjudicated their claims had rational basis). We therefore sustain the Comptroller's argument and turn to the issue of whether the imposition of the additional tax on Bealls violates the commerce clause of the United States Constitution. See U.S. Const. art. I, § 8. A state tax does not violate the commerce clause if it: (1) is applied to an activity with a substantial nexus with the taxing State; (2) is fairly apportioned; (3) does not discriminate against interstate commerce; and (4) is fairly related to the services provided by the State. See Vinmar v. Harris County Appraisal Dist., 947 S.W.2d 554, 555 (Tex. 1997) (citing Complete Auto Transit, Inc. v. Brady, 430 U.S. 274, 279 (1977)). Bealls argues that the additional tax violates the first and fourth requirement. See Quill Corp. v. North Dakota, 504 U.S. 298, 313 (1992) ("The first and fourth prongs . . . limit the reach of state taxing authority so as to ensure that state taxation does not unduly burden interstate commerce."). The commerce clause requires "some definite link, some minimum connection, between a state and the person, property, or transaction it seeks to tax." Allied-Signal, Inc. v. Director, Division of Tax, 504 U.S. 768, 777 (1992) (quoting Miller Bros. Co. v. Maryland, 347 U.S. 340, 344-45 (1954)). The commerce clause requirement of a substantial nexus with the taxing state is satisfied by the taxpayer's physical presence in the state. See Lawrence Indus., Inc. v. Sharp, 890 S.W.2d 886, 892-93 (Tex. App.--Austin 1994, writ denied); see also Quill Corp., 504 U.S. at 312-14. Bealls stipulated that "[p]rior to August 3, 1993, [Bealls] was an apparel retailer incorporated in Texas. As of January 1993, [Bealls] operated 110 retail outlets in Texas . . . ." Bealls does not claim that its physical presence and operations in Texas in any way diminished during the eighteen month additional tax period. Therefore, Bealls' nexus claim has no merit. Under the commerce clause, the measure of the tax must be reasonably related to the extent of the taxpayer's presence or activities within the taxing state and to the taxpayer's consequent enjoyment of the opportunities which the state has afforded. See Commonwealth Edison Co. v. Montana, 453 U.S. 609, 629 (1981). The tax must be tied to the earnings that the state has made possible, "insofar as government is the prerequisite for the fruits of civilization for which . . . we pay taxes." Wisconsin v. J.C. Penney Co., 311 U.S. 435, 446 (1940). The "only benefit to which the taxpayer is constitutionally entitled . . . [is] that derived from his enjoyment of the privileges of living in an organized society, established and safeguarded by the devotion of taxes to public purposes." Commonwealth Edison Co., 453 U.S. at 629 (citing Carmichael v. Southern Coal & Coke Co., 301 U.S. 495, 522 (1937)). The additional tax assessed Bealls was a percentage of the corporation's earned surplus during the period "beginning on the day after the last day for which the tax imposed on net taxable earned surplus was computed under Section 171.1532 and ending on the date the corporation is no longer subject to the taxing jurisdiction of this state in relation to the tax on net taxable earned surplus." Tax Code § 171.0011. Because the tax was tied to earnings, it was in proper proportion to Bealls' activities in Texas and therefore to the consequent enjoyment of the opportunities and protections which the State has afforded. 
Commonwealth Edison Co., 453 U.S. at 626-27. When a tax is assessed in proportion to a taxpayer's activities or presence in a state, the taxpayer is shouldering its fair share of supporting the state's provision of police and fire protection, the benefits of a trained work force, and the advantages of a civilized society. Id. (quoting Exxon Corp. v. Wisconsin Dept. of Revenue, 447 U.S. 207, 228 (1980) and Japan Line, Ltd. v. County of Los Angeles, 441 U.S. 434, 445 (1979)). We therefore sustain the Comptroller's argument. CONCLUSION (9) Having concluded that the Comptroller's imposition of the additional tax upon Bealls is constitutional, we reverse the judgment of the district court and render judgment in favor of the Comptroller. Jan P. Patterson, Justice Before Justices Kidd, Patterson and Powers* Reversed and Rendered Filed: August 26, 1999 Publish * Before John E. Powers, Senior Justice (retired), Third Court of Appeals, sitting by assignment. See Tex. Gov't Code Ann. § 74.003(b) (West 1998). 1. The Comptroller and the Attorney General are statutory defendants in tax refund suits. See Tex. Tax Code Ann. § 112.151(b) (West 1992). This appeal was originally filed in the names of the predecessors to the present Comptroller and Attorney General. We have substituted the current holders of those offices as the correct parties to this proceeding. See Tex. R. App. P. 7.2(a). 2. See Tex. Tax Code Ann. §§ 171.001-.687 (West 1992 & Supp. 1999) (the "Franchise Tax Act"). 3. A corporation's taxable capital consists of stated capital and surplus. See Tex. Tax Code Ann. § 171.101(a)(1) (West 1992). Stated capital is defined by reference to the Texas Business Corporation Act. See Tex. Bus. Corp. Act Ann. art. 1.02(24) (West Supp. 1999) (defining stated capital as the sum of all shares of the corporation having a par value that have been issued and the consideration fixed by the corporation for all shares without par value that have been issued). The Franchise Tax Act defines surplus as net assets minus stated capital. See Tex. Tax Code Ann. § 171.109(a)(1) (West 1992). Net assets are total assets minus total debts. Id. § 171.109(a)(2). 4. Earned surplus is reportable federal net income, less certain foreign source income, plus officer and director compensation. See Tex. Tax Code Ann. § 171.110(a)(1) (West 1992 & Supp. 1999); see also General Dynamics Corp. v. Sharp, 919 S.W.2d 861, 864 n.4 (Tex. App.--Austin 1996, writ denied) (citing Southern Realty Corp. v. McCallum, 65 F.2d 934, 935-36 (5th Cir.), cert. denied, 290 U.S. 692 (1933)) (past income used to measure privilege of doing business in current year because past wealth is financial starting point for current year's business). 5. Bealls continues to operate in Texas under the Bealls name, but it is owned by a parent company and operated by Palais Royal. 6. See Tex. Const. art. I, § 3 ("All free men, when they form a social compact, have equal rights, and no man, or set of men, is entitled to exclusive separate public emoluments, or privileges, but in consideration of public services."); art. VIII, § 1(a) ("Taxation shall be equal and uniform."); U.S. Const. amend. XIV, § 1 (providing that no State shall "make or enforce any law which shall . . . deny any person within its jurisdiction the equal protection of the laws."). 7. Where both parties file a motion for summary judgment, and one is granted and one is denied, we determine all questions presented and render such judgment as the trial court should have rendered. See Commissioners Court v. 
Agan, 940 S.W.2d 77, 80 (Tex. 1997). 8. To remedy this situation, the legislature in 1987 added to the Franchise Tax Act the requirement that corporations use generally accepted accounting principles ("GAAP") for bookkeeping. See Tex. Tax Code Ann. § 171.109(b) (West 1992); Texas Utils. Elec. Co. v. Sharp, 962 S.W.2d 723, 726-27 (Tex. App.--Austin 1998, pet. denied). 9. The Comptroller raises a third issue concerning whether the additional tax is properly classified as a privilege tax or a corporate income tax. Having already determined that the additional tax is constitutional, we need not address the issue.
null
minipile
NaturalLanguage
mit
null
The first social function of the school is to pass on the legacy of Christianity. In pursuing that goal, students should develop other-centeredness, or selflessness, following Christ's "Great Commandment" to love one another. The school should provide a framework for students to serve others, demonstrating agape love. The school should also build in the student a character of personal responsibility. The school should have strict rules of conduct. I believe this helps students become better citizens of the world and moves them toward the ultimate goal – imago dei. I value experiences and anything that can bring me closer to truth and God. All sources of knowledge and experience are complementary. They all lead to the truth. These experiences exist in art, religion, and science. Performing on stage, creating a work of art, attending a prayer service, or discovering new laws of nature are all links to divine revelation. These are important experiences that bring us closer to the truth. As we live these experiences and discover the knowledge, we come closer, so that when we die we will be ready for the final revelation and become reunited with God, with the Image of God.
null
minipile
NaturalLanguage
mit
null
--- abstract: 'Deep learning techniques have recently demonstrated broad success in predicting complex dynamical systems ranging from turbulence to human speech, motivating broader questions about how neural networks encode and represent dynamical rules. We explore this problem in the context of cellular automata (CA), simple dynamical systems that are intrinsically discrete and thus difficult to analyze using standard tools from dynamical systems theory. We show that any CA may readily be represented using a convolutional neural network with a network-in-network architecture. This motivates our development of a general convolutional multilayer perceptron architecture, which we find can learn the dynamical rules for arbitrary CA when given videos of the CA as training data. In the limit of large network widths, we find that training dynamics are nearly identical across replicates, and that common patterns emerge in the structure of networks trained on different CA rulesets. We train ensembles of networks on randomly-sampled CA, and we probe how the trained networks internally represent the CA rules using an information-theoretic technique based on distributions of layer activation patterns. We find that CA with simpler rule tables produce trained networks with hierarchical structure and layer specialization, while more complex CA produce shallower representations—illustrating how the underlying complexity of the CA’s rules influences the specificity of these internal representations. Our results suggest how the entropy of a physical process can affect its representation when learned by neural networks.' author: - William Gilpin bibliography: - 'ca\_cites.bib' title: Cellular automata as convolutional neural networks --- Introduction ============ Recent studies have demonstrated the surprising ability of deep neural networks to learn predictive representations of dynamical systems [@zdeborova2017machine; @pathak2018model; @carrasquilla2017machine; @van2017learning; @torlai2018neural]. For example, certain types of recurrent neural networks, when trained on short-timescale samples of a high-dimensional chaotic process, can learn transition operators for that process that rival traditional simulation techniques [@pathak2018model; @jaeger2004harnessing; @bar2018data]. More broadly, neural networks can learn and predict general features of dynamical systems—ranging from turbulent energy spectra [@kutz2017deep], to Hamiltonian ground states [@carleo2017solving; @torlai2016learning], to topological invariants [@zhang2018machine]. Such successes mirror well-known findings in applied domains [@lecun2015deep], which have convincingly demonstrated that neural networks may not only represent, but also learn, generators for processes ranging from speech generation [@van2016wavenet] to video prediction [@mathieu2015deep]. However, open questions remain about how the underlying structure of a physical process affects its representation by a neural network trained using standard optimization techniques. We aim to study such questions in the context of cellular automata (CA), among the simplest dynamical systems due to the underlying discreteness of both their domain and the dynamical variables that they model. The most widely-known CA is Conway’s Game of Life, which consists of an infinite square grid of sites (“cells”) that can only take on a value of zero (“dead”) or one (“alive”). 
Starting from an initial binary pattern, each cell is synchronously updated based on its current state, as well as its current number of living and non-living neighbors. Despite its simple dynamical rules, the Game of Life has been found to exhibit remarkable properties ranging from self-replication to Turing universality [@adamatzky2010game]. Such versatility offers a vignette of broader questions in CA research, because many CA offer minimal examples of complexity emerging from apparent simplicity [@wolfram1983statistical; @langton1990computation; @feldman2008organization; @fredkin1990informational; @adamatzky2012collision]. For this reason, CA have previously been natural candidates for evaluating the expressivity and capability of machine learning techniques such as genetic algorithms [@mitchell1996evolving; @mitchell1993revisiting]. Here, we show that deep convolutional neural networks are capable of representing arbitrary cellular automata, and we demonstrate an example network architecture that smoothly and repeatably learns an arbitrary CA using standard loss gradient-based training. Our approach takes advantage of the "mean field limit" for large networks [@nguyen2019mean; @neal2012bayesian; @chen2018dynamical], for which we find that trained networks express a universal sparse representation of CA based on depthwise consolidation of similar inputs. The effective depth of this representation, however, depends on the entropy of the CA's underlying rules. Equivalence between cellular automata and convolutional neural networks ======================================================================= [*Cellular automata.*]{} We define a CA as a dynamical system with $M$ possible states, in which each cell updates its value based on the $D$ cells in its neighborhood—usually itself and its immediate neighbors in a square lattice. There are $M^D$ possible unique $M$-ary input strings to a CA function, which we individually refer to as $\sigma$. A cellular automaton implements an operator $\mathcal G(\sigma)$ that is fully specified by a list of transition rules $\sigma \rightarrow m$, $m \in 0, 1, ..., M-1$, and there are $M^{M^D}$ possible unique $\mathcal G(\sigma)$, each implementing a different ruleset. For the Game of Life, $M=2, D=9$, and so $\mathcal G(\sigma)$ is a Boolean function that maps each of the $2^9=512$ possible $9$-bit input strings to a single bit. A defining feature of CA is the locality of the dynamical update rule, which ensures that the rule domain is small; the size of $D$ thus sets an upper bound on the rate at which information propagates across space. [*Convolutional neural networks.*]{} We define a convolutional neural network as a function that takes as an input a multichannel image, to which it applies a series of local convolutions via a trainable "kernel". The same kernel is applied to all pixels in the image, and each convolutional layer consolidates information within a fixed local radius of each pixel in the input image [@lecun2015deep]. Many standard convolutional architectures include "pooling" layers, which downsample the previous layer and thereby consolidate local information across progressively larger spatial scales; however, none of the CNNs discussed in this paper include downsampling steps, and they thus preserve the full dimensionality of the input image. 
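To make these definitions concrete, the Game of Life update can be written as a single neighbor-counting convolution followed by a pointwise rule. The sketch below is our own illustration (not code from the paper's repository), and the periodic boundary is an assumption chosen for simplicity:

```python
import numpy as np
from scipy.signal import convolve2d

# 3x3 kernel that counts living neighbors (excluding the center cell)
NEIGHBORS = np.array([[1, 1, 1],
                      [1, 0, 1],
                      [1, 1, 1]])

def life_step(grid):
    """One synchronous update of Conway's Game of Life on a toroidal grid."""
    n = convolve2d(grid, NEIGHBORS, mode="same", boundary="wrap")
    # A cell is alive next step iff it has 3 living neighbors,
    # or it is currently alive and has 2 living neighbors.
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

# Example: evolve a glider on a 10x10 lattice
grid = np.zeros((10, 10), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1
for _ in range(4):
    grid = life_step(grid)  # after 4 steps the glider has moved one cell diagonally
```

The convolution in `life_step` is exactly the kind of local, weight-shared operation that a single $3\times3$ CNN layer computes, which is the analogy developed next.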
[*Cellular automata as recurrent mlpconv networks.*]{} The primary analogy between cellular automata and traditional convolutional neural networks arises from (1) the locality of the dynamics, and (2) simultaneous temporal updating of all spatial points. Because neural networks can, in principle, act as universal function approximators [@cybenko1989approximation], a sufficiently complex neural network architecture can be used to fully approximate each rule $\sigma \rightarrow m$ that comprises the CA function $\mathcal G(\sigma)$. This single-neighborhood operator can then be implemented as a convolutional operator as part of a CNN, allowing it to be applied synchronously to all pixel neighborhoods in an input image. Representing a CA with a CNN thus requires two steps: feature extraction in order to identify each of the $M^D$ input cases describing each neighborhood, followed by association of each neighborhood with an appropriate output pixel. In the appendix, we show explicitly how to represent [*any*]{} CA using a single convolutional layer, followed by repeated $1\times1$ convolutional layers. The appropriate weights can be found analytically using analysis of the CA itself, rather than via algorithmic training on input data. In fact, we find that many representations are possible; we show that one possible approach defines a shallow network that uniquely matches each of the $M^D$ input $\sigma$ against a template, while another approach treats layers of the network like levels in a tree search that iteratively narrows down each input $\sigma$ to the desired output $m$. A key aspect of our approach is our usage of only one non-unity convolutional layer (with size $3 \times 3$ for the case of the Game of Life), which serves as the first hidden layer in the network. The receptive field of these convolutional neurons is equivalent to the neighborhood $D$ of the CA. All subsequent layers consist of $1 \times 1$ convolutions, which do not consolidate any additional neighbor information. Our use of $1\times1$ convolutions to implement the logic of the CA rule table is inspired by recent work showing that such layers can greatly increase network expressivity at low computational cost [@szegedy2015going]. Moreover, because CA are explicitly local, the network requires no pooling layers—making the network the equivalent of fitting a small, convolutional multilayer perceptron or “mlpconv” to the CA [@lin2013network; @szegedy2015going]. Our general approach is comparable to previous uses of deep convolutional networks to parallelize simple operations such as binary arithmetic [@kaiser2016neural], and it differs from efforts using less-common network types with sigma-pi units, in which individual input bits can gate one another [@wulff1993learning]. Figure \[model\] shows an example analytical mlpconv representation of the Game of Life, in which the two salient features for determining the CA evolution (the center pixel value and the number of neighbors) are extracted via an initial $3\times3$ convolution, the results of which are passed to additional $1\times1$ convolutional layers in order to generate a final output prediction (exact weights are given in Supplementary Material). The number of separate convolutions (four with the neighbor filter with different biases, and one with the identity filter) is affected by our choice of ReLU activations (the current best practice for deep convolutional networks) instead of traditional neurons with saturating nonlinearities [@nair2010rectified]. 
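The exact supplementary weights are not reproduced here, but one self-consistent assignment is easy to verify by hand. Writing $t = 2n + c$, where $n$ is the number of living neighbors and $c$ is the center value, a cell is alive at the next step if and only if $5 \leq t \leq 7$; because $t$ is an integer, this indicator equals $\mathrm{ReLU}(t-4) - \mathrm{ReLU}(t-5) - \mathrm{ReLU}(t-7) + \mathrm{ReLU}(t-8)$. The sketch below implements this as one shared $3\times3$ filter with four biases and a single $1\times1$ readout; it is our construction, not necessarily the weight assignment in the Supplementary Material:

```python
import numpy as np
from scipy.signal import convolve2d

# Shared 3x3 filter: weight 2 on each neighbor and 1 on the center,
# so the pre-activation is t = 2*(live neighbor count) + (center value).
KERNEL = np.array([[2, 2, 2],
                   [2, 1, 2],
                   [2, 2, 2]])
BIASES = np.array([-4.0, -5.0, -7.0, -8.0])  # four ReLU units share the filter
READOUT = np.array([1.0, -1.0, -1.0, 1.0])   # final 1x1 weighted combination

def life_step_relu(grid):
    """Game of Life as a 3x3 convolution plus a 1x1 ReLU combination."""
    t = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
    h = np.maximum(t[..., None] + BIASES, 0.0)  # four ReLU feature maps
    return (h @ READOUT).astype(int)

# Agreement with life_step() from the previous sketch on a random field:
rng = np.random.default_rng(0)
g = rng.integers(0, 2, (20, 20))
assert np.array_equal(life_step_relu(g), life_step(g))
```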
Many alternative and equivalent representations may be defined, underscoring the expressivity of multilayer perceptrons when representing simple functions like CA. ![image](fig_gol.pdf){width=".8\linewidth"} A general network architecture for learning arbitrary cellular automata ======================================================================= Having proven that arbitrary cellular automata may be analytically represented by convolutional perceptrons with finite layers and units, we next ask whether automated training of neural networks on time series of cellular automata images is sufficient to learn their rules. We investigate this process by training ensembles of convolutional neural networks on random binary images and random CA rulesets. We start by defining a CA as an explicit mapping between each of the $2^9=512$ possible $3\times3$ pixel groups in a binary image and a single output pixel value. We then apply this map to an ensemble of random binary images (the training data), in order to produce a new output binary image set (the training labels). Here, we use large enough images ($10\times10$ pixels) and training data batches ($500$ images) to ensure that the training data contains at least one instance of each rule. On average, each image contains an equal number of black and white pixels; for sufficiently large images this ensures that each of the $512$ input states is equally probable. We note that, in principle, training the network will proceed much faster if the network is shown an example of only one rule at a time. However, such a process causes the network structure to depend strongly on the order in which individual rules were shown, whereas presenting all input cases simultaneously forces the network to learn internal rule representations based on their relative importance for maximizing accuracy. [*Network architecture and training parameters.*]{} Figure \[mlpconv\] shows the network used in our training experiments. Our network consists of a basic mlpconv architecture corresponding to a single $3\times3$ convolutional layer, followed by a variable number of $1\times1$ convolutional layers [@lin2013network]. No pooling layers are used, and the parameters in the $3\times3$ and $1\times1$ layers are trained together. The final hidden layer consists of a weighted summation, which generates the predicted value for the next state of a lattice site. Empirically, including a final "prediction" layer with a softmax classifier accelerates training on binary CA by reducing the dependence of convergence on initial neuron weights; however, we omit this step here in order to allow the same architecture to readily be generalized for CA with $M>2$. Our network may thus be considered a fully convolutional linear committee machine. We trained our networks using the Adam optimizer with an L2 norm loss function, with hyperparameters (learning rate, initial weights, etc.) optimized via a grid search (see Appendix for all hyperparameters). Because generating new training data is computationally inexpensive, a new, unseen validation dataset was generated for each stage of hyperparameter tuning. Additionally, validation was performed using randomly-chosen, unseen CA rulesets in order to ensure that network hyperparameters were not tuned to specific CA rulesets. During training, a second validation dataset $20\%$ of the size of the training data was generated from the same CA ruleset. 
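As a sketch of this data-generation step (our own illustration; the paper's actual pipeline lives in the convoca repository linked below), each $3\times3$ neighborhood can be encoded as a 9-bit integer and looked up in a random 512-entry rule table:

```python
import numpy as np

rng = np.random.default_rng(0)
rule_table = rng.integers(0, 2, 512)      # a random binary Moore-neighborhood CA

# Powers of two that turn each 3x3 binary neighborhood into a 9-bit index
WEIGHTS = 2 ** np.arange(9).reshape(3, 3)

def neighborhood_index(img):
    """9-bit integer index of every pixel's 3x3 neighborhood (periodic)."""
    idx = np.zeros_like(img)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            idx += WEIGHTS[di + 1, dj + 1] * np.roll(img, (di, dj), axis=(0, 1))
    return idx

def apply_rule(img, table):
    """One synchronous CA step: look every neighborhood up in the rule table."""
    return table[neighborhood_index(img)]

# A batch of 500 random 10x10 binary images and their one-step evolutions
X = rng.integers(0, 2, (500, 10, 10))
y = np.stack([apply_rule(x, rule_table) for x in X])
```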
Training was stopped when the network prediction accuracy reached 100% on this secondary validation dataset, after rounding predictions to the nearest integer. The loss used to compute gradients for the optimizer was not rounded. The final, trained networks were then applied to a new dataset of unseen test data (equal in size to five batches of training data). We found that training successfully converged for all CA rulesets studied, and we note that our explicit use of a convolutional network architecture simplifies learning of the full rule table. Because we are primarily interested in using CNNs as a way to study internal representations of CA rulesets, we emphasize that $100\%$ performance on the second validation dataset was a condition of stopping training. As a result, all trained networks had identical performance; however, the duration and dynamics of training varied considerably by CA ruleset (discussed below). Regardless of whether weight-based regularization was used during training, we found that performance on the unseen test data was within $\sim 0.3\%$ of the training data for all networks studied (after outputs are rounded, performance reaches $100\%$, as expected). We caution, however, that this equal train-test performance should not be interpreted as a measure of generalizability, as would be the case for CNNs used to classify images, etc. [@goodfellow2016deep]. Rather, because a CA only has $M^D$ possible input-output pairs (rather than an unlimited space of inputs), this result simply demonstrates that training was stopped at a point where the model had encountered and learned all inputs. In fact, we note that it would be impossible to train a network to represent an arbitrary CA without being exposed to all of its inputs: since an arbitrary CA can send any given input $\sigma$ to any given output $m$, there is no way for a network to predict the output for a symbol without having encountered it previously. However, we note that a network could, in principle, encode a prior expectation for an unseen input symbol $\sigma$, if it was trained primarily on CA of a certain type. In a previous work that used a one-layer network to learn the rules of a chaotic CA, it was found that training without weight-sharing prevents full learning, because different spatial regions on the system's attractor have different dynamical complexity [@wulff1993learning]. In the results below, we deliberately use very large networks with 12 hidden layers—one $3\times3$ convolutional layer, followed by eleven $1\times1$ convolutional layers, all with 100 neurons per layer. These large networks ensure that the network can represent the CA ruleset in as shallow or deep a manner as it finds—and we expect and observe that many fewer neurons per layer are used than are available. 
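A minimal sketch of this architecture in modern tf.keras follows (the original implementation used TensorFlow 1.8; the widths and depth match the description above, while the learning rate and the zero boundary padding are placeholder assumptions on our part):

```python
import tensorflow as tf

def make_mlpconv(width=100, depth=12):
    """One 3x3 convolution reads each Moore neighborhood; all remaining
    layers are 1x1 convolutions, so no further spatial information is mixed in."""
    layers = [tf.keras.layers.Conv2D(width, 3, padding="same",
                                     activation="relu",
                                     input_shape=(None, None, 1))]
    layers += [tf.keras.layers.Conv2D(width, 1, activation="relu")
               for _ in range(depth - 1)]
    layers += [tf.keras.layers.Conv2D(1, 1, activation=None)]  # linear readout
    return tf.keras.Sequential(layers)

model = make_mlpconv()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")  # L2-type loss
# Using the (X, y) batch from the data-generation sketch above:
# model.fit(X[..., None].astype("float32"), y[..., None].astype("float32"),
#           batch_size=32, epochs=100)
```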
Additionally, we trained alternative networks using a different optimizer (vanilla stochastic gradient descent) and loss function (cross-entropy loss), and found nearly identical internal structure in the trained networks (as discussed below); however, the form of the loss curves during training was more concave for such networks. See the supplementary material for further details of networks and training. Figure \[training\]A shows the results of training a single network on the Game of Life, and then applying the trained network to the "glider," a known soliton-like solution to the Game. During the early stages of the training, the activations appear random and intermittent. As training proceeds, the network adjusts to the scale of output values generated by the input data, and then begins to learn clusters of related rules—leading to tightening of the output image and trimming of spurious activation patterns. ![image](fig_mlpconv.png){width=".8\linewidth"} Analysis of trained networks ============================ We next consider the relevance of our training observations to the general properties of binary cellular automata. Intuition would suggest that certain sets of CA rules are intrinsically easier to learn, regardless of $M$ and $D$; for example, a null CA that sends every input to zero in a single timestep requires a trivial network structure, while the Game of Life should require a structure like Figure \[model\] that can identify each possible neighborhood count. We thus repeat the training data generation and CA network training process described above, except this time we sample CA at random from the $2^{2^9} \approx 10^{154}$ possible rulesets for binary CA. The complexity of the dynamics produced by a given rule is generally difficult to ascertain [*a priori*]{}, and typical efforts to systematically investigate the full CA rule space have focused on comparative simulations of different rules [@wolfram1983statistical; @langton1990computation]. For example, the Game of Life is a member of a unique set of "Class IV" CA capable of both chaotic and regular dynamics depending on their initial state; membership in this class has been hypothesized to be a prerequisite to supporting computational universality [@wolfram1983statistical; @adamatzky2010game]. General prediction of dynamical class is an ongoing question in the CA literature [@mitchell1996evolving]; however, there is a known, approximate relationship between the complexity of simulated dynamics and the relative fraction $\lambda$ of transitions to zero and one among the full set of $512$ possible input cases: $\lambda=0$ and $\lambda=1$ correspond to null CA, whereas $\lambda=0.5$ corresponds to CA that send equal numbers of input cases to $0$ and $1$ [@langton1990computation]. This captures the general intuition that CA typically display richer dynamics when they have a broader range of output symbols [@feldman2008organization; @adamatzky2012collision]. Here, instead of using $\lambda$ directly, we parametrize the space of CA equivalently using the effective "rule entropy," $\mathcal H_{ca}$. We define $\mathcal H_{ca}$ by starting from a maximum-entropy image with a uniform distribution of input symbols ($p_\sigma \approx 1/M^D$ for all $\sigma$), to which we then apply the CA rule once and then record the new distribution of input cases, $p_\sigma'$. 
The residual Shannon entropy $\mathcal{H}_{ca} \equiv -\sum_\sigma p'_\sigma \log_2 p'_\sigma$ provides a measure of the degree to which the CA rules compress the space of available states. $\mathcal H_{ca}(\lambda)$ monotonically increases from $\mathcal H_{ca}(0) = 0$ until it reaches a global maximum at $\mathcal H_{ca}(1/2) = 9$, after which it symmetrically decreases back to $\mathcal H_{ca}(1) = 0$. Figure \[training\]B shows the result of training $2560$ randomly-sampled CA with different values of $\mathcal H_{ca}$. Ensembles of $512$ related cellular automata were generated by randomly selecting single symbols in the input space to transition to $1$ (starting with the null case $\sigma \rightarrow 0$ for all $\sigma$), one at a time, until reaching the case $\sigma \rightarrow 1$ for all $\sigma$. This “table walk” sampling approach [@langton1990computation] was then replicated $5$ times for different starting conditions. We observe that the initial $10-100$ training epochs are universal across $\mathcal H_{ca}$. Detailed analysis of the activation patterns across the network (Supplementary material) suggests that this transient corresponds to initialization, wherein the network learns the scale and bounds of the input data. Recent studies of networks trained on real-world data suggest that this initialization period consists of the network finding an optimal representation of the input data [@shwartz2017opening]. During the next stage of training, the network begins to learn specific rules: the number of neurons activated in each layer begins to decrease, as the network becomes more selective regarding which inputs provoke non-zero network outputs (see supplementary material). Because $\mathcal H_{ca}$ determines the sparsity of the rule table—and thus the degree to which the rules may be compressed—$\mathcal H_{ca}$ strongly affects the dynamics of this phase of training, with simpler CA learning faster and shallower representations of the rule table, resulting in smaller final loss values (Figure \[training\]B, inset). This behavior confirms general intuition that more complicated CA rules require more precise representations, making them harder to learn. A key feature of using large networks to fit simple functions like CA is strong repeatability of training across different initializations and CA rulesets. In the appendix, we reproduce all results shown in the main text using networks with different sizes and depths, and even a different optimizer, loss function, and other hyperparameters, and we report nearly identical results (for both training and test data) as those found using our network architecture described above. On both the training data and test data, we find similar universal training curves that depend on $\mathcal H_{ca}$, as well as distributions of activation patterns. This universality is not observed in “narrow” networks with fewer neurons per layer, for which training proceeds as a series of plateaus in the loss punctuated by large drops when the stochastic optimizer happens upon new rules. In this limit, randomly-chosen CA rulesets will not consistently result in training successfully finding all correct rules and terminating. Moreover, small networks that do terminate do not display apparent patterns when their internal structure is analyzed using the approaches described below—consistent with a random search. 
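As an aside, the rule entropy $\mathcal H_{ca}$ used throughout this comparison is straightforward to estimate empirically. The sketch below is ours and reuses neighborhood_index() from the data-generation sketch above; the image size simply trades accuracy for speed:

```python
import numpy as np

def rule_entropy(table, size=512, seed=0):
    """Empirical H_ca: entropy (in bits) of the 3x3 neighborhood distribution
    after one application of the rule to a maximum-entropy random image."""
    img = np.random.default_rng(seed).integers(0, 2, (size, size))
    evolved = table[neighborhood_index(img)]          # one synchronous CA step
    counts = np.bincount(neighborhood_index(evolved).ravel(), minlength=512)
    p = counts[counts > 0] / evolved.size
    return float(-(p * np.log2(p)).sum())

# rule_entropy(np.zeros(512, dtype=int)) -> 0.0 for the null CA, while a
# random table with lambda = 0.5 gives a value near the maximum of 9 bits.
```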
Similar loss dynamics have previously been observed when CA are learned using genetic algorithms, in which the loss function remains mostly flat, punctuated by occasional leaps when a mutant encounters a new rule [@mitchell1996evolving]. For gradient-based training, similar kinetic trapping occurs in the vicinity of shallow minima or saddle points [@saad1995line; @dauphin2014identifying], but these effects are reduced in larger networks such as those used here. ![image](fig_train.pdf){width=".8\linewidth"} Information-theoretic quantification of activations. ==================================================== That training thousands of arbitrary CA yields extremely similar training dynamics suggests that deep networks trained using gradient optimizers learn a universal approach to approximating simple functions like CA. This motivates us to next investigate how exactly the trained networks represent the underlying CA rule table—do the networks simply match entire input patterns, or do they learn consolidated features such as neighbor counts? Because the intrinsic entropy of the CA rule table affects training, we reason that the entropy of activated representations at each layer is a natural heuristic for analyzing the internal states of the network. We thus define a binary measure of activity for each neuron in a fully-trained network: when the network encounters a given input $\sigma$, any neurons that produce a non-zero output are marked as $1$ (or $0$ otherwise), resulting in a new set of binary strings $a(\sigma)$ denoting the rounded activation pattern for each input $\sigma$. For example, in an mlpconv network with only 3 layers, and 3 neurons per layer, an example activation pattern for a specific input $\sigma_1$ could yield $a(\sigma_1) = \{010,000,011\}$, with commas demarcating layers. Our approach constitutes a simplified version of efforts to study deep neural networks by inspecting activation pattern “images” of neurons in downstream layers when specific input images are fed into the network [@schoenholz2016deep; @erhan2009visualizing; @poole2016exponential; @chen2018dynamical]. However, for our system binary strings (thresholded activation patterns) are sufficient to characterize the trained networks, due to the finite space of input-output pairs for binary CA, and the large size of our networks; in our investigations, no cases were found in which two different inputs ($\sigma, \sigma'$) produced different unrounded activation patterns, but identical patterns after binarization ($a(\sigma), a(\sigma')$). Given the ensemble of input symbols $\sigma\in\{0,1\}^D$, and a network consisting of $L$ layers each containing $N$ neurons, we can define separate symbol spaces representing activations of the entire network $a_{\mathsf{T}}(\sigma) \in\{0,1\}^{LN}$; each individual layer, $a_{{\mathsf{L}},i}(\sigma) \in\{0,1\}^{N}$, $i \in [0,L-1]$; and each individual neuron $a_{{\mathsf{N}},ij}(\sigma) \in\{0,1\}$, $i \in [0,L-1]$, $j \in [0,N-1]$. Averaging over test data consisting of an equiprobable ensemble of all $M^D$ unique input cases $\sigma$, we can then calculate the probability $p_{\alpha,k}$ for observing a given unique symbol $a_{k}$ at a level $\alpha \in \{{\mathsf{T}}, {\mathsf{L}}, {\mathsf{N}}\}$ in the network. We quantify the uniformity of each activation symbol distribution $p$ using the entropy $\mathcal H_\alpha = -\sum_k p_{\alpha, k} \log_2 p_{\alpha, k}$, which satisfies $\mathcal{H}_\alpha \leq {\text{dim}{(\alpha)}} $. 
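In code, this bookkeeping reduces to counting unique binarized activation rows. A sketch, assuming the activations for all $512$ input cases have been collected into one $(512, N)$ array per layer (the variable names and shapes are our own conventions):

```python
import numpy as np

def pattern_entropy(patterns):
    """Shannon entropy (bits) of the distribution of unique rows."""
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def activation_entropies(activations):
    """activations: list of (512, N) float arrays, one per layer.

    Returns (H_T, H_L, H_N): the total entropy, the per-layer entropies,
    and the per-neuron entropies of the thresholded activation patterns.
    """
    binary = [(a > 0).astype(np.uint8) for a in activations]
    H_L = [pattern_entropy(b) for b in binary]
    H_T = pattern_entropy(np.concatenate(binary, axis=1))
    H_N = [[pattern_entropy(b[:, [j]]) for j in range(b.shape[1])]
           for b in binary]
    # The total correlation of layer i, used later in the text, is then
    # I_i = sum(H_N[i]) - H_L[i].
    return H_T, H_L, H_N
```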
We condense notation and refer to the activation entropies $\mathcal H_{\mathsf{T}}$, $\mathcal{H}_{{\mathsf{L}},i}$, $\mathcal{H}_{{\mathsf{N}},{ij}}$ as the total entropy, the entropy of the $i^{th}$ layer, and the entropy of the $j^{th}$ neuron in the $i^{th}$ layer. We note that, in addition to readily quantifying the number of unique activation patterns and their uniformity across input cases, the Shannon entropy naturally discounts zero-entropy "dead neurons," a common artifact of training high-dimensional ReLU networks [@nair2010rectified]. Our general analysis approach is related to a recently-developed class of techniques for analyzing trained networks [@raghu2017svcca], in which an ensemble of training data (here, a uniform distribution of $\sigma$) is fed into a trained network in order to generate a new statistical observable (here, $\mathcal{H}$). We expect and observe that $\langle \mathcal{H}_{{\mathsf{N}},{ij}} \rangle_{ij} < \langle \mathcal{H}_{{\mathsf{L}},i} \rangle_{i} \leq \mathcal{H}_{\mathsf{T}}$. Unsurprisingly, the maximum entropy of a single neuron is $\log_2{2}=1$, and all multi-neuron layers generate more than two patterns across the test data. We also observe that $\mathcal{H}_{\mathsf{T}}\approx 9$ for all networks trained, suggesting that the overall firing patterns in the network differed for every unique input case—even for trivial rules like $\lambda = 0$ where a network with all zero weights and biases would both correctly represent the rule table, and have identical firing patterns for all inputs ($\mathcal{H}_{\mathsf{T}}=0$). This effect directly arises from training using gradient-based methods, for which at least some early layers in the network produce unique activation patterns for each $\sigma$ that are never condensed during later training stages. Accordingly, regularization using either a total weight cost or dropout reduces $\mathcal{H}_{\mathsf{T}}$. Comparing $\mathcal{H}_{{\mathsf{L}},i}$ across models and layers demonstrates that early layers in the network tend to generate a broad set of activation patterns that closely follow the uniform input symbol distribution (Figure \[entropy\]A). These early layers in the network thus remain saturated at $\mathcal{H}_{{\mathsf{L}},i} = \mathcal{H}_{\mathsf{T}}\approx 9$; however, in deeper layers progressively lower entropies are observed, consistent with fewer unique activation patterns (and a less uniform distribution across these strings) appearing in later layers. These trends depend strongly on the CA rules (coloration). In the figure, dashed lines allow comparison of $\mathcal{H}_{{\mathsf{L}},i}$ to theoretical predictions for the layerwise entropy for the different ways that a CNN can represent the CA. The uppermost dashed curve corresponds to a network that generates a maximum entropy set of $512$ equiprobable activation patterns in each layer. This case corresponds to a "shallow" network that matches each input case to a unique template at each layer. Lower dashed curves correspond to predictions for networks that implement the CA as layerwise search, in which $\sigma$ that map to the same output $m$ are mapped to the same activation pattern at some point before the final layer. This corresponds to a progressive decrease in the number of unique activation patterns in each layer. The two dashed curves shown correspond to theoretical networks that eliminate $45\%$ and $50\%$ of unique activation patterns at each layer. 
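One simple way to generate such reference curves, under our added assumptions that the surviving activation patterns remain equiprobable and that their count cannot drop below the two output classes, is the following sketch:

```python
import numpy as np

def search_entropy_curve(n_layers, merge_frac, n_inputs=512, floor=2):
    """Reference H_L(i) for a network that eliminates a fixed fraction of its
    unique activation patterns at each layer; with equiprobable patterns,
    the entropy is log2 of the surviving pattern count."""
    counts = n_inputs * (1.0 - merge_frac) ** np.arange(n_layers)
    return np.log2(np.maximum(counts, floor))

shallow = search_entropy_curve(12, 0.00)    # template matching: flat at 9 bits
search45 = search_entropy_curve(12, 0.45)   # layerwise search, 45% merged/layer
search50 = search_entropy_curve(12, 0.50)   # layerwise search, 50% merged/layer
```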
![image](fig_ent_new.pdf){width=".8\linewidth"} We find that higher entropy rules $\mathcal H_{ca}$ (red points) tend to produce shallower networks due to the rule table being less intrinsically compressible; whereas simpler CA (blue points) produce networks with more binary tree-like structure. This relationship has high variance in early layers, making it difficult to visually discern in the panel save for the last layer. However, explicit calculation of the Pearson correlation $r(\mathcal H_{ca},\mathcal H_{{\mathsf{L}},i})$ confirms its presence across all layers of the network, and that it becomes more prominent in deeper layers (Figure \[entropy\]A, inset). This trend is a consequence of training the network using backpropagation-based techniques, in which loss gradients computed at the final, $L^{th}$ hidden layer are used to update the weights in the previous $(L-1)^{th}$ layer, which are then used to update the $(L-2)^{th}$ layer, and so forth [@arora2014provable]. During training, the entropy of the final layer increases continuously until it reaches a plateau determined by the network size and by $\mathcal H_{ca}$. The penultimate layer then increases in entropy until reaching a plateau, and so forth until $\mathcal{H}_{\mathsf{T}}= 9$ across all $\sigma$—at which point training stops because the test error will reach zero (training dynamics are further analyzed in the Supplementary Material). This general correlation between CA entropy and network structure is consistent with earlier studies in which networks were trained to label CA rulesets by their dynamical complexity class [@gorodkin1993neural]. The role of $\mathcal H_{ca}$ on internal representation distributions $p_L$ can be further analyzed using Zipf plots of activation pattern $a_k$ frequency versus rank (Supplementary Material): the resulting plots show that the distribution of activation symbols is initially uniform (because the training data has a uniform distribution of $\sigma$), but the distribution becomes progressively narrower and more peaked in later layers. This process occurs more sharply for networks trained on CA with smaller $\mathcal H_{ca}$. We next consider how the entropy of our observed layer activation patterns relates to the entropy of the individual neurons $\mathcal{H}_{{\mathsf{N}},{ij}}$ that comprise them; we suspect there is a relation because the individual firing entropies determine the “effective” number of neurons in a layer, $N_{\text{eff}} = 2^{\sum_j \mathcal{H}_{{\mathsf{N}},{ij}}}$. Across all layers, we observe a linear relationship between $\mathcal{H}_{{\mathsf{N}},{ij}}$ and $\mathcal{H}_{{\mathsf{L}},i}$, which saturates when $\mathcal{H}_{{\mathsf{L}},i} \approx \mathcal{H}_{\mathsf{T}}$ (Figure \[entropy\]B). The lower-$\mathcal H_{ca}$ CA lie within the linear portion of this plot, suggesting that variation in activation patterns in this regime results from layers recruiting varying numbers of neurons. Conversely, higher-entropy CA localize in a saturated region where each layer encodes a unique activation pattern for each unique input state, leading to no dependence on the total effective number of neurons. This plot explains our earlier observation that the dynamics of training do not depend on the exact network shape as long as the network has sufficiently many neurons: for low $\mathcal H_{ca}$, layers never saturate, and are free to recruit more neurons until they are able to pattern-match every unique input (at intermediate and large $\mathcal H_{ca}$). 
A CA with more possible input states (larger $M$ or $D$) would thus require more neurons per layer to enter this large-network limit. We also consider the degree to which the decrease in $\mathcal{H}_{{\mathsf{L}},i}$ with $i$ arises from deeper layers becoming "specialized" to specific input features, a common observation for deep neural networks [@arora2014provable; @lecun2015deep; @erhan2009visualizing]. We quantify the layer specialization using the total correlation, a measure of the mutual information between the activation patterns of a layer, and the neurons within that layer: $\mathcal I_i= \sum_j \mathcal{H}_{{\mathsf{N}},{ij}} - \mathcal{H}_{{\mathsf{L}},i}$. This quantity is minimized ($\mathcal I_i=0$) when the single neuron activations within a layer are independent of one another; conversely, at the maximum value individual neurons only activate jointly in the context of forming a specific layer activation pattern. Plots of $\mathcal I_i$ vs. $i$ (Supplementary material) reveal that during early layers, individual neurons tend to fire independently, consistent with multi-neuron features being unique to each input case. In these early layers, $\mathcal I_i$ is large because the number of possible activation patterns in a single layer of the large network ($2^{100}$) is much larger than the number of input cases ($2^9$). In later layers, however, the correlation begins to decrease, consistent with individual neurons being activated in the context of multiple input cases—indicating that these neurons are associated with features found in multiple input cases, like the states of specific neighbors. Calculation of $r(\mathcal I_i, \mathcal H_{ca})$ confirms that this effect varies with $\mathcal H_{ca}$. Discussion ========== We have shown an analogy between convolutional neural networks and cellular automata, and demonstrated a type of network capable of learning arbitrary binary CA using standard techniques. Our approach uses a simple architecture that applies a single $3 \times 3$ convolutional layer in order to consolidate the neighborhood structure, followed by repeated $1\times1$ convolutions that perform local operations. This architecture is capable of predicting output states using a mixture of shallow pattern-matching and deep layer-wise tree searching. After training an ensemble of networks on a variety of CA, we find that our networks structurally encode generic dynamical features of CA, such as the relative entropy of the rule table. Further work is necessary to determine whether neural networks can more broadly inform efforts to understand the dynamical space of CA, including fundamental efforts to relate a CA's [*a priori*]{} rules to its apparent dynamical complexity during simulation [@wolfram1983statistical; @mitchell1993revisiting; @feldman2008organization]—for example, do Class IV and other complex CA impose unique structures upon fitted neural networks, or can neural networks predict their computational complexity given a rule table? These problems and more general studies of dynamical systems will require more sophisticated approaches, such as unsupervised training and generative architectures (such as restricted Boltzmann machines). More broadly, we note that studying the bounded space of CA has motivated our development of general entropy-based approaches to probing trained neural networks. 
In future work we hope to relate our observations to more general patterns observed in studies of deep networks, such as the information bottleneck [@shwartz2017opening]. Such results may inform analysis of open-ended dynamical prediction tasks, such as video prediction, by showing a simple manner in which process complexity manifests as structural motifs. Acknowledgments =============== W.G. was supported by the NSF-Simons Center for Mathematical and Statistical Analysis of Biology at Harvard University, from NSF Grant No. DMS-1764269, and from the Harvard FAS Quantitative Biology Initiative. He was also supported by the U.S. Department of Defense through the NDSEG fellowship program, as well as by the Stanford EDGE-STEM fellowship program. Code availability ================= Convolutional neural networks were implemented in Python 3.4 using TensorFlow 1.8 [@abadi2016]. Source code is available at <https://github.com/williamgilpin/convoca>. Appendix ======== Representing arbitrary CA with convolutional neural networks ------------------------------------------------------------ Here we show explicitly how a standard [*mlpconv*]{} multilayer perceptron architecture with ReLU activation is capable of representing an arbitrary $M$-state cellular automaton with a finite depth and neuron count [@lin2013network]. We provide the following explicit examples primarily as an illustration of the ways in which $1\times1$ convolutions may be used to implement arbitrary CA using a perceptron; we note that real-world networks trained using optimizers will find many other heuristics and representations. We provide the two analytic cases below for concreteness, and to illustrate two important limits: pattern-matching templates for each unique input across the entire network, or using individual layers to eliminate cases until the appropriate output symbol has been identified. ### Pattern-matching the rule table with a shallow network An arbitrary M-state cellular automaton can first be converted into a one-hot binary representation. Given an $L\times L$ image, we seek to generate an $L\times L\times M$ stack of binary activation images: 1. Convolve the input layer with $M$ distinct $1\times1$ convolutional filters with unit weights, and with biases given by $1,0,-1,...-(M-1)$. Now apply ReLU activation. 2. Convolve the resulting image with $M$ $1\times1$ convolutional filters with zero biases. Each of the first $(M-1)$ convolutional filters tests a different consecutive pair $[1,-b, 0,...,0]$, $[0,1,-b,0,...,0]$, $[0,0,1,-b, 0, ..., 0]$, $...$, $[0,...,0,1,-b]$, where $b$ is any positive constant $b\geq M/(M-1)$. The last convolutional filter is the identity $[0,...,0,1]$. Now apply ReLU activation again. This conversion step is not necessary when working with a binary CA. It requires a total of $(1+M)+M^2$ parameters and two layers to produce an activation volume of dimensions $L\times L\times M$. We now have an $L\times L\times (M-1)$ array corresponding to the one-hot encoding of each pixel's value in an $L\times L$ lattice. We now pattern match each of the $M^D$ possible inputs with its corresponding correct output value. 
We note that the steps we take below represent an upper bound; if the number of quiescent versus active states in the cellular automaton is known in advance ($=\lambda M^D$, where $\lambda$ is Langton’s parameter) [@langton1990computation], then the number of patterns to match (and thus total parameters) may be reduced by a factor of $\lambda$, because only the non-quiescent “active” rules that produce non-zero output values need to be matched. 1. Construct a block of $M^D$ $S\times S\times(M-1)$ convolutional filters, where $S$ corresponds to the neighborhood size of the CA ($S=3$ for a standard CA with a Moore neighborhood). Each of the $M^D$ filters simply corresponds to an image of each possible input state, with entries equalling one for each non-zero site, and large negative values (greater than $D(M-1)$) at each zero site. For cases when $M>2$, the depth of each convolutional kernel allows exact matching of different non-zero values. 2. Assign a bias to each of the $M^D$ filters based on the cellular automaton’s rule table. For $S\times S\times(M-1)$ inputs that should map to a non-zero value $q$, assign a bias of $(q-1)-(L-1)$, where $L$ is the number of non-zero sites in the neighborhood $L \leq D(M-1)$. This ensures that only exact matches to the rule will produce positive values under convolution. For inputs that should map to zero, assign any bias $\geq L$, such as $D(M-1)$. 3. Apply the ReLU function. ### Searching the rule table with a deep network {#appendix_deep_ca} Another way to represent a cellular automaton with a multilayer perceptron constitutes searching a subset of all possible inputs in each layer. This approach requires all input cases $\sigma$ that map to the same output symbol $m$, to also map to the same activation pattern at some layer of the network. This coalescence of different input states can occur at any point in the network before the final layer; here we outline a general approach for constructing maps to the same output symbol using large networks. [**Assigning input cases to a unique binary strings.**]{} Assume there are $N$ convolutional filters. If there are $M^D$ unique input cases, these filters can be used to generate an $n$-hot encoding of the input states. $n$ should be chosen such that ${N \choose n} \geq M^D$. Here, we assume a binary CA with a Moore neighborhood ($M=2$, $D=9$). If $N=100$ neurons are present in each layer, then a two-hot binary string ($n=2$) is sufficient to uniquely represent every possible input state of a binary Moore CA, using the following steps 1. The $D$ pixel neighborhood is split into $n$ sub-neighborhoods, with sizes we refer to as $D_1, D_2, .., D_n$. For example, for a the binary Moore CA, we can split the neighborhood into the first $5$ pixels (counted from top-left to the center) and the remaining $4$ pixels (the center pixel to the bottom right corner. Note that the number and dimensionality of these sub-neighborhoods must satisfy the condition: if $Q \equiv M^{D_1} + M^{D_2} + ... + M^{D_n}$, then ${N \choose Q} \geq M^D$. 2. Define $M^{D_1} + M^{D_2} + ... + M^{D_n}$ filters, which match each possible sub-neighborhood. For example, for the neighborhood reading $101000111$ from upper-left to bottom-right, two filters can be defined that will match sub-neighborhoods consisting of the first $5$ bits and the last $4$ bits, using the approach described above for pattern-matching. 
In this case, these filters would be $(1, -100, 1, -100, -100, -100, 0, 0, 0)$ with a bias of $-1$, and $(0, 0, 0, -100, -100, -100, 1, 1, 1)$ with a bias of $-2$.

3. Apply ReLU activation.

4. The resulting activation map will be an $n$-hot binary encoding of the input state: each unique input case activates exactly $n$ filters from the set of $N$, and distinct cases activate distinct sets of filters, thus creating a unique representation.

**Assigning input case binary strings to matching output symbols.** At this stage in the network, each input case has been mapped to a unique $N$-digit binary string with exactly $n$ ones within it. Successive $1\times1$ convolutional filters may now be used to combine different inputs into the same activation pattern. As a simple example, if $N=5$ then the possible input cases are $\sigma \in \{10001$, $10010$, $10100$, $11000$, $01001$, $01010$, $01100$, $00101$, $00110$, $00011\}$. Pairs of these cases can be merged by applying an appropriately chosen filter and bias $b=2$. For example, using the filter $W = (-1,-2,-1,0,-2)$ to perform the operation $h = \mathrm{ReLU}(W\cdot\sigma+b)$ results in an output of $1$ for the cases $\{10010, 00110\}$ only, and an output of $0$ for every other case. To match strings with no overlapping bits, more than two cases must be merged simultaneously; in general, to merge $H$ cases using this approach, two strings must have $H-1$ overlapping bits.

For the case of a binary CA with a Moore neighborhood, an example of a network analogous to a simple binary search would consist of filters that reduce the $512$ input cases to $512$ $2$-hot strings (in the first $3\times3$ convolutional layer). Subsequent $1\times1$ convolutions could then map these states to $256$ unique cases, then $128$, and so forth, until only two unique activation patterns remain: the first for input states that map to one, and the second for input states that map to zero. Depending on the $\lambda$ parameter of the CA rule table, the depth (and thus minimum number of layers) needed to perform this search would be at most $\log_2 512 - 1 = 8$ layers when $\lambda=0.5$ (i.e., when there are equal numbers of one and zero outputs in the rule table).

This case comprises just one example of performing a search using the depth of a network. Many variations are possible, because coalescence of two input states may occur in any layer. Moreover, while the above examples describe two input states being combined by each filter in a given layer, it is not difficult to construct alternative filters that combine more than two states at once. We thus expect considerable flexibility in the ways that a network trained algorithmically (rather than constructed by hand) can internally represent input states with similar features and similar outputs; these different approaches all manifest as an overall decrease in the number of unique activation patterns observed across the depth of the network.

### Network representation of the Game of Life

We note that there are many other ways to implement a CA that are neither exactly a layerwise depth search nor a shallow pattern match, depending on the number and type of features checked at each layer of the network. For example, each of the $D$ pixels in the neighborhood of the CA can be checked with a separate convolutional kernel in the first layer, and different combinations of these values can then be checked in subsequent layers. The shallow network described above represents an extreme case, in which every value of the full input space is explicitly checked in the first layer.
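To make this shallow extreme concrete, the sketch below applies one matching filter per active input case to advance an arbitrary binary Moore-neighborhood CA by one step. It follows the pattern-matching recipe above (weights of $+1$ on ones, large negative weights on zeros, and a bias that makes only exact matches positive); the NumPy/SciPy phrasing and the guard constant $100$ are our own choices for illustration.

```python
import numpy as np
from scipy.signal import correlate2d

def ca_step_shallow(x, rule, big=100.0):
    """One step of a binary Moore-neighborhood CA by exhaustive matching.

    x    : (L, L) binary lattice, treated with periodic boundaries.
    rule : length-512 binary array; rule[i] is the output for the
           neighborhood whose bits, read top-left to bottom-right,
           spell out the integer i.
    """
    out = np.zeros(x.shape)
    for i in np.flatnonzero(rule):  # only 'active' cases need filters
        bits = np.array([(i >> (8 - k)) & 1 for k in range(9)],
                        dtype=float).reshape(3, 3)
        W = np.where(bits == 1, 1.0, -big)  # +1 on ones, -big on zeros
        bias = -(bits.sum() - 1)            # exact match -> exactly 1
        h = correlate2d(x, W, mode="same", boundary="wrap") + bias
        out += np.maximum(h, 0)             # ReLU; at most one filter fires
    return out
```

Note that only the active cases receive filters, as in the Langton-parameter argument above; the quiescent cases contribute nothing to the sum.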
This implementation is efficient for many CA, because of the low cost of performing multiple numerical convolutions. However, for CA with large $M$ or $D$, the layer-wise search method may be preferable.

For the Game of Life, we can use knowledge of the structure of the CA in order to design a better implementation. The Game of Life is an outer totalistic CA, meaning that the next state of the system is fully determined by the current value of the center pixel and the total number of ones and zeros among its immediate neighbors. For this reason, only two unique convolutional filters are needed. The first filter is the identity,

$$\begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$

which is applied with bias $0$. The second filter is the neighbor-counting filter

$$\begin{pmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{pmatrix}.$$

Due specifically to our use of ReLU activation functions throughout our networks (rather than sigmoids), several copies of this filter must be applied in order to detect different specific neighbor counts. In particular, because the Game of Life rules require specific information about whether the total number of "alive" neighbors is $<2$, $2$, $3$, or $\geq4$, we need four duplicates of the neighbor-counting filter, with biases $(-1, -2, -3, -4)$, in order to produce unique activation patterns for each neighbor total after the ReLU activation is applied.

We thus perform a single convolution of an $L\times L$ binary input image with $5$ total $3\times3\times1$ convolutional filters, producing an $L\times L \times 5$ activation volume. Hereafter, we assume that the identity filter is the lowest-indexed filter in the stack, followed by the filters that detect the successively increasing neighbor counts $<2$, $=2$, $=3$, and $\geq4$. Each depth-$5$ column of pixels across the $L \times L$ face of the activation volume now contains a unique activation pattern that can be matched against the appropriate output case. In the next layer of the network, two $1\times1$ convolutional filters with depth $5$ are applied, $$(0, 0, 4/3, -8/3, -1/3)$$ $$(3/2, 5/4, -5, -1/4, -1/4)$$ which are combined with biases $-1/3$ and $-7/4$, respectively, and then passed through ReLU activation, resulting in an $L\times L \times 2$ activation volume. In order to generate a final $L\times L$ output corresponding to the next state of the automaton, this volume is summed along its depth, which can be performed efficiently as a final convolution with a $1\times1$ filter with value $(1,1)$ along its depth and no bias. This produces an $L\times L$ output image corresponding to the next state of the Game. For an example implementation of this algorithm in TensorFlow, see the function `ca_funcs.make_game_of_life()` in <https://github.com/williamgilpin/convoca/blob/master/ca_funcs.py>.

In principle, this architecture can work for any outer-totalistic cellular automaton, such as Life without Death, High Life, etc., although the number of neighbor filters may need to be adjusted depending on the number of unique neighbor-count and center-pixel pairings that determine the ruleset. For example, in the Game of Life the cases of $0$ living and $1$ living neighbors do not need to be distinguished by the network, because both cases result in the center pixel having a value of zero in the next timestep. Likewise, for a purely totalistic cellular automaton (such as a majority-vote rule), only a single convolutional filter (consisting of $9$ identical values) is necessary, because the value of the center pixel does not need to be resolved by the network.
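The Game of Life construction above can be verified directly. The sketch below reproduces it in NumPy with exactly the filters and biases listed; the use of SciPy's `correlate2d` with wrap-around boundaries is an implementation convenience, and the repository's `ca_funcs.make_game_of_life()` provides the corresponding TensorFlow graph.

```python
import numpy as np
from scipy.signal import correlate2d

NBR = np.array([[1., 1., 1.],
                [1., 0., 1.],
                [1., 1., 1.]])  # the neighbor-counting filter

def game_of_life_step(x):
    """One Game-of-Life update via the 5-filter, 2-layer network above.

    x: (L, L) binary lattice with periodic boundary conditions.
    """
    # Layer 1: identity channel plus four thresholded neighbor counts
    s = correlate2d(x, NBR, mode="same", boundary="wrap")
    a = np.stack([x, *(np.maximum(s - b, 0) for b in (1, 2, 3, 4))], axis=-1)

    # Layer 2: the two depth-5 1x1 filters, ReLU, then a depth sum
    w1, b1 = np.array([0, 0, 4/3, -8/3, -1/3]), -1/3
    w2, b2 = np.array([3/2, 5/4, -5, -1/4, -1/4]), -7/4
    h1 = np.maximum(a @ w1 + b1, 0)  # fires exactly when 3 neighbors are alive
    h2 = np.maximum(a @ w2 + b2, 0)  # fires for a live cell with 2 live neighbors
    return h1 + h2

# sanity check against the rule stated directly
x = np.random.randint(0, 2, size=(10, 10)).astype(float)
n = correlate2d(x, NBR, mode="same", boundary="wrap")
expected = ((n == 3) | ((x == 1) & (n == 2))).astype(float)
assert np.allclose(game_of_life_step(x), expected)
```

Up to floating-point rounding in the $4/3$ and $1/3$ weights, the two hidden units partition the live cases exactly, so rounding the summed output to the nearest integer recovers a strictly binary lattice, mirroring the rounding step applied to the trained networks.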
Neural network training details
-------------------------------

For all convolutions, periodic boundary conditions were implemented by manually copying pixel values from each edge of the input image and appending them onto the opposite edges. The padding option "VALID" was then used for the first convolutional layer in the TensorFlow graph.

Hyperparameters for the large networks described in the main text were optimized using a grid search. For each training run performed while optimizing hyperparameters, a new validation set of unseen binary images associated with an unseen cellular automaton ruleset was created, in order to prevent any particular ruleset from biasing the choice of hyperparameters. Once hyperparameters were chosen and training on arbitrary cellular automata started, an additional validation set of binary images was generated for each ruleset; these images were used to determine when to stop training. Finally, an unseen set of binary images was used as a test partition, in order to compute the final accuracy of the trained networks. The training and test accuracies (before rounding the CNN output to the nearest integer) were within $0.3\%$ of one another for all networks studied, which is a direct consequence of the network's ability to represent all input cases exactly. After rounding the CNN output to the nearest integer, both the train and test datasets had $100\%$ accuracy. The unrounded train and test performance during the training of one network are shown as a function of training epoch in Figure S1 of the supplementary material.

The default networks contained one $3\times3$ convolutional layer followed by $11$ layers of $1\times1$ convolutions. The $3\times3$ layer, as well as each of the $1\times1$ layers, had $100$ filters. A depth of $12$ layers was chosen for the network ensembles analyzed in the main text in order to facilitate analysis of hidden layers across a variety of depths. Network and training parameters are given in Table \[params\].

\[params\]

| Parameter             | Value                       |
|-----------------------|-----------------------------|
| Input dimensions      | $10\times10$ px             |
| Number of layers      | $12$                        |
| Neurons per layer     | $100$                       |
| Input samples         | $500$ images                |
| Batch size            | $10$ images                 |
| Weight initialization | He Normal [@he2015delving]  |
| Weight scale          | $1$                         |
| Learning rate         | $10^{-4}$                   |
| Max train epochs      | $1500$                      |
| Optimizer             |                             |
| Loss                  | L2                          |

We also considered the degree to which the exact dimensions of the "large network" affect our results. We trained another ensemble of networks with loss function, hyperparameters, and optimizer identical to those in the main text, but with the number of layers and the number of neurons per layer doubled (Table \[params\_big\]). As we observe in the main text, our results remain almost identical (Figure S2 of the supplementary material, left panel). We attribute this to the relatively small number of unique input cases that the networks need to learn ($512$), as compared to the potential expressivity of large networks.
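For reference, here is a minimal sketch of the wrap-around padding and the default architecture. It is written against `tf.keras` rather than the paper's original TensorFlow 1.8 graph, and the single-channel output head and the choice of Adam as optimizer are our assumptions (the table above does not state the optimizer); the remaining shapes and settings follow Table \[params\].

```python
import tensorflow as tf

def periodic_pad(x, pad=1):
    """Wrap-pad a (batch, L, L, C) tensor so that a subsequent 'VALID'
    3x3 convolution sees periodic boundary conditions."""
    x = tf.concat([x[:, -pad:], x, x[:, :pad]], axis=1)
    return tf.concat([x[:, :, -pad:], x, x[:, :, :pad]], axis=2)

def build_default_network(n_layers=12, width=100):
    """One VALID 3x3 convolution followed by (n_layers - 1) 1x1
    convolutions, with ReLU activations and He-normal initialization."""
    inp = tf.keras.Input(shape=(10, 10, 1))
    h = tf.keras.layers.Conv2D(width, 3, padding="valid", activation="relu",
                               kernel_initializer="he_normal")(periodic_pad(inp))
    for _ in range(n_layers - 1):
        h = tf.keras.layers.Conv2D(width, 1, activation="relu",
                                   kernel_initializer="he_normal")(h)
    out = tf.keras.layers.Conv2D(1, 1)(h)  # assumed single-channel head
    return tf.keras.Model(inp, out)

model = build_default_network()
# optimizer assumed (Table [params] leaves it unspecified); loss is L2
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
```

Under the same assumptions, the doubled network of Table \[params\_big\] corresponds to `build_default_network(n_layers=24, width=200)`.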
\[params\_big\]

| Parameter             | Value                       |
|-----------------------|-----------------------------|
| Input dimensions      | $10\times10$ px             |
| Number of layers      | $24$                        |
| Neurons per layer     | $200$                       |
| Input samples         | $500$ images                |
| Batch size            | $10$ images                 |
| Weight initialization | He Normal [@he2015delving]  |
| Weight scale          | $1$                         |
| Learning rate         | $10^{-4}$                   |
| Max train epochs      | $1500$                      |
| Optimizer             |                             |
| Loss                  | L2                          |

As a control against the choice of optimizer and loss affecting training, we also trained a replicate ensemble of networks with the same network shape ($12$ layers with $100$ neurons each) but a different loss function and optimizer, for which new optimal hyperparameters were found using a separate grid search (Table \[params\_alt\]). Comparing this alternative network to the default network described in the main text, we find that the results are nearly identical.

\[params\_alt\]

| Parameter             | Value                       |
|-----------------------|-----------------------------|
| Input dimensions      | $10\times10$ px             |
| Number of layers      | $12$                        |
| Neurons per layer     | $100$                       |
| Input samples         | $500$ images                |
| Batch size            | $20$ images                 |
| Weight initialization | He Normal [@he2015delving]  |
| Weight scale          | $5\times10^{-1}$            |
| Learning rate         | $5\times 10^{-4}$           |
| Max train epochs      | $3000$                      |
| Optimizer             | S. G. D.                    |
| Loss                  | cross-entropy               |

Figure S2 (right panel) and Figure S3 of the supplementary material show the results of training a network using these parameters. The shape of the training curve is slightly different, with the universal transient (during which the network learns general features of the input data, such as the range and number of unique cases) being much longer for this network. However, the later phases of training proceed similarly to those of the standard network, with $\mathcal H_{ca}$ strongly affecting the later stages of training and the final loss. Moreover, after training has concluded, the dependence of the network's internal representations on $\mathcal H_{ca}$ (Figure S2 of the supplementary material) matches the patterns seen in the default network above.

Supplementary Materials and Additional Experiments
==================================================

For this arXiv submission, additional supplementary material has been uploaded as an ancillary file. To obtain the SI, navigate to the abstract landing page, and then on the right-hand side select "Other formats" under the "Download" header. On the resulting page, click the link for "Download source" under the "Source" header. A numbered file will download. Either extract the file from the terminal, or rename the file with the extension ".gz" and then double-click it to extract a directory containing the PDF of the supplementary material.
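For extraction without renaming, a short Python equivalent is below; the filename is a placeholder, since arXiv assigns an arbitrary number to the downloaded file, and we assume the download is a gzipped tar (as it is when ancillary files are present).

```python
import tarfile

# "arxiv_source" stands in for whatever numbered file arXiv serves;
# the ancillary SI PDF is unpacked alongside the LaTeX source.
with tarfile.open("arxiv_source", mode="r:gz") as archive:
    archive.extractall("paper_source")
```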