"I want to thank you for giving my husband a chance to express himself, to have someone to talk to that understands what war has done. The doctors just give pills to cover up the pain. I saw this group gives him hope. Thanks for those who care about our veterans".
The wife of a Vietnam Veteran
This CAMMO program partners active duty service members and veterans with professional songwriters who help them use the creative process to tell their stories about their military experiences and express their thoughts and emotions. Songwriting has been demonstrated to be an effective therapeutic tool. | https://www.causes.com/posts/817897?auto_prompt_sharing=1 |
EQ2: Lionheart’s Newest Program for Direct Care Staff
Empowering Direct Care Staff to Build Trauma-Informed Communities for Youth.
What is EQ2?
The time youth spend in our care offers a window of opportunity to help build the social, emotional, and self-regulation skills needed to lead healthy and productive lives. Together, these skills create what we call Emotional Intelligence or EQ. (It’s like IQ. But instead of referring to your cognitive intelligence, EQ refers to your ability to understand and manage emotions.) These EQ skills don’t develop in a vacuum. They are cultivated through relationships with invested and consistent caregivers. For youth, EQ skills such as developing the capacity for trust, managing strong emotions, increasing impulse control, and learning responsible decision making, take place through the transformative relationships created with us. Each caring contact with youth, each genuine interaction, becomes an opportunity to learn and heal. That’s why the fundamental belief of EQ2 is that what is hurt through relationships can be healed through relationships. EQ2 refers to the healing experience that happens when we bring our own EQ to help youth build their EQ. | https://lionheart.org/eq2-lionhearts-newest-program-for-direct-care-staff/ |
Real Madrid news: German midfielder Toni Kroos looks set to earn a new contract at the Spanish side, according to reports.
Kroos’ current contract with Real Madrid expires at the end of the 22/23 campaign. But he could stay at the Spanish side beyond that if the latest reports from Spain are to be believed.
AS reports that Kroos’ impressive start to the 22/23 season is earning him a new contract at Real Madrid.
Kroos has so far featured in eight games for Real Madrid this season. The German midfielder made five of those appearances in the La Liga, two in the Champions League and one in the UEFA Super Cup final. He has also provided two assists so far in the 22/23 season.
Real Madrid news: Toni Kroos’ Los Blancos career
Kroos has been playing for Real Madrid since joining the Spanish side after leaving Bayern Munich in 2014.
The 32-year-old is now in his ninth season with Real Madrid. During his trophy-laden career at Real Madrid, Kroos has so far made 373 appearances for the Spanish giants. He has also scored 25 goals for Real Madrid in those matches.
Kroos has won a number of trophies with Real Madrid, including the Champions League title four times and the La Liga thrice.
How has Los Blancos started the 22/23 season?
The Spanish side began their new season with a 2-0 win over Europa League winners Eintracht Frankfurt in the UEFA Super Cup final.
Real Madrid then kicked off their La Liga title defense with a 2-1 win over Almeria. The defending champions have since played five matches in the Spanish top-flight and have secured wins in all of those.
Real Madrid sit top of the La Liga table with 18 points from their first six matches in the league this season. They are two points clear of fierce rivals Barcelona, who occupy second position in the table.
Meanwhile, Real Madrid began their Champions League title defense with a 3-0 win over Scottish side Celtic in the group stage. They followed that up with another win as they inflicted a 2-0 defeat on RB Leipzig. Real Madrid top Group F with 6 points from two matches.
The La Liga behemoths also enjoy a 100% win rate this season, having won all the matches they have so far played in the 22/23 season. They resume their campaign after the September international break by hosting Osasuna in La Liga. | https://the12thman.in/real-madrid-news-toni-kroos-to-earn-contract-extension/ |
Menlo Park, Calif. -- Scientists working at the U.S. Department of Energy's (DOE) SLAC National Accelerator Laboratory have created the shortest, purest X-ray laser pulses ever achieved, fulfilling a 45-year-old prediction and opening the door to a new range of scientific discovery.
The researchers, reporting today in Nature, aimed SLAC's Linac Coherent Light Source (LCLS) at a capsule of neon gas, setting off an avalanche of X-ray emissions to create the world's first "atomic X-ray laser."
"X-rays give us a penetrating view into the world of atoms and molecules," said physicist Nina Rohringer, who led the research. A group leader at the Max Planck Society's Advanced Study Group in Hamburg, Germany, Rohringer collaborated with researchers from SLAC, DOE's Lawrence Livermore National Laboratory and Colorado State University.
"We envision researchers using this new type of laser for all sorts of interesting things, such as teasing out the details of chemical reactions or watching biological molecules at work," she added. "The shorter the pulses, the faster the changes we can capture. And the purer the light, the sharper the details we can see."
The new atomic X-ray laser fulfills a 1967 prediction that X-ray lasers could be made in the same manner as many visible-light lasers - by inducing electrons to fall from higher to lower energy levels within atoms, releasing a single color of light in the process. But until 2009, when LCLS turned on, no X-ray source was powerful enough to create this type of laser.
To make the atom laser, LCLS's powerful X-ray pulses - each a billion times brighter than any available before - knocked electrons out of the inner shells of many of the neon atoms in the capsule. When other electrons fell in to fill the holes, about one in 50 atoms responded by emitting a photon in the X-ray range, which has a very short wavelength. Those X-rays then stimulated neighboring neon atoms to emit more X-rays, creating a domino effect that amplified the laser light 200 million times.
Although LCLS and the neon capsule are both lasers, they create light in different ways and emit light with different attributes. The LCLS passes high-energy electrons through alternating magnetic fields to trigger production of X-rays; its X-ray pulses are brighter and much more powerful. The atomic laser's pulses are only one-eighth as long and their color is much more pure, qualities that will enable it to illuminate and distinguish details of ultrafast reactions that had been impossible to see before.
"This achievement opens the door for a new realm of X-ray capabilities," said John Bozek, LCLS instrument scientist. "Scientists will surely want new facilities to take advantage of this new type of laser."
For example, researchers envision using both LCLS and atomic laser pulses in a synchronized one-two punch: The first laser triggers a change in a sample under study, and the second records with atomic-scale precision any changes that occurred within a few quadrillionths of a second.
In future experiments, Rohringer says she will try to create even shorter-pulsed, higher-energy atomic X-ray lasers using oxygen, nitrogen or sulfur gas.
Additional authors included Richard London, Felicie Albert, James Dunn, Randal Hill and Stefan P. Hau-Riege from Lawrence Livermore National Laboratory (LLNL); Duncan Ryan, Michael Purvis and Jorge J. Rocca from Colorado State University; and Christoph Bostedt from SLAC.
The work was supported by Lawrence Livermore National Laboratory's Laboratory Directed Research and Development Program. Authors Rocca, Purvis and Ryan were supported by the DOE Office of Science. LCLS is a national scientific user facility operated by SLAC and supported by DOE's Office of Science.
SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the U.S. Department of Energy Office of Science. To learn more, please visit www.slac.stanford.edu.
DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov. | https://archive.eurekalert.org/pub_releases/2012-01/dnal-scf012512.php |
A brand with over a decade of experience as a leading provider in the digital advertising sector, operating worldwide with a double impact on the business: giving clients full access to a vast marketplace and advanced optimization, while making sure that publishers are able to monetize their traffic in an effective and safe way.
They are looking for new members to join them in their journey in their Barcelona hub.
Your responsibilities and impact as an Account Manager will be:
- Managing relationships with an existing portfolio of clients
- Optimizing and creating digital campaigns for clients
- Creative and analytical advisory
- Data analysis on a daily basis
- Reporting on campaign results - both internally and to the client
- Proactively contacting and activating new and existing advertisers
- Working closely with the technical team to troubleshoot technical issues and improve the service
- Negotiating advertising budgets and ad spaces
What’s in it for you? | https://www.bluselection.com/job/account-manager-english-speaker |
As we approached fall, the Federal Reserve (Fed) and financial markets awoke to the fact that a key engine of global growth—China—may be sputtering, and they both blinked. Against a backdrop of increased volatility and global stock market corrections, both the Federal Open Market Committee (FOMC) and lead investors from across our Global Fixed Income, Currency & Commodities platform gathered to discuss their economic and market outlooks.
Never before have global markets been so integrated nor as dependent on emerging markets, which now comprise over 40% of global GDP. Growth is slowing in over 60% of the emerging markets (Exhibit 1), productivity is down, and world trade remains subdued. Our base case outlook calls for sub-trend global growth, that is, below a 3.75% threshold. Given our focus on China, the obvious threat to our outlook is a hard landing in China, which would ripple through the global economy. Furthermore, continued weakness in commodity prices could drive instability in selected emerging markets.
In the U.S., we expect that the first Fed rate hike in almost ten years will happen this year. We caution, however, that real policy rates have fallen around the world (Exhibit 2) and a stronger dollar and increased market volatility may make it difficult for the Fed to decouple too much from its global counterparts. Against that backdrop, rates may actually decline over the next three to six months, as investors come to recognize the reality of slowing global growth. In the face of a stronger dollar and modest expansion, we expect global monetary policy to remain accommodative.
A deliberative Fed and slower global growth are positives for rates markets. Without an abrupt withdrawal of liquidity, people will be more patient in holding bonds, and yields look attractive.
However, given our expectation for slower global growth and potentially higher volatility, we approach risk markets carefully.
Over the past year we have experienced a significant re-pricing of credit – spreads are 60% wider than they were just one year ago, and, we believe, offer significant compensation for default and liquidity risks. That’s particularly true for corporate high yield (ex-energy, metals and mining), where it looks like buyers are being over-compensated for both defaults and potential volatility. U.S. high yield companies are, for the most part, domestically-focused, have healthy balance sheets, and are positioned to benefit from above-trend growth.
Likewise, European high yield fundamentals benefit from above-trend economic growth and ongoing ECB easing. We like European financials, specifically hybrid bank debt, but we are cautious about banks with emerging market exposure. Alternative Tier I securities (these receive top short-term ratings from any two Nationally Recognized Statistical Rating Organizations) are attractive in the face of regulatory pressure to improve bank balance sheets; however, we prefer the securities of banks who have completed their financing needs.
We are more guarded in our approach to U.S. investment grade corporates, as many are dependent on global demand. Furthermore, continued easy money increases the opportunities for investment grade issuers to re-leverage. Ongoing new issue supply, especially if paired with outflows as investors move to high yield, would also put negative pressure on spreads.
As an alternative, we like commercial mortgage-backed securities (CMBS). Spreads are attractive when compared with both U.S. investment grade corporates and agency MBS. Like the high yield market, CMBS is poised to benefit from U.S. growth and further strength in the housing market.
Finally, given our concern over slowing global growth, we are quite selective in our emerging market investments. Commodity prices present a major headwind; the effects of a strong U.S. dollar have yet to be fully priced in; and the specter of a Fed hike may increase the risks of emerging market crises. Differentiation among markets is key, and we’re focused on countries that have less borrowing needs; that are commodity importers, not exporters; and that are addressing market-friendly structural reforms.
Although the Fed tried to walk back its decision in the days following the FOMC meeting, the loss of credibility was damaging and still reverberates through the markets. Nonetheless, we view this as a further opportunity to search the markets for deeper value and attractive investments. What we have learned is that the Fed (and other major central banks) will be overly cautious in normalizing policy; low rates and extreme monetary accommodation will be with us for a while.
Learn how to capitalize on the best ideas across global fixed income markets at jpmorganfunds.com/global-bond.
| https://www.barrons.com/articles/finding-growth-in-a-slow-growth-environment-1444066617 |
The Ancient Library of Alexandria
Once the largest library in the ancient world, and containing works by the greatest thinkers and writers of antiquity, including Homer, Plato, Socrates and many more, the Library of Alexandria, northern Egypt, is popularly believed to have been destroyed in a huge fire around 2000 years ago and its voluminous works lost.
Since its destruction this wonder of the ancient world has haunted the imagination of poets, historians, travellers and scholars, who have lamented the tragic loss of knowledge and literature. Today, the idea of a ‘Universal Library’ situated in a city celebrated as the centre of learning in the ancient world, has attained mythical status.
The mystery has been perpetuated by the fact that no architectural remains or archaeological finds that can definitely be attributed to the ancient Library have ever been recovered, surprising for such a supposedly renowned and imposing structure. This lack of physical proof has even persuaded some to wonder if the fabulous Library actually existed at all in the form popularly imagined.
Ancient Alexandria
Once home to the massive Pharos lighthouse, one of the Seven Wonders of the Ancient World, the Mediterranean seaport of Alexandria was founded by Alexander the Great around 330 BC, and like many other cities in his Empire, took its name from him. After his death in 323 BC, Alexander’s Empire was left in the hands of his generals, with Ptolemy I Soter taking Egypt and making Alexandria his capital in 320 BC. Formerly a small fishing village on the Nile delta, Alexandria became the seat of the Ptolemaic rulers of Egypt and developed into a great intellectual and cultural centre, perhaps the greatest city in the ancient world.
The Origins of the Ancient Library
The founding of the Library of Alexandria, actually two or more libraries, is obscure. It is believed that around 295 BC, the scholar and orator Demetrius of Phalerum, an exiled governor of Athens, convinced Ptolemy I Soter to establish the Library. Demetrius envisioned a library that would house a copy of every book in the world, an institution to rival those of Athens itself. Subsequently, under the patronage of Ptolemy I, Demetrius organised the construction of the ‘Temple of the Muses’ or ‘the Musaeum’, from where our word ‘museum’ is derived. This structure was a shrine complex modeled on the Lyceum of Aristotle in Athens, a centre for intellectual and philosophical lectures and discussion.
The Temple of the Muses was to be the first part of the library complex at Alexandria, and was located within the grounds of the Royal Palace, in an area known as the Bruchion or palace quarter, in the Greek district of the city. The Museum was a cult centre with shrines for each of the nine muses, but also functioned as a place of study with lecture areas, laboratories, observatories, botanical gardens, a zoo, living quarters, and dining halls, as well as the Library itself. A priest chosen by Ptolemy I himself was the administrator of the Museum, and there was also a separate Librarian in charge of the manuscript collection. At some time during his reign from 282 BC to 246 BC, Ptolemy II Philadelphus, the son of Ptolemy I Soter, established the ‘Royal Library’ to complement the Temple of the Muses set up by his father.
It is not clear whether the Royal Library, which was to become the main manuscript Library, was a separate building located next to the Museum or was an extension of it. However, the consensus of opinion is that the Royal Library did form part of the Temple of the Muses.
During the reign of Ptolemy II, the idea of the Universal Library seems to have taken shape. Apparently more than 100 scholars were housed within the Museum, whose job it was to carry out scientific research, lecture, publish, translate, copy and collect not only original manuscripts of Greek authors (allegedly including the private collection of Aristotle himself), but translations of works from Egypt, Assyria, Persia, as well as Buddhist texts and Hebrew scriptures.
One story goes that the hunger of Ptolemy III for knowledge was so great that he decreed that all ships docking at the port should surrender their manuscripts to the authorities. Copies were then made by official scribes and delivered to the original owners, the originals being filed away in the Library. | https://brian-haughton.com/ancient-mysteries-articles/ancient-library-alexandria/ |
HONG KONG -- More local people should be engaged in promoting Macao's rich culture to the world, as they are the ones who know the most about the special administrative region.
Wu Zhiliang, president of the Macao Foundation, made those remarks as a delegate to the Conference on Dialogue of Asian Civilizations in Beijing.
The conference, which runs from Wednesday to May 22, promotes cultural diversity, exchanges and mutual learning. It is expected to attract more than 2,000 government officials and representatives from 47 countries and regions.
Wu, a seasoned historian and Macao member of the National Committee of the Chinese People's Political Consultative Conference, said Macao has much to offer tourists besides gambling. The former Portuguese colony is a cultural crossroads of East and West, with a diverse population, languages, religions and festivals, he said.
Owing to the blending of the SAR's diverse cultures, the Historic Center of Macao, which includes 25 historic locations, was listed by UNESCO as a World Heritage Site in 2005.
To better present Macao's uniqueness to the world, Wu encouraged more local people to engage in promoting it, as they have natural advantages, given their familiarity with the SAR.
In particular, scholars and artists, especially those with global influence, could take advantage of their achievements to showcase the charm of Macao's culture through people-to-people exchanges, Wu said.
In addition, the city's young people, who represent the city's vitality and future, should also play a role in it, Wu said.
With the growing international influence of Macao's culture, Wu believes more tourists will be attracted to the city, and cultural tourism may become a new force driving the city's economy. It will fuel the city's goal to transform itself into a more diversified economy for its sustainable development.
Wu suggested Macao people take advantage of the favorable policies offered by the central government and actively participate in cross-cultural communication programs.
In July 2018, the China National Arts Fund announced that it will accept applications for grants from Hong Kong, Macao and Taiwan arts practitioners.
Under a program launched by the Macao Foundation and the Macao SAR government in 2016, more than 3,300 Macao youths have visited mainland cities, getting firsthand knowledge of the nation's development.
The foundation also launched a scholarship program in 2017 with the aim of enhancing exchanges between Macao students and those from countries and regions involved in the Belt and Road Initiative. The five-year scholarship program plans to offer 150 opportunities for exchanges.
With the city's deeper integration into the country, Wu also believes there will be more facilitation in enhancing cultural communication between Macao and the mainland, especially in the Guangdong-Hong Kong-Macao Greater Bay Area.
It is expected that more policies aiming to enhance cultural exchanges and collaborations among arts organizations, schools and cultural institutions in the Bay Area will be rolled out, according to the development outline for the 11-city cluster. | https://z127l3.static.ctm.net/pt/article/report/view/208 |
Delta Now Dominant Covid Variant In Most Of Europe: WHO
The WHO's Regional Office for Europe and European Centre for Disease Prevention and Control have warned that if current trends continue, Delta could soon become the globally dominant variant of Covid-19.
Delta is now the dominant variant of Covid-19 in most of Europe, the World Health Organisation (WHO) warned on Monday. Along with the European Centre for Disease Prevention and Control (ECDC), the WHO said that efforts to prevent transmission of the Delta variant must be reinforced.
In 19 of the countries reporting data, the median proportion of the Delta variant detected in samples sent for genetic sequencing was 68.3 per cent.
This indicates that Delta has now overtaken Alpha as the dominant variant of Covid-19 in most of Europe.
Experts have already confirmed the presence of the highly transmissible Delta variant in nearly all European countries.
Even in the United States, the Delta variant accounts for about 83 per cent of all new Covid-19 infections, according to reports.
The variant will continue to spread, displacing the circulation of other variants unless the virus mutates further to form a new and more competitive strain.
First detected in India in October of 2020, Delta (B.1.617.2) was identified as one of the major factors driving the second wave of Covid-19 infections that devastated the country earlier this year.
Quoting microbiologist Sharon Peacock, news agency Reuters reported, "The biggest risk to the world at the moment is simply Delta." | |
Eagle County released its annual energy inventory report earlier this month, which details the greenhouse gas emissions generated by energy consumption, transportation and waste in 2019 and 2020.
The report was presented at the quarterly meeting of the Climate Action Collaborative, a group of community leaders and stakeholders who are leading the charge to achieve the county’s recently updated climate action plan, which calls for a 50% reduction in greenhouse gas emissions from 2014 levels by 2030.
The latest data shows that compared with 2014 levels, total greenhouse gas emissions increased by 7.6% in 2019, though this spike is largely attributed to a new calculation system for transportation emissions and does not accurately reflect a change in the amount of emissions produced. The following year, the effects of the pandemic and lockdowns in 2020 caused a 12.9% decrease in emissions from 2014 levels.
The report identifies areas of success and areas that need greater attention in order to achieve the county’s climate action goal.
Ground transportation is the leading source of emissions
Ground transportation continues to be the single largest contributor to emissions in the county, making up 41% of total emissions in 2019 and 37% in 2020.
The 2019 data shows that the 7.6% increase in total emissions was primarily driven by a 58.3% spike in ground transportation emissions.
If that number gets your heart rate up, don’t panic — this spike is not as it appears. Starting in 2018, Eagle County switched from using ground transportation statistics provided by the Colorado Department of Transportation to statistics provided by Google analytics. While the former only tracked the vehicles on Interstate 70 and U.S. Highway 6 that entered the Eagle County borders, the latter calculates all of the emissions produced during inbound and outbound trips to the county, as well as trips within the county’s borders.
For example, let’s take a weekend warrior who drives from Denver to Vail and then back to the city. The old system would only count the car’s emissions that occurred within Eagle County, a small percentage of the full trip. Now, using location tracking on Google Maps and similarly powered navigation apps, the data includes the total emissions released during a round-trip drive to the county.
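To make the difference between the two accounting methods concrete, here is a minimal sketch in Python. The per-mile emission factor and the trip mileages are illustrative assumptions, not figures from the county's inventory.

```python
# Illustrative comparison of the two accounting methods described above.
# The per-mile emission factor and trip mileages are assumed placeholder values.
EMISSION_FACTOR_KG_PER_MILE = 0.4  # assumed average passenger-vehicle factor (kg CO2e/mile)

def old_method_emissions(miles_inside_county: float) -> float:
    """Old approach: count only the miles driven within county borders."""
    return miles_inside_county * EMISSION_FACTOR_KG_PER_MILE

def new_method_emissions(total_trip_miles: float) -> float:
    """New approach: count the full inbound/outbound trip plus in-county driving."""
    return total_trip_miles * EMISSION_FACTOR_KG_PER_MILE

# Hypothetical Denver-to-Vail weekend round trip: ~200 miles in total,
# of which only ~30 miles fall inside Eagle County.
print(old_method_emissions(30))    # 12.0 kg CO2e attributed under the old method
print(new_method_emissions(200))   # 80.0 kg CO2e attributed under the new method
```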
Erica Sparhock is the deputy director of Clean Energy Economy for the Region, a consulting team that sources the new transportation data. She said that while the data is no longer comparable with that from previous years, moving forward it will provide the information necessary to be effective in addressing the full scope of transportation emissions.
“It’s easier data for folks to take action on,” Sparhock said. “I got really excited about the idea that we could look at these inbound and outbound trips, and use that kind of information for a lot of our counties to help advocate for more regional transportation. And then that inbound number is something that Eagle County can look at for their own expansion of transit within the county lines.”
In 2020, inbound and outbound trips accounted for 74% of vehicle emissions, while in-boundary trips accounted for 26%, a proportion that is similar to 2019. These statistics highlight the fact that reduction efforts need to include support for regional and state public transit systems, along with increasing the adoption of electric vehicles.
It is important to note that the two prior energy inventory reports showed a significant increase in ground transportation emissions between 2014 and 2017, and though the 2019 data is not comparable, it is very possible that this trend has continued.
Long commutes are part of the problem, as the average commute distance in Eagle County is double the national average. By 2030, the collaborative aims to have at least 50% of the workforce live within 5 miles of an employment center, and is encouraging commuters to ditch their cars for public transportation or other methods at least twice a week. As of now, around 20.5% of people in Eagle County commute sustainably.
County commissioner Matt Scherr said that transportation is an area where big changes need to take place to make sustainable commuting a viable option for more people. For example, the creation of a regional transportation authority could create a public transit system that better accommodates the needs of workers.
“If you’ve got hurdles to what you’re asking people to do, it’s not going to be successful,” Scherr said. “We have to do robust transit. Then we wouldn’t have to drive our cars so much, we wouldn’t have to be building parking, we wouldn’t have to be doing all sorts of things that contribute to greenhouse gases and a less well-functioning economy.”
The collaborative is also aiming for a 2% annual increase in electric vehicles registered in Eagle County. In 2021, the county added five new electric vehicle charging stations and four new electric buses, and Holy Cross Energy rebated 240 e-bikes.
Aviation, which makes up about 10% of all transportation emissions in Eagle County, has also seen a small but steady increase in emissions since 2014.
Renewable energy is having a significant impact
Reduction in electrical emissions shines as the leading source of progress toward Eagle County’s sustainability goals.
While electrical consumption has remained essentially unchanged between 2014 and 2020, emissions attributable to electrical use have decreased by 45% over the same period, thanks to Holy Cross Energy’s commitment to growing renewable energy sources each year.
Holy Cross Energy has increased its share of renewables from 20% in 2014 to 46% in 2020, and is working toward its own internal sustainability goals to supply 100% renewable energy by 2030.
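The arithmetic behind that result is straightforward: electricity emissions are consumption multiplied by the grid's emission factor, so a cleaner supply cuts emissions even when usage is flat. The consumption figure and emission factors in the sketch below are assumed placeholders chosen only to reproduce a roughly 45% drop; they are not Holy Cross Energy's actual numbers.

```python
# Emissions = electricity consumed x grid emission factor.
# All numbers below are assumed placeholders used for illustration; only the idea
# (flat consumption, cleaner grid => lower emissions) comes from the report.
consumption_mwh = 500_000      # assumed annual consumption, unchanged 2014-2020
factor_2014 = 0.60             # assumed tCO2e per MWh with ~20% renewables
factor_2020 = 0.33             # assumed tCO2e per MWh with ~46% renewables

emissions_2014 = consumption_mwh * factor_2014
emissions_2020 = consumption_mwh * factor_2020
reduction = 1 - emissions_2020 / emissions_2014
print(f"Reduction in electricity emissions: {reduction:.0%}")  # ~45%
```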
Across the valley, the vast majority of residential and commercial emissions are produced by unincorporated properties — which include all of Beaver Creek and Bachelor Gulch, EagleVail, and the mountains, among others — and the town of Vail. Together, these areas accounted for 487,079 metric tons of carbon dioxide equivalent in 2019, while all other municipalities combined account for 243,469 metric tons.
Despite an impressive overall reduction in electric emissions over the past six years, all communities except Edwards, Avon and Red Cliff showed an increase in electrical emissions from 2017 to 2019, with unincorporated properties showing the greatest spike. Sparhock said that snowmakers are particularly high energy consumers, and an increase in their use on the mountains is likely contributing to the spike.
Since the 2020 drop in electrical consumption during COVID-19 is an outlier, it will be critical to see the numbers in 2021 to determine whether the years-long trend toward lower electrical emissions has continued.
To help support the positive developments, the collaborative has set a goal of installing beneficial electrification for 5% of existing buildings each year, and adopting net zero construction codes for new buildings. From 2020 to 2021, the total number of all-electric homes increased by 1,641, which falls slightly short of the 5% annual goal but still demonstrates steady progress.
Unlike electricity, emissions from natural gas have been moving in the opposite direction, with a 16.1% increase in 2019 and a 4% increase in 2020. As of 2020, natural gas has outpaced electricity, contributing 26% of emissions as compared with electricity’s 25%.
“This continued upward trend is in sharp contrast with the decrease in emissions from electricity, and should add greater urgency to efforts to reduce natural gas usage in homes, businesses and institutional buildings and facilities,” the report states.
Solid waste emissions rise despite improved diversion rates
The final source of emissions in Eagle County is the solid waste in our landfill, where emissions have risen significantly since 2014 levels. The county’s landfill has gone from emitting around 100,000 metric tons of carbon dioxide equivalent in 2014 to over 150,000 in 2020, which correlates to a growing tonnage of annual solid waste.
A decade ago, in 2010, Eagle County generated 97,972 tons of solid waste in a year. In 2020, that number was 155,887 tons. The solid waste data for 2021 is already in and shows that this trend is continuing, with 162,548 tons of waste generated last year.
An upside to the data is that Eagle County is diverting its highest percentage so far of this waste away from landfills, where it can be recycled, composted or treated. Last year, 29% of all solid waste generated was diverted from the landfill, and 33% of municipal solid waste — higher than the national average — was recycled.
“While the amount of waste diverted from the Eagle County landfill has increased steadily since 2012, so has the amount of waste delivered to it,” the report reads. “In 2020, the landfill received 109,319 tons of waste, while 46,589 tons were diverted – both figures setting all-time records.”
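Those two figures determine the diversion rate quoted above, since the rate is simply diverted tonnage divided by the total tonnage handled. A quick check using the report's 2020 numbers:

```python
# Diversion rate = tons diverted / (tons landfilled + tons diverted).
landfilled_tons = 109_319
diverted_tons = 46_589

total_tons = landfilled_tons + diverted_tons
diversion_rate = diverted_tons / total_tons
print(f"{diversion_rate:.1%}")  # ~29.9%, matching the ~29% figure cited above
```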
One of the most effective initiatives in reducing waste emissions is composting. Organic material left to decompose in the landfill is the primary source of solid waste emissions, and getting residential, commercial and industrial actors to bring organic waste to the Vail Honeywagon compost center or other compost sites will greatly reduce the emissions from solid waste.
The collaborative’s goal is to divert 80% of all organic waste and 100% of all recoverable construction and demolition waste by 2030. There has been success on the organic diversion, with an 11% increase in diversion last year, but an 8% decrease in construction and demolition diversion during the same time.
For more information about waste diversion opportunities, visit the waste diversion page at WalkingMountains.org.
Keep moving forward
The Climate Action Collaborative is made up of many departments and organizations, each pushing for progress in different areas that together will enable the county to dramatically reduce greenhouse gas emissions over the next decade.
The 2030 goal is ambitious, but commissioner Scherr said that it is possible if the county, the community and local organizations take bold action that generates sweeping changes. | https://www.vaildaily.com/news/eagle-valley/eagle-countys-annual-energy-inventory-report-shows-mixed-progress/ |
VANCOUVER—In many ways, Coal Harbour is the picture of prosperity, the quintessential combination of natural views and cityscape that attracts so many people to live in Vancouver in the first place.
Except half the people who live there can’t afford to do so.
A StarMetro analysis of affordability in Metro Vancouver census tracts — the small, stable neighbourhood portions Statistics Canada measures — shows 30 per cent of people living in the median tract couldn’t afford their homes — already a high number that secures Vancouver as the least affordable Canadian city.
A tool released Tuesday by the BC Non-Profit Housing Association uses data from the 2016 census to visualize how much Canadians are spending on rent in municipalities across the country. It shows 23 per cent of Vancouver renters are spending more than 50 per cent of their monthly income on the roofs over their heads.
Read more: ‘Crisis of affordability’ for Toronto renters, report says
B.C. has the highest proportion of renters spending more than 50 per cent of their income on housing at 21 per cent. Ontario is a close second, with Toronto matching Vancouver’s crisis spending levels.
An analysis of census data shows how the crisis of affordability plays out at the neighbourhood level in Metro Vancouver. In addition to some people being “priced out,” large proportions are staying in or moving to neighbourhoods where they spend an unaffordable amount of their monthly budgets on housing.
Spending 30 per cent of monthly income or less on housing is considered affordable by the Canada Mortgage and Housing Corporation (CMHC). By that standard, a single person making $3,000 per month could live affordably spending $900 per month on housing. A family with a combined income of $7,000 per month could spend $2,100 per month and still be within their means.
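A minimal sketch of that 30 per cent rule, reproducing the two worked examples above (the income figures are the article's illustrations, not survey data):

```python
# CMHC-style affordability rule: housing is affordable when shelter costs
# are no more than 30% of gross monthly income.
AFFORDABLE_SHARE = 0.30

def max_affordable_cost(monthly_income: float) -> float:
    """Largest monthly housing cost considered affordable at this income."""
    return monthly_income * AFFORDABLE_SHARE

def is_affordable(monthly_income: float, housing_cost: float) -> bool:
    return housing_cost <= max_affordable_cost(monthly_income)

print(max_affordable_cost(3000))   # 900.0 for a single person earning $3,000/month
print(max_affordable_cost(7000))   # 2100.0 for a household earning $7,000/month
print(is_affordable(3000, 1500))   # False: half of income going to housing
```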
It’s become common in Vancouver, however, to spend much more than that.
Have your say
“The 30 per cent standard has been used for years,” said Penny Gurstein, director of UBC’s community planning department. But today, she said, “so many people are spending so much more of their income on housing than 30 per cent.”
Looking at the differences between neighbourhoods paints an even bleaker picture, where some neighbourhoods face much greater challenges than others.
The quarter of Metro Vancouver tracts with the least affordability problems have between 12 and 25 per cent of households who can’t afford their homes. The 25 per cent of tracts with the worst affordability problems have between 35 and 61 per cent.
Nine of the worst-off tracts include more than 50 per cent of households that can’t afford their homes. They include two tracts on UBC endowment lands (61 per cent and 55 per cent), two tracts in the Downtown Eastside (55 per cent and 51 per cent), two tracts in Richmond’s downtown (54 per cent each), one tract in Coquitlam (52 per cent), one tract in Burnaby’s Metrotown (51 per cent), and one tract in Coal Harbour (50 per cent).
Unsurprisingly, data analysis shows neighbourhoods with many people in the low-income measure are more likely to have an affordability problem.
Gentrification can explain the problem in some cases. For example, Burnaby’s Metrotown is infamous for being gentrified, causing many lower-income tenants to be demovicted. But the problem isn’t limited to low-income areas.
Coal Harbour, which also ranks among the census tracts with the worst affordability problems, is a prosperous neighbourhood with only about 20 per cent of people living under the low-income benchmark — about the same as Metro Vancouver’s average.
Despite the financial crunch, there are reasons people continue to live in those neighbourhoods.
“It’s a beautiful place. There’s a reason why people still come here,” said Coal Harbour resident Suzanne O’Donoghue, who moved to the neighbourhood six years ago. “And I find the suburbs equally expensive.”
O’Donoghue said part of the reason people choose to live in her Coal Harbour neighbourhood is that the convenient location allows them to save time and money on transportation, even if they have to spend more on housing.
It’s not always a choice though.
“I have a lot of friends that may get to a point where they end up leaving,” O’Donoghue said. “I know people who’ve sold their house and moved north or to the island.”
It’s a decision she said she may also make one day. She quit her job a few years ago and has been spending more than 30 per cent of her monthly income on housing since.
For Metrotown Residents Association founder Rick McGowan, two “camps” in Metrotown explain the affordability struggle.
There are the low-income people renting less expensive units, who would be spending more than a third of their income almost anywhere in Metro Vancouver.
And then there are those moving into Metrotown’s brand new condos, who, like some Coal Harbour residents, are choosing to spend more of their income on housing for the “tradeoff” of proximity to the SkyTrain and amenities found there.
Alex Operacz, a medical laboratory technician, has rented in the Metrotown area for 18 years. Being a long-term tenant has kept his housing costs “hovering” around that 30 per cent mark, but he knows that will change if he’s evicted to make way for more new condos.
“When I look to find homes on Craigslist or in the newspaper, everyone tells me that I’ll be paying at least twice as much as I did in the old place,” Operacz said. “Basically everything is going up except wages.”
McGowan said affordability is “a neighbourhood problem” and solutions need to be customized to those neighbourhoods.
City or region-wide plans to develop new housing for a growing population, which focus on areas close to the SkyTrain, should also factor in the needs of people already living there, McGowan argued.
For example, he argued developing Joyce-Collingwood, previously an industrial area, did not have as much of a negative neighbourhood impact as developing Metrotown, where existing tenants had already formed deep roots.
“For myself, I’m secure. But I want my kids to be able to stay in the neighbourhood,” McGowan said.
The community he has an eye on now is Edmonds, which he says is “thriving,” but could become a repeat of Metrotown’s woes if development becomes rampant there.
Regional data on affordability shows little change between 2011 and 2016, with slightly more than 30 per cent of households in Metro Vancouver spending an unaffordable amount. For McGowan, the region-wide picture of affordability is just a piece of the puzzle. | https://www.thestar.com/vancouver/2018/05/08/these-vancouver-neighbourhoods-have-the-highest-proportion-of-people-who-cant-afford-their-homes.html |
This list of resources is continuously updated.
The goal of the National Science Foundation’s (NSF) ADVANCE program is to increase the representation and advancement of women in academic science and engineering careers, thereby contributing to the development of a more diverse science and engineering workforce. ADVANCE encourages institutions of higher education and the broader science, technology, engineering and mathematics (STEM) community, including professional societies and other STEM-related not-for-profit organizations, to address various aspects of STEM academic culture and institutional structure that may differentially affect women faculty and academic administrators.
The National Science Foundation ADVANCE program began in 2001, and as of 2014 had awarded 297 grants to 199 institutions. In addition to large grants focused on “Institutional Transformation”, the early years of the program included grants to individuals, called “Fellows”, and smaller grants for “Leadership” programs. As the program matured, the Fellows and Leadership grants were discontinued, and planning grants called “IT-Catalyst” and “IT-Start” were added, along with “PAID” grants for “Partnerships for Adaptation, Implementation, and Dissemination”. In 2014, the PAID grants were discontinued in favor of “PLAN”, Partnerships for Learning and Adaptation Networks.
The purpose of the Association for Women in Mathematics is to encourage women and girls to study and to have active careers in the mathematical sciences, and to promote equal opportunity and the equal treatment of women and girls in the mathematical sciences.
Founded in 1971, the Association for Women in Science (AWIS) is the largest multi-discipline organization for women in science, technology, engineering, and mathematics (STEM). We are committed to driving excellence in STEM and achieving equity for women of all disciplines, across all employment sectors. AWIS reaches more than 20,000 professionals in STEM with members and chapters nationwide. Membership is open to anyone who supports the vision and mission of AWIS.
With a $1.8 million grant from the National Science Foundation (NSF), the Opportunities for UnderRepresented Scholars (OURS) program provides initial funding to support full scholarships for eligible participants for a Post-Graduate Certificate in Academic Leadership.
For more than six decades, SWE has given women engineers a unique place and voice within the engineering industry. Our organization is centered around a passion for our members’ success and continues to evolve with the challenges and opportunities reflected in today’s exciting engineering and technology specialties.
We invite you to explore the values, principles, and priorities that guide our initiatives and learn how together, WE can continue to make a lasting impact on the future.
The STEM Women of Color Conclave is the largest national forum that centrally focuses on the intersection of race and gender and its impact on the professional advancement of women of color in the academic STEM disciplines.
Women in Academia Report monitors and reports trends concerning women in all areas of higher education, discusses important issues of gender equity, reports instances of gender discrimination, and identifies the leaders and laggards among colleges and universities in creating greater opportunities for women. Special editorial attention will be paid to academic programs and other developments at women’s colleges throughout the United States.
Women in Academia Report announces significant appointments of women to positions of influence in higher education. We report important awards and grants to women scholars. We review and provide a database of books of importance to women in higher education. | http://huadvanceit.howard.edu/?page_id=56 |
Introduction {#Sec1}
============
Dopamine has been heavily implicated in reinforcement learning^[@CR1]--[@CR3]^, and recent evidence has shown that dopamine also affects later choices based on these learned values^[@CR4]--[@CR6]^. However, unpicking the relative contributions of dopaminergic neurons during the encoding, consolidation and retrieval stages of memory is often confounded by the relatively long duration of action of medications.
Exogenous dopamine administration biases consolidation or retrieval in Parkinson's disease {#Sec2}
------------------------------------------------------------------------------------------
An early study showed that if Parkinson's disease (PD) patients were given their dopaminergic medication before completing a reinforcement learning task they learned better from positive than negative feedback^[@CR1]^. The opposite pattern was shown if they were withdrawn from their dopaminergic medication prior to learning. However, the differences were not apparent during the learning trials themselves. Instead, after learning, all the combinations of stimuli were presented without feedback to see whether participants had learned the relative value of the symbols via positive or negative reinforcement. It was only on this latter choice phase that the differences between medication states were seen, which raised the possibility that dopamine does not actually affect the learning process, but a separate process invoked when choosing stimuli based on their learned values. This could be a retrieval process for the learned values, or a decision process on the retrieved values.
When learning and choice trials were separated by a delay, which allowed PD patients to learn off medication and be tested on or off medication, medication state during learning had no effect on expression of positive or negative reinforcement, but dopaminergic state during the choices did^[@CR4]^. This was accompanied by fMRI signals in the ventro-medial prefrontal cortex and nucleus accumbens tracking the value of stimuli only when PD patients were on medication. This suggested that dopamine improved the retrieval and comparison of the learned values.
Similarly, when PD patients learned a set of stimulus-stimulus associations, and only had the rewards mapped onto these stimuli after they had finished learning, they still showed a bias towards the most rewarded stimuli if they were on their medications during the entire session^[@CR5]^. This demonstrated that the reward bias could be induced even when reward learning did not take place. Thus, dopamine appeared to affect value-based decision making, with a bias towards rewarding outcomes.
However, other studies have failed to find effects of dopamine during choice performance, with dopamine during testing 24 hours after reinforcement learning not affecting the change in accuracy from the learning trials^[@CR7],[@CR8]^. One of these studies^[@CR7]^ also found that PD patients on their dopaminergic medications during learning had poorer learning than those off medication. However, this task was a deterministic feedback task, rather than a probabilistic feedback task as used in most other studies, which may have different learning mechanisms due to the lack of stochasticity.
Effect of dopamine administration in healthy young adults {#Sec3}
---------------------------------------------------------
While patients with Parkinson's are known to be dopamine-depleted without medication, healthy young adults are usually considered to have optimal levels of dopaminergic activity for brain processing. Given the dopamine overdose hypothesis^[@CR9]^ posits an optimal level of dopaminergic function, where both increases or decreases to this level impair functioning, one would predict distinct effects of dopamine administration on healthy young people compared to older people with relative dopaminergic loss^[@CR10]^ and people with Parkinson's disease who have more profound dopaminergic loss. Using the deterministic stimulus-response task mentioned above, healthy young participants were worse at learning after 100 mg levodopa^[@CR11]^. Likewise, pramipexole, a D2 agonist, impaired learning on the same task^[@CR12]^. This could be explained by the increased dopaminergic activity tipping people over the peak of the inverted U-shaped response posited by the dopamine overdose hypothesis^[@CR13]^.
A dopamine D2/3 receptor antagonist given to young adults during a probabilistic reward/punishment task did not affect the earlier stages of learning, but impaired performance at the later stages of the learning task, though only for the rewarded stimuli^[@CR6]^. Computational modelling demonstrated an effect of dopamine on the choice parameter for the reward stimuli, but not for the punishment stimuli, or the learning rates, suggesting that the effect was not driven by learning from the feedback. This points to a D2/3 contribution to consolidation or retrieval of rewarded information in healthy young adults.
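The distinction drawn here between "learning rates" and a "choice parameter" maps onto the two parameters of the standard reinforcement-learning models typically fitted to such tasks: a delta-rule learning rate (alpha) and a softmax inverse temperature (beta). The sketch below is a generic illustration of that model class, not the specific model used in the cited studies.

```python
import math
import random

def update_value(value: float, outcome: float, alpha: float) -> float:
    """Delta-rule update: the stored value moves toward the outcome at rate alpha."""
    return value + alpha * (outcome - value)

def softmax_choice(values: list[float], beta: float) -> int:
    """Pick an option with probability proportional to exp(beta * value)."""
    weights = [math.exp(beta * v) for v in values]
    r, cumulative = random.random() * sum(weights), 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r <= cumulative:
            return i
    return len(values) - 1

# Dopamine could, in principle, act on alpha (how feedback updates stored values)
# or on beta (how sharply those stored values drive choice) -- the dissociation at issue above.
```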
Effects of exogenous dopamine in older adults {#Sec4}
---------------------------------------------
When healthy older participants were given levodopa before a reward/punishment learning task, they showed better performance on the reward trials, but no difference on the punishment trials, when compared against a haloperidol (D2 inverse agonist) group^[@CR14]^. Neuroimaging revealed that levodopa increased the striatal reward prediction errors for reward trials but did not affect aversive prediction errors from the punishment trials. If contrasted with Eisenegger *et al*.^[@CR6]^, it suggests that dopamine contributes to the reward prediction errors during learning, and that D2 receptors are important for the selection of actions, but not the learning from them. However, these studies used tasks with only learning trials, and used analysis techniques to try to separate out the influence of the drug on learning and choice selection within that. While other studies with positive and negative outcomes have used post-learning phases to remove the influence of feedback affecting choices^[@CR15]--[@CR17]^, these have not been used with dopaminergic manipulations to our knowledge.
Here, we used a separate choice phase on a reinforcement learning task which had no feedback, and thus tested choice selection only, to assess how levodopa affects the expression/retrieval of positive and negative learning. We chose levodopa as the drug as it is the most commonly prescribed dopaminergic treatment in PD patients, and has previously shown effects on a similar task^[@CR14]^ in healthy adults. In order to isolate the effects of dopamine administration on choice performance from learning or consolidation, we gave this choice phase 24 hours after initial learning and gave participants either 150 mg levodopa or a placebo 1 hour before.
Methods {#Sec5}
=======
Participants {#Sec6}
------------
Thirty-five healthy older adults were recruited from Join Dementia Research and the ReMemBr Group Healthy Volunteer database. One participant was excluded due to glaucoma (contraindication), and three withdrew before completing both conditions. Thirty-one participants completed both conditions.
Participants were native English speakers over 65 years old with normal or corrected vision. They had no neurological or psychiatric disorders and did not have any of the contraindications for the study drugs Domperidone and Madopar (levodopa; see Supplementary Materials [1](#MOESM1){ref-type="media"}). They were not taking any monoaminergic medications, or any drugs listed in the Summary of Product Characteristics for Domperidone or Madopar. Demographic details are provided in Table [1](#Tab1){ref-type="table"}.

Table 1: Demographics and questionnaire statistics.

| Measure | Mean | SD | Range |
| --- | --- | --- | --- |
| N (Male:Female) | 31 (14:17) | | |
| Age | 71.23 | 7.41 | 65–92 |
| Years of Education | 14.42 | 3.45 | 10–24 |
| MoCA | 26.19 | 3.10 | 18–30 |
| DASS Total | 11.29 | 10.12 | 1–39 |
| DASS-D | 3.84 | 4.51 | 0–18 |
| DASS-A | 2.10 | 2.47 | 0–11 |
| DASS-S | 5.35 | 4.10 | 0–14 |
| BIS | 57.53 | 9.01 | 38–73 |
| LARS | −26.65 | 5.45 | −34 to −14 |

The means, standard deviations (SD) and ranges of the demographic and questionnaire data for the participants. A Montreal Cognitive Assessment (MoCA) score of less than 24 suggests cognitive impairment, a Barratt Impulsivity Scale (BIS) score of 72 or higher suggests high impulsivity, Lille Apathy Rating Scale (LARS) scores above −22 suggest apathy, and Depression Anxiety Stress Scale (DASS) subscale scores above 21, 15, and 26 suggest severe depression, anxiety and stress, respectively.
Participants were tested at Southmead Hospital, Bristol, UK. All participants gave written informed consent at the start of each testing session, in accordance with the Declaration of Helsinki. Ethical approval was granted by University of Bristol Faculty Research Ethics Committee. All procedures were in accordance with Good Clinical Practice and HRA and ethical regulations.
Design {#Sec7}
------
A double-blinded, within-subjects, randomised placebo-controlled design was used. The two drugs were 10 mg suspension of Domperidone and 187.5 mg Madopar (37.5 mg benserazide + 150 mg levodopa) dispersible, both mixed with diluted squash, and the placebos were diluted squash, with a Vitamin C tablet dissolved in one to mimic the residue left by the Madopar dispersible tablet. The levodopa dose was chosen to match previous studies which have found effects of dopamine on reinforcement learning tasks^[@CR18],[@CR19]^.
Domperidone is a peripheral dopamine D2 receptor antagonist, given 1 hour before levodopa to counter the nausea sometimes caused by it. The drugs and placebos were prepared by a lab member not otherwise involved in the study.
Tasks {#Sec8}
-----
The reinforcement task was adapted from Pessiglione *et al*.^[@CR14]^, and is referred to as the GainLoss task. It was run using Matlab r2015 and Psychtoolbox-3^[@CR20]--[@CR22]^ on Dell Latitude 3340 laptops. Links to download the code are provided in the Data Availability section in this manuscript.
In this task, volunteers were instructed to attempt to win as much money as possible. During learning, on each trial one of three pairs of symbols (Fig. [1](#Fig1){ref-type="fig"}) was shown on the computer screen until the participants selected one symbol using the keyboard (there was no response deadline). After this their selection was circled in red for 500 ms. This was followed by one of four outcomes presented on the screen for 1000 ms: GAIN 20 pence; LOSE 20 pence; LOOK at a 20 pence piece; or NOTHING. The outcome was determined probabilistically, with symbol A in the Gain pair resulting in 'GAIN' on 80% of trials, and 'NOTHING' on 20%, and vice versa for symbol B in the Gain pair. In the Look pair, symbol C resulted in a 'LOOK' outcome 80% of the time, and 'NOTHING' 20% of the time (vice versa for symbol D), and in the Loss pair symbol F had an 80% chance of resulting in a 'LOSS' and 20% chance of 'NOTHING' (vice versa for symbol E). The outcome was displayed for 1000 ms, which was followed by a fixation cross for 500 ms before the onset of the next trial.

Figure 1: Diagram of the GainLoss experiment learning trials. Top left shows a sample Gain trial, and the other three panels show the outcome probabilities for the symbols in each pair (representative symbols shown here).
The learning was preceded by a practice block of 30 trials (10 for each pair, using different symbols to the learning blocks), followed by two blocks of 90 learning trials (30 trials per pair). Choice performance was measured by showing all symbols in all combinations six times (e.g. AB, AC, AD..., 15 pairs in total, 6 repetitions of each pair, 90 trials in total) without the outcomes shown. The stimuli were presented for the same duration as in the learning trials, except without the outcome screen. Choice performance was assessed immediately after learning, after a 30-minute delay, and 24 hours later. Different sets of stimuli were used for each condition, the order of which was randomised across participants.
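For concreteness, the trial structure described above can be sketched as follows. This is a simplified Python illustration, not the authors' Matlab/Psychtoolbox code; the symbol letters A–F follow the pair descriptions given earlier.

```python
import random
from itertools import combinations

# 80%-likely outcome for each symbol, per the pair descriptions above.
LIKELY = {"A": "GAIN", "B": "NOTHING",     # Gain pair
          "C": "LOOK", "D": "NOTHING",     # Look pair
          "E": "NOTHING", "F": "LOSE"}     # Loss pair
# The remaining 20% of trials deliver the alternative outcome for that symbol.
RARE = {"A": "NOTHING", "B": "GAIN",
        "C": "NOTHING", "D": "LOOK",
        "E": "LOSE", "F": "NOTHING"}

def sample_outcome(symbol: str) -> str:
    """Return the symbol's frequent outcome with p = 0.8, otherwise its rare one."""
    return LIKELY[symbol] if random.random() < 0.8 else RARE[symbol]

# Choice phase: every pairwise combination of the six symbols (15 pairs),
# repeated six times and presented without feedback -- 90 trials in total.
choice_trials = [pair for pair in combinations("ABCDEF", 2) for _ in range(6)]
assert len(choice_trials) == 90
```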
An episodic verbal learning task was also learned on day 1. Participants read aloud a list of 100 words and were tested 30 minutes and 24 hours later with the remember-know paradigm. Several questionnaires and paper tests were also given; digit span^[@CR23]^ and the St. Mary's Hospital Sleep Questionnaire^[@CR24]^ (SMHSQ) were given each day, and the Montreal Cognitive Assessment^[@CR25]^ (MoCA), Barratt Impulsivity Scale^[@CR26]^ (BIS), Lille Apathy Rating Scale^[@CR27]^ (LARS), Depression Anxiety Stress Scale^[@CR28]^ (DASS) and Rational-Experiential Inventory^[@CR29]^ (REI) were given once each on day 1 or day 3 (i.e. not after drug or placebo). The digit span measures were reported elsewhere^[@CR30]^, but in brief levodopa did not affect working memory capacity but did impair accuracy on manipulation components.
Procedure {#Sec9}
---------
Participants completed four testing sessions, arranged into two pairs of days (see Fig. [2](#Fig2){ref-type="fig"}). On day 1, participants gave consent and were fully screened for all contraindications and interactions for the study drugs (Domperidone and Madopar), and Vitamin C, which was used in the placebo. They then learned the cognitive tasks and completed some of the questionnaires during the 30-minute delay before being tested on the tasks.

Figure 2. Timeline of experimental conditions. Each condition was identical except that in one pair of days participants received the drugs (blue) 1 hour before testing on Day 2, and on the other received the placebos (red) before testing. The order of drug and placebo condition was randomised across participants.
On day 2, participants again gave consent and continued eligibility was confirmed. Baseline blood pressure and heart rate was recorded before the Domperidone (or placebo; double-blinded) was administered. Thirty minutes later their blood pressure and heart rate were measured again, and the levodopa (or placebo) was given. Blood pressure and heart rate were also recorded 30 and 60 minutes later. One hour after the levodopa (or placebo) was administered, participants completed the GainLoss and remember-know tasks, digit span and SMHSQ. They then learned another list of words to test encoding effects of dopamine on long term memory, and memory was tested immediately, and over the phone 1, 3 and 5 days later.
Days 3 and 4 were identical to days 1 and 2, with the exception of the drug/placebo. On the last phone test after day 4, participants were asked which day they thought they received the drugs to assess blinding success.
Data analysis {#Sec10}
-------------
Selection of the symbol that was more likely to lead to the highest value of the two shown was considered the optimal response, regardless of the outcome actually given on that learning trial (e.g. if they select symbol A, the 80% Gain symbol, this is considered optimal even if it results in 'NOTHING' on that particular trial). For the Look pair, symbol C (80% LOOK) was treated as optimal when it was against 'NOTHING' even though neither outcome had monetary value. The Look symbols were considered optimal against the Loss symbols, while the Gain symbols were considered optimal against the Look symbols.
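As an illustration of this scoring rule (not the authors' analysis code), the sketch below encodes the ranking implied by the text, with the Gain pair above the Look pair above the Loss pair and the 80% symbol above the 20% symbol within each pair, and marks a choice as optimal whenever the chosen symbol outranks the alternative shown with it.

```python
# Ranking implied by the rule above: A > B > C > D > E > F.
RANK = {"A": 6, "B": 5, "C": 4, "D": 3, "E": 2, "F": 1}

def is_optimal(chosen, other):
    """True if the chosen symbol outranks the alternative it was shown with,
    regardless of the outcome actually delivered on that trial."""
    return RANK[chosen] > RANK[other]

# Example: choosing A against D is optimal even if that trial pays 'NOTHING'.
assert is_optimal("A", "D") and not is_optimal("E", "C")
```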
For the choice phase, the number of times each symbol was chosen was divided by the number of times it was seen, to give percentage selections (see Fig. [3](#Fig3){ref-type="fig"}). Percentage avoidances were calculated likewise. Within-subject ANOVAs and t-tests were used on the 24-hour choice phase measures to see how levodopa affected choice performance. Cohen's *d* and partial eta-squared (*η*~p~^2^) effect sizes are reported alongside t-tests and ANOVAs, respectively. If Mauchly's test of sphericity was significant, the Greenhouse-Geisser correction to the degrees of freedom was applied. We used SPSS v23 (IBM) for statistics. Q-Q plots were used to verify that data were approximately normal before parametric tests.

Figure 3. Diagram showing how Choose-A and Avoid-F were calculated in the choice phase. The same procedure was used for all symbols (representative symbols shown here).
In addition to frequentist statistical analyses, we also performed Bayesian analyses in JASP^[@CR31]^. Bayesian t-tests and repeated measures ANOVAs were used. Bayesian analysis compares the likelihood of the data given the null hypothesis (H~0~) to the likelihood given the experimental hypothesis (H~1~). The ratio of these two gives the Bayes Factor (BF~01~ = H~0~/H~1~), which quantifies how much more likely the data are given the null hypothesis rather than the experimental hypothesis. Please note that BF can also be reported in terms of the experimental hypothesis (i.e. BF~10~ = H~1~/H~0~), but we use BF~01~ here due to the direction of results we found. A BF of 1 suggests equal evidence for the two hypotheses, while the further the BF is from 1, the stronger the evidence for or against the null. We used the default prior of a Cauchy distribution with width 0.707 (meaning we assume there is a 50% probability of the effect size being between −0.707 and 0.707). Robustness checks with different prior widths are provided in the Supplementary Materials.
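For readers who want to reproduce the flavour of these Bayes factors outside JASP, the sketch below numerically integrates the standard JZS Bayes factor for a one-sample/paired t-test with a Cauchy prior of width r = 0.707, the same default described above. It is an independent illustration, not the authors' analysis script, and its output may differ slightly from JASP's.

```python
import numpy as np
from scipy import integrate

def jzs_bf01(t, n, r=0.707):
    """BF01 for a one-sample/paired t-test under a Cauchy(0, r) prior on
    effect size (the JZS formulation used by default Bayesian t-tests)."""
    v = n - 1  # degrees of freedom
    # Marginal likelihood under H0 (effect size fixed at zero), up to a constant
    null_like = (1 + t**2 / v) ** (-(v + 1) / 2)
    # Marginal likelihood under H1: integrate over the prior on the scale g
    def integrand(g):
        a = 1 + n * g * r**2
        return (a ** -0.5
                * (1 + t**2 / (a * v)) ** (-(v + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    alt_like, _ = integrate.quad(integrand, 0, np.inf)
    return null_like / alt_like  # >1 favours the null, <1 favours H1

# The 24-hour accuracy contrast reported below: t(30) = 0.906, N = 31
print(round(jzs_bf01(0.906, 31), 3))  # should land close to the reported BF01 of ~3.58
```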
While levodopa is not prescribed based on body-weight, a previous study showed dose-dependent effects of levodopa on episodic memory consolidation when body weight was used to adjust the doses^[@CR32]^. Body weight affects total absorption of levodopa, and the elimination half-life^[@CR33]^, thus affecting the concentration of dopamine available in the brain. Therefore, we divided the levodopa dose (150 mg) by body weight (kg) to give the weight-adjusted doses (mg/kg) and looked for linear or polynomial regressions between this and the difference in accuracy and choices between drug and placebo conditions.
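A minimal sketch of that dose adjustment and regression check is shown below; the body weights and accuracy differences in it are invented purely for illustration, since the individual-level values are not reported here.

```python
import numpy as np

levodopa_mg = 150.0
weight_kg = np.array([62.0, 71.5, 80.2, 95.0])   # hypothetical participants
acc_diff = np.array([3.1, -1.2, 0.4, 2.0])       # drug minus placebo accuracy, % points (hypothetical)

dose_per_kg = levodopa_mg / weight_kg            # weight-adjusted dose (mg/kg)

# Linear and polynomial (quadratic) fits of the drug-placebo difference on dose
linear_fit = np.polyfit(dose_per_kg, acc_diff, deg=1)
quadratic_fit = np.polyfit(dose_per_kg, acc_diff, deg=2)
print(linear_fit, quadratic_fit)
```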
We fit two computational reinforcement learning models to the behavioural data to examine the effects on softmax choice parameters; a Q-learning^[@CR34]^ model with 2 learning rates and one choice parameter, and an OpAL^[@CR35]^ model with 2 learning rates and 2 choice parameters. Separate parameters were used for day 1 learning trials and day 2 testing trials. Full details are provided in Supplementary Materials.
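As a rough orientation to this class of model, the sketch below shows a conventional dual-learning-rate Q-learning update paired with a softmax choice rule (illustrative Python; the parameter names, reward coding and simplifications are ours and may not match the exact parameterisation that was fitted).

```python
import numpy as np

rng = np.random.default_rng()

def softmax_choice(q_pair, beta):
    """Pick one of the two symbols in a pair from a softmax over their Q-values;
    beta is the inverse temperature (higher = more deterministic choices)."""
    p = np.exp(beta * np.asarray(q_pair))
    p /= p.sum()
    return rng.choice(len(q_pair), p=p)

def update_q(q, chosen, reward, alpha_pos, alpha_neg):
    """Update the chosen symbol's value; the learning rate depends on whether
    the prediction error is positive or negative."""
    delta = reward - q[chosen]
    q[chosen] += (alpha_pos if delta >= 0 else alpha_neg) * delta
    return q

# One hypothetical Gain-pair trial (outcome coded +1 for GAIN, 0 for NOTHING)
q = np.zeros(2)
choice = softmax_choice(q, beta=2.0)
q = update_q(q, choice, reward=1.0, alpha_pos=0.3, alpha_neg=0.1)
print(choice, q)
```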
Results {#Sec11}
=======
Participants were not able to guess correctly which day they received the drugs or placebo. Twenty-nine participants provided guesses, of which 17 were correct, and a binomial test showed this was not significantly different from chance (p = 0.720).
Learning accuracy {#Sec12}
-----------------
During learning trials, overall mean accuracy was slightly higher on the Gain pair (mean 57% accuracy, SD = 14.4) than the Loss pair (mean = 53%, SD = 11.8; Look pair mean = 52%, SD = 14.0), although this difference was not significant (pair \* drug ANOVA, pair effect: F (1, 30) = 2.508, p = 0.124, *η*~p~^2^ = 0.077).
Does levodopa affect choice phase accuracy {#Sec13}
------------------------------------------
The mean accuracies were much higher for the choice phases at 0 minutes, 30 minutes and 24 hours (\> 65%; see Fig. [4](#Fig4){ref-type="fig"}). Performance did not change over the 3 choice tests, as shown by no significant effect of time (nor drug nor interaction) in a time \* drug repeated measures ANOVA (p \> 0.05; see Table [2](#Tab2){ref-type="table"} for statistics). As the drug/placebo was only given before the 24-hour choice phase, we used paired t-tests to look at the accuracy separately on this phase, which revealed accuracy was not affected by levodopa (t (30) = 0.906, p = 0.372, *d* = 0.163; BF~01~ = 3.581).

Figure 4. The mean % accuracy on learning and choice phases, for both conditions. The arrow shows when the drug/placebo was administered (time not to scale). There was no difference between accuracy after drug or placebo (p = 0.372, BF~01~ = 3.581; 95% confidence intervals).

Table 2. Time \* drug ANOVAs on accuracy and selections.

| Measure | Time F | Time p | Time *η*~p~^2^ | Drug F | Drug p | Drug *η*~p~^2^ | Time \* Drug F | Time \* Drug p | Time \* Drug *η*~p~^2^ |
|---|---|---|---|---|---|---|---|---|---|
| Accuracy | 0.202 | 0.817 | 0.007 | 0.425 | 0.520 | 0.014 | 0.455 | 0.637 | 0.015 |
| Choose-A | 1.505 | 0.232 | 0.049 | 0.230 | 0.635 | 0.008 | 0.031 | 0.969 | 0.001 |
| Choose-B | 0.142 | 0.868 | 0.005 | 1.121 | 0.299 | 0.037 | 0.282 | 0.755 | 0.010 |
| Choose-C | 1.077 | 0.347 | 0.036 | 0.063 | 0.804 | 0.002 | 1.927 | 0.155 | 0.062 |
| Choose-D | 0.568 | 0.570 | 0.019 | 0.019 | 0.892 | 0.001 | 0.674 | 0.514 | 0.023 |
| Choose-E | 0.387 | 0.681 | 0.013 | 1.446 | 0.239 | 0.047 | 0.341 | 0.713 | 0.012 |
| Choose-F | 0.503 | 0.607 | 0.017 | 3.093 | 0.089 | 0.096 | 1.230 | 0.300 | 0.041 |

Statistical output from the two-way repeated measures ANOVAs (time \* drug) on accuracy and each choice across the three choice phases. No effects or interactions were significant. df for the Time, Drug and Time \* Drug effects are (2, 58), (1, 29) and (2, 58), respectively.
We investigated why learning accuracy might have been so low. We found no correlations between age and learning or choice accuracy (p \> 0.5; Table [S4](#MOESM1){ref-type="media"}) but did find that MoCA (a measure of cognitive impairment) correlated with learning accuracy in both conditions (drug: r = 0.364, p = 0.044; placebo: r = 0.388, p = 0.031) and with choice phase accuracy only in the drug condition (drug: r \> 0.47, p \< 0.01; placebo r \< 0.25, p \> 0.2; Table [S4](#MOESM1){ref-type="media"}). Importantly, while these latter correlations might suggest that levodopa is interacting with cognitive impairment to affect accuracy, the correlations were seen in the drug condition at the 0-minute and 30-minute choice phases, which occurred *before* the drug was given and therefore suggest that the drug itself had no effect. Further supporting this view, we found no correlation of MoCA with the difference in 24-hour accuracy between the two conditions (r = 0.233, p = 0.206), and no effect of including MoCA as a covariate in any accuracy analyses (p \> 0.05; see Table [S6](#MOESM1){ref-type="media"}).
Positive and negative choices {#Sec14}
-----------------------------
We divided the number of times participants chose each symbol by the number of times it was presented to give the percentage of choices of each symbol (see Fig. [3](#Fig3){ref-type="fig"}). Figure [5](#Fig5){ref-type="fig"} shows the mean percentages of the selections of each symbol for the drug and placebo conditions at each choice phase. We looked to see whether performance changed over the three choice phases, including drug/placebo as a factor; if levodopa affected behaviour on the 24-hour choice phase there would be a time \* drug interaction. No effects of time, drug or interaction were found for any choice (p \> 0.05; see Table [2](#Tab2){ref-type="table"} for statistics).

Figure 5. The mean percentage of choices of each symbol for both conditions (95% confidence intervals) at (**a**) 0-minutes, (**b**) 30-minutes, (**c**) 24-hours. The value of the symbol is the sum of the probability multiplied by the value of each outcome (i.e. 80% chance of loss (−1) and 20% chance of nothing (0) gives −80%). There were no significant effects of time or drug across the phases, nor any differences between drug and placebo conditions at the 24-hour test (p \> 0.05, BF~01~ \> 1).
Does levodopa affect positive and negative choices {#Sec15}
--------------------------------------------------
Paired t-tests on the 24-hour choice phase showed no significant differences in percentage of choices on drug or placebo for any of the symbols (p \> 0.05; see Table [3](#Tab3){ref-type="table"} for statistics), suggesting that levodopa did not affect selection for any choice. Bayesian t-tests showed moderate evidence in favour of the null hypothesis (BF~01~ \> 3) for all choices apart from symbol F where the evidence for the null hypothesis was anecdotal (BF~01~ = 1.301; see Table [3](#Tab3){ref-type="table"}). This suggests that levodopa does not affect choice selection, except for the most punished symbol where the evidence is inconclusive.

Table 3. Frequentist and Bayesian t-tests on the 24-hour choice phase.

| Measure | t | p | d | 95% Conf Int | BF~01~ | Posterior | 95% Cred Int |
|---|---|---|---|---|---|---|---|
| Accuracy | 0.906 | 0.372 | 0.163 | −4.135, 10.730 | 3.581 | 0.148 | −0.186, 0.482 |
| Choose-A | −0.332 | 0.742 | −0.060 | −16.919, 12.188 | 4.960 | −0.052 | −0.391, 0.281 |
| Choose-B | 0.718 | 0.478 | 0.129 | −8.725, 18.187 | 4.115 | 0.115 | −0.214, 0.455 |
| Choose-C | 0.878 | 0.387 | 0.158 | −6.981, 17.519 | 3.663 | 0.143 | −0.200, 0.494 |
| Choose-D | 0.108 | 0.915 | 0.019 | −13.463, 14.968 | 5.192 | 0.019 | −0.308, 0.355 |
| Choose-E | 0.454 | 0.653 | 0.082 | −11.277, 17.729 | 4.744 | 0.071 | −0.252, 0.415 |
| Choose-F | −1.771 | 0.087 | −0.318 | −25.002, 1.177 | 1.301 | −0.288 | −0.642, 0.055 |

Statistics from frequentist and Bayesian t-tests on the accuracy and percentage of choices for each symbol at the 24-hour choice test. BF~01~ \> 3 reflects moderate evidence in favour of the null hypothesis. Cohen's d and 95% confidence intervals are presented for frequentist t-tests, and the posterior median and 95% credible intervals for the Bayesian t-tests. All error % from the Bayesian analyses were \< 4 × 10^−4^.
We ran a repeated measures ANOVA to see whether levodopa affected the selection of the most rewarded and punished symbols differently (this is analogous to the ANOVAs run on choose-A and avoid-B in Frank *et al*.^[@CR1]^). Looking just at the number of times the most rewarded symbol was chosen (choose-A) and the number of times the most punished symbol was avoided (avoid-F), there was no effect of medication (F (1, 30) = 0.719, p = 0.403, *η*~p~^2^ = 0.023) or choice (F (1, 30) = 3.058, p = 0.091, *η*~p~^2^ = 0.092), nor an interaction of medication and choice (F (1, 30) = 2.851, p = 0.102, *η*~p~^2^ = 0.087). This again suggests that levodopa did not affect expression of positive or negative reinforcement (Fig. [4](#Fig4){ref-type="fig"}; avoid-F is the inverse of choose-F) and that punishment-avoidance and reward-selection were equal in this task. A Bayesian repeated measures ANOVA found that this data was most likely under the null model (with no effects of medication, choice, or interactions; BF~M~ = 3.252) arguing against the inclusion of medication or choice in the model (BF~inclusion~ \< 1).
Additional analyses {#Sec16}
-------------------
The lack of effect here was surprising given previous studies' findings^[@CR6],[@CR14]^, so we investigated whether factors such as age, relative levodopa dose, or cognitive function could have contributed to the lack of effect.
Weight-adjusted dose did not have any significant linear or polynomial associations with the difference (between levodopa and placebo conditions) in 24-hour choice accuracy or on the difference on any of the choices (*r*^2^ \< 0.017, p \> 0.2; Table [S2](#MOESM1){ref-type="media"}). Nor did we find any associations between 24-hour accuracy or choice behaviour and MoCA, DASS, BIS, LARS, age, or years of education (p \> 0.05; see Table [S3](#MOESM1){ref-type="media"}). Several participants had low MoCAs, so we included age and MoCA as covariates in the frequentist analyses reported above, which did not return any significant interactions with these covariates or produce different main effects (p \> 0.05; see Tables [S6](#MOESM1){ref-type="media"} & [S7](#MOESM1){ref-type="media"}).
As mentioned above, overall learning accuracy was low, so we applied post-hoc thresholding to the data, only including participants who had greater than 60% accuracy overall, or on just the Gain pair or Loss pair, or on the accuracy on the final 10 presentations of the Gain or Loss pair. This left 21 participants in the drug condition and 18 in the placebo condition; only 12 participants passed for both conditions so between-subject analyses were used. The only significant effect found was an overall effect of drug on choose-F (F (1, 37) = 5.189, p = 0.029, *η*~p~^2^ = 0.123). However, this does not mean that in these high-learners the drug decreased choose-F, as it was an overall effect and the drug \* time interaction was not significant (F (2, 74) = 0.464, p = 0.630, *η*~p~^2^ = 0.012), meaning that the drug group had lower choose-F across all three choice phases, including before the drug was given, thus suggesting that levodopa did not affect choice behaviour in these high learners.
We also split participants into those who showed a negative effect of levodopa on digit span manipulation accuracy^[@CR30]^, and those who did not. Including the subgrouping as a between-subject factor did not affect the results (see Table [S7](#MOESM1){ref-type="media"}). This suggests that the lack of effect here was the same in those who showed effects of dopamine on the digit span, and those who did not.
We also looked at overall reaction times and found no difference between reaction times when on drug or placebo (p \> 0.05; see Supplementary Materials).
Computational Modelling {#Sec17}
-----------------------
We fit two reinforcement learning models (Q-learning and OpAL model; see Supplementary Materials for model details) to the behavioural data, with separate parameters for the 24-hour choice phase, to see whether levodopa affected the choice mechanisms. As there was no feedback on the 24-hour choice phase, the only parameters that are fit to that phase are the softmax inverse temperatures, which control how strictly people rely on the learned values of the stimuli versus how random their choices are. In the OpAL model there are two softmax parameters to separately control the influence of information learned through positive and negative reinforcement.
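To make the role of the inverse temperature concrete, the short sketch below shows how the probability of picking the higher-valued of two symbols rises as the inverse temperature increases; in the OpAL variant, two such weights separately scale positively and negatively learned values. This is a generic illustration of the mechanism rather than the fitted models themselves.

```python
import numpy as np

def p_choose_better(v_better, v_worse, beta):
    """Softmax probability of choosing the higher-valued of two options;
    for two options this reduces to a logistic of beta * value difference."""
    return 1.0 / (1.0 + np.exp(-beta * (v_better - v_worse)))

for beta in (0.5, 2.0, 10.0):
    # Low beta gives near-random choices; high beta gives nearly
    # deterministic, value-driven choices.
    print(beta, round(p_choose_better(0.8, 0.2, beta), 3))
```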
The Q-learning model fit better than the OpAL model (lower Bayesian Information Criteria^[@CR36]^; 369.7223 vs 374.6993), but its day 2 parameters did not differ between the two conditions (p \> 0.2, BF~01~ \> 1), nor did the day 1 parameters (p \> 0.05, BF~01~ \> 1; see Table [4](#Tab4){ref-type="table"}). We also looked at the parameters from the poorer fitting OpAL model which had no significant difference between conditions either (Table [S8](#MOESM1){ref-type="media"}). This suggests that levodopa does not affect the randomness of choice behaviour, or the relative influence of positive and negative learning on this.

Table 4. Q-learning model parameter statistics.

| Measure | t | p | d | 95% Conf Int | BF~01~ | Posterior | 95% Cred Int |
|---|---|---|---|---|---|---|---|
| *α*~+~ | 0.0580 | 0.9541 | 0.010 | −0.342, 0.362 | 5.212 | 0.015 | −0.428, 0.468 |
| *α*~−~ | −1.1782 | 0.2480 | −0.212 | −0.566, 0.146 | 2.776 | −0.249 | −0.723, 0.206 |
| *β* | 1.4645 | 0.1535 | 0.263 | −0.097, 0.619 | 1.990 | 0.306 | −0.139, 0.795 |
| *β* - day 2 | 1.9205 | 0.0643 | 0.345 | −0.020, 0.705 | 1.033 | 0.417 | −0.061, 0.915 |

Output from frequentist and Bayesian paired t-tests on the Q-learning model's parameters for day 1 and day 2 data. No significant differences were found. BF~01~ \> 3 reflects moderate evidence in favour of the null hypothesis. Cohen's d and 95% confidence intervals are presented for frequentist t-tests, and the posterior median and 95% credible intervals for the Bayesian t-tests.
Discussion {#Sec18}
==========
Levodopa given 24 hours after learning a reward and punishment task did not affect choice performance. This suggests that levodopa does not affect the expression of positive or negative reinforcement 24 hours after learning in older adults.
This contradicts several other studies which have found that dopamine can affect expression of reinforcement learning^[@CR4]--[@CR6]^. However, there are several differences between each of these studies and the current one. For example, Shiner *et al*.^[@CR4]^ and Smittenaar *et al*.^[@CR5]^ did not have punishments in their task, only rewards of varying probabilities. It may be that dopamine's effects are only seen on positive reinforcement, which were missed in our task as we only had 2 stimuli that were positively reinforced (symbols A and B).
Eisenegger *et al*.^[@CR6]^ used a task with positive and negative reinforcement like ours but did not have a separate 'novel pairs' choice phase. Instead, they looked at the performance towards the end of the learning trials and used that to assess effects on the expression of learning. While their modelling analysis suggested the effects were not due to differences in learning rates, but rather the softmax decision parameter, this was still during the learning process and thus may be quite different to processes that occur much later and do not incur feedback. It should be noted that the softmax parameter in reinforcement learning models captures how frequently participants make a 'greedy' selection and choose the stimuli with the highest value, rather than making an explorative choice to a lower value stimulus. Thus, it also functions like a noise parameter, and will be higher when there is more variance that the learning rate parameters cannot explain. It is possible that the true effects were not due to more random choosing but rather some unknown process during learning that was simply captured by this noise parameter.
Alternatively, perhaps our participants did not learn the task well enough for us to be able to detect differences. The average accuracy at the end of the learning trials was close to chance, though it increased on the novel pairs choice phase to levels seen in other studies^[@CR1],[@CR4],[@CR5]^ (i.e. 50--80%). The poor learning may have been compounded by the inclusion of several participants with low MoCAs; MoCA correlated negatively with learning accuracy but did not reliably correlate with accuracy on the choice phases and excluding low MoCA participants did not change the pattern or significance of results. Levodopa had no effect regardless of cognitive function, but as this was not our main focus and the experiment was not set up to test this directly, this analysis was underpowered.
Additionally, the current participants were older adults (65+ years) whereas the majority of studies using this task have been on young adults^[@CR14]--[@CR16],[@CR37],[@CR38]^. We chose older adults as they have reduced dopaminergic activity^[@CR10]^, however as dopamine receptors and transporters seem more affected by age it may be that this actually reduced the effect of the drug in our sample. Age did not correlate with accuracy or choice measures, although this may be due to the narrow age-range tested here. It is possible that levodopa may affect expression of reinforcement learning in young healthy participants while not doing so in older adults, thus different results may be found if this experiment were repeated in young adults, and if performance thresholds were applied during the learning phase.
Several other studies have combined positive and negative outcomes with a transfer task^[@CR15]--[@CR17]^. Our data are similar to the 'partial information' feedback condition from some of these studies^[@CR15],[@CR16]^ with an increase in choices with increasing value. Our study gave such a task three times, with the third one occurring after drug/placebo administration. It is possible that the repeated testing in our study changed the framing of the 24-hour choice phase to more of an explicit memory task, rather than a test of implicit learning, although the lack of difference between performance across the 3 choice phases and the similarity with previous studies argues against this.
The lack of effect of levodopa on anything could also suggest that the drug simply was not having an effect. However, we used a fairly large dose (150 mg levodopa), which is as large as or larger than several other studies^[@CR11],[@CR14],[@CR18],[@CR19],[@CR39]--[@CR41]^. We waited 1 hour between dosing and testing, which coincides with the time to max concentration^[@CR42],[@CR43]^. Although levodopa is not prescribed based on weight, higher weight (and thus larger size) decreases absorption and concentrations of levodopa^[@CR33]^ and will lead to lower relative doses reaching the brain. The dopamine overdose hypothesis suggests that too high or low levels of dopamine would impair function, so the relative doses people received may affect the results. Some studies have reported dose-dependent effects, with people who had larger relative doses showing greater effects^[@CR44]^. We found no such associations, linear or quadratic. Additionally, levodopa did affect the digit span in some participants (see^[@CR30]^ for details), and when we looked specifically in the participants who showed that effect there was still no effect in the GainLoss task. This suggests the lack of effect was not due to the specific dosage given.
Finally, if dopamine does not affect expression of reinforcement learning, then how can we explain previous results? One possibility is that overall dopamine levels do not affect expression/retrieval, but rather that D2 receptor activation does, as suggested by Eisenegger *et al*.^[@CR6]^. An alternate explanation is that previous effects were driven by consolidation. Consolidation is a mechanism often overlooked in this type of memory, but in between learning the values and retrieving them, those values must be stored for a period and protected against interference from other learning. It may be that previous effects can be explained by consolidation, as the dopamine drugs were either present during learning (and thus consolidation after learning) or given just after learning before a 1-hour delay (which would have allowed consolidation to be affected). Previous studies have suggested dopamine may affect the persistence of reinforcement learning across time^[@CR8],[@CR45]^, and this is a possible avenue for future research.
Supplementary information
=========================
{#Sec19}
Supplementary Materials 1
**Publisher's note:** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
**Supplementary information** accompanies this paper at 10.1038/s41598-019-42904-5.
The authors wish to thank NBT and BRACE for the use of their buildings, the funders (Wellcome \[097081/Z/11/Z\], MRC and BRACE \[14/8108\]), and all the participants who took part in the study.
J.P.G., H.K.I. and E.J.C. designed the study. J.P.G., H.K.I., L.E.K., A.H. and N.I.I. performed data collection. J.P.G. analysed the data. J.P.G., H.K.I. and E.J.C. wrote the manuscript. All authors reviewed the manuscript.
Data are available at the University of Bristol data repository, data.bris, at 10.5523/bris.qpqzeqc3q53m2dwczp69q3pv0^[@CR46]^. Our Matlab code for the analysis is available here 10.5281/zenodo.1438407^[@CR47]^, and the code for the GainLoss task is available here 10.5281/zenodo.1443384^[@CR48]^.
Competing Interests {#FPar1}
===================
The authors declare no competing interests.
Last week, two congressional committees issued a little-noticed report detailing how Treasury Department, Internal Revenue Service, and Health and Human Services officials conspired to create a massive new entitlement not authorized anywhere in federal law.
In the summer of 2012, the House of Representatives' Committee on Oversight & Government Reform and Committee on Ways & Means launched an investigation to determine “whether IRS and Treasury conducted an adequate review of the statute and legislative history prior to coming to [the] conclusion that [the Patient Protection and Affordable Care Act's] premium subsidies would be allowed in federal exchanges.” Over the next 18 months, the committees held numerous hearings with senior Treasury and IRS officials, while investigative staff conducted interviews with key agency attorneys responsible for developing the regulations in question. Investigators also reviewed what few documents Treasury and IRS officials allowed them to see.
Here is what seven key Treasury and IRS officials told investigators.
In early 2011, Treasury and IRS officials realized they had a problem. They unanimously believed Congress had intended to authorize certain taxes and subsidies in all states, whether or not a state opted to establish a health insurance "exchange" under the Patient Protection and Affordable Care Act. At the same time, agency officials recognized: (1) the PPACA plainly does not allow those taxes and subsidies in non-establishing states; (2) the law's legislative history offers no support for their theory that Congress intended to allow them in non-establishing states; and (3) Congress had not given the agencies authority to treat non-establishing states the same as establishing states.
Nevertheless, agency officials agreed, again with apparent unanimity, to impose those taxes and dispense those subsidies in states with federal Exchanges, the undisputed plain meaning of the PPACA notwithstanding. Treasury, IRS, and HHS officials simply rewrote the law to create a new, unauthorized entitlement program whose cost "may exceed $500 billion dollars over 10 years." (My own estimate puts the 10-year cost closer to $700 billion.)
Finally, what little research the agencies performed on Congress' intent was neither "serious" nor "thorough," and appears to have occurred after agency officials had already made up their minds. For example, Treasury and IRS officials were unaware of numerous elements of the statute and legislative history that conflicted with their theory of Congress's intent and supported the plain meaning of the statute.
Background
Section 1311 of the PPACA directs states to establish health insurance Exchanges. If a state declines, Section 1321 directs the federal government to establish an Exchange within that state. Other sections authorize health-insurance subsidies as well as penalties against individuals and employers who do not purchase coverage.
The PPACA authorizes the IRS to issue health-insurance tax credits only to taxpayers who purchase coverage “through an Exchange established by the State under section 1311 of the Patient Protection and Affordable Care Act.” The tax-credit eligibility rules repeat this restriction, without deviation, nine times. The undisputed plain meaning of these rules is that when states decline to establish an Exchange and thereby opt for a federal Exchange -- as 34 states accounting for two-thirds of the U.S. population have done -- the IRS cannot issue tax credits in those states.
Treasury, IRS, and HHS officials simply decided that Congress was wrong, and conspired to disregard the clear restrictions Congress placed on this new entitlement program. In effect, they created a new entitlement program that no Congress ever authorized. The IRS is dispensing those unauthorized subsidies today, which means that two-thirds of the tax credits the IRS is issuing are illegal.
The following account of how the agencies created this massive new entitlement program is taken from the congressional investigators’ report.
How Changing a Few Words Can Create a $700 Billion Entitlement
In the summer of 2010, IRS officials began working on rules to implement the PPACA’s premium-assistance tax credits. Like the statute itself, early drafts of their regulations reflected the requirement that tax-credit recipients must be enrolled in health insurance through an Exchange “established by the State under section 1311.”
In March 2011, Emily McMahon came across a news article noting that the Act only authorizes tax credits through state-established Exchanges. McMahon told investigators that article was the first she had heard of this provision of the law. That's significant, because as the Acting Assistant Secretary for Tax Policy at the Department of Treasury, McMahon was responsible for implementing that provision of the statute. A full year after President Obama signed the PPACA into law, McMahon was unaware of a very important feature of the statutory language she was charged with implementing.
According to investigators, shortly after McMahon learned that the language “established by the State under section 1311” appears in the statute, it disappeared from the IRS’s draft regulations. It was replaced with language permitting tax credits to be issued through federal Exchanges.
That seemingly minor change is significant for several reasons.
First, the IRS doesn’t have the authority to issue tax credits on its own, and Congress clearly authorized these credits only in specific circumstances.
Second, these “tax credits” are actually cash payments that the IRS sends straight to private health insurance companies.
Third, the tax credits trigger a host of other measures, including additional subsidies as well as penalties against both individuals and employers who fail to purchase adequate coverage. If a state doesn't establish an Exchange, no tax credits are allowed, and those employers and individuals are explicitly exempt from such penalties. But when the IRS issues unauthorized tax credits in those states, it subjects individuals and employers to illegal penalties.
Again, this is happening right now, in 34 states accounting for two-thirds of the U.S. population. By my estimate, over 10 years this seemingly innocuous change will result in the IRS taxing, borrowing, and spending a staggering $700 billion more than the PPACA allows, which is almost as much as the PPACA's initial price tag. All for no more reason than a few unelected bureaucrats felt like it. I wish that were an exaggeration.
A Conspiracy against Taxpayers
Congressional investigators found that from the moment this issue came to the attention of Treasury and IRS officials, everyone involved agreed not to follow the statute -- even though the statutory language was (as Treasury officials would later describe it) “apparently plain,” and they recognized that Congress had given the IRS no authority to designate an Exchange established by the federal government under Section 1321 to be “an Exchange established by the State under section 1311.”
To get around that problem, Treasury and IRS officials asked HHS to issue a rule declaring that federally established Exchanges are in fact "established by the State." HHS had no authority to make such a paradoxical designation either, yet the agency obliged. When the IRS published its proposed tax-credit rule on August 17, 2011, it adopted HHS’s counter-textual designation. That had the effect of proposing to offer tax credits in federal Exchanges; or more precisely, of proposing a massive new entitlement program that is in fact specifically precluded by federal law.
The proposed rule met instant condemnation in the media, from members of Congress, and from individual citizens during the rule’s public-comment period. Critics noted the IRS was planning to do the exact opposite of what the statute permits the agency to do.
The U.S. Department of...Yeah, Whatever
Congressional investigators found Treasury and IRS officials never took the law or these criticisms seriously:
The evidence gathered by the Committees indicates that neither IRS nor the Treasury Department conducted a serious or thorough analysis of the PPACA statute or the law’s legislative history with respect to the government’s authority to provide premium subsidies in exchanges established by the federal government. IRS and Treasury merely asserted that they possessed such authority without providing the Committees with evidence to indicate that they came to their conclusion through reasoned decision-making.
“On three separate occasions,” investigators wrote, “IRS and Treasury employees were unable to provide the Committees with detailed information about the factors they considered before determining that premium subsidies should be allowed in federal exchanges.”
Indeed, according to investigators, IRS and Treasury officials said they “did not consider the statutory language expressly precluding subsides in federal exchanges to be a significant issue” and “spent relatively little time on it.”
Here are a few of the things investigators were able to learn.
- IRS and Treasury officials produced just one paragraph of analysis on this issue prior to promulgating the proposed rule, and just one further paragraph of analysis between issuing the proposed rule and the final rule.
- A May 16, 2012, policy memorandum accompanying the final rule stated, “we carefully considered the language of the statute and the legislative history and concluded that the better interpretation of Congressional intent was that premium tax credits should be available to taxpayers on any type of Exchange.” Yet that memo raises more questions than it answers.
- The memo’s author was “Cameron Arterton, a Deputy Tax Legislative Counsel for Treasury hired in late 2011…to conduct a review of the legislative text and history surrounding the issue of whether tax credits should be available in federal exchanges.”
- Investigators wrote that Arterton “did not remember ever discussing the issue of whether the statute authorized premium subsidies in federal exchanges with other members of the working group" developing the rule. This raises the question: was Arterton tasked with discerning Congress’ intent, or merely finding support for the agencies’ decision to do the opposite of what the statute says? The available information suggests it may have been the latter.
- An email exchange from December 2011 shows Arterton counseled Treasury attorneys that “tension/conflict between two statutory provisions can create sufficient ambiguity” for courts to defer to an agency’s interpretation. When asked by investigators, Arterton could not identify which provisions of the PPACA created such ambiguity.
- An October 2012 letter from Treasury to congressional investigators claimed there was “no discernible pattern” in the statute that suggests Congress meant to restrict tax credits to state-established Exchanges. Yet Arterton along with a colleague who searched for certain terms in the statute "admitted…that neither of them made any attempt to categorize or organize the results of their search in any way to determine whether a pattern existed with PPACA.”
- Arterton likewise “told the Committees that she never produced a written review of any kind related to her search of the law’s legislative history.”
- Arterton told investigators her legislative-history search incorporated statements made by House members prior to the Senate approving the PPACA in December 2009. Such statements would not shed any light on the intent behind the PPACA. The House bill took a different approach to Exchanges and subsidies than the PPACA. Among other differences, it explicitly authorized subsidies through both state-established and federal Exchanges. Statements about the House's approach cannot constitute congressional intent, because that approach did not and could not pass Congress.
- McMahon confirmed at a congressional hearing that Treasury and IRS considered the House bills when trying to ascertain the intent behind the PPACA.
- At the same time Arterton was researching inapposite legislative history, she failed to consider a January 2010 letter from Rep. Lloyd Doggett (D-TX) and 10 other Texas Democrats that spoke to the issue at hand. The Texas Democrats' letter warned that the PPACA’s approach to Exchanges would allow states to block the hoped-for coverage expansion and that “millions of people will be left no better off than before Congress acted.”
- Interestingly, prior to joining the IRS, Arterton worked for Doggett at the Ways & Means Committee. So while Arterton was inappropriately researching House members’ thoughts about the House bill, she overlooked her former employer’s thoughts on the PPACA itself, which happen to conflict with the IRS's theory of what Congress intended and to confirm the plain meaning of the statute.
- The agencies admitted the legislative history of the PPACA does not support their interpretation. “Arterton told the Committees that the legislative history was inconclusive, echoing then-Deputy General Counsel for Treasury Chris Weideman’s statement…that IRS and Treasury concluded that there was a lack of evidence in PPACA’s legislative history to support its interpretation and that there was also a lack of evidence in the legislative history that contradicted their interpretation.”
- If Treasury and the IRS didn't find any evidence from the PPACA's legislative history that contradicted their interpretation, it can only be because they weren't looking very hard. The investigators report, for example, “the seven IRS and Treasury employees stated they did not consider the Senate’s preference for state exchanges during the development of the rule.” A preference for state-run Exchanges offers a good rationale for restricting tax credits to state-established Exchanges: the tax credits would serve as an inducement to states.
- And yet: “none of the seven IRS and Treasury employees interviewed by the Committees were aware of any internal discussion within IRS or Treasury, prior to the issuance of the final rule, that making tax credits conditional on state exchanges might be an incentive put in the law for states to create their own exchanges.” That's pretty remarkable. The very article that first brought this issue to the agencies’ attention – the one Emily McMahon saw in March 2011 – and countless articles and commenters since have all described the tax credits as an inducement to encourage states to establish Exchanges. Yet every official interviewed by the committees admitted they never considered that possibility.
- Investigators showed Treasury and IRS officials an article from early 2009, in which influential health-law professor Timothy Jost noted that because the Constitution does not permit Congress to commandeer states into establishing Exchanges, Congress might consider encouraging states to comply “by offering tax subsidies for insurance only in states that complied with federal requirements (as it has done with respect to tax subsidies for health savings accounts)” -- which the IRS also administers. Yet “none of the seven key employees from IRS and Treasury interviewed by the Committee had seen Timothy Jost’s January 2009 article prior to being shown it by Committee staff.”
- Emily McMahon, furthermore, “was unfamiliar with the term ‘commandeering problem’” and “none of the officials working on the rule could recall anyone raising the commandeering problem and its applicability to its rulemaking in this area.”
- None of the seven Treasury and IRS employees the Committees interviewed could recall whether they considered other incentives the PPACA creates for states to establish Exchanges (e.g., unlimited start-up grants or a costly new requirement on state Medicaid programs that only lifts once “an Exchange established by the State under section 1311 of the Patient Protection and Affordable Care Act is fully operational”). Nor could they recall discussing the highly relevant and undisputed fact that the PPACA also conditioned Medicaid funding on state cooperation.
- The officials were unaware that the PPACA contains language providing that U.S. territories that establish Exchanges shall be treated as a state, and offering small-business tax credits through “an Exchange,” and that these provisions show Congress knew how to use inclusive Exchange language when it desired.
- The officials “did not consider” a 2009 statement by the PPACA’s lead author, Finance Committee chairman Max Baucus (D-MT), in which he acknowledged the bill places conditions on tax credits, and that that is how the Finance Committee had jurisdiction to direct states to establish Exchanges, which would otherwise be outside its jurisdiction.
The IRS finalized its rule in May 2012. Employers and individuals who will be subject to illegal taxes under the IRS’s unauthorized entitlement program began filing legal challenges shortly thereafter.
The committees' report does not provide a complete picture of how Treasury, the IRS, and HHS conspired to create this new entitlement program. Treasury and the IRS have refused to show certain documents to congressional investigators. Even when they are willing to share documents, they allow investigators to review them only briefly, without taking notes.
Even so, the Committees’ report tells a story of government officials who decided they knew better than Congress how many taxpayer dollars they should spend. Who had a vision of health care reform that they were determined to enact, even though their vision did not and could not pass Congress. Who could not be bothered to obey the law. It should help put pressure on Treasury and the IRS to be more forthcoming about how they reached this decision. | https://www.forbes.com/sites/michaelcannon/2014/02/10/congressional-report-treasury-irs-hhs-conspired-to-create-an-unauthorized-half-trillion-dollar-entitlement/ |
What Is a Micrometer?
A micrometer is a measuring instrument that can make extraordinarily precise measurements. Most micrometers are designed to measure within one one-thousandth of an inch! That’s a close fit. Exact measurements like this are necessary when even the smallest of space between objects can cause problems or difficulties.
A micrometer, sometimes known as a micrometer screw gauge, is a device incorporating a calibrated screw widely used for accurate measurement of components in mechanical engineering and machining as well as most mechanical trades, along with other metrological instruments such as vernier, dial calipers, and digital calipers.
Micrometers are usually, but not always, in the form of calipers (opposing ends joined by a frame). The spindle is a very accurately machined screw and the object to be measured is placed between the spindle and the anvil.
The spindle is moved by turning the ratchet knob or thimble until the object to be measured is lightly touched by both the spindle and the anvil.
Micrometers are also used in telescopes or microscopes to measure the apparent diameter of celestial bodies or microscopic objects. The micrometer used with a telescope was invented about 1638 by William Gascoigne, an English astronomer.
Micrometer Symbol
A micrometer, also called a micron, is a metric unit of length equal to 0.001 mm, or about 0.000039 inch. Its symbol is μm. The micrometer is commonly used to express the thickness or diameter of microscopic objects, such as microorganisms and colloidal particles. Note that this unit of length shares its name with, but is distinct from, the measuring instrument described above.
Parts of Micrometer
A micrometer is composed of:
- Frame: The C-shaped body that holds the anvil and barrel in constant relation to each other. It is thick because it needs to minimize flexion, expansion, and contraction, which would distort the measurement. The frame is heavy and consequently has a high thermal mass, to prevent substantial heating up by the holding hand/fingers. It is often covered by insulating plastic plates which further reduce heat transference.
- Anvil: The shiny part that the spindle moves toward, and that the sample rests against.
- Sleeve, barrel, or stock: The stationary round component with the linear scale on it, sometimes with vernier markings. In some instruments the scale is marked on a tight-fitting but movable cylindrical sleeve fitting over the internal fixed barrel. This allows zeroing to be done by slightly altering the position of the sleeve.
- Lock nut, lock-ring, or thimble lock: The knurled component (or lever) that one can tighten to hold the spindle stationary, such as when momentarily holding a measurement.
- Screw: The heart of the micrometer, as explained under “Operating principles”. It is inside the barrel. This references the fact that the usual name for the device in German is Messschraube, literally “measuring screw”.
- Spindle: The shiny cylindrical component that the thimble causes to move toward the anvil.
- Thimble: The component that one’s thumb turns. Graduated markings.
- Ratchet stop: Device on end of handle that limits applied pressure by slipping at a calibrated torque.
Types of Micrometers
There are several types of micrometers that are designed to measure different types of objects or spaces. Most micrometers are available in sets to accommodate measurements of varying size.
1. Outside Micrometer
This type of micrometer is designed for measuring the outside of objects—the outside diameter (OD). They look and move much like a C-clamp, which opens and closes by turning an internal screw.
In a micrometer, the object you wish to measure is clamped between the anvil (the stationary end of the clamp) and the spindle (the moving part of the clamp). Once the object is secured in the clamp, you use the numbering system on the thimble (the handle portion) to find your measurement.
2. Inside Micrometer
While the outside micrometer is used for measuring the outer diameter of an object, the inside micrometer is used to measure the inside, or inside diameter (ID). These look more like a pen, but with a thimble in the middle that turns.
As the thimble turns, the micrometer expands like a curtain rod would. This then extends until each end of the tool is touching the inside of the pipe. When this happens, you use the numbering system on the thimble to find your measurement.
3. Depth Micrometers
While inside and outside micrometers are both used to measure the diameter of an object or hole, a depth micrometer is for measuring the depth of a hole, recess or slot. Depth micrometers have a base that aligns with the top of the recess that needs to be measured.
The thimble is on a shaft that sticks up from the base. As the thimble turns, a measurement rod comes down from the shaft. You continue to turn until the rod hits the bottom surface of the hole being measured. When this happens, you use the numbering system on the thimble to find your measurement.
When Would I Use a Micrometer?
You would use a micrometer when a very precise measurement is needed. There are several different designs, depending on what needs to be measured. This could be the size of a pipe, tool or object from the outside. This could be the inside width of a pipe, bearing or another hollow object. Or this could be the depth of a hole or recess.
These are the tools you will reach for when accuracy is the most important factor. This is frequently true for machines with moving parts. Parts that move in and out of each other, like a piston, for example, need to remain in a steady, straight line. If these parts have even the smallest bit of sway, they can begin to fail.
This is also true in other applications, such as the use of bearings. Other applications that require the most exact measurement are pipe fittings especially if the pipe will be moving gases with very small and light molecules, like helium. Micrometers are also the preferred tool when measuring the thickness of items like sheet metals.
How Do I Read a Micrometer?
It is important to check if the micrometer is English or metric before using it for measurements. Make sure you are using a tool that has the same unit of measure as whatever you are already working with.
Once the thimble has been rotated until the micrometer closes lightly on the object, the measurement can be taken. This requires adding together the numbers found on the sleeve (barrel) scale and the thimble scale, which gives you the accurate measure.
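For example, on a common inch-reading micrometer the sleeve is graduated in 0.025 in steps and the thimble in 0.001 in steps, so the reading is simply the sum of the two scales. The short sketch below illustrates that addition with hypothetical readings; check your own tool's documentation, since graduations vary by model and between inch and metric instruments.

```python
def micrometer_reading(sleeve_divisions, thimble_divisions,
                       sleeve_step=0.025, thimble_step=0.001):
    """Combine the sleeve (barrel) and thimble scales of a typical inch
    micrometer: sleeve lines are worth 0.025 in each and thimble lines
    0.001 in each. For a typical metric tool use 0.5 mm and 0.01 mm."""
    return sleeve_divisions * sleeve_step + thimble_divisions * thimble_step

# Hypothetical reading: 14 sleeve lines showing (0.350 in) plus 7 on the thimble
print(micrometer_reading(14, 7))   # 0.357 (inches)
```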
How to find the numbers you want will vary depending on the type and design of the micrometer. Instructions on how to read your micrometer will come from the manufacturer with your tool. | https://www.engineeringchoice.com/what-is-micrometer/ |
After working on this post for awhile, I realized I was attempting to do a layered post. This is how Donald Knuth manages to make classics like The TeXbook accessible to barely-above-luddites like myself all the way to advanced programmers. He does this by marking the more sophisticated bits with ‘dangerous bend’ icons—one for reasonably savvy users, and two for serious geeks. Glass beads, of course, are not typesetting programs, but I'm attempting all at once to a) show pretties b) explain the steps I take to design a new variation on a colorway and c) document the exact colors used, so that if someone five years down the road wants me to replicate these beads I can do so more quickly, or, more likely, after reading my old notes, realize that it's going to be impractical.
No doubt, if I were even one-tenth the programmer Knuth is I could figure out some way to put cute lil icons on the more technical parts of this post. As it happens, I think there's an even easier approach, that people automatically use—look at the pretty pix, and skip the text. As the wizard pointed out, I hardly need invite any readers I may have to do what comes naturally; this is rather an acknowledgement that, yes, parts of this post are screamingly boring, and no, I'm not taking them out: so much for amusing the average customer with personal stories. Ok, on to the post...
I have any number of early, and extremely ugly abstracts that I saved*, but eventually, after making enough ugly beads, they slowly morphed into pretty beads—at least in my opinion, and since they're my beads, I get to decide which is which. (*The fun part is that my judgement changes over time, which is why I saved these—I thought they were worthwhile. I still think this, mostly because they show how far I've come.)
One of the earliest successes was a color combo Page calls Monet Green, because it reminds her so forcibly of Monet's water lily paintings. It's basically a green bead with touches of pink and purple, even red, with black trailing but I didn't think it would be suitable.
This is an old image, but aside from the fact that I sincerely hope my proportions—both in terms of shape and color ratios—have improved, you can get a sense of my mental starting point.
It's always difficult to put into words these vague intuitions. I had a mental itch that had to be scratched, and it wasn't really till I started writing this post that I resolved that more clearly—so perhaps I'm making stuff up to explain my discomfort. With that caveat, my goals were the following:
(1) firstly to deal with obvious problems: switch from silver to gold and remove the black trailing;
(2) tighten the color scheme from almost complementary, or at least a triad (green/purple/red-pink) to one that reads as “green”.
(3) decrease the flatness/opacity to something with more transparency/ increase the bumpyness quotient.
So what about the firstly/obvious?
Well, silver dissolves (“burns off”) into glass very readily. I didn't want ugly earth-brown stains (politeness prevents me from substituting a considerably coarser term) on these—as I've mentioned before, dissolved silver can be absolutely gorgeous, but that's in subtle, tertiary color schemes, which this is not. Gold, on the other hand, stands up to heat reasonably well; I could expect it to remain on the surface (albeit in the ‘broken’ pattern) of the bead while heating it enough to partially case it in the frit I use to make the bead bumpy. (The thing to do when you need a “silver” that stands up to heating is to go with palladium or platinum—an experiment for another time...)
That same irregularly applied frit would “move” a thread-black stringer, making my trailing look erratic, like one of those sewing machine threads home-sewers despise when they get caught in the hems of their creations. Besides there'd be plenty else going on; the zing added wouldn't be needed.
Next: making a “green” bead:
I didn't tag these beads (I did think about it, for about 5 seconds—nah, too much work) but I think this photo is pretty close: that is, I tried some new ideas, decided they were icky, backtracked to the tried and true, then started swapping stuff out one thing at a time.
The basic manufacturing steps:
- make a chiming hollow bead out of transparent green glass
- apply TE enamels (3 layers)
- apply frits (3 layers) interleaved with gold leaf
Now I'm recreating some here, but just for kicks, let's pretend my experiments followed this sequence, which is relatively close to the truth:
1st try (at left) put apple green TE enamels on emerald green Effetre glass. This will remain the case throughout the sequence: didn't have to mess with that, thank goodness. Tried substituting cobalt blue enamel for the pink. Too contrasty. Probably used the yellow/yellow green frit that worked so well in the commemorative bead sequence, which in this case (and as it turns out, every other) barely showed. I can be just as stubborn and stupid as the next person, sometimes. Added large transparent blue-green frits. End result: a boring mess.
2nd try: Backtracked to the purple frit (which in fact is not my usual 254 based handmade, but some weird stuff the TE people were handing out, that I ground from coarse to medium-fine and then mixed in with the 254 and in any case was practically indistinguishable); still attempting to get that cobalt in there. The bead is pretty (which is as it should be, since it's a basically a Monet without the ruby clear frit and black trailing), but it doesn't ‘fit’ with my internal concept. Sigh.
3rd: At this point I had to bite the bullet: a single effort wasn't going to cut it. The base concept wasn't either, except with a lot of tweaking. Time to get on with it. Stripped out the purple frit and dark blue TE, and tried ‘bumping’ with the frit I used on the original bumpies, which, surprise, is not clear colorless but some sort of translucent white. (Still hoping to get away without having to do much work by copying an older design.) And sho ’nuff, that's what's on the opalino pink bumpies. Again, it has potential and I even tried it on an opalino base, but not what I'm looking for. Ok, now I was ready to admit that I really was going to—horrors!—make several samples before I'd have a workable bead.
4th: Back to the green, but with clear frit. Bor-ring. (We're now to the apex of our triangle.) At this point I was definitely fixed on the yellow green/green fine frit for the base, and it wasn't contrasty enough—at any rate, we have an undistinguished green bead. But at least I was treating the necessity of trying things a little more seriously. If I'd taken better notes, I'd be able to detail exactly which TEs I was playing with; I strongly suspect beads 3 and 4 focused on narrowing this range down.
Tweaking ratios, adding depth to the color:
5th: Round about here I was starting to get a handle on the TE accent colors but, frustratingly, they're not showing up (which was more or less my reaction when I made the bead: that is, I'd been playing with these “touches” and was starting to get hints of what would work, but still needed to increase the proportion, that is, bathe bigger swatches of the beads in the TE accent colors).
Seeing the light at the end of the tunnel with regard to how I wanted to the TEs to work, I probably had come to the conclusion that no matter how much I wanted it to show up or thought it ought to, that yellow/yellow-green frit simply wasn't going to. I could've gone to a larger size (my handmade frits basically come in two sizes, large and small, a result of the screen sizes in my set of sieves) but then it would've been difficult to smooth the gold leaf over them; and the beads my customer admired have the gold leaf applied after the fine opaque frit.
So I needed something a little more visually aggressive (but not too much more), so I tried the fine sky and turquoise blue frit. That worked. Bead is still boring, but it's starting to come together. One thing my explanations don't really make clear (because I waited too long to write the explanation and can't detail all the TE experiments in the earlier beads) is that I broke this process down into chunks, and aside from the detour (bead #2) of trying to do this all at once—a reaction to how icky the first bead came out, and I'm sorry to say, not at all atypical of my working methods—I more or less worked out most of the first layer (Thompson Enamels); then the next layer (opaque frits); then the big clear frits (but I pretty much stayed with the blue-green frits: recall that the underlying goal is to make a green bead that will co-ordinate with a blue bead, so bluish green frit always made a lot of sense); and then the clear frit, which again was pretty much the choice from the get-go since it defines what a bumpy abstract is, and finally the leaf (since it only really comes in two-three colors, its effect is easy to predict). In other words, I started with the factors that had the biggest impact (which in this case, but not always, correspond with the bottom-most layers) and moved to the ones with the smallest visual effect. And really, I was working only with 2 variables out of seven (TE accent colors and base frit color) which is how I managed to come up with a decent bead in so few tries:
- base bead color (transparent): emerald (vetrofond 028)
- base (rolled) thompson enamel color: kelly green 9330
- major accent (touches of) thompson enamel color: 9550, plus
- minor accents (smaller touches; pick any 2) of 9350, 9530, 9620*
- fine opaque frit: sky/turquoise blue (touches each side)
- leaf: gold (1 pc)
- coarse transparent frit: blue-green (roll; starting with leaf)
- texture frit fine clear colorless (roll aggressively)
*Hm. So did I do a pair of major accent touches, one on each side, plus 3 small touches, or 1 large and 2 small? At any rate, I tend to position touches in 3’s. (N.b. 9650 is too dark. Too bad...)
Originally I wanted a green bead, but I simply don't do monochrome well so I adjusted the thompson enamel to have touches of turquoise blue, and also substituted turquoise blue fine frit; these blue layers (which blend into each other very nicely if I do say so myself) are sandwiched between the green-base glass/base green thompson enamel and the big chunks of green transparent frit. The bead reads as green, but has enough blue in it not to be dull; the blue will also help to co-ordinate it with the blue-purple beads I need to make.
6th bead: by this point I was happy with my bead.
7th bead: discovered I had a whole lot more of this Vetrofond, a slightly less yellow, less intense transparent, for the base. Figured it'd look just as good or better, plus give me the chance to make sure bead #6 wasn't a fluke.
And after that, I went into production. One down, two to go.
So, it's really not hard, broken down into steps. Yes, you can try to copy this bead (and even sell it) because no, it's not gonna look like mine. Too many variables, and they'll be seducing you down little side-paths in no time... | https://rejiquar.com/rw/GlassBeads/2006abs_bump_grn |
I am honored to serve as Fort Bend ISD’s Superintendent of Schools. As the seventh largest school district in Texas, serving 74 campuses and 14 other sites, Fort Bend ISD is committed to our children, who represent the very future of our community. I am aware of what a tremendous responsibility it is to be in charge of their education. I am also aware of the tremendous trust you place in all of us here at Fort Bend ISD as we work to ensure they are future-ready. Everything we do at the district and classroom level is in pursuit of that goal.
We are dedicated to improving the district on a daily basis, and we will accomplish this by becoming more transparent, collaborative and more responsive to your needs. By transparent and collaborative I mean we want to engage in open two-way dialogue, where your ideas, suggestions and concerns easily reach me. We have created a section on our website called YourVoiceMatters, where you can quickly complete a simple form to reach me and the district’s leadership team.
We promise to carefully review every incoming note. I also promise that, over the summer and in the coming months, I will reach out to you in advance of any major decision so we can face our challenges together. While many of the issues before us cannot be decided by a simple majority opinion of all parents, your input is crucial in helping us establish our priorities. In addition to understanding how you feel about specific issues, I will benefit from knowing why you feel that way.
Every Fort Bend ISD employee is committed to becoming more responsive to the needs of every student and parent. If we fail to meet your standards, please alert me through YourVoiceMatters. I also hope you will take a moment to recognize those members of Fort Bend ISD who consistently deliver excellent service. By committing to excellence and working tirelessly to achieve it, by listening to every concern and suggestion you may have, we hope to earn your trust.
I invite you to join me in laying the foundation of a great future for every child in our district — by coming together as a community and a family, speaking to each other candidly and constructively, and consciously improving Fort Bend ISD every single day. | http://www.fortbendstar.com/fort-bend-isd-working-together-to-ensure-all-students-are-future-ready/ |
Why be your own worst enemy when you can be your own best friend? It can be that simple.
I used to constantly compare myself to others. This was not only exhausting, but it was extremely taxing and detrimental to my well being. When we compare ourselves to others, we set ourselves up for failure time and time again.
- Is there a voice in your mind that tries to make you feel like you are better than someone else?
- Do you at times feel the need to inflate your ego so you feel better about yourself?
- Do you ever put yourself down for not being as good as someone else?
We all have our own unique abilities and specialties, things we are inherently good at. Think of how ridiculous it would be for the bamboo to compare itself to the mighty oak tree, or the honeybee to the butterfly. Each creation of nature is its own unique being and brings its own particular beauty, purpose and charm to our sweet Mother Earth.
The same goes for us humans. It actually goes against nature and simply does not serve us to compare ourselves to someone else and our differences, for in each of us lie our own uniquely exquisite gifts, that special mixture of wonder, waiting to be discovered, acknowledged and adored. I have friends who are amazing at so many different things; if I were to compare myself to their personal abilities, in many areas, I would fall short, and then starts the detrimental cycle of comparison.
By choosing to step out of our own judgement into honoring and seeing everyone’s gifts as a blessing, we step out of comparison. It’s natural law that everyone has their own inherent talents. I dream that we all choose to make our passions our life’s work. With us all being so different, we can cover all the bases and this will lead to a much happier society with people doing what they love.
When we appreciate ourselves wholly for our unique assets, quirks and awesomeness, this allows for greater self-love and satisfaction, leaving us happier, more fulfilled and, in turn, more whole as a person. Isn't this what we’re all striving for, true happiness and fulfillment?
When we start to see our differences as multifaceted as a diamond, that’s when we start to really win. Opposing qualities highlight the uniqueness and the strengths and differences of one another. Each one serves the other and adds a certain brilliance to life, creating a glorious beauty that illuminates our talents far beyond the rigid thinking of the ego and its constant state of comparison.
Another thing to keep in mind is: don’t take yourself too seriously! When we see ourselves as perfect and whole just as we are, we learn to stay balanced through adversity and know a brighter day is coming. Then we naturally don’t compare ourselves to others. As nature runs in an ebb and flow cycle, we humans are just the same. We must learn to ride with this flow and be more accepting of our ups and downs so we can BE in the flow of life. When we accept ourselves and others just the way we are, without tripping into comparison, we see our perfection as individual facets of the whole. We are here to shine our light in our own unique way so that this World may be a place rich with culture and rare experiences brought to us by each particular person.
It is not about whether or not someone is better at something, more physically attractive or more accomplished than you are. That will always exist, and there will always be others who appear to be less than you in these ways too. It is about you looking inside and seeing if you are fulfilling your own potential to the best of your abilities.
The only person you should be better than is the person you were yesterday.
So next time you start to compare yourself to another, stop, take a deep breath and breathe in all of your own uniqueness and theirs. Have appreciation for your fellow humans; we’re all doing the best we can with what we know. Shine your light bright so you can be a beacon and lead others in doing so for themselves.
And remember to FLY (First Love Yourself) because when you love yourself, then you can truly enjoy life. | http://www.brookealexandra.tv/blog/comparison-is-not-our-companion/ |
“From my first life drawing session, I was hooked”, says Julie Hitchings, who, since that first fateful day, has been facilitating a life drawing group for 24 years. Hitchings’ commitment to her practice is readily apparent in her works, which speak of both a keenly observant eye garnered from years of training and an inventiveness which allows her to transform reality into a world of her own. Taking her cues from Francis Bacon and Vincent van Gogh, Hitchings does not merely transcribe what she sees but rather deconstructs it, allowing it to metamorphose into something more than merely an image.
Her works become snapshots of a narrative, conveying movement and emotion through layers of line and colour. She usually works in charcoal, oils and oil sticks, enabling her to create an atmosphere of mystery as each layer is enveloped in the next. Lines dip in and out of planes of colour, or intertwine with each other in hazy, half-seen gestures. Her figures defy any straightforward interpretation, as Julie is committed to relating a story without descending into the literal, and her audience is left to eke out their own meaning. | https://artedit.com.au/artist-profile/julie-hitchings/ |
KLIA Facility & Asset Management Sdn. Bhd. (847087-W)
Vision & Mission
Vision
- To be an innovative team of motivated professionals recognized for leadership, professionalism and excellence in Facilities Management industry
Mission
- To provide our community with a safe and sustainable built environment which supports and enhances an inspiring and conducive working environment
Objectives
Recognition
- To be recognized in the FM industry
- To offer our FM services to the potential Clients
- To participate in Government/Private tender exercise
Standardized Process
- To achieve efficient and effective results through standardized business processes
Communication
- To share accurate information
Employee
- To develop innovative and motivated employees
- Attend ‘on the job training’ to gain experience
Services
- Facilities Management Consultancy Services: Develop and establish a facility management plan and work processes associated with facilities maintenance, and take full control of the implementation.
- Integrated Facilities Management Services: Responsible for planning, execution, monitoring and control of all tasks associated with facilities maintenance by establishing a single point of contact.
- Building Facilities Condition Audit: Perform inspections on building services and installed systems to ascertain the level of operating and physical condition of the as-built facilities. | http://kliaholdings.com.my/klia-facilities/
The Chinese New Year, known as the Lunar New Year, is also called the Spring Festival and it is one of many Asian Lunar New Years that take place across the world. Lunar means that it follows the cycle of the moon -- this festival begins on the second new moon after the winter solstice, which appears between 21 January and 20 February, and ends on the full moon 15 days later, in time for the Lantern Festival.
Traditionally it's a time for family reunions and a time to clean the house to sweep out ill-fortune and welcome in prosperity. Red decorations (especially paper cuts), new red clothes and red envelopes full of money for children are some of the ways that people celebrate. Vancouver is home to many people with Chinese heritage and this means that it's a great city to enjoy the colorful celebrations.
The Lunar New Year and the Year of the Pig begins on Tuesday, February 5, 2019. Every year, Vancouver hosts multiple Chinese New Year Events, culminating in the annual Chinese New Year Parade through historic Chinatown, a cultural extravaganza that's one of the city's biggest and best annual parades.
Organized by the Chinese Benevolent Association of Vancouver since 1979, the parade has grown into a must-see event in Vancouver, attracting over 50,000 spectators and 3,000 performers, including the largest assembly of lion dance teams in Canada.
With more than 50 lions, diverse multicultural dance troupes, the Vancouver Police Department Motorcycle Drill Team, marching bands and more, the Chinese New Year Parade is a celebration not to be missed. Head there early to get a good spot to watch the parade - as it's February in Vancouver you'll need to wear layers and dress warm with a waterproof jacket and umbrella to stay dry!
The colorful Chinese New Year Parade in Vancouver takes place on Sunday, February 10, 2019, starting at 11 am. The 1.3-km long route starts at the Millennium Gate on Pender Street (between Shanghai Alley and Taylor Street), proceeds east along Pender Street, turns south onto Gore Street, turns west onto Keefer Street and then disperses on Keefer at Abbott.
Like any big, downtown event, driving and parking will probably be extremely difficult. Buses and SkyTrain are the best options. Since so many streets will be blocked off, check Translink for alternate bus routes, or take the SkyTrain to Stadium-Chinatown Station. From there it's an easy walk to the start of the parade in Chinatown.
After the parade, which takes around two and a half hours, festivities in Chinatown continue with lion dances and the family-friendly free Vancouver Chinatown Spring Festival-Cultural Fair at Sun Yat-Sen Plaza (50 East Pender Street).
Head to the nearby Dr Sun Yat-Sen Classical Chinese Garden for the Year of the Pig Temple Fair (10.30 am to 4 pm). Celebrating the 12th animal in the Chinese zodiac, the pig is a symbol of prosperity, with its chubby cheeks full of good fortune. This family-friendly event includes a wishing tree, ping pong with Frida & Frank, paper cutting pig designs, Dragon Boat BC demonstration, instrument discovery with VSO School of Music, Gong Fu Cha tastings and the 淑芳你好嘛 (Suk-Fong Nay Ho Mah) / Suk-Fong, How Are You? exhibition from artist Paul Wong.
Visit between 1.30 pm and 2 pm, and again between 3 pm and 3.30 pm, for performances by the Azalea Chinese Ensemble from the VSO School of Music. Learn more about Chinese zodiacs in English with Cantonese at 10.45 am and 12 pm, and in English with Mandarin at 11.15 am and 12.30 pm. From 2 pm until 3 pm you can enjoy a Lion Dance by Mah’s Athletic Association.
At the Temple Fair you'll also find the innovative company Edible Projects. They will be serving a slightly modernized version of tang yuan 湯圓 (a glutinous rice dessert), combining a traditional shape with contemporary flavors and textures.
End the day at the Chinese New Year Banquet at Floata Seafood Restaurant. Tickets to the banquet start at $38 and include both dinner and live entertainment, lion dances, greetings by the fortune god and a variety show with singing, cultural dances and more.
Find out more about Vancouver's other Chinese New Year celebrations and events in our handy guide to 2019's festivities. | https://www.tripsavvy.com/chinese-new-year-parade-in-vancouver-3371405 |
See below for a press release from Progress Virginia, on the three-year anniversary of the passage of Medicaid expansion in Virginia. And, of course, note that this was accomplished largely because of Democratic leadership, and largely in spite of Republican opposition/obstruction.
Celebrating Medicaid Expansion in Virginia
Richmond, Virginia—Today marks the three-year anniversary of the passage of Medicaid expansion in the Commonwealth. Virginia’s expansion of Medicaid has broadened health coverage to include an additional 494,000 people across the Commonwealth. This commitment to better healthcare and outcomes is a step toward the ultimate goal of ensuring that everyone in Virginia, regardless of income level, zip code, race, immigration status, or gender identity is able to see a doctor when they need to.
“Healthcare is a right, not a privilege reserved for only a few. Everyone should be able to see a doctor when they are sick, with no questions asked,” Ashleigh Crocker, Communications Director at Progress Virginia, said. “No one should have to choose between essential needs like food, clothes, shelter, and their healthcare. Thanks to Medicaid expansion, more people than ever have access to preventative and lifesaving care. However, there is still a huge disparity in healthcare, particularly for Black and Brown mothers and children, that is rooted in racism. While we’re proud of the work we did to expand Medicaid, there is still work to be done. ”
Background:
###
At Progress Virginia, we drive powerful, values-based narratives to uplift and amplify grassroots voices through innovative digital communications and earned media strategies. We build progressive power alongside marginalized communities to tear down systems of white supremacy, advocate for equitable policies, and ensure leaders reflect the communities they serve. | https://bluevirginia.us/2021/05/celebrating-medicaid-expansion-in-virginia-three-years-after-its-passage |
Few issues have prompted more negative responses than our coverage of the Grassley-Dorgan amendment. Sun Belt farmers have become accustomed to reading vitriolic criticism from politicians and editorial writers about “the top 5 percent of farmers receiving 49 percent of the payments.” What might surprise you, however, is the number of such e-mails from farmers and persons close to farming.
A fair number have originated in the Midwest where growers must be enjoying a different farm economy than in the Sun Belt, judging from their letters. But many are from producers and businessmen closer to home, taking us to task for “trying to help big farmers get bigger.”
That is a common theme in many — that higher payment limits will only exacerbate the trend to larger and larger farms in regions like the Mississippi Delta.
“When riding through the Delta, it is daunting to see the abandoned headquarters signifying the consolidation that occurred within the last 15 to 20 years, much of it resulting from purchases by ‘trust fund farmers,’” said one writer.
“Unfortunately, your magazine seems to be promoting this consolidation of farms by these trust fund babies without realizing that payment limitations help the owners of small to medium-size farms to compete on a balanced economic playing field,” he said.
As someone who grew up on a farm with a 13.6-acre cotton allotment, I can easily relate to the thinking expressed by the letter writer — except for the fact that his premise is faulty. The writer assumes that the operator of a 600- to 800-acre cotton farm stands a better chance of survival if his larger neighbors are prevented from receiving more farm payments than he does.
In fact, the smaller operator is more likely to have higher costs per acre because he can't spread them over a larger number of acres. That's not to imply that good managers of smaller farms can't survive; just that they face tougher odds.
But to blame all of the consolidation not only in the Delta but across the entire country on farm program payments ignores the economic realities of high-input farming.
Another theme is that a new farm bill isn't necessary now; that the reports of the demise of many farmers have been greatly exaggerated.
These supporters of the “Richard Lugar” school — named for the Indiana senator who loves to downplay the financial problems in agriculture — look at USDA's net farm income projections and ask how farmers could be facing any difficulty.
The answer goes back to the earlier line about the different economy in the Midwest. For all their complaints about big payments to cotton and rice farmers, Midwest growers, as a whole, are receiving a much higher percentage of farm payments than farmers in any other region of the country.
If you look at the payment rates in the assistance bill introduced by Sen. Pat Roberts on March 21 and multiply the corn rate by the average yield and number of base acres in the 11 Midwestern states, you come up with a total of $2.66 billion. That's more than half of the money allocated for all farm program payments for 2000. | https://www.farmprogress.com/not-all-farmers-oppose-grassley-amendment |
Is the study relevant?
Just because you’ve seen or been sent a reputable scientific paper, you shouldn’t assume the research actually supports the claim, even if someone has said it does. The research might well be totally irrelevant.
So first ask or check: what’s actually being tested in the study? Does it really relate to the claim?
The wider body of evidence
Even peer-reviewed research can be wrong. In fact, it often is. If a paper has been peer reviewed, you should also look out for what other scientists have to say about it, and see if you can find out whether it is part of a wider body of evidence pointing towards the same conclusions.
A single study of any kind doesn’t tell you much. And if the conclusions of a study go against the majority of research in an area, that’s a big warning sign that its conclusions may be wrong. That doesn’t necessarily mean they are wrong – but it is rare for a single piece of research to totally overturn an established body of evidence.
So always ask: how do the results fit with the wider body of evidence?
Study size
A common pitfall with studies of all kinds is that they are simply too small to make strong conclusions. Studies with a small sample size are more likely to get chance results.
There’s no magic number above which studies are big enough, but dramatic headlines are often made based on studies with only a handful of participants, and that should be taken as a red flag. If the conclusion of a small study goes against the conclusions of larger studies in the same area of research, it is likely to be wrong.
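
As a rough illustration of why small samples are prone to chance results, here is a hypothetical simulation (it is not drawn from any real study, and the sample sizes and "fair coin" rate are arbitrary assumptions). It repeatedly simulates a true 50/50 outcome and shows how far the observed proportion can stray from 50% when only ten participants are measured, compared with a thousand.

```python
import random

def observed_extremes(sample_size, true_rate=0.5, trials=10_000):
    """Simulate many 'studies' of a given size and return the lowest and highest observed rate."""
    rates = []
    for _ in range(trials):
        successes = sum(random.random() < true_rate for _ in range(sample_size))
        rates.append(successes / sample_size)
    return min(rates), max(rates)

random.seed(1)
for n in (10, 1000):
    low, high = observed_extremes(n)
    print(f"sample size {n}: observed rates ranged from {low:.0%} to {high:.0%}")

# Tiny samples can show results like 10% or 90% purely by chance when the truth is 50%;
# larger samples cluster much more tightly around the true value.
```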
What’s the dose?
Studies in test tubes and in animals looking at the effects of a chemical can use a very high dose compared to what’s likely in the real world.
A chemical can’t simply be classified as “dangerous” or “safe”: it always depends on the amount, or dose, received. The effects of a chemical will change with different levels of exposure, so that below a certain dose it may be harmless or beneficial and at a higher dose it may be toxic. For example, a little aspirin is good for us, whereas 50 tablets could kill you.
So if a claim is being made about a ‘dangerous’ chemical based on a study, ask: what was the dose? How does that compare to likely exposure in the real world? Was the study relevant to a real-world situation? | https://askforevidence.org/help/common-pitfalls-with-studies-and-things-to-look-out-for |
Search all condominium listings for sale on James Island, one of Charleston’s favorite communities. Conveniently situated close to downtown, and Folly Beach – James Island is the perfect community to enjoy both beach and night life. Charleston’s condos for sale vary in style and price as you can imagine, but most condos on James Island will be under $250,000 unless of course you want waterfront.
MLS Listings Data
- Total Listings: 13
- Average Price: $347,831
- Highest Listing Price: $1,080,000
- Average Days On Market: 31
- Average Price/Sqft: | http://jamesschiller.com/listings-mls/james-island-sc-homes-for-sale/condos
Distributed object systems are designed to increase the efficiency of computer program development by enabling object reuse and simplifying system maintenance through clear separation of functions. Each object in a distributed object system encapsulates the data for that object and the procedures or methods for operating on that data. Encapsulation means that the data for an object can be manipulated only by that object using the defined methods. These features of distributed object systems allow the objects to be reused and portable. Exemplary distributed object systems include: COM (Common Object Model), COM+, DCOM (Distributed Component Object Model) and CORBA (Common Object Request Broker Architecture).
One of the features of the distributed object system is a message service. A conventional message service system includes one or more publishers, subscribers and message servers. A publisher is a program (object or method) that makes calls that initiate sending messages that contain data, and a subscriber is another program (object or method) that receives the messages from a publisher. A subscriber indicates to (e.g., registers with) its message server that it wishes to receive messages from a publisher.
An exemplary conventional message service server that may be used is Message Queuing Services (MSMQ), developed by Microsoft. MSMQ implements asynchronous message service by enabling applications (e.g., data providers) to send messages to other applications (e.g., data receivers). While the messages are being forwarded from senders to receivers, MSMQ keeps the messages in queues. The MSMQ queues may protect messages from being lost in transit and provide a place for receivers to look for messages when they are ready. MSMQ is configured to support IPX (Internetwork Packet Exchange) and TCP/IP (Transmission Control Protocol/Internet Protocol) networking protocols. In distributed object system parlance, a publisher is a data provider (e.g., the method sending the message) and a subscriber is a data receiver (e.g., the method receiving the message).
The conventional distributed systems fall short when messages are to be exchanged between a large number of publishers and subscribers, because in such a case the conventional message service system is required to predefine the relation between the data providers and data receivers (e.g., certain types of messages are predefined to be received by certain subscribers). In particular, the conventional system may provide adequate message services when all the relations are predefined and do not change. However, the conventional message system fails when the relations are to be dynamic. For example, assume a subset of the data providers are to send messages to one subset of the data receivers under one condition while the same subset of the data providers are required to send messages to another subset of the data receivers under another condition. Under such a scenario, the connections between data providers and data receivers are required to be updated dynamically (e.g., as the conditions change and/or as the messages are created).
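
The dynamic-routing problem described above can be pictured with a short sketch. The broker below is a hypothetical illustration only (it is not MSMQ, COM/DCOM, or the system disclosed here), and the class, method, and topic names are invented for the example. It simply shows receivers registering for, and dropping, message types at runtime, so that the provider-to-receiver relations can change as conditions change rather than being predefined.

```python
from collections import defaultdict

class Broker:
    """Hypothetical in-process message broker that routes messages by topic."""

    def __init__(self):
        # topic -> list of subscriber callbacks; built up and changed at runtime
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A data receiver registers interest in a topic (can happen at any time).
        self._subscribers[topic].append(callback)

    def unsubscribe(self, topic, callback):
        # Relations are dynamic: receivers can drop out as conditions change.
        self._subscribers[topic].remove(callback)

    def publish(self, topic, payload):
        # A data provider sends a message; the broker fans it out to the current subscribers.
        for callback in list(self._subscribers[topic]):
            callback(payload)


# Usage sketch: the same publisher reaches different receiver sets over time.
broker = Broker()
logger = lambda msg: print("logger received:", msg)
alerter = lambda msg: print("alerter received:", msg)

broker.subscribe("sensor/temperature", logger)
broker.publish("sensor/temperature", {"value": 21.5})   # only the logger receives it

broker.subscribe("sensor/temperature", alerter)         # condition changed: add a receiver
broker.publish("sensor/temperature", {"value": 95.0})   # both now receive it
```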
Originally from the Campania region of Italy, linguini (or linguine) is a type of pasta that translates from Italian into English as “little tongues.”
As creepy as that may sound, linguini is actually a wide, flat, and thin-shaped noodle that can have the same length or thickness as spaghetti, but tends to be marginally wider than spaghetti, and not quite as wide as fettuccine.
Traditionally, spaghetti is paired with heavier sauces, such as meaty Bolognese Sauce or classic marinara sauces (and you can check out how to make your own Fresh Marinara Sauce while you’re at it).
Linguine, on the other hand, tends to pair with lighter sauces, and pestos. Don’t worry — if you’re wondering How to Make Pesto Without Cheese — It’s Easy!
Below, we’ve put together five excellent vegan linguini recipes from our Food Monster App, and then included a few different sauces and toppings recipes we know will pair marvelously with plain linguine below. Which dish are you most tempted to make?
1. Coconut Curry Linguini
This Coconut Curry Linguini by Jess Hoffman has the texture and creaminess of a fettuccini Alfredo, but with Thai ingredients and flavors like rice noodles, coconut milk, red curry paste, and lime. It’ll quickly become one of your favorite dishes to whip up last minute when you want something easy that packs big flavor.
2. Creamy Beet and Dill Linguini
This recipe for Creamy Beet and Dill Linguini by Rosie Newton and Dillon Sivyour tosses the linguini in a vibrant, creamy beet sauce and garnished with fresh dill and radishes. Because of its beautiful color, this pasta is great for a romantic evening or just a nourishing dinner for two.
3. Lemon and Walnut Linguini With Roasted Broccoli
This recipe for Lemon and Walnut Linguini With Roasted Broccoli by Sarah Gory is light but delicious. Roasting the garlic mellows and sweetens it, and the toasted breadcrumb-walnut mixture gives the whole dish a depth of flavor. As this is a deliberately simple pasta dish, use the best quality ingredients you can find – locally-grown garlic and juicy organic lemons. Rather than store-bought bread crumbs, save a couple of slices of sourdough from a loaf and make your own. Find some high-quality linguine, too!
4. Linguini With Greens and Chickpeas
This recipe for Linguine With Greens and Chickpeas by Rinku Bhattacharya makes a great weeknight pasta dish that is healthy, hearty, and crunchy (thanks to the almonds).
5. Creamy Thai Pesto Linguini
This Creamy Thai Pesto Linguine by Jess Hoffman is delicious comfort food at its best. This dish is made with chewy soft rice noodles, creamy thick coconut milk, and fragrant Thai basil pesto. It’s the perfect fusion of Thai and Italian.
6. Pistachio Pesto Pasta
This Pistachio Pesto Pasta by Christin McKamey showcases just how harmoniously pesto pairs with long thin noodles. Pistachios, spinach, and fresh basil pesto top this amazing gluten-free pasta dish. It’s so easy to make, so it’ll save time on busy weekdays!
7. Tofu “Shrimp” Scampi
Traditionally, linguini pairs extremely well with seafood and light sauces. This Tofu “Shrimp” Scampi by Rhea Parsons could easily be adapted and made with your favorite linguini.
If you’re looking for more, make sure to check out all these Italian Sauce Recipes — they won’t disappoint. Also be sure to read this guide on Pasta: How to Avoid 8 Common Cooking Mistakes.
Are you as obsessed with noodles as we are? Check out The Ultimate Guide to Vegan Noodles! for some awesome articles about noodles! We also highly recommend checking out the Food Monster app, a food app available for both Android and iPhone. With over 8,000 vegan recipes, 10+ recipes added daily, and plenty of cooking tips, you’re going to find something you love. Give it a try!
For more Animal, Earth, Life, Vegan Food, Health, and Recipe content published daily, subscribe to the One Green Planet Newsletter! Lastly, being publicly-funded gives us a greater chance to continue providing you with high-quality content. Please consider supporting us by donating! | https://www.onegreenplanet.org/vegan-food/5-yummy-vegan-recipes-using-linguine/ |
Studio TJOA is based out of Brooklyn, New York, working in a variety of materials and processes that range from analog to those that utilize digital technology. Their approach to various design challenges is guided by materials, techniques and tools, which results in some surprising solutions that are inventive and fresh. The influence of experimentation with textile structures, gleaned from knitting and crochet techniques, recurs throughout their portfolio of projects, reaffirming the complex relationship between textiles and architecture.
How would you describe your process from idea to execution?
When we approach an idea, we tend to front load the experiment with construction, feasibility, and materiality. Our designs and our process is heavily influenced by these factors and we try to let the physical environment, the constraints of the project, and the material help guide us through the development. Our method in regards to architectural projects is similar, however the approach tends to be more spatial than material, but ultimately the constructability and the manner of construction aids in the development and definition of the form and structure.
What are the primary materials used in this process?
What we enjoy about our studio is we do not have a typical go to material. We are often inspired by the materials that we find in other aspects of our design practice. Artwork like the Lilypad and the Acoustical Installation are created from a foam rod material commonly used in construction. The scale and feel of the material is a large part of the emotion that these pieces invoke. Material selection is part of how we approach the idea through experimentation because we rely on what ever material we are using per that project.
Why were you attracted to this material(s)?
We are attracted to using materials in unexpected ways. We gravitate towards materials that are typically industrial or considered to be more commodity. The idea of using often overlooked materials and expressing it in an artful way is our way of showing appreciation for the material. We enjoy looking for beauty in the mundane and exploring ways to express the wonder of that material.
After your initial step, how do you proceed from there?
Once we have created physical models and decided upon the material that we want to work with, we tend to move into a more technical realm. Depending on the project, that could mean research, computer modeling, analysis, or full scale models.
Visit Studio TJOA's profile on the Fiber Textiles Surface Design Registry
What is your first step in beginning a project with this process?
It all depends on the project itself, although it tends to start with an idea for exploration and experimentation. Experimentation is the heart of our studio. Though we believe in the need for top down rules, constraints, and evaluation, the actual process of development is a bottom up exploration through experimentation with techniques, materials, and physical models.
As you have developed this process of working, what were some of the initial hurdles you had to figure out?
Our emergent method is often very wide and boundless at the beginning. It is sometimes hard to fight the urge to set unnecessary and heavy handed constraints in the initial stages of design. We have realized that the best restrictions and governing rules are expressed to us during the interaction of the material and the techniques employed.
How do you know when you are done or finished?
We rarely ever feel like we are done with a project. It is very hard for us to stop continually changing and experimenting with our work. We try to do our best with each piece that we send out or show, but we almost always make changes to the next iteration of the item.
When you first shared this work with someone what were the initial responses?
We have been very happy with the response to our work and typically all inquiries are a result of interest from our website. While images do give a sense of our work, there is no substitute for seeing the real pieces in person. For example, the scale of the Lilypad is often underestimated in images, but when people see it for the first time, they are often surprised and delighted with the unexpected statement that it makes.
What is the one word you would use to describe your work?
Playful. | http://www.knotwe.com/studiotjoa |
This webinar will provide tax advisers with a thorough and practical guide to preparing Form 1042-S, Foreign Person's U.S. Source Income Subject to Withholding. The panel will discuss evaluating potential Form 1042-S withholding and filing responsibilities and detail the significant changes made to the form by the 2017 tax reform legislation. The webinar will also review common mistakes and offer ways to avoid or mitigate late-filing or non-filing penalties.
Outline
- When withholding and Forms 1042, 1042-S and 1042-T are required
- Payments to foreign persons subject to federal withholding tax
- Exemptions
- Reporting treaty positions
- Completing the Form 1042-S
- New IRS enforcement initiatives
- Noncompliance penalties
Benefits
The panel will examine these and other relevant topics:
- Which payments to related parties trigger withholding obligations--and which don't--under the IRC?
- Anticipating and avoiding common errors that tax professionals make when preparing Form 1042-S
- Understanding the new rules for substitute Form 1042-S
- Adapting to the changes in Form 1042-S reporting for FATCA
- Identifying triggers and issues that lead to audit or denial of claimed withholding credits
Faculty
C. Edward Kennedy, Jr., CPA, JD
Managing Director
C Edward Kennedy Jr
Mr. Kennedy has more than 36 years of experience dealing with a variety of international tax matters, specializing in...
Elis A. Prendergast
Senior Manager
KPMG
Mr. Prendergast is part of the firm's Information Reporting and Withholding practice. He has considerable...
Other Formats — Anytime, Anywhere
CPE On-Demand. See NASBA details. | https://www.straffordpub.com/products/form-1042-s-withholding-on-foreign-persons-u-s-income-2019-07-30
China’s State Taxation Administration on 14 October published “Administrative Measures for Non-resident Taxpayers Claiming Tax Treaty Benefits” [Announcement (2019) No. 35].
Announcement (2019) No. 35 will become effective from 1 January 2020, and will replace existing Announcement (2015) No. 60. The new regulation simplifies filing procedures for claiming tax treaty benefits by non-resident taxpayers. It also clarifies the responsibilities of non-resident taxpayers and their withholding agents.
Under Announcement (2019) No. 35, the existing “record-filing” mechanism will be replaced by the “documentation-retaining for inspection” mechanism.
Non-resident taxpayers must self-assess their eligibility to enjoy tax treaty benefits, file the reporting form (via their withholding agents, if any), and retain specified supporting documents for any post-filing inspection by tax authorities.
Announcement (2019) No. 35 also reduces the amount of information to be filled in the reporting form.
Non-resident taxpayers are required to provide information for only 17 items in the revised form, including the taxpayer’s name, contact information, the treaty article, and the amount of tax reduced or exempted.
The supporting documentation that must be retained by non-resident taxpayers under Announcement (2019) No. 35 is similar to the filing documentation specified under Announcement (2015) No. 60. Announcement (2019) No. 35 stipulates, however, that supporting documents that justify “beneficiary owner” status under the treaty article on dividends, interest, or royalties should also be retained.
MNEs should be aware that even though Announcement (2019) No. 35 simplifies the reporting form and reduces paperwork for claiming tax treaty benefits, due care and tax responsibility of non-resident taxpayers is not alleviated. Non-resident taxpayers are still responsible for the authenticity, accuracy, and legality of the reporting form and retained documents.
It should be noted that self-assessment under Announcement (2019) No. 35 can increase tax uncertainty for MNEs. For example, MNEs may find it challenging to ensure that the determination of “beneficial owner” is correctly self-assessed and the tax authorities will not disallow the tax treaty benefits claimed for dividends, interest, and royalties in a post-filing inspection.
Further, if eligibility for tax treaty benefits is denied in a subsequent administration process by tax authorities, non-resident taxpayers would be regarded as not fulfilling tax reporting responsibilities, and tax authorities can impose a late payment surcharge on underpaid tax. | https://taxjournal.eu/china-simplifies-procedure-for-claiming-tax-treaty-benefits-by-agnes-lo-lingnan-university-hong-kong-raymond-wong-associate-dean-city-university-of-hong-kong/ |
Filed 10/28/15 Marteney v. Union Carbide Corp. CA2/4
NOT TO BE PUBLISHED IN THE OFFICIAL REPORTS
California Rules of Court, rule 8.1115(a), prohibits courts and parties from citing or relying on opinions not certified for
publication or ordered published, except as specified by rule 8.1115(b). This opinion has not been certified for publication
or ordered published for purposes of rule 8.1115.
IN THE COURT OF APPEAL OF THE STATE OF CALIFORNIA
SECOND APPELLATE DISTRICT
DIVISION FOUR
B252711 c/w B253265
(Los Angeles County Super. Ct. No. BC489395)
MARIE MARTENEY,
Plaintiff and Respondent,
v.
UNION CARBIDE CORPORATION et al.,
Defendants and Appellants.
APPEAL from a judgment of the Superior Court of Los Angeles County,
John J. Kralik, Judge. Affirmed.
Mayer Brown and Michele Odorizzi and Polsinelli and David K. Schultz for
Defendant and Appellant Union Carbide Corporation.
Armstrong & Associates and William H. Armstrong for Defendant and
Appellant Elementis Chemicals Inc.
Weitz & Luxenberg, Benno Ashrafi, Cindy Saxey and Josiah Parker for
Plaintiff and Respondent Marie Marteney.
Marty and Marie Marteney asserted claims for negligence, strict liability,
and loss of consortium against appellants Union Carbide Corporation (UCC) and
Elementis Chemicals, Inc. (Elementis), alleging that asbestos they marketed
caused Marty Marteney’s mesothelioma. After the jury returned special verdicts
in the Marteneys’ favor on their claim for strict liability, appellants filed
unsuccessful motions for judgment notwithstanding the verdict, and a judgment
was entered awarding the Marteneys compensatory damages. Appellants
challenge the denial of their motions for judgment notwithstanding the verdict.
We reject their contentions, and affirm.
RELEVANT FACTUAL AND PROCEDURAL BACKGROUND
Beginning in or about 1963, UCC sold asbestos to various manufacturers,
some of which made joint compounds used in the construction of walls. Elementis
is the successor-in-interest of Harrisons & Crosfield (Pacific), Inc. and certain
related entities (HCP), which distributed UCC asbestos. In 1958, Marty Marteney
began working for an architectural firm as “job captain,” and became a project
architect. He also engaged in remodeling projects on his home, and worked as a
volunteer on remodeling projects involving churches. In the course of his
employment and other activities, he handled joint compounds. In April 2012, he
was diagnosed as suffering from mesothelioma, which is a cancer of the lung’s
lining.
On August 1, 2012, the Marteneys filed their complaint for negligence,
breach of warranties, strict liability, and loss of consortium against 21 defendants
involved in the manufacture and marketing of asbestos-containing products,
including joint compounds. The complaint alleged that Marty Marteney’s
mesothelioma resulted from his exposure to asbestos from the defendants’
products. The Marteneys sought compensatory and punitive damages.
Prior to trial, the Marteneys entered into settlements with several
defendants. As a result of the settlements and other dispositions, on June 17,
2013, at the commencement of jury selection, UCC and Elementis were the sole
remaining defendants in the action. At trial, the key issues concerned the extent to
which Marty Marteney was exposed to UCC asbestos through contact with three
brands of joint compound -- Gold Bond, Paco Quick Set, and Georgia Pacific --
and the extent, if any, to which Elementis distributed the UCC asbestos to which
he was so exposed.
The jury was instructed to return special verdicts regarding three theories of
liability -- namely, negligence, strict liability based on a design defect, and strict
liability based on a failure to warn -- and other issues. The jury returned special
verdicts in favor of the Marteneys solely on their claim for strict liability based on
a design defect. The jury also found that the Marteneys suffered non-economic
damages totaling $1,175,000, but rejected their request for punitive damages. The
jury allocated UCC a five percent share of comparative fault, and Elementis a
three percent share of comparative fault.
UCC filed a motion for judgment notwithstanding the verdict, contending,
inter alia, that the Marteneys had failed to show that exposure to UCC asbestos
was a substantial factor in the causation of Marty Marteney’s mesothelioma, under
the standard stated in Rutherford v. Owens-Illinois, Inc. (1997) 16 Cal.4th 953
(Rutherford). Elementis also submitted a motion for judgment notwithstanding
the verdict, asserting there was no evidence that the asbestos it distributed was
incorporated into any joint compound handled by Marty Marteney. After denying
the motions, on October 10, 2013, the trial court entered a judgment awarding the
Marteneys damages totaling $56,250 against UCC, and damages totaling $33,750
against Elementis. On December 30, 2013, the judgment was amended to reflect
an award of costs. UCC and Elementis noticed appeals from the judgments, which
were consolidated.1
DISCUSSION
Appellants present overlapping contentions regarding the denials of their
motions for judgment notwithstanding the verdict. UCC contends (1) that the
testimony from the Marteneys’ experts regarding the causation of Marty
Marteney’s mesothelioma did not satisfy the Rutherford standard, (2) that there is
insufficient evidence that Marty Marteney was exposed to its asbestos, (3) that the
jury’s special verdicts regarding the adequacy of UCC’s product warnings
shielded it from liability under a theory of strict liability based on a design defect,
and (4) that the “design defect” theory fails under O’Neil v. Crane Co. (2012) 53
Cal.4th 335 (O’Neil). In addition to joining in those contentions, Elementis
contends there is insufficient evidence that it distributed the asbestos to which
Marty Marteney may have been exposed. For the reasons discussed below, we
reject their contentions.
A. Standard of Review
As motions for judgment notwithstanding the verdict potentially conclude
litigation on a complaint, the rules governing them are “strict” (Fountain Valley
Chateau Blanc Homeowner’s Assn. v. Department of Veterans Affairs (1998) 67
1 During the pendency of this consolidated appeal, Marty Marteney died. For
purposes of the appeal, Marie Marteney has been designated his successor in interest.
Cal.App.4th 743, 750), and “[t]he trial court’s discretion in granting a motion for
judgment notwithstanding the verdict is severely limited” (Teitel v. First Los
Angeles Bank (1991) 231 Cal.App.3d 1593, 1603). Generally, “‘“[i]f the evidence
is conflicting or if several reasonable inferences may be drawn, the motion for
judgment notwithstanding the verdict should be denied. [Citations.] ‘A motion
for judgment notwithstanding the verdict of a jury may properly be granted only if
it appears from the evidence, viewed in the light most favorable to the party
securing the verdict, that there is no substantial evidence to support the verdict. If
there is any substantial evidence, or reasonable inferences to be drawn therefrom,
in support of the verdict, the motion should be denied.’ [Citation.]”’” (Id. at
p. 1603, quoting Clemmer v. Hartford Insurance Co. (1978) 22 Cal.3d 865, 877-
878 (Clemmer).) In reviewing the trial court’s ruling, we also examine the record
for substantial evidence to support the verdict. (OCM Principal Opportunities
Fund, L.P. v. CIBC World Markets Corp. (2007) 157 Cal.App.4th 835, 845.)
B. Causation
We begin by examining appellants’ contentions regarding the sufficiency of
the evidence to support the special verdicts regarding their role in the causation of
Marty Marteney’s mesothelioma. The jury found that he was exposed to UCC
asbestos from three brands of joint compound, that Elementis distributed that UCC
asbestos, that the “design” of the asbestos was a substantial factor in causing harm,
and that appellants were responsible for a non-zero share of comparable fault for
the Marteneys’ injuries. Appellants maintain there is insufficient evidence that
UCC asbestos was a substantial factor in the causation of Marty Marteney’s
mesothelioma. In addition, Elementis contends there is insufficient evidence that
its activities as a distributor of UCC asbestos support the imposition of strict
liability for Marty Marteney’s mesothelioma. As explained below, we disagree.
1. Governing Principles
In cases “presenting complicated and possibly esoteric medical causation
issues,” the plaintiff is obliged to establish “‘“a reasonable medical probability
based upon competent expert testimony that the defendant’s conduct contributed
to [the] plaintiff’s injury.”’” (Bockrath v. Aldrich Chemical Co. (1999) 21 Cal.4th
71, 79, quoting Rutherford, supra, 16 Cal.4th at p. 976, fn. 11.) As explained in
Rutherford, California applies the substantial factor test to so-called “cause in
fact” determinations. (Rutherford, supra, at p. 969.) “Under that standard, a cause
in fact is something that is a substantial factor in bringing about the injury.
[Citations.] The substantial factor standard generally produces the same results as
does the ‘but for’ rule of causation which states that a defendant’s conduct is a
cause of the injury if the injury would not have occurred ‘but for’ that conduct.
[Citations.] The substantial factor standard, however, has been embraced as a
clearer rule of causation -- one which subsumes the ‘but for’ test while reaching
beyond it to satisfactorily address other situations, such as those involving
independent or concurrent causes in fact. [Citations.]” (Id. at pp. 968-969.)
Although the term “substantial factor” has no authoritative definition, a force that
“plays only an ‘infinitesimal’ or ‘theoretical’ part in bringing about injury” is not a
substantial factor. (Id. at p. 969.)
Rutherford examined the relationship between the plaintiff’s burden of
proof and the substantial factor test in a specific context, namely, when the
asbestos alleged to have caused the plaintiff’s injuries potentially has multiple
sources. There, the wife and daughter of a deceased metal worker sued numerous
manufacturers and distributors of asbestos-laden products, alleging that the metal
worker’s exposure to their products caused his fatal lung cancer. (Rutherford,
supra, 16 Cal.4th at pp. 958-959.) Following the first phase of a bifurcated trial,
after a jury found that the decedent’s inhalation of asbestos fibers caused his
cancer, all but one manufacturer settled with the plaintiffs. (Id. at p. 960.) During
the second phase of trial, the jury heard testimony that the metal worker labored in
confined areas of ships containing the manufacturer’s asbestos-laden insulation.
(Id. at p. 961.) The parties also presented expert testimony regarding asbestos-
related cancers. (Ibid.) After receiving a burden-shifting instruction that the
manufacturer had the burden of showing that its product did not cause the
decedent’s cancer, the jury allocated the manufacturer a 1.2 percent share of
comparative fault. (Id. at pp. 961-962.) On appeal, the manufacturer challenged
the instruction. (Id. at pp. 962-963.)
Our Supreme Court concluded that the case fell outside the special
circumstances in which a burden-shifting instruction on causation is appropriate,
notwithstanding the “‘scientifically unknown details of carcinogenesis’” and the
impossibility of identifying the “‘specific fibers’” that caused an individual’s
cancer.2 (Rutherford, supra, 16 Cal.4th at p. 976.) The court determined that the
burden of proof remained on the plaintiff, subject to a specific quantum of proof.
(Id. at p. 969-982.) Under that quantum of proof, plaintiffs may establish
causation on the basis of expert testimony regarding the size of the “dose” or the
enhancement of risk attributable to exposure to asbestos from the defendant’s
products. (Id. at p. 976, fn. 11.)
2 As appellants do not suggest that the special circumstances are present here, they
have forfeited any contention that the burden of proving causation is properly imposed
upon respondents.
To “‘bridge th[e] gap in the humanly knowable,’” the court adopted the
following standard of proof: “In the context of a cause of action for asbestos-
related latent injuries, the plaintiff must first establish some threshold exposure to
the defendant’s defective asbestos-containing products,[] and must further
establish in reasonable medical probability that a particular exposure or series of
exposures was a ‘legal cause’ of his injury, i.e., a substantial factor in bringing
about the injury. In an asbestos-related cancer case, the plaintiff need not prove
that fibers from the defendant’s product were the ones, or among the ones, that
actually began the process of malignant cellular growth. Instead, the plaintiff may
meet the burden of proving that exposure to [the] defendant’s product was a
substantial factor causing the illness by showing that in reasonable medical
probability it was a substantial factor contributing to the plaintiff’s or decedent’s
risk of developing cancer.” (Rutherford, supra, 16 Cal.4th at pp. 976, 982, fn.
omitted, italics deleted.)
The court further held that juries should be so instructed. (Rutherford,
supra, 16 Cal.4th at p. 976.) Turning to the case before it, however, the court
found no prejudice from the instructional error. (Id. at pp. 983-985.)
2. Evidence at Trial
a. Marteneys’ Evidence
i. UCC and HCP
Beginning in the early 1960’s, UCC mined asbestos in King City,
California, and shipped it to product manufacturers. The asbestos was “a high
purity . . . chrysotile type,” and was marketed under the name, “Calidria.” UCC
marketed several grades of Calidria asbestos, including a grade known as “SG-
210” for use in joint compounds. Joint compounds are used to cover the joints
between dry wall and wall board construction materials, and include ready-mix
and dry powder products.
From the mid-1960’s to 1986, HCP distributed Calidria to the west coast of
the United States. UCC collaborated with HCP’s manager located in San
Francisco in distributing Calidria. Although UCC sometimes shipped Calidria
directly, HCP participated in the profits from UCC’s activities under an “exclusive
distribution agreement.”
In 1965, National Gypsum began making joint compounds -- marketed
under the name “Gold Bond” -- in a factory in Long Beach, California. National
Gypsum also made those products in plants located in Illinois, Maryland, and
Louisiana. The Long Beach plant distributed its joint compounds to the states on
the west coast of the United States, including California. In 1969, National
Gypsum began making Gold Bond products using formulas “built around” UCC’s
SG-210, which National Gypsum viewed as superior to its prior asbestos
ingredient. As of March 1970, UCC’s SG-210 was the sole asbestos incorporated
into the Gold Bond joint compounds made in Long Beach. Until the mid-1970’s,
the Long Beach plant relied on versions of the formulas adopted in and after 1969
in manufacturing Gold Bond products.
There was also evidence that during the pertinent period, Georgia Pacific
and Kelly-Moore used Calidria in their joint compounds.3 From late 1969 to mid-
1977, Georgia Pacific incorporated Calidria in some of its joint compounds, which
3 As explained below (see pt. B.3.b., post), the principal evidence concerning Marty
Marteney’s exposure to UCC asbestos relies on his contact with Gold Bond joint
compound, although he also encountered the Georgia Pacific and Paco Quick Set joint
compounds.
were manufactured in plants located in Texas, Illinois, Georgia, New York, and
Virginia. Only the Texas plant supplied joint compound products to California.
From 1963 to 1978, the Paco division of Kelly-Moore manufactured an
asbestos-containing joint compound sold as “Quick Set.” In addition, from 1968
to 1971, pursuant to an agreement, Kelly-Moore manufactured joint compound
products for Georgia Pacific in California, where Kelly-Moore had plants in San
Carlos and Ontario. In view of the agreement, Kelly-Moore made all Georgia
Pacific asbestos-containing joint compounds sold in California. After 1971, some
Georgia Pacific branches continued to sell Kelly-Moore products under the
Georgia Pacific label. The products that Kelly-Moore made for Georgia Pacific in
California were identical to its own product, and were distributed in California.
From 1971 to 1973 and for a 15-month period after August 1975, UCC supplied
Calidria to Kelly-Moore’s San Carlos plant.4
ii. Marty Marteney
Marty Marteney was born in 1931. At the age of nine, he began working
regularly in his father’s garage, where he replaced asbestos-containing brake
linings on trucks. He also helped his father renovate car dealerships by installing
asbestos sheets.
In 1956, after military service, Marteney moved to Los Angeles. From the
late 1950’s until 1971 or 1972, he worked for Levitt, an architectural firm.
Initially employed as a “job captain,” he was promoted to “project architect” after
two and a half years, and eventually became a certified architect.
4 In addition, appellants’ evidence showed that from 1968 to 1978, UCC supplied 8
percent of the asbestos fiber that Kelly-Moore used, most of which was shipped to its
California plants.
While employed by Levitt, Marteney worked “hands-on,” visiting job sites.
As a job captain, he spent 50 percent of his time in the field, and continued to
spend 20 percent of his time in the field after becoming a project architect. He
demonstrated how to mix construction materials, including joint compounds, and
participated in applying the joint compounds. He recalled using Gold Bond,
Georgia Pacific, and Paco Quick Set joint compounds, and was around other
workers who used them. The work sites were dusty and dirty, and he was
sometimes present when workers cleaned up after using joint compounds.
After leaving Levitt, Marteney secured employment with Ficus, another
architectural firm. Sometime after 1972, he spent time at the site of a large
hospital project, where workers used joint compounds. He recalled seeing bags
labeled “Gold Bond” and “Georgia Pacific.”
From 1965 to the mid-1970’s, Marteney also remodeled his home, and
volunteered to remodel many churches. In working on his home, he engaged in
drywall work, and used “big bags” of Gold Bond, as well as Paco Quick Set. He
also used Paco Quick Set in remodeling the churches.
iii. Expert Testimony
Dr. Allan Smith, an epidemiologist, testified that the inhalation of asbestos
dust is the major cause of mesothelioma. According to Smith, mesothelioma is a
“dose response disease,” that is, workers who have inhaled more asbestos or had a
higher dose face a higher risk of developing mesothelioma. He further testified
that chrysotile asbestos, the type of asbestos most used in the United States, causes
mesothelioma. Responding to hypothetical questions, Smith opined that if a
person with Marty Marteney’s personal history suffered from mesothelioma,
exposure to asbestos caused the disease. He further opined that each exposure to
asbestos would have contributed to the person’s overall risk of acquiring the
disease, stating that “every part of a causal dose that caused [the] cancer is
important.”
Dr. James Dahlgren, an expert in toxicology and occupational diseases,
testified that by 1960, medical science had confirmed that asbestos exposure
causes mesothelioma. Although all the main types of asbestos can cause
mesothelioma, exposure to chrysotile asbestos is the “overwhelming cause” of the
disease, as 95 to 99 percent of the asbestos used worldwide is of that type.
Generally, mesothelioma is subject to a “dose response curve.” Even very low
levels of exposure to asbestos -- including short term exposures -- greatly
increased the risk of mesothelioma. According to Dahlgren, workers exposed to
.05 “fiber years” of asbestos -- one-half of the OSHA limit set in the late 1970’s --
face a “statistically significant[] increase[]” in lung cancer and mesothelioma. He
stated: “[T]here’s no threshold, that is[,] no level below which there would be no
effect.”5
Responding to hypothetical questions, Dr. Dahlgren opined that exposure to
asbestos would have caused the mesothelioma suffered by a person with Marty
Marteney’s personal history. He further opined that if the person’s history
included one or two exposures to joint compound products containing UCC
asbestos, he would not exclude “those exposures as being causative for [the]
5 Dr. Dahlgren explained that a “fiber year[]” is a measure of the amount of asbestos
fibers to which a person is exposed. An exposure of .1 fiber years -- the OSHA standard
in the late 1970’s -- is equivalent to exposure to air containing .1 fibers per cubic
centimeter throughout an average working day for a one-year period. Dahlgren stated that
the OSHA standard reflected the fact that in the late 1970’s, available microscopes could
not detect airborne fiber concentrations of less than .1 fibers per cubic centimeter.
mesothelioma.” Dahlgren stated: “All those asbestos fibers . . . contributed to the
risk.”
b. UCC’s Evidence
William Dyson, an industrial hygienist, testified there is little data regarding
the risk of mesothelioma at very low levels of exposure to asbestos. He opined
that there was no increased risk from exposure to chrysotile from doses below the
range of 15 to 25 fiber years.6 Responding to hypothetical questions, Dyson
opined that if a person worked with a joint compound containing UCC asbestos on
ten two-hour occasions, that person’s level of exposure would be approximately
.02 fiber years, which Dyson characterized as “very, very low.”
In addition, UCC submitted evidence that aside from trial batches, no Paco
Quick Set joint compound was manufactured in California. According to that
evidence, Paco Quick Set was made in Kelly-Moore’s plants in Texas, although
UCC supplied some asbestos to those plants in the early 1970’s.
c. Elementis’s Evidence
Robert Mann, who testified as the person most knowledgeable regarding
HCP, denied that HCP received a commission or credit for UCC’s direct sales of
Calidria. He further stated that there were several grades of Calidria asbestos,
only one of which -- SG-210 -- was used in joint compounds, and that HCP
distributed SG-210 to joint compound manufacturers only from 1973 to 1977.
6 Although Dyson relied on a unit measurement of exposure he called a “fiber year
per cubic centimeter,” he noted that the unit is often called a “fiber year[],” and his
testimony establishes that he was relying on the unit measurement that Dr. Dahlgren also
used. For simplicity, we use the term “fiber year.”
3. Sufficiency of Evidence Regarding the Role of UCC’s Asbestos in
Causing Marteney’s Mesothelioma
We begin with UCC’s challenges to the special verdicts regarding the role
of UCC’s asbestos in causing Marty Marteney’s mesothelioma. As explained
above (see pt. B.2, ante), under Rutherford, at trial the Marteneys had the burden
of proof with respect to two facts. They were obliged to establish (1) that Marty
Marteney was exposed to UCC’s asbestos, and (2) that “in reasonable medical
probability,” his exposure was a substantial factor in bringing about his
mesothelioma. (Rutherford, supra, 16 Cal.4th at p. 982.) Regarding the second
fact, the Marteneys could carry their burden by showing “in reasonable medical
probability,” that the exposure “was a substantial factor contributing to
[Marteney’s] risk of developing cancer.” (Id. at pp. 982-983, italics deleted.)
UCC maintains the Marteneys failed to carry their burden regarding each
fact. UCC argues that Rutherford imposed substantive requirements on testimony
offered to show the second fact that the Marteneys’ experts failed to satisfy. UCC
further argues there is no evidence regarding the extent to which Marty Marteney
was exposed to UCC asbestos. As explained below, we reject UCC’s contentions
because the record -- including the expert testimony, viewed collectively -- was
sufficient to show that Marteney’s exposure to UCC asbestos “was a substantial
factor contributing to [his] risk of developing cancer.” (Id. at p. 982.)
a. Adequacy of Expert Testimony
UCC maintains that Rutherford imposed certain requirements on the
showing required of plaintiffs to establish the second fact. As noted above (see pt.
A.2., ante), in explaining the “substantial factor” test, the court stated: “Although
the term ‘substantial factor’ has no authoritative definition, a force that ‘plays only
an “infinitesimal” or “theoretical” part in bringing about injury’ is not a substantial
factor.” (Rutherford, supra, 16 Cal.4th at p. 969.) Furthermore, while discussing
the propriety of burden-shifting instructions on causation, the court suggested that
the length, frequency, and intensity of an individual’s exposure to an asbestos-
containing product may be relevant to showing the causation of cancer.7 UCC
argues that those remarks oblige plaintiffs seeking to carry their burden of proof
under Rutherford to “show, at a minimum, [that] exposure to the defendant’s
product was ‘sufficiently lengthy, intense, and frequent’ to warrant treating it as ‘a
substantial factor contributing to the risk of cancer.’”
UCC further contends the Marteneys’ experts provided no testimony
satisfying those requirements, arguing that the experts made only the “tautological
claim that any asbestos exposure . . . ‘contributes’ to the risk.” As noted above
(see pt. B.2.iii, ante), Dr. Smith opined that when a person’s exposure to asbestos
causes mesothelioma, “every part of a causal dose that caused [the] cancer is
important,” and Dr. Dahlgren stated that there is “no threshold” below which
exposures to asbestos have “no effect.” UCC maintains that under their testimony,
“any exposure to asbestos, however small, would always be sufficient to prove
medical causation,” and that nothing in their opinions “showed that the
7 In describing the scientific uncertainties attending the causation of cancer, the
court asked rhetorically: “Taking into account the length, frequency, proximity and
intensity of exposure, the peculiar properties of the individual product, any other potential
causes to which the disease could be attributed (e.g., other asbestos products, cigarette
smoking), and perhaps other factors affecting the assessment of comparative risk, should
inhalation of fibers from the particular product be deemed a ‘substantial factor’ in causing
the cancer?” (Rutherford, supra, 16 Cal.4th at p. 975.) Later, the court observed a
burden-shifting instruction on causation might be appropriate in special circumstances,
namely, “after the plaintiff had proven . . . [a] sufficiently lengthy, intense and frequent
exposure as to render the defendant’s product a substantial factor contributing to the risk
of cancer.” (Id. at p. 979.)
contribution of UCC’s asbestos to . . . Marteney’s risk of developing
mesothelioma was more than ‘negligible’ or ‘theoretical.’”
UCC’s contention fails, as it relies on a defective rationale. Our inquiry
concerns the existence of substantial evidence to support the judgment, not the
Marteneys’ burden of proof. The holding in Rutherford regarding the burden of
proof does not dictate that in reviewing the denial of UCC’s motion for judgment
notwithstanding the verdict, we must focus exclusively on the testimony from the
Marteneys’ experts to determine whether the Marteneys demonstrated the second
fact. Generally, the burden of proof is “the obligation of a party to establish by
evidence a requisite degree of belief concerning a fact in the mind of the trier of
fact or the court.” (Evid. Code, § 115.) However, although the burden of proof
imposes an obligation on a specific party, that obligation “is ‘satisfied when the
requisite evidence has been introduced . . . , and . . . it is of no consequence
whether the evidence was introduced by one party rather than the other[.]’”
(People v. Belton (1979) 23 Cal.3d 516, 524; quoting Morgan, Basic Problems of
State and Federal Evidence (Weinstein rev. ed. 1976) p. 14.) Accordingly, in
examining the record for substantial evidence, we may look at the entire record to
determine whether there was sufficient “‘competent expert testimony’” regarding
whether a particular exposure “was a substantial factor contributing to
[Marteney’s] risk of developing cancer.” (Rutherford, supra, 16 Cal.4th at pp.
977, fn. 11, 982-983, italics deleted.)
The record, viewed as a whole, discloses adequate expert testimony
regarding the length, intensity, and frequency of exposures to asbestos fibers from
joint compounds containing UCC asbestos to support a finding that Marty
Marteney’s exposures were a substantial factor contributing to the risk of his
cancer. Although the Marteneys’ experts agreed that even small exposures to
asbestos are potentially material to the causation of mesothelioma, Dr. Dahlgren
identified a specific level of exposure to asbestos -- namely, .05 fiber years --
associated with a “statistically significant[] increase[]” in lung cancer and
mesothelioma. UCC’s expert Dyson maintained that significant increments in risk
arise only at higher exposure levels, but also testified regarding the exposures
experienced by individuals working with joint compounds containing UCC
asbestos. He stated that working with dry mix joint compounds involved four
activities: mixing, applying the compound, sanding, and cleanup. The
concentrations of airborne fibers per cubic centimeter from those activities were,
respectively, 12.7, 0, 3.8, and 10.7. He further noted that although the “time-
weighted average” of the concentrations arising from the activities -- as they
would occur in the workplace -- is 2 fibers per cubic centimeter, the average
concentration increases to 6 fibers per cubic centimeter if one focuses on the
dust-producing activities.
Relying on those estimates, Dyson stated if a person worked with a joint
compound containing UCC asbestos on 10 two-hour occasions, that person’s level
of exposure would be approximately .02 fiber years, based on the time-weighted
average of 2 fibers per cubic centimeter for the four activities described above. He
further testified that the exposure level of an observer watching the activities
diminished as the observer’s distance from them increased: at 4 feet, the
observer’s exposure was 50 percent of the worker’s exposure, and at 10 feet, 10
percent of the worker’s exposure.
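The arithmetic implicit in Dyson’s estimate may be sketched as follows; the formula is a reconstruction from the testimony (using the 2,000-hour working year Dyson described in defining the fiber-year unit), not a calculation the witness himself set out:
\[
\text{exposure (fiber years)} \;=\; \frac{\text{concentration (fibers/cc)} \times \text{hours of exposure}}{2{,}000\ \text{hours}}, \qquad \frac{2 \times (10 \times 2)}{2{,}000} \;=\; 0.02\ \text{fiber years}.
\]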
Dyson’s testimony supports reasonable inferences regarding the encounters
with an asbestos-containing joint compound necessary for an exposure level of .05
fiber years, which Dr. Dahlgren described as presenting a statistically significant
risk of cancer. Under Dyson’s testimony, a person who worked with the joint
compound on 25 two-hour occasions -- that is, 50 hours -- would experience that
level of exposure, based on the time-weighted average concentration of airborne
fibers for all four activities (2 fibers per cubic centimeter). Furthermore, a person
engaged solely in the dust-creating activities would experience that level of
exposure in far less time, as the average concentration of airborne fibers arising
from those activities is three times greater than the time-weighted average for all
four activities, and the average concentrations of airborne fibers arising from the
dustiest activities -- mixing and cleanup -- are more than five times greater than
that average.
Dyson’s testimony thus supports the reasonable inference that a person
engaged in the dust-producing activities -- and thereby creating the average
concentration of airborne fibers arising from those activities (6 fibers per cubic
centimeter) -- would experience an exposure level of .05 fiber years in
approximately 17 hours (one-third of 50 hours).  His testimony also supports the
reasonable inference that an observer
standing within 10 feet of those activities would experience that exposure level in
less than 170 hours. Moreover, even shorter periods would result in that exposure
level if one focuses on the dustiest activities, namely, mixing and cleanup.
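The same reconstruction, applied to Dr. Dahlgren’s .05 fiber-year threshold and Dyson’s concentration figures, yields the periods stated above (the layout of the calculation is ours; the underlying numbers are the witnesses’):
\[
\frac{0.05 \times 2{,}000}{2} \;=\; 50\ \text{hours (all four activities, 2 fibers/cc)}, \qquad \frac{0.05 \times 2{,}000}{6} \;\approx\; 17\ \text{hours (dust-producing activities, 6 fibers/cc)},
\]
\[
\frac{0.05 \times 2{,}000}{0.6} \;\approx\; 167\ \text{hours (an observer at 10 feet, exposed to 10 percent of 6 fibers/cc)}.
\]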
Viewed collectively, the expert testimony supports the reasonable inference
that an exposure level of .05 fiber years would constitute “a substantial factor
contributing to [a person’s] risk of developing cancer” (Rutherford, supra, 16
Cal.4th at p. 982), as well as reasonable inferences regarding the length,
frequency, and intensity of encounters with joint compounds necessary to create
that level of exposure. Furthermore, the jury was free to make those inferences.
In cases requiring expert testimony to establish the causation of a disease, the jury
may reject even uncontradicted expert testimony, absent special circumstances
not present here. (Howard v. Owens Corning (1999) 72 Cal.App.4th 621, 632.)
Furthermore, as a general rule, the jury may in suitable circumstances accept a
portion of an expert’s testimony while rejecting other aspects of it. (See Liberty
Mut. Ins. Co. v. Industrial Acc. Com. (1948) 33 Cal.2d 89, 93-94; San Gabriel
Valley Water Co. v. City of Montebello (1978) 84 Cal.App.3d 757, 765.)  Thus, the
jury could properly credit Dyson’s testimony regarding the levels of asbestos
exposure from activities involving joint compounds, while rejecting his view
regarding the level at which such exposures presented a significant risk of cancer
in favor of Dr. Dahlgren’s. Accordingly, we reject UCC’s contention there is
insufficient expert testimony to satisfy Rutherford.8
b. Marteney’s Exposure to UCC Asbestos
UCC contends there is insufficient evidence regarding the extent to which
Marty Marteney was exposed to UCC asbestos. As explained below, we disagree.
The record supports the reasonable inference that from 1969 to the mid-
1970’s, UCC supplied SG-210 to National Gypsum’s Long Beach plant for use in
its joint compounds, including Gold Bond. Indeed, as of March 1970, UCC’s SG-
210 was the sole asbestos incorporated into the Gold Bond joint compounds made
in Long Beach. Generally, the joint compounds made at the Long Beach plant
were distributed within California and other west coast states. In addition, there
was evidence that Georgia Pacific arranged for Kelly-Moore to make joint
compounds for it in California because shipping costs rendered the products that
8 As there is sufficient evidence to satisfy the requirements that UCC asserts are
mandated in Rutherford, it is unnecessary for us to decide whether Rutherford, in fact,
imposes those standards.
Georgia Pacific manufactured in other states uncompetitive in California. The
jury thus reasonably could have inferred that the Gold Bond containing UCC
asbestos made in Long Beach from 1969 to the mid-1970’s was sold in Los
Angeles, where Marteney lived. In addition, the jury heard evidence suggesting
that during that period, some Georgia Pacific and Paco Quick Set joint compounds
containing UCC asbestos were made in California.
The record further establishes that from 1969 to 1975, Marteney
encountered Gold Bond and the other joint compounds at work and at home.
From 1969 to 1971 or 1972, he worked as a project architect for Levitt, and spent
20 percent of his time at work sites. According to Marteney, he was a “hands-on”
employee at the job sites. He demonstrated how to mix joint compounds,
participated in applying them, and was sometimes present during the clean up. He
worked with Gold Bond, Georgia Pacific, and Paco Quick Set, and was around
others who used them. The worksites themselves were dirty and dusty. After
1972, while working for Ficus, he was involved in a large hospital project, where
workers used joint compounds, including Gold Bond and Georgia Pacific. In
addition, from 1969 to the mid-1970’s, Marteney also remodeled his home, and
worked as a volunteer on a remodeling project involving a church. In working on
his home, he used “big bags” of Gold Bond, as well as Paco Quick Set.
In our view, the evidence is sufficient to show that Marteney’s contact with
Gold Bond containing UCC asbestos created an exposure level of .05 fiber years. 9
As explained above (see pt. B.3.a), a person engaged in dust-producing activities
with joint compounds -- such as mixing and cleaning -- would experience that
9 For that reason, it is unnecessary to decide whether the evidence regarding
Marteney’s contact with the Georgia Pacific or Paco Quick Set joint compounds is also
sufficient to support that conclusion.
exposure level in 17 hours or less, and a close observer of those activities would
experience that exposure level in 170 hours or less. According to Marteney, while
at Levitt, he spent 20 percent of his work week -- that is, approximately 400 hours
per year, based on a 40-hour work week for 50 weeks -- at job sites, where he
supervised workers using Gold Bond and participated in its use.10 Furthermore,
while at Ficus, he supervised workers using Gold Bond, and employed it in the
remodeling of his home. In view of this evidence, the jury could reasonably infer
that Marteney had encounters with Gold Bond sufficient for an exposure of .05
fiber years.11 (See Izell v. Union Carbide Corp. (2014) 231 Cal.App.4th 962, 973-
974 [under Rutherford standard, plaintiff adequately showed exposure to
defendant’s asbestos on basis of evidence that from mid- to late-1970’s, while
supervising workers, he frequently encountered dust from joint compound
10 According to Dyson, for purposes of the “fiber year” unit of measurement, a year
is 2000 hours, based on a 40-hour work week for 50 weeks.
11 Pointing to certain apparent conflicts in Marteney’s testimony, UCC maintains that
it is insufficient to support the special verdicts. We disagree. As our Supreme Court
explained, even internally inconsistent testimony from a single witness may support a
judgment. “It is for the trier of fact to consider internal inconsistencies in testimony, to
resolve them if this is possible, and to determine what weight should be given to such
testimony.” (Clemmer, supra, at p. 878.) Furthermore, “[t]he testimony of a single
witness is sufficient to uphold a judgment even if it is contradicted by other evidence,
inconsistent or false as to other portions. [Citations.]” (In re Frederick G. (1979) 96
Cal.App.3d 353, 366.) We reject the statements of a witness that the factfinder has
believed only if they are “‘inherently improbable,’” that is, “physically impossible or
obviously false without resorting to inference or deduction.” (Watson v. Department of
Rehabilitation (1989) 212 Cal.App.3d 1271, 1293; see Daly v. Wallace (1965) 234
Cal.App.2d 689, 692.) Here, Marteney’s testimony was neither physically impossible nor
obviously false on its face.
UCC suggests that during the trial, the Marteneys assumed that Marty Marteney
was exposed to joint compounds containing UCC asbestos only once or twice. That
contention fails, as the record discloses only that their counsel asserted in closing
arguments that one such exposure sufficed to establish causation.
incorporating defendant’s asbestos].) In sum, there is sufficient evidence that
UCC asbestos was a substantial factor in the causation of Marty Marteney’s
mesothelioma.12
4. Sufficiency of the Evidence Regarding Elementis’s Liability for
the Marteneys’ Injuries
Elementis challenges the sufficiency of the evidence to support the special
verdicts regarding its liability for the Marteneys’ injuries, arguing that
“[a]bsolutely no evidence supports an inference that HCP distributed some SG-
210 that became dust [Marty] Marteney inhaled.” For the reasons discussed
below, we reject Elementis’s contention.
12 For the first time on appeal, UCC’s reply brief argues that under Rutherford, the
record must contain sufficient evidence for the jury to estimate Marty Marteney’s “overall
exposure” to asbestos. As no such contention was raised in the opening brief, it has been
forfeited. (Horowitz v. Noble (1978) 79 Cal.App.3d 120, 138-139; 9 Witkin, Cal.
Procedure (5th ed. 2008) Appeal, § 701, pp. 769-771.)
In a supplemental letter brief, UCC also directs our attention to Shiffer v. CBS
Corp. (2015) 240 Cal.App.4th 246.  There, the plaintiff asserted products liability claims
against a turbine manufacturer, alleging that his contact with asbestos-containing
materials in a turbine made by the defendant caused his mesothelioma. (Ibid.) In
opposing the defendant’s motion for summary judgment on the claims, the plaintiff
submitted declarations from three experts, who opined that the plaintiff’s exposure to
asbestos during the turbine’s installation was significant, and constituted a substantial
contributing factor to the plaintiff’s aggregate dose of asbestos. (Id. at p. 250.)
Affirming the grant of summary judgment, the appellate court concluded that the experts’
opinions lacked a sufficient foundational basis, as the plaintiff had supplied the experts
with no evidence that he had any exposure to asbestos. (Id. at p. 256.) Here, in contrast,
the evidence regarding Marty Marteney’s exposure to UCC asbestos and the testimony of
appellants’ and respondent’s experts sufficed to show that UCC asbestos was a
substantial factor in increasing his risk of mesothelioma.
a. Governing Principles
At trial, the Marteneys maintained that Elementis was liable for their
injuries because it was UCC’s exclusive distributor of Calidria on the west coast
during the pertinent period, and pursuant to an agreement, Elementis received a
five or ten percent commission for a sale when UCC shipped the asbestos directly
to the customer. As explained in Bay Summit Community Assn. v. Shell Oil Co.
(1996) 51 Cal.App.4th 762, 773 (Bay Summit), the strict liability doctrine “extends
to nonmanufacturing parties, outside the vertical chain of distribution of a product,
which play an integral role in the ‘producing and marketing enterprise’ of a
defective product and profit from placing the product into the stream of
commerce.” There, the plaintiffs asserted products liability claims against the
manufacturers of a plastic plumbing system and a supplier of plastic resin, alleging
that the fittings in the plumbing system were defective. (Id. at pp. 767-769.) At
trial, the evidence showed that the supplier’s resin was used in the system’s plastic
pipes, but the plaintiffs submitted no evidence that the resin was used in the
defective fittings or that the resin itself was defective. (Ibid.) The plaintiffs’
theory at trial was that the supplier was strictly liable for the defective plumbing
system not as a resin supplier, but as a participant in the marketing and distribution
of the system. (Id. at p. 771.)
In affirming the judgment in favor of the plaintiffs, the appellate court
examined the principles under which entities may be subject to strict liability for
playing a role in the marketing of a product. (Bay Summit, supra, 51 Cal.App.4th
at p. 773.) Generally, the doctrine of strict liability is intended to ensure that
parties that play an integral role in the manufacture, marketing, and distribution of
a defective product bear the costs of injuries arising from the product. (Id. at
pp. 772-773.) Thus, liability is properly imposed on nonmanufacturers of a
defective product involved in the “vertical distribution” of the product. (Ibid.)
Furthermore, in suitable circumstances, liability may also be imposed on an entity
that is neither the product’s manufacturer nor within the product’s “vertical chain
of distribution . . . .”  (Id. at p. 773.)  In such cases, “the mere fact that an entity
‘promotes’ or ‘endorses’ or ‘advertises’ a product does not automatically render
that entity strictly liable for a defect in the product.” (Id. at pp. 775-776.) Rather,
“[t]he imposition of strict liability depends on whether the facts establish a
sufficient causative relationship or connection between the defendant and the
product so as to establish that the policies underlying the strict liability doctrine
are satisfied.” (Id. at p. 776.) Based on an examination of then-existing case
authority, the court concluded that a defendant involved in the
marketing/distribution process may be held strictly liable “if three factors are
present: (1) the defendant received a direct financial benefit from its activities and
from the sale of the product; (2) the defendant’s role was integral to the business
enterprise such that the defendant’s conduct was a necessary factor in bringing the
product to the initial consumer market; and (3) the defendant had control over, or a
substantial ability to influence, the manufacturing or distribution process.”  (Id. at
p. 776.)
Applying those principles to the case presented on appeal, the court
determined that there was sufficient evidence to support the imposition of strict
liability on the resin supplier. (Bay Summit, supra, 51 Cal.App.4th at p. 776.)
Aside from supplying the resin for the pipes, the supplier had provided marketing
assistance to pipe manufacturers, arranged for its employees to assist in the
advertising and sales of pipes made with its resin, and directly promoted the
plumbing system. (Id. at pp. 769-771.) The court thus concluded that the factors
described above were present. (Ibid.)
b. Evidence At Trial
Regarding Elementis’s role in the distribution of UCC asbestos, the
Marteneys relied primarily on deposition testimony from Robert Mann, who had
been designated to testify on behalf of Elementis. In the course of that deposition,
Mann recounted deposition testimony from Leon Persson, who had previously
been designated to testify on behalf of Elementis. Leon Persson was employed by
HCP and its successors from 1958 to 1991. He was a branch manager in San
Francisco, and became a regional vice president.
According to Mann’s deposition testimony, in prior depositions, Persson
provided the following account of HCP’s relationship with UCC: HCP distributed
UCC’s Calidria from 1968 to 1986. It sold only UCC’s Calidria, and it was the
sole distributor of Calidria on the west coast. Persson was unable to recall,
however, which grades of Calidria HCP distributed. Although Persson was
personally responsible for overseeing HCP’s distribution of Calidria, he worked
closely with UCC in distributing that asbestos. In “nearly 100 percent” of
customer contacts, he and a UCC representative made a joint visit. Although HCP
delivered Calidria to customers, UCC also delivered Calidria directly to some
customers. However, when a customer received Calidria directly from UCC, HCP
received a commission or share of the profit pursuant to an exclusive distribution
agreement that Persson had seen.13
In the deposition, Mann denied that HCP had an agreement with UCC of the
type described by Persson. He had seen no such agreement, and none had been
produced by Elementis. He acknowledged, however, that Steven Gripp, who had
13 The Marteneys also presented evidence that UCC directly shipped large orders of
asbestos to manufacturers on the west coast of the United States, and otherwise relied
exclusively on HCP to ship smaller quantities of asbestos.
been designated to testify on Elementis’s behalf on previous occasions, had stated
in 1998 that the agreement existed. He further acknowledged that Elementis later
“cull[ed]” its records, and following that event, Gripp stated that the agreement
could not be located.
Mann also testified at trial on behalf of Elementis. He stated that during his
career, he had encountered hundreds of distributor contracts, and never had seen
one of the type described by Persson. He also stated HCP distributed UCC’s SG-
210 to joint compound manufacturers only from 1973 to 1977.
c. Analysis
We conclude that the trial evidence, viewed in the light most favorable to
the Marteneys, establishes that liability was properly imposed on Elementis. As
explained above (see pt. B.3., ante), there was sufficient evidence that Marty
Marteney’s exposure to the Gold Bond made at the Long Beach plant, which
incorporated SG-210 from UCC, was a substantial factor in the causation of his
mesothelioma. The evidence further shows that during Marteney’s period of
exposure to that joint compound, HCP was the exclusive distributor of UCC’s
Calidria on the west coast. Under the agreement between HCP and UCC, HCP
received a commission for any Calidria that UCC supplied directly to a customer.
The evidence further showed that HCP and UCC worked closely in distributing
the asbestos, as their representatives met jointly with customers.
In our view, the record discloses evidence sufficient for the imposition of
liability under the principles set forth in Bay Summit. That evidence
unequivocally established that HCP was in the vertical chain of distribution
regarding Calidria. Furthermore, to the extent that UCC, rather than HCP, directly
shipped Calidria to customers, HCP is properly subject to liability for those
shipments, in view of the factors identified in Bay Summit. Although HCP did not
create the initial consumer market for asbestos-containing products, it derived
profits from UCC’s direct sales, worked jointly with UCC to sell Calidria, and had
sufficient influence with UCC to negotiate an unusually favorable distribution
agreement, namely, one containing the profit-sharing term noted above.
Elementis maintains there is insufficient evidence to support the imposition
of liability, placing special emphasis on the lack of evidence that it shipped any
UCC SG-210 to the Long Beach plant during Marteney’s relevant period of
exposure to Gold Bond, and the evidence questioning the existence of the
distribution agreement. In so arguing, however, Elementis “‘misapprehends our
role as an appellate court. Review for substantial evidence is not trial de novo.
[Citation.]’ [Citation.] When there is substantial evidence to support the jury’s
actual conclusion, ‘it is of no consequence that the [jury,] believing other
evidence, or drawing other reasonable inferences, might have reached a contrary
conclusion.’ [Citation.]” (Pfeifer v. John Crane, Inc. (2013) 220 Cal.App.4th
1270, 1301.) As explained above, there is sufficient evidence that the agreement
in question existed. In view of that agreement, Elementis was properly subject to
liability for the distribution of UCC’s SG-210 to National Gypsum’s Long Beach
plant, which made the Gold Bond that Marty Marteney encountered. In sum, the
record discloses evidence adequate to support the imposition of strict liability on
Elementis for the Marteneys’ injuries.
C. Warnings
Appellants contend the jury’s special verdicts regarding the Marteneys’
warning-related theories of liability shield them from liability under the
Marteneys’ “defective design” theory of strict liability. They argue that the latter
theory fails as a matter of law, in light of the jury’s special verdicts rejecting the
Marteneys’ claims insofar as they were predicated on theories of negligence and
“defective warning” strict liability. As explained below, we disagree.
1. Marteneys’ Claims and Jury’s Special Verdicts
The Marteneys submitted three theories of liability to the jury: strict liability
predicated on a design defect; strict liability predicated on a failure to warn; and
negligence predicated, inter alia, on a failure to warn. The “design defect” theory
of strict liability relied on the so-called “consumer expectations” test for defects.
Under that test, a product is defective in design if it “fail[s] to perform as safely as
an ordinary consumer would expect.” (Soule v. General Motors Corp. (1994) 8
Cal.4th 548, 562 (Soule).) In connection with the theory, the jury was instructed
that it could consider “the product as a whole, including its warnings.”
The jury was instructed that the “defective warning” theory of strict liability
required a determination that appellants had failed to provide adequate warnings
of potential risk that were scientifically known or knowable when the product was
distributed. In connection with such a theory, our Supreme Court has explained:
“Generally speaking, manufacturers have a duty to warn consumers about the
hazards inherent in their products. [Citation.] The requirement’s purpose is to
inform consumers about a product’s hazards and faults of which they are unaware,
so that they can refrain from using the product altogether or evade the danger by
careful use.” (Johnson v. American Standard, Inc. (2008) 43 Cal.4th 56, 64.) A
product that is otherwise flawless in its design and manufacture “‘may nonetheless
possess such risks to the user without a suitable warning that it becomes
“defective” simply by the absence of a warning.’” (Finn v. G. D. Searle & Co.
(1984) 35 Cal.3d 691, 699.)
The jury was instructed that the negligence theory relied in part on an
allegation that appellants failed to exercise reasonable care in providing warnings.
Under that theory, liability hinges on the reasonableness of the failure to warn,
rather than on whether, in fact, the defendant failed to issue warnings regarding
known or knowable hazards. (Carlin v. Superior Court (1996) 13 Cal.4th 1104,
1113 (Carlin).) “‘Thus, the fact that a manufacturer acted as a reasonably prudent
manufacturer in deciding not to warn, while perhaps absolving the manufacturer of
liability under the negligence theory, will not preclude liability under strict
liability principles if the trier of fact concludes that, based on the information
scientifically available to the manufacturer, the manufacturer’s failure to warn
rendered the product unsafe to its users.’” (Ibid., quoting Anderson v. Owens-
Corning Fiberglas Corp. (1991) 53 Cal.3d 987, 1003.)
The jury returned special verdicts that appellants were not negligent, and
that their product warnings adequately addressed the “potential risks that were
known or knowable risks in light of the scientific and medical knowledge that was
generally accepted in the scientific community at the time of sale or distribution.”
The jury nonetheless found that UCC asbestos was defective under the consumer
expectations test.
2. Analysis
Appellants contend the special verdicts regarding the adequacy of their
product warnings mandated a contrary finding under the consumer expectations
test.  As explained below, that
contention fails, as the special verdicts regarding the “failure to warn” theories did
not, as a matter of law, shield appellants from liability under a “defective design”
theory relying on the consumer expectations test.
Under “defective warning” theories, defendants may avoid liability by
showing that they acted reasonably in providing warnings (thus nullifying
negligence), and that their warnings adequately addressed all known or knowable
hazards (thus nullifying strict liability). Nonetheless, they may still be subject to
liability under the “design defect” theory because their product “fail[s] to perform
as safely as an ordinary consumer would expect.” (Soule, supra, 8 Cal.4th at
p. 562.) (See Carlin, supra, 13 Cal.4th at p. 1117 [“[U]nlike strict liability for
design defects, strict liability for failure to warn does not potentially subject drug
manufacturers to liability for flaws in their products that they have not, and could
not have, discovered. Drug manufacturers need only warn of risks that are
actually known or reasonably scientifically knowable.”]; Boeken v. Philip Morris,
Inc. (2005) 127 Cal.App.4th 1640, 1669 [“Product liability under a failure-to-warn
theory is a distinct cause of action from one under the consumer expectations
test.”].)
Nor did the trial evidence compel a finding that UCC’s asbestos was nondefective
under the consumer expectations test. As explained in Arena v. Owens-Corning
Fiberglas Corp. (1998) 63 Cal.App.4th 1178, 1185 (Arena), that test “applies in
‘cases in which the everyday experience of the product’s users permits a
conclusion that the product’s design violated minimum safety assumptions, and is
thus defective regardless of expert opinion about the merits of the design.’
[Citation.] A plaintiff may show the objective condition of the product, and the
fact finder may use its own ‘“sense of whether the product meets ordinary
expectations as to its safety under the circumstances presented by the evidence.”’
[Citation.]”
In Arena, the plaintiff asserted a “defective design” products liability claim
against a supplier of raw asbestos and a manufacturer of asbestos-containing
products, alleging that exposure to asbestos fibers from the products containing
the supplier’s asbestos caused his cancer. (Arena, supra, 63 Cal.App.4th at
p. 1183.) Although the appellate court reversed a judgment in favor of the
plaintiff for a redetermination of damages, it concluded that the consumer
expectations test was properly applied to establish a “design defect” theory of
strict liability against the supplier. (Id. at pp. 1186-1190.) The court stated: “To
the extent that the term ‘design’ merely means a preconceived plan, even raw
asbestos has a design, in that the miner’s subjective plan of blasting it out of the
ground, pounding and separating the fibers, and marketing them for various uses,
constitutes a design. . . .[] [W]hen that design violates minimum safety
assumptions, it is defective.  [Citation.]”  (Id. at pp. 1185-1186, 1188, fn. omitted.)
The court further noted certain principles restricting the imposition of liability on
suppliers of component parts and raw materials to manufacturers whose products
cause injury -- including the so-called “component parts” doctrine, which we
discuss below -- but determined that they were inapplicable, because the plaintiff’s
injuries arose from dust containing asbestos fibers which had not been altered in
the manufacturing process. (Id. at pp. 1186-1191.)
Although Arena did not address a “defective warning” claim, it establishes
the propriety of applying the consumer expectations test to the Marteneys’
“defective design” claims.  Under that test, we examine the ordinary expectations of
consumers regarding the safety of joint compounds during the pertinent period of
Marty Marteney’s exposure to those asbestos-containing products. At trial, the
evidence showed that as of 1968, appellants provided information describing the
risks of asbestos to joint compound manufacturers, but there was no evidence that
those warnings were passed on to users such as Marteney.14  The evidence
otherwise shows only that Marteney and the workers he oversaw at jobsites used
asbestos-containing joint compounds with no awareness of their hazards or the
need for precautions. In addition, John Walsh, who testified on behalf of UCC,
acknowledged that as late as 1978, “do-it-yourselfers” generally lacked knowledge
regarding the hazards of asbestos in joint compounds.
Relying on Groll v. Shell Oil Co. (1983) 148 Cal.App.3d 444 (Groll) and
Walker v. Stauffer Chemical Corp. (1971) 19 Cal.App.3d 669 (Walker), appellants
contend the consumer expectations test is inapplicable to UCC’s asbestos in view
of UCC’s warnings to appellants’ customers. In Groll, a fuel manufacturer sold
lantern fuel in bulk to a distributor, and provided the distributor warnings
regarding the fuel’s hazards. (Groll, supra, 148 Cal.App.3d at pp. 446-447.) In
turn, the distributor repackaged the fuel and marketed it to the public with similar
warnings. (Ibid.) The plaintiff asserted products liability claims against the fuel
manufacturer and the distributor predicated on negligence and a failure to warn,
alleging that he suffered injuries from an explosion when he used the fuel to light
14 The trial evidence showed that in 1964, UCC prepared an internal asbestos
toxicology report reflecting that exposure to asbestos had been associated with cancer,
including some cancerous lung tumors. In 1968, UCC created a brochure to inform joint
compound manufacturers regarding asbestos-related hazards, attached a warning label to
its products stating that “‘[b]reathing dust may be harmful,’” and provided a test report
linking asbestos to mesothelioma.  In 1972, after the federal Occupational Safety and
Health Administration (OSHA) imposed asbestos regulations, UCC forwarded
them to its customers; in addition, UCC described asbestos-related hazards -- including
the risk of mesothelioma -- in material safety data sheets accompanying its asbestos, and
gave other information regarding those hazards to its customers. The trial evidence
further showed that Elementis, as UCC’s distributor, “passed on” any information that
UCC provided. However, as of 1984, the bags in which UCC shipped Calidria did not
carry a warning identifying mesothelioma as an asbestos-related hazard.
his fireplace. (Ibid.) The appellate court affirmed a grant of nonsuit on the
plaintiff’s “defective warning” claims against the fuel manufacturer, stating that
“[s]ince [it] manufactured and sold [the fuel] in bulk, its responsibility must be
absolved at such time as it provides adequate warnings to the distributor who
subsequently packages, labels and markets the product.”  (Id. at pp. 449-450.)
Groll is distinguishable, as it confronted only “defective warning” claims,
and examined the propriety of imposing liability on a supplier that provided its
product with adequate warnings to an intermediary, which passed those warnings
along to the product’s end user. As explained above, under the consumer
expectations test, the key inquiry focuses on the expectations of the ultimate
consumer. The evidence in the record supports the reasonable inference that
appellants’ warnings had no effect on average joint compound consumers.15
Walker is also distinguishable, as it represents an application of the so-
called “component parts” doctrine. Under that doctrine, suppliers of component
parts or raw materials integrated into an “end product” are ordinarily not liable for
defects in the end product, provided that their own parts or material were
nondefective, and they did not exercise control over the end product. (Artiglio v.
General Electric Co. (1998) 61 Cal.App.4th 830, 838-840.)  In Walker, the
appellate court concluded only that a supplier of acid was not liable for injuries
from drain cleanser containing acid as a component, as the acid was substantially
15 Regarding the potential relevance of Groll, appellants purport to find support in
Garza v. Asbestos Corp., Ltd. (2008) 161 Cal.App.4th 651, 658-662, in which the
appellate court agreed with Arena regarding the application of the consumer expectations
test to “defective design” claims against suppliers of raw asbestos. In so concluding, the
court distinguished Groll on the grounds that in the case before it, the supplier of raw
asbestos gave no warnings to its customers. (Id. at pp. 661-662.) Garza thus provides no
guidance on the issue before us.
changed during the process of making the cleanser, over which the supplier had no
control. (Id. at p. 672.) That rationale is inapplicable here for the reasons
discussed in Arena, namely, Marty Marteney’s injuries arose from asbestos fibers
not materially altered by the manufacturing process. In sum, the jury’s special
verdicts regarding the adequacy of appellants’ warnings did not shield them from
liability under a “defective design” theory of strict liability. 16
D. Liability of Suppliers of Raw Materials
Appellants contend they are not subject to strict liability under a “design
defect” theory, arguing that in O’Neil, supra, 53 Cal.4th 335, our Supreme Court
adopted section 5 of the Restatement Third of Torts, including the doctrine set
forth in comment c. That comment addresses sand, gravel, and other materials
when they take the form of “basic raw material[s],” and sets forth limitations on
their suppliers’ liability for design and warning defects when they are integrated
into end products. The comment further states that such basic raw materials
“cannot” suffer from design defects. (Rest.3d Torts, Products Liability, § 5, com.
16 The remaining decisions upon which appellants rely are inapposite, as they merely
establish that the existence of direct warnings to the end user of a product may preclude
the imposition of strict liability on a manufacturer (Oakes v. E. I. Du Pont Nemours &
Co., Inc. (1969) 272 Cal.App.2d 645, 649), and are relevant to the expectations of end
users, for purposes of the consumer expectations test (Dinsio v. Occidental Chem. Corp.
(1998) 126 Ohio App.3d 292, 295-298 [710 N.E.2d 326, 329]; McCathern v. Toyota
Motor Corp. (1999) 160 Ore.App. 201, 228 [985 P.2d 804, 820]; Tillman v. R.J. Reynolds
Tobacco Co. (Ala. 2003) 871 So.2d 28, 34; Adkins v. GAF Corp. (6th Cir. 1991) 923 F.2d
1225, 1228; Graves v. Church & Dwight Co. Inc. (1993) 267 N.J.Super. 445, 467-468
[631 A.2d 1248, 1259-1260].) Here, there is no evidence that warnings accompanied the
joint compounds that Marty Marteney encountered.
c., p. 134.)17 Appellants argue that O’Neil must be regarded as having adopted
comment (c), and that its doctrine necessarily safeguards them from “design
defect” liability. We disagree.
O’Neil cannot reasonably be regarded as having adopted the doctrine in
comment (c).  There, the family of a deceased U.S. Navy seaman asserted claims
for negligence and strict liability against manufacturers of pumps and valves used
on warships, alleging that the serviceman’s exposure to asbestos dust from
asbestos-containing materials used in connection with the pumps and valves
caused his fatal mesothelioma. (O’Neil, supra, 53 Cal.4th at pp. 342-347.) The
court rejected the claims, concluding that “a product manufacturer may not be held
liable in strict liability or negligence for harm caused by another manufacturer’s
product unless the defendant’s own product contributed substantially to the harm,
or the defendant participated substantially in creating a harmful combined use of
the products.” (Id. at p. 342.)
In so concluding, the court discussed the component parts doctrine, which it
characterized as shielding a component part manufacturer from liability for
17 Comment c states: “Product components include raw materials. . . . Regarding the
seller’s exposure to liability for defective design, a basic raw material such as sand,
gravel, or kerosene cannot be defectively designed. Inappropriate decisions regarding the
use of such materials are not attributable to the supplier of the raw materials but rather to
the fabricator that puts them to improper use. The manufacturer of the integrated product
has a significant comparative advantage regarding selection of materials to be used.
Accordingly, raw-materials sellers are not subject to liability for harm caused by defective
design of the end-product. The same considerations apply to failure-to-warn claims
against sellers of raw materials. To impose a duty to warn would require the seller to
develop expertise regarding a multitude of different end-products and to investigate the
actual use of raw materials by manufacturers over whom the supplier has no control.
Courts uniformly refuse to impose such an onerous duty to warn.” (Rest.3d, Torts,
Products Liability, § 5, com. c., p. 134.)
injuries arising from a finished product that integrated the component “unless the
component itself was defective and caused harm.” (O’Neil, supra, 53 Cal.4th at
p. 355.) As support for that exception, the court pointed to subdivision (a) of
section 5 of the Restatement Third of Torts, which states: “One engaged in the
business of selling or otherwise distributing product components who sells or
distributes a component is subject to liability for harm to persons or property
caused by a product into which the component is integrated if: [¶] (a) the
component is defective in itself, . . . and the defect causes the harm . . . .” (O’Neil,
supra, at p. 355.) O’Neil otherwise contains no reference to comment (c), and
does not discuss the doctrine stated in it.
Nothing in O’Neil supports the reasonable inference that the court adopted
the entirety of section 5 of the Restatement Third of Torts, including the doctrine
stated in comment (c). The court’s acceptance of a portion of that section did not,
by itself, carry a commitment to the entire section.  (See Cronin v. J.B.E. Olson
Corp. (1972) 8 Cal.3d 121, 130-135 [rejecting portion of section 402A of the
Restatement Second of Torts while approving other portions of that section].)
Furthermore, as the court did not discuss the doctrine set forth in comment (c), it
cannot be viewed as having accepted it. (Ginns v. Savage (1964) 61 Cal.2d 520,
524 [“Language used in any opinion is . . . to be understood in the light of the facts
and the issue then before the court, and an opinion is not authority for a
proposition not therein considered.”].)
Furthermore, we conclude that the doctrine in comment (c) is inapplicable
to appellants. As explained in Arena, the doctrine does not encompass raw
asbestos: “‘[A]sbestos is not a component material that is usually innocuous, such
as sand, gravel, nuts or screws. . . . [I]t is the asbestos itself that produces the
harmful dust.’” (Arena, supra, 63 Cal.App.4th at p. 1191.) Accordingly,
appellants are properly subject to liability under a “defective design” theory of
strict liability.
///
///
///
DISPOSITION
The judgment is affirmed. Respondent is awarded her costs on appeal.
NOT TO BE PUBLISHED IN THE OFFICIAL REPORTS
MANELLA, J.
We concur:
EPSTEIN, P. J.
COLLINS, J.
Burning waste for energy undermines Europe’s recycling efforts by diverting waste to incinerators instead of having it reused or recycled, thereby defeating the purpose of the Commission’s well-meant directive on minimising waste.
“Closing the loop”
In 2015, the Commission launched an initiative on the circular economy, titled “Closing the loop”. The objective was to minimise waste and extend the value of products and resources for as long as possible.
A waste hierarchy was devised, in which reducing, reusing and recycling waste sit at the top – and incineration is just above landfill. In other words, nothing that could be recycled or composted should be burnt.
This tallies with the EU’s recycling target, agreed in 2008, to recycle 50% of all municipal waste by 2020.
But waste can also be burnt to create heat or electricity, a process known as “waste to energy” (WtE). And as the organic fraction of waste is considered a renewable resource, it is eligible for state subsidies under the EU’s current renewable energy scheme. Countries are trying to make the most of the subsidies, often bending the rules.
Waste shortages
Subsidies encouraged investment into incineration plants. According to a 2017 report by the European Environment Agency, member states including Sweden, Denmark and Estonia have reached incineration overcapacity – in other words, they have a waste shortage.
In an ideal scenario where 65% of waste is recycled, France, Germany, Austria and the Benelux countries also produce less waste than their incinerators require.
Effectively, this over-capacity puts a cap on recycling: a recent study shows that with the incinerators existing in 2011, the UK could have recycled 77% of its waste.
But building new capacity led to a paradoxical situation: if all plants are to be used, by 2030 recycling will be capped at 63%, simply because there is not enough waste.
This is not a hypothetical scenario. The trade-off between recycling and burning waste is already happening in some countries, Eurostat data shows. The Commission called on member states earlier this year to phase out subsidies to WtE to avoid subverting recycling targets and the waste hierarchy.
But despite a decline in WtE across the bloc (-3.2% in 2015 compared to the previous year), a number of eastern countries (including Slovenia, Bulgaria, Hungary, Lithuania, Poland, Estonia and Slovakia), as well as the UK, Austria and Sweden, are burning waste at increasing rates.
“If the incorrect application of RED distorts the waste economy and hierarchy, away from recycling and towards incineration, we will get to 2020 and still be far from EU recycling targets of 50%. The fear is that member states will review their recycling ambitions downwards,” Enzo Favoino, scientific director of campaign group Zero Waste Europe, told EURACTIV.com.
What is more worrying, in some cases the recycling trend has reversed – as in Bulgaria (-15.4% in 2015 compared to 2014) and Estonia (-6.4%) – or stagnated, as in Sweden and the UK. All four countries have, over the same period, increased the rate at which they burn waste.
Waste trade
Wherever there is a demand, there will be a supply. This is how waste became a commodity: imports of municipal mixed waste grew fivefold after the introduction of waste-to-energy subsidies.
Because waste has become a traded good, it cannot face barriers to trade. The market decides the price and waste flocks to countries with relatively larger disposal capacity, where it is burnt in subsidised plants.
Unsurprisingly, the biggest importers of waste are Germany, Sweden, the Netherlands, Estonia and Belgium – all countries with a high incineration capacity according to the European Environment Agency.
Waste exporters also include net importers Germany and Austria, defeating the EU’s “proximity principle” (Article 16 of the Waste Framework Directive), according to which waste should be disposed of as close as possible to where it is produced.
Cheating on bin contents
Under the EU’s directive on renewable energy, only a certain part of mixed waste is considered eligible for “renewable” electricity subsidies, and that is biomass. This is the kitchen and garden waste that ends in the mixed waste bin (currently, there are no compulsory EU rules on collecting organic waste although some member states have implemented their own initiatives).
It is up to countries to decide what percentage of their mixed waste is made of biomass, but there is no uniform rule on how to do this. Countries like the Netherlands measure the average bin content and publish every year an official waste percentage eligible for state subsidies.
Most countries arbitrarily set this at 50% (France, Italy, the UK). Others, like Estonia, do not disclose their subsidised percentage of organic waste because, they say, it is a “trade secret”.
NGO European Compost Network estimates this percentage is closer to 40%, although it depends on the season and geography.
Nonetheless, according to calculations by NGO Zero Waste Europe, a number of incinerators across Europe receive subsidies for all the waste they burn, and not just the renewable component – this includes plastics, paper and cardboard that could be recycled (and which have a greater negative impact on the environment and health).
The Zabalgarbi incinerator in Bilbao, Spain, is cheating with RED subsidies, according to Gorka Bueno Mendieta, professor of engineering at the Basque Country University.
“Although less than 20% of the electricity generated is of renewable origin […] every megawatt generated in Zabalgarbi is rewarded with feed-in tariffs, as if that electricity would all come from waste,” Bueno Mendieta wrote in an email.
“This situation has to be known in the European Union, and denounced.”
Positions
Zero Waste Europe:
To date, the Renewable Energy Directive (RED) has been one of the key obstacles to the achievement of the progressive goals of EU waste legislation. This is due to the financial incentives provided to energy generated from waste, which disincentivises other, more environmentally sound options that also save more energy.
The Commission’s new proposal for a revised renewable energy directive (RED II) continues to consider the organic fraction of municipal solid waste as a source of renewable energy. This is distorting the waste market by making it comparatively cheaper to recover energy from waste than to prevent or recycle it, effectively contradicting the waste hierarchy and hindering the transition towards more sustainable waste management systems and a circular economy.
Zero Waste Europe calls on the European Parliament and Council to improve the proposed legislation by explicitly excluding the biodegradable fraction of municipal waste as eligible for renewable energy premiums.
Confederation of European Waste to Energy Plants:
CEWEP supports source separation of waste (including biowaste) as this is a prerequisite to make quality recycling possible. The Waste Hierarchy (prevention, preparing for re-use, recycling, recovery, disposal) established under the Waste Framework Directive is to be respected. However, despite all efforts of source separation, there will always remain some polluted biodegradable part of the residual fraction of industrial, commercial and municipal waste (e.g. dirty cardboards, multilayer packaging etc.), which is not suitable for quality recycling or composting. This waste should be treated in an environmentally sound way and at the same time used to produce energy (heat, electricity, steam). Energy generation from the residual waste is a reasonable energy efficient option, replacing energy produced from fossil fuels. The only alternative treatment for this waste would be landfilling, the least desirable option in the European Waste Hierarchy and with regard to climate protection. | https://www.euractiv.com/section/circular-economy/news/waste-subsidies-make-it-cheaper-to-burn-than-recycle/ |
RELATED APPLICATIONS
This patent application claims priority to Indian patent application serial number 1524/CHE/2007, having title “Memory allocation for crash dump”, filed on 16 Jul. 2007 in India (IN), commonly assigned herewith, and hereby incorporated by reference.
BACKGROUND OF THE INVENTION
Most operating systems have a procedure for saving the contents of the physical memory to non-volatile storage at the time of a crash to make it easier for engineers to later identify the cause of the crash. This procedure is normally carried out by a subsystem in the kernel. When the kernel crashes, for example as a result of a fatal error in one of the programs it is running, the crash dump routine reads the content of the physical memory and writes it to a stable storage like a dedicated internal disk or over a network to an external device. Engineers can later run kernel debuggers on the stored data, the dump.
Memory is required to run the crash dump. Conventionally, an area of the memory is allocated at boot-up of the system for this purpose. However, when the entire memory for running the crash is allocated at boot-up, the allocated memory cannot be used for other purposes during run-time. This results in an inefficient use of the memory resources available. The problem is made even worse in operating systems using more advanced crash dump routines which make use of compression, parallel input/output and dump via network because they have higher memory requirements and, therefore, even more memory is unavailable for other purposes during run-time. Additionally, the amount of memory required to perform the crash dump is, in some circumstances, easier to identify at the time of the crash than at boot-up.
There have been attempts to overcome the above problems. For example, the HP-UX™ operating system allocates the memory for performing the dump at the time of the crash and not at run-time. The technique used by this operating system to find memory to use at the time of crash involves keeping track of what type of data each page in memory stores. The kernel classifies each page in physical memory according to its usage and at crash time it can easily identify those pages that belong to a class of pages which is safe to reuse. A problem with this technique is that the records for keeping track of the class to which each page belongs take up a large portion of the available memory.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 is a schematic diagram of a processing system 1, such as a server or a workstation. It comprises a processor 2, a main memory 3, for example in the form of dynamic RAM, a hard disk 4 and an additional non-volatile disk 5, interconnected by a bus 6. The non-volatile disk 5 may be used for saving the physical data at the time of a crash. The system typically also includes a variety of other input/output (I/O) subsystems 7 required for operation of the system, as would be apparent to a person skilled in the art. The processor comprises a central processing unit CPU 8, an internal cache memory 9, a translation lookaside buffer TLB 10 and a bus interface module 11 for interfacing with the central bus 6.
It should be understood that FIG. 1 is exemplary only and the invention is not limited to systems with only a single processor. The invention may also be implemented in systems with a plurality of processors. Moreover, the non-volatile disk 5 may be connected to the I/O subsystem 7 instead of the central bus 6. The non-volatile disk 5 may, for example, be accessed over a network and located at a remote location.
FIG. 2 is a high-level overview of a computer system illustrating the interrelationship between software and hardware. The system includes a hardware level 12, a kernel 13 and a user level 14. The hardware level 12 includes the hardware system elements shown in FIG. 1, the kernel 13 is the part of the operating system that controls the hardware and the user level 14 includes the application programs that are being run on the computer. The processor runs one or more processes, which can be defined as programs in execution. Processes in turn generally run as a number of threads, where a thread is an entity within a process that can be scheduled for execution by an operating system scheduler. The kernel 13 also includes a crash dump subsystem 15 which will be described in more detail below.
The cache 9, main memory 3 and hard disk 4 shown in FIG. 1 are all capable of storing program instructions and data, generally referred to together as data. The data can also be stored on external devices accessed through the I/O subsystems 7. One of the tasks of the kernel 13 is to manage the access to the memory resources. To execute a process or a thread, the data required for that process or thread must reside in the main memory 3, also referred to as physical memory, at the time of execution so that it is available to the CPU 8. Once the kernel 13 itself is loaded into memory at boot-up, it ensures that any other data required for running a desired process or thread is brought into physical memory 3. However, the amount of RAM is limited and if all the data associated with a particular program is made available in the RAM at the outset, the system could only run a limited number of programs. Modern operating systems such as HP-UX™ therefore operate a virtual memory management system, which allows the kernel to move data and instructions from the hard disk 4 or external memory devices to RAM when needed and move it back when not needed any longer. The total memory available is referred to as virtual memory and can exceed the size of the physical memory. Some of the virtual memory space has corresponding addresses in the physical memory 3. The rest of the virtual memory space maps onto addresses on the hard disk 4 and/or external memory device. Hereinafter, any reference to loading data from the hard disk into RAM should also be construed to refer to loading data from any other external memory device into RAM, unless otherwise stated.
4
3
4
When a program is compiled, the compiler generates virtual addresses for the program code that represent locations in memory. When the operating system then tries to access the virtual addresses while running the program, the system checks whether a particular address corresponds to a physical address. If it does, it accesses the data at the corresponding physical address. If the virtual address does not correspond to a physical address, the system retrieves the data from the hard disk and moves the data into the physical memory . It then accesses the data in the physical memory in the normal way. If there is not enough available memory in the physical memory, used memory has to be freed and the data and instructions saved at the addresses to be freed is moved to the hard disk . Usually, the data that is moved from the physical memory is data that has not been used for a while.
A page is the smallest unit of physical memory that can be mapped to a virtual address. For example, on the HP-UX™ system, the page size is 4 KB. Virtual pages are therefore referred to by a virtual page number VPN, while physical pages are referred to by a physical page number PPN. The process of bringing virtual memory into main memory only as needed is referred to as demand paging.
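As a concrete illustration of the paging arithmetic just described, the sketch below splits a virtual address into a virtual page number and an in-page offset for a 4 KB page; the helper names are illustrative assumptions, not HP-UX code.

    #include <stdint.h>

    #define PAGE_SIZE 4096UL  /* 4 KB page, as mentioned above for HP-UX */

    /* Virtual page number of a virtual address. */
    static uint64_t vpn_of(uint64_t vaddr) { return vaddr / PAGE_SIZE; }

    /* Byte offset of the address within its page. */
    static uint64_t page_offset(uint64_t vaddr) { return vaddr % PAGE_SIZE; }

    /* Physical address obtained once the VPN has been translated to a PPN. */
    static uint64_t phys_addr(uint64_t ppn, uint64_t vaddr)
    {
        return ppn * PAGE_SIZE + page_offset(vaddr);
    }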
With reference to FIG. 3, to manage the various kinds of memory and where the data is stored, an operating system, such as HP-UX™ maintains a table 16 in memory called the Page Directory (PDIR) that keeps track of all pages currently in memory. When a page is mapped in some virtual address space, it is allocated an entry in the PDIR 16. The PDIR links a physical page in memory to its virtual address. Every entry in the PDIR contains a field 17 with the physical page number, a field 18 with the virtual page number and at least one field 19 with at least one bit of auxiliary information about the page of memory.
The PDIR 16 is saved in RAM 3. To speed up the system, a subset of the PDIR is stored in the TLB 10 in the processor 2. The TLB translates virtual to physical addresses. Therefore, each entry contains both the virtual page number and the physical page number.
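A minimal C sketch of such a PDIR entry is given below; the field names and widths are assumptions chosen for illustration and do not reflect the actual HP-UX structure.

    #include <stdint.h>

    /* Illustrative page directory entry: physical page number, virtual page
       number, and at least one bit of auxiliary information (here, a single
       valid/invalid flag used later by the crash dump routine). */
    struct pdir_entry {
        uint64_t ppn;        /* physical page number (field 17) */
        uint64_t vpn;        /* virtual page number (field 18) */
        unsigned valid : 1;  /* auxiliary bit (field 19): translation valid? */
    };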
The invention is not restricted to use of the hashed page table arrangement shown in FIG. 3, but is applicable to systems using other types of page table, such as a linear hierarchical page table.
With reference to FIG. 4, when the CPU 8 wishes to access a memory page, it first looks in the TLB 10 using the VPN as an index. If a PPN is found in the TLB 10, which is referred to as a TLB hit, the processor knows that the required page is in the main memory 3. The required data from the page can then be loaded into the cache 9 to be used by the CPU 8. A cache controller (not shown) may control the process of loading the required data into the cache 9. The cache controller will check whether the required data already exists in the cache 9. If not, the cache controller can retrieve the data from the RAM and move it into the cache. The cache 9 may be either physically or virtually indexed.
If the page number is not found in the TLB, which is referred to as a TLB miss, the PDIR 16 is checked to see if the required page exists there. If it does, which is referred to as a PDIR hit, the physical page number is loaded into the TLB 10 and the instruction to access the page by the CPU is restarted again. If it does not exist, which is generally referred to as a PDIR miss, this indicates that the required page does not exist in physical memory 3, and needs to be brought into memory from the hard disk 4 or from an external device. The process of bringing a page from the hard disk 4 into the main memory 3 is dealt with by a software page fault handler 20 and causes corresponding VPN/PPN entries to be made in the PDIR 16 and TLB 10, as is well known in the art. When the relevant page has been loaded into physical memory, the access routine by the CPU is restarted and the relevant data can be loaded into the cache and used by the CPU 8.
When the operating system crashes, it is often desirable to save the data in the physical memory to a disk such that it can be analysed later and the cause of the crash can be identified. The system can then be restarted. The process of saving data to disk at the time of crash will hereinafter be referred to as a crash dump. The data saved may include the memory image of a particular process, the memory image of parts of the address space of that process along with other information such as the values of the processor registers. The crash dump is performed in the crash dump subsystem 15 of the kernel and like any other application requires memory for its operations. This memory has to be allocated from the physical memory 3. High performance crash dump algorithms include compression, parallel I/O, encryption and access to remote storage locations for dumping the data. High performance crash dumps require a large amount of memory for the compression and encryption dictionaries, compression buffers and I/O buffers.
FIG. 5 illustrates the allocation of physical memory for the kernel, for the crash dump and for user processes. A small amount of memory is reserved by the kernel at boot-up. A portion 21 of this amount includes kernel text, i.e. instructions for carrying out essential processes of the kernel, and initial data. The kernel text and initial data must reside in memory until the operating system shuts down and cannot be swapped out to make room for pages required for user processes. Another portion 22 of the memory reserved at boot-up is allocated for the crash dump. The memory 23 that is not reserved by the kernel at boot-up can be used for mapping virtual memory pages during run-time. The kernel keeps a record of all the available ranges of memory in the form of a physical memory map. The physical memory map is an array of structures in which each structure specifies a physical starting address of an available range of memory and the number of contiguous physical pages from that address that are available. Some of this memory will be allocated for kernel processes as and when the memory is needed. Consequently, the kernel data structures are not all located at consecutive memory addresses. As such, the system cannot easily identify the location of pages that store important kernel data structures at crash time and determine which pages must not be reused for performing the crash dump.
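The physical memory map described above can be pictured with the following C sketch; the structure and field names are illustrative assumptions rather than the kernel's actual definition.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative physical memory map: an array of structures, each giving
       the starting physical page of an available range and the number of
       contiguous available pages from that address. */
    struct phys_mem_range {
        uint64_t start_pfn;   /* first physical page number in the range */
        uint64_t page_count;  /* number of contiguous available pages */
    };

    struct phys_mem_map {
        size_t                nranges;
        struct phys_mem_range range[64];  /* illustrative fixed upper bound */
    };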
The amount of memory 22 allocated at boot-up for the crash dump, according to the invention, is much smaller than the memory required to run the actual dump and much smaller than the amount that would be allocated in a conventional dump. According to the invention, a large proportion of the memory required for the dump is allocated after the crash from memory available to be used by other processes during run-time. The process for allocating memory at the time of the crash will be described in more detail below.
With reference to FIG. 6, the memory 22 allocated for the crash dump at boot-up includes an initial compression buffer 24, an initial I/O buffer 25, a memory area 26 for storing the compression dictionaries, encryption dictionaries and other necessary data and an array 27 for storing a list of memory locations comprising data structures that are required for performing the dump. This array will be referred to as the crash dump memory usage list hereinafter. The list is populated at crash time as will be described with respect to FIG. 9. The memory 22 also comprises an internal resource map 28, instructions for a special crash dump page fault handler 29, instructions for I/O drivers 30 used by the crash dump and instructions for an internal memory allocator 31. It should be understood that if the invention is implemented in an apparatus with a pageable kernel, i.e. in which kernel data and code can be paged on demand, at least some of the data structures of memory 22 can be loaded into physical memory at the time of the crash, rather than at boot-up.
With reference to FIG. 7, when the system crashes the kernel instructs the dump subsystem 15 to perform a two-stage crash dump. First, a dummy crash dump is run at step 7.1 to identify the pages that are used internally by the crash dump for its correct running, then the real dump is run at step 7.2 using memory allocated from memory ranges that do not include the pages that are required to perform the dump. Both dump routines loop through the physical memory map but only the second one actually saves the memory to non-volatile storage 5. The dummy crash dump will be described in more detail with respect to FIGS. 8 and 9 and the real dump will be described in more detail with respect to FIGS. 10 and 11.
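The call sequence of this two-stage flow can be sketched as follows, mirroring the Dumpsys(FakeDump, PhysicalMemoryMap[ ]) pseudocode given later in this description; the wrapper and the dumpsys prototype are illustrative assumptions, not actual kernel code.

    struct phys_mem_map;  /* physical memory map, as sketched earlier */

    /* Assumed prototype mirroring Dumpsys(FakeDump, PhysicalMemoryMap[ ]). */
    extern void dumpsys(int fake_dump, struct phys_mem_map *map);

    /* Illustrative two-stage crash dump: a dummy pass that only records the
       pages the dump code itself uses, followed by the real pass that writes
       memory to non-volatile storage. */
    void crash_dump(struct phys_mem_map *map)
    {
        dumpsys(1, map);  /* step 7.1: dummy dump, no I/O performed */
        dumpsys(0, map);  /* step 7.2: real dump, using memory safe to reuse */
    }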
With reference to FIG. 8, the dummy crash dump routine is initialised at step 8.1. A dummy crash dump routine is a crash dump modified such that no input or output to the non-volatile disk 5 is performed. It invokes all the driver I/O routines and iterates through the dump algorithm fully except that the I/O is not performed. The dummy dump may also be run in a low memory operating mode, which means that much smaller data sizes are used at the cost of the special improvements provided by the dump algorithm, such as reduced dump time, compression and encryption.
At step 8.2, the kernel page fault handler is replaced with the specific crash dump page fault handler 29. At step 8.3, the bits 19 of the PDIR 16 containing auxiliary information are used to mark all translations in the PDIR 16 as invalid. The TLB 10 is also updated to reflect the changes in the PDIR 16. The purpose of steps 8.2 and 8.3 will be described in more detail with respect to FIG. 9.
At step 8.4, the crash dump subsystem 15 retrieves the page numbers of the first set of pages listed in the physical memory map. At step 8.5, data from the retrieved pages is then read into the compression buffer 24 and at step 8.6, the data is compressed. Considering that the memory allocated for the dump is relatively small, the number of pages compressed at a time must also be small. At step 8.7, the compressed data is then copied from the compression buffer to the input/output buffer 25. At step 8.8, the I/O drivers 30 are invoked but the data in the input/output buffers is never actually written to disk 5. For example, in one embodiment, the I/O drivers 30 may instruct a Direct Memory Access (DMA) controller to write the data as is well known in the art. However, when the dump is a dummy dump, the I/O drivers 30 never pass the instruction to the DMA controller to write the data.
At step 8.9, the kernel subsystem 15 then checks if the routine has looped through all the pages in the physical memory map. If not, steps 8.3 to 8.9 are repeated until all the pages in the physical memory map have been processed. When all the pages in the physical memory map have been processed, the routine exits at step 8.10.
Not all the physical memory must necessarily be dumped. For example, unused pages, user process pages and kernel code pages may not be dumped by default. In some operating systems, the crash dump subsystem 15 may check user settings to identify the data to be included in the dump. Alternatively, the physical memory map may include an indication of which pages should be dumped. For example, the system may keep track of what each memory range, rather than each page, is used for and only the memory ranges that include data of certain types or associated with certain processes may be included in the dump. The steps of FIG. 8 are only visited for the pages that contain data to be dumped.
Throughout steps 8.1 to 8.10, crash dump data structures and other kernel data structures are accessed, including data structures apart from the data to be dumped. Since the dummy dump invokes similar routines to the real dump, the accessed kernel memory and data structures are also required for the real dump. Therefore, the pages storing the kernel memory and data structures are not safe to be used for allocating memory for running the real dump. Whereas the data to be dumped is accessed using its associated physical page numbers stored in the physical memory map, the kernel memory and data structures are accessed using their associated virtual page numbers as explained with reference to FIG. 3. However, since the translations between the physical page numbers and the virtual page numbers in the PDIR 16 have been marked as invalid, the first time a page is accessed using its virtual page number by the crash dump subsystem, the special crash dump fault handler 29 is invoked. The crash dump fault handler 29 is also invoked if a translation is found to be marked as invalid in the TLB 10.
In more detail, with reference to FIG. 9, each time a virtual page number is accessed by the dummy crash dump routine, the CPU searches for the VPN in the TLB or in the PDIR at step 9.1. The corresponding PPN is found at step 9.2 and at step 9.3 it is checked whether the auxiliary bit 19 in the PDIR 16, or an auxiliary bit in the TLB, indicates whether the translation is valid. If the translation is invalid, the process continues to step 9.4 and the crash dump fault handler 29 is invoked. The special crash dump fault handler checks whether the PPN is present in the crash dump memory usage list 27 at step 9.5. If the physical page number is not included, the physical page number is saved at step 9.6 and the translation in the PDIR 16 is then marked as valid at step 9.7. The instruction to translate the address is then restarted, the CPU finds the physical page number at step 9.2, checks that the translation is valid at step 9.3 and exits at step 9.8.
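A compact C sketch of this handler logic is shown below; it restates the crashdump_pagefault_handler( ) pseudocode given near the end of this description, and the helper routines for reading the fault address, translating it to a physical page and revalidating the PDIR entry are hypothetical names, not real HP-UX interfaces.

    #include <stddef.h>
    #include <stdint.h>

    #define MAX_USED_PAGES 4096               /* illustrative bound on the usage list */

    static uint64_t crashdump_memory_usage[MAX_USED_PAGES];
    static size_t   n_used;

    /* Hypothetical helpers assumed for this sketch. */
    extern uint64_t get_fault_vaddr(void);            /* fault virtual address from processor registers */
    extern uint64_t vaddr_to_pfn(uint64_t vaddr);     /* physical page behind the faulting address */
    extern void     pdir_mark_valid(uint64_t vaddr);  /* re-validate the translation (step 9.7) */

    static int already_recorded(uint64_t pfn)
    {
        for (size_t i = 0; i < n_used; i++)
            if (crashdump_memory_usage[i] == pfn)
                return 1;
        return 0;
    }

    /* Record the physical page used by the dump code itself, then mark the
       translation valid so the faulting access can be restarted. */
    void crashdump_pagefault_handler(void)
    {
        uint64_t fva = get_fault_vaddr();
        uint64_t pfn = vaddr_to_pfn(fva);

        if (!already_recorded(pfn) && n_used < MAX_USED_PAGES)
            crashdump_memory_usage[n_used++] = pfn;   /* steps 9.5 and 9.6 */

        pdir_mark_valid(fva);                         /* step 9.7 */
    }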
If at step 9.5, it is determined that the physical page number is already in the crash dump memory usage list 27, for example because it has already been accessed in the dummy dump routine through another virtual page number mapping onto the same physical page number, referred to as virtual aliasing, the process continues directly to step 9.7, the translation is marked as valid and the instruction to access the page is restarted.
As mentioned above, the actual physical pages to be dumped are accessed using their physical addresses and no translations are carried out. Therefore these accesses do not result in page faults. Alternatively, the pages may be accessed using their virtual page numbers but the process for accessing the memory may be modified such that the auxiliary bits 19 are not checked or the invalid flags of the translations are ignored. Therefore, the access to these pages does not result in page faults and the physical page numbers are not stored in the list. Consequently, the crash dump memory usage list only includes the physical page numbers of pages used by the crash dump subsystem 15 to perform the routines of the dump. A method of accessing the pages to be dumped without invoking the crash dump page fault handler 29 can be implemented in various known ways and will not be described in detail herein.
With reference to FIG. 10, the real dump is then initialised at step 10.1. At step 10.2, the kernel page fault handler is restored. At step 10.3, all remaining invalid entries in the PDIR table 16 are marked as valid. The memory needs for running the dump are then identified as well known in the art at step 10.4 and the memory ranges that can be reused for allocating memory are identified at step 10.5 as will be described in more detail with respect to FIG. 11.
The data already stored in the memory range that is safe to reuse must be dumped before the memory can be allocated to the crash dump subsystem 15. At step 10.6, the data in the memory range that is safe to be reused for the dump is saved to non-volatile storage 5. The dump of the data is performed using the initial compression buffer 24 and the initial I/O buffer 25. The pages are then marked as dumped in the physical memory map such that the crash dump subsystem 15 does not attempt to dump the data again. The memory range is now free to be reused and at step 10.7, larger compression and input/output buffers to replace the initial compression and input/output buffers are allocated from the memory range. If parallel dumps are supported by the kernel, a plurality of input/output buffers may be created. Additionally, if multithreaded compressed dumps are supported by the kernel, a plurality of compression buffers may be created.
The main dump is then started at step 10.8. The dump can now be performed in normal mode because enough memory is available. The next set of pages in the physical memory map to be dumped is identified at step 10.8 and at step 10.9 the data contained in the pages is moved to the newly allocated compression buffer. The data is then compressed at step 10.10 and copied to the input/output buffer at step 10.11. At step 10.12 the driver routines 30 are invoked. This time, the driver routines instruct the DMA controller at step 10.13 to write the data in the I/O buffers to the non-volatile crash dump disk 5. The pages are then marked as dumped in the physical memory map. At step 10.14 it is checked whether there are any remaining pages in the physical memory map that are not marked as dumped. If there are, steps 10.8 to 10.14 are repeated for the remaining pages until all the pages in the physical memory map have been dumped. When all the pages have been dumped, the dump exits at step 10.15.
Step 10.5 will now be described in more detail with respect to FIG. 11. At step 11.1, the crash dump subsystem 15 invokes the internal crash dump memory allocator routines 31. The memory allocator accesses the physical memory map at step 11.2. As shown in FIG. 5, a portion 21 of the memory comprises kernel static data and text segments that are loaded into the memory at boot up. The data in this portion of the memory is required for the kernel to run and although some of the memory could be reused for running the crash dump it is safest to avoid reusing it. Therefore, at step 11.3, the memory allocator iterates through the kernel static data and text segments that are loaded into memory at boot up in memory area 21 and identifies the highest physical page used for kernel static data and text segments and, at step 11.4, the memory allocator searches above the highest physical page used for kernel static data and text segments for the largest range of pages that does not include any pages in the crash dump memory usage list 27. The memory allocator then initialises its own resource map 28 with the identified range. The data in the identified range can then be dumped and reused as described with respect to steps 10.6 and 10.7 in FIG. 10.
It should be understood that the process described with respect to FIG. 11 constitutes just one example of a process for identifying a memory range that is safe to be reused. The details of this process can be varied. For example, step 11.3 may not be performed and the portion 21 of memory reserved for kernel static data and text segments may also be searched for a memory range that is safe to reuse for the crash dump routine.
In one embodiment, the dummy crash dump routine and the real crash dump routine are implemented using software. The kernel may call a crash dump function that takes two variables, a variable indicating whether the dump is a dummy dump or a real dump and a variable pointing to the resource map used for identifying data to be dumped. Whether the initial buffers or the buffers allocated at crash time are accessed is determined in dependence on the value of the variable indicating whether the dump is a dummy dump or a real dump. The function may also pass the variable to the driver routines which in turn determines based on the value of the variable whether to instruct the DMA controller to write the data to be dumped to non-volatile storage or not. The memory for the actual dump may be allocated using the C function malloc( ), which is well known in the art. In one embodiment, a special crashdump_malloc( ) function may be provided. This function may be written to allocate memory from the internal crash dump memory resource map.
An algorithm, according to one embodiment of the invention, for performing the crash dumps is provided below:
Dumpsys(FakeDump, PhysicalMemoryMap[ ])
If (FakeDump is TRUE)
Replace Kernel Page Fault handler with crash dump specific
handler, crashdump_pagefault_handler( ) and mark
translations as invalid
Else
Restore Kernel Page Fault handler and mark translations as
valid
Identify memory needs during crash dump, “MemReq”, for
operating in normal mode (i.e multi-threaded mode)
Identify memory ranges that can be reused for allocating upto
“MemReq”. Initialize the memory resource map
crashdump_malloc_rmap_init( )
Save (dump) ranges of memory identified for reuse to the disk
using initial buffers BFC_X and DRV_X.
Allocate bigger buffers (many instances for a parallel dump)
crashdump_malloc(BFC_Y)
crashdump_malloc(DRV_Y)
Mark ranges in crashdump_rmap[ ] as dumped or remove
them from the PhysicalMemoryMap[ ].
DumpLoop: Loop PhysicalMemoryMap[ ] Until All Data is Dumped
Begin
If (FakeDump is TRUE)
Identify memory area MA_X to be dumped
Else
Identify memory area MA_Y to be dumped
If (FakeDump is TRUE)
Read data, DA_X, from memory area MA_X to be
dumped into compression buffer BFC_X
Else
Read data, DA_Y, from memory area MA_Y to be
dumped into compression buffer BFC_Y
If (FakeDump is TRUE)
compress data using compression buffer BFC_X
Else
compress data using compression buffer BFC_Y
If (FakeDump is TRUE)
Copy compressed data from BFC_X to DRV_X
Else
Copy compressed data from BFC_Y to DRV_Y
Invoke driver write routine and write compressed data to disk:
If (FakeDump is TRUE)
DriverWrite(FakeWrite = TRUE, DRV_X)
Else
DriverWrite(FakeWrite = FALSE, DRV_Y)
If (all required data is dumped)
Exit
Else
GoTo DumpLoop
End
In the above algorithm, DriverWrite( ) starts DMA only when FakeWrite is FALSE.
The algorithm for the Page Fault Handler, in one embodiment, is provided below:
crashdump_pagefault_handler( )
Get Fault Virtual Address, “fva”, from processor special registers
Get Physical page, “pfn”, corresponding to Fault Virtual Address,
“fva”
If (pfn not present in crashdump_memory_usage[ ])
save “pfn” in crashdump_memory_usage[ ]
The algorithms for the functions carried out by the memory allocator is provided below:
crashdump_malloc_rmap_init( )
Iterate thru Kernel static data, text segments and identify highest
physical page used, kernel_static_phys_max.
Iterate physical memory map of kernel, and start search above
kernel_static_phys_max to identify biggest range of pages, which
are not present in crashdump_memory_usage[ ].
Initialize resource map crashdump_rmap[ ] with above ranges
crashdump_malloc( )
Allocate memory from crashdump_rmap[ ].
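For illustration, the two allocator routines above could be realised as the following minimal C sketch, which treats the reusable range as a single block and hands out memory with a simple bump pointer; the names and the one-range simplification are assumptions, not the actual implementation.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative resource map covering one range of memory identified as
       safe to reuse after the dummy dump. */
    static struct {
        uintptr_t next;  /* next free byte in the reusable range */
        uintptr_t end;   /* one past the last usable byte */
    } crashdump_rmap;

    /* Record the reusable range found above the kernel static data/text. */
    void crashdump_malloc_rmap_init(uintptr_t range_start, size_t range_bytes)
    {
        crashdump_rmap.next = range_start;
        crashdump_rmap.end  = range_start + range_bytes;
    }

    /* Allocate crash dump buffers (compression, I/O) from the reusable range. */
    void *crashdump_malloc(size_t bytes)
    {
        bytes = (bytes + 7u) & ~(size_t)7u;            /* keep allocations 8-byte aligned */
        if (crashdump_rmap.end - crashdump_rmap.next < bytes)
            return NULL;                               /* not enough reusable memory left */
        void *p = (void *)crashdump_rmap.next;
        crashdump_rmap.next += bytes;
        return p;
    }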
Although the invention has been described with respect to a specific embodiment above, variations are possible.
For example, instead of marking the bit 19 comprising auxiliary information in the PDIR 16 as invalid, the pages that contain kernel data structures required for the dump can be logged by using some other page protection mechanism to invoke the fault handler. For example, protection keys or task identifiers associated with a particular block of memory may be used instead of the invalid/valid bit in the page table. Protection keys and task identifiers, used for example in PA-RISC™ processors, will not be described in detail here since they are known in the art.
Moreover, it should be noted that in some operating systems the crash dump routine is run in real mode. In other words, no access or protection checks are performed. In that case, the dummy dump may be run in virtual mode such that the CPU performs access and protection checks and the actual dump may be run in real mode or physical mode for fail safe operation.
Additionally, the memory does not have to be divided into pages. Other types of segmentations of memory are also possible.
The invention provides a mechanism to allocate memory after a system crash for dump purposes. Moreover, the mechanism requires less bookkeeping of memory than in the prior art. In addition, by using the mechanism for allocating memory according to the invention, only a small amount of memory must be reserved during boot-up or run-time for initiating the crash dump routine. As a result, the apparatus uses memory more efficiently than in the prior art.
It should be noted that the time taken for the dummy crash dump to run does not extend the time taken for the crash dump process to complete significantly, because the time-consuming task of writing the data to non-volatile disk is not performed in the dummy crash dump routine.
Moreover, the time taken for the crash dump process to complete can sometimes be reduced further by not performing any compression or encryption as part of the dummy crash dump. The time taken for the crash dump process according to the invention is shorter than if the actual dump was performed with the smaller data sizes used in the dummy crash dump. Therefore, the invention allows more efficient use of memory without extending the time taken for the crash dump to complete significantly.
Although the crash dump algorithm has been described to include compression, the invention also has advantages when compression is not used. Any system with a crash dump algorithm that uses a significant amount of memory because it employs, for example, encryption or parallel I/O would provide inefficient use of memory if all the memory required for performing the dump is allocated at boot-up. According to the invention, a large proportion of the required memory can be allocated at the time of the crash instead. Memory used during run time for other processes can be reused at the time of the crash and memory is used more efficiently.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a data processing system;
FIG. 2 is a high-level overview of a computer system illustrating the interrelationship between software and hardware;
FIG. 3 illustrates the structure of a page directory for translating virtual memory addresses into physical memory addresses;
FIG. 4 illustrates a mechanism used to translate between virtual and physical addresses in a virtual memory management system;
FIG. 5 illustrates a typical partition of physical memory in the data processing system;
FIG. 6 is a schematic diagram of the uses of the memory initially allocated for performing a crash dump;
FIG. 7 is a flowchart illustrating the two-step crash dump according to the invention;
FIG. 8 is a flowchart illustrating a dummy crash dump routine;
FIG. 9 is a flowchart illustrating a process for identifying memory that is not safe to be reused for the crash dump;
FIG. 10 is a flowchart illustrating a process for performing a crash dump routine; and
FIG. 11 is a flowchart illustrating a process for identifying a memory range that is safe to be reused for the crash dump.
Images, sheet music, and other items depicting the experience and interpretation of the Civil War.
The "Civil War Visual Culture" unit of the Tennessee Virtual Archive showcases a wide variety of Civil War-related materials: sheet music covers, professionally designed lithographs, flags, hand-drawn letters, military drawings, and other images. These items represent some of the ways in which a tragic era in America’s history was experienced by contemporaries and interpreted by subsequent generations.
Tennesseans experienced the Civil War in their state in a variety of different ways. Perhaps most notably, soldiers left their homes—from the Mississippi River to the mountains of East Tennessee—to either preserve the Union or defend the Confederacy’s right to leave it. Civilians, left at home to carry on the daily tasks of life, were often thrust into responsibilities virtually unheard of just a few years before (see the Civilian Life in the Civil War collection for more images on this subject).
Some representations of war depict immediate responses to events. In Civil War Drawings from the Tennessee State Museum, James C. Kelly discusses the impact of a growing visual culture in America on the cusp of civil war:
The American Civil War was the first conflict to receive adequate pictorial documentation by men on the spot. Immediately one thinks of the photographer, and, indeed, the Civil War was the first war to be extensively photographed. But one could only photograph what stood still...the American public, during the War, wanted action scenes, and photography could not supply it. For that, they still relied on the artist’s pencil.
Although it does not depict action on the battlefield, one drawing in this collection shows Federal troops apparently drilling on the public square of a city, most likely Nashville. The scene provides a priceless impression of the experience of living in the captured city. On one side of the square, a man drives a mule-led wagon, perhaps with supplies for the troops. Two horseback officers confer with one another while other soldiers appear to split wood or prepare food nearby. The sketch, while apparently not produced by a professional hand, allows us to view an image not captured by any photographer’s lens.
Other images in this collection let us see exactly what soldiers witnessed on the battlefield. Woodbury, Tennessee’s St. John Guards fashioned a company flag that suggested the Confederacy’s first official flag, known as the "Stars and Bars." Such banners were integral in the war to develop company pride and cohesion. Just as they do today, flags used in the Civil War were infused with significance beyond the apparent simplicity of several pieces of fabric.
While artists often employed the pencil to sketch "action scenes," the pencil was also used for much more utilitarian and urgent purposes. Army commanders and their officers required detailed plans of battlefield geographies in order to manage troops effectively and plan for engagement. Similar sketches, included in this collection, portray military encampment in present-day East Nashville and military prisons at Johnson’s Island and Fort Delaware.
Frank Leslie’s Illustrated Newspaper, a well-known and successful publication with a national audience, provided many Americans with their earliest visual impressions of the battlefront. The newspaper, which was published weekly from the 1850s until the early 1920s, provided an alternative to local town and city newspapers, most of which did not publish many photographs or illustrations in the mid-nineteenth century. The extracts from this publication are drawn primarily from 1862 and depict troop movements and action in Tennessee. TSLA also holds a German-language edition of Frank Leslie’s, which shows a picture of captured Confederate troops near Bridgeport, Tennessee.
Founded in the post-Reconstruction era, the Chicago print company Kurz & Allison helped shape the memory of the Civil War by mass producing a series of battle-themed art prints. The company released a print titled The Fort Pillow Massacre in the early 1890s. That famous image, shown here, reflects a more Northern perspective on a controversial event of the Civil War. Indeed, Kurz & Allison’s dramatic depiction of the Fort Pillow atrocities probably shaped national opinions about this battle. Interestingly, such prints were produced the same decade as the seminal Supreme Court case of Plessy v. Ferguson, which paved the way for many decades of segregation across the South.
In the decades after the war ended, its legacy was portrayed in various ways. This collection features ten sheet music covers to display how this unique format was often employed to reflect issues in the popular imagination. The Civil War and the Reconstruction era provided an opportunity for music publishers to cater to the powerful collective memory of the war that ran throughout the nation. In the South, this often translated into an intense nostalgia for the Old South and the "Lost Cause." Sheet music with titles like "The Bonnie Blue Flag" exemplified the sense of loss felt by many Southerners.
Though the "Lost Cause" sentiment spread across the South in postwar years, perhaps the most concrete reminder of the true cost of war surfaced in the soldiers’ graves that dotted the South. An image from the Wainwright Collection shows the practical approach taken in designing resting places for soldiers in Chattanooga. The plans for the gates and fence provide a permanent physical manifestation of the bloody four years—a toll taken by both North and South—that lingered for years in the collective mind of the reunited nation.
Kelly, James C. Civil War Drawings from the Tennessee State Museum. Nashville: The Tennessee State Museum Foundation, 1989. | https://teva.contentdm.oclc.org/customizations/global/pages/collections/cwvisual/cwvisual.html |
Singapore’s national climate change strategy
This national climate change strategy presents Singapore’s current and future efforts to address climate change, covering vulnerability and adaptation as well as the mitigation of greenhouse gas emissions. The strategy also outlines our local competency-building efforts and our participation in international climate change discussions.
Environmental sustainability and economic growth have been key drivers of Singapore’s socio-economic development. As a small country with few natural resources, it is important Singapore optimizes the use of its available environmental and energy resources, and at the same time achieves synergies across the policy objectives of environmental sustainability, economic competitiveness and energy security.
The national climate change strategy reiterates Singapore’s commitment to do its part in the international effort to address climate change. The challenge of mitigating greenhouse gas emissions is not one that the country can tackle alone. It requires the commitment and participation of all countries, under the auspices of the United Nations Framework Convention on Climate Change (UNFCCC).
All countries have to play a role consistent with their unique national circumstances. Singapore will do its part, in particular by improving the energy efficiency of our major energy sectors, namely power generation, industries, transport, buildings and households. Singapore is also committed to the global research effort on climate change and energy technologies and is investing to develop technologies that can help the world meet the climate change challenge, in the areas of solar energy and water. | http://ledsgp.org/resource/singapore-national-climate-change-strategy/?loclang=en_gb |
Welcome to the Office of Academic Programs
The Office of Academic Programs ensures the undergraduate and graduate degrees, minors, post-baccalaureate certificate programs support the University’s Learning Outcomes and the Meaning of a CSUSB Degree in service to all students. We are forward-thinking and strive to meet all learners' future needs by developing creative solutions and supporting innovative programs in collaboration with faculty.
The Office oversees academic program planning, review, assessment, and accreditation. Academic Programs offers faculty professional development in support of assessment and continuous improvement of educational programs, general education, writing-intensive programs, and high-impact practices. We are committed to expanding all learners' opportunities to engage in high-quality, high-impact practices and ensuring that academic programs promote intellectual achievement, inclusivity, and equity.
The Office is responsible for creating the Academic Calendar, the University's Course Catalog, the course schedule, and operationalizing relevant Title Five California Code of Regulations, WSCUC requirements, and CSU policies and procedures.
Meaning of a CSUSB Degree
At CSUSB, students engage in diverse ways of knowing and contributing to the world. Through their degree programs and co-curricular activities, they grow intellectually, creatively, and professionally. Our students explore the paradigms and knowledge reservoirs of various disciplines and cultures, discover and make meaning in new ways, and integrate and apply multiple perspectives to solving problems. Often the first in their families to earn a college degree, our graduates are transformed by their high-value CSUSB education and by the resilience, reimagining, and reflection that it asks of them. They take pride in their degrees and leave the campus as lifelong learners. As they pursue their careers of choice, our alumni achieve social mobility and success in ever-changing professional and public sectors. They are skilled at collaborating with people from diverse backgrounds and leading positive change for social justice, both locally and globally. In all of these ways, CSUSB graduates are able to live empathetic, fulfilled lives that create opportunities for themselves, their communities, and their world. | https://www.csusb.edu/academic-programs |
I have asked a question to find out what the term is for 3D objects that appear suddenly and disappear suddenly in 3D games when the player moves towards or away from them.
What is the standard term used to describe objects appearing from nowhere in 3D games?
Someone commented:
We explicitly do not allow terminology questions spanning the breadth of gaming. If you want to know what this is for a specific game, alright, ask that.
In light of this comment, I am concerned that my question will be closed, even though I could not locate anything on the tour that explicitly states this. An answer has been posted and is helpful, but I do not feel that it answers the question adequately, so I would be prepared to take action to prevent the question from being closed if necessary.
Should I edit my question to mention a specific game, or a list of games that have the described behaviour? Or can I leave it as is? | https://gaming.meta.stackexchange.com/questions/11591/are-terminology-questions-that-apply-to-more-than-one-game-on-topic?noredirect=1 |
Emily Grossman is a mobile product marketer and app strategist. She specializes in app search marketing, with a focus on strategic deep linking, app indexing, app launch strategy, and app store optimization (also known as ASO).
Emily has spoken about mobile application marketing at national and international conferences and has collaborated with major brands on their mobile marketing and mobile app growth strategies.
As the Director of App Strategy at MobileMoxie, Emily developed important app strategies that improved user acquisition, activation, and retention for Fortune 500 businesses and well-known startups. Before her work as a consultant, Emily was one of the original employees at Double Encore, Inc, one of the world’s first native app development agencies (acquired by WPP’s POSSIBLE in 2014). | https://www.99starts.com/speakers/emily-grossman |
White Cube Hong Kong is pleased to present ‘Rotation’, a solo exhibition by multimedia artist Wang Gongxin. This is the first presentation of the artist’s early installation works, as well as being Wang’s first solo exhibition in Hong Kong.
Born in 1960 in Beijing, Wang is a pioneering media artist and one of the first in China to use digital editing. He was also, in 2001, the founder of Loft, the earliest media art centre in China. Wang began his career as a painter, but his experiences, and in particular the art education he received in the US between the late 1980s and early 1990s, encouraged him to broaden his artistic language, an expansion that reflects the energy and vitality of his practice.
The Sky of Brooklyn, created in 1995, established Wang as a leading experimental artist. The installation sets the tone for his subsequent practice, one that is humorous and modest, rooted in ideas of memory, reality, time and space, using a language that is full of tension. The subtlety with which Wang addresses constructs of cultural identity was a radical departure from the discursive trends and curatorial tastes of the 1990s. This initially served to obscure the importance of this period in Wang’s practice within contemporary Chinese art. Revisiting these works now, twenty years later, Wang’s self-conscious avoidance of easily recognisable Chinese signifiers, and his engagement with cultural references through materials and concepts, is evidence of his desire to depart from the then dominant artistic styles.
Around this time, Wang was moving away from easel painting and starting to experiment with mixed media. Video alone was not enough to satisfy his curiosity, and he began using materials that were ‘mobile’, ‘radiant’ and ‘liquid’. As early as 1994, he was making site-specific kinetic installations that employ suspended or embedded lightbulbs, metal containers, ink and other fluids to play with light, movement and the environment. Moving lightbulbs animate liquids in shallow tray-like containers through illumination or direct contact, creating a tension between the liquid and the still solidity of the installation’s geometry.
The kinetic installation Unseatable was exhibited in an artist-run space in Red Hook in Brooklyn in 1994. A red lightbulb circles over a square formation of chairs whose seats are alternately filled with black ink and white milk. The work exemplifies this transitional period in Wang’s practice, revealing his discomfort and unease at finding himself in a new environment.
Dialogue (1995) again uses the transformative movement of light and shadow. The installation consists of two suspended lightbulbs that descend at alternating intervals into a pool of black ink. The bulbs’ contact with the surface produces ripples across the stilled pool, while their descent creates a shadow play of the audience’s figures on the gallery walls. The movement of shadow and ink merges figures and abstract forms and suggests the ways in which exchange destabilises otherwise seemingly fixed entities. The installation’s activation of an image through the manipulation of light and its electric source was the precursor to Wang’s interest in the moving image.
A number of small-scale installation works, including some unrealised projects from these years have a compelling materiality. For Wang, this period of experimentation was key in transforming his modes of thought. It was a phase of ‘qualitative’ metamorphosis in his practice. The artist’s selection of materials in these artworks is not based on the symbolic or allegorical properties of the materials themselves, though their history is unavoidable. It was, instead, an attempt to use the arrangements of different materials and spaces, and the suspension or clearing out of existing symbolism, to propose a new mechanism for discourse. Wang is primarily concerned with restoring the materials to their most fundamental physical properties, true to the relationships between structure and aesthetics. He strikes a ‘balance’ between materiality, space and time. The politics of ‘criticality’ and ‘wariness’ are the products of these minute adjustments.
Three new works created for this exhibition, In and Out (2017), Horizontal (2017) and Equal (2017), are the realisation of ideas that Wang conceived in the 1990s but lacked the resources to put into practice. They demonstrate a sensitivity to textures − the play between marble and wood in In and Out, or wood and ink in Horizontal and Equal – and, like his other works from this period, investigate how vernacular forms invoke cultural and geographical localities. They provide an opportunity to engage with obscured histories and moments of artistic innovation, and to address the ways in which cultural norms impact individual and collective identities. | https://www.artexb.com/portfolio/wanggongxin001/ |
There's something new in the air, and it's heading your way! SFM2 (Steam Flying Machine 2) is a standalone figure for Poser and Daz Studio. This fully poseable model has moving pedals, rotors, and more, and it is pre-scaled for Poser humanoid figures. Also included are four poses for Michael 4 and Victoria 4 as pilot and gunner.
Below is a list of the installation package types provided by this product. The name of each package contains a Package Qualifier (WIP), which is used as a key to indicate something about the contents of that package.
[ ] = Optional, depending on target application(s)
Not all installation packages provide files that are displayed to the user within the interface of an application. The packages listed below do. The application(s), and the location(s) within each application, are shown below.
Visit our site for technical support questions or concerns. | http://docs.daz3d.com/doku.php/public/read_me/index/8195/start |
Focused on nurturing the best new talents across emerging media from outside London, hotel generation ’18 is designed to open up new pathways and equip the next generation of Artists outside the capital. The first programme of its kind in the UK, it responds to Artists’ voices from differing regions asking for help to develop sustainable careers.
This spring, the curatorial team at arebyte Gallery are shortlisting four participants for the programme. They are interested in Artists who show promise and have a commitment to developing their practice. For university graduates, arebyte provides the perfect opportunity to get a sense of the London art scene without the risk of navigating it alone.
If you are interested in new opportunities and potential collaborations to gain further experience, this is the programme for you. Don’t miss out on this chance to accelerate your career in emerging and digital media.
The deadline is fast approaching. All submissions need to be in by March 29th.
Find out more by going to their website here. | https://blog.kitmapper.com/calling-emerging-artists-arebyte-gallery-open-call/ |
There is no single test to definitively diagnose lupus, and it could take months or even years to be sure. Typically, your doctor will conduct a complete medical history and physical exam, including blood tests. The doctor may also perform skin and kidney biopsies (extracting tissue samples that are then examined under a microscope) to make a diagnosis.
The lupus erythematosus (LE) cell test was commonly used for diagnosis, but it has fallen out of use because LE cells are found in only 50–75% of SLE cases and are also found in some people with rheumatoid arthritis, scleroderma, and drug sensitivities. Because of this, the LE cell test is now performed only rarely and is mostly of historical significance.
Lupus is an autoimmune disease that takes on several forms, of which systemic lupus erythematosus (SLE) is one. Lupus can affect any part of the body, but it most commonly attacks your skin, joints, heart, lungs, blood cells, kidneys, and brain. Around 1.5 million Americans have some form of lupus, according to the Lupus Foundation of America, with an estimated 16,000 newly diagnosed each year. Anyone at any age can acquire the disease, though most lupus patients are women between the ages of 15 and 45.
ANA screening yields positive results in many connective tissue disorders and other autoimmune diseases, and may occur in normal individuals. Subtypes of antinuclear antibodies include anti-Smith and anti-double stranded DNA (dsDNA) antibodies (which are linked to SLE) and anti-histone antibodies (which are linked to drug-induced lupus). Anti-dsDNA antibodies are highly specific for SLE; they are present in 70% of cases, whereas they appear in only 0.5% of people without SLE. The anti-dsDNA antibody titers also tend to reflect disease activity, although not in all cases. Other ANA that may occur in people with SLE are anti-U1 RNP (which also appears in systemic sclerosis and mixed connective tissue disease), SS-A (or anti-Ro) and SS-B (or anti-La; both of which are more common in Sjögren's syndrome). SS-A and SS-B confer a specific risk for heart conduction block in neonatal lupus.
Rates of positive ANA tests are affected by the prevalence of systemic lupus erythematosus in the population. Specifically, false-positive rates will be higher in populations with a low prevalence of the disease, such as primary care patients. Because of the high false-positive rates at 1:40 dilution, ANA titers should be obtained only in patients who meet specific clinical criteria (discussed in the clinical recommendations section of this article). When ANA titers are measured, laboratories should report ANA levels at both 1:40 and 1:160 dilutions and should supply information on the percentage of normal persons who are positive at each dilution.41
Avoid calcium supplements, however, which Johns Hopkins researchers have found to potentially increase the risk of heart damage and arterial plaque buildup. “Due to the risk of accelerated atherosclerosis in lupus, we no longer recommend calcium supplementation and encourage a diet rich in calcium instead,” noted George Stojan, MD, a rheumatologist and assistant professor of medicine at Johns Hopkins.
To unravel which people with positive ANA tests actually have lupus, additional blood work can be done. Doctors look for other potentially troublesome antibodies, so they will test for anti-double-stranded DNA and anti-Smith antibodies. These tests are less likely to be positive unless a patient truly has lupus. However, a person who tests negative for these antibodies could still have lupus, whereas a negative ANA test makes lupus far less likely.
Neutrophils, 55% to 70% of all leukocytes, are the most numerous phagocytic cells and are a primary effector cell in inflammation. Eosinophils, 1% to 3% of total leukocytes, destroy parasites and are involved in allergic reactions. Basophils, less than 1% of all leukocytes, contain granules of histamine and heparin and are part of the inflammatory response to injury. Monocytes, 3% to 8% of all leukocytes, become macrophages and phagocytize pathogens and damaged cells, esp. in the tissue fluid. Lymphocytes, 20% to 35% of all leukocytes, have several functions: recognizing foreign antigens, producing antibodies, suppressing the immune response to prevent excess tissue damage, and becoming memory cells.
Hydroxychloroquine (Plaquenil) is an antimalarial medication found to be particularly effective for SLE people with fatigue, skin involvement, and joint disease. Consistently taking Plaquenil can prevent flare-ups of lupus. Side effects are uncommon but include diarrhea, upset stomach, and eye-pigment changes. Eye-pigment changes are rare but require monitoring by an ophthalmologist (eye specialist) during treatment with Plaquenil. Researchers have found that Plaquenil significantly decreased the frequency of abnormal blood clots in people with systemic lupus. Moreover, the effect seemed independent of immune suppression, implying that Plaquenil can directly act to prevent the blood clots. This fascinating study highlights an important reason for people and doctors to consider Plaquenil for long-term use, especially for those SLE people who are at some risk for blood clots in veins and arteries, such as those with phospholipid antibodies (cardiolipin antibodies, lupus anticoagulant, and false-positive venereal disease research laboratory test). This means not only that Plaquenil reduces the chance for re-flares of SLE, but it can also be beneficial in thinning the blood to prevent abnormal excessive blood clotting. Plaquenil is commonly used in combination with other treatments for lupus.
Your kidneys are two bean-shaped organs, each about the size of your fists. They are located near the middle of your back, just below the rib cage. Inside each kidney about a million tiny structures called nephrons filter blood. They remove waste products and extra water, which become urine. The urine flows through tubes called ureters to your bladder, which stores the urine until you go to the bathroom. Most kidney diseases attack the nephrons. This damage may leave kidneys unable to remove wastes. Causes can include genetic problems, injuries, or medicines.
Mortality rates for systemic lupus erythematosus are particularly high in children. In a retrospective study26 of Brazilian children, overall mortality during 16 years of follow-up was 24 percent. Death occurred because of infection (58 percent), central nervous system disease (36 percent), and renal disease (7 percent). When disease onset was before the age of 15 years, renal involvement and hypertension predicted mortality.
If your doctor suspects you have lupus, he or she will focus on your RBC and WBC counts. Low RBC counts are frequently seen in autoimmune diseases like lupus. However, low RBC counts can also indicate blood loss, bone marrow failure, kidney disease, hemolysis (RBC destruction), leukemia, malnutrition, and more. Low WBC counts can point toward lupus as well as bone marrow failure and liver and spleen disease.
In a study published in 2015, patients with SLE were referred for nutrition counseling with a registered dietician (RD), and 41 of 71 referrals participated in the sessions.8 At the end of the 6-month period, the patients who received nutrition counseling were more likely to have lost weight; decreased their intake of foods high in fat, sodium, and calories; and increased their consumption of fruits, vegetables, fiber, and fish.
Infections and diseases of the cardiovascular, renal, pulmonary, and central nervous systems are the most frequent causes of death in patients with systemic lupus erythematosus.8,23,32–37 Since the 1950s, the five-year survival rate for patients with systemic lupus erythematosus has increased from 50 percent to a range of 91 to 97 percent.8,23,32–34,38,39 It is not known how much of this increase in survival is due to improved management versus diagnosis of earlier and milder disease. Higher mortality rates are associated with seizures, lupus nephritis, and azotemia.36,37,40
According to the Mayo Clinic, “People with lupus should eat plenty of fruits, vegetables and whole grains. These foods are rich in vitamins, minerals and essential nutrients that benefit overall health and can help prevent high blood pressure, heart disease, kidney disease, cancer and digestive disorders. Plant-based diets also support a healthy weight because they are naturally low in calories, fat and cholesterol. Fruits and vegetables are particularly high in antioxidants. Antioxidants protect the body by destroying harmful substances that damage cells and tissue and cause heart disease and cancer.” Take a look at our blog, Lupus: the Diet Dilemma for some great tips. While these diets, or eating plans, may have some merit, individual foods should not be the focus. Pay attention to your overall pattern of nutrition. Reducing inflammation is not just about what you eat. Patients should also know that these diets are never meant to be a replacement for the lupus treatments they may already be taking under the close supervision of a medical professional. Until more research is in on the effectiveness of these diets, be practical by getting enough sleep and exercise, and try to maintain a healthy weight.
Saturated fats, on the other hand, can raise cholesterol levels and may contribute to inflammation. So they should be limited. Sources of saturated fats include fried foods, commercial baked goods, creamed soups and sauces, red meat, animal fat, processed meat products, and high-fat dairy foods. That includes whole milk, half and half, cheeses, butter, and ice cream.
A mononuclear phagocytic white blood cell derived from myeloid stem cells. Monocytes circulate in the bloodstream for about 24 hr and then move into tissues, at which point they mature into macrophages, which are long lived. Monocytes and macrophages are one of the first lines of defense in the inflammatory process. This network of fixed and mobile phagocytes that engulf foreign antigens and cell debris previously was called the reticuloendothelial system and is now referred to as the mononuclear phagocyte system (MPS).
In addition to the oral antimalarial hydroxychloroquine, doctors may prescribe topical steroids for lupus rash. Steroids or antimalarials may also be injected directly into rash lesions. (8) Topical creams containing tacrolimus or pimecrolimus that modulate the skin’s immune response may help manage lupus rash. Oral thalidomide, which affects the immune response, may be prescribed if other therapies don’t work. Doctors may also recommend that people with lupus rash avoid the sun and other ultraviolet light sources and wear sunscreen.
Friend Membership, available for the first two years of membership only, entitles you to all Chapter 13 meetings, workshops, exhibits and other events.
I.I. Dallas Board Members
2022-2023
COMMITTEE CHAIRPERSONS
PHOTOGRAPHIC HISTORIAN Paula Brehm
PUBLICATIONS Marilyn Calley
SPECIAL EXHIBITION Ann Hambleton
REGISTRATION Amira Matsuda
WAYS AND MEANS Patricia O’Reilly
ADVISOR Nancy Griggs
FORT WORTH CHAPTER 38 LIAISON Sandra Prachyl
SOGETSU DALLAS BRANCH LIAISON Amira Matsuda
Membership is open to anyone interested in the art of ikebana and the ideals of Ikebana International, regardless of their previous experience.
Regular Membership
Friend Membership
Associate Membership
Transfer Membership
Chapter Meetings
Chapter Newsletters
Ikebana International Magazine
Sakura News
Multiple Ikebana Schools
Find a Teacher
Teach New Students
Regional Conferences and World Conventions
Affiliations
Contact I.I. Dallas Chapter #13
Thank you for visiting us! We hope that you will join us at one of our upcoming programs or begin classes with one of the ikebana teachers in the Dallas area. Through membership in our chapter, you too will experience “Friendship through Flowers.”
For further information about the Japanese art of ikebana or how to join Ikebana International Dallas Chapter 13, please send an email via our Contact Form below.
One of the most amazing cities of the land of the Nile, Alexandria, nicknamed as the jewel of the Mediterranean, has a special magic of its own that attracts the love of many Egyptians and foreigners who spend their vacations in Egypt.
Although it doesn't host striking monuments like Cairo, Luxor, and Aswan, and it doesn't enjoy the marvels of the Red Sea like Hurghada and Sharm El Sheikh, Alexandria offers many excitements to tourists who enjoy their holidays in Egypt.
Established in the 4th century BC by Alexander the Great during his visit to the Temple of the Oracle to gain legitimacy to rule Egypt, the city became the capital of Egypt for many centuries and an important cultural, political, and artistic hub in the Middle East. Today, many travelers who tour Egypt spend one or two days in the jewel of the Mediterranean Sea. Below is a look at the most remarkable sites in Alexandria.
Qaitbey Fort
Constructed in the 15th century by the famous Mamluk sultan and builder Qaitbey, the fort has since become a landmark of Alexandria. Built to protect the western section of the city against attacks coming from the sea, it has turned into a major tourist attraction that welcomes hundreds of travelers who tour Egypt.
Located at the westernmost point of Alexandria, the fort can be seen from everywhere in the city. The most important sections of Fort Qaitbey include the oldest mosque in the city, a marvelously stone-decorated mihrab, and a finely preserved building constructed with stones remaining from the ancient lighthouse erected during the Ptolemaic period.
The Roman Amphitheatre
The only one of its kind in Egypt, the Roman Amphitheatre dates to the 4th century AD. The complex was discovered by coincidence in the 20th century when workers were digging into the ground. It has since become a wonderful tourist site that welcomes many tourists who travel to Egypt.
With its U-shaped stage, the Roman Amphitheatre has 13 tiers of marble steps that can accommodate more than 600 spectators. This distinctive complex is still used to host many artistic events and performances. Beside the theatre, many statues and other displays found in various parts of Alexandria are presented in a wonderfully organized open-air museum.
Pompey's Pillar
Another magnificent landmark of Alexandria, Pompey's Pillar, together with the two sphinxes beside it, is all that remains of a huge temple complex constructed in the 3rd century BC. A visit to Pompey's Pillar is commonly included in many travel packages to Egypt.
Made of red granite, the pillar is 27 meters high with a base around 3 meters in diameter. Located in the western section of the city, this small yet impressive complex hosts a number of travelers who visit Egypt.
Montazah Complex and Gardens
The Montazah Complex was established at the end of the 19th century by Khedive Abbas Helmi II to be the summer resort of the royal family of Egypt at the time. Many tourists who travel to Egypt explore Al Montazah Complex for its wonderful gardens and a number of wonderfully decorated historical palaces. | http://dijalogin.tv/15-hidden-travel-destinations-in-uk-for-travelers-on-a-budget/?vlogger_serie_in=1545 |
Posted by John Kleeman
In Questionmark’s white paper, The Role of Assessments in Mitigating Risk for Financial Services Organizations, we shared advice and requirements from financial services regulators about compliance-related testing for employees.
Do health care regulators also advise or require companies to test their employees to check understanding?
The answer is yes, and here are some examples.
The World Health Organization (WHO) states in its principles for good manufacturing practices for pharmaceutical products:
“Continuing training should also be given, and its practical effectiveness periodically assessed.”
WHO guidance also states:
“If training is conducted to achieve a goal, it is reasonable to ask if the goals of the organization’s training programme and the specific training course have been attained or not. Assessment and evaluation are conducted to determine if the goals have been met.”
The European Commission directive 2005/62/EC requires for organizations handling blood that
“Training programmes shall be in place and shall include good practice. The contents of training programmes shall be periodically assessed and the competence of personnel evaluated regularly.”
The US Department of Health and Human Services in its Compliance Program Guidance for Medicare Contractors states:
“Contractors should consider using tests or other mechanisms to determine the trainees’ comprehension of the training concepts presented.”
Also in the US, the Pharmacy Compounding Accreditation Board (PCAB) gives guidance that
“The pharmacy has SOPs for educating, training, and assessing the competencies of all compounding personnel on an ongoing basis, including documentation that compounding personnel is trained on SOPs.”
Just like in financial services, health care regulators strongly encourage and in some cases require that regulated organizations test their employees to ensure that they have understood training and that they are competent to do their jobs.
One thing health care regulators emphasize more than those overseeing financial services is the merit of giving observational assessments as well as knowledge tests — presumably because skills are often more practical. For example PCAB guidance says that:
“Staff competency can be evaluated by a combination of … direct observation … written tests [and] … other quality control activities”
Previously, in this series on assessments in health care, I’ve covered good practice in competency testing in the health care industry and shared analysis of why errors are made and how testing can help. I hope these examples of regulator guidance and requirements are also useful. | https://www.questionmark.com/to-your-health-what-assessments-do-regulators-require/?lang=en_GB |
2013 has been a brilliant year for the Fruit Harvesting Scheme. We have harvested crates and crates full of surplus fruit from around Oadby and Wigston - it's certainly been a bumper crop!
We have been working with various Food Banks to distribute the fruit to people in need, and have been able to help well over 100 families.
Thank you to everyone who has been involved; the owners of the trees, volunteer pickers and sorters, people who have helped to process the damaged fruit to make juice, and the food banks for helping to get the fruit to the people who need it most.
Friday, 11 October 2013
A bumper year for apples
The MindFuture Foundation’s primary purpose is to provoke awareness and spur debate about the use of next-generation algorithms creating emotional, rational and irrational decision making within AI. The way we think about the difference between human and artificial mindsets will shape how we plan for mankind’s future existence and living conditions.
As part of this, we have fashioned “The Book of Mindsets: A never-ending investigation”. It is not really a book per se. It is an alternative mindpool for sharing knowledge and debate about the future uses of AI-technologies. It collects knowledge and opinions in writing, visuals, audio or other formats that investigate different mindsets’ worldviews.
Chapter 1
The artificials
Is humanized AI-technology possible? This short article offers basic knowledge on how to define the human mind, mindset and intelligence.
Chapter 2
AI manifesto
The AI manifesto is created and written solely by machine learning. We have fed the algorithm a substantial dataset about art and artificial intelligence and asked it to write a manifesto for The MindFuture Foundation itself. The result was 100 pages long, which we then curated down to this manifesto. It may come across as “weird” – and certainly biased – but it is nonetheless a first attempt for an algorithm to “speak its mind”.
Chapter 3
The Next Step
Philosophically speaking, the only constant we can definitely identify is change – as in changing our minds about certain beliefs, or behaving differently because of changes in real-world contexts. Changes are determined by the cultural and social beliefs and contexts that inform our mindsets.
Scene 1
Digital Self
Facial scanning and reconstruction. Speech synthesis and voice cloning. Personality and sentiment analysis algorithms. What do these existing technologies mean for the creation of our digital identities? Do they represent one’s true self? And what happens to our data when we pass?
Scene 2
Personal Space
Machine learning algorithms aren’t perfect. They can contain bugs or bias. They can make mistakes, and often don’t know best. Or they can be hacked and controlled externally. How much control are we giving to these systems in our everyday lives? And what happens when their programming opposes your own will?
Scene 3
Eternal Data
Where does our data go? How does an algorithm sort through all the white noise, filtering out irrelevant information? And can the essence of a person truly be represented in a dataset; in a series of 1’s and 0’s?
Covid19 AI Battle
What does it mean for multiple artificial intelligence agents to debate Covid-19? What will the machine learning algorithms find worthy, or unworthy, of their attention? And is it possible for them to reach some sort of consensus, or will we merely bear witness to a political circus?
Covid 19 AI Battle established a unique dynamic in which two opposing neural networks, each trained to think about Covid-19 based upon data collected from Donald Trump and from Tedros Adhanom of the World Health Organization respectively, debate all Covid-19-related matters in real time.
Background:
Conservative management is recognized as an acceptable treatment for people with worsening chronic kidney disease; however, patients consistently report they lack understanding about their changing disease state and feel unsupported in making shared decisions about future treatment. The purpose of this review was to critically evaluate patient decision aids (PtDAs) developed to support patient–professional shared decision-making between dialysis and conservative management treatment pathways.
Methods:
We performed a systematic review of resources accessible in English using environmental scan methods. Data sources included online databases of research publications, repositories for clinical guidelines, research projects and PtDAs, international PtDA expert lists and reference lists from relevant publications. The resource selection was from 56 screened records; 17 PtDAs were included. A data extraction sheet was applied to all eligible resources, eliciting resource characteristics, decision architecture to boost/bias thinking, indicators of quality such as International Standards for Patient Decision Aids Standards checklist and engagement with health services.
Results:
PtDAs were developed in five countries; eleven were publically available via the Internet. Treatment options described were dialysis (n = 17), conservative management (n = 9) and transplant (n = 5). Eight resources signposted conservative management as an option rather than an active choice. Ten different labels across 14 resources were used to name ‘conservative management’. The readability of the resources was good. Six publications detail decision aid development and/or evaluation research. Using PtDAs improved treatment decision-making by patients. Only resources identified as PtDAs and available in English were included.
Conclusions:
PtDAs are used by some services to support patients choosing between dialysis options or end-of-life options. PtDAs developed to proactively support people making informed decisions between conservative management and dialysis treatments are likely to enable services to meet current best practice. | https://eprints.whiterose.ac.uk/165373/ |
Cold weather safety is a topic that should be discussed at length among utility workers who perform any outdoor job functions. That’s because, as with heat stress, cold stress can be a fatal threat. When you’re exposed to freezing temperatures for long periods of time, you run the risk of losing a dangerous amount of body heat, which, if not corrected immediately, could lead to frostbite, hypothermia and even death. There are a number of things to think about prior to and when working in the cold, and while we won’t talk about all of those things in this month’s Tailgate Topic, we’ll cover three of the most important items: dressing properly, staying hydrated and eating right, and keeping an eye on your co-workers.
1. Dress Properly
The golden rule for winter weather preparation is to dress in layers. One of the biggest problems with working in the cold is getting too warm and sweaty: damp clothing loses its insulating value, and on a cold and windy day hypothermia can then begin within just a few minutes. So, layering is key. Here are some layering basics:
- Layer 1 (base layer): Wear a light, long-sleeved base layer close to your skin. Thinner layers wick sweat better and dry faster.
- Layer 2 (mid-layer): This layer also should be a thin layer. Wool is a good choice; not only is it warm, but it will retain most of its warmth when wet. There are some fantastic flame-resistant wool garments currently on the market.
- Layer 3 (heat trap): This should be a zippered jacket with a hood – hooded zip-up sweatshirts are most commonly used for this layer.
- Layer 4 (outer shell): Choose a waterproof but breathable fabric, and make sure the garment is large enough to fit over all of the other layers. | https://incident-prevention.com/ip-articles/december-2017 |
Jeffares DC1,2, Jolly C1, Hoti M1, Speed D2, Shaw L1,2, Rallis C1,2, Balloux F1,2, Dessimoz C1,3,4,5, Bähler J1,2, Sedlazeck FJ6.
Department of Computer Science, University College London, London WC1E 6BT, UK.
Department of Ecology and Evolution and Center for Integrative Genomics, University of Lausanne, Biophore, Lausanne 1015, Switzerland.
Swiss Institute of Bioinformatics, Biophore, Lausanne 1015, Switzerland.
Department of Computer Science, Johns Hopkins University, Baltimore, Maryland 21218, USA.
Large structural variations (SVs) within genomes are more challenging to identify than smaller genetic variants but may substantially contribute to phenotypic diversity and evolution. We analyse the effects of SVs on gene expression, quantitative traits and intrinsic reproductive isolation in the yeast Schizosaccharomyces pombe. We establish a high-quality curated catalogue of SVs in the genomes of a worldwide library of S. pombe strains, including duplications, deletions, inversions and translocations. We show that copy number variants (CNVs) show a variety of genetic signals consistent with rapid turnover. These transient CNVs produce stoichiometric effects on gene expression both within and outside the duplicated regions. CNVs make substantial contributions to quantitative traits, most notably intracellular amino acid concentrations, growth under stress and sugar utilization in winemaking, whereas rearrangements are strongly associated with reproductive isolation. Collectively, these findings have broad implications for evolution and for our understanding of quantitative traits including complex human diseases.
Characteristics of SVs in S. pombe.
(a) Relative proportions of SVs identified. Duplications (DUP) were the most abundant SVs, followed by deletions (DEL), inversions (INV) and translocations (TRA). (b) Population allele frequency distribution of SVs, showing the frequencies of less abundant alleles in the population (minor allele frequencies). (c) Length distributions of SVs, log10 scale. Deletions were smallest (2.8–52 kb), duplications larger (2.6–510 kb) and inversions often even larger, spanning large portions of chromosomes (0.1 kb–5,374 kb, see d). Horizontal dotted lines show the size of chromosome regions that contain an average of 1, 10 and 100 genes in this yeast. Box plots indicate the first quartile, the median and the third quartile; whiskers extend to the most extreme data point, which is no more than 1.5 × the interquartile range from the box. (d) Locations of SVs on the three chromosomes compared with other genomic features. From outside: density of essential genes, locations of Tf-type retrotransposons, diversity (π, average pairwise diversity from SNPs), deletions (black), duplications (red) and breakpoints of inversions and translocations as curved lines inside the concentric circles (green and blue, respectively). Bar heights for retrotransposons, deletions and duplications are proportional to minor allele frequencies. Diversity and retrotransposon frequencies were calculated from 57 non-clonal strains as described by Jeffares et al.
CNVs are transient within fission yeast.
(a) For each of the 87 CNVs we calculated the genetic distance between strains using SNPs in the region around the CNV (20 kb up- and downstream of the CNV, merged) as the total branch length from an approximate maximum-likelihood tree (x axis, SNP-based branch length normalized to maximum value). We further calculated a CNV-based distance using the total branch length from a neighbour-joining tree constructed from Euclidean distances between strains based on their copy numbers (y axis, CNV-based branch length normalized to maximum value). The weak correlation indicates that CNVs are subject to additional or different evolutionary processes. (b) Histogram of the standard deviation of each CNV within a near-clonal cluster (see also ), relative to its standard deviation across strains not in the near-clonal cluster. Standard deviation is highly correlated with CNV-based branch length (Spearman rank correlation ρ=0.90, P<0.001) (). The highlighted CNVs have unusually high rates of variation within this cluster compared with other clusters. (c) Copy number variation of these highlighted CNVs plotted on a SNP-based phylogeny (20 kb up- and downstream of the DUP.III:274001..286000 CNV) shows their relative transience within the cluster, as well as their variation across other near-clonal clusters. SNP-based phylogenies for the other two selected CNVs also do not separate the strains with different copy numbers (individual plots for each CNV across clusters for its corresponding SNP-based phylogeny are available as ).
Transient duplications affect gene expression.
(a) Duplications occur within near-clonal strains. Plot showing average read coverage in 1 kb windows for two clonal strains (JB760, JB886) with the duplication (red), five strains without duplication (green) and two reference strains (h+, and h−) (black). Genes (with exons as red rectangles) and retrotransposon LTRs (blue rectangles) are shown on top (see for details). (b) Eight pairs of closely related strains, differing by one or more large duplications, selected for expression analysis. The tree indicates the relatedness of these strain pairs (dots coloured as in d). The position of the reference strain (Leupold's 972, JB22) is indicated with a black arrow. The scale bar shows the length of 0.003 insertions per site. (c) Gene expression increases for most genes within duplicated regions. For each tested strain pair, we show the relative gene expression (strains with duplication/strains without duplication) for all genes outside the duplication (as boxplot) and for all genes within the duplication (red strip chart). In all but one case (array 4), the genes within the duplication tend to be more highly expressed than the genes outside of the duplication (all Wilcoxon rank sum test P values <1.5 × 10−3). Box plots indicate the first quartile, the median and the third quartile; whiskers extend to the most extreme data point, which is no more than 1.5 × the interquartile range from the box. (d) Summary of expression arrays 1–8, with strains indicated as coloured dots (as in b), showing number of SNP differences between strains, sizes of duplications in kb (DUP, where ‘+X +Y' indicates two duplications with lengths X and Y, respectively). We show total numbers of induced (up) and repressed (down) genes, both inside and outside the duplicated regions. Arrays 2,3 and 7,8 (in yellow shading) are replicates within the same clonal population that contain the same duplications, so we list the number of up- and downregulated genes that are consistent between both arrays. See for details.
SVs contribute to quantitative traits.
(a) Heritability estimates are improved by the addition of SVs. Heritability estimates for 228 traits (), using only SNP data (x axis) range from 0 to 96% (median 29%). Adding SV calls (y axis) increases the estimates (median 34%), with estimates for some traits being improved up to a gain of 43% (histogram inset). The diagonal line shows where estimates after adding SVs are the same as those without (x=y). Inset: the distribution of the ‘gain' in heritability after adding SV calls (median 0.4%, maximum 43%). Points are coloured by trait types, according to legend top left. (b) The contributions of SNPs (grey), CNVs (red) and rearrangements (black) to heritability varied considerably between traits. Coloured bars along the x axis indicate the trait types. heritability estimates are in . The panel below bars indicates trait types as in the legend for part (a). (c, top) For some traits, SVs explained more of the trait variation than SNPs. Boxes are coloured as legend in a. (c, lower) Analysis of simulated data generated with assumption that only SNPs cause traits indicates that the contribution of SVs to trait variance is unlikely to be due to linkage. Traits from left are; with red inset at top, free amino acid concentrations (glutamine, histidine, lysine, methionine, phenylalanine, proline and tyrosine), with green inset liquid media growth traits (maximum mass in minimal media, time to maximum slope, most rapid slope and highest cell density in rich media), in with magenta inset colony growth on solid media (with Brefeldin, CuSO4, H2O2, hydroxyurea, 0.0025% MMS, 0.005% MMS, with proline and 0.001% SDS), wine traits with Burgundy inset (malic acid accumulation and glucose+fructose ultilisation), with grey inset liquid media conditions (caffeine lag, rate and efficiency, CsCl12 efficiency, diamide growth rate, EMS growth rate, ethanol efficiency, ethanol growth rate, galactose growth rate, growth rate at 40°C, HqCl2 lag, KCl efficiency, MgCl2 efficiency, MMS lag, NiCl lag, unstressed lag and rate, SrCl efficiency, tunicamycin lag and rate), and with yellow insets mating traits (the proportion of free spores, mating figures observed and total spore counts).
Both SNPs and rearrangements contribute to intrinsic reproductive isolation.
Spore viability was measured from 58 different crosses from Jeffares et al. (black) or Avelar et al. (red), with each circle in the plots representing one cross. An additive linear model incorporating both SNP and rearrangement differences showed highly significant correlations with viability (P=1.2 × 10−6, r2=0.39). Both genetic distances measured using SNPs and rearrangements (inversions and translocations) significantly correlated with viability when controlling for the other factor (Kendall partial rank order correlations with viability SNPs|rearrangements τ=−0.19, P=0.038; rearrangements|SNPs τ=−0.22, P=0.016). Some strains produce low-viability spores even when self-mated with their own genotype. The lowest self-mating viability of each strain pair is indicated by circle size (see legend, smaller circles indicate lower self-mating viability) to illustrate that low-viability outliers tend to include such cases (see for details). | https://www.ncbi.nlm.nih.gov/pubmed/28117401?dopt=Abstract |
Tai Chi & Qigong is a mind-body exercise therapy, typically used to manage chronic pain conditions. During Tai Chi & Qigong exercises, the slow motion and weight shifting improve musculoskeletal strength and joint stability. Kerry-Anne is a qualified instructor of Tai Chi, Qigong and Meditation. The class will include a cool-down section with a body scan or short meditation. You will need a camera and microphone (most laptops or iPad/tablet/phone devices have these). You can use Google Cast or Apple TV to mirror to your TV.
Facilitator Kerry-Anne Knibbs, Tai Chi and Qigong Instructor, Reiki and Meditation Master, Counsellor - Art Therapist. | https://www.ebbandflowstudioart.com/events-1/qigong-free-online-zoom |
A system includes a laser displacement sensor which is provided on a shoulder of a roadway, emits a laser beam which scans a roadway space in a height direction thereof, receives a beam reflected by an object which is present in the roadway space, and measures a distance up to a reflection point on the object, at which the laser beam was reflected; and a vehicle window detection device that detects a window of the vehicle based on the distance measured by the laser displacement sensor. The vehicle window detection device detects the window of the vehicle based on a change in a distance in a horizontal direction from the laser displacement sensor to the reflection point after the vehicle in the roadway space was detected. | |
Change an object to a period.
as.period changes Interval, Duration, difftime and numeric class objects to Period class objects with the specified units.
Usage
as.period(x, unit, ...)
Arguments
- x: an interval, difftime, or numeric object
- unit: A character string that specifies which time units to build period in. unit is only implemented for the as.period.numeric method.
- ...: additional arguments to pass to as.period
Details
Users must specify which time units to measure the period in. The exact length of each time unit in a period will depend on when it occurs. See Period-class and new_period. The choice of units is not trivial; units that are normally equal may differ in length depending on when the time period occurs. For example, when a leap second occurs one minute is longer than 60 seconds.
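As a minimal sketch of how unit lengths shift with context (assuming the lubridate package documented here is attached; the helper calls and the exact printed output may differ slightly between package versions):

library(lubridate)

# The same nominal unit covers different amounts of clock time depending on
# when it occurs: February 2011 has 28 days, March 2011 has 31.
feb <- as.interval(ddays(28), ymd("2011-02-01"))  # 2011-02-01 to 2011-03-01
mar <- as.interval(ddays(31), ymd("2011-03-01"))  # 2011-03-01 to 2011-04-01

as.period(feb)  # 1 month
as.period(mar)  # also 1 month, although it spans three more days than feb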
Because periods do not have a fixed length, they cannot be accurately converted to and from Duration objects. Duration objects measure time spans in exact numbers of seconds, see Duration-class. Hence, a one-to-one mapping does not exist between durations and periods. When used with a Duration object, as.period provides an inexact estimate; the duration is broken into time units based on the most common lengths of time units, in seconds. Because the lengths of months are particularly variable, a period with a months unit cannot be coerced from a duration object. For an exact transformation, first transform the duration to an interval with as.interval.
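The caveat about durations, and the as.interval workaround, can be sketched as follows (same assumptions as above: lubridate attached, and printed forms may vary by package version):

library(lubridate)

# Durations are exact second counts, so coercion to a period is only an estimate
d <- ddays(31)                 # exactly 31 * 86400 seconds
as.period(d)                   # estimated in days/hours/minutes/seconds, never in months

# The numeric method uses 'unit' to name the time unit the period is built in
as.period(5, unit = "hours")

# For an exact conversion, anchor the duration to a start date with as.interval()
as.period(as.interval(d, ymd("2012-02-01")))  # roughly "1 month 2 days" here, since February 2012 has 29 days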
A flaw discovered in motherboards manufactured by the server maker Supermicro has left more than 30,000 servers vulnerable to hackers, potentially allowing them to remotely compromise the management interface of unpatched servers.
The vulnerability actually resides in the Baseboard Management Controller (BMC) in the WPCM450 line of chips incorporated into the motherboards. A security researcher on the CARInet Security Incident Response Team discovered that the BMC of Supermicro motherboards contains a binary file that stores remote login passwords in clear text, and that the file is available for download simply by connecting to a specific port, 49152.
Baseboard Management Controller (BMC) is the central part of the microcontroller that resides on server motherboard or in the chassis of a blade server or telecom platform. The BMC links to a main processor and other onboard elements via a simple serial bus.
Baseboard management controllers are part of the Intelligent Platform Management Interface (IPMI) protocol, which defines communication protocols; a server administrator can access the BMC by using an IPMI-compliant management application loaded on a computer or via a web interface on port 49152.
In order to compromise vulnerable servers, an attacker can scan the Internet on port 49152 to identify exploitable servers and can then download the remote login passwords, which are stored in clear plain text in a binary file retrievable from the motherboard at "GET /PSBlock".
A recent Internet scan performed with Shodan, a specialized search engine for finding embedded systems, found approximately 31,964 machines still vulnerable, a count that doesn't include vulnerable systems installed in the virtual environments used by shared hosting services.
"This means at the point of this writing, there are 31,964 systems that have their passwords available on the open market," wrote Zachary Wikholm, a senior security engineer with the CARInet Security Incident Response Team.
An analysis of the passwords available for download also indicates that thousands of the passwords are really easily guessable or the default ones.
"It gets a bit scarier when you review some of the password statistics. Out of those passwords, 3,296 are the default combination. Since I'm not comfortable providing too much password information, I will just say that there exists a subset of this data that either contains or just was 'password.'"
He also found that a lot of systems are running older versions of the Linux kernel. According to the Shodan search, approximately 23,380 of the total hosts are running a 2.4.31.x kernel, another 112,883 are running a 2.4.30.x kernel, and 710,046 systems are running a 2.4.19.x kernel.
The 84 vulnerable firmware versions are listed here, and server administrators are advised to apply available patches from vendors. In order to apply the patches, you need to flash the device with a new firmware update. As a quick and temporary fix, administrators can disable all universal plug-and-play processes and their related child processes using a secure shell connection to the vulnerable device.
The Centers for Disease Control and Prevention is investigating a multi-state outbreak of campylobacter infections linked to contact with puppies sold through Petland, a national pet store chain.
Campylobacteriosis, a common bacterial infection, can cause diarrhea, abdominal pain and fever, according to the CDC.
As of Sept. 11, a total of 39 people have fallen ill in seven states, including 11 cases in Florida, five in Kansas, one in Missouri, 18 in Ohio, two in Pennsylvania, one in Tennessee and one in Wisconsin. There have been nine hospitalizations and no deaths reported. The first case within this outbreak occurred on September 15, 2016.
Symptoms, which typically begin within two to five days after contact with the bacteria, last around a week, though some people don't experience any signs of illness.
While many cases go unreported, about 14 cases for every 100,000 people are diagnosed each year in the United States, according to the CDC. Overall, campylobacteriosis -- which occurs much more frequently in the summer months than in the winter -- is estimated to affect over 1.3 million persons every year.
Almost every patient recovers within five days without treatment, though drinking extra fluids is recommended. In rare cases, an infection can lead to complications, including paralysis and even death.
People with weakened immune systems, such as infants, the elderly and people with cancer or other severe illnesses, are most at risk for a serious infection.
People sickened in the current outbreak range in age from younger than 1 year to 64 years old.
Most -- 28 of the 39 -- are female and 12 are Petland employees. Of the total, 27 either recently purchased a puppy at Petland, visited a Petland, or visited or live in a home with a puppy sold through Petland.
"The CDC has not identified any failures of Petland's operating system that would lead to any campylobacter infection," Elizabeth Kunzelman, a spokeswoman for the company, said in an email. "Last week the CDC advised Petland to 'continue to do what we are already doing' and to continue to educate customers and staff to sanitize their hands after handling our puppies."
The CDC noted that no matter where a puppy comes from, it may carry a campylobacter infection. Petland is cooperating with the investigation, the government agency said.
Kunzelman added that Petland has many sanitation stations in each store and has strict kennel sanitation procedures and protocols put in place by consulting veterinarians.
"Our extensive health warranty protects both our pets and our customers from bacterial, viral and congenital issues," she said.
Most people become infected with campylobacteriosis through eating raw or undercooked poultry. Most infections are singular and not part of an outbreak.
By contrast, outbreaks of campylobacteriosis are often linked to unpasteurized dairy products, contaminated water, poultry and produce. Sometimes, though, people get sick after coming into contact with the stool of an ill dog or cat.
To avoid contamination from your pet, the CDC recommends you wash your hands thoroughly after touching dogs, their poop, or their food. Extra care is needed so that children playing with puppies also wash their hands carefully. Pick up and dispose of dog poop carefully, especially in areas where children might play. Finally, contact your veterinarian if you see signs of illness in your puppy or dog.
While puppies and dogs with a campylobacter infection might have diarrhea, vomiting, or a fever, just like humans, they also sometimes show no signs of illness.
The investigation is ongoing, according to the CDC, which is working with the US Department of Agriculture's Animal and Plant Inspection Service, and several health departments. | https://www.abc15.com/news/national/puppies-from-national-pet-store-chain-sicken-39-people-officials-say |
Aquatic plants are plants that have adapted to living in aquatic environments (saltwater or freshwater). They are also referred to as hydrophytes or macrophytes. In lakes and rivers macrophytes provide cover for fish and substrate for aquatic invertebrates, produce oxygen, and act as food for some fish and wildlife.
Aquatic plants require special adaptations for living submerged in water, or at the water's surface. Aquatic plants can only grow in water or in soil that is permanently saturated with water. Aquatic plant seeds include species like Azolla fairy moss and water cabbage, which also make aquatic settings beautiful and sparkling.
Floating plants can be found in fresh or salt water. The leaves of these plants are firm and remain flat in order to absorb more sunlight. Common examples of floating plants include various types of lilies (such as the water lily or banana lily) and the water hyacinth. These aquatic plant seeds produce plants which are eye-catching and pleasurable to see.
From the Green River: Forensic Evidence and the Prosecution of Gary Ridgway
Baird, J. (2006) From the Green River: Forensic Evidence and the Prosecution of Gary Ridgway. American Academy of Forensic Sciences Annual Meeting Green River.
Published on: 2/1/2006
The goal of this presentation is to describe the apprehension of Gary Ridgway as the perpetrator of multiple homicides and discuss the role of forensic evidence in the prosecution of this serial killer. This presentation will impact the forensic community and/or humanity by providing attendees with insight into the organization of a multi-agency manhunt and an understanding of the enormous contribution and the limited role of forensic evidence in this case. In July and August of 1982, five women were murdered and left in or near the Green River in King County, Washington. All five had a history of prostitution; all five had been strangled. These murders were the community’s first notice that a serial killer was preying on young women. Over the next several years, the bodies of more and more victims, most of them teenage girls, were found in wooded or remote parts of King County. Most were found with no clothing or possessions. In many cases, months or even years had passed since the victim’s disappearance, and all that was found were skeletal remains. Identification of the victims sometimes took years. Eventually, 49 victims were listed as victims of the Green River Killer. Despite extraordinary efforts by county, state, and federal investigators, and public and private forensic scientists, these murders remained unsolved for nearly two decades. Hundreds of suspects were identified, but no convincing evidence of their guilt was developed. Finally, in 2001, Beverly Himick and Jean Johnston at the Washington State Patrol Crime Laboratory discovered DNA evidence linking Gary Ridgway to several of the Green River homicides. Ridgway, a King County resident who had worked for decades in the paint shop at a local truck factory, was charged with four murders. For the next 18 months, a team of detectives and prosecutors painstakingly reviewed approximately five dozen unsolved homicides (most of them attributed at the time to the “Green River Killer”) for any evidence linking them to Ridgway or any other suspect. Hundreds of items of evidence were submitted to scientists in various forensic disciplines throughout the country. In 2003, shortly before the court-imposed charging deadline in the case, Skip Palenik, a private forensic scientist at Microtrace, reported that he had discovered tiny spheres of sprayed paint from a number of the crime scenes and on evidence seized from Ridgway in the 1980s. Based on this evidence, Ridgway was charged with three additional murders. Faced with this additional forensic evidence of his guilt, Ridgway offered to provide prosecutors with a full account of his criminal activities in King County and to plead guilty to all the murders he had committed in that jurisdiction if the prosecution would agree not to seek the death penalty. After considerable discussion and contemplation, Norm Maleng, the King County Prosecutor, accepted this offer. Detectives, prosecutors, and mental health experts interviewed Ridgway for nearly six months. In November of 2003, Gary Ridgway pled guilty to 48 counts of aggravated, first-degree murder. The Ridgway case illustrates both the extraordinary power of contemporary forensic science, and its equally striking limitations. Without DNA evidence, the Green River Killings would never have been solved.
Yet despite extraordinary efforts by premier public and private forensic laboratories employing state-of-the-art methods, no physical evidence whatsoever linked Ridgway (or any other suspect, identified or unidentified) to the majority of the Green River murders. Even after Ridgway was identified – and after he provided irrefutable corroborative evidence of his guilt to investigators (e.g., leading them to additional bodies) – forensic science was unable to link him to most of his victims. The unsophisticated but ruthlessly successful way Ridgway committed his crimes – the victims he chose, the manner in which he killed them, and the way he disposed of their bodies – yielded surprisingly little forensic evidence of his guilt.
This department is responsible for the administration of public education: primary, post-primary and special education.
The mission of the department is to provide high-quality education, which will enable individuals to achieve their full potential and to participate fully as members of society and contribute to Ireland's social, cultural and economic development.
To promote equity and inclusion.
To plan for education that is relevant to personal, social, cultural and economic needs.
To enhance the capacity of the Department of Education and Science for service delivery, policy formulation, research and evaluation.
The Department of Education is responsible for the central administration of all aspects of education and related services in Northern Ireland except the higher and further education sector, which is within the remit of the Department for Employment and Learning.
This department has responsibility for the development of pre-school, primary, post-primary and special education; the youth service; the promotion of community relations within and between schools; and teacher education and salaries.
The department also aims to ensure that children, through participation at schools, reach the highest possible standards of educational achievement. The department also promotes personal well-being and social development.
The North Eastern Education and Library Board was established in 1973. The Board consists of 35 members, appointed by the Minister responsible for the Department of Education for Northern Ireland (DE).
The Board is the local education and library authority for most of County Antrim and the eastern part of County Derry.
The Board is the local authority for education, library and youth services in the district council areas of Ards, Castlereagh, Down, Lisburn and North Down in Northern Ireland. Its mission is to raise the standards of learning and levels of achievement of the people of the Board’s area through the provision of high quality education, library and youth services.
The Southern Education and Library Board serves the district council areas of Armagh, Banbridge, Cookstown, Craigavon, Dungannon & South Tyrone, Newry & Mourne. It spans 1,450 square miles, with a total population of approximately 332,000 people; including 75,000 pupils and with over 137,900 registered library users.
The mission of the board is to ensure that high quality education, youth and library support services exist throughout the area in order to promote learning, provide opportunities for personal development, encourage individuals to acquire core skills, promote spiritual and moral values in individuals, and foster in the community a sense of shared responsibility, respect for one another and appreciation of the worth of the individual person.
The Western Education and Library Board (WELB) in Northern Ireland is the local authority for the provision of education, library and youth services in the Council areas of Omagh, Fermanagh, Derry, Strabane and Limavady. The Board has statutory responsibility for primary and secondary education within its area and it is also responsible for the provision of a youth service and library services to schools and the public.
The Board's Administrative Headquarters is located in the county town of Omagh, with district offices in Derry and Enniskillen. The area has a population of over 280,000. There are over 63,000 pupils attending schools and over 105,000 registered library users. The Board provides or maintains 12 Nurseries, 194 Primary, 10 Special, 36 Secondary, 4 Grammar Schools and 16 Branch Libraries. | http://kildare.ie/education/general-resources/government-departments.asp |
keep the rhythm of the poles steady and consistent. People familiar with folk dancing can often determine what country a dance is from even if they have not seen that particular dance before. According to popular tradition, the dance was created by a lady named Kanang who choreographed the steps while dancing at a baptismal party. A rotational order for dancing and moving the poles will need to be established. The music is in 3/4 time and is usually accompanied by castanets. The dance has three parts. Dancers imitated the tinikling bird's legendary grace and speed as they walked between grass stems, ran over tree branches, or dodged bamboo traps set by rice farmers. There are men's and women's versions of the dance since they wear malongs in different ways.
Sometimes, the sticks would have thorns jutting out from their segments. Purpose of Activity: Students will learn a dance that is frequently performed in the Philippines, as well as the legend and history surrounding the dance. Ask the students to figure out how to use the basic step so that all four dancers can move simultaneously around the poles. Use hollow bamboo, not rattan, which is similar looking but solid. When performed by dance troupes or in cultural shows, Tinikling is typically performed in the "Rural Suite", which includes dances originating from Filipino Christians that have a more "folksy" character. A folk dance is developed by people to reflect the life of the people of a certain country or region. Variations: This activity can be made into a Christmas presentation by making ankle bells (jingle bells threaded on elastic) for each dancer and wrist bells for each person moving the poles. Leap with R foot then L foot to center of poles (counts 2-3). Leyte and Samar, the largest of the Visayans, form the eastern edge of the group, shielding the remaining islands from Pacific storms.
In 2011 the City of Keokuk, Iowa, acquired the Keokuk Union Depot from Pioneer Railcorp for the purpose of historic preservation and community use and leased the associated real estate for 99 years. Erected in 1891, the building was designed by the Chicago architectural firm Burnham and Root. It is notable as one of John Root’s last designs for public buildings in his distinctive Romanesque Revival style. Thereafter, with Root’s death in 1891, architectural design in North America took a different turn.
Architectural features of the Depot include Root’s characteristic use of earth tones, arched windows and high-pitched roofline; the interior features an oak-paneled waiting room and octagonal ticket booth.
In 2011 an intern from the School of the Art Institute of Chicago developed an architectural study that resulted in the Depot’s being listed in the National Register of Historic Places. Restoric LLC, a Chicago firm specializing in the restoration of historic structures, completed a thorough architectural study in 2014 and produced a 180-page Historic Structure Report.
The plans drawn up from the Historic Structure Report form the basis for returning the Depot as closely as possible to its 1891 appearance, including replacing the shingled roof with clay tiles and rebuilding the central peaked tower, which was leveled off around 1950. The goal is to restore the building while adapting it to serve the community in a manner consistent with both historic preservation standards and contemporary requirements for public use.
The Keokuk Union Depot Commission, appointed by the Mayor, administers the preservation project as part of Keokuk’s larger Riverfront Development Master Plan. No municipal tax funds are to be used for Depot projects. A separate Keokuk Union Depot Foundation, a tax-exempt charitable organization, was established in 2012 to raise the necessary funds from grants and private sources.
In 2014 the Keokuk Union Depot Foundation launched a Capital Campaign upon securing a one-third matching grant of $333,000 from the Jeffris Family Foundation toward the projected $1 million-plus cost of restoring the roof. Substantial grants for specific phases of the project from the Iowa Historical Resource Development Program (HDRP) were also instrumental in accelerating the pace of the project by enabling work on the building’s eaves and reconstruction of the apex. Grants by local foundations and community groups have contributed to the effort, but a large segment of support has come from individuals who share the vision for preservation of this community resource. | https://www.jeffrisfoundation.org/project/keokuk-union-depot/ |
The University of Texas at Austin and its partners at Los Alamos National Laboratory (LANL), Environmental Defense Fund, and Sandia Technologies, LLC have investigated Texas offshore subsurface storage resources in the Gulf of Mexico as candidate geologic storage formations. This project was designed to identify one or more CO2 injection site(s) within an area of Texas offshore state lands (extending approximately 10 miles from the shoreline) that are suitable for the safe and permanent storage of CO2 from future large-scale commercial CCS operations. The approach used in identifying these injection sites was to use both historic and new data to evaluate the candidate geologic formations, including an extensive Gulf Coast well database and seismic survey database. A major project effort was to characterize three selected offshore areas with high-resolution seismic surveys that were conducted by the project. In addition, reservoir simulations were performed and work was conducted to evaluate the effects of chemical reactions resulting from injection of CO2 into the identified formations. A risk analysis and mitigation plan has been generated in support of near-term commercial development efforts.
Project Benefits
The overall effort provides greater insight into the potential for geologic formations across the United States to safely and permanently store CO2. The information gained from this endeavor is being used to refine a national assessment of CO2 storage capacity in deep geologic formations. Specifically, the project’s ability to develop and utilize offshore geologic storage resources contributes significantly to the management of CO2 emissions from various emission sources located in southeastern Texas. The results from this study are helping to provide a summary of basin-scale suitability and identified and prioritized potential offshore CO2 geological storage opportunities. | https://netl.doe.gov/research/proj?k=FE0001941&show=ppp |
Dedicated to a foundation stone of western artistic training, this exhibition attempts a celebratory note as the Royal Academy approaches its 250th anniversary.
The study of nature was fundamental to Renaissance thinking, and as artists aspired to the naturalism they perceived in Antique sculpture, working from the life took on a new significance that endured, one way or another, into the 20th century. In the 18th century, as institutions like the Royal Academy replaced the system of apprenticing artists in workshops, the life room achieved quasi-sacred significance, access to which would only be granted after a lengthy period copying prints and casts of Ancient and Renaissance sculptures. It’s no coincidence that 19th century paintings and prints like Rowlandson’s Drawing from Life at the Royal Academy, 1808 (main picture), tend to show students in the life room, a setting that embodied the pinnacle of artistic training, the primacy of drawing and implicitly the learned status of artists conversant with precedents from the ancient and more recent past.
While the National Portrait Gallery’s recent exhibition The Encounter offered a nuanced assessment of drawing from life as the backbone of Renaissance workshop practice, the Royal Academy’s show lacks anything like its focus or insight. We never really get any sense of how working from life might practically or conceptually have shaped the output of early academicians, and we are reliant on several fine casts of classical sculptures from the Royal Academy’s collection to evoke something of their experiences (pictured right).
How artists from our own times have responded to the life receives slightly more productive attention, with Lucian Freud’s barely begun painting A Beginning, Blond Girl, 1980, perhaps the most fascinating piece in the whole show, showing a working method that seems entirely at odds with the appearance of Freud’s finished works. The figure is set out in the most ghostly and yet simultaneously, the most assured terms, with only a small section of the face having been worked up in detail. It’s a process that seems more sculptural than painterly, the modelling done inch by inch, with paint built up layer by layer in small and precise, but still confident strokes.
Life drawing had fallen out of fashion by the Seventies and Eighties, and Goldsmith’s College went so far as to ban it because it was felt to objectify women. Perhaps in light of this Jeremy Deller’s largely self-explanatory if still rather surprising project, the Iggy Pop Life Class, 2016, might be interpreted as demonstrating the irrepressible vigour of the life class, despite everything. In the end it’s hard to resist the very opposite conclusion which is that by making it the subject of an artwork, it becomes an ironic observation of a dying practice.
If the formal institution of the life class has lost ground, examples of work by Jenny Saville and Chantal Joffe attest to the continued importance of the female nude. They lack the force they deserve though, largely because the exhibition does nothing to address the overwhelmingly male character of the 19th and early 20th century life room, in which with few exceptions, women were only welcome as models. Consequently, the visceral treatment of the female body by women painters, and in Joffe’s case, her treatment of her own body, lacks the context needed to make any impact (pictured left: Chantal Joffe, Self Portrait with Hand on Hip, 2016).
The real hobby horse of this exhibition is virtual reality, and Jonathan Yeo’s self-portrait sculpture, cast in collaboration with Google Arts & Culture, is the product of a strange mix of sculpture and painting – painting in three dimensions – made possible by Google’s Tilt Brush software. Elsewhere, the relationship between VR technologies and working from the life is more strained, with Farshid Moussavi’s architectural environment and Yinka Shonibare’s three-dimensional painting, offering a bizarre conclusion to an exhibition that never really quite got started. | https://www.theartsdesk.com/node/80633/view |
January 2017, due to the illness of the Secretary of State.
The visit took place in the wake of President Sissi’s successful visit to India last September, which established three main pillars of relations between the two countries: political and security cooperation, economic and scientific cooperation, and cultural cooperation and strengthening relations between the two countries and peoples. In this context, the seminar focused mainly on a number of key themes centered on ways to advance Egyptian-Indian relations and enhance the areas of partnership between them, especially in the economic field, in the context of Egypt’s implementation of the economic reform program and the need to consider signing a comprehensive strategic partnership agreement with India, similar to the agreement with China, in light of the commonalities and ways of benefiting from the Indian experience in the fields of renewable energy, small and medium enterprises and security cooperation in the field of combating terrorism. Participants confirmed the need for coordination between the Egyptian and Indian visions on the regional situation and the geopolitical landscape in West Asia and North Africa, especially under the new American administration, and on the rise of new global and regional forces such as Russia, China, Iran, Turkey, Israel and other countries in the absence of an effective role of the Arab League. The two countries should play an active role in settling the region’s conflicts, foremost of which is the Palestinian issue, to achieve security and stability in the region as a whole, and should coordinate efforts at the international level on reform of the UN Security Council to ensure equitable representation of the developing world. Combating terrorism from a global perspective and pursuing the United Nations’ 2030 agenda for sustainable development were also discussed during the session.
Understand what Processes and Threads are and what it means to have multiple Threads active in a Java program. Gain a basic understanding of how to use Threads.
An executing Java program is called a Process. That process executes your instructions (program) sequentially following the path you created when you wrote your program. This single path is called a thread. As such, your program is only doing one activity (Java statement) at a time and working on one task (your program's list of statements) at a time. For most situations, this works fine. But there are times when you would like to have your program doing more than one thing at a time. Using Java Threads it is possible to create additional threads or paths of execution that run in parallel with the main (or first) thread. The effect is that your program can be doing more than one thing (task) at a time. Doing robotics on our platforms you will not need additional threads (multi-threading) most of the time but there are some cases where multi-threading can be useful. Threads can be used to simplify the main path in your program or to perform repetitive tasks while the main thread is busy with something else, sleeping or waiting for some user action.
Like many aspects of programming in Java (and other languages), threads are simple in concept but potentially complex in implementation. There are several ways to do threading and multi-threading can create very interesting and hard to find bugs in your program. But it is possible to keep it simple and get benefit from multi-threading without getting into too much of the possible complexity. We are going to explore basic threading here and there will be example programs in each of the platform sections. It is probably best for you to go now to the section for your hardware platform and work through the examples until you come to the one on using Threads. Then return here to finish this lesson.
When creating a new thread of execution in a Java program, there are two ways to do it. You can implement the runnable interface or you can extend the Thread class. We are only going to look at extending the Thread class.
When creating a new thread, the main thing we need to define is the code you want executed in that thread. When you create a new thread, you are telling the JVM: here is a bit of code, start running it separately from the main thread and run it until the code path comes to an end. The thread code can have its own private variables that exist only while the thread is running, and the thread shares the class level variables of the class that creates the thread, assuming the thread class is an inner class. An inner class is a class within a class, and doing threading with inner classes greatly simplifies things.
Let's look at a simple example:
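One way such an example might look; the inner class MyThread and the shared class-level variable i follow the discussion below, while the enclosing class name and loop counts are assumptions:

```java
public class ThreadExample
{
    // Shared class-level variable; only the thread changes it (see the
    // concurrency rule discussed later in this lesson).
    private static int i;

    // Inner class extending Thread; the code we want to run goes in run().
    private static class MyThread extends Thread
    {
        public void run()
        {
            try
            {
                while (!isInterrupted())
                {
                    i++;            // increment the value every second
                    sleep(1000);
                }
            }
            catch (InterruptedException e) { /* just another signal to stop */ }
            catch (Exception e) { e.printStackTrace(); }
        }
    }

    public static void main(String[] args) throws InterruptedException
    {
        MyThread thread = new MyThread();
        thread.start();

        // The main thread prints the value of i every half second.
        for (int count = 0; count < 10; count++)
        {
            System.out.println("i=" + i);
            Thread.sleep(500);
        }

        thread.interrupt();         // tell the thread to stop
    }
}
```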
Here the main thread prints the value of i every half second and the thread increments the value every second. You can see by the results of this code that the two threads run independently of each other.
The code we want to run in the thread is put in the run() method of the thread class. Threads are started with the start() method and stopped with the interrupt() method. When thread.interrupt() is called, the Thread class isInterrupted() method returns true. You should look for this to exit your thread code. If you happen to be in a blocking method when interrupted, sleep() in this case, the InterruptedException will be thrown. You normally just ignore that exception as it is just another signal to stop your thread code. The second catch statement catches and reports any errors that might occur in your code.
Here is this example in CodingPoint. Threads can be used in many ways and there are many methods on the Thread class for managing and coordinating threads. More complex threading is beyond the scope of this lesson and if you use threads it is best to keep it simple as shown here.
When doing multiple threads, you frequently need to share data between threads. Coordinating access to shared variables is called concurrency and is a complex and multi-faceted subject. Java contains many features to allow threads to coordinate write access to shared variables and objects by multiple thread. Again these features are beyond the scope of this lesson. To keep it simple and avoid concurrency issues, follow this rule: variables should only be updated by one thread. They can be read by several threads but only changed by one. In the above example, only the MyThread class changes the value of variable i.
Here is a simple tutorial and a video on multi-threading. Here is a more detailed tutorial including concurrency and here is the official Java documentation on threading.
Understand the Singleton Design Pattern and how it is used.
In a previous lesson, we discussed static variables and methods. Static variables and methods are available without an instance of their containing object and are shared with all other object instances that exist in your program. This is used for global variables and utility methods that don't really have the aspect of multiple instances that many objects do. We also said that Java does not support static classes. Lets explore the idea of static classes in more detail.
A static class would be useful when you will have only one instance of a class in existence at any time in your program. A robotics example might be a class that handles the teleop phase of the robot game. You really would not want to have more than one instance of your teleop class existing at the same time since the hardware interface can't be shared. So it would be nice to be able to define your teleop class as static.
Since you can't, you could define all variables and methods in your teleop class to be static and that would technically achieve the result you are looking for. However, it would still be possible to use the new keyword and create multiple instances of your teleop class. This would not make much sense as the fields and methods are static. However, you can disable the new keyword for a class by marking the class constructor private. Now you can't create instances of the teleop class with new and you have to access the class variables and methods using the class name. This will work but at the end of the day it is kind of messy and different than most classes you would write in Java.
A better alternative might be a regular class that is limited to a single instance and that single instance is shared when you ask for a new instance of that class. This can be done using the Singleton Design Pattern.
A quick note about design patterns. Design Patterns are coding techniques or design ideas shared by programmers across the world. Like code libraries, design patterns are idea or concept libraries. Singleton is a design pattern that describes a way to have a single instance object. This is how it works:
To create a Singleton class, you add a private class level static variable with the data type of the class itself. Next you mark the class constructor as private to disable the Java new keyword. Finally you add a method called (by convention) getInstance(). The getInstance() method checks to see if the static class variable is null, and if it is null, creates an instance of the class and stores the reference in the class variable and returns the reference to the caller. If the static class variable is not null, getInstance() returns the existing reference to the single existing instance of the class to the caller. In this way all callers to getInstance() get the same reference to the single instance of the class. The rest of the class can be written just like a normal class and the variables and methods are accessed via the instance reference in the calling class.
Here is an example of a singleton class:
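A sketch of what such a class might look like; the class name MySingleton and the request counter are assumptions, while the private constructor, getInstance(), and instanceCount follow the description above:

```java
public class MySingleton
{
    // The single, shared instance; created the first time it is requested.
    private static MySingleton instance;

    // How many instances have been created and how many times getInstance()
    // has been called (requestCount is an assumed name).
    private static int instanceCount;
    private static int requestCount;

    // Private constructor: disables the new keyword outside this class.
    private MySingleton()
    {
        instanceCount++;
    }

    // Returns the one and only instance, creating it on the first call.
    public static MySingleton getInstance()
    {
        requestCount++;

        if (instance == null) instance = new MySingleton();

        return instance;
    }

    // Regular methods, accessed through an instance reference as usual.
    public int getInstanceCount() { return instanceCount; }

    public int getRequestCount() { return requestCount; }
}
```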
Here is how this might be used:
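A sketch of the calling code; all three references end up pointing at the same object:

```java
public class SingletonDemo
{
    public static void main(String[] args)
    {
        MySingleton ref1 = MySingleton.getInstance();
        MySingleton ref2 = MySingleton.getInstance();
        MySingleton ref3 = MySingleton.getInstance();

        System.out.println("inst=" + ref1.getInstanceCount() + ";req=" + ref1.getRequestCount());
        System.out.println("inst=" + ref2.getInstanceCount() + ";req=" + ref2.getRequestCount());
        System.out.println("inst=" + ref3.getInstanceCount() + ";req=" + ref3.getRequestCount());
    }
}
```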
Here we have 3 references to the singleton class but only one actual object instance has been created. Note that the variable instanceCount and method getInstanceCount() are coded and accessed just as they would be in a normal class. This code would print out:
inst=1;req=3
inst=1;req=3
inst=1;req=3
Here is the example code in CodingPoint. Here is a video discussing the Singleton pattern.
Singleton classes can be difficult to understand at first, but are very useful in Java programming and in robotics in particular.
Understand what static fields and methods are and how and when to use them.
Normally, class members (variables and methods) are accessed via an instance reference. Leaving methods aside for the moment, this is because class variables exist separately for each instance of a class (created with the new keyword). If you have a variable x in a class and create two instances of the class, each instance will have its own x variable, access to which is by the instance reference. You also access methods via the instance reference. Here is an example:
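A minimal sketch; the names theVar, instance1, and instance2 come from the discussion below, and the inner class is marked static as noted at the end of this lesson:

```java
public class InstanceExample
{
    static class MyClass
    {
        public int theVar;          // instance variable: each object gets its own copy
    }

    public static void main(String[] args)
    {
        MyClass instance1 = new MyClass();
        MyClass instance2 = new MyClass();

        instance1.theVar = 3;       // sets theVar in the first object only
        instance2.theVar = 7;       // sets theVar in the second object only

        System.out.println("theVar=" + instance1.theVar);
        System.out.println("theVar=" + instance2.theVar);
    }
}
```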
The result:
theVar=3
theVar=7
Instance1 and instance2 refer to separate object instances of the class and as such each has its own theVar variable which has its own value. The variable is said to be an instance variable.
What if we would like to have a class level or global variable? One that is not specific to any instance of the class but exists as a single copy in memory at the class level? We can do that with the static modifier. Marking a variable as static means there is only one copy of the variable for all class instances. The static variable is created when first accessed and persists as long as the program runs. Any instance of the class can access the static variable as it is shared among all instances of the class.
Since static variables are not accessed via an instance reference, you use the class name with a dot to access the variable.
You can also mark methods as static. This means the method does not need an instance reference to be called. The method is class level or global. Note that static methods can only access the static variables in the same class. Non-static or instance methods can access static and instance variables. Here is an example:
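A sketch along those lines, matching the output and discussion below; the static method name showCount is assumed, and the erroneous final statement is shown commented out:

```java
public class StaticExample
{
    static class MyClass
    {
        public static int globalCount;  // class-level: one copy shared by all instances
        public int instanceVar;         // instance-level: one copy per object

        public MyClass()
        {
            globalCount++;              // count every instance created
        }

        public static void showCount()  // static method: callable without an instance
        {
            System.out.println("instance count=" + globalCount);
        }
    }

    public static void main(String[] args)
    {
        MyClass.showCount();            // prints 0: no instances created yet

        MyClass instance1 = new MyClass();
        MyClass instance2 = new MyClass();  // globalCount is now 2

        MyClass.globalCount = 3;        // direct access to the public static variable

        MyClass.showCount();            // prints 3

        // This last statement would generate a compile error: a non-static
        // variable cannot be accessed through a static (class name) reference.
        // MyClass.instanceVar = 5;
    }
}
```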
This example would print out:
instance count=0
instance count=3
The example uses a static variable to count how many instances of MyClass are created. We increment globalCount in the class constructor. This would make globalCount = 2 but to demonstrate static variable access, we directly increment globalCount to 3. We can do this since globalCount has public access. Note that the first output is zero because we called the static method which caused the static globalCount varible to be created and initialized, but we have not yet created any instances of the class. Note that the last statement would generate a compile error since we are accessing a non-static variable through a static (class name) reference.
Here is the example above on CodingGround. Fix the error and demonstrate the program.
Here are two videos (video1, video2) about static members. Here is a detailed discussion of static members.
Note: While classes can't normally be labelled static, an inner class, that is a class within a class, can be. We won't dicuss this as it is beyond the scope of this curriculum, but the use of the static keyword on the inner classes in our CodingGround examples is required when inner classes are defined in the same class as the main() method.
Understand logging basic concepts.
Logging, also called tracing, is the practice of recording debugging information from your program in a disk file for later examination. When logging, you place method calls in your program to call either the Java Logging system or helper methods in code provided by this course. As said in the last lesson, file output is a topic beyond the scope of this curriculum and the details of using the Java Logging system directly are as well. But, due the usefulness of logging in debugging robot programs, we are providing you with code to do logging for you. In each of the three sections on the FIRST robotic platforms, there will be a lesson on how to implement logging in your programs.
It is really pretty simple. You add a .java file containing the logging code to your project and then in your own code, you can call one of the logging methods provided by the logging code to record whatever information you think is useful to a file on the robot controller. You then use the appropriate utility program to pull that file back to your development PC where you can examine it. Each record in the log file contains the time of day, the class in your code where you called the log method and the source file and line number where that call is located. This makes it easy to go from the log file back to your code.
Understand how input/output with disk files fits into robotics programming and where to go for more information.
Writing data to and reading data from disk files is a common activity for Java programs in many situations. However, file input/output (I/O) is rarely used in the programs created for robots on the platforms we are working with. As such, and given that file I/O is a large and complex topic, it is not going to be covered here. For those who are interested, here are some resources where you can learn more:
File I/O in Java is done with classes available in the java.io package. You can read about it here and many other places on the web. A newer package of I/O classes was released with Java 7 called java.nio.file and you can read the official documentation here.
Understand what exceptions are and how to use them to handle errors in Java programs.
When running a Java program, if the JVM detects an error it will generate an error condition called an Exception. An Exception is actually an object that contains information about the error and is available for your code to capture and handle as needed. Generating an exception is called throwing, since all Exception objects are subclasses of the Java Throwable object. When an exception occurs, execution of your program stops at that point and the JVM will look for special code that handles exceptions. If an exception is not explicitly handled by your code, the JVM will abort your program and report the exception details to the console.
An Exception object identifies the type of exception, indicated by the specific Exception object thrown. There are many pre-defined Exception objects descended from Exception such as IOException or ArithmeticException. An Exception object will contain the location in your program where the exception occurred (stack trace) and may include a description (message).
The most common exception you will encounter is the NullPointerException. This occurs when you attempt to use an object reference variable that has not been set to a valid object reference. Here is an example of this exception in CodingPoint. Compile and run the program to see the exception abort the program and report its information to the console. You can then comment out line 7, compile and run again. This will demonstrate how the stack trace information shows the where in a hierarchy of method calls the exception occurred.
What if you would like to catch exceptions and handle them in some other manner than aborting your program? Java provides a way to do that with try/catch/finally blocks. The general form of a try/catch/finally block is:
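In outline, with comments marking where your code goes:

```java
try
{
    // statements that might throw an exception
}
catch (Exception e)
{
    // your error handling code
}
finally
{
    // optional: always executed, exception or not
}
// execution continues here after the block
```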
This says try executing the code in the try block and if an exception occurs, pass the exception to the catch block, which executes the statements in the catch block (your error handling code). If there is no exception, execution passes to the next statement after the catch block. The code in the optional finally block is always executed exception or not, and execution proceeds to the next statement after the finally block.
Here is an example in CodingPoint of catching an exception. You can compile and run the example and then uncomment the finally block and compile and run again to see how the finally block works. You can also comment out the call to myMethod to see how finally works when there is no exception.
Note that the exception occurred in myMethod but the try/catch block in the main method handled the exception. This is because Java will work its way back through a method call hierarchy until it finds a try/catch block that can handle the exception. Notice we said "finds" a try/catch block that can "handle" the exception. This is because a catch can specify a specific Exception it will handle. If the exception being caught matches an Exception class specified on a catch statement, that catch will process the exception. This is coupled with the fact that you can have multiple catch statements and so tune your exception processing by Exception type.
Here is an example in CodingPoint showing multiple catch statements handling the NullPointerException differently than all other exceptions.
When designing your programs you can use Exceptions for your own error handling. You can trigger exception handling just like Java with the throw statement. You simply throw the exception you want handled:
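For example (the method, parameter, and message here are purely illustrative; note the throws specifier discussed below):

```java
public void checkSensorValue(int value) throws Exception
{
    if (value < 0)
        throw new Exception("sensor value cannot be negative");
}
```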
This will throw the standard Java Exception with your text as its message.
You can also extend the Exception class to create your own Exceptions. Here is an example in CodingPoint showing how to use exceptions to handle your own error processing. Compile and run to see the generic Java exception used. Then comment the first throw out and uncomment the second. This will show the use of a custom Exception. Finally you can uncomment the catch for the MyException class and see how you can trap custom exceptions.
Note that if you throw exceptions in a method, the throws Exception specifier must be added to the method definition.
Here is a series of videos (video1, video2, video3, video4) about Exceptions and here is a detailed discussion of Exceptions.
Understand Java's data conversion features which are called Casting.
Java provides the ability to convert from one data type to another within a set of rules. Some conversions are made automatically and some you have to explicitly request.
Here is a discussion of converting between primitive data types.
Here is a discussion of converting between reference (object) data types.
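A small sketch illustrating both kinds of conversion (the variable names are arbitrary):

```java
public class CastingExample
{
    public static void main(String[] args)
    {
        int i = 42;
        double d = i;               // widening primitive conversion: automatic
        long l = 123456L;
        int n = (int) l;            // narrowing primitive conversion: explicit cast required

        Object obj = "some text";   // reference widening (String to Object): automatic
        String s = (String) obj;    // reference narrowing (downcast): explicit cast required

        System.out.println(d + " " + n + " " + s);
    }
}
```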
Here is a video about casting.
Understand basic collections, what they are, how they work and how to use them.
A Collection is an object that stores lists of other objects allowing the group of stored objects to be manipulated in many powerful ways. A Collection may sound like an array or ArrayList and while a Collection is quite different than an array, ArrayList is in fact one implementation of the Collection concept. Java has a large number of specific implementations of the Collection concept you can use. Here are the most commonly used types of Collection:
- set - A list of objects with no duplicates.
- list - A list of objects duplicates allowed. (ArrayList is an implementation of the list general type)
- map - A list of objects with key values (no duplicates).
- queue - A list of objects with features that support sequential processing of the elements.
Within each type, there are a number of actual implementations you can use. Each implementation has specific features or performance aspects that you consider when choosing an implementation to use for your programs. Here is an example:
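A sketch of the List portion of such an example; it uses the name myList (referenced in the exercise at the end of this lesson) and produces the second block of output shown below:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ListIterator;

public class ListExample
{
    public static void main(String[] args)
    {
        List<String> myList = new ArrayList<String>();

        myList.add("first string");
        myList.add("second string");
        myList.add("third string");

        // Adding with an index inserts at that position and moves the
        // subsequent elements up.
        myList.add(0, "new first string");

        // A List allows duplicate elements.
        myList.add("second string");

        // Walk the list forward with a ListIterator.
        ListIterator<String> forward = myList.listIterator();
        while (forward.hasNext())
            System.out.println(forward.next());

        System.out.println("--------------------");

        // Walk the list in reverse: create the ListIterator with its starting
        // position just past the last element, using the list size.
        ListIterator<String> reverse = myList.listIterator(myList.size());
        while (reverse.hasPrevious())
            System.out.println(reverse.previous());
    }
}
```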
second string
third string
first string

new first string
first string
second string
third string
second string
--------------------
second string
third string
second string
first string
new first string
Here we create a List type Collection using the ArrayList implementation and add some elements. Note we added an element using an index (position) and it inserted the element at that location, moving all subsequent elements up. The ArrayList allows us to add a duplicate element. Finally we use the Iterator type ListIterator (a specialized Iterator for List collections) to manually list the elements in forward order and then reverse order. Note that the ListIterator used to go in reverse order is created with its starting position set just past the last element, by using the list size to identify that position.
Due to the many types of Collections and the many implementations of the types of Collections, Collections can seem daunting and overly complex. Collections are very powerful tools for manipulating data sets but most cases can be handled by the ArrayList Collection type.
Here is a video on the ArrayList Collection type. Here is a detailed discussion of Collections starting with an introduction and moving through the specific implementations of the various Collection types.
Here is the example code in CodingGround. Add code to the example to use the iterator for myList to locate the element containing "third string" and remove it from the list. Print out the modified myList to confirm the removal.
Understand what arrays are and how to use them.
An array is a special object used to store a list of variables of the same data type. An array is defined like this:
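For example (the array name is arbitrary):

```java
int[] myArray = new int[3];
```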
This statement defines and then creates an array of 3 integer variables (or elements) which will be addressed as a list. The new keyword defines the size of the array. We can then put values in the array and access them with an index value (position) in the array. Arrays are indexed starting at zero:
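Continuing the sketch above:

```java
myArray[0] = 5;
myArray[1] = 10;
myArray[2] = 15;

System.out.println(myArray[1]);     // prints 10, the element at index 1
```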
Note that we can initialize array values with the new keyword:
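For example:

```java
int[] myArray = new int[] {5, 10, 15};
```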
Arrays may have more than one dimension:
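For example, a two-dimensional array, here named x1 and initialized with the values used in the loop below:

```java
int[][] x1 = new int[][] { {5, 10}, {15, 20} };
```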
For loops are especially useful in processing arrays:
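A sketch of nested loops over the two-dimensional array x1 above, matching the output shown next:

```java
for (int row = 0; row < x1.length; row++)
{
    for (int col = 0; col < x1[row].length; col++)
    {
        System.out.println("row " + row + " col " + col + " = " + x1[row][col]);
    }
}
```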
This will print out:
row 0 col 0 = 5
row 0 col 1 = 10
row 1 col 0 = 15
row 1 col 1 = 20
Notice the array has a built-in field called length that tells the size of the array.
Arrays are fixed in their dimensions once created so the array size can't be changed. If you need dynamic array sizing, that is, you want to change the size of the array as your program proceeds, you can use a class called an ArrayList. The ArrayList is defined in the java.util package. An ArrayList has methods that allow you to add and remove elements on the fly:
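A sketch using the name a1 (referenced in the exercise at the end of this lesson); the text of the first element is assumed, and java.util.ArrayList must be imported as noted above:

```java
ArrayList<String> a1 = new ArrayList<String>();

a1.add("A String object");
a1.add("A different String object");

a1.remove(0);               // remove element zero; the remaining elements shift down

String s = a1.get(0);       // s now contains "A different String object"
```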
String s will contain "A different String object". Why? Because when we removed element zero, the rest of the elements shifted down.
ArrayLists have a number of methods you can use to manipulate the array. Note that the ArrayList can only contain object instance references (no primitives). Also note that when we created the ArrayList, we specified the type of object that would be contained in the ArrayList.
The for statement has a special case called for-each that applies to arrays and collections (next lesson). This special for statement will automatically provide each element in an array to your code in the for statement or block:
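For example (x2 is a new array introduced just for this snippet; a1 is the ArrayList from above):

```java
int[] x2 = {5, 10, 15};

for (int value : x2)        // value is set to each element of the array in turn
    System.out.println(value);

for (String str : a1)       // works the same way for collections such as ArrayList
    System.out.println(str);
```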
ArrayList is just one of many types of Lists (called Collections) available in the Java API.
Here is a video about single dimension Arrays. Here is a video about multi-dimension Arrays. Here is a detailed discussion of Arrays.
Here is the example code on CodingGround. Add code to the example to add up all the elements in array x1 using a for loop and print the result. Add another for loop to print the strings in ArrayList a1. | https://stemrobotics.cs.pdx.edu/target-hardwareplatform/tetrix?page=10 |
Block: 1
- Block Name: 1
- Varietal: Pinot Noir
- Planted Area: 1.69 acres
- Wine Regions:
- Everyvine Rating: Not rated (this location was added recently to Everyvine and has not yet been reviewed)
Grape Profile
Grape Details
- Varietal: Pinot Noir
- Clone: Pommard
- Rootstock: Unknown
- Vine Age: 7 years
- Status: Planted
Vineyard block planted and producing fruit
Management
- Annual Grape Production: Unknown Tons
- Renewable Energy Used: Unknown
- Date Planted: January 1, 2013
Farm Certifications
This block does not have any farm certifications
Grape Sales
No winegrapes or bulk wine sourced from this vineyard block are currently available...
About Market Listings: Winegrape, Bulk Wine, and Shiners are provided in partnership with WineBusiness.com. Vineyard and winery managers can link WineBusiness.com classified listings to vineyards on Everyvine by logging in at WineBusiness.com.
Wines
There are currently no wines associated with these grapes.
Surface Profile
Topography
| | Value | STD |
|---|---|---|
| Elevation | 517.89 ft. | 13.66 |
| Slope | 9.5% (5.43°) | 2.34 |
| Aspect | West (285.14°) | 28.74 |
Solar Radiation
| Average Radiation Total | WH/m2 |
|---|---|
| Annual | 89027.38 |
| Growing Season | 128420.54 |
| Month | Solar Radiation (WH/m2) |
|---|---|
| Jan | 18913.73 |
| Feb | 35836.3 |
| Mar | 78767.2 |
| Apr | 120625.3 |
| May | 161577.65 |
| Jun | 170678.93 |
| Jul | 168875.81 |
| Aug | 137765.75 |
| Sep | 91847.8 |
| Oct | 47572.51 |
| Nov | 21875.05 |
| Dec | 13992.57 |
Climate Profile
Climate Indices
| Index Name | Value |
|---|---|
| Ave. Annual Temperature | 52.2 °F |
| Ave. Growing Season Temperature | 59.1 °F |
| Growing Degree-Days | 1961.57 |
| Huglin Index | 1773.18 |
| Biologically Effective Degree-Days | 1194.81 |
Temperature & Rainfall
| Average | Hi °F | Low °F | Rain in. |
|---|---|---|---|
| Annual | 62.2 | 42.1 | 46.95 |
| Growing Season | 71.3 | 47 | 13.96 |
| Month | High (°F) | Low (°F) | Rainfall (in.) |
|---|---|---|---|
| Jan | 45.2 | 32.9 | 7.06 |
| Feb | 50 | 34.6 | 5.96 |
| Mar | 55 | 36.7 | 4.96 |
| Apr | 60.5 | 39.6 | 3.28 |
| May | 67 | 44.3 | 2.36 |
| Jun | 73 | 49.1 | 1.59 |
| Jul | 79.9 | 52.4 | 0.66 |
| Aug | 80.4 | 52.1 | 0.9 |
| Sep | 75 | 48.6 | 1.73 |
| Oct | 63.4 | 42.7 | 3.43 |
| Nov | 51.8 | 37.7 | 7.22 |
| Dec | 45.6 | 34 | 7.79 |
Soil Profile
Soil Component
- Component Name/Kind: Saum / Series
- Mapunit Name/Kind: Saum silt loam, 7 to 12 percent slopes; Saum silt loam, 12 to 20 percent slopes / Consociation

Taxonomy:
- Class: Fine, mixed, mesic Typic Xerumbrepts
- Order: Inceptisols
- Suborder: Umbrepts
- Particle Size: fine
Soil Horizon
- Bedrock Depth: 127 cm
| Available Water Storage | cm |
|---|---|
| 0-25 cm | 4.95 |
| 0-50 cm | 9.7 |
| 0-100 cm | 17.1 |
| 0-150 cm | 20.88 |
| | Typical | Low | High |
|---|---|---|---|
| Avail. Water Capacity | 0.1 | - | 0.11 |
| pH H2O | 2.9 | 2.8 | 3 |
| Horizon Depth | cm |
|---|---|
| Top | 64 |
| Bottom | 79 |
| Thickness | - |
Soil Capability Classifications
- Land (Class/Subclass): 3 / e
- Natural Drainage: Well drained
- Irrigated: 2, 3
- Non-Irrigated: 3
- Hydrologic Group: C
Administration
- Block-ID: 348612
- Created Date: Unknown
Last Attribution Dates
- Elevation: November 3, 2017 - 11:32 pm
- Climate: November 3, 2017 - 11:32 pm
- Soil: November 3, 2017 - 11:32 pm
Data Notes and References
Notes
- Farm Certifications: See the certifications page to learn more about the farm certifications Everyvine tracks.
- Soil Horizon: Bedrock value is set to "-" when not available. Available water capacity units are cm/cm.
- Solar Radiation: Area solar radiation has been calculated in watt hours per square meter (WH/m2).
- Topography: Topography values are averaged over the block area. Aspect is expressed in positive degrees from 0 to 359.9, measured clockwise from north.
- Growing Season: For use in various statistical averages, Everyvine defines the vineyard growing season as the period April 1st to October 31st.
References
The vineyard data presented here is most often sourced from the public data sources listed below. However, Everyvine also accepts data from vineyard owners who often have even better data they have collected themselves.
If you have more accurate data on a vineyard go to the Data Submission page.
- Topography: Topographic and solar radiation calculations are based on 1/3 arc-second (approx. 10m) elevation models. Data provided by U.S. Geological Survey (USGS), EROS Data Center, National Elevation Dataset, http://ned.usgs.gov/, 1999.
- Climate Indices: Calculations based on methodologies described by Gregory Jones and John Gladstones. See the following:
- Jones, G.V, Duff, A.A., Hall, A., and J. Myers (2009). Spatial analysis of climate in winegrape growing regions in the western United States.” American Journal of Enology and Viticulture, in Press Spring 2010.
- Gladstones, J. (1992) Viticulture and environment (Winetitles: Adelaide).
- SSURGO: Soil Survey Staff, Natural Resources Conservation Service, United States Department of Agriculture. Soil Survey Geographic (SSURGO) Database. Available online at http://soildatamart.nrcs.usda.gov.
- STATSGO2: Soil Survey Staff, Natural Resources Conservation Service, United States Department of Agriculture. U.S. General Soil Map (STATSGO2). Available online at http://soildatamart.nrcs.usda.gov.
- Web Soil Survey: Soil Survey Staff, Natural Resources Conservation Service, United States Department of Agriculture. Web Soil Survey. Available online at http://websoilsurvey.nrcs.usda.gov/. | http://www.everyvine.com/org/Ninebark_Vineyard/vineyard/Ninebark_Vineyard/1/ |
The origin of the Lachak Toranj design goes back to Golestan designs and garden designs in Persian carpets. Some believe this design first appeared on the covers of Qurans and other exquisite books. Observing the relevant principles and rules, carpet designers adapted these patterns and brought them into Persian carpets. The Lachak Toranj design is a famous, beautiful, and very traditional design in these carpets, and it has been in high demand in carpet weaving across different regions of Iran since the Shah Abbasi school.
Lachak Toranj design
Carpet weavers call a quarter of the central medallion pattern, rendered in a related or different shape, a lachak. Lachaks sit in the four corners of the carpet. So, if the central medallion is accompanied by lachaks, the design is called Lachak Toranj.
The round figure located in the center of the carpet is the bergamot, or Toranj. Its size and purpose shape the entire carpet pattern. Of course, in many traditional structures, the bergamot pattern is repeated in the corners of the framework as a quadrant (a quarter of the bergamot) called a lachak. The literal meaning of bergamot is the fruit of the citron. Toranj is one of the most prominent elements in the Lachak Toranj design. Toranjes are designed in circular, oval, and strip forms. In addition, patterns drawn from Khatai and Islamic motifs are placed in specific spaces of the bergamot. In this way, weavers separate the framework from the bergamot section. Colors in the Toranj (the bergamot) stand out like a central medallion or frame in the heart of the carpet.
Our ReplayTV Home Is Somewhat Similar.
A Life Where TiVo Has Always Existed
“…My daughter was only 3 months old when it arrived
and we set it up. As far as my daughter knows, TiVo has always been
around. Now that she (and our TiVo) are three years old, and there are
some very interesting things I've been able to observe.
First – she doesn't watch much TV (an allotted hour per day), but
when she does watch it, she gets a choice of a recent episode of any of
her favorite pre-recorded shows (current favorites are Dora the
Explorer and Caillou), and she can watch it at any time of day. We get
to choose what shows we'd like to allow her to watch, set up a Season
Pass, and we're done.
Second – Commercials are an infrequent novelty to her. We always
fast-forward through commercials, or watch non-commercial shows. When
she does occasionally see a full commercial, she's fascinated, and will
often ask us to stop so she can see what's going on. How can we
demonstrate to her the evils of commercial interruption, when she has
never had to experience it?
Third – Ignorance of Schedules/Programming – she has no idea when
her favorite shows are on, never has. She gets quite confused when we
are watching a non-TiVo TV, and she asks to watch 'a kids show', and we
have to explain that this TV won't do what ours at home does. We've
sometimes shortened this explanation to 'This TV is broken', which she
seems to accept, and will wait until we get home to watch our 'fixed'
TV.
Fourth – pausing taken for granted. She is now the master of paused
TV – saying 'Can you please stop this for a minute – I have to use the
Potty'….
I compare all of these observations to my TV-watching experience as
a child – always excited about Saturday Morning, because that's when
cartoons were on – swapping stories about the latest Evel Knievel
motorcycle I saw on a commercial with the other kids, knowing they had
all seen the same commercials as well. Feeling disappointed when my
parents would switch off a show mid-way through because they decided it
wasn't appropriate. The pain of commercial interruption, the
disappointment of 'nothing's on', or the missed shows that were
probably gone for good. (On a side note, anyone else remember the days
where if you missed a movie in the theater, you'd never get a chance to
see it again?)
There are a lot of other home entertainment developments that have
changed since I was a kid, but none so radically as the TiVo
experience. I never cease to be amazed when I'm zooming past a
commercial with a woman dancing with a 'swiffer', and I hear my
daughters small voice say: 'Wait Papa, I wanna see that'.” [Eintagsfliegen]
Kids growing up like this view their entertainment and multimedia very
differently than the rest of us. Heck, as an adult I'm completely
spoiled by this revolution, and the desire for this functionality
spills over into other mediums (why can't I press a button to go back 7
seconds and hear what I just missed on the radio or pause it?). | http://weblog.vkimball.com/2004/12/02/our-replaytv-home-is-somewhat-similar/ |
The Sun ONE Active Server Pages knowledge base provides troubleshooting information for problems you might encounter when using Sun ONE ASP.
The knowledge base is a valuable technical resource, providing an updated list of product-related articles, answers to frequently asked questions, and useful tips designed to help you get the most out of Sun ONE ASP.
To access the knowledge base, go to:
http://developer.chilisoft.com/kb/
See also:
Other Resources
Copyright © 2003 Sun Microsystems, Inc. All rights reserved. | https://aspdoc.indoglobal.com/AppBTroubleshooting.html |
According to Phys.org, astronomers have made an interesting new discovery – a gas giant orbiting a brown dwarf. The exoplanet was named OGLE-2017-BLG-1522Lb. The newly discovered gaseous body has about 25% less mass than Jupiter, while its pseudo-star is about 46 times more massive than our own Jove.
Exoplanets are pretty hard to spot. Unlike stars, planets do not emit any natural light, which means that scientists need to get a little creative when searching for them. Often, they use something called the "transit method", watching a star as a planet crosses in front of it and observing how and when the star's light dims to obtain a rough estimate of the planet's characteristics.
But this one was a bit different. Here the researchers used gravitational microlensing. Because gravity is not really a force, as Einstein demonstrated, but a curvature of space-time, it can bend and magnify light in much the same way a lens does. This technique requires a solid understanding of the masses in the system under study, but it is extremely effective and quite useful for discovering a planet near a brown dwarf, which naturally emits very little light.
OGLE-2017-BLG-1522Lb
"We report the discovery of a giant planet in the OLELE-2017-BLG-1522 microlens event, which clearly identified the planetary perturbations despite the relatively short event time of ~ 7.5 days by high cadence surveying experiments," the researchers said.
It matters that brown dwarfs generally do not have the kind of disk material that drives planet formation. There is a boundary known as the "snow line" where planets tend to form. And while the astrophysics behind it gets a little complicated, these astronomers are pretty confident that their discovery is the first giant planet with these proportions found orbiting a brown dwarf. Further observations must be made to confirm that the host is indeed a brown dwarf, i.e., that it does not have enough mass to sustain hydrogen fusion in its core. If so, this could be an important development for our understanding of astrophysics and the formation of planets.
The role of a Direct Support Professional (DSP) is complex, and demand for passionate DSPs has never been higher. A foundational position within the human services field, it is estimated that over a million new DSP positions will be required by 2022. The daily person-centered plans a DSP must create will present challenges, along with distinct opportunities to grow within the role.
Whether you’re starting straight out of high school, finding a job while studying at college, or any walk of life beyond, entering the sphere of direct support is one of the most rewarding career paths you can take. Each person supported’s unique context will allow you as a professional to adapt to a wide variety of fulfilling circumstances.
“I never had a dream job when I was growing up. Being a DSP made me realize that this would have been it. I have always been someone who cares for others both physically and mentally and I love that being a DSP allows me to do both.”
– Abigail Ivaldi, DSP
What it Means to be a DSP
Being a DSP is a versatile position. While some stress can be expected, the benefits of working with and improving the life of another person far outweigh it. Making the position your own within a flexible schedule demands a creative mind: one that’s wholly involved, kind, and puts the needs of the people you support above all else. When you enter a home you’re not just staff, you’re an advocate for growth.
Professional Experience
If you’re at least 18 years old and have a high school diploma, becoming a DSP provides invaluable professional experience as well as career-launching training. Working in both residential and community settings comes with a variety of responsibilities that center around assisting individuals to lead a self-directed life. Being a natural caregiver does not require any formal experience, and as a DSP, you’ll be in one of the few entry-level positions where you’ll have the opportunity to work with individuals directly.
Meaningful Growth
Being at the forefront of intellectual support requires patience, compassion, and the utmost dedication. Here at OPG, we believe the ultimate goal is to develop independent skills through a fading plan – growing individuals’ independence over time. Developing this level of independence not only makes real positive change in the lives of people we support, but delivers a true sense of accomplishment for all.
Expect the Unexpected
While many jobs boast that no two days are the same, a DSP’s day-to-day is legitimately that. Each day comes with its own unique hurdles, and unique accomplishments. Person-centered plans mean adaptation is key. Furthermore, adventuring out into the community can provide a host of additional activities to enjoy and challenges to overcome.
Simply put, being a DSP means you actively seek to improve the lives of individuals with intellectual disabilities. If you strive to realize a true positive impact on the world, there is no better place to start than becoming a Direct Support Professional. | https://www.opgrowth.com/2017/07/13/being-a-direct-support-professional/ |
This thesis investigates the impacts of the 4.2 ka and 3.2 ka BP climatic changes on agricultural practices at Toprakhisar Höyük and Tell Atchana, located in the Hatay region of southern Turkey. The fundamental inquiry in this thesis is whether, and to what extent, the aforementioned climatic changes affected the agricultural practices of these communities. To answer this question, a descriptive analysis of cereals and wild seeds from the archaeobotanical assemblages of the two sites has been carried out. In addition, morphometric measurements and stable carbon isotope analysis on wheat and barley grains have been performed to examine whether there was water stress due to climate change. The findings demonstrate that Toprakhisar Höyük and Tell Atchana switched their preferred grains to drought-tolerant varieties in the time periods that coincide with the climate changes. Grain size reduction and water stress were only visible in the hulled wheat grains. Overall, the data generated for this thesis demonstrate that, although agricultural systems did not drastically change or completely collapse, the societies of Atchana and Toprakhisar appear to have adapted to the increasingly arid conditions by changing the types of cereals they cultivated. By combining different methodologies, results were obtained that widened the scope of climate change studies and provided a better understanding of climatic impacts, especially on the local scale. This study also shows that archaeobotanical studies can be very appropriate not only for understanding the culinary activities and consumption habits of past societies but also for determining environmental conditions when integrated into these types of environmental studies.
Subject Keywords: The 4.2 ka BP Event, The 3.2 ka BP Event, Archaeobotany, Tell Atchana, Toprakhisar Höyük
URI: https://hdl.handle.net/11511/99488
Collections: Graduate School of Social Sciences, Thesis
Citation Formats
E. Sinmez, “Tracing the Impact of 4.2 ka and 3.2 ka BP Climatic Events on the Agriculture of Tell Atchana and Toprakhisar Sites in the Hatay Region through Multidisciplinary Examination of Archaeobotanical Assemblages,” M.S. - Master of Science, Middle East Technical University, 2022.
PSC Questions and Answers on Repeated Questions (Part 24), covering question levels suitable for the Kerala Administrative Service (KAS), Secretariat Assistant, Panchayath Secretary, BDO, Auditor, Assistant, LDC and LGS examinations. The remaining portion of the study material will be updated later.
Question1:- Who coined the term 'United Nations'
Question2:- Where is Harley street
Question3:- Who was the first person to study on inferiority complex
Question4:- In which state is the Lake Umiam is located
Question5:- When did India conduct the second Nuclear Experiment Operation Sakthi
Question6:- The host of 2018 World Cup Football
Question7:- The longest river in France
Question8:- For uplifting the downtrodden, the 'Bahishkritha Hitakarini Sabha' was established by
Question9:- The book that contains the list of animals that are on the verge of extinction
Question10:- Tide that occurs on the days of New Moon and Full Moon
Question11:- 'Beamer' is a term related to which sports
Question12:- The term Black Hole is used to denote
Question13:- Name the only animal which has horns even at birth
Question14:- Fifth Veda
Question15:- The Nobel Prize on which subject is known as "Bank of Sweden Prize "
Question16:- Name the channel that separates Islands of Lakshadweep from that of Minicoy
Question17:-Who is the author of Samadhi Sapatakam composed in connection with the demise of Chatambi Swamikal
Question18:- Which is the largest muscle in the Human body
Question19:- The first Man Made Satellite
Question20:-Name the diseases that can be prevented by the triple antigen
Question21:- Name the epic penned by Aurobindo Ghosh
Question22:- The Indus Valley site where the Mound of the Dead can be seen
Question23:- The outermost colour of the Rainbow
Question24:- Who established Aravidu dynasty, the last one among the Vijayanagar Dynasties
Question25:- Mathura is on the banks of which river
Question26:-Name the Ocean which resembles the English alphabet 's'
Question27:- The district of Kerala which covers the least Forest Area
Question28:- The middle Colour of the Rainbow
Question29:- Who discovered C T Scan
Question30:- In which ocean did Kairali, the ship of the Shipping Corporation of India, vanish
Question31:- The Chemical Component in the Human Nail
Question32:- Where is "Gandhi Maidan"
Question33:- CV Raman won the Nobel Prize for which discovery
Question34:-Which Continent has the largest Hindu Population
Question35:- Adyanpara Waterfall is situated in ------- district
Question36:- Which country's old name is Abyssinia
Question37:- Abhdharmapetakam deals with -----
Question38:- The British Monarch who stepped down in 1936
Question39:-Cleopatra was the queen of which Country
Question40:- Who appoints the Loksabha Secretary General
Question41:- Marakkudaykkullile Mahanarakam was authored by
Question42:- When did Arab Merchant Sulaiman visit Kerala
Question43:- The Naval Academy in Kerala is located at
Question44:- Name the mountain where Noah's Ark came to rest
Question45:- The epic written by Aurobindo Ghosh
Question46:- Who built the Dutch Palace in Kochi
Question47:- The real name of Nandanar
Question48:-The innermost colour of the Rainbow
Question49:- The real name of the "Mother of Aurobindo Ashram"
theImage.com Notes on Basic Geology
Notes created & information organization based on the book:
"The Dynamic Earth - an introduction to physical geology"
Brian Skinner & Stephen C. Porter (further book information here)
also look at www.wiley.com for additional resource information
Streams & Drainage - Page 3
Dams
Dams may be natural or man-made obstructions that stop the forward and downward movement of streams.
Natural dams:
Man Made Dams:
Channel Patterns
Straight Channels
Meandering Channels
Meandering is caused by the river attempting to equalize its energy over the widest possible area. Another way to put this: it minimizes (reduces) the grade.
The better developed the meandering is, the older the river must be. The water in a meandering stream does not flow at the same rate throughout its course.
Water in the straight sections connecting the curves, tends to be fast near the center channel and more equal along the edges.
Water running through a curved section runs fastest at the outside of the curve. Maximum velocity tends to be below the surface but at the greatest bend. This can lead to undercutting of the channel wall.
The slowest water runs at the inside of a curve and between these two points a sand bar may develop.
Oxbow Lake: an oxbow lake is a meander that has been cut off from the original stream. In a straight reach, flow is fastest near the center of the channel, but around a bend the highest velocity and greatest cutting power shift to the outside of the curve. Where two curves lie close together, each continues to wear away at the outside of its bend.
Eventually the stream cuts through the neck between the two curves and creates a shorter path for the river. Because the water can now move at higher velocity down the new channel, the water left in the abandoned meander is isolated and becomes a lake.
Braided Channels
The braiding is caused by deposited sediment. Braided channels are created in streams that have high run-off, good velocity, and high variability. During major runoffs both large and small materials are carried, then when the run-off dies down, the larger materials are deposited first and create a series of sand bars.
The water finds multiple passages between the bars. The braids constantly change and reform as the water runs swiftly then slow again.
Laminar Flow
This is straight-ahead flow without side currents. It is made up of parallel flow lines all moving in one direction. It is achieved only at very slow velocities.
Turbulent Flow
This type of flow can raise items from the bottom of the bed and put them temporarily back into the main current. Gravity eventually wins and the particles come to rest again.
Saltation
Sandy beds move via "saltation", which is caused by turbulent flow. A grain is set in motion by the churning action of the stream. The water travels faster higher in the stream which causes the turbulent flow. Grains lift from the bottom and move up into faster water, they are transported a short distance, then gravity pulls them back to the bed. They may jar another particle loose, or the turbulence may do it alone.
Suspended load
A suspended load is the fine sediment that is carried within a stream. It is not in solution, but rather hangs in the water by the current. If the current should slow, then some of the particles drop.
Dissolved load
The dissolved load is a group of ions that are carried as dissolved materials. They do not rely on the speed of the stream to hold them in solution, rather their solubility.
Streams which have a major constituent coming from ground water tend to have more of a dissolved load.
Down Stream Sediments
Coarse grained materials tend to be at the beginning of most streams and it is graded to finer sediment at length. Streams with even slow discharge rates can carry fine sediments and move them downward, but it takes a steep slope to move larger material.
Solids are also abraded more as they travel further. Hence large materials become finer over distance due to wear (abrasion).
Down Stream Composition
Down stream composition is not constant from stream to stream, other than the fact that finer materials like clay and silt travel further.
The mineral composition of a stream is very dependent upon the nature of the rock the stream channel cuts, as well as the tributary streams which supply it with material. | http://theimage.com/geology/notes9/index3.html |
A team from Google Brain has written a new approach to create an Artificial Intelligence (AI) that can predict changes in the source code based on past adjustments. It claims that the approach delivers the best performance and scalability of all the options tested so far.
Making such an AI is challenging, as a developer often makes changes with one or more goals in mind. “Patterns of change can be understood not only in terms of the change (what has been added or removed) or its outcome (the state of the code after the application of the change)”, according to the researchers, reports Venturebeat.
“According to the researchers, if you want to get an AI to predict a series of operations, there must be a representation of the previous operations with which the model can generalize the pattern and predict future operations.
Development
Therefore, the team first developed two representations that capture information about intent and scale with the length of the code series over time: explicit representations, which show the operations in the series as tokens in a 2D grid, and implicit representations, which encode the succession of operations more compactly.
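As a rough illustration of what an explicit, grid-like encoding of an edit history might look like, the toy sketch below pads successive token snapshots to a common width so that each row of the grid is one state of the code; the token names, padding symbol and helper function are invented for demonstration and are not the representation actually used by the Google Brain team.

```python
# Illustrative sketch only: a toy "explicit" encoding of an edit sequence.
# Each row of the grid is one snapshot of the token sequence after an edit;
# "_" pads positions so every snapshot has the same width.

from typing import List

PAD = "_"

def explicit_grid(snapshots: List[List[str]]) -> List[List[str]]:
    """Arrange successive token snapshots as rows of a fixed-width 2D grid."""
    width = max(len(s) for s in snapshots)
    return [s + [PAD] * (width - len(s)) for s in snapshots]

if __name__ == "__main__":
    history = [
        ["def", "f", "(", ")", ":"],                      # initial code
        ["def", "f", "(", "x", ")", ":"],                 # edit 1: add parameter
        ["def", "f", "(", "x", ")", ":", "return", "x"],  # edit 2: add body
    ]
    for row in explicit_grid(history):
        print(" ".join(row))
```

A model trained on such grids sees both what the code looked like at each step and where it changed, which is the kind of signal an implicit, more compact encoding would have to preserve.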
The team then created a machine learning model that can record the relationship between the changes and the context in which they were made, specifically by encoding the initial code and the changes, composing the context and recording the subsequent changes and their positions.
To measure the generalizability of the system, the researchers developed a set of synthetic data inspired by changes that may occur in real data, but have been simplified to allow for a clearer interpretation of the results. In addition, they have put together a large dataset of editing sequences of snapshots of a Google code base with eight million changes from 5,700 developers, and divided it into sets for training, development and testing.
Reliability
The researchers argue that experiments show the model reliably and accurately predicted the positions where a change had to be made, as well as the content of that change. They believe the model could be adapted to improve autocomplete systems that ignore the history of changes, or to predict the code searches that developers will perform.
If you are registered, you may access the virtual 62nd ASH Annual Meeting and Exposition at https://annualmeeting.hematology.org.
Joint Session: Scientific Committee on Blood Disorders in Childhood & Scientific Committee on Immunology and Host Defense
What the Children Can Teach Us: Congenital Immunodeficiencies Shed Light on Immunity, Hematopoiesis, and Cancer
Sunday, December 6, 2020, 12:00 p.m. - 12:45 p.m.
Please view the pre-recorded presentations prior to attending the corresponding Live Q&A session. Access to the on-demand content will be available starting on December 2 and will continue through the duration of your meeting subscription. CME is available for participation in the corresponding Live Q&A.
Over the last 20 years there has been an exponential rise in the identification of inborn errors of immunity, now numbering >400 monogenetic defects. While the classical presentation of these diseases is recurrent and persistent infections in a young child, the uncovering of the genetic underpinnings of these fascinating diseases has led to unexpected roles for these genes in immune dysregulation, hematopoiesis, and malignancy. Furthermore, hypomorphic variants in genes traditionally thought to affect only children may manifest as milder or atypical disease in adults. The investigation of patients with inborn errors of immunity thus provide unique insights into biology, immunology, and molecular mechanisms of disease. In this series of talks, these dynamic speakers will shed some light on the complex and tightly regulated interplay between a dysregulated immune system and abnormal hematopoiesis and malignant transformation. Drs. Lucas and Snow will discuss the roles of PI3K subunits and CARMA proteins respectively, both gain and loss-of-function variants, in lymphocyte biology and lymphomagenesis. Drs. Latour and Holland will discuss the roles of IKZF1 and GATA2 respectively in immune cell development, function, malignancy.
Dr. Carrie Lucas will describe Activated PI3K-delta Syndrome (APDS) patients with germline gain-of-function mutations in the genes encoding the leukocyte-restricted PI3Kdelta subunits p110delta and p85alpha and discuss their similarity to oncogenic mutations. These patients have immunodeficiency, autoimmunity, and lymphoproliferative disease, and have recently been successfully treated with targeted therapy specifically inhibiting the PI3Kdelta complex. She will then transition to recent work on a new disorder termed ‘Inactivated PI3K-gamma Syndrome’ (IPGS), caused by loss-of-function mutations in the gene encoding the p110gamma PI3K subunit. This rare disease has features of immunopathology and immunodeficiency, including features not previously expected from knockout mice. The p110gamma kinase is being inhibited in clinical trials to boost myeloid cell responses in cancer; as such, Dr. Lucas will describe how her lab’s findings in rare disease illuminate roles for this kinase directly in humans.
Dr. Andrew Snow will provide an overview of primary immune regulation disorders associated with mutations in CARMA proteins and their associated signaling partners, BCL10 and MALT1. Particular emphasis will be paid to the broad spectrum of immune diseases associated with mutations in CARMA1 (CARD11). He will discuss underlying molecular mechanisms explaining lymphocyte signaling defects and associated clinical phenotypes, including predisposition to lymphoma. Current and novel therapeutic strategies for CARMA-related immune disorders will also be briefly covered.
Dr. Sylvain Latour will first discuss the biology of the transcription factor IKZF1, its role in the development of immune cells and normal hematopoiesis, and the lymphomagenesis associated with somatic mutations in IKZF1. He will then focus on work from his laboratory delineating the impact of germline mutations in IKZF1 on human lymphocyte development and function, contrasting the effects of loss-of-function variants, dominant-negative variants and novel, recently identified mutations. He will further discuss the underlying molecular and pathophysiological mechanisms of these different mutations and their association with clinical phenotypes.
Dr. Steven Holland will review the varied phenotypes and presentations of GATA2 deficiency. There are broad genotype/phenotype associations as well as variations in penetrance and expression. With the recruitment, longitudinal follow-up and treatment of large cohorts of patients, some general aspects of management and successful transplantation can be derived. The biology of GATA2 and its many roles in blood, immune cell and lymphatic development and function have been exciting areas of investigation. He will review current concepts of GATA2 in myelodysplasia and myeloid malignancy, as well as some molecular mechanisms of GATA2 deficiency in allergic, rheumatologic and immunodeficiency diseases.
Co-Chairs:
Robert F. Sidonio Jr.
, MD, MSc.
Children's Hospital of Atlanta
Atlanta, GA
Sung-Yun Pai
, MD
Boston Children's Hospital
Boston, MA
Speakers:
Carrie L. Lucas
Yale University
New Haven, CT
Human PI3K Mutations: Immunodeficiency and Malignancy
Andrew L. Snow
, PhD
Uniformed Services University
Bethesda, MD
The Biology of CARMA Proteins in Immunity and Malignancy
Sylvain Latour
Hôpital Necker
Paris, France
The Complete Spectrum of IKZF1 Defects: Immunodeficiency, Immune Dysregulation, Abnormal Hematopoiesis and Leukemia
Steven M. Holland
, MD
National Institute of Allergy and Infectious Diseases
Bethesda, MD
GATA2: MonoMac and Beyond
Joint Session: Scientific Committee on Hematopathology and Clinical Laboratory Hematology & Scientific Committee on Lymphoid Neoplasia
Getting the Most from Minimal Residual Disease
Sunday, December 6, 2020, 2:00 p.m. - 2:45 p.m.
Minimal residual disease (MRD) detection has become standard of care in some hematologic malignancies, such as acute lymphoblastic leukemia. While MRD has been studied in many different lymphoid and plasma cell malignancies, strategies based on techniques such as flow cytometry and detection of patient-specific immunoglobulin gene sequences by molecular methods such as allele-specific PCR have not been successful. Recent technical advances hold great promise in detecting residual disease and providing response assessment as well as prognostic information that may allow intervention to improve outcomes. In this joint session, the speakers will explore advances in using various forms of technology, such as next-generation sequencing in the context of circulating tumor DNA (ctDNA), mass spectrometry, high-throughput biophysical measurements and molecular analysis of single cells, to detect MRD. These technologies and their application to MRD detection will influence clinical trial design and improve outcomes for patients with lymphoma and plasmacytic neoplasms.
Dr. David Rossi will discuss the role of cell-free circulating tumor DNA (ctDNA) in detecting residual disease. ctDNA in blood is an opportunity for comprehensive and minimally invasive lymphoma diagnostics that is not limited by sampling frequency, tumor accessibility, or the existence of clinically overt disease. Qualitative analysis of ctDNA is used for the identification of pre-treatment mutations associated with primary resistance to therapy and for the longitudinal non-invasive detection of acquired-resistance mutations under treatment. Quantification of ctDNA is used as a proxy for imaging in the measurement of tumor volume. It allows identification of residual disease after treatment, even when the disease is in complete remission. Persistence of ctDNA detection during curative-intent therapy is proposed as a dynamic prognostic marker for ultimate clinical outcome. Given the emerging role of ctDNA, its implementation to detect genomic variants and residual disease is a priority in the roadmap of lymphoma research. Moving ctDNA applications from the bench to the bedside requires filling the uncertainties surrounding their clinical validity and, most importantly, their clinical utility in the context of prospective clinical trials.
Dr. Katie Thoren will describe the use of mass spectrometry to detect M-proteins in multiple myeloma, identify the challenges of using this biomarker, and describe work that must be done for these techniques to be incorporated into clinical practice for tracking of low disease burden. Over the last several years, efforts have demonstrated that it is technically feasible to detect low levels of monoclonal proteins in peripheral blood using mass spectrometry. These methods are based on the fact that an M-protein has a specific amino acid sequence, and therefore, a particular mass. This mass can be tracked over time and can serve as a surrogate marker of the presence of clonal plasma cells.
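As a minimal sketch of the mass-tracking idea described above, the following code checks serial (hypothetical) mass-spectrometry peak lists for a peak within a tolerance of a patient-specific M-protein mass; the target mass, tolerance and peak values are illustrative assumptions, not parameters of any clinical assay.

```python
# Illustrative sketch: tracking a patient-specific M-protein mass across
# serial mass-spectrometry peak lists. Masses, tolerance and data are
# hypothetical examples, not values from any clinical assay.

def mrd_positive(peaks_da, target_da, tol_ppm=20.0):
    """Return True if any observed peak falls within tol_ppm of the target mass."""
    tol_da = target_da * tol_ppm / 1e6
    return any(abs(p - target_da) <= tol_da for p in peaks_da)

if __name__ == "__main__":
    target = 23465.8  # hypothetical patient-specific mass (Da) defined at diagnosis
    serial_samples = {
        "baseline":  [23465.7, 23210.1, 24102.9],
        "post-tx 1": [23465.9, 23980.2],
        "post-tx 2": [23990.4, 24011.0],
    }
    for label, peaks in serial_samples.items():
        status = "detected" if mrd_positive(peaks, target) else "not detected"
        print(f"{label}: M-protein {status}")
```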
Dr. Ash Alizadeh will discuss the spectrum of available technologies for lymphoma MRD detection and quantitation by next-generation sequencing, including the strengths and weaknesses of methods such as IgHTS, CAPP-Seq, PhasED-Seq, and related techniques. He will separately discuss the role of ctDNA in several lymphoma subtypes using blood plasma, including for noninvasive lymphoma genotyping, disease classification, and risk assessment before therapy. Finally, he will discuss the use of ctDNA for early molecular response measurements as informative for adaptive clinical trial designs, for noninvasively detecting the emergence of resistance mechanisms, and for late MRD detection for early detection of progression.
Dr. Scott Manalis will discuss advances made towards monitoring and targeting MRD. Over the past decade, there have been significant advancements in microfluidic approaches for isolating rare cells and characterizing their molecular as well as biophysical properties. These approaches hold great promise for defining personalized vulnerabilities. His talk will focus on their prospects for monitoring as well as targeting MRD.
Co-Chairs:
Eric D. Hsi
, MD
Cleveland Clinic
Cleveland, OH
Lisa G. Roth
, MD
Weill Cornell Medical College
New York, NY
Speakers:
Davide Rossi
, MD, PhD
Oncology Institute of Southern Switzerland
Bellinzona, Switzerland
Use of Minimal Residual Disease and Advances in Clinical Trials
Katie Thoren
Memorial Sloan Kettering Cancer Center
New York, NY
Advances in Mass Spectrometry for Myeloma Minimal Residual Disease
Ash A. Alizadeh
, MD, PhD
Stanford University
Stanford, CA
Newest Discoveries Using Next Generation Sequencing Approaches for Minimal Residual Disease
Scott R. Manalis
, PhD
Massachusetts Institute of Technology
Cambridge, MA
Bioengineering Strategies to Phenotypically Define Minimal Residual Disease
Joint Session: Scientific Committee on Myeloid Biology & Scientific Committee on Myeloid Neoplasia
Single Cell Analysis of Hematopoietic Development and Clonal Complexity of Malignant Hematopoiesis
Saturday, December 5, 2020, 9:30 a.m. - 10:15 a.m.
Dr. Vijay Sankaran will describe how single cell genomic assays, including single-cell RNA-seq and single-cell ATAC-seq, can provide valuable insights into how normal hematopoiesis occurs and how this process goes awry in disease. He will discuss recent work involving the use of mitochondrial DNA mutations with single-cell genomic assays to enable lineage tracing and also discuss how human disorders can be studied using single cell genomic approaches.
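As a toy illustration of how shared mitochondrial DNA mutations can serve as clonal marks in single-cell data, the sketch below groups cells that carry the same variant into putative clones; the cell names, variant labels and grouping rule are simplified assumptions for demonstration, not an actual lineage-tracing pipeline.

```python
# Illustrative sketch: grouping single cells into putative clones by shared
# somatic mitochondrial DNA variants. Cell names and variants are hypothetical.
from collections import defaultdict

def group_by_variant(cell_variants):
    """Map each informative mtDNA variant to the set of cells that carry it."""
    clones = defaultdict(set)
    for cell, variants in cell_variants.items():
        for v in variants:
            clones[v].add(cell)
    return clones

if __name__ == "__main__":
    cell_variants = {
        "cell_01": {"m.2619T>C"},
        "cell_02": {"m.2619T>C", "m.9824A>G"},
        "cell_03": {"m.9824A>G"},
        "cell_04": set(),  # no informative variant detected
    }
    for variant, cells in group_by_variant(cell_variants).items():
        print(f"{variant}: putative clone of {len(cells)} cells -> {sorted(cells)}")
```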
Dr. Timm Schroeder will discuss the importance of single-cell analyses when trying to understand the molecular control of HSPC fates in vitro and in vivo. He will discuss current state of the art imaging and image analysis approaches for tissue-wide large volume quantification of the location of individual HSPCs, individual niche cells and individual regulatory extracellular molecules in bone marrow and other hematopoietic tissues. Current possibilities and limitations will be discussed using recent applications to biological questions in hematopoiesis research.
Dr. Margaret Goodell will discuss recent findings and implications surrounding subclonal complexity in myeloid malignancies. DNMT3A is one of the most frequently mutated genes in hematologic malignancies. While the R882 hotspot mutation dominates in AML, other mutations spread throughout the protein are collectively more common in other disorders. By profiling over 250 mutations, we can classify some types of mutation that may have prognostic and therapeutic value.
Co-Chairs:
Soheil Meshinchi
, MD, PhD
Fred Hutchinson Cancer Research Center
Seattle, WA
Sandra S. Zinkel
, MD, PhD
Vanderbilt University School of Medicine
Nashville, TN
Speakers:
Vijay G. Sankaran
, MD, PhD
Boston Children's Hospital
Boston, MA
Single Cell Understanding of Hematopoiesis and Myeloid Lineage Commitment
Timm Schroeder
, PhD
ETH Zurich
Basel, Switzerland
Single Cell Analysis of the Bone Marrow Niche - Quantitative Understanding of Stem/Progenitor Niche Interactions
Margaret Goodell
, PhD
Baylor College of Medicine
Houston, TX
Subclonal Complexity in Myeloid Malignancies and Mechanism of Selection and Resistance
Konstanze Dohner
, MD
University Hospital of Ulm
Ulm, Germany
Measuring Disease Burden in Myeloid Malignancies/Residual Disease
Scientific Committee on Bone Marrow Failure
Precision Medicine Approaches to Leukemia Predisposition in Bone Marrow Failure
Monday, December 7, 2020, 11:30 a.m. - 12:15 p.m.
Inherited and acquired bone marrow failure disorders are associated with an increased risk of myeloid malignancies. Treatment of MDS or AML is challenging for these patients due to both malignancy resistance and toxicities from disease co-morbidities. An understanding of the biologic mechanisms driving bone marrow failure and clonal evolution would inform rational surveillance strategies and provide opportunities for leukemia interception. This session will present cutting-edge advances in the understanding of the molecular mechanisms driving marrow failure and clonal evolution in three different germline genetic leukemia predisposition disorders. Clinical implications for surveillance and therapeutics will be discussed.
Dr. Leighton Grimes will discuss severe congenital neutropenia, which is caused by inherited and de novo mutations leading to a profound block in neutrophil granulopoiesis. Dr. Grimes will present insights gleaned using mouse models of severe congenital neutropenia. The successive cell states encountered during normal neutrophil granulopoiesis will be described. Differential single-cell gene expression and chromatin patterns in mutant cells assigned to wild-type cell states via comparative genomics will be presented. These studies highlight the dominance of cell state in integrating the effects of mutations and therapy, and illustrate cell-state-specific effects of mutations, with direct consequences for attempts to repair defects.
Dr. Coleman Lindsley will discuss insights from Shwachman-Diamond syndrome, a disorder characterized by impaired ribosome assembly. Dr. Lindsley will present novel somatic mutation pathways driven by the germline genetic background. These studies show how germline genetic context together with the cell-specific somatic mutational context determine the functional contribution of a somatic mutation to relative cell fitness, selection, and malignant potential. These studies identify adaptive and maladaptive pathways of clonal expansion in response to a germline genetic selective pressure and provide a mechanistic rationale for clinical surveillance.
Dr. Paula Rio will discuss clonal tracking following gene therapy to treat Fanconi anemia, a disorder of DNA repair. She will provide an update on a phase I/II gene therapy trial that has shown successful engraftment and proliferative advantage of corrected HSCs in FA-A patients in the absence of conditioning. Dr. Rio will discuss the engraftment, clonal tracking and phenotypic correction of HSCs in these initial patients with up to 3 years of follow up.
Chair:
Akiko Shimamura
, MD, PhD
Boston Children's Hospital
Boston, MA
Speakers:
H. Leighton Grimes
, PhD
Cincinnati Children's Hospital Medical Center
Cincinnati, OH
Neutrophil Development and Neutropenia
R. Coleman Lindsley
, MD, PhD
Dana-Farber Cancer Institute
Boston, MA
Germline and Somatic Genomics in Ribosomopathies
Paula Rio
, PhD
Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT)
Madrid, Spain
Clonal Tracking Post-Gene Therapy for Fanconi Anemia
Scientific Committee on Epigenetics and Genomics
RNA in Normal and Malignant Hematopoiesis
Saturday, December 5, 2020, 12:00 p.m. - 12:45 p.m.
Post-transcriptional control of gene expression is emerging as a new frontier in the regulation of stemness, differentiation and malignant transformation. Mutations in spliceosomal genes are a common cause of acute myeloid leukemia. In addition, RNA processing, splicing, and feedback loops to transcriptional regulation represent opportunities for drug development, therapeutic targeting, as well as immunotherapy. This session will present cutting-edge developments in how the regulation of splicing, RNA stability, and RNA processing impacts fundamental processes in hematopoiesis. In addition, it will address how dysregulation of these processes induces or contributes to leukemia, and how it can be modulated for therapeutic purposes.
Dr. Christopher Burge will focus on mechanism of RNA-processing and their impact on gene expression, including feedback loops to promoter regulation. Almost all human genes undergo alternative processing of their primary transcripts, including alternative splicing and/or alternative cleavage and polyadenylation, with RNA-binding proteins playing key regulatory roles. Pre-mRNA processing can impact the expression levels of genes in a variety of ways, including by producing alternative mRNA isoforms that differ in their stability, nuclear export, or translation. He will discuss recent work showing that splicing of internal exons near the 5' ends of genes can activate transcription from proximal upstream promoters, and exploring the mechanisms underlying this and other connections between RNA processing and gene expression. This work has implications for understanding programs of gene regulation in T cells and other immune cell types, and also opens up new approaches for therapeutic manipulation of gene expression.
Dr. Omar Abdel Wahab will focus on mutations in slicing factors and their contribution to hematologic malignancies. He will describe the role of genomic as well as transcriptomic and proteomic studies in defining critical events altered by mutant splicing factors in myeloid and lymphoid leukemias. Mutations in the spliceosomal genes SRSF2, U2AF1, SF3B1, and ZRSR2 are commonly found in patients with leukemia and are among the most common class of genetic alterations in clonal hematopoiesis, myelodysplastic neoplasms, and chronic lymphocytic leukemia. These mutations that occur at highly restricted amino acid residues, are always heterozygous, and rarely co-occur with one another. These data suggest that splicing mutations confer an alteration of splicing function and/or that cells may only tolerate a certain degree of splicing modulation.
Dr. Kristin Hope will focus on the role of RNA processing in determining cell states. This has broad implications for hematopoietic development, differentiation, and malignant transformation. RNA-based mechanisms contribute to proper enforcement of the stemness state in hematopoiesis. Dysregulation of these processes can underpin the pathogenesis of hematopoietic malignancies, and acute myeloid leukemia in particular. RNA binding protein (RBP)-directed control of the post-transcriptional landscape is beginning to be appreciated for its importance in the control of cell states. Dr. Hope will describe new strategies to define key RBP regulators of normal versus leukemic stem cells (LSCs), as well as unbiased approaches such as integrative RBP-interactome mapping, transcriptomics and proteomics to identify their RNA substrates and the nature of their effects on RNA metabolism. She will also discuss the potential for carrying out rational manipulations of these circuitries to advance hematopoietic stem cell regeneration or target LSCs.
Chair:
Kathrin M. Bernt
, MD
Children's Hospital of Philadelphia
Philadelphia, PA
Speakers:
Chris B. Burge
, MD, PhD
Massachusetts Institute of Technology
Cambridge, MA
Basic Mechanisms and Significance of Altered Splicing in Cancer and Hematology
Omar Abdel-Wahab
, MD
Memorial Sloan Kettering Cancer Center
New York, NY
Understanding and Targeting Spliceosomal Gene Mutations in Leukemia
Kristin Hope
, PhD
McMaster University
Hamilton, ON, Canada
RNA Processing in Benign and/or Malignant Hematology
Scientific Committee on Hematopoiesis
Hematopoietic Aging: Mechanisms and Consequences
Monday, December 7, 2020, 9:00 a.m. - 9:45 a.m.
Advancing age frequently associates with the onset of a variety of hematological conditions characterized by diminished clonal heterogeneity and homeostatic control of blood cell production. Upstream hematopoietic stem and progenitor cells are obligate mediators of homeostatic control of all blood lineages. Hematopoietic stem cell and progenitor clonality is frequently associated with specific epigenetic changes and mutations resulting in inflammation, an impaired adaptive immune system and an elevated incidence of myeloproliferative diseases. The cell-autonomous and non-autonomous causes of clonal-dependent and clonal-independent hematopoietic aging represent a major area of interest in hematology and vascular biology.
Dr. Danica Chen will highlight a mitochondrial metabolic checkpoint that is critical for the maintenance of hematopoietic stem cells (HSCs) and discuss how dysregulation of that mitochondrial metabolic checkpoint leads to HSC aging. Evidence is provided to support the role of the NLRP3 inflammasome in the mitochondrial metabolic checkpoint of HSC aging and, more broadly, how HSC aging impacts distant tissues and organismal aging. Therapeutic implications will also be discussed.
Dr. Carolina Florian will discuss the aging of the stem cell niche. With aging, intrinsic HSC activity decreases, resulting in impaired tissue homeostasis, reduced engraftment following transplantation, and increased susceptibility to diseases. However, whether aging also affects the HSC niche, impairing its capacity to support HSC function, is still largely debated. Recently, by using in vivo long-term label retention assays, her group demonstrated that aged label-retaining (LR) HSCs in old mice reside predominantly in perisinusoidal niches. These cells are also the most quiescent HSC subpopulation, with the highest regenerative capacity and cellular polarity. Furthermore, studies in her lab have revealed that sinusoidal niches are uniquely preserved in shape, morphology, and number upon aging, and that myeloablative chemotherapy can selectively disrupt aged sinusoidal niches long term. This is linked to the lack of recovery of endothelial Jag2 at sinusoids and to decreased survival of aged mice after chemotherapy. Overall, Dr. Florian’s research has characterized the functional alterations of the aged HSC niche and unveiled that perisinusoidal niches are uniquely preserved and protect HSCs from aging.
Dr. Hartmut Geiger will describe the implications of hematopoietic aging for HSC activity. Aging of HSCs is linked to age-associated remodeling of the immune system, age-associated leukemia, as well as a large number of other age-associated diseases. Besides changes intrinsic to HSCs that are causatively linked to their aging (i.e., changes that cause the altered function found in HSCs from old animals), more recent research supports the idea that changes in the local bone marrow niche microenvironment are also very potent influencers of the aging-associated decline in HSC function. Dr. Geiger’s talk will present novel data and concepts on the causes and consequences of aging of hematopoietic stem cells, and implications for the clinic.
Chair:
Jose A. Cancelas
, MD
University of Cincinnati
Cincinnati, OH
Speakers:
Danica Chen
, PhD
University of California, Berkley
Berkeley, CA
Hematopoietic Stem Cell Aging and its Impact on Lifespan
Maria Carolina Florian
, PhD
Bellvitge Institute for Biomedical Research
Barcelona, Spain
Aging of the Stem Cell Niche
Hartmut Geiger
, PhD
Cincinnati Children's Hospital Medical Center
Cincinnati, OH
Hematopoietic Aging on Hematopoietic Stem Cell Activity
Scientific Committee on Hemostasis
Mechanisms and Modifiers of Bleeding
Monday, December 7, 2020, 9:00 a.m. - 9:45 a.m.
Coagulation is a dynamic process. In its normal setting, it involves a pro-coagulant pathway that results in the development of a fibrin clot that is balanced by mechanisms that limit the extent of the clot to the site of injury. This complex highly regulated system involves interactions between the vessel wall, platelets, and coagulation factors among others. Disruption to this balance may result in either bleeding or thrombosis. Over the last decade great strides have been made in understanding mechanisms and modifiers of this system. This session will present recent developments across diverse fields that continue to push the envelope on our knowledge and understanding of the mechanisms and modifiers of bleeding.
Dr. Mitchell Cohen will discuss the drivers and mechanisms of acute traumatic coagulopathy. Specifically, he will describe the clinical and biologic picture of coagulation and inflammatory perturbations after severe injury and shock. In addition, Dr. Cohen will address translational approaches to the study of these topics and future research.
Dr. Valerie O’Donnell will discuss the interaction of the phospholipid membrane surface of platelets and white blood cells with coagulation factors, specifically the generation and action of enzymatically-oxidized phospholipids formed by lipoxygenases. Her lab has shown that these lipids regulate coagulation during development of abdominal aortic aneurysms (AAA) in mice, and that they are found in human AAA lesions. Extensive in vitro studies have defined the mechanisms of action of these lipids, showing that they enhance the ability of phosphatidylserine to support coagulation.
Dr. Karin Leiderman will discuss a mathematical and computational approach to studying variability in bleeding patterns among individuals with hemophilia A. Uncertainty and sensitivity analysis were recently performed on a mathematical model of flow-mediated coagulation to identify parameters most likely to enhance thrombin generation in the context of FVIII deficiency. Results from those computational studies identified low-normal FV (50%) as the strongest modifier, with additional thrombin enhancement when combined with high-normal prothrombin (150%). Partial FV inhibition (60% activity) augmented thrombin generation in FVIII-inhibited or FVIII-deficient plasma in CAT and boosted fibrin deposition in flow assays performed with whole blood from individuals with mild and moderate FVIII deficiencies; these effects were amplified by high-normal prothrombin levels in both experimental models. Dr. Leiderman will highlight how the mathematical model was used to predict a biochemical mechanism underlying the modified thrombin response.
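To illustrate the flavor of such a parameter sweep (and only the flavor), the sketch below ranks combinations of FV and prothrombin (FII) levels by the output of a deliberately simplified, made-up surrogate for peak thrombin at a fixed low FVIII level; the formula, ranges and weights are assumptions for demonstration and do not come from Dr. Leiderman's flow-mediated coagulation model.

```python
# Illustrative sketch of a parameter sweep over a toy surrogate model of
# thrombin generation. The surrogate formula and parameter ranges are
# hypothetical placeholders, not the coagulation model discussed in the session.
import itertools

def toy_peak_thrombin(fviii, fv, fii):
    """Hypothetical surrogate: relative peak thrombin from factor levels (fractions of normal)."""
    # Placeholder relationship chosen only so the sweep has something to rank.
    return (0.2 + 0.8 * fviii) * (1.5 - 0.5 * fv) * fii

def grid_sweep(fviii=0.01):
    """Evaluate the surrogate over a small grid of FV and FII levels and rank the results."""
    results = []
    for fv, fii in itertools.product([0.5, 0.6, 1.0, 1.5], [1.0, 1.5]):
        results.append(((fv, fii), toy_peak_thrombin(fviii, fv, fii)))
    return sorted(results, key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for (fv, fii), peak in grid_sweep():
        print(f"FV={fv:.0%}, FII={fii:.0%} -> relative peak thrombin {peak:.2f}")
```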
Chair:
Shannon Meeks
, MD
Emory University
Atlanta, GA
Speakers:
Mitchell J. Cohen
University of Colorado
Denver, CO
Understanding the Dynamics of Bleeding in Trauma
Valerie O'Donnell
Cardiff University
Cardiff, United Kingdom
Innate Immune Cell-Derived Phospholipids and Hemostasis
Karin Leiderman
, PhD
Colorado School of Mines
Arvada, CO
A Systems Biology Approach to Identifying Modifiers of Bleeding in Hemophilia
Scientific Committee on Iron and Heme
Well-Regulated vs Malfunctioning Mechanisms of Iron Metabolism
Monday, December 7, 2020, 11:30 a.m. - 12:15 p.m.
As we approach the 20th anniversary of the first description of the central iron-regulatory hormone hepcidin and its molecular target, the cellular iron exporter ferroportin, we will take the opportunity to delve more deeply into the regulation of this axis in normal homeostasis, the cellular mediators of iron metabolism and the contribution of iron status to morbidity/mortality in diseases of ineffective hematopoiesis. Hepcidin expression and subsequent iron flux through intestinal absorption and reticuloendothelial macrophage release is regulated through iron status, erythropoietic drive, and inflammatory mediators. In addition to these systemic regulatory mechanisms, there is an emerging role of intestinal regulation of iron handling by local HIF-2a. Recent work demonstrates that hepatic hepcidin regulates intestinal HIF-2a, and this axis can be targeted in iron-related disorders.
Reticuloendothelial macrophages are the primary source of iron for erythropoiesis given their ability to recycle hemoglobin-derived iron. However, macrophages also have considerable functional and phenotypic plasticity directed by signals from the microenvironment, including iron. The spectrum of macrophage phenotypes derived from exposure to differing iron sources (e.g., hemoglobin, heme, iron) could modulate the inflammatory response, oxidative stress, and the natural history of iron-loaded and hemolytic disease states. The primary toxicity of excess iron is mediated through oxidant generation and oxidative stress, which contribute to a variety of human disease states. Although this toxicity is a well-recognized contributor to organ damage/failure and mortality in iron-loading anemias, the association with poor outcomes in hematologic malignancy and stem cell transplant is becoming established. Well-designed large trials have revealed the deleterious effects of iron overload in myelodysplastic syndrome (MDS) and the beneficial effects of iron chelation.
Dr. Yatrik Shah will describe how cell-autonomous oxygen signaling pathways integrate with systemic hepcidin signaling to control intestinal iron absorption. Intestinal HIF-2a is essential for the local absorptive response to systemic iron deficiency and iron overload. Recent work has uncovered a hetero-tissue crosstalk mechanism, whereby hepatic hepcidin regulates intestinal HIF-2a in iron deficiency, anemia, and iron overload. A decrease in systemic hepcidin alters the activity of intestinal PHDs, which subsequently leads to the stabilization of HIF-2a. Pharmacological targeting of HIF-2a using a clinically relevant and highly specific inhibitor successfully treated iron overload in mouse models. These findings demonstrate a molecular link between hepatic hepcidin and intestinal HIF-2a that controls physiological iron uptake and drives iron hyperabsorption during iron overload.
Dr. Francesca Vinchi will demonstrate how the modulation of macrophage plasticity through the application of either iron sources or scavengers/chelators achieves therapeutic effects that improve disease conditions. Along with the regulation of iron homeostasis, macrophages play a crucial role in the orchestration of inflammatory and tissue remodeling processes through the acquisition of distinct functional phenotypes in response to the surrounding microenvironment. The iron-related and immune functions of macrophages are tightly interconnected: on the one hand, macrophage polarization dictates the expression of iron-regulated genes and determines cell iron handling; on the other, iron availability affects macrophage immune effector functions. Recent observations support a role for free heme and iron in shaping macrophage plasticity towards a pro-inflammatory phenotype. These findings have implications for the pathophysiology of diseases hallmarked by elevated circulating heme and iron, including hemolytic diseases and transfusion- or iv iron-dependent anemias, as well as conditions associated with increased local heme and iron accumulation, including trauma, atherosclerosis, and tumors.
Dr. Norbert Gattermann will address the impact of iron overload and chelation in patients with myelodysplastic syndrome (MDS). There is no reason to believe that transfusional iron overload (IOL) is less toxic in elderly MDS patients than in young thalassemia patients. Comorbidities, in particular cardiac problems, may increase vulnerability to the toxic effects of IOL in elderly MDS patients. Nevertheless, the prognostic impact of IOL is more challenging to prove in MDS, due to considerable overlap between iron-related and age-related clinical problems. Registry studies have consistently shown a survival benefit of iron chelation therapy (ICT) in lower-risk MDS. At least two registry studies have reached high quality through extensive matched-pair analyses, virtually eliminating the bias resulting from uneven distribution of comorbidities and performance scores. The prognostic impact of ICT has recently been corroborated by improved EFS shown in the randomized Telesto trial.
Chair:
Matthew M. Heeney
, MD
Children's Hospital - Boston
Boston, MA
Speakers:
Yatrik Shah
, PhD
University of Michigan
Ann Arbor, MI
Update on Ferroportin Regulation
Francesca Vinchi
, PhD
New York Blood Center
New York, NY
Iron Handling by Macrophages
Norbert Gattermann
, MD
Heinrich-Heine-Universitat
Dusseldorf, Germany
Prognostic Impact of Iron Overload and Iron Chelation in Myelodysplastic Syndromes
Scientific Committee on Megakaryocytes and Platelets
Molecular Basis of Platelet/Megakaryocyte Dysfunction: Novel Approaches
Sunday, December 6, 2020, 2:00 p.m. - 2:45 p.m.
The last few years have witnessed tremendous advances in approaches to study the basic biology of megakaryocytes and platelets and their role in disease. This session focuses on three state-of-the-art approaches that have provided remarkable new insights. The first talk focuses on the application of single-cell multi-omic approaches to advance understanding of megakaryocyte biology and their application to myelofibrosis, an acquired stem cell disorder. The second talk extends this with a focus on the unique insights provided by studies using induced pluripotent stem cells (iPSC) in the context of a disorder associated with germline mutations in the hematopoietic transcription factor RUNX1. These patients are characterized by aberrations in platelet function and number and a predisposition to myeloid malignancies. These studies cover the spectrum from insights into the biology of the disease to potential therapeutic approaches. The last few years have also seen an explosion of information on the gene abnormalities in patients with inherited platelet disorders, and much of this has come through the application of next-generation sequencing, the focus of the third talk. These approaches have advanced understanding of the genetic abnormalities in patients and provided novel and unexpected insights into the causative genes and their role in platelets and megakaryocytes.
Dr. Bethan Psaila will discuss the application of single cell multi-omic approaches to studying normal and aberrant pathways of megakaryocyte differentiation. She will discuss changes to megakaryopoiesis over normal human ontogeny and the mechanisms of megakaryocyte-biased hematopoiesis in myelofibrosis. Single cell approaches can identify heterogeneous megakaryocyte subpopulations with distinct metabolic and inflammatory signatures, and implications for novel approaches for therapeutic targeting of malignant megakaryocytes will be discussed.
Dr. Mortimer Poncz will address the role induced pluripotent stem cells (iPSCs) have played in highlighting mechanisms in inherited platelet/megakaryocyte disorders. RUNX1 is a transcription factor central to hematopoiesis. Haploinsufficiency of RUNX1 results in a clinical disorder termed Familial Platelet Disorder associated with Myeloid Malignancy, or FPDMM, which is associated with quantitative and qualitative platelet defects and an increased risk of myeloid leukemia. Studies of iPSCs derived from patients with FPDMM recapitulate the defect in megakaryopoiesis and have led to new insights into the pathogenesis of this disorder, including a deficiency of megakaryocyte-biased progenitor cells and upregulation of proinflammatory pathways during megakaryopoiesis. These pathogenic insights have also led to therapeutics that prevent the defect in megakaryopoiesis in iPSC and primary-cell ex vivo studies.
Dr. Kathleen Freson will discuss the value of next-generation sequencing (NGS) in providing new insights into platelet and megakaryocyte biology. Patient studies have significantly contributed to our current knowledge of platelet and megakaryocyte biology. NGS-based multi-gene panel tests comprising all platelet disorder genes known today can diagnose about 26 to 48% of patients with platelet function and formation disorders, respectively. This means that many disease-related genes are still unknown, and often totally unexpected genes are discovered in exomes and genomes as candidates for novel platelet disorders. Disease models, platelet transcriptomics and other functional assays are still critical to prove causality and understand their role in platelet and megakaryocyte biology.
Chair:
Angara Koneti Rao
, MBBS
Temple University
Philadelphia, PA
Speakers:
Bethan Psaila
, MD
University of Oxford
Oxford, United Kingdom
Single Cell Approaches to Elucidate Novel and Aberrant Pathways in Megakaryocytes
Mortimer Poncz
, MD
Children's Hospital of Philadelphia
Philadelphia, PA
Exploiting Induced Pluripotent Stem Cells to Unravel Mechanisms in Inherited Platelet/Megakaryocyte Disorders
Kathleen Freson
, PhD
University of Leuven
Leuven, Belgium
Insights into Platelet-Megakaryocyte Biology through Next-Generation Sequencing
Scientific Committee on Plasma Cell Neoplasia
The Immune System in Multiple Myeloma
Monday, December 7, 2020, 11:30 a.m. - 12:15 p.m.
The immune repertoire plays an important role in many cancers, and there has been growing recognition that immune deregulation plays an important and independent role in the progression of malignant plasma cells through the precursor states to active disease. The progression of multiple myeloma is associated with both innate and adaptive immune system dysfunction, notably in the T-cell repertoire. Understanding the interplay between the bone marrow microenvironment, the immune repertoire and malignant plasma cells will be of utmost importance to achieve long-term disease control and potential curability.
Dr. Kyohei Nakamura will discuss the immune system as it related to the progression of the precursor condition MGUS to active myeloma. His lab has been exploring the relative importance of different immune cells and molecules in blood cancers, from their initiation, growth and spread and under therapy. To that end, his research has demonstrated that the pro-inflammatory cytokine IL-18 is critically involved in these hallmarks in multiple myeloma (MM). In addition, blocking TIGIT using monoclonal antibodies (mAbs) increased the effector function of MM patient CD8+ T cells and suppressed MM development. Furthermore, besides examining the role of extracellular adenosine in blood cancers, Dr. Nakamura is now also evaluating models of minimal residual disease as a treatment window of opportunity for MM.
Dr. Paola Neri will discuss insights into the mechanisms that promote tumor escape, cause inadequate T-cell stimulation and impaired cytotoxicity in MM. In addition, she will highlight current immunotherapies being used to restore adaptive T-cell immune responses in MM and describe strategies created to escape these multiple immune evasion mechanisms. Her lab has been examining the complex interaction of BM stromal cells (BMSCs) and malignant cells, which, through bidirectional connections and released cytokines, stimulates disease progression and drug resistance and enables immune escape, and has shown distinct immunophenotyping features using single-cell RNA sequencing and mass spectrometry.
Dr. Madhav Dhodapkar will discuss emerging data from studies to evaluate the immune system in myeloma patients receiving therapy. He will discuss the potential immune signatures in response to therapy in newly diagnosed and relapsed myeloma patients, as the efficacy of T-cell-dependent immunotherapies for myeloma is going to depend on engaging the endogenous T-cell repertoire. He will also discuss the potential applications of different immune monitoring approaches that are providing novel insights for strategies to harness the immune system to treat myeloma.
Chair:
Saad Z. Usmani
, MD, MBBS, MBA
Levine Cancer Institute
Charlotte, NC
Speakers:
Kyohei Nakamura
, MD, PhD
QIMR Berghofer Medical Research Institute
Herston, Australia
The Immune System and Progression from Precursor Condition to Active Myeloma
Paola Neri
, MD
University of Calgary
Calgary, AB, Canada
Immune Deregulation in Active Multiple Myeloma
Madhav V. Dhodapkar
, MBBS
Emory University
Atlanta, GA
Immune Monitoring in Myeloma
Scientific Committee on Red Cell Biology
Location, Location, Location
|Sunday, December 6, 2020, 9:30 a.m. - 10:15 a.m.|
Please view the pre-recorded presentations prior to attending the corresponding Live Q&A session. Access to the on-demand content will be available starting on December 2 and will continue through the duration of your meeting subscription. CME is available for participation in the corresponding Live Q&A.
Erythropoiesis is a complex, carefully orchestrated process that replenishes the billions of erythrocytes lost daily. Producing red blood cells at large scale is a major clinical need. Challenges encountered in producing red blood cells in vitro for clinical use have highlighted the necessity to better understand the processes involved. The molecular mechanisms involved in erythroid differentiation are tightly regulated and compartmentalized in erythroid precursors. In this session, speakers will discuss their work using new technology, including super-resolution microscopy to dissect at nanoscale the compartmentalization of proteins or organelles, cytoskeletal rearrangement and erythroid enucleation, to better understand red blood cell generation. Another topic addressed will be the overall production of red blood cells from reprogrammed fibroblasts in vitro for clinical use.
Dr. Ke Xu will discuss the importance of using super-resolution fluorescence microscopy to understand the ultrastructure of red cells. Recent advances in super-resolution fluorescence microscopy offer exciting new opportunities to probe intracellular structures at ~20 nm resolution with excellent molecular specificity and minimal sample processing. This presentation will shed new light on related cytoskeletal systems in erythropoiesis. Super-resolution fluorescence microscopy opens a new window into understanding the ultrastructure of red cells, and the impact these methods have on our understanding of erythroid differentiation will be discussed.
Dr. Johan Flygare will discuss the background and clinical significance of understanding transcriptional programs regulating the developmental waves of erythropoiesis. He will focus less on what has been learned from loss of function approaches and instead highlight overexpression approaches being used to study key transcription factors in erythropoiesis. Dr. Flygare will include published and unpublished results from his own studies using direct lineage reprogramming from fibroblasts to erythroid progenitor cells and will finish with future perspectives.
Dr. Velia Fowler will discuss how the biogenesis of mammalian red blood cells is a highly orchestrated process of terminal differentiation with a series of cell divisions coupled to dramatic changes in cell and nuclear morphology, culminating in cell cycle exit and nuclear expulsion (enucleation). While enucleation has been assumed to be a type of asymmetric cell division, differences in cell polarity control and nanoscale organization of cytoskeletal structures indicate otherwise. The molecular and structural basis for events of enucleation will be discussed and evaluated critically, with an eye on providing strategies for optimizing red cell production in vitro.
Chair:
Miguel R Abboud
, MD
American University of Beirut
Beirut, Lebanon
Speakers:
Ke Xu
, PhD
University of California- Berkeley
Berkeley, CA
Phase Resolution in Erythropoiesis
Johan Flygare
, MD, PhD
Lund University
Lund, Sweden
Molecules Involved in the Generation of Definitive Hematopoiesis
Velia M. Fowler
, PhD
University of Delaware
Newark, DE
Cytoskeletal Control of Erythroid Properties and Enucleation
Scientific Committee on Stem Cells and Regenerative Medicine
Extrinsic Regulation of Hematopoietic Stem Cell Emergence and Homeostasis
|Saturday, December 5, 2020, 7:30 a.m. - 8:15 a.m.|
Please view the pre-recorded presentations prior to attending the corresponding Live Q&A session. Access to the on-demand content will be available starting on December 2 and will continue through the duration of your meeting subscription. CME is available for participation in the corresponding Live Q&A.
Hematopoietic stem cells (HSCs) undergo carefully-orchestrated, dynamic processes of specification, self-renewal and differentiation to yield the most abundant cells in the body. Approximately 10^15 cells of diverse structure and function are generated from vastly smaller pools of HSCs over the average human lifespan, in a manner highly responsive to developmental and environmental cues. Cell-autonomous functions define HSCs, from classical experimental systems to clinical hematopoietic cell transplantation. As a consequence, HSC-intrinsic factors, such as epigenetic programs, transcription factors, and growth factor signaling pathways, dominate oft-cited models of hematopoiesis. A fuller understanding of the life of HSCs is revealed through the lens of basic stem cell biology, incorporating determinants such as niche contacts, morphogen gradients, physical forces, and changes in these over time. This session will present the latest developments in our understanding of extrinsic factors that regulate HSC development and function in vertebrate systems, from early specification to homeostasis, regeneration and aging.
Dr. Trista North will discuss the role of the extrinsic factors in governing the location, onset and progression of HSC formation in the vertebrate embryo. Following earlier waves of production of lineage restricted progenitors, HSCs develop de novo from the hemogenic endothelium in the embryonic dorsal aorta, via a process termed endothelial to hematopoietic transition. While key transcriptional regulators of hemogenic endothelial specification and HSC formation are well established in the field, it is only more recently appreciated how these pathways are activated to initiate commitment to HSC production, and repress endothelial fate. In particular, extrinsic regulation from the developing embryo appears to play a key role: biomechanical and metabolic stimuli, inflammatory signals, and morphogen gradients converge to coordinately regulate the timing and location of HSC production. Dr. North will outline emerging data describing the integration of extrinsic developmental cues with intracellular signaling networks to regulate the onset and maintenance of HSC formation across vertebrate species, from zebrafish to human.
Dr. John Chute will discuss extrinsic factors that regulate adult HSC homeostasis. Bone marrow endothelial cells (BMECs) have an essential role in regulating HSC regeneration following myelotoxicity, but the mechanisms through which BMECs regulate HSC regeneration are not well understood. Dr. Chute will describe the discovery that semaphorin 3A (SEMA3A) - NRP1 signaling negatively regulates BMEC regeneration following chemotherapy or total body irradiation. Systemic administration of a blocking anti-NRP1 antibody or EC-specific deletion of NRP1 or SEMA3A causes the rapid regeneration of the BM vasculature and the hematopoietic system in irradiated mice. Regenerating BMECs in anti-NRP1-treated mice display significantly increased expression and secretion of R-spondin 2, a Wnt pathway amplifying protein, compared to control BMECs. BM HSCs concordantly upregulate expression of LGR5, a receptor for R-spondin 2. Systemic administration of anti-R-spondin 2 antibody blocks both HSC regeneration and hematologic recovery in irradiated mice that otherwise occurred in response to anti-NRP1 treatment. These studies suggest that BMECs drive hematopoietic regeneration through secretion of R-spondin 2 and activation of LGR5+ HSCs.
Dr. Laura Calvi will discuss characteristics of HSC aging in murine models and in humans. These characteristics have been in part ascribed to cell-autonomous processes, but, given the regulatory interactions of HSCs with their niche, the aged microenvironment would also be expected to contribute. Recent data from multiple laboratories have outlined mechanisms by which the aged microenvironment influences hematopoietic stem cells, providing evidence that aged components of the microenvironment contribute to these changes. Dr. Calvi will review this work and focus particularly on cellular constituents found to impact HSC skewing, as shown by in vivo models. These will include data on aged multipotent stromal cells as well as macrophages.
Chair:
Suneet Agarwal
, MD, PhD
Children's Hosp. Boston
Boston, MA
Speakers:
Trista E. North
, PhD
Boston Children's Hospital
Boston, MA
Extrinsic Factors Governing Hematopoietic Stem Cell Development
John P. Chute
, MD
University of California- Los Angeles
Los Angeles, CA
Regenerative Niche-Hematopoietic Stem Cell Interactions
Laura M. Calvi
, MD
University of Rochester School of Medicine
Rochester, NY
Role of the Niche in Hematopoietic Stem Cell Aging
Scientific Committee on Thrombosis and Vascular Biology
Gut Microbiome and the Endothelium
|Monday, December 7, 2020, 9:00 a.m. - 9:45 a.m.|
Please view the pre-recorded presentations prior to attending the corresponding Live Q&A session. Access to the on-demand content will be available starting on December 2 and will continue through the duration of your meeting subscription. CME is available for participation in the corresponding Live Q&A.
Commensal microbiota are increasingly recognized participants in cardiometabolic diseases and central modulators of immunity, allergies and autoimmunity. The symbiotic relationship of the microbiome with epithelial interfaces crucially depends on an interplay of microbiota-derived metabolites, host innate immune sensing, and regulation of adaptive immunity. These local interactions particularly in the intestinal milieu have profound effects on the vasculature and contribute to thrombosis through a steadily expanding repertoire of recognized molecular mechanisms. This session will highlight recent cutting-edge research deciphering pathways by which microbiota influence the vascular endothelium and touch in this context on cerebral vascular disease and autoimmunity-evoked thrombosis.
Dr. Martin Kriegel will discuss the role of the microbiota in thrombosis with a focus on the antiphospholipid syndrome. He will provide an overview of the pathogenesis and the importance of β2-glycoprotein I in thrombosis and of pathobionts within the human gut microbiota that elicit antibodies cross-reactive with epitopes of β2-glycoprotein I. Dr. Kriegel will link the cross-reactive process with pathogenic autoantibodies leading to thrombosis in an animal model in vivo as well as trophoblast dysfunction in vitro.
Dr. Mark Kahn will present the identification of a gut-brain disease axis for vascular malformation and implications for novel treatment strategies. Loss of function in the genes encoding cerebral cavernous malformation (CCM) adaptor proteins in endothelial cells causes vascular malformations that form in the brain and are a significant cause of stroke and seizure in younger individuals. CCM loss of function results in gain of signaling by the MEKK3-KLF2/4 pathway. Unexpectedly, a major input to this pathway in brain endothelial cells is the TLR4 innate immune receptor, the activity of which is strongly influenced by the gut microbiome and gut epithelial barrier function.
Dr. Weifei Zhu has examined the role of gut microbes in modulating stroke susceptibility and functional recovery after stroke onset. Over the past few years, mechanistic links have been established between nutrients in a western diet, gut microbial formation of the metabolite trimethylamine N-oxide (TMAO), and the development of both platelet hyper-responsiveness and cardiovascular diseases. Dr. Zhu will address the meta-organismal TMAO pathway as a stroke risk factor, depict potential mechanisms contributing to diet-enhanced ischemic stroke risk, and explore novel therapeutic approaches targeting gut microbial contributions for prevention and treatment in cerebrovascular disease.
Chair:
Wolfram Ruf
, MD
Johannes Gutenberg University Medical Center
Mainz, Germany
Speakers:
Martin Kriegel
, MD, PhD
Yale School of Medicine
New Haven, CT
Microbiota and Thrombosis
Mark L Kahn
, MD
University of Pennsylvania
Philadelphia, PA
Microbiome Regulation of Toll-Like Receptor Signaling and Vascular Malformation
Weifei Zhu
, PhD
Cleveland Clinic Foundation
Cleveland, OH
Microbiome-Derived Metabolites Affecting Vascular Function
Scientific Committee on Transfusion Medicine
Novel Blood Therapeutics
|Saturday, December 5, 2020, 9:30 a.m. - 10:15 a.m.|
Please view the pre-recorded presentations prior to attending the corresponding Live Q&A session. Access to the on-demand content will be available starting on December 2 and will continue through the duration of your meeting subscription. CME is available for participation in the corresponding Live Q&A.
While red blood cells (RBCs) play a critical role in the transport and delivery of oxygen to tissue, new research demonstrates that they can be engineered into cargo RBCs that can be used to deliver drugs throughout the body, track red blood cells, visualize blood vessels, or induce immune tolerance to specific antigenic peptides. Additionally, the search for non-cardiotoxic artificial blood substitutes continues, aiming to alleviate blood availability concerns and side effects such as alloimmunization. This session will present cutting-edge advancements in the development and potential benefits of RBC-based therapeutics and novel hemoglobin-based oxygen carriers.
Dr. Vladimir Muzykantov will discuss how red blood cells (RBCs) can be used to deliver drugs. RBCs are ideal natural carriers for diverse therapeutic, prophylactic and diagnostic (imaging) agents. Strategies to load these agents into RBCs include encapsulation into isolated RBCs via transient pores in the cell membrane, genetic modification of RBC precursors, and coupling to the RBC surface. Dr. Muzykantov’s talk will focus on the latter approach, whereby a single injection of compounds targeted to RBC surface determinants can uniquely "paint" circulating RBCs in animal studies, enabling enhanced pharmacokinetics and unusual distribution of the compound cargoes in the body.
Dr. Hidde Ploegh will discuss how the immune system can be retrained to ignore the antigens that usually trigger inappropriate immune responses in autoimmune diseases such as multiple sclerosis and type 1 diabetes using mouse models. Autoimmune diseases are characterized by inappropriate immune responses in which the body destroys its own cells. Dr. Ploegh will discuss how cargo red blood cells (RBCs) loaded with antigenic peptides can be used to redirect the immune system and allow these antigens that usually cause an inappropriate immune response to be tolerated - a method called tolerance induction.
Dr. Leticia Hosta-Rigau will discuss the development of hemoglobin-loaded nanoparticles (Hb-NPs) as a novel type of advanced hemoglobin-based oxygen carriers (HBOCs). She will discuss how Hb-NPs aim at addressing major challenges in the field of blood surrogates such as attaining a high Hb loading and long circulation times. Antioxidant coatings are incorporated into the Hb-NPs in order to minimize the conversion of Hb into nonfunctional methemoglobin. Decoration with PEG results in decreased protein adsorption onto the Hb-NPs surface, suggesting a prolonged retention time within the body. Dr. Hosta-Rigau will also discuss how the Hb-NPs preserve the reversible oxygen-binding and releasing properties of Hb.
Co-Chairs:
Simone A. Glynn
, MD, MPH
National Heart, Lung and Blood Institute
Bethesda, MD
Stella P Chou
, MD
Children's Hospital of Philadelphia
Philadelphia, PA
Speakers:
Vladimir Muzykantov
, PhD
Perelman School of Medicine, University of Pennsylvania
Philadelphia, PA
Drug Delivery by Red Cells
Hidde L. Ploegh
, PhD
Boston Children's Hospital
Boston, MA
Immune Tolerance by Red Cells
Leticia Hosta-Rigau
, PhD
Technical University of Denmark
Kongens Lyngby, Hovedstaden, Denmark
Synthetic Red Cells
Scientific Committee on Transplantation Biology and Cellular Therapies
Challenges in Cell Therapy: Relapse and Toxicities
|Saturday, December 5, 2020, 9:30 a.m. - 10:15 a.m.|
Please view the pre-recorded presentations prior to attending the corresponding Live Q&A session. Access to the on-demand content will be available starting on December 2 and will continue through the duration of your meeting subscription. CME is available for participation in the corresponding Live Q&A.
Hematopoietic transplantation and adoptive cellular therapies are expanding fields with increasing established and experimental indications. Even as growing numbers of patients benefit from cell therapy, we are still confronted with addressing two key challenges: that of disease relapse despite these therapies, and that of toxicities that accompany them. This session will present state-of-the-science advances in understanding the basic biology of relapse and toxicity following diverse cell therapies, ranging from allogeneic stem cell transplantation to TCR therapy to chimeric antigen receptor (CAR) T cell therapy. Integrated within the discussions are novel insights gained from the analysis of clinical trials and patient samples, thus providing an opportunity to synthesize biologic features with relevance to confronting these challenges across cellular therapies.
Dr. John F. DiPersio will discuss strategies to enhance efficacy and reduce the toxicity of allogeneic stem cell transplantation (allo-HCT). Allo-HCT remains the best chance of a cure for many patients with newly diagnosed and relapsed hematologic malignancies and marrow failure states. The curative power of allo-HCT rests in the graft vs. tumor/leukemia (GvT/GvL) effect of alloreactive donor T cells. These same donor T cells mediate many of the life-threatening complications of allo-HCT, including graft vs host disease (GvHD), conditioning-associated morbidity, and cytokine release syndrome (in the case of haploidentical stem cell transplantation), limiting success. Furthermore, despite the potential for GvL, relapse often occurs. The mechanisms of relapse after allo-HCT remain poorly understood, but in some patients relapse may be related to defined pathways of immune escape. Dr. DiPersio will discuss pre-clinical mouse models leading to early clinical trials, which explore the use of chemotherapy- and radiation-free conditioning regimens and novel approaches for preventing GvHD. He will also discuss approaches for overcoming immune escape, especially in patients with AML after allo-HCT.
Dr. Aude Chapuis will discuss novel strategies to improve current anti-cancer treatments employing T cells transduced to express T cell receptors (TCRs). State-of-the-art methods are being used to elucidate challenges to the efficacy of such therapies in individual patients. Based on these newly identified mechanisms, she will discuss how we can now improve the next generation of adoptive T cell therapies. For example, her group has performed intensive gene expression profiling to identify immune evasion mechanisms and the shortcomings of transferred T cells. To overcome these limitations, her group is developing strategies such as multiplexing of high-affinity TCRs, engineering both CD4+ and CD8+ T cells, and tethering a co-stimulatory signal to transferred T cells. In collaborative studies, they use mouse models that recapitulate the human immune environment to validate particular strategies. These results will likely have broad applicability across solid tumors and blood malignancies.
Dr. Chiara Bonini will address the challenges of managing the toxicities associated with genetically engineered T lymphocytes. This revolutionary therapeutic approach is yielding encouraging signs of efficacy, but these innovative cellular therapy products (TCR and CAR redirected T cells) have shown unique toxicity profiles. Such toxicity may be linked to the target antigen and result from on-target/off-tumor reactions, due to antigen recognition on healthy cells and tissues, such as in the case of B-cell aplasia that follows CD19-CAR T cell infusion. In other contexts, toxicity may result from cross-reactivity due to the recognition of epitopes structurally similar to the cancer antigen, as observed with TCR-redirected T cells specific for MAGE-A3 or MAGE-A12 peptides. Excessive activation of innate immunity can be triggered by the synchronous activation of infused T cells, resulting in cytokine release syndrome (CRS), in some cases followed by neurotoxicity. Also, the presence of an intact TCR repertoire on engineered T-cells might result in graft-versus-host disease. Several approaches have been implemented to reduce and control toxicity associated with adoptive cellular therapy. Anti-inflammatory compounds such as anti-IL6R or anti-IL6 monoclonal antibodies have proven to be successful in taming CRS. The selection of cancer antigen combined with the proper affinity of the CAR/TCR used might offer new therapeutic windows. Modifications in construct design, the inclusion of suicide genes in transfer vectors, and the implementation of genome-editing tools in cell manufacturing protocols provide unique opportunities to increase the safety profile of adoptive T cell therapy.
Chair:
Catherine J. Wu
, MD
Dana-Farber Cancer Institute
Boston, MA
Speakers: | https://www.hematology.org/meetings/annual-meeting/programs/scientific-program |
Why Online Customer Reviews are Important for your Business?
Modern customers start their search for products or services with a thorough analysis of the information available on the Internet. They visit a company’s Facebook and LinkedIn pages and read available online customer reviews. Thus it is important to create a strong online presence and maintain it by collecting customer reviews and testimonials. | https://www.providesupport.com/blog/category/articles/
Sunday 12 December 1993
It was Sunday, under the sign of Sagittarius. The US president was Bill Clinton (Democratic).
Famous people born on this day include EJ Jallorina and Zeli Ismail.
In that special week of December people in US were listening to Again by Janet Jackson.
In UK Mr Blobby by Mr Blobby was in the top 5 hits.
What's Eating Gilbert Grape, directed by Lasse Hallström, was one of the most viewed movies released in 1993
while Slow Waltz In Cedar Bend by Robert James Waller was one of the best selling books.
On TV people were watching The Adventures of Pete & Pete.
If you liked videogames you were probably playing Bari-arm or Star Fox.
Historical Events
Which were the important events of 12 December 1993 ?
Holidays:
- Bahá'í Faith - Feast of Masá'il (Questions) - First day of the 15th month of the Bahá'í Calendar
- R.C. Saints - optional memorial of Our Lady of Guadalupe
- Also see December 12 (Eastern Orthodox liturgics)
- Kenya - Jamhuri Day: Independence Day (from Britain, 1963)
Famous Birthdays:
- EJ Jallorina: Filipino actor.
- Zeli Ismail: English footballer.
Famous Deaths:
- Jeremiah Sullivan: Actor (Soldier, Double-Stop), dies of AIDS at 58.
- Jozsef Antall: historian/premier of Hungary (1990-93), dies at age 61.
Facts:
- Any Given Day closes at Longacre Theater NYC after 32 performances
- Kentucky Cycle closes at Royale Theater NYC after 34 performances
- WAQX 104.3 (Q-104) rock format replaces WNCN classic format in New York City
- Péter Boross becomes Prime Minister of Hungary following the death of József Antall.
- Canadian Prime Minister Kim Campbell resigns as head of the Conservative Party, to be succeeded by Jean Charest.
- The Majilis of Kazakhstan approves the nuclear Non-Proliferation Treaty, and agrees to dismantle the more than 100 missiles left on its territory by the fall of the USSR.
- Downing Street Declaration: The United Kingdom commits itself to the search for an answer to the problems of Northern Ireland.
- The Uruguay Round of General Agreement on Tariffs and Trade (GATT) talks reach a successful conclusion after 7 years.
Sport Games on 12 December 1993
NBA - American Basketball
- Golden State Warriors - Los Angeles Lakers : 100 - 97
- Los Angeles Clippers - Sacramento Kings : 112 - 102
- Orlando Magic - Portland Trail Blazers : 103 - 88
NFL - American Football
- Buffalo Bills - Philadelphia Eagles : 10 - 7
- Chicago Bears - Tampa Bay Buccaneers : 10 - 13
- Cincinnati Bengals - New England Patriots : 2 - 7
- Cleveland Browns - Houston Oilers : 17 - 19
Serie A - Italian Football
- Cagliari - Parma : 0 - 4
- Genoa - Foggia : 1 - 4
- Inter - Sampdoria : 3 - 0
- Lazio - Juventus : 3 - 1
La Liga Primera Division - Football (Spain)
- Ath Bilbao - Ath Madrid : 3 - 2
- Celta - Albacete : 1 - 4
- Lerida - Valladolid : 1 - 0
- Logrones - Zaragoza : 2 - 2
Magazine Covers
What news were making the headlines those days in December 1993?
- EW: Entertainment Weekly: no.200
- Time: Computer-altered photograph by Gianfranco Gorgoni-Contact Press Images.
- SportsIllustrated: Sports Illustrated: no.9410
| https://takemeback.to/12-December-1993
This invention relates to methods of manufacturing blades of combustion turbine engines and, specifically, to the use of a particular internal core arrangement in the casting of turbine blades, and to blades having internal cooling configurations formed in this manner.
Conventional combustion turbine engines include a compressor, a combustor, and a turbine. As is well known in the art, air compressed in the compressor is mixed with fuel which is burned in the combustor and expanded in the turbine, thereby rotating the turbine and driving the compressor. The turbine components are subjected to a hostile environment characterized by the extremely high temperatures and pressures of the hot products of combustion that enter the turbine. In order to withstand repetitive thermal cycling in such a hot environment, structural integrity and cooling of the turbine airfoils must be optimized.
As one of ordinary skill in the art will appreciate, serpentine or winding cooling circuits have proven to be an efficient and cost effective means of air cooling the shank and airfoil portions of rotor and stator blades in combustion turbine engines, and such cooling schemes have become very sophisticated in modern engines. The airfoils typically include intricate internal cooling passages that extend radially within the very thin airfoil. The radial passages are frequently connected by a plurality of small passages to allow the flow of cooling air between the larger flow passages. Fabrication of airfoils with such small internal features necessitates a complicated multi-step casting process.
A problem with the current manufacturing process is the fabrication and maintenance of the cores used in the casting and the low yield rates achieved by conventional processes. The main reason for the low yields is that during the manufacturing process of airfoils, a ceramic core that defines the cooling passages of the airfoil often either breaks or fractures. There are a number of factors that contribute to such a high percentage of ceramic cores becoming damaged. First, ceramic, in general, is a brittle material. Second, the airfoils are very thin and subsequently, the cores are very thin. Finally, the small crossover passages and other intricacies in the airfoil result in narrow delicate features that are easily broken under load.
Another drawback is that the fragile nature of the ceramic cores results in production constraints that limit more optimal cooling schemes. In many instances it may be more advantageous for the airfoil cooling and engine efficiency to have smaller crossover holes or more intricate geometric features. However, more intricate cooling passages are sometimes not practical, since the current manufacturing process already yields an insufficiently small number of usable airfoils and has a high percentage of ceramic cores being damaged. More intricate cooling schemes would result in even lower manufacturing yields and even higher cost per airfoil. Thus, there is a great need to improve manufacturability of the gas turbine engine airfoils to reduce the cost of each airfoil as well as to improve cooling schemes that accomplish this.
| |
Hawks Sign Onyeka Okongwu, Skylar Mays and Nathan Knight
ATLANTA – The Atlanta Hawks have signed rookie forward/center Onyeka Okongwu, rookie guard Skylar Mays and rookie forward/center Nathan Knight, the team announced today. Mays and Knight have been signed to two-way contracts. Per team policy, terms of the agreements were not disclosed.
Drafted sixth overall by the Hawks in the 2020 NBA Draft, the 6’9 Okongwu led the USC Trojans in points per game, rebounds per game and blocks per game in his only collegiate season, averaging 16.2 points, 8.6 boards and 2.7 blocks in 28 appearances (all starts) in 2019-20. He earned All-Pac-12 First Team honors and was a member of the Pac-12 All-Freshman team after leading the conference in FG% (.616), ranking second in blocks and second in total offensive rebounds (92).
The Chino, Calif. native scored in double figures on 25 occasions and recorded 11 double-doubles, including five contests with at least 20 points and 10 rebounds. He set a USC freshman record with 76 blocked shots, including a school-record tying eight rejections in his first collegiate game on 11/5/19 against Florida A&M.
Mays, selected in the second round (50th overall) by the Hawks in the 2020 NBA Draft, averaged 16.7 points, 5.0 rebounds, 3.2 assists and 1.8 steals in 31 games (all starts) en route to an All-SEC First Team selection as a senior in 2019-20. A native of Baton Rouge, LA, the 6’4 Mays became the first player in LSU history to record at least 1,600 points, 400 rebounds, 300 assists and 200 steals.
A summa cum laude graduate with a degree in kinesiology, Mays was the 2019-20 COSIDA Academic All-American Player of the Year, a three-time Academic All-American and a two-time SEC Scholar-Athlete of the Year.
Knight finished the 2019-20 season averaging 20.7 points, 10.5 rebounds, 1.8 assists and 1.5 blocks in 29.6 minutes (.524 FG%, .773 FT%) starting all 32 games, finishing second nationally in double-doubles (23). He earned the 2020 Lou Henson National Mid-Major Player of the Year as well as the Colonial Athletic Association’s Player of the Year and Defensive Player of the Year awards.
The 6’10 Knight earned his degree in business analytics from William & Mary’s Raymond A. Mason School of Business. The Syracuse, NY native attended Nottingham High School in his hometown before finishing at Kimball Union Academy in New Hampshire. | |
Let’s begin by admitting that we tend to accumulate a lot of clothes without really appreciating the good pieces we have. The Japanese are famous when it comes to de-cluttering and organization (remember Marie Kondo?), and we truly believe that there’s pleasure to be found in organising your wardrobe.
If you’re feeling overwhelmed by the amount of clothes in your closet and can’t find anything to wear, here are a few helpful steps to cleaning it out and getting rid of items that are only occupying space.
1) Choose a day when you have a few hours to spare
Cleaning out your wardrobe takes time, so make sure you have more than just a frantic half hour to go through all your clothes. Start by taking out all your clothes and shoes from your closet and lay them on your bed or floor. You’ll be surprised at the number of items you have when you see everything piled up together. Separate your clothes according to type, and if you have many, you can separate them by colour.
2) Select, select, select
Look at each item of clothing separately and assess whether it still fits, if it suits your body shape and, importantly, whether you actually wear it. When it comes to size, remember that clothes that are too small or too tight can actually make you look bigger. Put these aside because chances are you’ll never wear them. If you’ve lost weight, consider whether any loose-fitting or larger size items still look good on you. If not, get rid of them.
After this, consider the style and quality of each remaining piece. Do you like wearing them and do they bring out your body’s best physical features? Will they last a long time? Are they easy to care for? Is the fabric still in good shape and is the colour still as it was originally? If the answer is no to any of these questions, you’ll know what to do. If some of your favourite clothes have broken zippers or missing buttons, consider whether they’re still salvageable.
3) Keep only the good pieces
After you’ve finished selecting, you’ll see how many items you have left. Consider the pieces that you cannot do without and those that you don’t wear and won’t be missed. If you find you need some new essentials to replace the ones you’ve gotten rid of, make a list and purchase pieces that are truly timeless and versatile and that you can count on for many seasons and occasions to come. At the end of the day, your clothes serve a purpose, and that’s to make you feel comfortable and look your best at all times. | https://ew-wardrobe.com/blogs/tips-by-enemy-in-the-wardrobe/de-cluttering-your-wardrobe |
Please join us Tuesday, February 28, 2020, on Zoom online. I will be channeling St. Peregrine for physical, emotional and spiritual healing.
I will channel St. Peregrine, to give the group an insight into learning how we can pull toxins from our body. You will learn and be guided on how to use energy on affected areas and where you are drawn to go with assistance from the divine using your own intentions.
We are all capable of healing ourselves and connecting with the guidance of spirit, whether it's to alleviate pain (physical) and/or tear down blocks that are stopping us (emotional), and so much more! Once you initiate guidance it will always be with you for the course of life. It is just asking God/Source to help you through the things that you need.
Once I receive payment, I will send you the zoom link to download. Download the zoom application to participate. Please do it 7 minutes before the starting time. Click on computer audio. You may choose to not be seen on the zoom.
e. | https://www.itisjustme.com/events/ |
To achieve the elimination of emissions by 2040 natural gas, petroleum, nuclear and coal would need to become obsolete. Data supplied by an individual who follows the energy industry revealed that in 2019 the United States consumed 100.2 quadrillion British thermal units (BTU) of energy of which 11% came from renewable energy. Thus, 88% of the Btu's consumed was from petroleum, natural gas, coal and nuclear sources.
If we look solely at wind, a GE Haliade-X turbine produces 19,408,807,65.53 Btu's annually. A simple division of the 88,145,547,000,000,000 Btu's equals 4,541,524 turbines. We would need to install 652.5 wind turbines per day over 6,960 days.
Using a similar methodology a four acre solar farm would produce 462.11 million Btu’s per year and require almost 763 million acres, or 1.19 million sq. miles of land. The United States is 3.8 million sq. miles, thus, 31.37% of America's surface would be solar panels. We would need to install 168.45 sq. miles of solar panels per day to eliminate the need for fossil fuels.
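(As a rough check only: the letter's two divisions can be reproduced in a few lines of Python, reading the per-turbine output as roughly 19.4 billion Btu and taking the per-farm output as stated above; neither figure is independently verified here.)

    fossil_btu  = 0.88 * 100.2e15      # 88% of 100.2 quadrillion Btu
    turbine_btu = 19.4e9               # approximate annual Btu per Haliade-X turbine (letter's figure)
    farm_btu    = 462.11e6             # annual Btu per 4-acre solar farm (letter's figure)

    turbines = fossil_btu / turbine_btu
    acres = 4 * fossil_btu / farm_btu
    print(round(turbines))             # about 4.5 million turbines
    print(round(acres / 640))          # about 1.2 million square miles of panels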
How much of our agricultural land and food production would be lost and what food would replace it? Soylent Green!
Walter Rucinski,
Kalispell
| https://missoulian.com/opinion/letters/democratic-plan-for-clean-energy/article_96a6f99c-7c71-5550-9d77-d02fe5e61b8f.html
Ethernalis is an ancient world where the winds of powerful magic blow across the lands, which affects many aspects of the game.
In general, non-rune based magic is frowned upon in the cities and civilised settlements, treated more as witchcraft and greatly distrusted, unlike the use of imperially approved runic practices taught in the academies found in all major imperial cities.
Magic has been known to the populace of the world since the beginning of time. With the abundance of magical energies, nearly anyone can be taught simple spell casting, but only a dedicated few, who spend centuries mastering their path, can ascend to power once thought beyond their mortal form.
The most common form of safe magic practice by the citizenry is the use of scrolls. Scrolls are not affected by the wild mana outbursts that are often associated with witchcraft. Although casting a spell contained on a scroll requires mana, it does not require magical skill, as the ritual required to cast the spell has already been performed by the scroll's creator. Scrolls cannot fail while being used, but are consumed by use.
Characters with the scroll binding skill will be able to craft new spell scrolls of the spells available in their spell book
The magically talented characters have a small chance to learn the spell contained within the scroll being used without the aid of a spell book, giving many opportunities to learn.
All spells require mana to be activated; if you don't have enough mana and attempt to cast a spell, the spell will fail and your character will end up losing the turn. The scroll however will not be consumed by the failed action. Mana regenerates slowly every few turns. The regeneration rate depends strongly on your character's class and can be affected by potions and equipment. | http://ethernalis.com/game/scrolls-and-magic.php
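As an illustration only (nothing here is actual game code, and the names and numbers are invented for the sketch), the scroll and mana rules described above come down to a few lines of logic:

    # Hypothetical sketch of the scroll-casting rules described above.
    def cast_from_scroll(character, scroll):
        # Scrolls never fizzle on their own, but they still need mana
        # and are only consumed when the cast actually happens.
        if character["mana"] < scroll["mana_cost"]:
            character["turn_lost"] = True       # spell fails and the turn is lost...
            return False                        # ...but the scroll is NOT consumed
        character["mana"] -= scroll["mana_cost"]
        character["scrolls"].remove(scroll)     # consumed by use
        return True

    hero = {"mana": 5, "turn_lost": False, "scrolls": []}
    fireball = {"name": "Fireball", "mana_cost": 8}
    hero["scrolls"].append(fireball)
    print(cast_from_scroll(hero, fireball))     # False: not enough mana, scroll is kept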
You need two things to determine the equation of a line: a point and a slope. These are the very basic things that are required. (If you have two points, you still must determine the slope before writing the equation.) If the slope of a line is m, and if it contains the point (x1,y1), then the equation can be written y - y1 = m(x - x1). For instance, if a line containing (5,7) has slope 3, its equation can be written y - 7 = 3(x - 5). If you want the equation in the form y = mx + b, simply solve for y, obtaining y = 3x - 8.
Make sure you can interpret slope, which is change in y divided by change in x. If a line has slope 3.4, this means that for every unit increase in x, y increases by 3.4 units. If a line has slope -2/3, this means that if x increases by 3 units, y decreases by 2 units.
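If you like checking this kind of computation with a computer, here is a small Python sketch (the helper name line_from_point_slope is made up for illustration) that builds y = mx + b from a point and a slope and reproduces the y = 3x - 8 example above:

    def line_from_point_slope(m, x1, y1):
        # y - y1 = m(x - x1)  ->  y = m*x + (y1 - m*x1)
        b = y1 - m * x1
        return b

    b = line_from_point_slope(3, 5, 7)
    print(b)            # -8, so the line is y = 3x - 8
    print(3 * 5 + b)    # 7, confirming the line passes through (5, 7)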
I like to think of a numerical function as a rule that assigns to a number another unique number. For instance, when you write f(x) = 3x - 2,
then f is a rule that says "take a number, multiply it by 3, then subtract 2." What does the rule f do to 10? Well, f(10) = 3(10) - 2 = 28. That is, f says "take 10, multiply it by 3, then subtract 2." This yields 28. In other words, the rule f assigns to 10 the number 28.
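The same "rule" idea translates directly into code; a tiny Python sketch of the rule above:

    def f(x):
        # the rule: take a number, multiply it by 3, then subtract 2
        return 3 * x - 2

    print(f(10))  # 28, so the rule f assigns 28 to 10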
Math is a language. If you will read it properly, and interpret it properly as you do your assignments, you will gain MATH POWER on a daily basis. | http://herkimershideaway.org/algebra2/doc_page15.html |
Q:
Count the number of "special subsets"
Let $A(n)$ be the set of natural numbers $\{1,2, \dots ,n\}$.
Let $B$ be any subset of $A(n)$, and let $S(B)$ be the sum of all elements of $B$.
Subset $B$ is a "special subset" if $S(B)$ is divisible by $2n$ (Mod$[S(B),2n]=0$).
Example: $A(3)=\{ 1,2,3 \}$, so we have only two "special subsets" - $\varnothing$ and $\{1,2,3\}$.
$A(5)=\{ 1,2,3,4,5 \}$, so we have $4$ "special subsets" - $\varnothing, \{1,4,5\}, \{2,3,5\}, \{1,2,3,4\}$.
Let $F(n)$ be the number of all "special subsets" of $A(n)$, $n \in \mathbf{N}$.
I found for $n<50$ that $F(n)-1$ is the nearest integer to $\frac{2^{n-1}}{n}$.
$F(n)$=Floor$[\frac{2^{n-1}}{n} + \frac{1}{2}] + 1$.
Is it possible to prove this formula for any natural $n$?
A:
Let $B_j$ be independent Bernoulli random variables with parameter $1/2$, i.e. they take values $0$ and $1$, each with probability $1/2$, and $X = \sum_{j=1}^n j B_j$. Thus $X$ is the sum of a randomly-chosen subset of $A(n)$. Your $F(n) = 2^n P(X \equiv 0 \mod 2n) = \frac{2^n}{2n} \sum_{\omega} E[\omega^X]$ where the sum is over the $2n$'th roots of unity ($\omega = e^{\pi i k/n}, k=0,1,\ldots, 2n-1$).
Now $$E[\omega^X] = \prod_{j=1}^n E[\omega^{j B_j}] = \prod_{j=1}^n \frac{1 + \omega^j}{2}
$$
For $\omega = 1$ we have $E[1^X] = 1$, so this gives us a term $2^{n-1}/n$. Each $\omega \ne 1$ is a primitive $m$'th root of $1$, i.e. $\omega = e^{2\pi i k/m}$ where $m$ divides $2n$ and $\gcd(k,m)=1$. Now $E[\omega^X] = 0$ if some $\omega^j = -1$. This is true iff $m$ is even. Each primitive $m$'th root for the same $m$ gives the same value for $E[\omega^X]$ (the same factors appear, just in different orders). It appears to me (from looking at the first few cases) that if $\omega$ is a primitive $m$'th root with $m$ odd,
$E[\omega^X] = 2^{-n+n/m}$. Now there are $\phi(m)$ primitive $m$'th roots, so I get
$$ F(n) = \frac{2^{n-1}}{n} + \sum_m \frac{\phi(m)}{n} 2^{n/m-1} $$
where the sum is over all odd divisors of $n$ except $1$.
It's not true that $F(n) -1$ is the nearest integer to $2^{n-1}/n$, although $2^{n-1}/n$
is the largest term in $F(n)$. For example, $F(25) = 671092$ but $2^{25-1}/25 = 671088.64$.
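(Not part of the original answer, just a sanity check: the closed form above can be compared against a direct subset count for small $n$ with a few lines of Python.)

    from itertools import combinations
    from fractions import Fraction
    from math import gcd

    def F_direct(n):
        # count subsets of {1,...,n} whose sum is divisible by 2n (empty set included)
        return sum(1 for r in range(n + 1)
                     for c in combinations(range(1, n + 1), r)
                     if sum(c) % (2 * n) == 0)

    def phi(m):
        # Euler's totient, by brute force
        return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

    def F_formula(n):
        total = Fraction(2**(n - 1), n)
        for m in range(3, n + 1, 2):      # odd divisors of n, except 1
            if n % m == 0:
                total += Fraction(phi(m) * 2**(n // m - 1), n)
        return total

    for n in range(1, 16):
        assert F_direct(n) == F_formula(n), n
    print("formula matches the brute-force count for n = 1..15")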
| |
Q:
Best way to simplify a polynomial fraction divided by a polynomial fraction as completely as possible
I've been trying for the past few days to complete this question from a review booklet before I start university:
Simplify as completely as possible:
( 5x^2 -9x -2 / 30x^3 + 6x^2 ) / ( x^4 -3x^2 -4 / 2x^8 +6x^7 + 4x^6 )
However, I've only gotten as far as this answer below:
( (x -1) / 6x^2 ) / ((x^2 +1)(x^2 -4) / (2x^4 +4x^3)(x^4 + x^3))
I can't figure out how to simplify it further. What is the best / a good way to approach such a question that consists of a polynomial fraction divided by a polynomial fraction?
Is it generally a good idea to factor each fraction first then multiply them like I attempted above, or is it better to multiply them without factoring then try to simplify one big fraction?
A:
\begin{align}
&\;\frac{ 5x^2 -9x -2 }{ 30x^3 + 6x^2 } \div \frac{ x^4 -3x^2 -4}{ 2x^8 +6x^7 + 4x^6 }\\
=&\;\frac{(x-2)(5x+1) }{ 6x^2(5x+1) } \times \frac{ 2x^6(x+1)(x+2)}{(x-2)(x+2)(x^2+1)}\\
=&\; \frac{x^4(x+1)}{3(x^2+1)}
\end{align}
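(An aside, not from the original answer: if you want to double-check a simplification like this, a computer algebra system such as SymPy will cancel the common factors for you.)

    from sympy import symbols, factor

    x = symbols('x')
    expr = ((5*x**2 - 9*x - 2) / (30*x**3 + 6*x**2)) / \
           ((x**4 - 3*x**2 - 4) / (2*x**8 + 6*x**7 + 4*x**6))
    print(factor(expr))   # prints the reduced form, equivalent to x**4*(x + 1)/(3*(x**2 + 1))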
| |
**Chiral Symmetry Breaking in Gribov’s Approach to QCD at Low Momentum**
Alok Kumar[^1]
The Institute of Mathematical Sciences
C.P.T. Campus, Taramani Post
Chennai 600 113
India.
*Abstract*
We consider Gribov’s equation for the inverse quark Green function with and without pion correction. With a polar parametrization of the inverse quark Green function, we relate the dynamical mass function without pion correction, $M_{0}(q^2)$, and with pion correction, $M(q^2)$, at low momentum. A graph is plotted for $M(q^2)$ and $M_{0}(q^2)$ as functions of q at low momentum. It is found that at low momenta pion corrections are small.
Gribov \[1-5\] developed an approach to describe the confinement mechanism and chiral symmetry breaking in QCD, based on the phenomenon of supercritical charges in QCD due to the existence of very light quarks. A mechanism of confinement was given by Gribov and further elaborated by Ewerz \[6\]. The related phenomenon chiral symmetry breaking has also been dealt with by Gribov. In this letter, we confine ourselves to the chiral symmetry breaking in Gribov’s approach at low momentum transfer.
In most of the studies of chiral symmetry breaking, the Schwinger-Dyson integral equation is used, with suitable approximation methods. Briefly, the one-loop self energy diagram for the quarks gives $$\begin{aligned}
\Sigma &=& -i\, C_{F}\, \frac{\alpha_{s}}{\pi}\, \int\, \frac{d^{4}k}{4\pi^2}\,\gamma^{\mu}\,G (k)\,\gamma_{\mu}\,\frac{1}{(q-k)^2},\end{aligned}$$ in the Feynman gauge for the gluon propagator, with $G(k)$ the Green function for the quark, $\alpha_{s}$ the strong coupling, which is taken to be a constant in the low momentum region, and $C_{F} = \frac{N_{c}^2-1}{2N_{c}}$, where $N_{c}$ is the number of color degrees of freedom; for $N_{c}=3$, $C_{F}=\frac{4}{3}$. By differentiating (1) twice with respect to $q_{\mu}$ (external momentum), one obtains $$\begin{aligned}
\partial^{2}G^{-1}(q) &=& C_{F}\,\frac{\alpha_{s}}{\pi}\,\gamma^{\mu}\,G(q)\,\gamma_{\mu}.\end{aligned}$$ Inclusion of higher order diagrams for the most singular contribution is essentially done when both derivatives are applied to the same gluon line, and this replaces the bare quark-gluon vertices $\gamma_{\mu}$ in (2) by the full vertices $\Gamma_{\mu}$, which, due to the delta functions appearing in $\partial^{2}$ acting on the gluon propagator in the Feynman gauge, are at vanishing gluon momentum, i.e. $\Gamma_{\mu}(q,q,0)$. It is to be noted that while the original Schwinger-Dyson equation involves one bare and one full vertex, in Gribov’s approach we have two full vertices \[7\]. The use of the Ward identity $$\begin{aligned}
\Gamma_{\mu}(q,q,0) &=& \partial_{\mu}G^{-1}(q),\end{aligned}$$ then gives $$\begin{aligned}
\partial^{2}G^{-1}(q) &=& g\,(\partial^{\mu}G^{-1})\,G\,(\partial_{\mu}G^{-1})\, + \cdots \, ,\end{aligned}$$ where $g = C_{F}\,\frac{\alpha_{s}}{\pi}$ and the dots in (4) stand for less infrared-singular terms, which are neglected here. In this way, the integral Schwinger-Dyson equation is converted into a partial differential equation for $G(q)$, and this is made possible by the choice of the Feynman gauge. The remarkable feature is that (4) involves only the quark Green function.
The general form of the inverse quark Green function is $$\begin{aligned}
G^{-1}(q) &=& a(q^2)\,\not\!{q} - b(q^2), \end{aligned}$$ where a and b are two unknown scalar functions of $q^{2}$. A polar parametrization of the Green function in (5) is given by $$\begin{aligned}
G^{-1}(q) &=& - \rho\,\exp\left( -\frac{1}{2}\,\phi\,\frac{\not\!{q}}{q}\right),\end{aligned}$$ where $\rho$ and $\phi$ are functions of $q^2 \,(q = \sqrt{q^{\mu}q_{\mu}})$. From (5) and (6), we have $$\begin{aligned}
a(q^2) &=& \frac{1}{q}\,{\rho}\,\sinh\left(\frac{\phi}{2}\right),\end{aligned}$$ $$\begin{aligned}
b(q^2) &=& {\rho}\,\cosh\left( \frac{\phi}{2}\right).\end{aligned}$$ The dynamical mass function $M_{0}(q^2)$ of the quark is given by $$\begin{aligned}
M_{0}(q^2) &=& \frac{b(q^2)}{a(q^2)} = q \, \coth\left( \frac{\phi}{2}\right).\end{aligned}$$ which involves only $\phi$. The subscript ’0’ on M will be explained later. Introducing $\xi$ as $$\begin{aligned}
\xi &\equiv & \ln q = \ln \sqrt{q^\mu q_\mu},\end{aligned}$$ and denoting $\partial_\xi f(q) = \dot{f}(q)$, the Gribov’s equation (4) gets converted into a pair of coupled differential equations for $\phi$ and $\rho$ as $$\begin{aligned}
\dot{p} &=& 1 - p^2 - \beta^2 \,\left( \frac{1}{4}\,\dot{\phi}^2 + 3 \,\sinh^2\left( \frac{\phi}{2}\right) \right),\end{aligned}$$ $$\begin{aligned}
\ddot{\phi} + 2 \,p \,\dot{\phi} - 3 \,\sinh\, (\phi) &=& 0 ,\end{aligned}$$ where $$\begin{aligned}
p &=& 1 + \beta\,\frac{\dot{\rho}}{\rho},\end{aligned}$$ with $$\begin{aligned}
\beta &=& 1 - g = 1- C_F\,\frac{\alpha_s}{\pi} .\end{aligned}$$ By solving (11) and (12) for $\phi$ and $\rho$ for large and small q, it was found \[2,4\] that the dynamical mass function (9) $M_{0}(q^2)$ behaved such that $M_{0}(0)\neq \,0$. This is a signature of chiral symmetry breaking. In the spontaneous breaking of chiral symmetry, massless pions appear as Goldstone modes in the physical spectrum and they produce corrections to the quark propagator. Taking into account this back-reaction of pions on quarks, Gribov obtained a ’pion corrected’ equation for $G^{-1}(q)$. The coupling of the pion to the quark can be related to the pion decay constant $f_{\pi}$ via the Goldberger-Treiman relation, by taking into account the proper isospin factor for light quark flavours. The pion corrected equation for the quark Green function (see the review \[8\] and Gribov \[9\]) is $$\begin{aligned}
\partial^{2} G^{-1} &=& g\,(\partial^{\mu} G^{-1})\,G\,(\partial_\mu G^{-1})
- \frac{3}{16 \pi^2 f_\pi^2}\,\{i \gamma_5,G^{-1} \}\,G \,\{i \gamma_5,G^{-1} \} \end{aligned}$$ where $f_{\pi}$ is the pion decay constant, 0.093 GeV. It is to be noted that the pion corrected Gribov’s equation is still a differential equation involving only the light quark’s Green function.
Using the same parametrization for this improved equation (14) as in (6) with $(\rho,\phi)$ replaced by $(\rho^{\prime},\phi^{\prime})$, i.e., $$\begin{aligned}
G^{-1} &=& - \rho^{\prime} \exp \left( -\frac{1}{2} \phi^{\prime} \frac{\not\!{q}}{q}\right),\end{aligned}$$ (14) yields, $$\begin{aligned}
\dot{p^{\prime}} &=& 1 - {p^{\prime}}^2 - \beta^2\, \left( \frac{1}{4} \dot{\phi^{\prime}}^2 + 3 \sinh^2\left(\frac{\phi^{\prime}}{2}\right)\right)
\nonumber\\
&&{}
+ \frac{3 \beta q^2}{4\pi^2 f_{\pi}^{2}}\cosh^{2}\left( \frac{\phi^{\prime}}{2} \right), \end{aligned}$$ $$\begin{aligned}
\ddot{\phi^{\prime}} + 2\,p^{\prime}\,\dot{\phi^{\prime}} - 3\,\sinh\,(\phi^{\prime}) &=& 0. \end{aligned}$$ It is to be observed that the form of the equation for $\phi^{\prime}$ is the same as that for $\phi$ in (12). The dynamical mass with pion correction is $$\begin{aligned}
M(q^2) &=& {q}\,\coth \left( \frac{\phi^{\prime}}{2} \right).\end{aligned}$$ For low momentum, $|\overrightarrow{q}|\rightarrow 0$, we linearize the pair of equations (16) and (17) around $(\rho,\phi)$ $$\begin{aligned}
\phi^{\prime} = \phi + \delta{\phi} \quad \text{and} \quad p^{\prime} = p + \delta{p},\end{aligned}$$
2\,\delta{p}\,\dot{\phi} - 3\,\cosh\,(\phi)\,\delta{\phi} &=& 0,\end{aligned}$$ and the p-equations (11) and (16) give $$\begin{aligned}
\delta{p} &=& \left( -\frac{3\beta^{2}}{4p} + \frac{3\beta q^2}{16 \pi^2 {f_{\pi}}^{2}p} \right)\,\sinh\,(\phi)\,\delta{\phi} + \frac{3\beta q^2}{8 \pi^2 {f_{\pi}}^{2}p}\,\cosh^2 \left( \frac{\phi}{2} \right).\end{aligned}$$ From (19) and (20), we find $$\begin{aligned}
\delta{\phi} \,\left[ \frac{\coth\,(\phi)}{2\dot{\phi}} + \frac{\beta^2}{4p} - \frac{\beta \,q^2}{16 \pi^2 {f_{\pi}}^{2}p} \right] &=& \frac{\beta\,q\, M_{0}(q^2)}{16 \pi^2 {f_{\pi}}^{2}p},\end{aligned}$$ where we have used, $ \coth\left( \frac{\phi}{2}\right) = \frac{M_{0}(q^2)}{q}$ from (9). The dynamical mass with pion correction (18) is $$\begin{aligned}
M(q^2)&=&q\coth\left(\frac{\phi+\delta\phi}{2}\right),\nonumber\\
&\approx&q \,\left[ \frac{\coth \left( \frac{\phi}{2}\right) + \frac{\delta{\phi}}{2}}{1 + \frac{\delta\phi}{2}\coth\left(\frac{\phi}{2}\right)}\right],\nonumber\\
&\approx&q\,\left[\coth\left(\frac{\phi}{2}\right)+\frac{\delta{\phi}}{2}\right]\left[1-\frac{\delta{\phi}}{2}\coth\left(\frac{\phi}{2}\right)\right],\nonumber\\
&\approx&q\left[\coth\left(\frac{\phi}{2}\right)+\frac{\delta{\phi}}{2}\left(1-\coth^2\left(\frac{\phi}{2}\right)\right) \right],\nonumber\end{aligned}$$ where we have kept the terms linear in $\delta\phi$. We use the relation $ \coth\left( \frac{\phi}{2}\right) = \frac{M_{0}(q^2)}{q}$ from (9), and $M(q^2)$ becomes, $$\begin{aligned}
M(q^2) &=& M_{0}(q^2)+q\left(\frac{\delta\phi}{2}\right)\left(1-\frac{M_{0}^2(q^2)}{q^2}\right),\end{aligned}$$ substituting $\delta{\phi}$ from (21) in (22), we find $$\begin{aligned}
M(q^2) &=& M_{0}(q^2)\left[ 1+\left(\frac{\beta q^2}{32\pi^2 f_{\pi}^2p}\right)\left(\frac{1}{\alpha}\right)\left(1-\frac{M_{0}^2(q^2)}{q^2}\right)\right],\end{aligned}$$ where $$\begin{aligned}
\alpha &=& \left[ \frac{\coth\,(\phi)}{2\dot{\phi}} + \frac{\beta^2}{4p}-\frac{\beta q^2}{16\pi^2 f_{\pi}^2 p}\right].\end{aligned}$$ Equation (23) gives a relationship between the dynamical mass of quarks with pion correction, $M(q^2)$, and without pion correction, $M_{0}(q^2)$, at low momentum. This is our main result. Further, the expression in (23) and $\alpha$ involve solutions to (11) and (12). It can be seen from equation (23) that in the limit $f_{\pi}\rightarrow \infty$ (i.e. no pion correction) $M(q^2) \rightarrow M_{0}(q^2)$.
Now we consider the solutions of (11) and (12) in the infrared region $q \rightarrow 0$. In \[6\], one possible solution when $|\overrightarrow{q}|\rightarrow 0$ is $p \rightarrow p_{0}$ with $p_{0}^2=1$ and $\phi = C \, e^{\xi}$ for $p_{0}=1$; the arbitrary constant C has the dimension of inverse length. We use the expansion for $\coth(x)$ and keep the first three terms only \[10\], $$\begin{aligned}
\coth(x) &\approx& \frac{1}{x} + \frac{x}{3} - \frac{x^3}{45}.\end{aligned}$$ Using the solution for $\phi$ at low momentum $\phi = C q$ and $\dot{\phi} = C q$, the dynamical mass without pion correction $M_{0}(q^2)$ is given by, $$\begin{aligned}
M_{0}(q^2) &=& \frac{2}{C} + \frac{C\,q^2}{6} - \frac{C^3\,q^4}{360},\end{aligned}$$ and $\alpha$ is given by $$\begin{aligned}
\alpha &=& \frac{1}{2}\,\left(\frac{1}{C^2\,q^2}+\frac{1}{3}-\frac{C^2\,q^2}{45}\right)\,+\,\frac{\beta^2}{4}-\frac{\beta\,q^2}{16\,\pi^2\,f_{\pi}^2},\end{aligned}$$ and the dynamical mass with pion correction $M(q^2)$ is given by $$\begin{aligned}
M(q^2)&=&M_{0}(q^2)\left[1+\frac{\beta C^2 q^2\frac{\left(q^2-M_{0}^2(q^2)\right)}{16\pi^2 f_{\pi}^2}}{1+(\frac{1}{3}+\frac{\beta^2}{2})C^2\,q^2-\frac{C^4\,q^4}{45}-\frac{\beta\,C^2\,q^4}{8\,\pi^2 \,f_{\pi}^2}}\right].\end{aligned}$$ This is valid in the low momentum region only. In the limit $q\rightarrow 0$, we find $M(0)\rightarrow M_{0}(0)=\frac{2}{C}\neq 0$. For space-like momenta we replace ’$q^2$’ by ’-$q^2$’, and equations (25) and (29) change to, $$\begin{aligned}
M_{0}(q^2) &=& \frac{2}{C} - \frac{C\,q^2}{6} - \frac{C^3\,q^4}{360},\end{aligned}$$ $$\begin{aligned}
M(q^2)&=&M_{0}(q^2)\left[1+\frac{\beta\,C^2\,q^2\frac{\left(q^2+M_{0}^2(q^2)\right)}{16\pi^2 f_{\pi}^2}}{1-(\frac{1}{3}+\frac{\beta^2}{2})C^2\,q^2-\frac{C^4\,q^4}{45}-\frac{\beta\,C^2\,q^4}{8\,\pi^2 \,f_{\pi}^2}}\right].\end{aligned}$$ We use (28) and (29) to exhibit the behaviour of $M_{0}(q^2)$ and $M(q^2)$ at low momentum. We use $f_{\pi}=0.093\,GeV$ \[11\] and the arbitrary constant C is taken to reproduce the numerical value of $M_{0}(0)=M(0)$ as estimated in \[6\]. In \[6\], $M_{0}(0)=M(0)=0.1\,GeV$ and so from (25), $C = 20GeV^{-1}$. At low momenta we take the strong coupling constant to be constant and use the supercritical value $\alpha_{c}=0.43$ as found in \[2,4\] by Gribov. For this value of $\alpha_{s}$, $\beta =1-g\,= 0.8175\,$. We plotted the variation of $M_{0}(q^2)$ and $M(q^2)$ for q=0 to 0.045 GeV using Matlab. The output graph is given in Figure1 and the solid line corresponds to variation of $M_{0}(q^2)$ and the broken line is for $M(q^2)$. It is found from the Figure1 that in the low momentum region pion correction to quark’s mass is small. This feature is similar to the study of \[6\] at large momentum.
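For readers who want to reproduce the curves, the space-like low-momentum expressions for $M_{0}(q^2)$ and $M(q^2)$ just above are simple enough to evaluate directly. A minimal Python sketch (not the original Matlab script; it only re-uses the stated parameter values $C=20\,GeV^{-1}$, $f_{\pi}=0.093\,GeV$ and $\beta=0.8175$) is:

    import numpy as np
    import matplotlib.pyplot as plt

    C, f_pi, beta = 20.0, 0.093, 0.8175      # GeV^-1, GeV, dimensionless

    q = np.linspace(0.0, 0.045, 200)         # GeV
    M0 = 2.0/C - C*q**2/6.0 - C**3*q**4/360.0
    num = beta * C**2 * q**2 * (q**2 + M0**2) / (16.0 * np.pi**2 * f_pi**2)
    den = (1.0 - (1.0/3.0 + beta**2/2.0) * C**2 * q**2
               - C**4 * q**4 / 45.0
               - beta * C**2 * q**4 / (8.0 * np.pi**2 * f_pi**2))
    M = M0 * (1.0 + num / den)

    plt.plot(q, M0, 'k-',  label='$M_0(q^2)$, no pion correction')
    plt.plot(q, M,  'k--', label='$M(q^2)$, with pion correction')
    plt.xlabel('q [GeV]')
    plt.ylabel('dynamical mass [GeV]')
    plt.legend()
    plt.show()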
**Acknowledgements**
I thank Prof. R. Parthasarathy (IMSc, Chennai and CMI, Chennai) for showing the problem and providing constant help and encouragement during the completion of this work. The award of a JRF fellowship by IMSc is acknowledged with thanks.
**References**
1. V.N. Gribov, Phys. Scripta **T 15** (1987) 164.

2. V.N. Gribov, *Possible Solution of the Problem of Quark Confinement*, Lund preprint LU-TP 91-7 (1991).

3. V.N. Gribov, Eur. Phys. J. **C 10** (1999) 91 \[hep-ph/9902279\].

4. V.N. Gribov, Eur. Phys. J. **C 10** (1999) 71 \[hep-ph/9807224\].

5. V.N. Gribov, Orsay lectures on confinement (I-III): LPT Orsay 92-60, hep-ph/9403218; LPT Orsay 94-20, hep-ph/9404332; LPT Orsay 99-37, hep-ph/9905285.

6. C. Ewerz, *Gribov's Equation for the Green Function of Light Quarks*, Eur. Phys. J. **C 13** (2000) 503-518 \[hep-ph/0001038\].

7. C. Ewerz, *Gribov's Picture of Confinement and Chiral Symmetry Breaking*, talk presented at the Gribov-75 Memorial Workshop, Budapest, May 2005 \[hep-th/0601271\].

8. Yu.L. Dokshitzer and D.E. Kharzeev, *The Gribov Conception of Quantum Chromodynamics*, Ann. Rev. Nucl. Part. Sci. **54** (2004) 487-524 \[hep-ph/0404216\].

9. V.N. Gribov, *The Gribov Theory of Quark Confinement*, ed. J. Nyiri, World Scientific Publication, 2001.

10. A. Jeffrey, Handbook of Mathematical Formulas and Integrals, 3rd ed., Elsevier Academic Press (2004).

11. M.E. Peskin and D.V. Schroeder, An Introduction to Quantum Field Theory, Westview Press (1995).
[^1]: e-mail address: [email protected]
Team Colombia appears to have a true snatch specialist on their hands. On Dec. 14, 2021, 96-kilogram weightlifter Lesman Paredes Montaño set a new world record in the snatch of 187 kilograms (412.2 pounds) at the 2021 Weightlifting World Championships. This, coupled with his 213-kilogram (469.5-pound) clean & jerk, facilitated his gold-medal win via a 400-kilogram (881.8-pound) total.
Paredes Montaño, who had a prior career as a junior athlete in 2015, returned from a competitive hiatus in 2021 to become the 2021 Pan-American Champion in the 102-kilogram class. His time away from the platform appears to have been to his benefit, as his performance narrowly bested the current 96-kilogram Olympic Champion, Fares Ibrahim El-Bakh of Qatar.
You can see Paredes Montaño’s record-breaking snatch below, courtesy of /r/weightlifting user /u/TheYKcid, as well as some extra context on how he pulled it off, and the podium results from the session:
[Related: Watch Lasha Talakhadze Pull Off a 150-Kilogram Muscle Snatch]
2021 World Weightlifting Championships Men’s 96-Kilogram Results
- Lesman Paredes Montaño (COL) — 400 kilograms (187/213), Gold
- Fares Ibrahim El-Bakh (QAT) — 394 kilograms (172/222), Silver
- Keydomar Giovanni Vallenilla Sánchez (VEN) — 391 kilograms (177/214), Bronze
Paredes Montaño set himself up for success in the snatch portion of the event. He was the heaviest athlete to appear and the only one to even attempt a lift at or above 180 kilograms (396.8 pounds), which he made on his first attempt. After nailing 187, he attempted a massive 190 kilograms (418.8 pounds) but was unsuccessful.
By the time the clean & jerks arrived, Paredes Montaño had established a lead that the other participants would struggle to surpass. Even though he missed his first clean & jerk, by securing 213 kilograms on his final lift, he had firmly beaten both the gold and silver medalists — El-Bakh and Vallenilla Sánchez — from the Tokyo Olympics. El-Bakh made a valiant effort at 229 kilograms (504.8 pounds) to win it all, but failed to catch his clean.
Pulling Off a World Record Snatch
Putting a world-class lift overhead is often a perfect storm. For Paredes Montaño to break the snatch record, formerly held by 2016 Olympic Champion Sohrab Moradi, he needed multiple factors to go his way on the day.
To open a full eight kilograms beyond any other athlete in the field suggests that Paredes Montaño both had a successful training program leading up to the event and was technically precise during his warm-up attempts. It is common for weightlifters to exercise caution during their snatches, due to the technical demands of the lift itself. Opening high suggests that Paredes Montaño and his coaches were likely confident about their prospects.
Further, Paredes Montaño is undeniably built to perform in the snatch. He has uncommonly long arms, which allow him to take a very wide grip on the bar. Large hands allow him to have a more secure hook grip, which improves his pulling confidence. Being taller than other athletes in his class, usually a detriment in the sport, gives Paredes Montaño more time to develop momentum and force.
It certainly helps that Paredes Montaño is exceptionally flexible. His mobility in the hip, ankle, and shoulder joints let him receive the barbell in an extremely stable and upright position, which is essential for making snatches look easy. While it is rare for an athlete to excel so strongly in one lift, things clearly worked out in Paredes Montaño’s favor.
Snatching Victory
A snatch world record from Paredes Montaño rockets him into the conversation of potential podium athletes in an already-stacked weight class. His performance is also reminiscent of athletes like Andrei Rybakou, famous throughout their careers for being exceedingly talented at one specific competition movement.
Although the 2021 World Championships in Tashkent, Uzbekistan, from Dec. 7-17, are drawing slowly to a close, athletes like Paredes Montaño are proving there’s still plenty of opportunities to set records and make history. | https://barbend.com/lesman-paredes-montano-snatch-world-record-187-kilograms/ |
PROVIDENCE, Rhode Island, Jan. 25 (TNSRep) -- The Rhode Island Department of Transportation, Motor Vehicles Division and Office of Energy Resources with support from the Environmental Management Department and Health Department issued a 91-page report in December 2021 entitled "Electrifying Transportation: A Strategic Policy Guide for Improving Public Access to Electric Vehicle Charging Infrastructure in Rhode Island."
(Continued from Part 3 of 4)
* * *
Electric Grid
Building out an electric transportation system will require additional decarbonized electric supply, upgrades to the electric grid, innovation in electricity programs, and planning to ensure resilience during power outages. Near full electrification of our light-duty, medium-duty, and heavy-duty vehicles will require roughly 6,000 GWh of electricity on an annual basis - for reference, that's equivalent to about three-quarters of our current annual electricity consumption./47
Not only will our electric grid need to be built out to deliver this much electricity, but we will need to onboard additional renewable energy systems to generate this electricity if we are to meet our net-zero greenhouse gas emissions mandates by 2050./48
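As a rough illustration of how an estimate of this size is built up (the vehicle counts, mileage, and efficiency figures below are hypothetical placeholders, not the report's modeling inputs), a back-of-envelope calculation might look like this:

```python
# Hypothetical back-of-envelope estimate of annual EV charging demand.
# All inputs are illustrative placeholders, not figures from the report.
fleet = {
    # segment: (vehicle count, annual miles per vehicle, kWh per mile)
    "light_duty":  (800_000, 12_000, 0.32),
    "medium_duty": ( 50_000, 22_000, 1.10),
    "heavy_duty":  ( 12_000, 65_000, 2.30),
}

total_gwh = 0.0
for segment, (count, miles, kwh_per_mile) in fleet.items():
    gwh = count * miles * kwh_per_mile / 1e6   # kWh -> GWh
    total_gwh += gwh
    print(f"{segment:12s} {gwh:7.0f} GWh/yr")

print(f"{'total':12s} {total_gwh:7.0f} GWh/yr")  # same order of magnitude as the ~6,000 GWh cited
```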
The Division of Public Utilities and Carriers should evaluate the costs and benefits of proposals that create an integrated strategy in Rhode Island to support the state's clean transportation goals with a framework that will consider electric rate impacts, ensure transportation decarbonization benefits, and enable a competitive market and private investment, as well as grid integration.
Identify impacts
First, we need to understand the impacts of electric transportation on the electric grid so we can strategically address them. Indeed, current electric grid planning processes account for expected load growth due to electrification on both the distribution and transmission systems./49
However, as recommended in the technical and economic analysis The Road to 100% Renewable Electricity by 2030 in Rhode Island, utilities and stakeholders should explore the concept of integrated grid planning. This concept considers key drivers of electricity system needs, such as projections for transportation electrification using local knowledge and expectations, over longer timescales to better understand and plan for changing future system needs.
Mitigating grid strain
Electric vehicle charging causes strain on the electric grid primarily when vehicles are charged at the same time as peak demand - when Rhode Islanders are using the most electricity at the same time during the course of the year. Peak demand occurs on a daily basis in the afternoon and evening hours, and on an annual basis in the hot summer months./50
Therefore, we should continue programs and policies that incentivize charging vehicles during off-peak hours to help alleviate strain on the electric grid. There is further potential for future technology to not only reduce grid strain but also provide grid benefits through vehicle-to-grid services. Examples include:
* At-home charging: Simply encouraging and enabling at-home charging may help to reduce grid strain due to the lower voltage of the charger. However, at-home charging should be supplemented with clear price and information signals about the true costs of electricity throughout the day.
* Off-peak charging incentive programs: Such programs give drivers a reward based on their charging behavior. National Grid has run an off-peak charging rebate pilot program since 2018 - this pilot has shown real decreases in peak charging in return for nominal incentives and informational feedback about charging behavior.
* Demand response programs: These programs pay customers to reduce electricity consumption during hours of peak electricity demand. Electricity use may be ramped down either manually or automatically depending on the capabilities of technology. A growing number of electric vehicles and chargers are capable of accepting signals about peak periods and responding by pausing charging.
* Time-varying rates: Electricity rates for the majority of Rhode Islanders are 'flat rates,' where the cost of electricity is the same regardless of the time at which that electricity is used. In contrast, time-varying rates allow prices to differ throughout the day and the year, thus sending more accurate signals about the true cost of electricity. A prerequisite for time-varying rates is deploying advanced metering infrastructure that can record electricity consumption at various intervals throughout the day (a simple illustration of this price signal is sketched after this list)./51
* Demand charges: Another common type of rate structure charges an additional price based on how much electricity is demanded at any instance. DCFC charge vehicles quickly because they are capable of transferring large amounts of electricity, and therefore have high demand. Demand charges provide a signal to incentivize DCFC to ramp down charging rates during times of peak demand. Indeed, at least one company's DCFC comes standard with technology that can automatically ramp down charging rates to avoid demand charges.
* Vehicle-to-grid services: Future vehicle technology may be able to provide vehicle-to-grid services that benefit the electric grid and reduce system costs. These technological capabilities should continue to be monitored and, when they enter the market, may be incentivized through strategically designed pay-for-performance programs.
* Energy efficiency: Increases in electricity demand and consumption can be offset by foundational investments in energy efficiency, which is our least-cost resource. Utility energy efficiency programs are required by Least-Cost Procurement statute, and it is imperative to continue these programs as we electrify.
* System utilization: While time-varying rates and demand charges are responsive to system-wide price dynamics, optimizing system utilization is responsive to the dynamics of the local electric grid. Consistent with concepts presented in Power Sector Transformation, Rhode Island may consider statutory, regulatory, or programmatic changes to the utility business model to incentivize the utility to promote electrification and charging that makes better use of the wires and substations that make up our electric grid. In fact, using the electric grid more consistently throughout the day and the year can actually put downward pressure on electricity rates, which would create a positive feedback loop to further encourage electrification./52
* Grid Modernization: Modernizing our electric grid involves not just replacing or repairing equipment at the end of its life, but making proactive investments that result in cost-effective benefits to customers. Those benefits may come in the form of easier or less costly integration of electric vehicle charging infrastructure and distributed energy resources, improved reliability and customer service, and avoided costs to operate and maintain the electric grid.
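As a simple illustration of the time-varying price signal described in the list above (the rates, peak window, and charging session are assumptions for illustration only, not actual Rhode Island tariff values), consider one overnight Level 2 charging session:

```python
# Illustrative comparison of a flat rate vs. a simple time-of-use (TOU) rate
# for one charging session. Rates and hours are assumptions for illustration
# only, not actual tariff values.
FLAT_RATE = 0.22                               # $/kWh, all hours
TOU_RATES = {"peak": 0.35, "off_peak": 0.12}   # $/kWh
PEAK_HOURS = set(range(14, 20))                # assumed peak window: 2 pm - 8 pm

def session_cost(start_hour, hours, kw, tou=True):
    """Cost of charging at a constant kW for a number of whole hours."""
    cost = 0.0
    for h in range(start_hour, start_hour + hours):
        hour = h % 24
        if tou:
            rate = TOU_RATES["peak"] if hour in PEAK_HOURS else TOU_RATES["off_peak"]
        else:
            rate = FLAT_RATE
        cost += kw * rate
    return cost

# 7.2 kW Level 2 charger, 6-hour session
print("flat rate, 5 pm start:", session_cost(17, 6, 7.2, tou=False))
print("TOU rate,  5 pm start:", session_cost(17, 6, 7.2))
print("TOU rate, 11 pm start:", session_cost(23, 6, 7.2))
```

Under these assumed numbers, a session that overlaps the peak window costs more than it would on the flat rate, while the same session shifted to late evening costs roughly half as much, which is exactly the signal a time-varying rate is meant to send.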
"And I have to say it -- time of use rates for electricity! I am concerned everyone will plug in while they are at work - and further tax the electric system in the afternoons when it's already at its peak. Let's give folks an incentive to plug in at night/evening -- and the infrastructure to do it!"
Access and capacity
Information about the ease and cost of hooking up new electric vehicle charging stations to the electric grid should be readily available and up to date so developers and customers can make decisions with full knowledge. National Grid hosts the RI System Data Portal, which provides information about how heavily loaded sections of the electric grid are. To improve use of this system for identifying economic areas for charging station deployment, National Grid should update data as frequently as practical and, to the extent possible, connect the dots between loading data and ability to add electric vehicle charging stations.
Furthermore, businesses with fleets may consider partnerships that optimize use of electric vehicle charging infrastructure and thereby reduce upfront and ongoing costs. Coordination among fleets and with the electric grid can also improve system utilization. The utility or third party may provide value as a sort of matchmaker between businesses to share charging station infrastructure and locate the infrastructure appropriately on the electric grid.
"[M]y company ... integrates storage in our DC fast charging units to reduce that strain that's created on local utility grids ... [and] also leveraging that storage to charge the battery at the charging unit overnight and then being able to charge the vehicles directly from the battery during the day so you're capturing that off peak energy making sure that you're lowering your operational expenses for the site host and then you're also making it less costly for the drivers to charge their vehicle."
Decarbonization
The transition to electric vehicles immediately reduces greenhouse gas emissions relative to internal combustion engines fueled by gasoline, and provides immediate public health benefits. However, the transition to electric vehicles only reduces greenhouse gas emissions to the levels required by the Act on Climate if our electricity demand is met by renewable energy resources. Rhode Island's current mechanism to decarbonize electricity supply - the Renewable Energy Standard - only requires a minimum of 38.5% renewable electricity by 2035. The General Assembly should consider an amendment to strengthen the Renewable Energy Standard or other legislative mechanism to ensure we fully decarbonize our electric supply. Furthermore, people and businesses who have electric vehicle charging may consider installing on-site renewable energy to ensure their additional electricity use is decarbonized./53
"I love when I go someplace, and I see a parking lot with a canopy over it with solar-sourced electricity to charge the vehicles. I would love to see something like this more widely utilized here in Rhode Island."
"The one thing that I would really love to see is more push ... [to] require or incentivize having solar over existing parking lots or over any parking lot for that matter. It puts that all that impervious cover to good use, and it would provide a direct source of energy right there where the charging station is."
Resilience
As climate change causes more frequent and extreme weather conditions, risk of power outages may increase if not mitigated. Therefore, the transition to vehicles that run on electricity also comes with the risk of not being able to charge those vehicles during power outages. Note that gas stations are also impacted by power outages and must rely on backup generation (either a generator or microgrid) to pump gas. While gas stations have generally had decades to build up resilience investments, resilience should be a consideration at the outset for electric vehicle charging stations./54
This risk can be mitigated in several ways:
First, on-site backup power, such as a battery energy storage system, can provide continued ability to charge during a power outage.
Second, some public charging stations may be designated as resilience hubs - an integrated combination of electric vehicle charging stations, renewable energy, and battery energy storage - and may be available for charging even when the electric grid is down. Such resilience hubs may be strategically located throughout the state and in proximity to evacuation routes and transportation corridors.
Third, mobile charging units - essentially battery energy systems on trucks - may be deployed to meet drivers where they and their vehicles are./55
The State may consider exploring a public-private partnership to offer no-cost roadside assistance specifically for electric vehicle drivers.
Building out resilience in our transportation system broadly, and deploying a network of resilient charging capability specifically, should be further considered, integrated into other statewide and emergency planning exercises via coordination with the Division of Statewide Planning and the Emergency Management Agency (EMA), and allocated funding to catalyze demonstration projects in the near-term and statewide deployment in the long-term.
As part of its next Evacuation Route study, EMA should conduct an internal audit related to charging station access during times of emergency. In this audit, EMA should inventory charging stations along evacuation routes, identify needs for additional charging station infrastructure, and assess the need for mobile or other emergency charging services.
"Are plans established to deal with EVs ... in the case of natural disasters that may render electrical charging infrastructure disabled?"
"I think the plan should include [Emergency Management Agency] considerations - how do we deal with power outages or storms? Today people rush to fill gas tanks - will people all plug in at the same time and how will that work?"
Workforce
Installing and maintaining electric vehicle charging infrastructure requires a set of skills related to electrical trades, fluency with software and information technology, and construction methods. As we accelerate deployment of charging stations in Rhode Island, we must ensure we are developing the workforce to meet a growing volume of demand. Increased adoption of electric vehicles and slow phase out of internal combustion engines will require the full supply chain of vehicle sales, mechanic services, and recyclers to broaden their expertise to electric vehicles. In parallel, potential reductions in need for gas stations and intake of fossil fuel deliveries at ports will necessitate careful planning and deliberate upskilling of workers to ensure a just transition. Finally, we must ensure that the benefits of electrifying our transportation sector - including contracts of companies and jobs for Rhode Islanders - are realized by those who have been historically underserved, in particular by black, indigenous, and people of color.
"... [M]ake sure that electricians that are doing the work installing charging stations are properly trained in Rhode Island and that we take full advantage of the job creating opportunities. I want to make sure we have the right number of people trained so that the amount of work is commensurate to the number of people that are qualified. I think that it's a really great opportunity for Rhode Island to create some clean energy sector jobs and just love to see recommendations about how that liaisoning can happen between the multiple departments."
There are several secondary school programs in Rhode Island accredited by the National Institute for Automotive Service Excellence that offer automotive technology, autobody, and diesel mechanic training. The National Automobile Dealers Association provides curriculum through the Automotive Youth Educational Systems, which is a partnership between manufacturers, dealers, and secondary school programs. Participating dealerships fill entry-level positions from these partnerships and manufacturers use them to recruit students to automotive careers. Post-secondary training is available at New England Institute of Technology and other regional schools. This training is often sponsored by specific manufacturers and relates to specific technologies.
Becoming an electrician in Rhode Island requires 576 hours of classroom time and 8,000 hours of on-the-job training through an apprenticeship program with a licensed electrician. Passing the journeyman exam allows an electrician to become licensed and start their own business. There is one union electrical Joint Apprenticeship and Training Committee associated with the International Brotherhood of Electrical Workers local 99 in Cranston. Accepted apprentices can expect to spend five years completing a program. There is also a nonunion apprenticeship program associated with the Rhode Island chapter of the Associated Builders and Contractors of Rhode Island. Classroom time can also be gained through several career and technical programs in Rhode Island.
The Bureau of Labor Statistics reports that in May 2017 there were 2,370 auto service technicians, 690 automotive body repair technicians, and 550 truck mechanics working in Rhode Island. Many auto service technicians are trained by and are working at dealerships on specific makes and models of cars. Others are employed at small auto mechanics throughout the state and may not have as much access to training on specific new technologies. In 2018, there were 2,530 employed electricians in Rhode Island. There are also electrician apprentices working with licensed electricians.
The Rhode Island Department of Labor and Training should hold industry convenings with electric vehicle charging station developers, auto mechanics, and electricians to understand projected needs and challenges as electric vehicle adoption increases, and to identify potential future training and development opportunities.
Automotive training programs may need to maintain or expand their partnerships with manufacturers to access current and new technology. Retraining of those currently in the workforce, especially those who do not work at a dealership, will need to be supported. Some jobs may be lost as electric car models need fewer repairs and their repair is limited by technology.
The General Assembly may choose to explore right-to-repair laws as an important part of supporting local auto mechanics who would otherwise not be able to make repairs. Such legislation would require automobile manufacturers to provide the same information to independent repair shops as they do for repair shops at auto dealers. It is important for small local shops to have access to the technology to repair these vehicles, otherwise only dealers will be able to service electric vehicles (other than changing tires and brakes). Right-to-repair legislation would promote a level playing field for who can service a car.
More electricians may need to be trained and opportunities for electrician apprenticeships will need to be expanded. There will need to be a pipeline for minority students to enter the trades and join unions. Some opportunities that may be explored include youth skills programs, grant funding to support workforce development, public-private partnerships to expand training access, and regional coordination. Lastly, incentive programs and workforce development programs should consider how to support workers in fossil fuel-driven industries, like gas stations, oil service stations, and others.
Public entities should also consider their procurement practices for electric vehicle charging infrastructure vendors, as some procurement choices may affect the ability of minority business enterprises to win contract bids. Entities may examine the scale at which procurements occur, and whether procurements are done for individual projects in parallel or in series. If procurement of services for electric vehicle charging equipment is parsed into smaller projects, more small local businesses could potentially bid on those contracts. If a set of projects or services goes out to bid separately but in parallel, the procuring body may be better able to meet a minimum target of minority business enterprises in their portfolio of selected vendors. Carefully considering vendors both for individual projects and as a whole portfolio will ensure a minimum number of jobs and the economic benefits of vehicle electrification will be delivered to communities historically underserved and overburdened by our transportation and procurement system.
Data Tracking & Reporting
Data is required not only to track progress towards electrification and greenhouse gas reduction targets in Rhode Island, but to continually evaluate metrics related to an equitable clean transportation transition and advancement of community-prioritized transportation outcomes. Data collection will guide the strategic planning of charging infrastructure development, electric vehicle incentives, public transportation, and more. The goal of data collection is to promote equitable adoption of electric vehicles, expand charging infrastructure efficiently, and understand the demographics of electric vehicle owners and potential barriers to entry.
The Executive Climate Change Coordinating Council, in coordination with the Division of Motor Vehicles, Office of Energy Resources, and Department of Transportation, should develop and maintain a clean transportation dashboard.
Transportation metrics such as vehicle miles traveled (VMT), the makes and models of electric vehicles registered in Rhode Island, the average time spent charging, and popular charging locations are valuable opportunities for Rhode Island to improve data collection.
* VMT can be used to estimate on-road transportation emissions and driving trends. VMT is estimated by the Department of Transportation and is submitted annually to the Federal Highway Administration (FHWA). County and street-level VMT data can provide insight to traffic patterns, potential charging infrastructure locations, and air quality near high volume traffic corridors.
* The makes and models of electric vehicles registered in Rhode Island shows what vehicles are in high demand. For instance, are PHEVs more popular than BEVs? Are electric SUVs and trucks purchased at a similar proportion as gasoline SUVs and trucks? Are high-end luxury electric vehicles being purchased at a higher rate than affordable electric vehicle models?
* Understanding public charging station usage is critical to the efficient buildout of charging infrastructure. If stations are always in use, more stations can be added nearby to satisfy the demand for charging. Conversely, if a charging station sees low amounts of traffic, additional stations can be prioritized in other areas.
While the State of Rhode Island is in the beginning stages of transportation data collection, progress has been made to collect active electric vehicle registrations on a quarterly basis, identify county level transportation patterns and vehicle counts, and determine other avenues for future data collection.
Data collection is a cross-agency effort requiring collaboration between the Department of Transportation, the Division of Motor Vehicles, the Department of Environmental Management, and the Office of Energy Resources. The collaboration includes brainstorming, reviewing data requests, and determining what data is available at each agency.
Methodologies to improve data collection are always considered; one recent development uses an electric vehicle VIN decoder to determine the number of electric vehicles registered in Rhode Island. The electric vehicle VIN decoder includes VIN strings from all BEV and PHEV vehicles sold in the United States. The Division of Motor Vehicles matches each electric vehicle VIN string to active registration data to determine how many electric vehicles are registered in Rhode Island. The registration data is used to create a map highlighting the concentrations of electric vehicles by zip code in Rhode Island.
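A minimal sketch of that matching step is shown below; the prefix table, file name, and column names are hypothetical stand-ins, and a real decoder matches on more of the VIN than a short prefix.

```python
import csv
from collections import Counter

# Hypothetical table mapping VIN prefixes to EV models. A real decoder covers
# every BEV/PHEV sold in the US; these rows are examples only.
EV_VIN_PREFIXES = {
    "5YJ3": "Tesla Model 3 (BEV)",
    "1N4AZ": "Nissan Leaf (BEV)",
    "1G1FW": "Chevrolet Bolt EV (BEV)",
}

def decode_ev(vin):
    """Return the EV model if the VIN starts with a known EV prefix, else None."""
    for prefix, model in EV_VIN_PREFIXES.items():
        if vin.upper().startswith(prefix):
            return model
    return None

def count_evs_by_zip(registration_csv):
    """Count active EV registrations per ZIP code from a registrations file
    assumed to have 'vin' and 'zip' columns."""
    counts = Counter()
    with open(registration_csv, newline="") as f:
        for row in csv.DictReader(f):
            if decode_ev(row["vin"]):
                counts[row["zip"]] += 1
    return counts

# Example usage (hypothetical file name):
# counts = count_evs_by_zip("active_registrations.csv")
# for zip_code, n in counts.most_common(10):
#     print(zip_code, n)
```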
Other transportation data collection efforts underway include historical electric vehicle counts, the average age of vehicles by zip code, and information on the secondary market for electric vehicles in Rhode Island.
Electric vehicle data, transportation trends, and demographic data can be compiled into a clean transportation dashboard for public use. Some states have dashboards already; a good example is the Evaluate Dashboard created by Atlas Public Policy for New York and Colorado. Other New England states are also pursuing this particular dashboard software for tracking their data./56 A dashboard puts transportation data in one place for the public and state agencies to review. Considerations as to where the dashboard will be online, what data it will include, and how often it will be updated should be further discussed and coordinated among agencies and with public input.

In addition to tracking vehicle data, state agencies should also track data related to vehicle electrification's impact on macroeconomic factors (e.g. jobs and businesses) as well as public health metrics (e.g. environmental quality and incidence of asthma). Doing so may require additional coordination with the Department of Labor and Training and the Department of Health.

/56 The United States Climate Alliance has a funding opportunity for states interested in developing a dashboard with Atlas Public Policy. New England states pursuing this opportunity include Maine, Connecticut, and Vermont.
Understanding the demographics of people who purchase electric vehicles is important to ensure equitable electric vehicle adoption and provide access to resources. Who are the main buyers of electric vehicles? What age group is buying electric vehicles most frequently? What level of income is most likely to purchase an EV? Are EVs being purchased by homeowners and apartment dwellers equally? Where are these EVs being sold in Rhode Island? Answers to these questions will be helpful to determine a logical pathway forward for Rhode Island's charging infrastructure buildout and electric vehicle incentive opportunities. State agencies should work with communities to understand their priority outcomes and metrics needed to track progress toward those outcomes.
2022 Priority Actions for EC4 Agencies
This section prioritizes a specific meaningful action item for all agencies represented by the Executive Climate Change Coordinating Council (EC4). By including these priority action items in this Guide, agencies - and the specific points of contact listed within those agencies - have committed to advancing these actions in 2022. Agencies will be held accountable via routine report outs on progress at public EC4 meetings.
Executive Climate Change Coordinating Council
* Coordinate quarterly report outs from agencies on progress and, in coordination with the Division of Motor Vehicles, Office of Energy Resources, and Department of Transportation, develop and maintain a clean transportation dashboard.
- Point of Contact: EC4 Chairperson
Department of Environmental Management
* Lead-by-Example with electric vehicle charging infrastructure at state parks and beaches.
- Point of Contact: Administrator, Office of Air Resources

Office of Energy Resources
* The Office of Energy Resources, in coordination with the Department of Transportation and the Department of Environmental Management, will prepare an investment strategy and deploy electric vehicle charging infrastructure funds allocated to Rhode Island through the federal infrastructure bill (signed by President Biden in November 2021). Investment will align with the recommendations of this Plan, advance equity and accessibility, and follow applicable federal guidelines. In addition, OER will publish a guideline of best practices for public and private charging station installations and continue to work with state agencies to expand the number of electric vehicle ports at public facilities.
- Point of Contact: Commissioner
Department of Transportation
* Conduct a study on state revenue streams for transportation infrastructure. This study should include a review of alternative revenue generation mechanisms and, in coordination with the Office of Energy Resources and Department of Environmental Management, model changes in revenue based on forecasted adoption of electric vehicles.
- Point of Contact: Assistant Director

Department of Health
* Quantify health benefits of clean transportation investments and identify opportunities to leverage health-based funding streams (e.g., via partnerships with health insurers or providers) to promote electrification of vehicles and mobility equipment in underserved and overburdened communities.
- Point of Contact: Climate Change Health Program Manager

Emergency Management Agency
* As part of its next Evacuation Route study, RI EMA will conduct an internal audit related to charging station access during times of emergency. In this audit, EMA will inventory charging stations along evacuation routes, identify needs for additional charging station infrastructure, and assess the need for mobile or other emergency charging services.
- Point of Contact: Mitigation Planning Supervisor

Department of Labor and Training
* Hold industry convenings with electric vehicle charging station developers, auto mechanics, and electricians to understand projected needs and challenges as electric vehicle adoption increases, and to identify potential future training and development opportunities.
- Point of Contact: Chief Operating Officer

Rhode Island Public Transit Authority
* Develop a detailed strategy to fully electrify the public bus fleet, including any necessary modifications to RIPTA's infrastructure, workforce, route planning, or other core aspects of operating a successful public transit fleet.
- Point of Contact: Chief of Strategic Advancement

Department of Administration
* The Division of Capital Asset Management and Maintenance, in collaboration with the Office of Energy Resources, to develop a charging station maintenance strategy for charging infrastructure on State property and an actionable plan to both right-size and electrify the State fleet.
- Point of Contact: Director, Division of Capital Asset Management and Maintenance
Division of Public Utilities and Carriers
* Evaluate the costs and benefits of proposals that create an integrated strategy in Rhode Island to support the state's clean transportation goals with a framework that will consider electric rate impacts, ensure transportation decarbonization benefits, and enable a competitive market and private investment, as well as grid integration.
- Point of Contact: Administrator
Division of Statewide Planning
* Determine the best way(s) to incorporate vehicle electrification into the State Guide Plan, whether as a separate element or a component of existing elements, and ensure that either this Strategic Policy Guide is adopted as a discrete element or that amendments are made to one or more existing State Guide Plan elements.
- Point of Contact: Associate Director
Executive Office of Health and Human Services
* In collaboration with other relevant state agencies, inventory state owned fleet vehicles designated for use by the Departments within the EOHHS structure, in cooperation with the State's plan to transition to electric vehicles.
- Point of Contact: Director of Legislative and Constituent Affairs

Commerce RI
* Convene business community representatives and coordinate next steps pertaining to fleet electrification and charging station installation for new and expanding businesses, such as through existing or new programs and support services and targeted outreach.
- Point of Contact: Executive Vice President of Business Development

Rhode Island Infrastructure Bank
* Promote deployment of charging stations and electric fleet conversions for private and public entities, with an emphasis on supporting municipal, multi-unit housing, non-profit and commercial properties. RIIB will utilize both existing and new financing and grant programs to accelerate the investment of public and private capital via the Bank's relationships with state, municipal and private sector stakeholders.
- Point of Contact: Managing Director
Coastal Resources Management Council
* Assess the extent to which the CRMC has a role in permitting for electric vehicle charging infrastructure; whether the CRMC may weigh non-polluting or zero-emissions marine technology in coastal permitting; and, assess ways in which the CRMC may incentivize zero-emissions transportation activities in the permitting process.
- Point of Contact: Executive Director
Considerations for the General Assembly
This Plan proposes a number of next steps the General Assembly may consider in future legislative sessions. These considerations are compiled here for easy reference. The Project Team would welcome continued discussion about any of these ideas and looks forward to further guidance and direction from the General Assembly.
* Enact a 100% Renewable Energy Standard to enable transportation sector decarbonization.
* Direct DOT and OER in consultation with DEM to strategically deploy federal Infrastructure Investment and Jobs Act stimulus funding according to the priorities herein and in compliance with federal guidance.
* Identify funding to support (and sustain) incentive programs to encourage electric vehicle adoption.
* Consider rights to charge for Rhode Islanders who rent or lease.
* Consider rights to repair electric vehicles and charging stations.
* Consider legislation requiring a minimum number of public parking spots having charging station access.
* Consider passing design and functionality standards for electric vehicle charging infrastructure.
* Consider requirements to advance building codes to ready buildings for electric vehicle adoption.
* Provide guidance on sustainable revenue mechanisms to support transportation infrastructure and transit services in an electric transportation future.
Conclusion
In developing this Guide, we heard from Rhode Islanders that addressing climate change, alleviating public health burdens, and improving equity are critical and urgent priorities. Electrifying transportation is one strategy within a portfolio of broader mobility solutions that takes immediate action to address all three of these priorities.
This Strategic Policy Guide synthesizes eight categories of needs identified by the public and by stakeholder organizations. For each need, the Project Team has distilled priorities that should guide future programs, policies, actions, and potential legislation. Within these priorities, we integrate equity by considering how our collective action can connect historically underrepresented and overburdened frontline communities with the energy, economic, and environmental benefits of decarbonization. In addition to integrating these priorities throughout, we also compile these priorities at the outset of our recommendations - these priorities are essential for an equitable strategy to improve access to electric transportation.
Furthermore, the Project Team heard loud and clear that we need to demonstrate action and progress, not just planning. In response, we worked with every state agency represented on the Executive Climate Change Coordinating Council (EC4) to identify a specific action to undertake in 2022 that advances the priority recommendations laid out in this Guide, as well as a specific senior-level point person to lead each action. Agencies will be held accountable through Administration discussion and report-outs at public EC4 meetings, and progress will be tracked as part of a clean transportation dashboard developed and maintained by the EC4.
The Project Team also respectfully identifies some potential topics members of the General Assembly may wish to consider for future legislative action. These topics include revising the Renewable Energy Standard to require 100% renewable energy, directing federal funding to support incentive programs, ensuring rights to charge at home for Rhode Islanders who rent or lease, and providing guidance for sustainable transportation infrastructure revenue streams, among others.
Acknowledging that we have substantial work to do and that our ongoing work will need to evolve as electric vehicle penetration increases, the Project Team considers this Strategic Policy Guide as the beginning of our work rather than the completion of a deliverable. Our intent is for this Strategic Policy Guide to be a working document that will continue to coordinate action in the years to come. Along with the opportunities presented within this Guide, robust investments in renewable energy, clean thermal technologies, and community resilience will also be vital. Working together, we remain confident that Rhode Island will meet the challenges ahead, while creating new economic investment and job growth opportunities for the 21st century.
The full report including footnotes can be viewed at: http://www.energy.ri.gov/documents/Transportation/Electrifying%20Transpo...
A compass and a protractor are two of the most basic tools used in geometry. Along with a ruler, they are the tools most students are expected to master. Once the basic techniques are understood, you can use a compass and protractor for many different purposes, including drawing regular polygons, bisecting lines and angles, and drawing and dividing circles.
Things You'll Need

- Compass
- Protractor
- Paper
- Pencil
- Sandpaper (optional)

Tips

Clear plastic protractors are handy because you can easily see your base line through the material.

Warnings

Be careful carrying compasses; the point is quite sharp.
Figure out how your compass makes marks. A compass has two arms, one of which generally ends in a metal point. The other arm should have a place to attach a pencil, or a small pencil lead that fits into the end of the arm. Sharpen the pencil or use sandpaper to file a compass lead to a fine point.
Draw a circle with the compass. Place the metal point in the approximate middle of a piece of paper gently, trying not to poke through the paper. Holding this point steady, bring the pencil end of the compass down and rotate the compass, drawing the pencil end in a circle around the point and creating a perfect circle.
Adjust the arms of the compass to make different sizes of circle. Move the points closer to each other or further apart by pushing or pulling gently, or, in some cases, by rotating a small dial between the arms. Use a ruler to measure the distance between the points--this distance is equal to the radius of the circle you can draw.
Use a protractor to draw specific angles. Start by drawing a line with a ruler. Draw a point somewhere on this line.
Line the protractor up over this line. The line on the protractor marked zero should be directly on top of your pencil line, and the center of the zero line should be precisely on the point you drew.
Make a mark by the curve of the protractor at the number of degrees of the angle you want to draw. For example, if you want to draw a 45 degree angle, make a mark where the line on the protractor marked 45 meets your paper.
Move the protractor and use a ruler to draw a line from your center point to the mark you made with the protractor. This line should be at the specified number of degrees to your base line.
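If you'd like to check the same construction on a computer, the steps above translate into a few lines of Python (this is just an illustrative digital analogue of the paper-and-pencil method, and it assumes matplotlib is installed):

```python
import numpy as np
import matplotlib.pyplot as plt

radius = 5          # "compass" setting: the distance between the two points
angle_deg = 45      # angle to mark with the "protractor"

fig, ax = plt.subplots()
ax.set_aspect("equal")

# Compass: a circle of the chosen radius around the center point
theta = np.linspace(0, 2*np.pi, 200)
ax.plot(radius*np.cos(theta), radius*np.sin(theta))

# Base line through the center point (the protractor's zero line)
ax.plot([0, radius], [0, 0])

# Protractor: a line from the center point at the chosen angle
a = np.radians(angle_deg)
ax.plot([0, radius*np.cos(a)], [0, radius*np.sin(a)])

plt.show()
```

The trigonometry mirrors the protractor: the marked point sits at (r·cos θ, r·sin θ) measured from the base line.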
Hey, it added a 5 and a 1 too! Mighty convenient, I'll say.
At this speed, every carbon molecule carries 25 TeV of energy, comparable to particles in the beam of the Large Hadron Collider.
This isn't quite right. The maximal energy of the LHC is 14 TeV for both beams, so 7 TeV per beam. And not per particle! Those have less.
... but ... why they haven’t slowed down more before they got here—is something of a mystery.
X-rays are subject to the Bragg Peak, where they dump most of their energy right at the end of their journey. That's one of the reasons you can x-ray someone (mostly) without hurting them. Does it apply to cosmic rays too?
It applies to anything ionising really.
Quite sure he meant his leopard.
I don't get the bit about the Lorentz contraction; the formula is γ = 1/√(1-β²) (sorry I couldn't work out how to use BBCode math) which for β = 0.99 is only about 7 (so the diamond's length works out to be 14' 1.28").
A disk 100ft in diameter and only 14ft thick sounds reasonably pancake-like to me.
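For reference, that arithmetic checks out; a minimal sketch (lengths in feet):

```python
import math

beta = 0.99
gamma = 1 / math.sqrt(1 - beta**2)   # Lorentz factor
thickness = 100 / gamma              # contracted length along the direction of motion

print(round(gamma, 2), round(thickness, 2))   # ~7.09, ~14.11 ft
```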
mcdigman wrote: Now that I think about it, 100 ft is less than the Schwarzschild radius of the sun, so would this be a black hole?
No, because in its rest frame the diamond still just weighs as much as it did before. By this argument, you could make absolutely anything into a black hole by moving fast enough relative to it, until its kinetic energy became large enough.
Ok, I just didn't know how that worked - I have not yet learned how things work regarding relativity and mass/gravity; I find that somewhat harder to understand than the special case where we ignore gravity.
The 64000 dollar question is whether Mark LeClair made something like this actually happen, but on a much smaller scale (fortunately).
peewee_RotA wrote: A baseball traveling at 0.9c fuses with air molecules. A 100 foot wide diamond traveling at 0.99c has no bonds and the particles are moving too fast to fuse with air molecules.
I honestly don't care enough to research this any further. I'm just saying that they seem to be contradictory.
He doesn't say it has no bonds; he says the bonds don't matter at that speed.
Also, I'm no expert, but I think there's a big difference between 0.9c and 0.99c.
The difference between the two numbers is roughly 10%, so the concept of it being a "big difference" is purely a matter of interpretation. The difference between 0.1c and 0.2c is a big difference too. Relatively the difference between 0c and 0.1c is infinitely bigger than the other two examples.
All that aside, here's the problem: if there is a magic number between 0.9c and 0.99c that makes the bonds between atoms suddenly no longer relevant, then that's a pretty awesome thing to talk about. That's much more fascinating than the calculation for how far a speeding object penetrates. (although it has less potential for innuendos). So if that magic number exists, and that magic number is between 0.9c and 0.99c, then why wasn't it mentioned in either article?
peewee_RotA wrote: All that aside, here's the problem: if there is a magic number between 0.9c and 0.99c that makes the bonds between atoms suddenly no longer relevant, then that's a pretty awesome thing to talk about. That's much more fascinating than the calculation for how far a speeding object penetrates. (although it has less potential for innuendos). So if that magic number exists, and that magic number is between 0.9c and 0.99c, then why wasn't it mentioned in either article?
Looks like you didn't actually take the square root there.
Is it just me, or is the scale a bit off on Randall's illustrations?
That dinosaur's about 100m long!
I make that Statue of Liberty about 160m tall - 3.5x the height of the real one.
And that's an Ent, right?
gmalivuk wrote: Looks like you didn't actually take the square root there.
Hmm, okay, looked more like a half-melted snowman to me.
The 72-kilometer-per-second diamond sphere blasts out a crater over two kilometers across, with an energy release comparable to that of the biggest fusion bombs.
What 72-kilometer-per-second diamond sphere? That paragraph is about a speed of 11 km/s, and it's the only place in the article the number 72 appears. Is this a typo?
Anybody have any luck finding a full-text version of "The nuclear and aerial dynamics of the Tunguska Event"?
adavies42 wrote: Anybody have any luck finding a full-text version of "The nuclear and aerial dynamics of the Tunguska Event"?
Assuming this is the full text, it would be the first hit on Google.
I think that there is a paragraph missing in the article. The 72 km/s speed would be for a scenario where the diamond is approaching Earth at just below solar escape velocity.
Yeah, it all makes sense if you assume there's an invisible bold line saying "72 kilometers per second:" just below the image with the statues.
It would become a black hole as soon as it interacted with something though. Well, if the interaction is violent enough to release a significant amount of the kinetic energy.
Which a collision with Earth wouldn't be, since at that speed it makes too insignificant a target.
Sure, if you get the kinetic energy of two rocks up high enough, you can smack them together and the CM frame energy will exceed that necessary to make a black hole. Kinda an exotic way to make one that never occurs in Nature (nothing macroscopic ever reaches the necessary ultra relativistic speeds--the violent process that accelerated the object would necessarily also rip it to shreds), but it's possible in principle.
Can anybody tell me whether I'm working on the right lines here? Trying to determine, given the massive energies and therefore reduced cross-sections involved, the order of magnitude number of actual particle-particle interactions that happen as the meteorite goes through. If I am right, how fast must the meteorite be going before it passes through the Earth unnoticed, and life goes on?
Each layer of atoms in the meteorite collides with around 10^18 atoms of Earth. 10^11 layers of atoms in meteorite, so expect 10^(18+11-21)=10^8 interactions, each with a COM energy of around 20 J.
At these insanely high momentums, we might actually see some weird macroscopic quantum mechanical effects begin to manifest themselves and interfere with the ideal relativistic solutions Munroe got. Specifically, I'm thinking of the Heisenberg Uncertainty Principle.
I calculated the momentum of the final meteor to be ~6.7*10^27 kg m/s. I'm no expert, but that makes me wonder if we can actually be certain that the meteor hits the Earth at all. But that would imply some things, and I'm not sure if anyone really knows what.
PS: I just noticed the date. Can it really be called thread necromancy if the thread was still on the first page, though?
Uncertainty has nothing to do with it. Position only becomes uncertain when our knowledge of momentum becomes very very *precise*. Knowing only that momentum gets very very *large* doesn't really matter.
Moreover it wouldn't be a relative precision (e.g. 8 digits for the one leading to 2 digits for the other) but rather an absolute precision on a quantum-mechanical scale.
Indeed. If the uncertainty in that momentum figure is 10^25 kg m/s, then the uncertainty principle allows for position certainty down to 10^-60 meters, which is a lot smaller than the Planck length.
Flumble wrote: Moreover it wouldn't be a relative precision (e.g. 8 digits for the one leading to 2 digits for the other) but rather an absolute precision on a quantum-mechanical scale.
Oh, that would explain why I was thinking that.
The YouTube channel "Ridddle" has copied this pretty much paragraph for paragraph to the acquisition of over 7 million views.
I don't know if it's authorised or a total rip off but I thought you should be aware.
I initially tried to post this with a link to the video itself but a moderator disapproved and blocked the post.
Homicidal Renegade wrote: The YouTube channel "Ridddle" has copied this pretty much paragraph for paragraph to the acquisition of over 7 million views. | http://echochamber.me/viewtopic.php?f=60&t=98757&start=40 |
---
abstract: 'We present an analytical theory of the structure of thermonuclear X-ray burst atmospheres. Newtonian gravity and the diffusion approximation are assumed. Hydrodynamic and thermodynamic profiles are obtained as a numerical solution of the Cauchy problem for a first-order ordinary differential equation. We further elaborate a combined approach to the radiative transfer problem which yields the spectrum of the expansion stage of X-ray bursts in analytical form, where Comptonization and free-free absorption-emission processes are accounted for and a $\tau\sim r^{-2}$ opacity dependence is assumed. A relaxation method on an energy-opacity grid is used to simulate the radiative diffusion process in order to match the analytical form of the spectrum, which contains a free parameter, to the energy axis. Numerical and analytical results show close agreement. All spectra consist of a power-law soft component and a diluted black-body hard tail. We derive simple approximation formulae usable for mass-radius determination by fitting observational spectra.'
author:
- Nickolai Shaposhnikov and Lev Titarchuk
title: 'Spectra of the Expansion Stage of X-Ray Bursts'
---
Introduction
============
First discovered by @gri75, strong X-ray bursts are believed to occur due to thermonuclear explosions in the helium-rich layers at the bottom of the atmosphere accumulated by a neutron star during the accretion process in a close binary system. Since then, dozens of burster-type X-ray sources have been found. One of the distinctive features of Type I X-ray bursts is the sudden and abrupt ($\sim 1$ s) luminosity increase (expansion stage) followed by an exponential decay (contraction stage). The energy released in X-ray radiation during the first seconds greatly exceeds the Eddington limit for the layers above the helium-burning zone, which are then no longer dynamically stable. Super-critically irradiated shells of the atmosphere start to move outward, producing an expanding wind-like envelope. The average lifetime of an X-ray burst is sufficient for a steady-state regime of mass loss to be established, in which the local luminosity throughout most of the atmosphere is equal to or slightly greater than the Eddington limit.
During the last two decades the problem of determining the properties of radiatively driven winds during X-ray bursts has been the subject of extensive theoretical and numerical studies. Various theories were put forward with a gradually increasing level of accuracy of the problem description, but only a few approaches addressed the case of a considerably expanded photosphere under the influence of near-Eddington luminosities [@lth; @esz; @lap; @t94]. See @lpt for a detailed review of X-ray burst studies during the 1980s and the beginning of the 1990s.
As in the problem of accretion flows, the notion of the existence of a sonic point in a continuous flow became a natural starting point in the analysis of wind flows from stellar objects. @ehs83, hereafter EHS, investigated the structure of envelopes with steady-state mass outflow and pointed out the higher Eddington luminosity in the inner shells due to the prevailing higher temperatures and correspondingly lower Compton scattering opacities. They showed that the product of opacity and luminosity remains almost constant throughout the atmosphere, which is the key assumption of the model. The existence of wind-like solutions for critically irradiated atmospheres was proved. @t94, hereafter T94, studied analytically the spectral shapes of the expansion and contraction stages of bursts. He showed how the EHS approach to the hydrodynamic problem can be greatly simplified with the sonic point condition properly calculated and tied to the conditions at the bottom of the envelope. @ht applied the T94 model to extract the neutron star mass-radius relations from the observed burst spectra of 4U 1820-30 and 4U 1705-44.
@ntl94, hereafter NTL, adopted a high-accuracy numerical approach to the problem of X-ray burster atmosphere structure based on the moment formalism [@trn81; @ntz91]. They integrated a self-consistent system of frequency-independent, relativistic, hydrodynamical and radiative transfer equations over the whole atmosphere, including the inner dense helium-burning shells. Three important characteristics of the X-ray burst outflow were obtained in this work: the helium-burning zone temperature was maintained approximately at the level of $3 \times 10^9$ K, the temperature of the photosphere was shown to depart appreciably from the electron temperature and to stay constant in the outer shells, and the existence of maximum and minimum values of the mass-loss rate was found.
One of the goals of these studies was to provide an algorithm for determining the characteristics of the compact object from observational data. With the advent of high spectral and time resolution observational instruments (such as the Chandra, RXTE, USA and XMM-Newton missions) the task of obtaining a suitable tool for fitting the energy spectra became extremely important. Despite numerous earlier studies of X-ray burst observations, recent developments have shown a growing interest of the astrophysical community in this area [@str; @kul].
Obviously, the problem of radiative transfer in relativistically moving media is a very complicated one and, under rigorous consideration, it must be solved numerically. In this paper we develop an alternative approach which allows both numerical and analytical solutions and successfully accounts for all the crucial physical processes involved. We show how this problem, under some appropriate approximations, yields the spectrum of radiation from spherically symmetric outflows in an analytical form. We concentrate on the case of an extended atmosphere with an inverse-cube dependence of the number density on radius, which is more appropriate for the expansion stage but can also be employed to describe the contraction as a sequence of models with decreasing mass-loss rate.
We present a numerical approach to the problem which then provides the validation of our analytical description. We adopt the general approach formulated in EHS and developed in T94. The problem of determining the profiles of the thermodynamic variables of a steady-state radiatively driven outflow was solved in T94. The problem is reduced to the form of a first-order differential equation, which allows an easy and precise numerical solution. For completeness we present this method in Section 2. Using atmospheric profiles obtained for different neutron star configurations, we solve the problem of radiative transfer by a relaxation method on a logarithmic energy-opacity grid. We perform a temperature profile correction by applying the temperature equation to the obtained spectral profiles. The basic formulae are given in Section 3.1. Then the analytical description of the problem is presented in detail. The analytic solution of the radiative transfer equation on the atmospheric profile $\tau\sim r^{-2}$ is presented in T94. Here we review the solution by carrying out the integration without introducing any approximations. In Section 4 we compare and match our analytical and numerical results to describe the behavior of the free parameter. We finalize our work by examining the properties of our analytic solution, combining it with the results of Section 4, and constructing the final formula for fitting the spectra in Section 5. The discussion of our method, along with some other important issues concerning the problem being solved, is presented in Section 6. Conclusions follow in the last Section.
Hydrodynamics
=============
As we have already mentioned, the calculation of X-ray burst spectra can be treated as a steady-state problem. To justify this assumption one has to compare the characteristic times of the phenomena considered. The timescale for the photosphere to collapse can be estimated as follows: $$t_{coll}=\int_{r_{ph}}^{r_s}\frac{dr}{v_{coll}},\quad\mathrm{where}\quad v_{coll}\approx
\sqrt{\frac{2 \,G M_{ns}}{r}(1-l)}.$$ Here $r_s$ denotes the sonic point radius, which is adopted as the outer boundary of the photosphere throughout this paper. Dimensionless luminosity is $l=L/L_{Edd}$, where Eddington luminosity is given by $$\protect\label{hydro1}
L_{Edd}=\frac{4\pi c\,G M_{ns}}{\kappa}.$$ The opacity $\kappa$ is given by the Compton scattering opacity with the Klein-Nishina correction of [@pac] $$\protect\label{hydro2}
\kappa=\frac{\kappa_{0}}{(1+\alpha T)},$$ $\kappa_{0}=0.2(2-Y_{He})~{\rm cm}^{2} g^{-1}$ with $Y_{He}$ being the helium abundance, and $\alpha=2.2\times10^{-9}K^{-1}$. It is exactly this temperature dependence of the opacity that is responsible for the excessive radiation flux, which appears super-Eddington to the outer, cooler layers of the atmosphere. In the framework of strong X-ray bursts the following conditions are usually satisfied: $r_s \gtrsim 10^3$km $>>r_{ph}$, $l\sim 0.99$. Putting $m=M_{ns}/M_{Sun}\sim 1$ results in a time of collapse of the order of several seconds, comparable to the observed duration of a Type I X-ray burst. To evaluate the time for photons to diffuse through the photosphere, we note that the number of scattering events is $N\approx \tau_{ph}^2$ \[see, for example, @rbk79\], where $\tau_{ph}$ is the total opacity of the photosphere, which is $\sim 10$. The time for a photon to escape is $$t_{esc}\sim\frac{\tau_{ph}^2}{\sigma_T n_e c}\sim
\frac{r_{ph}}{c}\tau_{ph}\sim 0.1 ~~s.$$ This indicates that the hydrodynamic structure develops at least ten times more slowly than the photons diffuse through the photosphere. Although these time scales can become comparable in the case of greatly extended atmospheres, the steady-state approximation is generally acceptable.
Basic equations for radiatively driven outflow
----------------------------------------------
The problem of mass loss as a result of a radiatively driven wind was formulated by EHS. For the convenience of the reader we summarize all equations important for the derivations in the following sections and refer the reader to the EHS paper for details. The system of equations describing a steady-state outflow in spherical symmetry consists of the well-known Euler (radial momentum conservation) equation $$\protect\label{hydro4}
v\frac{d v}{d r}+\frac{G M}{r^{2}}+\frac{1}{\rho}\frac{dP}{dr}=0,$$ the mass conservation law $$\protect\label{hydro5}
\frac{d}{dr}(4\pi r^{2}\rho v)=0,$$ the averaged radiation transport equation in the diffusion approximation $$\protect\label{hydro6}
\kappa L_{r}=-\frac{16\pi a c r^{2} T^{3}}{3\rho}\frac{dT}{dr},$$ and the entropy equation $$\protect\label{hydro7}
vT\frac{ds}{dr}+\frac{1}{4\pi r^{2}\rho}\frac{dL_{r}}{dr}=0,$$ where $P,\rho,T,s$ and $L_{r}$ are, respectively, the pressure, the density, the temperature, the specific entropy, and the diffusive energy flux flowing through a shell at $r$.
The outflowing gas is taken to be ideal. Dimensionless coordinate $y$, which is the ratio of the radiation pressure $P_{r}$, to the gas pressure $P_{g}$, is introduced by $$y=\frac{P_{r}}{P_{g}}=\frac{\mu m_p}{k}\frac{aT^{3}/3}{\rho},$$ where $\mu=4/(8-5 Y_{He})$ and $m_p$ are the mean molecular weight and the mass of proton, respectively. Then the other thermodynamic quantities are expressed in terms of $y$ and $T$ as $$\protect\label{hydro9}
P=P_{r}+P_{g}=\left(1+\frac{1}{y}\right)\frac{aT^{4}}{3},$$
$$\protect\label{hydro10}
\rho=\frac{a\mu m_p}{3 k}\frac{T^3}{y},$$
$$\protect\label{hydro11}
s=\left(\frac{k}{\mu m_p}\right)[4y+\ln y-(3/2)\ln T],$$
$$\protect\label{hydro12}
h=\frac{k}{\mu m_p}(4y+5/2)T,$$
where $h$ is the specific enthalpy.
Integrals of equations (\[hydro5\]) and (\[hydro4\]) give the mass flux and the energy flow, respectively: $$\protect\label{hydro13}
4\pi r^{2}\rho v=\Phi,$$ $$\protect\label{hydro14}
\left(\frac{ v^{2}}{2}-\frac{G M}{r}+h\right)\Phi+L_{r}=\Psi.$$
To make two more integrations, which cannot be performed analytically, the constancy of $\kappa L_{r}$, which stands for the integral of equation (\[hydro6\]), over the relevant layers is assumed. In EHS this assumption is confirmed numerically. We can also justify it by the following consideration. In the near-Eddington regime the radiation pressure $aT^4/3$ is much greater than the gas pressure almost everywhere except for the innermost layers adjacent to the helium-burning zone. Neglecting the gas pressure in equation (\[hydro4\]) and multiplying it by $-r^2$ we get $$\protect\label{hydro14_1}
\frac{\kappa L_{r}}{4\pi c}=GM+r^2v\frac{dv}{dr}.$$ Here we moved the first two terms of (\[hydro4\]) to the right-hand side and used equation (\[hydro6\]) to express the third term through $\kappa L_{r}$. For the inner and intermediate layers of the atmosphere the last term in (\[hydro14\_1\]) is negligible and the equation reduces to $\kappa L_r=\kappa_0 L_{0}$. This term can become considerably large for the outermost layers, where $L_r$ must exceed $L_{Edd}$. This is also in agreement with observations of X-ray bursts, from which super-Eddington luminosities are inferred. For the sake of the analytical consideration we take $\kappa L_{r}$ to be constant throughout the whole atmosphere, and the third integral is $$\protect\label{hydro15}
\kappa L_{r}=\kappa_{0} L_{0}=const.$$ Replacing $L_{r}$ of equation (\[hydro7\]) with equation (\[hydro15\]), the fourth integral is obtained as $$\protect\label{hydro16}
\Phi s + \alpha L_{0}\ln T=\Xi=const.$$
Boundary conditions need to be imposed at the bottom and the outer boundary to determine the four integration constants $\Phi,\Psi,L_{0}$, $\Xi$ and to obtain a specific solution.
At the bottom of the atmosphere close to the helium-burning zone there should be a point where gas and radiation pressure are equal. As another important numerical result EHS showed that near the neutron star surface the temperature and radius profiles level off with respect to $y$, so there is always a point where $$\protect\label{inbcond}
r=r_{b},~ T=T_{b}, ~y=1,$$ and $r_{b}$ is well approximated by the radius of the neutron star $R_{ns}$. However, $T_b$ can not be considered as a real temperature of helium-burning shell at the bottom of the star surface because thermonuclear processes are not included in the model. Rigorous account of helium-burning NTL shows that temperature of burning shells vary in small range of values.
To obtain the outer boundary condition the concept of sonic point is used. For the solution to be steady-state and to have finite terminal velocity it should pass sonic point where $$\protect\label{hydro21}
v_{s}^{2}=\frac{G M_{ns}}{2r_{s}}=
\left(\frac{\partial P_{s}}{\partial\rho_{s}}\right)_{\Xi}=
\left(\frac{k}{\mu m_p}\right)Y_{s}T_{s},$$ $$\protect\label{hydro22}
Y_{s}=\frac{\lambda+4(1+y_s)(1+4y_s)}{\lambda+3(1+4y_s)},$$ where $\lambda$ is a quantity related to the ratio of the energy flux to the mass flux (see formula 22, below). In EHS this formula contains a typo. We give a proper derivation of this form for $Y_s$ in Appendix C.
ODE solution of the hydrodynamic problem
----------------------------------------
T94 has shown how the treatment of the hydrodynamic problem can be reduced to a Cauchy problem with the boundary condition determined at the sonic point. This treatment provides a high-accuracy method of obtaining the hydrodynamic solution. The crucial point is to relate the position of the sonic point with the values of the velocity and the thermodynamic quantities before solving the set of appropriate hydrodynamic equations. The profile of the expanded envelope is then obtained as a result of the integration of a single first-order ordinary differential equation (ODE) from the sonic point inward up to the neutron star surface. For completeness we present the details of this approach.
At the bottom of the atmosphere the potential energy per unit mass of the gas, $GM/r$, is significantly greater than the kinetic energy, $v^2/2$, and enthalpy. Therefore, by ignoring these terms in equation (\[hydro14\]), we obtain the value of the mass flux $$\Phi=\frac{R_{ns}}{G M_{ns}}\alpha T_{b}L^{0}_{Edd}.$$ The inner boundary condition (\[inbcond\]), the integral (\[hydro16\]), and equation (\[hydro11\]) for entropy can be used to find the temperature distribution with respect to $y$ $$\protect\label{hydro24}
T=T_{b}\,y^{-1/\lambda}\exp \left[-\frac{4(y-1)}{\lambda}\right],$$ and $$\lambda=\frac{\alpha\mu m_p}{k\Phi}L_{0}-\frac{3}{2}.$$ The condition at the sonic point (\[hydro21\]) allows us to find the constant $\Psi$ $$\Psi=h_{s}\Phi+L_{r}(r_{s})-\frac{3}{4}\frac{G M}{r_{s}}\Phi.$$ Combining the mass and energy conservation laws (\[hydro13\]) and (\[hydro14\]), the specific enthalpy and density equations (\[hydro10\]) and (\[hydro12\]) and eliminating the radial coordinate $r$ between equations (\[hydro13\]) and (\[hydro14\]) yield the following dependence of the velocity derivative $v'$ with respect to $y$ $$\protect\label{hydro26}
v'_y(y,v)=v\left[\left(1+3\frac{1+4y}{\lambda}\right)\frac{1}{y}-75.2\frac{rT(8-5Y_{He})(1+4y)(1+\alpha T)}{\lambda r_{b,6}T_{b,9} (\Psi/\Phi-v^2/2-h+G M_{ns}/r)}\right].$$ Derivation of equation (\[hydro26\]) is given in Appendix D.
By imposing boundary conditions at the bottom of the extended envelope (at the neutron star surface) and at the sonic point, we can determine the four integration constants necessary to obtain a specific solution. One can note the obvious fact that the bottom of the envelope cannot serve as a starting point for the integration of equation (\[hydro26\]), since $v_{b}=0$ there, which introduces an uncertainty. Fortunately, we can calculate the parameters at the sonic point in the framework of our problem description by solving a nonlinear algebraic equation, which involves only $y_s$, the ratio of the radiation pressure $P_{r}$ to the gas pressure $P_{g}$ at that point. Substitution of the radial coordinate $r_{s}$ and the velocity $v_s$ from the definition of the sonic point position (\[hydro21\]), and of the sonic point density $\rho_{s}$ from equation (\[hydro10\]), i.e. $$r_s=\frac{G M_{ns}\, \mu m_p}{2 k Y_s T_s},\hspace{0.1in}v_s=
\left(\frac{k}{\mu m_p}Y_s T_s\right)^\frac{1}{2},\hspace{0.1in}
\rho_s=\frac{a\mu m_p}{3 k}\frac{T_s^3}{y_s},$$ together with the expression for the temperature given in equation (\[hydro24\]), into the mass conservation law (\[hydro13\]) gives, after some algebra, an equation for the value of $y_{s}$ $$\protect\label{hydro27}
y_{s}=\frac{\lambda}{4}\ln \left\{\left[\frac{(2-Y_{He})m^2}{r_{b,6}T_{b,9}}\right]^{2/3}\frac{T_{b}}{0.149\,(8-5Y_{He})^{5/3}Y_s y_s^{1/\lambda+2/3}}\right\}+1.$$ Here $r_{b,6}$ and $T_{b,9}$ are the neutron star surface radius and temperature in units of $10^6$ cm and $10^9$ K, respectively. Since $Y_s$ is expressed in terms of $y_s$ (eq. \[hydro22\]), equation (\[hydro27\]) can be solved to determine the value of $y_s$. Knowledge of $y_s$ can then, by substitution in equation (\[hydro24\]), yield the value of the temperature at the sonic point $T_s$, and then $v_s$ follows from equation (\[hydro21\]). It is now possible to relate $v_s$ to $T_s$, $T_b$, $r_s$ and $r_b$, thus obtaining analytical expressions for the various dynamical quantities at the sonic point in terms of the values of the parameters associated with the boundary conditions. To obtain the solution of the hydrodynamical problem for a particular set of input parameters we use standard Matlab/Octave minimization routines and ODE solvers.
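For illustration, the numerical scheme just described - a root finder for the sonic-point equation (\[hydro27\]) followed by an inward integration of the ODE (\[hydro26\]) - can be sketched as follows. The two model functions below are schematic stand-ins only (the real right-hand sides depend on $m$, $r_{b,6}$, $T_{b,9}$ and $Y_{He}$ through the relations above); the sketch shows the structure of the calculation, not the paper's actual solver.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import solve_ivp

def sonic_point_condition(ys):
    # Schematic stand-in for the residual of eq. (27): zero at the sonic point.
    return ys - (2.0 + 0.5 * np.log(ys))

def dv_dy(y, v, lam=10.0):
    # Schematic stand-in for the right-hand side of eq. (26), dv/dy = f(y, v).
    return v * (1.0 + 3.0 * (1.0 + 4.0 * y) / lam) / y

y_s = brentq(sonic_point_condition, 1.0, 50.0)   # step 1: locate the sonic point
sol = solve_ivp(dv_dy, (y_s, 1.0), [1.0],        # step 2: integrate inward to y = 1,
                rtol=1e-8)                       #         starting from v = v_s
print(f"y_s = {y_s:.3f}, v(y=1)/v_s = {sol.y[0, -1]:.3e}")
```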
Radiative Transfer Problem
==========================
The radiation field of an X-ray burst atmosphere may be described by the diffusion equation, written in spherical geometry, with the Kompaneets energy operator (see T94):
$$\hspace{-1in}\frac{1}{3} \left(\frac{\partial^2 J_\nu}{\partial \tau^2}-
\frac{2}{r \alpha_T}\frac{\partial J_\nu}{\partial \tau} \right)=
\frac{\alpha_{ff}}{\alpha_T}(J_\nu-B_\nu)-$$
$$\protect\label{rad1}
\hspace{0.4in}-\frac{k T_e}{m_e c^2}\,x_0\frac{\partial}{\partial x_0}
\left(x_0\frac{\partial J_\nu}{\partial x_0}-3J_\nu+\frac{T_0}{T}J_\nu\right),$$
where $x_0=h\nu/kT_0$ is a dimensionless frequency, $T_0$ being the effective temperature; $\alpha_{ff}$ and $\alpha_T=\sigma_T n_e$ are the coefficients of free-free absorption and Thomson scattering, respectively, whose ratio is given by [@rbk79] $$\protect\label{affat}
\frac{\alpha_{ff}}{\alpha_T}=1.23\, \rho g_{14}^{7/8}(1-Y_{He}/2)^{7/8} \Psi(x_0) \left(\frac{T_0}{T}\right)^{1/2}$$ with $$\Psi(x_0)=\frac{\tilde{g}(x_0 T_0/T)}{x_0^3}\left(1-e^{-x_0T_0/T}\right).$$ Here $\tilde{g}(x)$ is the Gaunt factor [@gr] $$\tilde{g}(x)=\frac{\sqrt{3}}{\pi}\,e^{x/2}K_0(x/2),$$ $K_0(x)$ is the Macdonald function, and $g_{14}$ denotes the free-fall acceleration onto the neutron star surface, in units of $10^{14}$ cm s$^{-2}$.
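For reference (not part of the original derivation), the Gaunt factor and $\Psi(x_0)$ above are straightforward to evaluate with standard special-function libraries; a short sketch:

```python
import numpy as np
from scipy.special import k0   # Macdonald function K_0

def gaunt(x):
    """Gaunt factor: g(x) = (sqrt(3)/pi) * exp(x/2) * K_0(x/2)."""
    return np.sqrt(3.0) / np.pi * np.exp(x / 2.0) * k0(x / 2.0)

def psi(x0, T0_over_T=1.0):
    """Psi(x0) = g(x0*T0/T) * (1 - exp(-x0*T0/T)) / x0**3."""
    x = x0 * T0_over_T
    return gaunt(x) * (-np.expm1(-x)) / x0**3

x0 = np.array([0.01, 0.1, 1.0, 10.0])
print(gaunt(x0))
print(psi(x0))
```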
We combine equation (\[rad1\]) with the outer boundary condition of zero energy inflow $$\protect\label{rad3}
\left(\frac{\partial J_\nu}{\partial \tau}-\frac{3}{2}J_\nu\right)\Bigl|_{\tau=0}=0$$ and the condition of equilibrium blackbody spectrum at the bottom of the photosphere, which is represented in a dimensionless form as $$B_\nu=\frac{x_0^3}{\exp(x_0T_0/T)-1}.$$
We will make use of the temperature equation, which is obtained by integration of (\[rad1\]) over frequency. The opacity operator vanishes as a result of the total flux conservation with respect to optical depth, leaving us with $$\hspace{-1.5in}\frac{kT_e}{m c^2}\left(4\int_0^\infty J_\nu dx_0 -
\frac{T_0}{T}\int_0^\infty x_0 J_\nu dx_0\right)=$$ $$\protect\label{rad4}
\hspace{0.5in}=1.23\, \rho g_{14}^{7/8}(1-Y_{He}/2)^{7/8}
\left(\frac{T_0}{T}\right)^{1/2}\left[\int_0^\infty J_\nu
\Psi(x_0) dx_0-\frac{2\sqrt{3}}{\pi}\frac{T}{T_0}\right].$$ In the conditions of the extended photosphere of X-ray bursts, the density is usually very low and the right-hand side of the last equation can be neglected, reducing it to the formula for the temperature $$\protect\label{rad5}
\frac{T}{T_0}=\frac{1}{4}\left(\int_0^\infty x_0 J_\nu dx_0\Bigl/\int_0^\infty J_\nu dx_0\right).$$ We will use this relation to produce a corrected temperature profile for the photosphere where it departs significantly from that given by the hydrodynamic solution.
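As a simple illustration of how the correction (\[rad5\]) can be applied to a tabulated spectrum (the test spectra below are just Wien and Planck shapes at the effective temperature; this sketch is ours, not part of the original calculation):

```python
import numpy as np
from scipy.integrate import trapezoid

def color_ratio(x0, J):
    """Eq. (rad5): T/T_0 = (1/4) * int(x0*J dx0) / int(J dx0) for tabulated J."""
    return 0.25 * trapezoid(x0 * J, x0) / trapezoid(J, x0)

x0 = np.linspace(1e-3, 50.0, 20000)
wien = x0**3 * np.exp(-x0)        # Wien spectrum at T = T_0
planck = x0**3 / np.expm1(x0)     # Planck spectrum at T = T_0

print(color_ratio(x0, wien))      # ~1.00: a Wien spectrum is left unchanged
print(color_ratio(x0, planck))    # ~0.96 for a pure blackbody shape
```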
Analytic Description of Radiative Diffusion
-------------------------------------------
Hydrodynamic profiles calculated in section 2 show that during the expansion stage in the vicinity of the sonic point $v\sim v_s(r/r_s)$. Considering this relation to be true throughout the entire envelope, according to the mass conservation law, we can write $$\protect\label{rad6}
n_e=\frac{\rho}{m_p}\left(1-\frac{Y_{He}}{2}\right)=\frac{\Phi}{4\pi m_p v r^2}\left(1-\frac{Y_{He}}{2}\right)=\frac{GM\Phi}{8\pi v_s^3m_p}\left(1-\frac{Y_{He}}{2}\right)r^{-3},$$ where $\Phi$ is the mass-loss rate and $v_s$ is the velocity of the gas at the sonic point. In this case the opacity can be expressed as $$\tau=C \int_r^\infty \frac{\sigma_T}{r^3}dr=\frac{C\sigma_T}{2r^2}.$$ Noting that in this case $$\protect\label{rad7}
\tau=\frac{r\alpha_T}{2},$$ we can rewrite the radiation transfer equation in the form $$\protect\label{rad8}
\frac{\partial}{\partial \tau}\frac{1}{\tau}\frac{\partial J_\nu}{\partial \tau}=
\frac{3}{\tau}\frac{\alpha_{ff}}{\alpha_T}(J_\nu-B_\nu)-\frac{3kT_e}{m_e c^2 \tau}L_\nu(J_\nu).$$
The boundary conditions are given by $$\protect\label{rad9}
J_\nu|_{\tau=\tau_{th}}=B_\nu(\tau_{th})$$ at the inner boundary of photosphere, and $$\protect\label{rad10}
H=\frac{4\pi}{3}\int_0^\infty\frac{\partial J_\nu}{\partial \tau}d\nu \Bigl|_{\tau=0}=\frac{L}{4\pi R_s^2}$$ at the sonic surface. The ratio $\alpha_{ff}/\alpha_T$ can be written in the form $$\hspace{-0.5in}\frac{\alpha_{ff}}{\alpha_T}=1.23\left(1-\frac{Y_{He}}{2}\right)^{-5/8}\left(\frac{2m_p}{\sigma_T}\right)^{3/2}\left(\frac{8\pi v_s^3}{GM\Phi}\right)^{1/2}$$ $$\protect\label{rad11}
\times g_{14}^{-7/8}\frac{\tilde{g}(x)(1-e^{-x})}{x^3}
\left(\frac{T_0}{T}\right)^{7/2}\tau^{3/2}=D\Psi(x)\tau^{3/2},$$ where $x=h\nu/kT_e$, and $\Psi(x)=\tilde{g}(x)(1-e^{-x})/x^3$.
Stated in this way the problem of radiative transfer allows an analytical approach. The solution of the radiative transfer equation (\[rad8\]) is $$\protect\label{rad12}
J(t,x)= B_\nu \frac{t^{8/7}}{2^{4/7}
\Gamma\left(\frac{11}{7}\right)}
\left[\frac{\Gamma\left(\frac{3}{7}\right)}
{2^{4/7}}+\frac{8}{7}\,t_{th}^{-4/7}K_{4/7}(t_{th})\right],$$ where $K_p$ is the modified Bessel function of the first kind, and $$\protect\label{rad13}
t=\frac{4}{7}\sqrt{3D\Psi(x)}\,\tau^{7/4}.$$ Details of derivation of this formula are given in Appendix A.
Evaluation of $\tau_{th}$ and $T_c$
-----------------------------------
The next step is to find the color temperature and to determine the thermalization depth $\tau_{th}$ where the boundary condition (\[rad9\]) is valid. For saturated Comptonization, the occupation number behaves in accordance with Bose-Einstein photon distribution $n=(e^{\mu+x}-1)^{-1}$ which might be described as a diluted blackbody spectrum or as a diluted Wien distribution.
At first we evaluate the color temperature assuming a blackbody spectral shape. We look for the solution of the form $$\protect\label{rad14}
n(\tau,x)=\frac{R(\tau)}{e^x-1}.$$ The solution, which is described in detail in Appendix B, gives for $R(\tau)$ $$R(\tau)=1-\frac{2^{3/7}}{\Gamma\left(\frac{4}{7}\right)}\frac{\tau}{\tau_{th}} K_{4/7}\left(\left[\frac{\tau}{\tau_{th}}\right]^{7/4}\right).$$
As long as $R(\tau)=1$ for $\tau>\tau_{th}$, there is radiation equilibrium for optical depths deeper than the photospheric envelope. The temperature equation in the zone $0<\tau<\tau_{th}$ reads $$\left(\frac{T}{T_0}\right)^4=\frac{2H_0}{R^2}\left(\frac{3}{2\tau_R}\int_0^\tau\tau d\tau+2\right)\Bigl/\frac{\pi^4}{15}R(\tau)$$ where $H_0=(4\pi R_{ns}^2\pi^5/15)/16\pi^2$ and $\tau_R$ ($<1$) is the optical depth coordinate at the outer boundary of the expanded atmosphere, $r=R$ (see eq. 35). This equation can be rewritten as follows $$\left(\frac{T}{T_0}\right)^4=\frac{3\tau^2/4+2\tau_R}{2\tau_{ns}R(\tau)}.$$ Neglecting $\tau_R$ with respect to $\tau$ and making use of Taylor expansion (\[B6\]) of $R(\tau)$ we get a constant value of the temperature $$\protect\label{rad15}
\left(\frac{T}{T_0}\right)^4=\frac{3}{8}\frac{2^{8/7}\Gamma\left(\frac{11}{7}\right)}{\Gamma\left(\frac{3}{7}\right)}\frac{\tau_{th}^2}{\tau_{ns}}=0.356\frac{\tau_{th}^2}{\tau_{ns}}.$$ Using the notation $$\protect\label{rad16}
p=2\tilde{g}(x_*)\approx \ln\left(\frac{2.35}{x_*}\right)$$ formula (\[rad11\]) becomes $$\protect\label{rad16_1}
D=D_0\left(\frac{T_0}{T}\right)^{7/2}$$ where $$D_0=1.23\left(1-\frac{Y_{He}}{2}\right)^{-5/8}\left(\frac{2m_p}{\sigma_T}\right)^{3/2}\left(\frac{8\pi v_s^3}{GM\Phi}\right)^{1/2}g_{14}^{-7/8},$$ while we can write for $\tau_{th}$ $$\protect\label{rad17}
\tau_{th}=\left[\frac{4}{7}\sqrt{\tilde{D}}\right]^{-4/7}=\left[\frac{6}{49}p^2D\right]^{-2/7}=\frac{T}{T_0}\left[\frac{6}{49}p^2\right]^{-2/7}D_0^{-2/7}.$$ Substituting it into (\[rad15\]) we get for $T/T_0$ $$\frac{T}{T_0}=0.596\left[\frac{6}{49}p^2\right]^{-2/7}
\frac{D_0^{-2/7}}{\tau_{ns}^{1/2}}.$$ Assuming the same electron number density as in (\[rad6\]) we then express the opacity at the neutron star surface $\tau_{ns}$ in the form $$\protect\label{rad18}
\tau_{ns}=\left(1-\frac{Y_{He}}{2}\right)
\left(\frac{\sigma_T}{2m_p}\right)\left(\frac{GM\Phi}
{8\pi v_s^3 R_{ns}^2}\right).$$ If we use the dependence of input parameters $g_{14}$ and $\Phi$ on $m,r_{b,6}, T_{b,9}$ and $Y_{He}$, the next useful equations for the color ratio $T/T_0$, color temperature $kT$, and thermalization depth $\tau_{th}$ are found: $$\protect\label{rad20}
\frac{T}{T_0}=\frac{0.191(2-Y_{He})^{1/28}\,r_{b,6}^{1/7}v_{s,8}^{15/14}}{m^{3/28}\,T_{b,9}^{5/14}p^{4/7}},$$ $$\protect\label{rad21}
kT=0.4\,m^{1/7}r_{b,6}^{-5/14}\,T_{b,9}^{-5/14}v_{s,8}^{15/14}
(2-Y_{He})^{-3/14}p^{-4/7}\quad\mathrm{keV}\quad,$$ $$\protect\label{rad22}
\tau_{th}=90.5\,m^{2/7}r_{b,6}^{-3/14}\,T_{b,9}^{-3/14}v_{s,8}^{9/14}(2-Y_{He})^{1/14}p^{-8/7}.$$ Here $v_{s,8}$ is the sonic point velocity in units of $10^8$ cm/s. These relations present the final results of our analytical approach. The description is still incomplete because of the presence of $p$ and $v_s$ in the right-hand sides of this system of equations. The parameter $p$ and the sonic point velocity are not independent parameters of the problem, but at this point they cannot be inferred from further analytical consideration. Fortunately, these quantities can be quite well approximated by power-law dependences on $m,r_{b,6},T_{b,9}$ and $Y_{He}$, which is done in the next section.
Numerical Results and Comparison with Analytical Description of Radiative Transfer Problem
==========================================================================================
To confirm the validity of our analytical approach and to examine the behavior of $p$ and $v_s$ as functions of the different input parameters of the problem, we perform numerical modeling of the steady-state radiative transfer process. The whole procedure consists of three steps.
First, for a particular neutron star model, i.e. for a given mass and radius, we obtain a set of model atmospheres for a chosen set of bottom temperatures. These solutions provide us with runs of the thermodynamical and hydrodynamical profiles, the sonic point characteristics, the masses of the extended envelopes and their loss rates. Second, we solve the radiative transfer equation (\[rad1\]) on each model atmosphere with the relaxation method (e.g. Press et al. 1992) on an energy-opacity grid, using a logarithmic scale in both dimensions. The energy range included 500 grid points. The number of grid points in opacity varied between 100 and 300. The opacity domain included the range $\tau_s<\tau<\tau_{max}$, where $\tau_s$ is the opacity at the sonic point and $\tau_{max}$ was taken large enough to safely satisfy the inequality $\tau_{max}>\tau_{th}$. We used the mixed outer boundary condition (\[rad3\]). The spectrum at the inner boundary of the photosphere was taken to be a pure black body $B_{\nu}$. The numerical calculation of the frequency-dependent radiation field consisted of two runs of our relaxation code. The first run was performed on the temperature profile obtained from the hydrodynamical solution (see Section 2). Then we calculated a spectral temperature profile using formula (\[rad5\]), which exhibits the expected behavior. In some region this corrected profile departs from the initial temperature profile and levels off at a constant value, in full agreement with the analytical result of Section 3.2. It is also in qualitative agreement with the NTL self-consistent calculation of the radiation-driven wind structure of an X-ray burster. A second run is performed on the corrected profile to get a more reliable spectral shape. At the third and final step we compared the analytical and numerical solutions. The sonic point provided a natural position at which to match the numerical and analytical solutions. Combining the sonic point parameters calculated through the solution of equation (\[hydro27\]) and using relation (\[rad7\]), we get for the opacity at the sonic point $$\protect\label{num1}
\tau_s=\frac{\sigma_T}{2m_p}r_s\rho_s\left(1-\frac{Y_{He}}{2}\right).$$
We calculated and plotted the fluxes given by both methods at the sonic point. A particular value of the parameter $p$ for the analytical model was obtained by matching the value of $kT$ to the corrected photospheric temperature level obtained numerically.
We obtained results for approximately 150 different sets of values of $T_b, R_{ns}, M_{ns}$ and $Y_{He}$. Examples of numerical calculations of spectra for different neutron star models, and of fitting them with the analytical shapes, are presented in Figures 1 and 2. The analytical and numerical shapes match quite well over a wide range of neutron star surface temperatures, and both show the two distinctive features of the outgoing spectrum of the expansion stage: a diluted black-body-like high-frequency component and a power-law soft excess at low energies. The dependence of the sonic point opacity given by (\[num1\]) correctly describes the dilution process, indicating that the assumption about the atmosphere structure adopted in the analytical model is correct.
Tables 1 and 2, which summarize the results for two different neutron star models, are given in order to compare our results with the more rigorous calculations (NTL). Taking the mass-loss rate as an input parameter, NTL obtained profiles of different quantities throughout the whole atmosphere. They argue that the temperature of the burning shell is maintained around $3 \times 10^9$ K for all models. The temperature of the photons departs appreciably from the temperature of the ambient matter above the photospheric radius and stays practically constant, indicating that the radiation becomes essentially decoupled from the expanding medium. We varied the bottom temperature over a wide range of values and inferred the mass-loss rate, the mass of the envelope, etc.
Our results are in qualitative agreement with NTL. The crucial physical parameters which define the main spectral signatures are the photospheric radius $r_{ph}$ and its temperature $kT$. The runs of the atmospheric profiles obtained by the two approaches are quite similar, although $T_{ph}$ in the NTL results is usually $15\%-25\%$ greater than in our models. This difference is explainable. We match the isothermal levels given by the numerical and analytical calculations and define the obtained value as the photospheric temperature. This is a lower estimate, because the temperature profile starts to grow before the bottom of the photosphere is reached. NTL define $T_{ph}$ as the matter temperature at $r_{ph}$. A temperature level calculated at the thermalization depth $\tau_{th}$ should compensate for this difference. The difference in the density profiles, which can reach a factor of two, will affect the spectrum only in the soft part ($\leq 0.2$ keV), where the normalization of the power-law component can change. This fact does not diminish the validity of our results. The soft component of the spectrum can be represented as an independent fitting shape with a normalization included as an additional fitting parameter. This matter is not crucial at the moment due to the restricted spectral capabilities of current X-ray observing facilities. One can also notice a quick decrease of the envelope mass and a wide variation of $T_b$. This discrepancy can be explained by differences in the model formulations. Specifically, NTL included the helium-burning shells in the model and put the inner boundary condition on the “real” neutron star surface, while our model stops where the radiation and gas pressures are equal ($y=1$), which is close to but still outside of the helium-burning shell. In our approach, part of the bottom of the atmosphere is left out. In fact, the lower the mass-loss rate, the greater the portion of mass missing beyond the point where $y=1$. This is clearly seen from the tables. The temperature at the bottom should be considered an “effective” temperature rather than the real temperature of the helium-burning zone.
As we have already pointed out, one needs to know the dependences of $v_s$ and $p$ on the input parameters in order to complete the analytical description and thus to apply these results to the fitting of observational X-ray spectra. Analysis of the $v_s$ and $p$ runs shows that $\log v_s$ and $\log p$ are linear functions of $\log T_b$, $\log R_{ns}$, $\log M_{ns}$ and $\log(2-Y_{He})$. We combine all experiments and fit $v_{s,8}$ and $p$ with a model $const\times T_{b,9}^\alpha r_{b,6}^\beta m^\gamma (2-Y_{He})^{\eta}$ by the least-squares method to get $$\protect\label{num2}
p=7.69\, T_{b,9}^{-0.84} r_{b,6}^{-0.89} m^{0.69} (2-Y_{He})^{-0.22}$$ $$\protect\label{num3}
v_{s,8}=5.46\, T_{b,9}^{-0.71} r_{b,6}^{-0.87} m^{0.63} (2-Y_{He})^{-0.22}$$ with maximum parameter errors of less than 1%. The parameter ranges included in the fitting are 0.3-7.0 for $T_{b,9}$, 0.6-2.0 for $r_{b,6}$, 0.8-2.7 for $m$ and 0.3-1.0 for $Y_{He}$. These results can be used to substitute $p$ and $v_{s,8}$ in equations (\[rad20\])-(\[rad22\]). Now we have a consistent system of equations, which yields the X-ray spectrum of the burster as a function of only the input physical parameters, i.e., the neutron star mass, radius, surface temperature and elemental abundance.
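Because (\[num2\])-(\[num3\]) and (\[rad20\])-(\[rad22\]) are simple power laws, they are trivial to evaluate in code; the following sketch (with arbitrary illustrative parameter values chosen inside the quoted fitting ranges) returns $p$, $v_{s,8}$, $kT$ and $\tau_{th}$:

```python
def model_parameters(m, r_b6, T_b9, Y_He):
    """Evaluate the fits (num2)-(num3) and then eqs. (rad21)-(rad22)."""
    p    = 7.69 * T_b9**-0.84 * r_b6**-0.89 * m**0.69 * (2 - Y_He)**-0.22
    v_s8 = 5.46 * T_b9**-0.71 * r_b6**-0.87 * m**0.63 * (2 - Y_He)**-0.22
    kT = (0.4 * m**(1 / 7) * r_b6**(-5 / 14) * T_b9**(-5 / 14)
          * v_s8**(15 / 14) * (2 - Y_He)**(-3 / 14) * p**(-4 / 7))      # keV
    tau_th = (90.5 * m**(2 / 7) * r_b6**(-3 / 14) * T_b9**(-3 / 14)
              * v_s8**(9 / 14) * (2 - Y_He)**(1 / 14) * p**(-8 / 7))
    return p, v_s8, kT, tau_th

# Illustrative values only: m = 1.4, R_ns = 10 km, T_b = 2e9 K, pure helium.
print(model_parameters(1.4, 1.0, 2.0, 1.0))
```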
Final form of the profile for spectral fitting
===============================================
The fact that the obtained spectra are black-body like almost everywhere except at low energies allows us to simplify formula (\[rad12\]). First we note that, due to (\[rad17\]) and the smallness of $x$, $$\protect\label{fin10}
t_{th}=\frac{4}{7}\sqrt{3\Psi(x)D}\,\tau_{th}^{7/4}=2\frac{\sqrt{2\Psi(x)}}{p}
\simeq 2\frac{\sqrt{\ln(2.35/x)}}{p x},$$ for the soft part of the spectrum. Here $x=h\nu/kT$, and $\Psi(x)$ and $D$ are defined in formula (\[rad11\]). Because $t_{th}$ is large for small values of $x$, we can use the large-argument approximation of the modified Bessel function of the second kind, $$K_p(x)\approx\sqrt{\frac{\pi}{2x}}\,e^{-x},$$ and rewrite equation (\[rad12\]) as follows $$J(\tau,x)=B_\nu \left(\frac{\tau}{\tau_{th}}\right)^2
\left[\frac{\Gamma(3/7)}{\Gamma(11/7)}z^{8/7}+\frac{4}{7\Gamma(11/7)}z^{1/14}e^{-2z}\right]=$$ $$\protect\label{fin11}
=B_\nu \left(\frac{\tau}{\tau_{th}}\right)^2\left[2.32\,z^{8/7}+0.64\,z^{1/14}e^{-2z}\right],$$ where $$z=\frac{\sqrt{\ln (2.35/x)}}{p x}.$$ Here we rewrite the dilution factor in terms of opacity using relation (\[rad13\]). Clearly, the second term in the parenthesis of formula (\[fin11\]) is significant only where $z$ becomes small ($x$ becomes large) and the spectrum shape “adjusts” to the black-body component. In turn, the first term of equation (\[fin11\]) represents the power law component of the lower part of spectrum with the slope 6/7, which can be shown by simple similarity (see also T94) $$B_\nu z^{8/7}\sim \frac{x^2}{x^{8/7}}=x^{6/7}.$$ Another important advantage of this term is that it vanishes for large values of $x$. This fact gives us opportunity to construct convenient and accurate formula for observational spectra fitting. We drop the second term in equation (\[fin11\]) and adjust to the diluted black-body shape by means of quadratic power combination as follows $$\protect\label{fin12}
J(\tau,x)=B_\nu \left(\frac{\tau}{\tau_{th}}\right)^2\left[1+5.34\,z^{16/7}\right]^{1/2}.$$ Comparison of the shapes given by formula (\[fin12\]) with the exact solution (\[rad12\]) shows that they deviate from each other by less than 2%, which is more than acceptable in contemporary astrophysical observational data analysis. Using the explicit form of $z$ and the definition of the outgoing flux, equation (\[fin12\]) can be rewritten in the form $$\protect\label{fin13}
F_\nu=\frac{4\pi}{3}\frac{dJ_\nu}{d\tau}=\frac{8\pi}{3}
B_\nu\frac{\tau_s}{\tau_{th}^2}\left\{1+5.34
\left[\frac{\ln (2.35/x)}{p^2x^2}\right]^{8/7}\right\}^{1/2}.$$ Equation (\[num1\]) yields a useful relationship for the dilution coefficient in a manner similar to (\[rad20\])-(\[rad22\]) $$\protect\label{fin14}
\frac{8\pi}{3} \frac{\tau_s}{\tau_{th}^2}=\frac{5.07\times10^{-5}r_{b,6}^{10/7}T_{b,9}^{10/7}p^{16/7}}{m^{11/7}v_{s,8}^{2/7}(2-Y_{He})^{1/7}}.$$ Substituting the results of the parameter fitting (\[num2\])-(\[num3\]), we get $$\protect\label{fin15}
\frac{8\pi}{3} \frac{\tau_s}{\tau_{th}^2}=
3.31\times10^{-3} r_{b,6}^{-0.36}\,T_{b,9}^{-0.29} m^{-0.17}(2-Y_{He})^{-0.58},$$ $$\tag{\ref{num2}}
p=7.69\, T_{b,9}^{-0.84} r_{b,6}^{-0.89} m^{0.69} (2-Y_{He})^{-0.22}.$$ Here, again, $r_{b,6}$, $T_{b,9}$ and $m$ are the neutron star radius, surface temperature and mass in units of $10^6$ cm, $10^9$ K and solar masses, respectively. Now we have provided our spectrum profile (\[fin13\]) with expressions for the parameter $p$ and the dilution factor (\[fin15\]). These three formulae constitute the final analytical results of this paper.
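A direct transcription of the fitting shape (\[fin13\]) with the dilution factor (\[fin15\]) and the fit (\[num2\]) for $p$ might look as follows (a sketch only: the overall flux normalization and the conversion of $x=h\nu/kT$ to physical energy via eq. \[rad21\] are left out, and the power-law term is simply switched off where $\ln(2.35/x)$ would become negative):

```python
import numpy as np

def burst_spectrum(x, m=1.4, r_b6=1.0, T_b9=1.0, Y_He=1.0):
    """Dimensionless spectral shape of eq. (fin13); x = h*nu / kT."""
    p = 7.69 * T_b9**-0.84 * r_b6**-0.89 * m**0.69 * (2 - Y_He)**-0.22
    dilution = 3.31e-3 * r_b6**-0.36 * T_b9**-0.29 * m**-0.17 * (2 - Y_He)**-0.58
    B = x**3 / np.expm1(x)                            # blackbody shape at the color temperature
    log_term = np.clip(np.log(2.35 / x), 0.0, None)   # soft excess matters only at x < 2.35
    z2 = log_term / (p * x)**2
    return dilution * B * np.sqrt(1.0 + 5.34 * z2**(8.0 / 7.0))

x = np.logspace(-2, 1.5, 200)
F = burst_spectrum(x, m=1.4, r_b6=1.0, T_b9=2.0, Y_He=1.0)
print(x[F.argmax()], F.max())   # location and height of the spectral peak
```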
Discussion
==========
The model and derivations presented above assume that plasma consists of fully ionized hydrogen and helium. In reality, this assumption can be too simplistic. For instance, in the case of the recently discovered super-burst [@str], a sufficient fraction of material should be represented by heavier elements. These long and powerful bursts are also considered to be due to the nuclear runaway burning in the carbon “ocean” under the neutron star surface. In this section we discuss how our model can be adjusted for study of this phenomenon. The approach as a whole does not change, but some formulae have to be modified in order to account for the different plasma composition.
First, we note that for a plasma which consists of a single, fully ionized element, we have for the mean molecular weight $$\mu=\frac{A}{1+Z},$$ and for the electron number density $$n_e=\frac{\rho}{A m_p}Z,$$ where $A$ and $Z$ are the atomic weight and the atomic number of the corresponding element. In the general case of a mixture of elements, each represented by a mass fraction $Y_i$, we write $$\protect\label{disc1}
\mu= \frac{1}{\sum Y_i(1+Z_i)/A_i}.$$ $$\protect\label{disc2}
n_e=\frac{\rho}{m_p}\sum \frac{Z_i}{A_i}Y_i.$$ In the hydrodynamic part of this study, these modifications will affect only the form of the terms and factors containing $Y_{He}$. In the radiation transfer section, the form of $\alpha_{ff}/\alpha_T$ will require more careful treatment. According to @rbk79, the free-free absorption coefficient is $$\protect\label{disc4}
\alpha_{ff}=3.7\times 10^{8}\,T^{-1/2} \overline{Z^2n_i}n_e\,\nu^{-3}(1-e^{-h\nu/kT})\,\tilde{g}_{ff},$$ where $$\protect\label{disc5}
\overline{Z^2n_i}=\sum Z_i^2n_i=\frac{\rho}{m_p} \sum \frac{Z_i^2}{A_i}Y_i.$$ In the case of a hydrogen-helium plasma this factor is conveniently represented just by the gas density, i.e. $$\overline{Z^2n_i}=n_H+4n_{He}=\frac{\rho}{m_p}(Y_H+Y_{He})=\frac{\rho}{m_p},$$ which yields the form used in equation (\[affat\]). In general, one should use the general expressions for $n_e$ and $\overline{Z^2n_i}$ to find the correct form of $\alpha_{ff}/\alpha_T$ relevant to the specific chemical composition.
To be more instructive, we carry out such a modification for the case when the plasma has a substantial carbon fraction. Using the general expressions for $\mu$, $n_e$ and $\overline{Z^2n_i}$, we write for a hydrogen-helium-carbon gas $$\protect\label{disc6}
\mu=\frac{12}{24Y_H+9Y_{He}+7Y_C}=\frac{4}{8-5Y_{He}-17Y_C/3},$$ $$\protect\label{disc7}
n_e=\frac{\rho}{m_p}\left(1-\frac{Y_{He}+Y_C}{2}\right),$$ and $$\protect\label{disc8}
\overline{Z^2n_i}=\frac{\rho}{m_p}(Y_H+Y_{He}+3Y_C)=\frac{\rho}{m_p}(1+2Y_C).$$ Correspondingly, in all formulae the factor $(2-Y_{He})$ will be replaced by $(2-Y_{He}-Y_C)$ and $(8-5Y_{He})$ by $(8-5Y_{He}-17Y_C/3)$. Additionally, the right-hand side of the expression for $\alpha_{ff}/\alpha_T$ has to be multiplied by the factor $(1+2Y_C)$. Clearly, this modification will add a fifth free parameter, $Y_C$, to the model. Using the general methodology outlined in this paper one should be able to produce solutions for the parameter $p$ and the dilution factor. A problem which can arise from the inclusion of heavy elements is the possibility that heavy ions are only partly ionized. The ionization degree can also vary throughout the atmosphere. Because full ionization and constancy of the gas's chemical composition are basic assumptions of the adopted approach, we cannot explicitly include the effect of ionization in our model. Instead, it can be accounted for in a manner similar to our temperature profile correction. First, the approximate atmospheric profiles can be obtained by assuming full ionization. Then the ionization degree can be calculated by solving the Saha equation, using this solution as a zero-order approximation for the atmospheric temperature and electron number density profiles. Finally, one should proceed by solving the hydrodynamic problem, in which the partial ionization of the heavy elements is taken into account.
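A small helper that evaluates the composition factors (\[disc6\])-(\[disc8\]) for a given mixture (the example abundances are arbitrary):

```python
def mixture_factors(Y_He, Y_C):
    """Composition factors of eqs. (disc6)-(disc8) for an H-He-C plasma."""
    Y_H = 1.0 - Y_He - Y_C
    mu = 4.0 / (8.0 - 5.0 * Y_He - 17.0 * Y_C / 3.0)   # mean molecular weight
    ne_factor = 1.0 - (Y_He + Y_C) / 2.0               # n_e = (rho/m_p) * ne_factor
    z2ni_factor = 1.0 + 2.0 * Y_C                      # Z^2 n_i = (rho/m_p) * z2ni_factor
    return Y_H, mu, ne_factor, z2ni_factor

# Illustrative mixture: 30% hydrogen, 50% helium, 20% carbon by mass.
print(mixture_factors(0.5, 0.2))
```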
For reasons mentioned above, it is also a problem to include the proper physics for the transport of heavy nuclei to the outer layers. Two major processes can contribute to this element flow. Bulk motion mixing should dominate in the convection zone close to the bottom of the atmosphere. In the outer layers, a strong radiative push should govern the process, because of the large resonance cross-sections of the heavy elements. The general problem of heavy ions mixing is quite difficult and requires a rigorous approach, which is out of scope of this paper.
As far as the boundary conditions are concerned, modeling of carbon nuclear flashes will require higher bottom temperatures. The temperature of the carbon-burning zone is argued to be about $10^{10}$ K \[see @str\], which is close to the upper boundary for the bottom temperature $T_b$ used in our calculations. No peculiarities of the approach were encountered in the case of very high bottom temperatures. Extremely high temperatures will require the correct form of the opacity coefficient $\kappa$ [@pac] instead of equation (\[hydro2\]), which represents a simplified formula for $\kappa$ in the case of modest temperatures.
Another important issue is the correct accounting for the line emission of heavy elements, which is detected in the spectral analysis of super-bursts. @str argued that this phenomenon is due to reflection from the accretion disk during the burst. One can estimate the disk heating time by using the standard Shakura-Sunyaev accretion disk model [@ss] and the fact that approximately 10% of the burst luminosity is absorbed by the inner part of the disk [@lapsun]. [*Simple estimates give a timescale of less than a second, assuming a mass-accretion rate of the order of Eddington or less for the disk accretion regime and a burst luminosity greater than 5% of Eddington, which is detected during several thousand seconds of observation of the super-burst in 4U 1820-30.*]{} Consequently, the observed spectral feature of the K$\alpha$ line should be generated in the burst atmosphere rather than in the disk, since the disk reaches the temperature of the X-ray radiation very quickly.
Unfortunately, the origin and behavior of the spectral line features still remain unexplained. The authors plan to include the spectral line effect in the relaxation method, in order to calculate the line emission during the X-ray burst and to compare this with the observed spectra. Relativistic effects are usually negligible during a strong X-ray burst due to the significant radial expansion and the fact that the outgoing spectrum forms in the outer layers of the atmosphere. General relativistic effects become important at the contraction stage, when the extended envelope recedes close to the NS surface \[see @lpt\]. Haberl & Titarchuk (1995) applied the full general relativistic approach to derive the NS mass-radius relation in 4U 1820-30 using EXOSAT observations and the T94 model.
Conclusion
==========
This paper follows a common idea of the last decade: to fit observational and numerical spectra with some model, mostly black-body shapes, to obtain spectral softening/hardening factors [@lth]. We improve this technique in several ways. We use for the fitting a more realistic non-blackbody spectral profile, which accounts for the observed power-law soft excess of X-ray burster spectra. The temperature profile is corrected by solving the temperature equation. The existence of the isothermal photosphere during X-ray bursts is confirmed numerically and analytically. Finally, we analytically obtain the multiplicative (dilution) factor, which is no longer a fitting parameter but is self-consistently incorporated in the model.
We show how the theoretical study of the radiatively driven wind phenomenon can produce useful techniques for analyzing observational data. It can fulfill the needs of newly emerging branches of observational X-ray astronomy, such as the very promising discovery of super-bursts [@str], which exhibit photospheric expansion and spectral modifications relevant to extended atmospheres. We present the analytical theory of strong X-ray bursts, which includes the effects of Comptonization and free-free absorption. Partly presented in some earlier publications, this area of the study of X-ray burst spectral formation has been lacking a detailed and self-consistent account. We use numerical simulation to validate our analytical theory and to link our solution to the energy axis. We show how this information can be extracted from spectral data. We provide the analytical expression for the X-ray burst spectral shape, which depends only upon the input physical parameters of the problem: the neutron star mass, radius, surface temperature and elemental abundance. Expressions for the color ratios and the dilution coefficient are also given.
The authors thank Peter Becker for valuable comments and suggestions which improved the paper. We are grateful to Menas Kafatos for encouragement and to the Center for Earth Science and Space Research (GMU) for the support of this research. We appreciate the thorough analysis of the presented work by the referee.
Ebisuzaki, T. 1987, , 39, 287\
Ebisuzaki, T., Hanawa, T., & Sugimoto, D. 1983, , 35, 17 (EHS)\
Greene, J. 1959, , 130, 693\
Grindlay, J. E., & Heise, J. 1975, , No. 2879\
Haberl, F., & Titarchuk, L. G. 1995, A&A, 299, 414\
Kuulkers, E., Homan, J., van der Klis, M., Lewin, W. H. G., & Mendez, M. 2001, accepted to\
Lapidus, I. I. 1991, , 377, L93\
Lapidus, I. I., & Sunyaev, R. A. 1985, , 217, 291\
Lewin, W. H. G., van Paradijs, J., & Taam, R. E. 1993, Space Sci. Rev., 62, 223\
London, R. A., Taam, R. E., & Howard, H. W. 1986, , 306, 170\
Nobili, L., Turolla, R., & Zampieri, L. 1991, , 383, 250\
Nobili, L., Turolla, R., & Lapidus, I. I. 1994, , 433, 276 (NTL)\
Paczynski, B. 1983, , 267, 315\
Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 1992, Numerical Recipes (Cambridge: University Press)\
Rybicki, G. B., & Lightman, A. P. 1979, Radiative Processes in Astrophysics (New York: Wiley)\
Thorne, K. S. 1981, , 194, 439\
Titarchuk, L. G. 1994, , 429, 340 (T94)\
Strohmayer, T. E., & Brown, E. F. 2001, in press (astro-ph/0108420)\
Shakura, N. I., & Sunyaev, R. A. 1973, , 24, 337
Analytic solution for the radiative transfer problem
====================================================
We look for the solution of equation (\[rad8\]) in the form $$\protect\label{A1}
J(\tau,x)=\left(\frac{\tau}{\tau_{th}}\right)^2B_\nu(\tau_{th})+\tilde{J}(\tau,x).$$ The basic idea is to separate the high-frequency (diluted black-body) and the low-frequency $\tilde{J}(\tau,x)$ parts of the spectrum, where different physical processes dominate. The Kompaneets operator $L_\nu$ acting upon a black-body shape vanishes, and we neglect $L_\nu(\tilde{J})$, which allows us to obtain the solution of the radiative transfer problem analytically. At this point $\tau_{th}$ is a parameter of the problem. The algorithm for the determination of $\tau_{th}$ is described separately. Substituting (\[A1\]) into (\[rad8\]) we find for $\tilde{J}(\tau,x)$ $$\frac{\partial}{\partial \tau}\left(\frac{1}{\tau}\frac{\partial \tilde{J}}{\partial \tau}\right)-\frac{3}{\tau}\frac{\alpha_{ff}}{\alpha_T}\tilde{J}=-\frac{3}{\tau}\frac{\alpha_{ff}}{\alpha_T}B_\nu\left[1-\left(\frac{\tau}{\tau_{th}}\right)^2\right],$$ with the boundary condition $$\tilde{J}_\nu|_{\tau=\tau_{th}}=0.$$ The solution satisfying this condition is given by $$\protect\label{A2}
\tilde{J}(\tau,x)=\frac{1}{pW}y_1(\tau)\int_0^{\tau_{th}}y_2(\tau)f(\tau)\,d\tau,$$ where $ p(\tau)=\frac{1}{\tau}$ and $W(\tau)$ is the Wronskian $$W=\left|\begin{array}{cc}
y_1&y_2\\
y_1'&y_2'
\end{array}\right|=-\frac{7}{4}\tau.$$ Thus the product $$pW=-\frac{7}{4}.$$ The functions $y_1(\tau)$ and $y_2(\tau)$ are $$y_1(\tau)=\tau I_{4/7}\left(\frac{4}{7}\sqrt{3D\Psi(x)}\,\tau^{7/4}\right),$$ $$y_2(\tau)=\tau K_{4/7}\left(\frac{4}{7}\sqrt{3D\Psi(x)}\,\tau^{7/4}\right),$$ where $I_\nu(x)$ and $K_\nu(x)$ are the modified Bessel functions of the first and second kind, respectively.
The function $f(\tau)$ in (\[A2\]) is the right-hand side of the equation for $\tilde{J}$, namely $$f(\tau)=-\frac{3}{\tau}\frac{\alpha_{ff}}{\alpha_T}B_\nu\left[1-\left(\frac{\tau}{\tau_{th}}\right)^2\right]=-3\,\tau^{1/2}D\Psi(x)B_\nu\left[1-\left(\frac{\tau}{\tau_{th}}\right)^2\right].$$ We introduce a new variable $t$, $$t=\frac{4}{7}\sqrt{3D\Psi(x)}\,\tau^{7/4},$$ and rewrite solution (\[A2\]) as $$\protect\label{A3}
\tilde{J}(t,x)=B_\nu t^{4/7}I_{4/7}(t)\int_0^{t_{th}}t^{3/7}K_{4/7}(t)\left[1-\left(\frac{t}{t_{th}}\right)^{8/7}\right]dt.$$ Using the properties of the modified Bessel functions, $$\int x^p K_{p-1}(x)\, dx=-x^p K_{p}(x)+C\quad\mathrm{and}\quad K_p=K_{-p},$$ we evaluate the integrals in (\[A3\]): $$\int_0^{t_{th}}t^{3/7}K_{4/7}(t)\,dt=-t^{3/7}K_{3/7}(t)\Bigl|_0^{t_{th}}=\frac{\Gamma\left(\frac{3}{7}\right)}{2^{4/7}}-t_{th}^{3/7}K_{3/7}(t_{th}),$$ $$\int_0^{t_{th}}t^{11/7}K_{4/7}(t)\,dt=-t^{11/7}K_{11/7}(t)\Bigl|_0^{t_{th}}=\Gamma\left(\frac{11}{7}\right)2^{4/7}-t_{th}^{11/7}K_{11/7}(t_{th}).$$ Finally $\tilde{J}(t,x)$ takes the form $$\tilde{J}(t,x)=B_\nu t^{4/7}I_{4/7}(t)\left[\frac{\Gamma\left(\frac{3}{7}\right)}{2^{4/7}}- \frac{\Gamma\left(\frac{11}{7}\right)2^{4/7}}{t_{th}^{8/7}}+t_{th}^{3/7}(K_{11/7}(t_{th})-K_{3/7}(t_{th}))\right]$$ or, using the recurrence relation $K_{p+1}(z)-K_{p-1}(z)=\frac{2p}{z}K_{p}(z)$, $$\tilde{J}(t,x)=B_\nu t^{4/7}I_{4/7}(t)\left[\frac{\Gamma\left(\frac{3}{7}\right)}{2^{4/7}}- \frac{\Gamma\left(\frac{11}{7}\right)2^{4/7}}{t_{th}^{8/7}}+\frac{8}{7}\,t_{th}^{-4/7}K_{4/7}(t_{th})\right].$$ The last formula, together with (\[A1\]), gives the solution of equation (\[rad8\]). We can simplify this form by noting that we are interested in the solution in the outer layers of the atmosphere (the emergent spectrum), where $\tau \rightarrow 0$ and $t \rightarrow 0$, so we can use the asymptotic form for small arguments, $$I_p(x)\approx\frac{1}{\Gamma(p+1)}\left(\frac{x}{2}\right)^p.$$ Making this substitution and inserting the result into the expression for $J(\tau,x)$, we find that the second term in $\tilde{J}(\tau,x)$ cancels the diluted blackbody term in $J(\tau,x)$, which takes the form $$J(t,x)= B_\nu \frac{t^{8/7}}{2^{4/7}\Gamma\left(\frac{11}{7}\right)}\left[\frac{\Gamma\left(\frac{3}{7}\right)}{2^{4/7}}+\frac{8}{7}\,t_{th}^{-4/7}K_{4/7}(t_{th})\right].$$
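For orientation (this is not part of the original analysis), the emergent-spectrum factor above is straightforward to evaluate numerically. The following minimal sketch assumes SciPy and treats $t$ and $t_{th}$ as given quantities, keeping in mind that their $x$-dependence enters through $\Psi(x)$; the sample values are placeholders only.

```python
from scipy.special import gamma, kv

# Sketch (not from the paper): the small-t emergent-spectrum factor J(t, x)/B_nu
# derived above. t and t_th are treated as inputs; their frequency dependence
# comes from Psi(x) and is not modeled here.
def emergent_factor(t, t_th):
    prefactor = t ** (8 / 7) / (2 ** (4 / 7) * gamma(11 / 7))
    bracket = gamma(3 / 7) / 2 ** (4 / 7) + (8 / 7) * t_th ** (-4 / 7) * kv(4 / 7, t_th)
    return prefactor * bracket

# Placeholder values: fixed small t, several thermalization depths t_th
for t_th in (0.5, 1.0, 2.0, 5.0):
    print(t_th, emergent_factor(0.1, t_th))
```

The bracketed factor decreases monotonically toward $\Gamma(3/7)/2^{4/7}$ as $t_{th}$ grows, since the $K_{4/7}(t_{th})$ term dies off exponentially.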
Solution of the temperature equation
====================================
Substituting relation (\[rad14\]) into the equation of radiative diffusion, multiplying it by $x^2$ and integrating over the energy range from $x_*$ to $\infty$ we get $$\protect\label{e1}
\frac{1}{3}\left(\frac{d^2R}{d\tau^2}-\frac{1}{\tau}\frac{dR}{d\tau}\right)
\int_{x_*}^\infty\frac{x^2}{e^x-1}\,dx=[R(\tau)-1]\int_{x_*}^\infty\frac{x^2}{e^x-1}
\frac{\alpha_{ff}}{\alpha_T}\,dx,$$ where the integrals can be approximated as $$\int_{x_*}^\infty\frac{x^2}{e^x-1}\,dx\approx \int_0^\infty x^2e^{-x}\,dx=2$$ and, noting that $\alpha_{ff}/\alpha_T=D\Psi(x)\,\tau^{3/2}\approx
D\,\tau^{3/2}\tilde g(x)/x^2$, we obtain $$\int_{x_*}^\infty\frac{x^2}{e^x-1}\frac{\alpha_{ff}}{\alpha_T}\,dx\approx D\,\tau^{3/2}\int_{x_*}^\infty\frac{\tilde{g}(x)}{x}\,dx\approx \frac{1}{4}\ln^2\frac{2.25}{x_*}D\,\tau^{3/2}.$$ Here we used the fact that $\tilde{g}(x)\approx \frac{1}{2}\ln(2.25/x)$. The equation for $R(\tau)$ takes the form $$\frac{d^2R}{d\tau^2}-\frac{1}{\tau}\frac{dR}{d\tau}=\frac{3}{8}\ln^2\frac{2.25}{x_*}D\tau^{3/2}[R(\tau)-1]=\tilde{D}\tau^{3/2}[R(\tau)-1].$$ Boundary conditions for this equation are $$\begin{array}{cc}
\tau\rightarrow 0&R(\tau)\rightarrow 0,\\
\tau\rightarrow \infty&R(\tau)\rightarrow 1.
\end{array}$$ The general solution of this equation is $$R(\tau)=1+\tau Z_{4/7}\left(\frac{4}{7}\sqrt{\tilde{D}}\tau^{7/4}\right),$$ where $Z_{4/7}(z)=c_1K_{4/7}(z)+c_2I_{4/7}(z)$. In deriving this formula we use the standard result that the general solution of an inhomogeneous ODE is the sum of the general solution of the corresponding homogeneous ODE and a particular solution of the inhomogeneous ODE; in our case the particular solution is simply unity. The second boundary condition and the fact that $$\begin{array}{cc}
K_{4/7}(z)\rightarrow 0&z\rightarrow \infty,\\
I_{4/7}(z)\rightarrow \infty&z\rightarrow \infty
\end{array}$$ leaves only $c_1$ nonzero, and the first boundary condition gives the value of $c_1$, namely $$c_1=-\frac{1}{\lim_{\tau\to 0}\,\tau K_{4/7}
\left(\frac{4}{7}\sqrt{\tilde{D}}\,\tau^{7/4}\right)}=
-\frac{2^{3/7}}{\Gamma\left(\frac{4}{7}\right)\tau_{th}},$$ where we put $\tau_{th}=\left(\frac{4}{7}\sqrt{\tilde{D}}\right)^{-4/7}$. Then $R(\tau)$ reduces to $$R(\tau)=1-\frac{2^{3/7}}{\Gamma\left(\frac{4}{7}\right)}\frac{\tau}{\tau_{th}}
K_{4/7}\left[\left(\frac{\tau}{\tau_{th}}\right)^{7/4}\right].$$ The small-argument expansion of $K_{4/7}$ in $\tau/\tau_{th}$ then yields a useful relation for $R(\tau)$: $$\protect\label{B6}
R(\tau)=\frac{\Gamma\left(\frac{3}{7}\right)}{2^{8/7}
\Gamma\left(\frac{11}{7}\right)}\left(\frac{\tau}{\tau_{th}}\right)^2.$$
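As a consistency check (not part of the original text), relation (\[B6\]) follows directly from the small-argument expansion $$K_{4/7}(z)\approx\frac{1}{2}\left[\Gamma\left(\frac{4}{7}\right)\left(\frac{2}{z}\right)^{4/7}+\Gamma\left(-\frac{4}{7}\right)\left(\frac{z}{2}\right)^{4/7}\right],\qquad z=\left(\frac{\tau}{\tau_{th}}\right)^{7/4}.$$ The first term cancels the unity in $R(\tau)$, while the second gives $$R(\tau)\approx-\frac{2^{3/7}}{\Gamma\left(\frac{4}{7}\right)}\,\frac{\tau}{\tau_{th}}\cdot\frac{\Gamma\left(-\frac{4}{7}\right)}{2^{1+4/7}}\,\frac{\tau}{\tau_{th}}=\frac{7}{4}\,\frac{\Gamma\left(\frac{3}{7}\right)}{2^{8/7}\,\Gamma\left(\frac{4}{7}\right)}\left(\frac{\tau}{\tau_{th}}\right)^2,$$ where $\Gamma\left(-\frac{4}{7}\right)=-\frac{7}{4}\Gamma\left(\frac{3}{7}\right)$ was used; since $\Gamma\left(\frac{11}{7}\right)=\frac{4}{7}\Gamma\left(\frac{4}{7}\right)$, this reproduces (\[B6\]).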
Condition at the sonic point (derivation of $Y_s$)
==================================================
We can rewrite the partial derivative in (\[hydro21\]) using the obvious relation $$\left(\frac{\partial P}{\partial \rho}\right)_\Xi=
\frac{\partial P}{\partial T}\left(\frac{\partial T}{\partial \rho}\right)_\Xi+\frac{\partial P}{\partial y}\left(\frac{\partial y}{\partial \rho}\right)_\Xi.$$
Differentiating the equation of state (\[hydro9\]) we obtain derivatives of pressure $$\frac{\partial P}{\partial T}=\left(1+\frac{1}{y}\right)\frac{4aT^3}{3}
\quad\mathrm{and}\quad\frac{\partial P}{\partial y}=-\frac{1}{y^2}\frac{aT^4}{3},$$ and differentiating (\[hydro24\]) with respect to $\rho$ we get $$\left(\frac{\partial T}{\partial \rho}\right)_\Xi=
-\frac{T}{\lambda}\left(\frac{1}{y}+4\right)
\left(\frac{\partial y}{\partial \rho}\right)_\Xi.$$ On the other hand, differentiating (\[hydro10\]) gives $$\frac{a\mu m_p}{3k}\left(\frac{3T^2}{y}
\frac{\partial T}{\partial \rho}-\frac{T^3}{y^2}\frac{\partial y}{\partial \rho}\right)=1.$$ Combining this with the previous equation yields $$\left(\frac{\partial y}{\partial \rho}\right)_\Xi=
-\frac{3k}{a\mu m_p}\frac{y^2}{T^3}\left(\frac{\lambda}{\lambda+3(1+4y)}\right),$$ $$\left(\frac{\partial T}{\partial \rho}\right)_\Xi=
\frac{3k}{a\mu m_pT^2}\left(\frac{y(1+4y)}{\lambda+3(1+4y)}\right).$$
Now, combining all of the derivatives found above, we have $$\left(\frac{\partial P}{\partial \rho}\right)_\Xi=
\frac{k}{\mu m_p}\left[4\left(1+\frac{1}{y}\right)\left(\frac{y(1+4y)}
{\lambda+3(1+4y)}\right)+\frac{\lambda}{\lambda+3(1+4y)}\right]T$$ or in a more compact form $$\label{appc1}
\left(\frac{\partial P}{\partial \rho}\right)_\Xi=\frac{k}{\mu m_p}
\left[\frac{\lambda+4(1+y)(1+4y)}{\lambda+3(1+4y)}\right]T,$$ which is, in fact, a sonic point condition (\[hydro21\]).
Reduction of Hydrodynamical Problem to First-Order ODE
======================================================
We derive an expression for the velocity derivative $v_y^{\prime}$ in terms of $v$ and $y$.
We substitute the temperature profile found in (\[hydro24\]) into (\[hydro6\]) to obtain $\rho$ as a function of $y$: $$\rho=\rho(y)=\frac{a\mu m_p T_b^3}{3 k}\,y^{-3/\lambda-1}
\exp \left[-\frac{12(y-1)}{\lambda}\right].$$ Using this expression for $\rho(y)$ and equation (\[hydro13\]) we get $$r=r(v,y)=\left(\frac{\Phi}{4\pi}\right)^{1/2}\rho^{-\frac{1}{2}}v^{-\frac{1}{2}}=
\left(\frac{3\Phi k}{4\pi a \mu m_p T_b^3}\right)^{1/2}
y^{\frac{3}{2\lambda}+\frac{1}{2}}\exp \left[\frac{6(y-1)}{\lambda}\right]v^{-\frac{1}{2}}.$$ Then we get derivatives $$\frac{dr}{dy}=r\left[\left(\frac{3}{2\lambda}+
\frac{1}{2}\right)\frac{1}{y}+\frac{6}{\lambda}\right],$$ $$\frac{dr}{dv}=-\frac{r}{2v}.$$ Differentiation of (\[hydro24\]) also gives us $$\frac{dT}{dy}=T_{b}\,y^{-1/\lambda}\exp \left[-\frac{4(y-1)}{\lambda}\right]\left(-\frac{1}{\lambda y}-\frac{4}{\lambda}\right)=-\frac{T}{\lambda}\left(4+\frac{1}{y}\right).$$ By a combination of all these derivatives we obtain $$\frac{dT}{dr}=\frac{dT}{dy}\frac{dy}{dr}=
\frac{dT}{dy}\left(\frac{\partial r}{\partial y}+
\frac{\partial r}{\partial v}\frac{d v}{d y}\right)^{-1}=
-\frac{2T(4y+1)}{r\lambda y}\left[\left(\frac{3}{\lambda}+1
\right)\frac{1}{y}+\frac{12}{\lambda}-\frac{v'}{v}\right]^{-1}.$$ Substituting this into (\[hydro6\]) yields $$L_{r}=-\frac{16\pi c k r^{2} y}{\mu m_p \kappa}\frac{dT}{dr}
=\frac{32\pi c k }{\mu m_p \lambda \kappa_0 (2-Y_{He})}
\frac{(4y+1)(1+\alpha T)Tr}{\left[\left(3/\lambda+1\right)
1/y+12/\lambda-v'/v\right]}.$$ But from (\[hydro14\]) we also have $$L_{r}=\Psi-\Phi \left(h+\frac{v^2}{2}-\frac{G M_{ns}}{r}\right).$$ Equating the last two expressions for $L_r$ we finally find $v_y^{\prime}$ as $$v_y^{\prime}=f(v,y)=v\left[\left(1+3\frac{1+4y}{\lambda}\right)\frac{1}{y}-
75.2\frac{rT(8-5Y_{He})(1+4y)(1+\alpha T)}{\lambda r_{b,6}T_{b,9}
(\Psi/\Phi-v^2/2-h+G M_{ns}/r)}\right].$$ Here $r_{b,6}$ and $T_{b,9}$ are the neutron star radius and the temperature at the bottom of the atmosphere in units of $10^6$ cm and $10^9$ K, respectively.
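To make the structure of this expression concrete, here is a small transcription into code (not from the paper). Every numerical value below is a placeholder assumption; in the actual problem $r$, $T$, $h$, $\Psi$, $\Phi$, $\lambda$ and the abundances follow from the wind model itself.

```python
# Illustrative transcription of f(v, y) from Appendix D (not from the paper).
# All default values are placeholders chosen only to make the expression
# evaluable at a sample point.
def f(v, y, r, T, lambda_=10.0, alpha=0.0, Y_He=0.0,
      r_b6=1.0, T_b9=1.0, Psi_over_Phi=2.0, h=1.0, GM_over_r=0.5):
    term1 = (1.0 + 3.0 * (1.0 + 4.0 * y) / lambda_) / y
    denom = lambda_ * r_b6 * T_b9 * (Psi_over_Phi - v**2 / 2.0 - h + GM_over_r)
    term2 = (75.2 * r * T * (8.0 - 5.0 * Y_He)
             * (1.0 + 4.0 * y) * (1.0 + alpha * T)) / denom
    return v * (term1 - term2)

print(f(v=0.01, y=1.0, r=1.0, T=0.1))  # dv/dy evaluated at one sample point
```

In practice one would integrate $v_y^{\prime}=f(v,y)$ together with the sonic-point condition of the previous appendix; the snippet only evaluates the right-hand side once.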
7.0 & 0.197 & 0.099 & 45.1 & 1.56 & 93.9 & 173.3 & 0.21 & 1.31 & 57.9 & 4.16\
6.5 & 0.208 & 0.105 & 44.7 & 1.64 & 87.2 & 128.9 & 0.22 & 1.39 & 51.6 & 3.71\
6.0 & 0.221 & 0.111 & 44.1 & 1.75 & 80.5 & 93.6 & 0.24 & 1.48 & 45.6 & 3.28\
5.5 & 0.236 & 0.118 & 43.3 & 1.87 & 73.8 & 66.1 & 0.25 & 1.58 & 39.9 & 2.88\
5.0 & 0.252 & 0.127 & 42.4 & 2.02 & 67.1 & 45.1 & 0.27 & 1.70 & 34.5 & 2.50\
4.5 & 0.272 & 0.137 & 41.5 & 2.20 & 60.4 & 29.6 & 0.29 & 1.84 & 29.4 & 2.14\
4.0 & 0.295 & 0.148 & 40.3 & 2.42 & 53.7 & 18.5 & 0.32 & 2.01 & 24.6 & 1.80\
3.5 & 0.324 & 0.163 & 39.0 & 2.71 & 46.9 & 10.9 & 0.35 & 2.22 & 20.1 & 1.48\
3.0 & 0.359 & 0.180 & 37.4 & 3.09 & 40.2 & 5.86 & 0.39 & 2.50 & 16.0 & 1.19\
2.5 & 0.406 & 0.204 & 35.6 & 3.60 & 33.5 & 2.83 & 0.44 & 2.85 & 12.2 & 0.92\
2.0 & 0.470 & 0.236 & 33.4 & 4.34 & 26.8 & 1.16 & 0.51 & 3.36 & 8.83 & 0.67\
1.75 & 0.510 & 0.257 & 32.2 & 4.85 & 23.5 & 0.68 & 0.55 & 3.69 & 7.29 & 0.56\
1.5 & 0.565 & 0.284 & 30.8 & 5.53 & 20.1 & 0.37 & 0.61 & 4.12 & 5.86 & 0.45\
1.25 & 0.634 & 0.318 & 29.2 & 6.44 & 16.8 & 0.179 & 0.68 & 4.69 & 4.53 & 0.36\
1.1 & 0.686 & 0.344 & 28.0 & 7.18 & 14.8 & 0.108 & 0.74 & 5.12 & 3.80 & 0.30\
1.0 & 0.727 & 0.365 & 27.2 & 7.78 & 13.4 & 0.074 & 0.78 & 5.47 & 3.33 & 0.27\
0.9 & 0.775 & 0.389 & 26.3 & 8.51 & 12.1 & 0.049 & 0.84 & 5.87 & 2.89 & 0.23\
0.8 & 0.831 & 0.417 & 25.3 & 9.40 & 10.7 & 0.031 & 0.90 & 6.36 & 2.46 & 0.20\
0.7 & 0.898 & 0.451 & 24.3 & 10.5 & 9.4 & 0.018 & 0.97 & 6.95 & 2.06 & 0.17\
0.6 & 0.981 & 0.492 & 23.0 & 12.0 & 8.0 & 0.010 & 1.06 & 7.69 & 1.68 & 0.14\
0.5 & 1.086 & 0.545 & 21.6 & 14.0 & 6.7 & 0.005 & 1.17 & 8.65 & 1.33 & 0.11\
0.4 & 1.224 & 0.614 & 19.9 & 17.0 & 5.4 & 0.002 & 1.32 & 9.95 & 1.01 & 0.09\
0.3 & 1.417 & 0.711 & 17.8 & 21.9 & 4.0 & 0.001 & 1.53 & 11.8 & 0.71 & 0.07\
7.0 & 0.248 & 0.110 & 45.1 & 2.12 & 56.2 & 115.8 & 0.242 & 1.80 & 53.4 & 3.53\
6.5 & 0.261 & 0.116 & 44.5 & 2.24 & 52.2 & 86.1 & 0.256 & 1.90 & 47.7 & 3.15\
6.0 & 0.277 & 0.123 & 43.6 & 2.40 & 48.2 & 62.6 & 0.271 & 2.02 & 42.2 & 2.80\
5.5 & 0.294 & 0.131 & 42.6 & 2.58 & 44.1 & 44.2 & 0.288 & 2.16 & 36.9 & 2.46\
5.0 & 0.313 & 0.139 & 41.5 & 2.80 & 40.1 & 30.2 & 0.308 & 2.32 & 31.9 & 2.15\
4.5 & 0.337 & 0.150 & 40.3 & 3.07 & 36.1 & 19.8 & 0.331 & 2.52 & 27.2 & 1.84\
4.0 & 0.364 & 0.162 & 39.0 & 3.40 & 32.1 & 12.4 & 0.359 & 2.75 & 22.8 & 1.56\
3.5 & 0.398 & 0.177 & 37.4 & 3.82 & 28.1 & 7.27 & 0.394 & 3.04 & 18.7 & 1.29\
3.0 & 0.440 & 0.196 & 35.7 & 4.36 & 24.1 & 3.93 & 0.437 & 3.41 & 14.9 & 1.04\
2.5 & 0.495 & 0.220 & 33.8 & 5.12 & 20.1 & 1.90 & 0.494 & 3.89 & 11.4 & 0.81\
2.0 & 0.570 & 0.254 & 31.5 & 6.21 & 16.1 & 0.78 & 0.572 & 4.58 & 8.24 & 0.59\
1.75 & 0.620 & 0.276 & 30.2 & 6.97 & 14.0 & 0.46 & 0.623 & 5.03 & 6.80 & 0.50\
1.5 & 0.682 & 0.304 & 28.7 & 7.97 & 12.0 & 0.25 & 0.687 & 5.61 & 5.47 & 0.40\
1.25 & 0.763 & 0.339 & 27.0 & 9.33 & 10.0 & 0.121 & 0.770 & 6.38 & 4.24 & 0.32\
1.1 & 0.824 & 0.366 & 25.9 & 10.4 & 8.83 & 0.073 & 0.833 & 6.96 & 3.56 & 0.27\
1.0 & 0.872 & 0.388 & 25.1 & 11.3 & 8.03 & 0.050 & 0.883 & 7.43 & 3.12 & 0.24\
0.9 & 0.928 & 0.413 & 24.2 & 12.4 & 7.22 & 0.033 & 0.940 & 7.98 & 2.71 & 0.21\
0.8 & 0.993 & 0.442 & 23.3 & 13.7 & 6.42 & 0.021 & 1.008 & 8.63 & 2.31 & 0.18\
0.7 & 1.072 & 0.477 & 22.2 & 15.4 & 5.62 & 0.012 & 1.089 & 9.43 & 1.94 & 0.15\
0.6 & 1.169 & 0.520 & 21.0 & 17.6 & 4.82 & 0.007 & 1.188 & 10.4 & 1.59 & 0.13\
0.5 & 1.292 & 0.575 & 19.7 & 20.6 & 4.01 & 0.003 & 1.313 & 11.7 & 1.26 & 0.10\
0.4 & 1.455 & 0.647 & 18.1 & 24.9 & 3.21 & 0.001 & 1.479 & 13.5 & 0.95 & 0.08\
0.3 & 1.682 & 0.749 & 16.2 & 32.0 & 2.41 & 0.0005 & 1.710 & 16.0 & 0.67 & 0.06\
| |
If you are a book lover looking for somewhere to discover new books and quiet places to read, check out public libraries in the Bronx! This definitive guide to the Bronx public library system will give you all the finest libraries around. Find a place to study, use the internet, get homework help and more - all are available at Bronx libraries.
One of the Largest Libraries in the Bronx
Home to the largest collections in the Bronx, this library was the first certified green library in the NY Public Library system. Chic and modern, it offers 4 floors of reading materials of all types - reference, fiction, non-fiction, children's books, young adult books, educational texts and more. This multi-lingual library in the Bronx has an extensive collection of Latino and Puerto Rican cultural resources and books on the 4th floor.
The 5th floor is dedicated solely to career and educational advancement, while the concourse level has a large 150 seat auditorium for events and presentations, a technology training center and a literacy improvement center. This Bronx library is open 7 days a week and is fully wheelchair accessible. | https://www.funnewyork.com/bronx/category/bronx-public-library-in-the-bronx |
In this freeflow episode, the Universe has a powerful message to share about priorities. You’ll be guided through a simple exercise that reveals the folly of where you’re focusing your attention. If you’ve been trying to improve or expand in one area of your life, discover a new way of looking at things to accelerate your growth and expansion.
072:) Everything Is One: Connecting with the Element of Water
The most abundant and essential element of life is water. When we hear the word “water,” so many free flow associations come to mind. However, we usually overlook the most important one. This oversight is a reflection of how we fail to see ourselves and our true nature.
Tune in to this episode to reconnect with your true nature and understand that everything is one. There is no separation between the oceans and land. There is no separation between you and another. All is one.
So what’s the point of realizing and experiencing this? How about inner peace? And a reminder that world peace arises from inner peace.
071:) Connecting with the Element of Metal to Relax Into Your True Nature of Flow
Tune in to this episode to know when you should be persisting and when you are simply resisting. You’ll learn how to recognize when persistence is blocking you, and when it’s helping you face fears and assist your growth and expansion.
Discover how to balance the element of metal within your nature, so that metal’s characteristics of persistence and determination are in balance. Get unstuck by letting go of being stubborn, dogmatic, and digging in your heels. You’ll feel lighter, more flexible, and relax into your true nature with a focus on moving forward with ease and flow.
070:) Connecting with the Element of Wood to Let Go of Control and Surrender Into the Flow of Life
Did you know that tension in your body shows you where you’re trying to control your life? But control is just an illusion.
In this episode, we’ll connect with the natural element of wood for a special lesson from the trees about the power of surrender and going with the flow.
The element of wood is associated with the Liver, and the Liver is associated with the smooth flow of qi throughout your body. How easily you allow the flow of life is a reflection of how easily the qi flows through your body. Discover what it means to have tightness in your jaw, face, back, forehead, belly, shoulders and more.
Learn how going against your true nature manifests in the world you see, and how you can be in harmony with your true nature to bring forth harmony for humanity and nature.
Everything that we manifest and see in this world is an opportunity for growth, expansion and change, even when it seems darkest and heaviest. This is an opportunity for your light to shine most brightly.
Listen to this podcast to tune in to your true nature so that you save time and get more done with less effort.
067:) A Rather Cryptic Meditation
What do vanilla cake and cats have in common? Few will listen to this meditation because the description is deliberately cryptic. But for those who do, you will realize an incredible moment of self-awareness that can change your life forever.
066:) The Power of Spontaneity – Message from the Universe
In this episode, the Universe shares an important message on spontaneity. Discover how your mind is keeping you from expressing your creativity and playfulness, and what you can do to allow your inner child to play and explore.
Tune in to reconnect with your sense of adventure, freedom and excitement.
058:) How to Figure Out Whether You’re Procrastinating or You’re Taking a Much Needed Break
Have you ever been excited about a new direction and then found yourself procrastinating? Tune in to this episode to discover whether it’s resistance from your mind or if you need to take a much needed break.
When you follow your soul’s calling, you’ll encounter two types of pauses. One is beneficial and leads to effortless expansion, whereas the other limits your growth. The key to moving forward is to recognize whether your procrastination is a limiting pause or an expansive one.
By the end of this episode, you’ll know which type of pause you’re taking. Then you can give yourself a pat on the back, and move forward with ease and flow.
P.S. If you’re taking a “bad” pause, listen to Episode 28: Stop Procrastinating for a powerful activation to stop procrastinating and spring into action….
055:) Freeflow on Freedom – How Free Are You Really?
In this #freeflow episode, the Universe has a message to share with you. It’s a simple way to measure how free you are. Yes, I recognize the paradox in this statement!
Our minds like to measure things, make judgements and come to predetermined conclusions. But the more freedom you give yourself, the more power you have to grow, expand and change the world.
Tune in to this episode to see just how free you are! 🙂
054:) Lessons from the Hummingbirds
Nature is our greatest teacher, and one of the most humble instructors on human nature is none other than the tiny little hummingbird. These jerky little birds show us how the Universe sees us.
If you’ve ever wondered what it’s like to view humans through the eyes of the Universe, then tune in to this episode. You’ll gain a bird’s eye perspective on what drives us, and how you can release the mindset of scarcity to glide effortlessly above the mountains.
052:) That Magical Feeling – A Fun Way to Hear Your Soul
Would you like a fun, simple and delightful way to get guidance from the Universe? Your soul sends you messages all the time, but it’s hard to hear clearly when there’s too much thinking going on. You’ve heard the phrase “quiet your mind” so many times. But it’s not the easiest thing in the world to do, especially when you’ve got a lot going on.
When you want to change the world, you’ll need to get better and better at listening to your soul. Tune in to this week’s podcast episode for a magical way to quiet your mind so that you can create with the aligned energy of the Universe.
Learn how to become your own alchemist! | https://hollytse.com/category/flow/ |
PURPOSE: A convergence apparatus for a color cathode ray tube is provided to effectively intercept the geomagnetic field without hindering convergence control. CONSTITUTION: A holder(H) is mounted on the neck of a color cathode ray tube. A cylindrical sleeve(L) is formed at the periphery of the holder(H). Pairs(P2,P4,P6) support the cylindrical sleeve(L). A flange(F) is formed at one end of the cylindrical sleeve(L). The cylindrical sleeve(L) includes a support member(L1), a locker(K), and a fixing portion(L2). The support member(L1) rotatably supports rings(R) and is screwed to the locker(K). Each of two shield plates(S1,S2) is connected to the fixing portion(L2) of the sleeve(L). Each of the two shield plates(S1,S2) is formed of a high-permeability metal material, such as an alloy of nickel and iron. | 
Question about Panasonic PV-GS150 Mini DV Digital Camcorder
I live in Ireland and want to purchase a DVD recorder that I can convert my NTSC VHS videos to DVD PAL. I don't know what to purchase and can't find anything locally. I am in urgent need of this as I have terminal cancer and I want to convert all my sons tapes from when he was a baby to give to him. Any advice would be appreciated.
Posted by Jacqueline Cronin on
If you do not have an adaptor to play the tapes on a standard VHS video player, you can use the camera as the player by connecting the AV out to a DVD recorder. There are hundreds of models on the market, and do not worry about whether it is NTSC or PAL, because 99% of DVD recorders accept both.
Posted on May 24, 2007
| https://www.fixya.com/support/t149349-convert_ntsc_vhs_video_tapes_dvd_pal
Polish mathematician Wacław Sierpiński (1882-1969) described the 2D geometric figure known as the Sierpiński triangle as part of his work on set theory in 1915. The triangle, which is really an infinite collection of points, can be constructed by the following algorithm:
-
The initial shape is a solid triangle.
-
Shrink the current shape to half its dimensions (both height and width), and make two more copies of it (giving three copies total).
-
Arrange the three copies so that each touches the two others at their corners. Set the current shape to be the union of these three.
-
Repeat from step 2.
Here is an illustration of the first few iterations:
[Illustration omitted: the shape after iterations $0$, $1$, $2$, $3$, and $4$.]
As the iterations go to infinity, this process creates an infinite number of connected points. However, consider the case of a finite number of iterations. If the initial triangle has a circumference of $3$, what is the sum of the circumferences of all (black) triangles at a given iteration? Write a program to find out not the exact circumference, but the number of decimal digits required to represent its integer portion. That is, find the number of decimal digits required to represent the largest integer that is at most as large as the circumference.
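(A brief aside, not part of the official problem statement: each iteration triples the number of black triangles while halving every perimeter, so the total circumference after $n$ iterations is $3\cdot(3/2)^n = 3^{n+1}/2^{n}$, and the answer is the number of decimal digits of $\lfloor 3^{n+1}/2^{n}\rfloor$. A minimal Python sketch follows; the `Case i:` output prefix is only an assumption, since the sample output is not reproduced in this excerpt.)

```python
import sys

# Total circumference after n iterations: 3 * (3/2)**n = 3**(n+1) / 2**n.
# Python's arbitrary-precision integers give the exact floor even for n = 10000.
for case, line in enumerate(sys.stdin, 1):
    line = line.strip()
    if not line:
        continue
    n = int(line)
    integer_part = 3 ** (n + 1) // 2 ** n   # floor of the circumference
    digits = len(str(integer_part))         # decimal digits of the integer part
    print(f"Case {case}: {digits}")         # output prefix is an assumption
```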
Input
Each test case is a line containing a non-negative integer $0 \leq n \leq 10\, 000$ indicating the number of iterations.
Output
For each case, display the case number followed by the number of decimal digits required to represent the integer portion of the circumference for the given number of iterations. Follow the format of the sample output. | https://open.kattis.com/problems/triangle |
In theory, the estimated jamming effect is a theoretical value based on the assumption of an ideal, obstacle-free (free-space) environment. Such estimates can only indicate the order of magnitude; the real jamming effect must be verified by field testing.
Many factors can easily influence radio-frequency equipment: for example, altitude, season, the density and height of buildings and surrounding vegetation, and weather conditions such as rain or sunshine. Because the working environment strongly affects RF signal propagation, it is hard to give a standard figure for accurate coverage; it depends on the actual transmission and working environment. Alternatively, the application engineer can refer to existing coverage models (like Ericsson's suburban Chicago model).
In VHF and UHF systems, jamming frequencies usually serve as interference frequencies in IED-jamming projects. Even so, certain conditions must be satisfied: the jammer takes effect only when the distances among the controller, the jammer, and the IED meet certain requirements. We are not aware of any better solution that avoids these premises.
LD-Ld >6
|Remote||Jammer 1||Jammer 2||Jammer 3|
|12.5KHz, 1W||1MHz, 100W(1.25W)||10MHz, 100W(125mW)||100MHz, 100W(12.5mW)|
|Distance (m)||Ld (m)||Ld (m)||Ld (m)|
|10||5||1.5||0.5|
|20||10||3.2||1|
|50||25||8||2.5|
|100||50||16||5|
|200||100||32||10|
|500||250||79||25|
|1000||500||158||50|
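For reference, the distances in the table above can be roughly reproduced with a simple free-space sketch. This is my own simplification, not the page's stated method — it assumes received power falls as $1/d^2$ and reads "LD − Ld > 6" as a 6 dB jamming-to-signal margin on the in-band jammer power.

```python
import math

# My own simplified free-space sketch (the page does not state its exact model):
# received power falls as 1/d**2, and jamming is assumed effective when the
# in-band jammer power at the receiver exceeds the remote-control signal by
# margin_db (reading "LD - Ld > 6" as a 6 dB margin).
def max_jamming_distance(D_m, p_remote_w, bw_remote_hz,
                         p_jammer_w, bw_jammer_hz, margin_db=6.0):
    pj_inband = p_jammer_w * min(1.0, bw_remote_hz / bw_jammer_hz)
    return D_m * math.sqrt(pj_inband / p_remote_w) * 10 ** (-margin_db / 20)

# Example: 100 W jammer spread over 10 MHz vs. a 1 W, 12.5 kHz remote at 100 m
print(round(max_jamming_distance(100, 1, 12.5e3, 100, 10e6), 1))  # ~17.7 m
```

With these assumptions the example lands near the tabulated 16 m for the 10 MHz jammer at 100 m, but it is only an order-of-magnitude check, consistent with the caveats above.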
We take field-intensity suppression as an example to explain the difficulty of jamming an off-network walkie-talkie, as follows:
|Remote||Jammer 1||Jammer 2||Jammer 3|
|12.5KHz, 1W||1MHz, 100W(1.25W)||10MHz, 100W(125mW)||100MHz, 100W(12.5mW)|
|Distance (m)||Ld (m)||Ld (m)||Ld (m)|
|10||1||0.4||0.2|
|20||2||0.7||0.4|
|50||6||1.8||1.1|
|100||11||3.6||2.2|
|200||22||7.1||4.4|
|500||56||17.9||11.1|
|1000||111||35.7||22.2|
The working mode of an in-network walkie-talkie is similar to that of a public cellular mobile system.
Interference upon a public cellular telecommunication system
Please refer to DCS1800 as an example. 3G and 4G systems at 1800 MHz adopt a spread-spectrum, anti-jamming air interface, which makes the theoretical estimation more complicated; as a result, their anti-jamming performance is better than that of the DCS 1800 MHz system over the same frequency range and bandwidth.
Jamming a UAV is more complicated, as it mainly depends on the following factors:
Clarify the main purpose: general interference (jamming the video link, jamming the control link, or forcing the UAV down)? Or capture (capturing a UAV requires a totally different device)?
A typical test scenario is one in which the distance between the controller and the UAV is much greater than the distance between the jammer and the UAV. | https://www.rf-defence.com/news/jamming-effects-estimation-for-typical-applications.html
213 F.3d 538 (10th Cir. 2000)
DEBRA A. SHAW; Sued as UNITED STATES ex rel, Plaintiff-Appellee,v.AAA ENGINEERING & DRAFTING, INC., a Utah corporation; WILBUR L. BRAKHAGE, Supervisor; JANICE KELLIN, Defendants-Appellants.DEBRA A. SHAW, Plaintiff-Appellee,v.AAA ENGINEERING & DRAFTING, INC., a Utah corporation; WILBUR L. BRAKHAGE, Supervisor; JANICE KELLIN, Defendants-Appellants.
Nos. 98-6172, 98-6173, 98-6362
UNITED STATES COURT OF APPEALS, TENTH CIRCUIT
May 18, 2000
Appeal from the United States District Court for the W. District of Oklahoma (D.C. No. 95-CV-950-M)(D.C. No. 95-CV-951-M)[Copyrighted Material Omitted]
John B. Hayes, of Hayes & Magrini, Oklahoma City, Oklahoma, for Defendants-Appellants.
Marilyn D. Barringer, Oklahoma City, Oklahoma (Micheal C. Salem, Norman, Oklahoma, with her on the brief) for Plaintiffs-Appellees.
Before HENRY and MURPHY, Circuit Judges, and KIMBALL, District Judge.*
MURPHY, Circuit Judge.
I. INTRODUCTION
1
Defendants AAA Engineering & Drafting, Inc., Wilbur L. Brakhage, and Janice Keelin (collectively "Defendants") appeal from an Amended Order and Judgment on Attorneys' Fees and Litigation Expenses. This court concludes that the district court did not abuse its discretion in its award of attorney's fees, expenses, and costs, and that the award of attorney's fees for post-judgment enforcement and collection activities was proper under the False Claims Act ("FCA"). This court exercises jurisdiction pursuant to 28 U.S.C. 1291 and affirms.
II. FACTS AND PROCEDURAL HISTORY
2
In June 1997, the district court entered judgment following a jury verdict for plaintiff Debra Shaw in a consolidated FCA qui tam and wrongful discharge action and pendant state law wrongful discharge action. Shaw then moved for an award of attorney's fees, litigation expenses, and court costs on the FCA qui tam and wrongful discharge actions (collectively "fees and expenses"). Defendants agreed the FCA qui tam and wrongful discharge provisions both authorize the award of reasonable fees and expenses, but they disagreed as to the proper amount to be awarded.
3
Defendants appealed the judgment1 and applied for an order staying execution on the judgment. In addition, Defendants asked that the stay on the judgment be entered immediately but that the amount of the supersedeas bond2 be set only after entry of judgment on fees and costs. Shaw opposed a stay without a bond and asked the district court to deny Defendants' application to stay execution and to immediately set the amount of the bond. By October 1997, some three months later, the district court had not yet ruled on the motion for fees and expenses, and Defendants had not filed a supersedeas bond. Shaw commenced execution on the judgment and moved for a Writ of Garnishment. Defendants then moved for the immediate approval of a supersedeas bond and to quash the garnishment proceeding. In February 1998, the district court granted Defendants' motion for approval of the supersedeas bond and quashed the Writ of Garnishment.
4
In March 1998, the district court held a hearing to determine the proper amount of fees and expenses to be awarded. Shaw presented two expert witnesses, both of whom testified $175 per hour was a reasonable hourly rate for a plaintiff's attorney with the experience of Shaw's counsel in federal employment litigation. One witness also testified that both FCA qui tam and wrongful termination actions involve complex issues requiring substantial expenditures of time. Shaw's counsel also testified that this case was particularly difficult for her, in part because of the need for information and assistance from the government and because of the extensive document preparation involved. Defendants cross-examined each of these witnesses. Defendants also called an expert witness who testified that reasonable rates for the defense bar in Oklahoma were $100 to $125 per hour. This witness had not, however, examined the specific facts of the underlying case and was thus unable to address its complexity.
5
After the hearing, the district court entered judgment in favor of Shaw for fees and expenses.3 Shaw then moved to amend this judgment to include fees and expenses incurred after the date of her initial application for fees and expenses but before the district court's hearing on the application.4 The additional fees and expenses sought included fees for time spent on post-judgment collection activities. Defendants opposed the portion of the request for additional fees and expenses based on post-judgment collection activities, but they did not request a second evidentiary hearing. The district court, without conducting a second hearing, granted Shaw's motion. Noting that Defendants contested the award of additional fees but did not contest the reasonableness of the amount requested, the district court amended the judgment to include all the additional fees and expenses claimed by Shaw. Under the amended judgment, Shaw was awarded $87,829.00 in attorney's fees, $2267.34 in costs, and $7339.40 in expenses, for a total of $97,435.74 plus interest in the qui tam action, as well as $74,768.75 in attorney's fees, $2267.33 in costs, and $2436.95 in expenses, for a total of $79,473.03 plus interest in the wrongful discharge action. Defendants appeal the amended fees and expenses judgment.
6
The FCA provides that a qui tam relator who successfully brings an FCA action shall receive an amount between twenty-five and thirty percent of the proceeds from the action and "an amount for reasonable expenses which the court finds to have been necessarily incurred, plus reasonable attorneys' fees and costs. All such expenses, fees, and costs shall be awarded against the defendant." 31 U.S.C. 3730(d)(2). Under the FCA wrongful termination provisions, a plaintiff is entitled to "all relief necessary to make the employee whole. Such relief shall include . . . litigation costs and reasonable attorneys' fees." 31 U.S.C. 3730(h).
III. DISCUSSION
7
A. Reasonableness of Attorney's Hours and Rate
1) Standard of Review
8
Defendants argue the district court granted Shaw excessive attorney's fees. Specifically, they argue the district court should have 1) required a more detailed explanation by Shaw's counsel as to the number of hours spent on specific tasks; 2) given more weight to the disparity between the hours claimed by Shaw's counsel and the hours billed by Defendants' counsel; and 3) allowed Defendants to cross-examine Shaw's counsel concerning her fee agreement with Shaw.
9
This court reviews the district court's determination of the amount of attorney's fees to be awarded for an abuse of discretion. See Wolfe v. New Mexico Dep't of Human Servs., 28 F.3d 1056, 1058-59 (10th Cir. 1994). A court abuses its discretion when it bases its decision on an erroneous conclusion of law or when there is no rational basis in evidence for its ruling. See Mann v. Reynolds, 46 F.3d 1055, 1062 (10th Cir. 1995).
2) Number of Hours Spent on Specific Tasks
10
Defendants contend the district court should have required Shaw's counsel to give a detailed analysis of her reasons for the amount of time she spent on specific tasks. For example, Shaw's counsel recorded thirteen hours for preparing suggested voir dire questions in the qui tam case, six hours for writing two Freedom of Information Act letters, approximately forty hours for responding to a summary judgment motion, and eight hours for drafting a complaint.
11
Shaw responds that she did present evidence explaining much of this time. For example, her expert witness testified that in preparing voir dire questions for a qui tam case, he would have to spend more time than in a typical employment law case, and that he was not surprised 13 hours were spent preparing voir dire. He also testified that six hours spent writing Freedom of Information Act letters was even less surprising. In addition, Shaw's counsel also offered some explanation for the time she spent drafting the qui tam complaint, noting the FCA has specific requirements which she had to follow. Additionally, Defendants' counsel cross-examined Shaw's attorney, yet only questioned her briefly about the time spent on voir dire and not at all about time spent on the other matters now raised on appeal. Defendants themselves did not call any witnesses to testify that the amount of time spent on these tasks was excessive.
12
Shaw is entitled to "reasonable attorneys' fees" under both the qui tam and wrongful discharge provisions of the FCA. 31 U.S.C. 3730(d)(2), (h). This court has previously noted that when examining an attorney's fee claim, the district court should examine the hours spent on each task to determine the reasonableness of the hours reported. See Ramos v. Lamm, 713 F.2d 546, 554 (10th Cir. 1983) (reviewing award of attorney's fees under 42 U.S.C. 1988). The district court, however, does not have to justify every hour allowed in awarding attorney's fees under federal statutes. See Malloy v. Monahan, 73 F.3d 1012, 1018 (10th Cir. 1996). "[W]hat is reasonable in a particular case can depend upon factors such as the complexity of the case, the number of reasonable strategies pursued, and the responses necessitated by the maneuvering of the other side." Ramos, 713 F.2d at 554. The district court's superior perspective on the presence or absence of these particular factors in the underlying merits litigation counsels deference to the district court's decision as to whether the number of hours claimed is reasonable. See Hensley v. Eckerhart, 461 U.S. 424, 437 (1983).
13
In this case, the district court reviewed Shaw's counsel's records and heard testimony that the posture and complexity of this case required substantial expenditure of time. The Defendants failed to meaningfully question this testimony. This court concludes the district court had a "sufficient basis for its determination that the claimed hours were reasonable," and there was no abuse of discretion in the award of attorney's fees for the hours claimed. Malloy, 73 F.3d at 1018.
3) Comparing Counsels' Time
14
In her initial fee request, Shaw's counsel claimed approximately 804 hours for prosecuting the underlying merits lawsuit, while Defendants' counsel billed approximately 425 hours for defending the action. Defendants argue the district court should have given more weight to this time difference, and that testimony by Shaw's counsel was not adequate to explain the amount of time which she recorded.5
15
Evidence of the hours expended by opposing counsel may be helpful in determining whether time expended on a case was reasonable, but the opponent's time is not an "immutable yardstick of reasonableness." Robinson v. City of Edmond, 160 F.3d 1275, 1284 (10th Cir. 1998) (reviewing attorney's fees award under 42 U.S.C. 1988). The district court had first hand knowledge of the complexity of the case and the voluminous number of documents Shaw, who had the burden of proof, presented at trial. See Hensley, 461 U.S. at 437. The district court did not abuse its discretion in finding counsel's hours reasonable in spite of the contrast with defense counsel's time.
4) Hourly Rate
16
The district court received into evidence the affidavit of Shaw's counsel, which indicated that Shaw's agreement with her counsel was for a contingency fee and that Shaw paid only costs and expenses. The affidavit thereafter refers to an hourly rate of $150. It would appear, however, that Shaw's counsel was explaining that $150/hour was her normal billing rate at the time of the fee arrangement, some two years antedating the affidavit and claim premised on a $175/hour rate.
17
After the admission of this affidavit, Defendants asked Shaw's counsel in cross-examination, "[Y]ou say in your affidavit that you had a contract with your client. What was that, please, ma'am, and did you bring it to court today?" Shaw objected to this line of questioning, arguing that the fee agreement was irrelevant to a determination of a reasonable fee. The district court sustained this objection, though without specifically stating the reasons for its ruling.
18
On appeal, Defendants argue that if the trial court had allowed them to address the terms of the representation agreement, this could have led to a revelation that the agreement reflected a rate of $150/hour. This, Defendants argue, in turn might have convinced the district court to apply an hourly rate less than the $175/hour on which it ultimately settled. The affidavit of Shaw's counsel, however, established that her regular hourly rate was $150/hour, the very rate Defendants suggest is the reason cross-examination on the fee arrangement should have been allowed. Therefore, any error by the district court in sustaining Shaw's objection was harmless. See United States v. Rothbart, 723 F.2d 752, 755 (10th Cir. 1983); Fed. R. Civ. P. 61 ("No error in either the admission or the exclusion of evidence . . . is a ground for . . . vacating, modifying, or otherwise disturbing a judgment or order, unless refusal to take such action appears to the court inconsistent with substantial justice.").
B. The Amended Judgment
1) Standard of Review
19
In the Order granting Shaw's Motion to Alter or Amend the Judgment on Attorneys' Fees, Costs, and Litigation Expenses ("motion to amend"), a portion of the fees awarded by the district court was for time spent in post-judgment collection proceedings. Shaw's counsel's hours included time spent objecting to Defendants' motion to stay execution and preparing for execution and garnishment proceedings. Defendants argue on appeal the district court erred 1) in awarding Shaw attorney's fees for this time; and 2) in granting Shaw's motion to amend without conducting a second evidentiary hearing. This court reviews the district court's grant of attorney's fees for an abuse of discretion. See Wolfe, 28 F.3d at 1058-59. Factual resolutions are reviewed for clear error; the statutory interpretation and legal analysis supporting the district court's decision are reviewed de novo. See id.
20
2) Award of Fees for Post-judgment Collection Activities
21
There is no precedent in this circuit nor authority from other circuits resolving whether attorney's fees can be awarded under the FCA for post-judgment collection activities. Cases addressing claims for post-judgment fee awards under the Civil Rights Attorney's Fees Awards Act, 42 U.S.C. 1988(b), and the citizen suit attorney's fee provision of the Clean Air Act, id. 7604(d), however, are instructive. See, e.g., Pennsylvania v. Delaware Valley Citizens' Council for Clean Air, 478 U.S. 546, 558-60 (1986).
22
The Civil Rights Attorney's Fees Awards Act states "the court, in its discretion, may allow . . . a reasonable attorney's fee." 42 U.S.C. 1988(b). The Clean Air Act citizen suit attorney's fee provision provides the court "may award costs of litigation (including reasonable attorney and expert witness fees) to any party, whenever the court determines such award is appropriate." Id. 7604(d). Similarly, the FCA qui tam attorney's fees provision states that one who successfully brings an FCA action "shall . . . receive . . . reasonable attorneys' fees and costs." 31 U.S.C. 3730(d)(2) (emphasis added). The FCA wrongful discharge attorney's fees provision likewise states that a plaintiff "shall be entitled to all relief necessary to make the employee whole . . . [including] reasonable attorneys' fees." 31 U.S.C. 3730(h) (emphasis added). All four provisions share the requirement that attorney's fees awards must be "reasonable." 31 U.S.C. 3730(d)(2), (h); 42 U.S.C. 1988(b); id. 7604(d). The only significant difference between the FCA and the attorney's fees provisions in the other statutes is that the FCA provisions are mandatory on their face. See 31 U.S.C. 3730(d)(2), (h).
23
Courts interpreting the Civil Rights Attorney's Fees Awards Act and the citizen suit attorney's fee provision of the Clean Air Act have consistently allowed attorney's fees for post-judgment enforcement and collection activities. See, e.g., Delaware Valley, 478 U.S. at 558-60; Wolfe, 28 F.3d at 1059.6 In Delaware Valley, the court noted that both of these statutory provisions were enacted to encourage citizen enforcement of important federal policies. See 478 U.S. at 560. The FCA is also intended to encourage citizen enforcement of important federal policies. Two of the FCA's main goals are to enhance the government's ability to recover losses resulting from fraud and to encourage individuals who know of government fraud to come forward with that information. See S. Rep. 99-345, at 1, 6 (1986), reprinted in 1986 U.S.C.C.A.N. 5266, 5266, 5271; United States ex rel. Precision Co. v. Koch Indus., 971 F.2d 548, 552 (10th Cir. 1992). The FCA attorney's fees provisions are central to the implementation of these policies. As noted in the FCA's legislative history, "[u]navailability of attorneys fees inhibits and precludes many private individuals, as well as their attorneys, from bringing civil fraud suits." S. Rep. 99-345, at 29 (1986), reprinted in 1986 U.S.C.C.A.N. 5266, 5294.
24
We see no reason why the attorney's fees provisions of the FCA should be applied differently than those of the civil rights laws or the Clean Air Act. This court thus concludes that both FCA attorney fee provisions allow the award of attorney's fees for time spent in post-judgment collection activities. See 31 U.S.C. 3730(d)(2), (h).
25
3) Failure to Hold a Second Evidentiary Hearing
26
Defendants argue the district court erred when it did not conduct an evidentiary hearing on Shaw's motion to amend. Defendants, however, did not request such an evidentiary hearing. See Robinson, 160 F.3d at 1286 ("Ordinarily, a district court does not abuse its discretion in deciding not to hold an evidentiary hearing when no such request is ever made.") Additionally, this court notes that in Defendants' response to Shaw's motion to amend, Defendants made no factual challenges which would have necessitated an evidentiary hearing. Defendants instead asserted two purely legal issues: 1) that the district court had resolved all the matters presented in the motion to amend at the original March 13, 1998 hearing, and these resolutions became the law of the case and 2) the FCA does not authorize an award of attorney's fees for post-judgment collection activities. An evidentiary hearing, however, was unnecessary to resolve these legal issues. Defendants certainly did not assert below, as they do on appeal, that an evidentiary hearing was necessary to distinguish time spent on post-judgment collection activities from time which was not contested by Defendants or to allow Defendants the opportunity to cross-examine Shaw's counsel as to the reasonableness of the additional hours claimed. The district court therefore did not abuse its discretion in not conducting the hearing. See id.; King of the Mountain Sports, Inc. v. Chrysler Corp., 185 F.3d 1084, 1091 n.2 (10th Cir. 1999).
IV. CONCLUSION
27
For the reasons stated above, this court AFFIRMS the district court's Amended Order and Judgment on Attorneys' Fees and Litigation Expenses.
NOTES:
*
Honorable Dale A. Kimball, District Judge, United States District Court for the District of Utah, sitting by designation.
1
In the related merits appeal, also decided today, this court affirmed the FCA qui tam and wrongful discharge portions of the underlying judgment. Although Shaw had prevailed on the state law claim below, that portion of the judgment was reversed on appeal. See Shaw v. AAA Engineering and Drafting, Inc., 213 F.3d. 519, 523, 524 (10th Cir. 2000) ("Shaw I").
2
"When an appeal is taken the appellant by giving a supersedeas bond may obtain a stay subject to the exceptions contained in subdivision (a) of this rule. The bond may be given at or after the time of filing the notice of appeal . . . . The stay is effective when the supersedeas bond is approved by the court." Fed. R. Civ. P. 62(d).
3
This award was for $76,882.75 in attorney's fees, $6653.90 in expenses, and $2267.34 in costs, for a total of $85,803.99 plus interest in the qui tam action, as well as $63,822.50 in attorney's fees, $1751.45 in expenses, and $2267.33 in costs, for a total of $67,841.28 plus interest in the wrongful discharge action.
4
Shaw sought an additional $21,892.50 in attorney's fees and $1,371.00 in litigation expenses, for a total of $23,263.50.
5
Insofar as Defendants are arguing the district court did not compare the time recorded by the parties at all, this is incorrect. Defense counsel's time records were admitted at the March 1998 hearing.
6
In addition, the courts have not limited post-judgment attorney's fees awards under the Civil Rights Attorney's Fees Awards Act to time spent securing non-monetary forms of relief. See Balark v. Curtin, 655 F.2d 798, 802-03 (7th Cir. 1981). The Balark court upheld the award of fees for time spent litigating collection procedures for a civil rights judgment, stating "[t]he compensatory goals of the civil rights laws would thus be undermined if fees were not also available when defendants oppose the collection of civil rights judgments." 655 F.2d at 803; see also Powell v. Georgia-Pacific Corp., 119 F.3d 703, 707 (8th Cir. 1997).
| |
Q:
GCD / LCM Polyglots!
Your challenge is to make a program or function that outputs the GCD of its inputs in one language and the LCM of its inputs in another. Builtins for GCD or LCM (I'm looking at you, Mathematica) are allowed but not encouraged. There will be 2 inputs, which will always be positive integers, never greater than 1000.
Test Cases
Each line is one test case in the format x y => GCD(x,y) LCM(x,y):
1 1 => 1 1
1 2 => 1 2
4 1 => 1 4
3 4 => 1 12
7 5 => 1 35
18 15 => 3 90
23 23 => 23 23
999 1000 => 1 999000
1000 999 => 1 999000
1000 1000 => 1000 1000
See this pastebin for all possible inputs with 0 < x, y < 31. Note that different versions of the same languages count as different languages.
A:
C / C++, 79 78 73 bytes
Thanks to @ETHproductions for saving a byte!
int f(int a,int b){int c,d=a*b;for(;a;b=c)c=a,a=b%a;auto e=.5;c=e?d/b:b;}
C calculates the GCD: Try it online!
C++ calculates the LCM: Try it online!
In C, auto e=.5 declares an integer variable with the auto storage class (which is the default), which is then initialized to 0, whereas in C++11 it declares a double, which is initialized to 0.5. So the variable's value will be truthy in C++ and falsy in C.
The function calculates the GCD with Euclid's algorithm, and the LCM by dividing the product of a and b by the GCD.
Omitting the return statement works at least on GCC. The 78 byte solution below should work with any compiler:
int f(int a,int b){int c,d=a*b;for(;a;b=c)c=a,a=b%a;auto e=.5;return e?d/b:b;}
A:
Jelly / Actually, 2 bytes
00000000: 1e 67 .g
This is a hexdump (xxd) of the submitted program. It cannot be tested online because TIO doesn't support the CP437 encoding. @Mego was kind enough to verify that this works on Cygwin, which implements CP437 as intended for Actually.
Jelly: GCD
Jelly uses the Jelly code page, so it sees the following characters.
œg
Try it online!
How it works
œ is an incomplete token and thus ignored. g is the GCD built-in.
Actually: LCM
Actually uses CP 437, so it sees the following characters.
▲g
Try it online!
How it works
▲ is the LCM builtin. Since g (GCD) requires two integer inputs, it isn't executed.
A:
Actually / Jelly, 3 bytes
00000000: 11 1c 67 ..g
This is a hexdump (xxd) of the submitted program.
Try it online!1
Actually: GCD
(implicit) Read a and b from STDIN and push them on the stack.
◄ Unassigned. Does nothing.
∟ Unassigned. Does nothing.
g Pop a and b and push gcd(a,b).
(implicit) Write the result to STDOUT.
Jelly: LCM
×÷g Main link. Left argument: a. Right argument: b
× Multiply; yield ab.
g GCD; yield gcd(a,b).
÷ Division; yield ab/gcd(a,b) = lcm(a,b).
Note: The formula gcd(a,b)lcm(a,b) = ab holds because a and b are positive.
1 TIO actually uses UTF-8 for Actually. Since both the ASCII characters and the CP437 characters 0x11 and 0x1c are unassigned, the program works nevertheless.
|