The twin nerve tracts of the antenna of the grasshopper Schistocerca gregaria are established early in embryogenesis by sibling pairs of pioneers which delaminate from the epithelium into the lumen at the antennal tip. These cells can be uniquely identified via their co-expression of the neuronal labels horseradish peroxidase and the lipocalin Lazarillo. The apical pioneers direct axons toward the antennal base where they encounter guidepost-like cells called base pioneers which transiently express the same molecular labels as the apical pioneers. To what extent the pioneer growth cones then progress into the brain neuropil proper, and what their targets there might be, has remained unclear. In this study, we show that the apical antennal pioneers project centrally beyond the antennal base first into the deutocerebral, and then into the protocerebral brain neuropils. In the protocerebrum, we identify their target circuitry as being identified Lazarillo-positive cells which themselves pioneer the primary axon scaffold of the brain. The apical and base antennal pioneers therefore form part of a molecularly contiguous pathway from the periphery to an identified central circuit of the embryonic grasshopper brain. | https://epub.ub.uni-muenchen.de/61091/ |
A new analysis of the technological characteristics of devices used by Internet users to access digital media shows that in the period from June 2019 to June 2020, 74% of all access to digital media was conducted via mobile devices. The average monthly reach of digital media increased by 32% in the analysed period.
iPROM’s analysis of last year shows that the trend of growth in access to digital media via mobile devices is slowing down, but this year’s data show that the trend is strengthening again. Over the analysed period, mobile access to digital media increased by 7%, while computer access decreased proportionally.
»Since 2015, we have been monitoring the growth trend of mobile access to digital content, which in recent years has strongly indicated that desktops and laptops are stepping aside to give space to mobile devices as the first and main point of contact with various content in digital media. Despite the fact that in last year's analysis we noticed a slowdown in the share growth of mobile access, this year the growth trend has strengthened again. It seems that the vast majority of web users regularly monitor content on digital media and perform a quick review of it mostly via mobile devices, but they still use a computer for in-depth reading, and even more intensively now«, comments Tomaž Tomšič, head of iPROM Labs.
Computers retain first place for in-depth review of digital media content
Slovenian digital media users primarily use desktop computers and laptops for in-depth content review. iPROM's analysis records three times as much activity among users who access digital content via computers as among those who access it via mobile devices. Despite the predominance of mobile access, the analysis shows that almost half of digital media page views are performed via computers.
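The two findings above, a 74% mobile share of accesses yet computers accounting for almost half of page views, are consistent with the roughly three-to-one activity ratio. Here is a minimal back-of-the-envelope sketch in Python; the 74/26 visit split comes from the figures above, while the per-visit page counts are purely illustrative assumptions, not iPROM data.

```python
# Illustrative reconciliation of the figures quoted above.
# Assumptions (not from iPROM's raw data): every access is either mobile
# or computer, and a computer visit generates about three times as many
# page views as a mobile visit.

mobile_visit_share = 0.74     # share of accesses made from mobile devices
computer_visit_share = 0.26   # remaining accesses made from computers

pages_per_mobile_visit = 1.0      # normalised baseline (assumed)
pages_per_computer_visit = 3.0    # ~3x the activity per visit (assumed)

mobile_pages = mobile_visit_share * pages_per_mobile_visit
computer_pages = computer_visit_share * pages_per_computer_visit

computer_page_view_share = computer_pages / (mobile_pages + computer_pages)
print(f"Computer share of page views: {computer_page_view_share:.0%}")
# -> roughly 51%, i.e. "almost half" of page views despite only 26% of visits
```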
The average monthly reach of Slovenian digital media grew by a third
In this year’s analysis, a 32% increase in the average monthly reach of Slovenian digital media was recorded compared to the previous year.
»The data obtained once again confirm that the trend of digital media content consumption is growing. In last year's analysis, Slovenian digital media reach increased by more than 10% annually, but this year the increase was significantly higher, which we attribute to changed habits of digital media content consumption during social isolation«, adds Tomaž Tomšič, head of iPROM Labs.
The Windows 10 version continues to cement its leading share
Operating systems usage analysis showed that Microsoft Windows with Windows 10 is the predominant operating system. This version accounts for a 63% share, followed by the Windows 7 version with 20%. The rest of the operating system usage pie chart is shared by Mac OS X with 7%, Windows 8.1 with 6%, and other operating systems such as Linux and older versions of Windows.
Methodology used
The data on which iPROM's analysis of the technological characteristics of devices used by Slovenian Internet users to access digital content is based are captured and analysed every year over the same period (in this analysis, from June 2019 to June 2020). iPROM records and processes the acquired data on user behaviour through the iPROM Cloud technology platform and has been preparing the analysis since 2004. This year's analysis was performed on a sample of 1.6 million Slovenian Internet users and, in accordance with the IAB standard, covered 18.2 billion measurement requests.
About iPROM
iPROM specializes in planning and running advertising campaigns in digital media. The tools we create are used across the industry by advertisers, direct marketers and web publishers and help them plan, execute and analyze their digital marketing activities. They are designed to increase the effectiveness of these activities as well as to make the complex world of web advertising a little simpler for our customers, justifying their investments with outstanding returns.
In a data-driven world, ad-serving efficiency is very important. Using media properties, data analytics, in-depth research and the smartest technology available, we enable marketers to deliver the right message to the right person at the right time, every time. With our tools for ultra-precise targeted audience segmentation, effective planning and proficient execution, companies are equipped to build better brands, more successful operations and stronger customer relationships. | https://iprom.eu/in-slovenia-74-of-all-access-to-digital-media-via-mobile-devices/ |
The beaches of Israel are rich in both beauty and cultural depth. In fact, these beaches are a relished location for tourists and locals alike. The TLV88 Hotel is known for its unique location, luxury, style and cultural richness, making it a superb selection for travelers.
Beach hotels give you exclusive access to festive activities and events in the surrounding city. To better your travelling experience, acquaint yourself with the entertainment and the wonderful people of Tel Aviv. Immersing yourself in the social atmosphere of Tel Aviv can leave you with lifelong memories you can recount in later years. Do not settle for an ordinary hotel stay.
Tel Aviv beach hotels are situated in a very strategic location. You can dine with friends, family or a significant other in a nearby restaurant. Within walking distance, you can access everything you will need during your stay, including the movie theater, trails, parks, and much more. You can experience the arts of Tel Aviv by visiting events and museums. Furthermore, you can keep your physical fitness intact by visiting water sports or exercise facilities. | http://www.tlv88.com/tel-aviv-articles/Entertainment-Culture-Bars-and-Fun |
The Sustainable Agriculture Research and Education (SARE) program provides grant funding for farmer-driven research and also plays a crucial role in sharing the results of that research with other farmers across the country. SARE has been supporting sustainable agriculture research for over 30 years. It is the only regionally based, farmer-driven, and outcome-oriented competitive research program that involves farmers and ranchers directly as the primary investigators and/or cooperators in research and education projects. SARE is also the only U.S. Department of Agriculture (USDA) competitive grants research program that focuses solely on sustainable agriculture.
Learn More About SARE:
SARE is a competitive research and outreach program that advances sustainable agriculture across the country. Successful SARE grantees are producers, researchers, nonprofit organizations, and educators engaged in projects that simultaneously address the three Ps of sustainability:
SARE is administered through USDA’s National Institute of Food and Agriculture (NIFA), and is run by four regional councils. These regional councils are made up of producers, researchers, educators, and government representatives who set SARE regional priorities and oversee grant programs. Technical reviewers also help the councils evaluate project proposals.
In addition to research, SARE also conducts education and extension programs in an effort to increase knowledge about – and help farmers and ranchers adopt – sustainable farming practices. SARE Outreach produces and distributes practical information based on the program’s more than 30 years of research results.
SARE’s four regional offices administer five primary grant programs:
Universities, nonprofit organizations, government agencies, and individual agricultural producers are eligible to apply for SARE grants. There are different eligibility requirements for each category of grants listed above. Refer to the Request for Applications developed by your Regional SARE program for more information on eligibility.
Since 1988, SARE has invested a total of $251 million in more than 6,300 initiatives, directing more than $21 million in research funds directly to farmers and ranchers. Of SARE funds, $20.6 million has gone to projects on conservation tillage, $16.4 million to crop rotation projects, $41.4 million to grazing management and economics, and $31.9 million to research addressing soil health. Additionally, grants for producers scaling up local and regional food systems are being boosted by SARE-funded educators who create tools and training programs to help producers develop profitable marketing strategies. Thus far, 350 grants have gone to those working on local and regional food systems, 240 to Community Supported Agriculture (CSA), and 330 to farmers markets.
SARE has helped drive innovation on farms across the country, funding some of the most cutting-edge and relevant research projects among any federal agriculture-focused grant program over its 30 years of operation. Some examples of SARE-funded projects include:
A searchable database of all SARE funded research projects is available on the main SARE website. SARE’s report, Our Farms, Our Future: 30 Years of Sustainable Agriculture Research and Education (SARE) 1988-2018 summarizes the work SARE has done over the last 30 years. Free downloads of SARE handbooks and information bulletins on a variety of research issues are available from the SARE Learning Center.
How to Apply and Program Resources
SARE’s four regional offices administer their three primary grant programs. Some regions offer additional grant opportunities for community innovation, graduate student research, agricultural professional conducting on-farm research, and region-specific initiatives. The uses and restrictions on the funds vary by region and year, depending on the specific call for proposals for a given year.
Each SARE region solicits proposals and awards grants at different times of the year. All grant programs have only one application period per year and each grant type has its own application, deadline, and focus.
For additional grant information, contact your regional SARE office:
Find the latest news about SARE and sustainable agricultural research on NSAC’s blog:
Program History, Funding, and Farm Bill Changes
In 1988, the same year that the Sustainable Agriculture Coalition (SAC) was established (NSAC’s predecessor), SAC helped to advocate for funding to create the SARE program. Since then, SARE has remained the flagship research program for sustainable agriculture at USDA, and remains the only farmer-driven federal research program.
In 1990, as a result of SAC’s work, Congress authorized the SARE program and determined that it should be funded at no less than $60 million a year, consistent with the recommendations of the National Academy of Sciences. Sadly, the annual appropriations for this award-winning program have yet to reach this level – with the most recent funding cycle providing only $37 million for this successful program with a proven track record.
Beginning in fiscal year (FY) 2014, Congress consolidated funding for the SARE Professional Development Program (PDP) and funding for SARE Research and Education (R&E) grants into a single budgetary funding line. While this did not substantially change the future direction of the program, decisions about how much of the total funding made available for SARE in a given year should go towards each of these program components are now left up to USDA, rather than Congress.
The program’s legislative requirements have changed very little since it was first created in 1990. For example, while the 2014 Farm Bill did not modify the substance of the SARE program, it did sunset its authorization, meaning that the program now has to be reauthorized in each subsequent farm bill – as was the case in the 2018 Farm Bill.
Sustainable Agriculture Research and Education Program Funding (not including sequestration cuts)
|Fiscal Year|Total Funding (in millions)|
|---|---|
|2016|$27|
|2017|$27|
|2018|$35|
|2019|$37|
|2020|pending|
For the most current information on program funding levels, please see NSAC’s Annual Appropriations Chart.
Authorizing Language
Sections 1619-1624 of the Food, Agriculture, Conservation and Trade Act of 1990 (FACTA), Public Law 101-624 (7 U.S.C. 5801), created the SARE program.
Section 7201 of the Agriculture Improvement Act of 2018 amends Section 1624 of the Food, Agriculture, Conservation, and Trade Act of 1990, to be codified at 7 U.S.C. 5814.
Last updated in July 2019. | https://sustainableagriculture.net/publications/grassrootsguide/sustainable-organic-research/sustainable-agriculture-research-and-education-program/ |
In an official communique, the Outgoing President of the Republic of Somaliland congratulates the President of the Republic of Kenya. Below is the letter from the Somaliland President:
H.E Uhuru Kenyatta
President,
The Republic of Kenya
Your Excellency,
Please accept my warm congratulations on your victory and my best wishes for your success as you prepare to take up the responsibilities and challenges of your high office for a second term.
Here, in Somaliland, our people went to the polls recently to elect a new president, who I can assure you, will be as committed to continued friendship and cooperation between our people and their representatives.
We look forward to working with you not only to develop closer relations between our countries but also to concert our efforts in the cause of peace and prosperity in our region.
Yours Sincerely,
Ahmed Mohamed Silanyo
President, Republic of Somaliland
OpEd: Recall Of Berbera Oil Storage Facilities – A Major Milestone
Written by: Abdirahman Aideed
A majority of the public in Somaliland welcomed and voiced their congratulations to the president of Somaliland (H.E. President Muse Behi) for his remarkable decision, in August 2018, to recall the national oil storage facilities in Berbera from the hands of the private company(ies) currently running them, so that they remain under the management of the government.
The government of former president Silanyo tabled a motion in parliament in August 2015 to approve the privatization of the oil storage facilities to private companies. Parliament rejected the motion by a majority vote. The government nevertheless overruled that decision and implemented the privatization by presidential decree in October 2015. Over the roughly three years in which the property was run by private companies, a number of arguments circulated in the public domain, as the arrangement posed several risks to the public interest, including slack quality control of the petroleum coming into the country as well as national security concerns. The debate heated up in April 2018 when sub-standard petrol offloaded into the storage facility damaged many vehicles, and owners suffered heavy, uncompensated financial losses repairing the machines damaged by that poor-quality fuel; that is why the public is applauding the president's action. From a socio-economic perspective we can mark it as "the SECOND MILESTONE" for President Behi since he came into power.
Dear reader, let me also remind you of the FIRST MILESTONE for President Muse Behi Abdi, which happened in March 2018, when a presidential letter/order was released by the Minister of Public Works, Mr. Qambi, in a press conference. The minister explained in detail the message from the president, informing all relevant government offices of the suspension of any tenure awarded over the natural seaway land in Berbera (aka Raasiga). This land, extending about 3.5 km into the sea, is a natural landmass that remained untouched for centuries and was safeguarded as national property by all the administrations that have ruled Berbera throughout history. Raasiga is believed to be the comparative advantage that gave Berbera its importance as a strategic sea port for centuries. President Behi's predecessor, Mr. Silanyo's government, awarded property ownership deeds over Raasiga to business people and individuals with influence in the ruling regime. The president's decision to retain the rights over Raasiga was also commended by the public and termed the FIRST MILESTONE in President Muse Behi's accomplishments of recovering the national properties that had been privatized by those in power before him to themselves as beneficiaries and to their close friends, relatives and bogus political allies who later vanished.
Terminating buddy-based contracts at Egal International Airport and ensuring that such diverted income goes directly into the national treasury was another key THIRD MILESTONE for MBA. Though some government duties are still franchised to private hands, for instance printing various national tax stamps and logos, the majority of the public regarded this third milestone as the beginning of empowering tax-collecting departments and upholding the government's reputation.
Recovering national properties and income sources is not an easy task, and there are a number of reasons that make it challenging. First, prominent figures from the former ruling executive under H.E. Silanyo are believed to have vested interests in those privatized projects. Secondly, such implicated de-facto officers retain deep-rooted support in major constituencies that were among Behi's voting centers and that contributed to President Behi's landslide victory; touching those individuals below the belt would therefore affect Behi's political support and jeopardize his intentions in the next term's elections. Thirdly, the opposition has sustained continuous pressure on the President, sometimes laced with a number of spicy ingredients, to weaken his political standing. All of this puts the president in a situation with few options for taking drastic corrective action in the system, and he may sometimes opt to sit on the burning pan, however fiery and hot it may be.
Back to the topic of the nationalised oil storage complex in Berbera: the implications are not easy either. This presidential decision comes in a situation where other private companies were already given permits by the former Silanyo government to establish and construct their own oil/fuel storage facilities in the fuel port quarters. Given that trend, it is understandable that the construction of privatised oil storage facilities would ignite uneasy market competition with government-owned facilities. The allied company that was until recently running the government facilities could also demand to establish its own private oil storage facilities, because that path is already open, and it could then store its imported supplies in privately owned storage. This recalls the unchanged status quo of the government's national banks compared with the fast-developing private banks, and could mean that the government oil storage facilities will likewise be used only by the government, should they not become dynamic and competitive in the market. In that regard, the re-nationalized government oil storage facilities can perhaps only survive commercially if the government retains the right to be the sole proprietor of such giant facilities in the country, as is the norm and practice in the region, especially in the port of Berbera, while oil importers are only users of the facilities. See more on this in an earlier article on the same subject: http://hadhwanaag.ca/detail.aspx?id=221181
However, there are a number of awaited milestones for the new president to accomplish:
- To recall the reserve lands for the port and free zone expansion, aka Noobiyadda area, as was the plan in history.
- To implement the issued order regarding the Raasiga area
- To initiate a law, or issue a presidential decree, making a clear demarcation between the state properties that only the government can own, run and manage and the properties in which the public or private sector can be engaged, and to what level. For instance, can an individual or private company retain the ownership of (a) a sea port? (b) An airport? (c) An oil well? (d) A mineral well/cave? (e) A city water supply? (f) A fuel storage complex at import/export hubs? (g) Export livestock health check quarantines/Mahjars? ... just to mention a few.
In my view, I think such resources and facilities can only be run by a government on behalf of the ownership of the state.
Written by:
Abdirahman Aideed
Baadigoob: The Response of the International Observers
See here an article written by Mr. Michael Walls, Head of the International Observation Mission for the 2017 Somaliland Presidential Election. In it, Michael addresses issues that have recently been circulating on social media, and allegations spread through a video shared on social media under the name "Baadigoob".
Download the full article here:
OpEd: The Lost Intellectuals
They live in a place where tribalism is the ruling dream, the thing the leaders of their country will kill for to get what they want. They live in a place where the only opportunity most of them can get is a life tied to inherited family prosperity or clan-based openings, beyond which they can learn nothing more; where international NGOs have to go after their rights for them, because there are no other opportunities left to them. Every new authority promises to develop somewhere or something, but after they take the leading positions they tend to spend their time and money on fleets of vehicles and new houses worth billions, while none of them gives a thought even to the name of their country, or to the passport of their own country.
Each year they try to improve and support their society on the strength of a dream; they denounce their government across all the social media out of hope; they support everyone who promises unity for their community and development of their country's production, out of hope; in every election year they expect new change and support for a bright future, but after another few years of dreams and wishes they have nearly lost the whole dream, and they keep losing until their sight goes dark in every remaining avenue of hope.
The youth have almost finished their first generation in these seasons of dreaming, in a place where there are ever more politicians whose sense of politics is nothing more than having big houses and cars and eating good food at home, without ever thinking about how it is paid for; politicians who are mostly from outside the country, who send their children to the most popular universities and their families and wives to the most beautiful countries and only visit to rest, while they block the rights of the street children. Meanwhile there is the mother who does not sleep for worry, who gets up early in the morning or at midnight, around 3 a.m., to feed her orphaned child; or the single mother whose husband has lost any chance of work, no longer even considers his children, and is busy feeding himself senseless grass like a goat all day and night.
There is the right of the twenty-year-old who is already lost on the African coast, hunted by the animals of the sea; the most beautiful African girls who lost what was most precious to them on that same coast, whose hungry, poor mothers' faces are unrecognizable from sadness and worry, and many of whom have been affected by disease in the coastal areas. None of this makes any sense to our politicians as long as their own children are safe in a beautiful country, a country that a leader of their age, mind and power could have made into a place everybody likes. The youth have leaders who campaign on supporting them and improving their future lives, yet they cannot even hold a dream to step forward; no more footprints, no real dream. A youth with a coastal area of more than 360 feet who do not know how to live from it, have no power to learn, and cannot cross to where their families are from because of tribalism and lack of leadership.
A youth who can learn nothing more about government because of hereditary, tribal leadership; a youth divided simply by where their families come from, no longer able to make a change; a youth whose maturity comes only after thirty, who die of the diseases of worry and do not know where to go or what to do even at that age; a youth whose intellectuals go mad from too much thinking and too little leadership; a youth with the greatest mental and physical strength who can do nothing more because of the leadership situation; a youth with no role models; a youth with no motivation; a youth who cannot realize their dream; a youth ready to change their people, their country and their minds but without any support; a youth with no power to push new ideas through; a youth who live in a country where the leaders are the businessmen, the doctors, the telecommunications owners, and every one of their families holds a department of the authority, without any humanity.
The question is: is the street child not a citizen? Why can an educated citizen not do the work for which an international worker takes a great deal of money out of the country? Why is the community divided and unwilling to support one another? Is that not because of selfish, tribalist campaigning? Why do they teach others outside their nation where they are from, I mean which clan they are? Is it the leadership? If it is not the leadership, why did the religion make equal brotherhood among all the different individuals who became Muslim when the prophet (csw) was spreading the religion? If it is not, then why do all American people say only that they are American, from this or that city? Why and why?
Written by: | http://warshiil.com/somaliland-president-congratulates-the-president-of-kenya/ |
We are looking for a reliable Maintenance Manager to oversee all installation, repair and upkeep operations of the Company's facilities. You will be the one to ensure that your colleagues have the best physical resources available to complete their duties according to budget.
A great maintenance manager will have a solid understanding of plumbing and electrical systems, and should be well-versed in all maintenance processes and health and safety regulations. The ideal candidate will also have aptitude for undertaking administrative tasks such as reporting and budgeting. The goal is to ensure the company's facilities are well cared for and adequate to support the company's business operations.
Duties and Responsibilities:
- Perform daily and weekly inspections of the facility to ensure smooth operations.
- Hire and train maintenance employees to carry out specialized duties.
- Delegate maintenance issues to appropriate personnel for effective resolution.
- Allocate workload and supervise upkeep staff.
- Carry out inspections of facilities to identify and resolve issues.
- Inspect equipment to identify operational inefficiencies and facilitate optimization.
- Check electrical systems of buildings to ensure functionality.
- Plan and oversee all repair and installation activities.
- Perform periodic maintenance and routine calibration of mechanical, electrical and pneumatic systems.
- Manage the disposal of worn-out/damaged machinery as well as the installation of new equipment.
- Conduct risk assessments to identify possible hazards around building facilities or premises.
- Design and implement programs and procedures for effective maintenance operations.
- Ensure adherence to health/safety procedures and policies.
- Supervise the activities of building upkeep personnel to ensure they maintain a clean and orderly facility.
- Monitor equipment inventory and place orders when necessary.
- Monitor expenses and control/manage the maintenance budget in order to meet set objectives.
- Manage relationships with contractors and service providers and conduct negotiations with them to determine the rates and terms of service.
- Maintain accurate records of maintenance operations and present reports of daily activities to upper management.
Requirements
- Proven experience as a maintenance manager or in another managerial role
- Experience in planning maintenance operations
- Solid understanding of technical aspects of plumbing, electrical systems etc.
- Working knowledge of facilities machines and equipment
- Ability to keep track of and report on activity
- Excellent communication and interpersonal skills
- Outstanding organizational and leadership abilities.
Job Type: Full-time
Location: | https://www.wowjobs.ca/posting/iWFLZnhD_1IZQ9fxC1s4aKM-qdFW4aR8qQ7_JQAWHThY5PvquMDDVQ |
In Debt or Indentured Part Eight: Single Issue Voting
This is the eighth part of a multiple-part series taking a deep dive into our current political and economic crisis in America. Partisan politics and unfettered corporate spending and recklessness, along with a shift in our social acceptance of debt, are having far-reaching and potentially devastating effects on our way of life, on the American Dream. With each installment we will take a closer look at some of the major pieces of this very complex puzzle and try to understand them and bring them into perspective. Use this opportunity to take a broader look at the political and socioeconomic state of America and how each of us, as small pieces of the puzzle, can make a difference.
In Debt or Indentured: Single Issue Voting
Another reason that the middle class has not unified to demand change from the government, and subsequently from their employers, is that the American middle class has allowed its voice to be divided on the basis of single issues. These types of social issues have dominated American politics since the 1980s, beginning with efforts to mitigate the effects of the 1973 Roe v. Wade Supreme Court decision that legalized abortion in America. The two major American political parties took strong positions supporting and denouncing legalized abortion. Over time, a litany of new social issues has been added to the table to help divide the middle-class vote and create a new type of voter. This new voter will vote on a single issue, one they may care about only because of religious or personal convictions, instead of demanding platforms that address the personal issues affecting their everyday lives. Some of the issues that currently stifle American middle-class voices are abortion, stem cell research, gay marriage, euthanasia, gun control, and illegal immigration. The interesting thing about single issues, or wedge issues, is that when either party is in control of Congress and the White House, little to nothing is done to change the issue. On its face it appears there is a lack of political will to tackle such monumental social issues, even though politicians on both sides ran with those issues central to their platforms.
There are many examples of the bait-and-switch of wedge issues by both Democrats and Republicans. One for the Republicans took place while they controlled all of Congress and the White House from 2003 through 2007. Almost every elected Republican took a stance in their run for office to work toward the limitation of abortion. Yet during this period of Republican control, little to nothing was changed to limit abortion in America, even though this wedge issue had been used by most Republicans to motivate voters who saw it as the single issue to vote on. The Democrats did the same thing with stem cell research and gay marriage when they controlled Congress and the presidency from 2009 to 2011. This is sometimes referred to in political communities as pandering to the base, or telling the base of the party what it wants to hear. Once elected, these politicians do not make it a priority to do anything about the issue until close to another election. This maneuver ensures a constant bloc of voters (mostly middle class) who will come out to vote for their party because they care about this sole issue. This was seen with gay marriage and abortion in the 2004 U.S. presidential election between then-President George W. Bush and Senator John Kerry, and again in the 2008 election between Senator John McCain and Senator Barack Obama. Even now, in the run-up to the 2012 presidential election, the sound of pandering to single-issue voters is being heard from both former Governor Romney and President Obama.
Middle Class Divided
The division of the American middle class is important to both major political parties, which now have unlimited funds available to them through the Supreme Court's ruling in Citizens United v. Federal Election Commission, which classes corporations as individuals whose political contributions to individual candidates are protected as free speech, as reported by Adam Liptak for the New York Times in his article titled "Justices, 5-4, Reject Corporate Spending Limit." This decision entitles corporations to spend as much as they like on political contributions to individual candidates (Liptak). The dissenting Supreme Court justices warned that "allowing corporate money to flood the political marketplace would corrupt democracy" (Liptak). This ruling allows corporations to act as individuals in order to further influence elections through money, although a corporation will not be able to vote in that very same election, since it is not a person or a citizen.
That's it for this part of In Debt or Indentured. We hope that this has given you some important things to think about. Use some of what you have learned here to look beyond the mere message our politicians are presenting and to see what ramifications these actions have for all of us Americans. Our country started down a slippery slope over a decade ago. We seem to have stopped the free fall it had become, but do not fool yourself. We are still on the precipice of another long fall. Let's just hope we all can learn, and grow, from the last spill we took.
Check back soon for the next installment of In Debt or Indentured. | https://www.peoplepolitico.com/in-debt-or-indentured-part-eight-single-issue-voting/ |
A new interpretation of quantum mechanics suggests reality does not depend on the measurer — ScienceDaily
Quantum mechanics arose in the 1920s — and since then scientists have disagreed on how best to interpret it. Many interpretations, including the Copenhagen interpretation presented by Niels Bohr and Werner Heisenberg and in particular the von Neumann-Wigner interpretation, state that the consciousness of the person conducting the test affects its result. On the other hand, Karl Popper and Albert Einstein thought that an objective reality exists. Erwin Schrödinger put forward the famous thought experiment involving the fate of an unfortunate cat, which aimed to describe the imperfections of quantum mechanics. | https://dimensionextreme.com/tag/interpretation
Bioterrorism Act: See Public Health Security and Bioterrorism Preparedness and Response Act of 2002.
BKC: Biodefense Knowledge Center, DHS (at Lawrence Livermore National Laboratory).
BOCA: Building Officials and Code Administrators International, Inc.
BPAT: Building Performance Assessment Team, FEMA.
BSIR: Bi-annual Strategy Implementation Report.
BSSC: Building Seismic Safety Council.
BTCDP: Bioterrorism Training and Curriculum Development Program.
• Identify significant assets at the site(s) that may be targeted by terrorists for attack.
• Identify specific threats and vulnerabilities associated with the site(s) and its significant assets.
• Develop an appropriate buffer zone extending outward from the facility in which preventive and protective measures can be employed to make it more difficult for terrorists to conduct site surveillance or launch attacks.
• Identify all applicable law enforcement jurisdictions and other Federal, State, and local agencies having a role in the prevention of, protection against, and response to terrorist threats or attacks specific to the CI/KR site(s) and appropriate points of contact within these organizations.
• Evaluate the capabilities of the responsible jurisdictions with respect to terrorism prevention and response.
• having a plan to quickly restore operations to ‘business as usual’.
Business Continuity, Disaster Recovery & Contingency Planning Differences: “A person builds a house on an ocean beach. A storm washes away the beach. The house collapses.
Business continuity would suggest building a barrier reef or moving the house farther inland.
Disaster recovery rebuilds the house in time for the next storm.
• Avoid a risk, typically through redundancy.
• Mitigate a risk by implementation of ‘work-arounds’.
10. Coordination with Public Authorities.
Business Impact Analysis (BIA): “The Business Impact Analysis is the foundation on which the whole BCM [Business Continuity Management] process is built. It identifies, quantifies and qualifies the business impacts of a loss, interruption or disruption of business processes so that management can determine at what point in time these become intolerable (after an interruption).
Business Impact Analysis (BIA): “Analysis which identifies the resources critical to an organization's continued existence, identifies threats posed to those resources, assesses the likelihood of those threats occurring, and the impact of each of those threats on the organization.
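As a purely illustrative sketch, not drawn from any official BIA methodology, the scoring step described above (ranking each critical resource by the likelihood and impact of the threats against it) might be expressed as follows; the resources, threats and numbers are invented examples.

```python
# Purely illustrative sketch of the BIA scoring idea described above:
# for each critical resource, pair the threats against it with a
# likelihood and an impact, then rank by expected impact. The resources,
# threats and numbers below are invented examples, not official guidance.

critical_resources = {
    "payroll system": [
        ("data-center power loss", 0.20, 8),   # (threat, likelihood, impact 1-10)
        ("ransomware",             0.05, 10),
    ],
    "call center": [
        ("regional flood",         0.02, 9),
        ("staff shortage",         0.30, 4),
    ],
}

scores = []
for resource, threats in critical_resources.items():
    for threat, likelihood, impact in threats:
        scores.append((likelihood * impact, resource, threat))

# Highest expected impact first: these are the loss scenarios management
# would examine to decide when an interruption becomes intolerable.
for score, resource, threat in sorted(scores, reverse=True):
    print(f"{score:4.2f}  {resource:15s} <- {threat}")
```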
CAER: Community Awareness and Emergency Response, Chemical Manufacturers Association.
CAMEO (Computer-Aided Management of Emergency Operations): “CAMEO ® is a system of software applications used widely to plan for and respond to chemical emergencies. It is one of the tools developed by EPA’s Chemical Emergency Preparedness and Prevention Office (CEPPO) and the National Oceanic and Atmospheric Administration Office of Response and Restoration (NOAA), to assist front-line chemical emergency planners and responders. They can use CAMEO to access, store, and evaluate information critical for developing emergency plans. In addition, CAMEO supports regulatory compliance by helping users meet the chemical inventory reporting requirements of the Emergency Planning and Community Right-to-Know Act (EPCRA, also known as SARA Title III). CAMEO also can be used with a separate software application called LandView ® to display EPA environmental databases and demographic/economic information to support analysis of environmental justice issues.
(a) Alert the appropriate federal authorities within each country of the existence of a threat from a potential or actual radiological event.
(b) Establish a framework of cooperative measures to reduce, to the extent possible, the threat posed to public health, safety, property, and the environment.
CAP: Capabilities Assessment Pilot(s). DHS, 2006.
…preparing, under uncertainty, to provide capabilities suitable for a wide range of challenges while working within an economic framework that necessitates prioritization and choice.
Capability Assessment for Readiness (CAR): “The Federal Emergency Management Agency (FEMA) and the National Emergency Management Association (NEMA) are working together aggressively to reduce losses from disasters. As an important component of this effort, FEMA and NEMA joined together in 1997 to develop the CAR, an assessment process and tool that States, Territories, and Insular Areas can use to evaluate their own operational readiness and capabilities in emergency management. The CAR was implemented first in 1997 and has matured into a sophisticated and accepted, automated, self-assessment tool that helps the States, Territories, and Insular Areas establish sound mitigation, preparedness, response, and recovery practices, establish priorities, and analyze program performance.
“The CAR was revised after its initial implementation in 1997, and a second self-assessment is underway this year. The CAR is available in automated or manual versions and is divided into 13 Emergency Management Functions (EMF) common to emergency management programs: 1) laws and authorities; 2) hazard identification and risk assessment; 3) mitigation; 4) resource management; 5) planning; 6) direction, control, and coordination; 7) communications and warning; 8) operations and procedures; 9) logistics and facilities; 10) training; 11) exercises, evaluation, and corrective actions; 12) crisis communications, public education, and information; and 13) finance and administration.
Capacity for Disaster Reduction Initiative (CADRI): “CADRI was created in 2007 as a joint programme of the United Nations Development Programme’s Bureau for Crisis Prevention and Recovery (UNDP/BCPR), the United Nations Office for the Coordination of Humanitarian Affairs (UN OCHA), and the secretariat of the International Strategy for Disaster Reduction (ISDR)…. CADRI succeeds the UN Disaster Management Training Programme (DMTP), a global learning initiative, which trained United Nations, government and civil society professionals between 1991-2006. DMTP is widely known for its pioneering work in developing high quality resource materials on a wide range of disaster management and training topics. More than twenty trainers’ guides and modules were developed and translated. CADRI’s design builds upon the success and lessons learned from the DMTP. While the importance of capacity is now widely recognized, lessons of experience have demonstrated that the development of capacity is far more complex than previously thought. Capacity development goes beyond training or the transfer of technology, requiring local ownership and political leadership.
CAR: Capability Assessment for Readiness.
Cascadia Region Earthquake Workgroup (CREW): “The Cascadia Region Earthquake Workgroup (CREW) is a coalition of private and public representatives working together to improve the ability of Cascadia Region communities to reduce the effects of earthquake events…. In less than 50 years, a number of great Cascadia-like earthquakes have occurred around the Pacific Rim, including Chile (1960), Alaska, (1964) and Mexico (1985). A unique aspect of a great Cascadia earthquake is the strong likelihood that the three greater metropolitan areas of Portland, Seattle, and Vancouver will simultaneously feel the effects of strong and sustained ground shaking. This wide-spread ground shaking combined with accompanying elevation changes and the likely generation of a tsunami along the Pacific coast, will cause loss of life, property damage, and business interruption in vulnerable locations through out southwestern British Columbia, Washington, Oregon, and northwestern California. The broad geographic distribution of damaging impacts will generate special challenges and severely stress the response and recovery resources of the three Pacific states and British Columbia.
of Pacific Rim trade involving Ports like Vancouver, Seattle, Tacoma, and Portland.
Promote efforts to reduce the loss of life and property.
Conduct education efforts to motivate key decision makers to reduce risks associated with earthquakes.
Catastrophe: An event in which a society incurs, or is threatened to incur, such losses to persons and/or property that the entire society is affected and extraordinary resources and skills are required, some of which must come from other nations.
Catastrophe: “You see, one of the lessons I think we have learned from last year's hurricanes is, we've got to look at the challenge of the catastrophic event, not only at the point where the catastrophe hits, but in all the areas around that point that are going to receive the collateral or cascading effects of that catastrophe.
When we have a major event, whether it be a terrorism event or a natural disaster, that causes a lot of people to move out of a particular area, they're going to go someplace. And a lot of them are going to go to your cities or your towns, and you're going to have to be able to deal with that challenge. | http://hestories.info/guide-to-emergency-management-and-related-terms-definitions-co.html?page=5 |
To promote environmentally friendly choices and practices to the community by showcasing sustainable businesses, technologies, and organizations that empower us to be better stewards of the Earth. To engage in ongoing dialogue, inquiry, and discovery of more sustainable practices through educational forums, exhibits, and networking.
Empower citizens with awareness in sustainability issues and a capacity to contribute to the direction of sustainable lifestyles. Integrate sustainable practices into all aspects of personal and business planning, focusing both on the current and future implications.
Design, build and operate environments that minimize their ecological footprint, contributing to the goal of climate neutrality. Create new paradigms for energy, transportation, water, waste and food systems that contribute to best practices.
Participate in research initiatives and outreach activities that educate our communities, nourish our natural ecosystems, and enrich our cultural experiences. Embrace right livelihoods that promote health, diversity, and tolerance as principles by which policy and process are designed.
Sustainable living is a lifestyle that attempts to reduce an individual’s or society’s use of the Earth’s natural resources and personal resources. Practitioners of sustainable living often attempt to reduce their own carbon footprint by altering methods of transportation, energy consumption, diet and work habits.
Sustainability crosses all areas of life, from the natural environment to urban planning, to health care, economics, transportation, energy, agriculture, water and more. It is generally recognized that sustainable enterprise in all of its many aspects will be a major component of future economic development in Michigan, the region, and around the globe. Much of the world is now in the early stages of a historic transition from economies that accommodate waste and inefficiency and depend too much on fossil fuels …to systems that are much more conserving, efficient, and sustainable.
Sustainability is the ability to endure. Future generations can live, work and meet their needs ONLY if our current generation acts responsibly, conscious of all we do and the effects of all we do.
Unprecedented Opportunity
The Sustainability Summit recognizes unprecedented opportunities and encourages investment and development in the green sector. We focus on Energy, Water, Lifestyle, Business and Workforce. It's clear that sustainability is no longer an optional add-on for business. In the commercial world, adapting to climate change is now an imperative. Companies need to know that their supply chain is both secure AND sustainable. Shareholders and customers are increasingly demanding to know that their products are coming from sustainable sources. Market advantage will flow to those companies which can prove this.
The purpose of the Sustainability Living Summit is to create a forum where business leaders, innovators, government agencies, and non-profits can examine both the commercial opportunities in the emerging green economy as well as the challenges in achieving sustainability goals. Our aim is to create an annual event which will chart how industry is progressing and how Michigan can secure a competitive advantage.
Program Format
A series of ‘forums’ with 3-4 speakers in each forum and a moderator will address the topics below. The moderator of each forum will introduce the topic and each speaker, allowing each speaker 8-12 minutes to introduce themselves and their mission, product, service or policy position. Once the introductions are complete, the moderator will ask the group a series of questions to prompt a discussion that will be open to the audience.
- Renewable Energy
- Energy Efficiency
- Alternative Transportation
- Sustainable & Local Agriculture
- Green Building
- Natural Health
- Environmental & Social Responsibility
- Sustainable Economic Development
- Sustainable Communities
- Press Conference to announce breakthrough technology
Limited Seating Available.
Full Day session $25.00 / GLREA Members $15.00
Register for Sustainable Living Summit online at www.glrea.org
For more information on how you or your organization can participate please contact:
Ms. Mary McGraw, Michigan Energy Fair Director, 616.813.2384, [email protected]
Mr. Douglas Elbinger, GLREA Sustainable Living Summit, 248-808-2574, [email protected]
Some of the distinguished speakers include:
ANDY BLASKOWITZ
Reporter for MidWest Energy News
Andy started as a reporting fellow for Midwest Energy News in May 2014. He previously spent four years at City Pulse, Lansing’s alt-weekly newspaper. In addition to Midwest Energy News, Andy also covers state politics and business policy for Inside Michigan Politics and MiBiz. He is a graduate of Michigan State University’s Journalism School and received multiple awards as an undergraduate from the Knight Center for Environmental Journalism. He is based in Grand Rapids.
JEFFREY D. ROSTONI
LEED AP
Diagnosing & Restoring your Home or Work Environment
Analyzing Mold and Moisture Problems in Buildings
Jeffrey's presentation will cover the basics and misconceptions about mold, liability associated with mold, and mold and moisture's impact on structures and occupants' health, and will address the many moisture sources in buildings, typical mold remediation techniques, as well as other indoor air quality issues arising from tight buildings. AQC will discuss moisture sources in great detail, as well as prevention. There will be a Question and Answer session at the end to field questions from the audience.
Mr. Jeff Rostoni is an environmental consultant specializing in indoor air quality issues in residential and commercial buildings. Mr. Rostoni has extensive knowledge about indoor air quality problems and building diagnostics and provides consulting services to help identify, solve and remediate indoor air quality problems. Mr. Rostoni also has his LEED accreditation and is routinely involved in diagnosing problems associated with “tight” buildings. Jeff evaluates mold and moisture problems, asbestos, lead paint, methamphetamine houses, VOCs, house ventilation issues, attic ventilation, crawl space moisture and venting, and other building and air quality related problems.
BRINDLEY BYRD
Executive Director
Michigan Energy Efficiency Contractors Association
Advocating Public Policy for Energy Conservation
After graduating from Michigan State University, Brindley founded and successfully managed a residential remodeling company for 12 years. He leveraged his own professional development experience into being a national speaker and presenter for the National Association of Home Builders. After serving as founding executive director of the Michigan Construction Career Council, part of Capital Area Michigan Works!, Brindley moved into utility rebate program design with CLEAResult and then onto advocating for the energy efficiency industry as the founding executive director of the Michigan Energy Efficiency Contractors Association (MEECA). He also serves as executive director of the Michigan Air Conditioning Contractors Association (MIACCA) fighting for fair competition throughout Michigan’s mechanical industry. He is currently working to expand Michigan’s integrated energy efficiency market.
MARK BATES
Building Automation Specialist
TH Eifert Mechanical Contractors
You Can’t Manage What You Can’t Measure
In order to get control of your spending on utilities, you must first have the ability to understand how your facility consumes energy. According to the Building Owners and Managers Association, approximately 72% of a building's monthly energy spend goes to space heating, cooling, water heating and lighting. Understanding how efficiently your building is performing on each of these parameters requires that you first discover how your building compares to others of similar size, age and usage nationally and receive an EnergyStar Score from the United States Department of Energy. This score provides the data necessary for your energy consultant to determine to what extent there is energy savings potential.
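As a rough, purely illustrative sketch of the "measure before you manage" point above: the 72% figure comes from the paragraph itself, while the bill amount and the split between heating, cooling, water heating and lighting are assumed numbers for demonstration, not measured data.

```python
# Illustrative only: estimate how a monthly utility bill breaks down
# across the four big end uses cited above. The category split is an
# assumption for demonstration; a real assessment would use metered data
# and a benchmarking tool such as an ENERGY STAR score.

monthly_energy_spend = 10_000.00   # example monthly utility bill in dollars (assumed)

BIG_FOUR_SHARE = 0.72              # ~72% of spend per the figure quoted above
assumed_split = {                  # assumed relative weights within that 72%
    "space heating": 0.40,
    "cooling":       0.25,
    "water heating": 0.15,
    "lighting":      0.20,
}

big_four_spend = monthly_energy_spend * BIG_FOUR_SHARE
for end_use, weight in assumed_split.items():
    print(f"{end_use:>15}: ${big_four_spend * weight:,.2f} / month")

print(f"{'everything else':>15}: ${monthly_energy_spend - big_four_spend:,.2f} / month")
```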
Mark Bates brings nearly 40 years of experience in the construction services business, specializing in “all things low voltage.” As a Building Automation Solution Specialist at Lansing-based T. H. Eifert Mechanical Contractors, Bates is responsible for helping the company evolve into providing innovative energy management solutions for its HVAC service clients. His career includes serving as a Sales Engineer with Honeywell's Commercial Construction Division, where he designed and project-managed a building management solution for the State of Michigan office buildings, including the State Capitol, interconnected State-owned buildings downtown, and the State Secondary Complex.
JAMES L. NEWMAN
Newman Consulting Group LLC
CEM, CSDP, LEED AP BD+C, ASHRAE OPMP & BEAP
Improving Energy Efficiency in New and Existing Buildings
Jim Newman is one of the country's most experienced energy efficiency and green building experts. Known as the “Dean of Green,” Jim regularly speaks across the US and internationally to professionals, student groups and the media about sustainability and green technology. He is a Certified Energy Manager (CEM), a Certified Sustainable Development Professional (CSDP), a LEED Accredited Professional, an ASHRAE Distinguished Lecturer (DL), an Operations and Performance Management Professional (OPMP), a Building Energy Assessment Professional (BEAP) and a Fellow of the Engineering Society of Detroit (FESD). In 2012, he was named a Green Leader by Corp! Magazine and the Detroit Free Press. Jim Newman has been involved in this industry long enough to be both a seasoned expert and a pioneer, and is an internationally recognized speaker and writer on indoor air quality, energy conservation, green design, efficient operation and maintenance, the USGBC and LEED®.
PATRICK LINDEMANN
Ingham County Drain Commissioner
Greening of the Drains
Michigan stands as one of the guardians of the Great Lakes, the largest body of fresh water in the world. Storm water runoff is the largest transporter of non-point and point source pollution discharges to the waters of all jurisdictions in Michigan and throughout the nation. “This is where we draw the line in the sand and develop new Best Management Practices for negating the pollution conveyed by storm water runoff,” says Pat Lindemann, one of the nation's leading experts in water management systems, who will talk about the cutting-edge efforts in Ingham County to protect our valuable water resources.
Pat Lindemann was first elected as the Ingham County Drain Commissioner in 1992, an office he has re-defined to reflect his environmental stewardship ethic and systems approach to stormwater management. He has received international, national, and state recognition and awards for his innovative drain projects. The design for a wetland filtration system in the Tollgate Drainage District in Lansing Township, Michigan is the first of its kind, and has received national awards and international interest. Pat holds a Bachelor of Science Degree from Michigan State University in Resource Development, with credits toward an advanced degree in the same discipline.
RON DOETCH
Agronomist
The Future of Food and Farming Systems
Food and farming systems are the same system. How do we share a vision of a sustainable food system and what framing and language do we use to communicate in our planning discussions? How do we honor the past while coping with the rapid acceleration of technology that provides so many new possibilities for our relationship with food? How do we bridge the gaps in income and labor between food production and modern lifestyles? This talk attempts to answer these and other questions about our future.
Agronomist Ron Doetch was raised on a small working dairy farm in Northern Illinois. The farm at that time was without chemicals and integrated as a diverse system where everything got used in some way. After receiving a degree in agronomy at the University of Illinois, Ron worked on special projects for John Deere, including new ways to incorporate conservation into modern farming techniques. It was then a natural fit to become specialty crops manager for Itochu, a global Japanese trading firm, to connect Japanese food processors to American farmers. In 2003, Ron assumed the helm as Executive Director of the Michael Fields Agricultural Institute to oversee the agriculture education, research and outreach programs of the now 20-year-old organization in East Troy, WI. Today, Ron works as an independent consultant in food and farming systems.
Complete information about these events may be found at www.glrea.org
For information about opportunities for speakers, sponsors, and exhibitors contact: | http://elbinger.com/partners/sustainable-living-summit/ |
Participant observation involves spending time being, living or working with people or communities in order to understand them. In other words, it is, as the name implies, a method based on participating and observing in which field-notes, sketches, photographs or video recordings are used as a method of data collection. The basis of this approach is to become, or stay, as close to the spatial phenomenon being studied as possible and it is thereby quite distinct from methodologies that emphasize distance and objectivity. | https://www.research.ed.ac.uk/en/publications/participant-observation-2 |
What will students do in the subject?
In GCSE Dance you will learn how to work imaginatively and creatively, both independently and collaboratively. You will learn how to develop and demonstrate competence in a range of practical, creative and performance skills. To do this you will be introduced to a range of dance styles and dance fusions. You will also explore the anthology of 6 professional dance works and will learn to appreciate, interpret and evaluate these works. Skills learned will also include how to write about dance, including reflection on and evaluation of your own performance and choreography.
Who is it suitable for?
This course is suitable if you have an interest in performing and creative arts. GCSE Dance is not just about learning and copying routines but is built for those who wish to study dance as a whole art form. Dance is a distinct art form, which has its own history, body of knowledge, aesthetic values, cultural contexts, and artistic products. GCSE Dance is also designed to build and increase confidence and self-esteem, improve problem-solving skills and refine your creativity to shine to its highest standard. This qualification is also suitable if you wish to absorb a wealth of transferable skills that can be used to thrive in a variety of different careers in the future. Teamwork, leadership and collaborative skills are needed when working together. Rehearsing and refining technical exercises and choreography enables students to build tenacity, perseverance and resilience.
What might the subject lead into?
Many of the skills acquired in following GCSE Dance are essential to students’ other areas of study and within the world of work. The course is also a useful introduction to careers that stretch far beyond the familiar roles of dancer, teacher and choreographer to include health care practitioners, researchers, dance scientists, writers, producers, programme managers, costume designers and many more. Dance and Performing Arts courses are available at local colleges, and favour students who have this qualification. Universities offer Dance and combinations of various creative courses, some linked with Science. Alternatively, there are many well-renowned Dance academies, schools and conservatoires.
GCSE Drama - Key Stage 4
Why choose this course?
The new specification engages and encourages students to become confident performers and designers with the skills you need for a bright and successful future. You’ll learn to collaborate with others, think analytically and evaluate effectively. GCSE Drama students will gain the confidence to pursue their ideas, reflect and refine their efforts. Whatever the future holds, students of GCSE Drama emerge with a toolkit of transferable skills, applicable both in further studies and in our ever changing global workplace.
Who is it suitable for?
WHAT WILL I STUDY? The subject content for GCSE Drama is divided into three components:
1. Understanding drama
2. Devising drama
3. Texts in practice
HOW WILL I BE TAUGHT? There is as much opportunity as possible for you to do what you like best – participating in performance. All students devise drama. All students explore texts practically and work on two text-based performances. Students can choose to develop as a:
- performer
- designer (lighting, sound, set, costume, puppets)
- performer and designer.
Whichever option you choose, you can be sure to learn invaluable skills, both theatrical and transferable, that will expand your horizons.
HOW WILL I BE ASSESSED? The written exam paper is designed to help all students realise their full potential. We use a variety of question styles and ask students to combine what they’ve learned about how drama is performed with their practical experience and imagination.
This specification ensures continuity for progressing from GCSE Drama to AS and A level Drama and Theatre. Students who go on to AS or A level are already familiar with studying a whole set text for the written paper. They have built solid foundations in reviewing a live theatre production and in interpreting key extracts.
What might the subject lead into?
You could go on to further study at 6th Form, College or a specialist Drama school, which could then lead to University. Future pathways could include Production Arts, Film Studies, Psychology, Philosophy, English, English Literature, and Media Studies. These can lead to careers in broadcasting, film, media, TV and theatre such as actor, director, stage manager. | https://www.cityofpeterboroughacademy.org/page/?title=Performing+Arts&pid=1211 |
The American Geophysical Union (AGU) brings many resources to bear on finding solutions to our world’s most pressing problems. Foremost are our members, who represent the full breadth of the Earth and space sciences. Aiding those members and other researchers in their work, our highly cited journals reflect the expansive range of cutting-edge research. Our meetings and conferences bring together the brightest minds to share critical new information with one another and to identify areas ripe for collaboration.
With those resources, AGU
- promotes the wise management of our planet based on scientific knowledge,
- builds an inclusive global community of Earth and space scientists who share ideas and solve important problems,
- gives decision makers and the general public sound scientific information to inform the debate on societal issues,
- improves the scientific literacy of the next generation of citizens, and
- develops a continuing flow of the highest-quality scientists to tackle the problems of the future.
Pressing Need
To enable these endeavors, AGU has revitalized its approach to development. Members and donors have stepped up to aid our work, but the need continues to grow.
Last year, our Voluntary Contribution Campaign raised nearly $400,000 to support programs and initiatives across AGU. A look at who contributed shows that AGU has a very strong base of donors. Those supporting AGU with gifts of $500 or more made up just over 2% of donors and contributed a total of nearly $200,000 (see Figure 1). Just 37 donors made gifts of $1000 or more in 2014, equating to 1 out of roughly every 1600 AGU members. Nearly 69% of our 4400 donors gave less than $50, totaling almost $32,000.
Even with more than 4000 donors, only 7% of AGU members supported AGU with charitable gifts in 2014. What’s more telling is that the number of donors has fallen at a significant rate since 2011, when there were nearly 7500 donors (see Figure 2).
Help us reverse this declining trend! Support AGU and make a difference today. The participation goal for 2015 is 12%, or 7200 donors.
What Can You Do Now?
Supporting AGU programs makes a demonstrable difference in the lives and careers of our members and in the global understanding of scientific advancements and issues.
Student Travel Grants support AGU’s student members as they attend and present at their first Fall Meeting. This experience opens the door to fruitful and long-lasting careers, feeding the talent pipeline and ensuring that the next generation of great scientists is well trained and ready for the field. Gifts of $500 and $1000 can fund one domestic or international student’s attendance, respectively, at a Fall Meeting.
The Mass Media Fellowship and Congressional Science Fellowship place highly qualified, accomplished scientists and engineers in assignments where they can learn about science communication or policy making and contribute their expertise to those areas. At reputable and well-known media outlets, Mass Media Fellows learn to communicate about science like professional journalists while contributing their expertise to news media coverage. In the offices of members of Congress or committees, Congressional Science Fellows enable more effective use of scientific knowledge in government and get firsthand experience in the use of technical information in policy decisions.
AGU’s 23 sections and focus groups create and facilitate opportunities for AGU members to network with colleagues in their field, honor luminaries, support the next generation by mentoring students and early-career scientists, and foster scientific discussion and collaboration among their affiliates.
The 2015 Challenge
Many individuals give, but at very modest levels. We challenge all members to consider at least a $50 contribution in 2015. We realize everyone has different financial situations—some can give more, others less—but we encourage all to make meaningful contributions to ensure the future of our organization and the next generation of Earth and space science leaders.
When you make a contribution, encourage others to contribute as well. If every donor encourages one colleague to follow suit, 2015 will be the most successful fundraising year in AGU’s history! To make your gift, visit giving.agu.org.
—Jeff Borchardt, Development Director, AGU; email: [email protected]
Citation: Borchardt, J. (2015), Donors can help AGU climb to greater heights, Eos, 96, doi:10.1029/2015EO038069. Published on 28 October 2015.
Text © 2015. The authors. CC BY-NC 3.0
Except where otherwise noted, images are subject to copyright. Any reuse without express permission from the copyright owner is prohibited. | https://eos.org/agu-news/donors-can-help-agu-climb-to-greater-heights |
(NASA.gov) – Engineers completed connecting the Pressurized Cargo Module with the Service Module to form the Cygnus spacecraft that will ferry more than 7,000 pounds of supplies, equipment and experiments to the International Space Station during its December mission.
Working inside the Space Station Processing Facility at NASA’s Kennedy Space Center in Florida, crews lifted the cargo module off its work stand and lowered it precisely onto the service module before completing the connections of fasteners and systems.
The service module contains the power-producing solar arrays, propulsion system and instrumentation to steer the spacecraft once it reaches orbit.
Not carrying any crew, the Cygnus will fly autonomously to the station where astronauts there will use the robotic arm to latch onto the spacecraft and berth it to a port for unloading.
A United Launch Alliance Atlas V will lift the Cygnus into space from Space Launch Complex 41.
The Cygnus spacecraft is an American automated cargo spacecraft developed by Orbital Sciences Corporation as part of NASA‘s Commercial Orbital Transportation Services (COTS) developmental program.
It is launched by Orbital’s Antares rocket and is designed to transport supplies to the International Space Station (ISS) following the retirement of the American Space Shuttle.
Since August 2000 ISS resupply missions have been regularly flown by Russian Progress spacecraft, as well as by the European Automated Transfer Vehicle, and the Japanese H-II Transfer Vehicle. | https://spacecoastdaily.com/2015/10/nasa-engineers-complete-pressurized-cargo-module-to-form-cygnus-spacecraft/ |
Q:
Linear span- geometric interpretation in $\mathbb R^3$
Prove that Span({$x$})={$ax: a\in F$} for any vector $x$ in a vector space. Interpret this result geometrically in $\mathbb R^3$
My attempt at the first part:
By definition, Span({$x$}) is the set of all linear combinations of vectors from {$x$}. Since the only vector available is $x$, every such linear combination has the form $ax$ for some $a\in F$, and conversely every $ax$ with $a\in F$ is itself a linear combination of $x$. So Span({$x$})={$ax: a\in F$}.
I'm confused about the second part. I think it geometrically represents all the points on the $x$-axis.
Am I correct about both the parts?
A:
See, any vector space must contain the zero element, and indeed $0 = 0\cdot x$ is in the span. Geometrically, when $x \neq 0$ the span of $\{x\}$ is the line passing through $x$ and $0$, i.e. the line through the origin in the direction of $x$; it coincides with the $x$-axis only if $x$ happens to lie on that axis. When $x = 0$ the span is just the single point $\{0\}$.
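To spell out the first part (a standard argument, added here for completeness rather than taken from the original answer), one can check the two inclusions directly:
$$\operatorname{Span}(\{x\}) \subseteq \{ax : a\in F\}: \quad \text{a linear combination built from } x \text{ alone is } ax \text{ for some } a\in F,$$
$$\{ax : a\in F\} \subseteq \operatorname{Span}(\{x\}): \quad \text{each } ax \text{ is itself a linear combination of } x,$$
and note that $\{ax : a\in F\}$ is closed under addition and scalar multiplication, since $ax + bx = (a+b)x$ and $c(ax) = (ca)x$.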
Four Elements in Astrology
Earth, Water, Fire, and Air are the four elements of nature that help us better understand many things in life, including astrology. Each of these elements has a rich history that was known long before our days.
Table of Contents
- Elements Overview
- Qualities of the Zodiac Sign
- The Air Element
- The Fire Element
- The Water Element
- The Earth Element
These four elements have always been the basis of everything that surrounds us. Alchemists used them to find answers to numerous questions, and ordinary people used them to try to understand the essence of mother nature.
Zodiac and the Elements
Each of the four elements speaks of certain tendencies inherent in the zodiac signs to which they belong. They explain that there are other ways to look at each person. Because of this, we are more tolerant of those who are different from us. We can also understand why some signs are more compatible than others.
In short, the element of Water is perfectly combined with the element of Earth, and the element of Air — with the element of Fire. If you think about it, this is quite logical, because Water feeds the Earth and gives us life, and thanks to the Air, the Fire does not go out, igniting the light and the very power of creation.
The natural order of things
Life itself would not be possible without any of these four elements — if you look at the question in more detail, it becomes clear as day. Interacting with each other, they are in perfect harmony in nature. And the division into higher and lower energies is only an internal human need to divide things into black and white — good and bad, feelings and reason, male and female, plus and minus.
But in no case should we forget that these elements are worth nothing without each other. They all share a common trait known as matter.
Matter is the connecting link that connects each element to what we represent as our body, and it is the element of the Earth. Without this, life as it is would be impossible, and you would not be able to taste, touch, smell, or even listen.
Therefore, the manifestation of all things depends on the Earth element, and this element is most difficult to find a replacement in the Natal map when it is not positioned too obviously.
Interpretation of Elements
When you open someone's chart, the first thing you must do is check it for the occupancy of each element. If you see that one of them is missing a planet, then this is a vast, but still particular problem that the owner of the chart will have to face in his life.
When the chart reveals the absence of a planet in one of the elements, it means that a person is not able to link with this element. This leads to problems on the other levels of existence, because all the elements must cooperate so that something true and meaningful can be created in our lives.
This lack is frequently compensated for through the houses joined to the element in question, but it will never be simple for someone with this situation in their chart to find the balance between the levels of life the elements speak of.
When the situation is the opposite and your client has multiple planets in only one of the elements, it will be difficult for them to slow down, balance those planets with all the others, and stay calm.
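If it helps to see the check described above as a procedure, here is a small sketch (Python; the sign-to-element table is the standard one, while the sample chart and the thresholds are invented for illustration) that counts the planets in each element and flags an empty or overloaded element:

    # Count chart planets by element and flag missing / overloaded elements.
    ELEMENT_OF_SIGN = {
        "Aries": "Fire", "Leo": "Fire", "Sagittarius": "Fire",
        "Taurus": "Earth", "Virgo": "Earth", "Capricorn": "Earth",
        "Gemini": "Air", "Libra": "Air", "Aquarius": "Air",
        "Cancer": "Water", "Scorpio": "Water", "Pisces": "Water",
    }

    # Invented sample chart: planet -> sign it occupies.
    chart = {"Sun": "Leo", "Moon": "Aries", "Mercury": "Virgo",
             "Venus": "Leo", "Mars": "Sagittarius", "Jupiter": "Leo",
             "Saturn": "Capricorn", "Uranus": "Aries", "Neptune": "Leo",
             "Pluto": "Sagittarius"}

    counts = {"Fire": 0, "Earth": 0, "Air": 0, "Water": 0}
    for planet, sign in chart.items():
        counts[ELEMENT_OF_SIGN[sign]] += 1

    print(counts)
    for element, n in counts.items():
        if n == 0:
            print(f"{element} is empty: the person may struggle to connect with it.")
        elif n >= 5:
            print(f"{element} is heavily occupied: balancing it may be difficult.")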
Special Planets
When a person has only one planet in one of the elements, particularly if it is one of the lights or visible planets, this is a very special situation. It's like a drowning man who has only one straw, clutching at it as if it's his only way out.
If your chart contains such a planet, try not to give it too much energy, and try to find a general balance. | https://freehoroscope.info/articles/four_elements_in_astrology.html |
Carly is a Career Placement Counselor at the Mayor’s Office for People with Disabilities (MOPD) Career Center. The Career Center, which was launched in July 2022, seeks to increase meaningful employment and career opportunities for Chicagoans with disabilities. As a Career Placement Counselor, Carly works directly with job seekers throughout their job search, including assisting them in identifying their career goals, providing interview preparation, navigating disability disclosure and requesting Reasonable Accommodations. Prior to joining MOPD, Carly worked at the Chicago Lighthouse for the Blind. There, she worked with clients individually to identify their vocational goals, providing job training and placement services. Before moving to Chicago, Carly was an Employment Counselor at the Queensborough Public Library in New York. She developed a staff training on appropriate ways to assist library patrons with disabilities and educated staff on disability laws and policies, improving relations between the library and the public.
Carly uses her knowledge of the Americans with Disabilities Act to advocate and educate in a friendly and professional manner. She has strong cultural competency and has worked with individuals with varying disabilities and backgrounds. She advocates on behalf of clients, ensuring access to services and resources, including benefits from social service and government agencies. Carly holds her Certification as a Rehabilitation Counselor and is a Licensed Professional Counselor. She received her Bachelor's in Sociology from New Paltz University and her Master's in Rehabilitation Counseling from Hofstra University.
Masato Taki
Mathematical Physics Lab., RIKEN Nishina Center,
Saitama 351-0198, Japan
The most recent studies on supersymmetric localization reveal many non-trivial features of supersymmetric field theories in diverse dimensions, and 3d gauge theory provides a typical example. It was conjectured that the index and the partition function of a 3d theory are constructed from a single component: the holomorphic block. We prove this conjecture for non-abelian gauge theories by computing exactly the 3d partition functions and holomorphic blocks.
1 Introduction
The pioneering work by Pestun on the partition function of four-dimensional (4d) theories triggered great progress on the localization computation of supersymmetric gauge theories in diverse dimensions and on various manifolds. Localization of three-dimensional (3d) theories is a focus of recent attention. Kapustin, Willett, and Yaakov [3, 4] extended Pestun's idea to gauge theories on the three-sphere, and they obtained matrix model representations for the supersymmetric partition functions of these theories. We can solve these matrix models in the large-N limit; for instance, the ABJM partition function was computed by Drukker, Marino, and Putrov. They found that the free energy of the ABJM theory actually shows the N^{3/2}-scaling behavior which had been suggested by the AdS/CFT argument. This result is a typical example of the power of the localization approach.
The efficiency of localization reaches beyond the large-N approximation. The matrix models for partition functions of gauge theories on the squashed three-sphere were derived in [6, 7]. The integrand of this matrix model consists of a complicated combination of double-sine functions, and at first glance it looks hard to evaluate exactly. In [8, 9], however, the authors succeeded in solving these matrix models exactly. In particular, the partition functions of 3d theories computed there show the following factorization property:
(1.1)
Here the two factors are the K-theoretic vortex and antivortex partition functions [10, 11]. The summation is taken over the supersymmetric ground states which specify the vortex sector. This factorization into vortices is the 3d analogue of Pestun's expression
(1.2)
In this 4d case, the ground states are labeled by a continuous moduli parameter, so we take an integral over it after combining the contributions from instantons and anti-instantons. The 3d factorization is therefore expected to originate from localization carried out in a different way. (The factorization of 2d theories was shown along this line in [12, 13].)
In this article we prove that factorization of this type actually occurs in non-abelian gauge theories. The matrix model for a non-abelian theory involves a complicated interaction, and so it is not easy to compute it straightforwardly. We therefore employ the Cauchy formula (an idea suggested earlier in the literature) and resolve the problem into that of an abelian theory. We found that the factorized partition function is consistent with the vortex/antivortex partition functions for the corresponding non-abelian theory. Our result strongly supports the conjecture on the factorization of generic 3d theories.
This article is organized as follows. In Section 2, we review the factorization of supersymmetric partition functions and superconformal indices of 3d gauge theories. In Section 3, we compute exactly the partition functions of non-abelian gauge theories based on the matrix model representation coming from localization. We then find that the partition functions are actually factorized into the holomorphic blocks. The topological string interpretation of these holomorphic blocks is given in Section 4. Section 5 is devoted to discussions of our results and future directions.
2 3d partition functions and factorization
In this section we provide a review of localization and the resulting factorization of the partition function and the superconformal index of a 3d gauge theory with supersymmetry. The factorization is so far only a conjecture for generic theories; however, there exists a nice geometric interpretation of this phenomenon.
The partition functions of 3d theories were calculated with the help of supersymmetric localization. The path integral for a theory on the squashed three-sphere is
(2.1)
For a suitable choice of the scalar supercharge and the deformation action, we can calculate it exactly in the localization limit [15, 16]. As we will review in Appendix B, the partition function then becomes a kind of matrix model. The factorization of these partition functions for 3d abelian theories was found by Pasquetti:
(2.2)
where the conjugation operation in the sum acts on the parameters by an S-duality transformation, and we therefore call it the S-pairing. The geometric meaning of the S-transformation will become clear in this section.
The supersymmetric index is also an important quantity for capturing part of the quantum dynamics of a theory. The 3d superconformal index, which is defined for a 3d SCFT, is the following trace taken over the Hilbert space of the theory on the two-sphere:
(2.3)
Here one of the chemical potentials couples to a Cartan generator of the flavor symmetry. The quantum numbers under the Cartan generators of the compact subgroup of the bosonic part of the 3d superconformal group label the states of the 3d theory. The above superconformal index then counts the BPS states:
(2.4)
so the index does not depend on the corresponding parameter, and we can take a convenient limit to evaluate it.
On the one hand, we can write down the index as a twisted partition function of the 3d theory defined on the curved space-time S^2 x S^1,
(2.5)
The index was also calculated by using the localization formula [17, 18], and then the path integral reduces to an ordinary integral over the Cartan of the gauge group via supersymmetric localization. Factorization of the resulting expression for the index was predicted in the form
(2.6)
For 3d abelian gauge theories, it is observed that this building block of the index is identical to that of the factorized partition function above. The difference lies in the meaning of the conjugated variables. The conjugation here is merely an inversion of the variables, and we call this pairing the id-pairing.
It has been proposed that the above-mentioned factorization originates in the geometry on which the quantum field theory is defined. As illustrated in Figure 1, the two space-times are obtained as different gluings of a pair of solid tori. Actually the squashed sphere, on which our discussion focuses, is the S-gluing of two such half geometries, and this building block is the Melvin cigar [22, 14], in which the cigar geometry fibers over a circle with a holonomy. The holomorphic blocks are defined as the partition functions on this Melvin cigar. Such a partition function is just the wave function for the Hilbert space on the asymptotic boundary. Along the infinite time direction of the cigar, a state evolves into a ground state, so the wave function depends on the choice of the supersymmetric vacuum which specifies a state on the boundary. The gluing of the two geometries through an element of SL(2,Z) then implies the following form of the partition function on the total geometry:
(2.7)
This is the geometric origin of the conjectured factorization.
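Schematically — and with the caveat that the labels and fugacity assignments below are generic conventions assumed for illustration, not a transcription of the equation above — the glued partition function takes the form
$$ Z[M_g] \;=\; \sum_{\alpha} B^{\alpha}(x;q)\,\widetilde{B}^{\alpha}(\tilde{x};\tilde{q})\,, $$
where the sum runs over the supersymmetric vacua $\alpha$ and the tilded variables are obtained from the untilded ones by the gluing element $g$: the S-transformation for the squashed three-sphere and the identity for the superconformal index.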
As we mentioned, the factorization has actually been observed in abelian partition functions and non-abelian superconformal indices. So in order to verify this conjecture for a wide range of 3d theories, we have to confirm the factorization phenomenon for partition functions of non-abelian gauge theories. In the following sections, we show that non-abelian partition functions actually factorize into holomorphic blocks which are consistent with the superconformal index and have a nice 3d interpretation.
3 Factorization of ellipsoid partition functions
The following matrix model gives the partition function of a non-abelian gauge theory with fundamental and anti-fundamental matter on the squashed three-sphere [6, 7, 15, 16]:
(3.1)
where the parameters are the Chern-Simons coupling, the FI parameter for the overall U(1) factor, and the squashing parameter of the three-sphere. We give masses to the fundamental and anti-fundamental chiral fields. In a special case this theory is a mass deformation of SQCD, and a pair of a fundamental and an anti-fundamental chiral forms a hypermultiplet of the theory. One type of deformation is the vector mass for the original hypermultiplet, and the other type is the axial mass. In the following, we turn on these mass deformations.
3.1 vector-like theory
We start by studying a non-chiral gauge theory whose matter content consists of mass-deformed hypermultiplets. The localized partition function is
(3.2)
For an appropriate range of the parameters we can close the integration contour in the upper half-plane as in Figure 2; this approach was employed for generic Chern-Simons coupling in earlier work, and in this paper we follow the argument there.
To evaluate the matrix model for the non-abelian theory, the best strategy is to translate this “many-body” problem into that of an abelian theory, i.e. a collection of non-interacting one-body systems. This idea of “abelianization” plays a role in solving many problems, for instance in [8, 23], and the Cauchy formula is the key in these articles. The Cauchy formula implies
(3.3)
for auxiliary permutation variables. With this formula, we can resolve the “sinh-interaction” between “particles” into a collection of one-particle systems in a background. We therefore use this formula as a separation of variables for our problem.
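For reference, a standard form of the Cauchy identity used in this kind of abelianization is reproduced below; the normalization and the precise arguments of the hyperbolic functions are conventions assumed here and may differ from the equation above.
$$
\frac{\prod_{i<j} 2\sinh\frac{x_i-x_j}{2}\;\prod_{i<j} 2\sinh\frac{y_i-y_j}{2}}{\prod_{i,j} 2\cosh\frac{x_i-y_j}{2}}
\;=\;
\det_{i,j}\frac{1}{2\cosh\frac{x_i-y_j}{2}}
\;=\;
\sum_{\sigma\in S_N}(-1)^{\sigma}\prod_{i=1}^{N}\frac{1}{2\cosh\frac{x_i-y_{\sigma(i)}}{2}}\,,
$$
which converts the pairwise sinh-interactions of the non-abelian integrand into a sum over permutations of products of decoupled one-body factors.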
Substituting the formula (3.1) into the partition function, we obtain the following expression:
(3.4)
(3.5)
Therefore the evaluation of the abelian integral immediately implies an explicit formula for the non-abelian partition function. This integral is essentially equal to one computed previously, and we provide an explicit computation in Appendix C. By computing this integral exactly, we find the following factorized form of the partition function of the 3d non-chiral theory:
(3.6)
Here the summation is taken over a sequence of integers which labels the supersymmetric ground states of the theory. The perturbative part is given by
(3.7)
(3.8)
where
(3.9)
The remaining parts, which are the holomorphic blocks, take the form
(3.10)
(3.11)
These blocks are precisely equal to the 3d (K-theoretic) uplift of the vortex and anti-vortex partition functions for the gauge theory with anti-fundamental and fundamental chiral multiplets. The Coulomb branch and mass parameters for the vortex theory are
(3.12)
(3.13)
(3.14)
See Appendix C for detailed computation and discussion.
The classical and 1-loop part is basically a product of abelian contributions, and we can show that it is precisely the perturbative part of the vortex/antivortex partition function. Actually, we have the factorization of the 1-loop contributions
(3.15)
where
(3.16)
(3.17)
The prefactor can be absorbed into the classical part, up to an irrelevant overall coefficient, by changing the FI parameters
(3.18)
We thus obtain the factorization for the non-chiral theory as a natural extension of the abelian case:
(3.19)
As we had expected, the holomorphic block coming from this factorization coincides with that of the superconformal index of the same gauge theory. We can therefore conclude that a single holomorphic block leads to not only the partition function but also the superconformal index of the vector-like gauge theory. As we will see below, this fact also holds for chiral theories.
In the next section, we will see that the non-abelian holomorphic block derived here can be reformulated as an open topological string partition function in the presence of A-branes on the strip geometry.
3.2 chiral theory
We move on to studying a chiral gauge theory. In this section we deal with a gauge theory with fundamental chiral multiplets. The partition function is given by the following matrix model:
(3.20)
Here we turn on the axial masses for the chiral multiplets by turning on the scalar VEVs of the background vector multiplets of the weakly-gauged flavor symmetry.
This partition function also takes the factorized form
(3.21)
See Appendix C for detailed computation. The perturbative part is given by
(3.22)
(3.23)
The proportionality coefficient in the last line can be absorbed into the classical part by a change of the couplings
(3.24)
The full partition function then takes the following factorized form
(3.25)
The holomorphic block for this chiral theory is
(3.26)
The anti-vortex part is given by an analogous replacement of the parameters. This block is precisely equal to the vortex partition function for the gauge theory with fundamental chiral multiplets, and it is the non-abelian generalization of the holomorphic block for the abelian theory.
4 Vortex partition functions and topological strings
In this section, we provide an interpretation of the non-abelian vortex partition functions in terms of open topological string theory. It was shown previously that the holomorphic blocks for abelian gauge theories are given by partition functions of open topological strings in the presence of a single A-brane. By generalizing this argument, we demonstrate that open topological string partition functions with multiple A-branes give the holomorphic blocks for non-abelian gauge theories.
The topological string partition function of the strip geometry of Figure 4, which gives the Nekrasov partition function of a 4d gauge theory with flavors, is
(4.1)
where
(4.2)
In the following we show that the open string partition function on this geometry gives the holomorphic block for the non-chiral theory of the previous section. Notice that the geometry is the same as in the abelian case; the difference is the number of A-branes we insert in the geometry, and we now consider multiple branes for the non-abelian gauge theory. Let us consider A-brane insertions on the legs of the strip geometry as in Figure 4. Since the world-sheet instantons on a single A-brane are labeled by Young diagrams, the following assignment of representations leads to the open string partition function for a given instanton mode:
(4.3)
Let us introduce the open string moduli on each brane. The open topological string partition function is then given by the following partition function of the strip geometry with non-trivial representations:
(4.4)
Under the identification between parameters,
(4.5)
this open string partition function is precisely the holomorphic block for the 3d vector-like theory, up to an overall monomial which is not relevant for our discussion. In other words, the open topological string partition function on the strip geometry gives the K-theoretic uplift of the vortex partition function for the theory with anti-fundamentals and fundamentals. This generalizes the relation between the vortex partition function and topological strings found earlier for the abelian case.
It is straightforward to generalize this computation to the chiral theory. The relevant geometry in this case is the half geometry, which is again the same as in the abelian case. We skip the detailed computation since it is merely a slight modification of the above, but it is easy to see that the corresponding partition function gives the holomorphic block for the gauge theory with fundamentals.
5 Discussion
In this article, we computed exactly the supersymmetric partition functions of gauge theories on the squashed three-sphere. We found that the resulting expression shows a factorized structure, and that it leads to the expected holomorphic blocks of the 3d theory. In this way we gave an explicit proof of the factorization conjecture for a range of non-abelian gauge theories. The obtained holomorphic blocks are consistent with the computation of the superconformal index, and we found that the blocks can be recast into open topological string partition functions with A-branes.
The vortex counting of a 2d theory coupled to a bulk 4d gauge theory describes a surface operator of the 4d theory [26, 27, 11, 28, 29, 30, 31, 32]. Since the 3d uplift of the vortex theory gives the holomorphic block, it is very interesting to investigate the 5d uplift of the surface operator, which is a 3d gauge theory coupled to a 5d gauge theory. In this way we can synthesize the factorizations of the 4d theory (1.2) and the 3d theory (1.1), and this interplay should lead to new phenomena in 5d. From the perspective of the AGT correspondence, q-deformed Toda theory will play a role in this direction [33, 28, 34]. It should be interesting to study this issue further.
Acknowledgements
The author would like to thank N. Hama for useful discussions. This work is supported by the Special Postdoctoral Researchers Program at RIKEN.
Appendix A Double-sine function
Jafferis found that the following ℓ-function plays a role in the localization computation and the F-extremization of three-dimensional theories:
(A.1)
whose defining property is
(A.2)
Let us consider this function further. It satisfies many nice properties, and we can show that it is a specialization of the double-sine function:
(A.3)
The double-sine function is defined as a natural extension of the sine function through the product expression
(A.4)
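For reference, the product expression commonly quoted for the double-sine function is the following; the convention (in particular the factor of $i$ and the appearance of $Q = b + b^{-1}$) is an assumption here and may differ by normalization from the one intended above.
$$
s_b(x) \;=\; \prod_{m,n\ge 0}\frac{mb + nb^{-1} + \tfrac{Q}{2} - ix}{mb + nb^{-1} + \tfrac{Q}{2} + ix}\,, \qquad Q = b + b^{-1}\,,
$$
from which the inversion relation $s_b(x)\,s_b(-x) = 1$ is immediate.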
We can recast this definition into the language of Barnes gamma function
(A.5)
From the definition, we can easily see the inversion relation $s_b(x)\,s_b(-x) = 1$.
The double-sine function is meromorphic, and it satisfies the following properties
(A.6)
(A.7)
where we introduce
(A.8)
These formulas come from the following expression of this function:
(A.9)
We also have the integral representation
(A.10)
The residue at the pole can be read off from these expressions.
Appendix B 3d partition function
The supersymmetric localization of gauge theories on the three-sphere enables us to compute their partition functions as a conventional matrix model over the gauge group [6, 7]:
(B.1)
Here the “dynamical” variable originates from the auxiliary scalar component of the vector multiplet. The localization reduces the path integral onto the constant VEVs of the scalars. The saddle point approximation is then exact, and the one-loop computation provides the factor
(B.2)
The denominator cancels with the Vandermonde determinant when we replace the matrix integral with the eigenvalue integral.
A chiral multiplet in a given representation with R-charge (conformal dimension) gives the contribution
(B.3)
These results are generalized to theories on the squashed sphere [15, 16]. There are several realizations of the squashed three-sphere, and each preserves a different subgroup of the isometry of the round sphere. In this article we adopt the realization that is natural in our context. The building blocks of the partition functions then receive a slight modification by the squashing parameter:
A simple question,
“Can you perform the same role using different skills?”
From another perspective,
“Can the same skill set be applied to different roles?”
Why do these questions matter?
How you answer them will say a great deal about the kind of organizational culture you are building. Are you building a culture of specialists or generalists? Do you want people to have deep expertise in one or possibly two domains, or are you going to invest in growing people with comb-shaped skill profiles who can go deep in several different areas? What mix do you currently have, and what is the optimal mix for your organization?
|                                | One Role    | Multiple Roles |
| One Skill Set applied to       | Specialists | Generalists    |
| Multiple Skill Sets applied to | Diversity   | Comb-Shaped    |
Most organizations will want a mix of people. They will need some generalists and some specialists. People who are adept at applying their skills to more than one role are often the glue that connects the different roles and promote the sharing of knowledge across roles. People who have comb-shaped skill profiles (they are relatively deep in several domains) and who work across multiple roles are likely rare today, but they are also likely to be the places where innovation sparks.
In most cases the real world probably looks something like this. There is a set of core skills that everyone in a role is expected to have. Then each person has some differentiating skills of their own.
For most roles, there will be a set of common or core skills. Without these core skills it will be very difficult to perform the role at an acceptable level. For all but the simplest roles, these core skills will be necessary but not sufficient. There will be other skills that each person brings to the role that support their special approach and that make them more suitable for one project than another. Then there are the supporting skills, that help the person perform the role. They may be optional, but at least some of them need to be present.
Simpler or better-established roles may have more standardized skill models.
There will be more core skills and fewer differentiating or supporting skills. It is easier to train for this type of role. But there is a cost. The lack of diversity in skills will generally lead to less creative and resilient organizations.
An examination of the TeamFit skill graph suggests that Account Managers, Project Managers and Program Managers share a common set of skills: communication, negotiation, planning, trade-off management, risk management. Most people who excel at project management will be able to apply their existing skills to account management or program management, or, as we noted in an earlier post, to product management.
At TeamFit, we are developing an open and flexible data model that allows for many-to-many mapping between skills, projects and roles. In other words, any one skill may be applied to many different projects and be part of many different roles. There will be no one ideal set of skills that characterizes a role. Instead, there will be several different sets of skills that can be used to carry out a role. (It actually goes much deeper than this: for any individual, each skill will have a unique set of links to other skills and will have been applied in different projects – your skill in Design Thinking will be made up of different associated skills and have been applied in different ways than my own Design Thinking skill.)
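To make the idea concrete, here is a minimal sketch (in Python, with invented class and field names — this is not TeamFit's actual schema) of what a many-to-many skill graph can look like: people, skills, roles and projects are linked so that any skill can appear in many projects and many roles.

    # Minimal sketch of a many-to-many skill graph; names are illustrative only.
    from collections import defaultdict

    class SkillGraph:
        def __init__(self):
            # (person, skill) -> set of projects where that person applied that skill
            self.applications = defaultdict(set)
            # role -> set of skills observed in that role
            self.role_skills = defaultdict(set)

        def record(self, person, skill, project, role):
            """Record that `person` applied `skill` on `project` while in `role`."""
            self.applications[(person, skill)].add(project)
            self.role_skills[role].add(skill)

        def skills_shared_by(self, *roles):
            """Skills that appear in every one of the given roles (e.g. the PM/AM overlap)."""
            sets = [self.role_skills[r] for r in roles]
            return set.intersection(*sets) if sets else set()

    graph = SkillGraph()
    graph.record("Alice", "negotiation", "CRM rollout", "Project Manager")
    graph.record("Bob", "negotiation", "Key account renewal", "Account Manager")
    graph.record("Alice", "risk management", "CRM rollout", "Project Manager")
    print(graph.skills_shared_by("Project Manager", "Account Manager"))  # {'negotiation'}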
This open and evolutionary approach to the skill graph is required in today’s world. The rapid evolution of job roles, the new skills needed, and the unique ways that people combine skills to build teams mean that a more rigid approach will crack as soon as people begin to apply it.
TeamFit’s mission is to help individuals and organizations to expand their potential and to make a difference by applying their skills to projects.
Top image from Penn State course on Flowering Plants.
Understanding the skills you have and the skills you need shouldn’t be so hard.
TeamFit can quickly and precisely give you the skill insights you have always wanted. | http://hq.teamfit.co/many-to-many-mapping-how-skills-and-roles-connect/ |
Voice Media Group, LLC is a limited liability company (LLC) located at 1201 E Jefferson in Phoenix, Arizona that received a Coronavirus-related PPP loan from the SBA of $4,004,000.00 in April, 2020.
The company has reported itself as a White male owned business, and employed at least 251 people during the applicable loan loan period.
$ PPP Loan Information
Loan Size:
Jobs Retained:251
Loan Approved:2020-04-06
Lender:ZIONS BANK, A DIVISION OF
Voice Media Group, LLC received a Paycheck Protection Loan of $4M through ZIONS BANK, A DIVISION OF, which was approved in April, 2020.
Based on standard PPP eligibility rules, Voice Media Group, LLC's total 2019 payroll expenses were approximately $19.22M in order to qualify for the PPP loan amount received.
Based on their reported 251 jobs retained, this equals an estimated average yearly compensation of $76,571 per employee1.
Business Information - Voice Media Group, LLC in Phoenix, AZ
1201 E Jefferson
Phoenix, AZ 85034
Business Industry
Advertising Agencies (NAICS code 541810)
Business Owner Demographics
Race / Ethnicity: WHITE
Gender: MALE OWNED
Veteran Status: NON-VETERAN
Congressional District: AZ-07
Similar Companies near Phoenix
In the Phoenix area, 5 businesses in the "Advertising Agencies" industry received a PPP loan. These local businesses reported an average of 51 employees (compared to this company's 251) and received an average PPP loan of $823,311 (compared to this company's $4M).
Similar Nearby Businesses Who Received PPP Funding:
Michael Watson
Phoenix, AZ
Cultivator Content Labs, LLC
Phoenix, AZ
Mary Rabago LLC
Phoenix, AZ
Sales Builders Marketing Inc
Phoenix, AZ
Industry PPP Comparison Statistics
Nationwide, 11,928 businesses in the "Advertising Agencies" industry received a total of $1,505,946,886.00 in PPP loans. These businesses account for 0% of total PPP applications submitted, and received 0% of the total PPP funding allocated.
PPP recipients in this industry report an average of 8 employees, 97% lower than Voice Media Group, LLC's reported 251 employees, and received an average PPP loan of $126,253, 97% lower than this company's loan of $4M.
FederalPay's PPP Information Policy
Paycheck Protection Loan data has been made public by the Small Business Administration (SBA) for all private companies that received a PPP loan.
All information displayed on this page is publicly available information under PPP loan guidelines, in compliance with 5 U.S.C. § 552 (Freedom of Information Act) and 5 U.S.C. § 552a (the Privacy Act) and is published unmodified, as provided by the SBA.
Footnotes & Information
1. Payroll and salary estimates assume the borrower used the standard PPP calculation of 2.5 x average 2019 monthly payroll costs to determine PPP loan eligibility. Calculation methods vary based on entity type. Please read the latest official SBA PPP calculation rules for a full explanation of PPP loan amount calculation methods.
2. If a company's reported number of employees divided by the maximum PPP range amount per the SBA is greater than $100,000, the estimated maximum PPP loan received by the company can be adjusted down to assume no more than $100,000 yearly salary per employee was used in the PPP application. While employees at the company may earn more, $100k / employee is the maximum amount that can be used in PPP eligibility calculations.
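As an illustration of footnotes 1 and 2, the sketch below (Python, written for this page rather than taken from FederalPay's tools) reproduces the payroll and average-compensation estimates shown above from the reported loan amount and job count.

    # Illustrative only: reproduces the published estimation method, not an official SBA calculation.
    loan_amount = 4_004_000.00   # reported PPP loan
    jobs_retained = 251          # reported jobs

    monthly_payroll = loan_amount / 2.5                 # loans were 2.5x average monthly payroll
    yearly_payroll = monthly_payroll * 12               # ~ $19.22M
    avg_compensation = yearly_payroll / jobs_retained   # ~ $76,571 per employee

    # Footnote 2: cap at $100,000, since no more of any employee's salary could be counted.
    avg_compensation = min(avg_compensation, 100_000)

    print(f"Estimated yearly payroll: ${yearly_payroll:,.0f}")
    print(f"Estimated average compensation: ${avg_compensation:,.0f}")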
Have FederalPay.org's open data tools been valuable? Consider donating! | https://www.federalpay.org/paycheck-protection-program/voice-media-group-llc-phoenix-az |
September 30, 2022 4 min read
We hear this question often from homeowners with wells: "Will my well water be safe after a nuclear event?" Unfortunately, even though your well water comes from an underground source, it probably won't be safe to drink after a nuclear incident without adequate filtration.
In fact, your well water might already be contaminated with radiological particles, even if there hasn't been a nuclear attack or other nuclear fallout nearby.
Radioactive particles travel through our oceans, into the atmosphere via the water cycle, and down to earth again. There, it seeps down into our water table and our wells.
If this information makes you nervous, take heart. There is good news. You can decontaminate your well water if you use the right kind of filter.
Today, the Seychelle team is here to explain:
Let's look at how nuclear fallout could contaminate your well water.
It's terrifying to imagine, but a direct nuclear attack on the US would have immediate and profound consequences. One study published by John W. Birks, PH.D., and Sherry L. Stephens of the University of Colorado, says a major nuclear war would likely kill 1 billion people immediately and injure hundreds of millions more.
Beyond the destruction of shelter, food, infrastructure and communications, survivors eventually discover the atmosphere and drinking water changed. Clouds of dusty radioactive material would hover in the skies, and oceans and freshwater resources would become contaminated.
Radioactive materials called radionuclides can occur naturally. Erosion, for instance, can cause mildly radioactive materials to leach into our groundwater.
But the more significant concern for most modern Americans is the possibility of nuclear war. Nearly every material we come in contact with would become contaminated in the event of an attack. This includes soil, drinking water, livestock and food crops.
When radionuclides break down (decay), they create radiation. Radionuclides are a natural part of our environment, and we're in contact with them in small amounts every day. Acute, intense exposure to large amounts of radiation (like nuclear fallout), or even long-term exposure to small quantities in an untested well, can cause significant health issues.
Radioactive drinking water causes several cancers, including:
Reproductive issues are possible as well. We still have a lot to learn about these problems, but some studies suggest that drinking water containing radionuclides may cause:
Clearly, it's vital that well-owners test their drinking water regularly for radionuclides.
About 15% of Americans use private wells as their primary source of drinking water. Unlike municipal systems, these wells are not well-regulated or routinely inspected for radionuclides. Well owners are responsible for making sure their drinking water is safe.
Well water should be tested every year for contaminants. Every third year, well owners should test for radionuclides. Well-owners can order these tests online or purchase them from their local water department.
Radionuclides are dangerous to your whole family. You can use Seychelle's Radiological Bundle to remove radiological contaminants like:
Our nuclear emergency bundle will also clear your water of inorganic contaminants like asbestos, lead and mercury. Chemical pollutants are also removed: herbicides, pesticides (DDT), detergents, benzene and other toxic chemicals.
The icing on the cake is that your well water will be clear and delicious! No more sulfur smell, no chlorine taste, no more silt or visible particulates. Just refreshing, clean, safe water for your family to enjoy. Your well water has never tasted this good.
Most wells in the US aren't regulated by government authorities. And even if they are, homeowners should still take a proactive stance toward their family’s safety.
Well water can be contaminated by radioactive particles anytime, thanks to the water cycle. You should test your well water for radionuclides every third year during the best of times and more often after a nuclear incident or event.
Contaminated well water can be extremely dangerous, but you won't be able to identify it by taste, smell, or appearance.
If you do have contaminated well water, there is a solution. Seychelle radiological filters are tested to remove up to 99% of radiological contaminants from your water.
Radiation never simply goes away. Radioactive particles have a half-life. Depending on the isotope, that half-life can be a few seconds or a few million years. During that time, the nuclei of these atoms will continue to blast radiation into the matter around them — much the same way your microwave beams energy into food.
After one half-life, the radioactive material is at half potency. Another half-life passes, and the radioactivity is cut in half again to 1/4, then 1/8, 1/16, and so on. So radioactive particles never disappear; they only grow less potent over time.
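A small sketch (Python; the half-life values are approximate and the scenario is illustrative, not a statement about any particular well) makes the halving arithmetic above concrete:

    # Fraction of radioactive material remaining after a given time,
    # using the half-life relationship described above.
    def fraction_remaining(elapsed_years, half_life_years):
        # After each half-life the amount halves: 1 -> 1/2 -> 1/4 -> 1/8 ...
        return 0.5 ** (elapsed_years / half_life_years)

    # Approximate half-lives, for demonstration only.
    examples = {
        "radon-222 (~3.8 days)": 3.8 / 365,
        "radium-226 (~1,600 years)": 1600,
        "uranium-238 (~4.5 billion years)": 4.5e9,
    }

    for name, half_life in examples.items():
        print(name, f"after 10 years: {fraction_remaining(10, half_life):.2e} of original")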
Radioactive water is water contaminated by radiological pollution. This pollution can come from nuclear power plants, nuclear incidents like meltdowns, and nuclear war. Radiological pollution can also occur naturally. Radiological materials found in well water include radium, uranium and radon. | https://www.seychelle.com/blogs/news/is-well-water-safe-to-drink-after-nuclear-fallout |
In intercollegiate athletic programs, equal access for male and female student-athletes to equitably qualified coaches is a Title IX requirement. Establishing this fair situation, however, can sometimes be tricky. The good news is that after an athletic department has analyzed its own personnel system and implemented a fair plan that incorporates the elements listed below, any further analysis is usually unnecessary unless changes occur which affect the equitable balance between the coaching staffs for men's and women's sports.
The Policy Interpretation outlines three factors to be assessed when measuring the opportunity to receive coaching: (1) relative availability of full-time coaches, (2) relative availability of part-time and assistant coaches and (3) relative availability of graduate assistants.
Two factors are listed when measuring the assignment of coaches: (1) training, experience and other professional qualifications and (2) professional standing.
Seven factors need to be assessed when dealing with the compensation of coaches: (1) rate of compensation, (2) duration of contracts, (3) conditions relating to contract renewal, (4) experience, (5) nature of coaching duties performed, (6) working conditions and (7) other terms and conditions of employment. (Title IX Athletics Investigator's Manual, 1990, p. 55)
Availability
In Division I where most of the head and assistant coaches are in full-time coaching positions, an analysis of the number of coaches allocated to the women's program compared to the men's program is relatively easy to do. In comparable sports, the total number of coaches in a men's sport program should be the same as in the women's sport program e.g. each basketball program having a total of four coaches. In non-comparable sports, a wise guide would be to use the NCAA coaching limits in each of these sports since the organization has attempted to identify for each sport the number of coaches necessary to adequately perform the responsibilities associated with that sport. A common problem is to hire the maximum number of coaches for some men's sports and fewer than the maximum for women's sports.
In other Divisions, where many of the coaches may not be full-time, the analysis is a little more complex. In this instance, there may be part-time head coaches and assistant coaches. One way to look at the assignment of coaches is to convert percentage-of-time assignments, full-time and part-time, to full-time equivalents (FTEs): one FTE is equal to one 100%-time coach, two 50%-time coaches, etc. Then take the total number of male athletes and divide by the number of FTEs for coaches of male teams, and do the same for coaches of female teams. For instance, if there are 200 male athletes and a total of 10 FTEs for coaches of these teams, there is a coach/athlete ratio of 1 to 20. Assume there are 150 female athletes and 6 FTEs, for a coach/athlete ratio of 1 to 25. In this example, the school is giving its male student-athletes more favorable teaching ratios than its female athletes.
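A short sketch (Python; the numbers are the hypothetical ones from the example above, not data from any institution) shows the FTE conversion and ratio comparison:

    # Convert coaching appointments to full-time equivalents (FTEs) and
    # compare coach/athlete ratios for men's and women's programs.
    def total_fte(appointments):
        # appointments is a list of percent-time assignments, e.g. [100, 50, 50]
        return sum(appointments) / 100.0

    mens_fte = total_fte([100] * 8 + [50, 50, 50, 50])    # 10 FTEs
    womens_fte = total_fte([100] * 4 + [50, 50, 50, 50])  # 6 FTEs

    mens_ratio = 200 / mens_fte      # 200 male athletes   -> 1 coach per 20 athletes
    womens_ratio = 150 / womens_fte  # 150 female athletes -> 1 coach per 25 athletes

    print(f"Men's coach/athlete ratio:   1 to {mens_ratio:.0f}")
    print(f"Women's coach/athlete ratio: 1 to {womens_ratio:.0f}")
    if womens_ratio > mens_ratio:
        print("Female athletes have less favorable access to coaching in this example.")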
Ideally, in all sports the coaches of men's sports and the coaches of women's sport would have the same percentage of time allocated for coaching. If not, then it would also be defensible to have the coaches of comparable sports with the same percentage and to allocate percentages to coaches of non-comparable sports in such a way that the overall result is equitable. Allocating differing percentages is permissible providing that student-athletes of one gender are not disadvantaged by having less access to their coaches because of additional non-coaching responsibilities e.g. having some full-time coaches for one gender and not for the other. Having non-comparable teaching loads would also be of concern. For example, it would be inequitable to have coaches of men's sports teaching sport skill classes while coaches of women's sports are teaching theoretical courses, such as biomechanics, which require much more preparation time.
The overall allocation of graduate assistants to the men's program and the women's program should also be equitable and the institution again can be guided by the NCAA rules and regulations in this area.
Assignment
The assignment of coaches deals with the professional qualifications of coaches e.g. their educational preparation, their experience and their achievements in their careers. One way to help develop similarly well qualified coaches for both men's and women's sports is to advertise coaching positions with the same required and desired qualifications and to have compensation packages designed to attract quality individuals for both programs. A common problem is created when salaries for women's sports fail to attract quality coaches with the result that female student-athletes do not receive the high quality coaching afforded their counterparts in men's sports. While years of experience should be one factor in the search for good coaches, the proven academic and athletic success record of an individual or the potential for success based on excellent experiences should be an important factor since years of experience do not necessarily correlate well with success.
Compensation
The area of compensation in athletics is complex. Disparities in coaches' salaries cannot be resolved under Title IX unless the salaries create a lower quality of coaching for student-athletes of one gender. In this instance, the complaint would have to come from the affected student-athletes since the coaches cannot assert a compensation discrimination claim under this area of Title IX. Title VII and the Equal Pay Act are the appropriate avenues for coaches to resolve salary disputes if no resolution can be reached at the institutional level.
For institutions wishing to avoid salary disputes, criteria for the establishment of base salaries should be created. Factors taken into consideration could include areas such as: educational preparation, years of coaching experience, academic success of student-athletes, athletic success and achievements.
Supplemental sources of income could include areas such as: sports camps, television and radio shows, and speaking engagements as well as incentives in specific areas e.g. graduation rates. Whenever possible, the institution should treat the coaches of men's teams and women's teams in a similar fashion. It is also important to avoid using criteria that may be the result of past discriminatory practices e.g. the number of spectators at athletic events.
Additionally, courtesy cars and cell phones, which can be viewed as fringe benefits when available for coaches' personal use, should be equitably shared between men's and women's coaches. These benefits can also be used as recruiting tools, so an inequitable distribution can unfairly impact the recruiting of the students of one gender.
The length of coaches' contracts is often another area of concern. Again, having all coaches on the same length of contract avoids problems. However, having some on 12-month contracts and others on 9-month contracts is permissible providing that the student-athletes of one gender are not being short-changed because more of their coaches are on the shorter contract. If differing lengths of contract are used, then the percentage of men's coaches on the 12-month contract should be the same (or as close as is possible) as the percentage of women's coaches on the 12-month contract. The same holds true for coaches on multi-year contracts.
Where possible, terms for the renewal of contracts for coaches should be the same or very similar for coaches of men's and women's teams.
Having similar responsibilities for all coaches additionally helps avoid problems related to treatment of personnel. Moreover, working conditions and other conditions of employment should be equitable. For instance, an area of concern may be when all coaches are required to attend booster club functions but only some coaches of men's sports are compensated for this responsibility.
It is recommended that the advice of the University legal counsel be sought when dealing with the area of compensation. | https://sportsmanagementresources.com/index.php/library/assignment-compensation-coaches |
Here’s everything you need to know about storing mangoes. Learn when they should sit on the counter and when you should refrigerate them.
Not sure where to store the mangoes you bought at the supermarket? Here's how to store them.
Unripe mangoes should sit on the counter at room temperature until they ripen. Once ripe, they last about 5 to 7 days if refrigerated and only 2 to 3 days if you leave them on the counter.
That’s the gist of it if you’re looking for a quick answer. But if you’re new to mangoes and want to learn a bit more about them, including telling if one is ripe, read on.
Here’s what we cover below:
- knowing if a mango is ripe (hint: it’s rarely about the color)
- details on storing whole and cut mangoes
- tips for accelerating ripening; helpful if your mango is unripe and you want it ready for eating as soon as possible
Let’s start by discussing ripeness, as knowing if yours is ripe or not is essential to how you store it.
How to Tell if Mango Is Ripe?
A mango is ripe when it gives a little when you apply gentle pressure using your fingers. If it’s firm and has no give, it’s not yet ripe, and if it’s soft and has a lot of give, it is already overripe. Feeling the fruit is the only reliable way to tell if it’s already good to eat or not.
Now, there are three other characteristics that might help you tell if your mango is ripe, but remember that feel is the primary one that’s the most reliable.
The first is the smell.
If the stem end of your mango gives off a fruity smell, that usually means the mango is good to eat. But if yours doesn’t smell like much, it might be perfectly ripe just as well.
(None of the mangoes I ever bought had a fruity aroma, and they all were ripe and tasted sweet.)
The second characteristic is color.
Some mango varieties change color as they ripen, similar to how bananas go from green to yellow as they mature. Unfortunately, that’s not the case for all popular varieties, and some of them don’t change color at all.
(For instance, the Keitt variety stays green even after it ripens.)
The last one is signs of skin shriveling. If you notice that the skin, usually near the stem end, starts to shrivel, that’s a sure sign the mango is ripe.
Of course, shriveling is usually a sign that a fruit or veggie is starting to deteriorate, and it’s the same for mangoes. In other words, you don’t have to (or even want to) wait until that happens.
Long story short, unless you want to memorize how each mango variety changes as it ripens, it’s best to stick to how it feels to the touch.
Now that you know which is which, it’s time to talk about storage practices.
How to Store Whole Mangoes
Store your unripe mangoes on the counter at room temperature until they ripen. Once ripe, transfer them into the fridge, where they can sit without any extra packaging for 5 to 7 days. If you leave a ripe mango at room temperature, it’ll go overripe within 2 to 3 days.
If you don’t particularly care how long it’ll take until your unripe mango ripens, feel free to put it in a fruit basket or wherever else you store your fruits. But if you’d like it to be ready within a couple of days tops, make sure it sits in a sealed bag.
The ripening process usually takes several days, and that’s when the fruit becomes sweeter and softer.
(More on accelerating ripening in a moment.)
Last but not least, remember to check your mangoes every day so that you catch the moment they’re ripe and should be moved to the fridge.
When it comes to refrigerating a ripe mango, just placing it in the fridge is good enough. It doesn’t need any wrapping, bags, or any of that.
A veggie drawer is probably a better option than a regular fridge shelf, as it’s much more humid than the rest of the refrigerator. Unless, of course, you have a whole bunch of ripening fruits and veggies in there releasing a ton of ethylene.
If that’s the case, better stick to a regular shelf, or your mango will go overripe fast.
Now, let’s say you have an unripe mango or two, and you’d like to get them ready for eating as soon as possible. Here’s what you can do:
How to Speed Up Ripening
To ripen your mangoes faster, you can put them in a sealed paper bag. The more mangoes in the bag, the better, as they will produce more ethylene gas, which helps with the process.
If you want to speed things up even further or have just one or two mangoes on hand, you can throw another fruit or vegetable that produces ethylene in that bag.
There’s a whole bunch of options, like avocados or tomatoes, but you might as well go with something as common as an apple, pear, or banana. Just make sure whatever you put in that bag is ripening.
How to Store a Cut Mango
Cut mangoes last 3 to 4 days in the refrigerator. It’s best to store them either in a resealable bag or an airtight container.
For large pieces, like halves, I suggest a ziploc bag, as it takes much less space than a container. For smaller cuts, diced mango, or mango puree, an airtight container is the best option.
The most important thing is that the fruit is sealed tight, so that it doesn’t dry out. That means that simply wrapping the fruit with plastic wrap works fine if you don’t have a bag or container on hand.
(Don’t try that with pureed mango, though.)
Last but not least, you can freeze mango. So if you can’t use the fruit before it goes bad (more on that in my How long does mango last? article), you can always save it for later.
Of course, freezing the mango somewhat limits what you can do with it afterward, but there are still plenty of options for using it. | https://www.doesitgobad.com/how-to-store-mangoes/ |
An awful lot of people in the media and on various blogs are demanding the police arrest Zimmerman. This is just another indicator that 99% of the folks in this country really don’t know how our criminal justice system works. It is exceptionally stunning to me that a lot of lawyers are in the group crying for the police to arrest Zimmerman and hold him for trial. They, of all people, should know better.
Arrest is the taking of a person into lawful custody. The operative word in that sentence is LAWFUL. The police at this point CANNOT arrest Zimmerman without an arrest warrant having been issued. He is protected by the Fourth Amendment, which prohibits the government from making an unreasonable seizure of a person (an arrest) without a warrant, issued under probable cause. The ability for cops to arrest based on probable cause is an EXCEPTION to the warrant rule, held by the courts to be a ‘reasonable’ seizure without a warrant if founded in exigent circumstances.
The Fifth Amendment also comes into play, wherein no person may be held to answer for a "capital or otherwise infamous crime" except upon indictment by a Grand Jury (when a DA charges a crime, the indictment is at some point rubber-stamped by a Grand Jury), nor may any person be deprived of their liberty without due process of law.
What this means in practical terms is that officers can arrest without a warrant for probable cause WHEN THE ARREST IS MADE AS A MATTER OF EXIGENT CIRCUMSTANCE. Put in plain language, that means it has to pretty much be done contemporaneously with the initial investigation of the crime….. and additional criteria must be met for the arrest to be legal. There has to be probable cause to believe a crime has been committed (i.e. that ALL of the specific elements outlined in the code which defines the crime have been met and that the person being arrested is the person who met them).
Where there is doubt in the minds of the police as to whether or not a crime was committed, the proper way to proceed is to investigate and submit the product of the investigation to the DA….. NOT to arrest.
Also considered in this determination is the ability of the police to adequately determine identity of the suspect, likelihood of flight, likelihood of danger to the community, and likelihood of loss of evidence (e.g. in a DUI, evidence is automatically lost merely by the passage of time, so DUI arrests are almost always contemporaneous with the perceived commission of the crime).
If any of the criteria are lacking, the police MUST submit the product of their investigation to the DA and/or the Grand Jury in lieu of arresting. Once the DA (or in the case of a violation of federal law, the US Attorney) has had a chance to review the case, the police, generally speaking, no longer have an option to arrest on probable cause. There can be exceptions, but they are few and far between, rely once again on an exigent circumstance, and have no application in this case. A warrant has to be issued as a Grand Jury Indictment OR by competent judicial authority. This is what is meant in the Fifth Amendment by "due process of law."
The NORMAL process is for the DA, or special prosecutor to compile a list of the charges, a breakdown of the evidence gathered to that point, an explanation of the legality of the initial police contact with the suspect, and an explanation of how the evidence demonstrates a meeting of the requirements of the various elements of the specific statutes for which a violation is alleged (the probable cause part). A judge reviews that information and makes a determination on whether or not the DA has met all the requirements of the law, and if so, issues an arrest warrant. The police can THEN go legally pick Zimmerman up. If they do so otherwise at this point, it is by definition a false arrest, and the City of Sanford would be subject to rather severe civil penalties for having allowed it.
Individual state laws define who has authority to arrest, and under what circumstances, but that authority is always limited by the Constitution. Every state has some form of citizen’s arrest law on the books, but since the citizens are by definition NOT part of the criminal justice system (except as to their responsibilities as triers of fact in a jury trial), a citizen’s arrest is in reality not so much an arrest as a detention for purposes of handing a suspect over to the police. What the police do once the suspect is handed over, varies somewhat from state to state, but in most cases they have the authority to either accept the subject and continue the arrest process, cite them to appear in court later, or release the subject on the spot based on the police determination of whether or not there is, in fact, a crime AND sufficient evidence to justify an arrest.
The bottom line in this case……. The police in Florida do not have the authority to arrest Zimmerman at this point until an arrest warrant is issued by a competent judicial authority or the Grand Jury.
And believe me….. we do not want to live in a country where police can arrest someone just because the citizens are crying out for it to happen, or, for that matter, because someone on the police department WANTS to.
Why, then you ask, doesn’t the DA in Sanford seek an arrest warrant at this point? There could be any number of reasons. The first that comes to mind is, HE MAY WELL NOT HAVE EVIDENCE THAT A CRIME WAS EVEN COMMITTED. 1.) If the evidence gathered thus far (that is the evidence… not the conjecture of the media or of bloggers) indicates Zimmerman was defending himself in a legal manner, no judge will issue the warrant. 2.) If the police botched the evidence collection in such a way as to make it inadmissible…. inadmissible evidence is the same as not having any as far as the courts are concerned. 3.) The evidence he has thus far may be insufficient to meet the ‘probable cause’ criteria.
The DA may also not yet have sought a warrant because:
He may have cause to believe the initial contact by the police was illegal. This is unlikely, but does occur on occasion. I once had a case thrown out because I responded to a large fight with lights and siren blaring, and failed to shut them off before entering the actual parking lot where the fight was happening. The judge determined that my lights and siren constituted a ‘detention’ of the suspects before I had personally observed a violation of law (I couldn’t actually see them fighting because of the crowd of people around them until I broke through the crowd), so the detention was illegal and the case was thrown out. That judge’s ruling would have likely been overturned on appeal, but the case was not big enough to justify the expense of going through that process. If there is doubt, he’s not going to want the case thrown out early in the prosecution for that reason. In such an instance, he would probably wait for a Grand Jury indictment before proceeding.
There may be an indication that additional evidence is likely (e.g. additional witnesses have given or are expected to give statements, and the veracity of their statements must be checked out, or lab results that take time to process have not yet posted); in that case, a warrant should not be sought until at least the vast majority of relevant evidence is secured and that evidence meets the criteria for successful prosecution.
The DA, before he seeks a warrant, has to keep in mind that if an arrest warrant issues, in most places he has about 48 hours to get the defendant in front of a judge for indictment (the judicial process in which charges are leveled), and that the subject has a right to a speedy trial. That means in most jurisdictions in this country the defendant can demand his trial within, say, ten days. That means a preliminary hearing to determine whether or not there is sufficient cause to even hold a trial will be held within that period. If the DA cannot present sufficient cause to bind Zimmerman over for trial (e.g. the defense argues a CREDIBLE self-defense theory in combination with evidence for the prosecution that is not fully developed), the case will be dismissed. And even if bound over for trial, if there is not sufficient evidence to convince the jury, the case can be lost…. and the double jeopardy clause in the Fifth Amendment can prevent any further prosecution for the particular alleged crime.
What THIS means in practical terms, is that in a case like this, the DA is NOT going to seek an arrest warrant until he is prepared to carry through on a full prosecution, with a good likelihood of conviction, and he would be wasting the taxpayers’ money if he did. | https://redstate.com/diary/uplateagain/2012/03/30/why-zimmerman-has-not-been-arrested-n202952 |
Wout Van Aert (Team Jumbo-Visma) sprinted to victory on stage 10 of the Tour de France, coming in just ahead of Elia Viviani (Deceuninck-Quickstep) and Caleb Ewan (Lotto-Soudal).
Stage 10 of the Tour de France was another long stage, measuring in at 217km. It was meant to be a stage for the sprinters, but not all the sprinters had things go their own way today. The stage would be characterized by crosswinds and splits in the peloton, seeing a number of big names losing out.
The real action began with just 35km remaining, as Deceuninck-Quickstep and Team Ineos put the peloton in the gutter. Roman Kreuziger was paying close attention at the head of affairs for Team Dimension Data and was able to make the front split when gaps began to open.
Edvald Boasson Hagen, Ben King and Reinardt Janse van Rensburg were close to making the front selection too but would instead find themselves split between groups 2 & 3, which also contained the likes of Jakob Fuglsang (Team Astana) and Thibaut Pinot (Groupama-FDJ).
With a number of teams having a vested interest in the front group staying away, they all worked well together to open a minute gap to the 2nd group on the road. This meant there was no real chance for Team Dimension Data to contest for the stage result with Boasson Hagen but Kreuziger was gaining some valuable time on the GC.
At the line, the front group contained just 43 riders, a mixture of sprinters and GC contenders. Van Aert was the fastest of them all, taking the stage win while Kreuziger rolled home in the wheels. The 2nd group finished over 1’40” down with smaller groups even further back.
As a result, Kreuziger was able to move up 7 placings on the general classification and into the top 20 overall. Our Czech climber now heads into tomorrow’s rest day in 18th overall. | https://teamqhubeka.com/tour-de-france-10-kreuziger-climbs-up-to-18th-in-crosswinds |
The invention relates to a braking system for a vehicle having a primary brake cylinder (14) that is coupled to a brake input element (10) such that a braking pressure and/or activation path signal can be provided unreinforced to the primary brake cylinder (14) and a corresponding unreinforced pressure signal can be output by the primary brake cylinder (14); a brake circuit (24) having a separator (66) switchable into an open mode and into a closed mode and at least one wheel brake cylinder (68a, 68b) arranged on a wheel (26a, 26b), wherein the unreinforced pressure signal can be forwarded to the at least one wheel brake cylinder (68a, 68b) in the open mode of the separator and forwarding of the unreinforced pressure signal can be prevented by the separator (66) switched to the closed mode; and a hydraulic assembly (76, 80) coupled to the at least one wheel brake cylinder (68a, 68b) that is designed to output a reinforcement pressure signal. The invention further relates to a method for operating a braking system for a vehicle and to a production method for a braking system for a vehicle. | 
With Star Trek: Picard season two now complete and the main cast of Star Trek: The Next Generation joining for the third and final season (which has already been filmed), the series is saying goodbye to some of its original cast. Michelle Hurd (Raffi) and Jeri Ryan (Seven) are confirmed to return, but Alison Pill (Jurati) has confirmed she won't be returning, and now more actors are saying goodbye.
UPDATE: Isa Briones says goodbye
Isa Briones played several roles during the two seasons of Star Trek: Picard. Her season two character Kore was recruited as a supervisor (by the Traveler Wesley Crusher, played by Wil Wheaton), and her Soji character is probably still around in the 25th century, as she didn't join the crew for their time travel adventure.
On Friday, Briones posted behind-the-scenes photos from her time on the show. Her post spoke about her time on the show and how she was "grateful for every part of this experience." She ended with "Goodbye Soji, this orchid for you."
Evan Evagora says goodbye
Evan Evagora was a series regular for both seasons of Star Trek: Picard, playing the Romulan Elnor. The second season saw the Qowat Milat warrior become a Starfleet Academy cadet assigned to the USS Excelsior. Even though he was killed off early in the season, he continued to return for a number of episodes through flashbacks, visions, in hologram form, and eventually a resurrection in the season finale.
In an Instagram post, Evagora shared some behind-the-scenes snaps from season two and revealed, "I won't be returning for Picard's third season, so to quote a mediocre band, 'thanks for the memories'… you all know the rest! LLAP"
Cabrera says it’s been a hell of a ride
While Hurd and Ryan have confirmed they'll be joining star Sir Patrick Stewart in season three, that leaves only one series regular unaccounted for: Santiago Cabrera (Cristobal Rios). In the season two finale, Rios was left behind in the 21st century along with Teresa and Ricardo. Guinan detailed his life story, including his eventual death in a bar fight in Morocco. It therefore seems fairly certain that he will not return, especially since he was filming the last season of The Stewardess while Picard season three was in production.
Cabrera also posted on Instagram on Thursday. He didn't come out and say he wasn't coming back, but wrote, "That's been a hell of a ride." He also added (in Spanish): "The last chapter of Picard comes out today. I hope you like it."
Wheaton is not part of season 3
Wheaton’s surprise return as a traveler for the season three finale certainly opens up a lot of possibilities. However, in a lengthy blog post on his site, Wheaton again confirmed that he will not be joining his TNG co-stars in the third season of picard:
Wesley and Kore can vanish from existence and never come back on camera again. Or they could go literally anywhere through all of space and time, from Strange New Worlds to Discovery to Lower Decks (but not season three of Picard. Sorry, nerds.). Honestly, I don't know what awaits them in canon, but I'd be lying if I said I didn't spend time thinking about it.
I may be able to say more about Wesley's story at some point – his journey over the past 25 years or so is something I've spent a lot of time thinking about – as a writer or as an actor. Maybe both. But even if that never happens, if I never get to be Wesley Crusher on camera again, I'll have the privilege of hosting The Ready Room, where I get to be a Starfleet veteran, a member of the exclusive "Star Trek Legacy" club, and a shameless superfan who can bring other nerds into the room where it all happens. I can celebrate everything we all love about Star Trek in all of its incarnations, as my job.
Brady done early in season 3?
It’s also a safe bet that recurring guest star Orla Brady won’t play a major role in season three, even though season two ended with her Laris finally reuniting with Jean-Luc Picard. Last September, days after production began on season three (which began the day after season two wrapped), Brady posted on Twitter that she had finished work on season two, adding, ” it’s time to take off my beloved pointy ears”.
I wrapped #StarTrekPicard season 2 and it’s time to take off my dear pointy ears alas. Not sure I like my own ears more… #Star Trek @StarTrek #Laris #cheeky pic.twitter.com/DJjI43ExdF
— Orla Brady (@orla_brady) September 12, 2021
It is not yet known when the third season of Star Trek: Picard will make its debut.
Find more stories from the Star Trek universe. | https://tanialezakblog.com/entertainment/cbmiawh0dhbzoi8vdhjla21vdmlllmnvbs8ymdiylza1lza2l2fub3rozxitc3rhci10cmvrlxbpy2fyzc1jyxn0lw1lbwjlci1jb25maxjtzwqtbm90lwjllwjhy2stzm9ylxnlyxnvbi0zl9ibaaoc5/ |
The team can begin with the external or internal influences. Rather than looking at these in a linear fashion, consider both a linear and an interactive approach. To begin, consider the internal influences of resources, funding, and priorities on the outcome. Consider each one-dimensional influence separately and then begin to consider two of the influencers together as a contributor. Does priority influence the funding or resource allocation? If this interaction is present, which it is, how does it affect availability? Answer these questions and you can begin to understand the degree of interrelatedness and complexity involved before reaching a final decision. Think of these influencers not individually, but in the context of how they interact with one another. To begin, operationally define attributes (characteristics) that consumers (purchasers) will use to evaluate the product. Consider the following general terms.
Needs: Ability to meet (satisfy) customer or user needs.
Applicability: A measure of value and use.
Price: Determines sustained profit (ROI) potential.
Availability: Supply and demand issues.
Differentiation: The degree of difference between the product (service) the application replaces or updates.
Newness: Perception (or novelty) of “newness” by the customer or user.
Figure 1: Influence Matrix (taken from McLaughlin, G., and Kennedy, W. (2016). Innovation Project Management Handbook, CRC Press, p. 101).
Using this tool (Figure 1), the team identified key "influencers" that could or would affect the decision. Figure 1 lists the Influence Matrix evaluations with scoring defined. The largest score identifies the influence element that most strongly affects how consumers will choose to purchase the product or service.
Instructions: Choose an influencer element and determine how much influence this element will have on purchase behavior. The greater the influence, the higher the score. A score of "1" (parity) suggests that its influence will be essentially the same as that of the product it is replacing. Lower scores – less influence; higher scores – more influence.
Resources Scoring: Low – .25, .5, .75; Medium – 1.0, 1.25; High – 1.50, 1.75, 2.0
Priority Scoring: Low – .25, .5, .75; Medium – 1.0, 1.25; High – 1.50, 1.75, 2.0
Funding/Financial Scoring: Low – .25, .5, .75; Medium – 1.0, 1.25; High – 1.50, 1.75, 2.0
Competitor Scoring: Low – .25, .5, .75; Medium – 1.0, 1.25; High – 1.50, 1.75, 2.0
Availability Scoring: Low – .25, .5, .75; Medium – 1.0, 1.25; High – 1.50, 1.75, 2.0
Customer Appeal Scoring: Low – .25, .5, .75; Medium – 1.0, 1.25; High – 1.50, 1.75, 2.0
These general influencers all affect purchase (use) decisions. Every item is unique, so criteria and scoring will vary. Use a low score when the amount of influence is weak. The authors suggest a value less than parity (1.0). Near parity for moderate influence and greater than parity, the influence is strong. These criteria could easily come from a perceptual survey or focus group. Rather than asking the customer or user what they like, ask them what they need, want, or desire. Ask them to define how they know when they are satisfied and capable of making a decision.
The Influence Matrix considers each influence component and then determines a score based on the criteria (McLaughlin and Kennedy, 2016, p. 99-102).
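As a rough illustration of how the matrix might be tallied in practice, the sketch below (Python) records a score for each influencer using the low/medium/high bands listed above and ranks the elements relative to parity. The element names and the idea of simply ranking the scores are assumptions for illustration; the actual Figure 1 worksheet may weight or combine them differently.

```python
# Allowed scores, per the bands defined above: Low .25/.5/.75, Medium 1.0/1.25, High 1.5/1.75/2.0.
ALLOWED_SCORES = {0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0}
PARITY = 1.0

# Hypothetical scores assigned by a team for one product concept (one score per influencer element).
scores = {
    "Resources": 1.25,
    "Priority": 1.75,
    "Funding/Financial": 1.0,
    "Competitor": 0.75,
    "Availability": 1.5,
    "Customer Appeal": 2.0,
}

assert all(v in ALLOWED_SCORES for v in scores.values()), "scores must use the defined bands"

# Rank the influencers; the largest score flags the element expected to affect purchase behavior most.
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    relation = "above" if score > PARITY else "below" if score < PARITY else "at"
    print(f"{name:18s} {score:4.2f}  ({relation} parity)")
```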
For the Innovation Project that the team is working on, create a list of unique attributes and features and complete the Influence Matrix using the Influence Matrix file.
Prepare a 100-page summary of your learnings and findings. Be sure to include the completed excel file with the Word document. | https://originalpapers.org/unit-6-team-project-innovation-collaborative-strategy-original-papers/
Please tell us about your book.
When 12-year-old Kinchen and her new friend Caesar decide to uncover the Raft King’s nefarious plan and save Kinchen’s brother, they set in motion a chain of events that will put them in contact with another world and pull in stories from far-away times. A Crack in the Sea follows the fates of twin siblings on a slave ship in 1781; a 12-year-old boy escaping from Vietnam in 1978 shortly after the fall of Saigon; a giant overcrowded Raftworld whose people need to find a land of their own; and sea monsters in love.
Combining real history and deep fantasy, A Crack in the Sea is about how sometimes you have to leave your home–even though you don’t want to–and how you might find or build a new place to call home.
What inspired you to write this story?
A couple of different things happened at almost the same time. One was that I imagined a giant raftworld, and I wanted to write something with that setting. The other was that I read about the Zong slave ship (a real and horrifying story) and its weight pressed on me. More images and stories and real world events influenced the writing as the manuscript progressed, but I think the book really started with the image of the giant raft and the story of the Zong. And it started with the idea that these two opposite things—one an image of how a people might survive and even thrive, and the other a story of genocide—might somehow inform each other in a novel.
Could you share with readers how you conducted your research?
Because there were three different historical periods/events I was researching (the Zong slave ship; the Vietnam war and its aftermath for those who tried to escape; and a third, smaller thing that I won’t go into because it would spoil a surprise in the book), the research progressed differently for all three.
For the Zong, I was limited to scholarly research on the Zong specifically and the Middle Passage more generally, as well as historical documents and (historical) first-person accounts. There wasn’t anyone I could interview, obviously, about their experience on the Zong.
But with the story of Thanh and his family leaving Vietnam, I not only read books and first-person accounts but also interviewed people who had left Vietnam in the late 1970’s, and I asked a couple of expert readers to read my manuscript and give me feedback on Thanh’s story. As an early Americanist, I’m used to book research and digging into scholarly archives; but interviewing and working with living memory is new to me. I was amazed at what a sharp learning curve that was, and I know I still have much more to learn about this kind of research.
With the last historical piece—and I am being really vague here to avoid spoiling a surprise for those who haven’t read the book yet—I gave myself more leeway to consider odd theories and unproven hypotheses, as with this historical event there isn’t full consensus/proof about what happened—at least not as I’m writing this interview. (That may change, as there have been developments in the research in past few years.) I was simply trying to make my version of the story fit with what might be plausible in the real world.
What are some special challenges associated with fictionalizing a true story?
I was (and still am) concerned about the book seeming to make actual, terrible events more “palatable” for readers. There’s a real danger that when we write historical fiction about characters who miraculously escape from the terrible events of the past, we belittle the real and terrible things that happened—we turn those terrible events into a kind of fictional fodder for the amazing life of our protagonist, who has overcome such astounding obstacles.
But at the same time, I think one of the great strengths of fiction—and especially of fantasy—is that it offers readers the opportunity to imagine a world that might be different from what it is now, or a history that might have gone differently than it really did. And I think that potential benefit from fantasy is worth exploring. What if people in power had made different choices? What if things could turn out differently? What kind of world might we as a people create if we choose not to relive our history?
What topics does your book touch upon that would make it a perfect fit for the classroom?
The book includes some discussion of slavery and the Middle Passage; and some discussion of the aftermath of the Vietnam War. More importantly, I think, the story focuses on kids who are forced by various circumstances to leave their original homes and find or create new homes, and it asks readers to think about what it means to be a refugee or a forced immigrant of any kind. | https://carolinestarrrose.com/classroom-connections-crack-sea-h-m-bouwman/ |
The Prince George’s County and Corvias Solutions Public-Private Partnership (P3), also called the Clean Water Partnership, is an agreement between County government and the private sector to retrofit up to 4,000 acres of impervious surfaces using green infrastructure.
This pioneering P3 approach will leverage private sector best practices and efficiencies to deliver functional and sustainable stormwater infrastructure with accelerated project timelines and reduced costs.
The Partnership is also specifically tasked with driving local economic development by using local, small and minority businesses for at least 30 – 40 percent of the total project scope.
The Clean Water Partnership between Prince George’s County and Corvias is the first public-private partnership of its kind to design, build, finance, operate, and maintain urban stormwater infrastructure in order to meet a municipality’s Municipal Separate Storm Sewer System (MS4) permit compliance requirements. It’s the first-ever P3 model to address stormwater at this scale.
The 30-year partnership is committed to ensuring regulatory urban stormwater compliance for the design, retrofit and maintenance of up to 4,000 impervious acres. The private partner will be responsible for both the initial development and the long-term maintenance, which ensures an integrated approach that will maximize the efficiencies and savings for the entire life cycle of the green infrastructure assets. Additionally, both the short- and long-term risks associated with construction and maintenance are effectively transferred from the County to Corvias.
This partnership is also unique in how it will drive local economic development by using local and County-based small and minority-owned businesses for 30 to 40 percent of the total project scope. The partnership has also been set up with a robust community outreach and socio-economic development program with specific performance goals.
The Prince George’s County Council approved the partnership in November. The contract was formally signed March 2015.
The goals of the Clean Water Partnership are to reduce the cost and timelines traditionally associated with achieving regulatory compliance, while also enhancing the long-term sustainability of the program through effective project management.
The partnership will complete retrofits utilizing green infrastructure (GI) and low-impact development (LID) practices as approved by the Maryland Department of the Environment (MDE) and the U.S. Environmental Protection Agency (EPA) standards.
As part of the program, the partnership will also recruit local disadvantaged businesses and connect them with training and work experience that can assist them in building a viable business and workforce in green infrastructure and related fields. This is one of the unique facets of this Partnership, whereby the County is turning a regulatory requirement into an opportunity to spur local jobs and economic development.
Under the terms of the agreement, the County has committed to investing $100 million during the next 3 years to retrofit 2,000 acres. The initial funding includes the planning, design and construction of green infrastructure retrofits to 2,000 acres of impervious surfaces over those 3 years. There is an option for an additional 2,000 acres after this initial 3 year term if the County is satisfied with the progress of the arrangement with Corvias.
- Corvias will manage the design, construction and long-term maintenance of stormwater best practices in the 2,000 acres (or up to 4,000) covered by the program.
- Corvias has 15 years of experience in stormwater development and management for both new construction and retrofitted communities across the country.
- As part of their military housing portfolio, the company manages 15 stormwater management pollution prevention programs on more than 12,000 acres across 13 states.
- Corvias has worked directly with the Maryland Department of the Environment (MDE) for the past 10 years on more than 1,200 of those acres currently in critical watersheds within Maryland.
The signed agreement represents the official start of partnership activities. The list of projects for 2016 is being finalized and will include projects across the County. As a first step, and as dictated by the Contract, the County must approve of Corvias’ work plan, which will spell out where these projects are located.
It is anticipated that while some projects may be contiguous, there will be a number of projects that are prioritized based on the County’s strategic plans, including those in Transforming Neighborhood Initiative areas.
The County will utilize its traditional procurement process to address impervious acres while Corvias will follow the streamlined process outlined in the Partnership agreement to treat its 2,000 acres. The purpose of this model is to create a benchmark to determine the extent to which the P3 arrangement is in fact delivering increased speed and decreased costs as compared to the County’s traditional processes.
We anticipate the Partnership will significantly accelerate the County’s pace towards compliance and reduce their overall cost of achieving compliance with urban stormwater retrofits within their MS4 permit.
The majority of the initial projects will be on public land, but in some cases the Partnership will work with private property owners to address stormwater challenges.
The Construction efforts of the Program are executed and performed by three General Contractors—Essex, Nardi, and D&F Construction. As members of the implementation team they will facilitate and manage the application of all work that has been designed. There is a tremendous opportunity for contractors that exhibit the necessary skills and capabilities in the following areas of need:
- Landscaping: General services including installation of selected landscaping materials.
- Concrete Flatwork: Removal and replacement of pavement, asphalt, curbs, gutters and sidewalks with porous concrete, pavers, grass and plantings
- Installation of Precast Structure: Catch basin, manholes, precast concrete boxes and tree boxes.
- Sitework: Rough initial and final grading, excavation, backfill, underdrain installation, wet utility, infrastructure activities, and preparation of work for permit approval.
- Suppliers: Stone, pavers, porous concrete, soil, rain barrels, mulch, plant materials, trucking and hauling companies, maintenance sales and rental of equipment relating to landscaping.
One of the goals set up by the Clean Water Partnership is to utilize the County’s small, minority and women-owned businesses for 30 – 40 percent of the total project scope. The contract includes incentive payments tied to the number of local and minority firms that are utilized in the delivery of services, providing an economic incentive to Corvias for the use of these firms.
We are committed to removing barriers to entry for small businesses and actively helping them develop the necessary Green Infrastructure skills to compete for this work through training, internships and other programs. This includes developing a curriculum with Prince George’s County Community College and other local agencies as well as training programs that enable local business to gain skills in green infrastructure practices.
Regardless of the status of the stormwater fee, the County is federally mandated to complete the retrofit of 8,000 acres by 2017.
You may reach out to Prince George’s County Department of the Environment, Prince George’s County’s Supplier Development and Diversity Division (SDDD), or Corvias at the following numbers:
Corvias Solutions: Tasha Brokenberry (301) 291-2254
SDDD: Denise Roberts (301) 883-6488 | https://thecleanwaterpartnership.com/faqs/
The T_CVF function computes the cutoff value V in a Student’s t distribution with Df degrees of freedom such that the probability that a random variable X is greater than V is equal to a user-supplied probability P.
Note: T_CVF computes the cutoff value using the one-tailed probability. The cutoff value for the two-tailed probability (the probability that the absolute value of X is greater than V) can be computed as T_CVF(P/2, Df).
This routine is written in the IDL language. Its source code can be found in the file t_cvf.pro in the lib subdirectory of the IDL distribution.
Use the following command to compute the cutoff value in a Student’s t distribution with five degrees of freedom such that the probability that a random variable X is greater than the cutoff value is 0.025.
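The IDL command itself is not reproduced here; based on the argument order documented below, the call would presumably take the form T_CVF(0.025, 5). As an independent check, the same cutoff can be computed with SciPy's Student's t distribution (this is an equivalent sketch in Python, not IDL syntax):

```python
from scipy.stats import t

# One-tailed cutoff V such that P(X > V) = 0.025 for 5 degrees of freedom.
v_one_tailed = t.isf(0.025, 5)

# Two-tailed cutoff for P(|X| > V) = 0.05, i.e. the T_CVF(P/2, Df) form noted above.
v_two_tailed = t.isf(0.05 / 2, 5)

print(v_one_tailed)  # roughly 2.5706
print(v_two_tailed)  # identical, by construction
```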
P
A non-negative single- or double-precision floating-point scalar, in the interval [0.0, 1.0], that specifies the probability of occurrence or success.
Df
A positive integer, single- or double-precision floating-point scalar that specifies the number of degrees of freedom of the Student’s t distribution. | https://www.harrisgeospatial.com/docs/T_CVT.html
Dentistry is a medical specialty that is practiced almost exclusively in private (self-employed) practice. Only about 5,000 French dentists are hospital or other salaried employees. As with other health professionals, there are strong geographical disparities in terms of density. While 100,000 inhabitants of the Provence-Alpes-Côte d'Azur region have more than 70 private-practice dentists at their disposal, the same number of Normans manage with about thirty. While their average density in France remains stable, the total number of active dentists is still increasing.
On average, a dentist sees about 900 patients in the course of a year and receives 271,000 euros in fees. With more than 1,200 patients per dentist per year, dentists in Normandy are the most in demand by far. On the other hand, a dentist in Corsica sees around 720 patients a year.
In total, dentists in France perform around 18 million radiology examinations on their patients in metropolitan France every year in order to find cavities or other hidden dental problems. Furthermore, they perform over 10 million dental prosthesis procedures. The associated dental specialty is called "prosthodontics" and focuses on the rehabilitation and maintenance of oral functions like chewing as well as the overall aesthetic appearance of a patient's teeth. Interestingly, the idea of improving oral health through prosthesis placement was pioneered by Pierre Fauchard, a famous Frenchman (1679-1761) widely considered to be the "father of modern dentistry". | https://www.statista.com/topics/5859/dentists-in-france/
There are limitations in interacting with molecular objects in laboratory experiments due to the very small size of the objects. Common media for showing the experimental results of molecular objects still lack the observer interaction needed to understand them intuitively. In order to overcome this lack of interaction, this research takes a tensegrity representation of molecular objects reproducing experimental results and creates interactive 3D objects to be presented in a virtual reality (VR) environment. The tensegrity representation enables us to enhance the interaction experience with a natural user interface combining haptic technology and a hand-tracking controller. A particle simulation system that utilizes multiple GPU resources is used to fulfill haptic VR requirements. We developed a unified particle object model using springs and particles, which we call anchors, that act as the tensegrity structure of the object to support the conformation of filament-type objects such as microtubules. Some object parameters can be set to match the flexural rigidity of the object with experimental results. The bending shape of the object is evaluated using the classic bending equation, and the results show high compatibility. Viscoelastic behavior also shows similarities with the viscosity reported in other studies. The object's flexural rigidity can be adjusted to match a target value using the prediction equation. The object model provides better insight into molecular objects with natural, real-time interactions for a more intuitive understanding of the molecular objects presented. The results show that this model can also be applied to any filament-type or rod-like molecular object.
Although the open-field test has been widely used, its reliability and compatibility are frequently questioned. Many indicating parameters were introduced for this test; however, they did not take data distributions into consideration. This oversight may have caused the problems mentioned above. Here, an exploratory approach for the analysis of video records of tests of elderly mice was taken that described the distributions using the least number of parameters. The locomotor activity of the animals was separated into two clusters: dash and search. The accelerations found in each of the clusters were distributed normally. The speed and the duration of the clusters exhibited an exponential distribution. Although the exponential model includes a single parameter, an additional parameter that indicated instability of the behaviour was required in many cases for fitting to the data. As this instability parameter exhibited an inverse correlation with speed, the function of the brain that maintained stability would be required for a better performance. According to the distributions, the travel distance, which has been regarded as an important indicator, was not a robust estimator of the animals’ condition.
HLA (Human Leucocyte Antigen) class I molecules present a variable but limited repertoire of antigenic peptides for T-cell recognition. Identification of specific antigenic peptides is essential for the development of immunotherapy. High polymorphism of HLA genes and a large number of possible peptides to be evaluated, however, have made identification by experiments costly and time-consuming. Computational methods have been proposed to address this problem. In cases where a large amount of peptide binding affinity data is available, various QSAR and machine learning approaches efficiently evaluate the affinity of test peptides, while in cases where only a little data is available, structure-based approaches such as elaborate docking have been proposed. We have developed a software package named HLABAP that is designed to predict the binding affinities for a set of peptides against a particular HLA class I allele. By combining homology modeling for posing instead of docking with geometry optimization of the complex structures between the HLA molecule and peptides, HLABAP predicts the binding affinities for the peptides well. The results have shown that HLABAP should be applicable to identifying possible antigenic peptides against a particular HLA class I allele prior to experiments far more efficiently than ordinary docking methods.
Skin sensitization is an important aspect of occupational and consumer safety. Because of the ban on animal testing for skin sensitization in Europe, in silico approaches to predict skin sensitizers are needed. Recently, several machine learning approaches, such as the gradient boosting decision tree (GBDT) and deep neural networks (DNNs), have been applied to chemical reactivity prediction, showing remarkable accuracy. Herein, we performed a study on DNN- and GBDT-based modeling to investigate their potential for use in predicting skin sensitizers. We separately input two types of chemical properties (physical and structural properties) in the form of one-hot labeled vectors into single- and dual-input models. All the trained dual-input models achieved higher accuracy than single-input models, suggesting that a multi-input machine learning model with different types of chemical properties has excellent potential for skin sensitizer classification.
The emergence of antibiotic-resistant bacteria is a serious public health concern. Understanding the relationships between antibiotic compounds and phenotypic changes related to the acquisition of resistance is important to estimate the effective characteristics of drug seeds. It is important to analyze the relationships between phenotypic changes and compound structures; hence, we performed a canonical correlation analysis (CCA) for high dimensional phenotypic and compound structure datasets. For the CCA, the required sample number must be larger than the feature number; however, collecting a large amount of data can sometimes be difficult. Thus, we combined the CCA with consensus clustering to gather and reduce features. The CCA was performed using the clustered features, and it revealed relationships between the features of chemical substructures and the expression level of genes related to several types of antibiotic resistance.
In recent years, with the emergence of new technologies employing information science, open innovation and collaborative drug discovery research, utilizing biological and chemical experimental data, have been actively conducted. The Young Researcher Association of Chem-Bio Informatics Society (“CBI Wakate”) has constructed an online discussion space using Slack and provided a cloud-based collaborative platform in which researchers have freely discussed specific issues and aimed at raising the level of cross-sectoral communication regarding technology and knowledge. On this platform, we created three channels—dataset, model evaluation and scripts—where participants with different backgrounds co-developed a solution for solubility prediction. In the dataset channel, we exchanged our knowledge and methodology for calculations using the chemical descriptors for the original dataset and also discussed methods to improve the dataset for pharmaceutical purposes. We have also developed a protocol for evaluating the applicability of solubility prediction models for drug discovery by using the ChEMBL database and for sharing the dataset among users on the cloud. In the model evaluation channel, we discussed the necessary conditions for the prediction model to be used in daily drug discovery research. We examined the effect of these discussions on script development and suggested future improvements. This study provides an example of a new cloud-based open collaboration that can be useful for various projects in the early stage of drug discovery. | https://www.jstage.jst.go.jp/browse/cbij/20/0/_contents/-char/ja |
Chthon Tut "how loot works"
I was looking into this for modding purposes awhile back. Since there seems to be some interest in the topic on the forums, I thought I'd post some info about how loot generation works in TL2.
The first step in the loot generation process is that you have to kill a monster. When a monster dies, the loot generation process begins. This process is run independently for each player who is close enough (and probably locally on that player's computer). You cannot see, much less steal, other players' loot. How close is close enough is defined in GLOBALS.DAT as LOOT_DISTRIBUTION_RANGE:68.
In each monster's data file, there can be up to three entries for "treasure." Each entry specifies three things: The spawnclass to roll for the drop, the minimum number of times to roll that spawnclass, and the maximum number of times to roll that spawnclass. When the monster dies, for each treasure entry it has, a random number between the entry's minimum and maximum is picked and that entry's spawnclass is rolled that number of times.
In order to keep this example simple, I'm going to proceed as if we're only doing one roll for one treasure entry; but remember that it's entirely possible to end up repeating steps 2 and 3 for multiple rolls for each treasure entry, and that a monster can have up to three treasure entries.
Step 2. Roll the spawnclass. A spawnclass contains a list of entries, and each roll of the spawnclass selects an entry based on the entries' weights (an entry with a weight of -1 is always selected in addition to the normal pick). Each entry specifies several things. First and second, a weight and what you get, which can be one of three things (a rough illustrative sketch of such an entry follows this list):
** A specific item. If this entry is chosen, you get this specific item. Example, if the entry says "Mana Potion 3" as a specific item, then you will get a Mana Potion 3 (a/k/a "Giant Mana Potion"). FYI: Gold piles are specific items.
** A Unit Type. If this entry is chosen, you get a random item from this unit type. See Step 2b for details. FYI: "None" is a unit type, meaning you get nothing. It is by far the most common unit type in the spawnclasses.
** A roll from another spawnclass. If this entry is chosen, you go roll the named spawnclass. The process continues recursively until you finally roll a specific item or a unit type.
* Third and fourth, a minimum and a maximum. If this entry is for a specific item, you will get a random number, between the minimum and the maximum, of copies of that specific item. If this entry is for a unit type, you will get a random number, between the minimum and the maximum, of items from the unit type. If this entry is for a spawnclass, you will get a random number, between the minimum and the maximum, of rolls of that spawnclass.
* Beneficiary Unit Type (optional). This entry cannot be selected if the player's character is not of this unit type. Used to make class-specific quest rewards.
* Rarity Override (optional). Only applies if this entry is for a unit type. Unknown if this is inherited by recursed spawnclasses. Ignores the rarities defined in items' dat files, causing all eligible items to have the same chance to spawn -- including those that would otherwise never spawn due to rarities of 0.
* Level Bonus (optional). Applies if this entry is for a unit type. Unknown if it also applies if this entry is for a spawnclass that doesn't specify its own level bonus. Loot gets generated by this entry as if the monster was this many levels higher than it actually was.
* Magic Find % (optional). Applies if this entry is for a unit type. Unknown if it also applies if this entry is for a spawnclass that doesn't specify its own MF bonus. Loot gets generated by this entry as if the player has this much more MF% than the player actually has.
* Ignore range (true or false). Only applies if this entry is for a unit type. If set to "true," then all items from the unit type are candidates to drop instead of only items with a level range that includes the dead monster's level.
* Force enchant (true or false). If set to true, item cannot be white; must be green or better.
* No unique (true or false). Only applies if this entry is for a unit type. Unique items are excluded from being candidates to drop.
* No magical (true or false). Only applies if this entry is for a unit type. Blue items are excluded from being candidates to drop.
* No set pieces (true or false). Only applies if this entry is for a unit type. Set items are excluded from being candidates to drop.
* Only set pieces (true or false). Only applies if this entry is for a unit type. Everything other than set items are excluded from being candidates to drop.
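To make the structure above concrete, here is a rough sketch in Python of how a spawnclass entry and a weighted roll might be modeled. The field names, values, and the exact handling of the -1 "always picked" weight are assumptions for illustration only; they are not the game's actual data format.

```python
import random

# A toy spawnclass: each entry has a weight plus a payload that is either a specific item,
# a unit type, or another spawnclass to recurse into, with optional flags as described above.
spawnclass = [
    {"weight": 900, "unit_type": "NONE",      "min": 1, "max": 1},
    {"weight": 90,  "item": "Mana Potion 3",  "min": 1, "max": 2},
    {"weight": 10,  "unit_type": "1H Sword",  "min": 1, "max": 1, "force_enchant": True},
    {"weight": -1,  "spawnclass": "GoldPile", "min": 1, "max": 3},  # weight -1: always included
]

def roll_spawnclass(entries):
    """Pick one entry by weight; entries with weight -1 are included on top of that pick."""
    always = [e for e in entries if e["weight"] == -1]
    weighted = [e for e in entries if e["weight"] > 0]
    picked = random.choices(weighted, weights=[e["weight"] for e in weighted], k=1) if weighted else []
    # Each selected entry then yields between its minimum and maximum items/rolls.
    return [(e, random.randint(e["min"], e["max"])) for e in picked + always]

for entry, count in roll_spawnclass(spawnclass):
    payload = entry.get("item") or entry.get("unit_type") or entry.get("spawnclass")
    print(count, "x", payload)
```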
+ Again, for the sake of simplicity, I'm going to be proceeding as if we just picked one spawnclass entry; but remember that we could have had multiple entries picked if some had weights of -1, so it's possible to go through steps 2b, 2c, and 3 multiple times per spawnclass roll. And that's on top of going through step 2 multiple times if the monster had multiple treasure entries and/or multiple rolls for a treasure entry.
+ Step 2b. Magic Find!
+ This step only applies if a spawnclass entry for a unit type gets selected. If a spawnclass entry for a specific item was selected, skip to step 3.
+ 1. A spawnclass contains two entries: one for the unit type "potion" and one for the unit type "legendary sword," each with equal weight. You will get a potion 50% of the time regardless of your MF%.
+ 2. A spawnclass contains two entries: one for the unit type "potion" and one for the unit type "1H Sword," each with equal weight. You will get a 1H Sword 50% of the time regardless of your MF%. If you get a 1H Sword, then the rarity of the sword will be affected by your MF%.
+ So, for example, the baseline chance for a blue item is 90 / (10k + 100 + 90 + 7) ~= 0.0088.
+ So, for example, with 10% MF, your chance to find a blue item would increase to 270 / (10k + 200 + 270 + 21) ~= 0.026.
+ 1. There is no plain English way to express what MF% does in TL2. Intuitive statements like "100% MF doubles your chance to find magic items" are nowhere near accurate.
+ 2. The relationship between MF% and the actual chance of getting a particular rarity is non-linear with diminishing returns.
+ 3. The hidden bonus multipliers mean that low amounts of MF% have a much, much bigger influence than you would intuitively expect from the number displayed. For instance, our example with 10% MF nearly tripled the actual odds of getting a blue item.
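For readers who want to check those figures, here is a minimal Python sketch of the arithmetic quoted in the two examples above. The tier weights (10,000 / 100 / 90 / 7 at baseline, and 200 / 270 / 21 with 10% MF) are simply the numbers given in the text; the underlying tables and the exact hidden-multiplier rules are not reproduced here, so treat this as illustration only.

def rarity_chance(weight, all_weights):
    # Chance of landing in a given rarity tier = its weight / sum of all tier weights.
    return weight / sum(all_weights)

print(rarity_chance(90, [10_000, 100, 90, 7]))     # ~0.0088 baseline chance of a blue
print(rarity_chance(270, [10_000, 200, 270, 21]))  # ~0.026 with 10% MF, nearly triple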
+ First, unless the spawnclass entry had "ignore range" set to true, the pool of items in the unit type is filtered by level. Items have a level range bracketed by a minimum and maximum in their dat file. (I don't remember the default values for if these fields are left blank, but they're something like item's level +/- 4.) Items are only allowed to stay in the pool of candidates if the monster's level (as modified by any level bonus in the spawnclass entry) falls within that range.
+ Second, the rarities of the items remaining in the candidate pool are added up, and each item has the chance to drop equal to its rarity divided by the sum of all the rarities. Items with a rarity of 0 cannot spawn unless a rarity override is specified in the spawnclass entry (in which case all items will have an equal chance).
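To make those two steps concrete, here is a rough Python sketch of the level filter and the rarity-weighted pick. The field names (level_min, level_max, rarity) are placeholders of my own, not the actual fields in the dat files, so read it as an approximation of the rule rather than the real selection code.

import random

def pick_drop(candidates, monster_level, rarity_override=None, ignore_range=False):
    # Level filter: unless "ignore range" is set, an item stays in the pool only
    # if the monster's (possibly level-bonused) level falls inside the item's range.
    if not ignore_range:
        candidates = [c for c in candidates
                      if c["level_min"] <= monster_level <= c["level_max"]]
    # Rarity weighting: chance = item rarity / sum of all rarities.  With a
    # rarity override every remaining item gets the same weight instead, which
    # is why rarity-0 items can suddenly drop.
    if rarity_override is None:
        weights = [c["rarity"] for c in candidates]
    else:
        weights = [1] * len(candidates)
    return random.choices(candidates, weights=weights, k=1)[0]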
+ The affixes providing magical properties are defined with a list of item types and a range of item levels they can appear on. Affixes are selected by a system that should be familiar to you by now -- the weights for all eligible affixes are added up, and each affix's chance to appear equals its weight divided by the sum of all the weights.
+ The process of rolling for affixes continues until the item does not have enough "slots" left to pay for any of the available affixes.
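The affix loop can be pictured roughly like the sketch below. The weights and slot costs are invented for illustration (the real tables live in the affix dat files), so this only shows the shape of the rule: keep rolling until nothing left is affordable.

import random

def roll_affixes(slots, affixes):
    # Each affix has a selection "weight" and a slot "cost" (names are mine).
    chosen = []
    while True:
        affordable = [a for a in affixes if a["cost"] <= slots]
        if not affordable:      # nothing the item can still pay for -> stop rolling
            return chosen
        pick = random.choices(affordable,
                              weights=[a["weight"] for a in affordable], k=1)[0]
        chosen.append(pick)
        slots -= pick["cost"]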
+ I think an example may help make sense of the system, so let's do one.
+ This example is quite long, so I'm going to put it in a spoiler. I do recommend reading it though, since it really helps to clear things up. | http://torchmodders.com/wiki/doku.php?id=loot_works&rev=1549839604&do=diff |
Distributes the rows in an ordered partition into a specified number of groups. The groups are numbered, starting at one. For each row, NTILE returns the number of the group to which the row belongs.
Is a positive integer expression that specifies the number of groups into which each partition must be divided. integer_expression can be of type int, or bigint.
Divides the result set produced by the FROM clause into partitions to which the function is applied. For the PARTITION BY syntax, see OVER Clause (Transact-SQL).
Determines the order in which the NTILE values are assigned to the rows in a partition. An integer cannot represent a column when the <order_by_clause> is used in a ranking function.
If the number of rows in a partition is not divisible by integer_expression, this will cause groups of two sizes that differ by one member. Larger groups come before smaller groups in the order specified by the OVER clause. For example if the total number of rows is 53 and the number of groups is five, the first three groups will have 11 rows and the two remaining groups will have 10 rows each. If on the other hand the total number of rows is divisible by the number of groups, the rows will be evenly distributed among the groups. For example, if the total number of rows is 50, and there are five groups, each bucket will contain 10 rows.
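As a rough illustration of that distribution rule (this is not the T-SQL from the examples that follow, just the arithmetic), the group sizes can be computed like this:

def ntile_sizes(total_rows, groups):
    # Larger groups come first: the first `remainder` groups get one extra row.
    base, remainder = divmod(total_rows, groups)
    return [base + 1 if i < remainder else base for i in range(groups)]

print(ntile_sizes(53, 5))   # [11, 11, 11, 10, 10]
print(ntile_sizes(50, 5))   # [10, 10, 10, 10, 10]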
NTILE is nondeterministic. For more information, see Deterministic and Nondeterministic Functions.
The following example divides rows into four groups of employees based on their year-to-date sales. Because the total number of rows is not divisible by the number of groups, the first two groups have four rows and the remaining groups have three rows each.
The following example adds the PARTITION BY argument to the code in example A. The rows are first partitioned by PostalCode and then divided into four groups within each PostalCode. The example also declares a variable @NTILE_Var and uses that variable to specify the value for the integer_expression parameter.
The following example uses the NTILE function to divide a set of salespersons into four groups based on their assigned sales quota for the year 2003. Because the total number of rows is not divisible by the number of groups, the first group has five rows and the remaining groups have four rows each.
The following example adds the PARTITION BY argument to the code in example A. The rows are first partitioned by SalesTerritoryCountry and then divided into two groups within each SalesTerritoryCountry. Notice that the ORDER BY in the OVER clause orders the NTILE and the ORDER BY of the SELECT statement orders the result set. | https://docs.microsoft.com/en-us/sql/t-sql/functions/ntile-transact-sql?view=sql-server-2017 |
Glen Canyon Dam
The Glen Canyon Dam is an arch-gravity dam that impounds the Colorado River in Arizona. By storage capacity, the resulting reservoir, Lake Powell, is the second-largest reservoir in the United States after Lake Mead.
Dam
The dam was designed by the Bureau of Reclamation and built from 1956 to 1964 at a cost of 187 million U.S. dollars. With a structural height of 216 meters above bedrock, it is the fifth-highest dam in the U.S.; the height above the former river bed is 178 m. The crest is 475 meters long and 7.6 m wide. The wall is 91 meters thick at its lowest point, reaching its maximum thickness of 106 m at the right abutment, and contains 3,750,000 m³ of concrete. Because of this thickness it can no longer be described as a pure arch dam: part of the water load is carried as in a gravity dam, so, like the Hoover Dam, it is an arch-gravity dam.
During construction, 4,212,551 cubic meters of sand and rock had to be moved.
Spillways and outlet works
The wall has four outlet pipes with a diameter of 2 m, through which 420 cubic meters of water per second can flow.
Eight penstocks, with diameters tapering from 4.6 to 4.3 m, carry a total of 940 cubic meters of water per second to eight turbines (155,550 hp), driving eight generators with a combined capacity of 1,296 MW. Electricity from the Glen Canyon Dam supplies the states of Wyoming, Colorado, Utah, New Mexico and Arizona with energy.
On each side of the dam a spillway tunnel, tapering in diameter from 15 m to 12 m, leads through the rock; together they can discharge up to 5,890 m³ of water per second. The spillways are needed only when large volumes of water must be released to lower the reservoir level or to keep the wall from being overtopped during floods. During one use of the spillways it was noticed that the exiting water, unlike the water of Lake Powell, had a distinct red coloration. Inspection after the spillways were closed showed that the water had caused significant erosion of the red sandstone inside the tunnels. To prevent further erosion during later use, both tunnels were lined with concrete.
The combined maximum discharge capacity of the outlet works is 5,890 + 420 + 940 = 7,250 m³/s.
History of the dam
The Glen Canyon Dam was planned and constructed as part of the Colorado River Storage Project. Its purpose was to create a water reservoir for the water-poor states of the Southwest while also generating electricity for the ever-growing demand; in addition, the dam made it possible to prevent the recurrent floods in the downstream regions.
From 1946 to 1948, Glen Canyon was studied by engineers and geologists of the Bureau of Reclamation to find the right place for the dam. The site they finally chose united several advantages:
- The area behind the dam forms a basin that can hold a large amount of water.
- The walls of the gorge at this point are very steep and very close together.
- The rock of the canyon walls and the ground is firm enough to give the necessary support to the dam.
- There was enough sand and rock nearby for the enormous amount of concrete required for the construction.
Construction work on the dam officially started on October 15, 1956. To divert the Colorado River during construction, a tunnel was blasted through the red sandstone on each side of the gorge so that the river bed at the construction site could be drained. Since the road from one side of the gorge to the other was more than 200 miles long, the Glen Canyon Bridge was built in the immediate vicinity and completed in 1959. Concreting of the dam began on June 17, 1960, continued day and night without interruption for three years, and ended on September 13, 1963. A camp for the construction workers and their families was built nearby in 1957, which later became the city of Page. Seventeen construction workers died during the ten-year construction period. Damming of the river began in 1963, and from that year the turbines and generators were installed; the last two generators were commissioned in 1966. On October 22, 1966, Lady Bird Johnson, the wife of U.S. President Lyndon B. Johnson, dedicated the dam. It took 17 years, from March 13, 1963 until June 22, 1980, to fill the reservoir completely. With a maximum depth of about 171 m at the dam, Lake Powell holds 33.3 billion m³ (= 33.3 km³) of water and is therefore, after Lake Mead, the second-largest reservoir in the United States. The area of the reservoir is variously given as 640 km², 658 km² or 1,627 km².
Effects on the environment
The construction of the Glen Canyon Dam has had far-reaching consequences for nature. Regulating the flow changed the amount of transported sediment and the seasonal fluctuations of the water temperature. As a result, the water quality of the Colorado River has improved considerably below the dam and in the reservoir. Since most of the sediment settles in Lake Powell, the water is now blue-green and clear instead of red and muddy. This made it possible to stock various types of perch in the lake, and rainbow trout now live in the Colorado River below the dam.
On the other hand, the damming of the Colorado River has also brought significant disadvantages for the further course of the river with it, especially in the area of the Grand Canyon.
The reduced number of floods since regulation began has also shrunk the sandbanks along the shores and allows vegetation to encroach on the river bed. Because of the low water velocity, debris accumulates at the mouths of the side tributaries, increasingly constricting the flow, and the backwater areas where native fish species live gradually fill with sand. The fish Gila cypha, a species protected by the federal government, is regarded as a flagship species for these changes, and experiments with the water level have been carried out for its protection.
Since the construction of the dam, more water than the 930 m³/s that flows through the penstocks has been released from the reservoir on only a few occasions:
- On 5 March 2008 a controlled flood lasting 60 hours was again released in order to study its impact on nature.
- In May 2012, after evaluating the results of the previous attempts, a plan was presented to raise the flow through the dam to as much as 1,275 m³/s for irregular periods of a few hours up to four days. This release pattern will be maintained through 2020.
- The first floods under the new model took place in November 2012 and November 2013.
Carl T. Hayden Visitor Center
At the western end of the dam is the Carl T. Hayden Visitor Center. It is owned by the Bureau of Reclamation (USBR) and operated by the National Park Service.
From the large round building, which towers above the dam, the huge panoramic windows offer a unique view of the reservoir, the wall, the bridge and the further course of the Colorado River. An exhibition documents the construction of the Glen Canyon Dam and the Glen Canyon Bridge with images, text and video. Other exhibits show pictures of the local area and the arts and crafts of the Native Americans. At the reception you can sign up for a tour of the dam. The Visitor Center is open daily except Thanksgiving, December 25 and January 1.
Please download and follow along with the examples and experiment - always the best way to learn!
Today we're going to be looking at UV's and what they actually mean in terms of Math and how we can play with them for our benefit!
UVs
Firstly I've just plugged the UV output into the emissive node and we get Reds, Yellows and Greens. What's going on? Well if we use our Mask node and break out the channels we can see it's actually just a black to white gradient in the red and green channels and blue doesn't exist. - UV's are a Vector 2.
Basically what we have here is a lookup table - each of the pixels in the layout is uniquely identified by its red and green component and that tells the engine what pixel to refer to from the texture the UV layout is plugged into.
Here we're just using the UV node as most of you will already know how - plugging it into a Texture Sample. Unreal automatically assumes default UV values if this node isn't connected so you may never have needed to connect like this before.
Notice here that we've used a Texture Sample, and masked it to a Vector2, instead of a UV layout. It's just two gradients I've created in Photoshop but we're getting a really noisy result - so what gives? Basically compression is the issue here - Textures in Unreal are compressed to save memory but UV's are very sensitive to change so we're seeing those compression artifacts. Note if you just want a gradient it's cheaper and more accurate to mask a UV sample rather than importing a texture, you might remember I used this technique in the Math examples.
These next three nodes are just showing how Tiling works - All the UV nodes are setup with a Tiling of 4,4. Notice when the first Node is connected you're seeing a lot of blooming Yellow, this might remind you of something from the Math example - Unreal is trying to display numbers over 1, so if we Frac it down we can see the gradient is repeating over the layout. Note the whole number part of a UV layout is totally ignored, only the fractional part is relevant. This is the reason that textures tile.
Note there's no reason that a texture has to tile uniformly - here I've set up a tiling of 1,0.1 so only the first column of the texture is visible. This can be useful for things like packing multiple textures into one sheet or for creating a long texture and then un-stretching it in the material so only a fraction is visible at a time.
So now we know that UV's are just Mathematical gradients, what happens when we start adding numbers to them. Well, as you might have guessed, they move! Adding numbers to R moves things horizontally, G vertically. So what happens if we add a changing value, such as Time - a scrolling texture! and how about if we just want to scroll in a single axis? Simple, just add the Time to a single channel.
So what we've created here is what Unreal refers to as a Panner node - but it's basically doing the same thing, just moving the UV's over time by adding values to the U and V channel. Notice that if we plug a constant value into the Time input of the Panner we get control over the movement without any scrolling, useful to directly control your material offsets.
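If it helps to see the idea outside the material editor, here is a tiny Python sketch of what a panner effectively computes. The names are mine, not Unreal's; it just shows that panning is "add a time-based offset, then keep the fractional part".

def frac(x):
    # Keep only the fractional part, like the Frac node / how tiling wraps.
    return x % 1.0

def pan(uv, time, speed):
    u, v = uv
    speed_u, speed_v = speed
    return frac(u + speed_u * time), frac(v + speed_v * time)

print(pan((0.25, 0.5), time=3.7, speed=(0.1, 0.0)))  # scrolls in U only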
Similar to the Panner is the Rotator. This is doing much more complex math but basically it's doing the same thing, just changing the input UV values mathematically to create a rotation motion.
So now we're getting to some interesting things - what happens if we scroll a texture and add it to our UV layout. Well with this kind of mask we get a strange bulging effect, very cool. Note we've had to scale the texture right down, UV's are very sensitive remember. This is super useful and the basis for all sorts of VFX work - water, fire, cloth etc.
Here's another example where we're just adding a simple noise texture to our UV's - it almost looks like we're looking at the texture through a water surface. Note the White values on the right hand side of the image - here the texture has wrapped around from the left. Because we're adding values in both axes the noise is going to move things up and right - maybe not a problem if we're dealing with a tiling texture but if we do want to keep things centred that's easy enough - just subtract 0.5 from our input noise before we multiply and add - now our texture range goes from 0 - 1 to -0.5 - 0.5 so the noise equally adds and subtracts from the UV's and keeps the image centred.
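In the same spirit as the panner sketch above, the re-centred distortion boils down to something like this (again made-up names, just the math):

def distort(uv, noise_sample, strength=0.05):
    u, v = uv
    # Re-centre the 0..1 noise around zero so it pushes the UVs both ways,
    # then scale it right down because UVs are very sensitive.
    offset = (noise_sample - 0.5) * strength
    return u + offset, v + offset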
Here we've just plugged a Time Sine into our Panner to create an oscillating texture, but we're not getting a tiling effect of the black and white repeating like we'd expect. So what gives? Here the trick is in the texture - if you double click it and open its settings and click the drop down to extend the Texture section there are options for X and Y-Axis Tiling Methods. In this case we've set this to Clamp so rather than Tiling the Texture the Engine just repeats the last pixel, which in our case is pure black or pure white - creating this sliding door effect. Super useful.
A bit less useful but included for the sake of completeness - there's also an option to Mirror UV's instead of Tile or Clamp. Here we've created a circle but if we inspect the texture we can see it's actually just a quarter of a circle. Could be useful in certain circumstances.
Here we're plugging a texture, masked to a Vec2, into our fake UV layout from earlier - so what's happening. Well if you remember the Black and White values of a UV layout are just a lookup table. So here the Black and White Vec2 mask we're using is just being remapped to the colours of the second texture - exactly like the Gradient feature works in Photoshop. This is super useful for things like Fire where we can use lots of different black and white masks to create our shape and motion and then use the same gradient on all our fires to ensure a consistent colour and allow for quick and easy editing.
Lastly you've probably noticed the TexCoord[0] in the Name of the Node. Well TexCoord is just Unreals name for UV and is a shortened version of Texture Coordinate. The [0] refers to UV layout 0 - it's really useful to unwrap a model in multiple ways and keep them all in the model for different uses - for example a Character might have a large tiling unwrap for things like cloth detail normals and then another unwrap that is just a front Planar projection which allows for a quick way to paint mud kickup on the bottom of the trouser leg etc.
Extending your lease – an overview
When surveyors describe a lease as ‘short’ they almost certainly mean that there are fewer than 80 years of the term remaining, and the reason why that is significant is that a lease with an unexpired term of less than 80 years becomes considerably more expensive to extend. It is therefore important to be aware of the length of your lease and to initiate the process of extending before it becomes prohibitively expensive.
Once the unexpired term of a lease falls below 80 years not only does it immediately become more expensive to extend but the cost accelerates with every passing year.
The ‘rules’ for extending a lease are found in the Leasehold Reform, Housing and Urban Development Act 1993 (the Act) and a new lease created under the Act will:
- Have a term expiring 90 years after the termination date of the original lease.
- Be at a ‘peppercorn’ (i.e. zero rent).
- Retain any other provisions that were present in the original lease.
The first step in extending your lease is to have a valuation prepared by a surveyor specialising in leasehold matters. The surveyor will inspect the property, take accurate measurements and make a note of any improvements you have made. He will also want to see a copy of your lease.
Using this information he will calculate what he considers to be a reasonable sum of compensation payable to the Freeholder in return for granting a new longer lease.
The compensation (often referred to as the ‘Premium’) is generally made up of three parts:
Part 1: Compensation for loss of ground rent
As mentioned above, once the new lease has been granted there will be no further ground rent to pay. The Freeholder is therefore losing that future income and must be compensated – calculation tables are used to work out what the current value of the right to receive those future ground rents is. If the ground rent was due to increase in the future that must be factored in to the calculation.
Part 2: The Reversion
A lease of set length will ultimately run its course and if that were to happen the leasehold asset would be returned to the Freeholder at the end of its term. If that day is postponed for a further 90 years there’s a loss to the Freeholder and compensation must be paid. The current value of the right to receive a flat worth say £250,000 in 78 years might be in the region of £5,000 so the surveyor must ask ‘how much would I need to invest today to receive £250,000 in 78 years time?’ and add his answer to the premium.
Part 3: Marriage Value
Marriage value is what makes it so much more expensive to extend a lease that is shorter than 80 years. In its most basic definition, marriage value is 50% of the uplift in value attributable to the granting of the extension (or rather the new lease).
A property with a new lease of, say, 167 years will be worth more than the same property with its original 78 year lease. Our £250,000 flat may be worth £270,000 with its new long lease and under the terms of the Act this increase in value must be shared equally between the Freeholder and leaseholder, therefore a further £10,000 is added to the premium. If the original lease term is more than 80 years any marriage value is disregarded.
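To see how the three parts of the premium combine, here is a very rough Python sketch using the worked figures above. The 5% deferment/capitalisation rate and the £250-a-year ground rent are assumptions for illustration only; they are not prescribed by the Act, and a surveyor's actual calculation will use rates and relativities specific to your building.

def present_value(future_sum, years, rate=0.05):
    # How much you would need to invest today to have future_sum in `years` years.
    return future_sum / (1 + rate) ** years

# Part 1: capitalised value of the ground rent the freeholder gives up
ground_rent_loss = sum(present_value(250, year) for year in range(1, 79))

# Part 2: the reversion, i.e. the flat handed back in 78 years' time
reversion = present_value(250_000, 78)      # in the same ballpark as the £5,000 above

# Part 3: marriage value, half the uplift, only when under 80 years remain
marriage_value = 0.5 * (270_000 - 250_000)  # £10,000 in the example

premium = ground_rent_loss + reversion + marriage_value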
Having received your valuation the next step is to serve notice on the Freeholder. The notice informs the Freeholder that you wish to exercise your right to a new, longer lease under the Act and states how much you intend to pay (the valuation figure).
The Freeholder will have 2 months to respond with a counter offer – that will generally be done via his solicitor and include his own surveyor’s valuation. There is then likely to be a period of negotiation between the parties’ surveyors which will hopefully result in a compromise figure. If a stalemate is reached the matter can be referred to the Leasehold Tribunal who will have the final say.
The main benefit of applying for a lease extension under the Act, as opposed to by informal negotiation with the Freeholder, is that you know the lease will be granted at a reasonable premium and within a structured time frame. Alternatively, you can look to approach your freeholder on an informal basis, but this needs to be managed very carefully, as in the majority of instances, freeholders simply mess leaseholders around in an effort to see their lease deplete even further or at the very least just ask for a very inflated premium. That is not to say that a good deal cannot be struck via this route but this will require strong guidance from your Valuer.
Q:
Lorenz map for the Rössler system
Possible Duplicate:
How to find all the local minima/maxima in a range
I have the solution of the following non-linear system:
sol1 = NDSolve[
{x'[t] == -(y[t] + z[t]),
y'[t] == x[t] + 0.2 y[t],
z'[t] == 0.2 + x[t] z[t] - 5.7 z[t],
x[0] == 1, y[0] == 1, z[0] == 1
},
{x, y, z},
{t, 0, 100}
]
How can I find the $k^{th}$ local maximum of $z(t)$, i.e. $z(k)$, and then plot $z(k+1)$ vs. $z(k)$? There is an example in the "Mapping local maxima" section in Rössler attractor's wiki page. I am working with Wolfram Mathematica 8.0.
A:
The maxima will occur at points where the derivative is zero and, except in special cases, they will alternate with minima. You can easily detect where the derivative is zero using event detection. In V9, you do this like so.
Clear[x, y, z, sol, pts];
{{sol}, {pts}} = Reap[
NDSolve[
 {x'[t] == -(y[t] + z[t]),
  y'[t] == x[t] + 0.2 y[t],
  z'[t] == 0.2 + x[t] z[t] - 5.7 z[t],
  x[0] == 1, y[0] == 1, z[0] == 1,
  WhenEvent[z'[t] == 0, Sow[t]]
},
{x, y, z}, {t, 0, 100}
]];
z = z /. sol;
maxPts = Last /@ Partition[pts, 2];
Plot[z[t], {t, 0, 100}, PlotRange -> All,
Epilog -> Point[{#, z[#]} & /@ maxPts]]
Note that the extremes are found reliably during the solution of the differential equation and there's no need to numerically solve equations involving interpolating functions afterward.
Now that you've got the maxima in a list, you can do anything you want with them.
| |
It seems that other bloggers have experienced the same thing, and Jen, over at I Heart Organizing, is one of them. During the month of October she has a challenge called “Dare to DIY With A New Supply”.
Since I am cleaning/organizing/making-over my laundry room during the One Room Challenge, I thought I’d throw in some cleaning tips you can use… in the laundry room. When this challenge appeared, I knew where I was going to start: DIY Wool Dryer Balls. I have been wanting to try these FOREVER. They fit the challenge perfectly because this is my first time EVER using yarn. I do not know how to crochet or knit, so I’ve never had a reason to buy yarn in the past.
I know. These are items that have never once made an appearance in the cleaning section, nor would I really think of them as cleaning products… You learn something new every day.
Pinch it in the middle and wrap the yarn around the pinched area.
Cut the yarn and tuck it under some of the other layers.
I was able to make 6 balls with the 2 skeins of yarn I bought.
Next, you’ll want to place each ball into the leg of some nylons, tying a knot in between each ball.
Take these nylon-wrapped wool balls, and throw them into your next load of towels, washing them in HOT water. After washing, throw them in the dryer on HIGH heat. This process is what causes the wool to felt and stick together. Once you remove them from the dryer, cut the nylons to free the wool balls.
If you’d like to add a little scent to your dryer balls, just place a drop or two of essential oils onto a couple of the balls. I, personally, go with lavender, but if you have a different favorite oil, go for it!
That’s almost a 20% decrease in dry time. The first 2 items I pulled out of the laundry were a robe and a pair of shorts, and I could hear the static as I pulled them out. I was SO disappointed, but as I removed all the other clothes, they seemed completely static-free. I was a little baffled, so I looked at what others had to say about their experience. I found that, while wool dryer balls work great at reducing static for most fabrics, it doesn’t do well with synthetic materials, like Polyester. I checked the label on both the robe and shorts… 100% Polyester. I tried another load with my towels, socks, and undergarments (all 100% cotton items), and when the load was finished, there was NO static. I wanted to let you know my true experience so you have all the information.
UPDATE: I found the easiest trick to eliminate static, whether natural or synthetic fibers… You can read all about it HERE.
Dryer Sheets= I’ve decided not to go with the cheapest dryer sheets I could find for this cost breakdown because I don’t think generic dryer sheets work well, so I, personally, would never use them. Bounce Dryer Sheets at Walmart= $8.94 for 240 sheets.
If I were to dry 6 loads of laundry each week, these would last me 40 weeks.
2 Skeins of Wool= $10.48… Now if you have an old wool sweater, you could unravel it and use it, thus saving even more money.
Nylons= Hopefully you have an old pair of nylons which you could use for free. If not, Walmart has a pair for $1.99.
I realize I am just starting to use these balls, so I can’t tell you how long they will truly last, but according to information I found online, many report 4+ years. Plus, by reducing drying time, you save money on gas/electricity.
Over the course of 4 years you would spend $46.49 on dryer sheets if averaging 6 loads of laundry each week. Over the same time-frame, you would spend up to $12.47 on DIY Dryer Balls (if you need both nylons and skeins of wool). The savings over 4 years is $34.02, or $8.50 each year.
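(In case you want to see where those numbers come from: 6 loads a week x 52 weeks x 4 years = 1,248 loads. At 240 sheets per box that works out to 5.2 boxes x $8.94 = $46.49 on dryer sheets, versus $10.48 + $1.99 = $12.47 for the DIY version, a difference of $34.02.)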
… and let’s be honest, these wool dryer balls are a whole lot cuter than dryer sheets. Right?
Tomorrow I’m sharing all my wonderful plans for the laundry room, so I hope you’ll stop over.
My daughter seemed to react to the dryer balls I bought recently. Not sure what type of wool.
Have you found anyone react to DIY wool balls?! Or something better to use for sensitive skin?!?!
Not only are they cute, the wool balls are chemical-free! Dryer sheets have been known to be the cause of dryer fires as well as coating the walls of the dryer.
This is an excellent tip with so many advantages. Use at least three in the dryer and lower the heat to reduce static.
Another HUGE bonus is that NOT using dryer sheets reduces garbage which does not degrade or decompose. Dryer sheets pollute!
These wool balls are ecofriendly!! Excellent!
PS Do you only toss in one ball per dryer load or more than one? Thanks!
Question: How many wool dryer balls do you use at a time?
Great question Dottie. I made these over a year ago, and using the tutorial above, I was able to make 6 dryer balls, so that’s the number I used. I have since lost one (probably with all those missing socks 🙂 ), so now I just use 5.
These are so cute! One of my favorite benefits of using dryer balls is that they don’t gum up your lint screen like dryer sheets can. Also, you can add a few drops of essential oils for a custom scent to your laundry.
Just as a word of caution… I’d keep the lid off the jar if you decide to use one. My jar started forming condensation on the inside, perhaps because the balls don’t get completely dry to their core. Leaving the lid off helped. I haven’t lost any balls yet, but I find them in the most random places when folding clothes… down arm sleeves, etc. Thanks for stopping over McKenzie!
Hi Erin, I made dryer balls several years ago, but over the years they’ve disappeared…hmmm…guess I need to make some more. Thanks for prompting me to put that back on my to do list.!
awesome, just love the look so cute!
Thanks for the insight Linda! Enjoy the rest of your day! | https://www.lemonslavenderandlaundry.com/cleaning-tip-tuesday-diy-wool-dryer-balls/ |
By Colleen McTiernan
From arranging drop off and pick up schedules to getting the kiddos back into the swing of getting up early and doing homework, the beginning of the school year can be overwhelming! Just thinking about preparing lunches, let alone healthy lunches, can be enough to send you into a panic! But with these easy ideas, you will be able to send your kiddos into the cafeteria with confidence.
Monday
Breakfast for lunch!
2 whole wheat waffles
Banana
Peanut butter
Almond slivers
Heat the waffles according to package directions, then spread about a tablespoon of peanut butter across the top of each waffle. Slice up the banana and place half of the slices on top of one waffle and half of the slices on the other. Sprinkle with almond slivers for an open-faced waffle sandwich!
Pack your child some plain yogurt topped with her favorite berries to complete the meal!
Tuesday
Buffalo chicken wrap
1 chicken breast
3 tablespoons hot sauce (we recommend Frank’s RedHot)
1 tablespoon olive oil
Salt and pepper to taste
¼ cup broccoli florets
¼ cup shredded cabbage or shredded lettuce
Whole wheat wrap
1 tablespoon light ranch or blue cheese dressing
Shredded cheese of your choice
Prepare this one the night before! Combine hot sauce and olive oil together in a small bowl and set aside.
Cut the chicken into bite-size pieces and season with salt and pepper.
In the meantime, heat a small skillet over medium-high heat. Add a teaspoon of olive oil to the skillet and cook the chicken for about 6 minutes. Add the hot sauce mixture to the pan and allow the chicken to absorb the sauce, about 2 minutes. Remove the chicken from the skillet and set aside.
Over medium heat, add the broccoli to the same skillet and cook until tender, about 4 minutes. Remove from heat.
Spread the dressing on the wrap, and then add the chicken, broccoli, shredded cabbage (or lettuce) and cheese. Roll tightly and cut in half.
Pack your child some celery sticks and baby carrots to complete the meal!
Wednesday
Breakfast for lunch, round 2
Plain bagel thins
Lettuce
Tomato
3 slices deli turkey
2 slices Swiss cheese
Place one slice of cheese on each bagel half. Place lettuce and a slice of tomato on one half, and turkey on the other, then combine. Cut the sandwich in half so it is easier for little hands to eat.
Serve with apple slices and pretzels to complete the meal!
Thursday
A spin on the classic PB&J
2 slices of bread
Almond butter
Sliced strawberries
Spread almond butter on each of your two slices of bread and then top with strawberries before combining the two sides. To add a bit of pizzazz to this sandwich, use sandwich cutters or even cookie cutters to give it a fun shape!
Serve with cheese cubes and sliced cherry tomatoes to complete the meal!
Friday
Ham and cream cheese pinwheels
Tortilla (leave out for a gluten-free version)
2 thin slices of deli ham
Reduced fat cream cheese
1 pickle spear (or a quartered slice of cucumber)
Lay your tortilla flat and spread cream cheese evenly across it. Cover the tortilla with the slices of ham — it is OK if the slices overlap! Place the pickle spears at one end of the tortilla, and then roll up your tortilla into log shape. Cut your rollup into 1-inch pieces.
Serve with sliced grapes and your kiddo’s favorite cookie to complete the meal!
*For gluten-free version, place cream cheese on the ham slices instead of on the tortilla.
Do not forget to use an ice pack or thermos to keep lunches fresh through the day! | https://gigglemagazine.com/a-week-of-simple-healthy-school-lunches/ |
Fans of the Need for Speed racing game series are still waiting for any information about a new installment. Meanwhile, a post that appeared today on Axel Nu's official Instagram profile suggests that motorcycles may appear in the new part of NFS.
Art of Rally Reviews Depict a Love Letter for Cars and Racing
Milosz Szubert, 25 September 2020, 15:06
Last Wednesday, art of rally, a minimalist racing game from the independent studio Funselektor Labs, debuted on PC. The title gathers positive opinions in the industry media.
First Reviews: Project CARS 3 - A New Direction
Bart Swiatek, 25 August 2020, 13:47
A lot of reviews of Project CARS 3, the latest game from Slightly Mad Studios, have appeared on the web. It seems that the developers decided to take their flagship series in a slightly different direction, because we're dealing with a game that is very different from previous installments.
DiRT 5 System Requirements
Adrian Werner, 17 August 2020, 10:55
We have learned the system requirements of the PC edition of DiRT 5, the latest installment of Codemasters' racing game series.
NFS: Hot Pursuit Remastered May Launch This Year
Adrian Werner, 16 August 2020, 15:39
Thanks to Amazon, information about Need for Speed: Hot Pursuit Remastered, a refreshed version of the iconic 2010 installment of the NFS series, has leaked.
Forza Horizon 3 Will Disappear From Microsoft Store in Late September
Bart Swiatek, 12 August 2020, 11:49
The creators of the racing game Forza Horizon 3 informed that at the end of September both this game and its expansions will be withdrawn from sale in Microsoft Store.
DiRT 5 Delayed on PC, PS4 and Xbox One
Agnes Adamus, 11 August 2020, 17:59
Another game gets delayed. Codemasters have announced that the release of DiRT 5 on PC, PlayStation 4 and Xbox One will take place a little later than originally announced.
Need for Speed 2021 Leak Shows Gameplay From Alpha Version
Jacob Blazewicz, 06 August 2020, 11:31
An alleged gameplay video from the new installment of Need for Speed is making the rounds on the web. The material shows a very early build of the game, but gives some idea of what Criterion Games may be preparing for us.
Project CARS 3 Hardware Requirements; Campaign For 40 Hours
Jacob Blazewicz, 03 August 2020, 22:03
The team at Slightly Mad Studios revealed the hardware requirements of the PC edition of Project CARS 3. Kris Pope also revealed that Career Mode will provide us with over 40 hours of fun.
Forza Horizon 4 Coming to Xbox Series X With Improved Visuals
Bart Woldanski, 24 July 2020, 15:02
Playground Games revealed that Forza Horizon 4 will hit the Xbox Series X console and offer better graphics.
First Trailer of Forza Motorsport 8 Reveals a Beautiful Game
Paul Wozniak, 23 July 2020, 18:35
During the Xbox Games Showcase we were presented with the first trailer of the new installment of the Forza Motorsport series captured on the game engine.
F1 2020 Launches
Milosz Szubert, 10 July 2020, 20:45
Today, F1 2020, the next instalment of the Formula 1 racing simulator game series from Codemasters launched on PC, PlayStation 4 and Xbox One. A new trailer for the game is now online.
New Trailer Presents Features of DiRT 5
Bart Swiatek, 08 July 2020, 12:22
Codemasters Software, the creators of DiRT 5 have released another trailer of the game. The material focuses on content and features.
Test Drive Unlimited Solar Crown Announced; First Trailer
Michael Kulakowski, 07 July 2020, 21:16
KT Racing and Nacon have officially announced a new racing game - Test Drive Unlimited: Solar Crown. The title resurrects the Test Drive series, which was popular years ago, and offers thrill-seeking players a huge sandbox map on which they will compete in the world's fastest cars.
First Reviews: F1 2020 - Great Culmination of Current Gen
Paul Wozniak, 06 July 2020, 18:16
The first reviews of the latest installment of Codemasters Software's Formula 1 racing game series have appeared online. According to the journalists, it's a very good game, which highlights the developers' experience with the franchise, although it doesn't lack outdated elements. The new My Team mode in particular has been gathering a lot of positive feedback.
Announcement of New Test Drive Unlimited Inbound - New Teaser
Agnes Adamus, 03 July 2020, 17:11
Nacon has released a short teaser related to the new installment of the Test Drive Unlimited series. The official announcement of the game is expected next week.
Trackmania Launches Today; Download Instructions
Jacob Blazewicz, 01 July 2020, 22:15
The free game Trackmania debuted today on PCs. We can download the title on Uplay and Epic Games Store.
Project CARS 3 Gets Release Date
Jacob Blazewicz, 24 June 2020, 19:02
Bandai Namco and Slightly Mad Studios have revealed the launch date of Project CARS 3. The third installment of the realistic racing series will be released before the end of summer.
Need for Speed: Hot Pursuit Remaster in the Works
Michael Kulakowski, 23 June 2020, 20:18
According to still unconfirmed information, Electronic Arts is preparing a refreshed version of Need for Speed: Hot Pursuit from 2010. The title, considered by many to be one of the best in the series, should appear on PCs and consoles before the end of this year.
DiRT 5 Release Date Revealed
Bart Woldanski, 18 June 2020, 15:35
Codemasters has revealed the release date of DiRT 5. The game will initially go on PC and current-gen consoles. The PS5 and Xbox Series X versions will be released later.
Gran Turismo 7 Announced, New Gameplay From PS5
Adrian Werner, 11 June 2020, 22:25
Sony and Polyphony Digital announced Gran Turismo 7. We were shown a game trailer and first gameplay video.
Disappointment and Grief - Gamers Say Goodbye to NFS: Heat
Bart Swiatek, 09 June 2020, 13:50
Electronic Arts' decision to end support for NFS: Heat seven months after its launch was very much disliked by the players. Message boards were flooded with critical comments and mocking memes.
DiRT 5 - Career Mode Details
Jacob Blazewicz, 08 June 2020, 22:02
Codemasters shared details about the Career Mode in DiRT 5. The devs bet on strong narrative and many kinds of challenges, and the fun will be enhanced by the option to play together with friends.
Criterion Ceases NFS Heat's Development; New Need for Speed in the Works
Agnes Adamus, 08 June 2020, 21:44
Criterion announced that tomorrow the last update for Need for Speed: Heat will be released, adding a cross-play option to the game. It was also announced that the full version of the title will launch in EA Access and Origin Access Basic. The company focuses on working on the new installment of the series.
Release Date and First In-game Trailer of Project CARS 3
Michael Kulakowski, 03 June 2020, 21:31
The third installment of the Project CARS racing game series has been officially announced. The information was accompanied by the first trailer running on the game engine. Project CARS 3 will be released this summer on PCs, Xbox One, and PlayStation 4.
Codemasters Acquires WRC License; New DiRT Rally in 2023
Jacob Blazewicz, 01 June 2020, 20:54
Codemasters has signed an exclusive agreement with the FIA. Under the agreement, the British studio will develop WRC games between 2023 and 2027. The release date of the new installment of the DiRT Rally series was announced.
New Trackmania is a Free Game With Optional Subscription
Jacob Blazewicz, 27 May 2020, 22:29
Ubisoft has revealed the planned subscription model of the free game Trackmania. Those willing to invest in the game from Nadeo will receive, among other things, access to additional activity and car personalization.
Rumor: Gran Turismo 7 Will Launch This Year
Bart Swiatek, 21 May 2020, 16:51
Next Level Racing, a PlayStation-licensed brand that develops professional simulator equipment, suggested in social media entries that Gran Turismo 7 exists and will make its debut in 2020.
DiRT 5 Announced
Paul Wozniak, 07 May 2020, 17:25
Codemasters has just announced a new installment of the DiRT series, which will also be available on Xbox Series X consoles.
Free Forza Street Launches on Android and iOS
Jacob Blazewicz, 05 May 2020, 20:51
Forza Street launched today on mobile devices. For the next month, we will receive valuable gifts for logging in to this free game, especially if we have a Samsung Galaxy.
DiRT - Codemasters Preparing Announcement of the Next Installment
Jacob Blazewicz, 04 May 2020, 21:32
Codemasters revealed the existence of two more installments from the DiRT series. The first one is a spin-off, which we will get to know soon.
First Gameplay From F1 2020
Frozen, 01 May 2020, 12:24
Codemasters has published the first gameplay of F1 2020, the next installment of its racing series based on the official Formula 1 license. The video shows the first of two new tracks - the Dutch Circuit Zandvoort.
Launch of Trackmania Nations Remake Delayed
Milosz Szubert, 23 April 2020, 12:20
The release of Trackmania Nations Remake (whose final title is Trackmania), a refreshed version of the 2006 game developed by Nadeo, has been postponed. The title will debut exclusively on PC on July 1.
F1 2020 Announced With Release Date and Trailer
Jacob Blazewicz, 15 April 2020, 22:00
Codemasters has announced the F1 2020 game. The new version of official Formula 1 games will be released this summer on PC, PlayStation 4 and Xbox One and Google Stadia.
CTR: Nitro-Fueled May Come to PC
Adrian Werner, 09 April 2020, 15:15
A change on Activision's support page suggests that a PC edition of Crash Team Racing Nitro-Fueled, a refreshed version of the classic racing game from the first PlayStation, is planned.
Forza Horizon 4 has Already 685 Cars and More May be Inbound
Bart Swiatek, 07 April 2020, 14:31
A leak published on Reddit suggests that over a hundred new cars may appear in Forza Horizon 4. The rumor is based on information found in the game files.
XCOM 2 and Burnout Paradise Coming to Nintendo Switch
Paul Wozniak, 26 March 2020, 18:23
Electronic Arts have announced that the refreshed version of the racing game Burnout Paradise will be coming to Nintendo Switch this year. So far we do not know the exact release date. A special edition of XCOM 2 is also coming to Switch.
Canceled Formula 1 Races Will Take Place in F1 2019
Paul Wozniak, 20 March 2020, 20:51
The organizers of Formula 1 races announced today that they will prepare virtual equivalents of the canceled Grand Prix races. F1 drivers will take part in them, but the competition will be purely for entertainment, without affecting the official World Championship standings.
Assetto Corsa Competizione Coming to Consoles
Michael Kulakowski, 11 March 2020, 23:07
The racing simulator Assetto Corsa Competizione, which made its PC debut last year, will be released at the end of June on Xbox One and PlayStation 4. With the launch of console ports, the developers also plan to release new DLC sets for all supported platforms.
Test Drive Unlimited 3 Is in the Works - Another Confirmation
Christian Pieniazek, 03 March 2020, 21:25
Test Drive Unlimited 3 is being developed under the wings of Nacon, which took over the rights to this brand in 2016 (still as Bigben Interactive). The production is announced as the biggest project of this publisher and at the same time of the studio that is working on it.
First Gameplay From Trackmania Nations Remake
Adrian Werner, 02 March 2020, 13:55
A recording of the presentation of Trackmania Nations Remake, a new version of the popular racing game from Ubisoft Nadeo, appeared online. The video shows, among others, gameplay and the track editor.
Remake of Trackmania Nations in the Works; PC Launch in May
Konrad Serafinski, 29 February 2020, 21:57
Ubisoft Nadeo is making a new installment of Trackmania. However, instead of a sequel, we will get a remake of the 2006 Trackmania Nations. The title is to be released only on PC in May, this year.
The Devs of Burnout Return With Dangerous Driving 2
Michael Kulakowski, 19 February 2020, 23:08
The devs of the legendary Burnout racing series, now forming the independent studio Three Fields Entertainment, have announced Dangerous Driving 2, a sequel to last year's coldly received original. The advantage of the new title is an open sandbox world and a spectacular model of car damage.
Gran Turismo Devs: 4K is Enough, Next Target is 240 FPS
Paul Wozniak, 17 February 2020, 20:18
Kazunori Yamauchi, the creator of the Gran Turismo series, revealed that he would like to focus on increasing the FPS limit with the new installment of the series. In his opinion, 4K resolution is sufficient, and increasing the number of frames per second is what will translate into a much better gaming experience.
Gran Turismo Sport Has 8.2 Million Users and Billion Completed Races
Paul Wozniak, 15 February 2020, 22:47
Kazunori Yamauchi, the dev of the Gran Turismo game series, revealed during the GT Sport World Tour that the latest installment of the series, GT Sport, has already acquired 8.2 million users. The players have spent more than 303 million hours in the game and completed a billion races.
Need for Speed Returns to Criterion, Creators of Burnout
Jacob Blazewicz, 12 February 2020, 20:38
Electronic Arts has given custody of the Need for Speed series to Criterion Games. Thus, the iconic franchise returned to the British developer after almost seven years, along with some of its former employees.
Flat Out Pack Brings Back the Legend of Colin McRae in DiRT Rally 2.0
Milosz Szubert, 29 January 2020, 22:22
DiRT Rally 2.0 will feature a special big expansion to celebrate the 25th anniversary of the first and only World Rally Championship won by Colin McRae. The DLC will introduce 40 challenges, a Scottish rally and two Subaru cars.
Fast and Furious Returns in the Trailer of Fast & Furious Crossroads
Jacob Blazewicz, 13 December 2019, 10:45
Fast & Furious Crossroads was the last game announced at The Game Awards 2019. The game from Slightly Mad Studios features Vin Diesel and other actors known to fans of the series, as well as a ton of cars and impressive driving.
Forza Horizon 4 Gets Battle Royale Mode
Laty, 12 December 2019, 11:06
Today, Forza Horizon 4 will receive a large free update. A battle royale mode will be added to the game, in which 72 players will be able to play at the same time. We will also get some new cars.
Forza Horizon 4 May Soon Get Over 100 New Cars
Milosz Szubert, 04 December 2019, 12:15
Fans of Forza Horizon 4 found a list of over 100 cars, which are not yet present in the game, in the title's data files. They may be added to the game as part of free updates or paid DLC.
Abyssinia, now Ethiopia, is the original home of the coffee (arabica) plant. Kaffa, the province in the south-western highlands where they first blossomed, gave its name to coffee. The formal cultivation and use of coffee as a beverage began early in the 9th century. Prior to that, coffee trees grew wild in the forests of Kaffa, and many in the region were familiar with the berries and the drink. According to Ethiopia’s ancient history, an Abyssinian goatherd, Kaldi, who lived around AD 850, discovered coffee. He observed his goats prancing excitedly and bleating loudly after chewing the bright red berries that grew on some green bushes nearby. Kaldi tried a few berries himself, and soon felt a sense of elation. He filled his pockets with the berries and ran home to announce his discovery. At his wife’s suggestion, he took the berries to the Monks in the monastery near Lake Tana, the source of the Blue Nile River.
Kaldi presented the chief Monk with the berries and related his account of their miraculous effect. "Devil’s work!" exclaimed the monk, and hurled the berries in the fire. Within minutes the monastery filled with the aroma of roasting beans, and the other monks gathered to investigate. The beans were raked from the fire and crushed to extinguish the embers. The chief Monk ordered the grains to be placed in the ewer and covered with hot water to preserve their goodness. That night the monks sat up drinking the rich fragrant brew, and vowed that they would drink it daily to keep them awake during their long, nocturnal devotions.
While this popular account provides a religious approval for the drinking of roasted coffee berries, it is believed that Ethiopian monks were already chewing the berries as a stimulant for centuries before it was brewed. Ethiopian records establish that Ethiopian and Sudanese traders who traveled to Yemen over 600 years ago chewed the berries en route to their destination to survive the harsh difficult journey. Residents of Kaffa, as well as other ethnic groups such as the Galla were also familiar with coffee. They mixed ground coffee with butter, and consumed them for sustenance. This practice of mixing ground coffee beans with ghee (clarified butter) to give it a distinctive, buttery flavor persists to this day in parts of Kaffa and Sidamo, two of the principle coffee producing regions of Ethiopia.
Brewed coffee, the dry, roasted, ground, non-alcoholic beverage is described as Bunna (in Amharic), Bun (in Tigrigna), Buna (in Oromiya), Bono (in Kefficho), and Kaffa (in Guragigna). Arabic scientific documents dating from around 900 AD refer to a beverage drunk in Ethiopia, known as ‘buna’. This is one of the earliest references to Ethiopian coffee in its brewed form. It is recorded that in 1454 the Mufti of Aden visited Ethiopia, and saw his own countrymen drinking coffee there. He was suitably impressed with the drink which cured him of some affliction, and his approval made it popular among the dervishes of the Yemen who used it in religious ceremonies, and subsequently introduced it to Mecca.
The transformation of coffee as a trendy social drink occurred in Mecca through the establishment of the first coffee houses. Known as Kaveh Kanes, these coffee houses were originally religious meeting places, but soon became social meeting places for gossip, singing and story-telling. With the spread of coffee as a popular beverage it soon became a subject for heated debate among devout Muslims.
The Arabic word for coffee, kahwah, is also one of several words for wine. In the process of stripping the cherry husk, the pulp of the bean was fermented to make potent liquor. Some argued that the Qur'an forbade the use of wine or intoxicating beverages, but other Muslims in favor of coffee argued that it was not an intoxicant but a stimulant. The dispute over coffee came to a head in 1511 in Mecca. The governor of Mecca, Khair Beg, saw some people drinking coffee in a mosque as they prepared a night-long prayer vigil. Furious, he drove them from the mosque and ordered all coffee houses to be closed. A heated debate ensued, with coffee being condemned as an unhealthy brew by two devious Persian doctors, the Hakimani brothers, who wanted coffee banned because melancholic patients, who would otherwise have paid the doctors to treat them, used it as a popular cure. The Mufti of Mecca spoke in defense of coffee. The issue was finally resolved when the Sultan of Cairo intervened and reprimanded Khair Beg for banning, without consulting his superior, a drink that was widely enjoyed in Cairo. In 1512, when Khair Beg was accused of embezzlement, the Sultan had him put to death. Coffee survived in Mecca.
The picture of Arabic coffee houses as dens of iniquity and frivolity was exaggerated by religious zealots. In reality the Muslim world was the forerunner of the European Café society and the coffee houses of London which became famous London clubs. They were meeting places for intellectuals, where news and gossip were exchanged and clients were regularly entertained by traditional story-tellers.
From the Arabian Peninsula coffee traveled to the East. Muslim traders and travelers introduced coffee to Sri Lanka (Ceylon) in 1505. Fertile coffee beans, the berries with their husks unbroken, were taken to South-West India by a Baba Budan on his return from pilgrimage to Mecca in the 17th century.
By 1517 coffee had reached Constantinople, following the conquest of Egypt by Salim I, and by 1530, it was established in Damascus. Coffee houses were opened in Constantinople in 1554, and their advent provoked religiously-inspired riots that temporarily closed them. But they survived their critics, and their luxurious interiors became a regular rendezvous for those engaged in radical political thought and dissent.
Venetian traders introduced coffee to Europe in 1615, a few years after tea, which had appeared in 1610. Again its introduction aroused controversy in Italy when some clerics, in the manner of the mullahs of Mecca, suggested it should be excommunicated as it was the Devil’s drink. Fortunately, Pope Clement VIII (1592-1605) enjoyed the drink so much that he declared that "coffee should be baptized to make it a true Christian drink." The first coffee house opened in Venice in 1683. The famous Café Florian in the Piazza San Marco, established in 1720, is the oldest surviving coffee house in Europe. Throughout the 17th and 18th centuries coffee houses proliferated in Europe. Nothing quite like the coffee house, or café, had ever existed before: a place to enjoy a relatively inexpensive and stimulating beverage in convivial company. The novelty established a social habit that has endured for over 400 years.
In 1650, the first coffee house in England was opened in Oxford, not London, by a man called Jacob. The coffee club established near All Souls’ College eventually became the Royal Society. London’s first coffee house, in St. Michael’s Alley, was opened in 1652. The most famous name in the world of insurance, Lloyds of London, began life as a coffee house in Tower Street. It was founded in 1688 by Edward Lloyd who used to prepare lists of ships that his clients had insured. With the rapid growth in popularity of coffee houses, by the 17th century, the European powers were competing with each other to establish coffee plantations in their respective colonies. In 1616 the Dutch gained a head start by taking a coffee plant from Mocha in Yemen to the Netherlands, and they began large scale cultivation in Sri Lanka in 1658. In 1699 cuttings were successfully transplanted from Malabar to Java. Samples of Java coffee plants were sent to Amsterdam in 1706, where seedlings were grown in botanical gardens and distributed to horticulturists throughout Europe.
A few years later, in 1718, the Dutch transplanted coffee to Surinam, and soon after the plant became widely established in South America, which was to become the coffee center of the world. In 1878 the story of coffee’s journey around the world came full circle when the British laid the foundations of Kenya’s coffee industry by introducing plants to British East Africa, right next to neighboring Ethiopia, where coffee had first been discovered some 1,000 years before.
Today Ethiopia is Africa’s major exporter of Kaffa and Sidamo beans, now known as Arabica, the quality coffee of the world, and the variety that originated in Ethiopia. Coffea Arabica, which was identified by the botanist Linnaeus in 1753, is one of the two major species used in most production, and presently accounts for around 70 per cent of the world’s coffee.
The other major species is Coffea Canephora, or Robusta, whose production is increasing now due to better yields from robusta trees and their hardiness against disease. Robusta coffee is mostly used in blends, but Arabica is the only coffee to be drunk on its own unblended, and this is the type grown and drunk in Ethiopia. The arabica and robusta trees both produce crops within 3-4 years after planting, and remain productive for 20-30 years. Arabica trees flourish ideally in a seasonal climate with a temperature range of 59-75 degrees Fahrenheit, whereas Robusta prefers an equatorial climate.
In Ethiopia’s province of Kaffa a large proportion of the coffee arabica trees grow wild amidst the rolling hills and forests of the fertile and beautiful region. At an altitude of 1,500 meters the climate is ideal and the plants are well protected by the larger forest trees, which provide shade from the midday sun and preserve the moisture in the soil. Traditionally, these are the ideal conditions for coffee growing. There are two main processing methods: the wet and the dry. Commercially the wet method is preferred, but the small producer who picks the cherries wild may save time by sun-drying the beans after picking, and then sell them directly to customers in the local market.
Ethiopia's distinctive coffee varieties are highly sought after. Each region's coffee tastes slightly different, according to the growing conditions. The highest grown coffee comes from Harar, where the Longberry variety is the most popular, having a wine-like flavour and tasting slightly acidic. Coffee from Sidamo in the south has an unusual flavour and is very popular, especially the beans known as Yirgacheffes. Ethiopian coffee is unique, having neither excessive pungency nor the acidity of the Kenyan brands. The Mocca (the anglicized version is Mocha) coffee of Yemen is closest to Ethiopian coffee in character, since it shares a common origin with the beans of Kaffa and Sidamo. Ethiopian coffee is among the finest coffee in the world. Connoisseurs worldwide savor the beans from Yirgacheffe for their distinctive flavor. It should not be roasted too dark, so as not to destroy its character and flavor.
According to official Ethiopian sources, these are some of the unique gourmet Ethiopian coffees.
Harar coffee grows in the Eastern Highlands. The bean is medium in size, with a greenish-yellowish color. It has medium acidity and full body and a distinctive mocha flavor. It is one of the highest premium coffees in the world.
Wollega (Nekempte) coffee grows in Western Ethiopia, and the medium-to-bold bean is mainly known for its fruity taste. It has a greenish, brownish color, with good acidity and body. There are many roasters who put this flavor in their blends, but it can also be sold as an original gourmet or special origin flavor.
Limu coffee is known for its spicy flavor and attracts many roasters. It has good acidity and body and the washed Limu is one of the premium coffees. It has a medium-sized bean, and is greenish-bluish in color and mostly round in shape.
Sidama coffee has a medium sized bean, and is greenish-grayish in color. Sidamo washed coffee, known for its balanced taste and good flavor, is called sweet coffee. It has fine acidity and good body and is produced in the southern part of the country. It is always blended for gourmet or specialty coffee.
Yirgacheffe coffee has an intense floral flavor. The washed Yirgacheffe is one of the best highland grown coffees. It has fine acidity and rich body. Roasters are attracted to its delicate fine flavor and are willing to pay a premium for it.
Lastly, there are also other coffees, such as Tepi and Bebeka, which are known for their low acidity but better body.
No visit to Ethiopia is complete without experiencing the elaborate coffee ceremony that is Ethiopia's traditional form of hospitality. The coffee ceremony is an integral part of social life. The ceremony is typically conducted by a young woman in the traditional Ethiopian white dress with colored woven borders. The process starts with the arranging of the ceremonial apparatus on a bed of long scented grasses. The lady brings out the washed green coffee beans and proceeds to roast them in a flat pan over a charcoal brazier, shaking the roasting pan back and forth so the beans will not burn. Once the coffee beans begin to pop, the rich aroma of coffee mingles with the heady smell of incense that is always burned during the ceremony. To further heighten this sensory experience, after the coffee beans have turned black and shining and the aromatic oil is coaxed out of them, the lady takes the roasted coffee and walks around the room so that the smell of freshly roasted coffee fills the air. She returns to her seat to grind the beans with a pestle and mortar. The ground coffee is then brewed in a black pot with a narrow spout, known as a jebena, filling the room with aroma.
The brewed coffee is strained through a fine sieve several times before it is served to family, friends and neighbors who have waited and watched the procedure. The lady gracefully and expertly pours a golden stream of coffee into little cups called 'cini' (si-ni) from a height of one foot or more without spilling the beverage. The coffee is taken with plenty of sugar, complemented by a traditional snack food, such as popcorn, peanuts or cooked barley. It is common to wait for a second and third cup of coffee. The second and third servings are important enough that each serving has a name: the first serving is called "Abol"; the second serving is "Huletegna" (second); and the third serving is "Bereka." No new coffee is ground for the second and third servings; a portion of the ground coffee is usually saved for these two occasions.
Coffee ceremonies are major social events. They create a time to discuss topical issues and politics, and they transform the spirit, feeding and nurturing social relations. An ancient proverb best describes the place of coffee in Ethiopian life: "Buna dabo naw", which means "Coffee is our bread!" | https://www.africaresource.com/arts-a-culture/culture/884-the-history-of-coffee |
SHERWOOD MIDDLE SCHOOL
SHERWOOD MIDDLE SCHOOL is a school located in Sherwood, Oregon (OR). This page lists the school's phone number, address and other USA school directory information.
School Phone Number : (503) 825-5400
School Type : Regular school
School State Name : Schools in Oregon
School Location City: Schools in Sherwood
School Education Type : Public Schools in Oregon
SHERWOOD MIDDLE SCHOOL address 21970 SW Sherwood Blvd, Sherwood, OR 97140
School Latitude : 45.358780
School Longitude : -122.841364
SHERWOOD MIDDLE SCHOOL Public School Information
National School Lunch Program : Yes, participating without using any Provision or the CEP
Adult Education Offered : No
Kindergarten offered : No
Prekindergarten offered : No
Highest Grade Offered : 8th Grade School
Lowest Grade Offered : 6th Grade School
School Level : Middle School
SHERWOOD MIDDLE SCHOOL Number of Students
Number Of Total Students : 702
Number Of Free Lunch Eligible : 100
Number Of Prekindergarten Students :
Number Of Kindergarten Students :
Number Of Ungraded Students :
Number Of Adult Education Students :
Number Of Grades 1-8 Students : 692
Number Of Grades 9-12 Students :
SHERWOOD MIDDLE SCHOOL Student Gender Ratios
Number Of Male Students : 372
Number Of Female Students : 330
SHERWOOD MIDDLE SCHOOL Student Demographics
Number Of American Indian/Alaska Native Students : 3
Number Of Asian or Asian/Pacific Islander Students : 18
Number Of Hispanic Students : 81
Number Of Black Students : 5
Number Of White Students : 550
Number Of Hawaiian Nat./Pacific Isl. Students : 7
Number Of Two or More Races Students : 38
Number Of Total Race/Ethnicity : 702
SHERWOOD MIDDLE SCHOOL Teacher Ratio
Number Of Full-Time Equivalent (FTE) Teachers : 32
Pupil/Teacher Ratio : 22
Every summer in the Northern Hemisphere, electric blue streaks form high in the atmosphere. These seasonal clouds typically lurk about 80 kilometers (50 miles) overhead in the mesosphere around the Arctic, but every once in a while they form at lower latitudes. In 2019, the clouds showed up in places where they were only rarely seen in the previous decade, including California, Colorado, and France. This year, the clouds are equally impressive.
“It’s another incredible year,” said Lynn Harvey, an atmospheric scientist at the Laboratory for Atmospheric and Space Physics at the University of Colorado. “When noctilucent clouds extend to mid-latitudes—where people live and notice them on a daily basis—we consider that a noteworthy season.” This year’s clouds have been seen as far south as Joshua Tree, California.
Noctilucent clouds form when water vapor aggregates and freezes around specks of meteor dust floating in the mesosphere. These thin, wavy ice clouds reflect sunlight and usually shine bright blue and white. Known as “night-shining” clouds, they typically appear around dusk or dawn when the Sun is below the horizon at an angle that lights the clouds from below.
The image above shows a satellite view of noctilucent clouds on June 23, 2020. The image is centered on the North Pole and is stitched together from data acquired in several orbital passes by NASA’s Aeronomy of Ice in the Mesosphere (AIM) spacecraft. AIM’s Cloud Imaging and Particle Size (CIPS) instrument measures albedo, or the amount of light reflected back to space by the high-altitude clouds. The clouds appear in various shades of light blue to white, depending on the properties of the ice particles.
The video above shows noctilucent clouds on July 7, 2020, at around 3:30 a.m. approximately 30 minutes north of Calgary, Canada. After snapping photos of noctilucent clouds for nearly a decade, photographer Chris Ratzlaff noted that he has had more sightings in 2020 than in past years.
Harvey said this year’s atmospheric conditions have been outstanding for noctilucent cloud formation. The clouds largely need cold temperatures and high water vapor concentrations—both of which have been present this summer and at record-breaking levels on some days at some latitudes.
The graphs below show daily average temperature and water vapor concentrations at 80°N latitude for the past 14 years (2007-2020). The graphs are based on data from Microwave Limb Sounder on NASA’s Aura satellite.
Note that on May 24, 2020, the mesosphere dropped to its coldest temperature in 14 years of records; that cold persisted into June. The mesosphere was also wetter than normal at the beginning of May; then the water vapor was likely converted to water-ice as the cloud season ensued. However, water vapor concentrations at lower altitudes (where clouds are more sparse) indicated an extremely wet atmosphere.
Harvey said the extra moisture and colder-than-normal temperatures can be traced to a few factors. First, the Sun is in a period of lower activity known as a solar minimum, so there is less ultraviolet radiation breaking up water molecules at high altitudes. Second, the mesosphere may be wetter due to air rising from lower layers of the atmosphere and carrying more moisture into the region.
“We do not yet understand whether the cold and wet conditions this year and last are due to solar influences or atmospheric circulation patterns,” said Harvey.
NASA Earth Observatory images by Joshua Stevens, using data from the University of Colorado Laboratory for Atmospheric and Space Physics and analysis courtesy of the MLS team and V. Lynn Harvey/CU/LASP. Video courtesy of Chris Ratzlaff/Alberta Aurora Chasers. Story by Kasha Patel. | https://www.earthobservatory.nasa.gov/images/146950/another-lively-season-of-night-shining-clouds |
Tony Prince is the project manager for the Recreation and Wellness Intranet Project. Team members include you, a programmer/analyst and aspiring project manager; Patrick, a network specialist; Nancy, a business analyst; and Bonnie, another programmer/analyst. Other people are supporting the project from other departments, including Yusaff from human resources and Cassandra from finance. Assume that these are the only people who can be assigned and charged to work on project activities. Recall that your schedule and cost goals are to complete the project in six months for under $200,000.
Tasks
1. Review the WBS and Gantt chart attached. Propose three to five additional activities that would help you estimate resources and durations. Describe these new activities.
2. Identify at least eight milestones for the Recreation and Wellness Intranet Project. Write a short description of each milestone using the SMART criteria. Discuss how determining these milestones might add activities or tasks to the Gantt chart. Remember that milestones normally have no duration, so you must have tasks that will lead to completing the milestone.
3. Using the Gantt chart attached and the new activities and milestones you proposed in Tasks 1 and 2 above, create a new Gantt chart using Project 2010. Estimate the task durations and enter dependencies as appropriate. Remember that your schedule goal for the project is six months. Print the Gantt chart and network diagram, each on one page.
4. Summarize how you would assign people to each activity from Tasks 1, 2, and 3. Include a table or matrix listing how many hours each person would work on each task. These resource assignments should make sense given the duration estimates made in Task 3 above. Remember that duration estimates are not the same as effort estimates because they include elapsed time. | https://clubessays.com/tony-prince-is-the-project-manager-for-the-recreation-and-wellness/ |
Meet new SCORE Lancaster-Lebanon mentor Lynn Wise. An impressive 38-year career was spent in the service of Armstrong World Industries. Lynn began as a new product development chemist at R&D and was fortunate to be given a multitude of opportunities which included roles as a plant chemist, project manager, new product and installation manager, process improvement black belt, head of quality assurance, market researcher and product manager.
Lynn served in leadership roles at Armstrong supporting the development and commercialization of new flooring and installation products. She also obtained a Six Sigma Black Belt certification and facilitated hundreds of process improvement and problem-solving projects to improve business and operations processes. Lynn was Armstrong's Global Director of Quality, a role focused on ensuring all products met quality standards and that any product quality issues were resolved. On the marketing side, she is trained in market research methodologies and spent several years as a product manager with P&L responsibility. Lynn has a passion for problem-solving and business improvement.
Recently retired, Lynn was looking for a way to use her time and talents to give back to her community.
“When I retired last year, I knew that I wanted to spend time volunteering, but was unsure of how to make that happen. After talking with several friends and colleagues about their own volunteering experiences, I had a conversation with Joann Brayman about SCORE. I had worked for Joann at Armstrong and she described what she was doing with SCORE and how they were trying to increase the diversity of their membership. The mission of SCORE resonated with me and I joined last November.”
~ Lynn Wise, SCORE Lancaster-Lebanon volunteer mentor
Lynn recently completed her certification process and is now ready to take on clients of her own to mentor and support.
“I have just been given my first client, so I don't have a success story to share yet. But, I have sat in with several experienced mentors and am so impressed with their knowledge and enthusiasm to help their clients. It's been great getting to meet so many dedicated, knowledgeable people within SCORE. Everyone has been so welcoming and helpful. I know I made a good choice!”
~ Lynn Wise, SCORE Lancaster-Lebanon volunteer mentor
Lynn is eager to serve with co-mentoring opportunities. She is familiar with the following: | https://lancaster.score.org/blog/new-score-mentor-lynn-wise |
This position is being concurrently announced under Pathways Recent Graduate program as announcement number 19-NCH-SRM-1350-57-RG. Current federal employees may apply to both announcements. This position may be filled at the GS-05 or GS-07 grade level with promotion potential to the GS-09.
Performs professional work in the field of geology and includes a variety of geology, geomorphology, and earth science related tasks.
Conducts geologic studies or investigations on a variety of projects where geology activities have an effect on development and/or management of forest resources.
Inspects mineral operations for compliance with operating plans, and monitors mineral leasing activities on National Forest system lands.
Examines forest land and makes assessment of geologic potential, and participates in validity examinations to determine and make recommendations.
Communicates with agency personnel, permittees, applicants, company representatives, recreationists, and interest groups to gather information, inspect work, and obtain compliance with permits and plans of operation.
Degree: Successful completion of a full 4-year course of study in an accredited college or university leading to a bachelor's or higher degree that included a major field of study in geology, plus 20 additional semester hours in any combination of mathematics, physics, chemistry, biological science, structural, chemical, civil, mining or petroleum engineering, computer science, planetary geology, comparative planetology, geophysics, meteorology, hydrology, oceanography, physical geography, marine geology, and cartography.
Combination of education and experience -- course work as shown above, plus appropriate experience or additional education.
Examples of specialized experience are: Assisted in the preparation of documents and visual aids related to geological studies/programs; Assisted in the routine observation, measurement, inventorying, and recording of scientific data used to support geological studies; and/or Gathered information and research data for geological studies using a variety of established methods, procedures, and techniques where tasks involved research and analysis of data.
Please note that Superior Academic Achievement can only be applied if you have graduated or completed (or expect to complete within 9 months) all the requirements for a bachelor's degree from an accredited college or university and demonstrate the knowledge, skills, and abilities necessary to do the work.
To exercise selection priority for this vacancy, CTAP/RPL/ICTAP candidates must meet the basic eligibility requirements and all selective factors. CTAP/ICTAP candidates must be rated and determined to be well qualified (or above) based on an evaluation of the competencies listed in the How You Will Be Evaluated section. When assessed through a score-based category rating method, CTAP/ICTAP applicants must receive a rating of at least 85 out of a possible 100.
This position has promotion potential, and you may be non-competitively promoted if you successfully complete the requirements and if recommended by management. However, promotion is not guaranteed.
This job originated on www.usajobs.gov. For the full announcement and to apply, visit www.usajobs.gov/GetJob/ViewDetails/526394100. Only resumes submitted according to the instructions on the job announcement listed at www.usajobs.gov will be considered. | https://www.usajobs.gov/GetJob/ViewDetails/526394100?PostingChannelID=RESTAPI |
Let's say I have a 3D space containing spheres (for simplicity, all with the same radius). All spheres are in two disjoint sets, $A$ and $B$. It is guaranteed that spheres from $A$ do not intersect with spheres from $B$.
How can I find the smallest possible set $C$ of spheres (of any radius) that covers all spheres from $A$ (i.e. $\cup A\subset\cup C$) but not even partially covering any spheres from $B$ (i.e. $\cup C \cap \cup B = \emptyset$)?
If it is relevant whether the surface of a sphere is included in its points, I suppose the simplest case to implement (if required) is all open spheres (i.e. without their surface). That means it is safe if spheres from $C$ "touch" spheres from $B$ (but with zero intersection volume still).
The trivial answer is $C=A$, but I am looking for an answer that uses the least number of spheres (whose radii are not limited in any way). The spheres in $C$ may also intersect with each other freely. If possible, polynomial algorithm is preferred. | https://cs.stackexchange.com/questions/85387/least-number-of-spheres-enclosing-other-spheres-but-not-intersecting-others |
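To make the constraints concrete (not to answer the minimization itself), here is a small feasibility check in Python; the tuple representation and function names are just my own illustration. For open spheres, a candidate $c$ covers a sphere $a$ iff $|c_{\text{center}}-a_{\text{center}}|+r_a\le r_c$, and avoids a sphere $b$ iff $|c_{\text{center}}-b_{\text{center}}|\ge r_c+r_b$ (touching allowed).

```python
# Sketch of the two constraints a candidate covering set C must satisfy.
# Spheres are (center, radius) pairs; centers are 3-tuples.
from math import dist  # Euclidean distance, Python 3.8+

def covers(c, a):
    """Candidate sphere c fully contains sphere a (open spheres)."""
    (cc, rc), (ca, ra) = c, a
    return dist(cc, ca) + ra <= rc

def avoids(c, b):
    """Candidate sphere c has zero intersection volume with sphere b;
    touching is allowed since the spheres are open."""
    (cc, rc), (cb, rb) = c, b
    return dist(cc, cb) >= rc + rb

def is_valid_cover(C, A, B):
    """Every sphere of A lies inside some sphere of C, and no sphere of C
    overlaps any sphere of B."""
    return (all(any(covers(c, a) for c in C) for a in A)
            and all(avoids(c, b) for c in C for b in B))

# Tiny example: two A-spheres covered by one candidate, one B-sphere avoided.
A = [((0, 0, 0), 1.0), ((3, 0, 0), 1.0)]
B = [((10, 0, 0), 1.0)]
C = [((1.5, 0, 0), 3.0)]
print(is_valid_cover(C, A, B))  # True
```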
The population of Nampa, Idaho consists of 88,211 residents. The crime index rating for the city is 20. While it is 20% safer than comparable cities in the country, the city is filled with risks. These risks could increase the need for home security systems. The following are criminal statistics and facts showing why homeowners need home security systems now.
What Current Criminal Statistics Say About Nampa
According to criminal statistics, there were 266 violent crimes committed in Nampa. These statistics indicate that 3.02 out of every 1,000 residents were a victim of a violent crime. Next, there were 2,415 property-related crimes committed in the city last year. These statistics indicate that 27.38 out of every 1,000 residents were a victim of a property-related crime. Overall, there were 2,681 crimes committed last year in Nampa. These statistics show that 30.29 out of every 1,000 residents were involved in a crime. Breaking the violent crimes down, there were 3 murders, 50 rapes, 17 robberies, and 196 physical assaults. These statistics indicate that 1 out of every 332 residents was the victim of a violent crime. Next, among the property-related crimes, there were 554 home invasions, 1,708 property thefts, and 153 automobile thefts including carjackings. Overall, these statistics show that 86 crimes were committed per square mile in Nampa.
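For readers who want to see how these per-1,000 figures are derived, here is a quick illustrative sketch in Python using the counts and population quoted above (nothing here comes from the original report beyond those numbers):

```python
# Derive the quoted "per 1,000 residents" rates from the raw counts.
population = 88_211
violent_crimes = 266
property_crimes = 2_415

def rate_per_1000(count, pop):
    return round(count / pop * 1000, 2)

print(rate_per_1000(violent_crimes, population))   # 3.02 violent crimes per 1,000
print(rate_per_1000(property_crimes, population))  # 27.38 property crimes per 1,000
print(round(population / violent_crimes))          # roughly 1 victim per 332 residents
```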
Were There Any Violent Protests or Violent Crimes in the Local News?
The city has a history of protests that have gone awry. For example, a protest in 2006 ended with a woman driving into the crowd of protesters and causing multiple injuries. As of late, there were armed protests opposing further gun control laws. There were anti-Trump protests following the election. There have also been anti-Planned Parenthood protests that have led to physical contact and assault. These circumstances indicate a real need for security for residential properties. These protests often get out of hand and lead to property damage and criminal activities. These events lead to higher risks for homeowners. This is why they need to assess all available home security systems and choose the right level of protection for their property.
What is the Current State of the Local Housing Market?
The current state of the local housing market is thriving and prosperous. Local homeowners have a real chance of selling if they prefer. The average home price is $147,300. This price is based on the unit value of $107 per square foot. According to reports, housing values have increased by 9.9% in the last year. The predictions for the upcoming new year indicate a potential increase of 3.5%. The average rental price in this area is $950. A thriving market could indicate a reduction in crime driven by the housing market itself. However, with properties that have a high value, it is possible for home invasions to continue. Local homeowners need to review possible home security systems that lower their risks of a home invasion and possible property damage. It is local crime that decreases or increases the value of properties in these neighborhoods.
Are There Any Sex Offenders Living in Nampa?
Yes, there are currently 363 sex offenders living in Nampa. The ratio of residents to offenders is currently 231 to 1. The highest concentration of sex offenders in the city is in the city’s center, although reports show that sex offenders live in nearly every neighborhood surrounding this center-most point. Among the offenses for which larger numbers of these sex offenders were convicted are 1st-degree rape, sexual assault of a child under 14, child pornography, and enticing a child over the internet for sexual acts. These risks indicate a clear need to install home security systems to prevent offenders from gaining access to children. These systems provide 24-hour monitoring of the property. They offer features that enable the homeowner to stay in constant contact with their children if they are away from home. They provide surveillance options without blind spots that could allow intruders to hide around or inside the property. Parents must review these systems and install the option that is most advantageous for them.
Are There Prisons in or Near Nampa?
The city has three prisons in or near this area. They include the Idaho State Correctional Center, South Boise Women’s Correctional Center, and the MTC Idaho Correctional Alternative Placement Program. These facilities house a collective of inmates that exceeds 7,000 altogether. The potential for escape is high with this high volume of inmates. For these reasons, homeowners need to take action quickly to lower their risks. They need a home security system that allows for immediate monitoring if a threat is detected. The homeowner must discuss these opportunities with their preferred security provider.
What Security Systems are Available for Local Homeowners?
A wireless security system could provide superior protection for a variety of properties. These systems offer internet-based security that connects to a multitude of services inside the house. These connections allow for remote monitoring at any time. They also enable the homeowner to access the intercom, surveillance cameras, and control panel via their smartphone or mobile device. They can engage the alarm or the locking mechanism found on the doors. This increases security throughout the property at any time. In Nampa, Idaho, homeowners review their need for home security based on local risks. The city presents a high crime rate, including violent crimes and home invasions. These risks could indicate potential dangers that require immediate action. Homeowners who want to acquire security for their home should contact a local provider now.
The Committee heard 15 speakers, representing the U.S., Canada, the U.K., Denmark, Iraq, Yemen, China, Syria, Poland, Pakistan, El Salvador, France, the Jewish Agency, New Zealand and Bolivia.
The Committee will meet again at 8:00 p.m.
* * *
PALESTINE COMMITTEE (PM) TAKE #1
Sir Carl said that he had submitted the plan of partition prepared by Subcommittee 1 to his Government, which had instructed him to support it, but only on condition that the United Nations also assumed responsibility for the implementation of this Plan.
Sir Carl asked that all necessary time be devoted to a thorough examination of all aspects of the Plan, and he suggested to this end that the General Assembly prolong its present session, or that the Ad Hoc Committee on the Palestinian Question set itself up as the General Assembly.
Lester B. Pearson (Canada) said he wanted to address some questions to Subcommittee 2 (on a unitary, independent Palestine), so that the members could have adequate information on which to decide this difficult problem.
His first question was whether this Subcommittee thought its recommendations would bring about a peaceful and orderly transfer of power from the Mandatory Power, Great Britain, to the people of Palestine.
His second question was what legal basis existed in the Charter for the establishment of a unitary, independent Palestine by the General Assembly.
PALESTINE COMMITTEE (PM) TAKE #2
Only a solution acceptable to the majority of the population of Palestine could ensure peace, Dr. Castro said, adding that the people of Palestine had not really been consulted by the United Nations when considering a solution.
He regretted that representatives of the Arab Higher Committee and of the Jewish Agency had not been brought together, as he had proposed, with the purpose of finding a common basis for the future of Palestine.
Dr. Castro remarked that although he had offered to serve on a conciliation subcommittee together with a few other delegates, the task of conciliation had finally been assumed by the Chairman of the Committee alone. Dr. Castro also remarked that the Minority Plan for a single federal Palestinian State had not been given enough attention.
For these reasons, Dr. Castro said, he would abstain from voting on the Palestinian question, except in the case of the protection of Holy Places.
PALESTINE COMMITTEE (PM) TAKE #3
Moshe Shertok, head of the political department of the Jewish Agency for Palestine, then made a brief statement.
He said the Jewish Agency would be willing, in order to meet objections, to agree to exclude from the proposed Jewish national home the town of Beersheba and an area to the north and northeast of the town, amounting to 300,000 dunims, also a portion of the South Negeb, along the Egyptian frontier, amounting to 2,000,000 dunims.
Those would be direct continuations of the Arab state, and not enclaves.
Mr. Shertok said the Jewish Agency did not consider itself obligated to make those concessions, and felt, rather, that the whole of Palestine should be open to Jewish immigration.
Palestine had already been partitioned once, in 1922, when Transjordan was cut away, he said, and now the Jewish national home would be even smaller.
The Chairman said that the Committee could deal with suggestion like this only in the form of amendments.
Herschel Johnson (US) said his delegation would support and vote for the partition plan recommended by Subcommittee 1, and he thought the Assembly could best discharge its responsibility in this case by approving that plan, with economic union.
He recognized that unanimity could not be achieved, but he hoped that the plan would be approved by as large a majority as possible, and given loyal cooperation by all concerned, he continued.
Mr. Johnson said the U.S. considered the partition plan legal under the Charter, and regarded such legal objections as had been raised as formal in character.
He did not think there would be a gap in which there would be no effective governmental authority. That gap, he said, would be avoided by immediate resumption of authority by provisional machinery as soon as the tasks were given up by the Mandatory Power.
Mr. Johnson said the partition plan was not perfect, but was a “humanly just and workable” solution to one of the thorniest problems in the world today. He hoped all member states would give the plan their full cooperation, if it were approved, and not attempt to defy it. This, said Mr. Johnson, was “the greatest test ever presented of the integrity of the United Nations as a whole.”
One of the Subcommittee’s greatest difficulties, Mr. Johnson said, had been the declarations of the Mandatory Power (The UK) that it would take no part in carrying out a plan not acceptable to both Arabs and Jews.
Taken literally, he said, this condition was impossible to fulfil, for no plan could possibly meet that requirement.
Mr. Johnson disagreed with the inference of the Representative of New Zealand that the Mandatory Power had been given sole responsibility for carrying out the plan. He said the Subcommittee had tried to avoid such a situation, and in his view had succeeded.
Mr. Johnson agreed that this plan, like any other, contained the possibility of failure, but he said it had to assume the cooperation of the Mandatory Power - which had agreed not to hinder or try to prevent implementation – and of the Member States.
This situation had not been contemplated in the Charter, Mr. Johnson continued. The responsibility had fallen on to the United Nations because of the unilateral decision of Britain to give up the mandate, without suggesting any future government to take its place in Palestine.
Mr. Johnson said he welcomed the declaration of Mr. Shertok regarding Beersheba and the Negeb, and would present an amendment to cover these suggested boundary changes.
PALESTINE COMMITTEE (PM) TAKE #4
On implementation, he said, they had made five statements, none of which deviated from the others.
Mr. Martin asserted that in the transitional period there was a gap, and that clearly a risk was being taken. As long as the Mandate is in force, Mr. Martin declared, the Mandatory insists on retaining undivided control throughout Palestine.
Mr. Martin added that if partition is approved by the Assembly and a U.N. Commission goes to Palestine, the Mandatory would hand over to the Commission “when the time came.” He explained that this expression meant “when and as the territory of Palestine is evacuated.”
Progressive transfer, he said, would begin after termination of the Mandate.
Referring to the passage in the Partition Plan concerning the early evacuation of a sea port with its hinterland in the Jewish State, Mr. Martin said that evacuation plans had not yet been worked out but that his Government was taking note of this passage.
With regard to the assistance the Mandatory was expected by the Partition Plan to give to the United Nations Commission, Mr. Martin repeated that his Government insisted on undivided control. Assistance, he said, would be subject to the over-riding considerations of preserving law and order. If fighting occurs in Palestine, said Mr. Martin, it is the lives of the British that will be lost.
PALESTINE COMMITTEE (PM) TAKE #5
That closed the list at least temporarily, he said.
In reply to a question from Finn T.B. Friis (Denmark), the Chairman said that any proposal to continue this Committee past the end of the General Assembly session would have to be made formally, and decided by the Assembly.
Dr. Mohamed Fadhil Jamali (Iraq) contended that the General Assembly was going beyond the authority given to it under the Charter in this matter.
The very idea of partition was contrary to the principles of the Charter, he argued. He asked by what legal authority was the Assembly planning to put people who had been in one place for more than 1400 years under the rule of foreigners? This, he said, could only be called “aggression, invasion and imperialism.”
Dr. Jamali said the Arab states had respect for the United Nations but had to appeal against such an “unjust and unworkable” plan as that presented by the Subcommittee.
Britain, he noted, had regarded partition as impracticable in a White Paper 10 years ago. Wasn’t this even more true now? he asked.
Dr. Jamali said that anything going beyond the League of Nations mandate was “an imposition” and that “we would wish to go to court about that.”
Recalling that the U.S. Representative, Mr. Johnson, had said that nobody wanted to think the United Nations would be unable to find a solution to the Palestine problem, Dr. Jamali said the solution was simple – apply the principles of democracy and self-determination. He urged the Committee to take time, and not adopt a plan which would result in immediate bloodshed and racial hatred and which could not be carried out without force.
PALESTINE COMMITTEE (PM) TAKE #6
Dr. Wellington Koo stressed the unique character of the Palestine problem and the need to take into consideration the elements of this problem if a solution acceptable to both parties were to be found.
Nevertheless, he said, time is now short and a decision is imperative. If, he said, a 100% satisfactory solution cannot be found, then a solution should be adopted that is the least objectionable to the parties.
Dr. Wellington Koo declared that in his opinion the partition plan as proposed by Subcommittee 1 represented the nearest approach to such a solution, and he hoped that means would be found to improve this plan so as to further reduce the gap between Arab and Jewish claims.
KSAWERY PRUSZYNSKI (Poland) said that Subcommittee 1’s work with the U.K. had been “a little difficult.” After all, he said, the U.K. had brought the matter to the United Nations, and so should cooperate. But sometimes, he said, it seemed that the U.K. was operating “on the other side of the fence.”
He found it “disagreeable” to speak about this matter, for the Subcommittee had done everything possible to bring the U.K. into its work, he said. But the U.K.’s attitude, in his view, was making the solution of this problem very much harder, with consequences that would be felt by the whole organization.
FARIS EL-KHOURY (Syria) called El Salvador’s suggestion of a plebiscite in Palestine “a very sound idea.” He said the U.S. should agree to this procedure, as well, for the U.S. had advocated it strongly in the case of Korea.
The U.S., he added, had not wanted partition in Korea or Greece. That there were Jews involved in this case did not seem to Mr. El-Khoury to be an obstacle. Judaism was only a religion, he said, and should not be favoured over any other religion.
The Representative of Syria again stated his delegation’s opposition to partition, and asked for a ruling from the International Court of Justice on the legal issues involved.
Mr. El-Khoury outlined the history of the Palestine situation, and contended that the Arabs had been treated unjustly throughout the years. It was the duty of the United Nations now to undo this wrong, not to add to it, he said.
There was no power in the Charter for the Assembly to partition a mandated territory, said Mr. El-Khoury, and to do this would be “a dangerous precedent,” all the more so because partition would benefit only “foreign intruders.”
He felt that the ruling of the Court on the legal issues should be obtained before any other decisions were taken.
Mr. El-Khoury said the Arabs were ready for conciliation at any time, provided that the Zionists would give up their idea of “dominating” and displacing other peoples.
PALESTINE COMMITTEE (PM) TAKE #7
The UNSCOP had been unable to surmount all the contradictions of the Palestine problem because these contradictions were inherent to the problem, he said.
The United Nations, Dr. Palza continued, is a political organization, and its solutions should also be of a political nature. The best political solution, he added, would be one that would maintain continuity with the first solution given to Palestine, namely the Balfour Declaration. This solution, he said, would be partition, which was supported by the countries most directly interested in the problem.
Sir Mohamed Zafrullah Khan (Pakistan) replied to the questions asked earlier by the Representative of Canada.
The first question, whether Sub-Committee 2 thought its plan would provide for a peaceful and orderly transfer of power, was very difficult to answer, said Sir Zafrullah, and could be answered only in practice by the people of Palestine.
The second question was, what was the legal basis for Sub-Committee 2’s plan for a unitary, independent Palestine? Sir Zafrullah said it was his personal view that the terms of the original League of Nations Mandate gave this authority. The Sub-Committee’s report did not provide for United Nations interference in setting up such a government. There would be some “practical difficulties,” he said, but the Ad Hoc Committee was now engaged in appraising them.
PALESTINE COMMITTEE (PM) TAKE #8
The first amendment would safeguard free access to the Holy Places, which involved freedom of transit.
The second amendment would strengthen the protection of foreign schools and charitable institutions.
Mr. Parodi expressed the opinion that the guard of the Holy Places should be entrusted to a specially recruited guard rather than to the police force provided for in the Report. He also hoped that the case of the French language would be given full consideration when the question of additional working languages in the City of Jerusalem came up.
The Committee adjourned at 6:15 p.m. to 8 p.m. | https://unispal.un.org/DPA/DPR/unispal.nsf/9a798adbf322aff38525617b006d88d7/93410cf4781407a985256a720066f5a7?OpenDocument |
Below is a set of 20 sentences in a language which we shall refer to as Language X with their equivalents in English (in italics). (Note: Language X is based on a real language but for the purposes of this assignment some of the structures have been modified and the forms of words changed.) Use this data to answer questions 2.1 and 2.2.
For each bound morpheme, write a rule showing how it is used in Language X. (Imagine you are a teacher of Language X and your learners have asked you to give them some grammar rules.)
This morpheme perhaps acts as an introductory morpheme for the entire sentence; it could be observed that the other sentences, which did not contain any of these morphemes, were considered incomplete when translated into English.
An: Most likely, as observed, the morpheme "an" stands as a linking verb that also serves to complete any sentence in which it appears. From observation, it could be noted as Language X's substitute for the forms of the linking verb "is" in English.
All languages have rules for the order of elements in sentences. Describe TWO differences between the rules for order in English and order rules in Language X. Refer to the Language X sentences and their English translations as examples of the rules.
As observed from the given samples, the English language simply requires the necessary elements [which may refer to the main subject and the predicate of the statement] to create a complete sentence that denotes meaning and sense.
Meanwhile, in the rules of sentence completion in Language X, there is a sentence-completing agent, the morpheme "ga", which perhaps gives an essence to the entire sentence. ...
Thursday afternoon. 19th arrondissement in Paris. A small cafe break at ‘Le Progress.’ Eight other patrons sit near me in the cafe. Small, circular tables rest in front of me. Stools stand closest to the street on the other side of the tables. Chairs with nice backing look out to the street on the opposite side of the table from the stools.
A pain-au-chocolat lies half eaten on a white plate on the table. I have a half-written journal entry in a brown notebook next to the breakfast repast.
Looking out from my table, I catch sight of a solitary pigeon breaking from its clan as it makes its morning rounds to gobble up crumbs that have fallen to the ground between the tables. There’s a mess of crumbs under the table from my pain-au-chocolat.
“Jackpot,” says Mr. Pigeon.
“Bien
Overcast day today. Not quite sweater weather. It’s more like a long-sleeved shirt and blue jeans type of day. The thirty-foot tree in front of the cafe provides ample cover and shade, wanted or not. A metal grate with water holes lines and encircles the tree. People clump by with their shoes and sometimes hit the metal grate.
It’s 11:30 am and the cafe is starting to fill. The cafe has an awning of complete red that gives a nice glow from the sun. The yellow strands of gold slip out from the clouds and then bathe the cafe terrace in a vibrant red hue. I enjoy the warm rays of the mid-morning light. I remain seated and continue writing this journal entry with the notebook on my table. I bite the last piece of my pain-au-chocolat.
Mr. Pigeon has gone. “I think I’ll get a gelato now,” I say to myself. I gather up my things and leave a couple of euros’ worth of coins on the table.
“A
“Au revoir, monsieur.” she says back. | http://www.bellemedia.com/ |
Learning Math With Manipulatives — The Abacus
The abacus has been around in various forms for over 2300 years. It was used for various counting and operational tasks. One might even call it the original math manipulative (unless you count fingers and stones). In my younger years, abaci were relegated to the bottom shelf or used as a toy for the kinesthetic kids. These days, abaci can meet the same fate as the abaci of my youth did. The first known abacus, the Salamis tablet, collected dust for over 2100 years. To all those lonely and banished abaci on dusty shelves everywhere, I dedicate this article on how to represent, add and subtract whole and decimal numbers.
As most teachers know, the use of manipulatives by younger elementary students helps them to understand the concepts of place value and operations later on. In my search for a variety of manipulatives to teach number sense, addition and subtraction, I came across a convenient tool in the abacus. I’m sure it was no coincidence that each row on the abacus included exactly ten beads, but there was no operator’s manual with the abacus I found. When I found an instruction manual several years later, I found that the manufacturer of the abacus saw it as no more than a counting device and had no idea of the place value power inherent in the design.
Representing Numbers With a Dusty Abacus
When I first started using an abacus as a manipulative in math class, I was teaching grade six. In the grade six curriculum, students were supposed to represent whole numbers greater than one million and decimal numbers to thousandths. If you count the number of places from one million down to thousandths, you get ten places. Coincidentally, the abacus had ten rods of ten beads each. I’m sure what I discovered was discovered long ago, and some manufacturers probably even send out better instruction manuals that make note of this, but at the time, it was a completely new discovery.
To make a long story short, I assigned each row a specific place value starting with millions at the top, and thousandths at the bottom. One could use a strip of tape or an indelible marker to label the rows. To represent a number, a student would simply move the number of beads for the value of each place in the number they were given. For example, the number 325,729 was represented by moving three of the hundred thousands beads, two of the ten thousands beads, five of the thousands beads, seven of the hundreds beads, two of the tens beads and nine of the ones beads.
I didn’t have a class set of abaci, so I made up little sketches of an abacus (six or so per page) and students showed representations of numbers using these.
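If it helps to see the rod-to-place-value mapping spelled out, here is a small illustrative sketch in Python (the function and rod names are mine, not something from an abacus manual). It converts a number into the count of beads moved on each of the ten rods, working in whole thousandths to sidestep floating-point error:

```python
# One rod per place value, from millions down to thousandths, at most ten
# beads per rod -- the same labelling described above.
PLACE_NAMES = ["millions", "hundred thousands", "ten thousands", "thousands",
               "hundreds", "tens", "ones", "tenths", "hundredths", "thousandths"]

def represent(number):
    """Beads to move on each rod, highest place value first."""
    n = round(number * 1000)          # work in whole thousandths
    beads = []
    for power in range(9, -1, -1):    # 10**9 thousandths = one million ... 10**0 = 0.001
        beads.append(n // 10 ** power)
        n %= 10 ** power
    return dict(zip(PLACE_NAMES, beads))

print(represent(325_729))
# {'millions': 0, 'hundred thousands': 3, 'ten thousands': 2, 'thousands': 5,
#  'hundreds': 7, 'tens': 2, 'ones': 9, 'tenths': 0, 'hundredths': 0, 'thousandths': 0}
```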
Adding and Subtracting Numbers With a Polished Abacus
Once students are familiar with representing numbers using an abacus, they can move onto adding and subtracting numbers. The idea of adding using an abacus and place value is quite a simple process. Begin by representing the first number. Add the value of each place value in the second and subsequent numbers one at a time beginning with the lowest place value and regroup as necessary.
Consider this simple example, 178 + 255. The student would represent 178 on the abacus to begin. She would then add five to the ones row. Since there aren’t five more beads to add, this first move would also involve regrouping. The student would move the two remaining ones, then regroup by sliding all ten ones back and replacing them with a ten. She would then move three more beads, since she already moved two of them, for a total of five. Since there was some regrouping, there would now be eight tens. The student needs to add five more, so there would be another regrouping, this time of ten tens to make a hundred. Finally, the student moves two additional hundred beads; this time regrouping isn’t necessary. If everything was done correctly, the student would end up with four hundreds beads, three tens beads and three ones beads.
A variation on addition is to add the second and subsequent numbers from the highest place value to the lowest place value.
Subtracting is much the same as addition, but it involves “removing” beads. The procedure for subtracting is to represent the first number then to subtract the value of each place value in the second and subsequent numbers beginning with the highest place value.
Consider this example, 3.252 – 1.986. The student would first represent 3.252 using the abacus. He would begin by subtracting one one. This is fairly straight forward because there are enough ones available. In the next step, though, the student has to subtract nine tenths from two tenths. He begins by subtracting two of the nine tenths, but he then has to regroup one of the remaining ones into ten tenths. Once he has ten more tenths, he can subtract the remaining seven tenths. He continues by subtracting eight hundredths from five hundredths, and again, he has to regroup, this time, one of the tenths into ten hundredths. The final step also involves regrouping since six thousandths must be subtracted from two thousandths. In the end, the student hopefully ends up with one one, two tenths, six hundredths, and six thousandths (1.266).
Subtraction could also be accomplished by subtracting the lowest place value first, but this sometimes means more manipulations of the beads which means more chance for error.
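The same regrouping logic can be sketched in code. The short Python example below is again only my own illustration: it scales numbers to whole thousandths so that every rod holds an integer digit, then carries and borrows exactly where the bead regrouping happens, reproducing the two worked examples above.

```python
# Digit-by-digit addition and subtraction with regrouping, mirroring the bead
# moves described above. Results are assumed to fit on the ten rods.
def to_digits(x, rods=10):
    """Digit on each rod, least-significant rod (thousandths) first."""
    n = round(x * 1000)                      # whole thousandths
    return [(n // 10 ** i) % 10 for i in range(rods)]

def from_digits(digits):
    return sum(d * 10 ** i for i, d in enumerate(digits)) / 1000

def add(x, y):
    a, b, carry, out = to_digits(x), to_digits(y), 0, []
    for da, db in zip(a, b):
        beads = da + db + carry              # beads wanted on this rod
        carry, beads = divmod(beads, 10)     # ten beads regroup upward
        out.append(beads)
    return from_digits(out)

def subtract(x, y):
    a, b, borrow, out = to_digits(x), to_digits(y), 0, []
    for da, db in zip(a, b):
        beads = da - db - borrow
        borrow = 1 if beads < 0 else 0       # regroup one bead from the next rod
        out.append(beads + 10 * borrow)
    return from_digits(out)

print(add(178, 255))           # 433.0
print(subtract(3.252, 1.986))  # 1.266
```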
Conclusion
The use of the abacus takes a little bit of time to master. It is important that the teacher and the students use the correct place value terminology (e.g. “regroup ten hundreds to make one thousand” instead of “turn ten green beads into one blue bead”), so the concepts of place value, addition, and subtraction can be transfered to mental strategies and paper/pencil algorithms. Remember, the best way to dust and polish an abacus is with little fingers! | https://feifei.us/learning-math-with-manipulatives-the-abacus/ |
News of Record for April 21, 2017
1:14 p.m., reckless driving - A woman was tailgated on Stockton Road and berated by the driver of another vehicle.
3:02 p.m., Sonora - Someone stole a diamond necklace and diamond bracelet worth up to $10,000 from a Sylva Lane residence.
“… we need at least 80 subjects in our sample, otherwise the sample won’t be representative…”! This is a remark I often hear while designing user studies. I’ve always wondered: why 80? When I ask for details about this, the only thing I hear is that it has been a way of working for many years. I think that what they say is based on the fact that they like to encompass most of the population variation in their sample. I checked with my customer this morning and that is exactly why they say 80 subjects: because they like to see the variation between subjects in their sample as well. But can we understand this from a statistical point of view? Well, I think tolerance intervals can provide an answer.
The definition is: Let L < U be two statistics, i.e., quantities calculated from the data. Then [L,U] is called a 100β% tolerance interval at confidence level 100(1-α)% if Pr(F(U)-F(L) ≥ β) ≥ 1-α, or if, with high probability, at least a given large part of the distribution will be enclosed between L and U. Typical values for α and β are α=0.05 and β=0.95.
An example of a 95% tolerance interval at confidence level 95% assuming a normal distribution, sample mean=10, sample standard deviation=1, sample size n=100 can be found to be [7.766, 12.234] using Minitab.
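As a rough cross-check of that Minitab result, the two-sided normal tolerance factor k (the interval is sample mean ± k·s) can be approximated with Howe's formula. The sketch below assumes SciPy is available; Howe's approximation gives k ≈ 2.23, close to the exact factor behind the interval quoted above.

```python
# Howe's approximation for the two-sided normal tolerance factor k,
# used as a sanity check on the Minitab interval quoted above.
from scipy.stats import norm, chi2

def k_two_sided(n, coverage=0.95, confidence=0.95):
    df = n - 1
    z = norm.ppf((1 + coverage) / 2)
    return z * ((df * (1 + 1 / n)) / chi2.ppf(1 - confidence, df)) ** 0.5

k = k_two_sided(100)
print(round(k, 3))              # ~2.233
print(10 - k * 1, 10 + k * 1)   # ~7.77 and ~12.23, matching [7.766, 12.234]
```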
Tolerance intervals however come in two flavours, i.e. parametric, like in the example above where we have assumed normality, and non-parametric ones where no specific distribution is assumed. Let's focus on non-parametric tolerance intervals. Say we take L to be the minimum of the sample and U its sample maximum. We would like to see that between this sample minimum and sample maximum a large part of the population is located, because if that is so then we will almost include the entire population variation. The question now is how large should my sample be to make this happen? That is, to be able to state that the interval made up from this sample minimum and the sample maximum contains, with high probability (95%, say), at least 95% of the population. Using the sample minimum and sample maximum, the following relation holds between α, β and sample size n: Pr(F(U)-F(L) ≥ β) = 1 - nβ^(n-1) + (n-1)β^n ≥ 1-α.
This can be solved iteratively for n; a good closed-form approximation also exists.
The solution is shown in the graph below. From this graph it follows that if α=1-0.95=0.05 and β=0.95, roughly a sample size of n=90 is needed. This is rather close to the 80 subjects from the rule of thumb. | http://www.dfss.nl/sample-size-done-differently/ |
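For completeness, here is a minimal sketch of the iterative solution, assuming the coverage relation for the sample minimum and maximum given above. With α = 0.05 and β = 0.95 it returns n = 93, in the same neighbourhood as the value read off the graph.

```python
# Smallest n such that [sample min, sample max] is a 100*beta% tolerance
# interval at confidence 100*(1-alpha)%, using
# Pr(F(U)-F(L) >= beta) = 1 - n*beta**(n-1) + (n-1)*beta**n.
def min_sample_size(beta=0.95, alpha=0.05):
    n = 2
    while 1 - n * beta ** (n - 1) + (n - 1) * beta ** n < 1 - alpha:
        n += 1
    return n

print(min_sample_size())  # 93
```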
In this briefing our Immigration team provides an update on the Prime Minister's announcement about a shake-up of the Immigration Rules and sets out the main changes to the Tier 2 policy guidance and the Appendix D guidance for sponsors.
EU Settlement Scheme
The Home Office has confirmed that free movement will end on 31 October 2019 when the UK leaves the EU, regardless of whether we leave with or without a deal. In a "no-deal" scenario, EU citizens who wish stay in the UK should ensure that they and their families are living in the UK by 31 October 2019. They will then have until at least 31 December 2020 to make their application under the EU Settlement Scheme, for settled status or pre-settled status (as applicable).
If the UK leaves the EU without a deal on 31 October, EU citizens will still be entitled to visit the UK for holidays and short trips, but the current arrangements for EU migrants who want to come to the UK for longer periods to work or study are not clear. We expect the Government to produce more information about their plans in the coming months and will provide updates when further information become available.
Minimum salary threshold for visa applicants
The Centre for Social Justice (CSJ) (co-founded by Iain Duncan Smith) has called for the Home Secretary to increase the minimum salary threshold for all migrant workers (including EU migrants) to at least £36,700 post-Brexit. The CSJ suggests that increasing the threshold will mean that it corresponds to the status of "skilled" work, but also recommends that this threshold should not apply to those who carry out a strategically important role, e.g. NHS workers.
The Home Secretary has instructed the Migration Advisory Committee to consider the salary threshold levels for the future Immigration System, with the report expected to be published in January 2020. We will keep you up to date on any further developments on this topic in the meantime.
Scientists
The Prime Minister recently announced plans to work together with scientists to develop a new fast-track visa route for elite specialists in science, engineering and technology. The options that may be discussed with leading institutions and universities are:
- removing the cap on numbers in the Tier 1 Exceptional Talent category
- increasing the number of UK research institutions and universities able to endorse candidates
- creating criteria that confer automatic endorsement subject to immigration checks
- giving dependents full access to the labour market
- removing the need to hold an offer of employment before arriving
- accelerating the path to settlement
Whilst this is an effort to encourage scientists and other elite researchers to continue to come and work in the UK post-Brexit, the plans have been criticised by some who have noted that the low numbers of scientists coming to the UK under a Tier 1 Exceptional Talent visa mean that the cap has never been reached and that the Tier 2 shortage occupation list already allows scientists to fast-track through the UK's Immigration system. It appears this may be the first step in the Prime Minister's plan to "shake up" the UK's immigration system, and employers should ensure that they stay up-to-date with any changes to the Immigration Rules as they occur.
Updates to Appendix D: record keeping guidance for sponsors
The Home Office has produced an updated guidance note for sponsors regarding their duties in relation to record keeping. The key changes are:
- Employers should ensure that they check for evidence of the date that the migrant entered the UK to ensure the validity of their Tier 2, Tier 4 or Tier 5 visa. Employers should ensure that the migrant's 'valid from' date on their visa is not in the future – if it is, the migrant will not have permission to work and should be advised to leave the Common Travel Area and re-enter the UK within the validity of their visa.
- If a migrant does not have an entry stamp because they are a national of a country which is eligible to use the automated ePassport Gates, employers should ask for evidence of the date that the migrant entered the UK e.g. in the form of travel tickets or boarding passes. This evidence should be checked but does not have to be retained. We would suggest, however, that a copy of the data is retained and that the date of the check and who undertook it is recorded.
- If the migrant is entering the country without a visa under the Tier 5 creative and sporting visa concession, they must see an immigration officer on arrival and receive an entry stamp to show that they have been given leave to enter with permission to work. If the migrant used the eGates, they will not have permission to work and should leave the Common Travel Area and seek re-entry to the UK via an immigration officer on arrival, to obtain permission to work in the UK.
- If a migrant entered the UK with a short-term biometric visa, they must collect their biometric residence permit upon arrival. Employers must make a copy of the migrant's biometric residence permit once the migrant has collected it.
- When taking screenshots as evidence of an advertisement for the resident labour market test, employers now need to ensure it contains all of the following:
- logo of the relevant government website hosting the job advertisement
- URL
- contents of the advert
- date the vacancy was first advertised
- closing date for applications
- for jobs advertised on Find a Job, the vacancy reference number (if one exists)
- for jobs advertised on Universal Jobmatch (the Jobcentre Plus service in place for jobs advertised before 14 May 2018), the Job ID number
- for jobs advertised on Jobcentre Online (for Northern Ireland vacancies), the Job Reference number.
Employers should ensure that they familiarise themselves with the changes to the guidance to ensure that the correct right to work checks are being completed and that the requisite documents are being copied and retained by the sponsoring employer.
Updates to the Tier 2 policy guidance
The main update in this guidance is a reminder that certificates of sponsorship (CoS) which are granted for less than 3 months are not subject to the Tier 2 cooling-off period. This means that migrants can enter the country for multiple short stays within any 12 month period, provided that each CoS is granted for 3 months or less. All other aspects of Tier 2 apply, however, and this is unlikely to be a pragmatic approach for someone working in the UK frequently. | https://www.addleshawgoddard.com/en/insights/insights-briefings/2019/employment/employment-up-to-date-august-2019/immigration-utd-august-2019/ |
The broker mechanism is widely applied in the smart grid, allowing interested parties to derive long-term policies that reduce costs or gain profits. However, a broker is faced with a number of challenging problems, such as balancing demand and supply from customers and competing with other coexisting brokers to maximize its profit. In this paper, we develop an effective pricing strategy for brokers in the local electricity retail market based on recurrent deep multiagent reinforcement learning and sequential clustering. We use real household electricity consumption data to simulate the retail market for evaluating our strategy. The experiments demonstrate the superior performance of the proposed pricing strategy and highlight the effectiveness of our reward shaping mechanism.
| https://www.ijcai.org/proceedings/2018/79 |
Q:
How to create a list of the missing numbers in a non-contiguous list of numbers using TCL
I want to create a list of the numbers missing from a given list, as in the example below:
Existing list: {1 3 5 9 13 15}
Resultant list: {2 4 6 7 8 10 11 12 14}
A:
Extended TCL has the function intersect3 which as one of its return values gives a list of A-B. You could intersect your list with a list of all possible numbers that span your list.
If you don't use Extended TCL, you'll have to implement something yourself.
I hardly ever use TCL, so maybe there's a better way, but the basic approach is to just sort the list, then run through it and find the missing values:
#!/usr/bin/tclsh
set A {1 3 5 9 13 15}
set A [lsort -integer $A]
set B {}
set x 0
set y [lindex $A $x]
while {$x < [llength $A]} {
set i [lindex $A $x]
while {$y < $i} {
lappend B $y
incr y
}
incr x
incr y
}
puts $B
Output:
2 4 6 7 8 10 11 12 14
| |
One of the key benefits of risk premia asset allocation over more traditional forms of asset allocation is that, once combined, the overall risk/return profile should be more attractive. As such, incorporating risk premia into a traditional asset portfolio should also deliver notable diversification benefits. Many risk premia strategies are selected in the first place because of historical risk and return characteristics, and therefore any investor must also consider the likelihood of these features remaining in the future. The dangers of potential hindsight biases and over-fitting are major considerations when investing in risk premia strategies, so a clear economic rationale for the existence and persistence of the risk premia is essential. However, the very same criticism can be levelled at any model or process that includes historical price returns, including traditional forms of asset allocation. In this section we take a look at how we might combine risk premia strategies across different assets into a multi-asset portfolio. However, before we do this, we must consider the difficulties of implementing a risk premia strategy.
Moving from a theoretical framework to implementing a risk premia strategy involves considerable challenges. In many instances, particularly within the equity market, returns are often expressed on a long/short basis, adding a significant amount of cost and complexity. Theoretical portfolios are often rebalanced to such a degree that annual turnover rates not only eat into returns, but also involve a considerable amount of portfolio management, putting them out of reach of many investors. Capacity constraints are also a concern of many market practitioners. All types of trades risk over-crowding and, of course, if expected returns fall, action may need to be taken. But the fact that behemoth funds cannot readily trade in and out of risk premia without overly impacting the underlying price does not necessarily negate their worth. Many corporate bonds, for example, lack secondary market liquidity. But we should also remind ourselves that the reason many of these risk premia exist is precisely because of these limits to arbitrage.
That said, many strategies do encounter considerable implementation challenges. Running a long/short equity price momentum strategy not only involves an understanding of borrowing costs, but given the strategy turns over 65% of the portfolio each month, to exclude trading costs is nonsensical. Importantly, as the short leg is the key driver of equity momentum profits, running the strategy long whilst simply shorting the benchmark is not particularly viable. That is not to say long/short risk premia are an impossibility; they just take more work, and the associated costs can leave them inaccessible to many investors.
Still, many risk premia strategies are now available in index form and investable via swaps or options. We therefore try to demonstrate not only how risk premia investing can deliver an interesting return profile, but also how many of these ideas have been taken up and moved beyond the merely theoretical.
To allow for a fair comparison of our risk premia strategies across different asset classes, we have removed some risk management strategies, such as volatility targeting, filtering, etc. These techniques make sense when using the strategy on a standalone basis but are not appropriate within this analysis as they tend to disguise the true nature of the underlying risk premium.
We utilize standard performance ratios and statistics commonly used in asset management, including the Sharpe ratio (returns divided by volatility), the Sortino ratio (returns divided by downside volatility), and maximum drawdown and time to recovery (the length of the maximum drawdown). We also look at a variety of risk metrics based on past returns (volatility, skew and kurtosis) and at measures designed to evaluate extreme risks, such as value-at-risk at 95% and expected shortfall. A strategy with a positive skew is more likely to make large gains than suffer large losses. A high kurtosis indicates potential fat tails, i.e. a tendency to post unusually large returns, whether on the upside or the downside.
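To make these measures concrete, the following is a minimal Python sketch (illustrative only, not the exact calculations behind the figures discussed below) computing them from a series of periodic strategy returns:

import numpy as np
from scipy import stats

def strategy_stats(returns, periods_per_year=12):
    r = np.asarray(returns, dtype=float)
    ann_return = r.mean() * periods_per_year
    ann_vol = r.std(ddof=1) * np.sqrt(periods_per_year)
    downside_vol = r[r < 0].std(ddof=1) * np.sqrt(periods_per_year)
    cumulative = np.cumprod(1 + r)
    drawdowns = cumulative / np.maximum.accumulate(cumulative) - 1
    var_95 = np.percentile(r, 5)                # value-at-risk at 95%
    expected_shortfall = r[r <= var_95].mean()  # average loss beyond the VaR
    return {
        "sharpe": ann_return / ann_vol,
        "sortino": ann_return / downside_vol,
        "max_drawdown": drawdowns.min(),
        "skew": stats.skew(r),
        "kurtosis": stats.kurtosis(r),          # excess kurtosis, so fat tails > 0
        "VaR_95": var_95,
        "expected_shortfall": expected_shortfall,
    }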
The best strategies in terms of Sharpe ratio are income strategies. The Sharpe ratio for the aggregated strategy stands above 1.0 in both cases. The equity value, equity dividend and FX carry strategies delivered the strongest returns. Momentum strategies tend to deliver lower Sharpe ratios. Yet, over shorter time periods, income strategies are more likely to suffer large losses than make large gains. The skewness is a measure of the symmetry of a distribution and a negative skew means that the most extreme movements are on the downside. It would appear that income strategies monetize a premium that comes in compensation for possibly large losses in certain circumstances. Conversely, the skew of momentum and relative value strategies is close to zero, or in some cases positive. Kurtosis (fat tails) is also high for income strategies, especially so for volatility strategies (VIX contango, short variance swaps, volatility risk premium and tail event risk premium strategies). Kurtosis is a measure of extreme risk and, in periods of market stress when volatility rises rapidly, strategies that sell options tend to lose more than most other strategies. | http://www.qmsadv.com/adv/risk-premia/rp-alloc |
The Yellow-wattled Lapwing is a permanent resident in our region. Here it is found in open fields, scrub and dry patches around riverbeds or canals. It prefers deserted areas as habitat and is thus called 'વગડાઉ ટીટોડી' (Vagdau Titodi) in the Gujarati language. These birds stand out unmistakably with their black cap and bright yellow fleshy lappets. They are usually less noisy than the Red-wattled Lapwing, so we mostly notice them during their breeding season, chiefly April to July. Like other lapwings and plovers, they are ground birds and lay their eggs on the ground. Although the nest looks like just a scrape with a collection of tiny pebbles, this lapwing disguises its nest so carefully that only an experienced birder can hope to find it.
We observed this lapwing on one of our canal visits. A pair of them instantly became alert when we stopped by. We realised their chicks were present just beside the road, highly camouflaged within the scrub. There were two of them. The chicks looked so fragile and so cute. We wondered how they could follow their parents to forage for food, or maybe they had come this far to drink water. We were unable to stop ourselves from taking some quick shots. | http://www.escapeintothewild.net/2017/08/yellow-wattled-lapwing.html |
An online petitioner has a unique plan to help pay down the national debt: sell the state of Montana!
A Change.org user named Ian Hammond says the U.S. has too much debt and that Montana is, quote, useless.
The petition suggests the U.S. could earn a trillion dollars by selling big sky country to Canada.
Even if the harebrained sale happened, it would only put a dent in the debt.
It crossed the $22 trillion threshold earlier this month.
Montana’s GDP is just under $50 billion per year. | https://www.myhighplains.com/weird-news/petition-to-sell-montana-to-canada-to-eliminate-the-national-debt/ |
Glen Hope is a borough in Clearfield County, Pennsylvania, United States. The population was 142 at the 2010 census.
Glen Hope is located in southern Clearfield County at 40°47′56″N 78°30′1″W (40.798959, -78.500320), primarily on the north side of Clearfield Creek, a northeast-flowing tributary of the West Branch Susquehanna River. Pennsylvania Route 53 passes through the borough, leading northeast 4 miles (6 km) to Madera and southwest 4 miles (6 km) to Irvona. Pennsylvania Route 729 crosses Clearfield Creek and PA-53 in the center of town and leads northwest 16 miles (26 km) to Grampian and southeast 5 miles (8 km) to Janesville.
According to the United States Census Bureau, Glen Hope has a total area of 2.2 square miles (5.6 km2), of which 0.04 square miles (0.1 km2), or 2.06%, is water.
As of the census of 2000, there were 149 people, 55 households, and 44 families residing in the borough. The population density was 72.0 people per square mile (27.8/km2). There were 59 housing units at an average density of 28.5 per square mile (11.0/km2). The racial makeup of the borough was 99.33% White, and 0.67% from two or more races.
There were 55 households, out of which 29.1% had children under the age of 18 living with them, 67.3% were married couples living together, 9.1% had a female householder with no husband present, and 18.2% were non-families. 14.5% of all households were made up of individuals, and 10.9% had someone living alone who was 65 years of age or older. The average household size was 2.71 and the average family size was 3.00.
In the borough the population was spread out, with 19.5% under the age of 18, 8.1% from 18 to 24, 28.2% from 25 to 44, 26.2% from 45 to 64, and 18.1% who were 65 years of age or older. The median age was 42 years. For every 100 females there were 93.5 males. For every 100 females age 18 and over, there were 96.7 males.
The median income for a household in the borough was $35,625, and the median income for a family was $42,500. Males had a median income of $31,250 versus $31,250 for females. The per capita income for the borough was $13,321. About 14.6% of families and 15.7% of the population were living below the poverty line, including 25.0% of those under the age of 18 and 7.7% of those aged 65 or over.
Q:
COPY (import) data into PostgreSQL array column
How should a (CSV?) text file be formatted so that it can be imported (with COPY?) into an array column in a PostgreSQL (8.4) table?
Given table testarray:
 Column  |          Type
---------+-------------------------
 rundate | date
 runtype | integer
 raw     | double precision[]
 labels  | character varying(16)[]
 results | double precision[]
 outcome | character varying(8)[]
and
COPY testarray from '/tmp/import.txt' CSV
none of the following contents of import.txt work:
2010/06/22,88,{{1,2},{3,4}},{{1,2},{3,4}},{{1,2},{3,4}},{{1,2},{3,4}}
2010/06/22,88,1,2,3,4,1,2,3,4,1,2,3,4,1,2,3,4
2010/06/22,88,'{{1,2},{3,4}}','{{1,2},{3,4}}','{{1,2},{3,4}}','{{1,2},{3,4}}'
2010/06/22,88,'1,2,3,4','1,2,3,4','1,2,3,4','1,2,3,4'
A:
COPY testarray from '/tmp/import.txt' CSV
2010-06-22,88,"{{1,2},{3,4}}","{{1,2},{3,4}}","{{1,2},{3,4}}","{{1,2},{3,4}}"
The array fields contain commas and braces, so under CSV rules each must be enclosed in double quotes (the CSV quote character); single quotes are treated as ordinary data, which is why the attempts above fail.
| |
There are well-mapped legal risks, regulatory risks, reputational risks and the risk of financial and operational losses from the use of AI.Footnote 6 General statements about AI risk as seen in Microsoft's annual report are not sufficient for shareholders and stakeholders to assess the full extent of fairness risks faced by the company in the provision and use of AI. Besides, investors with increased awareness of sustainable investing would want to know whether artificial intelligence solutions used or sold by companies are aligned with their values.
AI Fairness Reporting beyond general statements relating to AI risks in annual reports or other filings would require standards akin to the Global Reporting Initiative (GRI) standards in sustainability reporting.Footnote 7 Sustainability reporting rules (and practice notes) require (or advise) companies to describe both the reasons and the process of selecting material ESG factors.Footnote 8 In a similar way, companies should be required to report on the AI fairness metrics that they have adopted for the algorithms and the reasons for adoption, in a manner which will be useful for public scrutiny and debate by stakeholders, regulators and civil society.
Unfortunately, current guidance on Data Protection Impact Assessments (DPIA) under the General Data Protection Regulation (GDPR) does not make reference to the development of metrics which capture different notions of fairness in the technical machine learning literature.Footnote 9 In this paper, we propose a legal framework for AI Fairness Reporting informed by recent developments in the computer science machine learning literature on fairness. Companies should disclose the fairness of machine learning models produced or used by them on a comply-or-explain basis based on our proposed reporting framework.Footnote 10
The argument for a framework for AI Fairness Reporting comprises five parts. First, reasons are given as to why a reporting framework is needed. Second, the common sources of unfairness are identified. Third, how the machine learning literature has sought to address the problem of unfairness through the use of fairness metrics is analysed. Fourth, bearing in mind the issues related to unfairness and the fairness metrics, we propose a legal solution addressing what the disclosure contents of the AI Fairness Reporting framework should consist of. Fifth and finally, the proposed Reporting framework is applied to two case studies.
The structure of this article is as follows. Section II provides three reasons for having the AI Fairness Reporting framework: (1) to enable investors and stakeholders to have a better understanding of the potential legal liability risks due to contravention of applicable legislation; (2) to address investors’ and stakeholders’ sustainability-related expectations concerning the company's business and operations; and (3) to address inadequacies in the DPIA under the GDPR.
Section III analyses the nature or sources of unfairness. The unfairness can arise from different aspects in the process of building a supervised machine learning model, specifically with regards to data creation and labelling as well as feature extraction, embeddings and representation learning.Footnote 11 The unfairness can also arise from disparities in the performance of machine learning systems with respect to data related to different demographic groups.
Section IV examines how the machine learning literature has sought to address the problem of unfairness by using different metrics of fairness. These metrics are analysed, followed by an assessment of the trade-offs between the fairness metrics and the disparities in AI model performance.
Section V advances a framework for AI Fairness Reporting, the proposed reporting obligations of which should include: (1) disclosure of all uses of machine learning models; (2) disclosure of the fairness metrics used and the ensuing trade-offs; (3) disclosure of the de-biasing methods used; and (4) release of datasets for public inspection or for third-party audit.
Section VI applies the proposed AI Fairness Reporting framework to two case studies – one relating to credit profiling and the other to facial recognition – in order to show its utility. This is followed by the conclusion.
II. Why the Need for AI Fairness Reporting
A. To Enable Stakeholders to Better Understand Potential Legal Liability Risks
A first practical reason for the need for AI Fairness Reporting is to empower stakeholders like investors, customers and employees of a company to better assess the legal risks of a company due to potential breaches of applicable legislation through its use of machine learning models. We consider statutory examples from the UK and the US.
1. Equality Act 2010
The forms of discrimination under the UK Equality Act can be divided into direct discrimination and indirect discrimination. Section 13(1) of the Equality Act defines direct discrimination as Person A treating Person B less favourably than Person A treats or would treat others, because of a “protected characteristic” of B. Section 14 of the Act sets out the concept of combined discrimination, where direct discrimination happens on the basis of two relevant protected characteristics. The protected characteristics include age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation.Footnote 12
Indirect discrimination under the UK Equality Act, as defined in Section 19, refers to the application of a provision, criterion or practice that puts people with a relevant protected characteristic at a “particular disadvantage”, without showing the provision, criterion or practice to be a proportionate means of achieving a legitimate aim. The difference from direct discrimination is that the provision, criterion or practice only needs to be related to the protected characteristic and use of the protected characteristic itself is not needed for indirect discrimination to be found. For example, an algorithm used by a bank in relation to credit card applications that does not assign different creditworthiness based on the protected characteristics, but on spending patterns related to certain products and services, may impose a particular disadvantage on certain segments of the population, thus potentially violating the Equality Act.Footnote 13
2. GDPR
The GDPR became a part of UK domestic law in accordance with Section 3 of the European Withdrawal Act 2018. The GDPR governs the processing of personal data, and "profiling" is defined under the GDPR as "any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person".Footnote 14 Thus, most machine learning models acting on individuals will fall under this definition of profiling under the GDPR. Article 5 of the GDPR states the principle that data shall be processed "lawfully, fairly and in a transparent manner" and GDPR Article 24(1) requires that "appropriate technical and organisational measures" need to be implemented in light of risks to the rights of individuals.
Processing of special category dataFootnote 15 is prohibited under Article 9(1) of the GDPR, unless one of the exceptions in Paragraph 2 is satisfied. This concept of special category data is similar to that of protected characteristics discussed above regarding the UK Equality Act. However, this also means that a machine learning engineer is prevented from using special category data in the algorithm in order to correct for human biases in the datasetFootnote 16 unless the engineer fulfils one of the Paragraph 2 exceptions such as consent. However, it has been argued that genuinely free consent cannot be obtained in this case, because a refusal to grant consent could result in the individual suffering a higher risk of discrimination, such as being denied the opportunity to apply for a job.Footnote 17
Even if special category data are not processed directly, other data categories in the dataset might be used as proxy information to infer the special category data. The law is unclear as to when the existence of multiple proxy information available in the dataset, which allow for special category data to be inferred, would be deemed by the regulator to amount to special category data. The UK's Information Commissioner's Office guidelines on special category data state that the question of whether proxy information, which allows special category data to be inferred, will be deemed by the regulator as special category data depends on the certainty of the inference, and whether the inference was deliberately drawn.Footnote 18 Courts, in interpreting this provision, are likely to distinguish between (1) an explicit inference of special category data made by an algorithm in its final prediction and (2) algorithms which make predictions correlated with special categories without actually making the inference that the person in question possesses the special characteristics.Footnote 19 In addition to the latter case, we think algorithms which are provided with data correlated with special categories would belong to that category too, and this latter case should not trigger Article 9.
3. Domain-specific Legislation in the US
The US has domain-specific legislation in a variety of areas where machine learning is now applied, for example, the Fair Housing ActFootnote 20 and the Equal Credit Opportunity Act,Footnote 21 which list protected characteristics which are similar to those listed in the UK Equality Act. Employment law in the US also allows an employer to be sued under Title VII for employment discrimination under one of two theories of liability: disparate treatment and disparate impact.Footnote 22 Disparate treatment comprises either formal disparate treatment of similarly situated people or treatment carried out with the intent to discriminate. Disparate impact refers to practices that are superficially neutral but have a disproportionately adverse impact on groups with protected characteristics. Disparate impact is not concerned with intent, but to establish it, three questions need to be asked. First, whether there is a disparate impact on members of a group with a protected characteristic; second, whether there is a business justification for that impact; and finally, whether there are less discriminatory ways of achieving the same result.Footnote 23 The US Equal Employment Opportunity Commission advocates for a four-fifths rule,Footnote 24 namely, that the ratio of the probability of one group of the protected characteristic getting hired over the probability of the other group with the protected characteristic getting hired, should not be lower than four-fifths.
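As an illustration of the four-fifths rule, a minimal Python sketch (the data and variable names are ours, purely for illustration) might look like this:

import numpy as np

def disparate_impact_ratio(selected, group):
    # ratio of the lowest group selection rate to the highest
    selected, group = np.asarray(selected), np.asarray(group)
    rates = [selected[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

selected = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
ratio = disparate_impact_ratio(selected, group)
print(ratio, "passes the four-fifths rule:", ratio >= 0.8)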
Our proposed AI Fairness Reporting would allow investors, stakeholders and regulators to better assess whether sufficient work has been done by the company to comply with such regulations. Reporting on the fairness of AI models would also help to inform investors and stakeholders about the reputational risks of the company being involved in a discrimination scandal, especially when such incidents can impact share prices and result in a loss of talent.
B. Sustainable Investments
There has been a rapid growth in sustainable investments in the last few years. This has resulted in the incorporation of various ESG-related concerns or objectives into investment decisions. Globally, assets under management in ESG mutual funds and exchange-traded funds have grown from $453 billion in 2013 to $760 billion in 2018 and are expected to continue growing.Footnote 25 It is plausible that AI fairness considerations are already being taken into account by such ESG funds, (or will be in the near future) as part of their compliance with ESG reporting requirements. There is already work being done by investment funds on establishing a set of requirements including non-bias and transparency of AI use.Footnote 26 This set of requirements could then be used by investment funds to evaluate the use of AI by a company.
Stakeholder capitalism, which challenges the idea of shareholder primacy, seeks to promote long-term value creation by taking into account the interests of all relevant stakeholders.Footnote 27 Stakeholder capitalism is premised on the idea that the stock market misvalues intangibles that affect stakeholders, such as employee satisfaction.Footnote 28 Therefore, it emphasises that corporate directors and executives should make decisions in a manner which takes into account the interests of stakeholders other than shareholders, such as customers, employees and society at large. A natural extension of the considerations that corporate directors are required to take into account in order to make decisions which accord with stakeholder capitalism would be whether AI products and services used or sold by the company are fair towards potential job applicants, employees, customers and the public.
C. Inadequacies in the DPIA under the GDPR
The GDPR requires that a DPIA be carried out for any data processing which is “likely to result in a high risk to the rights and freedoms of natural persons”.Footnote 29 This reference to the “rights and freedoms of natural persons” is to be interpreted as concerned not only with the rights to data protection and privacy, but also, according to the Article 29 Data Protection Working Party Statement on the role of a risk-based approach in data protection legal frameworks, with other fundamental rights including the prohibition of discrimination.Footnote 30 Examples of processing operations which are “likely to result in high risks” are laid out in Article 35(3). Article 35(3)(a) relates to “a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person”. This is further elaborated in Recital 71 which specifically highlights processing operations as including those of a “profiling” nature such as “analysing or predicting aspects concerning performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements”. Further, Article 35(3)(b) relates to “processing on a large scale of special categories of data referred to in Article 9(1), or of personal data relating to criminal convictions and offences referred to in Article 10”. Recital 75 explains such special categories of data as those which “reveal racial or ethnic origin, political opinions, religion or philosophical beliefs, trade union membership and the processing of genetic data, data concerning health or data concerning sex life or criminal convictions and offences or related security measures”.
However, the exact scope and nature of what a DPIA entails, especially relating to issues concerning fairness, is less clear. Article 35(7) of the GDPR, read with Recitals 84 and 90, sets out the minimum features of a DPIA to comprise “a description of the envisaged processing operations and the purposes of the processing”, “an assessment of the necessity and proportionality of the processing”, “an assessment of the risks to the rights and freedoms of data subjects” and the measures envisaged to “address the risks” and “demonstrate compliance with this Regulation”.Footnote 31 The methodology of the DPIA is left up to the data controller. Even though guideline criteria are provided,Footnote 32 they make no reference to any fairness metrics and de-biasing techniquesFootnote 33 which have emerged in the technical machine learning literature.Footnote 34
Although previous work on biased hiring algorithms called for DPIA reports to be made available publicly,Footnote 35 there is no current requirement under the GDPR for such DPIA reports to be made public. Moreover, we do not think DPIA reports in their current form as defined under the GDPR and their guidance documents adequately serve the needs of AI Fairness Reporting because the DPIAs do not require the disclosure of fairness metrics and the de-biasing methods used.Footnote 36
III. Sources of Unfairness in the Machine Learning Models and Performance
A. Unfairness from the Process of Building Supervised Learning Models
We first examine how bias can be attributed to the various stages of the process of building supervised learning models. In general, there are three broad typesFootnote 37 of machine learning models: supervised learning, unsupervised learning and reinforcement learning. Supervised learning models are trained on data examples labelled with the decision which needs to be made. These labels are created either by manual human labelling or by less precise proxy sources or heuristics in a method known as weak supervision. When supervised models are trained using the labelled examples, the model learns how much weight to put on various factors fed to it when making a decision. In unsupervised learning, the data examples given to the model are not labelled with the decision. The model's goal here is simply to find patterns in the data, without being told what patterns to look for and with no obvious measure of how well it is performing. Reinforcement learning models use reward or punishment signals to learn how to act or behave. These models are distinct from supervised and unsupervised learning models. In our discussion, we focus primarily on supervised learning models. These have, so far, brought about the most legal and policy concerns surrounding fairness.
1. Dataset creation and labelling
In the dataset creation process, unfair sampling can occur from operational practices in the company. A practice of refusing credit to minorities without first assessing them would result in records of minorities being less represented in the training dataset.Footnote 38 Supervised learning models are dependent on the labels given to data in the training set. If the organisation has been making unfair decisions reflected in the training dataset, such unfairness will be included in the trained model. For example, human essay graders are known to have prejudices on the linguistic choices of students which signify membership in demographic groups.Footnote 39 Automatic essay grading models might then be trained on a dataset of essays with the corresponding scores assigned by such human essay graders, thus incorporating the biases of the humans into the models.
2. Feature extraction, embeddings and representation learning
Although images and text are easily associated with meaning when presented to a human, in their raw form these data types are devoid of meaning to a computer. Raw images are just rows of pixel values, while text is just a string of characters each encoded in the ASCIIFootnote 40 format. Deep neural network models are used to learn feature maps of images and embeddings of text which are used respectively in the computer vision and natural language processing applications of AI. For example, words can be represented in the form of numerical representations known as vector embeddings, which can capture meaning and semantic relationships between words through their distance and directional relationship with vector embeddings representing other words. In the classic word2vec example, the direction and distance between the vectors representing the words king and queen are similar to the direction and distance between the vectors representing the words husband and wife.
Traditionally, heuristics or rule-based approaches are used to create such features from the input data. Today, deep learning methods often rely on a technique known as representation learning to learn the representations as vector embeddings instead. In the context of natural language processing, representation learning is done by training on large datasets like Common Crawl,Footnote 41 using the frequency of words appearing close to each other and the order in which words appear as signals for a model to learn the meaning of words. The principle underlying the technique is that “a word is characterized by the company it keeps”.Footnote 42 There is much technical evidence to show that vector embeddings representing words, which are often used as inputs to current state-of-the-art natural language processing systems, encapsulate gender biases.Footnote 43 An extensive studyFootnote 44 looked into how stereotypical associations between gender and professional occupations propagate from the text used to train the models to the text embeddings, so that words like “doctor” are closely associated with the male gender pronoun “he”.Footnote 45
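Such associations are typically quantified by comparing the cosine similarity of an occupation's embedding with the embeddings of gendered words. The following is a minimal Python sketch; the toy vectors are ours and purely illustrative, whereas actual studies use pretrained embeddings such as word2vec or GloVe:

import numpy as np

# toy vectors for illustration only
emb = {
    "he": np.array([0.9, 0.1, 0.3]),
    "she": np.array([0.1, 0.9, 0.3]),
    "doctor": np.array([0.8, 0.2, 0.5]),
    "nurse": np.array([0.2, 0.8, 0.5]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

for word in ["doctor", "nurse"]:
    bias = cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])
    # positive values lean towards "he", negative towards "she"
    print(word, round(bias, 3))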
In the use of deep neural networks for supervised learning, engineers sometimes face the practical problem of having insufficient labelled data in their datasets. This is especially the case in applications where it takes domain experts to label the data, so that the creation of a huge, labelled dataset is a costly endeavour. To overcome the problem of limited training data, machine learning engineers often use a technique called transfer learning. This technique involves using a model already trained on another (possibly larger) dataset which contains data similar to the data the engineer is working with, before continuing training on the limited labelled data. Open-source models which have been pretrained on open datasets are made widely available by universities and technology companies. However, the geographic distribution of images in the popular ImageNet dataset reveals that 53 per cent of the images were collected in the US and Great Britain, and a similar skew is also found in other popular open-source image datasets, such as Open Images.Footnote 46 This can lead to models trained on such datasets performing better in the recognition of objects more commonly found in the US and UK than in other countries.
B. Unfairness through Disparities in the Performance of Machine Learning Models
Beyond the fairness of classification decisions produced by supervised learning models, there is another notion of fairness more generally applicable to all machine learning models that might not be clearly addressed by existing laws. This notion, which is considered in the machine learning literature on fairness, relates to the disparities in the performance of machine learning models with respect to data related to different demographic groups. These disparities can occur, for instance, when such groups are underrepresented in datasets used for training machine learning models. In addition, other applications of machine learning beyond classification can propagate bias when they are trained on datasets which are labelled by biased humans or biased proxy data.
1. Natural language processing
There are disparities between how well machine learning systems which deal with natural language perform for data relating to different demographic groups. Speech-to-text tools do not perform as well for individuals with some accents.Footnote 47 Sentiment analysis tools, which predict the sentiment expressed by texts through assigning scores on a scale, have been shown to systematically assign different scores to text based on race-related or gender-related names of people mentioned.Footnote 48 Moreover, annotators’ insensitivity to differences in dialect has also resulted in automatic hate speech detection models displaying a racial bias, so that words and phrases which are characteristic of African American English are correlated with ratings of toxicity in numerous widely-used hate speech datasets, which were then acquired and propagated by models trained on these datasets.Footnote 49 Even compared to human graders who may themselves give biased ratings, automated essay grading systems tend to assign lower scores to some demographic groups in a systemic manner.Footnote 50
It was found that when the sentences “She is a doctor. He is a nurse.” were translated using Google Translate from English to Turkish and then back to English, gender stereotypes were injected, such that Google Translate returned the sentences “He is a doctor. She is a nurse”.Footnote 51 The explanation provided by the researchers in the study is that Turkish has gender-neutral pronouns, so the original gender information was lost during the translation from English to Turkish and when the sentences were translated from Turkish back to English, the Google Translate picked the English pronouns which best matched the statistics of the text it was trained on.
2. Computer vision
Machine learning is widely deployed in computer vision tasks such as image classification, object detection and facial recognition. However, as previously discussed,Footnote 52 populations outside the US and UK are underrepresented in the standard datasets used for training such models. These datasets, curated predominantly by White, male researchers, reflect the world view of its creators. Images of household objects from lower-income countries are significantly less accurately classified than those from higher-income countries.Footnote 53 It has also been found that the commercial tools by Microsoft, Face++ and IBM designed for gender classification of facial images were shown to perform better on male faces than female faces, with up to a 20.6 per cent difference in error rate.Footnote 54 The classifiers were also shown to perform better on lighter faces than darker faces and worst on darker female faces.
3. Recommendation systems and search
Recommendation and search systems control the content or items which are exposed to users and thus bring about a unique set of fairness concerns.Footnote 55 First, the informational needs of some searchers or users may be served better than those of others. Harm to consumers can happen when a recommendation system underperforms for minority groups in recommending content or products they like. Such unfairness is difficult to study in real systems as the relevant target variable of satisfaction is hard to measure:Footnote 56 clicks and ratings only serve as crude proxies for user satisfaction. Second, inequities may be created between content creators or product providers by privileging certain content over others. YouTube was sued in 2019 by content creators who alleged that the reach of their LGBT-focused videos was suppressed by YouTube algorithms, while allegations relating to search have included partisan bias in search results.Footnote 57 Third, representational harms can occur by the amplification and propagation of cultural stereotypes.
4. Risk assessment tools
In risk assessment tools like COMPAS,Footnote 58 calibrationFootnote 59 is an important goal. Equalised calibration requires that “outcomes are independent of protected characteristic after controlling for estimated risk”.Footnote 60 For example, in a group of loan applicants estimated to have a 20 per cent chance of default, calibration would require that the rate of default of Whites and African Americans is similar, or even equal, if equalised calibration is enforced. If a tool for evaluating recidivism risk does not have equalised calibration between demographic groups defined by race, the same probability estimate given by the tool would have a different meaning for African American and White defendants – inducing judges to take race into account when interpreting the predictions of the risk tool.Footnote 61
IV. Competing Algorithmic Fairness Metrics and Trade-offs
A. Fairness Metrics of Supervised Classification Models
Although the concept of fairnessFootnote 62 in the law governing data processing is nebulous, the technical machine learning community has developed several technical metrics of fairness. In this section, we attempt to give a flavour of the various main categories of technical fairness metrics.
To begin with, “Fairness through Unawareness” is an approach to machine learning fairness where the model simply ignores special category data like race and gender, also known as protected characteristics. This approach has been shown to be ineffective because it is possible for the model to infer information about such protected characteristics from other data categories which are correlated with the protected characteristic,Footnote 63 thus leading to indirect discrimination. A classic example of this would be the removal of the protected characteristic of race in a dataset, but the retention of another feature of the dataset focusing on whether or not the individual visits the Mexican market on a weekly basis, which is correlated with the Hispanic race. Fairness through Unawareness, apart from being ineffective, requires all protected characteristics to be masked out. This requirement might be unfeasible in some applications where it would, for example, require the removal of gender from facial images, or the removal of words relating to protected characteristics from sentences which would be left devoid of readability.
To address the problems of Fairness through Unawareness, at least four fairness metrics have been developed which do without the need to mask out protected characteristics and instead determine fairness directly based on the protected characteristic.Footnote 64 These four metrics are “Demographic Parity”, “Equality of Odds”, “Equality of Opportunity” and “Equalised Calibration”. These metrics are examined in the context of a binary classification model, which is a machine learning model which predicts either a positive or negative class (e.g. whether a person is positive or negative for a disease).
1. Demographic Parity
The fairness metric of Demographic Parity measures how much an algorithmic decision is independent of the protected characteristic by taking the difference in the probability of the model predicting the positive class across demographic groups which are differentiated based on the protected characteristic.Footnote 65 Between two demographic groups which are differentiated based on the race protected characteristic, namely Whites and African Americans, perfect satisfaction of this metric in a hiring model would result in the positive hiring decision being assigned to the two demographic groups at an equal rate.
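For illustration, a minimal Python sketch of the Demographic Parity difference (variable names are ours; a value of zero means the metric is perfectly satisfied):

import numpy as np

def demographic_parity_difference(y_pred, group):
    # gap between the rates at which each group receives the positive prediction
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)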
However, there have been disadvantagesFootnote 66 identified with Demographic Parity, which can be demonstrated through the example of a credit scoring model. Take, for example, a dataset of loan applicants, divided into qualified applicants (those who did actually repay the loan) and unqualified applicants (those who eventually defaulted on the loan). If African Americans have a higher rate of actual loan defaults than Whites, enforcing Demographic Parity would result in a situation where unqualified individuals belonging to the demographic group with lower rates of loan repayment are assigned a positive outcome by the credit scoring model as a form of affirmative action, in order to match the percentages of those assigned a positive outcome across the demographic groups of the protected characteristic. Thus, Demographic Parity has been empirically shown to often substantially cripple the utility of the model used due to the decrease in accuracy, especially where the subject of prediction is highly correlated with the protected characteristic.
2. Equality of odds
To address the problems with Demographic Parity, an alternative metric called Equality of Odds was proposed. This metric computes both the difference between the false positive rates,Footnote 67 and the difference between the true positive rates,Footnote 68 of the decisions of the model on the two demographic groups across the protected characteristic.Footnote 69 For instance, enforcing this metric in relation to a model in our previous example would ensure that the rate of qualified African Americans getting a loan is equal to that of qualified Whites, while also ensuring that the rate of unqualified African Americans getting a loan is equal to that of unqualified Whites.
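For illustration, a minimal Python sketch of an Equality of Odds check for two groups labelled "A" and "B" (labels and variable names are ours):

import numpy as np

def group_rates(y_true, y_pred):
    # true positive rate and false positive rate of the predictions
    return y_pred[y_true == 1].mean(), y_pred[y_true == 0].mean()

def equality_of_odds_gaps(y_true, y_pred, group):
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr_a, fpr_a = group_rates(y_true[group == "A"], y_pred[group == "A"])
    tpr_b, fpr_b = group_rates(y_true[group == "B"], y_pred[group == "B"])
    # both gaps must be (close to) zero for Equality of Odds to hold
    return abs(tpr_a - tpr_b), abs(fpr_a - fpr_b)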
A study examining the effectiveness of Equality of Odds on the operation of the controversial COMPASFootnote 70 algorithm, which predicts recidivism of criminals, showed that although the accuracy of the algorithm was similar for both African Americans and Whites, the algorithm was far from satisfying the Equality of Odds metric because the false positive rate of the algorithm's decisions was twice as high for African Americans as for Whites. This is because in cases where the algorithm fails, it fails differently for African Americans and Whites. While African Americans are twice as likely to be predicted by the algorithm to reoffend but not actually reoffend, it was much more likely for Whites to be predicted by the algorithm not to reoffend but go on to commit crimes.
3. Equality of opportunity
Another variation is Equality of Opportunity, a weaker fairness criterion than Equality of Odds because it only matches the true positive rates across the demographic groups, without matching the false positive rate.Footnote 72 In the above example of the credit scoring algorithm, enforcing this metric would ensure qualified individuals have an equal opportunity of getting the loan, without enforcing any constraints on the model for individuals who ultimately defaulted. In some cases, Equality of Opportunity can allow the trained modelFootnote 73 to achieve a higher accuracy rate due to the lack of the additional constraint.
However, it has also been found that enforcing equality only in relation to the true positive rate will increase disparity between the demographic groups in relation to the false positive rate.Footnote 74 In the COMPAS example above, we see a trade-off which will often be faced in machine learning classification models. Ensuring the algorithm succeeds at an equal rate in predicting reoffending among African Americans and Whites when they do actually go on to reoffend (true positive rate), results in an unequal rate of the algorithm wrongly predicting African Americans and Whites – who do not go on to reoffend – as reoffending (false positive rate). To enforce the algorithm to err at an equal rate between Whites and African Americans who do not actually reoffend, would almost always result in a drop in the overall accuracy of the model. This is because in the naturally occurring data, the actual rate of reoffending differs between White and African Americans.
4. Equalised calibration
Another important fairness metric to consider is equalised calibration between demographic groups. In classification models, it is often useful for a model to provide not only its prediction, but also the confidence level of its predictions. Calibration can be understood as the extent to which this confidence level provided matches reality. Having a perfectly calibrated model would mean that if a confidence level of 0.8 is assigned to a prediction, then eight out of ten times the predictions of the model which were assigned the confidence level of 0.8 would belong to the class predicted by the model. In recidivism models like COMPAS, risk scores are often provided along with the classification prediction of whether or not a convict will reoffend. In classification models predicting whether a borrower will default on the loan, risk scores are also provided by the model together with the confidence level of its predictions. Where there is no perfect calibration, it is thus important that there is equalised calibration of these confidence scores between demographic groups. Otherwise, a user of the model would, for example, need to interpret a risk score for a African American individual differently from a risk score for a White individual. However, as will be shown below, there is a trade-off between Equalised Calibration and Equality of Odds.
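For illustration, a minimal Python sketch of an equalised-calibration check, which bins the model's risk scores and compares the observed rate of the positive outcome within each bin across groups (bin count and variable names are ours):

import numpy as np

def calibration_by_group(scores, y_true, group, bins=10):
    scores, y_true, group = map(np.asarray, (scores, y_true, group))
    edges = np.linspace(0, 1, bins + 1)
    table = {}
    for g in np.unique(group):
        mask = group == g
        binned = np.digitize(scores[mask], edges[1:-1])
        # observed positive rate per score bin; similar rows across groups
        # indicate equalised calibration
        table[g] = [y_true[mask][binned == b].mean() if (binned == b).any() else None
                    for b in range(bins)]
    return table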
B. Trade-offs
The technical literature on fairness in machine learning has shown that there are trade-offs between the notions of fairness on both levels, namely, trade-offs between the fairness metrics for classification models (i.e. between Equalised Calibration and Equality of Odds) and trade-offs between fairness metrics and disparities in model accuracy.
1. An example of trade-offs between two fairness metrics (i.e. between Equalised Calibration and Equality of Odds) – Chouldechova's Impossibility Theorem
According to Chouldechova's Impossibility Theorem, if the prevalence (base) rates of the positive class in the demographic groups differ, it is impossible for a binary classification model to achieve all three of equalised calibration, equal false positive rates and equal false negative rates between demographic groups.Footnote 75 If a classifier has equal false negative rates between both groups, it can be mathematically derived that it will also have equal true positive rates between both groups. Therefore, the Chouldechova Impossibility Theorem can be generalised to mean that a model cannot satisfy both the Equality of Odds (equal false positive rates and equal true positive rates between demographic groups) and Equalised Calibration metrics at the same time.
To put this in the context of a classification model for the provision of loans, if people of colour and White individuals in the dataset do have different rates of actually defaulting on loans (the prevalence rate), it is not possible to perfectly calibrate the credit risk scores provided by the model (so that, for example, 80 per cent of people assigned a 0.8 risk score actually default), while also having (1) the rate at which individuals predicted to default do not actually default (the false positive rate) to be equal between both demographic groups and (2) the rate at which individuals predicted to not default actually default (the false negative rate) to be equal between both groups.
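The arithmetic behind the theorem can be illustrated numerically. Assuming the standard identity relating the false positive rate (FPR) to the prevalence p, the positive predictive value (PPV, which calibration roughly equalises) and the false negative rate (FNR), namely FPR = p/(1-p) · (1-PPV)/PPV · (1-FNR), holding PPV and FNR equal across groups with different prevalence forces their false positive rates apart. A minimal Python sketch with made-up numbers:

def implied_fpr(prevalence, ppv, fnr):
    # false positive rate implied by prevalence, positive predictive value
    # and false negative rate
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

# equal PPV (0.7) and equal FNR (0.2), but different base rates of default
print(implied_fpr(prevalence=0.5, ppv=0.7, fnr=0.2))  # about 0.34
print(implied_fpr(prevalence=0.3, ppv=0.7, fnr=0.2))  # about 0.15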
Further, it was found that, on the specific recidivism dataset on which COMPAS was used, enforcing an algorithm to achieve calibration would result in disparities in both the false positive and false negative ratesFootnote 76 across demographic groups. On the other hand, mis-calibrated risk scores would cause discrimination to one of the demographic groups, since a user of the model would need to interpret the risk scores differently depending on the demographic group the subject belongs to. To achieve fairness, the best course of action in such a situation is to make changes to the dataset by either collecting more data, or, hopefully, including more salient features in the dataset.Footnote 77
There may be situations in which the dire consequences of false positives may differ greatly from the consequences of false negatives. In such situations, the company might choose to satisfy calibration along with only one of either an equalised false positive rate or an equalised false negative rate, corresponding to the condition for which consequences are more dire.Footnote 78 An example to consider could be an early detection system for a chronic disease like diabetes which can be treated if detected at the early onset stage, but which bears significant long-term financial and well-being costs for the patient if left untreated till it develops into the later stage. In such a situation, the consequence of a false negative (allowing the disease to develop into the untreatable stage with long-term financial and lifestyle costs) is significantly greater than the consequence of a false positive (cost of repeated testing, or of lifestyle changes like exercise and healthy eating, aimed at reversing prediabetes), especially for lower-income minority groups. A company developing such a system might give a well-reasoned explanation for choosing to enforce calibration and an equalised low false negative rate, while forgoing an equalised false positive rate.
Another example would be an experiment on an income prediction model, for deciding whether a person's income should be above $50,000. Ensuring calibration along with an equalised low false negative rate across genders would result in some employees being overpaid. This is because a false negative in such a scenario means that there are borderline cases where a male and female will each be paid less than $50,000, when in reality, one of them should have been paid more than $50,000. The company should enforce an equalised low false negative rate in a manner which would mean that the algorithm recommended that the company pay both of them more than $50,000,Footnote 79 even if one of them does not deserve it. For a company, this might be more tolerable than if the equalised false positive rate was chosen instead, which might result in reputational risk with some employees of a particular gender being underpaid more often than employees of another gender.
2. Trade-off between Equality of Odds and equalised accuracy
With Equality of Odds being one of the most popular and advanced metrics of fairness, it is interesting to note that there is evidence of a trade-off between Equality of Odds and equalised accuracy between the demographic groups in a dataset.Footnote 80 This was found in the dataset for the COMPAS recidivism prediction tool. In other words, this means that having the tool achieve similar levels of accuracy for African Americans and Whites will result in greater differences in the false positive rate as well as the false negative rate of the tool between African Americans and Whites.
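A small numerical illustration (with made-up confusion-matrix counts, not the COMPAS figures) shows how two groups can be treated with identical overall accuracy while their error rates diverge sharply:

```python
# Hypothetical confusion-matrix counts for two demographic groups.
group_a = {"tp": 40, "fp": 20, "fn": 10, "tn": 30}
group_b = {"tp": 25, "fp": 5, "fn": 25, "tn": 45}

def rates(c):
    accuracy = (c["tp"] + c["tn"]) / sum(c.values())
    false_positive_rate = c["fp"] / (c["fp"] + c["tn"])
    false_negative_rate = c["fn"] / (c["fn"] + c["tp"])
    return accuracy, false_positive_rate, false_negative_rate

print(rates(group_a))  # (0.70, 0.40, 0.20)
print(rates(group_b))  # (0.70, 0.10, 0.50)
```

Both groups are classified with 70 per cent accuracy, yet one group bears four times the false positive rate of the other. Conversely, forcing the error rates to match will generally pull the groups' accuracy figures apart, which is the trade-off reported for the COMPAS dataset.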
V. A Framework for AI Fairness Reporting
In light of the two types of unfairness in machine learning, as discussed in Part II above (bias in classification decisions by supervised learning models and disparities in the performance of machine learning applications across demographic groups), it is suggested that a framework for AI Fairness Reporting should consist of the following requirements: (1) disclosure of the machine learning models used; (2) disclosure of the fairness metrics and the trade-offs involved; (3) disclosure of any de-biasing methods adopted; and (4) release of datasets for public inspection or for third-party audit.
A. Disclosing All Uses of Machine Learning Models Involved
We distinguish between machine learning systems which make predictions or decisions directly affecting individuals and machine learning systems which do not. We propose that companies should be made to furnish detailed AI fairness reports for supervised learning systems which make decisions or predictions directly affecting individuals.
Even though our proposal does not require detailed fairness reporting for machine learning models which do not make decisions directly affecting individuals, use of any machine learning models might still bring about fairness concerns for a variety of reasons including unfair sampling. For example, crowd-sourcing of data on potholes in Boston through a smartphone app which uploaded sensor data from the smartphone to the city's database resulted in more potholes detected in wealthier neighbourhoods than lower-income neighbourhoods and neighbourhoods with predominantly elderly populations, in line with patterns of smartphone usage.Footnote 81 This could have directed the use of resources on fixing potholes towards those wealthier neighbourhoods, away from poorer neighbourhoods.
A company's disclosure of all uses of its machine learning models would allow potential indirect implications for fairness to be flagged. Thus, companies ought to disclose all uses of machine learning models as a matter of best practice.
B. Reporting on Fairness Metrics Used and Trade-offs
Companies ought to disclose the main AI fairness metric or metrics adopted for a classification algorithm and the reasons for their adoption. Any deliberations as to why other fairness metrics were not adopted, and how the trade-offs were navigated, also need to be explained. In light of the Chouldechova Impossibility Theorem and the trade-offs in the adoption of AI fairness metrics which have been pointed out above, along with many more which are likely to be found as research in AI fairness matures, it is important to ensure companies disclose their decisions in relation to such trade-offs and the reasons behind them.
One way to implement and enforce explanations of deliberate omissions in reporting of AI fairness metrics is to have a robust whistleblowing policy with sufficient incentives such as monetary rewards,Footnote 82 as well as sanctions for companies found guilty of not explaining deliberate omissions in reporting. Employees of technology companies have not been shy to come forward with concerns over the environmental and social impacts of the companies they work for. When Google allegedly forced out the co-lead of its ethical AI team over a paper which pointed out the risks of large language models which were used in recent significant enhancements to Google's core search product,Footnote 83 more than 1,400 Google staff members signed a letter in protest. The risks pointed out in the paper included the significant environmental costs from the large computer processing power needed to train such models, and the racist, sexist and abusive language which ends up in the training data obtained from the Internet. Having a whistleblowing policy, coupled with an option for anonymity, would provide an accessible and effective channel for technology employees to bring omissions in reporting such matters to light without suffering personal repercussions.
To address disparities in the performance of models, requiring companies to report accuracy rates (and other appropriate measures of model performance) of supervised learning models by demographic groups, instead of merely reporting an overall accuracy rate, would be a good start. However, the metric of choice for measuring model performance might not be able to capture all fairness issues, especially in machine learning applications like machine translation where the test dataset might be biased as well.
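Disaggregated reporting of this kind is straightforward to produce once group membership is recorded in the evaluation data. The sketch below is a generic illustration; the column names and the choice of pandas are assumptions, not a prescribed format.

```python
# Sketch: report performance per demographic group alongside the overall
# figure that would otherwise be reported on its own. Column names are
# illustrative assumptions.
import pandas as pd

def disaggregated_report(df, group_col, label_col, pred_col):
    def summarise(d):
        return pd.Series({
            "n": len(d),
            "accuracy": (d[label_col] == d[pred_col]).mean(),
            "positive_rate": d[pred_col].mean(),
        })
    per_group = df.groupby(group_col).apply(summarise)
    overall = summarise(df).to_frame(name="overall").T
    return pd.concat([per_group, overall])
```

A report of this shape makes it immediately visible when a headline accuracy figure conceals a much weaker result for a smaller demographic group.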
As a best practice to be encouraged, companies should consider opening up a limited interface for non-commercial use of their AI applications, where public users can probe the model to check for fairness. For example, registered users could each be allowed to upload a limited number of passages to test a translation model, or a limited number of personal selfies to test a facial recognition system.
C. Reporting on De-biasing Methods Used
Of the various approaches available for companies to satisfy the fairness metrics they have chosen for a machine learning application, each choice of approach would have different implications for trade-offs with other metrics of fairness, as well as for overall accuracy, as we see below. Thus, we argue that along with the choice of fairness metrics, companies should report any interventions made to achieve fairness goals.
Methods for de-biasing machine learning models have occasionally been shown merely to cover up biases with respect to a fairness metric, rather than remove them. For example, methods for removing gender biases in word embeddings which reported substantial reductions in bias were shown to have the actual effect of mostly hiding the bias, not removing it.Footnote 84 The gender bias information can still be found in the vector space distances between "gender-neutralised" words in the de-biased vector embeddings and is still recoverable from them.Footnote 85 This is why the techniques for de-biasing have to be reported in conjunction with the fairness metrics: to prevent companies "over-optimising" on the chosen fairness metric in the way described above, without serving the actual goal of fairness. It is important to note that the de-biasing techniques used can be reported with little to no revelation about the AI model itself. Thus, companies have no excuse, on the basis of protecting their trade secrets, for declining to report them.
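The finding referred to above can be probed with two simple measurements on an embedding: the projection of a word onto a gender direction (which projection-based de-biasing drives towards zero) and the make-up of the word's nearest neighbours (where the original bias tends to survive). The sketch below assumes the embedding is a plain dictionary of word vectors and that a list of previously male-associated words is available; both are illustrative assumptions.

```python
# Two illustrative probes for residual gender bias in word embeddings.
# `emb` is assumed to be a dict mapping words to numpy vectors.
import numpy as np

def direct_bias(emb, words, male_word="he", female_word="she"):
    """Projection of each word onto the he/she direction (near zero after
    projection-based de-biasing)."""
    g = emb[male_word] - emb[female_word]
    g = g / np.linalg.norm(g)
    return {w: float(np.dot(emb[w] / np.linalg.norm(emb[w]), g)) for w in words}

def neighbourhood_bias(emb, word, originally_male_associated, k=10):
    """Fraction of the k nearest neighbours of `word` that were
    male-associated before de-biasing, a signal that often persists."""
    others = [w for w in emb if w != word]
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    nearest = sorted(others, key=lambda w: -cosine(emb[word], emb[w]))[:k]
    return sum(w in originally_male_associated for w in nearest) / k
```

If the first probe reports values close to zero while the second still separates, say, "nurse" from "engineer", the reported reduction in bias is cosmetic in the sense described above.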
1. Pre-processing methods
Pre-processing methods make changes to the dataset before the machine learning algorithm is applied. As discussed earlier, prevalence rates of the target variable of prediction, say the occurrence rate of recidivism, may differ across demographic groups. Methods of rebalancing the dataset could involve re-labelling some of the data points (an example of which is changing the label of a random sample of men who failed on parole to a success), or assigning weights to the data points and weighting less represented groups in the dataset more heavily. As intuition would readily tell us, rebalancing is likely to lead to a loss in accuracy. There are other more sophisticated methods of pre-processing, which can be optimised in a manner which changes the values of all predictive features in the dataset while still preserving as much "information" as possible,Footnote 86 but it remains to be seen whether such methods will result in other trade-offs.
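One widely cited weighting scheme of this kind assigns each example a weight that reflects how over- or under-represented its combination of group and label is, relative to what statistical independence between the two would imply. The sketch below is a generic illustration of that idea; the column names are assumptions.

```python
# Sketch of a reweighing pre-processing step: weight each example by
# P(group) * P(label) / P(group, label), so that group and label look
# independent in the weighted data. Column names are illustrative.
import pandas as pd

def reweigh(df, group_col="group", label_col="label"):
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    def weight(row):
        g, y = row[group_col], row[label_col]
        return (p_group[g] * p_label[y]) / p_joint[(g, y)]

    return df.assign(sample_weight=df.apply(weight, axis=1))
```

The resulting `sample_weight` column can be passed to most learning algorithms; as the text notes, the price of this rebalancing is typically some loss of accuracy on the original, unweighted distribution.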
Because deep learning models learn rich representations of the data with which they are fed, deep learning researchers have experimented to see if models can learn fair representations.Footnote 87 For example, representations of the data learnt by the deep learning model, instead of the raw data itself, are used to make predictions. If the way the representations of the data are learnt is constrained in a manner that excludes information on demographic group membership, then the predictive part of the model has no discernible information about group membership to work with in making its predictions. Thus, the decisions would be made in a manner independent of group membership, which is what researchers who work on fair representations argue is a fairer approach.Footnote 88
2. In-processing
In-processing makes fairness adjustments during the training of the machine learning model itself. This could involve changes to the model or to its training objective so that a specified fairness goal is taken into account.Footnote 89
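A common way of folding a fairness goal into the training objective is to add a penalty term to the ordinary loss. The sketch below uses a demographic-parity gap as the penalty purely for illustration; the penalty weight and the choice of PyTorch are assumptions, and other fairness goals would use different penalty terms.

```python
# Sketch of an in-processing objective: ordinary classification loss plus a
# penalty on the gap in average predicted scores between two groups.
# Labels are floats in {0., 1.}; `group` is a tensor of 0/1 group indicators.
import torch

def fairness_regularised_loss(logits, labels, group, lam=1.0):
    base = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)
    probs = torch.sigmoid(logits)
    parity_gap = torch.abs(probs[group == 0].mean() - probs[group == 1].mean())
    return base + lam * parity_gap
```

The weight `lam` makes the accuracy-fairness trade-off explicit: reporting its value, and how it was chosen, is exactly the kind of disclosure the framework proposed here calls for.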
3. Post-processing
Post-processing involves changing the predictions of a machine learning model to achieve better results on a fairness metric. This can be done through randomly reassigning the prediction labels of the model.Footnote 90 However, the result of such reassignments could be that the overall classification accuracy of the model is brought down to match that of the demographic group for which accuracy was the worst. Besides, having individuals randomly chosen to be assigned a different outcome might raise individual fairness concerns when similar individuals are treated differently. There might also be ethical considerations when such methods are used in sensitive domains like healthcare.
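Post-processing can also take the form of group-specific decision thresholds applied to an already trained model's scores. The sketch below equalises positive-prediction rates across groups; which quantity to equalise, and the data and names used, are illustrative assumptions rather than a recommendation.

```python
# Sketch of a post-processing step: pick a separate score cut-off for each
# group so that roughly the same fraction of each group receives a positive
# prediction. Randomising borderline cases, as described above, follows the
# same pattern.
import numpy as np

def group_thresholds(scores, groups, target_positive_rate):
    thresholds = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        thresholds[g] = np.quantile(s, 1.0 - target_positive_rate)
    return thresholds

def apply_thresholds(scores, groups, thresholds):
    return np.array([int(s >= thresholds[g]) for s, g in zip(scores, groups)])
```

Because the underlying model is untouched, this is the easiest intervention to bolt on after the fact, but, as the text notes, it can degrade accuracy for some groups and treat near-identical individuals differently at the group boundary.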
Technical research on the implications of de-biasing techniques is still nascent, though there is already evidence of consequences for model accuracy and of trade-offs with competing fairness goals that the chosen de-biasing technique does not take into account. Making it mandatory for companies to transparently report any de-biasing interventions made would allow public scrutiny and academic research to flag potential implications, intended or unintended, of the procedure chosen.
D. Release of Datasets for Public Inspection or for Third-party Audit
Ideally, companies should release all datasets used for the training of machine learning models to the public.Footnote 91 However, it is understandable that significant investment is often required on the part of companies to collect and curate such datasets in order to obtain a competitive advantage. As a result, companies might be reluctant to share this data. Some datasets might also contain trade secrets, confidential information and the private data of users. It might not always be feasible to completely prevent data re-identification from the release of anonymised data. Thus, the release of datasets should not be mandated, but is best left to a comply-or-explain basis.Footnote 92
However, in cases where the dataset is not released, we propose that a requirement be set for an independent third-party audit to be done on the dataset. This audit can flag any potential problems of bias in data labelling from operational practice, or underrepresentation of specific demographic groups. The audit report should be made public together with the AI Fairness Report in the company's public disclosures.
Much can be done to encourage companies to release their datasets and the availability of such data would aid the progress of research into AI fairness. First, for companies to preserve their competitive advantage, the release of such datasets does not need to be made under an open-source licence.Footnote 93 A new standard data release licence, similar to non-commercial and no derivatives licences used for research data,Footnote 94 can be created in such a way that the use of the data is limited to inspection for fairness concerns. Admittedly, enforcement of such a licence can be a problem if it is possible for models to be trained using the released data with little risk of detection by the data owner.
Second, companies might be concerned about the impact on user privacy should such datasets contain user information and about potential liability from breaches of data protection regulations. Data protection authorities can consider providing a safe harbour for datasets released to facilitate AI fairness, as long as anonymisation procedures under guidelines issued by data protection authorities are followed to reduce the risk of data re-identification.
One major limitation to note on the release of anonymised datasets is the extent to which they correctly represent the nature of the original dataset, especially if modificationsFootnote 95 to values in the dataset have had to be made to prevent the re-identification of individuals. The anonymised dataset released might in turn misrepresent fairness in the original dataset. To mitigate this concern, it would be helpful to mandate that the company declare any data anonymisation procedures applied to the released data.
Apart from releasing the proprietary data used for model training, the company should also disclose any use of open-source datasets and pre-trained models from third parties. This would allow the public to consider whether any known biases in such open-sourced datasets and pre-trained models might be carried into the company's AI models.
VI. Application of AI Fairness Reporting Framework to Two Case Studies
A. Goldman Sachs’ Credit Profiling Model on the Issuance of the Apple Card
We consider the case of Goldman Sachs’ credit profiling of applicants for the Apple Card. A technology entrepreneur, David Heinemeier Hansson, raised concerns about Goldman Sachs’ Apple Card program for gender-based discrimination through the use of what he called a “black-box algorithm”.Footnote 96 He claimed that, although he and his wife filed joint tax returns and lived in a community-property state, he received a credit limit that was 20 times higher than that offered to his wife. Hansson also expressed concerns that “the Bank relied on algorithms and machine learning for credit decision-making, and [he] complained that an Apple Card customer service agent could not explain these algorithms or the basis for the difference in credit limits”.Footnote 97 Apple's co-founder Steve Wozniak also claimed that he had 10 times the credit limit of his wife on the Apple Card, even though they shared all assets and accounts.
We now turn to look at how AI Fairness Reporting under our framework could be retrospectively applied in this case. Even though no fair lending violations were found by the New York State Department of Financial Services, we argue that had this reporting been done, the transparency and communication issues flaggedFootnote 98 by the New York State Department of Financial Services report could have at least been mitigated, if not avoided entirely.
1. Disclosing all uses of machine learning models
Under the proposed AI Fairness Reporting framework, Goldman Sachs would have needed to disclose all its uses of machine learning models as a matter of best practice. Disclosure of even the use of machine learning models which were not making decisions or predictions directly affecting individuals would have been needed under our reporting framework. This would have included internal risk management models which predicted the health of Goldman Sachs’ lending business. If the internal risk models had consistently predicted a high-risk exposure to Goldman Sachs’ lending business just before a holiday specific to one demographic group, causing Goldman Sachs to tighten credit lending each year ahead of that holiday, just as the credit needs of that demographic group increased, this could have raised fairness considerations.
The machine learning models used in Goldman Sachs relating to the Apple Card program, which directly affected individuals, included more than just the credit scoring model. Under our proposed reporting framework, machine learning models deployed on Goldman Sachs’ consumer-facing platforms, which determined whether to advertise or recommend the Apple Card to a particular user, would also have needed to go through detailed fairness reporting.
2. Reporting on fairness metrics used
Under our proposed reporting framework, the choice of fairness metrics should have taken into account the social and legal contexts of the machine learning application. For credit lending decisions, the Equal Credit Opportunity Act and state laws in the US apply to Goldman Sachs’ Apple Card programme. Under these laws, the gender of credit applicants cannot be taken into account in the credit decisions and two categories of discrimination are recognised: disparate treatment and disparate impact. Under our proposed reporting framework, de-biasing a machine learning model, together with the disclosure of group fairness metrics, would have revealed that protected characteristics like gender had been taken into account. If so, this would have contravened the disparate treatment requirement since the Equal Credit Opportunity Act disallows the intended use of protected characteristics.
At the same time, to examine disparate impact, the Consumer Examinations Unit of the New York State Department of Financial Services applied regression analysis on the Bank's Apple Card underwriting data for nearly 400,000 New York applicants, covering applications dating from the launch of Apple Card until the time of the initial discrimination complaints. It did not state if any specific fairness metric was used, but the regression analysis would have measured the degree of independence between gender and the credit decisions made.Footnote 99 The Department found that the Bank had a fair lending programme in place for ensuring its lending policy “did not consider prohibited characteristics of applicants and would not produce disparate impacts”, with an “underlying statistical model”.Footnote 100 The New York State Department of Financial Services, in its investigation report,Footnote 101 also found that “women and men with equivalent credit characteristics had similar Apple Card application outcomes”. This seems to allude to a notion of individual fairness also being applied in the report.
In such a situation, under our proposed reporting framework Goldman Sachs would have had to choose both a group fairness metric and an individual fairness metric to report on.Footnote 102 It is highly likely that there would have been trade-offs between the chosen group fairness metric and the individual fairness metric. In the context of this case, enforcing the algorithm to give a high credit rating at an equal rate to men and women who do not ultimately default on payments might have resulted in individuals with highly similar profiles being given a different credit rating. This could have happened when, for example, men have more borderline cases than women and in order to equalise the rate at which a high credit rating is predicted between men and women who did not ultimately default, highly similar borderline profiles of men might have been assigned different outcomes. All metrics used in arriving at the operational model should have thus been reported to show transparently how these trade-offs were navigated in the final model used.
3. Reporting on de-biasing methods used
What is completely missing in both the investigation report and subsequent public relations efforts by Goldman Sachs on the Apple Card program is an account of any specific de-biasing methods used to arrive at the fairness outcomes, which we propose should have been made public.
Existing laws like the Equal Credit Opportunity Act serve to protect consumers from discriminatory lending based on protected characteristics, so the investigation report's finding that no fair lending laws have been breached serves little to inform other stakeholders on how the use of the machine learning model affects them. Investors and stakeholders of Goldman Sachs would have been interested to know how much the de-biasing methods used (if any) would have had an impact on the accuracy of the credit scoring model as this would have affected the business and operations of Goldman Sachs, which would have in turn impacted its financial performance and reputation. Researchers could have further concentrated their study of the implications of such de-biasing techniques being used in practice, in the specific context of credit scoring, given that the full implications of de-biasing techniques are still under-researched. Credit applicants themselves would have wanted to know how such de-biasing techniques might have potentially affected them and therefore would have wanted a fuller report that did not merely confirm that there was compliance with the law.
4. Release of datasets for inspection
We refer here to the German Credit DatasetFootnote 103 as an indication that it might have been possible for Goldman Sachs to have released an anonymised dataset of applicants to its Apple Card program. The German Credit Dataset consists of 1,000 individuals drawn from a German bank in 1994. Protected characteristics in the dataset include gender and age, along with 18 other attributes including employment, housing and savings.
Under our proposed reporting framework, a third-party audit of datasets used to train any machine learning models used for credit scoring in the Apple Card program would have been required, if there was no release of a public dataset. These datasets could include Goldman Sachs’ historical data on setting credit limits on other similar credit programs and any bias in those datasets could have carried over to the Apple Card program if models were trained on that data.
However, even if Goldman Sachs had deemed that the release of such a dataset would pose significant risks for client privacy, it could have been more transparent by giving a comprehensive listing of the attributes which were taken into account in its credit scoring model. That would have reduced misunderstandings as to why seemingly similar individuals were offered different credit limits. Explanations givenFootnote 104 in the Department's report on the Apple Card case as to why spouses with shared bank accounts and assets were given different credit outcomes included obscure attributes which might not have been considered by a layman. These included "one spouse was named on a residential mortgage, while the other spouse was not" and "some individuals carried multiple credit cards and a line of credit, while the other spouse held only a single credit card in his or her name". Even if an applicant had referred to the public education materials released by Goldman Sachs after this incident,Footnote 105 the applicant would not have known which attributes Goldman Sachs took into account in its credit scoring model.
B. Wrongful Arrest Attributed to False Positive Match by the Dataworks Plus Facial Recognition System
We next consider the case where the facial recognition technology by a US company Dataworks Plus resulted in a wrongful arrest in the US state of Michigan. Robert Julian-Borchak Williams, an African American man, was wrongfully accused of shoplifting due to a false positive match by the Dataworks Plus facial recognition software.Footnote 106
This culminated in a request by Senator Sherrod Brown of Ohio for Dataworks Plus to provide information to the US Congress on questions including (1) whether the company planned to impose a moratorium on the use of its facial recognition technologies by law enforcement, (2) what the factual basis behind marketing claims by the company on the reliability and accuracy of its facial recognition system was and (3) whether there was an executive responsible in the company for facilitating conversations on ethical decision-making.Footnote 107 Keeping in mind that Dataworks Plus brands itself as a “leader in law enforcement and criminal justice technology,”Footnote 108 with the facial recognition system FACE Plus being one of its key offerings, imposing such a moratorium would have a substantial impact on its financial revenue.
This case is different from the previous caseFootnote 109 in that the creator of the facial recognition system was not the user of the system: that was the Detroit police department. Also, there is a nuanced difference here in relation to the allegation of unfairness. This was not a problem of disparate outcomes across a protected characteristic, but of the AI system having a different level of accuracy for different demographic groups. Here, the facial recognition system matched facial snapshots from crime scene video surveillance to a 50 million Michigan police database of driver's licence photographs in order to generate matches to candidates who might be potential suspects. The allegation was that the quality of matches produced by the facial recognition system is worse when it comes to people of colour.
This allegation is not unfounded, given the findings of studies preceding the incident, conducted on commercial facial recognition systems. In a Massachusetts Institute of Technology study on such systemsFootnote 110 it was found that the error rate for light-skinned men is never worse than 0.8 per cent, but 34.7 per cent for dark-skinned women. According to the study, although researchers at a major US technology company claimed an accuracy rate of more than 97 per cent for a face-recognition system they had designed, the dataset used to assess its performance was more than 77 per cent male and more than 83 per cent White. A National Institute of Standards and Technology studyFootnote 111 covered 189 software algorithms from 99 developers, which make up the majority of the industry in the US. The study used four collections of photographs containing 18.27 million images of 8.49 million people from operational databases provided by the State Department, the Department of Homeland Security and the FBI. It found that for one-to-many matching systemsFootnote 112 which are commonly used in suspect identification systems, there was a higher rate of false positives for African American women, although the study contained a caveat pointing out that not all algorithms give this high rate of false positives across demographics in these types of system and systems that are the most equitable are also amongst the most accurate. By the account of the Detroit Police Chief, the Dataworks Plus facial recognition system misidentifies 96 per cent of the time.Footnote 113 From the results of the NIST study, this might indicate that the allegation that it has a higher rate of false positives for African Americans is a reasonable one to make.
Applying our AI Fairness Reporting framework to Dataworks Plus, we argue that the process would have enabled Dataworks Plus to identify problems better with its facial recognition system and would have allowed the civilian oversight boardFootnote 114 in Detroit to evaluate the adoption of the system better. The discussion in Sections 1 to 4 below describe the consequences of applying the requirements of our proposed reporting framework to the facts of the Dataworks Plus case.
1. Disclosing all uses of machine learning models
Under our proposed AI Fairness Reporting framework, Dataworks Plus, being a provider of software systems rather than a user, would have needed to disclose all the uses of machine learning models in the various software solutions it provided. There might have been multiple machine learning models in a single software system. For example, a facial recognition system might have an image classification model to first classify the race of the subject of a facial image, before applying a matching algorithm built specifically for image subjects belonging to that particular race.
We do note that there might have been concerns about the protection of trade secrets, if the disclosure of machine learning model use were made compulsory. However, there could have been a degree of flexibility afforded to the company with regards to the granularity of disclosure: the disclosure could have ranged from the general class of machine learning model to the specific model used. It would have been hard for a company to justify why such a requirement, softened by the flexibility mentioned above, could not have been imposed, especially when it is balanced against the interests of stakeholders such as potential customers and individuals whose lives might be affected by the use of the models.
2. Reporting on fairness metrics used
The NIST Face Recognition Vendor Test reportFootnote 115 studied the differences in false positives and false negatives between demographic groups in the dataset, along the lines of gender, age and racial background. We suggest that these two metrics would have been apt for use in AI Fairness Reporting by Dataworks Plus. This would have been a holistic representation of how well the facial recognition system had performed, in stark contrast to the marketing materials on the Dataworks Plus website that were highlighted by Senator Brown, which vaguely described the identification of the facial candidates produced by the FACE Plus software system as “accurate and reliable”.
When the wrongful arrest of Robert Julian-Borchak Williams, mentioned earlier, was first reported in the New York Times, the General Manager of Dataworks Plus, Todd Pastorini, was cited as claiming that the checks which Dataworks Plus did when it integrated facial recognition systems from subcontractors were not "scientific" and that no formal measurement of the systems' accuracy or bias was done. All this negative publicity for the company, and its associated reputational risks, could have been avoided had the company conducted and reported on a fairness study. The Dataworks Plus facial recognition software used by the police in Michigan included components developed by two other companies, NEC and Rank One Computing.Footnote 116 The NIST studyFootnote 117 conducted the year before the incident on over a hundred facial recognition systems, including those developed by these two companies, had found that African American and Asian faces were ten to a hundred times more likely to be falsely identified than Caucasian faces.Footnote 118
However, one more nuance needs to be appreciated in this situation where the developer of the AI system was not the end user: the prediction outputs of the AI system needed to be interpreted and acted upon by the users who were not as familiar as developers with the workings of machine learning models. In the Dataworks case, the system provided a row of results generated by the software from each of the two companies, NEC and Rank One Computing, along with the confidence scores of each candidate match generated.Footnote 119 It was up to the investigator to interpret these matching candidates, along with the associated confidence scores, before deciding whether to proceed with any arrest. The outputs of the AI system were thus characterised by law enforcement and software providers like Dataworks Plus as mere investigative leads and were therefore not conclusive as to arrest decisions. In such a situation, assuming proper use of the system, the presence of false positives was not as detrimental as it might be sensationalised to be. Thus, explanations about the context of the AI system's use and guidance on how the reported fairness metrics should be interpreted, would have been helpful if included in the AI Fairness Reporting.
3. Reporting on de-biasing methods used
The Dataworks Plus case presented a clear risk that the use of de-biasing methods could have created other problems. A studyFootnote 120 by computer scientists at the Florida Institute of Technology and the University of Notre Dame showed that facial recognition algorithms return false matches at a higher rate for African Americans than for White people, unless they are explicitly recalibrated for the African American population. However, such recalibration would result in an increase in false negatives for White people if the same model were used, which means it would make it easier for the actual White culprits to evade detection by the system. Using different models, however, would have required a separate classification model for choosing the appropriate model to use, or would have required the police to exercise judgment, which might introduce human bias.Footnote 121 It is, therefore, important that the methods used to address bias be disclosed so that observers can anticipate and flag any potentially inadvertent problems that the models create.
4. Release of datasets for inspection
The datasets contained the photographs of individuals, which made anonymisation without removing important information in the data practically impossible. However, under our proposed AI Fairness Reporting framework, the metadata of the subjects could have been released and reference could have been made to the metadata information used in the NIST studyFootnote 122 indicating the subject's age, gender and either race or country of birth. This transparency with regards to metadata information would have allowed underrepresentation of demographic groups in the dataset to be detected and flagged by observers and in our view would have been sufficient for the purposes of disclosure.
VII. Conclusion
Thus far, regulators and the legal literature have been treating fairness as a principle of AI governance, but shy away from prescribing specific rules on how this principle should be adhered to. That approach may be justified in view of the technical uncertainty over how fairness in AI should work in practice and the myriad considerations and contexts in which it operates. However, technical progress in AI fairness research has highlighted the issues arising from the fairness metrics used and the important trade-offs in the use of AI, including between AI fairness metrics as well as accuracy. There are also reported incidents of bias in AI systems which have captured the public consciousness, leading to a backlash against companies in the form of employee walkouts, resignations of key executivesFootnote 123 and media scrutiny.Footnote 124
Reflexive regulation in the form of AI Fairness Reporting according to the framework proposed in this paper encourages companies to take the necessary steps to ensure the fairness of AI systems used or sold, while empowering stakeholders of a company with adequate information to flag potential concerns of unfairness in the company's AI systems. It also affords companies a measure of flexibility to take into account other considerations, such as user privacy and protection of trade secrets, when they are reporting on AI fairness.
One limitation of the AI Fairness Reporting framework is that it only captures the fairness outcomes of machine learning models at a snapshot at the time of reporting. Even if companies are subject to such reporting on an annual basis, it is at best an ex-post monitoring mechanism when shifts in the nature of the data happen between reporting periods. Companies might also push back on how the AI Fairness Reporting would create an onerous burden for companies using AI and would hold the use of AI to a higher standard of interrogation than that applied to human decision makers. However, it is important to note the opportunity opened up by the use of AI for unfairness to be combated, which was not available with human decision makers. Despite the complaints about the opacity of AI, AI would still be far more transparent through the methods outlined in the proposed framework than the conscious (and unconscious) thoughts in the brain of a human decision maker. Compared to our ability to inspect the datasets used to train an AI model, it is much harder to access and assess all the experiences in the lifetime of a human decision maker which might influence how a decision is made. Similarly, while explicit de-biasing methods are applied to an AI model in order to achieve the reported AI fairness metrics, it is harder to assess how a human decision maker corrects, and potentially overcorrects, for the biases of which they are aware. Businesses should see the increased compliance costs as part of the bargain for accessing the benefits of AI. We can look to the progress of climate change reporting in the UK, which has now been made mandatory,Footnote 125 in the hope that efforts to ensure companies act more responsibly towards their stakeholders, such as the proposed AI Fairness Reporting, can have similar traction. | https://core-cms.prod.aop.cambridge.org/core/journals/cambridge-law-journal/article/legal-framework-for-artificial-intelligence-fairness-reporting/C2D73FBE9BB74E5D41DDA6BDCA208424 |
The invention discloses a forward and reverse combined express package specification design method. The method comprises the steps of: classifying the organisation's trucks according to the size of the carriage; selecting the two truck types with the largest total mileage in the last year as the main vehicle types; constraining the length of each edge of the packaging unit; determining the length, the width and the height of each container unit, dividing these equally by n1, n2, n3 and n4 with the container unit's length, width and height as a reference, and arranging and combining the results to form alternative specification combinations; determining the number I of packaging specification categories; counting the goods volumes of the orders in the last year, dividing the volume range into I sub-intervals, and making the number of orders contained in each sub-interval equal; selecting the maximum goods length, the maximum goods width and the maximum goods height in each sub-interval as the calibration size of that sub-interval; and selecting, from the alternative specification combinations, the smallest packaging specification that can accommodate the calibration size of each sub-interval as the express packaging specification design. The method takes transportation conditions and goods adaptation into account, and solves the problem of low shipping efficiency caused by express packages of widely scattered sizes. |
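The order-driven part of the procedure described above can be sketched in a few lines. The field names, data format and use of equal-count volume sub-intervals below are assumptions made for illustration; the abstract does not specify an implementation.

```python
# Rough sketch: split last year's orders into I volume sub-intervals with
# equal order counts, then take the largest length/width/height within each
# sub-interval as that sub-interval's calibration size.
import numpy as np

def calibration_sizes(order_dims, num_specs):
    """order_dims: list of (length, width, height) tuples; num_specs: I."""
    ordered = sorted(order_dims, key=lambda d: d[0] * d[1] * d[2])  # by volume
    chunks = np.array_split(np.array(ordered), num_specs)
    return [tuple(chunk.max(axis=0)) for chunk in chunks if len(chunk)]
```

Each calibration size would then be matched against the candidate specification combinations to pick the smallest package that accommodates it.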
This Calculator provides online conversion of kilograms to pounds (kg to lb).
The kilogram (symbol: kg), is the base unit of mass in the Metric system (the International System of Units or SI). The kilogram is also known as kilogramme (in UK) and is defined as being equal to the mass of the International Prototype of the Kilogram. The kilogram is equal to 1000 grams and was originally defined in 1795 as the mass of one cubic decimeter (or 1 liter) of water at 4°C. The kilogram is approximately equal to 2.20462262 international (avoirdupois) pounds. The prototype kilogram was manufactured in year 1799 and has a mass equal to the mass of 1.000025 liters of water at 4°C. The international pound (lb) is equal to 453.59237 grams. | https://lbs-to-kg.com/converter_kg_to_lb.php |
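As a quick sketch, the conversion factors quoted above can be applied directly:

```python
# Conversion based on the definition 1 lb = 453.59237 g quoted above.
KG_PER_LB = 0.45359237

def kg_to_lb(kg):
    return kg / KG_PER_LB   # 1 kg -> about 2.20462262 lb

def lb_to_kg(lb):
    return lb * KG_PER_LB
```

For example, kg_to_lb(75) returns roughly 165.35.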
RALEIGH - As progress on the U.S. 401 Rolesville Bypass project continues, the contractor plans to close a section of Rolesville Road this weekend to make the required connection for the upcoming bypass. The roadway will close Friday, March 27, at 10 p.m. and is expected to re-open by Monday, March 30, at 6 a.m. The closure is needed so crews can install new concrete islands and pavement markings to put traffic into its new "superstreet" pattern. A superstreet keeps through traffic moving instead of being slowed by traffic signals at intersections. By minimizing the number of signals, the plan improves mobility and reduces the number of accidents. A signed detour will be in place, using South Main Street, Jonesville Road and Mitchell Mill Road. Once Rolesville Road is reopened, there will be a new traffic pattern and motorists will be using a small portion of the new bypass. Motorists are advised to use caution while traveling in the area of the road closures and to take the detour routes into account for possible delays. NCDOT reminds motorists to watch for detour signs, stay alert and obey the posted speed limit. For more information about funding for infrastructure improvements in North Carolina, as well as other NCDOT projects and activities, visit www.ncdot.gov. For real-time travel information at any time, call 511, visit the Traveler Services section of the NCDOT website or follow NCDOT on Twitter. You can also access NCDOT Mobile, a version of the NCDOT website especially for mobile devices. Visit m.ncdot.gov from your mobile browser. | https://www.ncdot.gov/news/press-releases/Pages/2015/Section-of-Rolesville-Road-Closing-This-.aspx
Who invented the first weaving machine?
In 1733, James Kay, invented a simple weaving machine called the flying shuttle.
How was weaving discovered?
The tradition of weaving traces back to Neolithic times – approximately 12,000 years ago. … Weaving can be done by hand or by using machines. Machines used for weaving are called looms. The loom originated as a crude wooden frame and gradually evolved into the modern, sophisticated electronic weaving machine.
Who were the weavers in history?
Weavers often belonged to communities that specialised in weaving. Their skills were passed on from one generation to the next. The tanti weavers of Bengal, the julahas or momin weavers of north India, sale and kaikollar and devangs of south India are some of the communities famous for weaving.
Where was the power loom invented?
Synopsis. The textile industry in the United States entered a new era in 1814 when Francis Cabot Lowell created the first successful American power loom in Waltham, Massachusetts. | https://cherryblossomlove.com/how-to-sew/who-invented-the-first-weaving-loom.html |
Copyright © 2011 Josefina Navarrete-Solís et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Background. Multiple modalities have been used in the treatment of melasma with variable success. Niacinamide has anti-inflammatory properties and is able to decrease the transfer of melanosomes. Objective. To evaluate the therapeutic effect of topical niacinamide versus hydroquinone (HQ) in melasma patients. Patients and Methods. Twenty-seven melasma patients were randomized to receive for eight weeks 4% niacinamide cream on one side of the face, and 4% HQ cream on the other. Sunscreen was applied along the observation period. They were assessed by noninvasive techniques for the evaluation of skin color, as well as subjective scales and histological sections initially and after the treatment with niacinamide. Results. All patients showed pigment improvement with both treatments. Colorimetric measures did not show statistical differences between both sides. However, good to excellent improvement was observed with niacinamide in 44% of patients, compared to 55% with HQ. Niacinamide reduced importantly the mast cell infiltrate and showed improvement of solar elastosis in melasma skin. Side effects were present in 18% with niacinamide versus 29% with HQ. Conclusion. Niacinamide induces a decrease in pigmentation, inflammatory infiltrate, and solar elastosis. Niacinamide is a safe and effective therapeutic agent for this condition.
1. Introduction
Melasma is defined as an acquired chronic hypermelanosis on sun-exposed areas, most frequently found in women with III-V phototypes of Fitzpatrick. The etiology is not completely elucidated; however, ultraviolet sunlight exposure appears to be the most significant factor. The basis of the treatment is photoprotection. Diverse modalities of drug therapy have been used, such as hydroquinone (HQ), which inhibits tyrosinase enzyme activity. In spite of its serious adverse effects and moderate results in 80% of patients, HQ is considered the gold standard treatment for melasma, although relapse is usual after suspension.
Niacinamide studies have demonstrated a suppression of melanosome transfer suggesting the reduction of cutaneous pigmentation, but to date there has been no clinical report of this effect in melasma. There have been several reports regarding other beneficial effects of topical niacinamide on the skin, including prevention of photoimmunosuppression and photocarcinogenesis, anti-inflammatory effects in acne, rosacea, and psoriasis. It also increases biosynthesis of ceramides, as well as other stratum corneum lipids with enhanced epidermal permeability barrier function. Moreover, its antiaging effects have been demonstrated in randomized trials.
The guidelines to clinical trials in melasma have suggested a correct diagnosis by using at least two subjective methods (besides an objective method), a comparison with the therapeutic gold standard and an evaluation of safety outcome.
The aim of this work was to assess the efficacy and safety of niacinamide 4% versus HQ 4% in the treatment of melasma through subjective and objective methods.
2. Patients and Methods
This is a double-blind, left-right randomized clinical trial. The protocol was reviewed and approved by the ethics committee of our hospital, and each subject signed a written informed consent. The sample size was determined based on favorable response: 0.8 for HQ and at least 0.4 for niacinamide, with 95% CI, two tails, α of 0.05 and β of 0.8.
We included 27 women with melasma attending the outpatient clinic of Dermatology Department at the Hospital Central “Dr. Ignacio Morones Prieto”, from March 2008 to February 2009.
Our inclusion criteria were women at least 18 years old without any topical, systemic, laser or surgical treatment on the face during the previous year. The exclusion criteria were pregnant and nursing women, patients with a history of hypersensitivity to any of the components of the study formulas, and the coexistence of associated diseases or other pigmentation disorders.
A history was taken from each patient, regarding age, gender, occupation, time of onset, history of pregnancy, contraceptive pills, and sun exposure.
At baseline, we obtained two 2 mm biopsies from each of the 27 patients, one from lesional skin and another from non-photoexposed facial skin; these were stained with haematoxylin and eosin to determine the general histopathological features of the epidermis and dermis.
The inflammatory infiltrate was counted manually by two independent blinded observers, using a mm ocular grid and 100× magnification. The cells were counted for the entire section, and the results expressed as the number of cells per mm2. The same procedure was employed to count melanocytes (Fontana Masson) and metachromatic granules (Wright-Giemsa) in mast cells. To count the epidermal melanin, we obtained a magnification of 40× to get a scanning view of the epidermis. Images were obtained from the entire 2 mm sample with a digital camera mounted on a microscope (Olympus CX 31) which was connected to a personal computer (PC). The image signals taken by the PC were evaluated using Image-Pro Plus Version 4.5 (Media Cybernetics, Silver Spring, MA, USA). With the aim of discerning possible abnormalities of melanin in melasma patients, as shown before, or abnormalities induced by the intervention, we performed a qualitative analysis by Raman spectrophotometry (Horiba, Jobin-Yvon T64000, Edison, NJ, USA) before and at the conclusion of the study.
Patients were randomized in a double-blind manner to receive one treatment on the left and the other on the right side of the face. They received two containers labeled right or left with 4% niacinamide (Nicomide-T cream 4%, DUSA Pharmaceuticals Inc.) or 4% HQ (Eldoquin cream 4%, Valeant Pharmaceutical). All patients were instructed to apply the correct amount of both treatments and to use a SPF 50+ broad spectrum sunscreen every 3 hours during day time.
Concomitant use of other skin care products or systemic treatments was not allowed during the study. Treatment was administered for the period of 8 weeks, with basal evaluation and followup at 4 and 8 weeks. Assessments included a skin pigment evaluation by a chromameter (CR-300; Minolta, Osaka, Japan), melasma area and severity index (MASI), physician global assessment (PGA) by an independent observer, conventional photography, and infrared thermography (Flexcam S, Infrared solutions, USA) with photographic register which mainly was used to detect irritation. All side-effects were registered. The double-blinded study was opened at 8 weeks in order to take a 2 mm biopsy in the side treated with niacinamide.
For statistical analysis, we used the Student t-test, and a value of less than 0.05 was considered significant.
3. Results
Twenty-seven female patients with melasma were included, 12 (44%) were of skin phototype IV, and 13 (48%) of type V. The pattern of melasma was centrofacial in 13 (50%), malar in 10 (37%), and mandibular in 4 (14%).
The patients' ages ranged from 25 to 53 years (mean, 37 years). The duration of melasma varied from 4 to 8 years (mean, 6.5 years). Family history of melasma was found in 19 (70%) patients. The most frequent precipitating factor was sun exposure, followed by pregnancy. Eight patients (29%) had used oral contraceptives.
3.1. Clinical Results
The onset average MASI score for the HQ side was 4 (5% CI, 90.9–1.8) and 1.2 (95% CI, 0.8–1.6) after eight weeks (). The initial MASI score for the niacinamide side was 3.7 (95% CI, 2.9–4.4) and 1.4 (95% CI, 3.3–4.7) at the end of the study (). The average decrease for HQ was 70% and 62% for niacinamide. This improvement was registered using conventional photography (Figures 1 and 2) with no perceptible differences between both sides.
The PGA rated the niacinamide side improvement as excellent in three patients, good in nine, moderate in seven, and mild in eight. The HQ-treated side was rated excellent in seven, good in eight, moderate in six, and mild in six patients (Figure 3). Data showed statistical significance for both treatments, HQ (), and niacinamide ().
Colorimetric assessment was performed initially and at the end of the study; we evaluated the luminosity axis (L*) as well as the erythema axis (a*). The lightening effect of HQ and niacinamide was apparent at 4 weeks of treatment and was more evident at 8 weeks. Colorimetric measures showed no statistical differences between both treatments (Table 1). The erythema was more intense on the side treated with HQ than on the side treated with niacinamide, but the difference was not statistically significant. Infrared light thermography at an environmental temperature of 21°C showed a temperature decrease of 0.8°C on both sides after treatment. There was no statistical difference between both treatments.
3.2. Histopathology Results
The biopsy samples were stained with haematoxylin and eosin for general histology, Fontana Masson to evaluate melanin pigment, and Wright-Giemsa for metachromatic granules in mast cells. At baseline, we found a moderate to severe degree of rete ridge flattening and epidermal thinning in 23 (85%) melasma biopsies. Solar elastosis was present in all melasma samples. Mild to moderate perivascular lymphohistiocytic infiltrates were also present in all of them, and a moderate presence of mast cells near elastotic areas in 11 (40%) patients. With Fontana-Masson stained sections, the amount of melanin was increased in all epidermal layers of melasma skin; we observed pigmented basal cells protruding into the dermis in 20 (74%) biopsies, as reported before. In the upper dermis, we found scattered melanin in 19 (70%) melasma biopsies. The features in the biopsies of non-sun-exposed skin were close to those of normal skin.
After 8 weeks of treatment, the blind was opened in order to take a biopsy from the side treated with niacinamide. We obtained 11 posttreatment biopsies for analysis. By means of digital analysis of biopsy images, we could observe that the amount of epidermal stained melanin was diminished significantly (). The average inflammatory infiltrate of mast cells was reduced from 22 to 16 cells/mm2 (). Solar elastosis was also reduced, but no statistical differences were present (Figure 4).
3.3. Spectrophotometry
Raman spectroscopy measurements showed that the molecular structure of melanin was normal and remained unaltered after exposure to niacinamide, since the measurements showed the characteristic peaks of melanin previously published [11, 12]. Patients with abnormal melanin could respond differently to treatment, which could explain the variable success rate with HQ. We wanted to show that these patients were homogeneous in this aspect.
3.4. Side Effects
Side effects were present on the niacinamide side in 5 patients (18%), compared to 8 patients (29%) for the HQ side. The most frequent side effects were erythema, pruritus, and burning. Most of them were mild for niacinamide and moderate for HQ. On the niacinamide side, erythema, pruritus, or burning was present in 2 (7%) patients, and on the HQ side they were present in 5 (18%) patients. All of these were reduced with continued treatment in both modalities, as the a* colorimetric value did not show significant changes for either treatment at the end of the study.
4. Discussion
Melasma is a chronic and persistent hyperpigmentation, representing a therapeutic challenge because of the high rate of relapses. This work showed that niacinamide 4% is an effective agent for the treatment of melasma, as assessed by objective methods and clinical evaluation. Our results indicate that 4% niacinamide was effective in approximately 40% of patients, showing outstanding clinical results. In the posttreatment biopsies, we could observe that the amount of epidermal melanin and inflammatory infiltrate was diminished significantly, as was solar elastosis, although not enough to reach statistical significance. This insufficient antiaging effect could be related to the short duration of the study; therefore, further clinical studies using niacinamide for longer periods are warranted in this condition. We observed that the evolution time of melasma did not affect the response to treatment. On the other hand, colorimetric assessment showed no statistical difference between these two treatments (). However, the lightening effect of HQ was evident as early as the first month of treatment, whereas with niacinamide it was noted at the second month. HQ had the disadvantage of moderate adverse effects in 18% of patients, compared to milder effects in 7% with niacinamide. Treatment with niacinamide showed no significant side effects and was well tolerated; therefore, it could be used for longer periods, as part of the initial hyperpigmentation treatment and as a maintenance drug. However, further trials are required to assess the combination of this topical drug with other agents and its additive effects in the treatment of melasma. The mechanism of action of niacinamide in melasma could be through the reduction of melanosome transfer, photoprotection actions, its anti-inflammatory properties, and direct or related antiaging effects such as the reduction of solar elastosis. We have previously described prominent infiltrates of mast cells in the elastotic areas of melasma skin and evidence of damage to the epidermal basal membrane, which could facilitate the fall or migration of active melanocytes and melanin into the dermis, allowing the constant hyperpigmentation in melasma. Due to these findings, we wanted to test an intervention capable of inducing modifications to these atypical findings related to the pathogenesis of melasma, in addition to modifying the increased pigmentation. Therefore, we propose niacinamide as an effective, integral, and safe therapeutic alternative in the treatment of melasma, since it not only reduces pigmentation and inflammation, but also may reduce solar degenerative changes with minimal adverse events. | https://www.hindawi.com/journals/drp/2011/379173/
Would floating car data help with the bus or subway challenge (from personal cars or satnav devices)?
Absolutely—more real-time information on the movement of cars would be helpful.
Does the MTA have data on the demand at each station or numbers of people on the platform in real-time?
There is modeling that gives a decent estimate on numbers of people at different platforms, but no real-time count right now.
How much of the system is the test plan looking to cover?
It depends on the product that is intended to be piloted.
Is there an open data portal for transit?
Yes, all of NYCT's open data is available from http://web.mta.info/developers/developer-data-terms.html#data.
Are there areas outside of NYC that the MTA has responsibility to provide transit services which have low ridership levels?
This challenge is focused on New York City Transit, which only operates within the five boroughs of New York City.
What would the MTA consider as a low ridership? e.g., 0-20 passengers per hour, under 40 passengers per hour?
How do you think about measuring the effectiveness of service disruption communication?
We have undertaken extensive customer research during the development of our new MYmta app. However, we do not currently have a formal or comprehensive program to evaluate the effectiveness of communications for planned or unplanned service diversions/disruptions. We do respond to and take heed of feedback received via social media and through our CRM system.
If NYCT knew ahead of time that you were on the cusp of a crisis, what actions could they take?
Actions would be focused on customer service and trying to proactively warn customers of where issues might be arising to give them the opportunity to select another route or plan accordingly.
Do MTA buses have OBD ports in them?
Roughly what percentage of bus routes would be considered low ridership?
Bus ridership data by route is available on the MTA website for public reference.
Can the MTA identify bus routes and operating times that have low rates of passengers per hour? e.g., between 0-20 or under 40?
Yes, generally bus service on nights and weekends are scheduled for lower frequency service than weekday and peak hour service.
Do school buses and access-a-ride fleet fall under MTA or different departments?
Does the MTA have a fleet of 'regular' cars used by employees?
Work crews use small trucks or vans.
What will it take to get transit agencies to prepare business plans for driverless rail operations?
This question does not appear to be within the scope of the TTL challenges.
Can you give more information on the UWB pilot?
CBTC is aided by UWB, and this tech is currently under evaluation. UWB will simplify the installation of CBTC installation.
Can you locate all buses at any one time?
GPS is on all buses; they transmit locations every 30 seconds with the route and direction of the bus—this is a public feed.
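As an illustration, a minimal polling sketch in Python shows how such a feed could be consumed. The endpoint URL, API key parameter, and response structure below are placeholders, not the actual MTA interface—check the MTA developer resources for the real feed details.

```python
# Minimal sketch of polling a real-time bus location feed (Python).
# FEED_URL, the "key" parameter, and the JSON layout are assumptions for
# illustration only; consult the MTA developer documentation for specifics.
import time
import requests

FEED_URL = "https://example-bus-feed.mta.info/vehicle-positions.json"  # hypothetical URL
API_KEY = "YOUR_API_KEY"  # obtained from the MTA developer portal

def poll_feed():
    # Buses report roughly every 30 seconds, so polling faster adds little.
    while True:
        resp = requests.get(FEED_URL, params={"key": API_KEY}, timeout=10)
        resp.raise_for_status()
        data = resp.json()
        # The exact JSON structure depends on the feed format (e.g., SIRI
        # VehicleMonitoring or GTFS-realtime VehiclePositions); inspect the
        # top-level keys before extracting vehicle records.
        print(list(data.keys()))
        time.sleep(30)

if __name__ == "__main__":
    poll_feed()
```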
Do access-a-ride vehicles have OBD ports?
Is the bus challenge looking to just prevent bus lane blockages or is it looking to speed up buses through new routes, as suggested in the Fast Forward Plan?
Looking for all ideas to speed up buses, it is as broad base as possible.
Is there traffic signal priority for buses?
Yes, it is on several routes right now, with more planned. This is done in partnership with City DOT.
What corrective measures are you able to take if you have real-time data on bus lane blockages?
Real-time there isn’t much that can be done, but hope to use the data to work with city agencies to come up with solutions/measures to counter the impact.
Can a project require interface development with the NYDOT?
We are working with NYCDOT on bus performance issues and are open to this option.
Does Wi-Fi work on every platform?
Is there a Read Me for the historical train arrival and other data?
The historical train arrival data consists of archived GTFS-realtime files; GTFS-realtime is an industry-standard data format with information available here and elsewhere on the Internet.
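Because the archived files are in the GTFS-realtime format (Protocol Buffers), a short sketch may help show how they can be read. This assumes the `gtfs-realtime-bindings` Python package and a locally downloaded feed file; the filename and the specific fields printed are illustrative.

```python
# Minimal sketch: parsing an archived GTFS-realtime file with the
# gtfs-realtime-bindings package (pip install gtfs-realtime-bindings).
from google.transit import gtfs_realtime_pb2

feed = gtfs_realtime_pb2.FeedMessage()
with open("feed.pb", "rb") as f:  # "feed.pb" is a placeholder filename
    feed.ParseFromString(f.read())

# Each entity may carry a trip update with predicted stop times.
for entity in feed.entity:
    if entity.HasField("trip_update"):
        trip = entity.trip_update.trip
        for stu in entity.trip_update.stop_time_update:
            arrival = stu.arrival.time if stu.HasField("arrival") else None
            print(trip.route_id, stu.stop_id, arrival)
```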
Has the MTA considered embracing the digital rail strategy adopted in London? If yes can you share an update.
We are aware of Network Rail's Digital Railway program, and will continue to observe its progress for lessons which may be applicable to NYCT.
Does the challenge include integration of equipment on-board subway cars to collect additional data to assist in predicting equipment failures before they occur?
MTA is keen to see all applications that address the challenge; how equipment could be onboarded would come later on in the application process.
Does the data on the countdown clocks show predictions of next subway arrivals or is it real-time data?
Clocks are driven off real-time data.
Discussions on how the pilot would be executed will happen later in the application phase, and will be a collaborative dialogue between the MTA and the applicant as to what will be feasible.
What official and unofficial channels of communication is the MTA looking to make the most of in subway challenge?
Twitter, countdown clocks, displays on subway entrances, app data, conductor announcements. We don't recommend getting stats from third-party apps.
I think it makes the most sense to work alongside NYC Subway Operations personnel to create a custom solution. Will the transit Tech Lab give this opportunity?
The challenge is aimed at growth-stage companies that have a proven product/technology/solution which might work in the transit environment with some refinement to fully meet New York City Transit's needs. Competitors who reach the later stages of the challenge will have a chance to interface with personnel at Transit, including operations, to refine their solutions.
Have you addressed what the end-to-end solution is going to look like? How will deployment be mapped out across the transit system when a pilot comes to a successful end? Lastly, how much emphasis or planning has been done around pushing intelligence derived from these ML technologies down to the actual edge, or train, to enable real-time predictive analysis that helps avoid failures and promotes a safer environment for MTA commuters?
The end-to-end solution will depend on the technologies proposed and their unique implementation plan. The structure of end-to-end deployment will be one of the areas for ongoing partnership as this challenge proceeds.
What innovations are occurring or exist in other cities that can be applied or adapted for the NYC subway or transportation in general?
It's a long list. We're hoping you can come up with new ideas.
Does the MTA have funding to implement tech post-pilot?
If I am not able to attend in person, will there still be an opportunity to gain access to the event details (webinar, recording, etc.) to compete for an opportunity to bid come January 2019?
Out of the pool of candidates, will just one solution provider be selected come January?
There is no set number of candidates that will be accepted into the accelerator at this time.
Are you open to joint applications, from a consortium of companies?
Yes. Each application must come from a single company, but companies may indicate in their application if they have an agreement to partner with another company.
Does Applicant have to be U.S. corporation? Can a Canadian company apply?
Applicants from other countries can apply, but must be registered to do business in New York.
What is the application process for the Transit Tech Lab like?
On F6S—it is fairly standard, more akin to an accelerator application than a government RFP.
How many companies will be selected for the Lab?
There is no pre-set desired number, the numbers selected will depend on the quality of the applications.
In best case scenario how would a pilot benefit the selected startup pilot partners?
Pilots are unpaid, and companies will retain all IP. MTA would get a license for the duration of the pilot and a period of time following the pilot. The Lab is an opportunity for companies to get their technology in front of key decision makers at the MTA and NYCT. If chosen for the accelerator, it’s a chance to refine tech for the largest transit system in North America, and there is a lot of value in pointing to experience with the MTA. The MTA wants to see improvements in performance and customer service.
What are the next steps if a pilot is successful?
Post-pilot, the standard route is that the MTA will issue a competitive RFP process, but this will be case-by-case and there may be some flexibility for the MTA to negotiate a contract directly if the product is unique and there is little competition on the market. Most cases will likely result in a competitive solicitation.
What do you mean by efficient—faster or greener?
What happens in the accelerator phase?
Will start program-heavy with meetings and kick off sessions. Will include meetings with leadership and development of a plan to map what needs to be achieved. Co-working space is available if it is needed. | https://transitinnovation.org/faq |
Album Review: SUNDR - Solar Ships
Artist: Sundr
Album: Solar Ships
Genre: Post-Metal/Sludge/Drone
Review by Mothlord
It is hard to know exactly where to start with this review of the impending sophomore full-length release by Sundr.
One perspective is to view the band from the out looking in as an organically enmeshed unit of humbled human beings collaboratively expressing a depth of personal feeling and thought through their music with potent effect.
The other is to experience Sundr from the in looking out to the sea of dispersing and cascading sound and emotion. For the music to be a vessel for transcendent astral experience, rather than one of connecting with a unit of people. I believe that with this record the band have pushed further towards that ethereal experience, now unmoored from the rocky shores drifting further beyond the canvas sea. Solar Ships in name proves itself to authentically shepherd the listener to cosmically emotive realms. Drenched with a depth of emotive richness and honesty. Sundr's song writing is characteristically grandiose, monolithic and ethereal. But simultaneously, poignantly human with a fertile foundation of sincerity and vulnerability that becomes sweetly wombing like a viscous honey. Very carefully, "Younger Dryas" builds trust with the listener as sombre guitars layer with pondering and restrained drum beats. Sundr have never been in a rush towards climactic crests and shifting structures. On this record though there is a noticeable stripping back away from the tropes of "metal" songwriting, though I hesitate to paint the wrong image when I say metal.
There is a focus on creating a sense of gently drifting and carrying the listener along that strangely struck me as similar to the shoegazing inclinations in some of Deftones' work, as a quick comparison. This is slow-burn storytelling that will not usher in each chapter and each act with abrasive and impatient haste.
It feels an age until singer Scott Curtis's anguished and commanding throat peels through the wall of reverb tails. In a live setting this is no absence of entertainment or crowd engagement, as he looms in the pooling of stage lights with hands on head, drifting and swaying in earnest, with feeling poured into even the moments he is not bellowing with power.
Just as soon as the drums have risen in defiance of the silence, with the clashing and churning of truly anguished guitars, we retreat back to the gentle solace of swimming guitar notes and the rousing groan of bass guitar. This is a nice subversion of expectations. These kinds of delays might perturb those disinclined towards delayed gratification. But rest assured, by the near end of the eleven-minute journey doors are thrust open to churning and cascading black-metal-esque tremolo guitars. The wash of sound is accompanied by desperate bellows and the chopping, changing and nigh punishing percussion in defiance of such linear rigidity. "I've Forgotten How To Be Alone", the recently released track and music video, takes us through a tenuous introduction that has a clear shift towards the powerfully discordant as harrowing delayed guitar churns in the distance. This is while angular and off-kilter patterns of kick drum, toms and even the rims of the drum shells clicking together quirkily are accompanied by the weaving of soulful blues-tinged bass guitar. I very much appreciate the subtle use of bass chords for a four-piece band. They assist in expanding the width of tone and melody available, which for such spacious music is very welcome. As expected, heralded by a terrifying vocal wail, all elements collide and strike confidently, flooring the listener with a wall of doomed guitar chords designed to crush and pave their way through any emotional walls the listener had up.
There is a continual and lingering sense of defined and harrowing honesty that bleeds between the pound and clash of instrumentation and is emotionally claustrophobic. I find myself thinking somewhat of the darker emotive palette that fellow Australian post-black metal act Départe paint with. This seems to resonate clearly with the meaning of the song as articulated by Scott Curtis, whose vocals on this track reach a peak of depraved and abject despondence as vocal cords twist and strain to express. “The song is about ‘the tragedy we don’t talk about’, anxiety, fear and depression silently creeping into our homes and lives, even in the seemingly mundane aspects of life, through illness both mental and physical, and our instinct in our modern society to push these things aside, internalise them, constantly distract ourselves until they have a stronghold on our life, changing the physical appearance of ones home, self and relationships. "Lyrically I think this song is the most straight forward and a good representation of the overall concept.” The striding rhythms and despondent guitars lumber with clear direction as if to invade our ongoing distraction. Though slight changes occur, the pace is maintained as shifting layers and angles create space for contemplation until, like an inversion of the structure of the track beforehand, the song in its last moments is deconstructed and stripped back to bass and drum throbbing in unison like that of a heartbeat. "Inherit" begins with an emotive respite, as shimmering guitar chords tremor in the calm. This is emphasized by guitarist Troy Power's characteristic use of dry and evocative guitar tones, which have always reminded me somewhat of the approach of seminal drone/doom act Earth.
Once again Sundr stride in a commanding manner as the song breaks forth, differing now in that musically there is a more potent sense of rage seething through the energy—percussively driven and decisive. The chords shift expectedly towards darker emotive tones, feeling openly bitter and resentful in their intent, droning onwards and luring the physical aspects of a listener into trancelike motioning. It is the decisive use of droning that hammers home the poignant potency of "Solar Ships".
Though present on prior releases, it is clear that there is an even more defined focus on a motif building around whichever aspect of instrumentation rises in creeping manners like a tide, shifting slightly as elements find their place of comfort. It is no criticism to state, but merely an honest observation, that this kind of experience may be less suited to those of an impatient disposition or without much attention to detail when it comes to the consumption of art. It is very reminiscent of the way in which Belgian post-metal band Amenra induces a hypnotic and captivating emotional state. It is cinematic and visual, leaving an opportunity for the listener to create their own narrative within the experience rather than forcing an image or idea onto them. However, I feel as if Sundr always have one foot firmly planted in the mire of human soils, retaining grit and something disturbing yet childlike within us that we can all relate to. Finally, with the closing title track of "Solar Ships", there is a sense of resolution in the busier pondering of guitars ushered by the sheen of a distant tambourine. Moans of crooned spoken word, reminiscent of the folk-driven songs by chaotic hardcore band Cult Leader, are a much welcome addition to the range of utensils the band have at their disposal in creating intrigue and personality. It is five minutes of a sullen journey guided by this masculine spectre before a sudden shift in pace and tone—five minutes that dissipate due to the hypnotic ability to captivate a listener and dissolve the perception of time passing around them. Finally, nearing the last moments of the album, volume and intensity build around the same motif from before.
We are given a more deliberate delivery of busier and driving drum rhythms, given focus and space as glossy guitar and rumbling bass slide in the background. While I understand the desire to create space and tension, I feel as if the carefully chosen yet very precise drumming of Dan Neumann goes highly underrated on this record, often bled into the background even when busier. Perhaps this was suitably chosen to have a moment of focus on percussion before what feels like a poetically abrupt end to the record. And I must say, any end to a record that leaves a listener feeling as if they needed just that bit more of the experience is a wise and impactful choice for ensuring return listens, even if it might be taken as cruel or unfair to those thoroughly invested. I am glad, though, as upon each listen I have had of this record, I have unearthed more of the buried, wavering, weaving layers and intonations within these entrancing and captivating songs. Though not the most methodically busy of records, there is a deliberate choice of restraint made in motion, dynamics, timbre and rhythm. I believe this indicates a true sense of, dare I say, genius in the art when a creator has the restraint to not always use all the tools at their disposal. At times through "Solar Ships" the sense of pathos and isolation was so heavy that in my mind's eye I saw images of being flung from a boat on stormy seas and left to wrestle with the stirring, frigid waters helplessly and alone, witnessing the ship drift into the foggy distance as I struggled to stay afloat and keep my head above water. There is something comforting, though, in being ushered through that kind of striking emotive experience, and I eagerly await the opportunity to see Sundr perform once again with their well-demonstrated ability to captivate and command audiences. Once again I have decided not to give a numeric rating of a record and reduce the quality of art to an arbitrary ranking. I recommend you all buy a bottle of wine or make a hot drink and sit in the dark, ideally on a rainy day, and experience this record for yourself. Solar Ships releases via CRUCIBLE, September 18th 2020. | https://www.insertreviewhere.com/single-post/album-review-sundr-solar-ships
When lenders or borrowers apply to a credit bureau for a credit report, the request may become part of the applicant's credit history. Generally, there are two sorts of credit inquiries: hard inquiries and soft inquiries. Soft inquiries occur when borrowers check their credit on their own or when a credit card issuer runs a soft inquiry for pre-approval. Soft inquiries do not affect the borrower's credit score; moreover, they do not even appear in the history seen by lenders. Hard inquiries, by contrast, are used in the decision-making process, and this kind of inquiry may have an impact on the borrower's credit score.
Hard inquiries
When you are applying for a mortgage or a car loan, in all probability the lender will ask you for permission to check your credit report through one or all of the big credit bureaus (such as Equifax, Experian and TransUnion). These inquiries are regarded as hard inquiries because they are closely linked to the credit application that the applicant must fill out.
The number of hard inquiries plays a significant role in credit scoring and in the lending process overall. According to financial analysts, too many hard inquiries within a short time period may alert lenders that the applicant has difficulty paying bills or a problem with overspending. A large number of hard inquiries may also simply reflect the fact that the applicant is shopping for the best available rate on the market, but in any case, keep in mind that multiple hard inquiries on your credit report can hurt your credit score. Apart from inquiries, several other components make up the credit score: payment history, credit mix, length of credit history, credit utilization ratio, etc. Nevertheless, some banking professionals tend to believe that inquiries are not the leading indicators in calculating the credit score.
As a matter of fact, hard inquiries may “reside” in your credit report for about two years, but the effect they impose can remain for a long time. If you satisfy the requirements of all the other credit score components, you can be confident that, with 99 percent probability, multiple inquiries alone will not cause your application to be rejected. Now you may ask, “Why do inquiries play a smaller role in scoring if they are also a component of the scoring system?” In informal literature this is framed as a distinction between primary and secondary components: if your credit meets all the other requirements, then the presence of multiple inquiries may be regarded as secondary. On the other hand, this does not mean that you can be less careful with hard inquiries; on the contrary, you should keep your eyes on them. If you notice an inaccurate hard inquiry on your credit report that was made without your permission, you should contact the credit bureau and request that the inquiry be removed. An unauthorized hard inquiry may be a sign of a scam or identity theft.
Soft inquiries
Generally, soft inquiries occur when the applicant checks his or her own credit report or grants another party permission to examine it. Soft inquiries are typically used by credit card companies or insurance companies for pre-approved offers. Soft inquiries carry less weight than hard inquiries, which is why they are visible only to the applicant. However, there are a couple of exceptions: (1) insurance companies are entitled to see soft inquiries made by other insurance companies, and (2) inquiries made by debt settlement companies may be shared with your active creditors. These inquiries are not included in the scoring system and consequently do not affect your credit score. They are available only for reference, and no potential lenders can see them.
Online payday loans provided by trustworthy direct lenders are not reflected as hard inquiries. In other words, if you take an online same-day loan, it probably will not affect your credit history. However, if you have a poor credit history and an insufficient credit score, you may not be eligible with traditional lenders, as you do not satisfy the requirements of the scoring system. Do not be discouraged, as you may have a chance to apply for payday loans with quick approval through our connecting platform: Shinyloans.com. You may check your eligibility by filling out the online payday loan application form. For instance, if you live in Colorado or Kentucky, you can easily search for phrases such as “payday loans online Colorado” or “payday loans online Kentucky” and find reliable online lenders. | https://shinyloans.com/blog/all-you-need-to-know-about-credit-inquiries
Risk management procedures are intended to protect a company’s long-term viability amid dynamic markets and regulatory changes. In today’s economy, companies face a rapidly growing challenge—and opportunity—to expand their businesses and create value. The increasing physical, regulatory, reputational, and financial impacts of sustainability issues, including environmental, social, and governance (ESG) concerns, are compelling companies to take a broader view when identifying and managing risks. CPAs are grounded in a pragmatic and multidimensional risk management approach and therefore well positioned to assess ESG issues and help organizations make more informed operational and strategic decisions.
The Changing Landscape
According to Ocean Tomo’s “2015 Intangible Asset Market Value Report” (http://bit.ly/23I35mo), only 16% of the market value of the S&P 500 can be traced to physical and financial assets. The remaining 84% of corporate value is tied to intangibles such as intellectual capital, human capital, brand and reputation, and relationships with suppliers, customers, and other external stakeholders. A broader corporate perspective on value protection and value creation is integral to business success in the 21st century.
Corporate risk management is evolving to respond to the needs and requests of various stakeholders, such as investors, employees, customers, suppliers and regulators, as well as the local communities in which the company operates. Stakeholders seek to understand the broad spectrum of complex risks that companies face in order to confirm that such risks are effectively managed across the enterprise. Enterprise risk management (ERM) provides a consistent framework for identifying, assessing, mitigating, and monitoring risk across the business by taking risk management out of siloed functions, aligning processes and procedures across the organization, and incorporating internal controls. This approach equips companies to address risks and opportunities more proactively and may protect and create value for stakeholders.
Risk Registers
While some organizations have advanced their risk procedures with ERM, their risk registers have not necessarily matured at the same pace. A risk register formalizes the identification, assessment, and management of risks and opportunities in a way that facilitates wider consideration by management and the board. Risk registers also allow management to compare disparate risks on like terms (e.g., monetary impact). Nonfinancial environmental and social risks are often unintentionally omitted from risk registers or masked by more traditional risk categories and thus are often not included in key risk management discussions. While organizations may communicate environmental and social activities externally to the public, the internal lack of an integrated risk approach may indicate that sustainability and corporate responsibility activities are “bolted onto” rather than “baked into” company strategy and operations, preventing functional managers from securing the necessary resources to effectively manage these associated risks and realize opportunities.
Organizations with more mature risk management practices outperform their peers financially.
Shareholders are now taking notice of this. According to a 2015 global Ernst & Young report (“Tomorrow’s Investment Rules 2.0,” http://bit.ly/1qecoNz), most investors factor ESG information into their decision-making. A notable 71% of the 211 institutional investors participating in the survey considered ESG data essential or important when making investment decisions, up from 61% in 2014. Furthermore, 62% considered nonfinancial information relevant to all sectors. Finally, more than one-third of respondents reported cutting their holdings of a company in the last year due to ESG risks, and an additional quarter of respondents planned to monitor ESG risks closely in the future.
Integrated Risk Management Frameworks
The confluence of risks and opportunities associated with environmental, social, and economic performance has made sustainability a strategic business priority. A 2013 Ernst & Young report (“Turning Risks Into Results,” http://bit.ly/1se6uxY) found that organizations with more mature risk management practices outperform their peers financially. The leading companies from a risk maturity perspective implemented on average twice as many of the key risk capabilities as those in the lowest-performing group. In addition, companies in the top 20% of risk maturity generated three times the level of earnings before interest, taxes, depreciation, and amortization (EBITDA) as those in the bottom 20%.
Integrating sustainability into the components of the ERM framework established by the Committee of Sponsoring Organizations of the Treadway Commission (COSO) was detailed in a 2013 Ernst & Young report with Miami University (“Demystifying Sustainability Risk,” http://bit.ly/1T5qhJI). COSO has historically provided a good ERM starting point for organizations by enabling them to understand key risks across the business and helping them to identify, address, and monitor those risks. The COSO framework also provides valuable guidance to organizations in managing nonfinancial risks.
COSO identifies the following competitive advantages for including sustainability in an ERM framework:
- Alignment of sustainability risk appetite to the organization’s corporate strategy and the new world view of company value
- Expanded visibility and insights relative to the complexity of today’s business environment
- Stronger linkage of company values and nonfinancial impacts to the organization’s risk management program
- Better ability to manage strategic and operational performance
- Improved deployment of capital
RISKS RELATED TO SUSTAINABLE SUPPLY CHAIN MANAGEMENT
In a recent World Economic Forum survey (“New Models for Addressing Supply Chain and Transport Risk,” 2012, http://bit.ly/1VTvqXE), more than 90% of respondents indicated that supply chain and transport risk management has become a greater priority in their organizations over the last five years. In addition, there has been an increase in supply chain regulations around product stewardship, human trafficking, and conflict minerals.
In recent years, there has been an increasing international focus on conflict minerals emanating from mining operations in the Democratic Republic of the Congo (DRC) and adjoining countries. Armed groups engaged in mining operations in this region are believed to subject workers and indigenous people to serious human rights abuses and are using proceeds from the sale of conflict minerals to finance regional conflicts.
On July 21, 2010, in response to these concerns, the U.S. Congress enacted legislation requiring certain public companies to disclose the use of specified conflict minerals originating from the DRC and nine adjoining countries. Known as section 1502 of the Dodd-Frank Act, the intent was to make transparent the financial interests that support armed groups in the DRC area. By requiring companies using conflict minerals in their products to disclose the source of such minerals, the law aimed to dissuade companies from continuing to engage in trade that supports regional conflicts. Section 1502 is applicable to all SEC issuers (including foreign issuers) that manufacture or contract to manufacture products where “conflict minerals are necessary to the functionality or production” of the product. The industries most likely to be affected include electronics and communications, aerospace, automotive, jewelry, and industrial products.
Where to Start
CPAs and accounting and advisory firms are well positioned to offer guidance and independent assurance on sustainability issues for their clients. By aligning traditional organizational priorities with ethical and responsible corporate practices, they can help clients achieve tangible financial returns while mitigating critical ESG risks. Questions CPAs can present to the audit committee include:
- When did the company last revise its risk register?
- Is the chief sustainability officer involved in making the risk register?
- How is the risk register revised?
- Does the risk register take into account the material risks of key stake-holders such as primary investors, core suppliers, and customers?
- How is the risk register made complete?
- What types of nonfinancial risks are considered?
- How does the ERM process drive operational excellence?
- Does the ERM approach consider all material nonfinancial aspects of the business?
Sustainability issues have significant, lasting impacts on inventory management, supply chain procurement risk, resource availability, price volatility, and human well-being. Re-engineering processes and restructuring organizations to provide expanded visibility and insights in the complexity of today’s business environment can be messy. By broadening the risk management perspective, improving risk registers, and integrating sustainability with traditional risk areas, organizations can improve functional leadership and realize opportunities to manage strategic and operational risks and performance more effectively. | https://www.cpajournal.com/2016/06/12/integrating-sustainability-enterprise-risk-management/ |
In this section, I introduce a distinction between the four following types of presentism: empirical presentism, descriptive presentism, causal-narrative presentism and normative presentism.
What is an example of presentism?
For example, consider Mr. John Teacher who caned pupils in his 1889 class. A presentist would say that Mr. Teacher engaged in unacceptable violence against children, while one with an opposing view would claim that since it was considered OK to hit children at the time, Mr. Teacher should not be judged by today's standards.
What is the definition for presentism?
Presentism is the view that only present things exist. So understood, presentism is primarily an ontological doctrine; it’s a view about what exists, absolutely and unrestrictedly.
What is presentism quizlet?
Presentism: Interpreting and evaluating historical events in terms of contemporary knowledge and standards. It emphasizes the biased nature of human experience and the difficulties in separating historical facts from current biases. | https://365go.me/a-form-of-presentism-7673/ |
I don’t think I’m alone on this one but I have to admit that I often don’t immediately ‘get’ the correlating parallels of those I’m meeting until far too late. I work (and rock) around the clock but feel that no matter how fast I go, who I meet and where I am, I’m always going to miss something or someone. I’m on a constant watch. So be it. That’s London. Deal with it.
Those I do meet that are undeniably unique in their process of design or performance tend to find others of the same nature in this weird interconnected universe – banding together in time and effort and it's of these moments that I ride. High, appreciative and expectant; like a junkie, the energy and connectivity of creatives together is my instant fix. I've been lucky to have met so many of these exact people in recent weeks. I'm trying to stand still, listen and learn from them.
What’s most interesting to me is the rate at which I’m growing up. I don’t mean as such in the adult sense (I’m still flat out boiling a jug), but I do mean in the sense of understanding the drive and consequent similarities between others in the creative spectrum. The more I know, the more I know how little I really know.
Interviewing Sophia Grace of fashion label ‘Sophia Grace’ a few weeks ago made me think.
Sophia designs completely as she sees fit; and for women such as ourselves – inspiring, busy, interested in (yet unaffected by), trends. Gone are the days of anyone’s approval – and this is key. Too much time is spent and energy wasted adhering to appearances. The most interesting people I’ve ever met have a genial humility. Far from blasting their achievements, they shy away from the attention and get on with the job; sparing little time to anything off the path of what they are trying to achieve. Sophia is one of these people.
I had a wonderful conversation with my flatmate just days prior to meeting Sophia. She said, “find your angle, whatever that may be, and do that damn well.” I heard the same from designer Nicola Woods, whom I interviewed recently while she busily worked on her next collection, excited regardless of the impending press reviews.
Sophia is finding her angle. I met up with her over a coffee after a spray of email correspondence. An hour passed by quickly with the momentum of bustling ideas and enthusiasm; once time was up, she marched back to her office where she currently works as a developer for some very high-profile clients under a highflying corporation. How is she to make it work as a full-time designer? Who does she have to emulate? No one, is the answer. If the buzz of the idea and the excitement of putting it to play is there, then what comes next is irrelevant in measures of achievement. She, as we all strive to do, will do her best. Never before has she expended so much energy on the one direction. It's hard to fault anyone willing to put comforts (holidays and the trimmings) in the backseat for an unbridled ambition and relentless tunnel-vision direction.
If her current collection is not her best work then we must look forward to what’s next. Presence and recognition for her work perhaps? One can only hope.
Take her Google stance for instance. Type in her name, Sophia Grace and you’ll get the dancing Ellen twins. Scroll down a little further. Ah…there you go. Watch this space.
Sophia comes from a modest background of dressmakers and is a self taught fashion designer with a luxury brand of sophisticated tailored dresses and elegant evening gowns. It’s a beautifully defined visual catalogue of what she perceives to be the modern women. Each piece comprises a variety of quality tweeds, crepes, silks and crafted embroidery.
Sophia lights up when I ask after her childhood passion for drawing and of the times she spent with her grandmother’s huge book of Vogue patterns. They would visit fabric stores together and Sophia, ever with a keen eye for fabric, started off by selecting clever pieces in the 99p bin. She would then make up patterns from newspaper to create tops and skirts. Developing from a simple idea, it’s great to see how she’s evolved her work ethic into an ethos for modern classic elegance.
Although Sophia didn’t pursue fashion at university, years never dwindled her desire to tackle the industry and so for the Spring/Summer 2012 season, Sophia launched her very first collection under her first and middle name to create ‘Sophia Grace’.
The moments I was referring to earlier are the reasons behind why I’m involved in this industry. It’s something special when you come across people out there still working with both feet on the ground and two hands on the cutter; chipping away slowly yet surely at their purpose. | http://www.stylistandthecity.com/unique-streak-with-designer-sophia-grace/ |
Healthcare stakeholders are seeking ways to reward quality and value. As providers continue to manage COVID-19, value-based payment programs may provide support.
Source:
McKinsey & Company
Author:
Julius Bruch, Adi Kumar, and Christa Moss
COVID-19 has created an exogenous shock to the healthcare system, including a major disturbance of traditional utilization patterns and provider finances. Payers, providers, and state and federal governments had been embarking on a journey to transform the US healthcare payment system from one that rewards volume to one that rewards quality and value. Under value-based payment (VBP) programs, payers reward providers for reducing healthcare costs while maintaining or improving quality. Through a COVID-19 lens, VBP programs can support providers in times of uncertainty, such as by providing capitated incomes.
Less than 3 percent of payments are fully capitated, according to the latest Catalyst for Payment Reform scorecard. Moreover, while COVID-19 has further unearthed some clear shortcomings of fee-for-service payment models, it has also created meaningful cracks in the typical VBP accounting systems and payer-provider relationships. For example, providers in VBP arrangements, while fighting the pandemic, may be exposed to attribution losses, missed outcome metrics, missed savings targets, and struggles with reporting requirements. Therefore, most providers’ 2020 performance under VBP programs cannot be fairly assessed in the way the programs were initially designed.
As a result, the regulatory environment has already begun to adapt by easing the burden on providers. The Centers for Medicare & Medicaid Services (CMS) announced it will amend its quality reporting requirements through the end of the second quarter 2020 and intends to prorate any losses incurred by Medicare Accountable Care Organizations (ACOs) in 2020 for the duration of the public health emergency.
Still, providers are indicating concern and may seek to leave VBP programs unless additional actions are taken. In a recent survey, 56 percent of ACOs with negative risk said they were at least “somewhat likely” to leave the program due to COVID-19. Thus, payers and providers face a critical choice about how to proceed. Potential options to consider are:
Bolt on additional safeguards to maintain (or grow) participation and adapt current programs to fairly compensate (and where possible, further support) providers; or
Pause current programs this year and plan to rebuild with improvements.
Option 1
Bolt on additional safeguards to maintain (or grow) participation and adapt current programs to fairly compensate (and where possible, further support) providers
As payers look to adapt program design, they might consider three tactical areas to maintain program goals, better support providers, and ensure continued participation. These actions may inform how payers assess and revise their VBP program designs.
Strive for technical accuracy—make tactical changes to program design to ensure appropriate performance measurement, given the shock to healthcare system utilization and payment trends
Ease provider burden—mitigate additional operational and resource requirements to support providers while they are focusing on the most critical actions amid the pandemic
Adapt timing of payments to support providers—consider pulling forward rewards or delaying penalties.
Strive for technical accuracy.
Orchestrators of innovative arrangements, including policy leaders and payers, may want to consider adjusting VBP programs to focus on what outcome improvements providers can control, rather than changes driven by or reliant on COVID-19. In practice, payers may need to make a series of adjustments to the technical elements of VBP programs:
Attribution: In programs that define attribution by minimum activity levels, consider freezing attribution from a pre-COVID-19 era. Extend the window for determining activity, including telehealth visits in attribution methodologies, or alternative adjustments. These changes are especially relevant if the program makes payments on a per-member basis.
Quality metrics: COVID-19 has reduced utilization. While overutilization metrics (such as emergency department utilization) will likely perform well, others (for example, screening, vaccination, follow-up) will likely perform poorly. Consider excluding the time horizon affected by COVID-19 in the calculation of metrics, or resetting the threshold to adjust for its impact.
Risk adjustment: The decrease in utilization—and coupled lack of access to recent diagnosis information—may make populations seem artificially less risky. Consider the time period affected by COVID-19 as an outlier and ensure appropriate lookback to pre-COVID-19 era care to determine risk.
Cost benchmarks: This time of reduced utilization may be followed by a “surge” as routine care recommences. For 2020 performance, consider extending performance periods to balance impact of utilization patterns or comparing cost growth across regional (or national) peers in resetting benchmarks.
Ease provider burden.
Providers have often expressed frustration with technical or reporting requirements. In a COVID-19 era, healthcare stakeholders could consider easing or stopping these requirements. One example is payers adjusting (or pausing) reporting requirements for the current performance period. Providers are often tasked with submitting different reporting criteria for each value-based payment arrangement, which can be an operational burden. In addition to CMS amending reporting requirements for 2020, other payers, such as Centene, Blue Cross and Blue Shield of New Mexico, and Blue Cross and Blue Shield of North Carolina, have given providers access to special grants and credits.
Adapt timing of payments to support providers.
Value-based payment models may span multiple payments, including upfront, per member per month payments—either as a capitated amount to cover potential services rendered or a care coordination payment to support additional activities, as well as shared savings or risk payments delivered retrospectively based on performance. Timing of these payments may vary based on program design.
Opportunities to adjust timing of payments and better support providers include:
Pausing or waiving any negative reconciliation payments that may be occurring now based on 2019 performance—in some cases, these are obtained through a withhold on current payments. Pausing or waiving will ensure that providers receive full payment for services rendered in the pandemic.
Accelerating shared savings payments—in many cases, payments do not occur until months after the performance period has ended based on the desire for nearly complete claims information prior to determining incentives. Payers can assess performance earlier, pay shared savings, and ensure payments are reconciled based on additional claims run-out.
Accelerate per-member payments—rather than continuing on a monthly cadence, payers can pull forward payments to help providers solve the immediate cash flow issues given reduced utilization during the pandemic. For example, Blue Cross Blue Shield of Massachusetts is accelerating some payments for its Alternative Quality Contract (AQC) that would have been made in late 2020 or early 2021 to assist providers with the financial pressures associated with the COVID-19 public health emergency.
Keeping programs running and bolting on additional safeguards does come with its own risks. It is not trivial to ensure accuracy for the analytics. It may be even harder to convince providers that the program has been adjusted sufficiently to shield them from adverse consequences. As a result, providers may still leave programs.
Option 2
Pause current programs this year and plan to rebuild with improvements
If making the technical adaptations and tactical changes above seems technically difficult for payers or is contractually not possible, pausing or ending VBP programs is an option. Some contracts contain force majeure clauses allowing payers to excuse performance during the current measurement period in the face of COVID-19. This pause could be followed up by the development of revised programs addressing provider concerns and seen as an opportunity to reimagine the future of value-based care. Key risks of this approach include undermining care quality for members during this time of crisis, potentially harming relationships with providers expecting incentive payments or care management fees given economic hardships, and ultimately sending out the signal that value-based payment works during good times only.
The potential approaches to address VBP programs, given the circumstances of COVID-19, illustrate two options to consider. While adjusting the program and bolting on safeguards is not without its challenges, it is an option that keeps providers engaged and maintains the collaborative payer-provider relationship often found in successful VBP arrangements.
Pausing programs, comparatively, is an option that is easier to communicate and may enable a “reboot” of VBP programs down the line. Regardless of the route taken, the following overarching steps could be considered:
Adopt and communicate changes quickly to provide clarity, allay provider concerns, and enable continued investments in VBP: Clear communication will be key to alleviate provider concerns and prevent a wave of providers leaving VBP programs.
Recognize VBP program members as partners: VBP program members have demonstrated their willingness to take on risk by partnering with payers to improve the care they provide. Whether it be access to credits, grants, accelerated payments, or access to infrastructure, such as telemedicine platforms—consider rewarding this group as members of a select community.
Encourage positive trends facilitated by the current situation: Amid the tremendous toll of COVID-19 on patients and providers, some small bright spots have emerged, including greater adoption of telehealth and greater use of home health. Decrease in emergency room use in 2020 should be further examined. Trends that increase quality and decrease costs should be maintained in VBP programs in the long term.
Finally, leaders managing VBP programs will encounter additional operational implications. Team capabilities and priorities may need to be reconsidered in order to address any technical adjustments or the pausing of the program. Analysis and data requirements may change in response. Leaders should also consider what needs to be in place, in addition to value-based payment, in order to provide high quality and efficient care to patients, for example, through network management or care management.
While COVID-19 sent an unprecedented shock to the healthcare system, leaders can now seize the opportunity to reimagine payment systems that—if managed correctly—create efficiency and provide high-quality care to patients.
About the author(s)
Julius Bruch is a consultant in McKinsey’s New York office. Adi Kumar is a partner in the Washington, DC office. Christa Moss is a consultant in the Cleveland office.
This article was edited by Elizabeth Newman, an executive editor in the Chicago office. | https://www.effyhealthcare.com/media-center/-implications-for-value-based-payment-programs%3A-weathering-covid-19 |
The following conversation between Alison Brooks (Creative Director, Alison Brooks Architects) and Shovan Shah (MAUD ‘20) took place on October 10, 2018 regarding the issue of globalism and contextualism. Alison Brooks has a range of projects across the world, from Ulaanbaatar to Vancouver, though predominantly in the UK.
How do you position your practice and work in the context of globalism and contextualism in today's terms?
Contextualism is a fundamental premise not only of my practice today, but of many practitioners and architects today, and I think it distinguishes what we do from previous generations—like the 20th-century paradigm of modernist ideology, which has a universality that can be implemented anywhere irrespective of context.
That kind of “contextlessness” of many approaches to architecture and practice is something that has historically created quite a lot of damage in urban design and urban renewal—the sort of universalizing tendencies of big ideas in architecture and urban planning. In a way, we take an alternative approach, which is to try to understand and respond to context in a way that is more sensitive; it's more about a relational approach to a place rather than an “impositional” approach to a place. I think it is fertile ground for invention and for the discovery of new answers.
In a way, not falling back into a lazy approach, which is, one answer fits every place in every culture, because every place in every culture is different and every client is different and in every place of the kind of cultural memory, collective memory and technological potential that needs to be explored.
Do you think your practice or your design ideologies are influenced by your education?
I think it probably did emerge from my experience from the North American city.
The conditions of suburbanization, that kind of loss in the 60s and 70s, mainly of the amazing historic fabric in cities like Toronto or every city in North America even without a war like in Europe, had to create sort of destruction of “the sort” of historic city.
In North America there's been examples of that destruction and loss which is felt by everybody, not just architects and planners, but by society. The sense of impoverishment of the built environment through the eraser of historic forms or urban grain, ideas of scale, and then then experiencing the European city, when I studied at University of Waterloo, where every class studies and works in Rome for four month and that was crucial, the opposite of the hundred year old Canadian city is the 2000 year old Roman city and absorb the lessons of Rome and the depth and richness of cultural content, and artifacts of each era that are layered, one upon the other, it's an experience of city form and of course, architectural expression, period styles which create this incredible jumble.
So I think that kind of total experience heightens your awareness of what we are in contemporary cities, and also made me super-critical of the condition of urban renewal and of zoning and the segregation of demographic segments of society based on the form of division and boundary that is imposed by class or race or income, that those kind of boundaries between things or something that I felt should be challenged.
Could you elaborate on your understanding of urban renewal?
I don't know—urban renewal was this terminology that made everybody think it was really important to sort of wipe the slate clean and renew, just start over again, the kind of tabula rasa approach to urban and civic growth, and I think it’s a misguided paradigm as a result of many factors: it's political, economic, and recently it has started to become environmental.
But historically, a lot of it was to do with political motives and money and transactions going on between various stakeholders, and it did not necessarily respect communities, history, memory or sense of place. It is an innate human right to feel an attachment to a place—what are those elements that allow you to form attachments to a place?—and when we lose those things, it's a deep loss, one that's much more than a visual one.
How do you challenge the existing zoning policies in terms of how you approach projects which high density multiple-use developments?
It's tricky because that's all there is—zoning policies. That's the missing factor in the development and growth of places like Vancouver, which has exceeded its city center boundary, like the island of the core of Vancouver, and is now developing satellite centers, like Richmond and Surrey (London).
But there's a zoning policy and a design code in terms of street types, but there don't appear to be urban design strategies or master plans that mediate between the large-scale zoning picture and the specificity of an individual plot and what you can achieve on the plot.
So there's a lack of resolution, “it's very low res”, the definition or the vision or the parameters of developing these new dense urban neighborhoods.
The problem with that is you end up with the kind of monoculture, with urban blocks that have between two and four towers on a block and then a bit of single family townhouses in between, and that is the default scenario for new development to extract value from the land, obviously pay for the land costs, which is inflated, because of a very high FAR because of the zoning policy, and so it generates one type of development and it gets repeated around these new centers.
The problem is that there's no master plan that actually sort of delves into the grain of things and how to stitch blocks to each other, and how to introduce diversity and kind of tolerance of unusual things happening, for example to have a block with some leisure or café or food-use but also some really fundamental services, like dry cleaners or drug stores or news agents or pub or a post office or just things that people need in their everyday life.
If those uses are not in the zoning strategy for that neighborhood, then they're simply not permitted.
We're working hard to provide generous adoptable spaces in the ground and first and sometimes second floors of our project there, to allow them to be adopted in the future to other uses.
We have managed to get a nursery into a part of the scheme and some co-work spaces into the bottoms of all of our buildings as part of the foyer.
So we are trying to blur the distinction between a foyer and a workspace, which is an alternative for people working from home: they can work in the lobby of the building with no need to sign up to a workspace-provider model. It is actually just part of your building, and it's a service that, you could say, allows residential buildings to be understood as actual workspaces, which is what they are. Every flat and every apartment is a workspace and a potential start-up. You need places to convene when you are working from home, and that inverts the thinking about type and typology.
I like to think of our residential buildings as office towers with people living in them.
If you think of it as a different building type with people living in it, it makes you think completely differently about the quality and proportion and generosity of what you're designing, because it breaks the mold of the expectations about that use and the kind of space associated with it.
And then we're also, on our site, creating a kind of urban corridor with a lot of different building typologies and the entire ground plane is public realm. It's not gated in any way and it's completely permeable.
Since the whole site is quite a big block, we are trying to create a kind of civic space with a variety of building types and scales that becomes part of a pedestrian experience and creates opportunities for things to happen in the public realm.
Even my clients I work with, they know that I am always trying to inject other potential uses and types of spaces into the products I'm designing in a way it adds a complexity to the product because then the immediate reaction (of the developer) is “who's going to look after that?”.
And, you sort of say, well if there is a concierge who's sitting there all day anyway, all you need is one pair of eyes and a bit of a CCTV and a kind of honor system. And I don't know make it a place that people love. And it just happens now anyway, if you make a big space that has places for people to sit and sockets, they will start working.
So why not actually create a kind of space that celebrates that activity, makes it visible to the streets. And in the future, maybe it could be commercialized. Maybe it could become a shop or something, but it's kind of designing in that resilience and that generosity and that robustness so that alternative futures can happen to the one that we kind of impose on our design briefs. | https://www.ud-id.com/conversationbrooks |
Inside Sources published an opinion editorial from Craig Stevens, former senior advisor to U.S. Energy Secretary Sam Bodman and GAIN spokesman, in which he urges U.S. leadership to consider strengthening U.S. energy production and infrastructure investment. The piece notes that a large container ship was recently stranded in Egypt's Suez Canal for six days, blocking critical global trade, including LNG shipments, for a number of nations. Incidents like this can cause oil prices to spike and critical shipments to be severely delayed.
Stevens explains how increasing energy dependence on foreign sources can be detrimental to the U.S.:
“Unsurprisingly, countries that largely rely on importing their energy resources are vulnerable to delays or increased oil and gas costs when incidents like the Suez blockage occur. Ultimately, consumers are the ones who have to pay the price. With gas prices already increasing here in the U.S., imagine if we had even less domestic oil and gas being produced to help meet the fuel needs of both our country and our allies. Or less infrastructure to help safely transport these resources. No matter what kind of resources you think should power our homes and businesses or fuel our vehicles and planes, it is clear the U.S. should prioritize growing domestic energy production and focus on a path towards energy independence.”
The Biden administration’s federal ban on oil and gas leases would undermine the successful progress America has made so far in energy independence. A study from the American Petroleum Institute found that a federal leasing ban would increase U.S. oil imports from foreign sources by approximately 2 million barrels a day and cause our nation to spend more than $500 billion on energy from foreign suppliers.
Stevens goes on to highlight the valuable role of the U.S. energy sector:
“The U.S. oil and gas industry has been a reliable component of American economic success for decades. A study from API reveals in 2015, the industry supported over 10.3 million jobs and contributed more than $1.3 trillion to the U.S. economy. The same study found that between 2011 and 2015, industry jobs increased by 500,000 each year and benefitted all 50 states. Why should we rely upon unstable energy sourcing from the Middle East or South America when right here at home we can create jobs and steady the global energy market? Reducing domestic energy production risks higher gas prices and potential energy shortages at home – while strengthening foreign states by growing their economies.”
It would be a dire mistake for the Biden administration to minimize domestic oil and gas production and increase reliance on unstable, foreign sources like Venezuela or Iran. The Suez Canal blunder should serve as a prime example of why we should continue to support U.S. energy independence. | https://gainnow.org/2021/04/16/gain-spokesman-craig-stevens-highlights-need-for-u-s-energy-independence-amid-suez-canal-blockage/ |
Cork-based artist Angela Fulcher's response to the 1923 gaol break at Cork City Women's Gaol is inspired by forty-two inmates' bid for freedom in the aftermath of the civil war. The escape made use of the soft stuff of textiles: bedding fashioned into ladders to aid their descent. Fulcher's recent work regularly includes found fabrics such as curtains, blinds and carpets associated with the home, clothing and accessories, as well as tents, window display materials and vinyl that "span spaces". Her response to the 1923 gaol break spans not only space but also time. Anachronistic purple, maroon, light and cerise pink sheeting and duvet covers that date from the 1970s through to the contemporary are here deployed as a reminder of the event. Much like the prisoners' daring means of escape, this material too was found close to hand: harvested by the artist from charity shops and popular economy department stores such as Guineys and Penneys.
“The boldness of the colour in the space reflects a sense of the audacious and spirited nature of the escape,” explains Fulcher. The prominent presence of colour and pattern represents other changes as well: a return of the decorative to visual language that carries meaning beyond the superficial. The textile – and practices more generally that focus on materials – have experienced the decorative used in a pejorative sense for decades. But interest in the meaning of beauty is on the rise. The political, as diverse examples from hip-hop fashion to Chilean arpilleras can teach us, also resides in the decorative. Today this may be explained as an expanded interest in the everyday and recognition of value in aspects of culture previously ignored. But modest things have always been nearby; their variety of purposes includes the potential to be overlooked.
The low-waisted style of hip-hop culture is associated with "time inside", suggesting a familiarity with wearing clothing without a belt that eventually translated into global fashion. See, for example, Shaun Cole (2012) "Considerations on a Gentleman's Posterior", Fashion Theory, 16:2, 211-234.
South African anti-apartheid activist, Ruth First recalls stitching a calendar of unravelled threads during solitary confinement in Pretoria in 1963 to keep track of her days and preserve her sanity while held under South Africa’s 90-day detention law. See R. First, 117 Days: An Account of Confinement and Interrogation Under the South African 90-Day Detention Law (Virago Modern Classics, 2006).
During Pinochet's ruthless dictatorship of Chile, arpilleras stitched by women's groups documented the "disappeared". These pieced and quilted textiles, often with short passages of stitched text, were smuggled out of the country for exhibition before conventional media reported these stories. See M. Agosín and C. Franzen, Scraps of Life: Chilean Arpilleras: Chilean Women and the Pinochet Dictatorship (Red Sea Press, 1987).
In the Japanese novel by Kobo Abe Woman in the Dunes (1962) (made into a film of the same name directed by Hiroshi Teshigahara and released in 1964) a couple thrown together by cunning and chance exist in a pit in the sand dunes. Their daily task of clearing sand preserves their immediate existence, but deepens their prison. The local community command a rope ladder by which the protagonist first enters his prison, ensuring power remains only with those who choose to deploy the ladder from above.
Katie McGown’s unpublished PhD at the University of Northumbria at Newcastle, Dropped Threads: Articulating a History of Textile Instability through 20th Century Sculpture describes the textile’s covert role in the French film A Man Escaped (1956) directed by Robert Bresson: “This first object, a piece of string thrown up through the bars of Fontaine’s window into his still-cuffed hands by another sympathetic inmate, is tied to the corners of a handkerchief creating a makeshift basket. By raising and lowering it to the courtyard below, he can send letters to his family, and smuggle in a safety pin capable of springing his handcuffs. This initial liberation enables the prisoner to gradually breach successive boundaries, and simultaneously gain a better understanding of the prison’s architecture. He determines that he needs to create rope and hooks in order to drop down towering walls, and monkey climb between two high barriers. Unravelling the wire mesh of his bed frame, and ripping his blankets into long strips, he twists the materials together to make a strong and flexible length. His earlier letters to his family have brought a suitcase full of clothing, and these are cut up as well. In prison where even pencils are forbidden, the tools of escape have to be as innocuous as possible. If the guards had found his length of rope, there would have been trouble, but the raw materials of his escape could be stuffed into a mattress, becoming soft and amorphous again, flying under the radar. Through this small accretion of inconsequential fibres, an arsenal of tools were created.” (pp. 125)
Skype interview with the artist January 10, 2017.
Email correspondence with the artist December 29, 2016.
See Jorunn Veiteberg’s “The Problem of Beauty” in Craft in Transition translated by Douglas Ferguson (Bergen National Academy of the Arts, 2005). | https://www.jessicahemmings.com/gaolbreak-angela-fulcher/ |
Someday in the very distant future – about 5 billion years from now – our Sun will begin to lose its "power". Running out of fuel, it will reach the beginning of the end: the existence of the central star of our solar system will come to an end. We know what today's Sun looks like, but what will the dying Sun look like?
The Sun will begin to swell, increasing in size, its outer layers expanding into space beyond Earth's orbit and "frying" the planets in its path. Eventually it will throw off those outer layers, leaving behind the burned remains of planets (including Earth) and a dense core, a formation called a white dwarf, the remnant of the star that was once the Sun.
And when these things happen, the Sun will probably look like the star in the image below, called HD 184738 (also known as Campbell's Hydrogen Star), which lies at the center of a small planetary nebula in the constellation Cygnus.
HD 184738 is a low-mass star like our Sun; its "agony" can therefore provide us with a model of the final transformation of our own Sun about 5 billion years from now.
In the image (from the Hubble Telescope), the red and orange glow is caused by hydrogen and nitrogen surrounding the dying star.
HD 184738 is surrounded by dust that is elementally very similar to the material from which the Earth formed. Researchers don't know for sure where this dust comes from, but it may derive from a group of planets that formed in the planetary system of HD 184738 and were destroyed by the explosion of the central star.
Advice from the BP Measurement Experts
Over the years, we’ve developed numerous white papers covering a wide range of topics. Through our work with hundreds of end users, we’ve discovered there are still elements of automated blood pressure monitoring that are misunderstood. As a result, we felt compelled to put together a comprehensive guide to automated BP monitoring to help increase knowledge and awareness. Much of this information has been pulled from our existing white papers that can be found at http://www.suntechmed.com/bp-devices-and-cuffs/suntech-247#White_Papers.
First, we need to focus on why accurate blood pressure is important.
The New England Journal of Medicine published an interesting paper on blood pressure in its January 6, 2000 edition [1]. This study tracked the blood pressure measurements of 12,031 healthy men in six regions of the world for over 25 years; 1,291 of the subjects died from coronary heart disease (CHD). For an increase of 10 mmHg in systolic blood pressure, the relative risk of death from CHD rose by 28%. A 5 mmHg increase in diastolic blood pressure had a similar effect. These results closely match the outcomes of other ten-year studies from different regions of the world. They indicate the importance of small increases in blood pressure and the need for accurate blood pressure measurement.
This study underscores why an accurate blood pressure measurement is important. As we monitor our health, not having a true picture of blood pressure changes can result in a lack of treatment or proper management.
As we search for devices and techniques to improve patient care, some questions arise regarding the accuracy of automated devices. The most common being, “Why is my device giving me high readings?” Chances are, these automated readings are actually more representative of your true blood pressure because of the device’s built-in deflation rate and its clinically-validated and repeatable method of measurement.
As mentioned, automated monitors use an entirely different technique for blood pressure measurement.
Automated monitors measure BP differently than clinicians do. They employ a technique called "oscillometry," which measures the pressure waves associated with the same sounds that clinicians listen for when they measure BP, called Korotkoff sounds; the latter technique is called "auscultation". Oscillometry is used in the vast majority of automated BP devices because it has shown generally good agreement with auscultation. However, there are times when the two techniques produce significantly different results. Thus, the BHS recommends occasionally checking the monitor against a mercury sphygmomanometer or other known pressure standard.
Above is the first major difference between traditional manual BP measurement and automated BP measurement. Another difference is the actual monitoring device. To ensure accurate BP measurement, a clinical-grade, validated device is most desirable.
Certification of medical devices in the United States is the responsibility of the Food and Drug Administration (FDA). For blood pressure devices, the FDA relies upon voluntary compliance by manufacturers with a standard developed by the Association for the Advancement of Medical Instrumentation (AAMI). However, lack of this designation does not prevent the sale of non-certified blood pressure instruments to hospitals, clinics, or individuals. Currently, no regulatory agency requires the use of AAMI-validated instruments. In fact, some of the most common automated blood pressure measuring devices have never passed AAMI certification. Furthermore, the accuracy problems with inexpensive blood pressure devices, such as those used for home monitoring, are well documented.
While it is critical to have an accurate device, many times there are ways to improve the accuracy of BP measurement through tried and true techniques. One of those is using a patient monitor that allows you to take both a manual reading and an automated reading which keeps more things constant (cuff, placement, deflation rate, etc.).
When in doubt, wait. This is frequently overlooked and can help your practice increase the accuracy of BP readings.
The American Heart Association (AHA) and the British Hypertension Society (BHS) recommend that clinicians allow a patient to sit still without talking for at least five minutes prior to measurement [2, 3]. For most patients, it is likely that the first measurement will be higher than the second regardless of the resting interval.
Having the correct size cuff is another critical factor for accurate measurements. Make sure that the correct cuff size is selected, and when in doubt, use a larger cuff.
In a study on cuff application, undercuffing accounted for 84% of cuff sizing errors. The resulting overestimations have been shown to be significantly larger than the errors produced by using a cuff that is too large.
In the haste of improving patient throughput, a commonly overlooked factor is the cuff deflation rate. Many times, the deflation is performed much too quickly and can miss the Systolic measurement by 10-12 mmHg.
Deflating the cuff at 2-3 mmHg/sec, as recommended by the AHA and BHS, is the most difficult and important thing to do to ensure accuracy [2, 3].
Automated devices force a constant deflation rate, but if taking a manual reading, do not deflate the cuff more than 2-3 mmHg/sec. Odds are this will feel extremely slow…that’s because most readings are taken with a deflation rate much greater than the recommended rate.
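To get a feel for why the recommended rate seems so slow, here is a rough back-of-the-envelope sketch in Python (not taken from the guidelines; the 180 mmHg inflation pressure and 80 mmHg diastolic endpoint are assumed values for illustration). It simply divides the pressure drop by the deflation rate and compares the recommended rates with a hurried one.

    # Rough illustration only: time needed to bleed a cuff down to the diastolic
    # point at a given deflation rate. Inflation and diastolic pressures are
    # hypothetical example values, not guideline figures.
    def deflation_seconds(inflation_mmhg, diastolic_mmhg, rate_mmhg_per_s):
        """Return the seconds spent deflating from inflation to diastolic pressure."""
        return (inflation_mmhg - diastolic_mmhg) / rate_mmhg_per_s

    if __name__ == "__main__":
        for rate in (2.0, 3.0, 6.0):  # 6 mmHg/s stands in for a hurried manual reading
            t = deflation_seconds(inflation_mmhg=180, diastolic_mmhg=80, rate_mmhg_per_s=rate)
            print(f"{rate:.0f} mmHg/s -> about {t:.0f} s of deflation")

At the recommended 2-3 mmHg/sec the deflation alone takes roughly half a minute to nearly a minute, which is why a correctly paced manual reading feels much slower than most people expect.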
So, how does one account for all of these complicating factors that are a part of measuring the two numbers that represent a patient’s cardiovascular health? While patience and understanding on the part of the clinician are required to ensure that BP measurements are carefully taken, different monitors or instruments are required as well. These devices should 1) employ clinical-grade automated technology, 2) allow clinicians to take a manual measurement with a stethoscope as do sphygmomanometers, and 3) encourage careful observer technique when taking measurements with a stethoscope.
1 van den Hoogen PC, Feskens EJ, Nagelkerke NJ, Menotti A, Nissinen A, Kromhout D. The relation between blood pressure and mortality due to coronary heart disease among men in different parts of the world.Seven Countries Study Research Group. New Engl J Med. 2000 Jan 6; 342(1):1-8.
2 Pickering TG, Hall JE, Appel LJ, Falkner BE, Graves J, Hill MN, Jones DW, Kurtz T, Sheps SG, Roccella EJ. Recommendations for Blood Pressure Measurement in Humans and Experimental Animals: Part 1: Blood Pressure Measurements in Humans: A Statement for Professionals From the Subcommittee of Professional and Public Education of the American Heart Association Council on High Blood Pressure Research. Hypertension.2005; 45: 142-161.
3 British Hypertension Society. Blood Pressure Measurement With Mercury Blood Pressure Monitors. Poster: available from the British Hypertension Society at http://www.bhsoc.org/how_to_measure_blood_pressure.stm, accessed 6 Nov 2008.
4 Manning DM, Kuchirka C, Kaminski J. Miscuffing: inappropriate blood pressure cuff application. Circulation. 1983; 68(4): 763-766.
5 McAlister FA, Straus SE. Evidence based treatment of hypertension: Measurement of blood pressure: an evidence based review. British Medical Journal. 2001; 322: 908-911 (14 April).
6 Jones DW, Appel LJ, Sheps SG, Roccella EJ, Lenfant C. Measuring blood pressure accurately: new and persistent challenges. JAMA. 2003 Feb 26; 289(8): 1027-30.
7 James GD, Pickering TG, Yee LS, Harshfield GA, Riva S, Laragh JH. The reproducibility of average ambulatory, home, and clinic pressures. Hypertension. 1988 Jun; 11(6 Pt 1): 545-9.
Subscribe today to get the latest insights from the BP Measurement Experts. | https://suntechmed.com/blog/entry/bp-measurement/an-introduction-to-automated-blood-pressure-monitoring |
BACKGROUND OF THE INVENTION
0001 1. Field of the Invention
0002 This invention relates to an equipment and a method for manufacturing a single crystal such as Hg-Cd-Mn-Te based single crystal.
0003 2. Related Art Statement
0004 Recently, attention has been paid to the erbium-doped fiber amplifier. The wavelength of 0.98 μm is particularly expected as an exciting wavelength for erbium. A bulk Hg-Cd-Mn-Te based single crystal is highly promising as a material for an optical isolator in the 0.98 μm wavelength band. The range of compositions usable for an optical isolator is described in JP-A 7-233000, for example.
0005 It has been difficult, however, to grow a bulk Hg-Cd-Mn-Te based single crystal. Since Hg, a component having a high vapor pressure, is contained, the interior pressure of a crucible becomes extremely high if a single crystal is grown by an ordinary Bridgman process, and consequently the crucible may be broken.
0006 In JP-A 7-206598, for example, an equipment for forming an Hg-Cd-Mn-Te single crystal by using a high-pressure Bridgman furnace is described. The equipment has a heater above the crucible in the high-pressure Bridgman furnace to prevent precipitation of Hg, the component with the higher vapor pressure, in the crucible. In JP-A 8-40800, a method for setting a single crystal material in a container is examined so that, in the THM (traveling heater method), the production of twin crystals may be prevented and the diameter of the obtained single crystal may be made large.
0007 Although the mass production of single crystals has been attempted by making the diameter of each single crystal large, besides preventing the generation of twin crystals, according to the conventional single crystal-producing methods a difficult problem remains unsolved. That is, for manufacturing a single crystal at a relatively low cost it is required that the diameter of the single crystal is large; but the vapor pressure of Hg increases in geometrical progression as the diameter becomes large, so that a high-pressure Bridgman furnace is used, and the container for forming the single crystal is required to be pressurized at about 30 kg/cm2.
0008 Actually, however, compositional segregation is recognized in the single crystal as viewed in its diametrical direction, and crystals of a different phase are sometimes generated, because the state of the melt differs between the outer peripheral part and the central part. Since the characteristics of such a single crystal as an optical isolator therefore vary largely, it is difficult to obtain a single crystal satisfying the characteristics required of the optical isolator. Moreover, the optical loss characteristic of the isolator varies owing to deviations in the crystal orientation of the single crystal.
SUMMARY OF THE INVENTION
0009 It is an object of the invention to improve and stabilize the characteristics of a single crystal, and at the same time to enable the mass production of single crystals, by suppressing compositional segregation, generation of a different phase, deviation in the crystal orientation and so on in the single crystal, in a manufacturing equipment for producing the single crystal in a container by thermally treating the container filled with a starting material for the single crystal, such as an Hg-Cd-Mn-Te based single crystal.
0010 This invention relates to an equipment for producing a single crystal in each of plural containers by thermally treating a raw material for the single crystal charged in each of the containers, comprising heaters provided corresponding to the respective containers, an elevator to move each of the containers upward and downward relative to the respective one of the heaters, and a connecting member to connect at least one of the container and the heater of each of the plural sets of containers and heaters mechanically to the elevator, wherein each container is moved vertically relative to the respective one of the heaters by driving the elevator and is passed through an area of thermal treatment formed by the heater to successively form a melt in the raw material inside the container, and the single crystal is successively produced in the container by solidifying the melt.
0011 This invention also relates to a method for producing a single crystal in each of plural containers by thermally treating each container filled with a starting material for the single crystal, using thermally treating equipments corresponding to the respective containers, an elevating drive equipment to move each container upward and downward relative to the respective thermally treating equipment, and connecting members to connect at least one of the plural containers and the plural thermally treating equipments mechanically to the elevating drive equipment, wherein each container is moved upward and downward relative to the respective thermally treating equipment by driving the elevating drive equipment and is passed through an area of thermal treatment formed by the thermally treating equipment to successively generate a molten zone in the starting material in the container, and the single crystal is successively generated in the container by solidifying the molten zone.
0012 The present inventors have found that, in the Hg-Cd-Mn-Te single crystal for example, if the diameters of the single crystal and its container are increased, compositional segregation, generation of crystals of a different phase and deviation in the crystalline orientation are likely to occur, and that it is very hard to control them microscopically. Based on the above finding, the inventors made further investigations. During these, they tried to mass-produce single crystals by making the diameters of the single crystal and of the container to be filled with the starting material small, and by increasing the number of containers to be thermally treated.
0013 They found, however, that when plural sealed members were bundled and set in a single THM furnace and experiments were actually made, contrary to their expectation it is very hard to control the characteristics of the single crystals if many containers are employed, so that the above problems remain unsolved. That is, the condition of each single crystal produced varies depending upon the respective container. For example, even when a single crystal having good characteristics usable for an optical isolator is produced in one container, compositional segregation and crystals of a different phase often occur in the single crystals in many of the other containers, and the crystalline orientation deviates among the single crystals. This means that it is very difficult to finely control the melt in each container when many small-diameter containers are treated, just as in the case of single crystals with large diameters. Moreover, it is possible to bundle three to four sealing members at the maximum, but not possible to simultaneously grow single crystals in five or more containers.
0014 The inventors have made studies to solve these serious problems from the viewpoint of mass production. During the studies, they arrived at the continuous production of single crystals by providing heaters corresponding to the respective containers, connecting each of the heaters mechanically to an elevator, vertically moving each of the heaters relative to the respective one of the containers by driving the elevator, continuously producing a melt in the starting material for the single crystal by passing each container through an area for thermal treatment formed by the heater, and thereafter solidifying the melt.
0015 They have thus found that, in each of the containers, compositional segregation of the single crystal and generation of crystals of a different phase are remarkably suppressed and fluctuations in the crystalline orientation of the single crystal are not observed. Accordingly, even for a single crystal whose melt is difficult to control, such as an Hg-Cd-Mn-Te based single crystal, the present invention for the first time enables the mass production of single crystals beyond a certain level without causing compositional segregation, generation of a different phase or deviations in the crystalline orientation.
0016 In developing an equipment which enables the mass production of the single crystal, the inventors further paid attention to the inner diameter of the container for growing the single crystal, and have also found that by setting the inner diameter of the container to 7 mm or below, a single crystal having a desired composition within a particular range can be mass-produced for a given weight of the starting material. If the diameter exceeds 8 mm, the composition differs largely between the outer peripheral part and the central part of the single crystal. Therefore, the composition of an outer peripheral portion cut out of the single crystal largely differs from that of a central portion, thereby giving a low yield in the growing step of the single crystal. However, by setting the inner diameter of that area of the container in which the single crystal is grown to 7 mm or below, the yield can be remarkably enhanced.
0017 The reason is that the heat conduction from the heater is increased, which stabilizes the condition of the melt successively produced in the polycrystalline starting material and suppresses crystals of a different phase as well as compositional segregation.
0018 The inner diameter of the container is preferably 5 mm or below, more preferably 3 mm or below. The lower limit of the inner diameter is not particularly limited, but it is required to be larger than the dimension of the product to be manufactured.
BRIEF DESCRIPTION OF THE DRAWINGS
0019 For a better understanding of the invention, reference is made to the attached drawings, wherein:
0020 FIG. 1 is a plan view schematically showing a manufacturing equipment as an embodiment according to this invention;
0021 FIG. 2 is a front view schematically showing a manufacturing equipment as an embodiment according to this invention;
0022 FIG. 3(a) is a cross sectional view schematically showing that a crucible 16 of a container 30 usable for this invention is filled up with powdery starting material 17, and FIG. 3(b) is a cross sectional view schematically showing that the polycrystalline starting material 20 in the container of FIG. 3(a) is thermally treated;
0023 FIG. 4(a) is a cross sectional view schematically showing that a single crystal 23 and a polycrystal 22 are generated in a crucible 16, FIG. 4(b) is a cross sectional view schematically showing a column-like body 24 obtained by cutting off the tubular part of a crucible 16, and FIG. 4(c) is a cross sectional view taken on line IVc-IVc of FIG. 4(b);
0024 FIGS. 5(a) and (b) are plan views showing the cutting points of the samples, shown in Tables 3, 4, 7 and 8, cut out of a single crystal wafer in Comparative Examples A and B, respectively;
0025 FIG. 6(a) is a longitudinal cross sectional view showing that a single crystal 34 and a polycrystal 22 are generated in a crucible 36 in a seal up-member 35; and FIG. 6(b) is a longitudinal cross sectional view showing the seal up-member and the crucible taken in a direction normal to FIG. 6(a); and
0026 FIG. 7(a) is a cross sectional view showing the position, in the up-and-down direction, of each sample cut out of a single crystal obtained in Example C; and FIG. 7(b) is a cross sectional view showing the position, in the transverse direction, of each sample cut out of a single crystal obtained in Example C.
DETAILED DESCRIPTION OF THE INVENTION
0027 Specific embodiments of this invention will be also described in more detail hereinafter.
0028 A polycrystalline starting material is preferred as the starting material to be charged and accommodated in the container, but a starting material composed of a mixture of metal powders before producing a polycrystal may also be used.
0029 In a preferable embodiment, each heater has a tubular shape and the corresponding container is accommodated in the interior of the heater. Accordingly, fluctuation in the characteristics of the single crystal in its diametrical direction is prevented, because the thermally treating area defined inside the heater has a substantially uniform temperature distribution as viewed in the diametrical direction of the container. The wording "each heater has a tubular shape" includes a case in which a resistive heat-generating wire is formed into a tubular shape and a case in which a plate-like resistive heat-generating body is shaped into a cylindrical form.
0030 In another preferred embodiment, the heater has a melt-producing part to form a melt, a preheating part around an upper side of the melt-producing part, and an annealing part around a lower side of the melt-producing part. Thus by using such heaters, a successive process of pre-heating the starting material for the single crystal, producing the melt, producing the single crystal through solidifying the melt, and annealing the single crystal can be simultaneously effected under the same condition for all the containers. As a result, the fluctuations in crystallinity among the obtained single crystals are further suppressed.
0031 In a further preferred embodiment, the container includes a crucible to be filled up with a starting material and a sealing member to accommodate and seal the crucible. In this case, the crucible particularly preferably includes a single crystal-growing part vertically extending and an enlarged part at an upper side of the single crystal-growing part.
0032 Concrete embodiments of this invention are also described hereinafter, in reference to drawings.
0033 FIG. 1 is a plan view schematically showing a heater as an embodiment according to this invention, and FIG. 2 is a front view schematically showing the heater of FIG. 1. FIG. 3(a) is a cross sectional view schematically showing that a seed crystal and a starting material are provided in a crucible 16 of a container 30, and FIG. 3(b) is a cross sectional view schematically showing a state in which a melt is produced in the starting material inside the crucible. FIG. 4(a) is a cross sectional view schematically showing that a single crystal has been produced in the crucible.
0034 A manufacturing equipment 2 of this invention is accommodated in an inner space of a refractory material 1. Each container 7A, 7B, 7C, 7D, 7E, 7F is filled up with a starting material for a single crystal. Each heater 5A, 5B, 5C, 5D, 5E, 5F is provided for the corresponding container. In this embodiment, the heaters are arranged in a matrix of 2 x 3, transversely and vertically, respectively, but the number and arrangement of the heaters may be changed.
0035 An elevator 4 moves each heater upward and downward relative to the corresponding container. In this embodiment, the elevator is attached around a spindle 10 and can be moved up and down along the spindle 10 with a driving mechanism (not shown).
0036 A connecting member 3 is attached to the elevator 4. The connecting member 3 includes an attaching part 3g to the elevator 4 and attaching parts 3a, 3b, 3c, 3d, 3e and 3f to the respective heaters 5A, 5B, 5C, 5D, 5E and 5F. These attaching parts are connected to the attaching part 3g through a holding part 3h. Each container is fixed at a given position with fixing spindles 8 and 9. Each heater can be moved up and down by driving the elevator 4.
0037 Thermally treating areas 6A, 6B, 6C, 6D, 6E and 6F are defined in the respective heaters. A single crystal is continuously produced in each container by passing the container through the thermally treating area, producing a melt in the starting material for the single crystal in the container, and solidifying the melt.
0038 Each heater includes a preheating part 12, a melt-producing part 13, and an annealing part 14, for example as shown in FIG. 2. Thus the melt is continuously produced in the starting material inside each container as the corresponding heater moves up.
0039 It is desirable that each container, such as the container 30 of FIG. 3(a), includes a crucible 16 filled up with a starting material 17 and a sealing member 15 to accommodate and seal the crucible 16. In this embodiment, the crucible 16 includes an enlarged part 16a, a tubular part 16b, and a connecting part 16c to connect them. The sealing member 15 also includes an enlarged part 15a, a tubular part 15b, and a connecting part 15c. The numeral reference 16d denotes an opening of the crucible 16.
0040 As a material of the above crucible, boron nitride, carbon, or amorphous glassy carbon is preferably used. Moreover, a composite product consisting of any one of the above materials and a CVD-processed thin film of pyrolytic carbon (p-C) or pyrolytic boron nitride (p-BN), or a bulk crucible CVD-processed with p-C and p-BN, may more preferably be used because of its smaller reactivity.
0041 It is desirable that the enlarged part 16a and the tubular part 16b of the crucible 16 are filled with a starting material 17 composed of mixed metal powders, and that the single crystal is produced at least in the cylindrical part 16b. In the crucible 16 actually growing the single crystal, the melt is generated more stably, and thereby the quality of the single crystal becomes more stable, by making the diameter of the crucible as small as possible. As the diameter of the crucible is decreased, however, it becomes difficult to charge the starting material for the single crystal into the crucible. In this case, since the starting material 17 is charged into the cylindrical part 16b through the enlarged part 16a, it is easy to fill the starting material into the crucible.
0042 Thereafter the interior of the sealing member is normally vacuum-evacuated, and the crucible is sealed up by cutting the member under vacuum sealing. The powdery starting material 17 is once melted, and a polycrystal 20 is made by quenching the thus obtained melt. Then, when each heater is moved upward, the polycrystalline starting material in the tubular part 16b is heated successively from the lower side to produce a melt 21, as shown in FIG. 3(b). The melt 21 moves upward gradually.
0043 When the final single crystal 23 is produced, as shown in FIG. 4(a), it is formed in the tubular part 16b inside the crucible 16. Here, the part of the single crystal near the seed crystal 18 is in a composition-changing zone according to the material phase diagram, and the part of the single crystal above this zone is in a uniform zone 40 having the desired composition; the single crystal having the desired composition is thus produced in the area between the broken lines B. The seed crystal exists under the lower broken line B. A single crystal having the same composition as the melt is produced above the upper broken line B. A polycrystal 22 is normally produced in the connecting part 16c and the enlarged part 16a. The surface of the polycrystal 22 descends from the surface A of the powdery starting material 17.
0044 By using the above crucible, pores hardly occur in the single crystal 23 when the melt is supplemented, and convection of the melt actively occurs to further suppress compositional segregation.
0045 Thereafter, it is particularly desirable to use only the part of the single crystal 23 having the uniform composition in the tubular part 16b. In the step of producing the single crystal 23, a surplus metal component contained in the powdery starting material 17 or the polycrystal 20 tends to move upward in the crucible 16, that is, into the enlarged part 16a, so that the surplus metal is localized in the polycrystal shown in FIG. 4(a). Thus, by discarding the polycrystal formed in the enlarged part 16a and utilizing the single crystal 23 generated in the tubular part 16b, the characteristics of the single crystal, particularly the optical characteristics thereof, are further stabilized.
0046 It is preferable that the seed crystal 18 is accommodated in the lowest part of the crucible 16 and the powdery starting material 17 is charged onto the seed crystal 18. Thereby the crystalline orientation of each single crystal 23 produced in the container 30 is made more uniform.
0047 At least a part of the tubular part 16b of the crucible 16 can be cut out of the crucible 16. For example, along the broken lines B shown in FIG. 4(a), the tubular part 16b and the single crystal 23 are cut out. Thereby, for example as shown in FIGS. 4(b) and (c), a column-like body 24 consisting of a tubular covering part 26 and the single crystal 23 can be obtained and used as an optical material. In that case, light can pass through between a pair of end faces 25 of the single crystal 23.
0048 In this invention, since the diameter of the container in the area for producing the single crystal can be made small, the weight of the starting material 17 charged in the interior of each container 30 is relatively small. Accordingly, the thermal treatment of the container 30 can be carried out under a non-pressurized condition.
0049 In the boundary area between the melt-producing part 13 and the annealing part 14, a temperature gradient occurs. This temperature gradient is preferably 50° C./cm or more, whereby the velocity of crystallization increases, so that crystals of a different phase are unlikely to remain. Moreover, the temperature gradient is preferably 100° C./cm or below, whereby pores are prevented from occurring.
0050 The vertical length of the melt-producing part is preferably 5 mm or more, whereby the melt is stably produced with high reproducibility. Moreover, that vertical length is preferably 30 mm or less. With too long a melting part, the area giving the desired composition for the single crystal is decreased owing to slow changes in the composition.
0051 The vertical length of each of the preheating part and the annealing part is preferably 30 mm or more, which suppresses the occurrence of pores and increases the crystallization speed. The vertical length of each of the preheating part and the annealing part is preferably 100 mm or less.
0052 The manufacturing method and equipment in this invention can be applied to single crystals having various compositions, for example, II-VI Group compound-based single crystals such as Hg-Mn-Te, Hg-Cd-Te, Cd-Mn-Te, Hg-Cd-Mn-Zn-Te, Hg-Cd-Mn-Te-Se and Zn-Be-Mg-Se-Te, and III-V Group compound-based single crystals such as Ga-Al-As-P and In-Al-As-P.
0053 In particular, in the case of forming a single crystal having an Hg-Cd-Mn-Te composition, a mixture of an Hg-Te alloy and a Cd-Te alloy is used as a powdery starting material for the single crystal. Thereby, the generation of heat accompanying the reaction of the powdery starting materials, and damage to the sealing member due to such heat generation, can be prevented.
0054 In the case of forming a single crystal having an Hg-Cd-Mn-Te composition, the temperature of the melt-producing part for growing the single crystal is preferably not less than 700° C. and not more than 1050° C. Moreover, in this case the temperature of the preheating part is preferably set lower than that of the melt-producing part by not less than 50° C. and not more than 300° C., so that the growth of the polycrystal can be controlled and escape of Hg and Cd from the polycrystal, which may cause compositional segregation, can be prevented. Moreover, the temperature of the annealing part is preferably not less than 400° C. and not more than 1000° C.
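As an illustrative aside (not part of the patent text), the preferred temperature windows just stated can be collected into a small Python sketch that checks a candidate three-zone heater profile. The function name and structure are hypothetical; the numeric bounds are the ones quoted in paragraph 0054.

    # Minimal sketch, assuming only the windows quoted above for an
    # Hg-Cd-Mn-Te growth: melt zone 700-1050 C, preheat 50-300 C below the
    # melt zone, anneal zone 400-1000 C.
    def check_profile(preheat_c, melt_c, anneal_c):
        problems = []
        if not (700 <= melt_c <= 1050):
            problems.append("melt-producing part should be 700-1050 C")
        if not (50 <= melt_c - preheat_c <= 300):
            problems.append("preheating part should be 50-300 C below the melt-producing part")
        if not (400 <= anneal_c <= 1000):
            problems.append("annealing part should be 400-1000 C")
        return problems or ["profile satisfies the stated preferences"]

    if __name__ == "__main__":
        # Values used in Example A below: melt zone 1050 C, preheat and anneal 800 C.
        for msg in check_profile(preheat_c=800, melt_c=1050, anneal_c=800):
            print(msg)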
0055 Furthermore, in the case of forming a single crystal having an Hg-Cd-Mn-Te composition, the composition of the single crystal is preferably within the composition range defined by connecting the points (Hg0.5Cd0.0Mn0.5)Te, (Hg0.08Cd0.8Mn0.12)Te, (Hg0.05Cd0.5Mn0.45)Te, and (Hg0.5Cd0.5Mn0.0)Te by straight line segments.
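As an illustrative aside (not part of the patent text), a membership test for this preferred composition window can be sketched in Python. Because the cation fractions sum to one, the window is a planar region; the sketch assumes it is the convex quadrilateral spanned by the four quoted corner points, projected onto the (Hg, Mn) plane, and the ordering of the corners is inferred rather than taken from the patent.

    # Minimal sketch: is a composition (Hg_x Cd_y Mn_z)Te inside the assumed
    # convex region spanned by the four corner points quoted above?
    from math import atan2

    CORNERS_HG_MN = [(0.50, 0.50), (0.08, 0.12), (0.05, 0.45), (0.50, 0.00)]

    def _ordered(points):
        # Order the corners counter-clockwise around their centroid.
        cx = sum(p[0] for p in points) / len(points)
        cz = sum(p[1] for p in points) / len(points)
        return sorted(points, key=lambda p: atan2(p[1] - cz, p[0] - cx))

    def in_preferred_window(hg, cd, mn, tol=1e-9):
        if abs(hg + cd + mn - 1.0) > 1e-6:
            return False  # cation fractions must sum to one
        poly = _ordered(CORNERS_HG_MN)
        sign = 0
        for i in range(len(poly)):
            x1, z1 = poly[i]
            x2, z2 = poly[(i + 1) % len(poly)]
            cross = (x2 - x1) * (mn - z1) - (z2 - z1) * (hg - x1)
            if abs(cross) < tol:
                continue  # on an edge: treat as inside
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
        return True

    if __name__ == "__main__":
        # The Example A composition (Hg0.16 Cd0.68 Mn0.16)Te falls inside.
        print(in_preferred_window(0.16, 0.68, 0.16))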
0056 Each container and the corresponding sealing member are preferably fixed to extend in parallel to each other, and both the upper end and the lower end of each container are preferably fixed. By so doing, vibration of the melt is inhibited to control the formation of different phase of crystals.
0057 Moreover, as shown in FIG. 2 by way of example, each container is preferably fixed, by using ball-point screws 11, at three or four points at the joint portion between each of the fixing spindles 8 and 9 and the container, so as to sustain the horizontal level and the circularity. Thereby fluctuations of the melt are diminished, and the occurrence of deviations in the crystalline orientation is suppressed.
0058 More concrete experimental results will be described hereinafter.
0059 Example A
0060 An optical material made of a single crystal having an Hg-Cd-Mn-Te composition was grown according to the above method explained with reference to FIGS. 1 to 4. Concretely, Cd, Mn, an Hg-Te alloy, and a Cd-Te alloy were employed as the powdery starting material 17. A crystal having a (111) orientation of CdTe (diameter: 3 mm, length: 30 mm) was used as a seed crystal. Three hundred grams of the starting material was formulated so as to give a composition of (Hg0.16Cd0.68Mn0.16)Te after formulation. The sealing member 15 was formed of quartz glass with a thickness of 2 mm. The inner diameter and the length of the tubular part 16b of the crucible 16 were 3 mm and 300 mm, respectively. The inner diameter and the length of the enlarged part 16a of the crucible 16 were 5 mm and 50 mm, respectively. The crucible 16 was formed of bulk p-BN with a thickness of 1 mm. The seed crystal was put into the container, and 15 g of the powdery starting material was charged into the crucible 16. The crucible 16 was put into the quartz sealing member, and the sealing member was sealed.
0061 Containers 30 were formed by using 20 sealing members, and accommodated in a normal pressure electric furnace, which was heated to 1100° C. at 50° C./hour to melt the starting material 17. At that time, the system was designed such that the seed crystal 18 was prevented from melting by cooling with a cooling mechanism (not shown). As the temperature went up, the starting material put into the container turned into a melt, and the melt was formed up to the level of the tapered part 16c of the crucible. Next, the container was cooled rapidly to obtain a polycrystalline starting material 20. Thereafter, each container 30 was taken out and set into the manufacturing equipment of FIGS. 1 and 2.
0062 Hereupon, the inner diameter and the length of the melt-producing part in each heater were 15 mm and 10 mm, respectively. The length of each of the preheating part and the annealing part was 50 mm. The heater was formed of a tubular heat-generating body of a metal, alloy, or ceramic material. The opposite ends of each container 30 were fixed by the fixing spindles.
0063 The twenty containers 30 were fixed at the given locations as mentioned above, and thereafter the temperature was raised at a heating rate of 50° C./hour under normal pressure with use of the heaters. The position of the melt-producing part was aligned with the upper end of the seed crystal 18, and the melt-producing part was held at 1050° C. Moreover, the temperatures of the preheating part and of the annealing part were held at 800° C. The temperature gradient between the preheating part and the melt-producing part and that between the annealing part and the melt-producing part were both 75° C./cm. Holding the above conditions, the heaters were simultaneously moved at 30 mm/day, while the temperature of the melt-producing part was lowered to 950° C. at 100° C./day. When its temperature reached 950° C. after 24 hours, the melt-producing part was held at that temperature, and each heater was continuously moved for a further 9 days. After the growth of the single crystals, the temperature of the melt-producing part was lowered at 50° C./hour, and the twenty sealing members were removed.
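As an illustrative aside (not part of the patent text), the heater traverse implied by this schedule can be checked with a few lines of Python. The assumption made here is that the 30 mm/day movement applies both during the one-day cool-down from 1050° C. to 950° C. and during the subsequent nine days at constant temperature.

    # Minimal arithmetic sketch of the Example A schedule quoted above.
    MOVE_RATE_MM_PER_DAY = 30.0
    COOLDOWN_DAYS = (1050 - 950) / 100.0   # cooled at 100 C/day -> 1 day
    HOLD_DAYS = 9.0

    total_days = COOLDOWN_DAYS + HOLD_DAYS
    total_traverse_mm = MOVE_RATE_MM_PER_DAY * total_days
    print(f"growth run: {total_days:.0f} days, heater traverse ~{total_traverse_mm:.0f} mm")
    # About 300 mm, consistent with the 300 mm tubular part 16b of Example A.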
TABLE 1
Crucible No. | Position of sample | Hg | Cd | Mn | Te | Faraday rotation (deg/cm·Oe) | Cut-off wavelength (nm) | Practically usable or not
1 | 1 | 0.00 | 1.00 | 0.00 | 1.0 | - | - | X
1 | 3 | 0.08 | 0.74 | 0.18 | 1.0 | 0.042 | 880 | X
1 | 5 | 0.10 | 0.73 | 0.17 | 1.0 | 0.045 | 890 | X
1 | 7 | 0.12 | 0.71 | 0.17 | 1.0 | 0.048 | 900 | X
1 | 9 | 0.14 | 0.70 | 0.16 | 1.0 | 0.056 | 920 | Δ
1 | 10 | 0.15 | 0.69 | 0.16 | 1.0 | 0.058 | 930 | Δ
1 | 11 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 12 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 13 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 15 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 20 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 25 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 30 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 35 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 45 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 50 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 55 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 60 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 63 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 64 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 65 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 66 | 0.25 | 0.61 | 0.14 | 1.0 | measurement impossible | - | X
1 | 67 | 0.48 | 0.42 | 0.10 | 1.0 | measurement impossible | - | X
1 | 68 | 0.30 | 0.57 | 0.13 | 1.0 | measurement impossible | - | X
1 | 69 | 0.16 | 0.68 | 0.16 | 1.0 | polycrystal | - | X
1 | 70 | 0.16 | 0.68 | 0.16 | 1.0 | polycrystal | - | X
0064 Each crucible was taken out of the corresponding sealing member thus removed, and the grown crystal was cut, each slice with a thickness of 3.5 mm, starting from a point above the seed crystal 18, to obtain 70 samples per crucible. A total of 1400 samples was thus obtained from the 20 crucibles, and the composition and the optical characteristics, i.e., the Faraday rotation angle and the cut-off wavelength of light absorption, were investigated for them. The results obtained are listed in Tables 1 and 2.
TABLE 2
Crucible No. | Position of sample | Hg | Cd | Mn | Te | Faraday rotation (deg/cm·Oe) | Cut-off wavelength (nm) | Practically usable or not
2 | 10 | 0.15 | 0.69 | 0.16 | 1.0 | 0.058 | 930 | X
2 | 11 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 20 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 30 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 60 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 65 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 66 | 0.25 | 0.61 | 0.14 | 1.0 | measurement impossible | - | X
3 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
4 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
5 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
6 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
7 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
8 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
9 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
10 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
11 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
12 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
13 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
14 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
15 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
16 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
17 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
18 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
19 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
20 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
0065
0066 In Table 1, the results are given for crucible No. 1 among the twenty crucibles. The wording "Position of sample" means the position of each sample when the samples were cut out successively upward in order, starting from above the seed crystal 18. The smaller the number of the position of the sample, the nearer the position is to the seed crystal 18; the larger the number, the farther the position of the sample is from the seed crystal.
0067 As is apparent from the above, in each grown crystal the ten samples in the lower part of the crucible were in the composition-changing area, and the five samples in the upper part were in the area of the melt, so that they cannot be employed as optical isolator elements at a wavelength of 980 nm. Moreover, as shown in Tables 1 and 2, the middle fifty-five samples have a uniform composition for every one of the twenty grown crystals. Accordingly, it is confirmed that 1100 single crystal samples having the uniform composition could be obtained at the same time.
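As an illustrative aside (not part of the patent text), the sample counts quoted in this paragraph follow from simple arithmetic:

    # Simple check of the counts quoted above (illustrative only).
    crucibles = 20
    samples_per_crucible = 70
    usable_per_crucible = samples_per_crucible - 10 - 5  # reject composition-changing and melt-zone samples
    print(crucibles * samples_per_crucible)   # 1400 grown samples in total
    print(crucibles * usable_per_crucible)    # 1100 samples with the uniform composition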
0068 Comparative Example A
0069 A single crystal was grown by a conventional THM method. As a sealing member also functioning as a crucible, a container made of quartz glass having an inner diameter of 15 mm, a length of 100 mm, and a thickness of 3 mm was employed. About 75 grams of the starting material was put into the container, and the sealing member was sealed in vacuum. A 20 mm-long lower part of the container was formed in a tapered shape, a seed crystal having a diameter of 3 mm and a length of 30 mm was placed under the tapered part, and the starting material was melted and a single crystal was grown as in Example A. The single crystal was grown under a pressure of 30 atm in an Ar gas by using a THM growing furnace having a pressurizing container with a pressure resistance of 100 atm. The heating mechanism employed had a melt-producing part with an inner diameter of 30 mm and a length of 20 mm, and an annealing part and a preheating part each having an inner diameter of 30 mm and a length of 50 mm. The growing speed of the single crystal was 4 mm/day, and 20 days were needed for the growth.
TABLE 3
Wafer No. | Position of sample | Hg | Cd | Mn | Te | Faraday rotation (deg/cm·Oe) | Cut-off wavelength (nm) | Practically usable or not
1 | a | 0.10 | 0.73 | 0.17 | 1.0 | 0.045 | 890 | X
1 | b | 0.10 | 0.73 | 0.17 | 1.0 | 0.045 | 890 | X
1 | c | 0.09 | 0.73 | 0.18 | 1.0 | 0.043 | 885 | X
1 | d | 0.08 | 0.74 | 0.18 | 1.0 | 0.042 | 880 | X
1 | e | 0.08 | 0.74 | 0.18 | 1.0 | 0.042 | 880 | X
3 | a | 0.12 | 0.71 | 0.17 | 1.0 | 0.048 | 900 | X
3 | b | 0.12 | 0.71 | 0.17 | 1.0 | 0.048 | 900 | X
3 | c | 0.11 | 0.72 | 0.17 | 1.0 | 0.047 | 895 | X
3 | d | 0.10 | 0.73 | 0.17 | 1.0 | 0.045 | 890 | X
3 | e | 0.10 | 0.73 | 0.17 | 1.0 | 0.045 | 890 | X
5 | a | 0.14 | 0.70 | 0.16 | 1.0 | 0.056 | 920 | Δ
5 | b | 0.14 | 0.70 | 0.16 | 1.0 | 0.056 | 920 | Δ
5 | c | 0.13 | 0.70 | 0.17 | 1.0 | 0.052 | 910 | X
5 | d | 0.12 | 0.71 | 0.17 | 1.0 | 0.048 | 900 | X
5 | e | 0.12 | 0.71 | 0.17 | 1.0 | 0.048 | 900 | X
6 | a | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
6 | b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
6 | c | 0.15 | 0.69 | 0.16 | 1.0 | 0.058 | 930 | Δ
6 | d | 0.14 | 0.70 | 0.16 | 1.0 | 0.056 | 920 | Δ
6 | e | 0.13 | 0.70 | 0.17 | 1.0 | 0.052 | 910 | X
8 | a | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
8 | b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
8 | c | 0.15 | 0.69 | 0.16 | 1.0 | 0.058 | 930 | Δ
8 | d | 0.14 | 0.70 | 0.16 | 1.0 | 0.056 | 920 | Δ
8 | e | 0.13 | 0.70 | 0.17 | 1.0 | 0.052 | 910 | X
0070 Consequently, a single crystal was formed with a diameter of 15 mm and a length of 60 mm. The single crystal was transversely cut to obtain 15 wafers having a thickness of 3.5 mm and a diameter of 15 mm. Their characteristics were measured as in Example A, and the results are listed in Tables 3 and 4.
TABLE 4
Wafer No. | Position of sample | Hg | Cd | Mn | Te | Faraday rotation (deg/cm·Oe) | Cut-off wavelength (nm) | Practically usable or not
10 | a | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
10 | b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
10 | c | 0.15 | 0.69 | 0.16 | 1.0 | 0.058 | 930 | Δ
10 | d | 0.14 | 0.70 | 0.16 | 1.0 | 0.056 | 920 | Δ
10 | e | 0.13 | 0.70 | 0.17 | 1.0 | 0.052 | 910 | X
11 | a | 0.19 | 0.66 | 0.15 | 1.0 | 0.060 | 970 | X
11 | b | 0.18 | 0.67 | 0.15 | 1.0 | 0.060 | 960 | X
11 | c | 0.18 | 0.67 | 0.15 | 1.0 | 0.061 | 960 | X
11 | d | 0.17 | 0.67 | 0.16 | 1.0 | 0.062 | 950 | X
11 | e | 0.17 | 0.67 | 0.16 | 1.0 | 0.062 | 950 | X
13 | a | 0.30 | 0.57 | 0.13 | 1.0 | measurement impossible | - | X
13 | b | 0.30 | 0.57 | 0.13 | 1.0 | measurement impossible | - | X
13 | c | 0.30 | 0.57 | 0.13 | 1.0 | measurement impossible | - | X
13 | d | 0.30 | 0.57 | 0.13 | 1.0 | measurement impossible | - | X
13 | e | 0.30 | 0.57 | 0.13 | 1.0 | measurement impossible | - | X
15 | a | 0.48 | 0.42 | 0.10 | 1.0 | measurement impossible | - | X
15 | b | 0.48 | 0.42 | 0.10 | 1.0 | measurement impossible | - | X
15 | c | 0.48 | 0.42 | 0.10 | 1.0 | measurement impossible | - | X
15 | d | 0.48 | 0.42 | 0.10 | 1.0 | measurement impossible | - | X
15 | e | 0.48 | 0.42 | 0.10 | 1.0 | measurement impossible | - | X
0071
0072 In Tables 3 and 4, the wafer number denotes the first, second, third, through fifteenth wafer counted from the bottom of the grown crystal. The position of the sample denotes the respective position shown in FIG. 5(a). It is seen that the area of the desired composition is only about 7 mm wide and 25 mm long, near the center of the wafers. That is, by this growing method only 30 elements of the desired composition, having a diameter of 2 mm and a length of 3.5 mm, were obtained.
0073 Example B
0074 A growing experiment was carried out as in Example A. However, the thickness of the quartz glass was 3 mm, the inner diameter and the length of the tubular part 16b of the crucible 16 were 6 mm and 200 mm, respectively, and the inner diameter and the length of the enlarged part were 10 mm and 50 mm, respectively. A crucible made of graphite coated on its interior with a pyrolytic carbon thin film (total thickness of 2 mm) was employed as the crucible 16. Thirty grams of the powdery starting material was put into the crucible 16. The number of containers was 10. The inner diameter and the length of the melt-producing part were 18 mm and 15 mm, respectively, and the length of each of the preheating part and the annealing part was 50 mm.
0075 The 10 containers 30 were fixed at given places, and thereafter the thermal treating equipment was heated at 50° C./hour under normal pressure. The position of the melt-producing part was aligned with the upper end of the seed crystal, and the melt-producing part was held at 1050° C. The temperature of the pre-heating part and the annealing part was held at 800° C. At this time, the temperature gradients between the pre-heating part and the melt-producing part and between the annealing part and the melt-producing part were both 65° C./cm. Holding the above conditions, each thermal treating equipment was moved at 10 mm/day at the same time, and the temperature of the melt-producing part was lowered to 950° C. at 50° C./day. When the temperature reached 950° C. 48 hours later, the temperature was held and each thermal treating equipment continued to be moved for 15 days. After the growth, the melt-producing part was cooled at 50° C./hour and the sealing members were taken out.
TABLE 5
Crucible No. | Position of sample | Hg | Cd | Mn | Te | Faraday rotation (deg/cm Oe) | Cut-off wavelength (nm) | Practically usable or not
1 | 1 | 0.00 | 1.00 | 0.00 | 1.0 | measurement impossible | - | X
1 | 3 | 0.08 | 0.74 | 0.18 | 1.0 | 0.042 | 880 | X
1 | 5 | 0.10 | 0.73 | 0.17 | 1.0 | 0.045 | 890 | X
1 | 7 | 0.12 | 0.71 | 0.17 | 1.0 | 0.048 | 900 | X
1 | 9 | 0.14 | 0.70 | 0.16 | 1.0 | 0.056 | 920 | Δ
1 | 10 | 0.15 | 0.69 | 0.16 | 1.0 | 0.058 | 930 | Δ
1 | 11 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 12 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 13 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 15 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 20-1 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 20-2 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 20-3 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 20-4 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 25 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 30 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 33 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 34 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 35 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 36 | 0.25 | 0.61 | 0.14 | 1.0 | measurement impossible | - | X
1 | 37 | 0.48 | 0.42 | 0.10 | 1.0 | measurement impossible | - | X
1 | 38 | 0.30 | 0.57 | 0.13 | 1.0 | measurement impossible | - | X
1 | 39 | 0.16 | 0.68 | 0.16 | 1.0 | polycrystal | - | X
1 | 40 | 0.16 | 0.68 | 0.16 | 1.0 | polycrystal | - | X
0076 Each crucible was taken out of its sealing member, and wafers having a thickness of 3.5 mm and a diameter of 6 mm were cut out in turn starting from above the seed crystal. Four samples having a diameter of 2.5 mm were formed from each wafer to obtain 160 samples per crystal; accordingly, 1600 samples in total were formed from the 10 grown crystals. Their composition, Faraday rotation angle, and cut-off wavelength of light absorption were investigated. The results are listed in Tables 5 and 6.
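As a quick bookkeeping check of the figures quoted above (all numbers are taken from this example; the sliced length is simply inferred from the wafer count), a minimal Python sketch:

```python
# Consistency check of the sample counts quoted for Example B.
wafer_thickness_mm = 3.5
samples_per_wafer = 4            # four 2.5 mm samples cut from each 6 mm wafer
samples_per_crystal = 160        # as stated above
crucibles = 10

wafers_per_crystal = samples_per_crystal // samples_per_wafer    # 40 wafers
sliced_length_mm = wafers_per_crystal * wafer_thickness_mm       # 140.0 mm of crystal
total_samples = samples_per_crystal * crucibles                  # 1600 samples in all

print(wafers_per_crystal, sliced_length_mm, total_samples)
```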
TABLE 6
Crucible No. | Position of sample | Hg | Cd | Mn | Te | Faraday rotation (deg/cm Oe) | Cut-off wavelength (nm) | Practically usable or not
2 | 10 | 0.15 | 0.69 | 0.16 | 1.0 | 0.058 | 930 | X
2 | 11 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 20 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 30 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 35 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 36 | 0.25 | 0.61 | 0.14 | 1.0 | measurement impossible | - | X
3 | 20 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
4 | 20 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
5 | 20 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
6 | 20 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
7 | 20 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
8 | 20 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
9 | 20 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
10 | 20 | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
0078 In Table 5, the results for the crucible with number 1 are presented. The wording "position of the wafer" means the cut-off position of each wafer when cutting out the wafers upward from the seed crystal in turn. When the number of the position of the wafer is small, the position of the cut wafer is near the seed crystal; when the number is large, the position is far from the seed crystal. Since the positions of the samples cut out of each wafer were symmetric to each other, only one sample per wafer was examined in principle. In this respect, the wording 20-1 to 20-4 denotes the four samples in wafer 20.
0079 As can be seen from the above, in one grown crystal the samples in the lower part lie in the area where the composition is changing and the samples in the upper part lie in the melted area, so they could not be employed for an optical isolator at 980 nm. Moreover, as shown in Tables 5 and 6, the 100 samples in the middle have a uniform composition in every one of the ten crystals. Thus it is confirmed that 1000 samples having a uniform composition are obtained at the same time.
0080 Comparative Example B
0081 A single crystal was grown as in Comparative Example A. As a crucible and sealing member, a container made of quartz glass having an inner diameter of 10 mm, a length of 200 mm, and a thickness of 3 mm was employed. About 75 g of the starting material was put into the container, and the container was sealed in vacuum. A 20 mm-long lower part of the container was formed in a tapered shape, a seed crystal having a diameter of 3 mm and a length of 30 mm was placed under the tapered part, and thereafter the starting material was melted to grow a single crystal as in Example A. The single crystal was grown under a pressure of 30 atm of Ar gas by using a THM growing furnace having a pressurizing container with a pressure resistance of 100 atm. The heating system employed had a melt-producing part having an inner diameter of 20 mm and a length of 15 mm, and an annealing part and a pre-heating part having an inner diameter of 30 mm and a length of 50 mm respectively. The single crystal was grown for 20 days at a growing speed of 7 mm/day.
TABLE 7
Wafer No. | Position of sample | Hg | Cd | Mn | Te | Faraday rotation (deg/cm Oe) | Cut-off wavelength (nm) | Practically usable or not
1 | a | 0.10 | 0.73 | 0.17 | 1.0 | 0.045 | 890 | X
1 | b | 0.08 | 0.74 | 0.18 | 1.0 | 0.042 | 880 | X
3 | a | 0.12 | 0.71 | 0.17 | 1.0 | 0.048 | 900 | X
3 | b | 0.10 | 0.73 | 0.17 | 1.0 | 0.045 | 890 | X
5 | a | 0.13 | 0.70 | 0.17 | 1.0 | 0.052 | 910 | X
5 | b | 0.10 | 0.73 | 0.17 | 1.0 | 0.045 | 890 | X
7 | a | 0.14 | 0.70 | 0.16 | 1.0 | 0.056 | 920 | Δ
7 | b | 0.12 | 0.71 | 0.17 | 1.0 | 0.048 | 900 | X
9 | a | 0.15 | 0.69 | 0.16 | 1.0 | 0.058 | 930 | Δ
9 | b | 0.12 | 0.71 | 0.17 | 1.0 | 0.048 | 900 | X
10 | a | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
10 | b | 0.13 | 0.70 | 0.17 | 1.0 | 0.052 | 910 | X
15 | a | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
15 | b | 0.13 | 0.70 | 0.17 | 1.0 | 0.052 | 910 | X
20 | a | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
20 | b | 0.13 | 0.70 | 0.17 | 1.0 | 0.052 | 910 | X
0082 Consequently, a single crystal was formed with a diameter of 10 mm and a length of 140 mm. The single crystal was cut in the transverse direction to obtain 35 wafers having a thickness of 3.5 mm and a diameter of 10 mm. The results, measured as in Example A, are listed in Tables 7 and 8.
TABLE 8
Wafer No. | Position of sample | Hg | Cd | Mn | Te | Faraday rotation (deg/cm Oe) | Cut-off wavelength (nm) | Practically usable or not
25 | a | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
25 | b | 0.13 | 0.70 | 0.17 | 1.0 | 0.052 | 910 | X
29 | a | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
29 | b | 0.13 | 0.70 | 0.17 | 1.0 | 0.052 | 910 | X
30 | a | 0.17 | 0.67 | 0.16 | 1.0 | 0.062 | 950 | X
30 | b | 0.14 | 0.70 | 0.16 | 1.0 | 0.056 | 920 | Δ
31 | a | 0.19 | 0.66 | 0.15 | 1.0 | 0.060 | 970 | X
31 | b | 0.17 | 0.67 | 0.16 | 1.0 | 0.062 | 950 | X
33 | a | 0.30 | 0.57 | 0.13 | 1.0 | measurement impossible | - | X
33 | b | 0.30 | 0.57 | 0.13 | 1.0 | measurement impossible | - | X
35 | a | 0.48 | 0.57 | 0.13 | 1.0 | measurement impossible | - | X
35 | b | 0.48 | 0.42 | 0.10 | 1.0 | measurement impossible | - | X
0084 In Tables 7 and 8, the wafer number denotes the first, second, third, through thirty-fifth wafer from the bottom of the sample. The position of the sample denotes each position as shown in FIG. 5(b). It can be seen that the area of the desired composition is a region about 6 mm across and 80 mm long near the center of the wafer. That is, in this growing method, only 80 elements of the desired composition having a diameter of 2.5 mm and a length of 3.5 mm were obtained.
0085 Example C
0086 This invention is adaptable for manufacturing a plate-like or board-like single crystal. In this case, a container to grow a single crystal is required to have a board-like shape. In particular, the container has a crucible and a sealing member to accommodate the crucible, and the crucible has a board-like part extending upward and downward and an enlarged part formed above the board-like part.
0087 The thus obtained plate-like or board-like single crystals can be utilized for the mass production of Faraday elements with polarizer elements. That is, a number of chips can be produced by using such plate-like or board-like single crystals as Faraday elements, bonding plate-like polarizers such as rutile or Polarcor to surfaces of the Faraday elements to obtain joined bodies, and cutting and grinding the joined bodies to finally obtain the cut pieces.
0088 Referring to FIGS. 6(a) and 6(b), a further preferred embodiment will be described, wherein in FIGS. 6(a) and 6(b) the same reference numerals are given to the same constituent parts as in FIGS. 4(a) and 4(b), and their explanation is omitted. A container includes a crucible and a sealing member to seal the container. The crucible includes an enlarged part, a plate-like part, and a connecting part between them. The sealing member likewise includes an enlarged part, a plate-like part, and a connecting part between them. The reference numeral denotes an opening of the crucible.
0089 From the viewpoint of making the composition of the obtained single crystal more uniform, the thickness of the inner space of the plate-like part of the crucible is preferably 5 mm or less, more preferably 3 mm or less. Although the lower limit of the thickness is not particularly limited, it is required to be at least not less than the thickness of the final product. From the viewpoint of making the handling of the single crystal easy, it is preferably 2 mm or more. Although the width of the inner space of the plate-like part of the crucible is not limited, it may be set to 10 to 80 mm, for example.
0090 According to the above-mentioned method explained with reference to FIG. 1 to FIG. 4, a single crystal having a composition of Hg-Cd-Mn-Te was grown as an optical material by using the container shown in FIGS. 6(a) and 6(b). Cd, Mn, Hg-Te, and Cd-Te were employed as the powdery starting material. A plate-like Cd-Te crystal having a (111) crystalline orientation (thickness: 4 mm, width: 15 mm, length: 30 mm) was used as a seed crystal. The starting material, 400 g, was formulated to give a formulated composition of (Hg0.16Cd0.68Mn0.16)Te. The sealing member was formed of quartz glass in a thickness of 2 mm. The thickness, the width and the length of the inner space of the plate-like part were 3 mm, 15 mm and 12 mm, respectively. The thickness, the width and the length of the inner space of the enlarged part were 5 mm, 15 mm and 30 mm, respectively. The crucible was formed of 1 mm thick p-BN. After putting the seed crystal into the container, 40 g of the powdery starting material was charged into the crucible, the crucible was placed in the sealing member, and the sealing member was sealed.
0091 In each container, a polycrystal was obtained as in Example A. Each container was then taken out and set in the manufacturing equipment of FIGS. 1 and 2. In this case, the inner space of the melt-producing part in the thermal treating equipment was formed with a thickness of 15 mm, a width of 30 mm and a length of 10 mm. The length of the pre-heating part and the annealing part was 50 mm. The thermal treating equipment was also formed of a plate-like exothermic body made of a metal, alloy or ceramic material. The ends of each container were fixed with fixing axes.
0092 As mentioned above, ten containers were fixed at given places, and the temperature was raised at 50° C./hour under normal pressure by using heaters. The position of the melt-producing part was aligned with the upper end of the seed crystal, and the melt-producing part was held at 1050° C. The temperature of each of the pre-heating part and the annealing part was held at 800° C. At this time, the temperature gradient between the pre-heating part and the melt-producing part and that between the annealing part and the melt-producing part were both 65° C./cm. Holding the above condition, the heaters were all simultaneously moved at 10 mm/day, while the temperature of the melt-producing part was lowered to 950° C. at 50° C./day. When its temperature reached 950° C. in 48 hours, the temperature was kept as it was, and the heaters were continuously moved for 9 days. After the growth, the melt-producing part was cooled at 50° C./hour, and the ten sealing members were removed.
TABLE 9
Crucible No. | Position of sample | Hg | Cd | Mn | Te | Faraday rotation (deg/cm Oe) | Cut-off wavelength (nm) | Practically usable or not
1 | 1a | 0.08 | 0.74 | 0.18 | 1.0 | 0.042 | 880 | X
1 | 1b | 0.10 | 0.73 | 0.17 | 1.0 | 0.045 | 890 | X
1 | 1c | 0.12 | 0.71 | 0.17 | 1.0 | 0.048 | 900 | X
1 | 2a | 0.14 | 0.70 | 0.16 | 1.0 | 0.056 | 920 | Δ
1 | 2b | 0.15 | 0.69 | 0.16 | 1.0 | 0.058 | 930 | Δ
1 | 2c | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 3a | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 3b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 3c | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 3d | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 3e | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 3f | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 3g | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 4b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 5b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 6b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 7b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 8a | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 8b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 8c | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 8d | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 8e | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 8f | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 8g | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
1 | 9a | 0.25 | 0.61 | 0.14 | 1.0 | measurement impossible | - | X
1 | 9b | 0.48 | 0.42 | 0.10 | 1.0 | measurement impossible | - | X
1 | 9c | 0.30 | 0.57 | 0.13 | 1.0 | measurement impossible | - | X
0093 Each crucible was taken out of the corresponding sealing member, and the grown crystal was cut into lengths of 12 mm starting from above the seed crystal to obtain nine samples. Each sample was cut to have a vertical size of 12 mm and a lateral size of 12 mm, and was polished at the opposite surfaces to give a thickness of 3.5 mm. Ninety samples in total were formed from the 10 crystals. The composition and the optical characteristics, i.e., the Faraday rotation angle and the cut-off wavelength in light absorption, were examined with respect to them. Results are given in Tables 9 and 10.
TABLE 10
Crucible No. | Position of sample | Hg | Cd | Mn | Te | Faraday rotation (deg/cm Oe) | Cut-off wavelength (nm) | Practically usable or not
2 | 2b | 0.15 | 0.69 | 0.16 | 1.0 | 0.058 | 930 | X
2 | 3b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 4b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 5b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 6b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 7b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 8b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
2 | 9b | 0.48 | 0.42 | 0.10 | 1.0 | measurement impossible | - | -
3 | 5b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
4 | 5b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
5 | 5b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
6 | 5b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
7 | 5b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
8 | 5b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
9 | 5b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
10 | 5b | 0.16 | 0.68 | 0.16 | 1.0 | 0.060 | 940 | ◯
0095 In Table 9 are shown the results for No. 1 among the ten crucibles. The wording "position of sample" means the position of each sample in a case where samples were cut out successively upward in order starting from above the seed crystal, as shown in FIG. 7(a). The sample immediately above the seed crystal is denoted by 1, and that immediately under the polycrystal is denoted by 9. As shown in FIG. 7(b), the composition, Faraday rotation angle and cut-off wavelength in light absorption were examined with respect to each sample at points a to g. The positions a to g are indicated in the sample positions.
0096 As seen in the above, in one grown crystal the lower two samples were in a composition-changing area, and the upper one sample was in a portion of the melt. As shown in Tables 9 and 10, the middle six samples had a uniform composition in each of the ten crystals. Twenty-five optical isolator elements having a vertical size of 2 mm and a lateral size of 2 mm were obtained from one usable plate-like sample after cutting.
0097 As mentioned above, according to this invention, in single crystal-manufacturing equipment in which the single crystal is grown by thermally treating the starting material filled in the container, compositional segregation, the generation of a different phase, and deviation in the crystal orientation in the single crystal are prevented, so that the characteristics of the single crystal are improved and stabilized, and in addition single crystals can be mass-produced.
Today, as much as ever, it is crucial to understand the role of religion in the world. Many world events—such as the Syrian crisis, U.S. Immigration debates, or controversy over marriage equality—require a deep understanding of the role of religion in our culture.
Ball State's major stresses the many ways people participate in religion, and how this impacts various areas of their lives.
Whether you wish to better understand your own religion, or you are simply curious about other peoples’ religious practices, our program provides an excellent foundation.
Courses in religious studies address many dimensions and functions of religion within the world's cultures.
You’ll learn to use multiple approaches (e.g., anthropology, cultural studies, history, sociology, hermeneutics, and critical theories of interpretation) to examine the dynamic relationships between religion and other social, economic, and political structures. You'll gain a critical understanding of religious traditions, issues, questions, and values. You'll also cultivate an awareness of religion’s multifaceted influence on societies, and learn to appreciate the diversity of practices and beliefs in the world.
Our faculty is a group of well-educated, noted scholars in the field of Religious Studies who constantly develop learning opportunities.
As an undergraduate, you’ll receive individual attention from faculty on coursework and research projects—a huge distinguishing factor for Ball State compared to other universities.
Our students find a home here at Ball State, built around a close community and shared learning experiences and student activities. You can join a student organization dedicated to our fields, join an immersive learning program, help with a community-engagement project, and more.
Our diverse lineup of programs offers hands-on applications and immersive learning projects. You’ll get to work directly with groups in the community, helping them address real issues while gaining valuable professional experience.
You’ll receive a lot of attention from our faculty. You’ll work directly with them—on their research and your own.
The size of our department, and our university, is just right. You’ll get to know your professors unlike at larger universities, but we offer more resources and opportunities than many smaller colleges.
We have a wide variety of courses, and you can tailor directed studies to your own needs and interests.
In addition to the University Core Curriculum and lineup of electives, you’ll take 33 credits of coursework specific to religious studies. You can select from a wide range of courses to best fit your interests—ranging from studying the interplay between religion and culture, the ethical foundations of religion, the advanced study of biblical traditions, and more.
A few of the classes you will take include:
For a complete list of all the courses you will take and their descriptions, please see our Course Catalog.
Are you interested in pursuing a bachelor’s degree in religious studies? The first step is to apply as an undergraduate student to Ball State University. Begin the journey today.
One of the best ways to understand why Ball State stands out is to come see it for yourself. You can schedule a visit through our Office of Undergraduate Admissions. Make sure to tell them you’re interested in our program. Or if you’d like to speak with someone in our department directly by phone or email, please contact us. | https://www.bsu.edu/academics/collegesanddepartments/philosophy-religious-studies/academic-programs/bachelors-religious-studies |
DOI: https://doi.org/10.18332/tpc/62407
KEYWORDS: women, smoking
TOPICS: Smoking
ABSTRACT
Global tobacco control has led to a reduction in smoking prevalence and mortality in men, while the rates among women have not followed the same decline or patterns. Tobacco-induced diseases, including those unique to women (reproductive complications, cervical and breast cancer), are becoming increasingly prevalent among women. Unfortunately, many tobacco control policies and cessation programs have been found to be less effective for women than for men. This is alarming, as the smoking-associated disease risk for lung cancer, CVD, osteoporosis, and COPD is higher among women. Women are also more likely to be exposed to secondhand tobacco smoke and its subsequent morbidity. Finally, quitting smoking appears to be harder for women than for men. Current tobacco control and surveillance data come primarily from high-resource countries. WHO estimates that in 2030, in low and medium resource countries, 7 out of 10 deaths will be smoking-related. While the prevalence of smoking in women is relatively low in these countries, more information is needed regarding their patterns of tobacco use uptake and subsequent health outcomes, as these differ from men's. Tobacco use in women is greatly influenced by social, cultural and political determinants, and needs to be conceptualized within an intersectional framework.
| http://www.tobaccopreventioncessation.com/Women-and-Smoking-Global-Challenge,62407,0,2.html
The Office of Professional Responsibility (OPR) under the auspices of the Justice Department published a long-awaited report in 2010 that found that former Office of Legal Counsel attorneys John Yoo and Jay Bybee had engaged in professional misconduct by authoring two memorandums in 2002 and 2003 which justified the use of torture in the crackdown on suspected terrorists in the wake of the 9/11 attacks. The title of the OPR report is Investigation into the Office of Legal Counsel's Memoranda Concerning Issues Relating to the Central Intelligence Agency's Use of 'Enhanced Interrogation Techniques' on Suspected Terrorists. The report also concluded that both attorneys had failed to provide the Bush administration, for which the torture memos had been written, with the thorough, candid, and objective legal analysis incumbent upon all lawyers working in the OLC.
It was recommended in the report that both Yoo and Bybee be referred to their respective state bar associations in order to face disciplinary actions against their licenses to practice law. Bybee was a member of the District of Columbia bar while Yoo was licensed in Pennsylvania at that time. However, neither man was ever referred for sanctions of any sort. Ironically, on the same day the Justice Department released the OPR's report, a memo written by Associate Deputy Attorney General David Margolis was released that apparently absolved both Yoo and Bybee of any misconduct. David Margolis had been assigned to review adverse OPR findings entered against wayward Justice Department lawyers. In the memo, it was determined that the torture memos, which approved of such brutal tactics as waterboarding, were flawed; however, in the context of the post-9/11 emergency, they did not cross the line into formal misconduct or bad faith in violation of international human rights.
Today John Yoo is a tenured professor at the University of California School of Law at Berkeley, where he holds an endowed chair and teaches in the school's program on public law and policy, whereas Jay Bybee is a judge on the 9th U.S. Circuit Court of Appeals. Bybee was nominated to the bench by Bush in 2002 and his appointment was confirmed by the Senate in 2003, long before the torture memos were made public. According to the memo, Yoo and Bybee's transgressions were merely isolated instances of misconduct; however, they weren't as isolated as portrayed. Prosecutorial misconduct is not confined only to federal lawyers; on the contrary, it is rampant in almost the entire American legal system. Some observers have gone so far as to say that prosecutorial misconduct across the country has reached epidemic proportions.
Generally, instances of prosecutorial misconduct are found at the state level in ordinary criminal cases. Such instances range from the withholding of exculpatory evidence and the overcharging of defendants to the willful destruction of evidence, perjury by government witnesses, and the pressuring of defense witnesses. According to a study carried out recently, in some states instances of prosecutorial misconduct occur on average more than once a week. Even more stunning is the fact that almost none of the prosecutors were ever criminally charged for their misdeeds. The study did not take into account any misconduct committed in cases that plea out. As 97% of criminal cases are settled with a plea bargain, this study merely scraped the surface in regards to instances of prosecutorial misconduct.
Researchers and news outlets have reached similar conclusions regarding misconduct in other jurisdictions. Some of the prominent instances are mentioned below:
● USA Today focused on over 200 cases tried nationally in federal courts. In these cases appellate judges determined that prosecutorial misconduct had taken place, but only one Justice Department prosecutor had been temporarily disbarred.
● The Chicago Tribune examined over 10,000 cases from around the US where appellate courts had reversed convictions in 381 cases including 67 death penalty judgments. However, not a single state disciplinary agency publicly sanctioned a single government lawyer involved.
● Yale University published the results of a detailed investigation of the ethical rules and disciplinary practices of all 50 states. They were all found lacking in holding prosecutors accountable for misconduct.
In another ironic twist, prior to joining the Supreme Court, Justice Powell wrote a memo on behalf of the U.S. Chamber of Commerce where he urged the business community to become more proactive in its use of litigation to counteract the impact of groups like the NAACP and the ACLU. The Supreme Court prevented prosecutors from being held accountable yet again in a case dealing with civil damage actions against federal prosecutors. This case was brought by the New York-based Center for Constitutional Rights (CCR) on behalf of six men of Arab and South Asian descent who were arrested in the immediate aftermath of 9/11 and subsequently held and abused in a federal facility for up to six months before being deported. The CCR sought to hold the Attorney General and FBI Director at the time among other officials, liable for the plaintiffs’ extended detention and mistreatment.
In the ensuing lengthy litigation the Justice Department strove to have the complaint dismissed, whereas the CCR pursued civil damages actions against government officials for Fourth Amendment violations. In a fractured 4-2 decision, the Supreme Court declined to accept the CCR's standpoint, citing that the conduct complained of took place during a time of national crisis. It is a fact that being a prosecutor is hard, high-pressure work even under the best of circumstances. Lawyers would decline posts with the Justice Department or as prosecutors if they could potentially be prosecuted and sanctioned for every discretionary decision they make. Thus it is safe to say that certain protections are warranted. Without prosecutors there would be no Justice Department; therefore the aim should be to foster a system that encourages lawful prosecutorial behavior while punishing the worst instances of misconduct.
Although prosecutorial misconduct may never be eliminated altogether, it is important to remember that no one is above the law. If you have been a victim of prosecutorial misconduct, it is advisable that you seek the services of an experienced criminal defense attorney to protect your rights and freedom.
Author Bio:
Our Drug Charges Attorneys in Hamilton, NJ have dedicated their lives to the craft and will stand by your side throughout the entire case, should you have any questions or concerns about the process.
| https://voticle.com/a/articles/15661/instances-of-prosecutorial-misconduct-in-the-united-states
It is the person behind the lens who takes a good photo, not the camera. Regardless of your choice, the best gear or a certain capture method will not produce any good pictures without the artistic capabilities of the photographer. The following considerations guide your artistic choice for or against a certain capture method in individual situations.
Workflow
The most obvious difference is the mere workflow in taking pictures and processing them, but also before you even start.
- Since about the 2010s digital cameras are regularly equipped with a small LCD giving you the opportunity to review a (scaled-down) picture right after it has been taken. Using film, however, you cannot assess the exposure’s quality. Maybe you have underexposed or overexposed the shot. Maybe a person you photographed blinked right in that moment.
- Some photographers actually appreciate this fact, because you have to carefully think about the photo, instead of doing an iterative, “amateurish” trial and error approach.
- Other photographers see themselves stymied by the fact of not having any immediate feedback. This may be due to a lack of experience, because you can produce pictures with either capture method just as fast.
- Frequently external factors like the situation, the setting, or the photographed subject will dictate your choice. If you think you are producing too many rejects and too few “good” pictures with your digital camera, you might benefit from artificially slowing down your workflow by switching to an analog camera, perhaps even just for a while.
Technical
Some people like to justify their choice with technical facts. While it is important to know technical limitations, in most circumstances it makes no difference.
- When you use film, you start off with a “fresh sensor” over and over again. A digital camera, on the other hand, has a sensor built-in and while using it it heats up, thus altering its characteristics. When advancing the film a dust particle may be gone for the next exposure, but a dust particle on a digital sensor stays there affecting all pictures you take. Cleaning a digital sensor needs careful caution.
- Photographic film displays a loss of effective sensitivity at long exposure times (longer than about one second) and at super short exposure times (when you have to use a special high-speed camera).
- Copying analog materials always entails a loss of information, whereas digital materials can be copied without any loss in information. Although this difference is hardly noticeable, technically it is measurable.
Cultural
- As of the 2020s it has not yet been determined that digital media can be stored indefinitely. By the 21st century it was already pretty difficult to read digital media produced just 50 years earlier, if not impossible. Data formats change, hardware breaks down, or simply the magnetic polarization of bits decayed too much in quality. Modern photographic film, on the other hand, can be stored for at least hundreds of years. It is pretty much guaranteed that humans will be able to project light through it in the far future.
- Regular consumer PCs became powerful enough to allow everyone to retouch their photos. In consequence, digital photos are considered to be easier to manipulate. While it is also possible to manipulate analog pictures the general public may consider non-digital works more “truthful”.
- Just emotionally it can make a difference to you whether you can (and have to) physically touch your photos. | https://en.m.wikibooks.org/wiki/Modern_Photography/Film_processing |
Peter Bollington – Designer / Maker
The S_3.0 range of furniture was created out of a study into small space living, with the intent of engaging the final user, building an intimate relationship between the user and the work through touch, feel and play. The multifunctional aspect to the works allows the user to determine various functional outcomes in accordance with their needs….
S_3.0 Series
Having studied both furniture and interior design, my practice focuses on creating a harmony between the object and its environment. Created from necessity – my furniture is designed for small living spaces while maintaining a sleek aesthetic appeal and comfort. The works are multifunctional, unobtrusive in size yet bold statements of a creative process that transcends the humble piece of furniture into the realm of the designed object – a blend of material, form, pattern, function and artistic intention. The work also carries the undertone of the possibilities of plywood as a quality fine furniture material. With pressure on dwindling resources in furniture timbers, my work addresses a need to source and use sustainable options in both material and application. PETER BOLLINGTON 2012
….The collection in its various ways of assemblage encompasses three categories of furniture: shelving, seating and surface. The unique shape also allows for the individual pieces to nest inside one another for both ease of freight and storage when not in use. | http://www.noddyboffin.com/peter-bollington-designer-maker/ |
**A HEURISTIC REMARK ON THE PERIODIC VARIATION IN THE NUMBER OF SOLAR NEUTRINOS DETECTED ON EARTH**

H.J. Haubold
UN Outer Space Office, Vienna International Centre, Vienna, Austria
and
A.M. Mathai
Department of Mathematics and Statistics, McGill University, Montreal, Canada
Abstract. Four operating neutrino observatories confirm the long-standing discrepancy between the detected and predicted solar neutrino flux. Among these four experiments, the Homestake experiment has been taking data for almost 25 years. The reliability of the radiochemical method for detecting solar neutrinos has recently been tested by the GALLEX experiment. All efforts to solve the solar neutrino problem by improving solar, nuclear, and neutrino physics have failed so far. This may also mean that the average solar neutrino flux extracted from the four experiments is not the proper quantity with which to explain the production of neutrinos in the deep interior of the Sun. Occasionally it has been emphasized that the solar neutrino flux may vary over time. In this paper we address relations among specific neutrino fluxes produced in the proton-proton chain that are imposed by the coupled system of nonlinear partial differential equations of solar structure and kinetic equations, by focusing our attention on a statistical interpretation of selected kinetic equations of the PPII/PPIII branch reactions of the proton-proton chain. A fresh look at the statistical implications of the outcome of kinetic equations for nuclear reactions may shed light on recent claims that the $^7Be$-neutrino flux of the Sun is suppressed in comparison to the pp- and $^8$B neutrino fluxes, and may hint that the solar neutrino flux is indeed varying over time as shown by the Homestake experiment.
Solar Nuclear Energy Generation: Proton-Proton-Chain
====================================================
The nuclear energy source in the Sun is believed to be the proton-proton chain, in which four protons fuse to form one $^4He$ nucleus, i.e. $$4p\rightarrow ^4He+2e^++2\nu_e +Q,$$ where $Q=M(^4He)-4M_p-2M_e\approx 26.73 MeV$ denotes the energy release. The three different branches (PPI, PPII, PPIII) to accomplish the formation of $^4He$ in the pp-chain are shown in Figure 1. Neutrinos are produced in the pp-chain by nuclear fusion reactions, beta-decay, and electron capture. The dominant reactions (PPI, 86% of the produced $^4He$) produce the great majority of low energy solar neutrinos $(\Phi_\nu^{SSM}(pp)\approx 6.0 \times 10^{10}\nu
cm^{-2}s^{-1}, \Phi^{SSM}_\nu(pp)\sim T_c^{-1.2}$, where $T_c$ denotes the temperature at the centre of the Sun) and their number should be a firm prediction of any solar model because it is closely tied to the solar luminosity. The second branch of the pp-chain (PPII, 14% of the produced $^4He$) yields the $^7Be$ neutrinos at two discrete energies $(\Phi ^{SSM}_\nu(^7Be)\approx 4.9\times 10^9 \nu cm^{-2}s^{-1},
\Phi^{SSM}_\nu(^7Be)\sim T_c^8).$ In the third branch of the pp-chain, the very rare case (PPIII, 0.02% of the produced $^4He$), radioactive $^8B$ is produced which decays ultimately and is the source of high energy $^8B$ neutrinos $(\Phi^{SSM}_\nu
(^8B)\approx 5.5 \times 10^6\nu cm^{-2}s^{-1},
\Phi_\nu^{SSM}(^8B)\sim T_c^{18}).$ The specific neutrino fluxes are taken from the Standard Solar Model (SSM) of Bahcall and Pinsonneault (1992). The overall energy production of the pp-chain is $Q=26.73 MeV$, however, the three branches produce a different amount of energy (PPI: $E_\gamma=(26.73-0.265)$MeV, PPII: $E_\gamma=(26.73-0.861)$MeV, PPIII: $E_\gamma=(26.73-7)MeV)$ due to the energy loss carried off by elusive neutrinos. The fluxes of high energy solar neutrinos are especially sensitive to the central temperature, $T_c$, mainly because of the energy dependence of the cross sections of the respective nuclear reactions. The branching ratios (the percentage of which each branch of the pp-chain contributes to the production of $^4He$) are strongly dependent on the nuclear reaction probabilities and on the density and temperature profiles inside the Sun. Assuming that the Sun is in a state of quasistatic equilibrium, the solar luminosity $L_\odot$ tells us the total energy generation rate which can be turned into a constraint on the total solar neutrino fluxes, that is $$L_\odot=13.1(\phi_\nu(pp)-\phi_\nu(^7Be)-\phi_\nu(^8B))+25.6\phi_
\nu(^7Be)+19.5\phi_\nu(^8B).$$ The luminosity $L_\odot$ observed at the current stage of evolution of the Sun corresponds to the energy that was generated in the gravitationally stabilized solar fusion reactor $10^7$ yr ago (Helmholtz-Kelvin timescale). The quasistatic assumption allows one to equate the present luminosity with the present nuclear energy production rate (Mathai and Haubold, 1988). Normalizing the neutrino fluxes in (2) to those of the Standard Solar Model $(\Phi_\nu=\phi_\nu/\phi_\nu^{SSM})$ leads to a luminosity constraint indicating the degree of contribution of the respective neutrino flux to the total solar neutrino emission: $$1=0.913\Phi_\nu(pp)+0.071\Phi_\nu(^7Be)+0.00004 \Phi_\nu(^8B).$$ To reveal how the pp-chain operates in the Sun is to measure the individual neutrino fluxes $\phi_\nu(pp),\, \phi_\nu(^7Be),$ and $\phi_\nu(^8B)$ in (3), thereby fixing the branching ratios of PPI, PPII, and PPIII as indicated in Figure 1. Four experiments are in operation now to accomplish this solar neutrino spectroscopy. Kamiokande measures exclusively the high-energy flux $\phi_\nu(^8B)$, thus PPIII, in real-time spectroscopy. Homestake observes primarily $\phi_\nu(^8B)$ and to a much lesser extend $\phi_\nu(^7Be)$, that is, the branching of PPII and PPIII. GransSasso/Baksan detects primarily the low-energy flux $\phi_\nu(pp)$ and to a lesser extend the fluxes $\phi(^7Be)$ and $\phi_\nu(^8B)$, thus focusing on the branching of PPI and PPII/PPIII.
Spatial Distribution of Solar Neutrino Sources: Standard Solar Model
====================================================================
Figure 2 shows the neutrino production as a function of the dimensionless distance variable $x=R/R_\odot$, starting from the center of the Sun for the Standard Solar Model (Bahcall and Pinsonneault, 1992). We note that experiments for the detection of solar neutrinos are looking into different depths of the solar core as they are only sensitive to specific neutrinos produced in the respective nuclear reactions. The region in which the low-energy pp-neutrino flux is produced is very similar to that of the total nuclear energy generation. Because of its strong temperature dependence, the high-energy $^8B$-neutrino production is peaked at the very small radius $x=0.05$ and is generated in a much narrower region in comparison to the other two neutrino sources (Table 3).
The contribution of the neutrino fluxes $\phi_\nu(pp),
\phi_\nu(^7Be),\, \mbox{and}\, \phi_\nu(^8B)$ to the Homestake, Kamiokande, and GranSasso/Baksan experiments is $$\phi_\nu\mbox{(Homestake)}\approx
6.2\phi_\nu(^8B)+1.2\phi_\nu(^7Be) SNU,$$ $$\phi_\nu\mbox{(Kamiokande)}=\phi_\nu(^8B)\Phi_\nu(^8B),$$ $$\phi_\nu\mbox{(GranSasso/Baksan)}\approx
13.8\phi_\nu(^8B)+35.8\phi_\nu(^7Be)+70.8\phi_\nu(pp).$$ The coefficients in Equation (4)-(6) are corresponding to the capture rates predicted by the Standard Solar Model for each respective solar neutrino source.
Table 1 summarizes the predictions of the total capture rates of the Standard Solar Model for the Kamiokande, Homestake, and GranSasso/Baksan experiments. The Kamiokande solar neutrino flux is given in units of\
$10^6 \nu cm^{-2}s^{-1}$, while the Homestake and GranSasso/Baksan rates are given in solar neutrino units $(1SNU \equiv 10^{-36}\nu
atom^{-1} s^{-1}).$ The uncertainties shown in Table 1 are $1\sigma$.
| Experiment | Kamiokande | Homestake | GranSasso/Baksan |
|---|---|---|---|
| Predicted capture rate | $5.69\pm 0.82$ | $8.0\pm 3.0$ | $131.5^{+21}_{-17}$ |
Table 1.
Four Solar Neutrino Experiments: Solar Neutrino Problem and $^7$Be-Neutrino Deficiency
======================================================================================
| Experiment | Reaction | Energy threshold | Location |
|---|---|---|---|
| Kamiokande | $\nu_e+e^-\rightarrow \nu_e+e^-$ | $7.5 MeV$ | Japan |
| Homestake | $\nu_e+^{37}Cl\rightarrow ^{37}Ar+e^-$ | $0.814 MeV$ | USA |
| GranSasso (GALLEX) | $\nu_e+^{71}Ga\rightarrow ^{71}Ge+e^-$ | $0.233 MeV$ | Italy |
| Baksan (SAGE) | $\nu_e+^{71}Ga\rightarrow ^{71}Ge+e^-$ | $0.233 MeV$ | Russia |
Table 2.
There are four experiments currently operating to detect neutrinos coming from the Sun (Table 2). The Kamiokande experiment is a water Čerenkov detector which measures the energy of the scattered electrons (Nakamura, 1993). Due to its energy threshold it is only sensitive to the high-energy $^8B$ neutrinos from branch PPIII of the pp-chain. The Homestake experiment consists of $10^5$ gallons of $C_2Cl_4$ and detects solar neutrinos via capture on the chlorine (Davis, 1993). Its energy threshold allows to detect the higher energy line of $^7Be$-neutrinos from branch PPII as well as the high-energy $^8B$ neutrinos from branch PPIII. The two gallium experiments at GranSasso and Baksan are sensitive to the low energy pp-neutrinos from the PPI branch of the pp-chain as well as to the higher energy $^7Be$- and $^8B$-neutrinos (Anselmann et al., 1994; Abdurashitov et al., 1994). The predicted contributions to the Homestake and GranSasso/Baksan experiments based on the Standard Solar Model are shown in Table 3.
| Neutrino source | Homestake experiment | Percentage of total capture rate | GranSasso/Baksan experiments |
|---|---|---|---|
| pp | $0.0$ | $0\%$ | $70.8$ |
| pep | $0.2$ | $2.5\%$ | $3.1$ |
| $^7Be$ | $1.2$ | $15\%$ | $35.8$ |
| $^8B$ | $6.2$ | $77.5\%$ | $13.8$ |
| $^{13}N$ | $0.1$ | $1.25\%$ | $3.0$ |
| $^{15}O$ | $0.3$ | $3.75\%$ | $4.9$ |
| total capture rate | $8.0\pm 3.0$ | $100\%$ | $131.5^{+21}_{-17}$ |
Table 3.
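A quick cross-check: summing the per-source contributions of Table 3 reproduces the total predicted capture rates of Table 1. A minimal sketch in Python (values copied from Table 3):

```python
# Per-source Standard Solar Model contributions (SNU) from Table 3.
homestake = {"pp": 0.0, "pep": 0.2, "7Be": 1.2, "8B": 6.2, "13N": 0.1, "15O": 0.3}
gallium   = {"pp": 70.8, "pep": 3.1, "7Be": 35.8, "8B": 13.8, "13N": 3.0, "15O": 4.9}

print(round(sum(homestake.values()), 1))  # 8.0 SNU, the Homestake prediction of Table 1
print(round(sum(gallium.values()), 1))    # 131.4 SNU, close to the 131.5 SNU of Table 1
```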
The experimental results of the four solar neutrino experiments are given in Table 4 and can be compared with the predictions of the Standard Solar Model as shown in Table 1.
| Experiment | Detected capture rate | Detected capture rate / predicted capture rate |
|---|---|---|
| Kamiokande | $2.89^{+0.22}_{-0.21}\pm 0.35$ | $0.50 \pm 0.07$ |
| Homestake | $2.55 \pm 0.17\pm 0.18$ | $0.32 \pm 0.03$ |
| GranSasso/Baksan | $77\pm9$ | $0.59 \pm 0.07$ |
Table 4.
From Table 4 it is evident that the results of the four experiments are between 1/3 and 1/2 of the neutrino capture rates predicted by the Standard Solar Model. This deficit of solar neutrinos is called the solar neutrino problem which poses a serious conflict with the constraint of the overall solar luminosity in (2) and (3). Additionally, the comparison of the three detected capture rates in Table 4 with the predicted capture rates in Tables 1 and 3 shows that the Kamiokande rate is less suppressed than the Homstake rate. Because the Homestake experiment has a lower energy threshold, the lower detected capture rate suggests that the $^7Be$-neutrinos are more suppressed than the high energy $^8B$-neutrinos. However, any reduction of the $^7Be$ production rate by lowering the temperature $T_c$ would affect immediately both the $^7Be$ and $^8B$ neutrino production equally. This fact seems to pose an additional problem in finding a solution of the solar neutrino problem in terms of solar, nuclear, and neutrino physics on which the Standard Solar Model is based. This is particularly true for the so-called cooler Sun models (Castellani et al., 1994).
Argon-Production Rate of the Homestake Experiment: Variations Over Time
=======================================================================
Figure 3 shows the $^{37}Ar$ production rate detected by the Homestake experiment from 1970.8 to 1991.6 (Davis, 1993). The average $^{37}Ar$ production rate (combined likelihood function) for the 94 individual runs shown was $0.509\pm 0.031$ argon atoms per day. Subtracting a total background $^{37}Ar$ production rate of $0.08\pm 0.03$ atoms per day yields the production rate that can be ascribed to solar neutrinos: $0.429\pm 0.043$ atoms per day or $2.28\pm 0.23$ SNU (the rate in SNU is equal to $5.31$ times the captures per day in the Homestake experiment). This average capture rate is commonly compared to the predictions of the Standard Solar Model as shown in Tables 1 and 3 for the pp-chain and the CNO cycle. This procedure does not take into account in any way the apparent time variation in the observed $^{37}Ar$ production rate evident in Figure 3.
Figure 4 shows a five-point moving average of the $^{37}Ar$ production rate, removing high frequency noise from the actual time series collected in the Homestake experiment as shown in Figure 3. One notes in the five-point moving average that in the periods 1978 to 1979 and 1987 to 1988 a supression of the $^{37}Ar$ production rate seems to occur. The overall shape of the five-point moving average suggests that there are two distinctive epochs spanning the time periods 1971 to 1980 and 1980 to 1989. Each epoch shows a shock-like rise and subsequent rapid decline of the $^{37}Ar$ production rate. Further, the five-point moving average of the $^{37}Ar$ production rate reveals that each of the two distinct cycles covers a time period of around 9 years. Each cycle exhibits a slow but shock-like increase, reaching a peak, succeeded by a rapid decrease to a minimum value of the $^{37}Ar$ production rate. This pattern is repeated for a second nine-year period and seems to start for a third period in 1989 (Haubold and Mathai, 1994). Each of these cycles can be reproduced by a mechanism discussed in the following Section. Fourier analysis of the $^{37}Ar$ production rate data in Figure 3 reveals a power spectrum showing the harmonic content in this time series in terms of a series of distinctive periodicities which is shown in Figure 5. Fourier analysis also indicates that the harmonic content in the $^{37}Ar$ production rate data is dominated by periodicities of 0.57, 2.2, 4.8, and 8.3 years (Haubold and Gerth, 1990).
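The smoothing and period search described above can be reproduced schematically. The Python sketch below operates on a synthetic stand-in series (the real 94-run Homestake data are not reproduced here; the assumed 8.3-yr modulation, noise level and sampling are illustrative only), and for the real, unevenly sampled runs a Lomb-Scargle periodogram would be the more appropriate tool:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(1970.8, 1991.6, 94)                 # 94 runs, here taken as equally spaced
signal = 0.43 + 0.15 * np.sin(2 * np.pi * t / 8.3)  # assumed 8.3-yr modulation of the rate
rate = signal + 0.10 * rng.standard_normal(t.size)  # plus measurement noise

# Five-point moving average, the same prescription as used for Figure 4.
smoothed = np.convolve(rate, np.ones(5) / 5.0, mode="valid")

# Simple periodogram of the (evenly spaced) synthetic series.
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
power = np.abs(np.fft.rfft(rate - rate.mean())) ** 2
k = np.argmax(power[1:]) + 1                        # skip the zero-frequency bin
print("strongest periodicity of the synthetic series: %.1f yr" % (1.0 / freqs[k]))
```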
Kinetic Equations: Lifetime Densities
=====================================
The production and destruction of nuclei in the proton-proton chain of reactions can be described by kinetic equations governing the change of the number density $N_i$ of species $i$ over time, that is, $$\frac{d}{dt}N_i=-\sum_jN_iN_j<\sigma v>_{ij} + \sum_{k,l\neq
i}N_kN_l<\sigma v>_{kl},$$ where $<\sigma v>_{mn}$ denotes the reaction probability for an interaction involving species $m$ and $n$, and the summation is taken over all reactions which either produce or destroy the species $i$. The first sum in (7) can also be written as $$-\sum_jN_iN_j<\sigma v>_{ij}=-N_i(\sum_j N_j<\sigma
v>_{ij})=-N_ia_i,$$ where $a_i$ is the statistical expected number of reactions per unit volume per unit time destroying the species $i$. The reciprocal of the quantity $a_i$ is the lifetime of species $i$ for interaction with species $j$ for all $j$. It is also a measure of the speed in which the reaction proceeds. If the reaction results in the production of a neutrino, for example, then the reciprocal of $a_i$ is the expected time it takes to produce this neutrino in the solar interior. In the following we are assuming that there are $N_j(j=1,\ldots, i, \ldots)$ of species $j$ per unit volume and that for a fixed $N_i$ the numbers of other reacting species that react with the i-th species are constants in a unit volume. Following the same argument we have for the second sum in (7) accordingly, $$+\sum_{k,l\neq i}N_kN_l<\sigma v>_{kl}=+N_ib_i,$$ where $N_ib_i$ is the statistical expected number of the i-th species produced per unit volume per unit time for a fixed $N_i$. Note that by nature the number density of species $i,
N_i=N_i(t)$, is a function of time while the $<\sigma v>_{mn}$ are supposed to depend only on the temperature but not on the time $t$ and number densities $N_j$. Then equation (7) implies that $$\frac{d}{dt}N_i(t)=-(a_i-b_i)N_i(t).$$ For equation (9) we have three cases, $c_i=a_i-b_i>0, c_i<0,
c_i=0,$ of which the last case says that $N_i(t)$ does not vary over time, which means that the forward and reverse reactions involving species $i$ are in equilibrium. The first two cases exhibit that either the destruction $(c_i>0)$ of species $i$ or production $(c_i<0)$ of species $i$ dominates.\
For the case $c_i>0$ we have $$\frac{d}{dt}N_i(t)=-c_iN_i(t),$$ and it follows that $$N_i(t)dt=N_i(0)e^{-c_it}dt,$$ where $N_i(0)$ is the number density of species $i$ at time $t=0$. If $c_i$ in (10) is a function of time, say $c_i(t)$, then $c_it$ in (10) is to be replaced by $\int dt c_i(t)$. If the arrival distributions for the other species are Poisson, then $c_i(t)$ will be of the form $d_it$, where $d_i>0$ independent of $t$. In this case the exponent in (10) is $\int dt c_i(t)=d_it^2/2.$ Contrarily, when $c_i$ is $a$ constant, the total number of reactions in the time interval $0\leq t \leq t_0$ is given by $$\int^{t_0}_0dt N_i(t)=N_i(0)\int^{t_0}_0dt
e^{-c_it}=\frac{N_i(0)}{c_i}(1-e^{-c_it_0}).$$ In (11), $1-e^{-c_it_0}$ is the probability that the lifetime of species $i$ is $\leq t_0$ when $t$ has the density $$f(t)=c_ie^{-c_it}, 0\leq t \leq \infty, c_i>0,$$ or $$N_i(t)=\frac{N_i(0)}{c_i}f(t).$$ When $c_i=c_i(t)=d_it$ then $$N_i(t)=\left(\frac{\pi}{2d_i}\right)^{1/2}N_i(0)h(t),$$ where $$h(t)=\left(\frac{2d_i}{\pi}\right)^{1/2}e^{-d_it^2/2}, 0\leq t
\leq
\infty, d_i>0.$$ The density in (12) will be called the lifetime density for the destruction of species $i$, with the expected mean lifetime $$E(t)=\frac{1}{c_i}.$$ If the lifetime density is as given in (15) then $$E(t)=\left(\frac{2}{\pi d_i}\right)^{1/2}.$$ From (12) and (16) we can make the following observations:\
(i) $c_i$ can be interpreted as a measure of net destruction, the larger the value of $c_i$ the faster the net destruction.\
(ii) $\frac{N_i(0)}{c_i}f(t)\Delta t$ can be interpreted as the amount of net destruction over the small interval of time $\Delta t$. The faster the net destruction the shorter the lifetime.\
(iii) The quantity $$\int^\infty _0 dt \frac{N_i(0)}{c_i}f(t)=\frac{N_i(0)}{c_i}$$ can be interpreted as the total net destruction of species $i$ starting with the initial number $N_i(0)$.\
(iv) If the net destruction of species $i$ produces a species $k$, for example a neutrino, then the number produced is proportional to $\frac{N_i(0)}{c_i}$.\
If the lifetime for the production of a species $k$ due to the net destruction of species $i$ is denoted by $\tau$, then $\tau$ is a constant multiple of $t$, say $\tau=\alpha_1t$, where $t$ has the lifetime density $f(t)$. But the densities of $t$ and $\alpha_1t\,
(\alpha_1>0)$ belong to the same family of distributions and hence the density of $\tau$ can be written as $$f(\tau)=\theta_i e^{-\theta_i \tau}, \theta_i >0, \tau>0,$$ where $\theta _i=c_i/\alpha_1$ and thus the total production is $\alpha_1\frac{N_i(0)}{c_i}.$
Dampening of Reactions: Poisson Arrivals
========================================
Suppose that after a certain period of time of net destruction, say $t_0$, a dampening effect starts to slow down the net destruction of species $i$ with initial number $N_i(0).$ Let this dampening variable be denoted by $\tau_2$, where $\tau_2$ is again proportional to the lifetime, say $\alpha_2t$. Then the lifetime density associated with $\tau_2$ is of the exponential type, belonging to the same family as in (19). Let $\tau_1$ and $\tau_2$ be independently acting or statistically independent. Let the delay in time for $\tau_2$ to start be $c=\alpha_2t_0$ and let the densities of $\tau_1$ and $\tau_2$ be denoted by $$f_j(\tau_j)=\beta_je^{-\beta_j\tau_j}, \tau_j>0, \beta_j>0, j=1,2$$ where $\beta_1=\theta_i=c_i/\alpha_1$ of (19) and let $\beta_2=c_i/\alpha_2.$ Then the net destruction of species $i$ is proportional to $u=\tau_1-(\tau_2-c)=\tau_1-\tau_2+c$ with the joint desity of $\tau_1$ and $\tau_2$ given by $$f(\tau_1,
\tau_2)=\beta_1\beta_2e^{-(\beta_1\tau_1+\beta_2\tau_2)},
\tau_j>0, \beta_j>0, j=1,2$$ due to the statistical independence of $\tau_1$ and $\tau_2$. The density of $u$, denoted by $g(u)$, is the following (Mathai, 1993) $$g(u)=\left\{ \begin{array}{ll}
\frac{\beta_1 \beta_2}{\beta_1+\beta_2}\, e^{-\beta_1(u-c)}, & c\leq u <\infty \\[0.3cm]
\frac{\beta_1\beta_2}{\beta_1+\beta_2}\, e^{\beta_2(u-c)}, & -\infty< u \leq c
\end{array} \right.$$ where $$\frac{\beta_1\beta_2}{\beta_1+\beta_2}=\frac{c_i}{\alpha_1+
\alpha_2},$$ observing that $\beta_1=c_i/\alpha_1$ and $\beta_2=c_i/\alpha_2.$ If the net destruction of species $i$ is exceeding the dampening rate, then $\beta_1>\beta_2$ and the following Figure 6 illustrates the behaviour of the density $u$ in (22).\
Figure 6 shows a non-symmetric Laplacian, slowly rising and rapidly falling. At the time $t=t_0$ the net destruction of species $i$ is given by $\frac{N_i(0)}{c_i}(1-e^{-c_it_0}).$ Then the production of species, for example neutrinos, as a result of the net destruction of species $i$, in an instant of time is given by $$\alpha_1\alpha_2\left(\frac{N_i(0)}{c_i}\right)^2\left(1-e^{-c_it_0}\right)g(u)du,$$ which is a constant multiple of $g(u)$, where $g(u)$ is given in (22). Hence the shape of the curve for the net destruction of species $i$ and the resulting production of species $k$ will be the same as that of $g(u)$ shown in Figure 6. The production of resulting species in a small interval of time $\Delta t$ is $Ag(u)\Delta t$ with $t_0=c/\alpha_2$, starting with a constant initial number $N_i(0)$ of species $i$, where $$A=\alpha_1\alpha_2\left(\frac{N_i(0)}{c_i}\right)^2\left(1-e^{-\frac{c_i}{\alpha_2}c}\right),$$ since $\int^{+\infty}_{-\infty}du\, g(u)=1.$ Here the integration is done from $-\infty$ to $+\infty$. Note, however, that when $c$ is large enough the probability of $u$ being negative will be negligibly small and hence $A$ in (24) is a good approximation to the total production.
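Both the closed form (22) and this normalization are easy to check numerically. The sketch below uses invented parameter values (the text leaves $\beta_1$, $\beta_2$ and $c$ unspecified) and assumes NumPy is available; it simulates $u=\tau_1-\tau_2+c$, compares a histogram of the samples with $g(u)$, and reports the fraction of negative $u$, which should be small when $c$ is large.

```python
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2, c = 3.0, 1.0, 2.0            # destruction faster than dampening: beta1 > beta2

tau1 = rng.exponential(1.0 / beta1, size=1_000_000)   # production lifetimes
tau2 = rng.exponential(1.0 / beta2, size=1_000_000)   # dampening lifetimes
u = tau1 - tau2 + c

def g(u):
    """Closed-form asymmetric Laplace density of (22)."""
    amp = beta1 * beta2 / (beta1 + beta2)
    return np.where(u >= c, amp * np.exp(-beta1 * (u - c)),
                            amp * np.exp(beta2 * (u - c)))

edges = np.linspace(c - 6.0, c + 3.0, 46)
hist, _ = np.histogram(u, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - g(centers))))   # small: histogram tracks (22) up to binning and sampling noise
print((u < 0).mean())                      # probability of a negative u; negligible for large c
```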
We observe in (24) that when $c$ is small, $A$ is small and $A$ is an increasing function of $c$ as shown in Figure 7.
Note that $A$ in (24) is the result of assuming that the initial number $N_i(0)$ of species $i$ per unit volume is a constant. If the species $i$ is arriving to the unit volume according to a Poisson distribution with parameter $\lambda_i$ (Poisson arrivals), then $N_i(0)$ in (24) as well as in the previous formulae is to be replaced by its expected value, that is $E[N_i(0)]=\lambda_i$ in the considered case. In Poisson arrivals one can take the expected number to be $\lambda_i=\gamma_it$, where $t$ is the duration of destruction and $\gamma_i$ is a constant independent of time $t$. In this case $A$ in (24) becomes $$A=\frac{\alpha_1\alpha_2}{c_i^2}\gamma_i^2 t^2(1-e^{-c_it_0}),$$ where $t_0=c/\alpha_2$ is the time where the dampening effect starts.
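As an illustration of how this version of $A$ behaves, the sketch below uses made-up values for $\alpha_1$, $\alpha_2$, $c_i$ and $\gamma_i$ (the text does not fix them) and evaluates (25) for a few dampening delays $c$; as in Figure 7, $A$ increases with $c$ and saturates once $e^{-c_it_0}$ becomes negligible.

```python
import numpy as np

alpha1, alpha2, c_i, gamma_i = 0.5, 1.5, 2.0, 4.0   # illustrative values only

def total_production(t, c):
    """Eq. (25): Poisson arrivals with expected number gamma_i * t,
    dampening starting after t_0 = c / alpha_2."""
    t0 = c / alpha2
    return (alpha1 * alpha2 / c_i**2) * gamma_i**2 * t**2 * (1.0 - np.exp(-c_i * t0))

for c in (0.1, 0.5, 1.0, 5.0):
    print(c, total_production(t=1.0, c=c))          # monotonically increasing in c
```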
Proton-Proton Chain: Branches II and III
========================================
The fusion of four protons to produce one helium nucleus in the pp-chain is accomplished in at least three different branches of the chain (Figure 1). This branching results in uncertainties in the predictions of the $^7$Be and $^8$B neutrino fluxes in the Standard Solar Model and needs particular attention in discussing the results of those solar neutrino experiments which detect exclusively $^7$Be and $^8$B neutrinos (Homestake and Kamiokande experiments in Tables 1, 2, and 4). Without any branching in the pp-chain, the number of all reactions and neutrinos would be equal, that is, $$\phi_\nu(pp)=\phi(^8B)=N.$$ As shown in Figure 1, $^3$He can interact with another $^3$He nucleus to produce $^4$He right away (PPI branch), or $^3$He can fuse with $^4$He to produce a $^7$Be nucleus and subsequently open branches II and III of the pp-chain. The branching ratio $r$ is determined by the reaction probabilities $<\sigma v>_{ij}$ and number densities $N_i$: $$\frac{r}{1-r}=\frac{<\sigma v>_{34}}{<\sigma v>_{33}}\cdot\frac{N_4}{N_3},$$ where the notations have been explained in the preceding section. With regard to branches II and III in Figure 1, $^7$Be can capture an electron to emit a $^7$Be neutrino, or it can fuse with a proton to produce $^8$B, which immediately decays and produces a $^8$B neutrino. The branching ratio $r'$ for PPII and PPIII is $$\frac{r'}{1-r'}=\frac{<\sigma v>_{17}}{<\sigma v>_{e7}}\cdot\frac{N_1}{N_e}.$$ With (26) and (27) the following relations between the number of chains producing $^4$He and the neutrino fluxes produced by the three branches are established: $$\phi_\nu(pp)=\frac{N}{2}(2-r), \,\,\,(PPI),$$ $$\phi_\nu(^7Be)=\frac{N}{2}r(2-r'), \,\,\,(PPII),$$ $$\phi_\nu(^8B)=\frac{N}{2}rr', \,\,\,(PPIII).$$ Equations (28) to (30) show the link between the three branches of the pp-chain, which is eventually governed by the reaction probabilities $<\sigma v>_{ij}$ and number densities $N_i$ in the system of kinetic equations in (7) and by the profiles of density and temperature of the solar model. Basic assumptions for equations (28) to (30) are that the Sun is in thermal equilibrium, which fixes the number of chains through (2), and that the nuclei responsible for neutrino production are in thermal equilibrium with the ambient plasma, which allows the neutrino fluxes to be determined from the reaction probabilities. For the latter assumption the characteristic time for significant energy exchange by Coulomb collisions between reacting species must be orders of magnitude less than the characteristic time it takes to produce a neutrino in the solar interior (Maxwell-Boltzmann reaction rates). These basic assumptions still leave open the question of what is relevant for the branching governed by the kinetic equations: the time for reducing the protons to thermal equilibrium with the ambient plasma ($\approx 10^{-20}$ yr) or the lifetime of a proton to undergo a reaction with a second proton to produce, among other species, a neutrino ($\approx 10^{10}$ yr)? This question will be addressed in the following section for three reactions of the branches II and III of the pp-chain.
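The bookkeeping in (28) to (30) is straightforward to encode. The sketch below uses invented branching ratios (not Standard Solar Model values) to compute the three fluxes for $N$ completed chains, and checks the algebraic identity $\phi_\nu(^7Be)+\phi_\nu(^8B)=Nr$ implied by (29) and (30).

```python
def branch_fluxes(N, r, r_prime):
    """Eqs. (28)-(30): neutrino fluxes of the three pp-chain branches
    for N completed 4p -> 4He chains and branching ratios r, r'."""
    phi_pp  = 0.5 * N * (2.0 - r)            # PPI
    phi_be7 = 0.5 * N * r * (2.0 - r_prime)  # PPII
    phi_b8  = 0.5 * N * r * r_prime          # PPIII
    return phi_pp, phi_be7, phi_b8

# Example with made-up branching ratios:
phi_pp, phi_be7, phi_b8 = branch_fluxes(N=1.0, r=0.15, r_prime=0.001)
print(phi_pp, phi_be7, phi_b8)
print(phi_be7 + phi_b8, 0.15)   # consistency check: phi_Be7 + phi_B8 = N * r
```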
Production - Dampening Mechanism: Laplacian Behaviour
=====================================================
Consider three sets of Laplacians of the type given in Figure 6: one set consisting of a single Laplacian with $t_0=\frac{1}{2}(1)$ units of time, a second set of five successive Laplacians with $t_0=\frac{1}{2}(0.2)$ units of time each, and a third set of eight successive Laplacians with $t_0=\frac{1}{2}(0.125)$ units of time each. Suppose that we consider the Laplacians for a total arbitrary time interval of $t=1$ unit of time. Let the total destruction of species $i$ by the one Laplacian of set 1, the five Laplacians of set 2, and the eight Laplacians of set 3 be denoted by $A_1, A_2, A_3$, respectively. Then we have from (25) $$\begin{aligned}
A_1 &= \frac{\alpha_1\alpha_2}{c_i^2}\gamma_i^2\,[1(1)^2]\left(1-e^{-c_i\frac{1}{2}(1)}\right),\nonumber \\
A_2 &= \frac{\alpha_1\alpha_2}{c_i^2}\gamma_i^2\,[5(0.2)^2]\left(1-e^{-c_i\frac{1}{2}(0.2)}\right), \\
A_3 &= \frac{\alpha_1\alpha_2}{c_i^2}\gamma_i^2\,[8(0.125)^2]\left(1-e^{-c_i\frac{1}{2}(0.125)}\right).\nonumber\end{aligned}$$ If $c_i$ is large, so that $e^{-c_i(\cdot)}$ is negligible, then the total contributions coming from the three sets are $$\frac{A_j}{A_1+A_2+A_3}=0.755,\, 0.15,\, 0.095, \quad j=1,2,3,$$ that is, 75.5%, 15%, and 9.5% for the three sets of reactions, respectively.
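These percentages can be reproduced in a few lines, since in the large-$c_i$ limit each $A_j$ in (31) is proportional to the bracketed factor $n_jb_j^2$ alone (a check under that limiting assumption only):

```python
# Large-c_i limit of (31): the exponential factors drop out and A_j is proportional
# to n_j * b_j**2 with (n_j, b_j) = (1, 1), (5, 0.2), (8, 0.125).
weights = [1 * 1.0**2, 5 * 0.2**2, 8 * 0.125**2]
total = sum(weights)
print([round(100 * w / total, 1) for w in weights])   # [75.5, 15.1, 9.4] -- cf. 75.5%, 15%, 9.5%
```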
The Laplacians can be approximated by triangles in the following way. From Figure 6 it is noted that the maximum height of the Laplacian is at $u=c$, where it equals $$\frac{\beta_1\beta_2}{\beta_1+\beta_2}=\frac{c_i}{\alpha_1+\alpha_2}.$$ Suppose that $\alpha_2=3\alpha_1$, which implies that $\beta_1=\frac{c_i}{\alpha_1}$ and $\beta_2=\frac{c_i}{3\alpha_1}$, which in turn means that the net destruction rate is three times the dampening rate. Suppose further that $\beta_1=\sqrt{3}b$, where $b>0$ is a constant, and $t_0=\frac{3}{4}b.$ Then, since $\beta_2=\beta_1/3$, the maximum height of the Laplacian in Figure 6 is $$\frac{\beta_1\beta_2}{\beta_1+\beta_2}=\frac{c_i}{\alpha_1+\alpha_2}=\frac{\beta_1}{4}=\frac{\sqrt{3}b}{4}.$$ In this case the Laplacian approximates to the triangle shown in Figure 8.
If we take three sets of triangles of the kind shown in Figure 8, where the first set consists of only one triangle with $b=1$ time unit, the second set contains 5 successive triangles with $b=0.2$ time units each, and the third set contains 8 successive triangles with $b=0.125$ time units each, and if the total areas of these three sets of triangles are denoted by $A_1, A_2,$ and $A_3$ similar to (31), then the respective areas are in the proportion 75.5, 15, and 9.5 percent (see Table 5).
-------------------------  ---------------------  -------------------------------------
Total area                  Contribution to the    $^{37}$Ar production rate of
                            9 year cycle           the Homestake experiment (Table 3)
1 triangle (Reaction 1)     75.5%                  $^8$B contributes 77.5%
5 triangles (Reaction 2)    15.0%                  $^7$Be contributes 15%
8 triangles (Reaction 3)    9.5%
-------------------------  ---------------------  -------------------------------------

Table 5.\
Triangles such as the one shown in Figure 8 have been used in the following graph, Figure 9, where $\alpha$ and $\beta$ denote the shifts in the starting points of the set of 5 and the set of 8 triangles, respectively. The starting point of the one big triangle has been chosen as $t=0$.
Conclusion
==========
The time variation of the argon production in the Homestake experiment, which is ascribed to solar neutrinos, can be explained as follows. The original Homestake data, and more distinctly the five-point moving average of the data, seem to show cycles of approximately nine years' duration. The reactions of the PPII and PPIII branches of the proton-proton chain produce neutrinos through the $^7$Be and $^8$B reactions. If one assumes that a dampening mechanism operates for three reactions of the PPII and PPIII branches as discussed above, the variation of the argon production in the Homestake experiment over time can be explained by purely statistical arguments based on the lifetimes, and their ratios, for the three reactions. For these nuclear reactions, destruction and dampening may work against each other. If the
destruction rate is approximately three times the dampening rate, if the destruction rate is $\sqrt{3}b$ for some $b>0$, and if the dampening effect starts $\frac{3}{4}b$ time units from the starting time $t=0$, then the time-variation cycles seen in the argon production in the Homestake experiment can be reproduced by considering a scenario of three sets of reactions of the PPII and PPIII branches of the proton-proton chain: one set with $b=1$ unit of time, say 9 years, the second set consisting of 5 successive reactions with $b=0.2$ time units each, and the third set consisting of 8 successive reactions with $b=0.125$ time units each.
References
Abdurashitov, J.N. et al.: 1994, Phys. Lett., 234.
Anselmann, P. et al.: 1994, Phys. Lett., 377.
Bahcall, J.N. and Pinsonneault, M.H.: 1992, Rev. Mod. Phys., 885.
Castellani, V. et al.: 1994, Phys. Rev., 4749.
Davis Jr., R.: 1993, in Y. Suzuki and K. Nakamura (eds.), 'Frontiers of
Neutrino Astrophysics', Universal Academy Press, Inc., Tokyo.
Haubold, H.J. and Gerth, E.: 1990, Solar Physics, 347.
Haubold, H.J. and Mathai, A.M.: 1994, in H.J. Haubold and L.I. Onuora
(eds.), 'Basic Space Science', AIP Conference Proceedings Vol. 320,
American Institute of Physics, New York.
Kiko, J.: 1995, The GALLEX solar neutrino experiment at the GranSasso
Underground Laboratory; these Proceedings.
Mathai, A.M. and Haubold, H.J.: 1988, Modern Problems in Nuclear and
Neutrino Astrophysics, Akademie-Verlag, Berlin.
Mathai, A.M.: 1993, Canad. J. Statist., 277.
Nakamura, K.: 1993, Nucl. Phys. Suppl., 105.
Table 1: Predictions of the Standard Solar Model for the Kamiokande,\
Homestake, and GranSasso/Baksan experiments (Bahcall and\
Pinsonneault, 1992).\
\[0.3cm\] Table 2: The four currently operating solar neutrino experiments.\
\[0.3cm\] Table 3: Predicted capture rates in SNU from various flux components of the\
pp-chain and CNO cycle for the Homestake and GranSasso/Baksan\
experiments. The uncertainties are the total theoretical range,\
$\sim 3\sigma$ (Bahcall and Pinsonneault, 1992).\
\[0.3cm\] Table 4: Comparison of the detected rates of the four solar neutrino experiments\
with the predicted rates of the Standard Solar Model (Bahcall and\
Pinsonneault, 1992). The Kamiokande flux is in units of $10^6\nu cm^{-2}s^{-1}$,\
while the Homestake and GranSasso/Baksan rates are in SNU.\
Table 5: For the destruction-dampening mechanism considered here, the area\
of the triangle governing reaction 1, the combined areas of the\
5 triangles for reaction 2, and the combined areas of the\
8 triangles of reaction 3, are proportional to the contributions\
of the three sources to the total capture rate of the Homestake experiment.
Fig. 1: The proton-proton chain and its three different branches to accomplish\
the formation of $^4$He.\
Fig. 2: The neutrino production as a function of the dimensionless distance\
variable $x=R/R_\odot$ in the Standard Solar Model of Bahcall and\
Pinsonneault (1992).\
Fig. 3: The argon-production rate detected by the Homestake experiment from\
1970.8 to 1991.6 (Davis, 1993).\
Fig. 4: The five-point moving average of the argon-production rate data as\
shown in Figure 3.\
\[0.3cm\] Fig. 5: Power spectrum of the argon-production rate data in Figure 3 obtained\
by Fourier analysis.\
Fig. 6: A non-symmetric Laplacian, slowly rising and rapidly falling.\
The function $g(u)$ is the density of $u$, describing the destruction-\
dampening mechanism for nuclear reactions involving species i.\
Fig. 7: The behaviour of the function A in (24) denoting the total destruction of\
species i.\
Fig. 8: The non-symmetric Laplacian shown in Figure 6 can be approximated by\
a non-symmetrical triangle.\
\[0.3cm\] Fig. 9: The destruction-dampening mechanism in (25) and (26), where the Laplacians\
have been approximated by triangles, can reproduce the time variation of\
the original argon-production rate within the range of the error bars\
attached to each run.
| |
(CNN) — The National Park Service proposes more than doubling the entrance fees at 17 popular national parks, including Grand Canyon, Yosemite, and Yellowstone, to help pay for infrastructure improvements.
Under the agency's proposal, the entrance fee for a private vehicle would jump to $70 during peak season, from its current rate of $25 to $30.
The cost for a motorcycle entering the park could increase to $50, from the current fee of $15 to $25. The cost for people entering the park on foot or on bike could go to $30, up from the current rate of $10 to $15.
The cost of the annual pass, which permits entrance into all federal lands and parks, would remain at $80.
The proposal would affect the following 17 national parks during the 2018 peak season:
Arches
Bryce Canyon
Canyonlands
Denali
Glacier
Grand Canyon
Grand Teton
Olympic
Sequoia & Kings Canyon
Yellowstone
Yosemite
Zion
Acadia
Mount Rainier
Rocky Mountain
Shenandoah
Joshua Tree
Peak pricing would affect each park's busiest five months for visitors.
The National Park Service said the increase would help pay for badly needed improvements, including to roads, bridges, campgrounds, waterlines, bathrooms and other visitor services at the parks. The fee hikes could also boost national park revenue by $70 million per year, it said.
"The infrastructure of our national parks is aging and in need of renovation and restoration," Secretary of the Interior Ryan Zinke said in a statement.
Of the 417 national park sites, 118 charge an entrance fee.
The proposal was blasted by the National Parks Conservation Association, a nonpartisan advocacy group.
"We should not increase fees to such a degree as to make these places -- protected for all Americans to experience -- unaffordable for some families to visit," the group's president and CEO Theresa Pierno said in a statement. "The solution to our parks' repair needs cannot and should not be largely shouldered by its visitors."
"The administration just proposed a major cut to the National Park Service budget even as parks struggle with billions of dollars in needed repairs," Pierno said. "If the administration wants to support national parks, it needs to walk the walk and work with Congress to address the maintenance backlog."
On the National Park Service's Facebook page, some commented that the proposal was reasonable since it was going to improve and maintain the parks. Others lamented that it would price working class people out of making trips that they had saved up for.
Entrance fees at several national parks, including Mount Rainier, Grand Teton and Yellowstone, went up in 2015 to their current price.
Those fee increases didn't seem to deter visitors. In 2016, the National Park Service received a record-breaking 331 million visits, which marked a 7.7% increase over 2015. It was the park service's third consecutive all-time attendance record.
Most popular National Parks in 2016 (59 total)
Great Smoky Mountains National Park -- 11,312,786 visitors
Grand Canyon National Park -- 5,969,811
Yosemite National Park -- 5,028,868
Rocky Mountain National Park -- 4,517,585
Zion National Park -- 4,295,127
Yellowstone National Park -- 4,257,177
Olympic National Park -- 3,390,221
Acadia National Park -- 3,303,393
Grand Teton National Park -- 3,270,076
| |
Smart contracts are digital contracts stored on a blockchain that are automatically executed when predetermined terms and conditions are met.
They are used to automate the execution of an agreement so that all participants can be immediately certain of the outcome, without any intermediary’s involvement or time loss.
Smart contracts work by following simple “if/when…then…” statements that are written into code on a blockchain. | https://financialanswers.in/web-stories/smart-contracts/ |
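A rough way to picture that "if/when…then…" structure is the toy sketch below. It is only an illustration written in Python — real smart contracts are deployed and executed on-chain, typically in languages such as Solidity — and all names and amounts here are invented.

```python
from dataclasses import dataclass

@dataclass
class EscrowContract:
    """Toy stand-in for a smart contract: payment is released automatically
    once the agreed condition is met, with no intermediary involved."""
    buyer: str
    seller: str
    amount: float
    delivered: bool = False
    paid_out: bool = False

    def confirm_delivery(self):
        self.delivered = True
        self._execute()

    def _execute(self):
        # The "if/when ... then ..." rule, written as ordinary code.
        if self.delivered and not self.paid_out:
            self.paid_out = True
            print(f"Released {self.amount} to {self.seller}")

contract = EscrowContract(buyer="alice", seller="bob", amount=10.0)
contract.confirm_delivery()   # condition met -> the payout executes automatically
```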
As animals age, there are functional alterations in synaptic connectivity and plasticity within the hippocampus and the entorhinal cortex. These changes are associated with age-related spatial memory deficits. Importantly, there is evidence that, at least in some experimental situations, aged rats may rely on self-motion information more than external visual cues for navigation. In order to better understand differences in the degree to which old and young animals are able to utilize external cues to update their internal representations of space, a novel behavioral apparatus was developed to allow for complete and immediate control of all visual cues in the environment. Both old and young rats were able to locate a goal location after all orienting cues in the apparatus were rotated instantaneously. Unexpectedly, aged animals tended to change their behavior to realign with the rotated cues more reliably than did young animals. Young rats tended to visit the area surrounding the goal location, but appeared to improve over time. | https://repository.arizona.edu/handle/10150/595047 |
1. Field of the Invention
The present invention relates in general to the field of information handling system management, and more particularly to a system and method for proactive management of information handling systems with in-situ measurement of end user actions.
2. Description of the Related Art
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
Information handling systems have helped to make people more productive by promoting efficient information management and communication. A downside to the widespread adoption of information handling systems is that failure of an information handling system often leaves an end user in a bad spot. With a soft failure, the end user can usually get the information handling system running again by taking corrective action. For example, a re-boot of an information handling system fixes any number of problems by resetting the data in memory that is used by the processor. Such minor failures are an inconvenience, although a poorly-timed re-boot can lose data recently generated by the end user and not yet saved to non-volatile memory, such as a hard disk drive. A catastrophic failure prevents the end user from using the information handling system, such as where a hard disk drive fails to make booting impossible. In the event of a hard disk failure, an end user typically loses all of the data stored on the hard disk drive. To prevent a loss of data, end users typically back-up data at another information handling system or storage device; however, even when data is backed-up, some data created after the time of the most recent back-up is likely still lost, and the end user faces the inconvenience of having to load the backed-up data to a replacement information handling system.
One way to avoid the inconvenience of an information handling system failure is to determine that a system is likely to fail before it actually fails, so that the likely failure can be proactively addressed before an actual failure occurs. In order to predict an imminent failure, some systems monitor operating parameters with hardware and software monitoring agents and analyze the operating parameters with reasoning tools for indications of impending failures. For example, detection of outliers, faults, and early warnings of failures uses real-time data and intelligent algorithms, but at a high expense, both in the design of the systems and in the computational resources generally needed to track all types of failure modes and their symptoms on an information handling system in the field. Further, many faults and failures do not generate patterns that are detectable in advance via data and algorithms.
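As a generic illustration of what such a monitoring agent might do — not taken from any particular system, with the thresholds, window sizes, and readings invented for the example — a simple rolling z-score check over one operating parameter could look like this:

```python
from collections import deque
from statistics import mean, stdev

class ParameterMonitor:
    """Flags readings that deviate strongly from the recent baseline,
    as an early warning of a possible impending failure."""
    def __init__(self, window=50, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value):
        is_outlier = False
        if len(self.history) >= 10:                      # wait for a baseline first
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_outlier = True                        # raise an early warning
        self.history.append(value)
        return is_outlier

monitor = ParameterMonitor()
readings = [40, 41, 39, 40, 42, 41, 40, 39, 41, 40, 40, 41, 95]   # e.g. a temperature sensor
for reading in readings:
    if monitor.observe(reading):
        print("early warning: reading", reading, "deviates from the baseline")
```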
| |
Poble Espanyol in Palma is a homage to Spain's cultural and historical past built by architect Fernando Goitia. Built between 1965 and 1967, it is a recreation of some of Spain's finest cultural and historical buildings. They were modeled almost identically on the original buildings, however they are on a greatly reduced scale. Here you will find reconstructions of the Alhambra Palace in Granada, the Arab baths in Cordoba, plus other historical towers, castles and churches that are scattered throughout Spain and represent its diverse multicultural history.
History & Anthropology
|Monday|9:00 – 20:00|
|Tuesday|9:00 – 0:00|
|Wednesday|9:00 – 0:00|
|Thursday|9:00 – 0:00|
|Friday|9:00 – 3:00|
|Saturday|9:00 – 4:00|
|Sunday|9:00 – 0:00|

|Adults|€6.00|
|Children|€4.00|
Carrer del Poble Espanyol, 55, Palma
Discover museums near Poble Espanyol in or around Palma. | https://whichmuseum.com/museum/poble-espanyol-palma-6553 |
Action Alert from the National Anti-Vivisection Society
The National Anti-Vivisection Society (NAVS) sends out a “Take Action Thursday” e-mail alert, which tells subscribers about current actions they can take to help animals. NAVS is a national, not-for-profit educational organization incorporated in the state of Illinois. NAVS promotes greater compassion, respect, and justice for animals through educational programs based on respected ethical and scientific theory and supported by extensive documentation of the cruelty and waste of vivisection. You can register to receive these action alerts and more at the NAVS Web site.
This week’s Take Action Thursday asks the next Congress to add accountability for mice, rats, and birds, who represent the vast majority of animals used for research, to the Animal Welfare Act.
Federal Action
Earlier this month, the Animal Law & Policy Program at Harvard Law School held a conference that brought together lawyers, philosophers, ethicists and government representatives to assess the first 50 years of the Animal Welfare Act (AWA). Animal advocates—including NAVS leadership—were also well-represented at the conference, and left with a sense of hope for the future.
There is a lot to criticize in a law that was originally entitled the “Laboratory Animal Welfare Act,” which has evolved into a means for authorizing/validating entities that use animals for research, education and exhibition with little enforcement of animal welfare regulations. The conference succeeded, however, on two important fronts.
First, it gathered together a wide range of experts and animal advocates to consider what can be done to improve animal welfare concerns. Second, a renewed commitment was delivered by the U.S. Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS) for enforcement actions against AWA violations. APHIS also made a commitment to increase efforts aimed at holding licensees accountable for harm they are causing to animals in their care.
Congress is now finished with the 2015-16 legislative session. But it is not too early to contact your elected officials and let them know what issues are important to you for the new session starting in January.
Please contact your U.S. Senators and Representative and ask them to amend the Animal Welfare Act to include mice, rats and birds.
Want to do more? Visit the NAVS Advocacy Center to TAKE ACTION on behalf of animals in your state and around the country.
And for the latest information regarding animals and the law, visit NAVS’ Animal Law Resource Center. | http://advocacy.britannica.com/blog/advocacy/2016/12/action-alert-from-the-national-anti-vivisection-society-179/ |
Farmer Dave Colling, formerly of the Ripley area, recently made a presentation in the Drayton, Ontario area and said that, due to stray electricity, noise and other factors, if you’re a farmer who is leasing land for wind turbines, “In the long run, you’re going to wish you never had them built on your land.” The stray electricity is akin to “living in a microwave,” he says.
And now over to Colette McLean of the Harrow area, who despite her objections to the proposed wind turbine installation under construction near her (she herself was offered leases for turbines on her land, but turned down the ‘opportunity’), now has to live with the turbines. “It’s my health, my family’s health and the viability of our farm and the value of our farm,” she recently told the CBC. “Everything my husband, my son and I have worked for, is going to be gone.”
And then there is Wisconsin farmer Scott Smrynka who has actually measured the stray voltage in his dairy barn, and notes the reduction in milk production, problems with calving, and the fact that his cows and calves are dying from mysterious causes, and show abnormal hearts and kidneys at autopsy.
Put the wind turbines where the wind is, not where the people and the animals are.
Somewhere among the pie-in-the-sky claims made by Prowind about how the wind turbine developments will actually add to the bucolic nature of the rural landscape, is the statement that farmers can use their land right up to within a couple of feet of the base for each turbine, and that the base doesn’t take up much space.
Here is a photo taken by the CBC of the base for a turbine being constructed right now near Harrow, Ontario. THAT is how big the base really is (some of it will be underground when they backfill the earth). This base is for a turbine that is 120 meters high—the ones planned for North Gower are much larger.
| |
This study examined how teacher educators’ perceptions of their undergraduate students’ classroom agenda influenced subsequent expectations for trainee performance; more particularly, how those perceptions shape the ways in which instructional demands are defined, communicated, and enforced or relented over the span of an undergraduate course. Three teacher educators teaching two courses were studied along with a group of students who were enrolled in both courses. Data collection consisted of nonparticipant observation, interviews, and document analysis. The results indicate that the teacher educators developed perceptions of student agendas that in some regards were closely similar but in other ways were sharply divergent. Further, each instructor developed a perception of her students’ classroom agenda that was somewhat congruent with her own intentions for the class and her own standards for student intentions and actions. Accordingly, expectations for trainees’ classroom performance were communicated in ways that reflected the degree of congruence between perception of students’ agenda and the instructors’ own definition of desirable student characteristics.
Kim C. Graber currently is with the Department of Physical Education and Human Movement Studies at the University of Oregon, Eugene, OR 97403. | https://journals.humankinetics.com/abstract/journals/jtpe/10/1/article-p49.xml?rskey=tOo3qG&result=1 |
Right Under Your Nose features selections from the collection of JJ Murphy and Nancy Mladenoff, a compendium of more than 3,200 children’s printed handkerchiefs and related ephemera gifted to Shelburne Museum in 2020.
Including automatons, mechanical banks, toys, and whirligigs, this online exhibition brings Shelburne Museum’s collection to life. Whether by turnkey, button, string, or breeze, these objects have been carefully reactivated, many for the first time in more than half a century. Featuring short digital videos, the exhibition captures these rare performances, allowing contemporary audiences the opportunity to watch these historical objects spring into action.
Commissioned to celebrate the Museum’s 75th anniversary, Nancy Winship Milliken: Varied and Alive, is a site-specific outdoor sculpture exhibition that embodies the Museum’s commitment to environmental stewardship and sustainability while also engaging in global and local ecological conversations, from climate change to Lake Champlain’s watershed history.
Maria Shell: Off the Grid features a selection of vibrant, contemporary quilts that push the boundaries of the traditional gridded format of the American quilt. This online exhibition is the precursor to an exhibition featuring fourteen works by Shell created between 2011 and 2022 on view in the Dana-Spencer Textile Galleries at Hat and Fragrance from May 15–October 16, 2022.
Luigi Lucioni: Modern Light highlights landscapes, portraits, still lifes, and related ephemera by painter and printmaker Luigi Lucioni, illuminating a body of work in conversation with early 20th-century American modernist painters, photographers, writers, and musicians. This online exhibition is the precursor to an exhibition at the Museum on view from June 25 to October 16, 2022.
Our Collection: Electra Havemeyer Webb, Edith Halpert, and American Folk Art is a virtual exhibition celebrating the friendship between two visionary women who thoughtfully assembled one of the world’s most revered collections of American folk sculpture for Shelburne Museum.
Mary Cassatt’s Impressions: Assembling the Havemeyer Art Collection explores the enduring friendship between Cassatt and the Havemeyers and highlights archival anecdotes and primary sources detailing their acquisitions of Impressionist paintings, drawings, and sculptures based on Cassatt’s advice.
Drawn from Shelburne Museum’s extensive permanent collection, Pattern & Purpose brings together twenty masterpiece quilts made between the first decades of the 1800s and the turn of the twenty- first century, ranging from carefully-pieced Lemoyne stars and embroidered “best quilts” to more recent “art quilts” by contemporary makers.
A. Elmer Crowell: Sculptor, Painter, Decoy Maker explores the artistry and innovation of the acclaimed carver’s ornamental birds. Drawing from Shelburne Museum’s renowned decoy collection, the exhibition features important milestones that chart the development of Crowell’s prolific artistic career, from the earliest miniature goose he carved in 1894 to the very last bird he made before retiring in the early 1940s.
Drawing heavily from Shelburne Museum’s permanent collection and supplemented by strategic loans from contemporary artists and private collectors, this exhibition will explore the creative ways animal forms have been adapted to create a wide range of beautiful and functional household objects. Ranging in date from the 18th century to the present day, the selected decorative art objects explore complex themes related to animal/human bonds, including domestication, emotional connections, and ethical treatment.
American Stories explores the ways Americans celebrate the bonds of friendship, family, and civic identity through remarkable and unexpected objects drawn from Shelburne Museum’s permanent collection.
This inaugural online exhibition explores Electra Havemeyer Webb’s idiosyncratic, intuitive, and imaginative approach to collecting, and features well-known masterpieces and treasures in Shelburne Museum’s diverse collection. | https://shelburnemuseum.org/online-exhibitions/ |
I think we can all agree there are many days where training more closely resembles a chore than fun. The dread of getting out the door for a big workout that has been glaring at us from the pages of our training plan can make us question whether it's really worth it. Be glad to know it's a natural feeling for runners of all abilities from time to time.
That being said, a chronic lack of motivation can be detrimental to both our mental health and a future race performance as our training quality begins to suffer. Thus, it's crucial to recognize the situation early and make changes to mitigate it. I have found success using each of the six methods below (or a combination of them) to help me reframe my training in a way that sparks fun, reignites motivation and reminds me why I continue to lace up my shoes every day.
The focal point of most running literature and training plans is physiology and the science of running. In other words, the spotlight is on determining which workouts we should be doing, at what paces, how often, and at what points in the training cycle. This has led to a mentality that in order to run X time for our goal race, we must run our workouts at Y pace. Otherwise, we are doomed. The problem with this approach is it can zap much of the enjoyment out of running, and it does not take into consideration the many other life factors that contribute to performance. Science has its place, but we should go on feel and connect with our inner self first and foremost. Too often people get injured overstepping themselves to hit numbers, or they limit themselves by slowing down when they see numbers that are faster than what they are used to.
As children, everything we do is play. When we are kicking around a soccer ball as a youth, we are not thinking about it as training or placing importance on getting the perfect number of kicks in, kicking each ball the perfect distance to develop the right legs muscles, and so on. It is play; yet, it serves as training simultaneously. Use the same approach with running. Treating our running as adult play can allow us to find joy in running at varying speeds across varying terrain and distances. Remembering the importance of adult play and not excessively focusing on the science of hitting splits can reignite motivation to get out the door. Running is, after all, meant to be enjoyable.
Wearable technology makes it way too easy to gather more data than we can even begin to process or use to our advantage. We too often allow the flashing numbers on the screen of our watch’s dashboard dictate how we think we should be feeling and alter our workout on the fly. On the one hand, I love technology and the ability to track every little piece of my workout (I admit I am addicted to the Garmin Forerunner 645M). On the other hand, too much analysis can be detrimental and counterproductive.
Running by feel and perceived effort is what truly matters during training (for more information about the effort versus data debate, I highly recommend the book Endure by Alex Hutchinson). To genuinely ditch the data, try not wearing a watch at all, or at least turn off the GPS functionality, so your pace is solely based on your effort with no outside influences. If you cannot resist wearing your GPS watch, then turn off all sound/vibration notifications and try not to look at it during the run. That way, you still have the data to analyze, but you didn’t allow the watch to be the dictator during the workout itself.
Shifting the focus of running from being a prescribed chore to an opportunity to explore can emphasize running’s inherent naturalistic roots. There is no better way to achieve the exploration mindset than by trail running. Trail running allows you to remove any thoughts of hitting splits in favor of connecting with yourself and your surroundings. There is something inexplicably empowering about adventuring into new territory and being one with nature.
Aside from breaking the mold and monotony of prescribed training, trail running provides a number of physiological benefits. The softer footing reduces the risk of injury, the uneven ground improves lower leg strength, stability, and coordination, and the often hillier terrain increases strength, aerobic capacity and the ability to alter speeds. There is no run more gratifying than exploring and conquering fresh territory.
Too often, we get so caught up in running-related numbers, namely miles run per week, that we use these metrics as our sole indicators for fitness and readiness to perform. This can be a slippery slope as we begin to channel more and more of our focus and energy on running itself and neglect the supplementary activities that allow us to perform better at the goal activity. While you may be able to do this for a while, it is also an easy route to an out-of-left-field injury.
To counter this pitfall, I find it helpful to remind myself that I am an athlete first and a runner second. This shift in mindset places my training focus on how I can best prepare myself as an athlete for a goal competition, as opposed to placing my training focus on how much running I need to complete to satisfy some arbitrary mileage number. With this mindset, I don't push crucial training activities (weight lifting, core strengthening, form drills, etc.) aside in favor of eking out a few bonus miles.
During a training rut, it can be very productive to shift the focus away from solely running onto some of the other athletic activities you enjoy. For example, you could replace some of your runs with a basketball game, a bike ride, or a game of tennis. These activities contribute to your all-around athleticism and allow you to maintain fitness while taking your mind off of running long enough to re-discover your motivation.
Nobody is perfectly symmetrical. Observe any runner, and it is pretty easy to spot asymmetries in her running form. Perfection is impossible, but we can strive to come as close to it as possible. By improving strength, mobility, and efficiency in your form you raise your ceiling of performance potential. Shifting the focus of your runs toward mastery of technique (cadence, arm swing, foot landing, etc.) gives you new goals to work towards and places stale mileage and speed goals on the back burner.
As humans, we now live in an unnatural environment that goes against our physical and mental development throughout existence. The world we designed for ourselves to make life more comfortable (e.g., the chair) has consequences on our health and ability to attain peak performance because we never learn how to optimize the use of our body to perform (strength, speed, power, etc.). Much of our adaptation to our surroundings throughout life is contradictory to our needs to perform our best athletically, which is why focusing on improving our strength, form, and mobility are so crucial for raising our ceiling of potential.
Getting a training partner or training group is perhaps the simplest and quickest way to gain newfound motivation in your training. Training partners hold one another accountable for showing up and putting in the work. It also creates an atmosphere of friendly competition that will motivate you to push harder in your workouts.
Even if you cannot think of someone at your exact level to train with every day, you can surely find someone to meet up with a few times per week. If your struggle is pushing your hard workouts hard enough, then find someone slightly faster than you for those days. If, on the other hand, you want someone there to converse with during the easy miles, then it is perfectly fine to run with someone slower than you so you can take your recovery seriously.
We all go through training ruts, which often result from mental stress and fatigue. By shifting the backdrop and mental focus of our training, we can quickly renew our motivation and remember our purpose for training in the first place. Sometimes we need the reminder that running is meant to be FUN.
By Chris Robertson. Robertson races competitively for Chicago’s Fleet Feet Nike Racing Team. He holds a marathon personal best of 2:24 and is the Beer Mile American Record holder (4:46). He is currently training with the goal of qualifying for the 2020 Olympic Trials Marathon and defending his 2017 Beer Mile World Title while working full-time as a Technology Consultant and pursuing additional entrepreneurial endeavors. | http://www.fleetfeetpittsburgh.com/news/how-to-reinvigorate-your-training-and-set-a-new-personal-best |