How to Use a Laptop as a Monitor
This article explains how to use Miracast, third-party software, or a remote desktop solution to add a laptop as a second monitor to your system.
How to Add a Laptop as a Monitor With Miracast
Windows 10 systems come with a feature called Miracast that lets you project your current computer's display to a different computer. The only requirement is that both computers are running a recent enough version of Windows 10 to include Miracast.
If this option is available to you, it's the easiest way to use your laptop as a monitor.
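Before you start, you can check whether a machine's wireless driver reports Miracast support. The sketch below is a minimal, hypothetical check assuming a Windows PC where the standard netsh tool is on the PATH; the exact wording of the driver report ("Wireless Display Supported") can vary between Windows builds, so treat it as a hint rather than a definitive test.

```python
import subprocess

# Ask the Wi-Fi driver for its capability report; Miracast-capable
# systems normally include a "Wireless Display Supported" entry.
result = subprocess.run(
    ["netsh", "wlan", "show", "drivers"],
    capture_output=True,
    text=True,
)

# Keep only the lines that mention wireless display support.
support_lines = [
    line.strip()
    for line in result.stdout.splitlines()
    if "wireless display" in line.lower()
]

if support_lines:
    print("\n".join(support_lines))
else:
    print("No wireless display entry found; review the full "
          "'netsh wlan show drivers' output manually.")
```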
- Start on the laptop you want to use as a monitor. Select the Start menu, type Settings, and select the Settings app.
- In Settings, select System.
- On the Display screen, select Projecting to this PC from the left menu.
- On the next screen, set the first dropdown to Available everywhere. Set the second dropdown to Every time a connection is requested. Set the third dropdown to Never (unless you want to require a PIN when projecting to this laptop screen, in which case select Always).
Make a note of the PC name listed in this window. You'll need it when projecting your display to the laptop from your other Windows 10 machine.
- Switch to the computer you want to cast your display from. Select the notifications icon in the lower right corner of the desktop. Select the Connect icon.
- You'll see the system search for available wireless displays. The laptop you set up as an available display will appear in this list. Select the display to connect to it.
An alternative way to access this connection is to open Windows Settings, select System, select Display, scroll down to the Multiple displays section and select Connect to a wireless display. This will open the same display search window where you can select the secondary laptop display to connect to.
- On the secondary laptop, you'll see a notification that a connection is in progress. Select the permissions option you prefer. If you don't want to see the notification again, just select Always allow.
- A new window will appear with the display of the primary computer that you're projecting from.
Project to Your Laptop Screen With a Third-Party App
If one or both of your computers isn't running Windows 10, you can cast your screen to your laptop display using a third-party app instead.
In this example, we'll use Spacedesk to project to a secondary laptop screen. Spacedesk requires you to install the main program on the laptop you want to project your display from, and the Viewer program on the computer you want to project your display to.
- First, download and install the Spacedesk software on the laptop you want to project your screen from. The software is available for Windows 10 or Windows 8.1 PCs, either 32-bit or 64-bit.
- Once installed, select the notifications area of the taskbar and select the Spacedesk icon. This will open the Server window, where you can confirm that the status is ON (idle).
If the status isn't ON, select the three dots on the left side of the window and select ON to enable the server.
- On the second laptop where you want to project your display, install the viewer version of the Spacedesk software. On the last step of the install, select Launch spacedesk Viewer. The Viewer software is available for Windows, iOS, or Android devices. On all systems, the Viewer software interface looks the same.
- In the Viewer application, select the server that the software detects on the network. This will turn the laptop running the Viewer software into an extended display for the desktop that's running the Server software.
- You can then use the Display settings on the desktop PC to adjust the external display's resolution and position.
Other software packages are available that can help you accomplish the same thing.
How to Use Chrome Remote Desktop
Another quick and simple solution to use a laptop as a monitor is to take advantage of Google's free Chrome Remote Desktop service.
This solution is ideal in a scenario where you want to mirror your screen to another monitor so that other people can see it. Chrome Remote Desktop will let you display your desktop on the laptop's screen.
- On the computer you want to project the screen from, visit remotedesktop.google.com and select Remote Support from the two links at the top of the page.
- On the next page, select the download icon in the Get Support section.
- Once the Chrome extension is installed, return to the same page. You'll now see a Generate Code button that you can select.
- This will display a code that you'll need on your laptop later. Make a note of this code.
- Now, log into the laptop you want to project your screen to. Visit the Google Remote Desktop page and select Remote Support, but this time scroll down to the Give Support section. Type the code you noted above into the field in this section.
- Once you select Connect, the laptop screen will display the screen from the original computer where you started this process.
You will notice that Chrome Remote Desktop displays all of the screens attached to the remote system. If you only want to show one screen on the laptop, you'll need to disconnect the other screens so that only a single display is shared with the remote laptop.
Source: https://www.lifewire.com/use-laptop-as-a-monitor-5072964 Date: 23.12.2021 | https://www.ibik.ru/use-laptop-as-a-monitor |
Finding a viable and sustainable business model is a major challenge not only for startup companies. Established companies are re-thinking their existing business models and exploring new business opportunities. The Business Model Canvas is currently one of the most popular frameworks for business model innovation. While computer-aided design (CAD) tools are well-established in mechanical engineering, business model design is still mostly done using pen-and-paper methods. In this paper, we (1) discuss benefits and shortcomings of the Business Model Canvas approach, (2) show how it can borrow techniques from General Morphological Analysis to overcome shortcomings, and (3) derive three key requirements for future collaborative CAD tools for business model design. Our analysis contributes to an understanding of how software support can improve collaborative design and evaluation of business models. | https://wwwmatthes.in.tum.de/pages/ux2a18x5xij9/Ze14-Improving-Computer-Support-For-Collaborative-Business-Model-Design-And-Exploration |
Go North West is working with ITO World to integrate the information which will be used by those using its Manchester service to make more informed travel decisions.
Many Brits won't give up their cars, even for the environment
11 Apr 2019
Concern for the environment is the least popular reason to switch from a private car to alternative modes of transport, says a new survey.
Manifesto urges cities to embrace MaaS
14 Sep 2018
Ito World calls on cities to unlock the potential of mobility-as-a-service to open up opportunities for citizens as well as combat congestion, pollution and private car dependency
First global bike-share data feed launched
10 Nov 2017
Ito World has curated the disparate data of bike-sharing schemes around the world into a single feed that can be integrated into mobility platforms
If it moves, Ito Motion can visualise it
14 Jun 2017
Ito World is launching its first off-the-shelf product which helps transportation and infrastructure professionals easily visualise data
Special Reports
More cities declare climate emergencies: Will it spur real change?
Experts praise cities for highlighting the scale of the climate crisis but urge them to follow up with concrete actions. | https://www.smartcitiesworld.net/news/news?tags=Ito%20World |
Saline soils form mainly along coastlines and at the edges of mountains and hills. Coastal saline soils may form when: (1) seawater intrusion enters the groundwater, which then carries salt to the surface; or (2) aquaculture ponds are built along the coast, and the saltwater introduced into the ponds leads to salt accumulating in the soil. Saline soils along the edge of a hillside can form when rain infiltrates the hill, dissolves salt, flows to the downhill side, and deposits the salt on the surface. If the salt (sodium chloride) content of the soil is too high and crops absorb excessive chloride ions, the leaves will wither. If the crops absorb excessive sodium ions, the roots become dehydrated and unable to absorb magnesium ions and other nutrients. In our living environment, the land available for agriculture is always limited. There are many reasons why the area of cultivated land is shrinking; over-cultivation leading to land degradation and environmental factors leading to excessive salt content are among the most common.
Characteristics of ideal arable land
What are the basic characteristics of ideal arable land? Besides limits on salt, alkali, and mineral content, the amount and distribution of organic matter in the soil are important indicators. In Figure 1, the left side shows the ideal distribution of organic matter as presented in textbooks, and the right side shows a natural, uncultivated soil section from the Hawaiian Islands. The upper layer of dark humic soil and its distribution are clearly suitable for cultivating most crops. In Figure 2, the left side shows the distribution of organic matter (organic carbon) in fertile soil, the middle shows the distribution of organic matter in degraded soil, and the right side shows an overlaid comparison of fertile and degraded soils, representing the organic matter that needs to be supplemented and restored.
Figure 1
Figure 2
Repair Steps For Saline Soils
The restoration of saline soils requires two major steps. The first step is to reduce and remove the salt content. The second step is to restore the distribution of organic matter so that it approaches the ideal distribution of soil organic matter.
Reduce Salt Content
How can the salt in the soil be reduced and removed? The preferred method is to leach it out with fresh water. Depending on the situation, several approaches can be used, for example: (1) pile up soil around the base, pour in fresh water, and drain the water at a low point; (2) dig ditches around the area, pour in fresh water, and lead the salt water into the ditches to drain away; (3) irrigate with fresh water and drill wells downstream to drain out the salt water; (4) if salt and soil are lumped together, rinse the soil several times.
Mixed With Organic Compost Matters
After the soil has been washed or watered with fresh water and a large part of the salt has been removed, Organic Compost Matter (organic matter > 30%) is mixed into the soil. Mix in at least 5-8 tons of Organic Compost Matter per mu, working the organic nutrient soil into both the surface and deeper layers of the soil, so as to provide enough organic material to modify the soil's properties.
Diversified Soil Microorganisms and Enzymes
Because Organic Compost Materials can absorb and retain water, spray water to keep the soil moist, and apply soil improver, diversified soil microorganisms, fertilizer, or enzymes to promote thorough fermentation of the organic materials in the soil. In addition, we have introduced a new type of enzyme, itself a soil improver that combines fruit enzymes and soil microorganisms, which can increase fermentation speed and reduce odor.
| http://vitabio.com/msw/organic_compost.html |
Maisons à pans de fer et revêtement de faïence
This drawing, published by Viollet-le-Duc in 1872 in his famous Discourses on Architecture, has become a legendary image, a true architectural icon. The Entretiens, which appeared from 1863 to 1872 and are considered to be the foundation of modern architecture, were almost a bible for architects such as Horta, Guimard, Gaudi and Sullivan. In Entretien XVIII (1872), devoted to private architectural projects, Viollet-le-Duc designed a load-bearing framework made entirely of metal, an idea already illustrated in 1871-1872 with the Menier chocolate factory, built by Jules Saulnier in Noisiel.
Viollet-le-Duc's bold innovation was to apply this system to a residential building, and to have the metal structure exposed on the facade, and projecting. A discreet polychromy enhanced the facade and the squares of the cladding, playing on the iron framework and emphasising the structural lines, the fusion of structure and decoration being one of the architect's fundamental ideas. A few months after the fall of the second Empire, he proposed an alternative to Haussmann's aesthetic. The Prefect had in fact been a supporter of a style of town planning where observing a coherent outline and alignment, prohibiting projections, specifying the use of stone and reduced ornamentation, all ensured the uniformity of the city. This design, on the other hand, was a play on volume, colour and materials. Presented as a fantasy, the carefully drawn details of the young woman and the shop window nevertheless give this drawing an intense realism.
When speaking of this study, Viollet-le-Duc affirmed: "I do not claim in any way to present this fragment as a standard design for future apartment buildings or as the architecture of the future, but rather as a study uninfluenced by the methods that modern industry now brings to the art of construction". Subsequently, however, building regulations would break with the principle of alignment and urban regularity. The buildings of the Perret brothers (apartment block at 25 bis, rue Franklin, 1903-1904), of Frantz Jourdain (shop at no. 2 rue de la Samaritaine, 1903-1907), and of Henri Sauvage (apartment block at 26, rue Vavin, 1912-1913), would consolidate the triumph of architectural variety by taking up the ideas of Viollet-le-Duc thirty years later. | https://www.musee-orsay.fr/en/artworks/maisons-pans-de-fer-et-revetement-de-faience-149870 |
Welcome to my new StateTech column about government engagement -- the place where the rubber meets the road in the public sector. In this space, in print and online, I'll cover the internal and external interaction between government and its citizens in the process of democracy and governance.
While there will certainly be references to traditional or conventional forms of government and public engagement, the focus here will be on the use of online communication as the new form of government engagement that is or is not occurring in today's government-to-citizen, citizen-to-government and government-to-government interactions. It will focus on government issues, events, projects, policies and programs.
Since the Internet's creation, online contact between government and citizens could be described as a transactional exchange of information and communication in which governments post content to their websites for citizens to view or download. Some governments also allow citizens to purchase licenses or pay fees online.
Within government, administrations have to work especially hard to ensure cross-agency communication and cooperation. Duplication of effort or operating within silos not only decreases efficiency but also requires more resources (people and money) to achieve desired results. One might expect new technologies that improve interaction within government administrations to be a welcome development. But as many of you have experienced, this is not always the case.
Getting to Gov 2.0
New online processes make use of the emerging collaborative technologies known as Web 2.0, social media and social networks. As applied in the public sector, this approach is called Gov 2.0.
The main improvement Gov 2.0 offers over earlier online communication technologies is the development of horizontal or collaborative communication structures. This elevates the earlier, two-way transaction to a level of individual or group collaboration, replicating conventional forms of group communication such as a town hall meeting or staff meeting where ideas can be exchanged and discussed. Benefits here include the ability to share and build knowledge from diverse points of view and provide a method to gauge public sentiment or perhaps build consensus.
Many contend that government lags behind the private sector in utilizing 2.0 technologies and is still trying to get Gov 1.0 right, whatever that is. Indeed, the process has been slow and incremental. State and local governments face challenges above and beyond the private sector in using the Internet to administer programs and services and engage their constituents.
A combination of legal, political, social and economic challenges are at the heart of government's ability, desire and hesitancy to use the web. Likewise, citizens' acceptance of, access to, and even their interest in utilizing these online tools to interact with their government are still being debated.
The devastated economy has adversely affected most, if not all, governments, resulting in reduced or eliminated programs and services. That provides a valid cost-benefit argument for investing in and maintaining online communication technologies, whether or not they will be embraced by constituents.
No one doubts that these new technologies are headed for wider adoption. It is inevitable, considering the younger generations' preferences and expectations to use social media to communicate and share information. However, the impact of this technology on our longstanding institution of government is still unclear. | https://statetechmagazine.com/article/2009/12/communication-conundrum |
The core programs of the Federation of Child Care Centers of Alabama are child care, CATCH, SRBWI, Civic Engagement and AOP.
- Child care training, technical assistance, advocacy, organizing, and leadership development: improving, supporting, and promoting accessible licensed child care in homes and centers throughout the state. Activities include regional and state workshops and meetings, and community meetings, to educate both child care providers and parents about child care funding and policy, and organize them to advocate for change. FOCAL has been involved in every major policy practice and decision regarding child care in Alabama since 1972. During the past decade, FOCAL’s attention has been on the quality and funding of child care in Alabama, in both home settings and centers.
- Community development using FOCAL’s MCTT™ (More Is Caught Than Taught) and CATCH™ (Communities Act to Create Hope); Out of its work in communities, FOCAL has created two effective and replicable processes: More is Caught than Taught (MCTT™) and Communities Act to Create Hope (CATCH™) . MCTT™ builds on the reality that children absorb and internalize messages from their environment about their worth and prospects for success and prosper in circumstances where adults challenge their own internalized oppression, relate to each other with respect and appreciation and experience their power to bring about change. MCTT™ is tailored to the concerns of community-centered child care. CATCH™ adapts MCTT™ for a broader array of community organizations and groups. A unique component of CATCH™ is its direct approach to facing the external and interior barriers to community action. The process guides people to examine their own lives courageously and free themselves from perceptions of inadequacy and powerlessness. The process transforms people and communities from the inside. Hope emerges as people experience new models of cooperation and power sharing. Hope generates responsibility, action and change.
- Dismantling Racism: Through all its programs FOCAL addresses racism from the systemic to the personal levels. FOCAL’s Raising the Curtain on Race conference in April, 2013 was a long-awaited and much needed conversation about race and its impact on our communities. Dr. Shakti Butler utilized her film “Cracking the Codes” as a tool to deepen individual’s thinking around racial disparities. Participants were encouraged to take the time to have much needed conversations acknowledging that there are open wounds of racism and inequities that are being constantly repeated. She shared several resources to participants to aid them in moving the conversations forward in their communities. Many participants confirmed that this conversation gave them the courage and the tools to have a discussion in their local communities. As a result of this conference, many participants returned to their communities and initiated discussions on race with community members.
- Civic Engagement: FOCAL’s aim is to inform and develop community members to be strong advocates for themselves and for the issues they want to address.
- Southern Rural Black Women’s Initiative for Economic and Social Justice (adults and young women): advancing economic and social justice for women and young women in the Black Belt counties of Alabama. Activities are conducted to attain improvement in economic and social conditions by organizing, educating and building community projects.
- Alabama Organizing Project: FOCAL collaborates with five other Alabama grassroots organizations in implementing a Grassroots Leadership Development program and a community liaison program for emerging local community leaders. AOP partners effectively strategize, organize, and advocate for policy change, particularly in child care, transportation, health care reform and constitutional and tax reform.
- Alabama Child Care Alliance: guiding and supporting the growth and development of a coalition of agencies, organizations, child care facilities and parents that is committed to ensuring that every child in Alabama has access to early care and education environments that are designed and inspected to be safe, healthy, and supportive of the well-being of children.
- Center for Community Change: FOCAL trained civic engagement fellows to hold town hall meetings in their communities and to inform community members about retirement security issues and the links with poverty and fair and just wages. | http://focalfocal.org/index_page_id_9.html |
The Cleveland Restoration Society was founded in 1972 as a downtown advocacy group after the loss of some important historic resources. We realize that maintaining a vibrant downtown is crucial to growing and maintaining a vibrant region. Historic buildings and historic preservation tax incentives have played an important role in the renaissance of the Historic Warehouse, Gateway, and Playhouse Square neighborhoods.
We continue to be committed to the preservation of Cleveland’s urban core and balancing preservation with economic development.
Today, several high-rise buildings downtown are empty or under-utilized. Many from the Midcentury modern era are encountering issues of material conservation and energy efficiency.
Downtown buildings between East 9th and 12th on Euclid Avenue are awaiting adaptive use and rehabilitation.
How You Can Help
For more information on how you can help save these resources or to join the Society's Advocacy and Public Policy Committee, call (216) 426-1000. | http://www.clevelandrestoration.org/preserving_landmarks/advocacy/downtown_cleveland.php |
The second annual report by the ECRI Institute identifies EHR data integrity near the top of its list of patient safety hazards in hospitals. Although many hospitals have invested heavily in systems designed to detect mundane errors in electronic medical records, incorrect or missing data continue to jeopardise safe care delivery. The report advises training staff to understand how important it is to immediately address IT problems, since errors in EHR systems can lead directly to medical mistakes.
According to the report, most of the EHR problems frequently faced by hospitals involve easy access by multiple users, technology complexity, and the dependence on that technology for patient care. Poor communication at the time of patient transfers to different departments is also a major contributor of errors in the EHR, which may be remedied by standardising processes for patient transport and handoffs.
Data can be erroneously entered into EHRs when one patient’s information appears in the record of another patient, when there are clock synchronisation errors between medical devices and systems, when default values are used by mistake or fields are pre-populated with the wrong data, or when there are inconsistencies between paper and electronic records. Missing, delayed or outdated data represent another source of error that can impact patient safety.
Alarm hazards again topped the ECRI’s list of patient safety hazards, although it is not alarm fatigue but faulty configuration policies and practices that put patients in danger. “Our accident investigations have found that hospitals have either not had consistent or not had any practices to determine how alarms are set by care area or by patient type,” said James Keller, the ECRI Institute’s vice president of health technology evaluation and safety. “It doesn’t make sense to use the same default alarm settings in paediatric intensive care as in adult intensive care.”
The ECRI report recognises the following as the 10 most prominent patient safety hazards:
1. Alarm hazards: inadequate alarm configuration policies and practices
2. Data integrity: incorrect or missing data in EHRs and other health IT systems;
3. Managing patient violence;
4. Mix-up of IV lines leading to misadministration of drugs and solutions;
5. Care coordination events related to medication reconciliation;
6. Failure to conduct independent double checks independently;
7. Opioid-related events;
8. Inadequate reprocessing of endoscopes and surgical instruments;
9. Inadequate patient handoffs related to patient transport;
10. Medication errors related to pounds and kilograms
The insights come from patient safety reports voluntarily sent to the ECRI Institute. The institute compiles and analyses its findings, and shares them in order to improve awareness of patient safety hazards and recommend solutions. From 2009 to 2014, it collected almost 500,000 reports. | https://healthmanagement.org/c/it/news/poor-ehr-data-integrity-threatens-patient-safety |
NOTICE:
This publication is available digitally on the AFDPO WWW site at: http://www.e-publishing.af.mil.
Certified by: SAF/FMB (Maj Gen Stephen R. Lorenz, USAF) Pages: 7 Distribution: F
OPR: SAF/FMBMM (Mr Joe Farrell)
This instruction contains Commanders' responsibilities for exercising good financial management and cost stewardship in the expenditure of funds to support contingency operations. This instruction implements Air Force Policy Directive (AFPD) 65-6, USAF Budget Policy. 1. Purpose. 1.1. The purpose of this AFI is to provide guidance for administrative, general support, and quality of life expenditures at deployed locations. It is not all-inclusive, but does include references for specific areas as needed. Appropriated funds, e.g., Operations and Maintenance (O&M) funds provided by Congress to the Air Force for contingency operations at deployed locations, are to be spent on those requirements which are directly attributable to the deployment of forces to the region in which the contingency operations are taking place, as well as incremental costs at home stations incurred in support of such operations. That is, these are incremental costs to the Air Force that would not have been incurred had the contingency operation not been supported (DoD FMR 7000.14-R, Volume 12, Chapter 23). Commanders and deployed personnel are responsible for ensuring all purchases are necessary, prudent, and limited to those needed to support deployed mission operations. 1.2. Records Disposition. Ensure that all records created by this AFI are maintained and disposed of IAW AFMAN 37-139, Records Disposition Schedule. 2. Responsibilities. It is imperative that commanders, and those upon whom they rely for execution of the operation, exercise sound financial management and cost stewardship in the expenditure of any Government funds to which they may have access, including funds to support contingency operations. Commanders must ensure that all expenditures are accurately documented and correctly reflected in summary logs. AFI 65-601, Volume I, Budget Guidance and Procedures, provides detailed guidance for several of the areas addressed in this AFI. Other related and applicable AFIs and DoD publications are referenced at Attachment 1.
3. Applicability and Scope. This instruction applies to all Air Force, Air Force Reserve, and Air National Guard activities. 4. Policy. Funding policies in AFIs are based upon law, direction in Congressional Reports, OSD policy, and corporate Air Force decisions. They are applicable to funds directly appropriated to the Air Force, as well as to funds received from DoD transfer accounts, e.g., Overseas Contingency Transfer Funds (OCOTF) and Defense Emergency Response Funds (DERF). Exceptions to this policy must be clearly identified in official AF, DoD, or Joint Staff policy guidance. 4.1. Contingency Operations, in many instances, involve deployment of members to austere and isolated locations, devoid of the amenities that are an integral part of the American standard of living and commonplace at established Air Force installations. However, the purpose of appropriated fund expenditures at deployed locations is not to recreate or duplicate all the support amenities available at established Air Force installations. Rather, it is to support contingency site operations at a level commensurate with contingency operation requirements, the deployment of members on scheduled rotations, and anticipated duration of contingency sites. All purchases made pursuant to this AFI will be consistent with this principle. 4.2. Maintenance, Repair, and Construction funding rules and processes for contingency operations occurring outside the United States are described in AFI 32-1032, Planning and Programming Appropriated Funded Maintenance, Repair, and Construction Projects, Chapter 7. 4.3. When available, DoD transfer funds can be used only to support contingency operations. All personnel charged with the expenditure of regular appropriated funds or transfer funds are still bound by existing regulations and appropriations laws, regardless of funding availability or source. In addition, available resources must not be perceived as unlimited, but rather as limited by what is necessary to accomplish the mission as authorized in existing Air Force guidance. Therefore, lavish or extravagant expenditures are unacceptable and not permitted. Additionally, expenditures should be scrutinized for the appearance of impropriety. The absence of a prohibition on a specific use of appropriated funds at deployed locations does not constitute authority to use them. 4.4. If designated approval authorities or the Commander are uncertain regarding the propriety of a proposed expenditure, they should consult the appropriate Judge Advocate General (JAG), Financial Management (FM) personnel, and/or the applicable forward deployed HQ functional for that deployed location. Expenditures that have had the appearance of impropriety have compromised Air Force credibility and resources during the appropriations process. Commanders at all levels must ensure effective review procedures are in place to avoid improper expenditures. 4.4.1. Deployed personnel shall make every effort to avoid any expenditures or procurement actions creating the appearance that they are violating the law or applicable ethical standards. Whether particular circumstances create an appearance that the law or these standards have been violated shall be determined from the perspective of a reasonable person with knowledge of the relevant facts. 5. Training and Oversight. 5.1. Training. The MAJCOM Comptrollers and FM personnel at other headquarters responsible for deploying personnel will provide training on this AFI to deploying FM personnel and commanders.
The MAJCOM Comptrollers may delegate training responsibility to numbered Air Force or Wing Comptrollers, but are ultimately responsible for FM training within their Commands. 5.2. Oversight. The MAJCOM Comptrollers and FM personnel at other headquarters responsible for deploying personnel will establish procedures for regular oversight reviews of purchases in their areas of responsibility (AORs). These reviews will include government purchase cards and fund cite authorizations, i.e., AF Form 616s and AF Form 9s. Reviewing personnel must take immediate corrective action on any improper expenditures or those that have the appearance of impropriety. Corrective actions should include notifying deployed location commanders of improper expenditures discovered during oversight reviews; recommending investigations if warranted; and reducing funding in applicable areas if necessary. 5.2.1. Internal control systems must be established at all levels to provide reasonable assurance of the effectiveness of the organization; the efficiency and economy of operations; safeguards over assets; the propriety of receipts and disbursements; and the accuracy and reliability of records and reports. Commanders must ensure that all expenditures are properly documented in sufficient detail to assure the propriety of the expenditure and provide an adequate audit trail. Commanders will set the tone for positive internal controls; financial managers will provide oversight assistance. 6. Support of Operations. 6.1. If the local situation permits, lead times can be met, and airlift costs do not raise the total cost beyond the local cost, U.S. sources should be used to support the equipment and supply requirements at deployed locations. However, it is recognized that mission requirements may dictate local purchase even though costs in local markets may be considerably higher or result in lower quality equipment and supply items. 7. Office and Lodging Furnishings. Furnishings must be procured in minimum quantities with functional and durable, not ornate or extravagant, quality. 8. Awards and Gifts. Authorized uses of appropriated funds for awards and gifts are limited. (See AFI 65-603 and AFI 65-601, Volume I, chapter 4, para. 4.29 for awards and gifts that are authorized to be procured with O&M funds.) 9. Recreational MWR Equipment. 9.1. The expenditure of Air Force O&M (3400) and Other Procurement (3080) funds directly appropriated to the Air Force are authorized for recreational MWR equipment at contingency locations. However, funds specifically designated for Contingency Operations and provided to the Air Force from transfer accounts appropriated to OSD solely for that purpose are only available to procure recreational MWR equipment if the Air Force budget request for those funds identified such equipment as a requirement. Recreational equipment acquired under this provision must be free of charge to the user and available/accessible to most, if not all, deployed personnel at the contingency location. It cannot be located in individual offices or quarters. Recreational equipment used to generate revenues must be purchased with nonappropriated funds.
9.1.1. Procurement of Recreation and Physical Fitness Equipment. The support element containing Air Force Services activities is the only organization authorized to purchase recreation and physical fitness equipment. Individual units are not authorized to procure it on their own authority. This provision also applies to television sets, DVDs, VCRs, CD players, etc., procured with APFs for recreational or fitness purposes.
10. Serving Materials. Serving materials (plates, dishes, utensils, etc), other than those procured by Services for use in a dining facility (fixed structure, transportable, or van), are only authorized for procurement by protocol offices as outlined and authorized in AFI 65-601, Volume I, Chapter 4, Paragraph 4.42.1.1.1. Serving materials procured under this authority are chargeable to direct Air Force O&M appropriations and should not be ESP coded in support of contingency operations. 11. Standard Information Technology (IT) Office Equipment. Such equipment shall be procured for a contingency site using appropriated funds for contingency operations, if authorized and available, and shall remain at that site for rotating units to use. In order to minimize duplicate purchases, each successive deploying unit will not procure such equipment specifically for its deployment unless the equipment becomes obsolete or irreparable. Information Technology equipment and accessories, which may be mission-specific for a particular deploying unit, shall accompany the unit when deploying and return with it on redeployment. Personal IT accessories such as Personal Digital Assistants (PDAs) shall only be procured for personnel requiring such accessories for mission accomplishment, not merely as a convenience. Such accessories shall remain at the contingency site for rotating personnel and not taken home as personal equipment. They remain government property without regard to where they are being used. 12. Holiday Observances. See AFI 65-601, Volume I, chapter 4, para. 4.26.2 provides guidance on use of appropriated funds for decorations. Any decorations procured under the authority in this AFI shall be purchased with appropriated funds, and in accordance with guidance in AFI 65-601. Authorized decorations are limited to recognized national holidays and seasonal observances, e.g., Halloween, Valentine's Day, and Easter. Decorations may not be religious in nature and are limited to locations where all installation personnel may benefit from their use. (Items for chapels and chaplain events must be procured in accordance with AFI 52-105, Vol II, Chaplain Service Resourcing, Nonappropriated Funds.) The Commander's best judgment is essential in avoiding extravagance or the appearance thereof, as well as ensuring appropriate sensitivity to host/local customs. 13. Equipment Maintenance. 13.1. Maintenance on equipment at the deployed location, and reconstitution maintenance when equipment has been redeployed to its home base, shall be accomplished using funds appropriated for contingency operations. Maintenance on equipment at a home station, including that designated for imminent deployment, is funded with direct Air Force appropriations, not funds appropriated for contingency operations. 13.2. Equipment and supplies (i.e., Unit Type Code (UTC) and other organic unit items) left in-place at a deployed location at the direction of Air Force instructions, Commander Air Force Forces order, or designated approval authority, will be reconstituted using funds appropriated for contingency operations, when authorized and available. If funds appropriated for contingency operations are not available, use existing AF TOA with the appropriate ESP code to reconstitute equipment and supplies left in-place at the deployed location. The deployed installation commander, or designated approval
authority, will identify in writing the unit equipment and supplies being directed to remain in-place via memo with attached UTC Logistic Details, CA-CRL, or standardized Equipment and Supplies List. The memo will be the authority for the owning unit to take appropriate supply authorization, funding, and reconstitution actions upon return to home station. 14. Government Purchase Card (GPC) Purchases. Purchases using the GPC are subject to the same laws, Congressional direction, and DoD policy as those made with other funding mechanisms. Governing Air Force policy on the use of the GPC in support of Contingency/Exercise Operations can be found in AFI 64-117, Air Force Government-wide Purchase Card Program, paragraph 2.6. The unauthorized use of the card is described under AFI 64-117, paragraph 2.4. If a purchase is prohibited or questionable using other procurement methods, it remains prohibited or questionable even if the GPC is used. The GPC is simply an additional procurement vehicle to obtain goods and services, not additional authority to purchase goods and services otherwise prohibited or questionable using any other procurement vehicle, i.e., AF Form 9. Only warranted Contingency Contracting Officers (CCOs) are authorized to use the purchase card in support of contingency/exercise operations according to AFI 64-117. However, IAW AFI 64-117, para. 2.6.2, cardholders who are not warranted CCOs may continue to use their purchase cards when deployed with their unit only for exercises of short duration (typically 30 days or less) and contingencies, when their unit's funding will be used. 15. Effective Date. This regulation is effective upon issuance. 16. Glossary. Attachment 1 provides Glossary of References and Supporting Information.
MICHAEL MONTELONGO
Assistant Secretary of the Air Force
Financial Management and Comptroller
Attachment 1
GLOSSARY OF REFERENCES AND SUPPORTING INFORMATION

References
AFI 10-213, Comptroller Operations Under Emergency Conditions
AFI 32-1022, Chapter 7, Planning and Programming Appropriated Funded Maintenance, Repair, and Construction Projects
AFI 64-117, Air Force Government-Wide Purchase Card (GPC) Program
AFI 65-106, Appropriated Fund Support of Morale, Welfare, and Recreation and Nonappropriated Fund Instrumentalities
AFI 65-601, Budget Guidance and Procedures
AFI 65-603, Official Representation Funds - Guidance and Procedures
AFPAM 65-110, Deployed Agent Operations
DoD FMR 7000.14-R, Volume 12, Chapter 23, Contingency Operations
Joint Publication 1.0, Doctrine for Personnel Support to Joint Operations

Abbreviations and Acronyms
AF--Air Force
AFI--Air Force Instruction--A publication issued to support an antecedent Air Force Policy Directive.
AFPD--Air Force Policy Directive--A publication necessary to meet the requirements of law, safety, security, or other area where common direction and standardization benefit the Air Force. The language used within describes the nature of compliance required. Air Force personnel are expected to comply with these publications.
CA/CRL--Custodian Authorization/Custody Receipt Listing
CD Player--Compact Disc Player
CCO--Contingency Contracting Officer
DoD--Department of Defense
DoD FMR--Department of Defense Financial Management Regulation
ESP Code--Emergency & Special Program (ESP) Code
GPC--Government Purchase Card
IT--Information Technology
JAG--Judge Advocate General
MAJCOM--Major Command
O&M--Operations and Maintenance
OSD--Office of Secretary of Defense
PDA--Personal Digital Assistants
U.S.--United States of America
UTC--Unit Type Code

Terms
Contingency Operation--A military operation that is designated by the Secretary of Defense in which members of the Armed Forces are or may become involved in military actions, operations, or hostilities against an enemy of the United States or against an opposing force; or is created by definition of law.
| http://www.readbag.com/af-shared-media-epubs-afi65-610 |
It is your discussion of the topic and your analysis of their ideas that should form the backbone of your essay. We do not hire authors who do not have sufficient experience or education. Instead, use pronouns that refer back to earlier key words.
This is the main point of your paragraph and everything within this paragraph should relate back to it. Ask yourself: have I done everything required? Read the paper aloud to find errors in sentence structure and word choice and refine it so there is a more natural flow. Do not go dramatically under or over this amount.
If you use the introduction, body and conclusion model, it is recommended to have. Do not simply present evidence, but analyse it at each stage, always relating it back to your assignment question. Remember that your marker will be looking for remember that these are the first words your marker will read, so always try to make a great first impression, to ensure that you provide your marker with a clear and accurate outline of what is to follow in your essay.
. Always check the assignment criteria and other information in your unit site for specific requirements. All sources must be cited in text in the referencing style required by your unit.
Introduce and define some of the key concepts discussed in the essay. If you are not sure, ask your lecturer or tutor. Therefore, most of our professional critical analysis writers are professors at the colleges or universities, and each of them is a native speaker with a university degree who has a vast experience in writing academic papers.
Consider how you conclude your paragraph and how you might link it to the following paragraph. Usually, an essay has the format of an introduction, body paragraphs and a conclusion. This means that the necessary work will be done within a timeframe of a few hours or one day, yet, regardless of the timing and complexity of the task, our service will deliver an academic paper of the premium quality to you quickly and at a low price. Respond directly to the essay question and clearly state what your essay intends to achieve. You then support your thesis statement in the body of the essay, using relevant ideas and evidence from throughout your essay.
1. Understand the assignment task. Consider the following question: In Australia, a person's social class impacts their life chances. Critically evaluate this.
Writing a critical essay Australia An essay flows cohesively when ideas and information relate to each other smoothly and logically. Linking words clarify for the reader how one point relates to another. Introduce and define some of the key concepts discussed in the essay. This means that the necessary work will be done within a timeframe of a few hours or one day, yet, regardless of the timing and complexity of the task, our service will deliver an academic paper of the premium quality to you quickly and at a low price. We do not hire authors who do not have sufficient experience or education. What is meant by analysis?. Ask yourself have i done everything required? Read the paper aloud to find errors in sentence structure and word choice and refine it so there is a more natural flow. Usually, an essay has the format of an introduction, body paragraphs and a conclusion. Save the detail for the body of your essay. Your lecturer and tutor are there to help and you can always ask for further advice from a drawn on discussions from weekly seminars and classes (your units weekly topics should be your guide for all of your assessments) discussed and analysed sources, and formatted them in the required referencing style planned your essay so that is readable, clear and logically sequenced, and with a distinct introduction, body and conclusion for exact details of your assignment.
Therefore, most of our professional critical analysis writers are professors at the colleges or universities, and each of them is a native speaker with a university degree who has a vast experience in writing academic papers. The choice of writers to work with our clients is a key priority for our company. Always check the assignment criteria and other information in your unit site for specific requirements. An essay is a type of assignment in which you present your point of view on a single topic through the analysis and discussion of academic sources. .
With us, you will have more time to spend on your family, friends or hobby. Conclusions are primarily for summing up what you have presented in the body of your essay. Your lecturer and tutor are there to help and you can always ask for further advice from a drawn on discussions from weekly seminars and classes (your units weekly topics should be your guide for all of your assessments) discussed and analysed sources, and formatted them in the required referencing style planned your essay so that is readable, clear and logically sequenced, and with a distinct introduction, body and conclusion for exact details of your assignment. Among the list of offered services you can find. Respond directly to the essay question and clearly state what your essay intends to achieve.
This is why, in australia, many students often turn to critical analysis writing service to help them write papers. Each main point should be relevant to your essay question or thesis statement. This is the main point of your paragraph and everything within this paragraph should relate back to it. Be sure to revise the introduction in your final draft, so that it accurately reflects any changes you may have made to the body and conclusion of your essay start each paragraph with a topic sentence. Plan and structure the body paragraphs of your essay into topic sentences with bullet points for each paragraph. Remember that your marker will be looking for remember that these are the first words your marker will read, so always try to make a great first impression, to ensure that you provide your marker with a clear and accurate outline of what is to follow in your essay. Among another custom critical analysis writing services running in australia, we are the only platform that offers you to yourself. Difficulties in writing academic papers is a very common problem for students of colleges and universities because of the requirement to study a large amount of information. One way you can demonstrate this is by other writers, by comparing, contrasting and evaluating their ideas. When writing an essay, dont be tempted to simply summarise other writers ideas.
Critical writing involves analysis and review, supported by evidence. Critical writing in design may take the form of an essay, visual analysis or journal article.
Writing the critical review usually requires you to read the selected text in detail and to also read other related texts so that you can ... What is meant by analysis?
Usually about 10 over or under is acceptable but always check with your lecturer first. Introduce and define some of the key concepts discussed in the essay. Instead, use pronouns that refer back to earlier key words. Consider how you conclude your paragraph and how you might link it to the following paragraph. We do not hire authors who do not have sufficient experience or education.
Do not go dramatically under or over this amount. Usually, an essay has the format of an introduction, body paragraphs and a conclusion. Summarise your argument and draw on some of the main points discussed in the body of the essay, but not in too much detail. You can use this analysis to construct your own opinions, questions or conclusions. When you use someone elses ideas, you must correctly acknowledge it through at the end of the text, both according to the referencing style required by your unit.
It is important to begin writing as soon as possible. Write an answer to the question in just one or two sentences; this can form the basis of your thesis statement or argument. Always check the assignment criteria and other information in your unit site for specific requirements. Do not go dramatically under or over this amount. Expand on each bullet point to build paragraphs based on evidence. After reviewing the plan and draft of the body paragraphs, write the introduction and conclusion. With us, you will have more time to spend on your family, friends or hobby.
Conclusions are primarily for summing up what you have presented in the body of your essay. When writing an essay, don't be tempted to simply summarise other writers' ideas. Respond directly to the essay question and clearly state what your essay intends to achieve. Difficulties in writing academic papers are a very common problem for students at colleges and universities because of the requirement to study a large amount of information.
Our service is specialized in all types of academic papers. Australia with professional critical analysis help and support in completing other academic works. Save the detail for the body of your essay. Usually about 10 over or under is acceptable but always check with your lecturer first. We appreciate each of our customers, and we strive to ensure that they have a good experience with our company.
An essay flows cohesively when ideas and information relate to each other smoothly and logically. One way you can demonstrate this is by other writers, by comparing, contrasting and evaluating their ideas. Use synonyms and paraphrasing so that you do not repeat all your main points word for word. Introduce and define some of the key concepts discussed in the essay. If you are not sure, ask your lecturer or tutor. | http://matdasttefi.ml/term-paper/7efd566de7077fd366edccf6af15c236 |
One of the more interesting and challenging actions of a term paper writer is finding an appropriate balance between formatting, grammar, and style when composing. As a result, many writers fail to accomplish an even degree of equilibrium in their term paper essays, which can adversely influence their grade point averages.
Additionally, some pupils become frustrated with the difficulty they encounter after reviewing their term paper for content and length. The difficulty that most students face when writing a term paper is that they lack an understanding of exactly what constitutes "good" academic writing. This can make it hard to differentiate between "good" essay writing and "bad" essay writing. Unfortunately, many teachers are not well versed in this issue and therefore do not have a thorough understanding of how best to assess student essays.
An excellent academic essay must accomplish two things: first, it must contain good information, and second, it must be written in such a way that the information is pertinent to the specific subject. For example, if you write a history paper about Napoleon Bonaparte, do you expect the reader to fully understand what he did and why, or simply to grasp a general thesis or idea? You will need to create a persuasive argument that will be worth your audience's attention. In fact, if you can demonstrate that you have actually taken the time to study a subject and that your arguments are based on facts, you will succeed as a good academic essay writer.
As someone that wants to write an effective essay, you need to work with the academic adviser that will help you recognize the intricacies of this academic writing process and summarize a few essay examples which may serve as guides. As a result, it will be simpler for you to achieve a balanced level of balance in your academic documents.
If you feel as though your term papers are unbalanced, you might choose to contact your adviser to talk about whether your regular point averages are too significant. The bottom line is that no matter how hard or challenging your mission might be, you shouldn’t be afraid to seek assistance from an academic advisor or counselor. Your advisor can steer you in achieving a decent balance in your term paper along with your overall quality point average.
Writing a good academic paper isn’t easy. It requires a good deal of work, discipline, and commitment, but is really rewarding. | https://kapurcuk.com/whats-the-balance/ |
According to a study by the European Commission’s Joint Research Centre, using digital technologies for learning in schools improves parents' perceptions of these technologies, which in turn helps children's digital learning and supports a healthier and more meaningful use of digital devices.
This conclusion is based on interviews with 234 families in 21 countries, and states that children’s digital skills develop from a very young age, based on observing and mirroring parents’ and older siblings’ behaviour. Furthermore, developing digital competence as early as kindergarten can help to build critical thinking among children regarding content and devices that they use. Some parents underlined the strategic importance of schools to provide the guidance they need for presenting digital skills to their children.
More information about the study and related EU initiatives can be found on the European Commission’s website. | https://www.cepis.org/index.jsp?p=636&n=639&a=6088 |
Step 1:
- Review the Eligibility criteria.
- Create your user profile and entry form by clicking here, or using the link on our home page.
- Once your user profile has been created, you can access your applications any time using the "My Applications" link on the left-hand side of the page.
- Complete your order by submitting payment through the checkout link accessed in your cart. An invoice/receipt will be emailed to the email address you provided.
Step 2:
- Log in using your email and password to begin your submission form.
- You can access the submission form by clicking the "My Application" link on the left side of the page, and clicking the "Edit" link for the project you wish to submit.
- Fill out each applicable question for your project (required questions are marked with a red asterisk). A minimum of 6 images is required for each project. You can save your application any time using the "Save" button at the bottom of the page.
- Select "Built" or "Unbuilt" subcategory
- Click the "Save and Finalize" button at the bottom of the page to complete your submission. No changes can be made after this point.
Submittal Guidelines
The 2019 Design Awards requires the submittal of all materials in digital format.
If the authorship is revealed on any of the images, plans or narrative (outside of the credits section), the entry will be disqualified. All submitted content will be displayed. No images will be hidden from the public gallery.
Number and Type of Images:
A minimum of 6 images/slides and a maximum of 12 images/slides are required. These images/slides should include all plans, diagrams, interior and exterior views required to fully illustrate each entry. Multiple images on individual slides are acceptable and encouraged to help explain the project.
Floor Plans and Site Plans are required for all entries, except Interior projects, which may omit the site plan. Site Plans must include a North arrow and a scale. Other drawings, such as sections, axonometric views, elevations, graphics data, and details may be included.
Views of both interiors and exteriors are required for all entries, except Interior projects, which may omit the exterior views, or exterior projects where the interior design was done by another design professional. Exterior views should show all sides of a project; exceptions might be projects with many similar buildings. Views, where possible, should show the project in the context of its immediate surroundings (i.e. adjacent streets and buildings).
Exterior views should be primarily daylighted views unless night views relate directly to the project use or effect, such as marquees, or special illumination.
For renovation projects, views taken before and after construction will be accepted when labeled accordingly.
Format & Size of Images:
The digital image files (plans, photos, illustrations) must be submitted in .jpg, .jpeg, .png and/or .gif file format. No other formats will be accepted.
The maximum image width is 1,600 pixels. The maximum image height is 900 pixels. The recommended minimum dimension in either direction is 600 pixels.
The maximum file size is 1 MB (1,000 KB); note that larger files will result in slower load times. The recommended file size is approximately 600 KB. You can use the "Save for Web & Devices" tool in Photoshop to achieve the desired file size.
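For entrants preparing a large set of images, this sizing step can also be scripted. Below is a minimal sketch, assuming the Python Pillow library and hypothetical file names (it is not an official tool of the awards program): it shrinks an image to fit within 1,600 x 900 pixels and then steps the JPEG quality down until the saved file is at or under roughly 600 KB.

```python
# Minimal sketch: resize an image to the Design Awards pixel limits and keep
# the file near the recommended ~600 KB. Paths and the quality range are
# illustrative assumptions, not program requirements.
import os
from PIL import Image

MAX_W, MAX_H = 1600, 900           # maximum width and height in pixels
TARGET_BYTES = 600 * 1024          # recommended file size (~600 KB)

def prepare_image(src_path: str, dst_path: str) -> None:
    """Resize an image to fit the pixel limits and save it as a JPEG."""
    img = Image.open(src_path).convert("RGB")
    img.thumbnail((MAX_W, MAX_H))  # shrinks in place, preserving aspect ratio
    # Step the JPEG quality down until the file is at or under the target size.
    for quality in range(95, 40, -5):
        img.save(dst_path, "JPEG", quality=quality, optimize=True)
        if os.path.getsize(dst_path) <= TARGET_BYTES:
            break

if __name__ == "__main__":
    prepare_image("living_room_view.png", "003_DA2019_01.jpg")
```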
File Name:
Please name your files using the following coding: Entry #_DA2019_Image # (For example, if you are entry # 003, your first image would be titled 003_DA2019_01.jpg)
File names should not contain any empty spaces or special characters. Please review the acceptable file formats outlined in the Format & Size of Images section of these instructions.
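If many files need renaming, the convention above can be applied in bulk. The following is a minimal sketch, assuming a hypothetical folder of already prepared images and the example entry number 003 used earlier; adjust both before use.

```python
# Minimal sketch: rename prepared images to the Entry#_DA2019_Image# pattern
# (e.g. 003_DA2019_01.jpg). The folder name and entry number are placeholders.
import os

ENTRY_NUMBER = "003"
SOURCE_DIR = "prepared_images"

def rename_for_submission(folder: str, entry: str) -> None:
    """Rename image files in `folder` to the <entry>_DA2019_<index> convention."""
    images = sorted(
        f for f in os.listdir(folder)
        if f.lower().endswith((".jpg", ".jpeg", ".png", ".gif"))
    )
    for index, filename in enumerate(images, start=1):
        ext = os.path.splitext(filename)[1].lower()
        new_name = f"{entry}_DA2019_{index:02d}{ext}"  # no spaces or special characters
        os.rename(os.path.join(folder, filename), os.path.join(folder, new_name))

if __name__ == "__main__":
    rename_for_submission(SOURCE_DIR, ENTRY_NUMBER)
```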
Text Information (Description, Credits, etc.):
The following text is required for a complete entry:
- Project Description (max. 3,500 characters - approx. 500 words).
This is the entrant's primary opportunity to tell the story of the submitted project in words. Is there a specific inspiration driving the design of the project? Are there special conditions or challenges that are worth noting? The actual content of this narrative is completely up to the individual entrant, but ideally it should be geared towards helping the jury (and those who may not be familiar with each project) understand the project's background, its design process and the final result being presented for this year's awards program.
NOTE: Do not include any credit information in the descriptive texts titled "Project Description"
Display Boards:
All Residential, Commercial, and INSPIRE Awards Participants are encouraged to submit a board for display at the 2019 AIAOC Design Awards Presentation and Gala.
These boards are for public display only and will not be judged. You are encouraged to include your firm logo on the display boards, however, please do not place your logo on any of the online submittals per the requirements listed within the submittal guidelines, as they are being judged.
All display boards must comply with the following requirements: | https://aiaoc.secure-platform.com/a/page/submittals |
It's often said that breakfast is the most important meal of the day. It sets a precedent for the rest of your day: miss it and you'll be hungry and grumpy all afternoon. Eating healthy in the morning boosts your metabolism and can help provide you with the energy to start your day off right.
So it might come as a surprise that I say we ought to get rid of breakfast completely. While we’re at it, let’s toss lunch and dinner out the window as well.
I’m not advocating for fasting, no, that’s quite the opposite of healthy and natural. Rather, I’m advocating for the abolishment of the three-meal system that we as human beings have adopted (fairly recently, actually, as far as human history goes) that is tying us down and weakening our health meal by meal.
There is, to put it as simply as one article did, no biological reason for human beings to eat three meals a day. And to be even more frank, not all cultures do. In Poland, for example, it is typical to eat four meals a day, the third of which is often the biggest and is comparable to the dinner that most of us know. Unlike American culture, however, it’s eaten at around 3 p.m.
The practice of three meals a day dates back to at least the 16th century, when the well-off in society would eat three meals a day, though it was often acknowledged that two would suffice for non-laborers. Similar to Polish culture of today, however, the meals were eaten earlier than we do now in America.
Eating naturally, as I have phrased it before, has been proven time and time again to have immense benefits for our bodies. Breaking away from the three-meal construct that we've been indoctrinated into believing our whole lives can reduce our risk of obesity and improve our glucose tolerance and insulin sensitivity, two factors that are key in diabetes.
Eating four, five or six smaller meals a day can give you sustained energy throughout the day and make it easier to monitor your intake of calories, sodium, fats and carbs. Smaller meals spaced out evenly can also help stave off bouts of hunger and overeating; if you have already eaten three smaller meals by midday, you're less likely to binge on junk food when you have another small meal coming up in the near future.
Breaking away from a three-meal schedule can be difficult, particularly if you've been accustomed to it your entire life or if your family is resistant. However, the rewards of eating naturally are palpable. You'll stay healthier for longer and reduce your risk of diabetes and heart disease, as well as find the energy needed to maintain a good workout regimen and keep your body in peak form.
All of this relates back to the idea of utilizing your body and its own natural abilities to keep you healthy and disease-free. So stop eating breakfast, lunch and dinner, at least in the conventional sense. Your body will thank you in the long run. | http://jeremiesaintvil.net/eating-naturally/ |
Tracks News - In this section you'll find news from cities around the country as well as interviews and general reporting on issues. It might be from a newspaper or a blog, but it counts as news.
TRANSPORT
California: Engineers Questions HSR Oversight
San Jose Mercury News
As California prepares to embark on its largest public works project in decades, a union that represents state engineers is questioning whether all the construction work will be thoroughly scrutinized...
TRANSPORT
Charlotte: Budget Battle Stops at Streetcar
Charlotte Observer
In Charlotte's budget fight, perhaps the biggest sticking point has been a proposal to spend $119 million on a streetcar through uptown...
National: Boehner, Reid Fail to Break Deadlock
Reuters
U.S. congressional leaders failed on Tuesday to break a deadlock on a long-stalled transportation funding measure, and Republicans may now have to detach from the bill approval of the controversial…
Executive Summary
The federal government, through various transportation acts, such as the Intermodal Surface Transportation Efficiency Act (ISTEA), the Transportation Equity Act for the 21st Century (TEA-21), and, more recently, the Safe, Affordable, Flexible, Efficient Transportation Equity Act—A Legacy for Users (SAFETEA-LU), has reinforced the need for integration of land use and transportation and the provision of public transit. Other federal programs, such as the Livable Communities Program and the New Starts Program, have provided additional impetus to public transit. At the state and regional level, the past three decades have seen increased provision of public transit. However, the public transit systems typically require significant operating and capital subsidies—75 percent of transit funding is provided by local and state governments.1 With all levels of government under significant fiscal stress, new transit funding mechanisms are welcome. Value capture (VC) is once…
Contra Costa cities are vowing to fight a plan signed by Gov. Jerry Brown this week to use money earmarked for local revitalization to plug the state deficit. Brown has endorsed two bills: One dissolves the state's nearly 400 redevelopment agencies and the other allows the agencies to remain if they "pay to play," handing over $1.7 billion in revenue to the state this year and making smaller payments in perpetuity. ..
EasyConnect II explored the introduction and integration of multi-modal transportation services, employing both traditional and innovative technologies, at the Pleasant Hill Bay Area Rapid Transit (BART) District station during the initial construction phase of the Contra Costa Centre Transit Village, a transit-oriented development, in the San Francisco Bay Area...
Suppose a builder pitches a 100-condominium development in Richmond within 1,000 feet of Interstate 80. Under proposed air-quality guidelines, for the first time in the U.S., if extra cancer risk meets a specific threshold, the developer would be told to study the potential health effects of the freeway pollution on the people who would live in the homes...
Smart growth policy strategies attempt to control increasing auto travel, congestion, and vehicle emissions by redirecting new development into communities with a high-intensity mix of shopping, jobs, and housing that is served by high-quality modal alternatives to single occupant vehicles. The integration of innovative technologies with traditional modal options in transit-oriented developments (TODs) may be the key to providing the kind of high-quality transit service that can effectively compete with the automobile in suburban transit corridors. A major challenge, however, of such an integration strategy is the facilitation of a well-designed and seamless multi-modal connection infrastructure – both informational and physical. EasyConnect II explored the introduction and integration of multi-modal transportation services, both traditional and innovative technologies, at the Pleasant Hill Bay Area Rapid Transit (BART) District station during the initial construction phase of the…
THE MASSIVE construction project on Treat Boulevard that obscures the Pleasant Hill BART station will actually be completed one of these days. The sprawling expanse of yellow sheathing and black tar paper will give way to a soothing facade of earth tones and coffee colors. Perhaps as soon as March residents may be coming and going from the newest Bay Area trend in mixed-use developments, the so-called transit village....
"Summary and Evaluation of Arguments Made By Creon" The character assumes that state laws are essential for the protection and survival of the country. He thinks that family and divine laws are not as important as the rules established by the king, and that the person who breaks these rules should be punished. Moreover, a disloyal person should not be treated in the same way as a loyal one, and an enemy of the country should be left unburied for dogs, cats and eagles after death. He argues that Polyneices was a traitor to the country because he condemned his acts, while his brother Eteocles was loyal, so they should not be treated in the same way. Creon believes that an enemy of the state is still an enemy even after death, and so should have no honour. He explained the evidence for this assertion using facts, mentioning that the gods do not bless criminals. The evidence is relevant to what the character argues and provides adequate information to support his argument. Creon is fair and shows no bias to one side; he deals with both men with justice, on the basis of the acts they committed in their lives. The character is neither emotional nor unfair according to his basic assumptions. He uses ironic and straightforward language for his point of view, and his tone is serious and consistent. This is a logical and valid argument, because Creon is doing justice and punishing the one who was disloyal to the city, although its reasonableness is undermined by the harshness of the rule. The second main assertion of the character concerns the punishment of Antigone for defying the Theban rule established by the king about the burial of her brother. Creon wanted to punish Antigone for violating the state law and burying her brother. He supported his argument by providing proper proof for it. He mentioned the fact that Antigone is the only one among the citizens who thinks family and divine law are more serious... | https://www.studymode.com/essays/Jknkj-1858880.html
To provide ongoing support for a group of formerly incarcerated parents and caregivers of children with incarcerated parents to inform the work of, and related to, Alameda County Children of Incarcerated Parents Partnership (ACCIPP), which seeks to improve policies and practices on behalf of children with incarcerated parents and their families.
To support the collaboration of public and private agencies and community members working to raise awareness and improve the lives of Bay Area children with incarcerated parents.
To support capacity building for juvenile defense attorneys to effectively advocate for cross-system, community-based supports for youth involved in the delinquency system.
To support the continuation of Bay Area Youth Leadership Academy (BALYA); provide opportunities for shared learning among organizations and programs focused on youth leadership and development through Z Plus; and provide opportunities for youth to inform local Continuum of Care Reform implementation through partnership with Youth Law Center.
To support a one-year fellowship at the Haywood Burns Institute for an emerging leader in justice reform to help advance the work of the Institute and learn from leaders in the field.
To support a collaborative of public agency and private foundation investors in the child welfare system to leverage their collective resources to improve child welfare outcomes.
To support the general operations of a statewide organization that provides training, technical assistance and advocacy on behalf of local CASA programs.
To support ongoing efforts to bring together advocates and statewide leadership from behavioral health and child welfare to develop a coordinated, triaged response to foster youth in crisis.
To support a public/private coalition-building effort that can transform outcomes for children by leveraging behavioral health policy and aligned public/private efforts to support healthy development, increase health equity, improve clinical efficacy, and achieve necessary systems change.
To provide continued support for coordination of the California Child Welfare Council’s CSEC Action Team tasked with improving the systems, policies, services and supports for children and youth involved or at risk of involvement in commercial sexual exploitation; and provide technical assistance to Bay Area counties.
To provide general operating support for this growing organization that aims to interrupt the cycles of violence and poverty by motivating and involving young people in changing policies and systems that impact their lives and communities.
To provide general operating support for an organization that engages youth and families impacted by incarceration to promote lasting change that keeps people out of prison, advocates for a more humane justice system, and supports healthy communities through services and art.
To support a convening of staff from San Francisco’s Child, Youth and Family System of Care and community-based partners to review efforts related to trauma informed systems and equity; acknowledge and reward progress; and practice self-care.
To support a regional collaborative that focuses on organizational healing, community- and peer-led healing, and policy with proximity in an effort to transform overlapping systems into a coordinated, trauma-informed, youth-guided and family driven system of care for children and youth.
To support early implementation of an evidence-based program that prevents foster care placement and promotes family reunification by providing interdisciplinary support to parents and caregivers facing child welfare intervention, with attention to criminal crossover matters and collateral civil legal issues.
To support intensive training and coaching on foster youth-led organizing to staff, volunteers and youth involved in Bay Area foster youth advocacy groups.
To support the transition of the Foster Youth Museum to an independent, sustainable entity.
To amplify the voices of youth through media that illuminates the challenges and successes facing young people involved in the Bay Area child welfare and/or juvenile justice systems.
To support the continued expansion of FLY in Alameda County to provide legal education and leadership training to youth involved or at risk of involvement in the juvenile justice system.
To support a new program that provides Bay Area advocates of color with holistic and integrative healing along with tools and knowledge about how to regenerate and heal from mental and emotional trauma inherent in their work.
To provide general support to expand the capacity of a non-traditional charter high school that serves youth with histories of school failure.
To bolster capacity to engage systems leaders in implementing radical inquiry through which local expertise and experience informs culturally relevant, healing-centered policies, practices and investments.
To support the launch of a youth communications initiative that amplifies the voices of a new generation coming of age in East Contra Costa County.
To provide continued training and guidance to assist incarcerated individuals, family members and the organizations that support them to prepare for release and reentry.
To explore opportunities to inform local and statewide policy, and advise the California Children’s Trust, based on practice-based evidence in child and family resiliency.
To provide continued support for a collaboration of county social service agencies and universities to foster shared learning and activities designed to improve human service systems and strengthen communities of vulnerable children, youth and families.
To provide a final year of funding for a collaboration of public and private agencies working to raise awareness and improve the lives of San Francisco children with incarcerated or detained parents.
To support the participation of Bay Area organizations in a statewide initiative that aims to accelerate a movement toward a more equitable, youth development-focused juvenile justice system.
To support a train-the-trainer program for the San Francisco Police Department and update the San Francisco versions of Juvenile Justice Jeopardy with the goal of improving interactions between youth and law enforcement in San Francisco.
To support advocacy in six Bay Area counties committed to the Quality Parenting Initiative to transform agency practice so that youth receive excellent parenting, access to developmentally appropriate activities and environments, and the opportunities necessary to thrive.
To provide training, technical assistance, and support to West Contra Costa Unified School District, as well as grassroots organizations in this community, to ensure effective implementation of school climate, safety and discipline reform that reduces the number of students suspended and expelled from school, increases positive social/emotional and behavioral supports at school, and reduces the disparity in suspensions and expulsions for youth of color, youth with disabilities, and students in the juvenile dependency and delinquency systems.
To better understand how diverting children who cannot remain safely in the home of a parent impacts the relatives who care for them without system involvement; develop strategies to prevent inappropriate diversion and more fully support those families that choose diversion; and educate key decision makers on the impact of diversion to create systemic long-term improvements.
To provide general operating support for an organization that builds a movement of formerly incarcerated and system involved young women to heal from trauma and change the systems, policies, services, and narratives that ensnare poor young women in cycles of violence, economic marginalization, incarceration, and self destruction. | http://zff.org/grantsearch/category/improving-human-service-systems/year/2018/ |
The International Union for Conservation of Nature (IUCN) lists the scalloped hammerhead (Sphyrna lewini) as endangered, considering it at "very high risk of extinction in the wild", yet it did not receive CITES status due to pressure from fishing nations. The U.S. has already acknowledged that the scalloped hammerhead is threatened by exploitation and that additional regulation is needed to conserve the species, as these were core contentions supporting its proposal to list the species under CITES. This species is in drastic decline throughout its range through overfishing and the shark fin trade. These schooling sharks are highly vulnerable to targeted fisheries. Like most sharks, scalloped hammerheads play an important role in the health and balance of oceanic ecosystems. These ecosystems, including already threatened coral reefs, could seriously suffer from the removal of a top predator.

We join WildEarth Guardians and Friends of Animals in requesting that NMFS list the species (1) throughout its entire range or, alternatively, (2) as five distinct population segments (DPSs) under the ESA, representing each subpopulation of the species. The five subpopulations, any of which might qualify for listing as a DPS, include the Eastern Central and Southeast Pacific, the Eastern Central Atlantic, the Northwest and Western Central Atlantic, the Southwest Atlantic, and the Western Indian Ocean.

We also request the designation of critical habitat in U.S. waters for this species along with final ESA listing. Critical habitat should protect the areas most important to the scalloped hammerhead's survival, such as breeding grounds and coastal areas. Areas should include waters around the Hawaiian Islands, Southern California, and the coastal region between South Carolina and central Florida, which is believed to be an important nursery area in the Western Atlantic.

Besides protecting endangered species, a main goal of the ESA is ecosystem stability and biodiversity. Protecting these sharks will satisfy both goals and ensure the survival of this important apex predator.

We urge you to list the scalloped hammerhead shark (Sphyrna lewini) under the Endangered Species Act, either worldwide or as one or more distinct population segments.
Aamir, M 2022, Identifying the psychosocial needs of Emirati and expatriate breast cancer survivors in the U.A.E.: A mixed method investigation in a hospital setting, PhD thesis, University of Salford.
Abstract
Cancer is considered as a chronic disease which requires high-quality, long-term, post-treatment care (Drayton, et al., 2012; Phillips & Currow, 2010). However, long-term survivorship care has become a growing healthcare burden requiring numerous resources including psychosocial support, late and long-term side effects monitoring, follow-up care to screen cancer progression, recurrence or newly developed or secondary cancers (Howell, et al., 2011; McCorkle, 2011; Morgan, 2009). Breast cancer is the most frequent cancer in the world and in the United Arab Emirates (UAE); yet there is little known about breast cancer survivors' psychosocial concerns in the UAE. Research shows that meeting the full range of psychosocial needs significantly contributes to survivor's wellbeing and potentially elevates the quality of the patient's life (Holland & Reznik, 2005; Institute of Medicine, 2008; Culbertson, et al., 2020). Thus, it is important to understand and meet the needs of the country's diverse population to help cancer patients deal with the range of psychosocial issues they may experience.

The aim of this study was to investigate the psychosocial concerns of breast cancer survivors in a hospital setting in the UAE and the association with cultural factors related to two groups: Emiratis and expatriates. A two-phase mixed methods study was conducted involving a cross-sectional quantitative survey to examine survivors' concerns and semi-structured interviews to develop an in-depth understanding of their needs.

Among 205 breast cancer survivors who completed Phase One, twenty six percent were Emiratis and eighty seven percent were expatriates with the mean age of 49 years (both groups). Sixty percent of participants were diagnosed in 2018 or after that period. Around seventy four percent of survivors had regional stage disease and thirty eight percent of survivors had multimodality treatment. Fifty nine percent were on treatment whereas forty one percent had their treatment complete or had no treatment. Seventy two percent coded their quality of life as "good".

The severity scores of each psychosocial domain were calculated based on the participants' reported concerns using mean scores. Information concerns were the highest reported concerns with the mean score 4.3. Emotional needs were the second most concerning reported with a mean score of 3.4. Physical needs were reported by the survivors with some level of concerns, mainly pain and fatigue (p=0.031). Survivors had a significant level of social and financial concerns (p<0.001). Regression analysis t-test results indicated no significant differences in information and emotional needs between Emiratis and expatriates. However, a significant difference was found in physical and social & financial domains in two groups. There were no significant religious or spiritual concerns reported by survivors in both groups. A Chi-square test showed no association "between nationality and age" (Sig=0.287) and "between nationality and stage" (Sig=0.083) of the disease. Results also demonstrated significant positive association between physical concerns and received treatment whereas information concerns were significantly associated with age and type of the treatment received. The association between quality of life and psychosocial concerns was also explored using correlation analysis techniques.
There was a negative correlation found between quality of life and the scores of psychosocial concerns, including physical, social, emotional as well as spiritual concerns (p < 0.01), and a significant correlation was found between quality of life and information concerns (p < 0.01). Multiple regression results showed a significant positive association between physical concerns and received treatment, whereas information concerns were significantly associated with age and type of the treatment received. Physical and emotional concerns were found to make the strongest contribution to explaining quality of life (QOL) (p<0.001 and p=0.001 respectively). In phase two, thematic analysis revealed three broader themes, "living experience with breast cancer", "survivors' psychosocial concerns" and "survivors' experience with healthcare providers", which revealed in-depth concerns amongst cancer survivors about addressing their physical, informational, social, financial, emotional, and spiritual needs related to living with cancer. Cancer survivors continue to face challenges and symptoms even after their treatment is completed (Tian, Cao & Feng, 2019; Rutten, 2005; Siemsen, 2001); however, culturally tailored psychosocial support would likely improve their survivorship experience. In order to do that, health providers need to facilitate the development of comprehensive and integrated cancer services to meet the ongoing psychosocial needs of cancer patients. The study has indicated several gaps and barriers in the provision of high-quality cancer care, such as the lack of routine assessment of survivors' psychosocial concerns. It also highlights the need for further research in psychosocial needs and cancer survivorship care, particularly in the region. KEY WORDS: Breast cancer, psychosocial needs, expatriate in the United Arab Emirates, cultural dimensions of cancer care, cancer trajectory. | http://usir.salford.ac.uk/id/eprint/63966/?template=etheses
While young women and men already face several challenges in the transition to adulthood, such as economic insecurity, social changes and the need to adapt to new life environments, surviving a terrorist attack can increase the burden of having to navigate big life-changing events while also needing to cope with trauma. Indeed, there is a growing need to make victim support and other support services, that are often tailored to older generations, more accessible and responsive to the needs of young people. At the same time, young victims/survivors of terrorism can play a valuable role in the prevention and countering of violent extremism (P/CVE) because they represent a credible voice, especially to their peers, and their story can also serve as an inspiring example of resilience.
The RAN Victims/survivors of Terrorism Working Group meeting on 18 June discussed how to support young victims/survivors of terrorism in making their voices heard. The meeting brought together young victims/survivors and first-line practitioners and organisations working with them such as youth workers, educational professionals, social workers and psychologists. Because sustainable support for young victims/survivors and meaningful P/CVE programmes go hand in hand, they discussed how best to support young victims/survivors wanting to play a role in P/CVE and how to provide tailored support in their healing process that addresses the diversity of challenges young people face.
Indeed, support should be based on a youth-sensitive needs assessment, as the voices of young victims/survivors cannot be separated from their needs. To lessen the risk of retraumatisation due to project limitations or risk-related interventions, support needs to be organised in a sustainable fashion. There are many interrelated challenges; support structures often do not take the specific needs of young victims/survivors into account, for example when they have to deal with upcoming major life events, such as moving to a different city to start higher education, navigating the labour market or institutions, and so on. They are or soon will be in a transitional phase, moving to self-sustained adulthood, and they need the right support to guide them through this.
Supporting young victims/survivors in a meaningful way means addressing needs and interests ranging from psychological well-being, their educational environment, their families and supporting environment and the relationship with their peers. Helping them making their voices heard requires a sensible approach that avoids retraumatisation and ensures that the needs of victims/survivors inform the P/CVE efforts and not the other way around. This conclusion paper reflects the discussions of the Working Group meeting of 18 June and includes recommendations and lessons learned in relation to working with and supporting young victims/survivors of terrorism and identifies gaps that need to be further explored. | https://home-affairs.ec.europa.eu/whats-new/publications/ran-vot-supporting-voices-young-victimssurvivors-terrorism-18-june-2021_en |
Contributor(s)
Wang, Yinuo; Frodsham, Sydney
Composers
Handel, George Frideric, 1685-1759; Brahms, Johannes, 1833-1897; Fauré, Gabriel, 1845-1924; Massenet, Jules, 1842-1912; Bach, Johann Sebastian, 1685-1750; Mozart, Wolfgang Amadeus, 1756-1791; Rubinstein, Joseph, 1986-; Johnson, Craig Hella
Abstract
He that dwelleth in heaven…Thou Shalt break them, from Messiah / G. F. Handel; Sonntag, from op. 47; Immer leiser wird, from Op. 105; O kühler Wald, from Op. 72 / Johannes Brahms; Green / Gabriel Fauré ; Elegy / Jules Massenet; Qui sedes, from B minor Mass / J. S. Bach; Verdi prati, from Alcina; Se bramate d'amar, from Serse/ G. F. Handel; Laudate Dominum, from Vespere Solennes de Confessore / W. A. Mozart; A woman, that's me, from Legendary / Joseph Rubinstein; Will There Really be a Morning / Craig Hella Johnson
Document Type
Recording
Performance Date
Fall 11-12-2016
City
Dallas, TX
Degree Department
Division of Music
Degree Statement
This recital is given in partial fulfillment of the requirements for the Bachelor of Music in Voice Performance. | https://scholar.smu.edu/arts_music_recordings_degree/92/ |
2,524 Recruiter Jobs in Cape Town Region (average salary: R432,256)
- Scrum Master at Parvana Recruitment (PARVANA STRATEGIC SOURCING) - Cape Town: "... that the development team has the tools and environment to be productive. Driving recruitment of development teams and ensuring that people leadership and training are in place. Participating ..."
- DevOps Engineer at Datafin Recruitment (DATAFIN) - Westlake: "... Please e-mail a word copy of your CV to [Email Address Removed] and mention the reference numbers of the jobs. We have a list of jobs on [URL Removed] Datafin IT Recruitment - Cape Town ..."
- Lead Data Engineer at Red Ember Recruitment (RED EMBER RECRUITMENT) - Cape Town Region: "... Citizen About The Employer: Red Ember is actively recruiting for a Lead Data Engineer. Recruiter: Johandri ..."
- Intermediate Software Developer at e-Merge IT Recruitment (E MERGE IT RECRUITMENT) - Cape Town CBD: "... recruitment is a specialist niche recruitment agency. We offer our candidates options so that we can successfully place the right developers with the right companies in the right roles ..."
- Recruiter (R18,000.00PM to R24,000PM) - Cape Town, Western Cape: "Recruiter needed Foreshore, Cape Town. Our client a BPAAS (Business Process as a Service) company based in Cape Town specialists in operations management and analytics is looking for a Recruiter to join their team. Should you be successful, your primary responsibility is to develop and implement recruiting plans and strategies designed to fulfill company staffing needs ..."
- Recruitment Consultant - Milnerton: "Requirements Proven track record of revenue generation, new business development and client retention in the recruitment industry in excess of two years. Excellent written ... assessment, interviewing, reporting and profiling In-depth knowledge of recruitment as well as recruitment processes Own transport essential An excellent basic salary and above average ..."
- Recruiter / Recruitment Consultant (CRAYON) - Cape Town: "... for review and therefore manage the process from start to finish (5% of annual CTC) Crayon Social - A platform to turn staff into recruiters by leveraging their social networks ... via the employer dashboard, allowing them to review applications and shortlist candidates (R950 per job posted) Are you an experienced recruiter, with strong interpersonal skills who ..."
- Recruiter (Negotiable) - Wynberg, Cape Town, South Africa: "We are a Recruitment Agency located in Wetton, Cape Town, looking for a young result driven individual. At Globe Recruitment we follow a high touch approach, supporting our ... manner. The role encompasses, Recruitment Administration Interviewing Candidates Preparing CV's Cold Calling Clients and Candidates Client relationship management Requirements: Matric ..."
- Recruiter (400.000 - 600.000) (BLACKSWAN) - Cape Town: "... want to hear from you. "The best way to predict your future is to invent it." Peter Drucker The Role: As part of our international recruitment team, you will be responsible for driving all recruiting activities within our Cape Town Office as well as other Global offices such as New York, London and Szeged. You'll partner with senior stakeholders across ..."
- Recruitment Consultant (easy apply) (ONECART LTD) - Cape Town: "RECRUITMENT CONSULTANT Description In this role, you'll work within a New Hire team of Recruiting Sourcers and Coordinators to attract and hire shoppers and delivery drivers ... and candidates. Key Responsibilities: Coordinate full lifecycle of the recruiting process for candidate to onboarding. Source candidates via recruitment portals online, internal system ..."
Get the latest jobs: | https://www.adzuna.co.za/cape-town-metro/recruiter |
Course:GEOG352/2019/Food Security in Sana'a, Yemen
With the world’s population still growing at an exponential rate, and the rural-to-urban migration in the geographical south and east ever increasing, food availability is becoming a pressing issue for multiple countries as many populations, especially urban, across the globe now face food insecurity. According to the United Nations, food security can be defined as a “situation that exists when all people, at all times, have physical and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life”. Food security is important in the context of the GEOG 352 course as it can directly impact the level of development in a country and can vary greatly between the rural and urban context. Therefore, we can observe the GEOG 352 themes of urban governance and political transformation as key aspects of food insecurity.
This wiki focuses on food insecurity in Sana’a, the capital city of Yemen, which is located in the west of the country and is considered to be in the Integrated Phase Classification Food Insecurity Phase 4 Emergency. The country’s declining political situation since 2015 and outbreak of conflict has resulted in the inability to access food. By examining how the ongoing proxy war between Saudi Arabia and Iran has exacerbated the situation in Yemen, the social, political, and geographical implications can be explored. Not only is the political and economic situation important, but disease, has also amplified the severity of the food insecurity by contaminating available food and clean water. Sana’a serves as a crucial location and point of interest, as it is the country’s central node of transportation and a frequent site of conflict and internally displaced persons (IDP) relocation. Therefore, through the analysis of Sana'a we can analyze the local context of the political situation, economic status, and health to observe how these factors played a part in creating an area at high risk of famine.
Overview
Food Security
Food security exists when people have physical and economic access to adequate quantities of nutritious food to meet their dietary needs and sustain a healthy life, and it is essential for everyone on Earth. With it, one may thrive in one's environment; without it, the likelihood of mortality and morbidity increases significantly. Currently, the main causes of food insecurity are problems associated with climate change, urbanization, and food acquisition. With much of the Earth's climate changing, landscapes are becoming less fertile for the native crops being grown, causing crop losses and, in many cases, zero crop yield. This runs counter to the needs of a growing global population, as many regions of the world rely on subsistence farming and high crop yields. Furthermore, with the rapid urbanization of spaces, especially within the geographical south and east, agricultural lands are being absorbed into urban centres, causing the fragmentation of agricultural land and aggravating the difficulties caused by lower food yields. Finally, food acquisition presents itself as one of the leading causes of food insecurity. Food acquisition problems are apparent not only in the geographical south and east, but also in the geographical north and west, a space less often associated with food insecurity.
Politics and Food Security
Over the past few decades a paradigm shift has started taking place in the way that food insecurity, and more specifically famine, is understood. With climate change and decreasing agricultural land, there tends to be an expectation that famine is the inevitable result of these pressures. However, following Amartya Sen's theory, as described in his book "Development as Freedom", all famines are avoidable. In fact, in properly functioning multi-party democracies famine is virtually impossible, because democratic institutions uphold people's right to food. This suggests that even in a desolate landscape with no agricultural yield, food security could still be present thanks to the presence of a fully functioning democracy. Unfortunately, there are very few, if any, countries that possess fully functioning democracies, which, following this theory, explains the continued occurrence of famines. Delving further into the problem of governments and their structures, political conflict itself is also a significant source of food insecurity. Political unrest, economic instability and the increase in displaced people are putting more pressure on countries with respect to food provision, and it is becoming increasingly difficult to prioritize the right to food over other issues that must also be addressed. One example of this, which will be the focus of this case study, is Sana'a, Yemen. As the capital of Yemen, Sana'a receives many internally displaced people (IDPs) who migrate to this urban centre due to the political unrest in the surrounding regions caused by the proxy war between Saudi Arabia and Iran, creating a very particular situation in which famine has flourished.
Scope
Because the Yemeni famine revolves around the ongoing proxy war, food insecurity is most prevalent in areas where there is active fighting. The people most affected are IDPs, host families whose resources are stretched thin, and marginalized groups, all of whom face challenges in accessing food even where food distribution sites are unaffected.
Sana'a: The Yemeni Context
The Republic of Yemen is currently facing a severe famine in which about 53% of the Yemeni population, approximately 15.9 million people, is suffering from severe food insecurity. However, unlike many famines currently plaguing the world, which arise from unfavourable climates and poor soils, the Yemeni famine is a man-made phenomenon. It represents a blend of physical geographical barriers and the consequences of human actions. Yemen is caught up in a regional conflict, a proxy war between Saudi Arabia and Iran, which can be seen in the larger context of Sunni-Shia power tensions in the Middle East. However, its consequences are localized, where civilians are facing a man-made famine that is entirely preventable.
Yemen is a moderately sized country with very little arable land to support agricultural productivity, and the aggravating factor of increasing water scarcity is predicted to decrease agricultural output by up to 40%. As such, there is a heavy reliance on food imports, which make up 90% of Yemeni food sources. Unfortunately, with the main port of Hodeidah blockaded and contested in the fighting, very little food is being distributed throughout the country. Specifically in Sana'a, severe food insecurity has been an increasingly prevalent problem since 2015, when the Houthis took control, and it is intensified by internally displaced persons (IDPs) who have migrated to urban centres due to the political unrest in the surrounding regions, aggravating the difficulty of procuring nutritious food in a city that is already suffering from human-caused food acquisition problems.
Transport Routes
As Yemen’s capital, most populated city and host to the main international airport, Sana'a serves as a crucial port of entry for supplies, and is thus relatively more accessible to food deliveries and humanitarian aid, giving it an advantage over some of the more rural areas of the country. Additionally, one of the few accessible roads left in the Yemen is that between Hodeidah, the country’s main port, and Sana’a, allowing for a more efficient transfer of materials. However, this road is still highly damaged from Saudi airstrikes, and the journey between Hodeidah and Sana’a now takes 12 to 18 hours instead of 8 to 10 hours, making it more difficult and potentially dangerous to transport food and supplies from Hodeidah into Sana’a. As of January 30, 2019, a ceasefire has been declared along that road so that food and aid can reach the capital and then be dispersed throughout the rest of the country, as Sana’a serves as the country’s crucial node of transportation and distribution.
Famine as Strategy
The Saudi-led coalition is using the lack of food access as a strategy for domination, carried out through bombings, endemic unemployment and job loss, and inflation. This can be seen in the coalition's bombing of civilian targets and food centres, including a processing centre in Sana'a that produced 40% of Yemen's cooking oil. This famine has been regarded as "man-made" because sufficient food is arriving in Sana'a (nearly 90% of Yemen's food is imported), but few people have the income or the fuel and energy needed to purchase and cook it. Alex de Waal, a researcher on man-made famines, described the case of Sana'a as "an economic war with famine as a consequence," as the Yemeni market is being systematically destroyed through airstrikes. The magnitude and severity of the crisis require humanitarian food assistance (HFA); without it, "67% of the total population would be in need of urgent action," meaning more people would be living in affected crisis, emergency, and famine zones.
Food Prices and Availability in Markets
The situation in Sana'a, as in the rest of the country, remains dire, as Sana'a is one of the sites where armed conflict and airstrikes have been most severe. In Sana'a, extreme shortages have caused wheat flour prices to skyrocket, and there has been a sharp decline in the availability of cooking oil. The price of water in Sana'a has tripled since the conflict began, as the pumping systems have been hit by the lack of diesel fuel. Families rely on water for food preparation and the keeping of livestock, which supports much of the livelihood in the country. As a result, more than half of the shops in Sana'a have been shuttered, and rampant unemployment makes it impossible for families to afford the rising food prices. The conflict has limited access to clean water and health services, which has led to the outbreak of disease, including the 2017 Sana'a cholera outbreak which killed 115 and left 8,500 ill. Additionally, the Sana'a airport was closed between August 2016 and November 2017, and during that time the Ministry of Health estimates that 10,000 Yemenis died from critical health conditions for which they were seeking international medical treatment but were unable to travel due to the airport closure.
Internally Displaced Persons (IDPs)
With Saudi Arabia preventing food imports from arriving and being distributed, as well as conducting mass airstrikes across the country, more than 3 million civilians have become IDPs and remain so indefinitely. IDPs in Yemen are prone to experience food insecurity more intensely, as they rely on donations and humanitarian assistance, and report to eating less than three meals a day. As a consequence, they often resort to the trafficking of children for child labour, child soldiers, and early marriage to secure food. As Sana'a has significantly more advanced infrastructure, medical treatment, and shelter than Yemen's rural areas, IDPs and other rural inhabitants are flocking to the capital to seek out treatment for malnutrition or other diseases such as cholera, in addition to core relief items and better food access due to the presence of Sana'a's international airport.
Famine and Gender
Additionally, the famine in Yemen has disproportionately affected people based on gender. Yemeni women are usually the first to skip meals or eat smaller portions so the family’s food rations can last longer. Since the outbreak of conflict, early marriage is increasing, and girls aged 8-10 are often married off to reduce the amount of family members to feed or as a source of income to feed the family.
Lessons Learned
Famine is a far more complex phenomenon than meets the eye. It unusually transcends natural occurrences such as crop failure or climate, and is largely due to man-made factors, such as war or government policies. Although Yemen was already the poorest country in the Middle East, its famine has not been provoked by widespread drought or blight. Rather, the Yemeni famine has been the result of political and social unrest, including Saudi airstrikes and the strategic blockade of key ports. Famine is being used as a deadly war tactic against innocent civilians, as Yemen has found themselves in the crossfires of a regional proxy war between Saudi Arabia and Iran, amidst the 2015 Houthi uprising. Airstrikes have destroyed much of the country's infrastructure, including food processing and distribution facilities, health care facilities, and civilian housing. There is a dire need for international humanitarian funding for development, as scarce resources are further exacerbated by the increase of IDPs. As a result, Yemen's population is facing extreme food insecurity, as food imports are limited and families are unable to afford the mass inflation.
Sana'a, being the country's capital and central node of infrastructure and transportation, serves as a critical case study, as it has often been targeted as the site for airstrikes and conflict, representing the localized effects of the regional power struggle. The problems in food aid distribution due to the long-lasting food crisis in Sana'a and other Houthi controlled urban centres can serve as a space of learning for food aid organizations, as they work to find effective methods of distributing food aid without being pillaged or blocked. Due to exponentially increasing population and urbanization across the geographical south and east, man-made famines will likely be an unfortunate reality for other sites, as they face social and political unrest. Thus, these methodologies then have the potential to be applied to other countries that may face man-made famines, as this is far from being an isolated phenomenon.
References
1. FAO, IFAD, UNICEF, WFP, & WHO (Eds.). (2017). The State of Food Security and Nutrition in the World 2017. Rome: FAO.
2. Yemen - Food Security Outlook. (2018, May). Retrieved March 5, 2019, from http://fews.net/east-africa/yemen/food-security-outlook/november-2017
3. Gundersen, C., Tarasuk, V., Cheng, J., Oliveira, C. de, & Kurdyak, P. (2018). Food insecurity status and mortality among adults in Ontario, Canada. PLOS ONE, 13(8), e0202642. https://doi.org/10.1371/journal.pone.0202642
4. Arulbalachandran, D., Mullainathan, L., & Latha, S. (2017). Food Security and Sustainable Agriculture. In A. Dhanarajan (Ed.), Sustainable Agriculture towards Food Security (pp. 3–13). Singapore: Springer Singapore. https://doi.org/10.1007/978-981-10-6647-4_1
5. Moustafa, K. (2018). Food and starvation: is Earth able to feed its growing population? International Journal of Food Sciences and Nutrition, 69(4), 385–388.
6. Sen, A. (1999). Development as Freedom. New York: Oxford University Press.
7. Laub, Z. (2015, February 25). Who Are Yemen's Houthis? Retrieved from https://www.cfr.org/interview/who-are-yemens-houthis
8. Yemen IPC Acute Food Insecurity Analysis. (2018), 8. Retrieved from http://www.ipcinfo.org/fileadmin/user_upload/ipcinfo/docs/IPC_Yemen_AcuteFI_2018Dec2019Jan.pdf
9. FAO Country Programming Framework (CPF) Republic of Yemen. (2017), 55.
10. Casey, R. (2018, September 28). Yemen is undeniably the world's worst humanitarian crisis: WFP. Retrieved March 5, 2019, from https://www.aljazeera.com/news/2018/09/yemen-undeniably-world-worst-humanitarian-crisis-wfp-180928051150315.html
11. Kandeh, J., & Kumar, L. (2015). Developing a Relative Ranking of Social Vulnerability of Governorates of Yemen to Humanitarian Crisis. ISPRS International Journal of Geo-Information, 4(4), 1913-1935. doi:10.3390/ijgi4041913
N.B. #1: Before we begin, grab a big cup of coffee or your favorite glass of spirit. This is going to be a long read.
N.B. #2: This article has not gone through the magical hands of our editor. It’s raw, unedited, and comes from the heart.
Photo credit @morgansaignes
Introduction
An artistic movement is generally named by a critic and recognized after the fact, once all of the artists of said movement are well up there in their years or have long passed away. Think of the Impressionists or the Renaissance painters, who naturally started creating art inspired by one another, or art that came from studying under the same master before developing their own style and adding their own twist. Each movement gathered around the same principles of expression and subject matter. The same can be said of the watch photography we see on Instagram. There are basically two main schools of watch photography: the lifestyle one, which almost always uses natural light and shows the watch being worn, and the studio flat-lay one, in which the watch lies flat on a surface and is surrounded by props.
Other types of watch photography that exist—for example, wrist shots and outdoor flat-lays—although we can often come across them, don’t quite belong to either of the two aforementioned and established trends.
The two styles identified earlier are diametrically opposed, and if a photographer adopts one style, he/she doesn’t do the other. And both movements, based on my own clinical observation of thousands of posts over the past two years, were created at the beginning of the COVID-19 pandemic. Of course, I cannot go back in the past and see what type of photography was being made before 2020 (because I was not on Instagram). I do know, however, through the multiple interviews I’ve conducted, that all of those who embrace natural light and lifestyle photography started doing so more or less at the end of 2019, and especially at the beginning of 2020, when the world came to a stop and we had nothing better to do than to photograph our watches.
In this article, then, I wanted to talk about the lifestyle type of photography, which I propose to call “Koda Watch Photography,” in reference to the legendary Kodachrome camera film that was used to create most of the historical and iconic documentary photos of the 20th century. Kodachrome was famous for showing color in its most subtle tones, offering good contrast, and representing dynamic range accurately: all visual qualities that can be attributed to the type of watch photography we are about to discuss and discover in this article. As we will see, all photographers of this style share common aesthetics and methods, aspirations and inspirations, and all have captured my wildest imagination through their outstanding work.
I will name a few of these photographers who many of us look up to and who are at the origin of this movement. Because all of those who best represent this style have mentioned one another as a source of inspiration, it is possible to trace back one of the main influences for this style of photography to one particular individual. Please do take what I say in this article with a grain of salt, though. All of what I’m talking about here is subjective and you may or may not agree with me. Hey, that’s fine. Send me an email to share your thoughts: [email protected].
Photo credit @thewatchdude2
Where It May Have Started
I assume there are tens of thousands of people who photograph watches today. Just like there is a new independent brand (I started to use the word “independent” instead of “micro”) coming to life each day, there is a new watch photographer appearing on Instagram each time the sun rises. There cannot be enough of us. And, actually, the more people photograph watches, the more people there will be photographing watches. More content gets created, we get more exposure to watches and brands, and we have more opportunities to interact with each other. Regardless of the above, all of these thousands of people who started photographing watches in the past two years (2020-2022) may have, consciously or not, been inspired by a very few people whom we all know and follow. One of them is Allan a.k.a. @TheWatchDude2.
I wrote a profile story about Allan a few months back, and since then I have interviewed another dozen watch photographers whose style is reminiscent of Allan’s, and for a good reason: he was their inspiration. It seems that they all miraculously came across Allan’s work when they first created their Instagram accounts, perhaps due to the way the algorithm worked in 2020, or because Allan has impeccable taste in watches and photographs some of the most popular sport watches that exist: the Rolex Explorer 1 and 2 and the Submariner, amongst others. Or, most simply, his stuff is so good that we all got hooked as soon as we saw a photo of his.
Photo credit @thewatchdude2
No, this article is not here to just praise Allan. This article is a story about what makes his work so influential and how he inadvertently brought many of us together by way of his own passions and personal influences, and how it all comes together into a watch photography movement.
I invite you to think of Allan as the person who may have started the movement for lifestyle watch photography, and also as someone who was himself influenced by lifestyle and documentary-style watch photography outside of the Instagram realm. Before he started photographing watches, Allan had been photographing for a while and perusing photography magazines in which his artistic eye was being trained to look at light and frame shots in particular ways. He therefore transposed these influences into watch photography. Just like Henri Cartier-Bresson always carried a camera with him, Allan made a habit of doing so in order to never miss a shot.
As we will see in more detail below, all of the photographers named in this article, whose work falls under the same style as Allan’s, use common visual-language characteristics that make it possible to categorize them under the same style of photography. These characteristics have to do with the light, the framing, the subject matter, and the editing. I think it’s necessary to let you know, at this point in the article, that I’m no photographer and hence will be using simple language to explain these characteristics, which I believe they all share in common.
Lastly, Allan may not actually be the one person who started the Koda Watch Photography movement. But he’s one of its best ambassadors.
Photo credit @young_watch_dude
Key Common Visual Characteristics
First and foremost, let’s look at the list of photographers I have in mind when speaking about this style of watch photography: @TheWatchDude2 (obviously), @a_watchguys_life, @m.adcock81, @morgansaignes, @lar5erik, @ethanwrist, @wristcheckindia, @youngwatchdude and @the_vintage_guy. Naturally, their styles are not carbon copies of one another (thankfully so), however I can easily argue that they share many similar characteristics. It is important to note that I have not interviewed all of the aforementioned photographers; I was brought to discover their accounts because they share common visual characteristics with Allan’s work and with the photographers I have interviewed. And this is just a short list to help you get started in analyzing their style of watch photography on your own. There are many others like them out there.
The first common element of their visual expression is how they use light. They, for the most part, use natural light, and they do so at specific times of day: right before sunrise or sunset. The latter is when you get the “golden hour,” during which the light becomes warmer and more cinematic. This is my favorite time to photograph watches, as the light is even and has hints of orange, giving the photos a relaxed mood and soft shadows. When this light is then edited in Photoshop or Lightroom, one can accentuate the contrasts to make the photo even more moody. (There is actually a hashtag, #moodywatchshot, which I follow, and I recommend you do the same.)
Photo credit @the_vintage_guy
Natural light can also be used when there are clouds and basically inclement weather. I found that photographing outdoors half an hour before a storm hits is also a great opportunity to get good lighting. Some of our friend photographers live in Northern Europe (Scotland, Finland, Norway) where cloud coverage is quite a normal sight. It won’t give them a good sun tan but it’s basically like having an enormous softbox to photograph with. Lastly, a good natural light can be had by standing by the window of your home, either at sunset, or by covering your window with a sheer curtain (I sometimes use a painter’s drop cloth.) This allows indirect and softened light to come in. Ah, I almost forgot standing in the darkest corner of your room at any time of day (thank you Harin.)
All of the Koda photographers use either one of the above techniques to get even natural light that displays subtle contrasts and soft shadows. @Lar5erik and @a_watchguys_life, for example, always photograph near a window that is covered by a sheer curtain to diffuse the light, and use reflectors to make the light bounce from the opposite side (inside the room) to light up all sides of the watch with the same even light. When photographing outdoors, the key is to wait for the right moment when there is no longer any visible sun so that there won’t be any reflection on the crystal of the watch. Looking at Allan’s photos, one can tell that he often photographs right before sunset. @Morgansaignes and @m.adcock81 use indirect light coming from a window, but don’t stand immediately next to it.
After light comes framing, which itself goes hand-in-hand with spontaneity. Our friends’ photography displays watches in a way that shows them within a context. This context is how the watch is being worn on the wrist, as opposed to flat-lays, in which the watch lies flat on a surface, surrounded by props that are always more or less the same: a knife, a book (about watches, preferably), coffee beans, a camera, a lens, and some kind of cool leather jacket. Conversely, Koda photographers show the watch on a wrist and parts of the body (the arm, the mid-section, or even the full body) to give the watch a sense of proportion and style. A watch by itself surrounded by inanimate objects does not infuse life into the timepiece, nor does it show how it wears on the wrist. Koda photographers are positively obsessed with showing the viewer how it feels to wear the watch, which is, after all, what we want to know about it.
But a watch being worn hiking, hugging a loved one, or riding a motorcycle infuses the watch with life, energy, and character. More often than not, Koda watch photography is about a sports watch, not a dress watch, which makes it possible to place the watch in more adventurous situations and to pair it with a certain type of clothing. Allan, going back to the ambassador of this movement, always pairs his watch with fashionable, rugged pieces of clothing that perfectly match each other. He manages to create a visual aesthetic that we gravitate to and which invites us to join him on his adventures. That is what drives the others to do the same, which in turn drives more people to join in this type of photography. (I do not have a good wardrobe, which doesn’t prevent me from trying my hand at Koda photography in a nearby forest.)
Photo credit @a_watchguys_life
A while back I read about a study of how people from different cultures photograph a portrait. Americans would do close-up shots of the person’s face while Japanese would show the person, full body, within a complete context, for example a room. While Americans would tend to remove the context the person lives in completely, Japanese people could not make a portrait—which is a representation of someone—without showing their environment. Koda watch photography is more similar to how Japanese people go about doing a portrait: they show part of the entire context the watch is being worn in. Although most of them only show part of their body, because they want to remain anonymous, they do show where the watch is being photographed, be it a studio, a forest, or an apartment. So showing the environment, the context in which they are truly wearing the watch is key for Koda photographers.
Oftentimes, the location they photograph the watch in says a lot about what we are up to most days. For example, @a_watchguys_life takes all of his photos inside his home by a window. All of his photos display an even and soft lighting that is by far the cleanest and clearest of all lighting I’ve seen on any watch photos. He therefore photographs all of his watches in the same context of his home where he has been spending more time due to the pandemic. Of course, I’m not saying that he doesn’t spend time outside of his home—he’s not a hermit!—but that is where he photographs his watches. His collection exists within the context of his interior living space. He could have done flat-lays, but instead he invites us to be part of his process by showing part of his body in the photo, by giving us a context.
Koda watch photographers also tend to be more spontaneous, since they photograph according to the natural cycles of the sun. While some of them aim to photograph every day one hour before the sun sets, they may not be able to do it at all because the sunset could suddenly be hidden by clouds and the photographer would find himself without enough light. Allan takes his camera with him everywhere he goes so that he can snap a photo when the light is right. Lars photographs his watches outdoors when the weather and light are right, sometimes having to improvise a shot. Although I won’t compare myself to any of our friends here, I do tend to photograph out of the blue because suddenly the light is right. Koda photographers, therefore, tend to be spontaneous. And if they aren’t, they are nevertheless following a natural element of daily life: the cycle of the sun rising and setting.
Last, but not least, the editing also plays an important role in how the photos come out. Since they use natural light, the photos will automatically come with natural shadows that they won’t try to correct during the editing process. Rather, they will edit the photos to add or remove contrast, or alter the color of the light to add or remove some of that cinematic quality. But whatever they do, they will never remove the visual quality that comes from photographing with natural, imperfect light. They are looking, shall we say, for a general aesthetic that matches the context within which they photographed the watch. If they photographed during a snowstorm, they will translate the coldness of the air by making the light bluer; if they photographed during the golden hour, they will translate the warmth of the light by accentuating the reds and the oranges. They respect the natural tones produced by the light and only ever so slightly change them during the editing process.
As we’ve seen, our friends share common traits when it comes to photographing their watches: using natural light, giving context to the photo, and editing the photo to safeguard the qualities of the first two other characteristics. What links all of them too is how they perceive watch collecting and photography.
Photo credit @the_vintage_guy
A Shared Vision of Photography and Watch Collecting
The painters who fall under the umbrella of Impressionism clearly had a certain sensibility to the natural world, the changing color of the sun as it sets, of how the wind can move tree branches and the clouds create even lighting. They all had a certain fascination for the natural cycles of life and spent a lot of time observing them. They slowed down a lot, I imagine, to figure out how to best capture what they were seeing and feeling internally. The same can be said of Koda photographers: they see watches the same way and approach photographing them the same way too. For one, they see watches as a way to connect with others, to express who they are and who they aspire to be, and to have that specific relationship with time that’ll come from tracking it by way of a timekeeping device.
A common trait that all Koda photographers have—again based on those I have interviewed and on the captions of similar photographers on Instagram—is that a watch is an object that connects them to others. In many cases, their first watch or their most important watch was given to them by a parent or spouse to mark an important milestone—getting a first job, getting married, or the birth of a child. The watch is the object used to connect them to this happy event and is given by someone who is important to them and who understands what they love and what they put importance in. The watch, in this context, is more than a timekeeping device: it represents history and as such, a watch will never leave a collection. I own such a watch, a Breitling Navitimer that belonged to my father and which I inherited when he passed away. This watch will never leave my collection and connects me with my dad who also enjoyed horology.
Watches can also be a way for them to express who they are. In the case of male collectors, a watch is virtually the only piece of jewelry they can wear and that makes a statement. If one wears a dive or field watch, it means he is into adventure. If one wears a dress watch, it means he is a business person. Of course, these watches are interchangeable but the point is that what they strap on their wrist says something about them. The same is true of female collectors who accessorize with a watch; a specific watch for a specific task or moment of the day. For either gender, a watch becomes a way by which we signal to others who we are and what we are about. And sometimes a watch can mean two things at the same time: it can mark an event (the birth of a first child) and signal being a proud parent.
In order to properly relate the way they feel about their watches, Koda photographers will, for the most part, photograph watches in the environments they imagined themselves wearing them in. Doing so requires a go-slow, environmental type of photography which makes it possible for the photographers to show the watch being worn within a complete context. Flat-lays wouldn’t be as effective at showcasing, for example, the intense emotional drive that one gets when going on a hike in the Scottish mountains, or diving colorful coral reefs. Going outside to photograph a watch is a commitment, and as such means photographing less, producing fewer shots than they would have by remaining indoors.
And even the Koda photographers who mostly work indoors (thinking of @a_watchguys_life and @m.adcock81 here) spend tremendous amounts of time preparing the shot to demonstrate, by way of images, how they feel about the watch and how it feels to wear it. They do so by framing each shot differently and finding new angles; they do take their time to put it all together. They don’t really re-use the exact same prop or angle each time, so they have to spend time (such a nice recurring word here!) brainstorming the ways in which they can tell the story using new angles, new props, new backgrounds, to communicate something unique. Think again of the Impressionist painters, who could represent a garden or a river bank, a factory, or people walking the streets of Paris, by using the same thematic narrative.
Photo credit @Lar5erik
The way Koda photographers capture their watch is so important that oftentimes they take as much pleasure in speaking about photography as they do about watches. In most cases, actually, they prefer talking about the creative process they go through to capture the watch rather than the watch itself; talking about the watch's specifications or history is not as interesting as talking about which mountain they climbed to get the right light, or how they generally go about shooting watches. Naturally, they are all into watches, but it is interesting to see that they care as much about the process of creating the image as they do about wearing the right watch for the right occasion.
Similarly, Koda photographers are very engaged on social media and make a point to respond to each person who leaves a comment on their Instagram posts, and to engage with other photographers themselves by way of comments or direct messages. We can all agree that we watch nerds love to talk about watches, and it is equally true that Koda photographers love talking about watches and photography, engaging the community uniformly on these topics. The same is true of anyone who is into any sort of hobby or collection: they probably spend more time talking about what they do than doing it, and that is what keeps the community alive and thriving. Allan, for example, engages in lengthy conversations about photography and where he goes to shoot, more than he does about the watches he takes on his adventures.
And I can totally relate to this: ideally, every day I would go hike a mountain or explore a desert to photograph a watch I’m writing about or wearing for my own pleasure. I get magnetized by the types of adventures Koda photographers go on with their watches, and I have wanted to do the same for a long while. As soon as I joined Instagram, and as soon as I had the opportunity to travel in 2020, I took my then-only watch to the Southwestern deserts of Arizona and photographed it while hiking in canyons and riding dirt bikes in the red sands of Sedona. All of this is to say one thing: Koda photographers go out of their way to photograph watches, and as such, enter into some kind of zone while doing so. They must go slow in order to capture a certain vibe in their photographs, a vibe that is present in all Koda photographers' work.
Another common trait of Koda photographers is that they don’t see themselves as professional photographers, despite the professional quality of their work. This is not the expression of some sort of artificial humility; rather, it is a sign that what they care about most is taking a good photo of a nice watch to accurately represent the experience of wearing it, the situations it can be worn in, and what the watch is all about. Similarly, it is about sharing a moment they spent with their watch, not about nailing the best shot possible to win a prize. All of what they do, in a way, is about slowing down and teaching themselves new skills (photography, videography, editing) to better integrate their feelings about watch collecting into their process and to create memories with the timepieces they wear.
As we saw, Koda photographers have similar ways to look at watches and to create a bond with them. In the last part of this article, I want to get a bit deeper about a few points of their process most of them have in common.
Photo credit @m.adcock81
A Shared Process
There are three specific aspects of their creative process that Koda photographers have in common, and in some ways we have touched upon some of them already. (But they are key and deserve to be repeated.) The first one is that they photograph more or less at the same time each day; the second is that they picture their shots in their heads before taking it; third is that they take their time to photograph. I know, it doesn’t seem like the 21st century’s most sensational revelations. However, these three steps are key in creating Koda photographs. They don’t all do all three steps and although there is somewhat of an order in which to go through these steps, they don’t necessarily all move through the same motions in the same order.
Since Koda photographers use natural light, they will necessarily photograph around sunrise or sunset in order to get that soft and contrasty light we spoke about. By default, this means they must wait for either of these natural phenomena to occur, which means they are bound to photograph more or less at the same time each day. The very nature of their process, in other words, makes them follow a natural order of things. Sometimes, however, they can photograph at any time of day as long as they have light, since a window covered by a sheer curtain provides a constant stream of soft light. Those who photograph in this way tend to always shoot at the same time, toward the end of the afternoon, after a long day of work.
If we extrapolate this idea of photographing at the same time each time they shoot, we must also look at those like Allan who go photographing in the outdoors and dedicate an entire day or weekend to the craft. When he goes out, Allan in a sense dedicates a set time to photography (in this case the weekends) and will wait until the right cloud coverage is present, or for his watch to show that it's getting close to sunset, to take most of his shots. Someone like Allan plans some of his shoots several weeks ahead of time. On the other end of the spectrum, someone like @m.adcock81 photographs at the same time each morning from his studio in the basement of his house. (Although we should note that recently he has been spending more time going on adventures.)
Another common characteristic of Koda photographers is that they picture the shot they want to take in their head before taking it. Whether they want to do a wrist shot or a more complex setup (e.g., them wearing the watch driving a Land Rover), they picture the whole scene, a bit like a movie director who puts together all of the scenes in his mind’s eye before rolling the cameras. This step is important for Koda photographers as it allows them to mentally prepare to set up the shot and take it, allowing them to slow down when photographing, limiting the number of frames they take; taking fewer photos but better ones. This idea of picturing the shot in their head is common to all creatives, and I bet that people who do flat-lays go through the same process.
However, it’s the combination of all of what they do that creates this type of art. Yes, Koda photography is an art form in its own right. Which brings us to the last common characteristic they all share: going slow. As we have established, they like to use natural light, which inherently means they must wait for a certain time in the day to photograph. Once they have reached that crucial moment, they must then wait until the light is perfect: not too strong, not too weak, the type of light that creates soft contrasts and almost no shadow and has certain hints of red, orange, or blue. This means that they wait and observe the light changing to take the best photo possible.
Those Koda photographers who shoot indoors have a tendency to literally sit with their watches before photographing them. They all happen to be coffee drinkers and to own one or two watch magazines. Imagine them sitting down with their cup of coffee, looking at their watches and imagining how to photograph them as they wait for the light to change to perfection. A good example of this is @a_watchguys_life, who goes through the aforementioned motions every time he shoots. And @morgansaignes does the same thing: he sits with his watches and waits for the right light to photograph them.
Photo credit @young_watch_dude
Conclusion
For all Koda photographers, this art form was at first a necessity: they needed to learn how to photograph a watch (if not learn to photograph at all, to begin with) in order to show people the watches in their collection. For most of them, photography became a passion in itself and they made significant investments in getting the right gear. They realized that learning photography was an integral part of telling stories about their watches, and that’s why they became so good at it. They are also regularly inspired by how each of them photographs, and they aspire to mimic this photography style because it is the one that inspires them to tell stories. It’s akin to finding the right culinary school, one that teaches how to cook in a way that naturally resonates with us, instead of going to the best and most popular school to learn to do things the same way everybody else already does.
In a sense, all Koda photographers are endowed with the same creative sensibility and respond to the same type of storytelling. The documentary/lifestyle type of photography using natural light appealed to them in an almost visceral way and they all naturally gravitated toward each other because of that.
Alright, I know this was long and convoluted at times. I’m not sorry. I wrote this article with as much spontaneity as Koda photographers pull out their cameras to capture the right moment. Just like the light won’t be perfect (there will be dust on the lens, too many shadows), I too wanted to share my thoughts about something that has unequivocally unified many of us watch nerds, amateurs of photography, and, most importantly of all, lovers of storytelling.
I wish I could have gone more deeply into each Koda photographer’s style and quirks. I did mention you because you are part of this movement and your name should be mentioned. You are part of something that nobody meant to create or plan for, and that’s what makes it so beautiful. We may not always have the time to go out and explore, but we always have time to get in touch and share our respective passions for watches, photography, and stories.
Thanks for reading.
I am not sure what made me research where the phrase “If you can’t say something nice, don’t say anything at all” originated, but I was surprised to find out that Mrs. Rabbit reminded Thumper that this is what his father told him in the Walt Disney movie Bambi. You can see the video clip here.
According to Walt Disney, mother and father rabbits teach their youngsters kindness and the use of gentle words and actions (anti-bullying manners). The movie Bambi was released in 1942.
I found the article fascinating, as it explains that the three youngsters who become friends, Thumper, Bambi, and Flower (the skunk), also exhibit another moral lesson in the virtues of tolerance and an easy disposition. Link to text here.
I found an equally surprising lesson reflected in Albert Einstein’s quote below.
“Everyone is a Genius. But if you judge a fish on its ability to climb a tree, it will live its whole life believing that it is stupid” ~Albert Einstein~
What makes people dwell on differences and challenge, protest, and attempt to set themselves apart from others, instead of learning and teaching to coexist with others in a peaceful manner?
There is a beautiful song 525,600 minutes. How do you measure a year in the life? Do you want to find yourself with only a few minutes left in your life only to look back and see that you used so many of your minutes up angrily protesting against someone doing or simply attempting to live differently than you chose to do?
When we are taught or choose to compare ourselves with others and believe we are superior to and greater than our peers, we choose to live negatively. Criticizing others, always attempting to appear better than others, and not being supportive of or showing interest in others’ well-being and success leads to a negative, angry, stressful, restless, and irritated existence.
When we are taught or choose to be comfortable in our own skin, appreciate and encourage the differences of others, find our own special uniqueness and develop it to its greatest potential, and support others in their choices and differences, we will lead a peaceful, harmonious life filled with Joy and Peace and free from restlessness, agitation, and anger.
Everyone has a choice in life. You can choose to bring Joy and Peace into your life and others, or you can choose to create distress and anguish within yourself and others. Those who live their lives in Love, wishing no harm for others will prevail. Living in fear and lies will never bring you Peace and Joy.
The world would be so much more enjoyable if everyone chose to learn from and about each other rather than judge and condemn each other because of differences.
No moment is ever wasted if it is spent communicating with another soul. When we exchange words in kindness without expectation we receive great rewards from the Joy that we experience from interacting and showing respect to another human soul.
In order to comprehend the deepest understanding of Love, we need to be totally free from shame by standing in total and complete truth. At this level, we are able to feel worthy of accepting Divine Love, Respect, and Understanding; only then do we feel complete and can in turn share this level of Love and Respect with every living being on earth.
Securing the Air Cargo Supply Chain,
Expediting the Flow of Commerce:
a Collaborative Approach
John A. Muckstadt
Cornell University
Cayuga Partners LLC
Sean E. Conlin
Deloitte Consulting LLP
Walter H. Beadling
Cargo Security Alliance
Cayuga Partners LLC
1. Executive Summary
The Congressional mandate for 100% screening of air cargo carried on passenger aircraft originating in the U.S. takes full effect in August 2010. This event has the potential to seriously disrupt the air cargo supply chain, with severe economic consequences. Timely, effective implementation of the TSA’s Certified Cargo Screening Program (CCSP) promises a solution, but only if shippers, industry associations, and the air cargo industry work together and take quick action. Successful deployment of the CCSP depends upon adherence to, and the rapid adoption of, the “5 Principles of Secure Supply Chain Design and Operation” described herein.
2. Background: the Congressional mandate for screening air cargo
The paradigm of passenger air cargo security was set to shift on August 3, 2007 when the 9/11 Act was signed. In addition to mandating security enhancements across the government, this law “requires the Secretary of Homeland Security to establish a system to enable industry to screen 100 percent of cargo transported on passenger aircraft at a level of security commensurate with the level of security of passenger checked baggage within three years.” To date, the impact of the 9/11 Act has minimally affected the air cargo industry as it reached the 50% screening milestone in February 2009. However, the full challenges of implementing the law will be felt as the 100% screening requirement approaches in August 2010. As the Transportation Security Administration (TSA) presses toward providing a flexible solution for industry, it does not anticipate any adjustment, extension, or elimination of the Congressional mandate.
Given that cargo must be screened at a level commensurate with passenger baggage, “piece level screening” is necessary to achieve the 100% mandate. This poses complex challenges to the current supply chain, as cargo arrives at an airport in large containers, in pallets built up from smaller pieces of cargo from a variety of sources, and as loose cargo that is often mixed together. “Piece level screening” will require each piece within those larger configurations to be deconstructed, individually screened, and reconstructed, significantly increasing handling and processing time and costs.
Initially, a Federally managed screening service, similar to that used for passengers and baggage, was considered to accomplish this task. This would consist of government employees performing the screening on millions of pieces of cargo. Given the nuanced details of the air cargo supply chain, this solution had many drawbacks. TSA employees performing piece-level screening of cargo would mean a government worker inspecting and possibly opening each box, with little regard for the integrity of packaging or the special handling requirements of some goods, and little, if any, consideration given to the need for quick throughput. Additionally, screening cargo at the piece level at the airport would result in bottlenecks and delays. Alternatively, screening could be performed by the air carriers at airports. Although air carriers have screening capabilities, most facilities do not have the necessary operational scalability from a technology and logistics perspective. Even if on-airport facilities could adjust their operations, the bottleneck and delay issues would still exist. To eliminate these problems and achieve the required service levels, the costs would be prohibitive.
3. TSA’s Solution: a “supply chain” approach
To the air cargo shipping industry, the 100% screening mandate increases complexity. Recognizing this, the TSA adopted a flexible “supply chain” approach that considers both security requirements and unique industry needs in the design of practical, workable screening systems. To enable this, the Certified Cargo Screening Program (CCSP) was established. The CCSP is a voluntary, facility-based program in which shipping and freight forwarding actors can become certified to screen cargo. The process to become certified involves implementing facility security standards, vetting employees who will have access to screened cargo, training employees to perform security and screening roles, and implementing the processes and technology needed to screen cargo. Once certified, CCSP locations, known as Certified Cargo Screening Facilities (CCSF), can tender screened cargo through the air cargo supply chain, directly to air carriers, with no need for it to be screened at the airport. Properly implemented, the CCSP promises to maintain supply chain velocity and the overall flow of commerce without loss from theft, terrorism or other threats.
The CCSP program is open to three primary participants in the air cargo supply chain:
- Manufacturers (OEMs) and shippers from all industries that move enough cargo by air can join the program and screen their goods as they are being packaged, thereby avoiding the need for specialized screening equipment and preventing others from opening and tampering with their goods.
- Freight forwarding companies and Indirect Air Carriers (IAC) can participate in the program to provide screening and maintain secure custody of cargo on behalf of their shipping customers.
- Independent Cargo Screening Facilities (ICSF), a new business model that specializes in screening cargo destined for passenger aircraft on behalf of air carriers and indirect air carriers.
A simplified schematic view of the CCSP supply chain depicting product flow options and the chain of custody can be found in Appendix II (1).
TSA has recognized that there is not a “one size fits all” screening solution in the air cargo shipping environment. CCSP, unlike the on-airport, Federal or air carrier screening approaches, builds flexibility into the screening process in many ways. First, it provides the option for shippers to screen their products themselves; it also allows different types of commodities to be screened with processes and procedures that meet their specific and unique needs. Secondly, CCSP allows for a variety of screening methods and technologies to be used – including physical search – which many of the shipper participants can readily build into their existing packaging and shipping processes.
In securing the air cargo supply chain, screening alone is not enough. TSA’s supply chain approach is layered with other countermeasures against air cargo threats, such as vetting shipping companies and employees, facility security controls, chain of custody methods, and secondary screening. It is especially critical to establish and maintain a secure chain of custody that ensures cargo is not subject to tampering and compromise as it passes along the supply chain after it has been screened.
In summary, TSA has taken an approach that allows industry to participate in securing the air cargo supply chain in a manner that minimizes costs and best suits their needs. By varying the nodes in which screening can take place there are many more options available to air cargo supply chain participants that provide for special circumstances and give more control over their products and logistics costs.
4. Challenges of Managing and Securing the Air Cargo Supply Chain
The air cargo supply chain is, by definition, a high velocity supply chain. For shippers that move their products by air, lead times are critical and speed is of the essence; they are willing to pay significant premiums for it over other modes of transportation.
Cargo with a “need for speed” includes: high value items for which security is imperative and transportation costs are relatively small, exports with short lead times, “Just-in-Time” manufacturing components, critical replacement parts, perishables, shipments with unique timing requirements and circumstances (e.g. human remains, life-saving material), and goods that simply need regular, reliable arrival times over long distances. Industry segments that ship high volumes of goods by passenger aircraft are often specific to regions of the country. For example, complex, expensive machined parts for oil drilling are frequently shipped from Houston; fresh fish from the Northwestern US. Also, global high tech industries such as pharmaceuticals, computer parts, printers, semi-conductors, and other electronics often depend on air shipments to move large volumes of product by both freight and passenger aircraft.
For some products, the use of slower, less predictable modes of transport can be offset by carrying more inventory at different nodes of the supply chain to compensate for longer lead times. The trade-off is the cost of carrying this inventory and the potential impact on customer service levels. Trucking is typically the most viable alternative to shipping by air. Depending on the distances involved, express trucking services can compete on speed of delivery and cost. Obviously this is not an option when longer distances or transcontinental shipping is required.
To maintain competitiveness, it is imperative that the air cargo industry find solutions to minimize the added costs and processing time required by complying with the 9/11 Act mandate for 100% screening.
Uncertainty: challenge and opportunity
The enemy of speed and security in the supply chain is uncertainty. Demand uncertainty is so substantial in most supply chain environments that if it is not adequately addressed, it can severely degrade anticipated performance in terms of unit cost, speed, quality, and responsiveness.
Close collaboration with supply chain partners can minimize the impact of uncertainty by better anticipating demand and identifying security exposures, and planning mitigation strategies accordingly. Collaborative relationships that focus on reducing the uncertainty in operating environments by employing improved information systems and business processes will result in more efficient, secure supply chain performance. However, these collaborative arrangements by themselves cannot compensate for fundamentally flawed and operationally ineffective manufacturing, distribution and logistics environments. Well conceived and designed systems, security procedures and protocols are required.
Those CCSP participants including air cargo shippers, freight forwarders and air carriers who manage uncertainty best will gain competitive advantage; those who do not participate, or fail to manage uncertainty well, are at greater risk for failure and vulnerability to security threats.
High velocity supply chains and high security supply chains have much in common; it can be argued that you cannot have one without the other. Both depend on visibility to minimize uncertainty – the ability to know where things are, when, and where they are going next. High velocity supply chains are based upon repeatable, consistent processes to eliminate uncertainty through end-to-end efficiency and speed. Fortuitously, secure chains of custody are based upon repeatable, consistent processes to eliminate uncertainty through end-to-end traceability and uniquely identifiable, readily detectable and verifiable cargo integrity.
The Applicable “Laws of Supply Chain Physics” (1)
The 1st Law of Supply Chain Physics is “Local Optimization = Global Disharmony”. Supply chain partners acting independently and in their own self-interest will destabilize and slow down the air cargo supply chain system, increasing overall costs and vulnerability to security threats. Lack of standards and clumsy, inconsistent hand-offs among supply chain partners exposes the supply chain to a multitude of threats. Global coordination and collaboration are required to expedite the flow of commerce and maintain supply chain integrity.
The 2nd Law of Supply Chain Physics is Little’s Law, (L = λ W): “The average amount of inventory in a system is equal to the product of the demand rate and the average time a unit is in the system”. The penalty for lack of speed is higher inventories and lower customer service levels; the costs of this are quantifiable and vary from case to case. Most CCSP cost estimates focus on facilities, operations and screening technology. The hidden cost of screening air cargo without an efficient, collaborative supply chain is far greater. Impeding the flow of commerce will lead to inventory build ups and reductions in customer service that will have serious, far-reaching economic consequences.
According to Professor David Menachof of the Cass Business School, City University, London: “Using 2010 estimates of the value of air cargo shipments, an average one day delay for half of all shipments will result in an industry-wide inventory carrying cost of $537 million for just US domestic shipments. If international air cargo shipments are included [not yet subject to a deadline], this cost to the supply chain increases to $1 billion.
Much of the reason for these costs is the need for the supply chain to absorb the extra inventory needed to maintain on-time delivery . . . Note that these costs will occur even if freight is shifted from air to road transport which might end up being faster than the “delayed air shipment” but slower than the original air shipment transit time.” (2)
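To make the arithmetic behind Little's Law concrete, here is a minimal sketch of the calculation. The daily shipment value, affected share, delay, and carrying-cost rate below are illustrative assumptions chosen for demonstration; they are not the figures underlying Professor Menachof's estimate.

```python
# Little's Law applied to an air cargo screening delay: L = lambda * W.
# All input figures are illustrative assumptions, not published estimates.

daily_shipment_value = 1.5e9   # assumed value of US domestic air cargo shipped per day ($)
affected_share = 0.5           # assumed fraction of shipments that experience the delay
extra_delay_days = 1.0         # assumed average added dwell time from piece-level screening
carrying_cost_rate = 0.25      # assumed annual inventory carrying cost rate (25% of value)

# L = lambda * W: extra value sitting in the pipeline at any moment
extra_pipeline_inventory = daily_shipment_value * affected_share * extra_delay_days

# Annual cost of carrying that extra inventory in the system
annual_carrying_cost = extra_pipeline_inventory * carrying_cost_rate

print(f"Extra pipeline inventory: ${extra_pipeline_inventory:,.0f}")
print(f"Annual carrying cost:     ${annual_carrying_cost:,.0f}")
```

Under these assumed inputs, roughly $750 million of goods sits idle in the pipeline at any given time; the safety stock that shippers add downstream to protect service levels would push the true cost higher still.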
As noted above, supply chain performance and security are closely linked; hence, we propose the Supply Chain Security Corollary to Little’s Law: “The longer inventory is held in the system, the more vulnerable it is to tampering, contamination, terrorism and theft”. Said another way, the faster inventory moves through the system, the lower is the exposure to, and costs from, tampering, theft and security risks.
The 5th Law of Supply Chain Physics is “Collaboration and efficient supply chain design reduce uncertainty, increase velocity (and security), and improve operational and financial performance”. The active participation of shippers, industry associations and the air cargo community, working closely with the TSA and international regulatory bodies to develop effective standards, protocols, procedures and technologies, is prerequisite to securing the global supply chain. Further, given inefficiencies in the current air cargo system, advanced system design and optimization techniques, together with closer coordination at local, regional, national and international levels, can effectively negate the impact of enhanced security measures and expedite the flow of commerce.
5. The Five Principles of Secure Supply Design and Operation
The Essential Foundation: Integrated Systems
Efficient air cargo supply chains require five interconnected systems: engineering (system design), marketing (customer facing), cargo handling and processing, inbound / outbound logistics, and financial management. Compliance with the CCSP introduces another critical component to this mix: the need for cargo screening and a secure chain of custody. Shippers and the air cargo Industry are justifiably concerned that the introduction of this “extra step” to the supply chain management process has the potential to slow the system down, impeding the flow of commerce with damaging financial effects.
Opportunities for improved supply chain speed and efficiency tend to be at the boundaries of these systems. The greatest advantages will come from focusing on (1) integrating the five systems intra-organizationally, (2) integrating the supply chain processes with collaborating supply chain partners, and (3) implementing an integrated, systemic approach to supply chain security and employing best practices throughout the system. But integration alone will not achieve unimpeded supply chain flow; management must learn to deal explicitly with the impact of uncertainty on the supply chain decisions they make. While sharing data is essential, simply passing data will not be sufficient to substantially reduce the impact of uncertainty; predictable, verifiable, repeatable processes, close collaboration with supply chain trading partners and the effective use of advanced information technology and tools are the keys to achieving and sustaining a secure, high velocity air cargo supply chain.
Application of the “Five Principles of Secure Supply Chain Design and Operation” (3) can mitigate the impact of enhanced security procedures and increase the flow of commerce. Secure, high velocity supply chains share several key attributes. We have identified five guiding principles that provide the essential foundation for securing the air cargo supply chain while expediting the flow of commerce. Each principle is explained below, with illustrations of its applications.
Principle # 1: Know the Customer
Principle # 1- Concept
Without a clear understanding and definition of customer requirements, a secure, high-velocity air cargo supply chain cannot be established and sustained. To gain that understanding requires constant research and collaboration with supply chain partners, the construction of an information infrastructure to capture transaction data, and the storage and analysis of these data from a strategic, tactical and operational perspective.
Further, the needs of the customer must be understood within the context of the supply chain system within which it operates, the products it ships and the threats to which they are vulnerable that can vary considerably from location to location and time to time. All of these requirements must be thoroughly understood to establish the foundation for constructing responsive, efficient, secure supply chains.
Principle # 1 – Application to the Air Cargo Supply Chain
Supply chain security and logistics requirements vary greatly by type of shipper, commodity or product, operating location, and destination. For example, the logistics and screening requirements for perishable flowers, certain fruits, and live foodstuffs (e.g., Maine lobsters), which are packaged in boxes, crates, and tanks respectively, are very different from those for high-value semiconductor chips, which are susceptible to electrostatic discharge and may be shipped in special, palletized containers. Other examples include jewelry, fine art, and human remains, which are often shipped by air and must have special handling protocols because post-screening re-inspection in the event of an alarm is problematic. All demand the speed of delivery provided by air freight, but storage and handling protocols, scanning and secure sealing techniques, and the chain of custody for each are very different.
These industry-specific requirements call for the development and use of unique procedures, security protocols, sealing and identification technologies and transportation strategies designed to speed processing and ensure the chain of custody from the shipper to the carrier. In turn, requirements will influence, and in some cases dictate, the supply chain management strategy. It is therefore imperative that shippers, air cargo logistics services providers and their industry associations work closely with the TSA to establish practical standards as quickly as possible if the impending deadline is to be met.
Principle # 2: Adopt Lean, Secure Operating Philosophies
Principle # 2: Concept
Over the last decade shippers, freight forwarders, 3PL’s, IAC’s and air carriers have focused on creating lean organizations and business processes. Internal lead times have been shortened and made more predictable, set up times and work-in-process inventories reduced. But for maximum supply chain efficiency, all supply chain trading partners must design, align, and execute their jointly operated processes so that the entire chain has the desired attributes: response times must be short, predictable and repeatable. Thus lean, secure supply chains must be designed as a system that responds quickly and predictably to fluctuations in demand and available capacity.
To date, most lean initiatives have been pursued within the enterprise. To attain maximum efficiency – with increased security – across the chain of custody, lean philosophies must be extended beyond the boundaries of individual organizations to include all supply chain partners. No combination of software systems and information technology can compensate for a poorly designed physical operating environment and inefficient, sloppy execution.
Principle # 2: Application to the Air Cargo Supply Chain
A recent study of the application of Business Process Reengineering (BPR) on the air cargo handling process identified substantial benefits in overall throughput through lean operations (4). Overall, the combined processes of operations, transportation, delay, inspection and storage were reduced from 120 steps to 18 steps and overall cycle time by 74%, while facility capacities and staffing remained constant. Delays in the process were almost completely eliminated through process improvement; no additional automation was incorporated to achieve these results.
Computer simulation models are powerful tools that can be used to guide lean process development for optimal facility efficiency, throughput and performance. For example, the Air Cargo Screening Facility (CSF) Operations Model (5) provides decision support and “what-if” analysis to answer questions that are encountered in the design and operational phases of an Air Cargo Screening Facility. This model has been used to plan requirements for storage and screening capacity, staffing, outbound logistics and material usage (e.g., tamper-evident tape and seals) and also to estimate facility throughput and in-process inventory levels to help develop accurate estimates of facility set-up and operating costs. For example, changes in the receiving and sorting processes recommended using this model resulted in projected reductions of late shipments by over 90% with the same level of resources.
Another study conducted with a major air carrier at Toronto Pearson Airport used a similar computer simulation technique to analyze air cargo operations at a new state-of-the-art cargo facility, equipped with automated material handling and fully computerized inventory control systems, validating the approach (6). The purpose of the study was to develop new processes to ensure that products and services were properly aligned with customers’ needs. The tool was used successfully to quantitatively evaluate and compare different policies, business practices and procedures within a given set of operational and business constraints. The model can also be used in evaluating scenarios such as the effect of an increase in cargo volumes or changing service level policies.
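As a rough illustration of the kind of "what-if" analysis these tools support (and not a representation of the CSF Operations Model or the Toronto Pearson study itself), the sketch below simulates a screening area as a multi-station queue. The arrival rate, screening time, and station counts are assumed values, chosen only to show how capacity decisions can be tested before a facility is built or changed.

```python
import heapq
import random

def simulate_screening(num_stations, arrivals_per_hour, mean_screen_minutes,
                       pieces=20000, seed=42):
    """Crude discrete-event simulation of a cargo screening area.

    Pieces arrive at random (Poisson process), queue for the first free
    screening station, are screened, and leave. Returns the average number
    of minutes a piece spends in the area (waiting plus screening).
    """
    random.seed(seed)
    clock = 0.0
    station_free_at = [0.0] * num_stations   # when each station next becomes free
    heapq.heapify(station_free_at)
    total_minutes_in_system = 0.0

    for _ in range(pieces):
        clock += random.expovariate(arrivals_per_hour / 60.0)   # next arrival (minutes)
        earliest_free = heapq.heappop(station_free_at)          # first station to free up
        start = max(clock, earliest_free)                       # wait if all stations are busy
        finish = start + random.expovariate(1.0 / mean_screen_minutes)
        heapq.heappush(station_free_at, finish)
        total_minutes_in_system += finish - clock

    return total_minutes_in_system / pieces

# What-if analysis: how does staffing the screening area affect dwell time?
for stations in (3, 4, 5):
    avg = simulate_screening(stations, arrivals_per_hour=100, mean_screen_minutes=1.5)
    print(f"{stations} stations -> average time in facility: {avg:5.1f} minutes")
```

Replacing the assumed rates with measured arrival patterns, screening-time distributions, and alarm-resolution steps turns this toy model into the sort of decision support described above, allowing storage, staffing, and throughput questions to be answered before equipment is purchased.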
Principle # 3: Create a Secure Supply Chain Information Infrastructure
Principle # 3: Concept
The air cargo industry has taken advantage of advances in information technology to make great strides in improving its information infrastructure. Although actual performance frequently falls short of the desired level of performance, it is now possible for all partners in the secure air cargo supply chain to share demand information, shipment status and location, screening and logistics requirements and up-to-the-minute air carrier schedules.
But true collaboration requires more than just data exchange between successive supply chain partners. Rather, it requires joint planning of inventory, packaging, consolidation, screening and logistics strategies, and executing the resulting plans quickly and reliably on a continuing basis. How various capacities (inventory, transportation, storage, screening, air lift, peak load) are used daily and over longer time horizons must be considered from a systems perspective, not just a local point of view.
The secure air cargo supply chain information infrastructure must be capable of responding effectively to frequent changes in demand and logistics requirements. Re-planning the use of capacities may need to be done daily and in some cases on an hour-to-hour, minute-to-minute basis for maximum responsiveness and efficiency.
Principle # 3: Application to the Air Cargo Supply Chain
One of the world’s leading international freight forwarders is currently implementing an innovative, integrated information system in its CCSFs (7). The system is capable of capturing all of the information about an air cargo shipment, down to the piece level, from the moment it arrives at the facility until it exits the facility for delivery to the air carrier.
For inbound cargo shipments, the system records the delivery truck driver’s ID and photo and scans in the Bill of Lading. Once in process, the CCSF operator records the Master Air Way Bill (MAWB) number and weight, identifies the technology used to perform the screening, and then records the House Air Way Bill (HAWB) number and weight. The system automatically generates and records bar-coded, tamper-evident package tape and seals and allows recording of pallet and/or ULD and truck seal numbers, as necessary, to uniquely identify the screened items and establish a secure chain of custody. All transactions are time-stamped for later retrieval and analysis, should that prove desirable or necessary. The system also generates all the CCSP-related reports required by the TSA automatically, periodically or on demand. Optionally, the system can be interfaced to screening devices to capture images and data related to the screened cargo, and linked to external databases to perform personnel checks and incorporate truck and air carrier schedule updates in real time.
In addition to the obvious productivity benefits, the data captured by the system can be used for forensic track and trace, should that prove necessary, and provides the foundation for collaborative planning and scheduling with supply chain partners and air carriers to better coordinate supply chain activities and expedite cargo flow. Used with simulation tools, the data can also be used to optimize material flow, capacity utilization and facility throughput. Systems like this one are essential to maintaining air cargo supply chain security while expediting the flow of commerce in a cost-effective manner.
Principle # 4: Integrate Business Processes
Principle # 4: Concept
Business processes must be established both intra- and inter-organizationally. These processes, coupled with the information infrastructure, support the efficient flow of material through the supply chain. While much attention has been placed on understanding business processes within shipper and air cargo handling organizations, it is essential to understand what processes must be built inter-organizationally – among trading partners and logistics services providers – to leverage, enhance and optimize their capabilities to expedite the flow of commerce.
Principle # 4: Application to the Air Cargo Supply Chain
The shortest distance between two points is a straight line, and the quickest, cheapest and most secure route in the CCSP supply chain is directly from shipper to airside, without passing “Go” (i.e., through an intermediate node). The CCSP allows for this, and it is the right answer for many shippers. However, it depends upon collaborative, integrated business processes between shipper and air carrier, or between Freight Forwarder/ICSF/IAC and air carrier, enabled by tightly coupled information systems as illustrated in Principle # 3.
Prerequisites include sharing and knowledge of up-to-date, real-time flight status information; optimal load configurations of screened cargo; tamper-evident packaging and sealing to ensure a secure chain of custody; rapid delivery and capacity for unloading at the airport; knowledge on the carrier's part of exactly what is coming in, and when, to help consolidate, weigh/balance and expedite handling of outbound loads; and readily verifiable cargo integrity. These conditions can only be realized through tightly integrated, collaborative business processes; the procedural discipline and information capture demanded by the “extra step” in the CCSP create an environment within which this is possible.
Benefits include more predictable lead and flow times, reductions in cargo handling, storage costs and wait times, better facility and aircraft utilization and increased velocity with less inventory in the air cargo supply chain.
Having said this, the current air cargo supply chain is extremely complex, and the requirements for 100% screening at the piece level create new challenges. For example, there is a need to segregate screened and unscreened cargo at the airport, and (at this writing) between incoming international and outbound domestic flights. These problems can be addressed through process reengineering, the application of lean, secure operating philosophies and simulation tools such as those described in Principle #2, and by Principle #5, the implementation of unified, advanced Decision Support Systems.
Principle # 5: Unify Decision Support Systems
Principle # 5: Concept
Researchers have designed supply chain Decision Support System (DSS) environments for the air cargo industry for decades. These environments are typically based on different philosophical models. Also, they differ in how they forecast demand and how they drive logistics, handling and storage decisions. Their goal is to generate plans and schedules that consider some of the elements of the supply chain. No matter which approach is taken, these systems and their embedded rules dictate many daily supply chain activities. Therefore, they have a substantial impact on operating behavior, and consequently on overall supply chain performance, operational effectiveness and security. How much they enhance air cargo supply chain performance depends upon both the accuracy of their input data and the modeling approaches employed. We believe that these decision support systems need to address uncertainty in an explicit manner – most do not.
Principle # 5: Application to the Air Cargo Supply Chain
A recent study applied advanced Decision Support System techniques for planning and scheduling to the problem of scheduling truck arrivals at Hong Kong International Airport (HKIA) (8). Assumptions included collaborative sharing of current flight schedules with air carriers, a focus on air cargo handling operations for outbound flights only, and adequate docking capacity at the airport. There are a number of outbound flights with confirmed air waybills; the terminal operator schedules the arrivals of delivery trucks so that some of the shipments can be transferred directly to the departing flights without requiring extra handling and storage at the terminal, an approach analogous to the one permitted in the CCSP for screened cargo.
The model also considers that multiple shipments (for different air carriers) may be delivered to the airport on a single truck, and that cargos come in different sizes and weights, which adds complexity to the cost minimization computation, but accurately reflects the way things work in the “real world”.
The benefits of using the advanced scheduling algorithm, relative to the random “First Come, First Served” (FCFS) order in the current system at HKIA, were substantial, with an average cost savings of 39.2%. The savings are due to the ability of the advanced scheduling approach to coordinate truck arrivals so that a larger percentage of shipments can be transferred directly to the line-up areas, avoiding their handling and storage costs.
In addition to cost savings, this advanced scheduling approach has the added advantage of avoiding congestion at the air cargo terminal and guarantees that all shipments arrive on time, eliminating late shipments (an average of 3.9% of all shipments under the FCFS approach) and substantially reducing truck wait times, which averaged 99 minutes from arrival to unloading. As previously noted, the less time cargo spends in transit and waiting, the fewer the opportunities for tampering, theft and sabotage.
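As an illustration of why coordinated truck scheduling beats first-come-first-served, the sketch below assigns trucks to dock slots so that each shipment is unloaded inside a direct-transfer window before its flight, and compares the resulting cost with a random arrival order. The dock capacity, window, costs and shipment data are all hypothetical, and the greedy assignment is only a stand-in for the optimization model used in the HKIA study.

```python
import random

random.seed(7)

SLOT_MIN = 15        # one truck is unloaded per dock slot (minutes)
DOCKS = 2            # trucks unloaded in parallel
DIRECT_WINDOW = 120  # unloads this close to departure can go straight to line-up
DIRECT_COST = 10     # cost of a direct transfer (hypothetical units)
STORAGE_COST = 80    # handling + storage cost if the shipment must be staged

trucks = [{"flight_dep": random.randint(60, 360)} for _ in range(40)]

def slot_end(slot):
    """Unloading finish time of the slot-th truck served (0-indexed)."""
    return (slot // DOCKS + 1) * SLOT_MIN

def fcfs_cost(trucks):
    """Trucks show up in no particular order and take the next free slot."""
    order = trucks[:]
    random.shuffle(order)
    cost = 0
    for slot, t in enumerate(order):
        direct = 0 <= t["flight_dep"] - slot_end(slot) <= DIRECT_WINDOW
        cost += DIRECT_COST if direct else STORAGE_COST
    return cost

def scheduled_cost(trucks):
    """Greedy: give each truck (earliest flight first) a slot inside its window."""
    free = list(range(len(trucks)))
    cost = 0
    for t in sorted(trucks, key=lambda x: x["flight_dep"]):
        feasible = [s for s in free
                    if 0 <= t["flight_dep"] - slot_end(s) <= DIRECT_WINDOW]
        if feasible:
            slot, cost = feasible[0], cost + DIRECT_COST   # straight to line-up
        else:
            slot, cost = free[0], cost + STORAGE_COST      # must be staged
        free.remove(slot)
    return cost

print("FCFS cost:     ", fcfs_cost(trucks))
print("Scheduled cost:", scheduled_cost(trucks))
```

Even this crude heuristic shows the mechanism behind the reported savings: when unloading is timed to the flight's direct-transfer window, far more shipments bypass storage entirely.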
Planned follow-on research will incorporate stochastic programming techniques to explicitly represent uncertainty in the scheduling process.
6. CCSP Benefits: “Security is Free”
Although few would disagree with the need for increased air cargo security, most shippers and the air cargo community view the mandate for 100% screening as a cost burden and an impediment to supply chain flow. We believe that proper implementation of the CCSP can meet the dual objectives of security and supply chain flow, maintaining and in some cases accelerating that flow through improved collaboration and better use of information to eliminate inefficiencies in the current system.
Analogies can be drawn with the implementation of manufacturing quality programs in the 1970s and 80s to improve U.S. competitiveness in the face of low-cost, high-quality products from Japan. American manufacturers were reluctant to take on the quality programs practiced by Japanese manufacturers, which were viewed as costly and burdensome. Philip Crosby, quality control manager of the Pershing missile program, implemented a “Zero Defects” program that yielded a 25% reduction in the overall rejection rate and a 30% reduction in scrap costs, more than paying for the program. Crosby’s prescription for quality improvement was a 14-step program outlined in his landmark 1979 book, “Quality is Free” (9), which quickly became the rallying cry of the manufacturing quality movement in the U.S. He believed companies that established similar initiatives could realize savings that would more than pay for the cost of their quality programs.
In the same way, we believe that shippers, freight forwarders and ICSFs that embrace the air cargo security program in the CCSP, together with advanced collaborative supply chain management techniques, will find that the benefits far outweigh the costs. These benefits may come from a variety of sources, including:
- Reduced Annualized Loss Expectancy (ALE), driven by lower exposure factors from the introduction of enhanced security measures and by fewer occurrences of loss (a worked sketch of the ALE arithmetic follows this list);
- reduced loss from theft and mishandling of cargo, resulting in lower insurance costs;
- shipment visibility and a consistent, monitored, auditable chain of custody across the air cargo supply chain for enhanced shipment flow, tracking and traceability;
- improved operational efficiency, lower supply chain system inventories, increased customer service levels and improved cash flow through process optimization and advanced supply chain planning and scheduling techniques;
- and, of course, a significantly reduced possibility that catastrophic terrorist acts will occur.
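For readers unfamiliar with the ALE term used in the first bullet, the snippet below applies the standard risk-analysis arithmetic (single loss expectancy = asset value × exposure factor; ALE = SLE × annualized rate of occurrence) to purely illustrative numbers. None of the figures come from the CCSP or from any study cited here.

```python
# Standard risk-analysis arithmetic behind the ALE bullet above; the figures
# are purely illustrative, not taken from the CCSP or any cited study.
asset_value     = 500_000   # value of a typical high-value shipment ($), assumed
exposure_factor = 0.30      # fraction of value lost per incident, assumed
aro             = 0.20      # annualized rate of occurrence (incidents/year), assumed

sle = asset_value * exposure_factor          # single loss expectancy
ale = sle * aro                              # annualized loss expectancy

# Enhanced screening and chain-of-custody controls plausibly lower both terms.
sle_secure = asset_value * 0.15
ale_secure = sle_secure * 0.05
print(f"ALE before: ${ale:,.0f}   ALE after: ${ale_secure:,.0f}")
```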
Given the program’s potential benefits, the mantra of the CCSP may soon become “Security is Free”.
7. The Shipper’s Dilemma: “to screen or not to screen”
For shippers, the decision to participate in the CCSP and screen goods “in-house” versus outsourcing screening to an intermediary or an air carrier involves the evaluation of many variables, among them facility set-up costs and operating costs, inventory policies and customer service and satisfaction concerns. The following tables summarize these considerations:
A. Cost Considerations (table: in-house screening vs. outsourced screening)
B. Other Business Considerations (table: in-house screening vs. outsourced screening)
A chart that may be useful to companies trying to decide on participating in the CCSP can be found in Appendix II (2): “Are you prepared for 100% screening?”
8. Summary and Conclusion: a Call to Action
The Congressional mandate for 100% screening of cargo carried on passenger aircraft will take effect in August, 2010. This is a fact; it will not change. The air cargo supply chain is built for speed; adding another step to the process for the purpose of ensuring security has the potential to slow down and disrupt the system with severe economic consequences.
The air cargo system obeys the Laws of Supply Chain Physics; speedy, efficient operations and security are complementary. Adherence to the 5 Principles will allow shippers, the air cargo community and air carriers to develop collaborative management systems and deploy Best Practices solutions to secure the supply chain and expedite the flow of commerce; however, this will take time to implement. The TSA’s Certified Cargo Screening Program provides a flexible framework to enable these systems.
At this writing, a great deal of work remains to be done. The apparent, but deceptive, availability of excess capacity in the current system, combined with a lack of awareness and, in some cases, denial of the impending deadline and of the tools available through the CCSP, particularly among the shipping community, threatens to undermine the program and preclude deployment of effective solutions in time to forestall serious, disruptive bottlenecks when the mandate takes full effect.
It is imperative that shippers become aware of and knowledgeable about the CCSP so they can make informed business decisions about how best to participate. At the same time, shippers, their industry associations and the air cargo community must immediately begin working together to develop the standard procedures, protocols and collaborative business processes needed to provide the requisite levels of security without compromising supply chain throughput.
This can still be accomplished, but the clock is ticking. The alternative is to let the “invisible hand”, aided and abetted by the TSA, sort the problem out over time, as it inevitably will. This messy, costly and painful process is avoidable and unnecessary. The time to act is now.
I. Bibliography
1. Muckstadt, J.A.; Murray, D.H.; Rappold, J.M.; “Principles of Supply Chain Leadership”, unpublished. http://www.cayugapartners.com/casestudies.html
2. Menachof, D.; Russell, G. (2009); ‘Shocks and Longer Term Effects of 100% Air Cargo Screening’, Distribution Business Management Journal, Volume 8, 2009. [Peer Reviewed]
3. Muckstadt, J.A.; Murray, D.H.; Rappold, J.M.; Collins, D.E.; “Guidelines for Collaborative Supply Design and Operation”, Information Systems Frontiers, Volume 3, Issue 4; December 2001. http://www.cayugapartners.com/casestudies.html
4. M.R. Rotab Khan; “Business process reengineering of an air cargo handling process”, Int. J. Production Economics 63 (2000), 99–108
5. Cayuga Partners / Cargo Security Alliance; http://www.securecargo.org/content/air-cargo-screening-facility-csf-operations-model-0
6. Nsakanda, A.L.; Turcotte, M.; Diaby, M.; “Air cargo operations evaluation and analysis through simulation”, Proceedings of the 2004 Winter Simulation Conference, Vol. 2, pp. 1790–1798. ISBN 0-7803-8786-4
7. Regiscope “Cargo Cam” and “Screening Cam”; www.regiscope.com
8. Ou, Hsu and Li; “Scheduling Truck Arrivals at an Air Cargo Terminal”, Production and Operations Management, pp. 1–15, © 2009 Production and Operations Management Society.
9. Crosby, Philip; Quality is Free. New York: McGraw-Hill, 1979.
Ernest Hemingway, the famous American author, was born in Oak Park, Illinois, USA. He won the Nobel Prize in Literature in 1954. He passed away at the age of 61.
It is appearances, characteristics and performance that make a man love an airplane, and they are what put emotion into one.
Pension Information
Definition
A wide array of research has investigated the effects of pension information on different individuals’ economic outcomes. While many studies show that information provision increases knowledge, the evidence is mixed regarding its effects on behavior. Nevertheless, examining different studies, some conclusions can be elicited about the effects of pension information on three broad areas of economic behavior: retirement planning, labor supply, and savings decisions. Specifically, the provision of pension information not only increases workers’ knowledge about their benefits, but it also fosters individuals’ retirement planning and decision making. Looking at individuals’ labor supply, our review of the literature showed that only correctly informed workers are responsive to the incentives to work longer. Finally, information letters and other educational interventions such as seminars are found to increase both enrollment in retirement plans and the amount of contributions. We also highlight that the lack of knowledge prevalently hits the most vulnerable individuals in the society, such as women. As a consequence, not providing sufficient information could contribute to widening the gender gap in pensions.
1. Introduction
In recent decades, many countries have shifted from defined benefit to defined contribution pension schemes, where pension information becomes crucial for making sound retirement decisions. As researchers have pointed out, a lack of pension knowledge “is troubling since workers may save or consume suboptimally, […] or retire earlier than they would have if equipped with better pension information.” Such knowledge is related to information; thus, it also depends on the costs and benefits of gathering information.
As public pensions constitute a substantial share of total retirement income for many workers, it is also important for governments to provide individuals with information about their public retirement benefits. Public pension statements are certainly one way to do it. Looking at the United States, a 2001 Gallup survey found that respondents who reported receiving the Social Security statement were more knowledgeable about the program than those who did not. For instance, the study highlighted a significant increase in the number of respondents who knew the relationship between benefits and earnings, and that the retirement age was increasing. Additionally, workers who received the statement were much more likely to be able to provide an estimate of their future benefits.
2. Pension Knowledge and Retirement Planning
Research has found that many people lack an understanding of fundamental economic concepts and fail to plan for retirement even when they are close to it. This result has important consequences, since being able to develop retirement plans is crucial for retirement security and can explain why some people arrive close to retirement with very little wealth. Research has also shown that financial knowledge is key to wealth accumulation in a stochastic life cycle model, with an estimated 30–40 percent of wealth inequality accounted for by financial knowledge. Nevertheless, the role played by pension information is less clear.
One evaluation of a low-cost online financial and demographic literacy program implemented it with the largest industrial pension fund in Italy. The program was found not only to increase participants’ knowledge, but also to prompt individuals to look for more information on financial markets and financial planning. Moreover, the authors showed that the positive effect lasted several months after the treatment.
A few studies directly evaluated the effect of providing public retirement benefit information through public pension statements. In the United States, where the Social Security Administration fielded many surveys to evaluate its outreach effort, a considerable percentage of respondents reported using the statement for retirement planning, even though they did not believe they would receive Social Security benefits at the time they retire. Even though there is widespread awareness of the unsustainability of pension systems, people seem to ignore or underestimate the cost of a public pay-as-you-go system. These results might also be a consequence of low levels of financial literacy and understanding of retirement schemes. One study, examining 85 preretirement planning seminars conducted by five companies in 2008 and 2009, showed that exposure to them led to a considerable improvement in knowledge of retirement programs and in the making of retirement choices, in addition to a reduction in the transaction costs of managing pension plans.
In Sweden, where a notional defined contribution scheme provides a large share of retirement income, a great deal of financial information—including forecasts of the expected future value of pension benefits—has been distributed through the so-called Orange Envelope to everybody eligible for a pension. The widespread dissemination of information is likely to have raised basic financial knowledge and lowered the barriers to planning for retirement. However, fewer than half of the recipients reported having a good understanding of the pension system. In Canada, public statement recipients said they had a better understanding of their pension plan and were more likely to plan for their retirement. As a spillover effect, knowledge of the pension system and of one's personal pension situation decreases individuals’ concerns about retirement, especially for women.
In line with these studies, one analysis exploited the introduction of an annual pension overview for all Dutch employees to estimate the effect of providing information on pension knowledge and active planning, where active planning identifies individuals who are not procrastinating in making retirement decisions. The research suggested that providing an annual pension statement might have a positive impact on pension knowledge, which in turn has a positive causal effect on active pension decision making, meaning that people will adjust their behavior if pensions are cut (or will not adjust it if they can easily make ends meet).
The evidence presented so far clearly shows that pension information has a positive impact on workers’ knowledge about their benefits and their self-declared retirement planning, but whether workers actually change their retirement behavior after receiving pension information is more controversial. In particular, one study focused on the introduction of the annual Social Security Statement in 1995 and, using Health and Retirement Study data, found that workers did not update their expectations after receiving the public statement, nor did Social Security claiming patterns change. The study concluded that “either workers were already behaving optimally or the additional information provided by the statement isn’t sufficient to improve uninformed workers’ retirement choices”.
3. The Pension Gender Gap
The research reviewed so far does not focus on the gender dimension, with only a few studies touching upon it. However, a natural and straightforward conclusion drawn from the different aspects considered here is that the lack of information and knowledge could mostly affect the more vulnerable individuals in society, such as women. In fact, pension benefits are first of all a consequence of the position that individuals hold in the labor market, and occupations depend on various features such as stability, labor market segregation, and wage gaps.
Disadvantageous conditions in the labor market cause lower pensions for women even when working hours or occupational positions are the same as those of men. Indeed, as the literature points out, people often associate the concept of economic independence with the gender pay gap. Going one step further, several authors examine the pension gender gap, defined as the difference between the gross pensions of men and women over age 65.
The uneven spread of women’s emancipation in the labor market is one of the factors responsible for differences in pension wealth accumulation. Older cohorts are indeed more influenced by past gender inequality. As reported in the literature, “pensions of women are substantially lower than those of men, by 27 percent on average across the EU but by more than 40 percent in a few European countries. This average gap is higher than the one for hourly earnings at 14 percent”.
From a US perspective, researchers analyzed the gender gap in Social Security and pension income between 1980 and 2000, and they highlighted the fact that, despite increases in female labor force participation and earnings, women tend to accumulate less pension wealth than men. Similar conclusions were drawn by a study that aimed to understand and examine the different drivers of women’s labor earnings that contribute to earnings in retirement. Using data from the Labor Force Survey, the author found that part-time working, types of occupation, and employment represent the main reasons why women’s incomes are lower than men’s, and, as a consequence, after retirement women have reduced entitlement to benefits from pension schemes. Therefore, focusing on the gender gap in pensions is fundamental to understanding how well a pension system functions; it is indeed an indicator of gender equality at older ages and might be useful in pointing out labor market inequalities.
Enhancing programs aimed at improving financial literacy, and pension knowledge in particular, could represent a first step toward awareness of the gender gap. Along this line, a recent OECD report points out that in many countries levels of financial literacy are very low, especially among women. Cross-comparable data from 30 countries and economies show that “overall levels of financial literacy are relatively low, with an average score of 13.2 out of a maximum of 21”. It continues by underlining that “on average, only 56% of adults achieve the minimum target score on financial knowledge, with significant differences by gender, as 61% of men achieve the minimum target score, compared to 51% of women”.
This entry is adapted from DOI 10.3390/economies8030067.
References
- Mitchell, Olivia S. 1988. Worker knowledge of pension provisions. Journal of Labor Economics 6: 21–39.
- Gustman, Alan L., and Thomas L. Steinmeier. 2005. Imperfect knowledge of Social Security and pensions. Industrial Relations 44: 373–97.
- Fornero, Elsa, Noemi Oggero, and Riccardo Puglisi. 2019. Information and financial literacy for socially sustainable NDC pension schemes. In Progress and Challenges of Nonfinancial Defined Pension Schemes: Volume 2. Addressing Gender, Administration, and Communication. Washington: World Bank Publications, pp. 187–216.
- Kritzer, Barbara E., and Barbara A. Smith. 2016. Public pension statements in selected countries: A comparison. Social Security Bulletin 76: 27–56.
- Mastrobuoni, Giovanni. 2011. The role of information for retirement behavior: Evidence based on the stepwise introduction of the Social Security Statement. Journal of Public Economics 95: 913–25.
- Lusardi, Annamaria, Olivia S. Mitchell, and Noemi Oggero. 2018. The changing face of debt and financial fragility at older ages. American Economic Association Papers and Proceedings 108: 407–11.
- Lusardi, Annamaria, Olivia S. Mitchell, and Noemi Oggero. 2020. Debt and Financial Vulnerability on the Verge of Retirement. Journal of Money, Credit and Banking 52: 1005–34.
- Lusardi, Annamaria, Pierre-Carl Michaud, and Olivia S. Mitchell. 2017. Optimal financial knowledge and wealth inequality. Journal of Political Economy 125: 431–77.
- Billari, Francesco C., Carlo A. Favero, and Francesco Saita. 2017. Nudging Financial and Demographic Literacy: Experimental Evidence from an Italian Trade Union Pension Fund. BAFFI CAREFIN Working Paper No. 1767. Milano, Italy: Centre for Applied Research on International Markets Banking Finance and Regulation, Università Bocconi.
- Boeri, Tito, Axel Börsch-Supan, and Guido Tabellini. 2002. Pension reforms and the opinions of European citizens. American Economic Review Papers and Proceedings 92: 396–401.
- Oggero, Noemi. 2019. Retirement Expectations in the Aftermath of a Pension Reform. Working Paper No. 197/19. Turin: Center for Research on Pensions and welfare Policies.
- Allen, Steven G., Robert L. Clark, Jennifer Maki, and Melinda Sandler Morrill. 2016. Golden years or financial fears? How plans change after retirement seminars. Journal of Retirement 3: 96–115.
- Almenberg, Johan, and Jenny Säve-Söderbergh. 2011. Financial Literacy and Retirement Planning in Sweden. Journal of Pension Economics and Finance 10: 585–98.
- Sundén, Annika. 2009. Learning from the experience of Sweden: The role of information and education in pension reform. In Overcoming the Saving Slump. Chicago: University of Chicago Press, pp. 324–44.
- Spruit, Jordi. 2018. Does Pension Awareness Reduce Pension Concerns? Causal Evidence from The Netherlands. Netspar Academic Series MSc 06/2018-04. Tilburg: Netspar.
- Debets, Steven, Henriette Prast, Maria Cristina Rossi, and Arthur van Soest. 2018. Pension Communication in the Netherlands and Other Countries. CentER Discussion Paper Series No. 2018-047. Tilburg: CentER.
- Angelici, Marta, Daniela Del Boca, Noemi Oggero, Paola Profeta, Mariacristina Rossi, and Claudia Villosio. 2020. Pension Information and Women’s Awareness. IZA Discussion Paper No. 13573. Bonn: IZA.
- Frericks, Patricia, Trudie Knijn, and Robert Maier. 2009. Pension reforms, working patterns and gender pension gaps in Europe. Gender, Work & Organization 16: 710–30.
- Bettio, Francesca, Platon Tinios, and Gianni Betti. 2013. The Gender Gap in Pensions in the EU. Rome: European Institute for Gender Equality.
- Tinios, Platon, Francesca Bettio, Gianni Betti, and Thomas Georgiadis. 2015. Men, Women and Pensions. Luxembourg: Publications Office of the European Union.
- Lis, Maciej, and Boele Bonthuis. 2019. Drivers of the gender gap in pensions: Evidence from EU-SILC and the OECD pension model. In Social Protection and Jobs Discussion Paper. No. 1917. Washington: World Bank.
- Even, William E., and David A. Macpherson. 2004. When will the gender gap in retirement income narrow? Southern Economic Journal 71: 182–200.
- Gough, Orla. 2001. The impact of the gender pay gap on post-retirement earnings. Critical Social Policy 21: 311–34.
- OECD. 2016. OECD/INFE International Survey of Adult Financial Literacy Competencies. Paris: OECD Publishing.
- OECD. 2018. OECD Pensions Outlook 2018, OECD Pensions Outlook. Paris: OECD Publishing.
There is a very close link between knowledge and performance, which is at the heart of any KM framework.
Knowledge results in performance. The more knowledge we have, the better we can perform. The more we learn from performance, the more knowledge we have. This puts us in a reinforcement cycle – a continuous improvement loop – continuously improving knowledge, continuously improving performance.
We are all well aware of this link as it applies to us as individuals. The more we learn about something, the better we get, whether this is learning to speak Mandarin, or learning to ride a bicycle. The knowledge builds up in our heads and in our legs and fingertips, and forms an asset we can draw on.
It's much harder to make this link for a team, or for an organization. How do we make sure the organization learns from performance and from experience? How do we collect or store that knowledge for future access, especially when the learning takes place in many teams, many sites or many countries? How do we access the store of knowledge when it is needed, given that much of it may still be in peoples' heads?
This link, between knowledge and performance, is fundamental to the concept of knowledge management. Knowledge management, at its simplest, is ensuring this loop is closed, and applied in a systematic and managed way, so that the organisation can continuously learn and continually improve performance. The knowledge manager should
- Know what sort of organisational performance needs to be improved or sustained (and in some organisations this may not be straightforward - the performance requirements of public sector organisations, for example, may not always be easy to define);
- Know what knowledge is critical to that performance;
- Develop a system whereby that knowledge is managed, developed and made available;
- Develop a culture where people will seek for that knowledge, and re-use and apply it;
- Develop work habits and skills that ensure performance is analysed, and that new knowledge is gained from that performance (using processes such as lesson learning or After Action reviews);
- Set up a workflow to ensure that the body of knowledge is updated with this new knowledge; and
- Be able to measure both the flow of knowledge through this loop, and also the impact this has on performance.
That's KM at its simplest; a closed cycle of continuous learning and continuous performance improvement.
The complexities come in getting this loop up and running, in a sustainable way, in the crazy complex pressurised world of organisational activity.
But that's what they pay us Knowledge Managers for, right? Dealing with those complexities. Designing the framework that closes the loop. Delivering the value.
Transcultural Studies / Cultural Anthropology
The master’s degree program Transcultural Studies / Cultural Anthropology focuses on local and transnational cultural processes in Europe, Latin America and Southeast Asia from both a historical and a contemporary perspective. Students expand knowledge gained from the empirical cultural analysis of day-to-day life in globalized societies and learn to apply ethnographical and historical methods on the basis of insights from cultural theory. Students learn to independently explore and research geographic regions with their complex local systems and cultural diversity against the backdrop of supraregional, translocal and global networks. Moreover, they learn to resolve questions concerning processes and performances in the culture of everyday life in an analytical and interdisciplinary manner as well as to adequately prepare their research results for a broader audience.
Key characteristics of this degree program are its practical approach (e.g. close collaboration with regional museums, student research project) and students’ freedom to develop their own specialized profiles.
Possible lines of work:
Exhibitions, (curator work, guided tours, lending operations), museums/preservation of monuments and historic buildings/art dealing (galleries, auction houses), culture (administrative specialist e.g. in cultural institutions or municipal cultural affairs offices, internationally active organizations and companies (foundations, public authorities, non-governmental organizations, tourism industry), development collaboration, public relations, adult education, academia (research management, teaching/research at universities, research institutions, etc.), media/publishing
English
Summer semester
Examination Regulations (German versions are legally binding)
University degree (German or non-German) in a relevant discipline
German language proficiency (CEFR level C1)
English language proficiency (CEFR level B2)
Modules in the area of cultural studies worth a minimum of 24 ECTS credits
At the University of Bonn, multilingualism and cultural diversity are considered to be valuable resources that complement subject-specific qualifications. This is why, in addition to curricular language modules, students have access to a diverse range of language-learning offers, including the independent-study offers at the Center for Language Learning (Sprachlernzentrum, SLZ) in which they can autonomously learn a foreign language or enhance existing language skills. Furthermore, students can apply for the “Certificate of Intercultural Competence” free of charge, which promotes extra-curricular and interdisciplinary activities of international or intercultural nature.
FLOWERS ILLUSION
Yaroslav Levchenko is a painter who takes space, and the experience of it, as his subject matter; primarily painting abstract or semi-abstract compositions derived from snapshots of very ordinary architectural motifs such as the kerb of a road, the grid on a sidewalk or the stop barriers of a parking lot. He evolves a clear, stripped-down, painterly language using sober, neutral colours punctuated by the occasional instance of blood red, bright orange or pitchy black. Scrubby washes of paint in thinly applied hues work towards an implied conclusion, the approximation of a figurative image.
The “Flower Illusion” collection diverts the viewer’s gaze to the outer edge of the painting and, in so doing, reminds the viewer of its function as a liminal container; these works are more figurative.
Levchenko’s work invests the sense of looking with renewed energy, opening up to the viewer the experience of a visual sublime. At the same time however, his works are humble and direct in their parameters, choosing subjects that are straightforward, clichéd even, such as the vanishing stretch of highway, or the brilliant gold of an autumnal tree.
Luxury Automaker Stands Apart with Proactive Service Using A Rugged Mobile Solution
One of the largest multinational automotive companies and foremost manufacturers of luxury cars in the world understands that the tools and equipment that are used for maintenance must be carefully selected to maintain a high standard of excellence.
For a number of years, consumer-grade laptops were utilized for the vehicle maintenance application in the service bay. After experiencing consumer-grade devices that continuously failed from inevitable drops, dust and liquid spills that are part of the environment of a busy service bay, the dealer technology organization began searching for a more advanced and reliable solution.
To read the full case study, click here.
The King's Academy Middle School strives to provide bright young minds with a space in which they feel comfortable expressing themselves; one that cultivates their curiosity through active and interactive forms of learning, personalized instruction and advising, and the development of strong habits of study and character. The Middle School focuses on character development, debate and self-expression, collaborative teamwork, understanding and evaluating multiple perspectives, and finally, play.
The academic program is unlike any other in Jordan, with interactive, hands-on classes that challenge the mind and stimulate the imagination.
The Middle School has replaced the traditional grading system with a student-centered approach that revolves around three important skills: reflection and meta-cognition, reading and understanding feedback, and goal setting. Students write their own report cards as reflections on their own learning, guided by evidence and narrative feedback from their teachers. They then set goals for themselves to highlight areas of growth. The Middle School engages and deepens student learning in a way that motivates students to take control of their own educational journey and develop a love of learning that lasts a lifetime.
King’s believes that arts and music programs inspire in students a desire to learn, and thus are just as important to education as math and science. The Middle School’s vibrant music and arts programs reflect this, offering students plenty of outlets to express their individuality and creativity through art, theater, dance, music, orchestra and more.
The Middle School is forging a revolutionary way of teaching and learning English and Arabic. The curriculum is based on award-winning educator Nancie Atwell’s approach to teaching children to read and write that helps kids become skilled, passionate, habitual and critical readers.
In the Middle School, traditional classrooms have been replaced with comfortable chairs, reading zones, writing workshops and classroom libraries filled with hundreds of titles for students to choose from. The reading zone aims to create readers for a lifetime by allowing students to actually enjoy what they are reading and by taking time out of each day for them to sit quietly in class and just read. Writing workshops give students regular chunks of time to write and the possibility to choose their own topics to write about. Teachers keep close tabs on their work and provide one-on-one feedback during the writing process, in addition to mini-lessons that teach the whole class the mechanics of writing.
Alongside an innovative curriculum, the Middle School has replaced traditional exams and grading with a completely revolutionary system. Minimesters take place for three to four days at the end of each term, replacing traditional end-of-term exams and providing students with an opportunity to demonstrate what they’ve learned in the classroom throughout the term through research, projects and presentations. Minimesters encompass projects and activities from all the disciplines taught to the students including humanities, science, math and English.
Learning is important, but kids need time at school to play too. The benefits of play are many; it helps cultivate creativity and imagination, and develop important social, emotional and cognitive skills. Every day, Free Play periods give students the opportunity to have fun with their friends outside of the classroom, play sports or just relax. The Middle School believes in incorporating elements of play into the classroom too. By incorporating fun and interactive exercises in class, and allowing students to makes choices about what they want to learn, they develop their creativity and imagination, and enjoy the process of learning.
At King’s Academy we believe in building a community, not just filling a classroom. The Middle School motto is Cherish One Another. Whether student or teacher, we strive to always be kind, to listen, to look out for each other and to support and uplift one another. Ask any Middle School teacher and they will tell you their door is always open. At the end of the day, we are family. Some of the ways that we strengthen our bond are through close-knit advisory groups, daily sit-down lunches, weekly class get-togethers and regular trips and extracurricular activities.
Each day, during P.E. class, Workshop or Engage, Middle School students pursue athletic, artistic or intellectual activities that interest them. Examples of Middle School Workshop and Engage activities include: coding, design thinking, entrepreneurship studies, debate, Model United Nations (MUN), robotics, Jordan Model Parliament, journalism, theater improv, community service, dance, art, theater, publications and athletics. P.E. includes CrossFit for teens, basketball, soccer, volleyball, swimming, cross country, badminton, and frisbee golf.
By Dexter Webster, CEO, dTc Advisory
After working with a few clients over the past few weeks, I believe my first post should be on how best to handle all the important business tasks while growing your business. Now, where do I start?! Let’s assume you have done the basics such as having a solid strategic plan.
To make the most of your time, you should focus on setting priorities, managing your focus and automating what you can. Let’s look at setting priorities first.
Prioritization
- You don’t have to complete all tasks in one day.
- You should assess the value of each task ($$ generated or cost reduction) with required completion date (incorporate lead time).
- Once you have the above information, you can list “in order” the tasks that will return the greatest value to your business (see the sketch after this list).
- Following the 20/80 rule, 20% of your tasks will yield 80% of the value. This is the focus area.
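If it helps to see the ordering spelled out, the sketch below ranks a handful of made-up tasks by dollar value, breaks ties by due date, and flags the top roughly 20% as the focus area. Every task, value and date in it is hypothetical.

```python
# Minimal sketch of the value-and-deadline ordering described above.
# Task names, values, and dates are hypothetical.
from datetime import date

tasks = [
    {"name": "Send proposal to lead",           "value": 12_000, "due": date(2024, 6, 3)},
    {"name": "Renegotiate supplier contract",   "value": 8_000,  "due": date(2024, 6, 14)},
    {"name": "Automate invoicing",              "value": 4_000,  "due": date(2024, 6, 10)},
    {"name": "Update website copy",             "value": 500,    "due": date(2024, 6, 28)},
    {"name": "File expense report",             "value": 200,    "due": date(2024, 6, 5)},
]

# Order by value (descending), breaking ties by the earlier due date.
ranked = sorted(tasks, key=lambda t: (-t["value"], t["due"]))

# The 20/80 rule: the top ~20% of tasks is the day's focus area.
focus = ranked[:max(1, len(ranked) // 5)]
for t in ranked:
    flag = "FOCUS" if t in focus else "     "
    print(f'{flag}  ${t["value"]:>6,}  due {t["due"]}  {t["name"]}')
```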
Now, how to focus. Here are a few points from Roxann Roeder’s 5 Habits that will get you 30% more productive.
Focus Management
- Each day, you should focus on activities that move your business forward – things you must do!
- To best do this, take time each day to get yourself going. Do something for your health at the start of the day before checking email or getting pulled into other busywork. Healthy Body = Healthy Mind.
- Prior to jumping into the day’s work, take time to identify your priorities based on the model listed above. You will be able to identify 3 projects and 5 tasks that can move the projects forward.
- List the people who you must reach out to before end of day no matter what.
- List the people you need to hear from, or need something from, to get your day’s work done.
- List any important activity you can complete by end of day (small tasks).
- After completing steps 3 – 6, you can create three blocks of 50 mins where you will focus on the work identified uninterrupted. After each 50 min block, take 10 mins to refresh with a break to clear your mind / recharge.
- Lastly, make sure you take an hour break to get away from the desk/office for free thought.
Lastly, how to automate for growth.
Automation
- Once you know what you should do and the value of the tasks, you should look to automate what you can or find tools to help you better manage what you do.
- My recommendation is to use a tool like 17 Hats that will help with lead capture, contact management, quotes, invoicing, payment, bookkeeping, customer interaction, project / workflow management, to-do lists and more, while automating tasks and integrating with other features.
- Another recommendation is to automate your sales and marketing functions. Depending on your business and where you are, Constant Contact or Act! CRM will help with automating customer interactions that can be programmed based on customer actions. For example, you can automate a response to a customer who sends in an inquiry or responds to a newsletter using Act! CRM. These tools free up the small business to focus on tasks that require human interaction and are unique in nature.
Source: dTc Advisory
Dexter Webster, a proud Howard University graduate with a BBA in Information Systems and an MBA in General Management/Strategy, leads the day-to-day operations for dTc. Dexter has extensive experience in operations, customer service, technology, product and management consulting working for Fortune 100 companies. Dexter is responsible for developing strategic partnerships, and representing dTc externally. He can be reached at [email protected].
Results – Grip
Peak horizontal acceleration parameter is the measured horizontal acceleration of the ‘hoof’ at impact.
Acceleration describes how fast the speed changes per time.
This measure describes the amount of grip (also known as slide/strike movement).
High peak values of horizontal acceleration at impact means high amount of grip (‘hoof’ stops faster).
Reduced horizontal acceleration at impact means the ‘hoof’ comes to a stop slower.
The horizontal acceleration is given in g (1 g = 9.80665 m/s²).
Peak values are shown at time zero [ms], simulating the first impact of the early landing/touchdown phase.
The graph is showing results from a hard surface (road).
The diagram is showing a calculation from the same set of data collected.
The grip parameter is the calculated amount of mm the ‘hoof’ slides on the surface after impact, before it comes to a stop.
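The report does not state how the slide distance is derived from the measured data; one simple way to relate peak horizontal deceleration to millimetres of slide is the constant-deceleration relation d = v²/(2a), shown below with an assumed horizontal impact speed and assumed peak values, so the numbers are illustrative only.

```python
# Illustrative constant-deceleration estimate of slide distance from peak
# horizontal deceleration; the calculation method and the impact speed below
# are assumptions, not values taken from the test report.
G = 9.80665                      # m/s^2 per g

def slide_mm(peak_decel_g, impact_speed_ms):
    a = peak_decel_g * G                  # deceleration in m/s^2
    d = impact_speed_ms ** 2 / (2 * a)    # v^2 = 2*a*d  ->  d = v^2 / (2a)
    return d * 1000.0                     # metres -> millimetres

for g_peak in (5.0, 10.0, 20.0):          # hypothetical peak decelerations
    print(f"{g_peak:>5.1f} g -> {slide_mm(g_peak, impact_speed_ms=1.5):.0f} mm of slide")
```

Consistent with the statements above, a higher peak deceleration at impact corresponds to a shorter slide (more grip), while a lower peak deceleration corresponds to a longer slide.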
Due to the degree of uncertainty in the dataset of peak vertical acceleration, the results cannot be used to quantify the difference between the test objects (based on setup and number of dataset collected).
In summary, the results do not show any significant difference between the test objects.
ALBANY, NY: Save the Pine Bush volunteers demonstrated today over the destruction of the Pine Bush for the building of Avila House. Avila House is proposed to be built in the rare Pine Bush ecosystem.
The Pine Bush is home to the Karner Blue butterfly, a federally-listed endangered species. The Federal Government has stated that the decline in the population of Karner Blues is related to the destruction of Karner Blue habitat. The Avila House will destroy an important Karner Blue migration route.
Avila House is an upscale senior housing facility being built by the Roman Catholic Diocese.
Bulldozing of the site has already begun. Save the Pine Bush filed suit in New York State Supreme Court and the Appellate Division. Save the Pine Bush lost both cases, even though the law clearly is on the side of preserving the ecosystem. "But," said Lynne Jackson, volunteer with Save the Pine Bush, "what judge is ever going to rule against the Catholic Church, no matter what the law?"
"The population of Karner Blue butterflies has dropped drastically in the last 20 years, over 98%," said Jackson. "There were barely 1000 butterflies in the Pine Bush last summer, down from 65,000 in 1980, and millions in the 1940s. The drastic reduction in butterflies is due to habitat loss."
With the construction of Avila House, the Roman Catholic Diocese is contributing to sprawl. Avila House will be a car-dependent community, with not even a side walk to connect it to the Teresian House. The Roman Catholic Diocese has been contributing to sprawl by abandoning its architecturally significant churches in the inner cities, such as St. Joseph's which is in danger of imminent collapse, and building isolated facilities, such as Avila House, in the Pine Bush.
"We strongly believe that senior citizens should have healthy, safe places to live," said Jackson, "But, those places should not be in the Pine Bush. Once the Pine Bush is paved, it is gone. There are many other places this facility could have been built. I think it is ironic that one of the reasons the Diocese chose to build Avila House in the Pine Bush was so that seniors could live close to a spouse in the Teresian House. However, there is not a single sidewalk connecting the two, which means that it will not be safe to walk from Avila House to the Teresian House. People will need to use cars to travel between the two, even though seniors sometimes are no longer able to drive. The seniors who live in Avila House will need a car to obtain all of their essential services."
Establishment of a migration corridor between the last large site of Karner Blues (located at Crossgates Maul), and the Blueberry Hill area of the Pine Bush, immediately to the west of the proposed Avila House site is essential to the survival of the Karner Blue. The Avila House project is in the middle of this migration corridor.
The Recovery Team has determined that Karner Blue butterfly populations must be established between the largest remaining site of Karner Blues in the Pine Bush at Crossgates, and the Preserve. To be viable, a population of Karner Blues must be within 500 to 1000 meters of at least two other Karner Blue populations, which is the distance that 10% to 25% of Karner Blues can fly over their lifetime and reach another population of suitable blue lupine habitat. Since the distance between Crossgates and the Preserve is well over 1000 meters, the only way Karner Blues will every migrate from Crossgates to the Preserve is by establishment of "stepping stones" or small colonies of lupine and butterflies between Crossgates and the Preserve.
Before it was bulldozed, the site had open meadows with all of the plants needed by the butterflies to survive, except blue lupine. All vegetation has since been bulldozed for the senior housing.
The approval of this project violates the State and Federal Endangered Species Acts. The Endangered Species Act prohibits the "taking" of an endangered species. Destruction of the habitat and migration routes of an endangered species is included among the acts prohibited as taking or harming endangered species. Interference with the migratory route or corridor of an endangered species is a violation of the State and Federal Endangered Species Acts.
The approval of this project violates the State Environmental Quality Review Act in that the Planning Board did not consider the cumulative impact of development on the achievement of a minimum size and shape for the Albany Pine Bush Preserve. This 30-acre site represents 12.5% of the land which needs to be added to the Preserve to achieve a minimum size for the Pine Bush.
The Albany Pine Bush Preserve Commission's Implementation Guidelines call for full protection - meaning no development what-so-ever - of this 30-acre site.
At a time when other states and communities are desperately trying to reestablish extinct Karner Blue sites, it seems incredible that the City of Albany is still approving more destruction of Karner Blue habitat and that the Diocese would choose to construct Avila House here. Projects to recover Karner Blue butterflies are underway in Ohio, Indiana, New Hampshire, and Ontario, Canada. Even the City of Albany is involved in trying to restore the Pine Bush ecosystem on developed sites. This year, the City purchased the Fox Run Mobile Home Park, and is in the process of buying out the residents and returning this developed site to Pine Bush.
"St. Francis would turn over in his grave if he knew what Bishop Hubbard was doing," said Jackson.
The Pine Bush is a globally rare ecosystem and is the largest inland pine barrens of its kind in the United States. There would be no Pine Bush today if it were not for the efforts of Save the Pine Bush, a not-for-profit, all volunteer organization dedicated to Pine Bush preservation. Save the Pine Bush has been filing lawsuits against municipalities for their illegal approvals of developments in the Pine Bush for nearly 25 years.
The Grammy Award for Best Gospel Vocal Performance, Male was awarded from 1984 to 1990. From 1984 to 1989 it was titled the Grammy Award for Best Gospel Performance, Male.
Larnelle Harris for "How Excellent Is Thy Name"
Russell Taff is an American gospel singer and songwriter. He has sung a variety of musical styles throughout his career including: pop rock, traditional southern gospel, contemporary country music, and rhythm and blues. He first gained recognition as lead vocalist for The Imperials (1977–81). One of his best-known performances is the song "Praise the Lord". He has also been a member of the Gaither Vocal Band, and occasionally tours with Bill Gaither in the Gaither Homecoming concerts. As a solo artist and songwriter, Taff is known for the 1980s anthem "We Will Stand".
Michael Whitaker Smith is an American musician, who has charted in both contemporary Christian and mainstream charts. His biggest success in mainstream music was in 1991 when "Place in this World" hit No. 6 on the Billboard Hot 100. Over the course of his career, he has sold more than 18 million albums.
Larnelle Steward Harris is an American gospel singer and songwriter. During his 40-plus years of ministry, Harris has recorded 18 albums, won five Grammy Awards and 11 Dove Awards, and has had several number one songs on the inspirational music charts.
The 28th Annual Grammy Awards were held on February 25, 1986, at Shrine Auditorium, Los Angeles. They recognized accomplishments by musicians from the previous year, 1985.
The 27th Annual Grammy Awards were held on February 26, 1985, at Shrine Auditorium, Los Angeles, and were broadcast live on American television. They recognized accomplishments by musicians from the year 1984.
The 26th Annual Grammy Awards were held on February 28, 1984, at Shrine Auditorium, Los Angeles, and were broadcast live on American television. They recognized accomplishments by musicians from the year 1983. Michael Jackson who had been recovering from scalp burns sustained due to an accident which occurred during filming of a Pepsi commercial, won a record eight awards during the show. It is notable for garnering the largest Grammy Award television audience ever.
The 38th Annual Grammy Awards were held on February 28, 1996, at Shrine Auditorium, Los Angeles. The awards recognized accomplishments by musicians from the previous year. Alanis Morissette was the main recipient, being awarded four trophies, including Album of the Year. Mariah Carey and Boyz II Men opened the show with their Record of the Year nominated "One Sweet Day".
The 31st Annual Grammy Awards were held on February 22, 1989, at Shrine Auditorium, Los Angeles. They recognized accomplishments by musicians from the previous year.
The 30th Annual Grammy Awards were held March 2, 1988, at Radio City Music Hall, New York City. They recognized accomplishments by musicians from the previous year.
The Grammy Award for Best Gospel Vocal Performance, Female was awarded from 1984 to 1990. From 1984 to 1989 it was titled the Grammy Award for Best Gospel Performance, Female.
The Grammy Award for Best Soul Gospel Performance, Male was awarded from 1984 to 1989. In 1990 this award was combined with the award for Best Soul Gospel Performance, Female as the Grammy Award for Best Soul Gospel Performance, Male or Female.
Anointed is a contemporary Christian music duo from Columbus, Ohio, known for their strong vocals and harmonies, featuring siblings Steve Crawford and Da'dra Crawford Greathouse, along with former members Nee-C Walls and Mary Tiller. Their musical style includes elements of R&B, pop, rock, funk and piano ballads. The group has won seven Dove Awards, two Stellar Awards and three Grammy Award nominations. The group has also been featured on several Christian compilation albums such as Real Life Music, 1996 and WOW The 90s.
The Winans are an American Gospel music quartet from Detroit, Michigan consisting of brothers Marvin, Carvin, Michael and Ronald Winans.
Richard Smallwood is an American gospel music artist who formed The Richard Smallwood Singers in 1977 in Washington, DC.
Paul Charles Smith is a Contemporary Christian Music performer and songwriter. He is best remembered for his early years with influential gospel group The Imperials. Smith spent four years with that group, recording four albums and one live video. Smith was inducted into the Gospel Music Association's Gospel Music Hall of Fame as a member of The Imperials. He has recorded five solo albums and is an award-winning songwriter.
The 14th Annual GMA Dove Awards were held in 1983, recognizing accomplishments of musicians for the year 1982. The show was held in Nashville, Tennessee. | https://wikimili.com/en/Grammy_Award_for_Best_Gospel_Vocal_Performance,_Male
THE BIG BREED THEORY
The Flaws in the Big Bang theory.
1. There is no explanation of where the energy comes from to power the particles found in the quantum world, such as electrons. Recent explanations have led to the supposition of some mysterious dark energy, and yet no explanation is given for where this energy might come from either. The theory also wrongly uses an assumed 'negative pressure of the vacuum' (NPV) to cancel the energy density of space.
2. With such a huge explosion (where all the matter in the Universe is created in an instant?), the universe would be expanding billions of times too fast. Creation has to switch off after a minute fraction of a second, otherwise an absurd rate of expansion is predicted. This is a well-known problem and has been called 'The problem of the Cosmological Constant'.
3. Astronomers are now measuring some stars as being older than the oldest estimates for the age of the Universe under the big bang theory. If stars are older than the big bang, then the stated 13.7 billion year age of the universe must be wrong.
4. The big bang theory requires that the gravitational potential energy of the Universe be negative. This is postulated as the means for cancelling the energy of matter (not of space, whose energy is cancelled by a negative pressure). This negative gravitational potential energy (NGPE) is said to allow matter to arise from nothing. Unfortunately, this requires that the datum for potential energy be set at an infinite distance from the position of any object. This is absurd, since the theory is based on Newton's mechanics, which considers the mass of any object to be non-varying. So the mass would still exist at infinite distance and therefore cannot be cancelled this way.
For a creation scenario it is inadmissible to choose the datum arbitrarily for convenience, as is allowed in normal use. It has to be chosen where matter is created - before gravity has acted. This means that gravitational potential energy must really be positive and so cannot cancel the energy of matter.
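For reference, the standard Newtonian expression at issue here, with the datum taken at infinite separation, is

U(r) = -G M m / r, with U(r) -> 0 as r -> infinity,

so that the potential energy of any bound pair of masses is negative by construction; the objection above is that this choice of datum is not available in a creation scenario.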
More details
Why the big bang must be wrong. Learn more...
How this new theory works. Learn more...
What new potential does this theory offer? Learn more... | http://bigbreed.org/flawed%20big%20bang.html
Q:
How to perform a query based on the result set of another query on the same table?
Requests can be made for multiple items and the only key is the surrogate request_id
I have the following query which returns a result set of 9 digit phone numbers
SELECT phone_number FROM purchases
WHERE item_type = 'popcorn'
GROUP BY phone_number;
I would like to then retrieve a list of every DISTINCT item_type requested by each of those phone_number that was not 'popcorn' and then GROUP BY that item_type. I.e., I want to know what similar products customers tend to buy.
The desired result set may look something like this
item COUNT(*)
pretzels 200
chips 150
crackers 125
… …
In this case the number of phone_numbers who bought both popcorn and pretzels was 200 (I'm interested in this value). The total number of purchases of pretzels by people who bought popcorn was 1,000, but this is not the value I am interested in.
How can I create a single query that only specifies the item_type and retrieves a result like the above which shows items customers also bought?
Also - if someone can help me come up with a more descriptive title, that would be appreciated.
EDIT
Here is the table
CREATE TABLE purchases(
request_id BIGINT NOT NULL auto_increment,
phone_number INT(9) NOT NULL,
item_type VARCHAR(256) NOT NULL,
PRIMARY KEY(request_id)
);
EDIT
Request for desired output based on this data http://www.sqlfiddle.com/#!2/23236/2
item COUNT(*)
cc 1
notpopcorn 1
A:
I would like to then retrieve a list of every DISTINCT item_type
requested by each of those phone_number that was not 'popcorn' and
then GROUP BY that item_type. I.e., I want to know what similar products
customers tend to buy.
Try this:
SELECT item_type as item, COUNT(*)
FROM purchases
WHERE phone_number NOT IN (SELECT DISTINCT phone_number
FROM purchases
WHERE item_type = 'popcorn')
GROUP BY item_type;
SQL Fiddle Demo
Update: Try this:
SELECT item_type AS item, COUNT(DISTINCT phone_number)
FROM purchases
WHERE phone_number IN (SELECT DISTINCT phone_number
                       FROM purchases
                       WHERE item_type = 'popcorn')
  AND item_type <> 'popcorn'
GROUP BY item_type;
Updated SQL Fiddle Demo
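If you want to sanity-check the logic without a MySQL server, one quick way is SQLite via Python's built-in sqlite3 module. The sample rows below are invented purely for illustration, and the query mirrors the one above.

import sqlite3

# In-memory database mirroring the purchases schema from the question.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE purchases (
        request_id   INTEGER PRIMARY KEY AUTOINCREMENT,
        phone_number INTEGER NOT NULL,
        item_type    TEXT    NOT NULL
    )
""")
rows = [
    (111111111, "popcorn"), (111111111, "pretzels"),
    (222222222, "popcorn"), (222222222, "pretzels"), (222222222, "chips"),
    (333333333, "chips"),   # never bought popcorn, so not counted
]
conn.executemany("INSERT INTO purchases (phone_number, item_type) VALUES (?, ?)", rows)

# Per item, count the distinct popcorn buyers who also bought that item.
query = """
    SELECT item_type AS item, COUNT(DISTINCT phone_number) AS buyers
    FROM purchases
    WHERE phone_number IN (SELECT phone_number FROM purchases WHERE item_type = 'popcorn')
      AND item_type <> 'popcorn'
    GROUP BY item_type
    ORDER BY buyers DESC
"""
for item, buyers in conn.execute(query):
    print(item, buyers)   # pretzels 2, chips 1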
| |
1. When training clients who have had cancer, focus on which key aspects of training?
Low impact activities or walking
Light to medium intensities
Teach proper warm up and cool down
All of the above
2. According to research, what percentage of a person’s muscle mass declines about every 10 years?
1 to 2%
3 to 5%
5 to 10%
10 to 20%
3. When you measure heart rate with your fingers, you should use what count (in seconds)?
5 seconds
10 seconds
30 seconds
45 seconds
4. When you measure heart rate with two fingers, you multiply the heart beat by what number?
3
6
9
12
5. Use two fingers together to measure your heart rate at what body location?
Behind your ears
Behind your knee caps
At your wrist
At the bottom of your foot
6. There are two lifestyle factors that can contribute to cancer: a poor diet and an inactive lifestyle.
True
False
7. A rating of 13 or 14 on the Borg scale is considered at what difficulty level?
Easy
Medium
Difficult (somewhat hard)
Very difficult
8. Strength training is recommended for 5 days a week for seniors.
True
False
9. Lifestyle factors are known contributors to metabolic syndrome.
True
False
10. The most important factor in aging is keeping a fit and healthy body. | https://www.personaltrainercertification.us/free-learning-center/senior-fitness-instruction-practice-exam |
Stars don’t see gender, and now, NASA is working to not see it either when allocating telescope time to scientists, inspired by a successful experiment with the Hubble Space Telescope.
That experiment tested the hypothesis that if proposals are evaluated without knowledge of who wrote them and strictly on the merit of the science they proposed to do, the astronomers who received highly coveted observing time would end up being a more diverse group. That’s the principle behind dual-anonymous review, in which reviewers don’t know who submitted which proposals and proposers don’t know who reviewed their submissions. Dual-anonymous review is an attempt to reduce the warping power of implicit bias in the traditional review system, in which reviewers are anonymous but proposals include scientists’ names.
“We have noticed that in many of our proposal selections, there appears to be a bias in favor of one gender of proposer over another,” NASA Astrophysics Division Director Paul Hertz said during a town hall conversation held last month at the 235th meeting of the American Astronomical Society, in Honolulu.
By that, he didn’t just mean that male scientists receive more than half the opportunities on offer; such a result could simply reflect a greater number of men in the pool of submitters. What Hertz meant is that male scientists generally have a higher success rate in their proposals than female scientists.
The Space Telescope Science Institute, which manages science operations for the Hubble Space Telescope and will do so for NASA’s James Webb Space Telescope, saw that trend as well when it studied 20 years’ worth of proposals and awards. “That was observable data. You could see it,” Heidi Hammel, a planetary astronomer at the Space Telescope Science Institute, told Space.com.
But because the institute had data only about who submitted for telescope time and who received it, there was no way to tell whether the problem came from a difference in proposal quality or from bias, whether conscious or implicit. Hence, the experiment.
The committee reviewing proposals tried a few different ways of shifting the evaluation focus from individual scientists to the science they proposed. For example, one method required that proposals list only scientists’ first initials, not their first names. Another required that all the team members be listed in alphabetical order, so it was unclear who was leading the proposal.
Then, for proposals submitted in 2018, the committee went all in, requiring materials that entirely hid the identity of the writers. (A separate document available late in the review process could include nonanonymous information that would speak to the team’s ability to conduct the science they proposed to do.)
According to a report from the committee, scientists submitted nearly 500 proposals that cycle, more than 10 times the number that Hubble had time to gather observations for. Of those proposals, 28% were led by female scientists; of the 40 successful proposals, 12 were led by female scientists. That put female scientists at an 8.7% rate of success and male scientists at 8%, about the same; the year before, without fully anonymous proposals, those numbers were 13% and 24%. “It would be premature to draw broad conclusions, but the results are encouraging” that the anonymous method worked, the statement concluded.
Throughout the experimental process, the committee was assisted and observed by Stefanie Johnson, who specializes in organizational leadership and information analytics at the University of Colorado Boulder’s business school. She and a colleague have now published their analysis of the 2018 experiment in the journal Publications of the Astronomical Society of the Pacific.
In that paper, the authors compared dual-anonymous review with other potential tactics for reducing unconscious gender bias, which they described as losing efficacy over time or leading to “backlash against women because women are perceived as receiving extra advantages.” (Neither the Hubble committee statement nor Johnson’s paper addressed the fact that gender is more complex than a male-female binary; it’s unclear whether gender identities were assigned by outsiders or provided by the scientists in question in the Hubble experiment analysis.)
According to the analysis, the switch to dual-anonymous reviewing didn’t affect male scientists’ success rates to a statistically significant degree but did do so for female scientists. “It’s not proof that women will always do better, but hopefully the gender balance will be closer than in years past,” Johnson said in a statement. “What this shows is that taking gender out of the equation does allow women to perform better.”
Although the trial analysis focuses on gender disparities, the idea is that dual-anonymous review should have a similar relationship with a range of different types of potential bias, conscious or implicit. “By going to the dual-anonymous reviews, it isn’t only addressing gender issues, it’s also addressing underrepresented populations in many different classes,” Hammel said, mentioning specifically the draw of a prestigious university or well-respected advisor. “There are many kinds of biases that come into play when you know the names.”
NASA looked at the results of the experiment and decided it wanted to give the process a try more broadly. “The proposals were reviewed based on science merit only and not on how familiar the proposer’s name was to the reviewers,” Hertz said. “It worked great, so we have made the decision to move all of our [General Observer] programs to dual anonymous.” The announcement was met with applause from the gathered astronomers in Honolulu.
Hertz’s presentation included a slide outlining six observing proposal cycles with deadlines this year that would take the new format, including the first cycle of proposals for time on the James Webb Space Telescope, currently scheduled to launch in March 2021. Hertz said that NASA’s Science Mission Directorate, of which his division is a part, is implementing and exploring pilot programs for dual-anonymous review in contexts beyond the General Observer programs as well.
The new policy does mean that scientists will need to learn a new way to write the explanations they submit of why their work deserves an instrument’s precious observing time. They can no longer reference their own previous research, partnerships they’ve built, or funding or observing time they have already won. To smooth the transition, Hertz said, NASA is offering webinars and setting up a help line that scientists could call with questions about anonymizing their research.
“I think it’s a good thing. I think it’s the right thing. And I’m very pleased to see that NASA is moving forward on transforming their systems to allow these kinds of processes to work for NASA proposals,” Hammel said. “Maybe we’ll get to the point where people understand [that] in all these cases, you don’t need to know the person. You evaluate on the science that they say they’re going to do and rank it on that merit.”
Copyright 2020 Space.com, a Future company. All rights reserved. This material may not be published, broadcast, rewritten or redistributed. | https://www.scientificamerican.com/article/hubble-telescope-test-inspires-changes-at-nasa-to-combat-gender-bias/ |
Such an eruption occurred in Colombia in November 1985. Going beyond the historical focus on relief and rehabilitation after the catastrophe, there is a need to look ahead and plan for disaster preparedness and mitigation. Less severe cyclonic activity has been noticed on the West Coast, with 33 cyclones occurring in the same period, of which 19 were severe. Natural disasters are those calamities which remind us of cruel nature and its unpredictable happenings. Dust storms are windstorms accompanied by suspended clay and silt, usually but not always without precipitation.
Although flood management is a state subject, the Union government provides Central assistance to the flood-prone states for a few specified schemes, which are technical and promotional in nature. It can even be as much as 50 km. We have, however, learned how to build buildings and other structures that are better suited to tackling earthquakes. The disaster potential is particularly high at the time of landfall in the north Indian Ocean (Bay of Bengal and the Arabian Sea) due to the accompanying destructive wind, storm surges and torrential rainfall. Sustainability is the key word in the development process.
On this scale, the smallest quake felt by humans is about 3. The greater the vertical displacement, the greater the wave size. The most important measures are early warning systems. Because of this fact, many of us tend to believe that they happen as an act of nature, purely out of human control. Asia tops the list of casualties due to natural disasters. Different measures can be taken.
Space technology plays a crucial role in the efficient mitigation of disasters. Quarrying, road construction, and other building activity in sensitive catchment areas add to the soil loss. It is a phenomenon that can cause damage to life and property and destroy the economic, social and cultural life of people. Earthquakes, windstorms, floods, and disease can strike anywhere on earth, often without warning. We have learned to predict when a tornado or a hurricane will pass over us, and we can predict, to a certain extent, when a volcano will erupt. One of the reasons for this region being prone to earthquakes is the presence of the young fold Himalayan Mountains, which experience frequent tectonic movements.
Natural disasters destroy infrastructure, cause mass migration and reductions in food and fodder supplies, and sometimes lead to drastic situations like starvation. Secondly, there has to be a focus on preventive disaster management, and developing a national ethos of prevention calls for awareness generation at all levels. Types of natural disasters: a natural disaster manifests itself in the shape of natural hazards such as avalanches, earthquakes, volcanic eruptions, landslides, floods, tsunamis, storms, blizzards, droughts, etc. Of these, storm surges are the greatest killers in a cyclone: sea water inundates low-lying areas of coastal regions and causes heavy floods, erodes beaches and embankments, destroys vegetation and reduces soil fertility.
The nation was stunned by this sudden and devastating natural calamity. This moist unstable air rises, generates convective clouds and leads to an atmospheric disturbance with a fall in surface atmospheric pressure. In India, tropical cyclones occur in the months of May-June and October-November. But this misconception is overturned every time we are faced with the wrathful power of nature, in the form of natural disasters.
Streets do fill up with water, but drainage systems are usually in place to take care of excessive waterlogging. Around 68 per cent of the area is also susceptible to drought. Over 31,849,838 people have died from natural disasters since 1900. The changes on the outer part of the Earth happen because of different kinds of weather. The expenditure includes personal consumption. This in turn means more precipitation and more energy in storm systems, exacerbating natural cycles like La Niña.
Disaster Management and Planning: Many regions in India are highly vulnerable to natural and other disasters on account of geological conditions. Buildings with an unsymmetrical plan, or with too many projections, are especially vulnerable. The most vulnerable areas, according to the present seismic zone map of India, include the Himalayan and Sub-Himalayan regions, Kutch and the Andaman and Nicobar Islands. Forest Fire: Forest or bush fire, though not causing much loss to human life, is a major hazard for forest cover in the country. In a State where the haves keep the have-nots under control, the Government should take extra care to ensure that victims of the earthquake get timely and adequate help. This has led to a threat from a set of natural hazards like pollution, global warming and ozone depletion on a large or global scale.
Such calamities disrupt normal life for many days. It would indeed seem that we have the ultimate hold over nature. Loss of life is well-nigh complete, and people's belongings get lost, blown away or swept away. Whenever the flood level is higher than what a structure can hold, the result is devastating. So many people lose their lives during a natural upheaval, and for those who are left homeless, having lost their near and dear ones, life becomes a daily struggle for survival. Finally, capacity building should not be limited to professionals and personnel involved in disaster management but should also focus on building the knowledge, attitudes and skills of a community to cope with the effects of disasters. This period began at the beginning of the seventh century, concomitant with global cooling that peaked in the Little Ice Age. | http://crowdfynd.com/speech-on-natural-calamities.html
The health and safety of occupants and visitors in our federal buildings is our key priority. That is why we are fully focussed on the concerns raised by occupants of the Les Terrasses de la Chaudière (LTDLC) complex.
The last few months have brought much change in how most public servants perform their work in the federal government, and many of us have faced work situations we had never experienced before. We would like to take this opportunity to reassure you that, while most of you transitioned to teleworking, Public Services and Procurement Canada (PSPC) has not wavered in our commitment to provide you with a modern and safe workplace, and to follow through on all the engagements we made during the February 25, 2020 townhall.
As the pandemic reduced LTDLC’s occupancy, we took every opportunity to accelerate and complete projects to provide you with a safe and positive work environment as you return to your workplace. As an example, we accelerated the modernization of 4 floors, which are anticipated to be completed in fall/winter 2020.
On this page
- Supporting your health and safety as you gradually return to the workplace
- Providing high quality drinking water
- Continuously monitoring and improving air quality
- Managing pests efficiently
- Modernizing interior workspaces (GCworkplace)
- Replacing the exterior
- Renewing and replacing base building systems
- Providing you with additional support
- Keeping you informed
Supporting your health and safety as you gradually return to the workplace
PSPC has been working closely with Health Canada (HC), the Public Health Agency of Canada (PHAC), the Treasury Board Secretariat (TBS), and industry leaders, throughout the COVID-19 pandemic, to keep those of you who remained at work in a safe environment, while also preparing and implementing the most appropriate practices and measures to ensure your safety when you are ready to return to your workplace.
Cleaning and disinfection of office spaces and common areas
As of March 23, 2020, PSPC increased to twice daily the cleaning/disinfecting frequency of high touch points such as common spaces, washrooms, meeting room furniture, doors, stairwells, and elevators. PSPC will continue with these heightened cleaning standards until new guidance is provided by HC. We continue to work closely with departments and agencies, including within the LTDLC complex, to address any requests for enhanced or specialized cleaning based on unique program requirements. Departments are also working with our building managers to ensure an appropriate supply of wipes and of hand sanitizer are available for you at all times.
Ongoing preventative measures for building operating systems
PHAC has confirmed that COVID-19 is not known to be spread through heating ventilation and air conditioning (HVAC) or water systems. However, PSPC continues to follow research into virus transmission during this time of heightened concern and pressures. In order to promote occupant wellness in federal buildings and facilities, we have implemented additional HVAC systems measures in buildings, including the LTDLC complex. These measures include:
- increasing outdoor airflow into buildings
- increasing operating hours of ventilation systems to ensure good airflow (LTDLC already operates on a 24/7 cycle)
- ensuring appropriate filtration and monitoring for appropriate temperature and humidity levels
We have reinforced measures to protect the safety of building water systems during periods of reduced occupancy and as part of return to occupancy measures. To address the risks associated with increased water stagnation, PSPC has implemented a rigorous flushing and testing protocol of the water systems to confirm compliance with the guidelines for Canadian drinking water quality. The water flushing protocol will continue as long as required while buildings are at reduced occupancy.
We have developed a comprehensive guide, the Building management direction for coronavirus disease 2019 (COVID-19), that you can consult for information on all preventative measures that we are implementing for building operating systems.
Promoting physical distancing
When you return to LTDLC workspaces, new measures to promote proper physical distancing will be evident throughout the complex, both in common areas and in the workspaces. PSPC building managers are working with the complex’s lead senior manager for emergencies and evacuations, building emergency and evacuation team and the lead for occupational health and safety to implement physical distancing and traffic flow measures. Some of the changes will include:
- designated 1-way entrance and exit points
- healthy elevator use protocols that limit passengers, including signage that indicates passenger limits, priority boarding, and arrows to manage loading/unloading
- wall signage and floor markings promoting physical distancing such as floor decals and signage to orient movement within lobbies and hallways with particular attention to pinch points
- reduced seating in commercial areas
- washroom occupancy restrictions
- modifications to security desks (for example, Plexiglas barriers)
Accessibility considerations for persons with disabilities are integrated into all re-occupancy measures—this includes height appropriate signage, floor markings that do not cause obstruction, and elevator priority for those with physical or mobility impairments.
Employers are responsible for introducing the physical distancing measures within the workspace in areas such as the use of workstations, conference rooms and other collaborative spaces, as well as in shared areas such as kitchens and eating areas. By the time you return to the complex, temporary signage will be installed throughout the office space and common areas to aid in the safe movement of employees and the public within our federal facilities.
To support your employers in planning these measures, PSPC has published the Guidance and practices for the safe return to workplaces in light of the easing of restrictions document, which has been distributed to all departments and agencies.
Additionally, within the LTDLC complex, the BGIS building management team has developed a common space ambassador program as a supplemental support to monitor distancing practices, provide support to tenants, and assist in the overall return-to-the-workplace experience for employees. This support is funded by PSPC for common areas and is also available for clients to mobilise in their workspaces.
Encouraging personal hygiene
Reinforcing the importance of frequent hand washing remains the cornerstone of preventing the spread of infections, including the COVID-19 virus. Posters illustrating proper handwashing techniques have been created to increase awareness about COVID-19 prevention and are being installed within the LTDLC complex. As well, hand sanitizer dispensers have been installed at each entryway of the LTDLC complex.
Providing high quality drinking water
PSPC adheres to HC’s Guidelines for Canadian drinking water quality, which set out the maximum allowable concentrations of various parameters found in water, in order to provide the cleanest, safest and most reliable drinking water possible. Additionally, for daycares located in federal buildings, which are governed by provincial requirements, the applicable provincial requirement is also applied to ensure the most stringent levels of the federal and provincial requirements for water are met in these facilities.
As you may remember, in December 2019 and January 2020, 249 samples were collected for analysis of bacteria, metal and lead from all consumption points throughout the LTDLC complex. The results indicated that 97% of all consumption points met the new more stringent HC guidelines. The consumption points that did not meet the HC standard demonstrated very slightly elevated lead levels. Proper signage was installed at all affected points of consumption to inform users not to drink the water from these locations, and alternate sources of drinking water were provided. Since then, faucets were replaced and filters were installed as required.
As we prepare for gradual re-occupancy, we are closely monitoring all potable water quality throughout the complex to ensure it remains safe for all occupants.
In addition, PSPC committed to fund and install a total of 175 water bottle refill stations with filtration capabilities throughout the complex by March 2021. The first 40 units are expected to be installed and operational by the end of August, with the rest going in progressively over the fall and winter months.
Continuously monitoring and improving air quality
We are continuing to take proactive measures to address your concerns regarding air quality at the LTDLC complex.
We retained the services of a certified industrial hygienist who developed a tailor-made monitoring strategy to measure indoor air quality on floors where complaints have already been reported. The final report was released a few weeks ago and will be reviewed with the LTDLC complex occupational health and safety committees when the timing is appropriate.
The results did not identify any significant issues with the HVAC systems that supply air to the building, nor any source of chemical contamination in the occupied spaces that could have an impact on indoor air quality. The report identified minor areas that require further study, related to locations where potential water infiltration around windows may have occurred, but these pose no immediate risk to occupants. These findings do affirm and support our continuation of the ongoing window replacement program, which will see the replacement of 71 windows over the summer and fall months of 2020 and will have the added benefit of removing any building materials with potential mould from around leaky windows.
We are continuing with our determined approach to identify all opportunities to improve and optimize the air quality in the complex and, as a result, the way forward includes several additional activities:
- conducting thermal imaging of all exterior windows in the LTDLC complex to ensure we find, and prevent, all potential water leakage issues
- if problems are identified we will take immediate action to replace the affected windows, and ahead of the scheduled complex recladding exercise, which will address any other issues related to windows and water infiltration
- installing an estimated 15 new air purifiers as a pilot project in targeted areas based on service requests
- commissioning of new building automation system to modernize and improve monitoring and control of the air systems within the building
These measures will help increase your overall comfort while also resulting in greener operations with more efficient control and energy performance of the HVAC systems. We are also analyzing other possible scenarios and will work with the occupational health and safety committees on the best course of action to take to continue improving the air quality at the LTDLC complex.
Managing pests efficiently
We are firmly committed to properly manage and eradicate all the pests that have been found in the LTDLC complex. The recent low occupancy has been very helpful in enabling us to conduct even more intense efforts to address these issues.
Bats
Although bats have been trying to make the complex their home, we have been working with experts in the field to put in place a very efficient strategy to resolve this situation. We are pleased to report that 9 bat houses were installed in strategic locations on exterior walls in June 2020 to discourage bats from entering the building.
Another part of the plan involves identifying all the species present and learning more about how to manage them throughout the different seasons. Stantec Consulting was hired to conduct this work and will prepare a report, which will be available in the spring of 2021, to share the results and the recommended mitigation strategies. This type of research has never been done in a federal environment, and the LTDLC complex will be used as an example of best practices for other federal buildings, and even in other jurisdictions.
Bed bugs
Since March 2020, we have been conducting comprehensive inspections for bed bugs throughout all of LTDLC. This has included the use of traps for monitoring and canine inspections of each floor in the complex. Although the presence of bed bugs was detected on certain floors, we have conducted several rounds of deep steam and chemical treatments to completely eradicate the bugs from the affected carpeting and other soft surfaces. The complex was declared free of bedbugs by the experts on June 3, 2020.
Reporting cornerstone of effective management
Early reporting remains the cornerstone of effective management of any pest therefore, if you suspect that there are any pests in your workplace, please notify your manager and call the National Service Call Centre (NSCC) at 1-800-463-1850 so an investigation and treatment can take place. Calls placed to the NSCC remain anonymous. For more information about PSPC's robust pest management plan, visit control of pests in federal buildings.
Modernizing interior workspaces (GCworkplace)
Our objectives for the LTDLC complex include creating a modern workplace to attract, retain and enable public servants to work safely and comfortably in a greener and healthier environment. As you will have witnessed in the months prior to March 2020, several fit-up projects to modernize the workspaces were already underway, with the anticipation that more than 30% of the current LTDLC workforce would be in updated spaces by the end of 2022.
Progress at 15 and 25 Eddy:
- second floor is now complete
- fifth floor fit-up is substantially completed with only IT connectivity to be finished in summer 2020
- 15th floor fit-up is well into construction, with anticipated completion for fall of 2020
- sixth floor project is well into construction, with anticipated completion for fall of 2020
- modernization projects for the fourth, 15th, 17th and 19th floors are currently in the planning stage
Additional work for 10 Wellington (floors 17 to 19) is also advancing.
Although much work had gone into implementing the existing designs for the new GCworkplace standard, the PSPC Interior Design National Centre of Expertise has been actively evaluating new design considerations to support the current reality and the future of work. As a result, current fit-up project designs may evolve based on the new guidance. Once more is known, we will share with you any modifications to the GCworkplace standard.
Replacing the exterior
With a significant investment of over $200 million, we are working towards fully replacing the brick façade of the entire LTDLC complex over the next few years. A request for proposal to engage a new architectural and engineering consultant team will be tendered over the summer with contract award expected in the fall of 2020. Supporting contracts for environmental and commissioning services will also be put in place concurrently. The design of the new replacement envelope is expected to be completed by the fall of 2021 and will be followed by construction until 2026.
Renewing and replacing base building systems
Other measures we are investing in to support you with modernized air management systems throughout the LTDLC complex involve the 2 key projects (currently in the planning stages):
- replacing the make-up air units all throughout the complex, ultimately resulting in a large increase in capacity of the air handling system
- modernisation of building control systems, to improve integration and control of the overall building HVAC and lighting systems, which will provide you with greater comfort
Providing you with additional support
At the townhall we committed to hiring a tenant experience coordinator (TEC) who will be responsible for better tracking all your interactions with the National Service Call Centre and for providing you with additional support for any building issues that come up.
This representative is expected to be on site once you gradually start returning to the worksite in the fall. The TEC's focus will be firmly on proactively resolving issues and identifying opportunities for continuous improvement, such as initiatives involving additional cleanliness, comfort and general environmental conditions. The TEC will provide feedback and relay improvement opportunities directly to the property and facility management team.
Keeping you informed
We are working closely with building managers, union representatives, employees on the premises, and health and safety committees to identify and resolve the concerns that have been raised by employees.
Please know that we are deeply grateful for your patience and perseverance as we implement changes to improve the LTDLC complex workplace.
| https://www.tpsgc-pwgsc.gc.ca/comm/vedette-features/2020-02-18-00-eng.html
Samuel P. Huntington is the man who, in recent years, has reclassified the world. Carter’s advisor and director of the Harvard Institute of International Policy, Huntington divides the world into nine civilizations: Western, Slavic (or Orthodox), Islamic, African, Latin, Chinese, Hindu, Buddhist and Japanese.
Of course, his thinking is deeply influenced by the great religious divisions and presents many problems of adjustment. Not in all areas of the world is this consciousness clear and strong; the classification presents significant regional problems, such as the presence of Israel in the Middle East.
Already in 1993, Huntington argued that the borders between the West and Islam would quickly turn bloody; but the problem is not so much the rekindling of this historic dispute, which has been going on since the year 1000 and with which we are accustomed to living, with its ups and downs. The bigger and deeper point is that there is a vast and multifaceted attack on the “irresponsible” way of life of advanced societies and the West, an attack that takes two forms: competition and aggression.
China and Russia are on the first track; the Middle East, Africa and South America are on the second. It is an open critique of our privileges by societies that cannot find order and development and that blame us for this tragic state of affairs. Our faults are endless, but they cannot be perennial. With this work the American scholar, who is all too easy to dismiss as a conservative, brings to the fore an entity different from the states of the ancien régime: the theme of deep cultures, or civilizations.
“My hypothesis – he writes – is that the source of fundamental conflict in the new world in which we live will not be substantially ideological or economic. The great divisions of humanity and the main source of conflict will be linked to culture. National states will remain the main actors in the global context, but the most important conflicts will take place between nations and groups of different civilizations. The clash of civilizations will dominate world politics. The fault lines between civilizations will be the lines on which the battles of the future will take place. In the history of ideological and class conflicts the key question was ‘who are you with?’. Today, in conflicts between civilizations, the key question becomes ‘who are you?’” These are words we must learn to consider; they are a piece of the truth of our time. | https://scholaitalica.com/en/samuel-huntington-clash-of-civilization-lo-clash-of-civilta/
Description: Non-Breeding birds are white all over and the iris, lores and the bill are yellow. The legs are also yellow but are much paler than the bill.
Breeding adults develop buffy-orange plumes over their head, neck, throat and the lower back. The bill and legs also develop a reddish-orange hue.
It is a very common and very friendly bird, which is good news for those who want to photograph it. If you approach it slowly and without much movement, it won’t mind you coming quite close.
Cattle Egrets are so named because they are often found near cattle herds, sitting on the backs of cows and buffaloes. They are among the cattle's best friends: they eat insects and parasites that cling to the cattle's bodies, keeping the animals clean and safe from infections. If you want to find them, the best place I would suggest is a paddy field with standing water.
Food: Insects, frogs, snails, grasshoppers and even lizards. | https://birdingwitharjun.com/2020/09/07/cattle-egret/ |
The utility model relates to the field of microstrip antennas and discloses a wide-beam microstrip antenna. The antenna comprises a dielectric plate; a microstrip antenna assembly arranged on one surface of the dielectric plate, the assembly comprising at least three metal sheets; and a feed network arranged on the other surface of the dielectric plate. The feed network comprises a signal input line, a radio frequency switch and at least two network branches: the signal input line is connected with the radio frequency switch, the radio frequency switch is connected with the at least two network branches, and the radio frequency switch is used for controlling the current from the signal input line to flow to the at least two network branches. Each network branch comprises at least one network endpoint, and each network endpoint comprises a feed coupling sheet which penetrates into the dielectric plate and is coupled with the microstrip antenna assembly. |
Let's start from scratch in thinking about what memory is for, and consequently, how it works. Suppose that memory and conceptualization work in the service of perception and action. In this case, conceptualization is the encoding of patterns of possible physical interaction with a three-dimensional world. These patterns are constrained by the structure of the environment, the structure of our bodies, and memory. Thus, how we perceive and conceive of the environment is determined by the types of bodies we have. Such a memory would not have associations. Instead, how concepts become related (and what it means to be related) is determined by how separate patterns of actions can be combined given the constraints of our bodies. I call this combination “mesh.” To avoid hallucination, conceptualization would normally be driven by the environment, and patterns of action from memory would play a supporting, but automatic, role. A significant human skill is learning to suppress the overriding contribution of the environment to conceptualization, thereby allowing memory to guide conceptualization. The effort used in suppressing input from the environment pays off by allowing prediction, recollective memory, and language comprehension. I review theoretical work in cognitive science and empirical work in memory and language comprehension that suggest that it may be possible to investigate connections between topics as disparate as infantile amnesia and mental-model theory.
A functional theory of memory has already been developed as part of a general functional theory of cognition. The traditional conception of memory as “reproductive” touches on only a minor function. The primary function of memory is in constructing values for goal-directedness of everyday thought and action. This functional approach to memory rests on a solid empirical foundation.
Glenberg's theory is rich and provocative, in our view, but we find fault with the premise that all memory representations are embodied. We cite instances in which that premise mispredicts empirical results or underestimates human capabilities, and we suggest that the motivation for the embodiment idea – to avoid the symbol-grounding problem – should not, ultimately, constrain psychological theorizing.
Glenberg's rethinking of memory theory seems limited in its ability to handle abstract symbolic thought, the selective character of cognition, and the self. Glenberg's framework can be elaborated by linking it with theoretical efforts concerned with cognitive development (Piaget) and ecological perception (Gibson). These elaborations point to the role of memory in specifying the self as an active agent.
We are sympathetic to most of what Glenberg says in his target article, but we consider it common wisdom rather than something radically new. Others have argued persuasively against the idea of abstraction in cognition, for example. On the other hand, Hebbian connectionism cannot get along without the idea of association, at least at the neural level.
(1) Non-projectable properties as opposed to the clamping of projectable properties play a primary role in triggering and guiding human action. (2) Embodiment in language-mediated memories should be qualified: (a) Language imposes a radical discretization on body constraints (second-order embodiment). (b) Metaphors rely on second-order embodiment. (c) Language users sometimes suspend embodiment.
This commentary connects some of Glenberg's ideas to similar ideas from artificial intelligence. Second, it briefly discusses hidden assumptions relating to meaning, representations, and projectable properties. Finally, questions about mechanisms, mental imagery, and conceptualization in animals are posed.
Corresponding to Glenberg's distinction between the automatic and effortful modes of memory, I propose a distinction between cued and detached mental representations. A cued representation stands for something that is present in the external situation of the representing organism, while a detached representation stands for objects or events that are not present in the current situation. This distinction is important for understanding the role of memory in different cognitive functions like planning and pretense.
Where is the body in the mental model for a story?
Glenberg argues for embodied representations relevant to action. In contrast, we propose a grouping of representations, not necessarily all being directly embodied. Without assuming the existence of representations that are not directly embodied, one cannot account for the use of knowledge abstracted from direct experience.
Has Glenberg forgotten his nurse?
Glenberg's conception of “meaning from and for action” is too narrow. For example, it provides no satisfactory account of the “logic of Elfland,” a metaphor used by Chesterton to refer to meaning acquired by being told something.
All that we call spirit and art and ecstasy only means that for one awful instant we remember that we forget.
Glenberg provides a new and exciting view that is especially useful for capturing some functional aspects of memory. However, memory and its functions are too multifarious to be handled by any one conceptualization. We suggest that Glenberg's proposal be restricted to its own “focus of convenience.” In addition, its value will ultimately depend on its success in generating detailed and testable theories.
Glenberg focuses on conceptualizations that change from moment to moment, yet he dismisses the concept of working memory (sect. 4.3), which offers an account of temporary storage and on-line cognition. This commentary questions whether Glenberg's account adequately caters for observations of consistent data patterns in temporary storage of verbal and visuospatial information in healthy adults and in brain-damaged patients with deficits in temporary retention.
To model potential interactions, memory must not only mesh prior patterns of action, as Glenberg proposes, but also their internal consequences. This is necessary both to discriminate sensorimotor information by its relevance and to explain how go als about the world develop. In the absence of internal feedback, Glenberg is forced to reintroduce a grounding problem into his otherwise sound model by presupposing interactive goals.
Is memory caught in the mesh?
Can memory be cast as a system that meshes events to actions? This commentary considers the concepts of mesh versus association, arguing that thus far the distinction is inadequate. However, the goal of shifting to an action-based view of memory has merit, most notably in emphasizing memory as a skill and in focusing on processes as opposed to structures.
Glenberg tries to explain how and why memories have semantic content. The theory succeeds in specifying the relations between two major classes of memory phenomena – explicit and implicit memory – but it may fail in its assignment of relative importance to these phenomena and in its account of meaning. The theory is syntactic and extensional, instead of semantic and intensional.
There are three major weaknesses with Glenberg's theory. The first is that his theory makes assumptions about internal representations that cannot be adequately tested. The second is that he tries to accommodate data from three disparate domains: mental models, linguistics, and memory. The third is that he makes light of advances in cognitive neuroscience.
The functional theory of memory set out in Glenberg's target article accords with recent proposals in the developmental literature with respect to event memory, conceptualization, and language acquisition from an embodied, experiential view. The theory, however, needs to be supplemented with a recognition of the sociocultural contribution to these cognitive processes and emerging structures.
What would Glenberg's attractive ideas look like when computationally fleshed out? I suggest that the most helpful next step in formalizing them is neither a connectionist nor a symbolic implementation (either is possible), but rather an implementation- general analysis of the task in terms of the informational content required.
The ability of Glenberg's model to explain the development of complex symbolic abilities is questioned. Specifically, it is proposed that the concepts of clamping and suppression fall short of providing an explanation for higher symbolic processes such as autobiographical memory and language comprehension. A related concept, “holding in mind” (Olson 1993), is proposed as an alternative. | https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/issue/0E52ADE29B295C24021F5E02CC04D367 |
Analyzing high throughput genomics data is a complex and compute intensive task, generally requiring numerous software tools and large reference data sets, tied together in successive stages of data transformation and visualisation. A computational platform enabling best practice genomics analysis ideally meets a number of requirements, including: a wide range of analysis and visualisation tools, closely linked to large user and reference data sets; workflow platform(s) enabling accessible, reproducible, portable analyses, through a flexible set of interfaces; highly available, scalable computational resources; and flexibility and versatility in the use of these resources to meet demands and expertise of a variety of users. Access to an appropriate computational platform can be a significant barrier to researchers, as establishing such a platform requires a large upfront investment in hardware, experience, and expertise.
Results
We designed and implemented the Genomics Virtual Laboratory (GVL) as a middleware layer of machine images, cloud management tools, and online services that enable researchers to build arbitrarily sized compute clusters on demand, pre-populated with fully configured bioinformatics tools, reference datasets and workflow and visualisation options. The platform is flexible in that users can conduct analyses through web-based (Galaxy, RStudio, IPython Notebook) or command-line interfaces, and add/remove compute nodes and data resources as required. Best-practice tutorials and protocols provide a path from introductory training to practice. The GVL is available on the OpenStack-based Australian Research Cloud (http://nectar.org.au) and the Amazon Web Services cloud. The principles, implementation and build process are designed to be cloud-agnostic.
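As a rough sketch of the underlying idea (launching a pre-built analysis machine image on a cloud on demand), the snippet below uses the boto3 library to start a single virtual machine on AWS. The image ID, instance type and key pair name are placeholders for values from your own account, and in practice the GVL's own cloud management tooling automates these steps as well as the subsequent cluster and tool configuration.

import boto3

# Minimal sketch (not the GVL launcher itself): start one VM from a pre-built image.
# All identifiers below are placeholders.
ec2 = boto3.resource("ec2", region_name="ap-southeast-2")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: a pre-configured analysis image
    InstanceType="m5.xlarge",          # placeholder instance size
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",              # placeholder SSH key pair name
)
vm = instances[0]
vm.wait_until_running()
vm.reload()
print("Analysis VM running at", vm.public_ip_address)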
Conclusions
This paper provides a blueprint for the design and implementation of a cloud-based Genomics Virtual Laboratory. We discuss scope, design considerations and technical and logistical constraints, and explore the value added to the research community through the suite of services and resources provided by our implementation.
Citation: Afgan E, Sloggett C, Goonasekera N, Makunin I, Benson D, Crowe M, et al. (2015) Genomics Virtual Laboratory: A Practical Bioinformatics Workbench for the Cloud. PLoS ONE 10(10): e0140829. https://doi.org/10.1371/journal.pone.0140829
Editor: Christophe Antoniewski, CNRS UMR7622 & University Paris 6 Pierre-et-Marie-Curie, FRANCE
Received: May 21, 2015; Accepted: September 29, 2015; Published: October 26, 2015
Copyright: © 2015 Afgan et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited
Data Availability: All relevant data are within the paper.
Funding: This work was supported by a grant from The National eResearch Collaboration Tools and Resources project (NeCTAR; http://nectar.org.au). NeCTAR is an Australian Government Super Science project, financed by the Education Investment Fund. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
Introduction
What is the problem?
Modern genome research is a data-intensive form of discovery, encompassing the generation, analysis and interpretation of increasingly large amounts of experimental data against catalogs of public genomic knowledge in complex multi-stage workflows. New algorithm and tool development continues at a rapid pace to keep up with new ‘omic’ technologies, particularly sequencing. There are many visualisation options for exploring experimental data and public genomic catalogs (e.g. UCSC Genome Browser, GBrowse, IGV). Analysis workflow platforms such as Galaxy, Yabi, Chipster, Mobyle, or GenePattern (to name a few) allow biologists with little expertise in programming to develop analysis workflows and launch tasks on High Throughput Computing (HTC) clusters.
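To give a flavour of how such a platform can also be driven programmatically, the short sketch below uses the BioBlend Python library to connect to a Galaxy server, create a history and list the workflows available to the account. The server URL and API key are placeholders; they would be replaced with a real Galaxy instance and a key generated in its user preferences.

from bioblend.galaxy import GalaxyInstance

# Placeholders: point these at a real Galaxy server and a valid API key.
gi = GalaxyInstance(url="https://galaxy.example.org", key="YOUR_API_KEY")

# Create a history to hold results, then list the workflows this account can run.
history = gi.histories.create_history(name="demo-analysis")
print("New history:", history["id"])
for wf in gi.workflows.get_workflows():
    print(wf["id"], wf["name"])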
However, the reality is that the necessary tools, platforms and data services for best practice genomics are generally complicated to install and customize, require significant computational and storage resources, and typically involve a high level of ongoing maintenance to keep the software, data and hardware up-to-date. It is also the case that a single workflow platform, however comprehensive, is rarely sufficient for all the steps of a real-world analysis. This is because analyses often involve analyst decisions based on feedback from visualisation and evaluation of processing steps, requiring a combination of various analysis, data-munging and visualisation tools to carry out an end-to-end analysis. This in turn requires expertise in software development, system administration, hardware and networking, as well as access to hardware resources, all of which can be a barrier for widespread adoption of genomics by domain researchers.
The consequences of these circumstances are significant:
- Reproducibility of genomics analyses is generally poor, in part because analysis environments are hard to replicate;
- Tools and platforms that are able to provide best practice approaches are often complex, relying on technical familiarity with complicated compute environments;
- Even for researchers with relevant technical skills and knowledge, managing software and data resources is often a significant time burden;
- Skills training and education are often disconnected from practice, often because of the analysis environment constraints;
- Accessing sufficient computation resources is challenging with current data sets, and this is compounded by the trend to larger experimental data; for instance, moving from exome to genome scale analysis is a significant scalability problem in backend compute;
- Data management and movement is a technical challenge that affects the speed and accessibility of analysis. Again, this is compounded by the trend towards larger data sets.
We argue that lack of widespread access to an appropriate environment for conducting best-practice analysis is a significant obstruction to reproducible, high quality research in the genomics community; and further, transitioning from training to practice places non-trivial technical and conceptual demands on researchers. Public analysis platforms, such as Galaxy, provide solutions to some of these issues (particularly accessibility), but are generally handicapped by rapid growth in per-user demand for compute resources and data storage, and the enforced constraints on flexibility that are a requirement of a centrally managed resource.
What is the solution?
What is needed is a ‘virtual laboratory’ environment to support genomics researchers, one that would meet a number of criteria, ideally providing:
- Reproducibility: through workflows and stable underlying analysis platform;
- Accessibility: through ease of gaining access to and using the platform;
- Flexibility: by imposing as few constraints as possible in the types of analysis and the methods that may be implemented, supported via a user-controlled environment;
- Performance: through scalability and highly available compute resources;
- Consistency: a common platform from training to best practice;
- Capability: through pre-population with best practice tools and reference datasets.
The objective of building such an environment is to make a platform embodying each of these characteristics widely available to a diverse range of users, facilitating widespread best practice training and analysis for genomics.
This is, of course, not a trivial objective to achieve, as each of these criteria has significant design and technical implications:
Reproducible genomics requires, at a minimum, a way of accessing the same tools and reference datasets used in an analysis, combined with a comprehensive record of the steps taken in that analysis in the form of a workflow, in sufficient detail to reliably produce the same outcome from the same input data, assuming a deterministic analysis. At the most basic level reproducibility can be achieved with shell scripting and documentation, but issues in ease of use, maintenance and genuine reproducibility are well-known. This has catalysed a number of efforts in developing platforms for reproducible scientific analysis through structured workflows, including Galaxy, Yabi, Chipster, GenePattern and numerous commercial products (e.g., Igor, BaseSpace (https://basespace.illumina.com/), Globus Genomics). An environment supporting reproducible genomics requires at least a workflow platform and a system for ensuring stability of the underlying software and data.
We would define an accessible environment as one that is:
- Simple to invoke or obtain access to (low cost of entry)
- Simple to communicate with (easy to connect, low latency)
- Simple to interact with, requiring minimal training in order to use effectively (intuitive)
Simplifying access to an analysis environment, then, requires the provider to furnish an intuitive platform that requires minimal client-side configuration—ideally, a web browser—and, further, does not require significant preparation or resources to invoke. In other words, the ideal accessible environment is one which a new user can immediately connect to and start using for training or data analysis. In many ways, public analysis services such as the Galaxy Main server (https://usegalaxy.org) and GenePattern (http://genepattern.broadinstitute.org) provide exactly this experience, and taken in isolation, meet the challenge of reproducible and accessible analysis extremely well.
However, managed services, while highly accessible, cannot provide great flexibility, which we would define as the freedom to both configure an environment and access that environment through a variety of means. Maximising flexibility implies user-level administrative control (e.g., configuring data, tools and, potentially, the supporting operating system directly), which is not generally possible in a centrally managed service. Hence, flexibility is in some ways the natural enemy of a managed service.
Building an analysis environment that guarantees good performance for a wide user base is especially challenging. In the case of a managed service for genomics, the more successful the service is in attracting users, the more likely it is that performance will suffer due to the number of users, particularly as those users explore larger data sets through a wider range of analysis options. Good performance on a per-user basis is a combination of available resources, user access to those resources, underlying infrastructure limits and bottlenecks (for instance, disk I/O), and the inherent scalability of the environment. We would argue that performance in the context of a widely available, flexible genomics environment requires high-availability, scalable back-end compute resources. We will discuss performance design principles and implications in more detail in a later section, as this is a particularly challenging but critical characteristic of an environment that aims to support large genomics data analysis.
Providing a consistent experience from training to practice is a combination of (at least) accessibility, performance, and flexibility. Ideally an analysis environment would be accessible for new users, with training materials that follow best practice protocols delivered in an intuitive way, leading to seamless scale-up for analysis of real data sets using the same interaction paradigm and maintaining good performance.
As users become more sophisticated in genomics analysis, they often move from a single intuitive analysis platform (such as Galaxy) to multiple platforms (R, command line, custom scripts) that provide more capability and flexibility (generally at the expense of simplicity). Therefore, a design principle for a general genomics environment should be for that environment to be able to be used for training (implying at least an accessible platform), but able to scale in flexibility by adding more options for interaction (such as command line and/or programmatic interfaces), and scale computationally to provide the performance for real data analysis. For all levels of the environment, we would provide high capability through access to best practice tools and availability of reference datasets, ideally linked to low latency visualisation and data interpretation services.
Table 1 summarises the above discussion and captures core implications for each category of a powerful genomics laboratory.
Wide Availability
Designing a flexible, accessible, reproducible, high-performing environment to be widely available to a large, potentially geographically dispersed audience, places serious demands on system design and architecture. One useful interpretation of ‘widely available’ is that the environment has a low cost of entry as a whole—that is, minimal preparations and resources are required before obtaining access to an analysis environment that is genuinely useful. The more obstructions that are placed before a user can start doing analysis, the less available the environment can claim to be. For example, managed public web services have a very low cost of entry and are certainly widely available. In order to make a more flexible, high performing environment widely available for open ended data analysis, we need to enable a user to quickly and intuitively build or deploy an environment that uses infrastructure resources that are relatively simple for the user to obtain.
To that end, resources underpinning the analysis environment must be low-cost, scalable, and available to the user; to provide flexibility and performance, the user must have some control of these resources. A prominent infrastructure paradigm that fits these requirements is cloud computing. Cloud computing has demonstrated its suitability for providing highly available, accessible computational infrastructure suitable for data analysis. In the cloud model, one rents computing resources in the form of virtual machines on an as-needed basis, from a pool of resources that is large enough to guarantee high availability, and therefore good scalability.
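As a concrete illustration of this rental model (not specific to the GVL), the sketch below requests a single virtual machine on demand from a public cloud using the boto3 client for AWS EC2. The image ID, instance type and key pair name are placeholders, not real GVL resources.

```python
# Illustrative only: renting a virtual machine on demand from a public cloud
# (here AWS EC2 via boto3). The image ID, instance type and key name are
# placeholders rather than actual GVL resources.
import boto3

ec2 = boto3.resource("ec2", region_name="ap-southeast-2")  # Sydney region

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="m5.large",           # placeholder size, chosen per workload
    KeyName="my-keypair",              # placeholder SSH key pair
    MinCount=1,
    MaxCount=1,
)
print("Launched:", instances[0].id)
```

Once finished, the machine can be terminated and the resources are returned to the shared pool, which is what makes the pay-as-you-go, high-availability model possible.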
Further, providing an environment for a large audience over a large geographic region means that network bandwidth may become an important factor in getting data to and from the environment, as bandwidth often correlates with distance from a service. Thus in practice, making a flexible, high performance environment widely available largely depends on the availability of low-cost infrastructure that is relatively close in terms of latency/bandwidth. Cloud computing often addresses this requirement through regional geographic hubs.
Cloud Computing Solutions for Genomics
Cloud resources have become quite popular in the form of public clouds (e.g., Amazon Web Services (AWS), HP Cloud, Google Compute Engine) where one pays only for the resources consumed. These resources are provisioned as ‘bare bones’ machines that need to be carefully tailored for use in genomics. This includes procuring the required resources, installing and configuring the necessary software, and populating the machines with appropriate data—all tasks that are time consuming and require significant technical expertise. Consequently, a range of cloud management software applications have been developed that tailor cloud resources to fulfill a functional role in bioinformatics. In addition to these dedicated, cloud-aware applications, a number of platforms or virtual laboratories have also been developed that aggregate the functionality of many applications. Galaxy on the Cloud offers a preconfigured Galaxy application in a cloud environment. More generally, Globus Galaxies offers a general purpose platform for deploying software-as-a-service solutions in the cloud based on Galaxy. Additional platforms that focus on Big Data solutions and use of the MapReduce model include Cloudgene and Eoulsan. See Calabrese and Cannataro for a more detailed overview of the existing cloud-aware applications and platforms.
Over the past few years, there has been an increasing trend towards cloud resources also becoming available as research infrastructures, for example the Open Science Data Cloud in the US, EGI Federated Cloud in the EU, and NeCTAR Research Cloud in Australia. These provide alternatives to the public clouds by offering centralized access to clouds for researchers and projects, generally with merit-based allocation as opposed to direct financial expense to the researcher. NeCTAR, for example, offers access to an OpenStack-based Research Cloud (http://nectar.org.au/research-cloud) where any researcher in Australia can access a limited allocation of virtual machines and storage, and apply for larger allocations of both.
These national compute infrastructures provide readily available virtual hardware, with the opportunity to address the scalability issue both at the personal level (as a researcher can request temporary resources as required) and at the community level (as each research group can draw on its own merit-allocated CPU and storage quota, rather than overburdening a centralised server). The advent of research-oriented cloud computing has created an opportunity to build support for bioinformatics analyses on these highly available national infrastructures and public clouds.
Results and Discussion
Designing the Genomics Virtual Laboratory
In this section we provide a template for designing and building a genomics analysis environment based on a cloud-aware workbench of genomics tools platforms.
In response to the described circumstances, we developed the Genomics Virtual Laboratory (GVL). The GVL is designed to be a comprehensive genomics analysis environment supporting accessible best practice genomics for as wide a community of researchers as possible; this philosophy directs the design and implementation of the GVL to a great extent, as accessibility, flexibility, performance and wide availability are principal drivers. In practice, the GVL is a combination of scalable compute infrastructure, workflow platforms, genomics utilities, and community resources.
The primary objective of the GVL is to enable researchers to spawn and/or access automatically configured and highly available genomics analysis tools and platforms as a versatile workbench. Workbench instances may be centrally-managed servers or standalone and dedicated cloud-based versions. Either option is scalable and comes pre-populated with field-tested and best-of-breed software solutions in genomics, increasing reproducibility and usefulness of the solution. The aim is to offer a true genomics workbench suitable for bioinformatics data analysis for users with a variety of needs.
The design principle for the GVL has been to attempt to meet each of the design criteria—accessibility, reproducibility, flexibility, performance, capability and consistency—using existing software as much as possible. There are already very mature genomics workflow platforms providing accessibility and reproducibility, for instance; likewise, sophisticated platforms for flexible programmatic and statistical approaches to analysis and visualisation. With that in mind, a number of design choices were made on the functional software components of the GVL. These are summarised in Table 2, while individual solutions are described in more detail under Using the Genomics Virtual Laboratory.
The choice of components was based on a number of factors, including platform functionality, platform maturity, community uptake and complementarity (e.g. Galaxy is focussed on bioinformatics workflows and easy access to tools; IPython Notebook on programmatic analyses; RStudio Server (http://www.rstudio.org/) on statistical analyses; the UCSC genome browser is perhaps the most popular genome browser). In the case of the management middleware for deploying the platforms, CloudMan has been demonstrably successful in providing cloud-based genomics workflow platforms based on Galaxy and was therefore the software of choice for this role. Additionally, local expertise was a factor in the final design decisions of the GVL workbench (e.g., tutorials).
Also considered in designing the GVL were the benefits it offers to different sections of the community. As a scalable, extensive and customisable framework, the GVL primarily caters to individual genomics researchers and small labs—it offers a simple, quick and reliable method for obtaining access to a scalable environment for genomics analysis and visualisation. It allows complete control over self-deployed instances and easy access to common data resources, such as indexed genomes, which can otherwise be problematic and time consuming to obtain. Finally, it offers substantial resources for learning new analysis methods.
Next, the GVL caters to the broader bioinformatics community—it provides a low-cost-of-entry target platform for collaborative genomics analysis and data sharing that is based on a common platform. The ability to customize the platform further facilitates its use for tool development and distribution. These features together also make the GVL a good environment for developing training materials and curricula, and for teaching. Because it is based on tools that intrinsically enable a reusable record of any analysis (e.g., Galaxy, IPython Notebook), the GVL also encourages reproducible research.
Finally, the GVL appeals to research infrastructure bodies and research institutions because it promotes democratized access to large scale, complex genomics analysis infrastructure. It focuses on simple and cost effective scaling (both in breadth and depth) of national computational infrastructure by delivering an accessible and powerful solution to genomics researchers.
Using the Genomics Virtual Laboratory
In this section we describe the resulting functionality of the GVL from a user perspective—that of an ordinary end-user carrying out research or training, and that of a developer or a researcher building new tools and infrastructure.
Using the GVL as a researcher. From the users’ perspective, the GVL comprises three main parts: the cloud-launchable GVL workbench, always-on managed services, and community resources.
Cloud-launchable instances: these are based on the GVL machine image, and can be easily launched and configured via a launcher web application (Fig 1). Each launched instance runs the following services:
- The GVL Dashboard: provides easy access to all the services and their status—this is the default landing page for all self-launched GVL instances (Fig 2);
- CloudMan: used to manage cloud services associated with an instance, such as compute and storage. This includes the ability to scale the instance by adding worker nodes, turning each GVL instance into a virtual cluster-on-the-cloud;
- Galaxy: a popular web-enabled platform for reproducible bioinformatics analysis, capable of deploying jobs over the cluster-on-the-cloud. Researchers can customise Galaxy via the Galaxy Toolshed and Galaxy Data Managers, and can also drive Galaxy programmatically through its API (see the sketch below);
- RStudio Server: a web-based platform for statistical analysis using R;
- IPython Notebook: a web-based platform for programmatic analysis using Python;
- VNC Remote Desktop: a web-based remote desktop interface to the Linux operating system (Ubuntu);
- An underlying Linux environment with full ssh access and administrative control. This includes access to command-line bioinformatics tools and reference data that comes preinstalled on the system (i.e., all the tools installed via Galaxy).
Fig 1. (a) A user initiates the launch process via the launch service (launch.genome.edu.au) by providing their cloud credentials to the launcher application and (b) within a few minutes is able to access the management interface (CloudMan) on the deployed instance of the workbench. (c) After workbench services have started, the researcher can use the applications as desired (e.g., Galaxy).
Fig 2. The GVL Dashboard is a portal running on every GVL instance. It lists all of the available services and their status, and offers direct links to access them.
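As an illustration of the programmatic access mentioned above, the following sketch drives the Galaxy service on a launched instance through its REST API using the BioBlend client library. The instance URL and API key are placeholders supplied by the user, and the specific calls are illustrative rather than prescribed by the GVL.

```python
# Minimal sketch: driving the Galaxy service on a (hypothetical) GVL instance
# through its REST API using the BioBlend client library. The URL and API key
# are placeholders obtained from the user's own instance.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="http://203.0.113.10/galaxy", key="YOUR_API_KEY")

# Create a history to hold the analysis and upload a FASTQ file into it.
history = gi.histories.create_history(name="rna-seq-run")
upload = gi.tools.upload_file("sample_R1.fastq.gz", history["id"])

# List workflows already installed on the instance (e.g. from a GVL tutorial).
for wf in gi.workflows.get_workflows():
    print(wf["id"], wf["name"])
```

The same history and workflow objects remain visible in the Galaxy web interface, so interactive and scripted use of a GVL instance can be mixed freely.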
Managed services: these are services hosted by the GVL project that are readily available to anyone:
- Galaxy Tutorial instance: a managed Galaxy tutorial server (called Galaxy-tut; available at galaxy-tut.genome.edu.au) tailored for training and interactive learning with all the tools required to run GVL Tutorials. This server can be freely used by any researcher for training purposes, including running GVL tutorials and workshops;
- Galaxy Research instance: a managed Galaxy server for real-world research analyses (called Galaxy-QLD; available at galaxy-qld.genome.edu.au) that offers a broad spectrum of tools and generous storage quotas (currently 1TB);
- A local mirror of the UCSC genome browser (available on the Australian Research Cloud at ucsc.genome.edu.au) and associated visualisation services for fast access to hosted data;
- Shared reference datasets, such as reference genomes and indices, that are automatically made available to any launched GVL instance as a read-only networked file system.
Community resources and support, in the form of comprehensive online teaching materials around common genomics experimental analyses, together with a mechanism for delivering them to the bioinformatics community:
- GVL Tutorials: introduce important bioinformatics analysis techniques to new users. These tutorials are self-contained and can be self-driven or used in a workshop setting. For the most part, they make use of Galaxy for its excellent learning environment, and can be run on a training instance such as Galaxy-tut, or on a self-launched instance. Developed tutorials are based on common best practices or published methods (e.g., Trapnell et al.);
- GVL Protocols: are field-tested procedural methods in the design and implementation of a bioinformatics analysis, which, in comparison to Tutorials, provide less detailed instructions on each step, but more advice on analysis options and best-practice principles. Protocols include a general overview of the problem and a skeleton for an analysis but do not specify exact tools, parameters, or sample data. Consequently, they are seen as a roadmap for an analysis that should be extended or modified to accommodate the needs of a particular research analysis;
- Galaxy-enabled tools built by the GVL team: developed tools are available through the main Galaxy Toolshed and come pre-installed on any launched GVL instance. Many tools are used in GVL Tutorials and Protocols;
- Email-based helpdesk support for all components of the GVL.
These resources are presented to users as three broad categories, LEARN, USE and GET, which may be familiar to Galaxy users from http://galaxyproject.org/:
- USE—make use of managed services, including Galaxy servers and the UCSC genome browser;
- GET—get your own analysis platform using cloud infrastructure, with full administrative control and additional power-user utilities. This option allows a user to transition smoothly from training to research—a user-launched GVL instance provides a research environment consistent with the USE and LEARN environments, but allows researchers full control for further customisation (Fig 1);
- LEARN—learn bioinformatics analysis using GVL Tutorials, running them either on the Galaxy-Tut server or on a user's own instance. More advanced users can make use of GVL Protocols.
Currently, the GVL is implemented and available on the Australian Research Cloud as well as on Amazon Web Services (Sydney region). In addition, managed services (i.e., USE—running on the Research Cloud) and products (i.e., LEARN) are freely available to anyone. The self-launched instances (i.e., GET) are available to Australian researchers and groups that have an allocation on the Research Cloud, or to anyone with Amazon Web Services credentials. All of the GVL services are linked from the GVL main webpage: https://genome.edu.au.
Leveraging the GVL as a developer. From the technical development perspective, the GVL comprises:
- A set of machine images, cloud resource components, and source code repositories containing the functional elements of the workbench: Galaxy, IPython Notebook, RStudio, bioinformatics toolkit;
- A sophisticated cloud infrastructure and application management system (i.e., CloudMan) to:
- Enable users to launch/deploy a new instance of the workbench;
- Manage workbench services and resources as required; and
- Scale the backend cloud infrastructure to match performance requirements, by building a cluster-on-the-cloud with Slurm as the job manager.
- Access to a shared file system containing large reference data; this file system is geographically replicated via the GlusterFS file system (http://www.gluster.org/) and any launched instance can connect to it in read-only mode;
- An automated process for generating machine images and other cloud components that are pre-populated with the latest tools. The build process is based on a set of Ansible roles (http://www.ansible.com/) that are publicly available as a set of open source scripts (https://bitbucket.org/gvl/gvl-image-playbook). The GVL build process is compatible with multiple clouds and can be used to replicate the GVL environment by anyone (for documentation on how to do this, see the mentioned source code repository). Ansible roles are not used by ordinary end-users, who make use of a pre-built image to launch their GVL instance. However, they are useful for developing new images or deploying to new cloud environments.
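As a rough illustration of how a developer might drive such a build against a freshly booted cloud VM, the sketch below wraps the ansible-playbook command line in Python. The host address, remote user and playbook filename are placeholders; the actual roles and playbooks live in the gvl-image-playbook repository referenced above.

```python
# Sketch only: replicating a GVL-style image build by running an Ansible
# playbook against a freshly booted cloud VM. The inventory host, remote user
# and playbook filename are placeholders; the real roles live in the
# gvl-image-playbook repository referenced in the text.
import subprocess
import tempfile

host = "203.0.113.10"  # placeholder IP address of the build VM

# Write a one-host inventory file for ansible-playbook to consume.
inventory = tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False)
inventory.write(f"[image_builder]\n{host} ansible_user=ubuntu\n")
inventory.close()

# Run the playbook; check=True raises an error if the build fails.
subprocess.run(
    ["ansible-playbook", "-i", inventory.name, "playbook.yml"],
    check=True,
)
```

Once the roles complete, the configured VM can be snapshotted into a new machine image, which is essentially what the automated GVL build process does on each supported cloud.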
A user-deployed GVL instance provides a Linux environment pre-populated with common programming languages and bioinformatics libraries, as well as with popular analysis platforms. A GVL instance also comes with a preconfigured database server (PostgreSQL), cluster management system (Slurm), and with CloudMan, which is capable of adding and removing worker nodes from the virtual cluster, managing various storage options, and managing GVL services. This environment provides developers and bioinformaticians with a convenient platform for tool development and testing, both command-line and Galaxy-based. The choice of type of machine instance and the cluster-on-the-cloud scalability features also provides an excellent environment for tool benchmarking and scalability testing.
The open build system of the GVL, and the general applicability of the cluster-on-the-cloud and service management model, make the GVL a good starting point for the development of other cloud-based research environments. Labs or developers can thus take the core GVL and customise it or extend it to meet their particular needs. This capability has already been exploited within the GVL project itself: genomics researchers often work in a specific sub-domain, each of which requires a specific set of tools. Using the GVL's flexible build system, we are developing specialised "flavours" of the GVL workbench suitable for particular uses. The following flavours are currently under development and more are planned (in addition, community-contributed flavours are welcomed):
- Full: a complete toolset deployed by the GVL
- Tutorial: a set of tools used by the GVL tutorials
- Microbial: a toolset focused on microbial data analysis
The available flavours are selectable for self-launched instances via the Launcher app.
End-to-end usage scenario
Thus far we have described the GVL in terms of its components. In this section we describe an end-to-end usage scenario, illustrating how the GVL can support a user from training through to full analyses.
In our experience, a relatively complex but commonly requested analysis is RNA-seq based differential gene expression analysis (DGE). This use-case consists of a number of processing steps, and has a variety of tool options, published guides for best practice, and visualisation requirements that again can be met with myriad options. Aspects of this use-case are well established and can be implemented as runnable workflows; other aspects require researcher or analyst input for interpretation. In this scenario, we envision a biologist, or a bioinformatician new to RNA-seq analysis, who wishes to learn how to conduct such analyses well and to apply them to their own data.
Differential gene expression analysis aims to discover genes whose expression level varies under the experimental conditions of interest. RNA-seq has been shown to allow high technical accuracy for such analyses relative to older microarray-based methods. Due to the constraints imposed by the large number of genes in most organisms, and the relatively small number of samples that can feasibly be included in most studies, increasingly sophisticated statistical methods have been developed to take advantage of observed statistical properties of gene expression [41,42]. These methods may be available to researchers as command-line tools or as libraries for programmatic analysis, particularly in R. Both popular command-line tools and R libraries have been made available through Galaxy. These tools are also available via the GVL command-line and GVL RStudio, with the latter allowing maximum flexibility in developing more complex analyses.
In this scenario, we restrict ourselves to differential analysis of gene expression, and do not discuss the many other types of analysis that may be carried out with RNA-seq data. A typical RNA-seq DGE analysis consists of the following steps (a command-line sketch of the alignment and counting steps follows the list):
- Begin with RNA-seq data from a high-throughput sequencing experiment, usually in the form of FASTQ files. Currently, a typical amount of data for this analysis is on the order of 20 million reads per sample, where current read lengths are likely to be 100-150bp, giving 2-3GB of raw data per sample. Usually, in order to perform a statistically robust analysis, multiple samples from each experimental condition are required, giving data on the order of ~10-15GB.
- Align sequence reads to reference genome. This step can be carried out using well-established tools, but is compute and I/O intensive.
- Count aligned reads against gene model to produce table of gene feature counts per sample.
- Statistically test for differential expression of gene features between groups of samples. This step may be carried out using relatively simple methods or more advanced statistical approaches. More advanced approaches allow researchers to handle complex experimental designs.
- In many cases, the project will involve further analysis, such as pathway or gene set enrichment analysis, to help interpret the significance of the differentially expressed genes.
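The following sketch illustrates steps 2 and 3 above, driven from the command line via Python on a GVL instance. HISAT2, samtools and featureCounts are used only as representative tools (the GVL protocols deliberately leave the exact tool choice open), and all file paths and index names are placeholders.

```python
# Sketch of steps 2-3 above (alignment, then read counting) driven from
# Python. HISAT2, samtools and featureCounts stand in as representative
# tools; the exact tool choice is left open by the GVL protocols. All paths
# are placeholders.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Step 2: align paired-end reads to a pre-built reference index.
run(["hisat2", "-x", "ref/genome_index",
     "-1", "sample_R1.fastq.gz", "-2", "sample_R2.fastq.gz",
     "-S", "sample.sam"])

# Sort the alignment into BAM for downstream tools.
run(["samtools", "sort", "-o", "sample.bam", "sample.sam"])

# Step 3: count aligned read pairs per gene feature using a GTF annotation.
run(["featureCounts", "-p", "-a", "ref/genes.gtf",
     "-o", "counts.txt", "sample.bam"])
```

The same steps can equally be executed through Galaxy tools on the instance; the command-line form is shown here because it maps directly onto the numbered steps and scales naturally to scripted batch runs across many samples.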
In this scenario, the GVL allows a novice to go through the following processes:
- The researcher learns the concepts of RNA-seq differential analysis through the GVL Introductory RNA-seq tutorial (https://genome.edu.au/wiki/Learn), which guides them step-by-step through the workflow and concepts. This tutorial takes researchers from the original read sequences, through read alignment, read-counting, and basic statistical analysis. Researchers can make use of the managed Galaxy-tut service (https://galaxy-tut.edu.au/) to work through this tutorial. Alternatively, researchers may launch their own personal instance to use for these tutorials (see Fig 1). For this step researchers may also wish to take advantage of introductory RNA-seq workshops run by the institutions supporting the GVL project, or the Galaxy Training Network (https://wiki.galaxyproject.org/Teach/GTN). These are usually one-day, hands-on workshops that include an introduction to launching an instance on the cloud, an introduction to using the Galaxy platform, and the Introductory RNA-seq tutorial.
- The researcher applies these analysis techniques to their own data. This analysis is enough to give preliminary results on real data, and concrete understanding of the method. For this step the researcher may use either the same personal instance as in Step (1), or a larger managed service. If the project is particularly large and merits its own compute allocation, researchers will be able to obtain Research Cloud quota from the NeCTAR Research Cloud and launch larger cloud instances.
- The researcher learns more advanced DGE techniques and concepts through the GVL Advanced RNA-seq tutorial (https://genome.edu.au/wiki/Learn). This tutorial applies alternative and more advanced statistical analysis packages. These approaches can still be accessed through the Galaxy interface, for instance via a Galaxy wrapper around a standard edgeR-based analysis. In most cases, at this point the researcher is in a position to obtain publication-quality results on their data.
- Researchers may optionally move to RStudio or IPython Notebook on their GVL instance to produce more flexible visualisations of their results, or as a means to access downstream analysis tools appropriate to the project.
In some cases the experimental design may be particularly complex and require advanced understanding of the statistical issues involved. In such cases, there is no real substitute for statistical expertise, and the researcher or a collaborator on the project should have this. In this case our researcher can move to their GVL instance's RStudio platform in Step (3), which gives them a more advanced and flexible set of statistical tools. It is possible to carry out computationally-intensive steps (alignment of reads) in Galaxy or via the command-line, and then access the resulting gene counts in RStudio. Most popular Bioconductor libraries for DGE analysis are pre-installed into R on GVL instances.
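Equally, the resulting count table can be inspected in the instance's IPython Notebook. The sketch below, which assumes a featureCounts-style output with its default column layout (as in the pipeline sketch above), loads the counts with pandas and computes counts-per-million as a quick sanity check; formal differential testing would still be carried out with the Bioconductor packages mentioned above.

```python
# Small sketch: loading a featureCounts-style output table in the instance's
# IPython Notebook and computing counts-per-million as a quick sanity check.
# Column layout assumes featureCounts defaults; formal differential testing
# would still be done with edgeR/limma in R as described in the text.
import pandas as pd

counts = pd.read_csv("counts.txt", sep="\t", comment="#")

# Drop the annotation columns, keeping gene IDs as the index and one column
# of raw counts per sample.
expr = counts.set_index("Geneid").drop(
    columns=["Chr", "Start", "End", "Strand", "Length"]
)

cpm = expr / expr.sum() * 1e6   # simple library-size normalisation
print(cpm.head())
```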
The results output from any of these analyses can be stored in persistent cloud storage and/or downloaded for further use in other tools. The researcher can shut down their instance and use CloudMan functionality to re-initialise the same environment at a later date, or to share the complete workbench specification with other researchers.
The analysis steps themselves along with the data can be stored and published as a Galaxy History. In the case of a more advanced analysis, the steps can be stored and published as an R or R-markdown script, or an IPython Notebook document. All of these exported analyses have the potential to be imported into another GVL instance and re-run, providing excellent reproducibility. The full web and command-line access to the GVL platform means that the researcher and their collaborators are free to move onto any advanced methods appropriate to their project.
Conclusions and Future Work
Driven by a landscape of rapidly evolving data-generating technologies, the field of genomics has quickly grown into a demanding domain requiring a complex ecosystem of tools, technologies, computers, and data—all honed to support multi-step pipelines using dozens of tools and dealing with gigabytes of data. The reality is that processing research-sized genomic data requires a comprehensive data analysis workbench (Fig 3). This in turn imposes a high level of maintenance overhead and a requirement for technical expertise on the data analysis process, which is a barrier to entry for biology-focused researchers.
Initially, standalone and purpose-specific tools were most prevalent. As the complexity of analyses grew, new platforms formed that aggregate many standalone tools and support different types of computational infrastructures to offer more versatile functionality. The GVL represents another step in this evolution where it aggregates a large number of the best-of-breed software and technologies available today.
The GVL has been established to reduce this barrier by improving understanding and accessibility of bioinformatics tools, backed by accessible, scalable and robust computational infrastructure. The GVL provides a set of well-rounded services that are accepted throughout the community and supports activities ranging from teaching and training to end-to-end research analysis, making it applicable as a bioinformatics workbench and a computational platform not only in technical terms but also in terms of the community that it supports. Services unified by the project supply a much-needed locus of tools and technologies allowing researchers to more easily and readily explore the data analysis space. GVL’s services are simple to access via the project’s website (genome.edu.au).
Ultimately, the GVL represents a blueprint for deploying virtual laboratories, even for domains other than genomics: it defines the components required to establish a virtual laboratory, technologies to embody these components, and use-cases to deliver a purposeful product. It supports the notion of Science-as-a-Service and can be used as a validated, exemplar method in the future. The GVL's design pattern and build system are currently being exploited to deploy the GVL onto other cloud stacks, and to develop customised "flavours" of the GVL for specific research sub-domains.
Methods
Building the Genomics Virtual Laboratory
In this section we describe the broad technical details of how the GVL is built.
The GVL is implemented by composing a carefully selected set of state-of-the-art software that has seen wide adoption and demonstrated utility in the space of bioinformatics data analysis. Much of the work in building the GVL workbench has been a significant technical effort in developing architectural approaches to aggregate and automate the process for generating the functional services of the workbench, and in developing and extending sophisticated management software that allows users to first deploy and then scale and manage the resources and services underpinning the workbench. The developed components were created to be reusable, the outcome being that the GVL workbench can be replicated on any compatible cloud. Parts of the workbench stack are also sufficiently generic to be repurposable in contexts other than a genomics workbench (e.g., CloudMan as a generic cluster-in-the-cloud).
Architecturally, the GVL is composed of three broad layers: the Cloud as the base layer that offers resource capacity; the middle layer that provides resource management, structure and control over the cloud resources; and the application layer that contains the tools, utilities and data in an accessible form (Fig 4).
The GVL leverages cloud resources and is compatible with multiple cloud technologies. Through a set of cloud resource management tools, the details of cloud resources are hidden, enabling non-cloud-aware applications to readily execute in this environment.
In the context of the GVL implemented in Australia, the GVL relies on the NeCTAR Research Cloud as the base layer. The GVL image is also available for launch on the AWS Cloud, in the Sydney region. Cloud resources provide a uniform and readily available compute infrastructure. Technologies underpinning the GVL were designed to be cloud-agnostic and rely on features common to a set of Infrastructure-as-a-Service clouds (also see below). Combined with the automated build process, this approach makes it feasible to deploy the GVL on a range of clouds available around the world.
The middle layer is primarily handled by CloudMan, which, as part of the GVL, needed to be extended to support clouds other than AWS. As described throughout the text, CloudMan creates a virtual cluster-in-the-cloud. The created cluster can behave as a traditional cluster environment, permitting applications designed for cluster environments to be readily run. Hence, no modification to the top level applications is necessary. The GVL management layer also offers the launch service. This is a web application used to launch GVL instances. It initiates the provisioning of the required cloud resources on any cloud supported by CloudMan and it monitors the launch process. It was implemented in Django and the deployment process has been automated as part of the GVL build process.
Finally, the application layer is composed of all the bioinformatics software making up the GVL workbench; minimal changes were required by the GVL to the software in this layer. Along with these minor changes, some "glue" components have been added to the application layer to unify the user experience, in particular the GVL Dashboard (Fig 2 above). The GVL Dashboard is a portal that runs on GVL instances and provides an overview of the state of all application-level services running on the given instance. All of the software development performed as part of the GVL has been released into the open-source domain (https://bitbucket.org/gvl), with many of the contributions having been incorporated into the respective parent projects.
The management layer is perhaps the aspect of the GVL with the most technical complexity, and is described in more detail now: it comprises a number of system-level components, including the virtual machine image, the tools file system and the indices file system. Fig 5 captures the architecture encapsulated by these components. These components are built using a set of Ansible roles that automate the build process (https://bitbucket.org/gvl/gvl-image-playbook). The architecture tying together all these individual components is what enables scalable compute at the back end and fast access to large reference datasets, making the platform of practical use for performing research-scale data analysis. This architecture is primarily implemented through the CloudMan application.
Each GVL instance is, at runtime, composed of a number of components that the GVL provides: a virtual machine image, a volume snapshot or an archive of the tools file system, and a snapshot or a hosted instance of the indices file system. Combined at runtime by CloudMan into a virtual cluster, the components enable a flexible and feature-full bioinformatics workbench.
- The machine image represents a blueprint for the required operating system and the system packages and libraries. The machine image also facilitates the GVL launch process (initiated by the launch service) by allowing instance bootstrapping. Once a machine image is launched, it is considered a live instance and the bootstrapping scripts contained within initiate required runtime configuration that leads to an automatically configured cluster and a data analysis platform. As well as the machine image itself, the scripts used to build this image have been made available in the open-source domain.
- The tools file system contains the Galaxy application and the associated bioinformatics tools and configurations. This file system has been implemented in two alternative versions: (1) as a volume snapshot and (2) as a downloadable archive. For the volume snapshot, at instance launch time, the volume snapshot is converted (by an API request to the cloud middleware) into a volume that is then attached and mounted as a file system (an API-level sketch of this conversion follows this list). The created volume is under the ownership of the tenancy (i.e., user) that created it and is persistent across cluster invocations. This means that the entire cluster, with all installed applications, configurations and data, can safely be shut down during periods of inactivity and later started back up with all data and configuration available in the same state as before cluster termination. This model requires the tenancy to have an appropriate volume allocation and for the volume snapshot to be available in the same cloud availability zone that the cluster is launched in. Because not all users have a volume allocation, we have also made the file system available as a downloadable archive. At instance launch time, the archive is extracted onto the instance’s transient storage with the same content as the volume-based file system. Currently, this file system is only a few hundred megabytes in size, and with the colocation and replication of the data across a cloud, the time required to download and extract the archive is comparable to the time it takes to complete the request to create a volume from a snapshot. This model makes it possible to create an exact replica of the necessary file system in any cloud region using only transient instance storage. It is important to realize that, although this model allows the GVL to be used even by users who have no volume storage available, it is not persistent across invocations: once a cluster is terminated, the data is gone.
- The indices file system contains formatted reference genome index data used by a variety of bioinformatics tools (e.g., during mapping). The reference file system is updated with new reference genomes as requested by the users and is currently several hundred gigabytes in size. To facilitate reuse of this valuable resource, as part of the GVL, we have made the file system available in two formats: as a volume snapshot and as a high-performance shared file system. This shared file system is a read-only, GlusterFS-based instance of the file system that launched instances simply mount and use in read-only mode. Replicated over several zones of the NeCTAR research cloud and hosted over an average of two dedicated virtual machines, the service has shown remarkable availability and stability under load (during workshops, for example). The reference data is kept separate from the data uploaded by users or the tool installation and configuration data; such data is kept local to each instance, which ensures only the person that launched the instance has access to it. Users who wish to add their own reference data may add it in parallel to this data, or may make use of the volume snapshot option to copy the entire reference data file system into their own storage allocation.
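To make the snapshot-to-volume conversion described above more concrete, the sketch below shows the kind of cloud API requests involved, expressed against the AWS EC2 API via boto3 (the OpenStack equivalents used on the Research Cloud are analogous). All identifiers are placeholders; this illustrates the mechanism rather than the actual CloudMan code.

```python
# Illustration of the kind of cloud API calls made at launch time to turn the
# tools file system snapshot into an attached, mountable volume (shown with
# the AWS EC2 API via boto3; OpenStack/Cinder equivalents are analogous).
# All identifiers are placeholders, not real GVL resources.
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Materialise the snapshot as a volume in the instance's availability zone.
vol = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",   # placeholder tools-FS snapshot
    AvailabilityZone="ap-southeast-2a",    # must match the instance's zone
)

# Wait until the volume is ready, then attach it to the running instance.
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",      # placeholder GVL instance
    Device="/dev/sdf",                     # block device to format/mount from
)
```

After attachment, the operating system on the instance mounts the device as the tools file system, which is what gives the volume-based model its persistence across cluster invocations.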
Acknowledgments
The Genomics Virtual Laboratory has been developed by the Genomics Virtual Laboratory project team (https://genome.edu.au/wiki/About), funded in part by the Genomics Virtual Laboratory (GVL) grant from the National eResearch Collaboration Tools and Resources (NeCTAR).
Attributions: Amazon Web Services (AWS) and the “Powered by Amazon Web Services” logo are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries. Parts of the GVL are developed using Python in compliance with the Python project’s license.
Author Contributions
Conceived and designed the experiments: EA CS NG IM SG MP AL. Performed the experiments: EA CS NG IM DB MC SG YK MP RH AL. Analyzed the data: EA CS NG IM DB MC SG YK MP RH AL. Contributed reagents/materials/analysis tools: EA CS NG IM DB SG YK MP. Wrote the paper: EA CS NG AL.
References
- 1. Schatz MC, Langmead B. The DNA data deluge. IEEE Spectrum. 2013;50: 28–33.
- 2. Berger B, Peng J, Singh M. Computational solutions for omics data. Nature reviews Genetics. 2013;14: 333–46. pmid:23594911
- 3. Kent WJ, Sugnet CW, Furey TS, Roskin KM, Pringle TH, Zahler AM, et al. The Human Genome Browser at UCSC. Genome Research. 2002. pp. 996–1006. pmid:12045153
- 4. Stein LD, Mungall C, Shu S, Caudy M, Mangone M, Day A, et al. The generic genome browser: A building block for a model organism system database. Genome Research. 2002;12: 1599–1610. pmid:12368253
- 5. Nicol JW, Helt GA, Blanchard SG, Raja A, Loraine AE. The Integrated Genome Browser: Free software for distribution and exploration of genome-scale datasets. Bioinformatics. 2009;25: 2730–2731. pmid:19654113
- 6. Goecks J, Nekrutenko A, Taylor J. Galaxy: a comprehensive approach for supporting accessible, reproducible, and transparent computational research in the life sciences. Genome biology. 2010;11: R86. pmid:20738864
- 7. Hunter AA, Macgregor AB, Szabo TO, Wellington CA, Bellgard MI. Yabi: An online research environment for grid, high performance and cloud computing. Source Code for Biology and Medicine. 2012. p. 1. pmid:22333270
- 8. Kallio MA, Tuimala JT, Hupponen T, Klemelä P, Gentile M, Scheinin I, et al. Chipster: user-friendly analysis software for microarray and other high-throughput data. BMC Genomics. 2011. p. 507. pmid:21999641
- 9. Néron B, Ménager H, Maufrais C, Joly N, Maupetit J, Letort S, et al. Mobyle: A new full web bioinformatics framework. Bioinformatics. 2009;25: 3005–3011. pmid:19689959
- 10. Reich M, Liefeld T, Gould J, Lerner J, Tamayo P, Mesirov JP. GenePattern 2.0. Nature genetics. 2006. pp. 500–501. pmid:16642009
- 11. Ioannidis JP, Allison DB, Ball CA, Coulibaly I, Cui X, Culhane AC, et al. Repeatability of published microarray gene expression analyses. Nature genetics. 2009;41: 149–155. pmid:19174838
- 12. Dudley JT, Butte AJ. In silico research in the era of cloud computing. Nature biotechnology. 2010;28: 1181–5. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3755123&tool=pmcentrez&rendertype=abstract pmid:21057489
- 13. Krampis K, Booth T, Chapman B, Tiwari B, Bicak M, Field D, et al. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community. BMC Bioinformatics. 2012. p. 42. pmid:22429538
- 14. Anderson NR, Lee ES, Brockenbrough JS, Minie ME, Fuller S, Brinkley J, et al. Issues in biomedical research data management and analysis: needs and barriers. Journal of the American Medical Informatics Association: JAMIA. The Oxford University Press; 2007;14: 478–88. Available: http://jamia.oxfordjournals.org/content/14/4/478.abstract
- 15. Lathan CE, Tracey MR, Sebrechts MM, Clawson DM, Higgins GA. Using virtual environments as training simulators: Measuring transfer.pdf. Handbook of virtual environments: Design, implementation, and applications. 2002. pp. 403–414.
- 16. Stein LD. The case for cloud computing in genome informatics. Genome biology. 2010;11: 207. pmid:20441614
- 17. Myneni S, Patel VL. Organization of Biomedical Data for Collaborative Scientific Research: A Research Information Management System. International journal of information management. 2010;30: 256–264. Available: http://www.sciencedirect.com/science/article/pii/S0268401209001182 pmid:20543892
- 18. Peng RD. Reproducible research in computational science. Science (New York, NY). 2011;334: 1226–7.
- 19. Mesirov JP. Computer science. Accessible reproducible research. Science (New York, NY). 2010;327: 415–6.
- 20. Sandve GK, Nekrutenko A, Taylor J, Hovig E. Ten simple rules for reproducible computational research. PLoS computational biology. 2013;9: e1003285. pmid:24204232
- 21. Sebastian Wernicke KB. The IGOR Cloud Platform: Collaborative, Scalable, and Peer-Reviewed NGS Data Analysis [Internet]. Journal of Biomolecular Techniques: JBT. The Association of Biomolecular Resource Facilities; 2013. p. S34. Available: /pmc/articles/PMC3635388/?report=abstract
- 22. Liu B, Madduri RK, Sotomayor B, Chard K, Lacinski L, Dave UJ, et al. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses. Journal of Biomedical Informatics. 2014;49: 119–133. pmid:24462600
- 23. Nekrutenko A, Taylor J. Next-generation sequencing data interpretation: enhancing reproducibility and accessibility. Nature Reviews Genetics. 2012. pp. 667–672. pmid:22898652
- 24. Afgan E, Coraor N, Chilton J, Baker D, Taylor J. Enabling Cloud Bursting for Life Sciences within Galaxy. Concurrency and Computation: Practice and Experience. 2015; in press: 16.
- 25. Kuo AM-H. Opportunities and challenges of cloud computing to improve health care services. Journal of medical Internet research. 2011;13: e67. Available: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3222190&tool=pmcentrez&rendertype=abstract pmid:21937354
- 26. Schatz MC, Langmead B, Salzberg SL. Cloud computing and the DNA data race. Nature biotechnology. 2010;28: 691–693. pmid:20622843
- 27. Dudley JT, Pouliot Y, Chen R, Morgan AA, Butte AJ. Translational bioinformatics in the cloud: an affordable alternative. Genome medicine. 2010;2: 51. Available: http://genomemedicine.com/content/2/8/51 pmid:20691073
- 28. Schadt EE, Linderman MD, Sorenson J, Lee L, Nolan GP. Computational solutions to large-scale data management and analysis. Nature reviews Genetics. 2010;11: 647–657. pmid:20717155
- 29. Afgan E, Baker D, Coraor N, Goto H, Paul IM, Makova KD, et al. Harnessing cloud computing with Galaxy Cloud. Nature Biotechnology. Nature Publishing Group, a division of Macmillan Publishers Limited. All Rights Reserved.; 2011;29: 972–974.
- 30. Madduri R, Chard K, Chard R, Lacinski L, Rodriguez A, Sulakhe D, et al. The Globus Galaxies platform: delivering science gateways as a service. Concurrency and Computation: Practice and Experience. 2015; n/a–n/a.
- 31. Schönherr S, Forer L, Weißensteiner H, Kronenberg F, Specht G, Kloss-Brandstätter A. Cloudgene: a graphical execution platform for MapReduce programs on private and public clouds. BMC bioinformatics. 2012;13: 200. pmid:22888776
- 32. Jourdren L, Bernard M, Dillies M-A, Le Crom S. Eoulsan: a cloud computing-based framework facilitating high throughput sequencing analyses. Bioinformatics (Oxford, England). 2012;28: 1542–3.
- 33. Calabrese B, Cannataro M. Bioinformatics and Microarray Data Analysis on the Cloud. Methods in molecular biology (Clifton, NJ). 2015;
- 34. Pérez F, Granger BE. IPython: A system for interactive scientific computing. Computing in Science and Engineering. 2007;9: 21–29.
- 35. Afgan E, Baker D, Coraor N, Chapman B, Nekrutenko A, Taylor J. Galaxy CloudMan: delivering cloud compute clusters. BMC bioinformatics. 2010;11 Suppl 1: S4.
- 36. Blankenberg D, Von Kuster G, Bouvier E, Baker D, Afgan E, Stoler N, et al. Dissemination of scientific software with Galaxy ToolShed. Genome biology. 2014;15: 403. pmid:25001293
- 37. Blankenberg D, Johnson JE, Taylor J, Nekrutenko A. Wrangling Galaxy’s reference data. Bioinformatics (Oxford, England). 2014;30: 1917–9.
- 38. Trapnell C, Roberts A, Goff L, Pertea G, Kim D, Kelley DR, et al. Differential gene and transcript expression analysis of RNA-seq experiments with TopHat and Cufflinks. Nature protocols. 2012;7: 562–78. pmid:22383036
- 39. Jette MA, Yoo AB, Grondona M. SLURM: Simple Linux Utility for Resource Management. Available: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.6834
- 40. Zhao S, Fung-Leung W-P, Bittner A, Ngo K, Liu X. Comparison of RNA-Seq and microarray in transcriptome profiling of activated T cells. PloS one. 2014;9: e78644. pmid:24454679
- 41. Rapaport F, Khanin R, Liang Y, Pirun M, Krek A, Zumbo P, et al. Comprehensive evaluation of differential gene expression analysis methods for RNA-seq data. Genome biology. 2013;14: R95. Available: http://genomebiology.com/2013/14/9/R95 pmid:24020486
- 42. Soneson C, Delorenzi M. A comparison of methods for differential expression analysis of RNA-seq data. BMC bioinformatics. 2013;14: 91. Available: http://www.biomedcentral.com/1471-2105/14/91 pmid:23497356
- 43. Gentleman RC, Carey VJ, Bates DM, Bolstad B, Dettling M, Dudoit S, et al. Bioconductor: open software development for computational biology and bioinformatics. Genome biology. 2004;5: R80. pmid:15461798
- 44. Robinson MD, McCarthy DJ, Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics (Oxford, England). 2010;26: 139–40.
- 45. Afgan E, Chapman B, Taylor J. CloudMan as a platform for tool, data, and analysis distribution. BMC bioinformatics. 2012;13: 315. pmid:23181507
- 46. Baumer B, Cetinkaya-Rundel M, Bray A, Loi L, Horton NJ. R Markdown: Integrating A Reproducible Analysis Tool into Introductory Statistics. Technology Innovations in Statistics Education. 2014;8. Available: https://escholarship.org/uc/item/90b2f5xh
- 47. Foster I. Service-oriented science. Science (New York, NY). 2005;308: 814–7. Available: http://www.sciencemag.org/content/308/5723/814.abstract
- 48. Afgan E, Baker D, Nekrutenko A, Taylor J. A reference model for deploying applications in virtualized environments. Concurrency and Computation: Practice and Experience. 2012;24: 1349–1361.
- 49. Afgan E, Krampis K, Goonasekera N, Skala K, Taylor J. Building and provisioning computational cluster environments on academic and commercial clouds. International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO). Opatija, Croatia: IEEE; p. 6. | https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0140829
Riteish Deshmukh made his Bollywood debut with Tujhe Meri Kasam in 2003, along with his now-wife, Genelia D'Souza Deshmukh. The actor has played many characters but is famously known for his comic roles. His comic timing is loved by the audience, as can be seen from their praise. Riteish has been in several films, but some of his characters are still fresh in people’s minds. Here is a list of a few versatile roles he has played.
The adult comedy film stars Riteish Deshmukh, Vivek Oberoi and Aftab Shivdasani as three friends. The movie revolves around the lives of the three friends and how they get caught up in a trap. Riteish plays a helpless husband who is ruled over by a dominating wife, played by Genelia D'Souza Deshmukh. The movie also stars Ajay Devgn, Lara Dutta, Amrita Rao and Archana Puran Singh. The film's popularity is evident from the fact that two more parts were made after the first one became a hit.
The 2006 release was a multi-starrer comedy film. It includes Riteish Deshmukh, Shreyas Talpade, Jackie Shroff, Anupam Kher, Sunil Shetty, Chunky Pandey, Riya Sen, Celina Jaitly and Koena Mitra. Even with this star cast, the movie did not perform well at the box office. However, Riteish’s role as a con man was hilarious and appreciated by many. He was seen in many different avatars within the film itself.
The movie is considered to be one of the best comedy movies by the audience and many critics as well. Riteish played the role of Deshbandhu Roy, a local wannabe detective, in a funny manner. Sanjay Dutt, Arshad Warsi, Javed Jaffrey and Aashish Chaudhary also starred in the movie. It was a success at the box office and was turned into a franchise.
One of the most famous comedy movies is Housefull. It had a big star cast of Akshay Kumar, Riteish Deshmukh, Deepika Padukone, Lara Dutta, Boman Irani, Arjun Rampal, and Chunky Pandey. Still, Riteish's role as an innocent lover stuck in a confusing situation stood out from the lot. The movie reportedly collected ₹117 crore worldwide, and the franchise's fame can be seen in the fourth installment slated for release.
Released in 2014, the movie marked Riteish's debut as a villain. Sidharth Malhotra and Shraddha Kapoor played the lead pair. However, Riteish, as the psychopathic killer Rakesh Mahadkar, received a lot of appreciation for his role. The movie was a hit at the box office and its songs were chartbusters.
The Maharashtrian actor made his debut in the Marathi film industry with Lai Bhaari in 2014. The action drama revolves around a family property dispute and how things get dirty afterward. Riteish's performance as Mauli was outstanding according to both critics and audiences, and the film had catchy dialogues. As per reports, it collected ₹40 crore and became the highest-grossing Marathi film at that time.
Did you know that a burglary happens every 23 seconds in the United States? That’s a very shocking revelation indeed. And it’s all the more reason to protect your home from invasions. You need to take the extra effort to prevent any danger to your property and your family. Here are some tips on how to protect your home from intruders.
Exteriors
Every part of your property, from the exterior to the interior, must be protected. The exterior of your house, like your lawn and gates, serves as your first line of defense against invaders. If you keep these areas protected, there's a significantly lower chance of your home being in any danger. Here are some protective measures to take outside:
- Install a fence — although an aluminum panel fence is not a foolproof barrier, it can make a great difference. A fenced-in lawn makes it much more difficult for invaders to waltz in compared to a wide-open one.
- Keep your lawn well-lit — it's harder to break into a property in broad daylight. Installing some lights outside your home serves as a significant deterrent against burglars and intruders. It makes them think twice, because the light increases the chances of them getting caught.
- Install security cameras — having security cameras around your house will help you monitor any unusual activity. You’ll always know what’s taking place in your property, even if you’re not around. And if you make the security cameras very visible, that’s a plus. If they know there are cameras, they will opt not to try anything funny.
- Always lock windows and doors — windows and doors are the typical entry points for intruders. Despite this, many people neglect to keep them locked. Make it a habit to lock your windows and doors even when you're inside the house.
- Don’t keep a spare key under the rug — it’s a common practice to leave a spare key outside your home. But this makes it easy for anyone to enter without your permission. Be smart about where you leave your spare key if you need to. But better yet, opt to always have your keys in your person.
Interiors
In the rare case that an intruder gets past the security measures outside, it still shouldn't be easy for them to waltz through your front door and into your house. Even if you've taken protective measures for the exterior of your home, you still have to think about the interior. Here are some ideas:
- Install a security alarm — security alarms will sound off when they detect suspicious activity. This will alert you when an intruder is trying to get into your home. Not only that, but the neighbors will also hear it and cause a fuss, which can effectively scare the intruder away.
- Consider getting a dog — this might sound pretty funny, but it's an effective way to keep intruders away. If someone successfully breaks into your house, moving around will become harder for them with a dog on the premises.
- Keep valuables out of sight — the more visible your luxury cars, expensive furniture, and other belongings are, the more prone your home is to invasion. Do your best to keep a low profile so you don't attract any burglars.
- Have an escape plan — you and your family should always prepare for emergencies. Have an escape plan to adhere to in the event of a home invasion. Your priority is to keep everyone safe.
With these tips, you're improving the safety of your home. Always prepare for the worst and keep safety at the top of your priorities at all times.
Enhancing Children’s Lives: A Q&A with Dr. Sebastian Sattler
June 07, 2016 • By Sebastian Sattler
Should medications to enhance memory, improve learning, or heighten concentration be used to accelerate the cognitive functioning of otherwise healthy children? As policy discussions about so-called “smart drugs” percolate, the question of how parents view cognitive enhancement medications—and whether they would give them to their children—remains unexplored. In his Enhancing Life Project research, Sebastian Sattler, Research Assistant at the Institute of Sociology and Social Psychology at the University of Cologne in Germany and an Associate member at the Neuroethics Research Unit at the Institut de Recherches Cliniques de Montréal in Canada, is investigating parents' roles, attitudes, and motives about the moral acceptability of pediatric cognitive enhancement (CE). He will also explore these parents' demographic and social contexts and their current decisions about enhancement for their children, with the goal of enriching the current debate with new empirical data.
What was the spark for the research you’re pursuing for the Enhancing Life Project? Are these new questions, or an extension of past research, or both?
I started researching enhancement more than eight years ago when my brother, who studied biochemistry, told me about experiments with substances that can dramatically inhibit forgetfulness. It sounds quite fascinating: you take a substance and never forget anything again? But this comes with a severe downside. When this substance was tested on animals, they died very quickly. This caught my attention, and I started looking into substances that were less strong—substances that help you concentrate and stay awake for longer. They are called cognitive enhancement medications.
In my earlier research, I focused on attitudes and behaviors among students, university teachers, and the general public regarding such medications. Later and especially now with the help of my Enhancing Life scholarship, I have also started thinking about children and parental decision-making because I often read about high numbers of ADHD prescriptions for children and wondered whether some of those children are just subject to their parents’ wish to make them better without any existing mental illnesses.
Diagnoses are quite subjective, and this creates a danger that parents can be very influential in the process of diagnosis when it comes to something like ADHD. Given that all these medications can lead to mild to severe side effects—headache, insomnia, addiction, depression, or slowing of growth—medicating healthy children with such drugs is a risky decision, and parents might accept these side effects with the aim of giving children an edge over others, while the children themselves have no say in the matter. That's why I think it's interesting and important to shed light on parents' perspectives and rationales.
In your past research, what did you discover about cognitive enhancement technologies?
It’s a hard question to answer because enhancing life can mean different things to different people. In a broad sense, it’s about giving humans the capability to achieve their goals. When adding a normative flavor to the definition, enhancement should ideally not harm other humans or species or the environment. But of course enhancement for one person might mean diminishment for someone else—with my enhancement I can harm someone else or the life of another species or the environment. Given the problems we deal with on earth, like social inequality, there is a strong need for enhancement.
Besides the problem that drugs conferring superpowers do not exist, it is also difficult to define what normal means for one person and across persons. Moreover, perceptions of what is normal can change over time. Thus, it is not only difficult to define enhancing life in general, it's especially difficult to define it in the context of cognitive enhancement.
What do you want to investigate in your project?
I’m looking at parents’ strategies to enhance the cognitive performance of their kids, mainly with pharmaceuticals. With these strategies parents intend to achieve specific goals for their children, such as better grades and perhaps thereby also goals for themselves, such as higher social approval.
In what way will your project contribute to the enhancement of life?
With my project, I hope to be able to contribute to a better understanding of what parents are willing and not willing to do with their kids in terms of pharmaceutical enhancement, what they find acceptable and what they find terrifying. With this knowledge, we might then try to reduce risky behaviors for children, which would hopefully enhance their lives. So enhancement strategies can have the desired effect of enhancing somebody, but not using them can also constitute enhancement.
You don’t spend all of your time doing research and teaching! What’s your favorite place to travel, or the next place you’d like to go?
I really like to be outdoors but I also like to meet people from different places on earth, so I try to combine these interests. One way is to use the couchsurfing-network, a group of people around the world, who offer their couch for travelers, with unimaginable hospitality. Thereby, I can meet local people in intriguing places. Last year, I was in the very privileged position to be able to travel to different places and visit friends or meet colleagues. This year, I would like to discover some local treasures and maybe go kayaking in northeastern Germany.
Why doesn’t Asia have European-style regional integration? The goal of this paper is to provide a network theoretic explanation of the different paths of Asia and Europe during the postwar period. First, we propose a punctuatedequilibrium model of network diffusion that emphasizes the uncertain nature of diffusion dynamics. Then, we argue that successful regional integration hinges crucially on factors that assure critical mass of countries to embark on a venture of regional integration without fear of exploitation or noncompliance by regional powers in the future. In that regard, we compare the Sino-Japanese relationship in Asia with the Franco-German relationship in Europe, which we call the inter-core relationship, in shaping different paths of regional network diffusion within the two regions. While France and Germany have jointly played pivotal roles in shaping the path to European integration, China and Japan have acted like “two tigers” in the same mountain and missing important opportunities in the 1970s to transform their bilateral relationship. Utilizing the World Treaty Index data set and the community detection method, we found that distinct inter-core relationships in Europe and Asia indeed led to different patterns of evolution in the community structure of bilateral economic networks between Asian and European countries during the postwar period.
Previous studies of regional integration4 can be divided into roughly four different schools of thought: neofunctionalism, constructivism, realism, and liberal intergovernmentalism. Neofunctionalism emerged as an explanation of the evolution of the European Coal and Steel Community (ECSC) and the founding of the European Economic Community (EEC). Haas argued that, once started, early decisions of economic integration create unintended or unwanted consequences that constrain subsequent choices (Haas 1958). Haas further distinguished this mechanism, called "spillover," into three types: functional spillovers (interdependent sectors eventually integrate under the pressure of experts and political elites), political spillovers (cooperation among supranational officials in one sector empowers them to work as informal political entrepreneurs in other areas), and cultivated spillovers (centralized institutions representing common interests work as midwives to integration). In the view of neofunctionalists, the driving force of economic integration is based on endogenous factors (i.e. spillovers), and the integration process was a smooth evolution from low-level integration to deeper and wider levels. State actors do not have privileged positions in neofunctionalism: supranational elites and organizations were seen as being as important as state actors. More recently, historical institutionalism has revisited the neofunctionalist argument, emphasizing the unintended consequences of early decisions of institutionalization on the integration process (Pierson 1996).
Constructivist explanations of integration emphasize the intersubjective nature of the integration process. Constructivists stress the role of ideational factors, such as rules, norms, languages, and identities, in building common understandings of the complex issues shaping the paths of institutional development. For example, Checkel argues that convergence of identities and preferences through social learning and social mobilization, or 'Europeanization,' played a major role in European integration (2001). Similarly, Jachtenfuchs, Diez and Jung argue that the integration process "depends not only on interests but also on normative ideas about a legitimate political order ('polity-ideas'). These polity-ideas are extremely stable over time and resistant to change because they are linked to the identity and basic normative orientations of the actors involved" (1998, 407). Constructivist arguments fill an important void in the literature by highlighting the integration process as an outcome of intensive "intersubjective" interactions. However, constructivist frameworks show weaknesses in successfully weaving their ideational explanations together with structural factors such as power politics, economic preferences, and geopolitical concerns and, as a result, constructivist explanations often "risk being empirically too thin and analytically too malleable" (Hemmer and Katzenstein 2002, 583).
Realists believe that regional integration is an outcome of power politics. For Rosato, for instance, economic integration in Europe was an attempt by France and West Germany "to balance against the Soviet Union and one another" (Rosato 2011, 2). Balancing and forming alliances are usual practices of power politics, and hence the path to European integration can be shifted, stopped, or accelerated whenever major European players change their strategies. Similarly, Grieco notes that the cause of integration hinges on the power gap between the major regional players and weaker states (1997). The huge power disparity between major Asian states and weaker states made it difficult to achieve European-style integration. In short, realists think that integration is nothing but a result of the interplay of power politics among major powers, and so place very little importance on the role of nonmaterial or endogenous factors. As Moravcsik puts it succinctly, realists explain the integration process "by omission" (Moravcsik 2013, 781).
Criticizing both neofunctionalism and realism, Moravcsik presented a theory of liberal intergovernmentalism as an alternative (Moravcsik 1998, 2005, 2013). Moravcsik claimed that neofunctionalism relied too much on endogenous factors such as spillover and, as a result, neofunctionalism fails to stand alone as a scientific theory that produces complete, falsifiable, and consistent hypotheses. In contrast, realism reduces the complex process of European integration to a single cause (power politics) and, as a result, fails to provide proper explanations for the important moments in the integration process that cannot be reduced to power politics.
Moravcsik takes a multi-causal approach that embraces the role of domestic preferences, power politics, and supranational institutions. First, like realists, he emphasizes the role of state actors in the process but stresses “state preferences” as a key concept that includes economic interests and geopolitical concerns. Then, differences in bargaining power among major powers determined the outcomes of substantive agreements. Once substantive agreements on integration are made, states created institutions to secure these outcomes under future uncertainty. In Moravcsik’s theory, intergovernmental bargaining played the role of connecting micro-level factors (state preferences) and macro-level factors (institutional choice). In this view, the process of European integration consists of multiple stop-and-goes, punctuated by important deals among major European powers.
We agree that both endogenous factors, such as spillover or diffusion, and exogenous factors, such as power politics, matter to the integration process. We also believe that the integration process has both continuities and changes, which represent important turning points (critical junctures) in the integration process, and these turning points were made possible by intergovernmental bargaining among major European powers. However, an important question is how to explain the dynamics of integration over a long period of time, with proper emphasis on important changes and the stable lock-in effects of endogenous factors. In the following, we propose our view that explains regional integration as a diffusion process with punctuated equilibria and present a theory of ICR as a meso-level explanation focusing on Asia and Europe.
1Recently, scholars of international relations increasingly view the complex interactions in international relations through the framework of “networks.” See Hoff and Ward (2004), Hafner-Burton et al. (2009), Merand et al. (2011), Carpenter (2011), Cranmer et al. (2011), and Maoz (2011). 2Accessible at http://worldtreatyindex.com/. For more information on the database, see Pearson (2001). 3Unfortunately, regular updates stopped in 2003. As our argument covers mainly the history of the 20th century, the lack of 21st century data does not hamper our analysis much here. However, it would be highly interesting to extend our analysis using post-2003 data given the growing importance of Chinese influence in Asia and the world in the early 21st century. 4Regional integration is commonly defined as “the concentration of economic flows or the coordination of foreign economic policies among a group of countries in geographic proximity to each other” (Mansfield and Milner 1998, 4). Haas (1958) provides a more nuanced definition of integration: the process “whereby political actors in several, distinct national settings are persuaded to shift their loyalties, expectations and political activities toward a new centre, whose institutions process or demand jurisdiction over the pre-existing national states” (Haas 1958, 16).
We consider regional integration as a punctuated diffusion process. Regional integration is “punctuated” with many stop-and-goes, critical moments, and stalemates. Regional integration is a “diffusion” process because regional integration involves the coordination of policies, the convergence of ideas, and the spread of institutions across nations.
The diffusion process in a society is most commonly explained as an S-shaped curve, as shown in Figure 1. However, the S-shaped curve-based explanation of diffusion has not been fully embraced by social scientists, who generally have not accepted that diffusion in human networks is an irreversible, deterministic process without competition.
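For reference, the S-shaped curve in Figure 1 is typically formalized as a logistic adoption function. The expression below is a generic textbook form offered only as an illustration; the paper itself does not commit to a particular functional form, and the parameters are not estimated from the treaty data.

```latex
% Generic logistic adoption curve (illustrative only; K, r, and t_0 are
% not parameters used in the paper): N(t) is the cumulative number of
% adopters at time t.
N(t) = \frac{K}{1 + e^{-r\,(t - t_0)}}
```

Here K is the eventual number of adopters, r the adoption rate, and t_0 the inflection point at which half of the eventual adopters have joined.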
Even a successful case of diffusion resembles a "stop-and-go" pattern rather than the smooth shape of the S-shaped curve. Therefore, it is appropriate to conceptualize the process of diffusion as a combination of several distinct regimes (or stases), not as the realization of a homogeneous dynamic process, which is why we conceptualize diffusion as punctuated.
Recent studies of diffusion in biology and sociology emphasize the punctuated nature of the diffusion processes (e.g. Loch and Huberman 1999; Boushey 2012). For example, Bejan and Lorente characterize the diffusion process as a combination of three distinct stages (or regimes): rise, invasion, and conquest (Bejan and Lorente 2011). In the initial stage, a new idea arises and a small group of countries adopt it. These countries can be called “innovators.” During the invasion stage, a critical mass of countries adopts the new idea initially embraced by innovators. Finally, the invasion is accelerated and the stage of conquest starts. Countries that have not yet adopted the idea experience negative externalities and incur increasing costs related to staying out of the diffusion process.5
Our punctuated-equilibrium model emphasizes the invasion stage as a critical moment of diffusion. This is where a critical mass of actors makes strategic decisions to join or not to join a proposed innovation. Regional integration starts from a small group of countries with similar economic preferences (Moravcsik 1998), similar security threats (Rosato 2011), or equal powers facing collective problems of economic interdependence (Grieco 1997). The idea of integration is proposed as a long-term solution to the collective problems they encounter and is adopted by some "innovators." In the case of European integration, these were the six founding members of the EEC (West Germany, France, Italy, the Netherlands, Belgium, and Luxembourg). Theories of integration that emphasize the role of exogenous factors, such as realism, rational institutionalism, and liberal intergovernmentalism, explain the 'rise' stage of European integration well. Furthermore, there cannot be much disagreement once a majority of countries adopts a certain innovation during the post-invasion period, as the constraining effect of early decisions (spillovers or institutional lock-ins) becomes strong enough to change the preferences of insiders and outsiders in the network. Thus, the most important and critical question in the study of diffusion is what causes the process to reach critical mass such that diffusion becomes almost self-sustaining. This will be discussed below.
What causes the moment of critical mass in the process of regional integration? It is difficult to provide a comprehensive answer to this question in this paper partly because we do not have a well-defined concept of “region” in international relations.6 Instead of attempting to provide a generalizable theory for all potential “regions” in the world, we provide a meso-level theory focusing on Asia and Europe where the social construction of region has been developed relative to other parts of the world.
The reason we delimit our focus to Asia and Europe is not just a practical one but also a theoretical one. There is a similarity in regional network structures in Asia and Europe in the early postwar period, which allows a theoretical leverage to our analysis. Both Asia and Europe had “multiple cores” and were exposed to the influence of superpowers (the US and the USSR during the Cold War) although the degree and nature of the exposure differed between Asia and Europe.7 For this reason, windows of opportunity for a change in the regional order came when the strategic interests of the superpowers changed. For example, the relationship between France and West Germany changed dramatically around the Berlin Crisis of 1961, as French President Charles de Gaulle supported German Chancellor Konrad Adenauer consistently, while the US showed a lukewarm attitude toward the Soviet Union’s threat to West Germany. China and Japan also had an important opportunity for a change in their bilateral relationship in the 1970s after Nixon’s visit to China.
Given this structural similarity, we note differences in the bilateral relationship between cores of each network in Asia and Europe, which we call the inter-core relationship (ICR), as the key explanatory variable to differences in the evolution of regional network between Asia and Europe. In this paper, we define “cores” as pivotal players in regional network with de facto veto power over proposals for regional integration. Using this definition, we consider China and Japan in Asia and France and Germany in Europe as the “cores” of each regional network.8 In network literature, cores are defined as densely connected central actors. However, this static definition is not useful to explain the dynamic process of integration. Dense connections and central actors are visible only when integration reaches a certain level. Our alternative definition of “cores,” pivotal regional players with de facto veto power, is applicable to early stages of integration or even pre-integration periods.
More specifically, we argue that the Sino-Japanese relationship in Asia and the Franco-German relationship in Europe have maintained a similar structural position in each regional network but played very different roles in shaping the paths of integration in Asia and Europe. France and Germany, once conceived as "improbable partners," have jointly played "pivotal roles in shaping Europe" during the postwar period (Calleo 1998, 1; Krotz and Schild 2014, 4). The joint leadership by these two pivotal players in Europe has been essential in overcoming obstacles to the economic integration of Europe. In contrast, China and Japan have acted like "two tigers" that compete to "occupy the same mountain" since the end of World War II and have failed to generate any serious momentum for regional integration (Yahuda 2001, 1).
China and Japan had an opportunity to transform their bilateral relationship in the 1970s. China had good reasons for its rapprochement with Japan. From a political standpoint, the PRC wanted formal Japanese recognition to bolster its legitimacy as the sole government of China, including Taiwan,9 while trying to break free from the USSR's encirclement.10 Another motive was economic. Given the miraculous economic rise of postwar Japan, China's internal goal of development provided strong incentives for China to normalize its relationship with Japan. From Japan's perspective, however, motivations for rapprochement were more political than economic. Domestically, supra-partisan pressure, e.g. the Japanese Diet's Men's League for Japan-China Friendship and Federation Political, and various interest groups, such as the Federation of Economic Organizations, had strengthened support for pro-Peking foreign policies.11 External shocks such as Nixon's visit to Beijing in 1972 also provided the drive for Japan to reexamine its foreign policy alignments and its relationship with China. However, the Sino-Japanese Peace and Friendship Treaty of 1978 "did not directly involve any major legal obligations or commitments on either side" (Lee 1979, 420) and failed to produce enduring legacies such as the regularized summit meetings and ministerial councils that followed Europe's Élysée Treaty.
Then, what are the specific functions of a cooperative and robust ICR in the integration process? As exemplified by France and Germany in the process of European integration, a cooperative and robust ICR can provide several public goods that can contribute to reaching a critical mass for regional integration.
First, a cooperative and robust ICR can provide the function of co-leadership. Cores have de facto veto power and hence can block any change they do not want to see happen. However, in the presence of multiple cores, individual cores cannot change the status quo without the consent of other cores. The presence of multiple cores under shared leadership can serve as a check against unilateral actions or defection by a powerful state. In this way, shared leadership can legitimize the proposal for change in the absence of formal organizations. This legitimizing effect is particularly important in the early stage of integration in which institutionalization has not fully matured. In the early stage of integration, smaller states may fear that their minor positions could render them helpless in the face of unilateral actions by powerful states in the latter stages of integration. Thus, shared leadership makes it easy to induce the compliance of peripheral countries in the early stages of integration.
Second, a cooperative and robust ICR can perform the function of quasi-representation. Multiple cores can represent the heterogeneous interests of peripheral countries better than a single core. In the case of European integration, Krotz and Schild (2014) explain that the different views of France and Germany on the directions of European integration actually helped the integration process because these heterogeneous views between the cores of Europe served to better represent the views of peripheral countries.
Third, the combined bargaining powers of a cooperative and robust ICR can be highly instrumental in rewarding norm compliers and punishing norm violators during the integration process. As Moravcsik argued, the bargaining powers of key European actors have been important in sealing deals on such sensitive issues as the Common Agricultural Policy (Moravcsik 1998). Issue linkages and offers of side payments were more effective and credible when the key European powers of France and West Germany teamed up as a bloc instead of competing against each other.
Lastly, a cooperative and robust ICR produces a stronger commitment to multilateral institutions. In the early stage of integration, weak states hesitate to join due to uncertainty over distributional gains, power asymmetry, and loss of autonomy. Regional multilateral institutions such as the ECSC, the European Commission, and the European Court of Justice are established to lessen these concerns from uncertainty about the integration process and to address complex problems of integration efficiently. Despite the establishment of regional multilateral institutions, member states and the European Council still remained the most powerful actors involved in integration, and powerful states can attempt to change regional multilateral institutions for their gains. The presence of a cooperative and robust ICR decreases the probability of opportunistic behaviors by strong states because of the existence of another strong power that can punish defections.
Our theory of ICR is well connected to liberal intergovernmentalism in that both emphasize the critical role of major European states in creating multiple momenta for integration. However, what makes the role of intergovernmental bargaining so special in the integration process, according to our theory, is not their individual traits (e.g. bargaining power, geopolitical importance, and economic size) but the relationships between them. The relational properties of core European actors, especially France and West Germany, matter much more in creating important momenta for integration.
On the other hand, our theory shares commonalities with neofunctionalism and historical institutionalism in that both emphasize the limited degree of state actors’ control over the integration process. Integration is a diffusion process in which actors have limited control over how their early decisions shape subsequent choices. Learning, emulation, peer pressure, path dependence, and institutional binding are examples of important endogenous dynamics in a diffusion process. However, we depart from neofunctionalism and historical institutionalism in that these endogenous dynamics do not automatically turn on during the diffusion process. A critical mass of adopters is required for a diffusion process to generate a self-sustaining force, and we argue that the ICR between France and West Germany in the 1960s is exemplar in this regard. The ICR between China and Japan did not generate a similar critical mass for further integration in Asia.
Scholars of European integration have noted the critical role of the Franco-German relationship in various ways. For example, our concept of ICR is closely related to the “embedded bilateralism” proposed by Krotz and Schild (2014). Krotz and Schild argue that “neglecting the special bilateral Franco-German connection and two countries’ joint role in the European Union mean missing crucial aspects of European politics” (2014, 1). Then, they theorize the Franco-German connection by “embedded bilateralism” that “captures the intertwined nature of a robustly institutionalized and normatively grounded interstate relationship” (Krotz and Schild 2014, 8). For them, the Franco-German relationship is “just one important instance of a (potentially) larger class of empirical phenomena in institutionalized multilateral settings or regional integration contexts” (Krotz and Schild 2014, 9). In this paper, we further their argument by embedding it in a general theory of diffusion.
Webber explains that regional integration in Asia during the Cold War was infeasible because there was no ‘France.’ That is, Japan’s relative economic power was too strong compared to other non-Communist countries and China could not provide a counterweight (Webber 2006). Our reinterpretation of Webber’s “no France in Asia” argument is that what was missing in Asia was a bilateral relationship like the one between “France and Germany.” Asia would be a lot like Europe if there existed an Asian power that could have filled the role of France for Japan.
Mattli provided another rationale for the joint role of cores in the integration process (Mattli 1999). Mattli explains that regional integration can be successful if states want access to wider markets, and the political leaders of those states are willing to accommodate the political costs of the integration. Political cost here means concession of political autonomy in exchange for economic benefits from integration. If the fear of losing autonomy or distributional concerns is too strong, the cost rises sharply and the supply drops. Thus, the success of integration depends on how to reduce future uncertainty. Mattli argued that this problem is easily solved by the presence of a regional leader “such as Germany in the European Union, Prussia in the Zollverein, or the United States in the North American Free Trade Area” because such “a state serves as focal point in the coordination of rules, regulations, and policies; it also helps to ease distributional tensions through, for example, side-payments” (Mattli 1999, 56). While we agree with Mattli’s point that regional leaders play an important role in providing focal points for coordination problems, we disagree with his reading of the history of European integration. It is highly debatable whether West Germany has been the sole leader of European integration since the 1940s.12
To sum up, a cooperative and robust ICR can provide a larger amount of public goods for integration than the sum of each core's contributions in its absence. However, two important caveats need to be mentioned. First, we do not claim that a cooperative and robust ICR is a major driving force of the entire integration process. As mentioned above, the integration process can best be understood as a punctuated process with moments of dramatic change and periods of smooth evolution. The critical role of a cooperative and robust ICR in this punctuated diffusion process is rather limited to the moment of critical mass, after which the rate of integration accelerates as a significant number of countries join the integration process. Before that moment of critical mass, exogenous factors matter more than the ICR. Also, after critical mass is reached, endogenous factors, such as institutional lock-in, spillover, and network externalities, matter more than the ICR. However, we argue that the moment of critical mass in regional networks with multiple cores and outside superpowers hinges critically on the presence of co-leadership by pivotal regional players in each network.
The second caveat to our meso-theory of ICR is that the presence of a cooperative and robust ICR increases the possibility of achieving critical mass, but it does not guarantee it will happen. A cooperative and robust ICR may exploit its superior position at the expense of peripheral states and impose a regime that distributes more gains toward ICR members.
5The punctuated nature of diffusion can be found in many examples from the history of international relations. The diffusion of the gold standard in the late 19th century is a good example. England adopted the gold standard in 1844. The decision was related to dramatic changes in the domestic circulation of gold versus silver coins and subsequent enabling legislation, England’s Bank Charter Act. In 1871, Germany adopted the gold standard. The recently-unified German nation wished to have the monetary standard of England and reparations from the Franco-Prussian War provided the resources. In other words, England and Germany were “innovators” or “early adopters.” Once Germany decided to follow the British way of underwriting its monetary system, neighbor countries followed the same path: Norway in 1874, and Sweden, Denmark, the Netherlands, and France in 1875. The adoption of the gold standard by these countries significantly dropped the market price of silver in international markets. Countries maintaining the silver standard or the bimetallic standard suffered from a dramatic fall in the price of silver and there was no sign of hope for countries maintaining the silver standard or the bimetallic standard. As a result, the rush to the gold standard was accelerated. In our terminology, critical mass for the diffusion of the gold standard was formed in 1875. 6There have been ongoing debates on what constitutes a “region” or “regionalism” in international relations. See Nye (1968), Thompson (1973), Lewis and Wigen (1997), Fawcett (2004), Mansfield and Solingen (2010), and Powers and Goertz (2011). 7One important difference is that the US pursued different strategies towards Asia and Europe. As Hemmer and Katzenstein (2002) elaborate, the US pursued a multilateral approach to European security, epitomized by the formation of NATO, while the US did not seriously explore the similar possibility in Asia. There is no doubt that this factor has influenced the different paths of economic integration taken within Asia and Europe. However, it should also be noted that NATO did not produce the economic integration of Europe that we see today. As discussed by many scholars of European integration, one of the major fault lines in the integration process lies between the Atlanticists, led by the UK, and the Continentalists, led by France and exemplified by the debate on the European Defense Community. 8Readers might find the omission of some important actors such as the US, South Korea, ASEAN members, Benelux countries, and the UK to be problematic. Our rationale for the omission of these actors is as follows. First, the US does not meet the criteria to be considered a “regional” core. In economic matters, the US has always been considered as non-European actor. Although the US successfully constructed a “North Atlantic” region in terms of security area, the US has never asked, and European countries have never invited the US, to be a member of the European economic community. Likewise, the US has engaged actively with Asian countries in areas of security by forming multiple bilateral alliances. However, it was only recently that the US presented itself as an Asia-Pacific or Trans-Pacific player in the economic realm. 
Second, the omission of South Korea, ASEAN members, Benelux countries, and the UK does not necessarily imply that these countries are less important or less powerful than the chosen “cores.” Instead, the rationale for their omission is the fact that these countries do not have de facto veto power over proposals for regional integration. For example, the UK was not a founding member of the EEC, and France rejected the UK’s application to the EEC in 1963. South Korea became an important player in Asia’s regional economic network only after the 1980s. Benelux countries were founding members of the EEC, but it is difficult to consider them as a coherent bloc on complex issues involving the European process of integration. Likewise, it remains to be seen whether ASEAN members can act as a coherent bloc on complex issues of regional economic cooperation in Asia. 9China imposed conditions on Japan before accepting any negotiation over their normalization treaty in 1972. These conditions, called the “Three Principles,” demanded Japan’s formal approval of the PRC: 1) The government of the People’s Republic of China is the sole and legitimate government of China; 2) Taiwan is an inseparable part of the Chinese territory; and 3) In light of the previous points, the peace treaty between Japan and the Nationalist (Taiwanese) government is illegal and should be abrogated. After succeeding rounds of negotiations, Japan accepted the above principles. 10China was well aware of Soviet ambitions in Asia in normalizing relations with Japan. The PRC strongly insisted on including an anti-hegemony clause in the Sino-Japanese Peace and Friendship Treaty, which stated that China, Japan, nor any third country [USSR] should seek hegemony in Asia. 11For detailed historical and political background see Lee (1979). 12As explained in detail by Moravcsik (1998), West Germany played only a minor role in the early stages of European integration in the 1950s. It was France that played a leading role in that period. For example, the ECSC (European Coal and Steel Community) was proposed by French foreign minister Robert Schuman with the help of Jean Monnet. It was only after West Germany formed a firm and enduring bilateral relationship with France in the 1960s that West Germany played an active role in European integration (which was later hastened by the 1970s German economic miracle). Even during the post-1960s, France, under the presidency of Charles de Gaulle, successfully took the upper hand in the process by executing French veto power over the integration process, as in the case of the Empty Chair Crisis in 1966. Also, though West Germany wanted the UK to join the EEC, the UK was able to join only after Germany successfully convinced France not to veto the admission of the UK.
To examine our argument on the role of the ICR in regional integration, we analyze bilateral treaty network data. We collected bilateral treaty data from Peter H. Rohn’s World Treaty Index (Pearson 2001). The World Treaty Index (WTI) is the most comprehensive data source of international treaties. It was first published in 1974, followed by a second edition in 1983. Updates after the second edition are electronically available. In the data set, treaties are categorized into nine domains (Diplomacy, Welfare, Economics, Aid, Transport, Communication, Culture, Resources, and Administration) that are then subdivided into several specific topics. For example, the economy domain contains 15 subtopics (claim, raw materials trade, customs duties, economic cooperation, industry, investment guarantee, most favored nation status, patents and copyrights, payments and currency, products and equipment, taxation, technical cooperation, tourism, general trade, and trade and payments). Figure 2 visualizes treaty domains and corresponding treaty subtopics in the WTI. As shown in Figure 2, the WTI data set classifies all types of registered treaties and covers all registered treaties from 1901 to 2003.
In WTI, states form three types of treaties: multilateral treaties, bilateral treaties, and unilateral declarations. According to the Treaty Handbook published by the United Nations, a “multilateral treaty is an international agreement concluded between three or more subjects of international law, each possessing treaty-making capacity” and a “bilateral treaty is an international agreement concluded between two subjects of international law, each possessing treaty-making capacity.” Unilateral declarations “constitute interpretative, optional or mandatory declarations” (United Nations 2012, 33). Among those treaties included in the index, we chose to analyze bilateral treaties, which constituted 99.6% of all treaties recorded.13
Figure 3 shows the cumulative growth of treaty-joining states across the nine treaty domains and 81 subtopics. For this plot, we construct binary annual bilateral treaty network data, so that country pairs forming more than one treaty in a subtopic are counted as having one bilateral treaty in that subtopic for that year. Numbers in parentheses indicate the total number of treaty-joining states in each subtopic at the end of the sample period. For example, "ally (56)" in the top left panel indicates that 56 countries joined at least one alliance-related bilateral treaty between 1901 and 2000. Abbreviations of treaty subtopics are the same as those in the WTI and are reported in the appendix.
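As a rough sketch of the binarization step just described, the snippet below builds one binary, undirected network per year for a given subtopic from a dyadic treaty table. It is illustrative only: the column names (year, topic, country1, country2) are placeholders rather than the WTI's actual field names, and the toy records are not real treaty data.

```python
import pandas as pd
import networkx as nx

def binary_annual_networks(treaties: pd.DataFrame, topic: str) -> dict:
    """Build one binary, undirected network per year for a given subtopic.

    `treaties` is assumed to hold one row per bilateral treaty with
    hypothetical columns: year, topic, country1, country2. Duplicate dyads
    within a year collapse to a single tie, as described in the text.
    """
    networks = {}
    subset = treaties[treaties["topic"] == topic]
    for year, rows in subset.groupby("year"):
        g = nx.Graph()
        # Re-adding an existing edge is a no-op in networkx, so multiple
        # treaties between the same pair in the same year count as one tie.
        g.add_edges_from(zip(rows["country1"], rows["country2"]))
        networks[year] = g
    return networks

# Toy example (not actual WTI records): the duplicate FR-DE treaty in 1963
# collapses into a single tie.
toy = pd.DataFrame({
    "year": [1963, 1963, 1963, 1972],
    "topic": ["econ", "econ", "econ", "econ"],
    "country1": ["FR", "FR", "DE", "CN"],
    "country2": ["DE", "DE", "IT", "JP"],
})
nets = binary_annual_networks(toy, "econ")
print(nets[1963].number_of_edges())  # -> 2
```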
Several points in Figure 3 should be noted. First, the growth of treaty-joining states varies widely across the 81 subtopics. Some closely follow the S curve while others show linear or simply flat growth patterns. Kernel regression smoother lines (thick solid lines), which show the average shape of growth over time, trace a highly stretched S curve.
Second, the number of treaty-joining countries varies across subtopics. Bilateral treaties on economic cooperation, econ in the top-right panel, have the largest number of joined countries, while bilateral treaties on religion (relig) in the bottom-left panel, have the smallest number of joined countries, with the exception of subtopics directly related to multilateral treaties such as UN membership (chart in the top-left panel), the International Court of Justice Clause (optc in the top-left panel), and UNICEF aid (unice in the middle-left panel).
Among these bilateral treaty data, we use bilateral treaties on economic cooperation for the analysis because regional integration is essentially an economic process. Also, as shown in Figure 3, treaties on economic cooperation are the most densely connected treaty domain among the nine treaty domains in the World Treaty Index.
We used a community detection method to identify the bloc structures of bilateral economic treaty networks in Asia and Europe. A “community” in a network refers to a group of nodes in which the density of connectedness within the group far outweighs the density of connectedness between groups. Community detection allows us to check whether changes in the relationship between regional cores affect the diffusion of treaty networks at the regional level.
Among the many methods of community detection, we employed the Walktrap measure developed by Latapy (2006). In this method, a walker is placed randomly on a node and moves to a neighboring node based on the transition probability $P_{ij} = A_{ij}/d_{i}$. The formula indicates that the transition probability of a walker moving from the $i$th node to the $j$th node ($P_{ij}$) is equal to the weight of the edge between them ($A_{ij}$) divided by the degree of the $i$th node ($d_{i}$). The intuition is that after a certain number of walks, the walker tends to get 'trapped' into densely connected parts corresponding to communities (Latapy 2006, 192).
$P_{ij}$ is greater when the $j$th node has a high degree. After a certain number of walks, the walker has a strong tendency to return to a node with the highest degree; hence, detected communities are likely to center on a few well-connected nodes with high degrees ("cores" in this paper). Note that the transition probability from a node to any other node within the same community is always greater than the transition probability to a node in a different community. Thus, if cores (of different communities) stay disconnected, communities centered on those cores are likely to remain separated in the next stage. It is unlikely that an inter-community connection between low-degree nodes would change the community structure. However, as cores (of different communities) become more connected, the probability that those communities merge in the next stage increases.
In plain words, the above discussion implies that the presence of a cooperative and robust ICR (i.e. cores of different communities getting connected and staying connected for a long period of time) increases the probability of diffusion (community merging). In contrast, the lack of a cooperative and robust ICR (i.e. cores of different communities remaining unconnected for a long time) decreases the probability of diffusion (community merging).
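For illustration, the sketch below shows how this community detection step could be run with the python-igraph implementation of Walktrap. The edge list is a toy example rather than the paper's treaty data, and the walk length of four steps is simply igraph's default, not necessarily the setting used in the analysis.

```python
import igraph as ig

# Toy bilateral-treaty edge list; country codes are illustrative only.
edges = [
    ("FR", "DE"), ("FR", "IT"), ("DE", "IT"), ("FR", "BE"), ("DE", "NL"),
    ("CN", "KZ"), ("CN", "KG"), ("JP", "TH"), ("JP", "ID"), ("TH", "ID"),
]
g = ig.Graph.TupleList(edges, directed=False)

# Walktrap: short random walks tend to get trapped in densely connected
# groups; merging nodes by walk similarity yields a community dendrogram.
dendrogram = g.community_walktrap(steps=4)
clusters = dendrogram.as_clustering()

for community_id, members in enumerate(clusters):
    print(community_id, [g.vs[v]["name"] for v in members])
```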
As mentioned above, we selected France and Germany as regional cores in Europe and China and Japan in Asia. As explained by Cole (2001) and Krotz and Schild (2014), the Franco-German relationship entered into a new era around the time of the Élysée Treaty in 1963. In contrast, China and Japan failed to form a cooperative and robust ICR during the postwar period.
In our analysis, we chose the 28 EU member states for the analysis of the European bilateral treaty network. For Asia, we chose 35 Asian countries, excluding Oceanian countries, Middle East countries, and Cyprus. The full list of country names is available in the Appendix.
13Bilateral treaties have several advantages in the study of the evolution of the international or a regional state system. First, bilateral treaties are more informative about the willingness of cooperation between a pair of countries than multilateral treaties. In bilateral treaties, states freely choose not just the type of treaty but also the contracting party. Thus, the formation of a treaty in a specific issue area indicates the willingness of cooperation among contracting parties involved in that specific issue of interest. In contrast, it is difficult for a state to select only a group of favored states in multilateral treaties. Except for the case of bilateral treaties completely nested within multilateral treaties (Powers et al. 2007), countries have a high level of discretion to join or not to join a bilateral treaty with other states. Second, the formation of a bilateral treaty is a serious commitment among contracting parties. Treaty formation is legal in nature; hence, once formed, treaties impose significant constraints on signatories’ behaviors (Simmons 2000; Koremenos 2005). The forms of constraints vary, though usually they are exhibited in the arbitration of multilateral institutions, direct retaliation, or domestic legal processes. Third, the formation, maintenance, and revision of bilateral treaties are systemic phenomena. Joining a certain type of a bilateral treaty to achieve critical mass generates negative or positive externalities for other countries. Also, a bilateral treaty in one issue area affects the possibility of future agreements on other issues. The complex interdependence of bilateral treaties has been conceptualized as “spillover” (Haas 1961) or “network externalities” (Eichengreen 1996) in the literature. Lastly, once a treaty is formed, it tends to last for a long time. States rarely abandon treaties, but tend to renegotiate or revise treaties once they encounter problems (Pearson 2001, 553). The ‘stickiness’ of treaties is an important characteristic of the modern international system.
In this section, we analyze different paths of bilateral treaty network evolution in Asia and Europe. Our focus in this analysis is on whether and how inter-core relationships in Asia and Europe have affected the paths of bilateral economic treaty networks captured by changes in the community structure.
We first show the Lorenz curve of the degree distribution in the bilateral treaty networks of Asia and Europe for all treaty domains. The Lorenz curve is one of the most effective ways to investigate the degree densities of multiple networks over a long time span. The more concave the Lorenz curve, the more unequal the degree distribution of the network. Colors are scaled by time, with darker colors indicating more recent periods and brighter colors indicating earlier periods.
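As a rough sketch, the Lorenz curves in Figures 4 and 5 can be computed from node degrees as follows. The network here is a toy example, and the treatment of isolates (zero-degree countries) is an assumption on our part, since the paper does not spell out that detail.

```python
import numpy as np
import networkx as nx

def degree_lorenz(g: nx.Graph):
    """Return x (cumulative share of countries, sorted from least to most
    connected) and y (cumulative share of treaty ties) for the Lorenz curve
    of the degree distribution."""
    degrees = np.sort(np.array([d for _, d in g.degree()], dtype=float))
    cum = np.cumsum(degrees)
    x = np.arange(1, len(degrees) + 1) / len(degrees)
    y = cum / cum[-1]
    # Prepend the origin so the curve starts at (0, 0).
    return np.insert(x, 0, 0.0), np.insert(y, 0, 0.0)

# Toy network: the closer the curve is to the diagonal, the more evenly
# ties are spread across countries.
g = nx.Graph([("FR", "DE"), ("FR", "IT"), ("FR", "BE"), ("DE", "IT")])
x, y = degree_lorenz(g)
print(np.round(y, 2))
```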
Figure 4 shows that the treaty network in Asia has been highly concentrated in a small number of countries over time and across different treaty domains. The inequality of degree density is particularly notable in two domains: Welfare and Resources. Figure 5 shows a very different picture for Europe. The inequality of degree density decreases over time across most treaty domains. Recently, the Lorenz curve has become very flat in Economy, Transportation, Resources, and Administration.
Next, we compare the number of economic treaties connected with both cores (the shared network) with the number of treaties connected to only one core of the region (exclusive blocs). We expect that, in Europe, the size of the shared network increases from the 1960s while the size of exclusive blocs stays constantly small. As expected, the left panel of Figure 6 (Europe) demonstrates that most European countries are well connected with both cores and that the slope increases dramatically in the 1960s and 1970s. Note that there is a small hump in the dotted line around 1991, which reflects the massive inclusion of former communist countries into the bilateral economic treaty network. Some of those Eastern European countries tended to form bilateral economic treaties with either France or Germany, which produced the small hump in the graph.
The right panel of Figure 6 (Asia) shows a strikingly different picture. Note that the y-axis of the right panel has different scales compared to the left panel for better display of data. Although our sample of Asia contains a larger number of countries, the total number of ties in Asia is significantly smaller than those within Europe, and the size of the shared network in Asia stays low (under 350 ties) over time. The size of exclusive blocs grows during the 1960s and the early 1970s and remains constant afterwards.
From this analysis, we can see, first, that the number of bilateral economic treaties connected with the two cores in Europe grows dramatically in the 1960s and 1970s, while the number of bilateral economic treaties connected with only one core does not rise at all during the same period. Second, the number of bilateral economic treaties connected with the two cores in Asia increases at the same rate as the number of bilateral economic treaties connected with only one core during the 1960s and the early 1970s. Although these patterns are consistent with our expectations, we have yet to see whether the diffusion of bilateral economic treaties in Europe was triggered by the transformation of the Franco-German relationship in the late 1950s and early 1960s.
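The shared-network versus exclusive-bloc counts plotted in Figure 6 could be approximated along the lines of the sketch below, which classifies each non-core country by whether it holds treaty ties with both cores or with exactly one and then sums the ties of each group. This is one plausible reading of the measure; the paper does not spell out the exact counting rule, so the operationalization here is an assumption.

```python
import networkx as nx

def shared_vs_exclusive(g: nx.Graph, cores=("FR", "DE")):
    """Sum the ties of non-core countries linked to both cores (shared
    network) versus those linked to exactly one core (exclusive blocs).
    One plausible operationalization, not necessarily the paper's."""
    core_a, core_b = cores
    shared, exclusive = 0, 0
    for country in g.nodes:
        if country in cores:
            continue
        with_a = g.has_edge(country, core_a)
        with_b = g.has_edge(country, core_b)
        if with_a and with_b:
            shared += g.degree(country)
        elif with_a or with_b:
            exclusive += g.degree(country)
    return shared, exclusive

# Toy example: IT has treaties with both cores, ES with only one.
g = nx.Graph([("FR", "DE"), ("FR", "IT"), ("DE", "IT"), ("FR", "ES")])
print(shared_vs_exclusive(g))  # -> (2, 1)
```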
Figure 7 and Figure 8 show the results of community detection analysis for Europe (1911-2000) and Asia (1944-2003). The time frame is different because Asian countries do not have any registered bilateral economic treaties before 1944. Country names are abbreviated by ISO2 codes and reported in the appendix. Different colors indicate different community memberships. Curved edges indicate the presence of bilateral economic treaties. We denote regional cores by a larger node size. Our interest is to check whether changes in the community membership of regional cores are followed by changes in the entire community structure.
The second row of Figure 7 clearly shows major changes in the community structure of bilateral economic treaty networks in Europe. We aggregate annual data into 10-year intervals for better visual presentation.14 As shown in the middle-center panel, the change in the ICR precedes this structural change. That is, after France and Germany were included in the same community in the 1950s, all the countries in the network came to belong to a single community. If we consider the merging of the community structure as the stage of conquest in the diffusion process, the 1960s can be thought of as the moment of critical mass in the economic integration of Europe. The 1950s have long been regarded as the formative period of European economic integration, with the signing of the Treaty of Paris in 1951, the establishment of the ECSC, the Treaty of Rome, which entered into force in 1958, and the founding of the European Free Trade Association (Austria, Britain, Denmark, Norway, Portugal, Sweden and Switzerland) in 1960. However, our analysis demonstrates that it was the 1960s that produced critical mass for the economic integration of Europe.
If we narrow down the timeframe to a three-year window, as shown in Figure 8, we obtain a more detailed picture of the transition caused by the change in the ICR. As argued by Cole (2001) and Krotz and Schild (2014), the major changes in the postwar Franco-German relationship occurred between the late 1950s and the early 1960s, epitomized by the 1963 Élysée Treaty. The direct result of this change was increasingly dense economic connections between the two countries during this period. The enduring cooperative relationship between the two countries afterwards, “sealed by a kiss” at the Élysée Palace, generated important momentum for the economic integration of Europe.
Figure 9 delivers a very different picture of Asian economic integration. China and Japan never belonged to the same community during the 1944 to 2003 period; instead, each maintained its own sphere of influence in the bilateral economic treaty network throughout the postwar period. Japan has been actively engaging in treaty-making with Southeast Asian countries since the 1970s, while China has maintained connections with Central Asian countries.
Note that the bilateral economic treaty network in Asia is sparser than Europe’s. Thus, one might argue that the low level of treaty density in Asia is the main reason for the absence of a shared community like the one in Europe. However, a careful look at the community detection results defies this conjecture. The bottom-right panel of Figure 9 shows the most recent community structure within Asia’s bilateral economic treaty network, from 1994 to 2003. We find that South Korea (KR) and China (CN) belong to the same community, while Japan (JP) still leads its own community. Without a fundamental change in the Sino-Japanese relationship, it is highly unlikely that an increase in the level of network density would bring about the merging of this balkanized economic treaty network structure in Asia.
In Figure 10, we narrow our focus to the 1970s, in which China and Japan sought rapprochement, resulting in the 1972 joint communiqué and the 1978 China-Japan Peace and Friendship Treaty. The 1970s produced important opportunities for the two Asian cores to transform their bilateral relationship into something closer to a cooperative and robust ICR. Kakuei Tanaka (1972-1974) became Japanese prime minister and was willing to apologize for Japan’s wartime aggression toward China. Mao Zedong considered Japan a major ally against the Soviet Union and refused to push a claim for reparations for past aggression from Japan (Yahuda 2014, 11). The US was backing Japan to “strengthen its ties with China for the purpose of stabilizing Northeast Asia and counter-balancing the Soviet Union’s appreciable military presence in the entire Asia-Pacific region” (Lee 1979, 432). However, unlike France and West Germany in the 1960s, China and Japan failed to turn these opportunities into a robust and cooperative relationship, partly because of Mao’s death in 1976, and partly because the subsequent negotiations with Deng Xiaoping failed to consider the need for greater institutionalization of the bilateral relationship through regularized summit meetings and ministerial councils, as had happened in Europe after the Élysée Treaty.
This missed opportunity is clearly visible in Figure 10. The normalization of the diplomatic relationship between China and Japan in the 1970s did not alter the community structure of the economic treaty network in Asia. China and Japan each led their own communities throughout the 1970s, and this had not changed by the end of our sample period (2003).
14If we disaggregate the timeframe further, the patterns become less clear. However, the main conclusion from the analysis does not change: in the European bilateral economic treaty network, the change in the Franco-German relationship precedes changes in the entire community structure in Europe.
Focusing on the formation of bilateral treaties, we provided a network-theoretic explanation of the different paths of postwar Asia and postwar Europe. After proposing a punctuated-equilibrium model of network diffusion that emphasizes the uncertain nature of diffusion dynamics, we argued that successful regional integration hinges crucially on factors that assure a critical mass of countries that they can embark on a venture promoting a common future without fear of exploitation or noncompliance by regional powers. Given the similarity in regional network structures in Asia and Europe in the early postwar period, we presented a cooperative and robust ICR as the critical factor that distinguishes the European path from the Asian path throughout the postwar period.
Specifically, we argued that the Sino-Japanese relationship in Asia and the Franco-German relationship in Europe held similar positions but played very different roles in the structure of each regional network. While France and Germany have jointly played “pivotal roles in shaping Europe,” China and Japan have acted like “two tigers” competing to “occupy the same mountain” (Yahuda 2013, 1).
Utilizing the World Treaty Index data set and the community detection method, we found that distinct inter-core relationships in Europe and Asia indeed led to different patterns of evolution in the community structure of the bilateral economic treaty networks of Asian and European countries. The bilateral economic treaty network in Europe merged into a common structure after France and West Germany cemented their relationship constructively and firmly. Although it is difficult to establish a direct causal connection from the Franco-German relationship to integration, there are plenty of reasons supporting our claim that a cooperative and robust Franco-German relationship fulfilled several functions that served as focal points for European integration. In sharp contrast, the absence of a cooperative and robust ICR in Asia led to continuously diverging patterns within the bilateral economic treaty network.
As a prominent postwar historian put it, “If time and space provide the field in which history happens, then, structure and process provide the mechanism” (Gaddis 2002, 35, emphasis original). What we aimed for in this paper was to find a mechanism that explains differences between two fields (Asia and Europe during the postwar period) using a network theoretic perspective. In our reading, previous theories of regional integration did not pay enough attention to relational properties of interstate relationships and thus relied on either oversocialized (e.g. neofunctionalism, constructivism and historical institutionalism) or undersocialized (e.g. intergovernmentalism and realism) concepts to explain the integration process.15 Certainly, a network perspective is a highly simplified view of international relations that only considers ties, nodes, domains, and the history of relationships. However, we believe that for scholars of international relations it is a highly promising theoretical framework that can connect macro-level outcomes such as regional integration and diffusion with micro-level factors such as capabilities, preferences, and identities.
Although the goal of this paper was largely a theoretical one, we would like to note one important implication from our theory and findings. Since 1990, Asia has experienced a wide range of institution building, regional groupings, and identification efforts. The proliferation of regional initiatives for economic cooperation in Asia was caused by many factors, such as the end of the Cold War, the rapid economic development of Asian countries, the common experience of the Asian financial crisis, the rise of China, and the recent “pivot to Asia” by the US. However, despite so many different acronyms representing Asia (e.g. EAEC, APEC, ASEAN+3, ARF, SAARC, SCO, RCEP, and TPP), regional initiatives in Asia have yet to create the critical mass needed to reach the stage of “invasion.”
Based on the theory and findings of this paper, we argue that in order for Asian countries to gain the momentum needed to achieve critical mass, China and Japan should find a mechanism that engenders a long-lasting cooperative relationship. In the Franco-German relationship, it was personal friendship among leaders (de Gaulle-Adenauer, Pompidou-Brandt, Giscard-Schmidt, Mitterrand-Kohl, Sarkozy-Merkel, and Hollande-Merkel) and regularized summits and ministerial councils that created enduring cooperation between the two countries. We do not know what will work for the Sino-Japanese relationship in the 21st century, but it is certain that the economic integration of Asia will not go further as long as China and Japan remain two tigers on one mountain.
15We borrow the terms “oversocialized” and “undersocialized” from Granovetter (1985).
[Figure 1.] The Process of Diffusion
[Figure 2.] World Treaty Index: Nine Domains and 81 Subtopics of International Treaties
[Figure 3.] The Number of Treaty-Joined Countries across Nine Domains and 81 Subtopics
[Figure 4.] Lorenz Curve of Degree Distribution in Asia: Colors Scaled by Time with Darker Colors Indicating Later Periods
[Figure 5.] Lorenz Curve of Degree Distribution in Europe: Colors Scaled by Time with Darker Colors Indicating Later Periods
[Figure 6.] The Size of Shared Networks and Exclusive Blocs in Asia and Europe. (Solid lines indicate weighted treaty networks and dotted lines indicate unweighted treaty networks. The numbers on the left axis in each plot indicate the number of ties from the weighted network.)
[Figure 7.] Changes in the Community Structure of Europe from 1911 to 2000 using 10-year Intervals. (Different colors indicate different communities, with cores identified by a larger node size.)
[Figure 8.] Changes in the Community Structure of Europe from 1956 to 1964 using Three-year Intervals. (Different colors indicate different communities, with cores identified by a larger node size.)
[Figure 9.] Changes in the Community Structure of Asia from 1944 to 2003 using 10-year Intervals. (Different colors indicate different communities, with cores identified by a larger node size.)
[Figure 10.] Changes in the Community Structure of Asia from 1971 to 1979 using Three-year Intervals. (Different colors indicate different communities, with cores identified by a larger node size.)
Please note that there are two different conference venues:
June 14/15 - Century City Conference Centre
June 16 - Kirstenbosch Conference Centre (transportation available)
Thursday, June 15 • 13:30 - 15:00
Resilience from an Educational Perspective - Maria Pilar Garate Chateau, Sharon Butler, Azita Chitsazzadeh
Abstract #287
Title: Resilience and educational achievement: A Chilean study.
Presenter:
Maria Pilar Garate (Universidad Tecnica Federico Santa Maria Chile)
Co-Authors:
Maria Pilar Garate, J-F, Hugo Alarcon, Edward Johns, Lioubov Dombrovskaia, Teresita Arenas
Introduction:
This study investigates the relationship between educational outcomes and constructs of resilience among first-year university students. In particular, it applies an algorithmic analysis to systematically tease out nodes and distributed computations within the data, accounting for how constructs of resilience contribute to educational gains and vice versa.
Methods:
A total of 300 first-year engineering university students completed a paper-based social-emotional wellbeing and mental health survey. The survey was informed by four standardized psychometric batteries. Students completed the survey at the university during their regular class; it took around 30 to 40 minutes to complete. Results from the survey were mapped to a number of academic indices. Categories of algorithms were used to account for how much a construct within resilience contributed towards educational gains and vice versa. Structural equation modeling was used to show to what extent a factor was related to educational outcomes and resilience.
Findings:
Increasing resilience is likely to contribute towards positive educational gains, and equally, catering for positive learning experiences is likely to promote aspects of resilience. An associated outcome would be to capture and study the correlation between resilience and personal life-course journey, school behaviors, and academic achievement/performance.
Abstract #275
Title: The development of a whole town approach to building resilience in children and young people: The Blackpool HeadStart programme.
Presenter:
Sharon Butler, Lisa Mills, Ollie Gibbs, Josh Thompson (Blackpool Council, UK)
Co-Authors:
Young People's Executive Group, Angie Hart, Pauline Wigglesworth
Introduction:
Blackpool HeadStart is a £10 million Big Lottery Funded programme implementing co-produced, social justice resilience approaches in schools and local communities to support children's mental health. Blackpool is a quirky town with multiple social and economic challenges. We are using a community development approach and adapting traditional therapies.
Methods:
The presentation outlines the core components of HeadStart and discusses how we have drawn on both resilience and systems theory to work with the whole town. To achieve the aim of a whole system change, we are using and adapting Angie Hart and collaborators’ Resilient Therapy and their Academic Resilience Approaches. HeadStart’s theory of change has been co-produced with young people, practitioners and other stakeholders and it will be outlined in the presentation. Finally we give an overview of the research data that is currently being collected on all the elements of the programme, using both qualitative and quantitative methods.
Findings:
Our work to develop a whole town approach to resilience building has relevance to anyone who wants to develop resilience approaches in either schools or local communities more broadly, rather than with individual children only. Implications of working with social justice and co-productive approaches are also highlighted.
Abstract #235
Title: The comparison between sensation seeking, test anxiety and academic resiliency in athlete and non-athlete female students
Presenter:
Azita Chitsazzadeh (Tehran minister of education, Iran)
Introduction:
Today, sport and physical activities are considered essential for many students, and most academic systems have included them in their programs. The aim of this study was to compare sensation seeking, test anxiety and academic resilience in athlete and non-athlete female students.
Methods:
The study population consisted of all ninth-grade female high school students in District 3 of Tehran's education system in 2015-2016, numbering about 2,150. A total of 120 students (60 in each group) were selected by convenience sampling.
Findings/Implications:
According to the findings, it can be concluded that test anxiety and adventure seeking are important variables among female athletes and the authorities should consider these variables during interactions with students.
Speakers: Sharon Butler (Blackpool Council), María Pilar Gárate Chateau (Universidad Tecnica Federico Santa Maria), Azita Chitsazzadeh (Ministry of Education)
Thursday June 15, 2017 13:30 - 15:00 SAST
Room 08
Century City Conference Centre
Concurrent Sessions
Introduction
============
Different biosensors rely on different biomolecules such as nucleic acids, antibodies, cells or enzymes, which can work as both bioreceptors and signalling molecules or labels. The immobilisation and stabilisation of the bioreceptor on to the transducer of the biosensor are both of great importance. Biomolecules are immobilised not only on the transducer to act as a bioreceptor, but also on other surfaces such as magnetic, gold or latex particles, and recently in a wide variety of nanomaterials that enhance the analytical performance of the biosensors. Among the wide range of strategies dealing with the immobilisation of biomolecules, in the present review we will focus on the immobilisation of active biological receptors on electrode surfaces for electrochemical biosensors. A detailed discussion of the advantages and disadvantages of the irreversible (covalent binding, cross-linking and entrapment or micro-encapsulation) and reversible (adsorption, bioaffinity, chelation or metal binding and formation of disulfide bonds) immobilisation methods will be presented. Moreover, we will address the importance of the storage and operational stability of the biomolecules involved in the biosensing event and discuss the factors that influence protein stability.
Immobilisation methods on electrode surfaces
============================================
New types of transducers have been developed for use as biosensors, the most popular being optical, electrochemical and mass-based transduction methods. As analytical systems, electrochemical-based transduction devices are robust, easy to use, portable and inexpensive \[[@B1]\]. Many electrode materials, such as glassy carbon, carbon paste, graphite composites, carbon/graphite formulations, carbon nanotubes, graphene and gold, among others, are used in electrochemical biosensors. Screen-printed electrodes (SPEs) are widely used as the measuring element due to easy and reproducible fabrication at both laboratory scale and in mass production \[[@B2],[@B3]\]. Several types of SPEs, functionalised or not, are now commercially available (e.g. Gwent Group Ltd) and many laboratories have their own facilities for in-house production. However, in addition to the configuration of the electrode and its materials being crucial, so is the immobilisation of the bioreceptor on to the electrode surface.
Often when working with biological molecules like proteins, an immobilisation method optimised for one protein may need to be adjusted to take into consideration the unique properties of another protein. For instance, it may be simple to conjugate or modify highly soluble proteins that have a high degree of conformational stability. However, similar reactions carried out on hydrophobic membrane proteins or insoluble peptide sequences will often require changes to the reaction conditions, which will affect the same conjugation process.
The recent advances in bioconjugation techniques are widely described in the literature (reviewed in \[[@B4]\]). The different strategies can be classified according to various levels of selectivity and difficulty, ranging from random methods (e.g. adsorption) to more advanced techniques based on protein engineering that facilitate directional immobilisation (e.g. bio-orthogonal chemistries and SpyTag/SpyCatcher) \[[@B5]--[@B7]\]. In this review, we focus on the most common immobilisation techniques used for biosensor construction, which can be classified into two broad categories: irreversible (Figure 1) and reversible (Figure 2) methods. The advantages and disadvantages of the main methods are summarised in Table 1 \[[@B8],[@B9]\].
[Figure 1: Irreversible immobilisation methods (schematic).]
[Figure 2: Reversible immobilisation methods (schematic).]
###### Characteristics of different immobilisation methods
| Immobilisation method | Interaction | Advantages | Disadvantages |
| --- | --- | --- | --- |
| Covalent binding | Irreversible | Stability; high binding strength | Cost |
| Cross-linking | Irreversible | Stability; high binding strength | Diffusion limitation; cross-linker toxicity |
| Entrapment | Irreversible | Stable to changes in pH or ionic strength | Limited by mass transfer |
| Adsorption | Reversible | Simple; fast; low cost | Less reproducible; random orientation; desorption following change in ionic strength or pH |
| Bioaffinity | Reversible | Good orientation; high specificity; high selectivity; high functionality; well-controlled | Cost |
| Chelation or metal binding | Reversible | Simplicity | Less reproducibility |
| Disulfide bonds | Reversible | Good orientation; high sensitivity; well-ordered; stable bond | Cost; need for linkers |
Irreversible immobilisation methods
-----------------------------------
The immobilisation is irreversible when the bioreceptor attached to the support cannot be detached without destroying either the biological activity of the biomolecule or the support. The most common methods of irreversible immobilisation are covalent binding, cross-linking and entrapment or micro-encapsulation.
### Covalent binding
The immobilisation of bioreceptors by methods based on the formation of covalent bonds is among the most widely used approaches. Due to the stable nature of the bonds formed between the biomolecule and support, the bioreceptor is not released into the solution upon use. However, in order to achieve high levels of bound protein activity, the active area of the biomolecule must not be compromised by the covalent linkage chemistry to the support. For instance, the amino acid residues essential for catalytic activity or the recognition area of antibodies must not be hindered or blocked; this may prove a difficult requirement to fulfil in some cases. A wide variety of reactions have been developed depending on the functional groups available on the target. Despite the complexity of the biomolecule structure, only a small number of functional groups comprise selectable targets for practical bioconjugation methods. In fact, just five chemical targets account for the vast majority of chemical modification techniques:

- *Primary amines (--NH~2~).* This group exists at the N-terminus of each polypeptide chain and in the side chain of lysine (Lys, K) residues. In physiological conditions, primary amines are positively charged and usually outward-facing on the surface of proteins, thus they are normally accessible for conjugation without denaturing the protein structure. Primary amines can be targeted using several kinds of conjugation chemistries. The most specific and efficient reagents are those that use the N-hydroxysuccinimidyl ester (NHS ester) reactive group.
- *Carboxy groups (--COOH).* This group exists at the C-terminus of each polypeptide chain and in the side chains of aspartic acid (Asp, D) and glutamic acid (Glu, E). Like primary amines, carboxy groups are usually available on the protein surface.
- *Thiols (--SH).* This group exists in the side chain of cysteine (Cys, C). Often, as part of a protein's secondary or tertiary structure, cysteines are joined together between their side chains via disulfide bonds (--S--S--). These must be reduced to thiols to make them available for binding. Reagents that are activated with maleimide or iodoacetyl groups are the most effective for thiol-directed conjugation.
- *Carbonyls (--CHO).* Ketone or aldehyde groups can be created in glycoproteins by oxidising the polysaccharide post-translational modifications (glycosylation) with sodium meta-periodate.
- *Carbohydrates (sugars).* Glycosylation occurs primarily in the constant fragment (Fc) region of antibodies (IgG). Component sugars in these polysaccharide moieties that contain *cis*-diols can be oxidised to create active aldehydes (--CHO) for coupling. Labelling carbohydrates requires more steps than labelling amines because the carbohydrates must first be oxidised to create reactive aldehydes; however, the strategy generally results in antibody conjugates with high activity due to the location of the carbohydrate moieties. Aldehyde-activated (oxidised) sugars can be reacted directly to primary amines through reductive amination or to reagents that have been activated with hydrazide groups.
### Cross-linking
Cross-linking is the process of chemically joining two or more molecules by a covalent bond. Cross-linking reagents (or cross-linkers) are molecules that contain two or more reactive ends capable of chemically attaching to specific functional groups (primary amines, thiols, etc.) on proteins or other biomolecules. Cross-linkers are also commonly used to modify nucleic acids, drugs and solid surfaces. The same chemistry is applied to amino acid and nucleic acid surface modification and labelling. There are several technical handbooks available, which detail the chemical reactivity and the molecular properties of cross-linkers (e.g. \[[@B10]\]).
### Entrapment or micro-encapsulation
The entrapment method is based on the occlusion of a biomolecule, mostly enzymes, within a polymeric network that allows the substrate and products to pass through but retains the enzyme. This method differs from the coupling methods described above, in that the enzyme is not bound to the support or membrane. There are different approaches to entrapping enzymes such as gel or fibre entrapping and micro-encapsulation. The practical use of these methods is limited by mass transfer limitations through membranes or gels.
Reversible immobilisation methods
---------------------------------
Reversibly immobilised biomolecules can be detached from the support under gentle conditions. The use of reversible methods for bioreceptor immobilisation is highly attractive, mostly for economic reasons. After using the support, it can be regenerated and re-loaded with fresh bioreceptor. The most common methods of reversible immobilisation are adsorption, bioaffinity, chelation or metal binding and the formation of disulfide bonds.
### Adsorption
The simplest immobilisation method is non-specific adsorption, which is mainly based on physical adsorption or ionic binding. In physical adsorption, the bioreceptors are attached to the surface through hydrogen bonding, van der Waals forces or hydrophobic interactions, whereas in ionic binding, the enzymes are bound through salt linkages. The nature of the forces involved in non-covalent immobilisation results in a process that can be reversed by changing the conditions that influence the strength of the interaction (e.g. pH, ionic strength, temperature or polarity of the solvent). Immobilisation by adsorption is a mild, easy-to-perform process and usually preserves the functionality of the biomolecule \[[@B11]\]. The limitations of the adsorption mechanism are the random orientation and weak attachment, which produce desorption and poor reproducibility \[[@B12]\].
### Bioaffinity
The principle of affinity between complementary biomolecules such as lectin--sugar, antibody--antigen and biotin--avidin has been applied to biomolecule immobilisation \[[@B13],[@B14]\]. The remarkable selectivity of the interaction is a major benefit of the method. However, the procedure often requires the covalent binding of a costly affinity ligand (e.g. antibody or lectin) to the support. The most established procedures are the (strept)avidin--biotin interaction and the use of Protein A or G for antibody immobilisation:

- *(Strept)avidin--biotin interaction.* Avidin is a glycoprotein found in egg whites that contains four identical subunits. Each subunit contains one binding site for biotin, or vitamin H, and one oligosaccharide modification. The tetrameric protein is highly basic, having an isoelectric point (pI) of about 10. The biotin interaction with avidin is among the strongest noncovalent affinities known, exhibiting a dissociation constant of about 1.3×10^−15^ M. The only disadvantage of using avidin is its tendency to bind non-specifically with components other than biotin due to its high pI and carbohydrate content. Streptavidin is a similar biotin-binding protein, but it is of bacterial origin, produced by *Streptomyces avidinii*. The primary structure of streptavidin is considerably different from that of avidin, and this variation in the amino acid sequence results in a much lower pI for streptavidin (pI 5--6). The strength of the noncovalent (strept)avidin--biotin interaction, along with its resistance to breakdown, makes it extraordinarily useful in the bioconjugate chemistry of any biomolecule \[[@B15]\].
- *Protein A and G.* This bioconjugation is mainly used for the immobilisation of antibodies. Protein A is derived from *Staphylococcus aureus* and Protein G is from *Streptococcus* species. Both have binding sites for the Fc of mammalian immunoglobulins (IgG), which ensures good orientation of the antibody after conjugation. The affinity of these proteins for IgG varies with the animal species. Protein G has a higher affinity for rat, goat, sheep and bovine IgG, as well as for mouse IgG1 and human IgG3. Protein A has a higher affinity for cat and guinea pig IgG. In addition to IgG Fc-binding sites, native Protein G contains binding sites for albumin, which can lead to non-specific staining. This problem is addressed by creating recombinant forms of the protein.
### Chelation or metal binding
This method is known as 'metal link immobilisation'. The metal salt or hydroxide, mainly titanium and zirconium salts, is precipitated and bound by co-ordination with nucleophilic groups on the surface (e.g. cellulose-, chitin-, alginic acid- and silica-based carriers) by heating or neutralisation. Due to steric factors, not all of the metal binding sites are occupied; therefore, some of the positions remain free to interact with groups from the biomolecule. The method is quite simple, but the adsorption sites are not uniform and the metal ion leakage is significant, which leads to a lack of reproducibility. In order to improve control over the formation of the adsorption sites, chelator ligands such as ethylenediaminetetra-acetic acid (EDTA) can be immobilised on the solid supports by means of stable covalent bonds. Elution of the bound proteins can be easily achieved by competition with soluble ligands or by decreasing the pH.
### Disulfide bonds
These methods are unique because, even though a stable covalent bond is formed between the support and bioreceptor, it can be broken by reaction with a suitable agent such as dithiothreitol (DTT) under mild conditions. Additionally, because the reactivity of the thiol groups can be modulated via pH alteration, the activity yield of the methods involving disulfide bond formation is usually high, provided that an appropriate thiol-reactive adsorbent with high specificity is used \[[@B16]\].
Protein stabilisation
=====================
The stabilisation of proteins is of great importance in many applications, and particularly in biosensor development. Stabilisation is known as the ability of a protein to retain its structural conformation or its activity when subjected to physical or chemical manipulation. The two main types of stability may be defined as (i) storage or shelf stability, and (ii) operational stability. The first relates to the retention of protein activity over time when stored as a dehydrated preparation, a solution or immobilised on to a surface. The second generally relates to the retention of activity when in use. The retention of the biological activity of the biomolecule involved in the biorecognition of an analyte is paramount, and this depends on retention of the biological structure. In most cases, the actual mechanism of stabilisation remains to be fully understood \[[@B17]\]. Protein engineering can be a useful tool for increasing the stability of certain enzymes \[[@B18]\] and antibodies \[[@B19]\]. However, this is only possible so long as structural data are available for the protein under examination. Protein stability is also evaluated using freeze-drying and spray-drying \[[@B20]--[@B22]\]. In this section, we will focus on the stabilisation achieved by the use of additives to modify the microenvironment of the protein under investigation and the covalent immobilisation of proteins on to transducers.
Dry and solution stability of proteins
--------------------------------------
Most of the publications on enzyme stabilisation are focused on the effect of additives on protein stability showing that it has been the most popular method of enzyme stabilisation \[[@B23]\]. The most important parameter in the promotion of structural integrity and thus stability of a protein is to retain the surface water activity. The components of a stabiliser formulation are normally made up of a combination of polyalcohols and polyelectrolytes. The polyalcohols include sugars and sugar alcohols that modify the water environment surrounding a protein, thus replacing and competing with free water within the system. This modified hydration shell confers protection to the protein, maintaining a 3D structure and biological activity, and enables long-term storage of biological materials both in solution and in the dehydrated state. Polyelectrolytes include numerous polymers of different charge and structure that form electrostatic interactions with proteins. As a result, large protein--polyelectrolyte complexes are formed, which retain full biological activity. Where polyelectrolytes and polyalcohols are combined, a synergistic effect is usually observed. Ratios of polyelectrolyte to polyalcohol are extremely important in the overall stabilisation of proteins. The buffer type, pH, ionic strength, concentration and ratio of stabilisers to protein/enzyme all play crucial roles in protein stabilisation both in the dry state and in solution. Some additives such as metal ions are directly related to enzyme structure and as such are not strictly surface interactions. The addition of dilute solutions of metal salts (e.g*.* magnesium or calcium) often stabilise proteins to a high degree and act synergistically with polyelectrolyte combinations. The addition of polyelectrolytes to solutions of proteins promotes the formation of soluble protein--polyelectrolyte complexes by electrostatic interaction. Polyhydroxyl compounds are then able to penetrate the structure more effectively, leading to enhanced stabilisation.
In most cases, the biological activity of the protein, enzyme or antibody is used as the main parameter to determine the stabilisation effect. When no simple method is available to directly measure biological activity, other techniques can be used to determine any molecular and structural modifications, such as gel electrophoresis, circular dichroism, fluorescence and turbidimetric measurements. Gel electrophoresis is normally used to examine the interactions between proteins and polymers and can be used to predict specific formulations, which lead to improved protein stability. Due to the extremely large size of the polymers, interaction between the polymer and protein is detected as the retardation of the enzyme in the gel matrix. This technique also helps in the determination of the affinity of polymer binding, allowing the prediction of the amount of polymer needed and subsequently reducing the cost of stabilisation considerably. However, in most cases, simple activity tests or immunoassays are sufficient to evaluate the remaining activity \[[@B17]\].
Stabilisation by protein immobilisation
---------------------------------------
The covalent immobilisation of proteins onto transducers is considered another way of stabilising the biomolecule involved in the biosensing event. Each protein examined is unique, so in most cases what works for one type of protein rarely works for another. In addition, the problem of stability can be complicated during the immobilisation process, which can result in a stable protein but with vastly reduced residual activity. Of the methods described above, the most common immobilisation method used on a pre-activated carbon transducer utilises covalent coupling to the amino groups. In some enzymes, such as acetylcholinesterase \[[@B24]\], most of the amino groups are situated on the back face opposite to the active site, therefore this methodology ensures retaining the activity after immobilisation. Covalent immobilisation of other enzymes such as glucose oxidase has been described in many cases. The results show that the immobilised complex is more stable than the native immobilised enzyme. Stabilisation of the immobilised enzyme with polyelectrolyte combinations shows a distinct difference from that of the soluble enzyme dehydrated from solution. The orientation of the enzyme on to the surface of the transducer might explain this effect \[[@B25],[@B26]\].
The advantages of having enzymes attached to surfaces have been exploited by living cells for as long as life has existed. In fact, there is experimental evidence to suggest that the immobilised state might be the most common state for enzymes in their natural environment. The attachment of enzymes to an appropriate surface ensures that they remain at the site where their activity is required. This immobilisation enhances the protein concentration at the proper location and it may also protect the enzyme from being destroyed. Multimolecular assembly depends upon the combination of weak non-covalent forces, hydrophobic interactions and covalent bonds (e.g. disulfide bridges) \[[@B27],[@B28]\]. All of these different forces have been exploited in the development of immobilised enzymes.
One of the main problems associated with the use of immobilised enzymes is the loss of catalytic activity. This is probably due to the immobilisation site blocking access to the substrate-binding site of the protein resulting in the observed loss of enzyme activity. There are several strategies to avoid these steric problems. The careful choice of enzyme residues involved in the immobilisation, and the use of hydrophilic and inert spacer arms can reduce steric hindrance dramatically \[[@B29]\].
More recently, other immobilisation strategies have been adopted. These include the use of magnetic particles (MPs), which is attracting interest due to the intrinsic advantages of the material. MPs have been commercially available for many years (e.g. BioMag®, Dynabeads®, Adembeads® and Miltenyi®) and are widely used in laboratories to extract desired biological components, such as cells, organelles or DNA, from a fluid. In recent years, the magnetic properties of MPs have also been used as labels \[[@B30]\] and bioreceptor platforms in biosensing \[[@B31]\]. As shown in Figure 3 (left), they consist of an inorganic core of iron oxide (magnetite (Fe~3~O~4~), maghemite (Fe~2~O~3~) or other insoluble ferrites) coated with a polymer to confer stability (e.g. polystyrene, dextran, polyacrylic acid or silica), with added functional groups (e.g. amino and carboxylic acids) to make subsequent conjugations easy. Hence, iron oxide particles can carry diverse ligands, such as peptides, small molecules, proteins, antibodies and nucleic acids. In particular, this material is attractive for use in immunomagnetic separation, where antibodies are conjugated to the particles, allowing the capture and orientation of the antigen on the surface of the electrode \[[@B32]\]. An example of solution stability for antibody-modified MPs is shown in Figure 3 (right). The antibody-modified MPs were stored in a number of stabiliser formulations at 32°C in the solution state. The different MPs were processed over several days by magneto-immunoassay to detect prostate-specific antigen (PSA). Agglomeration of particles in the different solutions caused variability; however, the results showed that the Q2030317P4 stabiliser retained at least 85% of the original antibody activity after 3 months of storage at this temperature. Meanwhile, the unstabilised version completely lost its binding ability within 1 month.
[Figure 3: Stabilisation by protein immobilisation. Schematic representation of (a) magnetic particles, (b) activated with functional groups, and (c) conjugated to biological molecules \[[@B32]\]; (d) solution stability of antibody-modified magnetic particles over 3 months at 32°C using stabilisers from Applied Enzyme Technology Ltd. Activity tested by optical magneto-immunoassay using 15 ng/ml PSA, 0.1 mg/ml anti-PSA antibody-modified magnetic particles (Abcam, ab10184) and 1 μg/ml horseradish peroxidase (HRP)-labelled antibody (Abcam, ab24466).]
Conclusions
===========
The field of bioconjugation has advanced at incredible pace. Tens of thousands of additional publications have appeared in biological, medical, polymer, material science and chemistry journals describing novel reactions and reagents along with their use in a variety of bioconjugation techniques. In this review, we discussed the most typical immobilisation techniques for active biological receptors on electrode surfaces, which could be extended to other transducer systems such as optical, piezoelectric or calorimetric as well as nanomaterials for all types of biosensors. The goal of the immobilisation method is maintaining biological activity while favouring, or at least not altering, the kinetics of the biological reaction.
The actual stabilisation factor for most biosensors seems to be a combination of structural stabilisation by immobilisation on to a surface and the addition of specific stabiliser molecules. Both shelf stability and operational stability are improved by using novel polyelectrolyte stabilisers. The methodology is relatively generic and can be adapted for many application areas. The molecular mechanisms of stabilisation are currently under investigation by using more sophisticated techniques such as circular dichroism, fluorescence spectroscopy, differential scanning calorimetry, electrophoretic techniques, analytical centrifugation and electron microscopy. Data accumulated from such experiments will help researchers to understand more about how proteins denature at the molecular level and ultimately enable the stabilisation of proteins in a more predictable fashion.
The combination of immobilising and stabilising biomolecules together with the integration of micro- and nano-structured materials within biosensing devices is providing excellent analytical performances for different applications. The need of more flexible, reliable and sensitive targeting of analytes has promoted research into the potential of nanomaterials and their incorporation into biosensor systems. Most of the immobilisation methods described in this review are used for the bioconjugation of biomolecules into nanomaterials such as carbon nanotubes (CNTs), gold nanoparticles (AuNPs), MPs or quantum dots (QDs).
Research and development into biosensors is focused on designs compatible with technologies, such as screen-printing techniques, which allow the industrial production of low-cost devices. In this context, both suitable bioconjugation strategies and the stabilisation of biomolecules on electrodes are essential for the development of commercially viable biosensors.
Summary
=======
- Biosensors rely on different biomolecules (nucleic acids, antibodies, cells or enzymes) as bioreceptors and signalling molecules or labels. Both immobilisation and stabilisation of the bioreceptor onto the transducer of the biosensor are of great importance.
- Immobilisation of active biological receptors on electrode surfaces is central to the construction of electrochemical biosensors. The most common immobilisation techniques are classified into two broad categories: irreversible methods, in which the bioreceptor cannot be detached without destroying either the biological activity of the biomolecule or the support, and reversible methods, in which the immobilised biomolecules can be detached from the support under gentle conditions.
- Protein stabilisation is known as the ability of a protein to retain its structural conformation or its activity when subjected to physical or chemical manipulations. Stabilisation of proteins is achieved by the use of additives to modify the microenvironment of the protein under investigation.
- Covalent immobilisation of proteins onto transducers is considered another way of stabilising the biomolecule involved in the biosensing event. Immobilisation of proteins onto nanomaterials such as magnetic particles provides analytical advantages.
Abbreviations
=============

MP: magnetic particle

PSA: prostate-specific antigen

SPE: screen-printed electrode
Funding
=======
We acknowledge funding by the European Commission Framework Programme 7 through the Marie Curie Initial Training Network PROSENSE \[grant number 317420, 2012---2016\].
Competing Interests
===================
The Authors declare that there are no competing interests associated with the manuscript.
A researcher with the University of North Carolina’s Institute of Marine Sciences tags a shark recently as part of the institute’s long-running shark study off the coast of Carteret County. (Mary Lide Parker/UNC photo)
Shark Week: Carteret County scientists research, educate year-round
MOREHEAD CITY — While shark enthusiasts get ready for the Discovery Channel’s 32nd annual Shark Week to begin Sunday, for some researchers based in Carteret County, every week is shark week.
Sharks are the research focus of several scientists and graduate students at the University of North Carolina’s Institute of Marine Sciences in Morehead City. The institute has the world’s longest-running shark research study, which formally began in 1972.
“It’s a spring, summer and fall, biweekly effort, where about 1 mile outside of Beaufort Inlet and 7 miles outside of Beaufort Inlet, we deploy 100 bait hooks on a long line and it soaks for an hour, then the long line is retrieved and we try to count up the sharks,” said IMS associate professor Dr. Joel Fodrie, who studies population and community ecology of coastal species, including sharks. “It’s been a very consistent program over 50 years.”
Dr. Fodrie said the long-running study has revealed a lot about sharks over the years, especially large-scale population trends. In the late 1980s, for example, researchers began to notice larger shark species were becoming less abundant, though it appears that trend has begun to shift in recent years.
The shark study has taught scientists a lot over the years, but Dr. Fodrie said there’s still much to learn about the fabled sea creatures.
“There’s a lot of mystery out there still. (Sharks) are fairly hard to study, they’re large, they’re highly mobile, we often don’t get to bring them into a lab, we don’t get to study them up close for hours and hours,” he said. “We need to spend some time in and around the water working with sharks simply because we have a lot of gaps to fill.
“I feel like our role here is really to figure out those mysteries of how sharks fit within the coastal community,” he continued.
Graduate students at IMS are among the researchers attempting to uncover some of those mysteries.
Savannah Ryburn is a master’s student at IMS researching what sharks eat. She has been using a new technique known as “fecal swabbing,” which involves using DNA analysis on shark poop to determine what they’ve eaten. Traditional methods involve cutting into the animal’s stomach to observe the contents, requiring the shark to be dead first.
“It’s been done on some animals terrestrially, but this is the first time it’s been implemented in shark research,” she said of the emerging technique. “It will be really cool to test out and see if we can implement this as a technique rather than the old method.”
Ms. Ryburn said not only does the technique let the sharks live, it is also less labor intensive and more accurate.
One thing shark scientists have learned recently is that many sharks have a more specialized diet than originally thought, meaning they stick to just one main food source rather than a wide range of foods.
“This research is really important for shark populations because they’re actually declining, mainly due to fisheries’ overexploitation,” she said. “So studying the diet of these sharks can help create more informed fishing regulations to protect the specific food sources for these sharks.”
Jeff Plumlee, a Ph.D. student at IMS, is studying shark habitats and communities in North Carolina and how they fit into the larger ecosystem. He said upwards of 50 species of sharks call North Carolina waters home.
“North Carolina is one of the more diverse places on the Atlantic coast for sharks,” he said. “…That’s because we have a really unique geography. We are the Hatteras break, which is where the Outer Banks juts out into the Atlantic Ocean is what we call a biogeographic break.”
To the north of the Hatteras break, Mr. Plumlee explained, live shark species typically associated with cold water, and to the south are the more warm-water shark species.
“Because North Carolina is positioned where it is, we have a really unique opportunity to look at all these different species and what environmental conditions are important habitat drivers that lead them to call North Carolina home,” he said.
Understanding more about shark habitats and the food webs they’re involved in helps scientists make better recommendations about fisheries management. Mr. Plumlee said protecting sharks is important to keeping ecosystems balanced, but he also recognizes the economic and cultural significance of the region’s fishing industry.
“It’s important to draw that balance,” he said. “Carteret County was built on the backs of fishermen, so it’s really important that we take into account how fishing is really important for our families and for our food resources, but it’s also important to keep those resources around for generations to come.”
Though they focus on different areas of research, all the shark scientists interviewed emphasized the importance of sharks for keeping the overall ecosystem balanced. Not only are they the ocean’s apex predators, keeping different populations in check, they are also what scientists refer to as “canaries in the coal mine,” meaning changes in shark populations usually indicate a larger problem in the ecosystem.
“Sharks are really, really important to our ecosystems. They are ecosystem engineers, so they determine the diversity and the distribution of their prey items,” Mr. Plumlee said. “They are actually some of the most important controls we have to having a really healthy and diverse ecosystem.”
For those who are afraid of sharks, most experts agree the vast majority of them pose no threat to humans. In fact, shark bites are “extraordinarily” rare, and most sharks measure around just 3 feet. Ms. Ryburn pointed out that the likelihood of being bitten by a shark is actually lower than the likelihood of getting into a car accident, for example.
“I think they get a way worse rap than they should,” Ms. Ryburn said. “…It’s more dangerous to get in your car and drive on the highway or down the street than it is to be in the ocean.”
Contact Elise Clouser at [email protected]; by phone at 252-726-7081 ext. 229; or follow on Twitter @eliseccnt.
Beta Cell Discovery Holds Promise for Diabetes Treatment
Shivatra Talchai wasn’t the only researcher in Domenico Accili’s lab at Columbia University Medical Center who was working toward a cure for diabetes, but at 25, she was certainly the youngest. Nicknamed “Noi,” which translates roughly—and appropriately—to “junior” in her native Thai, she had yet to complete her doctorate in metabolic biology, but that didn’t stop her from approaching Accili as a graduate student in 2004.
Something of a legend in the world of diabetes research, Accili had a reputation for being a nurturing and supportive mentor to the researchers who worked with him. Talchai was determined to land one of the 10 or so coveted spots in his lab, even though she was two years shy of earning her PhD, a prerequisite for working with Accili. “I was a graduate student when I started, so I was quite the exception,” she says. “I wouldn’t take no for an answer.” In the incremental world of scientific discovery, where years pass with no guarantee of success, that sort of persistence would serve her well. She had no way of knowing it at the time, but the discovery she would make about what causes beta cells to lose their ability to make adequate amounts of insulin—the hallmark of diabetes—would represent what Accili would call “a sea change” in the way the scientific community views the disease.
An Auspicious Beginning
Arguably the most promising development for people with diabetes since the discovery of insulin began with the roundworm, of all things. In the late 1990s, scientists observed some connections that made them wonder if a gene originally identified for its effects on the lifespan of roundworms might also play a role in insulin action. The idea that a gene with the unwieldy name of Forkhead box O—FoxO, for short—could have implications for people with diabetes seemed farfetched at the time. How could something that happens in a spaghetti-like parasite known mostly for wreaking havoc on the intestines of puppies be of any consequence to people, never mind people with diabetes? “We worry about curing diabetes in people, so any time we try to reproduce the disease in an animal—be it a dog, a squirrel, a monkey, or a mouse—we’re straying further away from the actual person who [has] diabetes,” says Accili, the Russell Berrie Foundation professor of diabetes and director of the Diabetes Research Center at Columbia University College of Physicians and Surgeons. “When it was proposed that the FoxO gene played a role in insulin action in the roundworm, there was really no way to predict that it would be relevant to a human being.”
But there was plenty to suggest it might be. For starters, the discovery of FoxO in worms answered a question researchers had been unable to address: How does insulin control which genes are turned on and off? Scientists had been working on targeting various enzymes that break down glucose. But if they could understand how insulin affects these enzymes, they might be able to identify a single target—what they call a “master regulator”—to treat diabetes more effectively.
Imagine “an old house with leaky faucets in the bathrooms and kitchen,” says Accili. “The first thing you want to do is stop the leaks, so you look for the water main. Looking for the way in which insulin regulates genes was like looking for the water main. Can we control that so we can then go back and fix the individual faucets? FoxO [seemed to] meet the requirements of a gene that could be modulated by insulin to affect many genes at once.” In other words, insulin could alter FoxO (the water main) in a way that affected a lot of genes (the leaky faucets) at the same time. Or at least that’s the way it looked on paper.
The Path to Discovery
Laying the foundation for the kind of discovery that leads to new therapies and treatments is often the work of many scientists. Other scientists then add building blocks—one painstaking lab finding at a time—until, finally, what emerges is a game-changer.
Case in point: In the late 1990s, Accili was one of many researchers to show that insulin regulates the FoxO gene by changing where it’s situated in the liver cell. FoxO sits in the nucleus of the cell, where its job is to turn on some genes and turn off others. When cells are exposed to insulin, FoxO responds by leaving the nucleus very quickly. For someone who might be prone to type 2 diabetes, that quick exit from the nucleus is a good thing because it stops activating the genes that are linked to diabetes.
That finding helped lay the foundation for what came next. In 2003, Accili and his team were able to show in mice that when FoxO’s location inside the cell changes, glucose levels do, too. “This was interesting, but not nearly as interesting as what we found going further into this,” says Accili. “We had been focusing on the liver because that’s the glucose factory for the body and it’s the key site of the disease in type 2 diabetes. But we know that diabetes affects virtually all the organs in the body—from the kidneys to the lungs, the eyes, and the brain.” His next challenge: determine how this mechanism explains what happens in these areas, as well as in the insulin-producing beta cell.
“For people with type 2 diabetes, the beta cell was on the receiving end of years and years of metabolic stress,” says Accili. “The beta cell would first try to adapt to the body’s changing needs for insulin by making more insulin but would eventually give up once the need became overwhelming. We knew this process—so-called beta cell failure—was not really understood. The assumption was that the beta cell just dies of exhaustion, but I never found this convincing.”
Accili thought of the people with type 2 he treated as a young doctor in the ’80s. Once they began following a healthier eating and exercise regimen, they “would rebound with robust insulin properties,” he says. He knew this wasn’t consistent with the long-held notion that their beta cells had died. He just couldn’t explain it.
Maybe Talchai, his young protégé with a particular knack for creative thinking, could.
The Aha Moment
By 2006, the FoxO gene had all but taken over the Accili lab. Virtually every researcher was focused on FoxO in various organs and tissues (the brain, the liver, the heart, the gut, the fat cells) and not just in type 2 diabetes, but in type 1, as well. Talchai—at the time 27 and now armed with a PhD—was assigned the beta cell, and this mystery: What was the physiological significance of Accili’s research showing the relationship between FoxO’s location in the cell and the change in glucose levels?
She began her experiment by removing FoxO from the beta cells of a group of mice. Then she waited (and waited) for them to age. Initially, she noticed, the mice were normal. But over time, as they experienced regular stressors of life—specifically, multiple pregnancies in the female mice and aging in the males—they developed high blood glucose, decreased insulin secretion, and other signs of type 2 diabetes, the same way people with type 2 do. That combination means the insulin that should be made in order to lower blood glucose isn’t being produced, suggesting something is wrong with the beta cell.
What happened to the beta cells of these mice that now had diabetes? By 2009, Talchai still had no definitive answer. After almost four years on the case, she was ready to give up. At the outset, Accili told her what he tells every researcher taking on a project in his lab: “Don’t worry about how long it takes or how much it costs. My concern is making sure you’ll be cared for, and your concern is devoting 100 percent of your waking life energies to the project.” And that’s what Talchai had done. But her 60- and 70-hour work weeks yielded nothing conclusive.
On the day in 2009 she showed up at the lab ready to shut down the experiment, she spotted something she’d never seen before. In looking at the pancreas of a pregnant mouse with diabetes, she saw that the beta cells hadn’t died. She had been using a genetic trick to visualize beta cells and insulin with colored labels. When the beta cells were making insulin, as they should, they would appear yellow under the microscope; if beta cells were dying, she would have seen fewer yellow cells. But what she saw instead were green cells in place of yellow ones, indicating the cells that used to make insulin were still alive. In other words: The beta cells weren’t dead; they were essentially sleeping.
“People had always assumed the main reason beta cells failed in people with type 2 diabetes was because the beta cells died,” says Talchai. “But on that day, I saw that that was not true. The beta cells were actually alive. They did not produce insulin anymore, but they did not die.”
As aha moments go, this discovery—a process known as dedifferentiation—was “a real stunner,” says Accili, who was awarded the American Diabetes Association’s 2017 Banting Medal for Scientific Achievement for long-term contributions to the understanding, treatment, or prevention of diabetes. “We’ve been telling you, ‘Too bad—your beta cells are gone. Let’s give you insulin to replace them,’ ” he says. “But now we’re telling you, ‘The beta cells are still there and we know that if we act early enough we can bring them back to life so that you may never need to go on insulin.’ Nowhere has the scope of scientific inquiry been more advanced by this line of investigation on FoxO than in the pancreatic beta cell.”
Now that scientists know the beta cells aren’t dead but are merely sleeping, there’s the potential to wake them up through lifestyle modifications such as exercise, diet, and certain drugs. Scientists are still trying to figure out how to do that.
“It’s small advances that turn into big changes that benefit patients,” says Accili. “You get to the right answer through a process of unending mistakes. Once we’ve stripped off the mistakes, what’s left behind is the new discovery.”
3 Questions for Domenico Accili, MD
1. People who live with diabetes hear about all the research that’s being done and wonder why it’s taking so long to find a cure. Why should they be hopeful about the current research?
“The tools to understand the biology of disease have never been as powerful as they are today. Our understanding of [the way diabetes starts and develops], insulin resistance, and beta cell failure has reached a critical stage. We are now positioned to design new classes of drugs that will tackle root causes of diabetes and consign the disease to the realm of medical curiosities, much the same way as polio or smallpox.”
2. How close are we?
“The road ahead is fraught with uncertainty, and I don’t want to minimize the obvious difficulties of translating this promising biology. In many ways, this is only a start. But I do want to emphasize how a formerly intractable problem has been de-convoluted into its simplest parts by way of methodical, if boring, scientific advance and is now ripe for transformative drug discovery.”
3. How ripe is the field for drug discovery?
“Patients are frustrated by our putting dates on things we can’t yet foresee. It could literally be tomorrow and it could literally be 20 years from today. We have all the ingredients—it’s up to us to make them work.”
What About Type 1?
The FoxO gene also has implications for type 1 diabetes. In a study published in a 2014 issue of the journal Nature Communications, Columbia University’s Domenico Accili, MD, and his team showed that by turning off FoxO in the gut, they were able to convert human gastrointestinal cells into insulin-producing cells. The hope is that someday a drug could trigger the same effect in humans.
Additional sources: Utpal Pajvani, MD, PhD, assistant professor of medicine in the Division of Endocrinology at Columbia University Medical Center; and Rebecca Haeusler, PhD, assistant professor of pathology and cell biology at the Naomi Berrie Diabetes Center at Columbia University Medical Center. | http://www.diabetesforecast.org/2018/01-jan-feb/beta-cell-discovery-holds.html |
Accredited by the CGE (Conférence des Grandes Écoles), the Artificial Intelligence for Marketing Strategy program from EPITA School of Engineering and Computer Science consists of 3 semesters’ on-campus classes, 1 semester of internship and a learning experience in Dublin.
To obtain their degree, students must acquire 120 ECTS (European Credits Transfer Systems) and a B1 level of French.
This program has two cohorts, fall and spring, which start in September and March respectively.
What you'll learn
The Master of Science in Artificial Intelligence for Marketing Strategy program is a joint degree between EPITA, an engineering school and EM Normandie, a business school.
The aim of the program is to prepare students with AI skills to apply technology to enhance an organization’s marketing strategies and decision-making process.
Graduates of this program will be able to utilize artificial intelligence techniques and tools to:
- improve consumer engagement experience by creating relevant client profiles based on KYC (Know Your Customer) concepts;
- monitor and analyze a variety of communication channels, such as social media, and assist in the understanding of the market’s perception of a brand;
- provide companies with relevant, timely and precise customer service and social media interaction;
- optimize the marketing content in order to boost the visibility of the companies and drive traffic to brands’ websites; and
- exploit computer vision to revolutionize the visual engagement strategy.
Careers
- Data Enabler
- Data Visualization Consultant
- Marketing Data Analyst
- Entrepreneur
- Customer Intelligence Manager
- E-marketer
- Operational researcher
- Business Intelligence Consultant
- Data Manager
- Data Analyst
- Data Strategist
- Data Planner
- Marketing Scientist
- Big Data Consultant
- Data Scientist
- Marketing Strategist
- Expert/Analyst in marketing analysis/marketing/research/CRM/credit analysis
- Business Data Analyst
Programme Structure
Courses include:
- Operational Marketing Concepts
- Strategic Marketing Principles
- Multi-cultural Management
- Global Leadership
- Mathematics for Data Science
- The Ethics of Artificial Intelligence
Key information
Duration
- Full-time
- 18 months
Start dates & application deadlines
- Fall intake (starting in September): application deadline not specified.
- Spring intake (starting in March): application deadline not specified.
Disciplines
- Marketing
- Data Science & Big Data
- Artificial Intelligence
Academic requirements
We are not aware of any academic requirements for this programme.
General requirements
- a 3-year or 4-year Bachelor’s degree (or higher) regardless of their discipline. There are no age restrictions. Students can join this program immediately after receiving their Bachelor’s degree or later in their careers.
- To apply, candidates should have the following digital documents translated to English or French:
- Curriculum Vitae
- Passport
- Official university transcripts
- Certified copy of the Bachelor’s degree certificate
- Certified copy of the High School diploma
- 2 letters of recommendation
- TOEFL (80 IBT), TOEIC (800), IELTS (6.0), if the native language or the medium of instruction of the Bachelor’s studies of the candidate is not English
- Statement of purpose
Tuition Fee
- International: 12,933 EUR per year, based on a total tuition of 19,400 EUR for the full 18-month programme.
- EU/EEA: 12,933 EUR per year, based on a total tuition of 19,400 EUR for the full 18-month programme.
Semesters 1, 2, 4: € 6000 each
Semester 3: € 7400
Living costs for Paris
The living costs include the total expenses per month, covering accommodation, public transportation, utilities (electricity, internet), books and groceries.
Funding
Studyportals Tip: Students can search online for independent or external scholarships that can help fund their studies. Check the scholarships to see whether you are eligible to apply. Many scholarships are either merit-based or needs-based. | https://www.mastersportal.com/studies/347529/artificial-intelligence-for-marketing-strategy.html
We are a group of professional therapists who provide training to other practitioners specializing in the treatment and remediation of Autism Spectrum and related disorders. As one of the leading Non-Public Agencies (NPA), VHAP currently works with numerous school districts to provide behavior intervention services.
Our professional staff provides training and consultation support to school districts, agencies and home programs. We believe in a ‘train-the-trainers’ model which provides an opportunity for professionals to develop and refine their learning and then share it with their staff. We can help you implement and develop effective programs and enhance intervention services provided in classrooms and home programs.
Types of Professional Training:
- Consultation and Assessment Services
- Model 1:1 Direct Behavior Intervention in a Classroom Setting
- Consultation With Teachers, Psychologists and Other Service Providers
- Training of Classroom Aides to Implement Positive Behavior Support Strategies
Training Topics Can Include: | https://www.vhap.org/professional-training/ |
I recently read two articles which made me think that we still do not understand well enough what “information” is. Both articles consider ways of managing information through “side channels” or “covert channels”. In other words, whatever we do produces much more information than we believe.
The first article is “Attack of the week: searchable encryption and the ever-expanding leakage function” by cryptographer Matthew Green, in which he explains the results of this scientific article by P. Grubbs et al. The scenario is an encrypted database, that is, a database where the column data in a table is encrypted so that whoever accesses the DB has no direct access to the data (this is not the case where the database files are simply encrypted on the filesystem). The encryption algorithm is such that a remote client who knows the encryption key can run some simple kinds of encrypted searches (queries) on the (encrypted) data, extracting the (encrypted) results. Only on the remote client can the data be decrypted. Now an attacker (even a DB admin) who, under some mild assumptions, has some generic knowledge of the type of data in the DB and is able to monitor which encrypted rows are returned by each query (whose parameters she cannot read), can apply some advanced statistical mathematics from learning theory and nevertheless reconstruct the contents of the table with good precision. A simple example of this is a table containing the two columns employee_name and salary, both of them with encrypted values. In practice this means that this type of encryption leaks much more information than we believed.
The second article is “ExSpectre: Hiding Malware in Speculative Execution” by J. Wampler et al. and, as the title suggests, is an extension of the Spectre CPU vulnerability. The Spectre and Meltdown attacks also have to do with information management, but in those cases the information is managed internally in the CPU and was supposed not to be accessible from outside it. In this particular article the idea is actually to hide information: the authors have devised a way of splitting a malware into two components, a “trigger” and a “payload”, such that both components appear benign to standard anti-virus and reverse-engineering techniques. So the malware is hidden from view. When both components are executed on the same CPU, the trigger alters the internal state of the CPU’s branch predictor in such a way as to make the payload execute malicious code as a Spectre speculative execution. This does not alter the CPU’s correct execution of the payload program, but through Spectre extra speculative instructions are executed and these, for example, can implement a reverse shell to give an attacker external access to the system. Since the CPU discards the effects of the extra instructions at the end of the speculative execution, it appears as if they had never been executed and thus they seem to be untraceable. Currently this attack is mostly theoretical, difficult to implement and very slow. Still, it is based on managing information in covert channels, as both Spectre and Meltdown are CPU vulnerabilities which also exploit cache information side-channel attacks.
NIST has announced the conclusion of the first round of the standardization process for post-quantum cryptography algorithms, that is, public-key and digital-signature algorithms which are not susceptible to attacks by quantum computers.
The announcement can be found here and a report on the 26 participants to the second round can be downloaded from here.
I have just published here the second article of my short series on the EU General Data Protection Regulation 2016/679 (GDPR) for IT.
In this article I discuss a few points about the risk-based approach required by the GDPR, which introduces the Data Protection Impact Assessment (DPIA), and a few IT security measures which are often useful to mitigate risks to personal data.
I have just published here the first article of a short series in which I consider some aspects of the requirements on IT systems and services due to the EU General Data Protection Regulation 2016/679 (GDPR).
I started to write these articles in an effort, first of all for myself, to understand what the GDPR actually requires from IT, which areas of IT can be impacted by it, and how IT can help companies implement GDPR compliance. Obviously my main interest is in understanding which IT security measures are most effective in protecting GDPR data and what the interrelation is between IT security and GDPR compliance.
It has been known for a few years that the SHA1 cryptographic hash algorithm is weak, and since 2012 NIST has suggested substituting it with SHA256 or other secure hash algorithms. Just a few days ago the first practical example of this weakness was announced: the first computed SHA1 “collision”.
Since many years have passed since the discovery of SHA1’s weaknesses and substitutes without known weaknesses are available, one would expect that almost no software uses SHA1 nowadays.
Unfortunately reality is quite the opposite: many applications depend on SHA1 in critical ways, to the point of crashing badly if they encounter a SHA1 collision. The first to fall to this was the WebKit browser engine source code repository, due to Apache SVN’s reliance on SHA1 (see e.g. here). But Git also depends on SHA1, and one of the most famous adopters of Git is the Linux kernel repository (indeed, Linus Torvalds created Git to manage the Linux kernel source code).
For some applications, substituting SHA1 with another hash algorithm requires extensively rewriting large parts of the source code. This requires time, expertise and money (probably not in this order) and does not add any new features to the application! So unless it is really necessary, or no way is found to keep using SHA1 while avoiding the “collisions”, nobody really considers doing the substitution. (By the way, it seems that there are easy ways of adding controls to avoid the above-mentioned “collisions”, so “sticking plasters” are currently being applied to applications that adopt SHA1.)
But if we think about this issue from a “secure software development” point of view, there should not be any problem in substituting SHA1 with another hash algorithm. Indeed, by designing software in a modular way and keeping in mind that cryptographic algorithms have a limited life expectancy, one should plan from the beginning of the software development cycle how to substitute one cryptographic algorithm with another of the same class but “safer” (whatever that means in each case).
Obviously this is not yet the case for many applications, which means that we still have quite a bit to learn about how to design and write “secure” software.
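As a minimal sketch of the kind of modularity described above, the hash algorithm can be treated as a configuration value rather than a constant hard-coded at every call site. The example below only assumes the standard java.security.MessageDigest API; the class and method names are illustrative, not taken from any real application, and the point is simply that moving from "SHA-1" to "SHA-256" (or to whatever comes next) becomes a one-line change.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical utility class: the digest algorithm is injected once,
// instead of being scattered through the code base.
public final class ContentHasher {

    private final String algorithm; // e.g. "SHA-256" today, a successor tomorrow

    public ContentHasher(String algorithm) {
        this.algorithm = algorithm;
    }

    // Returns the digest of the input as a lowercase hex string.
    public String hexDigest(String input) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder sb = new StringBuilder(digest.length * 2);
        for (byte b : digest) {
            sb.append(String.format("%02x", b & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Swapping the algorithm is a configuration change, not a rewrite.
        ContentHasher hasher = new ContentHasher("SHA-256");
        System.out.println(hasher.hexDigest("hello world"));
    }
}
```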
The security of modern cryptography is based on number-theoretic computations so hard that the problems are practically impossible for attackers to solve. In practice this means that approaches and algorithms to crack the cryptographic algorithms are known but with the current best technologies it would take too many years to complete an attack.
But what if a shortcut is found at least in some particular cases?
When such a shortcut is found, the typical advice is to move to stronger parameters or algorithms, for example to use DH with 2048-bit or larger keys.
What does this teach us about the security that cryptography provides to everyday IT?
How should we implement and manage cryptography within IT security?
Is cryptography joining the “zero days => vulnerabilities => patch management” life-cycle which has become one of the landmarks of current IT security?
Wired reports in this article of a recent advance in deployed cryptography by Google.
Last summer the NSA published an advisory about the need to develop and implement new crypto algorithms resistant to quantum computers. Indeed, if and when quantum computers arrive, they will be able to easily crack some of the most fundamental crypto algorithms in use, like RSA and Diffie-Hellman. The development of quantum computers is slow, but it continues, and it is reasonable to expect that sooner or later, some say in 20 years, they will become reality. The development of new crypto algorithms is also slow, so the quest for crypto algorithms resistant to quantum computers, also called post-quantum crypto, has already been going on for a few years.
Very recently Google announced the first real test case of one of these new post-quantum algorithms. Google will deploy to some Chrome browsers an implementation of the Ring-LWE post-quantum algorithm. The chosen test users will use this algorithm to connect to some Google services. Ring-LWE will be used together with the crypto algorithms the browser currently adopts. Composing the current algorithms with Ring-LWE guarantees a combined level of security: the combination is at least as secure as the strongest algorithm used in it. It should be noted that Ring-LWE is a much more recent crypto algorithm than the standard ones, and its security has not yet been established to a comparable level of confidence.
Even if the level of security does not decrease, and hopefully only increases, it remains to be seen how this will work in practice, in particular in terms of performance.
For modern cryptography, this two-year Google project could become a cornerstone for the development and deployment of post-quantum algorithms.
The security researcher Gal Beniamini has just published here the results of his investigation into the security of Android’s Full Disk Encryption, and he found a way to get around it on smartphones and tablets based on the Qualcomm Snapdragon chipset.
The cryptography is fine, but some seemingly minor implementation details give resourceful attackers (like state/nation agencies or well-funded organized crime groups) the possibility of extracting the secret keys which should be protected in hardware. Knowledge of these keys would allow decrypting the data in the file system, the very issue which was at the basis of the famous Apple vs. FBI case a few months ago.
Software patches have been released by Google and Qualcomm but, as usual with smartphones and tablets, it is not clear how many affected devices have received the update or will ever receive it.
In a few words, the problem lies in the interface between Qualcomm’s hardware module, called the KeyMaster module, which generates, manages and protects the secret keys, and the Android operating system, which needs indirect access to the keys, in this case to encrypt and decrypt the file system. Some KeyMaster functions used by Android can be abused to make the module reveal the secret keys.
This is another case which proves how difficult it is to implement cryptography right.
Monitoring outgoing traffic to detect intrusions in IT systems is not a new concept, but it often does not seem to be sufficiently appreciated, understood and implemented.
IT security defences cannot guarantee us against every possible attack, so we must be prepared for the event of an intrusion and for managing the associated incident.
The first step in incident management is to detect an intrusion. Traditional tools like Anti-Virus, Intrusion Detection/Prevention Systems (IDS/IPS) etc. do their job but they can be bypassed. But intrusions can also be detected by monitoring the outgoing traffic.
In my recent personal experience, some intrusions have been detected and stopped because the outgoing traffic was monitored and blocked. Since the deployed malware was not able to call back home, it did not do anything and there was no damage; and since the outgoing traffic was monitored, the intrusion was immediately detected.
But monitoring outgoing traffic to detect intrusions is becoming more and more difficult. For example, attackers are increasingly adopting stealth techniques like fake DNS queries. An interesting example was recently described by FireEye in “MULTIGRAIN – POINT OF SALE ATTACKERS MAKE AN UNHEALTHY ADDITION TO THE PANTRY”. In this case, malware exfiltrates data by making DNS calls to domains with names like log.<encoded data to exfiltrate>.evildomain.com. The DNS query obviously fails, but the name of the requested domain, that is the data the malware is exfiltrating, is written in the logs of the receiving DNS server.
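As a rough illustration of the kind of check a defender might layer on top of DNS logs, the sketch below flags query names containing unusually long labels, one common (though far from conclusive) sign of data being smuggled out in queries. The class name, the threshold and the sample queries are assumptions made up for this example; a real detector would combine many more signals, such as label entropy, query volume and destination reputation.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative heuristic only: flags DNS query names whose labels look like encoded payloads.
public final class DnsQueryChecker {

    private static final int SUSPICIOUS_LABEL_LENGTH = 40; // assumed threshold, tune for your environment

    // Returns true if any dot-separated label in the queried name is suspiciously long.
    public static boolean looksSuspicious(String queryName) {
        for (String label : queryName.split("\\.")) {
            if (label.length() >= SUSPICIOUS_LABEL_LENGTH) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> queries = Arrays.asList(
                "www.example.com",
                "log.4a6f686e446f65323031382d31322d333120343a32333a3131.evildomain.com");
        for (String q : queries) {
            System.out.println(q + " -> " + (looksSuspicious(q) ? "FLAG" : "ok"));
        }
    }
}
```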
As attackers are getting more creative to hide the back communication between malware and their Command & Control services, IT Security will need to devise more proactive approaches to monitoring and blocking outgoing traffic. | https://blog.ucci.it/tag/cryptography/ |
It is a general assumption in linguistic theory that the categories of tense, aspect, and mood (TAM) are inflectional categories of verbal classes only. In a number of languages around the world, however, nominals and other NP constituents are also inflected for these categories. In this article we provide a comprehensive survey of tense/aspect/mood marking on NP constituents across the world's languages. Two distinct types are identified: PROPOSITIONAL NOMINAL TAM, whereby the nominal carries TAM information relevant to the whole proposition, and INDEPENDENT NOMINAL TAM, in which the TAM information encoded on the nominal is relevant only to the NP on which it is marked, independent of the TAM of the clause as a whole. We illustrate these different types and their various properties using data from a wide range of languages showing that, while certainly unusual, the phenomenon of nominal tense/aspect/mood marking is far less marginal than is standardly assumed. Nominal TAM inflection must be accepted as a real possibility in universal grammatical structure, having significant implications for many aspects of linguistic theory.
Learning From the Past Badge
Knowledge of past designs and their impact on our present and future is foundational to an architecture education. Learners are required to explore precedent cases to help them gain inspiration and enhance their design skills.
Learning from the past badge requirements
reflection
It is very interesting to think about past architecture's influence on the present and future. We're sure you have a lot of thoughts to share about your experience. Written and video submissions accepted.
Your submission must include:
- a brief description of your experience
- 3 "take-away" thoughts about your experience
- how you plan to use this experience to influence how you design in the future
photographs or sketches
Document your experience by taking photographs or drawing sketches.
Your submission must include: | http://www.alnpgh.org/learning-from-the-past-badge |
GENERAL DESCRIPTION OF THE INVENTION
The invention relates to the development of a specialized sterilization indicator that yields post-sterilization evidence of attainment of certain sterilization parameters by: (1) the change of a chemical indicator to give immediate visual indication of achievement of the desired temperature, and (2) biological verification of the destruction of the spores contained therein by the subsequent incubation of these indicator organisms. The specification of the chemical indicator, or melt pellet, in the sealed glass vial is such that the change takes place after achieving the desired sterilization temperature, i.e., 250 F. (121 C.), 270 F. (132 C.), or 285 F. (141 C.), for a defined period of time, thereby providing visual evidence of achieving sterilization temperature. The biological indicator ampule contains a growth promoting culture medium with a pH indicator or a vital dye selected for the indicator organism and spores of known heat resistance. Change of the pH indicator present in the culture medium and turbidity of the media following incubation are evidence for non-sterility, whereas no color change and lack of turbidity after incubation are evidence for sterility.
Two novel applications of this invention are as follows:
1. There are, at present, two generally known means for monitoring the efficacy of a solution sterilization cycle, neither of which is easily carried out. The first and most difficult would be seeding a solution to be sterilized with a known amount of spores, sterilizing the container of solution, recovery and concentration of the spores, and determination of the viability of the spores by various microbiological means. The second generally known method would be to fill a solution flask with culture medium, seed the media with spores of a known resistance, sterilize the container of medium and incubate the solution flask to determine viability of the spores following the sterilization cycle. This method has the disadvantages of requiring immediate use of the culture medium to preclude the growth of adventitious microorganisms and is costly due to the large quantities of culture medium which must be used for such tests. Storage of this type of test container also presents a problem in incubation due to their bulk.
This invention is applicable to monitoring sterilization of solutions by placement of the indicators directly in one or more of the containers of solution being sterilized. Usually one or more of the indicators is employed to monitor a solution sterilization cycle. It provides a compact and easy-to-use sterilization indicator which can be evaluated at the point of use, thereby eliminating the involved procedures and specialized equipment required in the commonly used methods described above.
2. There is presently no acceptable method for evaluating the efficacy of a washer-sterilizer cycle, due to the filling of the sterilizer with water, or water sprays, during a portion of the washer-sterilizer cycle. Such action does destroy the integrity of the packaging commonly used in spore strip type indicators, thus making them susceptible to post-sterilization adventitious contamination and false sterilization test results. The system disclosed herein allows the retention of the stability of the biological and chemical indicator until the sterilization cycle is completed. Placement of this combined indicator may be accomplished by various means of anchorage, i.e., tape, clips, or implantation in goods.
Other prior art applications may also be applicable to such a combined chemical and biological indicator system. Such applications might be placement in challenge packs or within other portions of a sterilizer load, or such indicators may be used in the evaluation of a dry heat sterilization means.
REFERENCE TO PRIOR ART
U.S. Pat. No. 2,854,384 shows a glass ampule containing two compartments separated by an aperture partition. The aperture is closed by a meltable plug. One compartment contains spores and the other contains a culture media. During sterilization, the plug melts and falls into the culture media allowing the spores to enter the culture media for incubation.
U.S. Pat. No. 3,440,144 discloses an apparatus for testing sterilization including a bag containing a glass ampule with culture medium therein and a spore strip in the bag. After sterilization, the operator can break the glass ampule allowing the culture medium to join the spores for incubation.
U.S. Pat. No. 3,661,717 shows a unitary indicator much like the preceding indicator.
U.S. Pat. No. 2,998,306 shows a spore strip of a common variety.
None of the forementioned patents combine both an immediate visual indicator and the confirming biological sterilization indicator.
OBJECTS OF THE INVENTION
It is an object of the present invention to provide an improved sterilization indicator.
Another object of the invention is to provide a sterilization indicator that is simple and efficient to use.
Another object of the invention is to provide a self-contained indicator system, which is capable of being evaluated and incubated at the point of use, thus eliminating procedures requiring a laboratory and microbiologist.
Another object of the invention is to provide a sterilization indicator wherein a chemical indicator is isolated from culture media in a sealed glass ampule. The chemical indicator changes when the ambient media has reached a predetermined temperature level and the media containing spores can be subsequently incubated, thereby giving proof positive of the success of the sterilization cycle.
With the above and other objectives in view, the present invention consists of the combination and arrangement of parts hereinafter more fully described, illustrated in the accompanying drawing and more particularly pointed out in the appended claims, it being understood that changes may be made in the form, size, proportions and minor details of construction without departing from the spirit or sacrificing any of the advantages of the invention.
Construction of the indicator is not limited to the chemical indicator being located within the sealed ampule nor within the culture medium. Similar results may also be realized by placement of the chemical indicator externally, either separated or attached to the container of culture medium.
GENERAL DESCRIPTION OF THE DRAWINGS
FIG. 1 is a side view of the sterilization indicator according to the invention.
FIG. 2 is a view of the sterilization indicator in a container of solution to be sterilized.
FIG. 3 is an enlarged view of the inner vial of the sterilization indicator showing the meltable pellet therein.
FIG. 4 is a view of another embodiment of the invention.
FIG. 5 is a view of yet another embodiment of the invention.
FIG. 6 is a view of another embodiment of the invention.
FIG. 7 is a view of another embodiment of the invention.
DETAILED DESCRIPTION OF THE DRAWINGS
Now, with more particular reference to the drawings and FIG. 2, the sterilization indicator of the invention is supported in the container 10 which contains a solution 11 to be sterilized. The combination chemical and biological indicator 12 is suspended in the solution 11 by means of a cord 19 supported on the closed end 20 of the combination chemical and biological indicator 12 and attached to the cap 17 of the container.
The chemical and biological indicator 12 is made up of an ampule 14, which may be made of glass, sealed with an incubation medium 13 therein, which is a suitable broth that may contain an indicator. Examples of indicators are pH indicators such as phenol red or brom cresol purple or vital dyes, such as triphenyl tetrazolium chloride.
The inner tube 15 is hollow and contains a melt pellet 16. The melt pellet 16 is loosely received on the inside of the tube. The melt pellet 16 is adapted to melt at a predetermined temperature, for example, 250° F., 270° F. or 285° F., or at some other suitable temperature. The broth, or incubation medium, 13 will also contain spores of a predetermined variety, that is, aerobic or anaerobic spore formers taken from the group of Bacillus stearothermophilus, Clostridium sporogenes, Clostridium thermosacharolyticum, Bacillus subtilus, and Putrefactive anaerobe 3679, such as B. stearothermophilus. The melt pellet 16 may be isolated since it may contain chemical materials that might be inhibitory to the spores or cidal to bacterial growth and, therefore, interfere with the accuracy of the tests if it were not sealed up in the inner tube 15.
When the container 10 of solution to be sterilized is placed in a steam sterilizer or other suitable thermally controlled chamber and brought up to temperature, and when the central part of the solution reaches a temperature at which the pellet 16 will melt, the pellet will melt and this will be visible from outside the container. If the melt pellet has not melted, the operator is immediately notified that the cycle was not successful and can resterilize the solution. Then, when the combination chemical and biological indicator 12 is removed from the container 10 and incubated, if the spores are viable, the vital dye will visibly change color and the medium will turn turbid. If a pH indicator is used, viable spores will also cause the solution to change color. If, at the end of the incubation time, the spores are not viable, no change in clarity, vital dye or color will occur, and a successful cycle is proven.
The biological test indicator could be used in a washer- sterilizer or other apparatus when it is desirable to get a preliminary indication of the success of a sterilizing cycle. If the pellet is melted, the operator knows immediately that the challenge part of the load has reached sterilizing temperature and can incubate the indicator to verify the success of the cycle. If the pellet is not melted, the load can immediately be resterilized.
In the embodiment of the invention shown in FIG. 4, we show a biological indicator 112, which may be suspended in a solution, such as the solution 11 in FIG. 2. The chemical indicator 112 is made up of an ampule 114, which may be made of glass, sealed at 120 with an incubation medium 113 therein. This incubation medium may be a suitable broth that may contain an indicator. The indicator may be a pH indicator, triphenyl tetrazolium chloride or other suitable indicator. The melt pellet 116 is adapted to melt at a predetermined temperature, for example, 250° F., 270° F., or 285° F. indicating that such temperature has been reached. The broth or incubation medium 113 will contain spores of a suitable variety.
Referring to the embodiment of FIG. 5, this embodiment of the biological chemical indicator 212 has an upper closed end 220 and contains a broth or incubation medium 213 and a melt pellet 216 is supported inside the container 214.
Referring to the embodiment of the invention of FIG. 6, the chemical biological indicator 312 shows a container 314 containing a broth 313 and having a suitable temperature indicating material 316 therein. This could be a filter paper with a temperature sensitive material painted onto it or it could be a material that melts at the preselected temperature. The enclosure 317 separates the material 316 from the broth 313.
In the embodiment of the invention shown in FIG. 7, we show the chemical and biological indicator 412 containing the incubation medium 413 inside of the outer container 414. The container 416 is affixed to the outside surface of the container 414 and the temperature indicator 417 is housed in the container 416. When the temperature surrounding the container 414 reaches a predetermined temperature, the indicator material 417 will so indicate.
The foregoing specification sets forth the invention in its preferred, practical forms but the structure shown is capable of modification within a range of equivalents without departing from the invention which is to be understood is broadly novel as is commensurate with the appended claims. | |
What is Next for Cities and Climate? Three Established Strategies for Advancing Climate Collaborations
With climate change impacts intensifying in the coming years, governmental action is now needed more than ever. Paola Adriázola introduces proven collaborative strategies for municipalities and regions around the world.
After the COVID-19 pandemic, governments and society will face the compounded challenges of building back economies and tackling the climate crisis. Collaborative governance at different levels is necessary to garner meaningful input from across sectors of society and government. To improve collaboration between administration levels, for the past six years the Vertical Integration and Learning for Low-Emission Development project has been working closely on governance strategies with national, municipal and regional administrations in Africa and Southeast Asia. Here are three identified ways in which local governments can get on track to advance collaborations for low-emission development and more resilient pathways.
1. Building Governance Coalitions for Local Climate Planning
When developing local climate strategies, a non-communicative sectoral approach continues to be the norm: Typically, departments responsible for relevant sectors, such as health, transport and waste management, work without talking to each other. This represents a major challenge to effective climate action planning in many cities around the world. Some good practices however can be observed.
For instance, Ormoc City in the province Leyte, the Philippines, established an interdepartmental climate change technical working group that gathers local government departments in sharing knowledge, making decisions, and assigning concrete climate responsibilities to each member. The results of the broad involvement were public officials’ heightened knowledge and skills for climate advocacy, collaborative problem solving, and the use of climate science and information in policymaking. The Ormoc experience shows that a local climate action plan is not merely a piece of paper. Instead, it is a key way to build the necessary coalitions that can help the city achieve stronger, agreed-upon results that secure buy-in.
In South Africa, a coalition, formed by the national Department of Environment, Forestry and Fisheries (DEFF), municipalities and provinces, stands behind the country-wide Local Government Climate Change Support Programme. The programme’s vision is to catalyse climate action at the local level while building collaboration between the spheres of government. Its key objectives are to mainstream climate change into municipal development planning and support municipalities in developing and financing climate projects.
Central to the programme’s method is an all-hands-on-deck approach that includes all relevant spheres of government, from the national level to the local level, as well as districts and provinces. To date, the programme’s training has reached nearly all South African municipalities and all 44 districts have completed risk and vulnerability assessments and developed climate change response plans, of which some have already been adopted by their municipal council.
2. Financing Community-Led Climate Change Priorities
Kenya is no stranger to the disconnect that can exist between official climate planning processes and the priorities of local communities. For the past decade, a pioneering governance and finance mechanism has been put to the test to address this divide.
The County Climate Change Funds (CCCFs) are county-controlled funds that finance climate projects identified and prioritised by local communities, thus empowering citizens. Designed to combine climate finance from different sources with the counties’ own funds, CCCFs aim to increase access to finance, promote local ownership and participation and increase county capacity.
Concretely, the CCCFs rely on county and ward or village climate change planning committees that facilitate the process of identifying local needs and priorities on the ground. These link up to the county-managed funds to finance climate projects prioritised by communities by pooling public and private resources from local, national and international climate finance streams.
The CCCFs across five pilot counties have financed more than 100 local climate change projects in their first decade, increasing drought resilience and providing sustainable livelihoods. They are now a key component of Kenya’s nationally led climate finance strategy and have increased public knowledge about climate change, strengthened community oversight of government interventions and improved devolved governance, ultimately improving outcomes for the communities themselves.
3. Involving the Private Sector as an Ally
For municipalities in the Philippines, even after they formulate their local climate action plans, the lack of access to funding and limited own-source revenues are significant barriers to implementation.
Ormoc City sees its planning process as a tool that helps the city access funds for priority projects while communicating clear climate objectives. By engaging in broad conversations with the public and the private sector, Ormoc has been able to secure private investment and national public funding to implement eighty per cent of the climate projects prioritised in its 2019 local climate change plan. Some example projects are the redesign of Ormoc’s city plaza to include a rainwater capture and storage system or conservation and “no plastic” projects supported by private companies and the Ormoc Chamber of Commerce and Industry.
Capitalising on increased business engagement, the municipal government is giving clear policy signals to steer investments to align with the city’s climate targets. The city government is sending a message to its private-sector allies that it is committed to economic growth while also forging a path towards sustainable development. The result of this process has been widespread public and business buy-in of climate projects that modernise the city.
To conclude, the experiences from these four countries show that targeted and effective involvement of different stakeholders can be key to scaling up interventions and aligning them with the priorities of citizens and businesses. By leading collaborative strategies, they provide inspiration and innovation for others to follow suit. | https://www.urbanet.info/what-is-next-for-cities-and-climate-three-established-strategies-for-advancing-climate-collaborations/ |
What is a Patient Participation Group (PPG)?
A group of registered patients and practice staff who meet frequently to discuss and make decisions about the practice and how it can serve the community with improved healthcare services and facilities.
What is the purpose Of A PPG?
- For the practice and the group to agree what could enhance the practice.
- For the practice to understand the patients' point of view and to encourage positive feedback.
- To actively encourage and welcome comments, suggestions from members of the local and wider community.
What is our aim in forming a PPG?
We aim to have a very active Patient Participation Group which meets quarterly with representatives from the practice to discuss topical issues, express their views on planned service developments and to raise any issues which are of concern and/or will help to improve the standard of care offered by the practice.
Together with our PPG we intend to:
- provide resources and services for the good of the practice population which would not otherwise be provided by statutory services
- encourage a spirit of self help and support amongst patients to improve their health and well being
- improve communication between the service providers, the group and the wider population
- promote a patient perspective and enable patients to access and make the best use of available health care.
No time to attend meetings?
If you would like to get involved but are unable to attend meetings why not join our virtual Patient Participation Group? Members can receive regular updates on practice developments and can give us views and suggestions.
We have a couple of ground rules for our PPG:
- The group is NOT a forum for individual complaints and single issues.
- We advocate open and honest communication.
- Silence indicates agreement – please speak up.
- All views are valid and will be listened to.
We welcome enquiries from patients who would like to join our patient group. | https://beaconmedicalpractice.com/patient-group/join-ppg/ |
My Sister Lives On The Mantelpiece by Annabel Pitcher is published by Orion Children’s Books and is sold at £5.75
What is the book about?
My Sister Lives on the Mantelpiece speaks from the perspective of Jamie Matthews, a ten-year-old boy whose sister Rose died five years prior and whose mother has just left, and explores the effects of this on his other sister and his father. As he has not cried about it since her death and was too young to know her well, he serves as a narrator for the feelings of the rest of his family, but among this finds his own perspective and individual story. It takes on heavy themes like the realistic struggles of family from the point of view of childlike innocence.
Who is it aimed at?
Despite themes that, under most circumstances, I would consider to be more challenging, the way in which this is written makes it endlessly relatable and often funny for a younger age group. I would most enthusiastically recommend it to those within the 9-12 range.
What was your favourite part?
My favourite part was that, although the focus seemed to be on Jamie, the struggles of the other main characters were also brought in and well-developed with a wonderful sort of ease.
What was your least favourite part?
Though I deeply enjoyed reading it, because it was so easy to get through I occasionally felt that some of the plot twists left a little to be desired.
Which character would you most like to meet?
Of the characters presented, I was most interested in the feelings and story of Jasmine, Rose's twin and Jamie's older sister. I felt compelled to know more about her through what I learned in the book.
Why should someone buy this book?
I would absolutely recommend it as one of the most carefully and smoothly written children’s books I’ve come across that handles big issues and emotions in a way that is both touching and with a hint of fun along the way. | https://www.heraldscotland.com/arts_ents/18221905.young-adult-book-review-sister-lives-mantelpiece-annabel-pitcher/ |
Last week, the U.S. Supreme Court decided to hear South Dakota v. Wayfair. With this move, there is a real opportunity to bring online sales taxing into the 21st century, level the playing field for local retailers, and generate hundreds of millions in additional revenue.
The Chamber has advocated in recent years for congressional action on the issue of online sales tax, particularly through the Marketplace Fairness Act (MFA). Such a measure would alleviate the competitive disadvantage our brick-and-mortar retailers face against large online giants, like eBay or Wayfair.
If the SCOTUS rules that online retailers are required to remit sales tax, there will inevitably be a debate over where the money should go. We’ll ask our elected leaders to renew the commitment they made in 2013 to use additional sales tax revenue from the MFA to fund transportation priorities.
We are hopeful for a favorable SCOTUS decision; in the meantime, we continue to urge Congress to take action on online sales tax fairness so that implementation is uniform across all states.
Data Structure in Java – A Complete Guide for Linear & Non-Linear Data Structures
Sorting through the endless choice of mobile phones by price, or searching for a particular book among millions of books on Flipkart, is done with fairly simple, low-cost algorithms that work on structured data. Since data structures are core to any programming language, and the choice of a particular data structure greatly affects both the performance and the functionality of Java applications, it is worth the effort to learn the different data structures available in Java.
Today this article will guide you through each type of data structure supported by Java, with examples and syntax, along with their implementation and usage in Java.
Firstly, let’s get familiar with the Top 12 Java Applications with Techvidvan.
What is a Data Structure in Java?
The term data structure refers to a collection of data with well-defined operations and behavior or properties. A data structure is a particular way of storing or organizing data in computer memory so that we can use it effectively. We use data structures in almost every field of Computer Science: computer graphics, operating systems, artificial intelligence, compiler design, and many more.
The need for Data Structures in Java
As the amount of data grows rapidly, applications get more complex, and there may arise the following problems:
- Processing Speed: As the data is increasing day by day, high-speed processing is required to handle this massive amount of data, but the processor may fail to deal with that much amount of data.
- Searching data: Consider an inventory with a size of 200 items. If your application needs to search for a particular item, it needs to traverse 200 items in every search. This results in slowing down the search process.
- Multiple requests at the same time: Suppose, millions of users are simultaneously searching the data on a web server, then there is a chance of server failure.
In order to solve the above problems, we use data structures. Data structure stores and manages the data in such a way that the required data can be searched instantly.
Advantages of Java Data Structures
- Efficiency: Data Structures are used to increase the efficiency and performance of an application by organizing the data in such a manner that it requires less space with higher processing speed.
- Reusability: Data structures provide reusability; that is, after implementing a particular data structure once, we can use it many times in other places. We can compile the implementations of these data structures into libraries, and clients can use these libraries in many ways.
- Abstraction: In Java, the ADT (Abstract Data Types) is used to specify a data structure. The ADT provides a level of abstraction. The client program uses the data structure with the help of the interface only, without having knowledge of the implementation details.
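As a small illustration of this abstraction in Java's standard library, client code can be written against the java.util.List interface (the ADT) and remain unchanged whichever implementation backs it. The class below is a made-up example for illustration only:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;

public class AbstractionDemo {

    // The client depends only on the List interface (the ADT),
    // not on how the underlying data structure is implemented.
    static int sum(List<Integer> numbers) {
        int total = 0;
        for (int n : numbers) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> arrayBacked = new ArrayList<>(Arrays.asList(1, 2, 3));
        List<Integer> nodeBacked = new LinkedList<>(Arrays.asList(1, 2, 3));

        // The same client code works with both implementations.
        System.out.println(sum(arrayBacked)); // 6
        System.out.println(sum(nodeBacked));  // 6
    }
}
```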
Data Structure Classification in Java
- Linear Data Structures: In a linear data structure all the elements are arranged in the linear or sequential order. The linear data structure is a single level data structure.
- Non-Linear Data Structures: The non-linear data structure does not arrange the data in a sequential manner as in linear data structures. Non-linear data structures are the multilevel data structure.
Types of Data Structure in Java
There are some common types of data structures in Java; they are as follows:
1. Arrays
An Array, which is the simplest data structure, is a collection of elements of the same type that are referenced by a common name. Arrays consist of contiguous memory locations. The first address of the array belongs to the first element and the last address to the last element of the array.
Some points about arrays:
- Arrays can have data items of simple and similar types such as int or float, or even user-defined datatypes like structures and objects.
- The common data type of array elements is known as the base type of the array.
- Arrays are considered as objects in Java.
- The indexing of the variable in an array starts from 0.
- We must define an array before we can use it to store information.
- The storage of arrays in Java is in the form of dynamic allocation in the heap area.
- We can find the length of arrays using the member ‘length’.
- The size of an array must be an int value.
Arrays can be of 3 types:
- Single Dimensional Arrays
- Two-dimensional Arrays
- Multi-dimensional arrays
The below diagram shows the illustration of one-dimensional arrays.
Note:
We can use an array only when we can predetermine the number of elements, since the memory is reserved before processing. For this reason, arrays come under the category of static data structures.
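To make this concrete, here is a minimal Java sketch (the class and variable names are our own, chosen for illustration) that declares a fixed-size array, fills it, and reads its length member:

```java
public class ArrayDemo {
    public static void main(String[] args) {
        // The size must be known up front; memory is reserved on the heap.
        int[] marks = new int[5];                 // indices 0..4, default value 0

        for (int i = 0; i < marks.length; i++) {  // the 'length' member gives the size
            marks[i] = (i + 1) * 10;
        }

        System.out.println("First element: " + marks[0]);          // O(1) access by index
        System.out.println("Number of elements: " + marks.length); // prints 5
    }
}
```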
Time complexities for array operations:
- Accessing elements: O(1)
- Searching:
For Sequential Search: O(n)
For Binary Search (if the array is sorted): O(log n)
- Insertion: O(n)
- Deletion: O(n)
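To make the searching complexities above concrete, here is a sketch of binary search on a sorted int array. It is an illustrative implementation only; in practice, java.util.Arrays.binarySearch offers a ready-made version.

```java
public class BinarySearchDemo {
    // Returns the index of key in the sorted array, or -1 if absent: O(log n).
    static int binarySearch(int[] sorted, int key) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;        // written this way to avoid overflow of (lo + hi)
            if (sorted[mid] == key) {
                return mid;
            } else if (sorted[mid] < key) {
                lo = mid + 1;                    // discard the left half
            } else {
                hi = mid - 1;                    // discard the right half
            }
        }
        return -1;                               // a sequential search would scan all n elements instead
    }

    public static void main(String[] args) {
        int[] data = {2, 5, 8, 12, 23, 38, 56, 72, 91};
        System.out.println(binarySearch(data, 23)); // 4
        System.out.println(binarySearch(data, 7));  // -1
    }
}
```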
Dive a little deep into the concepts of Java Arrays to learn more in detail.
2. Linked Lists
Linked lists in Java are another important type of data structure. A linked list is a collection of data elements of similar types, called nodes, where each node points to the next node in the sequence by means of a reference (pointer).
Need for Linked lists:
Linked lists overcome a major drawback of arrays: there is no need to define the number of elements before using a linked list, so memory can be allocated or deallocated during processing according to the requirement. This makes insertions and deletions much easier and simpler.
Types of Linked lists:
There are three types of linked lists: singly-linked, doubly-linked, and circular. Let’s discuss each of these types in detail:
2.1 Singly-linked list
A singly-linked list is a linked list that stores data and the reference to the next node or a null value. Singly-linked lists are also known as one-way lists as they contain a node with a single pointer pointing to the next node in the sequence.
A START pointer stores the address of the very first node of the linked list. The next pointer of the last (end) node stores a NULL value, indicating that the last node does not point to any other node.
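Below is a minimal sketch of a singly-linked list with insertion at the head; the Node and SinglyLinkedList names are our own and not part of any standard library.

```java
class Node {
    int data;
    Node next;                       // reference to the next node, or null at the end
    Node(int data) { this.data = data; }
}

public class SinglyLinkedList {
    Node head;                       // the START reference of the list

    // Insert a new node at the front of the list: O(1)
    void addFirst(int value) {
        Node node = new Node(value);
        node.next = head;
        head = node;
    }

    // Traverse and print every node: O(n)
    void printAll() {
        for (Node cur = head; cur != null; cur = cur.next) {
            System.out.print(cur.data + " -> ");
        }
        System.out.println("null");
    }

    public static void main(String[] args) {
        SinglyLinkedList list = new SinglyLinkedList();
        list.addFirst(30);
        list.addFirst(20);
        list.addFirst(10);
        list.printAll();             // 10 -> 20 -> 30 -> null
    }
}
```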
2.2 Doubly-linked list
It is the same as a singly-linked list with the difference that it has two pointers, one pointing to the previous node and one pointing to the next node in the sequence. Therefore, a doubly-linked list allows us to traverse in both the directions of the list.
2.3 Circular Linked List
In the Circular Linked List, all the nodes align to form a circle. In this linked list, there is no NULL node at the end. We can define any node as the first node. Circular linked lists are useful in implementing a circular queue.
In the figure below we can see that the end node is again connected to the start node.
Time complexities for linked-list operations:
- Traversing elements: O(n)
- Searching an element: O(n)
- Insertion: O(1) once the position is known (e.g., at the head); locating a position first takes O(n)
- Deletion: O(1) once the node is known; locating the node first takes O(n)
We can also perform more operations like:
- Concatenating two lists
- Splitting list
- Reversal of a list (see the sketch below)
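As an example of the last operation listed above, here is a sketch of in-place reversal, written as a method that could be added to the illustrative SinglyLinkedList class shown earlier (Java's own java.util.LinkedList can instead be reversed with Collections.reverse).

```java
// Reverses the list in place by redirecting each 'next' reference: O(n) time, O(1) extra space.
void reverse() {
    Node prev = null;
    Node cur = head;
    while (cur != null) {
        Node next = cur.next;        // remember the rest of the list
        cur.next = prev;             // point the current node backwards
        prev = cur;
        cur = next;
    }
    head = prev;                     // the old tail becomes the new head
}
```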
3. Stack
A stack is a LIFO (Last In First Out) data structure that can be physically implemented as an array or as a linked list. Insertion and deletion of elements in a stack occur at the top end only. An insertion in a stack is called pushing and deletion from a stack is called popping.
When we implement a stack as an array, it inherits all the properties of an array and if we implement it as a linked list, it acquires all the properties of a linked list.
Common operations on a stack are:
- Push(): Adds an item to the top of the stack.
- Pop(): Removes the item from the top of the stack.
- Peek(): Returns the item on the top of the stack without removing it. Sometimes it is also called top().
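A short example of these operations using java.util.ArrayDeque, the class the JDK documentation recommends for stack usage (the variable names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StackDemo {
    public static void main(String[] args) {
        Deque<String> stack = new ArrayDeque<>();

        stack.push("first");              // pushing
        stack.push("second");
        stack.push("third");

        System.out.println(stack.peek()); // prints "third": the top item, not removed
        System.out.println(stack.pop());  // prints "third": popping follows LIFO order
        System.out.println(stack.pop());  // prints "second"
    }
}
```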
Stacks are useful in:
- Parenthesis matching
- Solving the maze problem
- Nested Function calls
4. Queue
Logically, a queue is a FIFO (First In First Out) data structure and we can physically implement it either as an array or a linked list. Whatever way we use to implement a queue, insertions always take place at the “rear” end and deletions always from the “front” end of the queue.
Common operations on a queue are:
- Enqueue(): Adding elements at the rear end of the queue.
- Dequeue(): Deleting elements from the front end of the queue.
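A minimal example of FIFO behaviour using the java.util.Queue interface backed by a LinkedList; offer() and poll() play the roles of enqueue and dequeue (variable names are illustrative):

```java
import java.util.LinkedList;
import java.util.Queue;

public class QueueDemo {
    public static void main(String[] args) {
        Queue<String> queue = new LinkedList<>();

        queue.offer("first");             // enqueue at the rear
        queue.offer("second");
        queue.offer("third");

        System.out.println(queue.poll()); // prints "first": dequeue from the front (FIFO)
        System.out.println(queue.peek()); // prints "second": the front element, not removed
    }
}
```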
Variations in Queue:
Depending on the requirements of the program, we can use the queues in several forms and ways. Two popular variations of queues are Circular queues and Dequeues (Double-ended queues).
4.1 Circular Queues
Circular queues are queues implemented in a circular form rather than a straight (linear) manner. Circular queues overcome the problem of unutilized space that arises in linear queues implemented as arrays.
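Below is a hedged sketch of a fixed-capacity circular queue built on an array; the class and field names are our own, and production code would usually rely on java.util.ArrayDeque instead.

```java
public class CircularQueue {
    private final int[] items;
    private int front = 0, size = 0;

    public CircularQueue(int capacity) { items = new int[capacity]; }

    // Enqueue at the rear; returns false if the queue is full.
    public boolean enqueue(int value) {
        if (size == items.length) return false;
        items[(front + size) % items.length] = value; // index wraps around the end of the array
        size++;
        return true;
    }

    // Dequeue from the front; returns null if the queue is empty.
    public Integer dequeue() {
        if (size == 0) return null;
        int value = items[front];
        front = (front + 1) % items.length;           // freed slots are reused, so no space is wasted
        size--;
        return value;
    }

    public static void main(String[] args) {
        CircularQueue q = new CircularQueue(3);
        q.enqueue(1); q.enqueue(2); q.enqueue(3);
        System.out.println(q.dequeue()); // 1
        q.enqueue(4);                    // reuses the slot freed by the dequeue above
        System.out.println(q.dequeue()); // 2
    }
}
```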
4.2 Dequeues
A double-ended queue, or deque, is a refined queue in which we can add or remove elements at either end but not in the middle.
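The standard library already provides this behaviour through the java.util.Deque interface; a brief example (variable names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeDemo {
    public static void main(String[] args) {
        Deque<Integer> deque = new ArrayDeque<>();

        deque.addFirst(1);  // insert at the front
        deque.addLast(2);   // insert at the rear
        deque.addFirst(0);  // deque is now: 0, 1, 2

        System.out.println(deque.pollFirst()); // 0, removed from the front
        System.out.println(deque.pollLast());  // 2, removed from the rear
    }
}
```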
Applications of a Queue:
- Queues are useful in telephone inquiries, reservation requests, traffic flow, etc. While using a telephone directory service, you might have sometimes heard "Please wait, you are in a queue."
- To access shared resources like printer queues, disk queues, etc.
- For breadth-first searching in special data structures like graphs and trees.
- For handling the scheduling of processes in a multitasking operating system, for example FCFS (First Come First Serve) scheduling, Round-Robin scheduling, etc.
5. Graph
A graph is a non-linear data structure in Java and the following two components define it:
- A finite set of vertices, which we call nodes.
- A finite set of edges, which are ordered pairs of the form (u, v).
- V represents the number of vertices.
- E represents the number of edges.
Classification of a Graph
Graph Data Structures in Java can be classified on the basis of two parameters: direction and weight.
5.1 Direction
On the basis of direction, the graph can be classified as a directed graph and an undirected graph.
A. Directed graph
A directed graph is a set of nodes or vertices connected to each other such that every edge has a direction from one vertex to another. There is a directed edge for each connection of vertices. The figure below shows a directed graph:
B. Undirected graph
An undirected graph is a set of nodes or vertices which are connected together, with no direction. The figure below shows an undirected graph:
5.2 Weight
On the basis of weight, the graph can be classified as a weighted graph and an unweighted graph.
A. Weighted graph
A weighted graph is a graph in which every edge carries a weight. A weighted graph is also a special type of labeled graph. The figure below shows a weighted graph:
B. Unweighted graph
An unweighted graph is one in which no edge carries a weight. The figure below shows an unweighted graph:
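As a minimal illustration of how such graphs can be stored, here is an adjacency-list sketch that covers directed/undirected and weighted/unweighted cases; the Graph class below is our own example, not a standard-library type.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Graph {
    // vertex -> list of {neighbour, weight} pairs; use weight 1 for an unweighted graph
    private final Map<Integer, List<int[]>> adj = new HashMap<>();

    public void addEdge(int u, int v, int weight, boolean directed) {
        adj.computeIfAbsent(u, k -> new ArrayList<>()).add(new int[]{v, weight});
        if (!directed) {                         // undirected: also store the reverse edge
            adj.computeIfAbsent(v, k -> new ArrayList<>()).add(new int[]{u, weight});
        }
    }

    public List<int[]> neighbours(int u) {
        return adj.getOrDefault(u, new ArrayList<>());
    }

    public static void main(String[] args) {
        Graph g = new Graph();
        g.addEdge(1, 2, 5, false);               // weighted, undirected edge between 1 and 2
        g.addEdge(2, 3, 1, true);                // unweighted (weight 1), directed edge 2 -> 3
        for (int[] e : g.neighbours(2)) {
            System.out.println("2 -> " + e[0] + " (weight " + e[1] + ")");
        }
    }
}
```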
6. Set
A Set is a special data structure that cannot contain duplicate values. It is very useful when we want to store only unique elements, for example, unique IDs. The Java Collections API provides several Set implementations such as HashSet, TreeSet, and LinkedHashSet.
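For example, a HashSet silently rejects duplicate insertions (a short illustrative snippet):

```java
import java.util.HashSet;
import java.util.Set;

public class SetDemo {
    public static void main(String[] args) {
        Set<String> ids = new HashSet<>();

        System.out.println(ids.add("user-101")); // true: added
        System.out.println(ids.add("user-102")); // true: added
        System.out.println(ids.add("user-101")); // false: duplicate rejected

        System.out.println(ids.size());          // 2, only unique IDs are stored
    }
}
```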
Summary
Data structures are useful for storing and organizing data in an efficient manner. In the above article, we discussed some important Java data structures like arrays, linked lists, stacks, queues, graphs, and sets, with their types, implementations, and examples. This article will surely help you in your future Java programming.
Thank you for reading our article. If you have any queries related to Data Structures in Java, do let us know by dropping a comment below.
The purpose of 7Geese is to utilize technology to help people achieve their goals.
7Geese believes that people are intrinsically motivated to perform their best when organizations have a compelling vision, clear objectives, and a rich culture of recognition and support.
Our challenge was to investigate whether or not 7Geese’s 1-on-1 feature was a compelling process that accounts for people’s real life experiences.
The objective of the 7Geese platform is to help companies move away from traditional performance reviews and accelerate employee growth and learning. The most recent 7Geese feature facilitates 1-on-1 meetings between managers and employees.
While the timing of the product suggested great potential, 7Geese expressed concern that the 1-on-1 section felt disconnected from the rest of the platform’s services. In addition, there was no conclusive feedback from users confirming that the 1-on-1s comply with their needs and current processes.
Prior to selecting a design method, we performed a standard SWOT analysis comparing 7Geese to its competitors. What are the Strengths, Weaknesses, Opportunities, and Threats that affect their platform?
It became clear that the approach of translating the 1-on-1 process into an interface could feel technical and rigid. In order to offer a reliable, human-centered 1-on-1 process, we decided to explore and compare the ways that 1-on-1’s are being conducted within real environments.
We determined that our target audience was managers who are usually involved in 1-on-1’s as facilitators, collaborators from human resources/people development areas, and employees who are the interviewees in a 1-on-1. We wanted to focus on their attitudes, needs, wants, and emotional states toward 1-on-1’s.
Due to the broad nature of the information we wanted to gather, we decided to explore a wide range of different ideas through collaborative ideation methods.
In brief, we combined Experience Prototyping with our own observations to outline the User Journey, describing the path and touch points users follow.
The first research activity was Experience Prototyping, a simulation of a product or service within a series of group exercises that involves creating and testing low-fidelity prototypes in semi-realistic scenarios.
Experience Prototyping is particularly helpful for discovering what people know and how people feel, ultimately allowing the design team to empathize with participants and to obtain inspiration for future design.
The workshop itself was broken into several stages. First, participants were actively involved in a discussion about their experiences and perceptions of 1-on-1’s. This was followed by the individual creation of their ideal 1-on-1 meeting process.
After which, participants shared their insights. Finally, the participants split into two teams. Using a variety of creative, hands on tools, the teams created their ideal 1-on-1 landscapes. The landscapes were focused on three stages: before, during and after a 1-on-1.
Next, based on the prototypes produced during the workshop combined with our own observations, we outlined the User Journey to describe the path and touch points users would follow before, during and after a 1-on-1.
Finally, we used Insight Combination, a design method of contrasting and correlating insights amongst current design trends in order to seed a wide variety of design ideas.
During the Insight Combination activity we came up with multiple design ideas that would address both team members’ needs and managerial goals. As a team, we refined the ideas into finished design concepts by singling out the most promising ones.
Ultimately, we chose to work with three ideas that would cover the full 1-on-1 task flow.
Prior to our investigation, the 7Geese platform presented users with five 1-on-1 principles:
During our research phase we were able to confirm that 7Geese’s five principles are in fact key for 1-on-1 success.
However, these principles were hidden in another section of the platform. Our first solution was to give the principles the real estate and attention they deserve by making them visible throughout the entire 1-on-1 process with visually pleasing icons placed at the bottom of the screen.
To increase the conversational nature of the 1-on-1, we proposed that the 1-on-1 process function more as a guideline rather than a fixed questionnaire that is to be filled out step by step.
To encourage this, both facilitators and participants are given the opportunity to enlist and submit the themes they would like to discuss beforehand, allowing both parties to come to the meeting with a clear idea of what the conversation will be about.
In the redesigned note taking section, the facilitator transcribes the paper notes into the platform.
Again, rather than a questionnaire, the format was simplified to three categories: 1) What was the meeting about? 2) Additional Notes, and 3) Action Items. This allows the facilitator to be as simple or elaborate as necessary.
Once the section is completed the participant is automatically notified and is then free to review and comment.
Finally, we added a 1-on-1 history section that gives both parties the ability to track progress at a glance from one meeting to the next. The history includes the mood indicator that was set by the participant before the meeting. Not only does this function as a way to track a participant’s needs, it is also a useful preparation and follow-up tool for the facilitator.
In this technological age, management processes can certainly be digitized. In doing so, we must not forget about the potential negative impact of all things being “on the computer.”
There will always exist the need to incorporate the human element into any technology. The best way to do so is by getting input from people.
Food manufacturers are reformulating products due to the impending FDA ruling on the GRAS (generally recognized as safe) status of partially hydrogenated oils (PHOs) and trans fats in processed foods. Shortenings containing saturated and trans fatty acids are frequently used in the production of many baked products, such as donuts, cookies, cakes and pastries. These fatty acids provide benefits such as structure, texture, flavor and processing tolerances that are hard to replace. The reformulation process is not as simple as removing a PHO shortening and replacing it with a non-PHO shortening. This webinar provides information on the latest knowledge of how saturated and trans fatty acids impact cardiovascular health and expertise on how to reduce trans fatty acids in baked products. Dr. Dariush Mozaffarian discusses the impact of various fatty acids on cardiovascular health. Mr. Bob Johnson focuses on why PHOs gained popularity in the bakery industry, the challenges of replacing PHOs, and some available approaches to reformulating.
Dariush Mozaffarian, MD, MPH, is the Dean of the Gerald J. and Dorothy R. Friedman School of Nutrition Science and Policy at Tufts University in Boston, Massachusetts. Mozaffarian joins the Friedman School from Harvard University where he served as associate professor and co-director of the program in Cardiovascular Epidemiology at the Harvard School of Public Health and an associate professor in the Division of Cardiovascular Medicine at Harvard Medical School and Brigham and Women's Hospital. He has written or co-written more than 200 scientific publications on the diet and lifestyle factors that contribute to both heart disease and stroke, which are the leading causes of death worldwide.
Bob Johnson, is the Director of Research and Development at Bunge Oils and leads Bunge's technical service and applications support team. He works one-on-one with their customers to provide first-class assistance, training and functional aid. Johnson and his team work closely with Bunge's operations and supply chain teams to efficiently add new products, processes and technologies to Bunge's expanding product portfolio.
Broadcast Date: December 11, 2014
This webinar offers a forum for those that were not able to attend the 2014 Annual Meeting, missed selected talks, or simply desire a recap with a more in-depth discussion with the excellent scientists presenting their work.
The 2014 Annual Meeting Highlights Webinar revisits three exceptional presentations:
Cereal Grains: Impact on Gut Microbiota and Health
Devin Rose, University of Nebraska-Lincoln
Rose received his PhD from Purdue University with research focused on creating slowly fermentable dietary fibers for improved gut health. After completing his PhD, Dr. Rose worked as a postdoc with the Agricultural Research Service of the US Department of Agriculture on creating functional food ingredients from by-products of the grain milling industry. He is now an Assistant Professor at the University of Nebraska-Lincoln, where he teaches Food Carbohydrates, Sensory Evaluation, and Food Product Development Concepts. His research is focused on whole grain and dietary fiber processing and gut health.
Arabinoxylan Hydrolyzates as Immunomodulators
Mihiri Mendis, North Dakota State University
Mendis is a Ph.D. candidate in Cereal Science at North Dakota State University. She obtained her M.S. in Cereal Science from North Dakota State University and her B.S. special degree in Pharmacy from the University of Colombo, Sri Lanka. She was the winner of the AACCI Best Student Research Paper competition in 2014. Her dissertation research focuses on exploring the contribution of the fine structural details of wheat-derived arabinoxylans to their immunomodulatory and prebiotic properties.
Processing of Cereal Proteins and Starches into New Bio-based Chemicals and Products
Bert Lagrain, Center for Surface Chemistry and Catalysis, KU Leuven
Lagrain received his Ph.D. from KU Leuven (Belgium). His research focused on polymerization reactions in gluten and their importance in foods. After completing his Ph.D., Bert continued working at KU Leuven and also worked as a post-doc at the Leibniz Institute for Food Chemistry (Germany). Bert is currently an industrial research manager at the Centre of Surface Chemistry and Catalysis of KU Leuven with a research interest in the use of chemocatalysis for the biorefinery. He has published more than 45 refereed journal articles.
Broadcast Date: January 9, 2015
This webinar will present innovative ways to reduce sodium in baked goods. It will start with a brief overview on sodium-associated health concerns, which are the basis for worldwide efforts advocated by health organizations to reduce sodium in processed foods. New technological approaches, such as vacuum-mixing of dough and salt encapsulation, will be discussed as tools for salt reduction. Then, the optimization of sodium distribution and the enhancement of bread crumb structure will be presented as new strategies allowing sodium reduction in bread through faster sodium release and taste-texture interactions.
This webinar offers a forum for food scientists and technologists, food ingredient suppliers, bakers and those who desire a more in-depth discussion with the excellent scientists presenting their work.
Alain LeBail, Ph.D. is Professor at ONIRIS, Nantes-Atlantic National College of Food Science and Engineering and Veterinary Medicine. Pr. LeBail heads a research group, "MAPS²": Matrices, Process-Properties, Structure-Sensorial, that focuses on innovative baking technology. He is currently scientific coordinator of the FP7 project PLEASURE on salt, sugar, lipid reduction in foods. Pr. LeBail has published 130 papers and is co-author of 4 patents. He coordinated EU-FRESHBAKE, selected among 181 funded projects as one of the 8 success stories of FP6.
Peter Koehler, Ph.D. is vice-director of the Deutsche Forschungsanstalt für Lebensmittelchemie, Leibniz Institute, professor for food chemistry at Technische Universität München, and vice-director of the Hans-Dieter-Belitz-Institute for Cereal Grain Research in Freising, near Munich, Germany. His research is focused on basic and applied topics in cereal science including studies on celiac disease, structure-function relationships of cereal proteins, and the effects of enzymes and other additives (e.g. sodium chloride) in breadmaking.
Broadcast Date: March 5, 2015
After a quick review of gluten-free baking and bakery products, we will present the problems we face and the challenges associated with obtaining high quality products to meet consumer’s expectations. We will discuss possible technological solutions available to deal with these challenges using enzymes or colloidal approaches.
This webinar offers a forum for food scientists and technologists, food ingredient suppliers, those in the gluten-free sector, and anyone who desires a discussion on this topic with the excellent scientists presenting their work.
Regine Schönlechner is Assistant Professor at the Institute of Food Technology, Department of Food Sciences and Technology, University of Natural Resources and Life Sciences, Vienna, Austria. Her expertise is food technology, nutritional science, nutrition in developing countries, and food additives. Her research has specialized in cereal technology, in particular processing of specialty cereals/pseudocereals/gluten-free cereal products.
Cristina M. Rosell is a Full Research Professor in the Instituto de Agroquimica y Tecnología de Alimentos (IATA) in Madrid, Spain, that belongs to the Spanish National Research Council (CSIC), and an Associated Professor of the University of Valencia (UV). She is a member of AACC International, President of the Spanish Association of Cereal Science and Technology (AETC), and the Spanish representative in the Standardization Committee (ISO, CEN) for Cereals and derivatives. She has more than two hundred international peer-reviewed scientific publications and book chapters on the cereals topic, which compiled results from wheat/corn/rice quality, dough rheology, breadmaking, baked goods quality and cereal compound biochemistry. In her research she is applying a holistic approach in cereal science and technology combining instrumental analysis, sensory assessment and nutrition for designing tailored cereal based products.
Atze Jan van der Goot, Ph.D. studied chemical engineering and obtained his PhD on reactive extrusion at the University of Groningen, The Netherlands. He joined Unilever Research in Vlaardingen, the Netherlands as a research scientist. In 1999 he moved to Wageningen University to become an associate professor in Food Structuring. His main research interest is on the understanding of concentrated biopolymer mixtures under flow. His research at Wageningen resulted in 80 peer-reviewed papers and 5 patent applications.
- Description: Great location! Cozy Cape on a dead end, cul-de-sac street. Close to all town amenities and commuting routes. Only 3 miles to the beach. Neighborhood setting great for walking and riding bikes through neighboring side streets in Glen Hill. This is a one owner home and they have added lots of fun features for stay-cations like an in-ground pool, hot tub and good sized outside entertaining area. 24 x 24 garage has a walk up attic for storage. Finished breezeway room between house and garage is currently a den but could be a great home office area. Open concept living room, kitchen and dining room. First floor bedroom and full bath. Second floor has a large master bedroom, two additional bedrooms and a full bath with separate shower and soaking tub. The basement has an additional room currently used as a family room. There are 3 gas fireplaces and a gas stove throughout the home to add the perfect ambiance and cozy feel. Don't miss this opportunity! Showings start at open house Sunday Aug 11 from 1-3pm.
Aztec
(1200-1521) Around 1300, they settled in the Valley of Mexico. Grew corn. Engaged in frequent warfare to conquer others of the region. Worshiped many gods (polytheistic). Believed the sun god needed human blood to continue his journeys across the sky. Practiced human sacrifices; those sacrificed were captured warriors from other tribes and those who volunteered for the honor.
Mesas
a flat-topped hill with steep sides
Confederation
A joining of several groups for a common purpose.
Pueblo
A communal multistoried dwelling made of stone or adobe brick by the Native Americans of the Southwest. (Cliff Palace at Mesa Verde)
Tlingit
were a Native American people that inhabited the Pacific coastland and offshore islands of southern Alaska and western Canada.
Nomad
A member of a group that has no permanent home, wandering from place to place in search of food and water
clan
group of families related by blood or marriage
Migration
movement of people from one place to another
civilization
the stage of human social development and organization that is considered most advanced.
adobe
A brick or building material made of sun-dried earth and straw.
agriculture
the science or practice of farming, including cultivation of the soil for the growing of crops and the rearing of animals to provide food, wool, and other products.
Cliff Dwellers
Native Americans who built houses in the walls of canyons
irrigation
Supplying land with water through a network of canals
Kivas
underground ceremonial chambers
wampum
Belts or strings of polished seashells that were used for trading gift-giving by Iroquois & other Native Americans
longhouse
a home shared by several related Iroquois families
staple
A basic food that is used frequently and in large amounts
wigwams
Round huts
Iroquois Confederacy
An alliance of five northeastern Amerindian peoples (after 1722 six) that made decisions on military and diplomatic issues through a council of representatives. Allied first with the Dutch and later with the English, it dominated W. New England.
travois
sled made of poles tied together; used by Native Americans to transport goods across the plains
Hopi
A Native American tribe in the southwest who were farmers, lived in pueblos, and were excellent builders and potters. Descendants of the Anasazi.
paleo
ancient
Mound Builders
Native American groups who built earthen mounds
three-sister farming
Agricultural system employed by North American Indians as early as 1000 A.D.; maize, beans, and squash were grown together to maximize yields.
# Active Listening
Active Listening (Escuta Ativa) is a project of the Modern Orchestra, an orchestra aimed at popularizing classical music. In partnership with the Derdic High School for the Deaf in São Paulo, linked to PUC-SP, the project brings music to young deaf people. In semester-long modules, deaf young people have contact with music in unconventional ways. Each module concludes with a presentation by the deaf youth together with the orchestra, conducted by Leonard Evers.
# from the program
In this third module of the Escuta Ativa project, a transdisciplinary challenge was posed, involving introductory training activities in the areas of physical computing, data visualization, and body expression. These activities fed, in parallel, the development of an audiovisual instrument that, from movement data collected with an accelerometer and gyroscope attached to it, simultaneously generates images and sounds and changes the light patterns of an LED ring coupled to the end of the instrument.
We started from an object that was familiar to them: the same sound tube used in the first module of the project, for body percussion. During the often confusing simultaneity of activities, the question was continually asked: how to make and enjoy music for the eyes, for the hands, for the whole body?
Replicated for collaborative use, the instrument served as a basis for incorporating the creative fragments generated during the meetings with the students, formalized in the four movements inserted into the Modern Orchestra's concert. In these, we worked unpretentiously with an expanded and diffuse notion of the musical concepts of rhythm, melody, and harmony.
Recently, the James Webb Telescope has been in the spotlight for another important finding: the presence of carbon dioxide in a distant exoplanet. Experts say it could herald a new era of research on worlds outside the solar system.
The finding of carbon dioxide came during Webb's first observing campaign focused on exoplanets, which targeted a hot, gaseous giant planet named WASP-39b, located about 700 light-years from Earth in the constellation Virgo. Notably, this planet is roughly as massive as Saturn but larger in diameter than Jupiter. It had earlier been observed by the Hubble Space Telescope and the Spitzer Space Telescope.
The James Webb observatory mainly observes infrared wavelengths, which carry heat; the Spitzer Space Telescope also observed these heat-carrying infrared wavelengths. Previously, the presence of water vapor, sodium, and potassium had been observed in the planet's atmosphere, but James Webb is the first telescope to detect carbon dioxide there.
Zafar Rustamkulov, a graduate student at Johns Hopkins University, USA, a member of the exoplanet team, and a co-author of the pre-print paper claiming the presence of carbon dioxide commented in a statement, “As soon as the data appeared on my screen, the whopping carbon dioxide feature grabbed me. It was a special moment, crossing an important threshold in exoplanet sciences.” Notably, carbon dioxide has not been detected in exoplanets before this. However, astronomers think that carbon dioxide can help them better understand the formation history and also the evolution of the exoplanets. The discovery of carbon dioxide was made using the NIRSpec instrument of the Webb.
Regarding this, Laura Kreidberg, the director of the Max Planck Institute for Astronomy in Germany and also a co-author of the preprint paper, was quoted as saying, “This unequivocal detection of carbon dioxide is a major milestone for exoplanet atmosphere characterization. Carbon dioxide helps us measure the complete carbon and oxygen inventory of the atmosphere, which is highly sensitive to the conditions in the disk where the planet formed.”
“These measurements can help identify how far from its star the exoplanet was formed and also determine the amount of solid and gaseous material it accumulated while it migrated to its current location,” she added.
The researchers believe that James Webb will help them detect carbon dioxide in other exoplanets as well, and they think Webb will be able to detect it even in Earth-like rocky bodies scattered across the galaxy. The exoplanet WASP-39 b orbits its parent star, WASP-39, very closely, at about one-twentieth of the distance between Earth and the Sun.
The James Webb Telescope was launched into space to find the light emanating from the first galaxies of the early universe and to explore the solar system and exoplanets, the planets that orbit stars other than the Sun. A primary target of the observatory is the epoch, more than 13 billion years ago, when the light of the first pioneering stars ended the darkness theorized to have engulfed the cosmos shortly after the Big Bang.
One of the observatory's first successes was a set of images revealing far-distant galaxies as they were some 4.6 billion years ago. These images were considered compelling, and they are seen as a first step toward reshaping our understanding of the time when the universe dawned.
[Photo caption: Lieutenant JG Michael Zerbe, U.S. Navy (VVMF)]
[Photo caption: Seaman Apprentice David Underhill, U.S. Navy (VVMF)]
[Photo caption: A Navy UH-2 Seasprite picks up a crewmember from the destroyer USS Douglas H. Fox for transportation back to the aircraft carrier USS Franklin D. Roosevelt, August 12, 1964. (U.S. Navy)]
[Photo caption: UH-2 Seasprite hovers over the flight deck of the aircraft carrier USS Kitty Hawk in the South China Sea in March, 1966, just weeks before Lieutenant JG Zerbe and Seaman Underhill were killed during a similar maneuver. (U.S. Navy)]
On the morning of April 15, 1966, a Navy UH-2 helicopter piloted by Lieutenant JG Michael Zerbe slowly lifted off the flight deck of the USS Kitty Hawk, somewhere in the South China Sea. He and two other crewmen were putting the UH-2 Seasprite through its paces, because its engine had just been replaced. As Zerbe attempted to hover over the deck, something went catastrophically wrong. The helicopter yawed to the left, violently pitched forward, and its spinning rotors struck the deck and disintegrated. The aircraft itself rolled over the side of the ship and plunged into the water where it quickly sank. Two men aboard the helicopter managed to escape and swim to the surface, but Michael Zerbe was not among them. Another sailor, Seaman Apprentice David Underhill, was killed as well when flying shards from the rotor blades struck him in the chest.
Aside from riverine warfare, Vietnam is not generally remembered as a “naval” war, but the U.S. Seventh Fleet played an instrumental role in the United States’ Vietnam strategy. Countless bombers, fighters, and rescue aircraft launched from carrier decks in the Gulf of Tonkin, flying countless sorties over both North and South Vietnam. And the sailors, engineers, and aviators who kept them flying were just as pivotal for the Navy’s role in Southeast Asia. Those men and women also placed their lives on the line during their service, just like everyone else.
Carrier aircraft, especially, required nearly constant maintenance, and on April 15, 1966, Lieutenant JG Michael Zerbe and his crew were assigned to break-in a UH-2 Seasprite utility helicopter that had just been fitted with a new engine. The Seasprite was one of the Navy’s workhorse helicopters and it performed a wide variety of duties. In the Vietnam War, the UH-2 was perhaps most valuable as a search-and-rescue helicopter, pulling downed airmen and naval aviators from the waters of the Gulf of Tonkin, but it was also used heavily as a transport and utility aircraft.
Michael Zerbe, the pilot on April 15, was 24 years old from Julian, California, about 60 miles northeast of San Diego. The flight deck that morning was buzzing with activity as usual, and among the sailors performing their duties was Seaman Apprentice David Underhill. Just 19 years old from Fayetteville, North Carolina, Underhill was assigned to the 213th Fighter Squadron as a maintenance man.
Just after 10:00 am, Lieutenant Zerbe started the Seasprite’s engine and attempted to hover low over the deck. It quickly became clear something was very wrong. Instead of maintaining a stable hover, the UH-2 yawed to the left and pitched forward suddenly. In the blink of an eye, the helicopter’s rotors slammed into the Kitty Hawk’s deck and broke apart. Large shards of rotor blades violently sprayed outward, strafing the deck, and the helicopter itself, which was still carrying three men, rolled onto its side, off the deck, and fell nearly six stories into the South China Sea.
The shards of rotor blade struck at least four sailors who were on deck at the time, one of whom was Seaman Underhill. He was hit squarely in the chest and fatally wounded. The other three men survived. The wreckage of the helicopter itself quickly began to sink after it hit the water. As emergency and rescue operations began, numerous sailors waited anxiously to see if the three crew members would emerge at the surface. Two of them did so. Lieutenant Richard Cline and Petty Officer 1st Class Hugh Coleman managed to escape the sinking wreckage and make it to the surface. Zerbe, however, was never seen again.
Every servicewoman and serviceman—and many civilians—risked their lives while serving in Vietnam. Deaths and wounds were not always the result of combat. Michael Richard Zerbe and David Joseph Underhill sacrificed their lives in service to the United States on April 15, 1966. They are memorialized on Panel 6E, Lines 115 and 122, respectively, of the Vietnam Veterans Memorial Wall in Washington, D.C.1
1. Edward J. Marolda, The Approaching Storm, Conflict in Asia, 1945–1965, The U.S. Navy and the Vietnam War (Washington, D.C.: Naval History and Heritage Command); “Kitty Hawk II (CVA-63),” Naval History and Heritage Command (accessed 4/10/19); “Wall of Faces,” Vietnam Veterans Memorial Foundation (accessed 4/10/19).
All relevant data are within the paper.
Introduction {#sec005}
============
Alzheimer's disease (AD) and age-related macular degeneration (AMD) are common neurodegenerative diseases associated with advanced age which share numerous pathological and mechanistic features \[[@pone.0223199.ref001]--[@pone.0223199.ref003]\]. Notably, both AD and AMD are histopathologically characterized by abnormal extracellular deposits. In AD, senile plaques composed primarily of amyloid-β (Aβ) form throughout the central nervous system (CNS) cortex and hippocampus \[[@pone.0223199.ref002]\]. These plaques are associated with neuronal dysfunction and cell death leading to progressive cognitive decline and memory loss. In AMD, lipoproteinaceous immune deposits called drusen form between the retinal pigment epithelium (RPE) and Bruch's membrane (BrM). The presence of drusen is associated with photoreceptor dysfunction and loss through either an atrophic ("dry" AMD) or exudative ("wet" AMD) process that results in progressive visual decline. Interestingly, senile plaques and drusen have been shown to have many common constituents, including Aβ, apolipoprotein E (APOE), and complement immune components \[[@pone.0223199.ref004]--[@pone.0223199.ref007]\]. These findings, along with shared environmental risk factors for AD and AMD including advanced age, cigarette smoking, and hyperlipidemia associated with a Western diet, have led to the hypothesis that these diseases may have a common underlying pathophysiology \[[@pone.0223199.ref001]\]. The presence of abnormal extracellular deposits, specifically senile plaques in AD and drusen in AMD, may result in chronic inflammation and oxidative stress that damage surrounding tissues; however, the elucidation of precise mechanisms and development of targeted treatments have remained elusive.
Demonstrating an association or lack of association between AD and AMD would provide the rationale for future work investigating common and divergent features of AD and AMD, as well as drive therapeutic translation across diseases. For example, it may suggest the addition of clinical endpoints to ongoing trials related to both conditions to determine whether therapeutic agents for one condition have any treatment effect on the other condition. It would also justify longitudinal studies aimed to better understand whether these diseases co-develop synchronously or whether one disease may represent a risk factor for the other. Furthermore, patient care would be impacted, as health care providers could inform patients of the likelihood of increased risk for associated disease given a single diagnosis of either AD or AMD.
Unfortunately, demonstrating an association between AD and AMD has proven difficult, and previous studies have generated conflicting results \[[@pone.0223199.ref008]--[@pone.0223199.ref014]\]. A major limitation of all prior work investigating the association of AD and AMD was the reliance on clinical criteria to diagnose AD. There are standardized clinical criteria for AD diagnosis (National Institute of Aging-Alzheimer Association Criteria), and research on imaging, blood, and cerebrospinal fluid (CSF)-based biomarkers to support the clinical impression is rapidly advancing \[[@pone.0223199.ref015]\]. However, the gold standard for AD diagnosis remains the neurohistopathology analysis. A recent study showed that the sensitivity and specificity of a clinical diagnosis of "probable AD", representing the highest level of confidence under the previously-used National Institute of Neurological and Communicative Disorders and Stroke (NINCDS) criteria, is approximately 71% when compared to the gold standard of neuropathological diagnosis \[[@pone.0223199.ref016]\]. Furthermore, the AD population in previous studies has potentially been biased towards milder disease, as patients with advanced dementia may be less likely to seek or receive ophthalmic or other healthcare \[[@pone.0223199.ref011], [@pone.0223199.ref017], [@pone.0223199.ref018]\].
Given these limitations of previous studies, the purpose of our work was to employ histopathological analysis of eye and brain autopsy specimens with the main goal to determine whether the comorbidity rate of AMD is significantly greater in patients with a neuropathological diagnosis of AD than in age-matched patients without AD. The main advantage of this methodology is that it employs the gold-standard pathological diagnosis for AD and evaluation of stages of AMD and AD. A second advantage is the ability to sample the full spectrum of AD. In addition to the primary goal, analysis of eye and brain pathology reports along with patient records enabled investigation of the association between AD and other degenerative diseases including glaucoma and other dementias.
Methods {#sec006}
=======
Histopathology specimens {#sec007}
------------------------
Pathologic specimens of eyes and brains of autopsy subjects aged 75 and above that presented to Duke University Medical Center were prepared and histologically analyzed by methods previously described \[[@pone.0223199.ref019], [@pone.0223199.ref020]\]. This age cutoff was chosen because the incidence of both AMD and AD increase significantly after age 75 \[[@pone.0223199.ref021], [@pone.0223199.ref022]\]. AMD severity was graded in a minimum of 5 eye sections per eye by a board-certified ophthalmic pathologist (AL) as previously detailed by Sarks \[[@pone.0223199.ref023]\] \[**[Fig 1](#pone.0223199.g001){ref-type="fig"}**\]. AD was graded in brain specimens by neuropathologists as previously described by Braak and Braak \[[@pone.0223199.ref024]\] and in accordance with the National Institute on Aging-Alzheimer's Association and the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) guidelines \[[@pone.0223199.ref012], [@pone.0223199.ref025]\]. For the purposes of our study, AMD was defined as Sarks grades III-VI, corresponding to intermediate to severe clinical AMD; the diagnosis of AMD was further classified as either "early" (Sarks grades III & IV) or "late" (Sarks grades V & VI). For cases with discrepant stages between the two eyes, the higher Sarks score for AMD was considered in the analysis. AD was defined as Braak and Braak Stage III-VI with "moderate" or "frequent" neuritic plaque density, representing CERAD 2 or 3 scores \[[@pone.0223199.ref012]\]. AD cases were further grouped into the categories of "Early AD" (Braak and Braak Stages III-IV) and "Late AD" (Braak and Braak Stages V-VI). The histopathologic diagnosis of advanced glaucoma was made when the following were observed: sparse retinal ganglion cells, diminished size of optic nerve axon bundles, and fibrotic thickening or "cupping" of the optic nerve \[[@pone.0223199.ref026]\]. "Non-AD dementia" included any other dementia with neuropathology features characteristic of frontotemporal degeneration, Lewy body disease, Parkinson's disease, or hippocampal sclerosis. Cerebrovascular atherosclerosis (CVA) was defined as the evidence of moderate to severe CNS atherosclerosis. The use of autopsy eyes and brain tissue for research was approved by the Institutional Review Board of Duke University. The need for participant consent was waived by the Institutional Review Board for this decedent research, as all research subjects are deceased, and all personal health information was used solely for research and was not disclosed to anyone outside Duke University without removing all identifiers.
![Diagram illustrating the Sarks stages of AMD.\
**a** Sarks I: normal control. **b** Sarks II: few small drusen. **c** Sarks III: early AMD with thin continuous sub-RPE deposits. **d** Sarks IV: intermediate AMD with thick sub-RPE deposits overlying degenerating choriocapillaries. **e** Sarks V: geographic atrophy. **f** Sarks V: choroidal neovascularization. **g** Sarks VI: disciform scar. NFL = Nerve Fiber Layer, GCL = Ganglion Cell Layer, IPL = Inner Plexiform Layer, INL = Inner Nuclear Layer, OPL = Outer Plexiform Layer, ONL = Outer Nuclear Layer, PR = Photoreceptors, RPE = Retinal Pigment Epithelium, BM = Bruch's Membrane, chor = choroid. Reprinted under a CC BY license, with permission from publisher SpringerNature, original copyright 2014 \[[@pone.0223199.ref027]\].](pone.0223199.g001){#pone.0223199.g001}
Chart review {#sec008}
------------
Patient data including demographic factors and comorbidities were obtained from the patient chart in the EPIC MaestroCare electronic medical record system of Duke University Medical Center. Patient comorbidities (cardiovascular disease, hypertension, diabetes, depression) were defined by their presence in the patient's medical problem list.
Statistical analyses {#sec009}
--------------------
An independent samples t-test was performed to compare the average age of patients in the AD and control study groups. A one-tailed z-test for two population proportions was conducted to determine any significant difference in AMD co-prevalence between the AD and control cohorts; two-tailed tests were used to assess all other demographic factors and potential disease associations. In addition, chi-squared tests of independence were performed to assess the comorbidity rate of AMD and AD, as well as the relationship between severity of each disease in the presence of the other. To further characterize the likelihood of AMD given an AD diagnosis, cases were divided into "Early AD" (Braak and Braak Stages III-IV) and "Late AD" (Braak and Braak Stages V-VI) cohorts, and odds ratios with 95% confidence intervals were calculated between each of these groups. Significance was defined as p\<0.05 for all analyses. The primary analysis was the comparison between AMD prevalence in AD vs. controls. As other analyses were considered exploratory, a correction for multiple comparisons was not applied in the secondary analyses. Analyses were performed in SAS 9.3 (SAS Institute, Cary NC).
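For readers who wish to reproduce the unadjusted odds ratios outside of SAS, the sketch below (illustrative only, not the authors' analysis code) computes a 2×2 odds ratio with a Wald 95% confidence interval; the example counts are the cell counts implied by Table 3 for the all-AD versus no-AD comparison and reproduce the reported OR of 0.76 (0.40-1.45).

```java
public class OddsRatioDemo {
    // a = exposed with outcome, b = exposed without, c = unexposed with outcome, d = unexposed without.
    static void printOddsRatio(double a, double b, double c, double d) {
        double or = (a * d) / (b * c);
        double seLogOr = Math.sqrt(1 / a + 1 / b + 1 / c + 1 / d);  // Wald standard error of log(OR)
        double lower = Math.exp(Math.log(or) - 1.96 * seLogOr);
        double upper = Math.exp(Math.log(or) + 1.96 * seLogOr);
        System.out.printf("OR = %.2f (95%% CI %.2f-%.2f)%n", or, lower, upper);
    }

    public static void main(String[] args) {
        // AD with AMD = 61, AD without = 54, control with AMD = 34, control without = 23 (from Table 3)
        printOddsRatio(61, 54, 34, 23);
    }
}
```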
Results {#sec010}
=======
Eye and brain samples from 115 AD and 57 control patients were identified and histopathologically staged for AMD and AD. Among the control cohort, 36 cases were Braak & Braak Stage 0, 7 were Stage I, and 14 were Stage II; within the AD cohort, 42 cases were Braak Stage III, 30 were Braak Stage IV, 31 were Braak Stage V, and 12 were Braak Stage VI \[**[Table 1](#pone.0223199.t001){ref-type="table"}**\]. Overall the age and race distributions between the two groups were similar (t = -0.211, p = 0.833; AD mean 86.8 years, ages 75--103, 86.1% white vs. control mean 86.6 years, ages 76--101, 78.9% white) \[**[Table 1](#pone.0223199.t001){ref-type="table"}**\]. The gender distribution was significantly different between the groups, with a higher preponderance of female patients in the AD than the non-AD group (p = 0.032, AD 66.1% female, control 49.1% female) \[**[Table 1](#pone.0223199.t001){ref-type="table"}**\]. The two cohorts were found to be similar with respect to co-morbidities including diabetes (p = 0.232) and hypertension (p = 0.353) \[**[Table 1](#pone.0223199.t001){ref-type="table"}**\].
10.1371/journal.pone.0223199.t001
###### Demographic characteristics of the cohort.
{#pone.0223199.t001g}
-------------------------------------------------------------------
Control (n = 57) AD (n = 115) p-value
----------------------- ------------------ -------------- ---------
**Demographic data**
Age (years, SD) 86.6 (5.9) 86.8 (5.7) 0.833
Gender (% female) 49.1 66.1 0.032
Race (% white) 78.9 86.1 0.232
**Systemic chronic**\
**disease data (%)**
Diabetes 21.0 13.9 0.232
Hypertension 43.8 36.5 0.353
Depression 7.0 13.0 0.235
Smoking 0.0 1.7 0.316
**Braak & Braak**\
**AD staging (%)**
Stage 0 63.2 \- \-
Stage I 12.3 \- \-
Stage II 24.6 \- \-
Stage III \- 36.5 \-
Stage IV \- 26.1 \-
Stage V \- 27.0 \-
Stage VI \- 10.4 \-
-------------------------------------------------------------------
There was no significant difference in AMD rate between AD patients and controls (z = 0.820, p = 0.794; 53.0% vs. 59.6%) \[**[Table 2](#pone.0223199.t002){ref-type="table"}**\]. Furthermore, AMD severity as determined by Sarks score was similar between AD patients and controls (χ^2^ = 2.96, p = 0.706), or between "early" and "late" AD (χ^2^ = 5.60, p = 0.848) \[**[Fig 2](#pone.0223199.g002){ref-type="fig"}**\]. Likewise, we did not find an association between Braak staging of AD and AMD (χ^2^ = 4.55, p = 0.602), even when stratified by "early" and "late" AD (χ^2^ = 5.16, p = 0.523) \[**[Table 3](#pone.0223199.t003){ref-type="table"}, [Fig 3](#pone.0223199.g003){ref-type="fig"}**\].
![Fig 2.](pone.0223199.g002){#pone.0223199.g002}
![Fig 3.](pone.0223199.g003){#pone.0223199.g003}
10.1371/journal.pone.0223199.t002
###### Ophthalmic and neuropathologic diagnoses.
{#pone.0223199.t002g}
Control (n = 57) AD (n = 115) p-value
----------------------------------- ------------------ -------------- -----------
**Ophthalmologic diagnoses (%)**
**AMD** **59.6** **53.0** **0.794**
Severe glaucoma 17.5 13.9 0.412
**Neuropathologic diagnoses (%)**
Non-AD dementia 61.4 53.9 0.351
Cerebrovascular Atherosclerosis 54.5 61.7 0.355
10.1371/journal.pone.0223199.t003
###### Association between AD stages and AMD.
{#pone.0223199.t003g}
n AMD Prevalence (%) Odds Ratio (95% CI) p-value
----------------------------- ---- -------------------- --------------------- ---------
**No AD (Braak 0-II)** 34 59.6 1.00 \-
**Early AD (Braak III-IV)** 37 51.4 0.72 (0.35--1.44) 0.350
**Late AD (Braak V-VI)** 24 55.8 0.85 (0.38--1.90) 0.700
**All AD (Braak III-VI)** 61 53.0 0.76 (0.40--1.45) 0.413
A secondary goal of this study was to employ the database of cases with matched eye and brain pathology reports to investigate the prevalence of glaucoma and other neuropathological diagnoses in the cohort of AD patients and non-AD controls over age 75 \[**[Table 2](#pone.0223199.t002){ref-type="table"}**\]. Comorbidity rates of advanced glaucoma, the only stage of glaucoma that can be reliability diagnosed via histopathology analysis, were similar between AD patients and controls (13.9% vs. 17.5%, p = 0.412). Likewise, frequency of cerebrovascular atherosclerosis and other dementias (Lewy body, Parkinson's disease, frontotemporal, or hippocampal sclerosis) did not differ significantly between the groups \[**[Table 2](#pone.0223199.t002){ref-type="table"}**\].
Discussion {#sec011}
==========
To our knowledge, this represents the first study exploring the association between AD and AMD in a population age 75 and older which employed histopathological analysis to provide a definitive postmortem diagnosis and staging of these two important aging diseases. Overall, the comorbidity rate of AMD observed in both the aged AD and the non-AD groups in this study was greater than 50%, and the average patient age at death was approximately 87 years. This was higher than previous estimates that the combined prevalence of early and late AMD in white Americans and Europeans aged 85 years and older approached 45% \[[@pone.0223199.ref001], [@pone.0223199.ref028]\]. Interestingly, our cohort was approximately 20% non-white. Though this difference can be attributed to the increased sensitivity of histopathology over that of the clinical exam in diagnosing AMD, it also raises the question of whether AMD is under-diagnosed in non-white populations.
Notably, there was no significant difference in AMD comorbidity between AD and non-AD subjects. This result supports the conclusion that, in spite of the mechanistic and histopathological similarities between AD and AMD, these diseases are not associated with each other at the individual level. Our results are in agreement with two recent studies that employed markedly different methodologies. In a large record linkage study of hospital admissions, Keenan et al. included 65,894 patients in the AMD cohort, 168,092 patients in the AD cohort and used a reference cohort of over \>7.7 million patients, all constructed from English National Health Service electronic records \[[@pone.0223199.ref011]\]. Most AMD hospital admissions in this study consisted of patients with neovascular AMD receiving intravitreal anti-VEGF therapy. This study did not find an increased risk of a subsequent hospital admission for AMD given a diagnosis of AD. In fact, there was a reduced risk of admission attributed to possible barriers to care for patients with dementia. A limitation of this study was that it mainly examined the potential association between AD and the neovascular form of AMD and not between the most common dry form of AMD. The authors further acknowledge that this AD cohort was likely to include other types of dementia, given the possible variability in diagnostic coding by various physicians. Strengths of this work included its large sample size and controlling for potential confounding factors such as socioeconomic status.
Similarly, in a large case-control study, Williams and colleagues found no increased prevalence of AMD in AD cases versus controls after correcting for age alone or age, smoking, and *APOE* ɛ4ɛ4 or ɛ3ɛ4 genotype \[[@pone.0223199.ref010]\]. This particular work included 258 AD cases defined by NINCDS criteria and 322 control patients. AMD grade was determined through masked examination of dilated retinal photographs centered on the macula using a modification of the system employed in the Rotterdam studies \[[@pone.0223199.ref029]\]. Weaknesses of this study included the fact that a greater proportion of AD cases (19.4%) than controls (12.1%) had ungradable retinal photographs. Indeed, if advanced AMD was over-represented amongst the ungradable AD group, the association between AD and AMD could have been under-estimated. Furthermore, a clinical diagnosis alone is not sufficient to determine the co-prevalence between AMD and AD. Clinically diagnosing AD is difficult and requires access to specialists and numerous imaging and laboratory tests \[[@pone.0223199.ref015]\]. Previous work supports that some individuals exhibit extraordinary cognitive resilience in that they do not present clinically with cognitive changes despite substantial burden of pathological AD changes in the brain. Furthermore, the AD population in previous clinical studies had a high likelihood of being biased towards milder disease, as patients with advanced dementia are less likely to receive coordinated care \[[@pone.0223199.ref011], [@pone.0223199.ref017], [@pone.0223199.ref018]\].
The Rotterdam study by Klaver et al., which measured the rate of incident clinically diagnosed AD in 1,438 patients with known AMD status over a four-year period, revealed that patients with advanced AMD had an increased risk of incident AD (RR = 2.1, 95% CI: 1.1--4.3). However, this increase became insignificant after controlling for age, gender, smoking, and atherosclerosis (RR = 1.5, 95% CI: 0.6--3.5) \[[@pone.0223199.ref008]\]. A number of additional studies have suggested a possible association between AMD and cognitive impairment; however, none of these publications specifically addressed the biases of a clinical diagnosis of Alzheimer dementia \[[@pone.0223199.ref009], [@pone.0223199.ref013], [@pone.0223199.ref014]\].
While an association between vision loss and cognitive impairment clearly exists, our current results and the two large case control studies cited above suggest a lack of a substantive co-prevalence between AD and AMD diagnoses \[[@pone.0223199.ref010], [@pone.0223199.ref011], [@pone.0223199.ref030]\]. Taken together, these findings support the notion that the epidemiological link between AMD and cognitive impairment arises because vision loss affects cognitive processes or the presentation of cognitive decline, rather than that the two diseases arise concurrently due to shared underlying pathology \[[@pone.0223199.ref031]\]. Genetic evidence also supports the notion that AD and AMD may have distinct pathophysiology in spite of their biochemical and histological similarities. AD and AMD appear to have essentially independent genetic risk profiles \[[@pone.0223199.ref001], [@pone.0223199.ref032]--[@pone.0223199.ref034]\]. Major AMD genetic risk associations including polymorphisms in complement factor H, locus LOC387715/age-related maculopathy susceptibility 2 (*ARMS2*), and high-temperature requirement factor A1 (*HTRA1*) are not associated to AD \[[@pone.0223199.ref001], [@pone.0223199.ref033], [@pone.0223199.ref035]\]. Furthermore, *APOE* genotype appears to have opposite effects on disease risk profiles between AD and AMD. The *APOE*-ɛ4 allele has been shown to increase the risk of AD and cardiovascular disease but to be protective for AMD, while the converse appears to be true for the *APOE*-ɛ2 allele \[[@pone.0223199.ref001], [@pone.0223199.ref003], [@pone.0223199.ref036]\]. Our results, which show no significant association between AD and AMD at the individual level, are concordant with these diseases having separate genetic risk profiles.
Our results have additional potential implications for AD and AMD therapeutics. On the presumption that these diseases share numerous mechanistic features, there has been optimism that therapies developed for one could be translated effectively to the other. Indeed, in a mouse model of AMD, an anti-Aβ therapy originally developed for AD prevented visual functional decline and the development of AMD-like pathology \[[@pone.0223199.ref037]\]. Despite their success in animal models, anti-amyloid therapies have been less promising in humans, with recent high-profile failures of phase 2 and 3 trials in AMD and AD; however, some trials are still ongoing in AMD and in patients at risk of developing AD but without symptoms of dementia \[[@pone.0223199.ref038]--[@pone.0223199.ref040]\]. Our current results indicate that successful translation of anti-amyloid therapeutics from AD to AMD, or vice versa, may be more challenging than previously suggested by animal studies.
In addition to the primary goal of evaluating the association between AD and AMD, we sought to fully utilize the dataset of matched eye pathology and neuropathology reports to assess potential associations of AD with other eye and CNS pathology. Similar to AMD, glaucoma is a common, multifactorial disease associated with aging that represents a leading cause of irreversible vision loss. Glaucoma is a progressive optic neuropathy that is generally considered to be a neurodegenerative disease given its characteristic features, such as the selective loss of neuron populations, specifically of the retinal ganglion cells, and trans-synaptic degeneration occurring in the lateral geniculate nucleus and visual cortex \[[@pone.0223199.ref041], [@pone.0223199.ref042]\]. Some researchers have postulated that glaucoma may be associated with CNS neurodegenerative diseases and AD in particular; indeed, a significantly increased prevalence of glaucoma in AD patients compared to non-AD controls has been suggested by small clinical studies conducted in Europe by Bayer et al. (26% in AD vs. 5% in controls) and in Japan by Tamura et al. (24% vs. 10%) \[[@pone.0223199.ref043], [@pone.0223199.ref044]\]. The current study did not show any significant difference in the prevalence of advanced glaucoma between AD patients and controls. Elucidating any potential association between glaucoma and AD remains an opportunity for further research.
Strengths and limitations {#sec012}
-------------------------
The current work was limited by its sample size, which was smaller than that of other clinical or health outcomes studies. This was due to the restriction to autopsy patients over age 75, an age cutoff chosen *a priori* with the goal of studying the population affected by diseases of aging such as AMD and AD, as well as to the requirement for cases in which both eye and brain specimens with neuropathology characterization were available. Second, we were not able to correct for certain known AD and AMD risk associations such as *APOE* genotype, as this history was unavailable for a large number of the autopsy cases analyzed. Third, this study relied on Braak and Braak staging to determine the severity of AD. Though this method is commonly used in exploratory research studies, future studies may benefit from leveraging other pathologic criteria such as the Thal stages of amyloid deposition, positron emission tomography, and *in vivo* amyloid imaging techniques currently in development \[[@pone.0223199.ref045], [@pone.0223199.ref046]\].
Finally, we noted a number of differences in demographics and comorbidities between the AD and control cohorts. The AD cohort comprised a significantly greater proportion of women. This difference was considered acceptable, as previous work has shown that while AMD prevalence increases exponentially with age, there is no significant difference by gender \[[@pone.0223199.ref047]\]. The incidence of both diabetes and hypertension was significantly higher in the control group than in the AD group. This was unexpected, as some studies have suggested that diabetes and hypertension are risk factors for AD \[[@pone.0223199.ref048], [@pone.0223199.ref049]\]. Since the comorbidity data relied on chart review, it is possible that AD patients were less likely than control patients to have hypertension or diabetes noted in their problem list, and that these comorbidities were not actually unequally represented between the groups. In addition, the evidence implicating hypertension or diabetes as risk factors for AMD remains weak and inconsistent \[[@pone.0223199.ref028], [@pone.0223199.ref050]\]. To control for the imbalances in demographics and comorbidities noted between the AD and non-AD group, a future direction would be the calculation of AMD prevalence from the electronic data in a cohort of control patients with the same demographics as the AD group.
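As a purely illustrative sketch of what such a prevalence calculation could look like, the snippet below computes a crude prevalence with a Wilson 95% confidence interval, using the counts reported in the authors' response to reviewers later in this record (12,140 AMD diagnoses among 486,395 patients aged 75 and above); the caveats discussed there about incomplete dilated eye examinations still apply, so this is not an estimate of true AMD prevalence.

```python
import math

def prevalence_with_wilson_ci(cases, n, z=1.96):
    """Crude prevalence and Wilson score 95% CI for a proportion."""
    p = cases / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, centre - half, centre + half

p, lo, hi = prevalence_with_wilson_ci(cases=12_140, n=486_395)
print(f"prevalence = {p:.2%} (95% CI {lo:.2%}-{hi:.2%})")  # about 2.5%
```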
The strengths of the current study lie in the use of an unbiased sample of patients presenting for autopsy, utilization of the gold standard histopathologic diagnosis of AD and ability to sample the full spectrum of both AD and AMD stages through pathology characterization.
In conclusion, this study represents the first comorbidity study exploring the association between AD and AMD that used an unbiased sample generated through histopathological analysis to definitively diagnose AD. It demonstrated a lack of association between AD and AMD, which is consistent with the findings of several recent clinical studies \[[@pone.0223199.ref010], [@pone.0223199.ref011]\]. Though AD and AMD seemingly share numerous features, including an association with aging, common risk factors, and extracellular lesions containing Aβ and complement components, our results suggest the pathophysiology of these diseases is likely distinct. The aforementioned genetic studies further support this conclusion \[[@pone.0223199.ref001], [@pone.0223199.ref003], [@pone.0223199.ref033], [@pone.0223199.ref035], [@pone.0223199.ref036]\]. Though we did not observe any association between AD and AMD, there may be other ocular manifestations of the pathologic process occurring in AD. Identifying retinal biomarkers of AD appears promising and remains an area of active research \[[@pone.0223199.ref051], [@pone.0223199.ref052]\]. With recent advances in ocular imaging, the retina could potentially provide a non-invasive means of earlier and more accurate diagnosis of AD.
We would like to thank Brenda Dudzinski, who assisted with the creation of the autopsy database of cases; Christine Hulette, MD, who performed the neuropathologic diagnosis and AD grading of brain specimens; and Alan Proia, MD, PhD, who performed the histopathologic evaluation and AMD grading of eye specimens used in this study.
10.1371/journal.pone.0223199.r001
Decision Letter 0
Yi Su, Academic Editor

© 2019 Yi Su

This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
10 Jul 2019
PONE-D-19-17299
Comorbidity of age-related macular degeneration with Alzheimer's disease: A histopathologic case-control study
PLOS ONE
Dear Dr. Lad,
Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
We would appreciate receiving your revised manuscript by Aug 24 2019 11:59PM. When you are ready to submit your revision, log on to <https://www.editorialmanager.com/pone/> and select the \'Submissions Needing Revision\' folder to locate your manuscript file.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.
To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: <http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols>
Please include the following items when submitting your revised manuscript:
- A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). This letter should be uploaded as a separate file and labeled 'Response to Reviewers'.
- A marked-up copy of your manuscript that highlights changes made to the original version. This file should be uploaded as a separate file and labeled 'Revised Manuscript with Track Changes'.
- An unmarked version of your revised paper without tracked changes. This file should be uploaded as a separate file and labeled 'Manuscript'.
Please note while forming your response, if your article is accepted, you may have the opportunity to make the peer review history publicly available. The record will include editor decision letters (with reviews) and your responses to reviewer comments. If eligible, we will contact you to opt in or out.
We look forward to receiving your revised manuscript.
Kind regards,
Yi Su, Ph.D
Academic Editor
PLOS ONE
Journal Requirements:
1\. When submitting your revision, we need you to address these additional requirements.
Please ensure that your manuscript meets PLOS ONE\'s style requirements, including those for file naming. The PLOS ONE style templates can be found at
<http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf> and <http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf>
2\. We would be grateful if you could clarify whether the information included in the Supporting Information has been published previously, and if so, whether you have permission of the original copyright holder to reproduce these tables and images. If this information has been published previously we ask that you remove this information and include a relevant citation to the previous publications.
3\. Please provide additional details regarding participant consent. Please state whether this was obtained from patients before death and/or from the next of kin. In the Methods section, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal). If the need for consent was waived by the ethics committee, please include this information. Please also explain why the requirement for consent was waived.
4\. We note that Figure 1 in your submission contains copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: <http://journals.plos.org/plosone/s/licenses-and-copyright>.
We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:
1. You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license.
We recommend that you contact the original copyright holder with the Content Permission Form (<http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf>) and the following text:
"I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (<http://creativecommons.org/licenses/by/4.0/>). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form."
Please upload the completed Content Permission Form or other proof of granted permissions as an \"Other\" file with your submission.
In the figure caption of the copyrighted figure, please include the following text: "Reprinted from \[ref\] under a CC BY license, with permission from \[name of publisher\], original copyright \[original copyright year\]."
2. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder's requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.
5\. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: <http://journals.plos.org/plosone/s/supporting-information>.
Additional Editor Comments:
This manuscript presented a retrospective pathology-based case-control study which investigates whether there is an association between Alzheimer's disease (AD) and age-related macular degeneration (AMD). The study did not find a statistically significant association between AD and AMD and suggested that any shared characteristics between the two diseases may be nondeterministic. This work received favorable reviews from expert reviewers, and minor revision is recommended to address the issues raised by the reviewers.
Reviewers\' comments:
Reviewer\'s Responses to Questions
**Comments to the Author**
1\. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer \#1: Partly
Reviewer \#2: Yes
\*\*\*\*\*\*\*\*\*\*
2\. Has the statistical analysis been performed appropriately and rigorously?
Reviewer \#1: I Don\'t Know
Reviewer \#2: Yes
\*\*\*\*\*\*\*\*\*\*
3\. Have the authors made all data underlying the findings in their manuscript fully available?
The [PLOS Data policy](http://www.plosone.org/static/policies.action#sharing) requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data---e.g. participant privacy or use of data from a third party---those must be specified.
Reviewer \#1: Yes
Reviewer \#2: Yes
\*\*\*\*\*\*\*\*\*\*
4\. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer \#1: Yes
Reviewer \#2: Yes
\*\*\*\*\*\*\*\*\*\*
5\. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)
Reviewer \#1: This is a study on a very important question in the field of neurodegeneration, both for AMD and for Alzheimer disease.

The study is rather well done, and the literature review and context placement are very well done. The authors note the limitations very well, which are unavoidable in a study of this nature that requires brain and eye autopsy specimens. Thus, despite the limitations, this study adds significantly to our knowledge.

One additional control that may be added is that of the AMD prevalence in a cohort of control patients from the electronic data record with the same demographics as the AD autopsy group, to cover for the imbalances noted by the authors in their control group. Of course this will have its own limitations, but it will be nice to know this information if possible.
Thank you
Reviewer \#2: This paper is a clearly written description of a methodologically rigorous study on an important topic. As the authors outline, the use of often elusive histopathological diagnoses makes this paper an impactful contribution to the field. It contains sufficient statistical information in general (though I wonder why the test statistic was given for chi-squared tests but not for t-tests), and thoughtful reflection in the discussion. I commend this write up as an excellent example of being as close to publication-ready as a submission can be. Well done!
\- On page 15, line 310, change \"Alike\...\" to \"Like\...\" or similar.
\- Some figures - if cost and space allow, which they may not - of examples of the histopathological findings would be engaging. Not essential.
\- In the section, at the end of pages 14 and beginning of page 15, on the lack of shared polymorphisms between AMD and AD, consider citing this: Am J Geriatr Psychiatry. 2015 Dec;23(12):1290-1296. doi: 10.1016/j.jagp.2015.06.005 Its of direct relevance but my knowledge of this paper is because I am one of the authors, so clearly have an interest to declare on this point! The authors may well be aware of this paper and have decided against citing it, which is obviously fine. I also declare that I am the first author cited in reference 10.
\*\*\*\*\*\*\*\*\*\*
6\. PLOS authors have the option to publish the peer review history of their article ([what does this mean?](https://journals.plos.org/plosone/s/editorial-and-peer-review-process#loc-peer-review-history)). If published, this will include your full peer review and any attached files.
If you choose "no", your identity will remain anonymous but your review may still be made public.
**Do you want your identity to be public for this peer review?** For information about this choice, including consent withdrawal, please see our [Privacy Policy](https://www.plos.org/privacy-policy).
Reviewer \#1: Yes: Demetrios Vavvas
Reviewer \#2: Yes: Dr Michael Williams
\[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link \"View Attachments\". If this link does not appear, there are no attachment files to be viewed.\]
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, <https://pacev2.apexcovantage.com/>. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email us at <[email protected]>. Please note that Supporting Information files do not need this step.
10.1371/journal.pone.0223199.r002
Author response to Decision Letter 0
20 Aug 2019
PONE-D-19-17299
Comorbidity of age-related macular degeneration with Alzheimer's disease: A histopathologic case-control study
We would like to thank the Academic Editor and the reviewers for taking the time to evaluate our manuscript and for their thoughtful, constructive comments. As requested, we have revised the manuscript. Below we detail our "point by point" responses to the reviews and we indicate the location of the modified material in the revised manuscript.
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter.
N/A
To enhance the reproducibility of your results, we recommend that if applicable you deposit your laboratory protocols in protocols.io, where a protocol can be assigned its own identifier (DOI) such that it can be cited independently in the future. For instructions see: <http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols>
N/A
1\. When submitting your revision, we need you to address these additional requirements.
Please ensure that your manuscript meets PLOS ONE\'s style requirements, including those for file naming. The PLOS ONE style templates can be found at
<http://www.journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf> and <http://www.journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf>
2\. We would be grateful if you could clarify whether the information included in the Supporting Information has been published previously, and if so, whether you have permission of the original copyright holder to reproduce these tables and images. If this information has been published previously we ask that you remove this information and include a relevant citation to the previous publications.
The tables included in the Supporting Information contain content that was previously published in the manuscripts referenced under 23-25. These references are cited appropriately in the Methods section (lines 115-120). The tables in the Supporting Information section were removed.
3. Please provide additional details regarding participant consent. Please state whether this was obtained from patients before death and/or from the next of kin. In the Methods section, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal). If the need for consent was waived by the ethics committee, please include this information. Please also explain why the requirement for consent was waived.
The need for participant consent was waived by the Duke IRB for this decedent research, as all research subjects are deceased, and all personal health information (PHI) was used solely for research and was not disclosed to anyone outside Duke University without removing all identifiers.
This information was added to the Methods section (lines 135-139).
We uploaded, as an "Other" file the rights license document granting permission to publish Figure 1 and the email confirmation from the copyright holder, SpringerNature. The copyright holder noted that, per legal department advice, SpringerNature is unable to sign any outside forms such as the Content Permission Form. However, the email communication states that we are able to reuse the figure in an open access publication under the Creative Commons Attribution License (CCAL) CC BY 4.0 with appropriate reference to the source in the figure legend.
In the figure caption of the copyrighted figure, please include the following text: "Reprinted from \[ref\] under a CC BY license, with permission from \[name of publisher\], original copyright \[original copyright year\]."
This text was included under lines 151-2.
2. If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder's requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information. If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.
N/A
5\. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: <http://journals.plos.org/plosone/s/supporting-information>.
N/A. The Supporting Information was removed, per \# 2 above.
Additional Editor Comments:
This manuscript presented a retrospective pathology-based case-control study which investigates whether there is an association between Alzheimer's disease (AD) and age-related macular degeneration (AMD). The study did not find a statistically significant association between AD and AMD and suggested that any shared characteristics between the two diseases may be nondeterministic. This work received favorable reviews from expert reviewers, and minor revision is recommended to address the issues raised by the reviewers.
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data---e.g. participant privacy or use of data from a third party---those must be specified.
The study is rather well done, and the literature review and context placement are very well done. The authors note the limitations very well, which are unavoidable in a study of this nature that requires brain and eye autopsy specimens. Thus, despite the limitations, this study adds significantly to our knowledge.
One additional control that may be added is that of the AMD prevalence in a cohort of control patients from the electronic data record with the same demographics as the AD autopsy group, to cover for the imbalances noted by the authors in their control group. Of course this will have its own limitations, but it will be nice to know this information if possible.
Thank you
We would like to thank the reviewer for the very supportive comments.
In order to attempt to fulfill this request, we completed a Duke Enterprise Data Unified Content Explorer (DEDUCE) search of the electronic data records of all normal, control patients seen at the Duke Medical Center over the past 37 years with the same demographics as the AD autopsy group (aged 75 and above). We found 486,395 patients. Among these subjects, only 12,140 also carried a diagnosis of AMD, consistent with a prevalence of 2.49%, which is much lower than, and not consistent with, estimates from prior studies (references 1, 27). This reflects the fact that only a minority of patients were seen at the Duke Eye Center for a dilated examination that would allow a diagnosis of AMD to be made. A true prevalence of AMD in this population could only be estimated if the analysis were restricted to patients who received both a dilated eye examination at Duke and a neurocognitive evaluation for AD. Unfortunately, these numbers are not available to us at this time. The reviewer's astute suggestion was added to the Discussion section as an important future direction (lines 372-375):
"To control for the imbalances in demographics and comorbidities noted between the AD and non-AD group, a future direction would be the calculation of the AMD prevalence from the electronic data in a cohort of control patients record with the same demographics as the AD group."
Reviewer \#2: This paper is a clearly written description of a methodologically rigorous study on an important topic. As the authors outline, the use of often elusive histopathological diagnoses makes this paper an impactful contribution to the field. It contains sufficient statistical information in general (though I wonder why the test statistic was given for chi-squared tests but not for t-tests), and thoughtful reflection in the discussion.
We thank the reviewer for the positive and constructive feedback.
The point on the test statistic is well taken, and for completeness, we are now reporting the test statistic for t-tests and z-tests in addition to chi-squared tests.
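For readers unfamiliar with the convention, "reporting the test statistic" simply means quoting the computed statistic (t, z, or chi-squared) alongside its p-value. The generic sketch below shows what that looks like using SciPy; the numbers are made up for illustration and are not the study's data, and the two-proportion z-test is written out by hand so that no additional package is assumed.

```python
import numpy as np
from scipy import stats

# Made-up group values for illustration only (not the study's data).
ad_age = np.array([82, 85, 79, 88, 90, 84, 86, 81])
ctrl_age = np.array([80, 83, 86, 78, 91, 82, 84, 79])

# Two-sample t-test: report the statistic together with the p-value.
t_stat, t_p = stats.ttest_ind(ad_age, ctrl_age, equal_var=False)
print(f"t = {t_stat:.2f}, p = {t_p:.3f}")

# Chi-squared test on a 2x2 comorbidity table.
table = np.array([[12, 88], [25, 75]])
chi2, chi_p, dof, _ = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {chi_p:.3f}")

# Two-proportion z-test, written out explicitly.
x1, n1, x2, n2 = 12, 100, 25, 100
p1, p2, pooled = x1 / n1, x2 / n2, (x1 + x2) / (n1 + n2)
z = (p1 - p2) / np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
print(f"z = {z:.2f}, p = {2 * stats.norm.sf(abs(z)):.3f}")
```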
I commend this write up as an excellent example of being as close to publication-ready as a submission can be. Well done!
- On page 15, line 310, change \"Alike\...\" to \"Like\...\" or similar.
"Alike" was replaced with "Similar to"
- Some figures - if cost and space allow, which they may not - of examples of the histopathological findings would be engaging. Not essential.
While a figure showcasing histopathological examples of the stages of AMD and AD would indeed be engaging, this is not feasible due to the many panels necessary and current space limitations. However, we believe that the diagram in Figure 1 concisely illustrates the Sarks stages of AMD, and the definitions of the Braak and Braak stages are included in the Methods section.
\- In the section, at the end of pages 14 and beginning of page 15, on the lack of shared polymorphisms between AMD and AD, consider citing this: Am J Geriatr Psychiatry. 2015 Dec;23(12):1290-1296. doi: 10.1016/j.jagp.2015.06.005 Its of direct relevance but my knowledge of this paper is because I am one of the authors, so clearly have an interest to declare on this point! The authors may well be aware of this paper and have decided against citing it, which is obviously fine. I also declare that I am the first author cited in reference 10.
We appreciate this suggestion and have now cited this informative reference.
Submitted filename: Response to Reviewers.docx
10.1371/journal.pone.0223199.r003
Decision Letter 1
17 Sep 2019
Comorbidity of age-related macular degeneration with Alzheimer's disease: A histopathologic case-control study
PONE-D-19-17299R1
Dear Dr. Lad,
We are pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it complies with all outstanding technical requirements.
Within one week, you will receive an e-mail containing information on the amendments required prior to publication. When all required modifications have been addressed, you will receive a formal acceptance letter and your manuscript will proceed to our production department and be scheduled for publication.
Shortly after the formal acceptance letter is sent, an invoice for payment will follow. To ensure an efficient production and billing process, please log into Editorial Manager at <https://www.editorialmanager.com/pone/>, click the \"Update My Information\" link at the top of the page, and update your user information. If you have any billing related questions, please contact our Author Billing department directly at <[email protected]>.
If your institution or institutions have a press office, please notify them about your upcoming paper to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, you must inform our press team as soon as possible and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact <[email protected]>.
With kind regards,
Yi Su, Ph.D
Academic Editor
PLOS ONE
Additional Editor Comments (optional):
Reviewers\' comments:
Reviewer\'s Responses to Questions
**Comments to the Author**
1\. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your \"Accept\" recommendation.
Reviewer \#1: All comments have been addressed
Reviewer \#2: All comments have been addressed
\*\*\*\*\*\*\*\*\*\*
2\. Is the manuscript technically sound, and do the data support the conclusions?
The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer \#1: Yes
Reviewer \#2: Yes
\*\*\*\*\*\*\*\*\*\*
3\. Has the statistical analysis been performed appropriately and rigorously?
Reviewer \#1: Yes
Reviewer \#2: Yes
\*\*\*\*\*\*\*\*\*\*
4\. Have the authors made all data underlying the findings in their manuscript fully available?
The [PLOS Data policy](http://www.plosone.org/static/policies.action#sharing) requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data---e.g. participant privacy or use of data from a third party---those must be specified.
Reviewer \#1: Yes
Reviewer \#2: Yes
\*\*\*\*\*\*\*\*\*\*
5\. Is the manuscript presented in an intelligible fashion and written in standard English?
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer \#1: Yes
Reviewer \#2: Yes
\*\*\*\*\*\*\*\*\*\*
6\. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)
Reviewer \#1: (No Response)
Reviewer \#2: Thank you for addressing all the comments made, congratulations on this informative study and write up
\*\*\*\*\*\*\*\*\*\*
7\. PLOS authors have the option to publish the peer review history of their article ([what does this mean?](https://journals.plos.org/plosone/s/editorial-and-peer-review-process#loc-peer-review-history)). If published, this will include your full peer review and any attached files.
If you choose "no", your identity will remain anonymous but your review may still be made public.
**Do you want your identity to be public for this peer review?** For information about this choice, including consent withdrawal, please see our [Privacy Policy](https://www.plos.org/privacy-policy).
Reviewer \#1: Yes: Demetrios G Vavvas
Reviewer \#2: Yes: Dr Michael Williams
10.1371/journal.pone.0223199.r004
Acceptance letter
23 Sep 2019
PONE-D-19-17299R1
Comorbidity of age-related macular degeneration with Alzheimer's disease: A histopathologic case-control study
Dear Dr. Lad:
I am pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.
If your institution or institutions have a press office, please notify them about your upcoming paper at this point, to enable them to help maximize its impact. If they will be preparing press materials for this manuscript, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact <[email protected]>.
For any other questions or concerns, please email <[email protected]>.
Thank you for submitting your work to PLOS ONE.
With kind regards,
PLOS ONE Editorial Office Staff
on behalf of
Dr. Yi Su
Academic Editor
PLOS ONE
[^1]: **Competing Interests:**The authors have declared that no competing interests exist.
[^2]: Current address: Department of Ophthalmology, Kittner Eye Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States of America
The Canadian Food Inspection Agency (CFIA), on August 11, 2022, gave notice to the importing industry that the organic import and admissibility requirements for dairy products of chapters 04, 09, 18, 19, 21, 22 and 35 were incorporated in the CFIA AIRS tool.
At this time, importers of organic products are required to submit a copy of the organic certificate (electronic copy) when declaring imports of organic products using the CBSA SWI Integrated Import Declaration (IID) database.
Under Part 13 of the SFCR, products must be certified as organic according to the Canadian Organic Standards. The SFCR also outlines the organic certification system known as the Canada Organic Regime. The purpose of the Canada Organic Regime is to regulate all parties involved in the certification of organic products (including operators, Certification Bodies and Conformity Verification Bodies) and to verify that all applicable regulatory requirements, standards and guidance documents are being met.
If you have any questions, please contact the Canada Organic Regime team at:
[email protected].
To learn more, view the CFIA webpage on organic products.
Please contact your Livingston account representative should you have any questions.

https://www.livingstonintl.com/cfia-requirement-changes-for-the-organic-certificate-for-dairy-products/
Psychology Position in the Neonatal Follow-up Program at Buerger (Main campus): The role of the Psychologist is to carry out psychological and developmental assessment of infants and young children enrolled through Neonatal Follow-Up Programs and to provide support for the families of these children. The Neonatal Follow-up Program (NFP) is made up of an interdisciplinary team of physicians, nurse practitioners, a social worker, physical therapists and psychologists. This current posting is for the NFP at Buerger, which enrolls infants from the neonatal intensive care units (NICUs) at CHOP and the Hospital of the University of Pennsylvania (HUP). NFP provides services to those children who were born prematurely or have other neonatal illnesses and spent time in the CHOP or affiliated NICUs. The services include full developmental and psychological assessments, medical, physical and neuro-motor evaluations by medical staff, and parent support through screening and social work counseling.
The role of the psychologist includes performing assessments, in-depth interpretation of assessments and findings which include coordination of medical, family and environmental history, providing appropriate feedback to family at the time of the visit, giving support and advice to families seen through the program regarding developmental, behavioral and other concerns for both at home activities and referrals to community services, and preparing reports for parents, community agencies/schools, and healthcare colleagues documenting findings, impressions, diagnoses and recommendations. Children in NFP range in age from 6 months to 6 years. Assessment battery is standardized with individualizations to meet the needs of specific children. The role of the psychologist will also include participation in interdisciplinary team meetings, participation in quality improvement projects, research visits as needed and collaboration with other psychologists in the program. There are opportunities for research and quality improvement projects as well.
The Psychologist I provides psychological services to patients throughout the CHOP inpatient and ambulatory enterprise; participates in the educational experience of trainees; establishes plans for and initiates professional growth and leadership within the Department of Child and Adolescent Psychiatry and Behavioral Science; and presents work at the local or regional level. In some settings, the psychologist may have a matrixed reporting relationship to a physician leader outside the Department.
Clinical Activity
- Practices effectively as an independent clinician.
- Responsive to feedback from supervisors and colleagues about clinical practice.
- Provides comprehensive assessment and treatment services to children and families, in inpatient and/or outpatient settings, and communicates with referral sources as indicated
- Performs clinical services at a percent of effort as designated by supervisor.
- Adheres to The Department of Child and Adolescent Psychiatry and Behavioral Sciences productivity standards.
- In settings where patients are scheduled into appointments, presents an EPIC schedule that rolls out, per department guidelines, delineating availability and anticipated vacation and conference time. Schedule must include enough openings to cover productivity expectations plus anticipated late cancellations/no shows.
- Assures that professional practice is in accordance with applicable professional and licensing body standards
- Follows all CHOP and Departmental policies and procedures regarding clinical and administrative activity associated with providing direct patient care
Collaboration
- Collaborates within the Department and with other Department/Division colleagues to assure that patients receive the kind of care required by patient circumstance.
- Responds to requests for consultation and service in a timely manner.
- Participates in The Department of Child and Adolescent Psychiatry and Behavioral Sciences committees
- Works collaboratively with the program’s intake coordinators and practice manager to assure there is sufficient scheduling availability and patients are scheduled appropriately
Teaching
- May provide supervision to trainees (including externs, interns, and/or fellows) within the Department’s psychology training programs
- Lectures and provides training to other disciplines (including child psychiatry)
- Participates in activities of the training faculty
Other
- Completes all mandatory education requirements, annual PPD testing, flu vaccine and other vaccinations as per CHOP policy
- Maintain Licensure including completion of required continuing education
- Establishes plans for professional growth with manager
- Participates in clinical program development
- Collaborates with internal staff to complete CHOP requirements to include but not limited to Medical Staff Application
- Member and or active participation in a DCAPBS or CHOP Committee
- Presents work locally or regionally; publishes or presents at least one paper /chapter yearly
- Participates in community service
- Participates in ongoing professional practice evaluations as a clinician and peer reviewer
Required Education: Earned Ph.D. or Psy.D. in Psychology from a program accredited by the American/Canadian Psychological Association (APA)
Required Experience:
- Completed postdoctoral training directly related to job role
- APA or CPA accredited internship training
- Completed post-doctoral training in child and adolescent focused psychology (Level I: 0-7 years)
- Excellent interpersonal skills
- Skill in exercising initiative, judgment, problem solving, and decision-making.
- Skill in developing and maintaining effective relationships with medical and administrative staff, patients and the public.
- Epic training
- Ability to communicate effectively in writing and verbally.
- Ability to communicate clearly.
- Ability to accept feedback and incorporate into professional growth and own performance improvement.
- Maintains strictest confidentiality.
All CHOP employees who work in a patient building or who provide patient care are required to receive an annual influenza vaccine unless they are granted a medical or religious exemption.
Children's Hospital of Philadelphia is committed to providing a safe and healthy environment for its patients, family members, visitors and employees. In an effort to achieve this goal, employment at Children's Hospital of Philadelphia, other than for positions with regularly scheduled hours in New Jersey, is contingent upon an attestation that the job applicant does not use tobacco products or nicotine in any form and a negative nicotine screen (the latter occurs after a job offer).
Children's Hospital of Philadelphia is an equal opportunity employer. We do not discriminate on the basis of race, color, gender, gender identity, sexual orientation, age, religion, national or ethnic origin, disability or protected veteran status.
VEVRAA Federal Contractor/Seeking priority referrals for protected veterans. Please contact our hiring official with any referrals or questions.

https://careers.chop.edu/job/Philadelphia-Psychologist-I-%28Neonatal-Follow-up%29-PA-19104/575425000/
In recent years, cinema has had to respond to a new push for social sensitivity: actors, actresses, directors and productions are asked to pay more attention to roles and diversity, as well as to casting choices, and to give minorities the chance to play characters that in the past would have been denied to them. This was also demonstrated recently by the victory of CODA at the Oscars: the American remake entrusted the deaf characters to deaf performers, while the French original, The Bélier Family, had been criticized above all for this aspect. In this moment of transition, many actors and actresses have found themselves having to apologize for playing a "politically incorrect" character. For instance, Jake Gyllenhaal has had to reckon with one of his most famous roles because it didn't represent the ethnicity of the character; a similar story applies to Zoe Saldana and Emma Stone, who ended up at the center of controversy for two of their famous films. For Scarlett Johansson and other stars, however, the problem was the sexual identity of the characters they played, or were meant to play.
FIND THE GALLERY AT THIS LINK OR BY BROWSING THE ATTACHMENT AT THE BOTTOM OF THE TEXT
https://sparkchronicles.com/10-actors-repent-of-a-politically-incorrect-role-2/
{To create} Order is freedom
My sister and I spotted this on Pinterest this past weekend and it really resonated with us. One doesn't often associate the words "fierce and original" with "steady and well-ordered". Being organised in life can often feel like an uphill, daily battle. What I have come to realise is that once you have your affairs, tasks and possessions in order, and have a system that you trust and find easy to use, continuing in that fashion is so much easier. For me, the beauty of this quote is that it tells us that creativity doesn't have to grow from chaos or desperation (which is an assumption I have leant on in the past). A simple example: if you think about painting, there is something so appealing about a fresh canvas, perfect lighting, paints laid out in order, and clean brushes. Going even deeper into that example, going to art classes and practicing those techniques eventually results in more freedom of expression because you are able to push the seeming boundaries of your capabilities and produce grander results. So, taking time to organise your life to accommodate practice, studying, having a clean ordered environment, aiming for consistency, actually leads to more creativity.
Go forth and conquer the suggestion that you are a disorganised person, too busy floundering in the resultant chaos to stop and develop your own unique ideas. Put things in order to allow yourself to create and be truly inspired by your own capabilities.
Source: Thank you to Eva Jorgensen of The Sycamore Street Press who hand lettered this beautiful quote and has inspired my sister and I to add more order to our lives.
Dr. Ranjan Shetty is a Cardiologist practicing at Manipal Hospital. He obtained his MBBS degree from Kasturba Medical College, Mangalore, his MD in Internal Medicine in 2005 from the prestigious All India Institute of Medical Sciences, and his DM in Cardiology from another prestigious institute, PGIMER Chandigarh. He has won several gold medals during his MBBS course. He was nominated the Best House Physician during his course at AIIMS and received the honour from the then President of India, Dr. A. P. J. Abdul Kalam. He is a specialist in coronary, congenital, structural & peripheral intervention. He has performed more than 3000 angioplasties and very complex coronary & non-coronary interventions. He is the pioneer in Asia for LAA appendage closures. He has the highest numbers in the Asia Pacific region & is the only proctor in South East Asia. He is known for imaging-guided PCI and has one of the highest levels of experience in IVUS-guided PCI. He is one of the subject experts in IVUS-guided PCI.
Visited For Cardiac checkup - General
Happy with: Doctor friendliness
He was a good listener and took an all-inclusive approach to dealing with my BP problem. He changed my medication as well. Let's see if this works out well; I will write a more detailed review accordingly.
Visited For Hypertension
Happy with: Doctor friendliness, Explanation of the health issue, Treatment satisfaction, Value for money
Doctor was very supportive & explained the treatment process in detail. Very much satisfied. I will recommend.
Happy with: Doctor friendliness, Explanation of the health issue, Treatment satisfaction, Value for money, Wait time
There was some increase in my cholesterol, so I consulted Dr. Ranjan. He is very polite, friendly, caring and patiently listens to our problems. The doctor clearly explained the issue, did not prescribe any medicine, said that medication is currently not required, and told me to reduce weight and follow a diet. I am very happy and satisfied with the doctor and I recommend him to others as well.
Visited For Coronary Angiogram
Happy with: Explanation of the health issue
He gave us a clear conclusion on my father's angiogram report. He explained the issue and the solution very clearly.
Visited For Heart Conditions, Angiogram
Hoped for better: Explanation of the health issue, Treatment satisfaction
I have visited almost 5 times, *** *** ******* ***** *** ****** *** ** ******* I don't recommend this doctor. He doesn't give even 5 minutes of his time to us; he comes and goes, and he is always in a hurry. I'm not satisfied at all.

Now thinking of visiting another cardiologist.
Visited For Implantable Cardioverter-Defibrillators (ICDs), Heart Conditions
Happy with: Doctor friendliness, Treatment satisfaction, Value for money
We took my dad to Manipal Hospital in a critical health condition. Doctor Ranjan Shetty handled the medical management and my father recovered. Later the doctor suggested going for CRTD device implantation to get rid of frequent hospitalisation, but we didn't opt for it immediately; after a few months my father fell sick again. This time we went ahead with the CRTD implantation. The surgery was successful and my father is really doing well and is active. In my view Ranjan Shetty is the best cardiologist in Bangalore. Earlier my father got treatment from other hospitals, but Ranjan Shetty is exceptional.
Visited For Cardiac Arrhythmias
Happy with: Doctor friendliness, Explanation of the health issue, Treatment satisfaction, Value for money
Dr Ranjan Shetty is extremely good and is absolutely recommended for any cardiac complication(s). My father-in-law was brought into emergency with arrhythmia in April 2018. From initial diagnosis to treatment, and from treatment to post-operative care, he was there with us every step of the way. He is a pioneer in his field and has orchestrated many complicated solutions, including incorporation of the Watchman device in our case. My father-in-law has been doing better ever since and we are absolutely thankful for the care and support received.
He didn't prescribe any medications, but I underwent some tests & procedures at the hospital during the diagnosis. I am feeling better after the consultation. The doctor was friendly while discussing the issues. With his clear explanation, I was able to understand what the cause is and how to get over it. It was a good experience in total.

I have completed my initial treatment and the follow-up on the results is awaited. Maybe after three months, I will have my next session. During the consultation, he was very nice and he explained everything in detail. It was a good experience in total.
Though it was my first consultation, it was significant and satisfactory in all aspects. Whatever tests he prescribed to me were as required for the diagnosis. *** ******* *** ******* ** ***** It took around 45 minutes to consult him because he had gone for rounds when I reached there. It was alright and I am happy with the entire experience.

https://www.practo.com/bangalore/doctor/dr-ranjan-shetty-cardiologist/recommended?specialization=cardiologist
The 1954 Convention Relating to the Status of Stateless Persons defines a stateless person as someone “who is not considered a national by any State under the operation of its law”. Statelessness is one of the major concerns in the Republic of South Sudan (hereinafter, South Sudan), and the United Nations High Commissioner for Refugees (UNHCR) is working with the Government of South Sudan to ensure access to nationality and nationality documentation by stateless persons and persons at risk of statelessness in the country. The purpose of this study is to collect and analyse current and reliable data on the present situation in South Sudan, in support of UNHCR’s efforts to address statelessness in the country.
Causes of Statelessness in South Sudan
In the wake of South Sudan's independence from the Republic of the Sudan, the latter's decision to revoke nationality from any individual qualifying for South Sudanese nationality has left many people at risk of statelessness. However, South Sudan is not party to either the 1954 Convention Relating to the Status of Stateless Persons or the 1961 Convention on the Reduction of Statelessness. Ambiguities in South Sudan's 2011 Nationality Act and associated Nationality Regulations, including the use of terms such as "indigenous", contribute to an increased risk of statelessness in South Sudan.
With regard to administrative and procedural risk factors, the Directorate of Nationality, Passports and Immigration (DNPI) suffers from a problematic lack of capacity, with various misinterpretations of the Nationality Act and its Regulations by DNPI officers undermining access to nationality documentation. In particular, some DNPI officers interpret the alternative conditions set out in Sections 8(1)(a) and 8(1)(b) as cumulative conditions for acquiring nationality by birth, thus requiring both conditions to be fulfilled despite the clear use of the word 'or' in the Nationality Act.
Problematically, possession of nationality documentation in South Sudan is widely seen as being synonymous with possessing a nationality, including among DNPI officers; in effect, lack of documentation calls nationality itself into question. The Nationality Regulations provide for two pieces of documentation: a nationality certificate which confirms that the holder is a South Sudanese national, and a national identity card which confirms the identity of the holder. The latter is the recognised personal identification document in South Sudan and can only be issued to individuals with nationality certificates. In practice, however, the DNPI is currently only issuing nationality certificates and has not yet started issuing national identity cards, as legislation regarding the national identity cards has yet to be passed into law.
To obtain a nationality certificate, applicants are requested to provide a birth certificate (or an age assessment if a birth certificate is unavailable), two passport-size photos, a photocopy of a witness's identity document, and a signed application form. Although not clearly stated in the legislation, applicants must also in practice provide a residence certificate and specification of blood group. Applicants are additionally required to pay for the issuance of the nationality certificate, and to undergo a formal interview by the DNPI before the nationality certificate can be issued.
Although the states of South Sudan are governed on the basis of decentralisation as per the Transitional Constitution of 2011, nationality certificates continue to be processed in Juba, causing lengthy delays in the processing of applications at the state level. While the DNPI aspires to maintain a presence in each of the country’s thirty-two new states, many offices are not operational due to logistical constraints and security concerns; the majority of applicants from field locations find themselves obliged to travel to headquarters in Juba in order to process their application, at significant financial cost. The breakdown in South Sudan’s capacity to register births has posed another challenge, necessitating formal age assessments as an alternative to birth certificates, pending the approval of the Civil Registration Bill.
Finally, contextual factors specific to South Sudan also contribute to the risk of statelessness. Widespread displacement resulting from the ongoing internal armed conflict undermines the ability of applicants to fulfil the requirements of applications for nationality certificates. Poverty also undermines access to nationality certificate procedures: in light of the financial crisis, the DNPI has recently increased fees in response to inflation, barring access to nationality certificates for an increasing number of potential applicants. Awareness of the importance of nationality certificates and the relevant procedures is also limited, especially in rural areas.
Populations at Risk of Statelessness
The DNPI’s limited awareness-raising efforts have concentrated predominantly on urban, educated segments of the population; despite work by mobile teams, populations in rural areas are less likely to be in possession of nationality certificates. Meanwhile, costs incurred throughout the application process can prevent vulnerable and low-income individuals from accessing nationality certificates, while displaced persons face further challenges in fulfilling the necessary requirements.
Challenges to access to nationality certification also relate to ethnicity. Trans-boundary communities such as the Madi and Acholi face further difficulties in proving their South Sudanese origins. The Nationality Regulations request that confirmation be provided that applicants from trans-boundary communities are indeed from the South Sudanese part of the community, which, in practice, includes additional recommendation letters and verification from local leaders.
Most problematically, certain nomadic pastoralist groups – such as the Falata – are systematically denied access to application procedures for nationality certification by virtue of being considered as non-South Sudanese by DNPI officers, mirroring challenges that the group has faced throughout its history in seeking nationality recognition in Sudan. This systematic denial of nationality certificates renders these groups effectively stateless, leaving them particularly vulnerable to abuse.
Impact and Mitigation
While lack of nationality documentation does not necessarily equate to statelessness, the impacts in South Sudan are often synonymous. Lack of nationality certificates exposes affected individuals to serious political, economic, and social deprivations, undermining access to basic rights and services. Nationality certificates are necessary to open a bank account, register for school, own property, seek formal employment, or vote in elections. Those who are unable to secure nationality certificates are also denied the protection afforded by this documentation, and are reported to face risk of arrest or forced …
UNHCR and partners have been working to reduce the risk of statelessness and increase access to nationality certificates; financial support for payment of fees associated with the nationality application procedure has proven particularly successful. Civil Society Organisations (CSOs) also have an important role to play in identifying and assisting persons at risk of statelessness, given their close ties to communities. However, their role continues to be limited, and support in the form of funding and capacity-building is needed to engage CSOs in the fight against statelessness. | https://citizenshiprightsafrica.org/a-study-of-statelessness-in-south-sudan/
Episode 119 – How to Reclaim Ownership of Your True Destiny with Nathan Kohlerman
These days we are conditioned to view the world through the lens of one individual, and that individual is universally accepted as being ourselves. In itself, this isn’t necessarily a problem, but without the right frame of reference it can lead us down a very limited, narrow path.
Reestablishing a connection with our true and divine selves is a massive undertaking, unpicking generations of mis-learned concepts and opening our minds to the full richness of the reality we exist in. It starts with you, with who you are and what you are capable of, but extends outwards in all directions, appreciating the majesty of the entire universe as ours to shape as we will.
Nathan Kohlerman has spent years on a journey to find the true essence of self, to establish a new lineage of love and hope and sovereign power over his own choices, to help drive all of us towards a better tomorrow by building a better today. He wants us all to reconnect with the world around us, to feel the power of nature, to immerse ourselves in the experience of divinity, and channel that awesome energy towards positive transformation.
He has been my friend, my brother, my teacher for many years, and I invite all of you to share in his wisdom with an open heart.
Nathan Kohlerman is a Spiritual Counselor, Masculine Alchemy Mentor, Embodiment Coach, Ordained Minister, and founder of NeuIntention®. Revolutionizing human potential through the mind, body, and soul, his multifaceted approach is intended to guide others on the path to awaken, heal, and transform through the power of self-realization, integration, and embodiment.
“In order to heal it, we must feel it.” – Nathan Kohlerman
Connect with Nathan Kohlerman: | https://ericbalance.com/ep119/ |
The ‘happiest place on Earth’ turns into a nightmare as people are locked in
The economic and social impact that globalization has had on the new coronavirus (COVID-19), as well as individual business and industry impact has been profound. The casino industry in Macau is no different, with casinos closing for over two months from February 2020.
1. What exactly is the “dynamic zero-COVID” policy that Macau follows?
Macau has implemented the “dynamic zero-COVID” policy to prevent the spread of COVID-19. The policy permits the locking down of certain areas, including the area close to Shanghai Disney. This is intended to stop people from gathering in large groups and to prevent individuals from coming into contact with each other.
2. Why are casino operators in Macau awaiting a decision by the government on new licenses?
Businesses across China are suffering from the spread of the coronavirus, and casino operators in Macau are among those affected. They are awaiting a decision from the government on how new permits will be granted. The outbreak was first detected in the city of Wuhan in central China and has since spread to other parts of the country, including the Special Administrative Region of Macau. Because of the outbreak, numerous companies in the region have been forced to close, including all of the casinos. Macau’s economy has suffered huge losses from the closure of its casinos: gaming plays a key role in the region’s GDP, and the impact of the lost revenue is being felt by many.
Quick Summary
The new coronavirus continues to cause widespread death and disruption around the globe, with no certain halt in sight. Many businesses have had to close down as authorities try to trace infected individuals. Sometimes a day out in the parks can turn into a nightmare that ends up threatening the wellbeing of those around you. This is the real-world scenario of the pandemic we are currently confronting. | https://newsquod.com/The-8216happiest-place-on-Earth8217-turns-into-a-nightmare-as-people-are-locked-in
• We conserve land and maintain trails on our coastal properties at Seaside Beach, Hare Creek Beach and Navarro Point. All of these places are uniquely beautiful, offering trails to one-of-a-kind beaches and opportunities for ocean, wildlife and wildflower viewing, as well as open space areas for all to enjoy.
• We manage trails within public access easements at 10 unique locations, including long coastal walks west of Highway 1, opportunities to explore tidepools in rocky coves, awesome sinkholes to peer into, stunning views of quaint coastal towns and working harbors, and sandy-beach walks. These trails are for the public to explore and enjoy.
• The Land Trust owns 450+ acres in the Noyo River headwaters, near the Skunk Train switchbacks, which we manage for old-growth and fisheries habitat. We are currently doing fuels reduction work on these lands, which is beneficial to both the forest and nearby Brooktrails residents.
• We partner with 16 private landowners, using conservation easements, each of which is uniquely crafted to conserve special lands. On these privately-owned properties, we meet with each landowner once a year to monitor the conservation easement.
A big part of our stewardship program at the Mendocino Land Trust involves removal of invasive plants from lands that we steward. We’ve had mixed success with our efforts -- bull thistle is nearly gone at Navarro Point, and iceplant no longer thrives at Hare Creek Beach. Other invading plants are harder to control, such as Italian thistle west of Highway 1 at Navarro Point, and English ivy at Hare Creek Beach. Our dedicated weed-pulling volunteers meet regularly to steward our coastal properties.
We have monthly stewardship workdays at our Hare Creek Beach and Navarro Point properties. If you’d like to help out, please contact Garrett Linck, Conservation Project Coordinator, at [email protected], or call 707-962-0470. | http://www.mendocinolandtrust.org/care/land-stewardship-1/ |
HTTP (Hypertext Transfer Protocol) is a protocol that is used to transmit data over the Internet. It is the foundation of the World Wide Web and is used to transfer HTML (Hypertext Markup Language) pages and other types of data between clients and servers.
HTTP operates at the application layer of the Internet Protocol Suite, which is a set of protocols that define how data is transmitted over the Internet. It allows clients, such as web browsers, to communicate with servers, such as web servers, and request and receive data.
HTTP uses a request-response model, which means that a client sends a request to a server, and the server responds with the requested data or an error message. An HTTP request is made up of a method, a URI (Uniform Resource Identifier), headers, and an optional message body; a response carries a status line, headers, and a body.
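To make these components concrete, here is a minimal sketch that sends a raw HTTP/1.1 GET request over a plain socket and splits the reply into its status line, headers, and body. It uses only the Python standard library; the target host (example.com) and port 80 are illustrative assumptions, and a real client would normally rely on a higher-level HTTP library instead.

```python
# Minimal raw HTTP exchange, for illustration only.
import socket

HOST = "example.com"          # illustrative host
REQUEST = (
    "GET / HTTP/1.1\r\n"      # method, URI and protocol version
    f"Host: {HOST}\r\n"       # headers
    "Connection: close\r\n"
    "\r\n"                    # blank line ends the headers; a GET has no body
)

with socket.create_connection((HOST, 80)) as conn:
    conn.sendall(REQUEST.encode("ascii"))
    response = b""
    while chunk := conn.recv(4096):   # read until the server closes the connection
        response += chunk

status_line, _, rest = response.partition(b"\r\n")
headers, _, body = rest.partition(b"\r\n\r\n")
print(status_line.decode())           # e.g. "HTTP/1.1 200 OK"
```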
WWW stands for World Wide Web, which is a system of interconnected documents and other resources that are accessed via the Internet. The World Wide Web is a vast collection of websites, web pages, and other digital resources that are connected to each other using hyperlinks.
Overall, HTTP is a crucial component of the Internet and is essential for the functioning of the World Wide Web. It is used by millions of people every day to access and interact with websites and other online resources. | https://www.unlimitedcomputing.no/news/what-is-http-and-what-stands-www-for/ |
At University of Wisconsin School of Medicine and Public Health’s Diversity Summit Thursday, the keynote speaker sketched the racial tensions and inequities present within the field of public health, citing studies from life expectancy rates to demographic income disparities.
Denise Rodgers, vice chancellor of Biomedical and Health Sciences at Rutgers University, said an “inconvenient truth” was that African Americans have the lowest life expectancies in the country.
“Racism is bad for your health,” Rodgers said. “Access to healthcare and services certainly contributes to the disparity [between life expectancies].”
Housing, geography and income all impact public health, Rodgers said. Even education can shorten or lengthen someone’s life. Graduating high school, for instance, can increase life expectancy dramatically, Rodgers said.
Disparities in educational background and infant mortality rates are aggravated by racial divisions, Rodgers continued. According to one study, infants born to black women with an advanced degree still die at twice the rate of those born to white women.
“I don’t think there are any of us in health or public health that can look at this graph and feel good about what we’re doing,” Rodgers said. “There is clearly something very, very wrong here, regardless of race and ethnicity.”
Rodgers spoke about the potential causes of these tensions and disparities. The racist history of the U.S. and the biases, both conscious and unconscious, shape societal attitudes and public policy today. Any attempt to repeal the Affordable Care Act would be a detriment to public health, especially for African Americans, Rodgers claimed.
To make matters worse, low-income geographic areas tend to discourage a healthy lifestyle. Working in a low-income area, Rodgers said she sees fast food restaurants — and not much else.
“Residential segregation also systematically shapes healthcare access, utilization and quality of the neighborhood, and healthcare system providers,” Rodgers said.
Household dysfunction that often arises from poverty also exacerbates poor public health. Goldstein said children who experience severe abuse are at risk of dying twenty years before their peers, no matter their race. Experiencing abuse as a child has a similar effect on health outcomes as racism does, Rodgers said.
Even within healthcare systems, African Americans are marginalized, Goldstein said. During the treatment of cancers and heart disease — which disproportionately affect African Americans — many healthcare professionals tend to treat African Americans differently. White patients are twice as likely to receive pain medications as black patients, for instance. | https://badgerherald.com/news/2019/01/24/rutgers-professor-discusses-racial-tensions-disparities-in-public-health-policy/ |
Studies were conducted in the Dominican Republic over two years on adult Keitt mango (Mangifera indica L.) fields to examine the long-term effect of chemical and organic fertilization programs on marketable fruit yield. The treatments were (a) 1.8 kg 15-15-15 (N-P-K)/tree, once a year; (b) 1.1 kg 15-15-15/tree, twice a year; (c) 1.4 kg 15-15-15/tree, once a year; (d) 1.8 kg 15-15-15/tree, once a year, plus 13.6 kg compost/tree; (e) 1.1 kg 15-15-15/tree, twice a year, plus 13.6 kg compost/tree; and (f) 1.4 kg 15-15-15/tree, once a year, plus 13.6 kg compost/tree. The results indicate that the application of 1.8 kg 15-15-15/tree, once a year, plus 13.6 kg compost/tree and of 1.1 kg 15-15-15/tree, twice a year, plus 13.6 kg compost/tree improved marketable fruit number during both harvest years. The addition of compost for two years increased fruit number by averages of 17 and 24% in comparison with the same treatments without compost.
Bielinski M. Santos, 2007. Effects of Adding Compost to Fertilization Programs on Keitt Mango. Journal of Agronomy, 6: 382-384.
Mango is one of the main fruit crops in tropical regions. The leading mango producers in the world are India (12,000 t), China (2,150 t), Mexico (1,550 t) and Thailand (1,250 t) (FAO, 2003). In the Caribbean and Central America, there is extraordinary potential for mango production because of the environmental and edaphic conditions within that subregion. Although most of the mango imports in the United States come from Mexico, there are significant production areas concentrated in Hawaii and south Florida.
Fertilization is critical to obtain satisfactory mango yield. Recommendations for N supply indicate that 400 g N/plant/year are needed for acceptable commercial yields (Chia et al., 1988; Wanitprapha et al., 1991; Xiuchong et al., 2001). Crane and Campbell (1994) suggested that N amounts could be increased depending on tree size and site conditions. In sandy soils, current fertilization practices raise environmental concerns about rapid N leaching to ground waters. Therefore, the determination of appropriate N application regimes is critical to reduce production costs and increase mango yield.
It has been widely known that organic matter content improves nutrient retention. Composts are defined as organic matter that has undergone partial thermophilic and aerobic decomposition (Raviv, 2005). These materials have been shown to be an important component in both conventional and organic crop production and contribute to waste recycling (Chong, 2005; Mikkelsen and Bruulsema, 2005). Their continuous use has been shown to improve N supply by acting as a slow-release agent (Hartz, 2006; Raviv, 2005), organic matter content and water holding capacity (Litvany and Ozores-Hampton, 2002), and soil biological activity and physical properties (Garcia and Hernandez, 1997; Jordahl and Karlen, 1993). Cattle manure is considered to have moderate to excellent characteristics for composting and is frequently used in fruit crops, such as citrus (Litvany and Ozores-Hampton, 2002; Raviv, 2005). However, there is scarce information about the combined effect of fertilization and compost on mango. Thus, the objective of this study was to examine the influence of chemical and organic fertilization programs on Keitt mango yield.
Field trials were conducted during 2000 and 2001 in a mango grower field located near Baní, Peravia, Dominican Republic. The soil at the experimental site was a sandy loam inceptisol with pH 6.2. Average yearly rainfall and temperature were 900 mm and 27.3°C, respectively. Keitt mango trees were in their fourth year in the field and were planted 4 m between plants and 8 m between planting rows.
Six treatments were arranged in a split-plot design with four replications, where the chemical fertilization programs were the main plots and the application of organic fertilizer (compost) constituted the subplots. Three chemical fertilization programs were established as follows: (a) 1.8 kg 15-15-15 N-P-K applied once a year; (b) 1.4 kg 15-15-15 applied once a year; and (c) 1.1 kg 15-15-15 applied twice a year. Compost levels were 0 and 13.6 kg/tree. The nutritional composition of the compost was 1.2, 0.9 and 2.1% N-P-K, respectively. Both fertilizer types were incorporated into the soil 60 cm around each plant. The application of 15-15-15 and compost occurred one month after the previous harvest. In the case of the treatments with two 15-15-15 applications, the second fertilization was carried out five months after the first one. Other crop management practices, such as irrigation, tree pruning and harvest, followed local mango recommendations.
Mango fruits were harvested during two seasons and classified into marketable (fruits with no blemishes and weighing at least 0.75 kg) and non-marketable. The influence of chemical and organic fertilization programs on marketable fruit yield was examined with analysis of variance (p = 0.05), and treatment means were separated with a Fisher's protected least significant difference (LSD) procedure (SAS, 2000).
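As an illustration of the kind of analysis described above, the sketch below fits a simple two-way ANOVA on hypothetical plot-level data with statsmodels. It is not the authors' code: the study used a split-plot design, which strictly requires a mixed model treating main plots as a random effect, and the file and column names here are assumptions.

```python
# Illustrative two-way ANOVA on hypothetical per-tree yields.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed columns: 'fruits' (marketable fruits/tree),
# 'fertilizer' (chemical program label), 'compost' (0 or 13.6 kg/tree).
df = pd.read_csv("mango_yields.csv")   # hypothetical data file

model = smf.ols("fruits ~ C(fertilizer) * C(compost)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-tests for main effects and interaction
```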
During both seasons, there were significant treatment effects on mango fruit yield. In 2000, the highest marketable fruit numbers were found with the application of either 1.8 kg 15-15-15/tree once a year or 1.1 kg 15-15-15/tree twice a year, in combination with 13.6 kg compost/tree (Table 1). These treatments had 85 and 83 fruits/tree, respectively. There was an average 17% yield reduction when the same 15-15-15 fertilization programs were used without compost. The two lowest mango marketable yields occurred with 1.4 kg 15-15-15/tree once a year, regardless of the addition of organic fertilizer.
The advantage of applying compost in combination with chemical fertilizer was also observed in 2001, where the treatments with organic fertilizer averaged approximately 19% more fruit yield than those without compost (Table 1). The highest mango marketable yields were obtained with 1.8 kg 15-15-15/tree once a year and 13.6 kg compost/tree and 1.1 kg 15-15-15/tree twice a year and 13.6 kg compost/tree, with 112 and 118 fruits/tree, respectively. These treatments were 24% higher than when no compost was applied. Similarly to the previous season, the lowest marketable mango yields were found with the two treatments with 1.4 kg 15-15-15/tree once a year.
Because the nutritional contribution of the compost is low (0.16, 0.12 and 0.28 kg N-P-K/tree), it is unlikely that the observed sharp yield increases were due to its nutrient content alone. Rather, these findings suggest that the beneficial effect of adding compost to regular chemical fertilization programs in mango could be caused by improved fertilizer retention in the soil and reduced leaching, thus increasing nutrient absorption, as suggested by previous studies (Hartz, 2006; Raviv, 2005).
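The stated nutrient contribution can be checked directly from the compost rate (13.6 kg/tree) and its 1.2-0.9-2.1% N-P-K composition; the short script below reproduces the figures quoted in parentheses.

```python
# Check of the compost treatment's nutrient contribution.
compost_kg = 13.6
npk_fraction = {"N": 0.012, "P": 0.009, "K": 0.021}   # 1.2-0.9-2.1 %

for nutrient, frac in npk_fraction.items():
    print(f"{nutrient}: {compost_kg * frac:.2f} kg/tree")
# N: 0.16, P: 0.12, K: 0.29 kg/tree (the paper rounds K down to 0.28)
```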
Further studies need to be conducted to characterize the nature of this response and to determine whether adding compost of mango has a long-term effect on microbial activity, organic matter content and fertilizer retention.
Chia, C.L., R.A. Hamilton and D.O. Evans, 1988. Mango. Commodity Fact Sheet MAN-3(A). Hawaii Coop. University of Hawaii Press, USA.
Chong, C., 2005. Experiences with wastes and composts in nursery substrates. HortTechnology, 15: 739-747.
Crane, J.H. and C.W. Campbell, 1994. The Mango. IFAS Fact Sheet HS-2. University of Florida, Florida.
Food and Agriculture Organization (FAO), 2003. World agriculture information center database. Consulted on 13 February 2003. http://www.fao.org/waicent/portal /statistics_en.asp.
Garcia, C. and T. Hernandez, 1997. Biological and biochemical indicators in derelict soils subject to erosion. Soil Biol. Biochem., 29: 171-177.
Hartz, T.K., 2006. Vegetabale production best management practices to minimize nutrient loss. HortTechnology, 16: 398-403.
Jordahl, J.L. and D.L. Karlen, 1993. Comparison of alternative farming systems. III. Soil aggregate stability. Am. J. Alternative Agric., 8: 27-33.
Litvany, M. and M. Ozores-Hampton, 2002. Compost use in commercial citrus in Florida. HortTechnology, 12: 332-335.
Mikkelsen, R.L. and T.W. Bruulsema, 2005. Fertilizer use for horticultural crops in the U.S. during the 20th century. HortTechnology, 15: 24-30.
Raviv, M., 2005. Production of high-quality composts for horticultural purposes: A mini-review. HortTechnology, 15: 52-57.
SAS Institute, 2000. SAS/STAT User's Guide. Software Release 8. SAS Institute, Cary, USA., pp: 234.
Wanitprapha, K., K.M. Yokoyama, S.T. Nakamoto and C.L. Chia, 1991. Mango Economic Fact Sheet No. 16. University of Hawaii Press, USA., pp: 4.
Xiuchong, Z., L. Guojian, Y. Jianwu, A. Shaoying and Y. Lixian, 2001. Balanced fertilization on mango in Southern China. Better Crops Int., 15: 16-20. | https://scialert.net/fulltext/?doi=ja.2007.382.384 |
The Aspen Art Museum's new Fashion Lecture Series brings designer Cynthia Rowley to town.
This summer the Aspen Art Museum celebrates another medium beyond contemporary art with its new Fashion Lecture Series, kicking off with designer Cynthia Rowley. “I love fashion, and I’ve wanted to do [a lecture series] forever,” says AAM CEO and Director Heidi Zuckerman of the decision to start the new fashion-focused programming. “Now seemed the right moment. Art, architecture and fashion all reflect vital aspects of culture that influence our perceptions. Providing access for audiences to participate in those discussions is at the heart of the AAM mission.”
Rowley is no stranger to the art world: Her husband owns a gallery in New York called Half Gallery, and she almost pursued an artistic path herself. “I was a painting major at the Art Institute of Chicago before switching over to fashion, which, to me, is not all that different,” says Rowley. “I think about the same things—inspiration, concept, line, form, color and composition. It’s always interesting to me to explore the fusion of art and fashion.”
Rowley, who had a pop-up shop in Aspen from February through September of 2018, is known for infusing elements of sport into her line (she is arguably responsible for taking wetsuits from pure function to serious fashion). “I live an active lifestyle, so function is really important to me from a design perspective,” she says. “At the end of the day, my goal is to inspire women to live an adventurous lifestyle and to always be open to trying new things.” Clearly, Rowley will be speaking to the right crowd in Aspen. Aug. 8, 5:30pm, free, 675 E. Hyman Ave. | https://mlaspen.com/Cynthia-Rowley-Returns-to-Aspen-in-a-New-Sphere
Rockefeller Philanthropy Advisors recently issued a set of reports delving into the time horizons that philanthropists adopt in framing their distributions. We talk to a senior RPA figure about the subject.
The pandemic has ratcheted up debate on whether philanthropists spend more resources in the short term or stay with steadily paying out funds over decades.
A few weeks ago, Rockefeller Philanthropy Advisors published a two-volume guide about the pros and cons of whether to donate resources rapidly or cap such transfers to ensure that they continue indefinitely. In philanthropy, a time horizon is the length of time over which a donor or foundation seeks to engage in philanthropic giving. It can be in perpetuity - meaning that there is no end date foreseen - or it can be time-limited, defined by a predetermined end date or triggering event. Time-limited philanthropy is also referred to as “limited-life,” “spend down,” “spend out,” “time bound,” “giving while living,” or “sunsetting.”
RPA issued two volumes of its study. The first volume was called Strategic Time Horizons in Philanthropy: Key Trends and Considerations. The second was called, Strategic Time Horizons in Philanthropy: Strategy in Action. The organization also issued a report of 12 case studies with philanthropists.
“More donors are looking for ways to create greater impact and build a legacy. And time horizons they operate under are a growing part of the discussion,” Olga Tarasov, director, Knowledge Development at Rockefeller Philanthropy Advisors, told Family Wealth Report.
There has been a “growing sense of urgency” around philanthropy because of COVID, social and racial justice issues, and climate change, she said. A number of organizations, such as the UK’s Queen Elizabeth Trust, have built approaches that involve time-limited giving.
At the other end of the scale, some foundations and funders like the idea of perpetuity because they want freedom to change direction in the causes they support. Some groups, such as the Ford Foundation, have time horizons of up to 40, 50 years or more, Tarasov said.
There are asset allocation/investment implications when managing the money that charities have, depending on the time horizons of their giving. A charity that thinks in terms of decades of existence might want to hold more illiquid investments – along the “Yale Model” approach pioneered by the endowment of Yale University – while a group with a more short-term approach will want to avoid locking investments up too heavily.
The Giving Pledge (Buffett, Gates, Bezos, others) is a particular example of how time horizons work. According to TGP website’s “about us” page, “Through joining the Giving Pledge, signatories commit to give the majority of their wealth to philanthropy. Many signatories have and will exceed that benchmark. Some make a series of very large gifts over a short period, while others establish a program of smaller, regular gifts distributed over many years. Each signatory’s approach to philanthropic giving is deeply personal.” Donors pledge to give away at least half of their wealth.
The time-limited philanthropy approach appears to be most popular in the US and Australia, Tarasov said.
There can be issues to resolve when different members of a family disagree about philanthropy, causes, and the time horizons, Tarasov said, and this is the sort of topic that organizations such as RPA can assist with.
Rockefeller Philanthropy Advisors advise on and manage more than $400 million in annual giving by individuals, families, corporations and foundations. It is a sponsor for more than 90 projects, providing governance, management and operational infrastructure to support their charitable purposes.
Among recent commentaries about philanthropy, see this article about the worlds of philanthropy and fine art and how they intersect. | https://www.familywealthreport.com/article.php?id=190832 |
New CCC advice offers no wriggle-room on aviation emissions
A new report from the Committee on Climate Change, published today, on how to deliver the ambitious climate targets agreed in Paris in 2015 identifies aviation as a ‘hard to treat’ sector and continues to caution against unlimited passenger growth.
The Government’s priority, the CCC advises, should be to act ‘with urgency’ to close the policy gap for achieving existing climate commitments. The Climate Change Act requires the UK to cut total emissions by 80% relative to their 1990 levels by 2050 in order to limit the risk of exceeding 2 degrees of warming. Today’s report reiterates the CCC’s advice that new policy is required to ensure that UK aviation emissions are limited to around 2005 levels by 2050 (37.5 Mt), implying no more than a 60% increase in passenger demand. The most recent Government forecasts predicted demand growth of 93% even on the assumption that aviation was exposed to carbon pricing and that no new runways were approved. The Airports Commission, in its analysis of possible airport expansion in the South East, predicted that emissions would increase further if a new runway were built, but offered no recommendations for how emissions and passenger growth could be limited to a sustainable level if expansion goes ahead.
Meeting the challenging goals of the 2015 Paris Agreement, the CCC’s new report says, will require emissions to fall to ‘net zero’ some time between 2050 and 2070. Since it will be impossible to eliminate CO2 emissions from sectors such as agriculture and aviation this will require significant deployment of carbon sinks and negative emissions technologies. But measures such as direct air capture and storage, and the development of carbon-storing materials will be challenging to deliver, and finding ways to reduce residual emissions from aviation, agriculture and industry is, the CCC advises, a priority. These measures could include, the report suggests, shifting demand to lower-emissions alternatives such as virtual conferencing in place of international travel.
The CCC remains cautious in its view of the potential for biofuel to deliver CO2 mitigation for aviation in the short to medium term. Given the likely ongoing scarcity of sustainable biomass, the report indicates, this should be used as efficiently as possible, with preference given to the use of wood in construction and of bioenergy with carbon capture and storage rather than to create biofuel for aviation. Nevertheless ‘substantial biofuel use in aircraft’ is cited as one of the few available options for bringing ‘hard to treat’ sectors in line with the net zero target beyond 2050. AEF recently published comment on anticipated proposals for policy incentives for aviation biofuel. | https://www.aef.org.uk/2016/10/13/new-ccc-advice-offers-no-wriggle-room-on-aviation-emissions/ |
Aluminum Sandwich Panels (also known as aluminum composite materials, ACM) are used in architectural applications, for example to cover the outside surface of buildings (the façade). ACM are widely used in Europe, the Middle East and parts of the former Soviet Union. ACM typically consist of two aluminum panels (i.e., sheets or layers) with a usual thickness of about 0.5 mm, and a polymer core composition between the panels containing flame retardants, with a usual thickness of about 3 mm. Normally, these ACM are produced via extrusion and calendering. In Europe, they have to comply with regulations relevant for construction, such as the important fire safety construction product regulations (CPR). Depending on the height of the building, either Euroclass B (for buildings up to 20 m tall) or Euroclass A2 (for buildings taller than 20 m) requirements need to be met.
Euroclass B panels have to pass the SBI test. This test is conducted in a room in which the samples (1.0×1.5 m + 0.5×1.5 m) are mounted at a 90° angle and exposed to a gas burner flame. The fire growth rate (FIGRA) must not exceed 120 W/s and the total heat released (THR 600s) must not exceed 7.5 MJ. Besides the SBI test, EN 11925-2 also needs to be passed, with a flame spread of no more than 150 mm within 60 seconds. The Euroclass B fire safety requirements are typically easy to achieve. To meet them, aluminum hydroxide (ATH) can be utilized as a flame retardant in the polymer composition at a loading of around 70-75% by weight.
The Euroclass A2 fire safety requirements are more difficult to achieve. Although Euroclass A2 panels also must pass the SBI test, with the same limits as Euroclass B panels, the more demanding hurdle is the bomb calorimeter test according to EN ISO 1716, entitled “Reaction to fire tests for building and transport products—Determination of heat of combustion”. The polymer core material has to have a calorific potential (PCS) of at most 3.0 MJ/kg. That typically means a polymer or organic content of at most 10 wt%, and an inorganic flame retardant loading of over 90 wt%. As disclosed in EP 2420380 A1, these high inorganic loadings cannot be extruded and thus cannot use the existing extrusion/calendering systems used in the industry. One must use an alternative system, such as the compression molding technology described in EP 2420380 A1. This is a distinct disadvantage, since the cost of installing new equipment is high.
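The relationship between organic content and the EN ISO 1716 limit can be illustrated with a rough mass-weighted estimate of the core's calorific potential. The heat-of-combustion values below are generic assumptions for the sake of the sketch (roughly 30 MJ/kg for the organic fraction, ~0 for the mineral filler), not figures taken from the text.

```python
# Back-of-envelope estimate of the gross calorific potential (PCS) of a
# filled polymer core, compared against the Euroclass A2 core limit.
def core_pcs(organic_wt_frac, organic_pcs=30.0, filler_pcs=0.0):
    """Mass-weighted PCS in MJ/kg of a polymer/filler blend (assumed values)."""
    return organic_wt_frac * organic_pcs + (1 - organic_wt_frac) * filler_pcs

for organic in (0.08, 0.10, 0.12):
    verdict = "within" if core_pcs(organic) <= 3.0 else "over"
    print(f"{organic:.0%} organic -> {core_pcs(organic):.2f} MJ/kg "
          f"({verdict} the 3.0 MJ/kg A2 core limit)")
```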
Accordingly, there is a need to formulate polymeric core compositions for ACM having over 90 wt % inorganic flame retardants such that the compositions can be extruded and are able to pass the Euroclass A2 fire safety tests.
Light is composed of different wavelengths, each with its own unique properties. The germicidal properties of ultraviolet (UV) light, part of the non-visible spectrum, can be harnessed to effectively sanitize air, water and surfaces.
At the appropriate wavelength and fluence (dose), exposure to ultraviolet light modifies or destroys the genetic material (DNA and RNA) in viruses, bacteria and mold, preventing replication.
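As a rough illustration of the dose concept, fluence is simply irradiance multiplied by exposure time; the irradiance and target dose in the sketch below are assumed example values, not specifications of this product.

```python
# UV dose (fluence) = irradiance x exposure time.
def uv_dose_mj_per_cm2(irradiance_mw_per_cm2, seconds):
    return irradiance_mw_per_cm2 * seconds  # mW/cm^2 * s = mJ/cm^2

irradiance = 0.5      # mW/cm^2 at the target surface (assumed)
target_dose = 30.0    # mJ/cm^2, an often-cited germicidal benchmark (assumed)
needed_seconds = target_dose / irradiance
print(f"{needed_seconds:.0f} s of exposure reaches {target_dose} mJ/cm^2 "
      f"at {irradiance} mW/cm^2")
```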
Our Ultraviolet Light Sanitizer employs UV-C light in the wavelength range of 230-280nm to inactivate pathogens. | https://www.uvsanify.com/ |
Among the many theories about which civilization first sailed to the Americas and discovered them, there is also the theory that the ancient Phoenicians were the first. This theory became popular in the 18th century and is closely connected with the petroglyphs on Dighton Rock which are still of unknown origin.
This theory isn’t as popular as the one that says the Norse made the discovery first, but it’s worth mentioning. Many scholars began offering ideas about the true origin of the inscriptions on the rock back in the 18th century. Ezra Stiles, Yale College’s seventh president and a theologian and author, claimed the inscriptions were in Hebrew.
Antoine Court de Gébelin, who is mainly known for the popularization of the Tarot, had his own idea about the rock. He believed that the inscription was made by Carthaginian sailors who commemorated their journey to the shores of Massachusetts.
A copy of the symbols on Dighton Rock.
In the 19th century, the theory that a group of Israelite people visited the New World was widely adopted in the Mormon community. Later, Ross T. Christensen, an American archeologist, speculated that the Mulekites, who are mentioned in the Book of Mormon, were probably of Phoenician ethnic origin. The Phoenician theory is also supported in a book written in 1871, by John Denison Baldwin, an American anthropologist.
In Ancient America, Baldwin wrote, “The known enterprise of the Phoenician race, and this so variedly expressed ancient knowledge of America, strongly encourage the hypothesis that the people called Phoenicians came to this continent, established colonies in the region where ruined cities are found, and filled it with civilized life.
It is argued that they made voyages on the ‘great exterior ocean,’ and that such navigators must have crossed the Atlantic; and it is added that symbolic devices similar to those of the Phoenicians are found in the American ruins, and that an old tradition of the native Mexicans and Central Americans described the first civilizers as ‘bearded white men,’ who ‘came from the East in ships’.”
Photograph of the Dighton Rock taken in 1893.
A stone tablet with an inscription that was supposed to be of Phoenician origin appeared in Brazil in the 1870s. The tablet was given to Ladislau de Souza Mello Netto, who was the director of the National Museum of Brazil at the time, and he immediately declared the artifact genuine.
The inscription allegedly told the story of some Sidonian Canaanites who visited the shore of Brazil. It was later found that the symbols that appear on the tablet were variations of letters that appeared in different periods over a span of 800 years. It was impossible for all the letters to appear on the same tablet at the same time, so the artifact and the inscription were dismissed as fake.
A few new artifacts appeared in the 20th century, which again spiked Phoenician or Semitic discovery theory. Bat Creek inscription was one of these artifacts. Cyrus Herzl Gordon, an expert in Near Eastern cultures and ancient languages, believed that this tablet was inscribed in Paleo-Hebrew.
Gordon thought that this was proof that Semitic people visited the continent prior to Columbus. Later, the Bat Creek inscription and another artifact, the Los Lunas Decalogue Stone, were shown to be forgeries, and Gordon’s claim was dismissed.
In 1996, Mark McMenamin, an American paleontologist, speculated that Phoenician sailors visited the Americas around 350 BC. He based his theory on some gold stater coins that were allegedly made by the state of Carthage. On the back of the coins was a map of the Mediterranean and another land on the west, across the Atlantic. McMenamin interpreted that land as the Americas but later discovered that those coins were actually a modern forgery.
Another form of written proofs that slightly goes in favor of the arrival of Phoenicians in the Americas can be found in Ptolemy’s Geography. Lucio Russo, an Italian physicist, mathematician, and historian of science, analyzed Ptolemy’s book and noticed that he gives the coordinates of the Fortunate Isles.
The Fortunate Islands were a group of legendary islands mentioned by various ancient Greek writers. Russo also noticed that the size of the world in Ptolemy’s Geography is smaller than what Eratosthenes measured. When he assigned the coordinates of the Fortunate Islands to the Antilles instead, the irregularities in Ptolemy’s map disappeared.
According to Russo, Ptolemy could have known about the Antilles from his source, Hipparchus, who lived in Rhodes. It is possible that Hipparchus heard about the Antilles from Phoenician sailors, who controlled the western Mediterranean in those days. This is a far-fetched idea, but still an interesting one.
Most of the modern-day scholars deny the idea that Phoenicians, Canaanites, or Carthaginians discovered the Americas first. Ronald H. Fritze, an American historian, says that although it was technically possible for those people to reach the Americas, it probably never happened: “No archaeological evidence has yet been discovered to prove the contentions of Irwin, Gordon, Bailey, Fell and others.
Since even the fleeting Norse presence in Vinland left definite archaeological remains at L’Anse aux Meadows in Newfoundland, it seems logical that the allegedly more extensive Phoenician and Carthaginian presence would have left similar evidence. The absence of such remains is strong circumstantial evidence that the Phoenicians and Carthaginians never reached the Americas.” Until some concrete evidence appears, this theory will remain only a fantasy. | https://www.histecho.com/theory-ancient-phoenicians-were-the-first-to-discover-the-americas/
The key to applying the 3Rs in the most effective and economical manner is planning.

Reduce

Eliminate waste before it is created. Reducing waste will cut the costs associated with handling, managing and disposing of waste materials, and it may even reduce material costs by improving the efficiency of the materials purchased. Reduction can save a contractor money. (Refer to the calculate potential page to determine how much you are throwing away.) Strategies for reducing waste are most effective when they are developed during the design and planning phases of home construction and understood by all employees and subcontractors. Three main strategies that a contractor can employ for reducing waste during the planning and construction phases are:

1. Design out waste
2. Purchase "Green"
3. Prevent on-site waste

Sources:
King County Solid Waste Division in Cooperation with Seattle Public Utilities, Contractor's Guide To Preventing Waste and Recycling, http://dnr.metrokc.gov/swd/bizprog/sus_build/ContrGde.pdf
Sustainable Sources, Sustainable Building Sourcebook: Construction Waste, http://www.greenbuilder.com/sourcebook/ConstructionWaste.html
Montana State University Extension Service, Pollution Prevention for Residential Construction, http://peakstoprairies.org/p2bande/construction/ContrGuide/section8.cfm

Reuse and Recycle

Materials that cannot be eliminated through design, procurement decisions, and on-site activities can be further removed from the waste stream by reuse or recycling. Again, planning is essential to effective reuse and recycling. The following are strategies to begin identifying reuse and recycling opportunities:

Planning and partnerships will make reuse and recycling more efficient. Below are a few considerations for reuse and recycling success:

1. Reuse / Salvage
2. Recycle

Additional Resources:
Hazardous Waste Minimization Checklist & Assessment Manual for the Building Construction Industry, California Environmental Protection Agency Department of Toxic Substances Control, May 1993. This workbook was developed to aid the construction industry in evaluating its operations in the interest of reducing the amount of waste generated. It guides the contractor through an evaluation of operations and reduction options using a checklist, tables and a rating system. Worksheets are also provided to assist in determining which options are the most cost-effective to implement. For a copy of the workbook, contact the Office of Pollution Prevention and Technology Department, P.O. Box 806, Sacramento, CA 95812-0806, or (916) 322-3670.

Residential Construction Waste: From Disposal To Management - A factsheet for builders discussing benefits and general methods for reducing, reusing and recycling C&D waste. This factsheet is an introduction to the builder's field guide of the same name offered by the NAHB Research Center; ordering information is provided at the end of the factsheet. National Association of Home Builders Research Center, by Peter Yost, 1997. - http://www.p2pays.org/ref/01/00173.htm

Waste Wise Update: | http://www.peakstoprairies.org/p2bande/construction/C&DWaste/options.cfm
Sennheiser Launches WiFi MobileConnect Jun 18, 2017
The MobileConnect system is built around the ConnectStation, a streaming server providing near latency-free multi-channel audio streaming via Wi-Fi. As MobileConnect operates over any existing Wi-Fi infrastructure, it is fast and cost-effective to install and integrates easily with existing network infrastructure and audio equipment, says Sennheiser.
By Diego Graglia, FI2W web editor
As soon as she took office Friday, Secretary of Labor Hilda Solís moved to reverse a rule affecting guest farmworkers that former President George W. Bush had modified in his last days in office.
The changes included eliminating duplication among state and federal agencies in processing applications, putting in place a new wage formula, and increasing fines for willfully displacing United States citizens with foreign workers.
Critics said Bush’s rules would push already poor wages even lower, reduce worker protections, and make it easier to hire foreigners without actually looking for American employees first.
Solís had been among the many critics of Bush’s decision, which was made in December but went into effect Jan. 17, three days before President Barack Obama was sworn in. At the time, then-U.S. Rep. Solís issued a statement calling the Bush rules “just the latest example of how out of touch the president is with working families, especially with Latino families that make up a large portion of the farmworkers in this country.”
On Solís’ first day in office, the Labor Department announced in a statement “the proposed suspension for nine months” of the rule. Solís said in the release:
Because many stakeholders have raised concerns about the H-2A regulations, this proposed suspension is the prudent and responsible action to take.
Suspending the rule would allow the department to review and reconsider the regulation, while minimizing disruption to state workforce agencies, employers and workers.
Bush’s changes had been called “a backstab to immigrants” and “a cheap shot.” The Republican administration said it sought to make the H-2A program –which grants visas under that name to temporary foreign farmworkers– less cumbersome for employers who had shunned it because of the large amounts of red tape they faced.
After Solís’ decision last week, pro-immigrant activists were satisfied.
“…this is important because the type of labor protections permitted under H-2A influences the labor protections that could be granted under an eventual immigration reform,” Angelica Salas, executive director of the Coalition for Humane Immigrant Rights of Los Angeles (CHIRLA), told La Opinión.
Farmworker Justice’s blog Harvesting Justice said, “The Bush Administration may have tried its best to leave a legacy of abuse for our nation’s farmworkers but the new administration is apparently not so willing to go along with it.”
The proposed H-2A rules suspension is open to a 10-day public comment period.
Growers are expected to oppose any new change, especially since many have placed orders for workers and are counting on them.
The Bush rules had reduced red tape, Jasper Hempel, executive vice president of the Western Growers Association, told The New York Times.
But he said the nation needed legislation, known as the AgJOBS bill, that would stabilize the farm labor situation by giving the more than one million illegal farm workers a path to legalization.
At her swearing-in ceremony Friday –which included an Obama-style oath fumble–, Solís made clear she intends “to be protector-in-chief of the nation’s workers,” CBS News‘ Mark Knoller wrote.
“To those who have for too long abused workers, put them in harms way, denied them fair pay – let me be clear: there is a new sheriff in town,” Solís said. | https://fi2w.org/on-first-day-new-labor-secretary-moves-to-reverse-bushs-guest-farmworker-rule/ |
Forensic science is a key component of criminal investigation and civil law worldwide. This broad-based field ranges over topics as varied as DNA typing, osteology, neuropathology, psychology, crime scene photography, ballistics, criminal profiling, and more. Elsevier provides forensics publications that cover all these topics, written by top authorities, to students, professors, researchers, and professionals. | http://scitechconnect.elsevier.com/category/forensic-sciences/page/2/ |
Sometimes a right kite is defined as a kite with at least one right angle. If there is only one right angle, it must be between two sides of equal length; in this case, the formulas given above do not apply.
Does a kite have a right angle?
The diagonals of a kite intersect at 90-degree (right) angles. This means that they are perpendicular. The longer diagonal of a kite bisects the shorter one.
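These two diagonal properties can be checked numerically for a concrete example. The kite below, with vertices listed in order, is a hypothetical shape chosen only for illustration.

```python
# Numeric check of the diagonal properties for one example kite.
A, B, C, D = (0, 2), (1, 0), (0, -3), (-1, 0)   # vertices in order

AC = (C[0] - A[0], C[1] - A[1])   # axis of symmetry (the longer diagonal here)
BD = (D[0] - B[0], D[1] - B[1])   # diagonal joining the other two corners

dot = AC[0] * BD[0] + AC[1] * BD[1]
print("perpendicular diagonals:", dot == 0)      # True

mid_BD = ((B[0] + D[0]) / 2, (B[1] + D[1]) / 2)  # midpoint of the shorter diagonal
# In this example A-C lies on the line x = 0, so it bisects BD
# if BD's midpoint falls on that segment.
print("longer bisects shorter:", mid_BD[0] == 0 and C[1] <= mid_BD[1] <= A[1])  # True
```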
Does a kite always have two right angles?
A square is a special rectangle that has all four sides congruent. A kite has two pairs of consecutive congruent sides. The angle between the two sides of one congruent pair could be a right angle, but the kite need not have more than one right angle.
What angles do kites have?
Angles in a kite
A kite is symmetrical. So it has two opposite and equal angles.
How many corners does a kite have?
A kite is a quadrilateral, a polygon that has four sides. In order to form the four corners of a kite, four points in the plane must be "independent", meaning that no three of them lie on the same straight line. But four corners do not always determine a kite in a single, unique way.
How do you identify a kite?
If two disjoint pairs of consecutive sides of a quadrilateral are congruent, then it's a kite (converse of the kite definition). If one of the diagonals of a quadrilateral is the perpendicular bisector of the other, then it's a kite (converse of a property).
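The first test can be turned into a small check for a quadrilateral given by four vertices listed in order around the shape. This is only a sketch: it uses a tolerance for floating-point comparison and assumes the four points already form a simple (non-self-intersecting) quadrilateral.

```python
# Check the "two disjoint pairs of congruent consecutive sides" test.
from math import dist, isclose

def is_kite(p0, p1, p2, p3, tol=1e-9):
    s = [dist(p0, p1), dist(p1, p2), dist(p2, p3), dist(p3, p0)]
    # The two disjoint pairings of consecutive sides:
    # (s0, s1) & (s2, s3), or (s1, s2) & (s3, s0).
    return (
        (isclose(s[0], s[1], abs_tol=tol) and isclose(s[2], s[3], abs_tol=tol))
        or (isclose(s[1], s[2], abs_tol=tol) and isclose(s[3], s[0], abs_tol=tol))
    )

print(is_kite((0, 3), (2, 0), (0, -1), (-2, 0)))   # True: a classic kite shape
```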
Does a kite have two pairs of parallel sides?
Kites have no parallel sides, but they do have congruent sides. Kites are defined by two pairs of congruent sides that are adjacent to each other, instead of opposite each other.
What are the characteristics of a kite?
Kite properties include (1) two pairs of consecutive, congruent sides, (2) congruent non-vertex angles and (3) perpendicular diagonals.
Are the opposite angles of a kite equal?
No, a kite has only one pair of equal angles. The point at which the two pairs of unequal sides meet makes two angles that are opposite to each other. These two opposite angles are equal in a kite.
Can a kite have 4 equal sides?
In Euclidean geometry, a kite is a quadrilateral whose four sides can be grouped into two pairs of equal-length sides that are adjacent to each other.
…
Kite (geometry)
| Type | Quadrilateral |
| Edges and vertices | 4 |
| Symmetry group | D1 (*) |
What is the sum of all exterior angles of a kite?
The sum of exterior angles in a polygon is always equal to 360 degrees. Therefore, for all equiangular polygons, the measure of one exterior angle is equal to 360 divided by the number of sides in the polygon. | https://skydivehayabusa.com/kitesurfing/best-answer-does-a-kite-always-have-a-right-angle.html |
The popular image of the Second World War remains largely unchanged from 1939 to today. With few exceptions, the war films and propaganda posters of the time express sentiments about the war that have become fixed “truths” which continue to be repeated with variations. The image of the First World War is radically different, and perhaps unique in the unusual nature of the way it has been absorbed into popular memory. Though it preceded the Second World War by a mere twenty years, the imagery dating from the war itself seems radically alien. Later films about the war continually adopt a narrative of failure and futility, in stark contrast to the typically “heroic” narratives of events twenty years later. Though the attritional battles of the Somme and Passchendaele are remembered, the war-winning battles of 1918 barely exist at all in popular memory. An almost wholly caricatured version of the conflict came to be dominant from the 60s to the 90s.
This talk will look at the changing popular image of the war, contrasted with that of its successor, looking at why certain types of image that seemed compelling at the time, seem so alien now, and how a particular narrative of the war became, as it were, entrenched.
Paul Barlow is senior lecturer in the History of art at the University of Northumbria. He is the author of Time Present and Time Past, a critical biography of the Pre-Raphaelite painter John Everett Millais. He has also written widely on other aspects of British art and culture, including on Ruskin, Carlyle and other Victorian writers. He is currently researching aspects of Celtic culture in Brittany, and representations of Shakespeare. | https://breezecreatives.com/projects/the-wednesday-lecture/paul-barlow-3 |