The utility model relates to an extended-range hybrid power loader, and belongs to the technical field of engineering machinery. The machine comprises a frame, a cab, a working device for shoveling, digging, loading and unloading materials, an air pump for air braking, an air storage tank communicating with the air pump, a steering system and wheels; the working device is hinged to the front part of the frame. The power device comprises a fuel tank, a range extender and a lithium battery box; the driving device comprises a driving motor, a transfer case, an auxiliary motor, a triple pump and a hydraulic oil tank; and the control system comprises a power supply system, a charging controller, a driving motor controller, an auxiliary motor controller and an air pump motor controller. The range extender serves as a generator to charge the lithium battery box: fuel oil in the fuel tank is converted into electric energy through the range extender and then charged into the lithium battery box, so the fuel oil utilization rate is increased. Meanwhile, the range extender automatically charges the lithium battery box, so the endurance mileage of the loader is increased. | |
Their study promotes further developments towards quantum computing and a deeper understanding of the foundations of quantum mechanics.
Entanglement is a fascinating property connecting quantum systems. Albert Einstein called it the “spooky action at a distance”. This bizarre coupling can link particles, even if they are located on opposite sides of the galaxy. The strength of their connections is behind the promising quantum computers, the dream machines capable of quick and efficient computations.
The team led by Rainer Blatt at the Institute of Experimental Physics of the University of Innsbruck has been working very successfully towards the realization of a quantum computer. In their recent study, these physicists exposed four entangled ions to a noisy environment. “At the beginning the ions showed very strong connections,” says Julio Barreiro. “When exposed to the disturbing environment, the ions started a journey to the classical world. In this journey, their entanglement showed a variety of flavors or properties.” Their results go far beyond what was previously investigated with two entangled particles, since four particles can be connected in many more ways. This investigation forms an important basis for the understanding of entanglement in the presence of environmental disturbances and of the boundary between the dissimilar quantum and classical worlds. The work has now been published in the journal Nature Physics.
As part of their study, the Innsbruck scientists have developed new theoretical tools for the description of entangled states and novel experimental techniques for the control of the particles and their environment. Their high-impact research is possible thanks to support from the Austrian Science Fund FWF, the European Commission and the Tyrolean industry.
Publication: Experimental multiparticle entanglement dynamics induced by decoherence. J. T. Barreiro, P. Schindler, O. Gühne, T. Monz, M. Chwalla, C. F. Roos, M. Hennrich, R. Blatt. Nature Physics, 27 September 2010. (DOI: 10.1038/NPHYS1781, http://dx.doi.org/10.1038/NPHYS1781)
| http://www.innovations-report.com/html/reports/physics-astronomy/quantum-physics-flavors-entanglement-162297.html |
Homemade Rice Pilaf – Quick, Easy, and Delicious:
When I was young, one of my favorite side dishes was rice pilaf. It came in a slim box with a portion of rice and a sachet of spices, dehydrated chicken stock, and goodness knows what else – all set to prepare with water on the stovetop. The results were salty, addictive, and fragrant. My brothers and I would fight over who got to finish the bowl on the dinner table. One box was never enough.
These days, I make pilaf from scratch – and you probably do, too, without realizing it. The principle behind pilaf is that rice, or another grain such as bulgur or farro, is sautéed to lightly toast the grains, and then steamed in a flavorful broth, along with spices and a few aromatics such as onion and garlic. When ready to serve, the rice is fluffed to separate the grains and prevent stickiness, and handfuls of fresh herbs, chopped almonds, or chilies are added for extra flavor, texture, and color. You can choose to keep the rice simple or add the garnishes selectively to your taste. I tend to pile them on, because they add sensational flavor and freshness, while nudging an unassuming side into a stand-alone dish. So, before you reach for a box of pilaf in the supermarket with a long list of multi-syllabic ingredients, remember that it’s really quite easy – and much cheaper – to make your own from scratch.
Homemade Rice Pilaf
Active Time: 10 minutes
Total Time: 45 minutes
Serves 6 as a side dish
2 3/4 cups chicken stock (or vegetable stock for vegetarian option)
3 tablespoons unsalted butter, divided
1 teaspoon salt
1/2 teaspoon sweet paprika
Generous pinch of saffron threads
1 tablespoon extra-virgin olive oil
1/2 cup orzo
1 small yellow onion, finely chopped, about 1/2 cup
1 garlic clove, minced
1 1/4 cups basmati rice
Optional garnishes:
1 scallion, white and green part thinly sliced
1 small red jalapeño, finely chopped
2 to 3 tablespoons coarsely chopped almonds
2 to 3 tablespoons chopped fresh cilantro or parsley
1. Combine the stock, 2 tablespoons butter, the salt, paprika, and saffron in a medium saucepan. Bring to a simmer and keep warm.
2. Heat the oil and melt the remaining 1 tablespoon butter in a deep skillet (with a lid) over medium heat. Add the orzo and sauté until light golden, about 2 minutes. Add the onion and sauté for about 1 minute, and then add the garlic and sauté until fragrant, about 30 seconds. Add the rice and continue to cook, stirring constantly to coat and lightly toast the rice, for about 2 minutes.
3. Pour in the stock and bring to a simmer. Reduce the heat to low and cover the pot. Cook, undisturbed, until the rice is tender and the liquid is absorbed, 20 to 25 minutes. Remove the lid and fluff the rice with a fork. Let stand for 5 to 10 minutes.
4. Serve with the garnishes sprinkled over the top. | https://tastefoodblog.com/2019/05/13/easy-rice-pilaf-recipe/?shared=email&msg=fail |
What features should you look for in an LPM support system?
When I discuss legal project management (LPM) with lawyers, it is usually they who first raise the issue of IT support for LPM. I have a strong legal IT background and have long been an advocate of applying IT to law (I began my commercial career developing legal expert systems and have since managed the development of legal workflow and practice management solutions). I do sometimes wonder, however, whether lawyers asking about IT support for LPM implementation amounts to displacement activity. Application software will not of itself make law firms more efficient, effective and responsive to client need. Software can help achieve these aims, of course, but it cannot achieve them alone. At some point human judgement and intervention will be called for. Hence lawyers still need to apply (and in most cases improve) their estimating, budgeting and management skills if they, and their clients, are to benefit from project-based legal service delivery.
There are quite a few systems available now which can be classed, broadly, as LPM support systems. Some systems are clearly marketed as LPM systems. Others are not, yet they too can be used to help with more explicit scoping, planning and execution of legal services. So what features should lawyers look for in a LPM support system?
The list below amounts to my high level wish list and I am not aware of any single system which does everything listed here:
1. If I were still practising as a solicitor (England and Wales) conducting civil litigation, then help with completing cost budgets required for multi-track cases would be high on my feature list.
As from 1 April 2013, civil litigators have been required to complete a detailed cost budget form – precedent form H – for the vast majority of multi-track cases where the total sum of damages sought is less than £2 million. Hence systems which could automatically complete a draft precedent form H, using data from previous similar cases, should be particularly welcomed by litigators. For this to happen there needs to be integration with, or the ability to extract data from, existing practice management systems (PMS) which hold all the data about previous cases.
I’d suggest the ability to use historic PMS data should be a fundamental feature of LPM support systems, and not seen as something required solely to complete precedent form H (see next point below).
2. LPM is not confined to litigators, so LPM support software should also help create budgets for typical matters found in other practice areas.
3. Automatic creation of standard case workflows (containing core processes and documents) alongside draft case budgets would also be very useful.
Firms which already have case management workflow in place may want the budget estimating software to integrate with those workflows or, looked at another way, they may prefer for the workflows to be developed so they (ie the existing workflows) can also help prepare draft case budgets as referred to above.
4. Once the draft data is in place (budgets and workflows), the software should allow for the tailoring of estimates and workflow tasks for each case. Although many cases may be similar, few are identical.
5. Ideally during the tailoring exercise, the software should also allow for ‘what if’ scenario planning and hence the ability to create several alternate draft budgets for any given matter (so that each draft can be discussed with the client).
6. In larger law firms particularly, a wide range of staff will be involved in planning, supporting and executing legal service delivery. Therefore, ideally, the software should have a number of ‘views’ presenting data in ways most relevant to different team members.
7. Some kind of dashboard or ‘highlighting’ facility, which can report matter progress and help identify areas of concern, would also be useful, if not essential.
This monitoring and reporting could perhaps be done at several different levels of granularity. Starting by focusing on individual cases and then showing aggregated views, reporting on groups of cases being run within particular departments or offices etc.
8. Perhaps this item should have been the very first point listed: LPM support software should allow for the production of client friendly reports and / or have the ability to let clients interact directly with matters as they are progressing. After all, the whole point of LPM is to improve the client experience.
Listing the client experience last illustrates nicely a concern I have with legal IT generally. Sometimes the sheer effort required to implement and roll out new systems can result in a lot of inward-looking activity. In the worst case, a lot of effort goes into deploying systems which make lawyers’ lives easier but do little for their clients. This is not a good state of affairs as regards any kind of legal IT system, but it would be especially unwelcome if LPM support systems were to have this effect.
| https://legalprojectmanagement.co.uk/what-features-should-you-look-for-in-an-lpm-support-system/ |
One role of the Development Office is to serve as a means for College of Sciences and Mathematics (COSAM) alumni to reconnect with their alma mater, former classmates, and professors. These alumni live down the street, across the nation, and around the world. What they have in common is a love for, and dedication to, Auburn University and COSAM, as well as their belief and support of our educational mission.
The loyalty and commitment of alumni, parents, and friends who give to the college are, and always will be, essential to our continued growth and progress. Our donors provide COSAM with reliable resources from private donations to help fund student scholarships, faculty support, technology, tutoring centers, lectureships, research, and other special projects. Alumni and friends who support COSAM ensure the continued enhancement of our academic programs. Ways in which you can support COSAM include:
Join the Dean's Society
Establish a scholarship fund in your name or a loved one's name
Establish a professorship in memory or in honor of a favorite professor
Remember the college in your estate plans
Be a sponsor for the Dean's Scholarship Golf Classic
Contact your friends and ask them to give!
Please contact the Office of Development with any questions about how you can become part of the COSAM family. We ask you to join us in building on the qualities that made Auburn University and the College of Sciences and Mathematics special for you.
Best wishes, and War Eagle! | http://auburn.edu/cosam/departments/alumni/about/ |
Virtually any architectural style can be found in Los Angeles, although the ones most widely identified with the region are Spanish Mission Revival and Craftsman, as epitomized by the California bungalow. Such renowned architects as Irving Gill, Frank Lloyd Wright, Richard J. Neutra, and R.M. Schindler did some of their most original work in Los Angeles in the first half of the 20th century. The abundant sunshine, attractive landscape, and lack of hardened aesthetic traditions have invited experimentation among private and public patrons. For decades, the streets sprouted vernacular buildings humorously designed to suggest their commercial uses. The hat-shaped Brown Derby Restaurant and the Tail o’ the Pup hot dog stand resembling the featured product were among many that caught the public’s fancy. The experimental Case Study Houses of Craig Ellwood and Charles and Ray Eames are still much studied by students. Until 1956, Los Angeles enforced a 140-foot (43-metre) building height limit (except for City Hall) so as to maintain a horizontal appearance. When the ban was lifted, skyscraper construction began.
Museums
Los Angeles has more than 200 museums. The Los Angeles County Museum of Art (LACMA), founded in 1910, is the premier fine arts museum. It contains 250,000 pieces of art and is the anchor for what is known as “Museum Row” on Wilshire Boulevard. Other important art museums are the Huntington Library, Art Collections, and Botanical Gardens (1919) in San Marino; the Norton Simon Museum of Art (1975) in Pasadena; the J. Paul Getty Museum, with locations at the Getty Center in Los Angeles (designed by Richard Meier; 1997) and the Getty Villa in Malibu (opened 2006); and the three locations of the Museum of Contemporary Art (MOCA; founded 1979)—MOCA Grand Avenue, designed by Isozaki Arata (1986), the Geffen Contemporary at MOCA (1984), in a building renovated by Frank Gehry, and MOCA Pacific Design Center (designed by Cesar Pelli and Associates), which opened in 2000. The Natural History Museum of Los Angeles County (1913) and its sister institution, the Page Museum–La Brea Tarpits (1977), are popular. Among the museums devoted to ethnic heritage are the California African American Museum (1984), the Japanese American National Museum (1985), and the Skirball Cultural Center (featuring Jewish culture and history; 1996). There are several museums associated with movie stars: humorist Will Rogers’s ranch in Pacific Palisades, the Museum of the American West in Griffith Park (formerly the Gene Autry Western Heritage Museum; 1988), and silent-film cowboy William S. Hart’s home in Newhall. Other museums are devoted to children, crafts, maritime, television and radio, military, automobile, aeronautic, and railroad history.
Sports and recreation
Angelenos are avid fans of nearly every imaginable sport. Four milestones in the city’s evolving sports culture were hosting the 1932 Summer Olympic Games, the arrival of the Dodgers professional baseball team (formerly of Brooklyn, New York) in 1958 and the Lakers men’s professional basketball team (formerly of Minneapolis, Minnesota) in 1960, and again hosting the Summer Games in 1984. Other regional professional teams include the Rams and the Chargers (gridiron football), the Angels (baseball), the Kings and the Ducks (ice hockey), the Clippers (men’s basketball), the Sparks (women’s basketball), and the LA Galaxy and Los Angeles FC (football [soccer]). In addition to professional franchises, Los Angeles also supports numerous amateur events and high school and college rivalries. The many sports venues—the Rose Bowl, Memorial Coliseum, Dodger Stadium, Inglewood Forum, and Staples Center—also attest to the city’s high interest in sports.
The city of Los Angeles has few neighbourhood parks but does possess the world’s largest urban park, Griffith Park, covering some 6.5 square miles (17 square km) of rugged mountainous terrain. Exposition Park, Hancock Park, and Elysian Park are among other popular city recreation areas. Of the regional parks, the most important is the sprawling 239-square-mile (619-square-km) Santa Monica Mountains National Recreation Area (1978), the largest such preserve in an American metropolis. Jointly managed by the U.S. National Park Service, the California Department of Parks and Recreation, and the Santa Monica Mountains Conservancy, the area includes some existing homes but restricts permanent new construction to protect the natural environment. Regional beaches attract millions of visitors yearly, requiring the services of as many as several hundred lifeguards on a given summer’s day.
Los Angeles revolutionized the theme park industry. From his Burbank studio, movie mogul Walt Disney created a “Magic Kingdom” that would extend the life of his popular cartoon characters into an amusement park. He opened Disneyland in Orange county in 1955 to instant acclaim. Disney’s venture inspired the creation of Universal Studios Hollywood, a theme park in Studio City that also draws millions of visitors yearly. | https://www.britannica.com/place/Los-Angeles-California/Architecture |
Introduction: Good vision is among pilots’ specific needs; visual acuity has therefore been the basic standard assessment to determine a pilot’s visual function and fitness to fly. With evidence of an increasing number of military pilots with refractive errors in the western region, it is important to obtain information on the visual acuity status of military pilots who routinely fly around the equator.
Method: Retrospective data of 147 pilots were used to determine the prevalence of visual acuity changes among three categories of RMAF pilot (fighter, transport and helicopter). Magnitudes of visual acuity changes were also recorded.
Results: The prevalence of visual acuity changes among RMAF operational pilots is 24.5%. Higher percentages of visual acuity changes are found among transport pilots (Rt=31.3%, Lt=24.3%), followed by helicopter (Rt/Lt=20.5%) and fighter pilots (Rt/Lt=14.7%); however, the differences between them were not statistically significant (χ² = 3.832, p = 0.147). 63.9% of the pilots have visual acuity of 6/9 for the right eye and 64.5% for the left eye. The highest recorded magnitude of visual acuity change is 6/31. Awareness of the need to correct poor vision is high among fighter (80%) and helicopter pilots (75%) but low among transport pilots (44.4%).
Conclusion: Pilots of all categories of military aircraft are at risk of developing visual acuity changes. Exposure to ionizing and non-ionizing radiation is a possible cause that needs further evaluation. A review of current vision protection and vision conservation policy should be undertaken by the organisation to ensure that damage to pilots’ vision is minimised and vision readiness is optimised. | http://e-mjm.org/2015/v70s1/mjm-sept-suppl-2208.html |
The Special Collections Department of the University of Louisiana at Lafayette University Libraries brings together a wide variety of materials representing the south and southwest Louisiana experience. Special Collections supports research for undergraduate and graduate students, faculty, and other scholars whose work relies on archival records, rare books, manuscripts, media, digital files, and other primary sources. The central mission of Special Collections falls in line with the mission of the University Libraries, which as an integral part of the University of Louisiana at Lafayette, is to support fully the instructional and research programs of the University by providing access to information through the teaching, acquisition, organization, and preservation of information resources in all formats to the University's academic community, the region, and the state.
Scope
University Archives
The University Archives houses the archival records of the University of Louisiana at Lafayette from its inception to the present. The mission of the University Archives is to preserve and provide access to the University’s records of permanent, historical, legal, fiscal, and administrative value.
These records include, but are not limited to:
- Office of the President
- Academic and administrative offices and units
- Annual and operative budgets
- Publications
- Self-studies
- Student organizations
- Artifacts
Acadiana Manuscripts Collection
The Acadiana Manuscripts Collection contains collections of personal or family papers, business or organizational records, photograph collections, oral histories, and much more related to the Acadiana region and south and southwest Louisiana. While most of the manuscript collections relate to the Acadiana region of Louisiana, several have broader scopes that complement the region and culture.
Among the subjects covered in these collections:
- Agriculture
- Oil industry
- Architecture
- Education
- Literature
- Local and regional history
- Politics
- Women’s history
Collections include:
- David Reichard Williams Papers
- Rice Millers Association Records
- Edith Garland Dupré Papers
- John M. Parker Papers
- Jefferson Caffery Papers
- Barnett Studio Photographs [Freeland Collection]
- Ollie Tucker Osborne Papers
- Mary Alice Fontenot Papers
- German Prisoners of War Collection
- J. Carlton James Oral History Collection
- Council for the Development of French in Louisiana (CODOFIL) Records
The Archives also houses reels of microfilmed Louisiana colonial records copied from repositories in Europe and North America.
Louisiana Room
The Louisiana Room is a division of Special Collections that contains a collection of materials relating to Louisiana and the University. Resources include agriculture, arts and literature, business and industry, education, history, politics and government, and graduate and undergraduate theses. The various materials are available through Louisiana documents, books, periodicals, maps, microforms, DVDs, CDs, phonograph records, and vertical files. Published and cataloged materials can be found through the Dupré Library Catalog, while uncatalogued resources (Louisiana state documents, maps, and vertical files) may be found in indexes linked through Special Collections.
Louisiana state documents are organized by Louisiana document number according to the subject. The Louisiana Map Collection contains state and local maps, both current and historical. Vertical files consist of clippings from various Louisiana newspapers that have been organized topically for quicker access. Indexes for these files may be browsed in three categories: General, University, and Biographies.
The Louisiana Room also includes an extensive Genealogy Collection (PDF) for researching family histories in Louisiana. A Genealogical Research Guide and Selected Bibliography (PDF) is available for user reference.
Rare Book Collection
The Rare Book Collection contains items published before 1900 and items that have intrinsic value such as limited editions or copies inscribed by the author. Topics include horticulture, architecture, and French literature and history. All holdings are listed in the Dupré Library Catalog and identified by the designation "Rare Book Room." A fully searchable list of the rare books can be found under the Rare Book Collection.
Digital Archives
The Digital Archives includes digitized collections of the Acadiana Manuscripts Collection and those donated to Special Collections. These collections include digitized and born-digital materials. Special Collections provides digitization services for research and scholarly use. Digitization efforts are utilized to preserve the University's archival holdings and to increase their accessibility for the community. Special Collections also produces digital exhibits that reflect parts of collections from the University Archives and Acadiana Manuscripts Collection.
UL Lafayette Institutional Repository
UL Lafayette's Institutional Repository (IR) is an online space that brings together and provides open access to the University's scholarship and intellectual content. The repository accepts scholarship in a wide variety of formats, including PDF documents, audio/video, and images. Scholarship includes research articles, class projects, presentations, digital collections, and much more.
Other Collections
U.S. Government Information
Edith Garland Dupré Library was congressionally designated a Federal Depository Library in 1938. This means the library can provide the general public with free access to government information, in accordance with Section 1911, Title 44 of the U.S. Code (PDF). As a selective depository, Dupré Library receives over 40 percent of selections available to depository libraries through the Federal Depository Library Program (FDLP) of the U.S. Government Publishing Office (GPO), known formerly as the Government Printing Office. Circulation of most documents is allowed. General and specialized reference, research, and instructional services are provided. Individual or group consultations and instructional sessions may be scheduled.
Government publications are located in the following areas:
- U.S. Government Documents Collection (1st floor near the Reference Desk) — These publications have SuDoc call numbers.
- Main Stacks (Standard shelving on all floors) — These publications have Library of Congress (LC) call numbers.
- Reference & Research Dept. Collection — 1st Floor
- Microforms Collection — 1st Floor
- Louisiana Room — 3rd Floor
- Search online publications, databases, or federal websites via U.S. Government Information.
More information can be found under Government Documents.
Cajun and Creole Music Collection
The Cajun and Creole Music Collection (CCMC) consists of over 9,000 commercial recordings, selected unpublished or field recordings, and other music-related research materials. Formats include both analog and digital media: 78rpm, 45rpm, and LP (33 ⅓ rpm) records, 8-track tapes, audio-cassette tapes, CDs, VHS tapes and DVDs. The expanding collection also contains books, periodicals, photographs, artifacts, and other archival materials. The many different genres and styles of the Creole and Cajun musical cultures of Southwest and parts of Southeast Louisiana can be found in the CCMC.
Microforms
The Microforms Department contains back issues of newspapers, periodicals, and other items in various formats. Microfilm and microfiche are the most common formats.
Microform readers are available with reading, printing, and other features. Materials owned by the University, or obtained through Interlibrary Loan for University of Louisiana at Lafayette users, may be printed free of charge. Materials from other sources may be printed for $0.25 per page.
There are also several databases and digital applications for historical newspaper materials. Most of these resources are available to University of Louisiana at Lafayette users only.
The Microforms Department is located around the corner from the Reference Desk on the 1st floor. Locate microform materials in the Dupré Library Catalog. Visit the Reference Desk for assistance.
Types & Formats
Special Collections acquires a wide variety of materials in both physical and digital formats. Materials include institutional records, personal papers, genealogy, newspapers, manuscripts, government publications, audiovisual recordings, microforms, maps, graphic images, ephemera, monographic series, serials, and theses/dissertations. Special Collections will not accept digital materials in outdated formats that cannot be supported within the department. If obsolete formats are transferred from a University of Louisiana at Lafayette division, Special Collections may recommend it be taken to a different repository.
Citations and References
For information regarding reference and citation style types for archival materials, view the Special Collections Reference Citations Research Guide. The guide provides basic formats and examples for three common styles: American Psychological Association (APA), Chicago Manual of Style, and Modern Language Association (MLA).
Privacy
Some materials in Special Collections may contain sensitive or confidential information protected under federal and/or state privacy laws and regulations. Special Collections takes steps to identify and in some cases remove this kind of information. Therefore, materials may need to be reviewed by Special Collections staff prior to access. Information may include but is not limited to educational, medical, financial, and personnel records (i.e. Social Security numbers, bank account information, credit card numbers, drivers’ licenses, employment, and medical records, student work, and more).
Users are advised that the disclosure of certain information pertaining to living individuals may have legal implications. Users who find sensitive or confidential information in any collection agree to immediately notify a Special Collections staff member. Users also agree to make no notes or other record of privacy protected information if found within the collections and further agrees not to publish, publicize, or disclose such information to any other party. Users assume all responsibility for infringement of right to privacy in use of these materials and agree to indemnify and hold harmless Special Collections, the University of Louisiana at Lafayette, its employees, and agents against all claims, demands, costs, and expenses arising from the use of Special Collections materials.
Clientele
Special Collections is open to all researchers within and outside the University. All researchers must produce proper identification (for the University community, their Cajun Card; for outside community, a picture ID such as a driver’s license or passport) and complete, or have on file a current registration form, which is provided by Special Collections staff. When requesting archival materials, a request form needs to be filled out for each individual collection.
Staff & Services
Special Collections is made up of three professional faculty and three library specialists. Each employee helps with assisting users in finding sources for research needs. Users can make requests via in-person, email, phone, mail, and live chat. Email and mail requests are routed to the Reference Archivist, while the employee assigned at the reference desk answers live chat. Only Special Collections staff is allowed to retrieve materials from the archives and Louisiana Room (refer to the Jefferson Caffery Reading Room User Policy for more information). Special Collections reserves the right to not fulfill requests based on time, distance, cost, lack of vital information, privacy concerns, or condition of the requested material.
Geographic Areas
Special Collections acquires materials related to the Acadiana region and south and southwest Louisiana. On rare occasions, Special Collections will acquire non-Louisiana documents that either are related to Acadiana or complement the current collections.
Accessions
For all donations, donors need to complete and sign a University Libraries' Deed of Gift Form (PDF). Within this agreement, physical property rights and legal title of donated materials are assigned/transferred to Special Collections and Edith Garland Dupré Library, the University of Louisiana at Lafayette. It is the intent of Special Collections to make the materials available for research on an unrestricted basis, and donors have the option to allow materials to be digitized for preservation purposes and/or inclusion on the University’s Institutional Repository (IR) and the Louisiana Digital Library. Donors may also set limiting conditions on donations and can choose how copyright is transferred. All Deeds of Gift are signed by the donor and submitted to the University's Office of Operational Review for approval.
Departments and units within the University may transfer materials to the University Archives. They must fill out and sign an Internal Transfer Form (PDF). This form describes the records being transferred, the departments they originate from, and any confidentiality or privacy conditions. This form is approved by the Head of Special Collections.
Special Collections may decline certain gifts if they duplicate already-held materials, if their condition would require extensive preservation, or if they fall outside the scope of its collections.
Deaccessions
Items may be deaccessioned from collections for various purposes. In order to deaccession items, a Deaccession Form needs to be filled out, explaining what the item(s) is, what collection it came from, the purpose for deaccessioning, and the method of removal. The methods of removal include returning the item(s) to the donor(s) and discarding; this decision must be made based on the original Deed of Gift and/or Transfer Agreement. If discarding, the staff member recommending deaccession needs to state how the item(s) will be discarded and must include the name of someone who will witness the discarding. The staff member must sign the form and get signed approval from the Head of Special Collections, the Assistant Dean of Technical Services, and ultimately the Dean of the Libraries. Once all three signatures are acquired, the item(s) can be removed from the collection.
Loans
All materials in Special Collections are non-circulating, meaning they cannot be taken out of the department. Certain materials may be loaned out on a case-by-case basis. In these cases, users must complete loan request forms with signed permissions by the Head of Special Collections. Users must return the materials by a decided-upon date. If requesting Louisiana Room materials, the Reference Archivist must sign the request forms in addition to the Head of Special Collections. Users accept full responsibility for any loss or damage incurred. No reproductions or other disposition may be made of this material without the written permission of the Head of Special Collections. Items may not be displayed or reproduced without the express written consent of the Head of Special Collections and in keeping with federal copyright laws and/or donor agreements (view the University’s Digitization and Photoduplication Policies and Procedures for more information). If the Head of Special Collections is not present to sign forms, then forms must be signed by the Assistant Dean of Technical Services. If the Assistant Dean of Technical Services is not present, then the forms must be signed by the Dean of the Libraries. Users must request loans and sign forms in person at Special Collections; loan requests will not be accepted for remote users.
Digitization & Photoduplication
Special Collections offers photocopying and digitization services. Users may photocopy and scan materials in the Jefferson Caffery Reading Room, or they may request the Digitization Archivist to provide digital copies. To photocopy materials, users must go to the Circulation Desk to obtain a Copy Card. Special Collections will not provide a card to photocopy. For digitization requests, users must complete a Digitization Request Form. The request is then forwarded to the Digitization Archivist.
View Dupré Library’s Digitization and Photoduplication Policies and Procedures for policies on scanning, copyright, fees, and publishing of Special Collections materials.
Records Management
Records management is carried out by the Head of Special Collections. View the University’s Records Management Policy for more information.
Acquisitions
Special Collections materials are acquired almost exclusively through donations. The department does not purchase collections or items unless there is an unusual significance that is vitally complementary to the archives. The department accepts materials on loan or deposit, though usually with the understanding that they will be donated at a specified later date. The department also encourages monetary donations through the University of Louisiana at Lafayette Foundation to help with acquiring needed supplies for processing and preservation.
Affiliations
Special Collections is closely aligned with the following:
Special Collections is also a member of the Louisiana Digital Consortium and contributes materials to the Louisiana Digital Library.
Contact
Direct any questions regarding this policy to: | https://library.louisiana.edu/collections/special-collections/special-collections-policy |
IRS Regulations and Other Guidance on Business Interest Expense Limitation
A business interest expense is the cost of interest on a business loan used to maintain business operations or pay for business expenses. Read Sec. 163(j), the limitation on business interest expense deductions, to learn more.
Business interest expense (BIE) deductions are limited to the sum of:
• Taxpayer’s business interest income (BII);
• 30% of the taxpayer’s adjusted taxable income (ATI); and
• Taxpayer’s floor plan financing interest expense
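As a worked illustration of the sum above, here is a minimal sketch in Python (not tax advice; the function name and figures are hypothetical) of how the Sec. 163(j) limitation could be computed:

```python
def sec_163j_limit(bii: float, ati: float, floor_plan_interest: float) -> float:
    """Sketch of the Sec. 163(j) limit: BII + 30% of ATI + floor plan
    financing interest expense. ATI is treated as zero if negative
    (an assumption: the 30%-of-ATI component is never negative)."""
    return bii + 0.30 * max(ati, 0.0) + floor_plan_interest

# Example: $1M of BII, $10M of ATI, no floor plan financing interest
# -> business interest expense deduction capped at $4,000,000.
print(f"${sec_163j_limit(1_000_000, 10_000_000, 0):,.0f}")
```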
Business interest expense deduction limitation does not apply to:
• Certain small businesses whose gross receipts are $26 million or less,
• Electing real property trades or businesses,
• Electing farming businesses, and
• Certain regulated public utilities.
For qualified residential living facilities, Notice 2020-59 is a proposed revenue procedure that provides a safe harbor allowing taxpayers engaged in a trade or business that manages or operates qualified residential living facilities to be treated as a real property trade or business solely for purposes of qualifying as an electing real property trade or business, which is permitted to elect out of the business interest expense limits under Sec. 163(j)(7)(B).
Sec. 448(c) provides the gross receipts test that applies to determine whether a taxpayer is a small business exempt from the business interest expense deduction limitation. A taxpayer meets this test if the taxpayer has average annual gross receipts for the past three tax years of not more than $25 million, adjusted for inflation.
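A minimal sketch of this test (hypothetical helper; the $26 million default is the inflation-adjusted threshold mentioned above):

```python
def is_exempt_small_business(receipts_last_3_years, threshold=26_000_000):
    """Sec. 448(c) sketch: average annual gross receipts over the prior
    three tax years must not exceed the (inflation-adjusted) threshold."""
    return sum(receipts_last_3_years) / len(receipts_last_3_years) <= threshold

print(is_exempt_small_business([24e6, 25e6, 28e6]))  # avg ≈ $25.67M -> True
```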
The IRS’s final and proposed regulations (REG-107911-18) address:
• how to calculate the interest expense limitation,
• what constitutes interest for purposes of the limitation, | https://www.maemura.com/irs-regulations-and-other-guidance-on-business-interest-expense-limitation/ |
“Oil is big business.” A classic example of this is the Texaco-Pennzoil court case, which appeared in the book Making Hard Decisions and in a subsequent case study by T. Reilly and N. Sharpe (2001). In 1984, a merger was hammered out between two oil giants, Pennzoil and Getty Oil. Before the specifics had been agreed to in a written and binding form, another oil giant—Texaco—offered Getty Oil more money. Ultimately, Getty sold out to Texaco.
Pennzoil immediately sued Texaco for illegal interference, and in late 1985 was awarded $11.1 billion—an enormous award at the time. (A subsequent appeal reduced the award to $10.3 billion.) The CEO of Texaco threatened to fight the judgment all the way to the U.S. Supreme Court, citing improper negotiations held between Pennzoil and Getty. Concerned about bankruptcy if forced to pay the required sum of money, Texaco offered Pennzoil $2 billion to settle the case. Pennzoil considered the offer, analyzed the alternatives, and decided that a settlement price closer to $5 billion would be more reasonable.
The CEO of Pennzoil had a decision to make. He could make the low-risk decision of accepting the $2 billion offer, or he could decide to make the counteroffer of $5 billion. If Pennzoil countered with $5 billion, what are the possible outcomes? First, Texaco could accept the offer. Second, Texaco could refuse to negotiate and demand settlement in the courts. Assume that the courts could order one of the following:
• Texaco must pay Pennzoil $10.3 billion.
• Texaco must pay Pennzoil’s figure of $5 billion.
• Texaco wins and pays Pennzoil nothing.
The award associated with each outcome—whether ordered by the court or agreed upon by the two parties—is what we will consider to be the “payoff” for Pennzoil. To simplify Pennzoil’s decision process, we make a few assumptions. First, we assume that Pennzoil’s objective is to maximize the amount of the settlement. Second, the likelihood of each of the outcomes in this high-profile case is based on similar cases. We will assume that there is an even chance (50%) that Texaco will refuse the counteroffer and go to court. According to a Fortune article, the CEO of Pennzoil believed that should the offer be refused, Texaco had a chance to win the case with appeals, which would leave Pennzoil with high legal fees and no payoff. Based on prior similar court cases and expert opinion, assume that there is also a 50% probability that the court will order a compromise and require Texaco to pay Pennzoil the suggested price of $5 billion. What are the remaining options for the court?
Assume that the probabilities of the other two alternatives—Pennzoil receiving the original total award ($10.3 billion) or Pennzoil getting nothing—are almost equal, with the likelihood of the original verdict being upheld slightly greater (30%) than the likelihood of reversing the decision (20%). Evaluate the expected payoff and risk of each decision for Pennzoil.
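One way to organize the calculation is as a two-stage expected-value computation over the decision tree implied above. The following sketch uses only the probabilities and payoffs stated in the problem and is one possible reading, not an official solution:

```python
# Court outcomes in billions of dollars, with the assumed probabilities
court = {10.3: 0.30, 5.0: 0.50, 0.0: 0.20}
ev_court = sum(payoff * p for payoff, p in court.items())  # 5.59

# Counteroffer branch: 50% Texaco accepts $5B, 50% the case goes to court
ev_counter = 0.50 * 5.0 + 0.50 * ev_court                  # 5.295

print("EV(accept $2B offer)  = 2.000")
print(f"EV(counter with $5B)  = {ev_counter:.3f}")         # 5.295
```

On expected value the counteroffer dominates, but it carries real risk: with probability 0.5 × 0.2 = 0.1 Pennzoil ends up with nothing, whereas accepting the offer guarantees $2 billion.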
| http://www.solutioninn.com/oil-is-big-business-a-classic-example-of-this-is |
As the Department implements new policies aimed at providing better service, one of our main priorities is to hold ourselves accountable to the public in a system that is fair, transparent, and maintains the highest standards of professionalism that Chicago police officers represent.
In addition to the reforms we have already put into place, and have planned for the future, we will continue to evaluate and improve our processes to ensure Departmental integrity and consistency.
Reforms Completed
- Equipping every police officer on regular patrol with body cameras a year ahead of schedule – the largest deployment in the United States.
- A Department-wide standard for accountability in sustained allegations against officers – This will ensure that disciplinary measures that are implemented will be fair and consistent for all members
- Reinvigorated Training of Internal Affairs Investigators to provide Internal Affairs staff greater tools for more comprehensive investigations
- Revised Internal Affairs’ Operating Procedures – Includes recording all interviews – Nine CPD directives were modified to address the findings of the U.S. DOJ, two of which have already been put into place. | https://home.chicagopolice.org/office-of-reform-management/transparency-and-accountability/ |
Technical Field
This invention relates to direct sequence spread spectrum communications and in particular to a method for estimating the frequency offset of a local signal in a mobile receiver.
Background Art
In cellular systems the timing and frequency accuracy of transmissions from network base stations rely on very stable and highly accurate reference oscillators. In the competitive market for supply of mobile stations for communication with the network base stations a low cost is demanded by the prospective purchasers of mobile equipment. Therefore low cost reference oscillators e.g voltage controlled crystal oscillators (VCXO) would be the usual choice for the reference oscillator of a mobile station such as is used in a wideband code division multiple access (WCDMA) network.
The frequency accuracy of these low cost reference oscillators (e.g. 5 parts per million (ppm)) is very much less than the frequency accuracy of the reference oscillators available to the base stations (e.g. 0.05 ppm). The resulting difference in frequency between the base station transmissions and the locally generated carrier frequency used for down-conversion in the mobile station, the so-called frequency offset, causes problems with synchronization. Further frequency errors can arise at the mobile station because of the Doppler shifts produced by the movements of the mobile station.
When power is applied to a mobile station the task of synchronization with a base station is initiated (initial cell search). The characteristics of the Universal Mobile Telecommunications System (UMTS) and the procedure for initial cell search to which the following description relates are described in the European Telecommunications Standards Institute (ETSI) publication TR 101 146 version 3.0.0, Universal Mobile Telecommunications System, Concept evaluation. As will be clear to those skilled in the art, the instant invention is not restricted to use with the UMTS and may also be applied to other WCDMA systems. Reference is made to US 5 982 809 to Liu, to EP 0892528, to GB 2287613 and to WO 99/66649, which form part of the prior art.
The initial cell search by the mobile station is performed in three steps and the first step is the acquisition of slot synchronization to the transmissions of the base station providing, through a fading path, the strongest signal at the receiver of the mobile station. With reference to figure 1, which is a schematic illustration of base station broadcast transmissions, base station transmissions are represented at 1, the transmission channel at 2 and the mobile station receiver at 3. In figure 1, by way of example, the transmissions from only two base stations (BTS1 and BTS2) are shown.
These base station transmissions are not synchronized with each other and are maintained to transmit over common fixed duration time intervals referred to as slots and common fixed duration framing intervals referred to as frames. One frame comprises 15 slots. In figure 1 the start of a slot for the transmissions from BTS2 is shown delayed from the start of a slot for the transmissions from BTS1 by an arbitrary amount t seconds.
The base station transmissions include a synchronization channel (SCH) aligned with the slot boundary and a primary common control physical channel (PCCPCH). The synchronization channel comprises a primary synchronization code (PSC) and a secondary synchronization code (SSC), as illustrated in figure 2. The code transmitted as the primary synchronization code (Cp) is repeated at the beginning of each slot by all base stations.
The BTS transmissions to the receiver 3 will be affected by channel 2; the transmissions of BTS2 are illustrated as received through a 3-path (multipath) channel while the transmissions of BTS1 are illustrated as received through a 2-path channel. The signals from BTS1 and BTS2 are effectively summed in channel 2 before arriving at receiver 3. Correlation of the received signal with the expected primary synchronization code, which is stored in the receiver, provides a number of correlation peaks. The highest peak detected corresponds to that base station of the network (the found base station) to which the receiver will synchronize.
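As a concrete illustration of this first step, the following sketch (a simplified model in Python, not the UMTS-specified implementation) slides a stored PSC over received complex baseband samples and takes the lag with the strongest correlation magnitude as the slot-boundary estimate:

```python
import numpy as np

def detect_slot_boundary(received: np.ndarray, psc: np.ndarray) -> int:
    """Correlate the received samples against the stored PSC at every
    lag and return the lag of the strongest correlation peak."""
    n = len(psc)
    mags = [abs(np.vdot(psc, received[k:k + n]))   # vdot conjugates psc
            for k in range(len(received) - n)]
    return int(np.argmax(mags))

# Toy example: a random 256-chip "PSC" buried in noise at offset 1000
rng = np.random.default_rng(0)
psc = np.sign(rng.standard_normal(256)) + 0j
rx = 0.3 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
rx[1000:1256] += psc
print(detect_slot_boundary(rx, psc))  # -> 1000
```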
The second step of initial cell search establishes frame synchronization and identifies the code group of the base station found in step 1 (the found base station). The third step of initial cell search determines the scrambling code assigned to the found BS. To avoid prolixity, further details regarding the second and third steps of the initial cell search are not presented here and reference is made to ETSI publication TR 101 146, supra.
It is an object of this invention to provide an improved method of estimating the frequency offset in a direct sequence spread spectrum communications receiver.
In accordance with the invention there is provided a method of estimating the frequency offset in a direct sequence spread spectrum communications receiver comprising computation of differences in phase imparted by down-conversion in the receiver to parts of a synchronization code received over a radio channel, said differences in phase being computed from a series of correlations of parts of the received synchronization code with a synchronization code stored in the receiver, in which the period over which the series of correlations is performed is not greater than the length of said stored synchronization code.
An example of the present invention will now be described with reference to the figures, in which:
figure 1 is a schematic illustration of base station transmissions,
figure 2 illustrates the composition of base station transmissions,
figure 3 is a flow chart illustrating the method of carrier offset estimation,
figure 4 illustrates a series of partial correlation periods within a single slot,
figure 5 illustrates a series of overlapping partial correlation periods within a single slot.
The implementation of the invention described herein is applicable to the initial cell search performed at a mobile station operating in the frequency division duplex (FDD) mode in a UMTS network.
The performance of the UMTS cell search can be degraded by offsets in the carrier and sampling clock frequencies. In practice, both the carrier and sampling clock frequencies are derived from the frequency of a reference oscillator (usually a VCXO). The carrier frequency ($f_c$) and the sampling clock frequency ($f_{smp}$) may be expressed as in equations (1) and (2) respectively:

$$f_c = k_1 \times f_x \qquad (1)$$

$$f_{smp} = k_2 \times f_x \qquad (2)$$

The terms $k_1$ and $k_2$ in these equations represent constants and $f_x$ is the reference frequency supplied by the reference oscillator of the mobile station.
Equations (1) and (2) indicate the ways in which inaccuracies in the reference frequency generated by the crystal oscillator translate into inaccuracies in the carrier and sampling clock frequencies. When expressed in parts per million, the same inaccuracy will apply to each of the three frequencies, $f_x$, $f_c$ and $f_{smp}$. For example, for a desired carrier frequency of 2 GHz and a sampling clock frequency of 15.36 MHz, an inaccuracy of 1 ppm (in $f_x$) represents offsets of 2 kHz in the carrier frequency and 15.36 Hz in the sampling frequency.
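The 1 ppm example is direct to verify: a fractional reference error of x ppm produces an absolute offset of x · 10⁻⁶ times each derived frequency, as the short check below shows.

```python
def offset_hz(nominal_hz: float, error_ppm: float) -> float:
    """Absolute frequency offset caused by a reference error in ppm."""
    return nominal_hz * error_ppm * 1e-6

print(offset_hz(2e9, 1.0))      # carrier: 2000.0 Hz (2 kHz)
print(offset_hz(15.36e6, 1.0))  # sampling clock: 15.36 Hz
```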
With regard to WCDMA cell search, the carrier frequency offset results in a continuous phase variation of the received complex signal. The sampling clock frequency offset may cause incorrect detection of vital system timing instances. Any effects of an offset in the sampling clock frequency are observed only after processing of the signals in a large number of slots. The phase rotation caused by the offsets in the carrier frequency results in a decrease in the received ratio of the signal power versus the noise plus interference power and as a consequence, an increase in the probability of instances of false detection of timing. Therefore offsets in both the carrier frequency and the sampling clock frequency will result in a degradation of the performance in all three steps of the UMTS cell search process.
The loss of performance in the cell search caused by the frequency inaccuracies is evident during the first step of the cell search process. Sampling clock offsets may cause errors in detection of the slot boundaries i.e. the slot boundaries will be positioned in the wrong places. If the error in locating the slot boundaries is larger than one chip period, the results obtained by the remaining cell search steps will also be in error. For practical frequency inaccuracies, however, a slippage of 1 chip caused by the sampling clock inaccuracies is observed over long time intervals.
Consequently, the inaccuracies of the sampling clock are of secondary importance when compared to the offsets in the carrier frequency. As the effects of an offset in the carrier frequency are observable immediately, these effects can be measured and used to correct the reference frequency. A reduction in the inaccuracy of the reference frequency will reduce the offsets in both the carrier and sampling clock frequencies also. The method described herein is based on the differential phase offsets imparted to the received primary synchronization code at down-conversion by errors in the local oscillator frequency used for down-conversion. The resulting measurements of phase offset are used to correct the reference oscillator frequency.
A complex baseband signal transmitted by a base station may be represented as

S_t(t) = A(t) · e^(jθ(t))   (3)

where A(t) and θ(t) represent the magnitude and phase respectively of the signal. The transmitted signal when received via a fading path can be represented as:

S_r(t) = β(t) · S_t(t) · e^(j(Δω·t + φ(t) + σ(t)))   (4)

where Δω is the carrier frequency offset in radians per second, φ(t) is the random phase (in radians) due to the Doppler shift and σ(t) is the random phase due to noise and interference. Variations of the signal envelope are represented as β(t).
In the first step of the UMTS cell search, the in-phase (I) and quadrature (Q) components of the received signal are correlated with the primary synchronization code. When the local primary synchronization code is aligned with the first symbol of a received PCCPCH + SCH time-slot (i.e. at the slot boundary), the transmit signal may be expressed as:

S_t(t) = M · e^(jπ/4)

where M is a constant. The correlation of the corresponding received signal with the local primary synchronization code stored in the receiver is shown in equation (5), where T is the correlation period.

C = (1/(M·T)) · ∫_0^T β(t) · M^2 · e^(jπ/4) · e^(j(Δω·t + φ(t) + σ(t))) dt   (5)
Equation (5) represents the correlation between the local primary synchronization code and the received signal at the slot boundaries. As the primary synchronization code is a known signal, the carrier frequency offset may be estimated by measuring the change in the phase of the received primary synchronization code. The effects of the signal components due to Doppler and noise plus interference are discussed below and for clarity are removed from equation (5), which may then be reduced to

C = (1/(M·T)) · ∫_0^T M^2 · e^(jπ/4) · e^(jΔω·t) dt   (6)
To evaluate the phase due to the carrier offset, the above integral may be evaluated over a number of intervals (i.e. by using partial correlations). The differential phase of the results will then contain a component which is directly proportional to the carrier frequency offset. This process is shown in the following equations, where 2 intervals are used:

C_1 = (1/(M·T)) · ∫_0^(T/2) M^2 · e^(jπ/4) · e^(jΔω·t) dt   (7)

C_2 = (1/(M·T)) · ∫_(T/2)^T M^2 · e^(jπ/4) · e^(jΔω·t) dt   (8)

An illustration of the use of two non-overlapping partial correlations is given also at figure 4.
The differential phase between the results is given by:

ΔΦ = ∠C_2 - ∠C_1 = Δω·T/2   (9)
The carrier frequency offset may then be computed from:

Δω = 2·ΔΦ/T
By making use of N partial correlations, N-1 differential phase values may be obtained, each indicating a carrier frequency offset of:

Δω_i = (N/T) · ΔΦ_i   (10)

where ΔΦ_i is

ΔΦ_i = ∠C_i - ∠C_(i-1)   (11)

with C_i representing the i-th partial correlation. Multiple values of differential phase can be used to estimate the carrier frequency offset under additive white Gaussian noise (AWGN), multi-path and multi-user conditions by applying averaging to the individual results obtained from equation (10):

Δω̄ = ( Σ_(i=1)^(N-1) Δω_i ) / (N-1)   (12)
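By way of illustration, the following Python sketch (not taken from the patent; the stand-in code sequence and 2 kHz offset are assumptions for the demo, although 3.84 Mcps is the standard UMTS chip rate) builds N partial correlations and recovers the offset via equations (10)-(12):

```python
# Illustrative sketch: carrier offset estimation from partial correlations.
import numpy as np

def estimate_offset(rx, psc, chip_rate, n_parts=2):
    """Return the estimated carrier offset in rad/s.

    rx        : complex baseband samples aligned to the slot boundary
    psc       : locally stored synchronization code (one sample per chip)
    chip_rate : chips per second
    """
    seg = len(psc) // n_parts
    T = len(psc) / chip_rate                    # full correlation period (s)
    # partial correlations C_1 .. C_N over consecutive segments
    C = np.array([np.vdot(psc[i*seg:(i+1)*seg], rx[i*seg:(i+1)*seg])
                  for i in range(n_parts)])
    dphi = np.angle(C[1:] / C[:-1])             # equation (11)
    return np.mean(n_parts * dphi / T)          # equations (10) and (12)

# demo: 256-chip code received with a +2 kHz carrier offset (1 ppm at 2 GHz)
rng = np.random.default_rng(0)
chip_rate = 3.84e6
psc = (1 + 1j) / np.sqrt(2) * rng.choice([-1.0, 1.0], 256)
t = np.arange(256) / chip_rate
rx = psc * np.exp(2j * np.pi * 2000.0 * t)
print(estimate_offset(rx, psc, chip_rate) / (2 * np.pi))   # ~2000 Hz
```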
The effect of Doppler is minimized by ensuring that the differential phase values are obtained using partial correlation over periods which are much less than the coherence time of the channel. The coherence time is a period within which there is a high degree of correlation between faded signal samples and is approximately equal to the inverse of the Doppler frequency.
For a mobile speed of 500 km/h and a nominal carrier frequency of 2 GHz, the Doppler frequency approximates to 925 Hz. The corresponding value of coherence time is around 1×10⁻³ seconds. Evaluation of differential phase values as described above is accomplished within the duration of a single PCCPCH + SCH symbol period (i.e. ∼67×10⁻⁶ seconds), which is much less than the coherence time.
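As a quick sanity check on these figures (a sketch; the only assumed constant is the usual 3×10⁸ m/s for the speed of light):

```python
# Doppler frequency and coherence time for the quoted worst case.
v = 500 / 3.6            # 500 km/h in m/s
f_c = 2e9                # carrier frequency (Hz)
f_d = v * f_c / 3e8      # Doppler frequency, ~926 Hz
print(f_d, 1 / f_d)      # coherence time ~1.1e-3 s >> 67e-6 s symbol period
```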
Phase variations due to Doppler usually may be assumed to be small and therefore would not significantly affect the computations described above. Improved estimates of the carrier frequency offset may be obtained, however, by computation of a set of values for the frequency offset over a number of slots. An averaging process can then be applied.

An averaging process is shown by the equation

Δω̄ = ( Σ_(k=1)^(M) Σ_(i=1)^(N-1) Δω_ik ) / (M·(N-1))   (13)

where Δω_ik represents the frequency offset estimate of the i-th correlation of the k-th slot and M is the number of slots used in the averaging process. The frequency offset is then derived from the average value taken from the series of partial correlations within each slot and over a number of slots.
Various factors affect the choice of the number of partial correlations per primary synchronization code to be used in the above process. An increase in the number of partial correlations in the series of correlations results in a shorter correlation period (relative to the coherence time). With shorter correlation periods smaller variations of phase due to Doppler may be expected. Use of shorter correlation periods, however, causes a drop in the detected signal power and leads to a reduced signal to noise plus interference ratio. For reduced signal to noise ratios the effect of AWGN and interference on the detected differential phase values becomes more severe. It has been found that two partial correlations per primary synchronization code are sufficient for estimates of the carrier frequency offset to be obtained.
The minimum detectable frequency offset is dependent upon the ratio of the powers of the signal to the noise plus interference and also on the variations in the signal phase due to Doppler over the partial correlation intervals. Experimental results indicate that for a mobile station moving at 80 kilometres per hour (km/h) the method described herein can be expected to detect more than 95% of the carrier frequency offsets, and the detection rate remains above 75% at 500 km/h.
For increased correlation powers, overlapping between the partial correlations may be employed. By this means each of the correlations in a series of correlations includes a part of the synchronization code common to another of the correlations in the series. With reference to figure 5, two overlapping partial correlations are illustrated as performed within a single slot. In UMTS the primary synchronization code is of length 256 chips. A first partial correlation PC1 is performed with the first (256 - a) chips and a second partial correlation PC2 is performed with the last (256 - a) chips. In this series of correlations, 256 - 2a chips are common to both PC1 and PC2. More generally, first and second overlapping partial correlations may be represented as

PC_1 = (1/(M·T)) · ∫_0^(((X-a)/X)·T) M^2 · e^(jπ/4) · e^(jΔω·t) dt   (14)

PC_2 = (1/(M·T)) · ∫_((a/X)·T)^T M^2 · e^(jπ/4) · e^(jΔω·t) dt   (15)
Owing to the increase in the length of each partial correlation period, the arrangement shown in figure 5 provides a correlation power greater than that obtained from the non-overlapping arrangements. The corresponding differential phase is given by (Δω·a·T)/X, where X is the total number of chips in the primary synchronization code, T is the duration of the primary synchronization code in seconds and a is the number of chips of the primary synchronization code not used in the partial correlation process.
For a carrier frequency offset of 1 ppm, a carrier frequency of 2 GHz, and with a = 64 chips as shown in this example, the nominal correlation peak of the partial correlations will be approximately 2.5 dB less than that provided by a correlation with the complete 256 chip code. The nominal differential phase resulting from use of an overlap with a = 64 chips is 12 degrees, and this difference is sufficiently large to be detected by the algorithm of figure 3.
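These two figures can be checked numerically. The Python sketch below (illustrative only; the ±1 stand-in sequence is not the real UMTS primary synchronization code) reproduces both the 12-degree differential phase of equations (14) and (15) and the approximate 2.5 dB correlation-peak loss:

```python
# Numerical check of the a = 64 chip overlap example.
import numpy as np

chip_rate, X, a = 3.84e6, 256, 64
T = X / chip_rate                          # code duration (s)
d_omega = 2 * np.pi * 2000.0               # 1 ppm offset at 2 GHz (rad/s)

rng = np.random.default_rng(1)
psc = rng.choice([-1.0, 1.0], X)           # stand-in synchronization code
t = np.arange(X) / chip_rate
rx = psc * np.exp(1j * d_omega * t)

pc1 = np.vdot(psc[:X - a], rx[:X - a])     # first (X - a) chips
pc2 = np.vdot(psc[a:], rx[a:])             # last (X - a) chips

print(np.degrees(np.angle(pc2 / pc1)))     # ~12 degrees
print(np.degrees(d_omega * a * T / X))     # closed form (Δω·a·T)/X, same
full = np.vdot(psc, rx)                    # full 256-chip correlation
print(20 * np.log10(abs(pc1) / abs(full))) # ~ -2.5 dB peak loss
```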
The advantage conferred by the use of overlapping partial correlations may be demonstrated by comparison of the above example (a = 64 chips) with an arrangement where there is no overlapping of the partial correlations. Where a = 128 chips and two partial correlations are performed in a single slot, the partial correlation peak will be 6 dB less than for the case of a full correlation (a = 0). When a relatively high level of noise plus interference is encountered, overlapping partial correlations can be used to increase the power in a partial correlation peak, thereby avoiding an unacceptable degradation in performance.
In general, a suitable value for a will be such that the overlapping correlation power is maximised while the resulting differential phase remains within the range of detection of the system. The invention may be carried into effect by means of standard digital techniques well known in the art. |
SOWIT aims to put technology at the service of African agricultural development, providing farmers with the information they need to optimize their productivity and yields. At the crossroads of agronomy, remote sensing and artificial intelligence, the SOWIT team is rooted in the field. SOWIT relies on precision farming technologies (drones, satellites, sensors, etc.), combining the worlds of research and production in the best interest of farmers. | https://magnitt.com/startups/sowit-51728
St Patrick's Day - ten things you may not have known about
St Patrick's Day - 10 facts you need to know, starting with drinking.
1. The drinking...
Drinking wasn't always part of the St Patrick's Day celebrations. It appears we were drowning the shamrock too much and in 1927, the government's pub ban came into force. The pubs were dry on March 17 from then until 1970.
2. There's a row over the parade...
Although Ireland didn't stage its first parade until 1931, there's a dispute between New York and Boston over who staged the first ever St Patrick's Day parade. New York says they had the first official one in 1762, but the lads in Boston claim they sort of held one in 1737.
3. The colour of St Patrick's Day was originally blue, but green became more prevalent as shamrocks became the official symbol of the holiday. In the 1798 rebellion, Irish soldiers wore full green uniforms on 17 March in hopes of making a revolutionary political statement. The shamrock on their hat was a sign of rebellion.
2. "The Wearing of the Green" came from a song by Dion Boucicault.
3. St. Patrick's Day is celebrated on March 17, which is both the day of religious feast and the anniversary of St. Patrick's death in the fifth century.
4. A tall tale exists that St Patrick was famous for banishing all the snakes in Ireland. It may not be true.
5. The Shamrock also called the "seamroy" by the Celts was a sacred plant in Ireland because it symbolised the rebirth of spring.
6. Corned beef and cabbage is the traditional St Patrick's day meal, but in New York, some celebrate with Irish bacon to save money.
7. The leprechaun is a purely American tradition introduced by Disney in the film Darby "O'Gill & the Little People."
8. Chicago has been dying its river green for the past 43 years to celebrate "The Emerald Isle." The green appears magically from an orange dye that many other cities have tried to copy.
9. March 17 is not just a national holiday in Ireland. it is also a national holiday on the island of Montserrat in the Caribbean. Their population of 4,000 come to a standstill for the day too, owing to the large number of Irish emigrants that landed there in the 17th century. In Ireland, the celebration outgrew the word "Day" and became St Patrick's Festival - a four day event that takes 18 months to plan and is enjoyed by 1.2 million people.
10. Shamrock is the symbol of Ireland and St. Patrick, and it became the latter because St. Patrick used it when teaching people about the holy trinity in the Christian religion. | https://www.leinsterexpress.ie/news/fun-and-games/370365/st-patrick-s-day-ten-things-you-may-not-have-known-about.html |
Ordnance Survey developed a transformation engine for strict and automatic mapping of IFC-BIM models into semantically enriched 3D CityGML building models, both exterior and interior. This work was completed with National University of Singapore (NUS) for the National Research Foundation (NRF) under the Virtual Singapore (VSG) programme.
Prior to this research project, between 2015-16, the Government Technology Agency of Singapore (GovTech), OS and the Singapore Land Authority (SLA) collaborated on a data model to store 3D building information from the data that SLA had captured through remote survey of the buildings in Singapore. The result contributed towards Virtual Singapore, aimed at creating a 3D digital representation of the urban landscape.
The research was completed with the help of the Singapore agencies Housing & Development Board (HDB), Building and Construction Authority (BCA) and Singapore Land Authority (SLA), who shared the challenge of enhancing the building models in Virtual Singapore with more geometric and semantic detail for both the exterior and interior of buildings from IFC models. The outcome of this collaboration is an IFC2CityGML transformation engine that can be configured to satisfy the requirements of different geospatial use cases from IFC models.
The challenge
The challenge was to take the information “locked away” in BIM models and make it available to wider stakeholders to help them make better decisions.
BIM contains a lot of information on a building used during the construction and maintenance of it, which is often “locked away” in proprietary software and formats. Only some of this information is potentially useful.
BIM platforms lack the tools and analytics functions necessary to make this data available to city planners or regulatory bodies, who would benefit from it to test new developments or city planning initiatives.
The solution
Ordnance Survey engaged with potential users of the “locked up” BIM data to find out which data could be used, the best format and the application of it. The users were regulatory bodies, city planners, emergency service response teams and building owners.
Based on the findings, Ordnance Survey developed the IFC2CityGML transformation engine. The rules-based conversion software can transfer detailed building model information from IFC to an enhanced CityGML data model, which can be configured for different geospatial use cases.
"This project successfully developed a conversion engine and rule set that is able to convert BIM IFC models into CityGML format, not only geometrically but also semantically, including interior structures. Ordnance Survey’s contribution was primordial in adopting a ‘use cases’ perspective to guide the research and development, benefiting a wide range of geospatial scenarios."Rudi Stouffs, Associate Professor & Deputy Head (Research), National University of Singapore
The benefits
The IFC2CityGML transformation engine uses an open 3D data standard so that data can easily be used and integrated, supporting advanced analytics and use cases, outside of proprietary software.
The rules-based approach enables automated reuse of the conversion process for different geospatial use cases and scenarios from IFC models.
The information stored in the BIM models can be used and managed for the lifecycle of the building, helping the industry to manage archive BIM models more easily.
The stakeholders are keen on understanding and implementing this use case approach in future developments for geospatial platforms and applications.
The project has also helped to improve understanding and collaboration across the BIM and GIS community in Singapore. | https://www.ordnancesurvey.co.uk/business-government/products/case-studies/improving-integration-bim-gis |
Management as a subject deals with theories that relate to practical problems and real results. So, to write management essays you must search for problems that exist in the real world and support your ideas with proofs. Essays related to management essentially deal with the future.
How to generate ideas for your essay?
You can use your personal experience to obtain topics for your essay. Try to recall any problem you have encountered; that problem can be used as an essay topic. You can take up actual examples to derive topics. Applying your own common sense to find solutions to such problems and then using proofs to substantiate them can be a good method of tackling such essays.
A student should not confine himself to academic writing only. When writing essays related to management he should use facts, proofs, diagrams and charts to convey his point to the reader. A student will be considered successful in writing a management essay if his readers can fully understand his ideas.
It is also essential that the management paper retains the basic structure that an essay should have. An essay should begin with a thesis statement followed by a brief introduction. Several points should be put forward supporting the argument, and ample evidence should be provided. Lastly, it must end with a conclusion that is based on the evidence to establish the thesis statement.
How to make your essay stand out?
An essay writer can make his essay stand out by infusing creativity in it. Creative thinking when applied correctly in an essay will not only amaze the readers but also inspire them to think unconventionally. This actually serves the basic purpose of essay writing.
Try to think differently while tackling your essay. Explore novel ideas and then verify whether they are applicable to your essay. You can check them against expected standards and include those in your essay that are most suitable.
A student must look out for facts that will make his essay strong. Searching answers to basic questions like what, where, why, who and when and how will lead you to a strategy that will make essay writing easy.
Readers of management essays are always looking out for something new and unique. New ideas interest them. So you must try to offer something unconventional in terms of ideas and concepts that will leave a lasting impression in the minds of the readers.
You must think independently and come up with alternate ideas that will genuinely interest the readers.
The ideas you have used in your essay need to be verified against facts and figures. Your ideas must be supported by evidence and facts; otherwise the readers will find them unconvincing.
Help with essay writing is readily available. Companies like customessays.co.uk have created a niche in the market for essay writing through their dedication and hard work. As a custom essay writing company we pledge to provide our customers with original and unique essays that will help them to achieve good grades. You can approach us for dissertation help or for your management essay and rest assured that you will be satisfied by the quality of our work. | https://www.customessays.co.uk/blog/essay/management-essays-2
Patient / Parent / Caregiver Communication Tools to clarify Leukemia Treatment and Provide Relevant Information to Support Recovery
Led by SIOP Europe, POLARIS is a partnership project to develop a set of communication tools that ALL (Acute Lymphoblastic Leukemia) medical teams can use to explain ALL treatment protocols, medical terminology and investigations, guidance and results to patients/caregivers in an easy to understand visual language.
POLARIS aims to:
- Improve communication between the medical team, the child / teenager / AYA with leukemia as well as their caregivers.
- Support adherence to therapy by making the treatment schedule easy to follow for a person who is not familiar with medical terminology.
- Develop better understanding of different medical procedures among patients and caregivers and in this way reduce the anxiety associated with undergoing medical tests and investigations.
- Give the patients a secure feeling and an understanding of the situation and the treatment.
POLARIS’ geographical focus is Europe, with the possibility to expand/grow the geographic reach to other world regions in future.
Digital and tangible tools - Protocols' Road maps
A graphic description of the treatment path, to be used in the clinic as a tangible, paper/board-based aid when discussing with patients/caregivers the different steps of each specific treatment plan of the protocol. It is important to present a coherent, standard and uniform look and feel and, hence, to ensure visual design consistency across all deliverables, based on the designed road maps.
Working groups composed of paediatric oncology professionals, nurses, psychologists, and patient advocates are working together to develop and adapt the original project for usage in different countries and groups.
The partners involved in this project are: | https://siope.eu/POLARIS-Project |
This necklace is based on The Day Of The Dead (Dia de los Muertos) sugar skulls.
I wanted to create something with a bit of sparkle and decided to use gemstones for the eyes. The silver version's eyes are ruby, with hints of pink and red tones.
The piece is semi-3D, which adds a little weight to the item, and it sits nicely on a long 30" chain.
Chain Length = 30" or 16" | https://www.lynseyluu.co.uk/product-page/sugar-skull-silver |
A peer-to-peer fundraising campaign is an online fundraising effort in which a nonprofit's supporters raise money on its behalf. Unlike traditional fundraising, where the organization solicits donors directly, peer-to-peer fundraisers ask their friends, family members, and colleagues to donate to the cause.
Fundraising & Development
Venmo for Nonprofits: A digital fundraising secret or potential trap? | https://nonprofitsdecoded.com/topics/peer-to-peer-fundraising/ |
Background & Summary
====================
The last glacial period witnessed significant fluctuations in global climate, in the long term driven predominantly by orbital changes but with additional more rapid millennial scale fluctuations. This period encapsulated the last glacial maximum (LGM), when temperatures were potentially between 19--22 °C cooler across Greenland^[@CR1]^, resulting in expansion of ice cover and consequent polar amplification^[@CR2]^. This lowered sea level by up to 127--135 m^[@CR3]^, impacted ocean circulation^[@CR4]^ and opened previously inundated land bridges such as the Bering Straits^[@CR5]^, potentially influencing the migration of animal species including humans^[@CR6]^. Prior to the LGM, the climate was cold but was punctuated with a high degree of millennial variability. Since the LGM the climate has warmed, sea-level fallen and the climate in more recent times been influenced by anthropogenic activity. Reconstructing climate change over this period has important uses in a wide range of academic research.
Palaeoclimate observational datasets obtained in the field from ice or sediment-cores and consisting of timeseries of variables such as isotopes of carbon, oxygen, hydrogen or nitrogen, sedimentary input etc., are invaluable for understanding the spectrum of climatic variations^[@CR7]--[@CR10]^. These records have provided evidence for glaciations, abrupt climate change and more. Although they constitute direct measurements, many proxies may respond to multiple climatic variables or even non-linear combinations of variables. Moreover, for many reconstructed climate variables, there remains limited spatial coverage, especially as we move beyond time-slices such as the mid-Holocene or LGM, to transient experiments. Climate models offer an alternative method to study past climates. Although these are subject to their own wide range of biases, the benefit of such an approach is the ability to produce a high frequency dataset with global coverage for a full range of climate variables, all tied together in a manner that is consistent with the underlying physics encapsulated by the model.
Using climate models in palaeoclimate studies often focuses on key climate periods, such as the LGM^[@CR11]--[@CR13]^. Utilising a model for simulating millennial scale time periods is computationally expensive, so past studies that have generated a continuous climate timeseries have utilised lower resolution models, such as Earth System Models of Intermediate Complexity (EMICs) or Energy Balance models (EBMs)^[@CR14],[@CR15]^. Studies that have utilised general circulation models (GCMs) have used low-resolution versions^[@CR16]^, utilised a simple slab-ocean^[@CR17]^, or accelerated the boundary conditions^[@CR18]^. Producing a global timeseries using a fully coupled complex climate model is currently very challenging due to the extremely long run times involved, as well as the difficulty in storing such extensive model output. In order to avoid these issues, a 'snapshot' approach can be used, where model output is generated at intervals over the period under analysis. Each simulation uses pre-defined boundary conditions, including greenhouse gases, orbital parameters, extent of ice sheet cover and sea level. The key assumption is that the climate is in equilibrium with the boundary conditions, and that the final climate is largely independent of the initial conditions (so that the simulations can be run in parallel). Though both of these assumptions can be challenged, experience has shown that the climate outputs are a good representation of the orbital time scale climate change^[@CR2],[@CR19]^.
For some impact analyses, such as simulating peat formation^[@CR20]^, niche modelling^[@CR21]^, and species distribution modelling^[@CR22]^, these snapshot simulations may be sufficient. However, the millennial variability of climate may also be of importance^[@CR23]^. Hence we need to link these climate snapshot outputs together and incorporate cycles of higher-frequency variability, as they are necessarily omitted by the sampling frequency of the initial GCM 'snapshots'. Observed millennial scale climate variability includes climate perturbations such as the Dansgaard-Oeschger (D-O) and Heinrich (HE) events^[@CR24]--[@CR27]^. The last glacial cycle is characterised by 25 abrupt DO events, with consequent warming of between 8 °C to 16 °C over Greenland^[@CR10],[@CR25]^. Although the mechanisms responsible for these events are not fully understood, they are thought to be driven by abrupt changes in Atlantic meridional overturning circulation (AMOC) strength, possibly due to perturbations in the freshwater budget due to the melting of icebergs and sea-ice fluctuations^[@CR28],[@CR29]^. Imprinted on these millennial scale fluctuations is higher frequency inter/intra-annual and seasonal variability, including internal climate oscillations, driven by the different response times and non-linear interactions within the climate system. Recent work has suggested that inter-annual variability changes depending on the climate regime^[@CR30]^.
Here we present a monthly climate timeseries for the Northern Hemisphere (0°N--90°N) land surface, generated from 42 snapshot simulations of the past 60 kyrs at either 1 kyr or 2 kyr intervals using the Bristol University version of the fully coupled global circulation model HadCM3, termed HadCM3B-M2.1^[@CR31]^. We present the results for terrestrial grid-points only, as this dataset is aimed primarily towards terrestrial ecosystem modellers such as those investigating vegetation and species population dynamics, although the dataset can be used for a wide range of research. We focus specifically on the Northern Hemisphere, as we have not incorporated a model for how the two hemispheres behave during abrupt events (i.e. the bipolar see-saw). Millennial scale variability is added by incorporating the spatial results of hosing experiments, which simulate a change in the strength of the AMOC analogous to a D-O event. This is then combined with a temperature reconstruction from Greenland ice-core derived from nitrogen isotopes. Inter-annual variability has been incorporated directly from the model output. Finally the data has been downscaled to 0.5° resolution and bias corrected using the Climate Research Unit (CRU) data^[@CR32]^.
Methods
=======
The HadCM3B-M2.1 coupled climate model
--------------------------------------
The Hadley Centre Coupled Model 3 Bristol (HadCM3B) is a coupled climate model consisting of a 3D dynamical atmosphere^[@CR33]^ and ocean^[@CR34]^ component. HadCM3B is a version of the more commonly known HadCM3 that has been developed at the University of Bristol, and is outlined in detail in the study of Valdes *et al*.^[@CR31]^. It differs slightly from the original model code of HadCM3^[@CR33],[@CR34]^ as it has undergone a number of minor bug fixes, as documented in detail in Section 2.1 of Valdes *et al*.^[@CR31]^, although such changes have been shown to have only a minimal impact on the simulated climate.
Despite the relatively old age of HadCM3B, the model has been shown to produce an accurate representation of different climate variables and remains competitive with other more modern climate models used in CMIP5^[@CR31]^. A key advantage of the model is that it is computationally fast, which permits long (i.e. millennial) scale simulations and large ensemble studies.
The atmosphere component of HadCM3B^[@CR33]^ has a resolution of 3.75° × 2.5° (equivalent to 96 × 73 grid points) and 19 vertical levels, with a timestep of 30 minutes. The ocean model^[@CR34]^ has a resolution of 1.25° × 1.25° (equivalent to 288 × 144 grid points) with 20 vertical levels and a timestep of one hour. The levels exhibit a finer resolution towards the surface, the first having a thickness of 10 m and the deepest a thickness of 616 m.
HadCM3 incorporates the land-surface scheme MOSES (Met Office Surface Exchange Scheme) that models the fluxes of energy and water and the physiological processes of photosynthesis, transpiration and respiration which is dependent on stomatal conductance and CO~2~ concentration^[@CR35]^. Here we use MOSES 2 version 2.1 (v2.1), therefore the full model name is HadCM3B-M2.1. For a full overview of MOSES 2 see Essery *et al*.^[@CR36]^ and for the differences between MOSES v2.1 and v2.2 see Valdes *et al*.^[@CR31]^. MOSES 2 incorporates the fractional coverage of nine different surface types, which are simulated by the dynamic global vegetation model (DGVM) TRIFFID. The vegetation can dynamically evolve throughout the simulations depending on four variables; moisture, temperature, atmospheric CO~2~^[@CR35]^, and competition between plant functional types (PFTs).
Sea ice is simulated via a zero-layer model^[@CR37]^ and is calculated on top of the ocean grid with movement controlled by the ocean currents in the upper ocean^[@CR34]^. Ice is formed in leads (i.e. the fractures that form due to stresses) and by snowfall, and removal occurs from the base continually throughout the year and on the surface via melting in the summer. The salinity of sea-ice is assumed to be constant with a flux into the ocean depending on melting or ice formation.
The model does not include an interactive ice model, or carbon and methane cycle so these boundary conditions have been imposed (see next section).
Experimental set-up
-------------------
### Snapshot simulations and boundary conditions
This study incorporates the results from 42 'snapshot' simulations, which are updated versions of those outlined in Singarayer and Valdes^[@CR2]^ and Davies-Barnard *et al*.^[@CR33]^. Each simulation has been forced with prescribed variations in orbital parameters, which are very well constrained^[@CR14]^, greenhouse gases (see Fig. 1) and ice-sheets. A summary of these is given in Table 1. Concentrations of atmospheric CO~2~ are from the Vostok ice core^[@CR38],[@CR39]^ whilst N~2~O and CH~4~ concentrations are taken from the EPICA Dome C ice core^[@CR40]^. The snapshot simulations extend back to 60 kyr before present (BP), where 0 BP refers to the year 1950. The 0 BP simulation has greenhouse gases that represent the pre-industrial (PI) period, where PI refers to 1850, equivalent to a CO~2~ concentration of 280 ppm. This simulation therefore represents a 1950 world in which greenhouse gases have not risen relative to the pre-industrial.

Fig. 1: Prescribed greenhouse gases for the 42 snapshot simulations. Each vertical line represents the position of the snapshot simulations in the 60 kyr timeseries.

Table 1: Names, years and boundary conditions for the 42 snapshot simulations used in this study.

| Simulation | Name | Year (kyr BP) | Eccentricity | Obliquity (°) | Precession | CO~2~ (ppm) | CH~4~ (ppbv) | N~2~O (ppbv) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | teiia | 0 | 0.01724 | 23.446 | 0.017 | 280 | 760 | 270 |
| 2 | teiib | 1 | 0.01764 | 23.573 | 0.018 | 279 | 627 | 262 |
| 3 | teiic | 2 | 0.01802 | 23.697 | 0.017 | 277 | 605 | 265 |
| 4 | teiid | 3 | 0.01838 | 23.815 | 0.014 | 275 | 572 | 268 |
| 5 | teiie | 4 | 0.01870 | 23.923 | 0.010 | 273 | 566 | 262 |
| 6 | teiif | 5 | 0.01899 | 24.019 | 0.005 | 268 | 559 | 260 |
| 7 | teiig | 6 | 0.01925 | 24.100 | 0.000 | 265 | 564 | 257 |
| 8 | teiih | 7 | 0.01948 | 24.163 | -0.006 | 261 | 607 | 261 |
| 9 | teiii | 8 | 0.01967 | 24.206 | -0.011 | 261 | 627 | 258 |
| 10 | teiij | 9 | 0.01984 | 24.229 | -0.015 | 265 | 666 | 259 |
| 11 | teiik | 10 | 0.01997 | 24.229 | -0.018 | 267 | 680 | 271 |
| 12 | teiil | 11 | 0.02007 | 24.207 | -0.020 | 264 | 670 | 267 |
| 13 | teiim | 12 | 0.02014 | 24.161 | -0.020 | 245 | 463 | 244 |
| 14 | teiin | 13 | 0.02018 | 24.093 | -0.018 | 238 | 655 | 262 |
| 15 | teiio | 14 | 0.02018 | 24.004 | -0.015 | 237 | 561 | 264 |
| 16 | teiip | 15 | 0.02015 | 23.895 | -0.011 | 224 | 472 | 241 |
| 17 | teiiq | 16 | 0.02010 | 23.769 | -0.005 | 210 | 437 | 229 |
| 18 | teiir | 17 | 0.02001 | 23.627 | 0.000 | 194 | 377 | 237 |
| 19 | teiis | 18 | 0.01990 | 23.475 | 0.006 | 189 | 371 | 244 |
| 20 | teiit | 19 | 0.01976 | 23.315 | 0.011 | 188 | 375 | 219 |
| 21 | teiiu | 20 | 0.01959 | 23.151 | 0.015 | 188 | 374 | 224 |
| 22 | teiiv | 21 | 0.01940 | 22.989 | 0.018 | 186 | 365 | 245 |
| 23 | teiiw | 22 | 0.01918 | 22.832 | 0.019 | 185 | 354 | 228 |
| 24 | teiix | 24 | 0.01869 | 22.553 | 0.017 | 195 | 390 | 206 |
| 25 | teiiy | 26 | 0.01814 | 22.348 | 0.010 | 193 | 358 | 219 |
| 26 | teiiz | 28 | 0.01753 | 22.244 | 0.000 | 200 | 408 | 225 |
| 27 | teiiA | 30 | 0.01690 | 22.255 | -0.009 | 200 | 397 | 200 |
| 28 | teiiB | 32 | 0.01627 | 22.382 | -0.015 | 201 | 409 | 230 |
| 29 | teiiC | 34 | 0.01567 | 22.610 | -0.016 | 205 | 419 | 224 |
| 30 | teiiD | 36 | 0.01513 | 22.913 | -0.013 | 208 | 441 | 221 |
| 31 | teiiE | 38 | 0.01467 | 23.257 | -0.007 | 211 | 474 | 229 |
| 32 | teiiF | 40 | 0.01433 | 23.605 | 0.000 | 197 | 422 | 230 |
| 33 | teiiG | 42 | 0.01413 | 23.922 | 0.007 | 202 | 416 | 222 |
| 34 | teiiH | 44 | 0.01409 | 24.178 | 0.011 | 211 | 414 | 231 |
| 35 | teiiI | 46 | 0.01422 | 24.353 | 0.014 | 207 | 501 | 256 |
| 36 | teiiJ | 48 | 0.01450 | 24.433 | 0.014 | 202 | 420 | 224 |
| 37 | teiiK | 50 | 0.01493 | 24.416 | 0.011 | 218 | 467 | 239 |
| 38 | teiiL | 52 | 0.01550 | 24.305 | 0.005 | 210 | 526 | 263 |
| 39 | teiiM | 54 | 0.01617 | 24.113 | -0.002 | 213 | 442 | 251 |
| 40 | teiiN | 56 | 0.01693 | 23.859 | -0.010 | 219 | 505 | 230 |
| 41 | teiiO | 58 | 0.01776 | 23.565 | -0.016 | 219 | 540 | 276 |
| 42 | teiiP | 60 | 0.01865 | 23.258 | -0.019 | 213 | 425 | 224 |
Due to the lack of an ice-sheet model, the extent and elevation of ice-sheets have also been imposed. The major regions are the Antarctic, Greenland, North American and Fennoscandian ice sheets, which impact isostatic rebound and sea level. Here, reconstructions from present to the LGM (21 kyr BP) have been based on the ICE-5G model^[@CR41]^. This gives the evolution of a number of variables including ice extent, thickness and isostatic rebound on 500-year intervals over this period. Within the model these are used to calculate bathymetry, continental elevation (from ice sheet thickness and rebound), ice extent, and the land-sea mask at each time interval.
Beyond the LGM to 60 kyr BP, there have been few studies that have attempted to reconstruct ice-sheets (and less data is preserved, owing to the LGM ice sheet removing evidence of previous ice), so the data is less well constrained. We have experimented with two methodologies. The first used the ice sheet that the ICE-5G model uses to spin up. This method assumes that during glacial periods land ice coverage is similar to that at the LGM, whereas thickness is defined by the δ^18^O record of Martinson *et al*.^[@CR42]^. This method was used in Singarayer *et al*.^[@CR2]^, but likely overestimates the area of the ice sheet, which in turn overestimates the albedo effect of the ice. An alternative method equates earlier (pre-LGM) ice sheets to the equivalent ice volume (sea level) during the deglaciation. For instance, the sea level depression at 40 kyr BP is compared to the sea level during the deglaciation. Where these are the same, the ice-sheet extent is inferred to be the same as that at 40 kyr BP. This approximation is imprecise because ice sheets show different structures during growth and decay phases, but it is a much better approximation to the ice area than the previous method. For this reason, we use these in the current simulations (as in Davies-Barnard *et al*.^[@CR43]^).
The boundary conditions have been incorporated into 42 snapshot simulations, set at 1000-year intervals between 0 and 22 kyr BP and 2000-year intervals between 22 and 60 kyr BP. Each simulation has been run for 2000 years of spin-up, initialised from previous simulations for each period that were run under the same boundary conditions, albeit with the addition of dynamic vegetation. Each simulation has therefore effectively been run for a minimum of 6000 years. This permitted the experiments to be run simultaneously and hence is highly efficient, taking just a month or two on a high-performance computer. A fully time-continuous simulation would have taken about 3 years. Analysis has been conducted on the final 1000-year climatologies of each simulation unless stated otherwise.
In order to show the equilibrium state of the 42 simulations, Fig. 2 shows the linear trend in surface air temperature for each of them. The linear trends in surface temperature are small, in most cases less than 0.01 °C/millennium. Although a trend remains, models commonly need to be run for many thousands of years to be in complete equilibrium, so we determine this to be suitable.

Fig. 2: The average trend in SAT (°C/kyr) for the 42 snapshot simulations. These have been calculated from the final 1000 years of each of the 2000-year simulations and highlight their equilibrium state.
### Splining
The snapshot climatologies for each of the 42 simulations were splined to a monthly time-series. Here the average climatology for each month of each simulation was splined together, producing a dataset of 60,000 × 12 time points. This approach is applied to the climate variables as well as to the land-sea mask and the ice fraction used in each snapshot GCM simulation. The land-sea mask and ice fraction are subsequently rounded to 0 or 100% coverage in any gridcell in every year.
Splining has been done using the NCAR Command Language (NCL) ftcurv function. This produces a smooth curve for a variable between each of the simulations using a technique termed spline under tension^[@CR44]^. The resultant timeseries for mean annual surface air temperature (SAT) and precipitation for the northern extra-tropics and Greenland is shown in Fig. 3.

Fig. 3: Timeseries showing the splined data, generated from the annual averages from the final 1000 years of the snapshot simulations. (**a**) Northern Hemisphere SATs (°C), (**b**) Greenland SATs (°C), (**c**) Northern Hemisphere precipitation (mm/day), and (**d**) Greenland precipitation (mm/day).
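For readers without access to NCL, a rough Python equivalent of this step might look as follows. Note that ftcurv implements splines under tension, for which an ordinary cubic spline is used here as a stand-in, and the snapshot values are placeholders rather than real model output:

```python
# Minimal sketch of the splining step (illustrative values only).
import numpy as np
from scipy.interpolate import CubicSpline

# snapshot ages (kyr BP): 1 kyr steps to 22 kyr, then 2 kyr steps to 60 kyr
snap_years = np.concatenate([np.arange(0, 23), np.arange(24, 61, 2)])  # 42 snapshots
snap_temp = 10.0 - 0.3 * snap_years + np.random.default_rng(2).normal(
    0.0, 0.5, snap_years.size)          # fake January climatology per snapshot

spline = CubicSpline(snap_years * 1000.0, snap_temp)   # x in years BP
target_years = np.arange(0, 60001)      # one value per year, 0-60 kyr
january_series = spline(target_years)   # repeated for each of the 12 months
print(january_series[:5])
```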
### Interannual variability
Interannual variability is incorporated in order to account for high-frequency internal climate variability. A 1000-year timeseries of variability is calculated from the final 1000 years of each of the model snapshot simulations by subtracting the climatological mean from the timeseries. This is then incorporated into the 60 kyr splined dataset, with variability switching at the mid-point between two simulations. Where the snapshot simulations are at 2000-year intervals (beyond 22 kyr BP), the 1000-year sections are repeated twice up to the mid-point between two of the simulations, at which point the variability switches to repeating the 1000-year segment from the subsequent simulation. The addition of variability to the splined data for SATs and precipitation is shown in Fig. 4.

Fig. 4: Timeseries showing the addition of interannual variability to the splined data. Interannual variability has been taken from the final 1000 years of the model simulations and switches at the mid-point of each of the snapshot simulations. (**a**) Northern Hemisphere SATs (°C), (**b**) Greenland SATs (°C), (**c**) Northern Hemisphere precipitation (mm/day), and (**d**) Greenland precipitation (mm/day).
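A minimal sketch of this step, using assumed array names rather than the actual processing scripts (which accompany the dataset), is:

```python
# Add repeating 1000-year anomaly blocks, switching at snapshot mid-points.
import numpy as np

def add_interannual(splined, snap_anoms, snap_years):
    """splined    : (60000,) splined annual series
    snap_anoms : dict {snapshot year -> (1000,) zero-mean anomaly series}
    snap_years : snapshot years (0, 1000, ..., 22000, 24000, ..., 60000)"""
    out = splined.copy()
    for yr in range(out.size):
        # nearest snapshot => variability switches at the mid-point
        nearest = min(snap_years, key=lambda s: abs(s - yr))
        out[yr] += snap_anoms[nearest][yr % 1000]   # repeat 1000-yr block
    return out

snap_years = list(range(0, 22001, 1000)) + list(range(24000, 60001, 2000))
rng = np.random.default_rng(3)
snap_anoms = {s: rng.normal(0.0, 1.0, 1000) for s in snap_years}
for s in snap_anoms:
    snap_anoms[s] -= snap_anoms[s].mean()           # enforce zero-mean anomalies
series = add_interannual(np.zeros(60000), snap_anoms, snap_years)
```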
Although this is an idealised approach, it does give an indication of the extent of high-frequency stochastic internal climate variability simulated by a climate model, and how this might vary over time. Note that variability on this timescale is an artefact of the model and is therefore synthetic, rather than being a product of data assimilation. Figure 5 shows the monthly and annual standard deviations of the high-frequency variability for the Northern Hemisphere and Greenland. Northern Hemisphere variability remains relatively constant throughout the simulations, with the exception of 30 and 32 kyr BP. In contrast, Greenland shows an increase in variability after the Holocene. Past studies utilising proxy records have also highlighted an increase in variability between the Holocene and the LGM^[@CR30],[@CR45]^, although we do not see this here on a hemispheric scale. This may be linked to an increase in the meridional temperature gradient at the LGM, which increases the extent of variability^[@CR30]^.

Fig. 5: Standard deviations (SD) of the interannual variability component applied to the splined data for each of the 42 snapshot simulations. Annual and monthly values are given for the Northern Hemisphere and Greenland.
There is a clear change in variability between 29 kyr and 33 kyr BP. This represents four 1 kyr repeating sections, two of which are from the 30 kyr and 32 kyr BP simulations respectively. These intra-millennial oscillations are an interesting artefact of these two model simulations, the direct cause of which is uncertain. They may reflect a long-term, millennial-scale oscillation present in the simulations, a phenomenon which may represent a salt oscillator as previously identified by Peltier and Vettoretti (2014) in the CESM model^[@CR46]^. However, further analysis is required to test what drives this in HadCM3B. The increase in variability between 30--32 kyr BP is largely masked following the addition of D-O variability, as discussed in the following section.
### Millennial scale climate variability
The next step is to incorporate abrupt, millennial scale variability, which specifically refers to the typical D-O cycles that populate the glacial period. These tend to begin with a rapid (i.e. decadal) warming event, after which the climate cools over a longer period of time before cooling more rapidly back towards the baseline. A number of studies have hypothesised that these events are driven by fluctuations in the strength of the AMOC, as indicated by ice-core and sediment proxies^[@CR24]--[@CR27]^. Commonly, therefore, modelling such events utilises hosing experiments which impact the strength of the AMOC^[@CR47]^.
The first step in isolating these events was to take the Greenland ice-core temperature reconstruction from Kindler *et al*. (Fig. 6a)^[@CR8]^. This temperature record covers the whole of the last glacial period (10--120 kyr) and is derived from δ^15^N isotopes combined with a firn densification (i.e. the compaction of the perennial snowpack) and heat diffusion model. This record provides the most reliable estimate of the abrupt temperature increase during D-O events, and is not subject to the seasonal biases that affect temperature reconstructions derived from water isotope records^[@CR8]^. The Kindler record was splined using the same methodology as outlined above, to a uniform 20-year timestep from 10--60 kyr BP. This was then low-pass filtered (Fig. 6a; red line) to remove variability on timescales of less than 500 years, which might conflict with the interannual variability applied in the previous step. The filtered timeseries is used to derive a temporal correction to the overall climate timeseries (Fig. 6b). The difference in Greenland temperature between the snapshot runs (splined to every year and then smoothed with a 100-year running mean) and the Greenland ice-core temperatures is taken as a millennial-scale correction.

Fig. 6: The Greenland ice-core dataset and the temporal correction used to incorporate millennial-scale variability into the dataset. (**a**) The Kindler *et al*.^[@CR8]^ dataset splined to a uniform 20-year timestep (black line) and following filtering using a 500-year low-pass filter (red line). (**b**) Difference in Greenland temperature between the splined data and the filtered ice-core dataset; this is used to scale a hosing experiment to give the spatial impact of millennial variability.
This correction was then used to scale a hosing experiment to give the spatial climate impact of the D-O events. The hosing experiment was performed using HadCM3B-M2.1 at 21 kyr BP, the period representing the last glacial maximum. A freshwater forcing of 1 Sv (1 Sv = 1 × 10^6^ m^3^ s^−1^) is continually applied over the Atlantic Ocean between 50°--70°N for 200 years. This acts to decrease the strength of the AMOC and causes a consequent climatic impact across the Northern Hemisphere centred on Greenland, with the final 50 years of the simulation used for analysis. Although this drives a cooling effect, it provides a spatial fingerprint for the millennial-scale climate change. This approach is similar to that used in TraCE-21ka^[@CR48]^, where the freshwater forcing was continually updated to match proxy records of Greenland temperature. We adjusted the hosing response to achieve the same model-data agreement.
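The scaling amounts to pattern scaling: the fixed hosing anomaly field is multiplied, per time segment, by the ratio of the required Greenland correction to the hosing experiment's own Greenland anomaly. A minimal sketch, with assumed array names, is:

```python
# Pattern-scaling sketch of the millennial-variability step.
import numpy as np

def add_do_variability(field, hosing_anom, correction, greenland_ix):
    """field        : (time, lat, lon) splined + interannual climate series
    hosing_anom  : (lat, lon) hosed-minus-control anomaly pattern
    correction   : (time,) filtered ice-core minus splined Greenland temp
    greenland_ix : (j, i) grid indices of the Greenland reference point"""
    # scale so the corrected field matches the ice-core record over Greenland
    scale = correction / hosing_anom[greenland_ix]            # (time,)
    return field + scale[:, None, None] * hosing_anom[None, :, :]

# toy demonstration with random fields
rng = np.random.default_rng(4)
field = rng.normal(size=(50, 36, 72))
hosing = rng.normal(size=(36, 72))
hosing[20, 10] = -8.0                     # strong hosing cooling over Greenland
corr = rng.normal(size=50)
out = add_do_variability(field, hosing, corr, (20, 10))
```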
The resultant scaled extent and spatial patterns of the D-O events are then added to the splined and interannual data in 20-year segments (which is the approximate resolution of the Greenland ice-cores) from 11,000 to 60,000 years BP. This avoids any millennial-scale correction being applied during the Holocene. The consequent timeseries for the Northern Hemisphere and Greenland is shown in Fig. 7.

Fig. 7: Timeseries showing the addition of D-O variability to the splined and interannual timeseries. The onset and duration of D-O variability is identified from Kindler *et al*.^[@CR8]^ and the spatial impact is identified via separate hosing experiments (see the text). (**a**) Northern Hemisphere SATs (°C), (**b**) Greenland SATs (°C), (**c**) Northern Hemisphere precipitation (mm/day), and (**d**) Greenland precipitation (mm/day).
A limitation of this technique is that it assumes that all millennial-scale variability is driven by changes in the Atlantic meridional overturning circulation (AMOC). It is a similar approach to that used in the TraCE-21ka experiments. We also use only one LGM hosing experiment to derive the spatial fingerprint of millennial variability, despite there being a range of climate configurations present over the past 60 kyr. This decision reflects past unpublished work in which we carried out an ensemble of hosing experiments using HadCM3B that incorporated varying climate states representing the past 60 kyr, in order to investigate the Heinrich events. These showed that the spatial pattern of the hosing experiments does not vary depending on the climate state in the HadCM3B model. Finally, our overall approach does not include solar and volcanic forcing, as they are not included in the original snapshot GCM simulations.
### Downscaling and bias correction
Following this, the dataset has been downscaled from the standard HadCM3B-M2.1 resolution (3.75° × 2.5°) to 0.5° resolution via bilinear interpolation. This has been performed using the NCL function linint2.
The final step is to bias correct the downscaled climate data. This has been done for temperature, precipitation, minimum monthly temperature and incoming shortwave energy. The bias corrected temperature data is also used to construct the wind-chill dataset. Snow-depth (as snow water equivalent) and number of rainy days per month have not been bias corrected.
The temperature (and consequently wind-chill), precipitation and minimum monthly temperature datasets have been bias corrected using the high-resolution CRU CL v2.0 observational dataset from the University of East Anglia, covering the period 1901 to 1990^[@CR32]^ at 1/6th degree resolution (1080 × 2160). This has been generated from over 10,000 temperature stations and 25,000 precipitation stations. The dataset is first upscaled to 0.5° resolution. The incoming shortwave dataset has been bias corrected using the EWEMBI dataset^[@CR49]^, which covers the period 1979--2013 at 0.5° resolution. It is worth noting that these are not directly comparable observational datasets to use for bias correction, as our timeseries finishes with pre-industrial greenhouse gas concentrations. However, in HadCM3B-M2.1 this has been shown to have only a small impact relative to the model biases^[@CR31]^.
The differences between the observed and modelled values are calculated for each month and for each grid square. This correction is then applied to the whole time period, which assumes that the bias is constant throughout time. For precipitation, the ratio between the modelled and CRU datasets is calculated, which is then multiplied with the modelled data. However, some areas that are very dry might show large differences in the ratio compared to observations, even though the actual precipitation amounts are very small. In order to avoid very large scaling values, the bias-correction scaling of precipitation is capped at three times the modelled value.
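A compact sketch of this correction scheme (with illustrative names; the actual scripts are provided with the dataset) is:

```python
# Additive correction for temperature, capped multiplicative for precipitation.
import numpy as np

def bias_correct(model, obs_clim, model_clim, kind="add", cap=3.0):
    """model      : (time, lat, lon) with time a multiple of 12 months
    obs_clim   : (12, lat, lon) observed monthly climatology (e.g. CRU)
    model_clim : (12, lat, lon) model climatology over the same months"""
    n_years = model.shape[0] // 12
    if kind == "add":                        # temperature-like variables
        corr = np.tile(obs_clim - model_clim, (n_years, 1, 1))
        return model + corr
    # precipitation: scale by obs/model ratio, capped at 3x the model value
    ratio = np.where(model_clim > 0,
                     obs_clim / np.maximum(model_clim, 1e-9), 1.0)
    ratio = np.minimum(ratio, cap)
    return model * np.tile(ratio, (n_years, 1, 1))
```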
The resultant mean annual corrections for temperature and precipitation are shown in Fig. 8. The final bias-corrected timeseries for SAT and precipitation, compared with the pre-bias-corrected data, is shown in Fig. 9. In HadCM3B-M2.1, surface air temperatures are subject to a cold bias towards the poles, particularly over Russia and Scandinavia^[@CR31]^. As such, there is a small increase in NH extra-tropical land temperature in the Holocene following bias correction. In Greenland, there is a small decrease in SATs due to a warm bias simulated in the model. The model does a reasonable job of simulating spatial patterns of precipitation, and is comparable to other CMIP5 models^[@CR31]^. The model however overestimates precipitation in areas of significant topography, such as the Himalayas, Tibet and the Rockies, although negative biases in observations can act to amplify this^[@CR50]^. These regions are likely to be the driver behind the decrease in NH precipitation following bias correction. The finalised datasets used to produce this plot can be found within the NERC digital repository^[@CR51]^.

Fig. 8: Mean annual corrections used to bias correct the model data, calculated using the CRU data. (**a**) SATs (°C), and (**b**) precipitation (mm/month).

Fig. 9: Timeseries showing the bias-corrected data against the pre-bias-corrected data. (**a**) Bias-corrected Northern Hemisphere SATs (°C), (**b**) bias-corrected Greenland SATs (°C), (**c**) comparison of bias-corrected (black) against pre-bias-corrected (red) SATs (°C) for the Northern Hemisphere, smoothed with a 20-year running mean, (**d**) the same as (**c**) but for Greenland. (**e**--**h**) The same as (**a**--**d**) but for precipitation (mm/day).
Data Records
============
The datasets are in the form of NetCDF files and can be found within the NERC digital repository^[@CR51]^. The climate variables and units are: temperature (°C), precipitation (mm/day), incoming shortwave energy (W m^−2^), minimum monthly temperature (°C), snow depth as snow water (or liquid) equivalent (m), wind chill (°C) and number of rainy days per month (between 0 and 30). It is worth noting that the number of rainy days per month includes only those days when rain exceeds 0.4 mm/day. This is because daily rainfall in HadCM3B (like that of many GCMs) is prone to drizzle, i.e. rain every day, without enough strong rainfall events.
Each climate variable is represented by a set of 24 files that have been compressed into seven folders: Temp, Precip, downSW, wchill, snowSWE, RainDays and minMonth. Each file represents 2500 years, equivalent to 30000 months, between the latitudes 0° to 90°N at 0.5° resolution. Each file therefore has the dimensions 180 (lat) × 720 (lon) × 30000 (month).
The seven folders contain the regridded and, with the exception of snow depth and rainy days, bias-corrected data. The separate datasets for the different stages of the Methods (e.g. splining, addition of interannual variability and D-O variability) are not given but are available on request. These are global datasets at the original HadCM3B-M2.1 resolution (3.75° × 2.5°).
We also provide the netcdf files for the land-sea mask and ice fraction, which have been compressed into two folders; landmask and IceFrac. These are provided as annual files, so have the dimensions 180 (lat) × 720 (lon) × 2500 (year).
We provide an example subset of the temperature data ("test_decadal_tas_0_2.5kyr.nc"), which gives decadal averages for each month for 0--2500 years. The scripts for each stage of the methodology (splining, interannual variability, millennial variability, bias correction and downscaling) are given in the Scripts folder.
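As a usage example, the test file can be opened with standard NetCDF tools. The sketch below uses xarray; the variable name "tas" follows the naming convention given below, while the coordinate names ("lat", "lon", "time") and their ordering are assumptions and may differ in the actual files:

```python
# Open the example file and take a simple regional mean.
import xarray as xr

ds = xr.open_dataset("test_decadal_tas_0_2.5kyr.nc")
tas = ds["tas"]                            # decadal-mean monthly SAT (deg C)
# area subset: 45-60N, 0-30E (slice order assumes ascending coordinates)
subset = tas.sel(lat=slice(45, 60), lon=slice(0, 30))
print(subset.isel(time=0).mean().values)   # first time step, spatial mean
```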
There are a number of additional variables that have not yet been produced but could be generated on request; these are total evaporation (mm/day), soil carbon (kg C m^−2^) and soil moisture (kg m^−2^). Note that data can only be produced for the Northern Hemisphere.
The naming convention for the bias-corrected files (temperature, precipitation, wind-chill, minimum monthly temperature and incoming shortwave) is:

bias_regrid_<variable>_<year start>_<year end>

and for the remaining climate variables (snow depth and rainy days), the land-sea mask and the ice fraction it is:

regrid_<variable>_<year start>_<year end>

where the variable names are:

- tas = surface air temperature
- pr = precipitation
- surface_downwelling_shortwave_flux = incoming shortwave energy
- wchill = wind-chill
- tempmonmin_abs = minimum monthly temperature
- lwe_thickness_of_surface_snow_amount = snow (liquid) water equivalent
- number_rainy_days = number of rainy days per month (>0.4 mm/day threshold)
- landmask = land-sea mask
- icefrac = ice fraction

year start and year end refer to the beginning and end years of the file, and comprise a 2500-year section between 0 and 60,000 years.
Technical Validation
====================
A broad validation of the HadCM3B-M2.1 model against observational datasets is given in section 5 of Valdes *et al*.^[@CR31]^. They show that the model produces an accurate representation of different aspects of the climate system in land and sea surface temperatures, precipitation and ocean circulation. Similarly, they show that HadCM3B-M2.1 outperforms many of the CMIP5 models, particularly when evaluating surface air temperatures (see their Fig. 2), despite the cold bias in the model discussed above. Here we provide a comprehensive validation of the temperature and precipitation datasets.
Timeseries validation
---------------------
Validating palaeo-climate timeseries against observations is challenging due to the scarcity of observational datasets, which are themselves subject to a range of biases and uncertainties. Here we compare temperature and precipitation against a range of available ice-core and land-based datasets to validate in the temporal domain. Figure 10 shows 23 land-based temperature records spanning the last glacial period, from ice-cores (a–l), the ice margin (o–u) and locations away from the ice-sheet (v–w), against the modelled temperature smoothed with a 20-year running mean. The references for panels a–e, v and w are given in the study of Shakun et al. (2012) [8,52–56], whilst references for the remaining 16 datasets are given in the supplementary data of Buizert et al. [57]. Each timeseries is shown with the TraCE-21ka dataset [48] as the blue line; note that this latter model dataset has not been bias-corrected.

Fig. 10: Validation timeseries plots for SATs (°C). The bias-corrected temperatures (black) are shown with a range of observational datasets (red), together with the TraCE-21ka data in blue [48]; note that the latter has not been bias-corrected. References for panels (a–e, v, w) are given in the study of Shakun et al. (2012) [8,52–56]; references for the remaining 16 datasets are given in the supplementary data of Buizert et al. [57]. The associated lat/lon position for each dataset is given above each panel. Note the longer time range in panels (a–e, v and w).
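For readers reproducing these comparisons, the 20-year running mean is straightforward to apply; the sketch below assumes annual-mean input (the window length is from the text, everything else is illustrative).

```python
# Sketch: centred 20-year running mean used to smooth the model series.
import numpy as np

def running_mean(x: np.ndarray, window: int = 20) -> np.ndarray:
    """Centred moving average; the edges are trimmed."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

annual_tas = np.random.randn(2500)     # stand-in for one 2500-year series
smoothed = running_mean(annual_tas)    # length 2500 - 20 + 1 = 2481
```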
Panels a–e compare against the three Greenland ice-cores: GRIP, NGRIP and GISP. There is generally good agreement between all three of these cores. The NGRIP datasets of Rasmussen et al. (2006; panel a) [55] and Masson-Delmotte et al. (2005; panel c) [9] show colder temperatures throughout the period, by between 6 °C and 10 °C. Note that the Masson-Delmotte et al. [9] dataset is shown as anomalies from present-day. This discrepancy is reduced following re-calibration of the dataset in the study of Kindler et al. (panel b), which correlates very well with our dataset because it was used to normalise our D-O events. There is also good agreement in both the patterns of temperature change and the absolute values for the GISP and GRIP cores, although there is a greater decline in Holocene temperatures in the modelled data.
The remaining ice-core datasets, located across Greenland and Arctic Canada, show generally good agreement with the modelled data, particularly for the Bølling-Allerød and Younger Dryas periods. However, compared to the Agassiz and Hans Tausen ice cores, the modelled temperatures are too cool in the Holocene, whilst for the Camp Century and EGRIP ice cores, modelled LGM temperatures are warmer than in the ice-core data. Similarly, the ice-margin datasets generally compare well with the modelled data. There is also evidence from a number of these timeseries that the model simulates the Holocene Climatic Optimum (HCO), albeit to a lesser degree than many of the observational datasets (specifically panels j, k, p, q and r). A similar model-proxy discrepancy during this period has been shown for a number of other climate models [58]. This may reflect biases across the current generation of models and/or biases in the reconstructions themselves.
Panels (v) and (w) are the only available datasets that permit validation away from the Greenland ice-sheet. Panel (v) shows mean July air temperatures derived from fossil assemblages of Chironomidae from Burial Lake in Alaska [53]. The discrepancies here are greater than for the ice-core/ice-margin temperatures; however, the general pattern of temperature change over the period is consistent, specifically the peak in temperatures in the Holocene and the decline back towards the LGM. Pre-industrial temperatures, however, are lower in the modelled output by up to 5 °C.
Panel (w) shows surface air temperatures reconstructed from membrane lipids of soil bacteria from the Mangshan loess plateau in Central China. Here there is significant discrepancy between the two datasets, with modelled temperatures differing by up to 16 °C. Although no past studies have investigated this region using HadCM3B-M2.1, other models investigating present-day conditions have highlighted the difficulty of modelling climate across this region [59]. This is in part due to its complex topography and its location between the semi-arid continental region to the west and the humid monsoon regions to the east. This may therefore be a region of uncertainty in the dataset, which may be exacerbated by the bi-linear interpolation technique used to downscale the data. Using a more advanced statistical downscaling technique, as shown in past studies, may improve prediction skill [60].
Validating palaeo-precipitation is more challenging still, owing to the lack of observational datasets. Here we compare the precipitation timeseries against reconstructions of snow accumulation derived from the GRIP and GISP ice-cores [61–63] (Fig. 11). Although this is not a direct comparison, the extremely low temperatures mean that the contribution of rainfall during the glacial period was minimal. There is generally good agreement between the model and the GRIP observational dataset, particularly between 0–30 kyr BP. The GISP dataset shows greater-than-modelled precipitation in the Holocene, but there is good agreement from the Younger Dryas to approximately 40 kyr BP. Both datasets show a breakdown in correlation towards the end of the records, where there is some disagreement in the timing of the DO events, likely related to differences in ice-core chronologies. The agreement between the model and the ice-core records in the magnitude of the accumulation increase across multiple DO events lends support to the overall approach we have employed.

Fig. 11: Validation plots for precipitation (mm/day). The bias-corrected precipitation (black) is shown against two observational datasets from Greenland (red). Both datasets represent snow accumulation, which is used as an equivalent to precipitation in the absence of total precipitation datasets. (a) shows the GRIP ice-core and (b) the GISP ice-core. The associated reference and lat/lon position for each dataset is given above each panel. Mismatches in the timing of abrupt events are due to updates to the Greenland ice-core chronologies since the publication of these two records.
Spectral validation
-------------------
Figure 12 shows the power spectra of the final 117 years of the model data against the CRU observational dataset (1900–2017) [64] for temperature (left panels) and precipitation (right panels), in order to compare their spectral characteristics. The series mean and least-squares linear trend were removed prior to analysis, as were the annual and seasonal cycles, in order to remove peaks at the 12-month and 6-month periods. Note that the seasonal cycle is well represented in the model data for both temperature and precipitation. The data are shown in a log-log convention in order to highlight low-frequency variability. Although these datasets are not directly comparable, as they represent different time periods, past work [31] has shown that the biases associated with this are small compared to the model biases.

Fig. 12: Power spectra of the CRU observational dataset spanning 1900–2017 and the final 117 years of the model dataset for four different regions. The series mean, least-squares linear trend, annual and seasonal cycles have been removed prior to analysis. (a–d) show temperature, with CRU in red and the model in black; (e–h) show precipitation. The dashed lines represent the 95% confidence level determined by a red-noise spectrum of a first-order auto-regressive (AR1) process.
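The preprocessing described above can be sketched as follows; the use of scipy and the toy series are assumptions, but the steps (remove the mean and least-squares trend, remove the mean annual cycle, then estimate the spectrum) mirror the text.

```python
# Sketch: detrend, remove the mean seasonal cycle, then a periodogram.
import numpy as np
from scipy.signal import detrend, periodogram

def monthly_anomalies(x: np.ndarray) -> np.ndarray:
    """Remove the series mean + linear trend, then the mean annual cycle."""
    x = detrend(x, type="linear")          # removes mean and trend together
    clim = x.reshape(-1, 12).mean(axis=0)  # mean of each calendar month
    return x - np.tile(clim, len(x) // 12)

series = np.random.randn(117 * 12)         # stand-in: 117 years, monthly
freq, power = periodogram(monthly_anomalies(series), fs=12.0)  # cycles/year
# Plot freq vs power on log-log axes to highlight low-frequency variability.
```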
The spectral plots show a significant degree of noise in both the observational and modelled datasets over the period of analysis, which may reflect the short timescale analysed. There are a number of differences in the regional power spectra for temperature, particularly in Greenland. However, the North American and European temperature spectra show some similarities, in particular an approximately 10-month peak in North America and a lower-frequency peak in Europe of between 11 and 12 years. This low-frequency peak may represent a mode of North Atlantic variability such as the Atlantic Multidecadal Oscillation.
Variability in regional precipitation also shows a significant degree of noise for both the observations and the modelled data. The spectra for the whole Northern Hemisphere region show greater parity, however, with potential peaks present in both datasets at approximately 13 to 14 years and at 5 to 6 years.
Spatial validation
------------------
In order to assess whether the spatial variability of the modelled data is comparable with observations, Figs 13 and 14 show the annual standard deviation, and the first and second EOFs of Northern Hemisphere winter, for the modelled data against the CRU dataset [64]. The CRU data span the period 1900–2017, which is compared with the final 117 years of the model data.

Fig. 13: Spatial pattern of the standard deviation for the final 117 years of the model dataset and the CRU observational dataset spanning 1900–2017. (a) Model SATs (°C), (b) CRU SATs (°C), (c) model precipitation (mm/day) and (d) CRU precipitation (mm/day).

Fig. 14: First and second EOFs showing the spatial scale of variability, and the corresponding principal-component timeseries, for the CRU observational dataset spanning 1900–2017 and the final 117 years of the model dataset. The left panels show EOF1 and the right panels EOF2, for temperature (a–d) and precipitation (e–h).
Generally there is good agreement in the annual-mean standard deviations (SD) for both temperature and precipitation, and the major patterns of climate are well represented. In some regions, however, the modelled variability is too large, such as Scandinavia and Alaska, which may reflect the cold bias in these regions and hence an exaggerated seasonal cycle. In contrast, in other regions the modelled variability is too small, such as across Northern Russia, Canada, Northern Africa and the Middle East. Precipitation also shows good similarity with the observed data, although variability is too small in parts of the tropics, and too large in some very dry regions such as the Sahara/Sahel, the Mongolian Plateau and Greenland.
The first EOF (Fig. 14) for both the modelled and observational SATs shows good similarity and may represent the pattern of the Arctic Oscillation (AO), with a region of variability over Greenland and an opposing pattern over Scandinavia and Northern Asia. The centres of variability differ between the datasets, specifically in the extent of the eastward region of variance in the observations. This oscillation accounts for a greater proportion of the variance in the observational dataset, and may also be evident in the principal-component (PC) timeseries, which shows variability on multi-decadal timescales. The second EOFs show greater disparity and are harder to interpret, with an area of variability over North America in the observational data that may represent a positive phase of the Pacific/North American (PNA) teleconnection pattern, which is not captured in the modelled data. Again there is a possible decadal-scale oscillation apparent in the PC timeseries for the observational dataset that is not as well represented in the modelled data.
For precipitation, there is generally good correspondence between the modelled annual EOF1 and EOF2 and the observations, highlighting strong opposing areas of variance in the south-west USA and tropical South America, and across India and South-East Asia. This may be linked to the zonal shift in the position of the ITCZ on annual timescales. EOF2 also shows similarities, and its variability may be linked to monsoon precipitation, including the African monsoon (AFM) and the Asian and Indo-Pacific monsoon (AIPM). The EOFs presented here are similar to those generated from the re-forecast data of Zuo et al. [65].
Both Figs 13 and 14 indicate that, spatially, on high-frequency annual/seasonal timescales, the spread and variability of the model data are broadly consistent with observations.
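As a rough guide to how such EOFs are computed, the sketch below uses a plain SVD with the conventional cos(latitude) area weighting; the field layout and the weighting choice are assumptions rather than the authors' exact procedure.

```python
# Sketch: EOFs/PCs of an anomaly field via SVD with cos(lat) weighting.
import numpy as np

def eofs(field: np.ndarray, lat: np.ndarray, n_modes: int = 2):
    """field: (time, lat, lon) anomalies; lat in degrees."""
    nt, nlat, nlon = field.shape
    w = np.sqrt(np.cos(np.deg2rad(lat)))[None, :, None]    # area weights
    X = (field * w).reshape(nt, nlat * nlon)
    X = X - X.mean(axis=0)                                 # remove time mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]                     # PC timeseries
    patterns = Vt[:n_modes].reshape(n_modes, nlat, nlon)   # EOF patterns
    explained = (s**2 / np.sum(s**2))[:n_modes]            # variance fraction
    return patterns, pcs, explained
```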
Usage Notes
===========
The timeseries NetCDF files are in 2500-year/30000-month sections from 0 BP, i.e. the year 1950. The first month in the 0–2.5 kyr data files therefore represents January 1950. Pre-industrial greenhouse gas concentrations are used to represent this year, as shown in Table 1. Temperature is given in units of °C, precipitation in mm/day, incoming shortwave energy in W m^−2^, minimum monthly temperature in °C, snow depth as snow water (or liquid) equivalent in m, wind chill in °C, and the number of rainy days per month (with a threshold of 0.4 mm/day) as a value between 0–30. Note also that for the snow-depth dataset a mask is applied to all grid squares covered by an ice-sheet, so snow depth is only shown for ice-free grid squares.
HadCM3B-M2.1 and the methodology used here are subject to a range of uncertainties and biases that inherently influence the simulated climatologies. A number of these are outlined in the text above, but a wide range of other literature gives an overview of these uncertainties, both in the model [31] and in the limitations associated with the methodology [2].
HadCM3B-M2.1 is constructed on a Cartesian (regular latitude-longitude) grid, so when calculating global or regional means a weighting must be applied to the data that takes into account the smaller area of grid squares towards the poles. The data can be used with a wide range of post-processing software.
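A minimal sketch of that weighting, assuming a regular 0.5° grid with NaNs over masked (e.g. ocean) cells; grid-cell area on such a grid scales with cos(latitude).

```python
# Sketch: area-weighted regional mean on a regular lat-lon grid.
import numpy as np

def weighted_mean(field: np.ndarray, lat: np.ndarray) -> float:
    """field: (lat, lon) with NaNs where masked; lat in degrees."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones((1, field.shape[1]))
    valid = ~np.isnan(field)
    return float(np.nansum(field * w) / np.sum(w * valid))

lat = np.arange(0.25, 90.0, 0.5)       # 180 cell centres, 0-90N at 0.5 deg
field = np.random.randn(180, 720)      # stand-in for one monthly slice
print(weighted_mean(field, lat))
```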
**Publisher's note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This work was carried out using the computational facilities of the Advanced Computing Research Centre, University of Bristol (<http://www.bris.ac.uk/acrc>). E.A. is funded by the NERC project NE/P002536/1. P.O.H. is supported by a University of Birmingham fellowship. Our thanks go to our project partners for discussions on the topic area: Adrian Lister (PI), Jennifer Crees, Stephen Sitch, David Pearson, Brian Huntley, Thomas Hickler, Wolfgang Pappa and Veiko Lehsten. Our thanks also go to Mario Krapp (University of Cambridge) for his advice.
Writing and analysis were carried out by E.A. and P.O.H. The climate model simulations and the design of the experimental set-up were compiled and carried out by P.V.
Raw model output is available for further analysis from <https://www.paleo.bristol.ac.uk/ummodel/scripts/papers/Armstrong_et_al_2019.html>. A list of the snapshot simulation names used in this experiment is given in Table 1.
All scripts used to construct the climate timeseries have been written using the NCAR Command Language (NCL, version 6.4.0) and are available within the NERC digital repository [51].
The authors declare no competing interests.
Dance in the Time of Coronavirus
Tanja Råman and John Collingswood from TaikaBox write about the creative strategies one must adopt while working on an international dance project under the restrictions of the current COVID-19 pandemic. What can be done when performances and workshops are cancelled? As sensory experiences – essential for dance artists – are restricted, how can they respond to the challenge and work together anyway?
Dancing on the Margins of Climate Change is a long-term project uniting 11 partners in the Barents region – the northern parts of Finland, Sweden, Norway and Russia – and includes the creation and touring of dance and circus in the region.
The aim is to contribute to building a sustainable and thriving international dance sector in the North. Due to lack of resources, adequate funding and job opportunities, one of the biggest challenges for the development of the dance sector in the North is to be able to maintain professional artists and also to attract new ones to regenerate and strengthen the sector.
Therefore, developing stronger networks amongst the artists and organisations in the Barents region has the potential to create long-term benefits for everyone involved, provide employment and professional development opportunities for freelancers in the area – and perhaps start a new kind of touring circuit that will help raise the profile of dance created in the North.
The project includes a research period leading to the production of four new pieces by dance and circus artists from Northern Finland, Sweden, Norway and Russia, and culminating in a joint tour across the four countries.
The artistic theme for 2020 revolves around local places, their inhabitants, and how they are being affected by global trends: urbanization, digitization, globalization and climate change. What a timely theme it has proven to be, given the outbreak of the COVID-19 pandemic, which has demonstrated clearly how vulnerable we are as human beings and as a society.
When we embarked on the project in 2018, we certainly did not expect to find ourselves in the current lockdown situation.
The world has presented us with a real global issue that we need to face – there is no point in pretending that everything is going to return to normal after a few months, if ever. There are likely to be more significant challenges on the horizon due to accelerating climate change. We are therefore forced to embrace digital platforms and tools to support artistic work, despite the fact that teleconnected performance seems to go against the intrinsic nature of traditional live dance shows.
At the end of March 2020, TaikaBox arranged a three-day online research period in Oulu, bringing the artists together via Skype for practical workshops, seminars and meetings. The discussions covered issues such as the artists' ability to help people grieve a lost way of life and move forward.
How could we, as dance artists, use digital tools to reach out and create meaningful connections with people who are isolated? Will there be an increased need for artistic experience – and particularly touch – after isolation?
Some participants were initially skeptical about how a practical workshop and research gathering would work over video conferencing, but in the end the experience was positive.
“I have never ever danced on camera with my colleagues. This new experience was much more fun than I thought it to be at first. I had a big studio and big projections of everyone and that made it really easy to get into it. I particularly liked the impro tasks where we danced all together and also took in what we saw from the others,” explains Jenny Schinkler, dancer from Sweden.
The uncertainty about the future has meant that the project needs radical rethinking. We do not know if international travel will be possible later this year, or whether venues will be open during November when the tour was planned to take place.
Creating a range of potential plans is problematic as it is very difficult for such a big project to be flexible enough to respond rapidly to changes in the world situation. Venues need to be booked many months in advance and it is difficult to budget for all outcomes simultaneously, so a new plan is needed.
Sustainably produced art which is locally anchored
Dansinitiativet, the project leader in Sweden, is currently re-writing the project in consultation with TaikaBox, developing a new model of sustainable methods for creating and touring that can survive a global lockdown and will remain relevant when the borders open up once more.
“The new format is based on digital platforms and tools to enhance the contact between the artists despite distance. We are working towards a structure and community that can keep living in some form in the future,” says Per Sundberg, producer at Dansinitiativet.
"It is immensely important that we as artists continue to work in cross-cultural projects, particularly at this time when our borders are closed and many of us find ourselves lost in uncertainty and isolation," says Marie Hermo Jensen, producer at Stellaris DansTeater, Norway.
“The outbreak of COVID-19 has forced us, artists who often focus on touch and sensory experiences in both our processes and in performance, to explore connectivity in a different way,” she continues.
Although everything is currently uncertain and seems impossible, we believe that persisting with the project and responding to new challenges has huge learning potential for all of the partners, participants and audiences involved.
By Tanja Råman and John Collingswood / TaikaBox, Oulu, Finland. TaikaBox is the Finnish partner in the project.
Foundations for Performance Training: Skills for the Actor-Dancer explores the physical, emotional, theoretical, and practical components of performance training in order to equip readers with the tools needed to successfully advance in their development as artists and entertainers.
Each chapter provides a fresh perspective on subjects that students of acting and dance courses encounter throughout their training as performing artists. Topics include:
- Equity, diversity, and inclusion in performance
- Mind/body conditioning for training, rehearsal, and performance
- Developing stage presence and spatial awareness
- Cultivating motivation and intention in performance
- Expanding repertoire and broadening skillset for performance
- Auditioning for film and stage
- Developing theatrical productions
This book also offers experiential exercises, journal writing prompts, and assignments to engage readers, enrich their learning experience, and deepen their exploration of the material described in each chapter. Readers will grow as performing artists as they analyze the principles of both acting and dance and discover how deeply the two art forms are intertwined.
An excellent resource for students of acting, musical theatre, and dance courses, Foundations for Performance Training encourages a strong foundation in creative analysis, technique, artistic expression, and self-care to cultivate excellence in performance.
Table of Contents
Introduction
1. Embracing Performance and Communing with the Audience: Acting, Dance, and the Performer’s Role in Society
2. Performer Preparation: Practices from Theatre Movement for Mind/Body Connection and Free Expression
3. Performance Building Blocks: Key Concepts in Ballet Dance Technique for Presence and Authenticity on Stage and Screen
4. Discovering Character Motivation and Intention through Acting Technique and Physical Theatre
5. Jazz Dance and Musical Theatre: The Quintessential Combination of Acting and Dance in Performance
6. Auditioning: From a Necessary Evil to a Time to Shine
7. Performer and Theatre Maker: Creating New Works and Breathing New Life into Established Works
8. On Health and Wellness: Self-Care for Longevity as a Performer
Final Thoughts
Appendix
Index
Author Biography
Cara Harker is a Professor and Associate Chair of the Department of Theatre and Dance at East Tennessee State University (ETSU), where she teaches musical theatre dance, theatre movement, jazz, tap, ballroom, aerial, improvisation, and composition. She also serves as program coordinator for the dance minor program.
Students learn more from doing activities and practicing their skills on assessments, yet it can be challenging and time-consuming to generate such practice opportunities. In this work, we present a pipeline for generating and evaluating questions from text-based learning materials in an introductory data science course. The pipeline includes applying a T5 question generation model and a concept hierarchy extraction model on the text content, then scoring the generated questions based on their relevance to the extracted key concepts. We further classified the generated questions as either useful to learning or not with two different approaches: automated labeling by a trained GPT-3 model and manual review by expert human judges. Our results showed that the generated questions were rated favorably by all three evaluation methods. We conclude with a discussion of the strengths and weaknesses of the generated questions and outline the next steps towards refining the pipeline and promoting NLP research in educational domains.
1. INTRODUCTION
As education across grade levels continues to transition towards online platforms in response to the COVID-19 pandemic, the need for effective and scalable assessment tools emerges as a pressing issue for instructors and educators. Amid many other logistical issues that arise from emergency online education [5], instructors often find themselves having to generate a large question bank to accommodate this new learning format. In turn, this challenge motivates the need to support instructor efforts via methods that automatically generate usable assessment questions from the learning materials, in a way that requires minimal input from instructors and domain experts.
Recent advances in natural language processing (NLP), question answering and question generation (QG) offer a promising path to accomplishing this goal. Most theories of learning emphasize repeated practice as an important mechanism for mastering low-level knowledge components, which together contribute to high-level learning objectives [7]. We therefore envision that having the ability to generate questions on demand would accommodate students' varying levels of learning needs, while allowing instructors to allocate resources to other components of the course. Our work presents an initial step towards realizing this capability. We applied Text-To-Text Transfer Transformer (T5) models to conceptual reading materials from a graduate-level data science course to generate potential questions that may be used for assessment. We then evaluated these questions in three different ways. First, we conducted a separate concept hierarchy extraction process on the reading materials to extract the important concept keywords and scored each generated question based on how many such keywords it contains. Second, we applied a fine-tuned GPT-3 model to classify the questions as either useful to learning or not. Finally, we had two data science instructors perform this same classification task manually. Our results contribute insights into the feasibility of applying state-of-the-art NLP models to generate meaningful questions, with a pipeline that generalizes well across learning domains.
2. METHODS
2.1 Dataset
We used the learning materials from a graduate-level introductory data science course at an R1 university in the northeastern United States. The course has been offered every semester since Summer 2020, with class sizes generally ranging from 30 to 90. The course content is divided into conceptual components and hands-on projects. Students learn from six conceptual units, further broken down into sixteen modules, each corresponding to a data science topic such as Feature Engineering and Bias-Variance Trade-off. Each module consists of reading assignments, ungraded formative assessments and weekly quizzes serving as graded summative assessments. Students also get to practice the learned concepts through seven hands-on coding projects, which are evaluated by an automatic grading system. In the scope of this work, we focus on generating questions from the textual content of the sixteen modules in the course, using the following pipeline.
2.2 Question Generation Pipeline
First, we extracted the learning materials from the online learning platform which hosts the course. This extracted data is in XML format, which preserves not only the text content but also its hierarchy within the course structure (i.e., which module and unit each paragraph belongs to). We scraped the text content from the XML files using the BeautifulSoup library and cleaned the content to remove leading questions, such as "What does this accomplish?" and "Why would this make sense?". These questions were included to help students navigate the reading more effectively but do not contain meaningful information on their own. From this point, the resulting text data was input to two separate processes, as follows.
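A hedged sketch of this extraction step is shown below; the XML tag names and the leading-question list are illustrative assumptions, not the authors' actual code.

```python
# Sketch: pull paragraph text out of course XML with BeautifulSoup.
from bs4 import BeautifulSoup

LEADING_QUESTIONS = {"What does this accomplish?", "Why would this make sense?"}

def extract_paragraphs(xml_string: str) -> list:
    soup = BeautifulSoup(xml_string, "xml")   # the "xml" parser needs lxml
    texts = []
    for p in soup.find_all("p"):              # assumes paragraphs live in <p>
        text = p.get_text(" ", strip=True)
        if text and text not in LEADING_QUESTIONS:
            texts.append(text)
    return texts
```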
Concept Hierarchy Extraction. This process was carried out by the MOOCCubeX pipeline [16], which performs weakly supervised fine-grained concept extraction on a given corpus without relying on expert input. As an example, given a paragraph that explains Regression, some of the extracted concepts include least-squared error, regularization, and conditional expectation; these could be viewed as the key concepts which students are expected to understand after reading the materials. A researcher on the team reviewed the generated concepts and manually removed those deemed invalid, including prepositions (e.g., ‘around’), generic verbs (e.g., ‘classifying’) and numbers (e.g., ‘45’ – this is part of a numeric example in the text, rather than an important constant to memorize).
Question Generation. For this process, we applied Google’s T5, a transformer-based encoder-decoder model. Since its pre-training involves a multi-task structure of supervised and unsupervised learning, T5 works well on a variety of natural language tasks by merely changing the structure of the input passed to it. For our use case, the input data is the cleaned text content prepended by a header of the text. Our rationale for including the header is to inform the model of the high-level concept which the generated questions should center around. We had previously tried extracting answers from the text content using a custom rule-based approach with a dependency parse tree, but found that this resulted in more nonsensical than sensible questions; in comparison, incorporating the headers led to higher-quality questions. There were three hierarchical levels of header used in our input: Unit, Module and Title, where the former encompasses the latter. For example, the unit Exploratory Data Analysis includes the module Feature Engineering, which has a section titled Principal Component Analysis, among others. Before applying the model to our dataset, we also fine-tuned it on SQuAD 1.1 [11], a well-known reading comprehension dataset and a common benchmark for question-answering models.
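The generation step might look roughly like the sketch below with the Hugging Face transformers library; the checkpoint name, the "generate question:" task prefix and the input template are assumptions, since the paper's fine-tuned weights are not public.

```python
# Sketch: header-conditioned question generation with a T5 checkpoint.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_name = "t5-base"   # stand-in; the paper fine-tunes T5 on SQuAD 1.1
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

header = "Exploratory Data Analysis / Feature Engineering / PCA"
passage = "Principal component analysis projects data onto directions..."
inputs = tokenizer(f"generate question: {header} {passage}",
                   return_tensors="pt", truncation=True)

ids = model.generate(**inputs, max_new_tokens=48, num_beams=4)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```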
2.3 Evaluation
We evaluated the generated questions with three different methods as follows.
Information Score. This is a custom metric that denotes how relevant each question is to the key concepts identified in the Concept Hierarchy Extraction step. We denote this set of key concepts as $C$. For every generated question $q$, we further denote $T_q$ as the set of tokens in it and compute the information score as the normalized number of tokens in $T_q$ that coincide with an extracted concept,

$$s(q) = \frac{\lvert \{\, t \in T_q : t \in C \,\} \rvert}{\lvert T_q \rvert} \qquad (1)$$

where the division by $\lvert T_q \rvert$ is for normalization. With this formulation, higher scores indicate better questions that touch on more of the key learning concepts.
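Equation (1) translates directly into code; in the sketch below the whitespace tokenisation and punctuation stripping are simplifying assumptions.

```python
# Sketch: information score of Eq. (1) with naive tokenisation.
def information_score(question: str, concepts: set) -> float:
    tokens = [t.strip("?.,!:;").lower() for t in question.split()]
    tokens = [t for t in tokens if t]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in concepts)
    return hits / len(tokens)

concepts = {"regularization", "regression", "least-squared"}
print(information_score("What is regularization in regression?", concepts))
# 2 concept tokens out of 5 -> 0.4
```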
GPT-3 Classification. We used a GPT-3 model, as it has been a popular choice for text classification tasks such as detecting hate speech [3] and text sentiment. Our classification task involves rating each generated question as either useful for learning or not useful. A useful-for-learning question is one that pertains to the course content and is intended to assess the domain knowledge of the student. On the other hand, a question is classified as not useful if it is vague, unclear, or not about assessing domain knowledge. For example, the question “What programming language do I need to learn before I start learning algorithms?” is a valid question, but it is classified as not useful for learning because it pertains to a course prerequisite rather than domain knowledge assessment. To perform this classification, we first fine-tuned the GPT-3 model with default hyperparameters on the LearningQ dataset [2], which contains 5600 student-generated questions from Khan Academy. Each question carries a label indicating whether it is useful for learning or not, as annotated by two expert instructors. Next, we passed the T5-generated questions in as the GPT-3 model’s input, obtaining as output a set of binary labels indicating whether the model rated each question as useful for learning or not.
Expert Evaluation. To further validate the question quality, we had two expert raters with 5+ years of teaching experience in the domain of data science rate each question. Following the same classification process described above, the two raters indicated whether each question was useful for learning or not. We measured the Inter-Rater Reliability (IRR) between the two raters and found that they agreed on 75.59% of the question ratings, with a Cohen’s kappa indicating a moderate level of agreement [9]. The remaining discordant questions were discussed between the two raters until they reached a consensus on their classification, resulting in all of the generated questions being classified by both the human judges and the GPT-3 model.
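The agreement statistic itself is a one-liner with scikit-learn; the rating vectors below are toy stand-ins for the two raters' labels.

```python
# Sketch: inter-rater reliability via Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

rater_a = [1, 0, 1, 1, 0, 1, 0, 1]   # 1 = useful for learning
rater_b = [1, 0, 1, 0, 0, 1, 1, 1]
print(cohen_kappa_score(rater_a, rater_b))  # 0.41-0.60 reads as "moderate"
```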
3. RESULTS
Following the above pipeline, we generated a total of 203 questions across the three header levels: Module, Unit, and Title. The Appendix shows a number of example generated questions, along with their information scores and GPT-3 model evaluation. Among the 203 questions, 151 (74.38%) were classified as useful for learning by the GPT-3 model. To compare this classification with the human raters’ consensus, we constructed a confusion matrix, as shown in Table 1. We observed that the model agreed with the human raters in 135 (66.50%) instances; in the cases where they disagreed, most of the mismatches (52 out of 68) were due to the GPT-3 model overestimating the questions’ usefulness.
We followed up with a qualitative review of the questions rated as not useful by human experts to better understand (1) what separated them from the questions rated as useful, and (2) why the GPT-3 model might still rate them as useful. For (1), we identified two important requirements that a question generally needs to meet to be rated as useful by human experts. First, it has to thoroughly set up the context (e.g., what is the scenario, how many responses are expected) from which an answer could be reasonably derived. An example question that satisfies this category is “What are two types of visions that a data science team will work with a client to develop?,” where the bolded terms are important contextual factors which make the question useful. We further note that useful questions with thorough contexts tend to be longer, because they necessarily include more information to describe such contexts. At the same time, short questions may still be considered useful by expert raters if they target a sufficiently specific concept. For example, “what is a way to improve a decision tree’s performance?” is considered useful because the bolded term is very specific. On the other hand, a similar-looking question such as “what is a way to analyze business data” is not useful, due to “analyze business data” being too broad. The GPT-3 model typically fails to recognize this specificity criterion – many of the questions rated as useful by GPT-3, but not by human raters, are similar to ones such as “What are two types of data science tasks?,” which could be useful if “data science tasks” was replaced with a more targeted concept.
Next, we examined whether our score metric, which calculates the normalized number of important concepts that a question encapsulates, aligns with the expert classification of question usefulness for learning. We observed from Figure 1 that, across the three header levels, questions rated as useful tended to have similar or higher information scores than their counterparts.
4. DISCUSSION AND CONCLUSION
In this work, we propose and evaluate a domain-independent pipeline for generating assessment questions from reading materials in a data science course. Our results showed that the GPT-3 model, fine-tuned on the LearningQ dataset [2], was able to reach an acceptable level of agreement (on 66.50% of the questions) with the consensus of two expert raters. The model appeared to learn that long questions are likely useful, which is a reasonable assumption, as these questions might contain more relevant contextual information. However, it also classified some short questions as useful, despite a lack of specificity that human evaluators could easily recognize. As the LearningQ dataset did not contain data science questions, it is no surprise that our model was not particularly good at differentiating between specific data science concepts (e.g., “decision tree’s performance”) and ambiguous ones (e.g., “business data”). Additional fine-tuning of the GPT-3 model on a labeled dataset closer to our learning domain would therefore be a promising next step.
When treating the expert rating of question usefulness as the ground truth, we found that the useful questions generally had higher information scores than those not rated as useful, suggesting that our rationale for the formulation of these metrics (i.e., that higher scores reflect more concepts captured and therefore higher quality) was justified. At the same time, several questions had relatively low information scores but were still rated as useful by experts (e.g., "What are two types of decision trees?") because they target a sufficiently specific concept. To detect these questions, it would be beneficial to incorporate measures of the generated questions’ level of specificity into the existing information score metric.
The above results have been obtained without the need for any human-labeled domain encoding, which makes our question generation pipeline highly domain-agnostic and generalizable. At the same time, there are ample opportunities to further promote its adoption across different learning domains. First, more research is needed to investigate question generation when the learning contents are not entirely textual but may include multimedia components. Recent advances in the area of document intelligence [1, 4], combining NLP techniques with computer vision, could be helpful in this direction. Second, there remains the need to diversify the generated questions to meet a wider range of assessment goals. In particular, most of our current questions start with “what” (e.g., those shown in the Appendix), which are primarily geared towards recalling information. Incorporating other question types in the generation pipeline could elicit more of the cognitive processes in Bloom’s taxonomy [8] – for example, “how” questions can promote understanding and “why” questions are designed for analyzing – which in turn contribute to better learning overall. This diversifying direction is also an area of active research in the NLP and QG community [13, 14].
We further note that the proposed pipeline is also customizable to individual domains, so as to enable higher quality questions. First, hyperparameter tuning on a dataset relevant to the learning domain would likely improve the performance of the T5 and GPT-3 models. Second, the concept extraction process could be enhanced with a combination of machine-generated and human-evaluated skill mappings, which have been shown to result in more accurate knowledge models [10, 12]. Finally, the question evaluation criteria may also benefit from subject matter experts’ inputs to closely reflect the distinct nature of the learning domain; for example, chemistry assessments could potentially include both conceptual questions (e.g., “what is the chemical formula of phenol?”) and scenario-based questions (e.g., “describe the phenomenon that results from mixing sodium metal and chlorine gas?”).
5. REFERENCES
- D. Baviskar, S. Ahirrao, V. Potdar, and K. Kotecha. Efficient automated processing of the unstructured documents using artificial intelligence: A systematic literature review and future directions. IEEE Access, 2021.
- G. Chen, J. Yang, C. Hauff, and G.-J. Houben. Learningq: a large-scale dataset for educational question generation. In Twelfth International AAAI Conference on Web and Social Media, 2018.
- K.-L. Chiu and R. Alexander. Detecting hate speech with gpt-3. arXiv preprint arXiv:2103.12407, 2021.
- B. Han, D. Burdick, D. Lewis, Y. Lu, H. Motahari, and S. Tata. Di-2021: The second document intelligence workshop. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pages 4127–4128, 2021.
- C. B. Hodges, S. Moore, B. B. Lockee, T. Trust, and M. A. Bond. The difference between emergency remote teaching and online learning. 2020.
- H. Huang, T. Kajiwara, and Y. Arase. Definition modelling for appropriate specificity. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2499–2509, 2021.
- K. R. Koedinger, A. T. Corbett, and C. Perfetti. The knowledge-learning-instruction framework: Bridging the science-practice chasm to enhance robust student learning. Cognitive science, 36(5):757–798, 2012.
- D. R. Krathwohl. A revision of bloom’s taxonomy: An overview. Theory into practice, 41(4):212–218, 2002.
- J. R. Landis and G. G. Koch. The measurement of observer agreement for categorical data. biometrics, pages 159–174, 1977.
- R. Liu and K. R. Koedinger. Closing the loop: Automated data-driven cognitive model discoveries lead to improved instruction and learning gains. Journal of Educational Data Mining, 9(1):25–41, 2017.
- P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
- J. C. Stamper and K. R. Koedinger. Human-machine student model discovery and improvement using datashop. In International Conference on Artificial Intelligence in Education, pages 353–360. Springer, 2011.
- M. A. Sultan, S. Chandel, R. F. Astudillo, and V. Castelli. On the importance of diversity in question generation for qa. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5651–5656, 2020.
- S. Wang, Z. Wei, Z. Fan, Z. Huang, W. Sun, Q. Zhang, and X.-J. Huang. Pathqg: Neural question generation from facts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9066–9075, 2020.
- L. Xue, N. Constant, A. Roberts, M. Kale, R. Al-Rfou, A. Siddhant, A. Barua, and C. Raffel. mt5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934, 2020.
- J. Yu, Y. Wang, Q. Zhong, G. Luo, Y. Mao, K. Sun, W. Feng, W. Xu, S. Cao, K. Zeng, et al. Mooccubex: A large knowledge-centered repository for adaptive learning in moocs. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 4643–4652, 2021.
- R. Zhong, K. Lee, Z. Zhang, and D. Klein. Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections. arXiv preprint arXiv:2104.04670, 2021.
© 2022 Copyright is held by the author(s). This work is distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license.
Austrostipa, from the Latin 'auster' meaning south and the genus Stipa, referring to the genus being allied to Stipa but restricted to Australia. Eremophila, from the Greek 'eremos' meaning desert and 'phileo' meaning to love, referring to the Sandy Desert where the type specimen was collected, though the species is found in woodland, shrubland and mallee from Perth in Western Australia to Melbourne.
Distribution and status
Found in the southern part of South Australia, from the Nullarbor to the lower South-east in South Australia, growing in mallee, grassland, shrubland, open forest and woodland on sand, loam or clay. Also found in Western Australia, New South Wales and Victoria. Native. Common in South Australia. Common in the other states.
Herbarium regions: North Western, Nullarbor, Gairdner-Torrens, Flinders Ranges, Eastern, Eyre Peninsula, Northern Lofty, Murray, Yorke Peninsula, Southern Lofty, South Eastern, Green Adelaide
NRM regions: Adelaide and Mount Lofty Ranges, Alinytjara Wilurara, Eyre Peninsula, Northern and Yorke, South Australian Arid Lands, South Australian Murray-Darling Basin, South East
AVH map: SA distribution map (external link)
Plant description
Tufted perennial grass to 1 m high, with culms unbranched and nodes densely silky (sericeus). Leaves glabrous to finely pubescent with blade mostly tightly inrolled to 30 cm long and 4 mm wide when unrolled. Inflorescence a dense panicle at first, then becoming lax and sparse to 30 cm long, with purplish glumes to 23 mm long, the lower glume to 25 mm long. Flowering between August and November.
Key to this species: awn twice bent with coma; panicle contracted with short open branches; glumes narrow, straight; callus long, fine, straight; lemma with dense shining rufous hairs and a conspicuous sparse short-haired patch at the apex (appearing shaven). Fruits are a burgundy-orange lemma to 9 mm long, evenly tapered to the base and covered in dense shining orange hairs with a conspicuous bald patch below the apex, which is either glabrous or sparsely covered by much shorter hairs; coma with hairs of similar length to those on the lemma; callus long, fine and straight, to 4 mm long; awn twice bent, to 10 cm long, with column pubescent with hairs to 0.4 mm; palea about equal to the lemma, with dense silky hairs along the centre line. Seeds are a yellow-brown narrow-ovoid grain to 5 mm long within the lemma. Seed embryo type is lateral.
Seed collection and propagation
Collect seeds between October and January. Use your hands to gently strip the seeds (lemma) off the mature fruiting spike, i.e. those that are turning burgundy-orange. Mature seeds will come off easily compared with the immature seeds, which remain on the spike. Alternatively, you can break off the whole fruiting spike to allow some of the seeds to mature further. Place the seeds/spikes in a tray and leave to dry for two weeks. No further cleaning is required if only seeds were collected; if seed spikes were collected, use your hands to strip off the mature seeds. Store the seeds with a desiccant, such as dried silica beads or dry rice, in an airtight container in a cool, dry place. The viability of grass seeds can be highly variable, depending on the time of seed collection and seasonal conditions.
Pre-AP English Course Perspective
Among the most popular AP courses, AP English Literature challenges students to read and interpret a wide range of imaginative works. The AP and Pre-AP courses invite students to explore a variety of genres and literary periods and to write clearly about the literature they encounter. On a daily basis, it asks them to read critically, think clearly, and write concisely. By the end of the course, students have cultivated a rich understanding of literary works and acquired a set of analytical skills they will use throughout their lives.
A Focus on Rhetoric
What makes Pre-AP English different from other high school English courses is its additional focus on rhetoric. In promoting writing in many contexts for a variety of purposes, the Pre-AP English course is the place where nonfiction texts and contexts take on an increased role in the curriculum. Here students think deeply about language as a persuasive tool and about the dynamic relationship of writer, context, audience, and argument.
Reading and Writing from a Different Perspective
Pre-AP students need to adjust their perspective and build on their critical-thinking skills and techniques when they take on the course. When we talk about familiar techniques of diction, syntax, imagery, and tone, we need to help students see how persuasive writers marshal these devices in the service of argument. When we talk about audience, we need to get students thinking about particular audiences and specific contexts for writing, rather than presuming a general audience as we usually do for literature.
This “finding of the argument” and “making of their own arguments” is often new for students, so the Pre-AP English course is designed to allow them time for reading, thinking, and writing. Reading time allows them to begin to recognize the various shapes and parts of an argument. Thinking time helps them explore issues, think about logical reasoning, and begin to understand appeals and rhetorical modes. Writing time provides them with the opportunity to work through the process of creating an argument.
There is neither a required reading list nor a required textbook for AP/Pre-AP English. Teachers are encouraged to select works of literary merit culled from a variety of genres and periods from the 16th century to the present. While students should be exposed to a variety of works, it is also important to ensure they get to know several works of literary merit in depth; this usually begins with roughly 5 pieces in Grade 9 and expands to 7 for the Grade 12 AP course (including summer reading list expectations).
Students will also devote a substantial portion of the class to poetry; not only can it be wonderfully rewarding to both teacher and students, but it can also be very useful test preparation: nearly half of the AP Exam includes questions about verse.
Who Should Take AP Literature, and Why?
It is important to recognize the power of an AP English class to challenge a wide range of students; however, the most important skill set necessary for Pre-AP English success is strong motivation and the desire to work hard, as once a skill has been taught students are expected to apply it independently. In addition, with an augmented reading list, Pre-AP English students must be individually motivated to read and must not require coaxing from the teacher or parents to do so. Any reluctance by students to complete the required readings on the structured timelines will result in them quickly falling behind.
All students who want to strengthen their analytical thinking, reading, and writing skills belong in Pre-AP English.
The AP French Language and Culture course emphasizes communication (understanding and being understood by others) by applying interpersonal, interpretive, and presentational skills in real-life situations. This includes vocabulary usage, language control, communication strategies, and cultural awareness. The AP French Language and Culture course strives not to overemphasize grammatical accuracy at the expense of communication. To best facilitate the study of language and culture, the course is taught almost exclusively in French.
The AP French Language and Culture course engages students in an exploration of culture in both contemporary and historical contexts. The course develops students’ awareness and appreciation of cultural products (e.g., tools, books, music, laws, conventions, institutions); practices (patterns of social interactions within a culture); and perspectives (values, attitudes, and assumptions).
What makes this course interesting?
- Learn about contemporary Francophone societies and cultures by examining their products, practices and perspectives through thematic study
- Use authentic sources such as newspaper and magazine articles, websites, films, music, video clips, blogs, podcasts, stories, and literary excerpts in French to develop language skills and communicative proficiency in real life settings
- Build communication skills through regular class discussion, one-on-one conversation, collaboration with classmates, role plays, email responses, essay and journal writing, and oral presentations
- Develop your French language proficiency through the exploration of a variety of interdisciplinary themes that tie closely to French culture.
What It Takes to Take AP
You’re already using the skills it takes to succeed; AP challenges you to take them to new levels.
Students looking to enrol in Advanced Placement Geography or History need to have an interest in learning, enjoy solving problems, and be looking for a more challenging learning environment. The AP student does not have to be at the top of their class but needs to have good work habits and the desire to be successful.
Work Towards University Success and Stand Out to Admissions
Our students have scored above the global, national, and provincial average and 97% of our students have been successful on their AP Human Geography and World History Exams.
A number of Assumption students have used the Advanced Placement pathway to earn university credit while enrolled in high school, many as Grade 11 students. Even though our students are successful, that is not the only reason to consider the AP pathway: it helps students prepare for a university environment and lets elite universities know you are prepared to succeed when you enter their institution.
What is Advanced Placement?
Advanced Placement (AP) is a program created by the College Board, which offers college/university-level curriculum to high school students. Colleges and universities may grant course credit to students who obtain high scores on the AP examinations, depending on individual school/program requirements; however, the AP exams themselves are optional and are not required to obtain the AP course credit.
AP courses are taught by highly qualified and/or trained teachers who use the AP course descriptions to guide them. The course descriptions outline the course content, describe the curricular goals of the subject, and provide sample exam questions. While the course descriptions are a significant source of information about the course content on which the AP exams will be based, AP teachers have the flexibility to determine how this content is presented.
What is Pre-AP?
Pre-AP exists in order to ensure that all students are provided with the requirements necessary to fulfill the AP curriculum but also the Ontario curriculum. Grades 9 through 11 are considered Pre-AP years in preparation for grade 12, which is the AP year for each subject.
Pre-AP aims to prepare every student for higher intellectual engagement by starting the development of skills and acquisition of knowledge as early as possible. It provides an opportunity to help all students acquire the knowledge, concepts, and skills needed to engage in a higher level of learning by consistently challenging students to expand their knowledge and skills to the next level. | https://secondary.hcdsb.org/assumption/advanced-placement-programs/ |
The first half in Manhattan was wild. Trickeration, field goals, and a disqualification are the takeaways from the first two quarters as the Wildcats lead the fifth-ranked Sooners, 24-23.
The Sooners struck first with a season-long 44-yard field goal from Gabe Brkic. The first-year kicker went on to end the half with an even longer 50-yard kick, finishing the half 3/3 and remaining perfect on the year.
Kansas State has seemingly had its way rushing the ball against the Sooner defense. The Wildcats ended the first half with only 105 rushing yards, but it felt like many more. K-State finished the half with 204 total yards.
Trickeration was the name of the game. OU's first two trick plays were successful, both ending in big gains. The third resulted in an interception and a subsequent touchdown for the Wildcats.
Jalen Hurts was very effective with both his arm and his legs against the K-State defense. Hurts ended the half with 204 passing yards and 65 rushing yards, leading the game in both categories.
The Oklahoma defense failed for the 13th quarter in a row to force a turnover. If the Wildcats continue to eat away at the clock, this game will come down to who can force a turnover and get a stop in the fourth quarter.
Parnell Motley was also disqualified for kicking an opposing player. The true freshman Jaden Davis will play the remainder of the game; Oklahoma came in with only three scholarship cornerbacks on the roster.
The Sooners need to make some adjustments and dominate the line of scrimmage on defense if they want to win going away. If they fail to do so, the Wildcats could walk away with their fifth victory of the year.
Common urological problems include urinary tract infections, benign prostatic hyperplasia, incontinence and kidney stones, according to Healthline. Benign prostatic hyperplasia affects men, while urinary tract infections, incontinence and kidney stones occur in both men and women.
Urinary tract infections, or UTIs, develop as a result of bacteria, yeast or viruses entering the urinary tract and causing infections, explains Healthline. They occur more frequently in women than in men. Symptoms include painful urination and an urge to urinate frequently. Treatment with antibiotics resolves UTIs within several days.
Benign prostatic hyperplasia, or BPH, is an enlargement of the prostate that puts pressure on the urethra, the tube through which urine exits the body, notes Healthline. Common in older men, symptoms include urinary frequency and weak urine stream. In some cases, treatment isn't necessary; in other cases, alpha blockers or surgery rectify the situation.
Incontinence is the inability to control the bladder, and it has numerous causes, states Healthline. Pregnancy and childbirth, diabetes, enlarged prostate, UTIs and diseases such as Parkinson's and multiple sclerosis are a few possible causes of incontinence. Limiting fluid intake may help control incontinence, but surgery may provide a solution in some cases.
Crystals in the urine can develop into stones in the kidneys, according to Healthline. As they move out of the kidneys into the ureters, they cause excruciating pain and potentially block urine flow. Some people pass these stones without assistance, but doctors must surgically remove others. Shock wave lithotripsy, in which sound waves work to break up the stones so that they pass from the body more easily, is a common treatment for kidney stones. | https://www.reference.com/health/common-urological-problems-be7d12cb78b5458c |
BIO Web Conf.
Volume 45, 2022: 68th Scientific Conference with International Participation “FOOD SCIENCE, ENGINEERING AND TECHNOLOGY – 2021”
Article Number: 01001
Number of page(s): 5
Section: Food Science and Technology
DOI: https://doi.org/10.1051/bioconf/20224501001
Published online: 04 February 2022
Composition of Kashkaval cheese manufactured from different levels of somatic cell counts in sheep milk
1 Department of Milk and Dairy Products Technology, Technological Faculty, University of Food Technologies, Plovdiv, Bulgaria
2 Department of Analytical Chemistry, Technological Faculty, University of Food Technologies, Plovdiv, Bulgaria
* Corresponding author: [email protected]
The purpose of the present study was to investigate the influence of somatic cell count (SCC) on the composition of Kashkaval cheese. Kashkaval cheese samples were produced from three different batches of sheep milk with low (610 000 cells/ml), medium (770 000 cells/ml), and high (1 310 000 cells/ml) SCC, respectively. The main chemical parameters, such as pH, titratable acidity, moisture content, fat content in the dry matter, protein content, and sodium chloride content, as well as microbiological parameters (lactic acid bacteria count, pathogenic microorganisms, coliforms, psychrotrophic bacteria, yeasts and molds), were studied during the ripening and storage periods. No statistically significant (P < 0.05) changes were found in the values of the chemical parameters during the ripening period. At the beginning of ripening, the total lactic acid bacteria count for all cheese samples was about 4.1 log cfu/g, then increased to 6.2 log cfu/g (at 60 days of ripening) for the test samples. The data collected in this study showed a slight decrease in pH values and a gradual increase in the titratable acidity, indicating retarded fermentation during storage at low temperature. The lactic acid bacteria showed good survival, but higher sensitivity was observed in Lactobacillus spp. in comparison with Streptococcus spp.
© The Authors, published by EDP Sciences, 2022
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
It’s your last chance to catch “Ralston Crawford and Jazz” at NOMA. On Friday, Oct. 12, NOMA’s “Where Y’Art?” events include book signings by John McCusker of “Creole Trombone: Kid Ory and the Early Years of Jazz,” music by Eileina Williams with Todd Duke, gallery talks and movie screenings, all from 5-9pm at the New Orleans Museum of Art. Ralston Crawford runs through Sunday, Oct. 14.
I’m going to stay away from any kind of review or criticism of HBO’s “Treme” and there is plenty of that, good and terrible, to go around. I’ll only say the show is deserving of both for different reasons and I’m still watching it.
“Treme” has featured a lot of music and culture that many natives may not be familiar with. Being middle-class suburbanites, the music and culture featured in “Treme” came from neighborhoods we neither lived in nor went to school in. (I grew up in the Aurora area of Algiers and went to school Uptown, for example.) We knew it existed but rarely participated, other than catching a passing parade, and we never saw a Mardi Gras Indian unless it was at Jazz Fest.
The brass and jazz bands and Mardi Gras Indians featured in the “Treme” story lines arose out of neighborhoods like the 7th Ward and 9th Ward as well as the Faubourg Tremé. Documenting the musicians and artists, and the people enjoying their work, was Ralston Crawford, who worked in New Orleans during the 1950s, ’60s and ’70s. If you’re a frequent watcher of the show, or simply a fan of New Orleans’ “back of town” culture, you’ll be fascinated by Crawford’s photographs. NOMA’s exhibit consists mainly of photos, paintings and drawings from the Sheldon Art Galleries in St. Louis.
Crawford drew and made some lithographs, which are stark compared to his photographs that, for the most part, are populated by musicians and spectators. He was drawn to the city’s cemeteries (and was buried in St. Louis No. 3 in 1978), as is apparent in some of the photos, but his drawings, lithographs and paintings reflect the compositional elements of his subjects, which is also apparent in some of the non-portrait photos.
I always like trying to get into an artist’s mind and figure out how he might have arrived at an image. With Crawford, it’s not hard to see how he was grabbed by shapes and then compelled to recreate them. Take the photo above and the painting below.
He did similar things with street scenes and other cemetery scenes.
His documentation of the African-American music scene in New Orleans is extensive and should be viewed by any fan of New Orleans music, no matter the era. Venues like the Dew Drop Inn were hot in their heyday but are long gone. If nothing else ever comes out of HBO’s “Treme,” it will have at least given a boost to the musicians (Kermit Ruffins, Trombone Shorty, Irvin Mayfield, Dr. Michael White and so many more) who follow in the footsteps of Crawford’s subjects. Part of the work in the NOMA exhibit is photographs from the Tulane archives. You can read more about Crawford and browse Tulane’s collection of his photos here.
For all these pictures and many, many more, I invite you to visit NOMA this weekend and get yourself lost in the world that made “Treme.”
7” Sedona Pot with Lemon Plant AR1288
We currently have 0 in stock.
Overall Dimensions:
H: 16” x W: 13” x D: 11”
*Due to inherent qualities of a natural wood product, each piece has its own unique characteristics and slight variations in size, shape and color.
**Arrangements are assembled when ordered and may vary from photos. If any products are out of stock, some plants and filler items will be replaced with similar variations. Dimensions can differ slightly. | https://shop.replicadecor.com/products/7-sedona-pot-with-lemon-plant-ar1288 |
Angelina is a beautiful woman and person, thank you for this.
-
THANKS. As a man who grew up middle class in Africa, my parents never taught me how to cook; I just watched them from afar. I have been living on canned food for years in the USA. After watching this video, I am gonna try it out so I can make some of those flavors I crave so much but haven't tasted in almost a decade.
-
Soup stew
-
so so good… I use herring & cod but the same ingredients. Yummm!
-
May I suggest that you add some greens (onion, cilantro…) and a touch of black pepper on top of the dish to make your presentation more appealing?
-
can you cook the blended (tomato, onion & pepper) stew with/without water instead of drowning it with oil and get the same taste?
-
there was no seasoning!
-
U R Beautiful))
-
What happened with all the seasoning for the stew? Didn't u use any? I'm not sure about the taste, although it looks lovely.
-
Very healthy
-
nice cooking i will try that
-
YOU DIDN'T EVEN LET US KNOW THE QUANTITIES OF WHAT U USED
-
Oh My God !!! Looks Amazing !!!
-
WOW i didnt know african women were this beautiful !!!
-
Is the skin needed to be kept on the fish or can it be removed prior to cooking? Thank you for the video.
-
Thank you for this.
-
Thanks for the healthy alternative. One ingredient you listed with the vegetables, I'm not understanding the name (something ________bonnet???) I made this version, wasn't nearly as flavorful as I would like. Will try again. Thanks for sharing.
-
Hi, the water is added in the blender when the vegetables are pulsed. The chicken stock, which contains water, is enough for this type of stew. You can, however, add water if you prefer the stew loose. This depends on what you want to eat it with: you can have it loose if mixing with okro to eat powdered yam, but not with rice.
-
i think i prefer this to other ones i have seen… but i really want to know why you didn't add at least a little water to the stew to make it lose its oily state. i notice a lot of people don't do that.
-
The fish is not fried so as to retain all its vitamins. Just clean it with lemon and season with all-purpose seasoning.
This meal is a family favorite and one of Chris’s top meals. It’s full of tomatoes, garlic, seasoning and flavor. I have to admit, I no longer follow the recipe, and even when I did, I did so loosely. Most of the time, I’m cooking by smell, which has to be flavorful, and I heart garlic – big time! It’s not uncommon for me to double, possibly triple, the quantity of garlic and then add an extra clove or two for good measure. Also, through the years, I’ve definitely upped the amount of tomatoes, using as many as 18 Roma tomatoes and adjusting the remaining ingredients accordingly. More of one ingredient means more of another… you get the idea. By all means, adjust to your taste and preferences.
If I don’t have fresh basil, I’ve used the Litehouse basil before, and it works just as well. In this case, I had fresh basil and used it, but I wanted a more flavorful taste, so I also added a liberal amount of oregano. Incorporate the chicken and heat thoroughly.
Bring a large pot of salted water to a boil, cook the pasta according to package directions, and drain. Add the desired amount of pasta to a plate, top with the tomato and chicken mixture, and finish with Parmesan cheese.
Yothasamut, Jomkwan (2016) Understanding actors and factors that influence the development of obesity among pre-schoolers in the Bangkok metropolitan area. Doctoral thesis, University of East Anglia.
Abstract
In Thailand, obesity in children aged 0-6 increased by 40% between 2004 and 2009, and urban residence was a significant risk factor. My study investigates actors and factors that influence obesity among pre-schoolers in the Bangkok Metropolitan Area. I use ecological system theory as a framework and employ a qualitative inductive approach to data collection to capture the temporal and spatial specificity of consumption. Pre-schoolers and caregivers and their interactions around food are my focus; however, I also investigate policy and interventions at national and institutional levels that influence childhood obesity. As part of a nested case study design, I select three kindergartens used by families of varying socio-economic status, and the homes of 18 pre-schoolers attending these kindergartens. My main method is participant observation and I use formal and informal interviews to gain more understanding of the data derived from the observation.
I identify three domains of food consumption: main meals, milk, and snacks, which are also the focus of Thai government policy addressing undernutrition. The effectiveness of these policies is affected by caregivers’ values concerning child-rearing and feeding, and by children’s characteristics and agency, expressed through their negotiation of their food choices with adults. While the lifestyles of employed parents and the obesogenic environment of the metropolitan area contribute to adults’ decisions concerning children’s food, socio-economic status is found to be a minor influence. More important are the values that adults hold, shaped by campaigns from the government and the private sector; e.g. buying expensive fortified milk because milk is good for children. My thesis shows how this combination of social and economic factors leads to the consumption of food high in sugar and calories. | https://ueaeprints.uea.ac.uk/id/eprint/62943/ |
Ship rats inhabit a diverse range of habitats from coastal areas up to the tree line, but are most abundant in lowland podocarp-broadleaved forest. They can also be found in urban habitats.
Ship rats are omnivorous generalists whose main animal food is arthropods (particularly weta), but owing to their high density compared with other predators, the loss of eggs, chicks and adult birds (e.g. robins, tits and female mohua) to ship rats is significant for species of conservation concern. Fruit is taken particularly from Coprosma spp., hinau, karaka, kiekie, kohia, miro, matai, nikau, rimu, pate and supplejack; where these species are absent, ship rats tend to prey more on invertebrates (especially in winter).
They are agile climbers and average swimmers (around 300 m).
Ship rats have a sleek and slender body, a dark grey tail, thin hairless ears and large eyes. Their coat may vary between a black back and grey belly (‘rattus’ type), a grey-brown back and grey belly (‘alexandrinus’ type), or a grey-brown back and white belly (‘frugivorus’ type).
Ship rats may be mistaken for the other two rat species in NZ: the Norway rat (Rattus norvegicus), the largest rat species in New Zealand, and kiore (Rattus exulans).
Collection of seeds may alter the regeneration of these plant species, though ship rats may also enhance the dispersal of others. Predation on invertebrates and on the eggs and young of birds may also have secondary effects on the vegetation through changes in ecosystem processes.
Widespread over a diverse range of habitats on both the North and South Island. Ship rats also occur on many offshore islands.
Weight: 120-160g; max. head to body length: 225mm
Approximately 1830 - 1850
Accidentally
First introduced to New Zealand between 1830 and 1850, when they were the dominant rat species aboard sailing ships; however, a few individuals may have got ashore earlier. For unknown reasons they did not spread on either of the two main islands until after the late 19th century. It is also uncertain whether the decline of kiore led to the widespread distribution of ship rats seen today.
An Iseco Brand
Foston Europe is the international brand of Iseco Sistemas. Iseco Sistemas S.L. is a Spanish company founded in 1998. Since its founding, the company has specialized in offering technology services with great added value to its clients and users in the Information and Communications Technology (ICT) market. Iseco Sistemas offers integrated technical solutions in nurse call systems for the healthcare environment and is the leader in the Spanish domestic market.
For its internationalization, Iseco Sistemas operates under the name Foston Europe. Foston Europe has the mission to develop solid partnerships in Northern, Western and Central Europe. The goal is to find partners in order to bundle expertise and arrive at new solutions for the long-term care environment.
Mission
Foston Europe strives to improve the healthcare environment for residents, caregivers and management by providing technological solutions specially adjusted to meet unique healthcare needs. Everyone has the right to live and work in a safe environment with autonomy, privacy and dignity. The company therefore believes that its technology will improve people's lives.
Vision
The ambition is to develop an international network in the healthcare and technology market. Foston Europe strives to develop solid international partnerships with organisations that share the same passion. Together we will bundle our expertise and unify worldwide technologies to best serve the healthcare and technology market. | http://www.foston.eu/foston-europe/ |
The recent development of depth sensing devices offers a convenient and flexible way to acquire depth scans of an object or a scene that represent their partial shapes. In practice, we need to register these scans into a common coordinate system to better understand the object's or scene's geometry, or compare known object models with these scans for 3D object recognition. All these applications require solving the partial shape matching problem [3, 4].
Depth scans (i.e., 3D point clouds) lack topology information of the shape and usually contain noise, holes, and/or varying point density. To facilitate partial shape matching, one common way is to convert the point cloud into a mesh to remove the noise and fill the holes, and then perform shape matching on the mesh instead [5, 6, 7, 8]. Although this conversion simplifies the matching process, it brings several drawbacks. First, original partial shape could be modified and/or downsampled by the conversion, e.g., when smoothing the depth scan for denoising. Second, the mesh topology generated by the conversion could be different from the real one such as incorrectly filled holes, misleading the shape matching.
Therefore, other researchers seek to perform shape matching directly on the point cloud data. This is generally achieved by representing and matching the scans using local shape descriptors. Although existing descriptors [9, 10, 11, 12] work well on clean depth scans, they have difficulties dealing with original scans acquired under various conditions such as occlusion, clutter, and varying lighting. This is because these descriptors are sensitive to noise and/or varying point density due to their encoded shape features such as point density [9, 10] and surface normals, or are sensitive to scan boundary and holes due to their descriptor comparison schemes based on vector distance [11, 12].
To address above limitations, we propose a Signature of Geometric Centroids (SGC) descriptor for partial shape matching with three novel components:
-
A Robust Descriptor. We construct the SGC descriptor by voxelizing the local shape within a uniquely defined local reference frame (LRF) and concatenating the geometric centroid and point density features extracted from each non-empty voxel. Thanks to the extracted shape features, our descriptor is robust against noise and varying point density.
-
A Descriptor Comparison Scheme. Rather than simply computing the Euclidean distance between two descriptors, we compute a similarity score based on comparing the extracted features from corresponding voxels that are both non-empty. By this, the comparison scheme supports shape matching between local shapes that are incomplete.
-
Descriptor Saliency for Shape Matching. Different from keypoint detection, which identifies distinct points locally on a single scan/model, we propose descriptor saliency to measure the distinctiveness of SGC descriptors across all input scans and compute it from a descriptor-graph. Guided by the descriptor saliency, we improve shape matching performance by intentionally selecting distinct descriptors to find corresponding feature points.
We evaluate the robustness of SGC against various nuisances including scan noise, varying point density, distance to scan boundary, and occlusion, and the effectiveness of using SGC and descriptor saliency for partial shape matching. Experimental results show that SGC outperforms three state-of-the-art descriptors (i.e., spin image, 3D shape context, and signature of histograms of orientations (SHOT)) on publicly available datasets. We further apply SGC to two typical applications of partial shape matching, i.e., object/scene reconstruction and 3D object recognition, to demonstrate its usefulness in practice.
2 Related Work
Shape Matching. Shape matching aims at finding correspondences between complete or partial models by comparing their geometries. Many shape matching approaches apply global shape descriptors to characterize the whole shape, for example, using Reeb graphs or skeleton graphs for articulated objects and shape distributions for rigid objects. However, depth scans acquired from a single view usually have significant missing data. Matching these partial shapes is a difficult task because, before computing the correspondences of the shapes, we first need to find the common portions among them. This requires a careful design of local shape descriptors that are less sensitive to occlusion.
Local Shape Descriptors.
Local shape descriptors can be classified as low- and high-dimensional, according to the richness of the encoded local shape information. Low-dimensional descriptors, such as surface curvature and surface hashes, are easy to compute, store, and compare, yet have limited descriptive ability. Compared with them, high-dimensional descriptors provide a fairly detailed description of the local shape around a surface point. We classify high-dimensional descriptors into three classes according to their attached LRF.
Descriptors without an LRF. Early local shape descriptors are generated by directly accumulating some geometric attributes into a histogram, without building an LRF. Hetzel et al. represented local shape patches by encoding three local shape features (i.e., pixel depth, surface normals, and curvatures) into a multi-dimensional histogram. Yamany et al. described the local shape around a feature point by generating a signature image that captures surface curvatures seen from that point. Kokkinos et al. generated an intrinsic shape context descriptor by shooting geodesics outwards from a keypoint to chart the local surface and creating a 2D histogram of features defined on the chart.
In the absence of an LRF, the correspondence built by matching the descriptors is limited to the point's spatial position only. Thus, to match two scans by estimating a rigid transform, at least three pairs of corresponding points need to be found, making the search space of corresponding points large.
Descriptors with a non-unique LRF. Researchers later attached an LRF for local shape descriptors to enrich the correspondence with spatial orientation. By this, two scans can be matched by finding a single pair of corresponding points using the descriptors and estimating the transform based on aligning associated LRFs. However, since the attached LRF is not unique, a further disambiguation process is required for the generated transform.
Johnson et al. proposed a spin image descriptor by spinning a 2D image about the normal of a feature point and summing up the number of points that fall into the bins of that image. Frome et al. proposed a 3D shape context (3DSC) descriptor by generating a 3D histogram of accumulated points within a partitioned spherical volume centered at a feature point and aligned with the feature normal. Mian et al. proposed a 3D tensor descriptor by constructing an LRF from a pair of oriented points and encoding the intersected surface area into a multidimensional table. Zhong proposed intrinsic shape signatures, improving on this idea with a different partitioning of the 3D spherical volume and a new definition of an LRF with ambiguity.
Descriptors with a unique LRF. Recently, researchers constructed a unique LRF from the local shape around a feature point and further describe the local shape relative to the LRF. Thanks to the unique LRF, the transform to match two scans can be uniquely defined based on aligning corresponding LRFs.
Tombari et al. proposed a SHOT descriptor by concatenating local histograms of surface normals defined on each bin of a partitioned spherical volume aligned with a unique LRF. Guo et al. constructed a RoPS descriptor by rotationally projecting the neighboring points of a feature point onto 2D planes and calculating a set of statistics within a unique LRF. Guo et al. later generated three signatures representing the point distribution in three cylindrical coordinate systems, and concatenated and compressed these signatures into a Tri-Spin-Image descriptor. Song and Chen developed a local voxelizer descriptor by voxelizing local shape within a unique LRF and concatenating an intersected surface area feature in each voxel, and applied it to surface registration.
SGC is also constructed within a unique LRF. Compared with the above descriptors, the geometric centroid feature that we extract for constructing the descriptor is more robust against noise and varying point density. Moreover, our descriptor comparison scheme supports matching local shape that is close to the scan boundary. By this, SGC is more robust for shape matching on point cloud data than state-of-the-art descriptors [9, 10, 11]; see Section 5 for the comparisons.
3 Signature of Geometric Centroids Descriptor
This section presents the method to construct an SGC descriptor for the local shape (i.e., support) around a feature point p, a scheme to compare a pair of SGC descriptors, and the parameters tuned for generating SGC descriptors.
3.1 LRF Construction
Given a feature point p on a scan and a radius R, a local support S is defined by intersecting the scan with a sphere centered at p with radius R. Taking this support as input, we construct a unique LRF based on principal component analysis (PCA) of the support, see Figure 1(a). When the normal n of p is available, we further improve the disambiguation of the LRF axes by enforcing the principal axis associated with the smallest eigenvalue (i.e., the blue axis in Figure 1(a)) to be consistent with the normal n.
3.2 SGC Construction
Given the unique LRF, a general way to construct a descriptor is to partition a support into bins, extract shape features from each bin, and concatenate the values representing the shape features into a descriptor vector (or a histogram).
Partition the Support. Given a support S around a feature point p, there are three typical approaches to partition S into small local patches. The first one is to partition the bounding spherical volume of S into grids evenly or logarithmically along the azimuth, elevation and radial dimensions. The second one is to partition the angular space of the spherical volume into relatively homogeneously distributed bins. However, the bins generated by these two approaches have varying sizes, which need to be compensated for when constructing a descriptor. In addition, the irregular shape of these bins complicates the segmentation of the local shape within each bin for extracting local shape features.
The third approach is to construct a bounding cubical volume of S that is aligned with the LRF and to partition the cubical volume into regular bins (i.e., voxels). These regular bins simplify the extraction of local shape features and thus the descriptor construction. Therefore, we employ the third approach to partition S for constructing the SGC descriptor, see Figure 1(b&c). Note that the edge length of the cubical volume is determined by the support radius R so that the cube bounds the spherical support.
Extract Bin Features. Due to the missing topology information, point clouds offer limited types of shape features that can be extracted, e.g., the surface normal feature in SHOT and the point density feature in 3DSC. This paper proposes extracting a geometric centroid feature from each non-empty voxel for constructing SGC, for the following reasons. First, the centroid is an integral feature and thus more robust against noise and varying point density. Second, the centroid can be computed simply by averaging the positions of all points staying within a voxel. Note that we are not aware of any existing work that employs centroid features for constructing a usable descriptor.
Construct the Descriptor. We divide the cubical volume evenly into voxels of the same size, see Figure 1(c). For each voxel V_i, we identify all points staying within the voxel and then calculate their centroid (c_x, c_y, c_z). Note that the position of the centroid is relative to the minimum corner of V_i in the LRF. We save the extracted feature as (c_x, c_y, c_z, n), with n the number of points in the voxel, for non-empty voxels, and as (0, 0, 0, 0) for empty ones. An SGC descriptor is generated by concatenating these values over all voxels; the dimension of an SGC descriptor saved in this way is four times the number of voxels.
Thanks to the unique LRF, the three positional values (c_x, c_y, c_z) of a voxel's centroid can be compressed into a single value relative to the voxel's edge length l, since each coordinate lies within [0, l]. By this, we compress the dimension of the descriptor, saving 50% storage space.
3.3 Comparing SGC Descriptors
Ideally, SGC descriptors generated for two corresponding points in different scans should be exactly the same. However, due to variance of sampling, noise, and occlusion, the two descriptors usually differ to a certain extent. Unlike existing approaches that compare descriptors by computing their Euclidean distance [11, 7, 8], we develop a new scheme for comparing two SGC descriptors.
When constructing an SGC descriptor, most of the voxels are likely to be empty (see again Figure 1(c)). We classify each pair of corresponding voxels into three cases: 1) empty voxel vs empty voxel; 2) non-empty voxel vs empty voxel; and 3) non-empty voxel vs non-empty voxel. Among these, only case 3 should contribute to the similarity score between two descriptors. Thus, to compare two SGC descriptors quantitatively, we accumulate a similarity score over every pair of corresponding voxels that are both non-empty.
In detail, we denote two SGC descriptors as F and G. The similarity between the i-th voxel of F, denoted F_i, and the i-th voxel of G, denoted G_i, is defined as:
s(F_i, G_i) = 0 if n_i^F = 0 or n_i^G = 0; otherwise s(F_i, G_i) grows with the point counts n_i^F and n_i^G and with the closeness of the centroids c_i^F and c_i^G.   (1)
Here n_i^F and n_i^G represent the number of points in F_i and G_i respectively, while c_i^F and c_i^G represent the centroids of F_i and G_i respectively. We directly employ the number of points in each voxel to represent its point density, as all voxels have the same size. The formula can be explained as follows. Whenever F_i and/or G_i are empty (i.e., n_i^F = 0 or n_i^G = 0), s(F_i, G_i) = 0. Otherwise, when two corresponding voxels contain similar local shape, their centroids should be close to each other, making s(F_i, G_i) large. When n_i^F and/or n_i^G are large, s(F_i, G_i) is large as well, since the estimated centroid(s) are more accurate. By this, the formula encourages finding matches based on the denser parts of the input scans when the scans are irregularly sampled.
The overall similarity score between F and G is obtained by accumulating the similarity value over every pair of corresponding voxels:
S(F, G) = Σ_i s(F_i, G_i)   (2)
3.4 SGC Generation Parameters
The SGC descriptor has two generation parameters: (i) the support radius R; and (ii) the voxel grid resolution. According to our experiments, we choose the support radius, expressed in multiples of the point cloud resolution pr (i.e., the average shortest distance among neighboring points in the scan), as a tradeoff between descriptiveness and sensitivity to occlusion. We choose the voxel grid resolution as a tradeoff between descriptiveness and efficiency, since a larger resolution increases the descriptiveness and the computational cost simultaneously. Note that in these experiments, we let the LRF and the descriptor have the same support radius, i.e., R_LRF = R_descriptor.
4 Partial Shape Matching using SGC
In this section, we describe the general pipeline to match two scans using SGC descriptors and propose a descriptor saliency measure for improving shape matching performance. We also highlight the advantage of using SGC descriptors for matching supports that are close to scan boundary.
4.1 General Shape Matching Pipeline
Given a data scan and a reference scan, the goal of shape matching is to find a rigid transform that aligns the data scan with the reference scan. By employing the SGC descriptors, we can find such a transform with the following steps:
1) Represent Scans with SGC Descriptors. We first conduct a uniform sampling on each scan to generate feature points that cover the whole scan surface. Next, for each feature point p, we construct the LRF and the SGC descriptor for the support around p. By this, we represent each scan with a set of descriptor vectors and their corresponding LRFs, see Figure 2(a&b).
2) Generate Transform Candidates. When a point on the data scan corresponds to a point on the reference scan, their associated SGC descriptors should be similar to each other. Hence, we compare each feature descriptor of the data scan with each feature descriptor of the reference scan by calculating a similarity score using Eq. 2. A feature point on the data scan and its closest feature point on the reference scan are considered a match if the similarity score is higher than a threshold. Each match generates a rigid transform candidate (i.e., a transformation matrix) by aligning the associated LRFs.
3) Select the Optimal Transform. By matching the descriptors of the two scans, we obtain a number of candidate transforms. We sort these transforms based on the descriptor similarity score and pick the top five candidates with the highest scores. We apply each of the five selected transforms on the data scan to align it with the reference scan, and evaluate each transform by computing a scan overlap ratio (see the sketch after these steps). We first find all point-to-point correspondences by checking whether the distance between a point on the transformed data scan and a point on the reference scan is sufficiently small, and then compute the overlap ratio as the number of corresponding points divided by the total number of points in the smaller scan. We select the transform with the largest overlap ratio as the optimal one, see Figure 2(c&d).
4) Refine the Scan Alignment. Optionally, we can apply iterative closest point (ICP) to refine the alignment generated by the selected optimal transform, see Figure 2(e). Comparing Figure 2(d&e) shows that the transform calculated by aligning LRFs is very close to the one refined using ICP.
4.2 Improve Shape Matching using Descriptor Saliency
To ensure that corresponding points can be found on different scans, we need to sample a large number of feature points on each scan. However, among the descriptors on a single scan, there may exist some descriptors close to one another, since their corresponding supports are similar, see Figure 3(a). Moreover, among the descriptors from all input scans, there may exist an even larger number of descriptors with high similarities, see Figure 3(a-c).
Our observation is that when there exists a large number of descriptors with high similarities, their corresponding supports are less distinctive (e.g., flat or spherical shapes), see the zooming views in Figure 3(a). Thus, there is a lower chance of matching the scans correctly by using such supports and their descriptors. On the other hand, when a descriptor is quite different from the others, its support is distinctive (see the top zooming views in Figure 3(b&c)).
Inspired by this observation, we propose a measure of descriptor saliency to improve the shape matching performance and compute it based on a descriptor-graph. The key idea is to find descriptors (and the corresponding supports) that are distinctive by measuring their saliency, and to apply these descriptors to find corresponding feature points. We first describe our approach to building a descriptor-graph, present our definition of descriptor saliency, and then show how we apply it to enhance shape matching.
Build a Descriptor-Graph. For a given reference scan, we build a descriptor-graph for all the descriptors sampled from it, based on their similarities computed using Eq. 2. Formally, let G = (V, E) be a descriptor-graph; each node v_i in V represents an SGC descriptor on the reference scan, while each directed edge e_ij in E indicates that v_j is one of the k-nearest neighbours (k-NN) of v_i in the descriptor similarity space. Note that we do not require v_i also to be one of the k-NN of v_j, which means the directed edge e_ji may not exist in G.
To build such a graph, a straightforward way is to exhaustively search all descriptors on the reference scan to retrieve the k-NN of each descriptor. However, this approach is time-consuming, especially when the number of descriptors is large. We speed up the creation of the graph as follows: the nearest neighbors are initially filled by randomly sampling descriptors and then iteratively optimized locally via similarity propagation and random search until convergence.
Define Descriptor Saliency. We define descriptor saliency as the distinctiveness of a descriptor among a set of given descriptors: the larger the difference between a descriptor and the others, the higher its saliency. We measure the saliency of a descriptor v in a descriptor-graph from m(v), the number of nodes in the graph that consider v as a k-NN, normalized by the mean value of all m(·) that are larger than zero. Note that although v has k nearest neighbors in the graph, these neighbors could be very different from v. By fixing k, the value m(v) reveals how many descriptors are close to v (i.e., v's distinctiveness). Figure 4 shows descriptor saliency in a simple descriptor-graph.
Shape Matching with Descriptor Saliency. For a given reference scan, we first create a descriptor-graph for it and compute a saliency value for every descriptor in the graph. For a given descriptor d on the data scan, we then enhance the similarity score between d and a reference descriptor v by weighting it with the saliency of v; a weight controls the impact of saliency on the descriptor similarity, and we fix this weight empirically in our experiments.
Intuitively, we can find the descriptor on the reference scan corresponding to d by simply comparing every reference descriptor with d and selecting the one with the largest enhanced similarity score. We speed up this search by taking advantage of the descriptor-graph, with the idea of leveraging existing matches to find better ones. This is achieved by randomly selecting a set of nodes in the graph and updating the nodes over a few iterations of similarity propagation and random search, guided by the similarity score (using Eq. 2) between d and the nodes. After obtaining a small set of reference descriptors similar to d, we conduct a re-ranking using the saliency-enhanced score to select the final correspondence.
We have illustrated applying descriptor saliency to shape matching between a pair of scans. Descriptor saliency is even more suitable for shape matching among a number of scans, with the following changes. First, we build a large descriptor-graph for the descriptors from all the scans. Second, we compare a descriptor sampled on one scan only with nodes in the graph that are not from that scan. By this, the larger the number of scans, the more the shape matching performance can be improved by descriptor saliency.
4.3 Matching Supports Close to Scan Boundary
Depth scans captured from a certain view are mostly incomplete due to the limited viewing angle, sensor noise, and occlusion. This results in a surface boundary for a scan. Matching supports close to the boundary is a challenging task. First, the support is likely to be incomplete, see examples in Figure 2(b). This affects the LRF's repeatability, since the support is the only input to construct the LRF. Further, deviation of the LRF affects the construction of the descriptor, since support partitioning is performed within the LRF. Second, the incomplete support directly affects the construction of the descriptor, since voxels located at the missing part(s) become empty and no shape feature can be extracted from them.
Due to the above challenges, many existing descriptors are sensitive to boundary points. Therefore, boundary points are usually ignored when applying existing descriptors to partial shape matching [30, 7], assuming that there is sufficient non-boundary scan surface for the matching. On the other hand, matching boundary points improves the chance of correctly aligning different scans, especially when the scan overlap is small.
Our SGC descriptor is especially suitable for handling boundary points in shape matching. First, the centroid feature that SGC employs is robust against the noise and varying point density that usually occur at scan boundaries. Second, our descriptor comparison scheme allows matching descriptors computed from either a complete or an incomplete support, see Figure 5. Third, we allow using two different radii for constructing the LRF and the descriptor, i.e., R_LRF < R_descriptor; see the supports with varying sizes in Figure 5 (left). By this, a smaller yet complete support can be employed for constructing a repeatable LRF, while a larger support allows encoding more (complete or incomplete) local shape for constructing the descriptor. Based on our experiments, this setting achieves the best performance for matching boundary points.
5 Performance of the SGC Descriptor
This section evaluates the robustness of SGC with respect to various nuisances, including noise, varying point density, distance to scan boundary, and occlusion. We compare SGC with three state-of-the-art descriptors that work on point cloud data: spin image (SI), 3DSC, and SHOT. Table 1 presents a detailed description of the parameter settings.
We perform the experiments on three publicly available datasets: the Bologna dataset, the UWA dataset, and the Queen's dataset. Unlike the Bologna dataset, which synthesizes complete object models to generate scenes, the scenes in the UWA and Queen's datasets contain partial shapes of the object models. We employ the Bologna dataset to evaluate the descriptors' performance with respect to noise and varying point density (Subsections 5.1 & 5.2), the UWA dataset to evaluate the performance with respect to distance to scan boundary and occlusion (Subsections 5.3 & 5.4), and the Queen's dataset to evaluate the improvement gained by using descriptor saliency (Subsection 5.5).
We compare the descriptors' performance using recall-precision (RP) curves. In detail, we randomly select 1000 feature points in each model and find their corresponding points in the scenes via physical nearest-neighbour search. By matching the scene features against the model features using each of the four descriptors, an RP curve is generated for each descriptor.
5.1 Robustness to Noise
To evaluate the robustness of the descriptors against noise, we add four different levels of Gaussian noise, with standard deviations of 0.1, 0.3, 0.5, and 1.0 pr, to each scene. The RP curves of the four descriptors are presented in Figure 6(a-d). Thanks to the robust centroid feature, the RP curves show that SGC performs the best under all levels of noise, followed by SHOT and 3DSC.
5.2 Robustness to Varying Point Density
To evaluate the robustness of the descriptors with respect to varying point density, we downsample the noise-free scenes to 1/2, 1/4, and 1/8 of their original point density (pd). The RP curves in Figure 6(e-g) show that SGC outperforms all the other descriptors under all levels of downsampling. Figure 6(h) shows that SGC also performs the best when the input scans are both downsampled and noisy.
5.3 Robustness to Distance to Scan Boundary
We perform experiments for feature points within different ranges of distance to the boundary, i.e., (0, 0.25R], (0.25R, 0.5R], (0.5R, 0.75R], and (0.75R, R]. Note that we use the two-radius setting (R_LRF < R_descriptor) tuned for SGC and a single support radius for all the other descriptors. Thanks to the varying support radius and the descriptor comparison scheme, Figure 7 shows that SGC achieves the best performance in all four cases.
5.4 Robustness to Occlusion
To evaluate the performance of the descriptors under occlusion, we group the sampled feature points into two categories, i.e., (60%, 70%] and (70%, 80%] occlusion. Figure 8(a&b) shows that SGC outperforms all the other descriptors by a large margin, since SGC allows handling feature points at the scan boundary.
5.5 Effectiveness of Descriptor Saliency
To demonstrate the effectiveness of descriptor saliency, we compare our shape matching approach with an exhaustive search for corresponding feature points. First, we build a descriptor-graph for the descriptors sampled from all five models in the Queen's dataset. Next, we randomly select 1000 feature points on a scene and calculate their SGC descriptors. For each scene descriptor, we retrieve its neighbours either by searching the descriptor-graph with saliency or by exhaustively searching all the model descriptors. Here, we measure how many neighbours need to be retrieved to ensure the corresponding descriptor is included. Figure 8(c) shows the standard Cumulated Matching Characteristics (CMC) curves of the two approaches. The curves show that descriptor saliency brings a clear improvement in shape matching. In addition, the descriptor-graph speeds up the search for corresponding descriptors, each query running much faster than the exhaustive search.
6 Applications
3D Object/Scene Reconstruction. To reconstruct a more complete model from a set of scans, we build a descriptor-graph for all the scans. As the graph encodes the k-NN of each descriptor (and thus of each feature point), we search for the corresponding feature point (and its associated scan ID) locally within the k-NN, align the two scans based on the correspondence, and merge them into a larger point cloud. We keep aligning each of the remaining scans with the point cloud and merging them until all scans are registered. Figure 9 shows two objects and one scene reconstructed by our approach on different datasets [11, 35].
3D Object Recognition. We conduct this experiment on the challenging Queen's dataset. To represent the model library well with SGC, we remove the noise in each model point cloud and build a descriptor-graph for the descriptors sampled from all the models. For a given scene scan, we also sample a number of SGC descriptors. By searching the graph for the descriptor corresponding to a given scene descriptor, we obtain the correspondence between a model in the library and a partial scene, thus recognizing the object in the scene scan. Note that we recognize a single object at a time and segment the object once it is recognized.
Figure 10(a&b) shows the recognition result on an example scene. Figure 10(c) shows that the SGC-based algorithm outperforms most existing methods, including the VD-LSD, 3DSC, and spin image based algorithms. The RoPS-based algorithm is the current best 3D object recognition approach, and it achieves slightly better performance than SGC with the help of additional mesh information of the scene scans. Notably, the performance of our algorithm without descriptor saliency decreases by about 10%, indicating the usefulness of the saliency.
7 Conclusion
We have presented a novel SGC descriptor for matching partial shapes represented by 3D point clouds. SGC integrates three novel components: 1) a local shape description that encodes robust geometric centroid features; 2) a descriptor comparison scheme that allows comparing supports with missing parts; and 3) a descriptor saliency measure that identifies distinct descriptors. By this, SGC is robust against various nuisances in point cloud data when performing partial shape matching. We have demonstrated SGC's performance through comparisons with state-of-the-art descriptors and two partial shape matching applications.
Acknowledgments
This work is supported in part by the Fundamental Research Funds for the Central Universities (WK0110000044), Anhui Provincial Natural Science Foundation (1508085QF122), National Natural Science Foundation of China (61403357, 61175057), and Microsoft Research Asia Collaborative Research Program.
References
- Aiger, D., Mitra, N.J., Cohen-Or, D.: 4-points congruent sets for robust pairwise surface registration. ACM Trans. on Graphics (Proc. of SIGGRAPH) 27 (2008) Article 85.
- Bariya, P., Nishino, K.: Scale-hierarchical 3D object recognition in cluttered scenes. In: CVPR. (2010) 1657–1664
- Donoser, M., Riemenschneider, H., Bischof, H.: Efficient partial shape matching of outer contours. In: ACCV. (2009) 281–292
- Rodolà, E., Cosmo, L., Bronstein, M.M., Torsello, A., Cremers, D.: Partial functional correspondence. Computer Graphics Forum (2016)
- Mian, A.S., Bennamoun, M., Owens, R.A.: A novel representation and feature matching algorithm for automatic pairwise registration of range images.
From the “science eventually self-corrects” department: new research shows that coral bleaching on the Great Barrier Reef is a centuries-old problem, occurring well before “climate change” became a buzzword and rising CO2 levels were blamed.
Marc Hendrickx writes:
A new paper shows coral bleaching in the GBR extending back more than 400 years.
[This] busts myths promulgated by alarmist Dr. Ove Hoegh-Guldberg; see for instance:
“The science tells us that exceeding 2°C in average global temperature will largely exceed the thermal tolerance of corals today. It is already happening. Rolling mass bleaching events, unknown to science before 1979, are increasing in frequency and severity.”
Source: https://theconversation.com/drowning-out-the-truth-about-the-great-barrier-reef-2644
Also see the news report from The Australian https://www.theaustralian.com.au/news/nation/coral-bleaching-a-centuriesold-problem/news-story/33b3cbd7cd3b784322c0a7bc10e98eb6
Here is the paper: (open access)
https://www.frontiersin.org/articles/10.3389/fmars.2018.00283/full
Reconstructing Four Centuries of Temperature-Induced Coral Bleaching on the Great Barrier Reef
Abstract:
Mass coral bleaching events during the last 20 years have caused major concern over the future of coral reefs worldwide. Despite damage to key ecosystem engineers, little is known about bleaching frequency prior to 1979 when regular modern systematic scientific observations began on the Great Barrier Reef (GBR). To understand the longer-term relevance of current bleaching trajectories, the likelihood of future coral acclimatization and adaptation, and thus persistence of corals, records, and drivers of natural pre-industrial bleaching frequency and prevalence are needed. Here, we use linear extensions from 44 overlapping GBR coral cores to extend the observational bleaching record by reconstructing temperature-induced bleaching patterns over 381 years spanning 1620–2001. Porites spp. corals exhibited variable bleaching patterns with bleaching frequency (number of bleaching years per decade) increasing (1620–1753), decreasing (1754–1820), and increasing (1821–2001) again. Bleaching prevalence (the proportion of cores exhibiting bleaching) fell (1670–1774) before increasing by 10% since the late 1790s concurrent with positive temperature anomalies, placing recently observed increases in GBR coral bleaching into a wider context. Spatial inconsistency along with historically diverging patterns of bleaching frequency and prevalence provide queries over the capacity for holobiont (the coral host, the symbiotic microalgae and associated microorganisms) acclimatization and adaptation via bleaching, but reconstructed increases in bleaching frequency and prevalence, may suggest coral populations are reaching an upper bleaching threshold, a “tipping point” beyond which coral survival is uncertain.
This figure (especially panel B) suggests that there were bigger bleaching events in the past:
In the discussion section they say:
Both our reconstructed and the observational GBR bleaching records show maximum bleaching during 1998 across the whole GBR (Figure 3) although this is not scaled for observational effort.
Gosh, what happened in 1998? A super El Niño, that's what, and that natural-cycle event wasn't caused by man. Note all the warm ocean water near eastern Australia, where the GBR is located.
The UIUC writes:
Droughts in the Western Pacific Islands and Indonesia as well as in Mexico and Central America were the early (and sometimes constant) victims of this El Niño. These locations were consistent with early season El Niños in the past. A global view of the normal climatic effects of El Niño can be seen below.
Image by: CPC ENSO Main Page
But Dr. Ove Hoegh-Guldberg makes his living blaming man-made climate change for just about every ill associated with the GBR, so I'm pretty sure he won't like this paper, as it draws attention to the obvious: coral bleaching on the GBR is not a new problem unique to our time frame.
Stephen Hawking is an English theoretical physicist, cosmologist, and author, whose work with the University of Cambridge includes the singularity theorems. In 2002, Hawking was ranked No. 25 in the BBC's poll of the 100 Greatest Britons.
He was the Lucasian professor of mathematics at Cambridge University from 1979 to 2009. His book, A Brief History of Time, appeared on the British Sunday Times bestseller list for a record breaking 237 weeks. Hawking has a rare slow progressing form of amyotrophic lateral sclerosis that has gradually paralysed him over the decades and he now communicates using a single cheek muscle attached to a speech generating device.
Hawking has very recently allowed his 1966 doctoral thesis to be made available online, thereby providing a glimpse into his mind as a 24-year-old student. His thesis, titled Properties of Expanding Universes, has been made accessible through Cambridge's open access repository, Apollo. Previously, a paper copy of the thesis could be purchased at Cambridge for a fee of $85.
With this generous gesture, Hawking welcomes the world with his words: “Anyone, anywhere in the world, should have free unhindered access not just to my research but to the research of every great and enquiring mind across the spectrum of human understanding.” His 119-page thesis has two parts: an introduction to the properties of expanding universes, followed by four chapters of the main presentation, including one chapter, “Singularities,” in which he quotes Professor A. Raychaudhuri, a theoretical physicist at Calcutta University. Hawking's Introduction explains how the four chapters are linked: the Hoyle-Narlikar theory of gravitation, perturbations, gravitational radiation in an expanding universe, and singularities.
The thesis includes several pages that feature complicated mathematical equations, handwritten by Hawking. A few aspects from Hawking's Introduction are briefly described in this narrative. The implications and consequences of the expansion of the universe are examined, and Hawking explains that this expansion creates grave difficulties for the Hoyle-Narlikar theory of gravitation.
He elaborates on perturbations (disturbances) of an expanding, homogeneous and isotropic universe. (Isotropic describes a physical property having the same value when measured in different directions.) Hawking concludes that galaxies cannot be formed as the result of the growth of perturbations that were initially small. The propagation and absorption of gravitational radiation is also investigated. Gravitational radiation in the expanding universe is examined by a method of asymptotic expansions, exhibiting the “peeling off” behaviour.
The occurrence of singularities in cosmological models is also examined. It is explained that a singularity is inevitable provided that certain general conditions are satisfied. The initial singularity was a gravitational singularity of infinite density thought to have contained all the mass and space-time of the universe, prior to the Big Bang and the subsequent inflation that created the present-day universe. He affirms that the idea of an expanding universe is of recent origin.
All the early cosmologies were essentially stationary and even Einstein whose theory of relativity is the basis for almost all solar developments in cosmology, found it natural to suggest a static model of the universe. But Hawking saw a grave difficulty associated with a static model, such as Einstein’s. The stars had been radiating energy at their present rates for an infinite time and would have needed an infinite supply of energy.
If the stars had only a limited supply of energy then the entire universe would have reached thermal equilibrium, which is not the case. Olbers, a German scientist, known as the “Dark night sky paradox,” emphasised that the night sky conflicts with the assumption of an infinite and eternal static universe. Hawking adds that the recent expansion of the universe may have occurred by a contraction, which in turn may have been preceded by another expansion, the “bouncing” or “oscillating” model.
But this suffers from the same problem as the static model. It is thought that one of the weaknesses of Einstein’s theory of relativity is that although it furnishes field equations it does not provide boundary conditions for them. As a result it does not give a unique model for the universe but instead allows a whole series of models. Clearly, a theory providing boundary conditions and thus restricting possible solutions would be very attractive. Hawking suggests that science can postulate on some form of continual creation of matter in order to prevent the expansion from reducing the density. This leads to yet another model —the “steady state”.
He states that from the time of Copernicus, we have been demoted to a medium sized star somewhere near the edge of a fairly average galaxy; we are so humble that we would not claim to occupy any special position. This thesis helped launch Hawking and formed the bedrock of his reputation as one of the world’s most famous scientists. | https://www.thestatesman.com/features/a-peek-at-greatness-1502525641.html |
ORGANIZATION:
One of the premier music organizations in the southeastern United States and the oldest operating symphony orchestra in the Carolinas, the Charlotte Symphony Orchestra (CSO) connects with more than 100,000 music lovers each year through its lively season of concerts, broadcasts, community events, and robust educational programs. The CSO has demonstrated a commitment to its mission of uplifting, entertaining, and educating the diverse communities of Charlotte-Mecklenburg and beyond through exceptional music experiences. Celebrating its 90th anniversary in 2022, the CSO continues its vision to reach out through the transformative power of live music as a civic leader, reflecting and uniting the region.
The CSO employs a professional full-time orchestra of 62 musicians and three conductors, supports three youth orchestras, and offers an extensive array of educational and audience engagement programs. The CSO performs in a variety of venues in Charlotte-Mecklenburg and surrounding counties in the Charlotte region, principally at the Blumenthal Performing Arts Center’s Belk Theater (1,900 seats), the Levine Center for the Arts’ Knight Theater (1,100 seats), and Symphony Park at SouthPark Mall. Its extensive community engagement activities take the CSO’s music to a variety of churches, breweries, community centers, schools, and senior care centers throughout the region.
The 2022-2023 season will feature 11 guest conductors, including the CSO debut of Erina Yashima leading Beethoven’s rarely performed Triple Concerto and Berlioz’s Symphonie Fantastique, and Cleveland Orchestra’s Vinay Parameswaran conducting Benjamin Britten’s Les Illuminations and William Grant Still’s Poem for Orchestra. The Sandra and Leon Levine Pops series will explore a wide range of musical genres, highlighted by 007: The Best of James Bond, a celebration of five decades of music for the iconic spy films. Legendary film scores come to life in the CSO’s popular Movie Series, a true “surround-sound” live experience. Additionally, the CSO is opening the 2022 Charlotte International Arts Festival with a newly commissioned production of David Bowie’s final album, titled Blackstar Symphony. The CSO is one of 25 orchestras in the United States to receive Catalyst Fund grants from the League of American Orchestras to advance equity, diversity, and inclusion within the organization.
The CSO 2021-2025 Strategic Plan, completed in May 2021, focuses on seven essential areas of strategic focus: artistic vitality and growth, education, financial health and sustainability, innovation, organizational culture, public relevance, and audience development. Strategies to advance the CSO’s commitment to diversity, equity, and inclusion are incorporated within each area. This plan serves as a road map for CSO, pointing the orchestra towards an enterprising and resilient future.
The CSO is governed by a 30-member Board of Directors chaired by Linda McFarland Farthing. David Fisk was appointed President and CEO in 2020. For the last 12 years, the artistic leadership of the CSO was under the baton of internationally renowned Music Director Christopher Warren-Green, who has now taken on the titles of Conductor Laureate and Artistic Adviser. Emerging American conductor Christopher James Lees is the Resident Conductor of the CSO and Principal Conductor of the Charlotte Symphony Youth Orchestra. For the fiscal year ending June 30, 2022, the CSO is reporting total revenue of approximately $11.8 million, with over $6 million in contributions and grants (excluding government or other special assistance related to COVID relief), and just over $3 million in ticket sales and program services. Total expenses in FY22 were approximately $11.3 million.
For more information, please visit www.charlottesymphony.com
REPORTS/RELATIONSHIPS:
The Vice President of Finance and Administration will report to the President and Chief Executive Officer and will serve as a strategic partner to the Executive Leadership Team and Board of Directors. This individual will manage the staff accountant, office administrator, and outsourced controller.
BASIC FUNCTIONS:
The Vice President of Finance and Administration will provide oversight of the organization’s day-to-day accounting operations, office administration, and strategic financial plan. The Vice President of Finance and Administration is a collaborative leader and subject matter expert, bringing strong analytical, problem-solving, collaboration, and risk-management abilities.
Specific duties will include, but not necessarily be limited to:
- Prepare and present financial information to the Chief Executive Officer, Board of Directors, and Finance Committee.
- Develop and maintain a strong financial planning and forecasting process that aligns with the vision and strategies of the CSO 2025 Strategic Plan.
- Oversee, direct, and organize the work of the accounting, administration, and IT staff.
- Prepare and maintain the CSO budget in partnership with the leadership team.
- Perform cash flow modeling and forecasting as well as monitor cash flow regularly.
- Serve as the primary liaison to the Finance and Audit Committees.
- Manage/oversee monthly closing.
- Provide analysis and reporting to the Chief Executive Officer and leadership team to support key financial targets.
- Oversee annual audit and 990 tax return in partnership with external accounting firm.
- Manage banking and brokerage relationships.
- Provide oversight of payroll process, retirement programs (403(b)) – including changes and compliance, and health insurance, in partnership with the CSO’s broker(s).
- Partner with HR to develop and implement competitive benefits and employee incentive programs.
- Provide oversight and point of contact for business liability and workers’ compensation insurance.
- Ensure the availability of adequate equipment and office systems.
- Serve as the primary contact for office lease, office equipment leases, and office-related vendors.
- Promote a culture of trust, integrity, teamwork, and high performance with all colleagues.
REQUIREMENTS:
- Five to seven years’ progressive experience in finance and/or accounting, with at least two years in a nonprofit organization.
- Ability to lead the team and handle complex, ambiguous issues.
- Ability to set goals for the department and advance progress toward goals by motivating staff and persisting in the face of obstacles.
- Willingness to collaborate with others, respecting and appreciating the perspectives and contributions of all.
- Ability to assess the needs of the audience to interpret and articulate complex financial and operational data in understandable and meaningful ways.
- Ability to lead, manage, and hold the team accountable, as well as the self-awareness to accept feedback for self-development and growth.
- Forward thinking, organized, experience with handling multiple projects and prioritizing workflow, and ability to meet deadlines.
- Strong risk-management, analytical, and problem-solving skills.
- Prior management experience; ability to lead, coach, manage, and motivate direct reports.
- Excellent executive-level verbal and written communication skills, interacting with people at all levels in both formal and informal settings.
- Proficient with Microsoft Office and accounting-related software programs such as Paylocity, Bill.com, Intacct, Expensify, and banking portals.
- An undergraduate degree in a related field within business, finance, or accounting; CPA certification strongly desired.
COMPENSATION:
Compensation will be commensurate with experience including a competitive base salary, bonus opportunity, and competitive benefits package. | https://clcbsearch.com/position/vice-president-of-finance-and-administration-charlotte-symphony/ |
About the Museum
The world is evolving, transforming and changing. And Saudi Arabia is changing at an even more rapid pace. At the King Abdulaziz Center for World Culture, Ithra, human potential is seen as the greatest source of change. The Center is focused on accelerating that potential through encouraging creativity, inspiring minds, and empowering talent.
Ithra is an all-purpose culture destination. It has created an environment for transformative experiences to unlock the power of potential through mastery in arts, science, literature and innovation. The facilities include an idea lab, library, theater, museum, energy exhibit, art gallery, children’s museum, and a knowledge tower housed under one roof to provide visitors with an immersive and transformative experience. The Center itself is an iconic landmark building, which reflects its purpose as a beacon for knowledge to illuminate, inspire and catalyze the potential of the Kingdom’s vast talent pool. | https://www.sothebys.com/en/museums/king-abdulaziz-center-for-world-culture |
Boom thump, boom thump, boom thump ...
Dancers twist and turn, moving their limbs to the rhythm of life.
BANG thump, BANG thump, BANG thump, BANG thump
Lonnie Custalow pounds out four loud ''honor beats'' on his drum. The dancers at the powwow raise their hands and feathered fans to honor the Great Spirit who has given them a good day to dance.
The drum has a spirit that speaks to the people, Custalow says.
''We are honoring the Creator,'' says Custalow, a Mattaponi Indian who leads a drum circle called the Wahunsenakah Singers. ''The circle represents the people, the way we live, and our universe. The drum is in the center because it is the heartbeat of the people.''
Visitors to powwows like the Virginia Indian Heritage Festival at Jamestown last weekend often move to the beat of the drum without understanding the spiritual meaning behind the rhythm.
According to legend, the Creator gave the drum to the American Indian people long ago. A Sioux Indian had a vision of a woman presenting him with a drum. From there, the custom spread through the tribal communities of North America.
Today, the drum is a central element of American Indian social and religious customs, says Chief Webster ''Little Eagle'' Custalow of the Mattaponi reservation in King William County.
''The drum's music makes the heart merry, just like you hear the piano at the church and it makes you feel good,'' he says. ''God loves music. He loves to hear the birds sing. He loves to hear the drum.''
American Indians were worshipping God with the drum long before English settlers brought Christianity to the shores of Virginia nearly four centuries ago, Chief Custalow says.
As with all cherished customs, the spirituality of drumming is a sensitive topic among American Indians, whose culture is often misrepresented by the entertainment industry and misunderstood by people unfamiliar with tribal traditions.
Although beliefs and customs vary among different tribes, many American Indians refer to God as the Creator or Great Spirit, a force that moves through all things. For this reason, nature, people, and even objects like the drum, are spiritually interconnected.
The drum also has a spirit of its own that comes from the animal and the tree, which provided the skin and wood that make up the drum.
Many local Indians integrate these beliefs with their Christian faith. Others practice only traditional spiritual customs. Drumming is viewed as a way to communicate with the Great Spirit.
The beats and chants have been memorized and passed down through the generations. Some songs call on the sacred Eagle to take the people's prayers up to the Creator. Other chants relate to hunting, remembering ancestors and honoring fallen warriors. The beats usually resemble a heartbeat - not the ''boom-thump-thump-thump ... boom-thump-thump-thump'' popularized by Hollywood movies.
''It makes me feel close to the Creator and all of creation,'' says Wallace Lemons, leader of Falling Water Drum in King and Queen County.
The drum is a sacred object worthy of respect, says Lemons, who is married to a Rappahannock Indian and describes himself as an adopted member of the Wampanoag tribe in Massachusetts.
''That drum sleeps in the bedroom with me,'' he says. ''I treat it like a member of my family.''
Drummers use sticks, sometimes called ''beaters,'' to pound on the large drums used at powwows. Many drumming groups make their own drums out of wood and deer or buffalo hide, although some groups buy their drums from American Indian artists for as much as $1,000.
''Some people try to touch the drum,'' Lemons says. ''They think it's just a musical instrument like a banjo. The oil on their hands can deteriorate the buffalo skin.''
Many drum leaders bless the drum by burning sacred herbs and using an eagle feather to brush the smoke around the area of the drum.
Robert Jondreau, a Norge resident who leads the Four Rivers Drum, says the herbs represent the four sacred directions - sweet grass is East, tobacco is South, sage is West, and cedar is North.
''We mix that up and we light it,'' says Jondreau, a member of the Ojibwa/Chippewa tribe of the Great Lakes and Ontario. ''We call it holy smoke. You know, like how a church has incense. We pray that the smoke lifts our prayers up to the Creator.
''It's a cleansing type of thing. It makes us feel good.''
While many of the groups that drum at public events like powwows are exclusively male, there is also a ''medicine circuit'' of women drummers, according to LaKatahasie Sweeney of Chesapeake.
Sweeney's drumming group performs private healing ceremonies.
''The drum reaches into the deeper part of the soul and communicates at the heart level,'' says Sweeney, who describes herself as a mystic, although she says some people call her a medicine woman.
''Each drum has its own tone, its own way of talking to you, and its own way to sing medicine to you. The drum's beat will resonate with your own heartbeat.''
The ''medicine'' of the drum's beat, Sweeney says, is ''the spirit that moves things.''
''It resonates to that person who needs it. It's a spiritual healing. That's how all illnesses can get cured.''
Many drumming groups are open to Indians of all tribes who are seriously interested in learning the sacred custom. Jondreau, whose group of seven drummers practices every Wednesday night at his home, says it takes most people a few months to learn four or five basic beats.
''I still mess up,'' says Jondreau, who has been drumming for two and a half years. ''It's not something you can pick up overnight.''
Local American Indians say that several drumming groups have started up in the past 10 years in response to the growing popularity of powwows.
Jondreau says it's important to protect the spiritual tradition of drumming because many American Indian customs have been lost to centuries of persecution.
''As a Native American people, our spirituality is the only thing we have left.''
- Dave Schleck can be reached at 247-7430 or by e-mail at [email protected]
RECOGNIZED INDIAN TRIBES OF VIRGINIA
* Chickahominy
* Eastern Chickahominy
* Mattaponi
* Monacan
* Nansemond
* Pamunkey
* Rappahannock
* Upper Mattaponi
INDIAN POWWOWS
* WHAT: Mattaponi Indian Powwow
* WHO: Chiefs from several Virginia Indian tribes will be there along with dancers, drummers and artists
* WHEN: Today, 10 a.m. - 5 p.m.
* WHERE: Mattaponi Indian Reservation, King William County
* COST: $4 per person, children 5 and under get in free
* INFORMATION: (804) 769-4508
* Upcoming Virginia Indian events:
* Chickahominy Tribe Third Annual Crab Feast , July 18, Chickahominy Tribal Grounds near Charles City, (804) 966-7043
* Rising Water/Falling Water Powwow , July 25-26, The Showplace in Mechanicsville, (804) 443-4221
* Nansemond Indian Tribal Festival , Aug. 15-16, Lone Star Lakes Lodge in Suffolk, (804) 232-0248
DANCING TO THE DRUM
Many American Indians believe the drum is a spirit and the heartbeat of the people. Here's a sampling of dances:
* Fancy Dance - starts off with a steady beat and gradually quickens, played for dancers who dress in elaborate colored feathers
* The Men's Sneak-up - all drummers drum out of sync while dancers' movements mimic an animal sneaking up on its prey, traditionally done before or after a hunt or battle
* Eagle-Calling Song - rhythm varies, but the song calls on the sacred Eagle to take the people's prayers up to the Creator. | https://www.dailypress.com/news/dp-xpm-19980620-1998-06-20-9806200136-story.html |
Blackfeet tribe and conservation groups join forces to cancel drilling leases on tribal sacred land
Terry Tatsey was raised in the Badger-Two Medicine. The 130,000-acre area, wedged between Glacier National Park and the Bob Marshall Wilderness, served as the foundation of his youth, and he can remember his grandparents telling stories about their ancestors hunting and fishing there. For him, that land provides a vital bond to the Blackfeet’s heritage that remains intensely spiritual, and continues to afford opportunities for solitude and reflection that are as relevant today as ever.
Tatsey recalls a hunting trip when he was a teenager, shortly after the Reagan Administration issued 47 oil and gas leases along the Rocky Mountain Front and Badger-Two Medicine in the early 1980s with little environmental review. He and some family members were leading a string of horses into the area when they came across a seismograph crew taking readings in advance of developing the newly leased property. Mineral deposits that served as licks for deer and elk had been drained for the survey. He was angry and remembers wanting to prevent any more destruction from occurring.
The Blackfeet immediately protested that the leases were illegal because the tribe was never consulted, and thirty years later, Tatsey and the Blackfeet are still fighting to cancel them. After several lawsuits, the US Forest Service suspended the leases in 1997 and issued a moratorium on leases in the largely roadless Badger-Two Medicine area, which was designated a Traditional Cultural District under the National Historic Preservation Act in 1997.
The moratorium remained in place until 2013, when one of the lessees, Solenex, LLC, a Louisiana-based company, sued the Department of Interior for the delay surrounding its lease. Since then, the Blackfeet have been working with a number of regional and national environmental organizations that also want to see the leases cancelled in a collaboration that has proved significant for both the tribe and environmental organizations.
Groups like the Wilderness Society and the Montana Wilderness Association (MWA) were initially drawn to Badger-Two Medicine lease issue because of the area’s ecological and recreational values. According to Jennifer Ferenstein, a senior representative for the Wilderness Society who has worked in Montana’s environmental community for nearly twenty years, the Badger-Two Medicine is important because it borders both the Bob Marshall Wilderness and Glacier National Park and is contiguous with some of the best wildlife habitat in the Crown of the Continent ecosystem, which straddles northern Montana and southern Alberta. “The wildlife value, the roadless qualities, the core wilderness all make it an incredibly significant area,” she says.
While both organizations’ primary interest in protecting the Badger-Two Medicine remains in wildlife conservation, these environmental groups have also come to recognize the cultural significance the area has for the Blackfeet. “You don’t have to work on this issue long to start taking an appreciation for what the Blackfeet are trying to achieve from a cultural standpoint, and right now, our missions are aligned,” says Casey Perkins, Rocky Mountain Front Field Director for MWA.
Tatsey agrees that the views of wilderness groups aren’t necessarily incongruent with those of the Blackfeet. In his position at the Blackfeet Community College, he is tasked with incorporating Blackfeet culture and traditional knowledge into the college’s western science and natural resource policy courses, and he jokes that western science is often used to validate something the Blackfeet have known for generations. For him, compatibility between these views depends on the definition of environmental. “The broadest meaning of environment is inclusive of pretty much everything, and in that sense, it could be parallel with the cultural understanding and values that we have.”
Ferenstein offers a similar sentiment and recognizes that the local environment is closely intertwined with Blackfeet cultural values. “The land is so important to them and their hunting and fishing rights and their treaty rights and ability to gather medicinal plants are so interconnected that I don’t think we can separate the two.” Adding to this, however, she makes it clear that she and her environmental colleagues won’t speak about the area’s sacredness. “We defer to the Blackfeet. That’s their realm.”
Perkins and Ferenstein understand the importance that empowering their Blackfeet partners has for the campaign’s long-term success, so they have worked hard to ensure the Blackfeet remain the face of this campaign against oil and gas leases. Even though the wilderness organizations have more experience navigating political channels, working with government agencies and dealing with the media, they appreciate that this is, first and foremost, a Blackfeet issue. The campaign’s success hinges on the strength and influence of Blackfeet voices, and their work has aimed to highlight those stories.
This has not been easy work. After generations of massacres, treaty violations, and tone-deaf government policies, the Blackfeet are apt to greet offers of support or assistance with wariness and caution. But in spite of this history, the tribe has welcomed state and national environmental organizations to their cause, fully aware of what success in the Badger-Two Medicine could mean for the tribe’s long-term future. As a result of this collaboration, the partners’ individual voices have been amplified. The collaboration has also offered an opportunity for healing and a path forward for the Blackfeet. All wrongs have not been forgotten, but for many they are fading. Optimism is slowly replacing pain and in their shared goals there is hope.
Helen Augare-Carlson grew up on the Blackfeet reservation. She attended college at the University of Montana in Missoula, and since returning to the reservation has worked as the Native Science Field Director at the Blackfeet Community College. When I ask her what has allowed the partnership between the tribe and environmental groups to flourish, she doesn’t hesitate. “The healing. It’s been good for us to know that others are starting to understand and listen to us.”
Augare-Carlson’s husband, Sheldon Carlson, agrees with this sentiment. “Non-members are starting to have more trust in Indians so it’s not always a big fight. They’re starting to see our side of what we do and our culture.”
He adds, “Our parents, grandparents, great-grandparents were scared to do their ceremonies so they did them with the curtain closed. Now we don’t have to do that.”
After more than thirty years of bureaucratic delays, lawsuits being put on hold, and legislation stalling in Congress, these efforts have finally started to produce tangible results. This September, the Advisory Council on Historic Preservation issued a report decreeing the Badger-Two Medicine too sacred to drill. In the following months, public support proliferated. Local papers ran editorials, former Park Service and Forest Service officials called for the cancellation of the leases, Montana Senator Jon Tester petitioned the Department of the Interior to cancel the leases, and even the rock band Pearl Jam announced its opposition to drilling in the area.
Then, in late November, Secretary of the Interior Sally Jewell announced her decision: The Solenex lease would be cancelled. Jewell’s announcement did not pertain to the other 17 leases in the area, which are still on hold, but if this first ruling is any indication, the Badger-Two Medicine will remain wild.
I spoke with Terry Tatsey the following day. “The first thought that came to my mind was that, as tribal people who have reserved rights and sacred places around this country, maybe tribes can have confidence that the federal government is fulfilling their trust responsibilities.” More fundamentally, he adds, “I felt it was an encouraging message that they are going to take care of what’s important to us.”
For the Blackfeet, the Badger-Two Medicine has always been considered a safe place. “Growing-up we were told that if things were going wrong, we should go to that place,” Augare-Carlson says. “We were going to have everything we needed and we were going to be safe.” The area was known for its pure, clear water, plentiful game, and spiritual significance, and for generations, the Blackfeet people depended on it. | https://michaeljdax.com/protecting-the-badger-two-medicine-a-healing-story/ |
Chilled Tomato Dill Soup is the perfect dish to cool you off on a warm summer night, and can be packed in a thermos to serve as part of a picnic.
This Chilled Tomato Dill Soup recipe is always a hit, especially on a hot summer night. It is bursting with the flavors of summer – fresh tomatoes, onions and garlic. Fresh dill leaves, light and flavorful, blend perfectly with the tomato flavor and add bits of green color to this salmon-colored soup. I like to take it as part of a picnic dinner for a Denver Botanic Gardens summer concert, along with one of our main dish salads and a French baguette. It is rich, so small portions are best.
Pin this recipe now to save for later!
Chilled Tomato Dill Soup
- Yield: 4 to 6 servings
- Category: Chilled Soups, Starters, Entertaining
- Diet: Gluten Free
Description
Our tomato soup is bursting with summer flavors – tomatoes, onions and garlic. Fresh dill blends perfectly with the tomato and adds green color. Perfect to pack in a thermos for a picnic!
Ingredients
- 3 tablespoons butter
- 3 medium yellow onions (around 3/4 pound), chopped
- 1/2 teaspoon chopped garlic
- 6 large ripe tomatoes (2 1/4 – 2 1/2 pounds), cored, seeded and quartered
- 3/4 cup water
- 1 1/2 chicken bouillon cubes (or 1 1/2 teaspoons chicken granules)
- 1 1/2 tablespoons fresh dill (or 1 1/2 teaspoons dried)
- 3/4 cup mayonnaise
Instructions
- In a 4-quart saucepan, melt the butter and sauté the onion and garlic over medium heat until wilted, about 12 to 15 minutes.
- Add the tomatoes, water, bouillon cubes and dill, and simmer, covered, for 10 minutes, or until tomatoes are very tender. Remove from heat and cool.
- Place one-half of the tomato mixture in a blender and blend until smooth. Place in a large mixing bowl. Repeat with remaining tomato mixture. (Can also use an immersion blender.)
- Whisk in the mayonnaise and season to taste with salt and pepper. Cover and chill overnight. Serve in chilled bowls.
Notes
Gluten free: Use gluten free chicken bouillon.
Make ahead: Soup can be made up to two days ahead. Store covered and refrigerated. | https://www.seasonedkitchen.com/chilled-tomato-dill-soup |
COLUMBUS, Ohio (FOX19) - Several of Ohio’s largest teacher unions say Gov. Mike DeWine “coerced” commitments from school district superintendents to return to in-person learning.
DeWine made the claim Tuesday that 96 percent of public school districts had committed to returning to school at least partially in person by March 1.
The alleged price for refusing to sign the commitment form? Loss of access to the vaccine.
Teacher unions representing the city school districts of Akron, Canton, Cincinnati, Cleveland, Columbus, Dayton, Toledo and Youngstown said as much Thursday in a jointly issued statement.
“[...]Governor DeWine needs to stop playing games with the health and lives of our school communities,” the statement concludes.
The unions say schools are being “pressured” to reopen before it is safe. They also claim the commitment form in question “was presented as a prerequisite for educators and school staff to receive vaccines during Phase 1B.”
Unlike Kentucky, Ohio has not yet begun to vaccinate K-12 educators, instead prioritizing those 75 years and older. Vaccinations will begin among educators in Ohio the week of Feb. 1 for the first dose and will continue over the course of the month.
Immunity against COVID-19 is not thought to reach the 95-percent efficacy mark until around 10 days after the second dose.
Conceivably, educators and school staff could return with students to in-person instruction not having received the second dose — or having received it, but not yet sporting the ironclad immunity touted by the vaccine makers and the FDA.
The teacher unions take it further, saying the timeline for vaccine distribution means “no educators and staff will be fully vaccinated” by March 1.
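As a rough, back-of-the-envelope check on that claim, the date arithmetic can be sketched in a few lines of Python. The 21- and 28-day dose intervals and the roughly ten-day lag to full protection are assumptions for illustration (typical two-dose schedules), not figures supplied by the unions or the state:

```python
from datetime import date, timedelta

first_dose = date(2021, 2, 1)  # week Ohio educators begin vaccination
for interval_days in (21, 28):  # assumed two-dose spacing
    second_dose = first_dose + timedelta(days=interval_days)
    fully_protected = second_dose + timedelta(days=10)  # assumed ~10-day lag
    print(f"{interval_days}-day interval: full protection around {fully_protected}")

# 21-day interval: full protection around 2021-03-04
# 28-day interval: full protection around 2021-03-11
# Either way, the earliest plausible date falls after the March 1 commitment.
```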
“We are disappointed that Governor DeWine has decided to use the distribution of a life-saving vaccine as a bargaining chip, holding this precious commodity hostage while pitting parents, administrators, teachers, other school workers, and students against each other,” the statement reads.
The unions agree consequences are unlikely for districts that fail to reopen March 1. That doesn’t mean, they argue, the tactic is without harm.
“Parents across the state now have unrealistic expectations for a March 1 reopening that simply will not be possible in many school districts,” the statement says. “In some districts, these expectations are already pushing superintendents to announce and plan for reopening before it is safe.”
The unions ask DeWine not to rush the reopening “mere weeks” ahead of the time vaccinations could be complete.
Emotions can be difficult to define. We know we feel them, but what are they, really? Emotions are a mix of our sensory perception (sight, sound, taste, touch, and smell) of the world and how we interpret it.
Emotions provide us with physical sensations in our body which can influence how we think and drive us to act. When it comes down to it, emotions fulfill important evolutionary purposes for us.
First, they give us valuable information about ourselves. We feel our emotions through bodily sensations and link them to specific emotions in our brain. This process is our body quickly trying to inform us about the situation. It can even serve as an alarm system for us ("I feel scared in that old abandoned house"). Sometimes we interpret the information from emotions correctly, and sometimes we do not (but more on that later!).
Second, emotions propel us to act. Feeling a particular emotion will inspire, trigger, and/or drive us to a specific action. Third, emotions enable us to communicate with others. People respond to other people based on their emotions, which can be seen through facial expressions and body language. ("You could see the fear in my eyes when I got startled by the creaking floorboards in the old house.")
That’s why it’s important for us to understand our emotions and what information they are trying to provide. This is also the reason it is important for us to tune in more mindfully to the bodily sensations that emotions provide, so we can understand the information better.
On the flip side, emotions can be problematic when they get too intense or are too much for the situation; if emotions drive us, then we need to know how to steer. Emotions, like a bumpy road on a rainy day, can be hard to navigate, especially when they become too intense. Problems arise when we let the emotions take the wheel and drive us to act in ways that land us in trouble, are not helpful, and/or run counter to the outcome we want.
Psychologist Dr. Robert Plutchik created a wheel of emotions that maps basic emotions, and the many feelings they encompass, across different intensity levels. With so many variations of emotion, it is helpful to refer to the wheel to identify what we're feeling.
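As a playful illustration of how such a wheel can serve as a lookup from a basic emotion and an intensity to a finer-grained label, here is a tiny Python sketch. The six basic emotions follow this article; the milder and stronger word choices are borrowed from Plutchik's model, and the numeric thresholds are arbitrary assumptions:

```python
# Maps each basic emotion to (milder variant, stronger variant).
INTENSITY = {
    "joy":      ("serenity",     "ecstasy"),
    "sadness":  ("pensiveness",  "grief"),
    "fear":     ("apprehension", "terror"),
    "anger":    ("annoyance",    "rage"),
    "disgust":  ("boredom",      "loathing"),
    "surprise": ("distraction",  "amazement"),
}

def name_feeling(basic: str, intensity: float) -> str:
    """Turn a basic emotion plus a 0-1 intensity into a finer label."""
    milder, stronger = INTENSITY[basic]
    if intensity < 0.33:
        return milder
    if intensity > 0.66:
        return stronger
    return basic

print(name_feeling("fear", 0.9))  # -> "terror"
```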
Having emotional intelligence helps us manage our emotions, and emotions can get tricky. This is where we come in! Let's get to know each of our six basic emotions! Emotions 101 is a crash course on the best subject ever: you!
Ready to learn more about you? Access Emotions 101 and pick from any of the six basic emotions to get started.
References:
Dalgleish, T., & Power, M. J. (Eds.). (1999). Handbook of cognition and emotion. New York, NY, US: John Wiley & Sons Ltd.
Donaldson, M. (2017). Plutchik's Wheel of Emotions: 2017 Update.
Linehan, M. (2015). DBT skills training manual (Second edition.). New York: The Guilford Press. | https://gritx.org/skills-studio/do_exercise/what-are-emotions?emotion=zQuestionMark |
MONTREAL --
The birthing centre at Lakeshore Hospital in Montreal's West Island will be closed this weekend except for emergencies.
The West Island health and social services centre (CIUSSS-OIM) said that the administration reorganized services temporarily beginning Friday night and lasting until Monday at 8 a.m. The reorganization was necessary, the CIUSSS said, due to lack of staff.
"Moms and families will not be left on their own," said spokesperson Hélène Bergeron-Gamache . "Cases that require priority and urgent care will be seen promptly."
The delivery room was only affected Friday night, and has been back to normal service since 8 a.m. Saturday. The neonatal care unit will resume services on Monday.
All patients will be redirected to the birthing units at the LaSalle Hospital (about 25 minutes away) and St. Mary's Hospital (about 20 minutes away).
MONTREAL NORTH HOSPITALS POSTPONE ELECTIVE SURGERIES
The Sacre-Coeur, Jean-Talon and Fleury hospitals in the Montreal North health and social services centre (CIUSSS) began reducing the number of elective surgeries three days earlier than the hospitals typically do.
As such, the CIUSSS said starting Dec. 20 (instead of Dec. 23), elective surgeries will be postponed to free up beds.
Spokesperson Séléna Champagne said all urgent and semi-urgent surgeries will be performed, as well as oncology and day surgery cases.
"We also continue to operate on all patients currently hospitalized in our beds who require surgery," she said.
For the Dec. 20-22 period, the hospital has temporarily reorganized its operating schedule to run with fewer beds.
"Unfortunately, at Hôpital du Sacré-Coeur-de-Montréal, there are about five patients whose surgery will be postponed in the short term, while about five other patients will be operated on more quickly as day surgeries," said Champagne.
The reorganization was done because emergency rooms are very busy, Champagne said, and COVID-19 cases are rising.
As of Friday, the Montreal North CIUSSS has reported 1,668 new COVID-19 cases in the past two weeks, including 251 from Thursday to Friday.
The CIUSSS is encouraging all health-care workers to get their third dose of COVID-19 vaccine, and the administration is advising virtual meetings and telemedicine when possible. | |
Monarchy of Canada
The monarchy of Canada (also referred to as The Crown in Right of Canada, Her Majesty in Right of Canada, or The Queen in Right of Canada) is the constitutional system of government in which a hereditary monarch is the sovereign and head of state of Canada, forming the core, or "the most basic building block," of the country's Westminster-style parliamentary democracy. The Crown is thus the foundation of the executive, legislative, and judicial branches of the Canadian government, as well as the kingpin of Canadian federalism. While Royal Assent and the royal sign-manual are required to enact laws, letters patent, and Orders-in-Council, the authority for these acts stems from the Canadian populace, and, within the conventional stipulations of constitutional monarchy, the sovereign's direct participation in any of these areas of governance is limited, with most related powers entrusted for exercise by the elected and appointed parliamentarians, the ministers of the Crown generally drawn from amongst them, and the judges and Justices of the Peace. The Crown today primarily functions as a guarantor of continuous and stable governance and a nonpartisan safeguard against the abuse of power, the sovereign acting as a custodian of the Crown's democratic powers and a representation of the "power of the people above government and political parties."

The Canadian monarchy has its roots in the French and British crowns, from which it has evolved over numerous centuries to become a distinctly Canadian institution, one of the few crowns that have survived through uninterrupted inheritance, represented by unique symbols and sometimes colloquially dubbed the Maple Crown. The Canadian monarch, since 6 February 1952 Elizabeth II, is today shared equally with fifteen other countries within the Commonwealth of Nations, all of them independent and the monarchy of each legally distinct. For Canada, the monarch is officially titled Queen of Canada (French: Reine du Canada), and she, her consort, and other members of the Canadian Royal Family undertake various public and private functions across Canada and on behalf of the country abroad. However, the Queen is the only member of the Royal Family with any constitutional role. While several powers are the sovereign's alone, because she lives predominantly in the United Kingdom, most of the royal constitutional and ceremonial duties in Canada are carried out by the Queen's representative, the Governor General; the Governor General is therefore sometimes referred to as the de facto head of state. In each of Canada's provinces, the monarch is represented by a Lieutenant Governor, while the territories are not sovereign and thus do not have a viceroy.
International and domestic aspects

Further information: Commonwealth realm > Relationship of the realms

Canada shares the same monarch with each of 15 monarchies in the 54-member Commonwealth of Nations, a grouping known informally as the Commonwealth realms. The emergence of this arrangement paralleled the evolution of Canadian nationalism following the end of the First World War and culminated in the passage of the Statute of Westminster in 1931, since when the pan-national Crown has had both a shared and a separate character, and the sovereign's role as monarch of Canada has been distinct from his or her position as monarch of the United Kingdom. The monarchy thus ceased to be an exclusively British institution and in Canada became a Canadian establishment, though it is still often misnamed as "British" in both legal and common language, for reasons historical, political, and of convenience; this conflicts with not only the federal and provincial governments' recognition and promotion of a distinctly Canadian Crown, but also the sovereign's distinct Canadian title: Elizabeth the Second, by the Grace of God, of the United Kingdom, Canada and Her other Realms and Territories Queen, Head of the Commonwealth, Defender of the Faith.

Effective with the Constitution Act, 1982, no British or other realm government can advise the sovereign on any matters pertinent to Canada, meaning that on all matters of the Canadian state the monarch is advised solely by Canadian federal Ministers of the Crown. As the monarch lives predominantly outside of Canada, one of the most important of these state duties carried out on the advice of the Canadian Prime Minister is the appointment of the federal viceroy, who is titled Governor General and performs most of the Queen's domestic duties in her absence.

The sovereign similarly draws from Canadian coffers for support in the performance of her duties only when in Canada or when acting as Queen of Canada abroad; Canadians do not pay any money to the Queen or any other member of the Royal Family, either towards personal income or to support royal residences outside of Canada. Normally, tax dollars pay only for the costs associated with the Governor General and the ten Lieutenant Governors as instruments of the Queen's authority, including travel, security, residences, offices, ceremonies, and the like. In the absence of official reports on the full cost of the monarchy, the Monarchist League of Canada regularly issues a survey based on various federal and provincial budgets, expenditures, and estimates; the 2009 edition found that the institution cost Canadians roughly $50 million in 2008.
Succession

Succession is by male-preference primogeniture, governed by both the Act of Settlement, 1701, and the Bill of Rights, 1689, legislation that limits the succession to the natural (i.e. non-adopted), legitimate descendants of Sophia, Electress of Hanover, and stipulates that the monarch cannot be a Roman Catholic, nor married to one, and must be in communion with the Church of England upon ascending the throne; these particular clauses have prompted legal challenge.

Though, via adopting the Statute of Westminster, these constitutional laws as they apply to Canada now lie within the full control of the Canadian parliament, Canada also agreed not to change its rules of succession without the unanimous consent of the other realms, unless explicitly leaving the shared monarchy relationship; a situation that applies symmetrically in all the other realms, including the United Kingdom, and has been likened to a treaty amongst these countries. Thus, Canada's line of succession remains identical to that of the United Kingdom; however, there is no provision in Canadian law requiring that the King or Queen of Canada be the same person as the King or Queen of the United Kingdom. If the UK were to breach the convention set out in the preamble to the Statute of Westminster and unilaterally change the line of succession to the British throne, the alteration would have no effect on the reigning sovereign of Canada or his or her heirs and successors. As such, the rules for succession are not fixed, but may be changed by a constitutional amendment.

Upon a demise of the Crown (the death or abdication of a sovereign), the late sovereign's heir immediately and automatically succeeds, without any need for confirmation or further ceremony; hence arises the phrase "The King is dead. Long live the King!" It is customary, though, for the accession of the new monarch to be publicly proclaimed by the Governor General on behalf of the Queen's Privy Council for Canada, which meets at Rideau Hall after the accession. Following an appropriate period of mourning, the monarch is also crowned in the United Kingdom in an ancient ritual, but one not necessary for a sovereign to reign. Per the 1927 Act Respecting the Demise of the Crown, no incumbent appointee of the Crown is affected by the death of the monarch, though they are required to re-take the Oath of Allegiance. By the Interpretation Act of 2005, all references in legislation to previous monarchs, whether in the masculine (e.g. His Majesty) or feminine (e.g. the Queen), continue to mean the reigning sovereign of Canada, regardless of his or her gender. After an individual ascends the throne, he or she typically continues to reign until death, being unable to unilaterally abdicate per the tenets of constitutional monarchy.
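The male-preference rule described above is, at bottom, a small tree-traversal algorithm, so a worked example may help. The following Python sketch is illustrative only, under stated assumptions: the Person type, the example names, and the single eligible flag are inventions of this illustration, and real eligibility also turns on legitimacy, descent from Sophia of Hanover, and religion rather than a single boolean.

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    """Hypothetical family-tree node for illustration purposes."""
    name: str
    male: bool
    eligible: bool = True  # e.g. False for a Roman Catholic (assumed flag)
    children: list["Person"] = field(default_factory=list)  # in birth order

def line_of_succession(sovereign: Person) -> list[str]:
    """Male-preference primogeniture: each son's entire line precedes the
    next son's line, and all sons' lines precede the daughters' lines."""
    order: list[str] = []

    def walk(p: Person) -> None:
        sons = [c for c in p.children if c.male]
        daughters = [c for c in p.children if not c.male]
        for child in sons + daughters:
            if child.eligible:
                order.append(child.name)
            walk(child)  # an excluded person's descendants remain in line

    walk(sovereign)
    return order

# Toy tree: an elder daughter is leapfrogged by her younger brother's line.
tree = Person("Sovereign", male=False, children=[
    Person("Elder Daughter", male=False),
    Person("Younger Son", male=True,
           children=[Person("Grandson", male=True)]),
])
print(line_of_succession(tree))
# ['Younger Son', 'Grandson', 'Elder Daughter']
```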
Personification of the state

As the living embodiment of the Crown, the sovereign is regarded as the personification, or legal personality, of the Canadian state, with the state therefore referred to as Her Majesty The Queen in Right of Canada (French: Sa Majesté la Reine du chef du Canada), or simply The Crown. As such, the monarch is the employer of all government staff (including the viceroys, judges, members of the Canadian Forces, police officers, and parliamentarians), the guardian of foster children (Crown wards), as well as the owner of all state lands (Crown land), buildings and equipment (Crown held property), state owned companies (Crown corporations), and the copyright for all government publications (Crown copyright). This is all in his or her position as sovereign, and not as an individual; all such property is held by the Crown in perpetuity and cannot be sold by the sovereign without the proper advice and consent of his or her ministers.

As the embodiment of the state, the monarch tops the Canadian order of precedence and is the locus of the oaths of allegiance required of many employees of the Crown, as well as of new citizens, as per the Oath of Citizenship laid out in the Citizenship Act. This is done in reciprocation to the sovereign's Coronation Oath, wherein he or she promises "to govern the Peoples of... Canada... according to their respective laws and customs."
Head of state

The sovereign is regarded as the head of state by official government sources and constitutional scholars, while the Governor General and Lieutenant Governors are all only representatives of, and thus equally subordinate to, that figure. The Governor General, his or her staff, government publications, and some constitutional scholars like Edward McWhinney have, however, referred to the position of Governor General as that of Canada's head of state, though sometimes qualifying the assertion with de facto or effective, and since 1927 Governors General have been received on state visits abroad as though they were heads of state. Officials at Rideau Hall have pointed to the Letters Patent of 1947 as justification for describing the Governor General as Canada's head of state, but others have countered that the document makes no such distinction, either literally or implicitly. Michael D. Jackson, former protocol officer for Saskatchewan, pointed out that Rideau Hall had been attempting to "recast" the Governor General as head of state since the 1970s, and that doing so preempted both the Queen and all of the Lieutenant Governors, the latter causing not only "precedence wars" at provincial events (where the Governor General usurped the Lieutenant Governor's proper spot as most senior official in attendance), but also constitutional issues by "unbalancing [...] the federalist symmetry." This has been regarded by some as a natural evolution, and by others as a dishonest effort to alter the constitution without public scrutiny. Still others view the role of head of state as being shared by both the sovereign and her viceroys.
Constitutional role

Canada's constitution is made up of a variety of statutes and conventions that are either British, French, or Canadian in origin, and together they give Canada a parliamentary system of government wherein the role of the Queen is both legal and practical. The Crown is regarded as a corporation, with the sovereign, vested as she is with all powers of state, as the centre of a construct in which the power of the whole is shared by multiple institutions of government acting under the sovereign's authority; the Crown has thus been described as the underlying principle of Canada's institutional unity. Though her authority stems from the people, all Canadians live under the authority of the monarch, including anyone born in Canada, whether to citizens or to landed migrants, who is then recognised per common law as a natural-born subject of the Crown. For Canadians, the monarch acts as a "guardian of constitutional freedoms."

The vast powers that belong to the Crown are collectively known as the Royal Prerogative, the exercise of which does not require parliamentary approval, though it is not unlimited; for example, the monarch does not have the prerogative to impose and collect new taxes without the authorization of an Act of Parliament. The consent of the Crown must, however, be obtained before either of the houses of parliament may even debate a bill affecting the sovereign's prerogatives or interests, and no act of parliament binds the Queen or her rights unless the act states that it does. Further, the constitution instructs that any change to the position of the monarch, or of the monarch's representatives in Canada, requires the consent of the Senate, the House of Commons, and the legislative assemblies of all the provinces.

The Crown also sits at the pinnacle of the Canadian Forces, with the constitution placing the monarch in the position of Commander-in-Chief of the entire force, though the Governor General carries out the duties attached to the position and also bears the title of Commander-in-Chief in and Over Canada. Though the monarch and members of her family also act as Colonels-in-Chief of various regiments in the military, these posts are only ceremonial in nature, reflecting the Crown's relationship with the military through participation in military ceremonies both at home and abroad. The monarch also serves as the Honorary Commissioner of the Royal Canadian Mounted Police.

Included in Canada's constitution are the various treaties with the country's First Nations, Inuit, and Métis peoples, who view these documents as agreements directly between them and the reigning monarch. These accords illustrate a long relationship between sovereign and aboriginals, one based on the Crown's responsibility to protect First Nations' territories and to act as a fiduciary between the government and aboriginal peoples in Canada.
Executive (Queen-in-Council)

The government of Canada, formally termed Her Majesty's Government, is defined by the constitution as the Queen acting on the advice of her Privy Council; what is technically known as the Queen-in-Council, or sometimes the Governor-in-Council, referring to the Governor General as the Queen's stand-in. One of the main duties of the Crown is to "ensure that a democratically elected government is always in place," which means appointing a prime minister to thereafter head the Cabinet, a committee of the Privy Council charged with advising the Crown on the exercise of the Royal Prerogative. The Queen is informed by her viceroy of the swearing-in and resignation of prime ministers and other members of the ministry, remains fully briefed through regular communications from her Canadian ministers, and holds audience with them whenever possible.

In the construct of constitutional monarchy and responsible government, the ministerial advice tendered is typically binding, meaning the monarch reigns but does not rule; this has been the case in Canada since the Treaty of Paris ended the reign of the territory's last absolute monarch, King Louis XV. It is important to note, however, that the Royal Prerogative belongs to the Crown and not to any of the ministers, and the royal and viceroyal figures may unilaterally use these powers in exceptional constitutional crisis situations, thereby allowing the monarch to make sure "that the government conducts itself in compliance with the constitution." There are also a few duties which must be specifically performed by, or bills that require assent by, the Queen; these include applying the royal sign-manual and Great Seal of Canada to the appointment papers of governors general, the confirmation of awards of Canadian honours, the approval of any change in her Canadian title, and the creation of new Senate seats.
Foreign affairs

The Royal Prerogative also extends to foreign affairs: the sovereign or, since 1978, the Governor General negotiates and ratifies treaties, alliances, and international agreements on the advice of the Cabinet. The Governor General, on behalf of the Queen, also accredits Canadian High Commissioners and ambassadors, and receives similar diplomats from foreign states. These tasks were solely in the domain of the sovereign until 1977, when, at the direction of Prime Minister Pierre Trudeau, Queen Elizabeth II agreed to allow the Governor General to perform these duties on her behalf, and in 2005 the Letters of Credence and Recall were altered so as to run in the name of the incumbent governor general, instead of following the usual international process of the letters passing from one head of state to another. In addition, the issuance of passports falls under the Royal Prerogative, and, as such, all Canadian passports are issued in the monarch's name and remain her property.
Parliament (Queen-in-Parliament)

The sovereign is one of the three components of parliament, and is formally called the Queen-in-Parliament, but the monarch and viceroy do not participate in the legislative process save for the granting of Royal Assent, which is necessary for a bill to be enacted as law; either figure or a delegate may perform this task, and the viceroy has the option of deferring assent to the sovereign, as per the constitution. The Governor General is further responsible for summoning the House of Commons, while either the viceroy or the monarch can prorogue and dissolve the legislature, after which the Governor General usually calls for a general election. The new parliamentary session is marked by either the monarch or the Governor General reading the Speech from the Throne; as both are traditionally barred from the House of Commons, this ceremony, as well as the bestowing of Royal Assent, takes place in the Senate chamber. Despite this exclusion, members of the Commons must still express their loyalty to the sovereign and defer to her authority, as the Oath of Allegiance must be recited by all new parliamentarians before they may take their seats, and the official opposition is traditionally dubbed Her Majesty's Loyal Opposition.
Courts (Queen-on-the-Bench)
The sovereign is responsible for rendering justice for all her subjects, and is thus traditionally deemed the fount of justice, or more officially, the Queen on the Bench. However, she does not personally rule in judicial cases; instead, the judicial functions of the Royal Prerogative are performed in trust and in the Queen's name by Officers of Her Majesty's Court. These individuals enjoy the privilege, granted conditionally by the sovereign, of freedom from criminal and civil liability for unsworn statements made within the court. This privilege extends from the notion in common law that the sovereign "can do no wrong"; the monarch cannot be prosecuted in her own courts for criminal offences. Civil lawsuits against the Crown in its public capacity (that is, lawsuits against the Queen-in-Council) are permitted; however, lawsuits against the monarch personally are not cognizable. In international cases, as a sovereign and under established principles of international law, the Queen of Canada is not subject to suit in foreign courts without her express consent. The monarch, and by extension the Governor General, also grants immunity from prosecution, exercises the Royal Prerogative of Mercy, and may pardon offences against the Crown, either before, during, or after a trial.
As the judges and courts are the sovereign's judges and courts, and as all law in Canada derives from the Crown, the monarch stands to give legitimacy to courts of justice, and is the source of their judicial authority. An image of the Queen and/or the Arms of Her Majesty in Right of Canada is always displayed in Canadian federal courtrooms. Itinerant judges will display an image of the Queen and the Canadian flag when holding a session away from established courtrooms; such situations occur in parts of Canada where the stakeholders in a given court case are too isolated geographically to be able to travel for regular proceedings.
Provinces
The Canadian monarchy is a federal one in which the Crown is unitary throughout all jurisdictions in the country, with the headship of state being a part of all equally. As such, the sovereignty of the provinces is passed on not by the Governor General or federal parliament, but through the overarching Crown itself, as a part of the executive, legislative, and judicial operations in each province. Though singular, linking the federal and provincial governments into a federal state, the Crown is thus "divided" into eleven legal jurisdictions, or eleven "crowns": one federal and ten provincial. The Fathers of Confederation viewed the system of constitutional monarchy as a bulwark against any potential fracturing of the Canadian federation.
A Lieutenant Governor serves as the Queen's representative in each province, carrying out all the monarch's constitutional and ceremonial duties of state on her behalf.
The Commissioners of Canada's territories of Nunavut, Yukon, and the Northwest Territories are appointed by the Governor-in-Council, at the recommendation of the federal Minister of Indian Affairs and Northern Development; but, as the territories are not sovereign entities, the commissioners are not representatives of the sovereign.
Cultural role
Royal presence and duties
Members of the Royal Family have been present in Canada since the late 1700s, their reasons including participating in military manoeuvres, serving as the federal viceroy, or undertaking official royal tours. A prominent feature of the latter is the royal walkabout, a tradition initiated in 1939 by Queen Elizabeth when she was in Ottawa and broke from the royal party to speak directly to gathered veterans. Usually important milestones, anniversaries, or celebrations of Canadian culture will warrant the presence of the monarch, while other royals will be asked to participate in lesser occasions. A household to assist and tend to the monarch will form part of the royal party.
Official duties involve the sovereign representing the Canadian state at home or abroad, or her relations as members of the Canadian Royal Family participating in government organized ceremonies either in Canada or elsewhere. The advice of the Canadian Cabinet is the impetus for royal participation in any Canadian event, though, at present, the Chief of Protocol and his staff in the Department of Canadian Heritage are, as part of the State Ceremonial and Canadian Symbols Program, responsible for orchestrating any official events in or for Canada that involve the Royal Family. Such events have included centennials and bicentennials; Canada Day; the openings of Pan American, Olympic, and other games; anniversaries of First Nations treaty signings; awards ceremonies; D-Day commemorations; anniversaries of the monarch's accession; and the like.

Conversely, unofficial duties are performed by Royal Family members on behalf of Canadian organizations of which they may be patrons, through their attendance at charity events, visiting with members of the Canadian Forces as Colonel-in-Chief, or marking certain key anniversaries. The invitation and expenses associated with these undertakings are usually borne by the associated organization. In 2005, members of the Royal Family were present at a total of 76 Canadian engagements, as well as several more through 2006 and 2007.
Apart from Canada, the Queen and other members of the Royal Family regularly perform public duties in the other fifteen nations of the Commonwealth in which the Queen is head of state. This situation, however, can mean the monarch and/or members of the Royal Family will be promoting one nation and not another, a situation that has been met with criticism.
Symbols, associations, and awards
The main symbol of the monarchy is the sovereign herself, described as "the personal expression of the Crown in Canada," and her image is thus used to signify government authority (her effigy, for instance, appears on currency, and her portrait in government buildings) and Canadian sovereignty. A royal cypher or crown is also used to illustrate the monarchy as the locus of authority, the latter without referring to any specific monarch. The former appears on buildings and official seals, and the latter on provincial and national coats of arms, as well as police force and Canadian Forces regimental and maritime badges and rank insignia. The sovereign will also appear in person to represent the Canadian nation, and is both mentioned in and the subject of songs, loyal toasts, and salutes.
The Queen is the fount of all honours in Canada, and new orders, decorations, and medals may only be created with the approval of the sovereign through letters patent. Hence, the insignia and medallions for these awards bear a crown, cypher, and/or effigy of the monarch. Similarly, the country's heraldic authority was created by the Queen in 1988, and, operating under the authority of the Governor General, grants new coats of arms (armorial bearings), flags, and badges to Canadian citizens, permanent residents, and corporate bodies. Use of the royal crown in such symbols is a gift from the monarch showing royal support and/or association, and requires her approval before being added.
Besides government and military institutions, a number of Canadian civilian organizations have associations with the monarchy, either through having been founded via a Royal Charter (such as the Hudson's Bay Company, the city of Saint John, New Brunswick, Scouts Canada, and McGill University), having been granted the right to use the prefix royal before their name (such as the Royal Ottawa Golf Club and the Royal Canadian Regiment), or because at least one member of the Royal Family serves as a patron.
Some charities and volunteer organizations have also been founded as gifts to, or in honour of, some of Canada's monarchs or members of the Royal Family, such as the Victorian Order of Nurses (a gift to Queen Victoria for her Diamond Jubilee in 1897), the Canadian Cancer Fund (set up in honour of King George V's Silver Jubilee in 1935), and the Queen Elizabeth II Fund to Aid in Research on the Diseases of Children. A number of awards in Canada are similarly issued in the name of previous or present members of the Royal Family.
Further, organizations will give commemorative gifts to members of the Royal Family to mark a visit or other important occasion, such as the tapestry of the Royal Canadian Mounted Police badge presented to the Queen by the RCMP after she approved the new design in Regina, Saskatchewan, on 4 July 1973.
Canadian Royal Family
The Canadian Royal Family is a group of people related to the monarch of Canada. There is no strict legal or formal definition of who is or is not a member of the group, though the Department of Canadian Heritage maintains a list of immediate members, and the Department of National Defence stipulates that those in the direct line of succession who bear the style of Royal Highness (Altesse Royale) are subjects of, and owe their allegiance to, the reigning sovereign specifically as King or Queen of Canada, entitling them to Canadian consular assistance and to the protection of the Queen's armed forces of Canada when they are outside of the Commonwealth realms and in need of protection or aid.
Given the shared nature of the Canadian monarch, most members of the Canadian Royal Family are also members of the British Royal Family and thus the House of Windsor, as well as distant relations of the Greek, Danish, Spanish, and Belgian Royal Families, and include lineage from, amongst others, French, Italian, Hungarian, Portuguese, Cuman, Norwegian, Swedish, German, Serbian, Armenian, Arab, and Mongolian ethnicities. However, because Canada and the UK are independent of one another, it is incorrect in the Canadian context to refer to the family of the monarch as the "British Royal Family," as is frequently done by Canadian and other media, and there exist some differences between the official lists of each: for instance, while he never held the style His Royal Highness, Angus Ogilvy was included in the Department of Canadian Heritage's royal family list, but was not considered a member of the British Royal Family.
Additionally, unlike in the United Kingdom, the monarch is the only member of the Royal Family in Canada with a title established through law; it would be possible for others to be granted distinctly Canadian titles (as is the case for the Duke of Rothesay in Scotland), but they have always been, and continue to only be, accorded the use of a courtesy title, namely the one they have been granted via Letters Patent in the United Kingdom, though in Canada these are also translated into French.
[Image: Most members of the Royal Family gathered for a dinner celebrating the 60th wedding anniversary of Queen Elizabeth II and the Duke of Edinburgh, including the Queen, the Duke of Edinburgh, the Prince of Wales, the Duchess of Cornwall, Princes William and Henry of Wales, the Duke of York, Princesses Beatrice and Eugenie of York, the Earl and Countess of Wessex, the Princess Royal, Peter and Zara Phillips, the Duke and Duchess of Gloucester, the Duke of Kent, Prince and Princess Michael of Kent, and Princess Alexandra.]
Though the group is predominantly based in the United Kingdom, the sovereign and those amongst her relations who do not meet the requirements of Canadian citizenship law are still not considered foreign to Canada; as early as 1959, it was recognised that the Queen was "equally at home in all her realms." Rather, as legal subjects of the country's monarch, the Royal Family holds a unique position reflected in the confusion that sometimes arises around the awarding of honours to them.
The only Canadian citizens within the Canadian Royal Family were married into it: in 1988, Sylvana Jones (née Tomaselli in Placentia, Newfoundland) wed George Windsor, Earl of St Andrews, a great-grandson of King George V; and on 18 May 2008, Autumn Kelly, originally from Montreal, married Queen Elizabeth II's eldest grandson, Peter Phillips.
Beyond legalities, members of the Royal Family have also, on occasion, declared themselves to be Canadian, and some members have lived in Canada for extended periods as viceroy or for other reasons. Still, the existence of a Canadian Royal Family is contested, mostly by individuals in Canada's fringe republican movement, but also by former Lieutenant Governor of British Columbia Iona Campagnolo, while poet George Elliott Clarke has publicly mused about a fully First Nations royal family for Canada.
According to the Canadian Royal Heritage Trust, Prince Edward Augustus, Duke of Kent and Strathearn (due to his having lived in Canada between 1791 and 1800, and fathering Queen Victoria) is the "ancestor of the modern Canadian Royal Family." Nonetheless, the concept of the Canadian Royal Family did not emerge until after the passage of the Statute of Westminster in 1931, when Canadian officials first began to overtly consider putting the principles of Canada's new status as an independent kingdom into effect.
At first, the monarch was the only member of the Royal Family to carry out public ceremonial duties solely on the advice of Canadian ministers; King Edward VIII became the first to do so when, in July 1936, he dedicated the Canadian National Vimy Memorial in France, one of his few obligations performed during his short reign.
Over the decades, however, the monarch's children, grandchildren, cousins, and their respective spouses began to also perform functions at the direction of the Canadian Crown-in-Council, representing the monarch within Canada or abroad.
By the 1960s, loyal societies in Canada recognized the Queen's cousin, Princess Alexandra, The Honourable Lady Ogilvy, as a "Canadian princess"; but it was not until October 2002 that the term Canadian Royal Family was first used publicly and officially by a member of it: in a speech to the Nunavut legislature at its opening, Queen Elizabeth II stated: "I am proud to be the first member of the Canadian Royal Family to be greeted in Canada's newest territory."
The press frequently follows the movements of the Royal Family, and can, at times, affect the group's popularity, which has fluctuated over the years. Mirroring the mood in the United Kingdom, the family's lowest approval was during the mid-1980s to 1990s, when the children of the monarch were enduring their divorces and were the targets of negative tabloid reporting.
Residences and royal household
Rideau Hall, the monarch's principal Canadian residence, though foremost that of the Governor General.
A number of buildings across Canada are reserved by the Crown for the use of the monarch and her viceroys. The sovereign's primary official residence, as well as that primarily used by the Governor General, is Rideau Hall, located in Ottawa, Ontario, and another principal residence of the Governor General is the Citadelle, in Quebec City.
Each of these royal seats holds pieces from the Crown Collection, made up of antique and contemporary furniture and works of art from each province and territory of Canada, as well as Europe, Asia, and other regions, the majority of which came from donations to the Canada Fund.
The provinces of British Columbia, Manitoba, Nova Scotia, New Brunswick, Newfoundland and Labrador, and Prince Edward Island also maintain residences for the sovereign, though they are used primarily by the respective Lieutenant Governor.
Further, though neither was ever used for its intended purpose, Hatley Castle in British Columbia was purchased in 1940 by King George VI in Right of Canada for use as his home during the course of World War II, and the Emergency Government Headquarters, built in 1959 at CFS Carp and decommissioned in 1994, included a residential apartment for the sovereign or Governor General in the case of a nuclear attack on Ottawa.
Monarchs and members of their family have also privately owned homes and land in Canada: King Edward VIII owned Bedingfield Ranch, near Pekisko, Alberta; the Marquess of Lorne and Princess Louise owned a cottage on the Cascapédia River in Quebec; and Princess Margaret owned Portland Island between its gifting to her during a trip to the province in 1958 and her death in 2002, though she offered it on permanent loan to the Crown in Right of British Columbia in 1966, and the island and surrounding waters eventually became Princess Margaret Marine Park.
To assist the Queen in carrying out her official duties on behalf of Canada, she appoints various people to her Canadian household. Along with the Canadian Secretary to the Queen, the monarch's entourage includes two Ladies-in-Waiting, the Canadian Equerry-in-Waiting to the Queen, the Queen's Police Officer, the Duke of Edinburgh's Police Officer, the Queen's Honorary Physician, the Queen's Honorary Dental Surgeon, and the Queen's Honorary Nursing Sister, the latter three being drawn from the Canadian Forces.
There are also three Household Regiments specifically attached to the Royal Household (the Governor General's Foot Guards, the Governor General's Horse Guards, and the Canadian Grenadier Guards), as well as two Chapels Royal in Ontario: the Queen's Chapel of the Mohawks, built in 1785 in Brantford, and Christ Church, Her Majesty's Royal Chapel of the Mohawks, founded in 1784 and rebuilt in 1843 near Deseronto. Both were granted the status of Royal Chapel by Queen Elizabeth II in 2004.
History
The Canadian monarchy can trace its ancestral lineage back to the kings of the Angles and the early Scottish kings, through centuries since parts of the territories that today comprise Canada were claimed by King Francis I in 1534, and others by Queen Elizabeth I in 1583, both being blood relatives of the current Canadian monarch. Though the first French and British colonizers of Canada interpreted the hereditary nature of some indigenous North American chieftainships as a form of monarchy, it is generally accepted that Canada has been a territory of a monarch or a monarchy in its own right only since the establishment of New France in the early 17th century.
After the Canadian colonies of France were, via war and treaties, ceded to the British Crown, and the population was greatly expanded by those loyal to George III fleeing north from persecution during and following the American Revolution, British North America was in 1867 confederated by Queen Victoria to form Canada as a kingdom in its own right. By the end of the First World War, the increased fortitude of Canadian nationalism inspired the country's leaders to push for greater independence from the King in his British Council, resulting in the creation of the uniquely Canadian monarchy through the Statute of Westminster, which was granted Royal Assent in 1931. Only five years later, Canada had three successive kings in the space of one year, with the death of George V, the accession and abdication of Edward VIII, and his replacement by George VI.
The latter became in 1939 the first reigning monarch of Canada to tour the country (though previous kings had done so before their accession). As the ease of travel increased, visits by the sovereign and other Royal Family members became more frequent and involved, seeing Queen Elizabeth II officiate at various moments of importance in the nation's history, one being when she proclaimed the country to be fully independent, via constitutional patriation, in 1982. That act is said to have entrenched the monarchy in Canada, due to the stringent requirements, as laid out in the amending formula, that must be met in order to alter the monarchy in any way.
Through the 1960s and 1970s, the rise of Quebec nationalism and changes in Canadian identity created an atmosphere where the purpose and role of the monarchy came into question. Some references to the monarch and the monarchy were removed from the public eye, and moves were made by the federal government to constitutionally alter the Crown's place and role in Canada; but provincial and federal ministers, along with loyal national citizens' organisations, ensured that the system remained the same in essence. By 2002, the royal tour and associated fêtes for the Queen's Golden Jubilee proved popular with Canadians across the country.
Debate
To date, outside of academic circles, there has been little national debate on the Canadian monarchy, a subject of which most Canadians are generally unaware. Neither of Canada's two most prominent political parties (the Liberal Party and the Conservative Party) is officially in favour of abolishing the monarchy, though the latter makes support for constitutional monarchy a founding principle in its policy declaration, and the New Democratic Party (NDP) has no official position on the role of the Crown; only some Members of Parliament belonging to these parties, and the leaders of the Bloc Québécois, openly support abolition. Canada has two special-interest groups representing the debate, who frequently argue the issue in the media: the Monarchist League of Canada and Citizens for a Canadian Republic. There are also other loyal organizations, such as the United Empire Loyalists' Association of Canada, the Canadian Royal Heritage Trust, and the Orange Order in Canada.
See also
List of Canadian monarchs
Current Commonwealth realms
States headed by Elizabeth II
Monarchies in the Americas
List of monarchies
External links
Canadian government website for the Canadian Monarchy
Canada: A Constitutional Monarchy, from the Government of Canada
Buckingham Palace website for the Canadian Monarchy
Maple Leaf Web: The Monarchy in Canada
Learning About the Canadian Crown Website
The Canadian Crown: The Unofficial Website of the Canadian Monarchy
Queen & Country: Enduring Loyalties (contains footage on the subject of Elizabeth II's relationship with Canada)
Film footage of Queen Elizabeth II in Canada
CBC Digital Archives: Expodition: The royal treatment
The Royal Collection: Film footage of the Duke and Duchess of Edinburgh in Canada, 1951
CBC Digital Archives: Their Majesties in Canada (film footage of King George VI and Queen Elizabeth in Canada, 1939)
National Film Board of Canada films on the Canadian monarchy
| http://maps.thefullwiki.org/Monarchy_of_Canada
Health is improving around the world, but 7 out of 10 deaths are now due to noncommunicable diseases, like stroke, diabetes, chronic kidney disease, and drug use disorders, according to a special issue of The Lancet.
The Global Burden of Disease, Injuries, and Risk Factors (GBD) 2015 study brought together 1870 experts in 127 countries and territories to analyze 249 causes of death, 315 diseases and injuries, and 79 risk factors occurring between 1990 and 2015.
Since 1980, life expectancy around the world has increased by more than a decade, rising to 69 years for men and 74.8 years for women in 2015. In that time, death rates for HIV/AIDS, malaria, and diarrhea have fallen significantly.
However, while overall life expectancy has risen by a large amount, healthy life expectancy has increased by just 6.1 years. This means that people are living more years with illness and disability. The burden of ill health has shifted away from communicable, maternal, neonatal, and nutritional disorders to disabling noncommunicable diseases, such as drug use disorder (particularly opioids and cocaine), hearing and vision loss, and osteoarthritis. This trend has a huge impact on health systems and the cost of treatment.
The researchers also created the Socio-Demographic Index (SDI) to determine what progress would be expected in countries based on their level of development. The index is based on income per capita, educational attainment, and total fertility rate.
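As a rough illustration of how such a composite index can work, here is a minimal Python sketch. The rescaling ranges, the clamping, and the use of a geometric mean are illustrative assumptions for this example only; the GBD study's actual SDI methodology differs in its details.

```python
import math

def rescale(value, lo, hi, invert=False):
    """Min-max rescale a component onto [0, 1]; invert when lower raw
    values should count as more developed (e.g., fertility)."""
    x = (value - lo) / (hi - lo)
    x = min(max(x, 0.0), 1.0)  # clamp to the reference range
    return 1.0 - x if invert else x

def toy_sdi(income_per_capita, mean_education_years, total_fertility_rate):
    """Toy development index: geometric mean of three rescaled components.
    The reference ranges below are made up for illustration."""
    parts = [
        rescale(income_per_capita, 250, 60_000),
        rescale(mean_education_years, 0, 17),
        rescale(total_fertility_rate, 1, 8, invert=True),
    ]
    return math.prod(parts) ** (1 / len(parts))

# A hypothetical high-income country scores near the top of the scale.
print(round(toy_sdi(45_000, 13.5, 1.8), 3))
```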
The index found that when looking at high SDI regions, North America had the worst healthy life expectancy. Mortality rate for children younger than 5 years was worse than expected in the United States and Canada. Plus, drug use disorders and diabetes cause a disproportionate amount of ill health and early death in America.
The purpose of these analyses was to provide governments and donors with evidence to identify national health challenges and priorities for intervention. | https://www.ajmc.com/printer?url=/focus-of-the-week/global-life-expectancy-rises-but-70-of-deaths-due-to-non-communicable-diseases |
Peer Advisor - Student Asst 3
THIS POSITION IS OPEN TO ALL ACTIVE UC MERCED STUDENTS (INCLUDING FEDERAL WORK STUDY ELIGIBLE).
Under the Immigration Reform and Control Act of 1986 employees are required to provide proof of eligibility to work in the United States (a list of acceptable documents to establish eligibility can be found here)
Hours Per Week: 10
Background Check: Yes
PHYSICAL WORK LOCATION: On Campus
Description:
NOTE: THIS POSITION WILL BE FOR SUMMER 2023 (MAY 2023 - AUGUST 2023), WITH POSSIBILITY TO EXTEND. TRAINING WILL COMMENCE IN APRIL 2023. PEER ADVISORS MAY BE SCHEDULED BETWEEN 10-18 HOURS PER WEEK DURING THE ACADEMIC YEAR, AND BETWEEN 10 TO 25 HOURS PER WEEK DURING THE SUMMER TERM.
Peer Advisors serve as primary contacts for general inquiries about OIA and its Study Abroad and International Student and Scholar units. Peer Advisors assist with outreach, recruitment, orientations and a variety of clerical tasks and projects. These responsibilities are primarily accomplished in-person, including while staffing the front desk in the OIA office and in other on-campus spaces.
Peer Advisors must be available to work during Summer 2023. The great majority of the work takes place during regular business hours but, on occasion, evenings and weekend hours may be assigned. Participation in trainings and meetings and some OIA events are required.
Specific duties include but are not limited to:
• Serving as an initial OIA contact by responding to general and routine questions from students, faculty and staff, and those external to the university, such as parents and potential students and visitors
• Providing information about OIA programs and other initiatives to students, staff, faculty, parents, campus visitors and other members of the community during outreach and marketing activities
• Promoting Study Abroad by encouraging students to participate in informational meetings, interviews and orientation programs, and through classroom, housing, and club presentations
• Pre-advising students about Study Abroad countries and programs, and helping students with the Study Abroad application and pre-departure process, through peer advising and other informational sessions
• Representing OIA during International Students and Scholars (ISS) orientations and other events
• Designing promotional materials including posters and flyers
• Organizing and updating informational materials
• Attending regular departmental meetings and training sessions
• Completing weekly assignments by deadlines
• Posting on OIA's media platforms (e.g. Facebook, Instagram, Snapchat, LinkedIn, and Twitter) in accordance with OIA's social media guidelines
• Assisting in the organization and implementation of OIA events and staffing, and otherwise participating in these events
• Supporting with Non-OIA, UC Merced internationalization efforts including but not limited to: student exchange, international student recruitment, and international relationship and partnership building
Qualifications:
• Spanish Speaking & Writing Abilities (Preferred but not required)
• Available to work during summer, winter, and spring breaks
• Sophomore or junior class standing (Preferred)
• Interest in international education and commitment to furthering the goals of OIA and, more broadly, internationalization at UC Merced
• Ability to represent OIA in a professional, mature and courteous manner
• Ability to work comfortably with students, faculty, staff, visiting scholars, and external stakeholders of diverse identities and backgrounds
• Commitment to strictly adhering to student confidentiality protocols, policies and laws
• Ability to speak comfortably to small and large groups, and facilitate discussions about OIA and its programs/services
• Strong communication and listening skills
• Microsoft Office skills required; Mac computer experience and data entry experience highly preferred
• Knowledge of social media platforms including YouTube, Snapchat, LinkedIn, Facebook, Instagram, and Twitter
• Experience creating fliers or other promotional media preferred
• Ability to manage multiple projects and deadlines simultaneously
• Collaborative spirit and enthusiasm for working with other OIA staff and members of the UC Merced community
• Ability to work independently and complete assignments as assigned
• Creativity, reliability and a strong attention to detail
• Satisfactory completion of background check required prior to employment
INTERNATIONAL STUDENTS AND STUDY ABROAD RETURNEES ARE ENCOURAGED TO APPLY.
Please contact Julian Luke @ [email protected] for more information.
As of January 1, 2014, the University of California, Merced will be a smoke and tobacco free workplace. Information and the Smoke and Tobacco Free policy is available at http://smokefree.ucmerced.edu
The University of California at Merced is an affirmative action/equal opportunity employer with a strong institutional commitment to the achievement of diversity among its faculty, staff and students.
E-Verify: All employers who receive Federal contracts and grants are required to comply with E-Verify, an Internet-based system operated by the Department of Homeland Security(DHS) in partnership with the Social Security Administration (SSA). E-Verify electronically verifies employment eligibility by comparing information provided on the I-9 form to records in the DHS and SSA databases. Certain positions funded by federal contracts/subcontracts requires UC Merced to notify job applicants that an E-Verify check will be conducted and the successful candidate must pass the E-Verify check. | https://ucmerced.joinhandshake.com/jobs/7567700/share_preview |
In October and November, three different species of sea turtles have been reported in Monterey Bay and the Gulf of the Farallones National Marine Sanctuary waters.
It’s unusual for sea turtles to venture into temperate waters. Other species also visit when surface waters warm up, and this fall has been unusually warm with surface temperatures approaching 60 degrees F.
In October, an olive ridley sea turtle beached itself in Pacific Grove. Riding along the warm counter currents, these turtles are sometimes “cold-stunned” when the warmer currents disappear, stranding the turtles in colder bay waters. The turtle is currently being cared for at the Monterey Bay Aquarium until it can be returned to the wild.
The olive ridley sea turtle is considered the most abundant of the seven species, yet globally it has declined by more than 30% from historic levels. These turtles are considered endangered because of the loss of nesting sites around the world. The eastern Pacific turtles have been found to range from Baja California, Mexico, to Chile. The nests of the Pacific olive ridley are located around Costa Rica, Mexico, Nicaragua, and the Northern Indian Ocean; the breeding colony in Mexico was listed as endangered in the U.S. on July 28, 1978.
Early this month, a rare sighting of a green sea turtle was reported at the commercial wharf in Monterey. Local sea turtle experts positively identified a male green sea turtle from photos and videos. Green sea turtles are generally found south of San Diego, but have been sighted as far north as southern Alaska in the eastern Pacific. This turtle was outside of its normal range, and is a very rare sighting this far north and especially so close to the shore.
The green turtle was listed under the Endangered Species Act on July 28, 1978. The breeding populations in Florida and the Pacific coast of Mexico are listed as endangered. In 2004, the International Union for Conservation of Nature (IUCN) listed the green sea turtle as an endangered species, worldwide.
More common in our local waters are the giant eastern Pacific leatherback sea turtles. Leatherback sea turtles are the largest, deepest-diving of all sea turtle species and are found swimming in all oceans across the globe.
Leatherback sea turtles in the Pacific Ocean are in far greater danger of extinction than Atlantic Ocean populations due to greater commercial fishing, illegal poaching, ocean pollution, and nesting beach destruction in the Pacific. Leatherbacks in the Pacific can be divided into two primary populations: those that nest in the eastern Pacific and those that nest in the west.
Leatherbacks in the eastern Pacific population primarily nest in Central America and spend most of their lives offshore of nesting beaches or migrating to foraging areas. The largest foraging area is off the shore of Chile in the southeastern Pacific. The corridor above the Cocos Ridge of seamounts is a migration area of critical importance to many species between Cocos Island and Easter Island. However, some nesting occurs in Mexico and foraging leatherbacks from the eastern Pacific population may venture into California waters to feed.
In 1990, the California State Legislature banned all longline fishing in the Exclusive Economic Zone (EEZ) to prevent the deaths of leatherbacks sea turtles. Then in 2008, the California Legislature passed Assembly Joint Resolution No. 62 (Leno) for west coast sea turtle protection, supporting efforts to preserve and recover Pacific leatherback populations.
The Marin non-profit organization Sea Turtle Restoration Project (STRP) is working to protect these and all species of endangered sea turtles. Their work has resulted in the National Marine Fisheries Service establishing critical habitats for the leatherback within much of the California, Oregon, and Washington EEZ.
In an effort to enhance recovery prospects for the critically endangered Pacific leatherback sea turtle, they initiated a “citizen scientist” research program as part of its volunteer Leatherback Watch Program. The program tracks sightings of leatherbacks off the northern California coast, coordinating with recreational sailors, whale watchers, and scientists. This region is an essential feeding area for leatherbacks that swim across the entire Pacific Ocean from nesting beaches to reach the abundant jellyfish blooms that occur each summer in the California Current marine ecosystem. This year, the Leatherback Watch Program recorded over twenty sightings in our local waters.
We can also follow a live green sea turtle tagged by the researchers from the Sea Turtle Restoration Project. The scientists capture sea turtles off Cocos Island, an island 400 miles off the coast of Costa Rica. Brought aboard the vessel, the scientists weigh, measure, take blood and tissue for DNA analysis and equip the turtles with special satellite tags. These tags send a radio signal to a satellite and relay the position back to a computer.
Named Fillmore after cartoonist Jim Toomey’s comical creation, this real sea turtle’s wanderings can be followed on the STRP web site. In the past week, Fillmore the Green sea turtle swam north then east, making it back into the protected “No Take” area 12 nautical miles around Cocos Island National Park. Let’s hope Fillmore makes it east to the nesting beaches without encountering long lines or plastic bags. Data indicates that 80 percent of the debris on our beaches and shorelines comes from inland sources, traveling through our storm drains or creeks out to the beaches and oceans. When litter enters sea turtle feeding areas in the ocean, it can have deadly consequences for sea turtles that mistake the debris for food.
We can help protect these and other sea turtles by “Bagging the Plastics” and taking action to reduce the plastic waste polluting sea turtle feeding areas in the ocean. Fillmore and other sea turtles can use your help by sending a letter to the Costa Rican president asking for more protections at Cocos Island National Park.
Until the battery on the radio transmitter dies, we can follow Fillmore’s voyage and his exploits. Maybe one day Fillmore’s counterparts will visit our Sanctuary in greater numbers in a conservation success story. | https://ww2.kqed.org/quest/2011/11/29/various-voyages-of-sea-turtles/ |
The Georgia Fair Labor Platform is an informal alliance of independent trade unions, civil society organizations and activists working to improve labor conditions for workers in Georgia. We also act as a solidarity network for our members, offering mutual assistance and support on issues of concern.
Our key focus areas include:
Effective labor inspection
We monitor the work of the Labor Inspectorate through our Workplace Safety Monitor, and advocate for the Inspectorate to be strengthened, to better protect workers’ health and safety.
Living wages
We advocate for a long-overdue increase to Georgia’s minimum wage, and fight to make sure workers get the pay they deserve.
Stronger, better unions
We strengthen independent trade unions, by building their membership and making them more effective. We are also advocating for their inclusion in the tripartite committee.
Resources for workers
Elections 2020
Interested in what Georgia’s parliamentary candidates have planned for workers’ rights? Ahead of the 2020 Parliamentary elections, we sent questions to 12 major political parties, to learn more about their vision on key issues. We also studied their platforms to see how labor rights would be incorporated in their policies. We’ve compiled all of this info on our Elections 2020 page.
Workplace safety monitor
Workplace safety is a pressing issue in Georgia, with dozens of workers dying and suffering serious injuries every year. This tool tracks inspections by Georgia's Labor Inspectorate, a government body which monitors workplace safety. Our database includes inspection reports back to September 2019. Find out if your employer has been inspected.
Wage theft calculator
Is your employer stealing from you? Wage theft – defined as an employer’s failure to pay money legally owed to an employee – is rampant in Georgia, though many people have no idea what it is. Our interactive Wage Theft Calculator helps you determine whether you’ve been a victim, and lets you share your results on social media.
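The arithmetic behind such a check is straightforward. The Python sketch below is a guess at the general idea, not the Platform's actual tool: the weekly overtime threshold and the 1.5x premium are placeholder values, since the rules in Georgia's labour legislation may differ.

```python
def unpaid_wages(hours_worked, hourly_rate, amount_paid,
                 overtime_threshold=40, overtime_premium=1.5):
    """Estimate wages owed for one week: regular pay plus an overtime
    premium, minus what the employer actually paid (never below zero)."""
    regular_hours = min(hours_worked, overtime_threshold)
    overtime_hours = max(hours_worked - overtime_threshold, 0)
    owed = (regular_hours * hourly_rate
            + overtime_hours * hourly_rate * overtime_premium)
    return max(owed - amount_paid, 0.0)

# 50 hours at 5 GEL/hour but paid a flat 250 GEL: the overtime premium
# (10 h x 5 GEL x 0.5) was withheld.
print(unpaid_wages(hours_worked=50, hourly_rate=5.0, amount_paid=250.0))  # 25.0
```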
Latest news
Online launch event: Workplace Safety Monitor and Wage Theft Calculator
Please join the Georgia Fair Labor Platform for a Facebook live event on December 9, at 5:00 pm, as part of Human Rights Week. We will introduce the...
Fair Labor Platform: Covid-19 situation in the Tbilisi metro is worsening
The Georgia Fair Labor Platform would like to respond to the developments in the Tbilisi Metro – namely a significant increase in the number of...
Fair Labor Platform expresses solidarity with striking employees of ‘Georgian House’
The Fair Labor Platform stands in solidarity with the striking employees of Georgian House Ltd., and considers the dismissal of employees during the... | http://shroma.ge/en/home/ |
UN Challenges Slavery Conditions for Domestic Workers
Andreia Soares, a nanny, plays with the 10-month-old toddler she looks after in her boss's apartment in Sao Paulo, Brazil, May 17, 2011. (Lalo de Almeida / The New York Times)
With a lot of luck, we may finally take decisive action to guarantee decent treatment for the world's highly exploited housekeepers, maids, nannies, and other domestic workers. There are an estimated 100 million of them, working in more than 180 countries.
Their pay is generally at the poverty level, and very few have fringe benefits such as pensions and employer-paid health care. Few have the protection of unions or labor laws, and they're often at the mercy of unscrupulous labor contractors.
Almost half of them are not entitled to even one day off per week. About a third of the female workers are denied maternity leave.
The hope for improving the domestics' slavery-like conditions has arisen from action taken in Geneva this month at the annual meeting of the United Nations' International Labor Organization - the ILO.
Delegates representing unions, employers and governments voted 396 to 16 for what's called a "Convention on Domestic Workers." The nonbinding convention spells out how domestics should be treated in UN member countries - most importantly in the pace-setting United States.
In the US, as in most other countries, an estimated 80 percent of the domestics are women of color, subject to racial discrimination and physical and sexual abuse. In the United States, most of them are immigrants as well. They're easy targets for exploitation, especially since, as elsewhere, domestics mainly work in private, unregulated households, usually alone.
What's more, US domestics lack most of the protections of state and federal labor laws that are granted most US workers outside of agriculture. Most other non-agricultural workers at least have the right to unionize. But domestics don't even have that basic right.
The National Labor Relations Act specifically denies union rights to anyone "in the domestic service of any family or person." That's right. The Depression-era law that was designed to pull poverty-stricken workers out of poverty and build a middle class does indeed prohibit an entire group of exceptionally needy workers from taking a major step to improve their extremely poor working conditions. The word for that is "un-American."
That outrageous legal prohibition has its roots in racism. Pressures from southern states, which objected to granting union rights to the mainly black domestics, was the main reason domestics were excluded from the National Labor Relations Act.
Some domestics have nevertheless formed union-like organizations to seek better treatment. But they need the force of law behind them.
The ILO convention calls for guaranteeing domestic workers in the United States and everywhere else some of the key rights that unionized workers invariably have, among them, regular working hours, vacations, maternity leaves and Social Security benefits.
Domestics would be promised what amount to contracts with employers that would make clear just what they would be expected to do, for how long and for how much pay. Their working conditions would have to include time off of at least 24 hours a week.
Migrant workers would have to be provided with a written job offer of employment or a contract before crossing the border into another country to work.
It took several years for ILO representatives to adopt the domestic workers convention. It was finally adopted as a direct result of campaigning here and abroad by groups of activists from unions and other organizations. They will be working for the next few years to get as many nations as possible to implement the ILO convention with their help.
The effort in this country is being led by the National Domestic Workers Alliance, with major support from the AFL-CIO, which has arranged to have some domestic workers represent themselves in ILO meetings and voting.
Among other things, proponents hope to make it clear that "domestic workers are real workers, NOT powerless individuals who are expected to remain in quiet servitude and endure long hours without overtime pay, along with hazardous working conditions without access to health and safety protections."
Proponents also hope to end the "cultural relativity excuse that sleeping on a mattress in an unheated garage is better than he or she would get in their home country, or that the poor treatment of domestics is a tradition." The ILO convention says otherwise and workers in the United States and other countries where it is adopted "will be armed with the knowledge that there is an international standard that protects them."
Domestics already are granted labor rights in New York State, and California legislators are considering a proposal to bring them under that state's labor laws. But winning basic rights for the badly exploited domestic workers elsewhere will be very difficult. So, too, was convincing ILO representatives to take on the long-needed task of granting domestic workers union rights and, with them, the decent wages, hours, and working conditions that come with unionization.
Yes, winning the union rights for domestics worldwide will be very difficult. But we know it can be done. And certainly we know that it should be done.
The primary aim of this proposal is to improve end of life care for patients through the preparation of nurses in graduate programs. Recent national initiatives and major consensus documents have provided strong evidence of the need for improved professional education to impact end of life care. The National Cancer Policy Board and Institute of Medicine's report on improved end of life care, published in 2001, documented the considerable need for improved end of life care for the more than 550,000 individuals who will die of cancer this year in the United States.

This primary aim will be achieved through 4 workshops for faculty teaching in graduate nursing education programs. Each conference will be attended by 60 faculty for a total of 240 participants representing their 240 graduate programs, thus directly reaching 63% of the nation's graduate nursing education programs and later reaching the remaining 37% through the dissemination efforts. The project combines the efforts of the City of Hope National Medical Center, the American Association of Colleges of Nursing, and Northwestern Memorial Hospital/Lurie Cancer Center.

Specific aims include:
1) Adapt the existing ELNEC curriculum and teaching materials for use in graduate nursing, with emphasis on cancer care at the EOL.
2) Evaluate the impact of the curriculum on participants' knowledge and attitudes about EOL care.
3) Develop a network of course participants to share experiences in dissemination of the curriculum.
4) Evaluate the effectiveness of participants' implementation efforts within the graduate curricula.
5) Describe issues related to dissemination of EOL education within the curriculum of graduate colleges of nursing in terms of the characteristics of the course participants and type of curriculum.

This project, focused on graduate education, builds upon the extremely successful project in progress, the End of Life Nursing Education Consortium (ELNEC), supported by the Robert Wood Johnson Foundation, which has targeted undergraduate nursing programs and continuing education providers. The 9 content areas of the curriculum are nursing care at EOL, pain management, symptom management, ethical issues, culture, communication, grief/loss and bereavement, achieving quality care at EOL, and preparation and care at the time of death. This proposal includes extensive evaluation planned to monitor individual and institutional dissemination.
Why Are Some Chords More “Stable” Than Others?
Tonal music is music organized around a center, also known as the “tonic.” The first note of a major scale, for example, C in a C major scale, is the tonic. Every note in a scale, and the chords constructed from those notes, has a relationship to the tonic.
Chords are built from scales, so to better understand chords and their relationship to each other, it makes sense to understand how each tone of a scale functions. Let’s use the C major scale to investigate this.
To summarize: the 1st, 3rd and 5th degrees of the scale are relatively stable. The 2nd and 6th degrees are somewhat unstable. The 4th and 7th degrees are very unstable. As you’ll soon see, the stability of each scale tone is the reason that certain chords feel stable or grounded to us, while others feel unstable and in need of resolution. Understanding this is the basis of creating effective chord progressions, or analyzing a composed progression in order to improve your interpretation of it.
The tonic (I) chord is the most stable chord in the diatonic progression. Why? Because it’s constructed from scale degrees 1, 3 and 5 – which are all stable tones!
The subdominant (IV) chord is less stable than the tonic chord. This is because the bottom note of the chord (F) is the 4th degree of the major scale, which you’ll recall is quite unstable. However, the top note of the chord (C) is the 1st degree of the scale, which keeps the chord from being too unstable.
The dominant chord is the most unstable chord of the three. While it contains one fairly stable scale degree (the 5th), it also contains the 7th and 2nd degrees of the major scale, which are both unstable. If the dominant chord is a tetrad (a four-note chord, consisting of G-B-D-F), the interval established by the 7th degree of the scale (B) and 4th (F) (known as a “tritone”) particularly begs for resolution. This instability is why the dominant 7 chord is used in most cadences (brief harmonic progressions that suggest a conclusion).
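To make the relationship between scale degrees and chord stability concrete, here is a short Python sketch (hypothetical code, not part of the original post) that builds the I, IV, and V7 chords from the C major scale, labels each tone's stability, and measures the tritone inside the dominant 7 chord.

```python
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]
STABILITY = {1: "stable", 2: "somewhat unstable", 3: "stable",
             4: "very unstable", 5: "stable", 6: "somewhat unstable",
             7: "very unstable"}

def diatonic_chord(root_degree, size=3):
    """Stack diatonic thirds: every other scale tone above the root."""
    return [C_MAJOR[(root_degree - 1 + 2 * i) % 7] for i in range(size)]

def describe(label, notes):
    degrees = [C_MAJOR.index(n) + 1 for n in notes]
    tones = ", ".join(f"{n} ({d}: {STABILITY[d]})" for n, d in zip(notes, degrees))
    print(f"{label}: {tones}")

describe("I  (tonic)", diatonic_chord(1))        # C E G: all stable tones
describe("IV (subdominant)", diatonic_chord(4))  # F A C: mixed stability
describe("V7 (dominant)", diatonic_chord(5, 4))  # G B D F: two unstable tones

# The restless interval inside V7: B up to F spans six semitones, a tritone.
semitones = (CHROMATIC.index("F") - CHROMATIC.index("B")) % 12
print("B-F interval:", semitones, "semitones (tritone)")
```

Running it shows exactly the pattern described above: the tonic triad contains only stable degrees, the subdominant mixes stable and unstable ones, and the dominant 7 carries the 7th and 4th degrees whose tritone begs for resolution.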
In a future post, I’ll build on this understanding of chord stability to explain why certain chords work better than others at a given position in a chord progression.
| http://www.portlandpianolab.com/why-are-some-chords-more-stable-than-others/
Initial priority will be given to "further definition" of the onshore substation and onshore cable route as well as a review of options and scheduling of the marine export cable.
In early April, the UK High Court ruled in Warwick Energy's favour, requiring that UK central government authorities revisit their earlier decision to bar the company from proceeding with the construction of an onshore substation.
Dudgeon offshore wind farm is a Round 2 UK offshore wind farm, planned for an area off the Norfolk coast, some 32km from the town of Cromer.
Regulatory consent for the offshore elements of the project is expected shortly. The wind farm is currently scheduled to begin power generation by late 2015, according to ABB. | https://www.windpowermonthly.com/article/1131621/abb-set-provide-electrical-works-dudgeon-offshore-project |
Taking a Glance at Ethnic Style
Ethnic style is the most loosely defined and versatile style we've ever met. It's bold, bright, relaxed, natural, free, and romantic. In general, this style is a reflection of the cultural traditions of different countries. There are lots of cultures around the world, and every country has its own traditional clothes. Ethnic clothes are made of natural fabrics, which is why wearing them is so popular during summertime. Of course, if you're going to dress fully in traditional African or Indian costume, it's going to be quite an odd look, especially in the city center. The best way is to learn the cultural aspects of the foreign country and create an outfit that features its typical colors, fabrics, and accessories.
What is the Truth of your story?
OK, you’ve outlined your story and you’re getting ready to write your first draft. But something doesn’t feel right. You tell yourself that you’re not ready, that you haven’t done enough research, that the story isn’t thoroughly outlined, or that you are somehow ill-equipped to complete this endeavor and that you are setting out to do something that is impossible.
You’re not wrong.
It is impossible to write a story from your pre-frontal cortex. Your conscious mind is not going to get you to the end of your story. It will tell you all sorts of things that are designed to protect you from pain.
We can listen to resistance all we want, but we don’t have to believe it. Resistance is out to stop us from telling our truth on the page, and there is only one way to battle it. We must be willing to shed our idea of the story for the truth of the story.
Have you ever had the experience of telling a story for years, some episode from your life, and then, one day, realizing that the story was not entirely accurate? The facts may be correct; however, the meaning you made out of them was built on a series of assumptions based on your perception of the world. As your perception shifted, you suddenly saw the entire story through a new lens.
Give yourself permission
As you work on your outline, do you see how your protagonist has begun the story with a false belief? Be curious about what this is and how it gets reframed at the end of the story. Give yourself permission to question this belief. Notice how invested you are in it. To some degree, you have built a life based on this belief, and, therefore, you need it to be true. To allow it to collapse would mean being in unfamiliar territory for a while.
But that is OK. It won’t kill you. If you have written a story before, you may be acquainted with the experience of walking out into the world upon completion with the sensation that everything is new and different. Being a storyteller is humbling work. It takes continued willingness to drop our assumptions of the world in order to find a greater freedom.
Learn more about marrying the wildness of your imagination to the rigor of structure in The 90-Day Novel, The 90-Day Memoir, or The 90-Day Screenplay workshops. | https://lawriterslab.com/permission-to-write-the-truth/ |
Do you know what it means to work on a remote team?
We’ve seen too many collections of people described as teams. A team has a common single goal: to solve a problem for the business or users. To accomplish that goal, team members depend on each other and collaborate to finish the work. An agile team has all the skills and capabilities it needs to produce the work for that goal.
Creating a great distributed team is more than finding people with the right technical skills. People also need collaboration skills, and that depends on their personalities and how they work. The team needs time to understand each other’s preferences so they can best collaborate to deliver value.
Collocated team members can see when team members are available, and they can see or share the equipment each team member uses or has access to.
Distributed team members, on the other hand, need to be explicit about their availability and equipment. Some team members may have office space. Some may work in a shared family space. Some team members might prefer coworking spaces. Team members should be aware of what advantages and obstacles these different working spaces provide to themselves and the entire team.
Consider taking time for the team to discover their preferences, skills, and abilities. When the team creates their working agreements, everyone can learn how they need to work together.
An agile coach or another experienced facilitator may help the team quickly explore these preferences and decide what might work best for everyone. Some examples might be:
- Based on our time zones, do we have core work hours? If so, what are they?
- What should someone do if they have a family issue or emergency that takes them away from these core collaboration hours?
- What is the best way for the team to share information synchronously (chat, meetings, or something else) versus asynchronously (email, wiki, or something else)?
- What types of regular meetings (e.g., planning, review, standups) should we put on the team calendar, and what are some as-needed meetings that we might anticipate?
- How will we review each others’ work? Will we primarily do this together and synchronously? Will it be a traditional review, or would the team prefer pairing or mobbing? If we prefer asynchronous review approaches (such as using Github), what would be an acceptable review size (many teams don’t care for three hundred lines of code to review at once)? How do we resolve an issue when we see asynchronous “conflict” in the review?
Successful agile teams must have all the skills and capabilities they need, understand how to collaborate, and know what they’re supposed to work on. If a team doesn’t have the necessary expertise, they may not deliver the desired product at the right time.
But What about Tools?
Tools serve the team, not the other way around.
Once your distributed team understands how they want to work, consider the fewest tools possible. The first tool a team might use is a physical board—specifically, a kanban board.
Too many teams select a tool with a default board rather than visualizing their current workflow. We recommend you start with a board that reflects your situation as it is now, with as many columns as you need.
Consider using index cards on a cork board to start. Take a picture of the board and post it somewhere in the team workspace. Especially if you only move a card once a day, this is sufficient until you make your stories smaller and move more cards more often. The benefit of a physical board is that you have total freedom to create the board your team needs, not a board someone (or a tool) imposed on you.
If you don’t like a physical board, consider a shared spreadsheet, such as Google Sheets, where you can easily change the columns to reflect your reality.
Once the team feels they have a good board design, then they might consider tools that can best support their workflow via an electronic kanban board.
Collocated team members can turn around and talk to each other freely. That’s a “backchannel” conversation. It might not be the primary discussion tool for the team.
Distributed teams also need a dedicated team backchannel—a way to conduct informal team discussions on an as-needed basis. We like a persistent text-based tool for the backchannel. Your team might need other audio and visual tools, such as a video meeting tool. Make sure everyone has access and can start a meeting when needed.
And, of course, your team needs software development and testing tools. Every team needs those, so we won’t address them here.
Build Your Successful Distributed Team
Jane worked across the organization—with Dave’s blessing—to reconfigure her team and several others. She led the newly configured team through their working agreements and project charter. She suggested they start with a paper board with an always-on camera so every team member could see the state of all the work. Then they could see if the team needed any new states.
For the first couple of weeks, the team modified the board almost every other day. By the end of the second week, they were able to use WIP (work in progress) limits and manage how they flowed work through their team. Their board changes prompted them to consider other tools they might need to improve their collaboration and meet their goals.
Tools become the least important decision for the team. First, make sure you have enough people to cover the skills and capabilities you need on your team. Next, the team either articulates their shared goal or works with someone such as a product owner to define it. When the team learns about and respects each other’s work preferences, the team can function and thrive. Then they can determine the workflow that allows them to collaborate toward the shared goal.
Focus your tool selection on how well these tools support the team, their workflow, and their preferences. Great distributed agile teams might need fewer and different tools than you suspect. Make sure you have a collaborative team who can see how to work together to solve the customer’s problem. Then, offer them the tools they need.
This is an edited excerpt from a post at Agile Connection by Mark Kilby and Johanna Rothman.
Mark Kilby is an Agile Coach at Sonatype. His interests include organizational change, breakthrough methods, collaboration techniques and technologies for distributed and co-located teams.
Johanna Rothman, known as the “Pragmatic Manager,” provides frank advice for your tough problems. She helps leaders and teams see problems and resolve risks and manage their product development. Read her other articles on her site, jrothman.com. She also writes a personal blog, createadaptablelife.com. | https://www.alldaydevops.com/blog/remote-work-isnt-all-about-the-tools |
Medical professionals have some of the hardest and most important jobs. Regardless of the position you desire within the medical field, the demands and stakes are often very high and require a very particular set of skills. The following are the top skills required to excel in your medical career.
Communication Skills
Since medical personnel interact with patients all day long, the ability to communicate effectively is extremely important. Depending on the position that you take and the severity of the illnesses or injuries your patients face, you may need to discuss complex and difficult concepts with them. It is important that your patients understand not only your diagnostic conclusions but also their options for treatments if necessary.
Listening Skills
In addition to making sure the patient understands the information that you share with them, you also want to be able to understand the patient. Particularly when you see many patients a day, it can be hard to stop and focus on what they are saying. However, you rely on your patients to provide history and other relevant information that can only be shared if they choose to do so. Knowing how to earn your patients’ trust, and listening carefully to what they say, will make you a more respected and effective medical professional.
Critical Thinking Skills
All medical personnel are required to make difficult decisions every day, some of which are life or death scenarios. The ability to evaluate, assess, and synthesize a variety of data and reach a conclusion is a skill that is crucial in the medical field. Some patients require decisions about their care to be made instantaneously, while other patients may need their case reconsidered from new angles multiple times before reaching a conclusion. The ability to think critically is crucial for effective care in all areas of medicine.
Resilience
As a medical professional, you will witness disease, injury, and even death. You will also work long hours in high-pressure situations with few breaks or opportunities to regroup until your shift is over. The ability to recover quickly from these stressors is key to your mental health and success as a medical professional.
While every medical professional will have a unique background and training, all will be required to possess many important skills in order to be successful in their chosen career. These top skills are necessary for anyone looking to advance in the medical field and become a more effective practitioner. If you want to start developing these skills now, there’s no reason to wait. Take steps to get an academic coach that can help you build the confidence and skills to get into the medical school and career of your choice. | https://www.studentcoachingservices.com/single-post/2018/08/23/Top-Skills-You-Will-Need-To-Get-Ahead-In-A-Medical-Career |
Depression is a complex disease. Symptoms include irritability, feelings of worthlessness, and loss of energy, to name just a few. (Some individuals even report physical changes such as gastrointestinal problems and chronic joint pain.)
The signs and symptoms of depression vary and the therapies designed to manage the condition are diverse as well. And, patients react to those treatments in different ways. Given that, how can clinicians pinpoint which therapy (or combination of therapies) for depression will work the best?
Answering this question underlies the basis of the research directed by Conor Liston, a psychiatrist at Weill Cornell Medical College. Liston and his team are currently working to classify the various subtypes of depression using MRI scans of the brain and then identify the most effective treatments for each subtype.
The scientists first scanned the brains of 1,118 research participants. Among those, 458 had already received a clinical diagnosis of depression. Specifically, they wanted to better understand the level of activity in the medial prefrontal cortex and other areas of the brain. After reviewing the MRIs for various patterns of brain activity, Liston and his team were able to identify four distinct depression subtypes.
And their results were remarkable. Among those with depression, they found that each subtype responded very differently to various therapies and medications. Based on the research findings, the scientists could predict how to best treat the patient for depression just by reviewing an image of their brain activity. In general, patients that exhibited a lot of activity in their medial prefrontal cortex responded more quickly to cognitive behavioral therapy in contrast to individuals with lower levels of activity in that region of the brain.
“The type of brain that responds to psychotherapy is where there’s a strong pattern of connectivity between the frontal areas of the brain, which are involved more in thinking, talking, and problem-solving, etc., with other portions of the brain. Whereas people who have low connectivity — the opposite pattern — respond to the medication,” says psychologist W. Edward Craighead, one of the authors of this study.
Individuals with behavioral health issues, such as depression, often struggle with addiction. That’s why we specialize in offering a variety of mental health services for Christians. To help address a complex dual-diagnosis, we can help with medication management, group and individual therapy and faith-based support. If you are dealing with behavioral health and addiction issues, please call (877) 310-9545 to explore your addiction treatment options at Christian Rehab Network.
It’s a staggering statistic. 17.5 million adults currently suffer from a serious mental illness according to the Substance Abuse and Mental Health Services Administration (SAMHSA). And, among those with behavioral health issues, four million adults also have a co-occurring addiction to drugs or alcohol.
Given the fact that so many individuals need help for both substance abuse issues and mental health conditions like depression and anxiety, more addiction recovery specialists recommend treatments, like talk therapy, that can help patients address both conditions concurrently.
If you are seeking help for a co-occurring condition, your addiction recovery team may recommend that you participate in psychotherapy (a.k.a. talk therapy) to help you process your feelings and find healthier alternatives to destructive behaviors.
But, if you have reservations about participating in talk therapy, you shouldn’t. By taking the time to learn more about this therapeutic option, you can ease your fears.
FACT: One of the biggest misconceptions about talk therapy is that it only focuses on what is wrong. In contrast, talk therapy is all about helping the client seek out and implement solutions to their problems.
FACT: In reality, there are a wide range of professionals you can see to participate in therapy sessions including psychiatrists, psychologists, social workers and counselors.
FACT: Over the past few decades, thousands of people have benefitted from going to therapy to help them work through challenging periods of their lives including divorce, starting a new career or the loss of a loved one. In fact, when individuals seek help for addiction or behavioral health issues, it is a positive sign that the individual is strong enough to prioritize their well-being.
Are you coping with an addiction to drugs or alcohol and suffering from a behavioral health issue like depression, anxiety or post-traumatic stress disorder? At Christian Rehab Network, we can help you get the comprehensive help you need to make you whole again – while also strengthening your relationship with Christ. Learn more about our mental health services for Christians by calling (877) 310-9545.
Contact us today: | https://christianrehabnetwork.com/tag/co-occurring-condition/ |
Purpose:
Provide comprehensive occupational health services through regulatory compliance, proactive intervention and prevention. Support empowerment of our work force to achieve and sustain their best level of health and wellness.
Scope:
The OHN will lead the implementation and delivery of all appropriate occupational health programs and services that support the K-C health and wellness strategy. Primary responsibilities for this position include injury/illness assessment, triage, treatment, and continuity of care. In addition, conducting health surveillance programs, providing health education, ergonomic assessments, case management and facilitating return to work are all essential components of this position. The OHN collaborates with human resources, legal and safety to ensure compliance with state and federal regulatory programs such as OSHA, ADA, HIPAA and FMLA.
Principal Accountabilities:
Basic Qualifications: | https://careers.njda.org/jobs/11613327/occupational-health-nurse-specialist |
The Advisor will work closely with UCSF Recency Program Managers, MOH, and CDC-Zambia. Duties will include: 1) overseeing daily on-site implementation of all project activities, including lab and facility engagement, data collection, and oversight of in-country staff at the subcontractor; 2) providing technical, administrative, and logistical support, including pre-testing and refining tools, coordinating trainings, data collection, data analysis, study documentation, and ensuring protocol adherence before, during, and after the survey; 3) maintaining constant communication with the Recency Program Managers, providing them with regular updates, and implementing their feedback.
Project Manager
Information - Req 53553BR
The purpose of the No One Waits (NOW) study is to assess the feasibility, acceptability and effectiveness of accelerated initiation of commercially available DAA therapy targeting socially marginalized communities (e.g., medically underserved, homeless, people actively injecting drugs). The study will be carried out at two community sites that perform HCV testing: (a) a fixed community site and (b) a community mobile site via clinical research van. Participants (n=150) who test anti-HCV positive and HCV RNA positive (chronic infection) are invited to enroll into the NOW Study and begin HCV treatment at the point of diagnosis. All evaluation, medication dissemination, and follow-up care will take place at the project site. We will estimate the effect of on-site POD treatment on (1) time from HCV testing to treatment initiation, (2) completing treatment, and (3) attaining SVR-12, overall and by study site. A secondary product will be a lessons-learned guide of recommendations for implementing a POD on-site test-and-treat program, for dissemination beyond San Francisco.
GSI Program Assistant
Information - Req 53438BR
The Program Assistant, Global Strategic Information (GSI), applies professional concepts to conduct analytical studies or projects of moderate scope and complexity to address a variety of policy, research and procedural issues. Fully analyzes issues and problems, gathers data and information, finds and evaluates alternatives and makes sound recommendations.
Local Communications Officer - AIDS 2020
Information
The incumbent will be responsible for serving as the primary communications lead for AIDS 2020, including strategy guidance, media, marketing, and digital content development. This is an incredible opportunity to manage a large communications portfolio, working across sectors for an international event. The successful candidate will be able to work in a fast-paced environment, be a strong writer, juggle multiple activities, and be politically savvy.
Contact: Larkin Callaghan
Surveillance Analyst
Information - Req 52896BR
The Surveillance Analyst has primary responsibility for data management of the tuberculosis case registry and other surveillance data sources and serves as the subject matter expert for in-house data management for the TB Control Branch. The Surveillance Analyst writes, maintains and runs programs to monitor TB reporting and create reports, and manages the electronic case reporting process using proprietary software. The ideal candidate will have experience in SAS or related software. The incumbent interacts with colleagues at the national and local levels to ensure timely and accurate TB case reporting. | https://globalhealthsciences.ucsf.edu/about-us/careers-global-health
Candide, by Voltaire, is the story of a young, naïve, illegitimate son of a nobleman in Westphalia, Germany. Candide follows the optimistic theories of his tutor, Dr. Pangloss, whose mantra is that “all is for the best in the best possible worlds” (Taylor). Throughout the story Candide suffers a series of horrific adventures of war, injustice, cruelty, slavery, and intolerance that challenge Pangloss’s optimistic teachings. Voltaire wrote Candide from his country estate, Ferney, outside Geneva. He wrote his characters as symbolic figures: Candide represents optimism; Cunégonde, the search for love; Pangloss, the pointlessness of metaphysics; Cacambo, friendship and loyalty; and Martin, negativity (Taylor). Each character represents an abstract idea of the Enlightenment period. Voltaire uses the rhetorical device of irony, saying one thing but meaning another, and absurd suffering in the novel to bring readers to recognize the evil in the world. Voltaire was upset that the Enlightenment era did not live up to his expectations. Voltaire correctly depicts the culture of the eighteenth century throughout his satirical novel: he addresses the tragic earthquake in Lisbon that led to the horrific auto-da-fé, his thoughts on Gottfried Wilhelm von Leibniz’s absurd philosophy, and his feelings about money through the legend of El Dorado. Voltaire was born François-Marie Arouet on November 21, 1694, in Paris, France. He was the youngest of five children but had only three surviving sisters. When he was seven years old, his mother passed away. After his mother’s passing, Voltaire grew closer to his godfather, who was known to be a free-thinker. Voltaire received a classical education at the Collège Louis-le-Grand, a Jesuit secondary school in Paris, where he began showing promise as a writer (Voltaire). Voltaire established himself as one of the leading writers of the Enlightenment era. His well-known works include the tragic play Zaïre, the historical study The Age of Louis XIV, and the satirical novel Candide. Voltaire embraced Enlightenment philosophers such as Isaac Newton, John Locke, and Francis Bacon; he found inspiration in their ideals of a free and liberal society, along with freedom of religion. In keeping with the Enlightenment thinkers of the era, he was a deist, holding the religious belief that God created the universe and established reasonable moral and natural laws but does not intervene in human affairs through miracles or supernatural revelation (Voltaire). Voltaire was often at odds with the French authorities over his politically and religiously charged works. He was twice imprisoned and spent many years in exile. He was finally able to return to Paris, where he died in 1778 (Voltaire).
In 1755 Lisbon rivaled Florence, Rome and Venice in its wealth. New explorations opened routes to India, allowing Lisbon to become one of Europe’s richest cities. Eighteenth-century prints show Lisbon as a city of wealth, a skyline full of towers and palaces (Hagman). However, all of that quickly changed on All Saints’ Day, November 1, 1755, when an earthquake felt from Ireland to Morocco hit Lisbon. The earthquake hit a 9 on the Richter scale. Then at 11 a.m. three tidal waves between 15 and 20 feet crashed into the city, hurling everything and everyone in their path (Hagman). Finally, the destruction was completed with a fire: “Whirlwinds of fire and ashes covered the streets and public places; houses fell, roofs were flung upon the pavements, and the pavements were scattered. Thirty thousand inhabitants of all ages and sexes were crushed under the ruins (Voltaire p).” After the earthquake, Europeans began to ask whether it had been a natural occurrence or an act of divine wrath (Hagman). With the fear that the earthquake was divine wrath, leaders wanted all heretics gone. Following the earthquake, Lisbon became famed for its auto-da-fé, meaning “Act of Faith” (Graizbord). Voltaire writes, “After the earthquake had destroyed three-fourths of Lisbon, the sages of that country could think of no means more effectual to prevent utter ruin than to give the people a beautiful auto-da-fé; for it had been decided by the University of Coimbra that the burning of a few people by a slow fire, and with great ceremony, is an infallible secret to hinder the earth from quaking (Voltaire p).” Since the Age of Enlightenment centered on reason, people believed that there must be a cause behind the earthquake. The auto-da-fé took place on June 20, 1756, in the belief that the earthquake had been caused not by nature but by some being, and that the Inquisition’s purging of heretics would prevent another earthquake from occurring (Graizbord). In the novel, Pangloss is singled out as a cause of the earthquake and is questioned for his optimistic thoughts after it hits, saying, “all that is is for the best. If there is a volcano at Lisbon it cannot be elsewhere. It is impossible that things should be other than they are; for everything is right (Voltaire p).” The Grand Inquisitor questions Pangloss on his philosophy, asking him, “you do not then believe in liberty (Voltaire p)?” The Inquisitor believed Pangloss to be a heretic because of his optimistic philosophy, and Pangloss would later fall victim to the auto-da-fé. Candide was published in 1759, four years after the terrible earthquake and three years after the auto-da-fé. By this time, the philosophy of optimism no longer seemed valid to Voltaire. Voltaire uses Pangloss, the philosopher of optimism, as a way to show readers that optimism had failed. The auto-da-fé is a man-made disaster while the earthquake is a natural one, showing that there is nothing moral about either and that all is clearly not well in the world.
Voltaire writes Candide’s mentor and philosophical advisor, Pangloss, based on the German philosopher Gottfried Wilhelm von Leibniz. Leibniz, a German mathematician and philosopher, was born in the city of Leipzig on July 1, 1646 (Mercer). In 1652, when Leibniz was six years old, his father, Friedrich Leibniz, passed away. Following his father’s death, the young Leibniz taught himself Latin and read poetry, history, theology, and some Aristotelian philosophy in his father’s library (Mercer). Later in life, he made important contributions in numerous fields including mathematics, physics, logic, ethics, and theology. He also worked as a diplomat, an engineer, an attorney, and a political advisor. After his death, his reputation as a philosopher depended on texts that were unpublished, including some texts that were never intended for publication (Mercer). Unlike many philosophers of the Enlightenment period, he composed no complete exposition of his philosophical theories. Instead he wrote more pages of philosophy than most scholars can read in a lifetime, and these writings were unorganized, unedited, and undated (Mercer). In Candide Pangloss teaches the philosophical idea that “everything is for the best in this best of all possible worlds (Voltaire p).” This idea is a simplified version of Leibniz’s “best possible world” philosophy. Leibniz’s “best possible world” philosophy is his claim that the actual world is the best of all possible worlds, a claim that represents his attempt to solve the problem of evil. Christia Mercer, the author of Leibniz’s biography entry, explains Leibniz’s philosophy by stating, “On Leibniz’s account, God causes evil, for God creates the best series of things, including many things that are, when considered in themselves, bad or sinful” (Mercer). To Leibniz, the existence of any evil in the world would have to be a sign that God is either not entirely good or not all-powerful, and the idea of an imperfect God is absurd (Mercer). Dr. Pangloss’s philosophy parodies the beliefs of Leibniz, that the world was perfect and that all evil in it was simply a means to a greater good. Voltaire does not accept that a perfect God must exist, and he subjects Leibniz’s idea that the world must be completely good to merciless mockery throughout the novel (Mercer). An example of this mockery occurs when Candide, Pangloss, and the Anabaptist James travel to Lisbon and are suddenly struck by a terrible storm. Several men, including the charitable Anabaptist James, were thrown overboard by the storm and drowned. Candide begins to jump in to save the Anabaptist James, when Pangloss stops him and “demonstrated to him that the Bay of Lisbon had been made on purpose for the Anabaptist to be drowned (Voltaire p).” Everyone aboard the ship perished except Candide, Pangloss and the villain who had drowned the Anabaptist James. Voltaire is emphasizing the ineffectiveness of ordinary metaphysics when Pangloss is confronted with the problem of evil. The drowning of the honorable Anabaptist James underscores Voltaire’s view that people live in a world without justice, a world in which villains can expect to prosper while righteous souls like the Anabaptist James should anticipate nothing for their good deeds.
The legend of El Dorado was about a wealthy city of gold and the king who ruled over it. The story began after the first Spanish explorers landed in Central and South America. The Spanish explorers aimed to conquer the Americas to find new sources of wealth. The Spanish called the king El Dorado—The Gilded One—because his body was gilded, or covered in gold (El Dorado). The legend tells the tale of a rich king who plastered his body with gold dust and then dived into a sacred lake to wash it off. The king would later toss gold into the lake as an offering to the gods. The tale of the wealthy king spread, and the city came to be known as El Dorado. The meaning of El Dorado would eventually change to describe any mythical region that contained great riches (El Dorado). The early myth placed the city of El Dorado near Lake Guatavita, a lake formed in a volcanic crater not far from modern Bogotá, Colombia. The story was based on the Muisca king who covered himself with gold dust, boarded a raft on Lake Guatavita and made offerings to the gods. Many explorers, such as the Spaniards and Germans, searched the region in 1538 but failed to find El Dorado. The explorers resorted to extreme measures to find El Dorado, including an attempt to drain the lake in an effort to locate gold (El Dorado). The locals at the time did not appreciate explorers tearing through their lands, so they claimed that the legendary city was somewhere far away, in the hope that the Europeans would search elsewhere and leave them in peace. Explorers spent years in South America in hope of locating the golden city, and several bloody expeditions were launched to find this imaginary kingdom (El Dorado). In Candide, Cacambo and Candide spend several days in a canoe before they crash into rocks and land in a large plain bounded by inaccessible mountains. Cacambo, Candide and the travelers who journeyed with them walk to the village they saw from the river and are amused by the large round pieces of yellow, red, and green on the ground, which cast a soft glow. A few of the travelers pick up these large pieces and quickly realize that the ground they walk on is covered with gold, emeralds, and rubies. Candide later meets the Master of the Horse to the King, who explains that they are in Peru and that the kingdom they now inhabit is the ancient country of the Incas. The Incas wanted to conquer other parts of the world, and that is why their empire had been ended by the Spanish. Because of this, the princes of the new kingdom were never to leave, in the hope of keeping their riches from prying Europeans. The Master of the Horse to the King says, “The Spaniards have had a confused notion of this country and have called it El Dorado; and an Englishman, whose name was Sir Walter Raleigh, came very near it about a hundred years ago (Voltaire p).” Voltaire writes about Candide accidentally landing in the city known to the Europeans as El Dorado, while explorers such as Sir Walter Raleigh spent years trying to find the golden city. Throughout Candide’s journey in El Dorado, Voltaire reminds his readers of the destruction that gold fever has brought. He compares the destruction caused by gold fever to the destruction of humanity during the Enlightenment era: people care too much about money and status, and if they could forget about rank, the world would have peace, instead of believing that gold leads to spiritual perfection.
Voltaire’s ideal world is a place where gold has no value. The people of El Dorado are extraordinarily wealthy, but they do not cling to their wealth and are happy to share it with newcomers.
Voltaire was upset that the Enlightenment era did not live up to his expectations, and he uses Candide, a philosophical and religious parody, to express his thoughts about the earthquake in Lisbon and how it led to the auto-da-fé, his view of Gottfried Wilhelm von Leibniz’s absurd “best possible world” philosophy, and his feelings about money through the legend of El Dorado. Voltaire correctly depicts the culture of the eighteenth century. In Candide, people believe that the earthquake must have struck for a cause, whether God or heretics. He correctly depicts the eighteenth century by using the auto-da-fé to reflect how people truly believed that nothing happens without a cause. Voltaire models Pangloss on Gottfried Wilhelm von Leibniz to expose the absurd philosophies of the Enlightenment period. There were many ideas and beliefs circulating during the Enlightenment period, and Voltaire picked what he thought to be the most absurd to represent the philosophies of the time. Finally, the legend of El Dorado correctly depicts how the people of the Enlightenment period were corrupted by the desire for money, believing that it would lead them to spiritual perfection. In the end, Candide’s rejection of Pangloss’s optimistic teachings mirrors Voltaire’s own rejection of the Enlightenment era as he moved on with his life.
“El Dorado.” UXL Encyclopedia of World Mythology, vol. 2, UXL, 2009, pp. 344-347. World History in Context, http://link.galegroup.com.liboc.tctc.edu:2048/apps/doc/CX3230900109/WHIC?u=tricotec_main&sid=WHIC&xid=c07832ac. Accessed 25 Nov. 2018.
Graizbord, David. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&db=khh&AN=23420886&site=hrc-live. Accessed 13 Nov. 2018.
Hagman, Harvey. “Recalling the great Lisbon earthquake.” World and I, Jan. 2010. General OneFile, http://link.galegroup.com.liboc.tctc.edu:2048/apps/doc/A216960955/GPS?u=tricotec_main&sid=GPS&xid=2dd98ba3. Accessed 12 Nov. 2019.
Mercer, Christia. “Gottfried Wilhelm Leibniz.” Encyclopedia of Philosophy, Macmillan, 2006. Biography In Context, http://link.galegroup.com.liboc.tctc.edu:2048/apps/doc/K3446801097/BIC?u=tricotec_main&sid=BIC&xid=a2dbbbbd. Accessed 19 Nov. 2018.
Taylor, Karen L. “Candide.” The Facts On File Companion to the French Novel, Facts On File, 2007. Bloom’s Literature, online.infobase.com/Auth/Index?aid=101308;itemid=WE54;articleId=24116. Accessed 16 Nov. 2018.
“Voltaire.” Encyclopedia of World Biography, Gale, 1998. Biography In Context, http://link.galegroup.com/apps/doc/K1631006764/BIC?u=tricotec_main;sid=BIC;xid=22c9125f. Accessed 13 Nov. 2018. | https://pickist.net/gwen-conway-english-103-mr/ |
Spirent’s GSS7000 Series of Multi-GNSS, Multi-frequency Simulators. Photo: Spirent.
Spirent Communications plc, a leader in Galileo, GPS and other global navigation satellite systems (GNSS) testing solutions, this week announced a partnership with Cranfield University, a global leader for education and transformational research in technology and management. The two organizations will be collaborating to develop connected autonomous vehicle (CAV) technologies.
The aim of the research project is to improve positioning and timing technologies to enable better performance of unmanned vehicles, such as autonomous aircraft or connected cars. Spirent engineers are working with Cranfield’s postgraduate researchers to develop new methods for synchronization and location testing, using Spirent’s advanced test systems. The project will use Spirent’s GSS7000 Series of Multi-GNSS, Multi-frequency Simulators.
Mark Holbrow, head of engineering at Spirent’s Positioning Business Unit, said “Location awareness for autonomous vehicles is of major importance, and is one of the most challenging applications in commercial GNSS development. We will be working with Cranfield to create new test and development tools that will provide the opportunity for improved system performance, accuracy and resilience”.
Spirent is supporting several Individual Research Projects (IRP) in Cranfield’s Autonomous Vehicle Dynamics and Control (AVDC) MSc program as an industry partner. The projects already identified are “GPS-Based Clock Synchronization for an Airborne Distributed Sensor Network” and “In-car mapping and receiver integration testing for autonomous vehicles”.
The GSS7000 Series offers simultaneous coherent GPS, GLONASS, BeiDou, Galileo, QZSS and SBAS signals from a single test scenario. Up to 256 channels provide ample signals for a wide range of development, integration and verification tasks, according to the company. | https://insidegnss.com/spirent-partners-with-cranfield-university-on-autonomous-vehicle-technology/ |
Use the Law of Cosines to solve oblique triangles.
Solve applied problems using the Law of Cosines.
Use Heron’s formula to find the area of a triangle.
Suppose a boat leaves port, travels 10 miles, turns 20 degrees, and travels another 8 miles, as shown in the accompanying figure. How far from port is the boat?
Unfortunately, while the Law of Sines enables us to address many non-right triangle cases, it does not help us with triangles where the known angle is between two known sides, a SAS (side-angle-side) triangle, or when all three sides are known, but no angles are known, a SSS (side-side-side) triangle. In this section, we will investigate another tool for solving oblique triangles described by these last two cases.
The tool we need to solve the problem of the boat’s distance from the port is the Law of Cosines, which defines the relationship among angle measurements and side lengths in oblique triangles. Three formulas make up the Law of Cosines. At first glance, the formulas may appear complicated because they include many variables. However, once the pattern is understood, the Law of Cosines is easier to work with than most formulas at this mathematical level.
Understanding how the Law of Cosines is derived will be helpful in using the formulas. The derivation begins with the Generalized Pythagorean Theorem, which is an extension of the Pythagorean Theorem to non-right triangles. Here is how it works: An arbitrary non-right triangle ABC is placed in the coordinate plane with vertex A at the origin, side c drawn along the x-axis, and vertex C located at some point (x, y) in the plane, as illustrated in the accompanying figure. Generally, triangles exist anywhere in the plane, but for this explanation we will place the triangle as noted.
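The algebra of the derivation, reconstructed here since the original equations did not survive extraction, follows directly from this setup. Dropping a perpendicular from vertex C to the x-axis gives x = b cos α and y = b sin α, and applying the Pythagorean Theorem to the right triangle with legs (c − x) and y yields

\[
a^2 = (c - x)^2 + y^2 = (c - b\cos\alpha)^2 + (b\sin\alpha)^2 = b^2 + c^2 - 2bc\cos\alpha .
\]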
The formula derived is one of the three equations of the Law of Cosines. The other equations are found in a similar fashion.
Keep in mind that it is always helpful to sketch the triangle when solving for angles or sides. In a real-world scenario, try to draw a diagram of the situation. As more information emerges, the diagram may have to be altered. Make those alterations to the diagram and, in the end, the problem will be easier to solve.
The Law of Cosines states that the square of any side of a triangle is equal to the sum of the squares of the other two sides minus twice the product of the other two sides and the cosine of the included angle. For triangles labeled as in the figure, with angles α, β, and γ, and opposite corresponding sides a, b, and c, respectively, the Law of Cosines is given as three equations:

\[
a^2 = b^2 + c^2 - 2bc\cos\alpha, \qquad b^2 = a^2 + c^2 - 2ac\cos\beta, \qquad c^2 = a^2 + b^2 - 2ab\cos\gamma .
\]
To solve for a missing side measurement, the corresponding opposite angle measure is needed.
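As a worked example, the boat problem that opened this section can now be solved. The two legs of 10 miles and 8 miles meet at an interior angle of 180° − 20° = 160°, which lies opposite the unknown distance d, so

\[
d^2 = 10^2 + 8^2 - 2(10)(8)\cos 160^\circ \approx 164 + 150.4 \approx 314.4,
\]

giving d ≈ 17.7 miles from port.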
When solving for an angle, the corresponding opposite side measure is needed. We can use another version of the Law of Cosines to solve for an angle, obtained by rearranging each equation to isolate the cosine:

\[
\cos\alpha = \frac{b^2 + c^2 - a^2}{2bc}, \qquad \cos\beta = \frac{a^2 + c^2 - b^2}{2ac}, \qquad \cos\gamma = \frac{a^2 + b^2 - c^2}{2ab}.
\]
| https://www.jobilize.com/trigonometry/course/10-2-non-right-triangles-law-of-cosines-by-openstax
Trigonometry is the relationship between the sides and angles in triangles. There are three trigonometric ratios we need to be aware of at GCSE: sine, cosine and tangent (abbreviated sin, cos, tan). Each trig ratio gives the relationship between an angle and two particular sides of a right angled triangle – for example, sine involves the ratio of the opposite side and the hypotenuse. The mnemonic SOHCAHTOA is used to remember which trig ratio is used for which pair of sides.
Simple trigonometry (SOHCAHTOA) can be used to find the length of a missing side in a right angled triangle. By using the inverse trig operations (notation sin-1, cos-1, tan-1), we can also find the size of a missing angle, as in the examples below.
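For instance, with values chosen purely for illustration: in a right angled triangle with a hypotenuse of 10 cm and an angle of 30°, the side opposite the angle is

\[
\text{opposite} = 10 \sin 30^\circ = 5 \text{ cm},
\]

while a right angled triangle whose opposite and adjacent sides measure 3 cm and 4 cm has an angle of

\[
\theta = \tan^{-1}\left(\tfrac{3}{4}\right) \approx 36.87^\circ.
\]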
With the exception of a few angles, trigonometric functions often return non-terminating decimals. It is therefore important that students are confident in rounding their answers to a suitable number of decimal places.
There are three more advanced rules using trigonometry at GCSE: the sine rule (law of sines), the cosine rule (law of cosines), and the trigonometric formula for the area of a triangle, all stated below. These rules apply to any triangle, not just right angled triangles. The cosine rule, for example, can be used to find the length of a missing side when two side lengths and the size of the angle between them are known.
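For reference, with angles A, B and C opposite sides a, b and c, the three results are

\[
\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}, \qquad a^2 = b^2 + c^2 - 2bc\cos A, \qquad \text{Area} = \tfrac{1}{2}ab\sin C.
\]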
While students do not need to know trigonometric identities at GCSE, knowledge of the identity that tan(x) is equivalent to sin(x) divided by cos(x) can be useful for remembering some exact trig values.
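For example, the exact value of tan 30° follows immediately from the exact values of sine and cosine:

\[
\tan 30^\circ = \frac{\sin 30^\circ}{\cos 30^\circ} = \frac{1/2}{\sqrt{3}/2} = \frac{1}{\sqrt{3}}.
\]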
Looking forward, students can then progress to additional geometry worksheets, for example a 3D Pythagoras worksheet or a trigonometric graphs worksheet.
For more teaching and learning support on Geometry our GCSE maths lessons provide step by step support for all GCSE maths concepts.
There will be students in your class who require individual attention to help them succeed in their maths GCSEs. In a class of 30, it’s not always easy to provide.
Help your students feel confident with exam-style questions and the strategies they’ll need to answer them correctly with our dedicated GCSE maths revision programme.
Lessons are selected to provide support where each student needs it most, and specially-trained GCSE maths tutors adapt the pitch and pace of each lesson. This ensures a personalised revision programme that raises grades and boosts confidence. | https://thirdspacelearning.com/resources/gcse-maths/trigonometry-worksheets/ |
The Board of Directors of CASWE-ACFTS has committed to ensuring that social work education in Canada contributes to transforming Canada’s colonial reality*
Ottawa. June 26th, 2017. At the May 27th, 2017 Board meeting, the Board of Directors of CASWE-ACFTS committed to ensuring that social work education in Canada contributes to transforming Canada’s colonial reality and approved a “Statement of Complicity and Commitment to Change”. “This is an important step in engaging social work education in the reconciliation process and supporting the Truth and Reconciliation Calls to Action,” affirmed CASWE-ACFTS President Dr. Susan Cadell, “as the statement acknowledges the negative effects of past and present practices, expresses regret for harm experienced by Indigenous peoples and communities, and commits to making positive change.” Dr. Cadell notes that the Board has tasked a working group “to help shift ways of thinking, to identify activities and make recommendations regarding ways of working with the Statement in order to begin the reconciliation process”. President Cadell also emphasized that there was broad membership support for the Statement, as members responded with a standing ovation. During the Association’s Annual General Meeting held on May 31st, the student committee formally endorsed the Statement.
Details on the Board of Directors’ commitment follows.
Statement of Complicity and Commitment to Change:
The Call to Action from the Truth and Reconciliation Commission of Canada (2015) offers the most recent account of the colonial reality embedded within the land now known as Canada. The mechanisms that have given rise to, and continue to support, this colonial reality are far reaching, all encompassing, and complex. Colonizing thoughts and actions are evident in the hearts and minds of individuals, in our personal and community relationships, and in societal structures and institutions. This reality impoverishes the character of our country and of all of us who live within its boundaries.
CASWE-ACFTS has been responsible for reviewing and accrediting university based programs of social work education within Canada since 1973 and graduates of these accredited programs become professional social work practitioners. Therefore CASWE-ACFTS shares responsibility for the scope and nature of social work practice within Canada, be that past, present or future practice. Unfortunately, social work education, research and practice have been, and continue to be, complicit in our colonial reality. Such complicity contradicts the espoused values and ethics of social work, potentially negates the positive impact of social work interventions, and results in harmful policies and practices.
Transforming our colonial reality must be a responsibility shared by all Canadians. As beginning steps in embracing this shared responsibility, the members of the Board of Directors of the Canadian Association for Social Work Education-Association canadienne pour la formation en travail social hereby announce the Statement of Complicity and Commitment to Change and,
- acknowledge that colonizing narratives, policies, and practices have been, and continue to be, embedded in social work education, research, and practice
- express deep regret for the harms experienced by Indigenous peoples and communities because of these colonizing narratives, policies, and practices
- commit, within our individual spheres of influence, to act in ways that lessen and eventually end such harms, thereby opening spaces to offer genuine apologies
- accept the United Nations Declaration on the Rights of Indigenous Peoples as the framework to guide reconciliation efforts
- reaffirm the importance of our collaborative relationship with Thunderbird Circle and develop initiatives to commemorate the strength, resiliency, and contribution of all Indigenous social work educators and students
- will ensure a territorial acknowledgement is posted on the CASWE-ACFTS web site
- will encourage institutional members to post a territorial acknowledgement on their School’s website and post a link to the CAUT guide to territorial acknowledgement on the CASWE-ACFTS website to assist Schools with this task
- will encourage and support Canadian schools of social work in revising mission statements, governance processes, curriculum, and pedagogy in ways that both advance the TRC recommendations and the overall indigenization of social work education
- will post, on the Association website, a list of resources to assist Schools in the above efforts
- will periodically review the vision, mission, principles and activities of our Association to ensure we are advancing reconciliation
- will seek to advance Article 14 (1) of UNDRIP through Memorandums of Understanding with relevant Indigenous institutions and programs
- will ensure the planned revision of our educational policies and standards (EPAS2019)
- incorporates current and comprehensive knowledge regarding the de-colonialization and indigenization of social work education including, but not necessarily limited to, the Calls to Action from the TRC, especially those related to child welfare, education, and health
- recognizes the distinct nature of Indigenous social work and avoids positioning such social work within the context of multi-cultural or cross cultural theory and practice.
This Statement uses the term “Indigenous” to include the distinct Canadian terms Aboriginal, First Nations, Indian, Métis, and Inuit as well as the more global context of First Peoples’ epistemologies, ways of knowing, knowledge systems, and lived experience. “Indigenous is both an international and local term, reflecting the reality that issues such as the impact of colonization have both global and local implications” (Association of Canadian Deans of Education, 2010, p. 1).
*Approved by the Board of directors May 27th, 2017.
For further information please contact: | https://caswe-acfts.ca/media-release-board-of-directors-endorses-a-statement-of-complicity-and-commits-to-change/ |
Every murderer and his story are peculiar and obscure, shaped by a number of components. The workings of a murderer’s mind, and what compels him to commit such vile acts of violence, are always a mystery. In the end, his motives and conscience before and after the murder are all that matter. The reasons for murder may stem from several different factors, such as the environment and society, the murderer’s characterization and past, or influences from other people. In Crime and Punishment, by Fyodor Dostoevsky, and The Stranger, by Albert Camus, the protagonists Raskolnikov and Meursault commit acts of murder for separate purposes, motivated entirely by their unique characteristics, which also shape how their minds are affected after the deed has been carried out.
In Crime and Punishment, Raskolnikov murders Alyona Ivanovna, an old pawnbroker whom he deems a detestable woman, along with her sister Lizaveta Ivanovna, and his characterization affects his thoughts after the murder. In the beginning, before the murder, Raskolnikov is indecisive about following through with his plan to kill Alyona, and he carries out an “experiment” as practice and to gain a better understanding of where the money and gold are kept. This characterizes Raskolnikov as anxious and uncertain, not fully confident in his own plan or his execution of it. This hesitation persists even after the murder of the pawnbroker, when Raskolnikov feels dreadful and uneasy about having committed the act. He becomes jittery and at times listens to his conscience, which tells him something different from his heart. For example, on the way to the police station for a summons the day after the murder, Raskolnikov imagines that he will “go in, fall on my knees, and confess everything” (97).
He also debates with himself whether or not to confess it all to the head clerk, Nikodim Fomitch, feeling the urge “to get up at once, and tell him everything that had happened yesterday, and then go with him to his lodgings and show him the things in the hole in the corner” (107). Raskolnikov’s anxiety and sudden impulses to admit the truth become more visible when he faints at the police station as soon as the murder of Alyona Ivanovna is mentioned. He remains this way for weeks following the murder, which further epitomizes his character. Raskolnikov’s shock and nervousness keep him attached to the murder. During the days of illness that follow the murder, he seems interested only in that subject each time it is mentioned. This is noticed by Raskolnikov’s doctor, Zossimov, as well as by Razumihin.
He is an irresolute character, in that he wishes to confess his crime and be relieved of it, yet he does not want to face the punishment. This is portrayed when Raskolnikov speaks with Zametov, who works at the police station, at a café. He drops numerous hints to Zametov about how he is the murderer of the pawnbroker; however, these hints are assumed to be false and delusive as a result of his illness and delirium. Another example of Raskolnikov’s irresoluteness comes at the final moment, when he decides to go to the police office and confess to Ilya Petrovitch that he is the actual killer. After several different thoughts and near-confessions, he leaves the office having decided to keep the matter a mystery, but then he sees Sonia outside, stares into her eyes, and walks back into the office, revealing the long-kept secret: “It was I killed the old pawnbroker woman and her sister Lizaveta with an axe and robbed them” (526). Raskolnikov’s persistent desire to confess the truth after the murder is the result of his characteristics.
In The Stranger, Meursault kills an Arab at a beach by shooting him once, then four more times, influenced by his individual characteristics. Meursault is generally a carefree soul who may also be considered emotionless, given the lack of tears he sheds and emotions he reveals in the events leading up to his mother’s funeral, and at the funeral itself, in the opening of the novel. At his mother’s vigil, Meursault displays a lack of respect when he thinks to himself, “But I hesitated, because I didn’t know if I could do it with Maman right there. I thought about it; it didn’t matter. I offered the caretaker a cigarette and we smoked” (18). He does not show remorse toward his deceased mother, in contrast to his mother’s friends. At the funeral, Meursault also focuses on details unrelated to his mother; for example, he pays attention to the intense heat and to all of the small features of Thomas Pérez, such as his slight limp, his wrinkly and sweaty skin, his constant taking on and off of his hat, and the shortcuts he takes to remain caught up with everyone else.
This manifests Meursault’s indifference to the world around him, a product of his unique traits. This also pertains to the murder of the Arab, since Meursault had no plan or motive to kill him and lacks a guilty conscience after the murder. The day following his arrest, Meursault ponders to himself, “I had read descriptions like this in books and it all seemed like a game to me” (64). He does not grasp the situation he is in after killing a man, which is mainly a result of his type of character. Another example of Meursault’s indifference is his interaction with his girlfriend, Marie, when she asks whether he loves her and whether he would like to marry her. Of the first question, he recalls, “I told her it didn’t mean anything but that I didn’t think so” (35). Of the marriage proposal, he adds, “I said it didn’t make any difference to me and that we could if she wanted to” (41). This further illustrates his character and why the murder was committed. Various people discern that Meursault is a taciturn and withdrawn person. His unique characteristics play a major role in the murder of the Arab on a hot day at the beach, given that the murder was committed not out of rage or hatred but by the impassive and detached man that he is.
Overall, Dostoevsky and Camus deliver murder stories on different levels of understanding and with different character motives. However, both share a common feature: each protagonist is influenced by his characteristics and by the manner in which he acts on his conscience once the murder has been done. Although Meursault and Raskolnikov are completely different in character, it is precisely their characters that persuade and prompt their actions and thoughts following the crime. Every person is likely to be guided by his or her characteristics after any act that he or she commits. | https://artscolumbia.org/the-influences-of-traits-32383/
Upcoming Class Dates:
TBD
If you would like to sign-up for one of my upcoming CCT courses at Stanford University, please visit the Stanford Center for Compassion and Altruism Research and Education (CCARE) here for upcoming course offerings.
Specific course content includes:
Week 1: Introduction to the course and introduction to settling and focusing the mind
The first class includes an introduction to the course content, instructor, and fellow students. In-class discussion and practice will include: connecting with your intention for taking CCT right now, what compassion is, and what it means to “train” compassion. We will practice cultivating the skill of focusing the mind through breath focus meditation. This step is considered foundational for all other practices in this program.
Week 2: Loving-kindness and compassion for a loved one (step 2)
Learning to recognize how the experiences of love and compassion feel like when they occur for a loved one. The meditation and practical exercises offered in this step aim to help practitioners recognize the physical and physiological signs of the feelings of warmth, tenderness, concern, and compassion towards a loved one.
Week 3: Compassion for oneself (step 3a)
Learning to develop qualities such as greater self-acceptance, tenderness, nonjudgment and caring in self-to-self relations. Connecting with one´s own feelings and needs and relating to them with compassion is the basis for developing a compassionate stance toward others.
Week 4: Loving-kindness for oneself (step 3b)
Learning to develop qualities of warmth, appreciation, joy, and gratitude in self-to-self relationship. While the previous step focused on self-acceptance, this step focuses on developing appreciation for one´s self.
Week 5: Embracing shared common humanity and developing appreciation of others (step 4)
Establishing the basis for compassion toward others through recognizing our shared common humanity, and appreciating the kindness of others and how human beings are deeply interconnected.
Week 6: Cultivating compassion for others (step 5)
On the basis of the previous step, participants begin to cultivate compassion for all beings by moving from focusing on a loved one to focusing on a neutral person, then on a difficult person, and finally on all beings.
Week 7: Active compassion practice (Tong-len) (step 6)
This step involves explicit evocation of the altruistic wish to do something about others’ suffering. In formal sitting practice, this essentially takes the form of a visualization practice where the practitioner imagines taking away the suffering of others and giving them what is beneficial in oneself. This practice is known as Tong-len or “giving and taking”.
Week 8: Closing and integrated daily compassion cultivation practice
In this final class, the essential elements of all six steps are combined into an integrated compassion meditation practice that can continue to be done daily by participants who choose to adopt it. | http://www.stanfordcct.com/schedule/ |
Bayesian inference provides an optimal framework to learn models from data with quantified uncertainty. The dimension of the model parameters is often very high or infinite in many practical applications, with models represented by, e.g., differential equations or deep neural networks. It is a longstanding challenge to accurately and efficiently solve high-dimensional Bayesian inference problems due to the curse of dimensionality—the computational complexity grows rapidly (often exponentially) with respect to the parameter dimension. In this talk, I will present a class of transport-based projected variational methods to tackle the curse of dimensionality. We project the high-dimensional parameters to intrinsically low-dimensional data-informed subspaces, and employ transport-based variational methods to push samples drawn from the prior to a projected posterior. I will present error bounds for the projected posterior distribution measured in Kullback–Leibler divergence. Numerical experiments will be presented to demonstrate the properties of our methods, including improved accuracy, fast convergence with complexity independent of the parameter dimension and the number of samples, strong parallel scalability in processor cores, and weak data scalability in data dimension. | http://cse.cornell.edu/scan/2021/04/12/chen.html
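To make the projection idea concrete, here is a rough, self-contained Python sketch. It is not the speaker's implementation: the toy linear-Gaussian model, all names, and the use of a closed-form Gaussian update in place of the transport-based variational step are illustrative assumptions.

import numpy as np

# Toy linear-Gaussian problem standing in for a PDE- or network-based model:
# the data inform only r directions of a d-dimensional parameter (assumption).
d, r, n_samp = 100, 5, 200
rng = np.random.default_rng(0)
W = rng.normal(size=(r, d)) / np.sqrt(d)        # low-rank forward map
theta_true = rng.normal(size=d)
noise_var = 0.01
y = W @ theta_true + np.sqrt(noise_var) * rng.normal(size=r)

def log_likelihood_grad(theta):
    # Gradient of the Gaussian log-likelihood; replace with your model's gradient.
    return W.T @ (y - W @ theta) / noise_var

# 1. Estimate a data-informed subspace: top eigenvectors of the average outer
#    product of log-likelihood gradients at prior (standard normal) samples.
grads = np.stack([log_likelihood_grad(rng.normal(size=d)) for _ in range(n_samp)])
H = grads.T @ grads / n_samp
_, eigvecs = np.linalg.eigh(H)                  # eigenvalues in ascending order
V = eigvecs[:, -r:]                             # d x r orthonormal basis

# 2. Inference restricted to the subspace (theta ≈ V z, z has N(0, I) prior).
#    The reduced posterior is Gaussian here; in the talk's setting this step
#    would instead be carried out by a transport-based variational method.
A = W @ V
post_cov = np.linalg.inv(A.T @ A / noise_var + np.eye(r))
post_mean = post_cov @ (A.T @ y / noise_var)

# 3. Push prior samples to the projected posterior: swap in posterior draws
#    for the informed components, keep the complementary components at prior.
theta_prior = rng.normal(size=(1000, d))
z = rng.multivariate_normal(post_mean, post_cov, size=1000)
theta_post = theta_prior - (theta_prior @ V) @ V.T + z @ V.T
print("subspace posterior-mean error:",
      np.linalg.norm(V.T @ theta_post.mean(axis=0) - post_mean))

The cost of steps 2 and 3 depends on the subspace rank r rather than the full dimension d, which is the essence of how projection sidesteps the curse of dimensionality.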
FIRSTRUN - Fiscal Rules and Strategies under Externalities and Uncertainties
Project description and general objectives of the project:
The FIRSTRUN project advances the theoretical and practical debates on the effective mechanisms of fiscal policy coordination. It analyzes the very reason why fiscal policy coordination may be needed in the first place, namely cross-country externalities (spillovers) related to national fiscal policies. Specifically, it identifies different types of spillover effects, investigates how they work in the EU and in the EMU, and analyses whether they work in the same fashion under different states of the economy and over the short and the long run. The project describes different forms that fiscal policy coordination can take in practice, e.g. ex-ante coordination and risk-sharing, and provides a critical assessment of the mechanisms already put in place. The FIRSTRUN project provides new tools for fiscal policy design by incorporating the new EU fiscal rules regarding e.g. government debt and deficit into applied models for fiscal policy evaluation. The tools can be used to support the decision makers in the implementation of the enhanced EU economic governance. FIRSTRUN also investigates the political economy of fiscal cooperation, for instance, the difficult inter-play between domestic political pressures and EU level priorities as well concerns about legitimation. By shedding light on the character of the governance framework for fiscal coordination, FIRSTRUN will highlight the features that work well or badly and provide insights that the EU level can exploit in its surveillance and advisory roles.
The FIRSTRUN project provides theoretical and econometric analysis of the reasons for having fiscal policy coordination in the EU and the different forms that it can take. In addition, it develops a new conceptual framework for understanding the practicalities of fiscal coordination and the disciplines it requires of participating governments. The project focuses on fiscal spillovers. The expert team takes into account the interaction between fiscal and monetary policies, or the monetary-fiscal policy mix. The project distinguishes between ex-ante coordination of fiscal policies, such as debt and deficit rules, and ex-post mitigation of shocks in the form of various risk-sharing mechanisms. In addition, various macroprudential policies are considered. When analysing the implementation of fiscal coordination, the potential problems related to (the lack of) credibility and legitimacy are considered. When assessing the effectiveness of the existing coordination mechanisms in ensuring stabilization and sustainability, the project takes into account population ageing and uncertainty about e.g. future tax revenues.
Project Structure and Involvement of CASE:
The project is divided into 8 work packages. CASE is involved in the following work packages:
- WP 1 Fiscal policy coordination and cross country spillover effects, namely D1.7 Policy brief highlighting the key implications of the FIRSTRUN results (CEPS and ETLA + all other partners)
- WP 2 Ex ante policy coordination: the new EU fiscal rules, namely D2.6 Working paper on the credibility of EU fiscal instruments (CASE major task)
- WP 6 Governance, institutional mechanisms for fiscal policy coordination and their legitimation, namely D6.5 Paper on in-depth case studies of the experience of fiscal coordination in selected countries (LSE, CASE, IER)
- WP 7 Dissemination and Outreach, namely organizing the Second Stakeholder Forum in Warsaw (CASE and ETLA)
- WP 8 Scientific coordination and project management
Project Output:
- surveys: literature reviews, analysis of policy and stakeholder consultations, secondary data analysis; qualitative methods (interviews);
- data collection, synthesis of data, quantitative and economic analyses;
- case studies;
- country-reports;
- drafting policy recommendations for economic, political and public stakeholders;
- raising awareness and informing the stakeholders and broader audiences of the project by means of an interactive project website, an electronic newsletter and social media, including Twitter accounts of the project and the researchers;
- 12 project newsletters;
- 2 Stakeholder Forums, scientific conferences, seminars, workshops;
- Policy Briefs;
- all deliverables drafted in English.
Contracting institution: EC DG RTD
Sponsoring program and/or organization: Horizon 2020 (DG RTD)
Leader of the consortium: ETLA - The Research Institute of the Finnish Economy (Finland)
Partners: | http://www.case-research.eu/en/firstrun-fiscal-rules-and-strategies-under-externalities-and-uncertainties |
Damian Grant is an earthquake engineer engaged in a broad range of global projects, including the design of new buildings, bridges and energy infrastructure, and the seismic assessment of existing structures. He provides specialist guidance to clients, including NGOs, developers, multinationals, and governmental agencies, on how best to address the seismic threat to their people and assets.
Damian is the lead author of Earthquake Design Practice for Buildings for ICE Publishing, now in its 4th edition. The book covers the principles of designing earthquake-resilient buildings, structural detailing requirements governing different construction materials and structural systems, and the use of seismic protective devices, such as seismic isolators, dampers and self-centring systems.
He is passionate about how to make the best use of the latest technologies – both digital and physical – to reduce the risk of earthquakes to people and their livelihoods. | https://www.arup.com/our-firm/damian-grant |
Monday, January 23, 2012
I was attending a conference where non-vegetarian food (beef) was being served as part of the main course. The conference was in Europe. There was general curiosity about how I had no qualms about consuming beef, which led to the question of why the "Cow" is holy.
As I have made clear previously, I am mostly a "Nirishwar-vaadi". I explained to them the historicity of culture and the evolution of certain traditions which are sometimes misconstrued as "roots". The ultra-veneration of the cow is one of them. There are many other instances. The fact that the fanatic veneration of the cow led to the other six famous "restrictions" on the Indic mind was the first thing I explained to them.
The famous seven restrictions (Sapta-Bandi) are:
1. and 2. Sindhu bandi (one should not cross the river Sindhu or the ocean). The Marathas stopped their pursuit of the Pathans in 1758 on the banks of the river Sindhu at Attock, because they were scared of losing their caste and religion if they crossed the river. Holkar then famously proclaimed that he was ready to convert to Islam and cross the river to punish Abdali.
Lokmanya Tilak, Gandhi and many others who returned from abroad by boat (whether for studies, profession or punishment) had to undergo a Shuddhi-Sanskaara for readmittance into Dharma. Thus, the restriction on crossing the river Sindhu and the ocean (also referred to as Sindhu in Sanskrit literature) formed deracination number 1 in India's Dharmaarthik (socio-politico-economic) system.
3. Shuddhi Bandi - Once a non-Hindu for more than 4 years, a person is not allowed to return and reconvert. This is referred to in the Deval Smriti. This is deracination number 2.
4 and 5. Jaati-Bandi - The Roti-Beti Vyavahaar (exchange of food and marriage ties) is allowed only within the caste. Thus, the restriction on food exchange and marital exchange formed deracination number 3 in India's Dharmaarthik system.
6. Cow veneration - The killing of a cow and the consumption of beef are "Mahaapaatakas" (great sins) without any prayaschitta (atonement). Even if one is forced to consume beef, one is lost forever (the Deval Smriti puts a limitation of 4 years, after which a convert cannot be taken back). The holy cow became a chink in India's armour.
There is one incident described by Savarkar in his memoirs of his Kalapani days. The Muslims were converting Hindus by forcing them to consume beef. However, they were afraid of the Buddhists from Myanmar because they had no qualms in consuming both beef and pork. Under Savarkar, Hindu prisoners started consuming beef and pork whenever the chance arose. The conversions stopped drastically.
The crystalline form of Islam makes it impossible for it to adapt; we have the luxury of adapting, so why not use it? This is deracination 4. Hindus are known to venerate everything; after all, Sarvam Khalu Idam Brahma (everything that "is" is Brahman). But the fact that they allowed one veneration to become their weak spot led to their downfall.
7. Sparsha bandi (untouchability) - There is no need to say more about this; it is very clear. This is deracination number 5.
Just like everything else, the idea of "god" or any such tradition has a "Kaarya-Kaarana Bhaava" (Law of Causality).
In fact, it is only the Indian system which encourages one to actively seek power, money and material wealth. More than encouragement, Artha is one of the "mandatory Purusharthas (achievements)" which has to be fulfilled in order to be called "successful". Other systems cannot manage the confluence of justice, power, desire and liberation in one life.
What is Deracination?
Hence, when we say "losing roots", I say hold on. What are the roots we are referring to? Many "traditions" which we considered as "roots" are in fact remnants of Islamic and Victorian moralistic bondages. If young people are slowly getting rid of them (which they are), I say well done and keep it up. The idea of a "pre-Islamic" moral high ground is the holy grail which is being sought, especially in this thread. By pre-Islamic, I mean from 400 AD to 1000 AD.
In this period (after the defeat of Mihirakula by Yashodharma of Malwa), there was no foreign invasion of India for 600 years (except for the brief invasion of the Arabs, which was thoroughly routed by the Rajputs). It was a scientifically, technologically, ideologically and culturally vibrant society with a very wide range of ideas and memes, supported by the emphasis on justice (Dharma). The code of this Dharma was very simple - every opinion has as much right to exist as any other opinion. Opinions were allowed to collide and the best one survived; others perished. When opinions perished, opinion-holders did not. This system of just opportunities for all ideas in all aspects is Dharma.
What is the remedy?
When we say reverting to the "pre-Islamic system", it is not suggested that we lose everything we have earned in between and go back 1000 years. The trajectory of the Indian system in 2012 would have been different if we had still been following that pre-Islamic Indian system in essence for the past 1000 years. The Islamic and Victorian influences have affected our trajectory; all I am asking for is a course-correction so that we will be where we ought to be in modern times. I guess this process of course-correction started in 1947. We need to re-calibrate it and accelerate it, nothing else.
In conclusion, it is not easy to identify any one of these groups as the "best" from India's point of view.
Also, it is important to realize that no one group typically has complete dominance over a particular US administration’s foreign policy. The actual policy is often a vector sum of competing influences brought together by political expediency and self-interest.
For example, Clinton’s initiatives were planned by Clinton-Wilsonians but strongly modified to accommodate Hamiltonian interests (which became extremely powerful during the Reagan years.)
Bush’s Iraq War was a Bush-Wilsonian policy initiative to bring an American-controlled “democratic” regime change to Iraq. But to enact it, the Bush administration relied on support from both Hamiltonians (interest in the oil fields of Iraq) and Jacksonians (strong popular opposition to Islamism following 9/11.)
Obama is a Jeffersonian who is torn between his Jeffersonian electoral base, which favours a withdrawal from Afghanistan, and a Clinton-Wilsonian foreign policy establishment, which pursues a flawed policy based on alliance with Pakistan and negotiations with “good” Taliban.
It seems clear that the Clinton-Wilsonians are the most implacable foes of India among all these groups.
Others, particularly Bush-Wilsonians and Hamiltonians, can be engaged on some specific points of convergent interest, but must be handled carefully because other aspects of their agendas are inimical to Indian interest.
Ultimately, a Jacksonian President is perhaps most likely to nuke Pakistan or take a confrontational posture towards China… but depending on various factors, the specific circumstances and consequences may or may not be in India's interest. We will have to be quick on our feet to translate any advantage out of such situations.
And finally, if India ever rises beyond the confines of the region to the beginnings of global superpowerdom… probably our best bet is for the United States to follow a Jeffersonian line of limited intervention, leaving a power vacuum that we can endeavour to fill.
The Jeffersonians, compared to the Hamiltonians or Wilsonians, are decidedly inward-looking. They believe in a largely non-interventionist foreign policy, and in concentrating resources on domestic reforms.
Of the four groups of Meade’s spectrum, the Jeffersonians are most inclined to oppose the rise of the “military-industrial complex”… something that Eisenhower famously warned against as he was leaving office, and which is an important source of political influence for both Hamiltonians and Wilsonians.
As I mentioned earlier, many common Americans are either Jeffersonian or Jacksonian in their outlook. If you talk to an American about the India-Pakistan situation and he says something like “sort it out yourselves, it’s none of our business”… that American is most likely a Jeffersonian.
The typical Jeffersonian is to the “left” of the American political spectrum, upholding traditional “liberal” ideas such as increased Federal Government involvement in social and economic development, upliftment of underprivileged sections, civil rights, environmental conservationism, regulation of corporations, global initiatives against poverty/disease/global warming and so on. Such politicians as Dennis Kucinich are at the extreme left of this group.
However, not all Jeffersonians are leftist. Libertarian Isolationists such as Ross Perot and Ron Paul, who believe in a Fortress America model where the US military is exclusively employed to guard America's borders and enforce laws against illegal immigration, also purvey an essentially Jeffersonian foreign policy.
Jeffersonians with respect to India: As such, the Jeffersonian attitude towards India tends to be neutral… but this is largely irrelevant. That is because Jeffersonian Presidents tend to hand over control of foreign policy to Wilsonians. Jimmy Carter relied on Cold-War-Era Brzezinski, and Barack Obama relies on Clinton-Wilsonians such as Joe Biden, Richard Holbrooke and co. with Brzezinski still present as a mentor-figure. The advantage India has today is that it has cultivated a constituency with the Hamiltonians, who are much more powerful at present than they were during the Carter regime. With the Bush-Wilsonians largely in disgrace, the Hamiltonians are our primary channel of influencing American foreign policy in a positive manner at present.
4) The Jacksonians are also, primarily, inward-looking, though they differ dramatically from the Jeffersonians in terms of their domestic policy agenda. While the Jeffersonians tend to be idealists, the Jacksonians are fervent populists. In the tradition of Andrew Jackson, they stand for increased power of the executive branch (the President) relative to the legislature or judiciary; limited federal government role in the affairs of the country; the “patronage” policy of actively placing political supporters into appointed offices; expanded states’ rights; and decentralization.
Also in the tradition of Andrew Jackson, who pledged to expand the United States “from sea to shining sea”, the Jacksonians believe in America’s Manifest Destiny as the natural leader of the world and in securing America’s influence overseas by any means necessary… not shying away from unilateral military action whenever required.
Some articles on Meade’s spectrum describe Jacksonians as the only group that believes in American Exceptionalism. From an Indian point of view, this is not strictly true… ALL the four groups believe in American Exceptionalism… but the Jacksonians are the ones who most prominently wear it on their sleeves.
Jacksonians tend to be issue-based in their politics, rallying around anti-abortion movements, restriction of gay rights, defence of second-amendment gun rights, unapologetic Christian influence in schools and government institutions etc.
Jacksonians, unlike Jeffersonians, do not make “non-intervention” a cornerstone of their foreign policy views; they are quite happy to intervene in a muscular fashion whenever they deem it necessary to do so. However, their perspective is largely focused on internal priorities, so again, Jacksonian Presidents of the United States have traditionally handed over control of foreign policy to other groups. Reagan depended on Hamiltonians like James Baker and Cold-War-Era Wilsonians such as Alexander Haig. George W. Bush also depended on Hamiltonians, but ceded a large amount of policy space to the new Bush-Wilsonians or Neoconservatives of his day.
Wilsonians: They are Ideological Expansionists. They seek to use the economic, political and military might of the United States to create a world where all nations look to the United States for ideological leadership. Their goal is to have all other nations willingly subject themselves to the geopolitical dominance of the United States in a global Pax Americana.
Wilsonians pretend to be “anti-imperialistic”, and conceal their intentions behind rhetoric of “democracy”, “American moral compass” and “multi-lateralism.” In this sense, the Wilsonians are the most hypocritical of all the four groups.
The Wilsonians favour democracy in other nations, only when such democracy is guaranteed to be dominated by essentially pro-American parties who will toe the American line when it comes to making policy. They are intolerant of democratic systems which could potentially be dominated by independent parties who put their own national interest ahead of America’s.
Wilsonians with respect to India: In this sense, Wilsonians are the most likely group to be anti-India. They are relatively happy with Manmohan Singh because of his willingness to accommodate American interests; but they are deeply distrustful of Indian babudom, and they are completely against nationalist Indian parties like the BJP.
In fact, even though they claim to stand for "democracy", Wilsonians prefer dictatorships that can be successfully manipulated by America to democratic countries that are independent enough to oppose America. The Wilsonian path to American global dominance involves "balance of power" games which essentially amount to divide-and-rule. The Wilsonians see America as the true legatees of the British Empire, even though they would like to couch their subsidiary alliances in the guise of "independent democratic regimes" that only seek the leadership of America because America is morally superior.
One important thing to realize about the Wilsonians is that, since the end of the Cold War, they have actually split into two competing camps.
As long as the Cold War was in progress, Wilsonians were more or less united in seeing international Communism, specifically Soviet Communism, as the chief obstacle to ideological dominance of the world by the United States. Henry Kissinger could be described as the archetypal old-school, Cold-War-Era Wilsonian. However, following the USSR’s collapse, there is disagreement among the two camps of Wilsonians as to what America’s priorities should be.
The two camps of Wilsonians are as follows:
2A) The “Bush Wilsonians”, also commonly known as “Neoconservatives”, who gained prominence during the George W. Bush regime. They include Cheney, Wolfowitz, Perle, Rice, as well as lower-profile figures such as Robert Blackwill. Think-tanks of the Bush-Wilsonian persuasion include the CATO institute, the American Enterprise Institute, the Heritage Foundation and the Project for a New American Century.
The term “Neoconservative” is actually a misnomer for this group, because they are actually less conservative than the other camp. They sought to radically reconstruct the American foreign policy establishment’s view of the world following the end of the Cold War.
From the Bush-Wilsonian perspective, the demise of the Soviet Union was the start of a brand new era in which America had a unique opportunity as the sole superpower to shape the world for domination. Ideologically, the Bush-Wilsonians subscribe to the notion that America must be the unilateral forerunner of Western civilization, inspired by a Judeo-Christian (mainly Christian) perspective.
They deviate from the old-school, Cold-War-Era Wilsonians in no longer seeing Russia as the chief threat to the United States, and rejecting the idea that American dominance must be pursued multilaterally through such organizations as the UN.
The Bush-Wilsonians regard China as the major future threat to the United States, followed closely by international Islamism. They are fervent supporters of Israel, owing to a strongly Biblical ideology.
As a means to ensuring American global dominance, the Bush-Wilsonians have sought to reconstruct the geopolitical framework of alliances and strategic partnerships that prevailed during the Cold War. They have tried to rope India into the American camp by offering such carrots as the Indo-US Civil Nuclear Cooperation Agreement. They have also strengthened America's ties with former Soviet Bloc nations in Eastern Europe, bringing Poland, Hungary and the Czech Republic into NATO.
On the other hand, the Bush-Wilsonians have downgraded the American reliance on allies in Continental Western Europe, which they dismissively describe as “Old Europe”, even as they have sought to shore up a few key alliances of the Cold-War Era such as with the UK, Australia, and Japan.
Similarly, they have made some moves towards engaging Russia as a potential strategic partner rather than a competitor, especially in light of the challenges Russia appeared to be facing from a resurgent China and from Islamist terrorism in the early 2000s.
However, their approach to Russia has been wary, and often contradictory, as seen in the American support for the "Orange Revolution" in Ukraine, American initiatives to station missiles in East European countries such as Poland, and American backing of such individuals as Georgia's Saakashvili, who was belligerently anti-Russian. In such cases, some of the old-school Cold-War-Era Wilsonian prejudices seemed to re-establish themselves with regard to Bush-Wilsonian foreign policy.
These contradictions also manifested themselves when, after invading Afghanistan, the Bush-Wilsonians decided to rely on Pakistan as an ally against the Taliban, with fatal consequences.
The highlight of the Bush-Wilsonians’ dominance over the US Foreign Policy Establishment was of course, the Iraq War… something which has ended up destroying their credibility for the present.
Bush Wilsonians with respect to India: As far as India is concerned, the Bush-Wilsonians have made overtures to India that sharply contrasted with the dismissive attitude of the Cold-War-Era Wilsonians. However, the growth of predatory Evangelical missionary activity as Washington’s influence increased in Delhi during the Bush administration, is a warning sign that not all was well with US-India relations during this period. Additionally, the Bush-Wilsonians have repeatedly insisted that India “prove” its sincerity towards Washington, by downgrading its relationship with Iran for example.
When and if the Bush-Wilsonians regain their influence in Washington, India should game them deftly… securing all the benefits we can from their willingness to abandon Cold-War Era policy, but remaining careful not to cede an undue level of influence that might prove to be detrimental to our national and civilizational interests.
2B) Clinton Wilsonians: The second camp of Wilsonians that has emerged following the USSR’s demise are the “Clinton-Wilsonians.” They are actually more conservative than the Bush-Wilsonian “Neoconservatives”, in that their attitudes more closely reflect the classical Cold-War-Era Wilsonians’ worldview.
The Clinton-Wilsonians are the closest group to what Sanjay M likes to call “Atlanticists”. They are deeply distrustful of Russia, and less averse to China; they are also strongly invested in the idea of revitalizing the trans-Atlantic alliances with Western Europe that America maintained during the Cold War. For the rest of the world, the Clinton-Wilsonians firmly trust in the British techniques of divide-et-impera, and in our region in particular, they are the modern torchbearers of Olaf Caroe’s geopolitical agenda. They are more likely than any of the other groups to entertain the idea that Jihadi Islamism can continue to be a coercive policy tool in America's hands.
Think-tanks of the Clinton-Wilsonian persuasion include the Brookings Institution and the Carnegie Endowment for International Peace. Most of the Non-Proliferation types who bash India while ignoring Chinese/Paki proliferation, are Clinton-Wilsonians.
The Clinton-Wilsonians showed their eagerness to reshape the world in America’s favour following the end of the Cold War, most prominently in two instances. One was the war in Yugoslavia, which was deliberately split up into ethnic nationalities, providing additional levers of control that the West could easily manipulate. The second was the secession of East Timor from Indonesia.
In both of these cases, it should be noted that the Clinton-Wilsonians proceeded to fulfill their agenda under the cover of "international consensus", using the UN to pull together "coalitions" of nations which supported the American initiative. This modus operandi is a key point of differentiation between Clinton-Wilsonians and Bush-Wilsonians, who have been much more prone to reject the authority of multilateral bodies like the UN and carry out unilateral actions such as the Iraq war.
Clinton Wilsonians with respect to India: As far as India is concerned, the Clinton-Wilsonians (who include such functionaries as Strobe Talbott, Richard Holbrooke and Robin Raphel) are an inflexible, implacable enemy. This is the single worst group that could come to dominate US foreign policy, from our point of view. They continue the most anti-India traditions of the Cold-War-Era Wilsonians, supporting Pakistan to the maximum extent possible and winking at Chinese nuclear proliferation to Pakistan, even while they bash India for developing its own nuclear arsenal. They refuse to see India as a potential strategic counter to China, and prefer to cultivate China in a “G2” model of cooperative partnership for the short-to-medium term.
The Clinton-Wilsonians are the group who most fervently support Pakistan as a counter to India’s regional dominance, as described in George Friedman’s Stratfor article. They are the most likely group to retain the India-Pakistan hyphen wherever possible, bombard India with equal-equal psyops, and overtly rake up the Kashmir issue as a pressure point against India. They seek to restrict Indian influence to a sub-dominant level even within the “South Asian” region. This is in sharp contrast to the Bush-Wilsonians who made some attempt to dehyphenate India and Pakistan, with a view to bolstering India as strategic rival against China.
I do not see how the Clinton-Wilsonians can be won over… when they are in charge of US foreign policy, it makes more sense for India to engage with other powerful interest groups such as the Hamiltonians so as to modulate the virulence of the Clinton-Wilsonians' initiatives against India.
Speaking of Wilsonians in general, Lyndon Johnson (who began the Vietnam war) was a classic Wilsonian president, as was his successor Richard Nixon (who reached out to China via Pakistan to form an alliance against the Soviet Union). This is an illustration of how the policy groups of Meade’s spectrum can often cut across Republican/Democrat party lines.
More recently, Bill Clinton has been a Wilsonian president who was, however, always careful to secure the backing of the Hamiltonians (whose power greatly increased during the Reagan years.)
It should be noted that there are many in the US Foreign Policy Establishment who do not fully commit to either the Bush-Wilsonian or Clinton-Wilsonian camps. Robert Gates is one such. Other examples include academics like Stephen Cohen and Christine Fair, who pretend to an independent "maverick" image but in reality always make statements that are in line with the Wilsonian flavour-of-the-month in Washington.
If we want to analyze the dynamics between ideological groups that determine US foreign policy, let’s begin with a taxonomy based on existing scholarship. A good example would be the ideological classification proposed by Walter Russell Meade. He divides US policy groups into four classes: Hamiltonian, Wilsonian, Jeffersonian and Jacksonian, based on their broad imperatives. Here are a few articles explaining Meade’s “spectrum” and its four subdivisions from the American point of view:
http://www.lts.com/~cprael/Meade_FAQ.htm
http://www.foreignpolicy.com/articles/2010/01/04/the_carter_syndrome
To be useful to our analysis, we must reconstruct this “spectrum” from an Indian point of view. Here’s an attempt.
In general, Hamiltonians and Wilsonians are the more “outward looking” of the four groups. Jeffersonians and Jacksonians are the more “inward looking.”
Also in general, most of the American public tend to be either Jeffersonian or Jacksonian in their broad geopolitical outlook. The Hamiltonians are mostly represented by a powerful elite of corporate and business interests. The Wilsonian base is a well-entrenched Washington intelligentsia with strong influence over institutions like the State Department and the Pentagon (the "babudom" of America). Wilsonians also dominate American academia and think-tanks.
Let’s look at these four groups one by one.
1) Hamiltonians: named for America’s first treasury secretary, Alexander Hamilton, this group stands for Economic Expansionism. They support global political and military involvement for the purpose of creating and maintaining a system of trade and commerce dominated by the United States, with an American agenda at the helm.
Bretton Woods was the cradle of the modern Hamiltonian movement. The Marshall Plan, and the Roosevelt-Ibn Saud agreement (which formalized the USD as the currency in which international oil prices would be set) were early initiatives undertaken with Hamiltonian support to establish American economic supremacy.
Domestically, Hamiltonians are backed by big-business corporate interests. In nations where a climate favourable to international commerce exists, Hamiltonians try to further their agenda by political means (through American-dominated institutions such as the World Bank, G8 and WTO).
In regions where a climate exists that is unfavourable to international commerce, the Hamiltonians are most concerned with making sure nothing happens to threaten the domination of global commerce by the United States. Chiefly, this means using the military, and shoring up military alliances, to ensure America’s energy security… and sometimes, to deny other nations the energy security they would need to compete economically with America. Hamiltonians insist that American foreign policy in the Middle East and Central Asia focus on enhancing American influence over the oil and mineral resources of those regions.
Hamiltonians with respect to India: Hamiltonians generally ignored the socialist avatar of India as a lost cause, but they have begun to take increasing notice of India since liberalization and economic growth began in the early 1990s.
The most pro-India Hamiltonians would like to shape the rise of India into an economic partner and hedge against other potential economic competitors such as China. This sub-group of Hamiltonians were fully supportive of the India-US Civilian Nuclear Cooperation Agreement. They are generally in favour of outsourcing and guest worker programs, as long as American corporations continue to receive growing access to Indian markets.
The least pro-India Hamiltonians, on the other hand, are skeptical about the relatively “slow” rise of India, about the obstacles to economic liberalization posed by the exigencies of India’s democratic system, and instead choose to support China as a relatively “sure bet.” They are the ones who would gladly overlook human-rights abuses or nuclear proliferation by China as long as market access and profit mechanisms remained intact.
As India continues to develop economically, it is likely that of all the four groups, the Hamiltonians will adopt policy attitudes most favourable to India. Along the way, however, there will be hiccups: India refusing to sign the Nuclear Liability Bill (thereby denying access to American energy corporations into the reactor-building market), or India choosing not to opt for an American-made MRCA, will be detrimental to the support we have among the Hamiltonians.
All Hamiltonians are realists for whom the bottom line is all about the money.
They see the maintenance of a running trade deficit with China as the best insurance against an inimical, confrontational US-PRC relationship in other spheres of competition. They figure that as long as China is invested in the economic well-being of the United States, its will to threaten the political interests of the United States will be limited.
Very few US presidents have been overt Hamiltonians, chiefly because being overtly associated with big business interests could be detrimental to the electoral success of a US presidential candidate. However, ALL US Presidents since Ronald Reagan have relied on the support of Hamiltonians to exercise their policy initiatives, and no president since Reagan has managed to enact a policy that was opposed by the Hamiltonians.
The most overtly Hamiltonian president so far might be George H.W. Bush, who actually ran the first Gulf War in such a way that America ended up making a profit! In recent years, meanwhile, some potential and actual Presidential candidates have been openly Hamiltonian, in background as well as in terms of their policy platforms. These include Steve Forbes, Mitt Romney and the mayor of NYC, Michael Bloomberg, who make no secret of their connection with US corporate interests.
| |
Childhood Adversity Shortens Lives (in Baboons)
Growing up in difficult circumstances slashes baboons’ lifespans, a new study finds.
By Nathan Collins
A baboon and her mother in Lake Manyara National Park, Tanzania. (Photo: Wikimedia Commons)
Losing a mother at a young age, growing up in cramped living quarters, and wanting for food can all have obvious effects on one’s health. It turns out, much the same is true of baboons—and that’s important, the authors of a new study write, because of what it tells us about childhood adversity in humans.
“Females who experience [three or more] sources of early adversity die a median of 10 years earlier than females who experience [one or no] adverse circumstances,” effectively cutting their average lifespans in half, write Jenny Tung, Elizabeth Archie, and colleagues in the journal Nature Communications.
The results suggest there’s a direct link between early adversity and a shortened lifespan.
Some had thought that, in humans, the link between childhood trauma and poor health later in life was indirect—childhood abuse, for example, might lead someone to abuse drugs, which in turn might take years off that person’s life. But since baboons don’t take drugs (not that we know of, anyway), the results suggest there’s a direct link between early adversity and a shortened lifespan.
The study focused on 196 female baboons in southern Kenya, where the Amboseli Baboon Research Project has been following the local population since 1971. Over much of that time, the researchers tracked drought conditions, social group sizes, mothers’ social status and social connectedness, mothers’ mortality rates, and the presence of younger siblings competing for attention.
Among the 29 baboons who experienced adversity in three or more of those situations, none lived longer than 15 years; by contrast, more than three-quarters of those who experienced no adversity did. For the former group, the median lifespan was just shy of nine years, while for the latter it was 24 years. Female baboons who experienced childhood adversity had fewer social connections as adults and mothered fewer children, largely because of their shortened lifespans.
The link to social connections is particularly tantalizing, the researchers write. In humans, stronger social ties can lengthen a life, and weak ones can shorten it. The baboon findings add to that story, suggesting that a strong social network may be hard to come by without solid, stable childhood experiences, although the researchers caution there’s still much work to do before those links are understood in baboons, let alone humans. | https://psmag.com/news/childhood-adversity-shortens-lives-in-baboons |
The blocked shot can be one of the most exciting plays in basketball. A player that can contest shots at the rim provides a tremendous benefit to the team fortunate enough to have him. Having that last line of defense in the paint can cover up a lot of mistakes on the perimeter. There is a good reason that every NBA team wants desperately to have a guy that can guard the rim. And it is no accident that the University of Connecticut has won so many titles.
In the first post of this series, I looked at the impact of the blocked shot in basketball. Studying NCAA Division I basketball, there is a simple (but unsurprising) relationship between the percentage of two point shots blocked and two point field goal percentage. Additionally, I estimated that a single blocked shot is worth about 0.7 points (give or take a quarter of a point) in the final point margin for a game.
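As a rough illustration of where a per-block point value like that comes from, the sketch below regresses final point margin on blocks across games. The per-game numbers here are made up; the ~0.7-point estimate in the earlier post came from the full Division I data set.

```python
import numpy as np

# Hypothetical per-game data: blocks recorded by a team and the final
# point margin of that game. These numbers are invented for illustration.
blocks = np.array([2, 5, 1, 7, 3, 4, 0, 6])
margin = np.array([-3, 6, -8, 9, 1, 2, -10, 5])

# Least-squares fit: margin ~ intercept + value_per_block * blocks.
X = np.column_stack([np.ones_like(blocks), blocks])
coef, *_ = np.linalg.lstsq(X, margin, rcond=None)
print(f"Estimated value of a block: {coef[1]:.2f} points of margin")
```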
In this post, I want to apply some of the ideas from my previous blocked shot post. One way to use what we have learned is to divide all two point shot attempts into two groups: blocked shots and unblocked shots. I want to make use of this approach to look at something that I have tackled in the past: trying to understand the collapse of Texas' defense down the stretch last season.
Last March, I tried to get to the bottom of what caused the Longhorns to falter down the stretch. Basically, the Texas defense stopped playing at an elite level, and had a number of poor defensive outings.
On the season, when Texas played good defense, they won the game. When they played poor defense, they lost. Using the kenpom.com site, you can rank each Texas game by defensive efficiency. Of their 12 worst defensive showings, 8 were losses. Texas did not lose a single game when they played good defense. Contrast this with the list that ranks Texas games by offensive performance. There isn't nearly the same relationship between offensive performance and wins and losses for this team.
So while everyone complained about the offense, the offense actually played pretty well in some of those losses. The story of last year's team was defense. It was defense that made the team great through the first half of February, and it was defense that let the Horns down during the stretch.
I spent some time searching for the cause of this problem, eventually deciding that from the Nebraska game on, Texas' opponents had become much more efficient in their shooting.
Prior to the Nebraska game, Texas made many of its opponents the shooting equivalent of Jai Lucas. But in the losses to Nebraska, Colorado, Kansas State, Kansas, and Arizona, Texas' opponents were at least as efficient as Jordan Hamilton and Tristan Thompson. Even in most of the wins after February 19, opponents were still somewhat efficient. Baylor, OU, and A&M were much closer to Gary Johnson levels of efficiency... In the games that Texas won, they held their opponents at or below the Gary Johnson efficiency level. In the games that Texas lost, their opponents shooting efficiency was as good or better than Jordan Hamilton's shooting efficiency.
Digging a little deeper, I noticed that there was a pretty strong, if unsurprising, relationship between opponent shooting efficiency and opponent field goal percentage on jump shots.
The major factor in the Texas defensive collapse seems to be Texas' ability to defend jump shots. The Nebraska game seems to be the exception to this, as Nebraska only shot 35% on jump shots (Nebraska hurt Texas in other ways). But Colorado, Kansas State, and Kansas all averaged around 50% on jump shots. Arizona only hit 37% on their jump shots, but this number is a bit deceiving. Strangely, Arizona only managed 15% on 2 point jump shots, but hit 57% of their three point shots.
I wanted to see if looking at the shot block percentage data for the 2011 Texas Longhorns, along with data on their opponents' two point FG% might add something to the story. First, it is important to recall how I am calculating the shot block percentage. I am following the method used at kenpom.com, and assuming all blocked shots come on two point attempts. While this is not quite true, blocked shots on three point attempts are pretty rare. With this assumption, I calculate the percentage of two point attempts that are blocked. Using this approach, it is also possible to calculate the two point field goal percentage on shots that are not blocked.
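Here is a minimal sketch of that bookkeeping in Python. The made/attempted totals are chosen so the output matches the Oakland figures discussed below (20% of attempts blocked, 72% on unblocked twos); the actual box-score totals may differ.

```python
def blocked_shot_splits(opp_fg2m, opp_fg2a, blocks):
    """Split opponent two-point shooting into blocked and unblocked attempts.

    Follows the kenpom.com convention of charging every blocked shot to a
    two-point attempt. A blocked shot is always a miss, so all makes come
    from the unblocked attempts.
    """
    block_pct = blocks / opp_fg2a                       # share of 2PA blocked
    unblocked_fg2_pct = opp_fg2m / (opp_fg2a - blocks)  # FG% on the rest
    return block_pct, unblocked_fg2_pct

# Illustrative totals shaped to match the Oakland game: 9 blocks on 45
# two-point attempts, with 26 makes on the 36 unblocked tries.
block_pct, unblocked_pct = blocked_shot_splits(opp_fg2m=26, opp_fg2a=45, blocks=9)
print(f"Block%: {block_pct:.1%}, unblocked 2P%: {unblocked_pct:.1%}")  # 20.0%, 72.2%
```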
The graph below shows the shot block percentage for Texas and the two point field goal percentage on unblocked shots for Texas' opponents in 2011 from the start of conference play on. Previously, I had noted that in the key Texas losses down the stretch last season, Texas' defense on jump shots (as measured by jump shot field goal percentage) was pretty poor, whereas earlier in the season it had been outstanding. The graph below provides some complementary information. It shows that, while shot block percentage jumps around a fair bit from game to game, it wasn't particularly low in the Texas losses to Nebraska, Colorado, Kansas State, Kansas, or Arizona, or in the poor defensive performance against Oakland. It seems that the unblocked two point percentage tells a bit more of the story.
During the extended period of defensive dominance, Texas often held its opponents to less than 50% shooting on unblocked two point shots. This is very low. Recall that the NCAA Division I average in 2010-2011 was 53%. Some of this is likely poor shooting, but some of it is not; teams like Kansas and Missouri didn't typically struggle putting the ball in the hole last year. And while A&M and Texas Tech weren't exactly offensive juggernauts, these shooting numbers were well below their season averages. It seems pretty likely that Texas' defense was doing something right when it came to defending shooters.
And then something happened. Colorado, Kansas State, Kansas, Oakland, and (to a lesser degree) Arizona exploded offensively. Texas actually did a good job limiting Arizona on their two point attempts, blocking 17.5% of them and holding Arizona to 48.5% on unblocked two point attempts. (Arizona killed Texas from three.) On unblocked two point shots, Colorado averaged 67%, Kansas State averaged 69%, Kansas averaged 70%, and Oakland averaged 72%. Tristan Thompson saved Texas' bacon in that Oakland game; Tristan contributed 7 of the 9 Texas blocked shots, which stopped 20% of Oakland's attempts from two point range.
I decided to dig in a bit further using shot location data from cbssports.com. I compiled the percent of opponent two point attempts that were taken inside the lane in order to see if opponents were getting closer shots towards the end of the season. The data don't really allow us to draw broad conclusions, but they are interesting just the same. In the figure below, I plot the percentage of opponent two point attempts taken inside the lane along with the opponent's field goal percentage on unblocked two point shots. Oakland and Arizona managed to take many of their attempts from in close, as did Nebraska and Baylor (in their first game against Texas). Both Colorado and Kansas State shot very high percentages on unblocked two point attempts, but this wasn't because they were shooting from in close. They actually shot very well on long range two point attempts (both teams had pretty decent 3 point shooting percentages last season, indicating that they were good at distance shooting).
I still am not sure that I have much of a handle yet on what happened to the Texas defense last season, but I can summarize what I have learned. Earlier in the season, Texas excelled at defending players away from the basket, forcing them to miss a very high percentage of jump shots and unblocked two point attempts. This facet of defense was the greatest strength of last year's team, and is largely responsible for what got the Longhorns the #2 ranking in the AP poll. But as the season wore on, this major strength faded away. Perhaps some of that can be attributed to allowing a few more shots in the paint to teams like Oakland and Nebraska. But some of it cannot. | https://www.burntorangenation.com/2011/9/14/2388105/revisiting-the-collapse-of--2011-Longhorn-basketball |
[Chemical Constituents from Dalbergia cochinchinensis].
To investigate the chemical constituents from the heartwood of Dalbergia cochinchinensis, compounds were isolated and purified by various column chromatographic methods, and spectral analyses were used to identify their structures. Eleven compounds were isolated and identified as dibutyl terephthalate (1), medicarpin (2), pterostilbene (3), 6-hydroxy-2-(2-hydroxy-4-methoxyphenyl)-benzofuran (4), pterocarpol (5), butyl isobutyl phthalate (6), pterolinus B (7), methyl 4-hydroxybenzoate (8), ethyl 4-hydroxybenzoate (9), 2-(2'-methoxy-4'-hydroxy)-aryl-3-methyl-6-hydroxy-benzofuran (10) and 6α-hydroxycyclonerolidol (11). Compounds 1 and 6–10 are isolated from the Dalbergia genus for the first time, and compounds 2, 4 and 11 are isolated from this plant for the first time.
| |
Business as Usual
In his essay, George Ellis has once again gone public with his arguments that a set of ideas currently being considered by cosmologists concerning the notion of a multiverse is not scientific. His main criticism is that he does not think multiverse theories are predictive, as scientific theories should be. Were this really true, it would indeed be a problem. But Ellis’s claim relies on several basic misconceptions about science, and is thus easily refuted. He does not appreciate the difference between a framework and a theory, he applies a poorly-defined notion of direct evidence, and he mistakes the process of science for the output.
Let us begin with frameworks and theories. Every beginning physicist learns both classical mechanics and electromagnetism. These subjects have a quite different character; in classical mechanics we are told that force equals mass times acceleration, but this equation is actually useless until we are told what force is. And there are all sorts of different forces we might consider: gravity, electromagnetic forces, friction, tension, and so on. Any particular set of forces may govern the motion of a particular collection of particles, but classical mechanics itself does not prefer any specific choice. It is a framework within which we can realize many different theories. By contrast, there are a small number of concrete equations we study in electromagnetism—Maxwell’s Equations and the Lorentz force law—and then everything else follows from these few equations. This is a theory. It quantitatively describes many aspects of our world in an unambiguous way. Either it is correct, within its regime of validity, or it is not. Ellis’s first criticism of the multiverse is that there are a variety of distinct possibilities being considered, and thus that it cannot make specific predictions. But this is like saying that we should throw out classical mechanics because there are all sorts of forces which could exist. Sure there are, but in any particular theory, such as electromagnetism or gravity, there will be a specific set of rules to test.
Ellis's second criticism of the multiverse framework is that typically most of the multiverse is outside our causal horizon, and thus we cannot receive direct signals from it. The multiverse, he argues, is for this reason not directly testable. But upon further scrutiny, this notion of direct testability is rather naive. Consider the existence of quarks. Have you ever seen one? I certainly haven't. We infer the existence of quarks indirectly based on what happens when we collide other particles such as protons and electrons. My eyes only detect photons, and even those are conveyed into my brain indirectly through nerve signals. What matters in testing theories is not whether the evidence is direct or indirect. What matters is that the theory predicts more observations than it has parameters. And the hope of most people working on the multiverse framework is that any particular multiverse theory will indeed make detailed testable predictions. For example, say that using a multiverse theory with zero parameters, I could compute with high accuracy the expected values of all of the 19 parameters in the standard model of particle physics, and the 6-7 parameters of the ΛCDM model of cosmology. If all were consistent with the latest measurements, I suspect that even Ellis would be forced to give strong credence to this theory. We might not be able to directly observe the other regions in that multiverse, but we would still have strong experimental confirmation of the theory. We would never be sure it was correct, but this is also true for all of our other scientific theories. It could be the case that things work completely differently than we think. Our scientific skepticism will never allow us to rule this out entirely. We will never be able to test all the predictions of any theory. It is enough that we can test some of them.
We do not have such a predictive multiverse theory today and thus far the multiverse has mostly remained a framework. Despite some promising initial calculations, it is quite difficult to construct an actual theory, never mind test it against the data in the way I just discussed. The strongly-constrained structure of string theory suggests that it might lead to a unique multiverse theory, but formulating this theory with the precision of Maxwell’s electromagnetic theory will require us to solve some of the deepest problems in quantum gravity. This process may or may not succeed. We will not know until we try. This situation gives rise to Ellis’s third criticism of the multiverse framework, citing the philosopher Richard Dawid: people who work on it sometimes give non-empirical reasons for doing so. Of course they do! The motivations for considering the multiverse are empirical: the observed value of the cosmological constant, the observed spectrum of density perturbations in the early universe as reflected in the cosmic microwave background, the distribution of galaxies, and the observed existence of gravity. The space of potential theories is enormous. If we were not allowed to work on candidate theories without knowing they were true in advance, how would we ever develop any theories at all? I can think of few things that would be more destructive to science than this notion, were it to be adopted. The process of theoretical science is what happens prior to the theory being formulated well enough to test it against experiment. The non-empirical methods described by Dawid have been used throughout the history of science to help decide which theories to work on and how to refine them. Names like Newton, Maxwell, and Einstein spring to mind immediately, as do many others.
We do not yet know if a successful multiverse theory exists. But it is clear that the multiverse framework is well within the bounds of conventional science. It may eventually go the way of so many other promising ideas that did not pan out, but this would be for the usual reasons that scientific frameworks are discarded and not as a result of some philosophical infraction.
Daniel Harlow
George Ellis replies:
“Ellis,” Daniel Harlow writes, “does not appreciate the difference between a framework and a theory, he applies a poorly-defined notion of direct evidence, and he mistakes the process of science for the output.”
As to frameworks and theories, the inflationary universe is a framework, with over a hundred theories developed within that framework. The multiverse proposals are not frameworks in the same sense. They are a mixed bag of ideas, ranging from the extension of the universe beyond the visible horizon to chaotic inflation to conflations of the Everett wavefunction with chaotic inflation to proposals that the universe is a simulation.1
As to direct evidence, Harlow says:
But upon further scrutiny, this notion of direct testability is rather naive. Consider the existence of quarks. Have you ever seen one? I certainly haven’t. We infer the existence of quarks indirectly based on what happens when we collide other particles such as protons and electrons.
There are many different and repeatable experiments of various kinds available to indirectly test the idea that such particles exist. We are, however, not talking about the existence of types of particles or forces, but about the existence of domains in the universe, and about specific configurations of geometry and matter. We are dealing with geography, which is not an experimental science. It is an observational science. The issue is not testing the generic nature of what exists, which is what Harlow refers to, but the particular nature of the specific entities that exist. Repeatable tests can be used to affirm that electrons exist, but not that a specific multiverse bubble with any particular properties lies a hundred Hubble radii away in a particular direction. This is completely different from talking about generic properties of matter. We are not dealing with experimental science, where numerous repeatable experiments on many different samples of the same entity in identical circumstances can be undertaken. That is the key difference. Harlow's example does not apply.
Like Carlo Rovelli, I agree that one always uses non-empirical reasons for choosing what to do with regard to process versus output. I never said otherwise. And I do not say that one should not work on multiverses. "The process of theoretical science," Harlow writes, "is what happens prior to the theory being formulated well enough to test it against experiment." Indeed. In the case of the multiverse there are good reasons to believe that most models arising from the framework will never be testable by any kind of observation or experiment that can demonstrate the claimed space-time regions do indeed exist with the properties supposed. That is the issue. The process of science—exploring cosmology options, including the possible existence or not of a multiverse—is indeed what should happen. The scientific result is that there is no unique observable output predicted in multiverse proposals. This is because, as is often stated by proponents, anything that can happen does happen in most multiverses.2 Having reached this point, one has to step back and consider the scientific status of claims for their existence. The process of science must include this evaluation as well.
Daniel Harlow is Assistant Professor of Physics at MIT.
George Ellis is Emeritus Distinguished Professor of Complex Systems in the Department of Mathematics and Applied Mathematics at the University of Cape Town in South Africa.
| |
Search results for author:"David Hobbs"
Total records matched: 8 Search took: 0.082 secs
-
Cinekyd: Exploring the Origins of Youth Media Production
Renee Hobbs; David Cooper Moore
Journal of Media Literacy Education Vol. 6, No. 2 (2014) pp. 23–34
The youth media movement, which now has a place in countless venues, communities, and scholarly discourses, reflects an evolution of practices pioneered in the 1950s and 1960s as amateur filmmaking increasingly became a reality in American families...
-
Research into Telecommunications Options for People with Physical Disabilities
Toan Nguyen; Rob Garrett; Andrew Downing; Lloyd Walker; David Hobbs
Assistive Technology Vol. 19, No. 2 (2007) pp. 78–93
People with a disability do not have equitable access to the modern telecommunication medium. Many experience difficulty typing, handling the phone, dialing, or answering calls. For those who are unable to speak, the only option is to type messages...
-
Identifying and Using Hypermedia Browsing Patterns
Duncan Mullier; David Hobbs; David Moore
Journal of Educational Multimedia and Hypermedia Vol. 11, No. 1 (2002) pp. 31–50
Hypermedia offers benefits for users who wish to find information and users who wish to learn about a particular topic (Jonassen & Grabinger, 1991). However, hypermedia is also plagued by drawbacks that were identified early in its inception ...
Topics: Navigation, Multimedia
-
A Neural-Network system for Automatically Assessing Students
Duncan Mullier; David Moore; David Hobbs
World Conference on Educational Media and Technology 2001 (2001) pp. 1366–1371
This paper is concerned with an automated system for grading students into an ability level in response to their ability to complete tutorials. This is useful in that the student is more likely to improve their knowledge of a subject if they are...
-
Learning Style Theory and Computer Mediated Communication
Hilary Atkins; David Moore; Dave Hobbs; Simon Sharpe
EdMedia: World Conference on Educational Media and Technology 2001 (2001) pp. 71–75
This paper looks at the low participation rates in computer mediated conferences (CMC) and argues that one of the causes of this may be an incompatibility between students' learning styles and the style adopted by CMC. The main learning style...
-
Finding out the intention of a user of Educational Hypermedia
Duncan Mullier; David Hobbs; David Moore
World Conference on Educational Media and Technology 2000 (2000) pp. 794–799
Hypermedia has been identified as a promising medium for education. However, it is beset with problems concerning the complexity of navigation and meeting the individual needs of the user. This paper is concerned with a domain-independent metric...
-
Civilization in the 21st Century
David F. Lancy; David DeBry; Megan Andrew-Hobbs
World Conference on Educational Media and Technology 2000 (2000) p. 1821
We will report on the evolution of an on-line course. The Civilization/Humanities course had its origins in the reform of the university's General Education curriculum in 1994-95. It was one of several classes created to replace existing...
Topics: Curriculum
-
Professional development schools: Catalysts for teacher and school change
Robert V. Bullough; Don Kauchak; Nedra A. Crow; Sharon Hobbs; David Stokes
Teaching and Teacher Education: An International Journal of Research and Studies Vol. 13, No. 2 pp. 153–169
Drawing on data from questionnaires and 49 interviews with teachers and principals, the impact of involvement in Professional Development Schools on teacher professional growth and school change at seven Professional Development School sites is... | http://mail.editlib.org/author/David+Hobbs/?&sort=date& |
We are often told that tone, body language, nuance and facial expressions play a huge role in our communication, and that this is particularly the case in situations where understanding emotions and attitudes is important. This is part of the reason that emails are often seen as less effective than telephone calls, which are less effective than video calls, which are in turn less effective than face-to-face meetings.
Based on his research, Albert Mehrabian concluded that only 7% of communication relating to feelings and attitudes takes place through the words we use, while 38% takes place through tone and voice and the remaining 55% takes place through body language.
How many times have you had a conversation with someone where they are saying one thing, but all the signals they are putting out are saying something completely different? For example, you might be asking someone if they are happy with the piece of work you have delegated to them. They might be using words to say that “yes” they are, but their tone and body language might be making it clear that they are not.
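To make the weighting concrete, here's a small sketch of the model as a formula. The 7/38/55 weights are Mehrabian's published percentages; the function name and the idea of scoring each channel from 0 to 1 are our own illustrative assumptions, not something Mehrabian defined.

```python
# Illustrative only: the 0.0-1.0 channel scores and the function are hypothetical,
# but the 7/38/55 weights are Mehrabian's published percentages.
def perceived_message(words, tone, body):
    return 0.07 * words + 0.38 * tone + 0.55 * body

# Words say "yes, I'm happy with the work" (1.0), but tone and body
# language signal reluctance (0.2 each):
print(perceived_message(1.0, 0.2, 0.2))  # ~0.26: the non-verbal "no" dominates
```

The arithmetic mirrors the delegation example above: even an emphatic verbal "yes" is swamped when the other two channels disagree.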
So What?
A lesson we can draw from this research is that we need to pay attention to far more than just the words others use when they communicate with us. It also means that face-to-face communication can be helpful, at least some of the time, if we are to communicate effectively together. And when we are not face to face, say on a video call, on the telephone or over email, we might want to change our approach to communication or ask more specific questions to ensure we're communicating effectively.
In addition, we should be aware of what we communicate to others through our tone and body language, not just our words; we convey huge amounts of information this way. As leaders and managers, it's important that we understand what we're communicating and that we try to communicate intentionally.
Finally, we should ensure that we can listen effectively. One way to think about this is the SIER Hierarchy of Active Listening; another is the Six Facets of Effective Listening for Better Connection. Of course, listening too is about more than just words: we need to learn to observe tone and body language, and understand what people are really communicating to us.
Learning More
Effective communication is a key skill in the world of work. We’ve written several other posts on the topic. These include the tongue in cheek ABCs of communication, the 7 Cs of communication and 10 Tips for Better Presentations. We’ve also written a bit on persuasion and influence. Posts include Cialdini’s 6 Principles of Persuasion and Monroe’s Motivated Sequence.
We’ve also explored the important role that communications and stories play in organizational change. You might enjoy this podcast which explores that idea.
The World of Work Project View
We are slightly wary of the precise numbers in Mehrabian's 7-38-55 Communication Model in a work context, but that's not really the point of the model. Even Professor Mehrabian included words of caution about the absolute accuracy of these percentages in his work. He noted that his research focused on the communication of emotions and feelings, and acknowledged that the contributing factors will be quite different when different topics are being discussed.
Regardless of the exact accuracy of the model, or the specific topics of conversation that it relates to, we like it. We know the numbers could be somewhat incorrect, but we still think the underlying message, that words alone don’t constitute communication, is very important.
Many individuals and leaders fail to appreciate the importance of these other means of communication. By paying closer attention to non-verbal signs, they can improve both their awareness of others and their awareness of the impact of their own words and actions.
Mehrabian, A., 1981. Silent Messages. Belmont, Calif.: Wadsworth Pub. Co. | https://worldofwork.io/2019/07/mehrabians-7-38-55-communication-model/ |
The Taabo Health and Demographic Surveillance System (HDSS; in French, SSDS) is run by the Centre Suisse de Recherches Scientifiques en Côte d'Ivoire (CSRS) in south-central Côte d'Ivoire, about 150 km northwest of Abidjan. The HDSS began operations in early 2009, and the area around the man-made Lake Taabo was chosen for its distinctive eco-epidemiological features. Since its inception, there has been a strong research interest in the integrated monitoring of water-related diseases such as schistosomiasis and malaria. The Taabo HDSS generates setting-specific data on the impact of targeted interventions against malaria, schistosomiasis and other neglected tropical diseases. It covers a small town, 13 villages and some 100 hamlets. At the end of 2013, the total population under surveillance was 42,480 inhabitants in 6,707 households. Verbal autopsies are performed to determine causes of death.
Repeated cross-sectional epidemiological surveys, covering about 5-7% of the population, combine haematological and parasitological examinations with questionnaires. The Taabo HDSS provides a database for enquiries, facilitates interdisciplinary research and monitoring, and offers a platform for evaluating health interventions. It continuously provides updated sociodemographic information on more than 40,000 people in a mainly rural area of south-central Côte d'Ivoire. The HDSS has an interdisciplinary team of demographers, statisticians, database managers, field investigators and data-entry operators. Participation rates in routine monitoring are very high, at times exceeding 95%. A close link with the health system not only enables effective interventions but also facilitates feedback of results to health authorities. In addition, existing and productive partnerships give the HDSS access to the laboratory of the public hospital in the town of Taabo and to other well-equipped laboratories, such as those of the CSRS, the Université Félix Houphouët-Boigny in Abidjan and the Swiss Tropical and Public Health Institute (Swiss TPH) in Basel. The Taabo HDSS can be used to implement clinical trials evaluating the safety and efficacy of anthelminthic drugs, and to monitor the effectiveness of interventions directed not only against NTDs and malaria but also at broader public health issues.
It is also worth emphasizing that the Taabo HDSS remained active throughout the prolonged socio-political crisis that affected Côte d'Ivoire until 2012, which even allowed a detailed study of the crisis's effects on internally displaced people. The HDSS has provided a unique platform for research, teaching and training of MSc students, health professionals, PhD candidates and postdoctoral fellows from diverse backgrounds, disciplines and cultures. Quality-control mechanisms are in place and adhered to: supervisors regularly re-check data obtained by field investigators, and the data are entered twice by independent operators, with a number of quality-control steps to verify internal consistency. However, challenges remain. First, long-term financial sustainability will require continued fundraising through competitive grant applications. Second, the HDSS database is currently under-used for setting-specific analyses and for comparisons with other INDEPTH demographic surveillance sites.
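As a concrete illustration of the double-entry step described above, the sketch below compares two independently keyed data files and flags disagreements for adjudication. The file and field names are invented for the example; the article does not describe the actual HDSS schema.

```python
import pandas as pd

# Hypothetical file and column names: two operators key the same forms independently.
entry_a = pd.read_csv("operator_a.csv")
entry_b = pd.read_csv("operator_b.csv")

# Pair the two versions of each record by its identifier.
merged = entry_a.merge(entry_b, on="record_id", suffixes=("_a", "_b"))

# Flag records where the two entries disagree on date of birth.
mismatches = merged[merged["date_of_birth_a"] != merged["date_of_birth_b"]]
print(f"{len(mismatches)} of {len(merged)} records need adjudication")
```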
The team has also encountered some difficulties with the collection of blood samples for specific research projects; targeted information campaigns have helped to overcome them. Parents sometimes attribute a child's illness to participation in a survey, so the HDSS, in line with national legislation, has had to provide free treatment in health facilities for all children who fell ill after a survey. From the experiences and lessons learned, it appears that regular feedback and sharing of key results with the local population is essential. The Taabo HDSS needs a more elaborate data-sharing policy, and there is a desire to further expand the existing network of partnerships and collaborations with other institutions and researchers. One identified priority is data collection using tablets instead of paper forms.
The Taabo HDSS is the first of its kind in Côte d'Ivoire and is built on three main pillars: population monitoring, health interventions and their evaluation, and scientific research to strengthen the local evidence base. It sits in a diverse socio-ecological system covering both rural and urban areas, providing a unique platform for studying infectious and chronic diseases and the dynamics of nutrition and lifestyle. Its speciality is an emphasis on neglected tropical diseases such as lymphatic filariasis, schistosomiasis, soil-transmitted helminth infections and water-related diseases in general. | http://www.csrs.ch/en/stations.php?hdss
Dean Falk
The Fossil Chronicles
About
Two discoveries of early human relatives, one in 1924 and one in 2003, radically changed scientific thinking about our origins. Dean Falk, a pioneer in the field of human brain evolution, offers this fast-paced insider’s account of these discoveries, the behind-the-scenes politics embroiling the scientists who found and analyzed them, and the academic and religious controversies they generated. The first is the Taung child, a two-million-year-old skull from South Africa that led anatomist Raymond Dart to argue that this creature had walked upright and that Africa held the key to the fossil ancestry of our species. The second find consisted of the partial skeleton of a three-and-a-half-foot-tall woman, nicknamed Hobbit, from Flores Island, Indonesia. She is thought by scientists to belong to a new, recently extinct species of human, but her story is still unfolding. Falk, who has studied the brain casts of both Taung and Hobbit, reveals new evidence crucial to interpreting both discoveries and proposes surprising connections between this pair of extraordinary specimens.
Topics: Physical, Social Science, Anthropology, Life Sciences, Evolution, Paleontology, Science
357 printed pages
Original publication: 2011
Publisher: University of California Press
| https://bookmate.com/books/m0Gr0uiy
Along with their collection of insects to look at under microscopes, we had our own collection of small fossils to view. This was a whole-school day, and the organizers estimated that pupils from across the school attended the event; with teaching staff, helpers and organizers included, there was a substantial crowd on site during the day. Also inside the house were Timetaxi. We thank the Geological Society of London for providing a grant to pay for a wide-format pop-up projection screen, which was used throughout the week to project images from our high-definition microscope. An early-morning start at Dinosaur Isle Museum saw the Land Rover loaded up with a model dinosaur (which had to be taken apart first) and various small and very large, heavy fossils. These were taken by car ferry to Hampshire. Stopping off first at the University of Portsmouth to collect some more large fossils, we then drove on to the installation in the Cathedral.
‘Incredibly rare’: Dinosaur blood, feathers found in ancient amber
Needless to say, this shocking discovery is once again going to have paleontologists scrambling to find a way to prop up the popular myths that they have been promoting. What they have been telling us simply does not fit the facts. The truth is that this latest find is even more evidence that dinosaurs are far, far younger than we have traditionally been taught.
Once upon a time, scientists believed that it would be impossible to find anything other than the hardened fossilized remains of extinct dinosaurs. But instead, we are now starting to find dinosaur soft tissue all over the place. Fossils include complete or nearly-complete skeletons associated with preserved soft tissues such as feathers, fur, skin or even, in some of the salamanders, external gills.
Subsequent studies found tissue and cells in other dinosaur and reptile fossils.8 Besides collagen, proteins such as actin and myosin were also found.9 These additional discoveries helped verify the authenticity of the dinosaur tissue, and undercut the arguments of contamination.
It was big news indeed last year when Schweitzer announced she had discovered blood vessels and structures that looked like whole cells inside that T. rex. The finding amazed colleagues, who had never imagined that even a trace of still-soft dinosaur tissue could survive. After all, as any textbook will tell you, when an animal dies, soft tissues such as blood vessels, muscle and skin decay and disappear over time, while hard tissues like bone may gradually acquire minerals from the environment and become fossils.
Schweitzer, one of the first scientists to use the tools of modern cell biology to study dinosaurs, has upended the conventional wisdom by showing that some rock-hard fossils tens of millions of years old may have remnants of soft tissues hidden away in their interiors. And the new findings might help settle a long-running debate about whether dinosaurs were warmblooded, coldblooded—or both.
They claim her discoveries support their belief, based on their interpretation of Genesis, that the earth is only a few thousand years old. Growing up in Helena, Montana, she went through a phase when, like many kids, she was fascinated by dinosaurs. In fact, at age 5 she announced she was going to be a paleontologist. But first she got a college degree in communicative disorders, married, had three children and briefly taught remedial biology to high schoolers.
Did Humans Walk the Earth with Dinosaurs? Triceratops Horn Dated to 33,500 Years
The Radiometric Dating Game
Radiometric dating methods estimate the age of rocks using calculations based on the decay rates of radioactive elements such as uranium, strontium, and potassium. On the surface, radiometric dating methods appear to give powerful support to the statement that life has existed on the earth for hundreds of millions, even billions, of years. We are told that these methods are accurate to a few percent, and that there are many different methods.
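As a minimal worked example of the calculation these methods rest on, the sketch below converts a measured fraction of remaining parent isotope into an age, assuming a closed system and a constant decay rate (the assumptions this article goes on to question). The function name and sample numbers are ours.

```python
import math

def age_from_fraction(remaining_fraction, half_life_years):
    # N/N0 = (1/2)^(t / t_half)  =>  t = t_half * log2(N0/N)
    return half_life_years * math.log2(1.0 / remaining_fraction)

# Carbon-14 has a half-life of about 5,730 years; a sample retaining
# one-eighth of its original C-14 corresponds to three half-lives:
print(age_from_fraction(1 / 8, 5730))  # ~17,190 years
```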
Dinosaur soft tissue carbon dating: researchers have found a reason for the puzzling survival of soft tissue, and C-14 dating of multiple samples of bone from 8 dinosaurs found in…
The other groups mentioned are, like dinosaurs and pterosaurs, members of Sauropsida (the reptile and bird clade), with the exception of Dimetrodon, which is a synapsid.
Definition
[Figure: Triceratops skeleton, Natural History Museum of Los Angeles County]
Under phylogenetic nomenclature, dinosaurs are usually defined as the group consisting of the most recent common ancestor (MRCA) of Triceratops and Neornithes, and all its descendants. In traditional taxonomy, birds were considered a separate class that had evolved from dinosaurs, which were treated as a distinct superorder. However, a majority of contemporary paleontologists concerned with dinosaurs reject the traditional style of classification in favor of phylogenetic taxonomy; this approach requires that, for a group to be natural, all descendants of members of the group must be included in the group as well. Birds are thus considered to be dinosaurs, and dinosaurs are, therefore, not extinct. In 2017, Matthew Baron, David B. Norman, and Paul M. Barrett suggested a radical revision of dinosaurian systematics. Their phylogenetic analysis resurrected the clade Ornithoscelida to refer to the group containing Ornithischia and Theropoda. Dinosauria itself was re-defined as the last common ancestor of Triceratops horridus, Passer domesticus, and Diplodocus carnegii, and all of its descendants, to ensure that sauropods and kin remain included as dinosaurs.
Using one of the above definitions, dinosaurs can be generally described as archosaurs with hind limbs held erect beneath the body. Other groups of animals were restricted in size and niches; mammals, for example, rarely exceeded the size of a domestic cat, and were generally rodent-sized carnivores of small prey.
Controversial T. Rex Soft Tissue Find Finally Explained
Here, RSR presents the scientific journals reporting, the kinds of biological material found so far, and the dinosaurs yielding up these exciting discoveries.
Dinosaur and Dinosaur-Layer Creatures
As you view the exciting scientific discoveries below in this chronological catalog, please feel free to listen to Real Science Radio co-hosts Fred Williams and Bob Enyart observe their annual tradition of presenting dinosaur soft tissue and other amazing discoveries, including short-lived left-handed amino acids, DNA, and Carbon-14, all in bones and other specimens from dinosaur-layer Mesozoic and even deeper strata.
Soft tissue and amino acids should last only a fraction of that time. Someone who believes the Earth is less than 10,000 years old may see Schweitzer's find as compelling evidence for a young Earth rather than a cause to re-examine the nature of fossilization.
The four types of fossils are: … There are six ways that organisms can turn into fossils, including: … More rarely, fossils have been found of softer body tissues.
Bones – these fossils are the main means of learning about dinosaurs. The fossilized bones of a tremendous number of species of dinosaurs have been found since the first dinosaur bone was discovered.
Teeth and Claws – Sometimes a bit of a broken tooth of a carnivore is found with another dinosaur's bones, especially those of herbivores. Lots of fossilized teeth have been found, including those of Albertosaurus and Iguanodon.
Eggs, Embryos, and Nests – Fossilized dinosaur eggs were first found in France; many more have since been found at numerous sites. Sometimes they preserve parts of embryos, which can help to match an egg with a species of dinosaur. The embryos also shed light on dinosaur development. The nests and clutches of eggs tell much about dinosaurs' nurturing behavior.
Dinosauria
Description
[Figure: size comparison, in green, with selected giant theropods]
Tyrannosaurus rex was one of the largest land carnivores of all time; the largest complete specimen, held at the Field Museum of Natural History under the specimen number FMNH PR 2081 and nicknamed Sue, is the benchmark for the species. Historically, average adult mass estimates have varied widely over the years.
The forelimbs had only two clawed fingers, along with an additional small metacarpal representing the remnant of a third digit. The tail was heavy and long, sometimes containing over forty vertebrae, in order to balance the massive head and torso. To compensate for the immense bulk of the animal, many bones throughout the skeleton were hollow, reducing its weight without significant loss of strength. But in other respects Tyrannosaurus's skull was significantly different from those of large non-tyrannosauroid theropods.
Dr. Mary Schweitzer, while doing research, accidentally discovered dinosaur soft tissue in a T. rex bone dated at 67 million years old, something all textbooks say is impossible.
(NaturalNews) A recent archaeological discovery that throws a wrench into the conventional theory of evolution has reportedly cost a California professor his job. Mark Armitage, a former scientist at California State University, Northridge (CSUN), was reportedly fired after claiming to have unearthed a dinosaur fossil that still contains soft, flexible tissue, suggesting that it can't be millions of years old.
A veteran of many years in his field, Armitage has published numerous studies in peer-reviewed journals. One of his most recent, published last July, pertains to a discovery he made at the Hell Creek Formation excavation site in Montana. According to The Christian Post, Armitage was evaluating a triceratops horn fossil when he came across preserved soft tissue. A lawsuit recently filed in Armitage's defense describes his reaction to the discovery as "fascinated," since flexible matter had never before been discovered on a dinosaur fossil.
Naturally, Armitage published his findings — in this case, he published them in the Elsevier journal Acta Histochemica — and proceeded to share his findings with his students. Not long after, Armitage was approached by a CSUN faculty head who reportedly shouted at him, “We are not going to tolerate your religion in this department! This should be a wakeup call and warning to the entire world of academia.
Dinosaur Shocker
Skeleton of Marasuchus lilloensis, an ornithodiran similar to dinosaurs. Primitive dinosaurs included Herrerasaurus (large), Eoraptor (small) and a Plateosaurus skull; paleontologists believe that Eoraptor may have resembled the common ancestor of all dinosaurs. The other early archosaurs (including the aetosaurs, ornithosuchids, phytosaurs and rauisuchians) met their end millions of years ago during the Triassic-Jurassic mass extinction.
Part of the Explore the Film series. "Clearly this is in violation of the dating process. It challenges the entire dating process." – Kevin Anderson, Microbiologist at Van Andel Creation Research Center. In 2005, soft tissue was discovered inside the femur of a dinosaur by Dr. Mary Schweitzer.
One of the scientists who found the tissue and published a paper on it in the peer-reviewed literature,1 Mark Armitage, was subsequently fired from his position at California State University Northridge. He has sued the university, claiming that he was fired because of his religious views. Instead, this update is about the fossil itself. Samples from the fossil were sent out for dating. I wanted the soft tissue that was found in the fossil to be dated, but it was not.
According to a report in the journal Radiocarbon, bioapatite is actually preferable to soft tissue in many cases. As the report states:
Spectacular Dinosaur Has Skin, Horn, Pigments
It is captivating and compelling… covers all the bases.
Second law of thermodynamics: does this basic law of nature prevent evolution? Eden Communications, Christian Answers Network.
The Illustrated Origins Answer Book. Eden Communications.
If the source of the carbon was mosasaur tissue (and this is the most straightforward explanation), then the mosasaur’s carbon date would be in line with an age of thousands of years, as inferred by the integrity of its soft tissue.
Evolution
It was once widely thought that distinctively hominin fossils could be identified from 14 to 12 million years ago (mya). However, geneticists later introduced the use of molecular clocks to calculate how long species had been separated from a common ancestor. The molecular clock concept is based on an assumed regularity in the accumulation of tiny changes in the genetic codes of humans and other organisms.
Use of this concept, together with a reanalysis of the fossil record, moved the estimated time of the evolutionary split between apes and human ancestors forward to as recently as about 5 mya. Since then, molecular data and a steady trickle of new hominin fossil finds have pushed the earliest putative hominin ancestry back in time somewhat, to perhaps 8-6 mya.
[Figure: possible pathways in the evolution of the human lineage] Announced in 2002, this specimen (Sahelanthropus tchadensis) is dated to the period between 7 and 6 mya. The distinctive mark of Hominini is generally taken to be upright land locomotion on two legs (terrestrial bipedalism). The most remarkable aspect of the S. tchadensis skull is the broadness and flatness of its face—something previously associated with much more recent hominins—in conjunction with a smaller, ape-sized braincase. This specimen also has small canine teeth compared with those of apes, thus aligning it with the hominins in an important functional regard.
Sahelanthropus, then, emphasizes an evolutionary pattern that seems to have been a characteristic of the tribe Hominini from the very start—a pattern that aligns it with what is observed in most other evolutionarily successful groups of mammals. Human evolution, it appears, has consistently been a process of trial and error.
My cat used to drink from the garden pond and never seemed to suffer any ill effects, and you often see dogs drinking from muddy puddles. So why do humans have to be so careful and only drink clean water? Will it form sediment that gets buried beneath the seabed and eventually turns into plastic “oil” or “coal”?
Fossil soft tissue in dinosaur bones has been a controversial topic among researchers for quite some time. Hard tissues, such as bones, eggs, teeth, and enamel scales, are able to…
[Figures: various specimens of Tyrannosaurus rex with a human for scale; size comparison of selected giant theropod dinosaurs, with Tyrannosaurus in purple]
Tyrannosaurus rex was one of the largest land carnivores of all time; the largest complete specimen is FMNH PR 2081 ("Sue"). The forelimbs were long thought to bear only two digits, but there is an unpublished report of a third, vestigial digit in one specimen.
The tail was heavy and long, sometimes containing over forty vertebrae, in order to balance the massive head and torso. To compensate for the immense bulk of the animal, many bones throughout the skeleton were hollow, reducing its weight without significant loss of strength. The skull was extremely wide at the rear but had a narrow snout, allowing unusually good binocular vision.
An Update On The Triceratops Fossil That Contained Soft Tissue
Senior research scientist Alexander Cherkinsky specializes in the preparation of samples for Carbon-14 testing. He directed the pretreatment and processing of the dinosaur bone samples with the Accelerator Mass Spectrometer, though he did not know the bones were from dinosaurs, and he signed the reports. Carbon dating at this facility is certainly among the very best. But at some point, someone told the director of the facility, Jeff Speakman, that the Paleochronology group was showing the Carbon-14 reports on a website and on YouTube and drawing the obvious conclusions.
So when he received another bone sample from the Paleochronology group, he returned it to the sender and sent an email saying: "The scientists at CAIS and I are dismayed by the claims that you and your team have made with respect to the age of the Earth and the validity of biological evolution."
* More Soft Dinosaur Tissue, Now from an "80 Million" Year Old Hadrosaur: Consistent with the expectations of biblical creationists, according to National Geographic, there's yet another discovery of soft tissue in a dinosaur, this time a hadrosaur, with soft blood vessels, connective tissue, and blood cell protein amino acid chains partially sequenced.
Image caption: The feathered tail was preserved in amber from north-eastern Myanmar. The tail of a feathered dinosaur has been found perfectly preserved in amber from Myanmar. The one-of-a-kind discovery helps put flesh on the bones of these extinct creatures, opening a new window on the biology of a group that dominated Earth for more than 160 million years.
Examination of the specimen suggests the tail was chestnut brown on top and white on its underside. The tail is described in the journal Current Biology. The study’s first author, Lida Xing from the China University of Geosciences in Beijing, discovered the remarkable fossil at an amber market in Myitkina, Myanmar. The million-year-old amber had already been polished for jewellery and the seller had thought it was plant material.
On closer inspection, however, it turned out to be the tail of a feathered dinosaur about the size of a sparrow. Lida Xing was able to establish where it had come from by tracking down the amber miner who had originally dug out the specimen. | https://buyonlinenorxe.ru/references/ |
Patients undergoing orthodontic treatment are at greater risk for developing white spot lesions (WSLs). Although prevention is always the goal, WSLs continue to be a common sequela. For this reason, understanding the patterns of WSL improvement, if any, has great importance. Previous studies have shown that some lesions exhibit significant improvement, whereas others have limited or no improvement. Our aim was to identify specific patient-related and tooth-related factors that are most predictive of improvement with treatment.
Methods
Patients aged 12 to 20 years with at least 1 WSL that developed during orthodontic treatment were recruited from private dental and orthodontic offices. They had their fixed appliances removed 2 months or less before enrollment. Photographs were taken at enrollment and 8 weeks later. Paired photographs of the maxillary incisors, taken at each time point, were blindly assessed for changes in surface area and appearance at the individual tooth level using visual inspection.
Results
One hundred one subjects were included in this study. Patient age, brushing frequency, and greater percentage of surface area affected were associated with increased improvement. Central incisors exhibited greater improvements than lateral incisors. Longer time since appliance removal and longer length of orthodontic treatment were associated with decreased levels of improvement. Sex, oral hygiene status, retainer type, location of the lesion (gingival, middle, incisal), staining, and lesion diffuseness were not found to be predictive of improvement.
Conclusions
Of the various patient-related and tooth-related factors examined, age, time since appliance removal, length of orthodontic treatment, tooth type (central or lateral incisor), WSL surface area, and brushing frequency had significant associations with WSL improvement.
Highlights
- •
White spot lesions (WSLs) are a common sequela of orthodontic treatment.
- •
WSLs seem to improve in the months after orthodontic appliances are removed.
- •
Predicting the amount of improvement is challenging.
- •
Time since appliance removal may be an important predictive factor.
- •
Factors such as depth of lesion or patient’s diet might influence the improvement.
Orthodontic treatment has long served as a means for providing patients with improved esthetic, functional, and psychological benefits. Unfortunately, white spot lesions (WSLs) are a common and undesirable side effect that can diminish the satisfaction that a patient experiences after orthodontic treatment. Some studies have shown that the prevalence of WSLs is as high as 97% among orthodontic populations.
WSLs are characterized by their greater opacity compared with healthy enamel. They have a whiter appearance as a result of mineral loss in the surface layers; this alters the refractive index and increases the scattering of light in the affected area because of the damaged, roughened surface. The appearance of the lesion can vary from minor surface change to cavitation. In some instances, stains can be incorporated into a lesion and lead to the formation of brown spots during the remineralization process, worsening the esthetic problem. Prevention and treatment of WSLs are important for the integrity of the teeth, as well as for esthetics, since they often affect the maxillary incisors.
Several options have been proposed to address these lesions, depending on their nature and severity. The recommended treatments range from as simple as improved home care with fluoride toothpaste to more invasive options involving composite restorations. There is still a lack of strong evidence in the literature, however, regarding the most effective treatment protocol and the ideal timing for maximizing improvement.
In addition to the abundance of available treatment options, the unpredictable patterns and degrees of improvement add to the complexity of WSL treatment. There is a wide range of improvement in lesions from one patient to the next. Lesions can vary in size, shape, and location and are as unique as the oral environment of the patients in whom they are found. Results from a previous randomized controlled trial by Huang et al found no significant differences in subjective or objective improvement in the appearance of the WSLs among those who received MI Paste Plus, PreviDent fluoride varnish, or normal home care during an 8-week period. Although some WSLs exhibited little or no improvement, some did show considerable improvement. Since the treatment arm did not appear to have a large role in the improvement of WSLs, investigation of other possible factors associated with WSL improvement seemed warranted.
The first aim of this study was to determine whether the following patient factors are predictive of the overall improvement of WSLs: age, sex, time since appliance removal, length of orthodontic treatment, self-reported tooth brushing, oral hygiene, or retainer type. Each patient factor was analyzed with the null hypothesis of no difference in WSL improvement for both subjective and objective measures.
The second aim was to compare the following tooth-related factors with the amount of WSL improvement: proportion of tooth surface area affected, tooth type (central or lateral incisor), staining, location (gingival, middle, incisal), and lesion diffuseness. The null hypothesis was that there would be no difference in WSL improvement associated with the tooth-related factors.
Material and methods
This study is a further investigation of data from a previous project regarding WSLs. The photographs that formed the sample data were originally collected from a randomized (1:1:1), single-blind, active-controlled, parallel-group trial evaluating the improvement of WSLs in 3 treatment arms. The treatment arms were MI Paste Plus (GC America, Allsip, Ill), containing casein phosphopeptide-amorphous calcium phosphate and 900 ppm of fluoride; PreviDent fluoride varnish (22,600 ppm of fluoride; Colgate Oral Pharmaceuticals, New York, NY); and a home-care control group with oral hygiene instructions and over-the-counter toothpaste (1100 ppm of fluoride; Colgate Oral Pharmaceuticals). In the original study, photographs of the WSLs were taken at 2 times: the start of the study (T1) and 8 weeks later (T2). Data were collected from private orthodontic and general dentistry offices belonging to the Practice-based Research Collaborative in Evidence-based Dentistry network in the Northwestern United States (Northwest PRECEDENT). The network was jointly operated by the University of Washington and Oregon Health and Science University, and it covered Washington, Oregon, Montana, Idaho, and Utah.
Eligibility criteria for this study included the fulfillment of the following conditions: completion of fixed appliance orthodontic therapy within the past 2 months, at least 1 WSL on the facial surface of a maxillary incisor that was not present before starting orthodontic treatment, and age between 12 and 20 years. Subjects excluded from this study were those who were unwilling to be randomly assigned to 1 of the 3 treatment groups; had any abnormal oral, medical, or mental conditions; received therapy for WSLs after orthodontic treatment; displayed frank cavitations associated with the maxillary incisors; or were unable to speak or read English. Patients (and parents, for those under 18 years of age) consented to participate before the study.
Throughout treatment, oral hygiene was reinforced by staff members. Clinicians provided patient information, including age, sex, length of orthodontic treatment, and retainer type. All subjects also completed a questionnaire, which gave us information regarding their average daily brushing frequency.
Two types of evaluations (subjective and objective improvement) were performed for the 4 maxillary incisors, for each pair of photographs (initial and 8 weeks). For subjective improvement, a blinded panel of 5 dental professionals (expert panel) assessed improvement using a visual analog scale from 0 to 100 mm (0 mm, no improvement or worsened, to 100 mm, complete resolution). These evaluations were performed as part of the original study, and the mean ratings of the panel were used for overall improvement of the 4 maxillary incisors.
For objective improvement, 2 examiners (a dental student and a general dentist) performed the assessments for improvement by measuring changes in WSL surface area at each time point. WSL surface area was divided by total tooth surface area to calculate the pretreatment and posttreatment percentages of affected surface areas. The change in percentage of affected surface area was obtained by subtracting the T2 surface area from the T1 surface area. These assessments were also performed as part of the original study for all 4 incisors.
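The objective measure lends itself to a simple computation. Below is a minimal sketch of the percentage-of-surface-area arithmetic described above; the function name and example numbers are ours, not the authors'.

```python
def percent_affected(wsl_area, tooth_area):
    """Percentage of the facial tooth surface covered by the lesion."""
    return 100.0 * wsl_area / tooth_area

# Areas in any consistent unit (e.g., pixels from the calibrated photographs).
t1 = percent_affected(wsl_area=14.0, tooth_area=130.0)  # baseline: ~10.8%
t2 = percent_affected(wsl_area=9.0, tooth_area=130.0)   # 8 weeks: ~6.9%

improvement = t1 - t2  # positive values indicate a shrinking lesion
print(f"Change in affected surface area: {improvement:.1f} percentage points")
```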
For this current study, we considered improvement of a lesion to be a visible decrease in the affected surface area, minimized contrast between the WSL and surrounding healthy tooth structure, or any combination of changes resulting in an overall improved esthetic appearance. In the previous study, all 4 incisors were evaluated as a unit, rather than each tooth individually ( Table I ). To perform evaluations at the single-tooth level, we cropped the images of the 4 maxillary incisors into individual teeth (n = 404) and then further cropped them into horizontal thirds, evaluating only the portions affected (n = 728) ( Fig 1 ). To maintain a uniform size and a similar level of magnification, a grid of a fixed dimension was used to show only the tooth or portion of the tooth being evaluated. This method allowed us to mask all other teeth or parts of teeth, minimizing any undesired influence from the surrounding teeth on the evaluation scores. Once all images were cropped to the proper dimensions, the lesions were then categorized according to the different characteristics that were of interest to this study.
Table I.

| Evaluation level | Evaluation type | Outcome measure | Factors evaluated |
| --- | --- | --- | --- |
| 4 maxillary incisors | Subjective improvement; objective improvement | Visual improvement (%); reduction of surface area (%) | Age, sex, time since deband, treatment time, brushing frequency, oral hygiene, retainer type, initial WSL surface area |
| Single tooth | Improvement scale | 1: significantly worse; 2: slightly worse; 3: same; 4: slightly better; 5: significantly better | Age, sex, time since deband, treatment time, brushing frequency, oral hygiene, retainer type, initial WSL surface area, tooth type, staining |
| Tooth thirds | Improvement scale | 1: significantly worse; 2: slightly worse; 3: same; 4: slightly better; 5: significantly better | Age, sex, time since deband, treatment time, brushing frequency, oral hygiene, retainer type, initial WSL surface area, lesion location, diffuseness |
At the single-tooth level, each lesion was categorized by the presence or absence of staining. Most WSLs are uniformly white throughout (unstained), but occasionally some have a yellowish or brownish area of discoloration ( Fig 2 ). A primary and a secondary evaluator (an orthodontic resident [S.K.] and a general dentist [M.K.]) categorized the staining before blinding for the time points to ensure that the staining was present at the start of the study. After categorization, the time points were obscured for all images to reduce any expectation bias for improvement because one might naturally expect improvement over time even when no improvement had occurred. Of the 404 incisors, 105 exhibited staining (26%). Categorization for staining had an agreement of 87% between the evaluators.
The images of the horizontal thirds were labeled as the gingival, middle, and incisal thirds. Only portions of the tooth affected by a WSL were evaluated. Portions of the tooth containing no lesions were not included. The same 2 evaluators independently examined each third and categorized each lesion by its diffuseness ( Fig 3 ). Any lesion with a discrete linear shape with areas of healthy, unaffected tooth structure adjacent to both sides of the lesion was considered to be a discrete lesion. Any lesion with a nonlinear, amorphous, or ill-defined appearance was categorized as diffuse. Any lesion containing both types of these lesions on the same tooth third was classified as a mixed lesion. Time points were also obscured for each third before evaluation.
For the blinded evaluation of single teeth and tooth thirds, the evaluators rated each image on a WSL improvement scale of 1 to 5: 1, significantly worse; 2, slightly worse; 3, the same; 4, slightly better; or 5, significantly better than its corresponding image taken at the other time point ( Table I ). The average scores of the raters were used. Twenty images were evaluated a second time, at least 1 month apart, to calculate the reliability of the raters’ average scores.
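The study reports rater reliability as intraclass correlation coefficients (ICCs). As an illustration of how such a coefficient can be computed for a targets-by-raters matrix of scores, here is a self-contained ICC(2,1) implementation (two-way random effects, absolute agreement, single rater); the paper does not state which ICC form was used, and the sample data below are invented.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    scores: (n_targets, k_raters) array of ratings."""
    n, k = scores.shape
    grand = scores.mean()
    ss_rows = k * np.sum((scores.mean(axis=1) - grand) ** 2)   # between targets
    ss_cols = n * np.sum((scores.mean(axis=0) - grand) ** 2)   # between raters
    ss_total = np.sum((scores - grand) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # Shrout & Fleiss formula for ICC(2,1)
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
    )

# Two raters scoring ten image pairs on the 1-5 improvement scale (made-up data):
ratings = np.array([[4, 4], [3, 4], [5, 5], [2, 3], [4, 4],
                    [3, 3], [5, 4], [4, 4], [2, 2], [3, 3]])
print(round(icc_2_1(ratings), 2))
```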
Statistical analysis
Analyses were conducted using SAS software (version 9.2; SAS Institute, Cary, NC).
Descriptive data were summarized with frequency tables ( Table II ). Regression models were run using generalized estimating equations, which allowed accounting for clustering by site and subject. Intraclass correlation coefficient (ICC) values for both factors were negligible.
Table II.

| Parameter | Value |
| --- | --- |
| Age in years, mean (SD) | 14.4 (1.5) |
| Female, n (%) | 52 (51.5%) |
| Initial surface area affected, mean (SD) | 10.7% (7.7%) |
| Oral hygiene, n (%): good / fair / poor | 10 (9.9%) / 43 (42.6%) / 48 (47.5%) |
| Months since appliance removal, mean (SD) | 0.26 (0.44) |
| Appliance removal ≤1 week before enrollment, n (%) | 73 (72.3%) |
| Months in orthodontic treatment, mean (SD) | 25.7 (9.7) |
| Brushing frequency, n (%): ≤1 time per day / ≥2 times per day | 37 (36.6%) / 64 (63.4%) |
| Retainer style, n (%): Hawley / Essix | 56 (55.5%) / 45 (44.6%) |
We performed univariate analyses to identify potential factors of interest and then selected the covariates for our multivariate analyses based on our univariate results. Models were adjusted for age, sex, time since appliance removal, length of orthodontic treatment, and brushing frequency. Although the previous study found no difference in improvement among the 3 original treatment arm groups, we performed a sensitivity analysis for treatment arm to verify that there were no differences among groups when adjusting our model for our particular choice of covariates.
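The authors ran their models in SAS; purely for illustration, here is a hedged sketch of a comparable GEE specification in Python/statsmodels, using the covariates listed above and clustering on subject. The data file, column names, and the exchangeable working correlation are our assumptions; the paper does not report its exact specification.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df: one row per evaluated tooth, with a 'subject' column identifying the
# patient cluster (file and column names are hypothetical).
df = pd.read_csv("wsl_scores.csv")

model = smf.gee(
    "improvement ~ age + sex + months_since_deband + treatment_months + brushing",
    groups="subject",
    data=df,
    cov_struct=sm.cov_struct.Exchangeable(),  # teeth clustered within patients
    family=sm.families.Gaussian(),
)
print(model.fit().summary())
```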
Results
A total of 115 subjects were eligible for evaluation in our study. Subjects were removed due to poor-quality images (n = 5) or missing lateral incisors (n = 2). One subject’s records were not obtainable from the previous study. Six additional subjects were dropped from the study because they had multiple retainer types. A total of 101 subjects (49 boys, 52 girls; mean age, 14.4 ± 1.5 years) were included in the final analyses. The subjects dropped from our study did not vary with respect to demographic data and initial WSL severity compared with the subjects included in this study. Although there was no difference in improvement among the 3 treatment groups in the original randomized controlled trial, patient compliance was factored in for the MI Paste Plus group as part of the multivariate analysis, and it was not significant.
Duplicate measurements of 20 sets of images showed good repeat reliability. The ICC was 0.92. The ICC values were 0.72 and 0.85 between the subjective and objective evaluators in the previous study, respectively.
For the analyses of all 4 incisors, the mean subjective improvement from the original study for the 4 incisors over the 8-week period (T1-T2) was 26%. Using these subjective ratings from the original study, we found no patient-related factors associated with improvement ( Table III ). The total percentage of surface area initially affecting all 4 incisors was also not significant for improvement (data not shown). The objective surface area measurements from the original study showed an average improvement of 19% for the 4 maxillary incisors. In the multivariate analyses, we found greater improvement in WSL appearance with each additional year of patient age. With each additional month of orthodontic treatment or time since appliance removal, less improvement was observed ( Table IV ). | https://pocketdentistry.com/predicting-improvement-of-postorthodontic-white-spot-lesions/ |
Almanac dream information - the meaning behind Almanac dreams.
The meaning behind Almanac Dreams
To dream of an almanac means variable fortunes and illusive pleasures. To be studying the signs foretells that you will be harassed by small matters taking up your time.
Dream dictionary entry taken from 10,000 Dreams Interpreted by Gustavus Hindman Miller. Psychologist World provides these definitions as a courtesy and is not responsible for, or for any consequences resulting from the use of, Miller's archaic dream interpretations. | https://www.psychologistworld.com/dreams/dictionary/almanac |
Over half of teachers see children arriving at school hungry at least once a week, with almost 80% saying that this number has increased in the last year.
The survey of more than 3,000 people was done by Behaviour & Attitudes on behalf of Kellogg’s and found that food poverty is still a harsh reality for many families, with more than one fifth worried over the amount of money they have to spend on food.
Families with primary-school children are more likely to feel the pressure, with one third saying they were concerned over their food budget.
The report reveals that, among lower- income households, food poverty is as high as 11%, while only 4% of highest income groups cite food poverty as an issue.
Teachers are also seeing the impact of food poverty in their schools, with 53% of those surveyed noticing children arriving at school hungry at least once a week. A total of 77% of teachers said the number of children coming to school hungry has increased in the last year.
Half of the teachers surveyed also said that more than one third of parents have expressed concern over their ability to make their food budget stretch to the end of the week, while one fifth struggle to fund their family food budget over the weekend.
The study also found that one in five households with children has even had to change their eating habits due to financial constraints. | https://www.irishexaminer.com/ireland/half-of-teachers-see-children-turning-up-to-school-hungry-331475.html |
Water4Cities focuses on the following research objectives:
• To investigate new processes and models for urban water cycle monitoring: the vision of a real-time monitoring and decision support tool for urban water management requires a thorough analysis of the processes across the water lifecycle.
• To design the necessary methodology for the analysis and optimization of urban water: This objective relates to the construction of a theoretical framework upon which the decision support system will be implemented.
• To research optimization capabilities in data transmission protocols to support real-time water lifecycle monitoring: The proposed platform will need to monitor and control qualitative and quantitative degradation of water and determine its status in real time.
• To design novel data analytics, data visualization algorithms and decision support tools for optimized urban water management: this objective concerns the investigation of novel data analytics and data visualization algorithms supporting the water lifecycle monitoring and optimization services.
From a technical viewpoint, the project aims to reach the following technical objectives:
• To build a robust, energy-efficient monitoring infrastructure for the collection of real-time data across the water lifecycle: Water4Cities will implement a data collection framework supporting energy-efficient, robust and reliable data communication to assist real-time monitoring of qualitative and quantitative water and energy parameters (a minimal sketch of such a sensor node appears after this list).
• To deploy advanced data mining and data visualization tools for water management: on top of the data collection mechanism, data analytics and visualization algorithms will be deployed to realize the water cycle monitoring.
• To develop decision support services and applications, catered for water utilities and decision makers in the Water4Cities process chain: This technical objective relates with the implementation of the services (including data visualization) to be delivered to the water providers and relevant stakeholders (e.g., municipalities).
• To test and validate the proposed ICT platform: another major technical objective of the project is to provide a proof-of-concept test and validation of the proposed solutions in real use-case scenarios provided at Skiathos island and in the Ljubljana region. | http://videolectures.net/water4cities/
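To make the data-collection objective above concrete, here is a hedged sketch of a sensor node publishing a reading over MQTT, a lightweight protocol often used for this kind of telemetry. The broker address, topic, and field names are invented; the project materials do not specify a transport protocol.

```python
import json
import time
import paho.mqtt.client as mqtt

# Hypothetical broker and topic (paho-mqtt 1.x style client);
# a real deployment would use the project's own infrastructure.
client = mqtt.Client()
client.connect("broker.example.org", 1883)

reading = {
    "sensor_id": "groundwater-well-03",
    "timestamp": time.time(),
    "level_m": 4.2,           # groundwater level
    "conductivity_us": 512,   # a simple water-quality proxy
}
client.publish("water4cities/groundwater", json.dumps(reading), qos=1)
client.disconnect()
```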
CCBR typically has 12-15 ongoing projects and has completed over 400 projects since 1982. Each project is guided by our commitment to impacting social change in practical and powerful ways. We conduct research with people not on people, cultivating respect with communities at every step of the process.
Projects can be searched for using words from the project title or using the service area, theme, or date range for the project. You can also type 'Service Area' or 'Theme' into the search bar to get a list of options in each of these fields.
New Canadian Youth Connections was developed to support government-assisted refugee (GAR) youth, aged 12-21, as they connect to and integrate into the Waterloo Region community through recreational programming and homework support with peer volunteers.
CCBR led an evaluation project motivated by two central considerations: 1) the need to secure sustaining funding for the program to continue current activities and expand Circle opportunities and 2) to provide information to program managers and staff so they can ensure that the program is working effectively towards its intended outcomes. This project’s evaluation plan included: interviews with program staff, focus groups with volunteers, and an analysis of surveys administered with program volunteers and youth. CCBR actively engaged stakeholders and project partners (CJI and Reception House) throughout the evaluation process. NCYC was funded by the Ontario Trillium Foundation. | https://projects.communitybasedresearch.ca/projects/8988_new-canadian-youth-connections-ncyc/ |
CROSS-REFERENCE TO RELATED APPLICATIONS
TECHNICAL FIELD
BACKGROUND OF THE PRESENT INVENTION
SUMMARY OF PRESENT INVENTION
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
This application claims priority to Hong Kong Patent Application No. 16104845.1 with a filing date of Apr. 27, 2016. The content of the aforementioned application, including any intervening amendments thereto, is incorporated herein by reference.
The present disclosure generally relates to a computer programming education tool and, in particular, to an educational system consisting of a plurality of virtual robots, a plurality of physical robots, a tablet computer and a cloud service for programming.
At present, an increasing number of people choose electronic learning, also called e-learning, to acquire new knowledge. E-learning provides educational activities by using computers and the internet, and makes full use of information technology to provide a new way of learning. With the development of virtual reality technology, it has become possible to bring virtual robots into educational systems through computer programming, helping students learn effectively and efficiently.
In accordance with one aspect of the disclosure, an education system is disclosed. The education system may comprise a control device, a physical toy, and a virtual toy, wherein the physical toy is used to communicate with the control device, and the control device is used to create the virtual toy; the virtual toy and the control device are used for programming; the virtual toy is a counterpart of the physical toy, and the virtual toy of the physical toy is integrated in a virtual world.
In accordance with an alternative or additional aspect of the disclosure, an education system is disclosed. The education system may comprise a control device, one or more physical toys, and one or more virtual toys, wherein the physical toys are used to communicate with the control device, and the control device is used to create the virtual toys; the virtual toys and the control device are used for programming. Each virtual toy is equipped with movable parts that are controlled by the control device and with sensor parts that send signals and triggers to the control device. Each virtual toy is integrated alone in a virtual world or used to connect to a corresponding counterpart physical toy in the real world. The control device is further used to opt to connect to a physical toy or a virtual toy, to decide how many virtual toys are integrated in the virtual world, and to integrate multiple virtual toys by using different translators.
The illustrated education system may further comprise a server and a software application, wherein the server is used to save translators, stencils, programs and other information to facilitate a lesson and/or provide a download service; the control device is used to choose different virtual worlds with different educational settings; the software application is configured to utilize the processing power of the control device to perform functions that are not available in the physical toy; the software application is configured to utilize computer vision and image processing functions to build new educational blocks in Visual Programming Language (VPL) software; the new educational blocks are used for voice recording and music playback, while the physical toy is used to capture a video stream; the software application is used to link with several classes and instances of virtual toys simultaneously in a lesson; and the control device further comprises Visual Programming Language (VPL) programming, which is used to switch between different virtual toys or physical toys. The virtual world can be shared among different users in different schools and can host competitions or social activities; one or more virtual robots in the virtual world can be programmed by one user; the virtual robots can hold educational and entertainment competitions in the virtual world, and the physical robots can hold educational and entertainment competitions in the real world.
The illustrated education system may further comprise a sensor, wherein the sensor is used to detect signals and send the detected signals to a virtual toy in the control device. The sensor may be a gesture sensor: the gesture sensor is used to receive dancing gesture signals and send them to the virtual toy, and the virtual toy is commanded by the dancing gesture signals. The gesture sensor can also receive dancing gesture signals created by different users who dance simultaneously before the gesture sensor.
Other advantages and features will be apparent from the following detailed description when read in conjunction with the attached drawings.
It should be understood that the drawings are not necessarily to scale and that the disclosed embodiments are sometimes illustrated diagrammatically and in partial views. In certain instances, details which are not necessary for an understanding of the disclosed system, or which render other details difficult to perceive, may have been omitted. It should be understood, of course, that this disclosure is not limited to the particular embodiments illustrated herein.
Referring now to the drawing, and with specific reference to FIG. 1, an education system may comprise a control device, a physical toy in the real world, a Visual Programming Language (VPL) interpreter, a cloud service system comprising a cloud server, and a virtual toy in a virtual world created by the control device. The physical toy is an automation device, physical robot, or toy robot which can communicate with the control device. The virtual toy, the VPL interpreter, the cloud server and the control device are used for programming, wherein the control device is a tablet computer or a similar computer system or a virtual reality device.
In the illustrated embodiment, a physical toy (1) has a corresponding software 3D model (2) which lives in a virtual world (8), wherein the software model (2) is also called a virtual robot or virtual toy. The virtual robot (2) is equipped with movable parts that can be controlled by first computer programming (3) integrated in the control device, wherein the first computer programming (3) is written in a Visual Programming Language (VPL). Some examples of VPL software are Scratch from MIT Media Lab and Google Blockly. In addition, the virtual robot (2) is equipped with sensor parts that can send signals and triggers to the control device, and the virtual robot (2) is also equipped with actuators (5) which have similar functions to its physical-world counterpart. The virtual toy is integrated alone in a virtual world or used to connect to a corresponding counterpart physical toy in the real world. The control device is further used to opt to connect to a physical toy or a virtual toy.
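To make the relationship between the movable parts, the sensor parts and the control device easier to follow, here is a minimal sketch in Python. It is an illustration only, not the implementation described in the filing; every class, method and signal name below is an invented assumption.

    class VirtualToy:
        """Software counterpart of a physical toy, living in a virtual world."""

        def __init__(self, name):
            self.name = name
            self.actuators = {}   # movable parts, driven by the control device
            self.listeners = []   # callbacks standing in for the control device

        def move(self, part, position):
            """Drive a movable part; the first computer programming would call this."""
            self.actuators[part] = position

        def on_sensor(self, callback):
            """Register a control-device callback for sensor signals and triggers."""
            self.listeners.append(callback)

        def emit(self, signal):
            """Simulate a sensor part firing and notify the control device."""
            for callback in self.listeners:
                callback(self.name, signal)

    robot = VirtualToy("virtual-robot")
    robot.on_sensor(lambda name, sig: print(f"{name} triggered: {sig}"))
    robot.move("left_arm", 45.0)      # commanded by the VPL program
    robot.emit("obstacle_detected")   # sensor part signals the control device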
In addition, the education system can integrate multiple virtual toys by using different translators, one or more virtual toys are further integrated in the virtual world, and the control device is used to choose different virtual worlds with different educational settings. The control device is further used to decide how many virtual toys are integrated in the virtual world, and one or more virtual toys are integrated in the virtual world.
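The "different translators" idea can be pictured as a lookup layer that converts generic VPL blocks into toy-specific commands. The sketch below is a hedged illustration; the toy types and command strings are invented for the example and do not come from the disclosure.

    # Each toy type gets its own translator table (all values invented).
    TRANSLATORS = {
        "wheeled_robot": {"forward": "drive(+1)", "spin": "rotate(90)"},
        "walking_robot": {"forward": "step(2)", "spin": "turn(90)"},
    }

    def translate(toy_type, vpl_block):
        """Convert a generic VPL block into a toy-specific command."""
        try:
            return TRANSLATORS[toy_type][vpl_block]
        except KeyError:
            raise ValueError(f"no translator for {toy_type!r}/{vpl_block!r}")

    # The control device integrates two different toys in one virtual world.
    for toy in ("wheeled_robot", "walking_robot"):
        print(toy, "->", translate(toy, "forward"))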
As shown in FIG. 1, the disclosed education system also has second computer programming (4) integrated in the control device. When the virtual robot (2) sends signals and triggers by sensor parts, the second computer programming (4) receives the signals and triggers.
The control device can opt to connect to a physical robot (6) or a virtual robot (7) to perform third computer programming, which is Visual Programming Language (VPL) programming. In addition, the virtual robot (2) resides inside a virtual world (8), and the virtual world is different in different educational settings; for example, different courses and lessons would have a different virtual world. Translators, stencils, programs and other information to facilitate a lesson can be saved and/or downloaded from a server according to user accounts. When a lesson requires, one or more virtual robots (9) can reside in one virtual world, and the user can switch to different robots by using the third computer programming (10), which is VPL programming.
The same virtual world can be shared among students, tutors, or even students and tutors in different schools. When they enter the same virtual world, competition or social activities can happen. The virtual world system links different users in the same virtual world, where they can compete and socialize by using virtual robots programmed by VPL. The virtual world is used to provide competition or social activities, one or more virtual robots in the virtual world can be programmed by one user, the virtual robots are used to hold educational and entertainment competitions in the virtual world, and the physical robots are used to hold educational and entertainment competitions in the real world.
At the same time, there are real robots in the physical world belonging to different users. By controlling the real robots, the users can compete and socialize with each other through corresponding virtual robots. In the illustrated embodiment, the education system may comprise a software application. The software application can link with several classes and instances of virtual toys simultaneously to form a lesson. The software application can also utilize the processing power of the control device to perform functions that are not available in the physical toy ([0014]); for example, using computer vision and image processing functions to build new educational blocks in the VPL software while the physical toy ([0014]) can only capture a video stream. The new educational blocks can be used for voice recording and music playback, etc. The VPL software is Scratch from MIT Media Lab or Google Blockly.
Referring now to FIG. 2, another embodiment of an education system in accordance with the teachings of the disclosure is constructed. In this embodiment, the education system comprises a control device, a physical toy (1), one or more virtual robots (2), the first computer programming (3), the virtual world (8), a gesture sensor (11), and one or more users (12). The virtual world (8) has many virtual robots (2), and one or more virtual robots (2) can be programmed by one user (student or tutor). It is also possible that the user (student or tutor) connects to another virtual robot in the same control device at the same time by using a sensor (11). For example, the virtual robot is commanded by a user (12) dancing in front of a gesture sensor in the real world, and the gesture sensor is used to receive dancing gesture signals created by different users who dance simultaneously before the gesture sensor, wherein the gesture sensor can be a Kinect. The disclosed invention can extend to a Virtual Reality (VR) apparatus (like Oculus) in place of the control device (like an iPad or tablet computer) for viewing of the virtual robots and virtual world.
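How dancing gesture signals might drive a virtual robot can be sketched as a simple mapping from recognized gestures to robot commands. The gesture names, the command vocabulary and the two-user stream below are all hypothetical; a real Kinect skeleton-tracking pipeline is not shown.

    # Hypothetical gesture-to-command table; none of these names are from the filing.
    GESTURE_TO_COMMAND = {
        "raise_both_arms": "jump",
        "step_left": "slide_left",
        "clap": "spin",
    }

    def command_robot(gesture_stream):
        """Yield a virtual-robot command for each recognized dance gesture."""
        for user, gesture in gesture_stream:
            command = GESTURE_TO_COMMAND.get(gesture)
            if command:
                yield user, command

    # Two users dancing simultaneously before the same sensor.
    stream = [("user-a", "clap"), ("user-b", "step_left"), ("user-a", "bow")]
    for user, command in command_robot(stream):
        print(f"{user} -> virtual robot does: {command}")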
While only certain embodiments have been set forth, alternatives and modifications will be apparent from the above description to those skilled in the art. These and other alternatives are considered equivalents and within the spirit and scope of this disclosure and the appended claims.
DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the disclosed education system, reference should be made to the embodiments illustrated in greater detail in the accompanying drawings, wherein:
FIG. 1 is an embodiment of an education system constructed in accordance with the teachings of the disclosure;
FIG. 2 is another embodiment of an education system constructed in accordance with the teachings of the disclosure.
12 Must Eat Dishes in Makassar
Makassar has always been a great destination for traveling, not only because of its friendly residents and tourist destinations; the culinary dishes in Makassar have great taste as well.
If you are interested in tasting Makassar's signature dishes, you can visit Makassar and experience them yourself. Makassar's signature dishes are famous for being rich in extraordinary flavors, with a special aroma, attractive servings and a tempting appearance.
1. Coto Makassar
There is something missing if you take a trip to South Sulawesi without trying Coto Makassar.
This typical Makassar food gives a deep impression to anyone who has ever enjoyed it.
This dish is rich in spices, which makes it taste 'very crunchy' and leaves an unforgettable taste on your tongue.
In Makassar, there are several recommended places if you are interested in trying this cuisine. One of them is Coto Nusantara, where the dish is served with thick gravy and large pieces of meat, plus extra chives and fried onions served separately.
2. Sop Konro
One more typical Makassar dish that you should try is sop konro.
This soup is usually made with beef ribs or beef with blackish brown sauce from kluwek and served with pieces of ketupat.
The taste of the sauce is very strong and delicious because it uses a variety of spices and a lot of coriander.
Usually ribs and beef in konro soup are served with various variations such as roasted or original konro. The most famous and legendary place to taste this typical Makassar soup is Konro Karebosi.
3. Mie Titi
Makassar is famous for its noodle dishes as well; one of them is called mie titi.
Mie titi is a typical Makassar dry noodle which is usually served with thick broth made from chicken broth, starch, eggs and various spices.
Usually this noodle is served with complementary sliced chicken, mushrooms, shrimp, squid, and lime slices. The dry texture combined with the sauce is very unique, and you should try it.
4. Pallubasa
This dish is served with thick grated coconut, which makes it taste delicious.
Unlike Coto Makassar and Sop Konro which are served with ketupat, Pallubasa uses warm white rice.
Pallubasa is usually served while it's hot, poured directly into a small bowl and topped with a raw chicken egg. Even so, it still tastes savory.
5. Pisang Epe
Pisang epe, just as its name suggests, is made of banana.
Here it is: a sweet taste of Makassar that can't be missed at all, for this sweet pisang epe is very delicious.
Moreover, the cooking method is unique: the banana is squeezed or flattened and then grilled over hot coals.
The combination of the burning taste of the banana and palm sugar and grated coconut sauce makes this culinary a favorite dish to taste in Makassar.
Besides the plain banana flavor, three other flavors are available: chocolate, cheese and durian.
6. Jalangkote
Jalangkote is a snack that is almost similar to Indonesian favorite snack, Fried Curry Puff.
However, the outer skin of jalangkote is thinner than that of the original pastel. The ingredients for the stuffing consist of potatoes, carrots, bean sprouts, eggs, noodles and rice noodles.
This snack is eaten in a unique way: you enjoy it with a sauce made of a mixture of rawit (bird's-eye chili), red chili, garlic, onion, salt, sugar and vinegar.
Surprisingly, the chili sauce makes it taste even more delicious and savory.
7. Es Pisang Ijo
Main course, snack, and now here we come to the most favorite appetizer in Makassar.
Es pisang ijo or ice green banana is a dish that becomes the symbol of Makassar and is very popular not only in Makassar but in other regions as well.
Es pisang ijo is made from banana wrapped in a green flour batter, served with marrow porridge, red syrup, coconut milk sauce and sweet white condensed milk over shaved ice. Es pisang ijo should not be missed when coming to Makassar.
8. Sop Ubi
Sop ubi is one of the traditional dishes made from ubi (cassava), which is very easy to obtain in Makassar.
At a glance, this dish looks similar to soto. But the difference is that the broth of sop ubi is clearer, not solid yellow like the broth in soto Makassar.
With a combination of various ingredients such as vermicelli, eggs, small pieces of meat, bean sprouts and peanuts, sop ubi has also become one of the must-try items on the culinary list of the city of Anging Mammiri.
9. Sop Saudara
The unique name of this soup was originally inspired by the name coto paraikatte.
In Makassar language paraikatte means brother or neighbor.
It is said that sop saudara was first made by H. Dollahi, who opened his own meat soup stall business in 1957. At that time, his stall was named sop saudara.
Then his idea attracted the interest of food lovers from all around the nation, until finally his soup became very popular in Makassar.
Sop saudara is made from basic ingredients of beef which are usually served with complementary ingredients such as rice noodles, potato cakes, beef offal, and boiled eggs.
10. Pisang Epe Losari
Makassar is very popular with banana dishes.
Makassar has a banana dish called pisang epe Losari which is very popular because of its delicious taste and can be found in Losari only.
In order to make pisang epe, the type of banana used is a half-ripe kepok banana. Why must it be half-ripe?
Because the banana will produce a texture that is soft but not too mushy. The pisang epe preparation process is very simple.
The peeled bananas are roasted over the coals until they are half done. After that, the banana is placed on a tool made of wooden blocks and pressed until it is flat or nearly flat.
Then the banana is roasted again. Once grilling is complete, the banana is served with a splash of durian or jackfruit flavored molten sugar.
Others
- Barongko: Barongko is a typical Bugis-Makassar regional dessert in the form of a very soft banana cake. Banana, as the main raw material, is processed to produce a sweet dish. The banana flesh is mashed together with other ingredients such as eggs, sugar, salt and milk powder, then wrapped in banana leaves.
- Gogoso: Gogoso is one of the typical Bugis Makassar food that is very popular during Eid celebration. Gogoso is made from glutinous rice which is grilled in a package of banana leaves, generally served along with salted eggs. | https://factsofindonesia.com/must-eat-dishes-in-makassar |
Students leaving Marjory Stoneman Douglas High School in Parkland, Fla., after last month’s shooting. The suspect, Nikolas Cruz, is white, and far from evading disciplinary procedures, he had been expelled from the school. Credit: Saul Martinez for The New York Times
WASHINGTON — After a gunman marauded through Marjory Stoneman Douglas High School last month, conservative commentators — looking for a culprit — seized on an unlikely target: an Obama-era guidance document that sought to rein in the suspensions and expulsions of minority students.
Black students have never been the perpetrators of the mass shootings that have shocked the nation’s conscience nor have minority schools been the targets. But the argument went that any relaxation of disciplinary efforts could let a killer slip through the cracks.
And this week, President Trump made the connection, announcing that Education Secretary Betsy DeVos will lead a school safety commission charged in part with examining the “repeal of the Obama administration’s ‘Rethink School Discipline’ policies.”
To civil rights groups, connecting an action to help minority students with mass killings in suburban schools smacked of burdening black children with a largely white scourge.
“Yet again, the Trump administration, faced with a domestic crisis, has responded by creating a commission to study an unrelated issue in order to ultimately advance a discriminatory and partisan goal,” said Sherrilyn Ifill, the president and director-counsel at NAACP Legal Defense and Educational Fund Inc.
“School shootings are a grave and preventable problem, but rescinding the school discipline guidance is not the answer,” she said. “Repealing the guidance will not stop the next school shooter, but it will ensure that thousands more students of color are unnecessarily ushered into the school-to-prison pipeline.”
The issue of the Obama-era discipline guidance was raised formally by Senator Marco Rubio, Republican of Florida, who, after seeing a flurry of conservative news media reports, wrote a letter to Ms. DeVos and Attorney General Jeff Sessions questioning whether the guidance allowed the shooting suspect, Nikolas Cruz, to evade law enforcement and carry out the massacre at Stoneman Douglas High.
It was, on its face, an odd point: Mr. Cruz is white, and far from evading school disciplinary procedures, he had been expelled from Stoneman Douglas.
“The overarching goals of the 2014 directive to mitigate the school-to-prison pipeline, reduce suspensions and expulsions, and to prevent racially biased discipline are laudable and should be explored,” Mr. Rubio wrote, asking that the guidance be revised. “However, any policy seeking to achieve these goals requires basic common sense and an understanding that failure to report troubled students, like Cruz, to law enforcement can have dangerous repercussions.”
Broward County educators and advocates saw Mr. Rubio’s letter as an indictment of a program called Promise, which the county instituted in 2013 — one year before the Obama guidance was issued — and has guided its discipline reforms to reduce student-based arrests in Broward County, where Stoneman Douglas is.
The N.A.A.C.P. said that Mr. Rubio “notably backs away from raising the purchase age for assault-style rifles and restricting magazine capacity,” and instead focuses on a system that once sent one million minority students to Florida jails for “simple and routine discipline issues ranging from talking back to teachers to schoolyard scuffles.”
In a tweet on Tuesday, Mr. Rubio noted that the gunman was not in the Promise program, but had displayed violent and threatening behavior.
“The more we learn, the more it appears the problem is not the program or the DOE guidance itself, but the way it is being applied,” Mr. Rubio said, referring to the Education Department. “It may have created a culture [that] discourages referral to law enforcement even in egregious cases like the #Parkland shooter.”
Long before the attack in Parkland, Fla., the 2014 discipline guidelines, which encouraged schools to examine their discipline disparities and to take stock of discriminatory policies, were already on Ms. DeVos’s radar — but not because they were seen as a possible culprit in the next school shooting. Conservatives were using the Trump administration’s effort to rein in federal overreach to reverse policies designed to protect against what the Obama administration had seen as discriminatory practices.
The “Rethink Discipline” package that Mr. Trump’s commission will examine includes guidance that the Obama administration issued on the legal limitations on the use of restraints and seclusion, corporal punishment and equity for special education students.
In recent months, educators and policy experts from across the country have traveled to Washington to voice support for and opposition to the disciplinary guidance, in private meetings with officials at the Education Department and in a series of public forums.
Since the discipline guidelines were issued, conservatives have blamed the document for creating unsafe educational environments by pressuring schools to keep suspension numbers down to meet racial quotas, even if it meant ignoring troubling and criminal behavior. Teachers who sought suspensions or expulsions of minority students were painted as racists, conservatives maintained.
“Evidence is mounting that efforts to fight the school-to-prison pipeline is creating a school climate catastrophe and has if anything put at-risk students at greater risk,” said Max Eden, a senior fellow at the conservative Manhattan Institute, who argued that teacher bias was not the driving force behind school discipline.
But proponents argued that racial bias was well documented.
When the guidance was issued, federal data found that African-American students without disabilities were more than three times as likely as their white peers without disabilities to be expelled or suspended, and that more than 50 percent of students who were involved in school-related arrests or who were referred to law enforcement were Hispanic or African-American.
“Children’s safety also includes protection from oppression and bigotry and injustice,” Daniel J. Losen, director of the Center for Civil Rights Remedies at the University of California at Los Angeles’s Civil Rights Project, wrote in testimony to the Civil Rights Commission. “Fear-mongering and rhetoric that criminalizes youth of color, children from poor families and children with disabilities should not be tolerated.”
The Education and Justice Departments wrote in a 2014 Dear Colleague letter that discipline disparities could be caused by a range of factors, but the statistics in the federal data “are not explained by more frequent or more serious misbehavior by students of color.” The departments also noted that several civil rights investigations had verified that minority students were disciplined more harshly than their white peers for the same infractions.
“In short, racial discrimination in school discipline is a real problem,” the guidance said.
In recent months, Ms. DeVos has said change will be coming. She has already moved to rescind a regulation that protects against racial disparities in special education placements. Her goal, she said last month, was to be “sensitive to all of the parties involved.”
In a bruising interview on “60 Minutes” on Sunday, Ms. DeVos said that the disproportionate discipline issue “comes down to individual kids.” She declined to say whether she believed that black students disciplined more harshly for the same infraction were the victims of institutional racism.
“We’re studying it carefully and are committed to making sure students have opportunity to learn in safe and nurturing environments,” she said.
Ms. DeVos’s office for civil rights also announced that it would scale back the scope of investigations, reversing an approach taken under the Obama administration to conduct exhaustive reviews of school districts’ practices and data when a discrimination complaint was filed.
But Ms. DeVos’s own administration has continued to find racial disparities. In November, the Education Department found that the Loleta Union Elementary School District in California doled out harsher treatment to Native American students than their white peers. For example, a Native American student received a one-day out-of-school suspension for slapping another student on the way to the bus, in what was that student’s first disciplinary referral of the year. A white student received lunch detention for slapping two students on the same day — the student’s fifth and sixth referrals that year.
While Mr. Cruz was repeatedly kicked out of class and ultimately expelled, it is unclear whether he was ever referred to the police for his behavior in school. However, Mr. Cruz was known to law enforcement, which never found cause to arrest him, and a report of troublesome behavior to the F.B.I. went unheeded.
The Broward County superintendent, Robert Runcie, said that Mr. Rubio’s effort to connect the district’s discipline policies to the Stoneman Douglas shooting was misguided.
“We’re not going to dismantle a program that’s been successful in the district because of false information that someone has put out there,” Mr. Runcie said on Twitter. “We will neither manage nor lead by rumors.”
A version of this article appears in print on March 14, 2018, on Page A10 of the New York edition with the headline: Trump Points to Culprit in School Shootings, and It Isn’t Guns.
| |
The tale unfolds in China, a place as yet unknown and mysterious to Tintin. It looks like our hero may have bitten off more than he can chew as he takes on the task of wiping out the international opium trade, which has a vice-like grip on this beautiful country. With the assistance of the secret society Sons of the Dragon, and his friend Chang (whom he encounters later on in the story), Tintin succeeds in overcoming myriad obstacles to finally triumph over his adversaries and disband their network of corruption.
For many fans, The Blue Lotus is filled with a certain angst not found in Tintin's other adventures. At times it has a desolate quality, as Tintin finds himself alone in the vastness of China, the most populous country in the world. Solitude is not his only problem, however, as he has to face treachery, conspiracy, a death sentence and madness alongside a routine of physical threats and even natural disasters, all getting in the way of his worthy mission. Tintin's resoluteness and ability to overcome these myriad pitfalls is the true measure of his success.
The dragon possesses awesome powers, and is traditionally associated with protection and warding off evil spirits, in a similar way to the gothic gargoyles of past times.
Anna May Wong, the actress who played Hui Fei in Shanghai Express, was the subject of an article in A-Z magazine. Accompanying the article was a picture of the young woman posing in front of a red dragon emerging from a black background. This photo was undoubtedly the inspiration for the book cover of The Blue Lotus, from the first edition in 1936 to the reprints of 1942.
From December 1946, at the time the book was published in colour, the cover was altered and thenceforth displayed a black dragon on a red background - red being a colour symbolising mystery.
In the film Shanghai Express, produced by von Sternberg and featuring Marlene Dietrich in the role of Shanghai Lily, a mysterious telegram refers to a Blue Lotus.
The ethereal colour blue is generally considered to symbolise infinity. In 1933, Shanghai Express had its first screening in Europe.
In his preceding adventure, Cigars of the Pharaoh, Tintin unwittingly interfered in the filming of a key scene in a Rastapopoulos movie blockbuster entitled Arabian Knights - in the original French edition entitled Petite-fille de Sheik (The Sheik's Grand-daughter) or Haine d'Arab (Arab Hate).
In The Blue Lotus, Tintin finds himself hiding in a cinema to escape some soldiers, and ends up watching the scene itself in a trailer on the big screen. Billboards outside the cinema display the title of the film as The Sheik Hate. The title is expressive of vengeance and retribution, recurrent themes in films of a certain era such as The Sheik (1922) and The Son of the Sheik (1926), which propelled actor Rudolf Valentino (1895-1926) into Hollywood stardom.
Keen to raise the realism in his stories to hitherto-unseen levels, Hergé took the advice of a priest, Father Gosset, who introduced him to a 27-year-old Chinese sculpture student at the Brussels Académie des Beaux-Arts. This young man's name was Chang Chong-chen. With great care and in great detail, Chang described a cultural, artistic and political panorama of China, and in doing so opened Hergé's eyes to a world he had never known. He immediately set out to incorporate as much of the information he was learning from his young Chinese teacher into his artwork and narrative.
Through his meetings with Chang and his efforts to raise his game, the masterpiece of The Blue Lotus was created. This was a turning point for Hergé, and from now on every Tintin adventure would be meticulously researched. Never again would he fall back on clichés and crude stereotypes. For these reasons it's now common to divide The Adventures of Tintin into "pre Blue Lotus" and "post Blue Lotus".
In 1937, Japan occupied Northeast China. At the time, many of the great Western powers were present in China, administering small pockets of territory called international "settlements" or "concessions".
At one point during the narrative, Tintin is bounced about between settlements as the corrupt chief of police Dawson kicks him out of the British sector, and he is transferred across Chinese territory into the Japanese-controlled zone.
Despite being in a completely foreign culture, Tintin is moved to stand up against the intolerance and outright abuse of the local populace by his fellow Western compatriots. He shows the same strength of character and courage in the face of the Japanese, who enslaved and oppressed Chinese inhabitants of occupied areas. From page 19 until the end of The Blue Lotus, Tintin dresses in Chinese style, not as a facile gesture of solidarity, but rather to blend in with the crowd!
Tintin heroically rescues a young boy from drowning, and they soon become friends. The boy is amazed to have had his life saved by a white person, as he recounts that his grandparents were massacred during the international crackdown and reprisals against the Boxer Rebellion, which threatened the European delegations in 1900. This is how Tintin and the young Chang Chong-chen met. Tintin will never forget his new friend.
Tintin takes pains to enlighten his new companion Chang about Europeans, telling him about their mistaken and mistrusting clichés and caricatures of Chinese culture.
Despite the harsh lessons he has learned, the young boy is so flabbergasted by the common misconceptions of the Chinese that Tintin recounts, he bursts out laughing.
Chang's tearful farewell expresses the ancient poetic tradition of his country, dubbed the Celestial Empire. Although he isn't used to such emotional goodbyes, Tintin echoes his friend's sentiments. Mr. Wang Chen-yee completes the heartfelt moment with elegant words steeped in Asian spirit. | http://en.tintin.com/albums/show/id/29/page/36 |
Special Issue "Artificial Intelligence in Lung Diseases"
A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".
Deadline for manuscript submissions: closed (31 October 2021).
Special Issue Editor
Interests: thoracic pathology (lung, pleura, and mediastinum)
Special Issue Information
Dear Colleagues,
Precision medicine, and more specifically artificial intelligence (AI) in the diagnosis and treatment of pulmonary diseases, has evolved in important ways. The developments in thoracic imaging, thoracic pathology, and thoracic oncology are meaningful components that play a role in bringing those modalities to the bedside. Digital radiology as well as digital pathology are becoming portable specialties that, combined with advanced oncological algorithms, can be used for the betterment of patients afflicted by the different thoracic diseases. Such technological advancements should not be limited to neoplastic diseases but should also include other non-neoplastic processes to which such technology is applicable. In our current practice, the team approach to disease seems to have a better impact on the clinical outcomes of patients; therefore, it is highly important that these technologies, as they advance, become part of the armamentarium of tools that all clinicians need to be familiar with, and possibly, by having this team approach, the different aspects of our individual specialties will also expand.
Diagnostics is dedicating one Special Issue to the role of AI in pulmonary diseases, in light of the technological advancements taking place. The goal is to bring such advancements to all individuals involved in the care of patients afflicted by the gamut of pulmonary diseases. It is our hope that you will contribute to this Special Issue, whether in pathology, radiology, oncology, or pulmonary medicine.
Prof. Cesar A. Moran
Guest Editor
Manuscript Submission Information
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access monthly journal published by MDPI.
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions. | https://www.mdpi.com/journal/diagnostics/special_issues/AI_lung |
Perhaps those abstract lessons of Inside Out are more important to a different audience — parents.
For months, I’d looked forward to seeing Pixar’s Inside Out with my daughter, Liddy. Amy Poehler and Mindy Kaling turning feelings into characters! Characters we could understand and laugh at and talk about! When you’re the parent of a nine-year-old girl who is caught up in the fierce turmoil of third-grade life, that is no small thing. Plus, you know, popcorn and Reese’s Pieces.
I brought Liddy and her friend Charlotte, and we loaded up on snacks and settled into our seats just as the previews were wrapping up. The movie is a quick ninety minutes, and none of us even took a bathroom break. We were enthralled. We all loved it as much as I’d hoped, and when it ended the girls talked, rapid-fire, about the antics of Joy, Fear, Anger, Sadness, and Disgust.
Even if you haven’t seen the movie, you may have heard by now (spoilers ahead) that Amy Poehler’s Joy works hard to keep the other emotions in check as they all pilot the day-to-day life of an eleven-year-old girl named Riley. Sadness (voiced by Phyllis Smith of The Office) proves especially challenging in this regard as Riley faces a big move, family stress and friendship challenges.
But in the end, Sadness proves her worth. Joy learns, the hard way, the lesson so many of us are still working on. Sadness is a part of life. Efforts to banish her, distract her or leave her in the past, no matter how creative, just won’t work. And it turns out Riley needs Sadness as much as she needs Joy, in order to get the help and support of the people who love her. Allowing space for Sadness means she doesn’t have to go it alone.
So what did Liddy and her friend make of all this? When I asked them how those battles between emotions worked out for Riley, and how they might play out in real lives, like theirs, they looked at each other and kind of shrugged.
“I’ve never really thought of little feeling-creatures in your head controlling you,” Liddy said, with a hesitant smile. She looked like she was afraid to disappoint me.
“But let’s say we did think about them that way,” I said. “Like, Fear, for example. I can think of times in my life when Fear is around, you know, making his list of worst-case-scenarios like he did to Riley in the movie.”
“Um. That’s just … weird,” the girls said, laughing.
Later, when Liddy said that Mindy Kaling’s Disgust was her favorite character, I tried to make the connection once more. I wondered aloud if she could remember a time she felt Disgust working in her own life, thinking she might mention something about her brother burping or stealing a gulp from her water bottle.
“Mom,” she said — gently, as if to suggest I was still missing the point. “It really just feels like characters in a movie.”
I shared this anecdote with a child psychiatrist I know, who laughed and said that it all sounded exactly right. She reminded me that, even heading into adolescence, kids are very concrete thinkers. They simply want to fall in love with characters and soak up a good story.
“But is there some way parents could talk with their kids about the movie, that might help them make those connections?”
“I think you can just let her enjoy the story for what it is,” she smiled.
She helped me see that perhaps those abstract lessons of Inside Out are more important to a different audience — parents. Because maybe, for some of us (cough cough), it’s not our kids who need to work on accepting their Sadnesses. It’s us. | https://brainchildmag.com/tag/feelings/ |
World War I was the result of chain-reacting events, originating from the crises in the Balkans, which led to the collapse of the Bismarckian alliances. The creation of the Triple Entente and the Triple Alliance further escalated the tension, contributing to the outbreak of the war. The increase in international tension, caused by the division of Europe into two armed camps, provoked fear of war and prompted military alliances and an arms race.
The system that maintained the balance of power in Europe after the creation of the German Empire was the Three Emperors' League, established between Germany, Austria-Hungary, and Russia. This alliance guaranteed that when any one member state took military action against a non-member, particularly France or the Balkan nations, the other two would remain neutral. However, this system did not last long. At the Congress of Berlin, the peace conference concluding the First Balkan Crisis, Britain, concerned that growing Russian power at the expense of the Ottoman Empire would tilt the balance of power in Russia's favor, secured Constantinople and the Balkans away from Moscow's dominion.
Subsequently, during the Second Balkan Crisis, Russia warned it was ready to occupy Bulgaria if it did not yield to Serbian claims, at which point Austria-Hungary stepped in to support Bulgaria. As a result of Russia's obvious political losses at the Congress of Berlin, and Germany's decision to support Austria-Hungary instead of Russia during the Second Balkan Crisis, Russia felt betrayed by Germany. Russia abandoned its alliance with Germany in the Three Emperors' League and concluded an alliance with France. Unfortunately, the breakdown of the Three Emperors' League damaged the balance of power and became the initial cause of the war.
Bismarck's dismissal by Kaiser William II in 1890 made the situation even worse. The traditional dislike of Slavs kept Bismarck's successors from renewing the understanding with Russia. This gave France an opportunity to ally with Russia. At any rate, we can see that the menace of the hostile division led to an arms race, reflected in the increase in military expenditure between 1890 and 1900.
However, the direct cause of the war was Austria-Hungary's declaration of war on Serbia after the Assassination in Sarajevo in 1914. At the very beginning, Russia ordered a partial mobilization only against Austria-Hungary in support of Serbia, as the leader of the Slav nations. Yet eventually, this escalated into a general mobilization. Although Germany, which feared it would face attacks from both Russia and France, asked Russia to demobilize, Russia refused to do so. Soon after, Germany declared war against Russia and France, which had also refused Germany's request to stay neutral, and the First World War had begun.
In summary, the Three Emperors' League was a worthy system for preserving the balance of power, and it could have properly managed Europe's relations concerning the Ottoman dominion over the Balkans. To put it more concretely, the Assassination in Sarajevo could not have caused a "World War" if this system had still been functioning.
YOGA DURING PREGNANCY – 1
Earlier generations of women were used to doing household chores themselves. This ensured that their bodies remained flexible, and the need for separate physical exercise did not arise. Our modern lifestyle has made life ‘easier’, but our bodies have become less flexible.
The muscles used during childbirth remain under-utilized in our day-to-day lives. While many muscles participate in childbirth, attention is usually, and mistakenly, focused on the abdominal and pelvic muscles with respect to labour. In reality, the thigh, back, neck, and even the facial muscles, directly or indirectly participate in this process. Strengthening these muscles as well, through specific exercises, will ensure a smooth delivery.
During pregnancy, all physical movement should be carried out with care to avoid problems for the mother and the unborn child. This does not mean that exercise or routine work should be avoided; on the contrary, too much rest and relaxation can prove disadvantageous, and hinder natural labour. The expectant mother should continue to carry out simple, routine household chores, as well as specific exercises beneficial for pregnancy. Another important advantage of such exercises is that they help to bring the foetus into its natural birth position in the womb, necessary for an easy and natural delivery.
The duration of pregnancy is divided into three phases, known as the first, second, and last trimesters. One can maximize the benefits of exercises by focusing on those that are meant for each trimester. Do consider the growth of the foetus and the physical changes occurring in the mother at this time.
SHAKTISANCHARAN (ENERGY FLOW)
1. Sit in sukhasana, hands resting on the thighs.
2. While inhaling, slowly raise the arms sideways up to 60 degrees, as shown in the illustration, with thumbs facing upwards and fingers lightly flexed to form a fist. The arms should be straight.
3. Continue to breathe slowly and deeply. Each cycle of inhalation and exhalation should last for about 20-30 seconds initially.
4. To complete the exercise, while inhaling, raise both arms above the head, with palms facing outward, in such a way that the backs of the hands touch each other. Exhale.
5. Inhale again, and while exhaling, lower the arms slowly and place them again on the thighs.
Note: The eyes should be steady and focused, or, alternatively, they may remain closed lightly.
As you get better at this asana, you should hold your arms up in the illustrated position and breathe slowly and deeply for up to a minute, for the best effect of this exercise. If there is discomfort in the arms while doing the asana, they should not be lowered suddenly. The above sequence should be followed through to completion.
This asana strengthens the neck, shoulder and back muscles, and improves the supply of pran (life energy) to these regions. It leads to an improved flow of energy in the body, especially in the spine. This improved flow of energy, along with deep breathing, increases the tolerance for pain, and is thus helpful during labour. This asana should be a part of the daily exercises throughout pregnancy, if possible. | https://events.santulan.in/books/yoga-during-and-after-pregnancy/ |
In 2019, the 11th Street Bridge Park will become the District of Columbia’s first elevated park, connecting the historic Anacostia and Capitol Hill neighborhoods that are geographically divided by the Anacostia River.
Old bridge piers as they are today.
The 11th Street Bridge Park—a project of the Ward 8 based nonprofit Building Bridges Across the River at THEARC—will span the Anacostia River, constructed on top of the repurposed piers, which once supported the old 11th Street Bridge. From the beginning, community engagement and feedback have driven the conceptualization and design of the park.
Work on the Bridge Park began in 2011 and the first two years were filled with hundreds of neighborhood meetings on both sides of the river leading to the identification of 13 community-identified principles and six programming concepts for the design of the park.
In 2014, Bridge Park staff led a nation-wide design competition to manifest these programming concepts into actual renderings. As part of the competition, it was critical to ensure an active voice for neighborhood stakeholders. A Design Oversight Committee was formed and populated by residents, business owners and nearby stakeholders like the National Park Service, U.S. Navy Yard, and the local boating community. This committee reviewed and edited the competition brief before it was seen by a single architect and met with the four final teams several times, providing valuable on-going feedback during the design process.
At the end of a seven-month effort, the Committee used the same criteria as the formal competition jury to unanimously select the winning team of OMA+OLIN. OMA+OLIN’s inventive design proposed building a new set of steel trusses over the old piers that almost doubled the usable space and provided structural elements to protect visitors from the hot summer sun and cold winter winds.
In December 2016, the design team, along with the engineering firm Whitman, Requardt and Associates, began pre-construction work that is expected to last one and a half years. Construction could start as early as mid-2018, with the park opening in late 2019.
Throughout this community-led process, it became clear that the Bridge Park had the potential to be more than just an innovative public space.
In particular, the Bridge Park can symbolize a new unity and connection between a booming area of the city and one that has long been overlooked and excluded from the city’s economic progress. While many park design strategies can, and will, be implemented to increase the connectivity and interaction between those living on both sides of the river, more must be done to ensure that those currently living near the 11th Street Bridge Park will benefit on a continued basis from the success of the park.
This is especially important for D.C. residents and small businesses located east of the river that have thus far been largely excluded from the city’s economic progress. Decades of disinvestment, coupled with the economic, racial and geographic segregation of those living in Wards 7 and 8, mean that many of the communities east of the river—especially those closest to where the Bridge Park touches down—are areas of low homeownership, as well as high poverty and unemployment.
Indeed, the most recent data from the American Community Survey reveal multiple census tracts where over 40 percent of families live in poverty and over 60 percent of residents 16 and older are either unemployed or not in the labor force. While the Bridge Park’s design strategies will increase connectivity and interaction between those living on both sides of the Anacostia River, more must be done to ensure that residents and small businesses nearby will continually benefit from the success of this signature new civic space.
To assist with its equitable development work, the Bridge Park turned to LISC DC, a community development organization with roots in Washington, D.C. for over 30 years.
With the help of LISC DC, Bridge Park staff formed an Equitable Development Task Force in the Fall of 2014 consisting of research and planning experts to review background data of the surrounding area, and to help guide the formation of an Equitable Development Plan.
The goal of the process was to ensure that future equitable development brainstorming sessions were based in a clear and objective understanding of surrounding neighborhood demographic and economic characteristics.
For much of 2015, the Equitable Development Task Force held brainstorming sessions with neighborhood residents, stakeholders, government officials, business owners and policy experts to identify actionable recommendations that Bridge Park and its partners can take in three areas: Workforce Development, Small Business Enterprise and Housing.
Over the course of these meetings, specific strategies and corresponding recommendations within each of the three areas emerged and were refined. Built out for each recommendation was a detailed timeline of action steps, collaborative partner list and measurable goals created in collaboration with researchers at the Urban Institute.
Working collaboratively with the community and local officials, the Bridge Park is committed to changing the narrative of how development typically takes place. It is well known that the construction of signature public parks can significantly change land values and uses in surrounding areas. Indeed, a recent HR&A economic impact study found that property values in comparable park developments increased by 5 to 40 percent.
The goal of Bridge Park’s Equitable Development Plan is to ensure that the park is a driver of inclusive development— development that provides opportunities for all residents regardless of income and demography. By following a community-driven and vetted process, it is our hope that other cities can look to the Bridge Park as a leading example of how the public and private sectors can invest in and create world-class public space in an equitable manner.
Over the course of the last year, the Bridge Park staff partnered with local non-profits to begin implementing its Equitable Development Plan. This includes starting an East of the River Home Buyers Club in collaboration with the affordable housing developer MANNA, Inc. Seventy-seven people are currently enrolled and 24 residents are now mortgage ready and plan to purchase their own home early this year. This means that these residents will have an opportunity to capture any future increase in property values in the neighborhoods surrounding the park.
A partnership with Housing Counseling Services supports weekly tenant rights workshops for nearby residents who are renters (renters make up 75% of the population in the neighborhoods surrounding Bridge Park east of the river). And in collaboration with the Bridge Park, City First Homes is working to form a Bridge Park Community Land Trust, starting with a series of public discussions like this November event which attracted over 100 local residents.
The next year and a half before construction begins will see staff working to ensure that the community who lives near the park has the skill sets and capacity needed to apply for and receive construction jobs, as well as employment opportunities after the park opens. Bridge Park staff are also working with a local MBA program to provide technical assistance and access to capital for local small businesses.
Last May, LISC DC announced an impressive $50 million commitment to invest in the neighborhoods surrounding Bridge Park. Called Elevating Equity, LISC DC has already invested more than $5 million to support affordable housing development and preservation, as well as non-profits working to improve the quality of life for low-income families in the surrounding area.
Finally, the Bridge Park team is utilizing the next three years before the park opens to pilot programming concepts. This includes an annual Anacostia River Festival that last spring brought nearly 8,000 residents down to the banks of the river. (Save the date for April 9, 2017 for the third annual Anacostia River Festival!) Arts installations celebrating the rich history of the region will start to appear this spring.
And through a partnership with the University of the District of Columbia and local communities of faith, the Bridge Park is building urban farms on both sides of the river. This work will inform the design of the park’s urban agriculture spaces and have already produced over 1,300 pounds of fresh fruits and vegetables – a critical need in a noted DC food desert.
Through deep listening with neighborhood residents at every step of the way, Bridge Park has become so much more than a park. Through intentional partnerships and early planning, we are collaboratively working to metaphorically and physically bridge DC.
Adam Kent: Adam Kent is a Program Officer in the DC office of the Local Initiatives Support Corporation (LISC). There, he is the lead Program Officer responsible for LISC DC’s $50 million Elevating Equity Initiative, a commitment LISC DC has made to help foster equity, inclusiveness, and an improved quality of life in the neighborhoods surrounding the future 11th Street Bridge Park.
Adam also leads LISC DC’s corridor-specific economic and small business development strategies, as well as LISC DC’s creative placemaking work. He also analyzes budget and legislative changes that affect community development in the District, and conducts data analysis and mapping to further inform LISC DC’s investment decisions. Prior to joining LISC, Adam worked as a high school math teacher in DC Public Schools and as a public policy researcher at the Urban Institute. He holds a BA in Economics from Macalester College, a MA in Teaching from American University, and a MPA from Princeton University.
Scott Kratz: For the last five years, Scott Kratz has been working with the Ward 8 non-profit Building Bridges Across the River at THEARC and District agencies to transform an old freeway bridge into a park above the Anacostia River. The old 11th Street Bridges that connect Capitol Hill with communities east of the river have reached the end of their lifespan, Kratz is working with the community to use the base of one of the bridges to create a one of a kind civic space supporting active recreation, environmental education and the arts.
Kratz is a resident of Barrack’s Row and has lived in Washington D.C. for the last 10+ years. He has worked in the education field for twenty years and began his career teaching at Kidspace, a children’s museum in Pasadena, California and later as the Associate Director of the Institute for the Study of the American West and Director of Programs at the Autry National Center in Los Angeles, CA. While at the Autry, he supervised a staff that planned and implemented programs including theater, film, music, festivals, family programs, lecture series, and academic symposia. Most recently, he was the Vice President for Education at the National Building Museum, a job that first brought him to Washington D.C. | https://revitalization.org/article/guest-article-physically-economically-socially-bridging-washington-dc/ |
It's been a difficult summer for the bees here at Violet Town this year, and for a lot of inland Victoria. There has not been much nectar flow available for the bees to make honey with. The Murrnong bees have remained healthy, with plenty of pollen, and plenty of brood. It is lucky that we left plenty of honey in the hives, because the girls have kept consuming their stored honey even through summer, when we usually hope to see them bringing in fresh honey. 2015 saw only 280mm of rain here, way down from the 625mm yearly average, so it has been tough for the plants, and one plant survival strategy is not to flower in dry conditions.
Mary from the February Backyard Bees course is holding up a frame of healthy brood.
Some bee keepers have been moving their hives for better forage elsewhere, and some have been feeding sugar syrup. We were just at the point of feeling we needed to move our hives, when Grey Box, E. microcarpa, started flowering. Fortunately, with 125mm of rain here between Christmas and the end of January, there seems to have been enough moisture available for the Grey Box to put nectar in their flowers. So, phew, when we are around the hives, we can again smell the sweet scent of nectar and fresh honey.
The next Backyard Bees hands on workshop is on Sunday March 20th.
In this one day workshop we cover the basics of beekeeping, and consider some of the decisions that a small scale beekeeper makes.
You will also gain some perspective and insight into how small scale and back yard bee keeping fits into the ecology of our food production.
Together we will open some of the Murrnong hives, and learn to recognise what we see happening in there.
This is the second of these workshops this autumn, after the first one, on Feb 21st, sold out. Here is a picture of the group dressed up and ready to head down to the hives. The calm warm autumn weather, with the bees busy foraging, made it an ideal and peaceful time to look inside.
Keeping even just one hive of bees in your backyard can give a big surplus of honey, do wonderful things for the pollination and productivity of your garden, and can also help to make sure we have plenty of healthy bees for the future. A well placed and well managed bee hive, with the flight path out of people’s way, can be nothing but a positive. The first your neighbors might know about the hive you have had for the last six months could be when you give them a jar of honey.
Bees are under stress around the world: bee numbers are down, the Varroa mite will probably get to Australia one day, and the neonicotinoid pesticides, so toxic to bees, continue to be used. Species diversity in large-scale agricultural regions is now too low to feed bees through the year. In the apple and pear orchards of south-west China, bees have been eradicated by pesticide use and habitat loss, and people have to do pollination (the free work of bees) by hand, with a feather and little bags of pollen. In Australia, beekeepers are paid to truck their bees in to pollinate horticultural crops.
There is species diversity in towns and home gardens, though. These are now an important bee forage resource. Backyard bees mostly feed from a different forage resource to commercially kept bees. The garden plants benefit, and we get the honey. Towns can provide a surplus of bees to support the surrounding agriculture or horticulture.
Backyard beekeepers often collect ‘wild’ bee swarms, and so are potentially working with a broader range of bee genetics than is possible when all the queens are commercially bred. This is important in allowing for continual evolution and adaptation among bees.
In Australia bees kept in backyards, or on rooftops in the city, generally have less exposure to insecticides than when bees are used for pollination in agriculture.
With their smaller scale, less commercial pressure, and hives kept mostly in one place, backyard beekeepers have opportunities to experiment and innovate with their beekeeping practices. All of this can contribute to a beekeeping culture of continuous improvement, and a healthy increase in bee numbers.
Ready for hive inspections at the Murrnong March 2015 Backyard Bees workshop.
Kate Marsh and Ralph Nottingham of Creative Collectives are putting on an open consultation at their property near Eldorado on Sunday, Nov 29, 2015.
David Arnold will lead the workshop through the process of reading the landscape and figuring out how that land works, sifting through their wish list to plan for functional connections, considering house site options, and developing a concept plan.
And… what can they do about water?
Kate says “Take the next step in learning about Permaculture. Come along and contribute to the planning of Hidden Valley Permaculture farm, Eldorado VIC.
Our Permaculture Transformation is about to begin.”
If there is a well established oak tree somewhere near you that you have admired (and you live in the southern hemisphere), about now could be the time to collect some acorns and plant them. Don’t be put off by the moderate growth rate; the quality of the result in the longer term is well worth it, and you don’t have to hold your breath while they grow.
We collected these acorns yesterday from magnificent old street trees in Violet Town. I propagate from old trees well proven in this climate and landscape. Our seedlings from these trees are just this year bearing their first acorns, in fact some of those are the lighter acorns in the top picture. These particular acorns have been carefully selected for their freshness and firmness, for planting. Mostly we just rake the acorns up, remove most of the dirt and leaves in a big sieve, and store for goat feed over the next few months. The goats make their own firm decisions about which ones they will eat. The goats’ milk becomes more mellow, richer and creamier 24hrs after their first feed of acorns in the autumn.
March 2015 Dappled light coming through the heavy shade of oaks planted here in 2006. The canopies are still small, but growing well. Heavy shade, soil improving qualities, fire retardant, beauty, and acorns for concentrated autumn fodder are why we planted these. These are possibly Algerian oaks. These trees are only semi-deciduous, staying green until the end of July, going brown in Autumn, then dropping their old leaves just as the new growth comes. This makes them well adapted to this relatively hot and dry climate, as they are able to make use of the usually more moist conditions in winter.
We have been planting oaks here since 1996. We were collecting acorns for autumn goat feed supplement anyway, so it was a no-brainer to try planting them. This is the oak establishment method that has worked best here, on this farm in this climate.
Autumn (approx. March, yr 1): Collect or source acorns; keep them in a damp, cool place (e.g. in veg garden soil) or in a plastic bag in the fridge over the first winter.
Summer (yr 1): Grow in the veg garden over the first summer. A couple of times over summer, drive a spade about 30cm under the tree seedlings to cut the tap root and encourage more shallow root development.
Late winter (August, yr 2): Dig up from the veg garden, prune excessively long roots, and plant out into spots or rip lines cleared of grass, with whatever compost you can spare.
Spring (yr 3): Clear strong grass away from the young trees again, to give them a chance to grow with the spring moisture and warmth.
Nov 2006, the first year of planting: David follow-up watering acorns seeded directly into rip lines, in a drought year… I must have been keen! That area looks a lot nicer now. The previous photo, of the foliage, was taken about 6m to the right of here.
We have not tried leaching the tannins from acorns so we can eat them ourselves, but some friends do, and there is lots of information available about this, for example here. | https://murrnong.com/category/courses/page/2/ |
A ramp has an angle of inclination of 30 degrees and a height of 5 meters. How long is the ramp?
Use sine, because you have the measurement for the opposite side and need to find the hypotenuse.
Set up a proportion: sin 30° = 5/x, where x is the hypotenuse and 5 is the length of the side opposite the 30° angle.
Cross multiply: x = 5/0.5 = 10 meters long.
A hawk is sitting in a tree above a road, watching a rodent that is 45 meters from the base of the tree, at an angle of elevation of 23 degrees. Calculate the height of the hawk.
Set up a proportion: tan 23° = x/45, where x is the opposite side (the hawk’s height) and 45 is the length of the adjacent side of the right triangle. Solving, x = 45 × tan 23° ≈ 19.1 meters.
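Both answers are quick to check numerically. Below is a minimal sketch in Python (the variable names are mine, not part of the lesson); note that math.sin and math.tan expect radians, hence the conversions:

```python
import math

# Ramp problem: sin(30°) = opposite / hypotenuse, so hypotenuse = opposite / sin(30°).
height = 5.0                                  # opposite side, in meters
ramp_length = height / math.sin(math.radians(30))
print(f"Ramp length: {ramp_length:.1f} m")    # -> 10.0 m

# Hawk problem: tan(23°) = opposite / adjacent, so opposite = adjacent * tan(23°).
distance = 45.0                               # adjacent side, in meters
hawk_height = distance * math.tan(math.radians(23))
print(f"Hawk height: {hawk_height:.1f} m")    # -> about 19.1 m
```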
When presented with a right triangle with missing angle measures or side lengths, use trigonometric ratios to find them.
A ratio is a comparison of the size of one number to another. A trig ratio therefore compares the lengths of two sides of a right triangle, relative to one of its acute angles.
The trig ratios are used with right triangles to find side lengths and angle measures, but they can also be used as functions in equations.
The six trig ratios are sine, cosine, tangent, and their reciprocals, cosecant, secant, and cotangent.
Find the length of side x in the triangle ABC below.
1. Decide which trig ratio to use. Ask yourself: which two sides am I using, relative to the reference angle?
2. Set up the ratio. In this example you will use tangent, because we have the adjacent side but need the opposite; opposite and adjacent are the two sides the tangent ratio relates.
3. Use your calculator to find the tangent, cosine or sine of the angle, then solve the equation for x.
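The three steps above also translate directly into code. Here is a small generic helper, sketched purely for illustration (the function and its names are mine, not part of the lesson): given the reference angle, one known side and the side you want, it picks the right ratio and solves.

```python
import math

def missing_side(angle_deg, known_length, known_side, wanted_side):
    """Solve a right triangle for a missing side using sin, cos or tan.

    Sides are named relative to the reference angle:
    'opposite', 'adjacent' or 'hypotenuse'.
    """
    a = math.radians(angle_deg)
    # Each ratio is numerator / denominator for the named pair of sides.
    ratios = {
        ("opposite", "hypotenuse"): math.sin(a),  # sin = opp / hyp
        ("adjacent", "hypotenuse"): math.cos(a),  # cos = adj / hyp
        ("opposite", "adjacent"): math.tan(a),    # tan = opp / adj
    }
    if (wanted_side, known_side) in ratios:
        # Wanted side is the numerator: wanted = known * ratio.
        return known_length * ratios[(wanted_side, known_side)]
    if (known_side, wanted_side) in ratios:
        # Wanted side is the denominator: wanted = known / ratio.
        return known_length / ratios[(known_side, wanted_side)]
    raise ValueError("sides must form an opposite/adjacent/hypotenuse pair")

# Hawk example again: adjacent = 45 m, angle = 23°, find the opposite side.
print(round(missing_side(23, 45, "adjacent", "opposite"), 1))  # -> 19.1
```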
Aging as an immigrant is not easy. While migration has been shown to pose a risk to the mental health of recently arrived immigrants, the acculturation process that develops over time does not necessarily protect their psychological wellbeing in later life. This is a great concern for Australia, as the majority of its population is immigrant, with an increasing number from cultural and linguistic backgrounds distinct from the historically dominant population. Malays constitute a small fraction of the Australian population, yet the size of this community is increasing, and there is therefore value in studying their aging. This ethnographic research examines the interaction between cultural and social environments in midlife among immigrant Malay women living in Melbourne. The research was undertaken using a social constructionist approach, and in the course of this study I attempt to understand women’s midlife journey and how this shapes their everyday lives as immigrants, using Bourdieu’s theory of habitus (1990) and theories of social capital, particularly as explained by Putnam (1995), Coleman (1990) and Bourdieu (1986). Ethnographic data were collected using multiple methods from July 2010 to December 2012 in Melbourne. Thirty-three women were recruited through snowball sampling, with 18 women participating in in-depth interviews and 15 women participating in three focus group discussions. Five key informants were interviewed separately. The data were analyzed using thematic analysis. My findings suggest that the journey of midlife changes immigrant Malay women’s perceptions of their priorities in life, and this in turn influences their ideas of how to live, their daily practices, and hence their psychological wellbeing. How they navigate their lives is reflected in their habitus and influenced by the availability and accessibility of social capital; this in turn influences their psychological wellbeing. The concept of psychological wellbeing as understood by the women is grounded in religious doctrines that shaped their dispositions. Acculturation to Australian society and to local norms, values and understandings of wellbeing, and women’s own responses to their aging and to bodily changes, further contributed to how they understood wellbeing. Women’s subjectivity was constituted by practices that are informed by and sustain religious norms. Changes in dress, and their involvement in a particular religious community, informed the embodied behaviours that transformed women’s identity and, as they explained, “shaped their inner souls”, and therefore also their wellbeing. Women’s innovative and creative acquisition of new dispositions defined who they were as Malay-Muslim immigrants in Australia. | https://bridges.monash.edu/articles/thesis/Understanding_wellbeing_among_middle_aged_immigrant_Malay_women_in_Melbourne/4679659/1
Netlink Solutions (India) Limited, an information media and software development company, provides Web-based solutions for strategic business management in India. The last earnings update was 76 days ago.
Netlink Solutions (India) has shown significant price volatility over the past 3 months.
No trading data on 509040.
Is Netlink Solutions (India) undervalued based on future cash flows and its price relative to the stock market?
Here we compare the current share price of Netlink Solutions (India) to its discounted cash flow value.
Below are the data sources, inputs and calculation used to determine the intrinsic value for Netlink Solutions (India).
The calculations below outline how an intrinsic value for Netlink Solutions (India) is arrived at by discounting future cash flows to their present value using the 2 stage method. We start with analysts' estimates of free cash flow where available; if these are not available, we use the most recent financial results. In the first stage we grow free cash flow over a 10-year period, with the growth rate trending towards the perpetual growth rate used in the second stage. The second stage assumes the company grows at a stable rate into perpetuity.
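To make the 2 stage method concrete, here is a rough sketch of the calculation in Python. All inputs are hypothetical placeholders; this is my own illustration of the general technique, not the site's actual model or Netlink's real figures:

```python
def two_stage_dcf(fcf, stage1_growth, perpetual_growth, discount_rate, years=10):
    """Intrinsic value via a 2-stage discounted cash flow.

    Stage 1: grow free cash flow for `years`, with the growth rate fading
    linearly from stage1_growth towards perpetual_growth.
    Stage 2: a terminal value growing at perpetual_growth forever.
    """
    value = 0.0
    cash_flow = fcf
    for year in range(1, years + 1):
        t = year / years  # 0 -> 1 across stage 1
        growth = stage1_growth * (1 - t) + perpetual_growth * t
        cash_flow *= 1 + growth
        value += cash_flow / (1 + discount_rate) ** year
    # Terminal (stage 2) value at the end of year `years`, discounted back.
    terminal = cash_flow * (1 + perpetual_growth) / (discount_rate - perpetual_growth)
    return value + terminal / (1 + discount_rate) ** years

# Hypothetical inputs: ₹10m free cash flow, 12% growth fading to 3%, 14% discount rate.
print(f"Intrinsic value: ₹{two_stage_dcf(10e6, 0.12, 0.03, 0.14):,.0f}")
```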
The current share price of Netlink Solutions (India) is above its future cash flow value.
The amount the stock market is willing to pay for Netlink Solutions (India)'s earnings, growth and assets is considered below, and whether this is a fair price.
Are Netlink Solutions (India)'s earnings available for a low price, and how does this compare to other companies in the same industry?
Netlink Solutions (India) is loss-making, so we can't compare its value to the IN IT industry average.
Netlink Solutions (India) is loss-making, so we can't compare the value of its earnings to the India market.
Does Netlink Solutions (India)'s expected growth come at a high price?
Unable to calculate a PEG ratio for Netlink Solutions (India), so we can't assess whether its growth is good value.
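For reference, the PEG ratio that could not be computed here is simply the price-to-earnings ratio divided by the expected annual earnings growth rate (in percent). A tiny sketch with hypothetical numbers, since Netlink has no earnings or growth estimates to plug in:

```python
def peg_ratio(price_to_earnings, expected_growth_pct):
    """PEG = PE / expected annual earnings growth (in %).
    A PEG below about 1 is often read as growth at a reasonable price."""
    return price_to_earnings / expected_growth_pct

print(peg_ratio(15.0, 10.0))  # e.g. PE of 15 with 10% growth -> PEG of 1.5
```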
What value do investors place on Netlink Solutions (India)'s assets?
Netlink Solutions (India) is good value based on assets compared to the IN IT industry average.
Netlink Solutions (India) has a total score of 1/6; see the detailed checks below.
How is Netlink Solutions (India) expected to perform in the next 1 to 3 years based on estimates from 0 analysts?
In this section we usually present revenue and earnings growth projections based on the consensus estimates of professional analysts to help investors understand the company’s ability to generate profit. But as Netlink Solutions (India) has not provided enough past data and has no analyst forecast, its future earnings cannot be reliably calculated by extrapolating past data or using analyst predictions.
Is Netlink Solutions (India) expected to grow at an attractive rate?
Unable to compare Netlink Solutions (India)'s earnings growth to the low risk savings rate as no estimate data is available.
Unable to compare Netlink Solutions (India)'s earnings growth to the India market average as no estimate data is available.
Unable to compare Netlink Solutions (India)'s revenue growth to the India market average as no estimate data is available.
Unable to determine if Netlink Solutions (India) is high growth as no earnings estimate data is available.
Unable to determine if Netlink Solutions (India) is high growth as no revenue estimate data is available.
All data from Netlink Solutions (India) Company Filings, last reported 3 months ago, and in Trailing twelve months (TTM) annual period rather than quarterly.
Unable to establish if Netlink Solutions (India) will efficiently use shareholders’ funds in the future without estimates of Return on Equity.
Examine Netlink Solutions (India)'s financial health to determine how well-positioned it is against times of financial stress by looking at its level of debt over time and how much cash it has left.
Netlink Solutions (India)'s competitive advantages and company strategy can generally be found in its financial reports archived here.
Netlink Solutions (India) has a total score of 0/6; see the detailed checks below.
How has Netlink Solutions (India) performed over the past 5 years?
Below we compare Netlink Solutions (India)'s growth in the last year to its industry (IT).
Netlink Solutions (India) does not make a profit, and its year-on-year earnings growth rate was negative over the past 5 years.
Unable to compare Netlink Solutions (India)'s 1-year earnings growth to the 5-year average as it is not currently profitable.
Unable to compare Netlink Solutions (India)'s 1-year growth to the IN IT industry average as it is not currently profitable.
Netlink Solutions (India)'s revenue and profit over the past 5 years is shown below, any years where they have experienced a loss will show up in red.
It is difficult to establish if Netlink Solutions (India) has efficiently used shareholders’ funds last year (Return on Equity greater than 20%) as it is loss-making.
It is difficult to establish if Netlink Solutions (India) has efficiently used its assets last year compared to the IN IT industry average (Return on Assets) as it is loss-making.
It is difficult to establish if Netlink Solutions (India) improved its use of capital last year versus 3 years ago (Return on Capital Employed) as it is currently loss-making.
How is Netlink Solutions (India)'s financial health and their level of debt?
The boxes below represent the relative size of what makes up Netlink Solutions (India)'s finances.
Netlink Solutions (India) is able to meet its short term (1 year) commitments with its holdings of cash and other short term assets.
Netlink Solutions (India) has no long term commitments.
This treemap shows a more detailed breakdown of Netlink Solutions (India)'s finances. If any of them are yellow this indicates they may be out of proportion and red means they relate to one of the checks below.
Netlink Solutions (India) has no debt, it does not need to be covered by short term assets.
All data from Netlink Solutions (India) Company Filings, last reported 3 months ago.
Netlink Solutions (India) has no debt.
Netlink Solutions (India) had no debt 5 years ago.
Netlink Solutions (India) has no debt, it does not need to be covered by operating cash flow.
Netlink Solutions (India) has no debt, therefore coverage of interest payments is not a concern.
Netlink Solutions (India) has a total score of 6/6; see the detailed checks below.
What is Netlink Solutions (India)'s current dividend yield, its reliability and sustainability?
Current annual income from Netlink Solutions (India) dividends.
If you bought ₹2,000 of Netlink Solutions (India) shares you are expected to receive ₹0 in your first year as a dividend.
Unable to evaluate Netlink Solutions (India)'s dividend yield against the bottom 25% of dividend payers as the company has not reported any payouts.
Unable to evaluate Netlink Solutions (India)'s dividend against the top 25% market benchmark as the company has not reported any payouts.
Unable to perform a dividend volatility check as Netlink Solutions (India) has not reported any payouts.
Unable to verify if Netlink Solutions (India)'s dividend has been increasing as the company has not reported any payouts.
What portion of Netlink Solutions (India)'s earnings are paid to the shareholders as a dividend.
Unable to calculate sustainability of dividends as Netlink Solutions (India) has not reported any payouts.
What is the salary of Netlink Solutions (India)'s CEO, how long have the management and board of directors been in their roles, and is there insider trading?
Mr. Minesh Vasantlal Modi serves as Chairman at Netlink Solutions India Ltd and has been its Whole Time Director since July 2011.
Minesh's compensation has increased whilst the company is loss-making.
Minesh's remuneration is higher than average for companies of similar size in India.
The average tenure for the Netlink Solutions (India) board of directors is over 10 years, this suggests they are a seasoned and experienced board.
Netlink Solutions (India) has a total score of 0/6; this is not included on the snowflake. See the detailed checks below.
Netlink Solutions (India) Limited, an information media and software development company, provides Web-based solutions for strategic business management in India. The company offers gifts and accessories magazines, as well as online magazines through its easy2source.com and corporategiftseasy2source.com portals; and operates portals under easy2source on a range of subjects comprising electricals, electronics, herbs and spices, jewelry, leather, material handling, foods and beverages, and automobiles. It also provides treasury management and administration, and search engine marketing services, as well as business and trade information services using search engines. The company was formerly known as VGR Construction Ltd. and changed its name to Netlink Solutions (India) Limited in March 2002. Netlink Solutions (India) Limited was incorporated in 1984 and is based in Mumbai, India. | https://simplywall.st/stocks/in/software/bse-509040/netlink-solutions-india-shares |